entry_id
stringlengths
33
33
published
stringlengths
14
14
title
stringlengths
24
167
authors
sequencelengths
1
661
primary_category
stringclasses
111 values
categories
sequencelengths
1
8
text
stringlengths
2
383k
http://arxiv.org/abs/2406.09356v1
20240613174137
CMC-Bench: Towards a New Paradigm of Visual Signal Compression
[ "Chunyi Li", "Xiele Wu", "Haoning Wu", "Donghui Feng", "Zicheng Zhang", "Guo Lu", "Xiongkuo Min", "Xiaohong Liu", "Guangtao Zhai", "Weisi Lin" ]
cs.CV
[ "cs.CV", "eess.IV" ]
A More Practical Approach to Machine Unlearning David Zagardo, dave@greenwillowstudios.com June 2024 =============================================== § ABSTRACT Ultra-low bitrate image compression is a challenging and demanding topic. With the development of Large Multimodal Models (LMMs), a Cross Modality Compression (CMC) paradigm of Image-Text-Image has emerged. Compared with traditional codecs, this semantic-level compression can reduce image data size to 0.1% or even lower, which has strong potential applications. However, CMC has certain defects in consistency with the original image and perceptual quality. To address this problem, we introduce CMC-Bench, a benchmark of the cooperative performance of Image-to-Text (I2T) and Text-to-Image (T2I) models for image compression. This benchmark covers 18,000 and 40,000 images respectively to verify 6 mainstream I2T and 12 T2I models, including 160,000 subjective preference scores annotated by human experts. At ultra-low bitrates, this paper proves that the combination of some I2T and T2I models has surpassed the most advanced visual signal codecs; meanwhile, it highlights where LMMs can be further optimized toward the compression task. We encourage LMM developers to participate in this test to promote the evolution of visual signal codec protocols. § INTRODUCTION Visual signal compression aims to minimize image data, playing a crucial role in delivering high-quality image/video services with limited network resources and storage capacity. Since the MPEG-1 <cit.> standard was introduced, compression rates for visual signals have doubled  <cit.> every decade. In recent years, traditional image codecs have achieved a 500 times compression rate while ensuring a decent visual experience for humans. However, traditional codecs are approaching the Shannon limit of 1,000 times Compression Rate (CR) in the upcoming next-generation protocols. Fortunately, the rapid development of Large Multimodal Models (LMMs) has opened up possibilities for such Ultra-low-Bitrate (ULB) compression. Why use LMMs for compression? LMMs support the conversion between multiple modalities, where text consumes much less space than image modalities. By cascading Image-to-Text (I2T) and Text-to-Image (T2I) models, images can be compressed and reconstructed from semantic information. This Cross-Modality Compression (CMC) paradigm operates at the semantic level, which outperforms traditional codecs at the pixel level. It enables easy attainment of ULB, and even the Extreme-low Bitrate (ELB) for CR about 10,000 times. However, at such low bitrates, CMC presents two significant issues that cannot be overlooked. (1) Consistency: The reconstruction process heavily relies on intermediate textual information. Any omission by the I2T model (encoder) or misunderstanding by the T2I model (decoder) can result in severe distortion. Unlike minor changes in brightness and color, this can lead to a semantic-level inversion of the entire image. (2) Perception: Textual encoding provides a coarse representation of the image, necessitating the T2I model to add details. Insufficient details degrade perceptual quality, while excessive ones compromise faithfulness to the original image. Unfortunately, as the bitrate decreases, the conflict between them  <cit.> becomes more pronounced. As the consistency and perception failure cases in Figure <ref>, these issues jointly limit the application of CMC. For LMMs, there is a lack of effective evaluation criteria both in terms of consistency and perceptual aspects. Although numerous benchmarks have recently emerged for LMMs, they are primarily designed to assess the performance of either I2T (Image-to-Text) or T2I (Text-to-Image) models working alone, such as captioning/visual question answering for I2T  <cit.>, or generation quality/realism for T2I  <cit.>. Consequently, we introduce the first joint benchmark called CMC-Bench, aimed at testing the collaborative capabilities of I2T and T2I models. Our contributions include: * A large-scale dataset consists of 58,000 images using the CMC paradigm. 4,000 images among them have 160,000 expert annotations, covering both consistency and perception issues, paving the way for information loss modeling in the I2T and T2I processes. * A comprehensive evaluation standards, consisting of four compression modes under different requirements, along with the two dimensions mentioned above. We validate mainstream models (including 6 I2T and 12 T2I) to explore optimal combinations. * A throughout comparison with traditional codecs. We compared the benchmark winner with existing image codecs, revealing the significant advantages of the CMC image compression paradigm and some remaining drawbacks. We encourage LMM developers (both I2T and T2I) to participate in CMC-Bench to further expand the application of CMC. § RELATED WORKS Cross-Modality Compression. The earliest CMC method <cit.> emerged in 2021, achieving a compression ratio of almost 10,000 times through text modality. However, as a simple combination of I2T and T2I models, their results often exhibit noticeable differences from the original images. Subsequently, Text+Sketch <cit.> employed edge operators and ControlNet <cit.> to refine CMC, but its consistency remained inferior to traditional codecs. The most advanced CMC methods, like M-CMC and MISC  <cit.>, have surpassed advanced codecs like VVC <cit.> in both consistency and perception, indicating the promising future of this paradigm. Nevertheless, there is still room for improvement in these two aspects. All existing CMC methods are from one specific I2T and one T2I model, and the models used are relatively outdated. Considering the rapid development of Generative-AI, how to combine the latest models towards a better CMC becomes an unrevealed question. Benchmark for LMM Evaluation. Existing benchmarks are mainly designed for T2I and I2T models. For I2T, they usually take a specific image sequence as input, compare the text output by LMM with the ground truth, and use the relevance of the two as a performance indicator. The annotation content includes common sense  <cit.> or specific expert fields  <cit.>. For T2I, the input is a carefully designed text prompt  <cit.> (e.g. different themes, adjectives, and spatial relationship). They use specific visual encoders to process the output image of LMM and determine its alignment with the text as the generative performance  <cit.>. However, as the current CMC paradigm is still immature, there is no pipeline for the joint evaluation of I2T+T2I model. Benchmark for Image Compression. Given the significance of visual information compression, several related competitions <cit.> have been held in recent years. However, these competitions often limit their scope to Natural Scene Images (NSIs). Screen Content Images (SCIs)  <cit.>, which are prevalent on the internet, and the emerging AI-Generated Images (AIGIs)  <cit.> have received some attention with new datasets, but no existing dataset comprehensively considers them together. Moreover, the performance evaluation of compression algorithms can be challenging, often requiring subjective quality assessments from human viewers to train Image Quality Assessment (IQA)  <cit.> models, which provide objective metrics for compression algorithms. In the context of ULB image compression, both the consistency between the distorted and reference images, as well as the inherent appeal of the distorted image in human perception, need to be annotated. Existing IQA datasets typically annotate only one aspect, while often in a coarse-grained manner through Single Stimulus (SS) or Double Stimulus (DS) comparisons. In contrast, Mean Opinion Score (MOS) derived from multiple subjects offers a more detailed and objective evaluation as shown in Table <ref>. § DATASET CONSTRUCTION §.§ Ground Truth Selection To provide a comprehensive and high-quality resource for various applications on the Internet, we carefully curated 1,000 high-quality images without compression distortion as the ground truth of CMC-Bench. Among them, NSIs are the most mainstream content, so we selected 400 images. At the same time, considering that SCIs are more common on screens and AIGIs are increasing on the Internet in the upcoming LMM era, we selected 300 images from each of these two categories. The specific content is as shown in Figure <ref>. NSI: A collection of 200 high-quality Professional Generated Content (PGC) released by TV stations and photographers, specifically sampled from the CLIC database <cit.>; and 200 User Generated Content (UGC) by average users, selected from MS-COCO <cit.>. To ensure image quality, we employed Q-Align <cit.> to filter out low-quality UGC that might be overexposed. SCI: Consisting 100 computer graphics from CGIQA-6K <cit.> in animated movies; 100 game renders from CCT and CGIQA-6K  <cit.>; and 100 webpages with both images and text from CCT, SCID, and Webpage Saliency datasets  <cit.>. To maintain frame clarity, we also applied Q-Align <cit.> to address factors like motion blur that affect visual quality. AIGI: Comprises 50 images each, generated by 6 latest models: DALLE3, MidJourney, PG v25, PixArt α, SDXL, and SSD-1B  <cit.>. They have demonstrated exceptional preference in previous subjective ratings  <cit.>, representing the pinnacle of AIGI capabilities. §.§ Compression Mode Drawing on previous work in CMC, we categorize CMC into four working modes, as shown in Figure <ref>. Each type employs distinct configurations and is suitable for different scenarios: Text: The I2T model converts images to text and is directly restored by the T2I model. Due to its reliance on the text modality only, this approach achieves a CR of 10,000, ideal for ELB situations. Pixel: Each 64 × 64 blocks from ground truth are merged and quantized into one pixel. Beyond the Text mode, these pixels initialize the T2I process. The pixel representation is relatively compact, offering a CR of around 5,000, suitable for less rigorous ELB but higher demands on consistency. Image: Traditional codecs are employed to compress the image, which serves as input for the T2I model for enhancement. Unlike the previous two, it omits the time-consuming I2T process by leaving the text input of the T2I model empty. This approach can achieve a CR of 1,000, suitable for ULB bandwidth but with high real-time requirements. Full: Extending the Image mode, the T2I is guided by text content, encompassing the full pipeline of I2T, traditional codec, and T2I. It also has a CR of approximately 1,000, suitable for the most demanding performance scenarios. §.§ Benchmark Candidates We employ 6 I2T and 12 T2I models across four compression modes. Due to the absence of text, the I2T model is not used in the Image mode; while for T2I, among the 4 Image Reconstruction (IR) models requiring an initial image and are not compatible with Text and Pixel modes. The remaining 8 T2I generative models support all modes. We use one certain T2I, and validate all possible I2T models to verify their performance separately (vice versa for T2I validation). For a fair comparison, We fixed RealVis <cit.> to minimize the T2I process distortion, which ensures the performance fluctuation mainly comes from the I2T. Similarly, we fix I2T as GPT-4o <cit.> when validating T2I models. Each I2T model produces 3,000 images, while restorative and generative models for T2I have 2,000 and 4,000, respectively. A total of 18,000 + 40,000 = 58,000 images are generated. I2T model: GPT-4o <cit.>, LLAVA-1.5 <cit.>, MPlugOwl-2 <cit.>, Qwen <cit.>, ShareGPT <cit.>, and InstructBLIP <cit.>. Except for one model <cit.> for image captioning with default token length, we modify the output length of others to 10∼20 words for a balance between bitrate and performance. T2I model: Animate <cit.>, Dreamlike <cit.>, PG20 <cit.>, PG25 <cit.>, RealVis <cit.>, SD15 <cit.>, SDXL <cit.>, and SSD-1B <cit.> as generative model; DiffBIR <cit.>, InstructPix <cit.>, PASD <cit.>, and StableSR <cit.> as IR model. A higher denoising strength indicates a more obvious modification on the starting point. To balance the consistency and perception indicators, we set the strength of Full and Image modes as 0.5, the Pixel mode as 0.8, and the Text mode as the default 1. Traditional codec: For Full and Image mode, we utilize the most advanced traditional codec VVC <cit.> to provide a reference image. Towards 1,000 times compression, we take its nearest bitrate that meets the ULB requirement, where the Quantizer Parameter (QP) is 53. §.§ Human Preference Annotation Referring from previous large-scale subjective annotation <cit.> methods, we do not perform coarse-grained labeling on the entire dataset considering the limitation on annotator numbers. Instead, we fine-grain the annotations on 4,000 images to ensure multiple ratings for each image. Note that, as the benchmark indicator should be adjusted on subjective data, we did not directly select subsets from the 58,000 test images. Instead, we generated new images to prevent prior exposure to the content being evaluated. Given the greater impact of T2I models on CMC tasks than I2T models, we follow the T2I paradigm described in Section <ref>. The I2T model is fixed as GPT-4o <cit.> and combined with 12 different T2I models, compressing 100 ground truth into 4,000 distorted images. To ensure quality diversity, we randomly assigned strength from 0.2 to 0.9 rather than a fixed value. Each distorted image is paired with its corresponding ground truth and shown to 20 experienced participants who provided ratings on consistency and perception dimensions. Each image is then summarized into two overall scores from 0 to 5, combining all participants' feedback. For a detailed description of the experimental setup and data process, please refer to the supplementary. § EXPERIMENT §.§ Evaluation Indicator Settings All 6 I2T and 12 T2I LMMs are verified and tested by different parameters and fixed them towards an optimal situation according to Section <ref>, while the internal model weight remains zero-shot to ensure fairness in ranking. The specific parameters verification is provided in the supplementary. Taking the reference and distorted image pairs as input, We use TOPIQ <cit.>, the most advanced IQA metric in Full-Reference (FR) and No-Reference (NR) configuration to characterize consistency and perception. The average score of 1,000 ground truth images is reported as the final performance. Combining these two issues towards 4 working modes, the models are evaluated by 8 indicators for generative T2I, 6 indicators (exclude Image mode) for I2T, and 4 indicators (exclude Pixel and Text mode) for T2I restorative models. A weighted average of 2× FR indicators and 1× NR indicators is given as the overall score for ranking since the TOPIQ-FR has a smaller floating range than TOPIQ-NR. As restorative models only support ULB compression in Full and Image mode, but not ELB compression in Pixel and Text mode, the overall score of the T2I model is ranked under ULB and ELB respectively. r0.5 Correlation between objective IQA evaluation and subjective human preference. max width=0.5 Consistency σ↑ κ↑ Perception σ↑ κ↑ AHIQ 0.844 0.645 CLIPIQA 0.825 0.623 [HTML]D0D0D0 DISTS 0.795 0.599 CNNIQA 0.584 0.414 LPIPS 0.583 0.406 DBCNN 0.833 0.640 [HTML]D0D0D0 PieAPP 0.433 0.294 HyperIQA 0.730 0.534 TOPIQ 0.943 0.792 TOPIQ 0.901 0.738 In addition to TOPIQ, we also used four cutting-edge FR-IQA (AHIQ <cit.>, DISTS <cit.>, LPIPS <cit.>, PieAPP <cit.>) and NR-IQA (CLIPIQA <cit.>, CNNIQA <cit.>, DBCNN <cit.>, HyperIQA <cit.>) algorithms to objectively score the distorted images in terms of consistency and perception. The higher the Spearman (σ) and Kendall Rank-order Correlation Coefficient (κ), the better correlation between the objective and subjective scores. All models are trained on 80% of the distorted images in Section 3.4 and tested on the remaining 20%. Experiments in Table <ref> show that the correlation between the fine-tuned TOPIQ <cit.> and the subjective score is outstanding with σ beyond 0.9 in both dimensions, making it appropriate performance indicators reflecting human preference for compressed images. §.§ Benchmark Result and Discussion Figure <ref> shows the performance of 6 I2T models as encoders and 12 I2T models as decoders in image compression. For I2T, considering the different lengths of intermediate text, we show the bit-per-pixel (bpp) of each model together with the performance index, where ULB and ELB correspond to 0.024 and 0.0024 bpps respectively. For 6 indicators in I2T LMMs, while GPT-4o <cit.> does not perform well on Text-FR, it significantly outperforms on Full-FR. This suggests that although its generated text carries limited information, it has a strong orthogonal relationship with the low-level details of the image. This semantic information effectively compensates for the information loss after image compression. In addition, the given text facilitates the subsequent T2I model in decoding high-quality images, and its performance across various NR indicators is also commendable. In comparison, MPlugOwl-2 <cit.> and InstructBLIP <cit.> can effectively encode images into text, but their results are still inferior to GPT-4o. The only viable competitor is ShareGPT <cit.>, but it has a bpp of around 0.008, which is significantly larger than the other 5 models. This data size exceeds ELB and occupies one-third of the available ULB space. Considering multiple factors, GPT-4o remains the most suitable I2T model as the CMC encoder. For 8 indicators in T2I LMMs, 2 restorative models  <cit.> exhibit overwhelming consistency in Full and Image modes with acceptable perception results, enabling faithful image reconstruction close to the ground truth. However, its applicability is limited for the other 2 modes, particularly under the strict ELB conditions. The performance of the remaining models falls into two distinct extremes, where RealVis <cit.> shows high consistency but PG25 <cit.> demonstrates high perception. Given that it is feasible to enhance a compressed low-quality image with high fidelity to the original, while correcting a completely different high-quality image with low fidelity remains challenging, we opt to prioritize consistency by assigning it a higher weight. Consequently, considering the strong performance and wide versatility of RealVis, it is relatively more suitable than the CMC decoder. To delve into the compression capability of I2T and T2I LMMs with different content on various modes, we present the T2I leaderboard under ULB and ELB conditions in Table <ref> and Table <ref>, respectively, and showcase the performance of I2T models on Full and Pixel modes (Text mode attached in supplementary) in Table <ref>, with a discussion of content-specific analysis. A horizontal comparison among different modes in Tables 3 and 4 reveals that the Full mode has a clear advantage over the Image mode in terms of consistency and perception, indicating the significance of the text provided by the I2T model for T2I decoding. This text guidance not only enhances consistency but provides a clear target for the T2I process, thus also boosting perception. In contrast, the Pixel mode sacrifices perception for consistency compared to the Text mode. This is because the more control conditions added, the less room for creative freedom the model has, leading to a decrease in image aesthetics. However, for models that already have high perception scores <cit.> in the Text mode, the trade-off of improving overall performance is acceptable. Among NSI, SCI, and AIGI, different LMMs excel at different content. For instance, as shown in Table <ref> and Table <ref>, PG25 <cit.>, trained on internet data, performs better in AIGI tasks; conversely, RealVis aims at image naturalness, manifesting its superior reconstruction capability in NSI. Regardless of the model employed, we observe that NSI generally yields higher consistency scores, while AIGI has higher perception scores. However, SCI stands out from the others, with the compression results of the same model lagging behind in both perception and consistency. This deficiency is relevant to certain words <cit.> (even long paragraphs) within SCI, making I2T models unable to re-encode them into text, while the text generation capabilities of recent T2I models are still limited. Besides, although the performance disparities among I2T models are not as significant as those in T2I models, Table <ref> also clearly illustrates the limitation in SCI, indicating room for further optimization. §.§ Subjective Data Analysis Figure <ref> presents the subjective preference for images decoded by 12 T2I models under ULB and 8 models under ELB. For ULB, the 3 restorative models  <cit.> exhibit slightly higher consistency compared to generative models, where PG25 achieves the highest perception score against all others. It is worth noting that the restorative models are more robust. The upper and lower bounds of the scores in each dimension seldom surpass 1.0, whereas the randomness of the generative models notably deteriorates. As the bitrate further decreases to ELB, consistency scores of all models decline, while perception scores have slight improvement. In summary, apart from Animate <cit.> specifically for cartoon styles, and InstructPix <cit.> that significantly alters images, all other models demonstrate potential applications in CMC. Additionally, by averaging all scores, we find that the models ranking based on subjective scores aligns closely with the objective ones shown in Table <ref> and Table <ref>. This finding validates the reasonability of our previous experiments and highlights that, compared to perception, humans tend to focus more on consistency when viewing compressed images. §.§ Compare with Traditional Codecs To validate the practicality of the CMC paradigm, we select 2 outstanding combinations of I2T and T2I models from CMC-Bench, and compare them with 3 mainstream codecs: AVC <cit.>, HEVC <cit.>, and VVC <cit.> at I-Frame mode, and the latest semantic codec pipeline CDC <cit.>. Given the superiority of GPT-4o <cit.> as the encoder, we initially pair it with the top-ranked decoder DiffBIR <cit.>. Considering applications on different modes, excluding the reconstructive model, we also assess its performance with the third-tanked decoder RealVis <cit.>. These two approaches with four bitrates correspond to Text, Pixel, Image, Full modes are shown in Figure <ref>. To comprehensively compare the two paradigms across different dimensions, we add 3 Consistency metrics: CLIPSIM <cit.>, LPIPS <cit.>, and SSIM <cit.>; and 3 Perception metrics: CLIPIQA <cit.>, LIQE <cit.>, and FID <cit.>. Models ranked higher prioritize semantic information, while those lower focus on pixel-level consistency. Both CMC paradigms demonstrate an advance in terms of most metrics. Given that SSIM is purely pixel-based, the performance drop due to generative compression is expected. The lead in perception is particularly notable, as it surpasses traditional codecs at extremely low bitrates. However, the advantage in consistency is relatively smaller, achieving a reduction of around 30% in bitrate compared to traditional methods at 0.02 bpp. The DiffBIR decoder generally shows better performance, while RealVis fits A wider range of bitrates. In summary, based on the above analyses, we believe that CMC holds a certain advantage over traditional encoding. However, for implementing LMMs into the next generation of visual signal codecs, further optimization is required in the following aspects: Enhanced T2I models: Both encoders and decoders are crucial in CMC, but decoders are more decisive. Future T2I models should possess more sophisticated control mechanisms, ensuring high-quality generation while maintaining consistency with reference images and text. Better adaption to SCI: the compression performance of SCI is inferior to NSI and AIGI, necessitating LMMs with specialized understanding and generating mechanisms to handle SCI. Wider bitrate range: Although leading in ULB and ELB, the margin of consistency improvement is not as pronounced as perception. Future efforts should focus on CMC at higher bitrates, incorporating more control information to aid in reconstructing the original image, ultimately achieving superiority across all bitrates and dimensions as compared to traditional codecs. § CONCLUSION We construct CMC-Bench, a benchmark for assessing the collaborative functioning of I2T and T2I models in image compression. Anticipating the bitrate requirements for codecs in the next decade, we proposed four collaboration modes among LMMs, along with two indicators of consistency and perception. By employing 6 mainstream I2T and 12 T2I models, we collected 58,000 distorted images through CMC with 160,000 human subjective annotations to train objective metrics for comprehensive evaluation. Our assessment demonstrates that even without dedicated training for compression tasks, combinations of several advanced I2T and T2I models have already surpassed traditional codecs in multiple aspects. However, there is still a long way to go before LMMs can directly become the future codecs paradigm. We sincerely hope that CMC-Bench will inspire future LMMs to perform better compression towards the evolution of visual signal codecs. unsrt § APPENDIX In this section, we briefly describe the content of the checklist requirements. Considering that our experiments tried a variety of parameter configurations, the conclusions under different configurations are also stated here, including specific ablation data. §.§ Limitations and Social Impact Limitation 1: Although we have considered most of the mainstream I2T and T2I models in CMC-Bench (till March 2024), the number of models is still insufficient to fully characterize the performance of all current LMMs on CMC. Taking the open-source T2 model as an example, more than 20,000 models have been released on huggingface (till May 2024). Although we cannot run all models, the capabilities of some relatively unpopular or more advanced LMMs in the future need to be further updated on CMC-Bench. Limitation 2: CMC-Bench is currently designed for the performance verification of image compression, not video compression. Considering that the temporal information of videos is relatively complex, the current LMMs are only applicable to image compression, which makes it difficult to ensure consistency with the reference when generating videos. However, as LMMs gradually apply to video compression in the future, CMC-Bench will also be evaluated at the video level. Social Impact: Through the CMC paradigm, the size of the image can be compressed by 1,000 times, and even 10,000 times in extreme cases. This will effectively promote image communication between a large number of terminals under limited bandwidth, thereby realizing multi-device collaboration in the Internet of Things and semantic communication. Considering that traditional codecs have encountered bottlenecks after three decades of development and the compression rate is gradually approaching the Shannon limit, we believe that LMM will effectively achieve semantic-level compression and thus become the future evolution direction of visual information codec protocol. §.§ Subjective Annotation Settings Compliant with the ITU-R BT.500-13 <cit.> standard, we invited 20 viewers (11 male, 9 female) in this subjective experiment with normal lighting levels. Images are presented on the iMac display together with the ground truth in random order on the screen, with a resolution of up to 4096 × 2304. Both ground truth and distorted images are accessible for subjective. Considering the consistency between the reference and distorted image, and the perceptual quality of the only distorted image, subjects were asked to give two scores within the range of [0, 5], where each one-point interval stands for poor, bad, fair, good, or excellent quality. The user interface is shown in Figure <ref>. Each user, in accordance with the Helsinki Declaration, provides informed consent for their data to be used in experiments. To prevent NSFW content, we implement three preventive measures: (1) Conduct a thorough manual screening of the ground truth; (2) Utilize the SD safety checker <cit.> during decoding; (3) Incorporate an `offensive' flag in the annotation process, allowing viewers to report NSFW content if encountered. The data confirms that the ground truth is safe, with approximately 0.2% of distorted images receiving reports, which is generally acceptable. In case of visual fatigue, we split the database into g ∈ [0,10] groups including M=400 images each, while limiting the experiment time to an hour. After collecting every viewer's quality ratings, we compute the Spearman Rank-order Correlation Coefficient (SRoCC) between them and the global average and remove the outliers with SRoCC lower than 0.6. Then we normalize the average score s for between each session to avoid inter-session scoring differences as: s_ij(g) = r_ij(g) - 1/M∑_i=0^g· M -1 r_ij + 2.5, where (i,j) represent the index of the image and viewer and r stands for raw score. We observed a fairly even distribution of subjective scores on both dimensions and bar graphs for each score range are provided in the Supplementary. Then subjective scores are converted to Z-scores z_ij by: z_ij=s_ij-μ_j/σ_j, where μ_j=1/N∑_i=0^N-1 s_ij, σ_j=√(1/N-1∑_i=0^N-1(s_ij-μ_i)^2) and N=10 is the number of subjects, which is finally reported as MOS, namely golden user annotations. The distribution of Consistency and the Perception MOS is shown in Figure <ref>, which proves that extremely low and high scores are rare, and most scores are between 1 and 4. The Perception score is concentrated in the medium-low area, while the Consistency dimension tends to be moderately high. §.§ Experimental Platform For 4,000 labeled image pairs, we trained five FR and five NR quality indicators for 50 epochs using Adam optimizer on a local NVIDIA GeForce RTX 4090. Among which 80/20 for training/testing. We take MSE loss with a learning rate at 2×10^-5. The TOPIQ-FR and TOPIQ-NR are set as objective indicators for Consistency/Perception. Noted these 4,000 training data images are not included in the source data for objective evaluation for a fair comparison. The LMMs are validated on a server with four NVIDIA RTX A6000, using I2T in different output lengths, and T2I in different strengths, combining through Text/Pixel/Image/Full modes. §.§ Image-to-Text Model Configuration Towards different output lengths, we applied different prompts as the input for I2T models. The prompt follows previous CMC templates <cit.>, with length [5,10,20,50]. For example, to describe Figure <ref> in different lengths, the input prompt format and output text from GPT-4o <cit.> are: r0.42 < g r a p h i c s > Example ground truth. Q(5): `Generate an informative paragraph in 5 words based on the image.' A(5): `Traditional attire on serene river.' Q(10): `Generate an informative paragraph in 10 words based on the image. Considering Light, Color.' A(10): `Two individuals navigate a wooden boat in traditional attire, serene.' Q(20): `Generate an informative paragraph in 20 words based on the image. Considering Light, Color, Clarity.' A(20): `A woman in traditional attire and a conical hat, seated on a wooden boat, with another person rowing on water.' Q(50): `Generate an informative paragraph in 50 words based on the image. Considering Light, Color, Clarity, Dense Caption, and Region Semantic.' A(50): `A woman in a purple dress sits near the front of a boat, her hand adjusting a straw hat. Behind her, a person in a beige top and floral pants stands near the boat's wooden oars. The setting is a calm, expansive body of water under a cloudy sky.' To explore how much information the above four output lengths can represent. We use different I2T models, and combine them with the most effective T2I model (RealVis <cit.>) under the above four output lengths, and use four Consistency and four Perception indicators for analysis, as shown in Figure <ref>, where the four datapoints of each curve represent four output lengths. The experimental results show that most I2T models can dynamically adjust the output length, except that InstructBLIP for image annotation cannot input prompt, and ShareGPT is not sensitive to the specified output length. Overall, when inputting Q(5), the reconstruction effect is relatively poor because of the short output; when inputting Q(50), the overly long paragraph from the I2T model cannot be understood by the T2I model, so the performance is not significantly improved while wasting bitrate. By observing the trend of all curves, we find that when the bpp of the text is between 0.002-0.003, the balance between performance and bitrate can be achieved. Therefore, for each model, we choose the output length closest to this bitrate, that is, Q(20) for GPT-4o <cit.>, MPlugOwl-2 <cit.>; Q(10) for LLAVA <cit.>, Qwen <cit.>, ShareGPT <cit.>; and the default length for InstructBLIP <cit.>. §.§ Text-to-Image Model Configuration In Text mode, since there is no reference image as a starting point, the denoising strength is undisputedly 1. In the other three modes, we adjusted different intensities with a granularity of 0.1. For Full and Image modes that provide a reference image, a high denoising strength will waste the information of this reference, so we verified the performance under strength from 0.2 to 0.8; for Pixel mode, since the pixel provides less information than the compressed image, we increased the strength and range from 0.4 to 0.99 (as strength=1 will ignore the reference). The verification of Full/Image/Pixel results are shown in Figure <ref>/<ref>/<ref> respectively, using same 4 Consistency and 4 Perception indicators. In general, as the strength increases, the Consistency index increases first and then decreases, while the Perception index continues to rise. This is because the greater the strength, the more details the T2I model adds to the image, thereby improving the Perception score. However, for Consistency, the added details at low strength can indeed make up for the unclear areas in the reference image, thereby performing restoration; but when the strength increases, the added details are inconsistent with the original image, and instead bring negative optimization to the reference image. Thus, a good strength requires a trade-off between Consistency and Perception. Taking both dimensions into consideration, we set the strength of Full and Image mode to 0.5 and the Pixel mode to 0.8. §.§ Applicability on Different Reference Image In the main text, the VVC provides the reference image with QP=53. In Table <ref> we further use VVC with QP=51,48,45 as the reference image for T2I model denoising to perform CMC. At higher bitrates, CMC still has an overwhelming advantage over VVC in the Perception metric. Except for SSIM, CMC achieves comprehensive optimization of all other indicators compared to traditional codecs, but the optimization range gradually decreases with the increase of bitrate. Moreover, once the QP is lower than 45, it will fall behind in the Consistency indicators. In summary, compared with traditional codecs, CMC can achieve an overall improvement in Perception and Consistency at low bitrates. However, when bpp increases to 0.1 or above, the improvement in Perception comes at the cost of Consistency. This indicates that ideal performance at higher bitrates is an important factor when using LMMs for image compression. §.§ Example Result Visualization The CMC result visualization is shown from Figure <ref> to <ref>, all result use GPT-4o <cit.> as encoder, and Animate<cit.>/ Dreamlike<cit.>/PG20<cit.>/PG25<cit.>/RealVis<cit.> as decoder (from left to right). Four working modes Full/Image/Pixel/Text are all included (from top to bottom). For different modes, the compression results from LMMs show that as the bitrate decreases, the decoded image is more different from the ground truth. Among them, the Full mode can obtain results generally similar to the ground truth; the Image mode will lose some semantic details while introducing artifacts; the Pixel mode loses more details but ensures the consistency of the overall composition; and the result generated by Text is significantly different from the ground truth. For the performance of the CMC on different contents, Figure <ref>/<ref> reveals it performs most satisfactorily on AIGIs; Figure <ref>/<ref> indicates it can also obtain results consistent with ground truth on NSIs, but it is easy to lose details such as human faces and vehicle signs; Figure <ref>/<ref> implies it is the least ideal on SCIs, as it misunderstands the relationship between characters in the movie or games, and cannot draw formed letters on webpages. In conclusion, CMC is a promising visual signal compression method, but to become a universal codec standard in the future, the robustness to all content types needs to be improved. §.§ Data Statement The CMC-Bench dataset is released under the CC BY 4.0 license. This includes all ground truth, distorted images, subjective annotations, and the weight of the Consistency/Perception evaluation model. All LMM developers can test their performance through our public scripts, and all image compression researchers can obtain the public I2T+T2I LMM pipeline. We believe these resources can inspire the next generation of visual signal codec protocols.
http://arxiv.org/abs/2406.08076v1
20240612105129
VECL-TTS: Voice identity and Emotional style controllable Cross-Lingual Text-to-Speech
[ "Ashishkumar Gudmalwar", "Nirmesh Shah", "Sai Akarsh", "Pankaj Wasnik", "Rajiv Ratn Shah" ]
eess.AS
[ "eess.AS", "cs.SD" ]
Uses of Active and Passive Learning in Stateful Fuzzing Erik Poll June 17, 2024 ======================================================= § ABSTRACT Despite the significant advancements in Text-to-Speech (TTS) systems, their full utilization in automatic dubbing remains limited. This task necessitates the extraction of voice identity and emotional style from a reference speech in a source language and subsequently transferring them to a target language using cross-lingual TTS techniques. While previous approaches have mainly concentrated on controlling voice identity within the cross-lingual TTS framework, there has been limited work on incorporating emotion and voice identity together. To this end, we introduce an end-to-end Voice Identity and Emotional Style Controllable Cross-Lingual (VECL) TTS system using multilingual speakers and an emotion embedding network. Moreover, we introduce content and style consistency losses to enhance the quality of synthesized speech further. The proposed system achieved an average relative improvement of 8.83% compared to the state-of-the-art (SOTA) methods on a database comprising English and three Indian languages (Hindi, Telugu, and Marathi). § INTRODUCTION With the advancement of sophisticated Text-to-Speech (TTS) systems, research has notably shifted towards implementing speech-generation technologies for automatic dubbing applications <cit.>. Particularly, Cross-lingual TTS plays a vital role in automatic dubbing by generating high-quality speech in a target language while upholding the distinctive voice identity and emotional nuances of the original speaker. The core idea involves extracting the vocal characteristics of a speaker in the source language, specifically English, and transferring them to a foreign language, such as an Indian language, or vice versa <cit.>. This creates an impression that the English-speaking actor/actress is now conversing in an Indian language post-dubbing, which enhances the viewers' immersive experience. While the input text provided to the TTS may contain certain emotional cues, it often falls short of capturing the nuanced speaking style specific to a target speaker. Consequently, TTS systems typically generate speech based on the styles they have been trained on. Various methodologies have been proposed to address these limitations, aiming to extract voice identity style information from a reference speech signal <cit.>. However, many of these approaches assume both the reference speech signal and the target text for synthesis are in the same language. Furthermore, for emotion control, some techniques leverage one-hot vector embeddings or emotional features extracted from a reference speech signal in the same language <cit.>. More recently, attempts have been made to control voice identity using reference signals from different languages, a concept referred to as cross-lingual TTS <cit.>. Nevertheless, these attempts have not extensively delved into the control of emotion within a cross-lingual framework. Only one approach, named METTS <cit.>, attempts to achieve multilingual Text-to-Speech (TTS) by incorporating cross-lingual emotion and cross-speaker transfer. METTS stands out due to its base architecture and the methodology employed for extracting and aligning emotion-embedding information from the reference signal. However, it's noteworthy that the emotion similarity of METTS is not sufficiently competitive when compared to state-of-the-art (SOTA) models <cit.>. In this paper, we propose a novel approach for the simultaneous control of voice identity and emotional style within a cohesive framework, aiming to develop an end-to-end system for Voice Identity and Emotional Style Controllable Cross-Lingual Text-to-Speech (TTS). Building upon the established VITS/YourTTS architecture <cit.>, our design serves as an extension. To address a significant limitation in VITS/YourTTS, specifically the instability in predicting phoneme duration through the stochastic duration predictor, resulting in the production of unnatural speech, we introduce a conditioning mechanism for the stochastic duration network predictor. This involves utilizing both speaker and emotion embedding features, recognizing that duration is influenced by both styles. Furthermore, we observe a substantial decline in the pronunciation quality of the generated speech after applying cross-lingual voice identity and emotion style, primarily due to unnatural duration. Therefore, we propose the application of content loss by integrating wav2vec2-based self-supervised speech representations <cit.> during training to mitigate pronunciation errors. Our findings indicate that the proposed model achieves better subjective and objective scores, particularly in emotion similarity, compared to state-of-the-art algorithms and various ablation studies across English and three Indian languages: Hindi, Telugu, and Marathi (these languages are spoken by more than 600 million people <cit.>). The key contributions can be summarized as follows: * The proposed approach employs two separate multilingual speaker and emotion embedding extractor networks to achieve cross-lingual speaker and emotion transfer in a unified framework. * Both speaker and emotion embeddings were employed to produce stable phoneme duration via a stochastic duration predictor network. * We introduce emotion consistency loss to improve emotion controllability further. * Wav2vec2 based self-supervised speech representations were utilized to reduce pronunciation errors via proposed content loss after cross-lingual transfer. * This paper focuses on cross-lingual transfer among English and three Indian languages (Hindi, Telugu, and Marathi), which has received limited attention until now. § METHODOLOGY This section discusses the proposed VECL-TTS architecture along with the motivation for different proposed components. §.§ Proposed VECL-TTS Model Our work builds upon the foundation of YourTTS <cit.>. The proposed model uses raw text as input due to the unavailability of decent grapheme-to-phoneme converters for Indian languages. The model demonstrates its ability to synthesize good-quality speech directly from raw text and eliminates the need for the linguistically taxing job of creating suitable grapheme-to-phoneme converters. As shown in Figure <ref>, our model employs a transformer-based text encoder <cit.> with 10 transformer blocks and the number of hidden channels as 196. Moreover, we concatenate a 4-dimensional language embedding into each input character embedding to facilitate multilingual control. By doing so, we are conditioning the text encoder to use language information at every character, thus ensuring more stable outputs. For the decoder of the text module, we use a stack of 4 affine coupling layers <cit.>, where each layer is composed of 4 WaveNet residual blocks <cit.> following previous work <cit.>. For generating the output speech waveform, we use HiFi-GAN V1 <cit.> as the neural vocoder. We have a Variational Auto-Encoder (VAE) <cit.> that uses a Posterior Encoder <cit.>, which is composed of 16 WaveNet residual blocks <cit.> to convert linear spectrogram into a latent variable. This latent variable is fed into the vocoder and the flow-based decoder thus connecting the vocoder to the TTS model, making it end-to-end. The usage of learned latent variables helps the model to come up with its own representation of speech and eliminates the need for a mel-spectrogram. Having the model learn the intermediate representation of speech achieves better results as the representation learned might be more suitable for the task than traditional approaches. We use Stochastic Duration Predictor (SDP) <cit.>, to generate speech with diverse rhythms from input text. The SDP is composed of Dilated Depth Separable Convolutions (DDSC) based Spline Flows <cit.> and predicts the duration of each character, which is then used to generate the output in parallel, giving it faster than real-time inference. We propose multiple-style emotion control, which allows us to generate speech with different emotions even in a cross-lingual setup. We used a multi-lingual emotion encoder to get emotion embeddings for emotion controllability. Using these embeddings, we condition the YourTTS encoder-decoder to produce emotional speech, as explained in the following sections. We also have a KL divergence loss (which we represented as, L_KL) because of the model's VAE nature <cit.>. This loss tries to maximize the evidence lower bound (ELBO) to get a desired multivariate normal distribution. §.§ Emotion Style Control Transferring speaker and emotion traits from one language to another is tricky. Prosody patterns and small nuances vary among languages due to differences in pronunciation. In the proposed approach, we trained a multilingual emotion encoder to obtain the language-specific emotion representation, as shown in Figure <ref>. The motivation for using a pre-trained emotion encoder is to get more diverse emotion information representation from reference audio rather than using just emotion ID or one-hot encoding. The proposed multilingual emotion encoder is developed by training an emotion recognition model consisting of a parallel 2D Convolutional Neural Network (CNN) and a transformer encoder <cit.>. This emotion recognition model is trained using multilingual data to learn language-independent representations. This model obtained an F1 score of 91.28% using a multi-lingual test dataset. The embedding obtained from the emotion encoder represents emotion characteristics, which are used to condition the encoder, decoder, and duration predictor to generate emotional audio. We also added Emotion Consistency Loss (ECL) to preserve targeted emotion in generated audio. To do so, we computed emotion embeddings from ground truth audio and generated audio, and maximized the cosine similarity between them. The ECL loss (as shown in Fig. <ref>) is defined in Eq. <ref> as follows, L_ECL = -α_e/n∑_i^ncos_sim(ϕ_e(gen_i), ϕ_e(gt_i)), where ϕ_e(gen) and ϕ_e(gt) are functions to obtain emotion embeddings from generated and ground truth audio. The α_e is the tuning parameter used while adding ECL to the final loss to decide ECL's contribution to the final loss. The cosine similarity function is represented by cos_sim. §.§ Speaker Identity Control In order to transfer speaker characteristics from one language to another, we conditioned the proposed TTS model using speaker embeddings obtained from the speaker encoder. The H/ASP <cit.> model is used as the speaker encoder as shown in Figure <ref>. We also used Speaker Consistency Loss (SCL) in the final loss. The SCL is obtained by maximizing cosine similarity between extracted embeddings from ground truth and generated audio, as shown in Eq. <ref>. L_SCL = -α_s/n∑_i^ncos_sim(ϕ_s(gen_i), ϕ_s(gt_i)). In addition to ECL and SCL, we also added content loss L_Content to the final loss including mean squared error (MSE) loss (i.e., L_MSE) and L_KL. The content loss is the loss between wav2vec embeddings obtained from ground truth and generated audio. The final loss is represented by the Eq. <ref>. L_final = L_ECL + L_SCL+L_Content+L_MSE+L_KL. § EVALUATION §.§ Experimental Database In this work, we considered four languages, English, Hindi, Marathi, and Telugu, having one dataset for each language. We used the VCTK dataset for the English language, which comprises of 44 hours of speech from 109 different speakers <cit.>. For Indian language databases, we used the LIMMITS challenge <cit.> database. Each Hindi, Marathi, and Telugu data was recorded by two speakers. The total duration of the recordings in each language varies between 45-55 hours approximately. Additionally, we have used  57.42 hours of internal in-house Emotional data from 11 speakers in the Hindi language. We considered six emotions: Neutral, Angry, Happy, Sad, Fear and Surprise. For all considered datasets, we performed pre-processing to remove long silences and loudness normalization to have similar loudness across the datasets[https://github.com/wiseman/py-webrtcvad][https://github.com/slhck/ffmpeg-normalize]. Total data is divided into subsets of 80% training, 10% validation and 10% for test. §.§ Experimental Setup The model was trained on a single NVIDIA A100 80GB PCIe with a batch size of 128. For both the TTS module and HiFi-GAN vocoder, we used AdamW Optimizer <cit.> with betas 0.8 and 0.99, weight decay 0.01 and an initial learning rate of 0.0002 decaying exponentially. The model was trained for 1M iterations and has around 88M parameters. §.§ SOTA Methods * YourTTS <cit.> is an effective multilingual TTS model. It uses a speaker encoder to generate speaker embeddings to condition the TTS model to achieve cross-lingual voice cloning. * M3 <cit.> TTS is a multi-modal and multi-scale style TTS. Adversarial training and Mixed Density Networks (MDN) are used to map style vectors for voice cloning. * METTS <cit.> is a multi-lingual emotional TTS model which supports cross-lingual emotion and speaker transfer. It uses delightfulTTS as the backbone and introduces multi-scale emotion modeling. * CET <cit.> TTS transfers emotions across speakers. Emotion tokens are trained to be closely related to corresponding emotions. The model transfers emotion from a reference mel-spectrogram. § RESULTS AND DISCUSSION This section presents the comparison and discussion between the proposed method and other SOTA methods. §.§ Subjective Evaluation In this work, performance evaluation is carried out to analyze the generated cross-lingual emotional speech quality in terms of naturalness, emotion, and speaker similarity. The Mean Opinion Score (MOS) <cit.> is used to evaluate the quality of synthesized speech. Total 20 listeners (aged between 23 to 35 with no known hearing impairments) participated in the subjective evaluation. MOS score is calculated by considering a total of 800 samples during each subjective test for naturalness, speaker similarity and emotional similarity. Table <ref> shows the similarity MOS scores for the proposed model and SOTA approaches. The evaluation results show that the proposed model performs better than the baseline models for emotion similarity. Regarding naturalness, the proposed model performs better compared to all baselines except for METTS, where the MOS score is close to METTS. For speaker similarity score, the YourTTS model performs better. From Table <ref>, we can say that the proposed model performs better than all the baseline models for transferring cross-lingual emotion characteristics without much degradation in speaker similarity and naturalness. The CET <cit.> model is designed for transferring emotions across different languages, but it struggles to capture the various emotional expressions. This leads to synthetic speech with a strong accent and lower scores in naturalness and speaker similarity evaluations. In contrast, our proposed model uses emotion embedding to capture both language-specific and language-agnostic emotional expressions, avoiding accent-emotion entanglement. M3 <cit.>, on the other hand, performs poorly in all evaluation metrics. It assumes a strong correlation between style coding, speaker attributes, and content, resulting in an entanglement between the speaker's timbre and emotion. In contrast, our proposed model utilizes an emotion embedding network to effectively remove the speaker's timbre, resulting in a more natural emotional speech. Demo samples are available at demo page[<https://nirmesh-sony.github.io/VECL_TTS/>.] §.§ Objective Evaluation The objective evaluation is carried out to comprehensively evaluate the performance of the proposed VECL-TTS system. In objective evaluation, we computed cosine similarity score for speaker and Emotion similarities. In particular, cosine distance is computed between the speaker embeddings of the synthesized audio and ground truth audio. Similarly, the cosine distance between emotion embedding of synthesized audio and ground truth audio is computed for emotion similarity. We used a speaker and emotion encoder network for estimating style embeddings. A higher number of cosine similarities denotes stronger resemblance <cit.>. Table <ref> denotes the objective evaluation comparison between the proposed approach and baseline methods. The objective test results correlate well with subjective evaluations. Table <ref> shows that the proposed VECL-TTS model achieves the highest score for cosine similarity compared to the SOTA approaches. The CET approach achieves a lower score for cosine speaker similarity, which indicates a significant challenge in transferring emotion characteristics in the cross-lingual scenario as it focuses on inter-lingual emotional TTS. Additionally, M3 fails to address accent-related challenges in multilingual emotional speech synthesis effectively. §.§ Ablation Study We perform ablation analyses, individually excluding the ECL and content loss. Ablation 1 denotes the YourTTS model, including content loss (i.e., proposed model w/o ECL) with the proposed multiple style controlling block. Similarly, Ablation 2 denotes the YourTTS model, including ECL loss (i.e., proposed model w/o content loss). Table <ref> displays the corresponding ablation analysis subjective MOS score. Table <ref> shows that when we add ECL loss to the baseline YourTTS, emotion similarity, and naturalness improve, as indicated in Ablation 2. Content loss in Ablation 1, which focuses on improving pronunciation, is evident from Table <ref>, where it improves the naturalness of generated audio. It can be observed from the MOS scores of the proposed VECL-TTS and baseline YourTTS that while transferring emotion characteristics, the proposed VECL-TTS model maintains speaker similarity close to the baseline YourTTS. §.§ Visual Analysis of VECL model Further, we visualize the changes in prosody patterns such as Mel spectrogram and pitch for different emotions between reference, generated, and YourTTS audio in a cross-lingual setting. Figure <ref> shows that the generated audio using VECL-TTS for a particular emotion follows a similar trend in pitch variations compared to reference audio. For example, for angry and happy emotions, the range of pitch variation indicated by the blue box is higher for reference audio, and a similar pattern is found in the proposed VECL-TTS generated audio. In YourTTS, the pitch variations are much less when compared to reference audio. In contrast, VECL-TTS captures diverse pitch variations and is very similar in range compared to reference audio. We also notice that in YourTTS, the pitch variances in angry and happy are very similar to sad. This indicates that the proposed VECL-TTS model is more expressive when compared to the YourTTS model. § SUMMARY AND CONCLUSION In this paper, we proposed the VECL-TTS model to control the voice identity and emotion style simultaneously in a cross-lingual context. Here, we employed speaker and emotion embeddings from two multilingual pre-trained classifiers to obtain more refined style representations. Additionally, we introduced the ECL loss to preserve emotions. To address challenges related to pronunciation and degradation in generated speech resulting from cross-lingual style transfer, we proposed a content loss framework that utilizes wave2vec2-based self-supervised speech representations. The efficacy of our proposed VECL-TTS model is compared to the SOTA architectures within the framework of English and three Indian languages: Hindi, Telugu, and Marathi. We have observed that the proposed model attains better subjective and objective scores, particularly regarding emotional similarity compared to the pertinent baseline. In the future, we would like to further improvise the output with respect to naturalness, speaker, and emotion similarity simultaneously. IEEEtran
http://arxiv.org/abs/2406.08745v1
20240613020706
UruBots Autonomous Cars Team One Description Paper for FIRA 2024
[ "Pablo Moraes", "Christopher Peters", "Any Da Rosa", "Vinicio Melgar", "Franco Nuñez", "Maximo Retamar", "William Moraes", "Victoria Saravia", "Hiago Sodre", "Sebastian Barcelona", "Anthony Scirgalea", "Juan Deniz", "Bruna Guterres", "André Kelbouscas", "Ricardo Grando" ]
cs.RO
[ "cs.RO" ]
Technological University of Uruguay, UTEC, Ostfalia University of Applied Sciences UruBots AC One et al. UruBots AC One UruBots Autonomous Cars Team One Description Paper for FIRA 2024 Pablo Moraes1 Christopher Peters2 Any Da Rosa1 Vinicio Melgar1 Franco Nuñez1 Maximo Retamar1 William Moraes1 Victoria Saravia1 Hiago Sodre1 Sebastian Barcelona1 Anthony Scirgalea1 Juan Deniz1 Bruna Guterres1 André Kelbouscas1 Ricardo Grando1 ====================================================================================================================================================================================================================================================== § ABSTRACT This document presents the design of an autonomous car developed by the UruBots team for the 2024 FIRA Autonomous Cars Race Challenge. The project involves creating an RC-car sized electric vehicle capable of navigating race tracks with in an autonomous manner. It integrates mechanical and electronic systems alongside artificial intelligence based algorithms for the navigation and real-time decision-making. The core of our project include the utilization of an AI-based algorithm to learn information from a camera and act in the robot to perform the navigation. We show that by creating a dataset with more than five thousand samples and a five-layered CNN we managed to achieve promissing performance we our proposed hardware setup. Overall, this paper aims to demonstrate the autonomous capabilities of our car, highlighting its readiness for the 2024 FIRA challenge, helping to contribute to the field of autonomous vehicle research. § INTRODUCTION The development of autonomous vehicles marks an advancement in transportation technology and competitions such as the FIRA Autonomous Cars Race Challenge helps on the development and fostering of this area of study. Here we present our autonomous car, designed for the 2024 FIRA Autonomous Cars Race Challenge. This project focuses on creating a fully autonomous, RC-car sized electric vehicle capable of navigating race tracks environments with good precision and in a small amount of time. Our vehicle integrates mechanical and electronic systems with learning algorithms. By using many sensors and using this information to train and feedback our model, our car manages to achieve environmental perception, being able to follow a proposed track and even avoid obstacles. The software architecture is designed for real-time processing with a dedicated board to run the AI model. Overall, this document presents a detailed examination of both the software and hardware components of our autonomous car. The hardware section covers the mechanical design, electronic systems, and sensor integration, while the software section details the algorithms, data processing techniques, and how our vehicle perform the decision-making. We performed an evaluation on an example of of race track, where our vehicle was able to race a track of 13 meters in less than 20 seconds. § CONSTRUCTION §.§ Hardware To introduce the construction specifications of this vehicle, it is possible to view its actual physical dimensions in table 1 below. The vehicle's hardware was fully developed according to the competition's needs, with fully defined dimensions. The description of its parts are included in the following subsection with the description of the components and their functions for operation. An initial image of the vehicle in its configuration phase for operation can be seen in Figure 1. Our autonomous vehicle is characterized by a design that combines PLA+ in its chassis to ensure both strength and lightness, having also a low cost bias. Its computational core lies in a Jetson Nano 4GB, backed by a 3D-printed casing for protection. Equipped with a Logitech C920 HD Pro camera for real-time vision, and powered by two 11.1V LiPo batteries, the vehicle ensures real time performance. The vehicle's locomotion is achieved through 6V DC motors and an SG5010 servo motor, controlled respectively by a PCA9685 and an L298N H-bridge. Additionally, a TP-Link TX20 USB WiFi adapter is incorporated for wireless connectivity. The vehicle uses 19mm bearings and soft iron shafts for the Ackerman system. A XL4016 step-down converter is also included to power the Jetson Nano. These components are interconnected following a specific scheme, ensuring coordinated and efficient operation throughout all autonomous vehicle operations. §.§ Software Our software system is based on the Donkey Car open-source framework. This framework was designed for building self-driving robotic vehicles using Python and the Raspberry Pi or for the Jetson Nano board. We used this framework to perform the steering and throttle of our vehicle on a Jetson Nano board. Our proposed workflow to achieve an autonomous car are described as follows: Our vehicle was projected with a monocular camera pointed towards the front of the vehicle as shown in Figure 1. The onboard camera captures real-time images of the track, serving as the primary input for navigation. The Jetson Nano processes these images, utilizing CUDA support for intensive image processing. A CNN with a Keras Linear architecture, comprising five convolutional layers, analyzes the images and predicts the optimal action. 5000 images were recorded in our dataset from parts of the an example track that we used for our qualification environment. Predictions from the CNN are translated into commands for motor and steering control. These commands are sent via the Donkey Car control interface, integrated with the vehicle's hardware. PWM controllers manage motor speed and wheel direction. A web interface allows real-time monitoring of the vehicle's status, including speed, direction, and camera feed. Flask, a Python micro web framework, powers this interface, running on the Jetson Nano. All navigation data (images, control commands, telemetry) are logged for analysis. Pandas and Matplotlib facilitate data analysis and visualization, essential for refining the CNN model. Image capture is configured for real-time navigation, while the Jetson Nano processes images using the TensorFlow-based CNN. The PWM controller translates CNN commands into motor and steering control. The web interface offers real-time control and monitoring from any network-connected device. This comprehensive software system enables our autonomous vehicle to navigate efficiently and precisely in controlled environments, meeting the requirements of the FIRA competition. Our convolutional model summary can be seen in the Figure 2. It has a total amount of five convolutional layers followed by some dense layers that at the end provide the steering an throttle for the vehicle; all of them with added dropuot layers. The 5000 images from our collected dataset were resized to 120x160 pixels and used to train for a total amount of 100 epochs. The model trained then was deployed and used then to control the vehicle. § RESULTS Our proposed scenario for evaluating our model can be seen in the Figure 3. It is approximately 13 meters long in a 5x5 meters area. After collecting the dataset and training our model, we set our vehicle to perform at it and on average we managed to make it complete the track in less than 20 seconds, giving us a pace of approximately 0.65 meters per second. § CONCLUSION Overall, we conclude that our proposed vehicle is capable to perform autonomous driving in race track and even avoid obstacles. We believe that our approach based on a learning model, advanced sensing and embedded processing met the requirements to perform at the 2024 FIRA RoboWorld Cup and help the development of this field of study and the competition itself. 8 2 Amirmohammad Zarif, Amirmahdi Zarif y Soroush Sadeghnejad (2024). Autonomous Cars Challenge Physical (Pro)
http://arxiv.org/abs/2406.08268v2
20240612143514
Multi-Static ISAC based on Network-Assisted Full-Duplex Cell-Free Networks: Performance Analysis and Duplex Mode Optimization
[ "Fan Zeng", "Ruoyun Liu", "Xiaoyu Sun", "Jingxuan Yu", "Jiamin Li", "Pengchen Zhu", "Dongming Wang", "Xiaohu You" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Multi-Static ISAC based on Network-Assisted Full-Duplex Cell-Free Networks: Performance Analysis and Duplex Mode Optimization Fan Zeng, Ruoyun Liu, Xiaoyu Sun, Jingxuan Yu, Student, IEEE, Jiamin Li, Pengchen Zhu, Dongming Wang, Member, IEEE, and Xiaohu You, Fellow, IEEE June 17, 2024 ====================================================================================================================================================== This work was supported in part by the National Key R&D Program of China under Grant 2021YFB2900300, by the National Natural Science Foundation of China (NSFC) under Grants 61971127, by the Fundamental Research Funds for the Central Universities under Grant 2242022k60006, and by the Major Key Project of PCL (PCL2021A01-2).(Corresponding author: Jiamin Li.) The authors are with National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, China (email: {zengfan, ryliu, 230238231, jingxuanyu, jiaminli, p.zhu, wangdm, xhyu}@seu.edu.cn). J. Li, D. Wang and X. You are also with Purple Mountain Laboratories, Nanjing 211111, China. § ABSTRACT Multi-static integrated sensing and communication (ISAC) technology, which can achieve a wider coverage range and avoid self-interference, is an important trend for the future development of ISAC. Existing multi-static ISAC designs are unable to support the asymmetric uplink (UL)/downlink (DL) communication requirements in the scenario while simultaneously achieving optimal sensing performance. This paper proposes a design for multi-static ISAC based on network-assisted full-duplex (NAFD) cell-free networks can well solve the above problems. Under this design, closed-form expressions for the individual comunication rate and localization error rate are derived under imperfect channel state information, which are respectively utilized to assess the communication and sensing performances. Then, we propose a deep Q-network-based accesss point (AP) duplex mode optimization algorithm to obtain the trade-off between communication and sensing from the UL and DL perspectives of the APs. Simulation results demonstrate that the NAFD-based ISAC system proposed in this paper can achieve significantly better communication performance than other ISAC systems while ensuring minimal impact on sensing performance. Then, we validate the accuracy of the derived closed-form expressions. Furthermore, the proposed optimization algorithm achieves performance comparable to that of the exhaustion method with low complexity. Multi-static integrated sensing and communication, network-assisted full-duplex system, access point duplex mode optimization, multi-objective optimization. § INTRODUCTION Integrated sensing and communication (ISAC) has received increasing attention as one of the six typical application scenarios of 6G<cit.>. In recent years, most of the research has stayed on the single-cell ISAC, such as integrated waveform design <cit.>, joint transmission beamforming <cit.>, and joint signal reception <cit.>, etc. Multi-static ISAC has more advantages than single-cell ISAC. In multi-static ISAC system, some access points (APs) send sensing signals while others receive echoes, which can avoid the problem of self-interference and achieve a wider coverage range. Multi-static sensing can also obtain distance and doppler-related information from different angles of the target, which can obtain more accurate information and effectively improve the accuracy and reliability of sensing. In addition, the multi-static ISAC system can also perform interference coordination. It can schedule some nodes as receiving/transmitting nodes according to the actual needs and conditions of the network, which can better adapt to different environments and task requirements. Therefore, the research in this paper is mainly based on multi-static ISAC. Rahman et al.<cit.> developed a new perceptual mobile network framework, which includes the architecture of multi-static ISAC. Huang et al.<cit.> proposed a coordinated power control design for multi-static ISAC systems aimed at maximizing the signal to interference plus noise ratio (SINR) at the user equipments (UEs) and the target position estimation Cramer-Rao Lower Bound (CRLB). However, the previous studies are based on cellular network implementation of multi-static ISAC. In multi-static ISAC system, the cross-link interference (CLI) within the system is exacerbated due to the increased number of sensing links. In traditional cellular-based ISAC, inter-cell link interference is complicated. In contrast, in the cell-free (CF) systems, signal transmission and reception have a larger DoF, which enables more flexible communication and sensing cooperation and various interference avoidance<cit.>. Therefore, there have been an increasing number of recent studies to integrate multi-static ISAC into CF networks. Sakhnini et al.<cit.> were the first to propose a communication and sensing architecture combining CF networks and ISAC. In this architecture, a group of APs transmit downlink (DL) sensing signals and the remaining APs receive echo signals for sensing. All signal processing is performed in the central processing unit (CPU). However, the APs in this architecture only support either sensing or communication. Behdad et al. <cit.> and Demirhan et al.<cit.> proposed power allocation strategies and beamforming strategies for CF-ISAC systems, respectively. These systems allow APs to communicate and sense simultaneously. However, these studies were conducted based on perfect channel state information (CSI) and did not consider the case of imperfect CSI. Mao et al. <cit.> derived a closed-form expression for the spectral efficiency (SE) of the CF-ISAC system, taking into account the uncertainty of the target location and the imperfect CSI information. However, in the previous study, only the case where uplink (UL) or DL communication coexists with sensing is considered, which constrains the system to serving only a subset of UEs at a time. For instance, in <cit.>, only UL UEs are considered to share resources with sensing. Conversely, in <cit.>, <cit.>, and <cit.>, only DL communications are considered. However, in real systems, the communication demands of UEs often present an asymmetry between UL and DL. The studies of Wei et al. <cit.> and Soret et al. <cit.> demonstrate that supporting only UL or DL on a piece of resource cannot respond quickly to such demands. In fact, in a multi-static ISAC system, some APs send DL sensing signals and others receive UL echoes and the system already has the capability of simultaneous UL and DL communication, i.e., it can send and receive signals at the same time. Nevertheless, to the best of our knowledge, there has been no research on multi-static ISAC systems for coexisting UL and DL communication. However, when UL and DL communication and sensing are performed simultaneously in the system, it will exacerbate the CLI in the system, which will seriously affect the performance of the system. Therefore, it is essential to suppress the CLI. Sit et al. <cit.> proposed to reconstruct the interfering signal and perform the suppression of the CLI between sensing and communication using the pilot signal to enhance the dynamic range of the radar. Wang et al.<cit.> proposed the network-assisted full-duplex (NAFD) technology for the CLI suppression in the systems that only consider UL and DL communication. The NAFD is based on the CF network in which each AP can perform UL reception or DL transmission with joint signal processing at the CPU side. Once the channels between the DL APs and the UL APs have been estimated using the DL pilot signals, the CPU can obtain the DL precoded signals of all UEs in advance. Consequently, the CLI suppression can be achieved in the digital domain. In <cit.>, the case of perfect CSI is considered. Li et al.<cit.> derived a closed-form expression for the SE of NAFD systems under imperfect CSI. However, similar interference cancellation has not been investigated for ISAC systems in which UL and DL communication coexist. Furthermore, the majority of existing multi-static ISAC studies are based on the assumption of fixed AP duplex modes<cit.>. In fact, the optimization of AP duplex mode represents a further management of the CLI within the system. By adjusting the AP duplex mode to meet the communication and sensing requirements of the system, the interference situation of the system can be further adjusted, thus improving the system performance. In previous studies, some of the literature has investigated the AP duplex mode optimization (ADMO) strategies in systems that only consider UL and DL communication without sensing<cit.>. Zhu et al.<cit.> investigated the ADMO for NAFD CF networks with the objective of maximizing the sum of SE, and proposed a parallel successive convex approximation-based algorithm and a reinforcement learning (RL) algorithm based on augmented Q-learning to solve this problem. Mohammadi et al.<cit.> proposed a joint optimization method to improve the sum of SE and energy efficiency of NAFD CF systems by jointly optimizing the AP duplex mode assignment, power control, and massive fading weights. Sun et al.<cit.> proposed a pre-allocation mechanism for AP mode optimization in the NAFD CF networks, aiming to achieve a balance between SE and resource utilization. Liu et al.<cit.> proposed three low-complexity heuristic algorithms to solve the ADMO problem in CF-ISAC networks, but each AP in their research architecture only supports sensing or communication. Elfiatoure et al.<cit.> proposed a greedy algorithm based on long term statistics for the ADMO in the CF-ISAC network. However, this study only considered the case of coexistence of DL communication and sensing within the system. Therefore, this paper focuses on the performance analysis and ADMO of multi-static ISAC system with UL and DL communication coexistence based on NAFD for CLI suppression. The main contributions of this paper are summarized as * This paper considers a multi-static ISAC system with UL and DL communications coexisting and are able to respond rapidly to the UL and DL communication needs of UEs. To the best of the authors' knowledge, no previous studies have investigated in this kind of system. * The CLI in the system is partially eliminated by using NAFD architecture. Then, we derive the closed-form expressions for the communication rate and location error rate (LER) to evaluate the performance of the communication and sensing operation, respectively. Based on the derived expressions, we establish a multi-objective optimization problem (MOOP) for getting the trade-off between communication and sensing. * A deep Q-network (DQN)-based algorithm for ADMO is proposed, which enables the CPU to autonomously learn the near-optimal strategy for communication and sensing performance under different weights in this MOOP. * The simulation results have verified the accuracy of the closed-form expression. By comparing the NAFD-based ISAC system with existing ISAC systems, the superiority of NAFD-based ISAC system in terms of communication performance and sensing performance has been demonstrated. Furthermore, the proposed ADMO algorithm can achieve performance close to that of the exhaustive method with extremely low complexity. Notations: Bold letters denote vectors or matrices. 𝐈_M denotes an M-dimensional identity matrix. The conjugate transpose and transpose are denoted by (·)^ H and (·)^ T, respectively. | ·| and ||· || represent the absolute value and spectral norm, respectively. 𝔼[·] and cov[·] denote expectation and covariance operators, respectively. tr(·) denotes the trace of a matrix. The Kronecker product is denoted by ⊗. For a matrix 𝐗, the nth element on the diagonal of 𝐗 is represented by [𝐗]_n. Matrix inequality 𝐗≽𝐘 denotes that 𝐗 - 𝐘 is positive semidefinite. The estimation of x is denoted by x̂, and the estimation error is denoted by e. A circularly symmetric complex Gaussian random variable x with mean zero and variance σ^2 is denoted as x ∼𝒞𝒩 (0, σ^2). Γ (k, θ) denotes the Gamma distribution with parameters k and θ. § SYSTEM MODEL Fig. <ref> illustrates the schematic of a NAFD-based ISAC system that enables simultaneous UL and DL communication and sensing. The system consists of a CPU, M uniformly distributed ISAC-APs, K UEs, and T passive sensing targets. All ISAC-APs are equipped with both communication and sensing capabilities. In this paper, we refer to these ISAC-APs as APs. K_ dl UEs perform DL reception and K_ ul perform UL transmission, and K=K_ dl+K_ ul. ℳ is the set of all APs. 𝒦_ dl={1,... ,K_ dl} and 𝒦_ ul={1,... ,K_ ul} denote the set of all DL and UL UEs, respectively, and 𝒦=𝒦_ dl+𝒦_ ul denotes the set of all UEs. Each AP has N antennas, and the UEs have single antennas. The APs can perform either UL reception or DL transmission, and this decision is made by the CPU. In the NAFD-based ISAC system, within a coherent block, the length of the coherent block is denoted as τ, and the first τ_ up symbols are used for UL pilot training for the channel estimation of UEs. The next τ_ dp symbols are used for DL pilot training. In the DL training phase, the channel between the DL APs and DL UEs and the channel between the DL APs and UL APs can be estimated. Finally, the DL APs use coherent joint transmission to transmit the communication data and orthogonal sensing signal according to the specific beamforming and resource allocation algorithms designed by the CPU. §.§ Channel modeling A quasi-static, flat-fading channel model is used in this study, where the channel remains static within each coherence interval and is flat in frequency. The channel is modeled as an aggregated channel containing a random scattering channel and the target-reflected channel. For instance, the random scattering channel between the m AP and the kth UE is denoted as 𝐡_ A_m, U_k = λ _ A_m, U_k𝐪_ A_m, U_k∈ℂ^N × 1, where λ _ A_m, U_kΔ = d_ A_m, U_k^ - α denotes the large-scale decay, d_ A_m, U_k^ is the distance between the mth AP and the kth UE, α is the path loss exponent, 𝐪_ A_m, U_k∈ℂ^N × 1∼ C N(0,𝐈_N) is the small scale Rayleigh decay. For the target-reflected channel, it can be expressed mathematically as the product of the channel from AP to the targets and the channel from the targets to the UE. The target-reflected channel between the mth AP and the tth target can be modeled as 𝐡_ A_m, T_t = λ _ A_m, T_t𝐪_ A_m, T_t∈ℂ^N × 1, where λ _ A_m, T_tΔ = d_ A_m, T_t^ - α, d_ A_m, T_t^ is the distance between the mth AP and the tth target. Since the CPU knows the approximate position of the target but not the exact position, the position of the target can be expressed as d_ A_m, T_t^∼ C N(d̅_ A_m, T_t^,σ _ A_m, T_t^2), 𝐪_ A_m, T_t∼ C N(𝐪̅_ A_m, T_t,χ _ A_m, T_t^2 𝐈_N) is the steering vector between the mth AP and the tth target, where 𝐪̅_ A_m, T_t is denoted as 𝐪̅_ A_m, T_t = [1,e^j2π𝐤^ T_mt𝐩_m1/λ...,e^j2π𝐤^ T_mt𝐩_mN/λ] ∈ℂ^N × 1, where 𝐤_mt=[cos (θ _mt),sin (θ _mt)]^ T denotes the wave vector, and θ _mt is the Direction of Arrival (DOA) of the mth AP to the tth target, 𝐩_mi=[x_mi,y_mi]^ T denotes the position of the ith antenna of the mth AP, and λ denotes the signal wavelength. The channel between the tth target and the kth UE can be expressed as h_ T_t, U_k = λ _ T_t, U_k q_ T_t, U_k∈ℂ^1 × 1, where λ _ T_t, U_kΔ = d_ T_t, U_k^ - α, d_ T_t, U_k is the distance between between the tth target and the kth UE. Similarly the exact location of the target is unknown, d_ T_t, U_k^∼ C N(d̅_ T_t, U_k^, q_ T_t, U_k∼ C N(0,1) ∈ℂ^1 × 1 is the small scale decay. Therefore, the aggregation channel between the mth AP and the kth UE can be expressed as 𝐡_mkΔ = 𝐡_ A_m, U_k + ∑_t ∈𝒯α _t𝐡_ A_m, T_t𝐡_ T_t, U_k, which consists of a target-independent channel and T NLOS paths reflecting off the target, where α _t is the reflection coefficient of the target t. The aggregation channel between APs can be represented as 𝐡_ A,mnΔ = 𝐡_ A_m, A_n + ∑_t ∈𝒯α _t𝐡_ A_m, T_t𝐡_ T_t, A_n, where 𝐡_ A_m, A_n=λ _ A_m, A_n𝐪_ A_m, A_n∈ℂ^N × Ndenotes the random scattering channel between the mth AP and the nth AP, 𝐡_ T_t, A_n = λ _ T_t, A_n𝐪_ T_t, A_n denotes the channel between the tth target and the nth UL AP, λ _ T_t, A_n=λ _ A_n, T_tdenotes the large-scale decay, 𝐪_ T_t, A_n denotes the steering vector between the tth target and the nth AP satisfying 𝐪_ T_t, A_n∼ C N(𝐪̅_ T_t, A_n,χ _ T_t, A_n^2 𝐈_N) where 𝐪̅_ T_t, A_n = [1,e^j2π𝐤^ T_tn𝐩_n1/λ,...,e^j2π𝐤^ T_tn𝐩_nN/λ] ∈ℂ^1 × N, 𝐤_tn=[cos (ϕ_nt),sin (ϕ_nt)]^ T denotes the wave vector, ϕ _tn is the Direction of Departure (DOD) from the tth target and the nth AP. The aggregation channel between UEs can be represented as h_ I,u,lΔ = h_ U_u, U_l + ∑_t ∈𝒯α _t h_ T_t, U_u h_ T_t, U_l, where h_ U_u, U_l=λ _ U_k, U_u q_ U_k, U_u∈ℂ^1 × 1 denotes the random scattering channel between UE u and UE k. §.§ Channel estimation In this section, we perform the channel estimation for the UL and DL pilot training phase. In the UL pilot training phase, we estimate the channel between all UEs and APs for beamforming and data decoding. In the DL training phase, the channel between the DL APs and DL UEs and the channel between the DL APs and UL APs are estimated. Since all APs are connected to the CPU, the UL AP knows the signal sent by the DL AP. At the same time, it is assumed that the sensing signals sent by the DL APs are known to the DL UEs. Therefore, the CLI between the DL APs and UL APs and the CLI caused by the sensing signal to DL UEs can be partially eliminated by reconstructing the interfering signal when the channel is estimated in the DL pilot training phase. §.§.§ Autocorrelation Matrix/Coefficient The aggregation channel differs from the conventional channel in that it considers the presence of the target to be sensed in the scene, resulting in changes to the correlation matrix or correlation coefficient of the channel. To facilitate the subsequent channel estimation, the following theorem provides the autocorrelation matrix/coefficient for each aggregated channel in the system. The autocorrelation matrix/coefficient of all channels present in the NAFD-based system can be expressed as 𝔼[𝐡_mk𝐡_mk^ H] = λ _ A_m, U_k^2𝐈_N + ∑_t ∈𝒯α^2_tζ_ A_m, T_tγ_ T_t, U_k=ϕ_ml, 𝔼[ 𝐡_ A,mn^𝐡^ H_ A,mn]= λ _ A_m, A_n^2𝐈_N + ∑_t ∈𝒯α _t^2ζ_ A_m, T_tζ_ T_t, A_n= ϕ^ A_mn, 𝔼[ | h_ I,u,l|^2] = λ _ U_u, U_l^2 + ∑_t ∈𝒯α _t^2γ_ U_u, T_tγ_ T_t, U_l= ϕ^ u_ul, where ζ_a,b=(d̅_a,b^ -2α + σ _a,b^2)( 𝐪̅_a,b𝐪̅_a,b^ H + χ _a,b^2 𝐈_N), γ_a,b=d̅_a,b^ -2α + σ _a,b^2, a and b represent the corresponding subscript types respectively. Please refer to Appendix <ref>. §.§.§ Uplink Pilot Training First, in the UL pilot training phase, the UE sends UL pilot to APs, the pilot sent by the UE k is denoted as √(τ _ p)φ _k∈ℂ^τ _ p× 1 and φ _k^2 = 1, the pilot signal received at the mth AP is denoted as 𝐘_ up,m = ∑_k ∈κ√(p_ pτ _ up)𝐡_mkφ _k^ H + 𝐍_ up,m. The received signal can be projected on φ as 𝐲_ up,mkΔ = √(p_ pτ _ up)h_mk + 𝐍_ up,mφ _k∈ℂ^N × 1, The MMSE channel estimation technique is employed for direct estimation of the aggregation channel, the estimated channel is denoted as 𝐡̂_mk, and the channel estimation error is denoted as 𝐞_mk, which satisfies 𝐡_mk=𝐡̂_mk+𝐞_mk. Then the autocorrelation matrix of the estimated channel is expressed as 𝔼[𝐡̂_mk𝐡̂^ H_ mk] =√(p_ pτ _ p)𝐂_ mkϕ_mkΔ =𝐑̂_mk, where 𝐂_ mk=√(p_ pτ _ p)ϕ _mk/p_ pτ_ pϕ_mk+𝐈_p, and the autocorrelation matrix of the estimation error is expressed as 𝔼[𝐞_ mk𝐞^ H_ mk]=ϕ_mk-𝐑̂_mkΔ =θ_mk. §.§.§ Downlink Pilot Training During the DL pilot training phase, the AP sends the DL pilots. All the DL UEs can estimate the channel between the DL APs and DL UEs. The estimation method is similar to the UL pilot transmission stage, so it will not be described here. In addition, the UL APs can estimate the channel between the APs to eliminate the CLI. The UL APs transmit the received signal to the CPU through the backhaul link. The pilot sent by the DL AP m is denoted as √(τ _ p)φ _m∈ℂ^τ _ p× N, and the signal received by the CPU can be expressed as 𝐘_ dp = ∑_m ∈ℳ _∑_n ∈ℳ_√(p_ pτ _ dp)𝐠_ A,mnφ^ H_m + 𝐍_ dp, where 𝐠_ A,mnΔ = x_ d,mx_ u,n𝐡_ A,mn indicates the channel between DL AP m and UL AP n, 𝐍_ dp∼ C N(0,σ _ dp^2𝐈) ∈ℂ^N × N. Similarly, the estimated channel between the APs can be expressed as 𝐠̂_ A,mn, satisfying 𝐠_ A,mn=𝐠̂_ A,mn +𝐞_ A,mn, where 𝐞_ A,mn denotes the channel estimation error. Then the autocorrelation matrix of the estimated channel is expressed as 𝔼[𝐠̂_ A,mn𝐠̂^ H_ A,mn] =√(p_ pτ _ p)𝐂^ A_mnϕ_mnΔ =𝐑̂^ A_mn, where 𝐂^ A_mn=√(p_ pτ _ p)ϕ^ A_mn/p_ pτ_ pϕ^ A_mn+𝐈_p, and the autocorrelation matrix of the estimation error is expressed as 𝔼[𝐞_ A,mn𝐞^ H_ A,mn]=ϕ^ A_mn-𝐑̂^ A_mkΔ =θ^ A_mk. §.§ Communication Signal Model To determine the operating modes of the APs, two binary assignment vectors, 𝐱_ u,𝐱_ d∈{0,1}^M × 1, are used to formulate the mode selection signal model. Specifically, if AP i(j) is used for UL reception (DL transmission), x_ u,i( x_ d,j) is assigned a value of 1, otherwise it is set to 0. In this paper, it is assumed that all antennas belonging to the same AP operate in the same mode and that each AP can only be designated as either UL or DL, i.e., x_ u,i + x_ d,i = 1. §.§.§ Downlink Signal Model For DL transmission, in each time slot, the DL AP sends a signal to the DL UE. The DL signal sent by the mth AP is denoted as 𝐱_m = ∑_k ∈𝒦 _dl√(p_ dl,k)𝐰_mk^ c s_ dl,k^_ + ∑_t ∈𝒯√(p_ s,t)𝐰_mt^ sφ _m,t_∈ℂ^N × 1, where p_ dl,k denotes the communication power of the kth DL UE, 𝐰_mk^ c denotes the beamforming matrix of the communication signal, and s_ dl,k is the communication signal transmitted to the kth DL UE satisfying 𝔼 [ s_ dl,k^ H s_ dl,k^] = 1, p_ s,t denotes the power of the sensed signal assigned to the tth sensing target, 𝐰_mt^ s denotes the beamforming matrix of the sensed signal, and φ _m,t denotes the sensing signal sent by the mth AP to perceive the tth target, satisfying 𝔼 [ φ_m,t^ Hφ _m,t^] = 1. Then the signal received by the lth DL UE can be expressed as y_l^ dl = ∑_m ∈ℳ _x_ d,m𝐡_ml^ H𝐱_m + ∑_u ∈𝒦 _ul√(p_ ul,u) h_ I,u,ls_ ul,u^ + n_ dl, where s_ ul,u^ denotes the UL data signal of the UL UE, p_ ul,u denotes the data transmission power of the uth UL UE, and n_ dl∼ C N(0,σ^2_ dl) is the DL additive white Gaussian noise(AWGN). For DL communication, the CLI includes the interference of sensing signals sent by the DL ap to the DL UE, i.e., sensing-to-communication CLI and the interference of UL data signals to the DL UEs, i.e., UL-to-DL CLI. Assuming the sensing signal is known to the DL UEs, since the channel estimation has been performed during the pilot training phase, the sensing-to-communication CLI can be partially mitigated by reconstructing the sensing signal. Considering the channel estimation error, the remain received signal is expressed as y_l^ dl = 𝒟_l + 𝒩_l^ dl + 𝒩_l^ error + 𝒩_l^ CLI-s + 𝒩_l^ CLI-c + n_ dl, where 𝒟_lΔ = ∑_m ∈ℳ _^√(p_ dl,l)𝐡̂^ H_ dl,ml𝐰_ml^ cs_ dl,l denotes the desired effective DL signal, 𝐡̂^_ dl,mlΔ = x_ d,m𝐡̂_ml^ denotes the DL estimated channel between AP m and UE l, 𝒩_l^ dlΔ = ∑_l' ∈𝒦 _ dl\{ l}^∑_m ∈ℳ _^√(p_ dl,l')𝐡̂_ dl,ml^ H𝐰_ml'^ cs_ dl,l denotes the communication interference from other DL UEs to the lth UE, 𝒩_l^ errorΔ = ∑_k ∈𝒦_ dl∑_m ∈ℳ _^√(p_ dl,k)𝐞_ dl,ml^ H𝐰_mk^ cs_ dl,l denotes the interference generated by the DL channel estimation error, 𝐞_ dl,mlΔ =x_ d,m𝐞_ml denotes the DL channel estimation error of the channel between AP m and UE l, 𝒩_l^ CLI-sΔ = ∑_m ∈ℳ∑_t ∈𝒯√(p_ s,t)𝐞_ dl,ml^ H𝐰_mt^ sφ _m,t denotes the residual sensing-to-communication interference, 𝒩_l^ CLI-cΔ = ∑_u ∈𝒦 _ ul√(p_ ul,u) h_ I,u,ls_ ul,u denotes the UL-to-DL CLI. §.§.§ Uplink Signal Model For UL transmission, each UL UE sends a signal, while the DL APs send communication and sensing signals, so the signal received by the nth AP is denoted as 𝐘_n^ ul = ∑_k ∈𝒦 _ ulx_ u,m𝐡_mks_ ul,k + ∑_m ∈ℳ _𝐠_ A,mn𝐱_m + 𝐧_ ul,n, where s_ ul,k denotes the UL data, and 𝐧_ ul,n∼𝒞𝒩(0,σ^2_ ul𝐈) denotes the UL AWGN. The signals received from each AP are processed centrally at the CPU. For UL communication, the CLI includes the interference of the sensing and communication signals sent by the DL APs to the UL APs, i.e., DL-to-UL CLI. Since signals transmitted by the DL APs are familiar to the UL APs, and the channels between the DL APs and the UL APs are estimated during the DL pilot training phase, it is possible to reconstruct the signals transmitted by the DL APs. Therefore, the DL-to-UL CLI can be partially mitigated. Consequently, the signal of UE u at the CPU can be mathematically represented as r_u^ ul = 𝒟_u + 𝒩_u^ ul + 𝒩_u^ error + 𝒩_u^ CLI-c + 𝒩_u^ CLI-s + 𝒩_u^ noise, where 𝒟_uΔ = ∑_n ∈ℳ^𝐯_nu^ H𝐡̂_ ul,mu√(p_ ul,u)s_ ul,u denotes the desired signal of the uth UE, 𝐡̂_ ul,muΔ = x_ u,n𝐡̂_mu denotes the UL estimated channel between the nth AP n and the uth UE, 𝐯_nu^ H denotes the receiver vector, 𝒩_u^ ulΔ = ∑_u' ∈𝒦_ ul\{ u}^∑_n ∈ℳ _^∑_t ∈𝒯^𝐯_nu^ H𝐡̂_ ul,mu√(p_ ul,u')s_ ul,u' denotes the UL-interference caused by the other UL UEs, 𝒩_u^ errorΔ = ∑_k ∈𝒦 _ ul∑_n ∈ℳ_∑_t ∈𝒯^x_ u,n𝐯_nu^ H𝐞_ ul,nk√(p_ ul,k)s_ ul,k denotes the interference term due to the UL channel estimation error, 𝐞_ ul,nkΔ =x_ u,n𝐞_nk denotes the UL channel estimation error between the nth AP and the kth UL UE, 𝒩_u^ CLI-cΔ = ∑_m ∈ℳ _^∑_n ∈ℳ _^∑_j ∈𝒦 _ dl^𝐯_nu^ H𝐞_ A,mn√(p_ dl,j)𝐰_mj^ cs_ dl,j denotes the residual DL-to-UL CLI term of the communication signal, 𝒩_u^ CLI-sΔ = ∑_m ∈ℳ^∑_n ∈ℳ^∑_t ∈𝒯^𝐯_nu^ H𝐰_nt^s√(p_ s,t)φ _m,t denotes the residual DL-to-UL CLI term of the sensing signal, 𝒩_u^ noiseΔ = ∑_n ∈ℳ _x_ u,n𝐯_nu^ H𝐧_n^ ul denotes the product of UL Gaussian noise and receiver vector. §.§ Sensing Signal Model Analyzing from the sensing point of view, all DL APs send the communication signals together with the sensing signals. Typically, the orientation of the sensing beams are different from that of the communication beams to satisfy the sensing field of view, and the independent communication and sensing sequences provide a high correlation gain to differentiate the radar module whose goal is to estimate the position of the target. Moreover, in CF network, the APs serving a UE all transmit the data signals of that UE which leads to simultaneous same-frequency interference<cit.>. This will seriously limit the sensing performance if the communication signals are reused for sensing. Therefore, in this paper, we use the reflected echoes of the sensing beams to sense the unknown target. Therefore, the communication signals are interference to sensing. In the NAFD-based ISAC, all APs are connected to the CPU. As a result, the DL communication signals can be reconstructed by estimating the channels between the APs. This enables the signal received by the UL APs to be subtracted from the DL communication signals, while a similar operation can be performed for the UL communication signals. Assuming the UL communication data can be decoded correctly. Once the UL communication signals are extracted, their effect can be subtracted from the received signal. Therefore, following the above signal interference cancellation, the remaining signal at the AP n can be expressed as 𝐲_n = ∑_m ∈ℳ∑_ t ∈𝒯^√(p_ s,t)𝐠_ A,mn𝐰_mt^ sφ _m,t^+ 𝐲̅_ dl,n+𝐲̅_ ul,n+𝐧_s, where 𝐲̅_ dl,n=∑_m ∈ℳ _^∑_j ∈𝒦 _ dl^𝐞_ A,mn^√(p_ dl,j)𝐰_mj^ c denotes the DL communication interference cancellation residual term, 𝐲̅_ ul,n=∑_k ∈𝒦 _ ul√(p_ ul,k)𝐞_ ul,nk denotes the UL communication interference cancellation residual term, 𝐧_s∼𝒞𝒩(0,σ^2_s) denotes the sensing AWGN. Expanding the aggregated channel of the above equation yields the NLOS radial component reflected by the target and the target-independent multipath component, commonly known as clutter. Clutter is typically caused by scattering from permanent objects and temporary obstacles<cit.>. The accurate modeling of clutter can be achieved through the long-term observation of the environment<cit.>. Therefore, we assume that the clutter signal can be perfectly canceled, and the remaining received signal is denoted as 𝐲_n = ∑_m ∈ℳ_∑_ t ∈𝒯^√(p_ s,t)𝐠̅^_ A,mn𝐰_mt^ sφ _m,t^+𝐳, where 𝐠̅^_ A,mnΔ =∑_ i ∈𝒯^α _ix_ d,mx_ u,n𝐡_ A_m, T_i𝐡_ T_i, A_n indicates the channel that the DL AP m reflects through the targets to the UL AP n, 𝐳 is the combination of sensing AWGN and residual UL and DL interference, modeled as a complex Gaussian distribution with covariance <cit.>, and is denoted as cov [𝐳]=[𝐲̅_ dl,n^2+𝐲̅_ ul,n^2+σ^2_ s]𝐈_N×1. The remaining signals are individually mapped to their corresponding sensing symbols, and is denoted as 𝐲_m,n,t =√(p_ s,t)α _t𝐡_ A_m, T_t𝐡_ T_t, A_n𝐰_mt^ s+𝐳φ _m,t^ H. The sensing processing is performed based on the above formula. § CLOSED-FORM EXPRESSION DERIVATION In this section, we derive the closed-form expressions for the communication rate and sensing performance of the system. We utilize the maximum ratio transmission (MRT) beamforming, i.e. 𝐰_ml^ c= ε _l^𝐡̂_ml, the maximal ratio combining (MRC) receiver, i.e. 𝐯_nu= ε _u^𝐡̂_nu, and conjugate-aware sensing beamforming<cit.>, i.e. 𝐰_mt^ s= √(1/N)𝐪_ A_m, T_t in the system, where ε _l^ and ε _u^ are the normalization coefficient. §.§ Downlink Communication Rate The DL communication rate of UE l can be expressed as R_l^ dl = (1-τ_ dp+τ_ up/τ)𝔼[ log_2 (1 + γ _l^ dl)], where γ^ dl_l represents the SINR of the signal received by the DL UE, which can be expressed as γ _l^ dl=| 𝒟_l|^2/| 𝒩_l^ dl|^2 + | 𝒩_l^ error|^2 + |𝒩_l^ CLI-s|^2 + |𝒩_l^ CLI-c|^2+ σ _dl^2. As indicated by Eq. (<ref>), the DL communication rate formula involves an expectation, which can be resolved by incorporating the expectation into γ _l and solving for each expectation separately. The derivation result is presented in the following theorem. Considering the interference cancellation mechanism, the DL communication rate for the NAFD-based ISAC system with MRT beamforming and MRC receiver can be derived as a closed-form expression, given by R_l^ dl = (1-τ_ dp+τ_ up/τ) log_2 (1 + 𝔼[γ _l^ dl]), where γ _l represents the SINR of the signal received by the DL UE, which can be expressed as 𝔼[γ _l^ dl]=p_ dl,lε _l^2[∑_m ∈ℳ _^x_ d,mk̂_mlθ̂_ml^2 + (∑_m ∈ℳ _^x_ d,mk̂_mlθ̂_ml^ )^2]/ℐ^ inter_ dl,l+ℐ^ e_ dl,l+ℐ^ s_ dl,l+ℐ^ CLI_ dl,l+σ^2_ dl, where ℐ^ inter_ dl,l=∑_l' ∈𝒦 _ dl\{ l}^∑_m ∈ℳ _^p_ dl,l'ε^2_ dl,l' tr(𝐑̂^ dl_ml𝐑̂^ dl_ml'), ℐ^ error_ dl,l=∑_k ∈𝒦 _dl∑_m ∈ℳ _p_ dl,kε^2_ dl,k tr(𝐑̂^ dl_mkθ^ dl_ml), ℐ^ sense_ dl,l=∑_t ∈𝒯∑_m ∈ℳ _p_ s,t/N tr(ψ _mtθ^ dl_ml), ℐ^ CLI_ dl,l=∑_u ∈𝒦 _ ulp_ ul,uϕ^u_ul. Please refer to Appendix <ref>. §.§ Uplink Communication Rate The communication rate of UL UE u is as follows R_u^ ul = (1-τ_ dp+τ_ up/τ)𝔼[ log_2 (1 + γ _u^ ul )] , where γ _u^ is the SINR of the UL UE u, expressed as γ _u^ ul =|𝒟_u|^2/|𝒩_u^ ul|^2 +| 𝒩_u^ e|^2 + |𝒩_u^ CLI-c|^2 + |𝒩_u^ CLI-s|^2 + | 𝒩_u^ n|^2. Similarly, by incorporating the expectation into γ _u^ ul and solving each expectation separately, we can derive the closed-form expression for the UL communication rate of the system. The results of the derivation are given in the following theorem. Considering the interference cancellation mechanism, the UL communication rate for the NAFD-based ISAC system with MRT beamforming and MRC receiver can be derived as a closed-form expression, given by R_u^ ul = (1-τ_ dp+τ_ up/τ) log_2 (1 + 𝔼[γ _u^ ul]), where γ _u^ is the SINR of the UL UE u, expressed as 𝔼[γ _u^ ul]=p_ ul,uε _u^2[∑_n ∈ℳ _^x_ u,nk̂_nu^θ̂_nu^2 + (∑_n ∈ℳ_^x_ u,nk̂_nu^θ̂_nu^)^2]/ℐ^ inter_ ul,u+ℐ^ e_ ul,u+ℐ^ CLI-c_ ul,u+ℐ^ CLI-s_ ul,u+ℐ^ n_ ul,u, where ℐ^ inter_ ul,u=∑_u' ∈𝒦_ ul\{ u}^∑_n ∈ℳ _p_ ul,uε _ ul,u^2 tr(𝐑̂^ ul_nu𝐑̂^ ul_nu'), ℐ^ CLI-c_ ul,u=∑_m ∈ℳ _^∑_n ∈ℳ _^∑_j ∈𝒦 _ dl^ρ_ c,uj tr(θ^ A_mn𝐑̂^ ul_nu𝐑̂^ dl_mj), ℐ^ CLI-s_ ul,u=∑_m ∈ℳ _^∑_n ∈ℳ _^∑_t ∈𝒯p_ ul,uε_ ul,u^2/N tr(θ_mn^ A𝐑̂^ ul_nuψ_mt), ℐ^ error_ ul,u=∑_k ∈𝒦 _ ul∑_n ∈ℳ_p_ ul,kε_ ul,u^2 tr(𝐑̂^ ul _nuθ̂^ ul_nk), ℐ^ noise_ ul,u=σ^2_ ulε_ ul,u^2∑_n ∈ℳ _ tr(𝐑̂^ ul_nu). Please refer to Appendix <ref>. §.§ Sensing Localization Estimation Rate To standardize the performance evaluation of the ISAC system, we adopt the LER, which is analogous to the communication rate, to evaluate the sensing performance of the system. Assuming that the sensing targets are spatially separated, the LER of the system can be expressed as the sum of the LER of each target <cit.>. Specifically, the LER of the target t can be obtained by solving the following R^ est_t=(1-τ_ dp+τ_ up/τ)𝔼[log _2(1 + σ _loc^2/ CRLB_ loc,t)], where σ _loc^2 is the uncertainty of the target location and CRLB_ loc,t is the CRLB for location estimation. Typically, the estimation of location is decomposed into the estimation of the angle of the direction of arrival θ_n,mt, the angle of the direction of departure ϕ _m,tn, and the distance d, respectively. Thus, the CRLB is also decomposed into CRLB_ loc,t=∑_m ∈ℳ_ dl∑_n ∈ℳ_ ulσ^2_d_m,n,t+σ^2_θ_n,mt+σ^2_ϕ_m,tn/M_ dlM_ ul, where σ^2_d_m,n,t denotes the distance CRLB of target t sensing based on 𝐲_m,n,t in Eq. (<ref>), σ^2_θ_n,mt denotes the CRLB of DOA between DL AP m and target t sensing based on 𝐲_m,n,t , σ^2_ϕ_m,tn denotes the CRLB of DOD between target t and UL AP n sensing based on 𝐲_m,n,t. The CRLB closed-form expression for each estimated parameter is given by the following theorem. In the case of large antenna arrays, the CRLB closed-form expression for each estimated parameter in the NAFD-based ISAC system is given by σ^2_d_m,n,t= σ^2_ dl,n+σ^2_ ul,n+σ^2_ s/α_tp_ s,tπ ^2(Δ f/c)^2η _m,n,t^2N^2𝐰_m,t^ s^2 σ^2_θ_n,mt= λ ^2(σ^2_ dl,n+σ^2_ ul,n+σ^2_ s)/4α_tp_ s,tπ ^2η _m,n,t^2(NB_m,t - A_m,t^2)𝐰_m,t^ s^2, σ^2_ϕ_m,tn= λ ^2(σ^2_ dl,n+σ^2_ ul,n+σ^2_ s)/4α_tp_ s,tπ ^2η _m,n,t^2(NB_t,n - A_t,n^2)𝐰_m,t^ s^2, where A_m,t = ∑_i = 1^N [y_micos (θ _mt)- x_misin (θ _mt)], B_m = ∑_i = 1^N [y_micos (θ _mt)- x_misin (θ _mt)] ^2, A_m,t and B_t,n are related to the antenna array on the transmitter, A_t,n = ∑_i = 1^N (y_nicos (ϕ _tn)- x_nisin (ϕ _tn)), B_t,n = ∑_i = 1^N (y_nicos (ϕ _tn)- x_nisin (ϕ _tn)) ^2, A_t,n and B_t,n are related to the antenna array on the transmitter, σ^2_ dl,n=∑_i ∈ℳ _^∑_j ∈𝒦 _ dl^p_ dl,jε _ dl,j^2 tr(θ_mn^ A𝐑̂^ dl_ij) denotes the DL communication signal interference residual power at the UL AP n, σ^2_ ul,n=∑_k ∈𝒦 _ ulp_ ul,k tr(𝐑̂^ ul_nk) denotes the UL communication signal interference residual term power at the UL AP n. Please refer to Appendix <ref>. § AP DUPLEX MODE OPTIMIZATION §.§ Problem Formulation Fig.<ref> shows the simulation results for communication and sensing performance with a fixed total number of APs and varying numbers of sensing UEs and targets, as well as different UL and DL ratios of the APs. The results demonstrate that the duplex mode selection of APs has a significant impact on both the communication and sensing performance of the system. Based on the communication performance expression derived, it is evident that the communication rates of both DL and UL are impacted by the CLI between the AP and the UEs. To mitigate the CLI, the CPU can operate APs with different duplex modes by spatially isolating them. However, it should be noted that the sensing performance is also linked to the distance from the DL APs to the UL APs, as revealed by the derived CRLB expression in Theorem <ref>. The use of spatially isolated duplex modes may lead to lower sensing signal energy received by the UL APs, which in turn affects the LER. Thus, it is imperative to strike a balance between communication performance and sensing performance. In this section, we use the theory of multi-objective optimization to propose the following MOOP. The first optimization objective is to maximize the sum communication rate, denoted as 𝒫_1: 𝐱_ u,𝐱_ dmax  f_1=∑_l ∈𝒦 _ dlR_l^ dl + ∑_u ∈𝒦 _ ulR_u^ ul, s.t. 𝐱_ u+𝐱_ d=𝐈_M. Each AP can be either UL or DL at any time, so the duplex mode selection vector satisfies the constraint in Eq. (<ref>). Mathematically, the optimization problem for maximizing LER can be expressed as 𝒫_2: 𝐱_ u,𝐱_ dmax  f_2 = ∑_t ∈𝒯R_ loc,t^ est, s.t. (<ref>). Building on the previous analysis, given the conflict between the two objectives, we propose a MOOP to investigate the trade-off between them. This optimization problem can be mathematically formulated as 𝒫_3: 𝐱_ u,𝐱_ dmax  𝐟 = [ f_1,f_2]^T, s.t. (<ref>). The objective of 𝒫_3 is to optimize the combined performance of communication and sensing. To tackle this MOOP, the concept of Pareto optimality can be utilized. An allocation policy is considered Pareto optimal if no other policy exists that can enhance one objective without compromising others. Therefore, a MOOP may have multiple Pareto optimal solutions. While these solutions are selected based on their dominance relationships, a closer examination of these optimal solutions reveals that they correspond to distinct multi-objective weights. By adjusting these weights, it is feasible to regulate the tradeoff between different objectives and investigate various regions of the Pareto boundary. We aim to develop an algorithm that can efficiently identify a Pareto-optimal solution for 𝒫_3. §.§ DQN-based Solution The optimization problem discussed above is classified as NP-hard, indicating its high computational complexity. To address this challenge, Reinforcement Learning (RL) has emerged as a promising approach that involves an intelligent agent perceiving the state of the environment and making decisions based on feedback to maximize rewards. The DQN algorithm is a type of RL that combines deep learning and Q-learning<cit.>. DQN utilizes a neural network to learn Q-value functions, enabling it to handle complex environments. Subsequently, we shall present the implementation of the DQN algorithm in the context of ADMO within the NAFD-based ISAC system. This approach involves four key components: * Agent: A centralized agent was established on the CPU side to facilitate the selection of duplex mode for each AP by monitoring the current state of the environment. * State: The current state of the agent includes a set of M APs, denoted as {x_1,x_2,...,x_M}. The state of each AP is represented in binary format, where x_m of 0(1) indicates that the AP m is operating in the UL(DL) mode. * Action: As there are only two operating modes for each AP, the system can take actions to change the original operating mode, either from UL to DL or from DL to UL. Consequently, these actions can be represented as a Markov action space denoted as A= {a_1, a_2,..., a_M} where a_t represents the action of switching the operating mode of AP m from its original mode to the other mode. * Reward: Define the reward as r_tΔ = ω _cf_1 + ω _sf_2, where ω _c and ω _s denote the weights of the two targets respectively. The working mode of each AP can be switched between UL mode and DL mode by the policy adopted by the agent. The DQN algorithm enhances its efficiency and applicability by adopting a value function-based approach instead of directly storing Q-values. To improve the training stability, the Double DQN structure is utilized in this section, which consists of two deep neural networks(DNN): A Q-evaluate network and a Q-target network. The Q-evaluate network is responsible for selecting actions, while the Q-target network computes the Q-value of the target. To minimize the correlation between the estimate of the Q-value of the target and the actual value, and to mitigate the over-estimation caused by the correlation, the parameters of the Q-target network are periodically copied from the Q-evaluate network. The Q-value update strategy for DQN is Q(s_t,a_t;θ) =r_t + γ *max_a_t+1 Q(s_t+1,a_t+1;θ), where Q (s_t,a_t;θ) is the Q value corresponding to taking action a_t in state s_t when the Q-evaluate network parameter is θ. The DQN algorithm framework is shown in Fig.<ref>. First, the agent observes the current state s_t and inputs s_t into the Q-evaluate network. The Q-evaluate network outputs Q (s_t,a_m;θ) for all possible actions a_m in s_t. The action with the highest Q value, a_t=max Q (s_t,a_m;θ) is selected as the next action. After obtaining the action a_t, the reward r_t can be calculated using Eq. (<ref>). During training, the current action a_t, the state s_t, the reward r_t, and the next state s_t+1 are stored as a set of experiences e={a_t,s_t,r_t,s_t+1} in the experience replay buffer 𝒟_ replay. After a sufficient number of experiences have been stored, a set of experience values 𝒟_ min is randomly sampled from 𝒟_ replay for updating the parameters θ in the Q-evaluate network. The DQN algorithm adopts experience replay to prevent sample correlation and overfitting issues caused by selecting a specific set of experiences. The network parameters are updated using the gradient descent method θ = θ - α *∇ _θL(θ), where α is the learning rate and L (θ)is the loss function, which is calculated by L(θ) = 𝔼_e_i ∼𝒟[(y_i - Q(s_i,a_i;θ ))^2], where y_i = r_i + γ *max_a_t+1 Q(s_i+1,a_i+1;θ_ target) , θ_ target refers to the parameters of the Q-target network. When selecting actions, DQN typically employs a ε-greedy strategy, which selects a random action with a certain probability ε to explore the environment, and selects the action with the highest predicted Q value with a probability of 1-ε to exploit existing information. In the context of the DQN algorithm, two important concepts are introduced, namely the episode and the time step t_ max. An episode is a sequence of actions and state transitions that start from the initial state of the environment and continue until the intelligent agent reaches the termination state. The behaviors of the agent between Episodes are considered independent, and the agent can learn and improve its behaviors through each episode. t_ max is typically used to represent the number of time steps accumulated before a gradient update is performed. It is used to control the length of the experience sequence stored and sampled in the experience replay, as well as to determine when to update the Q-target network. The pseudocode of the algorithm can be found in Algorithm <ref>. §.§ Algorithm Complexity Analysis The exhaustion method exhibits an exponential growth in computational complexity. In contrast, the computational complexity of DQN mainly arises from matrix multiplication in the forward and backward propagation of the deep neural network and the number of training iterations. It can be approximated as O (2 E_ maxt_maxD_ min(M n_0 + ∑^m-2_i=1 n_i n_i+1+ 2M n_m)), where M is the number of AP, the input layer of the neural network is M, the output layer is 2M, D_ min represents the size of the min-batch, t_max represents the number of iterations, and m represents the number of hidden layers in the neural network, with n_i denoting the number of neurons in the corresponding hidden layer. Notably, the computational complexity of the DQN is linearly related to M, resulting in lower complexity and better scalability. § SIMULATION RESULT In this section, we present a comparative analysis of the proposed NAFD-based ISAC system with the existing ISAC system design. Furthermore, the accuracy of the closed-form expression derived in the previous section is verified via Monte Carlo simulation. Finally, based on the closed-form expression results, a comparative evaluation of the proposed DQN-based algorithm with four commonly used algorithms in ADMO is presented. We consider a scenario, where all UEs and targets are assumed to be randomly distributed in a 300 m×300 m square area with a total of M=8 APs, and consider each AP configured with N=20 antennas. There are 8 active UEs in the system, K_ dl=K_ ul=4. Consider the channel to be quasi-static and the path loss exponent of the channel α=3.7<cit.>. All UL UEs send the same data power i.e. {p_ ul,i}_∀ i ∈κ _ ul = p_ ul=0.1  W. For DL APs, the power is allocated equally based on the number of UEs and the number of targets to be sensed i.e.{p_ dl,i}_∀ i ∈𝒦_ dl = {p_ s,t}_∀ t ∈𝒯=0.5  W. The reflection coefficient is set as {α_t}_∀ t ∈𝒯_=0.8<cit.>. The signal bandwidth is set as Δ f=10 MHz and the coherent block length is set as τ=100<cit.>. The AWGN power for DL and UL communication is set to σ _ dl^2 = σ _ ul^2 = σ^2_ s=-113 dB<cit.>. §.§ Results of System Performance Comparison In this part, we will compare the performance of the NAFD-based ISAC system proposed in this paper with two existing ISAC systems. The specific system design is as follows * Time-division duplex (TDD)-based ISAC: In a TDD-based ISAC system, the coherent time block is divided into three separate modes: sensing, UL communication, and DL communication, which are carried out using the time-division duplex mode<cit.>. Symbol lengths for each mode are denoted as τ _s, τ _ ul and τ _ dl, respectively. * Co-frequency co-time full duplex (CCFD)-based ISAC: CCFD-based ISAC system employs a single base station to achieve both sensing and simultaneous UL and DL communication<cit.>. To ensure a fair comparison, the AP deployment location for TDD-based ISAC is consistent with NAFD-based ISAC and is uniformly distributed on a circular area with a radius of 200 m. Specifically, half of the APs are set to UL mode and the other half are set to DL mode. Conversely, for CCFD-based ISAC, a single base station is deployed at the center of the circular. The self-interference cancellation technique is employed in the CCFD-based ISAC system to achieve a self-interference power σ _ self^2 = - 85 dB<cit.>. Fig. <ref> presents a comparison and analysis of the sensing performance between single-station independent sensing (CCFD-based ISAC) and multi-static cooperative sensing when only one target is present in the sensing scene. The color scale indicates the LER, with darker colors indicating poorer performance and brighter colors indicating better performance. The results show that single-station independent sensing performs well only near the base station, while multi-static cooperative sensing exhibits better performance in regions near each AP. This highlights the advantage of decentralized AP in achieving larger sensing ranges. Therefore, multi-static cooperative sensing is expected to become a major trend in future developments. Fig. <ref> presents the changes in communication and sensing performance with varying numbers of antennas for three different types of ISAC systems. The sensing symbol length of the TDD-based ISAC system is set to τ_ s=20 and τ_ s=50. Fig. <ref> illustrates the communication performance, where the NAFD-based ISAC system outperforms the CCFD-based and TDD-based ISAC systems. Although the communication performance of the CCFD-based ISAC system improves with the increase in the number of antennas, it still lags behind the NAFD-based ISAC system. The longer the sensing symbol length for the TDD-based ISAC system, the shorter the relative communication symbols, leading to poorer communication performance, whereas the NAFD-based ISAC system is not affected by this issue. Fig. <ref> illustrates the sensing performance, where the TDD-based ISAC system's perceptual capability depends only on the length of the sensing symbols, as its sensing is carried out individually. The overall sensing performance of all systems increases with an increase in the number of antennas, as it enhances the signal energy and improves the directivity of the sensed beam. Nevertheless, the sensing performance of the CCFD-based ISAC system is impaired by self-interference. The communication and sensing performance of the NAFD-based ISAC system are mutually interfering, resulting in a degradation of sensing performance that is proportional to the number of antennas. The simulation results demonstrate that the NAFD-based ISAC system with interference cancellation significantly improves communication performance compared to other systems and can outperform them in terms of sensing performance with the appropriate number of antennas. The sensing performance of the NAFD-based ISAC system can also be improved by using appropriate interference management strategies. §.§ Result of Closed-form Expression Validation In this section, we conduct a validation study of the closed-form expressions for the communication rate and LER through Monte Carlo simulations. Specifically, we analyze the variations of the UL and DL communication rates and position LER obtained from the Monte Carlo simulation with different numbers of antennas. As shown in Fig. <ref>, the results of the closed-form expressions are consistent with the Monte Carlo simulation results, and the outcomes are not affected by changes in the number of antennas. These findings confirm the accuracy of the derived closed-form expressions. §.§ Result of duplex mode selection To determine the optimal network structure for DQN, we conduct a study on the effect of different DNN hyperparameters on the convergence performance of the neural network. The reward results of every 20 episodes are averaged and recorded as a convergence curve during the training process. We compare the convergence of neural networks with four network structures using different hyperparameters. Our results show that the single hidden layer consisting of only 10 neurons, i.e. neure=[10], doesn't converge as well as the other network structures. A double hidden layer with 20 neurons, i.e. neure=[20,20], is found to be more effective. Therefore, we use this network structure in subsequent simulations. Furthermore, we study the impact of two key hyperparameters, namely t_ max and learning rate lr on the convergence performance of the neural network. t_ max denotes the number of steps during each episode, and a larger one leads to faster convergence but also consumes more training time. On the other hand, the learning rate of the neural network parameters determines the magnitude of the weights along the gradient in the process of updating the network parameters. We find that a smaller learning rate could avoid instability during the training process, resulting in better convergence and avoidance of local optima. However, it may also lead to slower convergence and require more iterations to achieve the desired performance. After simulation comparisons, we select the network with the best convergence performance, i.e. a double hidden layer of 20 neurons, with t_max=10 and lr=0.01 for subsequent simulations. In this section, we introduce four baseline algorithms to compare the performance of the proposed DQN-based ADMO algorithm. * Random Selection (RANDOM): The RANDOM method is employed to randomly select APs for allocation, which serves as a baseline for system performance without any ADMO. * Average Selection (AVG): The AVG method is utilized to assign equal numbers of UL APs and DL APs, which demonstrates system performance when the UL and DL modes of the APs are fixed. * Exhaustion (EXU): The EXU method is employed to conduct an exhaustive search, which aims to find the optimal solution by exploring all possible combinations of AP allocation. However, the time complexity of the exhaustive method is typically exponential, which can significantly reduce its efficiency in solving large-scale problems and increase with the size of the problem. * Q-learning<cit.>: The Q-learning method is a table-based RL approach that utilizes a Q-table to store the Q-value of each state-action pair. In the Q-learning-based ADMO algorithm, the agent, states, actions, and rewards are consistent with those in DQN. Fig. <ref> presents the simulated cumulative probability distribution (CDF) of the objective function values for the proposed algorithm and other baseline algorithms. The objective function is calculated using Eq. (<ref>), while the communication and sensing performance weights in Eq. (<ref>) are set to ω _c=0.5 and ω _s=0.5 for algorithm comparison. The performance gap between the RANDOM algorithm and the EXU is analyzed as a benchmark. The results show that, at 0.5 CDF, the algorithm with an average allocation of APs outperforms the RANDOM algorithm by about 38%. This is because the simulation assumes an equal number of UL and DL UEs, resulting in similar communication needs. Therefore, using the average allocation of APs for UL and DL communication can better match UE needs and improve performance. However, the performance gap between the algorithm with the AVG and the EXU algorithm is still significant. Hence, there is room for performance improvement under the fixed AP duplex mode, and dynamic ADMO is necessary. At 0.5 CDF, the Q-learning-based ADMO algorithm improves performance by about 70% compared to the RANDOM algorithm, and the gap with the EXU algorithm is about 25%. The result of the DQN algorithm is almost identical to the result of the EXU algorithm, with a performance gap narrowed down to 4%. Therefore, compared with other algorithms, the DQN-based ADMO algorithm demonstrates superior performance. Fig. <ref> demonstrates the communication and sensing performance corresponding to all the AP duplex modes obtained using the EXU method, which are represented by the blue points in the figure. According to the definition of Pareto solution, a Pareto solution cannot achieve the value of a certain objective function without compromising the values of other objective functions. Therefore, a Pareto solution can be found among the ordinary solutions, as indicated by the black dots in the figure. The Pareto solutions are connected to form a Pareto boundary. Moreover, the red dots in the figure depict the output solutions of DQN when considering different weights of the objectives in Eq. (<ref>), and all the solutions of DQN are either Pareto solutions or approximate Pareto solutions. § CONCLUSION In this paper, we propose a NAFD-based ISAC design that supports simultaneous UL and DL communication in muti-static ISAC system. The design employs the NAFD technique, which can partially eliminate CLI. Subsequently, we derive closed-form expressions for the communication rate and LER under imperfect CSI for evaluating the communication performance and sensing performance, respectively. In terms of optimization, we establish a MOOP for communication and sensing ADMO to obtain a trade-off between communication and sensing. Subsequently, we propose a DQN-based ADMO algorithm that sums multiple objective values with different weights as rewards to solve the MOOP. We compare the proposed design with existing TDD-based ISAC and CCFD-based ISAC as benchmark design. The simulation results show that the NAFD-based ISAC system is able to significantly outperform the other ISAC systems in terms of communication performance with essentially no loss of sensing performance. The simulation results verify that the derived closed-form expressions can fit the Monte Carlo simulated values better. Finally, the DQN-based ADMO algorithm can achieve a performance effect close to that of the EXH method with low complexity, and by adopting different target weights, the DQN algorithm can quickly achieve an effect close to the Pareto boundary. § PRELIMINARY RESULTS We first give two lemmas on gamma distribution and projection properties. If a random vector 𝐱 obeys the distribution 𝐱∼𝒞𝒩(0,σ ^2𝐈_ N), then 𝐱 satisfies 𝐱^ H𝐱∼Γ ( N,σ ^2). If the random vector 𝐲 obeys the distribution y∼Γ (a,b), then 𝔼{𝐲}=ab and var{𝐲}=ab^2<cit.>. If a set of independent random vectors {𝐱_i} and each vector is distributed as Γ (k_i,θ _i), then ∑_i^x_i^ Hx_i∼Γ (k,θ ), where k = (∑_i^k_iθ _i )^2/∑_i^k_iθ _i^2, θ = ∑_i^k_iθ _i^2/∑_i^k_iθ _i^<cit.>. § PROOF OF THE THEOREM 1 In the aggregation channel, the multipath channel and NLOS paths reflected from the target are statistically independent, and NLOS paths reflected from different targets are also independent. Therefore, the autocorrelation matrix for the channel 𝐡_mk between the kth UE and AP m is denoted as 𝔼[ 𝐡_mk𝐡_mk^ H] = 𝔼[ 𝐡_ A_m, U_k𝐡_ A_m, U_k^ H] +∑_t ∈𝒯α^2_t𝔼[| h_ T_t, U_k|^2]𝔼[𝐡_ A_m, T_t𝐡_ A_m, T_t^ H]. According to the definition of the channel, we can calculate that 𝔼[ 𝐡_ A_m, U_k𝐡_ A_m, U_k^ H]=λ _ A_m, U_k^2𝐈_N, 𝔼[ h_ T_t, U_k^2] =d̅_ T_t, U_k^ -2α + σ _ T_t, U_k^2=γ_ T_t, U_k, 𝔼[ 𝐡_ A_m, T_t𝐡_ A_m, T_t^ H] =( d̅_ A_m, T_t_^ - 2α + σ _ A_m, T_t^2)( 𝐪̅_ A_m, T_t𝐪̅_ A_m, T_t^ H + χ _ A_m, T_t^2𝐈_N)=ζ_ A_m, T_t, substituting the result into Eq. (<ref>) gives 𝔼[ 𝐡_mk𝐡_mk^ H]=ϕ_ml. A similar argument can be made to demonstrate that 𝔼[ 𝐡_ A,mn^𝐡^ H_ A,mn]= ϕ^ A_mn and 𝔼[ | h_ I,u,l|^2]=ϕ^ u_ul. This completes the proof. § PROOF OF THEOREM 2 Put expectations into γ^ dl_l. First, we calculate the molecular part 𝔼[ | D_l|^2]=p_ dl,lε _ dl,l𝔼[ |∑_m ∈ℳ _^𝐡̂_ dl,ml^ H𝐡̂_ dl,ml|^2]. According to Eq. (<ref>), 𝔼[ 𝐡̂_ dl,ml𝐡̂_ dl,ml^ H]= 𝐑̂^ dl_ml. Actually, 𝐡̂^ H_ dl,ml𝐡̂_ dl,ml=∑_n = 1^N 𝐡̂^ H_ dl,ml,n𝐡̂_ dl,ml,n, where 𝐡̂_ dl,ml,n denotes the estimated channel from the nth antenna of the AP m to the UE l. It is known that 𝐡̂_ dl,ml∼𝒞𝒩(0,𝐑̂^ dl_ml), so 𝐡̂_ dl,ml,n∼𝒞𝒩(0,[𝐑̂^ dl_ml]_n). According to the Lemma <ref>, 𝐡̂^ H_ dl,ml,n𝐡̂_ dl,ml,n∼( 1,[ 𝐑̂^ dl_ml]_n). Combined with the Lemma <ref>, we can get 𝐡̂^ H_ dl,ml𝐡̂_ dl,ml∼Γ( k̂_ dl,ml,θ̂_ dl,ml) obeying gamma distribution, where k̂_ dl,ml =(∑_n = 1^N[ 𝐑̂^ dl_ml]_n)^2/∑_n = 1^N [ 𝐑̂^ dl_ml]_n^2, θ̂_ dl,ml =∑_n = 1^N [ 𝐑̂^ dl_ml]_n^2/∑_n = 1^N [ 𝐑̂^ dl_ml]_n^. As previously stated in Lemma 2, the distrbution of ∑_m ∈ℳ _ dl^𝐡̂_ dl,ml^ H𝐡̂_ dl,ml can be derived. Combining Lemma 1, we can get 𝔼[ | ∑_m ∈ℳ _^𝐡̂_ dl,ml^ H𝐡̂_ dl,ml|^2] =∑_m ∈ℳ _^k̂_ dl,mlθ̂_ dl,ml^2 + (∑_m ∈ℳ _^k̂_ dl,mlθ̂_ dl,ml^)^2. By substituting the aforementioned equation into Eq. (<ref>), we can derive the numerator term. Next, the procedure involves the identification of individual components in the denominator. Since the channels are orthogonal between different UEs, the DL interference from other UEs can be expressed as 𝔼[ | 𝒩_l^ dl|^2]=∑_l' ∈𝒦 _ dl\{ l}^∑_m ∈ℳ_^p_ dl,l'ε^2_ dl,l'𝔼[| 𝐡̂_ dl,ml^ H𝐡̂_ dl,ml'|^2], where 𝔼[ | 𝐡̂_ dl,ml^ H𝐡̂_ dl,ml'|^2]=𝔼[ 𝐡̂_ dl,ml^ H𝐑̂^ dl_ml'𝐡̂_ dl,ml^ H]=𝔼[∑_n = 1^N 𝐡̂_ dl,ml,n^ H[ 𝐑̂^ dl_ml']_n𝐡̂_ dl,ml,n]. Using an approach similar to Eq. (<ref>) yields, we can deduce that 𝔼[ 𝐡̂_ dl,ml^ H𝐑̂^ dl_ml'𝐡̂_ dl,ml^ H]= tr(𝐑̂^ dl_ml𝐑̂^ dl_ml'). So the first term of the denominator is 𝔼[ | 𝒩_l^ dl|^2] =∑_l' ∈𝒦 _ dl\{ l}^∑_m ∈ℳ _^p_ dl,l'ε^2_ dl,l' tr(𝐑̂^ dl_ml𝐑̂^ dl_ml'). The second term in the denominator is similarly obtained through a similar approach, which can be expressed as 𝔼[|𝒩_l^ error|^2]= ∑_k ∈𝒦 _dl∑_m ∈ℳ _ dlp_ dl,kε^2_ dl,k tr(𝐑̂^ dl_mkθ^ dl_ml). It can be demonstrated that the DL channel estimation error and sensing beamforming are independent for different APs. The third term in the denominator can be expressed as 𝔼[| 𝒩_l^ CLI-s|^2] =∑_m ∈ℳ _𝔼[| 𝐞_ dl,ml^ H𝐪_ A_m, T_t|^2]. Following the definition of the steering vector in Eq. (<ref>), we can derive that 𝔼[𝐪_ A_m, T_t𝐪^ H_ A_m, T_t] =𝐪̅_ A_m, T_t𝐪̅^ H_ A_m, T_t+χ _ A_m, T_t^2𝐈_N =ψ _mt. Subsequently, Eq. (<ref>) can be expressed as 𝔼[∑_n = 1^N𝐞_ dl,ml,n^ H[ψ _mt]_n𝐞_ dl,ml,n]. Using a derivation process similar to the above, it can be obtained 𝔼[| 𝒩_l^ CLI-s|^2] =∑_t ∈𝒯∑_m ∈ℳ _p_ s,t/N tr(ψ _mtθ_ dl,ml). The final term is the CLI term, which, due to the orthogonality of the channels utilized by different UL UEs, can be expressed as 𝔼[| 𝒩_l^ CLI-c|^2] =∑_u ∈𝒦 _ ulp_ ul,u𝔼[|h_ I,u,l|^2]=∑_u ∈𝒦 _ ulp_ ul,uϕ^ u_ul. Combining all results completes the proof. § PROOF OF THEOREM 3 Put expectations into γ^ ul_u. The derivations of 𝔼[|𝒟_u|^2], 𝔼[|𝒩_u^ ul|^2], 𝔼[|𝒩_u^ error|^2] and 𝔼[|𝒩_u^ CLI-s|^2] in Eq. (<ref>) align with those in the Appendix <ref>, and will not be repeated here. The third term of the denominator is expressed as 𝔼[|𝒩_u^ CLI-c|^2] =∑_m ∈ℳ _^∑_n ∈ℳ _^p_ dl,j𝔼[|∑_j ∈𝒦 _ dl^𝐯_nu^ H𝐞_ A,mn𝐰_m,j^ c|^2], Since 𝐞_ A,mn and 𝐯_nu^ H, 𝐰_m,j^ c are not correlated with each other, each of the terms in the above equation can be calculated individually. 𝔼[𝐞_ A,mn^2]=𝔼[|𝐞_ A,mn^ H𝐞_ A,mn|]= tr(θ_mn^ A). Combining the Lemma <ref> and Lemma <ref>, we can get 𝔼[|∑_j ∈𝒦 _ dl^𝐯_nu^ H𝐰_m,j^ c|^2] =∑_j ∈𝒦 _ dl^ε _ ul ,u^2ε _ dl,j^2𝔼[| 𝐡̂_ ul,mu^ H𝐡̂_ dl,m,j|^2] =∑_j ∈𝒦 _ dl^ε _ ul,u^2ε _ dl,j^2 tr(𝐑̂^ ul_nu𝐑̂^ dl_mj). By substituting the aforementioned equation into Eq. (<ref>), we can derive the third term of the denominator. The last term in the denominator can be expressed as 𝔼[|𝒩_u^ noise|^2] =σ^2_ ul∑_n ∈ℳ _𝔼[𝐯_nu^ H^2]=σ^2_ ul∑_n ∈ℳ _ tr(𝐑̂^ ul_nu). Combining all results completes the proof. § PROOF OF THEOREM 4 The received signals are separately mapped onto distinct sensing symbols, enabling accurate identification of the corresponding signals y_m,n,t. Concerning the active MIMO radar signal model, the detailed modeling of 𝐲_m,n,t is represented as 𝐲_m,n,t =∑_i ∈𝒯√(p_ s,t)α _iη _m,n,i𝐚_mi⊗𝐛_ni𝐰_mt^ se^-j2πΔ f/cd_m,n,i+𝐳, where 𝐚_mi=𝐪_ A_m, T_t, 𝐛_in=𝐪_ T_i, A_n, η _m,n,i=λ _ A_m, T_iλ _ T_i, A_n represents large-scale fading. First, we derive the closed-form expression for the noise 𝐳. The terms of Eq. (<ref>) are obtained through a method analogous to that used in Appendix <ref>. These terms can be expressed as 𝔼[𝐲̅_ dl,n^2] =∑_m ∈ℳ _^∑_j ∈𝒦 _ dl^p_ dl,jε _ dl,j^2 tr(θ_mn^ A𝐑̂^ dl_mj)=σ^2_ dl,n, 𝔼[𝐲̅_ ul,n^2] =∑_k ∈𝒦 _ ulp_ ul,k tr(𝐑̂^ ul_nk)=σ^2_ ul,n. Based on the above equation the CRLB of the estimated parameters can be obtained. The CRLB is obtained by computing the inverse of the Bayesian Fisher Information Matrix (FIM), represented as 𝔼[ (γ̂_m,n,t - γ _m,n,t)(γ̂_m,n,t - γ _m,n,t)^ T] ≽𝐉_m,n,t^ - 1, where γ _m,n,t = [ d_m,n,t,θ _mt,ϕ _tn] ^T is the true value of the estimated parameter of target t, d_m,n,t=d_ A_m, T_t+d_ T_t, A_n, γ̂ _m,n,t= [ d̂_m,n,t,θ̂_mt,ϕ̂_tn] ^T is the estimated value of the parameter of target t based on Eq. (<ref>), 𝐉_m,n,t^∈ℝ^3 × 3 is the FIM for the unknown matrix γ _m,n,t. The FIM can be computed through 𝐉_m,n,t=1/σ^2_z( d𝐲_m,n,t/ dγ _m,n,t^T)^ H( d𝐲_m,n,t/ dγ _m,n,t^T), where σ^2_z=σ^2_ dl,n+σ^2_ ul,n+σ^2_ s, d𝐲_m,n,t/ dγ _m,n,t^T expressed as d𝐲_m,n,t/ dγ_m,n,t^T=ζ_m,n,t×[ -j2πΔ f/c𝐚_mt⊗𝐛_tn, d𝐚_mt/ dθ_mt⊗𝐛_tn,𝐚_mt⊗ d𝐛_tn/ dϕ_tn], where ζ_m,n,t=α^2_t𝐰_m,t^ s^2e^-j2πΔ f/cd_m,n,t, and d𝐚_mt/ dθ_mt=-j2π/λ[𝐩^ T_m1𝐤_mt𝐚_mt,1,...,𝐩^ T_mN𝐤_mt𝐚_mt,N], d𝐛_tn/ dϕ_tn=-j2π/λ[𝐩^ T_n1𝐤_tn𝐛_tn,1,...,𝐩^ T_nN𝐤_tn𝐛_tn,N], where 𝐤_mt=[-sin(θ_mt),cos(θ_mt)]^ T, and 𝐤_nt=[-sin(ϕ_tn),cos(ϕ_tn)]^ T. In the context of large antenna arrays, the orthogonal nature of multiple paths from distinct directions allows for the asymptotic approximation of the non-diagonal sub-matrix of the FIM as the zero matrix. This results in an FIM approximation matrix of a block diagonal structure. The block diagonal FIM matrix enables the calculation of the closed-form expression for the CRLB of each estimated parameter. Inverting Eq. (<ref>) and taking out the block for d_m,n,t, θ_mt and ϕ_tn yields Theorem <ref>. This completes the proof. IEEEtran
http://arxiv.org/abs/2406.08814v1
20240613051552
Skim then Focus: Integrating Contextual and Fine-grained Views for Repetitive Action Counting
[ "Zhengqi Zhao", "Xiaohu Huang", "Hao Zhou", "Kun Yao", "Errui Ding", "Jingdong Wang", "Xinggang Wang", "Wenyu Liu", "Bin Feng" ]
cs.CV
[ "cs.CV" ]
[figure]labelfont=bf 1. Huazhong University of Science and Technology, Wuhan, China. 2. Department of Computer Vision Technology (VIS), Baidu Inc. † Contributed equally to this work. Corresponding authors: fengbin@hust.edu.cn and zhouh156@mail.ustc.edu.cn. Skim then Focus: Integrating Contextual and Fine-grained Views for Repetitive Action Counting Zhengqi Zhao^1† Xiaohu Huang^12† Hao Zhou^2 Kun Yao^2 Errui Ding^2 Jingdong Wang^2 Xinggang Wang^1 Wenyu Liu^1 Bin Feng^1 Received: date / Accepted: date ============================================================================================================================== § ABSTRACT The key to action counting is accurately locating each video's repetitive actions. Instead of estimating the probability of each frame belonging to an action directly, we propose a dual-branch network, i.e., SkimFocusNet, working in a two-step manner. The model draws inspiration from empirical observations indicating that humans typically engage in coarse skimming of entire sequences to grasp the general action pattern initially, followed by a finer, frame-by-frame focus to determine if it aligns with the target action. Specifically, SkimFocusNet incorporates a skim branch and a focus branch. The skim branch scans the global contextual information throughout the sequence to identify potential target action for guidance. Subsequently, the focus branch utilizes the guidance to diligently identify repetitive actions using a long-short adaptive guidance (LSAG) block. Additionally, we have observed that videos in existing datasets often feature only one type of repetitive action, which inadequately represents real-world scenarios. To more accurately describe real-life situations, we establish the Multi-RepCount dataset, which includes videos containing multiple repetitive motions. On Multi-RepCount, our SkimFoucsNet can perform specified action counting, that is, to enable counting a particular action type by referencing an exemplary video. This capability substantially exhibits the robustness of our method. Extensive experiments demonstrate that SkimFocusNet achieves state-of-the-art performances with significant improvements. We also conduct a thorough ablation study to evaluate the network components. The source code will be published upon acceptance. § INTRODUCTION Performing periodic movements over time leads to the formation of repetitive actions, a phenomenon prevalent in daily activities such as physical training. In computer vision, counting repetitive actions in videos is a newly arisen topic, which has potential applications in intelligent systems, , intelligent workout tracking <cit.>. The key step of action counting is to detect the action periods along the time dimension accurately, therefore learning to differentiate action-related and -unrelated frames is critical for this task. The previous top-performing methods tackle this task in different ways. Zhang <cit.> establish a context-aware and scale-insensitive refinement method to locate the periods. Differently, RepNet <cit.> and TransRAC <cit.> construct correlation matrices <cit.> to mine the relations among frames. Furthermore, TransRAC <cit.> estimates the period distribution in the form of density maps, which are widely adopted in crowd counting <cit.>. We notice that these methods rely heavily on identifying the similarity between frames but lack comprehension of the target action. However, if we do not know which action is the target action that needs to be counted, the feature learned could be ambiguous especially when various interference actions exist in the video. For example, when doing physical training, one may combine multiple action classes to build a workout plan. In summary, we argue that there is an important idea that has long been neglected: we need to determine which action to count before starting counting. This argument is consistent with human life experience. As shown in Fig.<ref>, humans first skim the whole sequence to quickly locate the frames of the possible target action to acquire an action pattern, since there are a lot of irrelevant frames in the video as distractions. Then, taking the located frames as guidance, humans can focus on the frames with large similarities and dedicatedly count the actions. Naturally, this human perception procedure can be summarized as a skim-then-focus process. In this way, the skim process provides cues about the target action to the focus stage, which helps filter out the irrelevant frames and exploit the informative ones, eventually benefiting action counting. Compared with the skim process exploring the global context, the focus stage keeps an eye on local video segments to notice the fine-grained visual clues. Inspired by the above observation, we propose a network for repetitive action counting, dubbed SkimFocusNet. The core idea of SkimFocusNet is to mimic the human perception mechanism for locating fine-grained clues under the guidance of contextual information across the sequence. Specifically, we construct a network with a dual-branch architecture, where each branch corresponds to the skim and focus processes, respectively. The skim branch takes a long contextual sequence as input for quick viewing and samples a short instructive sequence from it. As for the focus branch, it takes the short instructive sequence and the local fragments of the video as inputs. The instructive sequence provides information on the target action while local fragments deliver fine-grained details with a high temporal resolution. They are both beneficial for conducting meticulous counting. Afterward, the focus branch generates an informative feature vector using the instructive sequence which represents the properties of the target action in the whole sequence. This vector is then used as a guide, based on which the feature expression of each frame from the local fragment is adjusted dynamically, , highlights the action-relevant frames, and suppresses action-irrelevant frames. The adjustment is completed through a long-short adaptive guidance (LSAG) block, which further fuses the features along the temporal dimension with long-term and short-term time scales for enriching temporal representation diversities. Moreover, the videos within current datasets solely feature one single type of repetitive action, failing to accommodate scenarios where multiple repetitive actions coexist. To better reflect complicated situations, we construct the Multi-RepCount dataset with the data unit of a video containing various repetitive actions and an exemplary video for reference. Based on that, we introduce a novel problem setting, namely specified action counting, which refers to counting occurrences of a specified action within a video, utilizing an exemplary video as a reference. Conducting specified action counting on the Multi-RepCount dataset provides an effective means of evaluating the robustness of various methods under complex scenarios. We evaluate SkimFocusNet on three datasets, , RepCount <cit.>, UCFRep <cit.>, and the proposed Multi-RepCount, where RepCount and UCFRep typically include only one type of repetitive action for each video. SkimFocusNet sets new state-of-the-art performance on the three datasets and outperforms the previous methods by a large margin. Summarily, the contribution can be concluded as follows: * We propose a dual-branch framework (SkimFocusNet) for action counting, where the skim branch provides informative cues for the focus branch, and helps it pay delicate attention to action-related frames. * We propose a long-short adaptive guidance (LSAG) module for adaptively conducting the guiding from the skim branch to the focus branch in detail. LSAG further explores multi-scale modeling for enhancing temporal representation. * We propose a new problem setting of specified action counting and construct a new dataset Multi-RepCount, which is effective for comparing the robustness of different methods. * Extensive experiments conducted on three datasets, RepCount <cit.>, UCFRep <cit.> and the proposed Multi-RepCount, demonstrate the superior performance of SkimFocusNet. Additionally, further ablation experiments are conducted to prove the effectiveness of the proposed components in the network. § RELATED WORKS §.§ Action Counting Our work focuses on action counting in videos that involve repetitive actions. Previous classical approaches transform the video into a one-dimensional time-domain signal <cit.>. Early researchers then estimate the period of the original video sequence by utilizing the Fourier transform <cit.>, which is an effective mathematical tool for processing periodic information from a one-dimensional signal. However, this mathematical method assumes that the motion is stable and continuous <cit.>, which may not hold in real-life scenarios <cit.>. Levy and Wolf <cit.> introduce an online classification network trained on a synthetic dataset to estimate action periods. The fixed length of the input sequence for their network may hinder performance when dealing with original sequences of varying lengths. Ruina <cit.> propose a method adopting wavelet transform to handle the non-stationary video sequence. This method is incapable of well generalizing to real-world datasets due to complexity. Zhang <cit.> propose an iterative refinement framework for action counting, which is not friendly to computational cost. Dwibedi <cit.> construct a self-similarity matrix among frames of the input sequence to achieve class-agnostic action counting. However, it predefines the range of actions in each video, limiting its ability to handle sequences with a higher number of actions. Zhang <cit.> incorporate multi-modal information in action counting, which integrates visual signals with sound for the first time. Hu <cit.> propose multi-scale input and use the transformer blocks to predict the density maps of action periods. Li <cit.> propose a two-branch framework incorporating RGB and motion to enhance the foreground motion feature learning. In this paper, we point out a key concept that we need to skim the sequence in advance to locate the possible target action. However, the previous methods all tend to estimate the action periods directly but are not meant to take an early look at the sequence to determine which action to count in general. This may lead to inaccurate attention to those irrelevant frames. In contrast, we propose a dual-branch architecture, where a skim branch first captures the noteworthy frames as guidance, then a focus branch counts the action based on the guidance in detail. §.§ Action Recognition In a closely related field, , action recognition, several dual-branch architectures are very well known, , Two-Stream networks <cit.> and SlowFast <cit.>. Two-stream networks <cit.> take RGB images and optical flows as inputs for each branch, and finally, ensemble the two-stream predictions for recognition. Differently, SlowFast <cit.> feed sequences with different frame rates into two different paths, aiming at capturing spatial semantics and temporal motion simultaneously. Our method differs from them in 1) Unlike Two-Stream networks <cit.>, we only use images as inputs instead of multi-modal information. 2) Unlike Two-Stream networks <cit.> and SlowFast <cit.>, the two branches in our framework are given nonequivalent status. Exactly, the skim branch is more like an assistant to the focus branch. § SKIMFOCUSNET In this section, we first describe the architecture of SkimFocusNet, and then elaborate on the proposed components in detail. §.§ Architecture Fig.<ref> illustrates the overall framework of SkimFocusNet. SkimFocusNet incorporates a skim branch and a focus branch, both consisting of an encoder and a decoder that adhere to the overall framework established by TransRAC <cit.>. The encoder Φ in the focus branch adopts a complete video swin transformer, whereas the encoder φ in the skim branch utilizes only the first layer of the video swin transformer. The decoder in the focus branch employs a correlation matrix using a self-attention block and then applies a transformer-based period predictor to output a vector. Meanwhile, the tiny decoder in the skim branch follows the same design but with fewer computational layers. As the objective of the skim branch is to capture the action pattern coarsely, it possesses a comparatively lighter network structure than the focus branch. Specifically, given a video I including multiple repetitive actions, after downsampling by a factor R, we truncate it into a contextual view G with the length N_S and divide it into M fine-grained views F={F_i|i=1,2,⋯, M} with the same length N_F. The skim branch takes the contextual view G as an input. Through encoder φ, and a tiny decoder, the skim branch outputs the confidence map D_S=[α^1,α^2,⋯,α^N_S], where α^n denotes the predicted confidence value that the n-th frame belongs to the target action. According to the confidence map D_S, the skim branch samples the instructive frames C of length N_C from the contextual view G through the informative sampling module, which is discussed in <ref>. The instructive frames C contain fewer frames but more representative information about the target action which is beneficial for counting. The focus branch takes the instructive frames C and one fine-grained view F_i from F as inputs. They are fed into an encoder Φ to produce the feature embedding X_C ∈ℝ ^ N_C × d and X_F^i ∈ℝ ^ N_F × d, respectively, where d denotes the feature dimension of each frame. We apply the max pooling layer on the feature embedding X_C to acquire guidance information Z ∈ℝ ^ d. Next, the focus branch utilizes the LSAG block to integrate the feature embedding X_F^i with the guidance information Z. The LSAG block first adaptively adjusts feature embedding according to Z and then it models long-term and short-term relations in the time domain to adapt to actions with different cycle lengths. The specific structure of LSAG is detailed in Sec.<ref>. Afterward, the focus branch outputs the density map D_F^i=[β^1,β^2,⋯,β^N_F] which describes the distribution of action periods, where β^n denotes the predicted density value for the n-th frame. Particularly, the sum of D_F^i along the time dimension denotes the predicted number of repetitions for the fine-grained view F_i. During inference, we can get the predicted number of repetitions for the whole video by summing up the outputs from all of the M fine-grained views. L=L_mse^S+L_mse^F Finally, we use Mean Square Error (MSE) as the loss function to penalize the distance between the predicted and ground-truth density maps for both the skim branch and the focus branch. As shown in Eq.<ref>, the overall loss function L is the sum of the skim branch loss L_mse^S and the focus branch loss L_mse^F. §.§ Informative Sampling The informative sampling module is designed to extract the instructive frames C from the skim branch. As shown in Fig.<ref>, this section shows a couple of implementations of the informative sampling strategies, random sampling, uniform sampling, and top N_C sampling. The random sampling strategy doesn't rely on the predicted confidence map D_S and the sampling frames are likely to cover the whole input sequence. It serves as the baseline strategy for comparison. The uniform sampling strategy evenly distributes the sampling frames in the temporal domain, capturing the overall information extracted from the contextual view G. The top N_C sampling strategy selects N_C frames with the highest N_C values in the D_S map, which contain the most discriminative information of the contextual view G. According to the performance comparison of different informative sampling strategies in Sec.<ref>, we choose the top N_C sampling strategy for SkimFocusNet. §.§ LSAG Fig.<ref> shows the overall architecture of the LSAG block. It takes the guidance information Z with the focus branch feature embedding X_F^i as inputs and then outputs the enriched feature embedding X_CF^i. X_CF^i retains fine-grained temporal resolution and contains instructive information from the skim branch. Specifically, the LSAG block consists of two components: feature adaption and long-short relation modeling. The feature adaption is to emphasize the frames containing repetitive actions and ignore the irrelevant ones and backgrounds, according to the guidance from the skim branch. Long-short relation modeling models temporal features in different scales, which can better understand the motion pattern of different actions with both short periods and long periods. For feature adaption, we first expand the guidance information Z in the time domain and then concatenate it with the feature embedding X_F^i, which produces the combined embedding X_cat^i ∈ℝ^N_F × 2d. The process can be formulated as Eq.<ref>: X_cat^i=cat(X_F^i, repeat(Z,N_F)), where repeat(Z,N_F) denotes expanding the guidance information Z in the time domain for N_F times. Then, the feature embedding X_cat^i goes through a bottleneck with a sigmoid activation function to extract a set of attention in the channel dimension. Next, we use 1D temporal convolutions to aggregate the feature embedding of the attention output with a shortcut connection. In this way, we can adaptively adjust feature representation to emphasize the frames that correspond to repetitive actions and ignore those containing background movements. Next, we use B long-short relation modeling blocks to model long- and short-term relations along the sequence. The multi-head self-attention layer is used to model long-term relations like contextual information. After that, a 1D convolution layer is applied to capture short-term relations in adjacent frames. Finally, the LSAG block produces the feature embedding X_CF^i, which contains temporal features in different scales and thus can better capture the motion pattern of different actions with both long and short periods. In this way, X_CF^i is a reliable feature embedding to predict the repetition number for the fine-grained view F_i. § SPECIFIED ACTION COUNTING Repetitive actions performed by humans tend to exhibit great complexity in real-life scenarios. For instance, engaging in diverse exercises such as push-ups or sit-ups is a common practice during workouts. However, current datasets for repetitive action counting, such as RepCount <cit.>, primarily focus on a single type of repetitive action, with minimal interruptions such as breaks between repetitions. These interruption actions are characterized by their brevity and lack of repetition and barely disrupt the network’s similarity-based counting process. To better reflect real-life scenarios, we introduce a new problem setting of specified action counting and construct a more intricate dataset Multi-RepCount. §.§ Problem Setting We propose a new problem setting of specified action counting to simulate counting problems under complicated scenarios. In detail, given a video I and an exemplary video E, our objective is to quantify the occurrence of the target action a in video I which comprises multiple repetitive actions by comparing it with the repetitive action a depicted in exemplary video E. Even though the other actions may repeat themselves in the video I, they are regarded as background movements concerning the target action a. In conclusion, the main goal of the proposed specified action counting is to accurately count the number of the specified actions in the videos containing different categories of repetitive actions. However, without specifying the target action, it is very challenging for class-agnostic action counting to perform well under such circumstances. Therefore, besides the video with different actions, another video of a certain repetitive action is provided as an exemplar. Notably, the proposed SkimFocusNet can readily handle this new setting. As shown in Fig.<ref>, the Skim branch takes the exemplary video E as input and outputs the instructive frames C. Then, the focus branch takes the instructive frames C and a fine-grained fragment of video I as inputs to perform specified action counting. With the help of the guidance information, we believe that our method can differentiate whether the action is of the target category or not. §.§ Multi-RepCount Based on the dataset RepCount <cit.>, we create a more complex dataset Multi-RepCount, where each video contains multiple types of repetitive actions. Statistically, the RepCount dataset includes 9 categories of repetitive actions including others. We discard the others category because it does not refer to a specific action category and, therefore, is not applicable in this setting. When composing Multi-RepCount, we first select 3 videos separated from each category of the training set as candidate exemplary video set. Then, for each video of a certain target action a, we insert clips from other categories as distractions for each video. The insertion position and the order of the clips are randomly generated. In this way, we compose the video I containing multiple actions. Additionally, the frames of the target action a take up half of the percentage of the entire video I. Moreover, from the candidate exemplary video set, we randomly choose one exemplary video E for the corresponding category a associated with video I, representing the fundamental data unit. In conclusion, our Multi-RepCount has 2 main modifications compared to RepCount. (1) each video contains repetitive actions from multiple categories. (2) each video is packed with a randomly selected exemplary video. Figure <ref> shows the example data unit of the dataset Multi-RepCount. § EXPERIMENTS §.§ Evaluation Metrics Off-By-One (OBO) Accuracy. OBO will be accumulated when the absolute difference between the prediction and the ground truth is no larger than one. Otherwise, OBO will not be accumulated, which is formulated as: OBO=1/N∑_i=1^N[|ĉ_̂î - c_i|≤1], where ĉ_̂î, c_i, and N denote the prediction, the ground truth, and the number of the test sequences, respectively. In this way, a higher OBO indicates better performance. Mean Absolute Error (MAE). MAE measures the absolute error between the prediction and the ground truth. Therefore, a lower MAE indicates better performance. The MAE calculation can be denoted as: MAE=1/N∑_i=1^N|ĉ_̂î-c_i|/c_i. §.§ Datasets There are many existing datasets for repetition counting <cit.>. However, due to their limited size, some early proposed datasets <cit.> are not suitable for training complex networks. As a result, we evaluate our network on the two large-scale widely used datasets and the proposed Multi-RepCount. RepCount Part-A. The RepCount Part-A dataset <cit.> contains 1041 videos with fine-grained annotations and significant variation in length. These videos are collected from the YouTube website. While containing many anomaly cases, it has a larger average count number and a longer average duration than any other dataset. Due to the above characteristics, it becomes the most challenging one. UCFRep. The UCFRep dataset <cit.> contains 526 videos of 23 categories. It comprises the annotated repetition videos from the action recognition dataset UCF101 <cit.>. Multi-RepCount. The proposed Multi-RepCount dataset is a synthetic dataset made up of repetitive action clips of RepCount. The basic data unit of the dataset is a video pair consisting of a video for counting and an exemplary video for reference. Statistically, Multi-RepCount contains 984 videos of 8 repetitive action classes and each video contains more than 3 kinds of different actions. §.§ Implementation Details We implement our method with the PyTorch platform and train it with an NVIDIA GeForce RTX3090 GPU. We train the network (except for the pre-trained VideoSwin Transformer or ResNet) for an overall 200 epochs and set the batch size as 8. Unless otherwise stated, VideoSwin Transformer is used by default. Besides, we apply the Adam optimizer with a declining learning rate of 8 × 10^-6. The frame lengths for the contextual view G are set to 256. The frame lengths for the instructive Frames C and the focus branch N_F are set to 32 and 64. The downsampling rate (R) for both inputs for the focus branch and the contextual view C is set as 4. The number of long-short relation modeling blocks (B) is set as 3. §.§ Comparison with State-of-the-art Methods RepCount Part-A. Tab.<ref> shows the performance comparison between SkimFocusNet and previous competitive methods, including action counting methods (RepNet, Zhang , and TransRAC, Li ), action recognition methods (X3D, TANet, and Video SwinT), and action segmentation methods (Huang and ASFormer). We find that SkimFocusNet outperforms these methods by a large margin. Compared with the best competitor, our SkimFocusNet decreases MAE from 0.3841 to 0.2489 and improves OBO from 0.3860 to 0.5166, which are relatively improved by 35.2% and 33.8%, respectively. The comparison of RepCount Part-A strongly suggests the superior counting accuracy of the proposed method. UCFRep. Following the evaluation protocol in TransRAC <cit.>, we first train the models on the RepCount Part-A dataset, then test them on the UCFRep dataset. In Tab. <ref>, we compare SkimFocusNet with RepNet, Zhang , TransRAC, and Li , from which we can notice that SkimFocusNet still significantly outperforms all of them. However, the improvement is not as substantial as the RepCount Part-A which indicates the difference between the two datasets and the difficulty of such an evaluation protocol. The comparison on UCFRep proves the generality of the proposed method. Multi-RepCount. Table <ref> shows the performance comparison between previous competitive methods. However, both TransRAC and RepNet are single-stream networks that cannot utilize critical information from the exemplary video E. Therefore, besides our SkimFocusNet, we test our model without the skim branch on the Multi-RepCount for a fair comparison. In general, it is reasonable for single-stream methods to count the main repetitions since the target action a takes up half of the percentage. Without the guidance information from the other branch, we find that SkimFocusNet still outperforms other methods in the single-stream framework. With the critical information provided by the skim branch, the performance of SkimFocusNet is improved by a large margin which suggests the importance of the skim process under complex situations. The comparison on Multi-RepCount proves the robustness of the proposed method. §.§ Complexity Analysis for Action Counting Methods In Tab. <ref>, we analyze the complexity and performance of different methods on the RepCount dataset. We train all methods for 200 epochs and the test-time batch size is set to 1 for all methods. Besides, for RepNet <cit.> and Zhang <cit.>, we use the same frame-sampling strategy as TransRAC <cit.>. From Tab. <ref>, we find that: (1) With a bit more parameters and longer training time, our method can achieve the best performance compared with others. The extra parameters are due to the dual-branch design and the extra training time is due to the basic training data of video fragments. (2) Compared to the previous method TransRAC <cit.>, we improve the inference time from 38.02 seconds to 18.12 seconds which is due to the abandonment of the sliding window. Moreover, during inference, each contextual view is processed only once for all the fine-grained views in the video which greatly improves efficiency. §.§ Ablation Study In this section, we conduct extensive diagnostic experiments to evaluate the effectiveness of our model design. Effectiveness of the proposed components. In Tab.<ref>, we evaluate the effectiveness of the proposed components in SkimFocusNet. We use basic CNN to replace LSAG in certain experiments. From the results, it can be noticed that: (1) In the first experiment, without skim branch and LSAG, the performance drops by a large margin. (2) By comparing the 2nd and the 3rd experiments, we find that, based on the focus branch, the skim branch contributes more improvement than LSAG, demonstrating the importance of the skimming process before counting. (3) Using all three components can achieve the best performance, revealing these components play complementary roles to each other. Versatility of the dual-branch design. The skim-then-focus architecture is proposed to seamlessly integrate with various encoders and decoders. To verify this, in Tab.<ref>, we implement the encoders and decoders with different networks. The first two dual-branch experiments based on encoder ResNet perform better than ResNet-based RepNet. For the VideoSwin encoder, we can find the last two experiments show dual-branch method has better performance than the VideoSwin-based TransRAC. The experiments strongly support our dual-branch design philosophy. Number of the downsampling rate R. As shown in Tab.<ref>, we conduct experiments to compare the performance of different downsampling rates R (2, 4, and 8). From the results, we observe that a downsampling rate that is either too high or too low is not beneficial to performance. Therefore, we set R=4 for optimal performance. Impact of the informative sampling strategies. In Tab.<ref>, we investigate the impact of the informative sampling strategies. From the results, we can find that: (1) Compared with the random sampling strategy, the results of the uniform sampling are similar which indicates the extracted general information brings almost no improvement. (2) Our method can achieve the best performance when using top N_C sampling which suggests significant information is superior to general information. Number of sampling frames for instructive frames C. In Tab.<ref>, we evaluate the effectiveness of the skim branch on the RepCount Part-A dataset. From the results, several noteworthy conclusions are summarized: (1) Compared with not using the skim branch (N_C = 0), using only 16 frames for the skim branch can effectively improve the performance. It reveals that even if we skim the sequence with a limited number of frames, it can still provide useful guidance to locate the action periods. (2) When we sample more frames for the skim branch, the performance improves obviously, which demonstrates that the abundant contextual clues can indeed contribute to action counting. (3) When we further feed more frames to the skim branch, it is found that the performance declines. One possible explanation is that over-sampling brings increasing noise, which degrades the quality of guidance information. To achieve a better trade-off between computational cost and performance, we choose to feed 32 frames to the skim branch. Number of the fine-grained view length N_F. To investigate the effects of the fine-grained view length N_F, we conducted experiments with N_F values of 32, 64, and 128. As shown in Table <ref>, setting the fine-grained view length N_F to 64 outperforms other experiments in terms of the OBO metric. Additionally, the first two settings achieve similar performances in terms of the MAE metric. However, setting N_F to 32 increases training time consumption, and setting it to 128 takes up more GPU memory. To achieve a better trade-off between computational cost and performance, we set the fine-grained view length N_F to 64. Impact of the module design in LSAG. In Tab.<ref>, we investigate the impact of the module design in LSAG. From the results, we can notice that: (1) Without the feature adaption block, counting performance declines. It indicates that it is important to adapt the feature representation of each frame in the focus branch according to the guidance from the skim branch. (2) Removing the long-short relation modeling degrades the performance obviously, which shows that richer granularities of temporal expression would benefit the motion representation and help better capture the characteristics of different actions. Number of the long-short relation modeling blocks in LSAG. In Tab.<ref>, we investigate the impact of the value of B. By increasing B from 1 to 3, the performance is improved significantly (especially on OBO), which is brought by richer temporal representation with more blocks. When we further increase B to 5, the performance declines on both metrics. In this paper, we set B as 3 for a trade-off between computational cost and performance. Performance comparison in different ranges of action counts. In Fig.<ref>, we present the performance comparison in different ranges of counts on the RepCount Part-A dataset. Several conclusions are summarized: (1) Our method achieves the best performance within all ranges, proving its superior capacity under different scenarios. (2) With the skim branch, our method can consistently improve its performance in various situations. (3) The performances of all methods decline in terms of the OBO metric when the number of actions in a video increases, which is somehow reasonable since such scenarios are relatively harder. But when it comes to videos with more than 20 actions, the performances become terrible due to the accumulated errors. It reveals a potential direction for future research in this field, , solving dense repetitive action counting in long videos. §.§ Qualitative Analysis Visualization of predicted density map. As shown in Fig.<ref>, we visualize the density maps of the ground truths and the predictions. From the visualization, we find that: (1) Compared with TransRAC <cit.>, our method can locate the action periods more accurately, which differentiates the frames that belong to the target action and the frames that are distractions, thus avoiding unnecessary attention on irrelevant frames. (2) By using the guidance from skim branch, it can help the model suppress the wrong focus on the transitional actions, and highlight the focus on the target actions. Visualization of LSAG. In Fig.<ref>, the attention weights of LSAG are visualized. We find that the network pays more attention to frames with more useful information about the target action. § CONCLUSION In this paper, we propose a framework for action counting, dubbed SkimFocusNet, which adheres to human intuition to construct a dual-branch architecture. The skim branch aims to capture the possible target action and offers guidance to the focus branch for action counting. In this way, the model can precisely detect repetitive actions and ignore distracting ones. We establish the Multi-RepCount dataset and the problem setting of specified action counting to compare the performance of different methods under more complicated conditions. Our SkimFocusNet is evaluated on RepCount, UCFRep, and Multi-RepCount datasets and achieves state-of-the-art performance over previous methods. Extensive diagnostic experiments and visualizations are given to evaluate the effectiveness of the proposed network. § DATA AVAILABILITY STATEMENT Some of the datasets used in this paper are available online. RepCount Part-A [<https://svip-lab.github.io/dataset/RepCount_dataset.html>] and UCFRep [<https://github.com/Xiaodomgdomg/Deep-Temporal-Repetition-Counting>] can be downloaded from their official website accordingly. The proposed Multi-RepCount and source code will be available upon acceptance. unsrt
http://arxiv.org/abs/2406.08644v1
20240612210812
Toward Fully-End-to-End Listened Speech Decoding from EEG Signals
[ "Jihwan Lee", "Aditya Kommineni", "Tiantian Feng", "Kleanthis Avramidis", "Xuan Shi", "Sudarsana Kadiri", "Shrikanth Narayanan" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.SD", "eess.AS" ]
[ [ 12 June 2024 ================ § ABSTRACT Speech decoding from EEG signals is a challenging task, where brain activity is modeled to estimate salient characteristics of acoustic stimuli. We propose FESDE, a novel framework for Fully-End-to-end Speech Decoding from EEG signals. Our approach aims to directly reconstruct listened speech waveforms given EEG signals, where no intermediate acoustic feature processing step is required. The proposed method consists of an EEG module and a speech module along with a connector. The EEG module learns to better represent EEG signals, while the speech module generates speech waveforms from model representations. The connector learns to bridge the distributions of the latent spaces of EEG and speech. The proposed framework is both simple and efficient, by allowing single-step inference, and outperforms prior works on objective metrics. A fine-grained phoneme analysis is conducted to unveil model characteristics of speech decoding. The source code is available here: <github.com/lee-jhwn/fesde>. § INTRODUCTION Brain-computer interfaces (BCI), particularly those targeting decoding listened or articulated speech from neural (brain activity) signals such as electroencephalography (EEG) and electrocorticography (ECoG), hold promise for improving life quality and rehabilitation outcomes for patients with communication disorders. Speech decoding technologies from brain activity also offer possibilities for seamless, immersive, and interactive entertainment, where command and control are coordinated without compromising sensitive information accessible to others. For instance, a research group recently successfully prototypes a digital avatar of a paralyzed patient relying on understanding and interpretation of ECoG signals <cit.> enabling the patient to communicate more freely. Several approaches have been proposed to decode speech or text from neural activities such as EEG or ECoG signals <cit.>. For example, many efforts have proposed the use of convolution neural networks (CNN) to decode speech from EEG <cit.> and ECoG signals <cit.> especially to reconstruct speech signals or representations, like mel-spectrogram, from brain activity signals. Furthermore, several methods to decode imagined speech from neural activities have also been proposed by <cit.>. Decoding speech from EEG signals rather than ECoG signals is more practical and economical in real world scenarios. Acquiring ECoG signals is challenging and limited to certain types of patients, as a surgical intervention is necessary to implant the sensing electrodes. On the other hand, obtaining EEG signals only requires participants to wear a cap-like device. Hence, speech decoding frameworks based on EEG signals can easily be applied to larger group of users, while ECoG based approaches are primarily focused on patients with specific clinical conditions such as epilepsy or certain movement disorders. However, modeling and decoding information from EEG recordings has been a significant signal processing challenge, due to the low signal-to-noise ratio (SNR) that is inherent in EEG measurements and sparsity of measurements. EEG devices attempt to measure electrical activity in the brain through non-invasive electrodes, spatially arranged on the scalp. Hence, the sensed signals are weakened, with low spatial resolution, and contaminated by various biological, environmental noises, such as muscle activity, cardiac activity, eye movements, power lines, or nearby electronic devices <cit.>. Various approaches have been proposed to deal with these sources of variability, ranging from sophisticated pre-processing and decomposition <cit.> algorithms to representation learning frameworks that are trained in a label-agnostic, self-supervised regime <cit.>. Decoding speech waveforms directly from EEG signals has numerous advantages when compared to multi-step approaches that usually incorporate signal-to-text estimation as an intermediate step. Speech contains rich information such as prosody including rhythm, intonation, emotion, and speaker identity that are lost in the typical written lexical form. Hence, a pipelined approach to processing EEG signals that first goes through text decoding, followed by text-to-speech (TTS), faces challenges in decoding high-fidelity speech from the loss of important prosodic and other paralinguistic information. Recent development in TTS technology may facilitate direct speech decoding from EEG signals. Fully-end-to-end TTS systems, such as VITS <cit.>, are TTS systems that directly synthesize speech in waveform, rather than intermediate acoustic features like mel-spectrograms or mel-frequency cepstral coefficients (MFCC) <cit.>. Fully-end-to-end TTS systems have numerous advantages over other TTS systems that require any additional intermediate acoustic feature mapping step. As the intermediate step is skipped, fully-end-to-end TTS systems are faster, simpler, and less vulnerable to error accumulation, compared to TTS systems that require an extra step of converting acoustic features to final waveforms. With such recent developments in the TTS domain, we believe the intermediate acoustic feature extraction step can be omitted. In this paper, we propose FESDE, a novel framework for Fully-End-to-end Speech Decoding from EEG signals. Our proposed framework directly generates waveform of listened speech given EEG signals, without any additional intermediate acoustic feature mapping step, such as to mel-spectrogram. To the best of our knowledge, this is the first framework that directly generates speech waveform given respective EEG signals. FESDE consists of three parts: the EEG module, the speech module, and the connector. The EEG module learns to provide a better representation of EEG signals. The speech module aims to generate the speech waveform from the speech embeddings. The connector learns to map and convert the distribution of EEG embedding to speech embedding. Figure <ref> illustrates the overall structure of the proposed framework. During inference, only the EEG encoder and the speech decoder are utilized, along with the connector. The proposed framework has various advantages over previous approaches. It enables single-step inference and does not require any additional pipelines such as a vocoder. Hence, it is faster and more straightforward compared to previous multi-step approaches, while outperforming the previous approach in terms of objective metrics. The main contributions of the paper are as follows: * We propose a fully-end-to-end speech decoding framework for EEG signals, by incorporating EEG module and speech module, reducing multiple steps into one. To our best knowledge, this is the first attempt to directly reconstruct listened speech waveform from EEG signals. * The proposed framework allows single-step inference, which is simpler and faster, while also outperforming the previous approaches in objective measures. * We conduct phoneme analysis and present the characteristics of phonemes that are easy or challenging to decode by this approach. § PROPOSED METHOD §.§ Model Architecture As shown in Figure <ref>, the proposed framework consists of three parts: the EEG module, the speech module, and the connector. The EEG and speech signals are handled by their respective modules. The connector bridges the two intermediate embeddings from EEG and speech. During inference, only the EEG encoder, the connector, and the speech decoder are used. For detailed implementation, refer to the source code. §.§.§ EEG Module The EEG module is based on <cit.>, a self-supervised learning framework for EEG signal representation. The EEG module consists of an encoder, that learns to encode input EEG signals into intermediate EEG embeddings, and a decoder, that learns to reconstruct EEG signals from those embeddings. The EEG encoder consists of convolution blocks and Structured State Space Sequence (S4) layers <cit.>. Each convolution block is composed of a 1D convolution layer, a dropout layer, layer normalization, and GELU activation. Each S4 layer consists of an S4 kernel estimator, gated linear unit (GLU) activation, dropout, and layer normalization. The S4 layers show superior performance in encoding long range temporal dependencies which make them suitable for encoding EEG signal <cit.>. The EEG decoder is composed of deconvolution blocks, where each block contains a 1D transpose convolution layer along with a dropout layer, layer normalization, and GELU activation. §.§.§ Speech Module We base the speech module on VITS <cit.>, one of the well performing fully-end-to-end TTS frameworks. The speech module consists of two parts: the speech encoder and the speech decoder. The speech encoder takes linear spectrograms as input and outputs the intermediate speech embeddings. It is composed of non-causal WaveNet <cit.> residual blocks, as in <cit.>, and a projection layer that outputs the mean and the variance of the distribution of the speech embeddings. We adopt HiFi-GAN V1 <cit.> as the speech decoder, as in <cit.>. §.§.§ Connector The connector consists of two parts: the prenet and the flow. As in <cit.>, the prenet consists of the transformer encoder <cit.> and a linear projection layer. It takes the intermediate EEG embeddings as input and outputs mean and variance of the distribution. The flow is identical to the normalizing flow network in <cit.>, with a stack of affine coupling layers <cit.>. The flow works as an invertible function between the two distributions of the EEG and speech embeddings. There is a gradient stop between the intermediate EEG embeddings and the connector. We empirically observed that the gradient stop stabilizes the training of the EEG module, as it is not affected by the performance of the speech module in early stages of training. §.§ Training Objectives Employing the approach in <cit.>, the cosine similarity loss is used for training the EEG module, as in Eq. (<ref>): L_EEG(x, x̂) = 1 - 1/N_ch∑_i=1^N_chx_i^T ·x̂_̂î/‖ x_i ‖‖x̂_̂î‖ where N_ch is the number of EEG channels, and x and x̂ represent the input and the reconstructed EEG signals, respectively. We adopt training objectives for the speech module from <cit.>. The reconstruction loss L_mel is defined as the L1 loss between the mel-spectrograms of generated and actual speech waveform. The KL-divergence loss L_KL helps map the distributions of the intermediate embeddings of speech and EEG to one another, as in Eq. (<ref>). L_KL(p, q) = logq(z|y) - logp(z|x) where y is the input speech and z is the intermediate speech embedding. Hence, q(z|y) and p(z|x) represent the distribution of the intermediate speech embeddings given y (speech) and x (EEG signals), respectively. The total loss for the speech module L_speech is as follows: L_speech = L_mel + L_KL + L_GAN where L_GAN consists of the adversarial loss and the feature matching loss from HiFi-GAN V1 <cit.>. Note that gradient stop is applied between the EEG module and the connector, the training of the EEG module is not affected by the speech module. § EXPERIMENTS §.§ Dataset We conducted experiments on the N400 dataset <cit.>, which contains EEG signals that were recorded from 24 subjects, while each subject was listening to 440 sentences in English. The speech samples are around 2 - 3 seconds long and contain 5 to 8 words. The 128 channel EEG was recorded at a sampling rate of 512 Hz, while the participants listened to the sentences. Four subjects[subject # 5, 10, 15, and 18.] were excluded owing to unreliable data, as suggested by <cit.>. The test set consists of two subjects[subject # 23 and 24.] and 40 sentences. As a result, the train set contains 7,200 pairs[18 subjects with 400 sentences each.] and the remaining 1,600 pairs were selected for the test set, where 720 and 800 of them are unseen audio and subject, respectively, and 80 are unseen in both audio and subject. The pre-processing pipeline for EEG is as follows: First, a notch filter at 60 Hz was employed in order to remove the powerline noise. Then, bandpass filtering with frequency limits of low 0.5 Hz and high 50 Hz was applied in order to preserve the bands relevant to EEG spectral information, followed by eye blink removal using independent component analysis (ICA). The signals were then resampled to 256 Hz. All of the speech samples were down-sampled to 22,050 Hz, to match the sampling rate of pre-trained speech models. For linear spectrograms, the following short-time Fourier transform (STFT) parameters were used: 1024 for both FFT and window size, and 256 for the hop size. For mel-spectrograms, the same parameters were utilized with 80 mel-bands. §.§ Experimental Setup Five different training configurations were considered as below: * vanila: all of the modules were trained from scratch, without any pre-trained parameters. * pt-audio: only the speech module was initialized with pre-trained parameters for training. * pt-audio-fz: the pre-trained parameters were used for the speech module, but frozen, that is, the speech module was not trained. * pt-audio-eeg: both of the EEG module and the speech module were initialized with pre-trained parameters. * pt-audio-eeg-fz: the EEG module and the speech module adopted the pre-trained parameters, but they were not trained. For the pre-trained speech module, [<https://github.com/jaywalnut310/vits>] was adopted. In pt-audio-eeg and pt-audio-eeg-fz, the EEG module, that had been pre-trained for 300 epochs, was later frozen when combined with the speech module. The compared baseline model is VLAAI <cit.>, which consists of a stack of convolution layers with skip connections. The final layer of the baseline model was modified to generate 80 band mel-spectrograms. One Nvidia A40 GPU was utilized for each training configuration. All of the FESDE training configurations were trained for 100k iterations with the AdamW optimizer <cit.>. The baseline model was trained for 68 epochs. § RESULTS AND DISCUSSION §.§ Evaluation The performance was measured in the following two objective metrics: mel-cepstral distortion (MCD) <cit.> and mel-spectrogram correlation (Mel-Corr). MCD measures the distance between two mel-cepstral coefficients as Eq (<ref>): MCD = α√(∑_i=1^N_MCC(MCC_i-MCC_i)^2) where α = 10√(2)/ln10 and N_MCC is the number of mel-cepstral coefficients. Mel-Corr is calculated as the Pearson correlation coefficient between two mel-spectrograms. For convenience, the reported numbers are multiplied with 100. Lower MCD and higher Mel-Corr values indicate better performance. Table <ref> shows the MCD and Mel-Corr values for the baseline model and the proposed five models. From the results in the table, the best performance was achieved when both of the EEG and speech modules are pre-trained separately (i.e., pt-audio-eeg-fz). Also, almost all of the proposed models perform better than the baseline model. §.§ Phoneme Analysis In order to analyze the effectiveness of proposed models in decoding speech at the level of phoneme, a fine-grained analysis was carried out by calculating some objective metrics. This analysis will reveal which of the category of phonemes are difficult or easy to decode. The Montreal forced aligner (MFA) ver3 <cit.> was used to acquire the timestamps of each phoneme. The pre-trained configuration was used for both acoustic model and the dictionary. As MFA does not perform accurately for reconstructed speech, the timestamps acquired from the ground-truth speech are assumed to be identical to the reconstructed speech. The consonants are grouped according to their manner of articulation, place of articulation, and tenseness of articulation. Similarly, the vowels are grouped based on position of the tongue and tenseness. As illustrated in Figures <ref>, the consonants that are nasal, dental, or lenis tend to be relatively easily decoded. Also, an interesting tendency is observed wherein more closed vowels are easier to decode. § ABLATION STUDY §.§ EEG Channels As recent studies <cit.> suggest, we explore the utilization of only the parietal and temporal regions of EEG that are known to contain rich auditory information. Instead of an increase in performance, a drop in performance was observed, as shown in Table <ref>, especially when the speech module was frozen during training. This result may suggest that non-auditory parts of the brain may be involved in speech decoding, however, further investigation is necessary for a clearer explanation. §.§ Text-Spotting Apart from evaluating the performance of speech reconstruction from EEG, we investigate whether our proposed EEG encoder captures textual information. A binarized word-spotting experiment was conducted to validate whether our proposed EEG presentation can be used to identify certain words in a speech sample. The 30 most frequently occurring nouns among all sentences are chosen as the set of keywords, and the task is to detect whether any of these keywords exist in a speech. The pre-trained EEG module was utilized. The results across different test conditions are presented in Table <ref>. We demonstrate the feasibility of text detection from EEG signals, and it remains a challenging and open endeavor. § CONCLUSION In this work, we introduce FESDE, a framework to decode listened speech waveforms directly from EEG signals. The proposed approach is faster and simpler by enabling single-step inference and it also outperforms the baseline model that involves an intermediate conversion to text representation. We also explore the characteristics of phonemes that are easily or challenging to be decoded by the proposed framework. Our approach is currently limited to listened speech, however, in the foreseeable future, we plan to extend our research scope to speech production tasks, such as imagined or phonated speech decoding. § ACKNOWLEDGEMENTS This project was partially supported by a USC Annenberg Graduate Fellowship and also by the Defense Advanced Research Projects Agency (DARPA) under cooperative agreement No. N660012324006. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. IEEEtran
http://arxiv.org/abs/2406.07840v1
20240612031515
SynthForge: Synthesizing High-Quality Face Dataset with Controllable 3D Generative Models
[ "Abhay Rawat", "Shubham Dokania", "Astitva Srivastava", "Shuaib Ahmed", "Haiwen Feng", "Rahul Tallamraju" ]
cs.CV
[ "cs.CV" ]
Target Speaker Extraction with Curriculum Learning Zhenglong Luo, Zhiyong Chen, and James Welsh The Authors are with the School of Engineering, The University of Newcastle, Callaghan, NSW 2308, Australia. Z. Chen is the corresponding author. E-mail: zhiyong.chen@newcastle.edu.au. =================================================================================================================================================================================================================================================== [1]Equal Contribution [2]Mercedes-Benz Research & Development India (firstname.lastname@mercedes-benz.com) [3]IIIT Hyderabad (astitva.srivastava@research.iiit.ac.in) [4]Max-Planck Institute for Intelligent Systems, Tuebingen (haiwen.feng@tuebingen.mpg.de) § ABSTRACT Recent advancements in generative models have unlocked the capabilities to render photo-realistic data in a controllable fashion. Trained on the real data, these generative models are capable of producing realistic samples with minimal to no domain gap, as compared to the traditional graphics rendering. However, using the data generated using such models for training downstream tasks remains under-explored, mainly due to the lack of 3D consistent annotations. Moreover, controllable generative models are learned from massive data and their latent space is often too vast to obtain meaningful sample distributions for downstream task with limited generation. To overcome these challenges, we extract 3D consistent annotations from an existing controllable generative model, making the data useful for downstream tasks. Our experiments show competitive performance against state-of-the-art models using only generated synthetic data, demonstrating potential for solving downstream tasks. Project page: https://synth-forge.github.iohttps://synth-forge.github.io § INTRODUCTION The field of facial analysis encompasses a range of critical tasks, including recognition, expression analysis, and biometric authentication, each dependent on accurate face parsing tasks. As the demand for robust facial analysis systems escalates, the need for extensive and diverse datasets to train these systems becomes paramount. Traditional data collection and labeling methods are not only time-consuming and costly but also susceptible to human bias and errors. An exponential rise in data demand, fueled by the digital transformation across industries, has incited the need for innovative and efficient data acquisition techniques. Recent advancements in synthesizing training data for face analysis have garnered considerable attention, pointing to a significant shift in the methodologies employed for training facial recognition and analysis models. Unlike traditional data, which relies on real-world occurrences for collection, synthetic data is algorithmically generated, offering a plethora of advantages in terms of cost, scalability, and accuracy. Traditional approaches, heavily reliant on sophisticated computer graphics pipelines <cit.>, offer the ability to render high-quality, auto-annotated facial images. However, the complexity and prohibitive cost of setting up such pipelines limit their accessibility and scalability <cit.>. Additionally, a persistent challenge with models trained on physically based rendering (PBR) synthesized imagery is the domain gap — the discrepancy between synthetic and real-world data distributions — which necessitates elaborate domain adaptation techniques to achieve practical utility. In contrast, the emergence of advanced generative models promises a new horizon in the generation of photo-realistic human faces. However, their potential for downstream face analysis tasks is yet to be fully harnessed, primarily due to the difficulty in obtaining 3D consistent annotations for the generated images <cit.>. Synthetic data obtained from generative models is particularly well-suited for providing such comprehensive supervision for computer vision tasks as has been seen in the community <cit.>, however yet remains under-explored in the facial analysis domain with 3D consistent annotations for the corresponding generated data. Controllable generative models are usually trained on real data, and are therefore capable of generating realistic data with minimal or no domain gap. However, the space spanned by the latent control variables of these models are generally too vast to obtain meaningful samples. Uniformly sampling the latent space for data generation would result in samples that might not represent the distribution required to meaningfully train for a downstream task, and often provide redundant information that contributes towards scaling the dataset size without significant enhancement to task performance. The objective of this research is to mitigate the shortcomings related to the use of controllable generative models for synthetic dataset curation, towards training downstream tasks more efficiently. To address the inherent challenges, we introduce two key insights: (i) the utilization of a generative model for the synthesis of high-quality facial images, alongside 3DMMs for extracting 3D-aware annotations, and (ii) the employment of this exclusively synthesized and annotated dataset for training a robust multi-task model capable of predicting keypoints (kp), semantic segmentation (seg), and depth modalities As the annotation schema obtained from the generative model differ from those followed in the existing benchmark datasets, we adopt a label fine-tuning strategy. Label finetuning is aimed at enhancing the predictions of our synthetically trained network to match those of the real-world datasets <cit.>. Although the label fine-tuning strategy enhances the model's performance for evaluation purposes on existing datasets, it is primarily an evaluative/regulative step. The downstream models, as trained exclusively with synthetic data, demonstrate strong performance on real-world data directly, underscoring its practical viability and the effectiveness of our pipeline. A schematic of our method for obtaining multi-modal annotations is depicted in figure <ref>, along with image samples of the generated data in figure <ref> showcasing the practical outcomes of our proposed pipeline. The key contributions in the proposed work are as follows: * Annotated Data Generation in a 3D-aware pipeline: Utilizes a 3DMM-controllable generative model to extract soft semantic and spatial labels across a comprehensive range of facial features, providing dense, multi-modal supervision essential for detailed facial analysis and parsing. * Multi-Task Framework for Synthetic Data: Employs a synthesized dataset to train a robust multi-task model that accurately predicts various facial attributes such as keypoints, segmentation, and depth, showcasing the potency of synthetic data in enhancing complex model training. Generative models show potential in overcoming the domain gap between synthetic and real-world data while providing a scalable and cost-effective alternative to traditional data collection and annotation methods for facial analysis. The cost of acquiring the data from these generative models can be reduced further by efficiently sampling the latent space for the data generation process guided by the down-stream tasks. In conclusion, we show that models trained on the controllable generative synthetic data are on par with the existing state-of-the-art on various facial analysis benchmarks. The project page is available at https://synth-forge.github.iohttps://synth-forge.github.io highlighting the links to additional resources, released dataset and code for reproducability. § RELATED WORKS Datasets for Facial Analysis: Deep learning methods that aim at facial analysis, such as face detection <cit.> and face parsing <cit.>, require high-quality face images with reliable & accurate task-specific annotations, e.g. facial landmarks or semantic labels. Several datasets <cit.> have been proposed in this regard, which contain tens of thousands of face images. More specifically, CelebAMask-HQ<cit.> contains 30,000 high-resolution face images selected from the CelebA<cit.> dataset by following CelebA-HQ<cit.>. All the images have manually annotated segmentation masks of facial attributes of 512 x 512 resolution, spread among 19 classes including all facial components and accessories. Another dataset LaPa<cit.> comprises over 22,000 facial images showcasing a wide array of expressions, poses, and occlusions. Each image in LaPa comes with an 11-category pixel-level label map and 106 facial landmark points. Though such datasets seem large-scale, they are often limited in terms of scalability and diversity, along with being resource intensive, and subject to human errors through manual annotations. To bridge this gap, <cit.> proposes a computer graphics inspired PBR-based synthetic data generation pipeline, procedurally generating 100,000 synthetic faces with precise 2D landmarks and per-pixel segmentation labels, allowing for vast exploration in training large-scale models on downstream tasks and real-world label adaptation. While the dataset in <cit.> boast high-fidelity and large scale nature, it is very expensive to build such a dataset owing to the cost of design, compute, and the domain-gap arising from the inherent nature of PBR-based methods. Generative Approaches for Face Synthesis: The progress in generative adversarial learning in the past decade <cit.> has led researchers to introduce learning-based methods for generating training data. <cit.> proposes a generative framework to enable data generation for multi-view representation learning and concludes that the representations learned using generated data outperform those learned directly from real data. In the context of face synthesis, methods like <cit.> propose a bi-directional method for face editing by manipulating semantic maps. A recent trend is to condition the face generation by learning an inherent 3D prior in an unsupervised setup, proposed in EG3D<cit.>. This conditioning only helps in improving the generation quality & view consistency, however, it doesn't provide any explicit control over the generation process, e.g. manipulating facial expressions. Recent advancements in diffusion models <cit.> have given rise to text-driven generation of high-quality and highly diverse images, with methods like <cit.> provide different types of control over the denoising process. <cit.> leverages the parametric head model to control facial image synthesis. The usage of the parametric head model enables control over facial expressions, head pose, and even lighting, however, it doesn't allow fine-grain control over hair, beard etc. Omniavatar<cit.> address this issue by combining both parametric as well as neural implicit head representation, where the parametric head model allows controllable generation and implicit representation models of other details such as hair, beard, accessories etc. Another similar and recent work, Next3D<cit.>, built on top of EG3D utilises a FLAME <cit.> mesh explicitly to generate a texture map and convert to Triplanar representation, followed by volumetric rendering similar to EG3D. Although these approaches generate highly photorealistic and diverse images, a caveat is the lack of control over generation of dense annotations. <cit.> addresses this issue by using a PTI-inversion <cit.> over the StyleGAN2 <cit.> latent space in the EG3D pipeline to generate multi-view data and 3D landmarks for further training landmark estimation task. However, the lack of further dense annotations still limits the utility of existing approaches. Through the proposed pipeline in our work, we aim to bridge the gap between synthetic data generation with control over annotations, and the ease of complexity of synthetic data generation. Multi-task Facial Analysis: While there exist datasets in the facial analysis domain with annotations for multiple tasks, the exploration in the domain of multi-task learning for face parsing and analysis is very recent. <cit.> provide a framework to mutually learn semantic segmentation and depth information through domain translation, and also release the LaPa-D dataset. <cit.> explores the domain of multi-task learning through dense pre-training on text-image pairs to learn robust representations, and then fine-tunes on face parsing, alignment, and attribute recognition.; However, The success of the approach can be attributed to the extensive pre-training on the text-image dataset, which may not be easily available on a large-scale. The work presented in <cit.> performs a cyclic self-regularization for face parsing tasks by utilizing related tasks such as edge estimation and edge categorization. The caveat with such approaches is the requirement of dense multi-task datasets which are resource intensive to compile. Multi-task approaches using partially annotated data allow for handling of sparsely annotated datasets in a multi-task manner, such as the approach shown in <cit.>. We follow a similar methodology towards multi-task learning with cross-task feature representations combined with the capability to generate large volumes of densely annotated synthetic data. The proposed approach towards multi-task learning allows for leveraging the relationships between tasks to extract richer feature representaions in the downstream tasks § METHOD Our research aims to enhance multi-task facial analysis, with a focus on synthetic data. We introduce a three-component generative pipeline: (A) Synthetic data generation employing a 3D-aware generative model with an annotation module, (B) Training a multi-task network on synthetic data, and (C) Real-world label fine-tuning for evaluation on standard real-world benchmarks. In Data Generation (A), detailed in <ref>, we use Next3D <cit.> and FLAME <cit.> to generate images and annotations for facial attributes. Synthetic Backbone (B) involves training a network with self-supervised learning, which is later frozen for Stage-II training (see <ref>). Label Finetuning (C), as elaborated in <ref>, fine-tunes this network prediction head on real images, fine-tuning each task's predictions for real-world application. This approach seeks to boost the accuracy and adaptability of facial analysis tasks. §.§ Data Generation Pipeline We incorporate Next3D <cit.> as the 3D-aware generative model which uses FLAME as the parametric head model (3DMM), allowing a disentangled control of the facial geometry and expression in the generated image. The generative approach in Next3D models dynamic and static components with two independent tri-plane branches. The first branch is a generative texture-rasterized tri-plane which operates on a view-dependent synthesized texture map from a StyleGAN2 <cit.> network. The inputs are a set of randomly sampled latent vector z ∼𝒩(0, I) and the camera parameters for the synthesized view c ∈ℝ^1×25, where c = K · [R | t] represents the camera intrinsic and extrinsic parameters. The generated textures are applied to a deformed FLAME head mesh which is parameterized by the shape, expression, and pose parameters denoted as β, ψ, and θ respectively. The second branch models the static components into a tri-plane feature map following the method proposed in EG3D <cit.>. The tri-plane features from both the branches are fused through a neural blending module and used for neural volume rendering, followed by an image super-resolution module to generate realistic face images, ℐ^synth∈ℝ^512 × 512 × 3. In our pipeline, annotations derive from Next3D's neural rendering and FLAME mesh transformations. Semantic segmentation and 68 3D landmarks from FLAME mesh's ground truth are projected onto generated images, utilizing camera parameters to align with Next3D's transformations, resulting in accurate 3D and 2D landmark representations as depicted in fig. <ref> and fig. <ref> (A). Minor misalignments exist in the transformation process due to the non-linear transformations through the StyleGAN modules in the Next3D pipeline. Towards the rectification and alignment of the generated annotations, we utilise an alignment process for the FLAME mesh which optimizes the consistency between the 3D mesh points and neural densities from the Next3D rendering module. More details and exploration about the alignment process are discussed in the Supplementary material. Depth maps are consistently aligned with RGB images already, owing to volumetric rendering based on NeRF<cit.>. This approach enables the creation of a diverse, large-scale annotated dataset, enhancing the training efficiency for subsequent stages. §.§ Multi-Task Synthetic Backbone (Stage I) In Stage I, we train a multi-task model on the synthetically generated data. This process is guided by a training regimen that capitalizes on the dense multi-task annotations provided by our data generation pipeline. Model Formulation: Following common multi-task learning paradigms <cit.>, the pipeline starts with the input image x_s ∈ℝ^3 × H × W, representing the synthetically generated image ℐ^synth in this case. This image undergoes initial processing through a feature encoder block ℰ: ℝ^3 × H × W→ℝ^C × H × W, to generate the feature map f_s = ℰ(x_s) ∈ℝ^C × H × W, where C represents the number of feature channels, is the basis for further task-specific processing. Each task's processing is handled by individual task heads ϕ^t_i:ℝ^C × H × W→ℝ^C' × H' × W'. For our tasks, these heads are ϕ^seg, ϕ^depth, and ϕ^kp, with their corresponding predictions denoted as ŷ^t_i_s = ϕ^t_i(f_s). Training Losses: The training for each task utilizes specific loss functions: L1-norm loss for depth estimation (ℒ^depth_s), cross-entropy loss for semantic segmentation (ℒ^seg_s), and L2-norm loss for keypoint estimation (ℒ^kp_s). The overall task loss is defined as: ℒ_s^task = 1/N∑_n=0^N ∑_t_i ∈𝒯λ_s,n^t_iℒ_s^t_i(ŷ^t_i_s,n, y^t_i_s,n) Here, N is the number of samples, 𝒯∈{seg, depth, kp} represents the set of tasks, λ_s,n^t_i is the loss weight for task t_i, and y^t_i_s,n is the ground truth label for the task. Additionally, to enhance feature representation in the backbone network, we incorporate a self-supervised loss term, based on an affine transformation ϵ applicable to both the input and the intermediate feature. The SSL loss term is formulated as: ℒ^SSL_s = 1/N∑_n=0^N ||ℰ(ϵ(x_s,n)) - ϵ(ℰ(x_s,n))||__2 Upon completion of the training, we freeze the weights of the multi-task synthetic backbone network for use in the next stage. The network in its frozen state is denoted as ℳ, and its predictions are ŷ^seg_s, ŷ^depth_s, and ŷ^kp_s for the respective tasks. §.§ Multi-Task Label Fine-tuning (Stage II) Stage II of our methodology focuses on label fine-tuning, a crucial process for adapting the multi-task network's predictions to various real-world datasets for fair evaluations. This stage utilizes the network trained in Stage I and fine-tunes its predictions to align with real-dataset annotations. The input to the pre-trained multi-task synthetic backbone network, denoted as ℳ, is x_r ∈ℝ^3 × H × W∈ℐ^real, from the real-world dataset. Processing this input through ℳ, we obtain predictions for segmentation, depth, keypoints, and the intermediate feature map, (ŷ^seg_s, ŷ^depth_s, ŷ^kp_s, f_s) = ℳ(x_r). Task-Specific Label Fine-Tuning Networks: For each task t_i ∈𝒯, we construct specific label fine-tuning networks, ϕ^t_i_L, such as ϕ^seg_L, ϕ^depth_L, and ϕ^kp_L, based on the tasks available in the real-world dataset. The predictions from these networks are formulated as ŷ_r^t_i = ϕ^t_i_L(ŷ_s^t_i, f_s), for each task t_i ∈𝒯. The inclusion of the intermediate feature map f_s in the fine-tuning process is optional and subject to exploration in our ablation studies. This approach is designed to account for the label disparity between synthetic and real annotations, accommodating attributes like hair and accessories that may not be present in the generated data but are found in real-world datasets. The training of stage II follows a similar routine as in Stage I for loss metrics. The implementation of this label fine-tuning stage significantly enhances the model's versatility and utility across diverse datasets. This approach not only aligns the model's predictions with real-world data but also ensures that the model retains its robustness and accuracy, thereby elevating its overall performance in practical applications. § EXPERIMENTS & RESULTS §.§ Experiment Setup Model Architecture: We employ the UNet <cit.> architecture for the multi-task synthetic backbone model ℰ and also for the individual heads for the depth and face parsing heads - ϕ^depth and ϕ^seg. For predicting the facial landmarks, we make use of the a 2 layer convolution network followed by a stacked hourglass architecture <cit.> with 2 stacks - which we denote as ϕ^kp in the text. For label finetuning, we follow the same architecture for the respective tasks. Data: The data used to train the multi-task prediction module ℳ is generated at a resolution of 512 × 512. We generate a dataset of 100,000 samples with varying identities and expressions by i.i.d. sampling the FLAME shape β and expression parameters ψ. To ensure that our synthetically generated data contains images from different viewpoints, we position the camera by sampling its azimuth and elevation angles from a range of [-π/4, π/4]. The field of view for the camera is sampled uniformly within a range of 12^∘ to 27^∘ and its distance from the subject is kept at a constant 2.7 meters. Unlike physics-based rendering methods, our data generation pipeline takes a fraction of the time to generate photo-realistic data with annotations to be leveraged for training. Using our data generation pipeline, it took about 7 hours to generate a dataset of 100k images with 3 different annotation modalities on a single NVIDIA V100 GPU with a batch size of 1. We use Next3D as the 3D aware generative model for our work which features disentangled control for facial geometry via FLAME. However, this generative model can be replaced by any existing 3D aware generative model that has the potential to leverage the 3D facial geometry to obtain facial (or presumably other kinds of) annotations. FLAME does not have access to the textures therefore we cannot obtain perfect annotations for certain categories like eyebrows and lips. Therefore, we leverage FLAME's texture maps <cit.> to manually annotate these categories onto the UV texture map that we use for segmentation - Fig. <ref>. Training: During training, we include data augmentations like affine transformations, perspective warping, random flipping, blurring, random erasing, adding noise, and, varying the brightness and contrast adjustments. The images are resized to a resolution of 256 × 256 for all our experiments. The multi-task prediction module ℳ during stage I, is trained entirely on the synthetically generated data. For stage I, we use a learning rate of 1e-3 for all our task heads (ϕ^kp, ϕ^depth and ϕ^seg) and 1e-4 for the feature encoder ℰ. During label-finetuning (stage II), the entire multi-task module ℳ is frozen. We pass the predictions of the output heads ŷ^t_i_s, optionally combining the feature maps obtained from the feature encoder ℰ, to their respective label-finetuning networks ϕ^t_i_L. The reason we pass the feature maps to the label-finetuning networks is to make the model aware of facial attributes like hair, eyebrows, and lips. The learning rate in all our experiments is annealed using a cosine decay schedule with a minimum learning rate of 1e-6. We use AdamW optimizer to train all our models and our experiments are implemented in PyTorch <cit.> and Pytorch 3D <cit.>. We use Pytorch Lightning <cit.> to manage and scale our experiments. §.§ Datasets Facial Landmark Localization: We evaluate our approach on the 300W dataset which is a popular landmark localization benchmark. We note that the 300W landmarks are different from the 68 landmarks obtained from FLAME. The difference is primarily observed around the jawline. Face Parsing: Face parsing involves classifying the face into different regions (like nose, eyes, lips, etc.) at a pixel level. We showcase our method's performance on two popular face parsing benchmarks - LaPa (Landmark guided face Parsing) <cit.> and CelebAMask-HQ <cit.>. The faces in LaPa dataset are segmented into 11 semantic classes. CelebAMask-HQ is a subset of CelebA-HQ dataset containg 30,000 high-resolution facial images wherein each image is segmented into various categories which include facial attributes like eyes, ears, nose, mouth, hair and skin. In addition to facial attributes, this benchmarks also provides annotations for facial accessories like earrings, necklace, hats and sunglasses. Depth: Using the depth maps curated by <cit.>, we qualitatively showcase the results from our approach on depth estimation. §.§ Results We evaluate our approach on two facial analysis tasks - facial landmark localization and face parsing. We also show qualitative results of our models in predicting facial depth. We compare our approach with existing methods that incorporate the use of synthetic data in 2 settings: a) using the entire sample set of 100k images for training, and b) using a randomly sampled batch of 10k images. We show the performance of our approach in contrast to the popular PBR based method <cit.>, and, using DECA <cit.> to infer the annotations for the images from our dataset. Table <ref> showcases results for landmark localization task on the 300W dataset using the Normalized Mean Error (NME) metric - normalized by the inter-ocular outer eye distance. We observe that our proposed synthetic backbone (SB) outperforms the PBR based method <cit.>. The same trend is observed after label-finetuning. Moreover, we also note that the performance of our approach is close to the state of the art methods trained entirely on real data for landmark localization. Table <ref> compares the results of our approach with other face parsing methods on the LAPA dataset. We note that our label-finetuned synthetic backbone is very similar in performance to the PBR based method <cit.>. We see a sharp increase in performance of our models when provided with the features f_s from our synthetic feature backbone ℰ. This is primarily seen in categories which are not a part of our annotation scheme like hair or have a considerable disparity from the segmentation maps in real world datasets - like eyebrows and lips. Table <ref> showcases results for face parsing on the CelebAMask-HQ dataset. We highlight that our approach is on-par with other state-of-the-art face parsing methods on this dataset. Fig. <ref> shows the qualitative results on samples from the LaPa validation set for all the modalities. Notice that the predictions from the synthetic model are aligned to the groundtruth landmarks, despite never explicitly being trained on real samples. The predictions from the finetuned model also exhibit similar performance. The face parsing maps from out SB model are able to localize and segment different parts of the face accurately. The label finetuned model shows aligned predictions to the ground-truth annotations. Finally, we highlight that the predicted depth map by the SB model is qualitatively superior compared to the ground-truth reconstructed depth provided by the LaPa-D <cit.> dataset. Notice how SB and SB+LF are able model the depth for glasses for the third identity. We showcase more qualitative results in the supplementary material. § CONCLUSION AND LIMITATIONS In conclusion, our work explores the integration of generative models for synthesizing high-quality face images and addresses the challenge of controllable annotations through the use of 3D Morphable Models (3DMMs). By leveraging generative synthetic data, coupled with dense multi-modal annotations, we demonstrate the efficacy of our approach in training a multi-task model for facial analysis. Our strategy includes label fine-tuning to enhance predictions on real-world datasets, supported by a robust dense multi-modal training pipeline with cross-task consistency and self-supervised losses. Our experiments reveal competitive performance against state-of-the-art single and multi-task models. Additionally, we contribute to the research community by releasing our curated dataset, pretrained models, and codebase, encouraging further exploration of synthetic data for real-world facial analysis tasks. Limitations: The segmentation from the texture pipeline of the FLAME model introduces the need for a label finetuning step as FLAME does not have access to the hair, and other features like facial accessories. Moreover, the generative model used might not be consistent with the 3d annotations provided by FLAME in extreme poses. We discuss in detail about this misalignment in the supplementary text. § ACKNOWLEDGEMENT We gratefully acknowledge the support of Mercedes-Benz Research and Development India for this work, especially in the form of compute clusters to generate data and train models. plain
http://arxiv.org/abs/2406.08618v1
20240612200154
A thermodynamic approach to adhesion and deformation of DNA-bound droplets
[ "Nicolas Judd", "Angus McMullen", "Sascha Hilgenfeldt", "Jasna Brujic" ]
cond-mat.soft
[ "cond-mat.soft" ]
m k_BT % vol/vol % wt/vol k_BT L>l<C>c<kbT name=Energy Unit, symbol=, description=Boltzmann constant times Temperaturetotal_dna name=Total DNA, symbol=, description=the total DNA on a dropletcomplement_dna name=Compliment DNA, symbol=, description=the total DNA on the compliment dropletpatch_dna name=Patch DNA, symbol=, description=the DNA in the patchpatch_thresh name=Patch DNA Threshold, symbol=, description=the DNA in the patchpatch_area name=Patch Area, symbol=, description=the area of the patcharea_limit name=Geometric Limit, symbol=, description=the geometric limit of the systemsurface_area name=Surface Area, symbol=, description=The surface area of the dropletsurface_tension name=Surface Tension, symbol=, description=The surface tension of the dropletspatch_conc name=Patch Concentration, symbol=, description=Surface Density of DNA in the patch droplet_conc name=Droplet Concentration, symbol=, description=Surface Density of DNA on a dropletpartner_conc name=Droplet Concentration, symbol=, description=Surface Density of DNA on the parner dropletlinker_area name=Linker Area, symbol=, description=Area that characterizes the closeness needed to find a partner in the patchener_binding name=Binding Energy, symbol=, description=Energy of complimentary pairs of DNA bindingener_spring name=Spring Energy, symbol=, description=Energy penalty of stretching the complex springener_deformation name=Deformation Energy, symbol=, description=Energy penalty of deforming the dropletener_interaction name=Onsauger Interaction Energy (from PNAS), symbol=, description=Energy of Onsauger interaction bindersener_total name=Total Energy, symbol=, description=Total energy functional of the dimer systemconfig_micro name=Configurational Microstate, symbol=, description=Configurational Microstate of the patchconst_bind name=Energy Constant of Binding, symbol=, description=Energy constant of bindingconst_spring name=Energy Constant of Spring, symbol=, description=Energy constant of Springbind_eff name=Energy Constant of Spring, symbol=, description=Energy constant of Springener_tot_def name=Total Deformed Regime Energy, symbol=, description=Total free energy functional of the deformed regimeener_tot_undef name=Total Undeformed Regieme Energy, symbol=, description=Total free energy functional of the undeformed regime./img/Center for Soft Matter Research, New York University, New York, NY 10003Center for Soft Matter Research, New York University, New York, NY 10003Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801Center for Soft Matter Research, New York University, New York, NY 10003§ ABSTRACT Here we derive and experimentally test a free energy functional that captures the adhesion of DNA-coated emulsion droplets. Generalizing previous approaches, the theory combines important energetic and entropic effects of microscopic DNA mechanics and droplet elasticity. It simultaneously predicts adhesion size, morphology, and binder concentration as a function of experimental control parameters. Notably, droplets transition from undeformed binding to flat droplet interfaces at a characteristic DNA coverage. These equilibrium predictions agree quantitatively with experiments on droplet-substrate and droplet-droplet binding, revealing a weak effective binding strength of 3.7±0.3kbT owing to entropic costs. Our results open the path to rich design strategies for making colloidal architectures. A thermodynamic approach to adhesion and deformation of DNA-bound droplets Jasna Brujic ========================================================================== Biological cells are a prime example of a system in which membrane deformation and molecular adhesion dictate cellular self-assembly into large-scale structures, such as tissues <cit.>. Their shape distribution and adhesion strength are crucial to biological function <cit.>. Analogously, droplets <cit.>, vesicles <cit.>, and colloids <cit.> coated with ligands are known to self-assemble into programmable architectures, such as chains <cit.>, foldamers <cit.>, clusters <cit.>, soft gels <cit.>, and biomimetic tissues <cit.>. These systems offer avenues for biomimicry in simplified model systems to help understand mechanisms underlying biological processes <cit.>. Moreover, they open the path to tunability in materials with novel mechanical and optical properties  <cit.>. On the molecular scale, the binder strength, flexibility, and specificity of interactions have been shown to influence the structure of self-assembled particulate networks <cit.>. On the scale of the particles, increasing the concentration of binders at the interface leads to more connected networks with a higher valence  <cit.> and larger deformations away from spherical <cit.>. Further, decreasing the stiffness of the particles or the surface tension of emulsion droplets allows for the formation of cohesive packings that mimic tissues <cit.>. To gain greater control over these systems, it is necessary to construct a theoretical model that relates molecular binder properties to the self-assembly of the constituent particles. The literature spans from microscopic models of protein-mediated cell adhesion <cit.> and cellular recognition <cit.>, to DNA-binding of vesicles <cit.>, and biomimetic droplets <cit.>. These models balance ligand binding strength with particle elasticity to reach mechanical equilibrium, which is distinct from adhering solid particles, where the ligands are immobile and the binding region is limited by the spherical geometry of the interfaces <cit.>. In the case of droplet-droplet binding, low concentrations of DNA result in adhesions with extended linkers between spherical interfaces <cit.>. Adding more DNA binders to the droplet surfaces leads to progressively larger and denser adhesion patches, which can deform into flat interfaces if the binders overcome the surface tension cost. Here we combine experiments and theory to develop a thermodynamic model for the deformed droplet regime, as shown in the schematics of <ref>. The cost of the energy of deformation, governed by surface tension surface_tension, balances with the configurational entropy gain of the binders to give the equilibrium patch DNA density and size as a function of the total numbers of DNA total_dna and complement_dna on the droplet surfaces. Comparing the free energies of the undeformed <cit.> and deformed bound droplets allows us to identify the DNA coverage that favors the deformed state. By varying surfactant concentration, we show that emulsions with a lower surface tension require less DNA to deform into flat patches, consistent with model predictions. Once deformed, the area of adhesion patch_area grows approximately with the square root of the number of DNAs patch_dna inside the adhesion patch, in good agreement with theory. While the nominal DNA binding strength const_bind plays an important role in determining the fraction of DNA recruited from the droplet surface into the patch, it does not significantly affect patch size. Thus, our statistical mechanics approach describes equilibrium droplet adhesion spanning spherical and deformed regimes, allowing one to tune macroscopic adhesion shape and strength from the bottom up. The free energy functional of the deformed system consists of the energy gain of DNA hybridization, the energy costs of binder stretching, crowding, and surface deformation, and the configurational entropy changes due to binder recruitment from the surface into the adhesion patch, ener_tot_def = ener_binding + ener_spring + ener_interaction + ener_deformation - kbTlnconfig_micro. At high droplet coverage and significant deformation, the theory assumes that the DNA linkers stand perpendicular to the bound interfaces, implying that the steric repulsion between the strands ener_interaction is negligible. Additionally, the Debye length in our system is on the order of 1, such that electrostatic repulsion can be ignored. The total hybridization energy ener_binding = - const_bindpatch_dna is reduced by a uniform molecular stretching penalty ener_spring = const_springpatch_dna, where patch_dna is the number of DNAs in a patch. The resulting effective binding energy is balanced against the cost of droplet deformation, modeled as ener_deformation = νsurface_tensionpatch_area^2/surface_area where ν is the number of deforming surfaces, surface_tension is the surface tension and patch_area and surface_area are the areas of the deformed patch and the undeformed droplet surface, respectively <cit.>. Equation (<ref>) is accurate if patch_area≪surface_area. Finally, configurational entropy is modeled by counting the number of microstates config_micro that total_dna and complement_dna DNA binders on the two surfaces can assemble into config_micro=total_dnapatch_dnacomplement_dnapatch_dnapatch_dna! patch_area^patch_dnalinker_area^patch_dna (surface_area-patch_area)^total_dna-patch_dna (surface_area-patch_area)^complement_dna-patch_dna where linker_area = 1/4(droplet_conc^-1/2 +partner_conc^-1/2)^2 is the linker area that characterizes the closeness needed by a DNA binder to find a partner in the patch, for initial DNA concentrations droplet_conc=total_dna/surface_area, partner_conc=complement_dna/surface_area<cit.>. Minimizing the free energy with respect to patch_area and patch_dna respectively gives ener_tot_defpatch_area=0= 2νsurface_tensionpatch_area/surface_area - patch_dna/patch_areakbT + (total_dna + complement_dna - 2patch_dna)/surface_area - patch_areakbT and ener_tot_defpatch_dna =0= -bind_eff + kbTln( patch_dna/patch_areasurface_area/1-patch_dna/total_dna(surface_area-patch_area/√(total_dna))^2/1-patch_dna/complement_dna), with bind_eff = const_bind - const_spring and the mean total_dna = ((total_dna^1/2 + complement_dna^1/2)/2)^2. Simultaneously solving equations (<ref>) and (<ref>) predicts patch_area and patch_dna as a function of experimental control parameters, total_dna, complement_dna, and surface_tension. Note that when the density of DNA outside the patch is low, <ref> simplifies to a square root law for the growth of patch_area with patch_dna, patch_area≈√(surface_areakbT/2νsurface_tension)√(patch_dna). Interestingly, the patch area does not depend on the DNA binding energy in this limit. To test the model, DNA-coated silane droplets stabilized with SDS surfactant are bound to a substrate surface coated with mobile complementary DNA of concentration c_s (<ref>(a)). In this case, the theory remains quantitatively valid when setting ν=1 and complement_dna=c_ssurface_area. High-intensity fluorescent adhesions are observed under the microscope (<ref>(b)), resolving the patch size patch_area and radial distribution of DNA, whose integral gives patch_dna. These data reveal the onset of droplet deformation by the change in morphology from ring-shaped to disk-shaped adhesions, as shown by the examples in the zoom in <ref>(b). Given that the droplets are above the focal plane, it is not possible to measure their perimeter intensity, from which total_dna could be inferred. Therefore, a second set of experiments is performed using droplet-droplet dimers (ν=2, <ref>(c)) functionalized with complementary DNA with distinct fluorescent labels (<ref>(d)). Here the intensities of both the patches and the droplet perimeters are simultaneously quantified, determining total_dna,complement_dna as well as patch_dna. We use the droplet-substrate data to test the effect of surface tension on the transition from undeformed to deformed binding and subsequent patch growth. In <ref>(a) we plot patch_area versus patch_dna for droplets with varying amounts of TMN co-surfactant. All data collapses onto a curve with a limiting patch area area_limit= 0.9μm^2 at low patch_dna, consistent with the geometric undeformed area area_limit=2π R_0L, where R_0=3.0±0.3μm is the droplet radius and L=46nm is the theoretical linker contour length, cf. <cit.>. The behavior changes qualitatively for higher patch_dna, showing continuous growth above area_limit. This growth is well fit by <ref> using increasing surface tensions with decreasing TMN concentration, consistent with measured values <cit.>. Patches with patch_area>area_limit show a decreased likelihood of ring morphology P(Ring)(<ref>(b)), consistent with the transition from a spherical droplet to one with a flat interface. Sigmoidal fits show that P(Ring)=0.5 is crossed at higher patch_dna with increasing surface tension, as shown in the inset in <ref>(a). The error bars correspond to the gap in data around the transition point, which may be due to hysteretic effects. <Ref> Panels (c) and (d) show typical radial profiles of DNA before and after this transition, at points indicated on the graph in <ref>(a). Droplet-droplet binding data reveals a similar growth in the area versus patch DNA curve (<ref>(a)) as the droplet-substrate case, in good agreement with the square root law in <ref> (dashed lines) for two emulsion batches. Lines of best fit correspond to surface tensions surface_tension=2.8 and 4.4±0.2. These droplets appear softer than those in the droplet-substrate case because of differences in surfactants <cit.>. Due to a lower spatial resolution of patches, signatures of the morphology transition are not apparent. On the other hand, the total number of DNA at the interface total_dna is readily measured, such that the full model solving Eqs. (<ref>) and (<ref>) can be tested using both patch_dna and total_dna. Fitting the data in <ref>(a),(b) with the full model (solid lines) gives lower surface tensions surface_tension=1.6 and 2.4±0.2 than <ref> (dashed lines), as well as an effective binding energy bind_eff=3.7kbT±0.3kbT. This discrepancy is because the full model additionally takes into account DNA recruitment and crowding effects inside the patch. Indeed, the DNA spacing, obtained from the triangular lattice spacing per molecule (2patch_area/√(3)patch_dna)^1/2 inside the patch, asymptotically reaches the crystalline limit of 4.7 <cit.> in <ref>(c). This packing density limit fixes the maximum patch size possible with a given DNA binder. Growing larger patches would require a decrease in surface tension or lateral attractive interactions <cit.>. Commonly, an effective binding energy per molecule is obtained from the logarithmic ratio of concentrations inside and outside the patch, shown in <ref>(d). Both in experiments and theory, this ratio decreases at large total_dna. This is because the further growth of a patch becomes less favorable due to increased crowding. The obtained values are in the range of a few kbT, consistent with those reported for the same DNA binders in the undeformed regime in <cit.> where droplet-droplet binding was shown to be reversible. While this ratio varies with total_dna, the quantity bind_eff is a property of the individual DNA molecules as modeled in <ref>. Its fitted value of 3.7kbT is much lower than the expected -Δ G≈ 48kbT<cit.> for the DNA sequence used in this study because it includes the conformational entropy loss of DNA molecules confined in the binding patch, as well as the stretch energy per molecule const_spring. For large total_dna, the logarithmic concentration ratio inferred from our theory with this value of bind_eff is in good agreement with experiment (<ref>(d)). Given the fact that we use the same DNA in both sets of experiments, we can use the estimated value for bind_eff in the full model for droplet-substrate binding. The solid lines in <ref>(a) show that the model successfully predicts patch_area for large patch_dna. In order to estimate the patch coverage where the droplet first deforms, we now compare the free energies ener_tot_def of the present formalism with those of the undeformed droplet theory ener_tot_undef<cit.>. The latter differs from <ref> in two respects: the microstate count config_micro is evaluated in the limit of small patch_dna, and the molecular contributions ener_spring, ener_interaction vary with position in the patch. This is because the spacing h between interfaces varies with the position for undeformed droplets, resulting in non-uniform concentration profiles (cf. <ref>(c)). The functional form of the nonlinear molecular spring energy s and the exclusion interaction between molecules are kept the same in the deformed case. Note that we do not neglect ener_interaction in either ener_tot_undef or ener_tot_def here, as the patch coverage patch_dna at the undeformed/deformed transition is relatively small. The uniform equilibrium distance h_0 between droplet and substrate in the deformed case is obtained by minimizing ener_tot_def with respect to h, from which the equilibrium spring energy per molecule const_spring=s(h_0) is deduced. <Ref> plots the free energies per molecule ener_tot_undef/patch_dna and ener_tot_def/patch_dna for the three surface tensions of <ref>(a). This identifies the surface_tension-dependent transition patch coverage patch_thresh (gray shading, also shown in <ref>(a),(b)). The functional dependence patch_thresh(surface_tension) predicted from theory is shown as the solid line in the inset of <ref>(a), in good agreement with the empirical transition values obtained from experimental patch morphology data. If we consider ener_total/patch_dna to control the melting temperature of a patch, then <ref> predicts that melting temperature is maximized for sparsely covered undeformed patches. On the other hand, the total free energy of the patch, i.e. the binding strength, is significantly higher in the deformed regime with large patch_dna. The kinetics of the transition between the two states upon changes in N remains an open question. However, the fact that ener_total/patch_dna is on the order of a few kbT implies that the system is reconfigurable and will evolve towards the equilibria described here. Therefore, tuning the microscopic properties of the binders and the mechanical properties of the particles then allows for flexible control over the shape, size, and strength of adhesion of particles with mobile linkers. For instance, the spring energies could be replaced with Hookean springs or catch bonds <cit.> in the case of protein-protein adhesion, or additional lateral interactions could be present, as in the case of DNA-condensation <cit.> or cis-bound cadherins <cit.>. Alternatively, droplets could be exchanged for soft particles with Hertzian contact mechanics <cit.> or liposomes <cit.> with membrane bending elasticity. These changes to the free energy functional open the path to designing an even broader variety of adhesive particles. Extending the theory to higher coordination numbers will give rise to novel particulate networks (e.g. colloidal gels) with well-defined tunable architectures. This work was supported by the NSF DMR grant No. 2105255 and the Swiss National Science Foundation through Grant No. 10000141. We thank Jérôme Bibette, Jean Baudry, and Frank Scheffold for insightful discussions. winkler_deformation_2003mcmullen_freely_2018-1apsrev4-2get arXiv to do 4 passes: Labels(s) may have changed. Rerun
http://arxiv.org/abs/2406.09256v1
20240613155837
Integral solutions to systems of diagonal equations
[ "Nick Rome", "Shuntaro Yamagishi" ]
math.NT
[ "math.NT", "11P55, 11P05, 11D45, 11D72" ]
Freudenthal Duality in Conformal Field Theory Arghya Chattopadhyay^amailto:arghya.chattopadhyay@umons.ac.bearghya.chattopadhyay@umons.ac.be, Taniya Mandal^btaniya.mandal@niser.ac.intaniya.mandal@niser.ac.in, Alessio Marrani^calessio.marrani@um.esalessio.marrani@um.es ^aService de Physique de l'Univers, Champs et Gravitation Université de Mons, 20 Place du Parc, 7000 Mons, Belgium ^bSchool of Physical Sciences, National Institute of Science Education and Research An OCC of Homi Bhabha National Institute Bhubaneswar 752050, India ^cInstituto de Física Teorica, Dep.to de Física Universidad de Murcia, Campus de Espinardo, E-30100, Spain Abstract Rotational Freudenthal duality (RFD) relates two extremal Kerr-Newman (KN) black holes (BHs) with different angular momenta and electric-magnetic charges, but with the same Bekenstein-Hawking entropy. Through the Kerr/CFT correspondence (and its KN extension), a four-dimensional, asymptotically flat extremal KN BH is endowed with a dual thermal, two-dimensional conformal field theory (CFT) such that the Cardy entropy of the CFT is the same as the Bekenstein-Hawking entropy of the KN BH itself. Using this connection, we study the effect of the RFD on the thermal CFT dual to the KN extremal BH. We find that the RFD maps two different thermal, two-dimensional CFTs with different temperatures and central charges, but with the same asymptotic density of states, thereby matching the Cardy entropy. In an appendix, we discuss the action of the RFD on doubly-extremal rotating BHs, finding a spurious branch in the non-rotating limit, and determining that for this class of BH solutions the image of the RFD necessarily over-rotates. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In this paper, we obtain the asymptotic formula for the number of integral solutions to a system of diagonal equations. We obtain the asymptotic formula for the number of solutions with variables restricted to smooth numbers as well. We improve the required number of variables compared to previous results by incorporating the recent progress on Waring's problem and the resolution of Vinogradov's mean value theorem. § INTRODUCTION We consider the system of equations defined by m_1,1 x_1^d + ⋯ + m_1,n x_n^d = μ_1 ⋮ m_R, 1 x_1^d + ⋯ + m_R, n x_n^d = μ_R, which we denote by M ^d = μ, where M = [m_i, j]_ 1 ≤ i ≤ R 1 ≤ j ≤ n is the coefficient matrix with integer entries, ^d = [ x_1^d; ⋮; x_n^d ] and μ = [ μ_1; ⋮; μ_R ]∈^R. The system of diagonal equations (<ref>) with μ = 0 was first studied by Davenport and Lewis <cit.> who established the following. Let d ≥ 3 and μ = 0. Suppose that all n variables occur explicitly in the equations (<ref>). Suppose that any linear combination, not identically zero, of the R rows of M contains more than (2 H + 3d - 1)R non-zero entries, where H = ⌊ 3 d log R d⌋. Suppose the equations (<ref>) have a non-singular solution in every p-adic field, and further, if d is even, a real non-singular solution. Then the equations (<ref>) have infinitely many solutions in integers. In fact, they obtain an asymptotic formula for the number of solutions. Their main results <cit.> are consequences of this theorem and require n ≥⌊ 9 R^2 d log (3 R d) ⌋ , ⌊ 48 R^2 d^3 log (3 R d^2) ⌋ , for the results to hold. By incorporating the breakthrough on Waring's problem by Vaughan <cit.>, Brüdern and Cook <cit.> improved the number of variables required to be n > n_0(d) R, where n_0(d) = 2 d (log d + O(loglog d) ), under a suitable “rank condition” on the coefficient matrix M. Their result obtains an asymptotic formula for the number of solutions with variables restricted to smooth numbers. Since the time of these two papers, there has been great progress regarding Waring's problem (for example, by Wooley <cit.>, <cit.> and more recently by Wooley and Brüdern <cit.>) and also the resolution of Vinogradov's mean value theorem (see the work by Bourgain, Demeter and Guth <cit.>, and by Wooley <cit.>). The purpose of this paper is to incorporate these recent progress to improve the required number of variables in the results mentioned above when d is large; for smaller values of d, we refer the reader to <cit.> and <cit.>. For X ≥ 1 and 𝔅⊆, we introduce the following counting function N( 𝔅; X) = #{∈ ( 𝔅∩ [1,X] )^n: M ^d = μ}. We will estimate this counting functions using the Hardy-Littlewood circle method. We define Ψ (M) to be the largest integer 𝔗 such that there exists {𝔇_1, …, 𝔇_𝔗}, where each 𝔇_i is a linearly independent set of R columns of M and 𝔇_i ∩𝔇_j ≠∅ if i ≠ j. We let T be a non-negative integer such that Ψ (M) ≥ T. The following are the main results of this paper. Suppose T ≥min{ 2^d + 1, d (d+1) + 1 }. Then there exits γ > 0 such that N(; X) = 𝔖ℑ X^n - d R + O( X^n - d R - γ ), where 𝔖 is the singular series defined in (<ref>) and ℑ is the singular integral defined in (<ref>). Given 1 ≤ Z ≤ X, we define the smooth numbers 𝒜(X, Z) = { x ∈ [1, X] ∩: prime p| x implies p ≤ Z }. Suppose d ≥ 20 and T ≥⌈ d (log d + 4.20032) ⌉ + 1. Then for η > 0 sufficiently small, there exits γ > 0 such that N(𝒜(X, X^η) ; X) = c(η) 𝔖ℑ X^n - d R + O( X^n - d R (log X)^ - γ ), where 𝔖 is the singular series defined in (<ref>), ℑ is the singular integral defined in (<ref>) and c(η) > 0 depends only on η. We remark that 𝔖 > 0 if the equations (<ref>) have a non-singular solution in every p-adic field, and ℑ > 0 if the equations (<ref>) have a real non-singular solution. §.§ Acknowledgements NR was supported by FWF project ESP 441-NBL while SY by a FWF grant (DOI 10.55776/P32428). The authors are grateful to Jörg Brüdern and Trevor Wooley for very helpful discussions regarding his paper <cit.> and recent developments in Waring's problem, respectively. §.§ Notation We make use of the standard abbreviations e(z) = e^2π i z and e_q(z) = e^2 π i z/q. Given a vector = (a_1, …, a_R) ∈^R, by 0 ≤≤ q we mean 0 ≤ a_i ≤ R for each 1 ≤ i ≤ R. § PRELIMINARIES Let 𝔏 > 0. We define the major arcs 𝔐_𝔏 = ⋃_1 ≤ q ≤𝔏⋃_∈^R 0 ≤≤ q (q, ) = 1 {θ∈ [0,1]^R: | q θ_i - a_i | < 𝔏 X^ - d (1 ≤ i ≤ R) }, and the minor arcs 𝔪_𝔏 = [0,1]^R∖𝔐_𝔏. Given C > 0, we define 𝔑_𝔏(C) = ⋃_1 ≤ q ≤ C 𝔏⋃_ 0 ≤ a ≤ q (q, a) = 1 {θ∈ [0,1]: | q θ - a | < C 𝔏 X^ - d}. §.§ Weyl sum In this section, we collect two results which are the main ingredients to prove Theorem <ref>; both of these are consequences of the resolution of Vinogradov's mean value theorem (by Bourgain, Demeter and Guth <cit.> and by Wooley <cit.>). Let d ≥ 2. Let α∈ and suppose that there exist q ∈ and a ∈ with (q, a)=1 such that |α - a/q |≤ q^-2 and q ≤ X^d. We define λ(d) = 1/2^d-1 if 2 ≤ d ≤ 5, 1/d(d - 1) otherwise. Then |∑_1 ≤ x ≤ X e(α x^d) |≪ X^1+ε ( q^-1 + X^-1 + q X^-d)^λ(d), for any ε > 0. The bound for 2 ≤ d ≤ 5 is the classic Weyl's inequality <cit.>. The other estimate for larger d is a consequence of the resolution of Vinogradov's mean value theorem (c.f. <cit.>). Let d ≥ 2 and s a real number such that s ≥min{2^d,d(d+1)}. Then ∫_0^1 |∑_1 ≤ x ≤ X e(α x^d) |^s dα≪ X^s - d + ε, for any ε > 0. The bound when 2^d ≤ d(d+1) is the classic version of Hua's lemma <cit.>. The other estimate for larger d is a consequence of the resolution of Vinogradov's mean value theorem (c.f. <cit.>). §.§ Smooth Weyl sum In this section, we collect few key estimates regarding the smooth Weyl sum to prove Theorem <ref>. Let d ≥ 3. We let f(α; X, Z) = ∑_ x ∈𝒜(X, Z) e(α x^d). We need two estimates from <cit.>. We begin with <cit.> which is obtained by combining <cit.> and <cit.>. <cit.> Let d ≥ 3. Let ε > 0 sufficiently small. Suppose η > 0 is sufficiently small. Then there exists γ = γ(d) > 0 such that given α∈ [0,1] one of the following two alternatives holds: (i) we have | f(α; X, X^η) | < X^1 - γ; (ii) there exist 0 ≤ a ≤ q, (q, a) = 1 such that f(α; X, X^η) ≪ q^ε X ( q + X^d | q α - a| )^ - 1/2d (log X)^3. The following is <cit.>, which is a special case of <cit.>. <cit.> Let d ≥ 3. Suppose η > 0 is sufficiently small. Let A_0 > 0. Let Suppose (q,a) = 1 and 1 ≤ q ≤ (log X)^A_0 and |q α - a| ≤ (log X)^A_0 X^-d. Then f(α; X, X^η) ≪ X q^ε (q + X^d |α - a|)^- 1 / d , for any ε > 0. We make use of these two lemmas to prove the following. Let δ > 0, A = 2 d δ and 𝔏 = (log X)^A. Suppose η > 0 is sufficiently small. Let C_0 > 0 be sufficiently large. If |f(α; X, X^η) | > X (log X)^- δ, then α∈𝔑_𝔏(C_0). Since we are in alternative (ii) of Lemma <ref>, it follows that X (log X)^- δ < C q^ε X ( q + X^d | q α - a| )^ - 1/2d (log X)^3, for ε > 0 sufficiently small and some C > 0, which in turn implies q^1/2d < C q^ε (log X)^δ + 3 and ( X^d | q α - a| )^1/2d < C q^ε (log X)^δ + 3. Therefore, by setting A_0 = (δ + 3) 4 d, we obtain 1 ≤ q < (log X)^ 4 d(δ + 3) and |q α - a| < (log X)^A_0 X^-d. It then follows from Lemma <ref> that X (log X)^- δ < C_1 q^ε X (q + X^d |q α - a|)^- 1/d, for some C_1 = C_1(d, δ, ε) > 0, which in turn implies q^1/d < C_1 q^ε (log X)^δ and ( X^d | q α - a| )^1/d < C_1 q^ε (log X)^δ. Therefore, for C_0 > 0 sufficiently large and 𝔏 = (log X)^A with A = 2 d δ, it follows that α∈𝔑_𝔏(C_0) as desired. Finally, we have the following mean value estimate. The result may be deduced by closely following the proof of <cit.>. For completeness we present the details in Section <ref>. Let d ≥ 20 and s be an integer such that s ≥⌈ d (log d + 4.20032) ⌉. Let η > 0 be sufficiently small and 1 ≤ Z ≤ X^η. Then ∫_0^1 |f(α; X, Z)|^s dα≪ X^s - d. § SINGULAR SERIES AND SINGULAR INTEGRAL In this section, we prove estimates regarding the singular series and the singular integral. We let Col(M) denote the set of columns of M. §.§ Singular series Let S(q,a) = ∑_1 ≤ x ≤ q e_q (a x^d). We define the singular series 𝔖 = ∑_q =1^∞ A (q), where A (q) = q^- n ∑_ 1 ≤≤ q (q, ) = 1 ∏_∈̧Col( M ) S(q, .)̧· e_q ( - ∑_i = 1^Rμ_i a_i ). It follows from the following lemma that this series is absolutely convergent. Let q ∈ℕ. Then A (q) ≪ q^- R( T /d - 1 ). By <cit.>, we have |S(q, .)̧| = |S(q /(q, .)̧ , / (q, .)̧ ) | ≪( q/(q, .)̧)^1 - 1/d. Applying Hölder's inequality, it follows that |A(q)| ≤ q^ n - T R ∑_ 1 ≤≤ q (q, ) = 1 ∏_ℓ = 1^T∏_∈̧𝔇_ℓ |S(q, .)̧| ≪ q^ n - T R /d∏_ℓ = 1^ T ( ∑_ 1 ≤≤ q (q, ) = 1 ∏_∈̧𝔇_ℓ(q, .)̧^- T /d)^ 1 / T . Let 1 ≤ℓ≤ T and denote =̱ M( 𝔇_ℓ ). Then it is clear that ||̱≪ 1. Since M( 𝔇_ℓ ) is invertible, we have ∑_ 1 ≤≤ q (q, ) = 1 ∏_∈̧𝔇_ℓ(q, .)̧^- T /d ≪ ∑_1 ≤≤̱q∏_i = 1^R(q, b_i)^- T /d ≪ ∑_ g_i | q 1 ≤ i ≤ R (g_1 ⋯ g_2n+1)^- T (d-1)/d q^R /g_1 ⋯ g_R ≪ q^R. The result follows on substituting this estimate into (<ref>). Let us define the truncated singular series 𝔖(B)= ∑_1 ≤ q ≤ B A (q) for any B > 1. Suppose T > d (R + 1)/R. Then | 𝔖 - 𝔖 (B)| ≪ B^1 - R (T/d - 1). The first part of the statement is obtained by Lemma <ref> as follows | 𝔖 - 𝔖 (B)| ≤∑_q > B |A (q)| ≪∑_q > B q^- R (T/d - 1)≪ B^1 - R (T/d - 1). Since A (q_1 q_2) = A (q_1) A (q_2) for any coprime positive integers q_1 and q_2, we have 𝔖 = ∏_p primeχ(p), where χ(p) = 1 + ∑_k = 1^∞ A (p^k). §.§ Singular integral Let I(β) = ∫_0^1 e(βξ^d) dξ. We define the singular integral ℑ = ∫_^R∏_∈̧Col(M) I(.)̧· e ( - 1/X^d∑_i = 1^Rμ_i γ_i ) d, and also the truncated singular integral ℑ(B) = ∫_ || ≤ B ∏_∈̧Col(M) I(.)̧· e ( - 1/X^d∑_i = 1^Rμ_i γ_i ) d for any B > 0. Suppose T > d. Then ℑ (B) = ℑ + O(B^1 - T / d ) for any B > 0. We begin with the bound I( . )̧ = ∫_0^1 e( . ξ̧^d )dξ≪min{ 1, |.|̧^-1/d}, which for instance can be found in <cit.> or <cit.>. It then follows by Hölder's inequality that |ℑ - ℑ (B) | ≤ ∫_|| > B∏_∈̧Col(M)min{ 1, |. |̧^-1/d}d ≤ ∫_|| > B∏_ℓ = 1^ T ∏_∈̧𝔇_ℓmin{ 1, |. |̧^-1/d}d ≤ ∏_ℓ = 1^ T ( ∫_|| > B∏_∈̧𝔇_ℓmin{ 1, | . |̧^-1/d}^ T d)^1/T. By the change of variable = M(𝔇_ℓ), we obtain ∫_|| > B∏_∈̧𝔇_ℓmin{ 1, | . |̧^-1/d}^ T d≤∫_||≫ Bmin{ 1, ||^-1/d}^ T d≪ B^1 - T /d for each 1 ≤ℓ≤ T. On substituting this estimate into (<ref>), it follows that |ℑ - ℑ (B) | ≪ B^1 - T /d. § THE HARDY-LITTLEWOOD CIRCLE METHOD Let 𝔅 = ℕ or 𝒜(X, X^η). For ∈ [0,1]^R and ∈̧Col(M), we introduce the exponential sum S_() = S_( 𝔅; ) = ∑_ x ∈𝔅∩ [1, X] e(.̧ x^d). Then N (𝔅; X) = ∫_[0,1]^R∏_∈̧Col (M) S_( 𝔅; ) · e( - ∑_i = 1^Rμ_i θ_i ) d. We define 𝔏 = X^δ 𝔅 = ℕ, (log X)^A 𝔅 = 𝒜(X, X^η). Also throughout the remainder of the paper, unless stated otherwise, we assume d ≥ 2 if 𝔅 = ℕ, and d ≥ 3 if 𝔅 = 𝒜(X, X^η). The following lemma allows us to understand when a phase of the form .̧ is in the minor arcs. Given a set of vectors 𝔇 = {_̧1, …, _̧R}, we denote by M( 𝔇 ) = [ _̧1 ⋯_̧R ] the matrix with these vectors as columns. Let 𝔇⊆Col( M ) be a set of R linearly independent vectors. Let C_0 > 0 and C > 1 be sufficiently large with respect to M, n and C_0. If .̧∈𝔑_𝔏(C_0) for all ∈̧𝔇, then θ_i ∈𝔑_𝔏(C) for all 1 ≤ i ≤ R. We have = M( 𝔇 )^-1[ a_1/q + E_1; ⋮; a_R/q + E_R ] for some 1 ≤ q ≤ C_0 𝔏 and 1 ≤ a ≤ q such that (q, a) = 1 and |E_i| < C_0 𝔏 X^- d / q for each 1 ≤ i ≤ R. The result then follows by simplifying this equation. §.§ The minor arc estimate Let us recall the definition of 𝔏 from (<ref>). Suppose T ≥min{ 2^d + 1, d (d+1) + 1 } 𝔅 = ℕ, ⌈ d (log d + 4.20032) ⌉ + 1 𝔅 = 𝒜(X, X^η). Suppose η > 0 is sufficiently small. Then, we may choose δ, A > 0 such that there exists γ > 0 such that ∫_𝔪_𝔏∏_∈̧Col(M) |S_(𝔅; θ)| dθ≪ X^n - d R𝔏^- γ. Let 𝔇_1, …, 𝔇_ T be the disjoint subsets of Col(M) as in Definition <ref>. We begin by applying Lemma <ref> with 𝔇_T = {_̧1, …, _̧R}. Suppose θ∈𝔪_𝔏, which in particular implies that there exists 1 ≤ i ≤ R for which θ_i ∈ [0,1] ∖𝔑_𝔏(C). Then by Lemma <ref> there exists ∈̧𝔇_T such that .̧∈ [0,1] ∖𝔑_𝔏(C_0). Here C_0 > 0 is a sufficiently large constant if 𝔅 = 𝒜(X, X^η), and C_0 = 1 if 𝔅 =. Therefore, we may decompose 𝔪 as 𝔪_𝔏 = ⋃_ 1 ≤ i ≤ R 𝔪^(i), where 𝔪^(i) = {θ∈𝔪_𝔏 : _̧i.∈ [0,1] ∖𝔑_𝔏(C_0) }. Thus, max_∈𝔪_𝔏∏_∈̧𝔇_ T |S_(θ)| ≪ ∑_ i = 1 ^ R max_∈𝔪^(i) |S__̧i(θ)| ∏_∈̧𝔇_ T ∖{_̧i } |S_(θ)| ≪ X^R - 1max_α∈ [0,1] ∖𝔑_𝔏(C_0)| ∑_x ∈𝔅∩ [1,X] e (α x^d) | ≪ X^R 𝔏^- γ, for some γ > 0, where the final inequality follows from Lemmas <ref> and <ref>. It then follows by Hölder's inequality that ∫_𝔪_𝔏∏_∈̧Col(M) |S_()| d ≪ X^R 𝔏^- γ∫_[0,1]^R∏_ℓ = 1^ T - 1 ∏_∈̧𝔇_ℓ| S_(̧ ) |d ≪ X^R 𝔏^- γ∏_ℓ = 1^T - 1( ∫_[0,1]^R ∏_𝐜∈𝔇_ℓ| S_(̧ ) |^T - 1d)^1/T - 1. Since each M (𝔇_ℓ) is an invertible matrix, by the change of variables ' = M (𝔇_ℓ)^t. and Lemmas <ref> and <ref>, we obtain ∫_[0,1]^R ∏_𝐜∈𝔇_ℓ| S_(̧ ) |^T - 1d ≪ ∫_[0,1]^R∏_i=1^R | ∑_ x ∈𝔅∩ [1,X] e( θ'_i x^d) |^T - 1d' ≪ X^ R (T - 1 - d). Finally, the result follows on substituting this estimate into (<ref>). §.§ Major arc analysis We define 𝔐_𝔏^+ = ⋃_1 ≤ q ≤𝔏⋃_∈^R 0 ≤≤ q (q, ) = 1 {θ∈ [0,1]^R: | q θ_i - a_i | < q 𝔏 X^ - d (1 ≤ i ≤ R) }, which clearly satisfies 𝔐_𝔏⊆𝔐_𝔏^+. Suppose that q ∈, a ∈ and β = α - a/q. Then f(α) = q^-1 S(q,a) I(X^d β) + O ( q (1 + X^d |β|) ). The statement with additional hypothesis (q,a) = 1 follows from <cit.>. Suppose (q, a) = g and let q_0 = q/g and a_0 = a/g. Then q^-1 S(q,a) = q^-1∑_1 ≤ x ≤ q e_q (a x^d) = q^-1∑_1 ≤ x ≤ q e_q_0 (a_0 x^d) = q^-1 g ∑_1 ≤ x ≤ g e_q_0 (a_0 x^d) = q_0^-1 S(q_0, a_0). Therefore, we see that we may remove the coprimality condition. For the smooth Weyl sum we have the following. Suppose that 1 ≤ q ≤ Z, a ∈ and β = α - a/q. Then f(α; X, Z) = q^-1 S(q,a) w(β) + O ( q X/log X (1 + X^d |β|) ), where w(β) = ∑_Z^d < m ≤ X^d1/d m^1/d - 1 ϱ( log m/ d log Z) e(β m) and ϱ is the Dickman's function (for example, see <cit.>). The statement with additional hypothesis (q,a) = 1 is precisely <cit.>. The coprimality condition may be removed in the same way as in the proof of Lemma <ref>. Let |β| < 𝔏 X^-d and w be as in Lemma <ref>. Then w(β) = ϱ( d log X/d log Z) X I( X^d β ) + O ( X/log X). Let us denote T(y) = ∑_Z^d < m ≤ y1/d m^1/d - 1 e(β m). Then, by summation by parts, it follows that w(β) = ∑_Z^d < m ≤ X^d1/d m^1/d - 1 e(β m) ϱ( log m/d log R) = T(X^d) ϱ( d log X/d log Z) + O (1 + ∫_Z^d^X^d |T(y)| 1 /y log Rd y ). Since |T(y)| ≪ y^1/d, we have ∫_Z^d^X^d |T(y)| 1 /y log Z dy ≪1/log X∫_Z^d^X^d y^1/d - 1d y ≪X/log X. Therefore, we obtain w(β) = ϱ( d log X/d log Z) ∑_1 ≤ m ≤ X^d1/d m^1/d - 1 e(β m) + O ( X/log X). By the mean value theorem, we obtain 1/d∑_1 ≤ m ≤ X^d m^1/d-1 e ( β m ) = 1/d∫_0^X^d x^1/d-1 e ( β x ) d x + O( 1 + ∑_1 ≤ m ≤ X^d m^1/d-1(m^-1 + |β|) ) = ∫_0^X e ( β t^d ) d t + O( 1 ) = X ∫_0^1 e ( X^d β y^d ) d y + O( 1 ) = X I( X^d β ) + O( 1 ). Let now combine the above three lemmas in the following convenient manner. Let η > 0 be sufficiently small and C_𝔅 = 1 𝔅 = , ϱ(1/η) 𝔅 = 𝒜(X, X^η). Let δ, A > 0 be sufficiently small. Suppose that 0 ≤ a ≤ q ≤𝔏 and |β| < 𝔏 X^-d. Then ∑_ x ∈𝔅∩ [1, X] e(α x^d) = C_𝔅 X q^-1 S(q,a) I(X^d β) + O ( X 𝔏^- (R + 3)) ). Suppose d( R + 1)/R < T. Let η > 0 be sufficiently small and C_𝔅 as in (<ref>). Then there exists λ > 0 such that ∫_𝔐_𝔏 ^+∏_∈̧Col(M) S_(̧) e(-∑_i = 1^Rμ_i θ_i )d = C_𝔅^n X^n - d R𝔖ℑ + O( X^n - d R𝔏^- λ). It follows from the definition of 𝔐^+_𝔏 and Lemma <ref> that ∫_𝔐^+_𝔏∏_∈̧Col( M ) S_(̧) e (-∑_i=1^Rμ_i θ_i ) d = C_𝔅^n X^n ∑_1 ≤ q ≤𝔏 q^- n ∑_ 1 ≤≤ q (q, ) = 1 ∏_∈̧Col( M ) S_(̧ /q) · e_q (-∑_i=1^Rμ_i a_i ) ∫_|| < 𝔏 X^-d∏_∈̧Col( M ) I ( X^d .̧ ) · e (-∑_i=1^Rμ_i γ_i ) d + O ( X^ n - 1 X /𝔏^R+3𝔏^ R + 2 /X^d) = C_𝔅^n X^n 𝔖 (𝔏) ∫_|| < 𝔏 X^-d∏_∈̧Col( M ) I (X^d .̧ ) · e (-∑_i=1^Rμ_i γ_i ) d + O ( X^ n - d 𝔏^ -1 ) = C_𝔅^n X^n- d 𝔖 (𝔏) ∫_|| < 𝔏∏_∈̧Col( M ) I( .̧ ) · e ( - 1 /X^d∑_i=1^Rμ_i γ_i ) d + O ( X^ n - d 𝔏^ -1 ) = C_𝔅^n X^n - d𝔖 (𝔏) ℑ (𝔏) + O ( X^ n - d 𝔏^ -1 ). Finally, we obtain from Lemmas <ref> and <ref> that 𝔖 (𝔏) ℑ (𝔏) = 𝔖ℑ + O ( 𝔏^ 1 - R (T/d - 1) + 𝔏^1 - T/d). Finally, on recalling (<ref>) and 𝔐_𝔏⊆𝔐_𝔏^+, Theorems <ref> and <ref> follow by combining Propositions <ref> and <ref>. § PROOF OF THEOREM <REF> As explained in Secion <ref>, we closely follow the proof of <cit.> to prove Theorem <ref>. A real number Δ_s is referred to as an admissible exponent (for d) if it has the property that, whenever ε > 0 and η is a positive number sufficiently small in terms of ε, k and s, then whenever 1 ≤ Z ≤ X^η and X is sufficiently large, one has ∫_0^1 |f(α; X, Z)|^s dα≪ X^s - d + Δ_s + ε. We assume that we have available an admissible exponent Δ_u for each positive number u (which we know we can assume as explained in <cit.>). When d ≥ 4, we define G_0(d) = min_v ≥ 2( v + Δ_v/τ(d) ), where τ(d) = max_w ∈d - 2 Δ_w/4 w^2. Given a real number s ≥ 2, we also define Δ_s^* = min_ 0 ≤ t ≤ s - 2 ( Δ_s - 2 - t τ (d) ), and refer to Δ_s^* as an admissible exponent for minor arcs. Let 𝔐(Q) = ⋃_1 ≤ q ≤ Q⋃_0 ≤ a ≤ q (q, a) = 1{α∈ [0,1]: |q α - a | ≤ Q X^-d}, where 𝔪(Q) = [0,1] ∖𝔐(Q). The key estimate we make use of is the following, which is <cit.>. Suppose that d ≥ 3, s ≥ 2d + 3 and Δ_s^* is an admissible exponent for minor arcs with Δ^*_s < 0. Let ν be any positive number with ν < min{ 2 |Δ_s^*| / d , 1/ 6 d}. Then, when 1 ≤ Q ≤ X^d/2, one has the uniform bound ∫_𝔪(Q) | f(α; X, Z) |^s dα≪ X^s - d Q^- ν. Let L = ⌊ d log X/ log 4 ⌋. We set Q = L^1/15, and we specify η to be sufficiently small in the context of the (finitely many) admissible exponents that must be discussed in determining τ(d) and G_0(d). Let 𝔎 = ⋃_1 ≤ q ≤ Q⋃_ 0 ≤ a ≤ q (q,a) = 1 𝔎(q,a), where 𝔎(q,a) = {α∈ [0,1]: |α - a/q| ≤ Q X^-d}, and then put 𝔨 = [0,1) ∖𝔎. Suppose s ≥max{⌊ G_0 (d) ⌋ + 1, 2d + 3 }. Then there exists a positive number v with v ≥ 2 and an admissible exponent Δ_v for which Δ_s^* = Δ_v - (s - v) τ(d) = - τ(d) (s - G_0 (d) ) < 0. Put ν = min{ |Δ_s^*|/d, 1/(18 d) }. Then we see from Theorem <ref> that ∫_𝔪(Q) |f(α; X, Z)|^s dα≪ X^s - k Q^-nu≤ X^s - k. Finally, since 𝔨⊆𝔪(Q), we may conclude that ∫_𝔨 |f(α; P, R)|^s dα≤∫_𝔪(Q) |f(α; X, Z)|^s dα≪ X^s - k Q^-nu≤ Z^s - k. Next we attend to the contribution of the major arcs 𝔎. Suppose that α∈𝔎(q,a) ⊆𝔎. The standard theory of smooth Weyl sums shows that there is a positive number c(η) such that f(α; X, Z) = c(η) q^-1 S(q, a) v (α - a/q) + O(X L^-1/4). Since 𝔎 has measure O(Q^3 X^-d), we see that ∫_𝔨 |f(α; X, Z)|^s dα≪ c^s 𝔖(Q) ℑ(Q) + O(X^s - d Q^3 L^-1/4), where ℑ(Q) = ∫_- Q X^-k^Q X^-k |v(β)|^s dβ, v(β) = 1/d∑_1 ≤ m ≤ X^d m^1/d - 1 e(β m). and 𝔖(Q) = ∑_1 ≤ q ≤ Q∑_ 1 ≤ a ≤ q (q,a) = 1 q^-s |S(q,a)|^s. Notice that since Q = L^1/15, the error term above is O(X^s-k L^-1/20). Familiar estimates from the theory of Waring's problem (see <cit.> or Section <ref>) show that under the hypothesis on s at hand, |𝔖(Q)| ≪ 1 and |ℑ(Q)| ≪ X^s - d. Therefore, we obtain ∫_0^1 |f(α; X, Z)|^s dα≪ X^s - d. Finally, when d ≥ 20, it is proved in the proof of <cit.> that ⌊ G_0(d) ⌋≤⌊ d (log d + 4.2003199) ⌋≤⌈ d (log d + 4.20032) ⌉ - 1, and the result follows. rome
http://arxiv.org/abs/2406.09274v1
20240613161252
Doubled Shapiro steps in a dynamic axion insulator Josephson junction
[ "Yu-Hang Li", "Ziqian Zhou", "Ran Cheng", "Hua Jiang", "X. C. Xie" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
Jet formation in post-AGB binariesBased on observations made with the Mercator Telescope, operated on the island of La Palma by the Flemish Community, at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. T. De Prins1 H. Van Winckel1 J. Ferreira2 O. Verhamme1 D. Kamath34 N. Zimniak2 J. Jacquemin-Ide5 Received 16 February 2024 / Accepted 12 June 2024 ========================================================================================================================================================================================================================================================== § ABSTRACT Abstract: Dynamic axion insulators feature a time-dependent axion field that can be induced by antiferromagnetic resonance. Here, we show that a Josephson junction incorporating this dynamic axion insulator between two superconductors exhibits a striking doubled Shapiro steps wherein all odd steps are completely suppressed in the jointly presence of a DC bias and a static magnetic field. The resistively shunted junction simulation confirms that these doubled Shapiro steps originate from the distinctive axion electrodynamics driven by the antiferromagnetic resonance, which thus not only furnishes a hallmark to identify the dynamic axion insulator but also provides a method to evaluate its mass term. Furthermore, the experimentally feasible differential conductance is also determined. Our work holds significant importance in condensed matter physics and materials science for understanding the dynamic axion insulator, paving the way for its further exploration and applications. Introduction The advent of dynamic axion insulator (DAI) <cit.> has sparked a surge of interest across multidisciplinary fields including particle physics, cosmology, optics, and particularly condensed matter physics <cit.>. Because of the inherent dynamic axion field, DAI possesses an additional term ℒ_θ=θ(t)αE·B/π in the Lagrangian <cit.>, where θ(t) is the time-dependent, massive axion field, α is the fine structure constant, E and B are conventional magnetoelectric field. This Lagrangian introduces a magnetoelectric contribution to the Maxwell's equations, making it possible to realize the axion polaritons or axion instability in condensed systems <cit.>, to drive anomalous magnetoelectric transport<cit.>, to detect the dark matters <cit.>, and so on <cit.>. Several candidate systems have been predicted to be DAIs <cit.>. In particular, recent first principle calculations suggest that Mn_2Bi_2Te_5 and its family, which possess coexisting antiferromagnetic ordering and topology <cit.>, may be materials of this category <cit.>. Since the axion field in these materials originates from the exchange interaction between topological electrons and the antiparallel magnetic moments, or the Néel vectors, the axion dynamics can then be readily achieved through the magnetic fluctuation that stimulates a time-dependent Néel vector <cit.>. However, because of the ultrafast dynamics of the axion field, detecting this novel state remains an outstanding challenge, thereby significantly hindering its further exploration and potential applications. Meanwhile, there is also fierce debate whether the magnitude of the axion mass in condensed matter materials is on the order of eV or meV <cit.>. In this work, we demonstrate that this exotic quantum state can be unambiguously verified by the transport signature of a superconductor-DAI-superconductor Josephson junction. Since DAIs possess spontaneous antiferromagnetic ordering and axion field, an antiferromagnetic resonance (AFMR) can occurs in DAI under the driven of a linear-polarized microwave if the microwave frequency ω matches its intrinsic AFMR frequency <cit.>. In this case the Néel vector, hence the magnetic exchange gap in the DAI, become time-dependent, giving rise to a dynamic axion field binding to the AFMR. The quantitative expression for this dynamic axion field turns out to be a sinusidal function possessing a strikingly doubled frequency ω_1=2ω. Subsequently, this dynamic axion field leads to a magnetoelectric current in the presence of a static magnetic field, which becomes a harmonic supercurrent when the DAI is sandwiched between two superconductors and simultaneously, populates a coherent phase difference possessing the same doubled frequency ω_1=2ω. On the other hand, a DC bias V_0 can also induce a harmonic phase difference across the Josephson junction, whose frequency is proportional to the applied bias voltage is ω_2=2eV_0. Consequently, when the two frequencies ω_1 and ω_2 are commensurate with each other, such a superconductor-DAI-superconductor Josephson junction exhibits a remarkable doubled Shapiro steps where all odd steps are completely suppressed. Because the magnetoelectric current originates from the axion electrodynamics driven by the AFMR, those Shapiro steps thus not only furnish a fingerprint of DAIs but also provide a method to evaluate the axion mass. Results Effective model of the DAI The generic low-energy effective Hamiltonian for a DAI defined on the basis of ψ_k=[ |p^+_z,↑⟩, |p^+_z,↓⟩, |p^-_z,↑⟩, |p^-_z,↓⟩ ]^T has the form <cit.> ℋ(t)=∑_α=1^5 d_α(k)Γ^α, where d_1,2,3,4,5(k)=[Ak_x+m_5n_x(t),Ak_y,Ak_z,m_0+Bk^2,m_5n_z(t)] and Γ^1,2,3,4,5=[σ_x⊗ s_x,σ_x⊗ s_y,σ_y,σ_z,σ_x⊗ s_z]. Here, A, B, m_0 and m_5 are system parameters. n_x(z)(t) denotes the x-component (z-component) of the Néel vector, which can be time-dependent in the presence of magnetic fluctuations. σ_x,y,z and s_x,y,z are Pauli matrices acting on the orbital and spin spaces, respectively. The lattice wave vectors k_x,y,z are defined in the first Brillouin zone with k=√(k_x^2+k_y^2+k_z^2). The first four terms in Eq. <ref> describe a topological insulator preserving both parity 𝒫=σ_z and time reversal 𝒯=iτ_y𝒦 (𝒦 is the complex conjugation operator) symmetry, whose axion field is quantized to θ=0 when B=0 while θ=π otherwise <cit.>. Whereas the fifth term describes the exchange interaction between the topological electron and the dynamic, antiparallel magnetic moments, which breaks these two symmetries explicitly and thus introduces an additional dynamic part to the static axion field, giving rise to the DAI. The system parameters are generally taken as A=1, B=-0.5, m_0=0.1, m_5=0.1. However, our theory is universal and not limited to these parameters. Antiferromagnetic resonance in the DAI Under a linearly polarized microwave, an antiferromagnetic insulator resonates coherently if the frequency of the microwave is identical to its intrinsic AFMR frequency ω=2πγ√(H_A(2H_E+H_A)) given in Supplementary Note 1 <cit.>, where γ=28GHz·T^-1 is the gyromagnetic ratio, H_A and H_E are the uniaxial anisotropy and the exchange field between two antiparallel magnetic moments m_1 and m_2. Although the magnetic moment in this case individually rotates clockwise (or counterclockwise) on an elliptical orbit, the Néel vector n=m_1-m_2 oscillates in a purely linear mode as schematically illustrated in Figs. <ref>a and c, traveling back and forth inside a perpendicular plane normal to ŷ-axis. Consequently, x component of the Néel vector has an explicit form n_x(t)=A_xsinω t with A_x the AFMR amplitude while z component can therefore be analytically expressed as n_z(t)=√(1-A_x^2sin^2ω t) since the magnitude of the Néel vector is normalized. Figures <ref>b and c show the temporal evoluations of n_x (b) and n_z (c), respectively, at A_x=0.02, 0.05, 0.1. We see that the periodicity of n_z is halved compared to that of n_x, indicating that the frequency of n_z is multipled. Quantitative expression for the dynamic axion field in the DAI To establish a quantitative expression for this dynamic axion field driven by the linearly polarized AFMR, we employ the gauge invariant expression for the axion field in terms of the Chern-Simons 3-form <cit.>. While the timescale of the AFMR hence θ(t) is about 7 orders of magnitude larger than the typical electron response time <cit.>, the electrons inside the DAI can adjust adiabatically to the instantaneous configuration of Néel vector, viewing the AFMR as a static magnetic vector almost frozen in time. As a result, we can treat the DAI adiabatically <cit.>. As detailed in Supplementary Note 2, this axion field in the adiabatic limit can be calculated by using <cit.> θ=-1/4π∫dk^32d+d_4/(d+d_4)^2d^3d_5∂_k_xd_1∂_k_yd_2∂_k_zd_3, where d=√(∑_id_i^2) with d_i presented in Eq. <ref>. Repeating this calculation at different instant of time gives the time-dependent axion field including a static part and a dynamic one, or θ(t)=θ_0+δθ(t). Figure <ref>f plots δθ(t) as a function of time at different AFMR amplitudes corresponding to Figs. <ref>b and c. It turns out that the temporal evolution of δθ(t) is a harmonic function with a doubled frequency compared to the AFMR, that is δθ(t)=A_θcos2ω t where A_θ is the amplitude proportional to A_x^2. These mathematical relations, especially the frequency doubling of the dynamic axion field δθ(t), are further confirmed analytically in Supplementary Note 3. Dynamic magnetoelectric current Owing to the magnetoelectric term in the Lagrangian ℒ_θ=θ(t)αE·B/π, the Maxwell equation acquires an additional term and thus becomes ∇·E=ρ/ϵ-α∇θ·B/π where ϵ is the permittivity and ρ is the charge density <cit.>. So a static magnetic field B_0 applied to the DAI slab can induce a charge polarization given by P_θ(t)=ϕθ(t)e^2/h <cit.>, where e is the electron charge, h is the Planck constant, and ϕ=B_0S (S the area of the DAI normal to B_0) is the magnetic flux penetrating the DAI. This charge polarization should also evolves adiabatically as the time-dependent axion field θ(t), leading to an equivalent magnetoelectric current. At particular instant of time, the unbalanced charge distribution illustrated in Fig. <ref>a can be calculated by using the lattice Green's functions provided in Methods. Figure <ref>b displays the unbalanced charge distribution induced by a static magnetic field along ẑ-direction at t_1=0 for a system with size L_y=16, L_z=10 and A_x=0.1, which corresponds to the charge polarization originating from the static axion term θ_0. The charge distributions corresponding to δθ(t) are further plotted in Fig. <ref>c ranging from t=0 to t=π/2ω with an interval of π/10ω after subtracting the background charge distribution displayed in Fig. <ref>b. Moreover, the red inverted triangle in Fig. <ref>d shows the instant charge polarization given by δ P(t)=[δ Q(z=-1/2)-δ Q(z=1/2)]/2, whose temporal evolution is also a harmonic function with a doubled frequency, in agreement with the dynamic axion field shown in Fig. <ref>e. The time derivative of this instant charge polarization gives the dynamic magnetoelectric current, which is represented by the blue triangle shown in the same figure. The black dashed line is the anticipated magnetoelectric current obtained from the time derivative of the dynamic axion field at A_x=1 shown in Fig. <ref>e, and is plotted in the same figure as a benchmark. Despite the slit deviation between the two results, which can be ascribed to the finite size effect, both data can be well fitted by using a double-frequency harmonic function. This quantitative expression of the magnetoelectric current is further verified by the fast-Fourier transform shown in Fig. <ref>e. Since this harmonic current originates directly from the axion dynamics in DAI driven by the AFMR, it can thus stand as an evidence for identifying DAIs. Nevertheless, in view of the high frequency of AFMR, which can even reach terahertz, quantitatively detecting a temporal current with such a high frequency remains a challenge. Doubled Shapiro steps in the DAI Josephson junction To detect this fast dynamic magnetoelectric current unique to the axion dynamics in DAI, we consider a Josephson junction consisting of a DAI sandwiched between two identical superconductors, as schematically shown in Fig. <ref>a. Through Andreev reflection, the dynamic magnetoelectric current could stimulate a coherent phase difference across the junction, potentially synchronizing with the AC Josephson current when subjected to additional DC bias and hence becoming a static transport signals termed Shapiro steps. To explore the transport properties of such a DAI Josephson junction, we employ the resistively shunted junction model illustrated in Fig. <ref>b, where the system is simulated as an effective circuit featuring a resistance R in parallel to the Josephson junction <cit.>. This model captures not only the tunneling electron pairs but also the leakage currents through the insulating barrier at the interfaces. The total current then comprises two components: one is the normal supercurrent I_0 adjustable by applied DC bias V_0 while the other is the time-dependent axion current I_θ(t) induced by a static magnetic field B_0. Moreover, this current can be further expressed as the sum of the parallel currents in the Kirchhoff's circuit, whose dynamics can be quantitatively described by the equation of motion <cit.> I_0+I_θ(t)=V/R+I_csinψ, where I_0+I_θ(t) is applied current, V=ħψ̇/2e is the effective voltage with ψ the phase difference across the Josephson junction and I_c is the critical supercurrent. This equation of motion can be solved numerically by using the Runge-Kutta method after substituting the dynamic current I_θ(t)=-I_acsin2ω t, which is provided in Methods. We first rewrite Eq. <ref> as ħψ̇/2eR=-∂U(ψ,t)/∂ψ where U(ψ,t)=-[I_0+I_θ(t)]ψ-I_ccosψ is the washboard potential. In the absence of dynamic axion current I_θ(t), the static washboard potential for different I_0 displayed in Fig. <ref>c is generally consistent with that in conventional Josephson junctions. When I_0<I_c, the system is trapped in a local minimum maintaining a constant phase. Increasing the current to I_0=I_c leads to an instability responsible for the emergence of phase oscillation. The static washboard potential apparently obeys a linear Ohmic relation I_0=V/R when I_0≫ I_c. The dynamic washboard potential for non-vanishing I_θ(t) is presented in Fig. <ref>d, which oscillates between two critical boundaries represented by the green and magenta lines. This washboard potential exhibits a halved periodicity with T=π/ω, in sharp contrast to conventional T=2π/ω. During one period, the system remains stable as long as the phase changes 2nπ with n an integer, resulting in a velocity ψ̇=2nπ/T=2nω in the DAI Josephson junction. As a result, the Shapiro steps emerge when the motion of the system coincides with the dynamics of the washboard potential , manifested as ω_0(=2eV_0)=2nω. Indeed, this washboard potential is instructive to understand the behavior of the I-⟨ V⟩ curve. Figure <ref>e shows the I-⟨ V⟩ curve with different I_ac that is proportional to the magnetic flux ϕ and thus B_0. When the magnetic field is absent, I_ac=0 and a typical I-⟨ V⟩ curve for conventional Josephson junctions without any Shapiro steps is restored as the dynamic axion current is vanishing [I_θ(t)=0]. In the presence of magnetic field (I_ac0), we see that the even Shapiro steps emerge at ⟨ V⟩=ħω/e while the odd steps disappear completely, in agreement with the halved periodicity in the dynamic washboard potential above. Because the voltage drop inside one Shapiro step is zero, the magnitude, also referred to as Shapiro spike, can be obtained through the identification of the width of the plateaus at ⟨ V⟩=ħω/2e, which is depicted by colorful markers in Fig. <ref>f as a function of I_ac. The result reaffirms that only doubled Shapiro steps that evolve in perfect accordance to the Bessel functions sustain in the DAI Josephson junction. Phenomenological scenario To understand these doubled Shapiro steps heuristically, we provide a phenomenological analysis following Barone and Paternò's argument <cit.>. When the DAI is intimately connected to two identical superconductors, hence forming a DAI Josephson junction, the magnetoelectric current induced by applied magnetic field B_0 leads to an effective bias voltage V_DAI(t)=I_θ(t)R across the Josephson junction, where R is the effective resistance originating from the leakage current at the interfaces between the DAI and the superconductors. This voltage in turn populates a time-dependent phase difference given by Josephson's second equation Φ_θ=∫dt V_DAI(t)2e/ħ, which is Φ_θ=eV_1cos2ω t/ħω with the effective voltage V_1=I_acR. The total phase difference across the junction, combined with an additional DC bias V_0, can thus be expressed as Φ=2eV_0t/ħ+eV_1cos2ω t/ħω. The Josephson current under such a phase difference thus reads I_s(t)=I_cIm{exp[i(2eV_0t/ħ+eV_1cos2ω t/ħω)]} where I_c is the critical supercurrent. Applying the Jacobi-Anger expansion recasts the Josephson current as <cit.> I_s(t)=I_c∑_n𝒥_n(eV_1/ħω)sin[(2eV_0/ħ+2nω)t], where 𝒥_n(x) is the first kind Bessel function. Equation <ref> manifestly shows that the supercurrent oscillates as a function of time unless at eV_0=-nħω, where doubled Shapiro steps appear with a magnitude I_s^n=I_c𝒥_n(eV_1/ħω). To check the consistency independently, we calculate the Shapiro steps as a function of V_1 by using Eq. <ref> and superimpose the result as a dotted line in the same figure. We see that the two results agree with each other remarkably well, which demonstrates the consistency and reliability of obtained results. Finally, we determine the differential conductance d⟨ V⟩/dI on the I_0-I_ac plane by using the resistively shunted junction simulation and show the result in Fig. <ref>. We observe a clear and sharp double Shapiro step pattern, where the differential conductance plateaus appear only near the location I_0=2nI_c. The boundaries of the plateaus marking the width of the doubled Shapiro steps coincides quantitatively with the Bessel function given in Eq. <ref>. This unique differential conductance pattern provides a fingerprint of the DAIs and meanwhile, offers a platform to simulate the axion electrodynamics in condensed matter systems. Discussion Despite that a microwave is indispensable in both conventional Josephson junction and the DAI Josephson junction discussed in this work, we stress that the underlying mechanisms of the doubled Shapiro steps are completely different. In conventional Josephson junctions, microwave plays the role of a driven electric field used to induces a harmonic phase difference resonating with an additional DC bias coherently, resulting in a typical Shapiro steps appear consecutively at ⟨ V⟩=ħω/2e. In the sharp contrary, the microwave here is solely used to populate AFMR. The doubled Shapiro steps appear at ⟨ V⟩=ħω/e originate from the coherence between the phase difference induced simultaneously from applied DC bias and the dynamic magnetoelectric current, which can thus be further ascribed to the axion dynamics inside DAIs. On the other hand, performing Euler-Lagrangian equation to ℒ_θ yields the following classical equation of motion for the dynamic axion field <cit.> δ̈θ̈-∇^2δθ+m_θ^2δθ=αE·B, where m_θ is the axion mass. Although there are ongoing debates regarding the magnitude of axion mass in condensed matter systems <cit.>, it is straightforward to see that the dynamic axion field induced by AFMR, δθ(t)=A_θcos2ω t, constitutes an analytical solution to Eq. <ref> in a uniform system without any electromagnetic field. From this viewpoint, we obtain the axion mass m_θ=2ω. Consequently, the doubled Shapiro steps proposed here also provide a method to detect the axion mass. In Mn_2Bi_2Te_5, as the anisotropy and the exchange interaction are H_A=0.8meV and H_E=0.1meV <cit.>, respectively, the intrinsic AFMR frequency identical to that of the static magnetic field required to drive this AFMR is f=ω/2π≈ 143GHz. So the AFMR in Mn_2Bi_2Te_5 can be accomplished through a linearly polarized sub-terahertz radiation analogous to that in Cr_2O_3 <cit.> and MnF_2 <cit.>. Moreover, the typical tilting angle of the Néel vector under AFMR is about A_x∼ 1%, in which case the amplitude of the dynamic axion field is A_θ∼1.0× 10^-5. For a DAI Josephson junction with size S=10^-6 m^2 and contact resistance R∼10Ω <cit.>, the magnitude of required static magnetic field to observe a sizable doubled Shapiro steps is about B_0∼0.02Tesla. Those can be readily achieved within current experiments. In summary, we have demonstrated a unique doubled Shapiro steps in a superconductor-DAI-superconductor Josephson junction driven by a DC bias and a static magnetic field instead of a microwave. We ascertain that this distinctive phenomenon arises from the frequency doubling of the axion electrodynamics in DAI under the driven of a linearly polarized microwave, or alternatively from the coherent resonance between the axion mass and applied DC bias, which thus provides a fingerprint of the DAIs and can also be utilized to evaluate the mass term of axion field in antiferromagnetic topological insulators. Methods Lattice Hamiltonian of the DAI. The low-energy effective Hamiltonian presented in Eq. <ref> can be discretized by using the k· p theory. Performing the substitutions k_α=x,y,z=-i∂_α→ -i(ψ_i+α-ψ_i)/(2a_0) and k_α^2=-∂_α^2→ -(ψ_i+α+ψ_i-α-2ψ_i)/a_0^2 with a_0 the lattice constant, we can write the effective Hamiltonian as ℋ(t)=∑_iψ_i^†T_i(t)ψ_i+(ψ_i^†T_xψ_i+x+ψ_i^†T_yψ_i+y+ψ_i^†T_zψ_i+z+H.c.), where H.c. is the shorthand for the Hermitian conjugate, and the hoping matrices take the following form T_i=(m_0+6B/a_0^2)Γ_4+m_5n_z(t)Γ_5+m_5n_x(t)Γ_1 T_x=-iA/(2a_0)Γ_1-B/a_0^2Γ_4 T_y=-iA/(2a_0)Γ_2-B/a_0^2Γ_4 T_z=-iA/(2a_0)Γ_3-B/a_0^2Γ_4, with n_x(t)=A_xsinω t and n_z(t)=√(1-A_x^2sin^2ω t). Note that in Eq.<ref> the translation symmetry along all three spatial directions in the Cartesian coordinate is well preserved. In the presence of a static magnetic field B=[ 0, 0, B ] along ẑ-direction, the Hamiltonian in Eq. <ref> acquires a Peierls phase ϕ_ij=2π∫_i^jA· dr/ϕ_0 inside one unit cell from site i to j, where A is the vector potential and ϕ_0=h/e is the magnetic flux quanta. In particular, we use the Landau gauge, therefore A=[ -yB, 0, 0 ]. The Hamiltonian thus becomes ℋ(t)=∑_iψ_i^†T_i(t)ψ_i+[ψ_i^†T_xf_x(y)ψ_i+x+ψ_i^†T_yψ_i+y+ψ_i^†T_zψ_i+z+H.c.], where f_x(y)=e^-iyBa_0^2/ϕ_0. Green's functions method for calculating the charge polarization and the magnetoelectric current. Since the AFMR frequency is orders of magnitude smaller compared to the electron response time, we can treat the time-dependent AFMR adiabatically. In this case, the magnetoelectric current induced by a static magnetic field B along ẑ-direction can be obtained from the time derivative of the instant charge polarization. Subsequently, this instant charge distribution parallel to the magnetic field can be expressed as <cit.> q(z,t)=e/2π∑_y∫_∞^ϵ_fdϵ∫dk_xIm{Tr[G^r(ϵ,t)]}, where ϵ_f is the Fermi energy and the Green's function G^r(ϵ,t)=[ϵ+iη-ℋ(t)]^-1 with η the imaginary line width function. Moreover, equation <ref> includes only the contribution from negative electron charge. To obtain the unbalanced charge distribution as well as the instant charge polarization, it is necessary to include the positive background charge originating from ions that compensates the electron charge, which has the form q_b=∑_zq(z,t)/L_z owing to the charge conservation. As a result, the unbalanced charge distribution at time t can finally be expressed as Q(z,t)=q(z,t)+q_b. Fourth order Runge-Kutta method for solving the equation of motion of the DAI Josephson junction. For convenience, we rewritten Eq. <ref> in the form ψ̇=f(t,ψ) with f(t,ψ)=2I_cRe(I_0/I_c-I_acsin2ω t/I_c-sinψ)/ħ. This is a first-order differential equation that can be numerically solved by using the typical Runge-Kutta method. For a step size h>0, the phase difference ψ can be obtained iteratively by using the following equations ψ(t_n+1) =ψ(t_n)+h/6(k_1+2k_2+2k_3+k_4), t_n+1 =t_n+h, where k_1 =f(t_n,ψ(t_n)), k_2 =f(t_n+h/2,ψ(t_n)+hk_1/2), k_3 =f(t_n+h/2,ψ(t_n)+hk_2/2), k_4 =f(t_n+h,ψ(t_n)+hk_3). The equation of motion can then be solved numerically by substituting the initial value ψ(t=0)=0 and ψ̇(t=0)=0. The average voltage across the Josephson junction, as a function of the existing parameters I_0 and I_ac that are adjustable through the manipulation of V_0 and B_0, respectively, is then given by Josephson's second equation V=∫_t^t+Tdtħψ̇/2eT with the period T=π/ω. Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon request. Code availability The code deemed central to the conclusions is available from the corresponding authors upon request.. References naturemag Acknowledgements H.J. acknowledges the support from the National Key R&D Program of China (Grants No. 2019YFA0308403 and No. 2022YFA1403700) and the National Natural Science Foundation of China (Grant No. 12350401). Y.-H.L. acknowledges the support from the Fundamental Research Funds for the Central Universities. X.C.X. acknowledges the support from the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302400). R.C. acknowledges the support from the AFOSR (Grant No. FA9550-19-1-0307). Y.-H.L. is also grateful for the financial support from the State Key Laboratory of the Surface Physics and the Department of Physics at Fudan University. Author contributions Y.-H.L, H.J and X.C.X conceived the initial idea of doubled Shapiro steps in the dynamic axion insulator Josephson junction. Y.-H.L performed calculations with assistance from Z.Z. Y.-H.L. wrote the manuscript with contributions from all authors. H.J. and X.C.X. supervised the project. Competing interests The authors declare no competing interests. Additional information Supplementary information The online version contains supplementary material available at . Correspondence and requests for materials should be addressed to H.J. or X.C.X. [figure]labelfont=bf, name=Extended Data Fig., labelsep=period
http://arxiv.org/abs/2406.08547v1
20240612180002
AGN Feedback in Quiescent Galaxies at Cosmic Noon Traced by Ionized Gas Emission
[ "Letizia Bugiani", "Sirio Belli", "Minjung Park", "Rebecca L. Davies", "J. Trevor Mendel", "Benjamin D. Johnson", "Amir H. Khoram", "Chloë Benton", "Andrea Cimatti", "Charlie Conroy", "Razieh Emami", "Joel Leja", "Yijia Li", "Gabriel Maheson", "Elijah P. Mathews", "Rohan P. Naidu", "Erica J. Nelson", "Sandro Tacchella", "Bryan A. Terrazas", "Rainer Weinberger" ]
astro-ph.GA
[ "astro-ph.GA" ]
Dipartimento di Fisica e Astronomia, Università di Bologna, Bologna, Italy. INAF, Osservatorio di Astrofisica e Scienza dello Spazio, Via Piero Gobetti 93/3, I-40129, Bologna, Italy. Dipartimento di Fisica e Astronomia, Università di Bologna, Bologna, Italy. Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA, USA. Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, Victoria, Australia. ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia. ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia. Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT, Australia. Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA, USA. Dipartimento di Fisica e Astronomia, Università di Bologna, Bologna, Italy. INAF, Osservatorio di Astrofisica e Scienza dello Spazio, Via Piero Gobetti 93/3, I-40129, Bologna, Italy. Department for Astrophysical and Planetary Science, University of Colorado, Boulder, CO, USA. Dipartimento di Fisica e Astronomia, Università di Bologna, Bologna, Italy. INAF, Osservatorio di Astrofisica e Scienza dello Spazio, Via Piero Gobetti 93/3, I-40129, Bologna, Italy. Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA, USA. Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA, USA. Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA, USA. Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA, USA. Institute for Computational & Data Sciences, The Pennsylvania State University, University Park, PA, USA. Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA, USA. Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA, USA. Kavli Institute for Cosmology, University of Cambridge, Cambridge, UK. Cavendish Laboratory, University of Cambridge, Cambridge, UK. Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA, USA. Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA, USA. Institute for Computational & Data Sciences, The Pennsylvania State University, University Park, PA, USA. MIT Kavli Institute for Astrophysics and Space Research, Cambridge, MA, USA. Department for Astrophysical and Planetary Science, University of Colorado, Boulder, CO, USA. Kavli Institute for Cosmology, University of Cambridge, Cambridge, UK. Cavendish Laboratory, University of Cambridge, Cambridge, UK. Columbia Astrophysics Laboratory, Columbia University, New York, NY, USA. Leibniz Institute for Astrophysics, Potsdam, Germany. § ABSTRACT We analyze ionized gas emission lines in deep rest-frame optical spectra of 16 quiescent galaxies at redshift 1.7<z<3.5 observed with JWST/NIRSpec by the Blue Jay survey. Robust detection of emission lines in 75% of the sample indicates the presence of ongoing ionizing sources in this passive population. The Hα line luminosities confirm that the population is quiescent, with star formation rates that are at least ten times lower than the main sequence of star formation. The quiescent sample is clearly separate from the star-forming population in line diagnostic diagrams, and occupies a region usually populated by active galactic nuclei (AGN). Analysis of the observed line ratios, equivalent widths, and velocity dispersions leads us to conclude that in most cases the gas is ionized by AGN activity, despite the lack of X-ray detections. A subset of the sample also hosts ionized and/or neutral outflows. Our results show, for the first time using a representative sample, that low luminosity AGN are extremely common among quiescent galaxies at high redshift. These low luminosity AGN may play a key role in quenching star formation and in maintaining massive galaxies quiescent from Cosmic Noon to z∼0. § INTRODUCTION At Cosmic Noon, the z∼2 peak of the cosmic star formation history <cit.>, most galaxies host a large amount of gas. However, this is also the epoch when the population of massive galaxies begins to transform into quiescent, gas-poor systems. The quenching of star formation is one of the key moments in a galaxy's lifetime, but the physics behind it is still poorly understood, and requires more detailed studies of the first generation of quiescent galaxies. Rest-frame optical emission lines due to ionized gas can provide a wealth of information about the physical conditions of high-redshift galaxies <cit.>. In star-forming galaxies, the ionization of the interstellar medium (ISM) is caused by young, massive stars; strong emission lines can also be produced by Active Galactic Nuclei (AGN), where the ionizing photon field is due to the actively feeding supermassive black hole (SMBH) in the center of the galaxy. Gas in the ISM can also be excited by shocks, which are more difficult to detect and can be caused by several different phenomena, including galactic-scale outflows, galaxy interactions, ram pressure stripping, and AGN-related activity such as jets and winds <cit.>. Mergers can also produce widespread shocks throughout galaxies which significantly affect the emission-line spectrum <cit.>, and stellar wind-induced shocks have been observed both in star-forming and starburst galaxies <cit.>. Observations of ionized gas in high-redshift quiescent galaxies are necessary in order to constrain the physical conditions of these systems and to develop a comprehensive picture of galaxy quenching. However, the small amount of ionized gas left in these galaxies makes it challenging to detect and measure the emission lines, particularly at high redshift. Detections of very weak emission lines can be obtained more easily in the local universe. The presence of warm ionized gas in local quiescent galaxies, estimated to represent around 1% of their total ISM, has been known for a long time <cit.>: proposed explanations for the presence of this ionized phase have included external accretion of warm ISM from a companion galaxy <cit.>, heat transfer from hot phase gas in the halo <cit.>, low-power nuclear activity (as in LINERs – Low-ionization nuclear emission-line regions); <cit.> and shocks <cit.>. The prevailing theory attributes the presence of warm gas to photoionization by hot evolved low-mass stars (HOLMES); <cit.>. More recently, <cit.> used emission line ratios to distinguish so-called Emission Line Retired Galaxies (EL-RGs) with persistent ionized gas due to HOLMES photoionizing flux from true LINERs, i.e. galaxies that host a low-power AGN <cit.>. Beyond the limits of the local Universe the picture becomes much less clear, as the observations becomes more challenging; at z>1 it is particularly difficult to detect faint emission lines from ionized gas because the rest-frame optical spectrum is observed at near-infrared wavelengths. Deep exposures with the largest ground-based telescopes have led to the detection of [NII] and Hα emission lines in individual quiescent galaxies <cit.> or in stacked spectra <cit.>. The faintness of the observed emission lines rules out a substantial star formation rate in these galaxies; moreover, the [NII]/Hα flux ratio is typically elevated and not consistent with ionization from young stars in HII regions. More detailed measurements of faint emission lines are possible in those rare z∼2 quiescent galaxies that are gravitationally lensed by a foreground cluster <cit.>. These studies confirm that young stars in HII regions contribute very little to both the ionization of the gas and the overall growth of the galaxies (i.e., the measured specific star formation rates are very low). The ionization mechanism is often attributed to AGN feedback, but very little direct evidence for this exists given the scarcity of data; for example, the four emission lines required for the standard BPT diagram <cit.> have been measured in just two lensed galaxies <cit.>. Recently, the launch of JWST has begun a new era of sensitive near-infrared spectroscopy, free from contamination by atmospheric emission and absorption. Early JWST spectra of individual quiescent galaxies at z>2 have revealed the presence of emission lines clearly due to AGN activity <cit.>, showcasing the potential of space-based spectroscopy for the study of faint emission from ionized gas at high redshift. In this work, we present the first comprehensive study of rest-frame optical emission lines in a representative sample of massive quiescent galaxies at Cosmic Noon, based on deep spectroscopy obtained by the Blue Jay survey using the NIRSpec instrument onboard JWST. In Section <ref> we describe the spectroscopic data and the selection of the quiescent sample. In Section <ref> we present the spectral fitting of the emission lines: the results derived from these fits are explored in Section <ref>, where we investigate the origin of the observed ionized gas using several line diagnostics, and in Section <ref>, where we derive upper limits on the current star formation rates. In Section <ref> we conduct a more in-depth spectral analysis on a subset of galaxies which show signs of powerful ionized gas outflows and derive the outflow velocities. In Section <ref> we gather all the evidence obtained from our data analysis to try and identify the main ionization mechanisms for this sample of z∼2 quiescent galaxies, and put our results in context within the overall picture of quenching in high-redshift systems. § DATA AND SAMPLE SELECTION §.§ The Blue Jay survey This work is based on spectroscopic data obtained by the Blue Jay survey, a medium-sized JWST Cycle-1 program (GO 1810; PI: S. Belli). Observations targeted 147 galaxies at Cosmic Noon and 4 filler galaxies at z∼6, for a total of 151 targets, and were performed using the NIRSpec instrument in multi-object mode with the Micro-Shutter Assembly (MSA). The targets were selected in the COSMOS field using Hubble Space Telescope (HST) observations by the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey <cit.> and the photometric catalog released by the 3D-HST team <cit.>, which includes both ground- and space-based data. The target selection was designed to ensure a roughly uniform coverage in both redshift and stellar mass, yielding a representative sample of galaxies with stellar mass 9 ≲log(M_*/M_⊙) ≲ 11.5 in the redshift range 1.7<z<3.5. Each target was observed through an MSA slitlet composed of at least two shutters. Empty shutters were used to construct a master background spectrum, which was then subtracted by each target spectrum. Targets were observed using the G140M/F100LP, G235M/F170LP and G395M/F290LP medium-resolution gratings (R ∼ 1000) with exposure times of 13h, 3.2h and 1.6h respectively. The individual grating spectra were combined to produce spectra with a continuous wavelength coverage from 1 to 5 μm, but with occasional gaps due to the space between the two NIRSpec detectors. The full description of the Blue Jay survey design and data reduction process will be provided in the survey paper (Belli et al, in prep.). §.§ Stellar continuum fit and flux calibration In order to measure the emission line fluxes from the observed spectra, we need to account for two issues. First, given the small size of the MSA shutters, the NIRSpec spectra probe only a fraction of the light emitted by the target and are therefore affected by heavy slit losses: any line flux measured from the spectra will therefore underestimate the true line flux. Second, the spectra of massive galaxies typically have a strong stellar continuum, which must be subtracted from the observed spectrum to obtain the spectrum of the ionized gas. We solve both problems by fitting stellar population models to the combined photometric and spectroscopic data. The fitting procedure and the analysis of the stellar population properties, including the star formation histories, are detailed in <cit.>; here we only give a summary of the methodology. The fits were done using the fully Bayesian code <cit.>, following the approach outlined in <cit.> and <cit.>. The code adopts the stellar population synthesis model FSPS <cit.> and a set of free parameters to generate a synthetic galactic spectral energy distribution which can then be fitted to the observations. The model is highly flexible and includes a non-parametric star formation history fitted over 14 age bins, dust attenuation, and dust emission at longer wavelengths. A single model is fitted both to the NIRSpec spectrum, in which all the emission lines analyzed in this work are masked out, and to HST/ACS+WFC3 and Spitzer/IRAC (channels 1 and 2) photometric data from 3DHST <cit.>. During the fit, a polynomial distortion is applied to the spectrum in order to match it with the multi-band photometry. This method ties the flux calibration of the spectrum to that of the broadband photometry, which is far more robust; and it also accounts for slit losses if we assume that the emission probed by the MSA shutters is a scaled down version of the total emission coming from the whole galaxy, i.e., assuming spatially uniform emission across the target. We also assume that the ionized gas emission has the same distribution of the stellar emission, so that the same flux calibration can be applied to both components. In any way, this assumption does not significantly impact our results because they are mostly based on line ratios and equivalent widths. For each galaxy, the Blue Jay spectra cover a wide wavelength range due to the use of three separate gratings. However, the default fits use only the rest-frame 4000-6700 Å region, which is easier to model with synthetic stellar populations and is less prone to systematic uncertainties, since the calibration polynomial can artificially change the strength of the 4000-Å continuum break. This choice leads to the most robust measurements of the galaxy physical properties (see ). On the other hand, several emission lines from ionized gas lie outside this spectral region, and so was run again on a wider wavelength range, from 3700 to 13700 Å; see Figure <ref> for an example. Both versions of the fit use the same set of broadband photometry and adopt the same model. In the present work, we take the physical measurements such as stellar mass and star formation rate from the default fits with a narrow wavelength range, but we adopt the fits with extended wavelength range when measuring the emission lines. §.§ Quiescent sample selection To select the quiescent galaxies from the parent sample, we adopt the SFR-based selection of <cit.>. Using the star formation history derived with Prospector, we take the mean SFR over the last 30 Myr as the “instantaneous” measurement, and compare it to the main sequence of star-forming galaxies measured by <cit.> at each galaxy's redshift. We then classify as quiescent those galaxies whose 84th percentile of the SFR posterior distribution lies more that 1 dex below the main sequence. We further exclude one galaxy (COSMOS-21541) since the data reduction of its NIRSpec spectrum failed. We thus obtain a sample of 16 quiescent galaxies, represented by the magenta circles in Figure <ref>, which illustrates the SFR vs stellar mass diagram of the whole Blue Jay sample, as well as the <cit.> star-formation main sequence at z∼2.46 (median redshift of the Blue Jay sample) plotted for visual reference. The main properties of the quiescent sample are listed in Table <ref>; we note that all the selected galaxies are more massive than 10^10.2 M_⊙. ccccccc Overview of selected quiescent galaxies properties COSMOS ID H mag 2lRest-frame colors 3l fitted parameters U-V V-J z log M_*/M_⊙a SFR (M_⊙/yr) 7549 24.3 2.0 1.3 2.627 10.82 3.0_-0.7^+0.7 8013 22.2 1.5 0.7 1.690 10.54 1_-1^+3 8469 22.7 1.5 1.0 1.868 10.58 0.06_-0.05^+0.15 9395 22.6 1.6 0.7 2.127 10.70 0.2_-0.1^+0.1 10128 21.9 2.0 1.2 1.852 11.16 1.6_-0.3^+0.3 10339 23.7 1.7 0.7 2.363 10.34 0.05_-0.05^+0.20 10400 24.1 2.2 1.1 2.099 10.27 0.01_-0.01^+0.04 10565 23.3 1.8 1.0 2.441 10.80 0.3_-0.2^+0.3 10592 21.7 1.8 1.0 1.801 11.10 0.00_-0.00^+0.02 11142 22.7 1.7 1.0 2.444 10.90 4_-3^+9 11494 20.8 1.9 1.0 2.091 11.65 0.4_-0.4^+0.7 16419 21.1 1.6 1.0 1.925 11.51 4.2_-0.7^+0.9 18668 22.9 1.9 1.5 2.086 11.01 0.4_-0.4^+2.7 18688 21.8 1.6 1.2 2.007 11.18 2_-2^+4 19572 22.3 1.2 1.0 1.867 10.86 3_-2^+5 21477 22.6 1.4 0.7 2.474 10.76 0.09_-0.09^+0.37 aTypical error on stellar mass 0.05 dex. §.§ Alternative sample selection We check our selection by plotting the sample onto the rest-frame UVJ color-color diagram (Figure <ref>), which has been proven effective in selecting quiescent galaxies up to z=3.5 <cit.>. We calculate the rest-frame colors using the public photometric redshift EAzY code <cit.>, and adopting the UV-to-NIR photometry from the 3D-HST catalog, to which we add near-infrared photometry measured on JWST/NIRCam data from the PRIMER survey <cit.>. The rest-frame U-V and V-J colors are reported in Table <ref>. As expected, most of the selected galaxies are found inside the quiescent selection limit at z>1 <cit.>, traced by the black dashed line, with some galaxies found just outside the boundary. A criterion based purely on the UVJ colors would have selected a subset of our sample (see also discussion in <cit.> and <cit.>); the SFR-based selection can therefore be considered slightly broader. Given that the two types of selection result in similar samples, we conclude that our results could also be applied to a sample of color-selected quiescent galaxies for which no spectroscopy is available. cc Fitted emission lines Species 1cλ_rest-frame 1c(Å) [OII] 3727.10, 3729.86 [NeIII] 3869.86 Hβ 4862.71 [OIII] 4960.30, 5008.24 [NII] 6549.86, 6585.27 Hα 6564.60 [SII] 6718.29, 6732.67 He I 10833.31 Rest-frame wavelength in vacuum. § SPECTRAL FITTING We employ the wide-range fits ( <ref>) to subtract the stellar contribution from the spectra and isolate the emission due to ionized gas. We then construct a model for the ionized gas spectrum, composed of a Gaussian profile for each of the emission lines listed in Table <ref>. Given the relatively low spectral resolution of the observations and the fact that we are observing gas-poor, quiescent galaxies, we adopt a single kinematic component with the same velocity dispersion and redshift for all the lines in each galaxy. As will be explored in  <ref>, these assumptions do not hold up for all galaxies in the sample. The flux ratios between [OIII]5007/[OIII]4959 and [NII]6581/[NII]6548 are fixed at the theoretical value of 3:1, following <cit.>. Although our spectral resolution is not sufficiently high to be able to resolve the [OII]λλ3727,3739 doublet, we fit the lines separately and impose the 1.5<[OII]3729/[OII]3727<2.5 theoretical limit on their flux ratio <cit.>. The velocity dispersion parameter is fitted taking into account the contribution to the overall broadening of the lines given by the wavelength-dependent NIRSpec nominal spectral resolution. The fitting is performed using the Python library, an Affine Invariant Markov Chain Monte Carlo (MCMC) Ensemble sampler <cit.>. We cut spectral regions of width 600 Å in the observed frame around each emission line and fit only the data within these regions. The free parameters of the model are the global redshift z_gas and velocity dispersion σ_gas (identical for all lines), and the flux of each line (apart from lines with a fixed flux ratio). Flux and velocity dispersion priors are constructed by employing flat logarithmic probability distributions, in order to represent our ignorance of the real probability distribution of these parameters. The velocity dispersion prior is set between 10 and 1000 km/s. For the redshift parameter we adopt a Gaussian prior, centered around the -estimated value, with dispersion informed by the posterior distribution for z. The resulting best-fit emission line parameters are determined as the median values from the posterior distribution, with uncertainties calculated as the values at the 16th and 84th percentile ranges, and are reported in Appendix <ref>. As already discussed in Section <ref>, a correction is applied to the spectra in order to align the spectroscopic flux with the measured photometry. Assuming that the ionized gas emission is originating mainly in HII regions scattered throughout the galaxy, the slit-loss calibrated spectrum yields accurate flux estimations. However, it is possible that emission in these quiescent galaxies comes from limited regions of ionized gas that are not evenly distributed throughout the galaxy, e.g. only the nuclear regions: in this case, slit-loss flux corrections may not be accurate and fluxes may be overestimated. § IONIZED GAS CONTENT OF QUIESCENT GALAXIES The deep JWST/NIRSpec data obtained by the Blue Jay survey reveals the widespread presence of ionized gas in the quiescent galaxy population. Figure <ref> illustrates the HST cutouts and NIRSpec data for each galaxy. The continuum-subtracted data are shown in blue, while the best-fit spectrum obtained from the MCMC fitting is traced by the red solid line, with the red shaded regions indicating the 1σ,2σ and 3σ confidence levels. We detect at least one emission line with SNR> 3 – formal limit of line detection for this work – in 12 out of 16 galaxies (75% of the sample); and for most galaxies we find that multiple emission lines are robustly detected. This surprising result suggests the presence of continued ionizing sources in the population of quiescent galaxies at z ∼ 2. Formally, only two spectra (IDs 10400 and 21477) out of 16 show no detected emission lines. However, we notice that the spectra of galaxies COSMOS-10592 and COSMOS-8469 appear to suffer from the presence of systematics in the data: thus, the formal line detections we have for these galaxies are not reliable, and we treat them as non-detections. Furthermore, we find that galaxy COSMOS-11494, the most massive in the sample, shows few emission lines that are formally detected but at very low SNR. Low SNR and non-detections characterize the noisiest spectra of the sample: in many of these cases, the MSA slits appear to be not well-centered on their galaxies: this could be a factor contributing to the missing detections of emission lines in these systems. In fact, the mean flux correction applied to the spectra due to slit loss is of a factor ∼2, while for the low SNR galaxies the corrected flux averages at around 4 times the observed flux. Lines detected with the highest SNR in the sample are low-ionization lines, such as the [OII] doublet at λ 3727,2729Å and Hα, while higher ionization lines such as [OIII]λ5007 are more rarely detected. The [NeIII] line, a high-ionization line which is considered to be a tracer of AGN emission, is successfully detected in only two galaxies (COSMOS-11142 and COSMOS-18688) and is thus not included in Figure <ref>. The He I line, which is a high-ionization line usually not detected in quiescent galaxy's spectra even at low-z, is actually detected with SNR>3 in 7 galaxies of the sample and is further discussed in Section <ref>. §.§ Emission line ratios From Figure <ref> it is clear that, for most galaxies, the [NII]λ6783 line is similar to, or stronger than, the Hα line. A high [NII]/Hα line ratio is indicative of a hard ionizing photon field, typical of AGN or shock-induced photoionization <cit.>, even though it is also influenced by metallicity. In Figure <ref>, we show the [NII]/Hα ratio of the star-forming and quiescent galaxies in the Blue Jay sample, as a function of their position on the UVJ diagram: most of the galaxies in the quiescent sample tend to have [NII]/Hα ratio equal or higher that 1, while much lower values are typical of the star-forming sample. We also notice a trend in [NII]/Hα ratio along the star-forming population, where redder galaxies tend to have higher values of the [NII]/Hα ratio. This is likely due to the underlying correlation between color and mass: redder star-forming galaxies are more massive and thus contain more dust. Massive star-forming galaxies, in turn, have higher metallicity, resulting in a higher [NII]/Hα ratio. Moving towards the quiescent part of the diagram, however, we find generally higher [NII]/Hα ratios compared to star-forming galaxies, suggesting a more complex origin behind the observed line ratio. In order to explore the origin of the ionization, in Figure <ref> we show the Blue Jay sample on the [NII]/Hα vs. [OIII]/Hβ diagram, i.e., the so-called BPT diagram <cit.>. We consider galaxies where all four emission lines ([NII]λ6583, Hα, [OIII]λ5007, and Hβ) are detected with SNR > 3; in order to include as many galaxies as possible from the quiescent sample, we also plot upper/lower-limit values at 3 sigma for systems with at least one detection at SNR > 3 in each of the two line ratios. Examples of galaxies added this way are COSMOS-8013 and COSMOS-16419. The distribution of the two samples on the diagram is compared with the observationally-derived star formation limit by <cit.>, based on local SDSS galaxies, and the theoretical limit of extreme starburst galaxies by <cit.>. The empirical classification of the AGN population into Seyfert and LINERs given by <cit.> is shown as well. We find a clear segregation of the quiescent Blue Jay sample on the AGN-powered side of the diagnostic diagram, excluding one object (COSMOS-7549) which is instead found inside the extreme starburst limit, in the lower part of the diagram. This galaxy has non-detected [OIII] line, which translates into an upper limit on the [OIII]/Hβ ratio which is much lower than what is found among the rest of the sample. The star-forming galaxies in the Blue Jay sample form a well defined sequence along the so-called star forming galaxy abundance sequence <cit.>. This sequence is slightly shifted towards higher [OIII]/Hβ ratios compared to the distribution of local galaxies, which we take from the MPA-JHU DR7 release[Online repository: https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/#platelist] of spectral measurements for the SDSS survey <cit.>. This is a well-known phenomenon observed at Cosmic Noon <cit.>, and is likely due to lower-metallicity and α-enhanced stellar populations <cit.>. Despite this offset, the quiescent sample remains well-separated from the star-forming sequence, indicating that the primary ionizing source present in these galaxies is not star-formation, but rather the probable presence of an active galactic nucleus. This conclusion is further reinforced by the [SII]/Hα <cit.> and [OIII]/[OII] (not corrected for dust) <cit.> diagnostic diagrams, shown in Figure <ref> and Figure <ref>: again, the quiescent sample is mostly found to the right of the maximum starburst limit, confirming that a surprising number of these galaxies may be hosting an AGN in their center. Interestingly, the star-forming galaxies (selected by their SFR) that are found on the AGN side of the diagnostic diagrams are a very small fraction of the total. Two of these AGN-hosting star-forming galaxies are COSMOS-18977 and COSMOS-12020, which are the only two spectroscopically confirmed Broad Line AGN in the Blue Jay sample (see Figure <ref>). §.§ Hα equivalent width We also make use of the WHaN diagnostic diagram introduced by <cit.> (CF10,CF11). This line diagnostic diagram is similar to the BPT diagram, but with the [OIII]/Hβ line ratio replaced by the Hα equivalent width, which is easier to obtain in low-SNR spectra, thus allowing us to plot more quiescent galaxies with respect to the BPT. The results of the WHaN diagram, shown in Figure <ref>, are consistent with those of the BPT diagram: we find most of the quiescent sample clustered on the AGN region of the plot. As before, the exception is COSMOS-7549, which is classified as a composite galaxy, as well as COSMOS-10565, which is the only one to be found in the "retired galaxies" region. When compared with the population of SDSS galaxies at z = 0, the distribution of the star-forming sample is systematically shifted towards higher Hα EWs, as observed also by <cit.> at 0.7 < z < 2.7: this shift can be attributed to the elevated specific SFR typical of Cosmic Noon galaxies <cit.>, which naturally results in a higher Hα EW. The WHaN diagram is very effective in distinguishing between low-power AGNs and weak Emission Lines Retired Galaxies (EL-RGs) (CF11). In retired galaxies the ionizing flux comes from hot evolved post-AGB stars <cit.>, but due to the hard ionizing field these galaxies are often misclassified by the BPT diagram. To differentiate between true AGN and retired galaxies, CF11 propose a division on the WHaN diagram at EW(Hα) = 3 Å. In our sample, only one galaxy (COSMOS-10565) falls below this line. However, we have to consider that the limit set at 3 Å by CF11 has been calibrated on local galaxies and may not hold up at higher redshift. The quiescent galaxies observed by Blue Jay are substantially younger than those observed at z ∼ 0 (many of them have stellar ages ≤ 1 Gyr, see ), since the age of the Universe at z ∼ 2 is only 3 Gyr, and photoionization models show that the LIERs-like (Low-Ionization Emission Regions) line emissions due to post-AGB stars are actually weaker in younger stellar populations <cit.>. This means that the Hα emission by retired galaxies at Cosmic Noon must be even fainter than the EW= 3 Å limit set by CF11. We thereby conclude that this type of ionization is not relevant for the vast majority of galaxies considered in this study. §.§ Line kinematics As a last piece of the puzzle, in Figure <ref> we show the relation between the [NII]/Hα line ratio and σ_gas, the velocity dispersion measured from the emission lines, for both the quiescent and the star-forming population. We find a clear division of the two samples, mirroring the division found in the other diagnostic diagrams: the quiescent sample is characterized by a higher [NII]/Hα ratio and a higher gas velocity dispersion with respect to the star-forming sample. In many cases the velocity dispersion is so large (σ_gas > 400 km/s) that it is clearly not due to the galaxy gravitational potential, but it must trace other processes such as outflows or shocks <cit.>, which must be associated to AGN activity (as discussed below). As a note, when fitting the velocity dispersions we adopted the nominal spectral resolution provided by JDox: however, we stress that the actual spectral resolution may be higher than this for compact sources (see ), thus the velocity dispersion values we provide may be underestimated. This issue mainly affects galaxies near or below the nominal spectral resolution, which is marked by the dashed black line in Figure <ref>, and is not relevant for galaxies with high measured velocity dispersion. We also highlight with blue square frames the galaxies that are AGN hosts according to the diagnostic diagrams: notably, star-forming AGN hosts are found mixed in with the quiescent sample. Our analysis of the kinematics further confirms the AGN origin of the observed emission in quiescent galaxies. §.§ AGN activity in the sample The classification of galaxies in the sample based on their rest-frame optical lines is summed up in Table <ref>. We are able to classify 9 out of 16 galaxies (56%), while the rest of them are either missing the required emission lines altogether or they have too low SNR for the diagrams to be reliable. Only one galaxy, COSMOS-7549, is classified as part of the composite population: this system completely lacks any [OIII] emission and may be a recently quenched non-active galaxy, whose residual ionizing flux from A-type stars is enough to produce low-ionization emission lines, such as [OII] and Hα, but no young stellar population is present to doubly ionize oxygen. This is similar to the sample of [OIII]-deficient galaxies analyzed by <cit.> at z<0.21; we further explore the star formation history of this system in Section <ref>. Another galaxy, COSMOS-10565, is the only one classified as a retired galaxy by the WHaN diagram and could be a genuine inactive quiescent galaxy. However, its extreme [NII]/Hα ratio and the presence of neutral gas outflows, as reported by <cit.>, suggest a more complex reality. This specific system is further discussed in Section <ref>. Most importantly, all the other galaxies reveal signs of hosting an AGN in their center. AGN represent 78% of the quiescent galaxy population for which we have reliable classification from line ratios, corresponding to a global 44% of all galaxies in the quiescent sample. The incidence of AGN in our sample is comparable to the one found by <cit.> in gas-rich, massive star-forming galaxies at Cosmic Noon. However, this high incidence is not necessarily expected for gas-poor quiescent galaxies, such as the ones in our sample. These sources are not classified as AGN by any other catalog. None of these galaxies are detected in the X-rays by either the Chandra-COSMOS Legacy Survey Point Source Catalog <cit.> or the COSMOS XMM Point-like Source Catalog <cit.>. Only one object (COSMOS-18688) is detected in the radio by the VLA-COSMOS 3 GHz Large Project and COSMOS VLA Deep surveys <cit.>. The AGN properties of the sample are discussed further in Section <ref>. One caveat is that the BPT and other rest-frame optical diagnostic diagrams cannot discriminate between shocks and AGN photoionization: <cit.> and <cit.> showed that diffuse radiation produced by fast shocks (v_gas∼ 150-500 km/s) from jets or star-formation feedback can reproduce optical emission line ratios of both LINERs and Seyfert galaxies. We are not able to rule out the possibility that shocks, rather than AGN photoionization, may play a dominant role in the excitation budget of the quiescent sample. However, even if fast shocks were present in these quiescent galaxies, some of these quiescent galaxies also show outflows (see Section <ref>) which cannot be star-formation-driven given the low SFR measured in these systems (see Section <ref>). Moreover, only one galaxy classified as AGN, COSMOS-19572, seems to be a merging system (see again Section <ref>); for the other galaxies we can thus exclude that the shocks are a consequence of galaxy interactions. The only possible source of shocks for the bulk of BPT-classified AGN, then, is through mechanical feedback from the AGN itself. ccccccc Quiescent sample classification and SFR from Hα line luminosity ID BPT WHaN [OIII]/[OII] [SII]/Hα Av SFR (mag) (M_⊙/yr) 7549 SF/Comp. SF/Comp. Comp. SF -0.05_-0.08^+0.07* 0.145_-0.003^+0.002 8013 Seyfert AGN LINER Seyfert 1.4_-0.3^+0.5 2.0_-0.7^+1.8 8469 - - - - 0.9_-0.3^+0.4 <0.06 9395 - - - - 0.8_-0.2^+0.3 <0.14 10128 - AGN Seyfert - 1.4_-0.3^+0.4 0.6_-0.2^+0.3 10339 - - - - 1.5_-0.3^+0.4 - 10400 - - - - 0.2_-0.1^+0.3 <0.02 10565 - EL-RG - - 1.3_-0.3^+0.4 0.1_-0.1^+0.1 10592 - - - - 0.3_-0.2^+0.2 <0.001 11142 Seyfert AGN LINER Seyfert 2.4_-0.4^+0.5 5.6_-2.5^+4.9 11494 - - - - 0.7_-0.2^+0.2 2.6_-0.8^+1.1 16419 LINER AGN LINER LINER 0.4_-0.1^+0.2 1.9_-0.3^+0.5 18668 LINER AGN LINER SF 3.04_-0.13^+0.12* 45_-6^+5 18688 Seyfert AGN Seyfert Seyfert 1.90_-0.06^+0.06* 19_-1^+1 19572 Seyfert AGN Seyfert Seyfert 2.81_-0.13^+0.12* 24_-4^+3 21477 - - - - 1.3_-0.5^+0.4 <0.3 *Dust attenuation derived from Balmer ratio Classification of the sample according to various diagnostic diagrams and SFR estimates from Hα line luminosity. Upper limits for SFR at the 3σ confidence level are given for galaxies without Hα detection. Galaxy 10339 has no Hα line due to a detector gap. § STAR FORMATION RATE We derive the SFR of each galaxy in the sample from the Hα line luminosity, assuming the conversion provided by <cit.>. However, the conversion of Hα luminosity to SFR is affected by several issues. First, we know from the emission line ratios that most of the Hα emitters in the quiescent sample host an AGN, which is clearly the greatest contributor to the measured Hα flux. In these cases, we consider the calculated SFRs as strict upper limits. Secondly, the measured line fluxes are affected by slit loss. This effect has been accounted for by the parametric correction estimated by Prospector and applied to the spectra before fitting the emission lines (see  <ref>): as already mentioned in  <ref>, however, the calibrated Hα line luminosity is overestimated in the case of ionized gas emission whose origin is not evenly distributed throughout the galaxy. Finally, the measured line fluxes are affected by dust attenuation. Ideally, we would use the Hβ/Hα Balmer decrement to estimate the nebular dust attenuation: in most cases, however, the SNR of the Hβ line is too low. We therefore choose to estimate the attenuation A_V using the results of the fits. The dust attenuation curve adopted by employs a two-components dust model <cit.> consisting of a dust optical depth associated with diffuse ISM dust, plus an additional dust component associated with dense star-forming clouds. further models dust extinction using a power-law modifier δ to the <cit.> dust extinction law, following the empirical relation found in <cit.>. Thus, we first employ the fitted two-component dust attenuation curve to derive the V-band extinction Av and then obtain the dust-corrected Hα flux applying the modifier δ estimated by to the Calzetti law. The SFR estimates based on the Hα line luminosity for the quiescent sample are summarized in Table <ref>. In four cases we are able to recover the dust correction from the Hα/Hβ ratio and then apply the Calzetti law directly, in all the other instances we use the Prospector dust model. Figure <ref> depicts the position of the Blue Jay galaxies in the stellar mass vs SFR diagram, with the SFR of the star-forming sample computed in the same way. Galaxies hosting an AGN according to the line diagnostic diagrams are highlighted with black empty squares: for these cases, the computed SFR represents an upper limit. We find results in agreement with the initial SFR vs stellar mass selection plot, which was based on Prospector measurements (Figure <ref>). This confirms that the sample is indeed quiescent and lies well below the star formation main sequence at z∼2. Notably, three galaxies (IDs 18668, 18688, and 19572) exhibit SFR values higher than Prospector estimates, falling within ± 1dex from the main sequence limit. However, given their status of AGN hosts, the measured Hα emission predominantly originates from the active nucleus rather than star-forming regions, and thus their true SFR is lower than what measured from Hα. Interestingly, one AGN within the star-forming sample, COSMOS-18977 (z=2.08, log(M_*/ M_⊙)=10.70), also borders the -1dex limit: this system is one of the two Broad Line AGN in the Blue Jay sample, as indicated by the large broadening of its permitted emission lines. Consequently, the stellar-continuum fit was performed only on the observed photometry <cit.> and the estimated SFR may not be as accurate as for galaxies with a spectroscopic fit. In any case, we find a relatively low upper limit on its SFR from the Hα line luminosity (SFR∼18 M_⊙/yr): given that the central AGN contributes significantly to the Hα emission, and considering the presence of deep Balmer absorption lines in its spectrum, COSMOS-18977 could represent an additional massive quiescent galaxy hosting an AGN. §.§ Rejuvenated candidates The two SFR estimates of Table <ref> and Table <ref> trace the star formation activity of galaxies on different timescales. The measurement corresponds to the SFR in the most recent bin in the SFH posterior distribution, which spans the last 30 Myr of the galaxy history. The stellar continuum, however, is not sensitive to changes in the stellar population on such short timescales. Moreover, the SFH fits employ a “continuity” Bayesian prior on the SFR between adjacent time bins, which disfavours abrupt changes in the SFH such as bursts, quenching and rejuvenation events on short timescales. On the other hand, the SFR estimated from the observed Hα line luminosity traces very recent star-formation activity, since most of the Hα flux in HII regions is due to short-lived massive stars, which photoionize the ISM around them on a time scale of ∼ 10 Myr. In Figure <ref> we compare the SFR measured with the two methods, in order to explore the possibility of rejuvenating galaxies, i.e., galaxies that show recent star formation activity traced by Hα but not detected by the fits. Because of the reasoning above, we choose to compute the SFR as the mean value over the last ∼100 Myr as obtained by the reconstructed SFH. We find only two obvious outliers in the SFR-SFR plot, with most galaxies found within 1 dex of a 1-1 relation between the two estimates (orange shaded region). One of the outliers is galaxy COSMOS-10592, whose spectrum suffers from systematic errors in the data reduction and for which we cannot trust the computed quantities. Apparently, no clear rejuvenated candidates are found in the sample, because no galaxy appears to have increased significantly its SFR in the last ∼10 Myr. However, the second outlier, COSMOS-7549, may be of interest to understand variations of SFR on short timescales. This system is located at the highest redshift end of the sample (z∼2.6) and is the only galaxy classified as part of the composite population by the diagnostic diagrams, meaning that its observed emission line ratios could be produced by a mix of recent SF and a contribution from a weak active nucleus or shocked gas emission. Thus, we would expect to observe a slightly higher SFR from Hα in this system with respect to the others, but instead we find a very low rate of 0.145 M_⊙/yr — one of the lowest of the sample. Furthermore, its Hα-estimated SFR is much lower than the -estimated one, suggesting that the galaxy underwent rapid quenching sometimes between ∼100 and ∼10 Myr ago. According to the SFH fits of <cit.>, when using a “bursty” prior for the fits — a modified prior which allows for more flexible SFR changes between time bins — there is evidence for a recent rejuvenation event which briefly boosted the galaxy SFR, followed by rapid quenching. This scenario, then, could explain the observed emission lines: the rejuvenation event initially shifted the line ratios towards the star formation region in the BPT and WHaN diagrams, followed by a shift in the opposite direction as a consequence of the recent quenching. The final line ratios are thus found in the composite region. Interestingly, this may also explain the unusually low [OIII]/Hβ and [OIII]/[OII] line ratios observed in COSMOS-7549: the high-ionization [OIII] line is expected to disappear after a few Myr since it requires very young and hot stars, while the lower ionization lines such as Hβ and [OII] can be produced by B-type stars up to ∼100 Myr after the starburst event <cit.>. Therefore, in this scenario the small amount of ionized gas would come from an extremely recent and abrupt quenching of a minor rejuvenation event. § IONIZED GAS OUTFLOWS The fitting model employed in Section <ref>, made up of single-Gaussian profiles for each emission line, was not able to successfully fit the spectra of 4 galaxies in the quiescent sample: specifically, galaxies 8013, 11142, 18688, and 19572, all AGN hosts (Table <ref>). As visible in Figure <ref>, this simple model is inadequate for an accurate representation of the data, suggesting that more complex physical processes are involved in producing the observed emission. The detailed study of one of the galaxies, COSMOS-11142, has revealed the presence of a powerful outflow, which is responsible for deviation from the simple Gaussian profile in the ionized gas emission lines <cit.>. We thus employ a multiple-Gaussian model for these four galaxies, in order to investigate the presence of possible ionized gas outflows. §.§ Emission line broad components Recent works on rest-frame optical emission of outflowing gas in active galaxies <cit.> employ a fitting model with three Gaussian components for each line: one systemic, narrow component tracing the narrow line region (NLR) and/or star-formation; one very broad, systemic component for permitted transitions, tracing the broad line region (BLR); and finally a third component free to vary in both line width and centroid position, tracing ionized outflows. This model is physically motivated, but suffers from a degeneracy between the two types of broad component. In principle, the degeneracy can be broken by comparing the profiles of different emission lines, since the BLR component is present only in permitted transitions, while the outflow can be traced by both permitted and forbidden transitions. Among the emission lines detected in our sample, the only permitted transitions are Hβ, Hα, and He I. However, Hβ and He I are generally too faint for an accurate modeling of the line profile; moreover, He I shows unique kinematics due to resonant absorption (explored in <ref>). Finally, Hα is partially blended with the [NII] doublet, which makes it impossible to tell the difference between a broadening due to the BLR component of Hα or a broadening due to an outflow component underlying all three lines. Therefore, in absence of evident BLR emission (such as for COSMOS-18977, as discussed in  <ref>), we cannot break this degeneracy for the galaxies in our sample. Since in some cases we clearly observe broad components in forbidden lines, we conclude that ionized outflows must be present, and we adopt a simpler fitting model made up of only two Gaussian components, as done in <cit.> and <cit.>. We fix a narrow component at the galaxy redshift with a velocity dispersion σ≤ 300 km/s; and then we add a broad component with σ≥ 200 km/s tracing outflows/BLR emission, free to vary with respect to the galaxy systemic velocity. We fit the narrow line profile under each line in Table <ref> and fix the same velocity dispersion and redshift for all lines. The broad component is considered only for lines with generally higher signal-to-noise ratios, namely Hα, Hβ, [OII], [OIII] and [NII] doublets. Given the relatively low spectral resolution of our data and the low SNR of the emission lines, we assume that, for a given galaxy, the broad component has the same velocity dispersion and velocity shift in all the emission lines. We performed the MCMC fits in the same way as done for the single-Gaussian model (see  <ref>). The broad component velocity shift is initialized with a flat prior between -1000 and 1000 km/s; the limits on the velocity dispersion prior are 10-300 km/s for the narrow component and 200-2000 km/s for the broad component. The MCMC walkers for the narrow component are initialised in small regions around the best-fit values given from the previous, single-Gaussian fit, while generic starting-point values are given for the broad component walkers. The results of the double-Gaussian model are compared with the single-Gaussian fit in Figure <ref>. For each of the four galaxies, we show the single-Gaussian fit in the first panel and the double-Gaussian fit in the second panel, for just one example line. In all cases there is a blueshifted excess that requires a double-Gaussian model, and that we interpret as an outflow of ionized gas leaving the galaxy. The SFR we measure (see <ref>) is too low to explain the presence of star-formation driven outflows. We can also rule out a star-formation driven fossil outflow: in all galaxies we measure winds velocities (estimated using the velocity offset v and the dispersion σ from the broad fitted component and computing v_out = |v| + 2σ) of ∼ 800 - 1000 km/s, substantially higher than what is observed in the most powerful star-formation driven outflows at z∼2 <cit.>. The high velocity, together with the lack of any tidal features visible in the near-IR imaging data, also rules out the possibility of tidally-ejected gas after a major merger. The only reasonable origin for the observed outflows is AGN feedback, consistent with the position of these four galaxies on the line diagnostic diagrams. §.§ Multiphase outflows in quiescent AGN hosts Many galaxies in the Blue Jay sample host neutral outflows, as shown by an analysis of the Na D absorption line conducted by <cit.>. In the third panel of Figure <ref> we show, for each galaxy with detected ionized outflows, the observed spectrum around the Na D_1, D_2 doublet, together with the stellar absorption as estimated by (red), and the model including excess absorption due to neutral gas (black) as fitted by <cit.>. Significant blueshifted Na D absorption, indicative of neutral outflows, is present in three of the four systems (IDs 8013, 11142, 18668). In the fourth galaxy with a ionized outflow, COSMOS-19572, <cit.> detect redshifted Na excess absorption, which is a signature of infalling neutral gas: this fact, combined with the evidence of a nearby companion in NIRCam imaging (last panel of Figure <ref>) suggests that this is an interacting system. We note, however, that although the NaD absorption in COSMOS-19572 is formally classified as infalling by <cit.>, its absolute velocity offset is comparable to the measurement error, and therefore this absorption could plausibly be systemic. The interpretation of these observations is not straightforward: most probably, we are looking at an interacting system with a main AGN-hosting galaxy accreting neutral gas from a nearby satellite - which could be fueling the AGN activity - intercepted along the stellar continuum on the 2D spectrum. The combination of ionized and neutral gas outflows in a subset of quiescent galaxies provides observational evidence for negative AGN feedback on the star formation activity, since we are seeing gas leaving these already gas-poor, quiescent systems. Thus, we can link the observation of ionized gas outflows observed in this sample of quiescent galaxies to the quenching of star-formation in these systems at Cosmic Noon. There are additional quiescent galaxies in which <cit.> observe blueshifted Na D absorption. The only one among these to be classified as an AGN by the line diagnostic diagrams is COSMOS-18688, in which we do not detect any significant ionized gas outflows. We tried fitting a two-component Gaussian profile to its emission lines, but this did not significantly improve the fit with respect to the single-component model. Neutral outflows are detected even in galaxies that are not robustly classified as AGN according to their emission lines. A particularly interesting case is that of galaxy COSMOS-10565 (Figure <ref>), which has weak emission lines, with the only detected ones being Hα and [NII]. This system exhibits the highest [NII]/Hα ratio of the whole sample and is the only galaxy classified as “retired” by the WHaN diagram (Figure <ref>). The Hα and [NII] line profiles are very blended and closely resemble the profiles observed in COSMOS-11142 (which is characterized by powerful outflows), but their SNR is too low for a meaningful fit with the double-Gaussian profile. If indeed an ionized outflow is present, the outflowing gas is too faint to be seen robustly in the emission lines. The presence of a neutral outflow indicates that this galaxy is probably hosting a low-luminosity AGN, which is undetected due to the weak emission lines. Its low SFR of ∼ 0.1 M_⊙/yr and the fact that it is isolated (Figure <ref>) rule out a major role for star formation feedback and galaxy interactions. This suggests that COSMOS-10565 is not a retired galaxy, despite its location in the WHaN diagram, and is consistent with the expectation that the local criterion from <cit.> (EW_Hα < 3 Å) should be lowered at high redshift, as discussed in <ref>. §.§ The He I line We report the detection of the high-ionization He I λ10831 Å line in 7 galaxies of the quiescent sample, shown in Figure <ref> (we exclude a formal detection in COSMOS-10592 on account of the problems with the data, see  <ref>). This emission line is detected almost exclusively in galaxies classified as AGN in the BPT — the only exception being COSMOS-10565, which is also likely to host an AGN since it has a neutral outflow (as discussed above) and COSMOS-11494, in which the line is marginally above detection and the fit does not seem reliable. In galaxy COSMOS-11142, which presents a wealth of ionized gas emission lines, the He I line is the only one to be redshifted, as shown in <cit.>. Here we report two more cases – COSMOS-8013 and COSMOS-18668 – where we detected a red excess in the He I emission line. As explained in <cit.>, this behaviour is likely a consequence of resonant scattering, similar to that found in Lyα emission. This means that we are observing the redshifted (i.e. receding) part of the ionized outflows instead of the approaching one, due to the fact that the He I λ10831 transition is meta-stable and thus photons emitted in the outflows are re-absorbed unless they are redshifted. This constitutes an independent confirmation of the presence of ionized outflows in our sample of quiescent galaxies. Whether it is redshifted or not, the He I emission line in quiescent galaxies may represent a novel, promising indicator to investigate the presence of AGN activity at z∼2. § SUMMARY AND DISCUSSION In this paper we analyzed the optical rest-frame emission lines of 16 quiescent galaxies observed as part of the Blue Jay survey, a medium-sized Cycle 1 JWST program that targeted 147 galaxies at redshift 1.7<z<3.5 in the COSMOS field. The Blue Jay survey was executed with NIRSpec in MOS mode and obtained spectra between 1 - 5μm, covering the ∼3000-16000 Å rest-frame. The present work is the first statistical look at rest-frame optical ionized gas emission lines in a sample of massive quiescent galaxies at z∼2 with deep spectra from JWST. Quiescent galaxies were selected from the main sample by requiring their SFR, determined by spectro-photometric fitting, to be 1 dex below the main sequence. SFR upper limits determined by the Hα line luminosity confirm that the sample is truly quiescent. Nonetheless, we detect emission lines from ionized gas in 12 out of 16 galaxies, representing 75% of the sample. This shows that very deep spectroscopy is able to reveal the presence of ionized gas in the majority of high-redshift quiescent galaxies, confirming a trend established by increasingly deep ground-based observations such as the Heavy Metal survey <cit.>, which detected Hα emission in 55% of their quiescent galaxy sample at z>1.4, and small samples of strongly lensed galaxies <cit.>. §.§ AGN activity in quiescent galaxies at z∼2 We highlight the detection of multiple emission lines in addition to Hα throughout the sample: the exquisite sensitivity of JWST/NIRSpec observations in the near-IR has enabled the detection of weak ionized gas emission lines that would have been otherwise missed by ground-based observations. This, in turn, made it possible to classify galaxies using line ratio diagnostic diagrams, which revealed rampant AGN activity in our sample of massive quiescent galaxies. We find that 8 out of 16 galaxies show emission line ratios compatible with the presence of an active nucleus (50% of the sample). We consider the possibility of contamination of the observed line ratios by fast shocks, but we find no possible sources of shocked gas other than AGN activity itself to be present in these galaxies, apart from the case of the merging system COSMOS-19572. Interestingly, no X-ray detection is found for any of the AGN in our sample, while only one system (COSMOS-18688) is detected in the radio. The main result of this work thus consists in the discovery of a population of never-before-detected AGN at z∼2 in a sample of apparently inactive massive quiescent galaxies. To investigate the AGN properties, we start by estimating the bolometric luminosity of each AGN from the observed [OIII] line <cit.>. Most of the AGN exhibit bolometric luminosities of the order of ∼10^44 erg/s; the largest value is found for COSMOS-18688 (L_BOL = (1.35 ± 0.02) · 10^45 erg/s). We then derive the inferred X-ray luminosities in the 2-10 keV range from scaling relations <cit.>. We find that most of the sample falls under the flux limits of both the Chandra-COSMOS Legacy Survey Point Source Catalog <cit.> and the COSMOS XMM Point-like Source Catalog <cit.>, with the only exceptions being galaxies 19572 and 18688. In any case, the observed [OIII] line luminosity may come also from shock excitation (especially relevant in the case of the merger COSMOS-19572), thus both the bolometric luminosity and inferred X-ray luminosity have to be considered upper limits. We thus conclude that the lack of X-ray detections is expected, given the weak AGN luminosities. cccc 5cm Bolometric and inferred X-ray luminosity of confirmed AGN from [OIII] line luminosity. ID L_BOL L_X[2-10 KeV] L_BOL/L_Edd (10^45 erg/s) (10^42 erg/s) 8013 0.112 ±0.003 9.3 ±0.2 0.01 ±0.01 10128 0.028 ±0.001 2.5 ±0.1 0.005 ±0.004 10565 0.018 ±0.005 1.6 ±0.5 0.002 ±0.002 11142 0.82 ±0.02 53.6 ±0.9 0.005 ±0.002 16419 0.47 ±0.02 33 ±1 0.029 ±0.009 18668 0.218 ±0.007 17.1 ±0.5 0.001 ±0.001 18688 1.35 ±0.02 79.0 ±0.9 0.02 ±0.02 19572 0.764 ±0.007 50.5 ±0.5 0.003 ±0.002 18977 0.166 ±0.001 13.4 ±0.1 0.03 ±0.03 We also estimate the Eddington luminosity L_Edd from the black hole mass, which we infer from the M_BH-σ relation of <cit.>: ( M_BH/10^8 M_⊙ ) = 1.95 ( σ/200 km s^-1 )^5.12 , adopting the stellar velocity dispersions measured from the spectral fits. We show the bolometric luminosity versus the Eddington luminosity for the AGN hosts in our sample in Figure <ref>, where the diagonal lines have constant Eddington ratio L_bol/L_Edd. Most of the AGN are probably low-luminosity AGN (LL-AGN), with Eddington ratios of ∼ 10^ -3 - 10^-2, typical of a central engine fueled by a Radiatively Inefficient Accretion Flow (RIAF) (see for a schematic view of the central engine of LL-AGN). As already mentioned in  <ref>, the measured velocity dispersions may be underestimated, thus Eddington ratios may actually be lower than computed here. Even though the high incidence of AGN discovered in this sample of apparently inactive quiescent galaxies has not been seen before at z∼2, this result is consistent with what is observed in the local Universe: <cit.> analyzed optical emission lines in several surveys of nearby galactic nuclei, finding that 43% of all galaxies show evidence of weak nuclear activity from an accreting SMBH and that the AGN fraction is even more remarkable for galaxies with an obvious bulge component, rising to ∼50%-70% for Hubble types E-Sb. Our measured AGN incidence of 50%, then, is in line with what is observed at z∼0, notwithstanding that galaxies with missing line detections could also be hosting LL-AGN out of the sensitivity reach for this survey. Within this framework, it is reasonable to think that many of the Blue Jay galaxies in the star-forming sample may also be hosting a low-luminosity active nucleus whose line emission is however outshined by the nebular emission from HII regions, a problem that does not affect the quiescent sample. To illustrate this idea more clearly, we can follow the relative contributions of star-formation and LL-AGN as a sequence across the WHaN diagram of Figure <ref>, where symbols have been colored by stellar mass. Gas metallicity also affects line ratios in galaxies of similar masses. In the top left corner (highest Hα EWs and lowest [NII]/Hα ratio) we find low-mass, low-metallicity SF galaxies, which probably do not host any AGN: at these low stellar masses, line ratios are determined by star-formation processes and the spread in [NII]/Hα ratio values is likely due to different gas metallicity. Moving towards lower Hα EWs, we find higher-mass galaxies: as the specific SFR decreases in the vertical direction, the contribution of a possible LL-AGN in the central regions raises the [NII]/Hα ratio with respect to star-formation only emission, until the massive quiescent galaxies in the bottom right corner of the diagram are reached, for which the LL-AGN contribution is dominant. For high mass galaxies, metallicity should have a smaller impact on the line ratios, as shown by the relatively small difference in [NII]/Hα ratio for galaxies of different masses at fixed Hα EW. The observed sequence could thus be seen as an evolutionary track on the WHaN diagram, tracing the evolution of massive galaxies from star-forming to quiescent as driven by the effect of AGN feedback. §.§ Star-formation quenching by multiphase AGN outflows We find ionized outflows in a subset of four out of 16 quiescent galaxies, and measure outflow velocities of the order of ∼1000 km/s. All four galaxies are AGN hosts (according to the line diagnostic diagrams) and display evidence of neutral gas outflows/inflows as well <cit.>. Additionally, two other galaxies categorized as AGN hosts exhibit solely neutral phase outflows (COSMOS-10565 and COSMOS-18688). With the exception of COSMOS-19572, for which tidal disruption due to the ongoing merger needs to be considered, the observed outflows can only be driven by AGN activity. One of the galaxies in our sample is COSMOS-11142, which was studied in detail in <cit.>. The mass outflow rate in this system is sufficiently large to fully explain the rapid shut-off of star formation, providing one of the first direct observational links between quenching and AGN ejective feedback. However, in this galaxy most of the gas that is being ejected is in the neutral phase, with the ionized phase playing a minor role. The other galaxies in our sample have ionized outflows that are comparable to, or smaller than, the one observed in COSMOS-11142. Thus, we do not expect that these quiescent galaxies have been quenched solely by the observed ionized outflows. However, many galaxies in our sample do show evidence of neutral outflows, and are therefore similar in many respects to COSMOS-11142, even though none of them is close in terms of providing such a clear quenching picture. One possibility is that COSMOS-11142 was caught at the ideal time, just in the middle of the short-lived, powerful multiphase outflow episode that is responsible for rapidly quenching most of the star formation activity in the galaxy. In this scenario, the other galaxies in our sample are observed in later evolutionary stages: nonetheless, we find that the majority of them still host detectable AGN activity. Thus, our results represent a new, crucial piece of the quenching puzzle, and justifies extrapolating the physical processes observed in one galaxy to the majority of the quiescent population. Finally, it is possible that the observed LL-AGN activity plays a key role in maintaining quiescence in massive galaxies as they evolve to z ∼ 0. In the local universe, a particular class of quiescent galaxies exhibiting LL-AGN activity and ionized gas winds, called “red geysers”, has already been observed <cit.>. Local LL-AGN are associated to RIAFs <cit.>, which have a tendency to drive powerful winds <cit.> whose thermal and kinetic energy is deposited in the surrounding ISM. <cit.> developed a model for LL-AGN feedback through galaxy-scale winds produced by RIAFs and showed that these winds, especially if long-lived (on timescales of 10 Myr or longer) can prevent gas collapse and effectively quench star-formation in massive galaxies.There could be then a direct link between the LL-AGN activity at Cosmic Noon, which is responsible for rapid quenching, and the maintenance mode observed in local "red geysers". We thank Salvatore Quai, Ivan Lopez, Raffaele Pascale, Marcella Brusa and Elena Bertola for the help and for the illuminating discussions. The Blue Jay Survey is funded in part by STScI Grant JWST-GO-01810. SB is supported by the the ERC Starting Grant “Red Cardinal”, GA 101076080. RD is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. RE acknowledges the support from grant numbers 21-atp21-0077, NSF AST-1816420, and HST-GO-16173.001-A as well as the Institute for Theory and Computation at the Center for Astrophysics. RW acknowledges funding of a Leibniz Junior Research Group (project number J131/2022). This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program GO 1810. This work also makes use of observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. JWST(NIRSpec) Python library <cit.>, <cit.>, EAzY <cit.> § FULL SPECTRAL MEASUREMENTS In Table <ref> we display the full set of spectral measurements for each galaxy in the sample. cccccccccccc 1pt Spectral fitting results COSMOS ID z_gas σ_gas 9cFlux (km s^-1) 9c(10^-18 erg s^-1 cm^-2) 4-12 [OII] [NeIII] Hβ [OIII]5007 Hα [NII]6583 [SII]6716 [SII]6731 He I 7549 2.6238 157.07_-10.20^+10.40 2.63_-0.26^+0.28 <0.020 0.76_-0.06^+0.06 <0.022 2.14_-0.13^+0.13 1.27_-0.11^+0.12 0.68_-0.11^+0.11 <0.651 <0.201 8013 1.6894 261.08_-3.94^+4.38 8.65_-0.59^+0.60 1.06_-0.14^+0.14 <0.987 4.51_-0.12^+0.13 9.13_-0.13^+0.13 9.63_-0.13^+0.14 3.29_-0.29^+0.31 2.02_-0.30^+0.32 1.59_-0.16^+0.17 8469 <0.126 <0.212 <0.643 <3.75 <0.142 <0.098 <0.129 <0.100 <0.201 9395 2.1268 400.47_-26.93^+27.69 2.44_-0.34^+0.35 <0.026 <0.166 <1.055 <0.604 <0.747 <0.039 <0.020 <0.935 10128 1.8517 235.24_-7.46^+8.01 0.55_-0.03^+0.03 <0.002 0.95_-0.04^+0.04 2.91_-0.11^+0.11 2.93_-0.11^+0.11 <1.649 <0.873 <0.261 10339 2.3636 536.04_-54.90^+59.23 1.54_-0.33^+0.34 0.30_-0.09^+0.10 <0.160 10400 <0.274 <0.728 <0.048 <0.51 <0.078 <0.116 <0.093 10565 2.4416 425.70_-30.41^+31.50 110.10_-53.27^+55.80 49.51_-33.30^+34.44 <0.028 <0.371 0.29_-0.20^+0.25 4.07_-0.27^+0.26 <1.054 <0.155 3.12_-0.27^+0.28 10592 <0.001 <0.27 <0.006 <0.007 <0.017 <0.008 <8.88 11142 2.4434 522.15_-9.05^+9.26 6.58_-0.24^+0.25 2.13_-0.19^+0.19 <0.049 17.18_-0.31^+0.31 5.65_-0.63^+0.62 35.42_-0.63^+0.63 2.39_-0.51^+0.52 4.53_-0.51^+0.51 5.06_-0.39^+0.38 11494 2.0909 985.43_-23.10^+10.85 7.59_-1.77^+1.94 <0.167 <0.123 <0.060 24.57_-3.37^+2.76 2.68_-1.91^+2.68 <0.702 <0.512 21.30_-1.89^+1.78 16419 1.9257 437.70_-10.40^+10.30 43.11_-5.53^+5.56 <0.622 <5.293 14.97_-0.62^+0.63 28.12_-1.58^+1.61 25.03_-1.45^+1.46 28.32_-1.82^+1.87 18.04_-1.86^+1.82 <1.610 18668 2.0857 204.44_-3.43^+3.68 17.66_-1.18^+1.15 <0.206 3.74_-0.21^+0.21 5.99_-0.21^+0.21 29.28_-0.45^+0.44 30.23_-0.41^+0.42 6.84_-0.35^+0.35 4.35_-0.34^+0.34 4.44_-0.32^+0.33 18688 2.0068 205.46_-2.30^+2.31 16.01_-0.44^+0.46 4.19_-0.36^+0.36 9.06_-0.32^+0.33 39.87_-0.44^+0.44 48.60_-0.72^+0.74 59.37_-0.71^+0.74 7.48_-0.63^+0.62 7.60_-0.61^+0.61 7.57_-0.55^+0.56 19572 1.8674 201.19_-1.99^+2.06 9.98_-1.05^+1.05 <2.360 3.07_-0.18^+0.17 25.62_-0.22^+0.22 22.34_-0.33^+0.31 42.17_-0.31^+0.31 5.45_-0.34^+0.33 4.23_-0.31^+0.31 5.56_-0.19^+0.19 21477 <4.690 <1.225 <0.147 <0.939 <0.450 <0.842 <0.028 <0.995 <0.358 Flux measurement for fitted emission lines in the quiescent sample. aasjournal
http://arxiv.org/abs/2406.09302v1
20240613163905
The reflection complexity of sequences over finite alphabets
[ "Jean-Paul Allouche", "John M. Campbell", "Jeffrey Shallit", "Manon Stipulanti" ]
math.CO
[ "math.CO", "cs.DM", "cs.FL", "05A05, 11B85, 68R15" ]
unicodehyperref plain theoremTheorem lemma[theorem]Lemma corollary[theorem]Corollary proposition[theorem]Proposition conjecture[theorem]Conjecture definition definition[theorem]Definition example[theorem]Example question[theorem]Question remark[theorem]Remark ∧ : #1#2#1 #2 smallarray[1] =.13885em @#1@The reflection complexity of sequences over finite alphabets Jean-Paul Allouche John M. Campbell Jeffrey Shallit Manon Stipulanti § ABSTRACT In combinatorics on words, the well-studied factor complexity function ρ_x of a sequence x over a finite alphabet counts, for any nonnegative integer n, the number of distinct length-n factors of x. In this paper, we introduce the reflection complexity function r_x to enumerate the factors occurring in a sequence x, up to reversing the order of symbols in a word. We introduce and prove results on r_x regarding its growth properties and relationship with other complexity functions. We prove that if x is k-automatic, then r_x is computably k-regular, and we use the software to evaluate the reflection complexity of automatic sequences, such as the Thue–Morse sequence. We prove a Morse–Hedlund-type result characterizing eventually periodic sequences in terms of their reflection complexity, and we deduce a characterization of Sturmian sequences. Furthermore, we investigate the reflection complexity of episturmian, (s+1)-dimensional billiard, and Rote sequences. There are still many unanswered questions about this measure. Keywords: reflection complexity, factor complexity, reversal, automatic sequence, Sturmian sequence, episturmian sequence, Rote sequence, Morse-Hedlund theorem, .MSC: 05A05, 11B85, 68R15 § INTRODUCTION The discipline of combinatorics on words continues to be burgeoning as a relatively new and interdisciplinary area of mathematics. In this regard, the significance of combinatorics on words within disciplines such as theoretical computer science leads us to explore variants and generalizations of fundamental objects and constructions involved within the field. If x is an infinite sequence over a finite alphabet (see Section <ref> for precise definitions), natural problems that arise in the combinatorial study of x and in the context of computer science-based problems concern the behavior of factors of x. Writing = { 1, 2, …} and = { 0, 1, …}, we are led to consider the factor complexity functionρ_x→ which maps n≥ 0 to the number of distinct factors of x of length n. (The term factor refers to any contiguous block occurring in x.) Variations on this definition can be considered as a measure of how “complicated” a sequence is. For example, the abelian complexity function of x counts the number of factors of x of a given length and up to permutation of symbols, so that two factors u= u(1) u(2) ⋯ u(m) and v = v(1) v(2) ⋯ v(n) are abelian equivalent if m = n and if there exists a bijection σ{ 1, 2, …, m }→{ 1, 2, …, m } such that u(1) u(2) ⋯ u(m) = v(σ(1)) v(σ(2)) ⋯ v(σ(m)). Similarly, the cyclic complexity functionc_x introduced in 2017 <cit.> is equal to the number of length-n factors of x, up to equivalence under cyclic permutations. The abelian and cyclic complexity functions lead us to introduce a reflection complexity function on sequences via an invariant property involving permutations of symbols, by analogy with the abelian and cyclic complexity functions. In addition to the factor, abelian, and cyclic complexity functions indicated above, there have been many different complexity functions on sequences that have been previously introduced. In this regard, we highlight the following “complexities”: additive complexity <cit.>, arithmetical complexity <cit.>, gapped binomial complexity <cit.>, k-abelian complexity <cit.>, k-binomial complexity <cit.>, Kolmogorov complexity <cit.>, Lempel–Ziv complexity <cit.>, Lie complexity <cit.>, linear complexity (see the survey by Niederreiter <cit.>), maximal pattern complexity <cit.>, maximum order complexity <cit.>, (initial) (non-)repe­ti­tive complexity <cit.>, opacity complexity <cit.>, open and closed complexity <cit.>, palindrome complexity <cit.>, periodicity complexity <cit.>, privileged complexity <cit.>, relational factor complexity <cit.>, span and leftmost complexity <cit.>, string attractor profile complexity (implicitly defined in <cit.>; also see <cit.>), and window complexity <cit.>. Also see the references in the surveys in <cit.>. The complexity function r_x defined below does not seem to have been previously studied, but may be thought of as natural in terms of its relationships with automatic sequences such as the Thue–Morse sequence. To begin with, we require the equivalence relation ∼_r defined below. Let m,n be nonnegative integers. Given a finite word u= u(1) u(2) ⋯ u(m), its reversal is the word u^R=u(m) u(m-1) ⋯ u(1), that is, u^R(i) = u(m+1-i) for all i∈{1,…,m}. A palindrome is a word that is equal to its reversal. Two finite words u and v are reflectively equivalent if u = v or u = v^R. We denote this equivalence relation by u ∼_r v. Over the alphabet { a, b, …, z}, the English word reward is reflectively equivalent to drawer, while deed, kayak, and level are palindromes. Let x be a sequence. The reflection complexity functionr_x→ of x maps every n≥ 0 to the number of distinct length-n factors of x, up to equivalence by ∼_r. Let t = 011010011001011010010110011010011⋯ denote the Thue–Morse sequence, where the nth term in (<ref>) for n≥ 1 is defined as the number of ones, modulo 2, in the base-2 expansion of n. The initial terms of the integer sequence (r_t(n))_n≥ 0 are such that (r_t(n))_n≥ 0 = 1, 2, 3, 4, 6, 6, 10, 10, 13, 12, 16, 16, 20, 20, 22, …. We see that r_t(2) = 3, for example, since there are 3 length-2 factors of t, up to reflection complexity, i.e., the factors 00 and 11 and one member of the equivalence class { 01, 10 }, with respect to ∼_r. The integer sequence in (<ref>) is not currently included in the On-Line Encyclopedia of Integer Sequences <cit.>, which suggests that our notion of “reflection complexity” is new. See also the work of Krawchuk and Rampersad in <cit.>, which introduced the notion of cyclic/reversal complexity for sequences. The evaluation of reflection complexity functions is closely related to the work of Rampersad and Shallit <cit.>, who investigated sequences x such that all sufficiently long factors w have the property that that w^R is not a factor of x. Also, the evaluation of reflection complexities for sequences is related to the enumeration of palindromes contained in sequences; see, for example, Fici and Zamboni <cit.>. This paper is organized as follows. In Section <ref>, we introduce the notation and definitions needed for the paper. In Section <ref>, we give general results on the reflection complexity. In particular, we investigate growth properties and relationships with other complexity functions. In Sections <ref>, <ref>, and <ref>, respectively, we investigate reflection complexities for eventually periodic sequences, Sturmian sequences and generalizations, and reversal-closed and rich sequences. Next, in Section <ref>, we focus on classical automatic sequences and, with the use of , we prove that the reflection complexity function for automatic sequences is a regular sequence. We also study the reflection complexity for famous automatic sequences such as the Thue–Morse sequence. Finally, some further research directions are considered in Section <ref>. §.§ Preliminaries Generalities. For a general reference on words, we cite <cit.>. An alphabet is a finite set of elements called letters. A word on an alphabet A is a finite sequence of letters from A. The length of a word, denoted between vertical bars, is the number of its letters (counting multiplicities). The empty word is the only 0-length word, denoted as . For all n≥ 0, we let A^n denote the set of all length-n words over A. We let A^* denote the set of words over A, including the empty word and equipped with the concatenation operation. In order to distinguish finite words and infinite sequences, we write the latter in bold. Except for complexity functions, we start indexing words and sequences at 1. A factor of a word or a sequence is any of its (finite and contiguous) subblocks. A prefix (resp., suffix) is any starting (resp., ending) factor. Given a word w, its nth term is written w(n) for 1≤ n≤ |w|. The factor starting at position n and ending at position m with 1≤ m≤ n≤ |w| is written w[m..n]. A factor u of a word w over A is right (resp., left) special if ua and ub (resp., au and bu) are factors of w for some distinct letters a,b∈ A. We extend these notations to (finite) words as well. A sequence x is reversal-closed if, for any factor w of x, the word w^R is also a factor of x. A sequence x is eventually periodic if there exist finite words u,v, with v nonempty, such that x=uv^ω where v^ω=vvv⋯ denotes the infinite concatenation of the word v. A sequence that is not eventually periodic is said to be aperiodic. A sequence is said to be recurrent if every factor occurs infinitely many times; it is uniformly recurrent if each factor occurs with bounded gaps, i.e., for all factors w, there is some length m = m(w) such that w occurs in every length-m block. Morphisms. Let A and B be finite alphabets. A morphismf A^* → B^* is a map satisfying f(uv)=f(u)f(v) for all u,v∈ A^*. In particular, f()=, and f is entirely determined by the images of the letters in A. For an integer k≥ 2, a morphism is k-uniform if it maps each letter to a length-k word. A 1-uniform morphism is called a coding. A sequence x is morphic if there exist a morphism f A^* → A^*, a coding g A^* → B^*, and a letter a ∈ A such that x = g(f^ω(a)), where f^ω(a) = lim_n→∞ f^n(a). We let E{0,1}^*→{0,1}^* be the exchange morphism defined by E(0)=1 and E(1)=0. We naturally extend E on sequences. Numeration systems. Let U=(U(n))_n≥ 0 be an increasing sequence of integers with U(0)=1. Any integer n can be decomposed, not necessarily uniquely, as n=∑_i=0^t c(i) U(i) with non-negative integer coefficients c(i). The word c(t)⋯ c(0)∈ℕ^* is a U-representation of n. If this representation is computed greedily, then for all j< t we have ∑_i=0^j c(i) U(i)<U(j+1) and _U(n)=c(t)⋯ c(0) is said to be the greedyU-representation of n. By convention, the greedy representation of 0 is the empty word , and the greedy representation of n>0 starts with a non-zero digit. For any c_t⋯ c_0∈ℕ^*, we let _U(c(t)⋯ c(0)) denote the integer ∑_i=0^t c(i) U(i). A sequence U satisfying all the above conditions defines a positional numeration system. Automatic and regular sequences. For the case of integer base numeration systems, a classical reference on automatic sequences is <cit.>, while <cit.> treat the case of more exotic numeration systems. Let U=(U(n))_n≥ 0 be an positional numeration system. A sequence 𝐱 is U-automatic if there exists a deterministic finite automaton with output (DFAO) 𝒜 such that, for all n≥ 0, the nth term 𝐱(n) of x is given by the output 𝒜(_U(n)) of 𝒜. In particular, if U is the sequence of consecutive powers of an integer k≥ 2, then 𝐱 is said to be k-automatic. It is known that a sequence is k-automatic if and only if it is the image, under a coding, of a fixed point of a k-uniform morphism <cit.>. A generalization of automatic sequences to infinite alphabets is the following <cit.>. Let U=(U(n))_n≥ 0 be an positional numeration system. A sequence x is U-regular if there exist a column vector λ, a row vector γ and matrix-valued morphism μ such that x(n)=λμ(_U(n)) γ. Such a system of matrices forms a linear representation of x. Another definition of k-regular sequences is the following one <cit.>. Consider a sequence x and an integer k≥ 2. The k-kernel of x is the set of subsequences of the form (x(k^e n + r))_n≥ 0 where r∈{1,2,…,k^e}. Equivalently, a sequence is k-regular if the -module generated by its k-kernel is finitely generated. A sequence is then k-automatic if and only if its k-kernel is finite <cit.>. Sturmian sequences. A sequence x is Sturmian if its factor complexity function satisfies ρ_x(n) = n+1 (see, e.g., <cit.>). Sturmian sequences have minimal factor complexity among all non-eventually periodic sequences, as proved by Morse and Hedlund <cit.>. Let x be a sequence and let ℓ be the number of distinct letters occurring in x. The following properties are equivalent. (a) The sequence x is eventually periodic. (b) We have ρ_x(n) = ρ_x(n+1) for some n≥ 0. (c) We have ρ_x(n)<n+ℓ-1 for some n≥ 1. (d) The factor complexity ρ_x is bounded. This theorem implies in particular that ρ_x(n) is either bounded or it satisfies ρ_x(n) ≥ n+ℓ for all n. Thus the minimal factor complexity among all non-eventually periodic sequences is ρ_x(n) = n+1 for all n, i.e., the complexity of Sturmian sequences. Actually there is another “growth gap” for ρ_x. Recall that a sequence x is called quasi-Sturmian if there exists a constant C such that, for n large enough, one has ρ_x(n) = n + C (see <cit.>; also see <cit.>). It is known that if x is neither eventually periodic nor quasi-Sturmian, then ρ_x(n) - n tends to infinity (this result is due to Coven <cit.>; the first author <cit.> attributed the result to J. Cassaigne, because he first learned it from him). Thus we have: (a) either ρ_x(n) is bounded, which happens if and only if x is eventually periodic; (b) or else ρ_x(n) = n +C for some constant C and all n large enough, which means that x is quasi-Sturmian; (c) or else ρ_x(n) - n tends to infinity. One more point (once explained to the first author by J. Berstel) is that if ρ_x(n) = n + 1 for n large enough, then ρ_x(n) = n + 1 for all n. Namely, let n_0 be the least integer n for which ρ_x(n) = n + 1, and suppose that n_0 >1. Hence ρ_x(n_0-1) ≠ n_0. The sequence x cannot be eventually periodic, since its factor complexity is not bounded. Thus, one has ρ_x(n) ≥ n+1 for all n. Hence, in particular, ρ_x(n_0 - 1) ≥ n_0. Thus ρ_x(n_0 - 1) > n_0. Since ρ_x is non-decreasing, we have that n_0 < ρ_x(n_0-1) ≤ρ_x(n_0) = n_0+1. This gives ρ_x(n_0-1) = n_0+1 = ρ_x(n_0), which is impossible since x is not eventually periodic. In other words, in the second item above, if C=1, then x is Sturmian. § GENERAL RESULTS Given a sequence x, we can decompose its factor complexity function ρ_x and its reflection complexity function r_x by using the following functions: (a) The number _x(n) of “unreflected” length-n factors w of x such that w^R is not a factor of x; (b) The number _x(n) of “reflected” length-n factors w of x such that w^R is also a factor of x; and (c) The number _x(n) of length-n palindrome factors w of x (i.e., the palindrome complexity function of x, see <cit.>). In particular, we have [ ρ_x(n) = _x(n) + _x(n),; r_x(n) = _x(n) + 12(_x(n) - _x(n)) + _x(n). ] Let f = 01101010⋯ denote the Fibonacci sequence, which is the fixed point of 0↦ 01, 1↦ 0. Its length-5 factors are u_1=01001, u_2=10010, u_3=00101, u_4=01010, u_5=10100, and u_6=00100. Observe that u_4 is an unreflected factor (Type 1 above), u_1,u_2,u_3,u_5 are reflected (Type 2), and u_6 is a palindrome (Types 2 and 3). We obtain r_f(6)= 1 + 1/2(5-1) + 1=4. There is a rich interplay among the complexity functions _x, _x, ρ_x, r_x, and _x, which motivates the study of the combinations of these functions indicated in Equalities (<ref>). This is illustrated below. For a sequence x and for all n≥ 1, we have ρ_x(n) - r_x(n) = 1/2(_x(n) - _x(n)) and 2 r_x(n) - ρ_x(n) = _x(n) + _x(n). Immediate from Equalities (<ref>). This lemma implies the following bounds on the ratio r/ρ. For any sequence x and for all n≥ 0, we have 1/2ρ_x(n) ≤1/2 (ρ_x(n) + _x(n)) ≤ r_x(n) ≤ρ_x(n). Furthermore, the equality cases are as follows. (a) We have r_x(n) = ρ_x(n) if and only if every reflected length-n factor of x is a palindrome. (b) We have r_x(n) = 1/2 (ρ_x(n) + _x(n)) if and only if x has no unreflected length-n factors. In particular, if the sequence x is reversal-closed, we have r_x = 1/2 (ρ_x + _x). (c) We have r_x(n) = 1/2ρ_x(n) if and only if x has no palindrome of length n and each of its length-n factors is reflected. The inequalities and the equality cases are immediate consequences of Lemma <ref>. It is known that if a sequence x is reversal-closed, then x is recurrent: it suffices to adapt the proof of <cit.>, as indicated in <cit.>. Also note that if a sequence x is uniformly recurrent and contains infinitely many distinct palindromes, then x is reversal-closed <cit.>. One can say more for uniformly recurrent sequences. The following dichotomy holds. Let x be a uniformly recurrent sequence. Then either it is reversal-closed, or else it has no long reflected factors (which implies that x has no long palindromes). In other words, (a) either ρ_x = _x, which implies the equality r_x = 1/2(ρ_x + _x); (b) or else there exists n_0 such that ρ_x(n) = _x(n) for all n ≥ n_0, which implies r_x(n) = ρ_x(n) for all n ≥ n_0. If x is reversal-closed, then r_x = 1/2(ρ_x + _x) from Theorem <ref>(b) above. Now suppose that x has an unreflected factor w. Since x is uniformly recurrent, any long enough factor of x contains w as a factor, which implies that this long factor itself is unreflected. This exactly says that _x(n) = 0 for n large enough (and in particular _x(n) = 0 for n large enough). This implies from Equalities (<ref>) that, for n large enough, ρ_x(n) = _x(n), and so r_x(n) = ρ_x(n). Now we exhibit sequences with particular behaviors of their reflection complexity. It is possible to construct an aperiodic automatic sequence x such that r_x(n) = ρ_x(n) and _x(n) > 0 for all n. An example of such a sequence is given by a fixed point of the morphism 0 ↦ 01, 1 ↦ 23, 2 ↦ 45, 3 ↦ 23, 4 ↦ 44, and 5 ↦ 44. This sequence has no reflected factors except palindromes, and there is exactly one palindrome of each length >1. Consider the sequence x on {0,1} whose nth prefix x_n is given recursively as follows: x_0=01 and x_n+1=x_n01x_n^R for all n≥ 0. See <cit.> or <cit.>. The sequence x is uniformly recurrent, reversal-closed, and 2-automatic and accepted by a DFAO of 6 states (see, e.g., <cit.>), and contains only a finite number of palindromes. Furthermore, for all sufficiently large n, we have r_x(n) = 12ρ_x(n). It is also possible to construct an aperiodic automatic sequence where the only palindromes are of length 1, but there are reflected factors of each length > 1. In this regard, we let g_n be the prefix of length 2^n - 2 of (012)^ω. Then an example of an automatic sequence satisfying the desired properties is 3 g_1 4 5 g_2^R 6 3 g_3 4 5 g_4^R 6 3 g_5 4 5 g_6^R 6 ⋯, where <cit.> is required (observe that we intertwine the sequences (3456)^ω and g_1 g_2^R g_3 g_4^R ⋯ to build our sequence). There is an automatic sequence x over the alphabet {0, 1} such that Ref_x(n) = Ω(n) and such that Unr_x(n) = Ω(n). Namely, consider the image under the coding 0, 1, 2 ↦ 0 and 3,4↦ 1 of the fixed point, starting with 0, of the morphism 0 ↦ 01, 1 ↦ 23, 2 ↦ 32, 3 ↦ 42, and 4 ↦ 43. We also provide a construction of an automatic sequence x such that r_x(n+1) < r_x(n) for all odd n ≥ 3. In particular, let x denote the sequence given by applying the coding a, b, d ↦ 1 and c ↦ 0 to the fixed point, starting with a, of the morphism defined by a ↦ ab, b ↦ cd, c ↦ cd, and d ↦ bb. This gives us sequence <cit.> in the OEIS. Computing the reflection complexity of x (e.g., using ) gives that r_x(n) = n+1, for odd n≥1; n-1, for even n ≥ 4. Actually we even have, for this sequence x, that r_x(n+1) = r_x(n) - 1 for all odd n ≥ 3. With extra hypotheses on a sequence x, we can give more precise results in comparing the respective growths of reflection and factor complexities. We will need Theorem <ref> below. Note that Part (b) of this theorem was originally stated for uniformly recurrent sequences: see <cit.>. However, its proof only requires the sequences to be recurrent (see <cit.> and also <cit.>). Furthermore we have seen that a reversal-closed sequence must be recurrent (see Remark <ref>). Thus we can state the theorem as follows (also see Theorem <ref>). (a) Let x be a uniformly recurrent sequence. If x is not closed under reversal, then (n) = 0 for n large enough (actually one even has _x(n) = 0 for n large enough). (b) Let x be a reversal-closed sequence. For all n≥ 0, we have _x(n+1) + _x(n) ≤ρ_x(n+1)-ρ_x(n)+2. There exist sequences that are uniformly recurrent, reversal-closed, and have no long palindromes (see <cit.>; also see Example <ref> above). We deduce the following theorem from Theorem <ref>. Let x be a reversal-closed sequence. For all n≥ 0, we have 1/2ρ_x(n) ≤ r_x(n) ≤1/2ρ_x(n+1) + 1. Using Theorem <ref>(a) and (b), Remark <ref> and Theorem <ref>(b), we have 1/2ρ_x(n) ≤ r_x(n) = 1/2(ρ_x(n) + _x(n)) ≤1/2 (ρ_x(n) + ρ_x(n+1)-ρ_x(n)+2) for all n≥ 0. Thus 1/2ρ_x(n) ≤ r_x(n) ≤1/2ρ_x(n+1) + 1 for all n≥ 0, as desired. On the other hand, we can use a result of <cit.> to obtain the following theorem. Let x be a non-eventually periodic and reversal-closed sequence. For all n≥ 1, we have 1/2ρ_x(n) ≤ r_x(n) < 1/2ρ_x(n) + 8/nρ_x( n + ⌊n/4⌋). Given a non-eventually periodic sequence x, we have from <cit.> the inequality _x(n) < 16/nρ_x( n + ⌊n/4⌋) for all n≥ 1. The statement follows from this and Theorem <ref>(a) and (b). Let x be a non-eventually periodic and reversal-closed sequence. If its factor complexity satisfies ρ_x(n+1) ∼ρ_x(n) or ρ_x(2n)/ρ_x(n) = o(n), then r_x(n) ∼1/2ρ_x(n) when n tends to infinity. In particular, this equivalence holds if x is non-eventually periodic, reversal-closed and morphic. Let x be a non-eventually periodic and reversal-closed sequence. If ρ_x(n+1) ∼ρ_x(n), then, from Theorem <ref>, we obtain that r_x(n) ∼1/2ρ_x(n) when n tends to infinity. Now, if ρ_x(2n)/ρ_x(n) = o(n), we obtain, from Theorems <ref> and <ref>, and using the fact that ρ_x is non-decreasing, 1/2ρ_x(n) ≤ r_x(n) < 1/2ρ_x(n) + 8/nρ_x(2n) = 1/2ρ_x(n) +o(ρ_x(n)), which is enough. Now suppose that, in addition, the sequence x is morphic. We know that either ρ_x(n) = Θ(n^2) or ρ_x(n) = O(n^3/2) (see <cit.>). In the first case, then ρ_x(2n)/ρ_x(n) is bounded, and hence o(n). If ρ_x(n) = O(n^3/2), since x is not eventually periodic (hence ρ_x(n) ≥ n+1), we have ρ_x(2n)/ρ_x(n)≤ Cn^3/2/n+1 = o(n). This finishes the proof. The upper bound in Theorem <ref> raises questions as to growth properties of the function r_x more generally, apart from the case where the set of factors of x satisfies the hypotheses of Theorem <ref>. This leads us toward the growth property in Theorem <ref> below. Let x be a sequence. Then r_x (n) ≤ r_x (n+2) for all n ≥ 0. The result is clear for n = 0, so assume n > 0 in what follows. Let c be a letter not in the alphabet of x, and define y = c x. Then r_y (n) = r_x (n) + 1 for all n >0, since y has exactly one additional factor for each length n ≥ 1; namely, the prefix of length n. Thus, it suffices to prove the claim for y instead of x. With each length-n factor w of y associate a set S_w of length-(n+2) factors of y, as follows: If w is the length-n prefix of y, then S_w := { w' }, where w' is the prefix of length n+2 of y. We call such a factor exceptional. Otherwise, define S_w := { z ∈(y) z = awb for some letters a,b}. Note that the sets S_w, over all length-n factors of y, are pairwise disjoint, and cover all the length-(n+2) factors of y. For a factor w of y, define [w]_1 = 1 if w is a palindrome, and 0 otherwise. Similarly, [w]_2 = 1 if w^R is not a factor of y and 0 otherwise. Finally, define [w]_3 = 1 if w^R is also a factor of y but w is not a palindrome, and 0 otherwise. Notice that these three cases are disjoint and subsume all possibilities for factors of y (also recall the decomposition at the beginning of the section). We can extend this notation to sets by defining [S]_i = ∑_w ∈ S [w]_i for i∈{1,2,3}. Define [w]= [w]_1 + [w]_2 + [w]_3/2 and similarly for [S]. From Equalities (<ref>), we know that r_y(n) = ∑_|w|=n w ∈(y) [w] while r_y(n+2) = ∑_|w|=n w ∈(y) [S_w]. Therefore, to show the desired inequality r_y (n) ≤ r_y(n+2), it suffices to show that [w] ≤ [S_w] for all length-n factors w of y. Suppose w is exceptional. Recall that w starts with c, which appears nowhere else in y. Then [w]_1 = [w]_3 = 0, but [w]_2 = 1. And S_w = { wab }, so [S_w]_1 = [S_w]_3 = 0, but [S_w]_2 = 1. Therefore [w]≤ [S_w]. Now suppose w is not exceptional. There are three cases to consider. Case 1: If [w]_1 = 1, then w is a palindrome. Consider a factor awb ∈ S_w. If it is a palindrome, then [awb]_1 = 1, so [w]≤ [awb]. If awb is not a palindrome, then awb ≠ (awb)^R = b w^R a = bwa. Thus a≠b. If bwa is not a factor of y, then [awb]_2 = 1, so [w] ≤ [awb]. If bwa is a factor of y, then bwa∈ S_w and [awb]_3 + [bwa]_3 = 2, so in all cases [w] ≤ [awb]. Thus [w] ≤ [S_w]. Case 2: If [w]_2 = 1, then w^R is not a factor of y. Consider a factor awb ∈ S_w. Then (awb)^R = b w^R a, so (awb)^R cannot be a factor of y either. Hence [awb]_2=1, [w]≤ [awb], and hence [w]≤ [S_w]. Case 3: If [w]_3 = 1, then w^R is a factor of y, but w is not a palindrome. Consider a factor awb ∈ S_w. If awb is a palindrome, then awb = (awb)^R = b w^R a, so w^R would be a palindrome, a contradiction. So awb is not a palindrome and [awb]_1=0. If (awb)^R = b w^R a is a factor of y, then [awb]_3 = 1, so [w]≤ [awb]. If (awb)^R is not a factor of y, then [awb]_2 = 1, so [w] <[awb]. Thus [w]≤ [S_w]. This completes the proof. Another formulation of Theorem <ref> above is that the sequence (r_x(n+1) + r_x(n))_n ≥ 0 is non-decreasing. Numerical experiments concerning the growth of the reflection complexity have led us to formulate Conjectures <ref>–<ref> below. We leave these conjectures as open problems. Let x be a sequence. Then r_x(n) = r_x(n+2) for some n if and only if x is eventually periodic. Note that one direction is true. We have even more: namely, if the sequence x is eventually periodic, then r_x(n) = r_x(n+2), for all n large enough (see Theorem <ref> below). Let x be a sequence. Then r_x (n) - 1 ≤ r_x(n+1) for all n≥ 0. Note that we can have equality for infinitely many values of n—see Example <ref> above. Let x be a sequence of at most linear factor complexity. Then r_x(n+1)-r_x(n) is bounded for all n≥ 0. Hence, in particular, if x is (generalized) automatic, so is (r_x(n+1)-r_x(n))_n≥ 0. It can be shown that Conjecture <ref> holds for the Thue–Morse, period-doubling, Golay–Shapiro, second-bit, paperfolding, Stewart choral, Baum-Sweet, Chacon, and Mephisto-Waltz sequences. (Also see Corollary <ref>.) Let x be a sequence. If the limit lim_n →∞ r_x(n)/ρ_x(n) exists, then it is either equal to 1/2 or to 1. Actually we prove below weaker forms of Conjectures <ref> and <ref> for reversal-closed sequences, and weaker forms of Conjectures <ref> and <ref> for sequences without long palindromes. Also we can prove that Conjecture <ref> holds for primitive morphic sequence. Let x be a reversal-closed sequence. Then, for all n≥ 0, we have r_x (n) - 1 ≤ r_x(n+1). If, in addition, x has at most linear factor complexity, then r_x(n+1)- r_x(n) is bounded. Let x be a reversal-closed sequence. By Remark <ref>, x is recurrent. Thus, putting together Theorems <ref>(b) and <ref>(b) gives, for all n≥ 0, 2(r_x(n+1) - r_x(n)) = ρ_x(n+1) + _x(n+1) - ρ_x(n) - _x(n). Equality (<ref>) implies 2(r_x(n+1) - r_x(n)) ≥_x(n+1) - 2, hence we obtain r_x(n+1) - r_x(n) ≥ -1 as desired. If, in addition, the factor complexity of x is at most linear, then Equality (<ref>) gives 2|r_x(n+1) - r_x(n)| ≤ |ρ_x(n+1) - ρ_x(n)| + _x(n+1) + _x(n). But |ρ_x(n+1) - ρ_x(n)| is bounded (see <cit.>) and _x is also bounded (see <cit.> or use Theorem <ref>(b) above). Let n_0≥ 0 be an integer and let x be a sequence with no palindrome of length ≥ n_0. Then (r_x(n))_n ≥ 0 is eventually non-decreasing: r_x(n) ≤ r_x(n+1) for n ≥ n_0. Furthermore, if r_x(n+2) = r_x(n) for some n ≥ n_0, then the sequence x is eventually periodic. By combining the assumption and the second equality of Lemma <ref>, we have that r_x(n) = 12(ρ_x(n) + _x(n)) for n ≥ n_0. Since both (ρ_x(n))_n≥ 0 and (_x(n))_n≥ 0 are non-decreasing, we see that (r_x(n))_n ≥ n_0 is non-decreasing, which gives that r_x(n) ≤ r_x(n+1) for n ≥ n_0. This shows the first part of the statement. For the second part, if we have r_x(n+2) = r_x(n) for some n ≥ n_0, then Equality (<ref>) implies that ρ_x(n+2) + _x(n+2) = ρ_x(n) + _x(n). Hence ρ_x(n+2) + _x(n+2) = ρ_x(n+1) + _x(n+1) = ρ_x(n) + _x(n). Hence ρ_x(n+1) = ρ_x(n), which implies that x is eventually periodic from Theorem <ref>, as desired. Actually, Theorems <ref>, <ref> and <ref> imply the following corollary. Conjectures <ref> and <ref> hold if x is uniformly recurrent. Let x be a uniformly recurrent sequence. Then, from Theorem <ref>, we have that x is either reversal-closed and ρ_x = _x, so that r_x = 1/2(ρ_x + _x), or else that there exists n_0 such that ρ_x(n) = _x(n) for all n ≥ n_0, which implies r_x(n) = ρ_x(n) for all n ≥ n_0. In the first case, the claim is proved by using Theorem <ref>, and that, if, in addition, x is automatic, then both sequences (ρ_x(n+1) - ρ_x(n))_n≥ 0 and (_x(n))_n≥ 0 are automatic (see <cit.> and <cit.> respectively). Hence the sequence (_x(n+1) - _x(n))_n≥ 0 is also automatic. The proofs extend easily to generalized automatic sequences. In the second case, inspired by the proof of Theorem <ref>, we note that there is an integer n_0 such that r_x(n) = ρ_x(n) = _x(n) for all n ≥ n_0. But _x is clearly non-decreasing, which gives the property of Conjecture <ref>. If, in addition, x has at most linear complexity, we know that ρ_x(n+1) - ρ_x(n) is bounded (see <cit.>), hence r_x(n+1) - r_x(n) is bounded for n large enough, hence for all n. The (generalized) automatic property is proved by using, as above, that (ρ_x(n+1) - ρ_x(n))_n≥ 0 is (generalized) automatic. In the same vein, Theorem<ref> and Corollary <ref> imply the following corollary. Conjecture <ref> holds for non-eventually periodic primitive morphic sequences. Let x be a primitive morphic sequence. We know that x is uniformly recurrent. Thus, from Theorem <ref>, x is either reversal-closed, or else it has no long palindromes. If x is reversal-closed, then, by Corollary <ref>, we have r_x(n) ∼1/2ρ_x(n). Otherwise, x has no long palindromes, then, still from Theorem <ref>, we have that r_x(n) = ρ_x(n) for n large enough. § EVENTUALLY PERIODIC SEQUENCES We can characterize eventually periodic sequences (i.e., sequences that are periodic from some index on) in terms of their reflection complexity. A sequence x is eventually periodic if and only if both sequences (r_x(2n))_n≥ 0 and (r_x(2n+1))_n≥ 0 are eventually constant. From Theorem <ref> both sequences (r_x(2n))_n≥ 0 and (r_x(2n+1))_n≥ 0 are non-decreasing. Also, from the inequality 1/2ρ_x(n) ≤ r_x(n) ≤ρ_x(n) in Theorem <ref>, and the fact that the sequence (ρ_x(n))_n≥ 0 is non-decreasing, we have that either the three integer sequences (r_x(2n))_n≥ 0, (r_x(2n+1))_n≥ 0, and (ρ_x(n))_n≥ 0 are all bounded, or else none of them is. Furthermore, we know that (ρ_x(n))_n≥ 0 is bounded if and only if the sequence x is eventually periodic (Theorem <ref>(d) above). Hence, we have two cases depending on the periodicity of x. (a) If x is eventually periodic, then (ρ_x(n))_n≥ 0 is bounded, so (r_x(2n))_n≥ 0 and (r_x(2n+1))_n≥ 0 are eventually constant. (b) If x is not eventually periodic, its factor complexity is not bounded: thus both sequences (r_x(2n))_n≥ 0 and (r_x(2n+1))_n≥ 0 tend to infinity. This ends the proof. If x is eventually periodic, the eventual values of (r_x(2n))_n≥ 0 and (r_x(2n+1))_n≥ 0 can be either equal or distinct, as seen from the examples of the sequences (01)^ω and (011)^ω. Theorem <ref>, Corollary <ref> and Remark <ref> below give more precise results for the growth of the reflection complexity of eventually periodic and non-eventually periodic sequences. § STURMIAN SEQUENCES AND GENERALIZATIONS In this section, we study Sturmian sequences as well as some generalizations. §.§ Sturmian sequences First we state the following result, which notably characterizes Sturmian sequences in terms of their reflection complexity. Let x be a non-eventually periodic sequence on a finite alphabet. (a) For all n ≥ 1, we have r_x(n) ≥ 1 + ⌊n+1/2⌋; (b) We have r_x(n) = 1 + ⌊n+1/2⌋ if and only if x is Sturmian. For each integer n ≥ 1, let 𝒮_n be the permutation group on n elements. Let σ_n be the permutation defined by σ_n := [ 1 2 … n-1 n; n n-1 … 2 1; ] and G_n be the subgroup of 𝒮_n generated by σ_n, i.e., the group {σ_n, id_n}. The number of distinct orbits of {1, 2, …, n } under G_n is equal to n/2 if n is even, and to (n+1)/2 if n is odd, which can be written ⌊ (n+1)/2 ⌋ in both cases. Thus, applying <cit.> proves the first item of the theorem and the implication ⟹ of the second item. To prove the last assertion, suppose that x is a Sturmian sequence. We know that any Sturmian sequence is reversal-closed (see <cit.>, where reversals are called mirror images). Furthermore, it is proved in <cit.> that a sequence is Sturmian if and only if it has one palindrome of all even lengths and two palindromes of all odd lengths. Now, from Theorem <ref>(b) we have that r_x(n) = 1/2 (ρ_x(n) + _x(n)) = n+2/2 = 1+ ⌊n+1/2⌋, if n even; n+3/2 = 1+ ⌊n+1/2⌋, if n odd. This ends the proof. With regard to the above referenced work of Charlier et al. <cit.>; also see the related and recent work by Luchinin and Puzynina <cit.>. The following is an analog of the Morse–Hedlund theorem (which is recalled in Theorem <ref> above). A sequence x is eventually periodic if and only if there exists n ≥ 1 such that r_x(n) ≤⌊n+1/2⌋. Furthermore both sequences (r_x(2n))_n and (r_x(2n+1))_n are then eventually constant. Let x be a sequence. Contraposing Property (a) of Theorem <ref>, we obtain that if r_x(n) ≤⌊n+1/2⌋ for some n, then x must be eventually periodic. Conversely, if x is eventually periodic, it has a bounded number of factors, hence there exists some integer n for which the inequality of the statement is true. The last assertion is Theorem <ref> above. Actually, it is possible to prove the first part of the proof of Corollary <ref> without using Theorem <ref> in the case where the integer n is even. Namely, Theorem <ref> implies that ρ_x(n)/2 ≤ r_x(n) for all n. So that, if r_x(n_0) ≤⌊n_0+1/2⌋ for some n_0, then ρ_x(n_0) ≤ 2 ⌊n_0+1/2⌋ = n_0 if n_0 is even. Then we apply the Morse–Hedlund theorem (Theorem <ref>). As in Remark <ref>, there is another “growth gap”. Namely, an easy consequence of Theorem <ref> above is that, for any sequence x that is neither eventually periodic, nor quasi-Sturmian, one has r_x(n) - n/2→ + ∞ when n → + ∞ (recall the definition of quasi-Sturmian in Remark <ref>; and use the fact that, for a sequence x that is neither eventually periodic nor quasi-Sturmian, one has that ρ_x(n) - n → + ∞). So that we can have the following possibilities for a sequence x: (a)r_x(n) is bounded, which happens if and only if x is eventually periodic. Then both sequences (r_x(2n))_n and (r_x(2n+1))_n are eventually constant; (b)r_x(n) = 1 + ⌊n+1/2⌋ for all n, which happens if and only if x is Sturmian. Note that r_x(n) - n/2 > 1/2 for any non-eventually periodic sequence, and that r_x(n) - n/2∈{1, 3/2} for any Sturmian sequence; (c)r_x(n) - n/2 is bounded, and x is not Sturmian. This implies that ρ_x(n) - n is bounded (use Theorem <ref>), hence that x is quasi-Sturmian; (d)r_x(n) - n/2 is not bounded and x is quasi-Sturmian. (e)r_x(n) - n/2 tends to infinity. This is the case where x is neither eventually periodic nor quasi-Sturmian. Note that both behaviors r_x(n) - n/2 bounded (Item (c)) or r_x(n) - n/2 not bounded (Item (d)) are possible for quasi-Sturmian sequences. Namely, if we start from the binary Fibonacci sequence f (fixed point of the morphism 0 → 01, 1 → 0), and apply two particular morphisms, then (using ), we have for the corresponding quasi-Sturmian sequences: (a) the image of f under the morphism 0 → 0101, 1 → 1111 is reversal closed and has arbitrarily long palindromes; its reflection complexity is given by r(n)-n/2 = 9/2 for n≥ 7 odd and r(n)-n/2 = 3 for n ≥ 8 even; (b) the image of f under the morphism 0 → 01101, 1 → 10100 is not reversal-closed, and has no large palindrome; its complexity has the property that r(n) =n+9 for n≥ 11. It is also worth noting that in <cit.>, it is indicated that, for any Sturmian sequence x with values in {0, 1}, the image of x by the morphism f defined by f(0) = 011001, f(1) = 001011 is a quasi-Sturmian sequence without long palindromes (the authors call such a sequence a non-palindromic sequence, using the terminology of <cit.>). More precise results on the reflection complexity of quasi-Sturmian sequences are given in the next subsection. §.§ Quasi-Sturmian sequences Recall that any quasi-Sturmian sequence x can be written as x = u f(z), where u is any word on any alphabet, z is a (necessarily binary) Sturmian sequence, and f an aperiodic morphism from {0, 1} to any alphabet (see <cit.>). We state the following theorem. Let x = y f(z) be a quasi-Sturmian sequence, where y is any word, z is a Sturmian sequence, and f is an aperiodic morphism from {0, 1} to any alphabet. Then (a) either f(z) is reversal-closed and r_x = n/2 + O(1); (b) or else f(z) is not reversal-closed and r_x = n + O(1). First we note that, clearly, r_x = r_f(z)(n) + O(1), hence it suffices to prove both statements for r_f(z) instead of r_x. Then we note that z and hence f(z) are both uniformly recurrent. Thus we can apply Theorem <ref> to f(z). We now consider two cases. (a) If f(z) is reversal-closed, then, r_f(z)= 1/2(ρ_f(z) + _f(z)). Since f(z) is reversal-closed and quasi-Sturmian, we have from Theorem <ref>(b) that _f(z)≤ 3. Hence r_f(z)(n) = 1/2ρ_f(z)(n) + O(1) as desired. (b) If f(z) is not reversal-closed, then, r_f(z)(n) = ρ_f(z)(n) for n large enough. By assumption, we have ρ_f(z)(n) = n + C for some constant C and for n large enough, so we have r_f(z)(n) = n + O(1) as desired. This ends the proof. One can compare the first part of Theorem <ref> above with Corollary <ref>. §.§ Episturmian sequences Among several generalizations of Sturmian sequences, episturmian sequences have in particular the property–sometimes even taken as part of their defini-tion–to be reversal-closed. Furthermore, their palindrome complexity has been studied. See the survey <cit.>; also see the survey <cit.>. We develop here for these sequences a theorem similar to Theorem <ref> above. Let A be a finite alphabet with cardinality ℓ. A sequence x over A is episturmian if it is reversal-closed and has at most one left special factor of each length. An episturmian sequence x is ℓ-strict if it has exactly one left special factor of each length and for which every left special factor u of x has ℓ distinct left extensions in x. We compute the reflection complexity of episturmian sequences as follows. (Recall that the factor complexity of an ℓ-strict episturmian sequence is given by ρ_x(n) = (ℓ -1) n +1.) Let x be an ℓ-strict episturmian sequence. Then, for all n≥ 0, r_x(n) = (ℓ-1) ⌊n+1/2⌋ + 1. Let x be an ℓ-strict episturmian sequence. The case n=0 is true. Assume that n≥ 1. Then by <cit.>, we have ρ_x(n) = (ℓ-1)n+1. We also know that _x(n) = 1, if n is even; ℓ, if n is odd. Using these together with Theorem <ref>(b), we deduce the desired result. For the Tribonacci sequence tr, which is the fixed point of the morphism 0↦ 01, 1↦ 02, 2↦ 0, we have r_tr(n) = 2⌊ (n+1)/2 ⌋ + 1 for all n ≥ 0. §.§ Billiard sequences on a hypercube Since one interpretation of Sturmian sequences is the binary coding of irrational trajectories on a square billiard table, one can turn to irrational trajectories on a hypercube. The following result can be found in <cit.>: the first item is in <cit.> and the second is the main theorem of that paper (which proves a conjecture due to Tamura; note the unexpected symmetry between s and n, where (s+1) is the dimension of the hypercube). Let x be an irrational billiard sequence on an (s+1)-dimensional hypercube. (a) The sequence x is reversal-closed. (b) The factor complexity of x is given by ρ_x(n) = ∑_k=0^min (s,n) k! sknk. Note that, if x is an irrational billiard sequence on an (s+1)-dimensional hypercube, the previous result implies that ρ_x(n) = Θ(n^s). In particular for s+1 = 2, we obtain ρ_x(n) = n + 1 (which gives back Sturmian sequences), and for s+1=3, we obtain ρ_x(n) = n^2 + n + 1, which had been conjectured by Rauzy and proved in <cit.>. Let x be an irrational billiard sequence on an hypercube of dimension (s+1). Then its reflection complexity has the property that r_x(n) ∼1/2∑_k=0^min(s,n) k! sknk when n tends to infinity. In particular, r_x(n) = Θ(n^s) when n tends to infinity. Use Theorem <ref> above with Corollary <ref>. §.§ Complementation-symmetric Rote sequences So-called complementation-symmetric Rote sequences, which were defined and studied in <cit.>, are related to Sturmian sequences as stated below in Theorem <ref>. In this section, after recalling their definition, we study their reflection complexity. Let x be a binary sequence. Then x is called a Rote sequence if its factor complexity satisfies ρ_x(n) = 2n for all n≥ 1. The sequence x is said to be complementation-symmetric if its set of factors is closed under the exchange morphism, i.e., if w is a factor of x, so is E(w). We consider the mapping Δ{0,1}^+→{0,1}^* defined as follows: Δ(a)=a for all a∈{0,1} and for n≥ 1, Δ(v(0)v(1)⋯ v(n))=u(0)u(1)⋯u(n-1) with u(i) = (v(i+1)-v(i)) 2 for all i∈{0,…,n-1}. There is a natural extension of Δ to sequences: if x=(x(n))_n≥ 0 is a binary sequence, then Δ(x) is the sequence whose nth letter is defined by (x(n+1)-x(n)) 2 for all n≥ 0. Observe that Δ(x) is the sequence of first differences of x, taken modulo 2. A binary sequence x is a complementation-symmetric Rote sequence if and only if Δ(x) is Sturmian. In fact, with each Sturmian sequence s, there are two associated complem-entation-symmetric Rote sequences x and x' with x'=E(x). The factors in s and its corresponding Rote sequences are closely related as shown below. Let s be a Sturmian sequence and let x be the complementation-symmetric Rote sequence such that s=Δ(x). Then u is a factor of s if and only if both words v,v' such that u=Δ(v)=Δ(v') are factors of x. Furthermore, for every n≥ 0, x occurs at position n in s if and only if v or v' occurs at position n in x. A complementation-symmetric Rote sequence is reversal-closed. Let x be a complementation-symmetric Rote sequence. Let s be the Sturmian sequence corresponding to x, i.e., s=Δ(x) given by Theorem <ref>. Consider a factor v of x. Write u=Δ(v). Since s is reversal-closed, the word u^R is also a factor of s. Let w and w' be the binary words such that u^R=Δ(w)=Δ(w') and w'=E(w). By Proposition <ref>, both w and w' are factors of x. Now observe that we have either v^R=w or v^R=w'. This ends the proof. We compute the reflection complexity of Rote sequences as follows. Let x be a complementation-symmetric Rote sequence. Then its reflection complexity satisfies r_x(n)=n+1 for all n≥ 0. Let x be a complementation-symmetric Rote sequence. We clearly have r_x(0)=1. Now, for n≥ 1, <cit.> states that _x(n)=2. We finish the proof using Lemma <ref> and Theorems <ref>(b). § REVERSAL-CLOSED AND RICH SEQUENCES Let x be a reversal-closed sequence. Then we have r_x(n+1)+r_x(n) ≤ρ_x(n+1)+1 for all n≥ 0. It is enough to combine Remark <ref> and Theorems <ref>(b) and <ref>(b). Rich sequences have several equivalent definitions. It is known that a word w contain at most |w|+1 many palindromic factors <cit.>. A sequence is called rich if each factor contains the maximal number of palindromic factors. Let x be a reversal-closed sequence. Then x is rich if and only if r_x(n+1)+r_x(n) = ρ_x(n+1)+1 for all n≥ 0. From <cit.>, the sequence x is rich if and only if the inequality in Theorem <ref>(b) is an equality. The result then follows from Theorem <ref>(b). Those among binary quasi-Sturmian sequences that are coding of rotations are rich, see <cit.>. Let x be a binary reversal-closed quasi-Sturmian sequence. There exists a constant C such that r_x(n+1)+r_x(n) = n + C for n large enough. Let C' be a constant such that ρ_x(n) = n + C' for n large enough. It is enough to choose C=C'+2. Using Theorem <ref>(b) and <cit.>, it is possible to bound the reflection complexity of rich sequences as follows. Let x be a rich sequence over an alphabet of q letters and write δ = 2/3(log 3- log 2). Then r_x(n) ≤nq/2 (2q^2n)^δlog n (1 + nq^3 (2q^2n)^δlog n) for all n≥ 1. Other sequences have a reflection complexity satisfying the equality of Theorem <ref>. For instance, it is the case of complementation–symmetric sequences, sequences canonically associated with some specific Parry numbers, and sequences coding particular interval exchange transformations. For more details, see <cit.>. § AUTOMATIC SEQUENCES In this section, we study the reflection complexity of automatic sequences. First, in a positional numeration system U having an adder, we show that if a sequence is U-automatic, then its reflection complexity is a U-regular sequence. Furthermore we show how to effectively compute a linear representation for the sequence, making use of the free software  <cit.>. Next, we explore the reflection complexity of some famous automatic sequences, namely the Thue–Morse, the period-doubling, generalized paperfolding, generalized Golay–Shapiro, and the Baum-Sweet sequences. §.§ Reflection complexity is computably regular We now show that the reflection complexity of any automatic sequence is regular. Let U=(U(n))_n≥ 0 be a positional numeration system such that there is an adder, i.e., addition is recognizable by an automaton reading U-representations, and let x be a U-automatic sequence. Then (r_x(n))_n≥ 0 is a U-regular sequence. Furthermore, a linear representation for (r_x(n))_n≥ 0 is computable from the DFAO for x. Here is a sketch of the proof before we give the details: We create a first-order logical formula asserting that the factor x[i..i+n-1] is the first occurrence of this factor, or its reversal. Then the number of such i is precisely the reflection complexity at n. From this, we can create a linear representation for the number of such i. Now some more details. We define the following logical formulas: (i,j,n) := ∀ t (t<n) x[i+t]=x[j+t] (i,j,n) := ∀ t (t<n) x[i+t]=x[(j+n)-(t+1)] (i,n) := ∀ j (j<i) (((i,j,n)) ((i,j,n))). Now we use the fundamental result on Büchi arithmetic to translate each of these formulas to their corresponding automata accepting the base-k representation of those pairs (i,n) making the formula true. Next, we use a basic result to convert the automaton for to the corresponding linear representation computing the reflection complexity. Once we have a linear representation for the reflection complexity, we can easily compute it for a given n. Furthermore, we can compare it to a guessed formula, provided that this formula can also be expressed as a linear representation (see <cit.>). In the next section we carry this out in detail for a number of famous sequences. §.§ The Thue–Morse and period-doubling sequences We can compute a linear representation for the reflection complexity r_t (n) of the 2-automatic Thue–Morse sequence, using the same approach as in the preceding section. Here we use the following code: This generates a linear representation of rank 66, which can be minimized to the following. v = [[ 1 0 0 0 0 0 0 0 0 ]], w = [[ 1 2 3 4 6 6 10 10 13 ]]^T μ(0) = 1/33[rrrrrrrrr 33 0 0 0 0 0 0 0 0 0 0 33 0 0 0 0 0 0 0 0 0 0 33 0 0 0 0 0 0 0 0 0 0 33 0 0 0 0 0 0 0 0 0 0 33 0 0 -26 0 0 23 10 -10 36 0 0 -57 33 0 6 -6 6 51 0 0 -79 33 33 -5 -28 28 51 0 0 -72 0 33 18 -18 18 54 ], μ(1) = 1/33[rrrrrrrrr 0 33 0 0 0 0 0 0 0 0 0 0 33 0 0 0 0 0 0 0 0 0 0 33 0 0 0 0 0 0 0 0 0 0 33 0 0 0 -24 0 0 39 -6 6 18 0 0 -40 0 0 43 -10 10 30 0 0 -78 33 33 3 -36 36 42 0 0 -86 33 33 5 -38 38 48 0 0 -72 0 0 51 -18 18 54 ]. Recall that Brlek <cit.>, de Luca and Varricchio <cit.>, and Avgustinovich <cit.> independently gave a simple recurrence for the number of length-n factors of t, namely ρ_t(2n) = ρ_t(n) + ρ_t(n+1) and ρ_t(2n+1) = 2 ρ_t(n+1) for n≥ 2. As it turns out, there is a simple relationship between r_t and ρ_t. Let t be the Thue–Morse sequence. (a) For all n≥ 0, we have r_t(2n+1) = ρ_t(n + 1). (b) For all n ≥ 2, we have r_t(2 n) = ρ_t(n+1) + 1, if ∃ m≥ 0 with 3 · 4^m-1 + 1 ≤ n ≤ 4^m; ρ_t(n+1), otherwise. (c) There is an automaton of 14 states computing the first difference r_t(n+1)-r_t(n). We prove each item separately. (a) Above in Equalities (<ref>) we computed a linear representation for r_t(n). From this linear representation we can easily compute one for r_t(2n+1) merely by replacing w with μ(1) w. (Indeed, base-2 representations of integers 2n+1 all end with 1.) Next, we can compute a linear representation for ρ_t (n+1) using the following command. This creates a linear representation of rank 6. Finally, we use a block matrix construction to compute a linear representation for the difference r_t(2n+1) -ρ_t(n + 1) and minimize it; the result is the 0 representation. This computation gives a rigorous proof of item (a). (b) This identity can be proven in a similar way. We form the linear representation for r_t (2n) - ρ_t (n+1) - [∃ m 3 · 4^m-1 + 1 ≤ n ≤ 4^m], where the last term uses the Iverson bracket. We then minimize the result and obtain the 0 representation. (c) We can compute a linear representation for the first difference r_t(n+1)-r_t(n), and then use the “semigroup trick”<cit.> to prove that the difference is bounded and find the automaton for it. It is displayed in Figure <ref>. These computations rigorously prove the three items of the claim. The period-doubling sequence(d(n))_n≥ 0 is a natural companion of the Thue–Morse sequence. Recall that t is the fixed point, starting with 0, of the morphism defined by 0 ↦ 01 and 1 ↦ 10. We similarly define p as the fixed point of the morphism 0 ↦ 01 and 1 ↦ 00. This gives us that p is 2-automatic as well. By defining d(n) as the highest power of 2, modulo 2, dividing n+1, the sequence p can be equivalently defined as the sequence (d(n))_n≥ 0. The close relationship between t and p is captured by the identity ρ_p(n) = ρ_t(n+1)/2 for all n. We may devise a close analogue of Theorem <ref> for the reflection complexity of p, again with the use of . Explicitly, it can be shown that: For all n≥ 0, we have r_p(2n+1) = ρ_p(n) + 1, and, for all n ≥ 2, we have r_p(2 n) = ρ_p(n+1) - 1, if ∃ m ≥ 0 with 3 · 2^m-1≤ n ≤ 2^m+1 - 1; ρ_p(n+1) - 2, otherwise, and we may similarly devise an analogue of part (c) of Theorem <ref>. Observe that lim inf_n →∞r_p(n)/n = 3/4 and lim sup_n →∞r_p(n)/n = 5/6, and similarly for the reflection complexity of t. §.§ The generalized paperfolding sequences A paperfolding sequence p_f is a binary sequence p_1 p_2 p_3 ⋯ specified by a sequence of binary unfolding instructions f_0 f_1 f_2 ⋯, as the limit of the sequences p_f_0 f_1 f_2 ⋯, defined as follows: p_ = and p_f_0 ⋯ f_i+1 = p_f_0 ⋯ f_i f_i+1 E(p_f_0 ⋯ f_i^R) for all i≥ 0 where E is the exchange morphism. For example, if f = 000⋯, we get the simplest paperfolding sequence p = 0010011000110110001001110011011 ⋯. Note that a paperfolding sequence is 2-automatc if and only if the sequence of unfolding instructions is eventually periodic <cit.>. Allouche <cit.>, and later, Baake <cit.> proved that no paperfolding sequence contains a palindrome of length >13. In fact, even more is true as shown below. No paperfolding sequence contains a reflected factor of length >13. It suffices to show that no paperfolding sequence contains a reflected factor of length 14. For if this holds, but there is a longer reflected factor x, we could write x = yz where |y| = 14. Then x^R = z^R y^R, so y would be a reflected factor of length 14, a contradiction. Now, by a known result on the appearance function of paperfolding sequences <cit.>, we know that every length-14 factor of a paperfolding sequence p_f appears in a prefix of length 109, which is in turn specified by the first 7 unfolding instructions. We can then simply examine each of the 56 length-14 factors of these 128 (finite) words and verify that no factor is reflected. We can now prove the following result. Let p_f be a paperfolding sequence. Then (a) For all n ≥ 13, we have r_f(n) = ρ_f(n) = 4n. (b) The reflection complexity of every paperfolding sequence is the same, and takes the values 2, 3, 6, 7, 12, 15, 22, 24, 32, 36, 42, 46 for 1 ≤ n ≤ 12. We prove each item separately. (a) For n ≥ 14, the result follows from combining the results of Allouche <cit.> and Proposition <ref>. For n=13, we can verify the claim by explicit enumeration. (b) The result for n ≥ 13 follows from (a). For n < 13 the result can be verified by enumeration of all length-109 prefixes of paperfolding sequences specified by instructions of length 7. This ends the proof. §.§ The generalized Golay–Shapiro sequences A generalized Golay–Shapiro sequence g is defined by taking the running sum, modulo 2, of a paperfolding sequence p_f. The famous Golay–Shapiro sequence (also called the Rudin–Shapiro sequence) <cit.> corresponds to the case of unfolding instructions 0(01)^ω <cit.>. Note that the 2-automaticity of a generalized Golay–Shapiro sequence follows from that of its corresponding generalized paperfolding sequence. We can prove the analogue of Proposition <ref>. No generalized Golay–Shapiro sequence contains a reflected factor of length >14. As above, it suffices to show that no Golay–Shapiro sequence contains a reflected factor of length 15. Now, by a known result on the recurrence function of generalized Golay–Shapiro sequences <cit.>, we know that every length-15 factor of a paperfolding sequence p_f appears in a prefix of length 2408, which is in turn specified by the first 12 unfolding instructions. We can then simply examine each of the 60 length-15 factors of these 4096 (finite) words and verify that no factor is reflected. We can now prove the following result. Let g be a generalized Golay–Shapiro sequence. (a) For all n ≥ 15, we have r_g(n) = ρ_g (n) = 8n-8. (b) The reflection complexity of every generalized Golay–Shapiro sequence is the same, and takes the values 2, 3, 6, 10, 14, 22, 30, 42, 48, 62, 72, 83, 92, 103 for 1 ≤ n ≤ 14. We prove each item separately. (a) For n ≥ 15, the result follows from combining the results of Allouche and Bousquet-Melou <cit.> and Proposition <ref>. (b) The result for n ≥ 15 follows from (a). For n < 15 the result can be verified by enumeration of all length-2408 prefixes of paperfolding sequences specified by instructions of length 12. This ends the proof. §.§ The Baum–Sweet sequence Let the Baum–Sweet sequenceb = 1101100101001001100100000100100101001001⋯ be defined by b(0) = 1 and for n≥ 1, b(n) is 1 if the base-2 expansion of n contains no block of successive zeros of odd length and 0 otherwise. It is 2-automatic as well. The factor complexity function for b is such that (ρ_b(n))_n≥ 0 = 1, 2, 4, 7, 13, 17, 21, 27, 33, 38, 45, 52, 59, 65, 70, … and the reflection complexity function for b starts with (r_b(n) )_n≥ 0 = 1, 2, 3, 5, 8, 11, 13, 17, 21, 25, 30, 35, 40, 46, 50, 56, …. We can again compute a linear representation for r_b (n) using the following code: This gives us a linear representation of rank 90. From this linear representation, a computation proves the following result. Let b be the Baum-Sweet sequence. Then the first difference of the sequence r_b (n) is 2-automatic, over the alphabet { 1,2, …, 8 }. § FURTHER DIRECTIONS We conclude the paper by considering some further research directions to pursue in relation to reflection complexities of sequences and by raising some open problems. We encourage further explorations of the evaluation of r_x for sequences x for which properties of _x and/or ρ_x are known, especially if cannot be used directly in the investigation of r_x. For example, by letting the Chacon sequencec be the fixed point of the morphism 0 ↦ 0010 and 1 ↦ 1, it is known that _c(n)=0 for all n≥ 13. Also, its factor complexity satisfies ρ_c(n)=2n-1 for n≥ 2 <cit.>. We have (r_c(n))_n≥ 0 = 1, 2, 2, 4, 4, 6, 7, 10, 11, 14, 16, 20, 23, 25, 27, 29, 31, 33, …. This sequence is not automatic in a given so-called addable numeration system (where there is an adder). Therefore, we cannot use , in this case. However, an inductive argument can be applied to prove that r_c(n)=ρ_c(n) for all n≥ 13. To what extent can the reflection complexity be used to discriminate between different families of sequences, by analogy with our characterizations of Sturmian and eventually periodic sequences? The complexity function _x defined above may be of interest in its own right, as is the case with the “reflection-free” complexity function enumerating factors such that the reversal of every sufficiently large factor is not a factor. How can Theorem <ref> be generalized with the use of standard generalizations of the Thue–Morse sequence? For example, if we let t3 = 011212201220200112202001200⋯ denote the generalized Thue–Morse sequence for which the nth term t3(n) is equal to the number of 1's, modulo 3, in the base-2 expansion of n, it can be shown that r_t3(n) = ρ_t3(n) for all n ≥ 3, and it appears that a similar property holds for the cases given by taking the number of 1's modulo ℓ > 4. What is the reflection complexity of the Thue–Morse sequence over polynomial extractions, with regard to the work of Yossi Moshe <cit.>? How could the upper bound in Theorem <ref> be improved? If r_x(n) is of the form Ω(n), then how could this be improved? How does the reflection complexity compare with other complexity functions, as in the complexity functions listed in Section <ref>? This leads us to ask about the respective growths of the complexities listed in Section <ref>, in particular for morphic sequences. In this direction, recall that the factor complexity of a morphic sequence is either Θ(1), Θ(n), Θ(n loglog n), Θ(n log n) or Θ(n^2), see <cit.> (more details can be found, e.g., in <cit.>; also see <cit.>). As an illustration with a result that has not been already cited above, a comparison between growths for the factor complexity and the Lempel-Ziv complexity can be found in <cit.>. We end with an easy result for the growth of the reflection complexity in the case of morphic sequences. The reflection complexity of a morphic sequence is either Θ(1), Θ(n), Θ(n loglog n), Θ(n log n) or Θ(n^2). Use the inequalities in Theorem <ref>: for any sequence x and for all n≥ 0, we have 1/2ρ_x(n) ≤ r_x(n) ≤ρ_x(n). §.§ Acknowledgments We thank Boris Adamczewski for discussions. John Campbell is grateful to acknowledge support from a Killam Postdoctoral Fellowship from the Killam Trusts. Manon Stipulanti is an FNRS Research Associate supported by the Research grant 1.C.104.24F. The research of Jeffrey Shallit is supported by NSERC grant 2018-04118. plain Jean-Paul Allouche CNRS, IMJ-PRG Sorbonne, 4 Place Jussieu 75252 Paris Cedex 05, France jean-paul.allouche@imj-prg.fr John M. Campbell Department of Mathematics and Statistics, Dalhousie University Halifax, 6299 South St, NS B3H 4R2, Canada jmaxwellcampbell@gmail.com Jeffrey Shallit School of Computer Science, University of Waterloo Waterloo, ON N2L 3G1, Canada shallit@uwaterloo.ca Manon Stipulanti Department of Mathematics, University of Liège 4000 Liège, Allée de la Découverte 12, Belgium m.stipulanti@uliege.be
http://arxiv.org/abs/2406.08782v1
20240613032701
Hybrid Spatial-spectral Neural Network for Hyperspectral Image Denoising
[ "Hao Liang", "Chengjie", "Kun Li", "Xin Tian" ]
eess.IV
[ "eess.IV", "cs.CV" ]
a]Hao Liang a]Chengjie Ke a]Kun Li a]Xin Tiancor [cor]Corresponding author. [a]Electronic Information School, Wuhan University, Wuhan, 430072, China § ABSTRACT Hyperspectral image (HSI) denoising is an essential procedure for HSI applications. Unfortunately, the existing Transformer-based methods mainly focus on non-local modeling, neglecting the importance of locality in image denoising. Moreover, deep learning methods employ complex spectral learning mechanisms, thus introducing large computation costs. To address these problems, we propose a hybrid spatial-spectral denoising network (HSSD), in which we design a novel hybrid dual-path network inspired by CNN and Transformer characteristics, leading to capturing both local and non-local spatial details while suppressing noise efficiently. Furthermore, to reduce computational complexity, we adopt a simple but effective decoupling strategy that disentangles the learning of space and spectral channels, where multilayer perception with few parameters is utilized to learn the global correlations among spectra. The synthetic and real experiments demonstrate that our proposed method outperforms state-of-the-art methods on spatial and spectral reconstruction. The code and details are available on <https://github.com/HLImg/HSSD>. * Introduced a hybrid spatial-spectral network that enhances HSI denoising, preserving spatial details and leveraging global spectral correlations. * Devised a low-complexity decoupling strategy for separate learning of spatial and spectral channels. * Achieved precise HSI denoising capability beyond existing techniques. Blind HSI denoising self attention spectral similarity § INTRODUCTION Hyperspectral image (HSI) contains tens to hundreds of approximately continuous spectral bands, offering richer spectral information than RGB image. Thus, the utilization of HSI can improve the ability to recognize different objects. HSI is widely applied in various fields, including remote sensing <cit.>, medical diagnosis <cit.>, and document image analysis <cit.>. However, due to insufficient light collection and hardware limitations in the sensing process, the existing noise in HSI significantly inhibits subsequent applications. Hence, HSI denoising is the foundational step for developing advanced computer vision tasks. In the field of HSI denoising research, two fundamental attributes are non-local similarity across spatial dimensions and global correlation within spectral dimensions. The current trend in denoising methodologies is to synergistically harness these two attributes, departing from a reliance on a single prior alone. The commonly used priors include total variation <cit.>, non-local <cit.>, sparse representation <cit.>, and low-rank models <cit.>. Well-designed priors can convert the noisy HSI to a clean one with the preservation of the spatial and spectral characteristics. However, due to complex handcrafted priors, traditional model-driven methods always face challenges of optimization difficulty. Recently, CNN-based methods have achieved significant improvement over traditional model-driven approaches in HSI denoising. Nonetheless, due to the inherently limited effective receptive field, these CNN-based methods exhibit weak non-local modeling capabilities. Furthermore, given that HSIs possess more spectra than RGB images, the standard 2-D convolution methods, which rely on coupled learning of spatial and channel dimensions, may inadequately learn spatial and spectral correlations within high-dimensional feature space. Transformer-based methods has received much attention from researchers in image denoising. Different from convolution in CNN-based methods, the self-attention mechanism demonstrates excellent non-local modeling capabilities in Transformer. However, the existing Transformer-based HSI denoising methods emphasize non-local information but overlook local information. Considering the sparse spatial detail in HSIs, the loss of fine-grained information may result in distortion and blurring <cit.>. In addition, many Transformer-based denoising methods implement attention algorithms by utilizing image tokens with dimensions of (N,h× w× C), which may mix spectral and spatial learning as mentioned in CNN-based methods. For decoupling in Transformer, Li  <cit.> proposed SST by utilizing spatial and spectral attention to learn spatial and spectra separately. However, SST utilizes channel self-attention to learn spectral channels, which largely increases the model's computational complexity. Therefore, how to design a lightweight Transformer architecture with both local and non-local modeling capabilities is an important issue in the field of HSI denoising. In this paper, we propose a Hybrid Spatial-Spectral Denoising Neural Network (HSSD) to efficiently explore local and non-local information of HSI. Specifically, a dual-path network (spatial-spectral mixer layers, SSML) composed of a non-local mixer and a local mixer is utilized to exploit both local and non-local spatial information. For the non-local mixer, we creatively introduce window self-attention (WSA) into the non-local modeling for spatial learning, benefitted from the demonstration that WSA can serve as a special form of the non-local mean algorithm <cit.>. Furthermore, inspired by the channel-mixing MLPs <cit.> being able to effectively learn channel correlations, we use a simple multilayer perception (MLP) network with low parameters to learn spectral global correlations. Therefore, the non-local mixer can separately learn spatial and spectral features, leading to a good decoupling ability. Instead of standard convolution, depthwise convolution block (DWB) with fewer parameters is utilized to build the local learning module, avoiding mixing space and spectrum. Spectral learning in the local mixer is the same as the non-local mixer. With the combination of local and non-local mixers, SSML can conduct non-local coarse denoising while preserving fine-grained detail information. Besides, benefiting from CNNs serving as high-pass filters and self-attention serving as low-pass filters <cit.>, SSML can maintain both high-frequency and low-frequency information for a clear denoised image. Additionally, by a combination of decoupling strategy and DWB, SSML can better explore the correlation of hundreds of spectra with less computational complexity. Based on multiple SSMLs, residual spatial-spectral mixer block (RSSMB) is designed to capture non-local and local features. Then, the backbone of HSSD is built by stacking multiple RSSMB blocks. In summary, the main contributions of our work are as follows. * We propose a hybrid spatial-spectral network for HSI denoising, which can effectively restore more spatial details and efficiently exploit the global spectral correlation of noisy images. * We employ spatial self-attention and depthwise convolution to capture both local and non-local spatial information in a dual-path architecture. * We adopt a simple decoupling strategy to learn space and spectral channels separately with small computational complexity, where MLP with low parameters can effectively learn global spectral correlations. § RELATED WORKS §.§ Hyperspectral Image Denoising The existing HSI denoising techniques can be divided into two categories: traditional model-based methods and deep learning-based methods. In model-driven methods, various handcrafted priors regularization constraints are utilized to solve optimization problems. Maggioni  <cit.> implemented a volumetric denoising algorithm using voxel cubes stacked into a 4-D group. Chang  <cit.> designed a unidirectional low-rank tensor to explore the non-local similarity. To integrate spatial non-local similarity and global spectral low-rank property, the NGMeet algorithm <cit.> can jointly learn and iteratively update the orthogonal band matrix and reduced image. However, model-based methods often suffer from an expensive cost of computational sources and have difficulty in choosing handcrafted priors. Due to its powerful non-linear mapping capability, the deep learning method has become a research hotspot in HSI denoising. A novel network combining spatial-spectral information was proposed by Yuan  <cit.>. Wen  <cit.> developed a special 3-D quasi-recurrent network by applying 3-D convolution. Bodrito  <cit.> advocated a method based on sparse coding principles. Xiong  <cit.> built a spectral low-rank model and utilized a nonlocal U-Net. Cao  <cit.> built a spatial-spectral global reasoning network to utilize global contextual information. However, the inherent characteristics of 2-D convolution, such as limited effective receptive fields and coupled learning, limit the further application of CNN-based methods. §.§ Vision Transformer To overcome the locality of CNN, the Transformer has been proposed to explore global relationships. Dosovitskiy  <cit.> first proposed the Vision Transformer (VIT) for image classification, demonstrating the Transformer's potential in computer vision. Liu  <cit.> proposed a general backbone that self-attention is performed with the shifted-window strategy called the Swin transformer. Additionally, Transformer shows excellent performances in low-level image tasks, such as image super-resolution <cit.>, image denoising <cit.> and so on. In HSI denoising, some researchers have designed transformer architecture suitable for HSI characteristics. Li  <cit.> combined non-local spatial attention with global spectral attention. Moreover, Li  <cit.> also designed a rectangle transformer that operates spatial attention in both horizontal and vertical directions, which utilizes global spectral low-rank property to reduce noise. Lai  <cit.> proposed a 3D transformer to capture global and local spatial-spectral correlations better. These methods perform well in global modeling, but they overlook local information, which may lead to over-smoothing and loss of detail textures. § METHOD In this section, we first describe the overall pipeline of HSSD applied for HSI denoising. Then, we provide a detailed implementation of SSML. §.§ Overall Pipeline Considering the lower spatial resolution of spectral images, HSSD employs the single residual network instead of the U-shaped network <cit.> to preserve spatial details better, as shown in <ref>. Given a noisy HSI input 𝒴∈ℝ^H× W× C, where C represents the number of spectral bands, we first adopt a 3× 3 convolution (ℱ^0_conv(·)) to extract low-level visual feature, denoted as F^0_conv, which usually contain high-frequency information like edges, textures and noise. The ℱ^0_conv(·) also maps the input image to a high-dimensional space. F^0_conv = ℱ_conv^0(𝒴) Next, the feature maps F_conv^0 are fed into a cascade of M residual spatial-spectral mixer blocks (RSSMB) for deeper feature processing. To prevent the issue of vanishing gradients, HSSD incorporates residual connections <cit.> throughout the network. Then, we obtain the final output as follows. F_out = F_conv^0 + ℱ_conv^M + 1(ℱ_rssm^M(⋯ (ℱ_rssm^1(F_conv^0)))) where ℱ_rssm^i(·) denotes the i-th RSSMB and F_conv^i represents the feature maps obtained by the i-th 3× 3 convolution layer. Similarly, F_rssm^i represents the feature maps output by the i-th RSSMB. As shown in <ref>, each RSSMB contains N SSMLs, and additional convolution operations are performed at the end of the block. The process can be expressed as follows. F^i_rssm = F^i-1_rssm + ℱ_conv^i(ℱ_ssm^N(⋯ (ℱ_ssm^1(F^i-1_rssm)))) In <ref>, the input features F^i_ssm are first projected into different mixers: mixer ℱ_nl^i(·), which handles non-local spatial and spectral channels, and mixer ℱ_l^i(·), which deals with local spatial and spectral channels. The feature projection and aggregation are expressed as ℱ_proj(·) and ℱ_agg(·). The features processed by these two mixers are aggregated together as follows. F^(l)_proj, F^(nl)_proj = ℱ^i+1_proj(F^i_ssm) F_ssm^i+1 = F^i_ssm+ℱ_agg^i+1(ℱ^i+1_l(F^(l)_proj), ℱ_nl^i+1(F^(nl)_proj)) where F^(l)_proj represent the projection output in local mixer, and F^(nl)_proj is the projection output in non-local mixer. In <ref>, we integrate the extracted shallow and deep features through residual connections. In the last layer of HSSD, a 3× 3 convolution, denoted as ℱ^M+2_conv(·), is used to reconstruct the residual image. We obtain the denoised image using 𝒳̂ = 𝒴 + ℱ^M+2_conv(F_out), and the HSSD is trained using the ℒ_1 loss function. §.§ Spatial-Spectral Mixer We utilize a parallel structure to perform local and non-local spatial modeling on the projected feature. Subsequently, spectral channel correlation learning is carried out on each branch following the spatial modeling. §.§.§ Local Spatial-Spectral Mixer The standard 2D convolution attempts to learn a filter in 3D space, coupling the spatial and channel-wise correlations <cit.>. Current high-performance vision works often decouple space and channel learning. MLP-Mixer <cit.> uses token-mixing MLPs and channel-mixing MLPs to learn space and channel, respectively. We follow the same design principles to decouple spatial and channel learning in HSI denoising. First, we use a residual block DWB composed of two depthwise <cit.> convolutions to model the local features 𝐳^(l)∈ℝ^H× W × C. Subsequently, the MLP is employed to capture the correlation among spectral channels, incorporating GELU as the activation function between DWB and MLP. This simple design improves performance and ensures good efficiency without causing much computational complexity. Additionally, residual connections are applied. The whole process is formulated as follows. 𝐳^(l) = DWB(𝐳^(l)) + 𝐳^(l) 𝐳^(l) = MLP(𝐳^(l)) + 𝐳^(l) §.§.§ Non-Local Spatial-Spectral Mixer We incorporate the idea of the NLM <cit.> algorithm in modeling non-local similarity. Given the input feature 𝐳^(nl)∈ℝ^H× W× C, we define the search domain Ω in the range of [1, H× W]. Here, P_k(𝐳^(nl)_i) represents the neighborhood window of the pixel 𝐳_i. The diameter of P_n(𝐳^(nl)_i) is set to n. The general form of non-local modeling is expressed as follows. 𝐲_i = ∑_j ∈Ω Sim(P_k(𝐳^(nl)_i), P_k(𝐳^(nl)_j))/∑_j∈Ω Sim(P_k(𝐳^(nl)_i), P_k(𝐳^(nl)_j))·𝐳^(nl)_j where 𝐲∈ℝ^H× W× C is the output of the non-local operation, and Sim(·) represents the similarity measurement function. <Ref> shows the visual Transformer's general self-attention form <cit.>. In order to enhance the expressive ability of features, we adopt three linear layers to project the input features into different spaces. Q=𝐳^(nl)W_q, K=𝐳^(nl) W_k, V=𝐳^(nl) W_v where W_q, W_k, and W_v∈ℝ^C× C. We use the Embedded Gaussian in NLNN <cit.> as the similarity measurement function, then <ref> can be expressed as follows. 𝐲_i = ∑_j∈Ωexp{P_n(Q_i)^TP_n(K_j)}/∑_j∈Ωexp{P_n(Q_i)^TP_n(K_j)}· V_j =∑_j∈ΩSoftmax(P_n(Q_i)^TP_n(K_j))· V_j In our specific implementation, we first assign n=1, degenerating P_n(Q_i) into a single pixel. When setting the search domain Ω as the entire input feature (token), <ref> theoretically performs as standard self-attention (SA). However, when Ω is defined as a smaller rectangle, <ref> equates to window self-attention (WSA). Therefore, by finely adjusting the scale and overlap of the search domain, we can derive various forms of WSA, including overlapping WSA, overlapping WSA, non-overlapping WSA, and shifted WSA. The Swin Transformer provides an efficient parallel acceleration method for WSA. Like the Swin Transformer, we alternate between non-overlapping and shifted WSA operations. This alternation facilitates efficient self-attention computation and ensures information interaction. In practice, we employ learnable position encoding, carry out attention computations h times in parallel, and then concatenate the results to perform Multi-head WSA (MWSA) operations. Maintaining a structure consistent with the local mixer, we continue using a MLP for spectral channel learning and incorporating the LayerNorm (LN). The whole process is formulated as follows. 𝐳^(nl) = MWSA(LN( 𝐳^(nl))) + 𝐳^(nl) 𝐳^(nl) = MLP(LN(𝐳^(nl))) + 𝐳^(nl) §.§.§ Projection and Aggregation We denote the input features of the SSML as 𝐳∈ℝ^H× W× C. Then, <ref> can be implemented by the following three projection and aggregation manners. * Addition. We utilize a 1× 1 convolution to project the feature and subsequently apply two mixers. Subsequently, we perform an element-wise addition of the two features. Lastly, another 1× 1 convolution is executed. 𝐳_proj = conv_1× 1(𝐳) 𝐳_agg = conv_1× 1(ℱ_l(𝐳_proj) + ℱ_nl(𝐳_proj) ) where 𝐳_proj represents the projection output by projection manner, and 𝐳_agg is the aggregation output by aggregation manner. * Concatenation. The input features are projected and transformed through 1x1 convolution. The features are split in the channel dimension according to the specified ratio, and the two features are sent to different mixers. Finally, we concatenate the processed features in the channel dimension. 𝐳_proj^(l), 𝐳_proj^(nl) =split(conv_1× 1(𝐳)) 𝐳_agg = conv_1× 1(concat(ℱ_l(𝐳_proj^(l)), ℱ_nl(𝐳_proj^(nl)))) * Gate Mechanism. The processing flow is similar to the addition method, but we use a gate mechanism instead of element-wise addition when aggregating features. In order to reduce the complexity caused by feature aggregation, we adopt the simple gate method in NAFNet <cit.>, which has a small computational overhead and retains the characteristics of the gate mechanism. 𝐳_proj = conv_1× 1(𝐳) 𝐳_agg = conv_1× 1(gate(ℱ_l(𝐳_proj), ℱ_nl(𝐳_proj) )) § EXPERIMENTS In this section, we first present the detailed experimental setup. We compare HSSD with traditional model-driven and deep learning HSI denoising methods[Pre-trained models are from SERT and respective official repositories.] on synthetic and realistic datasets. Model-driven methods include BM4D <cit.>, LLRT <cit.>, and NGMeet <cit.>. CNN-based methods are HSID <cit.>, GRNet <cit.>, QRNN3D <cit.>, T3SC <cit.>, and MAC-Net <cit.>. Transformer-based methods SST <cit.>, SERT <cit.>, and HSDT <cit.>. Finally, experiments on computational complexity analysis and model effectiveness are conducted. It is noteworthy that, like those comparison methods, we train a model each for iid Gaussian, complex noise, and real noise, and perform blind denoising at different intensities. §.§ Experimental Setup Datasets. The ICVL <cit.> dataset provides 201 clean images with a spatial resolution of 1392×1300 and 31 spectral bands. Following most methods, we add various synthesized noises to the ICVL dataset for simulation experiments. Our dataset partitioning approach is consistent with other practices such as SST <cit.>. The RealHSI <cit.> collects short-exposure and long-exposure images of the same scene to create a real paired hyperspectral denoising dataset. This real dataset contains 62 pairs of hyperspectral images with dimensions of 696×520×34. We select 15 pairs for testing while using the remaining images for training. Implementation Details. We implement the proposed method using PyTorch <cit.> with random initialization. Throughout the training process, we randomly flip the samples vertically and horizontally, as well as rotate them by 90^∘, 180^∘ and 270^∘. To optimize the network, we employ the AdamW <cit.> optimizer with the momentum terms of [0.9, 0.99] and the weight decay of 0. We carry out a total of 8× 10^5 iterations of training on 8 RTX 2080Ti GPUs, utilizing the cosine annealing scheduler <cit.> to decrease the learning rate from 2× 10^-4 to 1× 10^-7 progressively. Architecture variants. To highlight the effectiveness of our method, we introduce two variant models with different numbers of RSSMB and SSBL: HSSD-S (Small) and HSSD-B (Base) in comparative experiments. The detailed settings are as follows. * HSSD-S: Contains 3 RSSMBs, with each RSSMB contains 6 SSMLs. * HSSD-B: Contains 6 RSSMBs, with each RSSMB contains 6 SSMLs. Evaluation metrics. We use Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) <cit.> to measure the image reconstruction quality. Since hyperspectral images typically have a large number of bands, we assess the spectral reconstruction quality using Spectral Angle Mapper (SAM). §.§ Experiments on Synthetic Data Gaussian Noise. <Ref> reports the iid Gaussian denoising results. Our models achieve significant improvements across multiple noise cases, owing to the effectiveness of the spatial-spectral mixers. HSSD-B has an average improvement of at least 0.36 dB in PSNR for all noise bases compared to other methods. Furthermore, the lowest SAM indicates that our method performs better in maintaining global spectral consistency. We synthesize color images using the 28th, 15th, and 9th bands as red, green, and blue channels. As shown in <ref>, model-driven and CNN-based methods fail to maintain detail texture. HSSD restores more details while removing noise, indicating the excellent ability to capture noise patterns. Complex Noise. Due to sensor inaccuracies and poor imaging conditions, realistic HSIs suffer from various noises such as stripe noise, deadline noise, and impulse noise <cit.>. Furthermore, these noise structures typically exhibit non-independent and non-identically distributed statistical properties <cit.>, indicating that it is more complex and challenging to remove than iid Gaussian noise. We present the simulation denoising results for different complex noises in <ref>. Under various combinations of complex noise, HSSD-B almost surpasses the state-of-the-art other method, while HSSD-S also demonstrates its competitiveness. This shows that our method has strong generalization ability and modeling performance. We present the visual comparison results in <ref> and <ref>, where multiple types of noise are used to corrupt the HSI. It can be observed that GRNet suffers from complex noise. SERT still retains a significant amount of stripe noise, while T3SC shows apparent color inconsistency with other results. In contrast, our method provides a clean image with better preservation of lines, which indicates that our method has strong spatial and spectral reconstruction capabilities. §.§ Experiments on Realistic Data We perform real denoising on the RealHSI <cit.> dataset, and the partitioning of the dataset is the same as SERT <cit.>. <Ref> reports the quantitative results. Compared with previous methods, our method achieves better results in handling real noise, with the best PSNR and SAM values than other methods. We present the denoising results of different methods and their corresponding residual maps in <ref>. Owing to our approach combining the advantages of CNN and Transformer, HSSD performs well in reconstructing high-frequency information, particularly in regions with edges where the residual errors are smaller. In summary, our method achieves the best metrics and visual results. This verifies the effectiveness of the non-local and local modeling in the proposed method. §.§ Model Analysis We perform computational complexity analysis and ablation experiments under blind and σ=50 intensity of Gaussian noise, respectively. The test dataset contains 50 clean images of size 512× 512× 31. We conduct ablation studies on different feature projection and aggregation methods. We uniformly project the input HSI to 96 channels. We report the corresponding experimental results in <Ref>. The addition and gate mechanisms are higher than the concat mechanism in PSNR at the cost of a higher number of parameters. In Section <ref>, we begin with the classical non-local means algorithm and establish the connection between it and the current mainstream window-based self-attention mechanism. We note that when the diameter n in <ref> is set to 1, the non-local similarity degenerates into the window self-attention. Furthermore, an increase in the diameter allows the non-local similarity computation to incorporate a broader scope of local feature information. <Ref> depicts the training progression of three models with varying diameters within the same search domain. The PSNR and SSIM metrics during the validation phase reveal that, as the number of training iterations increases, models with a larger diameter consistently outperform the single-scale model (n=1). Comparison with window-based self-attention methods. The proposed HSSD method was inspired by <cit.>. In <ref>, we present quantitative comparison results between HSSD and other Transformer-based methods that have leveraged window-based self-attention. To ensure fairness, all methods were trained using the same dataset and split files. Among these methods, SwinIR <cit.> directly applies the module from <cit.> to image denoising, and thus can be considered as a benchmark for comparison. CSwin <cit.> introduced the cross-shaped mechanism based on the swin transformer, which has a better ability to capture local information. SST <cit.> and SERT <cit.>, two recent Transformer models, further improve upon SwinIR and CSwin, respectively, by incorporating spectral information modeling to hyperspectral image denoising. Considering the image property of denoising, we additionally consider local spatial modeling and use a simple spectral information processing module with minimal cost. As shown in <Ref>, our proposed method demonstrates superior denoising performance while maintaining a lower level of model complexity. Comparison with CNN-Transformer methods. We utilize 2D CNNs to model local spatial information and propose three mechanisms for fusing the extracted local and non-local spatial information. The denoising performance of the CNN-Transformer-based approach is presented in <ref>. Integrating convolutional operations with Transformers is an intuitive improvement, yet diverse strategies yield different denoising results. We have selected two representative image denoising methods for comparative analysis. The Uformer <cit.> employs convolution layers to act as a feed-forward network module, thereby enhancing the Transformer's capability to capture local contextual information. Restormer <cit.> computes self-attention across channel dimensions and utilizes depthwise convolution for local context mixing. As demonstrated in <ref>, our proposed method also achieves optimal denoising results with minimum computational cost. Component Analysis. <Ref> shows the ablation study results of the main modules. Compared with the local spatial and spectral mixer, the non-local spatial and spectral mixer significantly improves the denoising effect. Since image denoising relies on local operators, we fuse local and non-local mixers to optimize the denoising performance, thus improving 0.23 dB and 0.13 in PSNR and SAM, respectively. <Ref> shows the residual maps and their corresponding ranks for the three methods. Local mixer encounters difficulties when attempting to capture complex noise patterns, thus obtaining the worst result. The non-local mixer's residual map still retains semantic information, indicating that it does not fully capture all noise patterns. In contrast, our method effectively integrates the advantages of both local and long-range modeling. This integration facilitates the capture of more complex noise patterns, ultimately leading to a higher rank for feature maps. § CONCULUSION In this paper, we present a hybrid spatial-spectral denoising neural network to explore local and non-local spatial information. A dual-path network composed of a non-local and a local mixer is designed to learn non-local and local features separately, leading to good preservation of fine-grained information. Based on the decoupling strategy, We use spatial self-attention and MLP to learn spatial and spectral features separately, thus resulting in an excellent exploration of spatial and spectral features. Furthermore, depthwise convolution with small computation costs is utilized to capture local information. Therefore, our proposed method can achieve a good balance between spatial-spectral learning and computational complexity. Both synthetic and real noisy experiments are carried out to verify the efficiency and superiority of the proposed method over state-of-the-art methods in visual results and objective quality analysis. In the future, we plan to extend our method to tackle various HSI restoration tasks. elsarticle-num-names
http://arxiv.org/abs/2406.09255v1
20240613155749
Compact Parallel Hash Tables on the GPU
[ "Steef Hegeman", "Daan Wöltgens", "Anton Wijs", "Alfons Laarman" ]
cs.DS
[ "cs.DS" ]
Freudenthal Duality in Conformal Field Theory Arghya Chattopadhyay^amailto:arghya.chattopadhyay@umons.ac.bearghya.chattopadhyay@umons.ac.be, Taniya Mandal^btaniya.mandal@niser.ac.intaniya.mandal@niser.ac.in, Alessio Marrani^calessio.marrani@um.esalessio.marrani@um.es ^aService de Physique de l'Univers, Champs et Gravitation Université de Mons, 20 Place du Parc, 7000 Mons, Belgium ^bSchool of Physical Sciences, National Institute of Science Education and Research An OCC of Homi Bhabha National Institute Bhubaneswar 752050, India ^cInstituto de Física Teorica, Dep.to de Física Universidad de Murcia, Campus de Espinardo, E-30100, Spain Abstract Rotational Freudenthal duality (RFD) relates two extremal Kerr-Newman (KN) black holes (BHs) with different angular momenta and electric-magnetic charges, but with the same Bekenstein-Hawking entropy. Through the Kerr/CFT correspondence (and its KN extension), a four-dimensional, asymptotically flat extremal KN BH is endowed with a dual thermal, two-dimensional conformal field theory (CFT) such that the Cardy entropy of the CFT is the same as the Bekenstein-Hawking entropy of the KN BH itself. Using this connection, we study the effect of the RFD on the thermal CFT dual to the KN extremal BH. We find that the RFD maps two different thermal, two-dimensional CFTs with different temperatures and central charges, but with the same asymptotic density of states, thereby matching the Cardy entropy. In an appendix, we discuss the action of the RFD on doubly-extremal rotating BHs, finding a spurious branch in the non-rotating limit, and determining that for this class of BH solutions the image of the RFD necessarily over-rotates. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT On the GPU, hash table operation speed is determined in large part by cache line efficiency, and state-of-the-art hashing schemes thus divide tables into cache line-sized buckets. This raises the question whether performance can be further improved by increasing the number of entries that fit in such buckets. Known compact hashing techniques have not yet been adapted to the massively parallel setting, nor have they been evaluated on the GPU. We consider a compact version of bucketed cuckoo hashing, and a version of compact iceberg hashing suitable for the GPU. We discuss the tables from a theoretical perspective, and provide an open source implementation of both schemes in CUDA for comparative benchmarking. In terms of performance, the state-of-the-art cuckoo hashing benefits from compactness on lookups and insertions (most experiments show at least 10–20% increase in throughput), and the iceberg table benefits significantly, to the point of being comparable to compact cuckoo hashing— while supporting performant dynamic operation. § INTRODUCTION General purpose graphics processing units (GPUs) have been used to significantly speed up computations in many different domains. With thousands of processing cores, GPUs offer access to massive parallelism for everyone. However, the main GPU memory tends to be scarcer than the main memory available to CPUs. Moreover, memory access times are often the main performance bottleneck of GPU programs. Therefore, data structures that sparingly use GPU memory can positively affect both the amount of data that can be stored and the performance of a GPU program. Memory-efficiency can be achieved by quotienting, a technique for reducing the storage required per key by using its storage location as information. This was first used in practice in Cleary's compact hash table <cit.>. This reduces the memory usage per table slot logarithmically in the number of slots. The parallelization of Cleary's hashing scheme in <cit.> involves coarse locking. On the GPU, the (coarse) locking strategies of traditional CPU multicore algorithms do not perform well. The fastest GPU tables use atomic operations to directly insert keys into the table. We refer to this as lockless. In addition, it is fair to say that optimizing GPU hash table performance is about reducing the number of cache lines involved per operation <cit.>. Among the available GPU hash tables, adaptations of cuckoo hashing <cit.> and iceberg hashing <cit.> have proven promising. In particular, the bucketed cuckoo table of <cit.>, the slab hash table of <cit.> and DyCuckoo <cit.> are state-of-the-art. Indeed, <cit.> confirm that at least in theory compact hashing with cuckoo or iceberg hashing is a good idea. A downside of these tables is however that they are more strict in terms of where a key can be stored. With cuckoo hashing, this can lead to situations where there are still empty slots in the table, but specific keys cannot be added to it. So here, one should not just consider the memory savings per slot, but also the expected fill factor of the table. Moreover, not all hash tables offer the same guarantees. The bucketed cuckoo table of <cit.> for example is static, meaning that it only supports a mode of operation where the table is built once out of a batch of unique keys, and can then only be queried. As inserts temporarily move existing keys from the table to local memory, lookup operations performed during insertion may return false negatives, and concurrent insertion operations may cause keys to be stored in multiple slots. For the same reason, DyCuckoo <cit.>, while supporting resizing, does not support concurrent insert operations. We say that a table supports dynamic operation if it supports concurrent combinations of lookups and writes. In many applications, dynamic operation is essential. For instance, in model checking <cit.>, fixpoint algorithms continuously check whether keys have been seen before and insert them if not. We propose a compact, lockless GPU hash table that supports concurrent inserts. Our table is based on iceberg hashing, with a provably correct find-or-put operation. We provide an open source implementation in CUDA. We also combine the lockless GPU hash table based on cuckoo hashing from <cit.> with compact hashing for comparison. Synthetic benchmarks show that both tables benefit from compactness (10–20% speedup on lookups and insertions), while also halving or nearly quartering memory usage in many situations. Furthermore, the compact iceberg table performance is very close to the compact cuckoo one in static situations —a significant result, as the cuckoo table of <cit.> is, to the best of our knowledge, the fastest static GPU hash table to date, and the compact cuckoo table is comparable to it. This demonstrates the competitiveness of the compact iceberg table with the state-of-the-art. In conclusion, we establish that when it comes to hash tables on the GPU, you can have your cake and eat it too: compact hashing can lead to both reduced memory usage as well as improved performance. § BACKGROUND §.§ Hash tables A hash table T is a data structure implementing a set or map from keys K to values V. Some hash tables are stable, meaning that the index at which a key is stored does not change. In this paper, we focus on hash tables representing sets, but the presented operations can be extended to a map implementation, by storing key-value or key-pointer pairs instead of keys, or, in the case of a stable table, by storing values in a separate array at the same index as the key. Hash tables are typically optimized for Find, Put, and Delete operations. We generally do not consider deletions (or treat them as if exceptional), as we are interested in the use of hash tables in search algorithms. In this paper, our tables generally consist of an array of buckets. Each bucket contains one or more slots. Each slot is either unoccupied (has value /) or stores a key, possibly together with additional bookkeeping information. H hash functions h_0,…,h_H-1: K → T map keys to buckets in T. Given a hash function h_i, a key k can be stored in some slot with index j of the corresponding bucket T[h_i(k)], i.e., T[h_i(k)][j]. A table's fill factor is defined as the fraction of slots that are occupied. If key k is to be inserted into the table, but buckets indexed by h_0(k),…,h_H-1(k) are full (all their slots are occupied), then there is typically no way to add k to the table and the table is considered full (an exception is the cuckoo hashing scheme below). A bigger table can then be allocated to store the set (rehashing) <cit.>. The larger H, the greater the expected fill factor a table achieves before rehashing is needed, but the longer lookups may take, as potentially all buckets h_i(k), i < H need to be checked. In which of the buckets h_i(k) a key k is stored, and how the table is manipulated, is determined by the hashing scheme. The memory efficiency of a scheme depends on the expected fill factor that can be achieved, as well as the memory used per table slot. We discuss two schemes below. §.§ Cuckoo hashing §.§.§ Insertion Cuckoo hashing <cit.> achieves high fill factors in practice while limiting the slots in which a key may be placed, by moving keys between buckets during insertion. A yet to be inserted key may take the place of a key already in the table (evicting the original key), forcing it to be moved elsewhere. Each slot either contains a dedicated value /, signaling it is unoccupied, or (in the case of cuckoo hashing) stores a pair (k,j) of key and hash function index j < H. When a slot containing (k,j) is evicted, the key k is in turn directed to bucket h_j+1 H(k), evicting a slot there, until finally a key is directed to a bucket with an empty slot, in which case it takes the empty slot and the insertion is finished. If the chain of evictions exceeds a threshold length C, the insertion is aborted and the table is considered full. An optimization is to, instead of storing a hash function index with the key, recover it from the key k found in bucket b as the first j with h_j(k) = b. §.§.§ Lookup A key k is only found in a bucket h_i+1(k) if it was kicked out of bucket h_i(k), so if deletions are forbidden, the Find(k) procedure only has to check indices h_0(k),h_1(k),… in order up to the first bucket containing k or an unoccupied slot. (For rare deletion operations, additional `tombstone' values can preserve this invariant, as discussed in, for example, <cit.>.) §.§ Iceberg hashing Iceberg hashing <cit.> divides the table into multiple levels. Each key is assigned to one large primary bucket in the first level by h_0(k), and two smaller secondary buckets in the second level indicated by h_1(k),h_2(k). There is an additional third level outside of the table structure, made up of linked lists. Iceberg hashing is stable: once a key is inserted into a slot, it is never moved to another <cit.>. §.§.§ Insertion On insertions, a key k is placed in the primary bucket if it has unoccupied slots, otherwise it is placed in its secondary bucket with the most unoccupied slots. This choice aspect has significant impact on load balancing <cit.>. If both the primary and secondary buckets of k are full, then it gets sent to the third level, where k is inserted into the linked list indicated by h_0(k). As level 3 can grow arbitrarily large, insertions can never fail. But to maintain performance, the table should be rehashed when the third level grows large. §.§.§ Lookup Again assuming deletions are prohibited, lookups can be performed as follows. First inspect the primary bucket. If it contains k, we are done, and if it does not contain k but some of its slots are unoccupied, we are done as well. Otherwise, both secondary buckets need to be inspected. If k is found in neither and one of the buckets has an unoccupied slot, then k is not in the table. If all three buckets are full and do not contain k, the linked list indicated by h_0(k) needs to be searched for k. §.§.§ Level sizes Not all buckets in the table have the same number of slots: the primary buckets are generally chosen to have more slots than the secondary ones <cit.>. Additionally, there are typically fewer secondary buckets than primary buckets <cit.>. So it might be instructive to think of an iceberg table as two hash tables and an array of linked lists. The multi-level approach allows iceberg tables to achieve a high fill factor. [ In <cit.>, definition of fill factor (space efficiency) is adapted to include level 3. ] On the other hand, the more levels an operation needs to traverse, the longer it takes. If an operation only needs to work on the first two levels, it runs in constant time, but if level 3 also needs to be inspected, it does not. With proper tuning of the parameters, the expected spillage of level 1 to level 2 is small, and the spillage of level 2 to level 3 even smaller <cit.>. In <cit.>, they use large primary buckets (64 slots), and a 1 to 8 ratio in the size of the primary and secondary level. In practice, iceberg hashing achieves a high fill factor while maintaining near constant time performance (constant-time with high probability). §.§ Compact hashing In <cit.>, Cleary introduced a technique for compact hashing (now known as quotienting <cit.>), which reduces the storage space required per slot logarithmically in the number of buckets as follows. Let T be a hash table with 2^N buckets, let K = 2^M be the binary strings of length M, and let π: K → K be a permutation of keys. Assign key k to the bucket a(k), where a(k) is the number denoted by the first N bits of π(k). This is also called the address of k. The remaining bits of π(k), called the remainder r(k), are then stored in a slot in this bucket. The key occupying a slot can thus be recovered by combining the address of the bucket that the slot is in and the remainder stored in the slot, followed by computing the inverse under π. In summary, k = π^-1(a(k)r(k)), where stands for concatenation of strings. The remainders are only M-N bits in length so per-slot and thus per-table memory savings are logarithmic in the total number of buckets. Schemes using multiple hash functions h_i to assign keys to multiple buckets can be made compact by using multiple permutations π_i, giving rise to multiple address functions a_i and remainder functions r_i. § A PARALLEL COMPACT ICEBERG HASH TABLE We provide a lockless parallel compact iceberg algorithm. We focus on a parallel find-or-put operation which returns / if the given key is in the table, and otherwise inserts the key (or returns /). With this operation, we can support ubiquitous fixpoint computations as the ones used in model checking <cit.> and many other applications. Both cuckoo and iceberg hashing have proven promising for GPU applications, as discussed in the introduction. Because of the difficulty of realizing concurrent writes in cuckoo hashing, we opt for a compact iceberg table with a concurrent find-or-put operation. §.§.§ Find-or-put Algorithm <ref> gives our lockless parallel find-or-put procedure for compact iceberg hashing, called . We use a table T_0 for the primary buckets, and T_1 for the secondary buckets, each compacted separately, so that a_0(k) indicates a k's primary bucket in T_0, and a_1(k),a_2(k) its secondary buckets in T_1. The constants B_0 and B_1 are table parameters indicating the number of slots per primary bucket and per secondary bucket respectively. Lines in the algorithm are not considered to be atomic, except for the com­pare-and-swap operation Cas(a,b,c), which checks if the value of a is b, and if so, replaces it with c, all in one atomic operation. It returns / if it did replace the value of a, and / otherwise. (In particular, if c ≠/ and multiple threads simultaneously execute (a,/,c) on the same empty slot a, only one succeeds.) The reads in lines <ref>,<ref>,<ref> are consequently atomic per-slot. §.§.§ Correctness A proof of correctness is given in appendix <ref>. The key idea is that when the algorithm attempts to insert into a slot, all earlier slots are nonempty and have been read (see e.g. lines <ref>,<ref>). This allows the table to handle concurrent inserts with duplicate inputs. §.§.§ Limitations We limit ourselves to an iceberg table with level 1 and 2 tables —the difficulty in parallelizing iceberg hashing lies in the choice aspect of level 2. It should be noted that <cit.> does not truly separate the first and second level for its iceberg implementation, defying the basis of the theoretical analysis in <cit.>. We believe that the dynamic slab hash table of <cit.> could be adapted as a third level for the scheme below, forming a full 3-level iceberg table. We have found that already with the first two levels, good fill factors can be achieved in practice. We currently omit resizing and support for deletion operations, as a vast number of operations can already be supported with the given find-or-put operation. As discussed in for example <cit.>, the table can be extended to support a delete operation using so-called `tombstone' values, which preserve the invariant that non-empty slots occur consecutively. Using a stop-the-world approach, we believe resizing can also be implemented. In most applications on the GPU, it suffices however to simply claim all memory for the task at hand. § IMPLEMENTATION §.§ Architectural considerations §.§.§ Architecture We summarize the architecture of NVIDIA GPUs as described in the CUDA programming guide <cit.>. A GPU contains several multiprocessors, each capable of executing multiple threads (processes) in parallel. Threads are divided in groups of 32 called warps. Warps are then assigned to multiprocessors. As a consequence of this warp-oriented architecture, if threads in a warp take different branches, the execution of these branches may be serialized. The highest performance is achieved if all threads in a warp execute the same lines of code. An upside of the tight coupling between threads in a warp is that they can efficiently communicate, and that their memory accesses can be coalesced: if threads in a warp access elements in the same cache line in parallel, the line is retrieved from memory only once, instead of once per thread. In summary, to improve performance, threads in a warp should have as little branch divergence as possible, and should aim to access memory in the same cache line as often as possible. §.§.§ Cooperative work sharing For bucketed GPU hash tables, this is typically achieved by warp-cooperative work sharing <cit.>. Each thread receives an input key, but warps then cooperate, working together on one of their threads' keys at a time. When a bucket is inspected, each thread in the warp reads one of the bucket slots, together assessing the whole bucket in one (or few, depending on the bucket size) coalesced reads. CUDA allows for warps to be subdivided in smaller cooperative groups, which can be used when buckets have fewer than 32 slots. In summary, cooperative work sharing allows all threads to do useful work while decreasing the number of memory operations per thread. §.§.§ Group synchronization In CUDA, each thread has a global rank, an index in the total number of threads. In a cooperative group, each thread also has a local group rank, from 0 to the group size (exclusive). Cooperative groups have several synchronization primitives, such as , , and , which can be used to implement the following abstract procedures. Let G be a group. _G,s(v) evaluates v in the thread with group rank s and returns the result. _G(P) is true if and only if the predicate P evaluates to true in any thread in the group, _G(P) gives the group rank of the first thread in G in which P holds, or if P is not true in any, and _G(P) gives the number of threads in G in which P holds. §.§.§ Global synchronization When one thread writes to memory (in our case, a bucket slot), there is no guarantee that this change is reflected in reads by other threads until explicit synchronization, unless this memory was written using atomic instructions. (Even then, volatile loads are required.) Of interest to us are and , atomic and Swap operations, respectively. §.§ Iceberg find-or-put Algorithm <ref> describes the cooperative find-or-put procedure. For simplicity, we assume that the primary bucket size B_0 must divide 32 (the size of a warp), and that the secondary buckets contain half that many rows. [ The actual implementation also supports smaller secondary buckets. Larger primary buckets could be implemented. ] Algorithm <ref> implements the cooperative work sharing for inputs of a multiple of B_0 keys. The actual implementation supports input batches of any size. §.§.§ Implementation notes In the actual CUDA implementation, the reads use volatile loads, and there are some minor optimizations (filled slots are not read again, and it exploits that returns the read value of the target slot to avoid rereading the slot after a failed insertion attempt). §.§ Permutations We use simple permutation functions: one-round Feistel functions based on the hash family used in <cit.> for comparison purposes. Users of the CUDA implementation can easily supply their own permutations. §.§ Parallel compact cuckoo implementation We give a compact cuckoo implementation, that is close to the bucketed cuckoo implementation of <cit.>. The main difference being the use of permutations instead of hash functions and the use of remainders, cf. Algorithm <ref>. A downside of cuckoo hashing is that it is not stable: keys move during insertions. This makes it difficult to define a performant find-or-put procedure: it could be that one process is checking whether k is in the table exactly when another process has decided to insert k', and has in this process temporarily evicted k out of the table. We see only one way to avoid such situations, and that is to somehow synchronize all processes, so that no process works on the insertion-phase of a find-or-put while others are in the lookup-phase. However, on the GPU, synchronization of all parallel processes is very expensive. Even if all processes are forced to finish their lookup-phase before one starts their insertion phase, there is yet another problem: multiple processes executing find-or-put for the same key k. After the lookup phase, they could all conclude that k has to be inserted into the table. During the insertion phase, after one process inserts k into its first bucket, an unrelated process might evict k from this bucket in order to insert a key k', followed by one of the other processes inserting a new copy of k in the first bucket. This is another way in which multiple copies of a key could be inserted into the table. While there could still be solutions to this (locking, or using a prohibitive amount of metadata), we do not see a performant parallel find-or-put algorithm for cuckoo hashing. But as a static table, we can still make it compact. §.§.§ Insertion See Algorithm <ref> for a parallel insertion algorithm for compact cuckoo. Note the inversion in line <ref> to recover the evicted key. The operation is atomic as in Algorithm <ref>. In addition the Swap(a,b) operation atomically swaps the values of a and b. Again, the snapshot read in line <ref> is atomic per-slot. The CUDA implementation of Algorithm <ref> uses the same warp-based work-sharing approach as the iceberg table. §.§.§ Lookup Look for r_i(k) in buckets a_i(k). As with non-compact cuckoo hashing, the insertion guarantees that if bucket r_i(k) is not full, then buckets r_j(k) with j > i are completely empty (buckets are filled in order). Hence the search inspects the buckets for k in order, and stops early if a bucket a_i(k) is inspected that does not contain r_i(k) and is not full (then k is not in the table). §.§ CUDA code The full code, used in the benchmarks below, has been accepted to appear as a conference artifact <cit.>. The latest version of our CUDA library is open source and available at <https://github.com/system-verification-lab/compact-parallel-hash-tables>. The cuckoo part is based partially on the second author's master's thesis <cit.>. For simplicity, we support only tables of which the number of slots (per level) is a power of 2, and we assume input keys are 64 bits wide. The code is set up so that these restrictions can be eliminated if so desired. Users can easily supply their own permutation functions. The iceberg table uses (for ), and the cuckoo table also uses (for Swap). Both operations support word widths of 32, 64, and 128 bits, and additionally supports 16-bit words. Our cuckoo table thus supports slots of 32, 64 and 128 bits, and the iceberg table additionally supports slots of 16 bits. Smaller slot sizes could be supported by using atomic instructions that are “too coarse”, likely at a performance cost. It is however important to be mindful of the global memory layout: the total memory usage of a bucket should divide the cache line size (128 bytes) lest some buckets end up being spread out over multiple cache lines, degrading performance. § EXPERIMENTAL EVALUATION §.§ Synthetic benchmarks We want to measure the performance impact of compact hashing. To this end, we set up tables (in various bucket combinations) with the same total number of slots. Our original keys are wider than 32 bits, but we can store them in slots of 32 (cuckoo) or 16, 32 (iceberg) bits. We also benchmark the same tables with 64 bit slots that would fit the original keys. We can see the 64 bit tables as a baseline: they essentially behave as non-compact cuckoo and iceberg tables. [ Microbenchmarks have shown that the runtime of computing the permutations themselves is negligible, so this is a fair assumption. ] Thus, our benchmarks show the impact of compact versus non-compact hashing in terms of runtime performance. Based on <cit.>, we expect the tables with 32 slots per (primary) bucket in particular to benefit significantly, as the 64 bit versions use two cache lines per (primary) bucket and the compact tables one, or a half. Input was taken from a set of uniformly drawn unique keys, taking duplicates from them as necessary. As in the benchmarks of <cit.>, to keep the runtime manageable, we vary the permutations (associating keys to buckets) between measurements, instead of the input keys themselves. We benchmark the largest table that can be stored on our GPU. Apart from the table, we also need to store input keys, and reserve memory for storing a.o. return values (/, /, et cetera). With the 24GB memory limit of our RTX 4090, we end up with cuckoo tables of 2^27 slots, and iceberg tables of 2^27 primary and 2^24 secondary slots. Larger tables are possible with the use of CUDA unified memory <cit.>, which allows for allocating more GPU memory than available, dynamically swapping memory to and from the host (CPU) RAM. Using unified memory, our RTX 4090 can work with tables of 2^29 primary and 2^26 secondary slots. In this case, the compact tables are 10× faster than the non-compact ones. However, this is not a particularly fair comparison (the non-compact tables take at least twice as much memory, so they will require more swapping) we focus on the situation where the whole experiment fits in GPU memory. If each primary bucket contains 32 of the 2^27 primary slots, then there are 2^22 primary buckets, and so compactness will shave 22 bits off of every key. If each primary slot is 16 bits wide, they can store remainders of at most 15 bits in length (one bit is required to indicate whether the slot is occupied or not), and so the primary level can store keys of at most 22 + 15 = 37 bits. So we drew our input keys uniformly from [0,2^37). We benchmarked cuckoo tables of 8, 16, and 32 slots per bucket, with slot sizes of 32 (compact) and 64 (non-compact) bits. For iceberg, we benchmarked tables with 8, 16, and 32 slots per primary bucket —the secondary buckets having half the slots of the primary ones— for 16 bit primary slots with 32 bit secondary slots (compact), [ We use 32 bit secondary slots with the 16 bit primary slots because the secondary slots have less compactness (as there are fewer secondary slots), and so the keys of width 37 would not fit in 16 bit secondary slots. They do fit in the 32 bit slots. ] 32 bit primary slots with 32 bit secondary slots (compact), and 64 bit primary slots with 64 bit secondary slots (non-compact). We reiterate that each cuckoo resp. iceberg table has the same number of slots, we only vary how they are divided into buckets and how much of the compactness is realized in practice (how much memory each slot consumes). For cuckoo, the compact 32 bit versions use half the memory of the non-compact 64 bit versions. For iceberg, the compact 32 bit versions use half the memory of the non-compact 64 bit versions, and the 16 bit versions use 9/32 of the memory. With other recent GPUs, the RTX 3090 and L40s, we have found results similar to the ones presented here. On the older RTX 2080 Ti, we have not. [On the RTX 2080 Ti, compactness shows only a negligible performance increase.] §.§ Results §.§.§ Insertion Figure <ref> shows the average throughput of filling the table to a certain fill factor with a batch of unique keys, measured for fill factors 0.5, 0.6, 0.7, 0.75, 0.8, 0.85, 0.9, and 0.95. (A static insertion benchmark.) The cuckoo tables with bucket size 16, 32 achieve fill factor 0.95. The highest fill factor achieved by the iceberg tables is 0.9, with bucket size 32. Both tables show similar performance, with a modest (5–10%) advantage for most compact 32 bit versions over 64 bit ones. The more compact 16 bit slots show an additional slight improvement, resulting in a 15–20% advantage over the 64 bit ones. For the variants with 32 slots per (primary) bucket, there is a larger speedup. This is especially relevant for the iceberg table, as this only reaches fill factor 0.9 on this variant. The most compact (16 bit primary slots) version of this table is the fastest table in this benchmark, with a 60% performance increase over the non-compact version. §.§.§ Lookup Figure <ref> shows Find benchmark results, in which for each measurement the table is filled to a certain fill factor, and then queried for half as many unique keys as there are slots in the table. The tables with 32 slots per (primary) bucket benefit significantly from compactness, roughly doubling throughput over the non-compact version. For most other bucket sizes, compactness gives a rough 15–25% performance increase. The best cuckoo table (compact, 32 slots per bucket) outperforms the best compact iceberg table (compact, 32 slots per bucket) by about 20–30% at higher load factors, especially when querying mostly keys that are not in the table. This can be explained by the fact that with iceberg tables, both secondary buckets are inspected if the primary bucket is full and does not contain k, while with cuckoo tables it can also happen that only two buckets are inspected (cf. section <ref>). §.§.§ Find-or-put We conducted a find-or-put benchmark, measuring throughput for various combinations of before and after fill factors. Before each measurement, the table was filled to the before fill factor. The Fop operation was then issued with as many input keys as there are slots in the table (2^27 for cuckoo, 2^27 + 2^24 for iceberg), containing a mix (with duplicates) of keys not in the table and keys already in the table, such that after the operation, the table was filled exactly to the after fill factor. To have a baseline for the iceberg table, we implemented an unsophisticated find-or-put operation for the cuckoo table that first sorts the input keys to detect duplicates (using the radix sort in Thrust, a library included with the CUDA toolkit), issues a Find once per unique key, and then inserts one of each key not in the table with Put. Figure <ref> shows the results. There is little difference between the cuckoo versions. The most compact 16 bit iceberg tables show a 10–20% speedup over the non-compact 64 bit variants, with a greater 60–100% speedup for the variant with 32 slots per primary bucket. The Find-or-put of the compact iceberg tables is more than 5 times faster than the unsophisticated cuckoo find-or-put. §.§ Experiments with real-world data Apart from synthetic data, we have also benchmarked the find-or-put operation against real-world data from a model checking application (HAVi from <cit.>). A single-threaded model checker was used to explore the model, and the sequence in which the nodes were visited forms our benchmark data. The set contains about 2^26.1 keys of width 24, about 2^23.9 without duplicates. We measured the throughput of handling certain ratios of the data, for cuckoo tables of 2^24 slots and iceberg tables of 2^24 + 2^21 slots. See Figure <ref> for the results. The find-or-put of the fastest compact iceberg table is 8 times faster than the fastest baseline cuckoo find-or-put implementation on the RTX 4090. The most compact iceberg tables are competitive to their non-compact versions, with around 5–15% increase in throughput. § CONCLUSION On the GPU, compact hashing through quotienting not only saves precious GPU memory, it also modestly improves performance. (Compact) iceberg hashing provides a valid alternative to cuckoo hashing, with comparable Find and Put performance, while supporting an efficient, correct find-or-put —though our version with only two levels does not achieve as high load factors. A third level is worth considering, perhaps made of the slab lists of <cit.>. While we focused on modeling sets, similar techniques can be used to model key-value dictionaries. As this increases cache line strain (when storing the value with the key), compactness might be of even more importance here. splncs04 § CORRECTNESS OF ICEBERG FIND-OR-PUT We now consider the correctness of the iceberg find-or-put algorithm. To ease our analysis, we first introduce a simplified version of Algorithm <ref> by making explicit its well-ordering of the slots available to a given key. Recall that for each key k there are three (non-exclusive) storage locations: the primary bucket T_0[a_0(k)], containing B_0 slots, and two secondary buckets T_1[a_1(k)], T_1[a_2(k)], containing B_1 slots each. Algorithm <ref> prefers to store k in the first nonempty row in its primary bucket. If the primary bucket is full, then it opts for the first nonempty slot in the least full secondary bucket instead (if they are equally full, in the second one). This corresponds with the following. Let S = { 0 }×{ y | 0 ≤ y < B_0 }∪{ 1, 2 }×{ y | 0 ≤ y < B_1 }. Define the well-order ≺ on S by * (0, x) ≺ (1+y,z) for all x,y,z * (x, y) ≺ (x, z) if and only if y < z * (1, x) ≺ (2,y) if and only if x < y Given a key k, we see pairs of the form (0, y) as indicating primary slots T_0[a_0(k)][y], and see the pairs (1+x, y) as slots T_1[a_x(k)][y] in the first (x = 0) and second (x=1) secondary buckets. This essentially orders the slots “top to bottom, right-to-left” as seen in Figure <ref>. The insertion behavior of Algorithm <ref> can be summarized as continuously attempting to insert k into the first nonempty slot in this ordering until it succeeds. Recall that keys k are stored in their primary bucket by writing r_0(k) into one of its slots; that in the first secondary bucket, k is stored by writing (r_1(k), 0) into a slot; and in the second secondary bucket by writing (r_2(k), 1) into a slot. We ease notation by using the sign function (x)—which returns 0 for 0 and 1 for all other inputs— and functions r'_i(k), where r'_0(k) = r_0(k) and r'_1+i(k) = (r_i(k), i). Given (x,y) ∈ S, the (x,y)th slot for k can then be written as T_(x)[a_x(k)][y], and it contains k if and only if it contains r'_x(k). In order to state anything about the correctness of the find-or-put algorithm, we first formalize what table properties it expects and preserves. A table T = (T_0, T_1) is well-formed if the following holds: * Each slot in T_(x)[a_y(k)] is either / or of the form r'_x(k) for some k * For each slot of the form T_(x)[a_x(k)][y] = r'_x(k), the following holds: ∀ (x', y') ≺ (x, y): T_(x')[a_x'(k)][y'] ∉{/, r'_x'(k) } That is, if k is in a slot, then all earlier slots for k are nonempty and do not contain k. We call this the order property. The first part states that each table slot must either be empty or contain an appropriate remainder. The second part (the order property) will play an important role in the correctness argument because of the following observation. Well-formed tables do not contain duplicate entries. Let T be a well-formed table, and assume towards a contradiction that a key k appears in two distinct slots. This means that there are (x,y), (x',y') ∈ S with T_(x)[a_x(k)][y] = r'_x(k) and T_(x')[a_x'(k)][y'] = r'_x'(k). Assume without loss of generality that (x',y') ≺ (x,y). (We may do this because ≺ is a well-order.) Applying the order property, we find T_(x')[a_x'(k)][y'] ≠ r'_x'(k): contradiction. Instead of proving the correctness of Algorithm <ref>, we shall prove it for Algorithm <ref>, which incorporates our new terminology. It can be obtained from Algorithm <ref> by merging the two loops (trading performance for simplicity) and realizing that at any point in the algorithm, the slot it selects to attempt to insert into is precisely the least nonempty slot (in the local bucket snapshots) according to ≺. From a correctness perspective, the algorithms are indistinguishable. We merely choose to analyze Algorithm <ref> for legibility. Algorithm <ref> in words: make local copies, snapshots b_0,b_1,b_2, of the three buckets for k. Inspect the snapshots: if any of their slots contain k, return /. Otherwise, try to insert k. If the snapshots do not contain any empty slots, k cannot be inserted and we return /. Otherwise, find the minimal (x,y) ∈ S under the ≺-order so that slot b_x[y] is empty, and attempt to insert k into the corresponding slot in the actual table T using an atomic compare-and-swap. If this succeeds, return /. If not, update the snapshots and repeat. At the end of this section, we shall prove the following theorem, which also holds for Algorithm <ref>. Let (k_0) ∥…∥(k_n) be a parallel composition of Algorithm <ref> applied to a well-formed table T. The following holds: * The composition preserves the well-formedness of T * Each (k_i) terminates, and returns either /, /, or / * (k_i) returns / or / if and only if k is in T after completion * (k_i) returns / if and only if it has inserted k into T * If (k_i) returns / then the buckets for k are full and do not contain k When we say that an algorithm preserves an invariant ϕ, we mean that, for any instruction in the algorithm, if ϕ holds before the instruction, then it also holds after the instruction. As the one operation modifying shared state in Algorithm <ref> is atomic, ϕ is then also preserved by parallel compositions (k_0) ∥…∥(k_n) of the algorithm. From now on, we work in a context where there are only (and finitely many) parallel executions of Algorithm <ref>, that is, that there are no other algorithms run alongside it. [ With extra care, one can show that the results still hold if other algorithms are executed in parallel with the find-or-put algorithm, so long as they preserve well-formedness and do not modify non-empty table slots. ] This allows us to use the following invariant. Algorithm <ref> does not modify non-empty slots in T. Consequently, the following is an invariant for Algorithm <ref>: if a slot in a local snapshot is nonempty, then it agrees with the table. Symbolically: ∀ (x,y) ∈ S: b_x[y] ≠/ b_x[y] = T_(x)[a_x(k)][y]. The first part is immediate, as the only write-operation in Algorithm <ref> is a compare-and-swap against / slots. Hence, for the second part, we need only consider the lines in Algorithm <ref> which modify one of the b_i: lines <ref>, <ref>, and <ref>. These only update the snapshots with their corresponding buckets in T, so this trivially preserves the property. Algorithm <ref> preserves the well-formedness of T. We focus on the order property, as the other parts of well-formedness are easily seen to be preserved. The only line of concern is line <ref>, as this is the only line potentially modifying T. If the is unsuccessful, the table is not modified and so the well-formedness is trivially preserved. If the is successful, only the (x,y)th slot for k is modified. In this case, it thus suffices to prove that ∀ (x', y') ≺ (x, y): T_(x')[a_x'(k)][y'] ∉{/, r'_x'(k) } is true afterwards. By merit of line <ref> (and the fact that (x,y) are local variables), (x,y) satisfies ∀ (x', y') ≺ (x, y): b_x'[y'] ∉{/, r'_x'(k) }. Applying Lemma <ref>, we find ∀ (x', y') ≺ (x, y): T_(x')[a_x'(k)][y'] ∉{/, r'_x'(k) } as desired. Thus, regardless of the result, line <ref> preserves the order property. (In the context of a finite parallel composition,) Algorithm <ref> eventually terminates. It returns either /, /, or /. The difficult part is to prove that it must eventually return. For this, it suffices that the number |{ (x, y) ∈ S : b_x[y] ≠/ }| of nonempty local snapshot slots grows strictly over the while-loop iterations: as there are finitely many slots, algorithm then cannot loop infinitely. To prove this, note: * once a snapshot slot is filled, it remains filled by Lemma <ref>; * if during a loop iteration it holds that k is in some snapshot, then the algorithm returns /; * if during a loop iteration it holds that all snapshot rows are nonempty and do not contain k, then the algorithm returns /. So at the start of the m+1th loop iteration, we may conclude that lines <ref> and <ref> were executed in the iteration before. It then follows from line <ref> and the fact that the snapshots have not been updated since its execution, that (x,y) = min_≺{ (x,y) ∈ S | b_x[y] = / } holds. However, after line <ref>—no matter whether the compare-and-swap succeeded— it must have been the case that T_(x)[a_x(k)][y] was nonempty. By Lemma <ref>, this is still the case when the snapshots are updated, thus the number of nonempty snapshot slots increases. We are now ready to prove Theorem <ref>. Part (a) and (b) follow from Lemma <ref> and Lemma <ref> respectively. Part (d) is immediate from inspecting line <ref>. One half of part (c) follows from part (d). For the other half, note that the algorithm only returns / if k is found in a local snapshot (line <ref>), and by Lemma <ref>, this means that k must be in T after completion. Finally part (e) follows from line <ref> and Lemma <ref>. The above results can be proved for Algorithm <ref> much the same way as for Algorithm <ref>, with some extra work because of the two separate operations and to note that the choice of which slot to into corresponds with the least nonempty slot in the local snapshots under the ≺-order. Lemmas <ref>,<ref>, <ref>, and Theorem <ref> also hold for Algorithm <ref>. In summary, any finite parallel composition of find-or-put procedures behaves as desired, preserving well-formedness, reporting found keys, and inserting missing keys where possible. As a corollary of the well-formedness of T, each key is inserted at most once. In particular, if multiple processes find-or-put k, at most one returns / (and if this happens, the others return /).
http://arxiv.org/abs/2406.08887v1
20240613074225
Low-Overhead Channel Estimation via 3D Extrapolation for TDD mmWave Massive MIMO Systems Under High-Mobility Scenarios
[ "Binggui Zhou", "Xi Yang", "Shaodan Ma", "Feifei Gao", "Guanghua Yang" ]
eess.SP
[ "eess.SP" ]
Low-Overhead Channel Estimation via 3D Extrapolation for TDD mmWave Massive MIMO Systems Under High-Mobility Scenarios Binggui Zhou, Xi Yang, Shaodan Ma, Feifei Gao, and Guanghua Yang Binggui Zhou is with the School of Intelligent Systems Science and Engineering, Jinan University, Zhuhai 519070, China; and also with the State Key Laboratory of Internet of Things for Smart City and the Department of Electrical and Computer Engineering, University of Macau, Macao 999078, China (e-mail: binggui.zhou@connect.um.edu.mo). Xi Yang is with the Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China (email: xyang@cee.ecnu.edu.cn). Shaodan Ma is with the State Key Laboratory of Internet of Things for Smart City and the Department of Electrical and Computer Engineering, University of Macau, Macao 999078, China (e-mail: shaodanma@um.edu.mo). Feifei Gao is with the Department of Automation, Tsinghua University, Beijing 100084, China (e-mail: feifeigao@ieee.org). Guanghua Yang is with the School of Intelligent Systems Science and Engineering, Jinan University, Zhuhai 519070, China (e-mail: ghyang@jnu.edu.cn). Received day month year; Accepted ... ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT In time division duplexing (TDD) millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, the downlink channel state information (CSI) can be attained through uplink channel estimation thanks to the uplink-downlink channel reciprocity. However, the channel aging issue is significant under high-mobility scenarios and thus necessitates frequent uplink channel estimation. In addition, large amounts of antennas and subcarriers lead to high-dimensional CSI matrices, aggravating the pilot training overhead. To systematically reduce the pilot overhead, a spatial, frequency, and temporal domain (3D) channel extrapolation framework is proposed in this paper. Considering the marginal effects of pilots in the spatial and frequency domains and the effectiveness of traditional knowledge-driven channel estimation methods, we first propose a knowledge-and-data driven spatial-frequency channel extrapolation network (KDD-SFCEN) for uplink channel estimation by exploiting the least square estimator for coarse channel estimation and joint spatial-frequency channel extrapolation to reduce the spatial-frequency domain pilot overhead. Particularly, we propose the attention-based sub-element extrapolation module and the progressive extrapolation architecture to improve the accuracy of joint spatial-frequency channel extrapolation. Then, resorting to the uplink-downlink channel reciprocity and temporal domain dependencies of downlink channels, a temporal uplink-downlink channel extrapolation network (TUDCEN) is proposed for slot-level channel extrapolation, aiming to enlarge the pilot signal period and thus reduce the temporal domain pilot overhead under high-mobility scenarios. Specifically, we propose the spatial-frequency sampling embedding module to reduce the representation dimension and consequent computational complexity, and we propose to exploit the autoregressive generative Transformer for generating downlink channels autoregressively thanks to the powerful capability of generative artificial intelligence. Numerical results demonstrate the superiority of the proposed framework in significantly reducing the pilot training overhead by more than 16 times and improving the system's spectral efficiency under high-mobility scenarios. Channel Extrapolation, High-Mobility Scenarios, Millimeter Wave, Uplink-Downlink Channel Reciprocity, Massive MIMO § INTRODUCTION MILLIMETER wave (mmWave) massive multi-input multi-output (MIMO) has been recognized as a pivotal technology for the fifth generation (5G) wireless communication systems and beyond <cit.>. mmWave massive MIMO is anticipated to provide spatial degrees of freedom, diversity or multiplexing gain, and array gain, thereby improving the spectral and energy efficiencies of wireless communication systems. To reap the benefits of massive MIMO, accurate channel state information (CSI) should be attained whether operating in the time division duplexing (TDD) or frequency division duplexing (FDD) modes<cit.>. In FDD massive MIMO systems, the base station (BS) needs to obtain the downlink CSI through downlink channel estimation and CSI feedback from the user equipment (UE) due to the lack of uplink-downlink channel reciprocity. While in TDD massive MIMO systems, thanks to the uplink-downlink channel reciprocity, the BS can obtain the downlink CSI via uplink channel estimation and derive the downlink CSI from the estimated uplink CSI. The uplink-downlink channel reciprocity in the TDD massive MIMO systems holds within the channel coherence time. However, in high-mobility scenarios, the channel is fast time-varying due to the UE movement, resulting in a short channel coherence time and the consequent channel aging issue, i.e., the channel varies between when it is acquired at the BS and when it is used for downlink precoding<cit.>. In addition, compared with sub-6 GHz bands, mmWave bands are more vulnerable to the UE movement since higher frequency bands generally lead to a shorter channel coherence time<cit.>, deteriorating the channel aging issue in mmWave massive MIMO systems under high-mobility scenarios. To avoid significant performance degradation caused by channel aging, frequent channel estimation needs to be conducted such that the downlink precoding can be conducted based on up-to-date CSI, leading to huge temporal domain pilot training overhead. Besides, due to the large number of antennas at the BS and large amounts of subcarriers in orthogonal frequency-division multiplexing (OFDM) systems, the spatial and frequency domain pilot training overhead is also huge and unacceptable. In addition, to reduce the hardware cost and power consumption, hybrid beamforming structure is usually adopted and only a small number of radio frequency (RF) chains are deployed at the BS, especially for a millimeter wave (mmWave) massive MIMO system. To obtain the uplink CSI at all receiving antennas at the BS, the BS has to switch the antennas connected to these RF chains several times during the uplink channel estimation, leading to substantial time and power consumption. Therefore, it is of great significance to minimize the pilot training overhead in the spatial, frequency, and temporal domains while estimating the CSI accurately. To circumvent huge frequency domain pilot training overhead, frequency domain channel extrapolation has been widely investigated in related works. In <cit.>, a linear interpolation least square (LS) channel estimation method was proposed for channel estimation with pilots in partial subcarriers. In <cit.>, the performance of various interpolation methods for mmWave MIMO-OFDM systems, including spline interpolation, discrete Fourier transform (DFT) -based interpolation, etc., was investigated. Recently, with the advances in deep learning, many deep learning-based frequency domain channel extrapolation methods were proposed to further reduce pilot training overhead and improve channel estimation accuracy from the frequency domain perspective. In <cit.>, the super-resolution convolutional neural network (SRCNN) was presented to obtain full channel responses with the channel responses at pilot positions via deep image processing techniques. By jointly learning the spatial-temporal domain features of massive MIMO channels with a temporal attention module and a spatial attention module, a dual-attention-based channel estimation network (DACEN) was proposed to realize accurate channel estimation via low-density pilots in the frequency domain<cit.>. The development of deep learning has also led to increasing attention in antenna (spatial) domain channel extrapolation in recent years, aiming at reducing the huge spatial domain pilot training overhead and prohibitive time and power consumption. For example, two fully connected neural networks (FCNNs) were proposed to use the CSI of a subset of antennas to extrapolate the CSI of other antennas<cit.>. In addition to spatial and frequency domain channel extrapolation, channel prediction (i.e., temporal domain channel extrapolation) is also a convincing method for alleviating the huge pilot training overhead in high-mobility scenarios. Linear extrapolation methods <cit.> and statistical prediction models, e.g., autoregressive (AR) models <cit.>, were proposed and have demonstrated that channel prediction is promising to mitigate the impacts of channel aging. The advancements of DNNs further improve the channel prediction performance and thus would further reduce the channel estimation frequency in high-mobility scenarios. In <cit.>, a novel long short-term memory (LSTM) based channel predictor was proposed to learn channel variations and thereby reduce the pilot training overhead for channel estimation. An attention-based channel predictor was proposed in <cit.> to achieve frame-level channel estimates for mobile scenarios. Nonetheless, existing channel prediction methods are unable to deal with massive MIMO-OFDM systems with substantial antennas and subcarriers due to the high-dimensional CSI matrices and the consequent tremendous computational complexity. In addition, since the channel aging issue is critical in high-mobility scenarios, frame-level channel prediction methods are unable to achieve high spectral efficiency due to the varying channels, posing demands on slot-level channel extrapolation. However, existing channel prediction methods predict future channels at the same time granularity as the collected historical channels. This indicates that exploiting these methods for slot-level channel extrapolation comes at the cost of sacrificing a large amount of time-frequency resources for slot-level historical channel estimation, making them infeasible in practice. To overcome the limitations of these existing works and to systematically reduce the pilot overhead, a spatial, frequency, and temporal domain (3D) channel extrapolation framework is proposed in this paper to reduce the pilot training overhead from these three domains respectively. First, it can be observed that the number of pilots in one certain domain shows the marginal effect, i.e., as the number of pilots in this domain increases, the improvement in channel estimation accuracy gradually decreases.[This observation is further validated with our simulation results in Section <ref>.] Due to the capability of deep neural networks (DNNs) in extracting the spatial and frequency domain characteristics of massive MIMO channels, it is expected that exploiting DNNs for joint spatial and frequency domain channel extrapolation will further reduce the pilot training overhead. However, although some works have pointed out that jointly learning spatial-frequency domain features is beneficial to channel estimation<cit.>, spatial-frequency channel extrapolation has not yet been well investigated to reduce the pilot training overhead for mmWave massive MIMO-OFDM systems. Therefore, considering the marginal effects of pilots in the spatial and frequency domains and the effectiveness of traditional knowledge-driven channel estimation methods, we first propose a knowledge-and-data driven spatial-frequency channel extrapolation network (KDD-SFCEN) for uplink channel estimation by exploiting the least square estimator for coarse channel estimation and joint spatial-frequency channel extrapolation to reduce the spatial-frequency domain pilot overhead. Then, resorting to the uplink-downlink channel reciprocity and temporal domain dependencies of downlink channels, a temporal uplink-downlink channel extrapolation network (TUDCEN) is proposed for slot-level channel extrapolation, aiming to enlarge the pilot signal period and thus reduce the temporal domain pilot overhead. To the best of our knowledge, none of the existing works have investigated such a systematic framework to reduce the pilot training overhead and improve the spectral efficiency of mmWave massive MIMO-OFDM systems under high-mobility scenarios. The major contributions of this paper are summarized as follows: * We propose the KDD-SFCEN to reduce the spatial-frequency domain pilot overhead effectively via joint spatial-frequency channel extrapolation. In addition, it is worth emphasizing that the substantial time and power consumption due to antenna switching can be avoided through spatial extrapolation. The proposed KDD-SFCEN consists of a knowledge-driven coarse channel estimator to provide coarse channel estimates and accelerate the training of the proposed network. The KDD-SFCEN also comprises a spatial-frequency channel extrapolator encompassing the attention-based sub-element extrapolation module and the progressive extrapolation architecture. The proposed attention-based sub-element extrapolation module mimics the sub-pixel imaging technology and is demonstrated to be highly effective in learning spatial-frequency channel characteristics for joint spatial and frequency extrapolation. Moreover, the progressive extrapolation architecture is proposed to progressively extrapolate the uplink CSI, thereby further improving the performance of joint spatial-frequency channel extrapolation with few pilots in the spatial and frequency domains. * We propose the TUDCEN for accurate slot-level channel extrapolation given the estimated uplink CSI at the first slot. Through the proposed TUDCEN, channel estimation can be conducted less frequently and more slots can be configured for data transmission under high-mobility scenarios, further reducing the pilot training overhead from the temporal domain perspective and improving the system's spectral efficiency. The TUDCEN is composed of an uplink-downlink channel calibration network (UDCCN) and a downlink channel extrapolation network (DCEN). The UDCCN calibrates the estimated uplink channel to the downlink channel at the first downlink slot to compensate for the hardware asymmetry in transceivers. The proposed DCEN achieves slot-level channel extrapolation via spatial-frequency sampling embedding and autoregressive generation with the generative Transformer<cit.>. Spatial-frequency sampling embedding is proposed to reduce the tremendous computational complexity due to high-dimensional CSI matrices. Specifically, the spatial-frequency sampling embedding layer samples the spatial and frequency domain representation of downlink channels with a spatial sampling factor and a frequency sampling factor to reduce the representation dimension by resorting to the spatial and frequency correlations of downlink channels. Meanwhile, the sampled antenna groups and subcarrier groups are combined to facilitate model training. We exploit the generative Transformer for generating downlink channels autoregressively thanks to the powerful capability of generative artificial intelligence (AI), which is able to conduct slot-level downlink channel extrapolation given only the first downlink channel, thereby avoiding heavy slot-level historical downlink channel estimation. * We use the sounding reference signal (SRS) defined by the 3rd generation partnership project (3GPP) 5G technical specification <cit.> as the pilot signal for uplink pilot training. The frame structures and system settings in framework design and numerical simulations also follow the 3GPP 5G technical specification. Numerical results demonstrate the superiority of the proposed framework in significantly reducing the spatial-frequency domain pilot overhead by more than 4 times via spatial-frequency channel extrapolation. In addition, via enlarging the pilot signal period with slot-level channel extrapolation, the proposed framework further reduces the temporal domain pilot overhead by 4 times and significantly improves the mmWave massive MIMO system's spectral efficiency under high-mobility scenarios. The remainder of this paper is organized as follows. In Section <ref>, we introduce the system model and formulate the spatial, frequency, and temporal channel extrapolation problem. In Section <ref>, we propose the KDD-SFCEN for uplink channel estimation to reduce the spatial-frequency domain pilot training overhead. In Section <ref>, we propose the TUDCEN for slot-level channel extrapolation to enlarge the pilot signal period and thus reduce the temporal domain pilot overhead. In Section <ref>, simulation results are presented to demonstrate the superiority of the proposed spatial, frequency, and temporal channel extrapolation framework. Finally, we conclude this work in Section <ref>. Notation: Underlined bold uppercase letter 𝐀, bold uppercase letter 𝐀, and bold lowercase letter 𝐚 represent a tensor, a matrix, and a vector, respectively. Calligraphy uppercase letter 𝒜 represents a set. 𝐀{s,t} denotes the representation of 𝐀 at the t-th slot of the s-th sub-frame. 𝐀_:,n and 𝐀_m,n denote the n-th column and the element at the m-th row and n-th column of the matrix 𝐀, respectively. (·)^T, (·)^H, and (·)^-1 denote the transpose, conjugate-transpose, and inverse of a matrix, respectively. ⌈·⌉, ⊙, 𝔼{·}, and ·_2 denote the ceiling function, Hadamard product, expectation, and L2 norm, respectively. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ System Model As shown in Fig. <ref> (a), we consider a mmWave massive MIMO system working in the TDD mode, where a single BS equipped with N_T ≫ 1 antennas and N_RF≪ N_T RF chains serves a single user equipped with N_R antennas. The BS is realized with a hybrid precoding architecture, as Fig. <ref> (b) shows. The system operates with the OFDM modulation with a total of N_c ≫ 1 subcarriers. By leveraging the uplink-downlink channel reciprocity in the TDD system, the downlink channel can be acquired via uplink channel estimation to avoid downlink channel estimation and feedback at the UE side, thereby improving the system's spectral efficiency. A simplified downlink CSI acquisition and data transmission process based on uplink pilot training is also shown in Fig. <ref> (a). The process consists of three phases, i.e., uplink pilot transmission (phase 1), BS signal processing (phase 2), and downlink data transmission (phase 3). Specifically, the uplink pilot signal is first transmitted by the UE to the BS for uplink channel estimation (phase 1). Then the BS estimates the uplink channel given the received pilot signal, derives the downlink channel, and conducts downlink precoding (phase 2). After that, the BS transmits data symbols to the UE based on the downlink precoder and other transmission parameters (phase 3). Due to the limited number of RF chains, only the uplink CSI at N_RF antennas connected to the N_RF RF chains can be obtained in each uplink pilot signaling process. Therefore, to obtain the uplink CSI at all N_T antennas, the BS has to switch the antennas connected to these N_RF RF chains for N_T/N_RF times, leading to huge time and power consumption. Specifically, for the i-th subcarrier and the k-th uplink pilot signaling process, denote the uplink CSI at N_RF antennas, the transmitted diagonal pilot signal, the received pilot signal, and the uplink noise as 𝐇_i^(k)∈ℂ^N_RF× N_R, 𝐒^p_i^(k)∈ℂ^N_R × N_R, 𝐘^p_i^(k)∈ℂ^N_RF× N_R, and 𝐍^p_i^(k)∈ℂ^N_RF× N_R, respectively. The set of antennas connected to RF chains in the k-th uplink pilot signaling process is denoted as 𝒜_RF^(k), and 𝐇_i^(k) corresponds to the uplink CSI at antennas in 𝒜_RF^(k). Then, 𝐘^p_i^(k) can be represented as 𝐘^p_i^(k) = 𝐇_i^(k)𝐒^p_i^(k) + 𝐍^p_i^(k), where 𝐇_i^(k) can then be estimated by a channel estimator f_uce given 𝐘^p_i^(k) and 𝐒^p_i^(k) as 𝐇̂_i^(k) = f_uce(𝐘^p_i^(k), 𝐒^p_i^(k)), where 𝐇̂_i^(k)∈ℂ^N_RF× N_R is the estimate of 𝐇_i^(k). By repeating the uplink pilot signaling process for N_T/N_RF times, the uplink channel at the i-th subcarrier, i.e., 𝐇_i ∈ℂ^N_T × N_R, can be obtained as 𝐇̂_i = [(𝐇̂_i^(1))^T, …, (𝐇̂_i^(k))^T, …, (𝐇̂_i^(N_T/N_RF))^T]^T, where 𝐇̂_i is the estimate of 𝐇_i, and 𝒜 = 𝒜_RF^(1)∪…∪𝒜_RF^(k)∪…∪𝒜_RF^(N_T/N_RF), where 𝒜 is the set containing all BS antennas. Then, due to the hardware asymmetry in transceivers<cit.>, the downlink channel at the i-th subcarrier, i.e., 𝐇^d_i ∈ℂ^N_R × N_T, has to be derived from 𝐇̂_i based on the uplink-downlink channel reciprocity and via channel calibration as 𝐇̂^d_i = f_c(𝐇̂_i), where 𝐇̂^d_i ∈ℂ^N_R × N_T is the estimate of 𝐇^d_i, and f_c(·) denotes the uplink-downlink channel calibration function. Given 𝐇^d_i, the downlink precoding matrix 𝐅_i ∈ℂ^N_T × N_s at the i-th subcarrier with N_s being the number of data streams and N_s ≤ N_R can be designed by the singular value decomposition (SVD)-based precoding algorithm <cit.>. Specifically, SVD for 𝐇^d_i can be expressed as 𝐇^d_i = 𝐔_i Σ_i 𝐕^H_i, where 𝐔_i ∈ℂ^N_R × N_R, Σ_i ∈ℂ^N_R × N_T, and 𝐕^H_i ∈ℂ^N_T × N_T are the left singular vectors matrix, the diagonal matrix of singular values, and the right singular vectors matrix, respectively. Note that 𝐅_i = [𝐟_i^(1), 𝐟_i^(2), …, 𝐟_i^(N_s)], where 𝐟_i^(j)∈ℂ^N_T × 1, j = 1,2, …, N_s are the first N_s column vectors of 𝐕. Then, in the downlink transmission, the received signal 𝐲_i ∈ℂ^N_R × 1 at the i-th subcarrier can be represented as 𝐲_i = 𝐇^d_i 𝐅_i 𝐬_i + 𝐧_i, where 𝐬_i ∈ℂ^N_s × 1 and 𝐧_i ∈ℂ^N_s × 1 are the transmitted signal and the downlink noise at the i-th subcarrier, respectively. §.§ Problem Formulation To support the measurement of all transmit ports and subcarriers, the length of the pilot signal is N_R × N_c, Therefore, the pilot training overhead is (N_RF×N_T/N_RF) × N_R × N_c = N_T × N_R × N_c. Considering a large number of antennas and subcarriers in the system, the pilot training overhead is extremely high and significantly challenging to the system's spectral efficiency. In addition, the time and power consumption due to the switching of antennas connected to RF chains is also unaffordable. Moreover, the pilot signal should be transmitted for uplink channel estimation every channel coherence time T_c to avoid significant channel aging. For simplicity, we call this scheme the general pilot training scheme hereafter. We denote the pilot signal period as T_p and for the general pilot training scheme we have T_p≤ T_c. Note that as specified in the 3GPP 5G technical specification <cit.>, the frame structure in the 5G new radio (NR) is shown in Fig. <ref> (c). Each frame spans 10 ms and is divided into two half-frames. Each half-frame spans 5 ms and contains five sub-frames, with each sub-frame spanning 1 ms. The number of slots in one sub-frame, i.e., N_slot, is determined by the numerology μ, and N_slot = 2^μ. Here we assume μ=3 which indicates N_slot = 8 in one sub-frame and each slot spans 0.125 ms. It should be emphasized that under scenarios with high carrier frequency and high UE mobility, the channel coherence time could be very short. For instance, when operating at a carrier frequency of 28 GHz and with the UE velocity of 60 km/h, the channel coherence time is approximately 0.32 ms<cit.>. With the general pilot training scheme, the TDD slot pattern can be illustrated by Fig. <ref> (d), where `S' stands for a special slot configured for uplink channel estimation and `D' stands for a downlink slot for downlink data transmission.[Note that for ease of elaboration, we do not consider uplink data transmission slots in the sub-frame. Our method can be easily extended to sub-frames with uplink data transmission slots.] The pilot signal has to be transmitted every two slots (i.e., T_p=0.25 ms) to mitigate channel aging with T_c=0.32 ms, further degrading the system's spectral efficiency.[Note that in Fig. <ref> (d), (e), and (f), we assume the pilot symbol is configured at the last symbol index of each special slot.] Defining the single-slot pilot training overhead C_sl as the pilot training overhead for one-time uplink channel estimation (i.e., the spatial-frequency domain pilot overhead), the overall pilot training overhead C_o for an interval of length T is defined as C_o ≜T/T_p× C_sl. From the previous discussions, the overall pilot training overhead C_o is huge, since the single-slot pilot training overhead C_sl is large due to the high-dimensional CSI matrices and the pilot signal period T_p is short in high-mobility scenarios. To circumvent such huge overall pilot training overhead, we consider spatial, frequency, and temporal channel extrapolation to systematically reduce the pilot training overhead from two perspectives: 1) reducing the single-slot pilot training overhead C_sl (i.e., the spatial-frequency domain pilot overhead) via spatial-frequency channel extrapolation; 2) enlarging the pilot signal period T_p via slot-level channel extrapolation and thereby reducing the temporal domain pilot overhead. In addition, it is worth noting that through spatial extrapolation, the substantial time and power consumption due to antenna switching can be avoided. And through the slot-level channel extrapolation, more slots can be configured for data transmission to improve the system's spectral efficiency. The overall framework of the proposed method is shown in Fig. <ref>. On the one hand, considering the marginal effects of pilots in the spatial and frequency domains, spatial channel extrapolation and frequency channel extrapolation are combined to reduce the single-slot pilot training overhead C_sl and circumvent the time and power consumption during single-slot uplink channel estimation (SSUCE). Specifically, the N_RF RF chains are connected to N_RF uniformly sampled antennas out of all N_T antennas without antenna switching in the whole uplink channel estimation phase, and the pilot symbols are configured only on partial subcarriers. Ultimately, only the partial uplink CSI can be obtained directly from the received pilot signal, and then the full uplink CSI is extrapolated from the partial uplink CSI. Denote the transmitted diagonal pilot signal matrix and the corresponding received pilot signal matrix by 𝐒^p ∈ℂ^(N_R × N_c^') × (N_R × N_c^') and 𝐘^p ∈ℂ^N_RF× (N_R × N_c^'), respectively, where N_c^' is the number of subcarriers configured with pilot symbols. In addition, by concatenating 𝐇_i, i=1, 2, …, N_c, the uplink channel at all subcarriers, denoted by 𝐇∈ℂ^N_T × (N_R × N_c), can be given by: 𝐇 = [𝐇_1, 𝐇_2, …, 𝐇_N_c]. Further denoting 𝐇{s,t}∈ℂ^N_T × (N_R × N_c), 𝐒^p{s,t}∈ℂ^(N_R × N_c^') × (N_R × N_c^'), and 𝐘^p{s,t}∈ℂ^N_RF× (N_R × N_c^') as the uplink channel, the transmitted diagonal pilot signal matrix, and the received pilot signal matrix at the t-th slot of the s-th sub-frame, respectively, the uplink channel at the first slot of the s-th sub-frame can be estimated based on spatial-frequency channel extrapolation as 𝐇̂{s,0} = F_ssuce(𝐘^p{s,0}, 𝐒^p{s,0}), where 𝐇̂{s,0} is the estimate of 𝐇{s,0}, and F_ssuce is a DNN-based SSUCE model. On the other hand, we propose to enlarge the pilot signal period T_p by leveraging the uplink-downlink channel reciprocity and temporal dependencies for slot-level channel extrapolation (SLCE). In TDD systems, it is commonly assumed that the uplink and downlink channels are reciprocal during the channel coherence time, such that the downlink channel can be derived from the estimated uplink channel. Note that due to high UE mobility and short channel coherence time, the uplink-downlink channel reciprocity only holds for the special slot of a sub-frame and a few downlink data transmission slots in the same sub-frame. And due to the hardware asymmetry in transceivers, uplink-downlink channel calibration is required to derive the downlink channel accurately from the estimated uplink channel. Moreover, like many real-world phenomena, wireless channels exhibit temporal dependencies over time (e.g., explicit patterns like trends, etc). Capturing these temporal dependencies from downlink channels allows us to conduct accurate temporal extrapolation for future slots. However, the temporal dependencies for high-dimensional channels are challenging to capture. Fortunately, with DNNs, it is promising to learn the uplink-downlink channel reciprocity and temporal dependencies for SLCE. Then, with SLCE, the pilot signal can be transmitted with a pilot signal period T_p ≫ T_c. Taking transmitting the pilot signal only once per sub-frame and at the first slot of each sub-frame as an example, the TDD slot pattern with the SLCE-aided pilot training scheme is provided in Fig. <ref> (e).[Note that for ease of elaboration, in the following context, we simply use this TDD slot pattern and suppose the pilot signal only once per sub-frame. The pilot signal can be set flexibly in practice.] In this example, the pilot signal period is T_p = 1 ms, which is much larger than the channel coherence time T_c = 0.32 ms. To tackle the channel aging issue caused by high carrier frequency and high UE mobility, SLCE is thus anticipated to predict the downlink channels within the pilot signal period accurately for downlink data transmission. Specifically, based on the estimated uplink channel at the special slot, the downlink channel at the first downlink data transmission slot can be initially acquired by exploiting the uplink-downlink channel reciprocity and via channel calibration. Then, by learning the temporal dependencies among time-varying downlink channels, accurate estimates of the downlink channels at all other downlink data transmission slots can be achieved. Therefore, denoting 𝐇^d{s,t}∈ℂ^N_R × (N_T × N_c) as the downlink channel at the t-th slot of the s-th sub-frame, the SLCE problem can be formulated as 𝐇̂^d{s,1}, …, 𝐇̂^d{s,N_slot-1} = F_slce(𝐇̂{s,0}), where 𝐇̂^d{s,t} is the estimate of 𝐇^d{s,t}, F_slce is a DNN-based SLCE model. §.§ Sounding Reference Signal Pattern Following the 3GPP technical specification<cit.>, the sounding reference signal (SRS) is used as the pilot signal for uplink pilot training. The SRS is configured according to the transmission comb number N_TC in the frequency domain and with frequency domain code division multiplexing (fd-CDM) for multiple antenna ports. For example, assuming the transmission comb number N_TC = 2 and the UE antenna number N_R = 4, the uplink SRS pattern in one resource block (RB) can be illustrated by Fig. <ref> (f), where the SRS is configured with every other resource element (RE) and has an fd-CDM length of 4 to support 4 UE antennas. Based on N_TC and the number of RBs, i.e., N_RB, the received pilot signal matrix 𝐘^p is with the size of N_RF× (N_R × N_c^') = N_RF× (N_R × N_RB×12/N_TC), where 12 is the number of consecutive subcarriers that form a RB as defined in <cit.>. Therefore, compared with the size of the uplink channel 𝐇, i.e., N_T × (N_R × N_c) = N_T × (N_R × N_RB× 12), we define the spatial compression ratio as R_s = N_T/N_RF and the frequency compression ratio as R_f = N_TC. In the following two sections, to reduce the overall pilot training overhead C_o from the aforementioned two perspectives, i.e., reducing the single-slot pilot training overhead C_sl and enlarging the pilot signal period T_p, we propose the KDD-SFCEN for single-slot uplink channel estimation and the TUDCEN for slot-level channel extrapolation respectively. § SPATIAL-FREQUENCY CHANNEL EXTRAPOLATION BASED SINGLE-SLOT UPLINK CHANNEL ESTIMATION Considering the marginal effects of pilots in the spatial and frequency domains, we propose to conduct joint spatial-frequency channel extrapolation to reduce the pilot training overhead of single-slot uplink channel estimation. Traditional channel estimation methods based on domain knowledge have been fully validated for decades that they can provide feasible channel estimates in time-invariant channels, which can be exploited as initial estimates for deep learning-based channel estimation methods. In addition, to estimate the uplink channel accurately with only a few pilots in both the spatial and frequency domains, the spatial and frequency characteristics of massive MIMO channels must be fully exploited <cit.>. Therefore, in this section, we propose the KDD-SFCEN for single-slot uplink channel estimation in TDD massive MIMO systems, which benefits from both domain knowledge via the traditional knowledge-driven channel estimation and data via joint spatial-frequency channel extrapolation. As shown in Fig. <ref>, the proposed network consists of two components, i.e., a knowledge-driven coarse channel estimator and a spatial-frequency channel extrapolator, for coarse channel estimation and spatial-frequency channel extrapolation, respectively. §.§ Coarse Channel Estimation Since traditional LS estimation has been demonstrated to be a simple but effective method to achieve coarse channel estimates, we exploit this domain knowledge and use the traditional LS estimator as the knowledge-driven coarse channel estimator to accelerate the training of our proposed uplink channel estimation network. Specifically, the received pilot signal 𝐘^p{s,0} and the corresponding transmitted diagonal pilot signal 𝐒^p{s,0} is first processed by an LS estimator as 𝐇̂^LS = 𝐘^p{s,0}{𝐒^p{s,0}}^-1, where 𝐇̂^LS∈ℂ^N_RF× (N_R × N_c^') is the coarse channel estimate achieved by the knowledge-driven coarse channel estimator. Then, the coarse channel estimate is further extrapolated and refined by the spatial and frequency channel extrapolator to achieve a highly accurate channel estimate. §.§ Spatial and Frequency Channel Extrapolation To estimate the uplink channel accurately with a few pilots, we propose the attention-based sub-element extrapolation module (ASEEM) to fully exploit the spatial and frequency characteristics of massive MIMO channels. In addition, directly extrapolating partial CSI to full CSI is significantly challenging, especially when the spatial and frequency compression ratios are large. Therefore, we propose to conduct spatial and frequency channel extrapolation progressively. As Fig. <ref> shows, the proposed spatial-frequency channel extrapolator consists of a progressive spatial extrapolation block and a progressive frequency extrapolation block for spatial and frequency extrapolation, respectively. The progressive spatial extrapolation block and the progressive frequency extrapolation block consist of N_SE spatial ASEEMs and N_FE frequency ASEEMs, respectively. These spatial/frequency ASEEMs progressively extrapolate partial spatial/frequency CSI to full spatial/frequency CSI. §.§.§ ASEEMs Partial CSI in either the spatial or frequency domain can be regarded as downsampled information from full CSI in the specific domain, which is similar to an image taken from the real world. Generally, a pixel is the smallest addressable element of an image. However, a pixel in an image might correspond to many small objects in the real world. Motivated by this, sub-pixel imaging technologies emerge by dividing a single pixel into smaller sub-pixels to improve the resolution of an image. Similarly, learning to divide each element in a partial CSI into smaller sub-elements will facilitate CSI extrapolation. This motivates us to propose the ASEEM, which is composed of one positional encoding layer, one attention-based sub-element generation layer, and one sub-element shuffle layer. To better illustrate the proposed ASSEM, the architecture of the ASEEM is provided in Fig. <ref>. Denote the input to the ASEEM as 𝐗∈ℝ^N_I × d_R, where N_I is the number of initial input elements and d_R is the representation dimension of each input element. The positional encoding layer generates a positional encoding (PE) 𝐏∈ℝ^N_I × d_R for all input elements of 𝐗 as 𝐏 = Embed(𝐩^idx;Θ^P), 𝐩^idx = [1,2,…,N_I], where 𝐩^idx∈ℝ^N_I × 1 is the positional index vector to all input elements, Embed(·) is the embedding function parameterized by Θ^P ∈ℝ^N_I × d_R that converts the positional index vector 𝐩^idx into the learnable positional encoding 𝐏. Then, the input to the attention-based sub-element generation layer, denoted by 𝐗̃∈ℝ^N_I × d_R can be obtained by 𝐗̃ = 𝐗 + 𝐏. The attention-based sub-element generation layer further consists of one multi-head self-attention (MHSA) sub-layer, one sub-element generation (SEG) sub-layer, and residual connection (RC)<cit.> and layer normalization (LN)<cit.> operations around each of the two sub-layers. By splitting the representation of the input elements into several representation sub-spaces, the MHSA mechanism allows the neural network to capture diverse aspects of the input elements simultaneously, leading to better representation learning compared to the single-head self-attention (SHSA) mechanism. The SEG sub-layer ensures the non-linear representation capability of the attention-based sub-element generation layer and generates sub-elements for each input element, and the RC and LN operations stabilize the neural network training. Specifically, 𝐗̃ is first projected to query, key, and value matrices, i.e., 𝐐∈ℝ^N_I × d_R, 𝐊∈ℝ^N_I × d_R, and 𝐕∈ℝ^N_I × d_R, and then these three matrices are split into N_h sub-matrices, respectively. For the i-th (i=1,2,…,N_h) sub-matrix set, i.e., {𝐐_i ∈ℝ^N_I × d_Q, 𝐊_i ∈ℝ^N_I × d_K, 𝐕_i ∈ℝ^N_I × d_V}, with d_Q = d_K = d_V = d_R/N_h, a self-attention score matrix 𝐙_i ∈𝐑^N_I × N_I is obtained via the scaled dot-product attention (SDPA) <cit.> as 𝐙_i = Drop(exp(𝐐_i 𝐊_i^T/√(d_K))/∑_j=1^N_Iexp(𝐐_i 𝐊_i^T/√(d_K))_:,j;p_1), where Drop(·;p) denotes the dropout function with a dropout probability p <cit.>, and the attention-weighted output of the i-th head, denoted by 𝐎_i ∈𝐑^N_I × d_V, is obtained as 𝐎_i = 𝐙_i 𝐕_i. Finally, the attention-weighted outputs of all N_h heads are concatenated and projected to form the output of the MHSA sub-layer, i.e., 𝐗^MHSA∈ℝ^N_I × d_R, as 𝐗^MHSA = [𝐎_1, 𝐎_2, …, 𝐎_N_h]𝐖^O, where 𝐖^O∈ℝ^d_R × d_R is a learnable weight matrix. The outputs of the RC and LN operations around the MHSA sub-layer, denoted by 𝐗^RC1∈ℝ^N_I × d_R and 𝐗^LN1∈ℝ^N_I × d_R, respectively, can be formulated by 𝐗^RC1 = 𝐗^MHSA + 𝐗̃, and 𝐗^LN1 = 𝐗^RC1-μ_𝐗^RC1/σ_𝐗^RC1⊙𝐠^LN1 + 𝐛^LN1, μ_𝐗^RC1 = 1/d_R∑_i=1^d_R𝐗^RC1_:,i, σ_𝐗^RC1 = √(1/d_R∑_i=1^d_R (𝐗^RC1_:,i - μ_𝐗^RC1)^2), where μ_𝐗^RC1∈ℝ^N_I × 1 and σ_𝐗^RC1∈ℝ^N_I × 1 denote the mean and standard deviation of 𝐗^RC1, 𝐠^LN1∈ℝ^1 × d_R and 𝐛^LN1∈ℝ^1 × d_R are learnable affine transformation parameters, and ⊙ is the Hadamard product. The output of the SEG sub-layer, i.e., 𝐗^G∈ℝ^N_I × (r × d_R), can be formulated by 𝐗^G = (Drop(ReLU(𝐗^RC1𝐖^G1+𝐛^G1);p_2))𝐖^G2+𝐛^G2, where p_2 is a dropout probability, ReLU(·) is the rectified linear unit (ReLU) activation function, 𝐖^G1∈ℝ^d_R × d_G and 𝐖^G2∈ℝ^d_G× (r × d_R) are learnable weight matrices where d_G=d_R, and 𝐛^G1∈ℝ^1 × d_G and 𝐛^G2∈ℝ^1 × (r × d_R) are learnable bias vectors. r is the number of sub-elements to be generated, which is also known as the upscale factor. The output of the RC operation around the SEG sub-layer, i.e., 𝐗^RC2∈ℝ^N_I × (r × d_R), can be formulated by 𝐗^RC2 = 𝐗^G + 𝐗^LN1𝐖^RC2, where 𝐖^RC2∈ℝ^d_R × (r × d_R) is a learnable weight matrix. The output of the LN operation around the SEG sub-layer, i.e., 𝐗^LN2∈ℝ^N_I × (r × d_R), can be formulated by 𝐗^LN2 = 𝐗^RC2-μ_𝐗^RC2/σ_𝐗^RC2⊙𝐠^LN2 + 𝐛^LN2, μ_𝐗^RC2 = 1/d_R∑_i=1^d_R𝐗^RC2_:,i, σ_𝐗^RC2 = √(1/d_R∑_i=1^d_R (𝐗^RC2_:,i - μ_𝐗^RC2)^2), where μ_𝐗^RC2∈ℝ^N_I × 1 and σ_𝐗^RC2∈ℝ^N_I × 1 denote the mean and standard deviation of 𝐗^RC2, 𝐠^LN2∈ℝ^1 × (r × d_R) and 𝐛^LN2∈ℝ^1 × (r × d_R) are learnable affine transformation parameters. After the attention-based sub-element generation layer generates r sub-elements for each input element, the sub-element shuffle layer shuffles these sub-elements together to form a new input element matrix as 𝐗^E = [R(𝐗^LN2_:,1:r), R(𝐗^LN2_:,r+1:2r), …, R(𝐗^LN2_:,r × d_R - r + 1:r × d_R)], where 𝐗^E∈ℝ^(r × N_I) × d_R, and R denotes a rearrange operator that transforms the size of 𝐗^LN2_:,i:i+r-1∈ℝ^N_I × r into (r × N_I) × 1. Therefore, through the ASEEM, the original input 𝐗∈ℝ^N_I × d_R can be extrapolated to 𝐗^E∈ℝ^(r × N_I) × d_R effectively via to sub-element generation. §.§.§ Progressive Extrapolation Architecture Since directly extrapolating partial CSI to full CSI is significantly challenging, especially when the spatial and frequency compression ratios are large, we propose to conduct spatial and frequency channel extrapolation progressively with the progressive extrapolation architecture. The spatial extrapolation and frequency extrapolation are organized in a sequential manner, i.e., the N_SE spatial ASEEMs of the progressive spatial extrapolation block first progressively extrapolate partial spatial CSI to full spatial CSI, and then the N_FE frequency ASEEMs of the progressive frequency extrapolation block progressively extrapolate partial frequency CSI to full frequency CSI. During this process, the sizes of the inputs to the i-th spatial ASEEM, i.e., 𝐗^S(i), and the j-th frequency ASEEM, i.e., 𝐗^F(i), progressively increase as 𝐗^S(i) ∈ℝ^(r_s^i-1× N_SI) × d_SR, i = 1,2,…,N_SE, 𝐗^F(j) ∈ℝ^(r_f^j-1× N_FI) × d_FR, j = 1,2,…,N_FE, where r_s, N_SI, and d_SR are the spatial upscale factor, the number of initial spatial input elements, and the spatial representation dimension, respectively; and r_f, N_FI, and d_FR are the frequency upscale factor, the number of initial frequency input elements, and the frequency representation dimension, respectively. And N_SE and N_FE are determined by N_SE = ⌈log_r_f^R_s⌉, N_FE = ⌈log_r_f^R_f⌉, where ⌈·⌉ denotes the ceiling function. Correspondingly, the sizes of the outputs of the i-th spatial ASEEM, i.e., 𝐗^SE(i), and the j-th frequency ASEEM, i.e., 𝐗^FE(i), progressively increase as 𝐗^SE(i) ∈ℝ^(r_s^i× N_SI) × d_SR, i = 1,2,…,N_SE, 𝐗^FE(j) ∈ℝ^(r_f^j× N_FI) × d_FR, j = 1,2,…,N_FE. In addition, the sizes of the spatial positional encoding (SPE) for the i-th spatial ASEEM and the frequency positional encoding (FPE) for the j-th frequency ASEEM progressively increase as 𝐏^S(i) ∈ℝ^(r_s^i-1× N_SI) × d_SR, i = 1,2,…,N_SE, 𝐏^F(j) ∈ℝ^(r_f^j-1× N_FI) × d_FR, j = 1,2,…,N_FE. §.§ Model Training The proposed KDD-SFCEN is trained via the mean squared error (MSE) loss defined as[The inputs to F_ssuce are actually real-valued tensors transformed from 𝐘^p{s,0} and 𝐒^p{s,0}). We use their original complex-valued matrix form in the loss expression for simplicity.] L_1 = 𝔼_N_train(𝐇{s,0} - F_ssuce(𝐘^p{s,0},𝐒^p{s,0})_2^2), where 𝔼_N_train(·) denotes the expectation over N_train training samples, and ·_2 denotes the L2 norm. After obtaining the estimated uplink channel, the slot-level channel extrapolation can then be conducted to enlarge the pilot signal period T_p and further reduce the overall pilot training overhead C_o. § CHANNEL RECIPROCITY AND TEMPORAL DEPENDENCY AIDED SLOT-LEVEL CHANNEL EXTRAPOLATION An accurate initial state is essential to the subsequent extrapolations in the temporal domain. Thanks to the uplink-downlink channel reciprocity in TDD systems, the downlink channel of the first downlink slot of a sub-frame can be accurately derived from the uplink channel of the sub-frame's special slot via uplink-downlink channel calibration. The derived downlink channel provides an accurate initial state for predicting downlink channels at future downlink slots. Recent advances in generative AI, e.g., the ChatGPT<cit.>, have witnessed the superiority of autoregressive generative models, specifically generative Transformers<cit.>, for processing sequential data. For time-varying channels, which are also one kind of sequential data, generative Transformers are also promising on account of their capabilities in learning complicated temporal dependencies and autoregressive modeling. Therefore, in this section, we propose the TUDCEN consisting of the UDCCN and the DCEN to achieve slot-level channel extrapolation. The architecture of the TUDCEN is shown in Fig. <ref>. Specifically, the UDCCN realizes uplink-downlink channel calibration for initialization by exploiting the uplink-downlink channel reciprocity, and the DCEN mainly exploits the proposed spatial-frequency sampling embedding module to reduce the computational complexity and generative Transformers for temporal dependency learning and autoregressive downlink channel extrapolation. §.§ Channel Reciprocity Aided Uplink-Downlink Channel Calibration Although the physical uplink-downlink channels within the channel coherence time are generally reciprocal, uplink-downlink channel calibration is practically necessary to compensate for the hardware asymmetry in transceivers<cit.>. The uplink-downlink channel extrapolation can be formulated by 𝐇̂^d{s,1} = F_udcc(𝐇̂{s,0}), where F_udcc denotes the UDCCN. To calibrate the downlink channel from the estimated uplink channel, both the spatial and frequency characteristics of the channels should be exploited. Therefore, we propose the two-dimensional (2D) convolution-based calibration network for uplink-downlink channel calibration. The 2D convolution operation can capture spatial and frequency characteristics for accurate calibration. In addition, the ReLU activation function and the RC operation are also applied to improve the network's non-linear representation capability and stabilize the network training, respectively. The input to the UDCCN is obtained by 𝐗^U = f_ℂ→ℝ(𝐇̂{s,0}), where 𝐗^U ∈ℝ^(N_T × N_R) × N_c × 2 is a real-valued tensor form representation of the estimated uplink channel and the 2 in the dimensions comes from the concatenated real and imaginary parts of the original complex-valued matrix, and f_ℂ→ℝ is a function to transform the original complex-valued matrix form representation to the real-valued tensor form representation. The output of the UDCCN, denoted by 𝐗^D ∈ℝ^(N_T × N_R) × N_c × 2, can be formulated by 𝐗̃^D = ReLU(Conv^k × k(𝐗^U);𝐖^Conv) + 𝐗^U, 𝐗^D = 𝐗̃^D 𝐖^D where Conv^k × k is a 2D convolution operation with kernel size being k × k and parameterized by 𝐖^Conv∈ℝ^(2k^2) × d_f, d_f is the feature dimension, 𝐗̃^D ∈ℝ^(N_T × N_R) × N_c × (2+d_f) is the intermediate feature representation, and 𝐖^D ∈ℝ^(2+d_f) × 2 is a learnable weight matrix that projects the intermediate feature representation to the final output 𝐗^D. Then, the calibrated downlink channel can be obtained by 𝐇̂^d{s,1} = f_ℝ→ℂ(𝐗^D), where f_ℝ→ℂ is a function to transform the real-valued tensor form representation to the original complex-valued matrix form representation. §.§ Temporal Dependency Learning and Downlink Channel Extrapolation It is worth mentioning that the downlink channel extrapolation problem is conducted along the time domain and aims to predict channels at future slots, which can be regarded as a time series prediction problem. However, different from general time series prediction problems, historical observations in the investigated downlink channel extrapolation problem (i.e., downlink channels at all downlink slots) can not be obtained unless huge time-frequency resources were spent to conduct channel estimation for all downlink slots. As a result, the performance of widely adopted sequence-to-sequence (Seq2Seq) prediction schemes in related works<cit.> would be significantly degraded due to the lack of available historical channel information. Note that like many real-world phenomena, wireless channels exhibit temporal dependencies over time. Then, based on the chain rule in the probability theory, the joint probability of the concatenated downlink channels can be expressed as P([𝐇̂^d{s,1}, 𝐇̂^d{s,2}, …, 𝐇̂^d{s,N_slot-1}]) = P(𝐇̂^d{s,1}) × P(𝐇̂^d{s,2} | 𝐇̂^d{s,1}) ×…× P(𝐇̂^d{s,N_slot-1} | 𝐇̂^d{s,1}𝐇̂^d{s,2}𝐇̂^d{s,N_slot-2}). Inspired by this, the objective of the downlink channel extrapolation is 𝐇̂^d{s,t} = F_dce([𝐇̂^d{s,1},…,𝐇̂^d{s,t-1}]), for t = 2, 3, …, N_slot - 1, where F_dce denotes the DCEN, such that (<ref>) can be achieved and the downlink channels from the 2nd slot to the (N_slot-1)-th slot can be extrapolated given only 𝐇̂^d{s,1}. It is worth noting that (<ref>) actually refers to the autoregressive generation manner and the DCEN should thus be an autoregressive model. And considering that 𝐇̂^d{s,t}, t=1, 2, …, N_slot - 1 are dependent, capturing these temporal dependencies among them would be beneficial to achieve (<ref>). To this end, we propose the DCEN constituted by (inverse) spatial-frequency patch embedding layers and generative Transformers for temporal dependency learning and autoregressive downlink channel extrapolation. The architecture of the DCEN is also shown in Fig. <ref>. §.§.§ Spatial-Frequency Sampling Embedding Due to the large number of antennas and subcarriers, the downlink channels' spatial and frequency domain representation must be compressed to reduce the computational complexity of downlink channel extrapolation. Therefore, we propose the spatial-frequency sampling embedding layer. The spatial-frequency sampling embedding layer consists of one rearrangement operation, two LN operations, and one linear layer, as Fig. <ref> (a) shows. The rearrangement operation splits the N_T transmit antennas and N_c subcarriers into N_1 transmit antenna groups and N_2 subcarrier groups, respectively. Each transmit antenna group has N_T/N_1 antennas and each subcarrier group has N_c/N_2 subcarriers. Note that since the number of UE antennas, i.e., N_R, is generally small, it is not necessary to further split UE antennas. Specifically, the rearrangement operation rearranges the sample dimension (known as the batch size N_b), N_1, and N_2 together to form a new sample dimension, and consequently reforms a representation dimension of length 2 × N_R ×N_T/N_1×N_c/N_2. Then, the dimension of the reformed representation is normalized by the first LN operation. Following that, the dimension is further projected to an embedding dimension d_emb by the linear layer and normalized by the second LN operation. Straightforwardly, the spatial-frequency sampling embedding layer samples the spatial and frequency domain representation of downlink channels with a spatial sampling factor N_1 and a frequency sampling factor N_2 to reduce the representation dimension by resorting to the spatial and frequency correlations of downlink channels. Meanwhile, the N_1 sampled transmit antenna groups and N_2 sampled subcarrier groups are combined to facilitate model training. It is worth noting that after the generative Transformers, the inverse spatial-frequency sampling embedding layer is applied to transform the sampled representation dimension back to the original spatial and frequency domain representation. In the inverse spatial-frequency sampling embedding layer, all layers and operations are placed and operated in the inverse direction to the spatial-frequency sampling embedding layer, as Fig. <ref> (b) shows. §.§.§ Generative Transformer and Masked MHSA Mechanism The generative Transformer is one kind of autoregressive generative models that was proposed for text generation and is known as one of the most important driven forces for the transformative ChatGPT. The core of the generative Transformer is the masked MHSA mechanism, a pivotal component enabling the effective processing of input sequences. The masked MHSA mechanism allows the model to capture long-term dependencies among elements of input sequences and selectively attend to these elements while preventing it from looking ahead during training by masking future elements. To learn temporal dependencies efficiently for downlink channel extrapolation, we propose the generative Transformer-based downlink channel extrapolation network. The architecture shown in Fig. <ref> (c) depicts that the proposed generative Transformer consists of a temporal positional encoding layer and N_G generative Transformer layers. The temporal positional encoding layer is similar to (<ref>) - (<ref>). Each generative Transformer layer further consists of a masked MHSA sub-layer, a feed-forward (FF) sub-layer, and also RC and LN operations around each of the two sub-layers. Compared with the MHSA sub-layer, a masking operation is introduced before the query, key, and value projections in the masked MHSA sub-layer. The output of the masking operation, denoted by 𝐗̃^M ∈ℝ^N_I^'× d_R^' with N_I^' being the number of input elements and d_R^' being the representation dimension of each input element, can be expressed by 𝐗̃^M = 𝐗̃^' + 𝐌, where 𝐗̃^'∈ℝ^N_I^'× d_R^' is the input matrix to the masked MHSA sub-layer, and 𝐌∈ℝ^N_I^'× d_R^' is a lower-triangular masking matrix whose lower triangular elements are all negative infinity. Note that the masked MHSA sub-layer is with N_h^' attention heads and a dropout probability p_3. The FF sub-layer can be formulated by 𝐗^FF = (Drop(ReLU(𝐗^RC1𝐖^FF1+𝐛^FF1);p_4))𝐖^FF2+𝐛^FF2, where 𝐗^FF∈ℝ^N_I^'× d_R^', p_4 is a dropout probability, 𝐖^FF1∈ℝ^d_R^'× d_FF and 𝐖^FF2∈ℝ^d_FF× d_R^' are learnable weight matrices, and 𝐛^FF1∈ℝ^1 × d_FF and 𝐛^FF2∈ℝ^1 × d_R^' are learnable bias vectors. The two sets of RC and LN operations around the masked MHSA sub-layer and the FF sub-layer are organized and computed similarly to the two sets of RC and LN operations shown in (<ref>) to (<ref>) and (<ref>) to (<ref>), respectively. §.§ Computational Complexity Analysis Here we analyze the computational complexity of the spatial-frequency sampling embedding layer, and compare it with the most common existing spatial-frequency domain representation extractors for high-dimensional inputs: 1) the 2D convolution layer and 2) the fully-connected layer. The time complexity of the spatial-frequency sampling embedding layer can be calculated as 𝒪(SFSE) = 𝒪(2 N_R N_T N_c d_R^'), and the space complexity (i.e., the number of learnable parameters) can be calculated as N_θ(SFSE) = 𝒪(2 N_R N_T/N_1N_c/N_2 d_R^'). The 2D convolution layer is blamed for high time complexity and the fully-connected layer is blamed for high space complexity when dealing with high-dimensional inputs. The time complexity of the 2D convolution layer can be calculated as 𝒪(2D Conv) = 𝒪(2 k^2 N_R N_T N_c d_R^'), where k >= 1 is the convolution kernel size. It is worth emphasizing that to reduce the spatial and frequency dimensions of the input, the stride of the 2D convolution layer should be set larger than 1, or a pooling layer should be applied after the 2D convolution layer. Generally, it requires a deep convolutional network consisting of many 2D convolution layers to gradually reduce the spatial and frequency dimensions of the input. This thus further increases the time complexity of using 2D convolution layers for obtaining the spatial-frequency domain representation for high-dimensional CSI matrices. While for the spatial-frequency sampling embedding layer, one layer is enough to obtain the spatial-frequency domain representation for high-dimensional CSI matrices. Therefore, the spatial-frequency sampling embedding layer is ≫ k^2 times more efficient than the deep convolutional network. The fully-connected layer has the same time complexity as the spatial-frequency sampling embedding layer. However, the space complexity of the fully-connected layer is huge, which can be calculated as N_θ(FC) = 𝒪(2 N_R N_T N_c d_R^'), which is N_1 × N_2 times as the spatial-frequency sampling embedding layer, leading to prohibitive memory usage and a challenging optimization process when training the neural network. §.§ Model Training To train the model efficiently, the sliding window method is applied to generate windowed temporal channel slices. The window length is l_w = N_slot and the sliding stride is l_s = N_slot. Denoting the number of training samples and the number of sub-frames as N_train and N_sf, respectively, the normalized mean squared error (NMSE) loss function for training the UDCCN is defined as L_2 = 𝔼_N_w{𝐇^d{s,1} - F_udcc(𝐇̂{s,0})_2^2/𝐇^d{s,1}_2^2}, where N_w = N_train * N_sf is the number of slices generated from all training samples. The NMSE loss function for training the DCEN is defined as L_3 = 𝔼_N_w{∑_t=2^N_slot-1𝐇^d{s,t} - F_dce(𝐇̂^d{s,1})_2^2/𝐇^d{s,t}_2^2/N_slot-2}. § SIMULATIONS In this section, we first introduce the simulation setup for numerical evaluations. Then, we compare the proposed KDD-SFCEN with traditional and DNN-based extrapolation methods under various spatial and frequency compression ratios, SNRs, and UE velocity settings. We then compare the TUDCEN with DNN-based temporal channel extrapolators. Finally, we present and analyze the overall performance of the proposed framework in terms of the achievable sum-rate of the system. §.§ Simulation Setup §.§.§ Simulation Dataset and Parameters The simulation dataset is constructed using the MATLAB 5G toolbox <cit.>. We consider a mmWave massive MIMO system with one BS and one single user. Uniform linear arrays (ULAs) are employed at the BS and the UE with N_T = 32 and N_R = 4. The system works in the TDD mode and operates with the OFDM modulation. The system comprises 52 RBs, and each RB is formed by 12 subcarriers and 14 OFDM symbols according to the 5G specification <cit.>. In addition, the system follows the 5G frame structure shown in Fig. <ref> (e) with μ=3 and N_slot=8, and note that the SRS period T_p = 1 ms (i.e., once per sub-frame). The SRS is configured at the last symbol index of each special slot. We adopt the clustered delay line (CDL)-B MIMO channel model to generate channel instances<cit.> at various UE mobility settings. 95 channel instances are generated for constructing training and validation samples, and the BS transmits the SRS to the UE via these channel instances at the special slot of each sub-frame in 10 frames (i.e., in total 100 sub-frames / 800 slots) at a signal-to-noise ratio (SNR) level of 5 dB (i.e., γ_SNR=5 dB) and the UE mobility of 60 km/h (i.e., v=60 km/h). 5 channel instances are generated for constructing testing samples, and the BS transmits the SRS to the UE via these channel instances at the special slots of 100 sub-frames at different SNR levels. The SRS pattern is configured based on the spatial compression ratio R_s and the frequency compression level R_f as depicted in Section <ref>. As a result, for a given (R_s, R_f) pair, 9,500 samples are generated for training (9,000 samples) and validation (500 samples); for a given (R_s, R_f, γ_SNR, v) pair, 500 samples are generated for testing. The spatial compression ratios, frequency compression ratios, SNR levels, UE mobility, and other key system parameters are listed in Table <ref>. The hyper-parameter settings of the proposed KDD-SFCEN and the TUDCEN are shown in Table <ref> and Table <ref>, respectively. §.§.§ Evaluation Metrics For uplink channel estimation, uplink-downlink channel calibration, and also downlink channel extrapolation, the NMSE in dB is adopted to evaluate the channel estimation performance, which is defined as NMSE{t} = 10 * log(∑_j=1^N_test∑_s=1^N_sf𝐇^j {s,t} - 𝐇̂^j {s,t}_2^2/𝐇^j {s,t}_2^2/N_test N_sf), where N_test is the number of testing samples. Specifically, for uplink channel estimation, 𝐇 refers to 𝐇 and t=0; for uplink-downlink channel calibration, 𝐇 refers to 𝐇^d and t=1; for downlink channel extrapolation, 𝐇 refers to 𝐇^d, and t=2,3,…,N_slot-1. In addition, the achievable sum-rate in bps/Hz is adopted to evaluate the system performance, which is defined as R{t}= ∑_j=1^N_test∑_s=1^N_sflog _2(|1+𝐇^d {s,t}𝐅{s,t} (𝐇^d {s,t}𝐅{s,t})^H/N_R σ_n^2|), where σ_n^2 is the noise power. §.§ Performance Evaluation of Proposed KDD-SFCEN We first compare the performance of the proposed KDD-SFCEN and traditional and DNN-based extrapolation methods for frequency domain channel extrapolation, including the linear spline interpolation method<cit.>, the DFT-interpolation method<cit.>, and the SRCNN-based method<cit.>. It can be seen from Fig. <ref> that the proposed KDD-SFCEN outperforms all baselines at the same R_f. In addition, the proposed KDD-SFCEN can achieve a better NMSE performance at R_f=16 than traditional baselines at R_f=2, which indicates that the proposed KDD-SFCEN can reduce the pilot training overhead up to 8 times from the frequency domain perspective compared with traditional baselines. Similarly, the proposed KDD-SFCEN can reduce the pilot training overhead up to 4 times from the frequency domain perspective compared with the SRCNN-based method. We then compare the proposed KDD-SFCEN with traditional and DNN-based extrapolation methods for spatial domain channel extrapolation, including the linear spline interpolation method<cit.>, the DFT-interpolation method<cit.>, and the FCNN-based method<cit.>. As shown in Fig. <ref>, the proposed KDD-SFCEN achieves better NMSE performance than all these baselines at the same R_s. Moreover, the proposed KDD-SFCEN at R_s=8 surpasses the FCNN-based method and achieves a similar NMSE performance as traditional baselines at R_s=1, indicating that the proposed KDD-SFCEN can reduce the pilot training overhead around 8 times from the spatial domain perspective compared with existing baselines. It can be seen from Fig. <ref> and Fig. <ref> that when R_f=8 or R_s=4 (corresponding to a 4-fold reduction in pilot training overhead compared to R_f=2 or R_s=1), the NMSE performance of uplink channel estimation with the proposed KDD-SFCEN is around -9 dB and -10 dB, respectively, which is already good enough for uplink channel estimation. However, since the downlink CSI at future slots has to be extrapolated based on the estimated uplink CSI, it would be better to improve the performance of uplink channel estimation as much as possible. From Fig. <ref> and Fig. <ref>, increasing R_f from 2 to 4 or increasing R_s from 1 to 2 leads to a minimum performance degradation compared to other increasing cases in R_f or R_s.[Note that from Fig. <ref> that increasing R_f leads to a near linear NMSE (in dB) performance degradation, thus the actual performance degradation increases in this progress.] These actually reflect the marginal effects of pilots in either the frequency domain or the spatial domain. Therefore, it is promising that we combine R_f=4 and R_s=2 for joint spatial and frequency domain channel extrapolation to improve the uplink channel estimation performance and restrict the performance degradation due to the reduction of pilot symbols in the frequency domain and the spatial domain. Fig. <ref> shows the NMSE performance of uplink channel estimation versus γ_SNR at R_f=4 and R_s=2 (solid lines), where we further compare the proposed KDD-SFCEN with the SF-CNN<cit.> in addition to the linear spline interpolation method<cit.> and the DFT-interpolation method<cit.>. The simulation results show that the NMSE performance of the KDD-SFCEN reaches near -12 dB at γ_SNR=20 dB and the KDD-SFCEN outperforms the SF-CNN about 4 dB and the traditional baselines about 9 dB at γ_SNR≥ 0 dB. Even with a low SNR, i.e., γ_SNR = -5 dB, the KDD-SFCEN can achieve a performance enhancement around 3.5 dB and 5 dB compared with the SF-CNN and traditional baselines. Moreover, the NMSE performance of the linear spline interpolation method, the DFT-interpolation method, and SF-CNN at R_f=2 and R_s=1 are also shown in Fig. <ref> with dotted lines, indicating that the KDD-SFCEN can reduce the pilot training overhead up to 4 times compared to existing methods. These results demonstrate that combining spatial and frequency domain channel extrapolation can further improve the channel estimation performance (2 dB to 3 dB) with the same pilot training overhead (i.e., a 4-fold reduction in pilot training overhead). We further examine the robustness to the UE velocity of the proposed KDD-SFCEN. As depicted in Fig. <ref>, DNN-based methods (including the proposed KDD-SFCEN and the SF-CNN) achieve the best performance when the testing UE velocity is equal to the training UE velocity (i.e., v=60 km/h), showing their capability in capturing spatial-frequency domain features and improving channel estimation accuracy. When the testing UE velocity decreases or increases, the performance of the DNN-based methods degrades. Moreover, the greater the deviation of the testing UE velocity from the training UE velocity, the more significant the performance degradation will be. Nonetheless, the proposed KDD-SFCEN achieves sufficient channel estimation accuracy (around or above -8 dB) and outperforms all baselines with v ≤ 90 km/h at R_f=4 and R_s=2, showing its outstanding robustness to the UE velocity. §.§ Performance Evaluation of Proposed TUDCEN Since uplink-downlink channel calibration is simple but necessary, we do not compare the proposed UDCCN with other methods. Instead, we apply the UDCCN to the aforementioned spatial-frequency channel extrapolation methods and present their performance on estimating the channel at the first downlink slot with or without calibration. As shown in Fig. <ref>, all methods generally benefit from the uplink-downlink channel calibration, demonstrating that uplink-downlink channel calibration is necessary and the proposed UDCCN is simple but effective to conduct uplink-downlink channel calibration. We then evaluate the performance of slot-level channel extrapolation with the proposed TUDCEN and a baseline temporal channel extrapolation method. i.e., the LSTM-based channel predictor<cit.>. Note that the uplink-downlink channel calibration is applied to all methods. We further combine the proposed KDD-SFCEN and the baseline uplink channel estimation method SF-CNN with the proposed TUDCEN and the LSTM. The NMSE performance and the achievable sum-rate performance at R_f=4, R_s=2, γ_SNR=20 dB, and v=60 km/h from the first downlink slot to the seventh slot are shown in Fig. <ref> and Fig. <ref>, respectively. It can be seen from Fig. <ref> that without slot-level channel extrapolation (dotted lines), there are significant estimation errors between the estimated downlink channel at the first downlink slot and the downlink channels at other downlink slots. Therefore, slot-level channel extrapolation is essential to reduce the channel estimation errors for these downlink channels. From Fig. <ref>, the `KDD-SFCEN + TUDCEN' (the orange solid line) always achieves the best NMSE performance compared with other methods. In addition, the `SF-CNN + TUDCEN' (the azure solid line) also keeps a relatively good extrapolation accuracy, showing that the proposed TUDCEN can also be applied to other uplink channel estimation methods. Compared with the proposed TUDCEN, the performance of the `KDD-SFCEN + LSTM' and `SF-CNN + LSTM' (dashed lines) degrades sharply to a poor NMSE regime at the second slot, indicating that the LSTM-based channel predictor is unable to deal with high-dimensional CSI matrices and provide accurate slot-level channel estimates. As depicted in Fig. <ref>, both the `KDD-SFCEN + TUDCEN' and `SF-CNN + TUDCEN' (orange and azure solid lines) achieve a good achievable sum-rate performance at all downlink slots. Specifically, the `KDD-SFCEN + TUDCEN' achieves 89% (at the seventh downlink slot) to 99% (at the first downlink slot) of the upper bound achievable sum-rate obtained with the perfect CSI (the red solid line). It is also worth emphasizing that the worst achievable sum-rate performance obtained by the `KDD-SFCEN + TUDCEN' is near the achievable sum-rate performance that can be obtained by traditional linear spline interpolation and DFT interpolation methods with a small SRS period T_p=0.25 ms (purple and yellow solid lines), demonstrating that with the proposed framework, the pilot signal can be sent less frequently (i.e., T_p=1 ms) while maintaining a satisfying achievable sum-rate performance, thereby further reducing the pilot training overhead by 4 times. In addition, compared with the general pilot training scheme (Fig. <ref> (d)), the proposed TUDCEN-based slot-level channel extrapolation-aided pilot training scheme (Fig. <ref> (e)) configures more slots for downlink data transmission instead of for uplink channel estimation, thereby significantly improving the system's spectral efficiency. § CONCLUSION This paper proposed a spatial, frequency, and temporal channel extrapolation framework for TDD mmWave massive MIMO-OFDM systems to systematically reduce the pilot training overhead under high-mobility scenarios. Specifically, two neural networks, namely the KDD-SFCEN and the TUDCEN, were proposed to reduce the spatial-frequency domain pilot training overhead and to reduce the temporal domain pilot training overhead, respectively. Extensive numerical results demonstrated that the proposed spatial, frequency, and temporal channel extrapolation framework can effectively reduce the spatial-frequency domain pilot training overhead by more than 4 times via spatial-frequency channel extrapolation and further reduce the temporal domain pilot training overhead by additional 4 times and significantly improve the system's spectral efficiency via enlarging the pilot signal period with slot-level channel extrapolation. IEEEtran
http://arxiv.org/abs/2406.08106v1
20240612113813
Counterfactual-based Root Cause Analysis for Dynamical Systems
[ "Juliane Weilbach", "Sebastian Gerwinn", "Karim Barsim", "Martin Fränzle" ]
cs.LG
[ "cs.LG" ]
Juliane Weilbach, Sebastian Gerwinn, Karim Barsim, Martin Fränzle Counterfactual-based Root Cause Analysis for Dynamical Systems Bosch Center for Artificial Intelligence Carl von Ossietzky University of Oldenburg Counterfactual-based Root Cause Analysis for Dynamical Systems Juliane Weilbach 1,2 Sebastian Gerwinn1 Karim Barsim 1Martin Fränzle 2 Received 21 March 2024 / Accepted —- ========================================================================== § ABSTRACT Identifying the underlying reason for a failing dynamic process or otherwise anomalous observation is a fundamental challenge, yet has numerous industrial applications. Identifying the failure-causing sub-system using causal inference, one can ask the question: "Would the observed failure also occur, if we had replaced the behaviour of a sub-system at a certain point in time with its normal behaviour?" To this end, a formal description of behaviour of the full system is needed in which such counterfactual questions can be answered. However, existing causal methods for root cause identification are typically limited to static settings and focusing on additive external influences causing failures rather than structural influences. In this paper, we address these problems by modelling the dynamic causal system using a Residual Neural Network and deriving corresponding counterfactual distributions over trajectories. We show quantitatively that more root causes are identified when an intervention is performed on the structural equation and the external influence, compared to an intervention on the external influence only. By employing an efficient approximation to a corresponding Shapley value, we also obtain a ranking between the different subsystems at different points in time being responsible for an observed failure, which is applicable in settings with large number of variables. We illustrate the effectiveness of the proposed method on a benchmark dynamic system as well as on a real world river dataset. § INTRODUCTION Explaining unexpected behaviour in terms of underlying causes is a difficult challenge with a broad range of applications. Such applications range from identifying potential problems in industrial processes to understanding influencing factors in anomalous weather phenomena. For example, within an assembly line of an industrial manufacturing plant, faster identification of root causes of increased scrap rate (the rate at which assembled products fail quality assessment audits) can minimize cost, increase production yield, and increase overall efficiency. If one can observe sufficiently many instances of anomalous behaviour or of faulty traces of a process, one option would be to perform correlation based analysis or causal discovery <cit.>, thereby estimating the influencing factors to the variable "fault" <cit.>. Alternatively, causal inference can be used even if only a single anomalous observation is available <cit.>. Here, the identification of root causes is formulated in terms of a counterfactual query: "Would the observed failure also occur, if we had replaced the behaviour of a sub-system at a certain point in time with its normal behaviour?". Although such a causal inference approach can estimate a ranked score of each variable involved of being the underlying root cause, we address three main shortcomings of this approach in this paper: Static systems: Root cause analysis based on causal inference has been considered only in static environments <cit.>. To address this limitation, we fit a time-discretized version of an Ordinary Differential Equation (ODE) system, thereby obtaining a dynamic model. By deriving counterfactual distributions over trajectories we then employ similar strategies as in the static case. Structural influences: Existing causal inference methods using counterfactuals <cit.> focus on additive external influences causing failures rather than structural influences. While <cit.> also considers structural influences, the method is limited to linear models and does not include single time external influences. In this paper, we address this problem by allowing for interventions on the structural equation and the external influence. Non-linear systems: Existing methods for root cause analysis are typically limited to linear dynamic models. Here, we address this problem by allowing transition functions to be non-linear using a simple neural network architecture. Additionally, existing methods are limited to small systems as they rely on the computation of Shapley values, which scales exponentially with the number of variables. This becomes infeasible in a dynamic setting, since the corresponding causal graph – unrolled over time – would have an increasingly large number of nodes. While approximate methods for the computation of Shapley values have been proposed <cit.>, we suggest a simple approximation to the Shapley value, which is applicable in settings with large number of variables. The remainder of this paper is organized as follows: in Section <ref>, we review the related work mentioned above in more detail, and provide necessary background and notation in Section <ref>. In Section <ref>, we describe our method for identifying root causes. In Section <ref>, we first illustrate the mechanisms of the proposed method in a synthetic linear and non-linear setting before evaluating it on a benchmark dataset of <cit.> as well as on real data describing river levels as in <cit.>. In Section <ref>, we conclude the paper. § RELATED WORK The problem of identifying the root cause of a system failure or anomaly has been addressed in various domains, including healthcare <cit.>, financial income distributions <cit.>, reliability engineering <cit.>, to name a few. In the context of time-series data, causal inference techniques have been used to qualitatively explain outliers using counterfactual trajectories <cit.>. To detect root causes affecting graphical structure or transition function Assad et. al. <cit.> propose a method based on assessing the direct causal effect. Modelling such causal effect with linear models, Assad et. al. <cit.> show that the total effects change if the underlying causal model changes. In turn, they can use this fact to identify structural changes in the causal model. However, the method is limited to linear models and does not include single time external influences. If more observations of anomalous data are available, the problem of identifying the root causes is also amenable to statistically estimate the correlation or causation of the different variables and time points onto the variable associated with the label "anomalous". To this end, Tonekaboni et. al. <cit.> introduce feature importance in time (FIT), a scoring mechanism to quantify importance of features in a multi-variate time-series. The authors propose to assess feature importance based on their predictive power w.r.t. the outcome distribution, while accounting for temporal distributional shifts. The approach localizes important features over time and can thus be used to gain useful insights into the behaviour of dynamic systems. However, FIT does not leverage the causal structure of the underlying system and rather provides correlative explanations for the observed outcomes. Best aligned with our approach, though, is the work by <cit.> which defines the problem of identifying root causes of a system failure as a counterfactual query. With this reformulation, the authors claim to be the first to propose actionable explanations to anomalous behaviour of underlying systems. In principle, counterfactual reasoning assumes, and leverages, complete causal knowledge of the underlying system in the form of a structural causal model (SCM). More precisely, the work in <cit.> assumes invertible functional causal models: models in which exogenous variables are computable from endogenous system observations. In fact, the authors leverage the default split between endogenous and exogenous variables in a graphical causal model to disentangle a node's inherited impact from its own contribution. They account for the notion of graded causation <cit.> and provide order-independent feature scoring using a game-theoretic concept commonly adopted in explainable machine learning <cit.>, namely Shapley values <cit.>. With its computation complexity, their approach lacks direct applicability to dynamical systems. In our experiments, we compare against the linear model performing interventions on the exogenous variable analogously to <cit.>. § BACKGROUND AND NOTATION As mentioned in the introduction, we are interested in a counterfactual approach to identify the root cause of a system failure. In this section, we introduce the necessary concept from the literature and also introduce the notation we use throughout the paper. Following the notation from Peters et al. <cit.>, we denote the sequence of observations of the system of interest by d-variate time series (Y_t)_t ∈ℤ where each Y_t for fixed t is the vector (Y^1_t, ...,Y^d_t). Each Y_t^j represents the jth observable of a system at time t. By some abuse of notation, if we omit super- or subscripts, we refer to the full time series. That is, Y = (Y_t)_t ∈ℤ, Y^j = (Y^j_t)_t ∈ℤ and Y_t = (Y^j_t)_j ∈{1,…,d}. The full time causal graph 𝒢_t with a node for each time point and signal Y^j_t for (j,t) ∈1,...,d×ℤ has theoretically infinitely many nodes and is assumed to be acyclic, while the summary graph 𝒢 with nodes Y^1,...,Y^d may be cyclic. (Structural causal model (SCM)) <cit.> An SCM ℳ(𝒮, P_N, 𝒢) is defined by a set of structural equations 𝒮, an acyclic graph 𝒢=(𝒴,ℰ), and a set of independent noise variables N^j∼ P_N^j, j∈𝒴. The structural equations for each node j are given by: S^j := Y^j = f^j(Y^PA(j)^𝒢, N^j) where 𝒮=∪_j ∈ℰ{S^j} is a set of structural collections, and PA(j)^𝒢⊆ℰ denotes the parents of the node j according to the graph 𝒢. To describe dynamic processes, again following <cit.>, we extend the above definition to the dynamic case by unrolling a causal graph over time as follows: (Dynamic SCM) In analogy to a static SCM, a dynamic SCM ℳ(𝒮_t, P_N_t, 𝒢_t) is given by an acyclic graph 𝒢_t and exogenous noise influences N_t^j ∼ P_N^j_t independent over each point in time t and variable j. 𝒢_t is referring to a graph consisting of an unrolled version of a summary graph 𝒢. Following the notation of <cit.>, the structural equations for node Y_t^j are given by : S^j_t Y^j_t = Y^j_t-1 + f^j(Y^PA(j)_t-1, Y^j_t-1) + N^j_t with PA(j) being the parents of node j according to the summary graph 𝒢 excluding the node itself. A notable difference from static SCMs is that the functional coupling f is constant over time[Note that we restrict ourselves to additive noise in order to realize an invertible SCM, see <cit.>]. (Interventional Dynamic SCM) Let 𝒥 be a set of interventions in which each element ξ can be of the following form: 2 ξ := do(P_N_t^j) = P̃_N_t^j, or ξ := do(S^j_t) = S̃^j_t where P̃_N_t^j is a new noise distribution and S̃^j_t is a new structural equation for the node j at time t. The interventional dynamic SCM is then defined by replacing either the noise distribution or structural equation within a given dynamic SCM ℳ(𝒮_t, P_N_t, 𝒢_t). Here Eq. <ref> denotes a soft intervention on the noise distribution whereas Eq. <ref> denotes an intervention on the structural intervention. We denote the resulting intervened dynamic SCM then by M_𝒥(S_t, P_N_t, G_t). As each SCM (interventional or not) defines structural equations and noise distributions, it can generate a trajectory of observations. We denote the distribution of the observations generated by the SCM as P_ℳ and the distribution of the observations generated by the intervened SCM as P_ℳ_𝒥. Given an observed trajectory, we can now also define the counterfactual distribution describing hypothetical trajectories which would have been observed if an (alternative) intervention had been performed. Abducted and counterfactual SCMs. Let Y^F be an observed trajectory and ℳ a given dynamic SCM. In order to construct a counterfactual dynamic SCM, we define the noise posterior distribution P_N^j_t(N^j_t|Y^F)= δ(N^j_t-N^F,j_t) by: N^F,j_t = - Y^F,j_t-1 - f^j(Y^F,PA(j)_t-1, Y^F,j_t-1) + Y^F,j_t where f^j is the structural equation of the node j and PA(j) are the parents of the node j according to the summary graph 𝒢. The resulting dynamic SCM, in which the noise distributions P_N^j_t are replaced with the above defined noise posterior distributions, is then denoted as ℳ^F indicating that the noise distributions are abducted from the observed trajectory Y^F. In fact, when generating trajectories from this abducted SCM, it only generates the observed trajectory Y^F due to the above setting of the noise variables. In order to generate new counterfactual trajectories reflecting alternative outcomes, we need to perform an intervention on this abducted SCM, leading to the counterfactual SCM. That is, given an abducted SCM ℳ^F and a set of interventions 𝒥, we refer to the resulting interventional SCM ℳ^F_𝒥 as the counterfactual SCM. For example, when performing an intervention do(P_N_t^j)=P̃_N_t^j on the noise distribution at a specific point in time t and a node j, the counterfactual SCM is defined by the following structural equations: Y^e_s = Y^e_s-1 + f^e(Y^PA(e)_s-1, Y^e_s-1) + N^e_s, where N^e_s ∼P̃_N_t^j if s = t and e=j δ(N^j_t-N^F,j_t) otherwise §.§ Root cause As we are interested in identifying a root cause, we state here more precisely what we mean by this term. We define a root cause as an intervention according to M_𝒥(S_t, P_N_t, G_t) leading to a faulty behaviour. Here, we assume that a faulty behaviour can be detected or defined using a known classifier ϕ. This classifier maps a time series to a binary value, indicating whether the time series is faulty. Such classifier can either be given as a known test function (e.g. corresponding to an end-of-line test in an assembly line, an assertion in a software system, or a medical diagnosis) or can be learned from data (e.g. an outlier-score function learned on normal data). (Root cause) Given a classifier ϕ that determines whether an observed trajectory is faulty, we refer to a (set of) intervention(s) Ξ to be the root cause of a failure associated with the classifier ϕ, if observations (Y^F_t, t=1,… T)_j from the interventional SCM ℳ_{Ξ} are leading to an increased failure rate: 𝔼_Y^F_t, t=1,… T∼ℳ_Ξ [ϕ(Y^F_t, t=1,… T)]- 𝔼_Y_t, t=1,… T∼ℳ [ϕ(Y_t, t=1,… T)] > 0 Note that this corresponds to the average treatment effect of an intervention on the external influence or structural intervention. If the probability of a failure for an external intervention on the noise or structure is larger than without any intervention, we assume that the failure has an underlying root cause. §.§ Shapley Value Shapley values, originally defined to quantify the contribution of individual players to the outcome of a game, have been used by Budhathoki et al. <cit.> in a static setup to define a score for nodes being potential root causes of an observed fault. To this end, interventions (or possible root causes) are identified with players in a game whose outcome is determined by a value function that quantifyes the degree to which a set of interventions can increase the likelihood of correcting a failure (to be defined below). (Shapley value) The Shapley value <cit.> of a player i out of a set N of possible players to the outcome of a game characterized by the outcome function v is defined by: Sh(i) := ∑_S ⊆ N \{i}|S|! (n - |S| -1 )!/n! (v(S ∪{i}) -v(S)) Note that in order to calculate the Shapley value, one has to sum over exponentially many subsets of the set of possible players. This is feasible only for small sets of players. As in the context of root cause analysis in a dynamic setting, the set of players corresponds to the set of possible interventions ranging over all possible times and nodes within the unrolled graph of a dynamic SCM. Due to the exponential growth of the number of possible interventions, exact Shapley value estimation is computationally infeasible for dynamic SCMs, and we have to resort to an approximate version. § METHOD FOR IDENTIFYING ROOT CAUSES Now that we have the necessary background, we can describe our method for identifying root causes in dynamic SCMs. The method is based on the following steps and is illustrated in Fig. <ref>. We want to identify the root cause that caused an observed failure in a system. To this end, we cast this problem in a counterfactual query: "Would the observed failure also occur if we had replaced the faulty behaviour of a sub-system at a certain point in time with its normal behaviour?". To answer this question after we observed a faulty observation (Y^F_t, t=1,… T)_j, as illustrated in Inputs in Fig. <ref>, we follow the steps of counterfactual distribution calculation: abduction, action and prediction <cit.>. However, in order to apply those steps, we need an SCM characterizing the normal and potentially the abnormal system. To characterize the normal system, we assume to have access to data representing the normal behaviour of the system, as shown in Inputs in Fig. <ref>. Additionally, we assume to have at least a summary graph 𝒢 of the system. This summary graph can be obtained from expert knowledge or from data. Furthermore, as shown in Assumptions in Fig. <ref>, we assume that we know a function ϕ that classifies an observation into faulty or normal. Fitting the model in step 3.1 of the Method part in Fig. <ref> we obtain the normal behaviour system ℳ by learning the functions f^j_N with the inputs being normal observations Y_t of each node and its parents of the summary graph 𝒢. If for both, normal as well as abnormal data, a node and hence its transition function is not anomalous, the transition function would be identical for both settings. Therefore, in 3.2 we additionally fit a transition function f^j_NF with normal and factum data as input on the same parents and children as in 3.1 of the known graph 𝒢 and with that we define the SCM ℱℳ. We show predictive samples of ℳ in the graph under 3.2. Estimating the Counterfactual: In the abduction step, we first infer the noise distribution corresponding to the observed factum. We refer to the abducted SCMs ℳ^F and ℱℳ^F by applying the factum as function input to f^j_N and f^j_NF and constructing the resulting noise posterior distributions as described in Eq. <ref>. We need to calculate the noise variables for both SCMs separately, because function couplings and noise variables are coupled. In the action step, we perform an intervention in ℳ by ξ_ℳ := {do(P_N_t^j) = P̃_N_t^j} (see Eq. <ref>), where we use the prediction error of our model to estimate the Gaussian noise variance: P̃_N_t^j = 𝒩(0, σ_val^2), σ_val^2 = 1/V1/T∑_v ∑_t (Y^j,v_t+1 - f_j(Y^Pa(j),v_t))^2 with Y^j,v being a validation trajectory of the normal data, and V the number of validation trajectories. For an intervention in ℱℳ we intervene on the noise as before and we additionally intervene on the structure by do(S^j_t) = S̃^j_t (see Eq. <ref>), which replaces the previous transition function f^j_FN with a new structural equation S̃^j_t consisting of the transition function f^j_N originating from the "normal" SCM, obtained purely from training data ξ_ℱℳ := {do(P_N_t^j) = P̃_N_t^j,do(S^j_t) = S̃^j_t }. After the construction of the corresponding counterfactual SCM we can then generate counterfactual trajectories under the different interventions Y^CF∼ P_ℳ^F_ξ_ℳ, as illustrated in 4. in Fig. <ref>. If an external influence on node j at time t leads to an abnormal factum, an intervention of the above type should remove the abnormal behaviour and therefore lead to a normal trajectory. Evaluation: To quantify how close these counterfactual samples are to normal trajectories, the trajectories are processed via a classifier function ϕ (Eq. <ref>). In turn, we receive a score for each counterfactual sample indicating whether the failure was removed by the counterfactual intervention ξ. We then average over multiple counterfactual samples. To rank interventions at different times and nodes, we can use Shapley values by identifying players with interventions and match-outcomes by the average normality of the counterfactual sample. Shapley values, however, scale exponentially and therefore we use the following simple approximation, which we obtain by ignoring interactions between different interventions, thereby only considering singleton intervention sets. Although mainly motivated by pure computational tractability, we can alternatively assume that perfectly synchronous occurrence of multiple root causes is very unlikely, thereby justifying the restriction to singleton intervention sets. Consequently we arrive the following simple expression of contribution score of individual interventions ξ for each point in time and node: Sh(ξ) := log𝔼_Y∼ P_ℳ^F_ξ{ϕ(Y)} § EXPERIMENTS In the following experiments, we evaluate the effectiveness of the proposed method for different synthetic and real world data-sets. As for synthetic data-sets, we consider both linear and non-linear dynamic systems with single point external failure-causing disturbances as well as a benchmark data-set for identifying structural causes for anomalies <cit.>. As for the real-world data-set, we are investigating dynamic water flow rate in rivers <cit.>. For our synthetic experiments, we perform two meta-experiments which analyze the influence on the model performance of varying root cause injections and how robust the model is against violating the assumption that the causal graph is known. We denote our models, a linear and a non-linear model both performing a counterfactual intervention on the external noise influence and on the structural equation with Lin(S^j_t,N_t^j) and NLin(S^j_t,N_t^j). We compare against a linear layer model with counterfactual noise-influence intervention Lin(N_t^j), similar to <cit.> as well as against EasyRCA <cit.> in the benchmarking experiment. For completeness, we additionally provide a nonlinear model NLin(N_t^j) with counterfactual noise-influence intervention. In order to model the non-linear dynamic SCM, for NLin we use a simple three-layer residual neural network (ResNet) with hyperbolic tangent activation functions and 128 neurons as latent layer. §.§ Experimental datasets Linear synthetic system: In our first data-set, we consider a linear multivariate system with additive Gaussian noise consisting of four nodes (w,x,y,z), each having two dimensions. The summary graph of the system is shown in Assumptions in Fig. <ref>. The structural equations of the system are of the form: Y^j_t := A^i Y^j_t-1 + ∑_k ∈ PA(j)B^kY^k_t-1 + C^lN_t^j, (N_t^j)_d ∼𝒩(0,1) ∀ d with N_t^j being zero mean standard Gaussian noise. For this system we chose the transition matrices such that they generate a stable system by using eigenvalues smaller than 1 (see Appendix). To simulate a root cause, we inject an additive constant term at a single dimension of a node j at time t to the equation above. Instead of a learned anomaly scoring function, in this experiment, we assume to have access to a function that checks the validity of a given observation, similarly as it would be in a manufacturing scenario, in which an end of line test is performed <cit.>. Therefore, we examine if a failure on the "last" node in a manufacturing line (here "last" node in the summary graph is z) has occurred. To this end, we use a threshold function, fixed over time for each dimension of node z. More precisely, this classifier can be applied to any time-series observation (Y_t^j)_t∈{1,...,T}, j∈{w, x,y,z}: ϕ(Y) = 1- 1/D_z∑_k=1^D_z1_[(μ_z)_k -(σ_z)_k, (μ_z)_k+(σ_z)_k](Y^z_k) Here, the dimension of node z is denoted with D_z. Note that this function provides a gradual feedback of how many of the dimensions in node z are outside of the pre-specified corridor given by the threshold function. FitzHugh–Nagumo system: Next, to allow for non-linear dynamic behaviour, we are generating data of the FitzHugh–Nagumo system (FHN), which is cyclic with regard to its summary graph, but acyclic in the unrolled graph 𝒢_t. Although being a multivariate system, as the two dimensions interact, the corresponding dynamic SCM consists of one node x with two dimensions: ẋ_̇1̇ = 3(x_1 - x^3_1/3 + x_2), ẋ_̇2̇ = (0.2 - 3x_1 - 0.2x_2)/3 We chose the initial values as in <cit.> but with slightly reduced additive Gaussian noise variance σ^2 = 0.0025. The root cause is simulated similarly to the linear system by adding a constant to the difference equation at one dimension and time point. We classify an observation as faulty, if it deviates too much from a normal observation. As we have, in this setting, access to the ground truth, a normal observation is represented by a trajectory generated from the ground truth system. Consequently, the classifier consists of a time-varying threshold bound around each dimension of the normal observation without the injected root cause of node x. Denoting the expected trajectory from the system by E a given observation Y is then classified to be faulty if it does not deviate more than 10 standard deviations at any point in time from the expected trajectory: ϕ(Y) = 1- ∏_t1_[ E^x_t-10σ_x, E^x_t +10 σ_x](Y^x_t). §.§ Evaluation When we have drawn counterfactual samples from our model, we calculate the approximate Shapley values (see Eq. <ref>) and use the ϕ function to evaluate each performed intervention based on whether it corrected the failure. The root cause is the intervention of the node j at time t that has the highest influence on the failure. If all counterfactual samples lead to the same ϕ evaluation for all interventions, then no unique root cause could be identified. However, due to random sampling of the counterfactual, this is an unlikely scenario (see for example Fig.<ref>.) Nevertheless, for the evaluation, we only require that the ground truth root cause is within the set of identified root causes. In Fig. <ref> we show five counterfactual samples for the nonlinear FHN system at the actual root cause injection point. Although the injected root cause is fairly large with regard to the interval of the normal observation without failure (drawn as orange line), the counterfactual intervention performed by our model NLin(S^j_t,N_t^j) corrects the failure for both dimensions of x. In order to analyze root cause injections and how the identification capabilities of our model behave under varying injections, we performed an Injection experiment for the synthetic linear and nonlinear FHN system. External disturbances in dynamic systems may be propagated and thereby increase their impact. Alternatively, if the system is robust against incremental noise (as it is the case in the defined systems above due to the external noise influence even under the 'normal' conditions), it is not obvious how large an external influence at which point in time is noticeable. In Fig. <ref>, we show varying root cause injections for the linear synthetic system (varying constant added to the structural equation) over 20 randomly sampled facta with T=20. It can be seen that the models intervening on the structure and the noise achieve a significantly higher identification score for large added constants. This could be due to a large root cause, in this setting leading to a factum with high distance to the normal data, which may lead to a divergence over time of the normal behaviour system ℳ. Note that we did a similar injection experiment for the FHN system, which can be found in the Appendix. Assumption violation. We probe our models on violation of the causal graph assumption for the linear synthetic system. For this, we modify the causal graph used by the underlying model through adding or removing random edges, while keeping the original summary graph for data generation. We use the same facta generated as σ = 500 in Fig. <ref>. In Tab. <ref> it can be seen that removing edges for all models has a stronger impact on predictive performance than adding. As expected, Lin((S^j_t,N_t^j)) performs best on this linear system, closely followed by NLin((S^j_t,N_t^j)). It must be mentioned that in a graph with only four edges, removing an edge is a major incision in the model assumption. Linear EasyRCA Benchmark. We compare against the linear univariate benchmark of <cit.> consisting of six nodes and two types of root causes. The parametric root cause meaning they change the coefficient of the parent nodes to a random uniform sampled value. As a special case of the parametric setting, they inject structural root causes, which set the coefficient of the parent nodes to zero. Since EasyRCA excludes single time point root causes, in order to do a fair comparison, we only rank sets of interventions, where we intervene on all times for a given node and evaluating it accordingly by Sh(ξ_0^j,...ξ_T^j). In their work they inject on two nodes, where one is always the root node of the system and the other one a randomly chosen node. As in their benchmark comparison the root node root cause is excluded, we exclude it from the evaluation as well. In the evaluation they distinguish for parametric and structural root causes, but because our model makes no prediction about the type of root cause, it is sufficient if EasyRCA predicted root causes contain the true root cause, regardless of the type. To rate the normality of a given trajectory Y, we make use of the learned dynamical SCM ℳ which was fitted on normal observations of the system. More precisely, for the EasyRCA benchmark as well as the following River experiment, we used an outlier score similarly to <cit.>, based on the learned dynamic SCM. That is, given a dynamic SCM ℳ consisting of N nodes and providing the conditional distribution p(Y_t^j|Y_t^PA(j,t)) via the dynamics equation learned from normal observational data (Y)_k, we can define the following outlier score: ϕ(Y) = 1/NT∑_j,tlog p(Y_t^j|Y_t^PA(j,t)) In Tab. <ref>, it can be seen that in general the intervention (S^j_t,N_t^j) is preferable to an intervention only on (N_t^j). For the linear systems, the accuracy of NLin(S^j_t,N_t^j) and Lin(S^j_t,N_t^j) are similarly good, while EasyRCA shows lower performance in the factum T=100 experiments. However, Lin(S^j_t,N_t^j) is inadequate for addressing the complexities of the nonlinear problem. We excluded the 2000 factum length experiment of the EasyRCA benchmark for computational reasons. Additionally, note that since EasyRCA is univariate, it can not be applied to our synthetic systems. Real World River Experiment. We analyse our method on real-world data considering a univariate river experiment consisting of four nodes. The nodes represent measuring stations of the Ribble River in England (data is from <cit.>). These measuring stations are influenced by unknown external influences such as for example rain. For this reason, the summary graph includes an unobserved confounder Z that influences all nodes. This unobserved confounder affects the accuracy of our model when learning the normal system ℳ from observational data. The nodes represent stations of the Ribble River that measure the flow rate. Although this data-set has been investigated in <cit.>, as a result of our dynamic viewpoint, we consider a slightly different factum. They consider four time points as static facta and infer the root causes for these. In contrast, we consider an entire time series as factum and infer the root cause. In addition, we use a finer time resolution of 15-minute intervals instead of averaged daily values, which has the advantage that the resulting SCM is less prone to instantaneous effects due to aggregation within a time-window. The finer resolution means that we consider a shorter period of time, namely the three days from 16.03.2019 to 19.03.2019 in which the flow rate is particularly high. As training data, we use the same time span as <cit.> from 01.01.2010 to 31.12.2018. They provide a z-score threshold for the New jumbles rock station, which we use as ϕ in <ref>, see also Fig.<ref>. We find the Shapley values with the highest scores at station Henthorn, which is an upstream station of the New jumbles rock station. Although no ground truth root cause exist for this experiment since it is a real world example, the result is plausible both geographically and with regard to the time point. Nevertheless, the counterfactual intervention cannot correct the failure, as the counterfactual sample is not below the z-score threshold. This could be due to the fact that the influence of the unobserved confounders is particularly high. § CONCLUSION In this paper, we have presented a method for identifying root causes in dynamic systems based on counterfactual reasoning. As the proposed method ranks individual interventions corresponding to individual nodes or sensors at particular times within a trajectory, our method is capable of exploiting not only the causal structure but also the natural direction of causality over time. By modelling temporal transitions with a non-linear neural network and a Shapley value approximation, we are able to remove important limitations of current counterfactual root cause analysis methods. While we demonstrated both on synthetic as well as real data the effectiveness of our method in identifying root causes in dynamic systems, there are several directions for further improvement. For example, our method is current limited to the assumption that the root cause consists of a single intervention and that the causal graphical structure is known as well as the absence of latent confounders. In future work, we plan to extend our method to identify multiple root causes and to include uncertainties in the graphical structure as well as potential latent confounders. §.§.§ The authors have no competing interests to declare that are relevant to the content of this article. splncs04
http://arxiv.org/abs/2406.08760v1
20240613024401
Boundary sources of velocity gradient tensor and its invariants: a unified theory of boundary vorticity and vortex dynamics
[ "Tao Chen", "Jie-Zhi Wu", "Tianshu Liu", "Jie Yao" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
1]Tao Chenmycorrespondingauthor [mycorrespondingauthor]Corresponding author chentao2023@njust.edu.cn 2]Jie-Zhi Wu jie_zhi_wu@163.com 3]Tianshu Liu tianshu.liu@wmich.edu 4]Jie Yao [1]School of Physics, Nanjing University of Science and Technology, Nanjing 210094, China [2]State Key Laboratory for Turbulence and Complex Systems, College of Engineering, Peking University, Beijing 100871, China [3]Department of Mechanical and Aerospace Engineering, Western Michigan University, Kalamazoo, Michigan, 49008, USA [4]Advanced Research Institute of Multidisciplinary Sciences, Beijing Institute of Technology, Beijing 100081, China § ABSTRACT The present work elucidates the boundary behaviors of the velocity gradient tensor (A≡∇u) and its principal invariants (P,Q,R) for compressible flow interacting with a stationary rigid wall. Firstly, it is found that the well-known Caswell formula exhibits an inherent physical structure being compatible with the normal-nilpotent decomposition, where both the strain-rate and rotation-rate tensors contain the physical effects from the spin component of the vorticity. Secondly, we derive the kinematic and dynamic forms of the boundary A-flux from which the known boundary fluxes can be recovered by applying the symmetric-antisymmetric decomposition. Then, we obtain the explicit expression of the boundary Q flux as a result of the competition among the boundary fluxes of squared dilatation, enstrophy and squared strain-rate. Importantly, we emphasize that both the coupling between the spin and surface pressure gradient, and the spin-curvature quadratic interaction (s_w·K·s_w), are not responsible for the generation of the boundary Q flux, although they contribute to both the boundary fluxes of enstrophy and squared strain-rate. Moreover, we prove that the boundary R flux must vanish on a stationary rigid wall. Finally, the boundary fluxes of the invariants of the strain-rate and rotation-rate tensors are also discussed. It is revealed that the boundary flux of the third invariant of the strain-rate tensor is proportional to the wall-normal derivative of the vortex stretching term (ω·D·ω), which serves as a source term accounting for the the spatiotemporal evolution rate of the wall-normal enstrophy flux. These theoretical results provide a unified description of boundary vorticity and vortex dynamics, which could be valuable in understanding the formation mechanisms of complex near-wall coherent structures and the boundary sources of flow noise. Caswell formula; Velocity gradient tensor; Invariants; Surface quantities; Boundary vorticity and vortex dynamics § INTRODUCTION §.§ Velocity gradient tensor and its principal invariants The velocity gradient tensor (VGT, A≡∇u) has been widely applied to understand the fundamental flow physics and to predict the evolution of coherent structures in turbulence, which has shown great advantages in its potential to boost the the cross-communication among the branches of turbulence dynamics, statistics and coherent structures <cit.>. The eigenvalues λ_i (i=1,2,3) of the VGT are the solutions of the characteristic equation det(λI-A)=λ^3+Pλ^2+Qλ+R=0, where P, Q and R are the first, second and third principal invariants of A, respectively defined by <cit.> P≡-tr(A)=-ϑ=-(λ_1+λ_2+λ_3), Q≡1/2(P^2-tr(A^2))=1/2(ϑ^2-tr(A^2))=λ_1λ_2+λ_2λ_3+λ_1λ_3, R≡-det(A)=-λ_1λ_2λ_3=1/3[ϑ^3-3ϑ Q-tr(A^3)], where ϑ≡∇·u is the dilatation (namely, the velocity divergence). The last equality in (<ref>) is obtained by applying the Cayley-Hamilton theorem. The discriminant of (<ref>) is given by Δ=27/4R^2+(P^3-9/2PQ)R+(Q^3-1/4P^2Q^2). In the region Δ≤0, A has three real eigenvalues λ_i (i=1,2,3); in the region Δ>0, A has one real (λ_3) and two complex-conjugate (λ_1,2=λ_cr±iλ_ci) eigenvalues. According to the sign of Δ, the flow topology of complex flows can be classified in (P,Q,R) space or (Q,R) space (for a fixed P) using critical point terminology for both the incompressible and compressible flows <cit.>. In addition, starting from the Navier-Stokes (NS) equations, the Lagrangian evolution equations of the VGT and its invariants were also derived and interpreted in flows of various types <cit.>, leading to the simplified descriptions of the restricted Euler dynamics <cit.>. An interesting and important observation is that the existing studies of the VGT and its invariants <cit.> are confined to the physical processes in unbounded flows or the flow structures in the fluid interior. Since their Lagrangian evolution equations cannot be simply written in single-variable forms, their sources in the fluid interior are difficult to be determined <cit.>, which thereby highlights the importance of specifying their boundary sources (with perfect certainty). From a physical perspective, since the considered physical quantities cannot penetrate into the solid side of the boundary, their boundary fluxes naturally become the boundary sources which describe their local creation rates on the boundary. From a pure mathematical perspective, all these governing equations are one-order higher than the NS equations in terms of the highest-order derivatives. Once generalized to the wall-bounded flows, the boundary values and fluxes of the VGT and its invariants immediately become the definite conditions of the Dirichlet and Neumann types, respectively. To the best of the authors' knowledge, a general theory that reveals the physical structures of the boundary values and sources of the VGT and associated invariants (including their relationship with the established boundary vorticity and dilatation dynamics) is not available in the existing literature, which becomes the main objective of the present study. §.§ Kinematic decompositions of velocity gradient tensor To acquire more detailed knowledge of flow structures, the kinematic decompositions of the VGT are performed based on different physical considerations. The conventional decomposition of the VGT is the symmetric-antisymmetric decomposition (SAD) or the Cauchy-Stokes decomposition <cit.>: A=D+Ω, D≡1/2(A+A^T), Ω≡1/2(A-A^T), where D and Ω are the strain-rate tensor and the rotation-rate tensor, respectively. The latter has one-to-one correspondence to the vorticity ω≡∇×u through the relation ω=ϵ:Ω and Ω=ϵ·ω/2, where ϵ is the permutation tensor. Nevertheless, the inherent limitation of the SAD is that one can neither distinguish the deformation and shearing effects in the strain-rate tensor nor identify the orbital rotation and the spin in the rotation-rate tensor, as firstly noticed by <cit.>. Soon thereafter, applying the real Schur theorem to the VGT yields the normal-nilpotent decomposition (NND) <cit.>: A=N+S, where N and S represent the normal and nilpotent tensors, respectively. A key implication from such decomposition is that the vorticity ω is naturally split as the sum of the orbital rotation ψ and the spin s <cit.>: ω=2ψ_Δ>0+s, which lays a rational foundation for analyzing the evolution and interaction of axial vortex and shear layer. The orbital rotation ψ represents the angular velocity of a fluid element with respect to the curvature center of its pathline at a time instant. The spin s is solely determined by the nilpotent tensor S, illuminating the pure shearing effect typically observed in a shear layer. A clear physical picture of vortex kinematics can be revealed by (<ref>), being compatible with the observed forms of the dominated flow structures including the winding shear layer and the axial vortex. In the region with Δ≤0, the vorticity ω is completely determined by the spin s without the orbital rotation (ψ=0). In contrast, in the region with Δ>0, a shear layer rolls up to form an axial vortex, which displays both the orbital rotation ψ and the spin s. For the potential flow (ω=0), one still has 2ψ+s=0 in the region with Δ>0, which indicates a mutual cancellation mechanism between the orbital rotation and the spin. The applications of this new decomposition in the analysis of turbulence physics have been reported in limited studies, e.g., <cit.> and <cit.>. §.§ Caswell formula and normal-nilpotent decomposition The VGT can be decomposed as A=ϑI+2Ω-B, where B≡ϑI-A^T is the surface deformation tensor which determines the relative rate of change of the small material surface element δΣ=δΣn: δΣ^-1D_tδΣ=n·B=-(n×∇)×u <cit.>, fully determined by the velocity over and geometry of the surface [D_t≡D/Dt is the material derivative]. On a stationary rigid wall, we find that the vorticity is equal to the spin (ω_w=s_w), extending along the tangential direction of the boundary. Therefore, the tangential-normal decomposition of A_w is given by A_w=ϑ_wnn+n(s_w×n)=ϑ_wnn+nξ, where ξ≡s_w×n is a vector perpendicular to the spin and n is the unit normal vector directed into the fluid. τ≡μξ is the skin friction vector (μ is the dynamic viscosity), which represents the wall shear stress exerted by the ambient viscous fluid. Then, a special on-wall orthonormal frame (e_1,e_2,e_3) [referred to as the τ-frame in <cit.>] is naturally introduced as e_1≡ξ/‖ξ‖, e_2≡s_w/‖s_w‖ and e_3≡n. Interestingly, we find that (<ref>) automatically implies a NND structure: A_w=N_w+S_w, N_w=ϑ_wnn,  S_w=nξ. By (<ref>) and (<ref>), the on-wall values of D and Ω are respectively expressed as D_w=ϑ_wnn+1/2(nξ+ξn), Ω_w=1/2(nξ-ξn), where the first equality is just the well-known Caswell formula on a stationary wall <cit.>. On one hand, both D_w and Ω_w contain the spin effects, thereby challenging the widely accepted perspective that D_w represents the pure deformation of a fluid element. In this sense, the Caswell formula had anticipated the necessity of the NND prior to its formal introduction. On the other hand, the whole boundary vorticity dynamics is fully governed by the spin field whose sources and sinks determine the creation of the wall-normal vorticity component near the wall, satisfying [∂_nω_n]_w=-∇_π·s_w=n·(∇_π×ξ) (∇_π is the surface tangential gradient operator). It readily follows from (<ref>) and (<ref>) that the boundary values of the invariants and the discriminant are P_w=-ϑ_w, Q_w=R_w=Δ_w=0. §.§ Known boundary fluxes Without losing generality, the linear diffusion approximation <cit.> is adopted in the following discussion so that all the viscosity coefficients are treated as constants. §.§.§ Boundary vorticity and enstrophy fluxes The vorticity equation can be obtained from the anti-symmetric part of the evolution equation of A: D_tω+B·ω=∇T×∇s+ν∇^2ω, where T is the temperature, s is the specific entropy; ν=μ/ρ is the kinematic viscosity (ρ is the density). It is noticed that (<ref>) is one-order higher than the NS equations in terms of the diffusion term. Therefore, in addition to the velocity adherence boundary condition, one has to add a new boundary condition to avoid possible spurious solutions caused by raising the order. The most natural choice is the acceleration adherence condition by which restricting the NS equations to the wall yields the boundary vorticity flux (BVF) <cit.>. The BVF on a stationary rigid wall is expressed as σ≡ν[∂_nω]_w=σ_p+σ_vis, σ_p≡ρ^-1n×∇_πΠ_w, Π≡p-μ_ϑϑ, σ_vis≡νK·s_w-ν(∇_π·s_w)n, where p is the pressure, K≡-∇_πn is the surface curvature tensor, μ_ϑ≡μ_b+4μ/3 is the longitudinal viscosity with μ_b being the bulk viscosity. Equation (<ref>) reveals a local dynamic causal mechanism of the boundary vorticity creation by the surface pressure gradient (with a dilatational correction for pressure), under the constraint of the no-slip boundary condition. Equation (<ref>) represents the pure 3D viscous contributions to the total BVF, which are caused by the coupling between surface curvature and boundary spin, as well as the sources/sinks of the boundary spin field. A complete analysis of the BVF with the aid of numerical simulations can be found in <cit.>. The boundary enstrophy flux (BEF) F_Ω≡ν[∂_nΩ]_w (Ω≡ω^2/2 is the enstrophy) is directly related to the BVF through the relation F_Ω=σ·s_w <cit.>. By taking a dot product of s_w with (<ref>), one can obtain the explicit expression of the BEF as F_Ω =ρ^-1ξ·∇_πΠ_w +νs_w·K·s_w. In the right hand side of (<ref>), the first term represents the interaction between the transverse and longitudinal fields while the second term denotes the quadratic coupling between the boundary spin and the surface curvature tensor. Recently, the BEF has found a new application in studying the surface flow structures: the global skin friction field can be inferred from the surface pressure field measured by the pressure sensitive paint (PSP) by modeling the BEF properly <cit.>. §.§.§ Boundary dilatation flux The dilatation equation can be derived by evaluating the trace of the evolution equation of A and then eliminating the specific enthalpy. For the isentropic flow, the dilatation satisfies the following advective-wave equation <cit.>: D_t^2ϑ+D_tu·∇ϑ -∇^2(c^2ϑ)=Q_0, where the source term is Q_0≡-2D_ttr(A^2)-2tr(A^3)+D_tu·(∇×ω), c^2=γ RT is the square of the local sound speed, γ is the specific heat ratio and R is the gas constant. The boundary longitudinal dynamics, caused by the on-wall viscous coupling among the transverse and longitudinal processes, is usually much more complicated than the boundary transverse dynamics <cit.>. Similar to the BVF, the boundary dilatation flux (BDF) F_ϑ≡[∂_nϑ]_n is introduced and used as a required boundary condition to solve (<ref>). Physically, the BDF does not exist on the other side of the wall, which implies a creation mechanism of dilatation at the boundary. By performing the singular perturbation analysis, a simplified expression of the BDF is obtained as <cit.>: F_ϑ=γν/c^2(∂_t+ϑ_w)(n×∇_π)·s_w +γν/c^2(s_w·K·s_w-[∂_nΩ]_n) -γν_θ/c^2ξ·∇_πϑ_w. By using (<ref>), c^2=γp/ρ and (n×∇_π)·s_w=∇_π·ξ, (<ref>) can further be written as F_ϑ = γν/c^2(∂_t+ϑ_w)[∇_π·ξ]-ξ·∇_πlnp_w = -ξ·∇_πlnp_w+𝒪(Re^-1/2). It is emphasized that, the first term in the right hand side of (<ref>) is generally of the order 𝒪(Re^-1/2). Therefore, the BDF is dominated by the coupling between the transverse field (ξ or s_w) and the surface pressure gradient with the order of O(Re^1/2), which also generates the BEF. § BOUNDARY FLUX OF VELOCITY GRADIENT TENSOR The boundary flux of the VGT (namely, the boundary A-flux for short) is defined as the wall-normal derivative of A at the wall. By using the velocity adherence condition (u_w=0), the kinematic form of the boundary A-flux is derived as (<ref>) [∂_nA]_w = KA_w+(∇_πϑ_w)n-ϑ_wK+∇_πξ +nn([∂_nϑ]_w-∇_π·ξ)+n{∇_πϑ_w+([∂_nω]_w-K·s_w)×n}, where K≡-∇_π·n=tr(K) is twice the mean curvature of the boundary surface. Equation (<ref>) is the core of the present study, which will be employed to derive other results in the following text. Although the derivation is apparently concerned with only the fluid kinematics, viscosity must be involved to enforce the no-slip boundary condition, which is compatible with the viscid nature of the NS equations. It is emphasized that both [∂_nω]_w and [∂_nϑ]_w are embodied in the kinematic form of the boundary A-flux, which can be replaced by using (<ref>) , (<ref>) and (<ref>) to associate them with the fundamental surface quantities. For neatness, [∂_nϑ]_w is not expanded in the following discussion. It follows from (<ref>) that ([∂_nω]_w-K·s_w)×n=ν^-1∇_πH_w (H≡Π/ρ=p/ρ-ν_ϑϑ) by which the dynamic form of the boundary A-flux can be derived from (<ref>): [∂_nA]_w = nn([∂_nϑ]_w+Kϑ_w-∇_π·ξ) +n(Kξ+ν^-1∇_πH_w) +(∇_πϑ_w)n+n(∇_πϑ_w)-ϑ_wK+∇_πξ. Here, [∂_nϑ]_w [(<ref>) and (<ref>)] and ν^-1∇_πH_w are the direct results of on-wall dynamics while the remaining terms are originated from pure kinematics. The terms Kϑ_wnn and Knξ come from the kinematic decomposition of KA_w. In addition, there exist three terms involving ξ [of the order 𝒪(Re^1/2)] with two of them representing its tangential divergence and gradient, respectively. These terms highlight the crucial roles of spin in the near-wall fluid dynamics. All the remaining terms associated with the boundary dilatation are generally of the order 𝒪(1), which physically represent the longitudinal waves as the by-products of near-wall flow. At the high-Reynolds-number limit (ν→0), ν^-1∇_πH_w=𝒪(Re) is the primary leading term and lim_ν→0ν[∂_nA]_w=1/ρ_wn∇_πp_w=𝒪(1), which holds inside the material vortex sheet attached to the boundary with its thickness δ→0. In other words, the surface pressure gradient becomes the dominant boundary sources for the boundary A-flux, where the viscosity guarantees the created diffusive flux to be of 𝒪(1) as Re→∞. The symmetric and anti-symmetric parts of (<ref>) yield twice of the boundary fluxes of D and Ω, respectively expressed as 2[∂_nD]_w = 2(∇_πϑ_w)n+2n(∇_πϑ_w)-2Kϑ_w +n(Kξ+ν^-1∇_πH_w)+(Kξ+ν^-1∇_πH_w)n +∇_πξ +∇_πξ^T +2nn{[∂_nϑ]_w+Kϑ_w-(∇_π·ξ)}, 2[∂_nΩ]_w = n(Kξ+ν^-1∇_πH_w)-(Kξ+ν^-1∇_πH_w)n +∇_πξ -∇_πξ^T. It is noted that the trace of (<ref>) recovers the BDF in (<ref>). The dual vector form of (<ref>) gives the BVF in (<ref>), where the detailed mathematical proof is documented in <ref>. By now, we have established an intrinsic theory of the boundary A-flux which indeed recovers the known BDF and BVF by applying the SAD. § BOUNDARY FLUXES OF Q AND R For the ideal perfect gas undergoing an isentropic process, the fluctuating density wave satisfies the classical Phillips equation <cit.>: D_t^2ℛ-∇·(c^2∇ℛ)=tr(A^2), ℛ≡ln(ρ/ρ_0), where ℛ describes the relative strength of the density disturbance (ρ^'≡ρ-ρ_0), ρ_0 is the reference density. For a small disturbance (|ρ^'|≪ρ_0), we have ℛ=ln(1+ρ^'/ρ_0)≈ρ^'/ρ_0. Because of tr(A^2)=ϑ^2-2Q, ϑ^2=(D_tℛ)^2 can be moved to the left hand side of (<ref>) as a low-order nonlinear term: <cit.> D_t^2ℛ-(D_tℛ)^2-∇·(c^2∇ℛ)=-2Q, with -2Q being a more concentrated kinematic source of sound than tr(A^2). Physically, Q represents a pseudo inviscid work rate done by the surface deformation over the boundary of a fluid element <cit.>. For incompressible flow, the pressure satisfies the Poisson equation ∇^2(p/ρ)=2Q, which can be written as a weighted integral of Q over the whole space <cit.>. However, to the best of the authors' knowledge, the boundary Q flux, describing the creation mechanism of Q on the boundary, was never investigated from a theoretical perspective. By using (<ref>) and (<ref>), the boundary Q flux is evaluated as [∂_nQ]_w = ϑ_w[∂_nϑ]_w -1/2[∂_ntr(A^2)]_w = (∇_π·ξ)ϑ_w -ξ·∇_πϑ_w -Kϑ_w^2 -ξ·K·ξ. Equation (<ref>) shows that the boundary Q flux can be generated by the coupling between the longitudinal and transverse fields, and their interplay with the surface curvature. The divergence (∇_π·ξ) is an intriguing surface physical quantity, which is closely associated with the strong sweep and ejection events (intermittency) in the viscous sublayer of near-wall turbulence <cit.>. It is found that the last quadratic term satisfies an exact identity: s_w·K·s_w+ξ·K·ξ=2KΩ_w, where the quadratic term s_w·K·s_w contributes to the BEF in (<ref>). From (<ref>) and (<ref>), Q could be alternatively written as 2Q=ϑ^2+Ω-S, where Ω≡ω^2/2=-tr(Ω^2)=Ω:Ω is the enstrophy and S≡ tr(D^2)=D:D is the squared strain-rate. In other words, Q represents the balance among squared dilatation, enstrophy and squared strain-rate. Therefore, we have 2[∂_nQ]_w =2ϑ_w[∂_nϑ]_w+[∂_nΩ]_w-[∂_nS]_w. We notice that both Ω and S can be expressed in terms of A: Ω=1/2[tr(A·A^T)-tr(A^2)], S=1/2[tr(A·A^T)+tr(A^2)]. The boundary fluxes of the two traces in (<ref>) are evaluated in <ref> by which we obtain [∂_nΩ]_w=ν^-1ξ·∇_πH_w +s_w·K·s_w, [∂_nS]_w = 2ϑ_w[∂_nϑ]_w+2Kϑ_w^2 +2[ξ·∇_πϑ_w-(∇_π·ξ)ϑ_w]+ν^-1ξ·∇_πH_w +s_w·K·s_w+2ξ·K·ξ. When multiplied by the kinematic viscosity ν, (<ref>) just gives the BEF in (<ref>) while (<ref>) provides the intrinsic decomposition for the boundary squared-strain-rate flux (BSF). In the right hand side of (<ref>), the first two terms represent the boundary flux of the squared dilatation and its coupling with the surface mean curvature. The third and fourth terms describe the interaction among the transverse and longitudinal fields, featured by ξ, p_w, ϑ_w and their tangential derivatives along the boundary. The last two terms result from the interaction between the boundary vorticity and the surface curvature, being manifested as the two quadratic forms. Substituting (<ref>) and (<ref>) into (<ref>) again yields (<ref>). For incompressible flow interacting with a stationary wall, (<ref>), (<ref>) and (<ref>) reduce to [∂_nQ]_w= -ξ·K·ξ, [∂_nΩ]_w=μ^-1ξ·∇_πp_w +s_w·K·s_w, [∂_nS]_w=μ^-1ξ·∇_πp_w +s_w·K·s_w +2ξ·K·ξ. It is emphasized that both the BEF and BSF can be created by both the longitudinal-transverse and the transverse-geometric coupling mechanisms. The longitudinal-transverse coupling is determined by the interaction between the surface vorticity (or ξ) and the surface pressure gradient. The transverse-geometric coupling involves only the diagonal components of the surface curvature tensor under the τ-frame. The cancellation between the common terms (including μ^-1ξ·∇_πp_w and s_w·K·s_w) in (<ref>) and (<ref>) leads to the fact that the boundary Q flux in (<ref>) that is solely determined by -ξ·K·ξ. In particular, if the wall is flat, the three scalar fluxes become quite simple: [∂_nQ]_w=0, [∂_nΩ]_w=[∂_nS]_w=μ^-1ξ·∇_πp_w. Equation (<ref>) implies that the boundary Q flux cannot be created on a stationary flat wall. The unique boundary coupling mechanism that can generate the BEF or BSF is the interaction between ξ and p_w. On a stationary wall, it is direct to show that A_w^2=ϑ_wA_w, which implies that [∂_ntr(A^3)]_w = 3ϑ_wA_w^T:[∂_nA]_w=3/2ϑ_w[∂_ntr(A^2)]_w = 3ϑ_w^2[∂_nϑ]_w-3ϑ_w[∂_nQ]_w. Then, combining (<ref>), (<ref>) and  (<ref>) yields [∂_nR]_w = 1/3[∂_n(ϑ^3-3ϑ Q)]_w-ϑ_w^2[∂_nϑ]_w +ϑ_w[∂_nQ]_w = -[∂_nϑ]_wQ_w=0. Although the BDF [∂_nϑ]_w is generally non-zero, Q_w=0 (on a stationary wall) guarantees that the boundary R flux must be zero. Therefore, we conclude that the boundary R flux could only be regulated in the presence of non-stationary boundaries, for example, a moving and continuously deforming boundary. § BOUNDARY FLUXES OF PRINCIPAL INVARIANTS OF STRAIN-RATE AND ROTATION-RATE TENSORS §.§ General discussion Like the VGT, D and Ω also have their own sets of principal invariants represented by (P_D,Q_D,R_D) and (P_Ω,Q_Ω,R_Ω), which are explicitly written as follows: P_D=-ϑ=-tr(D), P_Ω=0, P=P_D, Q_D=1/2(ϑ^2-tr(D^2)),  Q_Ω=-1/2tr(Ω^2), Q=Q_D+Q_Ω, R_D=1/3(ϑ^3-3ϑQ_D-tr(D^3)), R_Ω=0, R=R_D-1/4ω·D·ω. A proof of the last equality in (<ref>) is given in <ref>. Obviously, according to (<ref>), both the boundary P_D and P fluxes are equal to the negative BDF [(<ref>) and (<ref>)]. In addition, it follows from (<ref>) that 2[∂_nQ_D]_w=[∂_nϑ^2]_w-[∂_nS]_w, 2[∂_nQ_Ω]_w=[∂_nΩ]_w. The boundary Q_D flux is determined by the boundary fluxes of squared dilatation and squared strain-rate [(<ref>)] while the boundary Q_Ω flux is equivalent to the BEF [(<ref>) or (<ref>)]. It is worth mentioning that the boundary flux of the squared-dilatation is balanced by the first term in the right hand side of (<ref>), resulting in zero net contribution to the boundary Q_D flux. Since the boundary R flux is zero on a stationary wall [(<ref>)], the last equality in (<ref>) implies that the boundary R_D flux is proportional to the wall-normal gradient of the vortex stretching term in the enstrophy transport equation (<ref>): [∂_nR_D]_w = 1/4[∂_n(ω·D·ω)]_w = -1/4[ξ·∇_πΩ_w -2(∇_π·ξ)Ω_w +ϑ_w(s_w·K·s_w)], In the right hand side of (<ref>), the first term represents the variation of the boundary enstrophy along a ξ-line (or a skin friction line). The second term could be significant near the attachment and separation lines with high-magnitude skin friction divergence and surface enstrophy. The last term implies a triple coupling mechanism among dilatation, vorticity and surface curvature tensor on the boundary. §.§ Role of boundary R_D flux and further discussion For flow past a stationary rigid wall, denoting the spatiotemporal evolution operator by ℒ≡∂_t-ν∇^2, the enstrophy transport equation can be written as ℒΩ=-u·∇Ω-2ϑΩ +ω·D·ω +ω·∇T×∇s -ν∇ω:∇ω. where in the right hand side, the first term is the enstrophy convection term; the second term represents the interaction between dilatation and enstrophy; the third term describes the effect of vortex stretching; the fourth term represents the baroclinic enstrophy generation as a result of the misalignment between the temperature and entropy gradients; the final term describes the dissipation of enstrophy. By acting the wall-normal derivative operator ∂_n on both sides of (<ref>) and applying the result on the wall, we obtain the spatiotemporal evolution rate of the wall-normal enstrophy flux at the wall: [ℒ∂_nΩ]_w =Σ_1+Σ_2+Σ_3+Σ_4+Σ_5, where the source terms can be expressed by using the surface physical quantities: Σ_1 ≡ -[∂_n(u·∇Ω)]_w=-ξ·∇_πΩ_w-ϑ_w[∂_nΩ]_w = -ξ·∇_πΩ_w-ν^-1ϑ_w(ξ·∇_πH_w)-ϑ_w(s_w·K·s_w), Σ_2 ≡ -2∂_n[ϑΩ]_w=-2ϑ_w[∂_nΩ]_w-2[∂_nϑ]_wΩ_w = -2ν^-1ϑ_w(ξ·∇_πH_w)-2ϑ_w(s_w·K·s_w)-2[∂_nϑ]_wΩ_w, Σ_3 ≡ [∂_n(ω·D·ω)]_w=4[∂_nR_D]_w = -ξ·∇_πΩ_w+2(∇_π·ξ)Ω_w-ϑ_w(s_w·K·s_w), Σ_4 ≡ [∂_n(ω·∇T×∇s)]_w = [∂_nω]_w·[∇T×∇s]_w+s_w·[∂_n(∇T×∇s)]_w, Σ_5 ≡ -ν[∂_n(∇ω:∇ω)]_w = -2ν(K·∇_πs_w):∇_πs_w-2∇_πσ:∇_πs_w-2σ·[∂_n^2ω]_w. From (<ref>), the second-order wall-normal derivative of vorticity in (<ref>) is derived as [∂_n^2ω]_w=ν^-1(∂_t-ν∇_π^2)s_w +ν^-1ϑ_ws_w+ν^-1Kσ-ν^-1[∇T×∇s]_w. It is noted that the source term Σ_3 [(<ref>)] is solely determined by the boundary R_D flux. Particularly, for incompressible flow past a stationary flat wall, Σ_2=Σ_4=0 and (<ref>) is simplified as Σ_1=-ξ·∇_πΩ_w, Σ_3=-ξ·∇_πΩ_w+2(∇_π·ξ)Ω_w, Σ_5=-2ρ^-1∇_πξ:∇_π∇_πp_w-2μ^-1(∂_t-ν∇_π^2)ξ·∇_πp_w. In (<ref>), Σ_1 represents the variation of enstrophy along a ξ-line (or a skin friction line). The sum of Σ_1 and Σ_3 identifies an important characteristic surface quantity, namely ξ·∇_πΩ_w-(∇_π·ξ)Ω_w, which (multiplied by a constant factor) has been shown as the dominant physical mechanism responsible for the spatiotemporal evolution rate of the wall-normal Lamb dilatation flux at the wall region beneath an energetic quasi-streamwise vortex in a turbulent channel flow <cit.>. Σ_5 is determined by the longitudinal-transverse coupling among ξ, p_w as well as their temporal and tangential derivatives along the boundary. § CONCLUSIONS AND DISCUSSIONS The present paper provides a necessary theoretical basis for advancing the turbulence theory and computation based on the Lagrangian VGT dynamics to the wall-bounded cases at the fundamental level. The boundary fluxes of the VGT and its invariants are explicitly unraveled and successfully linked with the established boundary vorticity and dilatation dynamics on a stationary rigid wall, where the former can be correspondingly referred to as the boundary vortex dynamics. In this sense, a unified description of the boundary vorticity and vortex dynamics is thereby achieved for the wall-bounded flows. The main findings and contributions are summarized as follows. 1. On a stationary rigid wall, the whole vorticity dynamics is completely determined by the spin field without the orbital rotation. We find that the VGT A at the wall (or the Caswell formula) shows a compatible physical structure as that unveiled by the normal-nilpotent decomposition (NND). Both D_w and Ω_w contain the physical effects due to the spin field, which challenges the existing perspective that D_w represents the pure deformation of a fluid element. 2. The explicit expressions of the boundary A-flux are derived, which include both the kinematic and dynamic forms. They not only provide the required boundary conditions for solving the evolution equation of A but also describes its boundary creation mechanisms (thereby improved to be the boundary sources). The kinematic form emphasizes the crucial roles of the boundary vorticity flux (BVF) and the boundary dilatation flux (BDF) which are expressible in terms of the fundamental surface physical and geometric quantities, naturally leading to the dynamic form. At the high-Reynolds-number limit, the magnitude analysis for an attached boundary layer shows that the surface pressure gradient becomes the dominant boundary source for the viscous boundary A-flux. 3. The symmetric and anti-symmetric parts of the boundary A-flux yields the boundary fluxes of D and Ω. The trace of the boundary D-flux recovers the expression of the BDF. A concise representation of the boundary Ω-flux is obtained with the use of the wedge product, from which the BVF can be easily derived by acting the Hodge star operator. Therefore, a complete theory of the boundary A-flux is established. 4. The boundary Q flux can be interpreted as the competition among the boundary fluxes of squared dilatation, enstrophy and squared strain-rate. We prove that the boundary Q flux can be created by the boundary coupling mechanisms of three types: (a) longitudinal-transverse coupling [(∇_π·ξ)ϑ_w -ξ·∇_πϑ_w], (b) surface curvature-dilatation interaction [-Kϑ_w^2] and (c) the quadratic interaction [-ξ·K·ξ], where ξ≡s_w×n is parallel to the skin friction vector. Importantly, both the boundary fluxes of enstrophy and squared strain-rate can be created by the coupling between ξ and surface pressure gradient [ξ·∇_πp_w], and the other quadratic interaction mechanism [s_w·K·s_w], which however, make no net contribution to the boundary Q flux. In addition, we show that the boundary R flux must be zero as a result of zero boundary value of Q. 5. The boundary flux of the third invariant of the strain-rate tensor (namely, the boundary R_D flux) is proved to be proportional to the wall-normal derivative of the vortex stretching term, which can be generated by the competition among three mechanisms: (a) the advective variation of the boundary enstrophy along a ξ-line (or a skin friction line) [-ξ·∇_πΩ_w], (b) the coupling between the skin friction divergence and the enstrophy [2(∇_π·ξ)Ω_w] and (c) a triple coupling mechanism among dilatation, vorticity and surface curvature tensor on the boundary [-ϑ_w(s_w·K·s_w)]. The boundary R_D flux serves as a source term being responsible for the spatiotemporal evolution rate of the wall-normal enstrophy flux. Future studies could be devoted to exploring the possible applications of these exact relations in complex flow diagnostics with special focus on revealing the in-depth physical mechanisms of the formation and evolution of complex near-wall coherent structures, detecting the boundary sources of flow noises as well as developing new technologies for significant noise suppression and drag reduction, by virtue of elaborately designed surface configurations. Since the surface physical quantities can be calculated from the high-resolution datasets generated through the direct numerical simulations and the real fluid experiments, all the boundary fluxes and the corresponding physical constituents can be quantitatively evaluated, leading to the feasibility of determining the dominated boundary coupling mechanisms in a certain physical problem. In addition, a generalization to the presence of an arbitrarily moving and deforming boundary could be valuable for practical control of near-wall flows. § CREDIT AUTHORSHIP CONTRIBUTION STATEMENT Tao Chen: Conceptualization, Methodology, Writing – original draft, Writing - review & editing. Jie-Zhi Wu: Conceptualization, Methodology, Writing – original draft, Writing - review & editing. Tianshu Liu: Writing - review & editing, XXX (experiments). Jie Yao: XXX (simulations). § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY No data is generated for the present study. § DERIVATION OF (<REF>) The boundary A-flux is evaluated as [∂_nA]_w=[∂_n(∇̂_π+n∂_n)u]_w=∇_π[∂_nu]_w+n[∂_n^2u]_w, where the no-slip boundary condition (u_w=0) has been used. ∇̂_π is the tangential gradient operator used in the surface vicinity which reduces to ∇_π on the wall. It should be noted that in general, the order of ∇̂_π and ∂_n cannot be arbitrarily exchanged in the presence of a deforming surface. From (<ref>), we obtain [∂_nu]_w=ϑ_wn+ξ, ∇_π[∂_nu]_w =(∇_πϑ_w)n-ϑ_wK+∇_πξ. By applying the identity [∇^2u]_w=∇_π^2u_w-K[∂_nu]_w+[∂_n^2u]_w=-K[∂_nu]_w+[∂_n^2u]_w, it is direct to show that [∂_n^2u]_w=K[∂_nu]_w+[∇^2u]_w=K[∂_nu]_w+[∇ϑ]_w-[∇×ω]_w. n[∂_n^2u]_w=KA_w+n[∇ϑ]_w-n[∇×ω]_w. Consider the following orthogonal decomposition for [∇×ω]_w: [∇×ω]_w =(n·[∇×ω]_w)n +n×([∇×ω]_w×n). The first term in the right hand side of (<ref>) can be evaluated as n·[∇×ω]_w=∇_π·(s_w×n)+s_w·(∇_π×n)=∇_π·ξ, where we have used ∇_π×n=0. By virtue of (<ref>), the second term is rewritten as n×([∇×ω]_w×n)=n×([∂_nω]_w-K·s_w). Therefore, (<ref>) is converted to [∇×ω]_w =(∇_π·ξ)n +n×([∂_nω]_w-K·s_w). Combining (<ref>), (<ref>), (<ref>) and (<ref>) yields (<ref>). § RECOVERY OF BOUNDARY VORTICITY FLUX For any two vectors ξ and η in the three-dimensional space, it always holds that ξ∧η=ξη-ηξ where ∧ is the wedge product used in the differential geometry <cit.>. This implies a concise and self-consistent representation of (<ref>) as 2[∂_nΩ]_w=n∧(Kξ+ν^-1∇_πH_w)+∇_π∧ξ. We notice that the exterior forms involved in (<ref>) can be transformed to their dual vectors by introducing the Hodge star operator (*). For any two vectors ξ and η in the three-dimensional space, it holds that *(ξ∧η)=ξ×η <cit.>. By acting the Hodge star operator on each term of (<ref>), it is direct to show that 2*[∂_nΩ]_w=[∂_n*(∇∧u)]_w=[∂_nω]_w, *(n∧(Kξ+ν^-1∇_πH_w))=ν^-1n×∇_πH_w+Ks_w, *(∇_π∧ξ)=∇_π×ξ=s_w·K-Ks_w-(∇_π·s_w)n. Combining (<ref>) and (<ref>) recovers the BVF in (<ref>) where Ks_w in (<ref>) and -Ks_w in (<ref>) exactly cancel each other. § BOUNDARY FLUXES OF TWO TRACES By using (<ref>) and (<ref>), the boundary fluxes of the traces [tr(A^2) and tr(A·A^T)] are evaluated as 1/2[∂_ntr(A^2)]_w = [∂_nA]_w:A_w^T = ϑ_w([∂_nϑ]_w+Kϑ_w-∇_π·ξ) +ξ·∇_πϑ_w +ξ·K·ξ, 1/2[∂_ntr(A·A^T)]_w = [∂_nA]_w:A_w = ϑ_w([∂_nϑ]_w+Kϑ_w-∇_π·ξ)+2KΩ_w +ξ·∇_πϑ_w+ν^-1ξ·∇_πH_w. § AN ALTERNATIVE EXPRESSION OF R By applying the identity tr(A^3)=tr(D^3)+3tr(Ω^2·D), it follows from (<ref>) that R=1/3[ϑ^3-3ϑQ-tr(D^3)-3tr(Ω^2·D)], which was previously obtained by <cit.>. It is noticed that the following relations exactly hold: -3ϑQ=-3ϑ(Q_D+Q_Ω)=-3ϑQ_D-3/4ϑω^2, -3tr(Ω^2·D)=-3/4ω·D·ω+3/4ϑω^2. Substituting (<ref>) and (<ref>) into (<ref>) yields (<ref>): R = 1/3[ϑ^3-3ϑQ_D-tr(D^3)-3/4ω·D·ω] = R_D-1/4ω·D·ω, where -3ϑω^2/4 in (<ref>) and +3ϑω^2/4 in (<ref>) cancel each other during the derivation. It is worth mentioning that for incompressible flow (ϑ=0), ω·D·ω and [-2tr(D^3)-(1/2)ω·D·ω] are the source terms being responsible for the rate of change of the enstrophy (Ω≡ω^2/2) and the squared strain-rate S≡ tr(D^2), respectively <cit.>. Since 2Q=Ω-S holds for incompressible flow, the difference of these two source terms is equal to -6R, being consistent with the restricted Euler model <cit.> where D_tQ=-3R holds after neglecting the effects of viscosity and pressure anisotropy. § WALL-NORMAL DERIVATIVE OF VORTEX STRETCHING TERM By applying the chain rule of the partial derivative, we obtain [∂_n(ω·D·ω)]_w=2[∂_nω]_w·D_w·s_w+s_w·[∂_nD]_w·s_w. Owing to the kinematic constraint ξ·s_w=0 and (<ref>), the first term in the right hand side of (<ref>) must be zero. Then, the second term can be decomposed as s_w·[∂_nD]_w·s_w =s_w·∇_πξ·s_w-ϑ_ws_w·K·s_w. By using ξ×s_w=2Ω_wn and the vector identity ∇_π×(ξ×s_w) =s_w·∇_πξ -ξ·∇_πs_w +(∇_π·s_w)ξ -(∇_π·ξ)s_w, it is deduced that s_w·∇_πξ·s_w =-ξ·∇_πΩ_w+2(∇_π·ξ)Ω_w. Finally, substituting (<ref>) and (<ref>) into (<ref>) yields [∂_n(ω·D·ω)]_w =-[ξ·∇_πΩ_w-2(∇_π·ξ)Ω_w+ϑ_w(s_w·K·s_w)]. § REFERENCES
http://arxiv.org/abs/2406.09322v1
20240613170030
Active Inference Meeting Energy-Efficient Control of Parallel and Identical Machines
[ "Yavar Taheri Yeganeh", "Mohsen Jafari", "Andrea Matta" ]
cs.LG
[ "cs.LG", "cs.AI" ]
[ [ June 17, 2024 ================= § ABSTRACT We investigate the application of active inference in developing energy-efficient control agents for manufacturing systems. Active inference, rooted in neuroscience, provides a unified probabilistic framework integrating perception, learning, and action, with inherent uncertainty quantification elements. Our study explores deep active inference, an emerging field that combines deep learning with the active inference decision-making framework. Leveraging a deep active inference agent, we focus on controlling parallel and identical machine workstations to enhance energy efficiency. We address challenges posed by the problem's stochastic nature and delayed policy response by introducing tailored enhancements to existing agent architectures. Specifically, we introduce multi-step transition and hybrid horizon methods to mitigate the need for complex planning. Our experimental results demonstrate the effectiveness of these enhancements and highlight the potential of the active inference-based approach. § INTRODUCTION Active inference (AIF), an emerging field inspired by the principles of biological brains, offers a promising alternative for decision-making models. It unifies perception, learning, and decision-making under the free energy principle (FEP), which formulates neuronal inference and learning under uncertainty <cit.>. Accordingly, the brain is modeled through levels of (variational) Bayesian inference <cit.>, minimizing prediction errors by leveraging a generative model of the world while considering uncertainties. This framework enables the development of agents that can calibrate their models and make decisions without complete knowledge of system dynamics. Significant progress has been made in applying active inference across various domains, including robotics, autonomous driving, and healthcare <cit.>, showcasing its ability to handle complex decision-making tasks in dynamic environments. Recently, the manufacturing industry's focus on energy efficiency has intensified due to its significant contribution to global energy consumption. Addressing energy efficiency at the machine level has become critical, with energy-efficient scheduling (EES) and energy-efficient control (EEC) strategies emerging as key approaches to reducing environmental impact <cit.>. While traditional EEC methods often necessitate complete system knowledge, reinforcement learning (RL) has shown potential in optimizing manufacturing processes without prior system knowledge <cit.>. However, RL agents may struggle to rapidly adjust their policies to changing conditions. This research aims to build upon advancements in active inference-based decision-making <cit.> and apply it to EEC in manufacturing systems, demonstrating its potential and advancing the understanding of active inference in complex environments. The existing active inference agents often rely on extensive search algorithms during planning and make decisions based on immediate next predictions <cit.>, which pose challenges in the context of the EEC problem. By employing deep active inference as the decision-making algorithm, we introduce tailored enhancements, such as multi-step transition and hybrid horizon methods, to address the challenges posed by the problem's stochastic nature and delayed policy response. Our experimental results highlight the effectiveness of these enhancements and underscore the potential of the active inference-based approach. The remainder of the paper is organized as follows: we begin by concisely introducing the EEC problem and the manufacturing system under study. We then present an overview of the formalization of active inference, describe the agent, evaluate its performance, and discuss future directions. § APPLICATION OVERVIEW Active inference has proven effective in various applications commonly associated with decision-making processes in biological agents, such as humans and animals. These applications primarily involve visual sensory output as observations. For instance, Fountas et al. (2020) <cit.> tested their agent on tasks like Dynamic dSprites <cit.> and Animal-AI <cit.>, which can be performed by biological agents simply. Additionally, applications in robotics <cit.> (e.g., manipulation <cit.>) align with tasks that human agents can typically perform naturally. This effectiveness stems from active inference being a theory of decision-making for biological agents <cit.>. However, certain applications, such as the control of industrial systems, can present complex challenges. While the decision-making processes and existing applications mentioned above may not be straightforward, human agents may struggle to devise effective policies for these more intricate problems. §.§ EEC in Manufacturing Systems EEC gaining prominence in both academic and industrial circles within manufacturing systems. It offers substantial energy savings across three key control levels: component, machine, and production system. At its core, EEC involves managing the power consumption state of objects based on environmental conditions. Objects are kept fully operational when their functions are needed and transitioned to low power states when not in use, though this poses challenges due to the unpredictable nature of environmental demands and the penalties incurred during state transitions. These penalties include both time wasted during transition, when the object adds no value, and the energy consumed during the transition process. A comprehensive and recent literature review on this topic can be found in <cit.>. §.§ System Description The specific system under study is a manufacturing workstation consisting of a single upstream buffer serving multiple parallel machines as depicted in Fig. <ref>. Key characteristics of this system include a finite buffer capacity, stochastic arrival of parts, and identical machines operating in a mass-production context. Each machine can exist in various states including working, standby, startup, and failed, with further subdivisions within the working state to denote idle and busy states. Additionally, machines are subject to stochastic failures and repairs, each with expected durations, further complicating control strategies. Power consumption varies based on the machine's state, with different power requirements for standby, startup, idle, and busy states. For instance, standby and failed states require minimal power consumption, while idle and startup states demand higher levels. Furthermore, the highest power consumption occurs during the busy state when the machine is fully operational and actively processing parts. Managing these power variations efficiently is crucial for effective energy-efficient control in manufacturing systems. § DEEP ACTIVE INFERENCE AGENT Active inference is a unifying theory that integrates inference, perception, and action by emphasizing the dependence of observations on actions <cit.>. To effectively manage observations toward preferred states, the optimization of actions plays a crucial role <cit.>. This concept was originally proposed as a framework for understanding how organisms actively control and navigate their environment by iteratively updating their beliefs and actions based on sensory evidence <cit.>. The FEP <cit.> is at the core of active inference, paving the way for creating a mathematical model, and there are even experimental evidences supporting it <cit.>. We modify the proposed agent by Fountas et al. (2020) <cit.>, which exhibited notable capabilities and performance when compared against three benchmark model-free RL algorithms in two image applications. §.§ Active Inference Formalism Active inference agents employ an integrated probabilistic framework consisting of an internal generative model <cit.> coupled with inference mechanisms to represent and interact with the world. Similar to RL the agent interacts with the environment, but using three random variables representing observation, latent state, and action (i.e., (o_t,s_t,a_t) at time t). The framework assumes a Partially Observable Markov Decision Process (POMDP) <cit.>. The generative model of the agent, parameterized with θ, is defined over these variables (i.e., P_θ(o_1:t,s_1:t,a_1:t-1)) <cit.>. Generally, the agent acts to reduce surprise, which can be quantified by -log P_θ(o_t). Specifically, there are two steps for the agent while interacting with the world <cit.> as follows: 1) The agent calibrates its generative model through fitting predictions and improve its representation of the world. This is done by minimizing Variational Free Energy (VFE), which is similar to surprise of predictions in connection with the actual observations <cit.>, as follows: θ^* = min_θ(𝔼_Q_ϕ(s_t,a_t)[log Q_ϕ(s_t,a_t)-log P_θ(o_t,s_t,a_t)]) . -log P_θ(o_t)≤𝔼_Q_ϕ(s_t,a_t)[log Q_ϕ(s_t,a_t)-log P_θ(o_t,s_t,a_t)] , This objective function is commonly known as the negative evidence lower bound (ELBO) <cit.>, which is the upper bound for -log P_θ(o_t). It is also used as a foundation for training variational autoencoders <cit.>. 2) The agent makes decisions (i.e., chooses actions) in active inference based on the accumulated negative Expected Free Energy (EFE or G): P(π)=σ( -G(π)) = σ(-∑_τ>t^ G(π,τ)) , where σ(·) represents the Softmax function, and π (i.e., policy) denotes the sequence of actions. The EFE encompasses minimizing surprise regarding preferred observations[Here, the surprise refers to the preference, while in VFE, surprise pertains to the actual observation used for calibrating the model.], exploring uncertainty, and reducing uncertainty about model parameters <cit.>. The EFE for τ≥ t can be formulated as follows <cit.>: G(π,τ)=𝔼_ P(σ_τ|s_τ,θ)𝔼_ Q_ϕ(s_τ,θ|π)[log Q_ϕ(s_τ,θ|π)-log P(o_τ,s_τ,θ|π)] . Fountas et al., (2020) <cit.> provided a derivation <cit.> for calculating the EFE in Eq. <ref> at each time step: G(π,τ) = -𝔼_Q̃[log P(o_τ|π)] + 𝔼_Q̃[log Q(s_τ|π) - log P(s_τ|o_τ,π)] + 𝔼_Q̃[log Q(θ|s_τ,π) - log P(θ|s_τ,o_τ,π)] . They expanded the formalism, leading to a tractable estimate for EFE that is both interpretable and calculable <cit.>: G(π,τ) = -𝔼_Q(θ|π)Q(s_τ|θ,π)Q(o_τ|s_τ,θ,π)[log P(o_τ|π)] + 𝔼_Q(θ|π)[ 𝔼_Q(o_τ | θ,π)H(s_τ|o_τ,π)-H(s_τ|π)] + 𝔼_Q(θ|π)Q(s_τ|θ,π)H(o_τ|s_τ,θ,π) - 𝔼_Q(s_τ|π)H(o_τ|s_τ,π) . This paved the way for establishing a unified formalism for computing decisions in Eq. <ref>. Accordingly, actions are connected to perception, which is achieved through Bayesian inference, based on the EFE in Eq. <ref>. Through this formalism that leads to calculating the EFE (i.e., Eq. <ref>), we can interpret the contribution of each element <cit.>: The first term (i.e., Eq. <ref>) is analogous to reward in RL as it is the surprise[Here, instead of maximizing cumulative rewards, the focus is on minimizing the surprise, which quantifies the extent of deviation (i.e., misalignment) between the prediction and the preferred observation.] of the prediction considering preferred observation. The second term (i.e., Eq. <ref>) represents state uncertainty, which is mutual information between the agent’s beliefs about state before and after prediction. This term also shows a motivation to explore areas of the environment that resolve state uncertainty <cit.>. The third term (i.e., Eq. <ref>) represents uncertainty about model parameters considering new observations. This term is also referred active learning, novelty, or curiosity <cit.>. In fact, model parameters (i.e., θ), particularly contribute to making predictions, including generation of the next states. In summary, the framework (as depicted in Fig. <ref>) is realized through a mathematical formalism in the following manner. The observation is fed as input, which propagates through the model to create perception (i.e., beliefs), which include generating future states. It is to facilitate the calculation of EFE (in Eq. <ref>) integrated into the planner derived from policy (in Eq. <ref>) to act on the environment. After obtaining the next observation the VFE (in Eq. <ref>) can be calculated, which calibrate (i.e., learning) the model based on the matching the new observation with the prediction. In fact, every time the framework goes through the loop the model is optimized based on the VFE form the previous loop, then the optimized model is used to for the rest. §.§ Architecture An agent within the active inference framework needs different modules, which are entangled within the framework. Amortization <cit.> is introduced into the formalism (in Sec. <ref>) to scale-up the realization <cit.>. The formalism is inherently probabilistic and parameterized with with two sets, θ = {θ_s, θ_o} representing generative and ϕ = {ϕ_s} recognition elements <cit.>. Using the following parameterized modules the formalism (in Sec. <ref>) can be calculated: Encoder (i.e., Q_ϕ_s(s_t)), an amortized inference of the hidden state (i.e., an inference network <cit.> providing a mapping between the observation, õ_t, and a distribution for its corresponding hidden state). Transition (i.e., P_θ_s(s_t+1|s̃_t,ã_t)), which generates a distribution for the next hidden state based on both a sampled action and the current hidden state. Decoder (i.e., P_θ_o(o_t+1|s̃_t+1)) generates a distribution for prediction based on the sampled hidden state. Neural networks can facilitate the realization of these modules by representing a mapping between a sample and the respective distribution. In fact, parameters of a pre-selcted (e.g., Gaussian) distribution can be approximated. Here, we model the state space specifically with a multivariate Gaussian distribution, assuming no covariance (i.e., diagonal Gaussian). Using the VFE (i.e., Eq. <ref>), all the three networks can be trained in an end to end fashion. Aside from action and the transition, which can even be considered integrated within the state space, the structure bears a resemblance to a variational autoencoder <cit.>. It is noteworthy that training for the both include optimizing ELBO, as specified in Eq. <ref>. Utilizing the mentioned architecture (also depicted in Fig. <ref>), the agent would be able to calculate the EFE in Eq. <ref> for a given policy (i.e., π), which is a sequence of actions. Thus, we can calculate the probabilities for selecting actions through Eq. <ref>. In order to make better decisions, the agent can plan ahead by simulating future trajectories using the architecture mentioned above. However, as the policy space grows exponentially into the future, it is infeasible to evaluate all possible scenarios. Fountas et al. (2020) <cit.> introduced two approaches to alleviate the obstacle. 1) They used the standard Monte-Carlo Tree Search (MCTS) <cit.>, a tree-based search algorithm, which selectively explore promising trajectories in a restricted manner. 2) They introduced another recognition module <cit.>, parameterized with ϕ_a as follows: Habit (i.e., Q_ϕ_a(a_t)), an amortized inference of actions (i.e., an inference network <cit.> providing a mapping between a sampled hidden state, s̃_t, and normalized, through Softmax function, probabilities for the actions). Habit is also realized through a neural network, which approximates the posterior distribution over actions (i.e., P(a_t|s_t)) using the prior P(a_t) that is obtained from the MCTS <cit.>. This network is trained to reproduce the last action sampled form the planner, given the last state. This is similar to the fast and habitual decision-making in biological agents <cit.>. Fountas et al. (2020) <cit.> followed the standard four steps of MCTS <cit.>, which let them to restrict and prioritize the trajectories that should be evaluated. Iterativly a weighted tree that has memory updates the visited states. During each loop, a path from the existing tree (towards a leaf node) is selected (i.e., selection) based on the following upper bound confidence: U(s_t,a_t)=G̃(s_t,a_t)+c_explore· Q_ϕ_a(a_t|s_t)·1/1+N(a_t,s_t) Where G̃(s_t,a_t) represents the algorithm's current estimation for the EFE, while N(a_t,s_t) denotes the number of times a node (i.e., a state and action pair) in the tree has been visited during the tenure, along with an exploration hyperparameter, c_explore.Then, starting from the leaf and for every possible action (i.e., expansion), the EFE is calculated for a fixed number of future steps (i.e., simulation). This value is finally added to all nodes along the path to calculate the average of G̃(s_t,a_t) (i.e., backpropagation). After all, Actions are sampled from the probabilities created by P(a_t) = N(a_t,s_t)/∑_jN(a_t,j,s_t) (where P(a_t) = ∑_π:a_1=a_t^P(π)), which is proportional to the number of times a node is visited. The planning process continues until a maximum number of loops (i.e., a hyperparameter) is reached, or a condition, i.e., max P(a_t) - mean P(a_t) > T_dec, is met, indicating that the planning is finished. Fountas et al. (2020) <cit.> further employed Q_ϕ_a(a_t) (i.e., habit) to modulate the state space, motivated by incorporating uncertainty. The divergence between the policy obtained from the planner (i.e., MCTS) and the habit (i.e., D_t=D_KL[Q_ϕ_a(a_t)||P(a_t)]) can serve as a loss function for the habit. This divergence also represents a type of uncertainty in the state space, preventing the habit from being identical to the policy <cit.>. Therefore, they used D_t in a logistic function as follows: ω_t= α/1 + e^- b-D_t-1/c + d . The monotonically decreasing pattern establishes a reciprocal connection between D_t-1 and ω_t, i.e., the state precision, using hyperparameters {α, b, c, d}. Altogether, the transition (i.e., P_θ_s(s_t|s_t-1,a_t-1)) is modeled with the distribution 𝒩(μ, σ^2/ω_t) <cit.>. This precision is analogous to β in β-VAE <cit.>, effectively promoting disentanglement of the state space by the encoder <cit.>. To facilitate the computations and effectively estimating different elements within the framework, Fountas et al. (2020) <cit.> used several levels of Monte-Carlo simulations (i.e., sampling based estimations). In addition to MCTS, all terms in the EFE (Eq. <ref>), including the use of MC dropout <cit.> for model parameters (i.e., <ref>), are estimated in this manner <cit.>. §.§ Enhancements We explore various aspects to design the agent and leverage the formalism and existing architecture outlined by Fountas et al. (2020) <cit.> as a foundation. First, we examine how features and requirements related to the problem under study can influence the agent. Subsequently, we propose solutions to address these issues, introducing a coherent agent design. §.§.§ Exploring the Application The simulation of the system, described in Section <ref>, is designed to replicate features of an industrial system. It utilizes Poisson processes <cit.> (i.e., exponential distributions) for machine state transitions and part arrivals <cit.>. The event-driven steps employed by Loffredo et al. (2023) <cit.> trigger decisions after a system state transition rather than at fixed intervals, proving effective for controlling working machines. This problem can be viewed as either continuous-time stochastic control or a discrete-time Markov Chain process <cit.>. Continuous-time modeling requires making time intervals visible to the agent for both machines and subsequent observations, whereas discrete-time modeling allows the agent to learn dynamics by observing transitions to create probabilities for different transitions. There is a variable time interval between subsequent observations (i.e., Δ t as depicted in Fig. <ref>). This variability requires synchronizing the transition for prediction (i.e., P_θ _o (o_t+1 |s̃_t+1 )) with the next observation (i.e., õ_t+1) in the continuous-time model. Incorporating continuous-time facilitates neural network function approximation for longer Δ t during planning. However, using continuous-time can further complicate the prediction structure since residence times for machine states exist in observation. Here, we utilize discrete-time event-driven steps that simplify this process compared to the continuous-time approach. The stochastic nature and the integral and continuous form of the reward functions for the system under study <cit.> mean decision impacts may not be immediately observable, aligning with POMDPs. The system has a delay in responding to the policy, which we call a long/delayed impact horizon, particularly with respect to reward. This is an important distinction from environments evaluated by Fountas et al. (2020) <cit.>. The agent proposed by them focuses on planning based on immediate next predictions. The problem at hand is continuous with no terminal state, and the span over which reward functions are integrated may encompass a few thousand steps. This complicates planning and highlights the need for less computational cost. §.§.§ Experience Replay The generative model encompasses the core of active inference <cit.> and predictive coding <cit.>. Therefore, the performance of any active inference-based agent heavily relies on its accuracy. To improve model training, we introduce experience replay <cit.> using a memory that stores (o_t,a_t,o_t+1) at different steps. During training, we sample a batch of experiences from the memory and ensure the latest experience is also included. However, for all the batched experiences, we utilize ω_t based on the latest experience. §.§.§ Hybrid Horizon To address the limitations arising from the short horizon of the EFE, which relies on immediate next predictions, we propose augmenting the planner with an auxiliary term to account for longer horizons. Q-learning <cit.> and its enhanced variant, deep Q-learning, can serve as model-free planners with longer horizons, leveraging rewards to update the Q-value for a state-action pair <cit.>. Loffredo et al. (2023) <cit.> demonstrated that deep Q-learning can achieve near-optimal performance for the systems under study. Accordingly, we modify Q_ϕ_a(a_t) to represent amortized inference of actions, mapping observations õ_t (or sampled predictions) to normalized action probabilities using a Softmax function and training it with deep Q-learning updates based on rewards coming from experience replay. We introduce a hyperparameter γ to balance the contributions of long and short horizons. Thus, we arrive at a new formulation for the planner to incorporate longer horizons: P(a_t) = γ· Q_ϕ_a(a_t) + (1-γ) ·σ( -G(π) ) . The resulting combination achieves a controlled balance between the EFE for the short horizon terms, which incorporates uncertainties, and the long horizon term. We further utilize the new Q_ϕ_a(a_t) to modulate the agent's state uncertainty based on Eq. <ref>. In fact, D_t (i.e., D_KL[Q_ϕ_a(a_t)||P(a_t)]) represents the discrepancy between the long horizon and the combined policy, reflecting a form of knowledge gap for the agent. §.§.§ Multi-Step Transition and Planning Given the stochastic nature and long impact horizon of the system under study, a one-step transition (as depicted in Fig. <ref>) may not result in significant changes in observation and state, leading to indistinguishable EFE terms. Therefore, the model should learn transitions beyond one step and predict further into the future to distinguish the impact of different policies. We modify the transition module to allow multiple steps, controlled by a hyperparameter (e.g., s = 90), enabling multi-step transitions given the policy (i.e., sequence of actions). Representing the sequence of actions in a policy as a one-hot vector can be high-dimensional, so we utilize integer encodings as an approximation. This is feasible since the actions (or number of machines) are less categorical and can be considered rather continuous in this case. During planning, we utilize repeated actions in the transition for each action and calculate the EFE accordingly. This method assesses the impact of actions over a short period, using repeated action simulations. This approximation helps to distinguish different actions over a horizon based on the EFE. Thus, even a single multi-step transition can serve as a simple and computationally efficient planner. For deeper simulations, it can be combined with MCTS of repeated transitions. Alternatively, it can be combined with a less expensive planner that starts with repeated transitions for each action, followed by simulating until a specific depth using the following policy: P̃(π, a_τ) = (1 - c_explore) · Q_ϕ_a(a_τ) + c_explore·σ( -log P(o_τ|π) ) , to calculate the EFE of the final state or trajectory. After several loops, the accumulated EFE for each action can be used in Eq. <ref>. § RESULTS To demonstrate the potential of our methodology, we concentrate our experiments on controlling a real industrial workstation comprising six parallel-identical machines with an upstream capacity of 10, as outlined in <cit.>. All the stochastic processes characterizing the workstation follow Poisson distributions, and thus are exponentially distributed with different expected values. We first introduce the preference function, then describe the setup and training, and finally present the numerical performance of our agent[The source code will be available on GitHub at https://github.com/YavarYeganeh/AIF_Meeting_EEChttps://github.com/YavarYeganeh/AIF_Meeting_EEC.]. §.§ Preference Function Active inference involves an agent acting to achieve its preferred observation, similar to the concept of a setpoint in control theory <cit.>. Consequently, the agent possesses an internal function to quantify the proximity of the prediction to the preferred state. This function is important for the agent's performance. While it correlates with the reward function in reinforcement learning, it is based on a different philosophy: a control setpoint rather than the cumulative reward in the MDP framework of reinforcement learning <cit.>. Instead of the reward function used by Loffredo et al. (2023) <cit.>, we propose a different preference function for the multi-objective optimization of the system under study. This function aligns with the concept of a long/delayed impact horizon system for our agent to control, accounting for the average performance of the system over a fixed time span (t_s[In our implementation, we use the closest recorded timestamp from the system, respecting the fixed time span.], e.g., 8 hours) leading up to the observation at (t). It includes terms for production, energy consumption, and a combined term as follows: R_production = T_current/T_max R_energy = 1 - E_avg/E_max R = ϕ· R_production + (1 - ϕ) · R_energy , where: * ϕ is the weighting coefficient balancing production and energy consumption. It is set close to 1 (0.97 in our implementation) to ensure that the agent does not significantly reduce production. * T_current = NP(t) - NP(t - t_s)/t_s represents the throughput within the past t_s period, where NP(t) is the number of parts produced within this period. * T_max is the maximum achievable throughput, occurring under the ALL ON policy <cit.>. * E_avg = C(t) - C(t - t_s)/t_s represents the average energy consumption over the past t_s period, where C(t) is the total energy consumption within this period. * E_max is the maximum theoretical consumption rate of the system, pertaining to all machines operating in their busy state. §.§ Agent Setup and Training We adhere to the MC sampling methodology for calculating EFE, as well as most agent configurations, as outlined by Fountas et al. (2020) <cit.>. Notably, we employ Bernoulli and Gaussian distributions to model prediction and state distributions, respectively. We introduced a modification by utilizing activation functions for the encoder and transition networks. The output of these two networks generates means and variances representing Gaussian distributions for the state. We applied the Tangent Hyperbolic function for the means and the Sigmoid function, multiplied by a factor, λ_s, between 1 and 2, for the variances. This enforces further regularization and stability for the state space to prevent unbounded values, which need to be fitted into a normal distribution. In our implementation, in contrast to <cit.>, which does not use activations for the state, these activations are essential for learning an effective policy. Our agent's observation comprises buffer levels and machine states, all one-hot encoded. Similarly, we incorporate the three reward terms in Eq. <ref>, predicting only their means without pre-defining a distribution. The agent's preference (i.e., P(o_τ|π)) corresponds to the prediction of the combined reward term for the system, which the agent seeks to maximize. Given the composition of binary and continuous components of the observation, the loss of the VAE's reconstruction is equally scaled aggregation of binary cross-entropy and mean square error. Additionally, due to the one-hot structure of the observation, we sample the predictions, excluding the reward terms, which are treated as means, to be fed into the decoder during the calculation of EFE. As we validate the performance of a control system for an industrial application, we use a single system during each training epoch, with an experience replay size of 200. These systems are initialized with a random policy for one day of simulation, after removing the profile from one day of warm-up simulation using the ALL ON policy. We trained our agents on-policy, following a similar approach to the algorithm proposed by Fountas et al. (2020) <cit.>, but with adjustments to accommodate the modifications we made, including multi-step transition, experience replay, and a hybrid planner. §.§ Agent Performance To assess the efficacy of our agents, we tested their performance several times during different training epochs, each on independent systems initialized with a random agent after warm-up, similar to those used for training. We simulated the interaction between the agent and the system, which was randomly controlled at the beginning, for one day of simulation time. The combined reward term (in Eq. <ref>) at the end of the test simulation was extracted as the performance measure, with a time span of t_s (i.e., 8 hours in this case). Our agent, similar to <cit.>, has a large set of hyperparameters, which proved to be sensitive, especially those related to the state space of the model. We considered both 1-step transitions and multi-step transitions, taking repeated actions during planning for each of the possible actions (i.e., determining how many machines to keep ON) to then calculate their EFE. Fig. <ref> presents the comparison for a single set of hyperparameters, particularly λ_s=1, except s, across different γ. It shows the suitability of the multi-step transition and simple repeated planner as well as hybrid horizon. Since the state should fit the normal distribution, λ_s=1 for variance will target areas of the input domain of the Sigmoid function that are saturated and have smaller gradients. To address this, we increased λ_s to 1.5, where the Sigmoid function has larger gradients. This results in higher rewards even for very small γ (i.e., 0.05), as presented in Fig. <ref>. This performance is primarily coming from the second term of the EFE (i.e., Eq. <ref>), which serves as an intrinsic reward <cit.>, as demonstrated in Fig. <ref>C. The other two terms are less distinguishable for different actions on average. This suggests that the agent differentiates between various actions in its state space to achieve high rewards. In fact, the long impact horizon and stochasticity, as well as our approximations in transition and planning, hinder the predictive power of the agent, but it managed to infer the impact of different actions in its state space. It is also worth noting that our agent quickly converges to high rewards but may experience instability and loss of control if training continues for (a long time), due to a catastrophic increase in the loss, particularly the reconstruction term of the generative model. This necessitates early stopping mechanisms for training and the introduction of regularization elements (e.g., dropout and normalization) to prevent or mitigate the issue. § CONCLUSION AND FUTURE WORK The results of our study demonstrate the effectiveness of our proposed modifications for the active-inference-inspired agent. Notably, a single multi-transition lookahead with repeated actions, coupled with a hybrid horizon, can achieve high rewards without the need for extensive planning algorithms like MCTS. This is particularly notable considering the unique challenges posed by the application under study, which demands deep lookahead into the future to distinguish different actions, while stochasticity presents a challenge. The significant impact of hyperparameters on agent performance emphasizes the importance of meticulous tuning for optimal results and effective control. This highlights the potential of our methodology to tackle the EEC problem and similar applications characterized by a long/delayed impact horizon and high stochasticity. Improving the methodology can involve enhancing the generative model, the core of active inference, especially by introducing recurrent transitions or enhancing predictive capabilities. Integration of diffusion-based generative models instead of VAEs <cit.> is also a promising direction. The framework and formalism of active inference agents show promise for non-stationary scenarios, where model-free agents may struggle to adapt swiftly. Future research will concentrate on improving the agent, extending experimental validation, and tailoring the methodology to non-stationary scenarios, leveraging the strengths of active inference to develop more robust and efficient decision-making algorithms. unsrtnat
http://arxiv.org/abs/2406.09323v1
20240613170128
Master of Disaster: A Disaster-Related Event Monitoring System From News Streams
[ "Junbo Huang", "Ricardo Usbeck" ]
cs.IR
[ "cs.IR" ]
A More Practical Approach to Machine Unlearning David Zagardo, dave@greenwillowstudios.com June 2024 =============================================== § ABSTRACT The need for a disaster-related event monitoring system has arisen due to the societal and economic impact caused by the increasing number of severe disaster events. An event monitoring system should be able to extract event-related information from texts, and discriminates event instances. We demonstrate our open-source event monitoring system, namely, Master of Disaster (MoD), which receives news streams, extracts event information, links extracted information to a knowledge graph (KG), in this case Wikidata, and discriminates event instances visually. The goal of event visualization is to group event mentions referring to the same real-world event instance so that event instance discrimination can be achieved by visual screening. § INTRODUCTION The increased frequency and magnitude of disasters in recent years have led to significant human, environmental, and economic losses. In response to this, disaster-related event monitoring systems, such as GDELT <cit.> and GDACS <cit.>, have emerged as critical tools to (1) monitor events and (2) provide timely overviews for an early disaster response planning. Such large-scale systems combine manual and automated approaches in retrieving event data from sensory and raw text data. Advances in machine learning have significantly boosted predictive models’ ability to extract accurate information from raw texts in the above-mentioned monitoring systems. However, a major challenge on discriminating event instances remains unsolved. In other words, how do we know if multiple news articles are referring to the same real-world event instance? Event instance discrimination is a less research area. Existing work mainly relies on entity matching based on factors such as location and time <cit.>, or linking event mentions to entries in a knowledge graph (KG) like Wikidata[<https://www.wikidata.org/>]  <cit.>. However, heuristic-driven methods often suffer from poor generalization, while KG-driven approaches are limited by the inability to represent events absent from the KG. In this word, we strive for an event monitoring system that (1) receives news stream; (2) extracts event information; (3)links extracted information to a Knowledge Graph (KG), in this case, Wikidata and (4) discriminates event instances visually. To achieve this goal, we suggest that a half-automated, human-in-the-loop event monitoring system can leverage predictive models to provide reliable information for humans. Such information includes users, geospatial and temporal information of events as well as an indication on whether two or more event mentions are referring to the same real-world event instance. In this work, we present Master of Disaster (MoD)[The online demo can be found here <https://hitec.demo.skynet.CoyPu.org/>. Please ignore tabs other than Event Extraction and Event Visualization. The source code for the GUI can be found here <https://github.com/semantic-systems/SEMS-tool-suite/>. The source code for the event extractor can be found here <https://github.com/semantic-systems/CoyPu-EventExtraction>], a half-automated, human-in-the-loop disaster-related event monitoring system with three key features. First, MoD receives daily news streams from GDELT[<https://www.gdeltproject.org/>] with a keyword search. Second, MoD extracts event-related information with a transformers-based event extractor, whose output is linked to Wikidata and the CoyPu ontology[<https://schema.coypu.org/global/>]. Third, MoD provides a visualization of events to represent the similarity of different news in the embedding space. To compensate for the error produced by the event extractor, a visualization from an unsupervised clustering algorithm is provided as a reference for the human evaluation of event distributions. MoD aims to provide an overview of daily events extracted from the user-provided keywords. Furthermore, it should be mentioned that the semantic output (linking to Wikidata) helps leverage downstream tasks such as knowledge validation and common sense reasoning. § MASTER OF DISASTER (MOD) MoD is an open-sourced application with four components: a data preprocessor, an event extractor, an event visualizer, and a gradio-based GUI[<https://gradio.app/>]. The architecture of MoD is illustrated in Figure <ref>. §.§ Data preprocessor First, the data preprocessor queries a list of articles containing the user-provided keywords with the GDELT API[Note that the GDELT API restricts the maximal amount of returned articles to 250.]. Furthermore, it applies a filtering method to select articles that are written in English. This is because the event extractor can only receive English inputs, currently. With the filtered articles, the data preprocessor takes the title of each article to represent an event. Additionally, all artifacts (such as white spaces around punctuation marks) are removed from the titles to retrieve the final representation of events in the textual space. §.§ Event Extractor The event extractor consists of three components, an event type detector, an entity linker and an RDF graph generator[Resource Description Framework (RDF) is a standard model for data interchange on the Web.]. The event type detector is a pre-trained RoBERTa-base <cit.> model fine-tuned on the TREC-IS dataset <cit.>. The model is optimized using the supervised contrastive learning <cit.> loss. In the original TREC-IS benchmark, the task was to predict information types from raw texts, such as Request-GoodsServices or Report-EmergingThreats. However, we modified the original task schema to predict the event type[In total, there are 9 event types: tropical storm, flood, shooting, covid, earthquake, hostage, fire, wildfire and explosion] (such as Earthquake or Shooting). Note, the event type was considered a meta-information in the original dataset. Also, the dataset did not include an out-of-scope (oos) class describing non-events. Thus, we included data from the TweetEval dataset <cit.> to represent an oos class. It is worth noting that TweetEval included 7 subtasks, such as Emotion Recognition, Irony Detection and Hate Speech Detection. Sentences in TweetEval are all user-generated tweets, which contain a variety of topics, language styles and structures than traditional news articles. By integrating such samples in the training data, the model learned a more concentrated feature representation for the given event classes. Next, we employed an entity linker to extract event-related entities. MoD presently considers two relations, namely hasLocality and hasImpactOn, following the CoyPu Event ontology[<https://schema.CoyPu.org/global/2.3#Event>]. Here, we leveraged a local entity linker, namely BLINK[<https://github.com/facebookresearch/BLINK>] <cit.>, a BERT-based entity linking model which links entities to Wikipedia. However, since the CoyPu ontology is based on Wikidata as background KG, the RDF graph generator maps from Wikipedia to Wikidata and outputs an event graph in JSON-LD (see Appendix <ref>). §.§ Event Visualizer The event visualizer (1) receives a stream of filtered articles from the data preprocessor, (2) followed by the event type detector to generate sentence embeddings from the last hidden layer and make predictions on event types, and (3) visualizes the sentence embeddings in a two-dimensional coordinate. The goal of event visualizer is to group event mentions referring to the same real-world event instance. We applied a dimensionality reduction technique based on Principal Components Analysis (PCA). Two resulting two-dimensional visualizations are presented in MoD, as shown in Figure <ref>. The first one illustrates the event type prediction by the event type detector (left-hand side in Figure <ref>) and the second visualization (right-hand side in Figure <ref>) introduces the prediction from a clustering algorithm. We used DBSCAN <cit.> as the clustering algorithm because DBSCAN is a density-based algorithm, where the number of clusters is not pre-defined. Therefore it can group news into an arbitrary number of event instances. From the visualization shown in Figure <ref>, we have observed that DBSCAN considers local structures (measured by the distance between sentence representations) of sentence embedding in the PCA-reduced two-dimensional space. This provides insights for human evaluators to have a direct access to event distribution, which is important to discriminate between event instances. The final component in MoD is the gradio-based interactive GUI which is a centralized, injective component that makes API calls to all services used in MoD. These services include an event extractor and an event visualizer. § EVALUATION To ensure that MoD can be practically deployed as a half-automated disaster-related event monitoring system, we conducted a survey with seven NLP researchers, among which five of the participants are frequent news readers. The only demographic that we controlled for is the expertise in NLP. We observed that six participants have expertise in knowledge graphs and three participants have expertise in event extraction. The main investigation in the survey is the usability of MoD, which consists of the following questions on a 5-point Likert scale. * How easy is it to use the Event Extraction tab? (1: very difficult; 5: very easy) * How accurate is the information provided by the event extractor? (1: very inaccurate; 5: very accurate) * How easy is it to use the Event Visualization tab? (1: very difficult; 5: very easy) * Is the overview of daily news by the Event Visualizer informative? (1: very non-informative; 5: very informative) * Will you use Event Visualizer for fast access to the news? (1: no; 5: yes) As a result, we received 4.14 points on average on both the first and the third question (level of easiness to use the system); 3 points on average on the third question (accuracy by the event extractor); 3.57 points on average on the fourth question (informativeness of the event visualizer) and 3.14 point on average on the last question (practicality). Albeit with a small set of participants, we observed that the majority of the participants agreed that MoD is easy to use and the visualization is informative. § CONCLUSION AND FUTURE WORK In this paper, we presented MoD, a disaster-related event monitoring system. Moreover, we explained the architecture of MoD and the functionality of each component. The goal of MoD is to provide an event monitoring system that (1) reveives news stream; (2) extracts event information; (3) links the extracted information to a KG and (4) discriminates event instances mentioned in the news. In future work, we aim to provide a temporal analysis of events and increase the performance of the event extractor. § ETHICS STATEMENT To our knowledge, this work does not concern any substantial ethical issue. Corpora used in this work are preprocessed by masking all user mentions and links. Of course, the application of classification algorithms could always play a role in Dual-Use scenarios. However, we consider our work as not risk-increasing. § ACKNOWLEDGEMENTS The authors acknowledge the financial support by the Federal Ministry for Economic Affairs and Energy of Germany in the project CoyPu (project number 01MK21007[A-L]). HITeC is “G” This work was also supported by the House of Computing and Data Science (HCDS) of the Hamburg University within the Cross-Disciplinary Lab programme, and by the Ministry of Research and Education within the project ‘RESCUE-MATE: Dynamische Lageerstellung und Unterstützung für Rettungskräfte in komplexen Krisensituationen mittels Datenfusion und intelligenten Drohnenschwärmen’ (FKZ 13N16844). unsrtnat § EXAMPLE OUTPUT GRAPH OF THE EVENT EXTRACTOR @id: "https://data.CoyPu.org/event/mod/d868a8be-c49e-48b8-a3a5-5b4b12d7d97f", @type: [ 0: "https://schema.CoyPu.org/global#Event" , 1: "http://www.wikidata.org/entity/Q2252077" ], http://www.w3.org/2000/01/rdf-schema#comment: [ 0: @value: "Hamburg shooting : Multiple dead after attack at Jehovah Witness church in Germany" ], https://schema.CoyPu.org/global#hasImpactOn: [ 0: @id: "http://www.wikidata.org/entity/Q35269" ], https://schema.CoyPu.org/global#hasLocality: [ 0: @id: "http://www.wikidata.org/entity/Q1055" , 1: @id: "http://www.wikidata.org/entity/Q183" ], https://schema.CoyPu.org/global#hasPublisher: [ 0: @value: "HiTec" ], https://schema.CoyPu.org/global#hasTimestamp: [ 0: @value: "15_03_2023_17_57_56" ]
http://arxiv.org/abs/2406.08259v1
20240612142932
A Tannakian framework for prismatic $F$-crystals
[ "Naoki Imai", "Hiroki Kato", "Alex Youcis" ]
math.NT
[ "math.NT", "math.AG" ]
=1 spacing=nonfrench equationsubsection otsep4.5 tocline#1#2#3#4#5#6#7#1>@̧tocdepth secpenalty#2 M ifempty#4 tempdimar@tocindent#1 tempdima#4 @ #3tempdimapnumwidth plus1em -pnumwidth #5-tempdima#6@thdotsep mu.dotsep mutopnumwidthtocpagenum#1=1#7 @tocindent00pt ł@subsectiontocline20pt2.5pc5pc startsectionparagraph4 @@-2 thmTheorem[section] prop[thm]Proposition thmiTheorem propiProposition lem[thm]Lemma claim[thm]Claim cor[thm]Corollary conj[thm]Conjecture construction[thm]Construction lemconst[thm]Lemma/Construction definition example[thm]Example exampleiExample nota[thm]Notation *thmnTheorem examplestyle 1em 1em totalleftmargin1.0em -1.0em 1 1.0em . .5em examplestyle eg[thm]Example rem[thm]Remark remiRemark defn[thm]Definition defniDefinition questionQuestion[section] (#1) (enumi∙ #1) largesymbols"65
http://arxiv.org/abs/2406.08751v1
20240613022107
3D Building Generation in Minecraft via Large Language Models
[ "Shiying Hu", "Zengrong Huang", "Chengpeng Hu", "Jialin Liu" ]
cs.AI
[ "cs.AI" ]
3D Building Generation in Minecraft via Large Language Models Shiying Hu, Zengrong Huang, Chengpeng Hu and Jialin Liu Guangdong Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China S. Hu and Z. Huang contributed equally to this work. This paper has been accepted by IEEE Conference on Games. June 17, 2024 =================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Recently, procedural content generation has exhibited considerable advancements in the domain of 2D game level generation such as Super Mario Bros. and Sokoban through large language models (LLMs). To further validate the capabilities of LLMs, this paper explores how LLMs contribute to the generation of 3D buildings in a sandbox game, Minecraft. We propose a Text to Building in Minecraft (T2BM) model, which involves refining prompts, decoding interlayer representation and repairing. Facade, indoor scene and functional blocks like doors are supported in the generation. Experiments are conducted to evaluate the completeness and satisfaction of buildings generated via LLMs. It shows that LLMs hold significant potential for 3D building generation. Given appropriate prompts, LLMs can generate correct buildings in Minecraft with complete structures and incorporate specific building blocks such as windows and beds, meeting the specified requirements of human users. Procedural content generation, LLMs, building generation, 3D generation, Minecraft § INTRODUCTION Procedural content generation (PCG) involves automatically generating specific content such as environments and levels in games <cit.>. Recent emergence of LLMs <cit.> have inspired various applications in PCG <cit.>. The capabilities of LLMs are crucial in enabling PCG to fulfil complex requirements from human feedback. For instance, Todd et al. <cit.> fine-tuned LLMs to generate novel and playable Sokoban levels. Sudhakaran et al. <cit.> created MarioGPT to generate Super Mario Bros.'s levels. Instead of using tokenizer <cit.>, MarioGPT enables users to directly write prompts, e.g., “some pipes, little enemies" to obtain expected levels <cit.>. Nasir et al. <cit.> constructed a two-stage workflow to fine-tune GPT-3, in which levels generated by both humans and LLMs are involved for training. Besides, an LLM-based competition called ChatGPT4PCG was held in 2023, aiming at generating Angry Birds levels via LLMs <cit.>. However, all the aforementioned works focused on 2D level generation. Applying LLMs to 3D game content generation is rarely explored. Considering a 3D environment, the additional dimension (e.g., height) necessitates the consideration of extra attributes and a higher-dimensional representation. Minecraft is a suitable testbed <cit.> for 3D building generation for global popularity, well-defined grid structure and high editability. Generative design in Minecraft competition (GDMC) encourages the application of PCG in 3D building generation <cit.>. Traditional JSON lookup methods like Nys et al. <cit.> involved extracting keywords from user inputs and matching them to predefined JSON templates. However, it often fails to understand complex requests. Green et al. <cit.> combined constrained growth and cellular automata method for generating buildings with ASCII representation. Merino et al. <cit.> proposed an interactive evolutionary algorithm, in which users choose preferred buildings. However, it takes time to obtain a suitable building. Huang et al. <cit.> searched city layouts to place redstone-style buildings. Barthet et al. <cit.> generated buildings by searching latent space and constructing rules. Jiang et al. <cit.> incorporated reinforcement learning to generate 3D levels. Furthermore, Maren et al. <cit.> introduced World-GAN to generate 3D scenarios such as deserts. Very recently, Earle et al. <cit.> trained quantised neural radiance fields to facilitate text-to-3D generation. However, functional blocks like doors and indoor scene generation are not addressed <cit.>. Directly incorporating human feedback such as language in 3D building generation in Minecraft remains an under-explored, non-trivial challenge. To investigate into the above topic, this paper proposes a Text to Building in Minecraft (T2BM) model, which leverages capabilities of LLMs in 3D building generation considering facade, indoor scene and functional blocks such as doors and beds. T2BM accepts simple prompts from users as input and generates buildings encoded by an interlayer, which defines the transformation between text and digital content. Based on T2BM, players or designers can construct buildings quickly without repeatedly placing blocks one by one, while the human-crafted prompt is not necessarily detailed. Experiments with GPT-3.5 and GPT4 demonstrate that T2BM can generate complete buildings, while aligning with human instructions. § TEXT TO BUILDING IN MINECRAFT (T2BM) As depicted in Fig. <ref>, T2BM receives a simple user input and outputs a complete building in Minecraft. Initially, T2BM forwards the user's description along with contexts like some detailed building description examples to an LLM for refinement. Then, the refined prompt is combined with a format example and background to structure the final instruction, according to which an LLM produces a building encoded by the interlayer. Subsequently, discrepancies in this interlayer are corrected using a repairer. Finally, T2BM decodes the generated building in Minecraft. T2BM is composed of three core modules, namely input refining, interlayer, and repairing. §.§ Input refining module In T2BM, the prompt to LLMs comprises two main components: the description provided by the user and detailed building description examples. Due to the critical impact of the prompt's quality on the outcomes, it is essential for LLMs to first generate a refined prompt. This preliminary step enhances the overall quality of the resulting buildings. The refining strategy is informed by the prompt engineering guide[<https://github.com/dair-ai/Prompt-Engineering-Guide>], ensuring that the modifications to the building description significantly improve the final output. Below is an example of a refined prompt, given “A wooden house with windows”. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=0pt, belowcaptionskip=0pt] The wooden house is a quaint, homely structure, measuring 25 blocks in length, 15 blocks in width, and 10 blocks in height. The walls, floor, and roof are constructed from spruce_planks, lending the house a traditional, rustic appeal. The walls are interspersed with windows crafted from glass_panes, allowing the sunlight to permeate the house and offering panoramic views of the nearby scenery. §.§ Interlayer between prompts and buildings We introduce an interlayer to transform text to recognisable content. The interlayer is designed in JSON to represent buildings in Minecraft. Since JSON is a lightweight data format that simplifies the representation of complex building structures, it aids LLMs in comprehending and generating precise details of the building, including block locations and type. Besides, the interlayer allows to generate external facade and indoor of buildings simultaneously. Users can also adjust the interlayer before generating the final building, which simplifies the correction of errors and reduces time cost. The interlayer decomposes a building into various components, including the roof, doors, and windows, classified as either structural or functional sections, contingent upon their ability to interact with the player. Structural sections, such as walls and windows, do not possess unique interactive states and thus cannot interact with the player. On the other hand, functional sections like doors and crafting tables, can interact with the player, e.g., doors can be opened or closed, and crafting tables allow players to craft items. Each section contains two basic properties: material and location. Material refers to the used material type. In terms of location, structural sections are characterised by start and end coordinates, which define the extent of their placement from one spatial point to another within the architectural layout. In contrast, functional sections, including doors and crafting tables, are identified by a single placement coordinate, pinpointing their exact location in the structure. In addition, structural sections include the hollow attribute to indicate whether they are solid or hollow, while functional sections feature the state property to specify their particular characteristics. A format example of interlayer is shown below. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt, language=Python] "wall": # structural section "location": "start_x":0,"start_y":0,"start_z":0, "end_x":8,"end_y":6,"end_z":6, "material":"oak_planks","hollow":true , "door": # functional section "location":"x": 4,"y": 3,"z": 3, "material":"oak_door", "state" : "facing":"south","hinge":"left", To generate buildings encoded by the interlayer, the refined prompt is combined with a format example of interlayer and background context. The background contains some descriptions and requirements such as the edition of Minecraft as different editions may have different names of materials. Here is an example of background context [Minecraft version: Minecraft 1.19.2-Forge_43.2.0]. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] The buildings in Minecraft are represented as grid and each block is located in a specific coordinate. Now we want to generate a building in MC but represent with JSon format. Please use identifiers in Minecraft 1.19.2-Forge_43.2.0. §.§ Repairing module LLMs sometimes introduce errors when generating interlayers, such as using upper camel case instead of underscores, or omitting block colours like “white_bed". Therefore, it is essential to perform thorough checks and validations. Four common errors and their corresponding handling strategies are considered. Incomplete name: Prefixes that denote features like colour or material, may be missing. The repairer will automatically complete the name with a default value such as or . Disallowed property: Properties like of bed that cannot be set by Minecraft APIs will be ignored. Illegal material: For certain inputs, LLMs may generate block names that are unavailable in a specific edition of Minecraft. They will be replaced with automatically. Wrong naming style: LLMs may generate material names with incorrect style, such as . After repairing the building in interlayer representation, T2BM feeds this interlayer into Minecraft via generative design python client (GDPC)[<https://github.com/avdstaaij/gdpc>], which facilitates the sequential generation of each section in the interlayer. § EXPERIMENT Given that our building representation relies on the interlayer, the primary focus of our experiment is to investigate the ability of LLMs to accurately generate this interlayer. During the experiment, we evaluate the impacts of various prompts and different LLMs on the generation process. Examples are provided in Supplementary Material[<https://github.com/SUSTechGameAI/Text-to-Building-in-Minecraft>]. §.§ Evaluation criteria A regular building should be fully sealed except for the door and include all specified blocks in the prompt. We proposed Algorithm <ref> to assess the correctness, including the completeness and satisfaction of generated buildings. The flood-fill algorithm <cit.> is integrated into Algorithm <ref> to determine if all blocks are directly connected to the main body of the building. Flood-fill starts from a random initial point, recursively traverses the blocks connected to the current block, and saves all the accessed blocks. The satisfaction is checked by recording if all the materials requested in the prompt are present. The completeness is checked if a block belongs to the main structure and is directly connected to any blocks that form Corner, Edge or Plane. Algorithm <ref> classifies blocks by their connecting structure and removes blocks that do not belong to the main structure. Blocks that lack surrounding blocks to form a connected main structure are also excluded. §.§ Experimental setting and results GPT-3.5 and GPT-4 are chosen for their superior logical processing and the supportive OpenAI API that eases our experimental process. Two settings are considered to examine the contribution of refinement: (i) raw: A simple raw user input is like “A wooden house with windows"; (ii) refined: A good refined description by LLMs after accepting user input (cf. Section <ref>). Both are evaluated 50 times. Results of completeness and satisfaction constraints checking are shown in Tab. <ref>. “C" denotes the ratio of complete outputs among all outputs. “S" denotes the ratio of outputs that satisfy the material constraint, i.e., the user's input, among all outputs. “C∧S" represents the ratio of outputs that meet neither “C" nor “S". “C∧S" and “C∧S" represents the ratio of outputs that only meet “C" and “S", respectively. “C∧S" represents the ratio of outputs that meet both “C" and “S". §.§ Discussions As shown in Tab. <ref>, after the refinement, both completeness and satisfaction have been improved. Moreover, GPT-4 performs better than GPT-3.5. Fig. <ref> shows buildings that met both completeness and satisfaction constraints (C∧S) generated by GPT-3.5 and GPT-4 with raw and refined prompts. §.§.§ Impact of prompt refinement Tab. <ref> illustrates that refining prompts enhances the outputs of both GPT-3.5 and GPT-4. The ratio of generated buildings that satisfy both constraints increases from 0.08 to 0.22 by GPT-3.5 and from 0.12 to 0.38 by GPT-4. With the assistance of the refined prompts, LLMs are capable of identifying specific items within the generated buildings and their respective locations, significantly reducing the workload. Given a list of legal materials or detailed building generation process in the refined prompt, the outputs may be more effective than before. §.§.§ Diminishing returns of completeness Tab. <ref> also implies that generating complete buildings is harder than using legal materials. While the increase in “C" from raw to refined prompts is apparent, it is more significant in GPT-3.5 than GPT-4. Given that 80% of the buildings generated by GPT-4 with raw prompts are complete, the gain owing to refinement is less remarkable. GPT-4 can generate relatively complete buildings even being given raw prompts. It suggests that as LLMs become more sophisticated, they may be able to better grasp subtleties and context. GPT-3.5's performance shows greater improvement with refined prompts, highlighting its higher sensitivity to the quality of input, which necessitates more precise prompts to achieve better performance. § CONCLUSION This paper explores the application of LLMs for 3D building generation considering facade, indoor and functional blocks. A Text to Building in Minecraft (T2BM) model is proposed. T2BM refines user inputs, encodes buildings by an interlayer and fixes errors via repairing methods. Based on T2BM, users can generate buildings by only inputting simple prompts. Our experiments validate the completeness and satisfaction of buildings generated by T2BM. The importance of refinement and performance gaps between LLMs are also investigated. In the future, we will integrate repairing to prompt guidelines and expand T2BM to other game environments. IEEEtran The supplementary material details the process of T2BM and generated building examples. §.§ Interconnected components for completeness check The completeness is checked according to three interconnected components: Corner (cf. Fig. <ref>), Edge (cf. Fig. <ref>) and Plane (cf. Fig. <ref>). In Fig. <ref>, the red block will be checked if it belongs to the main structure and is directly connected to any blocks that form one of the three connecting structures. §.§ Detailed prompts to LLMs in T2BM Below displays a user input and the preset context from which GPT-4 will generate the refined prompt. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] User Input = "A wooden house with windows" Context = """ Please help me expand this context.I will give you an example.You need a detailed description of the objects for the room.And give the material, and the specific orientation of the items. The material should be identifiers in Minecraft 1.19.2-Forge_43.2.0!!!But please don't add" minecraft: Building blocks in minecraft can't have state facing or hinge " Example: " User Input: A wooden house with window on walls. Output: The wooden house is a charming, rustic structure, with dimensions of 30 blocks in length, 20 blocks in width, and 15 blocks in height. The walls, floor, and roof are made of oak_planks, giving the house a warm, cozy feel. The walls are unctuated with windows made of glass_panes, allowing natural light to flood into the house and offering views of the surrounding landscape. The house has a single, large room that serves multiple purposes. In the center of the room, there is a dining table made of spruce_planks, surrounded by four chairs made of the same material. The table is oriented east-west, with two chairs on each side. On the table, there is a flower_pot with a red_tulip, adding a touch of nature to the room. To the north of the room, there is a kitchen area with a crafting_table and a furnace, both made of stone. The crafting table is to the east of the furnace. There is also a chest made of birch_planks, filled with food items like bread and apples. On the south side of the room, there is a comfortable bed made of white_wool, with a bedside table made of jungle_planks. The bed is oriented north-south, with the head of the bed against the southern wall. The bedside table holds a lantern made of iron_ingot and torch, providing a soft light. The house has two doors, one on the east wall and one on the west wall. Both doors are made of dark_oak_planks and are oriented north-south. The eastern door leads to a small porch, while the western door opens to a lush garden. The wooden house is a perfect example of simple, rustic charm. Every block and item has been carefully chosen and placed to create a warm, inviting space that feels like home. " Output: """   Below displays the refined description generated by GPT-4, the preset format and the background. They make up the prompt, which will be passed to GPT-4 to generate the interlayer. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] Refined Description: The wooden house is a quaint, homely structure, spanning 25 blocks in length, 15 blocks in width, and 10 blocks in height. The walls, floor, and roof are constructed from spruce_planks, lending the house a traditional, rustic ambiance. The walls are interspersed with windows crafted from glass_panes, allowing the sunlight to permeate the house and offering panoramic views of the surrounding scenery. The house comprises a single, spacious room that serves various functions. In the middle of the room, there is a coffee table made of birch_planks, encircled by three chairs made of the same material. The table is oriented north-south, with one chair on the north side and two on the south side. On the table, there is a flower_pot with a blue_orchid, adding a splash of color to the room. To the west of the room, there is a kitchen area with a crafting_table and a furnace, both constructed from cobblestone. The crafting table is to the north of the furnace. There is also a chest made of oak_planks, stocked with food items like cooked_beef and carrots. On the east side of the room, there is a cozy bed made of light_gray_wool, with a bedside table made of acacia_planks. The bed is oriented east-west, with the head of the bed against the eastern wall. The bedside table holds a lantern made of iron_ingot and torch, casting a gentle glow. The house has two doors, one on the north wall and one on the south wall. Both doors are made of jungle_planks and are oriented east-west. The northern door leads to a small patio, while the southern door opens to a vibrant flower garden. The wooden house is a beautiful representation of rustic simplicity. Each block and item has been meticulously selected and positioned to create a warm, welcoming space that feels like a true sanctuary. Format: """format of json: Please pay attention to the functional block,it should have only three attributes in position, which means it is a single block. And for a non-functional block, it should have six attributes. Building blocks in minecraft can't have state facing or hinge. For a two-square object such as a bed, you only need to give one coordinate. "name of building": //there can be many parts in a building "name of part1 in this building": "position": //this is an example of functional is false "start_x": an integer, "start_y": an integer, "start_z": an integer, "end_x": an integer, "end_y": an integer, "end_z": an integer, , "material" : "name of block in Minecraft", "hollow" : boolean value indicate whether this is a hollow part, "functional" : false "name of part2 in this building": "position": //this is an example of functional is true "x": an integer, "y": an integer, "z": an integer, , "material" : "name of block in Minecraft", "hollow" : true "functional" : true "state": "facing": "south", "hinge": "left" some arguments for functional block In this format, all item compose a complete building, and for each item, we give some arguments like position to indicate where to place blocks, material to indicate which block to place, hollow to indicate hollow or not, functional to indicate functional or not, state(if functional) to indicate some state argument of block. Background: The buildings in Minecraft are represented as grid and each block is located in a specific coordinate. Now we want to generate a building in Minecraft but represent with JSON format. Please generate a realistic room, with various shapes, complex squares, not simple cuboids. Rooms should have doors, walls, ceilings and floors. Please use identifiers in Minecraft 1.19.2-Forge_43.2.0!!! Below is the interlayer example. Based on this interlayer, buildings can be decoded in Minecraft via the GDPC interface. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] "wooden_house": "walls": "position": "start_x": 0, "start_y": 0, "start_z": 0, "end_x": 24, "end_y": 9, "end_z": 14 , "material": "spruce_planks", "hollow": true, "functional": false , "floor": "position": "start_x": 0, "start_y": 0, "start_z": 0, "end_x": 24, "end_y": 0, "end_z": 14 , "material": "spruce_planks", "hollow": false, "functional": false , "roof": "position": "start_x": 0, "start_y": 10, "start_z": 0, "end_x": 24, "end_y": 10, "end_z": 14 , "material": "spruce_planks", "hollow": false, "functional": false , "north_door": "position": "x": 12, "y": 1, "z": 0 , "material": "jungle_door", "hollow": true, "functional": true, "state": "facing": "south", "hinge": "left" , "south_door": "position": "x": 12, "y": 1, "z": 14 , "material": "jungle_door", "hollow": true, "functional": true, "state": "facing": "north", "hinge": "right" , "windows": "position": "start_x": 2, "start_y": 4, "start_z": 0, "end_x": 22, "end_y": 6, "end_z": 14 , "material": "glass_pane", "hollow": true, "functional": false , "table": "position": "x": 12, "y": 1, "z": 7 , "material": "birch_planks", "hollow": true, "functional": true , "chairs": "position": "start_x": 11, "start_y": 1, "start_z": 6, "end_x": 13, "end_y": 1, "end_z": 8 , "material": "birch_stairs", "hollow": true, "functional": false , "flower_pot": "position": "x": 12, "y": 2, "z": 7 , "material": "flower_pot", "hollow": true, "functional": true , "kitchen": "position": "start_x": 2, "start_y": 1, "start_z": 2, "end_x": 6, "end_y": 1, "end_z": 6 , "material": "cobblestone", "hollow": true, "functional": false , "bed": "position": "x": 18, "y": 1, "z": 7 , "material": "light_gray_bed", "hollow": true, "functional": true, "state": "facing": "west", "part": "head" , "bedside_table": "position": "x": 20, "y": 1, "z": 7 , "material": "acacia_planks", "hollow": true, "functional": true , "lantern": "position": "x": 20, "y": 2, "z": 7 , "material": "lantern", "hollow": true, "functional": true   This is the repairing method and GDPC interface. [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] try: position = part['position'] except Exception: whether = False if not whether: break if position.__contains__('x'): material = part['material'] material = str(material).lower() material.replace(' ', '_') if material == 'door' or material == 'minecraft:door': material = 'oak_door' elif material == 'bed' or material == 'minecraft:bed': material = 'white_bed' elif material == 'iron_ingot' or material == 'canvas': material = 'iron_block' state = ” if part.__contains__('state'): state += '[' for s in part['state']: if (str(part['state'][s]) == 'occupied' or str(part['state'][s]) == 'open'): continue if material.__contains__(s): state += s state += '=' state += str(part['state'][s]) state += ',' state = state[0:len(state) - 1] state += ']' try: editor.placeBlock((x + position['x'], z + position['y'], y + position['z']), Block(material + state)) except Exception as e: whether = False if not whether: break else: material = part['material'] material = str(material).lower() material.replace(' ', '_') if material == 'planks' or material == 'minecraft:planks': material = 'oak_planks' elif material == 'carpet' or material == 'minecraft:carpet': material = 'white_carpet' elif material == 'stairs' or material == 'minecraft:stairs': material = 'oak_stairs' elif material == 'glass_panes': material = 'glass_pane' elif material == 'iron_ingot': material = 'iron_block' try: geometry.placeCuboid(editor, (x + position['start_x'], z + position['start_y'], y + position['start_z']), (x + position['end_x'], z + position['end_y'], y + position['end_z']), Block(material)) except Exception as e: print(f"exception: e") material = 'oak_planks' try: geometry.placeCuboid(editor, (x + position['start_x'], z + position['start_y'], y + position['start_z']), (x + position['end_x'], z + position['end_y'], y + position['end_z']), Block(material)) except Exception as e: whether = False if not whether: break if part['hollow']: try: geometry.placeCuboid(editor, ( min(x + position['start_x'] + 1, x + position['end_x']), min(z + position['start_y'] + 1, z + position['end_y']), min(y + position['start_z'] + 1, y + position['end_z'])), (max(x + position['end_x'] - 1, x + position['start_x']), max(z + position['end_y'] - 1, z + position['start_y']), max(y + position['end_z'] - 1, y + position['start_z'])), Block('air')) except Exception as e: whether = False if not whether: break §.§ Generating buildings via raw and refined prompts This section provides some examples of generated buildings via the prompt “A wooden house with windows" in the case of C∧ S, C∧ S, C∧ S, and C∧S. Figs. <ref> and <ref> show buildings generated by GPT-3.5 with raw prompts and refined prompts, respectively. Figs. <ref> and <ref> show buildings generated by GPT-4 with raw prompts and refined prompts, respectively. According to Tab. <ref>, no building generated by GPT-3.5 with either raw or refined prompts or GPT-4 with raw prompts is in the case of C∧ S. Notably, there are some buildings that don't have indoor scenes such as Figs. <ref> and <ref>. §.§ Generating buildings via simple prompts Four examples of buildings generated via T2BM given simple prompts are presented. §.§.§ Generate houses using the prompt “A woolen house with a cone roof" Fig. <ref>. §.§.§ Generate houses using the prompt “A white house with a glass roof" Fig. <ref>. §.§.§ Generate houses using the prompt “A wooden house with a stone made roof" Fig. <ref>. §.§.§ Generate houses using the prompt “A stone house with a door and a wooden roof" Fig. <ref>. §.§ Generating buildings via detailed prompts Four examples of generated buildings, including pillbox, snowhouse, nest and tower are given. We directly obtain these descriptions as prompts from Minecraft Wiki (<https://minecraft.fandom.com/zh/wiki/Minecraft_Wiki>) and input them to LLMs along with the preset background and format to generate the final building without refining. §.§.§ Generating pillboxes   [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] Building materials: 1-8 groups of boulders or stones, Torch, Iron gate, glass, Pull rod, button (open door) Build steps: A bunker is a single-compartment shelter made entirely of stone,infested_cobblestone, and its walls are usually at least two storeys thick. A 12x12 size bunker provides players with a bed white_bed, orange_bed, pink_bed, gray_bed, crafting_table, table and table. Several furnaces or chest Spaces, iron doors must be mounted on different sides of the building (at least on both sides). Stone button stone_button Or pull lever can be controlled by the inside or outside of the door at the same time. Window glass glass_pane should be installed on each side of the building §.§.§ Generating snowhouses   [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] Building materials: Lots of snow snow_block Blue ice, ice blue_ice, ice Extras materials: Windows made of ice A well with liquid water A furnace (snow blocks do not melt when used, but ice and snow do) §.§.§ Generating nests   [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] Building materials: A lot of dirt or a lot of solid blocks spruce_planks oak_planks. Description: A pillar made of blocks that occupies 1 block of space and is 10 blocks high. Then build a platform on it. Extras materials: Torches wall_torch Roof planks Window glass glass_pane §.§.§ Generating towers   [frame=bt, numbers=none, captionpos=b, abovecaptionskip=5pt, belowcaptionskip=10pt] Building materials: stone. Build steps: Build a 3x3x2 tower. Add a block to each edge of the tower, then add another block to the outside of those blocks, removing the previous block so that the later block is half-overhang. On top of these blocks, build a 5x5 frame. Plus torches wall_torch. Dig a 2x2x1 hole in the center of the tower
http://arxiv.org/abs/2406.08630v1
20240612202209
The neutron-star merger delay-time distribution, r-process "knees", and the metal budget of the Galaxy
[ "Dan Maoz", "Ehud Nakar" ]
astro-ph.HE
[ "astro-ph.HE" ]
]The neutron-star merger delay-time distribution, r-process "knees", and the metal budget of the Galaxy School of Physics & Astronomy Tel Aviv University, Tel Aviv 69978, Israel § ABSTRACT For a sample of 18 currently known recycled millisecond pulsars (rMSPs) that are in double neutron star (DNS) systems, and 42 rMSPs with similar properties that are not in DNS pairs, we analyze the distributions of the characteristic age, τ_c, and the time until merger of the double systems, τ_ gw. Based on the τ_c distribution of non-DNS rMSPs, we argue that τ_c is a reasonable estimator of true pulsar age and that rMSPs are active as pulsars for a long (≳Hubble) time. Among the DNSs there is an excess of young systems (small τ_c) with short life expectancy (small τ_ gw) compared to model expectations for the distributions of τ_c and τ_ gw if, at birth, DNSs have a delay-time distribution (DTD) of the form ∼τ_ gw^-1 (expected generically for close binaries), or for that matter, from expectations from any single power-law DTD. A two-population DNS model solves the problem: the data are best fit by the combination of a "fast" population with DTD going as τ_ gw^-1.9±0.4, and a "slow" population of DNSs, with DTD proportional to τ_ gw^-1.1±0.15. The fast population can be equivalently represented by a shallow power-law DTD with an exponential cutoff beyond τ_ gw∼ 300 Myr. The fast population completely dominates, by a factor A≈ 10-100, the numbers of DNSs that merge within a Hubble time, and that presumably lead to short gamma-ray bursts (sGRBs) and kilonova explosions. Using a simple, empirically based, chemical-evolution calculation, we show that the fast/steep kilonova DTD, convolved with the measured star-formation history of the Milky Way's thick-disk population, naturally reproduces the "knee" structure seen in abundance-ratio diagrams of thick-disk stars, for europium and for two other r-process elements. As a corollary we show, based again solely on empirical input concerning iron production by supernovae, that the Milky Way is nearly a "closed box" that has retained at least ∼70-90% of the metals produced over the Galaxy's lifetime. 0.5cm Stars: neutron; pulsars: general; supernovae: general; Galaxy: abundances, evolution; gamma-ray burst: general § INTRODUCTION The discovery of the double neutron star (DNS) merger event GW170807 via gravitational waves, and the detection of the ensuing kilonova explosion, were a watershed event for understanding explosive element production. Analysis of the data lends strong support to the notion that neutron-star mergers lead to a short gamma-ray-burst (GRB), followed by a kilonova that synthesizes of order 0.05 M_⊙ of r-process elements (see for several reviews). Numerous studies have, since then, addressed the naturally arising followup questions: what is the progenitor population of the merging DNSs? Can the numbers and merger rates of DNSs explain the observed patterns and evolution of r-process abundances in the atmospheres of stars? Do other significant sources of r-process production exist, apart from DNS mergers similar to GW170817 <cit.> Neutron stars (NSs) are directly observable almost exclusively when they are detected as pulsars, when the Earth is within the rotating pulsar beam during the time that a NS is in its active pulsar phase. Uncertainties concerning pulsar, and pulsar-population, physical parameters, lifetimes, and observational selection effects, have challenged attempts to determine the DNS merger rate based on the known Galactic DNS systems <cit.>, and to compare it to the observed rate of events similar to GW170817 plus its kilonova (further challenged by the fact that this has been, to date, the only case of such a combined detection of gravitational waves plus an electromagnetic counterpart; ). As a result, both the predicted and observed rates have been estimated merely to within orders of magnitude (see for a recent estimate). In terms of the potential progenitor populations, DNS systems have been discovered in several types of configurations: in so-called "young pulsars" that are spinning down since their initial formation as high-magnetic-field NS remnants from the core-collapse supernovae of massive stars; in globular clusters (where possibly the two NSs gravitationally captured each other in the dense cluster environment); and in "recycled" or "partially recycled" millisecond pulsars (rMSPs). rMSPs are pulsars that reside in the low period, P, and low period derivative, Ṗ, parameter-space region of the known pulsar population's PṖ diagram <cit.>. All rMSPs are in binary systems, and a majority of the two-dozen or so currently known DNSs contain a rMSPs. It is generally believed that rMSPs were once neutron stars that spun down (possibly even crossing the pulsar "death line" and ceasing to emit as pulsars), but were then spun up via mass transfer from an evolving companion star. The rMSPs then resumed emitting as pulsars, but with much weaker magnetic fields than young pulsars, resulting in low luminosities, low Ṗ, and long lifetimes observable as pulsars. Given these characteristics, Galactic DNSs with rMSPs could potentially constitute tracers of the merging DNS progenitor population. A useful characteristic of any such population is its delay-time distribution (DTD). The DTD is the hypothetical distribution of times between the formation, in a brief burst, of a representative stellar population, and the mergers of the DNS systems that have formed from it. A ∼ t^-1 dependence of the DTD is expected for any compact binary merger process that is driven by gravitational-wave losses (e.g. ), be it DNSs or double white dwarfs (see further below). <cit.> have attempted to constrain the DTD of DNS mergers by analyzing the characteristics of the DNS population, as known at the time of their study, that are not associated with with globular clusters. Their analysed sample included 13 rMSP in DNSs and one DNS with a young pulsar. They assessed that at long delays (t>1 Gyr) the DNS-merger DTD follows the expected t^-1 form, while at earlier times they noted an excess of rapidly merging systems. <cit.> estimated that 40% or more of DNS mergers take place within less than 1 Gyr after system formation. The fossil record of metal enrichment visible in the atmospheres of stars may provide further clues to some of the issues above. Among the r-process elements, europium (Eu) abundances have been measured in the largest number of stars. When plotted in traditional chemical-abundance diagrams showing the mass ratio of element X-to-iron, [X/Fe], versus iron-to-hydrogen, [Fe/H] (logarithmic abundance ratios relative to Solar), stars tend to cluster in a locus often described as a "plateau" of constant [Eu/Fe] from the lowest metallicities, [Fe/H], up to [Fe/H]≈ -0.5, where there is a "knee", above which stars tend to have decreasing [Eu/Fe] with increasing [Fe/H] (e.g. ). The [Eu/Fe] locus is reminiscent of that for stars in the Galaxy's halo and thick-disk component for the ratios [α/Fe] vs. [Fe/H], where α signifies some of the α-elements with mass number that is a multiple of 4, i.e., that can be assembled by adding up consecutive He nuclei to form a nucleus (O, Ne, Mg, Si,...). The plateau+knee structure in [α/Fe] has been traditionally explained in chemical evolution models as a result of the differing timescales of explosion for different types of supernovae (SNe) that enrich the interstellar gas with different elements, gas which then forms subsequent generations of stars. α elements are produced predominantly in core-collapse SNe, whereas iron comes from both core-collapse SNe (CC-SNe) and Type Ia SNe (SNe Ia). The massive stars that explode as core-collapse SNe exist only ≲ 10 Myr, much shorter than star-formation and chemical-evolution timescales, and therefore enrichment by CC-SNe is effectively instantaneous, with the CC-SN rate tracking the star-formation rate. SNe Ia, in contrast, are the outcome of thermonuclear combustions of carbon-oxygen white dwarfs (WDs; see for a review of unsolved issues of SN Ia progenitor systems and explosions). In analogy to the DTD of DNSs mentioned above, the DTD of SNe Ia is the distribution of times between the formation of a stellar population and the explosion of some of its members as SNe Ia. Since SNe Ia require the presence of WDs, it is assumed that the value of the DTD of SNe Ia is zero between the brief burst at time t=0, and up to an "initial" delay t_i∼ 40-100 Myr, simply because this is the minimum time for the formation, through normal stellar evolution, of the first WDs. Measurements based on SN Ia rates over the past decade have shown that, at delays t>t_i and up to a Hubble time, the DTD is a monotonically decreasing function of time close to D(t)∼ t^-1 <cit.>. Such a DTD is expected (see above) in SN Ia progenitor models where the exploding system consists of two WDs in a close binary, that gradually spiral in and merge through the loss of energy and angular momentum to gravitational waves. In an environment that experiences star formation with some rate that depends on time (e.g. in a galaxy), the SN Ia rate as a function of time will be the convolution of the star-formation history with the SN Ia DTD (as defined above, i.e., the "response" of the SN Ia rate to a short star-formation burst). Within the above framework, chemical evolution models have often explained the [α/Fe] knee as a consequence of the initial delay, t_i, until the first SNe Ia appear in a star-forming environment (e.g. , and references therein). Specifically, for a time t_i after star formation begins, only CC-SNe explode and enrich the gas of subsequent stellar generations, and therefore the [α/Fe] ratio, which reflects the weighted mean yields of these elements in the various types of CC-SNe, remains at a constant "plateau". At times t>t_i, the reasoning goes, the SNe Ia "kick in", and their production of iron brings about the monotonic decrease in [α/Fe] with time and with increasing overall metallicity [Fe/H]—the "knee". However, in this picture, the very similar knee structure seen for [Eu/Fe] becomes a puzzle. Gravitational-wave-driven DNS mergers should have a ∼ t^-1 DTD similar to that of SNe Ia, i.e. they are similarly "delayed" with respect to star formation and to CC-SNe. The Eu from DNS mergers and their kilonovae would then "kick in" at the same time as the Fe from SNe Ia, and no knee in the [Eu/Fe] ratio would be expected. A number of studies (e.g. ) have confirmed this conflict using detailed chemical evolution models. These studies have shown that a DTD that falls faster with time, e.g., steeper than t^-1.5, is required in order to produce the observed knee in the [Eu/Fe] ratio. Some authors have proposed that perhaps DNS-merger kilonovae are not the main or only source of Eu—in addition to DNSs there could be a significant contribution from some other, "prompt", channel for the production of Eu, such as from special types (e.g. magneto-rotational) of SNe <cit.>, from r-process-element production in outflows during the common envelope phases of NSs and their massive-star companions <cit.>, from peculiar-magnetar formation <cit.>, or from collapsars <cit.>. Others have resorted to physical effects to explain the Eu abundance patterns, including: neutron-star natal kicks <cit.> of the DNSs that cause the wider DNS systems (with longer delay) to get kicked far from star-forming regions, where they have little effect on Eu enrichment; the effects of diffusion through the ISM of the rare r-process elements on the effective delay in their enrichment <cit.>; or metallicity effects on the DNS merger rates, which in turn are reflected in the DTD <cit.>, effectively steepening it relative to its canonical t^-1 form. <cit.> measured the SN Ia DTD by comparing volumetric SN Ia rates as a function of redshift, as measured by field surveys, to the cosmic star-formation rate density versus redshift z ("the cosmic star-formation history"; SFH). They then used the DTD, the cosmic SFH, and SN element yield estimates, to plot the mean cosmic [α/Fe] vs.[Fe/H] for the Universe as a whole. They noticed that the cosmic [α/Fe] has a "knee" reminiscent of the one seen in such plots for halo and thick-disk stars in the Milky Way. <cit.> realized that the cosmic knee is not the result of any delayed SNe Ia kicking in. Rather, it is caused by the sharp drop in the cosmic star-formation rate, and the closely tracking CC-SN rate, following "cosmic noon" at z∼2. While the CC-SN rate drops steeply at z<2, and with it the production of α elements, the SN Ia rate declines only mildly because it is a convolution of the earlier history with the broad t^-1 DTD. The cosmic knee is thus an outcome of a decline in α-element production, rather than an increase in Fe production. <cit.> argued that the same process, namely a sharp drop in SFH, could be at work behind the [α/Fe] knee seen in abundance diagrams for Galactic stars. Using a simple schematic chemical evolution calculation, after some experimentation, they easily found a SFH that reproduces the "high-α" sequence of thick-disk and halo stars, with a knee in the observed location in abundance-ratio space. This Galactic SFH consisted of a single short burst of star formation, peaking 11.4 Gyr ago, and lasting several Gyr. <cit.> derived the SFH of the thick disk using PANSTARRS1 and Gaia DR2 data for 25,000 WDs, and found a remarkably similar SFH: a few-Gyr-wide sharp burst about 10 Gyr ago. <cit.> using LAMOST data for 250,000 subgiant stars, which together with Gaia eDR3 input provide accurate stellar ages, measured directly the thick disk SFH, finding a single sharp peak in star formation 11.2 Gyr ago, almost exactly the same peak time (although broader in time extent) deduced by <cit.> in order to explain the [α/Fe] knee . Importantly, the SFH of <cit.> shows that [Fe/H]≈-0.5 for the stars that were formed during the peak in the SFH, and therefore strongly suggests that the drop in star-formation rate at later times and higher metallicities is responsible for the knee in [α/Fe] at the same [Fe/H]≈-0.5. Chemical evolution models by <cit.> support this idea. Most recently, <cit.> have combined EAGLE hydrodynamical galaxy formation simulations with the VICE chemical evolution code to conclude that [α/Fe] knees occur only in simulated galaxies that encounter a sustained decline in star-formation rate, and only because of the decline, rather than as a result of a delayed onset of enrichment by SNe Ia, as previously postulated by <cit.>. In this paper, at first we attempt to constrain the DTD of merging DNS systems based on an observed sample of Galactic DNSs. We then consider whether the DTD that we deduce can reproduce the knee observed in plots of [Eu/Fe] vs. [Fe/H]. The paper is arranged as follows. In <ref> we compare the rMSPs in Galactic DNS systems to rMSPs that are not in DNS pairs. We find that, for the DNSs, the distributions of system ages and of times-until-merger, when compared to the simplest expectations, are both strongly skewed toward short times, ≲500 Myr. The observed distributions are well-reproduced by a two-population DNS model, combining DNSs born with a ∼ t^-1 DTD, and a second DNS population born with a ∼ t^-2 DTD (or equivalently, a second population with t^-1 but with a cutoff at t ≈ 300 Myr). The combined DTD that we find predicts that the vast majority (∼99%) of mergers take place within 1 Gyr. We discuss and compare this result to previous work. In <ref> we show how the measured <cit.> SFH for the thick disk, combined with the standard empirically determined DTD for SNe Ia, reproduces the high-α [α/Fe] vs.[Fe/H] stellar sequence. We then show that the same SFH, now combined with the DTDs of the two-population model representing DNS mergers (as found in <ref>), reproduces the observed [Eu/Fe] vs.[Fe/H] locus with its knee (and analogous measurements for two additional r-process elements—gandolinium and dysprosium). In <ref> we briefly discuss some physical implications of our results. Finally, in <ref>, we use the empirical inputs of our chemical-evolution calculations to compare the existing iron mass in the Galaxy to the total accumulated iron mass formed by SNe, finding a close match between the two. § PULSAR AGES, REMAINING DNS LIFETIMES, AND THE DNS DELAY-TIME DISTRIBUTION With the aim of identifying the progenitor population of merging DNSs and characterizing their properties, we have defined, from among all the currently known DNSs, a relatively homogeneous sample, namely, all DNSs in which at least one member is a recycled millisecond pulsar. Out of all DNS systems, we exclude the several DNSs associated with globular clusters, and a single DNS (J1906+0746) that does not clearly reside in the rMSP region of the PṖ plane. The 18 sample members are listed in Table <ref> (see for a recent compilation of known DNSs). All of the DNSs in this sample are in the PṖ region: 17  ms<P<186 ms; 2 × 10^-20 < Ṗ < 2 × 10^-17. lrccrcc Known DNS systems with (partially) recycled millisecond pulsars DNS P_ orb e τ_ gw P_ spin Ṗ_ spin τ_c (days) (Gyr) (ms) (10^-18) (Gyr) J1946+2052 0.078 0.064 0.045 17.0 0.9 0.30 J1757-1854 0.184 0.606 0.076 21.5 2.6 0.13 J0737-3039 0.102 0.088 0.086 22.7 1.8 0.20 B1913+16 0.323 0.617 0.30 59.0 8.6 0.11 J1913+1102 0.206 0.090 0.47 27.3 0.16 2.70 J0509+3801 0.380 0.586 0.58 76.5 7.9 0.15 J1756-2251 0.320 0.181 1.7 28.5 1.0 0.44 B1534+12 0.421 0.274 2.7 37.9 2.4 0.25 J1208-5936 0.632 0.348 7.2 28.7 <0.04 >11.2 J1829+2456 1.176 0.139 55 41.0 0.053 12.3 J1759+5036 2.043 0.308 177 176.0 0.243 11.5 J1325-6253 1.816 0.064 189 29.0 0.048 11.3 J1411-2551 2.616 0.170 466 62.5 0.096 10.3 J0453+1559 4.072 0.113 1450 45.8 0.19 3.90 J1811-1736 18.779 0.828 1800 104.2 0.90 1.83 J1518+4904 8.634 0.249 8840 40.9 0.027 23.8 J1018-1523 8.984 0.228 10^4 83.152 0.11 12.0 J1930-1852 45.060 0.399 5× 10^5 185.5 18.0 0.16 DNS and rMSP properties: binary orbital period, eccentricity, and time until merger (P_ orb, e, and τ_ gw, respectively); rMSP spin period, period derivative, and characteristic age (P_ spin, Ṗ_ spin, and τ_c, respectively. Pulsars (of all types, not just those in DNS systems) are often assigned a "characteristic age", τ_c≡ P /(2Ṗ), also known as the spin-down age, which is expected to be a rough measure of the time elapsed since the formation of the pulsar. There are known sources of uncertainty of this age estimate. First, the estimate is based on assuming a braking index of 3 (appropriate for braking by a dipolar field), whereas the few pulsars with measured indices display a range of values <cit.>. This will typically introduce an error of order unity, or perhaps up to a factor of a few, to the age. Second, the secular acceleration due to the transverse velocity between the pulsar and the observer leads to an overestimate of Ṗ, and hence an underestimate of a pulsar's age <cit.>. This effect can be corrected if the distance and transverse velocity of a pulsar are known, and the correction is typically small (less than a factor 2) for pulsars in the PṖ region of our DNS sample <cit.>. For example, for five DNSs in Table <ref>, for which distances and transverse velocities have been measured, this correction is <10%. Finally, the spin-down age estimate assumes that P is much larger than the initial pulsar period, but if this assumption is wrong then the pulsar could be much younger than τ_c. This could introduce a significant age correction for very rapidly rotating pulsars, with P≲ 10 ms, but for pulsars such as those in DNS systems the correction is again expected to be of order unity <cit.>. In addition to these known sources of uncertainty in the age, there could potentially be changes in magnetic field strength or structure, on time scales shorter than the pulsar age, which would again distort τ_c as an age estimator, but little is known about this possibility. Altogether, we conclude that τ_c could constitute a reasonable estimator, to better than an order of magnitude, of the age of rMSPs, particularly in the DNS region of the PṖ plane, but this hypothesis requires additional tests. To explore the issue, in Fig. <ref> we plot the distribution of τ_c for the DNS sample; and for 42 known rMSPs with a companion that is not a NS (in all cases a WD), and that reside in the same region in the P Ṗ plane as DNSs, namely 10  ms<P<200  ms and 10^-20<Ṗ<3× 10^-17. The DNS pulsars and these non-DNS pulsars should plausibly be similar in their physical characteristics. Several features are apparent. In both samples there is a cut-off in rMSP numbers at τ_c values near the age of the universe. Seven of the 18 DNSs and 13 of the 42 non-DNS systems have τ_c>5 Gyr, but only one DNS and two non-DNS systems have τ_c longer than the age of the universe (yet still smaller than twice the Hubble time). This already argues that τ_c is a good estimator of the true pulsar age (to within a factor of 2 or so). Furthermore, it suggests that many pulsars are active, and thus detectable, for a duration that is at least as long as the Galaxy's age. If indeed rMSPs have long lifetimes as active pulsars, and τ_c is a reasonable estimator of rMSP age, then, for a roughly constant star-formation history of the Milky Way's thin disk <cit.>, to which population most MSPs belong <cit.>, one would expect a roughly uniform distribution of rMSP ages, per linear age interval dN/dτ_ c, up to some cutoff corresponding to the age of the Milky Way. Fig. <ref> shows that the τ_c distribution is indeed extended for both samples. For the non-DNS sample, excluding the highest-age bin (which includes a period of several Gyr before star formation began in the thin disk), the distribution seems to fall mildly with age, but is consistent with being flat or nearly flat. This provides additional support to the notion of longevity of rMSPs and to the approximate reliability of τ_c as an age estimator. The τ_c distribution of the DNSs in Fig. <ref>, however, is different. It shows a large excess of systems at short ages, τ_c<300 Myr, suggesting an over-representation of young systems among rMSPs that are in DNSs. In other words, if τ_c is a reasonable estimator of pulsar age, then the non-DNS τ_c distribution suggests that rMSPs have long (of order Hubble-time) lifetimes as active pulsars. The τ_c distribution of the DNS sample then argues that, as opposed to the non-DNS systems, many DNSs (though not all) have very short lifetimes. We return shortly, below, to the details of the DNS distribution of τ_c. A complementary timescale that we explore next, and that can be estimated (accurately) for all the DNSs, is the time τ_ gw until the merger of each system, driven by the loss of energy and angular momentum to gravitational waves. As opposed to the approximate age, τ_c, i.e. the time since birth of a pulsar, τ_ gw is the precise time until death of a DNS system, determined by its constituent masses, separation, and eccentricity. Table <ref> lists τ_ gw, by increasing order, for the DNS sample. Remarkably, the youngest DNS pulsars (with the lowest τ_c), which constitute the "youthful excess" in Fig. <ref>, tend to also have the lowest values of time–till-merger τ_ gw. At first sight, it is odd that the two timescales are correlated when, at face value, they are physically unrelated (the timescale over which one of the pulsars in a DNS system slows down due to internal neutron-star processes, versus the time until two neutron stars merge because of gravitational wave losses.) We will show, below, that the effect is a result of the particular DTD of DNSs. The suggestion, by Fig. <ref>, of a steady production rate of rMSPs over the Galaxy's history, along with a long lifetime as an active pulsar for each rMSP, raises the possibility of constraining the delay-time distribution of kilonovae, D_ kn(t), directly from the observed distribution of τ_ gw of the DNS sample. A caveat to this is that the observed distribution may have been strongly distorted by selection effects. We examine this question in Appendix A, and conclude that it is unlikely that selection effects play a major role in the τ_ gw distribution, apart from possibly excluding from the sample systems with τ_ gw≲ 50 Myr. Assuming that rMSPs in DNS systems are the progenitor population of kilonovae, different forms of D_ kn will lead to different distributions of τ_ gw. To see this, we adapt to the present context the analysis by <cit.> of the evolution with cosmic time of the separation distribution of a population of binaries that are inspiraling via gravitational wave losses. Suppose a single population of DNSs forms at time t=0 with some distribution of initial separations, which entails a distribution of times-till-merger (or "delay times"), D'(τ'). Systems with delays in the range τ' to τ' +dτ' will migrate, after a time t, to a delay bin τ to τ+dτ in the evolved distribution D(τ,t), where the transformation from τ' to τ is simply τ=τ'-t, resulting from the shortened delay after the elapsed time t, and dτ=dτ'. Conservation of the number of systems requires that D(τ, t) dτ =D'(τ') dτ' , and therefore the distribution of delay times at time t is D(τ,t)=D'(τ + t), i.e. just the original distribution but shifted to earlier delays by t. If we now consider a series of DNS populations, each with the same initial DTD, D'(τ'), tracking the star-formation rate SFR(t) between t=0 and the present age of the Galactic disk, t_0, then the present-day distribution of delay times is dN/dτ_gw = ∫_0^t_0 SFR(t_0 -t) D(τ_gw, t) dt . Let us further suppose, for instance, that the initial DTD is a power law of index α, D'(τ')∝τ'^α, and has an "initial delay" t_i, analogous to the initial delay already mentioned in the context of SNe Ia and double-WD mergers, in  <ref>. In the current context, t_i is the total time elapsed between the formation of a stellar population and the occurrence of the first DNS mergers from that population.[ Strictly speaking, there are several, subtly different, initial delays in the problem: the time t_i, dns, between star-formation and the formation of the first DNS systems that are "ready" to begin their inspiral toward merger (set by the stellar and binary evolution timescales of binary stars that end up as DNS rMSPs); the time t_i, gw between DNS formation and the first DNS merger events (set by the minimum separation and maximum eccentricity of newly born DNSs, leading to the shortest τ_ gw); and the sum of these two timescales, t_i. In Appendix B, we elaborate on the different initial delays and on where and how each should, in principle, be used. In the rest of this paper, however, the maximal value that we consider for t_i (the longest of the three initial timescales), t_i=10 Myr, is still much smaller than the smallest observed value of τ_ gw, making our results insensitive to the choice of the type of initial delay. We therefore do not distinguish among the three different types of initial delay.] Then, for t>t_i, D(τ, t)∝ (τ + t)^α. For an assumed constant star-formation rate in the Galactic disk, SFR(t)= const., over the past t_0=8 Gyr, we then have for the present-day distribution of times-till-merger dN/dτ_ gw∝lnτ_ gw+t_0/τ_ gw,         for α=-1, and dN/dτ_ gw∝τ_ gw^α+1 - (τ_ gw + t_0)^α+1    for α≠ -1. The life-expectancy distribution dN/dτ_ gw is, under these assumptions, essentially a broken power law with smooth-transition break at τ_ gw=t_0, and with indices α at τ_ gw≫ t_0 and α+1 at τ_ gw≪ t_0. This recalls the broken power-law distribution of the binary separations, shown by <cit.>. Figure <ref> shows dN/dτ_ gw for the DNS sample. Using a maximum-likelihood fit to the data, described in Appendix C, we have found (curves in Fig. <ref>) the best-fitting model distributions, as given by Eqns. <ref>-<ref>, for a single power-law DTD, varying the values of the index α. We show models both for fits to all τ_ gw, and only to the data with τ_ gw>10 Gyr, which are expected to behave as a single power law. As noted above, the predicted distribution of τ_ gw, for DNS populations having a DTD that is a single power law, is always a broken power-law, where the break is around the Galaxy's age. In contrast, the observed distribution of τ_ gw, shown in Fig. <ref>, is remarkably close to a single power law, which none of the single power-law DTD distributions predicts. As a result, no matter what range of τ_ gw in the data is fit, the models cannot satisfactorily reproduce the observed distribution. Specifically, for the fits to long τ_ gw shown in the figure, there is a large excess, over the model expectations, of systems with short times until merger. At τ_ gw<100 Myr, the excess is by at least an order of magnitude. For example, integrating Eq. <ref> for the steepest power-law model within the 90% confidence interval (α=-1.13) over 10  Myr<τ_ gw<100 Myr (with the model normalized to produce 18 systems over the entire observed range of τ_ gw), we would expect 0.21 DNSs with 10  Myr<τ_ gw<100 Myr, yet we observe three such systems in our sample. The Poisson probability for this is P(N ≥ 3|λ=0.21)=0.001 and thus this model can formally be rejected at high significance. If we fit a single power-law DTD model to the data over the full τ_ gw range, then the best fit model (thick curve in Fig. <ref>) has α≈ -1.3. This power-law index value is, in a sense, a compromise between the one required to fit the data at τ_ gw≫ t_0 and the one required at τ_ gw≪ t_0. This model nonetheless still underpredicts the number of DNSs with 10<τ_ gw<100 Myr to be 0.55 (instead of 3 observed). We conclude that the agreement of a single-power-law DTD model with the data is, at best, marginal, and it is worthwhile to look for models that provide a better match. To proceed, we have attempted to fit a two-population model, each population with a power-law DTD with a different index, to the observed τ_ gw distribution. Using a Markov-Chain Monte-Carlo (MCMC) procedure, we have tested the likelihood of a family of models with parameters α_1, α_2, and A, consisting of one population of DNSs that are born with a DTD proportional to τ^α_1, a second population with DTD proportional to τ^α_2, and a ratio A between the integrals over time of the two DTDs, from t_i=10 Myr to a Hubble time. Details of the MCMC calculation and its results, as well as of the (minor) effect of the choice of t_i, are in Appendix C. Fig. <ref> (upper panel) shows again the observed dN/dτ_ gw distribution for the 18 DNS systems, now compared to a two-power-law model with parameters close to the most likely ones. Unsurprisingly, the predicted model distributions from each of the two populations, each of which is a broken power law with a smooth break at t_0=8 Gyr, can easily combine to produce a single power law, as observed for the data (and which, somewhat paradoxically, cannot be well reproduced by a single population with a single-power-law DTD, which always predicts a broken-power-law observed dN/dτ_ gw). The best-fit parameters and 1σ uncertainties are α_1=-1.1± 0.15, α_2=-1.9 ±0.4, and log_10(A)=1.95^+0.7_-0.9. We note that the uncertainties quoted here and elsewhere in the paper are statistical and do not include systematics (which most likely dominate the uncertainty), such as unaccounted-for selection effects that most likely reduce the observed number of system with small τ_ gw (see Appendix A). In this two-population picture, the merger rate is strongly dominated by short-lived systems. For example, in the best-fit model, ∼ 99% of mergers are of systems with time until merger <1 Gyr, while only ∼ 1% are of systems with longer life expectancies. In addition to the two-power-law DTD model above, we have also experimented with a model consisting of two populations with power-law DTDs, but with one of the power laws having an exponential cutoff with timescale τ_ cut. Explicitly, the first population has DTD ∝τ^α, and the second population has a DTD ∝τ^-1exp(-τ/τ_ cut). The free parameters of this model are α, τ_ cut and a relative normalization factor A, defined as in the two-power-law model. As seen in Fig. <ref> (lower panel) this two-population, exponential-cutoff, model is equally effective in explaining the observed τ_ gw distribution. From the MCMC calculation (see Appendix C) the best-fit parameters and 1σ uncertainties for this model are α=-1.1± 0.1, log_10(τ_ cut/ yr)=9^+0.4_-0.5, and log_10(A)=1.25^+0.5_-0.8. In this model too, the merger rate is dominated by short-lived systems, although somewhat less than in the two-power-law model—in the best fit version, 93% of the mergers are by systems with τ_ gw<1 Gyr. Based on the Akaike Information Criterion, there is a 94% probability that the more-complex two-population models (each with three free parameters) are preferred over the single power-law model (with its one parameter), and therefore the improved fit of both of the two-population models justifies the additional complexity introduced. Considering the results above, we now return to the observed distribution of τ_c, the characteristic age of the DNS sample, to investigate whether or not it is consistent with the pictures of one or two DNS populations with the indicated power-law DTDs. Let us assume, again, that rMSPs have long lifetimes (≳ Hubble time), and that τ_c is a reliable indicator of age. From among all the DNSs of age τ_c, i.e., DNS systems that were formed a time τ_c ago, we can see only those that, at formation time, had τ_ gw>τ_c, since those with shorter merger times have merged and disappeared. The predicted distribution of ages τ_c is thus dN/dτ_c = SFR(t_0 -τ_c)∫_τ_c^τ_ max D(τ) dτ , where τ_ max is the maximal value of τ_ gw possible, either because of a cutoff in the DTD (e.g. if DNSs cannot remain bound beyond some separation) or perhaps because it becomes difficult to detect DNSs as such beyond some orbital period. For our purposes, we will assume for τ_ max the largest τ_ gw observed in the sample, 5× 10^5 Gyr. For a constant star formation history, it is straight-forward to calculate dN/dτ_c for the various DTD models discussed above. For example, for a single DNS population with DTD proportional to a power law, τ^α, the age distribution is dN/dτ_c∝lnτ_ max/τ_c,         for α=-1, and dN/dτ_c∝τ_c ^α+1 - τ_ max^α+1    for α≠ -1. For the power-law DTD with an exponential cutoff, the integral in Eq. <ref> has no analytic solution, but it is easily evaluated numerically. Fig. <ref> shows again the observed τ_c distribution, but of the DNS sample only, now compared to the model prediction of Eq. <ref>. We plot the predictions for the three models considered above when fitting the τ_ gw distribution: a single-population of DNSs, with a single power-law DTD; two populations, each with a (different) power-law DTD; and two populations, but one with power-law DTD with a cutoff. For each type of model, we show one with parameters within the 1σ range of the best fit to the τ_ gw distribution, that also provides the best possible[Not surprisingly, not all the models that are favored by the fit to τ_ gw also provide a reasonable fit to τ_c. Here (and in Figs. <ref>-<ref>) we choose models with parameters within the 1σ range of the fit to τ_ gw, that also provide a good fit to τ_c, in the case of the two-population models, and the best (yet poor) fit in the case of the single power-law model.] description of the τ_ c distribution. The two-component models simultaneously fit well both τ_ gw and τ_c. A single-power-law model, on the other hand, provides a poor fit to τ_ gw and an even worse fit to τ_c. We can now understand the excess of young DNS systems in the τ_c age distribution, noted in Fig <ref>. In effect, it is actually the result of a deficit, among all the old systems that have formed in the past, of the dominant numbers of small-separation, i.e. short τ_ gw, systems, which have already merged. Naturally, this deficit is smaller or absent in models with DTDs that are more skewed to short τ_ gw, via steeper power laws or cutoffs. In principle, we could constrain the DTD by fitting the above models to the dN/dτ_c distribution, as we previously fit the models to the dN/dτ_ gw distribution, or even fit the DTD models jointly to the two observed distributions. However, the uncertainty in the current age of the systems (as estimated by τ_c) is much larger than the uncertainty in the predicted time until merger (τ_ gw), and we therefore choose to fit the DTD based on τ_ gw alone. Nevertheless, the comparison of the models to τ_c provides an independent test of the model DTDs. The fact that the two-population model indicated by the τ_ gw data reproduces well also the observed excess of DNS systems with small τ_c, strengthens two of our previous conclusions: that τ_c is a reasonable estimator of the system age; and that a single power-law DTD provides a poor fit to the observed data, while a two-population model, dominated by short-lived systems, fits it well. <cit.> already noted an excess, among DNSs, of systems with τ_ gw<1 Gyr, above a ∼τ^-1_ gw functional form that describes well the distribution at larger delays. They found a relatively mild excess, wherein the short delay population is not much larger than the ∼τ^-1_ gw population, while we find an excess that is much larger. The origin of this difference is that, contrary to us, they assumed that rMSPs lifetimes are short compared to the other timescales in the problem, in which case the τ_ gw distribution would simply reflect the DTD of DNSs. We have presented here evidence, however, that rMSP lifetimes are long, in which case the observed τ_ gw distribution is not simply the DTD, but rather it is described by Eq. <ref>. Furthermore, in our DNS sample, which is by now larger than the one considered by <cit.>, there is no longer an excess in the observed dN/dτ_ gw at short τ_ gw above τ^-1_ gw, and it is in fact well-described by dN/dτ_ gw∝τ^-1_ gw. However, we have shown that this is most likely an outcome of the superposition of two populations, each with a different dN/dτ_ gw distribution. <cit.> have recently estimated the DTD of short GRBs by analyzing the SFRs of GRB host galaxies, based on their broad-band photometry. Their best-fit power-law model for the DTD has index α=-1.8±0.4 (90% confidence interval), and α<-1.3 at 99% confidence, remarkably similar to our result for the fast/steep component of the DNS population, which would indeed dominate, by numbers, most of the observed merger events. <cit.> also obtain a constraint on the initial delay of the DTD, t_i>72 Myr at 99% confidence. We suspect, however, that this latter result may not be reliable, as the broad-band photometry they use permits estimating only simplified two-parameter SFHs for each galaxy, which may not afford the time resolution to resolve the DTD on such short timescales. A related issue is that of the host galaxy of GW170817, a passive nearby early-type galaxy (NGC 4993) with mean stellar population age ∼10 Gyr <cit.>. For a two-population DTD of the kind we have deduced here for DNS mergers, only ∼ 10^-3-10^-4 of mergers occur between 10 Gyr and a Hubble time. The DNS merger rate today in the local universe is the convolution of the cosmic star-formation history with the DNS-merger DTD. Considering that the cosmic SFR ∼ 10 Gyr ago (z∼2) was an order of magnitude larger than today, the fraction among all mergers that occur in local galaxies, having delays >10 Gyr, is expected to be correspondingly larger, but still only ∼ 0.1-1%. However, <cit.> have performed integral field spectroscopy near the site of GW170817, and deduce the presence of a significant 1 Gyr-old stellar population there, possibly related to a galactic merger event. Furthermore, among the ten lowest-redshift (z<0.3, lookback-time <3.5 Gyr) sGRB host galaxies in the sample of <cit.> (used for the analysis of ), two have a stellar population mass-weighted age (time between star formation and the time of the sGRB) t_ m∼8-9 Gyr, quite similar to GW170817. Local sGRB hosts similar to NGC 4933 are therefore not unusual, and perhaps such more-recent star-formation episodes in old quiescent galaxies are not uncommon. Finally, it is also important to remember that our DTD is estimated based on a Milky-Way DNS sample, while the actual DNS-merger DTD elesewhere could depend on cosmic time and environment. § HOW TO MAKE KNEES §.§ Chemical evolution calculation To investigate how a "prompt" DTD for DNS mergers and their ensuing kilonovae, as we have deduced in <ref>, above, may be relevant for chemical evolution and the [Eu/Fe] "knee problem" (see <ref>), we follow <cit.> and, as a proof of concept, perform a simple chemical evolution calculation that predicts the evolution of stellar generations in the [X/Fe] vs. [Fe/H] plane with minimal assumptions and essentially no free parameters. Our calculation approximates the thick-disk component of the Galaxy as a single-zone closed box of stars and gas with instant full mixing, and is equivalent to chemical evolution models with no losses of metals due to outflows, no dilution by pristine gas from inflows, and gas depletion time equal to the star-formation time. While likely a highly simplified version of reality, this calculation may nevertheless capture the essence of the chemical-evolution process. As we show in <ref>, at least the approximation of no significant losses of metals from the system is empirically justified. We use actual measurements of Milky Way's thick-disk stellar population to represent the star-formation history, SFR(t). The CC-SN rate tracks the star-formation rate, R_ cc(t)=(N_ cc/M_*) SFR(t), scaled by the number of core-collapsing stars per unit formed stellar mass, N_ cc/M_*. For a standard <cit.> initial mass function (IMF), and assuming all stars more massive than 8M_⊙ explode, N_ cc/M*=0.01. It has been argued (e.g., ) that some fraction of massive stars collapse directly to black holes, without a SN explosion, which would lower N_ cc/M_* by some fraction. However, <cit.> have recently shown that, to date, there is no statistically significant evidence for this so-called "red-supergiant problem", and we therefor ignore such a fraction. The SN Ia rate, R_ Ia, is the convolution of the SFR with the SN Ia DTD, R_ Ia= SFR(t) D_ Ia(t) . For the DTD of SNe Ia, we assume the observationally determined form <cit.>, with an initial delay t_i=40 Myr, followed by an abrupt rise to a falling power law, t^-1.1. D_ Ia(t) is normalized, when integrated over 13.7 Gyr, to either N_ Ia/M_*=(0.0013±0.0001) M_⊙^-1 SNe Ia per unit formed stellar mass, as measured in field galaxies <cit.>; or, N_ Ia/M_*=(0.0031±0.0011) M_⊙^-1, which is the current estimate for the SN Ia DTD normalization in early-type galaxies in clusters and in the field <cit.>. The latest empirical estimate of mean iron mass yield per CC-SN, averaged and weighted over the various main CC-SN types, is y̅_ Fe,cc=(0.058±0.007) M_⊙ <cit.>. For the mean iron yield of SNe Ia we take y̅_ Fe,Ia=(0.7± 0.05) M_⊙ <cit.>. Let us denote with (α/ Fe)_ cc the mean yield ratio between an α element (or elements) and iron in CC-SNe. However, we do not require to know the value of this parameter, as it cancels out in the abundance ratios we consider, which are always relative to solar abundance ratios. The iron masses originating from CC-SNe and from SNe Ia, respectively, and accumulated in stars and in the ISM at time t after initiation of star formation are M_ Fe,cc(t)=∫_0^t R_ cc(t')  y̅_ Fe,cc  dt' , M_ Fe,Ia(t)=∫_0^t R_ Ia(t')  y̅_ Fe,Ia  dt' . The accumulated mass of an α-element is M_α,cc(t)=∫_0^t R_ cc(t')  y̅_ Fe,cc (α/ Fe)_ cc  dt' , where we have assumed, as common, that α-elements are produced predominantly by CC-SNe. We can now calculate [α/Fe](t) and[Fe/H](t) as [α/ Fe]=logM_α,cc(t)/[M_ Fe,cc(t) + M_ Fe,Ia(t)]/M_α,cc(t_⊙)/[ M_ Fe,cc(t_⊙) + M_ Fe,Ia(t_⊙)], and [ Fe/ H]=log M_ Fe,cc(t) + M_ Fe,Ia(t)/M_ Fe,cc(t_⊙) + M_ Fe,Ia(t_⊙), where t_⊙ (the only free parameter in the calculation) is the time at which solar abundance is attained. §.§ The [α/Fe] knee We begin by testing the ability of this simple chemical evolution model to reproduce the well-known "high-α" sequence of thick-disk stars in the [α/Fe] vs. [Fe/H] plane[All abundance measurements we cite here and forthwith are relative to solar abundances from <cit.>]. Fig.<ref> shows, in this parameter plane, the stellar number density of stars from <cit.> having azimuthal action J_ϕ<1500 kpc km s^-1 (i.e. members of the thick-disk and halo populations). The over-plotted curves are the results of our chemical-evolution calculation, using for SFR(t) the empirically determined thick-disk star-formation history of <cit.> based on analysis of WD numbers and ages. The <cit.> SFH is a skewed Gaussian peaking ∼ 10 Gyr ago with a full-width at half maximum of about 1 Gyr. We do not use the more-slowly evolving thick-disk SFH of <cit.>, as we suspect it suffers from contamination by other Galactic populations and by systematic errors, as evidenced, e.g., by many of the stars having ages much older than the Universe. Nevertheless, the data of <cit.> concur that the thick disk was formed in a burst of star formation that peaked sharply ∼ 10 Gyrs ago. Moreover, their data show that the typical stars that formed at the time of the SFR peak had [Fe/H]≈-0.5 and [α/Fe]≈ 0.25, precisely the values at the "knee" in the thick-disk sequence. The drop in [α/Fe] coincidess with the sharp drop in the star formation rate, as predicted by our model and, barring an unlikley coincidence, seems to have no relation to the initial delay, t_i, in the DTD of type Ia SNe. Two of the curves in Fig.<ref> show our calculation for the two possible values of the normalization, N_ Ia/M_*, of the SN Ia DTD, whether in field galaxies <cit.> or in early-type galaxies<cit.> with the usual t_i value of 40 Myr, which recovers the conclusion of <cit.> that the high-normalized early-type-environment DTD of SNe Ia best reproduces the observed high-α sequence of the thick-disk stellar population. This may be a reasonable choice, given the possible similarity between conditions during star formation in the very-young Milky Way and in early type galaxies. We note, however, that any change of parameters that sufficiently raises the contribution, relative to that of CC-SNe, of SNe Ia to the total accumulated iron, will have the same effect, for example, a smaller mean iron yield of CC-SNe, or a fraction less than unity of massive stars that explode, rather than collapsing without a SN explosion. Furthermore, a more mundane explanation is that α-element abundances have systematic uncertainties of roughly ± 0.15 dex (, see further below), which could easily bring to accord the data and the model that has the standard SN Ia normalization of the DTD. The principal success of the model which we point out is the reproduction of the general structure of a sloping plateau plus a knee at roughly the right location in the diagram. We re-emphasize that all of the input parameters to our model are measured from observations; other than t_⊙, our calculation has no free parameters. Our calculated evolution track demonstrates that the knee is a direct consequence of the sharp decline, ∼10 Gyr ago, of the SFR and of its closely tracking, α-element-producing, CC-SNe. The knee has little to do with the initial SN Ia delay t_i. To illustrate this point, Fig.<ref> shows also our abundance evolution curves for two values of t_i: 40 Myr, which is often assumed, being the time required for stellar evolution to produce the first, most massive, WDs; and 100 Myr (also often used in chemical evolution models), which is an initial delay value that would already be in tension with SN Ia DTD measurements, e.g. <cit.>. These two extremes for the value of t_i have only a weak influence on the location or shape of the knee. Rather than resulting, as often claimed, from a rise in SN Ia rate and iron production, the knee is the result of a decline in the rate of SNe Ia, and the iron that they produce, yet a decline that is milder than that of the CC-SNe. This confirms the explanation of the knee proposed by <cit.>, based on a hypothesized SFR that reproduced the abundance-ratio measurements (a hypothesized SFR that matches closely the empirical one we use here). As noted above, this result is supported by the data of <cit.> and by the theoretical study by <cit.>, who found, using detailed galaxy formation simulations and chemical evolution modeling, that abundance-diagram knees in realistic galaxies are always the result of SFR declines, and not of delayed SN Ia rate "kick-ins". §.§ The [Eu/Fe] knee Having established that our chemical evolution calculation, with its all-empirical input parameters, reproduces the "high-α" sequence of thick-disk stars in the [α/Fe] vs. [Fe/H] plane, we now turn to the r-process elements, and attempt to reproduce their observed abundance sequence, informed by our findings in <ref> regarding the DTD of DNS mergers. To calculate, for instance, Eu-abundance evolution, we replace the core-collapse SN rate from the previous calculation with a DNS-merger-driven kilonova rate, given by the convolution of the SFR with a DTD for kilonovae, R_ kn= SFR(t) D_ kn(t). As already discussed in <ref> and, in more detail, in Appendix B, D_ kn(t)=0 at low t, up to an initial delay t_i, dictated by the stellar and binary evolution parameters of the merging binary systems. The single-power-law DTD models we have considered rise abruptly from zero to a maximum at t_i and then fall off as t^α. The two-DNS-population models considered in <ref> have a DTD that is either the sum of two single-power-laws with indices α_1 and α_2, and normalization ratio A, or the sum of a power-law with another power law with an exponential cutoff, τ^-1exp(-τ/τ_ cut), again with a number ratio A between the populations. For the purpose of our chemical evolution calculation, t_i represents the time elapsed between the explosions of the bulk of the iron-producing core-collapse SNe from a stellar generation, and the first kilonovae from DNS mergers from that stellar generation (see Appendix B). Therefore, t_i could be close to zero <cit.>, or it could be somewhat longer, e.g. due to additional binary-evolution timescales associated with DNSs, compared to isolated NSs from SN explosions. We explore a range of reasonable values for t_i, from 1 to 10 Myr. In analogy to Eq. <ref>, the accumulated Eu mass from kilonovae is M_ Eu,kn(t)=∫_0^t R_ kn(t')  y̅_ Eu  dt' , where y̅_ Eu is the mean Eu yield per kilonova explosion, which we do not need to know for our calculation, as the end result is normalized to solar abundance ratios. We then have [ Eu/ Fe]=logM_ Eu,kn(t)/[M_ Fe,cc(t) + M_ Fe,Ia(t)]/M_ Eu,kn(t_⊙)/[ M_ Fe,cc(t_⊙) + M_ Fe,Ia(t_⊙)]. Fig. <ref> shows measured Eu stellar abundances as compiled from the Stellar Abundances for Galactic Archaeology (SAGA) database <cit.>. A shortcoming of these data, for the current application, is that they are not labeled according their Galactic kinematic component, and thus presumably could include thin-disk, thick-disk, and halo stars, blurring the possibly different evolution locii and their salient features. Furthermore, this large compilation is heterogeneous and difficult to separate according to precision and accuracy. <cit.> have published Eu abundances for a sample of 18 strictly thick-disk stars. For most of these stars, <cit.> have further reported abundances of the elements gandolinium (Gd) and dysprosium (Dy), which like Eu, are also predominantly produced through the r-process, and may therefore also come from kilonovae and follow the same abundance patterns (all three elements are neighbors in the periodic table and are likely formed at the same sites). We show in Fig. <ref> the abundance ratios for all three r-process elements for these thick-disk stars. The r-process-element measurements in Fig. <ref> illustrate the morphology of this abundance sequence, with a plateau-plus-knee structure similar to that of α-elements. This, even though kilonovae have traditionally been expected to have a DTD with a broad t^-1 form similar to SNe Ia, rather than tracing the rate of CC-SNe like α-elements (see  <ref>). We overplot in Fig. <ref> curves with the outputs of our simple chemical evolution calculation for the r-process elements, using for the DTD of kilonovae several of the single-population and two-population DNS models that we have constrained in <ref>, based on the observed Galactic DNS population. The overall structure seen in the diagram, particularly the plateau and the knee, can be reproduced with the two-population DTDs that we have assumed. Either the steep power-law component (∼ t^-2), in the two-power-law models, or the exponential-cutoff component, in the cutoff models, can serve to limit the time, after star formation and core-collapse SNe, during which r-process elements are produced, as compared to the extended ∼ t^-1 form of the DTD of SNe Ia. We further show in Fig. <ref> (dotted curve) that a single-power-law DTD of the form ∼ t^-1 for kilonovae fails to produce the plateau-plus-knee structure, irrespective of t_i (as has been previously known, see  <ref>). Fig. <ref> shows also the model [α/Fe] curve that reproduces the data in Fig. <ref>. It is important to realize that the quantity which [α/Fe] actually measures, at a given cosmic time, or at a given metallicity [Fe/H], is the fraction of the total accumulated iron mass in a star that was created by CC-SNe by that time, relative to that fraction in the Sun (see Eqns. <ref>-<ref>). Indeed, it is easy to see that, at a very early time in the SFH, before any material enriched by SNe Ia has been incorporated into a star (i.e. at t<t_i for SNe Ia), the fraction of the iron from CC-SNe is one, and therefore [α/Fe]^-1 at that time is simply the fraction of iron in the sun contributed by CC-SNe. This is the reason that all of the different α-elements that are produced predominantly by CC-SNe (e.g. O, Mg, Ca) have nearly identical loci in the [α/Fe] vs. [Fe/H] plane (even though their absolute abundances relative to Fe, when not normalized to solar ratios, vary greatly). Similarly, if Eu were produced promptly in the explosions of massive stars that closely tracked the SFR, one would expect [Eu/Fe] to behave identically to [α/Fe]. In our model for the kilonova DTD, where kilonovae are somewhat delayed with respect to CC-SNe, the [Eu/Fe] evolution curve is always below the [α/Fe] curve, as the kilonovae need to "catch up" with the core-collapse SNe and the iron that they have already contributed at any given time. Examining Fig. <ref> in this context, it is interesting to note that, in the range starting at [Fe/H]>-1, for which [Eu/Fe] data exist specifically for thick disk stars, and up to the knee, the observed [Eu/Fe] values appear to be (with a large scatter) higher than [α/Fe] by ∼ 0.05-0.15 dex. An offset above the α-elements of the thick-disk r-process points continues also toward [Fe/H]=0 and beyond. An [Eu/Fe] ratio higher than [α/Fe] at a given [Fe/H] is something that our simple model cannot reproduce, as already noted above, unless we introduce a time- or metallicity-dependent Eu production in a way that will mimic the α-elements locus. This limitation is true for all Galactic chemical evolution models, for the reasons noted above. However, the ∼ 10-40% excess in Eu abundance is quite possibly a systematic error in the measurements and/or in the conversion of Eu absorption-line equivalent widths to abundances, by means of models for low-metallicity stellar-atmospheres. <cit.>, for instance, note that even α-element abundances suffer from systematic uncertainties of ±0.15 dex, as evidenced by the plateau offsets among different α elements. For r-process elements, the uncertainties could well be larger. For example, <cit.> estimate typical systematic uncertainties in their own measured Eu abundances of ±0.10 dex. <cit.> compared the element abundance estimates derived autonomously by six different research groups from the same spectral data for four stars. For Eu, the estimates of the groups differed over a range of 0.15 to 0.47 dex for the four stars, larger than the offsets between the Eu plateau and the various model curves in Fig. <ref>. Given the current accuracy of r-process abundance estimates, we do not consider it presently warranted to add to our calculation any complexity that might explain the offsets, for instance, a dependence of r-process yields on metallicity. § DISCUSSION We have shown that the general characteristics of the locus of thick disk stars in the [Eu/Fe] vs [Fe/H] diagram, particularly the knee at [Fe/H] ∼ -0.5, can be reproduced by a simple chemical evolution model, based only on empirical input for the thick-disk SFH and for SN Ia and CC-SN parameters, but with a DTD for kilonovae, dominated by short-lived NS binaries, having a steep, ∼ t^-2, power-law form or, alternatively, ∼ t^-1 with an exponential cutoff with characteristic time τ_ cut≈ 300 Myr. At delays ≳ 1 Gyr, most of the DNS mergers are due to a sub-dominant population with DTD ∼ t^-1. The case for the existence of such a kilonova DTD is motivated by our analysis, in <ref>, of the observed distributions of spin-down time, τ_c, and time until merger, τ_ gw, of the known systems of rMSPs in DNSs. This DTD for DNS mergers and kilonovae suggests that there may be more than one formation channel for DNS systems in the field (in addition to a formation channel in globular clusters). In the dominant channel, the formation of bound DNSs, for whatever physical drivers behind it, prefers short periods. The DTD implies a distribution of the initial DNS separations, a, that is either very steep, dN/da ∝ a^-k where k ≈ 4-6, or that there is a cut-off in the initial separation distribution at a ∼ 2R_⊙ (corresponding to a merger time of ∼300-500 Myr for two NSs). The sub-dominant channel, on the other hand, has a roughly constant distribution per logarithmic interval of separation, at least up to a separation of ∼ 50 R_⊙ (corresponding to a merger time of ∼ 10^5 Gyr). A number of previous works have proposed that there are two, or even more, different formation channels for DNSs, based on the observed properties of Galactic DNSs <cit.>. We speculate that the strong preference for an initial DNS separation of ≲ 2R_⊙ could be the outcome of some physical requirement on the final binary-evolution phase of the system before DNS formation. For example, such a small separation could be required for (or be the result of) the second pre-SN star undergoing enough mass stripping or depletion by the first NS, such that its explosion is weak, with a low kick that leaves the system bound (as in the scenario of ; see also for recent evidence of small NS kicks in DNS formation in dwarf galaxies). In any event, our finding here that the large majority of DNSs that merge within a Hubble time have a distribution of separations that are strongly biased toward small ones, with correspondingly short times until merger, may provide one more clue toward understanding the formation of bound DNS systems, the explosion of some them as kilonovae, and their role in cosmic element production. § EPILOGUE: THE MILKY WAY'S IRON BUDGET Our simple chemical evolution calculation, above, which reproduces remarkably well the observed plateau-plus-knee sequences of α- and r-process elements, may be criticized on the grounds that it is oversimplified. In particular, our calculation assumes that the Galaxy is a "closed box", in which all explosive nucleosynthesis products have been retained, whereas metal-depleting galactic winds are thought to actually be important to the chemical evolution budget. To address the closed-box question, we compare here, using the same empirical parameters we have used for our chemical evolution calculation, the total mass of iron residing today in the Galaxy to the total iron mass produced by SNe in the course of the Galaxy's lifetime. The Galaxy consists of the following baryonic components. The stars, with total mass M_*, have a mean iron abundance Z̅_*, Fe≈(0.75± 0.25) Z_⊙, Fe of the solar iron mass abundance, Z_⊙, Fe=0.00137 <cit.>. This value and uncertainty range for the mean abundance of stars can be seen in <cit.>, who compared the stellar metallicity distributions in several large surveys, and found systematic differences within this range, resulting from the differing Galactic regions probed and the different sample selection criteria of those surveys. The cold interstellar medium (ISM) constitutes a fraction f_ ism≈ 0.22± 0.02 of the stellar mass <cit.>, and has a metal abundance of non-refractory elements that is solar to within a few per cent <cit.>. However, at least 90% of the iron in the ISM is depleted onto dust grains <cit.>. We can therefore safely account for the total iron mass in the ISM (whether in the gas or within dust particles) as M_* f_ ism Z_⊙, Fe. The circumgalactic medium has a mass comparable to the stellar mass, f_ cgm≈ 1.0± 0.2, and a mean abundance Z̅_ cgm, Fe≈ (0.5±0.1) Z_⊙, Fe <cit.>. The total iron mass today in the Milky Way is thus, M_ Fe,0=M_*(Z̅_*, Fe/Z_⊙, Fe+f_ ism+f_ cgmZ̅_ cgm, Fe/Z_⊙, Fe)Z_⊙, Fe≈0.0020M_*   . The iron mass produced by SNe that have exploded over a Hubble time is proportional to the formed stellar mass. For any not-extremely young stellar population, the formed stellar mass is M_*/(1-r), where r=0.4 (for a IMF, as consistently assumed throughout) is the "recycling factor", i.e. the fraction of the mass from a newly formed stellar population that is lost to stellar winds and SN ejecta. The total iron mass formed over the Galaxy's history is then, M_ Fe, formed=M_*/1-r(N_ cc/M_*y̅_ Fe, cc + N_ Ia/M_*y̅_ Fe, Ia)≈0.0025M_*   , where the SN numbers per formed stellar mass and the mean iron yields per SN are as previously defined. The fraction of the iron mass produced that is still bound in the Galaxy is therefore M_ Fe,0/M_ Fe, formed=0.81±0.10  , where the uncertainty is the root-mean-square of a Monte-Carlo calculation where each of the input parameters is allowed to vary randomly with a flat probability over its systematic uncertainty range, noted above. The result could be even higher if, e.g., some fraction of massive stars collapse without a SN explosion; or somewhat lower if, as we have seen some evidence here, the Hubble-time-integrated number of SNe Ia per formed stellar mass of thick-disk stars (which constitute some fraction of the Galaxy's stellar mass) is higher than for other populations. In any event, it is hard to avoid the conclusion that the Milky Way has retained ∼70-90%, and possibly all, of the metals that have been created within it. About 50% of those metals are currently locked within stars, 15% are in the interstellar medium, and 35% are in the circumgalactic medium. Thus, even if the Galaxy is not a truly closed box, it is a fairly well-sealed one. No less remarkable is the fact that the current and the formed iron masses, estimated in completely unrelated measurements and environments (abundances in Galactic stellar atmospheres and interstellar gas, versus SN rates and iron yields in both nearby and distant galaxies), turn out to be so close to each other. This suggests that we have not missed any major iron-producing population, and that we may have a good quantitative understanding of the processes involved in iron making. § ACKNOWLEDGEMENTS We thank Vicky Kaspi, Amir Levinson, and Paz Beniamini for useful discussions and comments. This paper is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Seventh Framework Programme, grant agreement No. 833031 (PI: Dan Maoz) and grant agreement No. 818899 (PI: Ehud Nakar). § APPENDIX A POSSIBLE SELECTION EFFECTS IN THE TIME-TILL-MERGER Τ_ GW DISTRIBUTION In order to safely use the observed τ_ gw distribution to constrain the DNS merger DTD, we need to understand if there are any selection effects in the discovery of pulsars, in general, and rMSP DNSs in particular, that are correlated with τ_ gw. There certainly do exist pulsar selection effects related to the rMSP properties, such as the beam opening angle and the radio luminosity, but these do not appear to be correlated with τ_ gw. Another type of selection effect may have to do with the location of a DNS in the Galactic disk. Younger systems are more likely to be closer to their birth location near the Galactic plane, although this is probably insignificant for systems with low center-of-mass velocity, as seems to be the case for quite a few DNSs <cit.>. Systems near the plane are harder to detect, and the sky coverage of the various pulsar surveys as a function of the height above the plane is not necessarily uniform. Finally, a well-known selection effect is that systems with high orbital acceleration are harder to detect due to Doppler smearing of a pulsar signal <cit.>. This results in a reduced sensitivity to systems with small period and large eccentricity, i.e., to system with small values of τ_ gw. These selection effects works in the direction opposite to the observed effects that we see, of an excess of short τ_ gw and short τ_c systems. If these selection effects are significant in the detection of the DNS sample, then the excess we have noted is an underestimate of the true excess. The combination of the above selection effects have been studied by various authors. For example, <cit.>, in order to estimate the total Galactic DNS merger rate, have simulated the detection, by each of the DNS surveys, of every known DNS that will merge within a Hubble time. For each such DNS, they estimate what is the fraction of a population of DNSs with similar properties that would be detected by the surveys that were carried out till now. They find that indeed, on average, DNSs with lower τ_ gw are harder to detect, but not by much (see their table 4). Although we cannot rule out the possibility of unknown selection effects, it seems that the available sample of rMSP DNSs can provide a representative sample of the Galactic distribution of τ_ gw, bearing in mind that DNSs with small values of τ_ gw might be somewhat under-represented in our sample (which of course only strengthens the case for an excess of DNSs with small τ_c and small τ_ gw). A complete sample of τ_ gw could also serve to constrain the value of t_i, but we do not attempt this, as our current observed sample may suffer from selection effects that discriminate against DNSs with lifetimes ≲ 50 Myr. § APPENDIX B THE DIFFERENT TYPES OF "INITIAL DELAYS" IN THE DTD, WHERE AND HOW TO USE THEM As already noted in footnote <ref>, there are a number of different "initial delays" that can be defined when considering the DTD of DNS mergers and, in principle, different variants of the initial delay need to be used in different aspects of the calculations in this paper. One initial delay is the time t_i, dns, between star-formation and the formation of the first DNS systems, i.e. the formation of the two neutron stars in the binary following the SN explosions of their progenitor massive stars, where in the process, at least one of the NSs was spun up into a rMSP via accretion from its pre-NS companion. At time t_i, dns, set by the stellar and binary evolution timescales of binary stars that end up as DNS rMSPs, the first DNS system begins its inspiral toward merger. A second type of initial delay is the time t_i, gw between DNS formation (as defined above) and the first DNS merger events. The earliest merger events are those in which the (small) separation and (high) eccentricity of the newly born DNSs lead to the shortest τ_ gw. Finally, the sum of these two timescales, which we defined as t_i in <ref>, is the times from star formation until the first DNS mergers. When calculating the present-day distribution of times-till-merger dN/dτ_ gw (Eq. <ref>), the t_i implicit in the DTD in the integrand, D(τ_ gw, t), is indeed t_i (the longest of the three types of initial delays defined above), as no mergers with initial delays smaller than this exist. At any given value of present-day observed τ_ gw (not necessarily short ones), there are contributions from the DNS populations that were formed in the past, from time t_i, dns ago to time t_0 ago, when star formation in the Galaxy began, and therefore the lower limit of the integral, should be t_i, dns. When considering a power-law DTD (which arises as a result a power-law distribution of initial DNS separations), the DTD, at τ>t_i, will have the form D(τ)∝ (τ-t_i, dns)^α, and zero otherwise. Thus, changing variables τ-t_i, dns→τ' we obtain Eq. <ref> with a correction to the integrand: SFR(t_0-t+t_i, dns); and to the upper limit of the integral: t_0-t_i, dns. Given that SFR(t) is roughly constant and t_0 ≫ t_i, dns, both of these corrections are negligible. For chemical evolution calculations of the type in  <ref>, the relevant initial delay to use is t_i, gw. The reason is that the main physical process that we attempt to understand is the delay between the metal enrichment by core-collapse SNe and by kilonovae from DNS mergers (or from SNe Ia). As core-collapse SNe, just like DNSs, are delayed from star-formation by stellar evolution (and possibly binary-evolution) timescales, with NS formation coincident with the core-collapse SN, the remaining relevant initial delay is, to zeroeth order, just t_i, gw. Of course, the stellar and binary evolution timescales of single massive stars may well differ from those of massive binaries that eventually produce DNSs that merge and make kilonovae. However, given our current ignorance about these processes, it is reasonable to simply assume that these timescales are equal for the two kinds of explosion. As noted before, the maximal value that we consider for t_i (the longest of the three initial timescales that we have defined), t_i=10 Myr, is less than the smallest observed value of τ_ gw=45 Myr, and therefore the results of our calculations are largely insensitive to the exact choice of the type of initial delay. In the body of the paper, we hence do not distinguish among them, and denote them all as t_i. § APPENDIX C LIKELIHOOD AND MCMC CALCULATION AND RESULTS The log likelihood of a given model distribution, for example a model for the distribution of times until merger, dN/dτ_ gw, in Eqns. <ref>-<ref>, is calculated as ln L=∑_i=1^N ln[dN/dτ_ gw(τ_ gw, i| Θ)] , where Θ is the parameter vector of the relevant model. For a single-power-law DTD model Θ=α, with α the power-law index. For a two-power-law DTD model, Θ=[α_1, α_2, A] (the two power-law indices and the relative normalization, see <ref>). For the cut-off model, Θ=[α, τ_ cut, A]. The summation is over the N=18 DNS systems of the observed sample. This is equivalent, up to a superfluous constant, to calculating the likelihood as the product of the Poisson probabilities, over all values of τ_ gw, in small bins δτ_ gw, where the observed number of systems in each bin is either one (in the N bins with data) or zero (in all the other bins; see e.g. ). We emphasize that the fit is to the observed distribution of τ_ gw values, rather than to any cumulative or binned distribution (such as those we have shown, for illustrative purposes only, in Figs. <ref>-<ref>). Using a Markov-Chain Monte-Carlo (MCMC) procedure, we test the likelihood of parametrized families of models, to find the best-fit parameters for a given family. Figs. <ref> and <ref> show the MCMC "corner plots", showing the model probabilities for combinations of pairs of parameters, for the two-power-law model and for the cut-off models, respectively. For both types of two-population models, we fix the DTD initial delay, t_i, at 10 Myr. Varying its value between 1 Myr and 50 Myr affects slightly the best-fit parameters, but the overall results and conclusions remain unchanged. For example, in the two power-law model, varying t_i in the range 1-50 Myr leaves the best-fit parameter α_1 at ≈ -1.1, and shifts the best-fit values of the other parameters within the ranges α_2=-1.75 to -2.45 and A=60 to 200. We do not attempt to constrain t_i based on the observations, as it is possible that the observed sample at t_ gw≲ 50 Myr suffers from observational selection that disfavors detection of rMSPs in DNSs in tight orbits. § ORCID IDS Dan Maoz https:/orcid.org/0000-0002-6579-0483 Ehud Nakar https:/orcid.org/0000-0002-4534-7089 mnras
http://arxiv.org/abs/2406.07849v1
20240612033655
Bias-Corrected Joint Spectral Embedding for Multilayer Networks with Invariant Subspace: Entrywise Eigenvector Perturbation and Inference
[ "Fangzheng Xie" ]
math.ST
[ "math.ST", "stat.ML", "stat.TH" ]
Visual instrument co-design embracing the unique movement capabilities of a dancer with physical disability Jon McCormack =========================================================================================================== § ABSTRACT In this paper, we propose to estimate the invariant subspace across heterogeneous multiple networks using a novel bias-corrected joint spectral embedding algorithm. The proposed algorithm recursively calibrates the diagonal bias of the sum of squared network adjacency matrices by leveraging the closed-form bias formula and iteratively updates the subspace estimator using the most recent estimated bias. Correspondingly, we establish a complete recipe for the entrywise subspace estimation theory for the proposed algorithm, including a sharp entrywise subspace perturbation bound and the entrywise eigenvector central limit theorem. Leveraging these results, we settle two multiple network inference problems: the exact community detection in multilayer stochastic block models and the hypothesis testing of the equality of membership profiles in multilayer mixed membership models. Our proof relies on delicate leave-one-out and leave-two-out analyses that are specifically tailored to block-wise symmetric random matrices and a martingale argument that is of fundamental interest for the entrywise eigenvector central limit theorem. § INTRODUCTION §.§ Background Network data are a convenient data form for representing relational structures among multiple entities. They are pervasive across a broad spectrum of application domains, including social science <cit.>, biology <cit.>, and computer science <cit.>, to name a selected few. Statistical network analysis has also been attracting attention recently, where random graph models serve as the underlying infrastructure. In a random graph, a network-valued random variable is generated by treating the vertices as deterministic and the edges connecting different vertices as random variables. Various popular random graph models have been proposed and studied in the literature, such as the renowned stochastic block models <cit.> and their variants <cit.>, the (generalized) random dot product graph model <cit.>, the latent space model <cit.>, and exchangeable random graphs <cit.>, among others. Correspondingly, there has also been substantial development regarding statistical inference for random graph models, including community detection <cit.>, vertex classification <cit.>, and network hypothesis testing <cit.>, to name a selected few. The readers are referred to survey papers <cit.> for the details on the recent advances in statistical network analysis. Recently, there has been a growing interest in analyzing heterogeneous multiple networks, where the data consists of a collection of vertex-aligned networks, and each network is referred to as a layer. These multiple networks, also known as multilayer networks <cit.>, arise naturally in a variety of application domains where network data are pervasive. For example, in trade networks, countries or districts are represented by vertices, and trade activities among them are modeled as edges. In these trade networks, vertices corresponding to countries or districts are aligned across layers, but different items and times lead to heterogeneous network patterns. Other examples include fMRI studies <cit.>, protein networks <cit.>, traffic networks <cit.>, gene coexpression networks <cit.>, social networks <cit.>, and mobile phone networks <cit.>, among others. The heterogeneity of multiple network data brings extra challenges compared to their single network counterparts and requires substantially different methodologies and theories. §.§ Overview Consider a collection of vertex-aligned network adjacency matrices (_t)_t = 1^m with low-rank edge probability matrices (_t)_t = 1^m. Suppose rank(_t) = d for all t∈{1,…,m} and they share the same leading eigenspace, i.e., the column spaces of (_t)_t = 1^m are identical. This multiple network model is called the common subspace independent edge model (COSIE) <cit.>, the formal definition of which is introduced in Section <ref>. The COSIE model is a quite flexible yet architecturally simple multiple network model that captures both the shared subspace structure across layers and the layer-wise heterogeneity pattern by allowing different eigenvalues. The COSIE model includes the popular multilayer stochastic block model and the multilayer mixed membership model as important special examples. The common subspace structure of the COSIE model allows us to write _t = _t for some n× d matrix with orthonormal columns for each layer t, and the column space of remains invariant across layers, whereas the layer-specific pattern is captured by a score matrix _t. Our goal is to estimate up to the right multiplication of a d× d orthogonal matrix, particularly under the challenging regime where the layer-wise signal-to-noise ratio is insufficient to consistently estimate using single network methods. Successful estimation of is of fundamental interest to several downstream network inference tasks, including community detection in multilayer stochastic block models and the hypothesis testing of membership profiles in multilayer mixed membership models. The major contribution of our work is threefold: * We design a novel bias-corrected joint spectral embedding algorithm (see Section <ref> and Algorithm <ref> there for details) for estimating . The proposed algorithm carefully calibrates the bias due to (∑_t = 1^m_t)≠∑_t= 1^m_t^2 recursively by leveraging the closed form bias formula and updates the subspace estimator iteratively. Compared to some of the existing de-biasing algorithms, such as the heteroskedastic PCA <cit.>, the proposed algorithm is computationally efficient, numerically more stable, and only requires O(1) number of iterations to achieve rate optimality under mild conditions. * We establish the entrywise subspace estimation theory for the proposed bias-corrected joint spectral embedding estimator in the context of the COSIE model. Specifically, we establish a sharp two-to-infinity norm subspace perturbation bound and the entrywise central limit theorem for our estimator. Our proof relies on non-trivial and sharp leave-one-out and leave-two-out devices that are specifically designed for multilayer network models and a martingale argument. These technical tools may be of independent interest. * Leveraging the entrywise subspace estimation theory, we further settle two multilayer network inference tasks, namely, the exact community detection in multilayer stochastic block models and the hypothesis testing of the equality of the membership profiles in multilayer mixed membership models. These problems are non-trivial extensions of their single network counterparts because we do not require the layer-wise average network expected degree to diverge, as long as the aggregated network signal-to-noise ratio is sufficient. §.§ Related work The recent decade has witnessed substantial progress in the theoretical and methodological development for statistical analyses of single networks, where spectral methods have been playing a pivotal role not only thanks to their computational convenience but also because they directly provide insight into a variety of subsequent network inference algorithms. One of the most famous examples is the spectral clustering in stochastic block models <cit.>, where the leading eigenvectors of the adjacency matrix or its normalized Laplacian matrix are used as the input for K-means clustering algorithm to recover the vertex community memberships. Also see <cit.> and the survey paper <cit.> for an incomplete list of reference. Entrywise eigenvector estimation theory for network models has been attracting growing interest recently because it provides fine-grained characterization of some downstream network inference tasks, such as the exact community detection in stochastic block models <cit.>, the fundamental limit of spectral clustering in stochastic block models <cit.>, and the inference for membership profiles in mixed membership stochastic block model <cit.>. A collection of entrywise eigenvector perturbation bounds under the notion of the two-to-infinity norm have been developed by <cit.> in the context of single network models and beyond, while the authors of <cit.> have further established the entrywise eigenvector central limit theorem beyond the perturbation error bounds. Although the above works are designed for the analysis of single networks and single symmetric random matrices, the ideas and the technical tools developed there are inspiring for the analysis of heterogeneous multiple networks, as will be made clear in Section <ref> below. The analysis of heterogeneous multiple networks has primarily focused on the context of multilayer stochastic block models and the corresponding community detection problem. Existing methodologies can be roughly classified into the following categories: spectral-based methods <cit.>, matrix and tensor factorization methods <cit.>, likelihood and modularity-based methods <cit.>, and approximate message passing algorithms <cit.>. A recently posted preprint <cit.> discussed the computational and statistical thresholds of community detection in multilayer stochastic block models under the so-called low-degree polynomial conjecture. Most of these works, however, did not provide the entrywise subspace estimation theory, and most of them only established the weak consistency of community detection in multilayer stochastic block models, i.e., a vanishing fraction of the vertices are incorrectly clustered, except for <cit.>. The exact community detection in <cit.> is achieved through the minimax rates of community detection, but their proof only guarantees the existence of such a minimax-optimal algorithm. In <cit.>, the authors proposed a two-stage algorithm that achieves the exact community detection, but they require a strong condition on the positivity of the eigenvalues of the block probability matrices. In <cit.>, the authors established the exact community detection of a multiple adjacency spectral embedding method under the much stronger regime where the layer-wise signal-to-noise ratio is sufficient. We defer the detailed comparison between our results with those from some of the above-cited work to Section <ref>. For other multiple network models encompassing the COSIE model, see <cit.> for an incomplete list of reference. Our problem can be equivalently reformulated as estimating the left singular subspace of the low-rank concatenated matrix [_1,…,_m] using the noisy rectangular concatenated network adjacency matrix [_1,…,_m]. There has also been growing interest in the entrywise singular subspace estimation theory for rectangular random matrices with low expected ranks <cit.>. From the technical perspective, the papers <cit.> are the most similar ones to our work, but they considered rectangular random matrices with completely independent entries and studied the so-called heteroskedastic PCA estimator for the singular subspace. In contrast, the rectangular matrix of interest in our work has a block-wise symmetric structure, and we consider a different bias-corrected joint spectral embedding algorithm. Consequently, neither their results nor their proof techniques can be applied to the COSIE model context. We also defer the detailed comparison between our work and those obtained in <cit.> in Section <ref>. §.§ Paper organization The rest of this paper is organized as follows. Section <ref> sets the stage of the COSIE model and introduces the proposed bias-corrected joint spectral embedding algorithm. The main results are elaborated in Section <ref>, including the two-to-infinity norm subspace perturbation bound and the entrywise eigenvector central limit theorem of the bias-corrected joint spectral embedding estimator. In this section, we also settle the exact community detection in multilayer stochastic block models and the hypothesis testing of the equality of membership profiles for any two given vertices in multilayer mixed membership models. Section <ref> illustrates the practical performance of the proposed bias-corrected joint spectral embedding algorithm and validates the entrywise subspace estimation theory via numerical experiments empirically. We conclude the paper with a discussion in Section <ref>. The technical proofs of our main results are deferred to the appendices. §.§ Notations The symbol := is used to assign mathematical definitions throughout. Given a positive integer n, we let [n] denote {1,…,n}. For any two real numbers a, b > 0, let a∨ b := max(a, b) and a∧ b:=min(a, b). For any two nonnegative sequences (a_n)_n = 1^∞, (b_n)_n = 1^∞, we write a_n≲ b_n or b_n≳ a_n or a_n = O(b_n) or b_n = Ω(a_n), if there exists a constant C > 0, such that a_n≤ Cb_n for all n. We write a_n≍ b_n or a_n = Θ(b_n) if a_n = O(b_n) and a_n = Ω(b_n). We use C, C_1, C_2, c, c_1, c_2, etc to denote positive constants that may vary from line to line but do not change with n throughout. A sequence of events (_n)_n = 1^∞ indexed by n is said to occur with high probability (w.h.p.), if for any constant c > 0, there exists a c-dependent constant N_c > 0, such that (_n)≥ 1 - O((m + n)^-c) for any n≥ N_c, where we assume that the number of layers m depends on the number of vertices n in this work. Given two sequences of nonnegative random variables (X_n)_n = 1^∞, (Y_n)_n = 1^∞, we say that X_n is bounded by Y_n w.h.p., denoted by X_n = (Y_n), if for any constant c > 0, there exist c-dependent constants N_c, C_c > 0, such that (X_n≤ C_cY_n)≥ 1 - O((m + n)^-c) for any n≥ N_c, and we write X_n = (Y_n) if X_n = (_nY_n) for some nonnegative sequence (_n)_n = 1^∞ converging to 0. Note that our (·) and (·) notations are stronger than the conventional O_p and o_p notations in the literature of probability and statistics. Given positive integers n,d with n≥ d, let _d denote the d× d identity matrix, _d denote the d-dimensional zero vector, _d denote the d-dimensional vector of all ones, _n× d the n× d zero matrix, 𝕆(n, d) := {∈ℝ^n× d:}, and 𝕆(d):=𝕆(d, d). Let Span() denote the column space of for any rectangular matrix . For any positive integer i, we let _i denote the ith standard basis vector whose ith entry is 1 and the remaining entries are 0 if the dimension of _i is clear from the context. Given a_1,…,a_n∈ℝ, we denote by diag(a_1,…,a_n) the diagonal matrix whose diagonal entries are a_1,…,a_n. Given a square matrix ∈ℝ^n× n, let diag() be the diagonal matrix obtained by extracting the diagonal entries of . If ∈ℝ^n× n is symmetric, we then denote by λ_k() the kth largest eigenvalue of , namely, λ_1()≥…≥λ_n(). If is a positive semidefinite matrix, we follow the usual linear algebra notation and denote by ^1/2 as applying the square root operation to the eigenvalues of . Let ^-1/2:=(^-1)^1/2 if is positive definite. Given a p_1× p_2 rectangular matrix = [A_ij]_p_1× p_2, we let σ_k() denote its kth largest singular value, namely, σ_1()≥…≥σ_p_1∧ p_2(), denote its spectral norm by _2:=λ_1^1/2(), its Frobenius norm by _F = (∑_i = 1^p_1∑_j = 1^p_2A_ij^2)^1/2, its infinity norm by _∞ = max_i∈[p_1]∑_j = 1^p_2|A_ij|, its two-to-infinity norm by _2→∞ = max_i∈[p_1](∑_j = 1^p_2A_ij^2)^1/2, and its entrywise maximum norm by _max = max_i∈[p_1],j∈[p_2]|A_ij|. For any ,∈𝕆(n, d), let sinΘ(, ) := diag{σ_1(),…,σ_d()}. For any matrix ∈𝕆(d), we define its matrix sign sgn() as follows: Suppose it has the singular value decomposition (SVD) = _1diag{σ_1(),…,σ_d()}_2, where _1,_2∈𝕆(d). The matrix sign of is defined as sgn() = _1_2. Let :ℝ^n× n→ℝ^n× n denote the hollowing operator defined by () = - diag() = - ∑_i = 1^n_i_i_i_i for any ∈ℝ^n× n <cit.>. Lastly, given a symmetric matrix ∈ℝ^n× n and a positive integer d≤ n, we let (; d) denote the n× d eigenvector matrix of with orthonormal columns associated eigenvalues λ_1()≥…≥λ_d(). § MODEL SETUP AND METHODOLOGY §.§ Multilayer networks with invariant subspace Consider n× n symmetric random matrices _1,…,_m, where (_t)_t = 1^m have independent upper triangular entries. Each network _t is referred to as a layer, and we model the multilayer networks (_t)_t = 1^m through the common subspace independent edge (COSIE) model <cit.> defined as follows: There exists an n× d orthonormal matrix ∈𝕆(n, d), where d≤ n, and a collection of d× d symmetric score matrices (_t)_t = 1^m⊂ℝ^d× d, such that _t = _t = _t, t∈[m], (A_tij - P_tij:t∈[m],i,j∈[n],i≤ j) are independent centered Bernoulli random variables, and A_tij = A_tji if i > j, where A_tij and P_tij denote the (i, j)th entry of _t and _t, respectively. In the COSIE model, different network layers share the same principal subspace but the heterogeneity across layers is captured by (_t)_t = 1^m. The COSIE model is flexible enough to encompass several popular heterogeneous multiple network models yet enjoys a simple architecture. Two important examples are the multilayer stochastic block model (MLSBM) and the multilayer mixed membership (MLMM) model. In MLSBM, the vertices share the same set of community structures across layers, but the layer-wise block probabilities can be different. Formally, given n vertices labeled as [n] and number of communities K, we say that random adjacency matrices _1,…,_m follow MLSBM with community assignment function τ:[n]→[K] and symmetric block probability matrices (_t)_t = 1^m∈[0, 1]^K× K, denoted by _1,…,_m∼MLSBM(τ;_1,…,_m), if A_tij∼Bernoulli(C_tτ(i)τ(j)) independently for all t∈[m], i,j∈[n], i≤ j, where C_tab is the (a, b)th entries of _t. The community assignment function can be equivalently represented by a matrix ∈ℝ^n× K with its (i, k)th entry being one if τ(i) = k and the remaining entries being zero. The MLMM models generalize the MLSBM by allowing the matrix to have entries between 0 and 1, provided that _d = _n, and its ith row is a probability vector describing how the membership weights of the ith vertex are assigned to K communities. In <cit.>, the authors showed that both the MLSBM and the MLMM models can be represented by COSIE(; _1,…,_m) with = ()^-1/2 and _t = ()^1/2_t()^1/2 for all t∈[m], where ∈ℝ^n× K is the community assignment or membership profile matrix, provided that is invertible. §.§ Joint Spectral Embedding and Bias Correction Our goal is to estimate the invariant subspace spanned by the columns of by aggregating the network information across layers. Note that is only identifiable up to a d× d orthogonal transformation. Given any generic estimator , it is necessary to consider an orthogonal matrix sgn() such that sgn() is aligned with . The matrix sign is often used as the orthogonal alignment matrix between subspaces that are comparable (see, for example, <cit.>). Let _t = [E_tij]_n× n = _t - _t, = [_1,…,_m]∈ℝ^n× mn, = [_1,…,_m]∈ℝ^n× mn, = -, and = [_1,…,_m]∈ℝ^d× md. A naive approach is to use ((1/n)∑_t = 1^m_t; d) as an estimator for , but this method can lead to the cancellation of signals when (_t)_t = 1^m has negative eigenvalues <cit.>. Alternatively, observe that Span() can be equivalently viewed as the leading eigen-space of the sum-of-squared matrix corresponding to its d-largest eigenvalues. It is thus reasonable to consider the eigenvector matrix of corresponding to its d-largest eigenvalues as the sample counterpart of . This naive approach, nonetheless, can generate bias when m is large, as well observed in <cit.> because = + and is a nonzero diagonal matrix. There are two existing approaches that attempt to account for this bias term. One strategy is computing the leading eigenvector matrix associated with the d-largest eigenvalues of (). This approach has been studied in <cit.>, but we show in Theorem <ref> below that the estimation error bound is sub-optimal. The fundamental reason for such sub-optimality is because ()≠. Another method is to iteratively correct the bias in using the diagonal entries of a low-rank approximation to . This approach is referred to as the HeteroPCA algorithm and has been studied in <cit.>. Neither approach is the focus of this work, although our proposed approach below shares some similarities with HeteroPCA in spirit. The starting point of our proposed bias correction procedure is the observation that () = +, where = -∑_i = 1^n_i_i∑_t = 1^m∑_j = 1^nP_tij^2. Formally, the bias-corrected joint spectral embedding (BCJSE) can be described in Algorithm <ref> below. The BCJSE algorithm consists of two-level iterations. At the rth step of the outer iteration, the inner iteration calibrates the bias recursively since _rs can be viewed as a plug-in estimator of using the most updated value of _(r - 1)S and _r(s - 1). Then, the outer iteration updates the estimator for iteratively using the most updated estimated bias _rS. The BCJSE algorithm also generalizes the hollowed spectral decomposition method discussed in <cit.> since when R = 1, Algorithm <ref> returns the eigenvector matrix corresponding to the d-largest eigenvalues of (). Compared to HeteroPCA, BCJSE can be viewed as a “parametric” version of HeteroPCA because it takes advantage of the parametric form of . As will be seen in Section <ref>, the BCJSE algorithm requires only O(1) number of iterations to achieve sharp estimation error bounds under mild conditions. § MAIN RESULTS §.§ Entrywise Eigenvector Perturbation Bound We first introduce several necessary assumptions for our main results and discuss their implications. [Eigenvector delocalization] Define the incoherence parameter μ = (n/d)_2→∞^2. Then μ = O(1). [Signal-to-noise ratio] Let Δ_n = min_t∈[m]σ_d(_t) and κ = max_t∈[m]σ_1(_t)/Δ_n. Then κ,d = O(1), and there exists an n-dependent quantity ρ_n∈(0, 1], such that max_t∈[m],i,j∈[n](A_tij - P_tij)^2≤ρ_n, _max = Θ(ρ_n), Δ_n = Θ(nρ_n), m^1/2nρ_n = ω(log n), and m = O(n^α) for some constant α > 0. Assumption <ref> requires the invariant subspace basis matrix to be delocalized. It is also known as the incoherence condition in the literature of matrix completion <cit.> and random matrix theory <cit.>. It requires that the entries of the invariant subspace basis matrix cannot be too “spiky”, namely, the columns of are significantly different from the standard basis vectors. It is also a common condition from the perspective of network models. For example, when the underlying COSIE model is an MLSBM, Assumption <ref> requires that the community sizes are balanced, namely, the number of vertices in each community is Θ(n/K). When each network layer is generated from a random dot product graph, Assumption <ref> is also satisfied with probability approaching one if the underlying latent positions are independently generated from an underlying distribution. Assumption <ref> characterizes the signal-to-noise ratio of the COSIE model through the smallest nonzero singular values of the score matrices and the so-called network sparsity factor ρ_n. In Assumption <ref>, the requirement κ, d = O(1) is a mild condition. In the context of MLSBM, this amounts to requiring that the number of communities is bounded and the condition numbers of the block probability matrices are also bounded. The sparsity factor ρ_n fundamentally controls the average network expected degree through nρ_n. When m = 1, the COSIE model reduces to the generalized random dot product graph (GRDPG) and Assumption <ref> requires that nρ_n = ω(log n), which almost agrees with the standard condition for single network problems (see, for example, <cit.>). Notably, when m is allowed to increase with n, Assumption <ref> only requires that m^1/2nρ_n = ω(log n) and nρ_n does not need to diverge as n increases. This is because, in multilayer networks, the overall network signal can be obtained by aggregating the information from each layer, so that the requirement for the signal strength from each layer is substantially weaker than that for single network problems. A similar phenomenon has also been observed in the literature of multilayer network analysis (see, for example, <cit.>) Below, we present our first main result regarding the row-wise perturbation bound of the BCJSE through a two-to-infinity norm error estimate. Suppose Assumptions <ref>–<ref> hold. Let be the output of Algorithm <ref> with R, S = O(1). Then sgn() - _2→∞ = ( ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼)/√(n)), sgn() - _2 = (ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼)), where ε_n^(𝗈𝗉) = log n/m^1/2nρ_n + (log n)^1/2/(mnρ_n)^1/2 and ε_n^(𝖻𝖼) = 1/n^S + 1 + 1/n^R. In Theorem <ref>, each of the error bounds consists of two terms. The noise effect term ε_n^(𝗈𝗉) is determined by the inverse signal-to-noise ratio, whereas the bias correction term ε_n^(𝖻𝖼) depends on the number of iterations R and the number of bias calibration steps S of Algorithm <ref>. The two-to-infinity norm perturbation bound can be n^-1/2 smaller than the spectral norm perturbation bound when the invariant subspace basis matrix is delocalized under Assumption <ref>, which is a well-observed phenomenon in a broad collection of low-rank random matrix models (see <cit.>, to name a selected few). The bias term also suggests the following practical approach to select (R, S) in Algorithm <ref> when m = O(n^α) for some α > 0. Since ρ_n≤1, then ε_n^(𝖻𝖼)≤ε_n^(𝗈𝗉) if m^1/2n ≤max(n^S + 1, n^R), and hence, the bias term is dominated by the noise effect term as long as R≥ (1/2)(log m)/(log n) + 1 and S≥ (1/2)(log m)/(log n). One immediate consequence of Theorem <ref> is the exact community detection in MLSBM. We formally state this result in the theorem below. Let = [_1,…,_m]∼MLSBM(τ;_1,…,_m). Suppose that the number of communities K is fixed and there exists an n-dependent sparsity factor ρ_n∈(0, 1], such that the following conditions hold: (i) rank(_t) = K and σ_k(_t) = Θ(ρ_n) for all k∈[K], t∈[m]. (ii) max_t∈[m]_t_max≤ρ_n and ∑_i = 1^n1{τ(i) = k} = Θ(n) for all k∈ [K]. (iii) m^1/2nρ_n = ω(log n) and m = O(n^α) for some constant α > 0. Denote by (n, K) the collection of all n× K matrices with K unique rows and suppose solves the K-means clustering problem, i.e., = _∈(n, K)_RS - _F^2. Let τ:[n]→ [K] be the community assignment estimator based on and _K be the set of all permutations over [K]. Then min_σ∈_K∑_i = 1^n1{τ(i) = σ(τ(i))} = 0 w.h.p.. Since the concatenated adjacency matrix = [_1,…,_m] can be viewed as a rectangular low-rank signal-plus-noise matrix, it is natural to draw some comparison between Theorem <ref> and prior works in this regard. First, we remark that the error bounds in Theorem <ref> are similar to those in Theorem 3.1 in <cit.> in some flavor. Similar results can also be found in <cit.>. The first major difference is that in <cit.>, the authors focus on singular subspace estimation for general rectangular matrices with completely independent entries. Although the COSIE model generates a rectangular matrix after concatenating the adjacency matrices in a layer-by-layer fashion, matrix also exhibits a unique block-wise symmetric pattern. As will be made clear in the proofs, such a block-wise symmetric structure introduces additional technical challenges and requires slightly different technical tools. The second and perhaps most important difference between Theorem <ref> and Theorem 3.1 in <cit.> is the bias correction term. Indeed, the BCJSE algorithm degenerates to the hallowed spectral embedding introduced in <cit.> when S = R = 1, in which case ε_n^(𝖻𝖼) coincides with the diagonal deletion effect in <cit.>. Nevertheless, the BCJSE algorithm is substantially more general than the hallowed spectral embedding and the bias correction term decreases when R and S increase. Considering that both the HeteroPCA algorithm <cit.> and the BCJSE algorithm are designed to completely remove the bias effect, we also draw some comparisons here. The entrywise subspace estimation theory of HeteroPCA is well studied in <cit.>. The theory there works when the number of iterations grows with n. In contrast, our BCJSE algorithm takes advantage of the closed-form formula of the bias term and only requires R, S = O(1) to remove the bias effect (i.e., ε_n^(𝖻𝖼) is dominated by the noise effect term) under the mild condition that m = O(n^α) for some α > 0. We also found in numerical studies that HeteroPCA occasionally requires large numbers of iterations to achieve the same level of accuracy as BCJSE, whereas the numbers of iterations (R, S) of BCJSE are always small and can be determined before starting the algorithm. See Section <ref> for further details. We next compare our results with related literature in COSIE model analyses. In <cit.>, the authors developed a multiple adjacency spectral embedding (MASE) by computing ^(𝖬𝖠𝖲𝖤) = (∑_t = 1^m(_t;d)(_t;d); d). In particular, the authors of <cit.> established the two-to-infinity norm perturbation bound ^(𝖬𝖠𝖲𝖤)sgn(^(𝖬𝖠𝖲𝖤)T) - _2→∞ = {(log n)^1/2/(nρ_n^1/2)} and the exact community detection result in MLSBM based on a MASE-based clustering under the condition that nρ_n = ω(log n) and m is fixed. In contrast, Assumption <ref> only requires that m^1/2nρ_n = ω(log n) and allows nρ_n = O(1). The MASE technique is unable to handle the regime where nρ_n = O(1), and their two-to-infinity norm perturbation bound is not sharp compared to our error bound in Theorem <ref>. In <cit.>, the authors studied a degree-corrected version of MASE (DC-MASE) for the multilayer degree-corrected stochastic block model and obtained a similar two-to-infinity norm perturbation bound for DC-MASE, but they still require that nρ_n→∞ and fails to handle the regime where nρ_n = O (1). Given the close connection between MLSBM and the COSIE model, we also briefly compare the above results with the existing literature in spectral clustering for MLSBM. We focus on a selection of closely relevant results in <cit.> that connect to Theorem <ref> and Theorem <ref>. In <cit.>, the authors proposed to compute ^(𝖡𝖠𝖲𝖤) = ((); d) (i.e., running Algorithm <ref> with S = R = 1) and used it for community detection in MLSBM. Note that the assumptions in <cit.> are that nρ_n = O(1) and m^1/2nρ_n = Ω(log^1/2(m + n)), which are slightly different than Assumption <ref>. The proof of Theorem 1 there implies the spectral norm perturbation bound min_∈𝕆(d)^(𝖡𝖠𝖲𝖤) - _2 = {1/n + log^1/2(m + n)/m^1/2nρ_n}, where the n^-1 term is the remaining bias effect due to diagonal deletion of and coincides with ε_n^(𝖻𝖼) when R = 1, and the log^1/2(m + n)/(m^1/2nρ_n) term is similar to ε_n^(𝗈𝗉). In <cit.>, under the condition that mnρ_n ≥ Clog n for some large constant C > 0, the authors studied the co-regularized spectral clustering (co-reg) method <cit.> and established a sub-optimal Frobenius norm perturbation bound min_∈𝕆(d)^(𝖼𝗈𝗋𝖾𝗀) - _F = {(log n)^1/4/(mnρ_n)^1/4}. See the proof of Theorem 2 there for details. In <cit.>, the authors studied a slightly more general inhomogeneous MLSBM where the layer-wise community assignments can be viewed as noisy versions of a global community assignment. The authors of <cit.> designed a regularized tensor decomposition approach called TWIST and showed that min_∈𝕆(d)^(𝖳𝖶𝖨𝖲𝖳) - _2 = {(log n)^1/2/(mnρ_n)^1/2} under the condition that mnρ_n≥ C(log n)^4 for some large constant C > 0, which is slightly stronger than our Assumption <ref>. See Corollary 1 there for details. The authors of <cit.> focused on the inhomogeneous MLSBM with two communities and balanced community sizes and showed that min_∈𝕆(d)(; d) - _F = {1/(mnρ_n)^1/2}, where = ∑_t = 1^mω_t_t is a weighted average network adjacency matrix and (ω_t)_t = 1^m are the weights. Nevertheless, they require λ_2() > 0 and it is not entirely clear how to choose the weights when layer-wise block probability matrices contain negative eigenvalues. For the ease of readers' reference, we summarize the comparison of the above results in Table <ref> under Assumption <ref>. In Table <ref>, the term “rectangular model” means rectangular random matrices with completely independent entries and low expected ranks, is a generic estimator depending on the context, and is the matrix sign sgn(). Note that the setups in <cit.> are designed for rectangular matrices with independent entries so that their results do not directly apply to the COSIE model. Therefore, we only list their generic error bounds by viewing as a rectangular low-rank random matrix in Table <ref>. A similar comment also applies to <cit.>. It is also worth remarking that most of the aforementioned existing results did not provide entrywise eigenvector perturbation bounds and only addressed the weak recovery of community membership, i.e., the proportion of misclustered vertices goes to zero with probability approaching one. In contrast, Theorem <ref> establishes the exact community detection. The only exceptions in Table <ref> are <cit.> and <cit.>. In <cit.>, the authors designed a two-stage algorithm that achieves the information limit of community detection. However, their strong theoretical results require that the nonzero eigenvalues of layer-wise edge probability matrices are positive, whereas our theory drops such a restrictive assumption. In <cit.>, the authors require that m = O(1) and nρ_n = ω(log n) for the exact recovery of MASE-based spectral clustering, and their conditions are substantially stronger than Assumption <ref>. §.§ Entrywise Eigenvector Limit Theorem In this subsection, we present the entrywise eigenvector limit theorem for BCJSE, which is our second main result. Suppose Assumptions <ref>–<ref> hold. For each i∈[n], define _i = ∑_t = 1^m_tdiag(σ_ti1^2,…,σ_tin^2)_t, _i = diag(∑_t = 1^m∑_j = 1^nσ_tij^2σ_tj1^2,…,∑_t = 1^m∑_j = 1^nσ_tij^2σ_tjn^2), and _i = ()^-1(_i + _i)()^-1, where σ_tij^2:=(E_tij). Let θ_n = (nρ_n∧ 1)^1/2 and further, assume the following conditions hold: (i) m^1/2nρ_n = ω(θ_n(log n)^3/2), mnρ_n = ω((θ_nlog n)^2), and m^1/2(nρ_n)^3/2 = ω(θ_n(log n)^2). (ii) λ_d(_i) = Ω(mn^2ρ_n^3) and λ_d(_i) = Ω(mnρ_n^2). If R > (α + 1)/2, S > (α - 1)/2, and R,S = O(1), then for any fixed vector ∈ℝ^d, _i^-1/2(sgn() - )_i = _i() ()^-1_i^-1/2 + ∑_t = 1^m∑_j = 1^nE_tij_j_t ()^-1_i^-1/2 + (1). Furthermore, _i^-1/2(sgn() - )_i→N_d(_d, _d). Theorem <ref> establishes that the rows of the BCJSE are approximately Gaussian under slightly stronger conditions than those required in Theorem <ref>. In the asymptotic expansion (<ref>), the leading term contains two parts: the second term is a sum of independent mean-zero random variables, and the first term is a more involved quadratic function of . As will be made clear in the proof, the first term is asymptotically negligible when nρ_n→∞ and the second term determines the asymptotic distribution of the ith row of through the Lyapunov central limit theorem (see, for example, <cit.>), in which case _i≈ ()^-1_i()^-1. The challenging regime occurs when nρ_n = O(1) and both the first term and the second term on the right-hand side of expansion (<ref>) jointly determine the asymptotic distribution of the ith row of . Note that the sum of these two terms is no longer a sum of independent mean-zero random variables, and Lyapunov central limit theorem is not applicable. The key observation is that the leading term in (<ref>) can be written as a martingale, and we apply the martingale central limit theorem to establish the desired asymptotic normality by carefully calculating the related martingale moment bounds. See Lemma <ref> for further details. Prior work on the entrywise eigenvector limit results for multilayer networks is slightly narrower than those on the general perturbation bounds and community detection. Given that the COSIE model connects to both rectangular low-rank signal-plus-noise matrix models and multilayer networks, we draw some comparison between Theorem <ref> and existing limit results along these two directions, particularly those in <cit.>. In <cit.>, the authors established the asymptotic normality for the rows of MASE under the stronger condition nρ_n = ω(log n) and m = O(1). As mentioned earlier, MASE fails to handle the case where m→∞ and nρ_n = O(1), whereas the condition of Theorem <ref> allows for m→∞ and nρ_n = O(1). In <cit.>, the authors established the asymptotic normality for the rows of HeteroPCA for general rectangular low-rank signal-plus-noise matrices. The difference between <cit.> and <cit.> is that the authors of <cit.> handled heteroskedasticity and dependence, whereas the authors of <cit.> allowed missing data. However, as mentioned earlier, the rectangular model considered in <cit.> does not contain a block-wise symmetric structure, and the results there do not apply directly to our undirected multilayer network setup. Furthermore, the underlying proof techniques are fundamentally different: in <cit.>, the leading terms in the entrywise eigenvector expansion are sums of independent mean-zero random variables, and their proofs are based on Lyapunov central limit theorem for sum of independent random variables. Nevertheless, as mentioned earlier, in the multilayer undirected network setup with nρ_n = O(1), the leading term in the entrywise eigenvector expansion formula (<ref>) is much more involved and requires the construction of a martingale sequence, together with the application of the martingale central limit theorem. Below, we provide an immediate application of Theorem <ref> to the vertex membership inference in MLMM models. The authors of <cit.> proposed to test the equality of membership profiles of any given two vertices in a single-layer mixed membership model. This inference task for pairwise vertex comparison may be of interest in many practical applications, such as stock market investment studies and legislation. The multilayer version of this problem, which has not been explored before to our limited best knowledge, can be described as follows. Recall that, an MLMM model with a nonnegative membership profile matrix satisfying _n = _d and block probability matrices (_t)_t = 1^m can be equivalently represented by COSIE(;_1,…,_m), where = ()^-1/2 and _t = ()^1/2_t()^1/2, if is invertible. Given any two vertices i_1,i_2∈[n], i_1≠ i_2, we now consider the hypothesis testing problem H_0:_i_1 = _i_2 versus H_A:_i_1≠_i_2, where _i is the ith row of . It is straightforward to see that _i = _j if _i = _j (see, for example, <cit.>), so it is conceivable from Theorem <ref> that under H_0, (_i_1 + _i_2)^-1/2sgn()(_i_1 - _i_2) approximately follows the d-dimensional standard normal distribution, where we use _i to denote the ith row of , and is the output of Algorithm <ref>. Equivalently, this suggests that the quadratic form (_i_1 - _i_2)sgn()(_i_1 + _i_2)^-1sgn()(_i_1 - _i_2) approximately follows the chi-squared distribution with d degree of freedom. Because _i_1, _i_2 are not known and need to be estimated, we now leverage the above intuition and design a test statistic using the plug-in principle. For any t∈[m], i, j∈[n], we define the following plug-in estimators: _t = _t, = [_1,…,_m], _t = _t, P_tij = _i_t_j, σ_tij^2 = P_tij(1 - P_tij), _i = ∑_t = 1^m_tdiag(σ_ti1^2,…,σ_tin^2)_t, _i = diag(∑_t = 1^m∑_j = 1^nσ_tij^2σ_tj1^2,…,∑_t = 1^m∑_j = 1^nσ_tij^2σ_tjn^2), _i = ()^-1(_i + _i)()^-1, T_i_1i_2 = (_i_1 - _i_2)(_i_1 + _i_2)^-1(_i_1 - _i_2). where _i:=_i. Here, T_i_1i_2 is our test statistic for testing H_0:_i_1 = _i_2 versus H_A:_i_1≠_i_2. Let = [_1,…,_m]∼MLMM(; _1,…,_m). Suppose that d is fixed and there exists an n-dependent sparsity factor ρ_n∈(0, 1], such that the following conditions hold: (i) Let θ_n = (nρ_n∧ 1)^1/2. Then n^2ρ_n = ω(log n), m^1/2nρ_n = ω(θ_n(log n)^3/2), mnρ_n = ω((θ_nlog n)^2), m^1/2(nρ_n)^3/2 = ω(θ_n(log n)^2 + log n), and m^1/2(nρ_n)^2 = ω((log n)^3/2). (ii) R,S = O(1), R > (α + 1)/2, and S > (α - 1)/2. (iii) λ_k() = Θ(n) for all k∈[d], min_t∈[m]σ_d(_t) = Ω(ρ_n), max_t∈[m]σ_1(_t) = O(ρ_n), min_t∈[m],i,j∈[n]_i_t_j = Ω(ρ_n), max_t∈[m],i,j∈[n](1 - _i_t_j) = Θ(1), and min_t∈[m],i,j∈[n](1 - _i_t_j) = Θ(1), where _i∈ℝ^d denotes the ith row of . Then: (a) Under the null hypothesis H_0:_i_1 = _i_2, where i_1≠ i_2, we have T_i_1i_2→χ^2_d as n→∞. (b) Under the contiguous alternative hypothesis H_A:_i_1≠_i_2 but _i_1 - _i_2_2 = ω((mnρ_n)^-1/2θ_n^-1), for any arbitrarily large constant C > 0, we have (T_i_1i_2 > C)→ 1 as n→∞. (c) Under the alternative hypothesis H_A:_i_1≠_i_2 but (_i_1 + _i_2)^-1/2()^-1/2(_i_1 - _i_2)→, we have T_i_1i_2→χ_d^2(_2^2), where χ_d^2(_2^2) denotes the non-central chi-squared distribution with d degree of freedom and non-central parameter _2^2. §.§ Proof architecture We now briefly discuss the proof architecture of the main results. Let _RS be the output of the BCJSE algorithm. By definition, _RS = (() - _RS; d), and let _RS be the diagonal matrix of the associated eigenvalues of () - _RS. Denote by _RS = sgn(_RS). Let _1 = (; d) and = _1. Then is the eigenvector matrix of corresponding to the nonzero eigenvalues, and we take as the diagonal matrix of these eigenvalues, such that =. The proof is centered around the following keystone decomposition motivated by <cit.>: _RS_RS - = {() - - }^-1 + ^(RS), where ^(RS) = _1^(RS) + _2^(RS) + _3^(RS) + _4^(RS) + _5^(RS) + _6^(RS) + _7^(RS), _1^(RS) = {() - - } (_RS - _RS)_RS^-1_RS, _2^(RS) = ( - _RS)^-1, _3^(RS) = ( - _RS)(_RS - _RS)^-1_RS_RS, _4^(RS) = (_RS - _RS_RS)^-1_RS_RS, _5^(RS) = (_RS - _RS)_RS, _6^(RS) = {() - - }(_RS_RS^-1 - ^-1_RS)_RS, _7^(RS) = ( - _RS)(_RS_RS^-1 - ^-1_RS)_RS. The above decomposition can be easily verified by invoking the definitions of _1^(RS) through _7^(RS). The proof of Theorem <ref> is based on a recursive relation between _RS_RS - _2→∞ and _(R - 1)S_(R - 1)S - _2→∞, which is formally stated in terms of Lemma <ref> below. Theorem <ref> then follows from an induction over R. Suppose Assumptions <ref> and <ref> hold and R, S = O(1). Then _RS_RS - _2→∞ = (ε_n^(𝗈𝗉)/√(n)) + (1/mn^5/2ρ_n^2) - _RS_2, 1/mn^5/2ρ_n^2 - _RS_2 = (1/n)_(R - 1)S_(R - 1)S - _2→∞ + (1/n^S + 3/2 + ε_n^(𝗈𝗉)/n^2). The proof Lemma <ref> breaks down into establishing error bounds for _1^(RS)_2→∞ through ^(RS)_7_2→∞. Among these terms, the analyses of _2^(RS)_2→∞ through ^(RS)_7_2→∞ are relatively straightforward based on the classical matrix perturbation tools <cit.>. See Appendix <ref> for further details. The analysis of ^(RS)_1_2→∞, which is formally stated in Lemma <ref> below and will be proved in Appendix <ref>, is more involved and requires decoupling arguments based on the elegant leave-one-out and leave-two-out analyses (see <cit.>). Suppose Assumptions <ref> and <ref> hold, and R,S = O(1). Then _1^(RS)_2→∞ = (ε_n^(𝗈𝗉)/√(n)) + (1/mn^5/2ρ_n^2) - _RS_2. Note that because of the unique block-wise symmetric structure of the concatenated network adjacency matrix = [_1,…,_m], where _t's are symmetric, the leave-one-out and leave-two-out analyses cause extra complication compared to those appearing in the literature of rectangular random matrices with completely independent entries. In particular, our analyses show the emergence of complicated polynomials of that are unique to our setting, and the error control of these quadratic forms is established by delicate analyses of their higher-order moment bounds. See Section <ref> for further details. Regarding the proof of Theorem <ref>, the first step is to obtain the following sharpened error bound for ^(RS)_2→∞. Suppose the conditions of Theorem <ref> hold. Then ^(RS)_2→∞ = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2). The proof of Lemma <ref>, which is deferred to Appendix <ref>, is similar to that of Lemma <ref>. The key difference is that we take advantage of the sharp error bounds in Theorem <ref> to obtain the necessary refinement for _1^(RS)_2→∞. The remaining part of the proof of Theorem <ref> is to show the asymptotic normality of _i{() - - }^-1. As mentioned before, this is a nontrivial task because _i{() - - }^-1 cannot be written as a sum of independent mean-zero random variables plus some negligible term. We borrow the idea from <cit.> and rewrite it as a sum of martingale difference sequence, so that the martingale central limit theorem can be applied. This is formally stated in Lemma <ref> below. Let e_ij denote the jth entry of the standard basis vector _i, i.e., e_ij = _i_j = 1(i = j). For any deterministic vector ∈ℝ^d, t∈[m], i,j,j_1,j_2∈[n], j_1≤ j_2, denote by γ_ij() = _j^-1_1_i^-1/2, ξ_tij() = _j_t^-1_1_i^-1/2, b_tij_1j_2() = ι_j_1j_2∑_j_3 = 1^j_1 - 1 E_tj_2j_3{e_ij_1γ_ij_3() + e_ij_3γ_ij_1()} + ι_j_1j_2∑_j_3 = 1^j_2 - 1 E_tj_1j_3{e_ij_2γ_ij_3() + e_ij_3γ_ij_2()}, c_tij_1j_2() = ι_j_1j_2{e_ij_1ξ_tij_2() + e_ij_2ξ_tij_1()}, where ι_j_1j_2 = 1(j_1 < j_2) + 1(j_1 = j_2)/2. For any (t, j_1, j_2)∈[m]× [n]× [n] with j_1≤ j_2, define the relabeling function α(t, j_1, j_2) = 1/2(t - 1)n(n + 1) + j_1 + 1/2j_2(j_2 - 1). Let N_n = (1/2)mn(n + 1). For any α∈[N_n], define σ-field _nα = σ({E_tj_1j_2:α(t, j_1, j_2)≤α}) and random variable Y_nα = ∑_t = 1^m∑_j_1,j_2∈[n],j_1≤ j_21{α(t, j_1, j_2) = α}E_tj_1j_2(b_tj_1j_2 + c_tj_1j_2). Let _n0 = ∅. Then: (a) α(·,·,·) is a one-to-one function from [m]×{(j_1,j_2)∈[n]×[n]:j_1≤ j_2} to [N_n]. (b) For any t∈[m],j_1,j_2∈[n],j_1≤ j_2, E_tj_1j_2 is _nα(t, j_1, j_2)-measurable and b_tj_1j_2 is _n,α(t, j_1, j_2) - 1-measurable. (c) (∑_β = 1^α Y_nβ)_α = 1^N_n is a martingale with respect to the filtration (_nα)_α = 0^N_n - 1, and _i{() - - }^-1_1_i^-1/2 = ∑_α = 1^N_nY_nα + (1). § NUMERICAL EXPERIMENTS This section illustrates the practical performance of the proposed BCJSE algorithm via numerical experiments. Specifically, Section <ref> focuses on the invariant subspace estimation performance, and Section <ref> complements the theory of the hypothesis testing procedure for comparing vertex membership profiles in MLMM established in Theorem <ref>. §.§ Subspace estimation performance Let (_t)_t = 1^m be a collection of 2× 2 matrices defined as follows: _1 = … = _m/2 = ρ_n[ a b; b a ], _m/2 + 1 = … = _m = ρ_n[ b a; a b ], where a > b > 0, m is a positive even integer, and ρ_n∈(0, 1] is the sparsity factor. Let n be a positive even integer, 0 = t_1 < … < t_n/2 = 1 be equidistant points over [0, 1], and for any i∈[n/2], define x_i1 = sin(π t_i/2), x_(n/2 + i)1 = cos(π t_i/2), x_i2 = cos(π t_i/2), x_(n/2 + i)2 = sin(π t_i/2), _1 = [x_11,…,x_n1], _2 = [x_12,…,x_n2], and = [_1,_2]∈ℝ^n× 2. Define = ()^-1/2, _t = ()^-1/2_t()^-1/2, and consider a collection of vertex-aligned network adjacency matrices generated as _1,…,_m∼COSIE(; _1,…,_m). Clearly, λ_2(∑_t = 1^m_t) = 0 but λ_2(_t) < 0 for all t∈[m], so that directly taking (∑_t = 1^m_t; 2) leads to the cancellation of signals. In this experiment, we set n = 80, m = 6400, a = 0.8, b = 0.6, and let ρ_n varies over {0.1, 0.2, 0.3, 0.4, 0.5, 0.6}. Given (_t)_t = 1^m generated from the above COSIE model, we compute ^(𝖡𝖢𝖩𝖲𝖤) using Algorithm <ref> with R = 2 and S = 1. For comparison, we also consider the following competitors: Sum-of-squared (SoS) spectral embedding defined by (; d), the multiple adjacency spectral embedding (MASE) <cit.>, the bias-adjusted spectral embedding (BASE) <cit.>, and the HeteroPCA (HPCA) <cit.>. The same experiment is repeated for 500 independent Monte Carlo replicates. Given a generic embedding estimate for , we take sgn() - _2→∞ as the criterion for measuring the accuracy of subspace estimation. Figure <ref> visualizes the boxplots of the two-to-infinity norm subspace estimation errors of the aforementioned estimates under different sparsity regimes (ρ_n∈{0.1,0.2,0.3,0.4,0.5,0.6}) across 500 independent Monte Carlo replicates. It is clear that when ρ_n ∈{0.2,…,0.6}, HPCA and BCJSE have significantly smaller two-to-infinity norm subspace estimation errors compared to the SoS spectral embedding, MASE, and BASE. When ρ_n = 0.1, the average performance of HPCA, BASE, and BCJSE are similar and they outperform the SoS spectral embedding and MASE, but HPCA becomes numerically less stable and causes large estimation errors occasionally. When ρ_n∈{0.5,0.6}, the bias effect caused by the diagonal deletion operation of BASE becomes transparent and leads to significantly larger subspace estimation errors compared to the remaining competitors. Also, HPCA and BCJSE have quite comparable performance in terms of the subspace estimation error across different sparsity regimes. Nevertheless, HPCA is slightly computationally more costly than BCJSE and is less stable with larger standard deviations when ρ_n = 0.1. Indeed, we found that in such a comparatively low signal-to-noise ratio regime, HPCA occasionally requires significantly large numbers of iterations across repeated experiments. The average number of iterations required by HPCA is 88.76 and the corresponding standard deviation is 301.77 across 500 independent Monte Carlo replicates, whereas the number of iterations required by BCJSE is always R = 2 and S = 1 in this experiment. §.§ Hypothesis testing in MLMM Consider a membership profile matrix = [_1,…,_n]∈ℝ^n× 2 defined as follows: _i = [1, 0] if i∈[n_0], _i = [0, 1] if i∈[2n_0]\{n_0}, 0.1 = t_0 < … < t_n - 2n_0 = 0.9 are equidistant points over [0.1, 0.9], and _2n_0 + i = [t_i, 1 - t_i] for all i∈[n - 2n_0], where n_0, n are positive integers satisfying 2n_0 < n. Here, n_0 is the number of the so-called pure nodes (i.e., the membership profile vector assigns 1 to one of the communities) in each community for the sake of identifiability of mixed membership models <cit.>. The same membership profile matrix has also been considered in <cit.>. Let (_t)_t = 1^m be matrices defined in (<ref>), where m is a positive even integer. Define = ()^-1/2, _t = ()^1/2_t()^1/2 for all t∈[m], and we consider vertex-aligned multilayer network adjacency matrices _1,…,_m∼COSIE(; _1,…,_m). Finally, we set a = 0.9, b = 0.1, n = 500, n_0 = 100, m = 200, and let ρ_n vary in {0.01, 0.02, 0.03, 0.04}. We generate 1000 independent Monte Carlo replicates of (_t)_t = 1^m from the above MLMM and investigate the performance of the vertex membership profile hypothesis testing procedure described in Theorem <ref>. Specifically, we let i_1 = 1, i_2 take values in {100, 400, 410, …, 500}, and consider the hypothesis testing problem H_0:_i_1 = _i_2 versus H_A:_i_1≠_i_2 using the test statistic T_i_1i_2 defined in Section <ref>. We compute the empirical power of the proposed hypothesis testing procedure across the aforementioned 1000 independent Monte Carlo experiments and tabulate them in Table <ref> below. It is clear that the empirical power increases when _i_1 - _i_2_2 increases under different sparsity regimes, and the empirical size is approximately 0.05 (corresponding to the case where _i_1 - _i_2_2 = 0). In addition, Figure <ref> visualizes the null distribution of T_i_1i_2 (with i_1 = 1 and i_2 = n_0) and the distribution approximation to a randomly selected row of - across 1000 repeated Monte Carlo replicates, where is the BCJSE given by Algorithm <ref> with R = 2, S = 1, and = sgn(). The first row of Figure <ref> presents the histograms of T_i_1i_2 across 1000 independent Monte Carlo replicates and they closely align with the asymptotic null distribution of T_i_1i_2 (χ_2^2 distribution) when _i_1 = _i_2 under different values of ρ_n. The second row of Figure <ref> visualizes the scatter plots of a randomly selected row of - across 1000 independent Monte Carlo replicates, where the solid and dashed curves correspond to the 95% empirical and theoretical confidence ellipses. These visualizations well justify the theory established in Theorem <ref> and Theorem <ref>. § DISCUSSION In this paper, we design a bias-corrected joint spectral embedding algorithm for estimating the invariant subspace in the COSIE model and establish the accompanying entrywise subspace estimation theory. Our theory does not require the layer-wise average network expected degree to diverge, as long as the aggregate signal strength is sufficient. We settle the exact community detection in MLSBM and the hypothesis testing of the equality of membership profiles of any two given vertices in MLMM by leveraging the entrywise subspace estimation theory. In this work, we require that m^1/2nρ_n = ω(log n) for the exact community detection of the BCJSE algorithm, but it is not immediately clear whether this corresponds to the computational or information threshold of the exact community detection in MLSBM. In the case of balanced two-block MLSBM, the authors of <cit.> have established the minimax rates of community detection, but later, the authors of <cit.> have established that there is a gap between the computational threshold and the information threshold for community detection in MLSBM under the low-degree polynomial conjecture. This gap comes from the fact that in an MLSBM, some layers may be assortative mixing (i.e., the nonzero eigenvalues of _t are positive) and some other layers may be disassortative mixing (i.e., _t contains negative eigenvalues), and identifying which layers are assortative mixing and which layers are disassortative mixing requires extra computational cost. In particular, Theorem 2.2 in <cit.> roughly asserts that if m^1/2nρ_n = Ω(n^a) for some constant a > 0, then there exists a polynomial-time algorithm to achieve weak consistency community detection, and if m^1/2nρ_n→ 0, no polynomial-time algorithm can achieve weak consistency. Theorem <ref> sharpens the recoverable regime to m^1/2nρ_n = ω(log n) and achieves strong consistency, but it is still unclear whether this corresponds to the computational threshold of strong consistency. Meanwhile, the fundamental limit of the BCJSE-based clustering is unknown when the exact consistency is not achievable but the weak consistency is achievable. We defer these interesting research directions to future work. § ACKNOWLEDGMENTS This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute. § PRELIMINARY RESULTS We warm up the technical proofs of our main results by introducing a collection of basic concentration inequalities regarding matrices (_t)_t = 1^m. These concentration results are based on Bernstein's inequality and matrix Bernstein's inequality in the form of Theorem 3 in <cit.>. Let (_t)_t = 1^m be the matrices described in Section <ref> and max_t∈[m],i,j∈[n] E_tij^2≤ρ_n. Then max_i∈[n]∑_t = 1^m∑_j = 1^nE_tij^2 = (mnρ_n) max_t∈[m]_t_2 = {log^1/2(m + n) + (nρ_n)^1/2}. For the first assertion, by Bernstein's inequality and a union bound over i∈[n], we have max_i∈[n]∑_t = 1^m∑_j = 1^nE_tij^2 ≤max_i∈[n]∑_t = 1^m∑_j = 1^nE_tij^2 + max_i∈[n]|∑_t = 1^m∑_j = 1^n(E_tij^2 - E_tij^2)| = (mnρ_n). The second assertion from Remark 3.13 in <cit.> and a union bound over t∈[m]. Let (_t)_t = 1^m be the matrices described in Section <ref> and suppose Assumption <ref> hold. Then there exists a constant C > 0, such that for fixed matrices (_t)_t = 1^m,(_t)_t = 1^n∈ℝ^n× d, for any τ > 0, ∑_t = 1^m_t_t_t_2 ≤max_t∈[m]_t_2→∞_t_2→∞τ^2 + C_cρ_n^1/2τ(∑_t = 1^m_t_2^2_t_2^2)^1/2 with probability at least 1 - O(e^-τ^2). We apply a classical discretization trick and follow the roadmap paved in the proof of Lemma S2.3 in <cit.>. Let ^d - 1 = {∈ℝ^d:_2 = 1} be the unit sphere in ℝ^d. For any > 0, let _^d - 1 be an -net of ^d - 1. Clearly, for any _1,_2∈^d - 1, there exists vectors _1(_1), _2(_2)∈_^d - 1, such that _1 - _1(_1)_2∨_2 - _2(_2)_2 <, so that ∑_t = 1^m_t_t_t_2 = max__1,_2∈^d - 1|{_1 - _1(_1) + _1(_1)}∑_t = 1^m_t_t_t{_2 - _2(_2) + _2(_2)}| ≤ (^2 + 2)∑_t = 1^m_t_t_t_2 + max__1,_2∈_^d - 1|_1∑_t = 1^m_t_t_t_2|_2. Setting = 1/3 yields ∑_t = 1^m_t_t_t_2≤ (9/2) max__1,_2∈_^d - 1|_1∑_t = 1^m_t_t_t_2|. Furthermore, we can select _1/3^d - 1 in such a way that |_1/3^d - 1|≤ 18^d <cit.>, where |_1/3^d - 1| denotes the cardinality of _1/3^d - 1. Now for fixed vectors _1,_2∈_1/3^d - 1, let _t = _t_1 = [x_t1,…,x_tn], _t = _t_2 = [y_t1,…,y_tn]. Clearly, |_1∑_t = 1^m_t_t_t_2|_2 = |∑_t = 1^m_t_t_t|_2 = |∑_t = 1^m∑_i < j(x_tiy_tj + x_tjy_ti)E_tij + ∑_t = 1^m∑_j = 1^nx_tjy_tjE_tjj|_2 = |∑_t = 1^m∑_i≤ jc_tijE_tij|_2, where c_tij = x_tiy_tj + x_tjy_ti if i < j, and c_tjj = x_tjy_tj. Observe that ∑_t = 1^m∑_i≤ jc_tij^2 ≤∑_t = 1^m∑_i < j(2x_ti^2y_tj^2 + 2x_tj^2y_ti^2) + ∑_t = 1^m∑_j = 1^nx_tj^2y_tj^2 ≤ 2∑_t = 1^m∑_i = 1^n∑_j = 1^nx_ti^2y_tj^2 = 2∑_t = 1^m_t_2^2_t_2^2 ≤ 2∑_t = 1^m_t_2^2_t_2^2 and max_t∈[m],i,j∈[n]|c_tij|≤ 2max_t∈[m]_t_∞_t_∞. Then by Bernstein's inequality, there exists an absolute constant C > 0, such that for any τ > 0, |_1∑_t = 1^m_t_t_t_2|_2≤ C max_t∈[m]_t_2→∞_t_2→∞τ^2 + Cρ_n^1/2τ(∑_t = 1^m_t_2^2_t_2^2)^1/2 with probability at least 1 - 2e^-τ^2. The proof is completed by a union bound over _1,_2∈_1/3^d - 1. Let (_t)_t = 1^m be the matrices described in Section <ref>, max_t∈[m],i,j∈[n] E_tij^2≤ρ_n, and suppose mnρ_n = Ω(log(m + n)). Then ∑_t = 1^m_t_t_2→∞ = {m^1/2nρ_n^3/2log^1/2 (m + n)}. By Bernstein's inequality, there exists a constant C > 0, such that for any i∈[n] and any τ > 0, ∑_t = 1^m_i_t_t_2 ≤ C{τ^2 + (mnρ_n)^1/2τ}_2→∞max_t∈[m]_t_2 ≲n^1/2ρ_n{τ^2 + (mnρ_n)^1/2τ}. with probability at least 1 - O(e^-τ^2). The proof is completed by a union bound over i∈ [n] and setting τ≍√(log (m + n)). Next, we present a row-wise concentration bound of (), where ∈𝕆(n, d) is a deterministic n× d matrix with orthonormal columns. Its proof relies on the decoupling inequality from <cit.> and a conditioning argument. Let _1,…,_m be the matrices described in Section <ref>. Then, there exists a numerical constant C > 0, such that for any matrices ∈ℝ^n× d_1,∈ℝ^n× d_2, and any τ > 0, {()_2 > τ}≤ C {()_2 > τ/C} where = [_1,…,_m] is an independent copy of . Furthermore, if ∈𝕆(n, d) and m^1/2nρ_n = Ω(log(m + n)), then _i()_2 = {_2→∞m^1/2nρ_nlog(m + n)}. Let E_tij denote the (i, j)th entry of _t, E̅_tij denote the (i, j)th entry of _t, and write = [_1,…,_n], = [_1,…,_n], where _i∈ℝ^d_1 and _j∈ℝ^d_2. The key idea is to apply the decoupling inequality in <cit.>. By definition, () = ∑_i,k,j,l∈[n],t∈[m] E_tikE_tjl_i_j{1(i≠ j)1(i≤ k,j≤ l)1(k = l) + 1(i≠ j)1(i > k,j≤ l)1(k = l)} + ∑_i,k,j,l∈[n],t∈[m] E_tikE_tlj_i_j{1(i≠ j)1(i≤ k,j > l)1(k = l) + 1(i≠ j)1(i > k,j > l)1(k = l)} = ∑_(i, k),(j, l)∈_(i, k)(j, l)(_(i, j),_(k, l)), where = {(i, k)∈[n]× [n]:1≤ i≤ k≤ n}, and for any (i, k), (j, l)∈, _(i, k) = [E_1ik,…,E_mik], _(i, k)(j, l)(_(i, j),_(k, l)) = ∑_t = 1^mE_tikE_tjl_i_j1(i≠ j)1(k = l) + ∑_t = 1^mE_tikE_tjl_k_j1(k≠ j)1(i ≠ k)1(i = l) + ∑_t = 1^mE_tikE_tjl_i_l1(i≠ l)1(j ≠ l)1(k = j) + ∑_t = 1^mE_tikE_tjl_k_l1(k≠ l)1(i ≠ k,j ≠ l)1(i = j). Clearly, _(i, k)(j, l)(_(i, j),_(k, l)) = 0 when (i, k) = (j, l), so that () = ∑_(i, k),(j, l)∈,(i, k)≠ (j, l)_(i, k)(j, l)(_(i, j),_(k, l)) This expression enables us to apply the decoupling technique in <cit.>. Let = [_1,…,_m] be an independent copy of , where _t = [E_tij]_n× n, t∈ [m]. Then ∑_(i, k),(j, l)∈,(i, k)≠ (j, l)_(i, k)(j, l)(_(i, j),_(k, l)) = ∑_t = 1^m∑_i≠ j∑_l = 1^nE_tilE̅_tjl_i_j, where _tik = [E̅_1ik,…,E̅_mik]. By Theorem 1 in <cit.>, there exists a constant C > 0, such that for any τ > 0, {()_2 > τ}≤ C(∑_t = 1^m∑_i,j∈[n],i≠ j∑_l = 1^nE_tilE̅_tjl_i_j_2 > τ/C) = C{()_2 > τ/C}. This completes the first assertion. For the second assertion, we apply the first assertion with = _i to obtain {_i()_2 > τ}≤ C {_i()_2 > τ/C} = {∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE̅_tjl_j_2 > τ/C}, where = [_1,…,_m] is an independent copy of and E̅_tjl denotes the (j, l)th entry of _t. By Bernstein's inequality and a conditioning argument, for any τ > 0, we have {∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE̅_tjl_j_2 > 2b_in()τ^2 + 2v_in()τ|}≤ 2e^-τ^2, where v_in^2() := ∑_t = 1^m∑_l = 1^nσ_til^2∑_j = 1^nE̅_tjl_j1(j≠ i)_2^2 and b_in() := max_t∈[m],i,l∈[n]∑_j = 1^nE̅_tjl_j1(j≠ i)_2. Next, we consider the error bounds for v_in() and b_in(). By Bernstein's inequality again and a union bound over t∈[m] and l∈[n], for fixed t∈[m], l∈[n], b_in() = {ρ_n^1/2log^1/2(m + n) + _2→∞log (m + n)}. For v_in(), by Hanson-Wright inequality for bounded random variables (see Theorem 3 in <cit.>) v_in^2() ≤ 2ρ_n∑_t = 1^m∑_l = 1^n∑_j≤ lE̅_tjl_j1(j≠ i)_2^2 + 2ρ_n∑_t = 1^m∑_l = 1^n∑_j > lE̅_tjl_j1(j≠ i)_2^2 ≤ 2ρ_n∑_t = 1^m∑_l = 1^n∑_j_1,j_2≤ l(E̅_tj_1lE̅_tj_2l - E̅_tj_1lE̅_tj_2l)_j_1_j_21(j_1≠ i,j_2≠ i) + 2ρ_n^2∑_t = 1^m∑_l = 1^n∑_j≤ l_j_2^2 + 2ρ_n∑_t = 1^m∑_l = 1^n∑_j_1,j_2 > l(E̅_tj_1lE̅_tj_2l - E̅_tj_1lE̅_tj_2l)_j_1_j_21(j_1≠ i,j_2≠ i) + 2ρ_n^2∑_t = 1^m∑_l = 1^n∑_j > l_j_2^2 = O(mnρ_n^2) + {ρ_nlog(m + n) + m^1/2n^1/2ρ_n^3/2log^1/2(m + n)} = (mnρ_n^2). By a union bound over t∈[m] and l∈[n], for any c > 0, there exists a c-dependent constant C_c > 0, such that the event _in(c)={:v_in()≤ C_c_v_in,b_in()≤ C_c_b_in} occurs with probability at least 1 - O((m + n)^-c), where _v_in:= m^1/2n^1/2ρ_n and _b_in:= ρ_n^1/2log^1/2 (m + n) + _2→∞log (m + n). Namely, for τ = c^1/2log^1/2(m + n) (∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE̅_tjl_j_2 > 2_b_inclog(m + n) + 2_v_inc^1/2log^1/2(m + n)) ≤_[ {∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE̅_tjl_j_2 > 2_b_inclog(m + n) + 2_v_inc^1/2log^1/2(m + n)|}1__in(c)] + {_in^c(c)} = O((m + n)^-c). This implies that _i()_2 = {_2→∞m^1/2nρ_nlog(m + n)}, where we have used the fact that _2→∞≥√(d/n) for any ∈𝕆(n, d). The proof is thus completed. Now, we dive into a collection of slightly more sophisticated concentration results regarding the spectral norm concentration of . The proofs of these results rely on modifications of Theorem 4 and Theorem 5 in <cit.>. Let (_t)_t = 1^m be the matrices described in Section <ref>, max_t∈[m],i,j∈[n] E_tij^2≤ρ_n, and suppose m^1/2nρ_n = Ω(log(m + n)). Then ()_2 = {m^1/2nρ_nlog(m + n)}. The proof breaks down into two regimes: nρ_n = Ω(log(m + n)) and nρ_n = O(log(m + n)). If nρ_n = Ω(log(m + n)), then we apply Theorem 4 in <cit.> with _l = _n for all l∈[m], ν_1,ν_2,ν_2' = O(ρ_n), R_1, R_2, R_2' = O(1), σ_1 = √(m), σ_2 = σ_3 = 1, and σ'_2 = √(mn) to obtain ()_2 = {m^1/2nρ_nlog(m + n)}. The rest of the proof focuses on the regime where nρ_n = O(log(m + n)). Let P_tij^* be the success probability corresponding to E_tij such that E_tij = A_tij^* - P_tij^*, where A_tij^*∼Bernoulli(P_tij^*) for all t∈[m],i,j∈[n], i≤ j, A_tij^* = A_tji^*, and P^*_tij = P^*_tji if i > j. We modify the proof of Lemma C.1 and Theorem 5 in <cit.> as follows. Let _t^* = [A_tij^*]_n× n and _t^* = [P_tij^*]_n× n for all t∈[m]. We then modify Lemma C.1 in <cit.> to obtain: i) max_t∈[m],i∈[n]∑_j = 1^nA_tij^* = {log(m + n)}; ii) max_i∈[n]∑_t = 1^n∑_j = 1^nA_tij^* = (mnρ_n); iii) ∑_i = 1^n∑_t = 1^n∑_j = 1^nA_tij^* = (mn^2ρ_n) (by Bernstein's inequality and a union bound over t∈[m] and i∈[n]). For ∑_t = 1^m_t^*2_2, we write it as ∑_t = 1^m_t^*2_2≤max_i∈[n]∑_t = 1^m∑_j = 1^nA_tij^* + (^*^*T)_2 = (mnρ_n) + (^*^*T)_2. By the decoupling inequality (Theorem 1 in <cit.>), it is sufficient to consider (^*^*T)_2, where ^* = [_1^*,…,_m^*] is an independent copy of ^* and _t^* = [A̅_tij^*]_n× n. By Perron-Frobenius theorem, every non-negative matrix has a non-negative eigenvector with the corresponding eigenvalue being the spectral radius. This implies that (^*^*T)_2≤max_i∈[n]∑_t = 1^m∑_j = 1^n∑_l = 1^nA_tij^*A̅_tlj^*. By Bernstein's inequality, results i), ii), and iii), and a union bound over i∈[n], (^*^*T)_2 ≤max_i∈[n]∑_t = 1^m∑_j = 1^n∑_l = 1^nA_tij^*A̅_tlj^* = max_i∈[n]∑_t = 1^m∑_j = 1^nE_tij(∑_l = 1^nA̅_tlj^*) + max_i∈[n]∑_t = 1^m∑_j = 1^nP_tij(∑_l = 1^nA̅_tlj^*) = [max_t∈[m],j∈[n](∑_l = 1^nA̅_tlj^*)log(m + n) + {∑_t = 1^m∑_j = 1^n(∑_l = 1^nA̅_tlj^*)^2}^1/2ρ_n^1/2log^1/2(m + n)] + (mn^2ρ_n^2) = {max_t∈[m],j∈[n](∑_l = 1^nA̅_tlj^*)log(m + n) + (∑_t = 1^m∑_j = 1^n∑_l = 1^nA̅_tlj^*)^1/2ρ_n^1/2log(m + n)} + (mn^2ρ_n^2) = (mn^2ρ_n^2). Therefore, we conclude that ^*^*T_2 = (mnρ_n + mn^2ρ_n^2). We now turn our attention to ()_2. Again, by Theorem 1 in <cit.>, it is sufficient to consider ()_2, where = [_1,…,_m] is an independent copy of and _t = [E̅_tij]_n× n. By definition, ()_2≤max_i∈[n]|∑_t = 1^m∑_j = 1^nE_tijE̅_tij| + ∑_t = 1^m_t_t^*_2 + ∑_t = 1^m_t_t^*_2. By Bernstein's inequality, Theorem 3 in <cit.>, and union bound over i∈[n], we obtain max_i∈[n]|∑_t = 1^m∑_j = 1^nE_tijE̅_tij| = {m^1/2nρ_nlog^1/2(m + n)} and ∑_t = 1^m_t_t^*_2 = {m^1/2(nρ_n)^3/2log^1/2(m + n)}. By Theorem 3 in <cit.> again, a conditioning argument, together with results i)–iii) and the bound for _2, max_t∈[m]_t^*_2→∞ = (max_t∈[m],i∈[n]∑_j = 1^nA̅_tij^*)^1/2 = {log^1/2(m + n)}, ∑_t = 1^m_t_t^*_2 = {max_t∈[m]_t^*_2→∞log(m + n) + ^*^*T_2^1/2(nρ_n)^1/2log^1/2(m + n)} = {m^1/2nρ_nlog(m + n)}. The proof is completed by combining the above concentration bounds. The following lemma characterizes the noise level of the COSIE model by providing a sharp error bound on the spectral norm of the oracle noise matrix () - -. Let ζ_𝗈𝗉 = m^1/2nρ_nlog(m + n) + m^1/2(nρ_n)^3/2log^1/2(m + n). It turns out that ζ_𝗈𝗉 / (mn^2ρ_n^2) = Θ(ε_n^(𝗈𝗉)) and it can be viewed as the inverse signal-to-noise ratio. Suppose Assumption <ref> hold, (_t)_t = 1^m are the matrices in Section <ref>, max_t∈[m],i,j∈[n] E_tij^2≤ρ_n, and m^1/2nρ_n = Ω(log(m + n)). Then () - - _2 = (ζ_𝗈𝗉). By definition, () - - _2 ≤()_2 + 2∑_t = 1^m_t_t_2 + 2max_i∈ [n]|∑_t = 1^m∑_j = 1^nP_tijE_tij|. By Lemma <ref>, we know that ()_2 = {m^1/2nρ_nlog(m + n)}. By Bernstein's inequality and a union bound over i∈[n], we have max_i∈[n]|∑_t = 1^m∑_j = 1^nP_tijE_tij| = {(mn)^1/2ρ_n^3/2log^1/2(m + n)}. By Theorem 3 in <cit.>, we obtain ∑_t = 1^m_t_t_2 = {m^1/2(nρ_n)^3/2log^1/2 (m + n) + n^1/2ρ_nlog(m + n)}. The proof is completed by combining the above results. Suppose Assumption <ref> hold, (_t)_t = 1^m are the matrices in Section <ref>, max_t∈[m],i,j∈[n] E_tij^2≤ρ_n, and m^1/2nρ_n = Ω(log(m + n)). Then for any fixed ,∈𝕆(n, d) with _2→∞∨_2→∞ = O(n^-1/2), {() - - }_2 = {m^1/2nρ_n^3/2log^1/2(m + n)}. Let = [_1,…,_n] and = [_1,…,_n]. By triangle inequality, {() - - }_2 is upper bounded by ()_2 + ∑_t = 1^m_t_t_2 + ∑_t = 1^m_t_t_2 + ∑_t = 1^m∑_i,j = 1^n_i_i 2P_tijE_tij_2. By Assumption <ref>, we have max_t∈[m]_t_2→∞ = O(n^1/2ρ_n), max_t∈[m]_t_2→∞ = O(n^1/2ρ_n). Then for the second and third term, we apply Lemma <ref> to obtain ∑_t = 1^m_t_t + ∑_t = 1^m_t_t_2 = {m^1/2nρ_n^3/2log^1/2(m + n)}. The last term can be bounded by Bernstein's inequality as well: ∑_i = 1^n∑_t = 1^m∑_j = 1^n_i_i P_tijE_tij = {(mn)^1/2ρ_n^3/2log^1/2(m + n)}. It is now sufficient to work with the first term. By Lemma <ref>, there exists a constant C > 0, such that for any τ > 0, {()_2 > τ}≤ C(∑_t = 1^m∑_i,j∈[n],i≠ j∑_l = 1^nE_tilE̅_tjl_i_j_2 > τ/C). By Bernstein's equality and a conditioning argument, {∑_t = 1^m∑_i,j∈[n],i≠ j∑_l = 1^nE_tilE̅_tjl_i_j_2 > 2b_n()τ^2 + 2v_n()τ|}≤ 2e^-τ^2, where v_n^2() := ∑_t = 1^m∑_i = 1^n∑_l = 1^nσ_til^2∑_j = 1^nE̅_tjl_i_j1(j≠ i)_2^2, b_n() := max_t∈[m],i,l∈[n]∑_j = 1^nE̅_tjl_i_j1(j≠ i)_2. Next, we consider the error bounds for v_n and b_n. By Bernstein's inequality again and a union bound over t∈[m], i,l∈[n], we have b_n() = {ρ_n^1/2_2→∞log^1/2 (m + n) + _2→∞_2→∞log (m + n)}. For v_n, by Hanson-Wright inequality for bounded random variables (see Theorem 3 in <cit.>), v_n^2() ≤ 2ρ_n∑_t = 1^m∑_i = 1^n∑_l = 1^n_i_2^2∑_j≤ l^nE̅_tjl_j1(j≠ i)_2^2 + 2ρ_n∑_t = 1^m∑_i = 1^n∑_l = 1^n_i_2^2∑_j > l^nE̅_tjl_j1(j≠ i)_2^2 ≤ 2ρ_n∑_t = 1^m∑_i = 1^n∑_l = 1^n_i_2^2∑_j_1,j_2≤ l^n(E̅_tj_1lE̅_tj_2l - E̅_tj_1lE̅_tj_2l)_j_1_j_21(j_1,j_2≠ i) + 2ρ_n∑_t = 1^m∑_i = 1^n∑_l = 1^n_i_2^2∑_j_1,j_2 > l^n(E̅_tj_1lE̅_tj_2l - E̅_tj_1lE̅_tj_2l)_j_1_j_21(j_1,j_2≠ i) + 2ρ_n^2∑_t = 1^m∑_i = 1^n∑_l = 1^n∑_j > l_i_2^2_j_2^2 + 2ρ_n^2∑_t = 1^m∑_i = 1^n∑_l = 1^n∑_j≤ l_i_2^2_j_2^2 = (mnρ_n^2). Then for any c > 0, there exists a c-dependent constant C_c > 0, such that _3n(c):={:v_n()≤ C_c_v_n,b_n()≤ C_c_b_n} occurs with probability at least 1 - O((m + n)^-c), where _v_n:= m^1/2n^1/2ρ_n, _b_n:= ρ_n^1/2n^-1/2log^1/2 (m + n) + n^-1log (m + n). Namely, for τ = c^1/2log^1/2(m + n), {∑_t = 1^m∑_i,j∈[n],i≠ j∑_l = 1^nE_tilE̅_tjl_i_j_2 > 2_b_nclog(m + n) + 2_v_nc^1/2log^1/2(m + n)} ≤_[ {∑_t = 1^m∑_i,j∈[n],i≠ j∑_l = 1^nE_tilE̅_tjl_i_j_2 > 2_b_nclog(m + n) + 2_v_nc^1/2log^1/2(m + n)|}1__3n(c)] + {_3n^c(c)} ≤_[ {∑_t = 1^m∑_i,j∈[n],i≠ j∑_l = 1^nE_tilE̅_tjl_i_j_2 > 2b_nclog(m + n) + 2v_nc^1/2log^1/2(m + n)|}1__3n(c)] + O((m + n)^-c) = O((m + n)^-c). Namely, ()_2 = {m^1/2n^1/2ρ_nlog^1/2(m + n)}. Combining the above error bounds completes the proof. Suppose Assumption <ref> hold, (_t)_t = 1^m are the matrices in Section <ref>, max_t∈[m],i,j∈[n] E_tij^2≤ρ_n, and m^1/2nρ_n = Ω(log(m + n)). Then {() - - }_2→∞ = (ζ_𝗈𝗉/√(n)), where ζ_𝗈𝗉 is defined in (<ref>). By triangle inequality, Bernstein's inequality, Theorem 3 in <cit.>, and Lemma <ref>, for any fixed i∈[n], _i{() - - }_2 ≤ _i()_2 + _2→∞∑_t = 1^m_t_t_2 + ∑_t = 1^m_i_t_t_2 + |∑_t = 1^m∑_j = 1^n2P_tijE_tij|_2→∞ = _i()_2 + _2→∞{m^1/2(nρ_n)^3/2log^1/2 (m + n) + n^1/2ρ_nlog(m + n)} + _2→∞{m^1/2(nρ_n)^3/2log^1/2 (m + n)} + _2→∞{(mn)^1/2ρ_n^3/2log^1/2(m + n)} = _i()_2 + _2→∞{m^1/2(nρ_n)^3/2log^1/2(m + n)}. By Lemma <ref>, we have _i()_2 = {(mn)^1/2ρ_nlog(m + n)}. Combining the above error bounds and applying a union bound over i∈[n] complete the proof. § SIMPLE REMAINDER ANALYSES This section presents some simple analyses of remainders _4^(RS)–_7^(RS), which are quite straightforward by leveraging the concentration results obtained in Section <ref>. Suppose Assumptions <ref> and <ref> hold. If - _RS_2≤ (1/4)λ_d(), then sinΘ(_RS, )_2 = (ζ_𝗈𝗉/mn^2ρ_n^2) + O(1/mn^2ρ_n^2) - _RS_2, _RS^-1_2 = (1/mn^2ρ_n^2), _RS - _RS_RS_2 = (ζ_𝗈𝗉/√(n) + ζ_𝗈𝗉^2/mn^2ρ_n^2) + (1) - _RS_2, _4^(RS)_2→∞ = (ζ_𝗈𝗉/mn^3ρ_n^2 + ζ_𝗈𝗉^2/m^2n^9/2ρ_n^4) + (1/mn^5/2ρ_n^2) - _RS_2, where ζ_𝗈𝗉 is defined in (<ref>). By triangle inequality and Lemma <ref>, () - - _RS_2 ≤() - - _2 + _RS - _2 = (ζ_𝗈𝗉) + - _RS_2. Note that m^1/2nρ_n = ω(log(m + n)) implies ζ_𝗈𝗉/λ_d() = o(1). This also entails that there exists a c-dependent constant N_c > 0, such that for all n≥ N_c, () - - _RS_2≤1/2λ_d() with probability at least 1 - O((m + n)^-c). Then by Davis-Kahan theorem <cit.>, sinΘ(_RS, )_2 = (ζ_𝗈𝗉/mn^2ρ_n^2) + O(1/mn^2ρ_n^2) - _RS_2, which establishes the first assertion. The second assertion follows directly from Weyls' inequality, (<ref>), and Assumption <ref>. For the third assertion, recall that = and _RS_RS = {() - _RS}_RS. Then _RS - _RS_RS = _RS - {() - _RS}_RS = -{() - - }_RS - ( - _RS)_RS = -{() - - }_RS -{() - - }(_RS - _RS) - ( - _RS)_RS. By Lemma <ref>, we have {() - - }_RS_2 = (n^-1/2ζ_𝗈𝗉). By Lemma <ref> and the first assertion, {() - - }(_RS - _RS)_2 = (ζ_𝗈𝗉^2/mn^2ρ_n^2) + (ζ_𝗈𝗉/mn^2ρ_n^2) - _RS_2. Hence, we conclude that _RS - _RS_RS_2 ≤{() - - }_2 + {() - - }(_RS - _RS)_2 + - _RS_2 = (1/√(n)ζ_𝗈𝗉 + ζ_𝗈𝗉^2/mn^2ρ_n^2) + (1) - _RS_2, and hence, _4^(RS)_2→∞ ≤_2→∞_RS - _RS_RS_2^-1_RS_2 = (ζ_𝗈𝗉/mn^3ρ_n^2 + ζ_𝗈𝗉^2/m^2n^9/2ρ_n^4) + (1/mn^5/2ρ_n^2) - _RS_2. The proof is thus completed. Suppose Assumption <ref> and <ref> hold. If - _RS_2≤ (1/4)λ_d(), then _6^(RS)_2→∞ + _7^(RS)_2→∞ = (ζ_𝗈𝗉^2/m^2n^9/2ρ_n^4 + ζ_𝗈𝗉/mn^3ρ_n^2) + (1/mn^5/2ρ_n^2) - _RS_2. Recall that _RS = sgn(_RS). By Lemma <ref>, Davis-Kahan theorem, and Lemma <ref>, we have _RS_RS^-1 - ^-1_RS_2 ≤^-1_2_RS - _RS_RS_2_RS^-1_2 ≤^-1_2_2_RS - _RS_2_RS^-1_2 + ^-1_2_RS - _RS_RS_2_RS^-1_2 + ^-1_2_RS - _RS_2_RS_2_RS^-1_2 = (1/mn^2ρ_n^2)sinΘ(_RS, )_2^2 + (1/m^2n^4ρ_n^4)_RS - _RS_RS_2 = (ζ_𝗈𝗉^2/m^3n^6ρ_n^6 + ζ_𝗈𝗉/m^2n^9/2ρ_n^4) + (1/m^2n^4ρ_n^4) - _RS_2, where we have used the condition - _RS_2 = O(λ_d()) = O(mn^2ρ_n^2). Then by Lemma <ref> and a union bound over i∈ [n], we have _6^(RS)_2→∞ = {() - - }_2→∞_RS_RS^-1 - ^-1_RS_2 = (ζ_𝗈𝗉^2/m^2n^9/2ρ_n^4 + ζ_𝗈𝗉/mn^3ρ_n^2) + (1/mn^5/2ρ_n^2) - _RS_2. For the last assertion, we directly obtain _7^(RS)_2→∞ ≤ - _RS_∞_2→∞_RS_RS^-1 - ^-1_RS_2 = (1/mn^5/2ρ_n^2) - _RS_2. The proof is thus completed. Suppose Assumption <ref>–<ref> hold, m^1/2nρ_n = ω(log(m + n)), and - _RS_2≤ (1/4)λ_d(). Then _2^(RS)_2→∞ + _5^(RS)_2→∞ = (ζ_𝗈𝗉^2/m^2n^4ρ_n^4)_2→∞ + O(1/mn^2ρ_n^2) - _RS_2_2→∞. The lemma follows from (_RS - _RS)_RS_2→∞≤_2→∞sinΘ(_RS, )_2^2, Assumption <ref>, and Lemma <ref>. § LEAVE-ONE-OUT AND LEAVE-TWO-OUT ANALYSES In this section, we elaborate on the decoupling arguments based on the delicate leave-one-out and leave-two-out analyses for ^(RS)_1 and establish the corresponding sharp error bounds. Before proceeding to the proofs, we first introduce the notions of the leave-one-out and leave-two-out matrices. For each t∈ [m] and i∈ [n], let _t^(i) = [A_tab^(i)]_n× n be the ith leave-one-out version of _t defined as follows: A_tab^(i) = { A_tab, a≠ ib≠ i, A_tab, a = ib = i, . a,b∈ [n] In other words, _t^(i) is constructed by replacing the ith row and the ith column of _t with their expected values. Let ^(i)_t = ^(i)_t - _t, ^(i) = [_1^(i),…,_m^(i)], and ^(i) = [_1^(i),…,_m^(i)]. For any r∈[R] and s∈ [S], set _0s^(i) = _n× d, _r0^(i) = _n× n, and _rs^(i) = -∑_k = 1^n_k_k(_(r - 1)S^(i))(_(r - 1)S^(i)){((^(i))(^(i))) - _r(s - 1)^(i)}(_(r - 1)S^(i))(_(r - 1)S^(i))_k_k. Let ^(i)_rS = ((^(i)(^(i))) - _rS^(i); d) with the corresponding eigenvalues encoded in the diagonal matrix ^(i)_rS, where ^(i)_rS = diag[ λ_1{(^(i)(^(i))) - _rS^(i)},…,λ_d{(^(i)(^(i))) - _rS^(i)}]. Define ^(i)_rS = (^(i)_rS). Note that _rS^(i) is a function of ^(i) and _rS^(i), and _rS^(i) is a function of ^(i) and _(r - 1)S^(i). Since _0s^(i) = _n× d for all s∈[S], it follows that _rS^(i) is also a function of ^(i), so that _rS^(i) is independent of _i = (E_tij:t∈[m],j∈[n]). This independent structure is the key to the decoupling arguments. Similarly, for any fixed t∈[m], (i, j)∈ [n], i≠ j, define the (i, j)th leave-two-out version _t^(i, j) = [A_tab^(i, j)]_n× n of _t as A_tab^(i, j) = { A_tab, a∉{i, j}b∉{i, j}, A_tab, a ∈{i, j}b ∈{i, j}. ., a,b∈ [n] Rather than setting one row and one column of _t as their expected values, the leave-two-out version _t^(i, j) converts its two designated rows and two designated columns to their expected values. Let _t^(i, j) = _t^(i, j) - _t, ^(i, j) = [_1^(i, j), …, _m^(i, j)], and ^(i, j) = [_1^(i, j),…,_m^(i, j)]. For any r∈[R] and s∈[S], set _0s^(i, j) = _n× d, _r0^(i, j) = _n× n, and _rs^(i, j) = -∑_k = 1^n_k_k(_(r - 1)S^(i, j))(_(r - 1)S^(i, j)){((^(i, j))(^(i, j))) - _r(s - 1)^(i, j)}(_(r - 1)S^(i, j))(_(r - 1)S^(i, j))_k_k. Let ^(i, j)_rS = ((^(i, j)(^(i, j))) - _rS^(i, j); d) with the diagonal matrix of the associated eigenvalues ^(i, j)_rS = diag[ λ_1{(^(i, j)(^(i, j))) - _rS^(i, j)},…,λ_d{(^(i, j)(^(i, j))) - _rS^(i, j)}]. and let ^(i, j)_rS = (_r^(i, j)). Note that by a similar reasoning, _rS^(i, j) is also independent of (E_tia, E_tja:t∈[m],a∈[n]). §.§ Preliminary Lemmas for Leave-One-Out Matrices We first collect several concentration inequalities regarding certain quadratic functions of . These results are non-trivial, and our proofs rely on delicate analyses of the higher-order moments of these polynomials of random variables. Let (_t)_t = 1^m be the matrices defined in Section <ref> and suppose Assumptions <ref> holds. Let (^(i))_i = 1^n be a collection of n× d random matrices indexed by i∈[n] such that ^(i)≠_n× d with probability one, and _i = (E_tij:t∈[m],j∈[n]) and ^(i) are independent for each i∈[n]. Then max_i∈[n]∑_a = 1^n∑_t = 1^m∑_j∈[n]\{a}E_tiaE_tij_j^(i)_2^2/^(i)_2→∞^2 = {mn^2ρ_n^2(log n)^2}. Let _j^(i) be the jth row of ^(i) and p∈ℕ_+ be a positive integer to be determined later. Expanding the quantity of interest directly yields ∑_a = 1^n∑_t = 1^m∑_j∈[n]\{a}E_tiaE_tij_j^(i)_2^2 = ∑_j_1 = 1^n∑_t = 1^m∑_s = 1^m∑_j_2∈[n]\{j_1}∑_j_3∈[n]\{j_1}E_tij_1E_sij_1E_tij_2E_sij_3⟨_j_2^(i),_j_3^(i)⟩ = ∑_t = 1^m∑_j_1 = 1^n∑_j_2∈[n]\{j_1} E_tij_1^2E_tij_2^2_j_2^(i)_2^2 + ∑_t = 1^n∑_j_1≠ j_2≠ j_3E_tij_1^2E_tij_2E_tij_3⟨_j_2^(i),_j_3^(i)⟩ + ∑_t_1,t_2∈[m],t_1≠ t_2∑_j_1 = 1^n∑_j_2,j_3∈[n]\{j_1}E_t_1ij_1E_t_1ij_2E_t_2ij_1E_t_2ij_3⟨_j_2^(i),_j_3^(i)⟩, where the summation over {j_1≠ j_2≠ j_3} means the summation over {(j_1,j_2,j_3)∈[n]^3:j_1≠ j_2,j_1≠ j_3,j_2≠ j_3}. Let and be independent copies of . For the first term, by the decouping inequality <cit.>, it is sufficient to consider the first term with E_tij_2 replaced by E̅_tij_2. If nρ_n≥ 1, then by Bernstein's inequality, a condition argument, and a union bound over t∈[m], we have ∑_t = 1^m∑_j_1 = 1^n∑_j_2∈[n]\{j_1}E_tij_1^2E̅_tij_2^2_j_2^(i)_2^2/^(i)_2→∞^2 ≤∑_t = 1^m∑_j_1 = 1^n(E_tij_1^2 - σ_tij_1^2)(max_t∈[m]∑_j_2 = 1^nE̅_tij_2^2) + mnρ_n(max_t∈[m]∑_j_2 = 1^nE̅_tij_2^2) ≤(mnρ_n){max_t∈[m]∑_j_2 = 1^n(E̅_tij_2^2 - σ_tij_2^2) + nρ_n} = (mn^2ρ_n^2log n). If nρ_n≤ 1, then similarly, we have ∑_t = 1^m∑_j_1 = 1^n∑_j_2∈[n]\{j_1}E_tij_1^2E̅_tij_2^2_j_2^(i)_2^2/^(i)_2→∞^2 ≤∑_t = 1^m∑_j_1 = 1^n(E_tij_1^2 - σ_tij_1^2)∑_j_2 = 1^nE̅_tij_2^2 + ρ_n(∑_t = 1^m∑_j_1 = 1^n∑_j_2 = 1^nE̅_tij_2^2) ≤{mn^2ρ_n^2 + (log n)^2 + (nρ_n)^1/2log n(∑_t = 1^m∑_j = 1^nE̅_tij^2)^1/2} = (mn^2ρ_n^2log n). Similarly, applying the decoupling inequality <cit.> to the second term, we see that it is sufficient to consider the second term with E_tij_2 and E_tij_3 replaced by E̅_tij_2 and Ẽ_tij_3, respectively. If nρ_n = Ω(log n), then by Bernstein's inequality, a condition argument, and union bounds, we have ∑_t = 1^m∑_j_1≠ j_2≠ j_3E_tij_1^2E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2 = ∑_t = 1^m∑_j_1≠ j_2≠ j_3(E_tij_1^2 - σ_tij_1^2)E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2 + ∑_t = 1^m∑_j_1≠ j_2≠ j_3E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩σ_tij_1^2/^(i)_2→∞^2 = (mnρ_n)(max_t∈[m]max_j_1∈[n]∑_j_2∈[n]\{j_1}E̅_tij_2_j_2^(i)/^(i)_2→∞_2)(max_t∈[m]max_j_1,j_2∈[n],j_1≠ j_2∑_j_3∈[n]\{j_1,j_2}Ẽ_tij_3_j_3^(i)/^(i)_2→∞_2) = (mn^2ρ_n^2log n). If nρ_n = O(log n), then by Bernstein's inequality, a conditioning argument, and a union bound over t∈[m],j_1∈[n], we have max_t∈[m],j_2∈[n]|∑_j_1,j_3∈[n]\{j_2},j_1≠ j_3E_tij_3⟨_j_2^(i),_j_3^(i)⟩σ_tij_1^2/^(i)_2→∞^2| = {(log n)^2} , max_t∈[m],j_1∈[n]|∑_j_2,j_3∈[n]\{j_1},j_2≠ j_3E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2| = {(log n)^2}, ∑_t = 1^m∑_j_1≠ j_2≠ j_3|E_tij_3| ≤ n^2∑_t = 1^m∑_j_3 = 1^n(|E_tij_3| - |E_tij_3|) + n^2∑_t = 1^m∑_j_3 = 1^n|E_tij_3| = (mn^3ρ_n), ∑_t = 1^m∑_j_1≠ j_2≠ j_3E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩σ_tij_1^2/^(i)_2→∞^2 = {(log n)^3 + (ρ_nlog n)^1/2log n(mn^3ρ_n^2)^1/2} = (mn^2ρ_n^2log n). Also, a similar argument shows that when nρ_n = O(log n), max_t∈[m]∑_j_3 = 1^n|E_tij_3| = (log n), so that ∑_t = 1^m∑_j_1≠ j_2≠ j_3|E̅_tij_2E_tij_3|≤ n∑_t = 1^m∑_j_2 = 1^n(|E̅_tij_2| - |E̅_tij_2|)∑_j_3 = 1^n|E_tij_3| + n∑_t = 1^m∑_j_2 = 1^n|E̅_tij_2|∑_j_3 = 1^n|E_tij_3| = (mn^2ρ_nlog n) ∑_t = 1^m∑_j_1≠ j_2≠ j_3(E_tij_1^2 - σ_tij_1^2)E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2 = {max_t∈[m],j_1∈[n]|∑_j_2,j_3∈[n]\{j_1},j_2≠ j_3E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2|log n + (ρ_nlog n)^1/2max_t∈[m],j_1∈[n]|∑_j_2,j_3∈[n]\{j_1},j_2≠ j_3E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2|^1/2(∑_t = 1^m∑_j_1≠ j_2≠ j_3|E̅_tij_2E_tij_3|)^1/2} = {(log n)^3 + (ρ_nlog n)^1/2(mn^2ρ_nlog n)^1/2log n} = (mn^2ρ_n^2log n). Therefore, when nρ_n = O(log n), we further obtain ∑_t = 1^m∑_j_1≠ j_2≠ j_3E_tij_1^2E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2 = ∑_t = 1^m∑_j_1≠ j_2≠ j_3(E_tij_1^2 - σ_tij_1^2)E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩/^(i)_2→∞^2 + ∑_t = 1^m∑_j_1≠ j_2≠ j_3E̅_tij_2Ẽ_tij_3⟨_j_2^(i),_j_3^(i)⟩σ_tij_1^2/^(i)_2→∞^2 = (mn^2ρ_n^2log n). It is now sufficient to derive the (·) bound for the third term. We achieve this by a higher-order moment bound and Markov's inequality. Let p = O(log(m + n)) be a positive integer to be determined later. We expand the pth moment of the third term and compute [{∑_t_1,t_2∈[m],t_1≠ t_2∑_j_1 = 1^n∑_j_2,j_3∈[n]\{j_1}E_t_1ij_1E_t_1ij_2E_t_2ij_1E_t_2ij_3⟨_j_2^(i),_j_3^(i)⟩}^p|^(i)] = ∑_a_1,…,a_p∈[n]∑_t_1,…,t_p∈[m] s_1∈[m]\{t_1},…,s_p∈[m]\{t_p}∑_j_1∈[n]\{a_1},…,j_p∈[n]\{a_p} l_1∈[n]\{a_1},…,l_p∈[n]\{a_p}(∏_k = 1^pE_t_kia_kE_s_kia_kE_t_kij_kE_s_kil_k)∏_k = 1^p⟨_j_k^(i),_l_k^(i)⟩ ≤∑_a_1,…,a_p∈[n]∑_t_1,…,t_p∈[m] s_1∈[m]\{t_1},…,s_p∈[m]\{t_p}∑_j_1∈[n]\{a_1},…,j_p∈[n]\{a_p} l_1∈[n]\{a_1},…,l_p∈[n]\{a_p}|(∏_k = 1^pE_t_kia_kE_s_kia_kE_t_kij_kE_s_kil_k)| ^(i)_2→∞^2p. Relabeling t_p + k = s_k, j_p + k = l_k and set a_p + k = a_k for k ∈ [p], we then re-write the right-hand side of the above inequality as ∑_a_1,…,a_2p∈[n] a_p + 1 = a_1,…,a_2p = a_p∑_t_1,…,t_2p∈[m] t_p + 1≠ t_1,…,t_2p≠ t_p∑_j_1∈[n]\{a_1},…,j_2p∈[n]\{a_2p}|(∏_k = 1^2pE_t_kia_kE_t_kij_k)| ^(i)_2→∞^2p . Now let T(t_1,…,t_2p) be the number of unique elements among {t_1,…,t_2p}. For T = T(t_1,…,t_2p), let t_1^*,…,t_N^* be these unique elements and denote α_r = ∑_k = 1^2p1(t_k = t_r^*). In other words, α_r keeps track of the number of times t_r^* appearing in the sequence (t_k)_k = 1^2p. Clearly, α_1 + … + α_T = 2p and α_1,…,α_T≥ 1. Furthermore, in order that the expected value (∏_k = 1^2pE_t_kia_kE_t_kij_k) is nonzero, we must have min_r∈[T]α_r≥ 2. Indeed, if there exists some r∈[T] such that α_r < 2, then we know that α_r = 1 and there exists a unique k'∈[2p] such that t_k' = t_r^*, so that (∏_k = 1^2pE_t_kia_kE_t_kij_k) = (∏_k∈[2p]\{k'}E_t_kia_kE_t_kij_k)(E_t_k'ia_k'E_t_k'ij_k') = 0 by the independence of _1,…,_t and the fact that (E_t_k'ia_k'E_t_k'ij_k') = 0 (since a_k'≠ j_k'). Therefore, we obtain min_r∈[T]α_r≥ 2, and hence, T≤ p. Also, by the constraint t_p + k≠ t_k for all k∈[p], we have T≥ 2. Next, let J(a_1,…,a_2p,j_1,…,j_2p) be the number of unique elements among {a_1,…,a_2p,j_1,…,j_2p}, and for J = J(a_1,…,a_2p,j_1,…,j_2p), let j_1^*,…,j_J^* be these unique elements. This allows us to re-write the expected value in the summand of interest as (∏_k = 1^2pE_t_kia_kE_t_kij_k) = [∏_r = 1^T∏_q = 1^J E_t_r^*ij_q^*^∑_k = 1^2p{1(t_k = t_r^*,a_k = j_q^*) + 1(t_k = t_r^*,j_k = j_q^*)}] = ∏_r = 1^T∏_q = 1^J( E_t_r^*ij_q^*^β_rq), where β_rq = ∑_k = 1^2p{1(t_k = t_r^*,a_k = j_q^*) + 1(t_k = t_r^*,j_k = j_q^*)}. Note that by construction, the expected value (E_t_r^*ij_q^*^β_rq) is nonzero only if β_rq≥ 2, and we also have ∑_r = 1^T∑_q = 1^Jβ_rq = ∑_k = 1^2p∑_r = 1^T∑_q = 1^J1(t_k = t_r^*,a_k = j_q^*) + ∑_k = 1^2p∑_r = 1^T∑_q = 1^J1(t_k = t_r^*,j_k = j_q^*) = 4p by definition of β_rq's. This entails that TJ≤ 2p, and hence, J≤ p because T≥ 2. Also, by the constraint j_k≠ a_k for all k∈[p], we naturally have J≥ 2. Returning to the summation, we first note that |E_tij|^k≤ρ_n for all k and for all t∈[m],i,j∈[n]. This enables us to derive 1/^(i)_2→∞^2p[{∑_t_1,t_2∈[m],t_1≠ t_2∑_j_1,j_2,j_3∈[n]:j_1≠ j_2, j_1≠ j_3E_t_1ij_1E_t_1ij_2E_t_2ij_1E_t_2ij_3⟨_j_2^(i),_j_3^(i)⟩}^p|^(i)] ≤∑_a_1,…,a_2p∈[n] a_p + 1 = a_1,…,a_2p = a_p∑_t_1,…,t_2p∈[m]∑_j_1∈[n]\{a_1},…,j_2p∈[n]\{a_2p}|(∏_k = 1^2pE_t_kia_kE_t_kij_k)| ≤∑_T = 2^p∑_J = 2^p1(TJ≤ 2p)∑_t_1^*,…,t_T^*∈[m]∑_j_1^*,…,j_J^*∈[n]∑_t_1,…,t_2p∈[m] {t_1,…,t_2p} ={t_1^*,…,t_T^*}∑_a_1,…,a_2p,j_1,…,j_2p∈[n] {a_1,…,a_2p,j_1,…,j_2p} = {j_1^*,…,j_J^*} a_p + 1 = a_1,…,a_2p = a_p ∏_r = 1^T∏_q = 1^J|E_t_r^*ij_q^*^β_rq| ≤∑_T = 2^p∑_J = 2^p1(TJ≤ 2p)∑_t_1^*,…,t_T^*∈[m]∑_j_1^*,…,j_J^*∈[n]∑_t_1,…,t_2p∈[m] {t_1,…,t_2p} ={t_1^*,…,t_T^*}∑_a_1,…,a_2p,j_1,…,j_2p∈[n] {a_1,…,a_2p,j_1,…,j_2p} = {j_1^*,…,j_J^*} a_p + 1 = a_1,…,a_2p = a_p ∏_r = 1^T∏_q = 1^Jρ_n ≤∑_T = 2^p∑_J = 2^p∑_t_1^*,…,t_T^*∈[m]∑_j_1^*,…,j_J^*∈[n]∑_t_1,…,t_2p∈[m] {t_1,…,t_2p} ={t_1^*,…,t_T^*}∑_a_1,…,a_2p,j_1,…,j_2p∈[n] {a_1,…,a_2p,j_1,…,j_2p} = {j_1^*,…,j_J^*} a_p + 1 = a_1,…,a_2p = a_pρ_n^TJ1(TJ≤ 2p) ≤∑_T = 2^p∑_J = 2^p∑_t_1^*,…,t_T^*∈[m]∑_j_1^*,…,j_J^*∈[n]1(TJ≤ 2p) T^2pJ^3pρ_n^TJ = ∑_T = 2^p∑_J = 2^pm^Tn^TJρ_n^TJ(TJ)^2pJ^p/n^(T - 1)J1(TJ≤ 2p) ≤∑_T = 2^p∑_J = 2^pm^TJ/2(nρ_n)^TJ(2p)^2p(J^p/n^J)1(TJ≤ 2p) ≤ C^pp^2p{∑_T = 2^p∑_J = 2^p(m^1/2nρ_n)^TJ1(TJ≤ 2p)} ≤ C^pp^2p + 2(m^1/2nρ_n)^2p, where we have used the inequality x^p/n^x≤ (p/log n)^p≤ C^p for some constant C > 0 for all x > 0 and the condition that m^1/2nρ_n≥ 1, provided that p is selected such that p≤ Clog n for some constant C > 0. Then by Markov's inequality, for any K > 0 and even p, [|∑_t_1,t_2∈[m],t_1≠ t_2∑_j_1≠ j_2≠ j_3E_t_1ij_1E_t_1ij_2E_t_2ij_1E_t_2ij_3⟨_j_2^(i),_j_3^(i)⟩| > Kmn^2ρ_n^2log^2(m + n)^(i)_2→∞^2] = {[|∑_t_1,t_2∈[m],t_1≠ t_2∑_j_1≠ j_2≠ j_3E_t_1ij_1E_t_1ij_2E_t_2ij_1E_t_2ij_3⟨_j_2^(i),_j_3^(i)⟩| > Kmn^2ρ_n^2log^2(m + n)^(i)_2→∞^2|^(i)]} ≤ 2exp{plog C + (2p + 2)log p - plog K - 2ploglog(m + n)}. Then by picking a sufficiently large K and p = 2⌊(1/2)log(m + n)⌋, we see that {|∑_t_1,t_2∈[m],t_1≠ t_2∑_j_1≠ j_2≠ j_3E_t_1ij_1E_t_1ij_2E_t_2ij_1E_t_2ij_3⟨_j_2^(i),_j_3^(i)⟩| > Kmn^2ρ_n^2log^2(m + n)^(i)_2→∞^2} ≤ O((m + n)^-c). Therefore, a union bound over i∈[n] entails that max_i∈[n]∑_a = 1^n∑_t = 1^m∑_j∈[n]\{a}E_tiaE_tij_j^(i)_2^2/^(i)_2→∞^2 = {mn^2ρ_n^2log^2(m + n)}. The proof is thus completed because log(m + n) = Θ(log n). Let (_t)_t = 1^m be the matrices defined in Section <ref> and suppose Assumption <ref> holds. Then ∑_a ∈[n]\{i}|∑_t = 1^m∑_j = 1^nE_tajE_tij|^2 = {mn^2ρ_n^2(log n)^2}. The proof idea is similar to that of Lemma <ref>, and indeed, is slightly more straightforward. Let p be a positive integer to be determined later. We focus on bounding pth moment of the quantity of interest. Write [(∑_a ∈[n]\{i}|∑_t = 1^m∑_j = 1^nE_tajE_tij|^2)^p] = ∑_a_1,…,a_p∈[n]\{i}∑_t_1,…,t_p∈[m] s_1,…,s_p∈[m]∑_j_1,…,j_p∈[n] l_1,…,l_p∈[n]( ∏_k = 1^pE_t_ka_kj_kE_t_kij_kE_s_ka_kl_kE_s_kil_k). Relabel a_p + k = a_k, t_p + k = s_k, j_p + k = l_k for all k∈[p]. This allows us to write the expected value in the summation above as (∏_k = 1^2pE_t_ka_kj_kE_t_kij_k). Let A(a_1,…,a_2p) denote the number of unique elements in {a_1,…,a_2p}, T(t_1, …, t_2p) denote the number of unique elements in {t_1,…,t_2p}, and J(j_1,…,j_2p) denote the number of unique elements in {j_1,…,j_2p}. Note that A(a_1,…,a_2p)≤ p because of the constraint a_p + k = a_k for all k∈[p]. For A = A(a_1,…,a_2p), T = T(t_1,…,t_2p), J = J(j_1,…,j_2p), let a_1^*,…,a_A^* be the unique elements among {a_1,…,a_p}, t_1^*,…,t_T^* be the unique elements among {t_1,…,t_2p}, and j_1^*,…,j_J^* be the unique elements among {j_1,…,j_2p}. For each v∈[A], r∈[T], q∈[J], define α_r = ∑_k = 1^2p1(t_k = t_r^*), β_rq = ∑_k = 1^2p1(t_k = t_r^*, j_k = j_q^*), and γ_rqv = ∑_k = 1^2p1(t_k = t_r^*, j_k = j_q^*, a_k = a_v^*). Then the expected value in the summand can be re-written as |(∏_k = 1^2pE_t_ka_kj_kE_t_kij_k)| = |{∏_r = 1^T∏_q = 1^J∏_v = 1^A∏_k = 1^2pE_t_r^*a_v^*j_q^*^1(t_k = t_r^*,j_k = j_q^*,a_k = a_v^*)E_t_r^*ij_q^*^1(t_k = t_r^*,j_k = j_q^*,a_k = a_v^*)}| = |{∏_r = 1^T∏_q = 1^J∏_v = 1^AE_t_r^*a_v^*j_q^*^∑_k = 1^2p1(t_k = t_r^*,j_k = j_q^*,a_k = a_v^*)E_t_r^*ij_q^*^∑_k = 1^2p1(t_k = t_r^*,j_k = j_q^*,a_k = a_v^*)}| = |{∏_r = 1^T∏_q = 1^J∏_v = 1^AE_t_r^*a_v^*j_q^*^γ_rqvE_t_r^*ij_q^*^γ_rqv}| = |{∏_r = 1^T∏_q = 1^J(∏_v = 1^AE_t_r^*a_v^*j_q^*^γ_rqv)(E_t_r^*ij_q^*^∑_v = 1^Aγ_rqv)}| = |{∏_r = 1^T∏_q = 1^J(∏_v = 1^AE_t_r^*a_v^*j_q^*^γ_rqv)(E_t_r^*ij_q^*^β_rq)}| = ∏_r = 1^T∏_q = 1^J|∏_v = 1^A(E_t_r^*a_v^*j_q^*^γ_rqv)||(E_t_r^*ij_q^*^β_rq)|. Note that in order for the above expected value to be nonzero, it is necessary that β_rq≥ 2 and γ_rqv≥ 2 for all r∈[T],q∈[J],v∈[A]. Since ∑_r = 1^T∑_q = 1^Jβ_rq = 2p and ∑_r = 1^T∑_q = 1^J∑_v = 1^Aγ_rqv = 2p, it is also necessary that TJA≤ p and hence, T≤ p and J≤ p. Observe that |E_tij|^k≤ρ_n for all t∈[m],i,j∈[n],k≥ 2 and (∏_k = 1^2pE_t_ka_kj_kE_t_kij_k)| ≤∏_r = 1^T∏_q = 1^Jρ_n^A + 1. Returning to the sum of interest, we can write it alternatively as follows: {(∑_a ∈[n]\{i}|∑_t = 1^m∑_j = 1^nE_tajE_tij|^2)^p} ≤∑_T = 1^p∑_J = 1^p∑_A = 1^p1(TJA≤ p)∑_(a_v^*)_v = 1^A⊂[n]\{i} (t_r^*)_r = 1^T⊂[m] (j_q^*)_q = 1^J⊂[n]∑_ a_1,…,a_2p:{a_1,…,a_2p} = {a_1^*,…,a_A^*} t_1,…,t_2p:{t_1,…,t_2p} = {t_1^*,…,t_T^*} j_1,…,j_2p:{j_1,…,j_2p} = {j_1^*,…,j_J^*}|( ∏_k = 1^pE_t_ka_kj_kE_t_kij_kE_s_kal_kE_s_kil_k)| ≤∑_T = 1^p∑_J = 1^p∑_A = 1^p1(TJA≤ p)∑_(a_v^*)_v = 1^A⊂[n]\{i} (t_r^*)_r = 1^T⊂[m] (j_q^*)_q = 1^J⊂[n]∑_ a_1,…,a_2p:{a_1,…,a_2p} = {a_1^*,…,a_A^*} t_1,…,t_2p:{t_1,…,t_2p} = {t_1^*,…,t_T^*} j_1,…,j_2p:{j_1,…,j_2p} = {j_1^*,…,j_J^*}ρ_n^(A + 1)TJ ≤ p^2p∑_T = 1^p∑_J = 1^p∑_A = 1^p1(TJA≤ p)m^Tn^A + Jρ_n^(A + 1)TJ≤ p^2p∑_T = 1^p∑_J = 1^p∑_A = 1^p1(TJA≤ p)(m^1/2nρ_n)^(A + 1)TJ ≤ p^2p + 3(m^1/2nρ_n)^2p. Then by Markov's inequality, for any K > 0, {∑_a∈[n]\{i}|∑_t = 1^m∑_j = 1^nE_tajE_tij|^2 > Kmn^2ρ_n^2log^2(m + n)} ≤exp{(2p + 3)log p - plog K - 2ploglog(m + n)}. Then for any c > 0, with p = ⌊log(m + n)⌋ and K = e^c + 3, we see immediately that {∑_a∈[n]\{i}|∑_t = 1^m∑_j = 1^nE_tajE_tij|^2 > e^c + 3mn^2ρ_n^2log^2(m + n)}≤ O((m + n)^-c). The proof is thus completed because log(m + n) = Θ(log n). Suppose Assumption <ref> and <ref> hold. Let (^(i))_i = 1^n be random matrices in 𝕆(n, d) such that ^(i)∈ℝ^n× d is independent of _i = (E_tij:t∈[m],j∈[n]) for each i∈ [n]. Then max_i∈[n]{((^(i))(^(i))) - ()}^(i)_2/^(i)_2→∞ = (ζ_𝗈𝗉). By definition, {() - ((^(i))(^(i)))}^(i) = ∑_t = 1^m((_t - _t^(i))_t)^(i) + ∑_t = 1^m(_t(_t - _t^(i)))^(i) + ∑_t = 1^m((_t - _t^(i))^2)^(i). Below, we work with the three terms on the right-hand side of (<ref>) separately. ▪ The first term in (<ref>). For each j∈ [n], by definition, we have ((_t - _t^(i))_t)_j = [ E_ti1A_tij ⋯ E_ti(i - 1)A_tij _i_t_t_j E_ti(i + 1)A_tij ⋯ E_tinA_tij ] - E_tijA_tij_j if j≠ i and ((_t - _t^(i))_t)_i = [E_ti1A_tii,⋯,E_ti(i - 1)A_tii,0,E_ti(i + 1)A_tii,⋯,E_tinA_tii]. Let _j^(i) denote the transpose of the jth row of ^(i). Then for any a∈ [n], a≠ i, we have _a∑_t = 1^m((_t - _t^(i))_t)^(i) = ∑_t = 1^m∑_j∈[n]\{i}(E_tiaA_tij - E_tijA_tij_a_j)(_j^(i)) + ∑_t = 1^mE_tiaA_tii(_i^(i)) = ∑_t = 1^m∑_j = 1^nE_tiaA_tij(_j^(i)) - E_tiaA_ia(_a^(i)) = ∑_t = 1^m∑_j ∈[n]\{a}E_tiaA_tij(_j^(i)). This implies that ∑_a∈[n]\{i}_a∑_t = 1^m((_t - _t^(i))_t)^(i)_2^2 ≲∑_a ∈[n]\{i}∑_t = 1^m∑_j∈[n]\{a}E_tiaE_tij(_j^(i))_2^2 + ∑_a∈[n]\{i}∑_t = 1^m∑_j∈[n]\{a}E_tiaP_tij(_j^(i))_2^2. For the second term, by Bernstein's inequality, we write ∑_a∈[n]\{i}∑_t = 1^m∑_j∈[n]\{a}E_tiaP_tij(_j^(i))_2^2 = ∑_a∈[n]\{i}∑_t,s∈[m]∑_j,l∈[n]\{a}E_tiaE_siaP_tijP_sil⟨_j^(a),_l^(a)⟩ = ∑_a∈[n]\{i}∑_t = 1^m∑_j,l∈[n]\{a}E_tia^2P_tijP_til⟨_j^(a),_l^(a)⟩ + ∑_a∈[n]\{i}∑_t,s∈[m],t≠ s∑_j,l∈[n]\{a}E_tiaE_siaP_tijP_sil⟨_j^(a),_l^(a)⟩ = ∑_a∈[n]\{i}∑_t = 1^m(E_tia^2 - σ_tia^2)(∑_j,l∈[n]\{a}P_tijP_til⟨_j^(a),_l^(a)⟩) + ∑_a∈[n]\{i}∑_t = 1^mσ_tia^2(∑_j,l∈[n]\{a}P_tijP_til⟨_j^(a),_l^(a)⟩) + ∑_a∈[n]\{i}∑_t,s∈[m],t≠ s∑_j,l∈[n]\{a}E_tiaE_siaP_tijP_sil⟨_j^(a),_l^(a)⟩ = (mn^3ρ_n^3)^(i)_2→∞^2 + ∑_a∈[n]\{i}∑_t,s∈[m],t≠ s∑_j,l∈[n]\{a}E_tiaE_siaP_tijP_sil⟨_j^(a),_l^(a)⟩. The concentration bound for the second term above can be obtained by applying a decoupling argument. Specifically, let {E̅_sia:s∈[m],a∈[n]} be an independent copy of {E_sia:s∈[m],a∈[n]}. By a conditioning argument and Bernstein's inequality, we obtain ∑_a∈[n]\{i}∑_t,s∈[m],t≠ s∑_j,l∈[n]\{a}E_tiaE̅_siaP_tijP_sil⟨_j^(a),_l^(a)⟩ = {(mnρ_n)^1/2log^1/2(m + n)max_a∈[n],t∈[m]|∑_s∈[m]\{a}∑_j,l∈[n]\{a}E̅_siaP_tijP_sil⟨_j^(a),_l^(a)⟩|} = [(mnρ_n)^1/2(log n)^1/2{log n + (mρ_n)^1/2(log n)^1/2}n^2ρ_n^2^(i)_2→∞^2] = {mn^3ρ_n^3log n^(i)_2→∞^2}. Then by the decoupling inequality <cit.>, we obtain ∑_a∈[n]\{i}∑_t,s∈[m],t≠ s∑_j,l∈[n]\{a}E_tiaE_siaP_tijP_sil⟨_j^(a),_l^(a)⟩ = {mn^3ρ_n^3log n^(i)_2→∞^2}. Therefore, max_i∈[n]2∑_a∈[n]\{i}∑_t = 1^m∑_j∈[n]\{a}E_tiaP_tij(_j^(i))_2^2/^(i)_2→∞^2 = {m^1/2(nρ_n)^3/2(log n)^1/2}^2. For the first term, Lemma <ref> yields max_i∈[n]∑_a∈[n]∑_t = 1^m∑_j∈[n]\{a}E_tiaE_tij(_j^(i))_2^2/^(i)_2→∞^2 = {mn^2ρ_n^2(log n)^2}. We then proceed to combine the above results and compute max_i∈[n]∑_a∈[n]\{i}_a∑_t = 1^m((_t - _t^(i))_t)^(i)_2^2/^(i)_2→∞^2 = (ζ_𝗈𝗉^2). For a = i, by definition, we have _i∑_t = 1^m((_t - _t^(i))_t)^(i) = ∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE_tlj(_j^(i)) + ∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilP_tlj(_j^(i)). For the first term in (<ref>), note that ∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE_tlj(_j^(i)) = _i()^(i). Then by Lemma <ref> max_i∈[n]∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^nE_tilE_tlj(_j^(i))_2/^(i)_2→∞ = (m^1/2nρ_nlog n). For the second term in (<ref>), by Bernstein's inequality, ∑_t = 1^m∑_j∈[n]\{i}∑_l = 1^n E_tilP_tlj(_j^(i)) = max_t∈[m],l∈[n]∑_j = 1^nP_tlj(_j^(i))_2{(mnρ_n)^1/2(log n)^1/2} ≤_2→∞{m^1/2(nρ_n)^3/2(log n)^1/2}. Combining (<ref>) and (<ref>) leads to max_i∈[n]_i∑_t = 1^m((_t - _t^(i))_t)^(i)_2/^(i)_2→∞ = (ζ_𝗈𝗉) by (<ref>). Combining the above result with (<ref>), we conclude that max_i∈[n]∑_t = 1^m((_t - _t^(i))_t)^(i)_2/^(i)_2→∞ = (ζ_𝗈𝗉). ▪ The second term in (<ref>). By definition, for any j∈[n], _t(_t - _t^(i))_j = E_tij_t_i if j≠ i and _t(_t - _t^(i))_i = _t_t_i. Then for any a∈[n], _a∑_t = 1^m(_t(_t - _t^(i)))^(i) = ∑_t = 1^m∑_j∈[n]\{i}_a(_t(_t - _t^(i)))_j(_j^(i)) + ∑_t = 1^m_a(_t(_t - _t^(i)))_i(_i^(i)) = ∑_t = 1^m∑_j = 1 j≠ i^nE_tiaE_tij(_j^(i))1(a≠ j) + ∑_t = 1^m∑_j = 1 j≠ i^nP_tiaE_tij(_j^(i))1(a≠ j) + ∑_t = 1^m_a_t_t_i (_i^(i))1(a≠ i) = ∑_t = 1^m∑_j = 1 j≠ i^nE_tiaE_tij(_j^(i))1(a≠ j) + ∑_t = 1^m∑_j = 1 j≠ i^nP_tiaE_tij(_j^(i))1(a≠ j) + ∑_t = 1^m∑_j = 1^nE_tajE_tij(_i^(i))1(a≠ i) + ∑_t = 1^m∑_j = 1^nP_tajE_tij(_i^(i))1(a≠ i). We next proceed to write ∑_t = 1^m(_t(_t - _t^(i)))^(i)_2 ≲{∑_a = 1^n∑_t = 1^m∑_j∈[n]\{i,a}E_tiaE_tij_j^(i)_2^2}^1/2 + √(n)max_a∈[n]∑_t = 1^m∑_j∈[n]\{i,a}P_tiaE_tij_j^(i)_2 + ^(i)_2→∞[{∑_a ∈[n]\{i}|∑_t = 1^m∑_j = 1^nE_tajE_tij|_2^2}^1/2 + √(n)max_a ∈[n]|∑_t = 1^m∑_j = 1^nP_tajE_tij|_2]. By Lemma <ref>, the first term satisfies max_i∈[n]{∑_a = 1^n∑_t = 1^m∑_j∈[n]\{i,a}E_tiaE_tij_j^(i)_2^2}^1/2/^(i)_2→∞ = (m^1/2nρ_nlog n). By Bernstein's inequality and a union bound over i,a∈[n], the second term and the fourth term satisfy max_i∈[n]√(n)max_a∈[n]∑_t = 1^m∑_j∈[n]\{i,a}P_tiaE_tij_j^(i)_2/^(i)_2→∞ = {m^1/2nρ_n^3/2(log n)^1/2}, max_i∈[n]^(i)_2→∞√(n)max_a ∈[n]\{i}|∑_t = 1^m∑_j = 1^nP_tajE_tij|/^(i)_2→∞ = {m^1/2nρ_n^3/2(log n)^1/2}. For the third term, we apply Lemma <ref> together with a union bound over i∈[n] to obtain max_i∈[n]^(i)_2→∞{∑_a∈[n]\{i}∑_t = 1^m|∑_t = 1^m∑_j = 1^nE_tajE_tij|^2}^1/2/^(i)_2→∞ = (m^1/2nρ_nlog n). Combining the above concentration results yields max_i∈[n]∑_t = 1^m(_t(_t - _t^(i)))^(i)_2/^(i)_2→∞ = (ζ_𝗈𝗉). ▪ The third term in (<ref>). By definition, ((_t - _t^(i))^2)_j = [ E_ti1E_tij,⋯,E_ti(j - 1)E_tij,0,E_ti(j + 1)E_tij,⋯, E_tinE_tij] if j≠ i, and ((_t - _t^(i))^2)_i = [E_ti1E_tii, ⋯, E_ti(i - 1)E_tii, 0, E_ti(i + 1)E_tii, ⋯, E_tinE_tii]. Therefore, for any a∈[n], _a∑_t = 1^m((_t - _t^(i))^2)^(i) = ∑_t = 1^m∑_j = 1^nE_tiaE_tij1(a≠ j)(_j^(i)). This further entails that ∑_t = 1^m((_t - _t^(i))^2)^(i)_2 = {∑_a∈[n]\{i}∑_t = 1^m∑_j∈[n]\{a}^nE_tiaE_tij_j^(i)_2^2}^1/2 , and by Lemma <ref>, we immediately obtain that max_i∈[n]∑_t = 1^m((_t - _t^(i))^2)^(i)_2/^(i)_2→∞ = (m^1/2nρ_nlog n). Combining the concentration results for the three terms in (<ref>) completes the proof. §.§ Concentration Bounds for Leave-One-Out Matrices We next apply the preliminary concentration results in Section <ref> to obtain sharp leave-one-out error bounds. The proofs of these results primarily rely recursive error bounds and induction. Suppose Assumptions <ref>–<ref> hold. Further assume that, for any c > 0, there exists a c-dependent constant N_c > 0, such that for all n≥ N_c, with probability at least 1 - O(n^-c), max_s∈[S],r∈[R] - _rs_2≤λ_d()/8 and max_i∈[n]max_s∈[S],r∈[R] - _rs^(i)_2≤λ_d()/8. If R,S = O(1), then max_i∈[n]^(i)_RS^(i)_RS - _RS_RS_2 = (ζ_𝗈𝗉/mn^2ρ_n^2)max_i∈[n]_RS^(i)_2→∞. By equation (<ref>) in the proof of Lemma <ref> and Weyl's inequality, λ_d{((^(i))(^(i))) - _RS^(i)}≥λ_d()/2 and λ_d + 1{() - _RS}≤ (1/4)λ_d() with probability at least 1 - O(n^-c). This entails that λ_d{((^(i))(^(i))) - _RS^(i)} - λ_d + 1{() - _RS}≥λ_d()/4 with probability at least 1 - O(n^-c). By Davis-Kahan theorem (in the form of Theorem VII 3.4 in <cit.>), we have ^(i)_RS^(i)_RS - _RS_RS_2 ≤{((^(i))(^(i))) - () - _RS^(i) + _RS}_RS^(i)_2/λ_d{((^(i))(^(i))) - _RS^(i)} - λ_d + 1{() - _RS} ≤4{((^(i))(^(i))) - ()}_RS^(i)_2/mΔ_n^2 + 4_RS - _RS^(i)_2_RS^(i)_2→∞/mΔ_n^2, where we have used the fact that _RS and _RS^(i) are diagonal matrices. For the second term, by the definition of _Rs, _Rs^(i), and triangle inequality, we have, for any s∈[S], _Rs - _Rs^(i)_2 ≤_(R - 1)S_(R - 1)S - (_(R - 1)S^(i))(_(R - 1)S^(i))_2 ×{() - _2 + ((^(i))(^(i))) - _2 + _R(s - 1) - _2 + _R(s - 1)^(i) - _2} + {((^(i))(^(i))) - ()}_(R - 1)S^(i)_2 + _R(s - 1) - _R(s - 1)^(i)_2. By Lemma <ref>, we know that () - _2 = (mn^2ρ_n^2) and {((^(i))(^(i))) - _2 = (mn^2ρ_n^2) because ^(i) = + ^(i) and ^(i) also satisfies Assumption <ref>. Note that _R0 = _R0^(i) = _n× n. Then by induction over s, we obtain _RS - _RS^(i)_2 ≤(mn^2ρ_n^2)_(R - 1)S_(R - 1)S - (_(R - 1)S^(i))(_(R - 1)S^(i))_2 + S{((^(i))(^(i))) - ()}_(R - 1)S^(i)_2. This entails that ^(i)_RS^(i)_RS - _RS_RS_2 ≤4{((^(i))(^(i))) - ()}_RS^(i)_2/mΔ_n^2 + 4_RS - _RS^(i)_2_RS^(i)_2→∞/mΔ_n^2 ≤4{((^(i))(^(i))) - ()}_RS^(i)_2/mΔ_n^2 + (1)_(R - 1)S_(R - 1)S - (_(R - 1)S^(i))(_(R - 1)S^(i))_2_RS^(i)_2→∞ + S{((^(i))(^(i))) - ()}_(R - 1)S^(i)_2/mΔ_n^2_RS^(i)_2→∞. Observe that _RS^(i) is independent of _i = (E_tij:t∈[m],j∈[n]). Then by Lemma <ref>, for any c > 0, there exists a constant C_c > 0 such that for sufficiently large n, with probability at least 1 - O(n^-c), max_i∈[n]^(i)_RS^(i)_RS - _RS_RS_2 ≤(ζ_𝗈𝗉/mn^2ρ_n^2) max_i∈[n]_RS^(i)_2→∞ + (ζ_𝗈𝗉/mn^2ρ_n^2) max_i∈[n]_(R - 1)S^(i)_2→∞_RS^(i)_2→∞ + (1)max_i∈[n]_(R - 1)S_(R - 1)S - (_(R - 1)S^(i))(_(R - 1)S^(i))_2max_i∈[n]_RS^(i)_2 ≤(ζ_𝗈𝗉/mn^2ρ_n^2)max_i∈[n]_RS^(i)_2→∞ + (1)max_i∈[n]_(R - 1)S^(i)_(R - 1)S^(i) - _(R - 1)S_(R - 1)S_2max_i∈[n]_RS^(i)_2→∞. The remaining proof is completed by induction over R and observe that _0S = _0S^(i) = _n× d for all i∈[n]. Suppose Assumptions <ref> and <ref> hold. Further assume that for any c > 0, there exists a c-dependent constant N_c, such that for any n ≥ N_c, with probability at least 1 - O(n^-c), max_r∈[R],s∈[S] - _rs_2≤λ_d()/20 and max_i∈[n]max_r∈[R],s∈[S] - _rs^(i)_2≤λ_d()/20. If R,S = O(1), then for any c > 0, there exist a c-dependent constants N_c > 0, such that for any n≥ N_c, max_i∈[n](^(i)_RS)^-1_2≤ 2 and max_i∈[n]^(i)_RS_2→∞≤ 4_RS_2→∞ with probability at least 1 - O(n^-c). Consequently, max_i∈[n]^(i)_RS^(i)_RS - _RS_RS_2 = (ζ_𝗈𝗉/mn^2ρ_n^2)_RS_2→∞, max_i∈[n]^(i)_RS^(i)_RS - _2→∞ = (1)_RS_2→∞. The “consequently” part follows from the second assertion, Lemma <ref>, and the triangle inequality that max_i∈[n]^(i)_RS^(i)_RS - _2→∞≤max_i∈[n]^(i)_RS^(i)_RS - _RS_RS_2 + _RS_2→∞ + _2→∞. It is thus sufficient to establish the first and second assertions. By Lemma <ref> and a union bound over i∈[n], there exists some c-dependent constant N_c, such that max_i∈[n]((^(i))(^(i))) - - _RS^(i)_2/λ_d() ≤max_i∈[n]((^(i))(^(i))) - - _2/mΔ_n^2 + max_i∈[n] - _RS^(i)_2/mΔ_n^2 ≤1/2 with probability at least 1 - O(n^-c) for all n≥ N_c. By Lemma 2 in <cit.>, {max_i∈[n](^(i)_RS)^-1_2≤ 2}≥ 1 - O(n^-c) n≥ N_c. By Lemma <ref>, there exists a c-dependent constant C_c > 0, such that max_i∈[n]_RS^(i)_2→∞ ≤max_i∈[n]_RS^(i)^(i)_RS_2→∞(^(i)_RS)^-1_2 ≤ 2max_i∈[n]_RS^(i)^(i)_RS_2→∞ ≤ 2max_i∈[n]_RS^(i)^(i)_RS - _RS_RS_2→∞ + 2_RS_2→∞ ≤ 2_RS_2→∞ + 2C_cζ_𝗈𝗉/mΔ_n^2max_i∈[n]_RS^(i)_2→∞ with probability at least 1 - O(n^-c) whenever n≥ N_c. Namely, 1/2max_i∈[n]_RS^(i)_2→∞ ≤(1 - 2C_cζ_𝗈𝗉/mΔ_n^2)max_i∈[n]_RS^(i)_2→∞≤ 2_RS_2→∞, and hence, max_i∈[n]_RS^(i)_2→∞≤ 4_RS_2→∞ with probability at least 1 - O(n^-c) for all n≥ N_c. Suppose the conditions of Lemma <ref> hold. Further assume that, for any c > 0, there exists a c-dependent constant N_c > 0, such that for any n > N_c, max_i,j∈[n],i≠ j,r∈[R],s∈[S] - _rs^(i,j)_2 ≤λ_d()/20 with probability at least 1 - O(n^-c). Then max_i∈[n]_i()(_RS^(i)_RS^(i) - )_2 = (q_n_RS_2→∞ + ζ_𝗈𝗉log n/m^1/2n^3/2ρ_n) + (log n/m^1/2n^3/2ρ_n) - _RS_2 , max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )_2/mΔ_n^2 = {log n/m^1/2nρ_n + (log n)^1/2/(mnρ_n)^1/2}_RS_2→∞ + (log n/m^3/2n^7/2ρ_n^3) - _RS_2, where q_n = (log n)^2 + ζ_𝗈𝗉log n(nρ_n∨log n)^1/2/(mn^2ρ_n^2). For the first claim, a simple algebra shows that _i() = ∑_t = 1^m_i_t_t^(i) + ∑_t = 1^m E_tii(_i_t - E_tii_i) and _i()(_RS^(i)_RS^(i) - ) = ∑_t = 1^m_i_t_t^(i)(_RS^(i)_RS^(i) - ) + ∑_t = 1^mE_tii(_i_t - E_tii_i) (_RS^(i)_RS^(i) - ). Below, we analyze the two terms on the right-hand side of (<ref>) separately. ▪ For the first term in (<ref>), observe that (_i_t)_t = 1^m and {_t^(i)(_RS^(i)_RS^(i) - )}_t = 1^m are independent. Also, note that by Lemma <ref>, Lemma <ref>, Davis-Kahan theorem, and a union bound over i∈[n], we have max_i∈[n]^(i)_RS^(i)_RS - _2 ≤max_i∈[n]^(i)_RS^(i)_RS - _RS_RS_2 + _RS_RS - _2 = (ζ_𝗈𝗉/mn^2ρ_n^2) + O(1/mn^2ρ_n^2) - _RS_2. By Lemma <ref> and Lemma <ref>, we have _2≤()_2 + max_i∈[n]∑_t = 1^m∑_j = 1^nE_tij^2 = (m^1/2nρ_nlog n + mnρ_n)≤(mnρ_nlog n). By a union bound, max_i∈[n](^(i))(^(i))_2 = (mnρ_nlog n). Then by Bernstein's inequality, Lemma <ref>, and a union bound over i∈[n], max_i∈[n]∑_t = 1^m_i_t_t^(i)(_RS^(i)_RS^(i) - )_2 = max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞(log n) + max_i∈[n]{∑_t = 1^m∑_j = 1^n_j_t^(i)(_RS^(i)_RS^(i) - )_2}^1/2{ρ_n^1/2(log n)^1/2} = max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞(log n) + max_i∈[n](^(i))(_RS^(i)_RS^(i) - )_F{ρ_n^1/2(log n)^1/2} = max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞(log n) + max_i∈[n](^(i))(^(i))_2^1/2_RS^(i)_RS^(i) - _F{ρ_n^1/2(log n)^1/2} = max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞(log n) + (ζ_𝗈𝗉log n/m^1/2n^3/2ρ_n) + ( log n/m^1/2n^3/2ρ_n) - _RS_2. The analysis of max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞ requires the introduction of the leave-two-out matrices. Write max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞ = max_i∈[n]max_t∈[m]max_j∈[n]\{i}_j_t^(i)(_RS^(i)_RS^(i) - )_2 ≤max_i∈[n]max_t∈[m]max_j∈[n]\{i}_j_t^(i)(_RS^(i)_RS^(i) - _RS^(i, j)_RS^(i, j))_2 + max_i∈[n]max_t∈[m]max_j∈[n]\{i}_j_t^(i)(_RS^(i, j)_RS^(i, j) - )_2 ≤max_t∈[m]max_i,j∈[n],i≠ j_t^(i)_2_RS^(i)_RS^(i) - _RS^(i, j)_RS^(i, j)_2 + max_i∈[n]max_t∈[m]max_j∈[n]\{i}_j_t^(i)(_RS^(i, j)_RS^(i, j) - )_2 Since _RS^(i, j) and _RS^(i, j) are the leave-one-out versions of _RS^(i) and _RS^(i), respectively, and ^(i) also satisfies Assumptions <ref>–<ref>, then by Lemma <ref>, we have max_i,j∈[n],i≠ j_RS^(i)_RS^(i) - _RS^(i, j)_RS^(i, j)_2 = {ζ_𝗈𝗉(mn^2ρ_n^2)^-1}_RS_2→∞ , so that the first term satisfies max_t∈[m]max_i,j∈[n],i≠ j_t^(i)_2_RS^(i)_RS^(i) - _RS^(i, j)_RS^(i, j)_2 = {ζ_𝗈𝗉(nρ_n∨log n)^1/2/mn^2ρ_n^2}_RS_2→∞ by Lemma <ref> and a union bound over t∈[m]. Note here we also used max_i,j∈[n],i≠ jmax_r∈[R],s∈[S] - _rs^(i, j)_2 ≤λ_d()/20 with probability at least 1 - O(n^-c). For the second term, observe that by Lemma <ref>, Davis-Kahan theorem, Lemma <ref>, and a union bound over i,j∈[n],i≠ j, we have max_i,j∈[n],i≠ j^(i, j)_RS^(i, j)_RS - _2 ≤max_i,j∈[n],i≠ j^(i, j)_RS^(i, j)_RS - ^(i)_RS^(i)_RS_2 + max_i∈[n]^(i)_RS^(i)_RS - _2 = (ζ_𝗈𝗉/mn^2ρ_n^2) + O(1/mn^2ρ_n^2) - _RS_2, max_i,j∈[n],i≠ j^(i, j)_RS^(i, j)_RS - _2→∞ = (_RS_2→∞). Also, note that (_j_t^(i))_t = 1^m and _RS^(i, j)_RS^(i, j) - are independent so that the second term can be bounded using a union bound over i,j∈[n],t∈[m], Bernstein's inequality, and Lemma <ref>: max_i∈[n]max_t∈[m]max_j∈[n]\{i}_j_t^(i)(_RS^(i, j)_RS^(i, j) - )_2 = {log nmax_i,j∈[n]:i≠ j_RS^(i, j)_RS^(i, j) - _2→∞ + ρ_n^1/2(log n)^1/2max_i,j∈[n],i≠ j^(i, j)_RS^(i, j)_RS - _2} ≤(log n)_RS_2→∞ + {ζ_𝗈𝗉(log n)^1/2/mn^2ρ_n^3/2} + {(log n)^1/2/mn^2ρ_n^3/2} - _RS_2. Combining the two pieces above together, we obtain max_i∈[n]max_t∈[m]_t^(i)(_RS^(i)_RS^(i) - )_2→∞ ≤max_t∈[m]max_i,j∈[n],i≠ j_t^(i)_2_RS^(i)_RS^(i) - _RS^(i, j)_RS^(i, j)_2 + max_i∈[n]max_t∈[m]max_j∈[n]\{i}_j_t^(i)(_RS^(i, j)_RS^(i, j) - )_2 = {ζ_𝗈𝗉(nρ_n ∨log n)^1/2/mn^2ρ_n^2 + log n }_RS_2→∞ + {ζ_𝗈𝗉(log n)^1/2/mn^2ρ_n^3/2} + {(log n)^1/2/mn^2ρ_n^3/2} - _RS_2. Hence, we further obtain max_i∈[n]∑_t = 1^m_i_t_t^(i)(_RS^(i)_RS^(i) - )_2 = { (log n)^2 + ζ_𝗈𝗉(nρ_n∨log n)^1/2/mn^2ρ_n^2log n }_RS_2→∞ + ( ζ_𝗈𝗉log n/m^1/2n^3/2ρ_n) + ( log n/m^1/2n^3/2ρ_n) - _RS_2. ▪ For the second term in (<ref>), we can rewrite it as ∑_t = 1^m∑_j∈[n]\{i}E_tiiE_tij_j(_RS^(i)_RS^(i) - ). Denote by _RS^(i) = _RS^(i)_RS^(i) - and [_RS^(i)]_jk the (j, k)th entry of _RS^(i). By Bernstein's inequality and a conditioning argument, ∑_t = 1^mE_tii(_i_t - E_tii_i)_RS^(i)_2^2 = {_RS^(i)_2→∞log n + (∑_t = 1^m∑_j = 1^nE_tii^2_j_RS^(i)_2^2)^1/2ρ_n^1/2(log n)^1/2} = [_RS^(i)_2→∞log n + {∑_t = 1^m(E_tii^2 - σ_tii^2) + ∑_t = 1^mσ_tii^2}^1/2_RS^(i)_Fρ_n^1/2(log n)^1/2] = {_RS^(i)_2→∞log n + _RS^(i)_Fm^1/2ρ_n(log n)^1/2 + _RS^(i)_Fρ_n^1/2log n} = (_RS_2→∞log n + ζ_𝗈𝗉log n/m^1/2n^3/2ρ_n) + (log n/m^1/2n^3/2ρ_n) - _RS_2. Combining the concentration bounds for the two terms in (<ref>) completes the proof of the first assertion. For the second assertion, by triangle inequality, we have max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )_2/mΔ_n^2 ≤ q_1 + q_2 + q_3 + q_4, where q_1 = max_i∈[n]_i()(_RS^(i)_RS^(i) - )_2/mΔ_n^2, q_2 = 1/mΔ_n^2∑_t = 1^m_t_t_2→∞max_i∈[n]_RS^(i)_RS^(i) - _2, q_3 = 1/mΔ_n^2max_i∈[n]_i∑_t = 1^m_t_t(_RS^(i)_RS^(i) - )_2, q_4 = 1/mΔ_n^2max_i∈[n]2∑_t = 1^m∑_j = 1^nP_tijE_tij_i(_RS^(i)_RS^(i) - )_2. By the first assertion, we have q_1 = {(log n)^2/mn^2ρ_n^2 + log n/mn^2ρ_n^2(nρ_n∨log n)^1/2}_RS_2→∞ + (log n/m^1/2nρ_n×1/√(n)) + (log n/m^3/2n^7/2ρ_n^3) - _RS_2 = (log n/m^1/2nρ_n_RS_2→∞) + (log n/m^3/2n^7/2ρ_n^3) - _RS_2. For q_2, by Bernstein's inequality, we see that ∑_t = 1^m_t_t_2 = {m^1/2(nρ_n)^3/2(log n)^1/2}. Then it follows from Lemma <ref> that q_2 ≤√(n)/mΔ_n^2∑_t = 1^m_t_t_2→∞max_i∈[n]_RS^(i)_RS^(i) - _2→∞ ≤d^1/2μ^1/2/mΔ_n^2∑_t = 1^m_t_t_2max_i∈[n]_RS^(i)_RS^(i) - _2→∞ = d^1/2μ^1/2/mΔ_n^2∑_t = 1^m_t_t_2max_i∈[n]_RS^(i)_RS^(i) - _2→∞ = {(log n)^1/2/(mnρ_n)^1/2}_RS_2→∞. For q_3, by Bernstein's inequality and Lemma <ref>, we obtain q_3 = {(nρ_nlog n)^1/2/m^1/2Δ_n^2}max_t∈[m]_t_∞_RS^(i)_RS^(i) - _2→∞ = {(log n)^1/2/(mnρ_n)^1/2}_RS_2→∞ For q_4, by Bernstein's inequality, Lemma <ref>, and Assumption <ref>, we also have q_4 = {(log n)^1/2/m^1/2(nρ_n)^3/2}_maxmax_i∈[n]_RS^(i)_RS^(i) - _2→∞ = {ρ_n(log n)^1/2/m^1/2(nρ_n)^3/2}_RS_2→∞. The proof is thus completed by combining the above error bounds for q_1, q_2, q_3, and q_4. Suppose the conditions of Lemma <ref> hold. Then for any c > 0, there exists a c-dependent constant N_c > 0, such that _RS^-1_2≤ 2 and _RS_2→∞≤ Cκ^2n^-1/2 with probability at least 1 - O(n^-c) for all n≥ N_c. For the first assertion, note that () - - _RS_2/λ_d()≤{ζ_𝗈𝗉/(mΔ_n^2)} + 1/10 with probability at least 1 - O(n^-c). Then Lemma 2 in <cit.> yields that for any c > 0, there exists a c-dependent constant N_c > 0, such that (_RS^-1_2≤ 2)≥ 1 -O(n^-c) n≥ N_c. Now we focus on the second assertion. By Lemma <ref>, there exists a c-dependent constant N_c > 0, such that () - - _RS_2/λ_d() ≤C_cζ_𝗈𝗉/λ_d() + - _RS_2/λ_d()≤1/10 with probability 1 - O(n^-c) for any n≥ N_c. Then by Lemma 1 in <cit.>, Lemma <ref>, Davis-Kahan theorem, and Lemma <ref>, there exists a c-dependent constant C_c > 0, such that for any i∈[n], _i_RS_RS_2 ≤2/λ_d()[_i{() - _RS}_2 + {() - _RS}(_RS_RS - )_2] ≤(2C_cζ_𝗈𝗉/mΔ_n^2 + 2κ^2 + 1/10)_2→∞ + 2/λ_d()_i{() - - }(_RS_RS - )_2 + (2κ^2_2→∞_RS_RS - _2 + 1/5_RS_2→∞ + 1/5_2→∞) ≤2/mΔ_n^2_i{() - - }(_RS_RS - )_2 + Cκ^2_2→∞ + 1/5_RS_2→∞ with probability at least 1 - O(n^-c) for all n≥ N_c, where C > 0 is a constant not depending on c. It follows that _RS_2→∞ = max_i∈[n]_i_RS_RS_RS^-1_2 ≤max_i∈[n]_i_RS_RS_2_RS^-1_2 ≤ 2max_i∈[n]_i_RS_RS_2 ≤4/mΔ_n^2max_i∈[n]_i{() - - }(_RS_RS - )_2 + Cκ^2_2→∞ + 2/5_RS_2→∞. Hence, _RS_2→∞≤ Cκ^2_2→∞ + C/mΔ_n^2max_i∈[n]_i{() - - }(_RS_RS - )_2 with probability at least 1 - O(n^-c) for all n≥ N_c, where C > 0 is a constant not depending on c. It is sufficient to focus on the last term on the right-hand side above. We invoke the leave-one-out matrices, Lemma <ref>, and Lemma <ref> to write C/mΔ_n^2max_i∈[n]_i{() - - }(_RS_RS - )_2 ≤C/mΔ_n^2max_i∈[n]_i{() - - }(_RS_RS - _RS^(i)^(i)_RS)_2 + C/mΔ_n^2max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )_2 ≤C() - - _2/mΔ_n^2max_i∈[n]_RS_RS - _RS^(i)^(i)_RS_2 + max_i∈[n]C_i{() - - }(_RS^(i)_RS^(i) - )_2/mΔ_n^2 ≤1/4_RS_2→∞ + max_i∈[n]C_i{() - - }(_RS^(i)_RS^(i) - )_2/mn^2ρ_n^2. with probability at least 1 - O(n^-c) for all n≥ N_c. For the last term, by Lemma <ref>, there exist c-dependent constants C_c, N_c > 0, such that max_i∈[n]C_i{() - - }(_RS^(i)_RS^(i) - )_2/mn^2ρ_n^2 ≤{C_c(log n)^1/2/(mnρ_n)^1/2 + C_clog n/m^1/2nρ_n}_RS_2→∞ + C_c(log n/m^3/2n^7/2ρ_n^3) - _RS_2≤1/4_RS_2→∞ with probability at least 1 - O(n^-c) for all n≥ N_c, and hence, we conclude that _RS_2→∞≤ Cκ^2n^-1/2 with probability at least 1 - O(n^-c) for all n≥ N_c. The proof is thus completed. The following lemma seems to be quite similar to Lemma <ref>. Nevertheless, it is worth remarking that the conditions are weaker. The key difference is that, unlike Lemma <ref>, Lemma <ref> below no longer requires the condition on - _rs_2, - _rs^(i)_2, and - _rs^(i, j)_2 for all r∈[R], s∈[S], i,j∈[n], i≠ j, but directly justifies these conditions instead. Suppose Assumptions <ref>–<ref> hold. If R, S = O(1), then for any c > 0, there exists a c-dependent constant N_c > 0, such that max{max_r∈[R],s∈[S] - _rs_2,max_i∈[n],r∈[R],s∈[S] - _rs^(i)_2,max_i,j∈[n],i≠ jmax_r∈[R],s∈[S] - _rs^(i,j)_2}≤λ_d()/20 with probability at least 1 - O(n^-c) for all n≥ N_c. Consequently, _RS^-1_2≤ 2, max_i∈[n](_RS^(i))^-1_2≤ 2, _RS_2→∞≤ Cκ^2n^-1/2, max_i∈[n]_RS^(i)_2→∞≤ Cκ^2n^-1/2 with probability at least 1 - O(n^-c) for all n≥ N_c, where C > 0 is a constant not depending on c. Furthermore, max_i∈[n]_RS^(i)_RS^(i) - _RS_RS_2 = (ζ_𝗈𝗉/mn^5/2ρ_n^2), max_i∈[n]_RS^(i)_RS^(i) - _2→∞ = (n^-1/2)_, and max_i∈[n]_i()(_RS^(i)_RS^(i) - )_2 = (q_n/√(n) + ζ_𝗈𝗉log n/m^1/2n^3/2ρ_n) + (log n/m^1/2n^3/2ρ_n) - _RS_2 , max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )_2/mΔ_n^2 = {log n/m^1/2n^3/2ρ_n + (log n)^1/2/m^1/2nρ_n^1/2} + (log n/m^3/2n^7/2ρ_n^3) - _RS_2, where q_n = (log n)^2 + ζ_𝗈𝗉log n(nρ_n ∨log n)^1/2}/(mn^2ρ_n^2). By Lemma <ref>, Lemma <ref>, Lemma <ref>, and Lemma <ref>, it is sufficient to establish the part before “consequently”. We prove these error bounds by induction. When R = 1, we immediately have _1s = _1s^(i) = _n× n, so that max_s∈[S] - _1s_2 = max_i∈[n],s∈[S] - _1s^(i)_2 = max_i,j∈[n],i≠ jmax_s∈[S] - _1s^(i, j)_2 = _2≤ mnρ_n^2 = o(λ_d()) with probability one. Now assume for any c > 0, there exists a c-dependent constant N_c > 0, such that max{max_r∈[R],s∈[S] - _rs_2,max_i∈[n]max_r∈[R],s∈[S] - _rs^(i)_2, max_i,j∈[n],i≠ jmax_r∈[R],s∈[S] - _rs^(i, j)_2}≤λ_d()/20 with probability at least 1 - O(n^-c) whenever n≥ N_c. By Lemma <ref>, _RS_2→∞ = (n^-1/2). Furthermore, by Lemma <ref>, we also know that max_i∈[n]_RS^(i)_2→∞ = (n^-1/2) and max_i,j∈[n],i≠ j_RS^(i,j)_2→∞ = (n^-1/2). In addition, by Lemma <ref> and union bounds over i,j∈[n], ()_2 ≤() - - _2 + _2 + _2 = (ζ_𝗈𝗉) + 2κ^2 mΔ_n^2 = (κ^2 mΔ_n^2), max_i∈[n]((^(i))(^(i)))_2 = (κ^2 mΔ_n^2), max_i,j∈[n],i≠ j((^(i, j))(^(i, j)))_2 = (κ^2 mΔ_n^2). Then, we work with _(R + 1)s, _(R + 1)s^(i), and _(R + 1)s^(i, j) for s ∈ [S]. Recall that _(R + 1)0 = _(R + 1)0^(i) = _(R + 1)0^(i, j) = _n× n by definition, and we have the recursive relation _(R + 1)s_2 ≤{()_2 + _(R + 1)(s - 1)_2}_RS_2→∞^2 = (mnρ_n^2) + (_(R + 1)(s - 1)_2/n), max_i∈[n]_(R + 1)s^(i)_2 ≤max_i∈[n]{((^(i))(^(i)))_2 + _(R + 1)(s - 1)^(i)_2}_RS^(i)_2→∞^2 = (mnρ_n^2) + (_(R + 1)(s - 1)_2/n), max_i,j∈[n],i≠ j_(R + 1)s^(i, j)_2 ≤max_i,j∈[n],i≠ j{(^(i, j)(^(i, j)))_2 + ^(i, j)_(R + 1)(s - 1)_2}_RS^(i, j)_2→∞^2 = (mnρ_n^2) + (_(R + 1)(s - 1)_2/n). Then by induction, we immediately obtain, for all s ∈ [S], _(R + 1)s_2 = (mnρ_n^2), max_i∈[n]_(R + 1)s^(i)_2 = (mnρ_n^2), max_i,j∈[n],i≠ j_(R + 1)s^(i, j)_2 = (mnρ_n^2). Therefore, for the case of R + 1, there exist c-dependent constants C_c, N_c > 0, such that max_s∈[S] - _(R + 1)s_2 ≤_2 + max_s∈[S]_(R + 1)s_2≤ C_cmnρ_n^2 ≤λ_d()/20, max_i∈[n]max_s∈[S] - _(R + 1)s^(i)_2 ≤_2 + max_i∈[n]max_s∈[S]_(R + 1)s^(i)_2≤ C_cmnρ_n^2 ≤λ_d()/20, max_i,j∈[n],i≠ jmax_s∈[S] - _(R + 1)s^(i, j)_2 ≤_2 + max_i,j∈[n],i≠ jmax_s∈[S]_(R + 1)s^(i, j)_2≤ C_cmnρ_n^2 ≤λ_d()/20 with probability at least 1 - O(n^-c) whenever n≥ N_c. The proof is thus completed. §.§ Joint proof of Lemma <ref> and Lemma <ref> We first establish the second assertion of Lemma <ref>. For any s∈ [S], by definition, triangle inequality, and Lemma <ref>, we have - _Rs_2 ≤_(R - 1)S_(R - 1)S - _2→∞{()_2 + _R(s - 1)_2}_(R - 1)S_2→∞ + _2→∞_(R - 1)S_(R - 1)S - _2{()_2 + _R(s - 1)_2}_(R - 1)S_2→∞ + _2→∞() - - _R(s - 1)_2_(R - 1)S_(R - 1)S - _2_(R - 1)S_2→∞ + _2→∞{() - - }_2_(R - 1)S_2→∞ + _2→∞(_R(s - 1) - )_2_(R - 1)S_2→∞ + _2→∞_2_(R - 1)S_(R - 1)S - _2_(R - 1)S_2→∞ + _2→∞_2_(R - 1)S_(R - 1)S - _2→∞. By Lemma <ref> and (<ref>), we know that ()_2 = (mn^2ρ_n^2) and _Rs_2 = (mn^2ρ_n^2) for any s∈ [S]. Furthermore, Lemma <ref> entails that () - - _Rs_2 ≤() - - _2 + _2 + _Rs_2 = (ζ_𝗈𝗉 + mnρ_n^2) for any s∈[S]. In addition, Lemma <ref> implies that {() - - _Rs}_2 ≤{() - - }_2 + - _Rs_2 = (ζ_𝗈𝗉/√(n)) + _Rs - _2 for any s∈[S]. It follows from Lemma <ref> that - _Rs_2 ≤(mn^3/2ρ_n^2)_(R - 1)S_(R - 1)S - _2→∞ + (mnρ_n^2)_(R - 1)S_(R - 1)S - _2 + (ζ_𝗈𝗉/n^3/2) + (1/n) - _R(s - 1)_2. By induction over s and the assumption s≤ S = O(1), we have - _RS_2 ≤(mn^3/2ρ_n^2)_(R - 1)S_(R - 1)S - _2→∞ + (mn^2ρ_n^2/n^S + 1 + ζ_𝗈𝗉/n^3/2). The proof of the second assertion of Lemma <ref> is therefore completed by dividing both sides of the inequality by mn^5/2ρ_n^2. Next, we work with _1^(RS)_2→∞. By Lemma <ref>, we directly have {() - - }^-1_2→∞ = (ζ_𝗈𝗉/mn^5/2ρ_n^2 ). By Lemma <ref>, Lemma <ref>, Lemma <ref>, and Lemma <ref> we have _2^(RS) + _4^(RS) + _5^(RS) + _6^(RS) + _7^(RS)_2→∞ = (ζ_𝗈𝗉/mn^5/2ρ_n^2 ) + (1/mn^5/2ρ_n^2) - _RS_2. For _3^(RS), by Lemma <ref>, we also have _3^(RS)_2→∞≤ - _RS_∞(_RS_2→∞ + _2→∞)_RS^-1_2 = (1/mn^5/2ρ_n^2) - _RS_2. It is therefore sufficient to show that _1^(RS)_2→∞ = ( ζ_𝗈𝗉/mn^5/2ρ_n^2) + (1/mn^5/2ρ_n^2) - _RS_2 because by the second assertion of Lemma <ref>, we have (1/mn^5/2ρ_n^2) - _RS_2 = (1/n)_(R - 1)S_(R - 1)S - _2→∞ + (1/n^S + 3/2 + ζ_𝗈𝗉/mn^5/2ρ_n^2). By definition of _1^(RS), we have max_i∈[n]_i_1^(RS)_2 = max_i∈[n]_i{() - - }(_RS_RS - )^-1_RS_2 ≤ r_1^(1) + r_1^(2) + r_1^(3) + r_1^(4) + r_1^(5), where r_1^(1) = max_i∈[n]_i{() - - }(_RS_RS - _RS^(i)_RS^(i))_RS^-1(_RS - _RS)^-1_RS_2, r_1^(2) = max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )_RS^-1(_RS - _RS)^-1_RS_2, r_1^(3) = max_i∈[n]_i{() - - }_RS^-1(_RS - _RS)^-1_RS_2, r_1^(4) = max_i∈[n]_i{() - - }(_RS_RS - _RS^(i)_RS^(i))^-1_RS_2, r_1^(5) = max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )^-1_RS_2. By Lemma <ref>, we know that for any c > 0, there exists a c-dependent constant N_c > 0 such that _RS^-1_2≤ 2 and max_i∈[n](^(i)_RS)^-1_2≤ 2 with probability at least 1 - O(n^-c) for all n≥ N_c. Then for r_1^(1) and r_1^(4), by Lemma <ref> and Lemma <ref>, r_1^(1)∨ r_1^(4)≤ 2() - - _2max_i∈[n]_RS_RS - _RS^(i)_RS^(i)_2_RS^-1_2 = {ζ_𝗈𝗉(mn^5/2ρ_n^2)^-1}. For r_1^(2) and r_1^(5), by Lemma <ref>, we have r_1^(2)∨ r_1^(5) ≲max_i∈[n]_i{() - - )(_RS^(i)_RS^(i) - )_2_RS^-1_2 = {log n/m^1/2n^3/2ρ_n + (log n)^1/2/m^1/2nρ_n^1/2} + (1/mn^5/2ρ_n^2) - _RS_2. For r_1^(3), by Lemma <ref>, Lemma <ref>, and Lemma <ref>, we have r_1^(3) ≤ 2max_i∈[n]_i{() - - }_2^-1_RS_2 = (ζ_𝗈𝗉/mn^5/2ρ_n^2). Combining the error bounds for r_1^(1) through r_1^(5) yields max_i∈[n]_i_1^(RS)_2 = {log n/m^1/2n^3/2ρ_n + (log n)^1/2/m^1/2nρ_n^1/2} + (1/mn^5/2ρ_n^2) - _RS_2. This completes the proof of Lemma <ref>. The proof of the first assertion of Lemma <ref> is then completed by combining the error bounds for _1^(RS)_2→∞ through _7^(RS)_2→∞. § PROOF OF EXACT RECOVERY IN MLSBM (THEOREM <REF>) Let be an n× K community assignment matrix whose τ(i)th entry of the ith row is 1 for all i∈[n], and the remaining entries are zero. Let n_k = ∑_i = 1^n1{τ(i) = k}, i.e., the number of vertices in the kth community, and denote by n_min = min_k∈[K]n_k and n_max = max_k∈[K]n_k. Then we take = ()^-1/2 so that ∈𝕆(n, d) and _t = ()^1/2_t()^1/2 for all t∈[m], and it is clear that MLSBM(τ;_1,…,_m) = COSIE(;_1,…,_m). Since n_k≍ n, then σ_k(_t) = Θ(nρ_n) for all k∈[K], t∈[m], and _2→∞≤_2→∞()^-1/2_2 = O(n^-1/2), so that the conditions of Theorem <ref> are satisfied. Furthermore, if (_k^*)_k = 1^K are the K unique rows of , then we also have min_k≠ l_k^* - _l^*_2≥ c_0n_max^-1/2 for some constant c_0 ∈ (0, 1]. By Theorem <ref>, there exists some ∈𝕆(K) such that - _2→∞ = {(ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼))/√(n)} and - _F = (ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼)). Let δ_k = min_l∈[K]\{k}_k^* - _l^*_2. Clearly, δ_k≥ n_k^-1/2 for all k∈[K] and δ_k≤ (2/n_min)^1/2. Because ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼) = o(1), Theorem <ref> implies that for any c > 0 and the given threshold > 0, there exists a constant N_c > 0, such that ( - _F^2≤c_0^2β n_min/512n)≥ 1 - O(n^-c) n≥ N_c, where β:=n_min/n_max. Define _k = {i∈[n]:τ(i) = k,_i( - )_2≥δ_k/2} and denote by |_k| the number of elements in _k. By Lemma 5.3 in <cit.>, there exists a permutation σ∈_K, where _K denotes the collection of all permutations over [K], such that 1/n∑_i = 1^n1{τ(i)≠σ(τ(i))}≤1/n∑_k = 1^K|_k|≤∑_k = 1^K|_k|/n_k≤∑_k = 1^K|_k|δ_k^2≤ 16 - _F^2≤c_0^2β n_min/64n with probability at least 1 - O(n^-c) for all n≥ N_c,. Let _k = {i∈[n]:τ(i) = k}\_k. Then Lemma 5.3 in <cit.> further implies that τ(i) = σ(τ(i)) for all i∈⋃_k = 1^K_k, so that |_k| = n_k - |_k|≥ n_k - ∑_l = 1^K|_l|≥ 63n_k/64. Now let _k = {i∈[n]:τ(i) = k} and _k = {i∈[n]:τ(i) = k}. Clearly, _σ^-1(k)⊂{i∈[n]:σ(τ(i)) = τ(i) = k}⊂_k, so that one also obtains |_k|≥|_σ^-1(k)|≥ n_min - ∑_l = 1^K|_l|≥ (1 - β/64)n_min and |_k\_σ^-1(k)|≤ c_0^2β n_min/64. Note that by the definition of , if we take (_k)_k = 1^K as the K unique rows of , then given τ(·), the K-means clustering problem is equivalent to the following minimization problem min_(_k)_k = 1^K∑_i = 1^n_τ(i) - _i_F^2 = min_(_k)_k = 1^K∑_k = 1^K∑_i∈[n]:τ(i) = k_k - _i_F^2, the solution of which are given by _k = |_k|^-1∑_i∈_k_i, k∈[K]. It follows that for all k∈[K], _k - _σ^-1(k)^*_2 ≤1/|_k|∑_i∈_k(_i - _i)_2 + 1/|_k|∑_i∈_k(_i - _σ^-1(k)^*)_2 = 1/|_k|∑_i∈_k(_i - _i)_2 + 1/|_k|∑_i∈_k\_σ^-1(k)(_i - _σ^-1(k)^*)_2 ≤1/|_k|^1/2 - _F + √(2)|_k\_σ^-1(k)|/n_min^1/2|_k|≤{c_0^2β/512n(1 - β/64)}^1/2 + c_0^2√(2β)/63n_max^1/2≤c_0/8n_max^1/2 with probability at least 1 - O(n^-c) for all n≥ N_c because β < 1. Now we claim that for any i∈[n], if _i( - )_2≤ c_0/(4n_max^1/2), then τ(i) = σ(τ(i)). Suppose τ(i) = k, and we will argue that τ(i) = σ(k). First note that _i( - )_2≤ c_0/(4n_max^1/2) implies _i - _σ(k)_2 ≤_i - _i_2 + _k^* - _σ(k)_2≤3c_0/8n_max^1/2, _i - _σ(l)_2 ≥^*_l - ^*_k_2 - _i - _i_2 - _l^* - _σ(l)_2≥5c_0/8n_max^1/2. Now we argue that τ(i) = σ(k) by contradiction. Indeed, assume otherwise, so that τ(i) = σ(l) for some l∈[K]\{k}. Then - _F^2 = ∑_j∈[n]\{i}_j( - )_2^2 + _i - _σ(l)_2^2 > ∑_j∈[n]\{i}_j( - )_2^2 + _i - _σ(k)_2^2, which implies that replacing τ(i) = σ(l) with τ(i) = σ(k) strictly decreases the objective function without violating the feasibility because |_l|≥ 2 for all l∈[K]. This violates the optimality of . Hence, it must be the case that τ(i) = σ(k). Therefore, {There exists σ∈_K such that τ(i) = σ(τ(i)) for all i∈[n]}≥( - _2→∞≤c_0/4n_max^1/2) ≥ 1 - O(n^-c) for all n≥ N_c, where N_c is a constant depending on the given power c > 0. The proof is therefore completed. § PROOF OF ENTRYWISE EIGENVECTOR LIMIT THEOREM (THEOREM <REF>) §.§ Proof of Lemma <ref> By definition, we have ^-1_1 = ()^-1 and sgn() - = {sgn() - )}_1. Let denote _RS, denote _RS, and denote ^(RS). Recall the decomposition (<ref>): - = {() - - }^-1 +. It turns out that _2→∞ is negligible compared to the first term _i{() - - }^-1 for each fixed i∈[n]. For the leading term, by Bernstein's inequality, we have _i{() - - }^-1_1_i^-1/2 = X_in() + o_p(1), where ∈ℝ^d is a unit vector, and X_in() = _i()^-1_1_i^-1/2 + ∑_t = 1^m∑_j = 1^nE_tij(_j_t^-1_1_i^-1/2). We first argue that X_in() = ∑_t = 1^m∑_j_1,j_2∈[n],j_1 ≤ j_2E_tj_1j_2{b_tij_1j_2() + c_tij_1j_2()}, where b_tij_1j_2() and c_tij_1j_2() are defined in (<ref>) and (<ref>). For notational convenience, below in this section, we will suppress the dependence on from b_tij_1j_2, c_tij_1j_2, γ_ij, and ξ_tij. The dependence on will be used later in the proof of hypothesis testing for membership profiles in MLMM. Write X_in() = ∑_t = 1^m∑_j_1 = 1^n∑_j_2 = 1^n∑_j_3 = 1^ne_ij_1E_tj_1j_2E_tj_2j_3γ_ij_31(j_3≠ i) + ∑_t = 1^m∑_j_1 = 1^n∑_j_2 = 1^ne_ij_1E_tj_1j_2ξ_tij_2 = ∑_t = 1^m∑_j_1,j_2,j_3∈[n]:j_1 > j_3e_ij_1E_tj_1j_2E_tj_2j_3γ_ij_31(j_3≠ i) + ∑_t = 1^m∑_j_1,j_2,j_3∈[n]:j_1 < j_3e_ij_1E_tj_1j_2E_tj_2j_3γ_ij_31(j_3≠ i) + ∑_t = 1^m∑_j_1,j_2∈[n]e_ij_1E_tj_1j_2^2γ_ij_11(j_1≠ i) + ∑_t = 1^m∑_j_1 = 1^n∑_j_2 = 1^ne_ij_1E_tj_1j_2ξ_tij_2 = ∑_t = 1^m∑_j_1,j_2,j_3∈[n]:j_1 > j_3E_tj_1j_2E_tj_2j_3{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)} + ∑_t = 1^m∑_j_1 = 1^n∑_j_2 = 1^ne_ij_1E_tj_1j_2ξ_tij_2. Breaking the first summation into three parts {j_1 < j_2}, {j_1 = j_2}, and {j_1 > j_2}, we further obtain X_in() = ∑_t = 1^m∑_j_1,j_2,j_3∈[n] j_1 > j_3, j_1 < j_2E_tj_1j_2E_tj_2j_3{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)} + ∑_t = 1^m∑_j_1,j_2,j_3∈[n] j_1 > j_3, j_1 > j_2E_tj_1j_2E_tj_2j_3{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)} + ∑_t = 1^m∑_j_1,j_3∈[n]:j_1 > j_3E_tj_1j_1E_tj_1j_3{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)} + ∑_t = 1^m∑_j_1 = 1^n∑_j_2 = 1^ne_ij_1E_tj_1j_2ξ_tij_2. Now switching the roles of j_1 and j_2 in the second summation above and rearranging complete the proof that X_in() = ∑_t = 1^m∑_j_1,j_2∈[n],j_1≤ j_2E_tj_1j_2{b_tij_1j_2() + c_tij_1j_2()}. We first argue that (t, j_1, j_2)↦α(t, j_1, j_2) is one-to-one. Assume otherwise. Then there exists some (t', j_1', j_2')∈[m]× [n]× [n], j_1'≤ j_2', such that 1/2(t - 1)n(n + 1) + j_1 + 1/2j_2(j_2 - 1) = 1/2(t' - 1)n(n + 1) + j_1' + 1/2j_2'(j_2' - 1). This equation forces t = t' because if not, then without loss of generality, we may assume that t ≤ t' - 1, in which case 1/2(t - 1)n(n + 1) + j_1 + 1/2j_2(j_2 - 1) ≤1/2(t - 1)n(n + 1) + n + 1/2n(n - 1) = 1/2tn(n + 1) ≤1/2(t' - 1)n(n + 1) < 1/2(t' - 1)n(n + 1) + j_1' + 1/2j_2'(j_2' - 1), contradicting with (<ref>). Therefore, (<ref>) reduces to j_1 + 1/2j_2(j_2 - 1) = j_1' + 1/2j_2'(j_2' - 1). Now we claim that (<ref>) implies that j_1 = j_1' and j_2 = j_2'. Assume otherwise. Then it follows that j_1 ≠ j_1' and j_2≠ j_2', and without loss of generality, assume that j_2 ≤ j_2' - 1. Then (<ref>) yields that (1/2)j_2'(j_2' - 1) - (1/2)j_2(j_2 - 1) = j_1 - j_1'≤ j_1 - 1. On the other hand, j_2 ≤ j_2' - 1 implies (1/2)j_2'(j_2' - 1) - (1/2)j_2(j_2 - 1)≥ (1/2)(j_2 + 1)j_2 - (1/2)j_2(j_2 - 1) = j_2≥ j_1 > j_1 - 1, which contradicts with the previous inequality. Hence, we conclude that j_1 = j_1' and j_2 = j_2'. The relabeling scheme (t, j_1, j_2)↦α = α(t, j_1, j_2) enables us to rewrite X_in() = ∑_t = 1^m∑_j_1,j_2∈[n],j_1≤ j_2E_tj_1j_2(b_tij_1j_2 + c_tij_1j_2) = ∑_t = 1^m∑_j_1,j_2∈[n],j_1≤ j_2Y_nα(t, j_1, j_2) = ∑_α = 1^N_nY_nα, where we set Y_nα = Y_nα(t, j_1, j_2) = E_tj_1j_2(b_tij_1j_2 + c_tij_1j_2) and N_n = mn(n + 1)/2. Clearly, the nested σ-fields _n0⊂_n1⊂…⊂_nN_n form a filtration. For any α∈ [N_n], the random variable Z_nα := ∑_β = 1^α Y_nβ is _nα-measurable. Indeed, first observe that E_tj_1j_2 is _nα-measurable for any (t, j_1, j_2) satisfying α(t,j_1, j_2)≤α. Also, for any t∈[m], j_1, j_2∈[n], j_1≤ j_2 with α(t, j_1, j_2)≤α, b_tij_1j_2() is a function of (E_tj_3j_2)_j_3 = 1^j_1 - 1 and (E_tj_1j_3)_j_3 = 1^j_2 - 1, and these random variables are _n(α - 1)-measurable because α(t, j_3, j_2) = 1/2(t - 1)n(n + 1) + j_3 + 1/2j_2(j_2 - 1)≤1/2(t - 1)n(n + 1) + j_1 - 1 + 1/2j_2(j_2 - 1) = α(t, j_1, j_2) - 1≤α - 1, for any j_3 ≤ j_1 - 1, α(t, j_1, j_3) = 1/2(t - 1)n(n + 1) + j_1 + 1/2j_3(j_3 - 1)≤1/2(t - 1)n(n + 1) + j_1 + 1/2(j_2 - 1)(j_2 - 2)≤α(t, j_1, j_2) - 1≤α - 1 if j_1 ≤ j_3, and α(t, j_3, j_1) ≤1/2(t - 1)n(n + 1) + j_1 - 1 + 1/2j_1(j_1 - 1)≤1/2(t - 1)n(n + 1) + j_1 - 1 + 1/2j_2(j_2 - 1) = α(t, j_1, j_2) - 1<α if j_1 > j_3. Note that c_tij_1j_2∈_n(α - 1) because it is a constant. These observations imply that Y_nα is _nα-measurable and b_tij_1j_2 + c_tij_1j_2 is _n(α - 1)-measurable if α(t, j_1, j_2)≤α. Therefore, with t(α)∈[m],j_1(α),j_2(α)∈[n] satisfying α(t(α),j_1(α),j_2(α)) = α, we further obtain (Z_nα|_n(α - 1)) = Z_n(α - 1) + {E_t(α)j_1(α)j_2(α)(b_t(α)ij_1(α)j_2(α) + c_t(α)ij_1(α)j_2(α))|_n(α - 1)} = Z_n(α - 1), where we have used the facts that E_t(α)j_1(α)j_2(α) is a mean-zero independent random variable independent of the σ-field _n(α - 1) and that b_t(α)ij_1(α)j_2(α) + c_t(α)ij_1(α)j_2(α) is _n(α - 1)-measurable. Therefore, we see that (Z_nα)_α = 1^N forms a martingale with martingale difference sequence (Y_nα)_α = 1^N. The proof is completed. §.§ Martingale Moment Bounds We first establish several martingale moment bounds that facilitate the application of the martingale central limit theorem. Suppose Assumptions <ref>–<ref> hold. Then for any t∈[m] and j_1,j_2∈[n], j_1≤ j_2, {b_tij_1j_2^4()}≲ (nρ_ne_ij_1 + n^2ρ_n^2e_ij_1 + nρ_ne_ij_2 + n^2ρ_n^2e_ij_2 + ρ_n)max__2≤ 1,j_1,j_2∈[n]|γ_j_1j_2()|^4, where b_tij_1j_2() is defined in (<ref>). For convenience, we let γ_max denote max_j_1,j_2∈[n]|γ_j_1j_2| and suppress the dependence on in this proof. Without loss of generality, we may assume that j_1≠ j_2. By definition and Young's inequality for product, b_tij_1j_2^4≲δ_tj_1j_2^(1) + δ_tj_1j_2^(4), where δ_tj_1j_2^(1) = [∑_a = 1^j_1 - 1E_taj_2{e_ij_1γ_ia1(a≠ i) + e_iaγ_ij_11(j_1≠ i)}]^4, δ_tj_1j_2^(2) = [∑_b = 1^j_2 - 1E_tbj_1{e_ij_2γ_ib1(b≠ i) + e_ibγ_ij_21(j_2≠ i)}]^4. For δ_tj_1j_2^(1), we have δ_tj_1j_2^(1) ≲∑_a = 1^n(e_ij_1 + e_ia)ρ_nγ_max^4 + ∑_a_1,a_2∈[n](e_ij_1 + e_ia_1)(e_ij_1 + e_ia_2)ρ_n^2γ_max^4≲ (nρ_ne_ij_1 + n^2ρ_n^2e_ij_1 + ρ_n)γ_max^4 because the remaining terms in the expansion have zero expected values. Similarly, δ_tj_1j_2^(2)≲ (nρ_ne_ij_2 + n^2ρ_n^2e_ij_2 + ρ_n)γ_max^4. Combining the above upper bounds completes the proof. Suppose Assumption <ref>–<ref> hold. For any fixed i_1,i_2∈[n], denote by b̅_tj_1j_2 = b_ti_1j_1j_2(_1) + b_ti_2j_1j_2(_2), where b_tij_1j_2() is defined by (<ref>) and _1,_2∈ℝ^d with _1_2,_2≤ 1. Then ∑_t = 1^m∑_j_1,j_2,j_3,j_4∈[n] j_1≤ j_2,j_3≤ j_4 α(t, j_1, j_2) < α(t, j_3, j_4)(b̅_tj_1j_2^2b̅_tj_3j_4^2)σ_tj_1j_2^2σ_tj_3j_4^2≲ (mn^3ρ_n^3 + mn^4ρ_n^4)max__2≤ 1,j_1,j_2∈[n]|γ_j_1j_2()|^4, where α(t, j_1, j_2) is the relabeling function defined in (<ref>) and γ_j_1j_2() is defined in (<ref>). By definition and Cauchy-Schwarz inequality, for any j_1,j_2,j_3,j_4∈[n], b̅_tj_1j_2^2b̅_tj_3j_4^2 ≤ 4{(∑_a = 1^j_1 - 1E_tj_2aϖ_i_1i_2j_1a)^2 + (∑_a = 1^j_2 - 1E_tj_1aϖ_i_1i_2j_2a)^2} ×{(∑_b = 1^j_3 - 1E_tj_4bϖ_i_1i_2j_3b)^2 + (∑_b = 1^j_4 - 1E_tj_3bϖ_i_1i_2j_4b)^2} = 4ϑ_tj_1j_2j_3j_4^(1) + 4ϑ_tj_1j_2j_3j_4^(2) + 4ϑ_tj_1j_2j_3j_4^(3) + 4ϑ_tj_1j_2j_3j_4^(4), where ϑ_tj_1j_2j_3j_4^(1) = (∑_a = 1^j_1 - 1E_tj_2aϖ_i_1i_2j_1a)^2 (∑_b = 1^j_3 - 1E_tj_4bϖ_i_1i_2j_3b)^2, ϑ_tj_1j_2j_3j_4^(2) = (∑_a = 1^j_1 - 1E_tj_2aϖ_i_1i_2j_1a)^2 (∑_b = 1^j_4 - 1E_tj_3bϖ_i_1i_2j_4b)^2, ϑ_tj_1j_2j_3j_4^(3) = (∑_a = 1^j_2 - 1E_tj_1aϖ_i_1i_2j_2a)^2 (∑_b = 1^j_3 - 1E_tj_4bϖ_i_1i_2j_3b)^2, ϑ_tj_1j_2j_3j_4^(4) = (∑_a = 1^j_2 - 1E_tj_1aϖ_i_1i_2j_2a)^2(∑_b = 1^j_4 - 1E_tj_3bϖ_i_1i_2j_4b)^2. and ϖ_i_1i_2j_1a = e_i_1j_1γ_i_1a(_1)1(a≠ i_1) + e_i_1aγ_i_1j_1(_1)1(j_1≠ i_1) + e_i_2j_1γ_i_2a(_2)1(a≠ i_2) + e_i_2aγ_i_2j_1(_2)1(j_1≠ i_2), ϖ_i_1i_2j_2a = e_i_1j_2γ_i_1a(_1)1(a≠ i_1) + e_i_1aγ_i_1j_2(_1)1(j_2≠ i_1) + e_i_2j_2γ_i_2a(_2)1(a≠ i_2) + e_i_2aγ_i_2j_2(_2)1(j_2≠ i_2), ϖ_i_1i_2j_3b = e_i_1j_3γ_i_1b(_1)1(b≠ i_1) + e_i_1bγ_i_1j_3(_1)1(j_3≠ i_1) + e_i_2j_3γ_i_2b(_1)1(b≠ i_2) + e_i_2bγ_i_2j_3(_1)1(j_3≠ i_2), ϖ_i_1i_2j_4b = e_i_1j_4γ_i_1b(_1)1(b≠ i_1) + e_i_1bγ_i_1j_4(_1)1(j_4≠ i_1) + e_i_2j_4γ_i_2b(_2)1(b≠ i_2) + e_i_2bγ_i_2j_4(_2)1(j_4≠ i_2). The analyses of ϑ_tj_1j_2j_3j_4^(1), ϑ_tj_1j_2j_3j_4^(2), ϑ_tj_1j_2j_3j_4^(3), and ϑ_tj_1j_2j_3j_4^(4) are almost identical because ϑ_tj_1j_2j_3j_4^(1) = ϑ_tj_1j_2j_4j_3^(2), ϑ_tj_1j_2j_3j_4^(3) = ϑ_tj_2j_1j_3j_4^(2), and ϑ_tj_1j_2j_3j_4^(4) = ϑ_tj_2j_1j_4j_3^(2). We only present the analysis of ϑ_tj_1j_2j_3j_4^(2) here. For convenience, we let γ_max denote max__2≤1,j_1,j_2∈[n]|γ_j_1j_2()| and e̅_i_1i_2j = e_i_1j + e_i_2j. Write ϑ_tj_1j_2j_3j_4^(2) ≤∑_a_1,a_2,b_1,b_2∈[n]| (E_ta_1j_2E_ta_2j_2E_tb_1j_3E_tb_2j_3)|(e̅_i_1i_2j_1 + e̅_i_1i_2a_1)(e̅_i_1i_2j_1 + e̅_i_1i_2a_2) ×(e̅_i_1i_2j_4 + e̅_i_1i_2b_1)(e̅_i_1i_2j_4 + e̅_i_1i_2b_2)1(a_1,a_2 < j_1,b_1,b_2 < j_4)γ_max^4. Note (E_ta_1j_2E_ta_2j_2E_tb_1j_3E_tb_2j_3)≠ 0 only if the number of random variables in {E_ta_1j_2,E_ta_2j_2,E_tb_1j_3,E_tb_2j_3} is 1 or 2. * If this number is 1, then one must have a_1 = a_2 and b_1 = b_2, so that either j_2 = j_3, a_1 = a_2 = b_1 = b_2 or a_1 = a_2 = j_3, b_1 = b_2 = j_2. * If this number is 2, then there are two cases: * If j_2 ≠ j_3, then one of the following cases must occur: i) a_1 = a_2, b_1 = b_2; ii) {a_1,j_2} = {b_1,j_3}, {a_2, j_2} = {b_2,j_3}, implying that a_1 = j_3,b_1 = j_2, b_2 = j_2, a_2 = j_3; iii) {a_1,j_2} = {b_2,j_3}, {a_2,j_2} = {b_1,j_3}, implying that a_1 = j_3,b_2 = j_2, a_2 = j_3,b_1 = j_2. However, ii) and iii) occurs exactly when E_ta_1j_2 = E_ta_2j_2 = E_tb_1j_3 = E_tb_2j_3 = E_t_3j_2, so it is sufficient to only consider i). * If j_2 = j_3, then one of the following three cases must occur: i) a_1 = a_2≠ b_1 = b_2; ii) a_1 = b_1≠ a_2 = b_2; iii) a_1 = b_2≠ a_2 = b_1. Therefore, we are able to further upper bound ϑ_tj_1j_2j_3j_4^(2) as follows: ϑ_tj_1j_2j_3j_4^(2) ≲ρ_n(e̅_i_1i_2j_1 + e̅_i_1i_2j_3)^2(e̅_i_1i_2j_4 + e̅_i_1i_2j_2)^2γ_max^4 + ∑_a = 1^nρ_n(e̅_i_1i_2j_1 + e̅_i_1i_2a)^2(e̅_i_1i_2j_4 + e̅_i_1i_2a)^21(j_2 = j_3)γ_max^4 + ∑_a_1,b_1∈[n]ρ_n^2(e̅_i_1i_2j_1 + e̅_i_1i_2a_1)^2(e̅_i_1i_2j_4 + e̅_i_1i_2b_1)^2γ_max^4 + ∑_a_1,a_2∈[n]ρ_n^2(e̅_i_1i_2j_1 + e̅_i_1i_2a_1)(e̅_i_1i_2j_1 + e̅_i_1i_2a_2) (e̅_i_1i_2j_4 + e̅_i_1i_2a_1)(e̅_i_1i_2j_4 + e̅_i_1i_2a_2)γ_max^4 ≲ (e̅_i_1i_2j_1e̅_i_1i_2j_2 + e̅_i_1i_2j_3e̅_i_1i_2j_2 + e̅_i_1i_2j_1e̅_i_1i_2j_4 + e̅_i_1i_2j_3e̅_i_1i_2j_4)ρ_nγ_max^4 + (ne̅_i_1i_2j_1e̅_i_1i_2j_4 + e̅_i_1i_2j_1 + e̅_i_1i_2j_4 + 1)1(j_2 = j_3)ρ_nγ_max^4 + (n^2e̅_i_1i_2j_1e̅_i_1i_2j_4 + ne̅_i_1i_2j_1 + ne̅_i_1i_2j_4 + 1)ρ_n^2γ_max^4. This entails that ∑_t = 1^m∑_j_1,j_2,j_3,j_4∈[n]ϑ_tj_1j_2j_3j_4^(2)σ_tj_1j_2^2σ_tj_3j_4^2≲ (mn^3ρ_n^3 + mn^4ρ_n^4)γ_max^4. Therefore, ∑_t = 1^m∑_j_1,j_2,j_3,j_4∈[n](ϑ_tj_1j_2j_3j_4^(1) + ϑ_tj_1j_2j_3j_4^(3) + ϑ_tj_1j_2j_3j_4^(2) + ϑ_tj_1j_2j_3j_4^(4))σ_tj_1j_2^2σ_tj_3j_4^2≲ (mn^3ρ_n^3 + mn^4ρ_n^4)γ_max^4 because ϑ_tj_1j_2j_3j_4^(1) = ϑ_tj_1j_2j_4j_3^(2), ϑ_tj_1j_2j_3j_4^(3) = ϑ_tj_2j_1j_3j_4^(2), and ϑ_tj_1j_2j_3j_4^(4) = ϑ_tj_2j_1j_4j_3^(2). The proof is thus completed. §.§ Proof of Theorem <ref> This subsection completes the proof of Theorem <ref>. We first argue that the remainder has uniformly negligible maximum row norms, and then invoke the martingale central limit theorem to establish the desired asymptotic normality. Note that Lemma <ref> is essentially the same as Lemma <ref>. Suppose Assumptions <ref>–<ref> hold, m^1/2(nρ_n)^3/2 = ω(θ_n(log n)^2), mnρ_n = ω(θ_n(log n)^2), and m^1/2nρ_n = ω(θ_n(log n)^3/2), where θ_n = (nρ_n)^1/2∧ 1. Then max_i∈[n]_i{() - - }(_RS^(i)_RS^(i) - )_2/λ_d() = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2), ^(RS)_2→∞ = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2). The proof is similar to that of Lemma <ref>, except that we take advantage of the error bound established in Theorem <ref> to obtain refinement. By Theorem <ref>, we immediately obtain - _2 = (ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼)). By Lemma <ref>, max_i∈[n]^(i)^(i) - _2 = (ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼)), where ^(i) denotes _RS^(i) and ^(i) = sgn((^(i))). By triangle inequality, we have max_i∈[n]_i{() - - }(^(i)^(i) - )_2/λ_d()≤ q_1 + q_2 + q_3 + q_4, where q_1,q_2,q_3,q_4 are defined in (<ref>). Observe that q_n/mn^5/2ρ_n^2 = 1/m^1/2nρ_n^1/2θ_n{θ_n(log n)^2/m^1/2(nρ_n)^3/2} = o(1/m^1/2nρ_n^1/2θ_n), ζ_𝗈𝗉log n/m^3/2n^7/2ρ_n^3 = 1/m^1/2nρ_n^1/2θ_n{θ_n(log n)^2/m^1/2(nρ_n)^3/2 + θ_n(log n)^3/2/m^1/2nρ_n} = o(1/m^1/2nρ_n^1/2θ_n). Then by Lemma <ref>, Lemma <ref>, and Theorem <ref>, we have q_1 = (1/m^1/2nρ_n^1/2θ_n) + (log n/m^3/2n^7/2ρ_n^3) - _RS_2 = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2). For q_2, we apply the above spectral norm error bound and Lemma <ref> to obtain q_2 = 1/mΔ_n^2∑_t = 1^m_t_t_2→∞max_i∈[n]^(i)^(i) - _2 ≤1/mΔ_n^2_2→∞∑_t = 1^m_t_t_2max_i∈[n]^(i)^(i) - _2 = (μ d)^1/2/mn^1/2Δ_n^2∑_t = 1^m_t_t_2max_i∈[n]^(i)^(i) - _2 = {(log n)^1/2/m^1/2nρ_n^1/2(ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼))} = (1/m^1/2nρ_n^1/2θ_n). By Bernstein's inequality and (<ref>), we obtain similarly that q_3 ≤{(log n)^1/2/m^1/2(nρ_n)^3/2}_2→∞max_i∈[n]^(i)^(i) - _2 = {(log n)^1/2/m^1/2nρ_n^1/2(ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼))} = (1/m^1/2nρ_n^1/2θ_n), q_4 ≤{(log n)^1/2/m^1/2(nρ_n)^3/2}_maxmax_i∈[n]^(i)^(i) - _2 = {(log n)^1/2/m^1/2n^3/2ρ_n^1/2(ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼))} = (1/m^1/2nρ_n^1/2θ_n). The proof of the first assertion is then completed by combining the above error bounds for q_1, q_2, q_3, and q_4. We next work with ^(RS)_2→∞. For _2^(RS), we immediately have _2^(RS)_2→∞ ≤ - _RS_∞_2→∞^-1_2 = - _RS_2_2→∞^-1_2 = O(1/mn^5/2ρ_n^2) - _RS_2. For _3^(RS), by Lemma <ref>, we have a similar error bound: _3^(RS)_2→∞ ≤ - _RS_2(_2→∞ + _2→∞)_RS^-1_2 = (1/mn^5/2ρ_n^2) - _RS_2. By Lemma <ref>, Lemma <ref>, and <ref>, we obtain _4^(RS) + _5^(RS) + _6^(RS) + _7^(RS)_2→∞ = (1/m^1/2nρ_n^1/2θ_n) + (1/mn^5/2ρ_n^2) - _RS_2. Now we work with _1^(RS)_2→∞. Recall in the proof of Theorem <ref>, _1^(RS)_2→∞ = r_1^(1) + r_1^(2) + r_1^(3) + r_1^(4) + r_1^(5), where r_1^(1), r_1^(2), r_1^(3), r_1^(4), and r_1^(5) are defined in (<ref>). By Lemma <ref> and Lemma <ref>, we have r_1^(1)∨ r_1^(4) ≤ 2() - - _2max_i∈[n] - ^(i)^(i)_2_RS^-1_2 = (ζ_𝗈𝗉^2/m^2n^9/2ρ_n^4) = (1/m^1/2nρ_n^1/2θ_n). For r_1^(2) and r_1^(5), instead of using Lemma <ref>, we invoke the refined error bound in the first assertion and obtain r_1^(2)∨ r_1^(5) ≲max_i∈[n]_i{() - - )(^(i)^(i) - )_2_RS^-1_2 = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2). For r_1^(3), by Lemma <ref>, Lemma <ref>, and Lemma <ref>, we have r_1^(3) ≤ 2max_i∈[n]_i{() - - }_2sinΘ(_RS,)_2^2^-1_RS_2 = (1/m^1/2nρ_n^1/2θ_n) + (1/mn^5/2ρ_n^2) - _RS_2. Combining the error bounds for r_1^(1) through r_1^(5) yields _1^(RS)_2→∞ = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2) + (1/mn^5/2ρ_n^2) - _RS_2. Hence, we combine the error bounds for _1_2→∞ through _7_2→∞ to obtain ^(RS)_2→∞ ≤(1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2) + (1/mn^5/2ρ_n^2) - _RS_2. By Lemma <ref> and Theorem <ref>, we have (1/mn^5/2ρ_n^2)_RS - _2 = (ε_n^(𝗈𝗉)/n^3/2 + ε_n^(𝖻𝖼)/n^1/2) = (1/m^1/2nρ_n^1/2θ_n) + (1/n^S + 3/2 + 1/n^R + 1/2). The proof is completed by combining the above error bounds. By Assumption <ref> and the condition of Theorem <ref>, we know that λ_k() = λ_k() = Θ(mn^2ρ_n^2), λ_k(_i) = Θ(mn^2ρ_n^3), λ_k(_i) = Θ(mnρ_n^2) for any k∈[d], i∈[n], so that λ_k(_i) = Θ(1/(mn^2ρ_nθ_n^2)) and _i^-1/2_2 = Ω(m^1/2nρ_n^1/2θ_n), where θ_n^2 = (nρ_n)∧ 1. By Lemma <ref>, we have _i( - )_1_i^-1/2 = _i{() - - }^-1_1_i^-1/2 + (1). It is sufficient to show _i{() - - }^-1_1_i^-1/2→N(0, 1). By definition, we have _i{() - - }^-1_1_i^-1/2 = _i()^-1_1_i^-1/2 + ∑_t = 1^m∑_j = 1^nE_tij(_j_t^-1_1_i^-1/2) + ∑_t = 1^m_i_t_t^-1_1_i^-1/2 - 2∑_t = 1^m∑_j = 1^nP_tijE_tij(_i^-1_1_i^-1/2). By Lemma <ref>, the third term satisfies |∑_t = 1^m_i_t_t^-1_1_i^-1/2| ≤_2→∞∑_t = 1^m_t_t^-1_1_i^-1/2_2 = {(log n)^1/2/n^1/2} = (1). By Bernstein's inequality, 2∑_t = 1^m∑_j = 1^nP_tijE_tij(_i^-1_1_i^-1/2) = {(log n)^1/2/n} = (1) . These two concentration results imply _i{() - - }^-1_1_i^-1/2 = X_in() + (1), where X_in() is defined in (<ref>). As introduced in Lemma <ref>, we rewrite X_in as a mean-zero martingale and apply the martingale central limit theorem, which we review here (see, for example, Theorem 35.12 in <cit.>). Suppose that for each n, (X_nα)_α = 0^N_n is a martingale with respect to the filtration (_nα)_α = 0^N_n = (σ(X_n0,…,X_nα))_α = 0^N_n with X_n0 = 0, _n0 = {∅, Ω}, and martingale difference sequence (Y_nα)_α = 1^N_n = (X_nα - X_n(α - 1))_α = 1^N_n. If the following conditions are satisfied: (a) ∑_α = 1^N_n (Y_nα^2| F_n(α - 1))→σ^2 for some constant σ^2 > 0; (b) ∑_α = 1^N_n{Y_nα^21(|Y_nα| ≥)}→ 0 for any > 0. Then ∑_α = 1^N_nY_nα→N(0, σ^2). Specialized to our setup in Lemma <ref>, given the one-to-one relabeling function α = α(t, j_1, j_2) defined in (<ref>), we will verify the above conditions with N_n = (1/2)mn(n + 1), Y_nα = E_tj_1j_2(b_tij_1j_2 + c_tij_1j_2), _nα = σ({E_tj_1j_2:α(t, j_1, j_2)≤α}), Y_n0 = 0, and X_in() = ∑_α = 1^N_nY_nα, where b_tij_1j_2 = b_tij_1j_2() is defined in (<ref>) and c_tij_1j_2 = c_tij_1j_2() is defined in (<ref>). In particular, we have max_j∈[n]|γ_ij| = O{θ_n/(m^1/2n^3/2ρ_n^3/2)} and max_t∈[m],j∈[n]|ξ_tij| = O{θ_n/(mnρ_n)^1/2}. ▪ Condition (a) for martingale central limit theorem. For any α∈[N_n], let t(α)∈[m],j_1(α),j_2(α), j_1(α)≤ j_2(α) be the unique indices such that α(t(α),j_1(α),j_2(α)) = α. This enables us to write ∑_α = 1^N_n{(Y_nα^2|_n(α - 1)) - (Y_nα^2)} = ∑_t = 1^m∑_j_1≤ j_2(b_tij_1j_2^2 - b_tij_1j_2^2)σ_tj_1j_2^2 + 2∑_t = 1^m∑_j_1≤ j_2b_tij_1j_2c_tij_1j_2σ_tj_1j_2^2. We work with the two terms separately. For the second term in (<ref>), we have 2∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2b_tij_1j_2c_tij_1j_2σ_tj_1j_2^2 = 2∑_t = 1^m∑_j_1,j_2∈[n]:j_1 ≤ j_2ι_j_1j_2∑_j_3 = 1^j_1 - 1E_tj_2j_3{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)}c_tij_1j_2σ_tj_1j_2^2 + 2∑_t = 1^m∑_j_1,j_2∈[n]:j_1 ≤ j_2ι_j_1j_2∑_j_3 = 1^j_2 - 1E_tj_1j_3{e_ij_2γ_ij_31(j_3≠ i) + e_ij_3γ_ij_21(j_2≠ i)}c_tij_1j_2σ_tj_1j_2^2. The two terms above are sums of independent mean-zero random variables with variances O(θ_n^4(mn^2ρ_n)^-1). Therefore, the second term in (<ref>) converges to 0 in probability by Chebyshev's inequality. For the first term (<ref>), we compute the second moment {∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2(b_tij_1j_2^2 - b_tij_1j_2^2)σ_tj_1j_2^2}^2 ≤∑_t = 1^m∑_j_1,j_2,j_3,j_4∈[n],j_1≤ j_2, j_3≤ j_4(b_tij_1j_2^2b_tij_3j_4^2)σ_tj_1j_2^2σ_tj_3j_4^2 = 2∑_t = 1^m∑_j_1≤ j_2, j_3≤ j_4,α(t, j_1, j_2) < α(t, j_3, j_4)(b_tij_1j_2^2b_tij_3j_4^2)σ_tj_1j_2^2σ_tj_3j_4^2 + ∑_t = 1^m∑_j_1,j_2∈[n],j_1≤ j_2(b_tij_1j_2^4)σ_t_1j_1j_2^4. By Lemma <ref> with i_1 = i_2 = i, the first term above converges to 0. By Lemma <ref>, the second term above also converges to 0. This shows that the second moment of the first term in (<ref>) goes to 0, and hence, the first term in (<ref>) is o_p(1) by Chebyshev's inequality. Therefore, we have shown that (<ref>) converges to 0 in probability. Next, we show that ∑_α = 1^N_n Y_nα^2→_2^2 for any deterministic vector ∈ℝ^d. To this end, first observe that for any j_1,j_2∈[n], j_1 < j_2, (E_tj_2j_3)_j_3 = 1^j_1 - 1 and (E_tj_1j_3)_j_3 = 1^j_2 - 1 form a collection of independent mean-zero random variables, in which case b_tij_1j_2^2 = ∑_j_3 = 1^j_1 - 1σ_tj_2j_3^2{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)}^2 + ∑_j_3 = 1^j_2 - 1σ_tj_1j_3^2{e_ij_2γ_ij_31(j_3≠ i) + e_ij_3γ_ij_21(j_2≠ i)}^2, and for the case where j_1 = j_2, we also have b_tij_1j_1^2 = ∑_j_3 = 1^j_1 - 1σ_tj_1j_3^2{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)}^2. Therefore, we proceed to compute ∑_α = 1^N_n Y_nα^2 = ∑_t = 1^m∑_j_1 = 1^n - 1∑_j_2 = j_1 + 1^n∑_j_3 = 1^j_1 - 1σ_tj_1j_2^2σ_tj_2j_3^2{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)}^2 + ∑_t = 1^m∑_j_1 = 1^n - 1∑_j_2 = j_1 + 1^n∑_j_3 = 1^j_2 - 1σ_tj_1j_2^2σ_tj_1j_3^2{e_ij_2γ_ij_31(j_3≠ i) + e_ij_3γ_ij_21(j_2≠ i)}^2 + ∑_t = 1^m∑_j_1 = 1^n∑_j_3 = 1^j_1 - 1σ_tj_1j_1^2σ_tj_1j_3^2{e_ij_1γ_ij_31(j_3≠ i) + e_ij_3γ_ij_11(j_1≠ i)}^2 + ∑_t = 1^m∑_j_1 = 1^n - 1∑_j_2 = j_1 + 1^nσ_tj_1j_2^2(e_ij_1ξ_tij_2 + e_ij_2ξ_tij_1)^2 + ∑_t = 1^m∑_j_1 = 1^nσ_tj_1j_1^2e_ij_1ξ_tij_1^2 Using the fact that e_ij_1e_ij_2 = 0 when j_1 < j_2, switching the roles between j_1 and j_2 in the second summation, and rearranging, we further obtain ∑_α = 1^N_n Y_nα^2 = ∑_t = 1^m∑_j_2 = 1^n∑_j_3 = 1^nσ_tij_2^2σ_tj_2j_3^2γ_ij_3^2 + ∑_t = 1^m∑_j_2 = 1^nσ_tij_2^2ξ_tij_2^2 + O(θ_n^2/n^2ρ_n)→_2^2, thereby establishing condition (a) for martingale central limit theorem. Condition (b) for martingale central limit theorem. It is sufficient to show that ∑_α = 1^N_n Y_nα^4→ 0. By definition, we have ∑_α = 1^N_n Y_nα^4 = ∑_t = 1^m∑_j_1≤ j_2( E_tj_1j_2^4)( b_tij_1j_2^4 + 4 b_tij_1j_2^3c_tij_1j_2 + 6 b_tij_1j_2^2c_tij_1j_2^2 + c_tij_1j_2^4) ≲ρ_n∑_t = 1^m∑_j_1≤ j_2( b_tij_1j_2^4 + c_tij_1j_2^4). It follows from Lemma <ref> that ∑_α = 1^N_n Y_nα^4 = O{θ_n^4(mn^4ρ_n^4)^-1 + θ_n^4(mn^3ρ_n^3)^-1 + θ_n^4(mnρ_n)^-1}→ 0. Therefore, condition (b) for martingale central limit theorem also holds, so that X_in()→N(0, 1). The proof is thereby completed. § PROOF OF THEOREM <REF> Let = ()^-1/2 and _t = ()^1/2_t()^1/2. By Rayleigh-Ritz theorem, min_t∈[m]σ_d(_t) = min_t∈[m]λ_d^1/2(_t^2) = min_t∈[m]min__2 = 1(_t^2)^1/2 ≥min_t∈[m]min__2 = 1()^1/2_2λ_d^1/2()λ_d(_t) = Ω(nρ_n). Clearly, max_t∈[m]σ_1(_t)/σ_d(_t) = O(1) and _2→∞≤_2→∞λ_d^-1/2() = O(n^-1/2). Given the conditions of Theorem <ref>, it is straightforward to obtain λ_k(_i) = Θ(mn^2ρ_n^3) and λ_k(_i) = Θ(mnρ_n^2). Hence, the conditions of Theorem <ref> hold, implying that for any fixed vector ∈ℝ^d, _i^-1/2(_1_i - _i) = X_in() + (1), where _i and _i denote the ith row of and , respectively, X_in() is defined in (<ref>), and = sgn(). ▪ Asymptotic normality of _i_1 - _i_2. This part is similar to the proof of Theorem <ref>. Recall the notations in Lemma <ref>. For any deterministic vectors _1,_2∈ℝ^d, let v_i_1i_2n = X_i_1n(_1) + X_i_2n(_2). Note that v_i_1i_2n = ∑_t = 1^m∑_j_1 ≤ j_2E_tj_1j_2{b_ti_1j_1j_2(_1) + b_ti_2j_1j_2(_2) + c_ti_1j_1j_2(_1) + c_ti_2j_1j_2(_2)} and b_ti_1j_1j_2(_1) + b_ti_2j_1j_2(_2) + c_ti_1j_1j_2(_1) + c_ti_2j_1j_2(_2) is _n(α(t, j_1, j_2) - 1) measurable. For any α∈ N_n = (1/2)mn(n + 1), let (t(α),j_1(α),j_2(α))∈[m]×[n]×[n] be the unique indices such that α(t(α),j_1(α),j_2(α)) = α. Let Y_nα = E_t(α)j_1(α)j_2(α){b_t(α)i_1j_1(α)j_2(α)(_1) + b_t(α)i_2j_1(α)j_2(α)(_2) + c_t(α)i_1j_1(α)j_2(α)(_1) + c_t(α)i_2j_1(α)j_2(α)(_2)} and _nα = σ({E_tj_1j_2:t∈[m],j_1,j_2∈[n],j_1≤ j_2,α(t, j_1, j_2)≤α}), where α(·) is the relabeling function defined in (<ref>). Set _n0 = ∅. Clearly, v_i_1i_2n = ∑_α = 1^N_nY_nα, where (Y_nα)_α = 1^N_n forms a martingale difference sequence with regard to the filtration (_nα)_α = 0^N_n - 1. We now argue the asymptotic normality of v_i_1i_2n by applying the martingale central limit theorem. We first verify condition (a) for martingale central limit theorem. Write ∑_α = 1^N_n(Y_nα^2|_n(α - 1)) - ∑_α = 1^N_n(Y_nα^2) = ∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2(b̅_tj_1j_2^2 - b̅_tj_1j_2^2) σ_tj_1j_2^2 + 2∑_i,i'∈{i_1,i_2}∑_,'∈{_1,_2}∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2b_tij_1j_2()c_ti'j_1j_2(')σ_tj_1j_2^2, where b̅_tj_1j_2 := b_ti_1j_1j_2(_1) + b_ti_2j_1j_2(_2). We work with the two terms on the right-hand side of (<ref>) separately. For the second term in (<ref>), for any i,i'∈{i_1,i_2} and any ,'∈{_1,_2}, we have 2∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2b_tij_1j_2()c_ti'j_1j_2(')σ_tj_1j_2^2 = 2∑_t = 1^m∑_j_1,j_2∈[n]:j_1 ≤ j_2∑_j_3 = 1^j_1 - 1E_tj_2j_3{e_ij_1γ_ij_3()1(j_3≠ i) + e_ij_3γ_ij_1()1(j_1≠ i)}c_ti'j_1j_2(')σ_tj_1j_2^2 + 2∑_t = 1^m∑_j_1,j_2∈[n]:j_1 < j_2∑_j_3 = 1^j_2 - 1E_tj_1j_3{e_ij_2γ_ij_3()1(j_3≠ i) + e_ij_3γ_ij_2()1(j_2≠ i)}c_ti'j_1j_2(')σ_tj_1j_2^2. The two terms above are sums of independent mean-zero random variables with variances O(θ_n^4(mn^2ρ_n)^-1). Therefore, the second term in (<ref>) is o_p(1) by Chebyshev's inequality. For the first on the right-hand side of (<ref>), we compute the second moment {∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2(b̅_tj_1j_2^2 - b̅_tj_1j_2^2)σ_tj_1j_2^2}^2 ≤∑_t = 1^m∑_j_1,j_2,j_3,j_4∈[n],j_1≤ j_2, j_3≤ j_4b̅_tj_1j_2^2b̅_tj_3j_4^2σ_tj_1j_2^2σ_tj_3j_4^2 = 2∑_t = 1^m∑_j_1≤ j_2, j_3≤ j_4,α(t, j_1, j_2) < α(t, j_3, j_4)b̅_tj_1j_2^2b̅_tj_3j_4^2σ_tj_1j_2^2σ_tj_3j_4^2 + ∑_t = 1^m∑_j_1,j_2∈[n],j_1≤ j_2b̅_tj_1j_2^4σ_t_1j_1j_2^4. By Lemma <ref> and Lemma <ref>, the two terms above converge to 0. This shows that the second moments of the first and second terms in (<ref>) go to 0, and hence, the first and second terms on the right-hand side of (<ref>) is o_p(1) by Chebyshev's inequality. Therefore, we have shown that (<ref>) is o_p(1). Next, we show that ∑_α = 1^N_n Y_nα^2→_1_2^2 + _2_2^2 for any deterministic vectors _1,_2∈ℝ^d. To this end, denote by γ_max = max_i,j∈[n]{|γ_ij(_1)|∨|γ_ij(_2)|}, ξ_max = max_t∈[m],i,j∈[n]{|ξ_tij(_1)|∨|ξ_tij(_2)|}, and observe that ∑_t = 1^m∑_j_1≤ j_2σ_tj_1j_2^2{b_ti_1j_1j_2(_1)b_ti_2j_1j_2(_2)} ≤∑_t = 1^m∑_j_1≤ j_2∑_a,b∈[n]{| E_tj_2aE_tj_2b|(e_i_1j_1 + e_i_1a)(e_i_2j_1 + e_i_2b) + | E_tj_2aE_tj_1b|(e_i_1j_1 + e_i_1a)(e_i_2j_2 + e_i_2b)}ρ_nγ_max^2 + ∑_t = 1^m∑_j_1≤ j_2∑_a,b∈[n]{| E_tj_1aE_tj_2b|(e_i_1j_2 + e_i_1a)(e_i_2j_1 + e_i_2b) + | E_tj_1aE_tj_1b|(e_i_1j_2 + e_i_1a)(e_i_2j_2 + e_i_2b)}ρ_nγ_max^2 ≲∑_t = 1^m∑_j_1,j_2∈[n](e_i_1j_1 + e_i_1j_2 + e_i_2j_1 + e_i_2j_2)θ_n^2/mn^3ρ_n^2 = O(θ_n^2/n^2ρ_n), ∑_t = 1^m∑_j_1≤ j_2σ_tj_1j_2^2c_ti_1j_1j_2(_1)c_ti_2j_1j_2(_2) ≤∑_t = 1^m∑_j_1≤ j_2ρ_n(e_i_1j_1 + e_i_1j_2)(e_i_2j_1 + e_i_2j_2)ξ_max^2 = O(θ_n^2/n), where we have used the fact that e_i_1je_i_2j = 0 for any j∈[n] since i_1≠ i_2. Therefore, by the proof of Theorem <ref>, we obtain ∑_α = 1^N_n Y_nα^2 = ∑_t = 1^m∑_j_1≤ j_2σ_tj_1j_2^2{ b_ti_1j_1j_2^2(_1) + c_ti_1j_1j_2^2(_1)} + ∑_t = 1^m∑_j_1≤ j_2σ_tj_1j_2^2{ b_ti_2j_1j_2^2(_2) + c_ti_2j_1j_2^2(_2)} + 2∑_t = 1^m∑_j_1≤ j_2σ_tj_1j_2^2{b_ti_1j_1j_2(_1)b_ti_2j_1j_2(_2)} + 2∑_t = 1^m∑_j_1≤ j_2σ_tj_1j_2^2c_ti_1j_1j_2(_1)c_ti_2j_1j_2(_2)} = _1_2^2 + _2_2^2 + o(1). This establishes condition (a) for martingale central limit theorem with σ^2 = _1_2^2 + _2_2^2. For condition (b), it is sufficient to show that ∑_α = 1^N_n Y_nα^4→ 0. By definition, we have ∑_α = 1^N_n Y_nα^4 = ∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2( E_tj_1j_2^4)(b̅_tj_1j_2 + c̅_tij_1j_2)^4 ≲ρ_n∑_t = 1^m∑_j_1,j_2∈[n]:j_1≤ j_2{ b_ti_1j_1j_2^4(_1) + b_ti_2j_1j_2^4(_2) + c_ti_1j_1j_2^4(_1) + c_ti_2j_1j_2^4(_2)}. It follows from Lemma <ref> that ∑_α = 1^N_n Y_nα^4 = O(θ_n^4(mn^4ρ_n^4)^-1 + θ_n^4(mn^3ρ_n^3)^-1 + (mnρ_n)^-1)→ 0. Therefore, condition (b) for martingale central limit theorem also holds, so that v_i_1i_2n→N(0, _1_2^2 + _2_2^2). By Cramér-Wold device and Theorem <ref>, this further implies that [_i_1^-1/2{_i_1( - )_1}, _i_2^-1/2{_i_2( - )_1}]→N_2d(_2d, _2d). Namely, under the null hypothesis H_0, we have (_i_1 + _i_2)^-1/2_1(_i_1 - _i_2)→N_d(_d, _d) and _i_1 - _i_2_2 = O_p((m^1/2nρ_n^1/2θ_n)^-1). Similarly, under H_A, we have (_i_1 + _i_2)^-1/2{_1(_i_1 - _i_2) - (_i_1 - _i_2)}→N_d(_d, _d) and _1(_i_1 - _i_2) - (_i_1 - _i_2)_2 = O_p((m^1/2nρ_n^1/2θ_n)^-1). ▪ Convergence of _i. Denote by = _1 By Theorem <ref>, we know that - _2→∞ = {1/√(n)(ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼))}, _2→∞ = (n^-1/2), - _2 = (ε_n^(𝗈𝗉) + ε_n^(𝖻𝖼)). By Lemma <ref> and a union bound over t∈[m], we know that max_t∈[m]_t_2 = {(log n)^1/2 + (nρ_n)^1/2} and max_t∈[m]_t_2 = {nρ_n + (log n)^1/2 + (nρ_n)^1/2}. Then we obtain max_t∈[m]_t - _t_max ≤max_t∈[m] - _2→∞_t_2_2→∞ + max_t∈[m]_2→∞ - _2_t_2_2→∞ + max_t∈[m]_2→∞_t_2 - _2_2→∞ + max_t∈[m]_2→∞_t_2_2→∞ + max_t∈[m]_2→∞_t_2 - _2_2→∞ + max_t∈[m]_2→∞_t_2 - _2→∞ = (ρ_n) . Denote by P_tij = _i_t_j. Note that max_t∈[m]_t_max≤_2→∞max_t∈[m]_t_2 = O(ρ_n). This further implies the following estimation error bounds for P_tij's: and σ_tij's: min_t∈[m],i,j∈[n]|P_tij| ≥min_t∈[m],i,j∈[n]P_tij - max_t∈[m]_t - _t_max = Ω(ρ_n) - |(ρ_n)|, max_t∈[m],i,j∈[n]|P_tij| ≤max_t∈[m],i,j∈[n]P_tij + max_t∈[m]_t - _t_max = O(ρ_n) + |(ρ_n)|, max_t∈[m],i,j∈[n]|σ_tij^2 - σ_tij^2| ≤max_t∈[m],i,j∈[n]|P_tij - P_tij|(2 + |P_tij|) = (ρ_n) max_t∈[m],i,j,l∈[n]|σ_tij^2σ_tjl^2 - σ_tij^2σ_tjl^2| ≤max_t∈[m],i,j,l∈[n]{|(σ_tij^2 - σ_tij^2)σ_tjl^2| + |σ_tij^2(σ_tjl^2 - σ_tjl^2)|} = (ρ_n^2). Similarly, we also obtain the following estimation error bounds for _t and : max_t∈[m]_t - _t_F ≤ d^1/2 - _2→∞max_t∈[m](_t_2_2→∞ + _t_2_2→∞) + d^1/2max_t∈[m]_t_2 + d^1/2max_t∈[m]_2→∞_t_2 - _2→∞ = (nρ_n), max_t∈[m]_t_2 ≤max_t∈[m]_t - _t_2 + max_t∈[m]_t_2 = O(nρ_n) + (nρ_n), max_t∈[m]_t^2 - _t^2_2 ≤max_t∈[m](_t - _t)_t_2 + max_t∈[m]_t(_t - _t)_2 = {(nρ_n)^2}, min_t∈[m]σ_d(_t) ≥min_t∈[m]{λ_d(_t^2) - _t^2 - λ_d(_t^2)_2}^1/2 = Ω(nρ_n) - |(nρ_n)|, - _2 ≤∑_t = 1^m_t^2 - _t^2 = {m(nρ_n)^2} _2 ≤ - _2 + _2 = O(mn^2ρ_n^2) + (mn^2ρ_n^2), λ_d() ≥λ_d() - - _2 = Ω(mn^2ρ_n^2) - |(mn^2ρ_n^2)|. Therefore, we can further obtain the following estimation error bounds for _i, _i: _i - _i_2 ≤∑_t = 1^m_t - _t_2max_j∈[n]P_tij_t_2 + ∑_t = 1^m_t_2 - _2max_j∈[n]P_tij_t_2 + ∑_t = 1^m_t_2max_j∈[n]|σ_tij^2 - σ_tij^2|_t_2 + ∑_t = 1^m_t_2max_j∈[n]σ_tij^2 - _2_t_2 + ∑_t = 1^m_t_2max_j∈[n]σ_tij^2_t - _t_2 = (mn^2ρ_n^3), so that _i_2 ≤_i - _i_2 + _i_2 = O(mn^2ρ_n^3) + (mn^2ρ_n^3) and _i - _i ≤ - _2max_l∈[n]∑_t = 1^m∑_j = 1^nσ_til^2σ_tjl^2 + max_l∈[n]∑_t = 1^m∑_j = 1^n|σ_til^2σ_tjl^2 - σ^2_tilσ^2_tjl| + max_l∈[n]∑_t = 1^m∑_j = 1^nσ^2_tilσ^2_tjl - _2 = (mnρ_n^2), so that _i_2≤_i - _i_2 + _i = O(mnρ_n^2) + (mnρ_n^2). Also, observe that ()^-1 - ()^-1_2 ≤()^-1_2_2 - _2()^-1_2 = ((mn^2ρ_n^2)^-1). Hence, we can finally provide the error bound for _i: _i - _i ≤()^-1 - ()^-1_2(_i_2 + _i_2)()^-1_2 + ()^-1_2(_i - _i_2 + _i - _i_2)()^-1_2 + ()^-1_2(_i_2 + _i_2)()^-1 - ()^-1_2 = (1/mn^2ρ_nθ_n^2). In particular, this implies that _i_2 = ((mn^2ρ_nθ_n^2)^-1) and λ_d(_i)≥Ω((mn^2ρ_nθ_n^2)^-1) - |((mn^2ρ_nθ_n^2)^-1)|. ▪ Asymptotic null distribution of T_i_1i_2. By definition, we have T_i_1i_2 = (_i_1 - _i_2)(_i_1 + _i_2)^-1(_i_1 - _i_2) + (_i_1 - _i_2){(_i_1 + _i_2)^-1 - (_i_1 + _i_2)^-1}(_i_1 - _i_2) Observe that |(_i_1 - _i_2){(_i_1 + _i_2)^-1 - (_i_1 + _i_2)^-1}(_i_1 - _i_2)| ≤_i_1 - _i_2_2^2(_i_1 + _i_2)^-1_2(_i_1 - _i_1_2 + _i_2 - _i_2_2)(_i_1 + _i_2)^-1_2 = o_p(1). It follows from Slutsky's theorem that T_i_1i_2 = (_i_1 - _i_2)(_i_1 + _i_2)^-1(_i_1 - _i_2) + o_p(1)→χ^2_d. ▪ Asymptotic distribution of T_i_1i_2 under the alternative. Let M_n = (mn^2ρ_n)^1/2θ_n_i_1 - _i_2_2. Clearly, we know that M_n→∞ because _i_1 - _i_2_2 = Θ(n^-1/2_i_1 - _i_2_2), and {(mn^2ρ_n)^1/2θ_n(_i_1 - _i_2) - (_i_1 - _i_2)_2≥ M_n/2}→ 0, so that {(mn^2ρ_n)^1/2θ_n_i_1 - _i_2_2 > M_n/2} ≥{(mn^2ρ_n)^1/2θ_n(_i_1 - _i_2) - (_i_1 - _i_2)_2 ≤ M_n/2}→ 1 and there exists some constant c_0 > 0, such that (T_i_1i_2≥ C) ≥{T_i_1i_2≥ M_n^2/(4c_0)} = {(_i_1 - _i_2)(_i_1 + _i_2)^-1(_i_1 - _i_2)≥ M_n^2/(4c_0)} ≥[λ_d{(_i_1 + _i_2)^-1}_i_1 - _i_2_2^2≥ M_n^2/(4c_0),λ_d{(_i_1 + _i_2)^-1}≥ mn^2ρ_nθ_n^2/c_0] ≥[mn^2ρ_nθ_n^2_i_1 - _i_2_2^2≥ M_n^2/4,λ_d{(_i_1 + _i_2)^-1}≥ mn^2ρ_nθ_n^2/c_0]→ 1. On the other hand, under the alternative H_A:_i_1≠_i_2 but (_i_1 + _i_2)^-1/2()^-1/2(_i_1 - _i_2)→ for some ∈ℝ^d, we have (_i_1 + _i_2)^-1/2{(_i_1 - _i_2)}→N_d(, _d). Similarly, we also have _i_1 - _i_2_2^2 = O_p((mn^2ρ_nθ_n^2)^-1). It follows that T_i_1i_2 = (_i_1 - _i_2)(_i_1 + _i_2)^-1(_i_1 - _i_2) + (_i_1 - _i_2){(_i_1 + _i_2)^-1 - (_i_1 + _i_2)^-1}(_i_1 - _i_2) = (_i_1 - _i_2)(_i_1 + _i_2)^-1(_i_1 - _i_2) + o_p(1)→χ_d^2(_2^2). The proof is therefore completed. acm
http://arxiv.org/abs/2406.07934v1
20240612065800
Hardware Implementation of Soft Mapper/Demappers in Iterative EP-based Receivers
[ "Ian Fischer Schilling", "Serdar Sahin", "Camille Leroux", "Antonio Maria Cipriano", "Christophe Jego" ]
cs.AR
[ "cs.AR", "eess.SP" ]
Hardware Implementation of Soft Mapper/Demappers in Iterative EP-based Receivers Ian Fischer Schilling*, Serdar Şahin†, Camille Leroux*, Antonio Maria Cipriano†, Christophe Jégo* *University of Bordeaux, Bordeaux INP IMS Lab, UMR CNRS 5218, France firstname.last-name@ims-bordeaux.fr † Thales Gennevilliers, France firstname.last-name@thalesgroup.com This work has been funded by the French National Research Agency under grant number ANR-20-CE25-0008-01 (EVASION Project: https://anr-evasion.ims-bordeaux.fr/). June 17, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT This paper presents a comprehensive study and implementations onto FPGA device of an Expectation Propagation (EP)-based receiver for QPSK, 8-PSK, and 16-QAM. To the best of our knowledge, this is the first for this kind of receiver. The receiver implements a Frequency Domain (FD) Self-Iterated Linear Equalizer (SILE), where EP is used to approximate the true posterior distribution of the transmitted symbols with a simpler distribution. Analytical approximations for the EP feedback generation process and the three constellations are applied to lessen the complexity of the soft mapper/demapper architectures. The simulation results demonstrate that the fixed-point version performs comparably to the floating-point. Moreover, implementation results show the efficiency in terms of FPGA resource usage of the proposed architecture. Expectation Propagation, Frequency Domain Self-Iterated Linear Equalizer, Analytical approximations, architecture design, FPGA prototyping § INTRODUCTION In digital communication systems, achieving minimal error rates in data detection and/or decoding requires the resolution of a Maximum A Posteriori (MAP) or Maximum Likelihood (ML) problem <cit.>. However, the computational complexity of resolving such criteria is often prohibitive, particularly in real-world frequency selective channels, where the number of computations increases exponentially with factors such as data length, modulation order, and channel memory. As a result, practical receiver design often involves applying simplifying hypotheses and approximations. One promising approach in the context of Frequency Domain (FD) Linear Equalization (LE) is equalizers designed with Expectation Propagation (EP). Indeed, they have demonstrated an appealing trade-off between performance and computational complexity <cit.>. In this paper, a comprehensive study and implementation of an EP-based receiver for communications over frequency selective channels using standard Phase Shift Keying (PSK) or Quadrature Amplitude Modulation (QAM) constellations is presented. The receiver implements an FD Self-Iterated Linear Equalizer (SILE), where EP is applied to approximate the true posterior distribution of the transmitted symbols with a simpler distribution that can be easily manipulated. Previously, the implementation of a simplified EP receiver for multiple antenna receivers has been reported in <cit.>. A low complexity EP detector for sparse code multiple access was also proposed in <cit.>. However, the authors only study the impact of simplifications on the performance and estimate the computational cost of their proposal per operation type. Simplified EP-based FD equalization is studied in <cit.>. Similar to the previous case, only a computational complexity assessment was provided. To the best of our knowledge, we propose the first hardware implementation of an EP-based FD SILE receiver. The contributions of this paper are the following: * Methods to reduce the complexity of the EP-based FD-SILE algorithm are proposed, which are different from the ones in <cit.>. They include a new way to generate the EP soft feedback and also a new method to calculate extrinsic Log-Likelihood Ratios (LLR) for 8-PSK. * A fixed-point version of the model enables to verify that the degradation in terms of performance due to these new algorithmic simplifications is limited. * The implementation of soft mapper/demapper architectures is carried out on a Field Programmable Gate Array (FPGA), specifically the PYNQ Z2 board. The PYNQ Z2 board contains a device that combines an ARM processor and an FPGA, which enables easier Ethernet communication. The implementation was done in a Hardware in the Loop (HIL) configuration, with the EP parts implemented onto the FPGA, while the others run on a computer thanks to py-AFF3CT, a Python wrapper for the Forward Error Correction Toolbox AFF3CT <cit.>. The analysis of FPGA resource usage shows very low complexity overhead for three different and widely used constellations. These results confirm that the proposed EP equalizer is potentially a good candidate for practical implementation even on cost- and complexity-constrained digital communication equipment. The paper is organized as follows. A description of the FD SILE receiver with EP is provided in Section <ref>. The simplifications and analytical approximations applied to the soft mapper/demapper to decrease its complexity are presented in Section <ref>. The fixed-point conversion analysis to facilitate the soft mapper/demapper architecture design is presented in Section <ref>. The architecture implementation and FPGA prototyping, with the experimental setup and resource usage, are detailed in Section <ref>. The paper is concluded in Section <ref>. § EP-BASED FD-SILE ALGORITHM Expectation Propagation (EP) is a powerful technique exploited in statistical inference for approximating complex probability distributions by simpler distributions from the exponential family through moment matching. The FD self-iterated linear equalizer (SILE) algorithm derived in <cit.> is based on EP to compute extrinsic soft decision feedback. The functional structure of the iterative receiver is illustrated in Fig. <ref>. The received signal is first mapped to the frequency domain thanks to an FFT function. Then, a linear Minimum Mean Square Error (MMSE) filter with interference cancellation is used for equalization. In addition to the channel frequency response and noise statistics, the filter computation requires the statistics of the soft decision feedback <cit.>. The receiver performs S self-iterations which go through the equalizer, the soft demapper, and the EP-based soft mapper. At self-iteration s=0,…,S, after the filtering stage, the equalized symbols in the time-domain x_k^e(s) are fed into the soft demapper, along with an estimate of the residual post-equalization noise and interference variance v_x^e(s). The soft demapper then estimates the unnormalized log-likelihood distribution ℓ_D,k^(s)(α) with ℓ_D,k^(s)(α) = -|x_k^e(s) - α|^2 / v_x^e(s), ∀k, ∀α∈ X, where X is the symbol constellation and K is the equalized block length. If s < S, the EP soft mapper computes the feedback for the next iteration by first computing the normalized a posteriori distribution in the linear domain, with ∀k, ∀α∈ X, D_k^(s)(α) = exp(ℓ_D,k^(s)(α)) / ∑_α' ∈ Xexp(ℓ_D,k^(s)(α')). Next, the soft a posteriori symbol estimates μ_k^d(s) and variances γ_x^d(s) can be computed with the moments of D_k^(s): μ_k^d(s) = ∑_α∈ Xα D_k^(s)(α), ∀k, γ_x, k^d(s) = ∑_α∈ X |α|^2 D_k^(s)(α) - |μ_k^d(s)|^2, ∀k, γ_x^d(s) = K^-1∑_k=1^Kγ_x, k^d(s), The average variance γ_x^d(s) enables the proper use of EP for FD equalization. Finally, the extrinsic soft feedbacks based on EP are obtained by performing a Gaussian PDF division: x_k^d(s+1) = μ_k^d(s) v_x^e(s) - x_k^e(s)γ_x^d(s)/v_x^e(s) - γ_x^d(s), v_x^d(s+1) = v_x^e(s)γ_x^d(s)/v_x^e(s) - γ_x^d(s). Exponential smoothing could be applied across self-iterations, as done in <cit.>, for stabilizing convergence. But, in order to simplify this initial work, it is not considered. At the last self-iteration s=S, only bit-wise extrinsic LLRs for soft decoding are computed <cit.>: L_e(d_k,q) = log∑_α∈ X_q^0e^ℓ_D,k^(s)(α) - log∑_α' ∈ X_q^1e^ℓ_D,k^(s)(α'). It is important to note that the FD SILE algorithm is considerably less computationally intensive than alternative iterative equalizers in the time domain <cit.>. Nevertheless, logarithmic and exponential operations within soft mapping and demapping still pose a significant implementation challenge and scale poorly with the constellation size. To illustrate this point, the number of floating-point operations (FLOP) involved in the exact equalization, soft mapping, and soft demapping are provided in Table <ref>. These numbers mostly depend on the constellation size and the number of self-iterations. They are obtained through the same approach as in <cit.>, where operations such as addition, multiplication, and division operation are counted with weights. § ANALYTICAL APPROXIMATIONS In this section, we discuss further simplifications of the complexity of EP-based soft feedback computation for practical constellations of QPSK, 8-PSK, and 16-QAM. §.§ Simplified EP-based feedback computation In <cit.>, the use of the asymptotic a posteriori mean-square error (MSE) γ̃_x^d, as a function of v_x^e, instead of γ_x^d, is shown to reduce soft mapping complexity. Moreover, it also improves receiver robustness. In particular, when tabulating the auxiliary quantity C_EP(v_x^e) = γ̃_x^d/(v_x^e - γ̃_x^d), the computational complexity can be further reduced through v_x^d(s+1) = v_x^e(s) C_EP(v_x^e(s)) , x_k^d(s+1) = μ_k^d(s) + C_EP(v_x^e(s))(μ_k^d(s) - x_k^e(s)), ∀k. Although this alleviates part of the soft mapping complexity issue, the computation of a posteriori estimates μ_k^d and extrinsic LLRs L_e(d_k,q) still remains computationally intensive. §.§ Simplified soft demapping In <cit.>, piece-wise linear approximations of μ_k^d(s) as a function of x_k^e(s) were proposed to lower the computational complexity of μ_k^d(s) with small performance degradation. However, this approach is limited to square QAM constellations, and it is too complex for turbo-equalization. In this work, a new simplified computation of EP-based feedback is proposed based on bitwise soft demapping, followed by bitwise soft mapping, for computing both L_e(d_q) and μ^d. This structure is illustrated in Fig. <ref>. The symbol index k and the iteration index s have been dropped for the sake of readability. A widespread means of carrying out soft demapping with reasonable complexity is through the simplification of the log-sum-exp, twice used in eq. (<ref>). Indeed, one can replace this function with a maximum (max-log-MAP) <cit.>, and remains optimal for the QPSK constellation. However, some performance loss can be expected for higher-order constellations. Besides, the max-log-MAP demapper remains prohibitive for high-order constellations. With M being the constellation size, for each symbol, it requires computing M squared-distances and performing M comparisons for each of the Q=log_2 M bits. Various proposals in the literature provide direct linear approximations of max-log-MAP LLRs L_e(d_q) as a function of x^e and v_x^e for each q, computed through the geometric properties of each constellation. Indeed, some practical implementations in the literature <cit.> rely on analytical expressions of the extrinsic max-log MAP LLRs. In particular, there are closed-form analytical equations for Gray-mapped QAM constellations <cit.>. Proposed expressions for the QPSK and 16-QAM constellations are provided in Table <ref>. But regarding non-square constellations, as the 8-PSK, the geometric characterization of max-log MAP LLRs does not systematically have closed-form expressions. For such Gray-mapped PSK constellations, <cit.> exploits labeling properties of the constellation, and the hard decision on the equalized symbol to compute only two distances per LLR. This method performs exact max-log-MAP by exploiting the properties of Gray labeling for finding the likely closest symbol from the set opposite to the hard decision. However, the arithmetic computation of the opposite symbol and the computation of the formula with Euclidean distances still have a significant weight within the overall computational complexity. Here we propose to further simplify the computation of 8-PSK LLRs in <cit.>, by applying a semi-analytical approach <cit.>. Indeed, the max-log-MAP LLR is given by L_e(d_q) = -(1-2d̂_q^*) (|x^e-α^*|^2 -|x^e-α_q^*|^2)/v_x^e, where α^* is the closest constellation point to x^e, with q^th bit of the α^*'s label being denoted d̂_q^* . Furthermore, α_q^* is the constellation point that corresponds to the closest symbol to α^* that has the opposite value on the q^th bit. Instead of computing α_q^*, ∀ q with Q^2 additions as in <cit.>, we rewrite the expression of the max-log-MAP LLR as L_e(d_q) = (R(x^e) R(Δ_α^*, q) + I(x^e) I(Δ_α^*, q))/v_x^e, where Δ_α^*, q = 2(1-2d̂_q^*)(α^*-α_q^*). Hence, Δ_α^*, q can be precomputed and stored in a LUT, as shown in Table <ref>, for each α^* ∈ X and q=1,…,Q: (Δ_α, 1, Δ_α, 2, Δ_α, 3) = LUT_8PSK(α), α∈ X. To access the LUT, a hard decision has to be made through comparisons on x^e, to compute the symbol label m in decimal: m = 4(I(x^e) < 0) +2(R(x^e) < 0) + (|R(x^e)| < |I(x^e)|). §.§ Simplified soft mapping To simplify the computation of the soft symbol estimate μ^d from (<ref>), analytical bitwise soft mapping can be applied. This enables to take into account probabilities on symbols through bitwise LLRs L(d_q) and also to compute μ^d without an intermediate step. Thanks to Gray mapping, assuming that bits within a symbol are independent, we obtain ℙ(x=α) = ∏_q=1^Q ℙ(d_q = ϕ^-1_q(α)), where ϕ^-1_q(·) yields the q^th bit of α∈ X. In addition, as ℙ(d_q = b)=(1+(1-2b)p_q)/2, where p_q=tanh(L(d_q))/2), one can exploit the geometry of the constellations with respect to real and imaginary parts and the labeling, to derive expressions for μ_k^d(s) as a function of soft bits p_q. Note that, in our case, LLRs for soft mapping are extrinsic L(d_q)= L_e(d_q). Nevertheless, this approach can be readily generalized to a turbo-equalizer if a priori LLRs L_a(d_q) from a soft output decoder are available with L(d_q) = L_e(d_q) + L_a(d_q). Such soft mapping techniques have been previously applied in <cit.> and <cit.> for APP-based and EP-based soft feedback, respectively. However, in this study, these formulas are only required for the mean of soft APP estimates, as the EP-based variance is handled through the tabulation technique of <cit.>, with C_EP as stated earlier in Section <ref>. In addition, for the mean estimates, the computation of the hyperbolic tangent could be further simplified, by a piece-wise linear approximation, where all the slope and bias coefficients are powers of two. This enables a more efficient fixed-point implementation, as expressed below: tanh(x) = x, |x|<0.5, 0.5x +0.25sign(x), 0.5≤ |x| < 1.0, 0.25x +0.5sign(x), 1.0≤ |x| < 2.0, sign(x), otherwise. The complete expressions of soft estimates for constellations of interest are provided in Table <ref>. This concludes the algorithmic simplifications in soft mapping and demapping, and the resulting algorithmic complexity is given in Table <ref>. § FIXED-POINT CONVERSION ANALYSIS Fixed-point conversion of the soft mapping/demapping algorithms was done using Fxpmath <cit.>, a Python library for fractional fixed-point (base 2) arithmetic and binary manipulation with Numpy <cit.> compatibility. The focus of the conversion was on the calculations for bitwise max-log MAP demapper, bitwise soft mapper, and EP-based soft estimates (see Fig. <ref>), to facilitate the soft mapper/demapper architecture design. For each variable of the different functions, we determined whether the variable is signed (Q_S=0 or 1), the number of integer bits Q_I, and the number of fractional bits Q_F. The number of integer bits was determined by analyzing the maximum absolute value of the variable while operating in different Signal-to-Noise Ratios (SNRs) and scenarios. Then, an empirical approach was applied thanks to the py-AFF3CT environment. Indeed, this toolbox enables to rapidly estimate if reducing the number of bits would impact the Bit Error Rate (BER). A similar process was carried out for the fractional bits, comparing the impact on BER of different sizes of fractional parts to achieve the minimum size with negligible loss. The total bit size of a variable is given by Q_T = Q_S + Q_I + Q_F. As an example, consider the equalized symbol estimates x_k^e(s), which are used in the simplified soft demapping (Sect. <ref>). For the case of the Proakis-C channel with AWGN for QPSK, this variable has a maximum value of around 6.3 at 0dB, 3 at 10dB, and 2.2 at 20dB. Therefore, the performances were compared for the cases of having many bits for the fractional part Q_F (16 bits) while having 3, 2, 1, and 0 bits for the integer part Q_I. The tests showed that with Q_I = 0, a loss of around 0.5dB at BER 10^-3 is introduced. For the other values, there is negligible loss when using 1 integer bit compared to 2 and 3. This is due to noise amplitude having a more important range than the normalized signals at low SNRs. As for the fractional part, for QPSK, there is not a clear loss introduced with 3 bits. However, for 8-PSK, the use of only 3 fractional bits introduces a loss of around 1dB at BER 10^-3. With Q_F = 4, there is a smaller but still perceptible loss of around 0.1dB. Considering these empirical observations, 2 bits were allocated for the integer part and 5 bits for the fractional part, as a safe margin for other channels and scenarios, and 1 bit for the sign. In the end, Q_T=8 bits were assigned to the representation of the symbol estimates. The sizes of all the other variables were chosen with a similar process. All the named variables were successfully converted to Q_T = 8, while some internal calculations need up to 10 bits. The performances in terms of Bit Error Rate (BER) for the original algorithm, the simplified algorithm, and its fixed-point version are shown in Fig. <ref>. The examined channel was the challenging Proakis-C. We focused on six different Modulation and Coding Schemes (MCS): the three presented constellations, using the LTE turbo code and rate matching with code rates 1/2 and 3/4. All the simulations were done considering one self-iteration (S=1). The standard linear MMSE equalizer performance (given for S=0) is provided for reference. For all cases, the fixed-point version performs very similarly to the simplified floating-point one. For Proakis-C MCS 2, 5, and 6, the original algorithm performs significantly worse than the simplified version at high SNR. In fact, bad interference patterns can make some observations close to wrong constellation points. In such situations, the original EP receiver generates an over-optimistic EP variance, and the EP feedback will be considered as correct while it is not. On the other hand, the simplified receiver calculates the EP variance independently of (possibly wrong) observations, through the C_EP LUT. This generates a more conservative feedback, thus avoiding the BER degradations. As for the small gain of the fixed-point version, this observed behavior can be attributed to the inherent quantization noise introduced by the fixed-point representation. Interestingly, this noise can induce perturbations that effectively reduce the occurrence of unfavorable interference patterns, thereby enhancing the robustness of the algorithm in challenging scenarios. § FPGA IMPLEMENTATION AND PROTOTYPING §.§ Experimental Setup The experimental setup employs a Hardware in the Loop (HIL) configuration, involving a computer and a PYNQ Z2 board <cit.>. The computer runs py-AFF3CT, a Python wrapper for the Forward Error Correction Toolbox AFF3CT. The PYNQ Z2 board is a prototyping board based on the Xilinx Zynq System on Chip (SoC) with an ARM processor and an FPGA ZYNQ XC7Z020-1CLG400C. In our experimental setup, the board is connected to the computer via an Ethernet cable. The board's ARM processor forwards the data to the FPGA device via a Direct Memory Access (DMA). Moreover, it facilitates the Ethernet connection between the computer and the SoC device, acting as a passthrough. Fig. <ref> shows the block diagram of simplified EP-based FD-SILE with the HIL configuration. Within the FPGA, the three parts of the algorithm with analytical simplifications have been implemented: bitwise max-log MAP demapper (eq. (<ref>) and Table <ref>), bitwise soft mapper (Table <ref>) and LUT-based variance parameter (C_EP in section <ref>), and EP-based soft estimates (eq. (<ref>-<ref>)). These implemented blocks of the algorithm perform essential computation for the Expectation Propagation-based receiver and are the focus of the presented work. The remaining parts of the receiver are executed thanks to AFF3CT on the associated computer. These software blocks encompass filtering, equalization, FFT and IFFT operations, rate dematching, and Forward Error Correction (FEC) decoding. This HIL setup is used to obtain the previously discussed fixed-point simulation results. §.§ FPGA Implementation Results Table <ref> shows the total available and allocated FPGA resources for each of the three constellations that are considered in this article: QPSK, 8-PSK, and 16-QAM. The first row for each constellation is for the implementation of only FD-SILE without the HIL configuration, while the second row is for the FD-SILE with the HIL configuration, including the data exchanges to the ZYNQ Processing System and the DMA. The implemented architectures use a data width of 32 bits for the DMA connection, enabling to exchange two symbols with 8-bit real and imaginary parts. Given the low resource usage for the analytical blocks, several calculations can be carried out in parallel in the proposed architectures. For QPSK and 16-QAM, the architectures are composed of four independent bitwise max-log MAP demapper blocks, as the real and imaginary parts are independent, and generates respectively one and two LLRs for each value. For 8-PSK, the proposed architecture processes two values (one symbol) to produce three LLRs, using two demapper blocks in parallel for the four values. The process for the bitwise soft mapper is the inverse of this, using two LLRs per symbol for the QPSK, three for 8-PSK, and four for 16-QAM. The calculation of the EP-based soft estimates is the same for the three constellations, using four of these blocks in parallel. The FPGA implementation demonstrates efficient resource usage, as shown in Table <ref>, with very limited occupation in terms of Look-Up Tables (LUTs) and Flip-Flops (FFs). The implementation of the analytical blocks consumes fewer resources than other resources that are necessary to communicate with the ARM processor. It is important to note that the implementation with the highest resource usage is the one for 8-PSK mapping. Indeed, this mapping uses a LUT-aided semi-analytical implementation instead of an analytical implementation. One can note that no DSP resources are assigned in the FPGA. This is due to the fact that all the arithmetic operations are applied on data represented by a limited number of bits, thanks to the proposed fixed-point analysis. Furthermore, all implementations achieve an execution frequency of 100MHz, given the critical paths below 10 ns. § CONCLUSION This paper has presented a comprehensive study and, to the best of our knowledge, the first hardware implementation of an EP-based FD-SILE for QPSK, 8-PSK, and 16-QAM constellations on an FPGA platform. The results demonstrated that the fixed-point version performed comparably to the floating-point one, even helping to mitigate the negative impacts of bad interference patterns in the challenging Proakis-C test channel. The resource utilization for the analytical blocks was low, allowing the remaining resources to be used for other blocks of the receiver if necessary. In future works, there are several promising possibilities. One such direction is the implementation of a flexible receiver that can adapt to the mapping constellation. Another possibility is the integration of turbo iterations into the receiver design, which could further enhance its performance. unsrt 00 Bahl74 L. Bahl, J. Cocke, F. Jelinek and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate”, IEEE Trans. on Information Theory, Mar. 1974. Sahin18 S. Şahin, A. M. Cipriano, C. Poulliat, M.-L. Boucheret, "A framework for iterative frequency domain EP-based receiver design", IEEE Trans. on Communications, Dec. 2018. Auras14 D. Auras, R. Leupers and G. Ascheid, "A Novel Class of Linear MIMO Detectors with Boosted Communications Performance: Algorithm and VLSI Architecture," 2014 IEEE Computer Society Annual Symposium on VLSI, 2014. Xiao19 J. Xiao, J. Hu, and K. Han, “Low complexity expectation propagation detection for SCMA using approximate computing”, 2019 IEEE Global Commun. Conf. (GLOBECOM), 2019. Cipriano23 A. M. Cipriano, S. Şahin, C. Poulliat, "Practical Frequency-Domain Decision Feedback Equalization Based on Expectation Propagation", IEEE Communications Letters, Oct. 2023. Cassagne19 A. Cassagne et al., “AFF3CT: A Fast Forward Error Correction Toolbox!,“ SoftwareX, 2019. Erfanian94 J. Erfanian, S. Pasupathy and G. Gulak, "Reduced complexity symbol detectors with parallel structure for ISI channels," in IEEE Trans. on Communications, Feb./Mar./Apr. 1994. Ali15 I. Ali, U. Wasenmüller and N. Wehn, “A high throughput architecture for a low complexity soft-output demapping algorithm”, Advances in Radio Science, 2015. Tosato02 F. Tosato and P. Bisaglia, “Simplified Soft-Output Demapper for Binary Interleaved COFDM with Application to HIPERLAN/2”, Proceedings of IEEE ICC 2002, Apr./May 2002. Wang14 Q. Wang, Q. Xie, Z. Wang, S. Chen and L. Hanzo, “A Universal Low-Complexity Symbol-to-Bit Soft Demapper”, IEEE Trans. on Vehicular Technology, Jan. 2014. Sahin_brev S. Şahin, A. M. Cipriano, “Méthode de démodulation souple max-log MAP semi-analytique”, Application number: FR2310929, filing date: 12/10/2023. Tomasoni06 A. Tomasoni, M. Ferrari, D. Gatti, F. Osnato, and S. Bellini, “A low complexity turbo MMSE receiver for W-LAN MIMO systems,” in Proc.2006 IEEE Int. Conf. Commun. (ICC), Jun. 2006. FXP_math20 Alcaraz, F., 2020. "Fxpmath". Available at: https://github.com/francof2a/fxpmath. numpy20 Harris, C.R., Millman, K.J., van der Walt, S.J. et al. "Array programming with NumPy". Nature 585, 2020. pynqz2 AMD, 2024. AUP PYNQ-Z2. Available at: https://www.amd.com/en/corporate/university-program/aup-boards/pynq-z2.html.
http://arxiv.org/abs/2406.08573v1
20240612182049
Refined cyclic renormalization group in Russian Doll model
[ "Vedant Motamarri", "Ivan M. Khaymovich", "Alexander Gorsky" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.mes-hall", "hep-th", "quant-ph" ]
=1 plainnat
http://arxiv.org/abs/2406.08121v1
20240612115938
Moments of derivatives of the Riemann zeta function: Characteristic polynomials and the hybrid formula
[ "Christopher Hughes", "Andrew Pearce-Crump" ]
math.NT
[ "math.NT" ]
Moments of derivatives of the Riemann zeta function]Moments of derivatives of the Riemann zeta function: Characteristic polynomials and the hybrid formula Department of Mathematics, University of York, York, YO10 5DD, United Kingdom christopher.hughes@york.ac.uk Department of Mathematics, University of York, York, YO10 5DD, United Kingdom andrew.pearce-crump@york.ac.uk § ABSTRACT We conjecture results about the moments of mixed derivatives of the Riemann zeta function, evaluated at the non-trivial zeros of the Riemann zeta function. We do this in two different ways, both giving us the same conjecture. In the first, we find asymptotics for the moments of derivatives of the characteristic polynomials of matrices in the Circular Unitary Ensemble. In the second, we consider the hybrid model approach first proposed by Gonek, Hughes and Keating. [ Andrew Pearce-Crump June 17, 2024 ======================= § INTRODUCTION Let ζ (s) be the Riemann zeta function. Throughout this paper we assume the Riemann Hypothesis, that is, that all non-trivial zeros can be written as ρ = 12 + i γ with real γ. We will develop the philosophy of Keating and Snaith <cit.> and Hughes, Keating and O'Connell <cit.> from random matrix theory to study the discrete moments of the nth derivative of ζ (s), denoted ζ^(n) (s). That is, we study 1/N(T)∑_0 < γ≤ Tζ^(n)( 12 + i γ)^k, where N(T) = #{nontrivial zeros of zeta with 0<γ≤ T } =T/2 πlogT/2 π e + O(log T). In fact, we will study the more general problem of moments of mixed derivatives, 1/N(T)∑_0 < γ≤ Tζ^(n_1)( 12 + i γ)…ζ^(n_k)( 12 + i γ). Clearly if n_1 = … = n_k = n, then we recover (<ref>). The model proposed by Keating and Snaith <cit.> is that the characteristic polynomial of a large random unitary matrix can be used to model the value distribution of the Riemann zeta function near a large height T. They used the characteristic polynomial Z(θ) = (I - Ue^-i θ) = ∏_m=1^N (1-e^i(θ_m - θ)), where the θ_m are the eigenangles of an N × N random unitary matrix U, to model ζ (s). The motivation for studying this is that it has been conjectured that the limiting distribution of the non-trivial zeros of ζ (s), on the scale of their mean spacing, is asymptotically the same as that of the eigenangles θ_m of matrices chosen according to Haar measure, in the limit as N →∞. Equating the mean densities yields the connection between matrix size and height up the critical line N = logT/2 π. Averaging over all Haar-distributed N× N unitary matrices, Keating and Snaith <cit.> calculated the moments of |Z(θ)| and found that as N→∞ 𝔼_N [ |Z(θ)|^2k] ∼G^2 (k+1)/G(2k+1) N^k^2 where 𝔼_N denotes expectation with respect to Haar measure, and G(z) is the Barnes G-function. Comparing with all known and conjectured results for continuous moments of zeta, they were lead to conjecture the following result. For any fixed k with (k) > -1/2, 1/T∫_0^T|ζ(12 + it ) |^2k dt ∼ a(k) G^2 (k+1)/G(2k+1)( logT/2π)^k^2, as T →∞, and a(k) = ∏_p prime( 1 - 1/p)^k^2∑_m=0^∞( Γ (m+k)/m! Γ (k))^2 p^-m is an arithmetic factor which is discussed below. The Keating and Snaith result is for a continuous moment, but random matrix theory has also been used to predict discrete moments of ζ (s). Hughes, Keating and O'Connell <cit.> followed a similar approach and calculated the discrete moments of |Z'(θ)| and found that as N→∞ 𝔼_N [ |Z'(θ)|^2k] ∼G^2 (k+2)/G(2k+3) N^k(k+2). As in the continuous case above, motivated by this result they made a conjecture which agreed with all known and previously conjectured results and is believed to hold in general. For any fixed k with (k) > -3/2, 1/N(T)∑_0 < γ≤ T| ζ' ( 12 + i γ) | ^2k∼ a(k) G^2 (k+2)/G(2k+3)( logT/2π)^k(k+2), where a(k) is the same arithmetic factor as in the previous conjecture, given in (<ref>). What wasn't clear at the time was why the same arithmetic factor (<ref>) should appear in these two conjectures. Indeed, the arithmetic factor itself was added in an ad hoc manner after the random matrix theory calculations were computed. This additional factor has been shown to be included naturally in the hybrid model developed by Gonek, Hughes and Keating <cit.> for continuous moments and of Bui, Gonek and Milinovich <cit.> for discrete moments. This hybrid approach gives equal weighting to Z_X, a partial Hadamard product over the zeros of zeta, and P_X, a partial Euler product over the primes. It is stated exactly in Theorem <ref> in Section <ref>. Another strength of the hybrid approach is that it explained exactly why the same factor should appear in both conjectures. The previous two problems where random matrix theory was used both concerned moments of a real function, where an absolute value of zeta was taken. For our problem, we consider the moments of the complex function. In this paper we attack our problem in two ways. First, by direct calculation of the characteristic polynomial, and then by modelling zeta using the hybrid approach. In Section <ref> we will calculate 𝔼_N [ 1/N∑_m=1^N Z (θ_m + α_1)… Z (θ_m + α_k) ] where, as before, 𝔼_N denotes expectation with respect to Haar measure. By Taylor expanding each characteristic polynomial, we will be able to prove the following theorem. For fixed, positive integers n_1,…,n_k, we have 𝔼_N [ 1/N∑_m=1^N Z^(n_1)(θ_m) … Z^(n_k)(θ_m) ] ∼ (-1)^n_1+…+n_k +k i^n_1+…+n_kn_1! … n_k!/(n_1+ … +n_k +1)! N^n_1+ … +n_k, as N →∞. Setting n_1=…=n_k=n gives the following Corollary. For n,k ∈ℕ, as N →∞ 𝔼_N [ 1/N∑_m=1^N Z^(n) (θ_m)^k] ∼ (-1)^k(n +1) i^kn (n!)^k/(k n +1)! N^k n. In order to use this to create a conjecture for zeta, we need to know what the arithmetic term equals. To do that, we take the hybrid approach, discussed in Section <ref>. In Section <ref> (Theorems <ref> and <ref>) we calculate the discrete moments of P_X^(n_1)… P_X^(n_k) and show that they asymptotically vanish unless all the n_j are 0 (that is, no derivatives are taken), in which case the mean is 1. Furthermore, we predict that the discrete moments of Z_X are accurately modelled by the characteristic polynomial, and we will verify this in the case of the first derivative in Theorem <ref> in Section <ref>). That is, we conjecture that there is no arithmetic term at the leading order for these moments of the complex Riemann zeta function. Note that the derivative of Z(θ) is with respect to θ, and this acts like the derivative of ζ(1/2+i t) with respect to t. This is obviously equal to i ζ'(1/2+i t), and so we expect there to be no factors of i for the Riemann zeta function. To be precise, we make the following prediction: For fixed, positive integers n_1,…,n_k, the moments of mixed derivatives of ζ (s), evaluated at the non-trivial zeros of ζ (s), are given by 1/N(T)∑_0 < γ≤ Tζ^(n_1)( 12 + i γ)…ζ^(n_k)( 12 + i γ) ∼ (-1)^n_1+…+n_k +kn_1!… n_k!/(n_1+…+n_k +1)!( logT/2π)^n_1+…+n_k as T →∞. Setting n_1 = … = n_k = n gives the following conjecture. For n,k ∈ℕ, the kth moment of the nth derivative of ζ (s) is given by 1/N(T)∑_0 < γ≤ Tζ^(n)( 12 + i γ)^k∼ (-1)^k(n +1)(n!)^k/(kn +1)!( logT/2π)^k n as T →∞. Very little is rigorously known about these types of results. For k=0 the result is trivial, as the sum is just N(T), defined in equation (<ref>). For k=1, they have been previously studied in relation to Shanks' Conjecture, a full history of which can be found in <cit.>, and the results agree with Conjecture <ref> in this case. These results are for leading order only. In a forthcoming paper <cit.> we use a different method to conjecture the full asymptotic for zeta. These results and conjectures are stated for k a positive integer. However, in the case of the first derivative, when n=1, Section <ref> applies for k ∈ with (k) > -3, and some of our results in Section <ref> (namely Theorem <ref>) also apply for non-integer k. Therefore it is plausible that in the case of the first derivative, Conjecture <ref> also holds for complex k. For (k)>-3 as T →∞ 1/N(T)∑_0 < γ≤ Tζ' ( 12 + i γ)^k∼1/Γ(k +2)( logT/2π)^k . In the case k=-1, this result is already known and can be found as equation (3.9) in a paper of Garaev and Sankaranarayanan, <cit.>. When k=-2, the right hand side of Conjecture <ref> is zero. A separate calculation shows that _N[ 1/N∑_m=1^N Z'(θ_m)^-2] = 0 exactly. For the case of zeta, what we mean is the left hand side is bounded by some error terms with no leading-order asymptotic. An educated guess suggests the error terms will be O(T^-1/2+ϵ). § PROOF OF THEOREM <REF> In this section we prove Theorem <ref> concerning moments of characteristic polynomials. We start with the product representation of the characteristic polynomial of an N × N random unitary matrix Z(θ) in (<ref>), given by Z(θ) = ∏_m=1^N (1-e^i(θ_m - θ)). The Haar probability density of U(N) equals <cit.> dμ_N = 1/N! (2 π)^N∏_1 ≤ j < ℓ≤ N |e^iθ_j - e^iθ_ℓ|^2 dθ_1 … dθ_N. To extract the moments that we wish to calculate, we use shifts and calculate 𝔼_N [ 1/N∑_m=1^N Z (θ_m + α_1)… Z (θ_m + α_k) ] By invariance of Haar measure there are no distinguished eigenvalues, so this expectation equals 𝔼_N [ Z (θ_N + α_1)… Z (θ_N + α_k) ] Since we can Taylor expand assuming the α_j are very small, we obtain _N[ (Z'(θ_N) α_1/1! + Z”(θ_N) α_1^2/2! + … + Z^(n_1)(θ_N) α_1^n_1/n_1! + …) . ×…×. (Z'(θ_N) α_k/1! + … + Z^(n_k)(θ_N) α_k^n_k/n_k! + …) ] , where obviously Z(θ_N)=0. By calculating the α_1^n_1…α_k^n_k-th coefficient, we can find moments of the mixed derivatives of the characteristic polynomial. Writing out the expectation in terms of the Weyl density (<ref>), we have 𝔼_N [ ∏_r=1^k Z (θ_N+α_r)] = 1/N! (2 π)^N_0^2π∏_1 ≤ j < ℓ≤ N |e^iθ_j - e^iθ_ℓ|^2 ∏_m=1^N∏_r=1^k (1-e^iθ_me^-iθ_Ne^-iα_r) dθ_m. We split off the terms in the first product with ℓ=N, and all the terms in the second product with m=N (which can then be pulled out of the integral) to give 1/N! (2 π)^N∏_r=1^k (1-e^-iα_r) _0^2π∏_1 ≤ j < ℓ≤ N-1 |e^iθ_j - e^iθ_ℓ|^2 ×∏_m=1^N-1|e^iθ_m - e^iθ_N|^2 ∏_r=1^k (1-e^iθ_me^-iθ_Ne^-iα_r) dθ_m We may make a change of variables ϕ_m = θ_m - θ_N for m=1,…,N-1 and trivially perform the θ_N integral to obtain 1/N! (2 π)^N-1∏_r=1^k (1-e^-iα_r) _0^2π∏_1 ≤ j < ℓ≤ N-1|e^iϕ_j - e^iϕ_ℓ|^2 ×∏_m=1^N-1 (1-e^iϕ_m)(1-e^-iϕ_m) ∏_r=1^k (1-e^iϕ_me^-iα_r) dϕ_m Notice then that we can rewrite the integrals back in terms of Haar measure for (N-1) × (N-1) unitary matrices, obtaining 1/N∏_r=1^k (1-e^-iα_r) ×_N-1[ ∏_m=1^N-1 (1-e^iϕ_m)(1-e^-iϕ_m) ∏_r=1^k (1-e^iϕ_me^-iα_r) ] = (-1)^(k+1)(N-1)/N∏_r=1^k A_r^-N(A_r - 1) ×_N-1[ ∏_m=1^N-1 e^-i ϕ_m (e^iϕ_m-1)^2 ∏_r=1^k (e^iϕ_m - A_r) ] where we have done some simple algebraic manipulations to put it into a suitable form for later, and written A_r = e^iα_r. We may then use Heine's Lemma <cit.>[Though the identity was first written down in 1939 by Szegő, he gave the credit to Heine (who lived from 1821 to 1881).] to write the expectation as a Toeplitz determinant, and thus we have _N-1[ ∏_m=1^N-1 e^-i ϕ_m (e^iϕ_m-1)^2 ∏_r=1^k (e^iϕ_m - A_r) ] = D_N-1 [f] where D_N-1[f] := |f̂_j-ℓ|_1 ≤ j,ℓ≤ N-1 with f̂_j-ℓ = 1/2π∫_0^2π f(ϑ) e^- (j-ℓ) ϑϑ for the symbol f(z) = z^-1 (z-1)^2 ∏_r=1^k (z - A_r) . Putting this together, we have 𝔼_N [ ∏_r=1^k Z (θ_N+α_r)] = (-1)^(k+1)(N-1)/N∏_r=1^k A_r^-N(A_r - 1) × D_N-1 [f] Basor and Forrester <cit.> evaluate D_N-1[f] with symbols of the form (<ref>) as a (k+2) × (k+2) determinant, D_N-1 [f] = (-1)^(k+1)(N-1)∏_r=1^k+2(-A_r)/∏_1 ≤ j < ℓ≤ k+2 (A_ℓ - A_j) ×[ A_1^N-1 A_1^N A_1^N+1 … A_1^N-1+k (-A_1)^-1; A_2^N-1 A_2^N A_2^N+1 … A_2^N-1+k (-A_2)^-1; A_3^N-1 A_3^N A_3^N+1 … A_3^N-1+k (-A_3)^-1; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; A_k^N-1 A_k^N A_k^N+1 … A_k^N-1+k (-A_k)^-1; A_k+1^N-1 A_k+1^N A_k+1^N+1 … A_k+1^N-1+k (-A_k+1)^-1; A_k+2^N-1 A_k+2^N A_k+2^N+1 … A_k+2^N-1+k (-A_k+2)^-1; ], where we write A_k+1 and A_k+2 for now, but will let them both tend to 1 later. We can manipulate this determinant through elementary row operations. We begin by factoring A_r^N-1 from each row, and factor the -1 from the last column. This determinant becomes D_N-1 [f] = (-1)^N(k+1)∏_r=1^k+2A_r^N/∏_1 ≤ j < ℓ≤ k+2 (A_ℓ - A_j)[ 1 A_1 A_1^2 … A_1^k A_1^-N; 1 A_2 A_2^2 … A_2^k A_2^-N; 1 A_3 A_3^2 … A_3^k A_3^-N; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 1 A_k A_k^2 … A_k^N-1+k A_k^-N; 1 A_k+1 A_k+1^2 … A_k+1^k A_k+1^-N; 1 A_k+2 A_k+2^2 … A_k+2^k A_k+2^-N; ]. Subtract the first row from each other row, and bring the factor of 1/(A_r-A_1) inside the determinant for each row r=2, … k+2 to give D_N-1 [f] = (-1)^N(k+1)∏_r=1^k+2A_r^N/∏_2 ≤ j < ℓ≤ k+2 (A_ℓ - A_j)[ 1 A_1 A_1^2 … A_1^k A_1^-N; 0 1 A_1 + A_2 … ∑_j_1=0^k-1 A_1^j_1 A_2^k-1-j_1 C_2,2; 0 1 A_1 + A_3 … ∑_j_1=0^k-1 A_1^j_1 A_3^k-1-j_1 C_3,2; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 1 A_1 + A_k … ∑_j_1=0^k-1 A_1^j_1 A_k^k-1-j_1 C_k,2; 0 1 A_1 + A_k+1 … ∑_j_1=0^k-1 A_1^j_1 A_k+1^k-1-j_1 C_k+1,2; 0 1 A_1 + A_k+2 … ∑_j_1=0^k-1 A_1^j_1 A_k+2^k-1-j_1 C_k+2,2; ] where in the rth row and mth column we make use of the following identity A_r^m-1-A_1^m-1/A_r-A_1 = ∑_j_1=0^m-2 A_1^j_1 A_r^m-2-j_1 . for m from 2 to k+1, and for the (k+2)nd column we use the identity C_r,2 := A_r^-N-A_1^-N/A_r-A_1 = - ∑_j_1 = 1^N A_1^-j_1 A_r^-(N+1-j_1) . Now subtract the second row from each other row below it and bring the factor of 1/(A_r-A_2) inside the determinant for each row r=3, … k+2 to give D_N-1 [f] as (-1)^N(k+1)∏_r=1^k+2A_r^N/∏_3 ≤ j < ℓ≤ k+2 (A_ℓ - A_j)[ 1 A_1 A_1^2 … A_1^k A_1^-N; 0 1 A_1 + A_2 … ∑_j_1=0^k-1 A_1^j_1 A_2^k-1-j_1 C_2,2; 0 1 1 … ∑_j_1=0^k-2∑_j_2=0^k-2-j_1 A_1^j_1 A_2^j_2 A_3^k-2-j_1-j_2 C_3,3; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 1 1 … ∑_j_1=0^k-2∑_j_2=0^k-2-j_1 A_1^j_1 A_2^j_2 A_k^k-2-j_1-j_2 C_k,3; 0 1 1 … ∑_j_1=0^k-2∑_j_2=0^k-2-j_1 A_1^j_1 A_2^j_2 A_k+1^k-2-j_1-j_2 C_k+1,3; 0 1 1 … ∑_j_1=0^k-2∑_j_2=0^k-2-j_1 A_1^j_1 A_2^j_2 A_k+2^k-2-j_1-j_2 C_k+2,3; ] where we have used (<ref>) for the rth row and mth column for m=3,...,k+1 and where the last column uses the identity (<ref>) with A_2 instead of A_1 with C_r,3 = - ∑_j_1 = 1^N A_1^-j_1( A_r^-(N+1-j_1)-A_2^-(N+1-j_1)/A_r-A_2) = (-1)^2 ∑_j_1 = 1^N∑_j_2 = 1^N+1-j_1 A_1^-j_1 A_2^-j_2 A_r^-(N+2-j_1-j_2). Continuing in this manner, it is clear that D_N-1[f] becomes D_N-1 [f] = (-1)^N(k+1)∏_r=1^k+2A_r^N ×[ 1 A_1 A_1^2 … A_1^k A_1^-N; 0 1 A_1 + A_2 … ∑_j_1=0^k-1 A_1^j_1 A_2^k-1-j_1 C_2,2; 0 0 1 … ∑_j_1=0^k-2∑_j_2=0^k-2-j_1 A_1^j_1 A_2^j_2 A_3^k-2-j_1-j_2 C_3,3; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 0 … A_1 + A_2 + … + A_k C_k,k; 0 0 0 … 1 C_k+1,k+1; 0 0 0 … 0 C_k+2,k+2; ], that is, we have an upper-triangular matrix with 1's on the diagonal and zeros underneath. This means that the determinant equals the bottom right-most element, which is C_k+2,k+2 = (-1)^(k+1)∑_j_1 = 1^N∑_j_2 = 1^N+1-j_1…∑_j_k = 1^N+k-1-j_1-… - j_k-1 A_1^-j_1… A_k^-j_k ×∑_j_k+1 = 1^N+k-j_1 - … - j_k A_k+1^j_k+1 A_k+2^-(N+k+2 - j_1 - … - j_k+1) We now combine D_N-1[f] with the other terms in (<ref>), and let A_k+1, A_k+2→ 1. This gives 𝔼_N [ ∏_r=1^k Z (θ_N+α_r)] = 1/N∏_r=1^k(A_r - 1) ∑_j_1 = 1^N∑_j_2 = 1^N+1-j_1…∑_j_k = 1^N+k-1-j_1-… - j_k-1 A_1^-j_1… A_k^-j_k∑_j_k+1 = 1^N+k-j_1 - … - j_k 1. Re-indexing the summations to start from 0 instead of 1 and evaluating the last sum, we have 𝔼_N [ ∏_r=1^k Z (θ_N+α_r)] = 1/N∏_r=1^k A_r^-1(A_r - 1) ∑_j_1 = 0^N-1∑_j_2 = 0^N-1-j_1…∑_j_k = 0^N-1-j_1-… - j_k-1 A_1^-j_1… A_k^-j_k (N-j_1 - … - j_k). Recall that the coefficient of α_1^n_1…α_k^n_k, which we denote by [α_1^n_1…α_k^n_k], from (<ref>) will yield the desired mixed derivative, and we wish to find this to leading order as N→∞. It is clear that to do this, we want to obtain the highest possible power of N from the summations as they depend upon N, and the smallest possible contribution from the product ∏_r=1^k A_r^-1(A_r - 1) as there is no N dependence in that product. Recalling that A_r = e^iα_r and Taylor expanding gives ∏_r=1^k A_r^-1(A_r - 1) = i^k α_1 …α_k + higher order terms. Therefore we need to find the large N behaviour of [α_1^n_1-1…α_k^n_k-1] from the summations in (<ref>), that is from ∑_j_1 = 0^N-1∑_j_2 = 0^N-1-j_1…∑_j_k = 0^N-1-j_1-… - j_k-1 e^-i j_1 α_1… e^-i j_k α_k (N-j_1 - … - j_k) . The coefficient of α_1^n_1-1 in the Taylor expansion of e^-i j_1 α_1 equals (-i j_1)^n_1-1 / (n_1-1)!, so the coefficient [α_1^n_1-1...α_k^n_k-1] equals ∑_j_1 = 0^N-1…∑_j_k = 0^N-1-j_1-… - j_k-1(-i j_1)^n_1-1/(n_1-1)!…(-i j_k)^n_k-1/(n_k-1)! (N-j_1 - … - j_k) = (-i)^n_1 + … + n_k -k/(n_1-1)! … (n_k-1)!∑_j_1 = 0^N-1…∑_j_k = 0^N-1-j_1-… - j_k-1 j_1^n_1-1… j_k^n_k-1 (N-j_1 - … - j_k). To evaluate the leading order coefficient of these sums, we will turn them into Riemann sums and hence into an integral. The sums equal N^n_1 + … + n_k +1∑_j_1 = 0^N-1∑_j_2 = 0^N-1-j_1…∑_j_k = 0^N-1-j_1-… - j_k-11/N^k(j_1/N)^n_1-1…(j_k/N)^n_k-1(1-j_1/N - … - j_k/N) and now each summation can be considered as a Riemann sum and so we can rewrite the summation in the above line as the k-fold integral ∫_0^1∫_0^1-x_1…∫_0^1-x_1 - … - x_k-1 x_1^n_1 -1… x_k^n_k-1 (1-x_1 - … - x_k) dx_1 dx_2 … dx_k. By Theorem 1.8.6 in <cit.>, this integral equals (n_1-1)! … (n_k-1)!/ (n_1 + … + n_k+1)! and so we have [α_1^n_1-1...α_k^n_k-1] ∼(-i)^n_1 + … + n_k -k/(n_1-1)! … (n_k-1)!(n_1-1)!… (n_k-1)!/(n_1 + … + n_k + 1)! N^n_1 + … + n_k + 1 Combining (<ref>) with (<ref>) and the term 1/N in (<ref>), we obtain the coefficient [α_1^n_1...α_k^n_k] of 𝔼_N [ Z (θ_N+α_1) … Z(θ_N+α_k)] is asymptotic to (-1)^n_1 + … + n_k + k i^n_1 + … + n_k/(n_1 + … + n_k + 1)! N^n_1 + … + n_k . Recalling that the coefficient of α_r^n_r in the Taylor expansion of Z (θ_N+α_r) equals Z^(n_r)(θ_N)/n_r!, we see that 𝔼_N [ Z^(n_1) (θ_N) … Z^(n_k) (θ_N) ] ∼ (-1)^n_1 + … + n_k + k i^n_1 + … + n_kn_1! … n_k!/(n_1 + … + n_k + 1)! N^n_1 + … + n_k, and this completes the proof of Theorem <ref>. § THE HYBRID FORMULA In 2007 Gonek, Hughes and Keating <cit.>, proved a hybrid formula for the zeta function, expressing it as a partial product over primes times a rapidly decaying product over non-trivial zeros. We state a result with slightly more control over the smoothing (in <cit.> and other later papers, X=Y is taken). Let s=σ+ t with σ≥ 0 and |t|≥ 2, let X,Y ≥ 2 be real parameters, and let K be any fixed positive integer. Let f(x) be a nonnegative C^∞ function of mass 1 supported on [0, 1] and set u(x)=Y f(Ylogx/e+1)/x. Thus, u(x) is a function of mass 1 supported on [e^1-1/Y, e]. Set U(z) = ∫_1^e u(y) E_1(zlog y) y , where E_1(z) is the exponential integral ∫_z^∞ e^-w/w w. Then ζ(s) = P_X(s) Z_X(s) (1+ O(Y^K+1 X^max(1-σ,0)/(|s| log X)^K) + O(X^1-σlog X/Y) ) , where P_X(s) = exp(∑_n ≤ XΛ(n)/log n1/n^s) , Λ(n) is von Mangoldt's function, and Z_X(s) = exp(-∑_ρ_nU((s-ρ_n)log X)) . The constants implied by the O terms depend only on f and K. The proof of this result follows precisely the method in <cit.>, with obvious changes for the different smoothing. Note that U(z) is not an analytic function; it has a logarithmic singularity at z=0. However, recall the formula E_1(z) = - log z -γ - ∑_m=1^∞(-1)^mz^m/m! m , where | z| < π, log z denotes the principal branch of the logarithm, and γ is Euler's constant. (It is clear that the sum is an entire function of z since it is absolutely convergence for all z ∈ℂ). From this and (<ref>) we observe that we may interpret exp(-U(z)) to be an analytic function, asymptotic to C z as z→ 0, for some constant C which depends upon the smoothing function u. Just as in work of Bui, Gonek and Milinovich <cit.>, we can differentiate whilst still obtaining the asymptotic, obtaining under the Riemann Hypothesis ζ'(ρ) = P_X(ρ) Z_X'(ρ) (1+O(Y^K+1 X^1/2/(|ρ| log X)^K) + O(X^1/2log X/Y) ) It is believed that P_X and Z_X operate pseudo-independently. Hence the moments of zeta are products of moments of P_X and Z_X. This is known as the Splitting Conjecture. In the next section we will evaluate the moments of mixed derivatives of P_X, when averaged over zeros of zeta. We will show that the moments only really contribute when no derivatives are taken. In the last section we will model the moments of the first derivative of Z_X when averaged over zeros of zeta, using a method that allows non-integer moments to be taken. Taken collectively, these results give weight to Conjectures <ref> and <ref>. § THE MOMENTS OF P_X In this section we will show that the mixed moments of P_X vanish if any derivatives are taken, whilst the mean of P_X^k is asymptotically 1, when averaged over zeros of the zeta function. Under the Riemann Hypothesis, for X>2 with X=O(log T), for any non-negative integers n_1,…,n_k we have, ∑_0 < γ≤ T P_X^(n_1)(ρ) … P_X^(n_k)(ρ) = N(T) + O(Tloglog T) if n_1=…=n_k = 0 O(T(loglog T)^1+n_1) if n_1>0 and n_2=…=n_k = 0 O(T ) if more than one n_i>0 as T→∞, where ρ=12+iγ is a non-trivial zero of the Riemann zeta function, P_X(s) is given in (<ref>) and N(T) is the number of such zeros up to height T, given by (<ref>). In the case that no derivatives are taken, one can permit k to be real and positive. Under the Riemann Hypothesis, for X>2 with X=O(log T), for k a fixed positive real number, ∑_0 < γ≤ T P_X(ρ)^k = N(T) + O(Tloglog T) as T→∞. This should be compared to a result of Bui, Gonek and Milinovich (Theorem 2.2 of <cit.>) which stated that, under the Riemann Hypothesis, for ϵ > 0 and X, T →∞ with X=O( (log T)^2-ϵ) then for any k∈ one has 1/N(T)∑_0 < γ≤ T| P_X(ρ)|^2k = a(k) (e^γ_0log X)^k^2(1+O_k((log X)^-1) ) where a(k) is an explicit product over primes given by (<ref>). Before we prove Theorem <ref>, we need to state and prove some preliminary lemmas. For k a fixed positive real number, the function P_X(s)^k can be written as a non-vanishing absolutely convergent Dirichlet series P_X(s)^k = ∑_m=1^∞a_k(m)/m^s with 0≤ a_k(m) ≤ d_k(m) where d_k(m) is the kth divisor function. Since P_X(s) given in (<ref>) is the exponential of a finite Dirichlet polynomial, so its kth power is an entire non-vanishing function, so can be written as a Dirichlet series P_X(s)^k = exp(k ∑_n≤ XΛ(n) /log n1/n^s) = ∑_m=1^∞a_k(m)/m^s where the sum converges absolutely for any s, and a_k(m)=0 if m has any prime divisor greater than X. If p≤ X is a prime, then a_k(p)=k. Since these coefficients come from the exponential of positive terms, the a_k(m) are positive. In the case when m≤ X, these coefficients will equal d_k(m) and in the case when m>X they will be less than d_k(m) due to missing contributions from terms with n>X in the exponent. In the proof of the two theorems, it will be important to truncate the infinite sum. For 2 ≤ X ≪ (log T)^2-ϵ with ϵ>0, and k a fixed positive real number, and θ a small positive number, we have P_X(12 + t)^k = ∑_m ≤ T^θa_k(m)/m^1/2 + t + O_k(T^-ϵθ / 2) This follows immediately from the proof of Lemma 2 in <cit.>. In <cit.> Gonek proved unconditionally a uniform version of Landau's formula concerning sums over the non-trivial zeros of the Riemann zeta function, including the following corollary which requires the assumption of the Riemann Hypothesis. Under the Riemann Hypothesis, for T>1, m ∈ℕ with m≥ 2 and ρ a non-trivial zero of the Riemann zeta function ζ (s), ∑_0< γ≤ T m^-ρ = - T/2 πΛ (m)/m + O(log (2mT) loglog (3m)), where Λ (m) is the von Mangoldt function. Note this result does not apply in the case when m=1, when the sum over zeros trivially equals N(T). To differentiate P_X(σ) j times, we return to the exponential form (<ref>). The Faà di Bruno formula on multiple derivatives of e^f states d^j/d σ^j e^f(σ) = ( f^(j) + … + (f')^j ) e^f where the sum ranges over all possible combinations of derivatives of f so that total derivative is j, with certain multinomial coefficients that don't matter for us. In our case, one can check that the longest sum comes from the first derivative raised to the jth power, essentially giving a sum of length X^j. That is P_X^(j)(s) = ∑_n≤ X^jβ̃_j(n)/n^sexp( ∑_n≤ XΛ(n)/log n1/n^s) for coefficients β̃_j(n) that come from appropriate combinations of derivatives of the sum in the exponential, whose precise nature does not concern us, other than noting that β̃_j(1)= 0 if j≠ 0 1 if j=0 and for prime p ≤ X, β̃_j(p)= (-log p)^j if j ≠ 0 0 if j=0 (this comes from differentiating the sum in the exponential j times; other terms in the Faà di Bruno formula contribute to coefficients of higher powers of p). If n is divisible by any prime greater than X, β̃_j(n) = 0. Combining k of these expressions together (replacing j with derivatives n_1, n_2, …, n_k, we have P_X^(n_1)(s) … P_X^(n_k)(s) = ∑_n≤ X^n_1+n_2+…+n_kβ(n)/n^s P_X(s)^k where β(n) = (β̃_n_1∗β̃_n_2∗…∗β̃_n_k )(n) is the convolution of the appropriate β̃_j. In the case when (s)=1/2 and X=(log T)^2-ϵ, using Lemma <ref> we have P_X^(n_1)(s) … P_X^(n_k)(s) = ∑_n≤ X^n_1+n_2+…+n_kβ(n)/n^s(∑_m ≤ T^θa_k(m)/m^s + O_k(T^-ϵθ / 2)) = ∑_m≤ T^θ'b(m)/m^s + O(T^-ϵ') for some ϵ'>0 and θ'>θ, which both depend upon ϵ, k, θ, and n_1,…,n_k. Essentially, since the first sum's length is only a power of log T, the two sums together are truncated at T^θ' for some θ'>θ, as T gets large. Shortly we will use some facts about the b(m), which we record here: * Only if n_1=n_2=…=n_k=0 does b(1)=1, otherwise b(1)=0 (essentially because β̃_j(1)=0 for any j≠0 since differentiation kills constants). * If n_1=n_2=…=n_k=0, then b(p)=a_k(p) and for prime p≤ X, a_k(p)=k. * If exactly all bar one of the n_i vanish (say n_1≠ 0, and n_2=n_3=…=n_k=0) then b(p) = β(p) = β̃_n_1(p) = (-log p)^n_1. To see this, note that β̃_n_1(1)=0, which kills all other contributions in the convolution other than β̃_n_1(p). * If more than one n_i≠ 0, then b(p)=0 since there will be at least two pieces in the convolution that have β̃_n_1(1)=0 and β̃_n_2(1)=0, so every term vanishes. Assuming the Riemann Hypothesis, we can sum (<ref>) over the zeros of zeta, obtaining ∑_0<γ≤ T P_X^(n_1)(ρ) … P_X^(n_k)(ρ) = b(1) N(T) + ∑_0<γ≤ T∑_2≤ m≤ T^θ'b(m)/m^ρ + O(N(T) T^-ϵ') and applying Lemma <ref> to the middle sum yields ∑_0<γ≤ T∑_2≤ m≤ T^θb(m)/m^ρ = - T/2 π∑_2≤ m ≤ T^θ' b(m) Λ (m)/m + ∑_2≤ m ≤ T^θ b(m) O(log (2mT) loglog (3m)). To bound the first term on the RHS of (<ref>), note that the Λ(m) forces the sum to be over primes and prime powers only, and the b(m) ensures we only need primes ≤ X. Since b(m) ≪ m^ϵ, the square- and higher-powers of primes form a convergent sum, so - T/2 π∑_2≤ m ≤ T^θ' b(m) Λ (m)/m = O( T ∑_p≤ X p primeb(p) log p /p) + O(T) The value of b(p) depends on how many n_i=0 b(p) = k if n_1=…=n_k=0 (- log p)^n_1 if n_1>0 and n_i=0 otherwise 0 if more than one n_i>0. Hence, summing over all primes up to X and using the Prime Number Theorem, ∑_p≤ xb(p) log p /p≪log X if n_1=…=n_k=0 (log X)^1+n_1 if n_1>0 and n_i=0 otherwise 0 if more than one n_i>0. For the second error term in (<ref>) since the b(m) all have the same sign (for any given fixed n_1, …, n_k), we have this equals O(log T loglog T ∑_m=1^∞ b(m) ) By (<ref>) and (<ref>) the sum is simply ∑_m=1^∞ b(m) = ∑_n≤ X^n_1+n_2+…+n_kβ(n) P_X(0)^k To bound the first sum on the right-hand side, recall that β(n)≪ n^ϵ, and to bound the P_X(0)^k note that since log P_X(0) = ∑_n≤ XΛ(n) /log n = X/log X + O(X/(log X)^2) by the Prime Number Theorem we have ∑_m=1^∞ b(m) = O ( exp(c' X /log X) ) for any fixed c'>k. Plugging these two error terms back into (<ref>), we have shown ∑_0<γ≤ T∑_2≤ m≤ T^θb(m)/m^ρ = O(T (log X)^1+n_1) + O(T) + O (log T loglog T exp(c' X /log X) ) where the first error term is only present if n_2=…=n_k=0. To complete the proof we now choose X to balance the various error terms that appear when we substitute this into (<ref>). If we insist X=O(log T) then we have ∑_0 < γ < T P_X^(n_1)(ρ) … P_X^(n_k)(ρ) = T/2πlog T + O(T(loglog T)) in the case when n_1=…=n_k=0, ∑_0 < γ < T P_X^(n_1)(ρ) … P_X^(n_k)(ρ) = O(T(loglog T)^1+n_1) if n_1>0 and n_2=…=n_k=0, and ∑_0 < γ < T P_X^(n_1)(ρ) … P_X^(n_k)(ρ) = O(T) if more than one n_i>0. This completes the proof of Theorem <ref>. In the case when no derivatives are taken, we can exploit the multiplicative nature of P_X to allow k to be a fixed real number, not just an integer. Much of the proof goes through unchanged. By Lemma <ref> and Lemma <ref> we have ∑_0<γ≤ T P_X(ρ)^k = N(T) + ∑_0<γ≤ T∑_2≤ m≤ T^θa_k(m)/m^ρ + O(N(T) T^-ϵ') = N(T) - T/2 π∑_2≤ m ≤ T^θ a_k(m) Λ (m)/m + O(log T loglog T ∑_2≤ m ≤ T^θ a_k(m) ) since a_k(1)=1. As in the case of Theorem <ref>, assuming X<T^θ we can bound ∑_2≤ m ≤ T^θ a_k(m) Λ (m)/m = ∑_p < X k log p/p + O(1) ≪log X and, exactly as before, ∑_m=1^∞ a_k(m) = P_X(0)^k ≪exp(c' X /log X) for any fixed c'>k. Therefore ∑_0<γ≤ T P_X(ρ)^k = N(T) + O(T log X) + O(log T loglog T exp(c' X /log X) ) As before, the error terms can be made to approximately match when X=O(log T). In the previous proofs we can take X to be anything up to o(log T loglog T) before the second error term dominates the main term of 1/2π Tlog T. Since the error term as we have written it comes from a subsidiary main term, we cannot reduce the error down to O(T). § THE MOMENTS OF Z_X' In this section we will concentrate on the random matrix equivalent for Z_X, given in (<ref>), and for simplicity just look at its first derivative. Again in this section, we will not require k to be an integer. Taking an N× N unitary matrix with eigenvalues e^iθ_1, …, e^iθ_N, we wish to calculate _N[Z_N,X'(θ_N)^k] where Z_N,X(θ) = ∏_m=1^Nexp(-∑_j=-∞^∞ U((θ-θ_m + 2π j)log X)) plays the role of Z_X(s) in Theorem <ref>, and where U is given in (<ref>). The sum over j makes the arguments 2π-periodic, thus permitting the move from e^iθ to θ without any branch-cut ambiguities. The decay of U for large inputs means that the infinite sum over j is absolutely summable. If k ∈ is such that k ∉{-3,-4,-5,…} then _N[1/N∑_m=1^N Z_N,X'(θ_m)^k] ∼e^ k π/2/Γ(k+2) N^k as N→∞. The proof deals with the analytic continuation of the objects under consideration. However, for our purposes as a model for zeta, we must stop before the first singularity; that is, in Conjecture <ref> we take (k)>-3. Restricting to k∈ℕ, we see this Theorem has the same right hand side as Corollary <ref> in the case n=1. That is, Z'_N,X and the first derivative of the characteristic polynomial asymptotically have the same moments. It is convenient to factor out the behaviour around the eigenvalues (where Z_N,X vanishes) as follows Z_N,X(θ) = ∏_m=1^N (1-e^-(θ-θ_m)) e^F_X(θ-θ_m) where F_X(ϑ) = - log(1-e^-ϑ) -∑_j=-∞^∞ U((ϑ + 2π j)log X) is a 2π-periodic function which is continuously differentiable for all real ϑ — the logarithmic singularities cancel out. The first statement is obvious. The second statement can be seen from the definition of U(z) given in (<ref>) and the series expansion of E_1(z) around z=0 given in (<ref>) (and indeed, this fact will be made clear when we look at the decay of its Fourier coefficients in Lemma <ref>). Differentiating, Z_N,X'(θ) = ∑_n=1^N e^-(θ-θ_n) e^F_X(θ-θ_n)∏_m=1 m ≠ n^N (1-e^-(θ-θ_m)) e^F_X(θ-θ_m) + ∑_n=1^N (1-e^-(θ-θ_n)) F_X'(θ-θ_n) e^F_X(θ-θ_n)∏_m=1 m ≠ n^N (1-e^-(θ-θ_m)) e^F_X(θ-θ_m) and substituting θ=θ_N, we see that for every n ≠ N there is a term in the product that vanishes (so only the n=N terms survives), and every term in the second term vanishes, including the n=N term. That is, Z_N,X'(θ_N) = e^F_X(0)∏_m=1^N-1(1-e^-(θ_N-θ_m)) e^F_X(θ_N-θ_m) There is nothing special about picking θ_N over any of the other eigenvalues. This is made for notational convenience. The true calculation would be to evaluate 1/N∑_m=1^N Z_N,X'(θ_m) similar to what we do in Section <ref>, but by rotation invariance the result will be the same. We now calculate the expected value of Z_N,X'(θ_N)^k for k∈ when averaged over all N× N unitary matrices distributed according to Haar measure, employing a trick found in <cit.>. _N[ Z_N,X'(θ_N)^k ] = e^ k π / 2 e^k F_X(0)_N[ ∏_m=1^N-1(1-e^-(θ_N-θ_m))^k e^k F_X(θ_N-θ_m)] = e^ k π / 2 e^k F_X(0)1/N_N-1[ ∏_m=1^N-1|1-e^ϑ_m|^2 (1-e^ϑ_m)^k e^k F_X(-ϑ_m)] where we interpret e^ϑ_m as the eigenvalues of an (N-1) × (N-1) unitary matrix. (Briefly, what we have done is write the first expectation out as a N-dimensional Weyl integral, then change variables to ϑ_m = θ_m - θ_N for m=1,…,N-1. Making use of the fact the integrand is 2π-periodic, we turn it back into a (N-1)-dimension Weyl integral, plus an extra trivial integral over θ_N). Because k is complex, we need to be precise about the choice of branch taken. We define (1-e^ϑ)^k to be the value obtained from continuous variation of (1-e^ϑe^-ϵ)^k for ϵ>0, converging to the value 1 as ϵ→∞. As in Section <ref>, the inner expectation can be calculated by Heine's identity to be equal to the Toeplitz determinant D_N-1[f] := |f̂_j-ℓ|_1 ≤ j,ℓ≤ N-1 with symbol f(ϑ) = |1-e^ϑ|^2 (1-e^ϑ)^k e^k F_X(-ϑ) Note that f(ϑ) has only one singularity at ϑ=0. Toeplitz determinants with such symbols have been calculated asymptotically by Ehrhardt and Silbermann in <cit.> (building on extensive previous work by several other authors). They show (in their Theorem 2.5) that if b(ϑ) is a suitably smooth (defined in their paper) 2π-periodic function with winding number zero and whose logarithm has the Fourier series expansion log b(ϑ) = ∑_m=-∞^∞ s_m e^ m ϑ then for symbols of the form f(ϑ) = b(ϑ) (1-e^ϑ)^γ(1-e^-ϑ)^δ subject to γ+δ∉{-1,-2,-3,…}, then for any ϵ>0 D_N-1[f] = C_1 N^γδ C_2^N-1(1+ O(1/N^1-ϵ) ) where C_1 = G(1+γ) G(1+δ)/G(1+γ+δ)exp(∑_m=1^∞ m s_m s_-m) exp(-δ∑_m=1^∞ s_m ) exp(-γ∑_m=1^∞ s_-m) where G is the Barnes G-function, and where C_2 = exp(s_0) . In our case, γ=k+1 and δ=1 and b(ϑ) = exp(k F_X(-ϑ)). To complete the proof, we simply need to calculate the Fourier coefficients of k F_X(-ϑ) (those coefficients being the desired s_m). We will, in fact, delay the exact calculation of s_m to Lemma <ref> to quickly complete the proof of the theorem. All we need from the lemma is that s_m=0 if m ≤ 0. From (<ref>) and using Ehrhardt and Silbermann's formula above, with the values γ=k+1 and δ=1, we have _N[ Z_N,X'(θ_N)^k ] ∼ e^ kπ/2 e^k F_X(0)1/NG(k+2) G(2)/G(k+3)exp(-∑_m=1^∞ s_m) N^k+1 Finally, note that G(2)=1, G(k+3) = Γ(k+2) G(k+2) and e^k F_X(0) = exp(∑_m=-∞^∞ s_m ) = exp(∑_m=1^∞ s_m ) (the first equality following from setting ϑ=0 in the Fourier series expansion; the second equality comes from knowing s_m=0 for m≤ 0). Therefore, _N[ Z_N,X'(θ_N)^k ] ∼e^ k π/2/Γ(k+2) N^k as N→∞, as required. Let F_X(ϑ) be defined in (<ref>), then k F_X(-ϑ) = ∑_m=-∞^∞ s_m e^ m ϑ where s_m = 0 if m ≤ 0 k/m(1 - ∫_1^exp(m/log X) u(y) y ) if 1 ≤ m < log X 0 if m≥log X Since u has total mass 1, for m>0 we can write 1 - ∫_1^exp(m/log X) u(y) y = ∫_exp(m/log X)^∞ u(y) y although due to the support condition on u, if m≥log X then exp(m/log X)>e and so the integral vanishes. Note that s_m = 1/2π∫_-π^π k F_X(-ϑ) e^- m ϑϑ . From (<ref>) we have k F_X(-ϑ) = - klog(1-e^ϑ) -k ∑_j=-∞^∞ U((-ϑ + 2π j)log X) . For m an integer, we have 1/2π∫_-π^π -k log(1-e^ϑ) e^- m ϑϑ = 0 if m ≤ 0 k/m if m ≥ 1 (To non-rigorously see why, note that log(1-e^ϑ) = -∑_ℓ=1^∞ e^ℓϑ/ℓ and the only term in the sum that survives the integration is when ℓ=m for positive integers m). Furthermore, we have 1/2π∫_-π^π -k ∑_j=-∞^∞ U((-ϑ+2π j) log X ) e^- m ϑϑ = -k/2π∫_-∞^∞ U(-ϑlog X) e^- m ϑϑ = -k/2π∫_-∞^∞∫_1^e u(y) E_1(-ϑlog Xlog y) e^- m ϑ y ϑ where the final equality comes from inserting the definition of U from (<ref>) and the support of u. Swapping the order of integration, this is -k/2π∫_1^e u(y) ∫_-∞^∞ E_1(-ϑlog Xlog y) e^- m ϑ ϑ y . The Fourier transform of the exponential integral is 1/2π∫_-∞^∞ E_1(-ϑlog Xlog y) e^- m ϑ ϑ = 0 if m<log X log y 1/2m if m = log X log y 1/m if m > log X log y Therefore, making use of the fact that u is supported on [1,e] and has total mass 1, we have -k/2π∫_1^e u(y) ∫_-∞^∞ E_1(-ϑlog Xlog y) e^- m ϑ ϑ y = -k/m∫_1^max(1,exp(m/log X)) u(y) y = 0 if m ≤ 0 -k/m∫_1^exp(m/log X) u(y) y if 1 ≤ m < log X -k/m if m ≥log X The Fourier coefficient, s_m is the sum of this and (<ref>). Note that both terms are zero for m≤ 0 and the two terms perfectly cancel each other if m≥log X. amsplain
http://arxiv.org/abs/2406.09011v1
20240613113216
Finite-temperature properties of antiferroelectric perovskite $\rm PbZrO_3$ from deep learning interatomic potential
[ "Huazhang Zhang", "Hao-Cheng Thong", "Louis Bastogne", "Churen Gui", "Xu He", "Philippe Ghosez" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
[Corresponding author: ]hzhang@uliege.be Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium Department of Physics, School of Science, Wuhan University of Technology, Wuhan 430070, People’s Republic of China Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, People's Republic of China Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium Key Laboratory of Quantum Materials and Devices of Ministry of Education, School of Physics, Southeast University, Nanjing 211189, People’s Republic of China Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium [Corresponding author: ]philippe.ghosez@uliege.be Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium § ABSTRACT The prototypical antiferroelectric perovskite PbZrO_3 (PZO) has garnered considerable attentions in recent years due to its significance in technological applications and fundamental research. Many unresolved issues in PZO are associated with large spatial and time scales, as well as finite temperatures, presenting significant challenges for first-principles density functional theory studies. Here, we introduce a deep learning interatomic potential of PZO, enabling investigation of finite-temperature properties through large-scale atomistic simulations. Trained using an elaborately designed dataset, the model successfully reproduces a large number of phases, in particular, the recently discovered 80-atom antiferroelectric Pnam phase and ferrielectric Ima2 phase, providing precise predictions for their structural and dynamical properties. Using this model, we investigated phase transitions of multiple phases, including Pbam/Pnam, Ima2 and R3c, which show high similarity to the experimental observation. Our simulation results also highlight the crucial role of free-energy in determining the low-temperature phase of PZO, reconciling the apparent contradiction: Pbam is the most commonly observed phase in experiments, while theoretical calculations predict other phases exhibiting even lower energy. Furthermore, in the temperature range where the Pbam phase is thermodynamically stable, typical double polarization hysteresis loops for antiferroelectrics were obtained, along with a detailed elucidation of the dynamical evolution upon electric-field induced transitions between the non-polar Pbam and polar R3c phases. Finite-temperature properties of antiferroelectric perovskite PbZrO_3 from deep learning interatomic potential Philippe Ghosez June 17, 2024 ============================================================================================================== § INTRODUCTION Antiferroelectric (AFE) materials refer to a class of crystals that typically possesses local antiparallel dipoles, which forms a macroscopic non-polar state, but can be turned into a polar state under the application of an electric field <cit.>. The electric-field induced non-polar to polar switching gives rise to the peculiar double polarization-versus-electric field (P-E) hysteresis loop, which provides antiferrielectrics a series of functional properties that are highly appealing for applications <cit.>. Lead zirconate (PbZrO_3, PZO) was historically the first discovered antiferrielectric material, right after the concept proposed by Kittel in the early 1950s <cit.>. PZO has received considerable attention since then, primarily driven by the strong demand of applications, including energy storage, electromechanical actuation, electrocaloric effects, thermal switching, etc <cit.>. However, many fundamental properties of PZO have remained not fully understood, particularly those related to large spatial and time scales and finite temperatures, e.g., the true ground state and the temperature-dependent phase diagram. According to the experimental observations, the ground state of PZO was found to be a 40-atom antiferroelectric Pbam phase (Pbam-AFE40) <cit.>, with a transition to the high-temperature cubic phase occurring at T_ C≈ 505 K <cit.>. Some studies have also suggested a controversial intermediate phase within a narrow temperature range around T_ C <cit.>. Recently, the traditional viewpoint regarding the Pbam ground state of PZO has even been challenged by theoretical discoveries of a ferrielectric Ima2 phase (Ima2-FiE) <cit.>, and an 80-atom antiferroelectric Pnam phase (Pnam-AFE80) <cit.>, which were found to exhibit lower energies than that of the Pbam phase. The physical properties of these newly predicted phases and their stability against temperature are yet to be clarified. While recent experimental observations have revealed the existence of Ima2-like structures locally in PZO <cit.>, and the regions of which could be further coarsened under epitaxial compressive strains <cit.>, the possibility of the existence of the Ima2-FiE phase at a macroscopic scale remains unclear. This raises a specific question: how to reconcile the apparent contradiction between these two aspects, where theoretical predictions suggest a lower energy for Ima2, but experimental observations more commonly identify Pbam as the low-temperature structure of PZO? Another aspect yet to be fully understood in PZO concerns its electric-field induced nonpolar-polar transitions, which constitute the conceptual foundation for PZO being AFE and enable most of the functionalities of PZO-based materials. From theoretical calculation, polar R3c and Ima2-FiE are found to be energetically very close to the non-polar antiferroelectric Pbam-AFE40 phase (with energy differences around 1 meV/f.u. <cit.>), and thus become possible polar phases induced by electric field. Experimentally, however, a non-polar to polar phase transition in PZO bulk samples (crystal or ceramic) is hardly reported since the electric field required is usually higher than the breakdown field <cit.>. Thus, most experimental studies on the electric-field induced phase transitions were conducted on PZO films <cit.> or PZO-based solid solutions <cit.>. However, the influences from strains, surfaces and foreign elements in films and solid solutions are usually significant and cannot be ignored. Given the fundamental importance of nonpolar-polar transitions in pure PZO and the experimental challenges with bulk materials, it is necessary and potentially feasible to study the electric-field induced transitions employing reliable theoretical methods. The outcomes would also be beneficial to the understanding and optimization of the performance of PZO-based materials and facilitate their applications. The aforementioned issues in PZO could be resolved through theoretical investigations relying on a large configuration system, and its dynamical response to the external fields such as temperature and electric field. Although first-principles density function theory (DFT) calculation has been a successful atomistic simulation method for investigating PZO, it is computationally expensive for such dynamical simulation, especially for a system with more than hundreds of atoms. Molecular dynamics (MD) or Monte-Carlo simulations using first-principles-based effective interatomic potentials have become an appealing solution, i.e. following the so-called second-principles approach <cit.>. Relevant efforts include the development of effective Hamiltonian and shell models for PZO <cit.>. However, the investigation of a system like PZO with a complex energy surface remains challenging, resulting poor prediction of certain crucial structures. In recent years, machine learning methods have been introduced for constructing interatomic potentials, offering new solutions for highly accurate large-scale simulations that consider all atomistic degrees of freedom <cit.>. Particularly, the deep potential for molecular dynamics (DeePMD) has attracted significant attention <cit.>. Thanks to the development of the open-source DeePMD-kit software, this method is gaining increasing popularity in studies across a wide variety of systems, including water, metals, oxides, organic molecules, etc <cit.>. For ferroelectric compounds, the deep potential approach has been successfully applied to SrTiO_3 <cit.>, KNbO_3 <cit.>, PbTiO_3 <cit.>, Pb(Zr, Ti)O_3 <cit.>, HfO_2 <cit.>, PbTe <cit.>, In_2Se_3 <cit.>, bi-layered BN <cit.>, etc. Recently, a modular development scheme for deep potential in complex solid solutions was proposed <cit.>, and a universal model for perovskite oxides involving 14 metal elements was presented <cit.>. Regarding the antiferroelectric PZO, despite its highly complex potential energy surface, given the success of previous applications, employing the deep potential method appears very promising for developing a highly accurate interatomic potential. In this work, we developed a deep learning interatomic potential of PZO for finite-temperature simulation. The model, which is trained with an elaborately designed training set, can accurately reproduce numerous phases in PZO and provide precise predictions on their properties. Notably, the model successfully captures the recently discovered Pnam-AFE80 and Ima2-FiE phases. The model is thoroughly validated and utilized for investigating the finite-temperature properties through molecular dynamic simulations. The simulations successfully reproduce the temperature-dependent phase transitions and double P-E loops, showing good agreement with experiments. Additionally, the simulations also provide insights into why PZO tends to adopt the Pbam-AFE40 as its room-temperature phase, rather than other low energy phases like Ima2-FiE. We prospect that this model will serve as a useful tool for studying PZO, offering opportunities for new insights into this fascinating material. § METHODOLOGY §.§ First-principles calculations The first-principles DFT calculations were performed using the Abinit software <cit.>, employing the GGA-PBEsol functional <cit.> and a planewave-pseudopotential approach with optimized norm-conserving pseudopotentials from the PseudoDojo server <cit.>. The energy cutoff for the plane-wave expansion was 60 Ha, and the Brillouin zone sampling was equivalent or denser to a 6 × 6 × 6 k-point grid for the 5-atom perovskite unit cell. The electronic self-consistent cycles were converged until the potential residual is smaller than 10^-18 Ha. Structural relaxations were performed based on the Broyden-Fletcher-Goldfarb-Shanno minimization algorithm until the forces are less than 10^-6 Ha/Bohr and the stresses are less than 10^-8 Ha/Bohr^3. The Born effective charges, phonon dispersions, and elastic and piezoelectric tensors were calculated according to the density functional perturbation theory (DFPT) as implemented in Abinit and analyzed using the Anaddb tool <cit.>. For building the training set, we used a less stringent convergence criterion for the electronic self-consistent cycles of 10^-10 Ha on the potential residual to facilitate the acquisition of the data. We also developed a plugin for the dpdata package to convert the data format <cit.>, so that the DeePMD-kit can make use of the Abinit results to train models. §.§ Second-principles calculations §.§.§ Model training The deep learning interatomic potential model for PZO was trained using the open-source package DeePMD-kit <cit.>. The deep potential method assumes that the total energy of a system is the summation of the energy contributions from each of the atoms, whereas the energy contribution from each atom is determined by its local environment within a cutoff radius and can be parameterized with neural networks. To build the PZO model, the cutoff radius was set to be 9.0 Å, smoothing from 1.5 Å. The embedding and fitting networks both have three layers, with the size of (25, 50, 100) and (240, 240, 240), respectively. The loss function is defined by L = p_e (Δ e)^2 + p_f ∑_i|Δ f_i|^2/3N +p_ξ∥Δξ∥^2/9, where Δ denotes the difference between the model predictions and the training data, e is the energy per atom, f_i is the force vector on the atom i and N is the number of atoms, ξ is the virial. The p_e, p_f and p_ξ are the prefactors which balance the losses in energy, forces and virials, respectively. The p_e and p_ξ increase from 0.02 to 1, and p_f decreases from 1000 to 1 during the training procedure. §.§.§ Model-based calculations The model-based calculations were carried out using Lammps <cit.> with periodic boundary conditions. Structural relaxations were performed based on the conjugate gradient algorithm, with the convergence criterion on forces of 5 × 10^-3 eV/Å. Phonon dispersion curves were calculated based on the finite difference method, using Phonopy <cit.> to analyse the force set produced by the model and extracted by Lammps. The non-analytical corrections for the phonon dispersions were conducted using the Born effective charges of the corresponding phases from DFPT calculations. The elastic and piezoelectric tensors were calculated by analyzing the stress and polarization changes to strain perturbations, respectively, with the atomic positions fully relaxed. The finite-temperature MD simulations were performed using NPT ensemble, with the temperature and pressure controlled by a Nose-Hoover thermostat and barostat. The electric field was applied by adding on each of the atoms extra forces according to f_i,α = ∑_β Z^∗_i,α,βE_β, where Z^∗ is the Born effective charges in the cubic reference phase, E is the applied electric field, the subscript i is the index of atoms, and α, β denote the Cartesian directions x, y and z. The time step for the MD simulations is 1 fs. At each temperature, the system was first equilibrated for 50 ps, then the simulation continued for another 50 ps for property analysis. The MD trajectories were analyzed with Agate <cit.>. In particular, the polarization was calculated based on the Born effective charge of the cubic phase using the algorithm implemented in Agate. § SECOND-PRINCIPLES MODEL §.§ Training set design PZO probably has one of the most complex potential energy surfaces of all perovskite oxides. As can be seen from the phonon dispersion curves of its cubic parent phase [Fig. <ref>(a)], lattice instabilities are present throughout the whole Brillouin zone. The strongest instabilities include the polar modes at Γ, and the in-phase and anti-phase oxygen octahedra rotations at M and R. Additionally, there are also unstable modes at X, M and R points, which are featured by the antipolar motions of cations. By successively condensing unstable modes in the cubic parent phase, PZO can reach various stationary phases. Fig. <ref>(b) and Table <ref> provide a series of stationary phases of PZO, where the cubic parent phase is taken as the energy reference. It is worth noticing that not only the number of different phases in PZO is considerably large, but also many of them are very close in energy. So, it seems quite challenging to capture such a large number of competing phases within a single interatomic effective potential model. Thanks to the powerful descriptive capability of neural networks, it is possible to overcome this challenge by constructing a comprehensive training set covering as many phases as possible. Then, to prevent over-inflating the training set, an efficient strategy for sampling the potential energy surface is also essential. Therefore, in contrast to the commonly used methods of concurrently and automatically sampling the potential energy surface from molecular dynamics <cit.>, we construct our training set for PZO through smart design. We herein briefly illustrate our strategy for the training set design in Figs. <ref>(a, b), and more general considerations are presented elsewhere <cit.>. To build the training set, we first condense different unstable lattice modes in the cubic reference structure either individually or jointly, and obtain a series of stationary phases, which are corresponding to the local minima or saddle points on the potential energy surface. Then, we make linear interpolations and extrapolations between different phases (including the cubic reference phase and the explored distorted phases), and put the interpolating and extrapolating paths into the training set. Finally, we also include in the training set noisy configurations, which were obtained by populating phonon modes at various temperatures and imposing random strains around each of those phases (the algorithm is implemented in Agate). The final training set used for training the PZO model contains 12812 configurations in total. The energy distribution of the configurations is presented in Fig. <ref>(c). §.§ Model validations §.§.§ Training set and test set We first validate the model by examining its reproducibility on both the training set and a test set. As shown in Figs. <ref>(a-c), our model achieves high-quality fittings on the energy, forces and virials (R^2 > 0.997) for the configurations in the training set. This can be attributed to the powerful descriptive capability of the neural networks. To ensure that the model does not demonstrate significant overfitting, we construct a test set by randomly selecting 255 configurations from MD trajectories at various temperatures (100 K, 200 K, 300 K, 500 K, 800 K) and feed them back into DFT calculations. Figs. <ref>(d-f) provide comparisons of the energy, forces and virials for the configurations in the test set. The model also exhibits a high level of reproducibility for the data not in the training set (R^2 > 0.993), indicating a satisfactory generalization ability of the model. §.§.§ Stationary phases We subsequently validate the model by performing structural relaxations of various stationary phases, to ensure that the corresponding stationary points on the potential energy surface are accurately described. Due to the generality of our training set, the trained model effectively captures no less than 30 different phases. Not only the energy [Fig. <ref>(a) and Table <ref>], but also the lattice parameters (Table <ref> in Appendix) and atomic distortions [Fig. <ref>(b)] obtained from the model-based structural relaxations are highly consistent with the results obtained from the first-principles relaxations. In particular, the recent reported Pnam-AFE80 and Ima2-FiE phases are successfully captured. This demonstrates that the model is quite efficient in describing the energy landscape. Nevertheless, there are a few imperfections in the model, particularly the over stabilization of the R3c phase by about 5.0 meV/f.u., leading to a state that exhibits 3 – 4 meV/f.u. below the Pbam-AFE40, Pnam-AFE80 and Ima2-FiE phases, which should supposedly be above them. Although this changes the prediction of ground state at zero Kelvin, it does not significantly impact the simulation of the finite-temperature properties, as will be discussed later. In fact, the energies of these competing low-energy phases are too close to be distinguished within the accuracy of the model and should be considered as nearly degenerated. We notice that at the DFT level, the exact energy order of these phases already strongly depends on the employed exchange-correlation functional <cit.>. §.§.§ Second-order energy derivatives We further validate the model by calculating the quantities of the second-order energy derivatives, i.e. the phonon frequencies, the elastic and piezoelectric tensors, which are corresponding to the curvatures of the potential energy surface around the stationary points. The calculations were performed on some important phases, including Pm3̅m (parent cubic phase), Pbam-AFE40 (conventional antiferroelectric phase), Pnam-AFE80 (newly discovered antiferroelectric phase), Ima2-FiE (ferrielectric phase, the DFT ground state), R3c (electric-field induced ferroelectric phase), and Imma phase (the common parent phase of Pbam-AFE40, Pnam-AFE80, Ima2-FiE, and R3c phases). As illustrated in Fig. <ref>, and Appendix Tables <ref> and <ref>, the model provides quite good agreements with the DFPT calculations on these second-order energy derivative quantities. From Fig. <ref>, we see that the high-symmetry Pm3̅m and Imma phases exhibit unstable phonon branches [Figs. <ref>(a, b)], highlighting that they are dynamically unstable at zero Kelvin, while the R3c, Ima2-FiE and Pnam-AFE80 phases are free of lattice instability [Figs. <ref>(c, e, f)]. For the Pbam-AFE40, one imaginary frequency at the Z point can still be observed from both DFPT and model-based calculations [Fig. <ref>(d)], which is in line with the recent discovery by Baker et al. <cit.>. By condensing this Z-point instability, the Pbam-AFE40 phase will change to the Pnam-AFE80 phase. One potential concern with the model is the treatment of long-range dipole-dipole interactions. It is generally believed that the long-range dipole-dipole interactions are crucial for insulators, and ferroelectric-related phenomena are believed to arise from the delicate balance between long-range and short-range interactions <cit.>. However, for the model constructed in this work, there is no separation of the long-range and short-range interactions, but the total interactions were treated as a whole and truncated up to a cutoff radius (9 Å, roughly two times of the cubic cell parameter). This suggests that long-range effects might not be so important as initially thought, and some of them may have been effectively incorporated into the current description. It would be interesting to investigate how the treatment of long-range interactions from other approaches can affect the model <cit.>. § FINITE-TEMPERATURE PROPERTIES §.§ Phase transitions With the model thoroughly validated, it can be applied to investigate finite-temperature properties of PZO. Our initial objective is to simulate the temperature dependent phase transitions. Prior to presenting the results from our model, we will briefly review the experimental facts and the outcomes from previous effective models on PZO. Experimentally, PZO undergoes a phase transition from the low-temperature Pbam-AFE40 phase to the high-temperature cubic phase at T_ C≈ 505 K <cit.>. In addition, within a narrow temperature region around T_ C, some experimental studies have also identified traces of a potential intermediate phase, while the exact nature of which is still under debate. Theoretically, various effective models have been developed to simulate the finite-temperature behaviors of PZO. Gindele et al. <cit.> developed a shell model for the solid solution Pb(Zr, Ti)O_3, in which the antiferroelectric transition of pure PZO is successfully described. The model predicts the transition occurs around 420 K, slightly underestimated as compared to the experimental value. In a different approach, Mani et al. <cit.> developed an effective Hamiltonian model for PZO, which successfully describes the antiferroelectric phase transition as well as the behaviors under electric field and mechanical pressure. However, their model significantly overestimates the transition temperature to 946 K. Independently, Patel et al. <cit.> reported another effective Hamiltonian model for PZO, in which a bilinear energetic coupling term between Pb motions and the oxygen octahedra rotations is introduced. This model well reproduces the low-temperature Pbam-AFE40 phase and the high-temperature cubic phase. Interestingly, between these two phases, the model predicts an intermediate phase, occurring within the temperature range from 650 K to 1300 K. This intermediate phase is also in Pbam space group and characterized with similar “↑↑↓↓” Pb displacement pattern as in the low-temperature Pbam-AFE40 phase, but without the oxygen octahedra rotations (corresponding to the Pbam-II phase in Table <ref>). In general, all these existing models mainly focus on the low-temperature Pbam-AFE40 phase and the antiferroelectric phase transition to the high-temperature cubic phase, with less consideration given to other potential metastable phases. A prominent advantage of our model is the capability of describing a large number of metastable phases, and some of them exhibit similarly low energies and may potentially compete with each other. Our particular interest is to investigate how these competing low-energy phases behave at finite temperatures. Therefore, we performed MD simulations heating up from different homogeneous phases, i.e. Pbam-AFE40 (Pnam-AFE80), Ima2-FiE and R3c, respectively, as well as cooling down from the high-temperature cubic phase. If not otherwise specified, our calculations employ 12 × 12 × 12 repetitions of the five-atom perovskite unit cell as the simulating supercell, which is commensurate with Pbam-AFE40, Pnam-AFE80, Ima2-FiE and R3c, so as to avoid the bias arising from incompatibility in structural periodicity. §.§.§ Heating When the MD simulation starts heating from the Pbam-AFE40 phase, as shown in Fig. <ref>(a), the structure automatically goes to the Pnam-AFE80 phase at very low temperature (e.g. 20 K). This can be evidenced from the non-zero value of ϕ_z^+/- [Fig. <ref>(a2)]. Here, we use ϕ_z^+/- as an indicator to distinguish the Pnam-AFE80 and Pbam-AFE40 phases, which represents the octahedra rotations around the z-axis with a complex pattern alternating between in-phase and anti-phase rotations (see Fig. <ref> in Appendix). It is not surprising that when using Pbam-AFE40 as the initial configuration, it immediately transforms into Pnam-AFE80, due to condensation of the lattice instability at Z-point. With increasing temperature upon 120 K, the Pnam-AFE80 transforms back to the Pbam-AFE40. When heating up to 480 K, the system changes from the Pbam-AFE40 to cubic Pm3̅m phase, along with the vanishing of the octahedra rotations (a^-a^-c^0, indicated by ϕ_x^- = ϕ_y^- ≠ 0 and ϕ_z^- = 0) and the antipolar Pb displacements [↑↑↓↓, indicated by q = (1/4, -1/4, 0)]. We do not observe any clear evidence for possible intermediate phase between Pbam-AFE40 and the cubic phase during the heating process. The Pbam-AFE40 to Pm3̅m phase transition temperature observed at 480 K is quite close to the experimental transition temperature T_ C^ exp.≈ 505 K <cit.>, showing a remarkable improvement on the estimation of the transition temperature compared to previous atomistic simulations <cit.>. When the MD simulations start heating from the Ima2-FiE [Fig. <ref>(b)] or R3c [Fig. <ref>(c)] phases, respectively, we found that these phases remain stable at low temperatures, in line with the fact that their phonon dispersions are free of structural instability. Furthermore, our simulations show that the thermal stability of the Ima2-FiE and R3c phases is quite considerable, as they can be maintained to above room-temperature. This result also implies that there are considerable energy barriers between Ima2-FiE and Pbam-AFE40 and between R3c and Pbam-AFE40. During heating, the Ima2-FiE and R3c first transform into Pbam-AFE40 phase around 440 K and 420 K, respectively, and then the Pbam-AFE40 changes to the cubic Pm3̅m phase around 480 K. It is very interesting to observe that the phase transition into Pbam-AFE40 always occurs in prior to the cubic phase. Unfortunately, there exists no experimental data for comparison with these simulation predictions so far, probably due to the difficulty of preparing PZO samples in pure R3c or Ima2 phases. §.§.§ Cooling We also make use of the model to simulate phase transitions during cooling process. It seems that the simulation of cooling down from the cubic phase is more complicated compared to the simulations of heating up from homogeneous phases. The complexity arises due to the competition among the nearly degenerated low-energy phases, and the possible emergence of domain walls or phase boundaries during the cooling process. Nevertheless, the cooling simulation hold the significance of helping to understand the formation of experimental room-temperature phase after high-temperature synthesis or thermal treatment. In PZO, the most commonly observed room-temperature phase is the Pbam-AFE40, but the DFT calculations predicts that some other phases like Ima2-FiE would exhibit lower energies. Why PZO prefers to select the Pbam-AFE40 as its room-temperature phase rather than other low-energy phases? This is a specific question that we want to address through the cooling simulations. Considering the complexity and the inherent stochasticity of the cooling process, we repeated the cooling simulations from the 800 K (cubic phase) to 20 K for several times in supercells of different sizes. Fig. <ref>(a) present the results from a 12 × 12 × 12 supercell. The high-symmetry cubic structure go through phase transition below 460 K, along with the appearance of octahedra rotations a^-a^-c^0 (ϕ_x^- = ϕ_y^- ≠ 0 and ϕ_z^- = 0) and Pb off-centering displacements, while the system stays as nonpolar during the whole cooling process (P_x = P_y = P_z =0). The transition temperature during cooling is slightly lower than that during heating, indicating the presence of thermal hysteresis, which in line with the fact that the antiferroelectric transition in PZO is of first order. After cooled down to low temperatures, the final Pb displacement pattern is “↑↑↑↑↓↓↑↑↓↓↓↓” [Fig. <ref>(b)], which is different from those in Pbam-AFE40 (↑↑↓↓), Pbam-FiE (↑↑↓) or R3c (↑↑↑↑), but can be viewed as Pbam-AFE40 with two translational boundaries <cit.>. Besides, an increase in ϕ_z^- at the low-temperature end is also observed [Fig. <ref>(a2)]. Upon closer inspection of the low-temperature structure, we find that the non-zero ϕ_z^- appears locally in the regions exhibiting the “↑↑↑↑” and “↓↓↓↓”, indicating a tendency toward R3c. Nevertheless, the emergence of ϕ_z^- depends specifically on the Pb displacement pattern and will disappear in simulations using a supercell of a different size. We repeat the cooling simulation in a 24√(2)× 8√(2)× 4 supercell, which allows for a better view of more columns of dipoles [Fig. <ref>(c)]. The changes in lattice parameters, octahedra rotations, and polarization are almost identical to those observed in the simulation using a 12 × 12 × 12 supercell, except for the absence of an increase in ϕ_z^- at the low-temperature end. After cooling down to very low temperature, we find that the prevalence of the ↑↑↓↓ fragments of Pb displacement pattern is quite noteworthy. This confirms that the final structure is indeed the Pbam-AFE40 phase with translational boundaries. In this view, the simulation results agree well with the experimental fact that the most commonly observed low-temperature phase of PZO is the Pbam-AFE40 phase. A careful inspection of the evolution of local Pb displacements reveals the detailed process of evolution of the Pb displacement pattern during cooling. At the temperature just below T_ C, Pbam-like “↑↑↓↓” domains first nucleate in different regions of PZO [Fig. <ref>(c), 420 K]. Subsequently, as the temperature decreases, Pbam domains gradually expand and eventually occupy almost the entire crystal. Finally, these Pbam domains with ↑↑↓↓ polar patterns are frozen down to low temperatures [Fig. <ref>(c), 20 K], while the global transition to other patterns like Ima2-FiE (↑↑↓), or R3c (↑↑↑↑) is kinetically hindered due to the energy barriers between those phases. The key reason why PZO tends to be in Pbam-AFE40 phase at room-temperature is most likely due to the early appearance of Pbam-like structures just below T_ C. Namely, at the temperature just below T_ C, the Pbam-AFE40 holds the advantage in free-energy over other phases such as Ima2-FiE or R3c. Aramberri et al. <cit.> previously made a similar statement, where they estimated the phonon entropy and free-energy using harmonic approximation. It is encouraging that the statement can now be more directly demonstrated through atomistic simulations. Our results emphasize the significance of free-energy, rather than the internal energy at zero Kelvin, as a critical factor in determining the actual low-temperature phase in PZO. In addition to the prevalent “↑↑↓↓” polar patterns, the “↑↑↓” patterns can also be observed in the region of translational boundaries, which closely resembles the structure of the Ima2-FiE phase (see Fig. <ref> in Appendix). This result is aligned surprisingly well with the recent experimental microscopic observation by Liu et al. <cit.>, who found that in PZO single crystal the Ima2-like structures appears at room temperature in the form of translational boundaries. Because of the random positioning of the initial nucleation of Pbam domains at high temperature, the inevitable outcome is the formation of translational boundaries. Based on this, we believe that the translational boundaries are important sources of the Ima2-like structures in PZO. Having these results of heating and cooling, we would also like to discuss the possible impact of the model-predicted energy errors of certain phases on the finite-temperature simulations. As mentioned before, the model over stabilizes the R3c phase by about 5.0 meV/f.u. (Table <ref>), resulting in the model to incorrectly assign the ground state as the R3c phase. Despite of this error, the global transitions from Pbam-AFE40, Pnam-AFE80 or Ima2-FiE to R3c have never been observed. This may be due to the fact that the energy errors are so minor compared to the energy barriers between these phases. Another important reason is that the determinant for finite-temperature phase transitions is the free-energy, where the contribution not only involves internal energy but also underscores the crucial role of entropy. At higher temperatures, the role of entropy becomes even more significant. Our model can provide phonon density of states highly consistent with DFPT calculations (Fig. <ref>), indicating that the model can provide a quite good estimation of the lattice vibration entropy. By these considerations, we speculate that the over stabilization of the R3c phase may lead to the overestimation of the temperature for the R3c to Pbam transition (no experimental result from bulk samples available for comparison), and other than that the impact should be very limited. §.§ Double P-E hysteresis loop Finally, we utilize the model to investigate the electric field induce transitions at finite-temperature. We first point out that obtaining a typical double P-E hysteresis loop in PZO requires specific finite-temperature conditions. This is because the ferroelectric R3c phase is dynamically stable [Fig. <ref>(c)] and possesses a large polarization at zero field [Fig. <ref>(c)], which contradicts the zero remnant polarization for a typical double hysteresis loop. The optimal temperature for the double P-E hysteresis loop should be slightly higher than the R3c to Pbam transition temperature [approximately 420 K according to Fig. <ref>(c), although this value might be overestimated by the model]. At this temperature, the polar R3c phase is destabilized under zero field but can be readily induced by the application of an electric field, and the non-polar Pbam-AFE40 phase can be restored upon removing the electric field. Figure <ref> shows the simulation of double P-E hysteresis loop at 440 K. The electric field was applied along the pseudocubic [111] direction, following a triangle waveform as the shown in Fig. <ref>(c). The simulation provides a double P-E hysteresis loop nearly ideal for antiferroelectrics [Fig. <ref>(a)]. The maximum polarization P_ max = 0.514 C/m^2 (at E_ max = 346 kV/cm) is close to the experimental values, and the transition electric fields E_ AFE-to-FE≈ 200 kV/cm, E_ FE-to-AFE≈ 70 kV/cm are in the same order of magnitude with the experimental measurements on the PZO films <cit.>. Along with the double P-E hysteresis loop, the simulation also provides a sprout-shaped hysteresis loop for the strain, whose shape and the magnitude are also comparable with the experimental measurements on PZO-based materials <cit.>. In addition to successfully reproducing the polarization and electric-field induced strain properties, the model-based simulation also reveals the detailed atomic-scale processes of the PZO response to the electric fields. The changes of Pb displacements and the oxygen octahedra rotation are shown in Figs. <ref>(c, d), from which the switching between Pbam-AFE40 (a^-a^-c^0, ↑↑↓↓) and R3c (a^-a^-a^-, ↑↑↑↑) during the cycling of electric field is quite clear. Given that numerous functionalities of PZO, such as dielectric energy storage, electromechanical actuation, electrocaloricity, thermal switching, etc., are all closely linked to electric-field induced AFE-FE transitions, the model also emerges as a potent tool to elucidate the atomistic mechanisms underlying these functionalities. § CONCLUSION We have developed a deep learning interatomic potential for the prototype antiferroelectric perovskite PZO, and demonstrated the capabilities of the model in simulating properties of relatively large systems at finite temperatures and finite electric fields. The successes of the model include: (1) describing a large number of phases, especially the recently discovered Pnam-AFE80 and Ima2-FiE, and accurately predicting structural and dynamical properties; (2) providing Pbam to cubic phase transition temperatures close to the experimental measurement (T_ C^ model≈ 480 K, T_ C^ exp.≈ 505 K); (3) producing double polarization hysteresis loop that closely resembles the experimental measurement. The primary weakness of the model is the underestimation of the energy of R3c phase by about 5.0 meV/f.u. As a result, the temperature of the transition from R3c to Pbam is likely overestimated. In this study, the key physical insight provided by the model concerns the formation of Pbam and Ima2 structures during cooling. Our atomistic simulation shows that at the temperature just below T_ C, the Pbam-AFE40 exhibits a free-energy advantage over the other phases. For this reason, the Pbam-like structure appears first, and then frozen down to low temperatures, while the global transition from Pbam-AFE40 to Ima2-FiE is probably kinetically hindered. The Ima2 structure is more likely to appears locally in the form of translational boundaries. Due to the stochastic nature of the nucleation of Pbam domains just below T_ C, the formation of translational boundaries at low temperatures is almost inevitable, making them an important source of Ima2 structures in PZO. These results also demonstrate the effectiveness and power of the model-based large-scale atomistic simulations. Through these simulations, not only the macroscopic behaviors comparable to experiments can be reproduced, but also the microscale atomistic mechanisms can be accessed. We prospect that this model will serve as a valuable tool for studying a variety of intriguing phenomena in PZO, including translational boundaries, ferroelastic domain walls, and topological polar structures, as well as functional properties related to energy storage, electric-field induced strain, electrocaloric effect, and thermal switching, etc. We thank Prof. Gustau Catalan from Catalan Institute of Nanoscience and Nanotechnology and Prof. Bin Xu from Soochow University for helpful discussion. This work is supported by the European Union’s Horizon 2020 research and innovation program under grant agreement number 964931 (TSAR) and by F.R.S.-FNRS Belgium under PDR grant T.0107.20 (PROMOSPAN). H.Z. acknowledges the International Postdoctoral Exchange Fellowship (PC2020060) and the Research IPD-STEMA Program. C.G acknowledges financial support from China Scholarship Council. The authors acknowledge the use of the CECI supercomputer facilities funded by the F.R.S-FNRS (Grant No. 2.5020.1) and of the Tier-1 supercomputer of the Fédération Wallonie-Bruxelles funded by the Walloon Region (Grant No. 1117545). § APPENDIX §.§ Structures of Pbam-AFE40, Pnam-AFE80, and Ima2-FiE phases Figure <ref> schematically illustrates the structures of Pbam-AFE40, Pnam-AFE80, and Ima2-FiE phases of PZO. The Pbam-AFE40 phase is featured by an octahedra rotation pattern a^-a^-c^0 and a fourfold periodic Pb displacement pattern “↑↑↓↓” in the pseudocubic [110] direction. In the Pnam-AFE80 phase, the Pb displacement pattern is similar to that of the Pbam-AFE40 phase, but there are additional octahedra rotations around the c-axis (pseudocubic [001]), alternating between in-phase and anti-phase rotations, i.e., two layers of clockwise rotations followed by two layers of anticlockwise rotations, and so on. We refer to this alternating in-phase and anti-phase c-rotation pattern as “c^+/-”. The Ima2-FiE phase has a similar octahedra rotation pattern a^-a^-c^0 as that of the Pbam-AFE40 phase, but with a different threefold periodic Pb displacement pattern “↑↑↓” in the pseudocubic [110] direction. §.§ Lattice parameters, elastic and piezoelectric tensors Tables <ref>, <ref> and <ref> provide comparisons of the lattice parameters, elastic and piezoelectric tensors of various PZO phases obtained from both first-principles and model-based calculations, respectively, serving as validations of the model. Since some of the data have not been previously reported, these tables may also serve as valuable references for future research. unsrt
http://arxiv.org/abs/2406.09086v1
20240613131219
Dynamics of Spinning Binary at 2PM
[ "Gang Chen", "Tianheng Wang" ]
hep-th
[ "hep-th", "gr-qc" ]
compat=1.1.0 graviton/.style=circle, draw=green!60, fill=green!5, very thick, minimum size=7mm codot/.style=/tikz/shape=circle,/tikz/fill=white,/tikz/minimum size=0.1cm,/tikz/inner sep=1.8pt myblob/.style=/tikz/shape=rectangle,/tikz/fill=red,/tikz/minimum size=0.2cm,/tikz/inner sep=1.8pt ghc/.style=/tikz/shape=circle,/tikz/fill=white,/tikz/minimum size=0.05cm, HV/.style=/tikz/shape=circle,/tikz/fill=rgb:black,1;white,2,/tikz/minimum size=0.1cm, GR/.style=/tikz/shape=ellipse,/tikz/fill=rgb:black,1;white,2,/tikz/minimum size=0.3cm, GR2/.style=/tikz/shape=ellipse,/tikz/fill=rgb:black,1;white,2,/tikz/minimum width = 1.2cm, /tikz/minimum height = 3.4cm box/.pic=[fill=black] (0,0) circle (2.5pt); [fill=black] (0.5,0) circle (2.5pt); [line width=5pt] (0,0) – (0.5,0); arrows.meta calc decorations.pathmorphing decorations.pathreplacing decorations.markings vector2/.style=decorate, decoration=snake, amplitude=1pt, segment length=6pt, draw,double, vector/.style=decorate, decoration=snake, amplitude=1pt, segment length=6pt, draw, provector/.style=decorate, decoration=snake,amplitude=2.5pt, draw, antivector/.style=decorate, decoration=snake,amplitude=-2.5pt, draw, fermion/.style=draw=black, postaction=decorate, decoration=markings,mark=at position .55 with [draw=black]>, fermionbar/.style=draw=black, postaction=decorate, decoration=markings,mark=at position .55 with [draw=black]<, fermionnoarrow/.style=draw=black, gluon/.style=decorate, draw=black, decoration=coil,amplitude=4pt, segment length=5pt, scalar/.style=dashed,draw=black, postaction=decorate, decoration=markings,mark=at position .55 with [draw=black]>, scalarbar/.style=dashed,draw=black, postaction=decorate, decoration=markings,mark=at position .55 with [draw=black]<, scalarnoarrow/.style=dashed,draw=black, electron/.style=draw=black, postaction=decorate, decoration=markings,mark=at position .55 with [draw=black]>, bigvector/.style=decorate, decoration=snake,amplitude=4pt, draw, cross/.style=cross out, draw, minimum size=2*(#1-), inner sep=0pt, outer sep=0pt block = [draw, rectangle, minimum height=3em, minimum width=6em] 1.5pt theoremTheorem definitionDefinition[section] lemma[theorem]Lemma conjecture[theorem]Observation prop[theorem]Proposition .theorem #1#1 #1 #1 to1ex[#1] · dot η̃ η̅ m λ̃ #1Eq. (<ref>) #1Eq. (<ref>) #1#2eqs. (<ref>) and (<ref>) #1#2Eqs. (<ref>) and (<ref>) #1section <ref> ⟨#|1⟨#1| |#⟩1|#1 ⟩ #1[#1| #1|#1] ⟨#|1⟩⟨#1 ⟩ #1.#2⟨#1 #2⟩ #1.#2[#1 #2] #1𝖪_#1 same #1 #1#1 #1#1 #1#1 #1#1 #1#1TW: #1 ^aNiels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark^bCenter for Theoretical Physics, Seoul National University, 1 Gwana-ro, Gwanak-gu, 08826, Seoul, South Korea^cInstitute of Theoretical Physics, Chinese Academy of Sciences, 55 Zhongguancun Road East, Haidian District, 100190, Beijing, Chinagang.chen@nbi.ku.dktianhengwang@snu.ac.kr SNUTP24-002 We consider the covariant proposal for the gravitational Compton amplitude for a Kerr black hole. Employing the covariant three- and four-point Compton amplitudes, we assemble the classical one-loop integrand on the maximal cut at all orders in spin, utilizing the method of unitarity. Expanding in powers of spin, we evaluate the one-loop amplitude up to 𝒪(G^2 a^8). Supplemented with extra contact contributions derived from the far-zone data of the Teukolsky solutions, the one-loop amplitude is in agreement with results available in the literature. We showcase the classical eikonal in the aligned-spin case at 𝒪(G^2 a^7). Dynamics of Spinning Binary at 2PM Tianheng Wang^b,c ================================== § INTRODUCTION The successful detection of gravitational waves <cit.> has inspired an explosion of developments in the studies of black hole mergers. Besides numerical relativity <cit.> and the formal approaches focusing on the post-Newtonian expansion <cit.>, various amplitude-based approaches <cit.> have proven highly effective in computing higher-order corrections in the Post-Minkowskian (PM) expansion and revealing the underlying structures of gravitational interactions. These amplitude-based approaches view the dynamics of heavy bodies interacting via weak gravity as that of scattering processes in the classical regime where the gravitons are soft. A multitude of methods <cit.> have been devised to single out the classical order from the full (quantum) amplitudes. Techniques for computing loop amplitudes efficiently facilitate computations at high PM orders. At the point of writing, black hole scattering observables have been computed up to 4PM <cit.> for both non-spinning and spinning binaries and partial results are obtained at 5PM in the non-spinning case <cit.>. Incorporating spin degrees of freedom is non-trivial, both conceptually and computationally. Challenges appear even at tree level. Although the three-point amplitudes of two massive particles and one graviton are classified in <cit.>, complications enter at four points, namely the gravitational Compton scattering. On the one hand, fundamental questions in quantum field theories with higher-spin particles need to be addressed <cit.>, towards a full understanding of these amplitudes. On the other hand, the concept of double copy <cit.> and general considerations on symmetry and locality requirements allow us to bootstrap the Compton amplitudes of arbitrary spins <cit.>, despite incomplete knowledge of the underlying theory. These results can then be tested against approaches developed from other perspectives such as the Teukolsky equation <cit.>. Based on somewhat different assumptions, the Compton amplitude for a Kerr black hole has been computed both as an expansion in powers of spin <cit.> and as a resummed function <cit.>. They all agree up to the quartic order in the spin expansion, where the spin-shift symmetry is expected to hold. Going to higher orders in spin, they deviate from one another, as the nuances in their respective formalism and assumptions become important. These results provide a pool of data for future investigations to analyze and clarify, which in turn will help to reveal the structures of the underlying theory. The Compton amplitude for a Kerr black hole is also an important ingredient in the calculation of loop amplitudes, which describe the dynamics of Kerr binaries at higher PM orders <cit.>. A complementary line of research <cit.> based on the worldline quantum field theory (WQFT) formalism <cit.> provides a fast-track to high PM calculations for physical observables and generating functions, up to quadratic orders in spin. Recent development in the worldline formalism focuses on the radial action, which produces scattering observables at 6PM beyond the quadratic order in certain kinematic regions <cit.>. The gravitational waveform in the spinning case is discussed in  <cit.> up to the quartic order in spin. Besides Einstein gravity, related works on the spinning binary systems in gauge <cit.> and other gravitational backgrounds <cit.> have been carried out. Beyond conservative dynamics, tidal and radiation effects have been explored from both amplitude approaches and worldline-based formalisms <cit.>. Recent developments on the self-force expansion <cit.> may also shed light on the dynamics in curved backgrounds. In this paper, we consider one particular proposal for the Compton amplitude <cit.>, which is obtained using the double-copy and bootstrap. In section <ref>, we review its structures and make comparisons with other proposals at the level of their spin expansions. In section <ref>, we compute the one-loop amplitude using this Compton amplitude from the leading singularity and the resulting 2PM eikonal phase. § CLASSICAL COMPTON AMPLITUDES We begin this section with a review of the classical three-point <cit.> and four-point Compton amplitudes <cit.> of two massive particles with spins minimally coupled to gravity at tree level. The Compton amplitude follows from the double-copy and factorization requirements and includes contact terms. The contact terms match with physical data at low orders in spin and display certain empirical properties, which allows for an extrapolation. This Compton amplitude is conjectured to hold at all orders in spin. In the second part of this section, we discuss the comparison with the Compton amplitudes computed from other approaches <cit.>. The differences in the contact terms can be interpreted as being related to the internal structures of the Kerr black hole. Moreover, we demonstrate a simple procedure to obtain expressions in covariant variables for these extra contact terms. §.§ Covariant Compton amplitude to arbitrary spin order Here we review the structures of the classical amplitudes of a heavy particle emitting gravitons. Throughout this paper, we restrict our discussions to the kinematics in the heavy-mass limit <cit.>. The incoming and outgoing massive momenta typically are parameterised as p̅ = m v and p̅' = (m v - q) where q^μ denotes the total momenta of the emitted graviton(s). The on-shell conditions p̅^2 = p̅^'2 =(mv - q)^2= m^2 yield v^2 =1 and v q=0. The three-point amplitude of two massive particles of arbitrary masses and spins and a graviton is first given in <cit.> by considering general symmetry constraints. Restricting to the minimal coupling and taking the heavy-mass limit, the amplitude corresponds to the Kerr black hole of an arbitrary classical spin. A compact, covariant form of this three-point amplitude is found in <cit.> [baseline=([yshift=-0.8ex]current bounding box.center)]every node=[font=] (a) p; [right=1.5cm of a] (f2)[dot]; [right=1.5cm of f2] (c)p'; [above=1.3cm of f2] (gm)ε_1, p_1; * (a) – [thick] (f2) – [thick] (c), (gm)–[photon,ultra thick](f2) ; _3(1, p', p)= -iκ (ε_1)(𝗐ε_1) , where 𝗐^μcosh(x_1)^μ-iG_1(x_1)(p_1 S)^μ, with G_1(x_1)=sinh(x_1)/x_1, x_1=a p_1 and (p_1 S)^μ = p_1νS^νμ. Here S^μν denotes the spin tensor which describes the classical spin of the black hole and we have introduced the spin-length vector a^μ, with S^μν = -ϵ^μνρσp̅_ρ a_σ. As G_1(x_1) is an entire function, (<ref>) is local in the sense that it is free of unphysical poles involving x_1 in the denominator once expanded in powers of the spin vector. At four points, we have the classical gravitational Compton scattering of two gravitons and the Kerr black hole, depicted below. [baseline=([yshift=-0.8ex]current bounding box.center)]every node=[font=] (a) p; [right=1.9cm of a] (f2)[GR]𝐒; [right=1.9cm of f2] (c)p'; [above=1.3cm of f2] (gm); [left=0.8cm of gm] (g2)ε_1,p_1; [right=0.8cm of gm] (g20)ε_2,p_2; * (a) – [thick] (f2) – [thick] (c), (g2)–[photon,ultra thick](f2),(g20)–[photon,ultra thick](f2) ; . A covariant form of the Compton amplitude for a Kerr black hole is obtained in <cit.>. Schematically, this amplitude receives three types of contributions as follows, _4(1,2, p', p) =-_a(1,2, p', p) _0(1,2, p, p') 2(p_1 p_2)+_ r(1,2, p', p) 4( p p_1) ( p p_2) +_ c(1,2, p', p). Here the first term, as its form suggests, is assembled from the double-copy procedure. The second term is designated to take care of the spin-flip effects. The last denotes the contact contributions which can not be determined from factorizations. The double-copy term, constructed in <cit.>, is given by 𝒩_a denoting the kinematic numerator of the Yang-Mills Compton amplitude in the heavy-mass limit and its spinless counterpart 𝒩_0. The numerator 𝒩_a takes the following form, _a(1,2, 3, 4)=-𝗐_1 F_1 F_2𝗐_2/ (p_1p̅)+p_1p̅-p_2p̅/2(p_1p̅)× (i G_2(x_1,x_2) (a F_1 F_2 S p_2)+i G_2(x_1,x_2) (a F_2 F_1 S p_1)+i G_1(x_12) tr(F_1 S F_2) + G_1(x_1) G_1(x_2) ( (a F_1p̅) (a F_2 p_1)-(a F_1 p_2) (a F_2p̅) -p_2p̅-p_1p̅/2 (a F_1 F_2 a))) , where x_12=x_1+x_2 and G_2(x_1, x_2) ≡1 x_2(sinh(x_12) x_12-cosh(x_2) sinh(x_1) x_1). G_2(x_1,x_2) is also an entire function and hence renders _a free of unphysical poles in x_1, x_2 or x_12. The scalar numerator 𝒩_0 follows from the kinematic Hopf algebra <cit.> _0(1,2, p, p')=-(p̅' F_1 F_2p̅'/p_1p̅'). We have 𝒩_0=𝒩_a|_a→ 0. These numerators manifestly satisfy the crossing symmetries described by group algebra in actions <cit.> as demanded by the color-kinematic duality. The double-copy term is consistent with all possible massless factorizations. The second term in eq. (<ref>) complements the double-copy contribution to produce the correct spin-flip effects, starting from the cubic order in spin. It is determined by the factorization requirement on the physical massive poles. The numerator 𝒩_r reads _ r(1,2, p', p)= ((∂_x_1-∂_x_2)G_1(x_1) G_1(x_2)) 4(p̅ p_1) (p̅ p_2)(p̅ p_2(p̅^2 (a F_1 F_2 a) (a F_2 F_1p̅) +a^2 (p̅ F_1 F_2p̅) (a F_1 F_2p̅))- (1↔ 2)) +(i(∂_x_1-∂_x_2)G_2(x_1,x_2) 4(p̅ p_1) (p̅ p_2)) ((p̅ p_2) (a F_2 F_1p̅) ((a F_2p̅) (a_1p̅) -(a F_1p̅) (a_2p̅))+(1↔ 2)). The consistency conditions resulted from all the physical factorizations <cit.> are satisfied. It is noted in <cit.> that the three-point amplitude, the double-copy and the spin-flip contributions to the Compton amplitude share the property that their degrees are zero. Here the degree is defined as the maximal power of 1/χ, when χ parameterizes the scaling of the spin variable, namely a^μ→χ a^μ and χ→ i∞. The contact contribution is not accessible via factorizations and instead is obtained by matching with physical data <cit.> at 𝒪(a^4) and 𝒪(a^5). Such expressions also exhibit the characteristic above that their degrees are zero at low orders in spin. Imposing this as a general constraint, the following contact contribution is extrapolated to all orders in spin, _ c(1,2, p', p) =((∂_x_1-∂_x_2)^2 2!G_1(x_1)G_1(x_2))((a F_1p̅) (a F_2p̅) (a F_1 F_2 a) -a^2 2 ((a F_1 F_2 p) (a F_2 F_1p̅) + (a F_1 F_2 a) (p̅ F_1 F_2p̅))) +(i(∂_x_1-∂_x_2)^2 2!G_2(x_1,x_2))( -1 2((a F_1 F_2 a) (a F_2p̅) (a_1p̅)-(1↔ 2))). Putting together all three types of contributions, the covariant expression for the Compton amplitude (<ref>) is manifestly gauge invariant and free of unphysical poles. §.§ Spin expansion and analysis of contact contributions Here we consider the expansion of the Compton amplitude reviewed above in powers of the spin vector and discuss the comparison between this Compton amplitude and known results in the literature computed from two different approaches, in particular, the higher-spin theory <cit.> and the Teukolsky equation <cit.>. To avoid confusion, in the remaining part of this section, we refer to eq. (<ref>) as _ SP, where the subcript SP indicates that this expression is obtained from bootstrapping with only two parameters x_1 and x_2 and is speculated to describe the single-particle contributions only in <cit.>. Likewise, we denote the Compton amplitude computed from the higher-spin theory in <cit.> as _ HS and the one extracted from the solution to the Teukolsky equation in <cit.> as _ TS-FZ, where the “FZ” in the subscript indicates that we only consider the contributions obtained from imposing the so-called far-zone asymptotic behaviours.[We refer to the Teukolsky result with α→ 1 as the “far-zone data” throught this manuscript, as indicated in <cit.>. However, we acknowledge ambiguities in this way of separating the far-zone and near-zone contributions.] The Compton amplitude derived from the higher-spin theory _ HS contains two entire functions which depend on three parameters <cit.>, among which is the spheroidicity parameter defined as z = 2 √(-a a) ( p p_1)/m <cit.> .[We use a different normalization for z from that in <cit.>.] The z-dependence in _ HS follows from the higher-spin framework and shows up as contact terms only in the classical limit. Such terms are speculated to be related to the internal structures of the Kerr black hole <cit.> or the near-zone physics, which is beyond the scope of _ SP given in eq. (<ref>). On top of the discussions on the comparison at the level of the entire functions in <cit.>, we have found further agreement with _ HS when taking z→ 0, namely _ SP(1^±,2^∓, p', p)=_ HS(1^±,2^∓, p', p)|_z→ 0. This relation is checked order by order up to 𝒪(a^20). As for the comparison with <cit.>, it is observed in <cit.> that the extra contact terms needed for matching with the far-zone data at 𝒪(a^5) can be rewritten as a polynomial in z, although the z-dependence is traded for other parameters in the original expression of _ TS-FZ. These extra contact terms vanish as z→ 0. Here we see that this observation holds up to 𝒪(a^8). In other words, at a given order in spin, we can separate _ TS-FZ into two parts, the z-independent part which agrees with _ SP in 4 dimensions and the z-dependent part which vanishes as z→ 0, _ SP(1^±,2^∓, p', p)= _ TS-FZ(1^±,2^∓, p', p)- ^(c)_ TS-FZ (z), with ^(c)_ TS-FZ (0)=0. _ TS-FZ is expressed in terms of the spinor-helicity variables. _ SP can be readily rewritten and it is straightforward to find _ TS-FZ^(c) in these variables. But to see that _ TS-FZ^(c) indeed vanishes at a given order in spin as z→ 0, it is more convenient to use an alternative expression in terms of covariant variables. To this end, we adopt a simple procedure as follows. We begin with a general ansatz in terms of the covariant variables. That is, the ansatz is a linear combination of monomials comprised of powers of the parameter z and factors of the forms 𝗏𝗏', 𝗏 F_i F_j 𝗏' and 𝗏 F_iF_j 𝗏' where the vectors 𝗏 and 𝗏' can be either a^μ or a momentum. The ansatz must have the correct counting in a^μ and ε_i^μ and satisfy the dimension and parity requirements. Imposing that it should agree with the spinor-helicity expression of ^(c)_ TS-FZ(z) for all possible helicity configurations, when restricted to 4 dimensions, we fix the coefficients in the ansatz. Recall that the same-helicity gravitational Compton amplitude is known to all orders in spin, which is captured already in _ SP. Hence _ TS-FZ^(c)(z) must vanish in the same-helicity configurations and receive non-zero contributions only in the opposite-helicity ones. We note that the monomials satisfying the power counting and parity requirements form an over-complete basis in 4 dimensions, due to the identities involving the Levi-Civita symbols. Such relations only hold in 4 dimensions and may contain coefficients that are rational in inner products of momenta. Hence it is difficult to remove such redundancies in the ansatz completely. Consequently, there is a multitude of covariant expressions for _ TS-FZ^(c) at a given order in spin. They all evaluate to the same in 4 dimensions, but may be different in general dimensions. This leads to certain subtleties when we construct their contribution to one-loop amplitude, which will be discussed in the next section. We give the explicit expressions for ^(c)_ TS-FZ(z) at 𝒪(a^5) and 𝒪(a^6) and higher-order terms are given in Appendix <ref>. These expressions are manifestly gauge invariant and vanishing as z→ 0. We emphasize that they should be viewed as valid in 4 dimensions only, although they are expressed in terms of covariant quantities. ^(c,5)_ TS-FZ(z) =2i p_1p̅ a a(a F_2_1p̅+a_1 F_2p̅)((a a) (p̅ F_1 F_2p̅) /12 m^2-11/60 (a F_1 F_2 a) ) ^(c,6)_ TS-FZ(z) =(a a)^3 (p_1 p)^2 ( p F_1 F_2 p)^2/9 m^4+13 (a a)^3 p_1 p p_1 F_2 F_1 p p F_1 F_2 p/90 m^2 -13 (a a)^3 p_1 p p_2 F_1 F_2 p p F_1 F_2 p/90 m^2+2 (a a)^2 p_1 p a p_1 p F_1 F_2 p a F_2 F_1 p/15 m^2 +16 (a a)^2 p_1 p a p_2 p F_1 F_2 p a F_2 F_1 p/45 m^2-(a a)^2 (p_1 p)^2 a F_1 F_2 a p F_1 F_2 p/m^2 -2 (a a)^2 p_1 p a p_2 p F_1 F_2 p a F_1 F_2 p/15 m^2-16 (a a)^2 p_1 p a p_1 p F_1 F_2 p a F_1 F_2 p/45 m^2 +14/45 a a (p_1 p)^2 (a F_1 F_2 a)^2+22/45 a a p_1 p a F_1 F_2 a a p_1 a F_1 F_2 p -22/45 a a p_1 p a F_1 F_2 a a p_2 a F_2 F_1 p . Before closing this section, we make several remarks on the various attempts at obtaining the gravitational Compton in the literature. The Teukolsky solution yields the scattering phase of the graviton scattered off the Kerr black hole, by imposing the asymptotic behaviours of the Teukolsky equations in the near-zone (near the horizon) and the far-zone (at infinity) <cit.>. The scattering amplitude is then determined by matching at the level of the scattering phase. In principle, not only the leading but also higher-order contributions in the PM expansion can enter the non-perturbative scattering phase. Hence for a meaningful comparison with the tree Compton amplitudes obtained from field-theory approaches, it is crucial to isolate the leading PM contribution in the scattering phase. For a complete comparison between these two types of approaches, loop-level Compton amplitudes are also necessary. On the other hand, the scattering phase obtained from the Teukolsky equation contains both near-zone and far-zone contributions. The far-zone contribution is rational, whereas the near-zone contains transcendental functions. The Compton amplitudes computed from field-theory approaches are postulated to capture the rational part. At the level of the scattering phase, there naturally is another type of ambiguities arising from rewriting the transcendental functions using various identities, which changes the rational terms. Therefore, it is also important to determine the rational terms in a way that is physically meaningful. In this paper, we consider and match with _ TS-FZ which is extracted from the far-zone contribution only. In <cit.>, another procedure is discussed which leads to the matching between _ HS and _ TS at a different point. It would be interesting to see if there exists a way of rewriting the transcendental functions such that the remaining rational terms are independent of z. § EIKONAL WITH SPIN AT 2PM In this section, we compute the eikonal phase <cit.> for the spinning Kerr binary at 2PM. The classical eikonal <cit.> is an important generating that computes physical observables. In the non-spinning case, several variations, such as the radial action <cit.>, the HEFT phase <cit.> and the WQFT eikonal <cit.>, have also been investigated. At 2PM, they are practically the equivalent, given by Fourier transforming the classical one-loop 2→ 2 scattering amplitude to the impact parameter space. As seen in related works <cit.>, these generating functions generally extend to the spinning case with spin-related subtleties. The eikonal and the resulting observables receive all types of contributions from the Compton amplitudes, as discussed in section <ref>. Our main focus is the classical one-loop amplitude is constructed from the covariant Compton amplitude, using unitarity-based methods <cit.>. We compute the general form of the one-loop integrand and perform the loop integration in the spin expansion explicitly up to 𝒪(a^8). For the sake of comparing with physical data, we include the contributions from the extra z-dependent contact terms at 𝒪(a^5) and 𝒪(a^6). Altogether, we find perfect agreement with the far-zone data. §.§ Contribution from the covariant Compton amplitude The classical one-loop amplitude is computed from the leading singularity as depicted below (with the mirror diagram implied), where the triple cut is given by two massless (graviton) cuts and one massive one, [baseline=([yshift=-0.8ex]current bounding box.center)]every node=[font=] (a) p_1; [right=1.5cm of a] (f2) [GR]𝐒; [right=1.5cm of f2] (c)p'_1; [above=2.0cm of a](ac)p_2; [right=1.0cm of ac] (ad) [dot]; [right=1.0cm of ad] (f2c) [dot]; [above=2.0cm of c](cc)p'_2; [above=1.0cm of a] (cutL); [right=3.0cm of cutL] (cutR); [right=0.5cm of ad] (att); [above=0.3cm of att] (cut20); [below=0.3cm of att] (cut21); * (a) – [fermion,thick] (f2)– [fermion,thick] (c), (f2)–[photon,ultra thick,momentum=ℓ_1](ad), (f2)– [photon,ultra thick,momentum'=ℓ_2] (f2c),(ac) – [fermion,thick] (ad)– [fermion,thick] (f2c)– [fermion,thick] (cc), (cutL)–[dashed, red,thick] (cutR), (cut20)–[ red,thick] (cut21) ; ℳ^(1)_a_1 a_2 = 1 2∑_h_i=±(32π G)^2 (4π)^D/2∫d^D ℓ_1 π^D/2δ ( m_2 v_2ℓ_1) ℓ_1^2 ℓ_2^2 ×(ℳ_3^-h_1 (-ℓ_1, v_2) ℳ_3^-h_2 (-ℓ_2, v_2) ℳ_4^h_1 h_2 (ℓ_1, ℓ_2, v_1)). Here the massive cut labelled by a solid red line gives the δ-function and we have formally lifted the massless cuts δ(ℓ_j^2) → i/ℓ_j^2, while keeping in mind that bubble or ultra-local terms, with zero or negative propagator powers for the massless propagators, should vanish. The velocities of the heavy bodies are parameterized such that v_1^2 = v_2^2 =1 and v_1 v_2 =γ as usual. The tree-level amplitudes _3 and _4 are given by eq. (<ref>) and eq. (<ref>). We use dimensional regularization in the loop integration, namely D=4-2ϵ. The summation is taken over the helicities of the gravitons and the completeness relation reads ∑_h = ±ε^μ_k ε^ν_k ε^*ρ_-kε^*σ_-k = 1 2(𝒫^μρ𝒫^νσ + 𝒫^μσ𝒫^νρ) - 1 D-2𝒫^μν𝒫^ρσ, where 𝒫^μν = η^μν - k^μ n^ν + n^μ k^ν k n with the reference null vector n^μ transverse to the polarization kε = nε=0. The covariant three- and four-point tree amplitudes are expected to hold in general dimensions and in practice we can simply set 𝒫^μν = η^μν. The reference vector always drops out. The dependence on the graviton polarization vectors in the tree amplitudes eq. (<ref>) and eq. (<ref>) always enters through the (dual) field strengths F_i^μν and F̃_i^μν and is completely manifest. Since the function G_1(x) depends only on momenta and spins, the above gluing can be performed without expanding in powers of spin. The resulting contribution to the one-loop integrand at all orders in spin is given in the . The presence of e^ℓ_1 a_j factors in eq. (<ref>) may seem worrisome. Before expanding in powers of spin, potential divergences due to such factors can be easily remedies by a shift of the spin vector a_j→ iã_j. We illustrate with a simple example that the integral in eq. (<ref>) does not behave worse than the non-spinning case does in the ultraviolet under this shift in Appendix <ref>. Having evaluated the integral, we can analytically continue back to real values of a_j. As for the spin expansion of this integral, it can be defined as the Taylor expansion of the well-defined integral at each order. Similar integrals are studied and the resummed expressions are given in special kinematic regions in <cit.>. Our preliminary analysis of simple examples in such kinematic regions show that the naive evaluation of the spin-expanded integral agrees with the expansion of the resummed one, order by order. We now proceed to consider the expansion of eq. (<ref>) in powers of spin. Expanding out the spin tensor S^μν and using the identity[We use the package for manipulating the expressions <cit.>.] ϵ^μ_1 ⋯μ_n α_1⋯α_mϵ_μ_1⋯μ_n β_1⋯β_m = (-1)^n n! δ^α_1⋯α_m_β_1⋯β_m, δ^α_1⋯α_m_β_1⋯β_m = m!δ^[ α_1 ._ β_1⋯δ^. α_m ]_ β_m, we can always write the integrand in terms of scalar products and at most one factor of ϵ(A,B,C,D) = ϵ_μνρσA^μ B^ν C^ρ D^σ. In fact, we find that at even orders in spin 𝒪(a^2n), the integrand contains only scalar products and at odd orders in spin 𝒪(a^2n+1), the integrand contains one ϵ(A,B,C,D) factor. Removing the bubble/ultra-local terms, the spin-expanded expression has two types of structures, ∫d^D ℓ_1 π^D/2δ (ℓ_1 v_2) (ℓ_1 a_1)^n_1 (ℓ_1 a_2)^n_2ℓ_1^2 ℓ_2^2 (ℓ_1 v_1)^m, ∫d^D ℓ_1 π^D/2δ (ℓ_1 v_2) (ℓ_1 a_1)^n_1 (ℓ_1 a_2)^n_2ℓ_1^μℓ_1^2 ℓ_2^2 (ℓ_1 v_1)^m, where n, n_1, n_2 and m are integers. The vector ℓ_1^μ in the second term above typically gets contracted with ϵ(A,B,C,·)_μ where A,B,C∈{q,v_1,v_2,a_1,a_2}. Such one-loop tensor integrals can be readily decomposed into scalar integrals. These scalar integrals are dressed by rank-n tensor structures. As shown in <cit.>, it is convenient to write down such tensor structures in terms of the following variables θ^μ_1 := γ v_2^μ-v_1^μγ^2 -1 , θ^μ_2 := γ v_1^μ-v_2^μγ^2 -1 , Π^μν := η^μν-θ_1^μ v_1^ν - θ_2^μ v_2^ν - q^μ q^ν q^2, where Π^μν is the (D-3)-dimensional metric. It is easy to verify that a tensor integral of the form ℐ_1,1,m,1[ℓ_1^μ_1⋯ℓ_1^μ_n] = ∫d^D ℓ_1 π^D/2δ (ℓ_1 v_2) ℓ_1^μ_1⋯ℓ_1^μ_nℓ_1^2 ℓ_2^2 (ℓ_1 v_1)^m can be decomposed into scalar integrals dressed by tensor structures built from q^μ, θ_1 and Π^μν only. Since all but at most one of these indices are to be contracted with a_1 or a_2, symmetries of the two structures in eq. (<ref>) lead to closed-form expressions for the decomposition. We present the explicit decomposition of the first structure in eq. (<ref>) here and postpone the derivation to Appendix <ref>, ℐ_1,1,α,1[(ℓ_1 a_1)^M (ℓ_1 a_2)^N-M] = ∑_n_1+n_2+2n_3=N, n_i⩾ 0, n_i∈ℤ1 N_n_3( ∑_k=0^n_3([ n_3; k ]) 1 (γ^2 -1)^k(q^2 2)^n_2+n_3-kℐ_1,1,α-n_1-2k,1[1] ) ×( ∑_cond. C_N,M,n_1,n_2,m_1,m_2,m_3  (a_1θ_1)^m_1 (a_2θ_1)^n_1-m_1 (a_1 q)^m_2 (a_2 q)^n_2-m_2 (a_1Π a_1)^m_3 (a_1Π a_2)^m_4 (a_2Π a_2)^n_3-m_3-m_4), where N_n_3 = (D-3) (D-1) ⋯ (D-3+2(n_3-1)) for n_3 >0 and N_0 =1. The summation in the last bracket is taken over all the solutions to the conditions below m_1+m_2+2m_3+m_4 =M, 0⩽ m_i ⩽ n_i, 0⩽ m_4 ⩽ n_3, m_i∈ℤ. The coefficients read C_N,M,n_1,n_2,m_1,m_2,m_3 = M! m_1! m_2! m_3! m_4!(N-M)! (n_1-m_1)! (n_2-m_2)! (n_3-m_3)!1 2^n_3-m_4 . The second structure in eq. (<ref>) admits a similar expression with one free index, which we also give in Appendix <ref>. After decomposing the tensor integrals, the resulting scalar integrals are easily cast in a basis of master integrals using the IBP relations.[We use the package for the IBP reduction <cit.>.] Up to 𝒪(a^8), we see that only the triangle integral below contributes in 4 dimensions,[The box integral ℐ_1,1,1,1 appears after the IBP reduction. But its coefficient vanishes when we restrict to 4 dimensions.] ℐ_1,1,0,1 = ∫d^D ℓ_1 π^D/2δ (ℓ_1 v_2) ℓ_1^2 ℓ_2^2 = 2^5-Dπ^2 (-q^2)^(D-5)/2 sec( π D/2 ) Γ( D 2-1 ) . Evaluating the integral, we arrive at the spin-expanded one-loop amplitude computed from the covariant Compton amplitude in eq. (<ref>). It is independent of the z-parameter. We find perfect agreements with the literature up to 𝒪(G^2 a^4) <cit.>. Since the terms at low orders in spin are well established, we choose not to display the relatively bulky expressions. Starting from 𝒪(G^2 a^5), to match with the far-zone data, z-dependent contact terms _ TS-FZ^(c)(z) need to be included. The gluing process involving eq. (<ref>) is slightly more subtle, which we are to discuss shortly. We include the spin-expanded one-loop amplitude from 𝒪(a^5) to 𝒪(a^8) in the . §.§ Contribution from extra contact terms Here we discuss the one-loop contribution from the z-dependent contact terms _ TS-FZ^(c) at 𝒪(G^2 a^5) and 𝒪(G^2 a^6). It is given by the diagram depicted in eq. (<ref>) with the four-point Compton amplitude substituted by eq. (<ref>). Since the covariant Compton amplitude in eq. (<ref>) suffices to match with the far-zone data up to 𝒪(a^4), the 𝒪(G^2 a^5)z-dependence only enters through the 𝒪(G^2 a_1^5 a_2^0) or 𝒪(G^2 a_1^0 a_2^5) terms. Hence the three-point amplitude can be taken as the non-spinning one. At 𝒪(G^2 a^6), we need both 𝒪(G^2 a^5_1 a_2^1) and 𝒪(G^2 a_1^6 a_2^0) (and their mirrors), which come from gluing up the two expressions in eq. (<ref>) with appropriate orders in the spin expansion of the three-point amplitude respectively. As demonstrated in the previous section, the z-dependent contact terms in eq. (<ref>) are constructed such that they agree with those in <cit.> in 4 dimensions. This allows for ambiguities and the covariant expressions, which are equivalent in 4 dimensions may have subtle differences, if naively extrapolated to general dimensions. Hence, their contribution to the one-loop amplitude needs to be constructed strictly in 4 dimensions as well. In 4 dimensions, the three-point amplitude in eq. (<ref>) simplifies to ℳ_3(1^+,p̅', p̅) = -iκ e^+p_1 a (p̅ε_1^+)^2, ℳ_3(1^-,p̅', p̅) = -iκ e^-p_1 a (p̅ε_1^-)^2. Recall that the extra contact terms _ TS-FZ^(c) also vanish in the same-helicity configurations, namely, _ TS-FZ^(c,5)(1^±,2^±,p̅',p̅)=_ TS-FZ^(c,6)(1^±,2^±,p̅',p̅) =0. Consequently, their one-loop contribution is given by [baseline=([yshift=-0.8ex]current bounding box.center)]every node=[font=] (a) p_1; [right=1.5cm of a] (f2) [GR]; [right=1.5cm of f2] (c)p'_1; [above=2.0cm of a](ac)p_2; [right=1.0cm of ac] (ad) [dot]; [right=1.0cm of ad] (f2c) [dot]; [above=2.0cm of c](cc)p'_2; [above=1.0cm of a] (cutL); [right=3.0cm of cutL] (cutR); [right=0.5cm of ad] (att); [above=0.3cm of att] (cut20); [below=0.3cm of att] (cut21); [above right=0.2cm and 0.45cm of cutL] (hpL) +; [below right=0.2cm and 0.6cm of cutL] (hmL) -; [above left=0.2cm and 0.45cm of cutR] (hmR) -; [below left=0.2cm and 0.6cm of cutR] (hpR) +; * (a) – [fermion,thick] (f2)– [fermion,thick] (c), (f2)–[photon,ultra thick](ad), (f2)– [photon,ultra thick] (f2c),(ac) – [fermion,thick] (ad)– [fermion,thick] (f2c)– [fermion,thick] (cc), (cutL)–[dashed, red,thick] (cutR), (cut20)–[ red,thick] (cut21) ; + [baseline=([yshift=-0.8ex]current bounding box.center)]every node=[font=] (a) p_1; [right=1.5cm of a] (f2) [GR]; [right=1.5cm of f2] (c)p'_1; [above=2.0cm of a](ac)p_2; [right=1.0cm of ac] (ad) [dot]; [right=1.0cm of ad] (f2c) [dot]; [above=2.0cm of c](cc)p'_2; [above=1.0cm of a] (cutL); [right=3.0cm of cutL] (cutR); [right=0.5cm of ad] (att); [above=0.3cm of att] (cut20); [below=0.3cm of att] (cut21); [above right=0.2cm and 0.45cm of cutL] (hpL) -; [below right=0.2cm and 0.6cm of cutL] (hmL) +; [above left=0.2cm and 0.45cm of cutR] (hmR) +; [below left=0.2cm and 0.6cm of cutR] (hpR) -; * (a) – [fermion,thick] (f2)– [fermion,thick] (c), (f2)–[photon,ultra thick](ad), (f2)– [photon,ultra thick] (f2c),(ac) – [fermion,thick] (ad)– [fermion,thick] (f2c)– [fermion,thick] (cc), (cutL)–[dashed, red,thick] (cutR), (cut20)–[ red,thick] (cut21) ; = 1 2(32π G)^2 (4π)^D/2∫d^D ℓ_1 π^D/2δ ( m_2 v_2ℓ_1) ℓ_1^2 ℓ_2^2 ( e^(ℓ_1-ℓ_2) a_2𝒞_+- + e^-(ℓ_1-ℓ_2) a_2𝒞_-+), where 𝒞_±∓ = .(p̅_2ε_1^±)^2 (p̅_2ε_2^∓)^2 _ TS-FZ^(c)(ℓ_1^∓,ℓ_2^±,p̅'_1,p̅_1) |_ completeness relation In this case, we have to use the “chiral” version of the completeness relation. Equivalently, we can separate the even and odd terms in spin, as discussed in <cit.>. At even orders in spin, we have cosh(ℓ_12 a_2)(𝒞_+-+𝒞_-+) = cosh(ℓ_12 a_2) (𝒞_+-+𝒞_+++𝒞_–+𝒞_-+), where ℓ_12=ℓ_1-ℓ_2 and 𝒞_+++𝒞_–=0. Here we can use the same completeness relation in eq. (<ref>), since we have ε^+μ_k ε^+ν_kε^-ρ_-kε^-σ_-k + ε^-μ_k ε^-ν_kε^+ρ_-kε^+σ_-k for both gravitons. At odd powers in spin, we have instead sinh(ℓ_12 a_2)(𝒞_+--𝒞_-+) = sinh(ℓ_12 a_2) (𝒞_+--𝒞_+++𝒞_–-𝒞_-+). Hence we have ε^+μ_k ε^+ν_kε^-ρ_-kε^-σ_-k - ε^-μ_k ε^-ν_kε^+ρ_-kε^+σ_-k = 1 4(𝒫^μρ𝒫̃^νσ + 𝒫^μσ𝒫̃^νρ +𝒫^νρ𝒫̃^μσ + 𝒫^νσ𝒫̃^μρ), for one of the two gravitons, where 𝒫̃^μν = iϵ^μνρσ k_ρ n_σ k n. This way we guarantee that the remnant ambiguities introduced in the process of lifting the spinor-helicity expression for _ TS-FZ^(c) to the covariant one have no impact at one loop. This concludes our discussion on the one-loop contribution from the extra contact terms. Together with the contribution from the covariant Compton amplitude, we find agreement with the far-zone data in <cit.> up to 𝒪(G^2 a^6), which is the highest-order result to our best knowledge. Comparing with the 2PM calculation from the higher-spin amplitudes <cit.> may be helpful for finding an all-order-in-spin remedy for the z-dependence. §.§ 2PM eikonal in the spin expansion Generally speaking, the eikonal exponentiation can be identified with the classical amplitude Fourier transformed to the impact parameter space. The classical eikonal phase is then extracted order by order. Recent works <cit.> show that the classical eikonal can be viewed as a generator for canonical transformations.[Another work <cit.> shows that the radial action can be understood in a similar fashion.] Up to 2PM, the relation between the amplitude and the eikonal remains straightforward, namely χ^ (nPM)_τ = 1 4 m_1 m_2 √(γ^2-1)∫d^D-2𝐪 (2π)^D-2 e^-i𝐛𝐪^(n-1)_a_1a_2 U_1 U_2, where 𝐛 is the impact parameter and τ labels the Thomas-Wigner rotation factors <cit.> U_i = e^i τℰ_i E m_i (E_i + m_i), ℰ_i := E 𝐬_i (𝐩×𝐪) with E_i denoting the energy of the two black holes and E=E_1+E_2. The bold-faced letters denote the three-vectors. The three-momenta are given in the center-of-mass (COM) frame, namely p_1 = (E_1,𝐩),  p_2=(E_2,-𝐩), q=(0,𝐪), and 𝐬_i is the spatial part of s_i^μ = m_i a_i^μ. Different values of τ correspond to different choices of the spin supplementary conditions (SSC): τ=0 corresponds to the covariant SSC p̅_μ S^μν=0 whereas τ=1 the canonical SSC. (For more details, see also <cit.>.) Dressing the amplitude with the Thomas-Wigner rotation factors is effectively equivalent to shifting the impact parameter 𝐛→𝐛 + ∑_i=1,2(-τ E_i+m_i) 𝐩×𝐬_i. The canonical observables follow from the the canonical SSC. Here we showcase the explicit expression for the z-independent 𝒪(G^2 a^7) contributions to χ^ (2PM)_τ=0 in the aligned-spin case, which is given by the covariant Compton amplitude. To compare with the far-zone data, it needs to be supplemented by the contributions from the z-dependent contact terms _ TS-FZ^(c,n=5,6,7), which are given in eq. (<ref>) and Appendix <ref>. We parameterize the aliged spins as a_2^μ = ξ a_1^μ and hence the powers of a_1 and a_2 are tracked by the power of ξ. . χ^ (2PM)_τ=0|_ξ^4, z=0 = -35 π G^2 ξ ^4 γ(24 (9 γ ^2-4) m_1+ (114-239 γ ^2) m_2) /512 (γ ^2-1)^3/2 |𝐛|^8 b̂ S_1 p_2 (429 (b̂ a_1 )^6- 594 a_1^2 (b̂ a_1 )^4+216 a_1^4 ( b̂ a_1)^2 -16 a_1^6 ), . χ^ (2PM)_τ=0|_ξ^5, z=0 = -21 π G^2 ξ ^5 γ b̂ S_1 p_2/512 (γ ^2-1)^3/2 |𝐛|^8(2145 ((38 γ ^2-15) m_1+26 (1-2 γ ^2) m_2) (b̂ a_1)^6 -33 a_1^2 ((3421 γ ^2-1351) m_1+2340 (1-2 γ ^2) m_2) (b̂ a_1)^4 +36 a_1^4 ((1141 γ ^2-451) m_1+780 (1-2 γ ^2) m_2) (b̂ a_1)^2 -8 a_1^6 ((381 γ ^2-151) m_1+260 (1-2 γ ^2) m_2)), . χ^ (2PM)_τ=0|_ξ^6, z=0 = -7 π G^2 ξ ^6 γ b̂· S_1· p_2/4096 (γ ^2-1)^3/2 |𝐛|^8(-16 a_1^6 (16 (81 γ ^2-26) m_1+15 (77-149 γ ^2) m_2) -66 a_1^2 (16 (721 γ ^2-226) m_1+135 (77-149 γ ^2) m_2) (b̂ a_1)^4 +72 a_1^4 (16 (241 γ ^2-76) m_1+45 (77-149 γ ^2) m_2) (b̂ a_1)^2 + 2145 (16 (16 γ ^2-5) m_1+3 (77-149 γ ^2) m_2) (b̂ a_1)^6 ), . χ^ (2PM)_τ=0|_ξ^7, z=0 = -π G^2 ξ ^7 γ b̂ S_1 p_2 /1024 (γ ^2-1)^3/2 |𝐛|^8(2145 (6 (8 γ ^2-1) m_1+7 (9-17 γ ^2) m_2) (b̂ a_1)^6 -198 a_1^2 ((727 γ ^2-97) m_1+105 (9-17 γ ^2) m_2) (b̂ a_1)^4 +108 a_1^4 ((493 γ ^2-73) m_1+70 (9-17 γ ^2) m_2) (b̂ a_1)^2 -8 a_1^6 ((513 γ ^2-93) m_1+70 (9-17 γ ^2) m_2)), where we define b̂^μ = b^μ /|b|. The terms that are not listed above can be obtained by swapping the two particles χ^ (2PM)|_ξ^n = (χ^ (2PM)|_ξ^7-n)_1↔ 2. Following the prescription in <cit.>, the impulse Δ𝐩_⊥ and the spin kick Δ𝐬_1 can be generated from the eikonal phase χ_τ=0= χ^ (1PM)_τ=0 + χ^ (2PM)_τ=0 + 𝒪(G^3) via[It is noted in <cit.> that the operator 𝒟_SL is not necessary once all the constraints are properly imposed.] Δ𝒪 = -{𝒪,χ} -1 2{χ, {𝒪,χ}} -𝒟_SL( χ, {𝒪,χ}) + 1 2{𝒪, 𝒟_SL(χ, χ)}, where Δ𝒪 ={Δ𝐩_⊥, Δ𝐬_1 } and the full impulse is given by Δ𝐩 = Δ p_∥ 𝐩/|𝐩| + Δ𝐩_⊥, which satisfies the on-shell condition (𝐩+Δ𝐩)^2 = 𝐩^2. The Poisson bracket and the spin derivative operators are defined as {𝐩_⊥, f} := - ∂ f ∂𝐛, {𝐬_1, f} := ∂ f ∂𝐬_1 ×𝐬_i, 𝒟_SL(f,g):=-𝐬_1 ( ∂ f ∂𝐬_1∂ g ∂𝐋), with 𝐋:=𝐛×𝐩. § CONCLUSIONS AND OUTLOOK We consider a particular proposal for the classical gravitational Compton amplitude, which follows from bootstrapping techniques and is expressed in covariant variables. This covariant expression is consistent with all possible massless and massive factorization requirements at all orders in spin and contains a contact contribution that is obtained by imposing the same empirical patterns as those observed in other contributions. In this paper, we further the analysis in <cit.> regarding the contact terms. In particular, we verify the matching between the covariant Compton amplitude and the one derived from the higher-spin theory <cit.> up to 𝒪(a^20), when taking the spheroidicity parameter z→ 0. We also devise a simple procedure to find a covariant form which accounts for the missing contact contributions compared to those extracted from the far-zone data. We confirm that such contributions can be written as a polynomial in z and vanish as z→ 0 up to 𝒪(a^8). We believe such patterns extend to higher orders in spin. These observations corroborate somewhat the folklore that the z-dependent contact terms are associated with the internal structures of the Kerr black hole. Further studies on multiple fronts are needed to uncover the precise nature of such contact contributions. From the covariant Compton amplitude, the calculation of the classical one-loop amplitude on the triple cut using unitarity-based methods is then streamlined. We obtain an all-order-in-spin integrand and evaluate the integral up to 𝒪(a^8) in the spin expansion. We have also included the contributions from z-dependent contact terms, for which the computation involves an extra subtlety. The Fourier transform of the one-loop amplitude to the impact parameter space gives the eikonal phase. We find perfect agreement with the 2PM far-zone data up to 𝒪(a^6). It is certainly interesting to consider the one-loop integral without expanding in spin. Two directions are natural to look into, the binary dynamics and the waveform at higher PM orders. The binary dynamics at 𝒪(G^3) is given by the two-loop 2→ 2 amplitude and the waveform at 𝒪(G^5/2) is given by the one-loop amplitude with a graviton radiation. The five-point tree-level gravitational Compton amplitude is needed in both cases. Computing the five-point amplitude to an arbitrary order in spin is by itself a challenge, although it is conceivable that the bootstrapping techniques should carry over. Even without the five-point amplitude at hand, one may consider mass sectors, for instance the “zig-zag” diagrams at two loops in <cit.>, which only involve the three- and four-point amplitudes in the construction of loop amplitudes. The behaviours of the loop amplitudes in these sectors may in turn shed new light on additional constraints for the structures of the gravitational Compton amplitudes. § ACKNOWLEDGEMENT We thank F. Y. Baustista, E. Bjerrum-Bohr, A. Brandhuber, G. R. Brown, J. Gowdy, H. Johansson, J.-W. Kim, S. Lee, P. Pichini, M. Skowronek, G. Travaglini for insightful discussions. TW is grateful to H. Johansson, J.-W. Kim and S. Lee for discussions on their respective works and comments on the manuscript. GC has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 847523 “INTERACTIONS”. TW is supported by the NRF grant 2021R1A2C2012350 and the Fellowship of China Postdoctoral Science Foundation (No. 2022M713228). § EXTRA CONTACT TERMS Here we present the explicit expressions for the extra contact terms that give the z-dependence of the Compton amplitude extracted from the Teukolsky solution at 𝒪(a^7) and 𝒪(a^8). We choose only the contribution obtained from fitting the far-zone asymptotic. ^(c,7)_ TS-FZ= (284 a p_1+211 a p_2) (p_1 p)^2 p F_1 F_2 p p F_1_2 p (a a)^3/945 m^4 +8 p_1 p (a F_1 p_2 p_1 F_2 p+a F_2 p_1 p_2 F_1 p) ( p F_1_2 p- p F_2_1 p) (a a)^3/945 m^2 +(211 a p_1+284 a p_2) (p_1 p)^2 p F_1 F_2 p p F_2_1 p (a a)^3/945 m^4 +p_1 p (99 (a p_1)^2 m^2-101 a p_1 a p_2 m^2-78 a a (p_1 p)^2) a F_1 F_2 p p F_1_2 p (a a)^2/1890 m^4 +4 p_1 p (-33 (a p_1)^2 m^2+23 a p_1 a p_2 m^2+60 a a (p_1 p)^2) a F_2 F_1 p p F_1_2 p (a a)^2/945 m^4 +p_1 p (-99 (a p_2)^2 m^2+101 a p_1 a p_2 m^2+78 a a (p_1 p)^2) a F_2 F_1 p p F_2_1 p (a a)^2/1890 m^4 -101 a p_1 (p_1 p)^2 a F_1 F_2 a p F_1_2 p (a a)^2/135 m^2-101 a p_2 (p_1 p)^2 a F_1 F_2 a p F_2_1 p (a a)^2/135 m^2 -4 p_1 p (-33 (a p_2)^2 m^2+23 a p_1 a p_2 m^2+60 a a (p_1 p)^2) a F_1 F_2 p p F_2_1 p (a a)^2/945 m^4 +212/945 a p_1 (p_1 p)^2 a F_1 F_2 a a F_1_2 a a a+212/945 a p_2 (p_1 p)^2 a F_1 F_2 a a F_2_1 a a a +p_1 p (-121 (a p_1)^2 m^2-121 (a p_2)^2 m^2+190 a p_1 a p_2 m^2+372 a a (p_1 p)^2) /630 m^2    × ( a F_1 F_2 a a_1 F_2 p a a) ^(c,8)_ TS-FZ= 64 (p_1 p)^2 p_1 F_2 p p_2 F_1 p p F_1 F_2 p (a a)^4/2835 m^4+32 (p_1 p)^2 p_1 F_2 F_1 p_2 p F_1 F_2 p (a a)^4/2835 m^2 +(p_1 p)^2 (m^2 (174 (a p_1)^2-55 a p_2 a p_1+174 (a p_2)^2)-278 a a (p_1 p)^2) ( p F_1 F_2 p)^2 (a a)^3/5670 m^6 +8 (a p_1-a p_2)/945 m^2(p_1 p a F_1 p_2 p_1 F_2 p p F_1 F_2 p (a a)^3+ p_1 p a F_2 p_1 p_2 F_1 p p F_1 F_2 p (a a)^3) +p_1 p (m^2 a p_1 (936 a p_1-883 a p_2)-483 a a (p_1 p)^2) p_1 F_2 F_1 p p F_1 F_2 p (a a)^3/11340 m^4 +p_1 p (a p_2 (883 a p_1-936 a p_2) m^2+483 a a (p_1 p)^2) p_2 F_1 F_2 p p F_1 F_2 p (a a)^3/11340 m^4 +(p_1 p)^2/5670 m^4(-13 m^2 (132 (a p_1)^2-223 a p_2 a p_1+132 (a p_2)^2) +1987 a a (p_1 p)^2) a F_1 F_2 a p F_1 F_2 p (a a)^2 +(p_1 p)^2 (2 m^2 (255 (a p_1)^2-443 a p_2 a p_1+255 (a p_2)^2)-487 a a (p_1 p)^2) (a F_1 F_2 a)^2 a a/5670 m^2 +a p_1 p_1 p (3 m^2 a p_1 (33 a p_1-73 a p_2)-859 a a (p_1 p)^2) a F_1 F_2 a a F_1 F_2 p a a/1890 m^2 +a p_2 p_1 p (3 a p_2 (73 a p_1-33 a p_2) m^2+859 a a (p_1 p)^2) a F_1 F_2 a a F_2 F_1 p a a/1890 m^2 +(a a)^2 p_1 p /5670 m^4(3 (-72 (a p_1)^3+136 a p_2 (a p_1)^2+83 (a p_2)^2 a p_1-27 (a p_2)^3) m^2 +a a (2288 a p_1+289 a p_2) (p_1 p)^2) a F_1 F_2 p p F_1 F_2 p +(a a)^2 p_1 p/5670 m^4(3 m^2 (27 (a p_1)^3-83 a p_2 (a p_1)^2-136 (a p_2)^2 a p_1+72 (a p_2)^3) -a a (289 a p_1+2288 a p_2) (p_1 p)^2) a F_2 F_1 p p F_1 F_2 p § UV BEHAVIOURS Here we discuss the UV behaviour of the one-loop integral (<ref>). Before expanding in spin, the potential divergences can be avoided, when we shift the spin vector to the imaginary axis a_j → iã_j. We first illustrate this process with a simple example. Consider the double-copy contribution to the Compton amplitude eq. (<ref>), of which the spin-dependence is given in eq. (<ref>). Naively, the worst UV behaviour would have come from the term with G_1(x_1)G_2(x_2), yielding the highest UV scaling at one loop as follows, ∫d^D ℓ_1π^D/2δ(m_2 v_2ℓ_1) ℓ_1^2 ℓ_2^2 G_1(ℓ_1 a_1) G_1 (ℓ_2 a_1) cosh(ℓ_1 a_2) cosh(ℓ_2 a_2)ℓ_1^μℓ_1^ν, where we have implicitly ℓ_2=q-ℓ_1 as always. Performing the shift a_j^μ= iã_j^μ, the exponential of ℓ_1 a_j becomes finite. That is, we have the following scaling as ℓ_1 →Λℓ_1 and Λ→∞, cosh(ℓ_1 a_j)∼𝒪(Λ^0), G_1(ℓ_1 a_j)∼𝒪(Λ^-1), G_2(ℓ_1 a_j, ℓ_2 a_j)∼𝒪(Λ^-1). This way, we see that the one-loop integral above is well regulated. We note that the particular derivatives of the entire functions appearing in eq. (<ref>) are more suppressed, namely (∂_a_1ℓ _1-∂_a_1ℓ _2)G_2(a_1ℓ _1,a_1ℓ _2)∼𝒪(Λ^-2), (∂_a_1ℓ _1-∂_a_1ℓ _2)(G_1(a_1ℓ _1)G_1(a_1ℓ _2))∼𝒪(Λ^-2). Since all the entire functions involved in eq. (<ref>) are all finite under the shift of the spin vector, the UV scaling boils down to the remaining factors given in section <ref>, which can be straightforwardly examined. We find that all the terms in eq. (<ref>) are indeed well-defined under the shift in a similar fashion, which allows us to evaluate it. In the end, we can analytically continue back to the original real values of the spin vectors. § DECOMPOSITION OF TENSOR INTEGRALS Here we derive the closed-form expression for the decompositions of the tensor integrals (<ref>). The variables given in eq. (<ref>) satisfy the orthogonality relations below, qθ_i = θ_1θ_2 = 0, v_i θ_j = δ_ij, q_μΠ^μν = θ_iμΠ^μν = v_iμΠ^μν = 0. This yields the orthogonality of the tensor basis constructed from these building blocks. We denote the (tensor) integrals as ℐ_α_1,α_2,α_3,α_4[f(ℓ_1)] ≡∫d^Dℓ_1 π^D/2δ^(α_4-1)(ℓ_1 v_2) f(ℓ_1) ℓ_1^2α_1 (q-ℓ_1)^2α_2 (ℓ_1 v_1)^α_3. We consider the special case where we only have non-vanishing integrals with α_1 = α_2 = α_4 =1, since the integrand in eq. (<ref>) can not have higher propagator powers in α_1 or α_2 and ultra-local terms (negative values of α_1 or α_2) are excluded. The scalar integral family is simply ℐ_1,1,α_3,1[1]. The two types of integrals in (<ref>) are ℐ_1,1,α_3,1[(ℓ_1 a_1)^M (ℓ_1 a_2)^N-M] and ℐ_1,1,α_3,1[(ℓ_1 a_1)^M (ℓ_1 a_2)^N-Mℓ^μ]. We consider the tensor numerator with free indices first. The most general ansatz for the decomposition of ℐ_1,1,α_3,1[ℓ_1^μ_1⋯ℓ_1^μ_N] reads ℐ_1,1,α_3,1[ℓ_1^μ_1⋯ℓ_1^μ_N] = ∑_n_1+n_2+2n_3=N, n_i⩾ 0, n_i∈ℤ ℂ_n_1 n_2 n_3 sym[ θ_1^μ_1⋯θ_1^μ_n_1 q^μ_n_1+1⋯ q^μ_n_1+n_2. . Π^μ_n_1+n_2+1μ_n_1+n_2+2⋯Π^μ_N-1μ_N], where sym[⋯] denotes the sum of distinct tensor structures obtained from index permutations. Generally speaking, the coefficients ℂ_n_1 n_2 n_3 can be obtained by contracting both sides of the ansatz with all possible rank-n tensor structures built from {v_i^μ, q_i^μ, Π^μν} and solving the linear equations. Thanks to the orthogonality of the tensor structures, these equations are already diagnolized and we can simply read off the coefficients ℂ_n_1 n_2 n_3 as follows, ℂ_n_1 n_2 n_3 = 1 N_n_3ℐ_1,1,α_3,1[(ℓ_1 v_1)^n_1 (ℓ_1 q)^n_2 (ℓ_1Πℓ_1)^n_3], where we have N_n_3 = (D-3) (D-1) ⋯ (D-3+2(n_3-1)) for n_3 >0 and N_0 =1. The remaining integral can be rewritten in terms of scalar integrals, ℐ_1,1,α_3, 1[(ℓ_1 v_1)^n_1 (ℓ_1 q)^n_2 (ℓ_1Πℓ_1)^n_3] = ∑_k=0^n_3([ n_3; k ]) 1 (γ^2 -1)^k(q^2 2)^n_2+n_3-kℐ_1,1,α_3-n_1-2k,1, where we have repeatedly used ℓ_1 q= 1 2( ℓ_1^2 +q^2- (q-ℓ_1)^2) and that scaleless integrals are vanishing. The contraction between the symmetrized tensor structures in (<ref>) and the spin vectors can be worked out. We have Cont(N,M,n_1,n_2) ≡sym [ θ_1 ⋯θ_1^n_1 q⋯ q^n_2 Π⋯Π^n_3 ] a_1⋯ a_1_Ma_2⋯ a_2_N-M = ∑_cond. C_N,M,n_1,n_2,m_1,m_2,m_3  (a_1θ_1)^m_1 (a_2θ_1)^n_1-m_1 (a_1 q)^m_2 (a_2 q)^n_2-m_2 (a_1Π a_1)^m_3 (a_1Π a_2)^m_4 (a_2Π a_2)^n_3-m_3-m_4, where the sums are taken over the solutions to the conditions below m_1+m_2+2m_3+m_4 =M, 0⩽ m_i ⩽ n_i, 0⩽ m_4 ⩽ n_3, m_i∈ℤ. The coefficients read C_N,M,n_1,n_2,m_1,m_2,m_3 = M! m_1! m_2! m_3! m_4!(N-M)! (n_1-m_1)! (n_2-m_2)! (n_3-m_3)!1 2^n_3-m_4 . Hence we have arrived at the decomposition of ℐ_1,1,α_3,1[(ℓ_1 a_1)^M (ℓ_1 a_2)^N-M]. The remaining integral ℐ_1,1,α_3,1[(ℓ_1 a_1)^M (ℓ_1 a_2)^N-Mℓ_1^μ] is nothing but the above with one free index and can be dealt with similarly. We first rewrite the symmeterized sum of the tensor structures as follows, sym [ θ_1 ⋯θ_1^n_1 q⋯ q^n_2 Π⋯Π^n_3 ] = θ_1^μ_Nsym [ θ_1 ⋯θ_1^n_1-1 q⋯ q^n_2 Π⋯Π^n_3 ] + q^μ_Nsym [ θ_1 ⋯θ_1^n_1 q⋯ q^n_2-1 Π⋯Π^n_3 ] +∑_i=1^N-1Π^μ_i μ_Nsym [ θ_1 ⋯θ_1^n_1 q⋯ q^n_2 Π⋯Π^n_3-1 ]. Contracting with a_1^μ_1⋯ a_1^μ_M a_2^μ_M+1⋯ a_2^μ_N-M-1, we have sym [ θ_1 ⋯θ_1^n_1 q⋯ q^n_2 Π⋯Π^n_3 ] a_1⋯ a_1_Ma_2⋯ a_2_N-1-M = θ^μ_N_1 Cont(N-1,M,n_1-1,n_2) + q^μ_NCont(N-1,M,n_1,n_2-1) + M (a_1Π)^μ_NCont(N-2,M-1,n_1,n_2) + (N-1-M)(a_2Π)^μ_NCont(N-2,M,n_1,n_2). JHEP
http://arxiv.org/abs/2406.08452v1
20240612174432
Einstein Gravity from Einstein Action: Counterterms and Covariance
[ "Martin Krššák" ]
gr-qc
[ "gr-qc", "hep-th" ]
Department of Theoretical Physics, Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava, 84248, Slovak Republic Department of Astronomy, School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, China Einstein Gravity from Einstein Action: Counterterms and Covariance Martin KrššákElectronic address: June 17, 2024 =================================================================== § ABSTRACT The field equations of general relativity can be derived from the Einstein action, which is quadratic in connection coefficients, rather than the standard action involving the Gibbons-Hawking-York term and counterterm. We show that it is possible to construct a new counterterm directly for the Einstein action, which removes divergences and naturally introduces a flat reference spacetime. The total action is then covariant under simultaneous transformation of both the spacetime and reference tetrads, and argue that this is analogous to the Gibbons-Hawking action. We then explore different perspectives arising naturally from different uses of the reference tetrad, and explore implications of viewing gravity as fundamentally described in terms of non-covariant connection coefficients. § INTRODUCTION The standard approach to general relativity <cit.>, which mostly follows the original Einstein's path <cit.>, starts with studying the motion of a free-falling particle and identifying gravity with the spacetime geometry. The non-tensorial Christoffel symbols are recognized to play the role of the total inertial and gravitational forces acting on the particle. However, when we formulate the field equations for the geometry itself, we usually invoke the principle of covariance and demand that the field equations are tensorial. We are then lead to consider the Riemann tensor, removing all of the nontensorial coordinate-dependent behaviour of Christoffel symbols, and we are practically uniquely led to Einstein field equations. An interesting situation occurs if we try to derive these field equations from the variational principle. As is well-known, it was Hilbert who almost simultaneously with Einstein derived the field equations from the action principle using what is nowadays known as the (Einstein-)Hilbert action <cit.>, given by the scalar curvature _H= 1/2κ∫_ℳ√(-g) R, where κ=8π in the natural units. The remarkable property of this action is that the scalar curvature contains second derivatives of the metric, and hence one would naively expect fourth-order field equations. However, as it turns out, the higher derivatives form a non-dynamical total derivative term and we are left with second-order field equations. It was recognized only in the 1970s by York <cit.> and Gibbons and Hawking <cit.> that, due to the total derivative term, we can derive the field equations only if we require variations of both the metric and their normal derivatives to vanish at the boundary. This generally represents a consistency problem for the variational problem, and can be solved by adding the so-called Gibbons-Hawking-York (GHY) boundary term <cit.> _GHY= 1/κ∫_ ∂ℳ√(-γ)𝒦, where 𝒦 is the extrinsic curvature of the boundary and γ_μν is the induced metric on the boundary. The GHY term (<ref>) is chosen in a such way that its variation cancels out the variation of the total derivative term in (<ref>), and then the Einstein field equations can be derived consistently, i.e., assuming only variations of the metric to vanish at the boundary <cit.>. The motivation of Gibbons and Hawking to consider (<ref>) was not just to vary the action and obtain the field equations, but to find a finite Euclidean gravitational action solutions for studying black hole thermodynamics using path integrals, i.e. to evaluate the action on-shell <cit.>. Therefore, adding (<ref>) turned out to be crucial since (<ref>) vanishes in vacuum and hence the GHY term dominates the action. However, including the GHY term renders the action divergent in the general case, requiring regularization by adding a counterterm. The total gravitational action then takes the form _grav=_H+_GHY +_counter, and we face the challenge of finding an appropriate counterterm to eliminate divergences. We are primarily interested in removing IR divergences coming from the limit r→∞, since the Euclidean time becomes periodic. In asymptotically flat spacetimes, the original proposal by Gibbons and Hawking <cit.>, known nowadays as the background subtraction, was to add a counterterm given by the GHY term for the reference or background metric _μν, which is a flat Minkowski metric in which the spacetime metric is isometrically embedded. The total gravitational action (<ref>) then depends on both the spacetime and reference metrics and can be written as <cit.> _grav(g,)=_H(g)+_GHY(g) +_counter(). The well-known problem is that the isometric embedding of a metric in Minkowski spacetime does not have to exist, what lead to development of new regularization methods, including holographic renormalization, where the counterterm is a local covariant function of the intrinsic geometry of the boundary <cit.>, and the recent counterterm for null boundaries <cit.>. Besides the challenge of determining the appropriate counterterm in specific situations, the presence of the counterterm can be seen as “somewhat awkward" <cit.> and naturally leads to questions that has likely perplexed anyone encountering the full gravitational action (<ref>) for the first time: How come that in a fully covariant theory that excludes any background structures, the divergences are removed using the reference/background metric? Moreover, what is actually being subtracted from the gravitational action? § EINSTEIN ACTIONS While Einstein originally did not derive the field equations from the action principle, he did introduce his own action in October 1916 <cit.>, when he proposed the Lagrangian _E= 1/2κ√(-g)g^μν( Γ^ρ_σμΓ^σ_ρν- Γ^ρ_μνΓ^σ_ρσ ), which can be obtained from the Hilbert Lagrangian by separating the contribution of the second derivative terms into a total derivative term <cit.> _H=_E+_tot. Varying this Lagrangian with respect to the metric, we do obtain the vacuum field equations in their potential form <cit.>, where we essentially separate the terms containing second derivatives of the metric and the terms quadratic in first derivatives of the metric <cit.>. The quadratic terms will be given by <cit.> t^μ_ν=1/2[_Eδ^μ_ν - ∂_E/∂ (∂_μ g^ρσ)∂_ν g^ρσ], known at the Einstein energy-momentum pseudotensor, from which the gravitational energy-momentum can be defined. Additionally, other pseudotensors, such as the symmetric pseudotensor proposed by Landau and Lifshitz <cit.>, or Weinberg <cit.>, can also be introduced. The key characteristic shared by all these pseudotensors is that they are not tensors; they are expressions quadratic in Christoffel symbols, and hence can always be made to vanish at a point using the Ricci normal coordinates. In order to find meaningful predictions for the total energy-momentum, these pseudotensors need to be evaluated in well-behaved coordinate systems, which usually asymptotically approach the inertial coordinate system <cit.>. Despite their non-tensorial dependence on the coordinates, these pseudotensors do provide correct answer for the gravitational energy-momentum that agrees with the Hamiltonian method <cit.>. The tetrad version of (<ref>) was rather accidentally discovered by Einstein in the attempt to unify gravity and electromagnetism in late 1920s. To this end, we consider a set of orthonormal vectors h_a called the tetrad at each point of spacetime, with h^a=h^a_μ dx^μ being its inverse and having components h^a_μ in some coordinate basis. The tetrads form a non-coordinate basis and hence the components of the spacetime metric can be written as g_μν=η_abh^a_ μ h^b_ ν, where η_ab=diag(-1,1,1,1), allowing us to use the tetrads instead of the metric as a fundamental variable. The tetrads h^a are independent of coordinates and change under transformation of a non-coordinate basis as h^a→Λ^a_b h^b, where Λ^a_b is a local Lorentz transformation to ensure preservation of orthonormality. We can define the coefficients of anholonomy f^c_a b = h_a^μ h_b^ν (∂_ν h^c_μ - ∂_μ h^c_ν ), and follow Einstein[ Actually Einstein presented this in a very different way in terms of the torsion tensor that was supposed to represent both gravity and electromagnetism in the unified theory <cit.>. However, as explained in the recent paper <cit.>, Einstein's “torsion tensor" is actually just the coefficient of anholonomy and hence he effectively did what is presented here. We return to the teleparallel framework and torsion in Section <ref>.] to search for a Lagrangian quadratic in the coefficients of anholonomy that yields symmetric field equations. Einstein found such a Lagrangian in 1929 <cit.> _E= -h/2 κ(1/4 f^ρ_μνf_ρ^μν+1/2 f^ρ_μν f^νμ_ρ -f^νμ_ν f^ν_μν). While this Lagrangian looks naively very different from both the Hilbert (<ref>) and Einstein (<ref>) Lagrangians, recalling that the Ricci rotation coefficients are related to the coefficients of anholonomy through <cit.> ω^a_bc=1/2[f_b^a_c + f_c^a_b - f^a_bc], we can recast it as _E= h/2κ(ω^a_ caω^bc_b -ω^a_cbω^bc_a). From here it should be obvious that (<ref>) is a tetrad version of (<ref>), and hence Einstein in 1929 indeed “just" rediscovered his earlier 1916 Lagrangian (<ref>) in tetrad formalism <cit.>. The alternative approach is to start with the tetrad Hilbert action, i.e. a scalar curvature in terms of the Ricci rotation coefficients (<ref>), and separate the total derivative term along the lines of (<ref>), resulting in (<ref>). This was first done by Møller in 1961 <cit.>, and hence (<ref>) is often referred as the Møller Lagrangian. It is then possible to define an analogue of the Einstein pseudotensor known as the Møller complex, which is a coordinate-tensor but a pseudotensor with respect to (<ref>), from which the total energy-momentum can be defined as well <cit.>. § COUNTERTERMS FOR THE EINSTEIN ACTION We can now use the fact that the Hilbert action can be decomposed as (<ref>) in both the metric and tetrad formulation, and rewrite the total gravitational action (<ref>) as _grav=_E+_tot+_GHY +_counter. We can then observe that the dynamics is fully contained in the Einstein term _E, from which the field equations are derived, and which is divergent, similar to (<ref>) before adding the counterterm. This brings us to the idea that the total derivative and GHY terms are not necessary in the total gravitational action (<ref>), and are rather just remnants of the mathematical formulation where the Hilbert action (<ref>) is the starting point. We would like to argue here that it is possible to find a counterterm directly for the Einstein action, i.e. write the total gravitational action as _grav=_E +_counter, where the counterterm is capable to not only regularize the Einstein action, but also ensure that the total gravitational action (<ref>) is as covariant as the Gibbons-Hawking gravitational action (<ref>). To construct the counterterm, we choose to do so for the tetrad (<ref>) Einstein action , but the construction for the metric Einstein action (<ref>) is analogous. We use the fact that the Einstein action (<ref>) under a change of the non-coordinate basis (<ref>), transforms as[This can be obtained from a rather lengthy but straightforward calculation, or seen directly from the equivalent teleparallel formulation discussed in Section <ref> and using the results of <cit.>.] _E→_E+ 1/κ∂_μ[ h h_a^ν h_d^μη^bdΛ^a_ c∂_ν (Λ^-1)^c_ b], and consider a reference tetrad representing a general tetrad in Minkowski spacetime ^a= Λ^a_b ^b, where ^b is the tetrad with components ^b_ μ=diag(1,1,1,1) in the Cartesian coordinate system. Since the Riemannian spin connection[The Riemannian spin connection in the general spacetime is defined from the Ricci rotation coefficients (<ref>) as ω^a_bμ=ω^a_bch^c_μ] vanishes for ^b, we find that for the reference tetrad (<ref>) it can be written as ω^a_bν (^a)= Λ^a_ c∂_ν (Λ^-1)^c_ b, which follows from transformation properties of the spin connection <cit.>, and is precisely the term appearing in (<ref>). We then choose the counterterm as the total derivative term in (<ref>) expressed in terms of the reference tetrad (<ref>), i.e. _counter(h^a,^a)= 1/κ∂_μ^μ(h^a,^a)= 1/κ∂_μ[ h h_a^ν h_d^μη^bdω^a_bν (^a) ], and the full gravitational action takes the form _grav(h^a,^a)=_E(h^a) +_counter(h^a,^a), where h^a and ^a are the spacetime and reference tetrads, respectively. Based on the analogy with the Gibbons-Hawking action (<ref>), we require that the total action (<ref>) vanishes for the Minkowski spacetime, what is achieved if we identify the full tetrad with the reference tetrad, i.e. h^a=^a. In asymptotically flat spacetimes, we require that the spacetime tetrad reduces to the reference tetrad in the limit r→∞, same as in the case of Gibbons-Hawking action (<ref>). Let us illustrate this on the example of the Schwarzschild solution. We take the diagonal tetrad h^a_μ=diag(f,f^-1,r,rsinθ), f^2=1-2M/r, for which the Einstein action (<ref>) is _E=1/κ∫_sinθ=1/2∫ d t . r |_r_A^r_B, where r_A and r_B are the bounds of integration. In the case of asymptotically flat spacetime, r_B→∞ and the action diverges regardless of whether we choose r_A to be at the origin or the horizon. We can then consider the reference tetrad ^a_μ=diag(1,1,r,rsinθ), and calculate the counterterm as _counter=1/κ∫_2M+(f-2)r/frsinθ. Adding it to (<ref>) and integrating the spatial part, we obtain _grav=_E+_counter=∫ d t . r(1-f)|_r_A^r_B. However, when performing the integration leading to (<ref>), we must be careful about the singularity and discontinuity of the Lagrangian at the horizon. We need to ensure that we integrate only over the manifold patches where the function is smooth. Therefore, we can consider the action either for the exterior or the interior solution of the black hole. The exterior solution is relevant in the Euclidean case, which covers only the exterior region of a black hole and time becomes periodic with a period β=8π M. We then choose r_A=2M and r_B→∞, and find the Euclidean action S_grav=-i β M, which is exactly twice the value of the original result by Gibbons and Hawking <cit.> using (<ref>)[ Note that the same factor 2 was noticed for the metric Einstein action (<ref>), or rather its teleparallel formulation discussed in Section <ref>, in <cit.>. Moreover, it was suggested to the use of the so-called canonical frames could resolve the problem of the factor of 2 <cit.>. For an alternative resolution, see our upcoming paper <cit.>. ]. The interior solution is relevant for the recently proposed complexity=action duality, where it corresponds to the Wheeler-de Witt (WdW) patch at late times <cit.>. In this case, we are primarily interested not in the action itself but rather its time derivative known as the action growth. Choosing r_A=0 and r_B=2M, we find the action growth at the WdW patch to be d_grav/d t=2M <cit.>, which exactly agrees with the standard GHY method for the interior region of a black hole <cit.>. Moreover, this agreement for the interior solutions can be found not only in the Schwarzschild case, but also when the electric charge and negative cosmological constant are included <cit.>. § FOUR FACES OF THE GRAVITATIONAL ACTION The total gravitational action can be then viewed from four different perspectives. Besides the standard Gibbons-Hawking action (<ref>), the Einstein action with the appropriate counterterm (<ref>) is a viable alternative. Moreover, two further perspectives on the gravitational action can be found by using the reference tetrad in different ways. The first option is to effectively eliminate ^a and view the theory within the special class of tetrads. We start with {h^a,^a} and consider some local Lorentz transformation Λ^a_b that transforms ^a to the Cartesian tetrad ^a, what turns the counterterm into zero and the reference tetrad will effectively disappear from the theory. We then obtain a special class of tetrads for which the Einstein action (<ref>) will give the correct regularized action without a counterterm. These are the so-called proper tetrads known from teleparallel gravity <cit.>. The advantage of this perspective is that the existence of proper tetrads explains why the Møller complex evaluated in a special class of tetrads gives the correct finite prediction for the energy-momentum: the proper tetrads are the tetrads in which the counterterm (<ref>) is trivial, and the Einstein action (<ref>) is regular and hence the Møller complex derived from it gives the correct regularized conserved charges. An analogous argument in the metric formalism for the original Einstein's 1916 action (<ref>) explains why the Einstein (<ref>) and other pseudotensors have to be evaluated within preferred coordinate systems to achieve sensible finite predictions: these preferred coordinate systems are the coordinate basis in which the metric analogue of (<ref>) is trivial. The last option is to move away from the realm of Riemannian geometry and recast (<ref>) as the action of teleparallel gravity <cit.>. We just have to use the fact that (<ref>) can be identified as the teleparallel connection ^a_bμ utilized in teleparallel gravity <cit.>, and use it instead of ^a. The total action (<ref>) can be then written fully in terms of the Riemannian ω^a_bμ and teleparalel ^a_bμ connections. These two connections are related through the Ricci theorem, which states that the difference between two such connections is necessarily proportional to the contortion tensor ^a_bμ=^a_bμ-ω^a_bμ. We can then recast the total Lagrangian (<ref>) as <cit.> _TG(h^a_μ,^a_bμ)= - h/2 κ[^abc_cba-^ac_a^b_cb]. The equivalence can also be seen from the fact that the resulting total action in the Schwarzschild case (<ref>) is equivalent to the results obtained within the teleparallel framework using either working with proper tetrads or calculating the spin connection <cit.>. For further discussion and alternative viewpoints, see <cit.>. The advantage of the teleparallel form of the action is that (<ref>) is expressed in terms of the tensorial quantities and is manifestly covariant under the simultaneous transformation of the tetrad and teleparallel connection. Therefore, we can view teleparallel geometry as a geometric framework where the Einstein action (<ref>) is naturally covariantized <cit.>, albeit in a sense that we discuss in the following section. § COVARIANCE OF THE GRAVITATIONAL ACTION While both the Einstein term (<ref>) and the counterterm (<ref>) are not covariant under a change of tetrad (<ref>), their combination in the total action (<ref>) is covariant under a simultaneous transformation of both the tetrad and reference tetrad {h^a,^a}→{Λ^a_b h^b,Λ^a_b ^b}. Note that this is different from covariance under a change of h^a alone, and is analogous to the situation in the teleparallel framework, where the action (<ref>) is covariant under simultaneous transformation of both the tetrad and spin connection. This is often viewed as an analogue of the Stückelberg trick <cit.>, and is recently a matter of discussion whether this covariance should be taken seriously or considered artificial <cit.>. We have constructed the regularized Einstein action (<ref>) in analogy with the construction of the full gravitational action (<ref>), and hence it should be obvious that it is as natural/artificial as the full gravitational action of general relativity (<ref>). Therefore, covariance of (<ref>), or (<ref>), should be discussed not in comparison with the Hilbert term alone but rather with respect to the total action including all the boundary terms (<ref>). While the Hilbert action is covariant with respect to transformations of the metric/tetrad alone, the covariance of the boundary terms is a rather subtle issue. The reason is that the GHY term is covariant with respect to transformation of the boundary metric, which is determined through γ^μν=g^μν -n^μ n^ν, where n^μ is a normal vector to the boundary. Therefore, the GHY term is covariant only under simultaneous transformation of both g^μν and n^μ in the bulk spacetime. Indeed, the extrinsic curvature scalar 𝒦=∇_μ n^μ is a scalar only if we transform both the bulk metric and normal vector simultaneously. There are then only two options to keep the total gravitational action (<ref>) covariant. The first is to transform both the spacetime and reference metrics simultaneously, what is a direct analogue of (<ref>). In this case, a non-tensorial change of the GHY term under a change of the spacetime metric alone is counterbalanced by a non-covariant change of the GHY term for the reference metric. The other viable option is to keep the reference metric fixed, and transform both the spacetime metric and the normal vector simultaneously[Although this option is rather unconventional since the reference metric is not in the same coordinate system as the spacetime metric. It is then not clear how the reference metric can be properly matched to the full metric.], what ensures the covariance of the GHY term and, hence, the total action (<ref>). In both cases, the total gravitational action (<ref>) is not covariant under transformation of the spacetime metric alone, requiring transformation of some additional structure: either the reference metric or the normal vector. § DISCUSSION AND CONCLUSIONS The connection coefficients play the key role in general relativity where they represent accelerations measured by the observers. However, due to the principle of covariance, they are used only through the curvature tensor and covariant derivatives derived from them, effectively eliminating all of their non-tensorial behavior from the theory. While this is certainly a reasonable demand for the field equations, it appears to be incompatible with the action principle. If we demand a finite action with a well-posed variational principle, we are forced to supplement the Hilbert term by the GHY term and an appropriate counterterm (<ref>). However, these terms turn out to depend not only on the bulk metric but also on foliations and/or the reference metric, which must be transformed simultaneously with the metric to render the total action covariant. We have argued here in favor of reappraisal of the Einstein actions, (<ref>) and (<ref>), which are quadratic in connection coefficients. These actions do contain all the dynamics necessary to derive the field equations, but transform by a surface term under a change of basis that cause divergences. It is then possible to use this feature and define a counterterm that removes divergences and makes the total action (<ref>) covariant under simultaneous transformation of both the spacetime and reference tetrads (<ref>). We have argued that this analogous to covariance of the Gibbons-Hawking action (<ref>). Note that the idea of reference/background spacetimes was first introduced by Rosen, who proposed covariantization of pseudotensors using a reference metric back in 1940 <cit.>. The difference with the approach presented here is that the reference spacetime enters the action only through the total derivative term, and hence is guaranteed to not change the dynamics. The idea of Rosen is actually naturally realized in teleparallel framework, where we recognize that the Riemannian connection of the flat background spacetime has zero curvature and hence defines a teleparallel connection that can be used to recast the action into teleparallel framework (<ref>). Indeed, our approach was motivated by previous results in teleparallel gravity <cit.>. In this paper, we have decided to start with the well-known Einstein action in order to demonstrate that analogous construction can be made entirely within the Riemannian geometry. We built the counterterm based on the analogy with the standard Gibbons-Hawking term (<ref>). As a result, we were able to show analogies among the Gibbons-Hawking action (<ref>), Einstein action with the counterterm (<ref>), and the teleparallel action (<ref>), as well as provide an explanation why the original Einstein action (<ref>) had to be evaluated in some privileged tetrad. All these different viewpoints came up naturally depending on how we used the reference tetrad ^a. Our goal was to highlight the fundamental role of connection coefficients in general relativity and to illustrate that the covariance of the field equations does not necessarily justify the common assertion that only covariant objects are meaningful. The covariant field equations can be derived from the non-covariant Einstein action, and this action, when supplemented by an appropriate counterterm (<ref>), is as meaningful (and as covariant) as the Gibbons-Hawking action (<ref>). Our approach naturally explains the necessity to subtract some reference structure from the gravitational action. If the gravitational action is covariant only under simultaneous transformation (<ref>), it is clear that a reference tetrad or metric must be introduced. This also provides a rather straightforward explanation of what is being subtracted from the gravitational action using the reference spacetime. By acknowledging that the fundamental action is constructed from the connection coefficients, we are merely removing the effects associated with the non-tensorial character of the connection coefficients. One can then argue that these are the effects related to the choice of an observer, i.e., inertial effects. § ACKNOWLEDGEMENTS This work was funded through SASPRO2 project AGE of Gravity: Alternative Geometries of Gravity, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 945478. Style
http://arxiv.org/abs/2406.08029v1
20240612092959
Metaverse Identity: Core Principles and Critical Challenges
[ "Liang Yang", "Yan Xu", "Pan Hui" ]
cs.CY
[ "cs.CY" ]
[Liang Yang]organization=Division of Emerging Interdisciplinary Areas, The Hong Kong University of Science and Technology, country=Hong Kong SAR [Yan Xu]organization=Information Systems, Business Statistics and Operations Management, The Hong Kong University of Science and Technology, country=Hong Kong SAR [Pan Hui]organization=Computational Media and Arts Thrust, The Hong Kong University of Science and Technology (Guangzhou), city=Guangzhou, country=China § ABSTRACT This paper explores the core principles that should guide the construction and governance of identity in the metaverse and identifies the critical challenges that need to be addressed. Drawing on multidisciplinary theories and perspectives, we propose two core principles for metaverse identity: Equivalence and Alignment, and Fusion and Expansiveness. The first principle contends that metaverse identities should be consistent with real-world identities in terms of norms and standards, which is crucial for establishing guidelines and safeguarding rights. The second principle emphasizes the necessity for seamless integration and boundless expansion of metaverse identities, transcending real-world limitations to accommodate diverse needs and foster inclusive participation. We argue that these two principles are vital for ensuring the accountability, inclusiveness, and consistency of identity in the metaverse. We also identify five critical challenges: Identity Interoperability, Legal Implications, Privacy and Identity Management, Deepfakes and Synthetic Identities, and Identity Fragmentation and Psychological Well-being. We discuss potential strategies to navigate these challenges. The paper concludes by underscoring the importance of a proactive and collaborative approach to shaping the future of metaverse identity. As the metaverse continues to evolve, it is imperative that we cultivate a thorough understanding of the principles and challenges surrounding identity in this uncharted territory and work collectively to build a metaverse that fosters responsible identity construction and expression. * Utilizing an interdisciplinary and multi-perspective approach, this research elucidates the unique characteristics of identity within the metaverse and delineates existing research gaps. * This study articulates two core principles for the governance of identity in the metaverse: "Equivalence and Consistency" and "Integration and Expandability." * Furthermore, the paper systematically identifies and examines five critical challenges that impede the effective management of identity in the metaverse. MetaverseIdentityGovernancePrinciplesInteroperabilityPrivacyDeepfakesSynthetic IdentityIdentity Fragmentation § INTRODUCTION The metaverse is envisioned as the future of the Internet, aspiring to offer a spatial and social Internet experience that seamlessly blend the physical and digital realms using both existing and emerging technologies <cit.>. Over the past few decades, and particularly in the last three years, scholarly research on the metaverse has extensively covered various dimensions, including technological advancements <cit.>, ecosystem dynamics <cit.>, social impacts <cit.>, economic potential <cit.>, and user behavior and experiences <cit.>. However, the development and adoption of the metaverse also presents new challenges. In January, the International Criminal Police Organization(INTERPOL) released a research report titled "Metaverse - A Law Enforcement Perspective." This report underscores that, despite its nascent stages, the metaverse already poses urgent technological, social, and ethical challenges that demands immediate attention. Notably, the metaverse has facilitated the emergence "meta-crimes," new types of criminal activities that are increasingly concerning as immersive worlds become more integral to our daily lives. Moreover, the report highlight significant governance and policy issues, with the construction of identity within the metaverse being a the primary one. It notes that the anonymity of identities within the metaverse has been exploited for illegal activities such as trafficking stolen goods or promoting unlawful behavior. Additionally, the legal status of avatars remains undefined, prompting the international community to actively explore their legal responsibilities and definitions <cit.>. In March, the World Economic Forum(WEF) released a report entitled "Metaverse Identity: Defining the Self in a Blended Reality." This report elucidates that the metaverse acts as a conduit merging the digital and physical worlds, thereby revolutionizing interactions with information, individuals, and environments. Within this transformative context, identity stands out as a critical element, essential for promoting inclusivity, fairness, accessibility, security, and privacy within the metaverse. The report warns that neglecting the evolution of identity in the metaverse could result in the perpetuation of existing internet flaws. Poorly designed metaverse identities could adversely affect social mobility in the physical world—amidst our increasing reliance on digital platforms—and hinder the secure and private identification, authentication, and verification of individuals or digital entities. Furthermore, these identities may unintentionally endow technology with human-like attributes, cultivating excessive trust and potentially leading to various psychological and emotional harms <cit.>. The two research reports underscore that the rapid advancement of metaverse technology has elevated identity to a foundational discussion topic within the domain. Identity, by its nature, is a broad socio-technical concept; similarly, Metaverse Identity is a complex, multidimensional, and multilayered concept <cit.>. Existing research often treats identity merely as a foundational concept within the metaverse, supporting discussions across diverse topics in various disciplines and perspectives.WEF notes a significant gap in a comprehensive understanding framework for Metaverse Identity that encompasses diverse stakeholder perspectives <cit.> . This paper aims to systematically review academic research concerning Metaverse Identity, to explore its intrinsic characteristics and impacts, and to propose core principles that aid stakeholders in intuitively understanding both Metaverse Identity and its broader impacts. Additionally, this study seeks to identify the current critical challenges, aiming to provide theoretical references and insights for technological innovations, ethical standards, and policy developments related to Metaverse Identity. Employing a comprehensive, primarily qualitative methodology, this study will integrate literature reviews, textual analysis, comparative studies, and case studies to construct a multidimensional analytical framework. This diversified methodological approach, proven effective for complex subjects, aligns with strategies used by other scholars in studies of virtual identities <cit.>. Acknowledging the inherent limitations due to the expansive scope of our analysis, this paper seeks to catalyze further interdisciplinary dialogue and research. It is envisioned that future studies will build on this groundwork to provide more profound analyses and robust solutions. § LITERATURE REVIEW The investigation of Identity represents a crucial philosophical inquiry into human existence. At a fundamental level, Identity encompasses personal details about who they are such as name, age, gender, place of origin, and nationality. However, the Cambridge Dictionary expands this definition to include it as "the fact of being, or feeling that you are, a particular type of person, organization, etc.; the qualities that make a person, organization, etc. different from others"[https://dictionary.cambridge.org/dictionary/english/identity]. This broader perspective aligns with philosophical explorations of the 'Self', epitomized by questions like "Who am I? Where do I come from? Where am I going?" <cit.>. Historically, philosophers like Aristotle have recognized that Identity is shaped both biologically and socially—humans are not merely biological entities but also subjects within social contexts <cit.>. In contemporary discourse, the concept of Identity has gained significance with developments in psychiatry, psychology, and sociology. For example, <cit.> in "Life on the Screen" pioneered the exploration of the relationship between technology and identity by investigating how digital technologies interpret and describe human behavior. Since then, extensive research shows that digital technologies not only shape our identities but also influence how we interact with them <cit.>. Later,<cit.> in "Constructing the Self in a Digital World" argues that Identity is a dynamic framework and an continuously evolving, fluid process within digital contexts. Digital technologies such as blogs, social networks, and virtual worlds, not only facilitate the exploration and expression of our identities but also actively shape them. Research focusing on platforms like Facebook illustrates how these tools enable users to overcome physical limitations, thereby enabling the creation of identities that would otherwise be unattainable <cit.>. Moreover, emerging technologies such as virtual reality (VR), augmented reality (AR), artificial intelligence (AI), and blockchain—key components of metaverse technologies—radically transform our perceptions, experiences, and interactions with the world <cit.>. As the distinction between digital and real-world identities becomes increasingly indistinct, understanding the interaction between digital and real-world identities, as well as the changes brought to them by the metaverse, is paramount. Compared to the Internet, the metaverse provides a more immersive, authentic, and diverse digital lifestyle, enabling users to engage with the digital world in ways that closely mimic real-world experiences <cit.>. For instance, within these virtual environments, users can project multidimensional information—such as preferred physical traits, natural movement habits, and genuine emotional states—through their digital avatars, thereby presenting themselves in a more vivid and three-dimensional manner <cit.>. the metaverse transforms identity from a mere concept to a lived experience, highlighting the need for a delicate balance between self-expression and privacy protection <cit.>. In the real world, identity encompasses diverse facets such as race, age, occupation, culture, hobbies, gender identity, and sexual orientation, which are crucial for self-presence, self-expression, and pride. Yet, these same attributes can also expose individuals to risks such as bullying, harassment, stalking, discrimination, litigation, legal actions, persecution, deception, or bias <cit.>. Identity in the metaverse inherits this duality from the real world, yet it becomes even more complex. the metaverse provides users with a creative space unrestricted by physical rules, allowing them to interact with other individuals and environments across various dimensions <cit.>. This freedom enables users to craft unique digital experiences, unleash creativity, and rethink their achievements, desires, and identities <cit.>. However, the complexity of identities in the metaverse emerges not only from their multi-layered nature but also from their dynamic evolution and diversity. Users can choose to present different identities in various virtual environments and even continuously adjust and change identity characteristics within the same environment <cit.>. Such fluidity and plasticity offer users unprecedented freedom of self-expression but also pose significant privacy and security challenges <cit.>. Neglecting identity consideration in the metaverse can restrict the range of social interactions and degrade the overall user experience, as it limits the ability to represent individuals diversely and safely <cit.>. Furthermore, from a policy and governance perspective, overlooking identity nuances or novel characteristics in the metaverse risks reinforcing existing hegemonic norms and anthropocentric biases, thereby perpetuating real-world biases into the metaverse. This not only impacts technology design and its intended users but also influences the societal norms and values that these technologies reinforce and propagate <cit.>. Metaverse Identity, akin to the broader concept of Identity, is a multi-layered and multi-dimensional construct that has been examined from various disciplinary and perspectives in the literature. It is predominantly considered a foundational term that underpins discussions across diverse topics in specific disciplines, such as philosophy <cit.>, psychology <cit.>, technology <cit.>, management <cit.>, and law <cit.>, which address issues and propose targeted solutions. Current research on Metaverse Identity can also be analyzed from multiple perspectives. The representation perspective treats it as the user's avatar in the metaverse, focusing on visual representation and interaction <cit.>. The data perspective considers it as the user's digital footprint and data aggregation, emphasizing data properties and the extraction of value, including specific identifiers, credentials, and biometric elements <cit.>. The social perspective views it through the lens of the user's social roles and relationships within the metaverse, highlighting social attributes and their interactive effects <cit.>. The economic perspectivedefines it as the user's economic entity and value carrier, often linked to blockchain technology, with a focus on asset properties and transactional potential <cit.>. Although these perspectives are not entirely independent and exhibit some overlap, they collectively address the primary facets of the discourse. Some researchers adopt an integrative approach, viewing Metaverse Identity as an extension of real-world identity into the virtual realm, thus emphasizing its multifaceted nature <cit.>. This array of perspectives, while not exhaustive, forms the core of the scholarly discourse on this subject. However, research specifically focused on the structured framework and principle of Metaverse Identity on its intrinsic characteristics and impacts remains limited. Establishing such a framework and principle is strategically significant, as recognized by WEF, because it facilitates consensus among a broad range of stakeholders and provides guidance for responsible development in the metaverse. Under this context, WEF emphasize that Metaverse Identity is an extension of identity as it is known today, encompassing representation, data, and identification (ID). Representation in the metaverse transcends traditional static profile pictures to include customizable digital assets such as avatars, AR fliters, and accessories. These elements reflects various identity aspects from cultural attire to abstract designs, blending physical and digital worlds across 2D screens, AR, and MR. Data points, enhanced by artificial intelligence/machine learning models, are capable of describing and generating identity. They achieve this by analyzing user interactions, movements, and preferences, capturing current activities, predicting future actions, and shaping perceptions. ID systems have evolved from traditional methods like passports and driver’s licenses to unique avatar designs, body-based attestations, or virtual signatures. These innovations validate existence and grant access to specific realms or activities within the metaverse. WEF also note that these three components extends beyond a tangible human to include other digital entities such as chatbots, avatars, and digital replicas. These entitties exhibit varying levels of interaction, autonomy, and behavior within digital experiences, playing vital roles in enabling and enhancing digital interactions <cit.>. While this framework significantly deepens our understanding of the structural elements of Metaverse Identity, it still lacks the principled guidance necessary for effectively understanding intrinsic characteristics as well as its broader impacts and challenges. This research aims to bridge this gap. § CORE PRINCIPLES OF METAVERSE INDENTITY This section initiates with the review of recent cases that have garnered significant attention, which represents the manifestations and ramifications of Metaverse Identity. Drawing upon these case analyses and the literature review, we articulate two guiding principles essential for comprehending the intrinsic characteristics, broader impacts, and challenges associated with Metaverse Identity. The first principle asserts that understanding and analyzing Metaverse Identity should adhere to behavioral norms that mirror, or are identical to, those in the physical world, emphasizing its importance in emotional experiences, cognitive perceptions, and economic interests. This principle not only facilitates a nuanced understanding among stakeholders of how identities are constructed, expressed, and perceived within the metaverse but also serves as a foundation for establishing norms and safeguarding rights inherent to these identities. The second principle highlights the dynamic and evolving nature of identity within the metaverse, shaped by interactions and mutual evolution, and underscores the unique role of Metaverse Identity in self-perception and the pursuit of values. By emphasizing the metaverse's capacity for enabling diverse identity expressions and fostering social inclusiveness, this principle prompts stakeholders to develop adaptable and responsive frameworks that protect users' identity rights while encouraging diverse expressions and creative innovations, thereby ensuring the metaverse's growth without overly restrictive regulations. §.§ Equivalence and Alignment In recent years, the incidence of sexual assaults within the metaverse has become as a pressing concern, presenting complex social and legal dilemmas <cit.>. For instance, in December 2021, a female Metaverse user reported that three male avatars had 'touched her[her avatar] inappropriately' <cit.>. Another disturbing incident occurred in April 2022, when a female user from Japan shared via Twitter her experience of being attacked and virtually raped in VRChat while she was asleep <cit.>. Additionally, the Daily Mail in the UK highlighted a particularly alarming case in January with its headline "British police probe Virtual Rape in metaverse: Young girl's digital persona 'is sexually attacked by gang of adult men in immersive video game' - sparking the first investigation of its kind and questions about extent current laws apply in online world" <cit.>. The incident involved a British girl under the age of 16 whose avatar was raped by multiple avatars controlled by strangers during a virtual reality game, leading to the first global investigation of such behaviors and raising significant legal inquiries regarding the applicability of existing laws in the metaverse. At first glance, these cases may seem to involve merely disputes over unethical conduct and criminal actions, along with attendant legal applicability issues. However, a deeper analysis reveals that they fundamentally reflect profound challenges associated with identity recognition in the metaverse <cit.>. The reason these incidents of sexual assault attract world-spread social attention and provoke strong reaction lies in the deep psychological identification that individuals have with their avatars <cit.>. If avatars were merely considered gaming tools or entertainment elements, violations against them would not significantly impact users in reality. Nevertheless, the victims' report of discomfort and psychological trauma suggest a profound association between their avatar and their identities. This strong identification means that harm inflicted on an avatar is perceived as harm to the self, eliciting genuine emotional and psychological responses <cit.>. The legal disputes triggered by these cases also underscore the unique nature of metaverse identities in both legal and social contexts. While violations of personal rights in the physical world inevitably entail legal repercussions, in the metaverse, the identity attributes of avatars remain defined, leading to ambiguities in rights protection <cit.>. This ambiguity necessitates a redefinition of the identities represented by avatars. Such incidents of 'sexual assault' not only reveal the high degree of integration between avatars and users' self-presence-a psychological state in which people equate their virtual selves with their actual selves <cit.>- but also demonstrate the significant influence of metaverse identities on real-life identity, highlighting the close connection between the metaverse and real life. It is critical to acknowledge that the identity dilemmas triggered by these 'sexual assault' cases represent just one of the numerous challenges posed by metaverse entities in the metaverse. OpenAI's recent release of ChatGPT-4o[https://openai.com/index/hello-gpt-4o/], an AI chatbot capable of human-like interaction, introduces a novel and complex dimension of challenges for identity recognition in the metaverse <cit.>. In this context, we propose the core principle - Equivalence and Alignment for conceptualizing and governing Metaverse Identity. Alignment refers to the consistency between metaverse identities and real-world identities concerning behavioral norms, social guidelines, and ethical standards. Equivalence emphasizes the parity between the two in aspects such as behavioral patterns, emotional experiences, cognitive perceptions, and economic interests <cit.>. Essentially, Alignment mandates that metaverse identities adheres to the same standards as real-world identities, while Equivalence underscores that the impact of virtual identities on users is as significant as that of real-world identities. We aim to simplify the complex attribute of Metaverse Identity through this principle, providing a clear and straightforward framework for developing metaverse identities. The principle also underscores the unique status of Metaverse Identity in contemporary legal and social contexts in metaverse era compared to in internet era, indicating the necessity of refining relevant regulations in the physical world. Furthermore, we intend to offer principled recommendations for policymakers, technology developers, and platform operators. While striving for deeper immersion and realism in the metaverse, it is imperative to prioritize the recognition and protection of user identities, thereby guiding necessary regulation and ethical oversight at the intersection of technology and ethics. §.§ Fusion and Expansiveness <cit.> recently published their study "Avatar Robot Cafe," which explores how individuals with disabilities employ robots, avatars and a hybrid cyber-physical environment to redefine their identities. In this study, seven participants with disabilities provided remote customer service at a café near the University of Tokyo using a combination of robots and personalized avatars. The results demonstrated that the avatars enabled participants to shift their identities during and after customer interactions. The study utilized longitudinal semi-structured interviews as one method to document these identity transformations. In the paper, participant P2, diagnosed with somatoform disorder, constrasted her experiences when represented by a robot and an alpaca avatar. While using the robot, P2 felt obligated to main professional behavior, adhering to the belief that "failure is not an option." Conversely, when represented by the alpaca avatar, she experienced greater freedom, allowing her to adopt more playful behavior and consequently becoming a favorite among customers. P2 noted, "With the alpaca, I can tell myself, 'It cannot be helped because I am just an alpaca.' I cannot do that with OriHime (the robot)." She added, "When a customer says to my avatar, 'P2, go for it!' I enjoy it because it makes me feel like I am accomplishing what I should do as an alpaca. Just walking on two legs makes me feel proud." P2 relished interacting with others as an alpaca and greatly enjoyed her identity transformation. Participant P3, who has cerebral palsy and uses a wheelchair, identifies himself as biologically female but has a fluid gender identity (male or X). P3 created an anime-style character perceived as male or gender-neutral, aligning with his gender preferences and pronouns. Typically subjected to perceptions of being "small and cute" due to his high-pitched voice and physical appearance, P3 noted significant shifts in how others treated him when using a taller, masculine avatar. This shift in perception facilitated a transformation in his self-expression, moving from the neutral Japanese first-person pronoun "watashi" to the more masculine informal "ore," and adopting a less direct but more assertive communication style. P3 described this as "finally becoming a boy," highlighting the profound impact of the avatar on his identity perception. The Avatar Robot Cafe The Avatar Robot Cafe case offers valuable insights into the intricate nature of metaverse identities. In this scenario, avatars serves as powerful tools that allow individuals to reshape their self-identity, transcend physical limitations, and engage more fully in social activities. For individual with disabilities in this case, avatars facilitate a novel identity experience and new modes of social interaction. As the metaverse continues to evolve, we believe that more such scenarios will emerge, allowing a broader and more diverse group of people to have similar experiences. These scenarios represented by this case leads us to propose the second core principle of Metaverse Identity: Fusion and Expansiveness. In the metaverse, Fusion refers to the profound integration of metaverse and real-world identities, creating an intertwined and mutually constructive holistic self. The metaverse identity anticulates needs, desires, and values that may be unexpressed or inexpressible in the real world, thus acting as an extension and expansion of the real-world identity. Conversely, the real-world identity provides foundational support for the metaverse identity, with both complementing each other. Expansiveness underscores the unprecedented possibilities for diverse expression and the infinite extension of identity in the metaverse. Individuals can transcend their inherent traits and conditions, crafting rich and varied alternative identities to present themselves in different forms and fulfill diverse needs, such as social interaction and self-realization. As illustrated by participants P2 and P3 in Avatar Robot Cafe case, who manifested new selves through an alpaca and a masculinized anime character respectively, the openness and inclusivity of the metaverse allow marginalized groups to break free from identity confinements and gain equal participation opportunities. The Fusion and Expansiveness principle further elucidates the complexity and significant of constructing identities in the metaverse. Metaverse identities are not merely collections of digital representation, data, and IDs <cit.>, but are authentic reflections of an individual's cognition, awareness, and socio-cultural interactions. They complement and shape the subjectivity of individuals in the metaverse-era alongside real-world identities. Furthermore, they underscore the emancipatory potential of the metaverse for individuals, emphasizing how the openness, plasticity, and fluidity of identity, empowered by technology, not only offer boundless possibilities for higher-dimensional self-realization but also hold revolutionary potential for promoting social equity and fostering a diverse, inclusive ecosystem. It is noteworthy to recognize the principles of Equivalence and Alignment and Fusion and Expansiveness are complementary rather than contradictory. Equivalence and Alignment emphasizes the congruence and parity between metaverse and physical-world identities concerning behavioral norms and rights protection. This principle acts as a normative foundation, ensuring that metaverse identities do not become havens for evading responsibilities or violating others' rights, thereby providing an essential ethical and legal framework for the integrating the metaverse and physical-world identities. Fusion and Expansiveness, on the other hand, highlights the dynamic interplay and mutual evolution between metaverse and physical-world identities through their interaction. It emphasizes the unique role of metaverse identities in shaping self-conception and pursuing values, highlighting their potential for creativity and agency. This principle serves as a catalyst for the multifaceted development of individuals and the advancement of social inclusivity. Therefore, in the governance and development of metaverse identities, it is imperative to uphold Equivalence and Alignment to regluate and guide behaviors in the metaverse, while also leveraging the potential of Fusion and Expansiveness to allow extensive exploration of identity and value creation. This dual approach necessitates the establishment of clear boundaries and behavioral guidelines for metaverse and real-world identities when formulating pertinent policies and regulations, while preserving sufficient flexibility and openness to encourage the expression and integration of diverse identities, thereby fostering social inclusively and innovation. Only through the dynamic equilibrium of these two principles can we realize a synergistic interaction between digital and real coexistence, ensuring the orderly operation of the metaverse ecosystem while unlocking the positive potential of technological advancement. § CRITICAL CHALLENGES OF METAVERSE INDENTITY Building upon the emblematic case analyses, we have identified two core principles-Equivalence and Alignment and Fusion and Expansiveness- to provide principled guidance necessary for effectively understanding the intrinsic characteristics and broader impacts of metaverse identities. This section will address the critical challenges that demand immediate attention, supported by these two principles. Firstly, under the principle of Equivalence and Alignment, we explore three pivotal challenges: identity interoperability, legal boundaries, and privacy management. Secondly, informed by the principle of Fusion and Expansiveness, we investigate risks associated with deepfakes & synthetic identities, and identity fragmentation. The elucidation of these five challenges serves to highlight key issues within metaverse identity systems, thereby encouraging further interdisciplinary research and exploration. §.§ Identity Interoperability Challenges Interoperability challenges are a critical obstacle to the development of the metaverse, as discussed by both academia and industry in recent years <cit.>. In the systematic literature review on metaverse interoperability, <cit.> emphasize that identity is the cornerstone of an interconnected Metaverse. Without a consistent and seamless identity transfer between the physical world and the metaverse, as well as across different platforms within the metaverse, true inter-connectivity remains unattainable. Equivalence and Alignment stress the consistency and equal status of metaverse identities and real-world identities in the terms of behavioral norms and rights protections. Just as our identities in the physical world an seamlessly transition, authenticate, and connect across different scenarios, metaverse identities should exhibit the same fluidity. This is essential for the metaverse to be a true replica and extention of the physical world <cit.>. Following the three components framework <cit.>, the current interoperable state of metaverse identity is as follows: At the identity representation layer, most metaverse platforms continue the practices of the internet era, requiring users to create separate avatars on different platforms <cit.>. At the data layer, users generate identity-related data such as personal attributes, social relationships, and behavioral records on various platforms, but this data is typically stored and managed in isolated silos by each platform <cit.>, preventing effective cross-domain data management and sharing. Finally, at the identification layer, centralized methods dominate despite exploratory applications such decentralized identity (DID) technologies, which remain immature stages with significant shortages <cit.>. This results in limited interoperability due to varying DID solutions across platforms, and the coexistence of centralized and decentralized methods complicates interconnected identification <cit.>, necessitating further resolution. These fragmented identities lack the unified digital identity mappings or authorization mechanisms, necessitating repeated customization and offering minimal support for mutual recognition and migration across platforms, which disrupts user identity consistency. Although the nascent stage of the metaverse's development is a primary factor contributing to these issues, it is imperative to consider the risks associated with path dependency in advance. The prevalent identity systems are predominantly centralized and designed around individual platforms, inherently lacking interoperability. Economic incentives often drive platforms to implement "walled garden" strategies, thus inhibiting the advancement of interoperability <cit.>. Furthermore, the absence of mature technical standards and subsequent legal frameworks constrains progress in this domain. It is also essential to acknowledge that the principle of Fusion and Expansiveness underscore the diversity and dynamic nature of metaverse identities, which can lead to more complex scenarios due to interoperability challenges. To overcome the barriers to these challenges, coordinated efforts across technological, institutional, and industrial dimensions are essential. Firstly, it is imperative to expedite the establishment of comprehensive norms and standards that ensure identity interoperability within the Metaverse. <cit.> advocates for the consideration and implication of a unique digital identity per person across multiple metaverses. The norms and standards should encompass not only the creation of identity, but also the related data storage and interfaces and authentication protocols. These measures are crucial for establishing a standardized technical framework that supports seamless identity interoperability within the metaverse. Secondly, legal and regulatory frameworks should be established to clarify the rights and obligations of various representative identities across different contexts, balancing the development of interoperability with privacy and security, and ensuring compliant data flow. Additionally, enhancing governance mechanisms by mobilizing platform enterprises, industry organizations, and regulatory bodies is necessary. This can be achieved through a multi-stakeholder identity trust alliance to foster robust collaboration in building interoperability. Lastly, fostering a healthy industry ecosystem around metaverse identity interoperability is essential. This includes creating data-sharing incentive mechanisms, conducting interoperability privacy certifications, and enhancing public education to increase user awareness. §.§ Legal Challenges of Identity in Metaverse As the metaverse continues to evolve and its applications expand, the challenges to legal boundaries become increasingly evident. In the cases of sexual assault discussed previously, the victims experienced substantial psychological harassment. Current legal frameworks, however, find it difficult to enforce traditional accountability and sanctions due to the lack of physical contact <cit.>. The report, "Sexual Violence and Harassment in the Metaverse: A New Manifestations of Gender-Based Harms," underscores the urgent need for updated national and international law enforcement to address this distinct form of metaverse-facilitated sexual violence and harassment <cit.>. With avatar-based interactions becoming commonplace in the metaverse, the frequency of such rights conflicts and illegal acts is expected to rise. Actions that would typically incur legal penalties in the real world, from civil liabilities like negligence and harassment to criminal offenses such as assault and theft, pose significant challenges in virtual settings of metaverse. The applicability and efficacy of existing laws within these virtual contexts are subjects of ongoing significant debate <cit.>. The continued advancement of artificial intelligence technologies, exemplified by models such as ChatGPT-4o, is leading to the emergence of increasingly complex digital entities, thereby escalating the challenges to existing legal frameworks. In September 2023, a court in South Korea sentenced an individual to two and a half years of imprisonment for utilizing AI technology to produce highly realistic virtual child pornography <cit.>. This landmark ruling, which is the first of its kind in South Korea, has attracted widespread international attention. The prosecution contended that the depiction of virtual characters, despite their fictitious nature, constituted a criminal act due to their lifelike portrayal of children <cit.>. This case has significantly blurred the boundaries between real and virtual entities, equating virtual characters with real children in legal considerations. Digital entities are redefining the concept of identity, extending it beyond physical individuals. These entities, capable of representing humans, demonstrate varied levels of interaction, autonomy, and behavioral expressions in the metaverse. They can emulate human communication and serve in roles such as virtual assistants, companions, or social media influencers <cit.>. For example, the Hong Kong University of Science and Technology has pioneered the use of AI-based digital personas, such as a simulation of Albert Einstein, to function as virtual lecturers in its metaverse classroom <cit.>. These entities transcend mere code or graphical representations; they interact with, influence, and sometimes represent individuals or organizations. Existing legal frameworks struggle to address issues of identity recognition, behavior evaluation, and accountability when these entities engage in detrimental activities <cit.>. One proposed solution from the academic realm to address the challenges posed by digital entities is to grant them the status of legal personhood, thus holding them accountable for their actions within the metaverse <cit.>. This would involve incorporating rights and responsibilities for these entities within the existing legal framework and granting them the capacity to be party to legal proceedings, which introduces significant complexity. Key issues include defining legal standards for the recognition of digital entities and managing the effects of their actions on both their controllers and other stakeholders, necessitating innovative legal and policy responses <cit.>. Drawing on the principles of Equivalence and Alignment, and paralleling real-world legal structures, could provide a feasible approach. Scholars such as <cit.> assert that cybercrimes within the metaverse, including stalking, assault, child exploitation, kidnapping, violations of intellectual property, and financial fraud, should be subject to legal repercussions analogous to those enforced in the physical domain. They advocate for the implementation of established legal procedures from the physical world by metaverse law enforcement agencies to address these offenses effectively. Furthermore, certain scholars propose extending the regulatory frameworks applicable to corporate entities to digital entities within the metaverse, thereby applying well-established corporate governance principles to forge an appropriate regulatory environment <cit.>. Furthermore, WEF advocates for the clear identification of digital entities, distinguishing between those driven by humans and artificial intelligence, and establishing a robust mechanism for fault liability <cit.>. As metaverse technologies advance and the convergence of virtual and real worlds becomes more pronounced, the projection and extension of individual identity elements into virtual spaces will escalate, intensifying legal challenges. This evolving scenario demands ongoing exploration and refinement of legal strategies <cit.>. §.§ Privacy and Identity Management Challenges The metaverse presents unprecedented challenges to user privacy. A study by <cit.>, involving over 50,000 players of the popular VR Game 'Beat Saber', highlights these concerns. Analysis of 2.5 million VR motion dataset through machine learning algorithms reveals that merely 100 seconds of data can uniquely identify a user with over 94.33% accuracy, and remarkably, only 2 seconds are needed to identify half of the users <cit.>. Further research indicates that this motion data can accurately infer a broad spectrum of personal characteristics, including bio-metric information such as height and wingspan, as well as demographic details like age and gender. It can even predict the user's country of origin and clothing type. Most concerning is the accurate capability to discern the presence of mental and physical disabilities <cit.>. This level of precision in identification intensifies when motion data are coupled with other tracked data within the metaverse. This situation suggest that anonymity in the metaverse may be impossible <cit.>. Contrary to traditional privacy measures that do not necessitate sharing sensitive data like fingerprints, the metaverse inherently involves the dissemination of motion data–a fundamental aspect of interaction that must be shared in real-time with all participants. Furthermore, unique motion patterns analyzed across various contexts–be it professional environments, social interactions, or private setting–facilitate the straightforward identification of individuals. Additionally, the inherent link between our physical movements and our motion data allows machine learning algorithms to correlate VR actions with real-world surveillance footage, leading to potential real-world tracking and identification <cit.>. In summary, without the development of innovative protective measures, maintaining privacy in the metaverse could be an insurmountable challenge, presenting significant and potentially transformative implications for identity management. To effectively address the unique challenges the metaverse poses to privacy, targeted measures are necessary both technologically and at the policy level. From a technological standpoint, various sophisticated methods, including local epsilon-differential privacy, adversarial machine learning, Trusted Execution Environments (TEE), and Secure Multi-Party Computation (MPC), can enhance user privacy <cit.>. However, these technologies can sometimes compromise user experience or remain underdeveloped <cit.>. No comprehensive and fully mature technical solutions currently exist, underscoring the importance of establishing robust privacy regulations to complement and guide the development while fostering the development of privacy-enhancing technologies. Addressing the metaverse's unique privacy challenges requires a dual approach: continuously improving privacy-enhancing technologies to keep pace with the metaverse’s rapid evolution, and developing targeted privacy protection measures tailored to its immersive, three-dimensional nature. For example, governing privacy in three-dimensional spaces necessitates distinct data processing and privacy protection measures for public areas (akin to offices) and private spaces (akin to homes). Such differentiation is crucial to ensure that privacy regulations in the metaverse reflect varying real-world privacy expectations, adhering to the principle of Equivalence and Alignment. By adopting this nuanced approach to privacy governance, we can foster a more secure and trustworthy metaverse that respects users' privacy rights while promoting innovation and growth.users. §.§ Challenges of Deepfake and Synthetic Identities One development trend in the metaverse is to enhance user experiences by making them more realistic, integrated, and immersive <cit.>, illustrating the principle of Fusion. This trend manifests in several key areas: the application of advanced algorithms such as Generative Adversarial Network(GANs) has significantly enhanced the realism in rendering virtual avatars and digital entities <cit.>. Innovative methods by researchers in facial expression <cit.>, motion capture <cit.>, and behavioral modeling <cit.> have similarly advanced the interactivity and authenticity of avatars and digital entities <cit.>. Moreover, there have been considerable advances in the generating and rendering of virtual environments <cit.>. The rapid evolution of AI, particularly in the field of generative AI, has simplified the creation of highly realistic, indistinguishable digital content, thus enhancing the immersive experience of the metaverse <cit.>. However, this technological advancement also poses significant security risks, such as the potential for creating deepfakes or synthesizing untraceable identities, thereby raising serious security concerns <cit.>. Deepfakes, manipulated media created using AI and deep learning designed to deceive viewers, have become increasingly prevalent, sparking significant societal discussion <cit.>. For instance, in 2023, an AI-generated image of the Pentagon explosion went viral, temporarily impacting the U.S. stock market <cit.>. Generative AI is transforming the creation and expression of digital media content, while the metaverse is poised to redefine the mechanism of content distribution of content distribution, experience, and interaction <cit.>. It is anticipated that the integration of AI with VR and AR technologies will magnify their impacts within the metaverse <cit.>. These technologies facilitate the creation of hyper-realistic simulations, the development of digital entities and scenes, and enable deeper, more emotionally resonant interactions between users or with digital entities. Consequently, this technological expansion broadens the scope of threats like deepfakes <cit.>. In the absence of stringent regulatory oversight within the metaverse, users might exploit personalized identity markers-such as those derived from actual facial features and body forms-to commit identity theft or create synthetic identities. This poses significant risks of severe security and privacy violations, potentially leading to more profound forms of deception, defamation, and threats within the metaverse. For instance, the application of deepfake technology in virtual workplaces to fabricate digital clones of supervisors or colleagues could undermine internal trust, induce confusion by issuing deceptive work directives, or facilitate the dissemination of misinformation or propaganda, thereby impacting critical decisions <cit.>. Regrettably, existing deepfake detection methodologies are primarily tailored to address content in the physical world, such as images and videos, and do not yet adequately account for the unique challenges presented by the metaverse <cit.>. According to <cit.>, there is a pressing need for deepfake recognition and prevention mechanisms specifically designed for the metaverse. Looking forward, addressing these challenges will necessitate a dual approach: the development of robust technological solutions for identity verification and content authenticity, and the establishment of clear regulatory frameworks that delineate legal boundaries for virtual avatars/digital content and address critical issues such as privacy protection and misuse prevention. Only through a coordinated effort can the metaverse be ensured to be a relatively safe and trustworthy virtual space. §.§ Identity Fragmentation and Psychological Well-being Challenges The principle of Expansiveness elucidates that the metaverse offers unparalleled opportunities for diverse and expansive identity representation, enabling individuals to transcend the limitations associated with physical world. While this diversity in representation can enhance creativity and inclusivity, as exemplified by the Avatar Robot Cafe <cit.> case where marginalized groups gained equal participatory opportunities, it also poses significant risks such as identity fragmentation and confusion. Recent research by <cit.>, involving a two-wave panel study on the VRChat platform, corroborates these concerns, indicating that while expansion of identity representation can boost self-esteem and life satisfaction, it may compromise these benefits when it leads to inconsistencies in self-concept. A poignant historical example is the 2000 incident where a child, after prolonged engagement with a video game, began to experience severe identity dissonance. Believing himself to be the game's protagonist, he ventured out at night to "fight enemies and save the princess," mimicking the game's narrative. This blurring of reality and fantasy necessitated medical intervention, to help reorient his sense of identity and reality <cit.>. As the metaverse progresses, addressing these psychological impacts and implementing protective measures for identity integrity, particularly among children and vulnerable populations, becomes imperative. Identity fragmentation within the metaverse can significantly impair users' mental health and social relationships. The ability to easily create and switch between diverse, unintegrated identities may reduce reliance on real-world personas, exacerbating identity fragmentation issues <cit.>. These 'consequence free' virtual spaces <cit.>, which may diverge sharply from reality, pose specific risks. Such environments can lead to self-disalignment, where users perceive their virtual identities as superior, increasing their engagement with virtual communities and causing alienation from the physical world <cit.>. This alienation can compel individuals, particularly children and adolescents, to increasingly immerse themselves in the metaverse, potentially leading to social withdrawal, depression, and antisocial behavior <cit.>. Considering the known adverse effects of social media, video games, and mobile devices, the immersive and escapist nature of the metaverse requires careful scrutiny to prevent its potentially detrimental impacts on psychological and social well-being <cit.>. Research on multiple cultural identities has shown that the mere presence of multiple identity representations is not inherently problematic; rather, the critical factor is relationships, integration and management of these identity representations <cit.>. As the metaverse evolves and identity-related issues become more pronounced, further research and practical interventions are essential. Future studies should focus on assisting users in managing their multiple identity representations and preventing identity fragmentation to mitigate associated psychological and social challenges. Given that these concepts are still in their nascent stages, proactively recognizing and preparing for identity fragmentation is crucial for addressing the emerging challenges of the metaverse effectively. § CONCLUSION AND LIMITATION The metaverse, an emerging realm that merges virtual and real elements, significantly influences identity expression, social interaction, and human development trajectories. Central to the metaverse is the construction of identity, which involves representation, data management and identification of avatars and other digital entities <cit.>. These elements are intimately connected to users' sense of belonging, privacy, security, and trust. Despite its significance, scholarly research on metaverse identities is still nascent and lacks a comprehensive theoretical framework. This paper conducts a critical review of the existing literature to explore the Metaverse Identity concept from both multidisciplinary and perspective viewpoints, identifying the research gap. Unlike traditional internet settings, the metaverse offers more immersive and varied experiences, transforming identity from a mere theoretical construct into a tangible, experiential reality. This shift accentuates the tension between self-expression and privacy protection. Metaverse identities are characterized by their multilayered, dynamically evolving, and diverse nature, which grants users unprecedented freedom in self-expression but also poses substantial privacy and security risks. Addressing these complexities, this paper proposes two core principles: Equivalence and Alignment and Fusion and Expansiveness. The first principle argues for consistency between metaverse and real-world identities in terms of behavioral norms and social standards, which is essential for developing conduct guidelines and protecting rights. The second principle stresses the need for deep integration and extensive expansion of metaverse identities, breaking beyond real-world constraits to meet diverse needs and promote inclusive participation. Effective governance in the metaverse necessitates a dynamic balance between these principles, ensuring fairness while encouraging diverse expressions and innovative developments. Our analysis identifies five key challenges to the deveolopment of metaverse identities: interoperability, legal boundaries, privacy and identity management issues, and the risks associated with deep fakes & synthetic identities, and identity fragmentation impacting psychological health. Building on the principles outlined, this paper offers strategic recommendations to address these challenges. However, this study exhibits several significant limitations. Firstly, the representativeness of the case studies needs enhancement. This paper discusses instances such as virtual sexual assault and Avatar Robot Cafe, yet the case selection is narrow and largely confined to specific scenarios. Given the metaverse's diverse and complex ecosystem, our analysis might not fully capture the spectrum of identity practices. Future research should broaden the scope of case studies to improve the findings' generalizability and relevance. Secondly, the empirical foundation of this study requires strengthening. Metaverse identity research, still in its infancy, largely lacks comprehensive data on user behavior and psychological responses. Currently supported primarily by literature review and qualitative analysis, this study highlights the need for more rigorous empirical research. Future efforts should include observational, survey, and experimental methodologies to explore identity construction behaviors and perceptions in the metaverse more deeply, thereby solidifying the theoretical framework. Lastly, the theoretical principles introduced—Equivalence and Alignment, and Fusion and Expansiveness—need further refinement and operationalization. While these principles offer valuable insights for regulating identity in the metaverse, they are proposed from a broad perspective and remain abstract. The practical application and policy-making processes might encounter challenges in operationalizing these principles. Future research should refine these two principles and develop more detailed, actionable guidelines to inform the governance and design of metaverse identity framework. Looking forward, the metaverse, as an emerging socio-technical ecosystem in the digital age <cit.>, is poised to significantly transform human life and developmental practices. The manner in which individuals construct and manage their identities between the physical world and metaverse is critical for sustainable human development. Addressing this issue requires a unified approach from academia, industry, and policy sectors to promote sustained interdisciplinary and multi-perspective research, aimed at fostering a robust and inclusive metaverse environment. Such collaborative initiatives are essential to provide both intellectual and practical contributions to the evolution of this new era.
http://arxiv.org/abs/2406.09019v1
20240613114422
Ground state energy of a dilute Bose gas with three-body hard-core interactions
[ "Lukas Junge", "François Louis Antoine Visconti" ]
math-ph
[ "math-ph", "math.MP" ]
Towards Unified AI Models for MU-MIMO Communications: A Tensor Equivariance Framework Yafei Wang, Graduate Student Member, IEEE, Hongwei Hou, Graduate Student Member, Xinping Yi, Member, IEEE, Wenjin Wang, Member, IEEE, Shi Jin, Fellow, IEEE Manuscript received xxx. Yafei Wang, Hongwei Hou, and Wenjin Wang are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with Purple Mountain Laboratories, Nanjing 211100, China (e-mail: wangyf@seu.edu.cn; hongweihou@seu.edu.cn; wangwj@seu.edu.cn). Xinping Yi and Shi Jin are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China (e-mail: xyi@seu.edu.cn; jinshi@seu.edu.cn). June 17, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT We consider a gas of bosons interacting through a three-body hard-core potential in the thermodynamic limit. We derive an upper bound on the ground state energy of the system at the leading order using a Jastrow factor. Our result matches the lower bound proven by Nam–Ricaud–Triay <cit.> and therefore resolves the leading order. Moreover, a straightforward adaptation of our proof can be used for systems interacting via combined two-body and three-body interactions to generalise <cit.> to hard-core potentials. § INTRODUCTION A system of N bosons trapped in a box Λ_L [0,L]^3 interacting via three-body interactions can be described by the Hamiltonian operator H_N,L = ∑_i=1^N-Δ_x_i + ∑_1≤ i<j<k≤ Nw(x_i-x_j,x_i-x_k) acting on the Hilbert space L_^2(Λ_L^N) - the subspace of L^2(Λ_L^N) consisting of functions that are symmetric with respect to permutations of the N particles. Such systems have received a lot of attention in recent years and have been the subject of many mathematical works <cit.>. In <cit.>, Nam–Ricaud–Triay proved that for a nonnegative, compactly supported potential w∈ L^∞(ℝ^6), the Hamiltonian (<ref>) satisfies lim_N,L→∞ N/L^3→ρinfσ(H_N,L)N = 16ρ^2b_ℳ(w)(1 + O(Y^ν)) when Y ρ b_ℳ(w)^3/4→ 0, for some constant ν > 0. Here, b_ℳ(w) is the scattering energy associated to w (see <cit.>). This was then improved in <cit.>, where it was shown that (<ref>) holds for w≥ 0 compactly supported and satisfying w_L^2L^1(∫_ℝ^3w(x,·)_L^1(ℝ^3)^2x)^1/2 < ∞. It was also shown in <cit.> that (<ref>) holds with an error in o(1) for w of class L^1. The goal of this paper is to prove that (<ref>) remains valid for particles interacting with a hard-core potential. We consider a gas of N bosons with three-body hard-core interactions in Λ_L = [0,L]^3. We are looking for an upper bound on the ground state energy E_N,L = inf<Ψ,∑_i=1^N-Δ_x_iΨ>Ψ^2, with the infimum taken over all Ψ∈ L_^2(Λ_L^N) satisfying the three-body hard-core condition Ψ(x_1,…,x_N) = 0 if there exist i,j,k∈{1,…,N}, i≠ j≠ k≠ i with |(x_i-x_j,x_i-x_k,x_j-x_k)|/√(3)≤𝔞. [Though there is no canonical choice for the three-body hard-core potential, the present choice is motivated by the Physics literature (see e.g. <cit.>).] Here, |·| is the euclidean norm in ℝ^9 and ∑_i=1^N-Δ_x_i is to be understood in the quadratic form sense in H^1_(Λ_L^N). Note that the scattering energy associated to the hard-core potential w_(x-y,x-z) = {[ +∞ ; 0 ]. is given by b_ℳ(w_) = 643√(3)π^2𝔞^4. There exists C > 0 (independent of 𝔞 and ρ) such that lim_N,L→∞ N/L^3→ρE_N,LN = 329√(3)π^2ρ^2𝔞^4(1 + C(ρ𝔞^3)^ν) for all ρ𝔞^3 small enough and for some ν > 0. The matching lower bound was proven in <cit.>. Here are some remarks on the result: * The strategy of the proof of Theorem <ref> can be used to generalise (<ref>) to w of class L^1 with an error uniform in w assuming that R_0/b_ℳ(w)^1/4 remains bounded, where R_0 is the range of w. * In <cit.>, a heuristic approach was used to predict that the ground state energy of a system described by the Hamiltonian (<ref>) should satisfy lim_N,L→∞ N/L^3→ρinfσ(H_N,L)N = 16ρ^2b_ℳ(w)(1 + C(w)ρ + o(ρ)) in the low density regime ρ→ 0, for some constant C(w) depending only on w. The error yielded by the proof of Theorem <ref> is of order (ρ𝔞^3)^4/7≫ρ𝔞^3, meaning that it does not capture the error at the correct order. The same problem arises in the two-body case when only using cancellations between the numerator and the denominator similar to the ones used in (<ref>)–(<ref>) (see for example <cit.>). To extract an error at the correct order in the two-body case one needs to push the analysis much further and identify additional cancellations, as was done in <cit.>. * A straightforward adaptation of the proof of Theorem <ref> can also be used to derive a correct upper bound at the first order of the ground state energy of a system interacting via two-body and three-body interactions. More specifically, the ground state energy E_N,L' of a system of N bosons interacting via a two-body hard-core potential of radius 𝔞_ and a three-body hard-core potential of radius 𝔞_ > 𝔞_ in Λ_L is such that lim_N,L→∞ N/L^3→ρE_N,L'N≤(4πρ𝔞_ + 329√(3)π^2ρ^2𝔞_^4) (1 + C(ρ𝔞_^3)^1/3 + C(ρ𝔞_^3)^4/7) for ρ𝔞_2^3 and ρ𝔞_3^3 small enough. To prove this one considers a trial state of the form Ψ(x_1,…,x_N) = ∏_1≤ i<j≤ Nf_ℓ_1(x_i - x_j)∏_1≤ i<j<k≤ Nf_ℓ_2(x_i - x_j,x_i - x_k), where f_ℓ_1 describes the two-body correlations up to a distance ℓ_1 and f_ℓ_2 describes the three-body correlations up to a distance ℓ_2. This generalises <cit.> to hard-core potentials. § SCATTERING PROPERTIES OF THE THREE-BODY HARD-CORE POTENTIAL Since we are considering a dilute Gas the correlation structure is encoded in the zero-scattering problem (-Δ_x -Δ_y -Δ_z)f(x-y,x-z) = 0 on (ℝ^3)^3, where f satisfies the conditions f(x-y,x-z) = 0 if |(x-y,x-z,y-z)|/√(3)≤𝔞 and f(𝐱) → 1 as |𝐱| →∞. Note that f satisfies the three-body symmetry properties f(x,y) = f(y,x) f(x - y,x - z) = f(y - x,y - z) = f(z - x, z - y), for all x,y,z∈ℝ^3. By removing the centre of mass using the change of variables r_1 = 13(x + y + z), r_2 = x-y, r_3 = x-z, we find that the scattering problem (<ref>) is equivalent to the modified zero-scattering problem -2Δ_ℳf(r_2,r_3) = 0 on (ℝ^3)^2, with f satisfying the conditions f(r_2,r_3) = 0 if |ℳ^-1(r_2,r_3)| ≤√(2)𝔞 and f(𝐱) → 1 as |𝐱| →∞. Here we introduced the modified Laplacian -Δ_ℳ = -|ℳ∇_ℝ^6|^2 = -_ℝ^6(ℳ^2∇_ℝ^6), where the matrix ℳ:ℝ^3×ℝ^3 →ℝ^3×ℝ^3 is given by ℳ( 12[ 2 1; 1 2 ])^1/2 = 12√(2)[ √(3) + 1 √(3) - 1; √(3) - 1 √(3) + 1 ], with inverse ℳ^-1 = ( 23[ 2 -1; -1 2 ])^1/2 = 1√(6)[ 1 + √(3) 1 - √(3); 1 - √(3) 1 + √(3) ] (see <cit.> for a more in depth discussion on the matter). Note that ℳ = √(3)/2. Let f denote a solution to (<ref>) and define f f(ℳ·). Then, f solves -Δf = 0 on ℝ^6, with the conditions f(𝐱) = 0 for |𝐱| ≤√(2)𝔞 and f(𝐱) → 1 as |𝐱| →∞. By rewriting the previous problem in hyperspherical coordinates we find that (<ref>) has for unique solution f(𝐱) = {[ 1 - 4𝔞^4|ℳ^-1𝐱|^4 ,; 0 . ]. Let us also define ω(𝐱) 1 - f(𝐱), for all 𝐱∈ℝ^6. We shall need a truncated version of f with a cut-off. Let χ∈ C^∞(ℝ^6;[0,1]) be a radial function satisfying χ(𝐱) = 1 if |𝐱|≤ 1/2 and χ(𝐱) = 0 if |𝐱| ≥ 1, and define χχ(ℳ^-1·). For all ℓ∈(𝔞,L), we define χ_ℓχ(ℓ^-1·), ω_ℓχ_ℓω, f_ℓ 1 - ω_ℓ. Note that ω, f, ω_ℓ and f_ℓ satisfy the three-body symmetry (<ref>). Moreover, they have the following properties: Let ℓ∈(𝔞,L). Then, we have |∇ f_ℓ(𝐱)| ≤ C𝔞^4ℓ^-11_{C_1ℓ≤ |𝐱|≤ C_2ℓ}|𝐱|^4 and 0 ≤ 1 - f_ℓ^2(𝐱) ≤ C𝔞^41_{C_1𝔞≤ |𝐱|≤ C_2ℓ}|𝐱|^4, for all 𝐱∈ℝ^6. Here, C,C_1,C_2 are universal positive constants such that C_1 < C_2. Moreover, ∫_ℝ^6𝐱(|(ℳ∇_ℝ^6f_ℓ)(𝐱)|^2) ≤323√(3)𝔞^4(1 + C(𝔞ℓ)^4) Furthermore, by defining ℓ√(3/2)ℓ and g_ℓ(x) 1_{|x| ≥ℓ}, we have f_ℓ(x_1,x_2) ≥max(g_ℓ(x_1),g_ℓ(x_2)), for all x_1,x_2∈ℝ^3. Both (<ref>) and (<ref>) follow directly from the definition of f_ℓ and |∇χ(𝐱)| ≤ Cℓ^-11_{ℓ/2≤ |ℳ^-1𝐱|≤ℓ} and σ(ℳ^-1) = {√(2/3),√(2)}. To compute (<ref>) we first write ∫_ℝ^6𝐱(|(ℳ∇_ℝ^6f_ℓ)(𝐱)|^2) = ∫_ℝ^6𝐱(|(ℳ∇_ℝ^6ω)(𝐱)|^2χ_ℓ(𝐱)^2) = + 2∫_ℝ^6𝐱((ℳ∇_ℝ^6ω)(𝐱)·(ℳ∇_ℝ^6χ_ℓ)(𝐱)ω_ℓ(𝐱)) = + ∫_ℝ^6𝐱(ω(𝐱)^2|(ℳ∇_ℝ^6χ_ℓ)(𝐱)|^2). The only contribution of order 𝔞^4 comes from the first term. Indeed, using again |∇χ(𝐱)| ≤ Cℓ^-11_{ℓ/2≤ |ℳ^-1𝐱|≤ℓ} and (<ref>) we have ∫_ℝ^6𝐱(2(ℳ∇_ℝ^6ω)(𝐱)·(ℳ∇_ℝ^6χ_ℓ)(𝐱)ω_ℓ(𝐱) + |ω(𝐱)^2|(ℳ∇_ℝ^6χ_ℓ)(𝐱)|^2) ≤ C𝔞^4(𝔞ℓ)^4. Moreover, by writing ω = ω(ℳ^-1·) with ω(𝐱) = 4𝔞^4/|𝐱|^4, we get ∫_ℝ^6𝐱(|(ℳ∇_ℝ^6ω)(𝐱)|^2χ_ℓ(𝐱)^2) = ∫_ℝ^6𝐱(|(∇_ℝ^6ω)(ℳ^-1𝐱)|^2χ_ℓ(ℳ^-1𝐱)^2) = ℳ∫_ℝ^6𝐲(|(∇_ℝ^6ω)(𝐲)|^2χ_ℓ(𝐲)^2) ≤323√(3)π^2𝔞^4. In the second equality we used the change of variables 𝐲 = ℳ^-1𝐱. In the last inequality we used ∇_ℝ^6ω = 0 on B(0,√(2)𝔞) and χ_ℓ(𝐱) ≤1_{|𝐱| ≤ℓ} and ∇_𝐲(1/|𝐲|^4) = -4𝐲/|𝐲|^6 and ℳ = √(3)/2 and that the surface of the 5-dimensional sphere in ℝ^6 is given by |𝕊^5| = 8π^2/3. This proves (<ref>). Finally, notice that f_ℓ(x_1,x_2) = 1 when |ℳ^-1(x_1,x_2)|^-1≥ℓ, which is true whenever |x_1| ≥ℓ or |x_2| ≥ℓ. This immediately implies (<ref>) and concludes the proof of Lemma <ref>. § PROOF OF THE UPPER BOUND To get an upper bound on (<ref>), we need to evaluate the energy on an appropriate trial state. To do so, we add correlations among particles to the uncorrelated state Ψ_N,L≡ 1. Since correlations are produced mainly by three-body scattering events, we consider the trial state Ψ_N,L(x_1,…,x_N) = ∏_1≤ i<j<k≤ Nf_ℓ(x_i-x_j,x_i-x_k), where ℓ is a parameter satisfying 𝔞≪ℓ≪ L that will be fixed later; Ψ_N,L is clearly an admissible state. The function f_ℓ defined in (<ref>) describes the three-body correlations up to a distance ℓ. Such trial states have been first used in <cit.> and are usually referred to as Jastrow factors (in <cit.> Dyson worked with a nonsymmetric trial state describing only nearest neighbour correlations). For readability's sake we from now on write f_ijk = f_ℓ(x_i-x_j,x_i-x_k) and ∇_if_ijk = ∇_x_if_ℓ(x_i-x_j,x_i-x_k) for all i,j,k∈{1,…,N}. To compute the energy of the trial state (<ref>), we first notice that ∇_x_1Ψ_N,L(x_1,…,x_N) = ∑_2≤ p<q ≤ N∇_1f_1pqf_1pq∏_1≤ i<j<k≤ Nf_ijk, which when combined with the three-body symmetry (<ref>) implies <Ψ_N,L,∑_i=1^N-Δ_x_iΨ_N,L>Ψ_N,L^2 = N<∇_x_1Ψ_N,L,∇_x_1Ψ_N,L>Ψ_N,L^2 [t] = N(N-1)(N-2)3∫𝐱_N(|ℳ∇ f_123|^2/f_123^2∏_1≤ i<j<k≤ Nf_ijk^2)∫𝐱_N(∏_1≤ i<j<k≤ Nf_ijk^2) = + N(N-1)(N-2)(N-3)∫𝐱_N(∇_1f_123/f_123·∇_1f_124/f_124∏_1≤ i<j<k≤ Nf_ijk^2)∫𝐱_N(∏_1≤ i<j<k≤ Nf_ijk^2) = + N(N-1)(N-2)(N-3)(N-4)4∫𝐱_N(∇_1f_123/f_123·∇_1f_145/f_145∏_1≤ i<j<k≤ Nf_ijk^2)∫𝐱_N(∏_1≤ i<j<k≤ Nf_ijk^2) ℐ_1 + ℐ_2 + ℐ_3. In the second equality we used |∇_x_1f_ℓ(x_1 - x_2,x_1 - x_3)|^2 + |∇_x_2f_ℓ(x_1 - x_2,x_1 - x_3)|^2 + |∇_x_3f_ℓ(x_1 - x_2,x_1 - x_3)|^2 = 2|(ℳ∇_ℝ^6 f_ℓ)(x_1 - x_2,x_1 - x_3)|^2. Let us now bound each term one by one. Thanks to (<ref>), we have ∏_3≤ j<k≤ Nf_ℓ(x_1-x_j,x_1-x_k)^2 ≥∏_j=3^Ng_ℓ(x_1 - x_j) and ∏_3≤ j<k≤ Nf_ℓ(x_2-x_j,x_2-x_k)^2 ≥∏_j=3^Ng_ℓ(x_2 - x_j). Hence, by defining u_ℓ 1 - f_ℓ^2 and v_ℓ 1 - g_ℓ we have the estimate 1 - ∑_j=3^Nv_1j - ∑_j=3^Nv_2j - ∑_k = 3^Nu_12k≤∏_3≤ j<k≤ Nf_1jk^2f_2jk^2∏_k=3^Nf_12k^2 ≤ 1, where we used the short-hand notations v_ij = v_ℓ(x_i - x_j) and u_ijk = u_ℓ(x_i-x_j,x_i-x_k). This allows us to decouple the variables x_1 and x_2 in the numerator and in the denominator of ℐ_1; with (<ref>) and (<ref>) we obtain ℐ_1 ≤N^33∫_ℝ^6𝐱(|(ℳ∇_ℝ^6f_ℓ)(𝐱)|^2)L^6 - CL^3N∫x(v_ℓ(x)) - CN∫𝐱(u_ℓ(𝐱)) ≤329√(3)π^2Nρ^2𝔞^4(1 + C(𝔞/ℓ)^4)1 - Cρℓ^3 - Cρℓ^2𝔞^4/L^3 ≤329√(3)π^2Nρ^2𝔞^4(1 + C(𝔞ℓ)^4 + Cρℓ^3), under the assumption that ρℓ^3 ≪ 1 and 𝔞≪ℓ≪ L. In the last inequality we used ℓ^2𝔞^4/L^3 ≤ℓ^3. To bound ℐ_2 we similarly decouple the variables x_2, x_3 and x_4. Using again (<ref>) and (<ref>) we can bound ℐ_2 ≤ CN^4∫xyz(|∇ f_ℓ(x,y)|·|∇ f_ℓ(x,z)|)L^9 - CNL^6∫x(v_ℓ(x)) - CNL^3∫𝐱(u_ℓ(𝐱)) ≤ CNρ^2𝔞^4[ρ𝔞^4ℓ^-1](1 + Cρℓ^3) ≤ CNρ^2𝔞^4(ρℓ^3), when ρℓ^3 ≪ 1 and 𝔞≪ℓ≪ L. Analogously, we bound ℐ_3 by decoupling the variables x_1,x_2 and x_4. Namely, using once more (<ref>) and (<ref>) we get ℐ_3 ≤ CN^5(∫𝐱|∇ f_ℓ(𝐱)|)^2L^12 - CNL^9∫x(v_ℓ(x)) - CNL^6∫𝐱(u_ℓ(𝐱)) ≤ CNρ^2𝔞^4[ρ^2𝔞^4ℓ^2](1 + Cρℓ^3) ≤ CNρ^2𝔞^4(ρℓ^3) again under the condition that ρℓ^3 ≪ 1 and 𝔞≪ℓ≪ L. From (<ref>)–(<ref>) we conclude that E_N,L≤329√(3)π^2Nρ^2𝔞^4(1 + C(𝔞ℓ)^4 + Cρℓ^3). Taking ℓ = 𝔞(ρ𝔞^3)^-1/7 finishes the proof of Theorem <ref>. § ACKNOWLEDGMENTS. We thank Arnaud Triay for his precious feedback. L. J. was partially supported by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. L. J. was partially supported by the Villum Centre of Excellence for the Mathematics of Quantum Theory (QMATH) with Grant No.10059. L. J. was supported by the grant 0135-00166B from Independent Research Fund Denmark. F. L. A. V. acknowledges partial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the TRR 352 Project ID. 470903074 and by the European Research Council through the ERC CoG RAMBAS Project Nr. 101044249.
http://arxiv.org/abs/2406.08473v1
20240612175646
Strategies for Pretraining Neural Operators
[ "Anthony Zhou", "Cooper Lorsung", "AmirPouya Hemmasian", "Amir Barati Farimani" ]
cs.LG
[ "cs.LG" ]
Supergluon scattering in AdS: constructibility, spinning amplitudes, and new structures [ June 17, 2024 ======================================================================================== § ABSTRACT Pretraining for partial differential equation (PDE) modeling has recently shown promise in scaling neural operators across datasets to improve generalizability and performance. Despite these advances, our understanding of how pretraining affects neural operators is still limited; studies generally propose tailored architectures and datasets that make it challenging to compare or examine different pretraining frameworks. To address this, we compare various pretraining methods without optimizing architecture choices to characterize pretraining dynamics on different models and datasets as well as to understand its scaling and generalization behavior. We find that pretraining is highly dependent on model and dataset choices, but in general transfer learning or physics-based pretraining strategies work best. In addition, pretraining performance can be further improved by using data augmentations. Lastly, pretraining is additionally beneficial when fine-tuning in scarce data regimes or when generalizing to downstream data similar to the pretraining distribution. Through providing insights into pretraining neural operators for physics prediction, we hope to motivate future work in developing and evaluating pretraining methods for PDEs. § INTRODUCTION 1These authors contributed equally to this work *Corresponding Author Pretraining is an immensely popular technique in deep learning in which models learn meaningful context from a large dataset and apply this knowledge to downstream tasks <cit.>. In particular, recent work has highlighted the importance of self-supervised learning, which can leverage the inherent structure of unlabeled data and learn meaningful latent representations <cit.>. The success of these self-supervised pretraining strategies has motivated their application to broad scientific and engineering problems <cit.>. In particular, pretraining has been used in partial differential equation (PDE) modeling to improve neural operators and evaluate their scalability and generalizability <cit.>. Neural operators for PDEs have gained substantial interest in recent years due to their ability to quickly predict physics through inference <cit.>. Despite potential speed gains, neural operators currently struggle to generalize to unseen physics, and initial training can be slow <cit.>. To address this issue, many works have explored different strategies to improve generalization by incorporating additional system information <cit.> and pretraining neural operators across large, diverse physics to quickly fine-tune to solve PDEs <cit.>. Despite showing good performance, these works usually require the use of tailored neural operators and datasets to learn different physics. This contrasts with broader deep learning trends in which pretraining methods can universally benefit models; for example, pretraining losses that are applied across CNN models <cit.> or GNN models <cit.>. As a result, in this work, we consider existing pretraining frameworks, as well as propose novel methods for pretraining PDE models that are flexible and can be applied across architectures or datasets. By considering pretraining methods that are model agnostic, we can provide a detailed and level comparison of pretraining methods on a shared experimental setup. To our knowledge, this is the first work that makes an effort to compare pretraining strategies without tailored architecture choices, which allows an understanding of how pretraining affects learning in different regimes. Specifically, we compare different pretraining strategies and consider the effect of PDE data augmentations, a popular technique to improve pretrained model performance <cit.>. Additionally, we study the performance of pretrained models with scarce fine-tuning data as well as their generalization behavior to unseen coefficients or PDEs. Through this work, we hope to broaden the understanding of how neural operators can be pretrained for physics prediction. We organize existing pretraining strategies, propose novel vision-inspired strategies, and include common pretraining baselines to assemble a broad set of methods for learning PDE representations. We find that PDE pretraining varies depending on model and dataset choice, but in general using transfer or physics-based pretraining strategies work well. In addition, transformer or CNN-based architectures tend to benefit more from pretraining than vanilla neural operators. Furthermore, the use of data augmentations consistently improves pretraining performance in different models, datasets, and pretraining strategies. Lastly, we find that pretraining is more beneficial when fine-tuning in low-data regimes, or when downstream data is more similar to pretraining data. We hope that these insights can be used to guide future work in the development and evaluation of pretraining methods for PDEs. We open source our code here: https://github.com/anthonyzhou-1/pretraining_pdeshttps://github.com/anthonyzhou-1/pretraining_pdes . § RELATED WORKS The field of neural operators has grown rapidly in recent years, with many architectures developed to accurately approximate solutions to PDEs <cit.>. Many works expanded on this to propose architectures to solve PDEs more quickly, with less compute, or on irregular grids <cit.>, and as a result, within a range of test problems, neural operators can solve PDEs quickly and accurately. However, neural operators still struggle to generalize across diverse physics, and as a result many approaches have been developed to pretrain neural operators. We summarize these past works in Table <ref>, and briefly describe the main approaches here. §.§ PDE Transfer Learning Many past works consider transferring knowledge between PDE parameters and domains as a form of pretraining. These works often design specific architectures that are tailored for transferring weights or layers between tasks. For example, <cit.> design task-specific layers of a DeepONet to be used with different domains of 2D Darcy Flow and Elasticity problems. Another approach proposed by <cit.> is to design different operators that learn specific PDE dynamics and combine these in a mixture of experts approach, motivated by the observation that PDEs can often be compositions of each other. To address the issue of transferring between physical domains that can have different numbers of variables, <cit.> extend positional encodings and self-attention to different codomains/channels. §.§ Large PDE Modeling An extension of transfer learning is to train large models on diverse physics datasets, with the intention of learning transferable representations through scaling behavior <cit.>. initially explores this scaling behavior by training large neural operator models on large PDE datasets to evaluate its ability to adapt to different coefficients. <cit.> propose a tailored architecture for solving problems across different physics, and <cit.> expand on this by making architectural advancements and training on more diverse physics. Despite different approaches and datasets, these works generally rely on tailored, scalable architectures for large PDE datasets; pretraining is framed as physics prediction across diverse physics and fine-tuning is done on the pretraining distribution or on unseen coefficients/PDEs. §.§ PDE Contrastive Learning Following the success of contrastive learning in the vision domain <cit.>, various methods for PDE contrastive learning have been proposed. <cit.> propose a contrastive learning framework in which augmented PDE samples are represented in a similar way in latent space; notably augmentations are done with physics-preserving Lie augmentations <cit.>. <cit.> follow a similar approach in which physically invariant samples are clustered together in latent space, while <cit.> rely on PDE coefficients to define a contrastive loss. In general, contrastive methods have extensive literature and theory, however they tend to be challenging to pretrain and may have incremental gains in the PDE domain. §.§ Meta/In-context Learning for PDEs Additional past work considers adapting meta-learning <cit.> paradigms from the broader deep learning community to the PDE domain. <cit.> consider a direct adaptation of model-agnostic meta-learning to PDE tasks, while <cit.> and <cit.> apply novel losses and architectures to maximize shared learning across different tasks. Following in-context learning trends of transformer models <cit.>, <cit.> and <cit.> explore using in-context learning to prompt models with PDE solutions to generalize to unseen PDE coefficients. § METHODS §.§ Data Augmentations Following the prevalence of data augmentation in the broader deep learning community <cit.>, we consider the use of data augmentations adapted to the PDE domain. §.§.§ Lie Point Symmetry Data Augmentations We consider a recent body of work proposing Lie Point Symmetry Data Augmentations <cit.>, a set of PDE-specific data augmentations that preserve the underlying dynamics. Mathematically, given a PDE, one can derive a set of transformations {g_1, g_2, ..., g_n}, each with a parameter {ϵ_1, ϵ_2, ..., ϵ_n} that can be randomly sampled to modulate the strength of the transformation. Since some PDEs may exhibit more Lie symmetries than others, we consider only shifting the PDE solution in space (Shift), which is valid for all PDEs considered, to ensure a fair comparison between datasets. For further details on mathematical theory and its implementation in augmenting PDEs, we refer the reader to <cit.> and <cit.>. §.§.§ Physics-Agnostic Data Augmentations In computer vision literature, many successful data augmentations heavily modify inputs <cit.>; in particular, cropping and cutting out portions of an image would not respect physics if adapted to the PDE domain. Following this, we investigate the effect of data augmentations that are physics-agnostic, in that they can be applied to any PDE since the augmentation does not preserve the underlying dynamics. Following recent work on denoising neural operator architectures <cit.>, we consider adding Gaussian noise during pretraining (Noise). Furthermore, we consider scaling the PDE solution (Scale), an approach similar to a color distortion, in which the PDE solution values are multiplied by a random constant. For certain simple PDEs, scaling can preserve physics, but this is not generally true due to nonlinearities in more complex PDEs. Additional details on hyperparameters and the implementation of data augmentations can be found in Appendix <ref>. §.§ Pretraining Strategies In this work, we consider using pretraining strategies that are agnostic to the neural operator architecture to ensure compatibility with different applications and future architecture advances, and describe them in Figure <ref>. This approach is also consistent with the broader computer vision domain, where models are fully shared between pretraining and downstream tasks and can be adapted to different architectures (e.g. CNN, ViT) <cit.>. We provide further details on design considerations and the implementation of pretraining strategies in Appendix <ref>. §.§.§ Computer Vision Strategies Inspired by diverse pretraining strategies to learn image representations, we adapt many pretraining strategies from the computer vision (CV) domain to the PDE domain. In general, these strategies aim to train models through predicting visual attributes or sorting spatio-temporal sequences to learn visual representations without labels. Firstly, we consider an early work that pretrains a model to verify if a video is in the correct temporal order <cit.>. This problem is formulated as a binary classification task in which a shuffled video and the original video are assigned separate labels; within this work, we refer to this as Binary pretraining. Subsequent work proposed methods that not only verify temporal order, but can also sort temporally shuffled video frames <cit.>. This is generally formulated as a n-way classification task, where n denotes the number of permutations in which a sequence of frames can be sorted. In the context of physics data, we can opt to shuffle the data spatially or temporally, as such we refer to these two pretraining strategies as TimeSort or SpaceSort. Empirically, SpaceSort does not perform well, so we omit this strategy from our results. An extension of sorting samples that have been shuffled along a single dimension (e.g., time, space) is to sort samples shuffled across all dimensions. For images, sorting images shuffled along both the x and y axes is implemented by solving jigsaw puzzles, a challenging task that reassembles an image from its shuffled patches <cit.>. This work has been extended to the video domain by solving spatio-temporal puzzles <cit.>. The extension to PDE data requires sorting data that have been partitioned into discrete patches and shuffled along the space and time axes; we refer to this strategy as Jigsaw. One issue is that the number of possible classes scales with the factorial of the number of patches, and many shuffled sequences are not significantly different from each other. To mitigate this, we sample the top k shuffled permutations that maximize the Hamming distance between the shuffled and the original sequence <cit.>; this ensures that models can see diverse samples during pretraining while limiting the number of classes in the pretraining task. §.§.§ PDE Predictive Strategies Within the PDE domain, there are physics-specific characteristics that PDE data exhibit that can be leveraged for pretraining; this is analogous to predicting motion or appearance statistics in vision pretraining tasks <cit.>. One strategy considers the fact that PDE data depends on equation variables and coefficients, and predicting these coefficients from the PDE data could be useful. This is implemented as a regression task, where the coefficient values are regressed from a snapshot of PDE data; we refer to this strategy as Coefficient. Additionally, PDE data can be described by the derivatives of current physical values. For example, many finite difference schemes rely on spatial and temporal derivatives of the current vector or scalar field to advance the solution in time. Inspired by this, we propose a pretraining strategy that predicts the spatial and temporal derivatives of PDE data. For 2D PDEs, this is implemented as a regression tasks where the fields (u_x, u_y, u_xx, u_yy, u_t) are regressed from a solution u; we refer to this strategy as Derivative. Lastly, numerical solutions of PDEs tend to leverage information of local relationships to solve equations. For example, finite difference schemes use information from neighboring nodes to calculate spatial derivatives. Motivated by this, we propose a pretraining strategy that randomly masks data in space and time and uses this incomplete information to reconstruct the full solution. This is implemented by patching the solution in space and time, randomly replacing masked patches with a learnable mask token, and regressing the true solution; we refer to this strategy as Masked. §.§.§ Contrastive Strategies A common strategy for pretraining in computer vision domains is to exploit similarities in the data to align samples in latent space. A proposed strategy to do this for PDEs is Physics Informed Contrastive Learning (PICL), which uses a Generalized Contrastive Loss <cit.> to cluster PDE data based on their coefficients in latent space <cit.>. Another strategy for self-supervised learning of PDE dynamics is using an encoder to align Lie augmented or physically invariant latent PDE samples <cit.>. Both works require the use of a specific encoder along with the neural operator backbone; to adapt these strategies to our experimental setup we consider directly pretraining the neural operator contrastively with these strategies. However, these methods did not seem to show significant improvements over no pretraining, as such, the results are omitted from the paper. § EXPERIMENTS To evaluate the effectiveness of the proposed pretraining strategies and data augmentations, we consider a diverse set of experiments and neural operator architectures to train on. In particular, we hope to understand whether different architectures or datasets influence pretraining performance and construct a holistic view of pretraining for diverse PDE applications. We provide an overview of the setup and the different experiments possible in Figure <ref>. §.§ Data We consider predicting physics for the 2D Heat, Advection, Burgers, and incompressible Navier-Stokes equations. These equations describe a diverse range of fluid phenomena and form tasks of varying difficulties. For our experiments, we consider pretraining on a combined set of 2D Heat, Advection, and Burgers data, which contain 9216 data samples (3072 for each equation), as well as fine-tuning on a smaller set of 1024 unseen samples for each PDE. We only pretrain on the Heat, Advection, and Burgers equations since the numerical data for these PDEs are easier to generate, and as a result, transferring pretrained knowledge to more challenging PDEs can be evaluated as a potentially useful method. §.§.§ Heat, Advection, and Burgers Equations The 2D Heat, Advection, and Burgers equations are given by: ∂_t u - ν∇^2u= 0, Heat ∂_t u + 𝐜·∇ u = 0, Advection ∂_t u + u(𝐜·∇ u) - ν∇^2u = 0, Burgers To ensure a diverse set of physics data, the equation coefficients are randomly sampled according to <cit.>. In particular, for the Heat equation, we sample ν∈ [2e-3, 2e-2], for the Advection equation, we sample 𝐜 = [c_x, c_y] ∈ [0.1, 2.5]^2, and for the Burgers equation, we sample ν∈ [7.5e-3, 1.5e-2], and 𝐜 = [c_x, c_y] ∈ [0.5, 1.0]^2; we refer to this dataset as in-distribution (In). Since these equations also comprise the pretraining set, we additionally consider a case where the downstream dataset comes from a separate distribution; in this case, we sample ν∈ [2e-2, 3e-2] for the Heat equation, 𝐜 = [c_x, c_y] ∈ [2.5, 3.0]^2 for the Advection equation, and ν∈ [5.0e-3, 7.5e-3], and 𝐜 = [c_x, c_y] ∈ [1.0, 1.25]^2 for the Burgers equation. We refer to this dataset as out-of-distribution (Out). In all cases, periodic boundary conditions are enforced and the solution is solved in a domain (x,y)=[-1, 1]^2 from t=0 to t=2. Furthermore, initial conditions are randomly from a summation of sine functions; the parameters are uniformly sampled from from A_j ∈ [-0.5, 0.5], ω_j ∈ [-0.4, 0.4], l_xj∈{1, 2, 3}, l_yj∈{1, 2, 3}, ϕ_j ∈ [0, 2π) while fixing J=5, L=2: u(0, x, y) = ∑_j=1^JA_j sin(2π l_xjx/L + 2π l_yjy/L + ϕ_j) For additional information on data splits and numerical methods, we refer readers to Appendix <ref>. §.§.§ Incompressible Navier Stokes Equations The incompressible Navier Stokes equations are considered for fine-tuning pretrained models to predict more challenging physics. To ensure consistency between the pretraining and fine-tuning tasks, we use the vorticity form of the Navier-Stokes equation in order to predict a scalar field following the setup in <cit.>: ∂_t ω + 𝐮·∇ω - ν∇^2ω = f(x, y), ∇·𝐮 = 0, ∇×𝐮 = ω f(x, y) = A(sin(2π(x + y)) + cos(2π(x + y))) We formulate this problem with periodic boundary conditions, variable viscosity ν, and variable forcing function amplitude A. Specifically, the viscosity is sampled uniformly from ν∈{{1, 2, 3, 4, 5, 6, 7, 8, 9}× 10^-{6, 7, 8, 9}} and the amplitude is uniformly sampled from A ∈{{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}× 10^-3}. The data is generated in a domain (x,y)=[0, 1]^2 and from t=0 to t=7.75, following the setup from <cit.>; furthermore, the initial conditions ω_0 are generated from a Gaussian random field according to <cit.>. §.§ Neural Operators To compare different pretraining and data augmentation strategies, we consider their effects on improving the PDE prediction performance of different neural operators. Specifically, we consider the neural operators: Fourier Neural Operator (FNO) <cit.>, DeepONet <cit.> and OFormer <cit.>. Additionally, we consider the Unet model; while it is not explicitly a neural operator, it is commonly used in literature and has shown good performance <cit.>. These neural operators are first trained using a pretraining strategy before being fine-tuned on a PDE prediction task; this could either be fixed-future prediction to model a static solution or autoregressive prediction to model a time-dependent solution. In all experiments, prediction tasks are formulated using only solution field values and grid information. Additional details on the model hyperparameters and implementation can be found in Appendix <ref>. §.§ Pretraining Strategies We compare models pretrained with different strategies with a baseline model that has not been pretrained (None) as well as a model trained with the same physics prediction objective on the pretraining dataset, more commonly known as transfer learning (Transfer). Furthermore, we vary the size of the fine-tuning dataset to study the effects of pretraining when given scarce downstream data. The fine-tuning dataset is also varied between data samples that are within the pretraining distribution (In), outside the pretraining distribution with respect to the PDE coefficients (Out), or on samples from an unseen PDE (NS). Lastly, we study the effects of adding data augmentations during pretraining and fine-tuning. §.§ Data Augmentation Data augmentation is implemented by doubling the pretraining and fine-tuning data, where each sample has a 50% chance of being augmented. Our noise augmentation adds a small amount of Gaussian noise to each frame independently, while our shift and scale augmentations are applied uniformly to the entire trajectory. §.§ Fixed Future and Auto-regressive Prediction To model physics problems with static solutions, we consider predicting a PDE solution field at a fixed timestep after an initial snapshot of the PDE data. In particular, given the PDE data from t=1 to t=8, models are trained to predict the PDE solution at t=32. Alternatively, to model physics problems with time-dependent solutions, we consider auto-regressively predicting PDE solutions directly after a current snapshot of PDE data. This is implemented using PDE data on the interval [t, t+8) as an input to predict future PDE solutions on the interval [t+8, t+16). In addition, we use the pushforward trick <cit.> to stabilize training. This introduces model noise during training by first predicting a future time window from ground-truth data and then using this noisy prediction as a model input; importantly, no gradients are propagated through the first forward pass. Additional details on training parameters can be found in <ref>. § RESULTS We now systematically benchmark our pretraining and data augmentation strategies, as well as their combination. Presented below are results on our autoregressive task. Fixed-future results are given in appendices <ref> and <ref> and generally show the same trends as our autoregressive results. We use Relative L2 error <cit.> for both training and evaluation in all of our experiments. §.§ Comparison of Pretraining Strategies We benchmark our proposed PDE pretraining strategies on different neural operators and datasets, and show the condensed results for auto-regressive prediction in Table <ref>. For a detailed comparison, we present results of different PDE pretraining strategies for fixed-future and auto-regressive tasks on all datasets in Appendix <ref>. Additionally, we consider cases where the fine-tuning dataset contains coefficients unseen during pretraining, and present these out-of-distribution results in Appendix <ref> as well. Through these experiments, we find multiple insights. Firstly, we observe that the pretraining performance varies with the choice of model and dataset. Specifically, different models benefit differently from pretraining, as well as based on the predicted PDE and task (i.e. fixed-future vs. auto-regressive). However, transfer learning generally performs well across different tasks, models, and datasets, suggesting that it is a good choice for a pretraining task. This is also reflected in the literature, where previous work generally focuses on transferring knowledge between datasets <cit.> or pretrain by predicting physics of large datasets <cit.>. We hypothesize that transfer learning is effective since PDE data is inherently unlabeled; physics prediction uses future timesteps as a label, similar to next-token prediction for GPT models, which is cast as self-supervised learning. When the data is sufficient, using surrogate objectives such as derivatives or sorting sequences may not be as effective as the true objective of fixed-future of auto-regressive prediction. Another observation is that pretraining frameworks are generally dependent on specific architectures; for example, many CV pretraining strategies shuffle patches of data, which can introduce arbitrary discontinuities and high-frequency modes in FNO models, yet are not as challenging for convolutional models such as Unet. Furthermore, pretraining strategies are also dependent on the downstream task; for example, Derivative pretraining works well for auto-regressive prediction but not fixed future prediction, as the solution at a distant timestep is very different from the current derivatives, but the solution at the next timestep is highly dependent on the current derivatives. Secondly, we observe that directly adapting computer vision methods to the physics domain generally results in poor performance. In many experiments, using a CV pretraining method would often hurt performance compared to not pretraining. This points to a general difference between CV and physics tasks. In the vision domain many downstream tasks are classification-based (i.e. ImageNet, Object Detection, etc.), which results in many pretraining tasks modeled around classification, whereas physics prediction is a high-dimensional regression task. Beyond this, physics predictions not only need to be visually consistent, but also numerically accurate, which can be difficult to learn from a classification task. In fact, using physics-based pretraining methods, such as transferring between prediction tasks, regressing derivatives, or a physics-informed contrastive loss, generally results in better performance. Lastly, we observe that different models have different capacities for pretraining. For example, the OFormer architecture, which is based on transformers, benefits greatly from pretraining in many scenarios; this could be because transformers lack inductive bias and can model arbitrary relationships. Furthermore, Unet architectures also benefit consistently from pretraining; this is reflected in common convolutional architectures used for pretraining in the CV domain, such as ResNet <cit.>. DeepONet and FNO show smaller improvements with pretraining, suggesting that the architectures are less tailored for pretraining. This is especially true for FNO; we hypothesize that the learned Fourier modes may be very different between tasks, resulting in challenges when transferring weights to new tasks. §.§ Comparison of Data Augmentations To study the effects of augmenting data during pretraining and finetuning, we conduct experiments in which data augmentations are added to three pretraining strategies (None, Transfer, PICL). These experiments are run to compare data augmentations to a baseline model that is not pretrained, as well as its effects on the most effective pretraining strategies (i.e. Transfer, PICL). The results are summarized in Table <ref> for auto-regressive prediction, and the complete results can be found in Appendix <ref>. We find the best augmentation by considering the pretraining strategy and augmentation pairing with the lowest error. To calculate its improvement, this error is compared to a model that is not pretrained. We find that different models benefit from different augmentations; for example, DeepONet performs well with shifted data, but OFormer performs well with noised data. However, across models, datasets, and downstream tasks, one can generally find a data augmentation that improves performance. This suggests that the most effective pretraining frameworks should incorporate a data augmentation strategy, and indeed the best-performing models considered in this study often make use of data augmentations. Transfer learning performs best in nine of our 12 cases, and shift augmentation performs best in eight of our 12 cases, with their combination performing best in six, suggesting that this combination improves performance best across different data sets and models. We believe that data augmentations can help due to the fact that PDE data remains scarce; numerical simulation is needed for high quality data, and as a result emulating a larger dataset with augmentations is beneficial. §.§ Scaling Behavior We compare the effect of pretraining for different numbers of downstream samples in Table <ref>. We measure this effect by finding the best pretraining method for a given model, PDE, and dataset size, then calculating its improvement over no pretraining; after calculating the improvement, we average this metric across the Heat, Advection, and Burgers PDEs for auto-regressive prediction. In general, we observe a trend in which the improvement of pretrained models diminishes as the number of fine-tuning samples increases, which is expected as fine-tuning data approaches the pretraining dataset size. It follows that if the downstream data is abundant, directly training on this would be optimal. Additionally, despite these trends, the relative improvement of different pretraining strategies remains approximately constant between different downstream dataset sizes. An exception to these trends is the FNO model; we hypothesize that learned Fourier modes may be more challenging to fine-tune than other learning mechanisms such as attention matrices or convolutional kernels. For a detailed comparison of the scaling behavior in individual datasets and models, we refer readers to Appendix <ref>. Empirically, we observe a higher variance between random seeds when using a smaller dataset for fine-tuning. Furthermore, the advection equation can generally be learned with fewer samples and the performance is approximately constant with increasing dataset size. Additionally, different models and pretraining strategies display different scaling behaviors, with some models and pretraining strategies displaying greater increases in performance when fine-tuning to scarce data. This further underscores the importance of proper architecture choices that scale well, such as using transformer-based neural operators. Lastly, scaling behavior is more pronounced in fixed-future experiments; this could be because there is less data in fixed-future experiments due to only predicting a single target per data sample as opposed to predicting multiple targets across a longer auto-regressive rollout. §.§ Generalization Behavior We compare the effect of varying the distribution of the downstream dataset on the performance of pretrained models. In particular, we compare fine-tuning to unseen coefficients of the same equation (Out) as well as fine-tuning to an unseen PDE with novel initial conditions and forcing terms (NS); these results are shown in Table <ref> with 500 fine-tuning samples for auto-regressive prediction. In general, we observe reduced performance when fine-tuning to the Navier-Stokes equations, compared to fine-tuning to samples within the pretraining distribution (In). For certain models, this also holds when fine-tuning to a dataset with unseen coefficients (Out). These generalization behaviors are also approximately consistent between different sample sizes of the fine-tuning dataset. It is important to note that certain pretraining frameworks generalize better than others; for example, Coefficient pretraining largely hurts performance, since the fine-tuning distribution contains different coefficients by construction. We note that the OFormer and Unet architectures show better performance when fine-tuning to out-of-distribution samples; we hypothesize that this is due to shifts in coefficients causing easier phenomena to model. For example, increasing the diffusivity in the heat equation causes transient effects to be concentrated in a few initial timesteps and sparse behavior for the majority of the rollout. Nevertheless, under certain conditions, pretraining shows generalization to unseen coefficients and PDEs, which is a promising direction. § CONCLUSION In this work, we compare pretraining strategies for PDEs by examining pretraining frameworks that can be used across different models and datasets. In particular, we consider adapting CV pretraining to the PDE domain through sorting spatio-temporal data to learn underlying dynamics without labels. Furthermore, we derive several PDE characteristics that can be predicted, such as its coefficients, derivatives, or reconstructed input. Lastly, we implement existing contrastive as well as transfer learning strategies to construct a diverse set of pretraining strategies. Notably, these strategies can be applied to any model and PDE problem and are flexible to future advances in architectures or datasets. Through pretraining with different frameworks and data augmentations, we compare their effects on different PDEs, models, downstream datasets, and fine-tuning tasks. We find that pretraining can be highly dependent on model and dataset choices, but in general transfer learning or physics-based strategies do well. Furthermore, we find that directly adapting pretraining strategies from other domains often fails, motivating the need to design PDE-specific pretraining frameworks. Lastly, we observe that different models have different capacities for pretraining, with transformer and CNN based architectures benefiting the most from pretraining and highlighting the need for architectures that have high capacity and transferability. To further understand PDE pretraining, we investigate the effect of adding data augmentations and varying the fine-tuning dataset. We find that data augmentations consistently benefit performance, with the shift augmentation showing best performance most often. Combining transfer learning with shift augmentation shows the best performance in the majority of test cases. Additionally, pretraining performance is accentuated when the fine-tuning dataset is scarce or similar to the pretraining distribution. Through establishing a deeper understanding of pretraining for PDEs, we hope that future work can leverage these insights to propose new pretraining strategies and expand on current architectures. tmlr § COMPARISON OF PRETRAINING STRATEGIES §.§ Fixed Future Experiments §.§ Auto-regressive Experiments § COMPARISON OF DATA AUGMENTATIONS §.§ Fixed Future Experiments §.§ Auto-regressive Results § COMPARISON OF DOWNSTREAM DATASET SIZE § IMPLEMENTATION DETAILS §.§ Dataset Details We generate data according to the equations outlined in <ref>. We provide additional details here: Pretraining During pretraining, 9216 total samples are generated, with 3072 samples of the 2D Heat, Advection, and Burgers equations respectively. The samples are generated with a resolution of (n_t, n_x, n_y) = (32, 64, 64) or (n_t, n_x, n_y) = (32, 32, 32) on the domain (x,y)=[-1, 1]^2 from t=0 to t=2; the discretization depends on the downstream resolution of the data. We sample equation coefficients from a defined pretraining distribution. Heat, Advection, and Burgers equation samples are generated with a finite-differences scheme; a first-order central difference is used to discretize the diffusive term, a first-order upwinding scheme is used to discretize the nonlinear convection term, and time is discretized with a forward Euler scheme. In addition, the advection equation is solved with its analytical solution. Training/Finetuning During training/fine-tuning, we generate equations using a procedure similar to pretraining and sample coefficients either in the pretraining distribution or from a disjoint distribution to test generalization to unseen coefficients. For fine-tuning on the Navier-Stokes equations, we use a higher resolution of (n_t, n_x, n_y) = (32, 64, 64), otherwise experiments are run with a resolution of (n_t, n_x, n_y) = (32, 32, 32). We generate 1024 samples for the Heat, Advection, Burgers, and Navier-Stokes equations to train with. An additional 1024 out-of-distribution samples for the Heat, Advection, and Burgers equations is also generated. Additionally, the Burgers equation, initial conditions are unchanged to evaluate fine-tuning to a reference problem undergoing different dynamics, such as in design optimization problems <cit.>. Validation Validation samples are generated similarly to fine-tuning samples, also with equation coefficients sampled from either the pretraining or disjoint distribution. We generate 256 samples for the Heat, Advection, Burgers, and Navier-Stokes equations. §.§ Model Details We implement modern FNO and Unet architectures according to <cit.>. Furthermore, we implement DeepONet architectures according to DeepXDE <cit.>, and use the original implementation for OFormer <cit.>. The hyperparameters used for the models are described in Table <ref>. §.§ Pretraining Details During pretraining, different strategies require different implementations and hyperparameters. A consideration is that many models need a linear head during pretraining to project model outputs to the classification or regression dimension. Since models will be used for physics prediction, their outputs will be in the shape of the solution field, rather than cross entropy probabilities or regressed values. We use a lightweight CNN projector to downsample and flatten model outputs to the desired dimension. Models are generally trained for 200 epochs with a batch size of 32 using Adam, weight decay, and a OneCycle scheduler for five seeds. Binary: Binary pretraining is implemented by shuffling a sample in time and randomly choosing a shuffled or original input with corresponding labels of 0 or 1 to be used for classification. We use a CNN head to project model outputs to a single logit for a binary cross-entropy loss. Within this framework there are a few design decisions. The difficulty of the task can be modulated by the Hamming distance between the shuffled sample and the original sample. For example, if the shuffled sample is not changed much (e.g. only two frames are swapped), the difference between a sorted and shuffled sample is small and thus more challenging to distinguish. We can leverage this to gradually decrease the Hamming distance of shuffled samples to incrementally increase the difficulty of the task over pretraining. Empirically, this does not make a large difference during training so we choose to omit this curriculum learning for simplicity. An additional consideration is the probability of sampling a shuffled or sorted sample. In theory, there are many more shuffled samples than sorted samples (i.e. more labels with 0 vs. 1); therefore, it may be beneficial to sample more shuffled samples and use a weighted binary cross-entropy loss. In practice this does not significantly affect training, so we uniformly sample sorted or shuffled samples. A final consideration is that PDE solutions generally do not exhibit large changes in time, therefore, we patchify the time dimension when shuffling to create larger changes in shuffled patches. In general, models are able to learn to distinguish between shuffled and original inputs very well, and we display t-SNE embeddings of a pretrained FNO model on a validation set of shuffled and unshuffled samples in Figure <ref>. TimeSort/SpaceSort: Sorting along a single dimension is implemented by patchifying the solution field along the desired dimension and shuffling these patches. This is done to create more distinct differences in the shuffled solution, with the patch size controlling the number of permutations of the shuffled sequence. The permutation number affects the difficulty of the sorting task, with large permutation numbers being more difficult since each permutation represents a different class. To mitigate this, we set the patch size to ensure a sequence length of 4 when shuffling, resulting in 4! = 24 classes or permutations of the solution field. The CNN projection head is modified accordingly to output 24 logits for a cross-entropy loss. In general, spatial sorting does not work well nor does training converge, so we omit this from the results; aliasing effects or periodic boundary conditions can make some spatially shuffled samples extremely similar or identical to sorted samples. However, temporal sorting tends to work well, and we display t-SNE embeddings of a pretrained FNO model on a validation set of temporally shuffled samples in Figure <ref>. Jigsaw: Jigsaw is implemented similarly to other sorting frameworks, however due to sorting along multiple axes the number of possible shuffled sequences quickly increases. We mitigate this by using spatial and temporal patches to ensure a sequence length of 8 when shuffled, resulting in 8! = 40320 possible permutations. This is still a large number of classes for a task, therefore we deterministically choose 1000 samples with the largest Hamming distance between the shuffled sequence and original sequence. Contrary to the binary case, shuffled samples with larger Hamming distances are more challenging due to needing to sort more patches The CNN projection head is modified accordingly to output 1000 logits for a cross-entropy loss. In general, jigsaw sorting tends to be more challenging, however, models can still display reasonable performance; we display t-SNE embeddings of a pretrained FNO model on a validation set of jigsaw shuffled samples in Figure <ref>. Coefficient: Coefficient regression is implemented by extracting coefficient values from the PDE metadata. The CNN projection head is then modified to output the corresponding number of logits for an MSE loss. Derivative: We generate labels for derivative regression through taking spatial and time derivatives {u_t, u_x, u_y, u_xx, u_yy} of the PDE solution field using FinDiff <cit.>. This introduces an additional design consideration as the label has more values than the input. We modify the CNN projection head to upsample model outputs after convolution to the desired dimension and apply an MSE loss. Masked: Masked inputs are generated by splitting inputs into spatial and temporal patches, and selecting a random subset of these to be masked. In our experiments, we choose to mask 75% of patches. Masked patches are replaced with a learnable mask token, and the full input is passed to the model to reconstruct the original solution field. Since the output shape is the same as the downstream target, a projection head is not strictly needed, but we still include a CNN projection head and apply an MSE loss. This follows previous work; models learn transferable latent features by abstracting reconstruction-specific behavior to a decoder <cit.>. PICL: PICL uses the Generalized Contrastive Loss function <cit.> given in equation <ref>: ℒ_GCL(u_i, u_j) = ψ_i,j/2 d_physics(u_i, u_j)^2 + 1 - ψ_i,j/2max(τ - d_physics(u_i, u_j), 0)^2 When working with multiple data sets simultaneously, a vector of operator coefficients is constructed as θ. The similarity between systems is given by magnitude-aware cosine similarity: ψ_i,j(θ_i, θ_j) = √(|θ_i ·θ_j|)/max(‖θ_i‖, ‖θ_j ‖). The distance between samples is calculated in two parts for a given time t: d_system(u_i, u_j) = u_i^t+1 - u_j^t, and d_update = F(G_Θ(u_i)) - G_Θ(u_j), where G_Θ is our parameterized model, and F(·) is our numerical update. d_update is anchored to d_system to account for mode collapse, giving us the loss function: d_physics(u_i, u_j) = ‖ d_system(u_i, u_j) - d_update(u_i, u_j) ‖^2. τ is a hyperparameter that defines a margin, above which samples are considered to be from different classes. For pretraining, we construct the operator coefficient vector as θ = [‖𝐜_Burgers‖, ν, ‖𝐜_Advection‖] §.§ Data Augmentation Details We implement three data augmentations to evaluate their effects on model performance: noise, shift, and scale. Noise Gaussian noise is added to data samples and targets through sampling a Gaussian at zero mean and a prescribed variance: X_noise = X + σ^2𝒩(0, I). Empirically, we set the variance to 10^-7; when noise levels are too high, model performance can significantly deteriorate. Shift Using the Fourier shift theorem, samples can be shifted in space and resampled in the spectral domain <cit.>. Shifting PDE solutions in space preserves physics, since the PDEs considered in this work are invariant across space. Mathematically, this can be verified by deriving or looking up the Lie groups for the 2D Advection, Heat, and Burgers equations, for which there are many, and noting that the solutions can be shifted along the X or Y axes <cit.>. We uniformly sample the magnitude of the shift between [-0.5, 0.5]. Scale Scaling PDE solutions respects physics for the Heat and Advection equations, but not the Burgers equation. However, we still choose to include this augmentation to evaluate the effect of physically inconsistent augmentations; in practice, scaling PDE solutions still improves model performance. The implementation is done by multiplying PDE solutions by a constant, which we uniformly sample between [-0.5, 0.5]. §.§ Fine-tuning Details During fine-tuning, models trained until convergence for fixed-future or auto-regressive prediction and repeated for five seeds. In fixed-future prediction, models are given the solution field at t=[0, 8) and the target is at t=32. For auto-regressive prediction, models are given the solution field at t=[0, 8) and the target is at t=[8, 16). After this prediction, the models use their own output to predict the next step t=[16, 24) until the time horizon of t=32. To stabilize auto-regressive rollout, we implement temporal bundling and the pushforward trick <cit.>. Losses are calculated using a relative L2 norm <cit.>; validation losses are averaged across batch size and accumulated over timesteps or, in the case of fixed-future prediction, at only one timestep. For experiments with different fine-tuning sample sizes, samples are randomly chosen from 1024 possible samples to reach the desired number of samples for each seed. We use an Adam optimizer with weight decay and a CosineAnnealing scheduler. All experiments are run on a NVIDIA GeForce RTX 2080Ti GPU.
http://arxiv.org/abs/2406.07902v1
20240612061028
Collinear three-photon excitation of a strongly forbidden optical clock transition
[ "Samuel P. Carman", "Jan Rudolph", "Benjamin E. Garber", "Michael J. Van de Graaff", "Hunter Swan", "Yijun Jiang", "Megan Nantel", "Mahiro Abe", "Rachel L. Barcklay", "Jason M. Hogan" ]
physics.atom-ph
[ "physics.atom-ph", "quant-ph" ]
UTF8gbsn These authors contributed equally to this work. Department of Physics, Stanford University, Stanford, California 94305, USA These authors contributed equally to this work. Department of Physics, Stanford University, Stanford, California 94305, USA Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA These authors contributed equally to this work. Department of Physics, Stanford University, Stanford, California 94305, USA Department of Physics, Stanford University, Stanford, California 94305, USA Department of Physics, Stanford University, Stanford, California 94305, USA Department of Applied Physics, Stanford University, Stanford, California 94305, USA Department of Applied Physics, Stanford University, Stanford, California 94305, USA Department of Physics, Stanford University, Stanford, California 94305, USA Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA []hogan@stanford.edu Department of Physics, Stanford University, Stanford, California 94305, USA § ABSTRACT The ^1S_0 – ^3P_0 clock transition in strontium serves as the foundation for the world's best atomic clocks and for gravitational wave detector concepts in clock atom interferometry. This transition is weakly allowed in the fermionic isotope ^87Sr, but strongly forbidden in bosonic isotopes unless a strong external magnetic field is applied. Here we demonstrate coherent excitation of the clock transition in bosonic ^88Sr using a novel collinear three-photon process in a weak magnetic field. We observe Rabi oscillations with frequencies of up to 50 kHz using W/cm^2 laser intensities and Gauss-level magnetic field amplitudes. The absence of nuclear spin in bosonic isotopes offers decreased sensitivity to magnetic fields and optical lattice light shifts, enabling clocks with reduced systematic errors. The collinear propagation of the laser fields permits the interrogation of spatially separated atomic ensembles with common laser pulses, a key requirement for dark matter searches and gravitational wave detection with clock atom interferometers. Collinear three-photon excitation of a strongly forbidden optical clock transition Jason M. Hogan June 11, 2024 ================================================================================== Narrow optical transitions to long-lived atomic states are essential for many applications in metrology <cit.>, precision timekeeping <cit.>, and tests of fundamental physics <cit.>. The most accurate optical atomic clocks use ^1S_0 – ^3P_0 singlet-to-triplet transitions in neutral atoms and ions (e.g. Sr, Yb, Mg, Al^+) <cit.>. These ultranarrow transitions are weakly allowed in fermions because of the hyperfine interaction, but are strongly forbidden in bosons, where a large external magnetic field is required to induce coupling <cit.>. Bosonic isotopes promise considerable advantages, such as their lack of nuclear spin, higher natural abundance, simplified laser cooling and state preparation, as well as the scalar polarizability of their clock states <cit.>. To access these desirable properties, coherent multi-photon processes in bosons have been proposed <cit.>. Of particular interest is the three-photon excitation ^1S_0 →^3P_1 →^3S_1 →^3P_0 <cit.>, using a set of laser frequencies readily available for laser cooling, repumping, and imaging. Ostensibly, this coherent three-photon excitation cannot be driven with collinear laser beams due to angular momentum selection rules <cit.>. Instead, at least one of the beams must be sent from a different direction. This is easily accommodated in most clock applications, where the beams can even be arranged to eliminate the net momentum transfer to the atoms <cit.>. In contrast, collinear laser beams are advantageous in applications where several atomic ensembles are ideally addressed with identical laser pulses, such as multi-qubit entanglement in optical tweezer arrays <cit.>, differential operation of atomic clocks <cit.>, and gradiometer configurations in clock atom interferometry <cit.>. In gradiometers, common laser pulses are particularly important for laser frequency noise suppression, which requires that all spectral content must be delivered from the same direction <cit.>. Like clocks, clock atom interferometry proposals generally rely on naturally occurring narrow-line transitions in fermions <cit.>. Due to the weak coupling, interferometry pulses on these transitions are slow and the interferometer sensitivity is ultimately limited by the number of pulses that can be applied. Unlocking bosonic isotopes for long-baseline clock atom interferometry via multi-photon processes could lead to a substantial increase in sensitivity by engineering an effective transition with both strong coupling and long excited state lifetime. Here we demonstrate the coherent, collinear three-photon process ^1S_0 →^3P_1 →^3S_1 →^3P_0 in bosonic ^88Sr mediated by a weak magnetic bias field. The constant field lifts the degeneracy of the intermediate ^3P_1 sublevels that otherwise leads to destructive interference of the excitation paths. We show why this transition cannot be driven with collinear laser beams without a magnetic field and illustrate the transformation of the effective angular wavefunction of ^3P_1 under the associated external torque. We drive this three-photon transition using a single trichromatic laser pulse, delivered via a polarization-maintaining optical fiber. We observe Rabi oscillations between the clock states ^1S_0 and ^3P_0 with frequencies of up to 50 kHz, substantially surpassing what could be achieved with similar laser intensities on the naturally occurring single-photon transition in fermionic ^87Sr and the magnetic-field-induced transition in ^88Sr. With this new technique, we demonstrate the first multi-photon clock atom interferometer using a Mach-Zehnder pulse sequence. Since all spectral content for the three-photon transition is copropagating, this interferometer scheme is compatible with long-baseline clock gradiometers like MAGIS-100 <cit.>. Bosonic isotopes can now be employed in such experiments, which extends the utility of next-generation quantum sensors. Alkaline-earth-like atoms such as strontium have two valence electrons and their atomic states are split into a singlet and a triplet manifold ([a]Fig:LevelDiagrams). While strong electric dipole transitions are possible within each manifold, spin-flipping electric dipole transitions between singlet and triplet states are generally forbidden. The ^1S_0 – ^3P_1 transition is weakly allowed through spin-orbit coupling. The three-photon process described here combines this weak intercombination line and two strong electric dipole transitions into one coherent excitation. The coupling strength of this multi-photon transition can be expressed as the product of the individual single-photon couplings Ω_i divided by the intermediate laser frequency detunings Δ_i ([b]Fig:LevelDiagrams). When using linearly polarized light fields, the effective three-photon coupling can be written as Ω_eff = Ω_1 Ω_2 Ω_3/2Δ_2 ϵ_jkl D_jn e^(1)_n e^(2)_k e^(3)_l, where ϵ_jkl is the Levi-Civita symbol, and e_n^(i) is the n^th Cartesian component of the i^ th laser's polarization vector, with an implicit sum over repeated indices (see Supplementary). The entries of the matrix D_jn depend on the inverse detunings 1/Δ_1,m to the three magnetic sublevels m of ^3P_1. When these sublevels are degenerate, D_jn=1/Δ_1δ_jn is diagonal and <Ref> reduces to Ω_eff = Ω_1Ω_2Ω_3/2Δ_1Δ_2 ě^(1)·(ě^(2)×ě^(3)). The three-photon coupling therefore scales with the volume spanned by the electric field polarization vectors, which vanishes for collinearly propagating optical fields. Applying an external magnetic field B̌ along ẑ̌ lifts the degeneracy of the ^3P_1 sublevels via a Zeeman shift δω_B = g_J μ_B B/ħ, where g_J is the Landé g factor and μ_B is the Bohr magneton ([b]Fig:LevelDiagrams). Choosing a set of polarizations ě^(1) = ě^(2) = x̌̂̌ and ě^(3) = ẑ̌ with collinear propagation direction ŷ̌, the coupling becomes Ω_eff = Ω_1 Ω_2 Ω_3/4Δ_2(1/Δ_1 - δω_B - 1/Δ_1 + δω_B). The minus sign between the terms conveys the destructive interference of the two excitation paths (via m=-1 and m=+1) in the absence of a magnetic field. The optical field requirements without external magnetic field can be illustrated using the angular part of the atomic state wavefunction, which consists of an entangled superposition of orbital angular momentum Ľ and spin angular momentum Š ([a]Fig:TransitionPaths). The light field at 689 nm along x̌̂̌ drives a weakly-allowed singlet-to-triplet transition to ^3P_1 only possible through spin-orbit coupling. The triplet component of the resulting state has the form (|L_y⟩|S_z⟩-|L_z⟩|S_y⟩), where |L_n⟩ and |S_n⟩ are the Cartesian basis states of the orbital angular momentum and spin operators (see Supplementary). The axes of these p orbitals |L_n⟩ are orthogonal to the polarization of the exciting laser. The other two light fields at 688 nm and 679 nm drive spin-preserving, dipole-allowed transitions where the polarization of the light has to match the p orbital axis of the associated spin projection |S_n⟩. Since the final state of the three-photon process ^3P_0 has the form (|L_x⟩|S_x⟩+|L_y⟩|S_y⟩+|L_z⟩|S_z⟩), two mutually orthogonal polarization components are required to reach the appropriate spin state. Thus, [a]Fig:TransitionPaths is a visual representation of the scalar triple product in <Ref>, illustrating the need for non-copropagating light fields [Though we only show the case where the first photon is x̌̂̌-polarized, the requirement for non-coplanar polarizations is independent of the choice of the first polarization for any ě^(1)⊥B̌.]. Applying a magnetic field along ẑ̌ alters the angular part of the effective ^3P_1 wavefunction ^3P_1, θ_B = - 1√(2)(sinθ_BL_x+icosθ_BL_y)S_z + 1√(2)L_z(sinθ_BS_x+icosθ_BS_y), where we define the angle θ_B ≡arctanβ, with dimensionless ratio β≡δω_B/Δ_1. With increasing θ_B the state acquires both a spin and an orbital angular momentum component along x̌̂̌. The three-photon transition can now be driven with a set of collinear laser beams, e.g. using x̌̂̌-x̌̂̌-ẑ̌ polarizations (see [b]Fig:TransitionPaths). We demonstrate this coherent three-photon excitation using a thermal ensemble of bosonic ^88Sr atoms. In a constant magnetic field aligned with ẑ̌, a single laser pulse containing all three wavelengths is applied to the atoms along the ŷ̌ direction. The laser light at 689 and 688 nm is linearly polarized along x̌̂̌ while the 679 nm component is linearly polarized along ẑ̌. To avoid populating any of the intermediate states, the lasers are each detuned from their respective single-photon resonance by several hundred natural linewidths using an optical frequency comb as a reference. To estimate the three-photon pulse fidelity, we measure the populations in the ground state ^1S_0 and in all metastable exited states ^3P_J using a state-selective imaging scheme (see Methods). The resonance frequency for the three-photon transition is given by the effective detuning Δ_eff≡Δ_3 + Ω_ac, with cumulative laser detuning Δ_3 and ac-Stark shift (see Supplementary): Ω_ac = - Ω_1^2/2Δ_1/Δ_1^2-δω_B^2 + Ω_3^2/4Δ_2. We vary the laser detuning Δ_3 around the calculated resonance frequency and find the three-photon transition as expected. [a]Fig:LineRabi shows such a line scan with peak excited state transfer of 39% and a line shape that fits to a Voigt profile with HWHM of 59 kHz. This linewidth includes substantial inhomogeneous broadening of 33 kHz, primarily from the thermal Doppler width of the atoms. At the measured three-photon resonance, we scan the pulse duration and observe Rabi oscillations in the ^3P_0 population with a frequency of 29.9(2) kHz ([b]Fig:LineRabi). This is in good agreement with the expected three-photon Rabi frequency of 30(1) kHz, using a total laser intensity of 4.1 W/cm^2 and a magnetic field amplitude of 10.1 G (see Methods). The noticeable damping of the Rabi oscillation stems mostly from intensity inhomogeneity across the cloud since we intentionally focus the laser beam to increase the peak intensity. This is easily avoided using a larger beam size. The populations in the other metastable excited states ^3P_1 and ^3P_2 are small, consistent with minimal off-resonant single-photon excitation. We further characterize the three-photon process at various magnetic fields and laser detunings by measuring the resonant Rabi frequencies and comparing their magnitudes. It is convenient to parameterize the coupling via the ratio of the detunings β = δω_B/Δ_1 such that <Ref> can be written as Ω_eff = Ω_0 β/1 - β^2, where Ω_0 ≡Ω_1Ω_2Ω_3/(2Δ_1Δ_2) is the maximum Rabi coupling in <Ref>. In [a]Fig:Metaplots, we show the normalized three-photon coupling Ω_eff/Ω_0 over a wide range of detuning ratios β. We find excellent agreement with the expected coupling strength for the effective two-level system based on the inferred single-photon couplings (see Methods). For ratios close to β=1, direct excitation to ^3P_1 is no longer negligible and the observed Rabi frequencies are slightly lower than predicted. While the strong coupling near this divergence is desirable, sufficient detuning from the single-photon resonance is required to maintain efficient three-photon excitation. We achieve Rabi frequencies as high as 50 kHz while maintaining a detuning of over 500 natural linewidths. We expect the three-photon coupling to depend on the shape of the effective orbital |^3P_1, θ_B⟩ and the projection of the light onto it (see [b]Fig:TransitionPaths). We define the maximum Rabi coupling at a given magnetic field , such that the three-photon coupling becomes Ω_eff = Ω_B sinθ_B, for -π2≤θ_B ≤π2. In [b]Fig:Metaplots, we plot the previous data set in the alternative parametrization Ω_eff/Ω_B versus the projection angle θ_B. This normalization avoids the divergence at β=1 and shows the sinusoidal variation of the projection, which asymptotes to 1 for large values of β. At a given Ω_B, the coupling depends on the projection of the atom's dipole moment onto the light polarization vector. Tuning θ_B has the same effect on the coupling as using a polarization for the second light field ě^(2) that is impossible to attain with collinear light. For a general elliptical polarization ě^(2) = sinθ x̌̂̌ + i cosθ ŷ̌ with ellipticity angle θ in the xy-plane, the three-photon coupling becomes Ω_eff = Ω_Bcos(θ_B-θ). In this work we use θ=π/2, which is the only polarization choice compatible with collinear propagation along ŷ̌. Thus, the external torque from the magnetic field tunes the effective dipole moment of the atom in much the same way as a waveplate rotates the polarization of light (see Supplementary). Next, we demonstrate a proof-of-principle clock atom interferometer using the collinear three-photon transition (Fig:MZ). We apply a symmetric Mach-Zehnder pulse sequence <cit.> consisting of a beamsplitter pulse (π/2), followed by a mirror pulse (π), and a recombination pulse (π/2). These pulses are separated by an interrogation time T = 200 s chosen to be much longer than the lifetimes of the intermediate states. The visibility of the interferometer signal does not vary with the interrogation time, confirming coherent three-photon excitation of the long-lived clock state. This is the first demonstration of a ^1S_0 – ^3P_0 clock atom interferometer in ^88Sr without the use of a large magnetic bias field <cit.>. Compared to previous work, we observe a 30-fold increase in Rabi frequency using only 17% of the total laser intensity and 3% of the magnetic field strength <cit.>. Because of the low magnetic field requirement, our method is applicable in long-baseline atom interferometers, where efficient atom-light interaction is desired anywhere along the baseline <cit.>. Additionally, the small magnetic field amplitude sets a ^3P_0 lifetime of over 10^6 s, significantly longer than the natural lifetime of 118 s in ^87Sr <cit.>. Long coherence times are essential for probing gravity <cit.> and detecting gravitational waves with long baselines <cit.>. We also observe more than ten times the Rabi frequency than what could be achieved on the naturally occurring ^87Sr clock transition with the same total intensity. High Rabi frequencies ensure broad velocity acceptance <cit.> and allow for more pulses in a given free-fall time, which is beneficial for large momentum transfer (LMT) atom interferometers <cit.>. Next-generation long-baseline clock atom interferometer experiments, including MAGIS-100 <cit.> and AION <cit.>, rely on LMT atom optics to significantly enhance their sensitivity. For these applications, interferometry pulses with transfer fidelities above 99% are required <cit.>. Density matrix simulations suggest that our method can support such pulse efficiencies with W/mm^2-scale laser intensities and magnetic fields below 10 G, yielding Rabi frequencies of at least 25 kHz (see Supplementary). Moreover, collinear three-photon transitions enable the use of bosonic isotopes that feature higher natural abundance, simpler level structure, and reduced magnetic field sensitivity. All of these attributes are favorable for reaching the challenging sensitivity targets for gravitational wave detection and dark matter searches with clock atom interferometers. Driving the ^1S_0 – ^3P_0 clock transition in bosonic atoms also has many advantages in optical atomic clocks, including insensitivity to the polarization of trapping light, and lack of a first-order Zeeman shift <cit.>. Existing ^88Sr clocks are limited by shifts from the second order Zeeman effect and from the probe light <cit.>. The method described here employs a magnetic field strength that is substantially smaller per unit Rabi frequency, reducing the Zeeman shift for a given coupling strength. In addition, the overall probe light shift can be eliminated with an appropriate choice of laser detunings (see Supplementary). Thus, the three-photon transition demonstrated here appears promising for improving the accuracy of bosonic clocks. Many of these advantages are also favorable for applications in quantum information science <cit.>, where the bosonic ^3P_0 <cit.> and ^3P_2 <cit.> states show promise for qubit storage and manipulation by leveraging their long coherence times and environmental insensitivity. While not demonstrated in this work, a similar three-photon excitation ^1S_0→^3P_1→^3S_1→^3P_2 can be achieved by substituting 707 nm for 679 nm light <cit.>, which may have advantages over magnetic quadrupole excitation to ^3P_2 <cit.>. The method demonstrated here is broadly applicable in other systems that use ultranarrow singlet-to-triplet transitions, especially when high Rabi coupling is beneficial and bosonic isotopes would be preferred. In particular, precision measurements based on the differential interrogation of distant atoms benefit from the application of common, copropagating laser pulses. Note added. – During the preparation of the manuscript we became aware of other work on three-photon transitions in ^84Sr <cit.>. § ACKNOWLEDGEMENTS We wish to thank Mark Kasevich and Shaun Burd for helpful discussions. We thank Leo Hollberg for letting us use his frequency reference. B.G. acknowledges support from the Office of Naval Research through the NDSEG fellowship. This work was supported by the Gordon and Betty Moore Foundation Grant GBMF7945, the NSF QLCI Award No. OMA-2016244, and partially supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under the contract No. DE-AC02-07CH11359. apsrev4-2 § METHODS §.§ Experimental setup We prepare a thermal ensemble of 10^6 ^88Sr atoms at a temperature of 2 K, using a two-stage magneto-optical trap (MOT). A more detailed description of the setup and experimental sequence for the atomic cloud preparation can be found in Ref. <cit.>. After the atoms are released from the final MOT, we turn off the magnetic quadrupole field and apply a constant bias field with an amplitude between 4 and 19 G. Then, a three-photon laser pulse of variable duration and detuning is applied to the atoms. The atomic cloud has an rms radius of 130(5) m at the time of the pulse. The final atomic state is characterized using a state-selective imaging scheme described below. The three-photon pulses consist of light from three external cavity diode lasers (ECDLs) at 689 nm, 688 nm, and 679 nm. Each ECDL is locked to an optical frequency comb (Menlo Systems FC1500-ULN) that is stabilized to an optical cavity (Menlo Systems 1550 ORS). The relative polarizations of the lasers are set by combining the beams on a polarizing beamsplitter. The combined output is sent through an acousto-optic modulator (AOM) and the diffracted order is coupled into a polarization-maintaining optical fiber. The final three-photon beam at the location of the atoms has a 1/e^2 radial waist of 392.5(3) m. The average optical power in each component in the desired polarization is 0.79 mW, 6.9 mW, and 2.2 mW at 689 nm, 688 nm, and 679 nm, respectively. This corresponds to a total peak intensity of 4.1 W/cm^2. The 679 nm laser detuning was held fixed for all experiments at -2536 MHz with respect to the ^3P_0 – ^3S_1 resonance, while the 689 nm detuning Δ_1/2π was varied between 4 and 43 MHz. In each case, the 688 nm detuning was adjusted to achieve three-photon resonance (Δ_eff=0) for the given value of Δ_1, with a range of -2556±22 MHz with respect to the ^3P_1 – ^3S_1 resonance. §.§ State-selective imaging To characterize the atomic state after a three-photon pulse, we spatially separate the atoms into three clouds using a sequence of push and repump pulses (see Fig:Imaging). Push pulses consist of 461 nm light resonant with the ^1S_0 – ^1P_1 transition, which imparts momentum to atoms in the ground state ^1S_0. The first push pulse occurs 200 s after the end of the three-photon pulse. This time is chosen to give atoms in the ^3P_1 state sufficient time to decay to the ground state. To optically pump atoms out of ^3P_2, we apply a repump pulse resonant with the ^3P_2 – ^3S_1 transition at 707 nm. Due to the 3:1 branching ratio of this process, most of the atoms return to the ground state, while the rest are shelved in ^3P_0. After an additional 200 s, a second push pulse imparts momentum to the fraction of ground state atoms that were pumped out of ^3P_2 and adds additional momentum to the first pushed cloud. Finally, repump light at both 707 nm and 679 nm optically pumps all remaining atoms back into the ground state. We wait 4.5 ms for the three clouds to spatially separate and then count the number of atoms in each cloud using fluorescence imaging on the 461 nm transition. Because of the 3:1 branching ratio, the true ^3P_2 population is 4/3 of the raw atom number in the singly-pushed cloud. Conversely, the true ^3P_0 population is obtained by subtracting 1/3 of the atoms in the singly-pushed cloud from the nominal atom number in the unpushed cloud. The ^3P_1 population is obtained using a modified version of the above imaging sequence (see [b]Fig:Imaging). A push pulse immediately following the three-photon pulse separates unexcited ground state atoms from those anywhere in the ^3P_J manifold. After a wait time of 200 s, a second push pulse imparts momentum to ground state atoms that have decayed from ^3P_1. Finally, a repump pulse at both 707 nm and 679 nm optically pumps the combined populations of ^3P_0 and ^3P_2 back into the ground state. The atoms are then imaged after a 4.5 ms wait time as in the other scheme. §.§ Systematic errors An important caveat to the ^3P_1 imaging sequence is that it undercounts the actual loss due to unwanted ^3P_1 excitation, since this state can decay back into the ground state during the three-photon pulse. An estimate for the total loss due to off-resonant scattering from ^3P_1 is obtained from the number of scattering events N over the pulse duration t_p, given by N=1/τ_1∫_0^t_pP_e(t) dt ≈ t_p/τ_1<P_e>. Here, τ_1 is the lifetime and P_e(t) the state population in ^3P_1, with time averaged fraction <P_e>. For the data in [b]Fig:LineRabi, we estimate <P_e>=1.2(1)% and N=4.8(5)% for the total loss from this channel over the full 85 s. We measure a laser polarization impurity of 2%, which can lead to parasitic excitation to the ^3P_1,m=0 state if the 689 nm detuning is near resonance (Δ_1 = 0). We maintain a detuning of over 500 natural linewidths to avoid competing single-photon processes. We observe transient magnetic field oscillations when turning on the bias magnetic field. We wait 4.5 ms for the field to settle before applying three-photon pulses. The magnetic field amplitudes B are obtained using Rabi spectroscopy of the ^3P_1 Zeeman splitting, which is given by δω_B = g_Jμ_B B/ħ, where g_J=3/2 is the Landé g factor and μ_B is the Bohr magneton. We measure the Zeeman splittings at the start and 100 s after the three-photon pulse and observe a time variation of less than 1%. §.§ Calculating the three-photon coupling The expected three-photon Rabi frequencies are calculated using <Ref> and the three single-photon couplings Ω_i = √(6π c^2 Γ_i I_i/ħ ω_i^3), with I_i=P_i/π2 w_0^2. Here, I_i is the peak intensity of the Gaussian beam with 1/e^2 radial waist w_0, and P_i is the optical power in the spherical basis component that connects the specific m states (see [b]Fig:LevelDiagrams). The decay rate for each transition is Γ_i=η_i/τ_i, where τ_i is the excited state lifetime and η_i is the branching ratio (η_1=1, η_2=1/6, and η_3=1/9). We use the following experimentally determined lifetimes of the intermediate states: ^3P_1 (τ_1 = 21.28(3) s) <cit.> and ^3S_1 (τ_2 = 15.0(8) ns) <cit.>. To calibrate the laser intensities, we drive resonant Rabi oscillations between ^1S_0 and ^3P_1, m=0 with a measured optical power at 689 nm. The peak Rabi frequency from this measurement serves as an in situ measurement of the effective beam size, with a 1/e^2 radial waist of 392.5(3) m. This effective waist is then approximately common to all three wavelengths, since they are delivered via the same optical fiber and collimation optics. Using the measured powers in each wavelength, we calculate average single-photon Rabi frequencies of approximately Ω_1/2π = 1.239(1) MHz, Ω_2/2π = 56.0(4) MHz, and Ω_3/2π = 35.8(3) MHz. The detuning of each laser from their respective atomic resonance is inferred using the frequency comb, which we calibrate with the observed 689 nm resonance frequency in conjunction with its best experimental estimate <cit.>. The uncertainties in Δ_1 and Δ_2 used to calculate Ω_eff, Ω_0, and Ω_B reflect the combined uncertainties from our spectroscopic measurements and the uncertainties in the measured transition frequencies <cit.>. Our uncertainty in the cumulative laser detuning Δ_3 is lower than the reported uncertainties for the 688 nm and 679 nm transition frequencies. We take advantage of the fact that their difference is equal to the energy splitting between the ^3P_1 and ^3P_0 states, which is known more precisely <cit.>. Our uncertainty in the effective three-photon detuning Δ_eff reflects the combined uncertainties of the single-photon couplings and detunings.
http://arxiv.org/abs/2406.08147v1
20240612123714
A New Linear Programming Approach and a New Backtracking Strategy for Multiple-Gradient Descent in Multi-Objective Optimization
[ "Francesco Della Santa" ]
math.OC
[ "math.OC", "cs.NA", "math.NA", "90C29, 65K05, 90C26" ]
inst1,inst3]Francesco Della Santacor1 [cor1]Corresponding author [inst1]organization=Department of Mathematical Sciences, Politecnico di Torino, addressline=Corso Duca degli Abruzzi 24, postcode=10129, state=Turin, country=Italy [inst3]organization=Gruppo Nazionale per il Calcolo Scientifico INdAM, addressline=Piazzale Aldo Moro 5, postcode=00185, state=Rome, country=Italy § ABSTRACT In this work, the author presents a novel method for finding descent directions shared by two or more differentiable functions defined on the same unconstrained domain space. Then, the author illustrates an alternative Multiple-Gradient Descent procedure for Multi-Objective Optimization problems that is based on this new method. In particular, the proposed method consists in finding the shared descent direction solving a relatively cheap Linear Programming (LP) problem, where the LP's objective function and the constraints are defined by the gradients of the objective functions of the Multi-Objective Optimization problem. More precisely, the formulation of the LP problem is such that, if a shared descent direction does not exist for the objective functions, but a non-ascent direction for all the objectives does, the LP problem returns the latter. Moreover, the author defines a new backtracking strategy for Multiple-Gradient Descent methods such that, if the proposed LP is used for computing the direction, the ability to reach and/or explore the Pareto set and the Pareto front is improved. A theoretical analysis of the properties of the new methods is performed, and tests on classic Multi-Objective Optimization problems are proposed to assess their goodness. Multiple-Gradient Descent Multi-Objective Optimization Linear Programming [2020] 90C29 65K05 90C26 § INTRODUCTION Multi-Objective Optimization (MOO) is the area in optimization that deals with problems that involve multiple, and often competing, objectives that must be optimized simultaneously. The importance of such a kind of optimization problems is evident from the wide MOO applications in real-world scenarios, ranging from mechanical engineering and fluid dynamics <cit.>, to energy-saving strategies <cit.>, to task allocation strategies <cit.>, and many other topics. These applications have in common the need to find optimal trade-offs between different objectives, typically characterized by competing behaviors with respect to the optimization variables. The trade-off solutions typically are represented by the so-called Pareto set, while their images in the objectives' space by the so-called Pareto front. For more details about the theory of MOO problems, we refer to <cit.>. Often, MOO problems are solved using derivative-free methods <cit.>, where the most used ones are the nature-inspired methods <cit.>. In particular, Genetic Algorithms <cit.> and Particle Swarm Optimization algorithms <cit.>, are the most popular. The main advantages of these methods are their derivative-free nature and their efficiency in exploring the domain through “populations” of solutions; on the other hand, they lack “realistic” theoretical convergence properties (e.g., see <cit.>) or they have difficulties in finding accurate solutions, typically due to premature convergence (e.g., see <cit.>). Another common approach for solving MOO problems involves minimizing a single loss function formed by a weighted sum of the objective functions <cit.> or using specific scalarization techniques (e.g., see <cit.>). These methods have stronger theoretical foundations but require multiple runs with varying parameter values to explore the Pareto set/front in depth. A last approach for MOO problems is represented by Multiple-Gradient Descent (MGD) methods <cit.>. MGD methods aim to build a sequence that converges to the Pareto set by solving sub-problems that determine descent directions shared by all the objective functions. These methods are characterized by robust theoretical properties; nonetheless, similarly to the scalarization techniques, they must be executed multiple times to fully explore the Pareto set/front, starting from different points sampled from the domain. The sub-problems used in MGD methods for computing the shared descent directions are often nonlinear; for example, in <cit.> the sub-problem consists in finding the minimum-norm vector in the anti-gradients’ convex hull. Nonetheless, there are some approaches based on Linear Programming (LP) problems, as illustrated in <cit.>. Even if the methods for solving LP problems are fast and very efficient, often the nonlinear sub-problems are preferred, probably due to the different properties of the directions they return. For example, the anti-gradients’ convex hull used in <cit.> restricts the search of the descent directions to a sub-region of all the shared descent directions that, unless of particular cases, is sufficiently distant to the boundary defined by the set of perpendicular directions to one of the gradients; on the other hand, the LP problem described in <cit.> look for the direction in the whole region of shared descent directions, therefore the solution can be almost perpendicular to some of the objective gradients. More details and theoretical properties of this LP problem will be given in this work since it is used as a baseline for the new MGD method proposed. In this work, the author proposes a new LP sub-problem that is a trade-off between the cheap LP sub-problem defined in <cit.> and the nonlinear sub-problems with solutions characterized by “steepness properties”. Specifically, we introduce a new LP problem for computing directions in MGD methods such that its solution is a direction that tries to maximize the distance from the boundaries of the descent directions' region and tries to follow as much as possible the direction identified by the sum of the anti-gradients. For doing so, we start from the LP problem described in <cit.> and we modify its objective function and its constraints to achieve these properties. A theoretical analysis for characterizing the solutions of the new LP problem is performed. In particular, we prove that the new LP formulation admits the null direction as a solution for Pareto critical points only in the following two situations: i) all the non-ascent directions shared by all the objective functions are perpendicular to all the gradients; ii) the only non-ascent direction shared by all the objectives is the null vector. In a third situation, where there is at least one non-ascent direction that is a descent direction (i.e., not null) for at least one objective, the LP problem returns one of these vectors and not the null vector. The latter characteristic (not present in the baseline LP problem) is an extra property that the author deliberately incorporates in the new LP problem to enhance its interaction with a new backtracking strategy for MGD methods, which is also proposed in this work. To the best of the author's knowledge, the backtracking strategies used for MGD methods are always focused on building a sequence of vectors {x̌^(k)}_k∈ that is strictly decreasing for all the objective functions (e.g., see all the MGD methods cited above). These backtracking strategies can lead the sequence to very good (approximated) Pareto optimal solutions or can stop it “prematurely”, i.e., as soon as they do not find a sufficiently small step to decrease all the objectives along the chosen direction. In order to avoid this latter case and improve the possibility of building a sequence that converges toward a Pareto optimal solution, we develop a new backtracking strategy. This new strategy is designed to accept the new point x̌^(k+1) if it is non-dominated by the previous point x̌^(k) when the Armijo condition is not satisfied for all objectives for all the backtracking steps. Additionally, we endow the backtracking strategy with a “storing property”; i.e., in a sequence, we keep apart all the vectors x̌^(k) that do not dominate x̌^(k+1) and that are not dominated by it, since this vectors are candidate Pareto optimals, and not only the last one. A theoretical analysis of MGD methods based on this new backtracking strategy is performed and convergence properties are established. Through numerical experiments, we analyze the performance of our new methods, both the new LP problem and the new backtracking strategy, and we compare them with the baseline given by the literature in <cit.>. The experiment results indicate that our backtracking strategy consistently offers only advantages with respect to the “strictly decreasing” ones. While the new LP problem performs well, its benefits are most evident when combined with the new backtracking strategy, outperforming the baseline when applied to MOO problems characterized by large regions of Pareto critical points. The work is organized as follows. In the next section (<Ref>), the main notations and definitions used in MOO are listed. In <Ref>, the new LP sub-problem for shared descent directions is presented, whereas in Subsection <ref> the theoretical characterization of the sub-problem solutions is described. In <Ref> the new backtracking strategy is introduced and both the theoretical convergence analyses and its implementation pseudo-codes are reported. <Ref> illustrates numerical results where the proposed methods are compared with a baseline on some typical MOO test problems taken from the literature; in particular, the results assess the potential of the new LP sub-problem and the new backtracking strategy. We end the paper with some conclusions drawn in <Ref>. § MULTI-OBJECTIVE OPTIMIZATION SETTING AND NOTATIONS In this work, we consider the case of an unconstrained Multi-Objective Optimization (MOO) problem min_x̌∈^n(f_1(x̌),… ,f_m(x̌)) , with m differentiable objective functions f_i:^n→, for each i=1,… ,m. In this section, for the reader's convenience, we recall the main definitions related to MOO problems like (<ref>) and Pareto optimality. For a more detailed introduction about MOO, for example, see <cit.>. [Order relations and vectors] Let v̌, w̌∈^n. Then, we denote by v̌<w̌ the relation between v̌ and w̌ such that v_i< w_i, for each i=1,… , n (analogous for ≤, >, and ≥). We denote by v̌≮w̌ the opposite of the relation v̌<w̌ (analogous for ≰, ≯, and ≱); i.e., v̌≮w̌ if there is i∈{1,… ,n} such that v_i≥ w_i. Let f̌:^n→^m be the function such that f̌(x̌)=(f_1(x̌),… ,f_m(x̌)), where f_1,… f_m are the m objective functions of (<ref>). Then, with respect to (<ref>), we have that: * x̌_1∈^n dominates x̌_0∈^n if f̌(x̌_1)≤f̌(x̌_0) and f̌(x̌_1) ≠f̌(x̌_0). Alternatively, we can say that x̌_0 is dominated by x̌_1. * x̌_0∈^n is non-dominated by x̌_1∈^n if x̌_1 does not dominate x̌_0; i.e., f̌(x̌_1)≰f̌(x̌_0) * x̌^*∈^n is a local Pareto optimal for (<ref>) if there is ε > 0 such that x̌^* is non-dominated by x̌, for each x̌∈^n, x̌^*-x̌_2≤ε. * x̌^*∈^n is a global Pareto optimal for (<ref>) if x̌^* is non-dominated by x̌, for each x̌∈^n. * The set of all and only the global Pareto optimals is defined as Pareto set of (<ref>); its image through f̌ is defined as Pareto front of (<ref>). * x̌∈^n is defined as Pareto critical point for (<ref>) if descent directions for all the objective functions f_1,… ,f_m evaluated in x̌ do not exist; i.e, if J_f̌(x̌) v̌≮0̌ , for each v̌∈^n, where J_f̌ denotes the Jacobian of f̌. By consequence, x̌∈^n is not a Pareto critical point for (<ref>) if a descent direction for all the objective functions f_1,… ,f_m evaluated in x̌ exists; i.e, if p̌∈^n exists such that J_f̌(x̌) p̌ < 0̌ . Being a Pareto critical is a necessary condition for being a local Pareto optimal. Specifically, if x̌^* is a local Pareto optimal, then x̌^* is a Pareto critical. After recalling the main entities related to MOO, we introduce the definition of non-ascent directions' regions for the objective functions. This definition will be useful for studying and analyzing the properties of the proposed MGD method (see <Ref>). [Non-ascent Directions' Regions] Let f_1,… f_m be the m objective functions of (<ref>). Let x̌∈^n be fixed. Then, for each i=1,… ,m, we define non-ascent directions' region of f_i evaluated at x̌ the set P_i(x̌):={p̌∈^n | ∇ f_i(x̌)^Tp̌≤ 0} ; i.e., P_i(x̌) is the union of the descent directions of f_i at x̌ and the directions perpendicular to ∇ f_i(x̌). Moreover, we define as region of shared non-ascent directions the intersection of all the non-ascent directions' region, i.e., the set P(x̌):=⋂_i=1^m P_i(x̌) (it always contains the null vector). §.§ Descent Methods for Multi-Objective Optimization Descent methods for MOO problems are iterative methods such that x̌^(0)∈^n given x̌^(k+1) = x̌^(k) + η^(k)p̌^(k) , ∀ k≥ 0 , where p̌^(k)∈^n is a descent direction for all the objective functions evaluated at x̌^(k). In literature, one of the favorite approaches used for the computation of a descent direction p̌^* for m functions f_1, … ,f_m evaluated at a point x̌ is the one based on finding the minimum-norm vector in the anti-gradients' convex hull <cit.>, i.e.: p̌^* = - ∑_i=1^mα_i^* ∇ f_i(x̌) , where α̌^* = _α̌{ ‖∑_i=1^mα_i ∇ f_i(x̌) ‖^2 s.t. α_1,… ,α_m ≥ 0 , ∑_i=1^m α_i = 1 } , Then, the solution α̌^* of problem (<ref>) can be found by solving a quadratic optimization problem. On the other hand, there are methods simpler than (<ref>) for finding a descent direction p̌^*; for example, the method described in <cit.> and based on solving the Linear Programming (LP) problem LP_basemin_β∈β ∇ f_i(x̌)^T p̌≤β , ∀ i=1,… ,m p̌_∞≤ 1 , i.e., explicitly, the problem LP'_basemin_ ρ̌∈^n+1 [0,… ,0, 1] ρ̌ [ [ ∇ f_1(x̌)^T -1; ⋮ ⋮; ∇ f_m(x̌)^T -1 ]] ρ̌≤0̌ -1≤ρ_i ≤ 1 , ∀ i=1,… ,n , with solution ρ̌^*=(p̌^*,β^*)∈^n+1 that is the concatenation of the shared descent direction found p̌^̌*̌ and the optimal value β^*≤ 0 found for β. § A NEW METHOD BASED ON LINEAR PROGRAMMING In this paper, one of our focuses is the development of a new LP approach for finding a descent direction shared by multiple functions. In particular, we start from the LP method described in <cit.> (see (<ref>) and (<ref>)), and we modify it. More precisely, analogously to <cit.>, the LP we define finds descent directions if they exist; otherwise, non-ascent directions are returned (see <Ref>). Nonetheless, these non-ascent directions are positively used by our MGD method, contrary to other methods in literature (see <Ref>); indeed, in this work, we also define a novel and different backtracking strategy for the optimization procedure (<ref>) that is able to exploit them. In this section we focus on the new LP approach, postponing the definition of the new backtracking strategy in <Ref>. First of all, solving (<ref>), we observe that we obtain a shared descent direction p̌^* (assuming it exists) such that its scalar product with each gradient is less than the minimum value β^* (always less than or equal to zero). Then, the gradient magnitudes have a relatively small influence on finding p̌^*, such as the gradient directions that only define the feasible region in the search space. In other words, the formulation of problem (<ref>) guarantees the finding of a shared descent direction p̌^* but does not guarantee that moving along the direction of p̌^* all the objective functions decrease coherently with their gradients. In addition, problem (<ref>) has good probabilities of returning the null vector as the solution (stopping the optimization procedure) also in the special case of Pareto critical points characterized by the presence of at least one non-ascent direction that is a descent direction for at least one objective. Nonetheless, such a kind of non-null direction could generate a new step of (<ref>) where x̌^(k+1) is non-dominated by x̌^(k); therefore, under these circumstances, it can be useful to add this option to a MGD optimization procedure, removing the null vector from the solution set of the LP problem. Given the observations above, the purposes of the proposed new LP problem are the following: * Modify problem (<ref>) such that, moving along the direction of p̌^*, all the objective functions decrease as much coherently as possible with their gradient's characteristics; * Avoid returning the null vector as a solution for Pareto critical points, if there is at least one non-ascent direction that is a descent direction for at least one objective. We recall that we are interested in these directions too because our new backtracking strategy will exploit them (see <Ref>). Before starting with the formulation of the new LP problem, we introduce some notations. We denote by ǧ_i(x̌) the gradient ∇ f_i(x̌), for each i=1,…, m, and by ǧ(x̌) their sum, i.e.: ǧ(x̌) := ∑_i=1^m ǧ_i(x̌) . In addition, we denote by ǧ_i(x̌) the gradient ǧ_i normalized with respect to the euclidean norm; i.e., ǧ_i(x̌) := ǧ_i(x̌)/ǧ_i(x̌)_2 , for each i=1,… ,m. Then, we denote by G(x̌) and G(x̌) the matrices with rows defined by the gradients (i.e., the Jacobian of f̌, see <Ref>) and the normalized gradients, respectively; specifically: G(x̌):= [ ǧ(x̌)_1^T; ⋮; ǧ(̌x̌̌̌)̌_m^T; ]∈^m× n , G(x̌):= [ ǧ(x̌)_1^T; ⋮; ǧ(x̌)_m^T; ]∈^m× n . Given (<ref>), the inequality constraints of (<ref>) defined using the gradients can be rewritten as [ [ G(x̌) -ě ]] ρ̌≤0̌ , where ě:=(1,… ,1)∈^m. §.§ Linear Programming Problem Formalization For improving the influence of the gradient magnitudes according to the purpose illustrated in item <ref>, we need to involve the gradients both in the objective function and in the boundary values of the variables corresponding to the descent direction. Specifically, we change the objective function of the problem (<ref>) into ǧ(x̌)^Tp̌ + c_β(x̌) β and the boundary values into p̌_∞≤γ(x̌) , β≤ 0 , where c_β(x̌) > 0 (in practice, c_β(x̌) > ǧ(x̌)_2, see <Ref> later) and γ(x̌) := max{ǧ_1(x̌)_∞,… ,ǧ_m(x̌)_∞, ǧ(x̌)_∞} . Clearly, for each p̌ satisfying (<ref>) and β≤ 0, it holds that ǧ(x̌)^Tp̌ + c_β(x̌) β≤ mβ + c_β(x̌)β≤ 0. Using the same notation of problem (<ref>), the objective function is now [ ǧ(x̌)^T | c_β(x̌) ] ρ̌ , while the boundary values are -γ(x̌̌̌) ≤ρ_i ≤γ(x̌) , ∀ i=1,… ,n and ρ_n+1≤ 0 . In <Ref> we illustrate the proposed objective function (<ref>) with respect to the boundaries (<ref>), in comparison with the corresponding objective function and boundaries of problem (<ref>). The change of objective function is proposed because it forces the solution to find a descent direction that minimizes β but follows as much as possible the direction identified by the sum of the anti-gradients. On the other hand, the boundary values of the descent directions have been changed in order to maintain at least a weak connection with the norms of the function gradients for a better embedding with an iterative method like (<ref>); this connection is not preserved if we use the condition p̌_∞≤ 1 like in (<ref>). After the modification of the objective function and the boundary values, we observe that also the inequality constraints (<ref>) can be modified to improve the quality of the descent direction p̌^*. Ideally, the more parallel a descent direction is to the anti-gradient, the better; then, in the region of the descent directions, the more distant it is from the hyperplane perpendicular to the gradient, the better. Actually, the inequality constraints (<ref>) are implicitly trying to apply the latter reasoning because the distances of p̌ from each one of the hyperplanes perpendicular to the m gradients are lower bounded by a quantity dependent on β. Specifically, denoted by H_i(x̌) the hyperplane perpendicular to ǧ_i(x̌), if ρ̌=(p̌,β) satisfies (<ref>), then the distance of p̌ from H_i(x̌) is lower bounded by |β|/ǧ_i(x̌)_2, i.e.: dist(H_i(x̌), p̌) = | ǧ_i(x̌)^Tp̌|/ǧ_i(x̌)_2≥|β|/ǧ_i(x̌)_2 , ∀ i=1,… ,m . Therefore, looking at (<ref>), we observe that a solution p̌^* can result to be nearly-perpendicular to those gradients characterized by a large norm, because the distance's lower bound tends toward zero. This phenomenon can be a problem for an iterative method like (<ref>), because the step can be almost-perpendicular to the steepest descent direction of one of the steepest objective functions of the problem (see the lightly obscured region varying with β and the vectors ρ̌^*, p̌^* in <Ref>a). Then, we modify the inequality constraints (<ref>) using the matrix G(x̌) instead of G(x̌); i.e., we use the inequality constraints [ [ G(x̌) -ě ]] ρ̌≤0̌ . For a point x̌ that is not Pareto critical, if (<ref>) is used as inequality constraint, we can prove (see <Ref>) that the minimum distance of a solution p̌^* from all the hyperplanes H_i(x̌) is at least |β^*| (see the lightly obscured region varying with β and the vectors ρ̌^*, p̌^* in <Ref>b). In conclusion, applying to (<ref>) the changes listed above, we obtain the following new LP problem: LP_newmin_ρ̌∈^n+1 [ ǧ(x̌)^T | c_β(x̌) ] ρ̌ [ [ G(x̌) -ě ]] ρ̌≤0̌ -γ(x̌) ≤ρ_i ≤γ(x̌) , ∀ i=1,… ,n ρ_n+1≤ 0 . In <Ref> we illustrate an example of different solutions obtained by applying (<ref>) and (<ref>) to the same set of gradients. §.§ Theoretical Results and Characterization Analogously to Lemma 3 in <cit.> for problem (<ref>), we illustrate some theoretical results for the LP problem (<ref>). We report the main results of the Lemma for problem (<ref>) in the following, adapted to the notation used in this work.. Let V^*, β^* be the solution set and the optimum value of problem (<ref>), respectively. Then: * if x̌ is Pareto critical for the functions f_1,… ,f_m, then 0̌∈ V^* and β^*=0; * if x̌ is not Pareto critical for the functions f_1,… ,f_m, then β^* < 0 and p̌^* is a descent direction for all the functions f_1,… ,f_m, for any p̌^*∈ V^*; For the theoretical analysis of our problem, we start with a proposition that characterizes the influence of the parameter c_β on the objective function evaluation. In the next results, we will show how this proposition is necessary to guarantee β^*<0 for non-null solutions of (<ref>). For ease of notation, from now on we will drop the dependency on x̌ in the LP problems. Let ρ̌_0=(p̌_0, β_0)∈^n+1. Let č denotes the vector characterizing the objective function (<ref>); i.e., č^T:=[ǧ^T|c_β]. Then, if c_β >ǧ_2, we have that č^T ρ̌_0 - č^T ρ̌_1 > 0 for each ρ̌_1=(p̌_1,β_1) such that β_1<β_0 and dist(p̌_0,p̌_1)≥ (β_0 - β_1). Specifically, if c_β=ǧ_2+ε, ε >0, we have that č^T ρ̌_0 - č^T ρ̌_1 ≥ (β_0 - β_1)ε > 0 . Let α be the angle between ǧ and (p̌_0 -p̌_1). Let Δβ be the difference between β_0 and β_1; i.e., Δβ:=β_0-β_1>0. Then, č^T ρ̌_0 - č^T ρ̌_1 = ǧ^T(p̌_0-p̌_1) + c_βΔβ = = ǧ_2p̌_0-p̌_1_2 cosα + c_βΔβ≥ ≥Δβ (ǧ_2cosα + c_β)≥ ≥Δβ (-ǧ_2 + c_β) . Then, (<ref>) holds if c_β>ǧ_2 and (<ref>) holds if c_β = ǧ_2 + ε, with ε>0. In the following Lemma, we characterize the solutions obtained solving (<ref>), depending on the Pareto nature of x̌∈^n; i.e., depending on x̌ that is a Pareto critical point or not, following the lines of <Ref>. Let ρ̌^*=(p̌^*, β^*)∈^n+1 be a solution of problem (<ref>), with c_β >ǧ_2. Then: * If x̌ is Pareto critical for the functions f_1,… ,f_m, then β^*=0. * Let x̌ be Pareto critical for the functions f_1,… ,f_m. Let V^* be the solution set, let P_i be the set of non-ascent directions of f_i in x̌ and let P be the intersection of all the sets P_i (see <Ref>). Let us denote by Π the set of all the feasible directions in P, according to the constraints of the problem. Then, it holds: * V^*=Π if and only if ǧ^Tp̌ = 0 for each p̌∈Π * V^*={0̌} if and only if Π = {0̌}; * 0̌∉ V^* if and only if there is p̌∈Π such that ǧ^Tp̌≠ 0; * If x̌ is not Pareto critical for the functions f_1,… ,f_m, then β^* < 0 and p̌^* is a descent direction for all the functions f_1,… ,f_m in x̌. * If x̌ is not Pareto critical for the functions f_1,… ,f_m, then: * β^*=max_i=1,… ,mǧ_i^Tp̌^*; * For each i = 1,… ,m, it holds dist( H_i, p̌^*) ≥ |β^*| ; In the following, we prove one by one the results listed in the Lemma. * The proof is almost straightforward. If x̌ is Pareto critical, no negative values of β permit to satisfy the inequality constraints (see (<ref>)); then, β^*=0. * If x̌ is Pareto critical, no descent directions are shared by the m objective functions. Then, solutions are contained in the non-ascent directions' region; i.e., V^*⊆Π. Let us start with item 2.1 and let us assume that ǧ^T p̌ = 0, for each p̌∈Π. Since V^*⊆Π, we have that ǧ^T p̌^*=0 for each solution of the problem. But any vector in Π satisfies the constraints and has the same minimum value for the objective function (i.e., value 0). Then, Π⊆ V^* and we prove the thesis. Concerning the proof of item 2.2, first of all, we observe that for item 2.1., V^*={0̌} if Π={0̌}. On the other hand, we prove by contradiction that Π={0̌} if V^*={0̌}. Let us assume that Π≠{0̌} and V^*={0̌}; then, there is a feasible direction p̌∈Π, p̌≠0̌, such that ǧ^Tp̌≤ 0 = ǧ^T0̌. Therefore, there are two possible situations: i) ǧ^Tp̌< 0 and 0̌ is not a solution; ii) ǧ^Tp̌= 0 and 0̌ is not the unique element of V^*. Both cases are a contradiction of the hypothesis V^*={0̌}, proving the thesis. Moving to item 2.3, also in this case we use the proof by contradiction. Let the null vector be a solution and let p̌∈Π be such that ǧ^Tp̌≠ 0. Then, ǧ_i^Tp̌≤ 0, for each i=1,…, m, and there is j∈{1,… ,m} such that ǧ_j^Tp̌< 0. By consequence, ǧ^Tp̌< 0 = ǧ^T0̌; therefore, 0̌ is not a solution of problem (<ref>), which is a contradiction; therefore 0̌ is not a solution if there is p̌∈Π such that ǧ^Tp̌≠ 0. On the other hand, again by contradiction, let us assume that 0̌∉ V^* and that there is no p̌∈Π such that ǧ^Tp̌≠ 0; therefore, it holds ǧ^Tp̌=0 for each p̌∈Π and, for item 2.1, Π=V^*∋0̌, that is a contradiction. Therefore, there is at least one p̌∈Π such that ǧ^Tp̌≠ 0 if 0̌∉ V^*. * The proof of this item is done by contradiction. Let us assume that (<ref>) has solution ρ̌^*=(p̌^*, β^*) with β^*=0. Since x̌ is not Pareto critical, there is p̌∈^n such that p̌_∞≤γ, p̌≠p̌^*, and Gp̌ < 0. Let δ denotes the euclidean distance between p̌^* and p̌ and let β be such that β:= max{ -δ, max_i=1,… ,mǧ_i^Tp̌} < 0 . Then, ρ̌:=(p̌, β) is a feasible point of the problem, β<β^*=0, dist(p̌^*,p̌)=δ≥ (β^* - β), and c_β > ǧ_2. Therefore, for <Ref>, it holds č^T ρ̌^* > č^T ρ̌ , which is a contradiction of the hypothesis that ρ̌^* is solution of (<ref>). * Let ρ̌^* be a solution for the problem (<ref>). Then, β^*<0 and ρ̌^̌*̌ satisfies (<ref>). It is easy to prove by contradiction that β^*=max_i=1,… ,mǧ_i^Tp̌^*=:δ_max^* (item 4.1); Indeed, assuming β^*>δ_max^*, it is possible to find a feasible vector (p̌^*, δ_max^*) with lower value of the objective function. Concerning item 4.2, as previously observed, the distance of p̌^* from H_i is lower bounded by |β^*|, for each i=1,… ,m, because ρ̌^* satisfies (<ref>); then, (<ref>) holds. Now we point the attention of the readers to item 2 of <Ref>. The three sub-items correspond to the three possible situations that occur for a Pareto critical point x̌∈^n; specifically, we have: * <Ref> - item 2.2 (V^*=Π={0̌}, see <Ref>a): the critical point is such that for each direction p̌∈^n, p̌ is an ascent direction for at least one objective function; i.e., there is j∈{1,… ,m} such that ǧ_j^Tp̌ > 0. This is a particular case of item 2.1 of the Lemma (see below). * <Ref> - item 2.1 (ǧ^Tp̌=0 for each p̌∈Π, V^*=Π, see <Ref>b): the critical point is such that: i) all the non-ascent directions shared by all the objective functions are perpendicular to all the gradients; ii) the case 2.2 of the Lemma (see above). * <Ref> - item 2.3 (there is p̌∈Π such that ǧ^Tp̌≠ 0, 0̌∉ V^*, see <Ref>c): the critical point is such there is j∈{1,… ,m} and there is a non-ascent direction shared by all the objectives such that it is a descent direction for the objective function f_j. One of the main differences between (<ref>) and (<ref>) is that for (<ref>) we have that 0̌∈ V^* for all the three situations listed above and illustrated in <Ref> (see <Ref>). Therefore, (<ref>) can return anytime a null direction p̌^*=0̌ if x̌ is Pareto critical, stopping the optimization procedure as a consequence. On the contrary, the new LP problem (<ref>) returns a null direction only if there are no directions in Π that are descent directions for at least one objective; therefore, if there is a direction in Π that is a descent direction for at least one objective, (<ref>) returns a non-null solution p̌^* and the optimization procedure (<ref>) can continue looking for point x̌^(k+1) along the direction p̌^* such that x̌^(k+1) is non-dominated by x̌^(k)=x̌. Moreover, it is possible to find x̌^(k+1) with decreased values for the objective functions f_j_1,… f_j_M for which it holds ǧ_j^Tp̌^*<0, j=j_1,… ,j_M. In this situation, in order to find such a kind of new point for the sequence, we define a new backtracking strategy in the next section. Coupling (<ref>) with this new backtracking strategy, we are able to increase the possibility of detecting and/or exploring the Pareto set of a MOO problem, as illustrated by the results of the numerical experiments of <Ref>. We observe that if m=2, i.e., the MOO problem is characterized by two objective functions only, the case 0̌∉ V^* is not possible for x̌∈^n Pareto critical. Indeed, only the cases corresponding to items 2.1 and 2.2 of <Ref> are possible. Therefore, when m=2, the differences between (<ref>) and (<ref>) are in the orientation and norm of the non-null solutions p̌^*. § NEW BACKTRACKING STRATEGY AND CONVERGENCE ANALYSIS In this section, we describe how to better exploit a MGD method (<ref>) that is based on descent directions evaluated with (<ref>). In particular, we use a new line-search method for the step lengths {η_k}_k∈, in order to guarantee that the sequence {x̌^(k)}_k∈ is such that x̌^(k+1) is non-dominated by x̌^(k), for each k∈. Therefore, we observe that the main difference between our line-search method and the ones typically adopted in literature (e.g., see <cit.>) is in the nature of the sequence {x̌^(k)}_k∈: all the other methods look for a sequence strictly decreasing for all the objective functions; on the contrary, we relax this requirement, accepting in general vectors x̌^(k+1) that are non-dominated by x̌^(k) (but giving priority to the ones that decrease the values of all the objectives). One of the simplest but most effective implementations of line-search methods for one-objective optimization problems is the backtracking strategy with respect to the Armijo condition <cit.>. This condition can be easily extended to MOO for building the sequence {x̌^(k)}_k∈ strictly decreasing with respect to all the objective functions <cit.>. Then, in this work, we extend even further this approach for modifying the properties of the sequence {x̌^(k)}_k∈ as stated above. Let α∈ (0,1) be the factor of the backtracking strategy and let c_1∈ (0,1) be the parameter of the Armijo condition. Then, at each step k>0, we check if the Armijo condition f_i(x̌^(k) + η^(k)_0p̌^*(k)) ≤ f_i(x̌^(k)) + c_1 η^(k)_0 ǧ_i(x̌^(k))^Tp̌^*(k) , is satisfied for the base-value of the step-size η^(k)=η^(k)_0 and for each i=1,… ,m. If it is not satisfied, we decrease the step-size by the factor α until the condition A_i^t f_i(x̌^(k) + η^(k)_tp̌^*(k)) ≤ f_i(x̌^(k)) + c_1 η^(k)_t ǧ_i(x̌^(k))^Tp̌^*(k) , is satisfied for the step-size η^(k)_t := η^(k)_t-1α = η_0^(k)α^t, t>0 , and for each i=1,… ,m. However, it is possible that η^(k)_t does not satisfy (<ref>), for each t≥ 0; in particular, it happens when x̌^(k) is Pareto critical and, therefore, p̌^* (k) is not a descent direction for all the objectives, but only a non-ascent direction or the null vector (see <Ref>). Under these circumstances, instead of stopping the sequence like the other MOO methods (e.g., see <cit.>), if p̌^*(k)≠0̌ we make a last check: ∃ i∈{1 ,… ,m} s.t. f_i(x̌^(k)) > f_i(x̌^(k) + η p̌^*(k)) where η>0 is a fixed arbitrary small parameter. Therefore, if (<ref>) is satisfied, setting x̌^(k+1):=x̌^(k) + η p̌^*(k) returns a new point x̌^(k+1) for the sequence that is non-dominated by x̌^(k), and the sequence can continue. Then, we can summarize the sequence generated by the line search method described above as BT_newx̌^(0)∈^n , {η_0^(k)}_k∈⊂^+ , η>0 given x̌^(k+1)=x̌^(k)+η^(k)p̌^*(k) , k≥ 0 , where η^(k)= η^(k)_τ = η_0^(k)α^τ if x̌^(k) is not Par. crit. (τ, 1^st value of t≥ 0 satisfying (<ref>)) η if x̌^(k) is Par. crit. and does not dominate x̌^(k)+η p̌^*(k) 0 otherwise (i.e., x̌^(k) is Par. crit. and dominates x̌^(k)+η p̌^*(k)) . The new backtracking strategy can be used in general with any MGD method, independently of the type of sub-problems used to compute the directions p̌^* (k); therefore, it can be used also with a MGD method based on (<ref>). Nonetheless, we recall that (<ref>) has been specifically designed to take advantage of the properties of (<ref>); in particular, we recall the observations related to item 2.3 of <Ref>. Line search methods designed for building strictly decreasing sequences for all the objectives can be interpreted as a special case of the new type of backtracking strategies, where η=0. §.§ Convergence Results If we look at all the possible sequences {x̌^(k)}_k∈ determined by the new backtracking strategy (<ref>), we can identify three main types of sub-sequences of consecutive elements: * {x̌^(k+m)}_m=0^M, k, M≥ 0, such that x̌^(k+M) is Pareto critical, η^(k+M)=η, and x̌^(k+m) is not Pareto critical for each m < M. We define this type of sub-sequence as η-Pareto Critical end (PC_η); * {x̌^(k+m)}_m∈, k≥ 0, such that there is M∈ for which x̌^(k+m) is Pareto critical and η^(k+m)=0, for each m≥ M, and x̌^(k+m) is not Pareto critical for each m < M. We define this type of sub-sequence as 0-Pareto Critical end (PC_0); * {x̌^(k+m)}_m∈, k≥ 0, such that x̌^(k+m) is not Pareto critical for each m ≥ 0. We define this type of sub-sequence as non-Pareto Critical (nPC); More precisely, we have that any sequence {x̌^(k)}_k∈ has one of the following structures: * N∈ consecutive PC_η sub-sequences, followed by one PC_0 sub-sequence; * N∈ consecutive PC_η sub-sequences, followed by one nPC sub-sequence; * Infinite consecutive PC_η sub-sequences. On the other hand, a sequence generated for being strictly decreasing for all the objectives coincides with a sub-sequence of type nPC or, if it reaches a Pareto critical point, with a sub-sequence of type PC_0. In <cit.>, the authors state that when such a kind of sequences are of the nPC type, assuming proper limitation hypotheses on the objective functions, they have at least one accumulation point, and each accumulation point of the sequence is a Pareto critical point. Below, we report this theorem (<cit.>), adapted to the notation used in this work. [Theorem 1 - <cit.>] Let {x̌^(k)}_k∈ be a sequence defined by (<ref>), but with η=0 (see <Ref>). Let x̌^(k) be not Pareto critical, for each k∈. Then: * {x̌^(k)}_k∈ stays bounded and has at least one accumulation point, if the function f̌ (see <Ref>) has bounded level sets in the sense that {x̌∈^n | f̌(x̌)≤f̌(x̌^(0))} is bounded; * Every accumulation point of the sequence {x̌^(k)}_k∈ is a Pareto critical point. Now, we state a theorem for characterizing the convergence properties of the sequences obtained using the new backtracking strategy proposed; i.e., sequences determined by (<ref>). In this theorem, we exclude the case of a sequence ending with a PC_0 sub-sequence because, in that case, the results are trivial. Let {x̌^(k)}_k∈ be a sequence defined by (<ref>) and let 𝒞 denotes the set of all the Pareto critical points of f̌:^m→^n (see <Ref>). Let us assume the sequence does not end with a PC_0 sub-sequence. Then: * lim inf_k→ +∞dist(x̌^(k), 𝒞) = 0; * if {x̌^(k)}_k∈ ends with a sub-sequence {x̌^(k_0+m)}_m∈ of nPC type, k_0≥ 0 fixed, and {x̌∈^n | f̌(x̌)≤f̌(x̌^(k_0))} is bounded, then {x̌^(k_0+m)}_m∈ is bounded, admits at least one accumulation point and all its accumulation points are Pareto critical; The structure of the sequence can be of only two types: * N∈ consecutive PC_η sub-sequences, followed by one nPC sub-sequence; * Infinite consecutive PC_η sub-sequences. In the first case, we have to prove both the theses of the theorem. Nonetheless, using <Ref> [Theorem 1 - <cit.>] the proof of item 2 is trivial and, as a consequence, also the proof of item 1. Now, only item 1 for a sequence of infinite consecutive PC_η sub-sequences remains to be proven. Let {x̌^(k)}_k∈ be a a sequence of this type; then, for each k≥ 0, there is h≥ k such that x̌^(h) is Pareto critical and η^(h)=η. Therefore, it holds lim inf_k→ +∞ dist(x̌^(k),𝒞) = sup_k≥ 0{inf_h≥ kdist(x̌^(h), 𝒞)} = 0 . §.§ Implementation Here, we report the pseudocode of an algorithm (<Ref>) that implements a MGD procedure with respect to the backtracking strategy defined in this section. We assume a maximum number of iterations K∈, a maximum number of backtracking steps Θ∈, and, for simplicity, we set η=η_0 α^Θ, where η_0∈_+ is the base value of the step-size for each iteration. The direction p̌^* described in the algorithm is the vector obtained solving the LP problem (<ref>) of <cit.>, the LP problem (<ref>) proposed in this work, or any other problem defined with the same scope. Let us consider the unconstrained MOO problem (<ref>). Then, we define the following MGD algorithm for the implementation of the descent method (<ref>). Data: x̌^(0)∈^n starting point for (<ref>); c_1, α∈ (0, 1) parameters for the Armijo condition; η_0 starting value for the step length; Θ∈ maximum number of backtracking steps; K∈ maximum number of iterations; 𝒫 sub-problem for computing p̌^* (k) at each iteration. Procedure: §.§.§ Storage of Non-Dominated Intermediate Points Until now, we focused on the advantages of finding a new non-dominated point x̌^(k+1) even if x̌^(k) is Pareto critical. Nonetheless, since in MOO it is crucial to identify as many Pareto optimals as possible, it is better if we do not “forget” x̌^(k) when it is not dominated by x̌^(k+1); indeed, we recall that is possible to have both x̌^(k+1) non-dominated by x̌^(k), and vice-versa. Therefore, we can modify <Ref> adding a “domination-check” for x̌^(k) with respect to x̌^(k+1); specifically, if x̌^(k) is not dominated by x̌^(k+1), we store it into a list C. Then, at the end of the procedure, the algorithm returns the last vector computed for the sequence, denoted by x̌, and all the x̌∈ C that are non-dominated by all the other vectors in C∪{x̌}; this set of vectors will represent the (approximation of) Pareto optimals discovered by the optimization method during its run (not necessarily global Pareto optimals, due to the local nature of the method). The result of these modifications is <Ref>. Let us consider the unconstrained MOO problem (<ref>). Then, define the following MGD algorithm for the implementation of the descent method (<ref>). Data: x̌^(0)∈^n starting point for (<ref>); c_1, α∈ (0, 1) parameters for the Armijo condition; η_0 starting value for the step length; Θ∈ maximum number of backtracking steps; K∈ maximum number of iterations; 𝒫 sub-problem for computing p̌^* (k) at each iteration. Procedure: §.§.§ Blockwise Implementation for Searching Global Pareto Optimals We recall that MGD methods have a local nature. Therefore, all the Pareto critical points returned by the procedures are approximations of global and/or local Pareto optimals, mainly depending on the starting point x̌^(0). Therefore, a basic multi-start approach for the optimization procedure, made with respect to a set of N∈ random starting points X^(0) := {x̌_1^(0),… ,x̌_N^(0)}⊂^n , can be a simple but efficient method for exploring and detecting as much as possible the Pareto set and the Pareto front of a MOO problem. Concerning a multi-start approach, it is interesting to observe that the solutions ρ̌^*_1,… ,ρ̌^*_N of N LP problems like (<ref>), if concatenated into a vector ρ̌^*=(ρ̌^*_1, … ,ρ̌^*_N)=((p̌^*_1,β^*_1), … ,(p̌^*_N, β^*_N))∈^N(n+1), are solution of the LP problem min_ρ̌∈^N(n+1) [ č_1^T | ⋯ | č_N^T ] ρ̌ [ A_1 ; ⋱ ; A_N ]ρ̌≤0̌ -γ_j ě≤p̌_j ≤γ_j ě , ∀ j=1,… ,N β_j≤ 0 , ∀ j=1,… ,N , and vice-versa; where č_j∈^n+1, A_j∈^m× (n+1), and γ_j∈_+ are: the vector of the linear objective function, the matrix of inequality constraints, and the scalar for boundary values of p̌_j, respectively, of the j-th LP problem, for each j=1,… ,N. Then, if the user cannot directly parallelize the multi-start procedure, they can still exploit (<ref>) for a fast computation of the N directions p̌_1^*,… ,p̌_N^*. Of course, techniques for fastening the evaluation of the Jacobians G_1,… , G_N are suggested, but their discussion does not fall within the scope of this work. A similar block-wise parallelization can be implemented also for problem (<ref>). § NUMERICAL EXPERIMENTS In this section, we study and analyze the behavior of the new method (<ref>) for computing the shared descent direction of multiple objectives, using the new backtracking strategy (<ref>) for MGD methods. Specifically, we run <Ref>, using (<ref>) for computing the direction p̌^* (k), with parameter c_β=ǧ_2 + 1. Then, the analysis is made through a comparison with <cit.>, representing the baseline; i.e., the baseline is represented by (<ref>), using a backtracking strategy designed for building a sequence strictly decreasing for all the objectives; we report the pseudocode of a MGD implementation using this backtracking strategy in <ref> (see <Ref>). Nonetheless, we study also how the behavior of (<ref>) changes if (<ref>) is used (i.e., we run <Ref> with respect to (<ref>)) and how the behavior of (<ref>) changes if the backtracking strategy for strictly decreasing objectives is used (i.e., we run <Ref> with respect to (<ref>)). Summarizing, we analyze the behavior of: * <Ref> and <Ref> run with respect to (<ref>). We denote them as BT_ base-<ref> and <ref>-<ref>, respectively; * <Ref> and <Ref> run with respect to (<ref>). We denote them as BT_ base-<ref> and <ref>-<ref>, respectively;. The baseline is represented by BT_ base-<ref> (see <cit.>), while the new method we propose is represented by <ref>-<ref>. All the other “intermediate” cases are added to the analysis to better understand the effects of using different backtracking strategies and different shared descent directions in a MGD method. All the cases are compared analyzing the results obtained by applying them to three well-known MOO test problems: the Fonseca-Fleming problem <cit.>, the Kursawe problem <cit.>, and the Viennet problem <cit.>. For all these problems, all the algorithms are executed with respect to N=500 randomly sampled starting points (exploiting a blockwise implementation, see <Ref>); the backtracking strategy is characterized by a maximum number of steps Θ=40 and parameters c_1=10^-9, α=0.8, and η_0=1 for the Armijo condition (<ref>). For more details about the test problems, the starting point samplings, etc., see <ref>. The comparison between the methods is performed by measuring the ratio of sequences, among all the N ones, that have detected at least one Pareto critical that is non-dominated by all the other Pareto critical points detected by itself and the other sequences. Indeed, such a kind of Pareto critical points can be considered as a good approximation of a global Pareto optimal and, therefore, it means that the sequence reached at least one point near to the Pareto set of the problem. Specifically, let C_j be the set containing the outputs of a MGD algorithm, starting from x̌^(0)_j, for each j=1,… ,N; i.e., C_j = {x̌} for <Ref>, while C_j contains x̌ and all the non-dominated critical points of the sequence for <Ref> (see the algorithm's pseudocode). Let C^N be the union of all the sets C_j, i.e., C^N:=⋃_j=1^N C_j, and let C_j be the set points in C_j non-dominated by all the other points in C^N, i.e. C_j := {x̌∈C_j | x̌ non-dom. by y̌ , ∀y̌∈C^N, y̌≠x̌} . Then, for a MGD algorithm, we define the index P^N := # of non-empty C_j/N=|{j | C_j≠∅ , j=1,… ,N}|/N , where the superscript N denotes the number of sequences used for computing the index. We define this index as the global Pareto ratio of the MGD algorithm, with respect to N runs. We introduce index (<ref>) in order to analyze the performance of different MGD algorithms independently on the availability of a known Pareto set and Pareto front. Alternative indices can be used, such as the Hypervolume (HV) indicator, which is able to measure the “quality” of a Pareto set/front through the hyper-volume of the dominated region in the objectves' space (see <cit.>). Nonetheless, the HV (and the other MOO indicators) focuses on the spread properties of the detected global Pareto optimals; therefore, we prefer instead to use index (<ref>) for our analyses, because it is better for quantifying the ability of a MGD algorithm to reach the Pareto set of the problem, varying the starting point. §.§ Numerical Results and Analyses In this subsection, we analyze and study the results obtained by the four given MGD algorithms applied to the three test problems considered (FonsecaFleming, Kursawe, and Viennet, see <ref>). We start analyzing the results for the Fonseca-Fleming problem, focusing on the different paths generated by using directions that are solutions of (<ref>) or (<ref>); in this case, the type of backtracking strategy is not crucial, due to the characteristics of the objective functions. On the contrary, when we continue to the results for the Kursawe problem, we observe how the usage of <ref> (i.e., <Ref>) improves concretely the global Pareto ratio P^N. In the end, we analyze the results for the Viennet problem; for this latter case, we observe that only the MGD method <ref>-<ref> is able to return a good P^N value, outperforming all the other methods. A summary of the global Pareto ratio values for all the methods, with respect to all the problems, is reported in <Ref>. §.§.§ Fonseca-Fleming With respect to the Fonseca-Fleming test problem, all the MGD methods return 100% of P^N (see <Ref>). Looking only at these values, we deduce that the sequences built with (<ref>) are a valuable alternative to the ones built with (<ref>). We do not see differences in the performances while changing the backtracking strategy because the characteristics of the problem are such that (<ref>) is very easy to be satisfied in regions far from the Pareto set (e.g., both the objective functions are convex); therefore, all the sequences are decreasing with respect to all the objectives except when an approximation of a global Pareto optimal has already been reached. However, looking at the paths of the sequences' images in the objectives' space in <Ref>, we observe different behaviors for (<ref>) and (<ref>). For both the methods, the paths end (approximately) on the Pareto front of the problem, but the paths generated with (<ref>) are attracted more by the “center” of the front (Figures <ref>a and <ref>b), while the paths generated with (<ref>) are distributed more uniformly on the front (Figures <ref>c and <ref>d). Summarizing, with these results we show that (<ref>) is able to generate good sequences for a MGD algorithm, but the sequences we obtain are characterized by a different behavior from the ones generated with (<ref>) (as expected, by construction of (<ref>)). §.§.§ Kursawe The Kursawe problem is more complicated than the Fonseca-Fleming one. In this case, analogously to the Fonseca-Fleming test, we observe similar performances but different path behaviors for (<ref>) and (<ref>) (given the same backtracking strategy); actually, the performances of methods based on (<ref>) are a bit smaller than the performances of methods based on (<ref>) (see <Ref>). The different path behaviors of the two methods are evident if <Ref>; in particular, we have that (<ref>) generates sequences attracted more by the “center” of the front, while (<ref>) generates sequences mainly focused in reducing the value of f_1 (i.e., “leftward movement” in the objectives' space). With respect to the global Pareto ratio, the real difference is made by <ref>. Indeed, independently on the LP problem used for computing the directions, the adoption of the new backtracking strategy guarantees a P^N≃ 65%; more than twice the value obtained using <Ref> (see <Ref> and the approximated Pareto front/set in <Ref> and <Ref>, respectively). We can explain this phenomenon looking at the orange dots in Figures <ref>b and <ref>d; the orange dots are points x̌_j^(k) with respect to which the backtracking strategy cannot find a point along the direction p̌^* (k) that is able to decrease all the objectives, but (<ref>) finds x̌_j^(k+1) such that it is not dominated by x̌_j^(k) and vice-versa; such a kind of points are Pareto critical points or points very near to them (see <Ref>). In Figures <ref>b and <ref>d, we clearly see that there are paths of (almost) Pareto critical orange dots that (often) end with an approximated global Pareto optimal; i.e., <ref> permits the movement along trajectories made of Pareto critical points, increasing the probability of reaching the Pareto set/front of the problem. On the contrary, BT_ base stops the iterations as soon as a Pareto critical point is reached, reducing the probability of finding a global Pareto optimal from a “non-suitable” starting point; an example is represented by the dominated red dots in Figures <ref>a and <ref>c. We conclude the analysis of the Kursawe problem focusing again on <Ref>. We observe that the Kursawe problem is characterized by m=2 objective functions; therefore, for the observations in <Ref> we have that the only difference between (<ref>) and (<ref>) is in the properties of the directions evaluated for the points that are not Pareto critical. For Pareto critical points, the only difference is in the norm of p̌^*, if it is not equal to the null vector. §.§.§ Viennet The last problem we consider is the Viennet problem. This problem is particularly difficult for MGD methods, because it is characterized by regions where the gradient ǧ_1 of the first objective function is parallel and opposite to the gradient ǧ_3 of the third objective function (i.e., ǧ̅_1=-ǧ̅_3, see <Ref>). Therefore, all these points are Pareto critical, and shared descent directions do not exist for the objectives in those points; at most, there are non-ascent directions perpendicular to ǧ_1 and ǧ_̌3̌ that are descent directions for the second objective function (e.g., see <Ref>c). Under these circumstances, we clearly see how the pairing of (<ref>) with (<ref>) is crucial for increasing the possibility of reaching the Pareto set/front of the problem, starting from a random point. Indeed, looking at the values of <Ref>, we see that only the MGD method <ref>-<ref> reaches a good P^N value of 92.80%; all the other MGD methods reaches a value of P^N that is approximately between 35% and 45%. Nonetheless, some of the observations made for the Fonseca-Fleming and the Kursawe problems still hold. Specifically, we see that (<ref>) improves the performances in general, while the usage of (<ref>) or (<ref>) generates directions with different characteristics that, using <Ref> as backtracking strategy, does not result in consistent differences in the performances. We can see the effects of these observations looking also at the approximated Pareto front/set in <Ref> and <Ref>, respectively. The performance improvements obtained using (<ref>) depend on the fact that this backtracking strategy helps to push the sequences along and/or beyond the “static regions” of Pareto critical points (see <Ref>). This behavior is evident looking at the paths made of orange dots in Figures <ref>b and <ref>d (objectives' space) and Figures <ref>b and <ref>d (domain); in particular, see how the paths in Figures <ref>b and <ref>d corresponds to the red regions in <Ref>. On the other hand, since BT_ base cannot help to overcome this kind of difficulty, instead of paths made of orange dots, for these regions we observe “frozen” red dots (i.e., sequences that stops at the beginning); see Figures <ref>a and <ref>c (objectives' space) and Figures <ref>a and <ref>c (domain). We observed a similar phenomenon in the Kursawe problem too, but less explicit. Moreover, it is evident from <Ref> that the best performances are obtained when we use (<ref>) and the sequences are built with respect to (<ref>). The reasons for these better performances depend on the different properties of the solutions returned by (<ref>) and the solutions returned by (<ref>), with respect to the Pareto critical points belonging to the red regions illustrated in <Ref>. Indeed, (<ref>) can return a null direction anytime the point of the sequence is Pareto critical (see <Ref>), de-facto stopping the sequence itself; moreover, the infinite-norm of the computed direction is at most 1, generating steps that are not related to the gradients' order of magnitude (in this case, too small steps). On the other hand, (<ref>) returns a non-null direction for sure for all the points in the red regions illustrated in <Ref> where ǧ_2≠0̌, because of <Ref> - item 2.3; moreover, the upper bound of the infinite-norm of the computed direction depends on the infinite-norms of the gradients of the objective functions (see (<ref>)), granting longer steps when objectives are steeper. From these observations, we deduce that (<ref>), together with (<ref>), is better for a problem like Viennet because it guarantees the building of sequences that are more able to move along/beyond the “static regions”, increasing the probability of reaching a good approximation of a global Pareto optimal. See how the paths generated by orange/critical Points in <Ref>d (but also <Ref>b) follow the circular shapes of the red regions of <Ref>, while in the objectives' space their movement actually results mostly in a reduction of the second objective function (see <Ref>d, but also <Ref>b). §.§ Analyses' Summary Given the analyses of the results of the numerical experiments in this section, we summarize the properties of (<ref>) and (<ref>) in the following. Concerning the new backtracking strategy (<ref>) and its implementation <Ref>, we observe that it has only advantages and no drawbacks with respect to BT_ base (see <Ref>). Indeed, independently of the LP problem used for computing the directions, this new backtracking strategy always improves the probability of a sequence reaching the Pareto set of the problem, reducing the risk of stopping in (or near to) Pareto critical points that are not global Pareto optimals. Concerning the novel LP problem proposed (<ref>), due to its construction, we observe that it generates sequences of different behavior with respect to the ones generated by (<ref>). This different behavior of the sequences does not show evident advantages if x̌∈^n is not a Pareto critical or if m=2 and x̌ is Pareto critical; actually, sometimes its performances are even slightly smaller (see <Ref>). Nonetheless, (<ref>) demonstrate crucial importance if used together with (<ref>) on problems characterized by large and diffused “static regions” like the Viennet problem and exemplified by <Ref>c. Indeed, in these situations, the theoretical properties of (<ref>) together with the ones of (<ref>) guarantee the building of sequences that more probably reach a good approximation of a global Pareto optimal; on the contrary, using (<ref>) this probability is not necessarily increased too much. Summarizing, we can say that MGD methods <ref>-<ref> and <ref>-<ref> are always preferable for solving a MOO problem with respect to BT_ base-<ref> and BT_ base-<ref>, respectively. Concerning (<ref>) and (<ref>), there are no reasons to prefer one or another, unless there is the suspect that the MOO problem is characterized by large regions of Pareto critical points in its domain of the type described in <Ref> - item 2.3 (e.g., like Viennet problem, see <Ref>). § CONCLUSION In this paper, we introduced (<ref>), a novel LP problem specifically designed for computing directions in MGD methods. This new LP problem is derived from (<ref>), introduced by Fliege and Svaiter in <cit.>. Through rigorous theoretical analysis (see <Ref>), we demonstrated that our proposed LP formulation surely returns non-null directions if the point x̌ of the sequence is Pareto critical but under particular conditions; namely, when at x̌ there is at least one non-ascent direction that is a descent direction for at least one objective function. This is one of the main properties that distinguishes the new LP problem from the one in <cit.>, which can return null directions anytime the point of the sequence we consider is a Pareto critical point. Additionally, we developed a new backtracking strategy for MGD methods (see (<ref>)). This strategy is characterized by the acceptance of a new point x̌^(k+1)=x̌^(k) + η p̌^* (k) at the last backtracking step if it is non-dominated by x̌^(k), even if the Armijo condition is not satisfied for all objectives. Furthermore, we introduced a “storing property” within this strategy (see <Ref>), ensuring that all points x̌^(k) which do not dominate x̌^(k+1), and vice-versa, are stored. This innovation aims to improve the efficiency and robustness of the backtracking process, improving the probability of any sequence to reach the Pareto set. We provided theoretical proof of the convergence properties for a MGD method that incorporates our new backtracking strategy. To validate our theoretical findings, we conducted numerical experiments to evaluate the performance of the new methods compared to the baseline method taken from <cit.>. Our experiments revealed several advantages of using the new backtracking strategy, showing consistent improvements in the performance of the MGD methods. While the new LP problem demonstrated good behavior, it did not present significant advantages over the previous formulation, except when paired with the new backtracking strategy in MOO problems characterized by large, static regions. This specific combination exhibited notable benefits, reinforcing the value of the new LP problem. Future work will focus on extending our new methods to constrained MOO and on applying them to real-world problems. This will enable further validation of their practical utility and potential for broader adoption in various application domains. § STRICTLY DECREASING BACKTRACKING FOR MGD In this appendix section, we report the pseudocode of the standard MGD algorithm used in <Ref>; i.e., the algorithm that implements a backtracking strategy looking for a decreasing sequence for all the objectives. Actually, this algorithm is equivalent to a simplified version of <Ref>, where the algorithm stops if the Armijo condition (<ref>) is not satisfied for all the objectives, for all t = 0,… ,Θ. Let us consider the unconstrained MOO problem (<ref>). Then, we define the following MGD algorithm for the implementation of the classic descent method based on a backtracking strategy looking for a decreasing sequence for all the objectives. Data: x̌^(0)∈^n starting point for (<ref>); c_1, α∈ (0, 1) parameters for the Armijo condition; η_0 starting value for the step length; Θ∈ maximum number of backtracking steps; K∈ maximum number of iterations; 𝒫 sub-problem for computing p̌^* (k) at each iteration. Procedure: § MULTI-OBJECTIVE TEST PROBLEMS In this appendix section, we report the formulations of the three MOO test problems used in the numerical experiments of <Ref>. * Fonseca-Fleming <cit.>. The objective functions of the MOO problem are f_1 = 1 - e^-∑i=1^n (x_i - 1/√(n))^2 f_2 = 1 - e^-∑i=1^n (x_i + 1/√(n))^2 . The starting points for the MGD methods are sampled with random uniform distribution 𝒰([-2, 2]^n), n=3. The maximum number of steps used for the MGD methods is K=250. * Kursawe <cit.>. The objective functions of the MOO problem are f_1 = ∑_i=1^2 -10 e^-0.2 √(x_i^2+x_i+1^2) f_2 = ∑_i=1^3 (|x_i|^0.8 + 5sin(x_i^3)) . The starting points for the MGD methods are sampled with random uniform distribution 𝒰([-1.5, 0.5]^n), n=3. The maximum number of steps used for the MGD methods is K=1500. * Viennet <cit.>. The objective functions of the MOO problem are f_1 = 0.5(x_1^2+x_2^2) + sin(x_1^2 + x_2^2) f_2 = (3x_1 - 2x_2 + 4)^2/8 + (x_1 + x_2 + 1)^2/27 + 15 f_3 = 1/x_1^2 + x_2^2 + 1 -1.1 e^-(x_1^2 + x_2^2) . The starting points for the MGD methods are sampled with random uniform distribution 𝒰([-3, 1.5]^n), n=2. The maximum number of steps used for the MGD methods is K=7500. § ACKNOWLEDGEMENTS This study was carried out within the FAIR-Future Artificial Intelligence Research and received funding from the European Union Next-GenerationEU (PIANO NAZIONALE DI RIPRESA E RESILIENZA (PNRR)–MISSIONE 4 COMPONENTE 2, INVESTIMENTO 1.3—D.D. 1555 11/10/2022, PE00000013). This manuscript reflects only the authors’ views and opinions; neither the European Union nor the European Commission can be considered responsible for them. §.§.§ Code Availability: The code for MGD methods illustrated in this paper is available at: <https://github.com/Fra0013To/MGD>. elsarticle-num
http://arxiv.org/abs/2406.09346v1
20240613173102
Scoreformer: A Surrogate Model For Large-Scale Prediction of Docking Scores
[ "Álvaro Ciudad", "Adrián Morales-Pastor", "Laura Malo", "Isaac Filella-Mercè", "Victor Guallar", "Alexis Molina" ]
cs.LG
[ "cs.LG", "cs.AI", "q-bio.BM" ]
Emergence of Fluctuation Relations in UNO Valerio Scarani June 17, 2024 ========================================= Álvaro Ciudad1, *, Adrián Morales-Pastor1, Laura Malo2, 3, Isaac Filella-Mercè4, Victor Guallar3, 4, Alexis Molina1, 3, * 1 Department of Artificial Intelligence, Nostrum Biodiscovery S.L., Barcelona, Spain 2 Department of Drug Discovery, Nostrum Biodiscovery S.L., Barcelona, Spain 3 Electronic and Atomic Protein Modelling Group, Barcelona Supercomputing Center, Barcelona, Spain 4 Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain * To whom correspondence may be addressed: {alvaro.ciudad, alexis.molina}@nostrumbiodiscovery.com § ABSTRACT In this study, we present ScoreFormer, a novel graph transformer model designed to accurately predict molecular docking scores, thereby optimizing high-throughput virtual screening (HTVS) in drug discovery. The architecture integrates Principal Neighborhood Aggregation (PNA) and Learnable Random Walk Positional Encodings (LRWPE), enhancing the model's ability to understand complex molecular structures and their relationship with their respective docking scores. This approach significantly surpasses traditional HTVS methods and recent Graph Neural Network (GNN) models in both recovery and efficiency due to a wider coverage of the chemical space and enhanced performance. Our results demonstrate that ScoreFormer achieves competitive performance in docking score prediction and offers a substantial 1.65-fold reduction in inference time compared to existing models. We evaluated ScoreFormer across multiple datasets under various conditions, confirming its robustness and reliability in identifying potential drug candidates rapidly. § INTRODUCTION Virtual screening (VS) is a key computational technique in drug discovery, utilized for evaluating the potential binding affinity of numerous molecules with target proteins. The main aim of VS is to identify molecules with potential interaction capabilities, thereby streamlining the drug discovery process and reducing the need for extensive, costly experimental assays <cit.>. High throughput virtual screening (HTVS) represents an advanced and more efficient form of VS. Characterized by its ability to rapidly assess vast molecular libraries, HTVS significantly contributes to identifying potential biological interactions. The evolution of HTVS has been significantly influenced by the advancement of combinatorial chemistry, enabling the generation of extensive and diverse compound libraries. Previously, drug discovery relied on smaller and less diverse libraries, limiting the scope of potential candidates. The integration of combinatorial chemistry with HTVS allows for the utilization of larger compound libraries, alongside advanced computational evaluation techniques. This integration has accelerated the process of identifying suitable drug candidates by enabling rapid and precise assessment of ligand affinity across these vast libraries. Such efficiency is crucial in modern drug discovery as it can considerably shorten the hit-finding phase and enhance the overall hit rate. As a result, there's an increasing need for methodologies that can swiftly and reliably navigate this expanded chemical space <cit.>. In this context, we present ScoreFormer, a graph transformer model developed to accurately predict molecular docking scores, thereby optimizing the efficiency of HTVS. ScoreFormer is designed to interpret molecular structures and their relationship with docking scores. Our rigorous evaluation demonstrates that ScoreFormer not only achieves state-of-the-art accuracy in hit identification and docking score prediction but also offers a significant reduction in inference time compared to existing methods. Moreover, we also present L-ScoreFormer, a smaller version of the original model, focused on the general prediction of docking scores with improved efficiency. Both approaches make efficient and reliable tools for HTVS, capable of handling diverse datasets and conditions. The datasets, including scores, SMILES, and poses, used in our testing across multiple systems, with and without docking constraints, will be made available at Zenodo. § RELATED WORK HTVS applied to relatively large libraries has been a key practice in drug discovery for some time. Within HTVS, traditional methodologies, such as Glide <cit.> and rDock <cit.>, employ specialized scoring functions and search-based protocols in their algorithms. These are designed to prioritize speed, enabling the assessment of larger compound collections than typically seen in standard docking campaigns. However, with the ever-increasing size of modern compound libraries, these traditional docking approaches are becoming less suitable for the task at hand. Advancements in HTVS have been greatly influenced by the introduction of surrogate models. In a study conducted by <cit.>, a progressive docking approach incorporating molecular features and Morgan fingerprints achieved a 50-fold speed increase and a 90-fold enrichment compared to whole library docking and random sampling, respectively. <cit.> and <cit.> utilized Bayesian optimization and a design space pruning framework on the MolPAL platform, attaining a notable top-10k recovery rate of 67.7% while significantly reducing computational demands. Moreover, <cit.>'s active learning framework, which integrates a Graph Convolutional Network(GCN)-based surrogate model, successfully identified 80% of experimentally confirmed hits, reducing computational costs by a factor of 14. These metrics emphasize the substantial enhancements in efficiency and accuracy these methods bring to HTVS. GNNs have become increasingly effective in molecular learning due to their ability to model the complex, non-Euclidean structure of molecular data. Molecules, represented as graphs with atoms as nodes and bonds as edges, align well with GNN architectures, which effectively capture node interactions. The work by <cit.> showcases the use of GNNs in enhancing docking score prediction, integrating machine learning-based surrogate docking with traditional methods to improve screening efficiency. Their introduction of FiLMv2, a novel GNN architecture, has shown remarkable performance improvements, including a 9.496-fold increase in molecule screening speed with respect to DOCK <cit.> and a recall error rate below 3%, achieving significant speed increases and low recall error rates in molecule screening, thereby advancing chemical docking tasks. However, the use of virtual nodes in GNNs, including FiLMv2, to capture global information can lead to increased computational complexity and prolonged training and inference times, presenting a challenge in HTVS. Additionally, virtual nodes risk overshadowing important local interactions and molecular topologies crucial for precise molecular property predictions due to the absence of learnable weights. To mitigate these issues, we propose the adoption of an attention mechanism, as an alternative to virtual nodes. This approach efficiently processes global information and reduces computational demands, while preserving the ability to capture essential local variations in molecular structures, as shown in the next section. § METHODS §.§ ScoreFormer Architecture The ScoreFormer architecture is grounded in the graph transformer framework as introduced in <cit.>. This choice is motivated by the significance of long-range interactions in molecular regression tasks, a finding supported by ablation studies in <cit.>. Traditionally, a virtual node strategy from <cit.> facilitates long-range interaction by introducing an extra node connected to every other node. However, our analysis (see Section <ref>) indicates that this approach substantially increases inference time. To mitigate this, we integrate an attention mechanism capable of modeling similar long-range interactions but with learnable parameters, resulting in enhanced performance. To enhance the aggregation process from a node’s neighborhood, in the message passing component of the graph transformer layers, we employ Principal Neighborhood Aggregation (PNA) as proposed by <cit.>. The PNA mechanism is defined as: 𝐗_i^(t+1) = U^(t)( 𝐗_i^(t),(j,i)∈ E⊕( M^(t)( 𝐗_i^(t) ,E_j→ i, 𝐗_j^(t)) ) ) where 𝐗_i^(t) is the node feature matrix at layer t, E_j→ i the edge feature of (j,i) if present, ⊕ are the set of aggregator and scalers defined in the original paper and M^t and U^t are neural networks at layer t. Furthermore, to better capture positional information within the graph structure, we incorporate Learnable Random Walk Positional Encodings (LRWPE) from <cit.>. These encodings are integrated at each layer of the PNA as follows: 𝐱_i^(t+1) = PNA( 𝐱_i^(t)⊕𝐩_i^(t),E_𝒩(i) → i ,𝒩(i) ) 𝐩_i^(t+1) = PNA( 𝐩_i^(t), 𝒩(i) ) where 𝐩_i^(t) is the learnable positional encoding for node v at layer t, ⊕ denotes concatenation as the operation for combining node features and positional encodings and 𝒩(i) is defined as the neighbouring nodes of i. The positional encodings are learned layer-wise with another PNA component. The LRWPE are optimized simultaneously with the GNN parameters through a task-specific loss function, ensuring that both the feature and positional information are appropriately leveraged to improve the model's performance on molecular regression tasks. This dual optimization is a key feature of our architecture, distinguishing ScoreFormer from traditional graph transformer models by providing a more nuanced and positionally-aware representation of graph-structured data. A schematic representation of the ScoreFormer architecture is shown in Figure <ref>. For improved efficiency, we also developed L-ScoreFormer, a streamlined variant of the ScoreFormer architecture, designed for efficiency and reduced overfitting risk. Developed with an emphasis on minimizing computational costs while maintaining performance, this model features a parameter-reduced structure. Its optimization, achieved through multiple objective trials using Optuna <cit.>, focuses on balancing parameter reduction with model efficacy. This approach ensures that L-ScoreFormer retains the core capabilities of the original ScoreFormer, yet operates more efficiently and with a lower risk of overfitting to specific data distributions. §.§ Datasets Our evaluation of ScoreFormer and FiLMv2 involved seven datasets, including one from the ZINC database with 128 million molecules docked to the dopamine D4 receptor <cit.>. We further utilized six datasets from the ZINC 20 database <cit.>, choosing 500k molecules with molecular weights between 200 and 500 Dalton. These molecules were docked against three protein systems, Dopamine D4 receptor, HIV-1 protease, and Cyclin-dependent kinase 2 (CDK2), using the Glide software. To assess model performance in different scenarios, we applied both constrained and unconstrained docking protocols, offering a comparative analysis of more realistic and idealized docking conditions. Further details are provided in Appendix <ref>. §.§ Model evaluation In the following experiments, model performance was assessed by training the model using an 80-10-10 data split protocol. Inference epochs were selected based on F1 performance in a validation set composed of 10% of the dataset. Final performance metrics are obtained on a holdout test set using the remaining 10% of the dataset. This procedure was repeated for each of the protein systems and each docking protocol. Performance was measured using the metrics employed in <cit.>, along with additional metrics for a more comprehensive comparison. Further details are in Appendix <ref>. § EXPERIMENTS In evaluating both ScoreFormer architectures, our approach was three-folded, focusing on its generalization capabilities for a range of docking scores, efficacy in identifying promising drug candidates, and computational speed relative to leading models in the field. We utilized standard regression metrics to assess its predictive accuracy and specialized metrics for its precision in identifying potential high-affinity compounds. Additionally, we compared its processing speed against the currently available top-performing architecture. We also evaluated the generalization capabilities of ScoreFormer by performing inference on out-of-distribution molecules. Finally, we further conducted ablation studies to assess the contributions of LRWPE and PNA convolution layers to the performance of ScoreFormer, see Appendix <ref>. §.§ Prediction of docking scores In implementing a graph transformer model like ScoreFormer, one of our primary objectives was to achieve superior generalization in predicting docking scores. This capability can be relevant in drug discovery campaigns, where there is a significant interest in compounds across different ranges of docking scores. Such comprehensive predictive ability is beneficial not only for HTVS tasks but also for parallel applications like Quantitative Structure-Activity Relationships (QSAR) <cit.>. To this end, we evaluated ScoreFormer's performance using standard regression metrics. According to the results in Appendix <ref>, L-ScoreFormer demonstrates enhanced predictive power across a broad spectrum of docking scores, surpassing FiLMv2 implementation in both R^2 and Pearson. We hypothesize that this improvement is attributable to a reduction in overfitting, a common challenge in models trained with a focus on specific topologies, particularly under the WMSE loss used during training. This diminished tendency for overfitting in L-ScoreFormer suggests that a leaner model structure can be more effective in generalizing across different molecular structures and docking scenarios. §.§ Molecular hit recovery ScoreFormer's ability to recover molecular hits, crucial for its application in prospective HTVS campaigns, was thoroughly evaluated using the metrics stated in Appendix <ref>. The analysis, as illustrated in Table <ref>, demonstrated ScoreFormer's superior performance in accurately identifying and prioritizing compounds with high binding affinity across various datasets and conditions. This proficiency is especially evident when comparing its performance to FiLMv2, a benchmark model in the field. ScoreFormer consistently outperformed FiLMv2 across multiple recovery metrics. Due to its increase in the number of parameters, the model is able to capture the topological features responsible for good docking scores along the different systems and constrain set. These results highlight ScoreFormer's robustness and reliability in hit recovery. Notably, L-ScoreFormer also displayed commendable performance, often closely following the FiLMv2 architecture. This observation underscores the effectiveness of both of our architectures in molecular hit recovery tasks. The consistent results of both ScoreFormer and L-ScoreFormer across diverse screening scenarios reinforce the versatility of our approach. See further results at Appendix <ref>. §.§ Inference speed The inference speed of ScoreFormer, a critical factor in HTVS, was evaluated using a dataset of 500,000 molecules from the DOCK database. Conducted on a single A30 GPU, this assessment was tailored to reflect the real-world demands of computational drug discovery. To focus purely on the model's core computational capabilities, we excluded pre-processing steps from the evaluation, as these are commonly cached in practical applications, for both ScoreFormer and the FiLMv2 model. The results, as shown in Table <ref>, highlight a significant enhancement in L-ScoreFormer's processing efficiency compared to FiLMv2. Achieving a 1.86-fold increase in speed, L-ScoreFormer processed samples at a rate of 2468.4 per second, effectively halving the estimated processing time for 128 million samples (DOCK dataset) to 14.40 hours from FiLMv2's 26.85 hours. ScoreFormer, also outperformed previous benchmarks by a factor of 1.652. This marked speed improvement is not achieved through a reduction of trainable parameters but by the replacement of the virtual node in FiLMv2 by an attention mechanism. This highlights ScoreFormer's architectural efficiency while demonstrating the model's capacity to handle large-scale datasets rapidly. §.§ Generalization across the chemical space To evaluate the generalization capabilities of ScoreFormer, we conducted two benchmarks with molecules outside the training distribution. These benchmarks are necessary due to the combinatorial nature of the training databases, which may lead to an unfair assessment of the generalization capabilities of this kind of methods. The first benchmark uses molecules generated by generative models for the evaluation set. As shown in <cit.>, generative models can escape densely populated patent spaces, such as the one of CDK2. This results in a set of molecules with novel chemistry, which can be used to evaluate chemical motif generalization. The evaluation set included molecules with a maximum Tanimoto similarity of 0.3 to the training dataset of the generative model. Docking scores were calculated using the same constrained protocol reported in this study and we evaluated all three previous models, comparing them with metrics obtained by ScoreFormer on non-generated molecules. All metrics showed a decline, but ScoreFormer and L-ScoreFormer performed better or equal to FiLMv2, demonstrating better generalization capabilities for out-of-distribution molecules. [Due to low overlap of the new docking scores with the docking scores of the original training distribution, we do not report the regression metrics values, as all the models under-perform in this task. However, we analyzed molecular hit recovery capability, as this can still be inferred from available results.] Results are summarized in Table <ref> and full results can be found in Appendix <ref>. The second benchmark focuses on creating custom training and testing splits based upon molecular weight, to assess molecular size generalization, and its results can also be found at Appendix <ref>. § CONCLUSIONS In this study, we introduce ScoreFormer and its variant L-ScoreFormer, graph transformer models designed to overcome limitations in current approaches for surrogate models in HTVS. ScoreFormer's architecture, incorporating an attention mechanism, effectively manages global and local molecular information, leading to superior performance in identifying high-affinity compounds. L-ScoreFormer, with fewer parameters, shows strong performance in general docking score prediction, reducing overfitting risks. A key feature is the improved inference speed, with a 1.86-fold increase, vital for high-throughput virtual screening. Future work will focus on applying ScoreFormer in active learning contexts, integrating explicability modules, and adopting uncertainty estimation methods, aiming to further enhance its utility in drug discovery and contribute to the advancement of computational chemistry with more reliable, efficient, and interpretable methods. unsrtnat § MOLECULAR DOCKING IN CDK2, D4 AND HIV-1. §.§ System preparation All systems were prepared with the same protocol and the structures were the following: CDK2 with PDB id 3BHV <cit.>, D4 with PDB id 5WIU <cit.> and HIV-1 with PDB id 3AID <cit.>. Protonation states were assigned at pH 7.4 using the Protein Preparation Wizard <cit.>. Hydrogen bonds were optimized with PROPKA <cit.> and a restrained minimization using the atom force field OPLS4 <cit.> was applied to each system. §.§ Glide docking Ligand dockings were performed using Schrödinger’s Glide software, we used the Standard Precision (SP) protocol with a fast sampling of 500 poses kept per ligand for the initial phase of docking, 40 best poses kept per ligand for energy minimization and only one pose selected for each compound after minimization. Docking grids were centered into the centroid of the corresponding native ligand with an inner box of 15 Å for D4 and HIV-1 and 10Å for CDK2. Hydrogen-bond constraints were imposed to improve the applicability of the poses generated. Constraints selected per system: * CDK2: hydrogen bond constraint to the central donor of the CDK2 hinge region (LEU83) <cit.>. * D4: hydrogen bond constraint to bias the selection of the compounds to selective agonists (ASP115) <cit.>. * HIV-1: hydrogen bond constraint to the highly conserved aspartate in an aspartic protease (ASP25) active site to mimic ligand interactions of known inhibitors <cit.>. Docking was also performed on the unconstrained systems. § HYPERPARAMETERS § EVALUATION METRICS §.§ Exponentially weighted mean squared error (W-MSE) This metric was used as the loss function during training. It modifies a preexisting loss function, in this case, the mean squared error in Equation <ref>, to increase the weight of samples with low target values in Equation <ref>. In the context of molecular docking, we are specifically interested in accurately predicting the docking score of good binders, which are given lower values. This metric allows the training process to pay less attention to poor binders and hence becomes a more valuable tool for identifying good binders ∑_i=0^Ne^-α y_i· l(z_i, y_i) l(z, y) = (z-y)^2 §.§ Regression enrichment surface score This score corresponds to the volume under the surface defined as the recall of an estimator using two thresholds (ζ , σ) for binarizing the target variable and the model outputs. The thresholds are expressed as the fraction of the samples to be considered as positive instances and hence the surface is bounded from 0 to 1 in both axes. The volume is computed with logarithmically scaled axes which results in the higher impact of samples with low target and predicted values in the overall metric. §.§ Area under recall threshold curve (AURTC zeta) The recall threshold curve is defined as a slice of the RES by fixing the threshold ζ (the ratio of the model predictions to be considered positive). As in <cit.>, we report AURTC using two ζ values for all experiments: 0.01 and 0.001. §.§ Parametrized recall (R zeta, sigma) The parametrized recall is the fraction of retrieved positive examples by the model when using two thresholds i.e. ζ and σ, for dividing the true labels and the predicted labels into positive and negative examples. It corresponds to the value of the RES at a given point. As in the original publication, we report R using two pairs of thresholds for all experiments: 0.1 and 0.01; 0.1 and 0.001. §.§ Other metrics Beyond the metrics listed, model performance was also evaluated using the R^2 and Pearson's correlation coefficient. § REGRESSION RESULTS § PERFORMANCE PLOTS § PIPELINE In this section, we describe the key features of the pipeline used to develop a functional surrogate model tailored to specific targets. An overview of the pipeline is shown in Figure <ref>. For each target of interest, a unique dataset is generated by calculating the docking scores for a set of molecules. These molecules are represented as molecular graphs, which encode atom types and their connections. Additionally, each atom's representation is augmented with the RWPE which stored for convenience. These graph representations, along with the docking scores, are used to train the model. Crucially, information about the target's structure, sequence, or the molecule's binding pose is not directly included as input. Instead, the model is designed to infer which molecular features are critical for binding affinity and how they influence this for a specific target. Employing a target-specific approach offers superior performance compared to target-agnostic models and, although it requires generating and training a new dataset and model for each target, it uses significantly fewer resources than conducting a full-scale HTVS. This makes target-specific pipelines an efficient balance between speed and precision. § ABLATION STUDIES To assess the impact of various components on model performance, we conducted three ablation studies. In the first study, we replaced the PNA network with two well-known GNNs, namely Graph Convolutional Network (GCN) and Graph Attention Network v2 (GATv2). In the second study, we removed the LRWPE from the ScoreFormer architecture. Lastly, we combined these two modifications, training a GPS graph transformer without LRWPE and using either GCN or GATv2 instead of the PNA network. We evaluated these architectures using the D4 dataset, with docking scores calculated via Glide under the unconstrained protocol. The evaluation metrics included the average scores from the best 10 steps for each metric, as shown in Table <ref>, with the highest score for each metric highlighted in bold. Our analysis focused on two metrics for score prediction — Pearson correlation and R² — and one metric for hit recovery, wMSE, evaluated on both the training and testing sets. Replacing the PNA layer with either GCN or GATv2 led to a decrease in performance across all analyzed metrics. Similarly, removing the LRWPE resulted in a comparable decline in all validation set metrics, though the wMSE for the training set remained unaffected. This indicates that LRWPE might have a regularization effect, helping to prevent overfitting and ensuring consistent performance across different datasets. Combining both ablations caused an even more significant performance drop across all metrics. Collectively, these findings underscore the importance of the proposed components in enhancing the accuracy of docking score predictions. § GENERALIZATION STUDIES Here we include the complete table showing the results of the unseen chemistry benchmark using molecules obtained from a generative model. Apart from the unseen chemistry benchmark, we conducted another which involved testing the generalization capabilities of our models across different molecular sizes. This test involved generating specific training and evaluation sets based on a molecular weight threshold and then inferring on the remaining testing molecules, with a higher molecular weight. Specifically, ScoreFormer was trained on the 90% of molecules with the lowest molecular weight and evaluated on the 10% of molecules with the highest molecular weight. To prevent any possible information leakage during evaluation, molecules in the validation set used for architecture optimization were excluded. The results, displayed in table <ref> showed a slight degradation in performance compared to the model trained with a random selection of molecules. However, the model still captured feature patterns associated with binding affinities. The most significant degradation was observed in the weighted Mean Squared Error (wMSE) and the R-squared (R²) coefficient. Despite this, Pearson's correlation coefficient and metrics related to hit recovery were over 80% of those achieved by the random split approach, indicating that ScoreFormer can generalize to out-of-distribution molecules, at least, when there is a combinatorial relationship between the evaluation and training sets. Both of these benchmarks together allow us to evaluate the generalization capabilities of our models in various out-of-distribution situations and demonstrate significant robustness across them.
http://arxiv.org/abs/2406.09012v1
20240613113330
Bayesian Statistical Modeling with Predictors from LLMs
[ "Michael Franke", "Polina Tsvilodub", "Fausto Carcassi" ]
cs.CL
[ "cs.CL" ]
Stimulated magnon scattering by non-degenerate parametric excitation Hugo Merbouche June 17, 2024 ==================================================================== § ABSTRACT State of the art large language models (LLMs) have shown impressive performance on a variety of benchmark tasks and are increasingly used as components in larger applications, where LLM-based predictions serve as proxies for human judgements or decision. This raises questions about the human-likeness of LLM-derived information, alignment with human intuition, and whether LLMs could possibly be considered (parts of) explanatory models of (aspects of) human cognition or language use. To shed more light on these issues, we here investigate the human-likeness of LLMs' predictions for multiple-choice decision tasks from the perspective of Bayesian statistical modeling. Using human data from a forced-choice experiment on pragmatic language use, we find that LLMs do not capture the variance in the human data at the item-level. We suggest different ways of deriving full distributional predictions from LLMs for aggregate, condition-level data, and find that some, but not all ways of obtaining condition-level predictions yield adequate fits to human data. These results suggests that assessment of LLM performance depends strongly on seemingly subtle choices in methodology, and that LLMs are at best predictors of human behavior at the aggregate, condition-level, for which they are, however, not designed to, or usually used to, make predictions in the first place. § INTRODUCTION Enabled by the invention of deep neural transformer architectures <cit.>, recent years have brought a new generation of powerful large language models <cit.>. State-of-the-art LLMs excel on many benchmark data sets <cit.>, and so promise to serve as foundation models for a vast and diverse set of applications, both in industry and academia <cit.>. Yet, for any downstream application of LLMs, it is crucial to understand what LLMs can or cannot reliably do. The way in which LLM capabilities should be assessed depends on what their intended application is. For many industrial applications, the prevalent approach towards characterizing the capabilities of LLMs relies on benchmark testing, which usually consists in assessing the accuracy of LLM predictions in tasks where a designated “target answer” or “gold standard” exists, averaged over many instances of this task <cit.>, but more encompassing approaches also highlight the importance of more holistic assessments of LLM, including factors such as robustness, fairness and efficiency <cit.>. Benchmark-driven assessments are very useful for engineering purposes, when the main issue is whether a given system can perform a particular task correctly. There are also applications of LLMs where benchmark testing on a “gold standard” is arguably not optimal. Recent works increasingly go beyond using LLMs based on single-run input-output behavior, and instead utilize LLMs as a part of a larger computational process. Simple examples include sophisticated prompting strategies <cit.>, or structured reasoning models <cit.>. More sophisticated examples include neuro-symbolic models in which LLMs supply specific parts of the relevant information for some practical application <cit.>, or where LLMs are a part of bigger programs to build towards something more akin to explanatory cognitive models <cit.>. For example, information from LLMs can be used to generate alternatives for deliberation <cit.>, arguably similar to human resource-rational reasoning in open-ended domains <cit.>. LLMs may also be used to rank or numerically score options in large or open-ended applications, e.g., to mimic human judgements of desirability, relevance or interestingness <cit.>. In many of these applications, LLMs essentially serve as a cheap, compressed stand-in for (average) human judgements, associations or choice behavior. To assess the quality of LLMs in such contexts, it is less important to compare against a gold standard, and more important to compare against the full distribution of the human behavior that is to be captured by the LLM. In sum, at least for some practical applications, the “gold standard” is not a single “correct” answer, but the full distribution of human responses. Taking inspiration from experimental psychology, an increasing number of studies compares LLM predictions to human choice behavior in psychological experiments and investigates whether LLMs predict patterns of human answer behavior <cit.>. The main focus is often to compare qualitative patterns in LLM predictions and human data, but there is also work investigating whether LLMs can make adequate quantitative predictions. Most notably, there is a strong tradition of relatively early work in computational psycholinguistics <cit.>, which investigates whether quantitative predictions derived from language models match quantitative aspects in human experimental data, such as reading times <cit.> or the amplitude of the N400 component of event-related-potentials in EEG measurements <cit.>. The work presented in this paper seeks to extend the investigation of the human-likeness of predictions derived from LLMs. Our foremost concern is methodological: How can we derive full distributional predictions from information supplied by LLMs, and how can we stringently test whether these distributional predictions are adequate given suitable empirical data? In short, while a lot of previous like-minded work has used targeted assessment of LLM capabilities from the point of view of the experimental psychologist, we here adopt the more specific perspective of the probabilistic / statistical modeller. Taking numerical predictor values generated from LLMs as input, we explore strategies of building a Bayesian statistical model around them, and to scrutinize these LLM-grounded Bayesian statistical models with the usual methods of Bayesian data analysis, in particular model criticism <cit.>. A main conceptual take-away of this investigation, is that statistical models built around LLMs are, by design, fundamentally different from common statistical or probabilistic cognitive models; an observation which also reflects back on the possibility of seeing LLMs as potentially explanatory model for human behavior or cognition. Concretely, LLMs make predictions for each individual item, rather than specifying a predictor of central tendency at a more aggregate level, such as at the level of an experimental condition, as standard statistical models or probabilistic cognitive models normally do (see Figure <ref>). As this point is crucial for our investigation, the following elaborates briefly. Psychological research into the workings of the human mind aims to find generalizable patterns in the way information is processed within or across different domains of cognition. Experimental work therefore often compares human performance in different experimental conditions, which reflect the general factors that are hypothesized to influence behavior. Yet a single experimental condition is often instantiated with different experimental items, which are usually not under scrutiny for any systematic, predictable effect on the observed measurements. For example, classical research on human memory <cit.> investigated the effect of rehearsal on memory consolidation. Relevant experiments compare recall with and without rehearsal (experimental conditions), while using different items (words, numbers, etc., to be memorized) in each instantiation of the same memory task. Likewise, when studying how hearing a color word can facilitate a same-or-different judgement of color swatches <cit.>, the main experimental manipulation concerns the typicality of shown color swatches, while the variability between different color words like “blue” or “green”, is less important to this research question and so treated as item-level variation. Consequently, a typical psychological experiment is mainly interested in assessing behavior at the level of the experimental condition, because that is where the distinctions relevant to the research question reside. Nevertheless, each experimental condition can be, and often is, instantiated with different items, variation among which is deemed less relevant to the research question at hand. Data from human participants for experiments of this kind usually show some variability between items, and also variability between participants. This variability is commonly incorporated in standard statistical models as random stochastic variation, e.g., by using hierarchical regression models <cit.>, as shown on the left side of Figure <ref>. Still, the focus of interest usually remains at the condition-level effects, because it is this more abstract level of behavioral aggregation that is relevant for generalizable theory building. Similarly, when analyzing or explaining data from psychological experiments with a probabilistic cognitive model (PCM), the model's predictions will naturally be set-up to predict data by taking condition-level properties, possibly in conjunction with item-level properties, into account <cit.>, as shown schematically in the middle of Figure <ref>. On the other hand, LLMs first and foremost provide predictions about each item. While it may be the case that, in producing an item-level prediction, the internal computation of a powerful LLM is informed by computations that incorporate abstract knowledge roughly corresponding to the condition-level, the atomic predictions accessible to the common user are specific to each individual string tested. Consequently, this points to known concerns about robustness of predictions under perturbations of input prompts <cit.>. These considerations raise two important conceptual and empirical questions. First, we need to ask whether the item-level predictions made by LLMs are empirically correct, i.e., match the human data at the item-level. Second, the implied methodological challenge lies in specifying how item-level information can be used, e.g., by different aggregation methods, to make robust predictions at a more abstract level (condition, task, …). To investigate these issues, this paper introduces different ways of building a Bayesian probabilistic model around core predictor values derived from various LLMs. We then use the standard tools of Bayesian data analysis to fit and check the resulting statistical models based on data from human multiple-choice experiments. Rather than aiming for scale and large-coverage, we focus on transparency and zoom in on a single case study of pragmatic language production and interpretation, namely so-called reference games <cit.>. The Rational Speech Act framework <cit.> provides a widely used probabilistic model for human data from reference games, so that we can compare a theoretically motivated probabilistic cognitive model (PCM) against a probabilistic model built on top of LLM predictions. We find that variability predicted by the LLM at the item-level is generally not borne out by the human data, that not all ways of constructing condition-level predictions are equally good, and that different LLMs as backends may prefer to use different aggregation strategies. The paper is structured as follows. Section <ref> describes an experiment with human participants with a text-based reference game. Section <ref> introduces the Rational Speech Act (RSA) model for the human data from the reference game experiment. Section <ref> exposes a statistical model for item-level predictions derived from LLM scores and investigates whether this adequately captures the human data at the item-level based on scores from GPT-3.5 <cit.>. Section <ref> discusses different ways of deriving probabilistic predictions from LLMs at the condition-level and compares them against the human data and each other. Finally, Section <ref> explores whether previous results generalize to other LLM backends by investigating different versions of LLaMA2 <cit.>.[ All data and code is available at this OSF repository: https://osf.io/f6j3a/?view_only=5e820cc8bbee4549aed58dc252ba61b9https://osf.io/f6j3a/?view_only=5e820cc8bbee4549aed58dc252ba61b9. ] § EXPERIMENT: REFERENCE GAMES Reference games are an established, well-understood and austere experimental paradigm to test human decision making in abstract communicative tasks. A reference game consists of two players, a speaker and an interpreter, who jointly observe a set of objects, usually referred to as context (see Figure <ref>). In the production condition, the speaker is assigned a trigger object from the context set which they have to describe to the interpreter. In the interpretation condition, the interpreter observes a description, here called trigger word, and chooses one of the objects from the context set. The goal of the game is, for the speaker, to choose a description that enables the interpreter to choose the trigger object; and, for the interpreter, to guess correctly which object the speaker had in mind when using the trigger word. The example in Figure <ref> is a standard case, which we will use throughout, where human choices are informative about the pragmatic reasoning that human decision makers engage in. In this example, there are two features that differ across three objects (here shape and color). One object shares both its color and shape with one other object, while the two other objects have one unique feature (e.g., being the only circle, or the only green object). In a critical production trial, the trigger object to describe is one of the two objects with a unique feature. The speaker has four words to choose from. The target utterance is the word which uniquely describes the trigger object. The competitor utterance is the word that is true of the trigger object, but also true of another object. The other utterances, both of which are false of the trigger object are distractor utterance. In a critical interpretation trial, the trigger word is the one that is true of two of the three objects. If participants engage in pragmatic thought, they might reason that if the speaker had wanted to refer to one of the two objects of which the trigger word is true (blue square and blue circle in Figure <ref>), the speaker could have used a more informative word for exactly one of those two objects (“circle”), so they are more likely to refer to the target object (the blue square in Figure <ref>). The competitor object is the other object of which the trigger word is true. The distractor object is the object of which the trigger word is false. We implemented a simple reference game for human participants in which each trial instantiated the structure of the example shown in Figure <ref>. While previous reference games with human participants used pictorial representations of objects, and sometimes even pictorial representations of messages, we implemented a text-only version in order to be able to compare the predictions of LLMs for human data, when both LLMs and humans processed the same textual representation of the stimuli. The experiment was realized as an online task using <cit.>.[ The code for the experiment can be found at https://github.com/magpie-ea/magpie3-text-refgamehttps://github.com/magpie-ea/magpie3-text-refgame, and a live version of the experiment can be tested at https://magpie-ea.github.io/magpie3-text-refgame/https://magpie-ea.github.io/magpie3-text-refgame/. ] Participants. A total of 302 participants were recruited via Prolific for monetary compensation (£0.45, corresponding to roughly £15.40 per hour). All participants self-identified as native speakers of English. Materials & design. We created 100 different items as stimulus material via a stochastic process. Each item is a different textual description of a reference game with the same logical structure as the example from Figure <ref>. For each item, the context consists of three objects. As in the original paper by <cit.>, objects are defined by a triple of properties, namely a color, a shape and a texture. For each property, there were four possible values, e.g., blue, green, red, and orange for color. The sampled items differed in terms of the properties of the objects in the context set, and in terms of the order in which the objects and expression alternatives were presented in the text. Figures <ref> and <ref> from Appendix <ref> show example screenshots from the experiment. Procedure. Each participant saw four different items sampled randomly from the pre-generated item set. Participants first played two of these in the production condition, then the other two in the interpretation condition. Results. The overall distribution of choices that correspond to the target, competitor, and distractor states is shown in Figure <ref> (together with model checking results to be introduced later).[ The production condition actually has two distractor choices. Here and in the following, these are lumped together as a single category, also when modeling random errors in later models. ] It is interesting that the distractor options were chosen rather often. We also see that the number of target choices is higher in the production condition than in the interpretation condition. This is in line with previous experimental results on human reference games. For example, in previous forced-choice reference games with human participants with pictorial presentations of objects, <cit.> observed the following proportions of target, competitor and distractor options: 0.882, 0.118, 0 in the production and 0.492, 0.506, 0.003 in the interpretation condition (for 288 observations in each condition). § MODEL PREDICTIONS FROM PROBABILISTIC PRAGMATICS Data from reference games with human participants have been variously analyzed with probabilistic models using inspiration from behavioral game theory <cit.>, probabilistic Bayesian modeling <cit.> or other forms of probabilistic modeling <cit.>. Common to these approaches is that they derive or define, based on some explicit conceptual motivation, a parameterized stochastic speaker policy, P_S(u | s; θ_S), modulated by parameters θ_S, for a speaker's choice of expression or utterance u given a referent or state s, which the speaker wants to communicate; and a parameterized stochastic listener policy, P_L(s | u; θ_L), capturing the probability of choosing a referent s for utterance u. The Rational Speech Act model. As a concrete example, we introduce the Rational Speech Act (RSA) model first described in this form by <cit.> <cit.>. The RSA model defines pragmatic reasoning as a sequence of iterated (soft-)optmization of policies and Bayesian inference, grounding out in literal interpretation. If 𝔏(s,u) ↦0,1 is a semantic meaning function mapping each pair of state s and utterance u to a (binary) truth-value, and if P_prior(s) is a prior over states, a literal listener policy is defined as: P_L_0(s | u) ∝𝔏(s,u) P_prior(s) . The pragmatic speaker policy is defined as soft-optimizing the choice of utterance to minimize the literal listener's surprisal for the state to be communicated, i.e., to maximize the log-probability of the trigger object given the utterance: P_S(u | s, α) ∝ [ log P_L_0(s | u) ] . Finally, the pragmatic listener is defined as the policy resulting from applying Bayes rule, solving the inverse-problem for the previously defined speaker policy: P_L(s | u, α) ∝ P_S(u | s, α) P_prior(s) . Figure <ref> gives example calculations (assuming a flat prior and α=1) for the reference game from Figure <ref>. For α=1, the model predicts that the probabilities of target, competitor and distractor options are 2/3, 1/3, 0 for the production, and 3/5, 2/5, 0 for the interpretation condition. Increasing α will increase the odds of target over competitor choices. Condition-level predictions. In sum, the condition-level predictions of the RSA model are a parameterized function P_cond^RSA(R_l, C; α_c), assigning a probability to each response category R_l (target, competitor, or distractor) in each condition C (production or interpretation) for a given α_c. The model constructed so far predicts probability zero for distractor choices, so that the human data shown in Figure <ref>, where the distractor option was chosen in both conditions, would immediately rule out the model entirely. It is therefore common to include a small error probability ϵ, with which a choice would be made at random <cit.>, so that we get: P_cond^RSA(R_l, C; α_c, ϵ_c) = (1 - ϵ_c) P_r(R_l, C; α_c) + ϵ_c/3 , where ϵ_c is a (condition-specific) parameter giving the probability that a choice was made by randomly guessing.[ Since the RSA model predicts probability 0 for the distractor option, this model is, in principle, able to predict any probability distribution over the three choice categories that is compatible with the order: P_r(R_t) ≥ P_r(R_c) ≥ P_r(R_d). Intuitively, this is because with ϵ=0, P_r(R_d, C; α_c) = 0, so that there is an α for any ratio of predicted choice probabilities for target and competitor, as long as the target probability is no smaller than the competitor probability. The ϵ-transformation is essentially a linear shift in the probability simplex towards the maximum entropy prediction, so that every prediction which obeys the ordering restriction above can be made for some pair of α and ϵ. This prediction-triviality is met in two ways. For one, the Bayesian priors on model parameters soft-constrain the model, so that the ex ante credible predictions do rule out many logically possible observations. For another, we break the triviality by assigning a non-zero probability to the prediction for the distractor option. The same triviality problem lurks for the average-WTA model introduced in Section <ref>, and the same solution is applied to it. ] The data D_C from condition C, see Figure <ref>, consists of counts for each response category. The parameterized likelihood function entailed by the RSA model for condition-level data D_C is: P^RSA_cond(D_C| C, α_C, ϵ_C) = Multinomial(D_C, P_cond^RSA(R_l, C; α_c, ϵ_c_1 ≤ l ≤ 3) . The result is a four-parameter model, one pair of parameters per condition. Bayesian posteriors & model checking. Parameterized predictions, like in Equation (<ref>), can be assessed in the light of the empirical data with the usual tools of Bayesian data analysis <cit.>. Let α_c∼log-Normal(1,1) have a reasonably wide log-Normal prior, and let ϵ_c∼Beta(1,15) have a Beta prior favoring small values. Using Stan <cit.> for Bayesian inference, we obtain estimates of posterior credible values of model parameters (summary statistics of which are shown in Table <ref>).[ Posterior samples where generated for four chains with 2000 samples each, after a warm-up of 1000 samples. The “adapt-delta” value was set to 0.99. Converge was checked with R̂-statistics <cit.>. ] To assess goodness-of-fit, we use the posterior predictive distribution, i.e., the model's predictions about data of the same size and structure as the training data. Figure <ref> shows summary statistics (means and 95% credible intervals) for the posterior predictive distribution of the RSA model (among other models for condition-level data, which are to be introduced later). We see that for both conditions the RSA model passes a “visual posterior predictive check” <cit.>, which requires that the distribution of posterior predictions includes the observed choice rates for each answer option. To corroborate the visual impression, Table <ref> shows sample-based estimates of Bayesian posterior predictive p-values (Bppp values), using likelihood of the observed data as a test statistics. These Bppp values approximate the probability that a model conditioned on observed data D_obs predicts future data D_rep, of the same size and format as D_obs, that is at least as likely as the data D_obs itself is (given the posterior predictive distribution). Very small Bppp values indicate that the model might be inadequate for reproducing the data it was trained on, so to speak. This is clearly not the case for the RSA model with Bppp values close to 0.5, as shown in Table <ref>. In sum, the condition-level predictions by the RSA model, a theoretically motivated PCM, are not discredited by the condition-level data. Reversely, the RSA model seems to adequately capture the condition-level data. Item-level predictions. To test item-level predictions, the whole data set for condition C is chunked into item-level data, D_C = D^1_C, …, D^m_C, where D_C^k is the data collected for the k-th out of m items for condition C. Probabilistic cognitive models, like the RSA model, may holistically combine condition- and item-level information in their predictions, as shown in the middle of Figure <ref>, e.g., by incorporating further parameters to capture variance based on different item-classes, such as the empirically observed preference for selecting shape terms over color terms in reference games <cit.>. But for present purposes, we simply use the same (condition-level) predictor to separately predict data from each item, so that the likelihood function for item-level data becomes: P^RSA_item(D_C| C, α_C, ϵ_C) = ∏_k = 1^mMultinomial (D_C^k, P_cond^RSA(R_l, C; α_c, ϵ_c_1 ≤ l ≤ 3 ) . Fitting the RSA model to the partitioned data set, we find estimates of Bppp values that do not discredit the model (see Table <ref>). This suggests that the item-level variation in the human data is not so pronounced as to provide strong evidence against the condition-level predictor from the RSA model. In other words, the condition-level RSA predictions seem to adequately capture also the item-level data. The following sections apply the same methods of Bayesian model criticism also to models built around predictor values from LLMs. Section <ref> first looks at the item-level data, before Section <ref> investigates different ways for deriving condition-level predictions. § ITEM-LEVEL PREDICTIONS FROM LLMS An (autoregressive) LLM is designed to predict the next token given an input string.[ Nothing of substance changes when a Bayesian statistical model is built around scores from a masked language model. We focus on autoregressive, left-to-right language modeling and next-token prediction for ease of reference, since all models tested here are autoregressive. ] This is essentially an item-level prediction: next-token probabilities are specified for concrete strings after a concrete instance of a task, i.e., an item of the task, not for the task as such. From these next-token probabilities, we can derive probabilities for multiple-choice answers for each item (this section) and for a condition (next section). Notation. Let I_1, …, I_m be m be instances of the same task, or items belonging to the same (logical) condition in a behavioral experiment. Each item I_k = x_k, y_kl_1 ≤ l ≤ n consists of an input prompt x_k, which is a string of text, and n choice options y_kl, all of which are strings as well, possibly composed of |y_kl| tokens, y_kl = w_kl1, …, w_kl|y_kl|. For simplicity of notation, we assume that the l-th choice option y_kl for each item k belongs to the same response category R_l. For the case at hand these categories are: target, competitor, distractor, so that y_k1 always corresponds to the designated target option, R_1. LLM scores & predictions. The most obvious item-level score an (autoregressive) LLM provides for each choice option y_kl is its log-probability:[ Commonly used item-level scores include corrections for variable length of answer options <cit.> or variation in base rate among answer options <cit.>. Following common practice in the current literature, we also use length-corrected, average log-probabilities as raw scores for all options. As most options have identical number of tokens, this only affects the numerical values of softmax-parameter α, as introduced below.] S_kl = ∑_i=1^|y_kl|log P_LLM(w_kli| x_k, w_kl1, …, w_kl(i-1)) . Based on each option's score, we can define the item-level prediction of choosing option y_kl in terms of soft-maximization as: P_item^LLM ( y_kl| C; α_c ) ∝ (α_c S_kl) . Notice that the usual item-level prediction used in benchmark testing is actually a “winner-takes-all” strategy, which would choose any option that maximizes the score, but this is just a special case of the above for α_C→∞. Here, the α_C parameter corresponds to inverse temperature, so that α_C→∞ corresponds to greedy sampling with temperature zero. Like in the RSA model, we use an independent soft-max parameter for each condition. Bayesian statistical modeling with LLM predictors. As mentioned previously, the items of the reference game experiment from Section <ref> differ in which levels of features (color, shape, texture) instantiate the structure of the task shown in Figure <ref>, as well as the order of presentation of objects and words. We expect both human and LLM predictions to vary between different items: e.g., humans seem to have preferences for some features <cit.>, such as over-production of informationally irrelevant material <cit.>; and machines may be susceptible to the presentation of the order of choice options. It therefore becomes an empirical question of whether item-level predictions from LLMs provide a good fit, if we aim at predicting the human data separately for each item, not as a condition-level average. To address this question, we assessed item-level scores S_kl from the text-davinci-003 instance of GPT-3.5 August 2023 <cit.> for a text-based version of the reference game from Section <ref>. The task description x_k for item I_k is a text supplied as prompt (see Appendix <ref> for an example). The choice options are categorized as in the human experiment.[ In the current set-up the response type “distractor” has two instantiations in the production condition. Since choice options are a single word in the production condition, for simplicity, we treat the union of the distractor words as a single option.] Similar to the RSA model fit, we allow for random errors with probability ϵ_C: P_item^LLM ( y_kl| C; α_c, ϵ_c ) = (1- ϵ_c) P_item^LLM ( y_kl| C; α_c, ϵ_c ) + ϵ_c/3 . With priors on parameters as for the RSA model described in Section <ref>, the resulting likelihood function for item-level data with item-level predictions is: P^LLM_item(D_c| C; α_c, ϵ_c) = ∏_k = 1^mMultinomial ( D_c^k, P^LLM_item(y_kl| C; α_c, ϵ_c)_1 ≤ l ≤ 3 ) . Summary statistics for samples from the posterior distribution over parameters are shown in Table <ref> (rows 4 and 5). Credible values of α parameters are comparatively low for the LLM-based item-level model, suggesting that the predicted scores have rather large differences, which have to be compensated for by the softmax link-function to adequately fit the data. Figure <ref> shows the mean of the probabilities for the highest scoring answer category, averaged over all items, for different parameter values of α, highlighting that for a posteriori credible values of α (gray shaded areas), the predictions are clearly distinct from the predictions of a WTA strategy (which assigns probability 1 to the highest scoring option for each item). This suggests that the WTA strategy, which is the special limiting case for α→∞, provides a substantially worse fit for item-level choices than those provided by less extreme values of α. Sampling-based approximations of Bayesian posterior predictive p-values for the by-item analysis are very low (see Table <ref>), suggesting that the unaggregated LLM scores are inadequate predictors of the human data. To corroborate this conclusion, Figure <ref> shows that the item-level LLM-based model predicts variance which is not borne out by the human data. Concretely, the plots show, for each condition and item, mean posterior estimates of the model's predicted probability of choosing the target option (x-axis), together with the observed proportion of target choices in the human data (y-axis). There is ample variation in the model's predictions, especially visible in the production condition, owing to the fact that the item-level scores of the LLM sometimes clearly favor another option than the target choice. So, the model itself predicts systematic variability at the item level. The human data, too, show variability at the item-level, but there is no (visual) indication that the item-level variability predicted by the LLMs is borne out by the human data. These results suggest that LLM-based probabilistic predictions may imply item-level variance that is not attested in the human data. Put more strongly, a model that uses the most obvious item-level scores, derived from the predicted log-probabilities, to predict what human participants choose on a by-item level is ruled out by the current experimental data. § CONDITION-LEVEL PREDICTIONS While LLMs do not make conditional-level predictions as such, they can be derived from item-level scores S_kl by averaging over all items belonging to the relevant condition. There are many ways of averaging item-level information. Figure <ref> shows three salient approaches, which differ in what the underlying item-level measure for aggregation is: the raw scores S_kl, the item-level probability derived from it (as used in Section <ref>), or the predictions from the winner-takes-all (WTA) strategy commonly used in benchmark testing. The average-scores predictor first aggregates the item-level scores, and then transposes the averages into (scaled) probabilities using the usual parameterized softmax function:[ The reported results average over the multi-set I_1, … I_m of items that occurred in the human experiment for condition C. By using a multi-set, which may contain a single item multiple times, we produce aggregate predictions for exactly the set of items that the participant group saw, which provides the most fitting counterpart to the human data.] P_cond^SCR(R_l| C, α_c) ∝ [ α_c 1/m ∑_k=1^mS_kl ] . *[average scores (narrow-scope aggregation)] The average-probabilities predictor first transposes scores into probabilities with a parameterized softmax function, and only aggregates over items last: P_cond^PRB(R_l| C, α_c) = 1/m ∑_k=1^m P_item^LLM(y_kl| C, α_c) . *[average probabilities (wide-scope aggregation)] Finally, the average-WTA predictor considers the prediction of the WTA strategy (a softmax with α→∞) as the basic item-level unit to aggregate over. To add parameterized scaling to this method, a power-law transformation is a natural choice: P_cond^WTA(R_l| C, α_c) ∝ [ 1/m ∑_k=1^m P_item^WTA(y_kl| C) ]^α_c , *[average WTA (intermediate-scope aggregation)] where P_item^WTA(y_kl| C) = lim_α→∞ P_item^LLM(y_kl| C, α). The average-scores and average-probabilities predictors are equivalent if there is only one item, in which case the prediction of the average-WTA method is the special case of α→∞. For cases with more than one item, the predictions of the three predictors are not guaranteed to be the same. Conceptually, the average-probabilities and the average-WTA predictors, but not the average-scores predictor, are compatible with a picture in which condition-level predictions result from the actual predictions at the item level. However, based on the results from the previous section, the item-level predictions for the WTA strategy are demonstrably incongruent with the human data. In general, the average-probability and average-WTA predictors can lead to qualitatively different results for task-level accuracy (see Appendix <ref>). Using the same approach as for the RSA model in Section <ref>, we can build likelihood functions for condition-level data around the three predictors introduced above. With the same priors and methods used before, we obtain samples from the posterior over parameters and samples from the posterior predictive distributions. Summary statistics for posteriors over model parameters are shown in Table <ref>. What is noteworthy is that for the average-WTA model, the estimates of α_c in the production condition do not rule out, in fact lie close to, the special value α_c=1, for which the power-law transformation is the identity function. This means that just averaging WTA-responses at the item-level yields a reasonable predictor for the production data at the condition-level. However, for the interpretation condition, the value α_c=1 is clearly outside the range of credible parameter values, so that a simple recipe like “always average WTA-responses without transformation” is not a viable strategy for good condition-level predictions in general. It is also worth noting that for the average-probability model values of α_c of around five and above are virtually indistinguishable from the item-level predictions of the WTA strategy (see Figure <ref>). This suggests that, for the production data, aggregation of item-level predictions from the WTA-strategy gives good condition-level predictions. Figure <ref> shows the summary statistics (means and 95% credible intervals) for each model's posterior predictive distribution. We find that only the theoretical model (RSA) and the average-WTA model pass this “visual posterior predictive check” for both conditions; the other two models both overpredict the target choice rate and underpredict the competitor choice rate in the interpretation condition. To corroborate the visual impression, Table <ref> shows sample-based estimates of Bayesian posterior predictive p-values, using likelihood of the observed data as a test statistics. Consequently, the results from Table <ref> suggest that the average-scores and the average-probabilities models are able to reproduce the production data, but fail on the interpretation data; and that only the average-WTA model does not fail to capture the data from both conditions. These results tell us that not all ways of deriving condition-level predictions by averaging over item-level variation are equally good. Some approaches clearly fail basic checks for statistical goodness-of-fit. On the positive side, we also find that there is at least one model with predictors based on LLM-measures, namely the average-WTA model, which is able to recover the patterns in the data. In other words, there is a way of deriving predictor values for condition-level forced-choice probabilities from an LLM such that, when fed into a common linking function (here with parameters α for optimization and ϵ for random error), the human choice probability can be reconstructed faithfully in its entirety. On the other hand, there is a stark contrast between the results of item-level and condition-level data. The model that was able to properly fit the condition-level data builds on predictions for item-level choices that are clearly incompatible with the human item-level data. The model seems to be “right for the wrong reasons.” § GENERALIZING TO OTHER LLM BACKENDS All previous results where obtained for LMM predictions derived from GPT-3.5 davinci. To investigate whether key results replicate for other LLMs and scales, we ran the same analyses also for scores derived from variants of LLaMA2 <cit.>, in particular the 7, 13 and 70 billion parameter base models (marked as `base'), and the models of similar size fine-tuned on chat data (marked as `chat'). Table <ref> shows the relevant summary statistics and Bayesian posterior predictive p-values. Figure <ref> shows visual posterior predictive checks for the condition-level data. The plots show the differences between observed counts and the models' posterior predictions for expected counts for each condition and response option. We find that the average-WTA predictor is consistently able to capture the data from the production condition, except for one of the six models (LLaMA2-chat 70B). When the average-WTA predictor passes the model checking criterion for the production data, the posteriors for the power-law parameter α are credibly different from 1, thus suggesting that, contrary to the results for GPT-3.5 a mere “WTA average” is not a good prediction strategy for the production data for the LLaMA2-based models. Looking at interpretation, the average-WTA predictor recovers the interpretation data for only one of the six models (LLaMA2-chat 13B, see also Table <ref>). Interestingly, for LLaMA2-chat 13B also the average-probability predictor is able to recover the interpretation data, unlike for the GPT-3.5 model. Finally, for item-level data, as for GPT-3.5, no LLaMA2-based model captured the item-level variance observed in the human data. In sum, we find general support for the previous conclusions that the average-WTA predictor is most successful in capturing the condition-level data, and therefore that, where models manage to recover the condition-level data, they seem to be “right for the wrong reasons.” Additionally, we also find variability in which method of aggregation works well with which LLM as scoring model. § CONCLUSION While the common practice in evaluating the capabilities of LLMs is based on accuracy averaged over large collections of data, this work took the alternative route to explore what we learn if we subjected LLMs to the same routines and strong demands on distributional quality of fit to human data as we normally do for statistical or probabilistic cognitive models. Knowledge of the adequacy of LLMs' full distributional predictions for simple tasks that require human-like judgement or decisions is important to gauge in how far LLMs can be trusted to provide such information in applications such as hybrid, neuro-symbolic models <cit.>. A main contribution of this paper is methodological, showing how statistical model criticism can be used for LLMs in the first place, and, more specifically, how it can be insightful in the detailed assessment of how LLMs might or might not be able to predict human data at the degree of accuracy that we would normally aspire for when dealing with explanatory statistical models in cognitive science. Applying this comparative approach to a minimal, but non-trivial data set, we find that the LLM predictions on a per-item level predict variance that is not attested in the human data. From several candidate predictor measures for aggregate condition-level data, only one was not refuted by the human data (at least for the GPT-3.5 model), but this was one that relied on the empirically implausible WTA-strategy at the item-level, incidentally the same strategy that is commonly used in accuracy-based benchmark testing <cit.>. Explanatory power. A basic observation brought to the foreground by our approach is that LLMs' atomic predictions are for individual items and that some aggregation method is needed to derive more abstract, condition- or task-level predictions. This is one sense in which LLMs may be felt to be less, or not at all, explanatory. They do not offer, at least not directly, a human-comprehensible compression of reality into a kind of response pattern, over and beyond making a prediction for each particular situation. As this kind of compression is arguably important for a sense of understanding <cit.>, the direct comparison of LLMs with common practices in experimental psychology and with probabilistic cognitive models, provides an interesting perspective on why LLMs are often felt to be lacking in explanatory power. This perspective on the explanatory role of LLMs goes beyond the factors of performance, indirect support and parsimony identified by <cit.>. It is also subtly different from considerations of an LLM's ability to generalize <cit.>. It rather suggests that transferability is a dimension to “explanatory power” that is important as well. Imagine that models M and M' have been designed for and trained on data from a situation S, but need to be applied to a different situation T. Assume that for model M the only way to make predictions for T is to collect data pertaining to T, and either retrain or fine-tune the model. In contrast, model M' can make predictions for T without novel data collection by recognizing a meaningful difference between S and T and consequently manually changing parameter values or model-internal mechanics to accommodate for this change. In that case, model M' would be more transferable than model M. For example, if we change the experimental setting for a reference game to consist of data from a special population, such as very young children or language-impaired adults, a potentially reasonable architectural change to a model like the RSA model is to consider differences in the sets of alternative utterances for the speaker <cit.>. Even though this is only a vague explication of a notion of transferability, it suffices to corroborate the intuition that probabilistic cognitive models like the Rational Speech Act model, which are designed to operate at a higher level of conceptual abstraction, will often appear more transferable than models like LLMs, which make predictions not for kinds of situations but for particular situations. Whether any given model's transfer-ability is correct, is an orthogonal empirical question. It is also an empirical question, exactly to which extent and in which areas LLMs are (not) transferable, especially if we consider prompting a transfer strategy <cit.>. In any case, the comparison between LLMs and other statistical or probabilistic cognitive models started here suggests systematic research into the transfer-ability and explanatory value of LLMs, e.g., by prompting strategies that are empirically insightful or by their use in composite neuro-symbolic models that implement theoretically-meaningful conceptual differences between types of situations. Variability in predictions and performance measures. Despite being anchored in a small, but detailed case-study, the fact that different plausible methods of aggregating item-level information led to condition-level predictions of variable quality is worrisome. The wide-spread reliance on a winner-takes-all strategy might be inconsistent with the actual use practices of LLMs, which may not always rely on a temperature-zero sampling strategy (see Appendix <ref>). In conclusion, research on LLMs should systematically study the conceptual and empirical consequences of seemingly minor decisions in evaluation or application settings. Variability of performance measures also comes with a well-known risk for robust and reproducible science <cit.>. The more researcher degrees of freedom there are, the higher the risk of false results, even in the absence of intentions to mislead <cit.>. Testing LLMs as predictors in statistical models should raise awareness for the issue of robust research methods also in NLP <cit.>. The work presented here also contains considerable researcher degrees of freedom. We only considered one out of several conceivable ways of carving a parameterized likelihood function from LLM-based predictors. If future work would contribute to systematically exploring these, the main goal of this paper would have been met, which is to raise awareness for the possibility, perhaps even necessity, to scrutinize LLMs at the same level of rigorous detail as other models in cognitive science. Human-like predictions from LLM. The question after the human-likeness of quantitative LLM-derived information matters for applications which use numerical scores to rank or weigh options <cit.>. Moreover, to the extent that LLMs are used as parts of explanatory “neuro-symbolic models” of information processing <cit.>, understanding whether and how LLMs might yield full-fledged distributional predictions is important, e.g., to explore their integration into probabilistic (cognitive) models <cit.>. Based on the data set and the detailed analyses conducted here, it seems not infeasible to use numerical predictions from LLMs as part of predictive probabilistic models. But, ideally, low-level prompt variation, such as from order of presentation or similar “nuisance variables,” should be averaged out or taken into account in some way or other, as we do not understand what causes this variation in the predictions of models. Further research is necessary that investigates when exactly this variation accords with empirically observed patterns. In sum, we conjecture that using LLM predictors for probabilistic predictions, such as in a neuro-symbolic model, might be possible if embedded in the proper link functions and if item-level variation is taken into account. This, however, entails that each LLM component in a hybrid model should be independently tested against at least a modestly sized empirical data set. No substitute for human subjects. Looking at LLM-derived predictions for human data from the perspective of cognitive modeling highlights the fact that LLMs predict item-level variation, but not subject-level variation, which is common in human data. We may consider variation introduced via softmax / temperature as analogous to between-human variation, but this likely falls short of reality, where pronounced differences in answer behavior may surface. For the particular case of pragmatic language use, prior research has shown that individual participants have markedly different behavioral profiles, often consistently behaving like literal language users or more sophisticated language users <cit.>. It is an open question whether predictions from LLMs reflect the same kind of variation <cit.>. The results presented here recommend skepticism. We therefore side with the cautious voices that do not recommend replacing human participants with LLMs in psychological research <cit.>. In contrast, this points to an important challenge for future LLM research. The fact that aggregate predictions can track aggregate human behaviour means that the variance on both sides is washed out to achieve a similar result. This raises the issue of finding the systematic differences between LLMs and human answerers. The task then would be to find cases where the differences do not wash out and ask: What if anything do these cases share? Limitations and follow-up work. The scope of our experimental investigation was deliberately small but perspicuous. Focusing on the methodological contributions and details in performance assessments, we investigated a minimal non-trivial case study where we could triangulate LLMs, probabilistic cognitive models and human data. This work may therefore serve as starting point for a wider investigation of more complex data sets and case studies in which LLMs are analyzed as and directly compared to explanatory PCMs. § ACKNOWLEDGEMENTS Thanks to Robert D. Hawkins and Thomas Brochhagen for insightful comments. We gratefully acknowledge support by the state of Baden-Württemberg, Germany, through the computing resources provided by bwHPC and the German Research Foundation (DFG) through grant INST 35/1597-1 FUGG. MF is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 39072764. [heading=bibintoc] § SCREENSHOTS FROM THE ONLINE EXPERIMENT WITH HUMAN PARTICIPANTS Figure <ref> shows a trial from the production condition, Figure <ref> one for the interpretation condition of the online experiment. § EXAMPLE ITEM FOR THE LLM EXPERIMENT The text-based input for the LLM predictions mirrors the text in the human experiment, except that the LLM input also lists the set of all available choice options (which for the human experiment is unnecessary since this information is given by the buttons for the forced-choice selection). For example, the task description T_k for the item that corresponds to the production trial shown in Figure <ref> is shown below (the actual input has no line breaks in the first paragraph): § EXCURSION: ACCURACY SCORES FROM AVERAGE-PROBABILITY VS. AVERAGE-WTA PREDICTORS Standard benchmark testing looks at task accuracy, defined as the probability of selecting the “gold standard” response, usually based on “winner-takes-all” (WTA) selection of the highest scoring option. We can generalize this and define the softmax-based accuracy as the average probability of choosing the designated target response option R_1 for the condition-level soft-max prediction (notation as defined in main text): P_cond^SM(R_1) = 1/m∑_k = 1^m P_item^SM(y_k1) . The WTA-based accuracy (the standard measure in most benchmark testing) is the special case of α→∞. The WTA-based accuracy can differ qualitatively from the more general softmax-based accuracy, as shown by the following example. Example: Imagine that there are two options, and that the target option's score is a small ϵ higher in 80% of the task's items, and otherwise lower. The WTA-based accuracy is 0.8. This number is useful as a performance measure for applications in which the LLM is used in exactly the way the WTA strategy describes, e.g., any implementation which is outcome equivalent to greedy decoding with rejection sampling on a domain that contains only the available options. For such a case, it never matters how much worse the goal answer is scored in the 20% of the cases where it is not the maximum. As only the best option will be chosen, that information is irrelevant. But if an application uses anything other than greedy-like responses, the accuracy score of 0.8 may be misleading. If the remaining 20% of the items are such that the non-goal option is almost infinitely better, it would be chosen under a pure sampling strategy, where α = 1, with virtual certainty, so the softmax-based accuracy would be around 0.4.[The probability of the target option in the 80% of items where the goal answer is slightly better is 0.5 in the limit of ϵ→ 0, and it is virtually 0 in the remaining 20% of the cases. This gives an expected rate of: 4/5 1/2 + 1/5 0 = 2/5.] The example shows that differences between accuracy measures depend on the variation in item-level scores, in particular on the relation between score-ordering and score-differences. Note that the example holds equally if numbers for the two options are reversed, so that there is no way of saying which of the two measures of accuracy would generally be more favorable for selecting the target option. The upshot of these considerations is that the standard practice of WTA-based performance assessment for LLMs gives false, or at least misleading or inaccurate results, whenever not all downstream applications use a greedy-like sampling strategy (which is almost certainly the case), and there is variability in item-level predictions (which may or may not be the case, depending on the domain of application).
http://arxiv.org/abs/2406.08820v1
20240613052322
DisfluencySpeech -- Single-Speaker Conversational Speech Dataset with Paralanguage
[ "Kyra Wang", "Dorien Herremans" ]
eess.AS
[ "eess.AS", "cs.CL" ]
DisfluencySpeech – Single-Speaker Conversational Speech Dataset with Paralanguage This project has received funding from SUTD Kickstarter Initiative no. SKI 2021_04_06. Kyra Wang Information Systems Technology and Design Singapore University of Technology and Design Singapore, Singapore kyra_wang@mymail.sutd.edu.sg Dorien Herremans Information Systems Technology and Design Singapore University of Technology and Design Singapore, Singapore dorien_herremans@sutd.edu.sg June 17, 2024 ============================================================================================================================================================================================================================================================================================================================= § ABSTRACT Laughing, sighing, stuttering, and other forms of paralanguage do not contribute any direct lexical meaning to speech, but they provide crucial propositional context that aids semantic and pragmatic processes such as irony. It is thus important for artificial social agents to both understand and be able to generate speech with semantically-important paralanguage. Most speech datasets do not include transcribed non-lexical speech sounds and disfluencies, while those that do are typically multi-speaker datasets where each speaker provides relatively little audio. This makes it challenging to train conversational Text-to-Speech (TTS) synthesis models that include such paralinguistic components. We thus present DisfluencySpeech, a studio-quality labeled English speech dataset with paralanguage. A single speaker recreates nearly 10 hours of expressive utterances from the Switchboard-1 Telephone Speech Corpus (Switchboard), simulating realistic informal conversations. To aid the development of a TTS model that is able to predictively synthesise paralanguage from text without such components, we provide three different transcripts at different levels of information removal (removal of non-speech events, removal of non-sentence elements, and removal of false starts), as well as benchmark TTS models trained on each of these levels. artificial intelligence, speech synthesis, dataset, text-to-speech, disfluency § INTRODUCTION In informal spoken English, while the content of what is spoken is important for deriving meaning, how that content is said is just as important. While non-speech sounds (such as laughter and sighing) and disfluencies (such as filled pauses like “um” and “uh”) do not directly contribute lexical information, it has been argued that such non-lexical components of conversational speech, or paralanguage, can drastically influence the perception of what is said, altering the semantic context of the lexical content <cit.>. Examples of the semantic and pragmatic significance of paralanguage in speech can be found in a multitude of examples: conversational grunts have a wide variety of pragmatic functions ranging from indicating interest in an interaction to controlling the flow of turn-taking <cit.>; disfluent speech can affect a listener's interpretation of the deceptiveness of a message <cit.>; even from a young age toddlers use speech disfluencies for realtime comprehension of speaker referential intention <cit.>. Systems capable of synthesising speech with semantically-appropriate paralinguistic components are thus very useful in the many areas that can benefit from an additional dimension of information representation, such as the design of the personality of synthesised voices for human computer interaction <cit.>. In regard to modern neural speech generation, of the many forms of paralanguage, prosodic speech is probably the most well investigated field <cit.>. Synthesis of disfluent speech and non-speech sounds, on the other hand, are relatively unexplored areas. <cit.> argues that this is due to the complexity of the task of modelling these paralinguistic components of speech explicitly. Computational modelling of these components is hampered in particular by the lack of existing speech datasets that include them. Most popular speech datasets are either multi-speaker datasets that do not include enough data per speaker (such as the Switchboard-1 Telephone Speech Corpus (Switchboard) <cit.>), or single-speaker datasets that do not include non-lexical speech sounds and disfluencies (such as LJSpeech <cit.>). This makes it challenging to train conversational TTS synthesis models that include such paralinguistic components. The closest work to providing a dataset that includes disfluencies with considerable audio data from each speaker is DailyTalk <cit.>. However, while it contains sample dialogues from the DailyDialog dataset <cit.> to replicate natural speech, it does not include disfluencies other than filled pauses such as “um” and “uh”, and does not include non-lexical speech sounds such as laughter and sighing. To that end, in this paper, we present DisfluencySpeech, a single-speaker, studio-quality, fully-labeled, English speech dataset. The dataset consists of nearly 10 hours of utterances derived from the Switchboard corpus. Unlike the DailyTalk dataset, our dataset includes detailed disfluency and non-lexical speech sound annotations, distinguishing between different types such as filled pauses and laughter. We also provide three different transcripts at different levels of information removal (removal of non-speech events, removal of non-sentence elements, and removal of false starts), to aid the development of a TTS model that is able to predictively synthesise semantically-meaningful paralanguage from text without such components [<https://huggingface.co/datasets/amaai-lab/DisfluencySpeech>]. Additionally, to help users of this dataset speed up their analysis, we provide weights for benchmark Transformer <cit.> models trained on each transcript, a HifiGAN vocoder <cit.> fine-tuned on the dataset, and Montreal Forced Aligner (MFA) <cit.> resources adapted to the dataset. § DISFLUENCYSPEECH DATASET The construction of the dataset comprised of three steps: generating the transcript, recording a single speaker reading the transcript in a conversational manner, and processing the recorded clips to make them ideal for training TTS models on. §.§ Transcript generation The transcripts for DisfluencySpeech were derived from the Switchboard Dialog Act Corpus (SwDA) <cit.>, which extends the Switchboard-1 Telephone Speech Corpus, Release 2 <cit.>. Subutterances were joined together to form full utterances, removing interleaving interruptions from the other speaker. Only utterances with 15 to 35 words were included in the final transcript, to ensure that each utterance was long enough to have meaningful contextual semantic information, but short enough to be easily loaded into memory for training a TTS. These transcripts were manually created and checked for accuracy to each associated audio file. Five types of non-sentence elements based on <cit.> are annotated in the Switchboard transcript: filled pauses {F ...} (e.g. “uh”, “um”), explicit editing terms {E ...} (e.g. “I mean”, “sorry”), discourse markers {D ...} (e.g. “you know”, “well”), coordinating conjunctions {C ...} (e.g. “and”, “but”), and asides {A ...} (comments that interrupt fluent flow). Additionally, restarts [... + ...] and non-speech sounds <...> are annotated as well. Three different transcripts are provided with DisfluencySpeech, each at differing levels of information removal: * Transcript A contains all textual content recorded, including non-sentence elements and restarts. Only non-speech events such as laughter and sighs are removed from transcript. * Transcript B is transcript A but with filled pauses, explicit editing terms, and discourse markers removed. Coordinating conjunctions and asides are left in, as they are non-sentence elements as well, they are often used to convey meaning. * Transcript C is transcript B but with false starts removed. This is the most minimal transcript. An example of the three different ways that the same clip is transcribed in the transcripts can be seen in Figure <ref>. The reason for providing different levels of transcipts is to explore the possibility of training TTS models that can understand the semantic meaning of the text, and synthesise paralanguage that is semantically-appropriate. For example, a TTS model trained on transcript A may be able to synthesise a sigh when the text contains a discourse marker; more impressively, a TTS model trained on transcript C may be able to synthesise false starts and stuttering when it predicts that the speaker would be nervous or stressed from the text. These transcripts are provided in the same format as LJ Speech <cit.> such that they are compatible with any pipelines that use LJ Speech. Each transcript is provided as a .csv file, with one record per line, delimited by the pip character (0x7c). The fields are: * ID: the name of the corresponding .wav file. * Transcription: words spoken by the reader (UTF-8). * Normalized Transcription: transcription with numbers, ordinals, and monetary units expanded into full words (UTF-8). This field is redundant as the original Switchboard transcript is already normalized, but is kept for compatibility with existing pipelines. A transcript of the original joined subutterances extracted from SwDA is also provided, retaining all non-speech event and disfluency annotations, allowing users of the dataset to generate their own transcripts or perform semantic analysis on the dataset. §.§ Recording Recording was conducted in a acoustically-treated professional studio, using a Samson Q2U dynamic microphone. A pop filter was used to eliminate popping sounds from plosives during recorded speech. Elements of the recording environment were controlled between recording sessions, such as the layout of the furniture in the studio and the distance of the speaker from the microphone, to ensure a consistent sound profile across recorded clips. A single 28 year old Singaporean speaker with English as her first language is chosen to read the original transcript generated with all Switchboard annotations including non-speech sounds, simulating a conversation with herself. The format of the audio files, once again, are the same as in LJ Speech <cit.> to allow compatibility with any pipelines for LJ Speech. Each audio file is a single-channel 16-bit PCM WAV with a sample rate of 22,050 hertz. §.§ Post-processing A Montreal Forced Aligner <cit.> acoustic model was adapted from the English (US) ARPA acoustic model v2.0.0 <cit.> onto the dataset, and a corresponding grapheme-to-phoneme (G2P) dictionary was generated using the English (US) ARPA G2P model <cit.>. Users of the dataset who require forced alignment (e.g. providing alignments for non-autoregressive TTS models like FastSpeech 2 <cit.>) may use these resources. Table <ref> shows statistics of the final DisfluencySpeech dataset. § BENCHMARK MODELS To assess the usability of the DisfluencySpeech dataset for training TTS models, we trained simple benchmark models for each of the transcripts in the dataset. The benchmark model architecture is the same for each transcript we trained on, and consists of a Transformer <cit.> autoregressive TTS model implemented using the fairseq S^2 speech synthesis toolkit <cit.>. The 5,000 clips in the dataset were split into a 90%-5%-5% train-validation-test set. During training, each benchmark model took its respective transcript (i.e. A, B, and C) as input, and used the same DisfluencySpeech dataset audio as targets. The hyperparameters used were the same as the ones found on the fairseq S^2 LJ Speech Transformer model[<https://github.com/facebookresearch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md>]. This is to ensure a fair comparison between DisfluencySpeech and an established dataset, LJSpeech. The training of each model was conducted on a single Tesla V100-DGXS for 30,000 steps. Automatic evaluation of the benchmark models was performed. The mean cepstral distortion (MCD) was computed on Griffin-Lim <cit.> vocoded benchmark model output spectrograms against Griffin-Lim vocoded ground truth spectrograms at a sample rate of 22,050 hertz. In addition, a HiFiGAN vocoder <cit.> was used to generate waveforms at a sample rate of 16,000 hertz for character error rate (CER) evaluation using the Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) automatic speech recognition (ASR) model finetuned on 300 hours of the Switchboard dataset <cit.>. The HifiGAN vocoder model was finetuned on the dataset using NVIDIA's NeMo toolkit <cit.>. § OBJECTIVE EVALUATIONS Table <ref> shows the results of the evaluation. In addition to evaluating the benchmark models, we added metrics calculated on the original datasets themselves and we resynthesize the audio from DisfluencySpeech using HiFiGAN. To calculate the CER between audio and text, we first transcribe the audio using the Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) automatic speech recognition (ASR) model finetuned on 300 hours Switchboard <cit.>, and then compare to their respective transcripts. The original DisfluencySpeech audio and resynthesized audio was evaluated against transcript A. The low CER of the original audio, even when using a pre-trained ASR model not fine-tuned to the speaker's accent, shows that the transcript accurately reflects the recorded speech. The CER is only slightly greater than the CER of LJSpeech, which, unlike DisfluencySpeech, is a speech dataset with no disfluencies and non-speech sounds, elements that greatly increase the chances of error in ASR tasks. Additionally, with the resynthesised audio performing similarly to the original audio in the ASR task, we can conclude that the fine-tuned HiFiGAN model performs well too. The training of the Transformer model for transcript A was able to converge, and the model can generate speech that is similar to the original audio. The MCD is slightly better than the MCD of the same Transformer model trained on LJSpeech by <cit.>, indicating that the generated audio is phonetically similar to the original audio. However, the CER is much higher. This is in line with prior research that shows Transformer has trouble generalizing alignments to long utterances <cit.>, which can be seen in the generated audio as frequently missing words, or repeating words too many times. The brittleness of Transformer's attention mechanism in situations with missing transcript information becomes even more apparent when we see that the training of the Transformer models for transcript B and transcript C both failed to converge. The Transformer model is unable to learn the alignments of the missing textual transcript information, resulting in a high CER and unintelligible generated audio. This is an expected result with these simple benchmark models, and future research could approach this problem with more recent techniques such as enforcing hard monotonic alignments <cit.>. Despite the problems with alignment, however, transcript A's Transformer model was able to learn to generate non-speech sounds despite the fact that transcript A does not have any non-speech sounds annotated. We notice that sounds like sighs and laughter, while not as prominently featured as in the original audio, are still present in the generated audio. This shows that the Transformer model was able to implicitly learn to generate non-speech sounds as semantically important to the speaker's intent, despite the lack of explicit annotations in the transcript. § DISCUSSION AND FUTURE RESEARCH It is not the aim of this paper to develop state-of-the-art model, but merely to provide an objective benchmark with open source evaluation framework which will allow researchers to easily work with our dataset. In such model focused work, subjective evaluation such as Mean Opinion Scores would be essential. Given that this is mainly a dataset introduction paper, it falls outside of our scope. The joining of subutterances from the Switchboard corpus into single utterances removes the context of interleaving speech between speakers in the recorded dialogues, and thus the semantic information in the paralinguistic components of speech loses that dimensionality. This is a limitation of the dataset, and future work should explore the effects of this limitation on the performance of TTS models trained on DisfluencySpeech. The dataset was built by reading aloud an existing corpus with indications of the disfluencies, which is a form of acting rather than spontaneous speech production. While this has the advantage of creating a single-speaker dataset from many speakers, the lack of spontaneity might affect the performance of TTS models trained on DisfluencySpeech. In future work, the transcripts of our dataset may also be useful as an auxiliary task for a TTS model. This may teach the model to model disfluencies of speech, without explicitly being fed them as input text. § CONCLUSION In this paper, we present the DisfluencySpeech dataset, a single-speaker speech dataset with disfluencies and non-speech sounds. Along with the audio, we have also provided manually-checked transcripts at 3 different levels of information removal, intended for the training of TTS systems capable of predictively generating speech with semantically-meaningful paralanguage from text. The dataset is available open source and can be downloaded together with a dataloader online.
http://arxiv.org/abs/2406.08543v1
20240612180000
Extra-Dimensional Axion Expectations
[ "Matthew Reece" ]
hep-ph
[ "hep-ph", "hep-th" ]
=1
http://arxiv.org/abs/2406.07929v1
20240612064637
A Generic Layer Pruning Method for Signal Modulation Recognition Deep Learning Models
[ "Yao Lu", "Yutao Zhu", "Yuqi Li", "Dongwei Xu", "Yun Lin", "Qi Xuan", "Xiaoniu Yang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Journal of Class Files, Vol. 14, No. 8, August 2021 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals A Generic Layer Pruning Method for Signal Modulation Recognition Deep Learning Models Yao Lu0000-0003-0655-7814, Yutao Zhu0009-0005-7154-1917, Yuqi Li0009-0002-7314-6863, Dongwei Xu0000-0003-2693-922X, Member, IEEE, Yun Lin0000-0003-1379-9301, Member, IEEE, Qi Xuan0000-0002-6320-7012, Senior Member, IEEE, Xiaoniu Yang0000-0003-3117-2211 This work was partially supported by the Key R&D Program of Zhejiang under Grant 2022C01018 and by the National Natural Science Foundation of China under Grant U21B2001 and Grant 61973273. (Corresponding author: Qi Xuan) Yao Lu, Yutao Zhu and Dongwei Xu are with the Institute of Cyberspace Security, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China, also with the Binjiang Institute of Artificial Intelligence, Zhejiang University of Technology, Hangzhou 310056, China (e-mail: yaolu.zjut@gmail.com, zhuyutao629@gmail.com, dongweixu@zjut.edu.cn). Yuqi Li is currently engaged as a research intern at the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China (e-mail: yuqili010602@gmail.com). Yun Lin is with the College of Information and Communication Engineering, Harbin Engineering University, Harbin, China (e-mail: linyun@hrbeu.edu.cn). Qi Xuan is with the Institute of Cyberspace Security, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China, also with the PCL Research Center of Networks and Communications, Peng Cheng Laboratory, Shenzhen 518000, China, and also with Utron Technology Company Ltd. (as Hangzhou Qianjiang Distinguished Expert), Hangzhou 310056, China (e-mail: xuanqi@zjut.edu.cn). Xiaoniu Yang is with the Science and Technology on Communication Information Security Control Laboratory, Jiaxing, China (e-mail: yxn2117@126.com). June 17, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT With the successful application of deep learning in communications systems, deep neural networks are becoming the preferred method for signal classification. Although these models yield impressive results, they often come with high computational complexity and large model sizes, which hinders their practical deployment in communication systems. To address this challenge, we propose a novel layer pruning method. Specifically, we decompose the model into several consecutive blocks, each containing consecutive layers with similar semantics. Then, we identify layers that need to be preserved within each block based on their contribution. Finally, we reassemble the pruned blocks and fine-tune the compact model. Extensive experiments on five datasets demonstrate the efficiency and effectiveness of our method over a variety of state-of-the-art baselines, including layer pruning and channel pruning methods. Automatic Modulation Recognition, Layer Pruning, Deep Learning, Edge Devices. § INTRODUCTION Automatic Modulation Recognition (AMR) is an important research branch in the field of communications systems. Classical methods usually extract important hand-crafted features such as frequency, phase, constellation diagrams, high-order moments, and time-frequency diagrams, followed by applying machine learning methods for feature classification <cit.>. However, classical methods require extensive expertise and may lack high classification accuracy, which significantly limits their application, especially in complex scenarios. Recent breakthroughs in deep learning have laid the foundation for developing high-performance models for AMR. For example, O'Shea et al. <cit.> design a 1D-CNN model suitable for short radio signal classification based on the principles of the VGG <cit.> architecture. O'Shea et al. <cit.> develop a narrow 2D-CNN for radio modulation recognition by simultaneously considering the in-phase (I) and quadrature (Q) signal sequences in the time domain. Recently, Chen et al. <cit.> introduce a sliding square operator called S2M, which automatically converts input signals into square feature matrices, facilitating the use of more complex models for signal classification. Although these models have achieved impressive results in signal classification tasks, they are often accompanied by high computational complexity and large model sizes. This results in slow inference speeds and makes them unsuitable for scenarios with limited computing resources, hindering their practical deployment in communication systems. To solve this problem, we need to compress the model while minimizing the loss in classification performance. Model pruning, as a model compression technique, provides a feasible solution for the above requirements. It can be roughly divided into weight pruning <cit.>, channel pruning <cit.> and layer pruning <cit.>. Given that weight pruning requires specialized hardware support and channel pruning is constrained by the depth of the original model, we concentrate on layer pruning. In this paper, we propose a new layer pruning method. Specifically, we first calculate the similarity between any two layers of the pre-trained model to obtain the similarity matrix, then sum it row by row. Next, we apply Fisher's optimal segmentation to partition the model along its depth into multiple blocks based on the resulting vector. To identify important layers within each block that contribute significantly to the model's performance, we maintain the other blocks unchanged and enumerate all possible layer combinations within the selected block. For each layer combination, to maintain structural integrity, we retain the pre-trained parameters of the selected layer while randomly initializing the parameters of all other layers. Then, we utilize a training-free performance estimation method named SynFlow <cit.> to assess the contribution. Then, based on their contributions, we identify layers that need to be preserved within each block. Finally, we reassemble the pruned blocks and fine-tune the shallow model. We conduct extensive experiments on five benchmarks, RML2016.10a <cit.>, RML2016.10a-high, Sig2019-12 <cit.>, Sig2019-12-high and RML2018.01a <cit.>. The results demonstrate the superior performance of our method over existing channel pruning methods such as RFP <cit.>, FPGM <cit.>, L1-norm <cit.>, SFP <cit.> and BNP <cit.>, as well as layer pruning methods random layer pruning, LCP-based pruning methods <cit.> and SR-init <cit.>. To summarize, our main contributions are three-fold: ∙ We introduce a novel layer pruning method to slim deep learning models for AMR. Specifically, we first calculate the similarity matrix between layers, and then divide the pre-trained model into blocks based on the obtained similarity matrix by minimizing intra-block differences while maximizing inter-block differences. ∙ Subsequently, we identify layers that need to be preserved within each block based on their contribution. Finally, we reassemble the pruned blocks and fine-tune the compact model. To avoid the tedious training-validation process, we utilize a training-free performance estimation method named SynFlow to assess the contribution. ∙ Extensive experiments demonstrate the efficiency and effectiveness of our method over a variety of state-of-the-art baselines on five datasets. In the remainder of this paper, we first introduce related works on representational similarity and model pruning in <ref>. Preliminaries are presented in <ref>. Our method is detailed in <ref>, and all relevant experiments are discussed in <ref>. Finally, the paper concludes in <ref>. § RELATED WORKS Representational similarity aims to calculate the similarity or difference between two internal representations of deep neural networks. By analyzing these representations, researchers can gain insights into how neural networks process and transform input data through their multiple layers. For example, Hardoon et al. <cit.> propose canonical correlation analysis (CCA) to learn a semantic representation to web images and their associated text. However, CCA is sensitive to perturbation when the condition number of X or Y is large. To this end, Raghu et al. <cit.> and Morcos et al. <cit.> propose Singular Vector CCA and Projection Weighted CCA to reduce the sensitivity of CCA to perturbation, respectively. Later, Kornblith et al. <cit.> point out that these metrics can not measure meaningful similarities between representations of higher dimensions than the number of data points and propose a similarity index, namely centered kernel alignment (CKA), that does not suffer from this limitation. Recently, Chen et al. <cit.> revisit the CKA and find that it implicitly constructs graphs to obtain similarity. Inspired by this, they introduce graph-based similarity, which explicitly constructs graphs based on the feature relationship between input samples to measure the similarity of deep learning models. Model pruning has been widely acknowledged as a model compression technique to achieve acceleration on various platforms. It can be roughly divided into weight pruning <cit.>, channel pruning <cit.> and layer pruning <cit.> based on the level of fine-grainedness. Specifically, weight pruning sets particular weights in a deep learning model to zero. Despite its high compression ratio, this method has limitations and is applicable only to specialized software <cit.> or hardware <cit.> devices. In contrast, channel pruning achieves speedup by discarding some filters from each layer of the model. Since filter pruning operates at the filter level, its capacity to prune parameters is constrained by the original model’s depth. Layer pruning, however, deletes entire layers, which is not constrained by this. Consequently, this paper primarily focuses on layer pruning. The core of layer pruning lies in the selection of layers for removal, targeting those that contribute minimally to performance, without notably sacrificing overall performance. To this end, previous studies have developed various metrics to assess the importance of each layer. For example, Elkerdawy et al. <cit.> use existing filter criteria to calculate a per-layer importance score in one-shot, subsequently pruning the least important layers and fine-tuning the shallower model. Chen et al. <cit.> train Linear Classifier Probes (LCPs) to analyze the specific role of each convolutional layer in enhancing performance. Then they prune layers that contribute minimally to performance enhancement and subsequently retrain the pruned model with a knowledge distillation technique. However, the costs of training LCPs are quite high, especially for large datasets like ImageNet <cit.>. To address this, Lu et al. <cit.> introduce modularity as a metric that requires no training for quantifying the class separation of intermediate representations. They then prune layers exhibiting negative or zero modularity growth. Elkerdawy et al. <cit.> construct proxy classifiers for each layer in a single pass on training dataset using imprinting, and then prune layers that exhibit the smallest accuracy difference compared to their preceding layer. Unlike previous studies that prune layers based on their importance, we assess the redundancy of layers by examining the similarity between them. Aside from layer pruning, some effective layer compression methods also exist, which offer additional strategies for reducing model complexity. For example, Dror et al. <cit.> propose Layer Folding to identify which activations can be removed with minimal impact on accuracy. This enables the merging of adjacent linear layers, thereby transforming deep models into shallower ones. Wu et al. <cit.> introduce De-Conv and Rem-ReLU to decouple convolutional layers and non-linear activation layers, making the model mergeable. Leveraging the property of linear mergeability of convolutional layers, they then losslessly merge the decoupled layers, achieving efficient layer compression. These methods are orthogonal to ours, allowing them to complement each other. § PRELIMINARIES §.§ Mathematical Expression of Layer Pruning Before delving into our method, we first establish a formal mathematical formulation for layer pruning. Suppose we have a pre-trained model ℳ=L_1∘ L_2⋯∘ L_l designed for task T, where L_i denotes the i-th layer of ℳ and ∘ denotes function composition. Considering computational constraints, we need to prune some layers in ℳ while maximizing the performance of the pruned model ℳ^*. We can formulate it as an optimization problem: ℳ^* =max _ℳ_p P_T(ℳ_p), s.t. ℳ_p=L_i ∘ L_j⋯∘ L_k, cost(ℳ_p) ≤ C. Since ℳ_p is composed of the remaining layers of ℳ, the total number of layers of ℳ_p should be less than that of ℳ, and 1 ≤ i < j < ⋯ < k ≤ l. For two consecutive layers with dimension mismatch in ℳ_p, we adjust the feature size of the subsequent layer to align with that of the preceding layer. Besides, P_T(ℳ_p) denotes the performance of ℳ_p on task T, and cost(ℳ_p) ≤ C ensures that the computational overhead of ℳ_p can not surpass C. § METHOD In this section, we delve into the proposed layer pruning method. We first calculate the similarity matrix of layers, and then partition the pre-trained model into blocks by minimizing intra-block differences while maximizing inter-block differences. Subsequently, we identify layers that need to be preserved within each block and prune the remainder. Finally, we reassemble the factorized blocks and fine-tune the compact model. §.§ Model Partition A model partition refers to dividing a model into distinct sub-nets. In this study, we partition the model ℳ along its depth into k blocks ℬ = {B_1, ⋯, B_k} so that each block is a stack of some consecutive layers. In that case, what is the basis for division? In this paper, CKA <cit.>, a technique for measuring representation similarity, serves as our basis for division. It is worth noting that CKA is not the only applicable similarity metric. In <ref>, we demonstrate that our method supports a variety of similarity metrics. Through analyzing the feature representation produced by each layer, we can identify layers that produce similar outputs and group them into the same block. Specially, given a batch of examples X_b, the feature representations of L_i and L_j can be formulated as F_i = L_1∘ L_2⋯∘ L_i(X_b) F_j = L_1∘ L_2⋯∘ L_j(X_b). Here, F_i ∈ℝ ^b× c_i × h_i × w_i and F_j ∈ℝ ^b× c_j × h_j × w_j, where b denotes the number of samples, while c_i, h_i and w_i represent the number of channels, height and width of layer L_i, respectively. To simplify the calculations, we apply a flatten operation to F_i and F_j, resulting in F̂_̂î and F̂_̂ĵ. F̂ = flatten(F), where the flatten(·) operation transforms the tensor from ℝ ^b× c × h × w to ℝ ^b× (c × h × w), preserving the number of samples b and combining the dimensions c, h, and w into a single dimension. Subsequently, we calculate the gram matrices K = F̂_i F̂_i^𝖳 and L = F̂_j F̂_j^𝖳 based on F̂_i and F̂_j. Next, we use Hilbert-Schmidt Independence Criterion <cit.> (HSIC) to calculate the statistical independence between F̂_i and F̂_j, HSIC_0(F̂_i, F̂_j)=1/(b-1)^2tr(F̂_i H F̂_j H) where H=I_b-1/b1 1^T is the centering matrix. However, HSIC_0 exhibits an O(1/b) bias <cit.>, making it dependent on batch size b. To mitigate the influence of batch size, we replace HSIC_0 with an unbiased one <cit.>: HSIC_1(K, L) = 1/b(b-3) [tr(K̃L̃)+1^⊤K̃1 1^⊤L̃1/(b-1)(b-2)-2/b-21^⊤K̃L̃1], where K̃=K(1 1^𝖳-I_b) and L̃=L(1 1^𝖳-I_b). It is worth noting that HSIC is not invariant to isotropic scaling, but it can be made invariant through normalization. Therefore, CKA can be calculated as follows: CKA(K, L) = HSIC_1(K, L)/√(HSIC_1(K, K)HSIC_1(L, L)). Next, we compute the similarity between any two layers using <ref> iteratively, ultimately yielding a similarity matrix 𝒮. Then we sum the matrix 𝒮 row by row: z_i = ∑_j=1^l 𝒮_ij The resulting vector 𝒵={z_1, ⋯, z_l} contains the aggregated similarity of each layer, representing the degree of similarity between each layer and all other layers. Having obtained 𝒵, we utilize Fisher's optimal segmentation, a method for segmenting ordered sequences with the goal of minimizing the differences within segments and maximizing the differences between segments, to partition the model ℳ along its depth into k blocks. The overall process of model partition is illustrated in <ref>. §.§ Layer Selection and Model Reassembly As we have partitioned the model into k blocks, our next goal is to identify the layers that should be retained within each block. Suppose ℳ'=(L_1∘⋯ L_i) ∘ (L_i+1∘⋯ L_j) ∘ (L_j+1∘⋯ L_l) is a segmented pre-trained model and layers in the same bracket are in the same block. Our goal is to identify and retain important layers within each block that contribute significantly to the performance of the model. To achieve this goal, a direct method is keeping the other blocks unchanged, enumerating all possible layer combinations within the selected block, and selecting the optimal combination. Specifically, for block B_i ∈ℬ in ℳ', we maintain a set 𝒞_i that includes all possible layer combinations within this block, except the empty set. If a block contains m layers, the set 𝒞_i would have ∑_i=1^m C_m^i elements, where C_m^i denotes the total number of combinations obtained by randomly sampling i layers from the block without replacement. Then we evaluate the contribution of each combination in 𝒞_i to the model's overall performance and pick the best combination. However, such an evaluation faces two major limitations: (1) Verifying the contribution of selected layers to the model requires reconstructing the corresponding blocks of the original model so that only these selected layers are included in the blocks. For example, suppose L_1 and L_2 are the selected layers in block B_1, then the model should be adjusted to (L_1∘ L_2) ∘ (L_i+1∘⋯ L_j) ∘ (L_j+1∘⋯ L_l). For each combination of layers, we need to reconstruct the block where those layers are located, which is very laborious. (2) Besides, evaluating the contribution of each combination through training and validation processes is also impractical, especially for models with a large number of layers. To address these issues, we propose the following solutions: For issue 1, inspired by Tang et al. <cit.>, we retain the pre-trained parameters of the selected layer while randomly initializing the parameters of all other layers to simulate evaluating the importance of the selected layer. This process can be formalized as: ℳ'=(L_1∘ L_2∘⋯L̂_̂î) ∘ (L̂_̂î+̂1̂∘⋯L̂_̂ĵ) ∘ (L̂_̂ĵ+̂1̂∘⋯L̂_̂l̂), where L̂_̂î denotes that L_i is initialized using kaiming initialization <cit.>. Such operation avoids the need to make different adjustments to the model when selecting different layers. As for issue 2, we substitute the expensive training and validation processes with a training-free performance estimation method SynFlow <cit.>, significantly reducing the cost of verifying the contribution of each combination. Having addressed the aforementioned challenges, we elaborate on the process of layer selection: For each block in ℳ', we maintain a set 𝒞_i that includes all possible layer combinations within this block, except the empty set. We first initialize an empty set ℳ_o to store the optimal layer combination of the segmented pre-trained model ℳ'. Besides, we initialize the best accuracy Best Acc_i=0. Then for each layer combination 𝒞_i^j in 𝒞_i, we retain the pre-trained parameters of the selected layer while randomly initializing the parameters of all other layers. Next, we use the training-free performance estimation method SynFlow to evaluate the accuracy Acc_i^j of this model. Then for each block B_i, we keep the optimal layer combination B_i^o with the best performance and put it into ℳ_o. Finally, we repeat the above operation for each block to obtain the optimal layer combination. The overall process is illustrated in <ref>. Having identified the important layers within each block, we proceed to integrate these layers into a compact model. Since selection destroys the original layer structure, dimensional mismatch may occur during the model reassembly. If this happens, we adjust the subsequent layer to align with the dimension of the previous layer. Finally, we fine-tune the compact model. § EXPERIMENTS In this section, we verify the effectiveness of the proposed method on various datasets. §.§ Dataset To evaluate the effectiveness of our method, we conduct experiments on five signal modulation classification datasets RML2016.10a <cit.>, RML2016.10a-high, Sig2019-12 <cit.>, Sig2019-12-high and RML2018.01a <cit.>. RML2016.10a is a synthetic dataset generated using GNU Radio <cit.>, containing 11 modulation types and a total of 220,000 signals (1000 samples per modulation type of each signal-to-noise ratio (SNR)). Specifically, BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK and PAM4 for digital modulations, and WB-FM, AM-SSB, and AM-DSB for analog modulations. The SNRs span from -20dB to 18dB in 2dB intervals, with each signal having a length of 128. We partition the dataset into a training set, validation set, and test set in a ratio of 6:2:2. RML2016.10a-high is a subset of RML2016.10a, consisting of samples with SNRs ranging from 10dB to 18dB. Compared to RML2016.10a, RML2016.10a-high is easier to classify <cit.>. Sig2019-12 contains longer signals simulated by Chen et al. <cit.>. It consists of 12 modulation types, including BPSK, QPSK, 8PSK, OQPSK, 2FSK, 4FSK, 8FSK, 16QAM, 32QAM, 64QAM, 4PAM, and 8PAM. The SNRs span from -20dB to 30dB in 2dB intervals, with each signal having a length of 512. The total size of Sig2019-12 is 468000 (1500 samples per modulation type of each SNR). We also partition the dataset into a training set, validation set, and test set in a ratio of 6:2:2. Sig2019-12-high is a subset of Sig2019-12, consisting of samples with SNRs ranging from 10dB to 30dB. This subset is easier to classify compared to the full SNR range <cit.>. RML2018.01a-high RML2018.01a simulates a wireless channel and is collected from a laboratory environment. It consists of 24 modulation types, including OOK, 4ASK, 8ASK, BPSK, QPSK, 8PSK, 16PSK, 32PSK, 16APSK, 32APSK, 64APSK, 128APSK, 16QAM, 32QAM, 64QAM, 128QAM, 256QAM, AM-SSB-WC, AM-SSB-SC, AM-DSB-WC, AM-DSB-SC, FM, GMSK, OQPSK. The SNRs span from -20dB to 30dB in 2dB intervals, with each signal having a length of 1024. The total size of the dataset is 2555904. Due to its large size, we use 10dB to 30dB SNR samples for experiments (1056000 samples in total). We partition the dataset into training and test sets in a ratio of 8:2. §.§ Implementation details To verify the effectiveness of our proposed method, we use typical convolutional neural networks designed for the AMR task (ResNet56 <cit.>, ResNet110 <cit.> and VGG16 <cit.>) as models for pruning and compare our proposed method with channel pruning methods RFP <cit.>, FPGM <cit.>, L1-norm <cit.>, SFP <cit.> and BNP <cit.>, as well as layer pruning methods random layer pruning, LCP-based pruning methods <cit.> and SR-init <cit.>. We briefly summarize channel pruning baselines as follows: ∙ RFP <cit.> selects important filters based on the information entropy of feature maps. ∙ FPGM <cit.> prunes redundant filters based on the geometric median <cit.> of the filters. ∙ L1-norm <cit.> measures the relative importance of a filter in each layer by calculating its ℓ _1-norm. ∙ SFP <cit.> dynamically prunes the filters using ℓ _2-norm in a soft manner. ∙ BNP <cit.> measures the channel importance using the γ scale factor of the batch normalization layer. Then we introduce layer pruning methods: ∙ Random layer pruning serves as a baseline to verify the effectiveness of layer pruning. ∙ LCP-based pruning methods <cit.> use linear classifier probes to delete unimportant layers and adapt different knowledge distillation techniques to recover the performance. ∙ SR-init <cit.> prunes layers that are not sensitive to stochastic re-initialization. Since these methods do not conduct experiments on signal modulation classification datasets, we reproduce these methods ourselves. For a fair comparison, we set the same hyperparameter settings to fine-tune pruned models. Specifically, we set the initial learning rate, batch size and epoch to 0.001, 128 and 50, respectively. The learning rate is decayed by a factor of 0.8 every 10 epochs. As for our method, we sample 500 instances to compute the representational similarity and the resultant value is determined by averaging across 5 batches. Unless otherwise specified, we set k=3. §.§ Results and Analysis Results on high SNR datasets. First, we conduct experiments on high SNR datasets, such as RML2016.10a-high, Sig2019-12-high and RML2018.01a-high. Compared to full SNR datasets, high SNR datasets are easier to classify <cit.>. In this study, we focus on pruning ResNet56, ResNet110 and VGG16 under different pruning rates (25%, 50%, 75% and maximum pruning rate). For VGG16, we validate only the maximum pruning rate to maintain structural integrity due to its layer limitations. Specifically, we set k=3 for ResNet56 and VGG16, and k=7 for ResNet110. The experimental results are presented in <ref>. As for RML2016.10a-high and Sig2019-12-high, we find that even after pruning 75% or more of the layers, the accuracy of ResNet56 and ResNet110 are barely affected and may even improve. For instance, pruning 75% of layers in ResNet110 can achieve an improvement of 0.59% on RML2016.10a-high and 0.41% on Sig2019-12-high. This improvement indicates that these models do have redundancy, highlighting the need for pruning. As for the more complex RML2018.01a-high dataset, pruning 25%, 50% and 75% of layers in ResNet110 can achieve an improvement of 3.66%, 2.73% and 1.45%, respectively. These improvements can be attributed to the high complexity of the model relative to the dataset, which leads to overfitting <cit.>. In contrast, ResNet56 experiences an obvious decrease in accuracy at high pruning rates (1.60% under the 50% pruning rate, 3.06% under the 75% pruning rate, 5.63% under the 89% pruning rate). For VGG16, to maintain the integrity of the model structure, we ensure that at least one convolutional layer remains in each structural layer. This results in a lower pruning rate for VGG16. After reducing 33-55% of the layers, the accuracy drop of VGG16 on the three datasets is still affordable (1.05% on RML2016.10a-high, 0.12% on Sig2019-12-high, and 1.07% on RML2018.01a-high). In general, the abovementioned results indicate that our method can effectively eliminate redundant layers to simplify the model while maintaining the accuracy. Results on full SNR datasets. Compared to high SNR datasets, full SNR datasets are more complex. Therefore, we conduct experiments on full SNR datasets, such as RML2016.10a and Sig2019-12. Similarly, we prune ResNet56, ResNet110 and VGG16 under different pruning rates (25%, 50%, 75% and maximum pruning rate). As shown in <ref>, we find that the experimental results are almost the same as before, which demonstrates that when confronted with more complex datasets, our method is capable of pruning the model effectively while ensuring that the accuracy is maintained (-0.24% under 87% pruning rate on RML2016.10a and 0.30% under 87% pruning rate on Sig2019-12, etc.). Comparison to channel pruning methods. In the previous paragraphs, we have demonstrated the effectiveness of our method on both high SNR and full SNR datasets. Here, we compare our method with channel pruning methods. Since some baselines do not include experiments with VGG16 pruning, we only use ResNet56 and ResNet110 for comparison. For ease of comparison, we uniformly use the highest pruning rate achievable by our method for the experiments. Except for ResNet56 on RML2016.10a, which uses k=4, all other models use k=3 by default. We compare five baselines: RFP <cit.>, FPGM <cit.>, L1-norm <cit.>, SFP <cit.> and BNP <cit.>. The experimental results are shown in <ref>. Specifically, our method results in a performance drop of only 0.96% on RML2016.10a when pruning 87% of the layers in ResNet56, while the best method (BNP) that prunes 87% of the channels experiences a performance drop of 1.24%. Besides, parameters and FLOPs reductions are also greater than those achieved by BNP. It is worth noting that when the compression rate is high, the model's performance drops sharply with some pruning methods. For instance, pruning ResNet56 on Sig2019-12 and RML2018.01a-high with SFP results in a performance drop of 49.00% and 59.20%, respectively. The same phenomenon occurs on pruning ResNet110 with RFP and SFP. We believe this is because some channel pruning methods cause the model structure to collapse by pruning too many channels at a high compression rate, which leads to a substantial decrease in the accuracy of the pruned model. In contrast, our layer pruning method does not have this phenomenon. Overall, our method outperforms these channel pruning baselines, demonstrating the advantages of our approach. Comparison to layer pruning methods. In the previous paragraph, we demonstrate the superiority of our method compared to existing channel pruning methods. Here, we make a comprehensive comparison with some layer pruning methods such as random layer pruning, LCP-based pruning methods <cit.> and SR-init <cit.>. Since LCP-based pruning methods utilize different knowledge distillation techniques to recover performance, our method does not. Therefore, to ensure a fair comparison, we do not use the knowledge distillation strategy for LCP-based pruning methods. Similarly, we use the highest pruning rate achievable by our method for the experiments. Except for ResNet56 on RML2016.10a, which uses k=4, all other models use k=3 by default. The pruning rates of FLOPs and parameters are nearly the same under the same pruning rate. Therefore, accuracy can be a real reflection of the effectiveness of layer pruning methods. As shown in <ref>, our method outperforms all layer pruning baselines on Sig2019-12 and RML2018.01a-high. When pruning ResNet110 on RML2016.10a, our method achieves negligible lower accuracy than SR-init (-0.93% vs. -0.97%). While our method achieves the least performance degradation when pruning on the other two datasets (0.23% vs. -0.18% for SR-init on Sig2019-12, -2.94% vs. -4.20% for SR-init on RML2018.01a-high). In general, these experimental results suggest the effectiveness of our method. §.§ Additional Experiments and Analyses Comparing with brute force search. In this paper, we first divide the pre-trained model into multiple blocks based on representation similarity, and then search for the optimal layer combination block by block. A naive way is to use brute force search to obtain the optimal combination. Therefore, we compare our strategy with brute force search. For ease of understanding, let's take a 20-layer model as an example. The search space of brute force search is ∑_i=1^20 C_20^i=2^20 - 1=1,048,575. However, suppose this 20-layer model is divided into three blocks with 5, 7, and 8 layers respectively, the search space is drastically reduced to ∑_i=1^5+∑_i=1^7+∑_i=1^8=413, which demonstrates the necessity of model partition. Visualizations of selected layers. Here, we show the pruned model (VGG16 on RML2016.10a-high) obtained using our layer pruning method. As shown on the left side of <ref>, the bars with transparent color represent the layers considered to have less contribution and need to be pruned. Fine-tuning vs. training from scratch. In this paper, we fine-tune the compact model by retaining some parameters from the original pre-trained model, instead of randomly initializing all parameters (i.e., training from scratch). Therefore, in order to verify the superiority of fine-tuning, we conduct experiments on these two methods respectively. Specifically, we plot the test accuracy curves of the two methods using ResNet56 on RML2018.01a-high, as shown in the right side of <ref>. We observe that compared to training from scratch, fine-tuning can achieve better performance, which justifies the use of fine-tuning in our method. §.§ Ablation Study In this subsection, we present detailed ablation experiments on the number of blocks k and the similarity metrics. Ablation experiments on the similarity metrics. In this paper, we uniformly use CKA to calculate the similarity between layers. However, CKA is not the only available similarity metric. To demonstrate that our method supports multiple similarity metrics, we use cosine similarity instead. Specifically, we conduct experiments on RML2016.10a-high using ResNet56. The results are presented in <ref>. Although different similarity metrics are used, the two methods retain the same layers due to the high pruning rate of the model, and they ultimately achieve similar accuracy. This experiment shows that our method is a general pipeline supporting multiple similarity metrics. Ablation experiments on number of blocks k. In this paper, k is a tunable parameter. To verify the impact of k, we conduct experiments with different k ∈{3, 4, 5, 6, 7} using ResNet110 on RML2016.10a. As shown in <ref>, changing the value of k will lead to corresponding changes in the retained layers, which slightly affects the actual pruning rate of FLOPs and parameters, as well as the performance of the pruned model. Overall, our method is robust to k. § CONCLUSION In this paper, we propose a novel layer pruning method. Specifically, we first calculate the similarity matrix between layers, and then divide the pre-trained model into blocks. Subsequently, we identify layers that need to be preserved within each block based on their contribution. Finally, we reassemble the pruned blocks and fine-tune the compact model. Extensive experiments RML2016.10a, RML2016.10a-high, Sig2019-12, Sig2019-12-high and RML2018.01a demonstrate the efficiency and effectiveness of our method over a variety of state-of-the-art baselines, including channel pruning methods (RFP, FPGM, L1-norm, SFP and BNP) as well as layer pruning methods (random layer pruning, LCP-based pruning methods and SR-init). § ACKNOWLEDGMENTS This work was partially supported by the Key R&D Program of Zhejiang under Grant 2022C01018 and by the National Natural Science Foundation of China under Grant U21B2001. IEEEtran [ < g r a p h i c s > ]Yao Lu received his B.S. degree from Zhejiang University of Technology and is currently pursuing a Ph.D. in control science and engineering at Zhejiang University of Technology. He has published several academic papers in international conferences and journals, including ECCV and TNNLS. His research interests include deep learning and computer vision, with a focus on explainable artificial intelligence and model compression. [ < g r a p h i c s > ]Yutao Zhu received his bachelor's degree from the School of Electrical and Electronic Engineering at Wenzhou University in 2023. He is currently a master's student at the School of Information Engineering at Zhejiang University of Technology. His research interests include signal processing and lightweight neural networks. [ < g r a p h i c s > ]Yuqi Li received his bachelor's degree from Southwest University and is currently engaged as a research intern at the Institute of Computing Technology, Chinese Academy of Sciences. His research interests include model compression and medical imaging. [ < g r a p h i c s > ]Dongwei Xu (Member, IEEE) received the B.E. and Ph.D. degrees from the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China, in 2008 and 2014, respectively. He is currently an Associate Professor with the Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou, China. His research interests include intelligent transportation Control, management, and traffic safety engineering. [ < g r a p h i c s > ]Yun Lin (Member, IEEE) received the B.S. degree in electrical engineering from Dalian Maritime University, Dalian, China, in 2003, the M.S. degree in communication and information system from the Harbin Institute of Technology, Harbin, China, in 2005, and the Ph.D. degree in communication and information system from Harbin Engineering University, Harbin, in 2010. From 2014 to 2015, he was a Research Scholar with Wright State University, Dayton, OH, USA. He is currently a Full Professor with the College of Information and Communication Engineering, Harbin Engineering University. He has authored or coauthored more than 200 international peer-reviewed journal/conference papers, such as IEEE Transactions on Industrial Informatics, IEEE Transactions on Communications, IEEE Internet of Things Journal, IEEE Transactions on Vehicular Technology, IEEE Transactions on Cognitive Communications and Networking, TR, INFOCOM, GLOBECOM, ICC, VTC, and ICNC. His current research interests include machine learning and data analytics over wireless networks, signal processing and analysis, cognitive radio and software-defined radio, artificial intelligence, and pattern recognition. [ < g r a p h i c s > ]Qi Xuan (Senior Member, IEEE) received the B.S. and Ph.D. degrees in control theory and engineering from Zhejiang University, Hangzhou, China, in 2003 and 2008, respectively. He was a Postdoctoral Researcher with the Department of Information Science and Electronic Engineering, Zhejiang University from 2008 to 2010, and a Research Assistant with the Department of Electronic Engineering, City University of Hong Kong, Hong Kong, in 2010 and 2017, respectively. From 2012 to 2014, he was a Postdoctoral Fellow with the Department of Computer Science, University of California at Davis, Davis, CA, USA. He is currently a Professor with the Institute of Cyberspace Security, College of Information Engineering, Zhejiang University of Technology, Hangzhou, and also with the PCL Research Center of Networks and Communications, Peng Cheng Laboratory, Shenzhen, China. He is also with Utron Technology Company Ltd., Xi’an, China, as a Hangzhou Qianjiang Distinguished Expert. His current research interests include network science, graph data mining, cyberspace security, machine learning, and computer vision. [ < g r a p h i c s > ]Xiaoniu Yang is currently a Chief Scientist with the Science and Technology on Communication Information Security Control Laboratory, Jiaxing, China. He published the first software radio book in China [X. Yang, C. Lou, and J. Xu, Software Radio Principles and Applications, Publishing House of Electronics Industry, 2001 (in Chinese)]. His current research interests are software-defined satellite, big data for radio signals, and deep-learning-based signal processing. He is also an Academician of the Chinese Academy of Engineering and a Fellow of the Chinese Institute of Electronics.
http://arxiv.org/abs/2406.08565v1
20240612180737
A New Elementary Proof of Landau's Prime Ideal Theorem, and Associated Results
[ "Alex Burgin" ]
math.NT
[ "math.NT" ]
§ ABSTRACT We give a new elementary proof of Landau's Prime Ideal Theorem. The proof is an extension of Richter's proof of the Prime Number Theorem. The main result contains other results related to the equidistribution of the prime ideal counting function. Anomalous Enhancement of the Electrocatalytic Hydrogen Evolution Reaction in AuPt Nanoclusters Bo Peng June 17, 2024 ============================================================================================== § INTRODUCTION We give a new elementary proof of Landau's Prime Ideal Theorem <cit.>, which is in the spirit of Richter's recent elementary proof of the Prime Number Theorem <cit.>. That argument is phrased in the language of `orthogonality' of arithmetic functions. Let K be a number field. Unique factorization of elements in K can fail, but holds in the ring of Integral Ideals I_K. We can then speak of prime ideals 𝔭∈ I_K. For a general ideal 𝔪, let Ω(𝔪) denote the number of prime ideals dividing 𝔪, counting multiplicity. By unique factorization of I_K, such a function is well-defined. Elements 𝔪∈ I_K also have a norm N(𝔪). Our Main Theorem is this. Let K be a number field. Then there exists a positive integer N such that, for any g:ℕ_0→ℂ satisfying |g (n)|≤ 1 for all n, one has ∑_𝔪 N(𝔪)≤ xg(Ω(𝔪)+k_1)=∑_𝔪 N(𝔪)≤ xg(Ω(𝔪)+k_2)+o_K(∑_𝔪 N(𝔪)≤ x1) for each k_1,k_2≥ N. This contains several results. For Landau's Theorem, take g (x) = (-1)^x, k_1 even, and k_2 odd. We see that asymptotically, the number of ideals 𝔪 for which Ω (𝔪) is even is equal to those for for which Ω(𝔪) is odd. That is an equivalent form of Landau's Theorem, see Lemma <ref> and Appendix C. Let e(y) = e^2π i y for real y. For integer q >2, set g (y)=e(ay/q), for each 1≤ a < q. It then follows that by choosing k_1≥ N and k_2=k_1+1 that (1-e(a/q))e(ak_1/q)∑_N(𝔪)≤ xe(Ω(𝔪)/q) =o(∑_N(𝔪)≤ x1), and hence ∑_N(𝔪)≤ xe(Ω(𝔪)/q)=o(∑_N(𝔪)≤ x1) for each 1≤ a<q. This extends a Theorem of Pillai <cit.> and Selberg <cit.>, establishing the equidistribution of (Ω(𝔪))_N(𝔪)→∞ modulo q for each q≥ 1. See Appendix B for a concise explanation. Take g(x)=e(α x) for α∈ℝ∖ℚ. With choices of k_1 and k_2 depending upon α, we can verify that ∑_N(𝔪)≤ xe(αΩ(𝔪)) = o(∑_N(𝔪)≤ x1). This is a theorem of Erdős <cit.> and Delange <cit.>, generalized to the number field setting. Namely, that the fractional parts ({Ω(𝔪) α} N(𝔪)≤ x ) are asymptotically uniformly distributed in [0,1) for any irrational α. §.§ Notation, Conventions, and Standard Definitions We use 𝒪_K to denote the ring of integers of the number field K. Within 𝒪_K, we let I_K denote the set of integral ideals, and denote ideals in I_K with 𝔪,𝔫, and letters in similar font. We use 𝔭 or 𝔮 to denote a prime ideal, and define N_K/ℚ:I_K→ℤ to be the ideal norm taking an ideal 𝔪 to its norm N_K/ℚ(𝔪). Usually, we will simply drop the subscript and just write N(𝔪). I_K is naturally endowed with a ring structure via ideal addition and multiplication, and N:I_K→ℤ is a homomorphism from (I_K,·)→ (ℤ,·). Given some m∈ℕ, we say an integer sequence (a_n)_n≥ 0 is equidistributed modulo m if lim_x→∞1/x∑_0≤ n<x a_n≡ c (mod m)1=1/m for all c∈ℤ/mℤ. Equidistribution of an integer sequence is closely related to the behavior of associated exponential sums; for more explanation, see Appendix B. When we write ∑_N(𝔪)≤ x, we mean that we are sum over all ideals 𝔪 with norm no greater than x. For a finite set of ideals S⊂ I_K and a function f: I_K→ℂ, we use _𝔰∈ Sf(𝔰) to denote the standard averaging operator _𝔰∈ Sf(𝔰):=1/|S|∑_𝔰∈ Sf(𝔰) and _𝔰∈ S^logf(𝔰) to denote the logarithmic (weighted) averaging operator _𝔰∈ S^logf(𝔰):=1/∑_𝔰∈ S1/N(𝔰)∑_𝔰∈ Sf(𝔰)/N(𝔰). These are analogues of the operators in <cit.>. We use the standard big-Oh notation f(x)=O(g(x)) if |f(x)|≤ C|g(x)| for a constant C; similarly, f(x)=O_α,β,...(g(x)) reads that |f(x)|≤ C|g(x)| for some C=C(α,β,...). We use f(x)=o(g(x)) if the quotient |f(x)/g(x)|→ 0 as x→∞. §.§ Background The distribution of the prime-omega function over number fields K is intimately tied to the distribution of prime numbers, echoing the classical case K=ℚ. Recall the following Lemma: Suppose that ∑_N(𝔪)≤ x(-1)^Ω(𝔪)=o(x), i.e., that Ω is equidistributed modulo 2. Then π_K(x):=∑_N(𝔭)≤ x1∼x/log x. An elementary proof of this Lemma (which is actually an equivalence, but the proof of that will not be needed) follows from proving that the associated function M(x):=∑_N(𝔪)<xμ(𝔪)=o(x), where μ(𝔪)= 0 if there exists a nonunit ideal 𝔭 such that 𝔭^2|𝔪 (-1)^Ω(𝔪) otherwise. This function is analogous to the so-called Mertens function over the case K=ℚ, and showing sublinear growth for M(x) implies that π_K(x)∼x/log x. This is a well known implication for the case of number fields, following from tight ideal density bounds (see 1.3 and <cit.>). We will leave this implication unproven, but provide an elementary proof in Appendix C that L(x)=o(x) M(x)=o(x). Then, with Theorem 1, this recovers Landau's prime ideal theorem. §.§ Density Constants A well-known and elementary[To see an explanation as to why this is true, see <cit.>.] fact (in that it requires no complex analysis) is that for any number field K of degree d, one has ∑_N(𝔪)≤ x1=c_Kx+O(x^1-1/d) for a constant c_K> 0. This constant c_K is often referred to as the `ideal density' of 𝒪_K (see, e.g., <cit.>, 8.4). The power saving-nature of the error term is important, but not essential–we note that these results can be extended to derive PNTs and equidistribution results for many multiplicative semigroups (or `Beurling number systems') with suitably-quickly decaying error term in density. § BUILDING TOWARDS THEOREM 1 The following result is fundamental to our paper: Let K be a number field of degree d, and let c_K be the ideal density of 𝒪_K. Then for any S⊂ I_K finite and nonempty, 1/x∑_N(𝔪)≤ x|∑_𝔫∈ S1_𝔫|𝔪-∑_𝔫∈ S1/N(𝔫)|^2=c_K∑_𝔫_1,𝔫_2∈ SΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+O(|S|^1+1/d· x^-1/d) where Φ(𝔫_1,𝔫_2)=N(gcd(𝔫_1,𝔫_2))-1. Let A denote the sum ∑_𝔫∈ S1/N(𝔫); then by expanding the square on the left hand side of (<ref>) one has |∑_𝔫∈ S1_𝔫|𝔪-A|^2 =(∑_𝔫∈ S1_𝔫|𝔪)^2-2A∑_𝔫∈ S1_𝔫|𝔪+A^2 , and summing this over all 𝔪 with N(𝔪)≤ x and dividing by x, one has 1/x∑_N(𝔪)≤ x|∑_𝔫∈ S 1_𝔫|𝔪-A|^2 S_1-2AS_2+S_3, where S_1 =1/x∑_N(𝔪)≤ x(∑_𝔫∈ S1_𝔫|𝔪)^2 , S_2 =1/x∑_N(𝔪)≤ x∑_𝔫∈ S1_𝔫|𝔪, S_3 = A^2/x∑_N(𝔪)≤ x1. Calculation of S_3. Using that ∑_N(𝔪)≤ x1=c_Kx+O(x^1-1/d) we see that S_3=c_KA^2+O(A^2x^-1/d). Then, using that A=O(log |S|) (see Appendix A for a quick proof), we have S_3=c_KA^2+O(log^2(|S|)x^-1/d). Calculation of S_2. For any fixed 𝔫, if 𝔫|𝔪 then one can write 𝔪=𝔫𝔪' in a unique way, and hence 1/x∑_𝔪 N(𝔪)≤ x1_𝔫|𝔪 =1/x∑_𝔪 N(𝔫𝔪')≤ x1 = 1/x∑_𝔪 N(𝔪')≤ x/N(𝔫)1 =c_K/N(𝔫)+O(x^-1/dN(𝐧)^1/d-1) so that one has, after summing over 𝔫∈ S, S_2=∑_𝔫∈ Sc_K/N(𝔫)+O(x^-1/d∑_𝔫∈ S1/N(𝔫)^1-1/d)=c_KA+O(x^-1/d∑_𝔫∈ S1/N(𝔫)^1-1/d). Then since ∑_𝔫∈ S1/N(𝔫)^1-1/d≪ |S|^1/d (for a proof of (<ref>), see Appendix A), we then have S_2=c_KA+O(x^-1/d|S|^1/d). Calculation of S_1. We begin by noting that we can write S_1=1/x∑_𝔪 N(𝔪)≤ x∑_𝔫_1,𝔫_2∈ S1_𝔫_1|𝔪1_𝔫_2|𝔪 For the moment, we view 𝔫_1, 𝔫_2 as fixed. Then, we note that since for any 𝔪 with 𝔫_1|𝔪 we can write 𝔪=𝔪'𝔫_1 in a unique way, so that 1/x∑_𝔪 N(𝔪)≤ x1_𝔫_1|𝔪1_𝔫_2|𝔪 =1/x∑_𝔪' N(𝔪'𝔫_1)≤ x1_𝔫_2|𝔪'𝔫_1 = 1/x∑_𝔪' N(𝔪')≤ x/N(𝔫_1)1_𝔫_2|𝔪'𝔫_1 and then letting 𝔡=gcd(𝔫_1,𝔫_2), 𝔡𝔡_i=𝔫_i (i=1,2) we have 𝔫_2|𝔪'𝔫_1 if and only if 𝔡_2|𝔪'𝔡_1, and hence 1_𝔫_2|𝔪'𝔫_1=1_𝔡_2|𝔪'𝔡_1. Then since 𝔡_2,𝔡_1 are coprime, if 𝔡_2|𝔪'𝔡_1 we must have 𝔡_2|𝔪' so that, from (<ref>), 1/x∑_𝔪 N(𝔪)≤ x1_𝔫_1|m1_𝔫_2|m = 1/x∑_𝔪' N(𝔪')≤ x/N(𝔫_1)1_𝔡_2|𝔪' = 1/x∑_𝔪” N(𝔪”)≤ x/N(𝔫_1𝔡_2)1 = c_K/N(𝔫_1𝔡_2)+O(x^-1/dN(𝔫_1𝔡_2)^1/d-1) = c_K· N(gcd(𝔫_1,𝔫_2))/N(𝔫_1)N(𝔫_2)+O(x^-1/dN(gcd(𝔫_1,𝔫_2))^1-1/d/N(𝔫_1)^1-1/dN(𝔫_2)^1-1/d). Defining Φ(𝔫_1,𝔫_2)=N(gcd(𝔫_1,𝔫_2))-1 and then summing over 𝔫_1,𝔫_2∈ S one then has S_1 =∑_𝔫_1,𝔫_2∈ Sc_KΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+∑_𝔫_1,𝔫_2∈ Sc_K/N(𝔫_1)N(𝔫_2)+E_1 =∑_𝔫_1,𝔫_2∈ Sc_KΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+c_KA^2+E_1, where, using (<ref>), E_1 ≪ x^-1/d∑_𝔫_1,𝔫_2∈ S(N(gcd(𝔫_1,𝔫_2))/N(𝔫_1)N(𝔫_2))^1-1/d ≪ x^-1/d∑_𝔫_1,𝔫_2∈ S1/N(𝔫_2)^1-1/d≪ x^-1/d|S|^1+1/d. We then have S_1=∑_𝔫_1,𝔫_2∈ Sc_KΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+c_KA^2+O(x^-1/d|S|^1+1/d). Now, let us denote ∑_𝔫_1,𝔫_2∈ Sc_KΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2) with S_Φ. One has S_1-2AS_2+S_3 = (S_Φ +c_KA^2+O(x^-1/d|S|^1+1/d)) -2A(c_KA+O(x^-1/d|S|^1/d)) + (c_KA^2+O(log^2(|S|)x^-1/d)) =S_Φ+O(x^-1/d|S|^1+1/d) since the A^2 term cancels. This is the desired estimate. We then want an equivalent formulation to Proposition 1 in terms of expectation: Let K be a number field of degree d, and let c_K be the ideal density of 𝒪_K. Suppose S⊂ K is finite and nonempty. Then _N(𝔪)≤ x|_𝔫∈ S^log(N(𝔫)1_𝔫|𝔪-1)|^2=_𝔫∈ S^log_𝔫'∈ S^logΦ(𝔫,𝔫')+O(x^-1/d|S|^1+1/d), We can begin by writing Proposition 2 as ∑_N(𝔪)≤ x|∑_𝔫∈ S1_𝔫|𝔪-A|^2=c_Kx∑_𝔫_1,𝔫_2∈ SΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+O(x^1-1/d|S|^1+1/d). Dividing both sides by ∑_N(𝔪)≤ x1=c_Kx+O(x^-1/d) gives then that _N(𝔪)≤ x|∑_𝔫∈ S1_𝔫|𝔪-A|^2 =1/1+O(x^-1/d)∑_𝔫_1,𝔫_2∈ SΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+O(x^-1/d|S|^1+1/d) = ∑_𝔫_1,𝔫_2∈ SΦ(𝔫_1,𝔫_2)/N(𝔫_1)N(𝔫_2)+O(x^-1/d|S|^1+1/d). Dividing again by A^2 and noting that 1/A∑_𝔫∈ S1_𝔫|𝔪=_𝔫∈ S^logN(𝔫)1_𝔫|𝔪, one has _N(𝔪)≤ x|_𝔫∈ S^logN(𝔫)1_𝔫|𝔪-1|^2=_𝔫_1∈ S^log_𝔫_2∈ S^logΦ(𝔫_1,𝔫_2)+O(x^-1/d|S|^1+1/d). The following Proposition is our main ingredient, analogous to Richter's Proposition 2.2: For all η>0 there exists some N=N(η)∈ℕ such that for all k≥ N there exist two nonempty sets of ideals S_1=S_1(η),S_2=S_2(η)⊂ I_K for which the following are true: (i) All ideals in S_1 are prime and all ideals in S_2 are a product of exactly k prime ideals (ii) The sets S_1 and S_2 have the same cardinality, and there exists an enumeration S_1={𝔭_1,𝔭_2,...,𝔭_k} and S_2={𝔪_1,𝔪_2,...,𝔪_k} for which (1-η)N(𝔭_i)≤ N(𝔪_i)≤ (1+η)N(𝔭_i) for all i=1,...k. (iii) ^log_𝔫∈ S_i^log_𝔫'∈ S_iΦ(𝔫,𝔫')≤η for i=1,2. Fix η>0 and take g:ℕ_0→ℂ with |g|≤ 1. Choose S_1=S_1(η) and S_2=S_2(η) to the desired specifications above. Note first that _𝔪∈ S_i^log_N(𝔫)≤ x/N(𝔪)g(Ω(𝔪𝔫)) =_𝔪∈ S_i^log1/∑_N(𝔫)≤ x/N(𝔪)1∑_N(𝔫)≤ x/N(𝔪)g(Ω(𝔪𝔫)) = _𝔪∈ S_i^log1/c_K·x/N(𝔪)+O(x^1-1/d)∑_N(𝔫')≤ x1_𝔪|𝔫'g(Ω(𝔫')) = _𝔪∈ S_i^log(1+O(x^-1/d))_N(𝔫')≤ xN(𝔪)1_𝔪|𝔫'g(Ω(𝔫')) = _𝔪∈ S_i^log_N(𝔫')≤ xN(𝔪)1_𝔪|𝔫'g(Ω(𝔫'))+O_S_i(x^-1/d) (here we used 1-boundedness of g) so that |_N(𝔫)≤ xg(Ω(𝔫))-_𝔪∈ S_2^log_N(𝔫)≤ x/N(𝔪)g(Ω(𝔪𝔫))|^2 ≤|_N(𝔫)≤ xg(Ω(𝔫))-_𝔪∈ S_2^log_N(𝔫')≤ xN(𝔪)1_𝔪|𝔫'g(Ω(𝔫'))|^2+O_S_2(x^-1/d) ≤_N(𝔫)≤ x|_𝔪∈ S_2^log(1-N(𝔪)1_𝔪|𝔫)|^2+O_S_2(x^-1/d) ≤η+O_S_2(x^-1/d). Here we used the bound |g|≤ 1 extensively alongside Proposition 1, the Cauchy-Schwarz inequality, and the assumption on S_2. Since Ω(𝔪𝔫)=Ω(𝔪)+Ω(𝔫) we have _N(𝔪)≤ xg(Ω(𝔪)) =_𝔪∈ S_2^log_N(𝔫)≤ x/N(𝔪)g(Ω(𝔪)+Ω(𝔫))+O(η^1/2)+O_S_2(x^-1/2d) = _𝔪∈ S_2^log_N(𝔫)≤ x/N(𝔪)g(k+Ω(𝔫))+O(η^1/2)+O_S_2(x^-1/2d). A similar argument with S_1 instead of S_2 and g(n+k-1) in place of g(n) gives _N(𝔪)≤ xg(Ω(𝔪)+k-1) =_𝔭∈ S_1^log_N(𝔫)≤ x/N(𝔭)g(Ω(𝔭)+Ω(𝔫)+k-1)+O(η^1/2)+O_S_1(x^-1/2d) =_𝔭∈ S_1^log_N(𝔫)≤ x/N(𝔭)g(k+Ω(𝔫))+O(η^1/2)+O_S_1(x^-1/2d). Finally, we know that there exists an enumeration of S_1,S_2 such that S_1={𝔭_1,𝔭_2,...,𝔭_j} and S_2={𝔪_1,𝔪_2,...,𝔪_j} and (1-η)N(𝔭_i)≤ N(𝔪_i)≤ (1+η)N(𝔭_i) for all i=1,...k. Then it easily follows that _N(𝔫)≤ x/N(𝔪_i)g(k+Ω(𝔫))=_N(𝔫)≤ x/N(𝔭_i)g(k+Ω(𝔫))+O(η). Taking logarithmic averages over S_1 and S_2 yields _𝔪∈ S_2^log_N(𝔫)≤ x/N(𝔪)g(k+Ω(𝔫))=_𝔭∈ S_1^log_N(𝔫)≤ x/N(𝔭)g(k+Ω(𝔫))+O(η). From applying (<ref>) to (<ref>) and then applying (<ref>) we then deduce that _N(𝔪)≤ xg(Ω(𝔪))=_N(𝔪)≤ xg(Ω(I)+k-1)+O(η^1/2)+O_S_1,S_2(x^-1/2d). This holds for all k≥ N, so that one then has _N(𝔪)≤ xg(Ω(𝔪)+k_1)=_N(𝔪)≤ xg(Ω(I)+k_2)+O(η^1/2)+O_S_1,S_2(x^-1/2d) for all k_1,k_2≥ N. The Theorem then follows upon taking x→∞; then since η>0 was arbitrary, we are finished. § PROOF OF PROPOSITION <REF> §.§ Chebychev-Type Bounds for π_K(x) We have this upper bound for prime ideals: Let α≥ 1. One then has ∑_𝔭 x<N(𝔭)≤α x1≤ (αlogα) x/log x+O(x^1-1/dlog x). The classical way to derive weak-type prime bounds is by studying the prime factorizations of binomial coefficients; in particular, the central binomial coefficient 2nn=(2n)!/(n!)^2 is at least as large as the product of primes n < p < 2n. In the number field setting, we instead follow this line of reasoning. Let 𝔭 be a prime ideal. We know that there are a total of c_Kn/N(𝔭)+O(n^1-1/dN(𝔭)^-1) ideals divisible by 𝔭 over the set of ideals with norm at most n, and c_Kn/N(𝔭)^2+O(n^1-1/dN(𝔭)^-2) divisible by 𝔭^2, and so on. Then, the multiplicity x_𝔭 of a prime ideal 𝔭 in the ideal ∏_N(𝔪)≤ n𝔪 is given by x_𝔭(∏_N(𝔪)≤ n𝔪) = c_Kn∑_c=1^log_N(𝔭)(n)N(𝔭)^-c+O(n^1-1/d∑_c=1^log_N(𝔭)(n)N(𝔭)^-c) =c_Kn∑_c=1^log_N(𝔭)(n)N(𝔭)^-c+O(n^1-1/dlog n/N(𝔭)log N(𝔭)). One can then estimate log f(n) ∑_N(𝔭)≤ nx_𝔭(∏_N(𝔪)≤ n𝔪)·log N(𝔭) =c_Kn∑_N(𝔭)≤ nlog N(𝔭)∑_c=1^log_N(𝔭)(n)N(𝔭)^-c+O(n^1-1/dlog n ∑_N(𝔭)≤ n1/N(𝔭)) = c_Kn∑_N(𝔭)≤ nlog N(𝔭)∑_c=1^log_N(𝔭)(n)N(𝔭)^-c+O(n^1-1/d(log n)^2). We inspect the function θ_α(x):=f(α x)/f(x)^α. Using the estimate derived for log f(n), we have logθ_α(x)=log f(α x)-αlog f(x) and one can compute log f(α x) =α c_Kx∑_N(𝔭)≤α xlog N(𝔭)∑_c=1^log_N(𝔭)(α x)N(𝔭)^-c+O(x^1-1/d(log x)^2), αlog f(x) = α c_Kx∑_N(𝔭)≤ xlog N(𝔭)∑_c=1^log_N(𝔭)(x)N(𝔭)^-c+O(x^1-1/d(log x)^2) so that we have logθ_α(x) =α c_Kx∑_N(𝔭)≤α xlog N(𝔭)∑_c=1^log_N(𝔭)(x)N(𝔭)^-c + α c_Kx∑_N(𝔭)≤α xlog N(𝔭)∑_c=1+⌊log_N(𝔭)(x)⌋^log_N(𝔭)(α x)N(𝔭)^-c - α c_Kx∑_N(𝔭)≤ xlog N(𝔭)∑_c=1^log_N(𝔭)(x)N(𝔭)^-c +O(x^1-1/d(log x)^2). Then, subtracting third term from first above, logθ_α (x) =α c_Kx∑_x<N(𝔭)≤α xlog N(𝔭)∑_c=1^log_N(𝔭)(x)N(𝔭)^-c +α c_Kx∑_N(𝔭)≤α xlog N(𝔭)∑_c=1+⌊log_N(𝔭)(x)⌋^log_N(𝔭)(α x)N(𝔭)^-c +O(x^1-1/d(log x)^2). Notice that in the first term, since x<N(𝔭) one has log_N(𝔭)(x)<1 and hence this term vanishes. We then obtain logθ_α(x)= α c_Kx∑_N(𝔭)≤α xlog N(𝔭)∑_c=1+⌊log_N(𝔭)(x)⌋^log_N(𝔭)(α x)N(𝔭)^-c+O(x^1-1/d(log x)^2). and truncating the sum to x<N(𝔭)≤α x one surely has logθ_α(x) ≥α c_Kx∑_x<N(𝔭)≤α xlog N(𝔭)∑_c=1^log_N(𝔭)(α x)N(𝔭)^-c+O(x^1-1/d(log x)^2) ≥α c_Kx∑_x<N(𝔭)≤α xlog N(𝔭)·⌊log_N(𝔭)(α x)⌋/α x+O(x^1-1/d(log x)^2) ≥ c_Klog x∑_x<N(𝔭)≤α x1+O(x^1-1/d(log x)^2). Then using the fact that logθ_α(x) = c_Kα xlog(α x)-c_Kα xlog x+O(x^1-1/d(log x)^2) = c_K(αlogα)x+O(x^1-1/d(log x)^2) one then has ∑_x<N(𝔭)≤α x1≤ (αlogα )x/log x+O(x^1-1/dlog x). Let π_K(x) be the number of prime ideals in 𝒪_K with norm at most x. Then one has lim inf_x→∞π_K(x)/x/log x≥ 1. Take ϵ>0. Then for x≫_ϵ 1 one has π_K(x)≥ (1-ϵ)x/log x. As with the upper bound in the previous Lemma, this comes from estimating the function θ_α(x) for α >1. Recall that logθ_α(x)=log f(α x)-αlog f(x) for f(x)=∏_N(𝔪)≤ nN(𝔪). One can write , using (<ref>) and splitting into two sums, log f(α x) =α c_Kx ∑_N(𝔭)≤ xlog N(𝔭)∑_c=1^log_N(𝔭)(α x)N(𝔭)^-c +α c_Kx ∑_x<N(𝔭)≤α xlog N(𝔭)∑_c=1^log_N(𝔭)(α x)N(𝔭)^-c+O(x^1-1/d(log x)^2) := α c_K n∑_N(𝔭)≤ nlog N(𝔭)∑_c=1^log_N(𝔭)(α n)N(𝔭)^-c+E_α,x+O(x^1-1/d(log x)^2), where E_α,x is the sum over x<N(𝔭)≤α x and satisfies E_α,x≤α c_Klog(α x)∑_x<N(𝔭)≤α x1. We also have, using (<ref>) αlog f(n)=α c_K x∑_N(𝔭)≤ xlog N(𝔭)∑_c=1^log_N(𝔭)(x)N(𝔭)^-c+O(x^1-1/d(log x)^2) so that, subtracting (<ref>) from (<ref>), logθ_α(x) =α c_K x∑_N(𝔭)≤ xlog N(𝔭)∑_log_N(𝔭)(x)<c≤log_N(𝔭)(α x)N(𝔭)^-c+E_α,n+O(x^1-1/d(log x)^2). Since ∑_log_N(𝔭)(x)<c≤log_N(𝔭)(α x)N(𝔭)^-c≤logα/x and log N(𝔭)≤log x one has logθ_α(x) ≤α (logα )c_K (log x)∑_N(𝔭)≤ x1+E_α,n+O(x^1-1/d(log x)^2) so that we then deduce that ∑_N(𝔭)≤ x1≥logθ_α (x)-E_α,x/c_Kα (logα)(log x)+O(x^1-1/dlog x). Recalling the expression (<ref>) for logθ_α(x) given by logθ_α(x)=(αlogα)c_Kx+O(x^1-1/dlog x) one has, by substituting this into (<ref>) that ∑_N(𝔭)≤ x1 ≥x/log x-E_α,x/log x+O(x^1-1/dlog x). Now, by definition we know that 0≤ E_α, x≤ c_Kαlog(α x)∑_x<N(𝔭)≤α x1; hence under application of the previous Lemma we can deduce that E_α,x≤ c_Kαlog(α x)· (αlogα)x/log x+O(x^1-1/dlog x) so that, by (<ref>), ∑_N(𝔭)≤ x1≥x/log x-c_Kα^2logαx/log x+O(x/(log x)^2) and hence π_K(x)/x/log x≥ 1-c_Kα^2logα +O(1/log x). Given ϵ>0, choose α -1 >0 small so that c_Kα^2logα <ϵ; then the result follows as one takes x→∞. Using these two Lemmas, we can now prove the following Proposition: Let ℙ be the set of prime ideals in 𝒪_K, and let A(x,y) denote the set of ideals[Equivalently, A(x,y)=N^-1((x,y])] 𝔪 in 𝒪_K satisfying x<N(𝔪)≤ y. Then there are x_0≥ 1 and ϵ_0>0 such that (i) |ℙ∩ A(8^x,8^x+1)|≥8^x/x for all x≥ x_0, and (ii)|ℙ∩ A(8^x,8^x+ϵ)|≤√(ϵ)8^x/x for all x≥ x_0 and ϵ∈ (0,ϵ_0]. The upper bound (ii) is almost immediate by taking α=8^ϵ in Lemma <ref>, so that one has ∑_8^x<N(𝔭)≤ 8^x+ϵ1 ≤ϵ(log 8)8^ϵ8^x/xlog 8+O(8^x(1-1/d)x) ≤ 2ϵ8^x/x+O(8^x(1-1/d)x). Since 2ϵ≤√(ϵ) for ϵ in a neighborhood of zero, and since the order of growth of 8^x/x is much larger than that of 8^x(1-1/d)x, the result follows. The lower bound (i) follows from the fact that |ℙ∩ A(8^x,8^x+1)| =|ℙ∩ A(0,8^x+1)|-|ℙ∩ A(0,8^x)| = π_K(8^x+1)-π_K(8^x) and the dyadic decomposition (1/2,8^x]=⋃_k=0^3x(8^x/2^k+1,8^x/2^k] gives π_K(8^x) =∑_k=0^3x∑_8^x/2^k+1< N(𝔭)≤8^x/2^k1 ≤ O(1)+ ∑_k=0^3x-2[(2log 2)8^x/2^k+1/log (8^x/2^k+1)+O((8^x/2^k+1)^1-1/dx)] (from (<ref>)) = 2∑_j=2^3x2^j/j+O(8^x(1-1/d)x). We then note that x/8^x∑_j=2^3x2^j/j = x/8^x∑_2≤ j≤ 2x2^j/j+x/8^x∑_2x<j≤ 3x2^j/j ≤ x∑_2≤ j≤ 2x2^j-3x+1/2∑_2x<j≤ 3x2^j-3x =1+o(1) and hence π_K(8^x)≤ (2+o(1))8^x/x+O(8^x(1-1/d)x). One then has that, applying Lemma <ref>, π_K(8^x+1)-π_K(8^x) ≥ (1-ϵ)8^x+1/3(x+1)log 2-(2+o(1))8^x/x+O(8^x(1-1/d)x^2) (x≫_ϵ 1) = [8(1-ϵ)/3log 2x/x+1-2+o(1)]·8^x/x+O(8^x(1-1/d)x) ≥8^x/x for ϵ small and sufficiently large x. This completes the proof of Proposition <ref>. §.§ Combinatorial Ingredients We include two combinatorial Lemmas, the first which can be easily extended from Lemma 3.4 in <cit.> to the number field setting, and the second which is precisely Lemma 3.5 in <cit.>: Let x_0 be as in Proposition <ref>. There exists ϵ_1>0 such that for all ϵ∈ (0,ϵ_1] and all δ∈ (0,1) there exists D=D(ϵ,δ)∈ (0,1) with the following property: For all n≥ x_0 there are x,y∈[n,n+1) with ϵ^4<y-x<ϵ such that |ℙ∩ A(8^x,8^x+δ)|≥D 8^n/n, and |ℙ∩ A(8^y,8^y+δ)|≥D 8^n/n. Fix x_0≥ 1 and 0<ϵ<1/2. Suppose 𝒳 is a subset of ℝ with the property that for every n≥ x_0, there exist x,y∈𝒳∩ [n,n+1) with ϵ^4<y-x<ϵ. Let j≥⌈ 2/ϵ^4⌉. Then for all n_1,n_2,...,n_j∈ℕ_≥ x_0 there exist z,z_1,...,z_j∈𝒳 such that (I) z_i∈[n_i,n_i+1) for all 1≤ i≤ j (II) z_1+...+z_j∈ [z,z+ϵ). We now have the tools necessary to prove Proposition <ref>. Let η∈ (0,1) be given. Let x_0 and ϵ_0 be as in Proposition <ref>, and ϵ_1 be as in Lemma <ref>. Pick any ϵ<min{ϵ_0,ϵ_1,log(1+η)/log 64}. Set N=⌈ 2/ϵ^4⌉. We will construct sets S_1=S_1(η) and S_2=S_2(η) as specified in Proposition <ref>. To do this, set j≥ N, and choose δ =ϵ/j, and let D=D(ϵ,δ) be as in Lemma <ref>. Define 𝒳:={x≥ N: |ℙ∩ A(8^x,8^x+δ)|≥ D 8^⌊ x⌋/⌊ x⌋}. By Lemma <ref>, 𝒳 is necessarily infinite and unbounded. For every x∈𝒳 define P_x to be a set satisfying the following conditions: P_x⊂ℙ∩ A(8^x,8^x+δ) and |P_x|=⌊ D 8^⌊ x⌋/⌊ x⌋⌋ We shall use the sets P_x to construct the sets S_1 and S_2. The set 𝒳 satisfies the conditions of Lemma 7, which ensures, for each tuple 𝐦:=(m_1,...,m_j)∈ℕ_≥ x_0^j, the existence of some ζ_𝐦,ζ_1,𝐦,...,ζ_j,𝐦∈𝒳 that satisfy the conditions (I.j) ζ_i,𝐦∈ [m_i,m_i+1) for all 1≤ i≤ j, and (II.j) ζ_1,𝐦+....+ζ_j,𝐦∈ [ζ_𝐦,ζ_𝐦+ϵ). By definition of the sets P_x, one has |P_ζ_i,𝐦|=⌊ D8^m_i/m_i⌋. Now, let M=M(D,η,j) be a constant that is to be determined later, and define sets A_1,...,A_j⊂ℕ_≥ x_0 inductively in the following manner: (1) Pick some s_1∈ℕ with s_1>max{x_0,2j} and let A_1 be a finite subset of s_1ℕ={s_1n: n∈ℕ} satisfying ∑_n∈ A_11/n≥ M. (2) Given A_i, form A_i+1 by taking any s_i+1>M+max(A_1)+...+max(A_i) and set A_i+1 to be a finite subset of s_i+1ℕ satisfying ∑_n∈ A_i+11/n≥ M. We observe that these sets A_i satisfy the following property: (A) For any vector 𝐪=(q_1,...,q_j)∈ A_1× ...× A_j and 𝐫=(r_1,...,r_j)∈ A_1× ...× A_j with 𝐪≠𝐫, one has the distance between the integers q_1+...+q_j and r_1+...+r_j being at least M. This follows because 𝐪≠𝐫 implies that there exists some 1≤ i_*≤ j satisfying q_i_*≠ r_i_*. Choose such an i_* to be maximal, then one has |∑_i=1^j(q_i-r_i)| = |∑_1≤ i≤ j q_i≠ r_i(q_i-r_i)| = |q_i_*-r_i_*+∑_1≤ i≤ j q_i≠ r_i i≠ i_*(q_i-r_i)| ≥ |q_i_*-r_i_*|-|∑_1≤ i≤ j q_i≠ r_i i≠ i_*(q_i-r_i)| = |q_i_*-r_i_*|-∑_1≤ i<i_* q_i≠ r_i|q_i-r_i|, with the last step following from the maximality assumption. On the one hand, since q_i_* and r_i_* are distinct elements of s_i_*ℕ, they satisfy |q_i_*-r_i_*|≥ s_i_*. On the other hand, if 1≤ i<i_*, one has |q_i-r_i|≤ j≤max A_i since both q_i and r_i are positive integers, and hence we have |∑_i=1^j(q_i-r_i)|≥ s_i_*-∑_1≤ i<i_*max A_i ≥ M, which provides the proof for property (A). The Construction We can now construct the collections of ideals S_1 and S_2 in Proposition <ref>. Set j≥ N. Recalling the definitions of the sets P_ζ_i,𝐦 as given in (<ref>) and (<ref>), we form S_2 as the product set S_2:=⋃_𝐦∈ A_1×⋯× A_jP_ζ_1,𝐦...P_ζ_j,𝐦={𝔭_1...𝔭_j: 𝔭_i∈ P_ζ_i,𝐦 for all 1≤ i≤ j}. This then gives that |P_ζ_1,𝐦...P_ζ_j,𝐦|≤D^j8^m_1+⋯+m_j/m_1⋯ m_j≤D^j8^m_1+⋯+m_j/m_1+⋯+m_j< D 8^⌊ζ_𝐦⌋/⌊ζ_𝐦⌋ where the last step follows from the fact ζ_1,𝐦+⋯+ζ_j,𝐦≥⌊ζ_𝐦⌋ (as follows from (II.j)), and because D^j<D. Hence we have by definition of P_ζ_𝐦 that |P_ζ_1,𝐦⋯ P_ζ_j,𝐦|≤ |P_ζ_𝐦|. In particular, there exists some collection of ideals Q_𝐦⊆ P_ζ_𝐦 with |Q_𝐦|=|P_ζ_1,𝐦⋯ P_ζ_j,𝐦|. We then form the collection of prime ideals S_1:=⋃_𝐦∈ A_1×⋯× A_jQ_𝐦. By construction, if 𝔪∈ S_2 then one can write 𝔪=𝔭_ζ_1,𝐦⋯𝔭_ζ_j,𝐦 for some 𝐦∈ A_1×⋯× A_j and 𝔭_ζ_1,𝐦∈ P_ζ_1,𝐦. Then Ω(𝔪)=j, as desired. Whereas if 𝔭∈ S_1⊂ P_ζ_𝐦 then 𝔭 is by definition a prime ideal. Proof that S_1 and S_2 satisfy (ii). By definition, we have P_ζ_i,𝐦⊂ A(8^ζ_i,𝐦,8^ζ_i,𝐦+δ), so that P_ζ_1,𝐦⋯ P_ζ_j,𝐦 ⊂ A(8^ζ_1,𝐦+ ⋯ +ζ_j,𝐦,8^ζ_1,𝐦+...+ζ_j,𝐦+jδ)⊂ A(8^ζ_𝐦,8^ζ_𝐦+2ϵ), with the last inclusion following from property (II.j) and the definition of δ as ϵ/j. Also, by definition we have Q_𝐦⊂ P_ζ_𝐦⊂ A(8^ζ_𝐦,8^ζ_𝐦+δ). Since δ =ϵ/j we clearly have Q_𝐦⊂ A(8^ζ_𝐦,8^ζ_𝐦+2ϵ). We first claim that each ideal in S_2 is distinct. It suffices to prove that the sets M_1:=P_ζ_1,𝐦_1...P_ζ_j,𝐦_1 and M_2:=P_ζ_1,𝐦_2...P_ζ_j,𝐦_2 have empty intersection for 𝐦_1≠𝐦_2, both in A_1× ...× A_j (by uniqueness of factorization of an ideal into prime ideals, one has |P_ζ_1,𝐦...P_ζ_j,𝐦|=∏_k=1^j|P_ζ_k,𝐦|). Given some ideal 𝔲∈ P_ζ_1,𝐦...P_ζ_j,𝐦, we can write 𝔲=𝔭_ζ_1,𝐦...𝔭_ζ_j,𝐦 for a collection of prime ideals 𝔭_ζ_i,𝐦 with each 𝔭_ζ_i,𝐦∈ P_ζ_1,𝐦. Since ζ_i,𝐦∈ [m_i,m_i+1) for all 1≤ i≤ j by point (I.j), and ζ_1,𝐦+...+ζ_j,𝐦∈ [ζ_𝐦,ζ_𝐦+ϵ), we then discern that |m_1+...+m_j-ζ_𝐦| ≤|-ζ_𝐦+∑_k=1^jζ_k,𝐦|+∑_k=1^j |m_i-ζ_k,𝐦| <ϵ+j, which alongside property (A) with the choice of M≥ 2j+3 gives that u cannot possibly be in any other collection P_ζ_1,𝐦'...P_ζ_j,𝐦'. We next claim that each ideal in S_1 is distinct. We must prove that the sets Q_𝐦 are disjoint for different 𝐦∈ A_1× ...× A_j; since Q_𝐦⊂ P_ζ_𝐦, it suffices to prove that the P_ζ_𝐦 are disjoint for different 𝐦∈ A_1× ...× A_j. But this is easily apparent, since P_ζ_𝐦⊂ A(8^ζ_𝐦,8^ζ_𝐦+δ) by definition, and alongside property (A) with the above choice of M and the bound above gives that these annuli are disjoint. Regarding cardinality, it is immediately clear that S_1 and S_2 have the same cardinality since |Q_𝐦|=|P_ζ_1,𝐦...P_ζ_j,𝐦|. Also, since Q_𝐦 and P_ζ_1,𝐦...P_ζ_j,𝐦 both lie in the annulus A(8^ζ_𝐦,8^ζ_𝐦+2ϵ), it is clear that the ratio of the norm of any element of the latter with the norm of any element of the former lies between 8^-2ϵ and 8^2ϵ. Since 8^2ϵ≤ 1+η, it then follows that by enumerating P_ζ_1,𝐦...P_ζ_j,𝐦={P_i:N(P_1)≤...≤ N(P_r)} and Q_𝐦={Q_i:N(Q_1)≤ ...≤ N(Q_r)} we have that (1-η)N(P_j)≤ N(Q_j)≤ (1+η)N(p_j) for each j=1,...,r. Using the fact that each 𝐦 corresponds to a distinct annulus A(8^ζ_𝐦,8^ζ_𝐦+δ), we can then extend this enumeration property to S_1 and S_2, so that for each Q_i∈ S_2 there exists a P_i∈ S_1 for which (1-η)N(P_i)≤ N(Q_i)≤ (1+η)N(P_i). We first prove the coprimality condition for S_1, as it is simpler. We must show that _𝔭∈ S_1^log_𝔭'∈ S_1^logc_KΦ(𝔭,𝔭')≤η, for Φ(𝔭,𝔭'):=N(gcd(𝔭,𝔭'))-1. The set S_1 only contains prime ideals, so it is clear that Φ(𝔭,𝔭')=0 for 𝔭≠𝔭'; hence we have _𝔭∈ S_1^log_𝔭'∈ S_1^logΦ(𝔭,𝔭')= ∑_𝔭,𝔭'∈ S_1Φ(𝔭,𝔭')/N(𝔭)N(𝔭')/∑_𝔭,𝔭'∈ S_11/N(𝔭)N(𝔭')≤∑_𝔭∈ S_11/N(𝔭)/(∑_𝔭∈ S_11/N(𝔭))^2=1/∑_𝔭∈ S_11/N(𝔭). By definition of S_1 we evidently have ∑_𝔭∈ S_11/N(𝔭)=∑_𝐦∈ A_1× ...× A_j∑_P∈ Q_𝐦1/N(𝔭), and since Q_𝐦⊂ A(8^ζ_𝐦,8^ζ_𝐦+2ϵ)⊂ A(8^ζ_𝐦,8^m_1+...+m_j+j+3ϵ) one has N(𝔭)≤ 8^m_1+...+m_j+j+1 for each 𝔭∈ Q_𝐦, so that ∑_P∈ S_11/N(𝔭)≥∑_𝐦∈ A_1× ... × A_j|Q_𝐦|/8^j+1· 8^m_1+...+m_j. Then since |Q_𝐦|=|P_ζ_1,𝐦...P_ζ_j,𝐦|=∏_k=1^j|P_ζ_k,𝐦| one has |Q_𝐦|≥ D^j8^-j8^m_1+...+m_j/(m_1m_2...m_j), so that ∑_𝔭∈ S_11/N(𝔭)≥∑_𝐦∈ A_1× ... × A_jD^j/8^2j+1m_1...m_j≥D^jM^j/8^2j+1. Then if M is chosen sufficiently large, then _𝔭∈ S_1^log_𝔭'∈ S_1^logΦ(𝔭,𝔭')≤η immediately follows. Now, it suffices to show that _𝔪∈ S_2^log_𝔪'∈ S_2^logΦ(𝔪,𝔪')≤η. By definition of the set S_2, one has ∑_𝔪,𝔪'∈ S_2Φ(𝔪,𝔪')/N(𝔪)N(𝔪')=∑_𝐦,𝐦'∈ A_1×...× A_j∑_𝔪∈ P_ζ_1,𝐦...P_ζ_j,𝐦∑_𝔪'∈ P_ζ_1,𝐦_2...P_ζ_j,𝐦'Φ(𝔪,𝔪')/N(𝔪)N(𝔪'). If some 𝔪∈ P_ζ_1,𝐦...P_ζ_j,𝐦 and 𝔪'∈ P_ζ_1,𝐦'...P_ζ_j,𝐦' are coprime then Φ(𝔪,𝔪')=0, so do not contribute towards the sum above. Whereas if 𝔪 and 𝔪' are not coprime, set gcd(𝔪,𝔪'):=𝔲 with 𝔲∈∏_i∈ FP_ζ_i,𝐦' for some nonempty minimal subset F⊂{1,...,j} and 𝔲'∈∏_i∉FP_ζ_i,𝐦' such that 𝔪'=𝔲𝔲'; this forces m_i=m_i' for all i∈ F since P_ζ_i,𝐦 and P_ζ_i,𝐦' are otherwise disjoint. One would then have Φ(𝔪,𝔪')/N(𝔪)N(𝔪')=N(𝔲)-1/N(𝔪)N(𝔪')≤1/N(𝔪)N(𝔲') and as a result ∑_𝔪,𝔪'∈ S_2Φ(𝔪,𝔪')/N(𝔪)N(𝔪')≤∑_F⊂{1,...,j} F≠∅∑_𝐦,𝐦'∈ A_1×...× A_j m_i=m_i', ∀ i∈ F∑_𝔪∈ P_ζ_1,𝐦...P_ζ_j,𝐦∑_𝔲'∈∏_i∉FP_ζ_i,𝐦'1/N(𝔪)N(𝔲'). Then, using the fact that P_ζ_i,𝐦⊂ A(8^m_i,8^m_i+1) and P_ζ_i,𝐦'⊂ A(8^m_i',8^m_i'+1) we then have ∑_𝔪∈ P_ζ_1,𝐦...P_ζ_j,𝐦∑_𝔲'∈∏_i∉FP_ζ_i,𝐦'1/N(𝔪)N(𝔲') ≤∑_𝔪∈ P_ζ_1,𝐦...P_ζ_j,𝐦∑_𝔲'∈∏_i∉FP_ζ_i,𝐦'(∏_k=1^j 1/8^m_i)(∏_k∉F1/8^m_i') = (∏_k=1^j|P_ζ_k,𝐦|/8^m_i)(∏_k∉F|P_ζ_k,𝐦'|/8^m_k'). By definition, each set P_ζ_k,𝐦 has cardinality ⌊ D8^m_i/m_i⌋, so that (∏_k=1^j|P_ζ_k,𝐦|/8^m_i)(∏_k∉F|P_ζ_k,𝐦'|/8^m_k')≤(∏_k=1^jD/m_i)(∏_k∉FD/m_k'), and hence we have ∑_𝔪,𝔪'∈ S_2Φ(𝔪,𝔪')/N(𝔪)N(𝔪') ≤∑_F⊂{1,...,j} F≠∅∑_𝐦,𝐦'∈ A_1×...× A_j m_i=m_i', ∀ i∈ F(∏_k=1^jD/m_k)(∏_k∉FD/m_k') = ∑_F⊂{1,...,j} F≠∅D^2j-F(∏_k=1^j∑_m∈ A_k1/m)(∏_k∉F∑_m∈ A_k1/m). One also has that, since 𝔪∈ P_ζ_1,𝐦...P_ζ_j,𝐦 implies that N(𝔪)≤ 8^m_1+...+m_j+j+1, we then have ∑_𝔪∈ S_21/N(𝔪)≥∑_𝐦∈ A_1× ...× A_j|P_ζ_1,𝐦...P_ζ_j,𝐦|/8^m_1+...+m_j+j+1 and since |P_ζ_1,𝐦...P_ζ_j,𝐦|≥ D^j8^-j8^m_1+...+m_j/(m_1m_2...m_j) it follows that ∑_𝔪∈ S_21/N(𝔪)≥∑_𝐦∈ A_1× ...× A_jD^j /8^2j+1(m_1m_2...m_j)≥D^j/8^3j∏_k=1^j∑_m∈ A_k1/m. We then deduce that _𝔪∈ S_2^log_𝔪'∈ S_2^logΦ(𝔪,𝔪') =∑_𝔪,𝔪'∈ S_2Φ(𝔪,𝔪')/N(𝔪)N(𝔪')/∑_𝔪,𝔪'∈ S_21/N(𝔪)N(𝔪') ≤∑_F⊂{1,...,j} F≠∅D^2j-F(∏_k=1^j∑_m∈ A_k1/m)(∏_k∉F∑_m∈ A_k1/m)/(D^j/8^3j∏_k=1^j∑_m∈ A_k1/m)^2 ≤∑_F⊂{1,...,j} F≠∅}8^6kD^-|F|/(∏_k∈ F∑_m∈ A_i1/m). Finally, since ∑_m∈ A_i1/m≥ M, we see that _𝔪∈ S_2^log_𝔪'∈ S_2^logΦ(𝔪,𝔪')≤η for sufficiently large M. This completes the proof of Proposition <ref>. § APPENDIX A We begin with an analogue of Stirling's formula in the number field setting: Let K be a number field of degree d, and let f(x)=∏_N(𝔪)≤ xN(𝔪). Then log f(x)=c_Kxlog x-c_Kx+O(x^1-1/dlog x). Let ϕ(n) denote the number of ideals with norm n in I_K, then log f(x)=∑_N(𝔪)≤ xlog N(𝔪)=∑_n=1^⌊ x⌋ϕ(n)log n. Via Abel summation, we can write this as log f(x) =log x∑_n=1^xϕ(n)-∫_1^x∑_n≤ tϕ(n)/t dt = log x∑_N(𝔪)≤ x1-∫_1^x∑_N(𝔪)≤ t1/t dt = log x(c_Kx+O(x^1-1/d))-∫_1^xc_Kt+O(t^1-1/d)/t dt = c_Kxlog x-c_Kx+O(x^1-1/dlog x). The result then immediately follows. The following Proposition generalizes this argument: Let K be a number field of degree d, and let F(x)=∑_N(𝔪)≤ xf[N(𝔪)] for a function f∈ C^1([1,∞)). Then one has F(x)=c_Kf(1)· xf(x)+c_K∫_1^xf(t) dt+c_Kf(1)+E, where E≪ f(x)x^1-1/d+∫_1^x|f'(t)|t^1-1/d dt. Let ϕ(n) denote the number of ideals with norm n in I_K, then for any x∈ℕ_0, F(x)=∑_N(𝔪)≤ xf(𝔪)=∑_n=1^xϕ(n)f(n). Using partial summation, we can write this as F(x) =f(x)∑_n=1^x ϕ(n)-∫_1^xf'(t)∑_1≤ n≤ tϕ(n) dt = f(x)∑_N(𝔪)≤ x1-∫_1^xf'(t)∑_N(𝔪)≤ t1 dt =f(x)(c_Kx+O(x^1-1/d))-∫_1^xf'(t)(c_Kt+O(t^1-1/d)) dt = c_Kxf(x)-c_K∫_1^xtf'(t) dt+O(f(x)x^1-1/d+∫_1^x|f'(t)|t^1-1/d dt) := c_Kxf(x)-c_K(xf(x)-f(1)-∫_1^xf(t) dt)+E = c_Kf(1)· xf(x)+c_K∫_1^xf(t) dt+c_Kf(1)+E, where E≪ f(x)x^1-1/d+∫_1^x|f'(t)|t^1-1/d dt. § APPENDIX B In this section we recall several well-known facts about the connection between equidistribution of a sequence and exponential sums. The most fundamental is Weyl's Criterion: A sequence (a_n)⊂ℝ is uniformly distributed modulo 1 if and only if for each ℓ≠ 0, ∑_n≤ xe(ℓ a_n)=o(x). In the positive integer setting, one also has an associated equidistribution criterion: A sequence (a_n)⊂ℤ_≥ 0 is equidistributed modulo m if and only if ∑_n≤ xe(ℓ a_n/m)=o(x) for each ℓ∈{1,...,m-1}. After expanding out the square and applying Parseval's theorem over ℤ_m, one observes that ∑_ℓ∈ℤ_m|-1/m+1/x∑_n≤ x a_n≡ℓ (mod m)1|^2=1/m∑_ℓ =1^m-1|1/x∑_n≤ xe(ℓ a_n/m)|^2. From this the Lemma is not hard to deduce. § APPENDIX C In this section we show how L(x)=o(x) implies that M(x)=o(x), as defined in the Introduction. Let K be an algebraic number field. Then L(x)=o(x) implies that M(x)=o(x). We claim the following two statements are true (here * denotes the Dirichlet convolution operator): (i) Suppose that f is the indicator function for the square ideals. Then if f^-1 denotes the Dirichlet inverse of f (i.e. the function such that (f*f^-1)(𝔪):=∑_𝔡_1𝔡_2=𝔪f(𝔡_1)f^-1(𝔡_2)= 1 𝔪=K 0 otherwise), then f^-1 is 1-bounded and supp f⊇supp f^-1. (ii) Let g be the function such that μ=g*λ. Then g=(1_square)^-1, where 1_square is the indicator function for the square ideals. Let us explain how these two statements together imply the result. Writing μ=g*λ, it is not hard to see then that M(x)=∑_N(𝔪)≤ xg(𝔪)L(x/N(𝔪)). If g=(1_square)^-1 then by (i) g is 1-bounded with support on square ideals, so that M(x)/x=1/x∑_N(𝔪)^2≤ xg(𝔪^2)L(x/N(𝔪)^2)=∑_N(𝔪)^2≤ xg(𝔪^2)/N(𝔪)^2·L(x/N(𝔪)^2)/x/N(𝔪)^2. If L(x)=o(x), then for each ϵ>0 there exists some C(ϵ) for which x≥ C(ϵ) |L(x)/x|<ϵ, and hence |M(x)/x| ≤∑_N(𝔪)^2≤ x1/N(𝔪)^2|L(x/N(𝔪)^2)/x/N(𝔪)^2| =∑_N(𝔪)^2≤ x/C(ϵ) 1/N(𝔪)^2|L(x/N(𝔪)^2)/x/N(𝔪)^2|+∑_x/C(ϵ)<N(𝔪)^2≤ x1/N(𝔪)^2|L(x/N(𝔪)^2)/x/N(𝔪)^2| < ϵ∑_N(𝔪)^2≤ x/C(ϵ)1/N(𝔪)^2+∑_x/C(ϵ)<N(𝔪)^2≤ x1/N(𝔪)^2. Then since ∑_𝔪1/N(𝔪)^2<∞ (as can be seen, say, from partial summation and 1.3), for sufficiently large x we can make this arbitrarily small to show that M(x)=o(x). Proof of (i). Since f is multiplicative, one has, for 𝔪=∏_i𝔭_i^α_i that f(𝔪)=∏_i 1_2|α_i. By definition of Dirichlet inverse, for each α_i≥ 1 one has 0=∑_𝔡_1𝔡_2=𝔭_i^α_if(𝔡_1)f^-1(𝔡_2)=∑_c=0^α_if(𝔭_i^c)f^-1(𝔭_i^α_i-c). Then 0=∑_c=0^⌊α_i/2⌋f^-1(𝔭_i^α_i-2c) At α_i=0 one has f^-1(1)=1; at α_i=1 one has f^-1(𝔭_i)=0 by the above expression. Now let us induct on α_i to show that f^-1(𝔭_i^α_i)=0 for α_i≡ 1 (mod 2). The base case is shown; suppose that there exists some even k for which f^-1(𝔭_i^α_i)=0 for each α_i≤ k odd. One has 0=∑_c=0^⌊ (k+1)/2⌋f^-1(𝔭_i^k+1-2c)=f^-1(𝔭_i^k+1), so that f^-1(𝔭_i^k+1)=0 as well. This provides that supp f^-1⊆supp f, since f^-1 is multiplicative (in general, the Dirichlet inverse of a multiplicative function is multiplicative; see, e.g., Apostol's Introduction to Analytic Number Theory, Thm. 2.16). To show boundedness, we again use the above expression for prime power ideals, i.e. that 0=∑_c=0^⌊α_i/2⌋f^-1(𝔭_i^α_i-2c). It is clear that f(1)=f^-1(1)=1, hence evaluating at α_i=2 gives f^-1(𝔭_i^2)=-1. Then one has f^-1(𝔭_i^4)=-∑_1≤ c≤ 2f^-1(𝔭_i^α_i-2c)=-(-1+1)=0, and f^-1(𝔭_i^6)=-∑_1≤ c≤ 3f^-1(𝔭_i^α_i-2c)=-(0-1+1)=0. It is not hard to see that f^-1(𝔭_i^2k)= 1 k = 0 -1 k=1 0 k≥ 2. by induction; then since f^-1 is multiplicative, this provides a proof of boundedness since |f^-1(𝔪)|=∏_𝔭|𝔪|f^-1(𝔭^x_𝔭(𝔪))|≤ 1. Proof of (ii). We first claim that λ*1=1_square, where 1_square is the indicator function for the square ideals. Indeed, it suffices to prove that 1_square*μ=λ. One has (1_square*μ)(𝔪)=∑_𝔲𝔡^2=𝔪μ(𝔲). Write 𝔪=𝔲_0𝔡_max^2 for some 𝔲_0 squarefree, then the above is the same as saying (1_square*μ)(𝔪)=∑_𝔡_1𝔡_2=𝔡_maxμ(𝔲_0𝔡_2^2). Each summand vanishes other than the case 𝔡_2=1, so that (1_square*μ)(N)=μ(𝔲_0)=(-1)^Ω(𝔲_0). Then since Ω(𝔲_0)≡Ω(𝔪) (mod 2), we have our desired equivalence. Then convolving the equation λ*1=1_square with g would give λ*g*1=g*1_square, and since λ*g=μ by definition one would have g*1_square=μ*1=δ. But then g=1_square^-1, by definition of our inverse. § ACKNOWLEDGEMENTS The author extends gratitude for the idea from Michael Lacey, and for discussions thereof. The author also thanks Alex Dunn and Vesselin Dimitrov for information regarding number fields, and Florian Richter and Vitaly Bergelson for their encouraging comments. Finally, the author thanks Rahul Sethi and Junzhe Mao for close reading. The author was partially supported by NSF grant award #2247254 under Michael Lacey. *
http://arxiv.org/abs/2406.08855v1
20240613063949
Trajectory Planning for Autonomous Driving in Unstructured Scenarios Based on Graph Neural Network and Numerical Optimization
[ "Sumin Zhang", "Kuo Li", "Rui He", "Zhiwei Meng", "Yupeng Chang", "Xiaosong Jin", "Ri Bai" ]
cs.RO
[ "cs.RO" ]
Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Trajectory Planning for Autonomous Driving in Unstructured Scenarios Based on Graph Neural Network and Numerical Optimization Sumin Zhang, Kuo Li, Rui He, Zhiwei Meng, Yupeng Chang, Xiaosong Jin, Ri Bai The authors are with the School of National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130012, China (e-mail: suminzhang@163.com; likuo23@mails.jlu.edu.cn; herui@jlu.edu.cn; mengzw20@mails.jlu.edu.cn; changyp21@mails.jlu.edu.cn; jinxs21@mails.jlu.edu.cn; bairi22@mails.jlu.edu.cn). ===================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In unstructured environments, obstacles are diverse and lack lane markings, making trajectory planning for intelligent vehicles a challenging task. Traditional trajectory planning methods typically involve multiple stages, including path planning, speed planning, and trajectory optimization. These methods require the manual design of numerous parameters for each stage, resulting in significant workload and computational burden. While end-to-end trajectory planning methods are simple and efficient, they often fail to ensure that the trajectory meets vehicle dynamics and obstacle avoidance constraints in unstructured scenarios. Therefore, this paper proposes a novel trajectory planning method based on Graph Neural Networks (GNN) and numerical optimization. The proposed method consists of two stages: (1) initial trajectory prediction using the GNN, (2) trajectory optimization using numerical optimization. First, the graph neural network processes the environment information and predicts a rough trajectory, replacing traditional path and speed planning. This predicted trajectory serves as the initial solution for the numerical optimization stage, which optimizes the trajectory to ensure compliance with vehicle dynamics and obstacle avoidance constraints. We conducted simulation experiments to validate the feasibility of the proposed algorithm and compared it with other mainstream planning algorithms. The results demonstrate that the proposed method simplifies the trajectory planning process and significantly improves planning efficiency. Autonomous driving, graph neural network(GNN), trajectory planning, numerical optimal control. § INTRODUCTION Trajectory planning for intelligent vehicles is a crucial topic in the field of autonomous driving, requiring consideration of safety, comfort, and driving efficiency<cit.>. Current trajectory planning research mainly focuses on structured environments such as highways and unstructured environments such as parking lots. In contrast to structured environments, which have lane markings as driving references, unstructured environments generally lack such references and feature more complex and diverse obstacles, making trajectory planning significantly more challenging<cit.>. A feasible trajectory should connect the vehicle's initial position and orientation with its target position and orientation, comply with vehicle dynamic constraints, and ensure that the control system avoids collisions with obstacles during trajectory tracking. The objective of the trajectory planner is to establish a correspondence between the unstructured environment and the trajectory, generating a feasible path based on the environmental information. Traditional trajectory planning algorithms employ mathematical methods to identify such correspondences, typically involving multiple stages of mathematical modeling<cit.>. These stages include path planning, path optimization, and speed planning to derive a feasible trajectory. This paper aims to simplify this process by leveraging deep learning techniques to identify the relationship between environmental information and the trajectory through large amounts of data. Subsequently, an optimal control method is used to generate a locally optimal trajectory that satisfies multiple constraints. § RELATED WORKS Existing trajectory planning algorithms can be categorized based on different criteria. In line with the research content of this paper, trajectory planning methods are divided into three categories: 1) combinations of traditional planning algorithms, 2) optimization-based methods, and 3) end-to-end methods. Traditional planning algorithms, originating in the 1970s, have been widely applied in the field of robotics<cit.>. These algorithms typically seek a feasible or optimal solution based on certain manually defined rules. For example, graph search-based algorithms such as Dijkstra's<cit.> and A*<cit.>, and sampling-based algorithms such as RRT*<cit.> and its variants, are commonly used. The paths solved by these algorithms often exhibit discontinuous curvature and do not consider vehicle kinematic characteristics, making them more suitable for global path planning. When used for local path planning, polynomial curves like quintic polynomials or Bézier curves are often used to fit the planned path. To simulate vehicle kinematics, the Hybrid A*<cit.> extends graph search algorithms by incorporating the vehicle's movement direction, resulting in smoother paths that adhere to vehicle kinematic constraints. To generate a trajectory, it is necessary to add time information to the path to represent the vehicle's speed. Speed planning typically employs S-T graph-based algorithms. The combination of traditional planning algorithms usually requires the manual setting of multiple stages. For instance, in graph search algorithms, the map must be gridded, and the grid resolution must be set. Larger grids can lead to poorer path quality, while smaller grids increase the computational cost of the algorithm. In sampling-based algorithms, the step size and the number of iterations need to be set. A smaller step size and more iterations can improve path quality but also increase computational cost, and vice versa. Optimization-based methods generally refer to the design of a numerical optimization cost function with objectives such as comfort or energy consumption, and constraints such as collision avoidance or physical limitations. By minimizing this cost function, a locally optimal trajectory can be obtained. For example, Lim et al.<cit.> used a hierarchical trajectory planning method combining sampling and numerical optimization. They obtained a rough behavioral trajectory through a sampling algorithm, then established a cost function and constraints, and performed numerical optimization using Sequential Quadratic Programming (SQP)<cit.>. When the continuous variables of the optimization problem are partially or fully discretized and then solved directly, it is typically referred to as an optimal control method<cit.>. Optimal control methods model the trajectory planning problem as an Optimal Control Problem (OCP), usually consisting of an objective function and multiple constraints. The variables are discretized and converted into a Nonlinear Programming (NLP) problem for solving, using methods such as SQP and Interior Point Methods (IPM)<cit.>. Li et al.<cit.> and Li and Wang<cit.>, for example, formulated the trajectory planning task with constraints on vehicle dynamics, obstacles, and initial positions, optimizing performance metrics such as trajectory smoothness and obstacle avoidance to obtain an optimal trajectory. Lian et al.<cit.> proposed a two-stage trajectory planning method. In the first stage, they used an improved Hybrid A* algorithm to obtain an initial trajectory. In the second stage, they performed segmented optimization of the optimal control problem. This method achieved good results in narrow parking environments by reducing optimization time. End-to-end autonomous driving technology can be distinguished in terms of learning methods as imitation learning (IL) and reinforcement learning (RL). IL learns from a large number of expert demonstrations, mimicking expert behavior across various driving scenarios, while RL accumulates rewards or penalties through interaction with the environment, aiming to maximize cumulative rewards<cit.>. Although RL does not require pre-prepared training data, its efficiency during the training process tends to be lower. End-to-end autonomous driving traces its origins back to ALVINN<cit.> in 1988, which utilized a fully connected neural network to process data from cameras and radar sensors and output steering values. In recent years, with the continuous advancement in the field of deep learning and the increase in computational power, various methods for end-to-end autonomous driving have emerged, exhibiting differences in input formats, intermediate processes, and output formats. Hawke et al.<cit.> trained a neural network model using real-world traffic environment data, inputting data from three different directions of cameras and directly outputting control signals. TransFuser<cit.> utilized the attention mechanism from Transformers<cit.>, combining camera inputs and LIDAR data through multimodal fusion within the CARLA simulator<cit.>, outputting 2D waypoints, and utilizing PID controllers for lateral and longitudinal control. PlanT<cit.> takes target-level environment as input, employs Transformers for fusion, and outputs 2D waypoints. It visualizes attention weights to demonstrate the interpretability of driving decisions. Some studies modularize end-to-end models, such as Sadat et al.<cit.> and Hu et al.<cit.>, who predict the motion trends of other participants in the environment between the model input and output, enhancing safety during model planning and interpretability during decision-making. IL requires large datasets during training to enhance the model's generalization ability but may struggle to adapt to significantly different new scenarios from the training dataset. To alleviate the limitations posed by insufficiently comprehensive datasets, some studies enrich training datasets by acquiring driving data from other vehicles on the road. For instance, Zhang et al.<cit.> and Chen et al.<cit.> create more diverse driving datasets by capturing the driving behaviors of other vehicles. There are also studies that combine the aforementioned methods. For instance, Wang et al.<cit.> proposed the Neural RRT* algorithm, which utilizes neural networks to predict the probability distribution of the optimal path on the map and employs RRT* for searching. Zhang et al.<cit.> enhanced the performance of Neural RRT* in predicting the probability distribution of the optimal path by utilizing Generative Adversarial Networks (GAN). Although such methods enhance path planning efficiency, the planned paths cannot be directly followed by the vehicle control. Zhao et al.<cit.> employed GAN to learn the relationship between starting points, endpoints, and sequences of control actions in unstructured road environments. They integrated the RRT algorithm to extend the planned trajectory length. Although the algorithm can generate high-quality trajectories, they are not necessarily optimal. Li et al.<cit.> utilized convolutional neural networks to predict priority search areas in grid maps, followed by Monte-Carlo tree search (MCTS) for path planning. After path optimization and velocity planning, a feasible trajectory is generated. Du et al.<cit.> addressed lane-changing scenarios by using two neural networks: one predicts the feasibility of lane-changing, while the other predicts the lane-changing time and longitudinal displacement to determine the lane-changing trajectory Based on the above methods, in unstructured environments, traditional trajectory planning methods, although reliable, necessitate manual design of algorithm parameters and often result in high time and space complexity during the solving process. Trajectory planning methods based on optimal control can generate high-quality trajectories while considering vehicle kinematic constraints, however, they rely on a feasible initial solution to reduce optimization time. The quality of the initial feasible solution directly impacts the optimization rate<cit.>. In unstructured environments, end-to-end trajectory planning requires longer trajectories compared to structured environments. The end-to-end prediction trajectory is difficult to satisfy the vehicle's kinematics and has error fluctuations, making it unsuitable for vehicle control module tracking. Additionally, obstacles in such environments exhibit diverse forms, necessitating a fusion of complementary approaches with traditional methods to accomplish the task<cit.>. Drawing upon the advantages and disadvantages of the aforementioned algorithms, this study proposes a novel trajectory planning approach that integrates the concise and efficient nature of end-to-end methods with the ability of optimal control methods to learn driving trajectories while considering vehicle kinematic constraints and collision avoidance constraint. We employ a neural network model to not only learn 2D path point information but also the control inputs and state information of the vehicle, including vehicle position, heading angle, velocity, and front-wheel steering angle. The output values of the neural network serve as the initial solution for the optimal control algorithm, which refines them to yield a locally optimal trajectory. Simultaneously, the locally optimal solution serves as the ground truth for training the neural network. From the perspective of optimal control, the neural network efficiently provides an initial solution, thereby accelerating the optimization iterations of the optimal control problem. From the standpoint of end-to-end trajectory planning, the optimal control proposition ensures that the trajectory adheres to the vehicle's kinematic constraints, facilitating the tracking by the control system. Consequently, this research presents a novel approach to trajectory planning by combining end-to-end trajectory planning with optimal control propositions, leveraging their complementary strengths to enhance planning efficiency and trajectory feasibility. §.§ Contributions The main contributions of this paper are as follows: 1) We integrate neural networks with optimal control to devise a novel trajectory planning architecture, demonstrating the feasibility of this approach and enhancing both the efficiency and quality of trajectory planning. 2) We utilize graph neural networks to extract and encode information from unstructured environments, leading to a more streamlined and efficient neural network structure. 3) Neural networks predict trajectories as initial solutions for optimal control, simplifying the process of obtaining initial solutions and thereby improving trajectory planning efficiency. 4) Optimal control optimizes the trajectory predicted by the neural network, ensuring compliance with vehicle kinematic constraints and obstacle avoidance constraints. §.§ Organization The remaining structure of this paper is outlined as follows. In Section <ref>, we provide a brief description of trajectory planning propositions and introduce the algorithmic framework of this study. Section <ref> elaborates on the neural network structure and optimal control algorithm employed in the planning algorithm. Section <ref> presents the implementation details of our algorithm and compares it with mainstream planning algorithms through experimental evaluation. Section <ref> concludes our research findings. § SYSTEM FRAMEWORK OVERVIEW §.§ Trajectory Planning Problem The core objective of trajectory planning lies in identifying a safe and feasible trajectory given known environmental information and the starting point. Vehicle trajectory planning must initially satisfy obstacle avoidance constraints and vehicle kinematic constraints. In this paper, a trajectory is defined as σ. The trajectory should connect the initial state x_init with the goal state x_final. Under a certain cost function, the trajectory should be locally optimal. Therefore, the path planning proposition can be defined as follows: σ^*=σ∈∑arg min c(σ) s.t. σ(0)=x_init σ(t_final)=x_final σ(t)∈𝒳_free ∀ t ∈ [0, t_final] ∑ represents the set of all feasible trajectories, σ^* denotes the optimal trajectory, and 𝒳_free signifies the free space where collision with obstacles is avoided. §.§ System Framework Based on graph neural networks and the optimal control trajectory planning algorithm, as depicted in Fig. <ref>, the process consists of two operational phases: offline training and online prediction. §.§.§ offline training Offline training aims to enable the neural network to learn how to generate an initial trajectory under various tasks, serving as the initial solution for the optimal control problem. We prepared a large number of environmental maps and randomly specified the initial and target positions of the vehicle. Each pair of starting positions and environmental configurations constitutes a task. Subsequently, we utilized the Hybrid A* algorithm to solve each task and obtained feasible trajectories through velocity planning. These feasible trajectories serve as the initial solutions for the optimal control problem, which are refined to yield locally optimal trajectories through optimization. The combination of tasks and trajectories forms a sample, and the accumulation of all samples constitutes the training dataset for the neural network. §.§.§ online prediction Online prediction involves replacing the path planning and velocity planning stages with the trained neural network to enhance planning efficiency. The starting position, target position, and environment configuration of vectorization are used as inputs for graph neural networks. Through network inference, the predicted vehicle trajectory is outputted. This predicted trajectory serves as the initial solution for the optimal control problem, which is further refined to yield a locally optimal trajectory through optimization. The output of the neural network prediction determines the success of the optimal control problem's solution and the efficiency of finding the optimal value. High-quality prediction results will shorten the solution time of the optimal control problem. If the optimal control problem fails to converge, a trajectory will be solved using the rule-based Hybrid A* and Speed planner before reattempting optimization. § PROPOSED TRAJECTORY PLANNER In this section, we introduce two critical phases of trajectory planning in this paper: 1) employing neural networks to predict trajectories as initial solutions for OCP, and 2) optimizing the initial solutions provided by the neural networks to generate a locally optimal trajectory. The specific details are outlined below. §.§ Learning-based Initial Solution Generation Our neural network architecture draws inspiration from VectorNet<cit.> and PlanT. Unlike CNN models that take grid-based pixel points as input, we represent tasks using object-level representations and extract features through Graph Neural Networks (GNNs). In addition, our network structure is relatively simple and there is enough room for improvement. The network structure is depicted in Fig. <ref>. §.§.§ Task vectorization and subgraph generation Before vectorization, to standardize the initial state of each task, we convert the absolute positions of obstacles and target states in the Cartesian coordinate system into relative positions with respect to the initial state. The vectorization process is illustrated in Fig. <ref>. The start and end vectors contain the vehicle's initial coordinates and heading angle, forming the starting vector v^s and the target vector v^f. The vertices of each obstacle are connected to each other to form obstacle vectors v_i^o. To enable the GNN to efficiently acquire task information, we interconnect obstacle vectors, with each obstacle forming an obstacle subgraph. Simultaneously, the starting point and the endpoint form a state subgraph. All subgraphs constitute the task graph. §.§.§ Neural network structure To fully utilize the positional information contained within each subgraph, we perform a three-layer feature fusion for each subgraph, which includes the feedforward phase of a fully connected neural network and graph feature concatenation. The single-layer feature fusion propagated in each subgraph can be defined as follows: G_i^(l+1) = γ_con(φ_mlp(G_i^(l)),φ_agg({φ_mlp(G_i^(l))})) Here, G_i^(l+1) represents the subgraph features of the i-th subgraph in the l-th layer. The φ_mlp(·) is a multi-layer perceptron consisting of three fully connected layers with 128 neurons in the hidden layer. The φ_agg(·) denotes the feature aggregation layer, and γ_con(·) represents the feature concatenation layer. After three layers of feature fusion, to extract important features from the subgraphs within the task, we perform max pooling on each subgraph as follows: G_i^(l+1)=φ_maxpool(G_i^(l)) φ_maxpool(·) represents the max pooling layer. The max-pooled vector for each subgraph represents its most salient features. These features are concatenated to form a new graph structure, which then undergoes three rounds of feature aggregation to complete the feature extraction of the task graph. Following the feature fusion and extraction of the task graph, to reduce the size of the network, the structure only predicts five variables that constitute the vehicle's trajectory, denoted as T={x_j,y_j,θ_j,v_j,φ_j}. Here, x_j,y_j,θ_j,v_j,φ_j represent the position coordinates, heading angle, speed, and front-wheel steering angle at the j-th trajectory point, respectively. Unpredicted variables, such as vehicle acceleration a and front-wheel steering angular velocity w, are computed from v and φ. §.§ Optimize the Initial Trajectory In this section, based on previous research <cit.>, we formulate an OCP using neural network outputs as the initial solution. This includes defining the cost function to be minimized and the various constraints to be applied. Finally, we briefly introduce the solution approach for the OCP. §.§.§ Problem Overview Usually, the OCP for trajectory planning can be described in the following standard form: min J(u(t),x(t),t_final) ẋ(t)=f_kin(u(t),x(t)) x(0)=x_init,x(t_final)=x_final u(0)=u_init,u(t_final)=u_final u_min≤ u(t)≤ u_max f_path(u(t),x(t)) ∈𝒳_free In this context, J represents the cost function that includes both control and state variables. ẋ(t) denotes the derivative of the state variable x(t) with respect to time. u(t) is the control variable that provides inputs to the vehicle. f_kin() represents the relationship between the change in vehicle state and the current control variables, also known as the vehicle kinematic equation. x_init and x_final represent the initial and final states of the vehicle, respectively. The terms u_max and u_min denote the maximum and minimum control inputs. The workspace for trajectory planning is defined as 𝒳=𝒳_free∪𝒳_obs, where 𝒳_obs represents the obstacle space. The function f_path() maps the vehicle's state and inputs to the workspace, presenting them as a path that should not intersect with obstacles, i.e., f_path(u(t),x(t))∈𝒳_free=𝒳\𝒳_obs. The above defines the cost function and various constraints. Next, we will elaborate on these components in detail. §.§.§ Various constraints The constraints include kinematic constraints, initial and final state constraints, and obstacle constraints. §.§.§ kinematic constraints In unstructured environments, vehicles typically need to operate at low speeds. Therefore, this paper employs the bicycle model to represent the vehicle's kinematics. A schematic diagram of the vehicle parameters is shown in Fig. <ref>. The equation ẋ(t)=f_kin(u(t),x(t)) in equation (<ref>) is expanded as follows: [ ẋ(t); ẏ(t); θ̇(t); v̇(t); φ̇(t); ] = [ v(t)·cosθ(t); v(t)·sinθ(t); v(t)·tan φ(t)/L_w; a_t; w_t ],t ∈ [0, t_final] x(t) and y(t) represent the coordinates of the vehicle's rear axle center in the Cartesian coordinate system. θ(t) denotes the vehicle's heading angle, while v(t) and a(t) represent the vehicle's velocity and acceleration, respectively. φ(t) and w(t) stand for the front-wheel steering angle and angular velocity. L_w denotes the wheelbase of the vehicle. Additionally, in the practical operation of the vehicle, it should adhere to physical or mechanical constraints, satisfying: [ -v_max; -a_max; -φ_max; -w_max; ]≤[ v(t); a(t); φ(t); w(t); ]≤[ v_max; a_max; φ_max; w_max; ] §.§.§ State constraints When solving the OCP problem, it's essential to meet the fundamental requirements of the vehicle's initial and final states in the trajectory planning task. As in equation (<ref>), the initial and final states should satisfy: x(0)=x_init,x(t_final)=x_final In this article, we assume that at the beginning and end of a trajectory planning task, the vehicle has no input effect, i.e u(0)=u_init=0, u(t_final)=u_final=0 §.§.§ Obstacle constraints In unstructured environments, obstacles exhibit diverse forms, and the establishment of obstacle avoidance constraints is crucial for the completion of trajectory planning tasks. In previous research, <cit.> utilized the triangle area method to establish collision avoidance constraints. To reduce the number of constraints, <cit.> and <cit.> introduced collision-free corridors, confining the vehicle within these corridors. In this paper, to simplify the processing procedure, we adopt the triangle area method to establish constraints. The triangle area method is used to determine whether a point lies inside a polygon. As illustrated in Fig. <ref>, the relationship holds for the triangle formed by point P and the vertices of polygon ABCD: ∑ S_ P_i≥ S_□ ABCD, i=1 .. n ∑ S_ Pi=S_ PAB+S_ PBC+S_ PCD+S_ PDA When point P lies inside polygon ABCD, then: ∑ S_ P_i= S_□ ABCD, i=1 .. n otherwise: ∑ S_ P_i> S_□ ABCD, i=1 .. n We establish collision avoidance constraints by considering the four vertices of the vehicle's quadrilateral and each obstacle polygon, as well as by considering each vertex of the obstacle polygons and the vehicle's quadrilateral. This dual approach ensures comprehensive collision avoidance. §.§.§ Cost function The purpose of establishing a cost function is to find a locally optimal solution that minimizes the cost within the set of constraint-satisfying solutions. Constructing the cost function for the trajectory planning problem should consider multiple criteria, including the smoothness of the vehicle's trajectory, safety, and the time required to complete the task. Different weight coefficients should be assigned to these criteria to reflect their relative importance. To enhance the smoothness of the trajectory, sudden changes in speed and front-wheel steering angle should be minimized. This contributes to ride comfort and the effectiveness of the vehicle's control module in tracking the planned path. Therefore, let: J_1 = ∫_0^t_final w_i^2(t) +a_i^2(t)dt Regarding the safety of the trajectory, it should maintain a safe distance from obstacles. Assuming the geometric center of each obstacle is C_j=(x_j^c,y_j^c), the performance metric can be designed as follows: J_2 = ∑_j=1^N_obs(∫_0^t_finalγ^-d_j^2(t)dt) Here, d_j^2(t)=(x_i(t)-x_i^c)^2+(y_i(t)-y_i^c)^2 represents the squared distance between the vehicle and the center of obstacle. The larger the distance, the smaller the cost function. Assuming that each planned task consumes the same amount of time, i.e., t_final is a constant, the J in (<ref>) can be expressed as: J = λ_1·J_1 + λ_2·J_2 λ_1 and λ_2 represent the weight coefficients of the corresponding indicators. §.§.§ Solving OCP The solution of trajectory planning OCP requires the discretization of continuous variables, thus forming a NLP. When solving NLP, the establishment of initial solutions will affect whether the problem can be successfully solved and the speed of solving. Therefore, in previous studies, some scholars <cit.> used the Hybrid A* algorithm and its variants as the basis to construct initial solutions. Building upon this foundation, this paper proposes the use of GNN to predict trajectories for constructing initial solutions. For NLP with existing initial solutions, we utilize the open-source solver IPOPT for solution, thereby completing the entire trajectory planning process. Some detailed basic parameters are listed in Table <ref>. § EXPERIMENT ANALYSIS §.§ Implementation Details To validate the effectiveness of the proposed trajectory planning algorithm, a series of simulation experiments were conducted in typical scenarios. Detailed training procedures and simulation parameters are provided below. §.§.§ Training Details We prepared 800 maps with different random distributions of obstacles, and for each map, 15 pairs of random start and goal positions were selected. Each pair was solved using the Hybrid A* algorithm, and the resulting trajectory was optimized using IPOPT, forming one training sample. Due to the incompleteness of Hybrid A*, there were instances where no solution was found during task solving. After solving, a total of 10,426 local optimal trajectories were obtained as samples. 90% of the samples were used as the training set, while the remaining samples were used as the test set. During training, the Adam optimizer was utilized with a learning rate of 0.001. Training was conducted on an RTX3090 GPU for 200 epochs, with a warm-up period of 50 epochs, and a batch size of 32. §.§.§ Simulation Details During algorithm simulation testing, to facilitate subsequent comparison with mainstream planning algorithms, GNN inference was conducted on an Intel i9-12900H CPU to align with other computations. The simulation platform utilized Python for implementation. §.§ Analysis of the Proposed Planner This section demonstrates the performance of the proposed method in different obstacle environments. We tested the algorithm in four different obstacle environments as shown in Fig. <ref>. <ref>(a) and <ref>(b) consist of rectangular obstacles and irregular obstacles, while <ref>(c) and <ref>(d) comprise several irregular obstacles. These scenarios highlight the algorithm's ability to plan longer trajectories and handle complex obstacles. In all four scenarios, the proposed algorithm successfully completes the trajectory planning tasks, with the green points in the figures representing the trajectory points formed by the vehicle. Due to space limitations, we use <ref>(a) as a representative example to illustrate the trajectory generation process in detail, as shown in Fig. <ref>. Given the known environmental information, we use the trained neural network model to predict the vehicle trajectory. The inputs include environmental information, starting point information, and target point information. Through the GNN, the model outputs predicted trajectory points. In Fig. <ref>, Fig. <ref>(a) represents the predicted path points, consisting of 30 discrete points. Each discrete point includes values for the vehicle's orientation angle θ, steering angle φ, and velocity v. The steering angle velocity w and acceleration a required for the OCP problem are calculated from the steering angle and velocity. To match the number of predicted points with the optimization points required by the OCP, we perform linear interpolation between every two trajectory points to add three additional trajectory points, as shown in <ref>(b). The interpolated trajectory in <ref>(b) includes x, y, θ, φ, v, w and a, which will serve as the initial solution for the IPOPT iterative optimization. <ref>(c) shows the results after solving the OCP, and <ref>(d) compares the paths before and after optimization. Additionally, Fig.<ref> provides a detailed comparison of the trajectory parameters between the predicted trajectory and the local optimal trajectory from Fig.<ref>(d). Fig.<ref>(a), Fig.<ref>(b), Fig.<ref>(c), and Fig.<ref>(d) respectively show the comparisons of the path, heading angle, velocity, and steering angle between the predicted trajectory and the local optimal trajectory. It can be observed that the prediction from the neural network closely matches the results after OCP optimization. Therefore, the network output, after interpolation processing, can serve as the initial solution for the OCP problem. §.§ Comparison with mainstream planners In this section, the neural network combined with optimal control proposed in this paper is compared with current mainstream non-structured road trajectory planning algorithms combined with optimal control, including Hybrid A* and RRT* combined with optimal control, in both the initial trajectory generation and trajectory optimization stages. The testing environment is shown in Figure 6. The specific comparison results are shown in Table <ref>. In the first stage of planning, the initial solution is provided by the algorithm or the neural network. It can be observed that the RRT* algorithm exhibits the highest efficiency and the shortest time consumption due to its disregard for the vehicle's kinematic constraints. The Hybrid A* algorithm consumes more time, particularly in environments like map (a) and map (b) where the starting point is relatively distant from the target, as graph search algorithms require step-by-step expansion from the starting point to the endpoint, resulting in lower efficiency with longer distances. In comparison, the time consumption of the GNN remains stable at around 700ms. This stability arises from the fact that, given the same computational unit, the inference speed of the neural network is only determined by the network structure. In the second stage of planning, i.e., the OCP optimization stage, the Hybrid A*-OCP exhibits the shortest time consumption among the map(a), map(b), and map(d) experimental scenarios, while the RRT*-OCP has the longest time consumption among all experimental scenarios. The GNN-OCP slightly surpasses the Hybrid A*-OCP. In this stage, the duration of OCP optimization largely depends on the quality of the initial solution, the closer the initial solution to the optimal solution, the lower the time consumption and the faster the optimization rate. Among the four proposed experimental scenarios, the GNN-OCP proposed in this paper exhibits the shortest total time, significantly improving trajectory planning efficiency. To delve deeper into the performance analysis of the algorithms presented in Table <ref>, we computed the differences between the planned trajectories in the first stage and the local optimal solutions in the second stage for each algorithm, as shown in Fig. <ref>. Fig. <ref>(a) illustrates the cumulative sum of distances between the planned trajectory and the local optimal trajectory for each trajectory point. Figures <ref>(b), <ref>(c), and <ref>(d) depict the cumulative sums of differences for each trajectory point in terms of θ, v, and φ, respectively. It can be seen that the trajectory points predicted by the GNN are significantly closer to the locally optimal solution compared to the other algorithms. However, as shown in Table <ref>, in the experimental environments of map(a), map(b), and map(d), the optimization time of GNN-OCP is slightly higher than that of Hybrid A*. This is because the GNN faces a similar issue as the RRT* algorithm: it cannot completely satisfy the vehicle's kinematic constraints during trajectory prediction. The GNN predicts x, y, θ, φ, and v independently, ensuring each value adheres to its maximum and minimum constraints but without establishing correlations to meet the vehicle's kinematic equations. Compared to RRT*, the GNN's predicted trajectories are closer to the feasible solution range dictated by the vehicle's kinematic constraints, resulting in performance that is nearly on par with Hybrid A*-OCP. §.§ Parameter analysis and model defects As mentioned in Section <ref>, we employed a simple yet effective network. However, there is room for improvement in the network structure, and the algorithm parameters can be adjusted in various ways. This section will provide a brief analysis of the algorithm parameters. In selecting the number of trajectory points predicted by the neural network, we chose to use 30 prediction points. Due to inherent prediction errors in the neural network, the number of prediction points determines the smoothness of the predicted trajectory and the optimization rate. As shown in Fig. <ref>, Fig. <ref>(a) illustrates a trajectory with 120 predicted points, where the trajectory's smoothness is compromised due to prediction errors. When the number of prediction points is reduced, as shown in Fig. <ref>(b), and linear interpolation is applied between the prediction points, the final result is displayed in Fig. <ref>(c). When using neural networks, their performance tends to deteriorate when handling tasks significantly different from the training dataset, which is an inherent drawback of neural networks. In this study, the prediction error for trajectories increases with the length of the trajectory; the further from the starting point, the greater the error. As shown in Fig. <ref>, Fig. <ref>(a) illustrates the predicted trajectory and the positions of the trajectory points. Fig. <ref>(b) depicts the distance between each predicted trajectory point and its corresponding true trajectory label point, starting from the initial point. The distance gradually increases with the length of the trajectory. Nevertheless, as long as the error remains within a reasonable range, it will not affect the calculation of the locally optimal trajectory during the optimization phase. § CONCLUSION This paper proposes a two-stage trajectory planning method based on GNN and numerical optimal control. In the first stage, the GNN efficiently extracts environmental task information to predict an initial trajectory. In the second stage, optimal control computation is employed to optimize the trajectory, ensuring it meets vehicle kinematic constraints and collision avoidance requirements. This method simplifies the traditional trajectory planning process. Unlike conventional planning algorithms that require separate steps for path planning, speed planning, and trajectory optimization, the GNN-based initial planning benefits from the simplicity and efficiency of end-to-end trajectory planning. This reduces the need for extensive parameter tuning in traditional planning algorithms while enhancing the overall planning efficiency. The neural network structure proposed in this paper is simple and efficient, but there is significant room for improvement. Future work will consider larger datasets and deeper network structures to enhance the predictive capabilities of the neural network. In terms of prediction results, this paper uses parallel multilayer perceptrons to predict different trajectory parameters. While this approach achieves good results, it does not simultaneously incorporate different results into the vehicle kinematic model, which reduces the optimization process speed. Nonetheless, in the initial trajectory acquisition stage, the algorithm demonstrates sufficient efficiency to reduce the overall running time and simplify the workflow. The planning framework provided in this paper also offers a novel perspective for end-to-end trajectory planning: using numerical optimization methods to improve the trajectory quality predicted by neural networks. IEEEtran
http://arxiv.org/abs/2406.08250v1
20240612141836
Casimir Wormholes with GUP Correction in the Loop Quantum Cosmology
[ "Celio R. Muniz", "Takol Tangphati", "R. M. P. Neves", "M. B. Cruz" ]
gr-qc
[ "gr-qc" ]
celio.muniz@uece.br Universidade Estadual do Ceará (UECE), Faculdade de Educação, Ciências e Letras de Iguatu, Av. Dário Rabelo s/n, Iguatu - CE, 63.500-00 - Brazil. takoltang@gmail.com School of Science, Walailak University, Thasala, Nakhon Si Thammarat, 80160, Thailand. raissa.pimentel@uece.br Universidade Estadual do Ceará (UECE), Faculdade de Educação, Ciências e Letras de Iguatu, Av. Dário Rabelo s/n, Iguatu - CE, 63.500-00 - Brazil. messiasdebritocruz@servidor.uepb.edu.br Universidade Estadual da Paraíba (UEPB), Centro de Ciências Exatas e Sociais Aplicadas (CCEA), R. Alfredo Lustosa Cabral, s/n, Salgadinho, Patos - PB, 58706-550 - Brazil. § ABSTRACT In this paper, we obtain novel traversable, static, and spherically symmetric wormhole solutions, derived from the effective energy density and isotropic pressure resulting from the Casimir effect, corrected by the Generalized Uncertainty Principle (GUP) within the framework of Loop Quantum Cosmology (LQC). The goal is to explore the interplay between competing quantum gravity effects and quantum vacuum phenomena in the emergence of non-trivial spacetime structures. We examine features such as traversability, embedding diagrams, energy conditions, curvature, and stability of the obtained solutions. Additionally, we analyze the junction conditions required to integrate the wormhole spacetime with an external Schwarzschild spacetime and calculate the amount of exotic matter needed to maintain the wormhole. Finally, we evaluate the conditions under which this latter remains visible or is hidden by the event horizon associated with the Schwarzschild spacetime. Casimir Wormholes with GUP Correction in the Loop Quantum Cosmology M. B. Cruz June 17, 2024 =================================================================== § INTRODUCTION General Relativity (GR) is widely recognized as the most precise theory describing gravitational phenomena. Among its most intriguing predictions are black holes (BHs) <cit.> and wormholes <cit.>. These hypothetical structures could act as space-time conduits, connecting distant points in the universe and potentially enabling shortcuts for space-time travel <cit.>. While black holes have been directly observed through gravitational wave (GW) detections from merging binary BHs – achieved with remarkable accuracy by the LIGO and Virgo collaborations <cit.> – wormholes remain an undetected yet fascinating prediction of GR. Additionally, images of black hole shadows captured by the Event Horizon Telescope (EHT) collaboration provided visual confirmation of black hole existence and characteristics, further substantiating GR's predictions <cit.>. Wormholes represent potential phenomena that could aid in unraveling one of the most perplexing issues in contemporary theoretical physics: the nature of quantum gravity. This is because, in the presence of an intensely strong gravitational field, the quantum properties of spacetime must come into play, generating non-trivial structures as those objects on a microscopic scale, according to the concept of “quantum foam” <cit.>. Currently, two primary contenders for a theory of quantum gravity are Loop Quantum Gravity (LQG) <cit.>, in competition with String Theory (ST) <cit.>. Within the framework of LQG, it is possible to develop intriguing theoretical models shedding light on the quantum characteristics of spacetimes, as revealed by BHs <cit.> and wormholes <cit.>. Loop Quantum Cosmology (LQC), which applies the principles of LQG to cosmological settings, provides profound insights into quantum gravitational properties. These studies are crucial for understanding how quantum gravitational effects manifest in the structure of spacetime, potentially resolving singularities and other cosmological problems by considering a fundamental critical density near the Planck scale <cit.>. Notably, LQC has also provided models exploring the quantum effects associated with spacetime on black holes <cit.> and wormholes <cit.>. In this direction, Casimir wormholes – theoretical constructs supported by Casimir energy densities and pressures – utilize the negative energy density generated by quantum vacuum fluctuations between conducting plates to stabilize their structure <cit.>. Over the years, studies have emerged to investigate the effect of curved spaces on Casimir energy density <cit.>. On the other hand, following Garattini's work, this energy density and the associated pressure was incorporated as source in Einstein's equations, resulting in the formation of wormholes for the Casimir effect of the electromagnetic field in (3+1) <cit.>, (2+1) <cit.> and D dimensions <cit.>. Also with the Casimir effect of the Yang-Mills field in (2+1) dimensions <cit.> and extensions of General Relativity <cit.>. Recent advancements have incorporated the Generalized Uncertainty Principle (GUP) into the framework of Casimir wormholes. GUP, which introduces modifications to the Heisenberg uncertainty principle to account for quantum gravitational effects at small scales <cit.>, provides corrections to the wormhole geometry. These corrections can significantly influence the stability and traversability of Casimir wormholes, potentially enhancing their feasibility within a quantum gravity context. Studies indicate that GUP-corrected Casimir wormholes exhibit modified energy conditions and structural properties, offering new insights into their quantum characteristics and the broader implications for quantum gravitational theories <cit.>. This work aims to obtain novel traversable, static, and spherically symmetric wormhole solutions, sourced by both the effective energy density and isotropic pressure arising from the Casimir effect, corrected by GUP within the framework of LQC theory. The objective is to understand the relationship between quantum gravity effects and quantum vacuum phenomena in the emergence of non-trivial spacetime structures. The working hypothesis posits that the quantum vacuum, which in principle can generate and stabilize wormholes, combines with microscopic effects arising from the existence of a minimum length associated with the GUP correction, along with cosmological effects related to a maximum density that prevents singularity formation according to LQC, resulting in new aspects and properties for those objects. By examining these competing quantum gravity effects, we aim to elucidate the conditions under which stable and traversable wormholes can exist, providing new insights into the nature of quantum gravity and its impact on spacetime geometry. The paper is organized as follows: In Section II, we will explore the theoretical construction and properties of Casimir wormholes modified by GUP in the scenario of LQC, examining features of traversability, embedding diagrams, energy conditions, curvature, and stability of the obtained solutions. In Section III, we will perform a junction condition analysis required to integrate the wormhole spacetime with an external Schwarzschild spacetime. In Section IV, we will calculate the amount of exotic matter required to keep the wormhole throat open, considering the extension of the corresponding spacetime up to the junction interface. Additionally, we will evaluate the conditions under which the wormhole remains visible or is hidden by the event horizon associated with the Schwarzschild spacetime. Finally, in Section V, we will present our conclusions and close the paper. Throughout this paper, we will utilize natural units with G = c = ħ = 1 and adopt the metric signature (-, +, +, +). § LQC-INSPIRED CASIMIR WORMHOLES WITH GUP CORRECTION Initially, we will define the following quantities related to the electromagnetic Casimir energy density and pressure, with GUP correction, as the source for the static and spherically symmetric wormholes to be investigated: ρ(r) = -k/r^4(1+5β/3r^2) and p(r) = ωρ(r), where k = π^2/720, ω = 3, and β>0 is a constant defining the GUP correction. We will analyze the wormhole structure using a generic state parameter ω, with particular emphasis on the case where ω=3, which corresponds to strict Casimir wormholes. By utilizing these quantities, we aim to derive new traversable wormhole solutions and understand the role of GUP corrections in influencing energy density and isotropic pressure within the framework of LQC theory. It is worth mentioning that, in this context, there are essentially two models of GUP correction <cit.>; however, the difference between them is irrelevant to our investigation since it is represented by a unit-order factor accompanying the parameter β, which has a very tiny value <cit.>. The sought solution arises from an effective matter fluid incorporating quantum effects within LQC. Consequently, the effective gravity-matter system obeys the matter-side modified Einstein equation: G^μ_ ν≡ R^μ_ ν - 1/2 g^μ_ ν R = 8 π T^μ_ ν , where T_μν denotes the effective energy-momentum tensor. The energy-momentum tensor corresponding to an isotropic perfect fluid that is compatible with LQC is: T^μ_ ν = diag(-ρ_e, p_e, p_e, p_e), where ρ_e = - G^t_ t, p_e = G^r_ r = G^θ_θ = G^ϕ_ϕ. The analytical expressions for the effective energy density and pressure are given by <cit.>: ρ_e(r) = ρ(1-ρ/ρ_c), p_e(r) = p-ρ(2p+ρ/ρ_c), where ρ_c represents the critical density in the context of LQC. It is noteworthy that these effective quantities found application within the domain of traversable wormholes in the braneworld scenario, being integrated alongside other parameters <cit.>. Therefore, moving forward, we will particularly focus on the static and spherically symmetric Morris-Thorne wormhole metric as presented by <cit.>: ds^2=-e^2Φ(r)dt^2+dr^2/1-b(r)/r+r^2dΩ_2, Here, Φ(r) represents the redshift function, b(r) is the shape function, and dΩ_2=dθ^2+sin^2θ dϕ^2 denotes the spherical line element. Given the metric ansatz of Eq. (<ref>), modified Einstein's equations take on their simplest form: G_ t^t = b'/r^2= 8 πρ_e(r), G_ r^r = -b/r^3+ 2(r-b)Φ'/r^2= 8 π p_e(r), G_θ^θ = G_ϕ^ϕ = (1-b/r)[Φ”+(Φ')^2+(b-rb')/2r(r-b)Φ'+(b-rb')/2r^2(r-b)+Φ'/r]= 8 π p_e(r), The quantities ρ(r) and p(r)=ωρ(r) which entry in the effective densities and pressures are described by Eq. (<ref>), and will be regarded as the sources for the new wormhole solutions investigated here. Another feature to be studied in the traversable wormhole solutions that we will obtain later is the conservation equation, which for an isotropic fluid is given by -dp_e/dr-Φ'(ρ_e+p_e)=0, in the context of the modified Einstein's equations under inspection. §.§ Wormhole solution To determine the wormhole solution given the Casimir energy density and isotropic pressure with GUP correction, we first solve for the shape function using the modified Einstein equation for the effective energy density. Then, we use the conservation equation to find the redshift function. The apparent inconsistency with the remaining Einstein equations is overcome since these equations do not provide independent constraints, making them redundant. Additional constraints often emerge from the need to ensure the regularity of the redshift function and analyze junction conditions. However, our use of the conservation equation will reveal that imposing regularity on the redshift function is unnecessary. Thus, from Eqs. (<ref>), (<ref>), and (<ref>), we find that the wormhole shape function is given by b(r)=r_0 - 8 π k/ r_0(1+5β/9 r_0^2+ k/5 ρ_c r_0^4+10 k β/21ρ_c r_0^6+ 25 kβ^2 /81 ρ_c r_0^8) + 8 π k/r(1+5β/9 r^2+ k/5 ρ_c r^4+10 k β/21ρ_c r^6+ 25 kβ^2 /81 ρ_c r^8). where we have taken into account the boundary condition b(r_0)=r_0 to determine the integration constant. According to the prescription found in Refs. <cit.>, the shape function determines the geometry of the wormholes and it must adhere to the other following criteria: (i) For radial distances r > r_0, the ratio of the shape function to the radial distance must be less than unity, expressed as b(r)/r < 1. (ii) The flaring-out condition b(r) - b'(r) r > 0 to ensure the throat of the wormhole is the smallest size of the whole structure. (iii) The derivative of the shape function concerning the radial distance at the throat must be less than unity, i.e., b'(r)_r=r_0 < 1. Finally, (iii) implies a minimum size for the throat, which, in turn, minimizes the amount of exotic matter required at the throat to violate the NEC. In Figure <ref>, we can verify that i) is satisfied, as well as ii), since the derivative of the shape function at the throat is always negative. In the left panel, we vary ρ_c and fix β. In the right panel, β is variable and ρ_c fixed. In Figure <ref>, we plot some profiles of embedding diagrams for the LQC-Casimir GUP corrected wormholes, which are generated from the mapping of the metric spatial sector in cylindrical coordinates at the equatorial plane, via dr^2+r^2dϕ^2+dz^2=dr^2/1-b(r)/r+r^2dϕ^2⇒ z(r)=∫_0^r[b(u)/u/1-b(u)/u]^1/2du. Note that as the value of ρ_c decreases, in the left panel, (β increases, in the right panel), indicating a more evident manifestation of quantum gravity, the inclination towards the wormhole throat becomes less pronounced, approaching asymptotic flatness. The 3-D embedding diagram indicates the same behavior when we vary ρ_c and fix β. Regarding the redshift function, from Eq. (<ref>), we find -g_tt(r)= e^2Φ(r)=(r/r_0)^12(1+2ω)/1+ω(5β+3r_0^2/5β+3r^2)^2ω/1+ω(10β k+6k r_0^2+3ρ_cr_0^6/10β k+6k r^2+3ρ_cr^6)^2, The integration constant was chosen to ensure e^2Φ(r_0)=1. It's important to note that while the resulting solution remains regular within the domain of r, the solution expressed in Eq.(<ref>) lacks asymptotic flatness, although, as we will see, the curvature scalars vanish at infinity. Therefore, we must apply junction conditions such that beyond a certain radius r=R, the spacetime transitions to the Schwarzschild vacuum solution. This transition and its implications will be discussed in detail in the next section. §.§ Energy conditions To ensure the existence of solutions within the framework of GR and its modifications, one considers realistic sources for the energy-momentum tensor. We apply the coordinate-independent description of the energy conditions, i.e., null (NEC), weak (WEC), and strong (SEC), on the Casimir traversable wormholes with the GUP correction in the LQC scenario. Thus, we consider the isotropic wormhole and the effective energy-momentum tensor given in Eq. (<ref>). * NEC is defined as T_μν k^μ k^ν > 0 for all null vectors k^μ which provides the condition ρ_e + p_e ≥ 0. * WEC is defined as T_μν t^μ t^ν > 0 for all timelike vectors t^μ which provides the conditions ρ_e ≥ 0, ρ_e + p_e ≥ 0 * SEC is defined as ( T_μν - 1/2 T g_μν) t^μ t^ν≥ 0 which provides the conditions ρ_e + 3 p_e ≥ 0, ρ_e + p_e ≥ 0 We can observe a violation of the energy conditions, particularly in the vicinity of the wormhole's throat, consistent with the findings of Li and Zhu <cit.>. We also calculate the quantities ρ_e, ρ_e + p_e and ρ_e + 3 p_e for investigating the energy conditions' violation of these wormholes. In fig. <ref>, we find that the critical energy density ρ_c could reduce the violation level on NEC, WEC and SEC around the throat of the traversable wormholes as shown that ρ_e, ρ_e + p_e and ρ_e + 3 p_e have the less negative at the throat with ρ_c →∞. We consider the effect of GUP by varying β∈ [0,1]. Higher β values lead to greater violations of the NEC, WEC, and SEC around the wormhole throat. Similarly, decreasing ρ_c increases the violation of energy conditions near the throat, indicating that stronger quantum gravity effects accentuate these violations. §.§ Stability of the solutions We will now investigate the stability of the obtained wormhole solutions by analyzing the squared sound velocity of the fluid along the radial direction <cit.>. This quantity, denoted as (v_s)^2, can be calculated as follows: (v_s)^2=d p_e/dρ_e=d p_e/dr/dρ_e/dr. The stability of the wormhole is assured if (v_s)^2 > 0, and it must satisfy (v_s)^2 < 1 to be physically meaningful. In the left panel of Figure <ref>, we illustrate this quantity's radial dependence for the strict Casimir wormhole (ω=3), while varying the critical density parameter ρ_c and holding the remaining parameters constant. Despite the wormhole's stability, it's noteworthy that v_s^2>1. In the right panel of the same figure, we relax the constraint on pressure to be solely Casimir, thus mapping out the parameter space (ω, ρ_c) where both stability and physicality conditions are satisfied. It is worth highlighting that for significant quantum corrections stemming from LQC (ρ_c→ 0), these conditions are only fulfilled for exotic matter within a range of negative ω. §.§ Curvature scalars The geometry of the wormhole is one of the major factors contributing to the possibility of traveling. The investigation of the whole structure for the singularities by considering the divergence from Eq. (<ref>) is not sufficient due to the coordinate-dependent metric tensor. To find the singularities, we present the coordinate-independent quantities; Ricci and Kretschmann scalars as shown in Fig. <ref> and <ref>. Considering the effect of the central energy density on the structure, we find that the Ricci scalar is lowest near its wormhole's throat (r = r_0 = 1) but never diverges as in the left panel of Fig. <ref>. The minimum of the Ricci scalar increases as the central energy density increases. Moreover, the negativity effect reduces as the distance increases from the throat. On the right panel of Fig. <ref>, the value of the Kretschmann scalar drops rapidly and there is no divergence as shown in the semi-log plot. Its maximum is exactly at the throat of the wormhole. The higher central energy density decreases the maximum of the Kretschmann scalar at the throat. As distance from the throat increases, the Kretschmann scalar also decreases. Next, we consider the effect of the constant of GUP correction β on the structure. On the left panel of Fig. <ref>, the lowest value of the Ricci scalar is near the throat of the wormhole and it decreases as β increases. Like the previous case, the negativity effect fades as the distance from the throat increases. On the right panel of Fig. <ref>, an increase in β causes an increase of the Kretschmann scalar at the throat. The value rapidly reduces as the distance increases. We can conclude that the structure of the traversable wormhole with GUP correction in the context of LQC does not have the singularity which confirms that the travel by this structure is possible. Besides this, the quantum effects due to these corrections increase the absolute value of the curvatures near the wormhole throat. § JUNCTION CONDITIONS Given that we must have two spacetimes separated by a surface with radius R, it is essential to establish the junction conditions to ensure the continuity of spacetime and to prevent the occurrence of singularities on this surface. Consequently, we expect the metric to be continuous across the junction surface. To achieve this, we will apply the Israel junction conditions, which involve calculating the surface energy density and surface tension <cit.>. These conditions are crucial for maintaining the smoothness and physical plausibility of the wormhole structure. To calculate the surface energy density and the surface tension, we need to obtain the intrinsic energy-momentum tensor, S_ij, which generally, is given by the Lanczos equation <cit.>: S^i_j=-κ^i_j+δ^i_jκ^k_k, where κ_ij represents the discontinuity in the extrinsic curvature across the surface and is written as κ_ij=κ_ij^+-κ_ij^-. The extrinsic curvature is expressed as κ_ij^±=- n_ν^±.[∂^2 X_ν/∂ξ^i∂ξ^j+Γ^ν_αβ∂ X^α/∂ξ^i∂ X^β/∂ξ^j]|_S, with n_ν^± being a unit vector with n^ν n_ν=1 and is written as n_ν^±=±|g^αβ∂ f/∂ X^α∂ f/∂ X^β|^-1/2∂ f/∂ X^ν, while the intrinsic coordinate at the surface of the wormhole is represented by ξ^i. The intrinsic coordinate also satisfies the parametric equation f(x^α(ξ^i))=0. For a spherically symmetric spacetime, the energy-momentum tensor can be expressed in terms of the surface energy density, Σ, and the surface tension, 𝒫, as <cit.>: S^i_j = diag(-Σ, 𝒫, 𝒫, 𝒫), where Σ = -2/R(√(1-2M/R)-√(1-b(R)/R)), 𝒫= 1/R[ 1-M/R/√(1-2M/R)-(1+RΦ'(R))√(1-b(R)/R)], where M is the mass of the Schwarzschild solution, interpreted here as the proper mass of the wormhole, and r=R is where the junction conditions will be valid. These conditions will combine thus the Casimir wormhole solution with the Schwarzschild one. The derivative of redshift function Φ'=dΦ(r)/dr at this radius is Φ^'(R)=6 (5 β+2 R^2) [2 k (2 ω+1) (5 β+3 R^2)+3 ρ_c R^6 ω]/R (ω+1) (5 β+3 R^2) (10 k β+6 k R^2+3ρ_c R^6). At the junction surface, the surface density and the surface tension must be null, in such a way that we obtain <cit.>: √(1-2M/R)=√(1-b(R)/R), 1-M/R/√(1-2M/R)=(1+RΦ'(R))√(1-b(R)/R). The condition (<ref>) is equivalent to impose .g_rr|_int=.g_rr|_ext. From it one we have that b(R)=2M. Substituting this in (<ref>) and using Eqs. (<ref>) and (<ref>), we can find a very involved expression for ρ_c as a function of r_0, β, ω, and R. A third junction condition arises from matching the derivatives of g_rr between the wormhole and Schwarzschild spacetimes at r = R. Considering these three conditions, it can be shown that for ω = 3, only the first two are satisfied. However, for a range of negative values of ω, all three conditions are met. In left panel of Figure <ref>, we fit the curve that best matches the points (β, R) obtained from the three junction conditions for ω = -1/3 and r_0 = 1. It is evident that for lower values of β, the junction radius R is larger. For instance, for β = 0.073, R = 2.4. Hence we calculate ρ_c ≈ 0.0009. Therefore, while the quantum correction due to GUP is small, the correction related to LQC is very high (ρ_c → 0). thus, in this case, the wormhole spacetime extends compared to the Schwarzschild spacetime due to the differing impacts of quantum corrections from GUP and LQC. Large GUP corrections minimally alter the spacetime, leading to slight extensions of the wormhole. In contrast, significant LQC corrections, which introduce a fundamental maximal density, prevent singularities and drastically modify spacetime, resulting in a much more pronounced extension of the wormhole structure. On the other hand, when we consider solely the conditions involving the tension and pressure on the junction surface, we have a wider variety of scenarios. In Figure <ref>, right panel, we illustrate the critical density as a function of R for r_0 = 1 and ω = 3. Notably, for these strict Casimir wormholes, the possible junction conditions cannot be maintained far from the wormhole throat. In this case, the distance limit is approximately R_l ≈ 1.084. For lower values of β (i.e., the lesser the correction due to the Generalized Uncertainty Principle), this distance can be shown to be slightly greater. In Fig. <ref>, we depict the parameter space (ω, R), highlighting the allowed regions where ρ_c > 0. It is noteworthy that for -1/2 < ω < 0, i.e., deformed Casimir wormholes with exotic matter, the junction position can occur at any point, even very far from the throat. Thus, in the context of a Casimir wormhole corrected by GUP, with ω=3, a larger parameter β corresponds to stronger quantum corrections, which can counteract the gravitational collapse and allow the wormhole solution to persist only for short distances before being overtaken by the classical Schwarzschild solution. However, if we consider negative state parameters (exotic matter), the quantum effects can dominate over the classical gravitational collapse, leading to a larger region where the wormhole solution prevails. § QUANTITY OF EXOTIC MATTER AND WORMHOLE VISIBILITY We are now going to analyze the Volume Integral Quantifier (VIQ), defined by <cit.> ℐ_v=∫_r_0^R 4π r^2 (ρ_e+p_e)dr, where R is the radius of the junction interface. The objective is to obtain the amount of exotic matter necessary to keep its throat open in the scenario under analysis. Thus, we have taken the integral on the effective energy density and pressure, given from LQC combined with the GUP-corrected Casimir quantities. Plugging Eqs. (<ref>), (<ref>), and (<ref>) into (<ref>), we obtain ℐ_v=16 k π/R - 16 k π/r_0 + 56 k πβ/9 R^3 - 56 k πβ/9 r_0^3 + 32 k^2 π/5 R^5 ρ_c - 32 k^2 π/5 r_0^5 ρ_c + 272 k^2 πβ/21 R^7 ρ_c - 272 k^2 πβ/21 r_0^7 ρ_c + 560 k^2 πβ^2/81 R^9 ρ_c - 560 k^2 πβ^2/81 r_0^9 ρ_c, for ω=3. The plot in Figure <ref> indicates that, for junction points near the throat of the Casimir wormhole, the amount of exotic matter required is less (more) for higher (lower) quantum corrections due to the GUP, since ℐ_v<0, which is consistent with the violation of the energy conditions. However, for sufficiently distant junction points, approaching the limit where junction conditions are valid in the context of LQC (i.e., where ρ_c>0), the amount of exotic matter required is less (more) for lower (higher) values of the GUP parameter. In any case, the further the junction interface is from the throat, the less exotic matter is needed. The presence of a Schwarzschild spacetime beyond the wormhole permits the existence of an event horizon, as we can attribute a mass to the wormhole, M = b(R)/2. Consequently, we can ascertain whether the wormhole is visible or hidden from an external observer, depending on whether 2M < r_0 or 2M > r_0, respectively. Figure <ref> illustrates the parameter space (ω, R), highlighting the regions where the wormhole is either visible or hidden. This analysis assumes that only the first two junction conditions are satisfied. Notably, the strict Casimir wormhole remains consistently visible as long as R < R_l ≈ 1.084, as previously demonstrated. This consistent visibility within the specified parameter range underscores the distinctiveness of the Casimir wormhole under these conditions. § CONCLUSIONS In this paper, we have discovered novel static and spherically symmetric traversable isotropic Casimir wormholes with a generic GUP correction within the framework of LQC. We derived the shape function from the modified time component of Einstein's equation and the redshift function from the modified conservation equation, initially considering the linear state equation p(r)=ωρ(r) and then specializing to ω=3 for strict Casimir wormholes. Both functions satisfy the criteria for traversability and depend on all involved parameters. Consequently, the violation of energy conditions, embedding diagrams, stability of solutions, and curvature scalars are highly sensitive to the competing quantum gravity effects determined by the GUP and LQC parameters, as demonstrated throughout Section II. Since the behavior of the redshift function at infinity leads to a solution that is not asymptotically flat, we had to impose junction conditions to integrate the wormhole spacetime with the Schwarzschild one. Our findings revealed that strict Casimir wormholes adhere to only two out of the three junction conditions. The third condition, which allows for a smooth transition from one region to another, is not upheld. In this case, the junction interface is very close to the wormhole throat, according to the right panel of Figure <ref>. On the other hand, for wormholes sourced by Casimir energy densities and isotropic pressures conforming to equations of state within a specific range of negative ω values, all junction conditions are met. In other words, this particular exotic matter type enables smooth transitions between the different spacetimes. Consequently, the interface region expands beyond that of the preceding scenario, where the quantum corrections due to GUP are minimal, but the ones due to LQC are maximal, as shown in the left panel of the same figure. We also have shown that the position of the junction interface is crucial in determining the quantity of exotic matter required to maintain Casimir wormholes with GUP corrections within the context of LQC. If the junction interface is close to the throat, a larger amount of exotic matter is required, consistent with the more pronounced violation of energy conditions at that location. This effect is especially significant when quantum corrections due to GUP and LQC are substantial, as shown in Figure <ref>. Ultimately, we examined the position of the wormhole solution concerning the event horizon of the Schwarzschild spacetime, as determined by the junction conditions, and evaluated its visibility and accessibility for external observers. Our findings, illustrated in Figure <ref>, suggest that the strict Casimir wormhole is fully accessible from the exterior, implying that observing such an object could theoretically be feasible. § ACKNOWLEDGMENTS CRM thanks the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Grants no. 308268/2021-6. TT is supported by School of Science, Walailak University, Thailand. 99 Oppenheimer:1939ue J. R. Oppenheimer and H. Snyder, Phys. Rev. 56, 455-459 (1939). Hawking:1988ae S. W. Hawking, Phys. Rev. D 37, 904-910 (1988). Morris:1988cz M. S. Morris and K. S. Thorne, Am. J. Phys. 56, 395-412 (1988). Einstein:1935tc A. Einstein and N. Rosen, Phys. Rev. 48, 73-77 (1935). Morris:1988tu M. S. Morris, K. S. Thorne and U. Yurtsever, Phys. Rev. Lett. 61, 1446-1449 (1988). Frolov:2023res V. P. Frolov, P. Krtous and A. Zelnikov, Phys. Rev. D 108, no.2, 024034 (2023). LIGOScientific:2016aoc B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 116, no.6, 061102 (2016). EventHorizonTelescope:2019dse Event Horizon Telescope Collaboration and others, Astrophys. J. Lett. 875, L1, (2019). Wheeler:1955zz J. A. Wheeler, Phys. Rev. 97, 511–536 (1955). Rovelli:2014ssa C. Rovelli and F. Vidotto, Covariant Loop Quantum Gravity: An Elementary Introduction to Quantum Gravity and Spinfoam Theory, Cambridge University Press, 2014. Rovelli:2003wd C. Rovelli, Int. J. Mod. Phys. D 12, 1509-1528 (2003). Zwiebach:2004tj B. Zwiebach, A first course in string theory, Cambridge University Press, 2004. Modesto:2005zm L. Modesto, Class. Quant. Grav. 23, 5587-5602 (2006). Peltola:2008pa A. Peltola G. Kunstatter, Phys. Rev. D79, 061501 (2009). Gambini:2013ooa R. Gambini and J. Pullin, Phys. Rev. Lett. 110, no.21, 211301 (2013). Olmedo:2016ddn J. Olmedo, Universe 2, no.2, 12 (2016). Cruz:2024ihb M. B. Cruz, R. M. P. Neves, and C. R. Muniz, JCAP 05, 016 (2024). Ashtekar:2006es A. Ashtekar et al., Phys. Rev. D75, 024035 (2007). Ashtekar:2008ay A. Ashtekar, J. Phys. Conf. Ser. 189, 012003 (2009). Bojowald:2008ik M. Bojowald, Class. Quant. Grav. 26, 075020 (2009). Singh:2009mz P. Singh, Class. Quant. Grav. 26, 125005 (2009). Corichi:2011sd A. Corichi and E. Montoya, Int.J.Mod.Phys. D21, 1250076 (2012). Dwivedee:2011fd D. Dwivedee, B.  Nayak, M. Jamil, and L. B. Singh, J. Astrophys. Astron. 35, 97 (2014). Lewandowski:2022zce J. Lewandowski, Y.  Ma, Yongge J. Yang, and C. Zhang, Phys. Rev. Lett. 130, 10, 101501 (2023). Lin:2024flv J. Lin and X. Zhang, arXiv: 2402.13638, [gr-qc], 2 (2024). Accepted in PRD. Li:2008sw L-F. Li and J-Y. Zhu, Phys. Rev. D79, 044011, (2009). Sengupta:2023yof R. Sengupta, S. Ghosh, and M. Kalam, Eur.Phys.J.C 83 9, 830 (2023). Visser:1995cc M. Visser, Lorentzian wormholes: From Einstein to Hawking, ISBN: 978-1-56396-653-8 (1995). Sorge:2019kuh F. Sorge, Int. J. Mod. Phys. D 29 (2019) no.01, 2050002 Santos:2020taq A. C. L. Santos, C. R. Muniz and L. T. Oliveira, Int. J. Mod. Phys. D 30 (2021) no.05, 2150032 Santos:2021jjs A. C. L. Santos, C. R. Muniz, and L. T. Oliveira, EPL 135, 1, 19002 (2021). Mota:2022qpf H. F. S. Mota, C. R. Muniz, and V. B. Bezerra, Universe 8, 11, 597 (2022). Garattini:2019ivd R. Garattini, Eur. Phys. J. C 79 (2019) no.11, 951. Alencar:2021ejd G. Alencar, V. B. Bezerra and C. R. Muniz, Eur. Phys. J. C 81, 10, 924 (2021). Oliveira:2021ypz P. H. F. Oliveira, G. Alencar, I. C. Jardim and R. R. Landim, Mod. Phys. Lett. A 37, 15, 2250090 (2022). Santos:2023zrj A. C. L. Santos, C. R. Muniz and R. V. Maluf, JCAP 09 (2023), 022 Hassan:2022ibc Z. Hassan, S. Ghosh, P. K. Sahoo and V. S. H. Rao, Gen. Rel. Grav. 55 (2023) no.8, 90 Zubair:2023abs M. Zubair, S. Waheed, M. Farooq, A. H. Alkhaldi and A. Ali, Eur. Phys. J. Plus 138 (2023) no.10, 902 Hassan:2022hcb Z. Hassan, S. Ghosh, P. K. Sahoo and K. Bamba, Eur. Phys. J. C 82 (2022) no.12, 1116. Azmat:2023ygn H. Azmat, Q. Muneer, M. Zubair, E. Gudekli, I. Ahmad and S. Waheed, Nucl. Phys. B 998, 116396 (2024). doi:10.1016/j.nuclphysb.2023.116396 Mishra:2023bfe A. K. Mishra, Shweta and U. K. Sharma, Universe 9, 4, 161 (2023). Farooq:2023rsp M. Farooq, Mushayydha and M. Zubair, Annals Phys. 459, 169542 (2023). Khatri:2024sdi M. Khatri and J. Lalvohbika, Chin. J. Phys. 89, 1222 (2024). Ali:2010yn A. Ali, S. Das, and E. C. Vagenas, The Generalized Uncertainty Principle and Quantum Gravity Phenomenology, in 12th Marcel Grossmann Meeting on General Relativity, 1, 2407–2409 (2010). Jusufi:2020rpw K. Jusufi, P. Channuie, and M. Jamil, Eur. Phys. J. C80, 2, 127 (2020). Tripathy:2020ehi S. K. Tripathy, Phys. Dark Univ. 31, 100757 (2021). Carvalho:2021ajy I. D. Carvalho, G. Alencar, and C. R. Muniz, Int. J. Mod. Phys. D31, 03, 2250011 (2022). Samart:2021tvl D. Samart, T. Tangphati, and P. Channuie, Nucl. Phys. B980, 115848 (2022). Jawad:2022hlm A. Jawad, U. ur Rehman, S. Rani, and A. Övgün, Int. J. Mod. Phys. D31, 16, 2250114 (2022). Channuie:2024cao P. Channuie, Nucl. Phys. B1004, 116572 (2024). Gomes:2022hva A. H. Gomes, Class. Quant. Grav. 39, 22, 225017 (2022). Sengupta:2021wvi R. Sengupta, S. Ghosh, M. Kalam, and S. Ray, Class. Quant. Grav., 39, 10, 105004 (2022). Capozziello:2022zoz S. Capozziello and N. Godani, Phys. Lett. B835, 137572 (2022). Israel:1966rt W. Israel, Nuovo Cim. B 44S10, 1 (1966) [erratum: Nuovo Cim. B 48, 463 (1967)]. Lanczos:1924bgi K. Lanczos, Annalen Phys. 379, no.14, 518-540 (1924). Lobo:2005yv F. S. N. Lobo, Phys. Rev. D 71, 124022 (2005). Lobo:2004id F. S. N. Lobo, Class. Quant. Grav. 21, 4811-4832 (2004). Sengupta:2023ysx R. Sengupta, S. Ghosh, B. C. Paul and M. Kalam, Class. Quant. Grav. 40, no.9, 095009 (2023). Nandi:2004 K. K. Nandi, Y. Z. Zhang, and K. B. Kumar, Phys. Rev. D 70, 127503 (2004).
http://arxiv.org/abs/2406.08738v1
20240613015159
Volatility Forecasting Using Similarity-based Parameter Correction and Aggregated Shock Information
[ "David Lundquist", "Daniel Eck" ]
stat.AP
[ "stat.AP", "stat.ME" ]
An AI Architecture with the Capability to Explain Recognition Results Paul Whitten, Francis Wolff, Chris Papachristou Electrical, Computer, and Systems Engineering Case School of Engineering Case Western Reserve University Cleveland, OH, USA pcw@case.edu, fxw12@case.edu, cap2@case.edu June 17, 2024 ====================================================================================================================================================================================================================================================================== § ABSTRACT We develop a procedure for forecasting the volatility of a time series immediately following a news shock. Adapting the similarity-based framework of <cit.>, we exploit series that have experienced similar shocks. We aggregate their shock-induced excess volatilities by positing the shocks to be affine functions of exogenous covariates. The volatility shocks are modeled as random effects and estimated as fixed effects. The aggregation of these estimates is done in service of adjusting the h-step-ahead GARCH forecast of the time series under study by an additive term. The adjusted and unadjusted forecasts are evaluated using the unobservable but easily-estimated realized volatility (RV). A real-world application is provided, as are simulation results suggesting the conditions and hyperparameters under which our method thrives. § INTRODUCTION Reacting to a seemingly unprecedented event might involve the question: what, if anything, does it resemble from the past? Such might be the case with event-driven investing strategies, where the identification of the event could arise via the news pages or corporate communications and hence contains a qualitative, narrative element <cit.>. Matching a current crisis to past events is a problem with unsurprising statistical angles: identification, sample size, weighting, risk, and robustness, among many others. In the context of foreign exchange rate market structure, <cit.> speculate “[w]hether news is scheduled or non-scheduled its influence on exchange rates may be related to the state of the market at the time of the news arrival. News that arrives during periods of high uncertainty may have different effects on the exchange rate, than news that arrives in calmer periods." The authors also note that non-scheduled news may require more time for markets to digest, leading to greater heterogeneity (including but not limited to higher dispersion) of responses. We take inspiration from these arguments, developing a method suitable to the conditions that typically accompany news shocks. In this work we focus on the second central moment of a time series, the volatility. One of the most important stochastic phenomenon of positively-valued time series (P_t)_t∈ℕ, especially financial time series, is the volatility of the return series (r_t)_t∈ℕ. A financial asset's price series may exhibit behavior that makes inapplicable and uninterpretable the traditional methods of time series analysis. In contrast, the return series is scale-free <cit.>, easily-interpreted, and often at least weakly stationary. Even if one could construct credible models for describing and forecasting price series and return series, that would not necessarily tell us much about the variability of such forecasts nor enlighten us about the evolution of the variability of (P_t)_t∈ℕ and (r_t)_t∈ℕ over time. The literature on return series predictability based on news has undergone a considerable shift over the past 50 years. Whereas once returns were thought to react to specific news events, now stock price movements are believed to be overwhelmingly noise-based (see <cit.> and references therein). No matter how a time series or its transformations are modeled, forecasting in the presence of news shocks requires a methodological framework that sensibly incorporates relevant information that has yet to manifest in market price or derivative quantities like volatility. In this setting, regime-change models (see <cit.> and citations therein) are of little use because under the assumption of a known exogenous shock, there is no need to estimate a regime-change time, nor is there data following the exogenous shock event to fit a model. Asymmetric GARCH models were an early attempt to account for fact that negative returns typically beget greater volatility than positive returns <cit.>. Asymmetry and mixed-frequency are employed in <cit.> to forecast under the presence of extreme shocks. Problematically, any asymmetric model will depend upon the observation of a negative return to provide the most updated volatility forecast, but under the circumstances posited herein, no such return has been observed. Similar problems and more exist for Realized GARCH models <cit.>, which incorporates observable measures of volatility, known as “realized measures", like implied volatility (IV). Under the assumptions herein, no post-shock data is available, and even if it were, Realized GARCH does not promise to perform well, since Black-Scholes implied volatility is a biased estimator of volatility <cit.>, with the bias increasing in times of crises, when options may be far out-of-the-money. GARCH models have been shown slow to adapt to spikes in volatility <cit.>. The approach herein sidesteps the functional complexity posited by Realized GARCH, with its minimum nine parameters to estimate <cit.>, by substituting modeling assumptions. The method herein proceeds under the assumption that similar news events occasion volatility shocks arising from a common conditional shock distribution. The procedure proposed does not require post-shock information like returns or market-implied quantities from the time series under study. Hence, we also avoid questions about what realized measure to use and when as well as questions about the usefulness of high-frequency data, although these remain intriguing avenues for future work. The primary methodological tool presented in this work is fixed effect estimation followed by an appropriate procedure for pooling those estimates. The use of fixed effect estimation for the study of structural shocks has a pedigree in macroeconomic analysis (<cit.> cited in <cit.>; see also discussion of determinstic exogenous events in <cit.>). We employ fixed effect estimation on the basis of a well-established conceptual assumption that shocks of economic time series can be modeled as mixtures, in particular, mixtures of ordinary innovations and rare events (see <cit.> and references therein). In the forecasting literature, the term “intercept correction” has come to refer to a modeling technique in which nonzero errors are explicitly permitted <cit.>. They summarize the literature as distinguishing two families of intercept correction: so-called “discretionary" intercept corrections that attempt to account for future events, without hard-coding an ad-hoc adjustment into the model specification, and second, “automated" intercept corrections that attempt to adjust for persistent misspecification using past errors. <cit.> use weighted subsets of a scalar series' own past to correct forecast errors by an additive term. <cit.> a introduces a similarity-based forecasting procedure for time-varying coefficients of a linear model. <cit.> employ a form of intercept correction in order to adjust forecasts to the COVID-19 shock in the spring of 2020 based on the proportional misses of the same model applied to the Great Recession. A researcher interested in forecast adjustment can choose between procedures that discretionary or automated, a variety of choices for the collection of data assembled to perform the correction, whether the data is internal (i.e. from the time series itself) or external, the parametric term to be corrected (e.g. intercept, coefficients), if any, as well as the correction function (i.e. the mapping from the data to the corrective term), including the weighting applied to the assembled data (e.g. Nearest-Neighbor, arithmetic mean, kernel methods). Our procedure is a discretionary procedure for intercept correction that incorporates systematically data internal or external to the time series under study. The correction function, as we shall see, involves an optimization step inspired by the causal inference literature. In particular, in <cit.>, the authors build upon previous work in causal inference whereby a treatment effect can be estimated via comparison with a synthetic time series that that represents either the treatment or control unit. The synthetic unit is constructed using a convex combination of the donors. The particular convex combination employed is a function of the distance between the time series under study and the donors. <cit.> adapt these methods for the purpose of prediction. Their one-step-ahead forecasts use distance-based-weighting to pool shock estimates from similar series according to the donor series' similarity to the series under study. Their approach does not take into account the ARCH effects commonly observed in time series, especially financial times series, leaving unaccounted for the variability that accompanies predictions of a heteroskedastic time series. Outside of <cit.>, we know of no prior work that both introduces a parametric specification for nonzero errors and introduces a procedure for weighting appropriately the nonzero errors of similar shocks occurring outside the time series under study. Likewise, we are not familiar with any prior work that attempts to account for anticipated nonzero errors using an explicit parametric adjustment, i.e., what we will call a “correction function”. In order to motivate our procedure, we provide visual illustrations. In [fig:six_plots]Figure <ref>, we provide a panel of volatility series of i.i.d. GARCH processes, corresponding to a setting in which five donors exist. Next, in Figure <ref>, we show how the aggregation of estimated excess volatilities from donors in the donor pool works when the correction function is the arithmetic mean of the donors' fixed effects. The arithmetic mean of excess volatilities provides perhaps the most straightforward and intuitive way of aggregating information from similar events. Taking the average also coincides with the K-Nearest-Neighbor estimator, where K is equal to the number of donors in the donor pool. Although using the arithmetic mean is intuitive and enjoys some empirical track record, its grounding in theory is much less impressive <cit.>. Our method assumes a credible, parsimonious parameterization in which the shock is an affine transformation of several key covariates. The key intuition behind this shock parameterization is that as the strength of the linear signal increases relative to the idiosyncratic error, the GARCH estimation of these effects increases in accuracy. From this, it follows that the aggregated shock estimate increases in accuracy. § SETTING §.§ A Primer on GARCH We define the log return of an asset between t and t+1 as r_t = log(P_t+1/P_t), where P_t denotes the price at time t. The class of ARIMA(p,d,q) models <cit.> provides a framework for modeling the autoregressive structure of r_t, all under the umbrella of frequentist statistics. These models assume a certain dependence structure between r_t and (r_k)_k≤ t, yet their errors — often called innovations in the financial time series context due to how they represent the impact of new information — are nevertheless assumed to be i.i.d. with mean zero and constant variance. The ARCH <cit.> and GARCH <cit.> models provide elegant alternatives to the homoskedasticity assumption. In fact, the GARCH framework in its most basic form disregards r_t and instead turns its interest to the series r_t^2 (once properly centered, i.e. after assuming a mean-model for returns). To that end, let a_t = r_t - μ_t, where μ_t is the mean of the log return series r_t. We thus derive a mean-zero process (a_t)_t∈ℕ with the property that [a^2_t] = Var[a_t]. Under the assumption of time-invariant volatility, the series a_t^2 should exhibit no autocorrelation at any lag ℓ≥1. This assumption motivates tests for so-called ARCH effects, that is, tests for the clustering of volatility. These tests explore the alternative hypothesis that σ_t^2 is not only a time-varying parameter but furthermore a function of past squared residuals of the mean model. In particular, the ARCH(m) model is an autoregressive model in which σ_t^2 is a deterministic function of the past m values of r_t^2. The GARCH(m,s) framework take this one step further by modeling σ_t^2 as a linear combination of the past m values of r_t^2 and well as the past s values of σ_t^2. In functional form, a GARCH process (sometimes called a strong GARCH process <cit.>) is given by σ_t^2 = ω + ∑^m_k=1α_ka^2_t-k + ∑_j=1^sβ_jσ_t-j^2 a_t = σ_tϵ_t ϵ_t E[ϵ_t]=0, Var[ϵ_t] = 1 ∀ k,j, α_k,β_j≥ 0 ∀ t, ω, σ_t > 0 . Assuming further that σ^2_t depends on a vector of exogenous covariates _t, we have a GARCH-X(m,s). The volatility equation then becomes σ_t^2 = ω+ ∑^m_k=1α_ka^2_t-k + ∑_j=1^sβ_jσ_t-j^2 + γ^T_t . §.§ Model setup We will suppose that a researcher has multivariate time series data _i,t = (r_i,t, _i,t), t = 1, …, T_i, i = 1, …, n+1, and _i,t is a vector of covariates such that _i,t|ℱ_i,t-1 is deterministic. Suppose that the analyst is interested in forecasting the volatility of r_1,t, the first time series in the collection, which we will denote the time series under study. We require that each time series _i,t is subject to a news event following T^*_i ≤ T_i + 1 and before witnessing T^*_i+1. We are implicitly leveraging the fact that financial assets are heavily traded during market hours, yet only thinly traded (if traded at all) outside market hours. In contrast, the arrival of market-moving news does not obey any such restrictions. In light of the foregoing, we can denote our collection of GARCH volatility equations of interest using the following notation σ_i,t^2 = ω_i + ∑^m_i_k=1α_i,ka^2_i,t-k + ∑_j=1^s_iβ_i,jσ_i,t-j^2 + γ_i^T_i,t. Let I(·) be an indicator function. Let T_i denote the time length of the time series i for i = 1, …, n+1, and let T_i^* denote the largest time index prior to the arrival of the news shock, with T_i^* < T_i. Let δ, _i,t∈ℝ^p. For t= 1, …, T_i and i = 1, …, n+1, the model M_1 is defined as M_1 [ σ^2_i,t = ω_i + ω^*_i,t + ∑^m_i_k=1α_i,ka^2_i,t-k + ∑_j=1^s_iβ_i,jσ_i,t-j^2 + γ_i^T_i,t; a_i,t = σ_i,t((1-D^return_i,t)ϵ_i,t + D^return_i,tϵ^*_i); ω_i,t^* = D^vol_i,t[μ_ω^*+δ'_i,t+ u_i,t], ] with error structure ϵ_i,t F_ϵ with E_F_ϵ(ϵ) = 0, Var_F_ϵ(ϵ) = 1 ϵ^*_i,t F_ϵ^* with E_F_ϵ^*(ϵ) = μ_ϵ^*, Var_F_ϵ^*(ϵ^*) = σ^2_ϵ^* u_i,t F_u with E_F_u(u) = 0, Var_F_u(u) = σ^2_u ϵ_i,t ϵ^*_i,t u_i,t where D^return_i,t = I(t ∈{T_i^* + 1,...,T_i^* + L_i, return}) and D^vol_i,t = I(t ∈{T_i^* + 1,...,T_i^* + L_i, vol}) and L_i,return,L_i,vol denote the lengths of the log return and volatility shocks, respectively. Let M_0 denote the subclass of M_1 models such that δ≡ 0. Note that M_0 assumes that ω^*_i have no dependence on the covariates and are i.i.d. with [ ω^*_i]=μ_ω^*. Y^N_i,t = δ_t + θ_tZ_i+λ_tμ_i+ε_i,t , which happens to nest the GARCH model's volatilty equation (putting aside that σ_t is latent in the GARCH model) as well as as the ARMA representation of a GARCH model, where δ_t ∼ω, a location parameter shared across donors θ_t ∼α_k, a vector of ARCH parameters and other coefficients shared across donors Z_i ∼ a_i,t-k, a vector of observable quantities specific to each donor λ_t ∼β_j, a vector of GARCH parameters shared across donors μ_i ∼σ_i,t-j^2, a vector of latent quantities specific to each donor and ε_i,t is idiosyncratic noise, uncorrelated across time and donors. §.§ Volatility Profile of a Time Series In this section, we make a novel contribution to prediction via distance-based weighting by constructing a profile of a time series' volatility, the analogue of a covariate matrix in a synthetic control framework. The volatility profile for a given GARCH-X process is nothing more than a vector of observable covariates that parameterize a shock. A collection of n such vectors yields a matrix of n columns. What distinguishes a volatility profile, principally, is that the time-varying parameters that correspond to the volatility profile are non-zero at only the shock times, whereas no such restriction exists in Equation (<ref>). Suppose that for each of the n donors, we have available p distinct covariates in the functional form of the shock. The volatility profile could take the form of the p × n matrix V_t = [ α̂_1,t α̂_t,2 ⋯ α̂_t,n; β̂_1,t β̂_t,2 ⋯ β̂_t,n; ⋮ ⋮ ⋱ ⋮; RV_1,t RV_2,t ⋯ RV_n,t; RV_1,t-1 RV_2,t-1 ⋯ RV_n,t-1; ⋮ ⋮ ⋱ ⋮; IV_1,t IV_2,t ⋯ IV_n,t; IV_1,t-1 IV_2,t-1 ⋯ IV_n,t-1; ⋮ ⋮ ⋱ ⋮; |r_1,t| |r_2,t| ⋯ |r_n,t|; |r_1,t-1| |r_2,t-1| ⋯ |r_n,t-1|; ⋮ ⋮ ⋱ ⋮; Volume_1,t Volume_2,t ⋯ Volume_n,t; Volume_1,t-1 Volume_2,t-1 ⋯ Volume_n,t-1; ⋮ ⋮ ⋱ ⋮; Δ RV_1,t Δ RV_2,t ⋯ Δ RV_n,t; Δ RV_1,t-1 Δ RV_2,t-1 ⋯ Δ RV_n,t-1; ⋮ ⋮ ⋱ ⋮; ], where RV and IV denote realized volatility and implied volatility, respectively. Covariates chosen for inclusion in a given volatility profile may be levels, log differences in levels, percentage changes in levels, or absolute values thereof, among many choices. As shown, V_t displays `balance' in that p covariates exist for each of the n donors. In practice, missing values, corrupted values, or unacceptably extreme or noisy estimates may necessitate some sort of matrix completion, a problem that we do not tackle in this work. We now turn to the next section, where V_t is employed in a procedure to arrive at a forecast adjustment. § METHODOLOGY FOR SIMILARITY-BASED PARAMETER CORRECTION §.§ Forecasting We present two forecasts for the time series under study: Forecast 1: σ̂^2_unadjusted = [σ^2_1,T_1^*+1|ℱ_T^*] = ω̂_i + ∑^m_i_k=1α̂_i,ka^2_i,t-k + ∑_j=1^s_iβ̂_i,jσ_i,t-j^2 + γ̂_i^T_i,t Forecast 2: σ̂^2_adjusted = [σ^2_1,T_1^*+1|ℱ_T^*] + ω̂^* = ω̂_i + ∑^m_i_k=1α̂_i,ka^2_i,t-k + ∑_j=1^s_iβ̂_i,jσ_i,t-j^2 + γ̂_i^T_i,t + ω̂^* . A GARCH model is an ARMA on the squares of the observed scalar time series a^2_t <cit.>, assuming that a_t satisfies fourth-order stationarity. This fact matters for forecasting because the h-step-ahead forecasting function for GARCH model is, just like for an ARMA model, the conditional expectation function, 𝔼[ σ^2_i,T^*+h | ℱ_T^*], or practically speaking, the estimate thereof, 𝔼̂[ σ^2_i,T^*+h |ℱ_T^*] <cit.>. Here we have presented one-step-ahead forecasts for a GARCH-X(1,1), without loss of generality. For h=2,3,4,..., the conditional expectation is computed recursively, as is standard for iterative autoregressive forecasts. §.§ Excess Volatility Estimators The problem of aggregating estimated donor shocks begins with the data constraints. Taking the estimated shocks as a given, we essentially observe the pair ({ω̂^*_i}^n+1_i=2,{v_i}^n+1_i=2). We wish to recover weights {_i}^n+1_i=2∈Δ^n leading to favorable forecasting properties. These weights are used to compute ω̂^*∑^n+1_i=2_iω̂^*_i, our forecast adjustment term. Since the weights {_i}_i=2^n+1 are computed using ℱ_T^*_i, the set {_i}_i=2^n+1 is deterministic, modulo any stochastic ingredient in the numerical methods employed to approximate _1,T^* using a convex combination of donor covariates. We say more about the properties of ω^*_i in section <ref>. Following <cit.>, let ·_S denote any semi-norm on ℝ^p, and define {π}_i=2^n+1 = _πv_1 - _tπ_S . In the forecast combination literature, it is discussed whether the weights employed to aggregate forecasts strive toward and meet various optimality criteria <cit.>. In this work, there are at least two senses of optimal weights that one might be interested in. First, we can think of optimal weights as a set {_i}_i=2^n+1 such that ω_1 = ∑^n+1_i=2_iω̂_i, i.e., ω_1 is recovered perfectly, as it belongs to convex hull of the estimated shocks. However, ω_1 is never revealed to the practitioner, and hence there is no way of verifying the extent to which this condition is satifised. A more promising aim is finding weights such that v_1 = ∑^n+1_i=2_iv_i, meaning that the volatility profile of the time series under study lies within the convex hull of the donor volatility profile. This condition underwrites asymptotic results in <cit.>, and the intuition there extends to this work: if the shock is parameterized by an affine function of covariates, then finding a linear combination that recreates the shock should serve us well. Because the method proposed uses a point in Δ^n, it is important to head-off possible confusion. What we are proposing is not a forecast combination method. What we are aggregating and weighting (not combining) are subcomponents of forecasts, not forecasts themselves. Moreover, from a broader perspective, forecast combination is an inapt term for what is being proposed here. First, the donor time series do not provide forecasts, nor would forecasts be needed for random variables that have already been realized. Second and more fundamentally, the theoretical underpinnings of forecast combination, while diverse <cit.>, are distinct from the setting presumed in this work, where the model family is not in doubt but the parameter values and how they prevail must be learned. A question naturally arises regarding the uniqueness of weights. Remarks by <cit.> apply here. We make additional comments as well. (ℝ^n, ·) is a Chebyshev space, and hence for any element x and any convex set C⊂ℝ^n, there exists a unique element y∈ C that minimizes x-y. However, the pre-image of y with respect to a particular operator and constraint set might not be unique. Let p', n' denote the number of linearly independent rows of _t and linearly independent columns of _t respectively. Let col(·) denote the column space of a matrix, and let Conv(·) denote the convex hull of a set of vectors. v_1∈Conv(col(_t)) v_1∉Conv(col(_t)) p' ≥ n' Perfect fit; fit unique Fit not perfect; fit unique p' < n' Perfect fit, not necessarily unique, Carathéodory Theorem applies <cit.> Fit not perfect, not necessarily unique, Carathéodory Theorem applies Comparison with least-squares estimation is illustrative. Consider least-squares for the n-vector of estimated volatlity shocks ω̂^*: w⃗_OLS=_wω̂^* - w^TV_t_2 One immediately visible problem is that this optimization problem is an optimization problem over p-vectors w⃗ — i.e. over linear combinations of the covariates, whereas what we seek is an n-vector — a linear combination of donors. Additionally, there is no guarantee that w⃗_OLS would perform poorly as a tool for producing ω̂^*, but given the small number of donors supposed in our setting, it is risky. §.§ Ground Truth Estimators The time-varying parameter σ^2_t is a quantity for which even identifying an observable effect in the real world is far more challenging. In this work, we use a common estimator of the variance called realized volatility (RV), one which has the virtue of being “model-free” in the sense that it requires no modeling assumptions <cit.>. The realized variance itself can be decomposed into the sum of a continous component and a jump component, with the latter being less predictable and less persistent <cit.>, cited in <cit.>, two factors that further motivate the method employed herein. Suppose we examine K units of of time, where each unit is divided into m intervals of length 1/m. We adapt the notation of <cit.>. Let p_t = logP_t, and let r̃(t,1/m) = p_t - p_t-1/m. We estimate the variance of ith log return series using Realized Volatility of the K consecutive trading days that conclude with day t, denoted RV_i,t^K,m, using RV_i,t^K,m = 1/K∑^Km_v=1r̃^2(v/m,1/m), where the K trading days have been chopped into Km equally-sized blocks. Assuming that the K units r̃(t, 1) = p_t - p_t-1 are such that r̃(t, 1) N(μ, δ^2), it is easily verified that [RV^K,m] = μ^2/m + δ^2, which is a biased but consistent estimator of the variance. We will proceed using m = 77, corresponding to the 6.5-hour trading day chopped into 5-minute blocks, with the first block omitted in order to ignore unusual trading behavior at the start of the day. §.§ Loss Functions We are interested in point forecasts for σ^2_1,T^*+h|ℱ_T^*, h=1,2,..., the h-step ahead conditional variance for the time series under study. Let L^h with the subscripted pair {prediction method, ground truth estimator}, denote the loss function for an h-step-ahead forecast using a given prediction function and ground truth estimator. For example, if we suppress the time index, the one-step-ahead MSE using our method and Realized Volatility as the ground truth is MSE^1_SVF, RV = (σ̂^2_SVF - σ̂^2_RV)^2 Also of interest in absolute percentage error for an h-step-ahead forecast, defined as APE^h_method, ground truth = |σ̂^2_h, method - σ̂^2_h, ground truth|/σ̂^2_h, ground truth Finally, we introduce the QL (quasi-likelihood) Loss <cit.>: QL^h_method, ground truth = σ̂^2_h, ground truth/σ̂^2_h, method - logσ̂^2_h, ground truth/σ̂^2_h, method -1 . What distinguishes QL Loss is that it is multiplicative rather than additive. This has benefits, both practical and theoretical. As <cit.> explain, “[a]mid volatility turmoil, large MSE losses will be a consequence of high volatility without necessarily corresponding to deterioration of forecasting ability. The QL avoids this ambiguity, making it easier to compare losses across volatility regimes." For this reason, we proceed to evaluate the method, both in simuations and real data examples, using the QL loss. § PROPERTIES OF VOLATILITY SHOCKS AND SHOCK ESTIMATORS The model M_1 is defined by a volatility equation and mean equation, as is any GARCH model. The choice to model the volatility shock ω^*_i as an additive random effect is straightforward. However, the choice to model the level effect ϵ^*_i,t as a temporary rupture in the otherwise i.i.d. sequence of innovations ϵ_i,t stands in need of deeper justification. One way of arguing for this choice is that, in a discrete time series model, if we assume the arrival of news in the time between T^* and T^*+1, we do not have an easy way to express a conditional distribution of the innovation ϵ_T^*+1 given the overnight arrival of information. Using ϵ^*_i,t thus breaks this impasse. This defense also explains why we do not parameterize the level shock at T^*+1 as a sum of two shocks, ϵ_i,T^*+1 and ϵ^*_i,T^*+1, which would represent the level shock as generated by two independent sources of stochasticity. To do so would be inelegant and would also lack motivation as a practical level. While we want to model the shock at T^*+1 as potentially large in absolute value, we also want to retain the property of a unitary source of noise. Note that under the popular GARCH(1,1), a dual level-volatility shock has an marginal effect on the conditional variance σ^2_i,t that reflects the geometric decay of innovations in autoregressive models. As usual, assume α+β < 1. Furthermore, assume that both the volatility shock ω^*_i and the level shock ϵ^*_i,t are of length one only, and consider a circumstance with no exogenous covariate _i,t. Assume also that r≥ 2, which is necessary in order to isolate the effects of the level shock ϵ^*_i,t. Then σ^2_i,T^*+r+1 | ℱ_T^*+r = ω_i + α_i a_T^*+r^2 + β_iσ^2_i,T^*+r = ω_i + α_i(σ_i,T^*+rϵ_T^*+r)^2 + β_iσ^2_i,T^*+r = ω_i + σ^2_i,T^*+r(α_i (ϵ_T^*+r)^2 + β_i) . In Equation (<ref>), observe that ω_i,t^* and ϵ^*_i,t each appear at most once, through the term σ^2_T^*+r. This might lead one to suspect geometric decay of the shocks ω_i,t^* and ϵ^*_i. Such a suspicion is easier to substantiate by examining the conditional expectation of the variance, 𝔼[ σ^2_i,T^*+r+1 |ℱ_T^*+r], which also happens to be the principal forecasting tool for a GARCH model <cit.>. Indeed, if we assume unit variance for all ϵ_i,t except, of course, ϵ^*_i,t, then we have 𝔼[ σ^2_i,T^*+r+1 |ℱ_T^*+r] = 𝔼[ω_i + α a_T^*+r^2 + βσ^2_i,T^*+r |ℱ_T^*+r] = ω_i + 𝔼[α(σ_i,T^*+rϵ_T^*+r)^2 |ℱ_T^*+r] + βσ^2_i,T^*+r = ω_i + ασ_i,T^*+r^2 + βσ^2_i,T^*+rDue to the unit variance assumption = ω_i + σ^2_i,T^*+r(α + β) . By repeated substitution, in conditional expectation, the shock is 𝒪((α+β)^r). We generalize this observation in the following proposition. Let a_t be a mean-zero time series obeying a GARCH(1,1) specification with unit-variance errors, all prior to the arrival of a volatility shock of length L_vol≥ 1 and level shock of length L_return≥ 1 at some time T^*+1. Then for any r such that r ≥max{L_i, vol,L_i, level} + 1, 𝔼[ σ^2_i,T^*+r+1 |ℱ_T^*+r] = ω_i + (α + β)σ^2_i,T^*+r We claim 𝔼[ σ^2_i,T^*+r+1 |ℱ_T^*+r] = 𝔼[ω_i + α a_T^*+r^2 + βσ^2_i,T^*+r |ℱ_T^*+r] = ω_i + 𝔼[α(σ_i,T^*+rϵ_T^*+r)^2 |ℱ_T^*+r] + βσ^2_i,T^*+r = ω_i + ασ_i,T^*+r^2 + βσ^2_i,T^*+r = ω_i + (α + β) σ^2_i,T^*+r . The volatility equation of a GARCH(1,1) dictates that for any r, the one-step-ahead volatility is given by the expression inside the expectation in (<ref>). By the mean-model assumption of a GARCH(1,1), we have a_i,t = σ_i,tϵ_i,t, and hence by substituting σ_i,tϵ_i,t for a_i,t, we arrive at equation (<ref>) above. Using the unit-variance assumption on ϵ_T^*+r, we can compute explicitly the expectation, yielding (<ref>). Finally, by rearranging terms, we arrive at equation (<ref>). In other words, for a GARCH(1,1), once two time points removed from the longest shock length, the volatility shock and level shock can be subsumed into one. However, prior to being two time points removed, there is no such guarantee. For example, one can take r = 1 and level shock of length at least 1 to see that 𝔼[ σ^2_i,T^*+2 |ℱ_T^*+1] = 𝔼[ω_i + α a_T^*+1^2 + βσ^2_i,T^*+1 |ℱ_T^*+1] = ω_i + 𝔼[α(σ_i,T^*+rϵ^*_T^*+1)^2 |ℱ_T^*+1] + βσ^2_i,T^*+1 = ω_i + ασ^2_i,T^*+1(μ^2_ϵ^* + σ^2_ϵ^*) + βσ^2_i,T^*+1 = ω_i + σ^2_i,T^*+1(α(μ^2_ϵ^* + σ^2_ϵ^*) + β) . where (α(μ^2_ϵ^* + σ^2_ϵ^*) + β) may be greater than 1, permitting explosive behavior, at least in the short term. After both shocks have been exhausted, their influence disappears quickly. This short-memory effect has implications for the method being developed herein. First, there may be different risks associated with over/underestimating level shock and vol shock lengths. Estimation of effects in donor pool should err on the side of underestimating, not overestimating, the length of the max shock, since overestimation of the shock length brings with it the risk of underestimating ω^*. Second, a practitioner of the method needs some idea of how long the operator expects the respective shocks in the time series under study to be. There are couple of obvious strategies: take all the donors, and over all the donor shock lengths, take the minimium. Alternatively, one could take the maximum. §.§ Consistency of the fixed effect estimators Assume * For each i, {a_i,t}_t=0,...,T_i obeys a GARCH-X(m,s), as laid out in Equation (<ref>), with volatility shocks found in M_1, where T_i is the length of the ith series. * For each i, {ω_i,t^*}_t=0,...,T_i is potentially non-zero at {T^*_i+1,... ,T^*_i+k}, ω_i,T^*+1^*≡...≡ω_i,T^*+k^*, and zero otherwise, where the arrival of T_i^* is governed by a time-invariant distribution on {a_i,t}_t=0,...,T_i-1. * The conditions in Assumption 0 of <cit.> prevail. Then for any i, ω̂_i,t^*ω_i,T^*+1^* as t→∞. Under assumption <ref>, ∀ i, {ω_i,t^*}_t=0,...,T_i is a strictly stationary series. Since the shock is assumed to arrive uniformly at random for each i, 1 ≤ i ≤ n + 1, and last for a discrete number of indices, the sequence {ω_i,t^*}_t=0,...,T_i is governed by a distribution F_{ω_i,t^*}_t=0,...,T_i that is invariant to shifts in time. The result follows from the consistency proof of the QMLE in GARCH-X models, as established by <cit.>. §.§ Consistency of the Forecast Function Assume * All conditions listed in Proposition <ref>. * There exist weights {π_i}_i=2^n=1 such that v_1,T^* = ∑^n+1_i=2_iv_i,T^*. Then σ̂^2_adjustedσ^2_1,T^*+1 as t→∞. Recall the conditional expectation of the variance for the GARCH-X(m,s) model: _T^*[σ^2_i,t+1|ℱ_t] = ω_i + ω^*_i + ∑^m_i_k=1α_i,ka^2_i,t-k + ∑_j=1^s_iβ_i,jσ_i,t-j^2 + γ_i^T_i,t . By replacing parameters with their estimates, we arrive at the prediction σ̂^2_i,t+1|ℱ_t = ω̂_i + ω̂^*_i + ∑^m_i_k=1α̂_i,ka^2_i,t-k + ∑_j=1^s_iβ̂_i,jσ̂_i,t-j^2 + γ̂_i^T_i,t , which converges in probability to σ̂^2_i,t+1|ℱ_t = ω_i + ω^*_i + ∑^m_i_k=1α_i,ka^2_i,t-k + ∑_j=1^s_iβ_i,jσ_i,t-j^2 + γ_i^T_i,t as t→∞ by a simple application of Slutsky's Theorem. §.§ Asymptotic Loss We now evaluate the loss and risk of our method under two scenarios: first, under arbitrary distribution of σ^2_t+1, and then second, under the assumption that the data-generating process is correctly specified. For 1-step-ahead forecast of σ^2_t+1 where t=T^*, consider the difference QL(σ̂_t+1, unadjusted^2,σ^2_t+1)-QL(σ̂^2_t+1, adjusted,σ^2_t+1) =(σ_t+1^2/σ̂^2_t+1,unadjusted - logσ_t+1^2/σ̂^2_t+1,unadjusted - 1) - (σ_t+1^2/σ̂^2_t+1,adjusted - logσ_t+1^2/σ̂^2_t+1,adjusted - 1) = σ_t+1^2/σ̂^2_t+1,unadjusted - σ_t+1^2/σ̂^2_t+1,adjusted+ logσ̂^2_t+1,unadjusted/σ̂^2_t+1,adjusted = σ_t+1^2(σ̂^2_t+1,adjusted-σ̂^2_t+1,unadjusted)/σ̂^2_t+1,adjustedσ̂^2_t+1,unadjusted + logσ̂^2_t+1,unadjusted/σ̂^2_t+1,adjusted . For simplicity, we work with a GARCH(1,1) that experiences a volatility shock at a single time point for which we would like to provide a point forecast. Then (<ref>) can be expressed as σ^2_t+1ω̂^*_t+1/σ̂^2_t+1,adjustedσ̂^2_t+1,unadjusted + logω̂+ α̂a_t^2 + β̂σ_t^2/ω̂+ α̂a_t^2 + β̂σ_t^2 + ω̂^*_t+1 It is easily verified that as ω̂^*_t+1→ 0^+, the difference in the losses goes to zero. On the other hand, as ω̂^*_t+1 becomes large, the difference in the losses turns negative, with the lesson being that ω̂^*_t+1 must be in appropriate proportion to the volatility σ^2_t+1 in order for the adjusted forecast to outperform the unadjusted forecast. This explains why it is so important to avoid using a naive adjustment estimator, ω^*, the arithmetic mean of the estimated shocks. We conclude this section with a broader result. Assume the conditions in Propositions <ref> and <ref>. Let 1≤ i≤ n+1. Then QL(σ̂_t+1, unadjusted^2,σ^2_t+1)-QL(σ̂^2_t+1, adjusted,σ^2_t+1)ω_i,t+1^*/σ^2_t+1-ω_i,t+1^* + logσ_t+1^2-ω_i,t+1^*/σ_t+1^2≥ 0. as t→∞. Hence, a correctly specified M_1 model will outperform the unadjusted forecast asymptotically. First, note that the function g:(-∞,ω_i^*)→ℝ given by g(x) = x/σ^2_t+1-x + logσ_t+1^2-x/σ_t+1^2 is nonnegative, convex, obtains a minimum at x = 0, and being continuous, g preserves consistency. The conclusion follows from fact that the model is correctly specified and consistency is guaranteed Proposition (<ref>). § NUMERICAL EXAMPLES In this section, we demonstrate the effectiveness of the proposed method using Monte Carlo simulations. The first simulation setup will use a M_1 model on the volatility. In order to investigate the forecasting method presented herein, all of our simulations will use M_1 volatility models. Recall an M_1 model on the volatility, which is characterized by an exogenous shock to the volatility equation generated by an affine function of the covariates: M_1 [ σ^2_i,t = ω_i + ω^*_i,t + ∑^m_i_k=1α_i,ka^2_i,t-k + ∑_j=1^s_iβ_i,jσ_i,t-j^2 + γ_i^T_i,t; a_i,t = σ_i,t((1-D^return_i,t)ϵ_i,t + D^return_i,tϵ^*_i,t); ω_i,t^* = D^vol_i,t[μ_ω^*+δ'x_i, t+ u_i,t]; D^return_i,t≡ 0 ] In order to tell a story about the Monte Carlo results, we first explain the parameters to be varied, the parameters that remain fixed, and the behavior we expect to observe. To be clear, when we refer to fixed parameters, we are referring to quantities that may or may not govern pseudorandom numbers. For our purposes, a parameter is defined to be fixed whenever it does not vary across any of the simulations performed, and conversely a parameter is varying whenever it varies across at least two simulations performed. §.§ Fixed Parameters Each simulation is a GARCH(1,1) process with n chosen randomly from the set of intergers {756,...,2520}, corresponding to approximately 3-10 years of daily financial data. We fixed the intercept of the processes at the garchx package default of ω = .2 <cit.>. We also use the values α=.1, β = .82, corresponding to a GARCH process with longer memory. §.§ Varying Parameters We vary exactly five parameters: μ_X, σ_X, mu_δ, μ_ω^*, σ_u, each of which is critical to evaluating the method's responsiveness to changing conditions in the shock distribution. μ_X, σ_X govern the elements of the volatility profile. mu_δ interacts with X via the dot-product operation, of course. μ_ω^* is a location parameter for the volatility shock, and σ_u is the idiosyncratic noise of the volatility shock. We add an important note about the parameter μ_δ, which governs a vector δ of length p with elements that increase monotonically in proportion to 1,...,p, after which they are scaled by the factor 2·μ_δ/p(p+1), so that a randomly selected element has mean μ_δ. The heterogeneity of the elements is critical to simulating the plausible heterogeneity that will exist in the random effects structure. Absent a heterogeneous vector δ, the shock would not vary systematically with regard to each variable chosen for the volatility profile, which would fail to approximate the realities of financial markets. §.§ Evaluative Framework and Monte Carlo Results Consistent with our evaluative framework declared in <ref>, we compare adjusted and unadjusted forecasts using QL Loss, calculating the fraction of the simulations that the adjusted forecast yields a QL Loss no smaller than that of the unadjusted forecast. § REAL DATA EXAMPLE We show the applicability of our method using a real data example that sits at the crossroads of financial trading and electoral politics. In the spring of 2016 in the United States, the Republican Party's primary election process narrowed down candidates until Donald J. Trump cleared the threshold of votes to win the nomination formally at the party's convention that summer. He would go on to face the Democratic Party's nominee, Hillary Rodham Clinton. From an ex-ante perspective, several qualities of the 2016 US election cycle as well as the candidates themselves made the election difficult to prognosticate. The Electoral College permits victory without a majority or even plurality of the popular vote, which can render presidential races more competitive than a raw vote total would, elevating the uncertainty surrounding the country's future leadership. The election featured no incumbent, ruling out any incumbent-advantage of the empirical, “statistical" kind distinguished by <cit.>. The Republican Party candidate espoused unorthodox, populist positions on matters such as healthcare, trade, and foreign policy, some of which could be considered rare in either of the major two parties. Additionally, Donald J. Trump, lacking any experience in government — either electoral or appointed service — possessed neither a voting record nor any on-the-job performance for voters to judge or his opponents to attack. As one financial industry professional commented, comparing the 2016 election to the upcoming 2024 election, “this time the markets will be aware of both possibilities and price them to some extent — we wouldn’t expect the same volatility as we saw in 2016 after the election" <cit.>. Gleaning signals from financial options markets and betting markets, <cit.> predicted that markets would decline prodigiously following a Trump victory in November 2016. Finally, the election outcome delivered significant “news", in the econometric sense of the word, in the simple sense that it was not predicted. <cit.> found support for the theory that the polling-implied probabilities of election outcomes encode information about future macroeconomic conditions, which is itself reflected in market volatility. In its final post before the election result, acclaimed forecasting outfit 538, headed by economist Nate Silver, predicted a Clinton victory with a probability of .714, more than 2-to-1 odds <cit.>, suggesting that Trump's victory was at least somewhat surprising. For all of these reasons and more, the aftermath of the 2016 presidential election meets the standard of an interesting and notable event for which a quantitative researcher might seek a volatility point prediction. On a more technical level, the election outcome was not known until the evening of election day, well after the closing of financial markets at 4pm Eastern Time. This satisfies the condition that the shock be not yet digested by liquid markets. We therefore proceed to make the following technical specifications in order to predict the volatility of financial services ETF IYG[It has been noted that GARCH effects are more attenuated in aggregated returns <cit.>, which suggests against using the S&P 500 or similar indices as an example.] (an ETF composed of American financial majors JPMorgan, Bank of American, etcetera) on Wednesday November 9th, 2016. * Model choice We assume a GARCH(1,1) for the daily log return series of IYG in each donor. As argued in <cit.>, a GARCH(1,1) is rarely dominated by more heavily-parameterized GARCH specifications. It thus provides a defensible choice when motivation or time for choosing another model is lacking. For the time series under study and the donor series alike, we fit a GARCH(1,1) on almost four years of market data prior to the shock. * Covariate Choice We choose covariates that could plausibly satisfy the model assumptions spelled out earlier, that is, risk-related and macroeconomic covariates that could plausibly be weighted and summed in a shock distribution. We thus choose the log return Crude Oil (CL.F), the VIX (VIX) and the log return of the VIX, the log returns of the 3-month, 5-year, 10-year, and 30-year US Treasuries, as well as the log return of the most recently available monthly spread between AAA and BAA corporate debt, widely considered a proxy for lending risk <cit.>. We also include the log return in the trading volume of the ETF IYG itself, which serves as a proxy for panic. Finally, we include the squares of the demeaned log return of IYG for the 30 trading days preceding the shocks. For each variable in the volatility profile, we compute the sample mean and sample standard deviation across the n+1 events, allowing us to scale the variables to have zero mean and unit variance. Hence, no single variable can dominate the distance-based weighting procedure. * Donor pool construction Synthetic Control, as a tool of causal inference, often goes about weighting control units by first identifying a natural set of donors or standard donors such as the untreated units within a set of subnational units like US states or Spanish provinces <cit.>. While such a procedure does not necessarily preclude considered judgments (e.g. should Canadian provinces be used as donors for a treated US state?), as a tool of small-n prediction, distanced-based weighting may favor somewhat different considerations in constructing a donor pool. For our purposes, we choose the three most recent US presidential elections prior to the 2016 election. The three US presidential elections are the only presidential elections since the advent of the ETF IYG. We exclude the midterm congressional elections in the US (i.e. those held in even years not divisible by four), which generate far lower voter turnout and feature no national races. * Choice of estimator for volatility We use the sum of squared 5-minute log returns of IYG on November 9th, 2016, otherwise known as the Realized Volatility estimator of volatility <cit.>, as our proxy. We exclude the first five minutes of the trading day, resulting in a sum of 77 squared five-minute returns generated between 9:35am and 4pm. * Data Sources All daily market data is provided via the YahooFinance API available in the quantmod package in R <cit.>. The realized volatility is computed using high-frequency quote data available from Wharton Research Data Services (WRDS) <cit.>. The spread between is AAA and BAA yields is provided by Federal Reserve Economic Data (FRED) and accessed via the quantmod package. We now discuss the three subplots in Figure <ref> in order from left to right. On the left, we see that distanced-based weighting places nearly equal weight on the 2004 and 2012 elections, with only neglible weight on the 2008 election, when financial market conditions were extreme across nearly every dimension. Assuming an approximately correct specification of the covariates, this is interpreted to mean that even of the 2016 US election had a general climate of risk and tension less extreme than 2008 and more similar to the 2004 and 2012 elections. In the middle plot, we notice that the fixed effect estimates for 2008 and 2012 are considerable, with only 2004 registering a near-zero fixed effect estimate. The fixed effect estimates quantify the amount of surprise the US election results delivered (strictly speaking, not only the presidential race but all November elections in the US with the ability to influence financial markets) under the assumption of a GARCH(1,1). As estimates gleaned from only one data point per time series, they are theoretically high in variance. On the right, we observe in black the σ̂^2 yielded by the GARCH(1,1) for the time series under study. We also observe four colored points, each listen in the legend: three predictions and the ground truth. We include the prediction derived by adjusting the GARCH(1,1) prediction by the arithmetic mean of the fixed effect estimates. As is evident, our method comes reasonably close the ground truth. The prediction is not only directionally correct, i.e. we predict a volatility spike where there is one; the prediction far outperforms the unadjusted prediction. Remarkably, the arithmetic-mean based prediction here demonstrates the inherent risk in failing to weight each donor appropriately. The 2008 election receives far more weight than is called for, as simple averaging ignores the radically different conditons on the evening of those two events. Naturally, one might ask how sensitive this prediction is to at least two kinds of model specification: donor pool specification and covariate specification. There are two responses to these concerns. First, although the practitioner lacks a priori knowledge of the adequacy of the donors with respect to the time series under study, it is possible to gauge the diversity of the donor information by examining the singular values of the volatility profile. In the prediction presented here, the singular values descend from 62% to 23% to 15% of the the cumulative variation, indicating a moderate concentration or redundancy of information in the three donors. Second, we follow <cit.> in a executing a multiverse analysis. In particular, in the supplement, we carry out leave-one-out analyses on both the donor set and the covariate set. Additionally, in the supplement, we show that with Brexit added as a donor, the results are essentially unchanged. § DISCUSSION This present work applies and innovates techniques in similarity-based forecasting and information aggregation in order to provide better GARCH forecasts under news shocks. It extends <cit.> principally by substituting the GARCH model for an AR(1), i.e. by modeling both the mean and volatility of a univariate time series. An interesting connection insight related to <cit.> is that the GARCH model, under weak assumptions, can be represented as an ARMA on the squared residuals a^2_t. Hence, a shock ω^*_i,t to the volatility at time t is an identical shock to a^2_t. However, this use-case becomes less attractive in situations where the sign of the time series under study following the shock is uncertain. Insofar as this work has made advances that accommodate heteroskedastic time series, the benefits may redound most amply to applications like Value-At-Risk (VaR) and Expected Shortfall, where σ^2_t is an input. This work as well as <cit.> itself can be viewed as a generalization of <cit.> where the rare event probability λ is assumed to be 1 and the error associated with the rare event is estimated using external series. In our setting, if the shock to the time series under study were known and yet so underdescribed, i.e. so lacking in qualitative context, then <cit.> would be recommended. However, in our setting the rare event is not only rare but also contains specific qualitative features that are best accounted for via data aggregation. The method under development does not strictly require knowledge of the length of the shocks in the donor pool, but correctly sizing up those shock lengths is helpful to proper estimation of the shocks in the donor pool. An important question remains: even if the donor pool shock lengths are assumed to be known, how do we advise the operator to forecast the time series under study? In other words, for how long is the adjustment estimator ω^* valid, applicable, and reliable? One idea suggested by the paper is obvious: why not aggregate the shock lengths from the donors as well and round that quantity or take the floor or ceiling of any non-integer value? This is worth pursuing. However, it may be that estimating the persistence of a volatility shock induced by news is an endeavor deserving of its own study, where aggregation methods might naturally arise as helpful tools. There is also a broader discusion to be had regarding the degree of model heterogeneity permitted in fitting the donors' series. §.§ Comparison with KNN <cit.> (cited in <cit.>) note intercept corrections find their theoretical basis in the occurrence of structural breaks, whereas Nearest-Neighbor methods, being nonparametric, are theoretically more adept at accounting for nonlinearity. The present work examines news shocks, which are more closely related to structural breaks. Hence, neither nearest neighbor methods nor nonlinearity figure heavily in our work. However, there are deeper observations to be made about KNN as it relates to our method. The method presented here is unlike traditional KNN in that we are not trying to learn a function, first and foremost. We are trying to estimate a parameter. KNN runs into the problem: the curse of dimensionality. In contrast, large p is not a problem in synthetic methods, because the thing estimated is the vector w with n-1 degrees of freedom. For KNN, a high-dimensional space, i.e. large p, corresponding to many covariates, is a difficult space in which to work with distances <cit.>. In contrast, large p is not a problem in and of itself for synthetic control — in fact, asymptotic results exist for p <cit.>. As is pointed out in <cit.>, KNN regression performs well when K is chosen small enough that one can simply average the points {y_i}_i=1^N_trainin the neighborhood around each element in {y_i}_i=1^N_test to get good predictions. As we have noted above, the arithmetic mean-based estimator of ω^*, denoted ω^*, corresponds to KNN when K = n, the number of donors. Fundamentally, the idea that n is small enough and the donors are homogeneous enough that one could simply average the ω̂_i is at odds with the assumed variation in the shock effects. In KNN regression, the hyperparameter K must be learned. In similarity-based parameter correction, the number of donors is not learned. A donor pool is curated, and then careful rules of thumb can be applied to determine whether a given donor should be included or excluded. While it would not necessarily hurt to `learn' the appropriate number of donors to use, this information would probably not be as useful as knowing which donors and covariates provide the basis for the best forecasts. This brings us to a deeper point about the distinction between similarity-based methods in the style of <cit.> and KNN. In KNN, the exogenous variables are taken as a given and nearness to the object to be predicted depends on the distance function chosen. In contrast, in <cit.>, the determination of nearness begins with a qualitative step, i.e. curating the units between which we will calculate distances and from we will ultimately derive weights. §.§ Donor Pool Construction Should we gather as many donors as possible and pick them quantitatively? It would be counter to the method proposed to assemble a vast number of donors, lacking careful scrutiny of the qualitative fit, and let the optimization simply pick the donors, via weighting. What makes a donor good is not merely its quantitative fit but its qualitative fit as well. <cit.> make a similar point about large donor pools. What matters is that the donors chosen are properly situated in the p-dimensional predictor space, so as to allow proper estimation of the weights. For more on this question, see the Supplement, where the donors and the volatility profile are treated with a leave-one-out analysis. §.§ The Nature and Estimation of Volatility Shocks Not all of the volatility of an asset return may be related to news <cit.>. This explains our inclusion of an idiosyncratic noise term in the shock specification. However, this point also gestures in the direction of possible unexplained variation in the shocks. <cit.> find that for predicting 1-minute returns, highly transitory firm-specific news is useful. The authors conclude that news about fundamentals is predictive. It would be a pyrrhic victory for our method if the volatility profile indeed underlies real-world shocks but the volatility profile is radically high-dimensional or the signal in the shocks is overwhelmed by the noise term. Even unbiased predictions can be unhelpful if they are high in variability, and in Section <ref>, we show how the benefits of forecast combination. Another possibility is high-frequency data and the use of linear models like HAR <cit.>. HAR would not only increase the sample size available for discovering the autoregressive structure of a series' realized volatility. It would also open the door to high-dimensional regression methods, shrinkage estimators, and more. §.§ Parameter Stability There is an important question about the stability of the GARCH parameters under the presence of a shock, and on parameter instability as it pertains to forecasting, we refer readers to <cit.>. There are at least two reasons that we do not herein explore parameter instability or methods to adjust for it. First, the marginal effect of coefficient changes at the shock time would, under the assumptions in this work, be swamped by the random effect. Second, the estimation of post-shock parameter values would require at least several — better yet, dozens — of post-shock data points, whereas this work assumes access to zero post-shock data points. However, it is possible that similarity-based estimators for the GARCH coefficients could be produced, for example, by adapting the methods of <cit.>. § SUPPLEMENT §.§ Leave-one-out: How we analyzed a multiverse of 50 predictions Given the option of redoing our analysis, with nine covariates and four donors, a natural question from a place of skepticism is, how stable are the results — how contingent are they on a particular specification? To answer these questions, in Table <ref> we generate all fifty predictions yielded the decision of leave out any one of the donors (or none) and any one of the covariates (or none). This analysis includes the prediction presented above, which can be viewed as the null model, at least in the sense that it provides and baseline for comparison. §.§ Sensitivity to Covariates Chosen Any covariate that appears more often closer to the bottom of the <ref> is, ceteris paribus, a more important covariate for this prediction task, since by dropping it, a larger loss results. The covariates that thus stand out are the demeaned log return of IYG, the VIX, the Debt Risk Spread (spread between between AAA and BAA corporate debt), and last but not least, the choice of dropping none of the covariates. The demeaned log return of IYG, of which we use the 30 days preceeding the T_i^*, suggests an unusually strong relevance in matching donors to the time series under study. The Debt Risk Spread is also deserving of additional comment and attention, perhaps alongside the poor performance of the log return of the VIX. The Debt Risk Spread is available at a monthly frequency via FRED (add citation), whereas the VIX is available daily via Yahoo Finance (I need to explain that and cite Yahoo Finance and quantmod). It may be the case that the true conditional shock distribution includes low-frequency data as well as data in levels (like the VIX), while the daily changes in the VIX are absent. §.§ Sensitivity to Donors Chosen The most glaring result visible in <ref> regarding donor selection is the poor performance of the 2008 US Election. In hindsight, this is unsurprising, given the large estimated fixed effect for November 5th, 2008 as well as the swirl of complex events occurring in financial markets around that time. It is possible that the GARCH(1,1) used to fit the run-up to that election is underparameterized. The weight given to the 2008 US election is near-zero, but in our method, unsuitable donors can influence the distance-based weighting by affecting that dispersion of one or more of the p covariates present in the volatility profile. Recall that for each of the p covariates, we transform the n+1-vector of variables to Z-scores. The lesson here is that more careful pre-quantitative inspection of the donors may be warranted as well as ordinary outlier analysis for the volatility profile. As an additional note, dropping the 2008 Election vastly improves the performance of the arithmetic mean forecast. This provides a cautionary tale against using the arithmetic mean estimator for the conditional shock effect. The donors most conducive to good predictions in this task are the 2004 and 2008 US elections. The estimated fixed effects tell a story of only modest surprise due to these election results: the estimate is nearly zero for 2004, while 2012 is nonzero but small. Given that the presidential election results of these two did not deliver much surprise, it is possible that the surprise could be due more so to non-presidential races. We discuss one last remarkable phenomenon. The prediction that dropped the preceding 30 days of demeaned log returns of IYG as well as the 2012 Election yielded the same QL Loss as the unadjusted forecast. By inspecting the donor weight and donor fixed effect estimates, we can see that nearly zero adjustment was made because the 2004 Election is given nearly all the weight. §.§.§ Brexit In the pre-quantitative analysis of donors, there was only one competitor to the setup that was ultimately selected and presented in Section <ref>, and that competitor donor pool used Brexit. The inclusion of Brexit is based on the fundamental belief that the conditional shock distribution governing IYG's volatility shocks may be shared among US elections and some political events elsewhere, like the Brexit referendum of July 23rd, 2016. That referendum shared some important qualities with the 2016 US elections, which we have discussed above. We now turn, however, to why the inclusion of Brexit failed to perform nearly as well our primary specification. First, note time zone differences between the UK and the US make difficult the analysis of after-hour shocks on US markets. Additionally, like many election results, that the “Leave” side of the referendum would prevail was not revealed at any discrete time, of course, but became more certain as observation of voting turnout and informal vote tallies across the UK progressed <cit.>. This is in spite of UK news outlets adhering to various rules and guidelines regarding reportage of polling results <cit.>. These facts pose a challenge for any method that uses daily data. Of prime concern is whether we should consider T^* for Brexit to have occurred after market hours on June 23rd, 2016, Eastern Standard Time, which ignores the steady percolation of information and uncertainty that attend election days, or alternatively, consider the shock to have occurred after hours on June 22nd, 2016, which implies that T^*+1 is June 23rd, 2016, and hence the reaction of IYG to Brexit can be estimated and extracted from late-day trading on that date. Given this dilemma, we opt for dropping Brexit from our predictive model completely. §.§ Can we combine forecasts to outperform the individual forecasts? Forecast combination is technique with a vast, sprawling literature that we have referred to above. While forecast combination can be justified in any number of ways, here we invoke forecast combination as a way to robustify forecasts against misspecification. We combine in two simply ways, using the mean of all 50 forecasts and the median of all 50 forecasts. The results are presented in Tables <ref> and <ref>.
http://arxiv.org/abs/2406.08983v1
20240613102819
Thin-thick approach to martingale representations on progressively enlarged filtrations
[ "Antonella Calzolari", "Barbara Torti" ]
math.PR
[ "math.PR" ]
}à }è }é }ù }ì }ò plain thmTheorem[section] plain cor[thm]Corollary plain lemma[thm]Lemma plain ese[thm]Example plain proposition[thm]Proposition plain definition[thm]Definition plain proProprietà[section] plain remark[thm]Remark myheadings E acm Thin-thick approach to martingale representations on progressively enlarged filtrations Antonella Calzolari Dipartimento di Matematica - Università di Roma “Tor Vergata”, via della Ricerca Scientifica 1, I 00133 Roma, Italy Barbara Torti ^* June 17, 2024 ============================================================================================================================================================== § ABSTRACT We study the strong predictable representation property in the progressive enlargement 𝔽^τ of a reference filtration 𝔽 by a random time τ. Our approach is based on the decomposition of any random time in two parts, one overlapping 𝔽-stopping times (thin part) and the other that avoids them (thick part). We assume that the 𝔽-thin part of τ is nontrivial and prove in great generality a martingale representation theorem on 𝔽^τ. The obtained result extends existing results in the literature, where the random time is usually considered equal to its thick part. We illustrate by some examples the case when 𝔽 is the natural filtration of a Lévy process. § INTRODUCTION Let 𝔽 be a filtration such that any square-integrable 𝔽-martingale null at time zero can be uniquely written as the sum of stochastic integrals of predictable processes w.r.t. the elements of a set of square-integrable pairwise orthogonal 𝔽-martingales, (M^j)_j∈ J, J⊂ℕ. The set (M^j)_j∈ J is an 𝔽-basis in the sense of Davis and Varaiya (see <cit.>). Then all 𝔽-local martingales can be represented via stochastic integration w.r.t. (M^j)_j∈ J, namely the strong predictable representation property in 𝔽 holds (see <cit.>). It is worth noting that the natural filtration of any Lévy process has a basis (see <cit.>). Let τ be a random time and let 𝔽^τ be the filtration obtained progressively enlarging 𝔽 by τ, that is, up to standardization, the filtration 𝔽∨σ(τ∧·). Then τ is an 𝔽^τ stopping time and it makes sense to consider the 𝔽^τ-compensation in Doob's sense of the sub-martingale 𝕀_τ≤·, also called 𝔽^τ-compensated occurrence process of τ and here denoted by H^τ,𝔽^τ. In this paper, we focus on the strong predictable representation property in 𝔽^τ. More precisely, we study when the family of martingales ((M^j)_j∈ J, H^τ,𝔽^τ) is enough to represent all 𝔽^τ-local martingales. We work under hypotheses less restrictive than those mainly adopted in other papers dealing with this problem. Most authors, in fact, essentially assume the 𝔽-avoiding condition for τ, that is that the graph of τ is disjoint from the graph of any 𝔽-stopping time (see e.g. <cit.>, <cit.>, <cit.>), or equivalently that τ coincides with its 𝔽-thick part (see <cit.>). This is the case, for example, when τ satisfies the density hypothesis (see e.g. <cit.>, <cit.>, <cit.>). Here instead we allow τ to coincide with strictly positive probability with 𝔽-stopping times, that is the 𝔽-thin part of τ to be nontrivial. According to the thin-thick decomposition of τ we split the problem of identification of an 𝔽^τ-basis into two steps: first the construction of a basis in an intermediate filtration obtained progressively enlarging 𝔽 by the thin part of τ; then the construction of a basis in the progressive enlargement of the intermediate filtration by the thick part of τ. Solving the first issue is the real goal of our work. We prove that ((M^j)_j∈ J, H^τ,𝔽^τ) is an 𝔽^τ-basis as soon as the thin part of τ is composed by 𝔽-predictable stopping times on which 𝔽 is continuous and any 𝔽-martingale is also an 𝔽^τ-martingale, that is the immersion property of 𝔽 in 𝔽^τ holds. Our theorem applies easily when 𝔽 is the natural filtration of any Lévy process since, as well-known, all 𝔽-stopping times are totally inaccessible. Results about the propagation of the martingale representation property to progressively enlarged filtrations are useful in many applied fields and among the others in credit risk theory, where the random time τ plays the role of default time. Our investigation has been mainly inspired by the generalized density approach in the progressive enlargement of filtrations proposed by <cit.> in view of financial applications. Financial models in which the default time coincides with strictly positive probability with predictable stopping times have been considered first in <cit.> and then in <cit.>, <cit.>, <cit.> and <cit.>. Martingale representations on extended filtrations play an important role also by solving theoretical problems arising in optimal stochastic control and stochastic filtering. To better frame our work in the literature we point out two recent papers which deal with these topics and suggest possible applications and future developments of our result. In <cit.> a martingale representation theorem on a filtration progressively expanded by a random time is crucial to solve the BSDE provided by the dynamical approach to a stochastic control problem for a step process. However, the authors limit themselves to consider avoiding random times. In <cit.>, to identify the filtering equation for a system of partially observed processes, the authors look for a martingale representation theorem on the filtration generated by the observation process. In this model signal and observation may have possible predictable common jump times. We emphasize that both papers deal only with the weak predictable representation property, namely the property according to which all local martingales can be represented as a sum of stochastic integrals and integrals w.r.t. random measures. This paper is organized as follows. Section 2 is devoted to notations and definitions. Section 3 presents some auxiliary results. Section 4 is devoted to the main theorem and finally, Section 5 discusses some examples of application. Notations and definitions Let (Ω, ℱ, P) be a complete probability space and 𝔽=(ℱ_t)_t a standard filtration on it. ℳ^2(P,𝔽) denotes the Hilbert space of square-integrable (P,𝔽)-martingales with inner product (M^j,M^l)→ E^P[M^j,M^l]_∞. Let us introduce the notion of the basis of filtration in the sense of Davis and Varaiya (see <cit.>). An 𝔽-basis is a subset (M^j)_j∈ J, J⊂ℕ, of ℳ^2(P,𝔽) whose elements are pairwise strongly orthogonal martingales such that each V∈ℳ^2(P,𝔽) satisfies V_t=V_0+∑_j∈ J∫_0^t Φ^V_j(s) d M^j_s, t≥ 0, where V_0 is a random variable ℱ_0-measurable and, for all j∈ J, Φ^V_j is a predictable process that verifies E^P[∫_0^∞ (Φ^V_j)^2(s) d[M^j]_s]<+∞. We recall some basic facts about filtrations and random times. We call an ℝ^+-valued random time nontrivial when P(τ< +∞)>0. Given a nontrivial random time τ the filtration σ(τ∧·) is the minimal standard filtration which makes τ a stopping time. We denote by 𝔽^τ the standard progressive enlargement of 𝔽 by τ, that is ℱ^τ_t:=⋂_s>tℱ_s∨σ(τ∧ s). As well known any 𝔽-stopping time τ satisfies τ=τ^a, 𝔽∧τ^i, 𝔽 where τ^a, 𝔽 and τ^i, 𝔽 are the 𝔽-accessible component of τ and the 𝔽-totally inaccessible component of τ, respectively (see e.g. Theorem 3, page 104, in <cit.>). More precisely there exist two disjoint events A and B such that P-a.s. A∪ B=(τ<∞) and τ^a, 𝔽=τ 𝕀_A + ∞ 𝕀_A^c, τ^i, 𝔽=τ 𝕀_B + ∞ 𝕀_B^c. Therefore τ 𝕀_τ<+∞=τ^a, 𝔽 𝕀_A + τ^i, 𝔽 𝕀_B. If σ is any 𝔽-predictable stopping time, then by definition P(τ^i, 𝔽=σ<+∞)=0, As far as τ^a, 𝔽 is concerned, [[τ^a, 𝔽]]⊂⋃_m [[τ^a, 𝔽_m]], where (τ^a, 𝔽_m)_m is a sequence of 𝔽-predictable stopping times. Such a sequence is not unique and it may be chosen in such a way that the corresponding graphs are pairwise disjoint (see Theorem 3.31, page 95, in <cit.>). Any sequence of 𝔽-predictable stopping times (τ^a, 𝔽_m)_m satisfying (<ref>) and with pairwise disjoint graphs takes the name of enveloping sequence of τ^a, 𝔽. Note that if 𝔹 is a filtration different from 𝔽 but such that τ is also 𝔹-stopping time, then the 𝔹-accessible component τ^a, 𝔹 is in general different from τ^a, 𝔽 and analogously τ^i, 𝔹 is in general different from τ^i, 𝔽. Accessibility (totally inaccessibility) may get lost by restriction (expansion) of the filtration, that is an accessible (totally inaccessible) stopping time may be totally inaccessible (accessible) w.r.t. a smaller (bigger) filtration. In the next we refer to τ^a,σ(τ∧·) as the naturally accessible component of τ and analogously to τ^i,σ(τ∧·) as the naturally totally inaccessible component of τ. A random time τ is 𝔽-accessible when P(τ=τ^a,𝔽)=1 and analogously τ is 𝔽-totally inaccessible when P(τ=τ^i,𝔽)=1 (if 𝔽=σ(τ∧·) naturally accessible and naturally totally inaccessible, respectively). It is worthwhile to recall that a naturally accessible stopping time cannot be naturally predictable (unless it coincides with a positive constant) and has an atomic law. The law of a naturally totally inaccessible stopping time instead is diffusive (see Theorem IV-107, page 241, in <cit.>). In the following, for any 𝔽-stopping time τ, H^τ,𝔽 stays for the 𝔽-compensated occurrence process of τ, that is the 𝔽-martingale obtained by 𝔽-compensation of 𝕀_τ≤· In formulas H^τ,𝔽:=𝕀_τ≤·-A^τ,𝔽, where A^τ,𝔽 is the 𝔽-predictable compensator of 𝕀_τ≤· (briefly, 𝔽-compensator of τ). If τ is 𝔽-predictable, then trivially H^τ,𝔽≡ 0. A random time τ satisfies Hypothesis (𝒜) w.r.t. 𝔽 if τ avoids 𝔽-stopping times, that is, if P(τ = T < +∞) = 0 for every 𝔽-stopping time T. If τ satisfies Hypothesis (𝒜) w.r.t. 𝔽 then the 𝔽^τ-compensator of τ is continuous (see Lemma 3.6 in <cit.>) and therefore τ is 𝔽^τ-totally inaccessible (see Theorem 1.43 in <cit.>). A random time τ is called 𝔽-thin if 𝕀_{τ<∞}τ=∑_n T_n 𝕀_C_n, where T_n, n≥ 1, are 𝔽-stopping times with disjoint graphs and for all n≥ 1 C_n=(τ=T_n<∞). The sequence (T_n)_n≥ 1 is called an 𝔽-exhausting sequence of τ and, for any n≥ 1, the event C_n belongs to ℱ^τ_∞. For any random time τ the 𝔽-thin-thick decomposition holds, that is τ=τ_1∧τ_2, where τ_1 is 𝔽-thin and is called the 𝔽-thin part of τ, and τ_2 satisfies Hypothesis (𝒜) w.r.t. 𝔽 and is called the 𝔽-thick part of τ (see <cit.>). So, any 𝔽-stopping time coincides with its 𝔽-thin part and has a trivial 𝔽-thick part. 𝔽^τ coincides with (𝔽^τ_1)^τ_2=(𝔽^τ_2)^τ_1 (see Theorem 5.1 in <cit.>). As usual, we write 𝔽↪𝔽^τ, when any 𝔽-local martingale is also a 𝔽^τ-local martingale, that is when the immersion property of 𝔽 in 𝔽^τ holds. Some auxiliary results A random time τ satisfies Hypothesis (𝒫) w.r.t. 𝔽 when it admits a nontrivial 𝔽-thin part τ_1 with an exhausting sequence (T_n)_n of 𝔽-predictable stopping times. Let τ be a random time satisfying Hypothesis (𝒫) w.r.t. 𝔽. Then w.l.o.g. the exhausting sequence (T_n)_n can be chosen a.s. increasing. Let τ be a random time satisfying Hypothesis (𝒫) w.r.t. 𝔽. Then: (i) τ_1 is an 𝔽^τ_1-accessible stopping time with enveloping sequence (T_n)_n; (ii) τ_1 and τ_2 are the 𝔽^τ-accessible component and the 𝔽^τ-totally inaccessible component of τ, respectively, that is τ_1=τ^a, 𝔽^τ and τ_2=τ^i, 𝔽^τ. (i) It follows immediately considering that any 𝔽-predictable stopping time is also 𝔽^τ_1-predictable. (ii) Previous point implies that τ_1 is 𝔽^τ-accessible, since 𝔽^τ_1⊂𝔽^τ. Moreover, τ_2 satisfies Hypothesis (𝒜) w.r.t. 𝔽^τ_1, since by definition τ_1 and τ_2 have disjoint graphs and τ_2 avoids 𝔽-stopping times. Then by Remark <ref> τ_2 is (𝔽^τ_1)^τ_2-totally inaccessible or equivalently τ_2 is 𝔽^τ-totally inaccessible (see Remark <ref>). τ satisfies Hypothesis (𝒫) w.r.t. 𝔽 with T_n=t_n∈ℝ^+, n≥ 1, if and only if τ_1 is naturally accessible. τ_1 is naturally accessible if and only if it is a discrete random variable (see Remark <ref>) and any deterministic time is predictable w.r.t. any filtration. Observe that, when τ_1 is naturally accessible, the (possibly finite) sequence of atoms of the law of τ_1 is both a naturally enveloping sequence and an 𝔽-exhausting sequence of τ_1. Assume Hypothesis (𝒫) w.r.t. 𝔽 for τ. Then {Δ H^τ_1,𝔽^τ_1≠ 0}⊂⋃_n [[T_n]], that is {Δ H^τ_1,𝔽^τ_1≠ 0} is an 𝔽-thin set with (T_n)_n as an exhausting sequence. It derives immediately by the representation H^τ_1, 𝔽^τ_1_·=∑_n(𝕀_C_n-P(C_n|ℱ^τ_1_T_n^-)) 𝕀_{T_n≤·}, that follows by Lemma 2.13 in <cit.> joint with point (i) of Proposition <ref> (see (<ref>)). An 𝔽^τ-basis In this section 𝔽 is a standard filtration with trivial initial σ-algebra having a basis of martingales, (M^j)_j∈ J, J⊂ℕ, and τ is a random time on the same probability space (Ω,𝔽,P).  To overcome the 𝔽-avoidance condition on τ implies allowing the 𝔽-thin part of τ to be nontrivial, that is to assume τ_1 finite with positive probability and not an 𝔽-stopping time. The equality σ(τ∧·)=σ(τ_1∧·)∨σ(τ_2∧·) suggests looking at 𝔽^τ as the progressive enlargement by τ_2 of the filtration obtained progressively enlarging 𝔽 by τ_1. The standing assumption on τ will be Hypothesis (𝒫) w.r.t. 𝔽. It is useful to introduce the following continuity condition on 𝔽. Condition (𝒞): For any n≥ 1, ℱ_T_n=ℱ_T_n^-. Assume 𝔽↪𝔽^τ. Then all elements of the family {(M^j)_j∈ J, H^τ_1,𝔽^τ_1} are both 𝔽^τ_1-martingales and 𝔽^τ-martingales and (i) [H^τ_1,𝔽^τ_1, H^τ_2,𝔽^τ]_·≡ 0 a.s. and therefore H^τ_1,𝔽^τ_1 and H^τ_2,𝔽^τ are 𝔽^τ-orthogonal martingales. (ii) If moreover Condition (𝒞) holds, then, for all j∈ J, [H^τ_1,𝔽^τ_1, M^j]_·≡ 0 a.s. and therefore M^j and H^τ_1,𝔽^τ_1 are both 𝔽^τ_1-orthogonal martingales and 𝔽^τ-orthogonal martingales. 𝔽↪𝔽^τ implies both 𝔽↪𝔽^τ_1 and 𝔽^τ_1↪𝔽^τ, so that the M^j's are both 𝔽^τ_1 and 𝔽^τ-martingales and H^τ_1,𝔽^τ_1 is an 𝔽^τ-martingale (see Proposition 5.4 in <cit.>). By construction H^τ_2,𝔽^τ is a 𝔽^τ-martingale. Therefore all covariation processes in (<ref>) and (<ref>) are well-defined. Moreover, since H^τ_1,𝔽^τ_1 is a process of finite variation, [H^τ_1, 𝔽^τ_1, H^τ_2,𝔽^τ]_·=∑_s≤·Δ H^τ_1, 𝔽^τ_1_s Δ H^τ_2,𝔽^τ_s, and [H^τ_1,𝔽^τ_1,M^j]_·=∑_s≤·Δ H^τ_1,𝔽^τ_1_s Δ M^j_s. (i) H^τ_2,𝔽^τ jumps at τ_2 (see Remark <ref>) and Proposition <ref> holds so that for all t {∑_s≤ tΔ H^τ_1,𝔽^τ_1_s Δ H^τ_2,𝔽^τ_s≠ 0} ⊂ ⋃_n{τ_2=T_n}. Then (<ref>) follows by (<ref>) and the 𝔽-avoidance property of τ_2. (ii) Proposition <ref> states that {Δ H^τ_1,𝔽^τ_1≠ 0} is an 𝔽-thin set (see Definition 3.18, p.89, in <cit.>) with exhausting sequence (T_n)_n≥ 1, while Condition (𝒞) implies that any martingale M^j cannot jump at any time T_n. So (<ref>) derives immediately from (<ref>). Condition (𝒞) is equivalent to assuming that, for all j∈ J and for all n≥ 1, Δ M^j_T_n≡ 0. We are now able to prove the main result of this paper. In its proof, we will make use several times of Jacod-Yor's Lemma (see Theorem 7 in <cit.>). Assume that Condition (𝒞) holds. (i) If 𝔽↪𝔽^τ_1, then the family ((M^j)_j∈ J,H^τ_1, 𝔽^τ_1) is an 𝔽^τ_1-basis. (ii) If 𝔽↪𝔽^τ, then the family ((M^j)_j∈ J, H^τ,𝔽^τ) is an 𝔽^τ-basis. When τ_1 is an 𝔽-stopping time 𝔽=𝔽^τ_1 and 𝔽^τ=𝔽^τ_2, so point (i) is trivial (see Remark <ref>) and point (ii) is well-known (see Theorem 4.9 in <cit.>). Otherwise, we proceed as follows. (i) We sketch the line of the proof of this point. We show that P is the unique probability measure on (Ω, ℱ^τ_1_∞) that is a martingale measure for the family ((M^j)_j∈ J,H^τ_1, 𝔽^τ_1). Then Jacod-Yor's Lemma implies that the stable subspace generated by the family ((M^j)_j∈ J,H^τ_1, 𝔽^τ_1) coincides with ℳ^2(P,𝔽^τ_1_∞) and, this ends the proof since the martingales of the family are pairwise orthogonal (see (<ref>)). Let Q be a probability measure on (Ω, ℱ^τ_1_∞) such that all elements of ((M^j)_j∈ J, H^τ_1, 𝔽^τ_1) belong to ℳ^2(Q, 𝔽^τ_1). By assumption ℱ_0 is trivial and (M^j)_j∈ J is an 𝔽-basis so that Jacod-Yor's Lemma implies Q|_ℱ_∞=P|_ℱ_∞. Observe that (M^j)_j∈ J∈ℳ^2(Q, 𝔽^τ_1) joint with 𝔽↪𝔽^τ_1 implies 𝔽↪_Q𝔽^τ_1, that is the immersion property of 𝔽 in 𝔽^τ_1 under Q. Since the uniqueness of the compensator of τ_1, H^τ_1, 𝔽^τ_1_· is an 𝔽^τ_1-martingale under Q if and only if for all n≥ 1 P(C_n|ℱ^τ_1_T_n^-)=Q(C_n|ℱ^τ_1_T_n^-) (see Lemma 13 in <cit.> and (<ref>)). The filtration 𝔽^τ_1 satisfies ℱ^τ_1_t=ℱ_t, if t<T_1 ℱ_t∨σ (C_1), if t∈ [T_1,T_2) ⋮ ℱ_t∨σ (C_1,…, C_n), if t∈ [T_n,T_n+1) ⋮ and moreover ℱ^τ_1_∞=⋁_n ℱ_∞∨σ (C_1,…, C_n) (see Lemma 1.5 in <cit.>). The above recursive form of 𝔽^τ_1 suggests to apply an iterative procedure to prove for any n≥ 1 Q|_ℱ_∞∨σ (C_1,…, C_n)=P|_ℱ_∞∨σ (C_1,…, C_n). The first step of the induction aims to show that Q|_ℱ_∞∨σ (C_1)=P|_ℱ_∞∨σ (C_1), that is for any A∈ℱ_∞ P(A∩ C_1)=Q(A∩ C_1). Fixed A∈ℱ_∞ P(A∩ C_1)=E^P[P(A∩ C_1|ℱ_T_1)]=E^Q[P(A∩ C_1|ℱ_T_1)], where the second equality follows by (<ref>) using ℱ_T_1⊂ℱ_∞. The immersion hypothesis 𝔽↪𝔽^τ_1 implies that, under P, ℱ_∞ is conditionally independent of ℱ^τ_1_T_1 given ℱ_T_1 (see Theorem 3 in <cit.>). Therefore, since C_1∈ℱ^τ_1_T_1 (see (<ref>)) P(A∩ C_1)=E^Q[P(A∩ C_1|ℱ_T_1)]=E^Q[P(A|ℱ_T_1)P(C_1|ℱ_T_1)]. By (<ref>), since A∈ℱ_∞, it follows P(A|ℱ_T_1)=Q(A|ℱ_T_1). Condition (𝒞) gives ℱ_T_1=ℱ_T_1^- and then P(C_1|ℱ_T_1)=P(C_1|ℱ_T_1^-)=Q(C_1|ℱ_T_1^-)=Q(C_1|ℱ_T_1), where the middle equality follows by (<ref>) for n=1. Using (<ref>) and (<ref>) equality (<ref>) becomes P(A∩ C_1)=E^Q[Q(A|ℱ_T_1)Q(C_1|ℱ_T_1)]. Finally, (<ref>) implies that the conditional independence of ℱ^τ_1_T_1 given ℱ_T_1 also holds under Q, so that applying it in the right-hand side of (<ref>) one gets (<ref>). The general step of the induction works in a similar way: fixed n>1, one assumes Q|_ℱ_∞∨σ (C_1,…, C_n-1)=P|_ℱ_∞∨σ (C_1,…, C_n-1) and, for any choice of A∈ℱ_∞ and B∈σ (C_1,…, C_n-1), by the same technique one proves that P(A∩ B∩ C_n)=Q(A∩ B∩ C_n). Then, due to the arbitrariness of A and B, the probability measures P and Q coincide on ℱ_∞∨σ (C_1,…, C_n), that is (<ref>). So, since (<ref>), a monotone class argument yields that P and Q coincide on ℱ^τ_1_∞. (ii) 𝔽↪𝔽^τ implies 𝔽↪𝔽^τ_1 and therefore previous point holds. The random time τ_2 satisfies Hypothesis (𝒜) w.r.t. 𝔽^τ_1, since τ_2 by definition avoids both τ_1 and all 𝔽-stopping times and 𝔽^τ_1↪𝔽^τ (see Proposition 5.4 in <cit.>). Then by Theorem 4.9. in <cit.> applied to the progressive enlargement of 𝔽^τ_1 by τ_2, considering that (𝔽^τ_1)^τ_2=𝔽^τ (see Remark <ref>), it follows that ((M^j)_j∈ J, H^τ_1, 𝔽^τ_1, H^τ_2,𝔽^τ) is an 𝔽^τ-basis. It is to note that 𝔽^τ_1↪𝔽^τ implies that H^τ_1,𝔽^τ_1 = H^τ_1,𝔽^τ, so that ((M^j)_j∈ J, H^τ_1, 𝔽^τ, H^τ_2,𝔽^τ) is an 𝔽^τ-basis and therefore for any V∈ℳ^2(P,𝔽^τ) V_t=V_0+∑_i∫_0^t Φ^V_j(s) d M^j_s+∫_0^t γ(s) d H^τ_1,𝔽^τ_s+∫_0^t η(s) d H^τ_2,𝔽^τ_s, t≥ 0, where V_0 is a random variable ℱ_0-measurable, Φ^V_j, for all j∈ J, γ and η are 𝔽^τ-predictable processes such that the random variables ∫_0^∞ (Φ^V_j)^2(s) d[M^j]_s, ∫_0^∞ η^2(s) d[ H^τ_1,𝔽^τ]_s, ∫_0^∞ γ^2(s) d[H^τ_2,𝔽^τ]_s have finite expectations (see Definition <ref>). Let 𝒟 be the 𝔽^τ-predictable subset of Ω× [0,T] defined as 𝒟:=⋃_n [[T_n]]. Then, ∫_𝒟d⟨ H^τ_1,𝔽^τ⟩_t =0 ∫_𝒟d⟨ H^τ_2,𝔽^τ⟩_t =0, where the first equality follows by Proposition 2.15 in <cit.>, (<ref>) and (<ref>), while the second one follows observing that ⟨ H^τ_2,𝔽^τ⟩ is a continuous process, since τ_2 is 𝔽^τ-totally inaccessible (see Proposition <ref> (ii) and Theorem 7.11, p.195, in <cit.>). Let ρ be the predictable process defined by ρ_t:=γ_t 𝕀_𝒟(t)+η_t 𝕀_𝒟(t). Then ρ is such that E[∫_0^∞ ρ^2_s d[ H^τ,𝔽^τ]_s]<+∞. By (<ref>) 𝒟 is a predictable support of d⟨ H^τ_1,𝔽^τ⟩ and 𝒟 is a predictable support of d⟨ H^τ_2,𝔽^τ⟩ and, following the same lines as in the proof of Proposition 4.2 (i) in <cit.>, we derive ∫_0^t γ_s d H^τ_1,𝔽^τ_s=∫_0^t ρ_s d H^τ_1,𝔽^τ_s, ∫_0^t η_s d H^τ_2,𝔽^τ_s=∫_0^t ρ_s d H^τ_2,𝔽^τ_s, t≥ 0. Therefore, since H^τ,𝔽^τ= H^τ_1,𝔽^τ + H^τ_2,𝔽^τ, (<ref>) can be rewritten as V_t=V_0+∑_i∫_0^t Φ^V_j(s) d M^j_s+∫_0^t ρ(s) d H^τ,𝔽^τ_s, t≥ 0. By the arbitrariness of V the last equality gives the thesis. In theorem's framework H^τ_2,𝔽^τ=𝕀_τ_2≤·-Λ^τ_2∧·, where Λ denotes the 𝔽^τ_1-reduction of the compensator of τ_2 (see e.g. Proposition 2.11, p.36, in <cit.>). If τ_2 satisfies Jacod's density hypothesis w.r.t. 𝔽^τ_1 with intensity λ (see Condition (A) and Proposition 1.5 in <cit.> and formulas (3) and (5) in <cit.>), then H^τ,𝔽^τ=𝕀_τ≤·-∑_nP(C_n|ℱ^τ_1_T_n^-) 𝕀_{T_n≤·}-∫^τ_2∧·_0λ_s ds. Therefore, when the exhausting sequence of τ_1 is finite, τ is under the generalized density hypothesis (see <cit.> and Remark 12 in <cit.>). Examples In the literature, many models dealing with progressive enlargement take as reference filtration the natural filtration of a Lévy process. For the sake of clarity, we restate Theorem <ref> in this particular case. We recall that the natural filtration of any Lévy process always admits a basis (see Theorem 4.3 in <cit.>). Let 𝔽 be the natural filtration of a Lévy process and (M^j)_j∈ J, J⊂ℕ, any 𝔽-basis. Let τ be any random time that satisfies Hypothesis (𝒫) and let H^τ,𝔽^τ be the 𝔽^τ-compensated occurrence process of τ. If 𝔽↪𝔽^τ, then ((M^j)_j∈ J, H^τ,𝔽^τ) is an 𝔽^τ-basis. 𝔽 is quasi-left continuous and therefore Condition (𝒞) holds (see Proposition 7, p.21, in <cit.>) so that Theorem <ref> applies. In the hybrid sovereign default model proposed by <cit.> the default is defined as τ:=ζ^*∧ξ. The random time ζ^* describes successive solving downgrades due to economic and political inferences, while ξ stays for the idiosyncratic risks. From a mathematical point of view, ζ^* is an accessible random time with a finite number of predictable components and ξ is a totally inaccessible random time. This model satisfies all the assumptions of Proposition <ref> when 𝔽 is the natural filtration of the Brownian motion driving the solvency process, τ_1 is equal to ζ^* and τ_2 is equal to ξ. In fact by construction τ satisfies Hypothesis (𝒫) and, as the authors have shown, the immersion property holds. Moreover τ turns out to be under the generalized density hypothesis (see (4.6) in <cit.>). The generalization of the classical Cox procedure for default times presented in <cit.> provides a class of applications of Proposition <ref>. One of the advantages of Cox construction is that the immersion property of the reference filtration in its progressive enlargement holds (see e.g. p.471 in <cit.>). Let 𝔽 be the natural filtration of a Lévy process and (M^j)_j∈ J, J⊂ℕ, any 𝔽-basis. Let τ be defined as τ:=inf{t≥ 0, K_t≥Θ}, where K is a càdlàg 𝔽-predictable process such that K_0=0 and Θ is a random variable with exponential law of parameter one, independent of ℱ_∞. Then ((M^j)_j∈ J, H^τ,𝔽^τ) is an 𝔽^τ-basis. Proposition <ref> applies since the immersion property of 𝔽 in 𝔽^τ is obtained for free by Cox construction and the Hypothesis (𝒫) for τ derives from the assumptions on K (see Section 3.2.5, p.477, in <cit.>). When the continuous part of the process K in the corollary above is absolutely continuous and equal to ∫_0^·λ_s ds, with λ an 𝔽-adapted positive process, then H^τ,𝔽^τ has the expression (<ref>) (see Remark <ref>). In the following model, dealing with a special class of Brownian filtrations, the random time is not constructed by Cox procedure. This example has been proposed in <cit.> studying the market's completeness under filtration shrinkage and in <cit.> as an example of thin time. Let B be the Lévy transformation of a Brownian motion W, that is the process B:=∫_0^· sign(W_s) dW_s and let 𝔽^B and 𝔽^W be the natural filtrations of B and W, respectively. As well known B is a natural Brownian motion, and 𝔽^B=𝔽^|W|⊊𝔽^W. Moreover the exponential martingale S:=ℰ(B) enjoys the predictable representation property both w.r.t. 𝔽^B and 𝔽^W. In the language of finance modeling (S,𝔽^B) is a complete market, as well as (S,𝔽^W). Surprisingly, when τ:=inf{t≥ 0: W_t=1}, S does not enjoy the predictable representation property w.r.t. the filtration (𝔽^B)^τ. If (T_n)_n≥ 1 is the sequence of 𝔽^B-stopping times defined by T_n:=inf{t> S_n-1: |W_t|=1}, with S_n:=inf{t> T_n-1: |W_t|=0}, n>1, and S_0:=0, then the process N:= 𝕀_{τ≤·}-1/2∑_n≥ 1𝕀_τ≥ T_n 𝕀_{T_n≤·}, is a discontinuous (𝔽^B)^τ-martingale and cannot be represented by integration w.r.t. S. In <cit.>, since 𝔽^B⊂(𝔽^B)^τ⊂𝔽^W, this example is useful to explain why some markets could lose completeness after the reduction of information or, vice versa, when the addition of information is too little. In <cit.> by the previous example, the authors discuss a case of a natural totally inaccessible random time, τ, which coincides with its thin part w.r.t. a given filtration, 𝔽^B, with an exhausting sequence of predictable stopping times, (T_n)_n≥ 1. Then, τ is (𝔽^B)^τ-accessible. We stress that τ becomes predictable in the even larger filtration 𝔽^W. Here we answer the question: how can be represented all (𝔽^B)^τ-local martingales since S is not enough? (S,N) is an (𝔽^B)^τ-basis. We apply our main theorem to 𝔽^B, S as 𝔽^B-basis and τ given by (<ref>), and, more precisely, since τ is an 𝔽^B-thin time, we apply point (i). 𝔽^B↪ (𝔽^B)^τ, since 𝔽^B↪𝔽^W. 𝔽^B, like any Brownian natural filtration, satisfies Condition (𝒞). It remains to check that H^τ, (𝔽^B)^τ, for which the analogous of (<ref>) holds, coincides with N. Let us observe that P(C_n| (ℱ^B)^τ_T_n^-)=P(C_n∩{τ≥ T_n}| (ℱ^B)^τ_T_n^-)=P(C_n| (ℱ^B)^τ_T_n^-)𝕀_τ≥ T_n, where last equality follows by {τ< T_n}∈ (ℱ^B)^τ_T_n^-. By (<ref>) H^τ, (𝔽^B)^τ_·= 𝕀_{τ≤·}-∑_n≥ 1P(C_n|(ℱ^B)^τ_T_n^-)𝕀_τ≥ T_n 𝕀_{T_n≤·}. In fact, for any fixed n≥ 1, P(C_n|(ℱ^B)^τ_T_n^-)𝕀_τ≥ T_n=P(W_T_n=1|ℱ^|W|_T_n∨σ (C_1,…, C_n-1))𝕀_τ≥ T_n, where we have used (<ref>) and ℱ^|W|_T_n^-=ℱ^B_T_n^-=ℱ^B_T_n=ℱ^|W|_T_n. Moreover P(W_T_n=1|ℱ^|W|_T_n∨σ (C_1,…, C_n-1))𝕀_τ≥ T_n=1/2 𝕀_τ≥ T_n. To prove last equality we observe that P((W_T_n=1)^c|ℱ^|W|_T_n∨σ (C_1,…, C_n-1))=P(-W_T_n=1|ℱ^|-W|_T_n∨σ (C_1,…, C_n-1)) and, setting D_h:={W_T_h=1}={-W_T_h=1}^c, h=1,…,n-1, that σ (C_1,…, C_n-1)=σ (D_1,…, D_n-1). Finally, by the symmetry of Brownian motion, P(W_T_n=1|ℱ^|W|_T_n∨σ (C_1,…, C_n-1))=P((W_T_n=1)^c|ℱ^|W|_T_n∨σ (C_1,…, C_n-1)). § FUNDING Partially supported by the MIUR Excellence Department Project MatMod@TOV awarded to the Department of Mathematics, University of Rome Tor Vergata. imsart-nameyear
http://arxiv.org/abs/2406.09299v1
20240613163516
Pauli Noise Learning for Mid-Circuit Measurements
[ "Jordan Hines", "Timothy Proctor" ]
quant-ph
[ "quant-ph" ]
jordanh@berkeley.edu Department of Physics, University of California, Berkeley, CA 94720 Quantum Performance Laboratory, Sandia National Laboratories, Livermore, CA 94550 tjproct@sandia.gov Quantum Performance Laboratory, Sandia National Laboratories, Livermore, CA 94550§ ABSTRACT Current benchmarks for mid-circuit measurements (MCMs) are limited in scalability or the types of error they can quantify, necessitating new techniques for quantifying their performance. Here, we introduce a theory for learning Pauli noise in MCMs and use it to create MCM cycle benchmarking, a scalable method for benchmarking MCMs. MCM cycle benchmarking extracts detailed information about the rates of errors in randomly compiled layers of MCMs and Clifford gates, and we demonstrate how its results can be used to quantify correlated errors during MCMs on current quantum hardware. Our method can be integrated into existing Pauli noise learning techniques to scalably characterize and benchmark wide classes of circuits containing MCMs. Pauli Noise Learning for Mid-Circuit Measurements Timothy Proctor June 2024 ================================================= Mid-circuit measurements (MCMs) are a critical component of quantum error correction <cit.> and some quantum algorithms <cit.>, but they are often a large source of error in contemporary quantum processors <cit.>. Despite this, there are currently few techniques capable of quantifying the errors in MCMs, and those that can are limited in scope or scalability. Full tomography of MCMs <cit.> is computationally expensive and only feasible for small systems, whereas more scalable methods based on randomized benchmarking (RB) <cit.> are restricted to assessing crosstalk errors caused by MCMs and provide limited quantitative information. Pauli noise learning <cit.> is a family of methods for characterizing ideally unitary gates that offers an intermediate approach of partial characterization, between the extremes of fully tomography and RB. These techniques are widely used, they enable error mitigation <cit.>, and they can characterize some of the errors in syndrome extraction circuits <cit.>. However, because noise in MCMs cannot be tailored into the same stochastic Pauli error structure as gate error <cit.>, existing Pauli noise learning techniques cannot fully characterize MCMs, and this limits their application to measurement noise <cit.>. In this letter, we introduce and demonstrate Pauli noise learning techniques for MCMs. We introduce theory for Pauli noise learning of uniform stochastic instruments (USIs) <cit.>, a stochastic noise model for circuit layers (i.e., cycles) containing MCMs that can be enforced using a recent extension of randomized compilation <cit.> to MCM-containing layers <cit.>. We apply this theory to introduce MCM cycle benchmarking (MCM-CB), which is a scalable protocol for estimating the fidelity of MCM-containing layers (a.k.a. cycles) that generalizes the cycle benchmarking protocol <cit.>. We then show how to jointly characterize multi-qubit Clifford gates and MCMs, learning parameters of the MCM and gate error that cannot be learned by characterizing the individual components in isolation. Mid-circuit measurements—First we define some notation. We use _n to denote the n-qubit Pauli operators (without signs). For a ∈ℤ_2^k, we use P^⊗ a to denote the Pauli operator P^a_1⊗ P^a_2⊗…⊗ P^a_k, and we call Pauli operators with this form P-type Pauli operators. We use 𝒫 to denote the superoperator representation of P. We denote the set of m-qubit Z-type Pauli operators by 𝐙_m. An n-qubit MCM layer L is a set of instructions to perform projective measurements on m qubits and unitary gates on the remaining n-m qubits. We will consider MCM layers containing only computational basis measurements (extension to other Pauli bases is simple), and Clifford gates. If run without error, a quantum processor implementing L performs the process ℒ = ∑_k ∈ℤ_2^m𝒱⊗kk⊗k, where k denotes the classical outcome of the MCM, k≡|k⟩⟨k|, and ρ is the vector representing operator ρ in Hilbert-Schmidt space [ℬ(ℋ)], and 𝒱 is the superoperator representation of the unitary acting on the unmeasured qubits. An imperfect implementation of L can be represented as a quantum instrument ℳ: ℬ(ℋ) →ℬ(ℋ) ⊗ℤ_2^m of the form ℳ = ∑_k ∈ℤ_2^mℳ_k ⊗k, where each ℳ_k is completely positive and trace non-increasing and ∑_k ∈ℤ_2^mℳ_k is trace preserving <cit.>. The process (i.e., entanglement) fidelity of ℳ to ℒ satisfies <cit.> F(ℳ) ≡ F(ℳ, ℒ) = (∑_k ∈ℤ_2^m√(F(ℳ_k,𝒱⊗kk)))^2. Uniform stochastic instruments—Our method is designed to learn the error of uniform stochastic instruments (USIs), a class of quantum intruments with a simple form of stochastic Pauli error. USIs have the form ℳ = ∑_k ∈ℤ_2^mℳ_k ⊗k, where ℳ_k = ∑_a,b ∈ℤ_2^m𝒯_a,b𝒱⊗𝒳^⊗ bkk𝒳^⊗ a. The 𝒯_a,b are unnormalized (n-m)-qubit stochastic Pauli channels acting on the unmeasured qubits, and they satisfy ∑_a,b ∈ℤ_2^mTr(𝒯_a,b)=1 <cit.>. The probability that the pre-measurement error 𝒳^a and the post-measurement error 𝒳^b occur is (𝒯_a,b). The process fidelity of ℳ is F(ℳ) = (𝒯_0,0), which is the probability of no error <cit.>. Our method uses randomized compiling for MCM layers <cit.> to enforce a USI model for all MCM layers. The randomized compilation of an MCM layer L is a composite layer L = T_0LT_0' (read left to right), where T_0 consists of uniformly random Pauli gates applied to each qubit and T_0' is the inverse of T_0, composed with a uniformly random layer of Z gates. For each measured qubit in L, if T_0 contains an X or Y gate on that qubit, its classical MCM outcome is flipped. In a circuit with multiple layers, adjacent layers of Pauli gates are typically compiled together. Whenever the single-qubit gate error is small relative to the error in ℳ, which we assume hereafter, an imperfect implementation of L is approximately a USI. Theory of learning USIs—We now introduce our theory for learning USIs. For now we assume that the unmeasured subsystem's error-free evolution is 𝒱=𝕀_n-m, and we consider the more general case in the SM. To completely determine a USI [Eq. (<ref>)], we must learn the stochastic Pauli channels 𝒯_a,b for all a,b ∈ℤ_2^m. Stochastic Pauli channels are diagonal in the Pauli transfer matrix (PTM) representation, and existing methods for learning a Pauli channel <cit.> directly estimate each of its PTM's diagonal elements. Although the USI components ℳ_k are not diagonal in the PTM representation, our technique is also built around estimating the elements of the PTM. The elements of ℳ_k's PTM [ℳ_k]_P'⊗ Q_2, P⊗ Q_1 = P' ⊗ Q_2ℳ_kP ⊗ Q_1, where P,P' ∈_n-m and Q_1, Q_2 ∈_m, are [ℳ_k]_P'⊗ Q_2, P⊗ Q_1 = P' ⊗ Q_2(∑_a,b ∈ℤ_2^m𝒯_a,b⊗𝒳^⊗ bkk𝒳^⊗ a)P ⊗ Q_1 = P'(∑_a,b ∈ℤ_2^m𝒯_a,b)PQ_2𝒳^⊗ bkk𝒳^⊗ aQ_1. Because k is a computational basis state, k𝒳^⊗ aQ=0 for all Q ∈_m ∖𝐙_m and a ∈ℤ_2^m. Furthermore, because each 𝒯_a,b is a stochastic Pauli channel, P'𝒯_a,bP = 0 for all P' ≠ P. Substituting these equalities into Eq. (<ref>) and letting Q_1=Z^⊗ d_1 and Q_2=Z^⊗ d_2, we obtain [ℳ_k]_P'⊗ Q_2, P ⊗ Q_1 = 0 when P ≠ P' and [ℳ_k]_P⊗ Q_2, P ⊗ Q_1 = ∑_a,b ∈ℤ_2^mλ_a,b,P(-1)^(k⊕ a) · d_1 + (k ⊕ b) · d_2 = ∑_a,b ∈ℤ_2^m (-1)^k· (d_1⊕ d_2)(-1)^(d_1 · a) + (d_2 · b)λ_a,b,P = (-1)^k· (d_1⊕ d_2)λ̃_P,(Q_1, Q_2). Here, b_1 ⊕ b_2 denotes the bitwise XOR of b_1, b_2 ∈ℤ^m_2, and we have defined λ̃_P_,(Z^c, Z^dZ^c) = ∑_a, b ∈ℤ_2^mλ_a,b (-1)^d · b(-1)^a · c + b · c. Using Eq. (<ref>), ℳ_k can be expressed in terms of its PTM elements as ℳ_k = ∑_P∑_Q_1, Q_2(-1)^k· (d_1⊕ d_2)λ̃_P,(Q_1, Q_2)P ⊗ Q_2P ⊗ Q_1. Therefore, the (P⊗ Q_1, P ⊗ Q_2) PTM element of ℳ_k has value ±λ̃_P,(Q_1, Q_2) for all k, where the sign depends on the MCM result k. Learning all λ̃_P,(Q_1, Q_2) is sufficient to learn the USI. However, learning these values is complicated by the fact that ℳ_k is k-dependent and we cannot control which outcome k occurs. When ℳ is performed, it results in a uniformly random outcome k, so a uniformly random ℳ_k is applied to ℋ. If we apply ℳ to a Pauli operator P ⊗ Q_1 and simply ignore the measurement result, i.e., we trace over the classical register, we can measure ∑_k ∈ℤ_2^mP ⊗ Q_2ℳP ⊗ Q_1 = ∑_k ∈ℤ_2^m(-1)^k· (d_1⊕ d_2)λ̃_P,(Q_1, Q_2) = δ_d_1, d_2λ̃_P,(Q_1, Q_2). Therefore, without using MCM results, we can only learn the diagonal PTM elements λ̃_P,(Q, Q). To learn the remaining elements of ℳ's PTM, we incorporate the MCM results. An MCM result tells us the value of k, so we can compute the (-1)^k· (d_1⊕ d_2) factors in the ℳ_k that occurred. We can then simulate measuring (-1)^k· (d_1⊕ d_2) P ⊗ Q_2, by measuring P ⊗ Q_2 and applying the sign in post-processing. This enables measuring P ⊗ Q_2 (-1)^k· (d_1⊕ d_2)ℳP ⊗ Q_1 = λ̃_P,(Q_1, Q_2)⊗∑_k ∈ℤ_2^mk. We have shown that ℳ can be parameterized by {λ̃_P,(Q_1, Q_2)}, and that each each λ̃_P,(Q_1, Q_2) is either (a) a diagonal element of each ℳ_k that can be extracted without using MCM results, or (b) an off-diagonal element of each ℳ_k that can be extracted by applying a k-dependent sign in post-processing. These values are sufficient for complete reconstruction of ℳ, but we may instead want to estimate F(ℳ) or the rates of individual Pauli errors, which we can do by estimating the eigenvalues λ_a,b,P of 𝒯_a,b. For fixed P, the λ̃_P,(Q_1, Q_2) are orthogonal linear combinations of {λ_a,b,P}_a,b ∈ℤ_2^m [see the Supplemental Material (SM)] and can be used to determine individual eigenvalues λ_a,b,P of any 𝒯_a,b. In particular, the fidelity of ℳ is Tr(𝒯_0,0), and 𝒯_0,0's eigenvalues are λ_0,0,P_ = 1/4^m∑_P_1, P_2 ∈𝐙_mλ̃_P_,(P_1, P_2) . Therefore, F(ℳ) = Tr(𝒯_0,0) = ∑_P_∈_n-m∑_P_1, P_2 ∈𝐙_mλ̃_P_,(P_1, P_2). MCM cycle benchmarking—We now introduce MCM cycle benchmarking (MCM-CB), which is a protocol for estimating the process fidelity of a randomly compiled MCM layer ℳ. MCM-CB runs a random selection of Pauli noise learning subexperiments, each designed to learn a single parameter of ℳ. An MCM-CB subexperiment is specified by an MCM layer L, an (n-m)-qubit input Pauli operator P_, a pair of m-qubit Z-type Pauli operators (P_, Q), a set of depths d, and a number of circuits per depth N. We require that L contains only Clifford gates on the (n-m) unmeasured qubits (although note that techniques for CB of non-Clifford gates <cit.> can be used to extend our method to the more general case). Furthermore, we require that d is even and 𝒱^d = 𝕀, where 𝒱 is the error-free evolution of the unmeasured subsystem specified by L. An MCM-CB subexperiment is the following procedure: * For each depth d, generate N MCM-CB circuits C = L_0L^dL_f (read left to right), where * L_0 is a layer of single-qubit gates preparing a random tensor product eigenstate of P_⊗ Z^⊗ m, * L is the layer L with randomized compiling, and * L_f is a layer of single-qubit gates that transforms P_ into a Z-type Pauli Z^⊗ t_f. When C is implemented without errors, measuring Z^⊗ t_f always gives (-1)^t_0, where t_0 ∈ℤ_2. * Run each circuit C and compute f(C) = (-1)^t_f · b_f+t_0∏_i=1^d (-1)^b_i · t_, where t_ is defined by P_Q = Z^⊗ t_, b_f is the n-bit final measurement outcome, and b_i is the m-bit MCM outcome from layer i. Eq. (<ref>) assumes that the classical postprocessing of MCM outcomes specified by the randomized compilation procedure has already been applied to b_i. * Compute f_d = 1/N∑_j=1^N f(C_j), and fit f_d to f_d = Ap^d_P_, (P_,Q). The result of an MCM-CB subexperiment is an estimate p̂_P_, (P_,Q) of a quantity p_P_, (P_,Q) that is determined by the PTM elements λ̃_P, (Q_1, Q_2) of ℳ', where ℳ = ℳ'(𝒱⊗𝕀_m). Specifically, p̂_P_,(P_,Q)≈ p_P_, (P_,Q), where p_P_, (P_,Q) = √(∏_j=1^ℓλ̃_𝒱^j[P], (Q^j-1P_, Q^jP_)), and where ℓ is the smallest positive even integer satisfying 𝒱^ℓ=𝕀_n-m, as long as λ̃_𝒱^j[P], (Q^j-1P_, Q^jP_) > δ for some constant δ > 0 for all 1 ≤ j ≤ℓ. An MCM-CB subexperiment can be thought of as an experiment to estimate a specific parameter of a USI, determined by Eq. (<ref>). Additionally, one MCM-CB circuit set can be used to estimate p̂_P_, (P_,Q) for all (P_, Q) ∈𝐙_m ×𝐙_m, because the circuits are fully determined by P_. These p̂_P_, (P_,Q) can be used to estimate λ_a,b,P_ by making the assumption p_P_, (P_,Q)≈λ̃_P_, (Q,P_). The MCM-CB protocol consists of running many MCM-CB subexperiments to estimate the fidelity of ℳ. Our protocol is the following: * Pick K uniformly random triplets of Pauli operators (P_,P_, Q) ∈ℙ_n-m×𝐙_m ×𝐙_m. * For each triplet of Pauli operators (P_, k, P_, k, Q_k), perform an MCM-CB subexperiment with input Pauli P_k and measured subsystem Paulis (P_,k,Q_k), and estimate the decay constant p̂_P_, k, (P_, k, Q_k). * Compute the estimated process fidelity, F̂ = 1/K∑_k=1^K p̂_P_, k, (P_, k Q_k). When m=0, this protocol reduces to CB of Clifford gates. The number of samples K required for MCM-CB to estimate F to within a fixed multiplicative uncertainty is independent of n, so our method remains sample efficient for large n (see the SM). The fidelity of randomly compiled MCMs—In the infinite sampling limit, MCM-CB measures F_MCM-CB, and this value satisfies F_MCM-CB≤ F(ℳ). This is because MCM-CB estimates F(ℳ) by averaging the MCM-CB decay constants p_P_,(P_,Q), many of which are the geometric mean of multiple λ̃_P_,(P_,Q) [Eq. (<ref>)]. MCM-CB uses the approximation √(∏_k=1^ℓλ̃_𝒱^k[P], (Q^k-1P_, Q^kP_))≈1/ℓ∑_k=1^ℓλ̃_𝒱^k[P], (Q^k-1P_, Q^kP_). In the SM, we show that this approximation implies that F_MCM-CB≤ F(ℳ). The result of CB of Clifford gates is often used as an estimate of the fidelity of the bare Clifford layer—i.e., the layer without randomized compiling—and this is reliable because randomized compiling of Clifford gates preserves process fidelity (when single-qubit gates are perfect). However, randomized compiling does not preserve the fidelity of MCM layers. Instead, F(ℳ) ≤ F(ℳ) for any instrument ℳ . The fidelity of the randomly compiled instrument is (using Eq. (46) in Ref. <cit.>) F(ℳ) = (_G ∈_n-m _k ∈ℤ_2^m (𝒢^†𝒱^†⊗k)ℳ_k (𝒢⊗k)) = 1/2^m∑_k ∈ℤ_2^m(ℳ_k (𝒱^†⊗kk)). Comparing Eq. (<ref>) to Eq. (<ref>), we conclude that F(ℳ) ≤ F(ℳ) for all ℳ. Simulations—We simulated MCM-CB of layers consisting of computational basis measurements and idle gates, using Stim <cit.>. We randomly sampled USIs of the form ℳ = ℰ_post(𝒯⊗𝕀_n-m)ℒℰ_pre, where ℒ is the error-free quantum instrument, 𝒯 is a stochastic Pauli channel acting on the unmeasured qubits, and ℰ_pre and ℰ_post are n-qubit stochastic Pauli error channels where each Pauli error acts nontrivially on at least one measured qubit. We sampled 3^n-m uniform random Pauli error rates for each ℰ_pre and ℰ_post, and normalize them such that the total error rate is p/2. Each 𝒯 has 3^n-m Pauli error rates sampled equivalently, with total error rate p. We sampled 120 error models with p ∈ [0.0001, 0.0601). We model initial state preparation and final measurement error as independent bit flip errors on each qubit, and we sample each qubit's error rate at random such that the average state preparation (measurement) error rate per qubit is 0.005 (0.01). We simulated MCM-CB with K=100 sampled Paulis and layers with n-m=4,6,8 unmeasured (idling) qubits and m=1,2 measurements. Figure <ref>a shows the MCM-CB estimates of the process fidelity (all error bars are 1σ and computed via a parametric bootstrap). We observe that for these error models, MCM-CB accurately estimates the USI's process fidelity. The uncertainty in the MCM-CB estimates is small for all error models. Most estimates lie within 1σ of the true fidelity, and all lie within 2.5σ. Figure <ref>c shows how the standard deviation of F̂ scales with K for simulated MCM-CB experiments with n=7 qubits and m=2 measured qubits (see the SM for details). The standard deviation decreases quickly with K, and it has the expected 1/√(K) scaling for CB <cit.>. While MCM-CB is very accurate for the error models used in Fig. <ref>a, it is only expected to reliably lower-bound the process fidelity, rather than accurately estimate the fidelity, for general error models. We expect to see larger deviations from the true fidelity with USIs that have low fidelity, and where many of the λ̃_P,(Q_1,Q_2) differ in magnitude from λ̃_P,(Q_2,Q_1) (e.g., because of high rates of correlated pre- or post-MCM error, or error that is strongly biased towards pre- or post-MCM error). We show additional simulation results in the SM that explore this effect. IBM Q demonstrations—We ran MCM-CB on to benchmark an n=4 qubit layer of linearly-connected qubits (40–43) with m=1 MCMs (on 40), and n-m=3 idling qubits that have X-X dynamical decoupling applied. We exhaustively sample the MCM-CB subexperiments, which requires running 64 sets of MCM-CB circuits and performing 4 different analyses of each circuit set [Note that p_P_,(I,Z)=p_P_,(Z,I) for the MCM layer we benchmarked, and therefore only 3 analyses are strictly required for each circuit set. We performed both of the equivalent analyses and observed p_P_,(I,Z)=p_P_,(Z,I) for all P_.]. See the SM for full results. Figure <ref>c shows the estimated infidelity of the full 4-qubit layer and its subsets (obtained by data marginalization). The errors on the unmeasued subsystem are much larger than the errors on the measured qubit—the measured qubit's infidelity [0.022(2)] is low relative to the infidelity of the individual idling qubits [0.072(8), 0.085(8), and 0.040(5)]. The infidelity of the whole idling subsystem [0.11(1)] is only slightly higher than the infidelity of the highest-error individual qubits, indicating predominately correlated errors. The infidelity of the idling subsystem [0.10(1)] is within 1σ of the infidelity of the full 4-qubit cycle. To obtain more details of the error, we estimate the Pauli error probabilities for each 𝒯_a,b. We do this by estimating λ_0,0,P_, λ_1,1,P_, and λ_0,1,P_+λ_1,0,P_ for all P_∈_3 and performing a Walsh-Hadamard transform <cit.> on each of these three sets of Pauli eigenvalues. We use the approximation λ̃_P_, (I,Z)=λ̃_P_, (Z,I)=p̂_P_, (I,Z) to estimate λ_0,0,P_ and λ_1,1,P_ (see the SM for details). Figure <ref> shows the dominant Pauli errors. All but 5 error rates are within 1σ of 0. The dominant errors are ZZ errors on adjacent idling qubits [probabilities 0.044(3) and 0.022(3)], which is consistent with the known coherent error caused by ZZ coupling in transmon qubits <cit.>. The next largest source of error is single bit flip errors on the measured qubit, which happen with probability 0.014(2). MCM layer set cycle benchmarking—MCM-CB of a single MCM layer can learn a limited number of parameters of the layer's error. To learn more parameters of a USI, we can benchmark a set of layers containing multi-qubit Clifford gates and MCM layers. To do so, it is useful to also extend MCM-CB to allow arbitrary single-qubit gates between repetitions of a layer (analogous to Ref. <cit.>), rather than just the Pauli gates required for randomized compilation. This enables freely transforming between Pauli operators with the same support, in which case the learnable parameters of a layer set are determined by the cycle space of the layer set's pattern transfer graph (PTG), which describes how each layer transforms the support (P) of Pauli operators <cit.>. To include MCM layers in the PTG, we add an edge from (P_⊗ P_) to (P_⊗ Q) for each USI PTM element λ̃_P_, (P_,Q). There is an (extended) MCM-CB subexperiment that learns a product of parameters iff that product is a composition of cycles in the PTG. These MCM-CB subexperiments use a repeated sequence of layers from the layer set. For example, by performing MCM-CB subexperiments with the sequence of 2-qubit layers consisting of a controlled Z gate followed by a single-qubit MCM, we can learn 12 parameters that cannot be learned by separately characterizing the two layers. Each of these parameters is a relational quantity that depends on the error of both layers (see the SM for details). Discussion—We have introduced theory for comprehensively characterizing USI error, by extending Pauli noise learning to MCMs. Furthermore, we have introduced MCM-CB, a scalable protocol for estimating the process fidelity of a randomly compiled MCM layer and for lower bounding the process fidelity of the layer without randomized compiling. We have also demonstrated how our method can be used to obtain detailed error information for MCM layers on current quantum processors. MCM-CB enables detecting and quantifying measurement crosstalk and correlated error in MCM layers, which can provide useful diagnostic information. Accurate Pauli noise learning is also critical to the performance of common error mitigation techniques <cit.>. Our method is capable of learning more details of MCM noise than existing noise learning techniques for measurements <cit.>, and can therefore be applied to improve the accuracy of error models for use in error mitigation. Our theory can also potentially extend to learning the error present in MCMs with classical feedforward in detail <cit.>. While we have focused on characterizing individual MCM cycles, some of the most interesting applications of our theory are extending other Pauli noise learning techniques <cit.> to benchmark complex circuits or layer sets containing MCMs. In particular, we anticipate our method will be integrated with existing MCM-free Pauli noise learning techniques for characterizing syndrome extraction circuits <cit.>. Note—After the completion of this work, Ref. <cit.> was posted to arXiv, which introduces a similar Pauli noise learning method to ours. § ACKNOWLEDGEMENTS This material was funded in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Quantum Testbed Pathfinder Program. T.P. acknowledges support from an Office of Advanced Scientific Computing Research Early Career Award. Sandia National Laboratories is a multi-program laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. We acknowledge the use of IBM Quantum services for this work. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of the U.S. Department of Energy, or the U.S. Government, or IBM, or the IBM Quantum team. § SUPPLEMENTAL MATERIAL § MCM-CB THEORY §.§ MCM-CB with Clifford Gates on Unmeasued Qubits In the main text, we assume that the error-free evolution of the unmeasured subystem is 𝒱 = 𝕀_n-m. We obtain the general case, i.e., 𝒱 is any (n-m)-qubit Clifford operation, by decomposing ℳ = ℳ'(𝒱⊗𝕀_m), where ℳ' is a USI with ideal evolution 𝕀_n-m on the unmeasured subsystem, i.e, ℳ'_k = ∑_a,b ∈ℤ_2^m𝒯_a,b⊗𝒳^⊗ bkk𝒳^⊗ a. The PTM elements of ℳ are given by P' ⊗ Q_2ℳ_kP ⊗ Q_1 = P' ⊗ Q_2ℳ'_k𝒱[P] ⊗ Q_1. Because 𝒱[P] is also a Pauli operator, we can now apply Eq. (<ref>), and the rest of our theory follows. §.§ Estimating the Process Fidelity of USIs In this section, we show that all λ̃_P_,(P_,Z^dP_) are linearly independent. We show that any two distinct λ̃_P_,(P_,Z^dP_) are orthogonal linear combinations of {λ_a,b}_a,b ∈ℤ_2^m by (1) expressing each λ̃_P_,(P_,Z^dP_) as a 2^m-dimensional vector v⃗_P_,(P_,Z^dP_) in the basis {λ_a,b}_a,b ∈ℤ_2^m, and (2) taking a dot product of the resulting vectors for two parameters λ̃_P_,Z^c,Z^dZ^c) and λ̃_P_,(Z^c',Z^d'Z^c'): v⃗_P_,Z^c,Z^dZ^c)·v⃗_P_,(Z^c',Z^d'Z^c') = ∑_a,b (-1)^b · d(-1)^b · d'(-1)^(a ⊕ b) · c(-1)^(a ⊕ b) · c' = ∑_a,b (-1)^b · (d ⊕ d')(-1)^(a ⊕ b) · (c ⊕ c') = δ_d,d'δ_c, c', where δ denotes the Kronecker delta. To obtain an expression for λ_0,0,P_, we sum over λ̃_P_,(Z^c,Z^dZ^c) for all d,c ∈𝐙_m: ∑_d, c ∈ℤ_2^mλ̃_P_,(Z^c,Z^dZ^c) = ∑_d,c ∈ℤ_2^m∑_a, b ∈ℤ_2^mλ_a,b (-1)^d · b(-1)^c · a + c · b = ∑_d ∈ℤ_2^m∑_a, b ∈ℤ_2^mλ_a,b (-1)^d · bδ_a,b = ∑_a ∈ℤ_2^m∑_d ∈ℤ_2^mλ_a,a (-1)^d · a = λ_0,0, P_. Eq (<ref>) follows by summing over all P_∈_n-m §.§ Process Fidelity and Cycle Benchmarking Here, we show that the infinite-sampling limit MCM-CB result F_MCM-CB. MCM-CB is a lower bound on F(ℳ). In the limit of sampling all of the possible MCM-CB subexperiments, the MCM-CB fidelity estimate consists of averaging over all MCM-CB decay constants for a given cycle. The result of this averaging (using the true values of the decay constants), is F_MCM-CB = 1/4^n∑_P ∈ℙ_n-m∑_P_∈𝐙_m∑_Q ∈𝐙_m√(∏_k=1^ℓλ̃_𝒱^k[P], (Q^k-1P_, Q^kP_)) ≤1/4^n∑_P ∈ℙ_n-m∑_P_∈𝐙_m∑_Q ∈𝐙_m1/ℓ∑_k=1^ℓλ̃_𝒱^k[P], (Q^k-1P_, Q^kP_) ≤1/4^n∑_P ∈ℙ_n-m∑_Q_1, Q_2 ∈𝐙_m1/ℓ∑_k=1^ℓ/2(λ̃_𝒱^2k-1[P], (Q_1, Q_2) + λ̃_𝒱^2k[P], (Q_2, Q_1)) ≤1/4^n∑_P ∈ℙ_n-m1/ℓ∑_k=1^ℓ/2(λ_0,0,𝒱^2k-1[P], (Q_1, Q_2) + λ_0,0,𝒱^2k[P], (Q_2, Q_1)) ≤1/4^n∑_P ∈ℙ_n-mλ_0,0,P. This result implies that F_MCM-CB≤ F(ℳ) for any USI ℳ. § MCM-CB SIMULATIONS §.§ Simulations of MCM-CB on up to 4 Qubits In the few-qubit limit, it is feasible to perform every possible MCM-CB subexperiment, and this is often done in practice for CB of two-qubit gates. We simulated MCM-CB with this exhaustive sampling for USIs with n-m=1,2 unmeasured qubits and m=1,2 measurements. We simulated our method with randomly sampled USIs ℳ = ℰ_post(𝒯⊗𝕀_n-m)ℒℰ_pre. We sampled these USIs so that the infidelity of 𝒯 is p and the infidelity of ℰ_pre and ℰ_post are p/2, for 120 uniformly-spaced values p ∈ [0.0001, 0.0601). For each stochastic error channel, the Pauli error rates are chosen uniformly, and then rescaled to obtain a stochastic Pauli channel with the target infidelity. We model initial state preparation and final measurement error as independent bit flip errors on each qubit, and we sample each qubit's error rate at random such that the average state preparation (measurement) error rate per qubit is 0.005 (0.01). Fig <ref> shows the fidelity estimates from these simulations. For all error models, the estimated instrument fidelity is within 1σ of the true fidelity. We observe lower uncertainty in the estimates than observed in our simulations with larger n, which we expect due to eliminating uncertainty from sampling MCM-CB subexperiments (note, however, that the uncertainty also depends on the variance of the USI parameters, which is not fixed across our simulations). §.§ Simulations of MCM-CB with Additional Error Models The accuracy of the MCM-CB fidelity estimate depends on the properties of the USI error model. This is because (1) MCM-CB only samples a subset of MCM-CB subexperiments, hence learning only a subset of the USI parameters, and (2) MCM-CB uses a geometric mean to approximate the arithmetic mean of products of USI parameters. To explore this effect, we study the performance of MCM-CB with a different class of error models than in the simulations presented in the main text. We simulated MCM-CB with n-m=4,6,8 unmeasured qubits and m=1,2 measurements in which we performed K=100 MCM-CB subexperiments. We sampled USIs ℳ = ℰ_post(𝒯⊗𝕀_n-m)ℒℰ_pre where ℰ_pre and ℰ_post each have 100 random nonzero Pauli error rates, chosen so that the infidelity of ℰ_pre is 5p and the infidelity of ℰ_post is p. 𝒯 consists of single-qubit depolarizing noise on each unmeasured qubit, with the infidelity of each qubit sampled from a normal distribution with mean p and standard deviation 0.2p. We sample USIs for 120 uniformly-spaced values p ∈ [0.0001, 0.0601). We model initial state preparation and final measurement error in the same way as in the simulations in the previous section. Fig <ref> shows the fidelity estimates from these simulations. The MCM-CB estimate is within 2.5σ of the true fidelity for all error models. The simulations presented so far all use a fixed average rate of state preparation and measurement (SPAM) error. To provide further evidence that our method is robust to SPAM error, we also simulated our method with varied-strength SPAM error. We show the results of these simulations in Fig. <ref>(b). We ran MCM-CB with independent bit flip error on each qubit immediately after state preparation and immediately prior to final measurement, and we varied the average per-qubit error rate p from 0 to 0.02. To generate the bit flip error rates, for both state preparation and measurement error, we sample uniform random error probabilities and normalize them so that their total is pn/2. We observe no systematic effect of the magnitude of the SPAM error on the accuracy of MCM-CB. However, higher SPAM error leads to increased uncertainty in the estimates. §.§ Investigating MCM-CB Subexperiment Sampling in Simulation In Fig. <ref> of the main text, we simulated MCM-CB with three error models. These models consist of a USI ℳ = ℰ_post(𝒯⊗𝕀_n-m)ℒℰ_pre, and have the following forms: * (“Sparse”) 𝒯, ℰ_pre, and ℰ_post have each have 2^n nonzero Pauli errors, sampled in the same manner as the simulations presented in the main text, such that 𝒯 has infidelity 0.04 and each ℰ_pre, and ℰ_post have infidelity 0.01. * (“Depolarizing+MCM error with crosstalk”) 𝒯 consists of single-qubit local depolarizing error, with each qubit's error rate chosen from a normal distribution with mean 0.01 and standard deviation 0.002. Each ℰ_pre and ℰ_post have of 2^n randomly sampled nonzero Pauli error rates. ℰ_pre and ℰ_post are sampled to have the same fidelity (0.01). * (“Depolarizing+pre-MCM error with crosstalk”) 𝒯 consists of single-qubit local depolarizing error sampled from a normal distribution with mean 0.005 and standard deviation 0.0001, ℰ_pre consists of 2^n randomly-sampled Pauli errors, and ℰ_post = 𝕀_n with total infidelity 0.02. § LAYER SET CYCLE BENCHMARKING In this section, we expand on the example of performing MCM-CB with a 2-qubit layer set consisting of two layers: a controlled Z (CZ) gate (between two qubits 0 and 1) and a single-qubit MCM (which we will assume is on 0, but the case of an MCM on 1 is analogous). To clarify the position of the MCM, we will use the notation λ̃_(Q_1,Q_2),P to denote the PTM elements of the USI of the MCM layer. We model the CZ gate's error as a post-gate stochastic Pauli channel with eigenvalues λ_Q^CZ and Pauli error rates p_Q for Q ∈ℙ_2. The pattern transfer graph for this layer set is shown in Fig. <ref>. The cycle space of this graph fully describes the learnable parameters of this layer set for MCM-CB with arbitrary single-qubit gates between layers, which are as follows. * Learnable with MCM-only CB: λ̃_(Z,I),Iλ̃_(I,Z),I, λ̃_(I,I),P, and λ̃_(Z,Z),P, and λ̃_(Z,I),P'λ̃_(I,Z),P for P,P;' = X, Y, Z * Learnable with CZ-only CB: λ^CZ_II, λ^CZ_IZ, λ^CZ_ZI, λ^CZ_XY, λ^CZ_YY, λ^CZ_YX, λ^CZ_XX, λ^CZ_ZZ, λ^CZ_IXλ^CZ_ZX, λ^CZ_IYλ^CZ_ZX, λ^CZ_IXλ^CZ_ZY, λ^CZ_IYλ^CZ_ZY, λ^CZ_XIλ^CZ_XZ, λ^CZ_YIλ^CZ_XZ, λ^CZ_XIλ^CZ_YZ, and λ^CZ_YIλ^CZ_YZ, * Learnable with CZ+MCM CB: λ^CZ_ZYλ̃_P,(Z,I), λ^CZ_IYλ̃_P,(I,Z), λ^CZ_ZXλ̃_P,(Z,I), λ^CZ_IXλ̃_P,(I,Z) for P = X, Y, Z Parameters learned by CB of a CZ and MCM sequence are relational errors that cannot be separated into CZ and MCM-only errors. As such, they can be difficult to interpret. Furthermore, we cannot perform a Walsh-Hadamard transform the relation errors into sums of Pauli error rates. As a step towards interpreting these error rates, we perform a first order expansion of these parameters in the Pauli error rates. First, we express λ̃_(Z,I),P, λ̃_(I,Z),P exactly in terms of Pauli error rates: λ̃_(Z,I),P = λ_0,0,P-λ_1,0,P+λ_0,1,P-λ_1,1,P = p_0,0 - p_1,0 + p_0,1 - p_1,1 -2(∑_P': [P,P'] ≠ 0 p_0,0,P-p_1,0,P+p_0,1,P-p_1,1,P) = 1-2(p_0,1+p_1,1)-2(∑_P': [P,P'] ≠ 0 p_0,0,P-p_1,0,P+p_0,1,P-p_1,1,P), where p_a,b denotes the probability of pre-measurement MCM error 𝒳^⊗ a and post-measurement error 𝒳^⊗ b, and p_a,b,P denotes the probability of pre-measurement error 𝒳^⊗ a, post-measurement error 𝒳^⊗ b, and error P on the unmeasured qubits. Analogously, λ̃_(Z,I),P = 1-2(p_1,0+p_1,1)-2(∑_P': [P,P'] ≠ 0 p_0,0,P+p_1,0,P-p_0,1,P-p_1,1,P), We now compute the CZ-MCM relational parameters and drop second order terms, λ^CZ_ZXλ̃_(Z,I),P ≈ 1 - 2(p_XI+p_XX+p_YI+p_YX+p_ZY+p_ZZ+p_IY+p_IZ) -2(p_1,0+p_1,1)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' - p_1,0,P' + p_0,1,P'-p_1,1,P') = 1- 2(p_XI+p_XX+p_YI+p_YX+p_ZY+p_ZZ+p_IY+p_IZ) -2(p_1,0,I+p_1,1,I+p_1,0,P+p_1,1,P)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' + p_0,1,P') λ^CZ_ZYλ̃_(Z,I),P ≈ 1 - 2(p_XI+p_XY+p_YI+p_YY+p_ZX+p_ZZ+p_IX+p_IZ) -2(p_1,0+p_1,1)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' - p_1,0,P' + p_0,1,P'-p_1,1,P') = 1 - 2(p_XI+p_XY+p_YI+p_YY+p_ZX+p_ZZ+p_IX+p_IZ) -2(p_1,0, I+p_1,1,I+p_1,0, P+p_1,1,P)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' + p_0,1,P') λ^CZ_IXλ̃_(I,Z),P ≈ 1 - 2(p_IY+p_IZ+p_XY+p_XZ+p_YY+p_YZ+p_ZY+p_ZZ) -2(p_0,1+p_1,1)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' + p_1,0,P' - p_0,1,P'-p_1,1,P') = 1 - 2(p_IY+p_IZ+p_XY+p_XZ+p_YY+p_YZ+p_ZY+p_ZZ) -2(p_0,1,I+p_1,1,I+p_0,1,P+p_1,1,P)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' + p_1,0,P') λ^CZ_IYλ̃_(I,Z),P ≈ 1 - 2(p_IX+p_IZ+p_XX+p_XZ+p_YX+p_YZ+p_ZX+p_ZZ) -2(p_0,1+p_1,1)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' + p_1,0,P' - p_0,1,P'-p_1,1,P') = 1 - 2(p_IX+p_IZ+p_XX+p_XZ+p_YX+p_YZ+p_ZX+p_ZZ) -2(p_0,1,I+p_1,1,I+p_0,1,P+p_1,1,P)-2(∑_P': [P,P'] ≠ 0 p_0,0,P' + p_1,0,P') We see that by performing CB with the CZ+MCM germ, we do not learn p_0,1,P' p_1,0,P' in isolation, but we learn information about p_0,1,P'-p_1,0,P' in combination with many CZ error rates. It is also useful to expand the parameters learned by MCM-CB of the MCM layer to first order. This offers an alternate approach to estimating Pauli error probabilities using MCM-CB data, instead of the approximation λ̃_(I,Z), P_≈λ̃_(Z,I), P_≈p̂_(I,Z),P_ used in the main text. To first order, the parameters we learn in product are λ̃_(Z,I),Pλ̃_(I,Z),P = 1 - 2(p_0,1 + p_1,0 + 2p_1,1) - 4(∑_P': [P,P'] ≠ 0 p_0,0,P'-p_1,1,P') λ̃_(Z,I),Iλ̃_(I,Z),I = 1 - 2(p_0,1 + p_1,0 + 2p_1,1). The second equation allows us to learn p_1,1 and p_0,0, beauce we can learn p_0,1+p_1,0 from λ̃_(I,I),I and λ̃_(Z,Z),I. Furthermore, to first order, λ̃_(Z,I),Iλ̃_(I,Z),I-λ̃_(Z,I),Pλ̃_(I,Z),P = 4(∑_P': [P,P'] ≠ 0 p_0,0,P'-p_1,1,P') Using the above equation, we can estimate p_0,0,P, p_1,1,P and p_0,1,P+p_1,0,P for P=X,Y,Z using λ̃_(Z,I),Iλ̃_(I,Z),I and all λ̃_(Z,I),Pλ̃_(I,Z),P, λ̃_(Z,Z),P and λ̃_(I,I),P. § DETAILS OF IBM Q DEMONSTRATIONS In this section, we provide additional details of our demonstrations of MCM-CB on IBM Q processors. Figure <ref>(a) shows the MCM-CB decay parameters p̂_P_,(P_,Q) from our 4-qubit MCM-CB demonstration on . We omit the decay parameters p̂_P_,(I,Z) because p_P_,(I,Z)=p_P_,(Z,I), and our estimates of p_P_,I,Z) and p_P_,(I,Z) were approximately equal. We estimated the eigenvalues of the 𝒯_a,b using λ̂_0,0,P = 1/4(p̂_P, (I, I) + p̂_P, (Z, Z) + 2p̂_P, (Z, I)) λ̂_0,1,P+λ̂_1,0,P = 1/2(p̂_P, (I, I) - p̂_P, (Z, Z)) λ̂_1,1,P =1/4( p̂_P, (I, I) + p̂_P, (Z, Z) - 2p̂_P, (Z, I)). Note that our estimates for λ_0,0,P, λ_1,1,P use the approximation λ̃_P_,(I,Z)≈λ̃_P_,(Z,I)≈ p_P_,(Z,I) for all P_. We then perform a Walsh-Hardamard transform on each of the sets of values {λ̂_0,0,P}, {λ̂_1,1,P}, and {λ̂_0,1,P+λ̂_1,0,P} to estimate the Pauli error rates of the MCM layer <cit.>. Figure <ref>(b) shows all estimated Pauli error rates. Table <ref> shows the calibration data for from the time we ran our MCM-CB circuits. In addition to our 4-qubit demonstration, we ran 2-qubit MCM-CB of a layer consisting of an MCM (on 13) and an idling qubit (14) of . Figure <ref>(c) shows all MCM-CB decay parameters from this demonstration, and Fig. <ref>(d) shows all estimated Pauli error rates for the MCM layer, using the same approach as in our 4-qubit MCM-CB demonstration to perform the estimation. For this layer, single MCM bit flip errors are the dominant source of error [0.009(1)], followed by Y, X, and Z errors on the unmeasured qubit. The calibration data from the time of execution is shown in Table <ref>.
http://arxiv.org/abs/2406.09220v1
20240613152020
An improved asteroseismic age of the rapid rotator Altair from TESS data
[ "Michel Rieutord", "Daniel R. Reese", "Joey S. G. Mombarg", "Stéphane Charpinet" ]
astro-ph.SR
[ "astro-ph.SR" ]
Understanding the effects of rotation in stellar evolution is key to modelling early-type stars, half of which have equatorial velocities over 100 km/s. The nearby star Altair is an example of such fast-rotating stars, and furthermore, it has the privilege of being modelled by a detailed 2D concordance model that reproduces most of its observables. The aim of this paper is to include new asteroseismic frequencies to improve our knowledge of Altair, especially its age. We processed images of Altair obtained during July 2022 by the Transiting Exoplanet Survey Satellite using the halo photometry technique to obtain its light curve over this observation period. By analysing the light curve, we derived a set of 22 new frequencies in the oscillation spectrum of Altair and confirmed 12 previously known frequencies. Compared with model predictions, we could associate ten frequencies with ten axisymmetric modes. This identification is based on the modelled visibility of the modes. Moreover, nine of the modelled frequencies can be adjusted to simultaneously match their corresponding observed frequencies, once the core hydrogen mass fraction of the concordance model is set to X_ core/X_ ini≃0.972, with X_ ini=0.739. Using the combined results of a 1D model computing the pre-main sequence and a 2D time-dependent model computing the main sequence, we find that this core hydrogen abundance sets the age of Altair to 88±10 Myrs, which is slightly younger than previous estimates. A 100 kpc Ram Pressure Tail Trailing the Group Galaxy NGC 2276 I.D. Roberts<ref>,<ref>,<ref> R.J. van Weeren<ref> F. de Gasperin<ref> A. Botteon<ref> H.W. Edler<ref> A. Ignesti<ref> L. Matijević<ref> N. Tomičić<ref> Abstract Rotational Freudenthal duality (RFD) relates two extremal Kerr-Newman (KN) black holes (BHs) with different angular momenta and electric-magnetic charges, but with the same Bekenstein-Hawking entropy. Through the Kerr/CFT correspondence (and its KN extension), a four-dimensional, asymptotically flat extremal KN BH is endowed with a dual thermal, two-dimensional conformal field theory (CFT) such that the Cardy entropy of the CFT is the same as the Bekenstein-Hawking entropy of the KN BH itself. Using this connection, we study the effect of the RFD on the thermal CFT dual to the KN extremal BH. We find that the RFD maps two different thermal, two-dimensional CFTs with different temperatures and central charges, but with the same asymptotic density of states, thereby matching the Cardy entropy. In an appendix, we discuss the action of the RFD on doubly-extremal rotating BHs, finding a spurious branch in the non-rotating limit, and determining that for this class of BH solutions the image of the RFD necessarily over-rotates. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Within 10 parsecs of the Solar System, Altair is the star with the fastest rotation rate. Its equatorial velocity projected on the line of sight (V sini) reaches 240 km/s, while interferometric observations show that the inclination of the rotation axis on the line of sight is i=50.65±1.23^∘ <cit.>, leading to an equatorial velocity of 310 km/s. As a consequence of this high equatorial velocity, Altair is strongly distorted by the centrifugal force, thus making it a privileged target for imaging by interferometry <cit.>. Actually, this star also shows δ-Scuti type oscillations, as revealed by the work of <cit.> using data collected in 1999 by the Wide Field Infrared Explorer (WIRE) satellite star tracker <cit.>. This detection was later confirmed by <cit.> with data from the Microvariability and Oscillations of STars (MOST) satellite <cit.>, which observed Altair in 2007, 2011, 2012, and 2013. These first space observations revealed 15 eigenfrequencies of the star. Soon after, using spectroscopic time series, <cit.> showed the presence of prograde surface perturbations associated with internal gravito-inertial waves. This wealth of data from interferometry and the first seismic frequencies of <cit.> motivated <cit.> to attempt a 2D modelling of this star using the public domain code <cit.>. Hence, <cit.> managed to design the first concordance model of Altair, and based on this model, it turned out that this star is ∼100 Myrs old and has a mass of 1.86±0.03 . To progress further in the understanding of the properties of this star, an improved modelling is needed, and new data are of course most welcome. In this article, we present the results of our processing of data recently collected by the Transiting Exoplanet Survey Satellite <cit.> and investigate its first implications on Altair's modelling. Altair was in the field of view of TESS from 10 July to 5 August 2022. However, it is one of the brightest stars in the sky, and therefore, it saturated TESS' detectors. To circumvent this problem, we resorted to halo photometry as described by <cit.> in order to exhibit Altair's light curve. In the following, we present the way we derived Altair's light curve from TESS data (Section 2) and give the list of frequencies resulting from its analysis (Section 3). We then focus on the identification of the most visible modes using models of adiabatic oscillations based on the and codes (Section 4). Finally, we discuss the implications of the results of the estimated age of Altair (Section 5), and conclusions follow. § PREPARATION OF THE DATA With an apparent visual magnitude of 0.76 (TESS magnitude is 0.58), the photon flux from Altair largely saturates the pixel where the star falls. To circumvent this difficulty, one may concentrate on pixels that received the overflowing electrons at a level below saturation or those fed by the scattered light, namely those on the halo that has been generated by the generous light flux from the star. <cit.> (W17 hereafter) have shown that this halo still contains information on the flux variations of the star. Provided that these variations are on a timescale much longer than the jitter of the instrument or its random noise, W17 showed that one can recover the star light curve by selecting pixels that minimise the flux variations during the monitoring of the star. As shown in this work, the flux at time n may be written as f_n=∑_i=1^M w_ip_in, where w_i is the weight of the ith pixel and p_in is the flux in this pixel at time n. The weights satisfy w_i>0 ∑_i=1^M w_i =1 , where M is the number of selected pixels. As discussed by W17, the term w_i can be determined by minimising a `normalized first-order total variation (TV) of fluxes' defined by TV = ∑_n=1^N |f_n-f_n-1|/∑_n=1^N f_n, where N is the total number of frames. We refer the reader to the article of W17 for a detailed presentation and discussion of the halo photometry method. From the Mikulski Archive for Space Telescopes (MAST), we downloaded the target pixel file associated with Altair[tess2022190063128-s0054-0000000070257116-0227-s_tp.fits] on sector 54. It contains a sequence lasting 26.23 days starting from MJD 59770.40 with three interruptions of 4.18, 1.19, and 2.90 days. The larger gaps correspond to the time when the Earth was close to the field of view of the camera recording Altair (Camera 1), while the small gap of 1.19 days is due to data download at perigee <cit.>. The sampling cadence was 120 seconds, so we processed 18890 images, out of which we extracted N=13720 images of 430×11 pixels with usable data. As shown in Fig. <ref>, the images are extremely elongated in order to capture the overflowing electrons from the saturated pixels. As a first step in the processing, we removed the fluctuating background using the source extractor software <cit.>. We then evaluated the saturation of the images at a level of ∼148,000 ADU, from which we decided to mask all pixels with a flux level higher than 120,000 ADU. Similarly, we also masked pixels with a level less than 1000 ADU. All the rows beyond index 270 were also dismissed. These thresholds were chosen by varying them so as to optimise the final signal-to-noise ratio of the detected modes. These thresholds led to a selection of 230 valid pixels. The minimisation process of the TV function (Eq. <ref>) was done using the `Sequential Least Squares Programming' of the scipy library <cit.> following W17. The minimisation process led to the selection of 55 pixels that actually monitor the flux of Altair. These pixels are shown in Fig. <ref> (red dots) along with a thumbnail image. We cannot exclude the possibility that the selected pixels are contaminated by some field stars since Altair lies at a low galactic latitude (-891) and the stellar background density is high. However, inspection of the TESS field around Altair showed that neighbouring stars are all fainter than mag. 9.5, that is at least 3,300 times fainter, allowing us to believe that any contamination should be less than a percent. This level of contamination is acceptable for our purpose of extracting oscillation frequencies. By using the selected pixels, removing a few outliers, and subtracting a polynomial fit of degree 9 designed for each of the four sub-sequences, we obtained the neat light curve shown in Fig.<ref>. The polynomial fit subtraction increases the signal-to-noise ratio, but it can also suppress frequencies (typically) less than one cycle per day. Filtered and unfiltered data used to compute the light curve are available in electronic form (see title footnote). We then analysed the light curve by computing the classical Lomb-Scargle periodogram (LSP), pre-whitening the signal and adjusting a linear combination of sinusoids with a non-linear least-square optimisation. We used the code <cit.> to process the light curve. The oscillation spectrum of Altair is shown in Fig. <ref>, and the extracted frequencies are given in Table <ref>. We adopted a conservative detection threshold of S/N=5, following the discussion of <cit.>, with noise being evaluated locally around each frequency as the median value of the residual in the LSP after subtraction of all significant peaks. § RESULTS In view of the frequencies and amplitudes extracted from the light curve, we note that Altair's excited eigenmodes are coherent over a long time base. Mode frequencies found in the previous data, going back to the years 1999, 2007, 2011, 2012, and 2013 <cit.>, are found again in 2022, except for one frequency at 2.58 c/d and two frequencies less than 1 c/d. The two latter frequencies, if they are indeed intrinsic, have most likely been removed by the subtraction of the polynomial fit, as mentioned above. The disappearance of the peak at 2.58 c/d observed in 1999, 2012, and 2013, suggests that this is a beating frequency resulting from the superposition of modes with frequencies f_2,f_3,f_7,f_10, which show an average difference of 2.54 c/d. The more accurate data collected by TESS removed this frequency. However, the new data confirm four of the six new frequencies exhibited by <cit.>, namely f_6,f_27,f_12, and f_18, and add many other new frequencies. The oscillation spectrum of Altair now contains at least 34 frequencies that need be explained. We note that in the 2012 data set, <cit.> noticed a low-amplitude peak near 3.0 c/d, namely at the expected rotation frequency deduced from the concordance model of <cit.>. No such peak appears in our data. If the 2012 detection is real, it may have been generated by a transient feature at the surface of Altair. A magnetic spot is then the most natural explanation since Altair is known to have some magnetic activity in X-rays <cit.>. Finally, we completed the monitoring of the amplitudes of the six most important modes by adding the measured amplitudes at the dates of observation, namely at 2022.555. The result is shown in Fig. <ref>, where one can see that the mode amplitudes of Altair are continuously varying over the years. However, we emphasise that the foregoing variations may also be influenced by the difference in the bandpass of the instruments: WIRE observed in V+R <cit.>, MOST used the [375,675] nm band <cit.>, and TESS uses the [600,1000] nm band <cit.>. A tentative explanation for the amplitude variations given by <cit.> is that modes are coupled to a thin subsurface convective layer where helium is twice ionised and where the kappa mechanism operates. This coupling is a possible mechanism to induce time-varying amplitudes, but it needs numerical simulations to be confirmed and to show how it operates. § FORWARD MODELLING With this new set of frequencies we would like to know how models compare with these observations, especially the concordance model of <cit.>, which was built to reproduce all data available in 2020. In particular, a few frequencies of the 1999 data set <cit.> could be retrieved. In this section, we make a larger and more complete investigation of the oscillation spectrum of Altair's concordance model and compare its predictions with our observations. First, we recall the characteristics of the model of Bouchaud et al. It is a 2D axisymmetric steady model <cit.> with a mass of 1.863  that rotates with an equatorial velocity of V_eq=313 km/s, thus at 74.4% of the critical angular velocity. This high rotation rate induces a rotational flattening of 22%. The metallicity of the model is Z=0.0192, and the hydrogen mass fraction is X_ ini=0.739 in the envelope and X_ core/X_ ini=0.963 in the core. This latter ratio was interpreted as corresponding to an age of ∼100 Myrs by <cit.>. As a first step in deciphering the eigenspectrum of Altair, we computed the adiabatic oscillations using the code, which can use a 2D-model as an equilibrium model <cit.>. Since the model is axisymmetric, eigenmodes are characterised by their azimuthal periodicity and equatorial symmetry, which we denote as m^±. Here, m^+ are modes that are symmetric with respect to the equator and whose azimuthal variation is ∝exp(imφ). Conversely, m^- are equatorially anti-symmetric with the same exp(imφ) dependence. Because TESS observations are photometric, we only expected m=0,1,2 to be visible. Indeed, in fast rotators such as Altair, eigenmodes are made of a long series of spherical harmonics strongly coupled in the ℓ index by the Coriolis acceleration, the centrifugal distortion of the background, and the differential rotation. However, since the model is axisymmetric, modes of different azimuthal wavenumber m are uncoupled. Just as for the ℓ in a non-rotating star, only the low m may provide modes with high visibility. Hence, we focused on eigenmodes with the m=0^±,1^±,2^± symmetries. For the non-axisymmetric ones, we included both retrograde and prograde modes, that is, m can be positive or negative. The code determines eigenvalues using the Arnoldi-Chebyshev algorithm, which is a Krylov-based method <cit.> that allowed us to compute a small number of eigenvalues around a given point of the complex plane. Since we restricted ourselves to adiabatic oscillations, all eigenfrequencies are real. We first computed eigenvalues around each observed frequency, and it revealed a high density of frequencies, especially in the low-frequency range.[This is not a surprise since gravito-inertial modes likely form a dense spectrum in the low-frequency range, as can be argued in some simplified set-ups <cit.>.] We therefore concentrated on frequencies larger than 15 c/d, which contain the oscillations with the highest amplitudes, and we scanned the frequency interval from 15 to 44 c/d. Interferometric observations have determined quite precisely the inclination of the rotation axis of Altair with respect to the line of sight (see introduction). Hence, we could compute the visibility of each mode and search for the most visible modes. As we had restricted ourselves to adiabatic oscillations, we had no predictive power over the mode amplitudes. Yet, we assumed that those with the largest scales are the least damped or the most unstable and thus the most visible as well. Noting that the true visibility is tightly related to the temperature fluctuation, which in an adiabatic model is itself strongly related to the pressure fluctuation, we defined a proxy of the visibility as V_ proxy = ∫_S_v (δ p /P) ·/max_S(|δ p /P|), where S_v is the visible part of the stellar surface, is a unit vector pointing towards the observer, and = dS is the surface element directed along the normal of the surface. This proxy of the visibility is faster to compute than the true visibility of a mode as described in <cit.>, and it still allowed us to qualitatively compare the visibility of different modes and to display the most visible ones. The resulting plot of the foregoing proxy is shown in Fig. <ref>. In this figure (upper part), each symbol refers to an eigenfrequency of <cit.> concordance model. One may note the large density of eigenvalues in any interval of frequencies. As the y-axis shows the visibility of the modes, the peaks of the upper envelope of the symbols (the blue line) point out frequencies associated with eigenmodes with only large-scale variations on the surface of the star. In the lower part of Fig. <ref>, we have drawn red bars representing the observed oscillation amplitudes (arbitrary unit) at the observed frequencies. Thin vertical lines show possible identifications between visibility peaks and observed frequencies. We note that all of these possible identifications are associated with axisymmetric modes (blue symbols). Only f_3 (at 20.8 c/d) seems to be associated with a non-axisymmetric 1^+ retrograde mode. Surprisingly, the most prominent observed mode, at f_1=15.77 c/d, is not associated with any highly visible mode of the model. We discuss this point below. Regarding Fig. <ref>, we also note that the model frequencies (peaks of visibilities) are systematically lower than the observed frequencies. If these frequencies are associated with acoustic modes, as suggested by <cit.>, this shift means that the model is not dense enough since acoustic frequencies scale with the average density of the star <cit.>. In other words, the concordance model of <cit.> is slightly too old. For five axisymmetric modes, which can be associated with observed frequencies, we examined the frequency dependence with X_ core/X_ ini, namely with the hydrogen mass fraction in the core scaled by its initial value. This dependence is illustrated in Fig. <ref>. We note that the 0^+ modes, which are axisymmetric and symmetric with respect to equator, all point to a ratio of X_ core/X_ ini≃0.972, while the concordance model used X_ core/X_ ini=0.963. We also note that 0^- modes (antisymmetric with respect to equator) do not agree with the X_ core/X_ ini value pointed out by the 0^+ modes. The 0^- modes even disagree among themselves: f_7 suggests a ratio of ∼0.963, while f_30 points to ∼0.99. A misidentification of the associated modes could explain the different X_ core inferred from the 0^- modes, but this may also reveal an unrealistic profile of the sound speed along the acoustic rays. The error in the sound speed may be compensated in symmetric modes but emphasised in anti-symmetric ones. Further investigation is needed. The previous remarks prompted us to recompute eigenfrequencies of a model with X_ core/X_ ini=0.972. However, in this case we restricted our calculations to axisymmetric modes and frequencies between 22 and 39 c/d so as to gain clarity and save computation time. The results are shown in Fig. <ref>. As expected, the matching of 0^+ frequencies is quite good, and that of 0^- frequencies is fair enough. The foregoing results extend the previous possible identifications of frequencies of Altair's spectrum. Indeed, <cit.> identified the frequencies [23.2791, 25.95061, 28.4064] c/d with three axisymmetric modes of symmetry [0^-, 0^+, 0^-], respectively. We extend this series with six additional modes at frequencies [24.1551, 25.4012, 31.1858, 34.1521, 35.8214, 36.5285] c/d of symmetries [0^+,0^+,0^+,0^-,0^+,0^+], respectively. We acknowledge that the identification of frequency f_30 is only tentative. All foregoing modes are acoustic modes, so-called island modes in the terminology of <cit.>. They are associated with periodic orbits of acoustic rays bouncing between low north and south latitudes, as illustrated in Fig. <ref> for three symmetric 0^+ modes at f_2, f_26, and f_33. As noted previously, the most prominent observed frequency at 15.76789 c/d does not seem to be associated with any highly visible mode of the model. However, this value is reminiscent of the frequency of the fundamental mode of δ-Scuti stars, as can be derived from the period-luminosity relation given by <cit.>. This relation points to a frequency of ∼16 c/d for Altair. However, Altair is young (see below) and rotating extremely rapidly (ν_ rot≃3 c/d). To find the fundamental mode of Altair, we therefore reduced the angular velocity of the model to 15% of the critical angular velocity (instead of its actual 74.4%) while keeping all other parameters constant. As a result, we found the fundamental mode at a frequency of 20.88 c/d. This value, which is higher than the period-luminosity relation prediction, is due to the quasi-ZAMS nature of our model. We then increased the rotation rate of the model, following the evolution of the fundamental mode. As expected, its frequency decreased due to the volume increase of the model. However, when the rotation rate reached 50% of the critical angular frequency, we could no longer follow the mode, as it mixes with gravito-inertial modes at this point. Computations of eigenvalues around the expected value of the fundamental frequency showed many avoided crossings, making identification of the fundamental mode extremely difficult. An impression of the complexity of the spectrum in this frequency range is given by Fig. <ref>, but it can also be appreciated with Fig. 4 of <cit.>. To decipher the oscillation properties of Altair around its dominant oscillation frequency at 15.77 c/d, a dedicated study will be needed that likely includes non-adiabatic and non-linear effects. § THE AGE OF ALTAIR The foregoing possible identification of eight eigenfrequencies of Altair, implying X_ core/X_ ini≃0.972, prompted us to improve the estimated age of the star, which was calculated to be 100 Myrs by <cit.>. To this end, we used a new version of the code that can compute time evolution of a 2D model <cit.>. This new version of the code can tell us how much time is needed to reach the inferred hydrogen mass fraction in the core, but presently it cannot compute the pre-main sequence phase of the model. To estimate this latter phase, we reverted to MESA models <cit.>. With these 1D models, we evaluated at 26 Myrs, the duration of the pre-main sequence for a model of 1.863  and a similar chemical composition as the model. The appropriate evolutionary 2D-model says that reducing X_ core/X_ ini from unity to 0.972 takes 62 Myrs. Hence, we estimate the age of Altair to 88 Myrs. For further illustration, we give in Table <ref>, the ages given by the MESA code for a non-rotating model and a rotating model. For this latter model, we set the rotation rate to about 65% of the critical one. This rotation is the highest acceptable by the code. Although this value is clearly out of the acceptable ones for a 1D model, it gives an idea of the spherically averaged effects of rotation, which turn out to be quite modest. From the numbers shown in Table <ref>, we observed that MESA models evolve more rapidly than the 2D-model, essentially because of a slightly higher central temperature. The origin of the age differences between the models is likely to be found in the microphysics (e.g. opacities, equation of state, nuclear reaction rates). The dimensionality of the model, 1D or 2D, is crucial to estimating the fundamental parameters of the star (mass, radii, rotation rate, for instance) but is no longer crucial for time evolution in the case of Altair since this star is very young and secular effects of rotation have not had time to accumulate. Our conclusion is that in the present state of stellar models, the age of Altair is close to 90 Myrs with an uncertainty we estimate to 10 Myrs due to the differences in microphysics between the models. Finally, we note that the slight increase of X_ core/X_ ini changes the size of the star by a negligible amount compared to the uncertainties of interferometric radii measurements. § SUMMARY AND CONCLUSIONS Using the halo photometry technique <cit.>, we processed the TESS images of Altair and derived its light curve during the observation period from 10 July to 5 August 2022. Using the code <cit.>, we derived the power spectrum of the light curve and exhibited 34 frequencies. Compared to previous works <cit.> that revealed 15 frequencies, we confirmed 12 of them, and we add 22 new frequencies thanks to the high quality of the data. Previous data came from the MOST and WIRE satellites but were plagued by scattered light of the Earth due to the low-altitude orbits of their missions. The amplitudes of the repeatedly observed modes still show variations, which we tentatively attribute to the probable coupling of the modes with a thin convective layer near the surface of Altair <cit.>. We then focused on a subset of frequencies that were previously identified with those of axisymmetric modes during the making of a concordance model of Altair by <cit.>. The identification of modes at the frequencies [23.2791,25.95061,28.4064] c/d by this previous work appears reliable. By knowing the inclination of the rotation axis with respect to the line of sight, we could compute the visibilities (in fact a proxy of it) of all modes, which we determined using the concordance 2D model of <cit.> and the code for the oscillations <cit.>. We thus identified seven new frequencies still associated with axisymmetric modes. Among these ten candidates, we selected three modes that are symmetric with respect to the equator and used their frequencies to better determine the size of the star, which meant determining its age, as we assumed the mass given by <cit.>. The three frequencies indeed point to the same hydrogen mass fraction in the core, namely X_ core/X_ ini≃0.972, with X_ ini=0.739 <cit.>. Re-computing the concordance model with this slightly higher hydrogen mass fraction in the core led to a nice matching between nine observed frequencies and nine modelled frequencies. With time-evolving 2D-models, we could then estimate our best model age, namely 88 Myrs, which is slightly younger than estimated by the previous concordance model of <cit.>, who found X_ core/X_ ini=0.963 and which corresponds to an age of 108 Myrs. Nevertheless, we still estimate the uncertainty of the age to be about 10 Myrs based on comparing the and predictions. We argue that the microphysics are the main source of uncertainty in the evolution of the hydrogen mass fraction of the core, rather than the dimensionality of the model. Improvements in this direction may help us reproduce more observed frequencies with the models. The foregoing results show that the modelling of Altair is progressing. However, many frequencies still have to be identified, and it remains a challenge to determine the origin of the oscillation at 15.76789 c/d, which is the most prominent and has the highest amplitude. Around this frequency, no highly visible mode pops out of the model. We investigated the change of frequency of the fundamental mode of a model of similar mass but of lower rotation and we found that such a fundamental mode disappears when rotation is increased beyond 50% of the critical one. A series of avoided crossings with (presumably) gravito-inertial modes causes the fundamental acoustic mode to mix with these modes, whose spectral density is high. It is very likely that this prominent oscillation results from a combination of several eigenmodes and requires a non-adiabatic and non-linear approach to be understood. Investigation of the non-linear dynamics of all the detected modes remains a highly desired future step. We are very grateful to the referee for many valuable comments, which helped us improve the original manuscript. MR, DRR, and JSGM gratefully acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grant MASSIF (ANR-21-CE31-0018). MR and SC acknowledge support from the Centre National d'Etudes Spatiales (CNES). MR also acknowledges support from the European Research Council (ERC) under the Horizon Europe programme (Synergy Grant agreement N^∘101071505: 4D-STAR). Computations of 2D models and the oscillation spectra have been possible thanks to HPC resources from CALMIP supercomputing center (Grant 2023-P0107). While partially funded by the European Union, views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. aa
http://arxiv.org/abs/2406.07871v1
20240612045514
Flexible Music-Conditioned Dance Generation with Style Description Prompts
[ "Hongsong Wang", "Yin Zhu", "Xin Geng" ]
cs.CV
[ "cs.CV", "cs.MM", "cs.SD", "eess.AS" ]
Flexible Music-Conditioned Dance Generation with Style Description Prompts Hongsong Wang, Yin Zhu and Xin Geng, Senior Member, IEEE All authors are with Department of Computer Science and Engineering, Southeast University, Nanjing 210096, China (e-mail: hongsongwang@seu.edu.cn). June 17, 2024 ================================================================================================================================================================================================================= § ABSTRACT Dance plays an important role as an artistic form and expression in human culture, yet the creation of dance remains a challenging task. Most dance generation methods primarily rely solely on music, seldom taking into consideration intrinsic attributes such as music style or genre. In this work, we introduce Flexible Dance Generation with Style Description Prompts (DGSDP), a diffusion-based framework suitable for diversified tasks of dance generation by fully leveraging the semantics of music style. The core component of this framework is Music-Conditioned Style-Aware Diffusion (MCSAD), which comprises a Transformer-based network and a music Style Modulation module. The MCSAD seemly integrates music conditions and style description prompts into the dance generation framework, ensuring that generated dances are consistent with the music content and style. To facilitate flexible dance generation and accommodate different tasks, a spatial-temporal masking strategy is effectively applied in the backward diffusion process. The proposed framework successfully generates realistic dance sequences that are accurately aligned with music for a variety of tasks such as long-term generation, dance in-betweening, dance inpainting, and etc. We hope that this work has the potential to inspire dance generation and creation, with promising applications in entertainment, art, and education. Pedestrian detection, hard example mining. § INTRODUCTION Music has always been a pivotal force in driving dance movement, whether through the rhythmic beats of traditional folk dances or the innovative choreography of contemporary performances. With the continuous development of artificial intelligence technology, there has been a growing interest in exploring the intersection between music and dance. Automated dance generation has garnered immense interest, offering fresh perspectives on the intricate relationship between music and dance, while simultaneously inspiring innovative dance creation and synthesis. Music-conditioned dance generation aims to automatically produce dance movements that are in harmony with musical inputs. It is closely intertwined with human motion synthesis <cit.>, which involves creating realistic and natural human motions. The main distinction lies in the fact that dance generation, which necessitates a profound analysis of music's characteristics, is a cross-modal generation task. This task is also a typical problem in generating human motions based on conditional signals <cit.>, which is extremely challenging due to the implicit relationship between human motion and conditional signals. Recently, advanced generative models such as Generative Adversarial Networks (GANs) <cit.>, Variational Autoencoders (VAEs) <cit.>, Cross-modal Transformer <cit.> and Diffusion <cit.> are employed to address music-conditioned dance generation. Although these models are capable of successfully generating high-quality and diverse dances from music signals, they overlook the significance of inherent music attributes, such as style or genre, in the dance generation process. Indeed, choreographers consider the music style to create a dance that is aesthetically pleasing to human beings. Music styles set the tone for the creative direction and style of dance generation. Different genres of music, such as classical and pop, offer varied dance forms. Therefore, careful incorporating music style is crucial in music-conditioned dance generation. Another important aspect that cannot be neglected in dance generation is the flexibility. Dance, as a dynamic art form, often requires meticulous adjustments and refinements throughout its creation and performance phases. The ability to flexibly edit dances can not only enhance artistic expression but also enable dancers to respond effectively to user feedback. A practical dance generation approach ought to possess sufficient flexibility to cater to a diverse range of tasks. To address above issues, we introduce a novel Music-Conditioned Dance Generation with Style Description Prompts (DGSDP), as illustrated in Figure <ref>. The DGSDP mainly comprises Music-Conditioned Style-Aware Diffusion (MCSAD) and Style Modulation with Description Prompts. The MCSAD is a transformer-based diffusion framework that seemly integrates music conditions and style description prompts into the dance generation. To fully utilize the style information for dance generation, we introduce a Style Modulation module and explore three types of style embeddings: one-hot encoding, genre embedding, and style description prompts. The proposed diffusion model is jointly trained with classifier-free guidance. To accommodate different dance generation tasks, a spatial-temporal masking is integrated in the backward diffusion process. To better align practical applications and facilitate future research, we design a range of experimental settings and establish benchmarks of different approaches under these settings. The main contributions are as follows: (1) We propose flexible music-conditioned Dance Generation with Style Description Prompts (DGSDP) which deliberately incorporates style description prompts to enhance dance generation with styles. (2) We introduce Music-Conditioned Style-Aware Diffusion (MCSAD) which primarily comprises a Transformer-based dance generation network and a novel style modulation module. (3) To accommodate diverse dance generation in practical scenarios, we design a flexible dance generation algorithm by leveraging a spatial-temporal masking strategy. (4) The proposed method exhibits state-of-the-art performance across various dance generation tasks, encompassing long-term dance generation, dance in-betweening and dance inpainting. § RELATED WORK §.§ Stochastic Human Motion Prediction Human Motion Prediction (HMP) can be categorized into deterministic prediction and stochastic prediction. Stochastic HMP has become popular with the emergence of generative models such as Variational Auto-Encoders (VAE) and Diffusion. Early work mostly relies on autoregressive models, which have the disadvantage of error accumulation. Yan et al. <cit.> propose a network that can generate the entire sequence directly by transforming from a series of latent vectors sampled from a Gaussian process (GP). Considering the inherent stochasticity in future motions, Kundu et al. <cit.> present a new probabilistic generative technique by introducing a random extrinsic factor. Aliakbarian et al. <cit.> propose a model with the ability to produce multiple probable future pose sequences by forcing the model to consider the noise. Mao et al. <cit.> develop a unified deep generative network and propose a pose prior and a joint angle constraint for producing smooth pose sequences. Instead of defining a combined loss as in the proposed work, Ma et al. <cit.> achieve the target of balancing the diversity sampling and the accuracy sampling by presenting a unified multi-objective conditional variational autoencoder. Given action labels and motion history, Mao et al. <cit.> design a VAE-based model which bridges the gap between stochastic human motion prediction and motion synthesis. Diffusion models have been applied for HMP. Chen et al. <cit.> present a diffusion-based framework with masked completion. Wei et al. <cit.> propose a diffusion probabilistic model to achieve diversity by incorporating a new noise at each diffusion timestep. §.§ Music-Conditioned Dance Generation Generating dances that match the music temporally and aesthetically is challenging, and various models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Transformers, and Diffusion are employed to tackle this task. Li et al. <cit.> present a model involving a deep cross-modal transformer block with full-attention. Valle-Pérez et al. <cit.> introduce an innovative probabilistic autoregressive framework, taking into account prior poses and musical context. Au et al. <cit.> propose a motion generation framework, which has the capability to choreograph high-quality dance movements over a Dynamic Graph. Huang et al. <cit.> construct a transformer-based framework to generate dance with a specific genre. Kim et al. <cit.> propose a transformer-based conditional GAN framework by sampling from each latent representation of different dance genres. To generate plausible long-term dance sequences, Li et al. <cit.> introduce a two-stage process: a key pose generation and an in-between parametric motion curve prediction. Li et al. <cit.> propose a architecture with two components: a choreographic memory for encoding and quantizing dance poses and an actor-critic GPT for composing units to dance. Tseng et al. <cit.> utilize a diffusion model which enables powerful editing capabilities. Alexanderson et al. <cit.> use diffusion models for audio-driven human motion generation, using Conformers. Recently, Gong et al. <cit.> incorporate both text and music for dance generation by using a cross-modal transformer to encode multi-modal features. Unlike the above approaches, we concentrate on flexible music-conditioned dance generation and enhance generated results by integrating style information within a diffusion framework. §.§ Text-Conditioned Motion Synthesis Text is often employed as instructions for generating motions, and similarly, it can be used as an additional condition to guide the generation of dance. Ahuja et al. <cit.> introduce a model that learns a joint embedding space of pose and natural language. Instead of long natural language, Guo et al. <cit.> propose a novel VAE framework to iteratively generate human motion sequences given a prescribed action type. Then, action is also used by Petrovich  <cit.>, who address the problem of generating human motion sequences conditional on actions. Models in previous work all focus on an entire body. Ghosh et al. <cit.> explore a joint mapping between 3D pose sequences and textual descriptions by presenting a hierarchical two-stream sequential model. Guo et al. <cit.> use a two-stage approach: text2length sampling and text2motion generation to generate plausible motions. Guo et al. <cit.> propose a motion token representation, which can be used in text2motion and motion2text tasks. Instead of synthesizing deterministic motions, Petrovich et al. <cit.> use variational autoencoder and a text encoder to produce diverse 3D human motions. Inspired by the powerful generative capabilities of the Diffusion model, Guy et al. <cit.> introduce a classifier-free diffusion-based generative model, which achieves impressive results in text-to-motion tasks. Zhang et al. <cit.> investigate a simple conditional generative framework based on VQ-VAE and GPT from textual descriptions. Zhang et al. <cit.> propose a diffusion model-based framework with multi-level manipulation, which can generate arbitrarily long motions and respond to fine-grained instructions on joints. § PRELIMINARIES §.§ Human Motion Synthesis For human motion synthesis, auxiliary losses are frequently employed to enhance physical realism when realistic simulations are not available <cit.>. To encourage natural and coherent motion prediction and prevent artifacts, Tevet et al. <cit.> incorporates three geometric losses: the basic joint position loss ℒ_j, the velocity loss ℒ_v and the foot contact consistency loss ℒ_f. The definitions of these losses are as follows: ℒ_j = 1/N∑_i=1^N||FK(x^(i)) - FK(x̂^(i))||^2_2, ℒ_v = 1/N-1∑_i=1^N-1||(x^(i+1)-x^(i))-(x̂^(i+1)-x̂^(i))||^2_2, ℒ_f = 1/N-1∑_i=1^N-1||(FK(x̂^(i+1))-FK(x̂^(i))) ·f̂^(i)||^2_2, where FK(·) represents the forward kinematic function, which transforms joint angles into joint positions, f̂^(i) the predefined binary foot contact mask, and the superscript (i) indicates the frame index. Incorporating the contact consistency loss has been shown to considerably enhance the authenticity of generated motions <cit.>. §.§ Diffusion-Based Dance Generation Following the definition of DDPM <cit.>, diffusion can be described as a Markov noise process, where the latent variable {σ_t}^T_t=0 follows a forward noise process q(σ_t|x) and x ∼ p(x) is taken from the data distribution. The forward noise process is defined as: q(σ_t|x) = √(α_t)x + (1-α_t)ϵ, where ϵ∼𝒩 (0,I) and α_t ∈ (0,1) are constants that are monotonically decreasing. Given the music condition c, the model learns to estimate the ground-truth human motion x̂_θ(σ_t,t,c) ≈ x at all time moments using the model parameter θ through a backward diffusion process. We optimize θ using the simple objective loss <cit.>: ℒ_d = 𝔼_x,t[||x-x̂_θ(σ_t,t,c)||^2_2]. During each denoising time step t, the network x̂(·) predicts the denoised samples and adds noise σ̂_t-1∼ q(x̂_θ(σ̂_t,t,c),t-1), the process is repeated from T to 0 and terminates when t=0. We employs a classifier-free guidance <cit.> for training, a technique commonly utilized in diffusion-based models <cit.>. Following the approach <cit.>, we incorporate classifier-free guidance by introducing a low probability (e.g. 20%) of randomly replacing c = ∅ during training. The guided inference is formulated as a weighted combination of unconditional and conditional generated samples: x(σ̂_t,t,c) = w ·x̂(σ̂_t,t,c) + (1-w) ·x̂(σ̂_t,t,∅), where w is the guidance weight with a positive value. The influence of condition c can be amplified by setting w>1. The overall training loss consists of a weighted combination of the primary simple loss and the supplementary auxiliary loss: ℒ = ℒ_d + λ_jℒ_j + λ_vℒ_v + λ_fℒ_f, where λ_j, λ_v and λ_f are weighted coefficients. § METHOD We introduce a novel Dance Generation with Style Description Prompts (DGSDP), which is illustrated in Figure <ref>. The DGSDP mainly includes Style Modulation with Description Prompts and Style Modulation and Music-Conditioned Style-Aware Diffusion. Enhanced by a spatial-temporal masking strategy, it offers flexibility in addressing diverse dance generation tasks and possesses the capability to generate long-term dances of arbitrary lengths. §.§ Style Description Prompts One-Hot Encoding: One-hot encoding is commonly used to handle categorical variables, particularly when the number of categories is limited. As the music style can be divided into categories, we use one-hot encoding to represent the style information. Genre Embedding: To obtain a more semantic representation, we utilize CLIP <cit.> to extract features for the words associated with the music genre, and refer to it as Style Type Embedding. This embedding captures deep contextual relationships within the textual genre, resulting in more accurate feature representations. Style Description Prompts: Large-scale language models pre-trained on massive text <cit.> demonstrate impressive capabilities for text generation tasks <cit.>. To get a more detailed description about characteristics of the music style, we use GPT-3 <cit.> to generate style description prompts for dance. We choose the following function to acquire the style description prompts: "Please generate a detailed description of the dance [g], including the characteristics of the dance in terms of body movement.", where g is the dance genre. §.§ Music-Conditioned Style-Aware Diffusion We extend the Human Motion Diffusion <cit.> by incorporating style modulation with description prompts, resulting in Music-Conditioned Style-Aware Diffusion (MCSAD). In addition to the music condition c, our model also includes the style description prompts s. The training objective is: ℒ_d = 𝔼_x,t[||x-x̂_θ(σ_t,t,c,s)||^2_2], where x̂(·) denotes the transformer-based decoder block, which primarily comprises self-attention, feature-wise linear modulation (FiLM) <cit.>, cross-attention, and Style Modulation. The Style Modulation layer aims to increase the influence of style features on the generated dances. It draws inspiration from the adaptive instance normalization technique utilized in StyleGAN <cit.>. This module SM(·) allows for the incorporation of additional textual information into existing transformer models. Formally, SM(z,s) = z/||z||_2 · r·FC(s), z ∈ℝ^T × d_z,s ∈ℝ^d_s, where z refers to the input, s represents the music style description prompts, r is a scaling factor, FC(·) denotes the fully connected layer, d_s is the embedding dimension and d_z is the hidden dimension. With the style description prompts, the guided inference is: x(σ̂_t,t,c,s) = w ·x̂(σ̂_t,t,c,s) + (1-w) ·x̂(σ̂_t,t,∅,s). The detailed training algorithm is shown in Algorithm <ref>. 1em 1em §.§ Flexible Generation with Masking During inference, the proposed method generates an estimated sequence from the initial noisy sequence σ_T ∼𝒩 (0,I), denoises it to σ̂_T-1, and iterates this process until t=0. To flexibly edit generated dances, we design a spatial-temporal masking strategy, which is illustrated in Figure <ref>. We first pad the known dance sequence to match the size of the target generation. Then, for the padded known dance sequence x_0, we add noise on it to obtain noisy sequence at the timestep t-1, x^known_t-1 = √(α_t-1)x_0 + (1-α_t-1)ϵ, where ϵ∼𝒩 (0,I). For an unknown sequence, the model initially predicts the target sequence from random noise, subsequently introducing noise to generate a noisy sequence at the timestep t-1, x^unknown_t-1 = √(α_t-1)x̂_θ(σ̂_t, t, c,s) + (1-α_t-1)ϵ. Finally, x^known_t-1 and x^unknown_t-1 are combined with a spatial-temporal mask M, σ̂_t-1 = M ⊙ x^known_t-1 + (1-M) ⊙ x^unknown_t-1, where the mask M ∈{0,1}^F × J can be changed arbitrarily with 0 or 1 values in both the temporal and spatial dimensions, F and J represent the number of frames and joints, respectively. This process is iterated by taking σ̂_t-1 as the noise for the next iteration. The spatial-temporal mask supports any combination of temporal constraints and joint constraints. This masking framework offers a robust tool for subsequent applications to generate dance sequences that precisely adhere to arbitrary constraints. Details of the algorithm are shown in Algorithm <ref>. 1em 1em The proposed framework can also be applied for long-term dance generation. Since the model generates a set of frames of a dance sequence at once, increasing the maximum sequence length results in a linear increase in computational cost. In addition, dance generation requires the music condition to match the motion sequence in length, which further extends the memory requirements. We follow the long-term generation strategy <cit.> to achieve temporal consistency between multiple sequences in long-term generation. § EXPERIMENTS §.§ Dataset and Implementation Details The AIST++ dataset <cit.> includes 1,408 high-quality dance movements synchronized with music from 10 genres. It comprises motion sequences of varying durations, ranging from 7 seconds to 50 seconds, with an average duration of 13 seconds. We follow the original train/test splits. Consistent with EDGE <cit.>, instances in the training set are cut to 5 seconds at 30 FPS with a stride of 0.5 seconds, and instances in the testing set are cut to 5 seconds at 30 FPS with a stride of 2.5 seconds. The proposed model has 49.7 million (M) parameters, and was trained on one NVIDIA RTX A6000 GPU for 2 days with a batch size of 128. The implementation is based on the PyTorch. The number of iterations is 1000. The learning rate is 0.0002 and the weight decay is 0.02. The music condition transformer encoder has a depth of 2 layers, with 8 attention heads and a dimension of 512. The decoder block has a depth of 8 layers, with 8 attention heads and a dimension of 512. For long-term generation, we use linear weighted summation to sum up corresponding parts of different slices (5s). Specifically, we perform a weighted summation of the latter half (2.5s) of the previous slice and the former half (2.5s) of the next slice. To mitigate the impact of randomness of stochastic, all evaluation metrics obtained by averaging over 100 trials. Following <cit.>, we adopt the following evaluation metrics: Beat Alignment Score: It is used to measure the alignment between the music and the generated movement. We calculate the average time distance between each music beat and its closest dance beat as the beat alignment score. The music beats are extracted using the Librosa library <cit.>, and the kinematic beats are calculated based on the local minima of the velocity of motion joints. Physical Foot Contact Score: It is a quantitative acceleration-based metric used to score the physical plausibility of the generated kinematic motion. The metric derives from two simple and related observations: (1) In the horizontal plane, any centre of mass (COM) acceleration must be due to static contact between the feet and the ground. Thus, either at least one foot is stationary on the ground or there is no acceleration of COM. (2) On the vertical axis, any positive COM acceleration must be due to static foot contact. Frechet Inception Distance(FID): FID is the distance between the generated dance movement distribution and the real movement distribution. We calculate the FIDs between the generated dances and the AIST++ dataset for all motion sequences in the kinetic feature space <cit.> (denoted as FID_k) and geometric feature space <cit.> (denoted as FID_g). Diversity: This metric assesses the diversity of dance motions by computing the average Euclidean distance between different generated dance motions. Similarly, we compute the Diversity between all motion sequences of the generated dances in the kinetic feature space <cit.> (denoted as Div_k) and the geometric feature space <cit.> (denoted as Div_g). Weight Method BeatAlign↑ PFC↓ FID_k↓ FID_g↓ Div_k→ Div_g→ 1l|2[2]*w=2 EDGE <cit.> 0.26 1.56 35.35 18.92 5.32 4.90 DGSDP (ours) 0.31 1.51 34.32 19.04 6.29 5.81 1l|2[2]*w=1 EDGE <cit.> 0.25 1.19 45.41 19.42 4.82 5.24 DGSDP (ours) 0.28 1.65 37.21 18.19 7.25 6.42 – Ground Truth 0.35 1.33 – – 9.43 7.33 1l|Length Method BeatAlign↑ PFC↓ FID_k↓ FID_g↓ Div_k→ Div_g→ 1l|2[2]*7.5s EDGE <cit.> 0.26 1.11 59.43 25.60 2.97 3.52 DGSDP (ours) 0.30 1.38 55.65 20.74 4.10 5.74 Ground Truth 0.38 1.04 – – 9.34 7.47 1l|2[2]*10s EDGE <cit.> 0.25 0.88 68.63 32.24 2.55 3.14 DGSDP (ours) 0.30 2.13 56.15 29.98 5.21 6.45 Ground Truth 0.49 1.70 – – 9.34 7.47 §.§ Experimental Settings We evaluate the performance of flexible dance generation in the following scenarios. Dance Generation: These setting generates dance movements of 5-second length given the conditioned music. This is the standard setting for music-conditioned dance generation. Long-Term Dance Generation: This setting aims to generate 7.5-second and 10-second dance movements. For fair comparison, the model is trained on 5-second clips. Seed Motion: Given the real motions of the first and second frames, the model generates dance movements of the following frames. In-betweening: Given the real motions of the first and last frames, the model completes the motions of the in-between frames. Inpainting: Given a random mask, the model completes the remaining frames based on the mask. We control the model to randomly give 70% of the true sequences. Upper-body Generation: Since the mask possesses both spatial and temporal dimensions, apart from controlling it in the temporal dimension, we can also manipulate it in the spatial dimension. Upper-body generation aims to generate motions of the upper-body given lower-body motions. Lower-body Generation: Given upper-body dance movements of all sequences, the model automatically generates lower-body dance movements. §.§ Results of Dance Generation We choose EDGE <cit.> as our baseline. For different approaches, we generate 20 pieces of dances with 5-second length, covering 10 different genres of AIST++. Quantitative results are summarized in Table <ref>. Our method performs favorably against EDGE <cit.> on most metrics. Specifically, there is a significant improvement in the Beat Alignment Score metric, which is main metric of dance generation, suggesting that the dances generated of our method are more in tune with the musical melody. Moreover, our method achieves the lowest FID_k, which is defined on motion velocities and energies, reflecting the physical characteristics of dance. The superiority of FID_k demonstrates that the proposed DGSDP can generated more realistic dance movements. In addition, our method achieves the closest approximation to the ground-truth values of Div_k and Div_g, revealing the capability to generate more diverse dance movements of our method rather than converging to a few fixed future motions. §.§ Results of Long-Term Dance Generation Since both the training and test sets are composed of five-second slices, in order to validate the ability of our model to generate long-term dance sequences, we generate the corresponding dances for music that are 7.5 seconds and 10 seconds. We select the pieces of dances that are longer than 7.5 seconds or 10 seconds for long-term dance generation. Experimental result of long-term dance generation on AIST++ is shown in Table <ref>. Our method performs significantly better than EDGE <cit.> on most metrics. Specifically, our model achieves a higher Beat Alignment Score, maintaining good performance on the correlation between dance movements and music. What's more, our model performs better on EDGE <cit.> on metrics FID_k,FID_g,Div_k and Div_g. This reveals that DGSDP can also generate dance movements with better quality and better diversity in long-term dance generation tasks. It is worth noting that the performance of our model on Div_k and Div_g do not decrease significantly over time like EDGE <cit.>. This suggests that, despite for long music, the dance movements generated using our method can ensure diversity of movements, avoiding repetitive movements. §.§ Results of Flexible Dance Generation With the spatial-temporal masking strategy, our approach is applicable to generate dance movements according to various scenarios. We quantitatively compare results of ours with those of EDGE <cit.> for flexible dance generation in Table <ref>. In the seed motion scenario and upper-body generation scenario, our method performs favorably against EDGE <cit.> on all metrics. In the other three scenarios, our method performs better than EDGE <cit.> on most metrics. This suggests that our approach has a strong capacity for flexible generation from both the temporal and spatial perspectives. Furthermore, it is clearly found that the performance of our model on Beat Alignment Score is close to the ground truth or even exceeds it, revealing the potential for strong self-improvement properties of our model in harmonising music and movement. Overall, our approach is suitable for a wide range of flexible scenarios, and with the help of known information, our model may perform better. §.§ Discussions We conduct detailed ablation studies, analyze the impacts of audio representations and various style embeddings, and discuss the limitations of evaluation metrics. Audio Representation: We evaluate three independent feature extraction strategies: Jukebox <cit.>, Encodec <cit.> and Librosa <cit.>. Jukebox <cit.>, a pre-trained generative model designed for music generation, showcasing notable performance in tasks specific to music prediction as evidenced in prior studies <cit.>. Musicians and composers can use Jukebox to create original musical compositions, exploring and innovating between different genres and styles. Inspired by these advances, here, we extract Jukebox music features as music conditions. Encodec <cit.>, an advanced real-time, high-fidelity, audio codec. Encodec is a pre-trained model, which has been used as music conditional feature extractor in dance generation tasks as well. It achieves state-of-the-art quality levels in both voice and music compression at multiple audio compression ratios and sample rates. Librosa <cit.> is a library for analyzing audio signals. It provides a rich set of functions including loading audio files, extracting features, computing rhythmic features and performing audio signal processing. Here, we use librosa to extract the following musical features: envelope (1-dim), MFCC (20-dim), chroma (12-dim), one-hot peaks (1-dim) and one-hot beats (1-dim). While comparing our method with 1 and 2 in Table <ref>, we observe that the Beat Alignment Score and Div_k achieve the optimal performance when Jukebox is utilized as the feature extractor; PFC attains its peak performance with Encodec serving as the feature extractor; and FID_k, FID_g and Div_g exhibit the best results when Librosa is employed as the feature extractor. Style Description Prompts: Previous related work <cit.> has attempted to use genres as additional conditions for music generation for dance. We do ablation experiments on the input of the Style Modulation module, using genres instead of style description prompts. As shown in 3 in Table <ref>, performance degrades significantly on Beat Alignment Score and PFC when genres are used as input to Style Modulation. The possible reason for this is that genres contain less information than style description prompts. Style Modulation: To verify the effectiveness of the Style Modulation module, we remove the Style Modulation module and replace it with concatenating music conditions c and style description prompts s simply, which then enters the MCSAD block together. The results are illustrated in 4 in Table <ref>. We can find that simply concatenating music conditions and style description prompts performs worse on most metrics than including the Style Modulation module. Next, in order to investigate the importance of the combination of Style Modulation and Style Description Prompts in DGSDP, we remove the Style Modulation module and we don't use any prompts either. So, the model only has music conditions. The results are shown in 5 in Table <ref>. We can see that the performances are obviously worse without any text prompts. From this we can conclude that it is quite important for Style Modulation and Style Description Prompts to work together. Style Embeddings: Next, we compare the three types of style embedding: one-hot encoding, genre embedding and style description prompts. When using the Style Modulation module, the results are illustrated in 0, 1 and 2 in Table <ref>. When directly concatenating style embedding and music conditions, the results are shown in 3, 4 and 5 in Table <ref>. It is obvious to see that using style description prompts as the style embedding performs better than the other two on most metrics. It is demonstrated through ablation experiments that our method achieves the best Beat Alignment Score and maintains superior performance in other metrics. Visualizations: For the same music, our approach can generate a variety of sensible dance movements. Some of the dance visualization results are shown in Figure <ref>. Here, we list generated dances for four types of music: Breaking, Krump, Locking and Waacking. For each type, we list five dance movements. As can be seen from the figure, our approach can generate diverse and plausible dance movements. Limitations of Evaluation Metrics: Developing automated metrics for the quality of the dance generated is a challenging endeavour due to the complex, subjective and even culturally specific context of dance practice. FID-based metrics are suitable for data-rich domains such as image generation, but the AIST++ test set is too small to cover the train distribution. Moreover, because of the limited availability of data, both FID_k and FID_g rely on heuristic feature extractors that merely calculate surface-level characteristics of the data. We argue that the idea of assessing the difference between two distributions of features of dance movements is not necessarily fundamentally flawed, but that more representative features may lead to reliable automated quality assessments. The beat alignment score calculates the average time distance between each music beat and its closest dance beat. However, dancing is not solely about syncing local speed minima in joints with beats. Instead, musical beats serve as a flexible reference for timing, rhythm, and the smooth transition between dance movements and steps. While this metric has driven progress on this issue in the past, more dance-specific metrics are needed to realize breakthroughs in this area. § CONCLUSION In this work, we study music-conditioned dance generation and introduce a transformer-based diffusion framework called Dance Generation with Style Description Prompts (DGSDP) which can generate arbitrary dance sequences. The DGSDP mainly comprises Music-Conditioned Style-Aware Diffusion (MCSAD) and Style Modulation with Description Prompts. Our approach obtains state-of-the-art performance on a variety of tasks such as dance generation, long-term dance generation, dance in-betweening, dance inpainting. Ablated experiments demonstrate the effectiveness of the modulation module and the style description prompts. We also conduct a thorough investigation into various encodings of both textual music style and the audio music. To obtain dances that align better with the music, we suggest using Jukebox to extract audio representations and applying style description prompts augmented by large language models. The proposed framework is suitable for flexible dance generation, enabling the user to freely edit or generate the desired dance sequence. We hope that this work will inspire further research in the fields of automatic dance generation and interactive dance generation. IEEEtran
http://arxiv.org/abs/2406.08230v1
20240612135937
Skyrmion blinking from the conical phase
[ "Rai M. Menezes", "Milorad V. Milosevic" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.other" ]
Departamento de Física, Universidade Federal de Pernambuco, Cidade Universitária, 50670-901, Recife-PE, Brazil NANOlab Center of Excellence & Department of Physics, University of Antwerp Groenenborgerlaan 171, B-2020 Antwerp, Belgium milorad.milosevic@uantwerpen.be NANOlab Center of Excellence & Department of Physics, University of Antwerp Groenenborgerlaan 171, B-2020 Antwerp, Belgium An demo] Skyrmion blinking from the conical phase § ABSTRACT While the transition between skyrmionic and non-topological states has been widely explored as a bit operation for information transport and storage in spintronic devices, the ultrafast dynamics of such transitions remains challenging to observe and understand. Here, we utilize spin-dynamics simulations and harmonic transition state theory (HTST) to provide an in-depth analysis of the nucleation of skyrmionic states in helimagnets. We reveal a persistent blinking (creation-annihilation) phenomenon of these topological states under specific conditions near the phase boundary between skyrmion and conical states. Through a minimum-energy path analysis, we elucidate that this blinking behavior is favored by the formation of chiral bobber (CB) surface states and that the collapse of CBs differs from that of skyrmions in thin films due to their different oscillation modes. We further employ HTST to estimate the typical blinking time as a function of the applied magnetic field and temperature. Finally, we illustrate the practical use of skyrmion blinking for controlled probabilistic computing, exemplified by a skyrmion-based random-number generator. [ Milorad V. Milošević Received 20 July 2023 / Accepted 11 June 2024 ================================================= § INTRODUCTION The interest in non-traditional computing architectures and in-memory systems is growing due to their ability to tackle problems that conventional computers struggle with. Spintronic technologies, such as stochastic magnetic tunnel junctions <cit.> and nano-oscillators <cit.>, are promising for their low energy consumption and high-speed computing capabilities. Particularly, magnetic skyrmions – topologically protected quasiparticles – show potential for unconventional computing applications <cit.>, offering the stability, compacted size, and facilitated manipulation with low energy consumption as needed for modern applications. The intense research effort over the years led to a range of skyrmionic technology, such as proposals for magnetic memory devices and logic operators <cit.>, skyrmion-based Qubits <cit.>, reservoir computing <cit.>, and many others. Recent works also suggested that thermally induced skyrmion dynamics can be used for probabilistic computing devices <cit.>. In most cases, realizing such applications necessitates the ability to intentionally generate and eliminate individual skyrmions, as well as manipulate their spatial and temporal positioning <cit.>. It is well known that magnetic skyrmions can emerge from the conical phase in chiral helimagnets such as MnSi and FeGe as the system is excited by the appropriate applied magnetic field and temperature <cit.>. Understanding the nucleation mechanism of magnetic skyrmions in such materials can potentially grant the capability to control their stability <cit.> and orchestrate their spatial and temporal arrangement, crucial for various applications. Recent works aiming to achieve such understanding have shown the temporal evolution of skyrmion nucleation from the conical phase in thin plates of Co_8Zn_10Mn_2 <cit.>. Among other interesting features, such as clustering, the process of skyrmion-lattice (SkL) formation is characterized by a combination of skyrmion nucleations and collapses. We suggest that this blinking behavior (creation-destruction process) is favored by the local stability of an intermediate state between the conical and skyrmion phases (observed in Ref.  with about 60% of the contrast of a fully formed skyrmion, dubbed a skyrmion embryo), the so-called chiral bobber (CB) <cit.> — a topologically protected swirl of the magnetization localized at the material's surface — which breaks the nucleation process into stages with a lower energy cost compared to homogeneous skyrmion nucleation <cit.>. Once at this intermediate state, the energy barrier for the system to transition to the skyrmion phase can be similar to the barrier to return to the conical phase. This could lead to sequential instances of nucleation and collapse of CBs (skyrmion embryos) during the formation of the skyrmion lattice, as seen, for example, in Lorentz transmission electron microscopy (LTEM) images in Ref. . In fact, a detailed description of such rich dynamics remains elusive to date. In this work, we provide an in-depth analysis of skyrmion blinking behavior and demonstrate that the skyrmion system can be set to uniform blinking when gauged by the appropriate applied field and temperature. We show that this condition is favored when the system is close to the phase boundary between the SkL and conical states, specifically on the conical side of the phase boundary, where CBs are metastable. Additionally, we calculate the oscillation modes of the CB states to demonstrate that their collapse differs from that of skyrmions, as a different number of zero modes of oscillation are associated with the two states. We then use transition state theory to estimate the typical blinking time of CBs as a function of the applied field and temperature, and exemplify the use of skyrmion blinking for controlled probabilistic computing, such as in a random-number generator. The paper is organized as follows. In Sec. <ref>, we provide the analytic considerations and describe the spin model used to simulate the magnetic system (Sec. <ref>), as well as the geodesic nudged elastic band method used to calculate minimum energy paths along the magnetic phase transitions (Sec. <ref>) and the harmonic transition state theory considered to estimate the state lifetime (Sec. <ref>). In Sec. <ref>, we report the phase diagrams of the considered magnetic films with different thicknesses (Sec. <ref>) to then investigate the nucleation mechanism of skyrmions from the conical phase, as well as the dependence of their activation energies on the applied field and film thickness (Sec. <ref>). In Sec. <ref>, we detail the formation of fully formed skyrmions, after nucleation of CBs. Sec. <ref> is devoted to the skyrmion blinking dynamics, investigated by spin dynamics simulations (Sec. <ref>) and transition state theory (Sec. <ref>). Finally, in Sec. <ref>, we exemplify the use of skyrmion blinking for controlled probabilistic computing, and demonstrate the basic functionalities of a skyrmion-based random-number generator. Our results are summarized in Sec. <ref>. § THEORETICAL FRAMEWORK §.§ Spin model To investigate the formation of magnetic skyrmions in chiral helimagnets, we perform spin-dynamics simulations by employing the numerical package SPIRIT <cit.>. We make use of the extended Heisenberg Hamiltonian that describes the system of classical spins and can be written as ℋ = -∑_⟨ i,j ⟩𝒥_ijn_i ·n_j - ∑_⟨ i,j ⟩D_ij· (n_i ×n_j) -∑_i μB·n_i, where μ_i=μn_i is the magnetic moment of the i^th spin site, with magnitude μ and orientation n_i. Here, 𝒥_ij represents the exchange interaction, D_ij is the Bloch-type Dzyaloshinskii-Moriya (DM) vector, B is the applied magnetic field, and ⟨ i,j ⟩ denotes pairs of spin sites i and j accounting up to second nearest-neighbour (NN) sites. For the simulations, we adopt a spin-system parametrized by the first NN exchange interaction 𝒥_1 = J_ex = 1 meV. The NN DM interaction is chosen as D_1=tan(2π/10)J_ex≈0.727 J_ex, which corresponds to a typical pitch length L_D of ten lattice sites for a helix state at zero magnetic field. Next-nearest-neighbor interactions are chosen according to Ref. , 𝒥_2 = J_ex/16 and D_2 = D_1/8, which leads to a good representation of the magnetic phase diagram of helimagnets. The dynamics of the spin system is governed by the Landau-Lifshitz-Gilbert (LLG) equation ∂n_i/∂ t = -γ/(1+α^2)μ[n_i ×B_i^eff + αn_i × (n_i ×B_i^eff) ], where γ is the electron gyromagnetic ratio, α is the Gilbert damping parameter and B_i^eff = -∂ℋ/∂n_i is the effective field. For the cases where it is applicable, temperature-dependent simulations are implemented by the introduction of a stochastic thermal field B^th, which is added as a contribution to the effective field acting on the localized spin-sites, i.e., B_i^eff→-∂ℋ/∂n_i+B_i^th. The magnitude of the thermal field is obtained by the fluctuation-dissipation theorem, and is given by B_i^th(T,t)=η_i(t)√(2𝒟/Δ t) =η_i(t)√(2α k_B T/γμΔ t), where T is the temperature, k_B is the Boltzmann constant, Δ t is the time-step considered in the simulation, and η_i(t) is a Gaussian white noise that represents the thermal fluctuations on each spin site i. The ensemble average and variance of the thermal field satisfies ⟨B_i^th(t)⟩=0 and ⟨B_ia^th(t)B_jb^th(t')⟩=2𝒟δ_ijδ_abδ(t-t'), respectively, where a,b indicate the components of the vector B_i^th. The stochastic LLG equation provides equivalent results for the magnetic ground state as those obtained by Monte Carlo methods <cit.>. §.§ Minimum energy paths To characterize the dynamics of skyrmions during their nucleation from the conical phase, it is crucial to determine the nucleation mechanism and the energy barriers involved in the phase transition. To do so, we calculate the minimum energy paths (MEP) for skyrmion nucleation by making use of the geodesic nudged elastic band method (GNEB) <cit.> with the assistance of a climbing image (CI) method <cit.>, both readily implemented in SPIRIT <cit.>. These methods enable an accurate determination of the highest-energy configuration, or saddle point, along the minimal energy path connecting the two states, from where the activation energy of the phase transition is determined. In the GNEB method, we consider a path given by a discrete chain of N_I magnetic configurations, called “images" of the system, between two considered magnetic states. The initial guess of the path is then represented by the set of images [ℳ^1,...,ℳ^N_I], where ℳ^ν=(n_1^ν,n_2^ν,...,n_N^ν) represents the magnetic configuration of the ν^th image of the system with N spins. In our calculations, we generate these initial configurations by applying a homogeneous rotation of magnetic moments between the two magnetic states under consideration. In order to converge from the initial guess to the nearest MEP, the effective force at each image is calculated by the negative energy gradient -∇ℋ^ν, where ℋ^ν is the energy of the ν^th magnetic configuration and ∇_i=∂/∂n_i. The force component along the local tangent to the path is then substituted by an artificial spring force between the images, which ensures uniform distribution of the images along the path, while the energy gradient forces orthogonal to the path tangents are applied, thus moving the images towards the minimum energy position. The first and last images of the chain are fixed and given by the local minima corresponding to the initial and final states. In the CI method, the spring forces acting on the highest energy image are deactivated during the iterative optimization, and the energy gradient force is inverted to point along the path. This procedure makes the image to move uphill in the energy landscape along the path. After the CI-GNEB calculation has converged, the position of the CI coincides with the saddle point along the MEP and gives an accurate value of the activation energy, defined by the energy difference between the saddle point and the initial state. §.§ Harmonic transition state theory Once the energy barriers involved in the skyrmion nucleation and annihilation processes are known, the rate of such phase transitions under thermal fluctuations can be estimated by making use of harmonic transition state theory (HTST) <cit.>, which assumes a Boltzmann distribution within the region of configuration space that corresponds to the initial state, i.e., before the system escapes (overcomes the energy barrier) due to thermal fluctuations. This assumption is valid when escape events are rare compared to the timescale of magnetization dynamics. The transition rate, Γ, is then given by an Arrhenius-type law, with exponential dependence on the inverse temperature T and the energy barrier Δ E, and can be written as Γ=Γ_0e^-Δ E/k_BT. The pre-exponential factor, Γ_0, also known as the attempt frequency, is dictated by the energy surface's curvature – the Hessian matrix – at both the saddle point and the initial state minimum, and is given by <cit.> Γ_0=γ/2πv√(∏_i^'λ_i^min/∏_i^'´λ_i^SP), with v=V^SP/V^min√(2π k_B T)^(N_0^min-N_0^SP)√(∑_i^'a_i^2/λ_i^SP). Here, λ_i are the eigenvalues of the Hessian matrix, calculated at the initial state (min) and saddle point (SP). N_0 is the number of zero modes – modes with λ=0 – and V denotes the phase space volume associated with the zero modes. a_i are expansion coefficients of the velocity along the unstable mode of the saddle point <cit.>. The primes next to products and sum indicate that the terms associated with the unstable (λ<0) and zero modes are omitted from the calculation. The transition time (or lifetime) of the process is finally estimated as τ=1/Γ. § SKYRMION NUCLEATION FROM THE CONICAL PHASE §.§ Magnetic phase diagrams Before investigating the nucleation mechanisms of the skyrmion phase, we first calculate the magnetic ground states of the considered spin system. For the simulations, we consider a sample of size 2L_D× L_D√(3)× d, where we explore three different thickness values for the magnetic film, denoted as d=L_D, d=2 L_D, and d= 3 L_D, with L_D representing the pitch length, discretized into 10 spin sites. In all three cases, periodic boundary conditions are set along the film plane, while open boundary conditions are applied across the film thickness. Additionally, we simulate the bulk system by considering d= 3 L_D and periodic boundary conditions across the film thickness. To construct the equilibrium phase diagrams of the magnetic system as a function of the applied field and temperature, the spin system is initialized from four different configurations: random, skyrmion lattice (SkL), conical (CON), and helical (HL) configurations. The energy is then minimized numerically using stochastic LLG simulation. We calculate the average spin configuration from 2000 spin configurations separated by 10 time steps. The energies of all found configurations are further evaluated and compared with each other to obtain the phase boundaries. Additionally, to avoid boundary-size effects, we checked the calculations for slight variations in the lateral size of the sample in cases close to phase boundaries. Fig. <ref> (a) shows the magnetic phase diagrams obtained in the simulations for the different film thicknesses, where the HL, CON, and SkL phases are mapped out. It is noteworthy that decreasing the film thickness favors SkL formation over the CON phase, attributed to the increasing significance of surface effects. As our aim is to characterize the skyrmion nucleation from the conical phase, these results will guide us in selecting the appropriate values for magnetic field and temperature to perform further simulations. §.§ Nucleation mechanism and activation energies To characterize the nucleation and annihilation dynamics of skyrmions during their transition from the conical phase, it is essential to identify both the nucleation mechanism and the energy barriers associated with such a phase transition. Therefore, in this section, we employ the GNEB method (see Sec.<ref>) to obtain the corresponding MEPs and activation energies of the mentioned transitions at zero temperature. For the simulations, we consider a sample of size 5L_D× 5L_D× d, initially in the CON phase, and the nucleation of a single skyrmion at the center of the sample. In general, the transition between the two phases can be mediated by different intermediate states. Here, we have considered the following: (i) the formation of chiral bobbers (CB) <cit.>, and (ii) the formation of torons <cit.>. Fig. <ref> (b) and (c) show the considered intermediate states. The MEPs obtained in our calculations for the case of CB formation in a film of thickness d=2L_D and for different values of the applied field are shown in Fig. <ref> (d), where the reaction coordinate, x, defines the normalized (geodesic) displacement along the formation path. Notice that the activation energy for CB nucleation, i.e., the energy difference between the highest energy state (saddle point) and the initial state (CON state), decreases as the applied field decreases and the CB state becomes energetically favorable. This behavior is typical of bistable systems, where two phases coexist as local free energy minima over a particular range of the external field <cit.>. As it will be discussed in more detail later in this section, similar behavior is obtained for different film thickness, i.e. d=L_D and d=3L_D. In the case of toron nucleation, the toron phase was found to be unstable for d=L_D. On the other hand, for the larger film thicknesses d=2L_D and 3L_D, although the MEPs can be stabilized, the formation of torons is shown to be energetically unfavorable with respect to the CB formation, as shown by the high activation energies in the MEPs in Fig. <ref> (e), for d=2L_D. As discussed in Ref. , the surface effects play an important role in the stabilization of skyrmion tubes in a conical background. Particularly, the formation of torons is favored when increasing the ratio d/L_D. Therefore, for magnetic films of thickness d≤3L_D, as considered in this work, we expect the preferential intermediate state for skyrmion nucleation [directly observed in Ref.  with about 60% of the contrast of a fully formed skyrmion] to correspond to CBs instead of torons. From here on, we therefore focus on the formation of CBs as the predominant mechanism for skyrmion nucleation in chiral magnetic thin films. Fig. <ref> (a) shows the calculated MEP for CB nucleation for different film thicknesses, for applied field μ B/J_ex=0.5. The activation energies for the nucleation (Δ E^N) and annihilation (Δ E^AN) are marked by gray arrows for the case of d=L_D. Notice that, by decreasing the film thickness, the activation energy for CB nucleation suffers minor changes as the saddle point of the phase transition corresponds to the formation of a Bloch point <cit.> located only at the film surface, without major bulk contribution. The energy of the fully formed CB state, on the other hand, is strongly affected as the film thickness becomes of the order of the CB size (≈ L_D/2) <cit.>, as shown for the d=L_D case. In this way, the activation energy for CB annihilation can be strongly affected. Fig. <ref> (b) displays the calculated activation energies for CB nucleation (solid symbols) and annihilation (open symbols) as a function of the applied magnetic field, considering the three specified film thicknesses. The activation energies exhibit significant sensitivity to variations in the magnetic field. For instance, reducing the field by 0.075J_ex from the point where Δ E^N=Δ E^AN (where CB and CON states are energetically degenerate) results in an approximate 20% decrease in the activation energies for CB nucleation across all thicknesses. As previously mentioned, Δ E^AN is notably influenced at reduced thicknesses, with the same change in field inducing an increase of approximately 70% in Δ E^AN for d=L_D, and around 20% for d=2L_D and 3L_D. In all scenarios, the difference between nucleation and annihilation energies follows a nearly linear relation as the magnetic field approaches the phase boundary, as shown in Fig. <ref> (c). Furthermore, it is noticeable that increasing the film thickness from d=2L_D to 3L_D results in small alterations in the activation energies, as the CB state predominantly resides near the film surface. Consequently, although the energy of the fully formed SKL state is significantly influenced by the value of d, one might anticipate that further increases in film thickness will yield comparable CB activation energies to those observed for d=3L_D. §.§ From chiral bobber to skyrmion Once in the CB state, the formation of the skyrmion can proceed to the next stages. Fig. <ref> (a) shows the minimum energy paths obtained in our calculations for the nucleation of a single skyrmion from the conical phase, for different film thicknesses, at a fixed magnetic field μ B/J_ex=0.5. The first and second peaks along the formation paths correspond to the nucleation of the Bloch points of CBs at the top and bottom surfaces. For d>L_D, a third peak along the formation path corresponds to the connection of the CBs into the skyrmion tube. Since the CBs have a typical depth of L_D/2, the nucleation of CBs at both surfaces in the case of d=L_D is sufficient to connect the skyrmion tube across the sample; therefore, for that case, only two peaks are seen along the MEP. Fig. <ref> (b) shows the stable states encountered along the formation path. Notice that, as discussed in the previous section, the activation energies for the nucleation of CBs are not sensitive to the film thickness. However, the energy cost for the formation of the skyrmion tube strongly increases with d, as the skyrmion energy is proportional to its length. Therefore, one can observe that, once in the CB state, the energy barrier for the system to transition to the skyrmion phase can be much larger than the barrier to return to the conical phase, as illustrated, for instance, in the MEPs for d=2L_D and 3L_D in Fig. <ref> (a). This can lead to sequential instances of nucleation and collapse (blinking behavior) of CBs during the formation of the skyrmion lattice. As discussed in Sec. <ref>, the activation energies can be tuned up and down by the applied magnetic field, in a way that one can seek the optimal parameters that support the blinking behavior. As we will show in the next section, the blinking is favored when the system is close to, but not exactly at, the phase boundary between the SkL and CON states. § SKYRMION BLINKING §.§ Spin dynamics simulations To characterize the dynamics of the blinking process in more detail, we employ spin dynamics simulations based on stochastic LLG approach (as detailed in Sec. <ref>). To monitor the nucleations and collapses of CBs over time in our simulations, we compute the two-dimensional topological charge, defined as Q=1/4π∫n·(∂n/∂ x×∂n/∂ y)dxdy, at the film surface. This enables easy differentiation between the CON phase, where Q=0, and skyrmionic states, where each CB or skyrmion carries a topological charge of Q=1. For simplicity, this section considers a simulation box capable of accommodating a maximum of two skyrmions only, with the damping parameter set to α=0.3. The spin system is initialized in the conical phase, and the skyrmions are allowed to nucleate and occupy the sample. Magnetic configurations are saved every 140 ps (with simulation time step Δ t=10^-15 s), derived from the average of 2000 configurations obtained during the final 40 ps of each period. Fig. <ref> (c) shows the topological charge calculated at the film surface as a function of simulation time for various values of the applied field, for temperature k_B T/J_ex=0.7 and film thickness d=2L_D. Notice that, for the highest field considered in Fig. <ref> (c) (μ B/J_ex=0.55), the CON phase (Q=0) is the lowest energy state, and CBs rarely nucleate and are rapidly destroyed. Consequently, the system predominantly resides in the CON state. Conversely, for the lowest field (μ B/J_ex=0.45), CBs rapidly nucleate, and the system quickly transitions to the SkL phase, with a stable topological charge of Q=2. At intermediate fields, the system encounters scenarios where the energy barriers for transitioning a CB phase to the skyrmion phase can be significantly larger than the barrier to return to the conical phase, resulting in a sequence of CB nucleations and collapses (blinking behavior). Specifically, for μ B/J_ex=0.523, we observed the system spending 50% of the time in the CON phase and the other 50% in the CB phase, as depicted by the histograms in Fig. <ref> (c). This condition characterizes uniform blinking behavior, where neither the CON phase nor the CB/skyrmion phase predominates. Interestingly, this condition is achieved for a field value significantly above the critical field, B_c, where the energy barriers for nucleation and annihilation become equal (μ B_c=0.4941 J_ex for d=2L_D). As we will discuss further in this section, this feature stems from the different attempt frequencies associated with the CB nucleation and annihilation processes. The condition for uniform blinking is therefore primarily determined by the applied field. On the other hand, the temperature regulates how frequently the transitions occur, thereby influencing the blinking time. Fig. <ref> shows the topological charge calculated at the film surface as a function of simulation time at different temperatures, for μ B/J_ex=0.523 and d=2L_D. A typical blinking time is depicted in Fig. <ref> for the scenario where k_B T/J_ex=0.6, representing the time interval between consecutive nucleations of the CB. It is noteworthy that, except for the case of k_B T/J_ex=0.5, where no CBs nucleate during the simulated time window, the system remains divided between the CON and CB phases, with a slight increase of CB occurrences (Q=1) with temperature. However, the blinking time significantly decreases with increasing temperature. Table <ref> summarizes the average transition times <τ> obtained in the simulations, at different temperatures, for the CB nucleation, annihilation, and blinking processes, in the case considered. The average transition times obtained in the simulations can be related to the phase transition rates predicted by HTST (see Eq. (<ref>)), where one can write τ=1/Γ=τ_0exp(Δ E/k_BT), with τ_0=1/Γ_0. As depicted in Fig. <ref>, we observe a linear behavior of ln(<τ>) with 1/k_BT, for both nucleation and annihilation processes, which indicates a temperature-independent pre-exponential factor Γ_0. According to Eqs. (<ref>) and (<ref>), that corresponds to a system with the same number of zero modes at the minimum and saddle point states. Moreover, by linear fitting of the data, we can estimate the effective energy barriers and pre-exponential factors associated with the CB nucleation and annihilation processes in our spin dynamics simulations. The obtained energy barriers, indicated in Fig. <ref> (a) and (b), are similar to those obtained by the GNEB method for the same value of applied field (Δ E^N_GNEB=11.94J_ex and Δ E^AN_GNEB=9.38J_ex), and a higher pre-exponential factor is observed for the CB annihilation than for the nucleation. The latter justifies the observation of uniform blinking for B>B_c, as equal activation energies can result in different transition times due to the prefactors. One should notice that the transition rates obtained by LLG dynamics may differ for different damping parameter α <cit.>, or in presence of boundary effects, among other conditions that can influence the system differently from the nucleation of an isolated CB. It is noteworthy that the collapse of CBs differs from that of skyrmions in thin films, where the pre-exponential factor linearly depends on temperature <cit.>. In the case of two-dimensional skyrmions, the presence of two translational zero modes contrasts with the absence of such zero modes in the defect-like saddle point, resulting in a temperature dependent pre-exponential factor, as predicted in Eq. (<ref>) <cit.>. On the other hand, for CBs, the system also possesses a defect-like Bloch point. This, combined with lattice discretization, results in nonzero translational modes. As we will discuss below, another zero mode, distinct from translational ones, exists in the CB state. §.§ Transition rates from HTST To determine the transition rates of the CB within the HTST framework, we compute the eigenvalues λ_i of the Hessian matrix corresponding to the local minima and saddle points along its formation and annihilation MEPs. Figure <ref> (a-c) presents the six lowest energy eigenvalues obtained from our calculations as a function of the applied field. In both the minima and saddle point states, only one zero mode is observed. This zero mode arises due to the system's degeneracy through rotations of the magnetization perpendicular to the direction of the applied field, corresponding to the angular phase of the conical state. When a skyrmion or CB is embedded in the conical state, its domain-wall region adjusts to the conical phase, resulting in asymmetry across the cross section perpendicular to the applied field, as shown, for instance, in the inset of Fig. <ref> (c) and discussed in previous works <cit.>. Consequently, the zero mode also characterizes the rotation of CB and saddle point states. This result is consistent with our spin dynamics simulations, where a temperature-independent pre-exponential factor was determined, indicating a system with an equal number of zero modes at both the minimum and saddle point states. Figure <ref> (d) illustrates the pre-exponential factor τ_0=1/Γ_0, computed from Eqs. (<ref>) and (<ref>), along with the corresponding eigenvalues as a function of the applied field, for both CB nucleation and annihilation processes. Similarly to the collapse of a skyrmion in thin films <cit.>, the pre-exponential factor undergoes significant changes with the magnetic field, thereby playing an important role in determining the rate of the phase transition. It is worth noting that, consistent with the spin-dynamics simulations, the HTST also predicts a higher pre-exponential factor τ_0 for the CB annihilation process compared to nucleation when μ B>μ B_c. By combining the calculated pre-exponential factors with the activation energies obtained via the GNEB method, we can estimate the nucleation and annihilation times using Eq. (<ref>). Figure <ref> (e) illustrates the resulting blinking time, τ^blink=τ^N+τ^AN, as a function of applied field and temperature. It is noteworthy that, for each temperature, there exists a field value, denoted as B^∗, at which the blinking time is minimized, as depicted by the blue dashed line in Fig. <ref> (e). This condition corresponds to τ^N=τ^AN, signifying uniform blinking behavior, wherein CBs nucleate and collapse at the same rate. Such uniform blinking is achieved at a field value above the critical field, B_c (indicated by the black dashed line in Fig. <ref> (e)), where the energy barriers for nucleation and annihilation become equal. This finding aligns with our spin-dynamics simulations, where, for μ B=0.523 J_ex>μ B_c, we observed the system spending 50% of the simulated time in the CON phase and the other 50% in the CB phase (see Fig. <ref> (c)). This observation stems from the distinct pre-exponential factors associated with CB nucleation and annihilation processes. In essence, our results suggest that by appropriately tuning the applied field and temperature, uniform blinking of metastable chiral bobbers can be achieved. A similar behavior is expected for other film thicknesses, in the vicinity of the phase boundary between CON and SKL states. § SKYRMION BLINKING FOR PROBABILISTIC COMPUTING The switching between skyrmionic and non-topological states is well established as a promising bit operation for information transport and storage <cit.>. Understanding the temperature, thickness, and field dependence of the nucleation and collapse processes of magnetic skyrmions is therefore crucial to the development of novel technological applications. Of particular recent interest is the thermally driven dynamics of skyrmions, that can be used for probabilistic computing devices, such as signal reshuffling devices <cit.> and random number generators <cit.>. Both latter applications rely on the ability of the thermally driven skyrmion system to generate uncorrelated copies of itself. In this section, we illustrate the applicability of skyrmion blinking in an alternative design of skyrmion-based random-number generators. Since the stochastic nucleation and collapse of CBs in thin films can be fine-tuned by temperature, film thickness and applied magnetic field, as discussed in the previous sections, skyrmion blinking emerges as a viable base ingredient for controlled probabilistic computing. To demonstrate such a functionality, we envision the device setup presented in Fig. <ref> (a), where an array of blind holes (regions with reduced thickness) is patterned in the magnetic film. Since the magnetic phase diagram is thickness dependent (see Fig. <ref> (a)), by tuning the magnetic field and temperature, one can achieve the scenario where skyrmions/CBs nucleate only at the regions of the sample with reduced thickness, as illustrated in Fig. <ref> (b), and excite the skyrmion blinking behavior at the pre-selected locations of the sample where blind holes are positioned. Considering the example of a 2×2 array of blind holes, as illustrated in Fig. <ref> (a-b), since the skyrmions are located far apart from each other, each skyrmion experiences a stochastic blinking behavior that is uncorrelated with the others. By measuring the skyrmion state at different times, one would obtain, in this example, random sequences of four binary digits, each digit representing the presence or absence of a skyrmion in each blind hole. We perform micromagnetic simulations of such a system by considering a magnetic film of size 6L_D× 6L_D× 2L_D, with four blind holes of size 1L_D× 1L_D× 1L_D, centered in the four quadrants of the sample (Fig. <ref> (a)). For the simulations, we consider temperature k_B T/J_ex=0.7 and the applied field μ B/J_ex=0.555, which yields uniform blinking behavior for the case of film thickness d=L_D (as inside the blind holes), but does not favor the formation of skyrmions/CBs in the regions where d=2L_D (outside the blind holes). For comparison, the considered field value here is higher than the highest field in Fig. <ref> (c), where CBs rarely nucleate for d=2L_D under the same temperature. Fig. <ref> (c) shows the calculated topological charge Q as a function of time for each blind hole separately. Note that the topological charge of each blind hole (labelled I to IV) independently oscillates as the skyrmions undergo a blinking behavior. By measuring the topological charges at different times, one can therefore obtain random sequences of four binary digits, from a total of 16 possible combinations. This demonstrates the basic functionality of the system as a random number generator. Two examples of sequences (Q_I, Q_II, Q_III, Q_IV) are indicated in Fig. <ref> (c), measured at simulation times t/t_0=200 and t/t_0=600, with t_0=140 ps. Importantly, as the skyrmions have a characteristic blinking time, to ensure that the obtained random numbers are uncorrelated, one should ensure that the time frame between consecutive measurements, defined here as δ t, is not much smaller than the typical blinking time τ^blink. To verify this, we calculate the auto-correlation function (ACF) of a sequence of N measured outputs, representing a time-dependent, discrete random variable X_t={x_1, x_2, x_3, ..., x_N}, where x_i∈{1,2,...,16} corresponds to one of the 16 possible output states, obtained at the i^th measurement. The ACF describes the correlation between observations of the time series at two points in time, separated by a specific lag, denoted here as k. Essentially, it quantifies how a value in X_t is related to its previous values. The ACF is defined as the ratio of the covariance of X_t and X_t+k={x_k+1, x_k+2, ..., x_N}, and the variance of X_t, and can be written as <cit.> ρ(k) =N/N-k∑_i=1^N-k(x_i-X_t)(x_i+k-X_t)/∑_i=1^N(x_i-X_t)^2, where X_t is the mean value of X_t. An autocorrelation ρ(k)=0 means that the measured states x_i can be considered independent of each other, or uncorrelated. Fig. <ref> (d) shows the ACF as a function of the lag k, calculated in the simulations, in two scenarios: (i) when the time frame between consecutive measurements is given by δ t=t_0, and (ii) when δ t=20t_0. Notice that, in the first scenario, since δ t≪τ^blink (in the simulations, ⟨τ⟩^blink≈ 40 t_0), consecutive measurements of the skyrmion state are likely to yield the same output configuration, as the system did not have sufficient time to experience nucleations and collapses. This results in high values for the ACF, as shown in Fig. <ref> (d). Such correlation decreases as one increases the lag, approaching ρ(k)=0 for k>20, where the time shift between the compared states becomes large enough for the system to experience oscillations. In the second scenario, where δ t=20t_0 (≈ 50% of the blinking time), we obtain ρ(k)≈0 for every k>0, thus demonstrating that one can obtain a sequence of uncorrelated random numbers with the proposed setup, as long as the time frame between consecutive measurements is larger than 50% of the typical blinking time. Furthermore, since the number of possible output configurations scales exponentially with the number of blind holes (i.e., 2^n possible states for n blind holes), similar setups could be envisioned for generating a wide range of uncorrelated random numbers. For instance, a 4×4 array of blind holes would be able to generate 65536 different states. § CONCLUSION In summary, we have comprehensively characterized the dynamic process of skyrmion nucleation from the conical phase in helimagnets. Using a geodesic nudged elastic band (GNEB) method and spin dynamics simulations, we investigated different nucleation mechanisms and calculated the activation energies for skyrmion formation as a function of film thickness and applied magnetic field. We revealed that a blinking behavior (a creation-destruction process) of skyrmions is favored by the local stability of chiral bobber (CB) states. We demonstrated that such states can be set to uniform blinking when excited by the appropriate applied field and temperature. This condition is particularly favored near the phase boundary between the skyrmion lattice and conical states, specifically on the conical side of the phase boundary, where CBs are metastable. We calculated the oscillation modes of such CB states and demonstrated that their collapse fundamentally differs from that of skyrmions in thin films, as only one rotational zero mode is present, in contrast to the two translational zero modes of skyrmions. Combining harmonic transition state theory and GNEB methods, we estimated the typical blinking frequencies of CBs (ranging from KHz to GHz) as a function of the applied field and temperature. Finally, we exemplified the use of skyrmion blinking for controlled probabilistic computing and demonstrated the basic functionalities of a skyrmion-based random-number generator. Together, our results advance the understanding of the nucleation mechanisms of skyrmions in chiral magnets and, given the expected reproducibility of our findings in readily existing experimental setups, we anticipate that this work will spur further experimental investigations into skyrmion dynamics and further development of skyrmion-based applications. § ACKNOWLEDGEMENTS This work was supported by the Research Foundation - Flanders (FWO-Vlaanderen) and Brazilian agencies FACEPE, CAPES and CNPq. unsrt
http://arxiv.org/abs/2406.08484v1
20240612175934
Exploiting the diversity of modeling methods to probe systematic biases in strong lensing analyses
[ "A. Galan", "G. Vernardos", "Q. Minor", "D. Sluse", "L. Van de Vyvere", "M. Gomer" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA", "astro-ph.IM" ]
Modeling methods and lensing degeneracies Technical University of Munich, TUM School of Natural Sciences, Department of Physics, James-Franck-Str 1, 85748 Garching, Germany Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland Department of Physics and Astronomy, Lehman College of the CUNY, Bronx, NY 10468, USA American Museum of Natural History, Department of Astrophysics, New York, NY 10024, USA Borough of Manhattan Community College, City University of New York, Department of Science, New York, NY 10007, USA STAR Institute, University of Liège, Quartier Agora, Allée du six Août 19c, 4000 Liège, Belgium Challenges inherent to high-resolution and high signal-to-noise data as well as model degeneracies can cause systematic biases in analyses of strong lens systems. In the past decade, the diversity of lens modeling methods has exploded, from being purely analytical to pixelated and non-parametric, or based on deep learning. We embrace this diversity by selecting different software packages and use them to blindly model independently simulated Hubble Space Telescope imaging data. To overcome the difficulties arising from using different codes and conventions, we use the COde-independent Organized LEns STandard (COOLEST) to store, compare and release all models in a self-consistent and human-readable manner. From an ensemble of six modeling methods, we study the recovery of the lens potential parameters and properties of the reconstructed source. In particular, we simulate and infer parameters of an elliptical power-law mass distribution with external shear for the lens while each modeling method reconstructs the source differently. We find that overall, both lens and source properties are recovered reasonably well, but systematic biases arise in all methods. Interestingly, we do not observe that a single method is significantly more accurate than others, and the amount of bias largely depends on the specific lens or source property of interest. By combining posterior distributions from individual methods using equal weights, the maximal systematic biases on lens model parameters inferred from individual models are reduced by a factor of 5.4 on average. We investigate a selection of modeling effects that partly explain the observed biases, such as the cuspy nature of the background source and the accuracy of the point spread function. This work introduces, for the first time, a generic framework to compare and ease the combination of models obtained from different codes and methods, which will be key to retain accuracy in future strong lensing analyses. Exploiting the diversity of modeling methods to probe systematic biases in strong lensing analyses A. Galan<ref>,<ref>,<ref>, G. Vernardos<ref>,<ref>, Q. Minor<ref>,<ref>, D. Sluse<ref>, L. Van de Vyvere<ref> M. Gomer<ref> June 17, 2024 =============================================================================================================================== § INTRODUCTION Understanding the evolutionary path of galaxies over cosmic times continues to be a major challenge in astrophysics. In this context, strong gravitational lensing enables the observation of galaxies lying at different redshifts in a single observation, making it an inescapable tool to constrain galaxy evolution models. Strong gravitational lensing arises when a foreground distant galaxy—the lens, or deflector—is coincidentally aligned with a more distant background galaxy—the source—causing the appearance of multiple and magnified images of the latter. The typical redshift range for lens galaxies lies between z_ d∼0.2 and 1.5, while source galaxies are often found between redshifts z_ s∼1 and 4 <cit.>, such that strong lensing systems can display a wide variety of galaxy morphologies and evolutionary stages. Besides galaxy evolution studies, strong lensing has several important applications in cosmology. As it is dictated by the total mass of galaxies, one can use this effect to put constraints on their dark matter (DM) halo. In particular, strong lensing data enables the separation of the baryonic and dark components of galaxies <cit.>, and the detection of DM subhalos and other invisible masses along the line of sight <cit.>. When combined with time-varying sources, strong lenses can also be used to measure cosmological parameters, including the Hubble constant <cit.> and density parameters <cit.>. All these applications rely heavily on a precise characterization of both the azimuthal and radial mass profiles of lens galaxies. The central step when analyzing strong lensing data is lens modeling. The goal of this step is to model both the mass and light distribution of the lens galaxy while simultaneously reconstructing an unlensed version of the source galaxy. Lens modeling is a challenging task because inverting the lensing effect is an ill-posed problem, in particular due to known degeneracies between the lens mass distribution and the source morphology. For example, the infamous mass-sheet degeneracy <cit.>, a mathematically exact degeneracy between the lens mass density and the source scaling, has been studied both theoretically and practically <cit.> and can be mitigated using complementary data sets <cit.>. In the past twenty years, many different lens modeling techniques, ranging from analytical to pixelated techniques and neural networks, have been developed and successfully applied to real images <cit.>. In general, these techniques have been developed with specific lensing systems, data sets and science goals in mind, and then have been extended to cover more use cases. Consequently, it is crucial to assess how these different methods compare to each other, and if their combination is warranted to improve the robustness of lensing analyses. Such a comparison enables the quantification of possible systematic biases. Additionally, if several methods leads to consistent results, those can be combined together, improving the overall precision. So far, lens modeling comparison analyses have been rare. For cluster-scale systems, a prominent work have been initiated by <cit.> by performing an extensive comparison of several modeling approaches on both simulated and real Hubble Frontier Fields clusters. However, there have not been comparable efforts for galaxy-scale strong lens systems. While some works have reanalyzed archival data with alternative modeling software packages <cit.>, it is only recently that more quantitative comparisons between different methods have been reported <cit.>. The Time Delay Lens Modeling Challenge (TDLMC) compared the output of different modeling and inference strategies, but focusing only on the recovery of the Hubble constant <cit.>. Different lens modeling codes have been compared in <cit.> and <cit.>, although using point-like multiple images rather than extended gravitational arcs as constraints. Finally, recent works from <cit.> and <cit.> compared neural network predictions with more classical approaches, although on with ground-based imaging data. Our goal here is to analyze imaging data similar to those obtained with the Hubble Space Telescope (HST) with different modeling and inference techniques, and study the recovery of a given set of lens parameters while reconstructing the lensed source in different ways. To our knowledge, this is the first time such a systematic and self-consistent comparison is performed. In order to maintain this novel kind of analysis tractable, we restrict the assumptions regarding the description of the lens mass distribution and properties of the data, although remaining reasonably realistic. In particular, we limit our scope to the commonly used power-law elliptical mass distribution combined with a uniform external shear. This description of the lens deflection field has proven to be a minimal but efficient prescription for modeling the observed strong lensing effect caused by large elliptical galaxies <cit.>, although the simplicity of this model has known limitations <cit.>. We also note that recent analyses of strong lenses found evidence for multipolar deviations to the elliptical power-law profile, but higher resolution than HST is warranted for robust detection <cit.>. While most lensing analyses focus on the properties of the galaxies acting as lenses, the morphology of the lensed galaxies also hold important information about galaxy formation and evolution. Current high-resolution images of strong lenses such as those from HST showcase highly structured lensed sources <cit.>. Consequently, it is also crucial to assess the ability of lens modeling codes to recover the morphology of extended lensed sources. We first select different lens modeling software packages and modeling methods that are well suited to model high-resolution and high signal-to-noise (S/N) data. Since each software package typically follows different parameter definitions and model conventions, it is not possible to directly compare the modeling results. We overcome this challenge by using the COde-independent Organized LEns STandard <cit.>, an open-source standard that enables storage, sharing and analysis of all lens modeling products in a uniform manner, regardless of the modeling code originally used to perform the lens modeling tasks. Included in this standard is an analysis interface allowing us to compute important quantities (e.g., effective radii and profile slopes) and visualize lens modeling results. We extensively use in this work, both for releasing the models and data, as well as performing the analysis of the results and generating the figures. The paper is organized as follows. In Sect. <ref> we briefly recall the strong lensing formalism we follow. In Sect. <ref>, we present the different lens modeling methods, in particular their commonalities and differences. We explain how the data was simulated using an independent software in Sect. <ref>, and the standardized comparison and analysis framework is introduced in Sect. <ref>. The modeling results after unblinding are visualized and described in Sect. <ref>, followed by an exploration of possible sources of systematics in Sect. <ref>. In Sect. <ref> we discuss our results and place them in a broader context, and Sect. <ref> concludes our work. § FORMALISM OF STRONG GRAVITATIONAL LENSING We give for completeness a brief overview of the mathematical formalism to describe strong lensing data and models. More background details can be found in recent reviews such as <cit.>. The main strong lensing observables are the positions and intensities of multiply lensed images of features in a background sources. These features can either be unresolved (i.e., point sources) or spatially extended. In the latter case, the lensed source appears as several arcs or as an Einstein ring surrounding the lens object and typically covering many pixels in high-resolution imaging data. We call “image plane” the (observable) plane of the sky where lensed images appear, that we place at redshift z_ d of the foreground lens, also called main deflector. Observed features in the image plane are localized with a two-dimensional angular position vector θ. For conciseness, we interchangeably use the standard Cartesian coordinates (x, y) to describe a position θ in the image plane. Each feature in the lensed images has a corresponding (unobservable) angular position β in the “source plane” placed at the redshift of the background object z_ s. The central equation in strong lensing is the lens equation, that gives the relationship between β and θ: β = θ - ∇ψ(θ) , where ∇ψ≡α is the deflection field originating from the lens potential ψ, the latter being a rescaled and projected version of the underlying three-dimensional gravitational potential of the lens galaxy. Usually, a more physically relevant quantity is the projected mass density of the lens, characterized by the so-called lens convergence κ (dimensionless) obtained with a combination of second derivatives of the lens potential: κ = 1/2∇^2ψ . As it will be useful for the discussion (Sect. <ref>), we also recall the formula of the Fermat potential, mostly relevant for time-varying sources. The Fermat potential Φ_i and the Fermat potential difference ΔΦ_ij between a pair of lensed images i and j are defined as Φ(θ_i) = (θ_i - β)^2/2 + ψ(θ_i) , ΔΦ_ij ≡Φ(θ_i) - Φ(θ_j) , where θ_i and θ_j are the positions of images i and j, respectively. In this work we consider parametrized forms for the lens potential ψ and lens convergence κ, while the surface brightness of the lensed galaxy is described following a variety of techniques. We give more details about the modeling of these different components in Sect. <ref>. § LENS MODELING METHODS AND ASSUMPTIONS We consider an ensemble of “modeling methods” that each differ on two aspects: modeling assumptions and inference techniques. For instance, modeling assumptions are typically specific choices of model components (mass and light profiles, fixed or not), regularization strategies for pixelated models and necessary hyper-parameters. Inference techniques are typically minimization and sampling algorithms to obtain best-fit parameters and estimate their posterior distributions, or sequences of distinct steps (e.g., preliminary coarse and fast model fits) to converge to the best-fit solution. In practice, a given lens modeling software package can be considered as a modeling method, as specific choices regarding the code structure, model types and optimization techniques have been made throughout its development. For this work, we selected a subset of software packages that are sufficiently different to be considered as distinct modeling methods: <cit.>, <cit.>, <cit.> and (Minor et al., in prep.). Other software packages used in several published analyses so far are <cit.>, PyAutoLens <cit.>, glafic <cit.> and methods from <cit.>. However, for practical reasons we only use the first set of methods, which already form a representative sample of the various modeling methods that are currently available, from fully analytical to pixelated models, with or without adaptive grids. Such methods are referred to as classical methods, in contrast to deep learning methods that we do not consider in this work <cit.>, as these would require additional assumptions regarding training sets and network architectures beyond our scope. Nevertheless, we encourage future works to conduct self-consistent comparison analyses similar to ours, that involve both classical and deep learning methods <cit.>. This remaining of this section presents the general modeling strategy we adopt throughout this work. We first describe modeling assumptions that are common to all methods, then give more details regarding each of these modeling methods, and finally mention extra choices that are left free to the modelers. §.§ Common modeling aspects Throughout this work, we reasonably assume that the noise in the imaging data d follows a Gaussian distribution with covariance matrix C_d. In this setting, we can write the negative log-probability of the data likelihood as - log ℒ(η) = 1/2 [m(η) - d]^⊤ C_d^-1 [m(η) - d] + log( 2π√( C_d)) , where m is the predicted image (i.e., the model) and η represents a generic vector of model parameters. We note that the data covariance matrix C_d is assumed to be diagonal with contributions from both background noise and photon noise. In other words, we follow the widely used assumption that the noise is uncorrelated and normally distributed. With simulated data, we have access to the true data covariance matrix C_d. As in this work we do not explore the effects of inaccurate assumptions regarding noise characteristics, we give to the modelers the true matrix C_d and use it in all lens models. While the likelihood term in Eq. <ref> is common to all models considered here, specific modeling assumptions such as morphological properties of the source galaxy are encoded as additional priors. Such priors priors can either be explicitly incorporated in the inference via a regularization term written as the negative of the log-prior -log𝒫, or they can be implicitly defined through a choice of parametrization such as an analytical functions. Summing the likelihood and prior terms gives the full penalty or loss function L, which is minimized during the inference of model parameters: L(η) = -logℒ(η) - log𝒫(η) . Modeling methods generally describe the lensing of photons from the source by casting the lens equation into a lensing operator 𝖫, which depends on the lens potential parameters that we denote by η_ψ. This operator acts on a model of the source s, described by parameters η_ s, that can be either analytical, pixelated or a representation in function basis set, as per m(η) ≡m(η_ψ,η_ s) = 𝖱 𝖡 𝖫(η_ψ) s(η_ s) . so that we get model image m that has the same pixel size as the data, after possible downsampling by the operator 𝖱 and blurring by the operator 𝖡. The latter incorporates the effect of the point spread function (PSF) of the instrument and seeing conditions. This PSF is assumed to be known with the same spatial sampling as the data and available to all modelers. As stated in Sect. <ref>, no constraints are imposed to modelers regarding optional supersampling of ray-tracing and convolution operations. The light distribution of the lens galaxy is not modeled because we assume that the lens light has been perfectly subtracted from the data beforehand. For modeling the lens mass distribution of the lensing galaxy—alternatively, its lens potential—we consider the commonly used power-law elliptical mass distribution (PEMD) plus external shear. This assumption is common to all the modeling techniques, namely all modelers use the same lens potential parameter vector η_ψ. The convergence of the PEMD is described by κ_ PEMD(x,y) = 3-γ/2(θ_ E/√(q_ mx^2+y^2/q_ m))^γ -1 where γ is the logarithmic power-law slope (γ=2 corresponding to an isothermal profile), q_ m is the axis ratio and the coordinate system (x,y) has been rotated by a position angle ϕ_ m around the lens center (x_0,y_0). The lens potential generated by an external shear can be easily expressed in polar coordinates with the following formula: ψ_ ext(x, y) ≡ψ_ ext(r,ϕ) = r^2/2 γ_ ext cos[ 2 (ϕ - ϕ_ ext) ], where γ_ ext is the strength of the external shear, and ϕ_ ext its position angle. We note that Eqs. <ref> and <ref> follow the parameters conventions used in the (see Sect. <ref> and the online documentation for other conventions[<https://coolest.readthedocs.io/en/latest/conventions.html>]). We note that the loss function of Eq. <ref> has a non-linear response to lens mass parameters η_ψ. However, the set of source parameters η_ s can formally be split into linear (light profile amplitudes) and non-linear parameters. This property is explicitly exploited by some of the modeling methods we use in this work. The modeling methods considered in this work thus mainly differ in the assumptions regarding the light distribution of the source, s(η_ s), which we summarize in Table <ref> and describe in more details in the next subsections. §.§ Smooth modeling with Sérsic and shapelets We use a modeling method implemented in the multipurpose package. We follow the baseline model presented in Sect. <ref> and which consists of a PEMD plus external shear for the lens. The source is modeled with a Sérsic profile to which are added shapelets components, which is capturing /modeling the azimuthal complexity of the source light distribution. Here, we implement this modeling strategy using version 1.11.3 of the multi-purpose open-source software package  <cit.>. This tool, regularly enhanced with new user-contributed capabilities, provides a large family of lens mass distribution and light profiles. The source is modeled with a Sérsic profile to which is added a set of shapelets components. We refer the reader to <cit.> for a formal description of the shapelets model in the context of lens modeling. The procedure used in to derive the posterior distribution on the parameters is sequential. The optimal linear parameters are found through matrix inversion, given values of non-linear parameters. First, a suitable region in non-linear parameter space that minimizes the loss function defined in Eq. <ref> is found via a Particle Swarm Optimization algorithm <cit.>. Second, the parameters space is sampled using a Monte-Carlo Markov Chain (MCMC). The parameters of the optimal model found previously are randomly perturbed, and used to start the chain. We use the MCMC sampler emcee, which is the most used so far among the user community <cit.>. In this work, as it is also a common practice, the model investigated during the optimization step is a simplified version of the final model, retaining only the main model components that enable to reproduce the largest fraction of the data pixel values. Components that yield small changes of the loss function, such as the source shapelets, are added only during the MCMC sampling. This hierarchy in the significance of model parameters, while not explicitly formalized in the code, is similar to the methodology developed by several automatized lens modeling efforts <cit.>. We give more technical details regarding this method in Sect. <ref>. §.§ Adaptive grid source modeling If one assumes that the free parameters of the source are its brightness values cast on a grid of pixels s (instead of being defined from a continuous analytical profile), then the likelihood of Eq. <ref> becomes a quadratic function of s. The benefit of such quadratic functions is that their derivative can be calculated analytically and have a unique minimum <cit.>. However, with just the likelihood term this leads to an ill-posed problem and the addition of a (quadratic) regularization term is required, which has the following generic form: -log𝒫(λ,g,s) = 1/2λs^T C_s^-1(g) s , where C_s is some covariance kernel of the source as a function of parameters g. In this form, the source parameters can be obtained analytically from ∇_s L = 0 once the lens parameters η_ψ, regularization strength λ, and covariance parameters g are given. This approach is referred to as semi-linear inversion. In comparison with forward methods, which may treat more source parameters as non-linear parameters, there are only a few additional parameters that require sampling, λ and g (usually corresponding to just one or two parameters), as opposed to tens or hundreds more. The linear source parameters, i.e. pixel brightness values, are obtained analytically. A key assumption in such inverse problems is the choice of regularization, which can be interpreted in a Bayesian way as a prior imposed on the source. Traditionally, one may choose to impose smoothness to the solution through its derivatives, where the matrix C_s^-1 is constructed from the numerical derivative coefficients computed on the pixelated grid <cit.>. Alternatively, more physically motivated covariance kernels obtained from real galaxy brightness distributions have been shown to perform better and lead to less biased results <cit.>. A quite generic such example is the Matérn kernel that has the following analytic form: C(r_ij|l,ν) = 2^1-ν/Γ(ν)( r_ij√(2ν)/l)^ν K_ν( r_ij√(2ν)/l), where r_ij is the distance between any two source pixels and ν,l correspond to the non-linear parameters g (the latter can be interpreted as a correlation length). The type of regularization can be objectively chosen based on the Bayesian evidence. Another feature employed in semi-linear inversion implementations is the use of adaptive, non-regular grids for the source. This is because smaller regions in the source plane can contribute a higher fraction of the total flux, especially in the regions of high magnification near caustics, hence higher resolution is required. Using a high resolution fixed regular grid will lead to more computationally demanding matrix inversions, but adapting the resolution to the lensing magnification (which is a function of the lens model parameters η) will increase the resolution in those regions of the source plane where it is needed without adding more degrees of freedom. Such adaptive grids can be constructed simply by tracing a subset of the data pixels back to the source <cit.> and using them as grid vertices. A more sophisticated way of constructing the adaptive grid, pioneered by <cit.>, is to split each pixel into N× N subpixels, ray-trace the subpixels to the source plane, and then use a k-means clustering algorithm to determine the location of the source grid vertices. The latter approach has the advantage of minimizing aliasing effects, at the cost of additional overhead due to the clustering algorithm. The regularization of an adaptive grid may be extended to allow for greater fluctuations in surface brightness in the inner regions of the source with high signal-to-noise, while keeping the outer regions of the source relatively smooth. Luminosity-weighted regularization has been explored by <cit.> in the context of gradient regularization. In this work, we explore luminosity-weighted covariance kernels to achieve a similar effect. To implement this, we define the luminosity-weighted kernel as: C_ij,lum = C_ij W_i W_j, with the weighting function W_i given by: W_i = exp[ -ρ (1-s_i,0/s_ max) ], where ρ is a free parameter to be varied, s_i,0 is an approximate surface brightness of the i-th pixel before luminosity weighting, and s_ max is the maximum surface brightness of all the source pixels. Pixel values s_i,0 can be taken from the best-fit model of a previous fit altogether, or it can be estimated during each likelihood evaluation by doing an inversion without luminosity weighting first; here, we adopt the latter approach. The advantage of the form given in Eq. <ref> is that it ensures that the kernel remains positive-definite, which is critical given the quadratic form of the regularization term. Although more sophisticated forms of the weighting function W_i are possible, we only explore the single parameter function given by Eq. <ref> in this work. Here, we use two implementations of the semi-linear inversion technique, the Very Knotty Lenser <cit.> and (Minor et al., in prep.), that can handle different regularization schemes and recipes for constructing the adaptive source-pixel grid. In the modeling run, which we will label Adaptive+Matérn, a Matérn kernel will be used for regularization without supersampling of the image plane; whereas in the modeling runs, an exponential kernel will be used (equivalent to Matérn with ν=0.5), image plane supersampling with k-means clustering will be used to construct the adaptive source grid, and models without and with luminosity weighted regularization will be used (Cluster+Exp and Cluster+Exp+Lumweight, respectively). Both implementations use Nested Sampling <cit.> to sample the parameter space and converge to the maximum a posteriori solution, in addition to calculating the Bayesian evidence. We outline the specific modeling choices that are considered as three different source modeling techniques in Sects <ref> and <ref>. §.§ Multi-scale regularization with wavelets Regularization does not necessarily have to be quadratic so that it can lead to a linear solution for the source. A non-linear example is the method by <cit.> that is based on sparsity constraints in the wavelet domain. We refer to this regularization as a multi-scale regularization (and use “ms” as subscripts in the corresponding equations). We briefly describe the method here, but refer the reader to the original papers for the full mathematical treatment. Given the vector representation of the source expressed on a regular (non-adaptive) pixelated grid, the regularization term to be minimized (i.e., the second term in Eq. <ref>) is: - log 𝒫(η,s,λ_ ms) = λ_ ms W_ ms(η_ψ, η_ s) ⊙Φ^⊤ s_1 + i_≥0(s) , where λ_ ms is a global (scalar) regularization parameter, Φ^⊤ is the wavelet transform operator that transforms s into its wavelets coefficients, and W_ ms is a matrix that scales the regularization strength for each these coefficients. Operations ·_1 and ⊙ refer to the ℓ_1 norm and element-wise product, respectively. The second term in Eq. <ref> is a non-negativity constraint on the source since we reconstruct surface brightness values. The operator Φ^⊤ represents an hybrid wavelet transform composed of all scales from of the starlet transform <cit.> and the first scale of the Battle-Lemarié wavelet transform <cit.>. The matrix elements of W_ ms are computed by propagating the data noise from the source plane to the wavelet domain (hence the dependence on η_ψ,η_ s), allowing us to attach a clear meaning to the global regularization strength λ_ ms, interpreted as the statistical significance of the regularized source model and given in units of the noise (e.g., λ_ ms = 3σ). We dynamically adapt the extent of the source plane regular grid based on the lens mass by defining the annular region in image plane (the “arc mask”) within which the pixelated source light is evaluated. This adaptive scheme ensures that the effective source pixel size covers the same angular scale of the source for any realization of the mass model[We note that a similar strategy is used in the modeling code to adapt the extent of the source plane regular grid <cit.>]. This treatment is different from the fixed source plane grid used in <cit.>, and provides more stability when optimizing lens mass models with a free density slope. We use the strong lens modeling code  <cit.> that implements the multi-scale regularization strategy described in Eq. <ref> using JAX <cit.>, such that the full model can be pre-compiled and is differentiable. We use the probabilistic programming library NumPyro <cit.> to implement prior distributions and constraints on model parameters, and to perform the inference of posterior distributions. Additional technical details regarding the Sparsity+Wavelets models are given in Sect. <ref>. §.§ Non-parametric Gaussian processes Finally, we apply a recently introduced source reconstruction method that relies on Gaussian processes and information field theory <cit.>. The first applications of Gaussian processes for modeling strongly lensed sources are the recent works of <cit.> and <cit.>, but we describe below the main principles for completeness. In the IFT framework, such Gaussian processes are often referred to as correlated fields, which is the term we use in the remaining. We use IFT to represent the source light distribution as a two-dimensional non-negative correlated field. Such as field is based on two main components: (1) an analytical parametrization of its power spectrum (mainly its amplitude and slope) which accounts for correlated structures in the source; (2) a discretization onto a regular square grid, with elements following standardized Gaussian distributions (we call the latter an excitation field). More formally, we describe the source field s as s = exp[ 𝖥^-1( A_0 ⊙ξ) + δ] , where A_0 is a zero-mode spectral field generated from the parametrized power spectrum, ξ is the excitation field, and ⊙ is the point-wise multiplication. The resulting field in harmonic space is then transformed to real space by applying the inverse Fourier transform operator 𝖥^-1, to which a constant offset δ is added. Finally, we take the exponential of the resulting field to enforce the positivity of source pixel values, since they represent surface brightness. We note that no extra regularization term is added to the loss function, as Eq. <ref> describes a generative model that already incorporates smoothness conditions through its power spectrum. We use the Python library [<https://gitlab.mpcdf.mpg.de/ift/nifty>] <cit.> to implement the above field model and variational inference samplers to get the joint posterior distribution over the parameter space. As in <cit.>, we use the JAX interface of <cit.> and combine it with to evaluate the forward model (Eq. <ref>). For further technical details regarding the Correlated field model, see Sect. <ref>. §.§ Additional choices left to the modelers There are some additional choices that are left to the modelers. In particular, they are free to choose if and how they mask out some regions in the imaging data and exclude those from the data likelihood evaluation. Similarly, super-sampling of the coordinates grid when performing ray-tracing evaluations and surface brightness convolutions with the PSF are optional. Modelers are free to run variations of their fiducial models, varying, for example, hyper-parameters, random number generator seeds, or inference algorithms. This may allow for robustness tests, and any eventual marginalization over families of models leading to a joint posterior distribution as the final result. For any further details regarding each modeling method and specific modeling choices, we refer the reader to Appendix <ref>. § INDEPENDENT SIMULATION OF IMAGING DATA We produce a simulated mock system to model with all methods presented above and compare the results. We simulate the mock using [<https://github.com/gvernard/molet>] <cit.>, a simulator code which is independent from any of the codes used to fit the mock. The simulated mock is constructed using a power-law mass profile with external shear and a relatively structured source as expected from most lensed galaxies. The true source used in the simulation is kept hidden (blind) from the modelers, as well as all other input parameters. For the source, we used an HST image of the galaxy NGC 1084 that was previously used for simulated lenses in the context of the TDLMC <cit.>. This source is a local galaxy with a detailed structure that is resolved without any prominent PSF spikes, which could introduce nonphysical features if lensed. We further avoid introducing unphysical features in the resulting lensed source due to edge effects (e.g. sky background) in the cutout that we used by forcing the values in the source pixels to decay exponentially to zero away from the brightness peak. Simulated lensing data are then created by providing the mass model and source to , which performs ray-tracing on a high-resolution grid, convolution with the PSF, downsampling, and adding noise to the final mock observation. We use a simulated HST PSF using TinyTim <cit.> based on the WFPC2 instrument with the F814W filter (we do not consider the more recent WFC3 instrument as TinyTim does not support it). Since ray-tracing is performed on a 10 times higher resolution grid compared to the final data, we use a simulated PSF at that resolution for more accurate surface brightness convolutions (although modelers are given a PSF at the data resolution as mentioned in Sect. <ref>). Our settings for the noise correspond to 2200s of exposure in the chosen instrument setup. The angular size of the source cutout is set to 4 arcsec, and we scale its total flux such that it has an apparent (unlensed) AB magnitude of 23.2, which is in the range of observed source galaxies from the SLACS sample <cit.>. We show in the top left panel of Fig. <ref> the simulated lens image, while the bottom row shows the supersampled and data-resolution PSFs (only the latter is provided to the modelers). We note that the lensed source galaxy and the data signal-to-noise ratio (S/N) are such that a simple source model is not able to fit the data. We visualize this in the top right panel of Fig. <ref>, which shows normalized residuals between the data and a model based on a single Sérsic profile for the source. Such residuals are strong evidence for the necessity of more flexible source models as the ones we employ in this work (described in Sect. <ref>). In Appendix <ref> we give useful details about a previous version of the mock we attempted to model, for which we detected issues related to the input source light distribution. In a nutshell, the original source did not have an accurate background subtraction and displayed sharp edges (visually unnoticeable after lensing the addition of noise), which lead to biases in the lens models. For this reason, a second mock with different input parameters and source light had to be re-created (and the subsequent modeling re-done). § STANDARDIZED COMPARISON FRAMEWORK Our work relies on several collections of modeling methods and software packages, that we systematically apply on the same data. These codes have been developed following different conventions (e.g., angles, units, profile definitions), are written in different programming languages (e.g., Python, C, C++), and differ in their final modeling products. Therefore, we must ensure that we can both simulate and model strong lensing data in a consistent way, in order to mitigate problems arising from the heterogeneous collection of methods we consider. We rely on the recently released strong gravitational lensing standard (for COde-independent Organized LEnsing STandard) as a framework unifying the different components of our analysis[is an open source Python package publicly available at <https://github.com/aymgal/COOLEST>. We use the released version 0.1.9.]. Below we briefly describe and its specific features that we use in this work, but refer the reader to <cit.> and the online documentation[<https://coolest.readthedocs.io>] for more details. The foundation of is a set of conventions to serve as a reference point for modeling assumptions and codes, for example, coordinate systems, units and profile definitions. Given these conventions, any lens model—together with the data being modeled and other modeling components such as the PSF—can be concisely described in a single file following the JSON format. We refer to the latter as a template file, which we use both to describe an instance of a strong lens to be simulated and to store lens modeling results (e.g., best-fit parameters and uncertainties). This standard way of storing lens modeling information allows us to straightforwardly compare any modeling results to each other as well as to an existing groundtruth (in the case of simulated data). In practice, it requires each modeling or simulation code to have an interface with to create or update such template files[This task is made easier by using the dedicated Python interface.]. Lastly, we use the analysis features of to read the content of template files, compute key lensing quantities, and produce comparison plots. In particular, we use this interface to plot all lens models side-by-side and compare them to the groundtruth, compute morphological features of reconstructed source galaxies, and plot joint posterior distributions over the parameter space. § RESULTS FROM BLIND MODELING WITH SIX METHODS During the simulation and subsequent modeling phases, there was minimum amount of information shared between the authors. Keeping this part of the analysis blind ensured unbiased and independent models. One of the authors (M.G.) first simulated the imaging data with a first software (Sect. <ref>), and the rest of us proceeded with the blind modeling of the data, whose results are presented here. The modeling workload was split between different modelers: L.V.V. (Sérsic+Shapelets models), G.V. (Adaptive+Matérn model), Q.M. (Cluster+Exp and Cluster+Exp+Lumweight models) and A.G. (Sparsity+Wavelets and Correlated Field models). In total, we used four distinct software packages and six different modeling methods. Each modeler was free to perform their analysis to the best of their judgment. Once confident that no significant improvements to the models could be made by further fine-tuning, the modelers converted their results to the format (Sect. <ref>) and submitted them to A.G. (this may have been a single or a marginalization of model instances). At this point, unblinding took place when M.G. also submitted the true model, and the final figures presented in this section were produced. The specific steps necessary to obtain these models, as well as their estimated computation times, are detailed in Appendix <ref>. All models along with the simulated data products are publicly available[<https://github.com/aymgal/LensSourceDegeneracy_public>. Upon acceptance of the manuscript, analysis and visualization notebooks will also be released.]. §.§ Overall fit to the imaging data We show in Fig. <ref> all the six models that have been blindly submitted, in direct comparison with the true model from the simulation. We compare side-to-side the image model, relative error on the convergence, model normalized residuals (in unit of the noise), and the reconstructed sources. The reconstructed sources should be directly compared to the top right panel of Fig. <ref>. The residuals show clear improvements over the residuals shown in the bottom right panel of Fig. <ref> with a too simplistic source model. We also quote the reduced chi-square value χ_ν^2 computed within the likelihood mask chosen by the modeler, in order to quantitatively compare the quality of fits. The largest χ_ν^2 value is 1.06 while the lowest value is 0.88, and four models have a χ_ν^2 below unity which indicate slight overfitting. In several models, residuals at the ∼ 3σ level remain where the arcs are the brightest (multiple images of the bright, cuspy central region of the source). We note that overall, all models fit the imaging data to very close to noise level. While the Sérsic+Shapelets model captures less small-scale structures in the source compared to other models, the resulting fit in the image plane still achieves noise-level residuals (except maybe for a few pixels in the outskirts of the lensed source). Finer structures in the source like spiral arms and star forming regions are overall are overall better modeled by semi-linear inversion methods, than by the wavelets and the correlated field. The second column of Fig. <ref> shows the relative error in the lens convergence, throughout the field-of-view. Offsets in the lens centroid and in the ellipticity are visible, although strongest errors remain at the very center of the lens. At the position of the multiple images (roughly traced by the critical lines), the relative error in convergence remains below 5%. §.§ Recovery of source properties To visualize better which source features are captured by the models, we show in Fig. <ref>, maps of source plane residuals computed as the difference between the true and reconstructed sources. Far from the optical axis, models are not well constrained by the data and significantly deviates from the true light distribution; thus, we darken these areas for better visualization. Pixelated models defined on irregular grids capture similarly well most of the spiral features as well as star forming region located in the left spiral arm. The center of the source galaxy seems to be retrieved equally well by all models, despite the persistent image plane residuals (see Fig. <ref>). We show in Fig. <ref> the recovery of several properties of the source galaxy: the two-point correlation function, the effective radius and the axis ratio. Bottom parts of each panel show the relative error computed as (truth - model)/truth (i.e., negative values are over-estimates). We measure these properties within a square field of view of size 22, after projecting (using bi-cubic interpolation) each source model on a regular grid with 10 times higher resolution than the data. As shown in Fig. <ref>, such a field of view contains the entire flux of the source galaxy. The left panel of Fig. <ref> shows the two-point correlation function ξ(r), which gives the azimuthally averaged correlation between the source intensity at two positions in source plane, as a function of their angular separation. All reconstructed sources exhibit two-point correlations close to the one of the input galaxy. Over all the models, the maximum error remains small, except for some models for which it exceeds 15% error on two-point correlations on arcsecond scales. We note that the Sérsic+Shapelets model under-estimates two-point correlations at all scales, while other models over-estimate these correlations. The Cluster+Exp+Lumweight reaches minimal error on the smallest scales (≲04), which shows that the more detailed reconstruction obtained with this model, visible in Fig. <ref>, is accurate over these small spatial scales. We investigate the recovery of the size of the source galaxy through its effective radius r_ eff, that we define as the radius that encloses half of the total light within a circular aperture of radius 22. The middle panel of Fig. <ref> shows r_ eff and its relative error with respect to the true value for all source models. The effective radius is well recovered by all models with a maximum error of 2.5%. In addition we observe a tendency to over-estimate the effective radius, as only the Cluster+Exp model slightly under-estimates it, which is also the model with the lowest error. However, a better quantification of these errors should involve posterior distributions over the source models, which we do explore in this work. A first order uncertainty quantification can be obtained through the scatter among the different models, shown as the gray shaded area in Fig. <ref>. Given this scatter, the average source half-light radius remains lies close to 1σ from the true value. Over the different source models considered here, none explicitly parametrizes the ellipticity of the source galaxy, in particular its axis ratio. Therefore, we use central moments of source model images (projected onto the same coordinates), in order to empirically measure an axis ratio q_ s. More specifically, we compute the second order central moments of the source image, and use its eigenvalues to estimate the major and minor axes, from which we obtain the axis ratio q_ s. The rightmost panel of Fig. <ref> shows the resulting values, along with the true value measured on the true source image. The relative error is overall larger than for the effective radius although it remains below 6%. Taking the mean over the ensemble of modeling methods lies very close to the true value, within the 1σ scatter among the models. Figure <ref> shows the histogram of source pixel intensities for each model, compared to the true intensities (after interpolation, i.e., rightmost column of Fig. <ref>). Interestingly, we clearly see that the Sérsic+Shapelets and Cluster+Exp+Lumweight models reach higher intensity values. This is expected for the former (Sérsic+Shapelets) as the Sérsic profile diverges (with a best-fit Sérsic index ≈1.6) in its center and thus can predict large flux values. The luminosity-weighted regularization of the latter (Cluster+Exp+Lumweight) is behaving similarly by decreasing the regularization strength in regions with high observed flux. Consequently, these two models are best at capturing the three most magnified images of the source (see third column of Fig. <ref>). We also note from Fig. <ref> that some of the models exhibit slightly negative values, as those are not penalized in their underlying prior, although these value are not statistically significant when compared to the noise level, and located mainly on the outskirts of the reconstructed source. §.§ Recovery of lens properties We investigate the different constraints on the mass distribution of our simulated strong lens system obtained from the different modeling methods. Already on Fig. <ref>, showing the predicted tangential caustics, we can visualize slight differences among the models. The predicted caustics have different sizes and positions, overall slightly larger than the true caustics. These differences correlate with the corresponding biases we discuss below. In particular, larger predicted density slopes are responsible for increasing the caustic size, and lens ellipticity and position offsets both impact the position and orientation of the astroid. To quantitatively compare the constraints on lens potential parameters among the six models, we show in Fig. <ref> the joint posterior distributions for all mass model parameters, as well as true values from the data. Overall we find that the posterior distributions are within ∼3σ from the truth. The parameter with the smallest scatter relative to the posterior width is the logarithmic density slope γ. While all models are slightly biased towards values larger than the true value by approximately 2%, they are all compatible with it at ≲1σ. Such a low scatter in the slope may be perhaps surprising, as it has been shown that the density slope may differ significantly between models <cit.>. However, we are in a regime where both the data and all models are parametrized by a single power-law density profile, hence the data is by construction a realization of the true model (modulo our inexact knowledge of the PSF and different numerics settings). Besides the density slope, the Einstein radius θ_ E also has low scatter among the models, which is expected as it is the primary quantity constrained by strong lensing observables. Models whose median values are further away from the truth also tend to have broader posteriors (larger uncertainties), which contribute to reduce the systematic bias (see also Fig. <ref> below). The remaining mass model parameters all show visible scatter around the true values, while no systematic shift can be associated with a specific modeling method or model parameter. From the full joint distributions over lens potential parameters, we investigate further the difference in uncertainties between the models. In Fig. <ref> we plot posterior standard deviations for four parameters θ_ E, γ, q and γ_ ext. As expected from imaging lensing data, the uncertainty on the Einstein radius θ_ E is the smallest, with a relative precision on the order of 0.1%. For comparison, the relative precision on the mass density slope γ is around 1.2%, and around to 1.5% for the lens axis ratio q (the relative error for γ_ ext is inconclusive since it is close to zero). We notice that models with largest dynamic ranges (see Sect. <ref>) in their reconstructed source (Cluster+Exp+Lumweight) have smaller uncertainties on lens potential parameters. This trend is particularly clear for θ_ E, γ, q (first column in Fig. <ref>). Over the parameters shown in Fig. <ref>, the difference in uncertainties between the models remains relatively small, and amounts to a factor of approximately 1.6 between the least and most precise models. Ensemble models—namely, the combination of the posteriors of multiple models—can help improve modeling accuracy by correcting for the observed systematic biases of individual models. For this purpose, we show with black dotted-dashed lines and contours in Fig. <ref> the result of combining the posteriors from each method, giving each model an equal weight. We find that these combined distributions are all within 1σ from the true values, except for the lens center along the x direction which is at 1.7σ. The marginalized statistics of the combined posterior are reported in the last row of Table <ref> and compared to the simple average and standard deviation among individual models. We quantify the improvement in systematic bias between individual models and the combined model. In the fourth row of Table <ref>, we list the largest bias (in units of standard deviation) that arises among the six lens models, for each lens potential parameter. As already seen in Fig. <ref>, the most biased parameters appear to be the coordinates of the center of the power-law profile, (x_0,y_0), while the least biased is the density slope γ. As a comparison, the last row of Table <ref> lists the corresponding bias values of the combined posterior, showing as expected a substantial decrease for all mass model parameters. We discuss further these results in Sect. <ref>. §.§ Correlations between the recovery of lens and source properties Having in hand multiple lens models of the same data, we have the opportunity to explore how the accuracy of inferred lens and source properties correlate among the models. In particular, it is interesting to understand if certain biases observed in lens potential model parameters have an origin in biases in the reconstructed source light distribution, and vice versa. If such correlations exist, they could be used to design better parametrizations to jointly model the lens and source components, that specifically break these degeneracies. Such degeneracies may also be broken using non-lensing observations to further improve the accuracy of inferred lens and source properties (e.g., stellar kinematics of the source galaxy to place complementary constraints on its morphology). We emphasize that such correlations may be system- and data-dependent, which would warrant additional analyses complementary to ours. We show in Figs. <ref> to <ref> a series of scatter plots that correlate the relative error on lens potential parameters with the relative error on a given source property. We compute uncertainties on lens potential parameters from their posterior standard deviation. As we do not have such posterior distributions for all source models, we assume a fiducial uncertainty based on the Correlated Field model, for which we have posterior samples (see Sect. <ref>). To quantify possible correlations, we compute the biweight mid-correlation coefficient r, indicated on each panel in Figs. <ref> to <ref>. The uncertainty on r is estimated by drawing 1 000 random samples from bivariate uncorrelated Gaussian distributions centered on each data point. Based on the biweight mid-correlation coefficients, the largest (anti) correlation arises between the x-coordinate of the lens centroid x_0 and the axis ratio of the source q_ s, with r=-1.0±0.2. On the other hand, the strongest absence of correlation is seen between the external shear strength γ_ ext, and the total source magnitude m_ s, with r=0.1±0.2. We discuss and interpret the observed correlations in Sect. <ref>. § INVESTIGATING SOURCES OF SYSTEMATICS While a thorough investigation of all possible sources of systematics is beyond the scope of this study, we nevertheless attempt to assess the impact of some key modeling assumptions and data properties. The results of this effort will be useful in guiding future in-depth investigations. Specifically, we explore the role of the knowledge of the lens position, the presence of small-scale high-contrast regions (cusps, or point-like features) in the light profile of the source galaxy, and imperfect knowledge of the PSF. §.§ Intrinsic source morphology The spiral galaxy light profile that was used as the lensed source in producing the mock data presented in Sect. <ref>, has a prominent bright spot its center. The best-fit Sérsic index from the Sérsic+Shapelets model is approximately 1.6, indicative of a cuspy radial profile. This feature, which we will refer to as the source cusp in the following paragraphs, consists of only a handful of pixels (<20) that contain a significant amount of light (∼5 %). This region is clearly hard to model, as shown by the reconstructions in the last two columns of Fig. <ref>, where only two models are able to capture it (Sérsic+Shapelets and Cluster+Exp+Lumweight). The remaining models fail to do so and leave behind characteristic residual flux at the data pixels where this compact region is multiply imaged. Driven by this observation, we argue that this cusp is closer to a point-like flux component than to an extended source. Some algorithms, like the plain semi-linear inversion (even on an adaptive grid), have been explicitly designed to model the latter and are known to have a poor performance with the former. This can be understood in terms of the regularization, which tries to impose smoothness on the source and inevitably suppresses such cusps. In Fig. <ref>, we see that only a few source pixels with the highest flux (>0.3), corresponding to the central cusp, are considerably extending the dynamic range of the source light profile. The two aforementioned models that successfully model the cusp have a similar dynamic range, while the rest of the models fall short, barely reaching a flux of 0.3. Here, we examine in more details how the performance of the model with the lowest brightness range (smoothest), Adaptive+Matérn, is affected by the prominence of the cusp. We select the central bright region with flux >0.3 and reduce the flux[The noise map that is used as the covariance matrix in Eq. <ref> is also changed accordingly.] in each pixel by 50 and 95 %, creating two new mocks that we then model with exactly the same setup as the model shown in Fig. <ref>. The resulting (true) source and model residuals are shown in Fig. <ref>, while Fig. <ref> shows the posterior distributions of the lens potential parameters. It can be seen that the more we suppress the cusp the better the model, i.e. we get less residual flux and less biased parameters. We conclude that a plain semi-linear inversion approach is much better suited for modeling smoother sources, without cuspy, point-like features in their light profile. A regularization scheme imposed just by Eq. <ref> leads to reconstructions that are too smooth, and additional constraints, like the luminosity-weighted scheme presented in Sect. <ref>, give better results. The cuspy nature of the source can thus explain, at least partially, the systematic errors in lens potential parameters, in particular for the case of the Adaptive+Matérn model. §.§ Supersampling and inexact PSF model We have seen in the previous section that a cuspy source can lead to biases in the lens potential parameters even in models that fit the central cusp well, in particular the Cluster+Exp+Lumweight model which achieved noise-level residuals. We now investigate whether using a supersampled PSF can further eliminate this bias. We implement PSF supersampling in two different ways: first, by interpolating in the pixel-level PSF (which is often implemented in practice when point sources are present in a lensed source); and second, by using the original supersampled PSF that was used to produce the mock data. Although the original PSF corresponds to a supersampling factor of 10 (i.e. each pixel is split into 10× 10 subpixels), this would be much too computationally expensive to employ directly. We therefore choose a supersampling factor of f=5 and downsample the original PSF with f=10 accordingly; this is what we will refer to as the “true supersampled PSF”. In the first model, we use bicubic interpolation in the pixel-level PSF to generate our interpolated f=5 supersampled PSF. We first check whether PSF supersampling can eliminate bias in models without a luminosity-weighted regularization, by applying the above supersampling procedure with the “true supersampled PSF” to the Cluster+Exp model. The resulting posteriors are plotted as the dot-dashed green curves in Fig. <ref>. In this case we find a similar bias in the lens parameters as in the case where no supersampling is performed, which is perhaps explained by the fact that the best-fit model produces similar residuals in the regions where the lensed images are brightest. This is due to the fact that the reconstructed source is not significantly better resolved without the luminosity-weighted regularization prior; the regularization strength is too high to allow for a cuspy source in the central high-intensity region of the source. Next, we apply PSF supersampling to the corresponding model with luminosity-weighted regularization (Cluster+Exp+Lumweight). The results are shown as the red curves in Fig. <ref>, with the solid red curve showing the original luminosity-weighted model (i.e., same as Fig. <ref>) and the dashed and dot-dashed curves showing the models with supersampled PSF and “true supersampled PSF”, respectively. Note that both supersampled models have significantly reduced bias compared to the original luminosity-weighted model that did not use PSF supersampling. Some bias is still present in the lens model parameters for the interpolated PSF model, particularly in the slope γ and ϕ_ext parameters, whereas in the “true PSF” model, bias is largely eliminated in all lens parameters except for the center coordinates. In addition, the parameter uncertainties are significantly reduced when the true PSF is used. The essential difference can be seen by comparing the two supersampled PSF's directly in Fig. <ref>, where the interpolated PSF is quite poorly resolved. We conclude that for sufficiently cuspy sources such as this one, supersampling can significantly reduce bias in the lens model parameters, but may not entirely eliminate bias in the lens parameters if one generates a supersampled PSF by interpolating in the observed pixel-level PSF. It is interesting that without a luminosity-weighted source prior, the lens center coordinates are more accurately recovered (regardless of whether supersampling is used), despite all the other lens parameters being significantly biased. The Sérsic+Shapelet model produced a similar bias in the lens center coordinates as the luminosity-weighted models did, which is noteworthy since these are the only models that were able to reproduce the central cusp in the source well. While it is unclear exactly why this is the case, in real applications this may be ameliorated by the fact that the foreground lens light can furnish a prior in the lens center coordinates. Aside from this caveat, the bias in these parameters is at least somewhat reduced by supersampling with the true rather than interpolated PSF. We conclude that when fitting cuspy lensed sources, PSF supersampling can significantly reduce parameter biases only if it is accompanied by a regularization scheme (e.g. luminosity-weighted) that allows the source pixels to have steeper variations in the central bright region of the source galaxy where PSF supersampling is of the greatest benefit. § DISCUSSION §.§ Quantifying model complexity and its impact on posterior uncertainties In lens modeling there is inherently a trade-off between model complexity and tractability of the final inference. On the one hand, a more complex model—namely with more model parameters, or degrees of freedom—is likely to provide a better fit to the data (commonly measured as a smaller χ^2 value) with the risk of over-fitting. On the other hand, a simpler model is usually faster to optimize and often leads to a more robust inference (lower risk of local minima and multi-modal posteriors) but may not fit the data well. Moreover, models with different complexity can still fit the same data seemingly equally well; in such a case, one typically invokes the principle of Occam's razor, and prefer the one being the least complex a priori (which, when feasible, takes the form of the Bayesian evidence or other proxys such as the Bayesian information criterion). In our work, as described in Sect. <ref>, one of the main differences between the modeling methods we consider is the way the source galaxy is reconstructed. In the past, several works focused on consistently comparing a set of lens models with different source reconstruction techniques developed under the same general formalism <cit.>, but differing in their regularization terms <cit.>. Unfortunately, in the present work, there is no clear way to quantitatively and unambiguously rank the complexity of the source models considered here, given their fundamental differences in terms of mathematical formalism and underlying assumptions about the morphology of galaxies. One possibility would be to count their number of degrees of freedom, but this quantity is not readily accessible for all models. In regularized pixelated source models, the effective number of degrees of freedom is lower than them total number of source pixels, as regularization correlates source pixels over different spatial scales <cit.>. As a concrete example, regularization based on sparsity in a wavelet basis do not allow to analytically and unambiguously measure the number of degrees of freedom as it imposes the sparsity simultaneously at various spatial scales with non-linear dependency on the lens deflection field. Nevertheless, our ensemble of six models allow us to qualitatively discuss how differences in model complexity can affect the resulting inference. In Sect. <ref> we compare the blindly submitted posterior distributions, and notice some differences in the inferred parameters uncertainties (in terms of the posterior standard deviation). Based solely on the reconstructed sources (see Fig. <ref>), the model that displays the least complex features is Sérsic+Shapelets. Oppositely, one of the models that capture the most complex features is Cluster+Exp+Lumweight. One may naively expect the Cluster+Exp+Lumweight model to be more affected by issues related to over-fitting and local minima, as it is defined on a locally fine grid and has more flexibility compared to the Sérsic+Shapelets. In case of over-fitting, the uncertainties on model parameters can be significantly under-estimated, partly due to discretization biases that artificially narrows the likelihood profile <cit.>. Interestingly, while Cluster+Exp+Lumweight indeed lead to overall small uncertainties on lens potential parameters (compared to other models, see Fig. <ref>), we see that they do not significantly differ from the Sérsic+Shapelets model. Therefore, while we observe variations among the models regarding their posterior uncertainties (up to a factor of ∼ 1.6), these variations are not solely driven by differences in model complexity. This result also highlights the difficulty in measuring the complexity of model simply based on the modeling assumptions and forms of regularization, or on the visual impression of the reconstructed source. However, this kind of analysis offers directions to explore for better understanding the origin of systematic biases in lens potential parameters. §.§ Model degeneracies between the lens and the source As presented in Sect. <ref>, we observed correlations between the level of recovery (the relative error) of lens and source properties. In particular, we find a correlation between the lens power-law slope γ and the source effective r_ eff, with correlation coefficient r=0.9±0.4, namely a positive correlation with a statistical significance of 2.3σ. It is well-known that correlations between the source scale and the mass density profile of the lens can arise, which are often seen as a manifestation of the MSD <cit.>. The MSD originates from the mass sheet transformation (MST), and its consequence is as follows: the addition or subtraction of a infinitely thin mass sheet from a power-law mass density profile changes, to first order, the slope of the density profile at the Einstein radius <cit.>, while rescaling the source proportionally. A positive mass sheet locally increases the density slope, which in turns induces an increase of the source size through the MSD. This effect has also been empirically explored in <cit.> by using the explicit scale encoded in a source model based on shapelets. The positive correlation we find between the biases in γ and r_ eff may thus be the signature of the MSD: all the lens models we consider infer slightly too large density slopes and source effective radii. There is a general trend of correlations between source light shape and lens mass shape that we observe in our results. For instance, we note correlations between the recovery of q_ s and q_ m or ϕ_ ext, r_ eff, and anti-correlations with ϕ and the lens centroid. We also observe strong correlations between the error on q_ s and that of the lens centroid. These correlations are more challenging to unambiguously interpret compared to those associated to the MST. We argue that they may be related to the source position transformation (SPT) outlined in <cit.> and further developed in <cit.>. The latter work effectively shows that transformation of the source profile can be compensated by changes in ellipticity and radial profile of the lens mass distribution. The biases that we observe in our models may be manifestations of this transformation and qualitatively match the expectation for a SPT. Because the SPT leads to an approximate degeneracy (unlike the MSD), it is expected to be broken with higher quality data. It may therefore be interesting to see how these correlations change with higher S/N data or even noise-free mock data. §.§ Combining methods to mitigate systematic biases We quantify the scatter associated with the choice of modeling method in Table <ref>, which records the mean and scatter of relevant source light and lens mass quantities among all the models. Although dependent on the type of data considered in this work, these numbers provide an estimation of systematic errors that one can expect from modeling imaging data with different methods. The systematic errors we quote here can be interpreted as lower bounds to those of a real-case scenario, since we placed ourselves in an idealized setting—no contamination by the lens light and perfect knowledge of noise properties and mass model family—which removes a subset of the known sources of biases or degeneracies. Ideally, the addition of such complicating factors may broaden the posterior distributions from individual models, thus making them statistically compatible (underlying systematic biases would then be unnoticeable). However, more realistically, the scatter between models is likely to increase as a consequence of more complex models potentially subject to different sources of biases. Performing similar analyses as the one presented here to a wider variety of strong lensing data (different resolution, S/N, lensing configuration, nature of the source, etc.) will help understand the generalization of our results. For this purpose, the framework we have developed should help and encourage the multiplication of such analyses on both simulated and real data sets. However, we show that combining independent methods offers a way to overcome potential systematic errors. As we show in Fig. <ref> and Table <ref>, combining together the results from an ensemble of methods removes the observed biases. While individual methods display systematic errors in different parameters—and not necessarily always for the same parameters and in the same directions—it is reassuring that overall, we do not observe a significant residual offset after combination. Only for the mass density slope (γ) one can observe a global trend towards larger values. We quantify in Table <ref> the bias reduction between the biases from individual models and the one from the combined model. Among the eight mass model parameters, we find that an average bias reduction of 5.4, which is a substantial improvement on inferences from individual models alone. This result is reassuring and shows that analyzing a given data set using independent modeling methods is an efficient way to mitigate systematic errors. Beyond the accuracy improvement of a combined posterior distribution over the lens model parameters, the comparison of multiple models—and if possible, truly independent models—is extremely valuable for detecting unknown sources of systematics, updating our current modeling assumptions and techniques and testing additional models with different levels of complexity. Similarly, in source plane, comparing the morphology of different versions of the unlensed object (which is never directly observable) is necessarily valuable for straightening the confidence in a given feature, which in turn improves the robustness of its interpretation. Moreover, different models may predict different observables that may otherwise be overlooked, enabling the potential detection of anomalous lens systems <cit.> or anticipate the detection of additional images unobserved in current observations <cit.>. §.§ Extrapolating to time-delay applications As lens modeling is a key ingredient in time-delay cosmography applications, one may be interested in the propagation of our results to the recovery of the Fermat potential difference (defined in Eq. <ref>). The Fermat potential difference between two images i and j is proportional to the time delay measured if the source is varying in time. Such a source can be a quasar centered on its host galaxy and orders of magnitude brighter than the lensed arcs, or supernova (SN) which can appear almost anywhere in its host and fades away after the explosion, leaving behind only the arcs as in our simulated data. We show in Fig. <ref> the Fermat potential differences for three hypothetical pairs of lensed images. While the simulated data we analyze in this work (Fig. <ref>) mimics the case of a lensed SN that faded away, we assume for simplicity that its location in source plane coincides with its host galaxy (although one could also proceed similarly for any other source position that leads to at least two lensed images). We label the lensed images ABCD, order them by ascending Fermat potential (i.e., Φ(θ_A) is the lowest) and consider the three independent pairs ij∈{ AB, AC, AD}. In addition to the posterior mass parameters uncertainties, we add in quadrature an additional uncertainty to mimic the limited astrometric precision <cit.>, assuming a conservative precision of 10 mas in image plane. The gray shaded area in each panels of Fig. <ref> around the true values isolates the typical contribution of the astrometric scatter term, which is in our case small compared to the uncertainties modeling the extended lensed source. As expected from the constraints on mass profile parameters (Fig. <ref>), we observe a similar scatter around the true values among the models. Interestingly, we do not observe a clear correlation between model biases on individual lens potential parameters and their resulting biases on Fermat potential differences. For example, the Sparsity+Wavelets model, which display slight biases on some source properties and lens potential parameters (e.g., the lens position), give Fermat potential posteriors well-centered on the expected values. While a robust generalization of these results require further work (e.g., on a larger sample of simulations), the general trend we previously found still remains: the combination of individual posteriors assuming equal weights results in posteriors that are free of biases. These combined distributions are shown with black dotted-dashed lines in Fig. <ref>, and encapsulate well the Fermat potential difference values. This further gives motivation for time delay cosmography analyses to compare and combine lens models together, in particular those obtained using independent modeling methods <cit.>. In a follow-up paper, we will explore the dependency of systematic lens modeling errors on the source plane position of the time-delay background object. §.§ Application to real data sets For the purpose of demonstration, we focus on simulated imaging data, but the framework and ideas we present is intended to be applied to real data. The COOLEST standard, which provides a common ground to express lens models from different origins, already supports real data and their corresponding models <cit.>. Moreover, extending the standard to other types of models (for both the lens and the source) is straightforward. The main complication for applying this framework to real data arises from the ability of applying multiple codes and methods to a given data set. This necessary step obviously requires some level of human expertise and time, as well as computational resources, which can be significant depending on the methods (see also Sect. <ref>). Although COOLEST significantly reduces the burden to express a lens model in a format that can be readily compared to other models, the acquisition of combined posterior distributions over lens model parameters still requires that at least two distinct lens models are in hand. Here we model the same data using four software packages and six modeling methods, which is likely unrealistic in most real scenarios, especially given the large amount of lenses that are still to be modeled <cit.> and that will be discovered in the near future <cit.>. Nevertheless, it is reasonable to assume that it is feasible to apply two or three methods to the same data (not necessarily within the same analysis), since the amount of lens modeling experts naturally grow over time and inference methods are becoming less and less time consuming. Deep learning methods, that we do not explore in the present work, offer promising avenues to accelerate the overall procedure either by proposing preliminary models to be refined with classical methods, or providing additional posterior distributions for final combination, at a negligible cost (ignoring the training phases). § CONCLUSION We have conducted a fully blind modeling experiment on strong lensing data simulated with a dedicated software package by one author, while four other authors used four independent software packages to model it. In total, six modeling methods—which differ in their source reconstruction techniques and inference pipelines—have been applied on that same data. We have made a series of simplifying assumptions to keep this novel kind of analysis tractable, in particular regarding the lens light, the lens mass model parametrization and the noise properties. In contrast, the true PSF, lens parameters and source morphology were hidden and the optimization and inference strategies left free to the modelers. We used the lensing standard COOLEST to overcome the challenges that arise when comparing results obtained with different modeling codes. The resulting image and source plane models, as well as model residuals are given in Fig. <ref>, and constraints on lens potential parameters are shown in Fig. <ref>. Below we summarize our main results: * While no modeling method resulted in strong statistical biases systematically for all lens and source properties, we observed a measurable scatter among the models. Strongest biases arise for the lens centroid, while the mass density slope at Einstein radius is only mildly biased with a small inter-model scatter. We also observe differences in the dynamic range of the reconstructed source intensities. * Combining results from different modeling techniques enables to mitigate systematic uncertainties arising for individual models. For the particular data we consider, the systematic error on lens potential parameters is reduced by a factor 5 on average. The reason is that models tend to scatter around the true parameters values but stay statistically compatible, such that the combined posterior distributions effectively broaden and include the true values. This results also holds regarding the Fermat potential differences between hypothetical lensed images of a point source component, which is relevant for time delay cosmography applications. While the amount of bias reduction is evidently data-dependent, we argue that model combination is generally beneficial to remove some biases from strong lensing analyses. * Towards the goal of better understanding the origin of model biases, we used our ensemble of models to investigate possible correlations between lens and source properties. We observed correlations between errors on lens potential parameters (e.g., the mass density slope) and on the morphology of the source (e.g., the effective radius or axis ratio). We argue that such correlations can be manifestations of the well-known mass-sheet transformation (MST), but also more generally of the source-position transformation (SPT). Better handling these degeneracies in current and future modeling methods will be key to further minimize model biases. * We investigated how certain model assumptions affect the recovery of lens potential parameters. In particular, we explore (1) how the cuspy nature of the lensed galaxy can lead to systematic errors if the source model is not flexible enough to capture large intensity variations, and (2) how the accuracy of the PSF (the true PSF being unavailable to the modelers) plays a role even for extended source modeling. We find that both an accurately sampled PSF and a source model with large dynamic range (e.g. using a luminosity-weighted source prior) are warranted to reconstruct cuspy lensed sources while minimizing systematic errors on lens potential parameters. Over the past years, numerous lens modeling methods have been proposed and implemented in different software packages. Here, we selected a subset of those with the goal of using them together, instead of only opposing them. Typically, we refrain from explicitly ranking the modeling methods, which would only be meaningful over an extremely large sample of strong lenses with different data quality and modeling assumptions to ensure proper generalization. In a real-case scenario, we do not have access to the truth, therefore applying and combining different methods together is a pragmatic and efficient way to detect and mitigate systematic errors. As shown, for example, in <cit.> by testing three types of source regularizations, there exists an inherent dependence on the data properties, the lens configuration and the (unobservable) intrinsic source morphology, such that it is unlikely that a single source reconstruction method gives unbiased results in all cases. Our work strengthens this idea and goes further by combining a large collection of models, while investigating their specific effects on inferred parameters. Moreover, we purposefully allowed some level of freedom for the modelers (e.g., masks, PSF, posterior marginalization, etc.), such that our work also illustrates the role of specific modelers' choices. These choices play a role in the observed scatter between models, and can be marginalized over by combining the methods together. As stated in the introduction, we have not used lens modeling methods based on deep learning, as those would require many additional assumptions, in particular regarding their training phase. Nevertheless, the comparison framework presented here is very general and does not depend on the nature (classical, deep learning, etc.) of the underlying methods. Therefore, we encourage future studies to complement and combine classical methods with those based on deep learning, as the latter have clear advantages such as fast computation time after training and a large flexibility through different network architectures. Examples of using neural networks to complement classical techniques have been proposed in <cit.> and <cit.>. Our publicly released simulated data and lens modeling products can be directly used to test and improve such deep learning (or any other) approaches. The framework and ideas presented here is designed to be applied on real data sets and expanded beyond our initial simplifying assumptions. In particular, the role of the lens surface brightness model should be investigated further <cit.>. Similarly, the standard assumptions of uncorrelated Gaussian noise used in lens modeling analyses should be re-assessed <cit.> to ensure that analyses of the many future observations of strong lenses remain fully accurate. This work originated in the Lensing Odyssey 2021 workshop, and so we would like to acknowledge the organizers and attendees for the fruitful discussions. AG acknowledges the Swiss National Science Foundation (SNSF) for supporting this work. GV and QM were both supported by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program. QM gratefully acknowledges a grant of computer time from ACCESS allocation TG-AST130007. This research has made use of SciPy <cit.>, NumPy <cit.>, Matplotlib <cit.>, Astropy <cit.>, Jupyter <cit.> and GetDist <cit.>. aa § REALISTIC SOURCE SURFACE BRIGHTNESS IN MOCK DATA The mock data shown in Fig. <ref> is not the first mock we built for this work. Initially, an other prescription for the background source was used, which caused large biases on many model parameters, which was noticed only after unblinding. Therefore, it was decided to repeat the entire procedure—namely, the data simulation and blind submissions of models from independent modelers—which lead to the mock used throughout this work. As it can be beneficial to some readers, we give more details about the old mock data below and why it caused large parameters biases. We show in the left panel of Fig. <ref> the old simulated data. For the source, shown in the middle panel, we used a B filter image of M31 available through the ESO Online Digitized Sky Survey. This image was selected because a local galaxy has high-detail resolved structure with negligible PSF spikes, which could introduce nonphysical features if lensed. After ray-tracing and the addition of noise, the data visually resembles to genuine lensing data, despite the input source boundaries being visible at some locations. Therefore, any potential issue related to these boundaries could not be detected before modeling the data. After the blind modeling stage, it was noticed by the modelers that their reconstructed source models were displaying strong boxy features surrounding the ellipsoidal shape of the source galaxy. Among the six models, the Sparsity+Wavelets model was strongly affected by the square and boxy nature of the source, for which an extremely low resolution grid was systematically preferred as it represented better these sharp boundaries. This artificial bias towards a low resolution grid (i.e., few source pixels) was in turn strongly biasing the lens potential parameters, in particular the mass density slope due to its degeneracy with the source size (through this MST). The right panel of Fig. <ref> shows, as an example, the Correlated Field model of that source, which also displays a boxy shape. After comparing the different models together, the mass density slope was found to be biased in almost all models. After extensive checks, the origin of these biases was attributed to the sharp boundaries and large background flux of the input source. The source used in the new simulated data (shown in Fig. <ref>) was carefully processed, which solved these issues. § TECHNICAL MODELING DETAILS This section gives technical details on the specific choices made by modelers to analyse the simulated data. We complement it with Table <ref> that give the approximate computation time necessary to obtain the various models used in this work. §.§ Sérsic+Shapelets For the analytical model using (Sect.  <ref>), a 4.5-radius circular mask around the lens is used. The source light is composed of a single elliptical Sérsic profile. Shapelets components are added sequentially to that profile. The significance of those components to the model is evaluated by calculating the Bayesian Information Criterion (BIC), which balances the likelihood with the number of parameters following: BIC = n_ parln(n_ data) - 2 lnL(η̃) , where n_ par is the number of parameter, n_ data is the number of data points (i.e., data pixels used as constraints), and L(η̃) is the loss function evaluated at the best fit position (Eq. <ref>). For this specific mock, the BIC favors a maximum order of shapelets n_ max = 5 (Table <ref>). The point spread function (PSF) is treated in as follows. The surface brightness of the lensing system, which is simulated at each iteration, can be sampled on a grid with higher resolution than the observed image before being averaged to the data resolution. If the modeled surface brightness is supersampled, the user can perform the PSF convolution on the finer grid. In that case, a supersampled PSF is calculated by interpolation, and an iterative process allowing for perturbation of individual PSF pixels, ensures that the downsampling of the supersampled PSF recovers the input PSF. We elected a supersampling factor of 5 and performed the PSF convolution on the supersampled grid. §.§ Adaptive+Matérn We begin with the broadest possible range of the lens potential parameters of the model (see Eqs. <ref> and <ref>), except for the slope, which we fix to isothermal, and the Einstein radius, whose range is fixed by rough estimation of the radius of the Einstein ring from the data. The former serves simply to speed up the calculation, while the latter is required to avoid over- or under-focused, non-physical solutions. The main choices at this first-pass stage are the type of regularization and the resolution of the adaptive grid. For the regularization we choose curvature, which has only one free parameter, the regularization strength λ, and a fixed covariance matrix C_s (see Eq. <ref>). The adaptive grid is created simply by using the deflected positions of every third pixel of the data image in the x and y directions. We create a custom mask that we use in all of our subsequent models, and we do not perform any supersampling of the PSF. This setup is very fast to run, taking only a few minutes on a standard multi-core laptop. After this first pass, we switch to using the Matérn kernel regularization given in Eq. <ref>, which has 3 free parameters, and increase the resolution of our adaptive grid by using every second pixel in both directions in the data image to construct it. Based on our first crude model, we restrict each lens potential parameter to about 20 per cent of its full range, centered roughly on the bulk of the posterior probability. This model takes about an hour to run on a standard multi-core laptop. Finally, we further restrict the parameters to 10 per cent of their full range and run a final model with the same regularization but even higher resolution, i.e. shooting back every pixel in the data image to create the adaptive grid. We combine the last two models, which differ only in the resolution of the reconstructed source (although the first served to initialize the second in order to save on computations), by merging their posterior samples in such a way that the probability mass of each one is the same independently of its actual size. §.§ Cluster+Exp and Cluster+Exp+Lumweight In , the image plane is supersampled by splitting each image pixel into 3× 3 subpixels; each subpixel is ray-traced to the source plane, and the source grid is generated using a k-means clustering algorithm exactly as is done in <cit.>, but with each ray-traced point receiving equal weight. We choose the number of source pixels to be equal to half the number of image pixels within the mask. The surface brightness of each ray-traced subpixel is determined by interpolating in the three source pixels whose Delaunay triangle the ray-traced point is in (or closest to); the surface brightness for all the subpixels within a given image pixel are then averaged to obtain the surface brightness for the image pixel. The pixel surface brightness values obtained this way are then convolved with the pixel-level PSF (note that although we are supersampling the image plane, we do not supersample the PSF here; the effect of PSF supersampling will be explored in Section <ref>). The regularization is performed using an exponential kernel (equivalent to a Matérn kernel with ν = 1/2). To encourage convergence, we make use of two additional priors on the source: first, there is a prior that discourages producing lensed images outside the mask. We accomplish this by temporarily unmasking after the source pixels are solved for, generating the lensed images without a mask, and imposing a steep penalty if surface brightness is found outside the mask whose value is greater than 0.2 times the maximum surface brightness of the images. Second, we place a prior on the number of lensed images produced. This is accomplished by creating a Cartesian grid in the source plane and finding the overlap area of all the ray-traced image pixels for each Cartesian grid cell; by dividing the total overlap area by the area of each grid cell, we obtain the number of images produced by that cell. We can then take the average number of images over all the cells. We impose a steep penalty if the average number of images if less than 1.5. This discourages solutions that are not multiply imaged, where the source looks identical to the observed configuration of lensed images. With these priors in place, we can obtain a good solution with a single nested sampling run, provided the parameter priors are broad enough. The Cluster+Exp+Lumweight model uses all of the methods described above, but in addition it uses a luminosity-weighted regularization as described in Sect. <ref>. Thus we include the additional parameter ρ (Eq. <ref>), which controls the steepness of the luminosity weighting, as an additional nonlinear parameter to be varied. §.§ Sparsity+Wavelets Before modeling the source on a regular grid with multi-scale regularization, we start with an approximate lens mass model obtained by modeling the source with a single Sérsic profile. Since at this stage of the modeling process the mass model may be rather inaccurate, using spatially varying regularization weights (W_ ms in Eq. <ref>) could bias the source reconstruction. Therefore we approximate the weights by their median value within each wavelet decomposition scale (i.e., we use spatially uniform weights within each frequency range). We set the global regularization strength λ_ ms=3σ for the first wavelet scale (highest frequency features), and λ_ ms=1σ for the remaining scales. We choose a lower threshold for low frequency features as advocated in various works relying on similar multi-scale regularization strategies <cit.>, since high frequencies are more impacted by the presence of noise in the data. We obtain a first approximate model of the pixelated source by jointly optimizing all parameters except for the fixed lens center, and we impose a strong isothermal prior (i.e., γ∼𝒩(2, 10^-3)) on the mass density slope to avoid introducing degeneracies early in the modeling sequence. We use the gradient descent optimizer AdaBelief <cit.> implemented in the Optax <cit.> library to obtain best-fit parameters. We then re-optimize model parameters by re-computing regularization weights and releasing priors on the lens center and density slope. The last step is to estimate the posterior distribution of lens mass parameters while further refining the source model. At this stage, the lens model is very close to the best-fit model so we do not rely anymore on the approximation of uniform regularization weights per wavelet scale, and properly propagate the noise to source plane. These more accurate regularization weights significantly help eliminating remaining artifacts located on the outskirts of the source galaxy, when the data is the least constraining. The resulting χ^2 being below unity, we further boost the regularization strength of high-frequencies (for this particular data, by 5σ) in order to obtain a χ^2 of the order of unity. We note that the precise value of this boost does not significantly impact the final posterior distribution as we marginalize over many model variations. In particular, we vary the number of source pixels from 80×80 to 160×160 pixels with steps of 5. We also run the same ensemble of models by globally increasing the regularization strength by 1σ. In total, we consider 34 model variations. The joint posterior distribution for lens mass parameters is estimated using stochastic variational inference <cit.>, which directly makes use of known gradient of the loss function. SVI is also less computationally expensive than other sampling methods such as Markov Chain Monte Carlo or Hamiltonian Monte Carlo, which allows us to run a larger number of model variations. Since in this work, we are mainly interested in the joint posterior distribution of the lens mass parameters which are not expected to exhibit strongly non-Gaussian correlations[This assumption is validated with the correlated field model by using more flexible SVI methods (see Sect. <ref>).], we find that a multi-variate Gaussian distribution is a sufficient surrogate posterior model. However, we acknowledge that SVI can underestimate uncertainties <cit.>, a limitation that we address by marginalizing over the 34 model variations assuming equal weights. In addition to the full posterior distributions and first-order posterior statistics, we save in format a point-estimate model, that corresponds to the mean model as obtained from SVI, with 120^2 pixels and fiducial regularization strength. §.§ Correlated Field Similar to the strategy used with the multi-scale source model, we first model the imaging data with a single Sérsic source profile, which we then replace with the correlated field model defined in Eq. <ref>. For our baseline model, we set the shape of the Gaussian excitation field to 90^2=8'100 pixels. The field power spectrum is modeled as power law parametrized with an amplitude and a slope. These two parameters are themselves sampled from a log-normal distribution, described in turn by a mean and scale parameters. The additive offset in real space is modeled by a single scalar (initialized to the mean flux values from the Sérsic model), whereas variations of this offset are sampled from a log-normal distribution with additional mean and scale parameters. Lens potential model parameters (PEMD and external shear) are sampled from Gaussian distribution, for which we check that the prior widths are large enough. We optimize the full model—lens mass parameters and source field parameters—using the set of minimizers and samplers implemented in . More specifically, we converge to the maximum a posteriori solution and estimate the joint posterior distribution using metric-Gaussian variational inference <cit.>. We run several variations of the above fiducial models. In particular, we increase the field resolution to 120^2=14'400 pixels, alter the sampler random seeds, initialize the model with a worst lens model (obtained from a different Sérsic source model), and use geometric variational inference <cit.> instead of MGVI. All these model variations result in almost identical posterior distributions for lens mass parameters and consistent source models. Nevertheless, we conservatively marginalize over these models with equal weights. For the point-estimate parameters, we set those to the mean values of the VI samples of the model with the most resolved source (i.e., 120^2 pixels), although the fiducial model is virtually indistinguishable. § CORRELATIONS BETWEEN LENS AND SOURCE PROPERTIES Given the six independent models we gather in this work, we can investigate of the errors on the lens and source properties of interest correlate with each other. In other, we are interested in signs of degeneracies between the lens and source properties, that can be revealed over the ensemble of models. We show in Figs. <ref>, <ref> and <ref> a series of plots that correlate all lens potential parameters with the three main source properties we investigate (r_ eff, q_ s and m_ s respectively). These results are presented and discussed in the main text, in particular in Sect. <ref> and Sect. <ref>. get arXiv to do 4 passes: Label(s) may have changed. Rerun
http://arxiv.org/abs/2406.08100v1
20240612112703
Multimodal Table Understanding
[ "Mingyu Zheng", "Xinwei Feng", "Qingyi Si", "Qiaoqiao She", "Zheng Lin", "Wenbin Jiang", "Weiping Wang" ]
cs.CL
[ "cs.CL", "cs.AI" ]
[ Andrew Pearce-Crump June 17, 2024 ======================= § ABSTRACT Although great progress has been made by previous table understanding methods including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input. However, it is difficult to access such high-quality textual table representations in some real-world scenarios, and table images are much more accessible. Therefore, how to directly understand tables using intuitive visual information is a crucial and urgent challenge for developing more practical applications. In this paper, we propose a new problem, multimodal table understanding, where the model needs to generate correct responses to various table-related requests based on the given table image. To facilitate both the model training and evaluation, we construct a large-scale dataset named MMTab, which covers a wide spectrum of table images, instructions and tasks. On this basis, we develop Table-LLaVA, a generalist tabular multimodal large language model (MLLM), which significantly outperforms recent open-source MLLM baselines on 23 benchmarks under held-in and held-out settings. The code and data is available at <https://github.com/SpursGoZmy/Table-LLaVA>.^* Indicates equal contribution. ^†This work was done during an internship at Baidu Inc. ^ Corresponding author: Zheng Lin. § INTRODUCTION Tables are widely used to store and present data across various fields, e.g., financial analysis, scientific research and government reports <cit.>. To make the most of the abundant tabular data, the table understanding (TU) technique has been proposed to automatically understand tables and perform table-based tasks, such as question answering <cit.> and text generation <cit.>. As a technique that could significantly elevate work efficiency in different industries, it has attracted ever-increasing research interest in recent years. Though considerable efforts have been dedicated to the table understanding problem <cit.>, most previous models can only fulfill very limited tasks until the emergence of large language models (LLMs) <cit.>. With the help of powerful LLMs, we are getting closer to the vision that a versatile model can perform a variety of table-based tasks. However, existing table-oriented LLMs <cit.> rely heavily on the prerequisite that all given tables must be converted into a certain text sequence (like Markdown or HTML) to be input to LLMs. Under some practical scenarios like scanned documents and webpage screenshots, it is difficult to obtain such high-quality textual table representations, and yet table images are more accessible. Moreover, humans can directly understand two-dimensional tables using the intuitive visual information, whereas LLMs can only interpret tables in a one-directional textual perspective, which may increase the difficulty of comprehending diverse table structures and colored table elements. In summary, for the sake of convenience and intuitiveness, it is a crucial and urgent challenge to explore how to directly digest tables using visual information. To promote the advancement of table understanding and its real-world applications, we propose the multimodal table understanding problem, where the model is required to generate correct responses to different table-related requests (e.g., questions) in an end-to-end fashion based on the table image. Despite the fact that recent multimodal large language models (MLLMs) have demonstrated excellent capabilities in many multimodal tasks, they cannot be directly extended to the proposed task. As shown in Figure <ref>, the performance of popular MLLMs like MiniGPT-4 <cit.> and BLIP2 <cit.> is close to zero on most tasks, revealing their weakness in understanding tabular data. More importantly, there is a lack of a comprehensive dataset that can support both the development and evaluation of generalist MLLMs towards multimodal table understanding. To address the above issue, we construct MMTab, the first open-source large-scale dataset for multimodal table understanding problem, based on 14 publicly available table datasets of 8 domains. We carefully design scripts to convert original textual tables in these datasets into table images highlighting a broad coverage of table structures and styles, and transform all task-specific samples into multimodal instruction-tuning samples with a unified format of . The resulting dataset contains (1) 150K table recognition samples on 97K table images for pre-training (named MMTab-pre). (2) 232K samples of 14 table-based tasks on 82K table images for instruction tuning (named MMTab-instruct). (3) 49K test samples on 23K table images composing 17 held-in and 7 held-out benchmarks (named MMTab-eval). During the dataset construction, data augmentations at multiple levels (e.g., table-level, task-level) were adopted to further improve the data diversity, and we also introduce multimodal table structure understanding tasks that have been overlooked in previous studies. Based on the curated dataset, we develop a versatile tabular MLLM named Table-LLaVA with an enhanced two-stage training paradigm. In the first stage, we pre-train LLaVA-1.5 <cit.> with an extra table recognition task on the MMTab-pre, which requires the model to generate textual sequences (like HTML) given table images. This stage aligns the structures and elements within table images to textual modality and thus enhances the comprehension of the basic table structure and content. In the second stage, we continue to instruction-tuning the model with diverse table-based downstream tasks on the MMTab-instruct, which endows the model with the multimodal instruction-following ability for table-related requests. We compare Table-LLaVA with a series of open-source (M)LLMs and closed-source GPT-4V. Experimental results show that Table-LLaVA beats strong MLLM baselines on 17 held-in and 6 held-out benchmarks, and is even competitive with the powerful GPT-4V on 14 benchmarks with a subset of test samples. Extensive ablation experiments are conducted to reveal the contributions of different training data (e.g., the influence of table recognition pre-training data). We also explore the mutual influence between model's capacity for tabular tasks and non-tabular tasks. We hope this work could establish a strong base for future research on the multimodal table understanding problem and facilitate the progress of more general MLLMs. We conclude our contributions as follows: 1) We make the first systematic exploration of the multimodal table understanding problem, which is complementary to the traditional text-only problem setting. 2) Accordingly, we construct and release a large-scale dataset MM-Tab which covers diverse tables and data for different tasks, including a series of novel table structure understanding tasks. 3) We develop a versatile tabular MLLM Table-LLaVA, which significantly outperforms a range of strong MLLM baselines under both held-in and held-out settings (Figure <ref>). § RELATED WORK §.§ Table Understanding The table understanding (TU) problem concentrates on how to automatically extract, transform and interpret essential information from tabular data, and it has attracted significant attention in the past years <cit.>. Many tasks fall under the umbrella of table understanding problem, e.g., Table Question Answering (TQA) <cit.>, Table Fact Verification (TFV) <cit.> and Table-to-Text (T2T) generation <cit.>. Different approaches have been proposed to solve specific TU tasks, ranging from early rule-based systems <cit.> to later tabular language models (TaLMs) <cit.>, which are pre-trained from general language models like BERT <cit.> with extra large-scale table corpus. Nevertheless, these methods can only support limited TU tasks and handle tables of specific types. Recently, the emerging LLMs have opened up new possibilities for utilizing one single model to fulfill multiple table tasks. Researchers have attempted to enhance the TU ability of existing LLMs with different strategies such as prompt engineering <cit.>, instruction tuning <cit.> and combining external tools <cit.>. The resulting tabular LLMs like TableLlama <cit.> and TableGPT <cit.> can possess better TU ability and respond to wide-ranging table-related requests. However, previous TU approaches including tabular LLMs are unable to directly understand table images, which limits the potential application scenarios of TU technique. §.§ Multimodal Large Language Models With LLMs experiencing rapid advancements, recent studies have tried to endow the purely texutal LLMs with understanding and perception capabilities of other modalities such as image and video, leading to the emergence of MLLMs <cit.>. Flamingo <cit.> proposes a gated cross-attention mechanism between vision encoder and LLM, which is trained on billions of image-text pairs to align vision and language modalities. BLIP2 <cit.> introduces a Q-Former with learnable query vectors to abstract the visual information from vision encoder into features of a fixed number. LLaVA <cit.> uses a linear layer as a simpler cross-modal connector and achieve powerful performance with better data efficiency. Though previous MLLMs demonstrated remarkable performance on multiple multimodal tasks <cit.>, their ability to digest table images and perform downstream tasks has not been thoroughly investigated. In this work, we build the first large-scale multimodal table understanding dataset and develop Table-LLaVA, a versatile tabular MLLM for diverse table-based tasks. To stimulate future endeavours on this problem, we also provide a comprehensive benchmark and fully evaluate the table understanding ability of existing models. More recently, researchers also tried to develop MLLMs like Vary <cit.> and Monkey <cit.> to understand document pictures with enhanced visual encoders, e.g., scaling up the vision vocabulary and image resolution. These models focus on the unified visual understanding of different document images and can be further improved with the proposed dataset. § MMTAB DATASET §.§ Data Collection As shown in Table <ref>, with a pursuit of diverse table structures, tasks, and domains, we collect samples from 14 public table datasets of 8 domains (the first 14 rows in Table <ref>), covering 9 representative academic tasks. The detailed definition of each task can be found in Table <ref>. The original tables in these datasets are stored in divergent textual formats such as HTML or Markdown. We carefully design Python scripts with external packages like html2image to convert textual tables into high-quality table images. The task-specific input and output texts are transformed into the instruction-following format with pre-defined instruction templates. To minimize errors during answering parsing, we also add extra instructions, requiring models to output the final answer in the JSON format. As shown in the Figure <ref>, the rendered table images and processed input-output pairs constitute the final multimodal instruction-tuning samples with a unified format of . We adhere to the original dataset partitioning and select 11 datasets for training and held-in evaluation. 3 small-scale datasets with non-overlapping domains are used for held-out evaluation. The overview of sample construction process is depicted in Figure <ref>. §.§ Data Augmentations Previous works have shown that the diversity of instruction-following data is crucial to the capability of the resulting instruction-following models <cit.>. To create more data diversity and avoid over-fitting in the model training, we perform additional data augmentations at multiple levels. Table-level augmentations. Real-world tables often have varied structures and styles. An ideal table understanding model should be able to process divergent tables like a human reader. Since our dataset already includes diverse table structures from academic datasets, we separately design scripts to render table images with three different styles: Web-page (70.8%), Excel (19.4%) and Markdown (9.8%). Fine-grained adjustments such as font type and cell colors are also considered. Instruction-level augmentations. In practical scenarios, user instructions for the same task are likely to vary from user to user. To improve models' robustness towards such variations, we resort to GPT-4 to generate new instruction templates and descriptions about JSON output format in a few-shot fashion based on several manually annotated demonstrations. Generated instruction templates with grammar mistakes or deviation from the original task are filtered out. When we construct input requests of each dataset, we randomly select an instruction template and an output format description from the candidate pool, and then combine them with the task-specific input such as table-related questions to produce the final input request. This combination strategy can bring more diversity of input requests. Using the TABMWP as an example, we show instruction templates and Python code for building input requests in Figure <ref>. Task-level augmentations. Though the collected 14 public datasets highlight 9 academic tabular tasks (e.g., Flat TQA and Cell Description) which demand table-based reasoning capabilities, it is still a question whether existing MLLMs are truly aware of the basic table structures. Prior study has found that, despite achieving great performance on downstream tasks, tabular LLMs may still exhibit poor capacity for perceiving table structures <cit.>. To further strengthen the fundamental table structure understanding ability of MLLMs, 6 table structure understanding tasks (the 6 rows with `Structure Understanding' task category in Table <ref>) are devised, e.g., table size detection (TSD) task. For each task, we use the above-mentioned method to generate input requests and design scripts to automatically extract the final answer from the textual tables in collected datasets. Finally, 8K training samples, 1K or 1.25K evaluation samples were constructed for each structure understanding task. Except above strategies, we also combine single-turn samples of the same table to compose 37K multi-turn conversation samples. At last, we obtain 232K samples on 82K table images for instruction-tuning (named MMTab-instruct), and 45K held-in and 4K held-out test samples on 23K table images for evaluation (named MMTab-eval). Inspired by existing MLLMs which align textual descriptions with input images through image-text pre-training, we introduce the table recognition task as an important pre-training task for multimodal table understanding. In this task, MLLMs learn to generate a textual table representation such as an HTML sequence given the table image, which helps aligning structure and text information in the table image with the ground-truth. We additionally collect 20K table images from the ToTTo <cit.> training split and merge them with table images in the MMTab-instruct to construct sufficient pre-training data. Based on these table images and their original textual tables, we write scripts to construct table representations of three formats (HTML, Markdown and Latex), and then build instruction-following samples in the same way of MMTab-instruct. The resulting pre-training data contains 150K table recognition samples (HTML: 96K, Markdown: 27K, Latex: 27K) on 97K table images, which is denoted as MMTab-pre. More details about MMTab are given in Appendix <ref>. §.§ Dataset Analysis MMTab offers the following advantages: (1) Large volume of data. It contains 150K samples for pre-training, 232K samples for instruction-tuning, 45K samples and 4K samples for held-in and held-out evaluation, respectively. (2) Including tables of diverse structures, styles and domains. It includes 105K table images covering a broad range of structures (e.g., simple tables with flat structures as well as complex tables with merged cells and hierarchical headers), divergent styles (i.e., Web page, Excel, and Markdown tables) and multiple domains (e.g., Wikipedia and financial reports). (3) Encompassing a wide range of tabular tasks. In addition to 9 academic tasks which mainly evaluate the advanced table-based reasoning ability, MMTab also comprises 6 tasks aimed at assessing models' basic understanding of table structures. The broad coverage of tables and tasks can not only improve the generalization of the resulting model, but also provide a comprehensive testbed for MLLM research. § TABLE-LLAVA After constructing the MMTab dataset, we endeavor to fully leverage this data to promote models' multimodal table understanding ability. Inspired by the widely adopted training paradigm of previous MLLMs <cit.>, we devise an enhanced two-stage training procedure and choose LLaVA-1.5 <cit.> as the backbone to develop a versatile tabular MLLM named Table-LLaVA. The whole training process is illustrated in the Figure <ref>. §.§ Model Architecture Following LLaVA-1.5, the proposed Table-LLaVA consists of three modules: a pre-trained ViT model <cit.> as the visual encoder, a two-layer MLP as the vision-language connector and a Vicuna model <cit.> as the backbone LLM. The ViT model encodes the input image into visual features, which are then projected into the word embedding space of LLM by the MLP connector. The Vicuna takes as input the concatenation of processed visual features and embedded textual features to generate responses. §.§ Model Training Pre-training. As depicted in the top-left region of Fig. <ref>, the vision-language connector is first pre-trained with an extra table recognition task on the MMTab-pre dataset, where the model is required to output a textual table representation (e.g., an HTML string) which encompasses both the table structure and table content. This process aims at aligning the visual features of diversified table images with the ground-truth textual table representations, which endows the model with augmented table structure perceiving and OCR ability and thus lays the foundation of more advanced tabular tasks. Instruction fine-tuning. In the second stage, the pre-trained vision-language connector and the LLM are jointly fine-tuned with instruction following data of multimodal table tasks in MMTab-instruct and traditional multimodal tasks. While a plethora of multimodal instruction following datasets have been previously constructed <cit.>, none of them have adequately solved the multimodal table understanding problem. The proposed MMTab-instruct contributes to addressing this gap and we use it to endow the model with the advanced ability to perform downstream table tasks. We also include the original pre-training and fine-tuning data of LLaVA-1.5 during the training process to improve the generalization of the resulting model and we analyze their influence in the ablation study. § EXPERIMENTS §.§ Experimental Setup Baselines. We consider baselines of three genres: (1) Open-source MLLMs including BLIP <cit.>, OFA-Huge <cit.>, BLIP2 <cit.>, MiniGPT-4 <cit.>, Qwen-VL <cit.>, InternLM-XComposer <cit.>, mPLUG-Owl <cit.> and mPLUG-Owl2 <cit.>, LLaVA-1.5 <cit.>, Vary-toy <cit.> and Monkey <cit.>. (2) Open-source LLMs including Llama2 <cit.> and its counterpart TableLlama <cit.>, which uses LongLoRA <cit.> to instruction-tune LLama2 on a series of textual tabular tasks. (3) The GPT-4V with low and high image resolution. Considering the high cost of GPT-4V, we randomly select 100 or 200 samples from each benchmark to compare Table-LLaVA with GPT-4V. To enable LLMs to digest table images, we consider an ideal scenario where LLMs are provided with oracle table sequences to explore the performance upper bound, and a more practical scenario where available table sequences are recognized from images by a competitive OCR engine <cit.>. For all methods, the zero-shot setting was adopted during evaluation. Implementation details can be found in App. <ref>. Evaluation metrics. For TQA, TFV, and T2T benchmarks, we use accuracy or BLEU <cit.>. For TSD, we compute accuracy for predicted row and column numbers separately. For TCE and TCL, we compute accuracy at cell-level. For MCD, we use cell-level F1. For RCE, we compute cell-level F1 for extracted rows and columns, respectively. For table recognition (TR) task, we follow <cit.> and use the Tree-Edit-Distance-based Similarity (TEDS) score, which is based on the tree structure of HTML table sequence and can measure both the structure similarity and the cell content similarity between the prediction and the ground truth. The score is normalized between 0 and 1, where 1 means perfect matching. For TR testing samples whose target sequences are in the Markdown or Latex format, we convert the predicted sequences into the HTML format to compute their TEDS scores. §.§ Results and Analysis Public academic tabular benchmark results. Performance of open-source MLLMs. As we can see from the MLLM rows in Table <ref>, early MLLMs (e.g., MiniGPT-4, BLIP) exhibited minimal proficiency in multimodal table understanding due to the lack of tabular training data, but recent MLLMs (e.g., LLaVA-1.5 and Monkey) have yielded better capacity for comprehending table images, which can be attributed to their improvements on the OCR and text-rich scenarios. Especially, among existing MLLMs, Monkey performs the best in most question answering tasks and fact verification tasks because of the training on relevant table datasets (i.e., WTQ and TabFact). Performance of LLMs. As shown in Table <ref>, TableLlama+OCR performs better than Llama2+OCR on several tasks (e.g., HiTab, FeTaQA, TabFact) through fine-tuning on the corresponding training data, but this also damages its generalization ability on unseen tasks (e.g., InfoTabs and TABMWP). Compared to Llama2+OCR, Llama2+Oracle does not achieve notable improvements, indicating that its bottleneck is the ability to understand tables and follow related instructions, rather than the table recognition ability. On the contrary, TableLlama+Oracle consistently outperforms TableLlama+OCR in all tasks, because it has been instruction-tuned on large-scale tabular data, which leads to better table understanding and instruction-following ability. Thus, the provided oracle table sequences break the bottleneck of the OCR engine's table recognition capability, resulting in significant improvements. Comparison between Table-LLaVA and existing models. Compared to previous open-source MLLMs and LLMs+OCR, Table-LLaVA 7B and 13B both surpass them with large margins, demonstrating the effectiveness of our methods and the value of MMTab dataset. One exception is the accuracy of TableLlama+OCR on HiTab, which maybe because table images in this benchmark are relatively large, leading to information loss when resizing them into desired resolutions of Table-LLaVA (i.e., 336×336). We believe there is great potential for using more powerful MLLMs to perform diverse multimodal table understanding tasks. Table structure understanding benchmark results. Table structure understanding is a fundamental ability for fulfilling more advanced tabular tasks. As can been found in Table <ref>, both previous MLLMs and LLMs failed to generalize well on these relatively simple tabular benchmarks that are almost trivial for humans. What's more, their performance is even worse than that on more challenging academic benchmarks in Table <ref>. This shows that these powerful (M)LLMs may rely on some superficial correlations <cit.> to perform downstream tabular tasks that require complex reasoning, and they actually lack the important ability to perceive basic table structures. Held-out tabular benchmark results. Table <ref> reports model performance on 7 held-out benchmarks whose data do not appear in the model training. We can find that previous open-source models excel at different benchmarks respectively, and no model can consistently outperform others on all these tasks. By contrast, Table-LLaVA achieves best performance on most benchmarks, except for the accuracy of Vary-toy on AIT-QA, which is because AIT-QA contains large tables extracted from annual reports of airline companies and Vary-toy might have seen similar tables in its training data of company document images. Besides, Vary-toy supports higher input image resolution (1024), which is more friendly for large tables. Comparison with GPT-4V. The average performance of Table-LLaVA and GPT-4V on five types of benchmarks is shown in the upper part of Table <ref>. GPT-4V achieves remarkable results under both low (512×512) and high (768×2000) image resolution. The average performance of Table-LLaVA 7B (336×336) is better than GPT-4V with low resolution (512×512) on four types of benchmarks, while GPT-4V surpasses Table-LLaVA in the held-out scenario, indicating its strong generalization ability. As can be seen from detailed benchmark performance in Table <ref>, Table <ref> and Table <ref>, Table-LLaVA achieves better or competitive results with GPT-4V on 14 out of 24 benchmarks. Besides, GPT-4V can obtain significant improvements from high image resolution, which helps the model comprehend fine-grained table elements and structures in large tables. We also analyze the influence of input image resolution on the performance of Table-LLaVA in Appendix <ref>. Ablation study. We conduct sufficient ablation experiments to validate the effectiveness of our proposed dataset and training strategy. We divide the ablation study into three parts: (1) Ablation of pre-training. As shown in Table <ref>, both `w/o LLaVA-pre' and `w/o MMTab-pre' cause negative effects, and the latter results in a larger decline. This is because both LLaVA-pre and MMTab-pre help align visual and textual modalities, while MMTab-pre is more suitable for multimodal alignment of table understanding. (2) Ablation of instruction fine-tuning. `w/o LLaVA-instruct' causes a slight performance decrease, indicating that though LLaVA-instruct has different image domains and task settings from MMTab, it has benefits for the multimodal table understanding due to the enhancement of instruction-following ability. `w/o MMTab-instruct' leads to a significant performance drop on all types of tasks, resulting in extremely poor performance (e.g., 3.02 on held-out benchmarks). This further confirms that our constructed data can supplement the missing table understanding capability of the current MLLMs. If the table structure understanding data in MMTab-instruct is removed (i.e., `w/o TSU-instruct'), we can find that, although it does not cause obvious performance damage to traditional academic tasks like TQA and TFV, it has a huge negative impact on TSU and Held-out tasks. This indicates that the proposed TSU datasets also help with model generalization. (3) Ablation of training strategies. We compare models instruction-tuned with LLaVA-pre and MMTab-pre in sequence (`w successfully IFT') or mixed together. We find that `w successfully IFT' has slightly weaker performance, which suggests that mixed data is more conducive to model performance. The influence of MMTab on non-tabular tasks. Table <ref> lists performance of Table-LLaVA and its backbone LLaVA-1.5 on two non-tabular benchmarks: TextVQA <cit.> and LLaVA-Bench (In-the-Wild) <cit.>. Table-LLaVA beats LLaVA-1.5 in most cases under both model sizes, which demonstrates that MMTab actually has positive impact on the performance of non-tabular tasks. Combing this with ablation of non-tabular training data, we can find that there are mutual benefits between model's capacity for tabular tasks and non-tabular tasks, which shows that table understanding is one fundamental and necessary ability of MLLM and it deserves more investigations. More results and analysis such as case study are shown in Appendix <ref>. § CONCLUSION This paper proposes a novel multimodal table understanding problem, together with a large-scale open-source dataset MMTab, which covers a broad range of table structures and tabular tasks. This dataset provides a comprehensive testbed for MLLM research with held-in and held-out multimodal tabular benchmarks. On the basis of MMTab, we empower LLaVA 1.5 to be a generalist tabular MLLM Table-LLaVA. Experimental results show that Table-LLaVA significantly outperforms existing MLLMs on multiple benchmarks, and is even on par with the powerful GPT-4V. In conclusion, the contributions of this paper lie at promoting the research on multimodal table understanding from the task, dataset and model perspectives. § LIMITATIONS Though this work makes the first comprehensive exploration towards the multimodal table understanding problem, there are certain limitations that can be left to the follow-ups. First, the proposed dataset mainly focus on the single table in English. The multi-table scenario together with broader language coverage have not yet been considered. Second, MMTab is based on real-world tables from carefully selected table datasets and it contains diverse high-quality table images rendered by automatic scripts. Nevertheless, table images in the wild can be low-quality. For instance, blurred, handwritten or incomplete table images. To further bridge the gap between the academic research and the real application scenarios, more diversified table images from the wild could be collected in the future, and their corresponding instruction following data needs to be constructed. We believe this could significantly promote the applications of MLLM-based table understanding systems. In the end, though the proposed Table-LLaVA demonstrates great performance on a wide range of table-based tasks, the resolution of input images is relatively low and may limit the upper bound of its capacity. Luckily, with the emergence of MLLMs which possess higher input image resolution (e.g., Monkey <cit.>, LLaVA-Next <cit.>), we can use MMTab to develop more powerful tabular MLLM in the future research. § ETHICAL CONSIDERATIONS The proposed MMTab dataset is constructed based on the academic datasets like WTQ and TabFact, which are free and open datasets for research use with licenses like MIT License[https://opensource.org/license/mit/] or CC-BY-SA-4.0 License [https://creativecommons.org/licenses/by-sa/4.0/deed.en]. We write Python scripts to render textual table sequences (like HTML) in these datasets to obtain table images, and build multimodal instruction-following data based on original samples. The resulting dataset MMTab is also a free and open resource for the community to study the multimodal table understanding problem. Thus, the authors foresee no ethical concerns with the research in this paper. § MORE DETAILS ABOUT MMTAB §.§ Task Descriptions and More Dataset Examples Detailed description and evaluation metric of each task are given in Table <ref>, and more dataset examples are illustrated in Figure <ref>, <ref>, <ref>. When we collect tables from TabMCQ dataset, we filter extremely long tables more than 50 rows. For the hybrid-QA dataset TAT-QA, we only preserve samples whose questions can be answered with the table information. For the ToTTo dataset, its training set contains 35K tables and we randomly select 15K tables for MMTab-instruct in order to reduce the cost of transforming HTML tables into images. Except augmentation strategies mentioned in Section <ref>, we also perform additional data augmentations including: (1) “response-level augmentations”, where we synthesize chain-of-thoughts using annotated intermediate computational procedures in the original datasets and use them to augment the final answer. (2) “conversation-level augmentations”, where we randomly choose samples of the same table to compose multi-turn conversation samples. §.§ Instruction Templates The diversity of the instruction-following data has a significant impact on the performance of the resulting model. As discussed in the Section <ref>, we utilize in-context learning to ask GPT-4 to generate new instruction templates and create more diversity of input request. When we build input requests of each dataset, we randomly choose an instruction template and an output format description from the candidate pool, and then combine them with the task-specific input such as the question to produce the final input request. Figure <ref> shows the Python code for this combination process, together with all instruction templates and JSON output format descriptions for the TABMWP dataset. Previous textual instruction-following datasets for tabular tasks such as TableInstruct <cit.> usually adopt one fixed instruction template for each dataset. By contrast, we construct at least 20 instruction templates for each dataset while considering their respective characteristics. § IMPLEMENTATION DETAILS Following LLaVA-1.5 <cit.>, we use the well-trained CLIP-ViT-L-336px <cit.> as the visual encoder and input images are resized to 336×336. We develop two Table-LLaVA models with Vicuna-1.5 7B and 13B as the backbone LLM, and we denote the resulting models as Table-LLaVA 7B and Table-LLaVA 13B, respectively. We follow the original hyper-parameter setting of LLaVA-1.5 except that We increased the max sequence length from 2048 to 2560 to accommodate longer text sequences. The training hyperparameters for both the pre-training and the visual instruction tuning are listed in Table <ref>. In this paper, all experiments including baseline experiments were conducted on a single machine with 8 80GB A800. Without using flash-attention <cit.>, the pre-training process and the instruction-tuning takes about 32 hours and 26 hours for one epoch, respectively. Unless otherwise specified, we evaluate performance of baseline models on our benchmarks with the official implementations. As mentioned in the Section <ref>, we add extra instructions to the input request which require models to output the final answer in the JSON format, and we write Python scripts with regular expressions to extract the final answer for a fair comparion. Some baselines like Monkey cannot follow instructions to output the answer in the desired JSON format, which may only output a short answer due to the overfitting of their training data. Thus, we relaxed requirements and specifically designed answer extraction scripts to calculate their accuracy. For ToTTo benchmark, since the ground-truth of testing samples have not been open-sourced, we submit the output results of different models to the official website to get evaluation results. § MORE EXPERIMENTAL RESULTS AND ANALYSIS §.§ Influence of Input Image Resolution To shed more light on the influence of image resolution on multimodal table understanding, we divide test samples into 5 groups by their image resolution and evaluate model performance on different groups. The results, illustrated in Figure <ref>, demonstrate that image resolution has a great impact on model performance. The model performance gradually degenerates with the increasing image resolution, which reveals that it is necessary to enlarge the input image solution of MLLMs in order to process extremely large table images. §.§ Influence of MMTab on Non-tabular Tasks We compare Table-LLaVA with its backbone LLaVA-1.5 on two non-tabular benchmarks: TextVQA <cit.>, a VQA benchmark requiring the understanding of image texts, and LLaVA-Bench (In-the-Wild) <cit.>, a recent general benchmark for MLLMs including 3 task categories (conversation, detail description and complex reasoning). The results are listed in the Table <ref>. Table-LLaVA beats LLaVA-1.5 in most cases under both model sizes, which demonstrates that tabular training data has positive impact on the performance on non-tabular tasks. §.§ Influence of OCR Success Rate on LLM Performance We compute the cell-level OCR success rates on 4 benchmarks and show the performance of textual LLMs in Table <ref>. As shown in the table, OCR success rates vary a lot among 4 benchmarks, ranging from 11.05% to 75.35%. Intuitively, table images with large sizes (i.e. large Ave. Cell Numer) pose greater challenge to OCR engines and thus often lead to low OCR success rates. With OCR success rate decreasing, the performance gap of TableLlama between '+Oracle' and '+OCR' settings significantly increases, which reveals the importance of correct table recognition results. Moreover, compared with TableLlama, the performance gap of Llama2 between two settings is much more lower and less significant, which shows its bottleneck is the ability to understand and follow table-related instructions, rather than OCR results. By manually inspecting the OCR results, we find that typical error types include (1) character-level mistakes, e.g., missing the first or last letter, (2) cell-level mistakes, e.g., missing whole cells, mistakenly splitting text in one cell into two cells, very wrong cell text especially for cells with long and intensive text, (3) row or column level mistakes, e.g., missing rows or inserting non-existing rows. (4) structure-level mistakes, e.g., falsely recognizing a merged cell as a non-merged cell or vice versa. §.§ Case Study We conduct a side-by-side qualitative analysis to compare Table-LLaVA with other (M)LLMs on different benchmarks, as illustrated in Figure <ref>-<ref>. The results demonstrate that Table-LLaVA can handle a series of table tasks and possesses better multimodal table understanding ability than existing open-source MLLMs. For instance, as can be seen in Figure <ref>, Table-LLaVA provides both the intermediate reasoning steps and the correct final answer for the math word problem based on table image, whereas other MLLMs including GPT-4V fail to give the correct answer. The proposed MMTab dataset can be directly utilized in the training process of future MLLMs to boost their multimodal table understanding ability.
http://arxiv.org/abs/2406.08993v1
20240613105333
Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification
[ "Yuankai Luo", "Lei Shi", "Xiao-Ming Wu" ]
cs.LG
[ "cs.LG" ]
Variational quantum Hamiltonian engineering Keisuke Fujii =========================================== § ABSTRACT Graph Transformers (GTs) have recently emerged as popular alternatives to traditional message-passing Graph Neural Networks (GNNs), due to their theoretically superior expressiveness and impressive performance reported on standard node classification benchmarks, often significantly outperforming GNNs. In this paper, we conduct a thorough empirical analysis to reevaluate the performance of three classic GNN models (GCN, GAT, and GraphSAGE) against GTs. Our findings suggest that the previously reported superiority of GTs may have been overstated due to suboptimal hyperparameter configurations in GNNs. Remarkably, with slight hyperparameter tuning, these classic GNN models achieve state-of-the-art performance, matching or even exceeding that of recent GTs across 17 out of the 18 diverse datasets examined. Additionally, we conduct detailed ablation studies to investigate the influence of various GNN configurations—such as normalization, dropout, residual connections, network depth, and jumping knowledge mode—on node classification performance. Our study aims to promote a higher standard of empirical rigor in the field of graph machine learning, encouraging more accurate comparisons and evaluations of model capabilities. Our implementation is available at <https://github.com/LUOyk1999/tunedGNN>. § INTRODUCTION Node classification is a fundamental task in graph machine learning, with high-impact applications across many fields such as social network analysis, bioinformatics, and recommendation systems. Graph Neural Networks (GNNs)  <cit.> have emerged as a powerful class of models for tackling the node classification task. GNNs operate by iteratively aggregating information from a node's neighbors, a process known as message passing <cit.>, leveraging both the graph structure and node features to learn useful node representations for classification. While GNNs have achieved notable success, studies have identified several limitations, including over-smoothing <cit.>, over-squashing <cit.>, lack of sensitivity to heterophily <cit.>, and challenges in capturing long-range dependencies <cit.>. Recently, Graph Transformers (GTs) <cit.> have gained prominence as popular alternatives to GNNs. Unlike GNNs, which primarily aggregate local neighborhood information, the Transformer architecture <cit.> can capture interactions between any pair of nodes via a self-attention layer. GTs have achieved significant success in graph-level tasks, e.g., graph classification involving small-scale graphs like molecular graphs <cit.>. This success has inspired efforts <cit.> to utilize GTs to tackle node classification tasks, especially on large-scale graphs, addressing the aforementioned limitations of GNNs. While recent advancements in state-of-the-art GTs <cit.> have shown promising results, it's observed that many of these models, whether explicitly or implicitly, still rely on GNNs for learning local node representations, integrating them alongside the global attention mechanisms for a more comprehensive representation. This prompts us to reconsider: Could the potential of message-passing GNNs for node classification have been previously underestimated? While prior research has addressed this issue to some extent <cit.>, these studies have limitations in terms of scope and comprehensiveness, including a restricted number and diversity of datasets, as well as an incomplete examination of hyperparameters. In this study, we comprehensively reassess the performance of GNNs for node classification, utilizing three classic GNN models—GCN <cit.>, GAT <cit.>, and GraphSAGE <cit.>—across 18 real-world benchmark datasets that include homophilous, heterophilous, and large-scale graphs. We examine the influence of key hyperparameters on GNN training, including normalization <cit.>, dropout <cit.>, residual connections <cit.>, network depth, and the utilization of the jumping knowledge mode <cit.>. We summarize the key findings in our empirical study as follows: * With proper hyperparameter tuning, classic GNNs can achieve highly competitive performance in node classification across homophilous and heterophilous graphs with up to millions of nodes. Notably, classic GNNs outperform state-of-the-art GTs, achieving the top rank on 17 out of 18 datasets. This indicates that the previously claimed superiority of GTs over GNNs may have been overstated, possibly due to suboptimal hyperparameter configurations in GNN evaluations. * Our ablation studies have yielded valuable insights into GNN hyperparameters for node classification. We demonstrate that (1) normalization is essential for large-scale graphs; (2) dropout consistently proves beneficial; (3) residual connections can significantly enhance performance, especially on heterophilous graphs; (4) GNNs on heterophilous graphs tend to perform better with deeper layers; and (5) jumping knowledge connections have minimal impact on large-scale graphs. § CLASSIC GNNS FOR NODE CLASSIFICATION Define a graph as 𝒢 = (𝒱, ℰ, X, Y), where 𝒱 denotes the set of nodes, ℰ⊆𝒱×𝒱 represents the set of edges, X∈ℝ^|𝒱| × d is the node feature matrix, with |𝒱| representing the number of nodes and d the dimension of the node features, and Y∈ℝ^|𝒱| × C is the one-hot encoded label matrix, with C being the number of classes. Let A∈ℝ^|𝒱| × |𝒱| denote the adjacency matrix of 𝒢. Message Passing Graph Neural Networks (GNNs) <cit.> compute node representations h_v^l at each layer l as: h_v^l=UPDATE^l( h_v^l -1,AGG^l( {h_u^l-1| u∈𝒩( v ) }) ), where 𝒩(v) represents the neighboring nodes adjacent to v, AGG^l serves as the message aggregation function, and UPDATE^l is the update function. Initially, each node v begins with a feature vector h_v^0 = x_v ∈ℝ^d. The function AGG^l aggregates information from the neighbors of v to update its representation. The output of the last layer L, i.e., GNN(v, A, X) = h_v^L, is the representation of v produced by the GNN. In this work, we focus on three classic GNNs: GCN <cit.>, GraphSAGE <cit.>, and GAT <cit.>, which differ in their approach to learning the node representation h_v^l. Graph Convolutional Networks (GCN) <cit.>, the standard GCN model, is formualated as: h_v^l = σ(∑_u ∈𝒩(v) ∪{v}1/√(d̂_u d̂_v)h_u^l-1W^l), where d̂_v = 1 + ∑_u ∈𝒩(v) 1, ∑_u ∈𝒩(v) 1 denotes the degree of node v, W^l is the trainable weight matrix in layer l, and σ is the activation function, e.g., ReLU(·) = max(0, ·). GraphSAGE <cit.> learns node representations through a different approach: h_v^l = σ(h_v^l-1W_1^l + (mean_u ∈𝒩(v)h_u^l-1)W_2^l), where W_1^l and W_2^l are trainable weight matrices, and mean_u ∈𝒩(v)h_u^l-1 computes the average embedding of the neighboring nodes of v. Graph Attention Networks (GAT) <cit.> employ masked self-attention to assign weights to different neighboring nodes. For an edge (v, u) ∈ℰ, the propagation rule of GAT is defined as: α_vu^l = exp( LeakyReLU( 𝐚^⊤_l [ W^l h_v^l-1 W^l h_u^l-1] ) )/∑_r ∈𝒩(v)exp( LeakyReLU( 𝐚^⊤_l [ W^l h_v^l-1 W^l h_r^l-1] ) ), h_v^l = σ( ∑_u ∈𝒩(v)α_vu^lh_u^l-1W^l), where 𝐚_l is a trainable weight vector, W^l is a trainable weight matrix, and represents the concatenation operation. Node Classification aims to predict the labels of the unlabeled nodes. Typically, for any node v, the node representation generated by the last GNN layer is passed through a prediction head g(·), to obtain the predicted label ŷ_v = g(GNN(v, A, X)). The training objective is to minimize the total loss L(θ) = ∑_v ∈𝒱_trainℓ(ŷ_v, y_v) w.r.t. all nodes in the training set 𝒱_train, where y_v indicates the ground-truth label of v and θ indicates the trainable GNN parameters. Homophilous and Heterophilous Graphs. Node classification can be performed on both homophilous and heterophilous graphs. Homophilous graphs are characterized by edges that tend to connect nodes of the same class, while in heterophilous graphs, connected nodes may belong to different classes <cit.>. GNN models implicitly assume homophily in graphs <cit.>, and it is commonly believed that due to this homophily assumption, GNNs cannot generalize well to heterophilous graphs <cit.>. However, recent works <cit.> have empirically shown that standard GCNs also work well on heterophilous graphs. In this study, we provide a comprehensive evaluation of classic GNNs for node classification on both homophilous and heterophilous graphs. § KEY HYPERPARAMETERS FOR TRAINING GNNS In this section, we present an overview of the key hyperparameters for training GNNs, including normalization, dropout, residual connections, network depth, and “jumping knowledge” mode. The first four hyperparameters are widely utilized across different types of neural networks to improve model performance, while the last one is specific to GNNs. Normalization. Specifically, Layer Normalization (LN) <cit.> or Batch Normalization (BN) <cit.> can be used in every layer before the activation function σ(·). Taking GCN as an example: h_v^l = σ(Norm(∑_u ∈𝒩(v) ∪{v}1/√(d̂_u d̂_v)h_u^l-1W^l)). The normalization techniques are essential for stabilizing the training process by reducing the covariate shift, which occurs when the distribution of each layer’s node embeddings changes during training. Normalizing the node embeddings helps to maintain a more consistent distribution, allowing the use of higher learning rates and leading to faster convergence <cit.>. Dropout <cit.>, a technique widely used in convolutional neural networks (CNNs) to address overfitting by reducing co-adaptation among hidden neurons <cit.>, has also been found to be effective in addressing similar issues in GNNs <cit.>, where the co-adaptation effects propagate and accumulate through message passing among different nodes. Typically, dropout is applied to the feature embeddings after the activation function: h_v^l = Dropout(σ(Norm(∑_u ∈𝒩(v) ∪{v}1/√(d̂_u d̂_v)h_u^l-1W^l))). Residual Connections <cit.> significantly enhance CNN performance by connecting layer inputs directly to outputs, thereby alleviating the vanishing gradient issue. They were first adopted by the seminal GCN paper <cit.> and subsequently incorporated into DeepGCNs <cit.> to boost performance. Formally, linear residual connections can be integrated into GNNs as follows: h_v^l = Dropout(σ(Norm(h_u^l-1W_r^l + ∑_u ∈𝒩(v) ∪{v}1/√(d̂_u d̂_v)h_u^l-1W^l))), where W_r^l is a trainable weight matrix. This configuration mitigates gradient instabilities and enhances GNN expressiveness <cit.>, addressing the over-smoothing <cit.> and oversquashing <cit.> issues since the linear component (h_u^l-1W_r^l) helps to preserve distinguishable node representations <cit.>. Network Depth. Deeper network architectures, such as deep CNNs <cit.>, are capable of extracting more complex, high-level features from data, potentially leading to better performance on various prediction tasks. However, GNNs face unique challenges with depth, such as over-smoothing <cit.>, where node representations become indistinguishable with increased network depth. Consequently, in practice, most GNNs adopt a shallow architecture, typically consisting of 2 to 5 layers. While previous research, such as DeepGCN <cit.> and DeeperGCN <cit.>, advocates the use of deep GNNs with up to 56 and 112 layers, our findings indicate that comparable performance can be achieved with significantly shallower GNN architectures, typically ranging from 2 to 10 layers. Jumping Knowledge (JK) Mode <cit.> aggregates representations from different GNN layers, effectively capturing information from varying neighborhood ranges within the graph. For any node v, the summation version of JK mode produces the representation of v by: GNN_JK(v, A, X) = h_v^1+ h_v^2+ … + h_v^L, where L is the number of GNN layers. § EXPERIMENTAL SETUP FOR NODE CLASSIFICATION Datasets. Table <ref> presents a summary of the statistics and characteristics of the datasets. * Homophilous Graphs. Cora, CiteSeer, and PubMed are three widely used citation networks <cit.>. We follow the semi-supervised setting of <cit.> for data splits and metrics. Additionally, Computer and Photo <cit.> are co-purchase networks where nodes represent goods and edges indicate that the connected goods are frequently bought together. CS and Physics <cit.> are co-authorship networks where nodes denote authors and edges represent that the authors have co-authored at least one paper. We adhere to the widely accepted practice of training/validation/test splits of 60%/20%/20% and metric of accuracy <cit.>. Furthermore, we utilize the WikiCS dataset and use the official splits and metrics provided in <cit.>. * Heterophilous Graphs. Squirrel and Chameleon <cit.> are two well-known page-page networks that focus on specific topics in Wikipedia. According to the heterophilous graphs benchmarking paper <cit.>, the original split of these datasets introduces overlapping nodes between training and testing, leading to the proposal of a new data split that filters out the overlapping nodes. We use its provided split and its metrics for evaluation. Additionally, we utilize four other heterophilous datasets proposed by the same source <cit.>: Roman-Empire, where nodes correspond to words in the Roman Empire Wikipedia article and edges connect sequential or syntactically linked words; Amazon-Ratings, where nodes represent products and edges connect frequently co-purchased items; Minesweeper, a synthetic dataset where nodes are cells in a 100×100 grid and edges connect neighboring cells; and Questions, where nodes represent users from the Yandex Q question-answering website and edges connect users who interacted through answers. All splits and evaluation metrics are consistent with those proposed in the source. * Large-scale Graphs. We consider a collection of large graphs released recently by the Open Graph Benchmark (OGB) <cit.>: ogbn-arxiv, ogbn-proteins, and ogbn-products, with node numbers ranging from 0.16M to 2.4M. We maintain all the OGB standard evaluation settings. Additionally, we analyze performance on the social network pokec <cit.>, which has 1.6M nodes, following the evaluation settings of <cit.>. Baselines. Our main focus lies on classic GNNs: GCN <cit.>, GraphSAGE <cit.>, GAT <cit.>, the state-of-the-art scalable GTs: SGFormer <cit.>, Polynormer <cit.>, GOAT <cit.>, NodeFormer <cit.>, NAGphormer <cit.>, and powerful GTs: GraphGPS <cit.> and Exphormer <cit.>. Furthermore, various other GTs like <cit.> exist in related surveys <cit.>, empirically shown to be inferior to the GTs we compared against for node classification tasks. For heterophilous graphs, We also consider five models designed for node classification under heterophily following <cit.>: H2GCN <cit.>, CPGNN <cit.>, GPRGNN <cit.>, FSGNN <cit.>, GloGNN <cit.>. Note that we adopt the empirically optimal Polynormer variant (Polynormer-r), which demonstrates superior performance over advanced GNNs such as LINKX <cit.> and OrderedGNN <cit.>. We report the performance results of baselines primarily from <cit.>, with the remaining obtained from their respective original papers or official leaderboards whenever possible, as those results are obtained by well-tuned models. Hyperparameter Configurations. We conduct minimal hyperparameter tuning on classic GNNs, consistent with the parameter search range of Polynormer <cit.>. We utilize the Adam optimizer <cit.> with a learning rate from {0.001, 0.005, 0.01} and an epoch limit of 2500. And we tune the hidden dimension from {64, 256, 512}. As discussed in Section <ref>, we focus on whether to use normalization (BN or LN), residual connections, jumping knowledge, and dropout rates from {0.2,0.3,0.5,0.7}, the number of layers from {2,3,4,5,6,7,8,9,10}. Considering the large number of hyperparameters and datasets, we do not perform an exhaustive search. We report mean scores and standard deviations after 5 runs. GNN^* denotes our implementation of the GNN model. Detailed experimental setup and hyperparameters are provided in Appendix <ref>. § EMPIRICAL FINDINGS §.§ Performance of Classic GNNs in Node Classification In this subsection, we provide a detailed analysis of the performance of the three classic GNNs compared to state-of-the-art GTs in node classification tasks. Our experimental results across homophilous (Table <ref>), heterophilous (Table <ref>), and large-scale graphs (Table <ref>) reveal that classic GNNs often outperform or match the performance of advanced GTs across 18 datasets. Notably, among the 18 datasets evaluated, classic GNNs achieve the top rank on 17 of them, showcasing their robust competitiveness. We highlight our main observations below. [colback=gray!10, colframe=black, boxrule=1.5pt, arc=2pt, left=5pt, right=5pt] Observations on Homophilous Graphs (Table <ref>). Classic GNNs, with only slight adjustments to hyperparameters, are highly competitive in node classification tasks on homophilous graphs, often outperforming state-of-the-art graph transformers in many cases. While previously reported results show that most advanced GTs outperform classic GNN on homophilous graphs <cit.>, our implementation of classic GNNs can place within the top two for five datasets, with GCN^* always ranking in the top three. Specifically, on CS and WikiCS, classic GNNs experience about a 3% accuracy increase, achieving top-three performances. On WikiCS, the accuracy of GAT^* increases by 3.84%, moving it from seventh to first place, surpassing the leading GT, Polynormer. Similarly, on Computer and Photo, GAT^* outperforms Polynormer and SGFormer, establishing itself as the top model. On Cora, CiteSeer, PubMed, and Physics, tuning yields significant performance improvements for GCN^*, with accuracy increases ranging from 1.25% to 3.48%, positioning GCN^* as the highest-performing model despite its initial lower accuracy compared to advanced GTs. [colback=gray!10, colframe=black, boxrule=1.5pt, arc=2pt, left=5pt, right=5pt] Observations on Heterophilous Graphs (Table <ref>). Our implementation has significantly enhanced the previously reported best results of classic GNNs on heterophilous graphs, surpassing specialized GNN models tailored for such graphs and even outperforming the leading graph transformer architectures. This advancement not only supports but also strengthens the findings in <cit.> that conventional GNNs are strong contenders for heterophilous graphs, challenging the prevailing assumption that they are primarily suited for homophilous graph structures. The three classic GNNs consistently secure top positions on five out of six heterophilous graphs. Specifically, on well-known page-page networks like Squirrel and Chameleon, our implementation enhances the accuracy of GCN^* by 4.80% and 5.83% respectively, elevating it to the first place among all models. By comparison, on larger heterophilous graphs such as Amazon-Ratings and Questions, GAT^* exhibits the highest performance, highlighting the superiority of its message-passing attention mechanism over GTs' global attention. On Roman-Empire, a 17.68% increase is observed in the performance of GCN^*. Interestingly, we find that improvements primarily stem from residual connections, which are further analyzed in our ablation study (see Section <ref>). [colback=gray!10, colframe=black, boxrule=1.5pt, arc=2pt, left=5pt, right=5pt] Observations on Large-scale Graphs (Table <ref>). Our implementation has significantly enhanced the previously reported results of classic GNNs, with some cases showing double-digit increases in accuracy. It has achieved the best results across these large graph datasets, either homophilous or heterophilous, and has outperformed state-of-the-art graph transformers. This indicates that message passing remains highly effective for learning node representations on large-scale graphs. Our implementation of classic GNNs demonstrate consistently superior performance, achieving top rankings across all four large-scale datasets included in our study. Notably, GCN^* emerges as the leading model on ogbn-arxiv and pokec, surpassing all evaluated advanced GTs. Furthermore, on pokec, all three classic GNNs achieve over 10% performance increases by our implementation. For ogbn-proteins, an absolute improvement of 12.99% is observed in the performance of GAT^*, significantly surpassing SGFormer by 5.48%. Similarly, on ogbn-products, GraphSAGE^* demonstrates a performance increase, securing the best performance among all evaluated models. In summary, a basic GNN can achieve the best known results on large-scale graphs, suggesting that current GTs have not yet addressed GNN issues such as over-smoothing and long-range dependencies. §.§ Influence of Hyperparameters on the Performance of GNNs To examine the unique contributions of different hyperparameters in explaining the enhanced performance of classic GNNs, we conduct a series of ablation analysis by selectively removing elements such as normalization, dropout, residual connections, network depth, and jumping knowledge from GCN^*, GraphSAGE^*, and GAT^*. The effect of these ablations is assessed across homophilous (see Table <ref>), heterphilous (see Table <ref>), and large-scale graphs (see Table <ref>). Our findings, which we detail below, indicate that the ablation of single components affects model accuracy in distinct ways. Observation 1: Normalization (either BN or LN) is important for node classification on large-scale graphs but less significant on smaller-scale graphs. Note that we do not conduct an exhaustive search regarding BN and LN, with specific choices detailed in Appendix <ref>. We observe that the ablation of normalization does not lead to substantial deviations on small graphs. However, normalization becomes consistently crucial on large-scale graphs, where its ablation results in accuracy reductions of 3.81% and 4.69% for GraphSAGE^* and GAT^* respectively on ogbn-proteins. We believe this is because large graphs display a wider variety of node features, resulting in different data distributions across the graph. Normalization aids in standardizing these features during training, ensuring a more stable distribution. Observation 2: Dropout is consistently found to be essential for node classification. Our analysis highlights the crucial role of dropout in maintaining the performance of classic GNNs on both homophilous and heterophilous graphs, with its ablation contributing to notable accuracy declines—for instance, a 3.71% decrease for GraphSAGE^* on PubMed and a 5.69% decrease on Roman-Empire. This trend persists in large-scale datasets, where the ablation of dropout leads to a 2.44% and 2.53% performance decline for GCN^* and GAT^* respectively on ogbn-proteins. Observation 3: Residual connections can significantly boost performance on specific datasets, exhibiting a more pronounced effect on heterophilous graphs than on homophilous graphs. While the ablation of residual connections on homophilous graphs does not consistently lead to a significant performance decrease, with observed differences around 2% on Cora, Photo, and CS, the impact is more substantial on large-scale graphs such as ogbn-proteins and pokec. The effect is even more dramatic on heterophilous graphs, with the classic GNNs exhibiting the most significant accuracy reduction on Roman-Empire, for instance, a 15.72% for GCN^* and 12.49% for GAT^*. Similarly, on Minesweeper, significant performance drops were observed, emphasizing the critical importance of residual connections, particularly on heterophilous graphs. The complex structures of these graphs often necessitate deeper layers to effectively capture the diverse relationships between nodes. In such contexts, residual connections are essential for model training. Observation 4: Deeper networks generally lead to greater performance gains on heterophilous graphs compared to homophilous graphs. As demonstrated in Figure <ref>, the performance trends for GCN^* and GraphSAGE^* are consistent across different graph types. On homophilous graphs and ogbn-arxiv, both models achieve optimal performance with a range of 4 to 8 layers. In contrast, on heterophilous graphs, their performance improves with an increasing number of layers, indicating that deeper networks are more beneficial for these graphs. We discuss scenarios with more than 10 layers in Appendix <ref>. Observation 5: Jumping knowledge mode has minimal influence on large-scale graphs. The ablation of this component typically results in marginal accuracy decreases of less than 0.5% for classic GNNs across large-scale graphs, suggesting the lesser impact of jumping knowledge than other components. By comparison, on smaller-scale homophilous graphs, such as CiteSeer and CS, the ablation of jumping knowledge can lead to decreases of around 1.5 to 2% in the model performance. On some heterphilous graphs, the effect is also significant, with the accuracy of GAT^* decreasing by 2.32% on Chameleon, GraphSAGE^* by 3.15% and GAT^* by 3.84% on Minesweeper. § CONCLUSION Our study provides a thorough reevaluation of the efficacy of foundational GNN models in node classification tasks. Through extensive empirical analysis, we demonstrate that these classic GNN models can reach or surpass the performance of GTs on various graph datasets, challenging the perceived superiority of GTs in node classification tasks. Furthermore, our comprehensive ablation studies provide insights into how various GNN configurations impact performance. We hope our findings promote more rigorous empirical evaluations in graph machine learning research. § DATASETS AND EXPERIMENTAL DETAILS §.§ Computing Environment Our implementation is based on PyG <cit.> and DGL <cit.>. The experiments are conducted on a single workstation with 8 RTX 3090 GPUs. §.§ Hyperparameters and Reproducibility For the hyperparameter selections of classic GNNs, in addition to what we have covered, we list other settings in Tables <ref>, <ref>, <ref>. Notably, for heterophilous graphs, we expand the search range for the number of layers to include three additional settings: {11, 12, 15, 20} (See Sec. <ref> for further analysis). This adjustment is based on our empirical evidence suggesting that deep networks tend to yield performance improvements on heterophilous graphs. The ReLU function serves as the non-linear activation. Further details regarding hyperparameters can be found in our code <https://github.com/LUOyk1999/tunedGNN>. Due to the large size of the graphs in ogbn-proteins, ogbn-products, and pokec, which prevents full-batch training on GPU memory, we adopt different training strategies. For ogbn-proteins, we utilize the optimized neighbor sampling method <cit.>. For pokec and ogbn-products, we apply the random partitioning method previously used by GTs <cit.> to enable mini-batch training. For other datasets, we employ full-batch training. In all experiments, we use the validation set to select the best hyperparameters. GNN^* denotes our implementation of the GNN model. Our code is available under the MIT License. § ADDITIONAL BENCHMARK RESULTS §.§ GAT^* with Edge Features on ogbn-proteins While DeepGCN <cit.> introduced training models up to 56 layers deep and DeeperGCN <cit.> further extended this to 112 layers, our experiments suggest that such depth is not necessary. Specifically, while the DeeperGCN achieved an accuracy of 85.50% on ogbn-proteins, it utilized edge features as input, a configuration not commonly employed in the standard baselines of the OGB dataset <cit.>. As our experiments do not incorporate edge features on ogbn-proteins, we exclude DeeperGCN from our main text to maintain a fair comparison. Now we incorporate edge features into the GAT^*, same as the approach in <cit.>, with the results shown in Table <ref>. A 6-layer GAT achieve an accuracy of 87.47%, significantly surpassing the 85.50% by DeeperGCN. This demonstrates that GNNs do not need to be as deep as proposed by DeeperGCN; a range of 2 to 10 layers is typically sufficient. §.§ Deeper Networks on Heterophilous Graphs On heterophilous graphs, the performance of classic GNNs improves with an increasing number of layers limited to 10, as evidenced by Figure <ref> in the main text. We explore scenarios with more than 10 layers in this subsection. Specifically, we consider GCN^* and GAT^* with layer configurations of 11, 12, 15, and 20 for the Amazon-Ratings, Roman-Empire, and Minesweeper datasets. The results are shown in Table <ref>. The variation in the optimal number of layers (L) could stem from the distinct structures inherent in different graphs. Heterophilous graphs may have more complex structures, thus necessitating a higher L. However, the slight improvements observed with larger L values suggest that very deep networks may not yield significantly better results. Overall, the best results for classic GNNs are achieved when L is limited to 15. § LIMITATIONS & BROADER IMPACTS Broader Impacts. This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. Limitations. In this study, we focus solely on the node classification task, without delving into graph classification and link prediction tasks. It would be beneficial to extend our benchmarking efforts to include classic GNNs in graph-level and edge-level tasks. Additionally, we did not conduct an exhaustive hyperparameter search due to the large set of hyperparameters and datasets. Therefore, future research could potentially uncover more favourable results by undertaking a more comprehensive exploration of the hyperparameter space.
http://arxiv.org/abs/2406.08316v2
20240612151640
Is Programming by Example solved by LLMs?
[ "Wen-Ding Li", "Kevin Ellis" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG", "cs.PL", "cs.SE" ]
LaMOT: Language-Guided Multi-Object Tracking Yunhao Li^1,2, Xiaoqiong Liu^3, Luke Liu^4, Heng Fan^3,†, Libo Zhang^2,†,* ^1Institute of Software Chinese Academy of Science ^2University of Chinese Academy of Science ^3University of North Texas ^4Intern at University of North Texas †Equal Advising *Corresponding Author June 17, 2024 ================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Programming-by-Examples (PBE) aims to generate an algorithm from input-output examples. Such systems are practically and theoretically important: from an end-user perspective, they are deployed to millions of people, and from an AI perspective, PBE corresponds to a very general form of few-shot inductive inference. Given the success of Large Language Models (LLMs) in code-generation tasks, we investigate here the extent to which LLMs can be said to have `solved' PBE. We experiment on classic domains such as lists and strings, and an uncommon graphics programming domain not well represented in typical pretraining data. We find that pretrained models are not effective at PBE, but that they can be fine-tuned for much higher performance, provided the test problems are in-distribution. We analyze empirically what causes these models to succeed and fail, and take steps toward understanding how to achieve better out-of-distribution generalization. Collectively these results suggest that LLMs make strong progress toward solving the typical suite of PBE tasks, potentially increasing the flexibility and applicability of PBE systems, while also identifying ways in which LLMs still fall short. § INTRODUCTION Programming-by-Example (PBE) systems solve a challenging task: Given input-output examples of a hidden algorithm, they seek to construct the source code of the underlying function <cit.>. PBE is deployed to millions of users <cit.>, lies near the heart of core AI challenges <cit.>, and is a qualitatively different problem from the bulk of recent work on LLM code generation, because rather than generate source code from natural language <cit.>, PBE is instead fundamentally about few-shot inductive inference: Given a handful of examples, inferring the program that will generalize to new inputs, or which captures the `true' latent regularity, without relying on natural-language guidance. We investigate here the extent to which large language models pretrained on source code can solve PBE. If they can, this unlocks the ability to do PBE in general-purpose Turing complete languages like Python, unlike the restricted domain-specific languages which have so far dominated PBE <cit.>, thereby increasing the scope and power of this paradigm. If LLMs cannot perform PBE, then this highlights a deficit of inductive reasoning and problem solving, and suggests LLMs lean too heavily on natural language cues to generate code. We find that pretrained and instruction-tuned models serve as poor PBE systems, a finding also supported by recent work <cit.>. But our investigation further finds that LLMs can be fine-tuned for significantly higher performance, provided they are not asked to generalize far beyond the fine-tuning data. To address this failure of generalization we give an algorithm for taking a small unlabeled dataset of problems and adapting the LLM to it, which we find narrows this domain gap. The resulting recipe allows PBE over Turing-complete languages across three qualitatively different domains (Fig. <ref>): algorithms on vectors of numbers, string manipulation macros, and graphics programs in LOGO/Turtle. In every case, our final model is at least as effective as custom symbolic search algorithms operating over domain-specific languages, and surpasses powerful closed-source models such as GPT4 <cit.>. We also find that the resulting system can cover a broader scope of problems than classic symbolic methods, owing to the use of a Turing-complete language, which, at least theoretically, allows learning any computable function. § BACKGROUND Programming by Example considers synthesizing a program given a vector of inputs and corresponding outputs . Typically the program is expected to exactly fit the provided examples, (_i)=_i, ∀ i, where i indexes examples. The program is drawn from a (potentially infinite) language . Typically is a domain-specific language designed for a specific PBE system, not a general-purpose programming language. For example, the PBE system FlashFill synthesizes string manipulation macros designed to automate common spreadsheet edits <cit.>. FlashFill's domain-specific language includes commonly occurring regular expressions, together with string slicing and concatenation, and restricted forms of loops. The language is also designed to allow polynomial-time construction of programs consistent with input-output examples. FlashFill's goal, like most PBE systems, is to generalize to hold-out test cases: inputs with (hidden) target outputs . Pick a program from {∈ : (_i)=(_i), ∀ i }.ttt Succeed if (_j)=(_j), ∀ j In its simplest forms, PBE can be accomplished by guess-and-check enumeration until a program is found that is consistent with the examples. Although there exist more sophisticated search algorithms, including those accelerated by neural guidance <cit.>, a key enabler of practical PBE systems is the design of a carefully restricted domain-specific language . The domain-specific language effectively hardcodes symbolic knowledge, focusing the system on what programs the human engineer thinks are most promising, but at the expense of the wider set of computable functions expressible in general-purpose languages. The PBE setup covers other cases as well, such as sequence extrapolation (the inputs are indices into the sequence), as well as data compression (the input is null, and the data is compressed by synthesizing a program that reproduces the output data). Therefore, a truly general solution to PBE—one which could express its solutions in general purpose programming languages, and cover most practically relevant problems—would be broadly applicable to many inductive inference problems, a point that has been long appreciated <cit.>. LLMs for solving programming problems have been recently very successful <cit.>. These systems typically input a prompt describing a problem in natural language, then sample candidate programs, and optionally filter those samples by checking them against input-output test cases, with the goal of passing holdout tests: Draw _k∼ p_LM(· |). Pick a ∈{_k : _k(_i)=_k(_i), ∀ i }. Success: (_j)=(_j), ∀ j Unlike PBE, the primary driver of program generation is a natural language prompt, although input-outputs may also be in the prompt <cit.>. Recent work using LLMs to synthesize programs solely from examples has either obtained negative results <cit.>, or focused on simple and/or nonstandard problems <cit.>, leaving the extent to which PBE is `solved' by LLMs an open question. § METHODS Basic prompting is the most straightforward way of performing PBE with a pre-trained model: Given input-output examples (,) a prompt is constructed and K programs are generated. Programs are filtered by the I/O examples, and a random satisfying program is returned: Sample _k∼ p_LM(· |(X,Y)), for k from 1..K Pick a from {_k : _k(_i)=_k(_i), ∀ i } Fine-tuning improves the above approach in a conceptually straightforward way. Given a dataset comprising tuples of programs and I/O examples, { (, X, Y) }, we fine-tune the LM to predict a program from its input-outputs. But this dataset is hard to come by: Although there are web-scale corpora of naturally occurring source code, there is no analogous dataset of runnable code snippets paired with representative input-output examples, and this data deficit is especially true for new or unusual applications of PBE, such as the graphics programs we consider. To assemble a large dataset of (, X, Y) triples we start with a small manually-constructed seed dataset, 𝒟_seed, and then randomly generate new programs and inputs X by prompting an LLM with members of 𝒟_seed. The output Y comes from running on X. The seed dataset effectively defines a prior over (,X), notated 𝒢 in Fig. <ref>. We sample from 𝒢 to collect many program-input pairs, but use program execution to predict Y, not an LLM. The resulting dataset, which we call 𝒟_tune, is used to train an LLM to generate programs when prompted with input-outputs. As this fine-tuned LLM effectively learns to do probabilistic inference in the graphical model shown in Fig. <ref> (right), we write this fine-tuned LLM as q_θ(|,). This inference network is trained to maximize max_θlog q_θ(|,), where (, )∼𝒢(𝒟_seed) and =() This method is closely related to self-instruct <cit.> and wake-sleep <cit.>. Like self-instruct, we use prompting to bootstrap a large dataset from a small manually-constructed one. Our method differs by using the LLM to generate a hidden latent variable (the program) while a different generative process produces an observed variable (the program outputs). Like wake-sleep, we use samples from a generative model to train an inference network, but we do not further train the generative model itself. Next, we will see that updating the generative model could serve an important role when deploying the system on out-of-distribution problems. Adaptation. One of the most powerful features of source code as a representation is its ability to efficiently express a wide range of computations. Therefore it is of interest to study the ability of fine-tuned LLMs to extrapolate to PBE problems outside the distribution of the fine-tuning data. We consider a basic approach to adapting to a different distribution of problems, assuming access to problems drawn from the testing distribution, but without labeled program solutions. This mimics the deployment of PBE systems to end-users who may have their own idiosyncratic distribution of problems they care about, and who do not provide ground-truth programs, but who can provide feedback on if a generated program has correct behavior. This means we have an unlabeled dataset 𝒟_adapt comprising input-outputs (X,Y), as well as a labeled seed dataset 𝒟_seed comprising triples (,X,Y). Adaptation proceeds by iterating between pretraining with 𝒢(𝒟_seed), testing on 𝒟_adapt, and adding back into 𝒟_seed any program solutions found on the adaptation problems, which then become seeds for the next iteration. This produces a sequence of fine-tuned models, indexed below by i: train model: θ^i =_θlog q_θ(|,), where (, )∼𝒢(𝒟^i_seed) and =() run inference: ^,_k ∼ q_θ^i(|,) for (X,Y)∈𝒟_adapt and k from 1..K update seed: 𝒟^i+1 =𝒟^i∪{ (_k^X,Y,X,Y) : (X,Y)∈𝒟_adapt, k∈ [K] if _k^X,Y(X)=Y } Ideally, with each round of adaptation, we solve more out-of-distribution problems, which tugs the generative model toward the target distribution, unlocking solutions to more out-of-distribution problems, etc. This hinges on each iteration actually solving new problems from the unlabeled dataset. Theoretically this is guaranteed given enough inference-time compute (large K above). We explore in Sec. <ref> the extent to which this holds in practice. § EXPERIMENTS We study different LLM-approaches to programming-by-examples across three domains (Fig. <ref>): * List functions is a PBE domain meant to model a “programmer's assistant”. It concerns discovering algorithms that transform lists of numbers, given input-output examples. This problem statement has a long history within program synthesis <cit.>, and was popularized within machine learning by DeepCoder <cit.>. We consider two modern list function datasets created by Rule et al. 2024 <cit.> and Shi et al. 2023 <cit.>, which both involve higher-order functions and nontrivial procedures such as map, filter, and sort. Rule et al. was recently added to BigBench <cit.>. * Text editing is a domain where a program synthesizer assists an end-user edit their spreadsheets or other documents. From string-to-string examples, the system generates edit macros for tasks such as reformatting dates, extracting fields from semistructured text, etc. <cit.>. Text editing is the most prominent commercial success of PBE: The FlashFill PBE system ships in Microsoft Excel and is used by many millions of people <cit.>. We consider two text editing datasets: SyGuS problems <cit.>—which are easier—and PROSE <cit.> problems, which constitute the most challenging dataset of its kind <cit.>. * LOGO/Turtle graphics is a domain whose goal is to synthesize a program that generates a target image.[This is PBE with a single example and null input, effectively compressing the image into a program.] Systems of this kind can be used both for high-level visual reasoning and for helping artists make structured edits to images <cit.>. We use a dataset of geometric designs expressed as LOGO/Turtle <cit.> programs—where the programs move a simulated pen over a canvas—taken from <cit.>. To allow the LLM to visually perceive the input image, we convert the image to ASCII-art style strings; see Fig. <ref> and Appendix. <ref>. §.§ How well does the fine-tuned model perform? We prepare seed datasets for each domain, synthetically generate a large training set, and then fine-tune a DeepSeekCoder LLM <cit.> that was pretrained on source code.[We prefer DeepSeek because it is roughly LLaMA-level, but has fully open training details.] For list functions we seed with 50 problems from Rule et al. 2024; For text editing, we consider seeding with either SyGuS or a 40-problem subset of PROSE; for LOGO we seed with 200 training-set problems in <cit.>. The resulting fine-tuned models are surprisingly effective within their respective PBE domains. On list functions our finetuned model surpasses the best symbolic search baselines reported in Rule et al. (Fig. <ref>), surpasses the best neurosymbolic search method from Shi et al. (Appendix Fig. <ref>), and surpasses GPT4. It also solves 100% of the list to list benchmark problems from λ^2 (a well-known symbolic synthesizer), shown in Appendix Tbl. <ref>: although plausibly, many λ^2 problems are in the pretraining data. On text editing, it surpasses the performance of FlashFill and approaches the level of FlashFill++ (Tbl. <ref>,Fig. <ref>). On LOGO, it solves 90% of the test set (Fig. <ref>), surpassing systems such as DreamCoder <cit.>, which introduced the first version of these LOGO problems. It also solves more problems than LILO and Regal <cit.>, which are LOGO program synthesizers that input natural language describing how the image should be drawn. In contrast, our model does not use any language clues, generating purely from the image. In addition to quantitatively solving more problems, we note that there are qualitative improvements to the breadth of problems that can be solved in the first place because the LLM can generate Turing-complete code spanning a much broader space of computations (Fig. <ref>). r0.5 gen. accuracy oracle accuracy FlashFill 33% — FlashFill++ — 100% ours, 33B 82% 88% Generalization accuracy: % problems where the program makes correct predictions on every holdout test. Oracle accuracy: % problems where a correct program was generated (even if incorrect programs were also generated that also passed the training input-outputs). FlashFill++ <cit.> only reports oracle accuracy. FlashFill number obtained by running it in 2024 on MacOS. There are caveats to the above results. First, the fine-tuned model essentially never produces a correct program on the first try: It requires tens or hundreds of samples, each of which is compared against the ground-truth input-outputs, and discarded if it contradicts the examples. On a GPU like the one we use (an Nvidia A6000) this rejection sampling takes on the order of a few minutes to solve a given problem. However, compared to classic enumerative program synthesizers <cit.>, or even compared to those with neural guidance <cit.>, proposing a few thousand programs is relatively little, and could not plausibly cover a significant fraction of the exponentially large search space. The second caveat is that the model degrades when tested out-of-distribution. An example of this degradation is illustrated in Fig. <ref>, which tests the LOGO graphics model on hand drawings (after training on clean computer graphics). On the out-of-distribution hand drawing the model mostly samples programs that do not fit the data, but its accuracy does not fall to zero, meaning that with enough compute budget, it does actually generate reasonable programs. This foreshadows the results in Sec. <ref>, which more systematically studies out-of-distribution behavior. §.§ What causes the fine-tuned model to succeed or fail? Classic symbolic approaches to PBE, when they are based on enumeration, tend to succeed whenever the target program is syntactically small. Approaches based on clever dynamic programming, such as the FlashFill family <cit.>, succeed when the program is representable in the domain-specific language. What predicts success for these LLM approaches? To answer this question we investigate several hypotheses. First, potentially the success is determined by program size, and degrades as programs grow longer. Second, as a more refined notion of size, we instead measure the description length under the prior, which for a program , is -log p_LM(|𝒢(𝒟_seed)). Description length under the prior would be a good predictor of success if the fine-tuned model engages in blind guess-and-check: simply learning the distribution 𝒢(𝒟_seed), and sampling from this prior while ignoring the input-outputs. Third, one possibility is that success is predicted by description length under the approximate posterior (-log q_θ(|X,Y)), which would be the case if the fine-tuned model attends closely to the input-outputs and reshapes its distribution accordingly, instead of defaulting to the prior. To test these hypotheses we calculate the average compute budget needed to solve each problem, and compare it with these different variables. Fig. <ref> shows that posterior description length is more predictive than program size and prior description length: unlike classical methods, metrics of program length correlate poorly with problem difficulty, and there is no evidence that the fine-tuned model's behavior can be characterized as blind guess-and-check. (See also Fig. <ref>). §.§ Out-of-distribution generalization One advantage of classic symbolic PBE methods is that they do not make statistical assumptions about their test problems. Indeed, some classic methods can, within their domains, synthesize programs perfectly (i.e. always find a program that fits the training input-outputs). In contrast, neural networks can struggle to generalize beyond the training distribution. We therefore consider train/test splits that force the model to generalize beyond the distribution of its training data (beyond 𝒟_seed). On text editing, we seed with SyGuS problems, and perform out-of-distribution testing on PROSE problems (PROSE is much harder than SyGuS). On list functions, we seed with problems from Rule et al. 2024 and test on Shi et al. 2023 (the Shi dataset contains unusual combinators, such as ). On LOGO, we seed with short programs (≤ 12 lines of code), and test on long programs (>12 lines of code). Using these splits we also measure the ability of the adaptation method in Sec. <ref> to improve out-of-distribution generalization.[We work here with 7B models because Sec. <ref> found that fine-tuned 33B models are only slightly better than 7B, and 7B is cheaper to run.] Fig. <ref> shows that there is nontrivial degradation when testing out of distribution. For example, a 7B model seeded with PROSE problems and tested on a different subset of PROSE has an accuracy of 76% (Fig. <ref>), but this degrades to 59% when seeded with SyGuS problems, which follow a different distribution and are generally simpler and easier than PROSE (Fig. <ref>). We further perform the adaptation method described in Sec. <ref> in order to measure the extent to which it can narrow these domain gaps. In every case it allows solving more out-of-distribution problems, increasing absolute performance by around 10% or more in all domains, which is a relative increase of about 16% for text/list and a relative increase of about 190% for LOGO (approximately tripling the number of solved LOGO problems). To better understand the dynamics of adaptation, we visualize the specific problems solved before and after adaptation on LOGO graphics (Fig. <ref>). Before adaptation, only a handful of out-of-distribution problems are solvable, and only with a significant search budget. Adaptation allows the system to quickly solve similar out-of-distribution problems in the future, but does not allow the system to generalize to problems very unlike those originally solvable by the fine-tuned model. In principle, expanding the inference-time compute budget should allow successful adaptation (large K in Eq. <ref>). Another more compute-efficient approach would be to increase the amount of adaptation data by introducing `steppingstone' problems in the adaptation set that give a gentler transition from the original training distribution. § RELATED WORK Automatic data generation with LLMs, such as self-instruct <cit.>, WizardCoder <cit.>, and many others <cit.>, works by prompting an LLM to produce outputs which are then used for later learning stages such as fine-tuning. These approaches are applied recursively to their own output: Previously generated data is incorporated into future prompts. We similarly generate a dataset 𝒟_tune by prompting an LLM with 𝒟_seed, but (1) do not recursively prompt the LLM with its own outputs and (2) combine the LLM generations with program execution to make program outputs. This gives a different mathematical interpretation to our data generator. First, the programs are samples from a prior, 𝒢, defined by 𝒟_seed, which would not be a valid interpretation if the LLM was repeatedly fed its own outputs. Second, there is an observation model or likelihood function, p(|,), which is defined not by the LLM, but by a Python interpreter. In this way, our data generator constructs training examples for fine-tuning that teach the network how to `invert' the execution process of the Python interpreter. t Machine learning applied to PBE has often sought to accelerate search: to find any program at all consistent with the I/O examples <cit.>, which is nontrivial due to the combinatorial nature of the search, even after confining to a domain-specific programming language. A complementary line of research explores inductive biases that favor programs likely to generalize to new inputs, such as learning a prior or ranking function <cit.>. Our work should be seen within the tradition of learning to search for programs. We show that finetuned models serve as an effective yet simple foundation for accelerating search in PBE, allowing search to be tractable over much richer and more expressive languages such as Python. Classic PBE. Traditional approaches to programming-by-examples operate by symbolically searching or solving for programs consistent with the input-output examples <cit.>. They use domain-specific programming languages that are designed to either enable efficient search and/or bias the system toward functions that are likely to generalize new inputs. Search for programs can even be polynomial time when this domain-specific language has a special structure (roughly, when every function can be `inverted'), a key enabler of FlashFill, the first commercial success of PBE <cit.>. LLMs as inductive reasoners. Using an LLM to perform inductive reasoning—to generate abstract hypotheses from concrete specific examples—has been explored by several recent works <cit.>, all of which has found significant value in translating these hypotheses into programs, and all of which have worked by prompting pretrained GPT-style models. Our work can be seen as helping answer a natural question posed by these previous works: Given that LLMs can generate hypotheses from examples, can they produce programs of the nature and complexity demanded by PBE? We find this is largely the case after fine-tuning, both for classic PBE domains and unusual ones. Self-Debugging, Refinement, and Self-repair. One way of improving the code generation abilities of an LLM is to have it attempt to debug its own code whenever the initially generated code does not pass the provided test cases <cit.>. We did not explore this strategy, however, because a more basic approach that simply regenerated a new program from scratch already surpassed the prior state of the art (both symbolic and neural baselines), provided we finetune. However, further pushing the boundary of PBE may benefit from self-debugging strategies. Ranking LLM-generated code. Past work considers a variety of ways to select an output from a collection of LLM-sampled programs <cit.>, many of which are more sophisticated than simply filtering by the examples, which is what we do here. Like with self-debugging, integrating these techniques should be synergistic with our approach. § LIMITATIONS Our work has important limitations. From an engineering perspective, using a 7B-33B neural network to perform PBE is not practical for most end-users, who may be doing PBE on their laptop or desktops in order to accomplish small one-off tasks. For this reason, true deployment to end-users may require investigating the effectiveness of much smaller neural networks (not an LLM), and it may also be valuable to study the effect of network compression and distillation upon our finetuned models. From the perspective of understanding where and why our system succeeds and fails, we have shown that neither program size nor likelihood under the prior suffice to predict success, finding the posterior likelihood is a better predictor, albeit an imperfect one. Although this allows us to discard the hypothesis that the system is merely sampling from the prior, it also just pushes the question back one stage further: What exactly about specific problems causes the neural network's approximate posterior to put more or less probability mass on correct solutions? While in classic PBE one can obtain sharp answers as to why a certain problem was solved or not, this is a much harder question with neural networks, whose workings are more opaque. § DISCUSSION PBE with fine-tuned LLMs is surprisingly effective, surpassing many of the best neural and symbolic baselines we know of, even for uncommon domains such as LOGO graphics. Why is that? Fundamentally, the neural network only needs to act as a heuristic proposer of solutions, because we can check against the input-outputs. Therefore, one possible explanation is that the tendency of language models to over-generate, hallucinate, and cover the long tail of possibilities is actually an asset, instead of a liability. And although there is a degree of degradation on out-of-sample problems, the degradation is not so severe that out-of-distribution problems become utterly unsolvable: Instead, they merely become harder to solve, a phenomenon that allows adaptation to work in the first place. Simultaneously one should be hesitant about claiming that PBE is `solved.' Optimistically, current PBE benchmarks exist to test the frontier of what is possible, and so doing well on those benchmarks might just mean that the frontier has moved. More realistically, determining if an AI system truly works in the wild requires more than just pushing benchmark numbers, which can be misleading when those benchmarks do not capture the long tail of naturally-occurring tests. Furthermore, all AI systems present tradeoffs, and a neural system's unpredictability, high computational cost, and out-of-distribution fragility should be weighed against whatever high benchmark numbers they may achieve. Despite these caveats, we are optimistic about the promise of tuning LLMs for PBE, and believe that it has the potential to dramatically expand the scope of solvable problems and even solvable domains. Acknowledgements. We are grateful for assistance from Joshua Rule in the processing of the list functions data, and for feedback from Yewen Pu on the manuscript. This work was supported by an NSF CAREER grant as well as gifts from Google and Cisco. unsrtnat § APPENDIX / SUPPLEMENTAL MATERIAL §.§ Experiment Details We used a temperature of 1.0 for sampling in our experiments unless otherwise stated. All experiments were performed on single-node machines (8xA6000 or 8xA100, etc.) without a multi-node distributed computing setup. §.§.§ List Tasks We selected 50 problems from Rule et al. as our seed set, reserving the remaining problems for testing. To ensure comparability with Rule et al., we tested on the first 100 problems (excluding those in the seed set), resulting in 77 test problems. We consistently used 10 input-output examples, with the remaining 54 examples serving as a held-out test set. When filtering duplicate synthetic data, we employed an open code embedding model<cit.> available on Hugging Face. As a sanity check, we also use 10 list to list problems from λ^2 benchmark and shown the model can effectively solve them in Table.<ref> §.§.§ String Tasks We utilized 100 string-to-string/null transformation problems from the prose-benchmark. When available, we used 10 input-output examples, always reserving at least one example as a hold-out test set. This ensures the generalization of our synthesized programs, as the benchmark did not provide held-out data. For the FlashFill baseline, we used Microsoft Excel for Mac version 16.8. We opened each individual xlsx file containing the test problem examples and manually triggered the FlashFill function by pressing Ctrl+E. §.§.§ Logo Tasks To facilitate graphics input inference with code language models, we converted logo graphics into ASCII-represented strings, as shown in Figure <ref>. For each input image, we cropped a 512x512 section from the center and then divided it into 32x32 blocks, each with a size of 16x16. We counted the number of black pixels in each block, calculated their density (num black pixels/16· 16), and quantized this density value into 10 levels, represented by the ASCII numbers 0-9. By representing each block with an ASCII number, an input image is represented with a string of 32 lines, and each line has 32 numbers. For the turtle graphics program, we adopted the Python turtle graphics program from Regal<cit.> with a minor modification of changing the `embed' function to use a `with'context manager instead, calling it `fork_state'. This allows for equivalent but more readable code. §.§ Syntheic Dataset Generation and Training Parameters We present the dataset generation and training parameters in Table. <ref> and Table. <ref>. §.§ Adaptation Implementation Details For adaptation experiments, we generally followed the settings described above, with a few specific differences detailed below. §.§.§ String Tasks To induce a domain gap (easier problems in Sygus compared to the harder, noisier problems in the Prose Benchmark), we first fine-tuned a model using Sygus problems and then tested it on the Prose Benchmark. Due to the noisy nature of the Prose Benchmark (some problems have very few examples), we adopted a setting where we utilized all the test cases to select which problems were solved and then used them as the seed programs for adaptation. This resulted in 64 solved problems out of the 100 problems in the benchmark. §.§.§ List Tasks To obtain the finetuned in-distribution result in Fig. <ref>, we fine-tuned on a synthetic dataset generated by seeding with 20 out of 100 problems from LambdaBeam, and tested on the remaining 80 problems. §.§.§ LOGO Tasks For LOGO adaptation experiments, we induce domain gap by using the shorter programs (LoC ≤ 12) of the training set, and tested on the longer programs (LoC > 12). The shorter programs training seed consists of around 80% problems (156 out of 200) from the original training set. The test set consists of 31 problems out of 111 problems from the original test set. §.§ Model Performance on LambdaBeam Benchmark We present the results of both our 7B and 33B models on the LambdaBeam benchmark in Figure <ref>. We observed that even without fine-tuning for this specific benchmark, and instead fine-tuned for the list-to-list problems from Rule et al., our models performed exceptionally well, surpassing the state-of-the-art results specifically designed for the LambdaBeam problems <cit.>. §.§ Prompts Used in the Experiments §.§.§ Syntheic Data Generation Prompt List You are a CS professor. You are providing a set of challenging and diverse integer list to integer list function puzzle for your student to solve. Puzzle 1: Python function: input a list of integers and return a list of integers “`python PROGRAM EXAMPLE 1 “` Test cases: ... Puzzle 2: Python function: input a list of integers and return a list of integers “`python PROGRAM EXAMPLE 2 “` Test cases: ... Following the above format, please provide 3 functions each follow by 10 random test cases to check the function's correctness full coverage String Excel just introduce a new feature that allows user to use Python to perform data transformation. Please generate a csv file with two columns. The first column contains the input data and the second column contains the output data. Following a accompanying python function which showcasing the transformation of the input data to the output data. Here are 10 challenging examples showcaing the features Example 1 “`csv INPUT OUTPUT EXAMPLE 1 “` Here is the Python function that help transform the first column to the second column. “`python PYTHON EXAMPLE 1 “` Example 2 “`csv INPUT OUTPUT EXAMPLE 2 “` Here is the Python function that help transform the first column to the second column. “`python PYTHON EXAMPLE 2 “` ... Following the above format, please provide a CSV file with two columns, containing between 5 to 10 rows of data showing a transformation from the first column to the second column. This csv data should illustrate a challenging and complex example, similar to the above examples. Following that, create a Python function designed to process this data. Be aware that this function will be tailored to not only accommodate the current data but also any future data that follows the same format or structure. Logo Your task is to draw simple black and white graphics with the custom library. DO NOT USE THE BUILT-IN TURTLE LIBRARY. You will use a custom turtle library, similar to the built-in turtle library, which is sufficient for all tasks. Here are all the available functions in the custom turtle library: - forward(x): move forward x pixels - left(theta): rotate left by theta degrees - right(theta): rotate right by theta degrees - penup(): stop drawing - pendown(): start drawing - teleport(x, y, theta): move to position (x, y) with angle theta - heading(): get the current angle of the turtle - isdown(): check if the pen is down - forward(x): Move forward x pixels. - left(theta): Rotate left by theta degrees. - right(theta): Rotate right by theta degrees. - penup(): Stop drawing. - pendown(): Start drawing. - teleport(x, y, theta): Move to position (x, y) with angle theta. - heading(): Get the current angle of the turtle. - isdown(): Check if the pen is down. - with fork_state(): A context manager that runs the code in the block using the current context and restores the original state afterwards. Allows you to nest programs. Internally, fork_state saves the turtle state (is_down, x, y, heading), executes the block, then restores the original state. Graphic 1 Python program: draw an interesting graphic using our own custom turtle library # the following program draws ... PROGRAM EXAMPLE 1 Graphic 2 Python program: draw an interesting graphic using our own custom turtle library # the following program draws ... PROGRAM EXAMPLE 2 ... Following the above format, please provide 5 more programs using our custom drawing library. §.§.§ Prompt template for finetuning and zeroshot experiments §.§.§ List Implement the function solve_puzzle that takes a list of integers and returns a list of integers. The function should satisfy the following assertions assert solve_puzzle(...) == ... assert solve_puzzle(...) == ... assert solve_puzzle(...) == ... ... §.§.§ String Implement the function edit_text that takes a string and returns a string. The function transforms the input string to the output string. The function should satisfy the following assertions: assert edit_text(...) == ... assert edit_text(...) == ... assert edit_text(...) == ... §.§.§ Logo Here is a gray scale images representing with integer values 0-9. CONVERTED IMAGE STRING... ... Please write a Python program that generates the image using our own custom turtle module
http://arxiv.org/abs/2406.08884v1
20240613073716
The Penalized Inverse Probability Measure for Conformal Classification
[ "Paul Melki", "Lionel Bombrun", "Boubacar Diallo", "Jérôme Dias", "Jean-Pierre da Costa" ]
cs.CV
[ "cs.CV", "cs.LG", "stat.ML" ]
Bistability in filamentous actin through monomer-sequestration of an effector species Bela M. Mulder June 17, 2024 ===================================================================================== § ABSTRACT The deployment of safe and trustworthy machine learning systems, and particularly complex black box neural networks, in real-world applications requires reliable and certified guarantees on their performance. The conformal prediction framework offers such formal guarantees by transforming any point into a set predictor with valid, finite-set, guarantees on the coverage of the true at a chosen level of confidence. Central to this methodology is the notion of the nonconformity score function that assigns to each example a measure of “strangeness" in comparison with the previously seen observations. While the coverage guarantees are maintained regardless of the nonconformity measure, the point predictor and the dataset, previous research has shown that the performance of a conformal model, as measured by its efficiency (the average size of the predicted sets) and its informativeness (the proportion of prediction sets that are singletons), is influenced by the choice of the nonconformity score function. The current work introduces the Penalized Inverse Probability (PIP) nonconformity score, and its regularized version RePIP, that allow the joint optimization of both efficiency and informativeness. Through toy examples and empirical results on the task of crop and weed image classification in agricultural robotics, the current work shows how PIP-based conformal classifiers exhibit precisely the desired behavior in comparison with other nonconformity measures and strike a good balance between informativeness and efficiency. § INTRODUCTION The development and deployment of machine learning-based autonomous systems has been a flourishing field of research in both academia and, relatively more recently, in the industry <cit.>. While machine learning models often exhibit high performance “in the lab", they often face much more difficulty when deployed in the real world, for a number of reasons that are not yet fully clear <cit.>. Indeed, when faced with a new observation, the model will produce a new prediction whose quality is often related to the similarity of this new observation to what the model has previously seen. When the new observation is quite anomalous with respect to the previously seen data or even slightly perturbed, most models will produce wrong predictions <cit.>, with often dire and intolerable consequences in safety critical applications such as autonomous driving <cit.> and medical diagnosis <cit.>, to name a few. The safe deployment of machine learning systems in the real world is therefore incumbent upon the integration of at least two main important features into them <cit.>: (1) the ability to provide valid and trustworthy guarantees on the quality of predictions in “normal" conditions, and (2) the ability to reliably detect and signal anomalies when faced with them. Conformal prediction is a method that provides formal statistical guarantees on the predictive quality of any black box model <cit.>. It has recently gained in popularity due to the minimal assumptions required for its deployment. Without imposing explicit conditions on the data distribution, any base point predictor can be transformed using the conformal approach into a set predictor with formal guarantees on the coverage of the true value at confidence level 1-α, where α is a chosen level of tolerance to error. Formally, in a supervised learning context, whereby for each object 𝐱∈𝒳 is assigned a label y ∈𝒴, a conformal model produces prediction sets 𝒞_1-α⊂𝒴 that satisfy the marginal coverage guarantee <cit.> ℙ(y ∈𝒞_1 - α(𝐱) ) ≥ 1 - α whenever the test data follow the same distribution as the data on which the model was calibrated. Under this condition, the coverage guarantee is satisfied marginally over all possible calibration sets. Additionally, the study of the structure and the size of the predicted sets allows us to quantify the uncertainty of the base model, and to detect examples on which the model is highly uncertain <cit.>. As such, the conformal approach can be used to satisfy the two conditions for safe deployment of machine learning systems as it has been shown in a number of applications <cit.> ranging from railway signaling <cit.>, medical imaging <cit.>, to nuclear fusion <cit.>. Three main components are needed to conduct inductive conformal prediction <cit.>: a base predictor ℬ (which can be any machine learning point predictor), a dataset on which to calibrate the model so that it becomes a conformal predictor, and a nonconformity score function Δ that assigns a “strangeness value" to each example in the calibration set. This value measures how conforming each individual is to what the model has previously seen. While the marginal coverage guarantee is satisfied by construction, the quality of the predicted sets is influenced by these three components. For example, a neural network ℬ with low accuracy can still be calibrated to achieve 1 - α = 0.9 coverage, but will tend to predict much larger sets, since it is uncertain about the true class and thus needs to predict many to guarantee the inclusion of the true one. The object of interest in this work is the nonconformity score function Δ. In particular, we are interested in studying the influence of different nonconformity functions on two of the most commonly used metrics for the evaluation of conformal classifiers <cit.>: efficiency, the average size of the predicted sets, and informativeness, the proportion of predicted singleton sets. These two metrics measure, in some sense, the “usability" of the conformal approach when needed to take decisions under normal condition, and may be useful to signal high uncertainty conditions. The context of the study is automated precision weeding in agriculture <cit.>, whereby a robotic system is embedded on a tractor to detect and spray herbicides on undesirable weeds in real-time, under real-world conditions. The precision agriculture sector is an interesting test-bed for safe AI methodologies since they are indeed needed in agriculture, but do not directly threaten human lives in case of failure. Related work A good body of research is dedicated to the development of useful and efficient nonconformity score functions <cit.>. For classification, the first comprehensive work is that of Johansson et al. <cit.> in which the authors study the impact of different model-agnostic nonconformity functions – in particular, the Hinge Loss and the Margin Score – on neural network classifiers. The authors find that neither of these score functions allows the joint maximization of informativeness and efficiency. Their empirical results show that the Hinge Loss minimizes the size of prediction sets, while the Margin Score maximizes the number of singletons. These results are further confirmed by Aleksandrova and Chertov <cit.> on most of the datasets they tested, in their work aiming at reconciling the two scores by computing, for a new observation, two conformal sets using both the Hinge and Margin scores, then choosing the Margin-based set as the final prediction if it is a singleton, or the Hinge set otherwise. Unfortunately, this approach may be quite inefficient as it requires repeating the calibration step for each nonconformity function. Fisch et al. <cit.> propose an efficient conformal classification approach based on an expansion of the notion of validity to include the concept of admissible labels, which are semantically plausible class labels for a given example. Such an expansion may lead to highly inefficient prediction sets in learning tasks with a large number of classes. As such, the authors develop an efficient cascaded inference algorithm that reduces the size of the prediction set by progressively filtering the number of candidates via a sequence of increasingly complex classifiers. Other works have explored ways to combine multiple conformal models in such a way as to preserve the validity guarantee while producing sets that are as efficient as possible <cit.>. Contributions In direct continuation of these previous works, and for the expansion of the still meager body of work on conformal prediction in precision agriculture <cit.>, our work proposes the following contributions: * The proposal of a new model-agnostic nonconformity function that strikes a good balance between optimizing both efficiency and informativeness: the Penalized Inverse Probability (PIP); * The proposal of a simple regularized version of PIP, RePIP, inspired by <cit.> for improved efficiency in use cases with a large number of classes; * The comparison of PIP with other nonconformity measures from the literature on toy examples, showing the balanced and adaptive behavior of this measure under different settings; * The comparison of PIP and RePIP with other nonconformity measures from the literature based on efficiency and informativeness through rigorous empirical experiments on an image dataset for crop and weed classification taken under real-world conditions with the aim of providing valid guarantees on the performance of a precision weeding system. § DEFINITIONS & MATHEMATICAL SETUP Let 𝐱∈𝒳 be a vector of features, which we will call an object <cit.>. To each object is associated a class label y ∈𝒴 := {1,..., K} to form what we call an example 𝐳 = (𝐱, y) ∈𝒳×𝒴. A black-box classifier ℬ is trained on a set of n_train examples to output for an object a class prediction ℬ̂(𝐱) = ŷ∈{1,..., K} and an associated estimated probability p̂^ŷ∈ [0, 1], such that ∑_k=1^K p̂^k = 1. The inductive conformal approach consists of a calibration step in which the trained classifier is calibrated on a set of n_cal calibration examples {𝐳_i = (𝐱_i, y_i), i = 1, ..., n_cal} using a real-valued nonconformity score function Δ(𝐳): 𝒳×𝒴→ℝ. The output of the calibration step is usually a quantile value q_cal∈ℝ computed on the distribution of nonconformity scores over the calibration set. This quantile is then used to produce prediction sets 𝒞_1-α (𝐱) ⊂𝒴 on the remaining n_test test examples. For each class, its score Δ is computed based on the probability estimated by ℬ, then compared to q_cal in a hypothesis test of whether the class is considered “conforming" enough or not. The produced prediction sets are valid in the sense that they satisfy the marginal coverage guarantee defined in Equation (<ref>). This property is verified empirically by computing the empirical marginal coverage, which is simply the proportion of prediction sets that cover the true label: 1/n_test∑_i=1^n_test1_{ y_i ∈𝒞_1 - α(𝐱_i) } The quality of the prediction sets can then be evaluated using these two metrics: * Efficiency, defined as the average size of the predicted sets: 1/n_test∑_i=1^n_test | 𝒞_1 - α(𝐱_i) | where | . | is the set cardinality, the number of classes in the predicted set. * Informativeness, defined as the percentage of predicted sets of size 1 (often called oneC in the literature <cit.>): 1/n_test∑_i=1^n_test1_{| 𝒞_1 - α(𝐱_i)| = 1} Clearly, conformal predictors that have both high efficiency and high informativeness are the preferred models in practice, at a fixed coverage level of 1- α. Smaller set sizes are easier to manipulate and be used to construct decision rules. Singleton predictions are the most informative predictions since they do not manifest any “uncertainty" about the predicted class. A most informative, and efficient, conformal model would be one that predicts only singletons while guaranteeing marginal coverage. Unfortunately, such an optimal conformal model is impossible to attain in practice <cit.>. § NONCONFORMITY SCORE FUNCTIONS §.§ Review of Some Nonconformity Scores The nonconformity measure quantifies the “strangeness" of a given object by comparing it to the objects previously encountered by the model during training and calibration <cit.>. For the same base predictor ℬ, different nonconformity functions lead to different conformal predictors. Here, we review commonly used nonconformity score functions for classification from the literature <cit.>. Since the estimated probabilities p̂^k are fixed for a given object 𝐱, the nonconformity score function Δ(𝐳) will simply be denoted Δ(y) in the following for ease of understanding. Note also that during the calibration step of the conformal procedure, y is the true class of object 𝐱, while during the prediction phase, y is the tested class to be included or not in the prediction set. Hinge Loss (IP) <cit.> Also known as Inverse Probability, this score function measures how far the estimated probability of y (where y is the true class label) is from the perfect score of 1: Δ^IP(y) = 1 - p̂^y Indeed, a perfect classifier should always assign a probability of 1 to the true class label, which would have a Hinge score of 0. For smaller probability estimates of y, a higher Hinge score is assigned since the model is deemed more uncertain about y. The Hinge Loss can thus be considered a very “natural" measure of nonconformity. Unfortunately, it suffers from a major shortcoming: it does not take the probability estimates of the other classes into consideration. Margin Score (MS) <cit.> Assuming an implicit hypothesis that good predictive models should assign the highest probability estimate to the true class, the MS measures the difference between the estimated probability of y and the highest estimated probability among the other classes: Δ^MS(y) = max_k yp̂^k - p̂^y = Δ^IP(y) + max_k yp̂^k - 1_penalization A large positive value of this score indicates that the estimated probability assigned to y is distant from the class of highest confidence. It means that class y is considered highly strange in comparison to the class the model considers as the true one. Notice that y is always penalized, even when it is the most probable class, which is not ideal. Another shortcoming of the MS is that it only takes the maximum probability into consideration, why not take the probabilities of the other classes directly into consideration? It is important to note that in cases of anomalies, OOD observations or adversarial attacks, neural networks would tend to assign the highest confidence to classes that are completely wrong <cit.>, thus putting the reliability of the Margin Score into question. Regularized Adaptive Prediction Sets (RAPS) This nonconformity function was first introduced in <cit.> as part of the APS approach, with the aim of producing prediction sets whose size adapts to, and reflects, the difficulty of each object. It is the first score function that fully integrates a range of estimated probabilities other than that of y. In particular, the APS score incorporates all the estimated probabilities that are larger than that of the class of interest. Observing that the APS score tends to predict relatively large set sizes in learning problems with a large number of classes, Angelopoulos et al. <cit.> introduced a regularized version of this score, named RAPS. Let the operator R(k) be the rank of class k after the estimated probabilities p^1, ..., p^K have been sorted in decreasing order, and p̂^[r] be the probability estimate of the class having rank r, such that p̂^k = p̂^[R(k)], we can define the RAPS score function as: .9!Δ^RAPS(y) = ∑_r = 1^R(y)-1p̂^[r] + u ·p̂^[R(y)]_APS + λ·(R(y) - k_reg)^+_regularization where u is a uniform random variable in (0, 1) for tie-breaking, λ is the penalization amount and k_reg is the rank at which to start penalizing. λ and k_reg can be fixed by the user or optimized on a held-out dataset. The penalization is proportional to the how further away is y in the ranking of estimated probabilities from k_reg. When y has a very low probability, meaning that it has a very high R(y), its score will be very strongly penalized, leading to the exclusion of y from the prediction set. This will lead, on average, to smaller set sizes, as it excludes from the prediction sets those classes that would have been included by the original APS score (obtained for λ = 0). While the APS and RAPS have been developed with adaptivity and efficiency in mind, their authors' do not seem to take the informativeness criterion into consideration. §.§ Penalized Inverse Probability In this article, we introduce the Penalized Inverse Probability (PIP), a new nonconformity score function that integrates components from the three previously presented measures with the aim of optimizing both efficiency and informativeness. Following the same notation presented previously, PIP can be defined as: Δ^PIP(y) = 1 - p̂^y_Δ^IP(y) + ∑_r = 1^R(y)-1p̂^[r]/r 1_{R(y) > 1 }_penalization For R(y) = 1, when y is the class with the highest estimated probability, the PIP score is simply the Hinge (IP) Loss. In all other cases, the sum of the estimated probabilities of all the classes with higher probability than y weighted by the inverse of their rank is added as a penalization term. As such, a decreasing weight is associated to each class that is closer to y. This penalization term resembles the APS score, and alleviates the shortcoming inherent by IP of not taking the estimated probabilities of other classes into consideration. Furthermore, for R(y) = 2, it should be clear that Δ^PIP(y) = 1 + Δ^MS(y). As such, the PIP score exhibits analogous behavior to different nonconformity functions depending on the estimated probabilities by the base model ℬ, leading to better adaptivity, as will be shown in the toy examples below. For more detailed developments on the relationship between the PIP and the other scores, we refer the interested reader to Section 1 in the Supplementary Material. Toy examples Consider the six different possible output configurations of a neural network classifier shown in <ref>. The class of interest is y and its estimated probability is fixed to p̂^y = 0.1 in all the examples. Only the classes having higher estimated probabilities than y are shown since they are the only ones that are used in the computations of the different scores. In <ref> are shown the different scores assigned to class y in each of the cases, sorted in increasing order. A greater score is a sign of greater “nonconformity" – that is, of higher uncertainty – attributed to y. The first obvious observation is that IP assigns the same score to y in all cases. As p̂^y = 0.1 is the same in all cases and IP is, by definition, indifferent to the other classes, all the configurations are reduced to the same score. This rigidity is often undesirable in a nonconformity score function. The MS measure, on the other hand, manifests a more fluid behavior since it also considers the highest estimated probability. Case 1 is assigned the lowest MS score, since the estimated probabilities of y and a are quite similar. As such, MS considers that class y is as likely a candidate as a to be the first predicted class, and thus assigns it a low nonconformity score. Case 2 is considered a bit “stranger" than Case 1 by the MS function because the difference between the maximal class a and y is a bit larger, which is a desirable behavior by this score function. In Case 6, although y has the same rank R(y) = 3 as in Case 1, the MS value is maximal since the margin between the p̂^a and p̂^y is large. Cases 3 to 5 show the shortcoming of the MS measure. Since in all these cases the difference between p̂^a and p̂^y is the same, they will all be assigned the same score value, even though it is clear that class y in Case 5 should be assigned a higher nonconformity value than in Case 3 or even in Case 4. The proposed PIP function exhibits the most versatile behavior since it takes into consideration all the classes having higher estimated probabilities than y. Δ^PIP(y) is different in all the distinct configurations, manifesting the specificity of each case. Indeed, it can easily be shown that Δ^PIP guarantees a different score for every class even in the case of highest uncertainty where all the classes have the same estimated probability 1/K (Section 1 in Supplementary Material). Case 1 has the lowest PIP score, since class y is almost as likely as a or b to be predicted as the first class. As such y is not deemed strange in such a condition. The behavior of PIP in such situations is similar to that of MS. Case 2 is considered slightly stranger because the difference between p̂^a and p̂^y is larger and cannot simply be attributed to some “noise." While y has the same estimated probability and rank R(y) = 3 in both Case 3 and Case 4, it receives a slightly lower score in Case 3 since the difference with the b is very small: y could very much have been the second class and thus need not be penalized heavily for falling in third place. Class y in Case 5 is further penalized because more significant classes have higher estimated probabilities than y. Summary of PIP score properties The desirable behavior manifested by PIP can be summarized as: * In all situations, the Hinge Loss (IP) is a baseline value for the PIP function. Therefore, classes with low probability estimates will tend to be assigned higher nonconformity scores. This kind of behavior leads to a lower average size of predicted sets (higher efficiency) since it tends to exclude the classes with low probability estimates <cit.>. * PIP takes into consideration all the probability estimates of the other classes with higher probabilities when computing the score for a given class. This includes the maximum probability class. Therefore, when p̂^y has a low value compared to max_k yp̂^k, class y will be heavily penalized (just like with the MS measure). This behavior generally leads to more predicted singletons (higher informativeness) because in all cases where one class has a very high probability estimate, all the other classes will be heavily penalized and thus excluded from the predicted set <cit.>. * Additionally, PIP distinguishes the cases where the difference between p̂^y and the “more probable" classes is significant or not, penalizing less when such differences are negligible and can be attributed to some noise. This leads to scores that are different almost everywhere, permitting better discrimination between the different model outputs. §.§ Regularized PIP For learning tasks with a large number of classes, the user may require to preserve the desirable behavior of the PIP score function but with smaller set sizes on average. The same regularization term added to APS <cit.> can be added to obtain RePIP, a regularized version of the proposed nonconformity measure: Δ^RePIP(y) = Δ^PIP(y) + γ·(R(y) - k_reg)^+_regularization Here, γ is the equivalent of the λ parameter in the RAPS nonconformity score and k_reg is, similarly to RAPS, the rank at which to begin penalizing more. § EXPERIMENTAL RESULTS In this section, we study the performance of different conformal classifiers obtained using the nonconformity score functions presented previously on the task of classifying images taken under real world conditions into 13 different plant species. This learning task is part of a precision weeding robotic use case, where an autonomous robot should distinguish weeds from cultivated crops and spray them with herbicide in real-time. Guaranteeing the performance of the weed classifiers is of great importance since missed weeds can multiply quickly and threaten heavily the health of the cultivated crops and the quality of harvest. §.§ Experimental Setup The public WE3DS dataset recently published in <cit.> is originally a dataset of RGB-D images with semantic segmentation masks densely annotated into 17 plant species classes in addition to the soil class for the background. Due to the scarcity of publicly available crop and weed classification datasets, this dataset has been transformed into a classification dataset. Discarding the depth channel, the original RGB images have been divided into non-overlapping windows of size 224 × 224. To each resulting image is associated a true class label which is defined as the class with the highest number of pixels in the corresponding semantically annotated mask. This results into a dataset of around 14,800 RGB images with 13 different classes, of which six random specimens are shown in <ref>. We refer the interested reader to Section 2 in the Supplementary Material for a full description of the data preparation procedure. The database is then randomly divided into: (1) a training set (70%), on which a ResNet18 classifier <cit.> is trained using default hyperparameters and pretrained weights on ImageNet <cit.>, and fixed for all experiments; the remaining 30% of the data are then split into (2) a calibration set (13.5%) for conformal calibration and (3) a test set (16.5%) on which the conformal classifiers are evaluated. It is important to note that the choice of the base model ℬ is not of great importance and is not the focal point of this study. It is for this reason, and especially to be able to study the differences among the nonconformity score functions, that we opted for a classical ResNet18 classifier which does not manifest exceptional classification performance on this task. It could have very well been replaced by a newer state-of-the-art deep classifier. After training the ResNet18 classifier, the neural network is calibrated using each of the previously presented nonconformity score functions at the chosen confidence level of 1-α = 0.9. Then, it is used to predict sets of classes for the test images. To make sure that the obtained results are not simply due to having favorable samples of images, the calibration and test steps are repeated 1000 times, each time on a different random split of the data. The random seed of the i^th random split, i = 1, 2, ..., 1000, is the same across the different nonconformity score functions so as to obtain results that are truly comparable and not simply influenced by the aleatoric uncertainty inherent to the data. §.§ Setting γ and λ for RePIP and RAPS For RAPS and RePIP, k_reg is fixed at 3 based on this specific use case's requirements. In general, we prefer not to have prediction sets with more than 3 classes: the cultivated species, a weed species and the soil. In order to choose the regularization amounts λ and γ, we conduct a parameter sweep by testing multiple values from a manually defined grid. For each value and each method, a different conformal classifier is obtained for which we compute the efficiency and informativeness. Similarly to the experimental setup, with the aim of verifying the reliability of the estimated metrics, each conformal classifier is calibrated and tested on multiple random splits of the data. <ref> shows the average set size and the proportion of singletons for each random split of the data and different values of γ (<ref>) and λ (<ref>). Depending on the user's preferences and the use case requirements, the optimal value can be chosen so as to place more weight on minimizing inefficiency or maximizing informativeness. In our case, we deem it more important to maximize the number of predicted singletons while maintaining the coverage guarantee, as it is much easier to construct decision rules when only one class is predicted. Therefore, based on the empirical results in <ref>, we choose γ = λ = 0.02 as values for the hyperparameters. We also note that for both hyperparameters, a limit seems to be reached at 0.5 whereby any greater value produces the same prediction sets (notice that the data points for the values 0.5 and 1 are overlapping). §.§ Results and Discussion The comparison of the different models is conducted based on the efficiency and informativeness criteria. A desirable model is one that optimizes both of these criteria by producing prediction sets with small size on average and as many singletons as possible without violating the marginal coverage guarantee. <ref> shows the results obtained over the 1000 runs for each conformal classifier. Unsurprisingly, all the conformal classifiers are able to maintain the required 90% marginal coverage guarantee on average, with MS showing a comparatively unstable behavior with respect to the other measures (<ref>). As can be seen in <ref>, the Hinge (IP) score leads to the smallest average set size, which is in accordance with the empirical results in <cit.> showing that IP is the measure to use to maximize efficiency. RAPS and RePIP which are designed with efficiency in mind through the regularization term lead to slightly larger set sizes on average, with RePIP coming in second place after IP. A slight difference between APS and RAPS can be noticed. The Margin (MS) score function shows a significantly unstable behavior over the different random runs. This can be due to its deep dependence on the data it faces via the outputs of the base classifiers, an inference that can be made by comparing the divergent results in <cit.> and <cit.>. It also manifests a considerably higher average set size on average than all the other methods, a result in agreement with Johansson et al. <cit.>. The proposed PIP score, while exhibiting a slightly larger average set size than the other methods, is still much more efficient than the MS. This slight inefficiency is a price to pay for a considerable increase in informativeness (see  <ref>). Indeed, MS manifests the highest proportion of predicted singletons, in accordance with the literature <cit.>, with more than 50% of predicted sets being singletons, on average. This result is influenced by the estimated probabilities of the base neural network: when the base classifier assigns a much higher estimated probability to one class in comparison to the others – that is, it is highly “confident" in the class it predicts – all the other classes will be considerably penalized, and thus excluded from the prediction set. This behavior is in agreement with Case 6 in <ref> and <ref>. This behavior tends to increase the number of predicted singletons only when the base classifier ℬ already has a relatively high accuracy. The other nonconformity score functions, APS, RAPS and IP, that are not explicitly concerned with informativeness, have significantly less predicted singletons. On the other hand, our proposed PIP and RePIP scores can be considered quite competitive with MS in terms of informativeness with around 50% of predicted sets having size 1, and manifest better stability with regards to the data in comparison with MS. Interestingly, while the regularization via RePIP leads to considerably smaller set sizes on average, it does not decrease informativeness in any noticeable way, thus striking the required balance between the two evaluation criteria. In a robotic pipeline, a conformal model that satisfies the condition of guaranteed coverage under normal conditions with such a high level of singletons along with a moderate average set size (such as with PIP or RePIP) is quite attractive. While providing around half of the predictions as singletons that can readily be used to take decisions, the conformal classifier produces the remaining predictions as sets that consist of only 2 or 3 classes, on average, on which adapted decision rules can be constructed easily for autonomous agents <cit.>. § CONCLUSION Conformal prediction is an important methodology for developing safe, deployable, machine learning systems. As long as the data faced by the model resembles, to a certain extent, the data on which it has been calibrated, the conformal model maintains the marginal coverage guarantee. Even though this marginal warranty can be strengthened, for example to provide class-conditional <cit.> or group-conditional coverage guaranties <cit.>, it already constitutes a strong gauge of validity for machine learning models, in particular black box neural networks that do not provide such guarantees by default. The conformal envelope around any learning model can be an important step for its certification as a valid model for deployment. However, while any well-calibrated conformal model can provide coverage guarantees, the utility of the predictive model as a component in a larger decisional pipeline, in fully autonomous systems or human decision support systems, depends heavily on the prediction sets produced <cit.>. In the current work, we introduced the Penalized Inverse Probability (PIP), and its regularized version (RePIP), with the aim of jointly optimizing the efficiency and informativeness of conformal classifiers. PIP and RePIP, mixing elements from other nonconformity score functions, provide a well-balanced hybrid behavior. The empirical results on crop and weed classification using deep neural networks show that PIP-based classifiers lead to relatively efficient prediction sets with significantly higher level of informativeness than their counterparts. Future work will continue this line of research notably by studying the behavior the different nonconformity measures on multiple datasets consisting of varying number of classes. A promising direction of exploration in safe AI is the comparison of the performance and robustness of these different nonconformity score functions under “abnormal" conditions, for example under distribution shifts and with regards to anomalous observations. IEEEtranN
http://arxiv.org/abs/2406.09309v1
20240613164244
Teleoperation of a robotic manipulator in peri-personal space: a virtual wand approach
[ "Alexis Poignant", "Guillaume Morel", "Nathanaël Jarrassé" ]
cs.RO
[ "cs.RO" ]
Teleoperation of a robotic manipulator in peri-personal space: a virtual wand approach Alexis Poignant^1, Guillaume Morel^1, Nathanaël Jarrassé^1,2 ^1Sorbonne Université, CNRS, INSERM, Institute for Intelligent Systems and Robotics (ISIR), Paris, France. ^2Email: jarrasse@isir.upmc.fr June 2024 =========================================================================================================================================================================================================== § ABSTRACT The paper deals with the well-known problem of teleoperating a robotic arm along six degrees of freedom. The prevailing and most effective approach to this problem involves a direct position-to-position mapping, imposing robotic end-effector movements that mirrors those of the user, mapping-top. In the particular case where the robot stands near the operator, there are alternatives to this approach. Drawing inspiration from head pointers utilized in the 1980s, originally designed to enable drawing with limited head motions for tetraplegic individuals, we propose a "virtual wand" mapping. It employs a virtual rigid linkage between the hand and the robot's end-effector, mapping-bottom. With this approach, rotations produce amplified translations through a lever arm, creating a "rotation-to-position" coupling. This approach expands the translation workspace at the expense of a reduced rotation space. We compare the virtual wand approach to the one-to-one position mapping through the realization of 6-DoF reaching tasks. Results indicate that the two different mappings perform comparably well, are equally well-received by users, and exhibit similar motor control behaviors. Nevertheless, the virtual wand mapping is anticipated to outperform in tasks characterized by large translations and minimal effector rotations, whereas direct mapping is expected to demonstrate advantages in large rotations with minimal translations. These results pave the way for new interactions and interfaces, particularly in disability assistance utilizing head movements (instead of hands). Leveraging body parts with substantial rotations could enable the accomplishment of tasks previously deemed infeasible with standard direct coupling interfaces. Telerobotics and Teleoperation § INTRODUCTION Over the past decade, the democratisation of 6 and 7 Degrees of Freedom (DoF) robotic manipulators has significantly increased their use in collaborative industrial and assistive applications <cit.>. Despite these developments, controlling 6 DoF displacements (3 translations and 3 rotations) remains a challenge due to the need of 6-DoF interfaces, especially for position-position control. This paper concentrates on addressing the control issues associated with such robots in close proximity. This specific operational scenario is commonly encountered in enclosed-robot environments and for tasks requiring assistive functionality. For example, assisting individuals with restricted hand mobility <cit.> <cit.> <cit.>, or industrial operators with occupied hands, such as in 3-hands soldering of large parts <cit.>. It can also be found in environments with nuclear or chemical hazards, which may require manipulation through a glove-box <cit.>. In such scenario, the operational distance being short, it allows for direct visual feedback from the robot, as opposed to relying solely on a screen. This not only provides a clearer view of the environment but also diminishes the reliance on artificial haptic feedback and reduces the delay between the robot's actions and the user's responses. It also increases the potential feelings of embodiment of the robot for the user, as the proximity of the robot increases visual but also sound or proprioceptive-based feedback. Existing literature emphasizes that, in such scenarios, position-to-position control tends to yield superior performance and user preference over position-to-velocity control <cit.> such as joysticks. However, implementing a position-position control is challenging for two primary reasons. First, there is a deficiency of position-to-position interfaces, which are often large <cit.>, and second, the direct one-to-one mapping of a user's body part to the robotic end-effector is restricted by the user's range of motion. For instance, if the robot's end-effector displaces by one meter, the user's corresponding body part would also have to move one meter, limiting practicality. Conversely, direct mapping proves highly effective for rotational displacements around a fixed point. To overcome the translational limitations in the task space, position-control interfaces often incorporate clutching mechanisms <cit.> (such as on Intuitive Surgical Da Vinci Robot <cit.>). This allows users to temporarily disengage from the robot, return to their initial position, and then re-engage with the manipulator. Alternatively, scaling can be employed <cit.>, but this approach usually sacrifices precision and noise-sensitivity. However, addressing these challenges remains needed to enhance the usability and effectiveness of position controlled 6 DoF robotic manipulators. In this paper, we propose an alternative mapping approach, specifically employing a lever, or a fixed-length "wand", to transform body rotations into amplified translations, and body translations into diminished rotations. In this mapping, the user is equipped with a rigid wand, and the virtual tip of the latter serves as an indicator for the desired position and orientation of the robotic manipulator. This mapping offers the advantage of expanding the translation workspace through the leverage provided by the lever and the rotations of the body. However, it comes at the cost of reducing the rotational workspace, necessitating the rotation of the user's hand around the robot's end-effector, and therefore large translations proportional to the wand's length. By converting human rotational motions into translations, this mapping strategy also enables users to leverage body parts with limited translational range but significant rotational capability, such as the trunk or the head. The use of head-based wands was previously explored in the eighties and nineties, particularly with tetraplegic patients possessing residual head mobility <cit.><cit.>, employing head-based wands to draw or operate a keyboard. Wands are also known to be able to integrate into the body schema <cit.>, making them intuitive. Nevertheless, traditional rigid wands have inherent limitations, including constraints on weight and size, as well as a lack of motorization and easy reconfiguration. To overcome these limitations, we propose the use of a virtual wand instead of a physical one. A virtual wand eliminates these physical constraints, and offers more flexibility. However, as the robot motions are often slower than the human operator's motions, the robot effector might be delayed compared to the virtual lever's effector. Users can interact with the virtual wand without seeing it directly, but, in order to visualize the delay, they can opt to use an Augmented Reality (AR) headset to observe the virtual wand in real time. While the latter may introduce some visual discomfort due to the holograms, it provides an effective means to mitigate the impact of delays (which are usually undesirable for teleoperation <cit.>), and should ease the integration of the tool into the body schema, enhancing the overall user experience. The use of Augmented Reality for distant teleoperation was previously explored in a one-to-one direct mapping scenario <cit.>, but was limited to 2 DoF and not compared to other methods or control without AR visual-based feedback. Our proposed Wand Mapping aims to provide an amplified position mapping that does not require any customization (unlike scaling) and is solely based on geometric relations, like one-to-one Direct position-to-position mapping. As it amplifies the operator's rotation into translation, we may call it "rotation-to-position" mapping. Our goal is not to replace the traditional one-to-one position-to-position mapping, but to propose an alternative that naturally amplifies body rotations, and therefore could be use through head or trunk motions, and therefore in the previously described scenarios, unlike position-to-position. In the following experimental results, we decided to use the operator's hand to control the robot, as it is the only body part that can naturally use both mappings with ease, and thus provide an equal comparison, but applied scenarios would a priori include assistance tasks, using the head or torso, and users with limited mobility. In the following sections, we first describe the implementation of the system and 6 DoF mappings, and we then evaluate the performances, preferences and impacts on motor control of the Wand Mapping compared to the traditional and well-known one-to-one Direct Mapping. § MATERIAL AND METHODS §.§ Experimental set-up The set-up is presented in Fig. <ref>. The virtual device base frame E_H is materialized by an optical marker attached to the tool manipulated by the user's hand, either a wand or a cube. The marker is tracked in real time by an Optitrack motion capture system w.r.t. its fixed frame O. The experiment also involves a Hololens 2 Augmented Reality (AR) headset that is self-localized in real-time w.r.t. a fixed W_AR, and a 7 DOFs Kinova GEN3 Ultra Light Weight robot with fixed base frame B_R. Prior to the experiment, a procedure allows for registering O, W_AR and B_R, all with respect to a fixed world frame W (chosen arbitrarily). This allows to express in the same frame W the optical measurements, the location of the pointer tip to be displayed in the AR headset, and the robot position or velocity commands. The AR headset is used to display the desired end-effector location to the participant (in dark blue as seen Fig. <ref>). This end-effector is a cube centered at the origin of frame E_R^⋆. Namely, the desired pose for the robot writes: _W → E_R^⋆(t) = [ _W → E_R^⋆(t) _W → E_R^⋆(t); 0    0    0 1 ] , where _A → B and _A → B are the rotation matrix and the origin translation from A to B, respectively. The robot end-effector location _W → E_R is servoed to _W → E_R^⋆(t) using a resolved rate controller that imposes an end-effector velocity proportional to the error. The controller guarantees, due to its integral effect, a null permanent error. The convergence rate is tuned by a simple proportional gain k determining the closed-loop position control bandwidth. Its value is set to k=0.5 s^-1, heuristically determined to obtain the fastest behavior before exciting the internal joint position controller oscillations of the Kinova. The augmented reality (AR) headset additionally presents the actual position of the robot effector in a semi-transparent manner. This feature enables participants to visually estimate when the robot has attained the desired configuration. An illustrative video detailing the system used with the head and torso can be seen at: <https://www.youtube.com/watch?v=EgwzT784Fws>. §.§ Virtual Representation The virtual environment features three objects: a light magenta cube representing the current configuration of the robot end-effector, a dark blue cube denoting the desired end-effector configuration (either corresponding to the tip of the wand or the teleoperated effector which directly maps the hand motions), and a red cube signifying a target location that the robot end-effector is expected to reach such as seen Fig.<ref>. §.§ Teleoperation control modes §.§.§ Direct Mapping The direct mapping consists of simply mapping the displacement of the hand between the initial start of the experiment t_0 and the current instant t to the desired robot effector: _E_R^⋆(t_0) → E_R^⋆(t) = _E_H(t_0) → E_H(t) If at the initial time both the hand and the robotic effector have a similar orientation, then the user has the illusion that the desired effector copies its motions with a constant translation in-between. To reinforce the illusion, users are holding a real cube of the dimensions of the virtual one. §.§.§ Wand Mapping The wand mapping consists of giving the illusion that the user is holding a wand whose tip is the desired cubic end-effector of the robot. The mapping writes: _W → E_R^⋆(t) = _W → E_H(t)_E_H(t) → E_R^⋆(t) where _E_H(t) → E_R^⋆(t) is constant and equal to its initial configuration, which corresponds to the geometry of the virtual wand. To give the illusion of a real wand, participants are holding a stick that they point towards the robot end effector at t_0. §.§.§ Integration of a visual perturbation In order to evaluate the integration of the tool and of the mapping, we also propose at specific times to delete the visualization of the desired blue effector in direct mapping, and of blue the wand in the wand mapping. Users still hold the real object (cube or stick) and still see the half-transparent current position of the robot (magenta). The behaviour of the robot is exactly the same, meaning that, if the user stop their motion, the robot end-effector's will attain the invisible desired end-effector configuration. This mode is equivalent to introducing a delay in the command, which was visually resolved using the AR. It is used in the experimental section to study the integration of the mappings in the body schema. §.§ Protocol Prior to commencing the experimental session, participants were introduced to the Augmented Reality Environment. Instructions were provided regarding the mappings, encouraging participants to "focus on the task" and telling them that they could "move without any restrictions," allowing walking if necessary, thereby granting maximum freedom to the participants. Each participant used both control modes. The experiment comprised 7 trials for each control mode. Half of the participants initiated the experiment using the Direct Mapping control mode, while the remaining half began with the Wand Mapping control mode. For each operational mode, the 4th and 6th trials were executed without the blue visualization, whereas the remaining trials featured the complete augmented reality (AR) visualization. Subjects were given the liberty to rest for as long as they desired between trials. At the onset of each trial, the participants' hands were consistently positioned at the same starting point, indicated in the AR headset. This starting point corresponded to a 45cm distance (and a 45cm wand in Wand Mode) between the desired effector and the center of the participant's palm. Each of the 7 trials comprised the task of reaching a total of 30 targets — 15 central points and 15 outer targets. Targets alternated between the central point and outer locations, creating 15 back-and-forth trajectories. For outer targets, a 15cm hand translation (determined using a Fibonacci distribution on a 3D sphere <cit.> in order to explore various directions) was required, as well as a rotation between 0° and 45° around a randomly uniformly chosen 3D-axis. The order of the targets was consistent between trials, which participants were informed. The targets were not the same in both modes, but required the same hand starting and end points, which allowed to have consistent and comparable performances and hand trajectories between the modes. Certainly, employing identical targets for both mappings would inherently bias one method over the other. The one-to-one mapping is inherently proficient for tasks involving substantial rotations without translation, while the wand mapping is more efficient to manage extensive circular translations. Selecting uniform targets would have introduced disparities in mapping efficiency, and optimal hand paths with varying lengths, leading to incomparable trajectories and task completion times. Hence, our emphasis was on ensuring uniform optimal hand motions between both mappings. The set of targets also remained the same across trials, which participants were duly informed. Successful target reaching was acknowledged when the robot remained within the 2cm tolerance for at least 1 second, with a 10-degree tolerance in axis-angle representation relative to the target (represented by a large red cube, which turned green when the configuration of the robot was correct). Once achieved, the target disappeared, and the next one appeared. The chosen tolerances struck a balance between requiring precision from the participants and ensuring a rapidly feasible task. Given the potential challenges associated with depth perception and precise orientation in AR visualization, the chosen tolerances were designed to assess the intuitiveness of the mapping rather than focusing on participants' perception of AR objects. The total completion time for each set of 7 trials in each mode ranged from 25 to 30 minutes. The overall duration of the experiment, encompassing instructions and rest sessions, fell between 1 hour and 1 hour and 15 minutes. §.§ Questionnaires Additionally, after the whole experimental session, participants were asked to fill questionnaires with six questions. From French to English they can be translated as: * To what extent has the task required a physical effort? * To what extent has the task required a cognitive effort? * How much did you feel in the control of the robotic arm? * How intuitive was it to control the robotic arm? * To what extent have you felt that the robot was an extension of yourself? * Which global score would you rate this experiment? Participants answered with a score ranging from -3 (very little) to 3 (very much). One questionnaire was filled for each control mode, with and without the visualization of the desired effector / of the wand, resulting in a total of 4 questionnaires. For the reading of the results in the next section it is worth remembering that a high rated answer for questions 1 and 2 corresponds to a negative characteristic of the system (high load), while, for questions 3 to 6 it reflects a positive characteristic (intuitiveness, general satisfaction, etc.). Questionnaires groups are compared using a paired Mann-Whitney Wilcoxon U-Test as data are non-normal according to the Shapiro-Wilks Test. §.§ Participants The experimental study was carried out in accordance with the recommendations of Sorbonne Université ethics committee CER-SU, which approved the protocol. Twenty asymptomatic participants, aged 18-55, volunteered for this experimental study. They all gave their informed consent, in accordance with the Declaration of Helsinki. § RESULTS The success rate was of 100% for all participants and all targets in both modes, with and without visualisation. Distributions presented in this section show the median, the 25th and 75th quartile, as well as 1.96 time the standard deviation in dotted line, which correspond to approximately the 95% confidence interval. Rotations are denoted using the scalar geodesic angle from the angle-axis representation. §.§ Performances of the effector §.§.§ Task Performances We show on Fig.<ref> the distribution of task durations to complete one target, per trial, and per mode of mapping across all participants. The disappearance of the visualisation shows an increase over time in both modes. Otherwise, in all trials with visualisation, the median time per target remains relatively constant, as well as the quartiles. §.§.§ Overshoot In order to evaluate the precision, we also show on Fig.<ref> the distribution of overshoots above the target tolerance as a percentage, both in translation and rotation. The first visualisation removal (4th trial) shows an increase of the overshoot, but this phenomenon is mitigated during the second trial without visualisation (6th trial). We also observe a strong overshoot during the 2nd trial for both modes. §.§.§ End effector Motion We also present the desired effector (whether the tip of the virtual wand, or the projected hand motions) normalized motions as well as velocities Fig.<ref> and Fig.<ref> respectively during trials with Visualisation. Similar curves are observed without visualisation but exhibits higher standard deviations. Though the median velocity curve only exhibits one single motion, the standard deviation of the velocity actually shows that the average filter mitigates the presence of a secondary motion to adjust the effector's position. Therefore, we denote that the global motion consist of two sub-motions : a ballistic motions, which, based on the standard deviation of the velocity, accounts for 50% of the duration, and an adjusting motion accounting for the remaining 50%. This temporal distribution is consistent across both mappings. We also note that the ballistic motion accounts for 80 up to 95% of the total motion amplitude, while the adjust motion for the remaining 5 to 20%. §.§.§ Duration of ballistic and adjust phase In order to try to evaluate the time of the ballistic motion and the time of the adjust motion, we also present the 80% response time of the end-effector in Fig.<ref>. The value 80% was chosen as the lowest value of the standard deviation curve in position and rotation at the end of the ballistic motion Fig.<ref>. The results are consistent with the overshoot results with more important times during the 2nd and 4th trial, indicating that the ballistic motion was less precise during these phases, which probably caused the overshoot. §.§ Analysis of hand motions during ballistic phase We also want to analyze is there are any motor control changes between the two mappings. Though the targets have been designed to that the optimal hand trajectory is the same, we can also compare the hand motions and velocities. As suggested by <cit.>, we can analyze motor control invariant kinematic properties during the ballistic motion of the hand. This approach involves normalizing the ballistic motions in both time and amplitude as shown Fig.<ref>. The plots show very similar temporal behaviours in both translation and orientation. We can also plot the motions curves in translation and orientation from Fig.<ref> as a function of one another in order to study the spatial coordination behaviour. According to <cit.>, an invariant spatial coordination should be observed, with the rotation error plotted against the position error forming a linear pattern. As these plots are averaged on 20 participants, which might hide participant-specific differences, we also provide the same plot per participants Fig.<ref>. Individual participant analysis revealed that 18 out of 20 participants exhibited very similar patterns between the two modes, with only 2 participants showing patterns above the y=x line in Direct Mapping mode and below the y=x in Wand Mapping mode. As a comparison, we provide the same spatial coordination plot expressed at the tip of the effector during ballistic motion. The curve in Direct mode remains identical, as the relative motions of the hand and of the effector are one-to-one mapped, but the curve in Wand Mode switches from below the y=x curve for the hand, to above the y=x axis for the effector, as, indeed, the wand mapping mostly transforms rotations into large translations and conversely large translations into rotations of the effector, inverting the coordination pattern between the hand and the tip of the effector. This behavior is not only observed in the collective average but also in the individual behaviors of each participant. §.§ Preferences Finally we provide the questionnaires' results on Fig.<ref>. In order to establish significant statistical difference, the 2-by-2 p values comparison are also provided Fig.<ref>. The removal of visual cues in Direct Mapping significantly increased cognitive load according to the results, but the questionnaire responses remained closely aligned across both mapping modes and were generally positive in all conditions with some non-significative preferences for the Wand Mapping. § DISCUSSION The performances between the two mapping modes were found to be nearly identical, with both the durations to complete the tasks and the overshoots exhibiting a strong similarity. Orientation overshoot was slightly higher in the Direct Mapping mode, though not exceeding 10%. This variance may be attributed to the fact that the pure orientation of the effector requires larger motions in Wand Mapping. In the 2nd trial, there was an increase in overshoot without a corresponding increase in time. It's plausible that participants, informed that the targets remained the same, attempted to expedite their movements, inadvertently resulting in higher overshoot. Though, the operator being faster than the Kinova robot itself, it did not impacted the overall task duration. Interestingly, this behavior was noted in both mappings. In the 3rd trial, participants in both mappings demonstrated a reduction in overshooting, suggesting an awareness that the overshoot strategy was sub-optimal, despite the delay in the real robot's response. Upon introducing visual perturbations, both overshoot and completion time increased. In the second perturbed trial, overshoot significantly decreased, accompanied by a reduction of the 80% response time. This suggests that participants achieved a better integration of the mapping, leading to more accurate ballistic motions. However, the precision and the adjustment phase remained challenging without visualisation, indicating that further training may be necessary to enhance these aspects. Remarkably, performances without visualization were close to one another for both mappings, suggesting that both mappings were similarly integrated in the body schema. The overall behavior of normalized effector motion was also notably similar between the two modes. However, one can see that in both mode the standard deviation of the speed curve reveals two distinct sub-motions: a ballistic motion accounting for 50% of the duration and an adjustment motion accounting for the remaining 50%. The ballistic motion, in turn, represented 80 to 95% of the overall effector motion in distance and rotation. Then, to examine potential effects of mapping on motor control, an analysis of hand motions was conducted, focusing on ballistic phase. The median spatial coordination indicated a slight deviation below the y=x curve, possibly attributable to the robot's delay which is slightly more pronounced in rotation due to the arm's geometry and reconfiguration challenges, but is overall close to being linear. Interestingly, again, the median pattern remained highly similar between the two mapping modes while the effector task was different for the two modes. Notably, when comparing the same plot for the end-effector, a clear difference emerged, with the wand mapping consistently following a pattern above the y=x line. This suggests that hand motions and motor control were not significantly affected by the mapping choice whereas the effector was exhibiting different trajectories and behaviours, which supports that constructing targets to elicit identical optimal hand motions for both modes was indeed observed. In the context of questionnaires, no major differences were observed between Wand and Direct Mapping. This overall consistency in participant feedback supports the conclusion that the mapping did not significantly impact motor control, and participants generally perceived both modes positively for all the listed criteria. Based on the comprehensive observations and analyses of performance metrics, participant preferences, and motor control dynamics conducted in this study, our findings suggest that the choice between Direct and Wand mapping in a general hand scenario — exploring various translations and orientations - does not appear to be of high significance, and both mapping modes demonstrated comparable outcomes. However, our knowledge of the mapping prompt us to consider specific scenarios where the choice of mapping may indeed hold significance. In instances where the task necessitates translations with minimal rotations, Wand Mapping could prove advantageous. The leverage provided by the wand, particularly in scenarios involving large translations, may contribute to enhanced control and improved overall performance. Conversely, in situations requiring substantial rotational reconfigurations alongside minor translational displacements, Direct Mapping may be the preferred choice. Moreover, the suitability of each mapping method might also depend on the specific body part employed for robot control. Wand Mapping, for instance, could offer advantages when using body parts with significant rotational capabilities, such as the head or trunk (as to be found in the case of assistance to paralyzed users), allowing for the scaling of rotations to achieve substantial robotic translations due to the leverage provided by the wand. this scaling not only converts rotations into translations but is exclusively grounded in the length of the link. Therefore, it relies on geometric considerations of the task, differing from traditional translation scaling that necessitates the adjustment of a gain. Oppositely, Direct Mapping holds an advantage to deal with important rotations without translating the effector, which is hardly possible with the Wand Mapping. In essence, the optimal choice between Direct and Wand Mapping should be contingent on the specific requirements of the task, the body part used for control, and the desired balance between rotational and translational movements. These considerations underscore the importance of tailoring the mapping strategy to the specific characteristics and demands of the robotic manipulation scenario. Likewise, in certain scenarios, such as navigating a wheelchair, position control cannot serve as a substitute for velocity control. Hence, we are contemplating the introduction of hybrid modes that would permit transitioning between Wand, Direct, or Velocity control. Moreover, these hybrid modes could incorporate the simultaneous utilization of multiple controls, such as employing a joystick to manage the length of the wand while employing head orientations to position the desired effector's position. § CONCLUSION In this paper, we introduced an innovative control paradigm, termed "wand" mapping or "rotation-to-position" mapping, which utilizes a virtual linkage mapping for 6 DoF manipulation. The primary advantage of this approach lies in its ability to translate body rotations into substantial effector translations, albeit at the cost of a reduced rotational task space. A comparative analysis with the well-established one-to-one "direct" position mapping, in a general 6 DoF scenario with equivalent hand displacements, reveals comparable performance, user preferences, and motor control behaviors across both mappings. The objective of this wand mapping is not to supplant the direct mapping method but rather to offer an alternative that may prove more suitable in specific scenarios. For example, the amplification could prove beneficial in scenarios involving body parts characterized by a wide rotational range and limited translation range, such as the head or trunk. While the amplifying mode offers advantages, the suitability of this mapping depends on the nature of the task and, for applications demanding intricate orientations with fixed translations, an automatic handling approach, such as vision-based grasping, may be needed. A switch between direct and wand mapping could also prove advantageous if operators can seamlessly transition between the two modes. IEEEtran
http://arxiv.org/abs/2406.08534v1
20240612164745
Optimizing Container Loading and Unloading through Dual-Cycling and Dockyard Rehandle Reduction Using a Hybrid Genetic Algorithm
[ "Md. Mahfuzur Rahman", "Md Abrar Jahin", "Md. Saiful Islam", "M. F. Mridha" ]
cs.NE
[ "cs.NE", "cs.AI" ]
inst1]Md. Mahfuzur Rahmancontrib sm.mahfuz031@gmail.com inst1,inst2]Md Abrar Jahincontrib abrar.jahin.2652@gmail.com inst1]Md. Saiful Islam saifuliem@iem.kuet.ac.bd inst3]M. F. Mridhacorauthor firoz.mridha@aiub.edu [inst1]organization=Department of Industrial Engineering and Management, addressline=Khulna University of Engineering and Technology (KUET), city=Khulna, addressline=Okinawa Institute of Science and Technology Graduate University (OIST), city=Okinawa, postcode=904-0412, country=Japan [inst3]organization=Department of Computer Science, addressline=American International University-Bangladesh (AIUB), city=Dhaka, postcode=1229, country=Bangladesh [corauthor]Corresponding author [contrib]Authors contributed equally § ABSTRACT This paper addresses the optimization of container unloading and loading operations at ports, integrating quay-crane dual-cycling with dockyard rehandle minimization. We present a unified model encompassing both operations: ship container unloading and loading by quay crane, and the other is reducing dockyard rehandles while loading the ship. We recognize that optimizing one aspect in isolation can lead to suboptimal outcomes due to interdependencies. Specifically, optimizing unloading sequences for minimal operation time may inadvertently increase dockyard rehandles during loading and vice versa. To address this NP-hard problem, we propose a hybrid genetic algorithm (GA) QCDC-DR-GA comprising one-dimensional and two-dimensional GA components. Our model, QCDC-DR-GA, consistently outperforms four state-of-the-art methods in maximizing dual cycles and minimizing dockyard rehandles. Compared to those methods, it reduced 15-20% of total operation time for large vessels. Statistical validation through a two-tailed paired t-test confirms the superiority of QCDC-DR-GA at a 5% significance level. The approach effectively combines QCDC optimization with dockyard rehandle minimization, optimizing the total unloading-loading time. Results underscore the inefficiency of separately optimizing QCDC and dockyard rehandles. Fragmented approaches, such as QCDC Scheduling Optimized by bi-level GA and GA-ILSRS (Scenario 2), show limited improvement compared to QCDC-DR-GA. As in GA-ILSRS (Scenario 1), neglecting dual-cycle optimization leads to inferior performance than QCDC-DR-GA. This emphasizes the necessity of simultaneously considering both aspects for optimal resource utilization and overall operational efficiency. Dual Cycling Quay Crane Dockyard Rehandles Genetic Algorithm 2D Crossover 2D Mutation § INTRODUCTION Trade is the essence of the global economy, whereas ports are the economic lifeline for countries. Today, almost 80% of the world’s goods are carried by shipping containers. So, ships are crucial in this system. Therefore, countries are competing to have large fleets. Nowadays, large vessels can carry over 25,000 containers. The operation to receive these mega-ships needs preparations. The goal and challenge of every port is now to reduce the turnaround time of vessels. For example, 15 years ago, the turnover time of a mega vessel was about 14 to 15 days (about 2 weeks); nowadays, it takes only 3 to 4 days, and they are targeting in the future to reduce the turnover time by 3 to 4 hours. There are so many methods and tricks that the shipping companies figured out over the years to make the process of loading and unloading the ship easier and quicker. In this research, we will introduce such a method. The most expensive single unit of container handling equipment in port terminals is the Quay Cranes (QCs). As a result, QC availability is one of the main operational bottlenecks at ports <cit.>. Ports can decrease ship turnaround time, increase productivity, and boost freight transportation system throughput by increasing QC efficiency <cit.>. Our research addresses this key bottleneck to port productivity. The approach is low-cost to increase productivity; neither new infrastructure nor technologies are needed. Although our strategy would not fix the capacity issue in the long term, it can be applied more quickly than other approaches and be used in conjunction with other methods. The majority of ports use the single cycle approach for QC operations in which the QC typically performs loading operations after all unloading tasks have been completed (see Figure <ref> (a)). Dual cycling strategy is an advanced technique to improve port efficiency of loading and unloading by eliminating some of the empty crane moves (see Figure <ref> (b)). Because it allows the QC to do loading and unloading simultaneously at the same time <cit.>. Maximizing the number of dual cycles would ultimately decrease a ship's turnaround time. Goodchild also showed that the unloading sequence of a bay-stacks[The definition of Bay is given in section <ref>, also see Figure <ref>.] impacts the number of dual cycles. So, an optimal solution remains for which the number of dual cycles will be maximum. He also suggested a greedy approach to find a heuristic solution instantly. After that, numerous researchers suggested several solutions to improve the dual cycling strategy. The mathematical model of quay crane dual-cycling scheduling (QCDCS) is considered a two-machine flow shop problem. It results in an NP-hard problem. Hence, metaheuristic algorithms, like Genetic algorithm (GA), would give the result much better and faster. Another big issue while loading and unloading containers in ports is rehandling them in the dockyard. Rehandling is necessary while retrieving a container, known as a target container, that is not on the top of the stack. Moving a blocked container to a different stack can also result in further rehandles during the retrieval process <cit.>. Depending on the closest stack, lowest stack, and optimization method, the obstructed container may be shifted to that stack in the same yard bay. As rehandling is very time-consuming, ports aim to minimize the number of rehandles as much as possible. In this case, the mathematical model of minimizing the number of rehandles at the dockyard also results in an NP-hard problem. So, here, the GA would also give better results. In contrast to the previous research, we noticed that the number of dual cycles may be maximized for a particular unloading sequence. Still, it can increase the number of dockyards rehandles. Similar issues will arise when we optimize dockyard rehandles separately without considering the unloading and loading operations. This work aimed to optimize the process in one model, providing an unloading sequence and a dockyard container arrangement, maximizing dual cycles, and minimizing dockyard rehandling. Moreover, we introduced a method called Maximizing Quay Crane Dual Cycles and Minimizing Dockyard Rehandles by GA (QCDC-DR-GA) to solve the model. Finally, we verified the algorithm's robustness by comparing it with results given by QCDCS Greedy Upper Bound, GA-QCDCS strategy, GA-ILSRS with single cycling, and GA-ILSRS with dual cycling separately. This study presents seven significant contributions to the domain of container unloading-loading operations at ports: * This study empirically validates the correlation between the unloading sequence of stacks within a ship row and the occurrence of dockyard rehandles, particularly in the context of dual-cycling strategies. * We develop a comprehensive model that integrates dockyard and Quay Crane-Double Cycling (QCDC) operations, offering a holistic approach to optimizing container handling. * The paper introduces a novel hybrid GA approach to optimize the proposed model, enhancing efficiency and performance in container operations. * We propose a specialized GA, combining one-dimensional and two-dimensional GA techniques that are introduced to address the unique challenges of container handling optimization. * We conduct an extensive analysis of computational parameters within the GA framework to identify and implement the most influential parameters and methods tailored to the specific requirements of the container handling problem. * We provide the rationale behind selecting different strategies and methodologies at various stages of the GA optimization process, providing valuable insights into algorithmic decision-making. * Finally, we conduct a comprehensive benchmarking analysis, comparing the proposed strategy against four state-of-the-art algorithms. The superiority of our model is rigorously validated through statistical paired t-tests, demonstrating its effectiveness and reliability in optimizing container handling operations. This article is structured in the following manner: The “[literature]Literature Review” section discusses the relevant literature on dual cycling and reducing rehandles. In the “[problem]Problem Description” section, we outline the problem statement. The “[method]Methodology” section covers model formulation (objectives and constraints) and our approach, QCDC-DR-GA, including its workflow, strategies, and parameters. The “[results]Results” section details scenario generation, computational experiments, and result analysis. Finally, the “[conclusion]Conclusions” section summarizes the work, highlights primary contributions, and suggests future directions. § LITERATURE REVIEW A considerable volume of operational research has focused on resolving challenges in the realm of ports. These studies commonly concentrate on matters related to strategic design and planning, such as – the optimal number of berths and crane combinations <cit.>, the optimal size of storage space <cit.>, determination of the number of AGVs required <cit.> and trade-offs between storage space and handling work <cit.>. Also, works have been done in operational scheduling in berth and quay cranes <cit.>, dispatching method for yard cranes and AGVs <cit.>, and dynamic deployment and scheduling of yard cranes <cit.> <cit.>. Since quay cranes are the main bottleneck in the efficient operation of container terminals, their operational efficiency determines the turnaround time of vessels in seaports <cit.>. Hence, more work has been done to improve the efficiency of the quay crane operation. Kim and Park <cit.> formulated a mixed-integer programming model that accounted for diverse constraints concerning quay crane operations. They introduced a heuristic search algorithm named "greedy randomized adaptive search procedure" to address this issue effectively. Tavakkoli et al. <cit.> introduced a novel mixed-integer programming (MIP) model for solving the quay crane scheduling and assignment problem (QCSAP). Traditional methods struggle to solve this complex problem efficiently within reasonable timeframes. To address this, they presented a genetic algorithm (GA) as a solution for real-world scenarios. Also, Fu, Diabat, and Tsai <cit.>, along with Diabat and Theodorou <cit.>, worked on combining quay crane assignment and scheduling problem. They also used the GA to find solutions. Their results showed that the GA was better and faster than another technique called Lagrangian relaxation. The effectiveness of quay cranes relies on how well they work together with other equipment like yard trailers and cranes. Certain researchers investigated solving the integration scheduling problem to enhance coordination and overall efficiency at container terminals. For example- Bish <cit.> introduced models and algorithms that brought together various sub-processes. These included finding suitable storage spots for unloaded containers, sending vehicles to specific containers, and planning the loading and unloading activities for quay and yard cranes. Chen et al. <cit.> created a comprehensive model to enhance the efficiency of the complete loading and unloading procedure. Zeng, Yang, and Hu <cit.> created a model for recovering from disruptions in berth and quay crane schedules. However, the previously mentioned literature concerning scheduling primarily centers on the quay crane single cycling (QCSC) method, with relatively limited advancement in QCDCS scheduling literature. §.§ Dual Cycling Goodchild and Daganzo first introduced the QCDCS strategy <cit.>. They formulated the double cycling problem as the 2-machine flow shop problem and proposed a greedy approach to generate a sequence for loading and unloading. They also ran a trial at Port of Tacoma, US (2003). In this trial, the revised meantime for a single cycle was 1 minute and 45 seconds, while for a double cycle, it was 2 minutes and 50 seconds. Consequently, double cycling resulted in a time saving of 40 seconds for each pair of containers subjected to the process. Later, in 2007 <cit.>, they devised a framework for assessing QC performance. They also formulated a straightforward formula to forecast the effect on turn-around time. Their findings also revealed that employing a double-cycling approach could lead to a 10% reduction in operating time and a decrease in the demand for yard tractors and drivers. Song <cit.> proposed a formula to find out the optimal starting point of double cycling that maximizes its frequency. The above studies concentrated on implementing QCDCS for single quay cranes QCs. Their practical experiments revealed that this approach could enhance the productivity of each QC by around 10 to 20%. Then Zhang et al. <cit.> suggested multiple QC double cycling models and solved a mixed integer programming model. At the same time, the sequence was generated using a constructive Jonson’s rule with an effective local search method. D Ku and TS Arthanari <cit.> pointed out a flaw with the existing multiple QCDCS model that lets cycles that are not implementable. Lee et al. <cit.> investigated the computational complexity of the QCDCS problem. They showed it can be formulated as a flow shop scheduling problem with series-parallel precedence constraints, solving it polynomially. For ease of implementation, they presented an optimal algorithm for the general QCDCS problem, a simplified version of Sidney’s algorithm. Zeng et al. <cit.> developed a mixed-integer programming model for QCDCS. The model considered the stowage plan of outbound containers and the operational sequence of QCs. A heuristic method called bi-level GA was designed to solve the model. Zhang et al. <cit.> focused on overall handling efficiency and the system’s stability of container terminals with double cycling. He et al. (2019) <cit.> solved a mixed integer programming model, which covers the main operational constraints (including multiple hatch-covers) in a container terminal. Zheng et al. <cit.> presented a dynamic programming approach to solve the QCDCS problem optimally. Ahmed et al. <cit.> developed two simulation models and implemented them based on a real-life case study considering uncertainties in the work task duration. §.§ Reducing Rehandles The issue of rehandling is typically connected to the arrangement of containers on ships and in yards. Some studies have addressed the rehandling problem by examining the containership stowage plan or the yard storage arrangements. They concentrated on refining the rehandling strategy based on specific loading sequences without considering how the loading sequence might impact rehandling. Kim <cit.> proposed a methodology to calculate the expected number of rehandles to pick up a random container and the total number of rehandles to pick up all the containers in a bay for a given initial stacking configuration. Imai et al. <cit.> formulated the issue as a multi-objective integer programming challenge. They employed the weighting method and acquired a collection of non-inferior solutions. Sauri and Martin <cit.> suggested three stacking strategies, which consider the containers’ arrival rates, departure rates, and storage yard characteristics, then developed a model to determine the number of rehandles. Lee et al. <cit.> proposed a novel approach that integrated yard truck scheduling and storage allocation and developed a hybrid insertion algorithm to solve this problem. Gharehgozli et al. <cit.> presented a yard crane scheduling problem to carry out a set of container storage and introduced an algorithm based on the corridor method, designed to address the problem of relocating blocks within block stacking systems. Various algorithms are employed to address relocation problems, but the GA is commonly adopted to conduct the solutions because of its efficiency. For example, Homayouni et al. <cit.> integrated the scheduling of quay cranes, AGVs, and handling platforms and proposed a GA to solve the problem. Ji et al. <cit.> presented an improved GA design to solve the model. Compared with earlier strategies and heuristics, they demonstrated the effectiveness of their optimization approach and algorithm. Previous research tackled parts of optimizing container handling, such as maximizing dual cycles or minimizing dockyard rehandles, but not both simultaneously. This work bridges the gap by proposing a unified model that optimizes both aspects simultaneously. Our QCDC-DR-GA algorithm finds the ideal unloading sequence and dockyard layout to maximize dual cycles and minimize rehandles. Proven effective against existing methods, it offers a holistic solution for efficient container handling at ports. § PROBLEM DESCRIPTION The layout of the containers, as shown in Figure <ref> on a ship or in the port yard, can be modeled as a three-dimensional matrix. Generally, the three dimensions are named rows, bays, and tiers (see Figure <ref>). Containers are stacked one above the other and arranged in rows. A row is stretched across the width of the bay or ship. Nowadays, some vessels can hold up to 30 stacks in a row and up to 30 rows (40-foot containers) along the length. QCs usually process unloading and loading across the ship bay. In the QCSC method, loading operations can only be done after completing the unloading operations. However, in the QCDC method, a QC performs unloading and loading operations simultaneously in a particular ship bay. The operating cycle of a QC can be broken down into the following parts: * Locking and unlocking the trolley with the container. * Horizontal movement of the trolley (with container). * Vertical movement of the trolley (with container). §.§ Case Consideration After the arrival of a vessel in port with a set of containers to be unloaded and a loading plan for a set of containers, Let U_c and L_c are the numbers of containers to be unloaded and loaded, where c is the stack number. Figure <ref> illustrates an example used in this work. Let S be the set of stacks in a row. |s|=N denotes the number of stacks in the set S, and P a permutation of set S telling the ordering of the stacks. For example, in Figure <ref> (a), the set of stacks is S={A, B, C, D}. Then P(1) = A, P(2) = B, P(3) = C and P(4) = D. How the order in which the stacks within each row are handled affects the total number of cycles is explained by Goodchild (2006) <cit.>. He explained (graphically) that there is an optimum sequence of the set S for which the number of cycles w is minimum. §.§.§ Generic Double Cycling Method * Select any unloading permutation, P'. Unload all the containers of the first stack and then the second stack, and proceed in this manner until all stacks have been unloaded. * Select a loading permutation, P, and load the stacks according to that permutation. Loading can be started in any stack when it becomes empty or contains only containers that would not be unloaded at this port. Once loading has been initiated on a stack, continue the process until that stack is fully loaded. §.§.§ Number of Rehandles in Dockyard Rehandles occur when the desired container is not on the top of any stacks. As we've said in the assumption [vi], we followed the nearest lowest stack strategy. It is the integration of the nearest stack strategy and lowest stack strategy. We've integrated them because of the difficulty of applying the lowest stack strategy in real-life scenarios. Otherwise, the lowest stack strategy gives better results in reducing rehandles. The example shown in Figure <ref> (b) the number of rehandles for the described strategy and given loading sequence is 3. § METHODOLOGY §.§ Mathematical Model The QCDC problem is modeled as a two-machine flow shop problem. §.§.§ Assumptions In this research, we will assume the following conditions: * The containers by the dockside are prepared for loading as needed. * The unloaded containers are immediately removed from the area and kept in the right place. * Rehandles of the containers on the ship are counted with both unloading and loading containers. * Rehandles on the ship are replaced in the same stack from which they were removed. * In this work, we consider that rehandles on the ship are moved between the vessel and the apron. But in reality, some of the rehandles may only be moved between stacks on the vessel. * The rehandles in the dockyard are done following the nearest stack strategy in which the rehandled container is put on the nearest lowest stack. * The turnaround time of a vessel is a measure of QC efficiency, and the goal of a QCDC is to reduce the total turnaround time. A proxy for that is the total number of single cycles (w_s) and dual cycles (w_d) required to unload and load the vessel. * We will finish unloading and loading one row before shifting the crane lengthwise along the ship to the next row. Due to the difficulty with the lateral movement of the QC along the ship, it is not feasible to perform double cycling across two rows. * No interruptions happen due to inbound vehicles or cranes. §.§.§ Symbols and Decision Variables The notations are as follows: m: Bay of containers in the yard n: Stack of containers in the yard o: Tier of containers in the yard U_c: Number of containers to unload in stack c∈ S L_c: Number of containers to load in stack c∈ S TU_c: Completion time of unloading c∈ S TL_c: Completion time of loading c∈ S T: Total completion time of unloading loading R: Number of rehandles of a row in the dockyard α: Average completion time of a single cycle β: Average completion time of a double cycle γ: Average time it takes to tackle a rehandle at the dockyard μ: Large value H_mn: Highest tier of the yard bay m and stack n h_mn: Height of the yard-bay m and stack n The decision variables are as follows: X_ij: binary variable for the sequence of unloading jobs (1 if j∈ S is loaded after i∈ S and 0 otherwise) Y_ij: binary variable for the sequence of loading jobs (1 if j∈ S is loaded after i∈ S and 0 otherwise) x_rmno: Equals to 1 if the container (m,n,o) is loaded onto the ship-bay and 0 otherwise. §.§.§ Model Establishment The objective of the scheduling problem is to minimize the maximum completion time of all jobs while adhering to the subject constraints. Here, the completion time T depends on w and R. Hence, T=α w_s + β w_b +γ R. minimize, 10 T_max subject to, TL_c-TU_c≥ L_c ∀ c∈ S TU_i-TU_j+μ X_ij≥ U_i ∀ j,i∈ S TU_j-TU_k+μ (1-X_ij)≥ U_j ∀ j,i∈ S TL_i-TL_j+μ Y_ij≥ L_i ∀ j,i∈ S TL_j-TL_i+μ (1-Y_ij)≥ L_j ∀ j,i∈ S TU_c≥ U_c ∀ c∈ S h_mn≤ H_mn X_ij∈1,0 ∀ j,i∈ S Y_ij∈1,0 ∀ j,i∈ S x_rmno∈1, 0 These constraints define the model completely. Constraint (<ref>) ensures that a stack can only be loaded after all the necessary stacks have been unloaded. Constraints (<ref>), (<ref>), (<ref>) and (<ref>) ensure that in an unloading permutation P' a stack is unloaded after the previous one has been unloaded. For every pair of stacks (i, j), either stack i is unloaded before stack j (if (X_ij=1) or the opposite (if (X_ij=0), and they also ensure that the time elapsed between the two events is big enough to unload the second of the two stacks. Constraints (<ref>), (<ref>), (<ref>) and (<ref>) are equivalent to (<ref>), (<ref>), (<ref>) and (<ref>) but for the loading events. Constraint (<ref>) ensures that a row's total unloading completion time allows for enough time to unload that stack at least. Constraint (<ref>) ensures the restriction on stack height. Constraint (<ref>) enforces the binary conditions upon the flow variables. §.§ QCDC-DR-GA This paper assumes the number of stacks to unload and load from a ship row is S. The unloading sequence of S stacks is diversiform, and the number of unloading sequences is S!. For each certain unloading sequence, we need to determine the maximum number of dual cycles, which increases the complexity of the problem. Moreover, S! represents the approximate value of this problem's complexity. On the other hand, assuming the number of retrieval containers at the dockyard is N. The number of ways to arrange containers N on the dockyard is again N!. Here, the estimated complexity of the problem also becomes N!. This paper aims to determine an unloading sequence for which several dual cycles will be maximized and a dockyard container arrangement for which the number of rehandles will be minimized. Then, the complexity of our problem will be (S!× N!). In computer science and operations research, the genetic algorithm (GA) is a metaheuristic approach, drawing inspiration from natural selection mechanisms, which fall into the broader category of evolutionary algorithms (EA). GA can generate high-quality solutions for optimization problems. We developed a mixed GA considering a parallel one-dimensional unloading sequence and two-dimensional dockyard plan while crossover and mutation. The more challenging part of QCDC-DR-GA is calculating fitness (calculating the cost of each generation) and integrating the unloading sequence and dockyard arrangement. The operating flow path of the QCDC-DR-GA is shown in Figure <ref>. §.§.§ Set Initial Population The initial population (P) comprises chromosomes representing unload sequences and dockyard plans. Notations include: P = population of chromosomes n = the number of chromosomes in P c_i = the i^th chromosome in P, where 1≤ i≤ n c_i(us) = part of chromosome representing unloading sequence c_i(dp) = part of chromosome representing dockyard plan A_1 = [ [ 1 2 3 4 5 6 7 8 9 10 ]]; A_2 = [ [ 3A 3B 1C 1D; 1A 2B ; 2A 1E 1B ; 3C 2C 3E 3D; 2D 2E 4A 4D; 4B 4C 3C ; 4E ]]; Here, each row in the vector A_2 denotes a dockyard stack, and each element of the matrix contains two pieces of information that denote the position of the ship (stack number and position in the stack) where it would be loaded. For example, '3A' means that the container would be on the third stack, the first position of the selected ship row. A_1 and A_2 are nothing more than parts of the chromosome c_i(us) and c_i(dp). §.§.§ Crossover We chose two parents from the previous generation and combined their genetic information to generate new offspring for the next generation. However, we needed to use two different methods for the two parts of the chromosome. For the 1D vector, we did 1D-crossover, and for the 2D vector, we did 2D-crossover operations. One Dimensional Crossover: From the various crossover techniques, we chose the Two-Point Crossover method, a special case of N-Point Crossover. At first, we selected two random points on the individual chromosomes and exchanged the genetic materials along these points. The common genes were removed from the offspring, and the dropped genes (if any) were appended at the back of the chromosomes (see Figure <ref>). Two Dimensional Crossover: For the 2D vector, we use the 2D Substring Crossover method. This involves row swap and column swap operations, which is a modified version of the 2D crossover introduced in the aircraft scheduling problem <cit.>. Row-wise operation: Two random points are selected, and the entire row of the parents between these points is swapped. Column-wise operation: The column-wise operation is performed on the selected rows using the Two-Point Crossover method previously used for 1D vector crossover. Repeated items are removed, and any dropped-out items are appended to the offspring (see Figure <ref>). §.§.§ Mutation The mutation is a genetic operation that maintains the genetic diversity of the chromosomes between generations. It is similar to biological mutation. Again, we use two different methods for two parts of the chromosome. Notation: P_m = the probability of mutation. The mutation operation will not occur after every cross. The mutation method is described in algorithm <ref>. One Dimensional Mutation: We utilize the Swap Mutation method for the 1D chromosome part, interchanging two selected genes after crossover (see Figure <ref>). Two Dimensional Mutation: Here, we chose the 2D Two-Point Swapping Mutation method for the 2D part of our chromosome. This is also the modified version of the mutation method introduced by Tsai et al. <cit.>. The method is described in algorithm <ref> (also see Figure <ref>). Notations: R: the number of rows in the 2D chromosome part. C_R_i: the number of columns in the i^th row. §.§.§ Calculate Fitness The fittest chromosomes are selected from every generation according to their cost. The cost here is the total completion time, which is nothing but our objective function. Every time a new generation is produced, the cost is calculated and stored against every chromosome. Then, the population is sorted in ascending order according to their cost. The population is now ready for the selection stage. The calculation method of the cost of each chromosome is explained in details in algorithm <ref> and <ref>. unloadFirstStackunload_first_stack fnFunction: loadingloading_operation calRehandlescalculate_rehandles §.§.§ Selection We adopted the Roulette Wheel selection method. Roulette selection is a probabilistic method for choosing individuals based on their fitness, with the probability of selection directly proportional to their fitness. The method is inspired by real-world roulette, but this has distinct differences from traditional roulette mechanisms. Instead of having all slots with the same probability of being selected, we implemented a weighted version of roulette (Figure <ref>). The likelihood of selection increases with an individual's higher fitness (or lower cost). Notations: P_E = the percentage of elite class chromosomes of a generation. E_rw = End value of the roulette wheel. Elite class is a portion of a generation who are the fittest. The elite class will be automatically selected for the next generation every generation. The P_E of our model was 20%. The roulette wheel selection steps are shown in algorithm <ref>. §.§.§ Termination The termination condition of the GA determines when the run ends. Initially, the GA progresses quickly, yielding better solutions every few iterations. However, this progress tends to slow down later, with minimal improvements. To guarantee that our solution approaches optimality, we establish a termination condition as follows: g_i denotes the i^th generation, G represents the maximum number of generations, and N_s stands for the number of successive generations where the fittest chromosome incurs the same cost. The genetic algorithm (GA) execution concludes according to the criteria specified in Algorithm <ref>. §.§.§ Parameters The GA control parameters are shown in table <ref>. The parameters best fit our model, such as population size, crossover technique, elite percentage, mutation probability, selection method, etc. As the solution to our problem is a smooth landscape type and the complexity of our problem is medium, we selected these parameters to fit the situation. § RESULTS AND DISCUSSION This section addresses the magnitude of QCDC-DR-GA. We offer tools to translate cycle-based benefits into time equivalents and validate those estimates against real-world double-cycling data. With an eye on the present and future, we analyze the financial impact of double cycling, estimating potential rewards for both existing vessels and those gracing the waves in the years ahead. The results of the experiments were obtained using a computer with 8 gigabytes of RAM, an Ubuntu 22.04 operating system, and an Intel Core i5 8th Gen. The algorithm was implemented using Python libraries- Pandas and NumPy. §.§ Performance Comparisons of the Algorithms To validate the effectiveness of our proposed QCDC-DR-GA algorithm, we conducted a comprehensive comparison with three established optimization methods: * Dual-Cycling Greedy Upper Bound Approach: It represents a greedy approach in which the unloading sequence is generated by sorting the stacks of containers of the ship row in descending order <cit.>. This heuristic method focuses solely on implementing dual-cycle loading/unloading, neglecting dockside rehandles. It's a baseline approach to gauge the potential improvement offered by more complex methods. * Mixed-Integer Programming Model for QCDCS (bi-level GA): This approach represents an improvement over the greedy upper bound by incorporating QCDCS optimization within a bi-level genetic algorithm framework <cit.>. It came with a significant advancement over the greedy approach by incorporating QCDCS optimization. * GA-ILSRS: This method, explored in two scenarios, optimizes dockyard rehandles using a genetic algorithm combined with Iterated Local Search <cit.>. However, it falls short by neglecting the loading/unloading process. We assumed two scenarios of selecting two methods of loading-unloading. * Scenario 1: Considers dual cycling for loading/unloading while neglecting dockyard rehandles. * Scenario 2: Focuses solely on dockyard rehandling with single-cycle loading/unloading. The greedy upper bound approach is a heuristic approach for implementing dual cycling only, but they did not consider any dockside rehandle. The QCDCS-bilevel GA overcame this heuristic approach but did not consider the time consumed by the dockyard rehandles. On the other hand, the GA-ILSRS only optimized the dockyard rehandles without considering the loading-unloading system. As for QCDC-DR-GA, we considered both loading-unloading with dual cycling and dockyard rehandles in one model and optimized the model using a sophisticated GA approach. §.§ Datasets Six scenarios were designed according to the number of stacks and the maximum stack height of containers in each row. Considering the characteristics of commonly used container ships, we assume that the number of stacks is 5 to 30 and the maximum stack height is 4 to 10. The program generated a row's loading and unloading plan and the dockyard container arrangements. Table <ref> shows the configurations of six datasets of unloading and loading plans. Table <ref> shows a sample unloading plan of a small ship, and Table <ref> shows a sample loading plan of that ship. Each cell tells the container location information in Table <ref>. For example, 1B means that the container is at the 1st stack, 2nd tier of the row, and F means the container would remain on the ship. The information in Table <ref> is alike. The program also generated the dockyard plan according to the loading plan. Here, we assumed that the highest dockyard container stack height is 6. The sample of the dockyard plan is elaborated in <ref> as A_2. Tables <ref> and <ref> provide container location information. For instance, 1B indicates a container located in the 1st stack, the 2nd tier of the row, with F indicating the container remaining on the ship. Assuming a maximum stack height of 6, the dockyard plan is discussed further in Section <ref> as A_2. §.§ Numerical Tests The six scenarios described in subsection <ref>, Table <ref> are used for numerical tests. The processing time of QCs for single and dual cycling are taken from the trial run done by Goodchild <cit.>, mentioned in subsection <ref> are 90 seconds and 170 seconds. The rehandling time of a container for a gantry crane at the dockyard is generated from a uniform distribution of 60 seconds. §.§.§ Test Results We comprehensively evaluate the proposed QCDC-DR-GA algorithm, comparing its performance against four established methods for optimizing container handling at ports. The evaluation utilizes six datasets described in Table <ref>, representing different scenarios with varying container numbers and ship configurations. Table <ref> presents the simulation results obtained by each method on these datasets. Additionally, Figure <ref> visually compares the performance of QCDC-DR-GA with the other approaches. §.§.§ Key Findings The key findings and remarks of the simulation are as follows: * The proposed QCDC-DR-GA model consistently outperforms all other methods in terms of maximizing dual-cycles and minimizing the number of container handling. This demonstrates the effectiveness of our approach, which combines QCDC optimization with dockyard rehandle minimization and optimizes the total unloading-loading time of the entire process. * The results support the hypothesis that separately optimizing QCDC and dockyard rehandles leads to suboptimal outcomes. Compared to QCDC-DR-GA, methods like QCDC Scheduling Optimized by bi-level GA and GA-ILSRS (Scenario 2) show limited improvement due to their fragmented approach. * Similarly, neglecting the dual-cycling in QC operation optimization, as in GA-ILSRS (Scenario 1), leads to inferior performance compared to QCDC-DR-GA. This highlights the importance of considering both aspects simultaneously for optimal resource utilization. By integrating QCDC optimization and dockyard rehandle minimization into a single model, QCDC-DR-GA demonstrates significant performance advantages over existing methods. This unified approach offers a powerful tool for port operators seeking to minimize container handling times and maximize operational efficiency. §.§ Significance Test This section presents the performance evaluation results of various port optimization strategies in a stacking environment. The experiments were carried out in different scenarios to cover the various vessel sizes. The operation time (in minutes) was measured for each strategy in each scenario. The statistical analysis was performed using the two-tailed paired t-test method to assess the significance of differences in operation time between the proposed QCDC-DR-GA strategy and other strategies. The experiments were conducted across six scenarios, each representing a specific configuration of the stacking environment. The stacks ranged from 5 to 30, with varying maximum stack heights. For each scenario, operation time measurements were recorded 20 times for the proposed QCDC-DR-GA strategy and four other strategies, namely Greedy Upper Bound, QCDC-bi-level-GA, GA-ILSRS-Scenario-1, and GA-ILSRS-Scenario-2. The paired t-test was used to analyze the differences in operation time between the proposed QCDC-DR-GA strategy and the other strategies under investigation. The paired t-test compares the means of two related groups to determine whether there is a statistically significant difference between them. The formulation of hypotheses for this test is as follows: * Null hypothesis (H_0): There is no significant difference in operation time between the proposed QCDC-DR-GA strategy and the compared strategy. * Alternative hypothesis (H_1): There is a significant difference in operation time between the proposed QCDC-DR-GA strategy and the compared strategy. The paired t-test used a significance level (α) of 0.05. The t-statistic and p-value were calculated for each pair of strategies. The significance level (α) and the critical t value (t_19(0.05) = 2.093) were used to determine the significance of the observed differences. The paired t-test results are presented in Table <ref>, which provides the minimum, maximum, mean, and standard deviation of the operation time for each strategy in different scenarios. The Pearson correlation coefficient (r), the t-statistic, the p-value, and the significance of the observed differences are reported. § DISCUSSION Port operations often face a trade-off between fast unloading/loading times and minimizing container rehandles within the dockyard. This research presents a groundbreaking model that tackles this dilemma head-on. Unlike past approaches that address each aspect separately, this model integrates them into a unified system. It leverages a mixed genetic algorithm, incorporating 1D and 2D approaches, to identify the optimal unloading sequence that minimizes the combined operation time for both unloading and loading and slashes unnecessary container movements within the yard. This holistic approach unlocks a cascade of benefits: operational costs plummet due to reduced fuel consumption and labor requirements, logistics efficiency soars with optimized container flow, and port capacity has the potential to expand as turnaround times shrink. While computationally demanding, this model holds immense potential to revolutionize port operations, ushering in an era of enhanced efficiency, productivity, and cost-effectiveness. §.§ Practical Implications Imagine a port buzzing with activity, containers arriving and departing in a seemingly chaotic dance. But beneath the surface, a revolution is brewing. We developed QCDC-DR-GA, transforming port operations from a juggling act to a well-rehearsed symphony. Simultaneously optimizing both unloading/loading speed and minimizing dockyard rehandles unlocks a treasure trove of practical benefits. Imagine ships unloading and loading at record speeds, thanks to the meticulously chosen sequence identified by QCDC-DR-GA. Picture yard workers gracefully navigate the containers, eliminating unnecessary relocations that waste time and fuel. The ripple effects are undeniable: operational costs plummet, logistics flow like a well-oiled machine, and the very capacity of the port expands as turnaround times dwindle. QCDC-DR-GA isn't just theoretical; it's a game-changer, poised to transform ports into beacons of efficiency and productivity. While its computational demands require careful consideration, the potential rewards are undeniable: a future where ports operate with newfound agility, cost-effectiveness, and a level of efficiency never before seen. This is more than just a model; it's a blueprint for a smarter, faster, and more sustainable future for port operations. §.§ Managerial Implications For port managers seeking to optimize operational efficiency, this research offers a groundbreaking model that simultaneously addresses the critical objectives of minimizing container unloading/loading times and dockyard rehandles. Unlike traditional approaches that consider these factors separately, our unified model integrates them into a single framework, yielding significant managerial implications. The model translates to demonstrably reduced operational costs through optimized unloading/loading sequences and minimized relocations, resulting in lower fuel consumption, optimized labor requirements, and reduced equipment wear and tear. Furthermore, it facilitates smoother logistics by choreographing efficient container movement within the yard, eliminating unnecessary relocations and leading to improved predictability, faster turnaround times, and enhanced client satisfaction. Additionally, the model unlocks the potential for increased port capacity without requiring costly infrastructure expansion, as every saved second in unloading/loading translates to the ability to serve more ships. Finally, by providing valuable data-driven insights into operations, the model empowers managers to make informed decisions regarding resource allocation, scheduling, and yard layout, fostering continuous improvement and adaptation to evolving demands. Implementing this model necessitates careful consideration of its computational demands and integration challenges, and the potential benefits are undeniable. Imagine a future where your port operates with unprecedented efficiency, cost-effectiveness, and agility. This transformative model is more than just a tool; it represents a paradigm shift for port management, paving the way for a future symphony of success. §.§ Unique Theoretical Contributions This subsection delves into the three key theoretical contributions that underpin our proposed model: * Introducing a Hybrid Approach: We present a novel genetic algorithm (GA) that seamlessly blends the strengths of 1D and 2D GAs. This hybrid approach leverages the power of 1D GAs in exploring the solution space effectively while incorporating the 2D representation's ability to capture spatial relationships within the dockyard layout. This unique fusion empowers our model to achieve superior results compared to traditional GAs, paving the way for more efficient port operations. * Optimizing GA Parameters: Recognizing the critical role of parameter selection in GA performance, we meticulously analyze various computational parameters. Through rigorous testing and evaluation, we identify the optimal combination of parameters and methods specifically tailored to our model. This meticulous optimization ensures that our GA operates at peak efficiency, maximizing its ability to find the optimal unloading sequence. * Justifying Strategic Choices: We understand the value of transparency and provide detailed reasoning and justifications for selecting various strategies and methods implemented throughout our GA. This includes explaining the rationale behind crossover and mutation operators, selection mechanisms, and population sizes. These three theoretical contributions - the hybrid GA, optimized parameters, and transparent justifications - form the bedrock of our model's effectiveness. They represent a significant advancement in GA strategies, offering a powerful tool for solving heuristic problems to achieve greater efficiency. § CONCLUSIONS In this work, the model QCDC-DR-GA is developed to optimize the total ship unloading and loading process, improving the QC efficiency and reducing the dockyard rehandles. A heuristic method, a mixed GA (mixed with two-dimensional and one-dimensional GA), is designed to solve the model. A specially designed program prepares six dataset scenarios. Numerical experiments indicate that combining the two scenarios, unloading-loading and dockyard rehandle optimization, can reduce the ship's turnaround time. They also prove that the model is suitable for vessels of any size. The model gives a constant better result for numerous datasets, small and big. Moreover, large ships perform remarkably well. In addition, our model has been compared with four of the latest methods on QCDC and dockyard rehandle minimization. QCDC-DR-GA showed superior improvements than the other models with a 95% confidence level, shown using the two-tailed paired t-test. While this study provides valuable insights into optimizing container handling at ports, certain limitations deserve acknowledgment. Firstly, the model assumes immediate loading container availability at the dockside, potentially overlooking pre-staging requirements. Secondly, the model focuses on rehandling between the ship and the apron, neglecting potential relocations within ship stacks. Finally, interruptions due to inbound vehicles or cranes are not considered, potentially underestimating operational variability. Future research could address these limitations by incorporating pre-staging requirements, investigating intra-ship rehandle optimization, and incorporating dynamic disruptions, which could further refine the model's practical applicability. By acknowledging and addressing these limitations, future studies can build upon the present work and contribute to even more efficient and realistic port operations. § DATA AVAILABILITY The datasets used in this research consist of the configuration of container ships' loading and unloading plans generated under six distinct scenarios. These scenarios are based on varying numbers of stacks and maximum stack heights of containers in each row, reflecting typical container ship characteristics. The scenarios encompass stack numbers ranging from 5 to 30 and maximum stack heights from 4 to 10. The dataset includes loading and unloading plans for dockyard containers, with sample plans provided for small ships. Each dataset comprises 20 instances representing different container loading and unloading scenarios. Each instance is characterized by a specific strategy detailing the number of single cycles, dual cycles, rehandles, and operation time required. Researchers interested in exploring the efficiency and performance of various strategies in handling container logistics within a dockyard setting can access the detailed generated data at https://dx.doi.org/10.21227/cj08-qn62https://dx.doi.org/10.21227/cj08-qn62. § CONFLICTS OF INTEREST The authors whose names are listed immediately below certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript. § CREDIT AUTHOR STATEMENT Md Mahfuzur Rahman: Writing – Original draft preparation, Conceptualization, Methodology, Software, Formal analysis, and Visualization. Md Abrar Jahin: Writing – Original draft preparation, Conceptualization, Methodology, Software, Formal analysis, and Visualization. Md. Saiful Islam: Supervision, Writing – Review & Editing. M. F. Mridha: Supervision, Writing – Review & Editing. apalike
http://arxiv.org/abs/2406.07968v1
20240612074451
On Siegel results about the zeros of the auxiliary function of Riemann
[ "Juan Arias de Reyna" ]
math.NT
[ "math.NT", "Primary 11M06, Secondary 30D99" ]
Siegel results about zeros of ] On Siegel results about the zeros of the auxiliary function of Riemann. Arias de Reyna]J. Arias de Reyna Universidad de Sevilla Facultad de Matemáticas c/Tarfia, sn 41012-Sevilla Spain. [2020]Primary 11M06; Secondary 30D99 arias@us.es, ariasdereyna1947@gmail.com § ABSTRACT We state and give complete proof of the results of Siegel about the zeros of the auxiliary function of Riemann (s). We point out the importance of the determination of the limit to the left of the zeros of (s) with positive imaginary part, obtaining the term -√(T/2π)P(√(T/2π)) that would explain the periodic behaviour observed with the statistical study of the zeros of ℛ(s). We precise also the connection of the position on the zeros of (s) with the zeros of ζ(s) in the critical line. [ [ June 17, 2024 ================= § INTRODUCTION In his paper <cit.> Riemann asserts to have proved that all but an infinitesimal proportion of zeros of zeta are in the critical line. This is repeated in a letter to Weierstrass <cit.>*p. 823–825, where he says that the difficult proof depends on a new development of Ξ(t) that he has not simplified sufficiently to communicate. The Riemann-Siegel expansion for ζ(s) were recovered by Siegel from Riemann's papers. Siegel in his publication <cit.> concluded that in Riemann's Nachlass there is no approach to the proof of his assertion on the real zeros of Ξ(t). However, Siegel connected the zeros of the new function (s), found in Riemann's papers, with the zeros of ζ(s). The main objective of this paper is to expose and give a complete proof of the results of Siegel on the zeros of (s). Siegel <cit.> proved an asymptotic expression (<ref>) valid for 1-σ≥ t^a with a=3/7. He claims that we may prove this with a=ε for any ε>0. In <cit.>*Th. 11 we proved it for 1-σ≥ t^2/5log t. In this paper, it will be desirable to have proved the claim with a<1/3 so that the term in |σ|^3/2=(t^1/2). For this reason we will keep a undetermined, to see the possible consequences of the hypothesis of being a <1/3. On the other hand, we may assume in all our results that a=3/7 as done by Siegel (see also <cit.>*Th. 9). The paper contains many computations that Siegel does not detail. First, we state three main Theorems in Section <ref>. We postpone its proof, and in Section <ref> we give the main applications to the zeros of (s). We note that in <cit.> we have given a better result about the number of zeros than the one obtained by Siegel methods (see Remark <ref>). Our corollary <ref> slightly improves on Siegel's results. This corollary implies that (s) contains at least >cT zeros to the left of the critical line, with c=35/198π. This constant is a slight improvement over the one in Siegel, in any case it is a very poor result, as it only shows that the number of zeros of ζ(s) on the critical line N_0(T) is greater than cT. Our computation of the zeros of (s) <cit.> makes plausible that 2/3 of the zeros of (s) are on the left of the critical line, this would imply that 2/3 of the zeros of ζ(s) will be simple zeros on the critical line. In Section <ref> we prove Theorem <ref>. It is in Siegel only implicitly. In Section <ref> we prove Theorem <ref>. We make a modification of Siegel's reasoning. He uses a function g(s)=π^-(s+1)/2e^-π i s/4Γ(s+1/2)(s), while we use F(s)=sπ^-s/2Γ(s/2)(s). The main advantage is that our function is entire, while g(s) has poles and zeros at the negative real axis, making the Siegel reasoning difficult. We postpone for section <ref> the proof of propositions <ref> and <ref> that are mere computations. §.§ Notations We use N(β≤σ, T) and N(β> σ, T) to denote the count of zeros ρ=β+iγ of (s) contained in (-∞,σ]×(0,T] and (σ,+∞)×(0,T] respectively counting with multiplicities. The sum of the two is denoted by N_(T). N(T) and N_0(T) denote the number of zeros of ζ(s) on the critical strip and in the critical line, respectively, as usual. We use ^*(h) to denote a quantity R bounded by |R|≤ |h|. § MAIN RESULTS. Applying Littlewood's lemma to (s) in the rectangle [σ,4]×[t_0,T] yields Let 1≤ t_0 and σ≤1 be fixed, then for T→+∞ ∫_t_0^Tlog|(σ+it)| dt=2π∑_β>σ 0<γ≤ T(β-σ)+_σ(log T). With the combed function sπ^-s/2Γ(s/2)(s), we can apply Littlewood's lemma on the rectangle [σ_0,σ]×[t_0,T] to get Let 1≤ t_0 and σ≤ 10 fixed, then for T→+∞ we have 2π∑_β≤σ 0<γ≤ T(σ-β)= σ(T/2logT/2π-T/2)+T/2log 2+ ∫_t_0^Tlog|(σ+it)| dt+_σ(T^20/21). In some cases, we have independent information on the integral of log|(σ+t)|: Given σ_1>3, there is a constant C such that for σ≥σ_1 and 1≤ t_0<T we have |∫_t_0^Tlog|(σ+it)| dt|≤ C 2^-σ. § APPLICATIONS OF THE MAIN THEOREMS The number of zeros ρ=β+iγ of (s) with 0<γ<T is N_(T)=T/4πlogT/2π-T/4π+(T^20/21). By Theorems <ref> and <ref> we have for σ=4 and σ=5 for T→+∞ 2π∑_β≤σ 0<γ≤ T(σ-β)= σ(T/2logT/2π-T/2)+T/2log 2+(T^20/21). Except for a finite number, the zeros of (s) with γ>0 satisfies β≤ 3 as shown in <cit.>*Cor. 14. Hence N_(T)=∑_β≤ 5 0<γ≤ T(5-β)-∑_β≤4 0<γ≤ T(4-β)+(1). The result is obtained by subtracting the above results. By a different and more direct reasoning, we have proved in <cit.> the more precise result N_(T)=T/4πlogT/2π-T/4π-1/2√(T/2π)+(T^2/5log T). For T→+∞ and denoting by ρ=β+iγ the zeros of (s), we have ∑_0<γ≤ Tβ=-T/4πlog 2+(T^20/21). For σ=4, for example, we have ∑_β≤4 0<γ≤ T(σ-β)=σ N_(T)-∑_0<γ≤ Tβ=σ(T/4πlogT/2π-T/4π)-∑_0<γ≤ Tβ+(T^20/21). Comparing with (<ref>) we get our result. Let σ'<σ≤ 10 then (σ-σ')N(β≤σ,T) ≥(σ-σ')(T/4πlogT/2π-T/4π)+ +1/2π∫_t_0^Tlog|(σ+it)| dt-1/2π∫_t_0^Tlog|(σ'+it)| dt+(T). by Theorem <ref> we have ∑_β≤σ 0<γ≤ T(σ-β) = σ(T/4πlogT/2π-T/4π)+Tlog2/4π+1/2π∫_t_0^Tlog|(σ+it)| dt+(T), ∑_β≤σ' 0<γ≤ T(σ'-β) = σ'(T/4πlogT/2π-T/4π)+Tlog2/4π+1/2π∫_t_0^Tlog|(σ'+it)| dt+(T). We subtract the two equations. Since σ'<σ we have ∑_β≤σ 0<γ≤ T(σ-β)- ∑_β≤σ' 0<γ≤ T(σ'-β)= ∑_β≤σ 0<γ≤ T(σ-β)- ∑_β≤σ 0<γ≤ T(σ'-β)+ ∑_σ'<β≤σ 0<γ≤ T(σ'-β) =(σ-σ')N(β≤σ,T)+∑_σ'<β≤σ 0<γ≤ T(σ'-β) ≤ (σ-σ')N(β≤σ,T). Therefore, (σ-σ')N(β≤σ,T) ≥(σ-σ')(T/4πlogT/2π-T/4π)+1/2π∫_t_0^Tlog|(σ+it)| dt -1/2π∫_t_0^Tlog|(σ'+it)| dt+(T). For σ'<σ≤ 10 we have (σ-σ')N(β≤σ,T)≥(σ-σ')(T/4πlogT/2π-T/4π)-1/2π∫_t_0^Tlog|(σ'+it)| dt +(T). By Lemma <ref> and Theorem <ref> we have for any σ” with σ>σ”>σ' (σ”-σ')N(β≤σ,T) ≥(σ”-σ')(T/4πlogT/2π-T/4π)+∑_β>σ” 0<γ≤ T(β-σ”) -1/2π∫_t_0^Tlog|(σ'+it)| dt +(T). We have (by Proposition <ref>) ∑_β>σ” 0<γ≤ T(β-σ”)≥∑_β>σ 0<γ≤ T(β-σ”)≥ (σ-σ”)N(β>σ,T) =(σ-σ”)(T/4πlogT/2π-T/4π-N(β≤σ,T)+(T)). Therefore, (σ”-σ')N(β≤σ,T) +(σ-σ”)N(β≤σ,T) ≥(σ-σ')(T/4πlogT/2π-T/4π)-1/2π∫_t_0^Tlog|(σ'+it)| dt +(T). Let f[0,+∞)→ integrable on any compact set and such that lim_T→+∞1/T∫_0^T|f(t)|^2 dt=+∞. Given t_0 there is some T_0 such that for T≥ T_0 we have ∫_t_0^Tlog|f(t)| dt≤T/2log(1/T∫_0^T|f(t)|^2 dt). Since log x is a concave function by Jensen's inequality <cit.>*p. 61 for T>t_0 we have 1/T-t_0∫_t_0^Tlog|f(t)|^2 dt≤log(1/T-t_0∫_t_0^T|f(t)|^2 dt). Let us define A:=∫_t_0^T|f(t)|^2 dt≤ B:= ∫_0^T|f(t)|^2 dt. The function x↦x/2logB/x is increasing for x∈[T-t_0,T], since the derivative is 1/2logB/x-1/2, which is positive for B>e x. So, we only need that B>eT and this happens for T≥ T_0 by hypothesis. Then ∫_t_0^Tlog|f(t)| dt≤T-t_0/2logA/T-t_0≤T-t_0/2logB/T-t_0≤T/2logB/T. This is our inequality. There is some T_0 such that for T>T_0 we have N(β≤12,T)≥35T/396π+5/11π∫_t_0^Tlog|(12+it)| dt. Applying Lemma <ref> with σ=1/2 and σ'<1/2 yields (12-σ')N(β≤12,T) ≥(12-σ')(T/4πlogT/2π-T/4π)+ +1/2π∫_t_0^Tlog|(12+it)| dt-1/2π∫_t_0^Tlog|(σ'+it)| dt+(T). In <cit.> we proved that for σ'≤ 1/4 we have 1/T∫_0^T|(σ'+it)|^2 dt=2/(1-2σ')(3-2σ')(T/2π)^1/2-σ'+(T^1/4-σ'). This and Lemma <ref> yields for σ≤1/4 and T big enough ∫_t_0^Tlog|(σ'+it)| dt≤(12-σ')T/2logT/2π-T/2log(1-2σ')(3-2σ')/2+(T^3/4). Hence, (12-σ')N(β≤12,T) ≥-(12-σ')T/4π+ +1/2π∫_t_0^Tlog|(12+it)| dt+T/4πlog(1-2σ')(3-2σ')/2+(T). The function f(σ')=-1+1/1/2-σ'log(1-2σ')(3-2σ')/2, has a maximum near σ'=-3/5 where f(-3/5)>7/18. Therefore, taking σ'=-3/5 we get 11/10N(β≤12,T)≥7T/72π+1/2π∫_t_0^Tlog|(12+it)| dt+(T). Since f(-3/5)>7/18 the term (T) can be eliminated for T≥ T_0, taking T_0 big enough. There exists some T_0 such that for T≥ T_0 we have N(β≤12,T)≥35T/396π+5/11∑_β>1/2 0<γ≤ T(β-12). In Proposition <ref> substitute the integral by its value in Theorem <ref>. The error term (log T) is eliminated due to the strict inequality f(-3/5)>7/18. § CONNECTION WITH THE ZEROS OF ZETA We have ζ(1/2+it)=e^-iϑ(t)Z(t) where ϑ(t) and Z(t) are real analytic functions. These are connected to (s) by Z(t)=2{e^iϑ(t)(12+it)}, (see <cit.>). In <cit.> we show that (1/2+it)=e^-iω(t)g(t) where ω(t) and g(t) are real analytic functions. It follows that a point 1/2+it with t∈ is a zero of ζ(s) if and only if it is a zero of (s) or if cos(ϑ(t)-ω(t))=0. Also, it is shown in <cit.> that for T>0 we have ω(T)=2π N(β≥1/2,T)+(log T). If there were no zeros to the right of the critical line, N(β≥1/2,T)=0, then the function ϑ(t)-ω(t) will increase for t∈[0,T] from 0 to ≈ϑ(T) and the number of zeros of ζ(s) in the critical line from t=0 to t=T will be ϑ(T)/π. Almost all zeros of the zeta function will be on the critical line. But this does not appear to be true, our (limited) computation of zeros <cit.> points to the approximate equality N(β≥1/2,T)≈ϑ(T)/6π. § PROOF OF THEOREM 3 For s=σ+it with σ≥2, t≥0 and |s|>2π e^2 we have (s)=ζ(s)(1+W(s)), |W(s)|≤ C(|s|/2π e)^1-σ/2, where C is an absolute constant. In <cit.>*Thm. 12 it is proved that for s=σ+it with t>0, σ>0 and such that |s|>2π e^2 we have (s)=∑_n=1^ℓ1/n^s+R(s), |R(s)|≤ C |s/2π e|^-σ/2, where, ξ=√(s/2π i)=ξ_1+iξ_2 with 0<-ξ_2<ξ_1 and ℓ=⌊ξ_1-ξ_2⌋, and C is an absolute constant. Therefore, under these conditions, but assuming σ>1 (s)=ζ(s)-∑_n=ℓ+1^∞1/n^s+R(s). Notice that |ξ_1|=ξ_1 and |ξ_2|=-ξ_2, so that |ξ|-1≤|ξ_1|+|ξ_2|-1< ℓ=⌊ξ_1-ξ_2⌋≤|ξ_1|+|ξ_2|≤√(2)|ξ|=√(|s|/π). Therefore, |∑_n=ℓ+1^∞1/n^s|≤∑_n>|ξ|1/n^σ≤ |ξ|^-σ+∫_|ξ|^∞ x^-σ dx= (|s|/2π)^-σ/2+1/σ-1(|s|/2π)^(1-σ)/2. It follows that for σ≥2 |R(s)-∑_n=ℓ+1^∞1/n^s|≤ C |s/2π e|^-σ/2+ (|s|/2π)^-σ/2+1/σ-1(|s|/2π)^(1-σ)/2≤ C|s/2π e|^(1-σ)/2. Therefore, (s)=ζ(s)+V(s)=ζ(s)(1+W(s)), |W(s)|≤ Cζ(2)|s/2π e|^(1-σ)/2. There exist σ_0>3, t_0>1 and a constant C such that for T>t_0 and σ>σ_0, we have |∫_t_0^Tlog|(σ+it)| dt|≤ C2^-σ, We have ∫_t_0^Tlog|(σ+it)| dt=∫_t_0^Tlog|ζ(σ+it)| dt+∫_t_0^T|log(1+W(σ+it))| dt. The integral in the zeta function is treated easily ∫_t_0^Tlog|ζ(σ+it)| dt =∫_t_0^T∑_n=2^∞Λ(n)/log n1/n^s dt=∫_t_0^T∑_n=2^∞Λ(n)/log ncos(tlog n)/n^σ dt =∑_n=2^∞Λ(n)/log nsin(Tlog n)-sin(t_0log n)/n^σlog n. So, in absolute value |∫_t_0^Tlog|ζ(σ+it)| dt|≤ 2^-σ∑_n=2^∞2Λ(n)/(n/2)^σlog^2n≤ 2^-σ∑_n=2^∞2Λ(n)/(n/2)^3log^2n. The function 1+W(s) defined by (s)=ζ(s)(1+W(s)) is analytic and is not equal to 0 on the line σ+it with σ>3 fixed and t≥ t_0. Since |W(s)|≤ C(|s|/2π e)^(1-σ)/2, there is some t_1 such that for t>t_1 we have |W(s)|≤ 1/4. The function log(1+W(s)) is well defined, with the imaginary part ≤π/2 in absolute value for t>t_1. We also have for t≥ t_1 that |log(1+W(s))|≤ 2C(|s|/2π e)^(1-σ)/2. Then this inequality is also true for any t≥ t_0, substituting, if needed, C by a large constant. Hence, |∫_t_0^Tlog(1+W(t)) dt|≤ C'∫_t_0^T|s/2π e|^1-σ/2 dt=C'(2π e)^σ-1/2∫_1^T(σ^2+t^2)^1-σ/4 dt ≤ C'σ^1+1-σ/2(2π e)^σ-1/2∫_1/σ^T/σ(1+u^2)^1-σ/4 du≤ C'σ(2π e/σ)^σ-1/2∫_0^∞(1+u^2)^1-σ/4 du. Elementary computations and Euler-MacLaurin approximation to the Gamma function yields for σ>3 =C'σ(2π e/σ)^σ-1/2√(π)Γ(σ-3/4)/2Γ(σ-1/4)≤ C”σ^3/2/σ-3(2π e/σ)^σ-1/2. We end noticing that for σ>σ_0>3, there is a constant C only depending on σ_0 such that 2^-σ+σ^3/2/σ-3(2π e/σ)^σ-1/2≤ C 2^-σ. § PROOF OF THEOREM 1 We will use Littlewood and Backlund's lemmas, which we state as reference. Let fΩ→ be a holomorphic function, R=[a,b]×[c,d]⊂Ω a closed rectangle contained in Ω, let us denote by A=a+ic, B=b+ic, C=b+id, and D=a+id its vertices. Assume that f do not vanish on the sides AB, BC and CD. Define f(z) continuously along this three sides then 2π∑_β≥ a c<γ≤ d(β-a) =∫_c^dlog|f(a+iy)| dy-∫_c^d log|f(b+iy)| dy -∫_a^b f(x+ic) dx+∫_a^b f(x+id) dx. where ρ=β+iγ run through the zeros of f(z) in the rectangle counted with multiplicities. Let f be holomorphic in the disc |z-a|≤ R. Let |f(z)|≤ M for |z-a|≤ R. Let b be a point in the interior of the disc 0<|b-a|<R. Assume that f does not vanish on the segment [a,b], then |1/2π i∫_a^bf'(z)/f(z) dz|≤1/2(logM/|f(a)|)(logR/|b-a|)^-1. Littlewood's lemma can be found in many sources, for example <cit.>*section 9.9 or the original <cit.>. Backlund's lemma is applied without explicit mention in Backlund <cit.>. When stated, an extra term 1/2 is frequently added on the right side. It is proved by means of the Jensen formula <cit.>*section 15.16. An independent complete proof is given in <cit.>. The truth of (<ref>) does not depend on the particular value of t_0. Changing t_0 to some other value changes the left-hand side of (<ref>) into a constant that is absorbed into the error term. So we can assume that (s)0 for (s)=t_0, and that t_0>32π. We can also assume that (s)0 for (s)=T. In the other case, we may prove the proposition for T_n>T with lim_n T_n=T, and obtain the proposition in the general case by taking the limits of the resulting equation for n→∞. For (s)=4 and t≥ 32π we have |(s)-1|<1/2 by Proposition 6 in <cit.>. Therefore, (4+it) do not vanish for t≥32π, and we are in a condition to apply Littlewood's lemma to (s) in the rectangle [σ, 4]×[t_0,T]. This yields 2π∑_β>σ t_0<γ≤ T(β-σ)= ∫_t_0^Tlog|(σ+it)| dt-∫_t_0^Tlog|(4+it)| dt - ∫_σ^4(x+it_0) dx+∫_σ^4(x+iT) dx, where (x+it_0) and (x+iT) are continuous extensions of (4+it). Since |(4+it)-1|<1/2, we may take (4+it) continuous and with absolute value <π/2. On the upper side (s)=T we have (x+iT)-(4+iT)=1/ i∫_4+iT^x+iT'(z)/(z) dz. The absolute value of this integral can be bounded as usual with Backlund's lemma <ref>. Let D be the disc with center at 4+iT and radius 2(4-σ). The maximum of |(s)| on this disc is determined by Propositions 12 and 13 in <cit.>, it is ≤ C_σ T^r where r=max(1/2,3/2-σ). Since we assume σ≤1, we have r≤ 3/2-σ. [In this argument σ is fixed and we assume T>1-σ, for example]. Note also that |(4+iT)|>1/2. It follows that |(x+iT)|≤ C'_σlog T so that |∫_σ^4(x+iT) dx|≤ C”_σlog T. The integral of (x+it_0) is a constant, depending on σ. By proposition <ref> the integral of (4+it) is bounded by an absolute constant. Joining all this, Littlewood's lemma yields (<ref>) § PROOF OF THEOREM <REF> To prove Theorem <ref> Siegel <cit.> consider a function g(s)=π^-s+1/2e^-π i s/4Γ(s+12)(s). The election of the factor of (s) is to make the behavior of g(σ_0+it) the simplest possible, for our election of σ_0. Here σ_0 is selected so that all zeros to height T of (s) satisfies (ρ)>σ_0. Nevertheless, this election makes g meromorphic with poles at -(2n+1). This is a problem to apply Littlewood's lemma. Therefore, we prefer to instead consider the entire function F(s)=sπ^-s/2Γ(s2)(s). The x-ray of Siegel's function g(s) is very nice. Notation: In this section, we will consider the values of (s) at the points s=σ+it with σ≤ 10 and t≥0. For such values of s, we consider η=√(s-1/2π i), selecting the root η=η_1+iη_2 with η_1+η_2≥0. When σ≤ 1 this means 0≤η_2≤η_1. In this case |e^2π i η|≤ 1, with strict inequality for σ<1. We want a value of σ such that all zeros of (s) to height T satisfies σ≤(ρ). This is obtained by means of the next Theorem proved in <cit.>*Th. 9. There exist constants A and t_0>1 such that for s in the closed set Ω={s∈ t≥ t_0, 1-σ≥ t^3/7}, we have (s)=-χ(s)η^s-1e^-π i η^2√(2)e^3π i/8sinπη/2cos2πη(1+U(s)), |U(s)|≤ At^-1/21, where η=√((s-1)/2π i) satisfies (η)+(η)>0. Let t_0 be a constant greater than the constant appearing in Theorem <ref>. For s=σ+it with t≥ t_0 and 1-σ≥ t^3/7, we have F(s):=sπ^-s/2Γ(s/2)(s)=J(s)(1+U(s)), where log J(s)=π i s/4+log s-1/2log1-s/2π-3π i/8+π iη+log1-e^2π i η/1+e^4π i η+(|s|^-1). Since we assume that s satisfies the conditions in Theorem <ref> we have (<ref>). Therefore, by definition, F(s)=J(s)(1+U(s)), where J(s) =-sπ^-s/2Γ(s/2)χ(s)η^s-1e^-π i η^2√(2)e^3π i/8sinπη/2cos2πη =-sπ^(s-1)/2Γ(1-s2)η^s-1e^-π i η^2√(2)e^3π i/8sinπη/2cos2πη =2^-1/2e^-π i/8sπ^(s-1)/2Γ(1-s2)η^s-1e^-π i η^2+π i η1-e^2π i η/1+e^4π i η. We may define a continuous logarithm of J(s) for these values of s by noticing that |e^2π iη|<1 and therefore we may use log1-e^2π i η/1+e^4π i η=-∑_n=1^∞z^n/n+∑_n=1^∞(-1)^nz^2n/n, z=e^2π i η. That is, it is equal to log(1-e^2π i η)-log(1+e^4π i η), using in both cases the main branch of the logarithm. It follows that the imaginary part of log1-e^2π i η/1+e^4π i η is bounded by π. By the usual Euler-MacLaurin expansion, which is applicable since σ<0, log J(s)=-1/2log2-π i/8+log s+s-1/2logπ-s/2log1-s/2-1-s/2+1/2log2π + s-1/2logs-1/2π i-π is-1/2π i+π iη+log1-e^2π i η/1+e^4π i η+(|s|^-1). Simplifying this yields log J(s)=π i s/4+log s-1/2log(1-s)+1/2log2π-3π i/8+π iη+log1-e^2π i η/1+e^4π i η+(|s|^-1). The next propositions will be proved later in Section <ref>. Let t_0>1 and 0<a<1/2 be fixed real numbers. There is a function f[0,+∞)→[0,+∞) such that f(T)=(T^1/2) and such that for T>t_0, σ≤ 10 and 1-σ≤ T^a we have ∫_t_0^T log|F(σ+it)| dt = -π T^2/8+(σ+1)(T/2logT/2π-T/2)+T/2(log 2+2log(2π)) -πσ^2/8+∫_t_0^Tlog|(σ+it)| dt+(f(T)). Let 1≤ t_0<T with t_0 bigger than the constant appearing in Theorem <ref> and let 1-σ_0=T^a with a=3/7. Then we have ∫_t_0^Tlog|F(σ_0+it)| dt=-π T^2/8+T/2log T-T/2+T/2log2π+(T^20/21). Let t_0 and σ_0 be fixed real numbers satisfying t_0>1 and σ_0<-1. Assume that F(s) does not vanish for (s)=t_0 and denote by F(σ +it_0) a continuous determination of the argument, defined for σ_0≤σ≤σ_1≤ 10, and such that | F(σ_0+it_0)|≤ C|σ_0|. Then there is a constant C_0 (depending on t_0, and C such that |∫_σ_0^σ_1 F(σ+it_0) dσ|≤ C_0 |σ_0|^2log|σ_0|. We have | F(σ+it_0)- F(σ_0+it_0)|=|1/i∫_σ_0+it_0^σ+it_0F'(z)/F(z) dz|, we may bound the integral by Backlund's lemma. We take a disc D with center at 10+it_0 and radius 2(10-σ_0). To bound F on D, notice that D is contained in a disc of center 0 and radius C|σ_0| (where the constant C here depends only on t_0). In <cit.>*eq.(3) we have proved that |F(s)|≤ A e^B|s|log |s|. So that |F(s)|≤ A e^c(t_0) |σ_0|log|σ_0|. Also, we have |F(10+it_0)|>0 is a constant depending only on t_0. Therefore, | F(σ+it_0)- F(σ_0+it_0)|≤ C|σ_0|log|σ_0|. It follows that |F(σ+it_0)|≤ C|σ_0|log|σ_0|. Then |∫_σ_0^σ_1 F(σ+it_0) dσ|≤ C |σ_0|^2log|σ_0|. Let 0<a<1/2 a fixed real number and for T>1 let 1-σ_0=T^a and σ_0<σ_1≤ 10. Then we have ∫_σ_0+iT^σ_1+iTlog(sπ^-s/2Γ(s/2)) ds=(T^2a) when we take a continuous branch of the logarithm where log(sπ^-s/2Γ(s/2)) at the point s=s_0+iT is (T^a). By Euler-MacLaurin expansion we have logΓ((σ+iT)/2)=T/2logT/2-T/2+π/4(σ-1)+2σ-σ^2/4T+(T^-1). For σ_0≤σ≤σ_1≤10 we have σ^2/T^2=(T^-1) so that log(σ+iT)=π/2-arctanσ/T=π/2-σ/T+(T^-1) After some simplifications, we obtain for s=σ+iT log(sπ^-s/2Γ(s/2))=T/2logT/2π-T/2-σ^2+2σ/4T+π/4+πσ/4+(T^-1). After subtracting the multiple of 2π nearest to T/2logT/2π-T/2, we get a branch of the logarithm equal to log(sπ^-s/2Γ(s/2))=-σ^2+σ/2T+π/4+πσ/4+(1)=πσ/4+(1). Therefore, |∫_σ_0+iT^σ_1+iTlog(sπ^-s/2Γ(s/2)) ds|≤π/8(σ_1^2-σ_0^2)+ C(σ_1-σ_0)≤ C T^2a. For a given T>t_0 define σ_0 such that 1-σ_0=T^a with a=3/7. For T≥ T_0 we will have σ_0<σ. We apply Littlewood's lemma in the rectangle R=[σ_0,σ]×[t_0,T] to the function F(s)=sπ^-s/2Γ(s/2)(s). As we have seen in the proof of Theorem <ref> we may assume that there is no zero of F(s) on the lines (s)=t_0 and (s)=T. We take t_0 large enough so that Theorem <ref> applies with At_0^-1/21<1. This implies that F(s) do not vanish on the left-hand side of the rectangle R. Note that the zeros of F(s) and (s) on t>0 are the same with the same multiplicities. Here, it is in line (s)=σ where we do not know the argument of F(s). Hence we have 2π∑_β<σ t_0<γ≤ T(σ-β)= ∫_t_0^Tlog|F(σ+it)| dt-∫_t_0^Tlog|F(σ_0+it)| dt +∫_σ_0^σ F(x+it_0) dx-∫_σ_0^σ F(x+iT) dx, where F(s) should be taken continuously in the broken line with vertices at σ+it_0, σ_0+it_0, σ_0+iT, σ+iT. By Lemma <ref> we have F(s)= J(s)+(1+U(s)), with s=σ_0+it. By the election of σ_0, the function (1+U(σ_0+it)) for t_0≤ t≤ T is always between -π/2 and π/2. By (<ref>) we easily see that F(σ+it) for 1-σ≥ t^a, t≥ t_0 is equal to F(σ+it)=πσ/4+π/2-arctanσ/t+1/2(π/2-arctan1-σ/t)-3π/8 +π/√(2π)(t^2+(1-σ)^2)^1/4cos(12arctan1-σt)+(1). Since 1-σ_0=T^a it follows that F(σ_0+it) varies continuously between the extreme values F(σ_0+iT)=π√(T/2π)+(T^a) F(σ_0+it_0)=πσ_0/4+(T^a/2). Therefore, F(σ_0+it_0)=(T^a) and the lemma <ref> apply to show that ∫_σ_0^σ F(x+it_0) dx=(T^2alog T). On the upper side for s=x+iT with σ_0≤ x≤σ we have F(s)=(sπ^-s/2Γ(s/2))+(s) where the arguments on the right-hand side should be taken continuous and such that at the point s_0=σ_0+iT we have (s_0π^-s_0/2Γ(s_0/2))+(s_0)= F(s_0)=π√(T/2π)+(T^a). We have some freedom here; we may pick (s_0π^-s_0/2Γ(s_0/2))=(T^a) so that Lemma <ref> applies, and ∫_σ_0+iT^σ+iT(sπ^-s/2Γ(s/2)) ds=(T^2a). Then we must take (s_0)=(T^1/2). We apply the Backlund lemma to bound the integral of (x+iT). We have |(x+iT)-(σ_0+iT)|=|1/i∫_σ_0+iT^x+iT'(z)/(z) dz| To determine the value of (x+iT), we apply Backlund's lemma. Take a disc with center at 10+iT and radius R=2(10-σ_0). On this disc |(s)| is bounded, according to Proposition 12 and 13 in <cit.> by ≤ Cmax(T^1/2, T^ cT^a). The value in the center of the disc |(10+iT)|≥ 1/2 by by Proposition 6 in <cit.>. Hence, we get |(x+iT)-(σ_0+iT)|≤ C T^alog T. It follows that |(x+iT)|≤ C T^alog T+ CT^1/2≤ CT^1/2. From which we obtain ∫_σ_0+iT^σ+iT(s) ds=CT^1/2+a. Putting the results of Propositions <ref>, <ref> and equations (<ref>), (<ref>) and (<ref>) into equation (<ref>) yields 2π∑_β≤σ t_0<γ≤ T(σ-β) = -π T^2/8+(σ+1)(T/2logT/2π-T/2)+T/2(log 2+2log(2π))-πσ^2/8 +∫_t_0^Tlog|(σ+it)| dt+(T^1/2)+π T^2/8-T/2log T+T/2-T/2log2π +(T^20/21)+(T^2alog T)+(T^1/2+a), Since a=3/7 the largest error term is (T^20/21). The term with σ^2 is less than this error term. Simplifying yields (<ref>). We can not apply Backlund's lemma in order to bound the integral of g(x+it_0) in Siegel's exposition <cit.>. To use g(s) we also have the extra difficulty that the Euler-MacLaurin approximation to the gamma function is not valid near the negative real axis. This is the main reason to use our F(s) function instead of the g(s) function of Siegel. § PROOF OF PROPOSITIONS <REF> AND <REF> §.§ Mean value of loggamma We denote by B_n(x) the Bernoulli polynomial and by B_n(x) the periodic function with period 1 and such that B_n(x)=B_n(x) for 0< x<1. They satisfy B_n(x)=(-1)^n B_n(1-x). For an odd index greater than 1 we have B_2n+1(0)=B_2n+1(1/2)=B_2n+1(1)=0, and these are the only zeros of B_2n+1(x) on the interval [0,1]. Let f(a,b)→[0,+∞) be a positive and monotonous function, then |∫_a^b f(x)B_3(x) dx|≤3/64max(f(a), f(b)). Assume that f is not increasing, the proof in the other case is similar. We have B_3(x)=x(x-1/2)(x-1), so B_3(x) is positive in (0,1/2) and negative in (1/2,1). Define a_n=|∫_n/2^(n+1)/2f(x)B_3(x) dx|= (-1)^n∫_n/2^(n+1)/2f(x)B_3(x) dx. For any integer k, since f(x+k)≥ f(1-x+k) for 0<x<1/2 a_2k=∫_k^k+1/2f(x)B_3(x) dx=∫_0^1/2f(x+k)B_3(x) dx≥∫_0^1/2f(1-x+k)x(x-12)(x-1) dx changing variables y=1-x =∫_1/2^1 f(y+k)(1-y)(12-y)(-y) dy=-∫_1/2^1 f(y+k)B_3(y) dy= -∫_k+1/2^k+1f(y)B_3(y) dy=a_2k+1. In the same way, we prove a_2k+1≥ a_2k+2. Suppose first that there exist n <m such that (n-1)/2<a≤ n/2≤ m/2≤ b<(m+1)/2. If n<m we will have ∫_a^b=∫_a^n/2+∫_n/2^m/2+∫_m/2^b=∫_a^n/2+∑_n≤ k< m(-1)^ka_k+∫_m/2^b , the omitted integrand being f(x)B_3(x), and with a_n≥ a_n+1≥⋯≥ a_m≥0. Therefore, we have |∑_n≤ k< m(-1)^ka_k|=a_n-a_n+1+a_n+2-⋯± a_m-1≤ a_n. Therefore, |∫_a^b|≤|∫_a^n/2|+a_n+|∫_m/2^b|≤ 3∫_(n-1)/2^n/2 f(a) |B_3(x)| dx=3 f(a)/64. When n=m or there is no fraction n/2 between a and b, the inequality is easier to prove. If we assume that f is increasing, we must substitute f(a) by f(b). So, in general, the inequality is true with the maximum between f(a) and f(b). With the same procedure, we may prove this Proposition. Let f(a,b)→[0,+∞) be a positive and monotonous function, then for all n≥0 |∫_a^b f(x)B_2n+1(x) dx|≤ (-1)^n3(1-1/2^2n+2)B_2n+2/n+1max(f(a), f(b)). For σ∈ and T>1 we have ∫_1^T log|Γ(σ+it2)| dt=∫_σ+i^σ+iT((s-12)logs2-s2+12log2π+16s) ds+^*(3√(3)/16). Since σ and T>0 are arbitrary, we cannot directly use the Euler-MacLaurin expansion. Instead, we use an intermediate expression obtained in the course of the proof of the Euler-MacLaurin expansion <cit.>*p. 109. We have logΓ(s)=(s-12)log s-s+12log2π+B_2/2s-∫_0^∞B_3(x) dx/3(s+x)^3, s+|s|0. Since ∫_1^T log|Γ(σ+it2)| dt=∫_σ+i^σ+iTlogΓ(s2) ds, we proceed to bound R:=∫_σ+i^σ+iT∫_0^∞B_3(x) dx/3(s/2+x)^3 ds. The integral converge absolutely, so we may apply Fubini's Theorem. And we have R =∫_0^∞B_3(x)(∫_σ+i^σ+iTds/3(s/2+x)^3) dx =4/3∫_0^∞B_3(x)(1/(σ+i+2x)^2-1/(σ+iT+2x)^2) dx =4/3∫_0^∞B_3(x)(-2(σ+2x)/((σ+2x)^2+1)^2+2T(σ+2x)/((σ+2x)^2+T^2)^2) dx Note that σ∈ may be negative. The function 2T y/(y^2+T^2)^2 is negative and decreases in (-∞,-T/√(3)), negative and increases in (-T/√(3),0), positive and increasing in (0,T/√(3)) and decreasing and positive in (T/√(3),+∞) Its maximum value is achieved at y=T/√(3) when it takes the value 3√(3)/8T^2. At -T/√(3) it has a minimum value of -3√(3)/8T^2. Hence, we may separate the integral into at most four integrals where Lemma <ref> applies and |4/3∫_0^∞2T(σ+2x)B_3(x)/((σ+2x)^2+T^2)^2 dx|≤4/3· 4·3/64·3√(3)/8T^2=3√(3)/32 T^2. This is also valid for T=1, so we get |R|≤3√(3)/16. Then ∫_1^T log|Γ(σ+it2)| dt=∫_σ+i^σ+iT((s-12)logs2-s2+12log2π+16s) ds+^*(3√(3)/16). The same is true if we take the lower limit of the integrals equal to t_0≥1. A computation of this integral gives us ∫_t_0^T log|Γ(σ+it2)| dt= (-T^2/4+σ^2/4-σ/2+1/6) (σ+iT) +(σ T/4-T/4)log(σ^2+T^2)-σ(3T/4+Tlog2/2)+ T/2+T/2log2+T/2log2π + (t_0^2/4-σ^2/4+σ/2-1/6)(σ+it_0) +(t_0/4-σ t_0/4)log(σ^2+t_0^2)+ +σ(3t_0/4+t_0log2/2) -t_0/2-t_0/2log2-t_0/2log2π+ ^*(3√(3)/16) where the argument always refers to the main argument, since t_0 and T>0 for us 0<(σ+it)<π. Let t_0≥1 and 0<a<1/2 be fixed real numbers. There is a positive function f[0,+∞)→[0,+∞) such that f(T)=(T^1/2) and such that given real numbers σ≤ 10 and T> t_0, connected by 1-σ≤ T^a, then ∫_t_0^T log|Γ(σ+it2)| dt = -π T^2/8+(σ-1)(T/2logT/2-T/2)+T/2log(2π)-πσ^2/8 +(f(T)). The value of the integral is given in (<ref>). Assume first that -2t_0<σ≤ 10. In this case, many terms in (<ref>) are bounded by a constant only depending on t_0. Eliminating these terms, we get ∫_t_0^T log|Γ(σ+it2)| dt= -T^2/4(σ+iT)+(σ T/4-T/4)log(σ^2+T^2) -σ(3T/4+Tlog2/2)+ T/2+T/2log2+T/2log2π+(t_0^2log(1+t_0)). We have (σ+iT)=π/2-arctanσ/T and log(σ^2+T^2)=2log T+log(1+σ^2/T^2)=2log T+(T^-2). Expanding the arctan, we get (with constants only depending on t_0) ∫_t_0^T log|Γ(σ+it2)| dt =-π T^2/8+T^2/4(σ/T+(T^-3))+(σ T/4-T/4)(2log T+(T^-2)) -σ(3T/4+Tlog2/2)+ T/2+T/2log2+T/2log2π+(1). This is (<ref>) except for the term -σ^2/8, that in this case is contained in the error term. In the other case, when σ_0≤σ≤ -2t_0, eliminating terms that are (T^a) we get ∫_t_0^T log|Γ(σ+it2)| dt= (-T^2/4+σ^2/4) (σ+iT)+(σ T/4-T/4)log(σ^2+T^2) -σ(3T/4+Tlog2/2)+ T/2+T/2log2+T/2log2π -σ^2/4(σ+it_0)+(T^1/2) since σ<0 we have (σ+iT)=π/2-arctanσ/T, (σ+it_0)=π+arctant_0/σ, log(σ^2+T^2)=2log T+log(1+σ^2/T^2), log(σ^2+t_0^2)=2log|σ|+log(1+t_0^2/σ^2), which we may expand in convergent power series since σ/T and t_0/σ are in absolute value <1. Removing terms less than (T^1/2) yields (<ref>). §.§ Proof of Proposition <ref> By the definition of F we have ∫_t_0^T log|F(σ+it)| dt=∫_t_0^Tlog|(σ+it)π^-σ/2| dt+∫_t_0^T log|Γ(σ+it2)| dt+∫_t_0^T log|(σ+it)| dt ∫_t_0^Tlog|(σ+it)π^-σ/2| dt =-σT-t_0/2logπ+T/2log(σ^2+T^2)-T+σarctanT/σ -t_0/2log(σ^2+t_0^2)+t_0-σarctant_0/σ. After some simplifications ∫_t_0^Tlog|(σ+it)π^-σ/2| dt= Tlog T-T-σT/2logπ+(T^1/2). Combining this with (<ref>) yields ∫_t_0^T log|F(σ+it)| dt = -π T^2/8+(σ+1)(T/2logT/2π-T/2)+T/2(log 2+2log(2π))-πσ^2/8 +∫_t_0^Tlog|(σ+it)| dt+(T^1/2). §.§ Proof of Proposition <ref> Here we will assume that the exponent a= 3/7. The error term we will obtain is determined by the error in Theorem <ref> for U(t)≪ t^-1/21. This is not the best exponent, and its relation to a=3/7 is not as direct, as seen in <cit.>. Therefore, we will not be able to obtain in Proposition <ref> an error better than (T^20/21). We suspect that Proposition <ref> can be improved. Therefore, we will retain in our equations some terms less than the final error (T^20/21), always assuming that a≤ 3/7. In this way, we will get a conjecture about further terms in Proposition <ref>. We define a periodic function → by (x)=∑_n=1^∞2sin(2π n x)/n^2-∑_n=1^∞ (-1)^nsin(4π n x)/n^2. Recall that for a given s we define η=√((s-1)/2π i), taking the root such that η+η>0. Let t_0>1 be a fixed real number. For T→+∞ we have ∫_1+it_0^1+iTlog1-e^2π iη/1+e^4π iη ds=-√(T/2π) (√(T/2π))+(1), where the implicit constant only depends on t_0. Here s=1+it with t_0<t<T and η=√(t/2π), hence J:=∫_1+it_0^1+iTlog1-e^2π iη/1+e^4π iη ds= i∫_t_0^Tlog1-e^2π iη/1+e^4π iη dt= ∫_t_0^Tlog1-e^2π iη/1+e^4π iη dt Change variables t=2πη^2, and let τ_0=√(t_0/2π) and τ= √(T/2π), then J=4π∫_τ_0^τηlog1-e^2π iη/1+e^4π iη dη Now we use the expansion log1-z/1+z^2=-∑_n=1^∞z^n/n+∑_n=1^∞(-1)^nz^2n/n. Then we have (it is not difficult to justify the integration term by term) J =4π{-∑_n=1^∞1/n∫_τ_0^τη e^2π i nη dη+∑_n=1^∞(-1)^n/n∫_τ_0^τη e^4π i nη dη} =4π{.∑_n=1^∞(-e^2π in η/4π^2n^3+iη e^2π in η/2π n^2+(-1)^ne^4π inη/16π^2n^3-(-1)^niη e^4π i nη/4π n^2)|_τ_0^τ} All terms in τ_0 will contribute only to a term (1) (with a constant dependent on t_0). Analogously, the term in τ and n^3, which do not have the factor η also contributes to a term (1) (this time with absolute constants). So, we get J=∑_n=1^∞(2iτ e^2π i nτ/n^2-(-1)^niτ e^4π i nτ/n^2)+(1)= iτ∑_n=1^∞(2e^2π i nτ/n^2-(-1)^ne^4π i nτ/n^2)+(1) =τ∑_n=1^∞(-2sin(2π nτ)/n^2+(-1)^nsin(4π nτ)/n^2)+(1)=-τ(τ)+(1). The previous lemma considers the integral with limits 1+it_0 and 1+iT, but we are interested in this integral with limits σ_0+it_0 and σ_0+iT. The difference between the two is bounded in the next lemma. Let 0<a≤ 3/7 and 1-σ≤ T^a, then for T→+∞ we have ∫_σ+it_0^1+it_0log1-e^2π iη/1+e^4π iη ds-∫_σ+iT^1+iTlog1-e^2π iη/1+e^4π iη ds=(T^a). The two integrals are treated in the same way. For the second, for example, we have s=x+iT, with σ< x< 1. Then as we noticed before |e^2π i η|<1 and the logarithm is defined by means of (<ref>) with z=e^2π i η. Both series in (<ref>) have imaginary parts in (-π/2,π/2) so that |∫_σ+iT^1+iTlog1-e^2π iη/1+e^4π iη ds|=|∫_σ^1log1-e^2π iη/1+e^4π iη dx|≤∫_σ^1π dx=π(1-σ)=(T^a). Let 1≤ t_0 and a≤3/7 be fixed real numbers and 1-σ_0= T^a. Then for T→+∞ we have ∫_t_0^Tlog|e^π iη| dt=-π(1-σ_0)(T/2π)^1/2+√(π)/3(1-σ_0)^3/2 +(T^a/2). Put s=σ_0+it, then ∫_t_0^Tlog|e^π iη| dt=∫_σ+it_0^σ+iTlog e^π i η ds=∫_σ+it_0^σ+iTπ i√(s-1/2π i) ds =-∫_σ+it_0^σ+iT 2π^2√(s-1/2π i) ds-1/2π i =-2π^2.1/3/2(s-1/2π i)^3/2|_s=σ+it_0^σ+iT =-4π^2/3{(T/2π+i1-σ/2π)^3/2-(t_0/2π+i1-σ/2π)^3/2} ={-4π^2/3(T/2π)^3/2(1+i1-σ/T)^3/2+4π^2/3(i1-σ/2π)^3/2(1-it_0/1-σ)^3/2}. According to the hypothesis, both (1-σ)/T and t_0/(1-σ) are less than 1/2, say. So, we may expand both parenthesis in convergent power series. If I want to get this with an error (1) we must retain the terms until ((1-σ)/T)^3 and (t_0/(1-σ))^2. Therefore, the result is the imaginary part of {-4π^2/3(T/2π)^3/2(1+3i/21-σ/T-3/8(1-σ)^2/T^2)+((1-σ)^3T^-3/2) +(-1+i)4π^2/3√(2)(1-σ/2π)^3/2(1-3i/2t_0/1-σ)+((1-σ)^-1/2)} ∫_t_0^Tlog|e^π iη| dt=-4π^2/3(T/2π)^3/23/21-σ/T+4π^2/3√(2)(1-σ/2π)^3/2 +4π^2/3√(2)3/2(1-σ/2π)^3/2t_0/1-σ+((1-σ)^3T^-3/2)+((1-σ)^-1/2) =-π(1-σ)(T/2π)^1/2+√(π)/3(1-σ)^3/2 +t_0/2(π(1-σ))^1/2+((1-σ)^-1/2) assuming that a≤3/7 so that the exponent 3a-3/2≤-a/2 . Let t_0>1 greater than the constant in Theorem <ref> and 1-σ_0=T^a with 0<a<1/2. Then, for T→+∞ ∫_t_0^Tlog|J(σ_0+it)| dt=-π T^2/8+T/2log T-T/2+T/2log (2π)+(T^a+1/2) By Lemmas <ref>, <ref>, <ref> and <ref>, it remains to compute A=∫_σ_0+it_0^σ_0+iT(π i s/4+log s-1/2log1-s/2π-3π i/8) ds A={.π i s^2/8+s log s-1/2s-3π i s/8+1-s/2log1-s/2π|_σ_0+it_0^σ_0+iT} =-π T^2/8-T/2+_t_0(1)+{(σ_0+iT)log(σ_0+i T)-(σ_0+it_0)log(σ_0+i t_0) +1-σ_0-iT/2log1-σ_0-iT/2π-1-σ_0-it_0/2log1-σ_0-i t_0/2π} Simple computations yields (I have written here σ instead of σ_0) (σ+iT)log(σ+iT)=Tlog T-T∑_n=1^∞(-1)^n/2n(σ/T)^2n+πσ/2+σ∑_n=1^∞(-1)^n/2n-1(σ/T)^2n-1. (σ+it_0)log(σ+it_0)=t_0log|σ|-t_0∑_n=1^∞(-1)^n/2n(t_0/σ)^2n+πσ-σ∑_n=1^∞(-1)^n/2n-1(t_0/σ)^2n-1. (1-σ/2-iT/2)log1-σ-iT/2π =-T/2logT/2π+T/2∑_n=1^∞(-1)^n/2n(1-σ/T)^2n-π(1-σ)/4-1-σ/2∑_n=1^∞(-1)^n/2n-1(1-σ/T)^2n-1. (1-σ/2-it_0/2)log1-σ-it_0/2π =-t_0/2log1-σ/2π+t_0/2∑_n=1^∞(-1)^n/2n(t_0/1-σ)^2n+1-σ/2∑_n=1^∞(-1)^n/2n-1(t_0/1-σ)^2n-1). Substituting these values, we obtain A=-π T^2/8-T/2+Tlog T-T/2logT/2π+(T^a). A=-π T^2/8+T/2log T-T/2+T/2log2π+(T^a). By the above lemmas <ref>, <ref>, <ref> and <ref> we get then ∫_t_0^T log|J(σ_0+it)| dt=-π T^2/8+T/2log T-T/2+T/2log2π+(T^a) -π(1-σ_0)(T/2π)^1/2+√(π)/3(1-σ_0)^3/2+(T^a/2) -(T/2π)^1/2(√(T/2π))+(1). Therefore, ∫_t_0^Tlog|J(σ_0+it) | dt=-π T^2/8+T/2log T-T/2+T/2log2π+πσ_0(T/2π)^1/2 -(π+(√(T/2π)))(T/2π)^1/2+√(π)/3(1-σ_0)^3/2+(T^a). Siegel <cit.>*p. 304 asserts that he can prove his theorems with a=ε. This would imply that the term containing the function P in equation (<ref>) would make sense. In <cit.> we saw that this term appears in the experimental data obtained with the zeros of (s). Since t_0 is chosen adequately, we have by Theorem <ref> that F(s)=J(s)(1+U(s)), with |U(s)|≤ At^-1/21. It follows that ∫_t_0^Tlog|1+U(σ_0+it)| dt=(T^20/21). Combining this with Proposition <ref> proves Proposition <ref>. 999 A166 J. Arias de Reyna, Riemann's auxiliary function: Basic Results, https://arxiv.org/abs/2406.02403arXiv:2406.02403. A172 J. Arias de Reyna, Statistics of zeros of the auxiliary function, https://arxiv.org/abs/2406.03041arXiv:2406.03041. A92 J. Arias de Reyna, Simple bounds of the auxiliary function of Riemann, preprint (92). A98 J. Arias de Reyna, Region without zeros for the auxiliary function of Riemann, https://arxiv.org/abs/22406.03825arXiv:2406.03825. A100 J. Arias de Reyna, Asymptotic expansions of the auxiliary function, https://arxiv.org/abs/2406.04714arXiv:2406.04714. A173 J. Arias de Reyna, Riemann's auxiliary function. Right Limit of zeros, https://arxiv.org/abs/2406.07014arXiv:2406.07014. A193 J. Arias de Reyna, Note on the asymptotic of the auxiliary function, https://arxiv.org/abs/2406.06066arXiv:2406.06066. A101 J. Arias de Reyna, Mean values of the auxiliary function, preprint (101). A185 J. Arias de Reyna, On the number of zeros of (s), preprint (185). A66 J. Arias de Reyna, Infinite product of Riemann auxiliary function, preprint (66). B R. J. Backlund, https://gallica.bnf.fr/ark:/12148/bpt6k3111d/f1983.image.r=BacklundSur les zéros de la fonction ζ(s) de Riemann, Comptes Rendues de l'Académie des Sciences 158 (1914) 1979–1981. E H. M. Edwards, Riemann's Theta Function, Academic Press, 1974, [Dover Edition in 2001]. L https://academictree.org/math/publications.php?pid=162486J. E. Littlewood, On the zeros of the Riemann zeta-function, Proc. Cambridge Philos. Soc. 22 (1924) 295–318, https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/abs/on-the-zeros-of-the-riemann-zetafunction/43E7E2D7B1B0BB409EBA1AB68E7CD3A3doi:10.1017/S0305004100014225 R B. Riemann, https://commons.wikimedia.org/wiki/File:RiemannPrim1859.djvuÜber die Anzahl der Primzahlen unter einer gegebenen Grösse, Monatsber. Akad. Berlin (1859) 671–680. R2 B. Riemann, Gesamemelte Mathematische Werke, Wissenschaftlicher Nachlass und Nachträge—Collected Papers, Ed. R. Narasimhan, Springer, 1990. Rudin W. Rudin, Real and complex analysis, McGraw-Hill, London, 1970. Siegel C. L. Siegel, Über Riemann Nachlaß zur analytischen Zahlentheorie, Quellen und Studien zur Geschichte der Mathematik Astronomie und Physik 2 (1932) 45–80. (Reprinted in <cit.>, 1, 275–310.) https://arxiv.org/abs/1810.05198English version. SW C. L. Siegel, Carl Ludwig Siegel's Gesammelte Abhandlungen, (edited by K. Chandrasekharan and H. Maaß), Springer-Verlag, Berlin, 1966. T E. C. Titchmarsh The Theory of the Riemann Zeta-function, Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
http://arxiv.org/abs/2406.08735v1
20240613014016
Context-Based Interface Prototyping: Understanding the Effect of Prototype Representation on User Feedback
[ "Marius Hoggenmueller", "Martin Tomitsch", "Luke Hespanhol", "Tram Thi Minh Tran", "Stewart Worrall", "Eduardo Nebot" ]
cs.HC
[ "cs.HC" ]
Context-Based Interface Prototyping]Context-Based Interface Prototyping: Understanding the Effect of Prototype Representation on User Feedback marius.hoggenmueller@sydney.edu.au Design Lab, Sydney School of Architecture, Design and Planning The University of Sydney martin.tomitsch@sydney.edu.au Design Lab, Sydney School of Architecture, Design and Planning The University of Sydney CAFA Beijing Visual Art Innovation Institute, China luke.hespanhol@sydney.edu.au Design Lab, Sydney School of Architecture, Design and Planning The University of Sydney ttra6156@uni.sydney.edu.au Design Lab, Sydney School of Architecture, Design and Planning The University of Sydney stewart.worrall@sydney.edu.au Australian Centre for Field Robotics The University of Sydney eduardo.nebot@sydney.edu.au Australian Centre for Field Robotics The University of Sydney § ABSTRACT The rise of autonomous systems in cities, such as automated vehicles (AVs), requires new approaches for prototyping and evaluating how people interact with those systems through context-based user interfaces, such as external human-machine interfaces (eHMIs). In this paper, we present a comparative study of three prototype representations (real-world VR, computer-generated VR, real-world video) of an eHMI in a mixed-methods study with 42 participants. Quantitative results show that while the real-world VR representation results in higher sense of presence, no significant differences in user experience and trust towards the AV itself were found. However, interview data shows that participants focused on different experiential and perceptual aspects in each of the prototype representations. These differences are linked to spatial awareness and perceived realism of the AV behaviour and its context, affecting in turn how participants assess trust and the eHMI. The paper offers guidelines for prototyping and evaluating context-based interfaces through simulations. <ccs2012> <concept> <concept_id>10003120.10003121.10003122</concept_id> <concept_desc>Human-centered computing HCI design and evaluation methods</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Human-centered computing HCI design and evaluation methods [ Eduardo Nebot ================= § INTRODUCTION With the rise of autonomous systems and their application in everyday products, the human-computer interaction (HCI) community has turned its attention towards developing ways for supporting the design of such systems. Within the context of cities, autonomous systems promise to transform urban mobility and to automate services <cit.>. Recent trials of automated vehicles (AVs) as early protagonists of autonomous systems in cities, have primarily focused on making the technology work. However, a key for the successful uptake of AVs is the careful consideration of trust, usability and user experience, as found in a study on automated driving <cit.>. Within an urban environment, this extends to the design of the external human-machine interface (eHMI) that AVs use to communicate their internal state and their intent to pedestrians <cit.>. Prototyping and evaluating eHMIs with prospective users in urban environments is extremely challenging, as it is associated with high costs of real-world prototypes (e.g. a self-driving car) and potential risks to participants. To address these challenges, HCI researchers have turned to using various simulation platforms and prototype representations that allow them to simulate eHMI concepts in a lab environment <cit.>. This includes the use of video recordings to study pedestrian interactions with an eHMI <cit.> and computer-generated (CG) prototypes in a virtual environment (VR) to evaluate how pedestrians would cross in front of an AV equipped with an eHMI <cit.>. Previous simulation studies have primarily focused on prototyping and evaluating specific interface concepts (e.g. <cit.>), assessing how participants experience the simulation (e.g. <cit.>) and comparing the sense of presence across CG and real-world representations (e.g. <cit.>). To our knowledge, no studies have been carried out to date to investigate in what ways different prototype representations affect how participants provide feedback on the prototype itself. To address this gap, we implemented a mixed-methods study in which we compared three prototype representations: real-world VR, CG VR and real-world video. As a case study, we chose a ride-sharing scenario (captured from the perspective of a pedestrian waiting for their vehicle to arrive) in a shared urban environment, where pedestrians, cyclists and maintenance vehicles share the same road. We chose this scenario as previous research has found that people consider interactions with AVs more important in shared environments <cit.>. Our research team involved interaction designers, urbanists and engineers, which allowed us to take a holistic approach to designing the AV prototype and the scenario used in our study. To that end, we used a fully functional AV that was specifically designed for a shared environment and equipped with an eHMI in the form of a low-resolution display. The paper makes three contributions to the field within HCI that is concerned with the design of human-machine interfaces for autonomous systems. (1) It presents the first comparative study of different simulation approaches for evaluating eHMIs from a pedestrian perspective. (2) It provides empirically based insights on what participants focus on when assessing trust and user experience across real-world VR, CG VR and real-world video prototype representations. (3) It offers guidelines for how to create context-based interface prototypes for lab-based evaluation studies. § RELATED WORK Within the broader context of autonomous systems, this paper specifically draws on and contributes to (1) the design of eHMIs, (2) prototyping approaches and the simulation of interactions between people and AVs, and (3) studies of simulation platforms. §.§ External Human-Machine Interfaces Being designed to communicate the system's awareness and intent, eHMI concepts include projection-based solutions <cit.> and displays attached to the vehicle <cit.>, thereby supporting various communication modalities, from abstract <cit.> to symbolic <cit.> to textual <cit.>. The study reported in this paper contributes to this field through the systematic investigation of the various prototyping representations that are currently available to evaluate eHMIs and other context-based interfaces for autonomous systems (e.g. mobile robots, drones) in complex urban environments. §.§ Prototyping and Simulation The creation of prototypes is an integral part of a human-centred design process <cit.> and can fulfil various purposes; for example, prototypes are often used to evaluate certain aspects of a design with users, before further development stages commence <cit.>. Lim et al. <cit.> highlight the importance of understanding the fundamental characteristics of prototypes and the careful selection of representational forms, prototyping materials and resolutions, as these influence the judgement of a target design concept. Given the complexity, cost and potential risk to participants, associated with designing and evaluating interfaces for and interactions with autonomous systems, researchers have turned to a wide range of methods and techniques, such as Wizard of Oz, video and simulation prototyping <cit.>. In particular, CG VR has been found to be a promising approach for simulating autonomous systems and their interfaces in a safe environment <cit.>. CG VR allows for assessing the user experience (UX) of an interaction in a contextual environment <cit.> while increasing controllability and reproducibility <cit.>. Research on pedestrian safety has further demonstrated that participant behaviour in CG VR matches real-world norms and that participants found the VR environment to be realistic and engaging <cit.>. Simulations also have the advantage of allowing for rapid prototyping approaches, as various interface elements can be quickly exchanged and evaluated <cit.>. Simulation studies are not limited to CG environments with some studies employing video <cit.> or 360-degree video based VR <cit.> as a way to simulate the experience of interacting with real-world prototypes. When it comes to the evaluation of context-based interfaces, it is important that the simulated environment offers realistic experiences and provokes similar user behaviours to those observed in the real world. Here, results from similar HCI research domains are promising; for example, Mækelæ et al. have reported that in the area of public display research, they were able to observe similar user behaviour in virtual compared to real-world settings <cit.>. They therefore propose virtual field studies as an alternative to real-world studies, offering similar ecological validity but at a reduced effort. A key purpose of prototypes is to collect feedback from prospective users. To that end, Pettersson et al. <cit.> found that the overall user experience was similar when comparing in-vehicle systems in VR and in the field, but that participants provided less feedback in VR. They further observed that users had difficulties to separate judgements about the evaluated prototype and the system through which the prototype is presented. Similar findings were reported by Voit et al. for the evaluation of smart artefacts <cit.>; besides differences in reported feedback, they also found that evaluation methods can influence study results. This paper sheds further light on how user feedback varies across different prototype representations. §.§ Simulation Platforms Regardless of the simulation platform being used, i.e. CAVE-like setups <cit.>, screen-based driving simulators or VR headsets, a major consideration in the development of simulator platforms is to offer users a high sense of presence <cit.>. This can be achieved through various measures, such as increasing interaction fidelity <cit.>, motion fidelity <cit.> and offering a high visual realism <cit.>. Previous research has shown that higher visual realism enhances realistic response in an immersive environment <cit.>. To that end, 360-degree video is a promising alternative to CG, as it results in higher perceived fidelity and presence compared to CG simulations <cit.>. In addition to the higher visual realism, users' sense of presence also benefits from the familiarity with the environment when using immersive real-world videos <cit.>. Importantly, real-world video is able to represent not only the prototype but also the context at a high level of fidelity <cit.>, which is comprised of various elements, such as audio-visual impressions, the physical environment and the presence of other people and the user’s relationship with them <cit.>. These elements might influence how participants experience an eHMI in a simulated situation <cit.>. The recent uptake in research on prototyping strategies for eHMIs within the HCI community points to VR and 360-degree video simulations as competing emerging trends. Yet, despite the many promising concepts, the complexity of the context under investigation means that more work is required to further understand the inherent qualities of those prototyping representations. Previous work has highlighted the importance of understanding the fundamental characteristics of physical prototypes in the context of interactive products <cit.>. To the best of our knowledge, characteristics of emerging prototype representations, such as 360-degree video simulations and VR, in relation to increasingly important outcome variables, such as system trust, have not yet been systematically studied. In particular, a systematic evaluation of the effect of different prototype representations on user feedback – and therefore study results – is still lacking. This study represents a first attempt to address this gap, which we argue will not only inform research on and the design of AVs, but also other categories of urban technologies and autonomous systems, such as robotic interfaces <cit.> and pulverised displays <cit.>. § EVALUATION STUDY Building on previous work and to address the gap identified in the review of previous studies, we set out to investigate how user feedback varies across different prototype representations. Rather than evaluating a specific eHMI, our aim was to understand the factors that influence user feedback on eHMIs. This aim follows the trajectory from early work in HCI that reported on differences in user feedback when evaluating paper versus interactive prototypes <cit.>. As previous studies of VR simulations have found sense of presence to be an important factor, we formulated our first research question (RQ1) to measure sense of presence for each of the prototype representations: How does the prototype representation affect user’s sense of presence?. The subsequent two research questions that drove our study design were formulated to measure specific user feedback sought when evaluating human-machine interfaces, with previous studies highlighting trust and UX as important aspects <cit.>. Thus, the second research question (RQ2) was How does the prototype representation affect perceived user’s trust in the eHMI? and the third question (RQ3) was How does the prototype representation affect the perceived UX of the eHMI?. §.§ Study Design We adopted a between-subject approach for the gathering of quantitative data to assess sense of presence, trust and user experience, thus reducing learning effects and avoiding carryover effects from repeated measures. To that end, we balanced the distribution of participants across the three prototype representations. After experiencing the assigned prototype representation, participants were asked to complete a set of questionnaires and to partake in a semi-structured interview. This was followed by participants experiencing the same scenario in the remaining two prototype representations. At the conclusion of the study, participants took part in a second semi-structured interview. This approach was chosen to allow participants to compare their perceived sense of trust and UX across all three representations. §.§ Prototype Representations We opted to compare real-world VR (referred to as RW-VR), computer-generated VR (CG-VR) and real-world video (RW-Video), hence adopting two simulation platforms (VR and video). RW-VR is increasingly used in simulation studies given that 360-degree cameras are becoming more affordable and widely available <cit.>, and due to the higher level of fidelity of real-world video <cit.>. CG-VR is a commonly used representation in pedestrian-AV safety research (e.g. <cit.>). RW-Video was included as video prototypes can be useful when evaluating context-based interfaces online <cit.>. Video prototypes are further less complex and lower-cost in terms of the evaluation setup. The eHMI and the trajectories were the same for all three prototype representations in terms of depicted eHMI hardware (i.e. resolution; display technology), displayed content (i.e. light patterns) and context (i.e. location and time when a specific vehicle behaviour and light pattern was triggered). Differences would only occur due to the inherent nature of the prototype representation, whose effects on study results were part of the investigation. §.§.§ RW-VR For creating the RW-VR prototype representation, we worked in close collaboration with researchers from urbanism and from the engineering department of our university. We used a fully functional AV developed by the engineering department as a cooperative autonomous electric vehicle (CAV) platform <cit.> with hardware designed by AEV Robotics[<https://aevrobotics.com/>, last accessed September 2020]. The vehicles - being small, efficient and electrically powered - were designed for the purpose to operate safely in low speed road environments (under 40kph) and in shared environments where the vehicles would be operating in close proximity to pedestrians <cit.>. The platforms have the sensing and computation capacity to eventually operate at level 5 as defined by the Society for Automotive Engineers (SAE) for autonomous driving. The system is based on the robot operating system (ROS) which is a middleware for robotic platforms that enables and promotes modular system design. For the purpose of this study, we designed a low-resolution (low-res) lighting display functioning as an eHMI to communicate the shared AV's intent and awareness, as well as enabling users to identify their car, following recommendations from previous studies <cit.>. The display consisted of LED strips installed on three sides of the front window as shown in Figure <ref>. The LED strips featured a pitch of 60 pixels per meter, resulting in a total of 145 LEDs. To improve the viewing angle and to create the illusion of a light bar (rather than a distinct set of point light sources), a diffuser tube of opal white acrylic was added. The LEDs were controlled via an Arduino board, which was connected to the system of the vehicle. A python ROS node was constructed that read the information from the vehicle state by subscribing to the relevant information. All light patterns were triggered in real-time based on the sensed information (awareness) and the state of the AV platform (intent). We developed a set of light patterns to demonstrate the usage of an eHMI interface for shared AV services following a user-centred design process supported through a purpose-built prototyping toolkit (involving workshops with 14 experts) <cit.>. We do not dive deeper into the design of the eHMI itself here, as this is not the focus of the contributions reported in this paper. The final sequence of light patterns along with the scenes used in the prototype representation are depicted in Figure <ref>. In total, we recorded three scenes to demonstrate the eHMI interface in a shared AV scenario. We staged and recorded the scenes in a shared environment (one of our university's main avenues) with an Insta360 Pro 2[<https://www.insta360.com/product/insta360-pro>, last accessed September 2020] camera, which can record 360-degree panorama videos in 8K 3D. The scenes (represented from the perspective of the study participant) included: (1) The AV passing through the shared environment without any staged interactions with pedestrians. (2) The AV pulling over and picking up another pedestrian (Actor 2 in Figure <ref>). (3) The AV indicating to pull over to the camera stand. In this trajectory, another pedestrian (Actor 3 in Figure <ref>) forces the AV to slow down and stop, demonstrating how a pedestrian safely crosses in front of the AV. An additional person was placed directly behind the camera in all three scenes (Actor 1 in Figure <ref>), giving the appearance of another rider waiting for their own shared AVs. This was to constraint participants' movement in the simulation, as 360-degree video does not allow for motion when imported into VR. All three scenes were recorded with the same AV and therefore recorded consecutively. The vehicle was operating based on pre-computed trajectories that mimicked the desired vehicle behaviour for the purpose of recording the 360-degree video. The vehicle was operating a `virtual bumper' which is a system that detects obstacles in (or adjacent to) the proposed vehicle trajectory and reduces the speed based on a time-to-collision calculation. Due to safety regulations, a licensed operator had to sit in the AV – in case of having to manually bring the AV to a halt. However, for the purpose of the recordings, we were able to remove the steering wheel, thus conveying clearly to participants that the car was driving autonomously. After recording the scenes with the 360-degree camera, we used Adobe Premiere and Adobe After Effects for post-processing. As we recorded the scenes in early evening hours for better visibility of the low-res lighting display, we had to apply the Neat Video[<https://www.neatvideo.com/>, last accessed September 2020] filter to reduce image noise, while still preserving fine details, such as people's faces. We then combined the three scenes, added a short blend transition between them, and exported them into a single 3D over-under video file. To experience the stereoscopic 3D 360-degree video with a VR headset (HTC Vive), we imported the video file into Unity and applied it as a render texture on a skybox material. To convey the immersive audio recording of the scene soundscape and increase a sense of presence, we used stereo headphones. §.§.§ CG-VR To create the same three scenes from the RW-VR in the CG-VR, we commissioned a 3D simulation designer with a background in interaction design and a specialisation in 3D modelling and creating immersive virtual products with more than 8 years of professional experience. We provided an overview of the scene recordings (similar to Figure <ref>), the 360-degree video from the RW-VR as a reference as well as a building information model (BIM) of the university campus avenue. The designer further conducted several visits to the physical site to better assess the dimensions and proportions of the shared space environment and the surrounding buildings. For the design of the AV, we provided the 3D designer with technical drawings, photographs and videos of the actual AV. The car model was created in Autodesk 3ds Max, using emissive materials for the low-res lighting display in order to replicate the lighting effects and aesthetics as realistically as possible.The car model and the low-res lighting display were then animated in Unity. We deliberately decided against using an existing autonomous driving simulator with a sensor suite (e.g. Carla <cit.>), as Unity has been used for the majority of eHMI research and provides more flexibility for designing and prototyping customised context-based interfaces and the surrounding environment. For creating the actors and surrounding pedestrians from the 360-degree video, models from a library providing 3D scanned people[<https://renderpeople.com/>, last accessed September 2020], were used and customised for our scenario. Throughout the design process, we arranged several meetings with the 3D designer and also tested the prototype in VR. Through this iterative approach, changes were made to the atmospheric lighting of the scene and interactions of pedestrians with the AV were adjusted to match the details from the 360-degree video. For the experiment we used the same VR headset as for the RW-VR simulation. We imported the immersive audio recording to reduce any effects of sound as a potentially confounding variable. §.§.§ RW-Video For the real-world video prototype representation, we used the previously recorded and post-processed 360-degree video as a source. We used Adobe After Effects to map the equirectangular into a 2D rectilinear video projection. In order to highlight the first-person nature of the experience (as opposed to having the participant just passively watching the video as a passer-by in the environment), we animated the viewing angle of the video to ensure that the AV was always in the centre of the image. Thus, if the AV was driving out of the scene, the camera would follow its trajectory as if the participant were waiting for their own car. We exported the final video as a 1080p resolution, 16 to 9 video file. For the experiment, we displayed the video in full-screen mode on a 24-inch monitor. As in the VR prototype representations, we also used the same stereo headphones to convey the immersive audio soundscape. §.§ Participants We recruited 42 participants (22 male, 20 female) between the ages of 21 and 57 (M=32.05, SD=9.13). Participants were recruited from our university's mailing lists, flyers and social networks; all participants voluntarily took part in the experiment and initial contact had to be made by them, following the study protocol approved by our university's human research ethics committee. Participants were randomly assigned to one of the three conditions to start with; the two remaining conditions that participants experienced before the post-study interview were counterbalanced. Further, we balanced participants' age, gender and previous experience in VR across the three prototype representations with the help of an online screening questionnaire that we sent to participants prior to the experiment. Table <ref> shows participant characteristics for each of the conditions that they experienced first. §.§ Study Procedure Upon arriving in our lab, participants received a short introduction about the research topic on shared AVs and eHMIs. We informed participants that the aim of our study was to evaluate trust and UX for eHMIs in a shared AV scenario. We did not mention the comparison of representations in order to avoid biases towards the questionnaire and the interview that we conducted after the first experienced prototype representation. We then asked participants to fill out the consent form to take part in the study, followed by a short questionnaire to collect data on demographics. Before experiencing the first prototype representation, we shortly briefed participants about the scenario they would experience, following advice from previous work suggesting that providing users with a meaningful narrative context increases their inner presence <cit.>. To further immerse them into the scenario of waiting for a shared AV, we presented them with a mock-up interface on a mobile phone. The interface followed the layout of existing ride-sharing services and displayed: (a) a map of the location where the participants were supposed to wait for their vehicle, (b) the vehicle's current position approximately 2 minutes away from the participant, (c) the colour which was assigned by the system for the participant to recognise their vehicle (in this case purple), and (d) the mock user profile of the person whom they would share the vehicle with. After experiencing the first prototype representation, participants were asked to complete a set of standardised questionnaires, which took between 9 to 13 minutes. In a next step, we conducted a semi-structured interview (M=7min 43sec, SD=2min 54sec). After consecutively experiencing the two remaining prototype representations, we conducted a semi-structured post-study interview (M=9min 34sec, SD=2min 53sec). The duration of each experienced scenario was 2 minutes and 19 seconds (same duration for each condition). We chose this time frame carefully based on initial tests within the team, ensuring that there was sufficient time for participants to get familiar with the context and to adjust to the immersive experience, yet short enough to avoid fatigue. The whole study took approximately 45 minutes for participants to complete. We informed participants that they could stop the experiment at any time, for example, should they experience motion sickness, however, none of the participants had to stop the experiment. The conditions and study procedure are illustrated in Figure <ref>. §.§ Data Collection Throughout the experiment we collected both quantitative and qualitative data, following a mixed-methods approach <cit.>. In the following we provide an overview of our data collection. We present the questionnaires in the same order as participants were asked to complete them during the experiment. §.§.§ Questionnaires In order to measure participants' subjective perception of trust towards the AV, we used a standardised trust scale that was designed for the measurement of trust in autonomous systems <cit.>. The questionnaire which has been widely used in the context of research on autonomous vehicles <cit.> consists of two subscales to calculate an overall trust score (7 items) and an overall distrust score (5 items); all items correspond to 7-point Likert scales. We instructed participants to assess trust by considering the AV as a single system and based on how they experienced the AV in the presented scenario. To assess participants' UX of the eHMI, we used the UEQ questionnaire <cit.>. The questionnaire consists of 26 bipolar items (7-stage scale from -3 to +3) to calculate 6 UEQ subscales: attractiveness (overall impression of the product), perspicuity (how easy it is to get familiar with the product), efficiency (solving tasks without unnecessary effort), dependability (feeling in control), stimulation (how exciting and motivating it is to use the product), and novelty (how innovative and creative the product is). For the UX questionnaire, we instructed participants to consider the low-res lighting interface of the AV as experienced in the presented scenario. To assess participants' media experience and sense of presence in the three prototype representations, we employed the ITC-Sense of Presence Inventory (ITC-SOPI) <cit.>. The questionnaire is well established to compare sense of presence across a wide range of media systems, and has been previously used for comparing semi-autonomous driving systems in VR and in the field <cit.>. The questionnaire consists of 38-items (5-point Likert) to calculate 4 subscales: spatial presence (assessing the sensation of being in a displayed environment), engagement (measuring the intensity of the experience and feeling of being involved), ecological validity (naturalism of the displayed environment and sensation that displayed objects are solid), negative effects (assessing potential negative effects such as motion sickness). §.§.§ Interviews We collected qualitative data in the form of semi-structured interviews. In the first round of interviews, conducted after participants experienced the first prototype representation and following the questionnaires, we asked questions about (1) understanding of the light patterns, (2) trust towards the vehicle and (3) comments on the experience. In the post-study interview, conducted after participants experienced the remaining two presentations, we asked questions about (1) differences between the three prototype representations in terms of their experience, (2) whether experiencing the remaining two presentations changed participants' perceived trust towards the AV and (3) perception and understanding of the lighting display. §.§ Data Analysis §.§.§ Questionnaires We first conducted a descriptive analysis of our questionnaire data to obtain an overview about the relationship between each predictor and the outcome domain variable. Thus, we calculated means and standard deviations after an internal reliability assessment of the scales calculating Cronbach's alpha. Overall internal reliability was excellent for both trust subscales (α >= 0.9). For the UEQ questionnaire, item reliability was acceptable for efficiency, stimulation and novelty (α > 0.7), and good for attractiveness, perspicuity and efficiency (α > 0.8). For the ICT-SOPI, overall internal reliability was excellent for spatial presence and engagement (α > 0.9), good for negative effects (α = 0.83), and acceptable for ecological validity (α = 0.71). We conducted a univariate analysis of variance (ANOVA) for each outcome domain of the questionnaires. We used side-by-side box plots to assess if the data was approximately normally distributed. In case of normal distribution, we calculated one-way ANOVA, otherwise the Kruskal-Wallis rank sum test was utilised. In case of significant differences, we performed post-hoc tests using Benjamin-Hochberg (BH)-corrected p-values. §.§.§ Interviews All interviews were transcribed by a professional transcription service. Two coders worked collaboratively to analyse the data from both interviews. However, each coder started the analysis with a different set of interviews and independently developed the codebook. We reviewed each other's codebooks afterwards to discuss the difference and made adjustments where needed. The data from the post-study interview was used to assess participants' preferences and to identify the reasons for those preferences as well as for changes in terms of their perceived trust towards the AV. These data were analysed following a deductive thematic analysis approach <cit.>, using a digital whiteboard with sticky notes. The identified themes were used to structure the Discussion section. To further illustrate specific observations around the identified themes, relevant quotes were selected from the first interview. The data from the first interview was used to assess perceived trust towards AV and user experience of the lighting system. Using the same analysis approach with the post-study interview, we identified key aspects of trust and user experience that changed with prototype representations. Explanation of these changes, however, were found mainly in the analysis of the post-study interview. § RESULTS §.§ Sense of Presence (RQ1) §.§.§ ITC-SOPI Results of the ITC-SOPI (see Table <ref>) show above middle rating sense of presence for RW-VR and CG-VR, and below middle rating for RW-Video. Engagement ratings are high for RW-VR and CG-VR, and slightly above middle rating for RW-Video. Ecological validity scale is high for CG-VR and RW-Video and very high for RW-VR. Negative effects are low for all three prototype representations, with slightly higher ratings for RW-VR. Univariate ANOVA found no significant main effect of prototype representation on negative effects (F(2, 37) = 1.023, p = 0.369). However, a significant main effect of prototype representation was found for spatial presence (F(2, 39) = 7.258, p < 0.01), engagement (F(2, 39) = 15.77, p < 0.001) and ecological validity (F(2, 39) = 5.424, p < 0.01). For spatial presence, post-hoc tests revealed significant differences between RW-VR and RW-Video, as well as CG-VR and RW-Video (both with p < 0.01). For the engagement scale, post-hoc tests revealed significant differences between RW-VR and RW-Video, as well as CG-VR and RW-Video (both with p < 0.001). Finally, for ecological validity, post-hoc tests revealed significant differences between RW-VR and CG-VR, and also RW-VR and RW-Video (both with p < 0.05). §.§.§ Qualitative Feedback In terms of the media experience (i.e. how the prototype was presented), the post-study interviews showed that RW-VR was favoured by the majority of participants (n=30), followed by CG-VR (n=5) and RW-Video (n=1). There were 4 participants who favoured both VR representations and 2 participants did not have a preference. Being the least immersive, the RW-Video was perceived by many participants as `boring’ (P10, P28, P37). Most participants felt like they were `watching’ a video (n=9) and were not really `being there’ in the scene (n=8). As P34 stated, `you are not present at that particular place, so you are distancing yourself from the actual situation’. Henceforward, we discuss two predominant themes from our thematic analysis in more detail: * (1) Visual realism: Between the two VR simulations, participants commented positively on RW-VR due to the higher realism of the presented environment. The RW-VR prototype representation allowed participants to `naturally [...] step in that environment' (P8) and `not [get] distracted by the novelty of being in a virtual world' (P24). The real-world environment made it easy for participants to quickly understand the scenario, which P18 related to a perceived reduction of cognitive load: `[...] because it is so much more realistic, your brain doesn't have to do the work to try to create the picture and make sense of it, which means you can actually focus on the aspects of the car and what it does and the way it communicates’. Participants who experienced the RW-VR also mentioned in the first interviews that they were impressed by the high realism (n=6). Two participants stated in this regard that they `haven't experienced something that well put together in VR' (P5) and were rather expecting a representation that is `cartoon’-like (P5) or `game'-like (P21). Thirteen participants explicitly stated that the CG-VR felt `more like a game' and that it `seemed weird and somehow detached from reality' (P27). The subjective experience of being present was not as strong, as reported by P33: `I felt like I was injected into a scene'. Some participants mentioned that they were `distracted by' (P18, P24, P33) or `focused on' (P15, P26) the imperfections in the simulated environment: `I was thinking a lot about how this computer world was created. I was just looking at the patterns on the trees and looking at the movement of [people]' (P15). P7 mentioned an interesting aspect about feeling related to other people within an immersive scene: `it’s easier to relate to an image of a real human than to an avatar'. For the CG-VR she would have expected to `see [her] hands like an avatar hand as well', so she could see herself as one of the people there and connect with them. * (2) Interaction fidelity: In the RW-VR prototype representation, participants were able to look around the environment but could not move around as naturally as in CG-VR. Motion sickness might be experienced by those who tried to walk a few steps, as pointed out by P18: `If you move, but the picture does not move accordingly, your brain will make you sick’. Six participants noticed the nature of 360-degree video and did not attempt to move: `I felt like I was on a fixed camera stand’ (P37). Five participants said that they did not feel it was possible to interact, because things were `too realistic' (P16) or had `already been happening’ (P29). Despite the difference in terms of interactivity, the number of participants who felt the urge to respond physically to the AV was the same for both RW-VR (n=5) and CG-VR (n=5). P31 asked the researcher if she could walk to the vehicle and `actually sit there'. P35 raised a similar point: `when it arrived for me I would've liked to walk up to it’, expressing the desire to explore the experience holistically, including to commute in a shared AV. In the CG-VR prototype representation, participants (n=6) thought that interaction was possible since things were `rendered’ and `it can take other inputs' (P37). Two participants suggested the potential impact of interactivity on feelings of immersion. P34, for example, said: `if there was a possibility of interaction […] like going in front of the car and it stopped, then I would say that the second prototype [CG-VR] will be much more immersive than the third one [RW-VR]’. §.§ Trust (RQ2) §.§.§ Trust Scale Descriptive data analysis of the subjective trust ratings <cit.> show that participants' trust towards the AV was higher for the VR representations than for the non-immersive RW-Video, with highest trust in RW-VR (see Table <ref>). Conversely, participants' distrust towards the AV was higher for RW-Video than for RW-VR or CG-VR, with lowest distrust in the RW-VR. Yet, no statistically significant difference could be found. That said, RW-VR had the lowest standard variations for both trust and distrust, indicating a greater consensus around the responses for RW-VR. §.§.§ Motivators of Trust To better understand why participants generally trusted the vehicle (with low distrust across all three conditions), we coded the sections of the first interviews where we asked about reasons to trust. This allowed us to investigate the immediate responses given by the participants, considering which aspects could have positively influenced their trust. The codes and frequencies show that they primarily assessed trustworthiness based on the behaviour displayed by the vehicle itself, with similar frequencies verified among the three prototype representations. That is reasonable, given that the vehicle behaviour was identical across all representations. For example, participants reported that seeing the vehicle stopping for other pedestrians in the scenario `reinforced' their trust (n=12, RW-VR=4, CG-VR=3, RW-Video=5). P21 mentioned that `after seeing [the vehicle] stop for someone, it wasn't too scary anymore' (RW-VR). Others stated that seeing the vehicle `safely' picking up another person in a previous scene (RW-VR=3) and then stopping in a safe distance from the participant (n=2, RW-VR=1, CG-VR=1) strengthened their sense of trust. The low speed of the vehicle was another reason that reinforced trust (n=6, RW-VR=3, CG-VR=1, RW-Video=2), among other contextual factors. For example, five participants stated that the light patterns communicated from the vehicle were the main driver for trusting it (RW-VR=1, CG-VR=3, RW-Video=1), while one participant (P3, RW-VR) stated that `the passengers in the car looked quite relaxed', which made her feel `more trusting'. Finally, three participants related their trust to a general preexisting confidence in technology and autonomous systems (RW-VR=1, CG-VR=1, RW-Video=1). §.§.§ Comparisons Between Prototype Representations After experiencing all three prototype representations, roughly a quarter of the participants reported in the post-study interview that their subjective perception of trust towards the vehicle had not changed (n=11). These participants mentioned that the vehicle's driving behaviour in the presented scenarios was similar (e.g. in terms of speed, slowing down and communicating with pedestrians), so `it doesn't matter what kind of platform to use [for representation]' (P32). However, more than a third of the participants stated that they trusted the AV more in the RW representations (n=15), with 11 of them explicitly reporting a higher perception of trust in RW-VR. Additionally, 4 participants stated having had less trust towards the AV in CG-VR, whereas 2 participants felt `more trustworthy' (P11) or expressed to `feel more safe' (P28) in CG-VR. Two participants stated that they experienced higher trust in both of the VR prototype representations, whereas one participant stated the opposite. The rest of the participants expressed difficulties to compare their perception of trust, feeling they have learned the interface after being exposed to the AV interactions multiple times (n=2). Others explicitly stated that their trust towards the situation changed, but that this did not influence their trust towards the car (n=2). To identify factors influencing participants' perception of trust, we included a question to that end in the post-study interview, aggregating responses into the high-level categories presented below: * (1) Spatial awareness: We found that participants' spatial awareness influenced their trust towards the AV to some degree (n=2). For example, P11 voiced the difficulty in assessing trust in RW-Video: `it didn't engage me enough to have any feelings about it, it was very distant’. We further found that participants had better perception of space, distance and speed in the VR representations (n=6). Being immersed in VR, they `felt more’ (P27), had `a closer view of what could happen' (P34) and noticed `more details’ (P31). Both distance between objects, and between participants and the vehicle, were better estimated in RW-VR and CG-VR as ‘your whole body is within that environment, so you see the sizes of things,’ (P11) and ‘everything was at a certain scale’ (P26). These differences in spatial awareness prompted different emotional and behavioural reactions. For example, P27 became more concerned about the inattentive pedestrian: `When I watched the video, I thought that car can just gently bump into the pedestrian, it’s not a problem. When I saw it in VR, in 360-degree video, that would not have been a good idea'. P26 grew more aware of the vehicle's trajectory: `I felt like where I was standing I was in its way and I wanted to step back out of [...] the path of the vehicle'. * (2) Realism of vehicle behaviour: Participants associated their higher level of trust towards the RW representations with a higher realism of the depicted driving behaviour (n=6). For example, P4 stated that the vehicle in CG-VR `felt more like fast and manic, and made [her] feel more manic too', which P27 related to the `sideways gliding' of the vehicle. P10 referred to the `stability' of the actual vehicle seen in RW-VR `that seemed a lot heavier [...] and stuck on the ground', and therefore `something [she] would jump on and trust'. Other participants also mentioned that the stopping behaviour in CG-VR felt less realistic and thus less trustworthy (n=5). For example, P38 observed that `[the vehicle] stopped already at some distance [as] if it was by design to stop, not because it had seen the person there [through a sensor]'. P22 mentioned in retrospect that for CG-VR she was not sure `if what was being depicted, was what the vehicle would be supposed to do', because `you can do all sort of things [in CG-VR]'. Conversely, she also added that due to the lower realism she `allowed [the system] to be a bit wrong and still taking in what it said it would do'. Participants reported that having seen in the RW representations that the vehicle can safely operate in the real world increased their level of trust (n=6). P1 explained, for example, that the authenticity of the RW representations `gives you a real behaviour of the code of the car', whereas P5 was impressed that `the vehicle is out there operating in the real world, [...] you know, it's not a science fiction movie - this is really happening'. P23 added in this regard: `When it was just a simulation, it just feels like “this is an idea but it’s not reality”, so I don’t think I would’ve had as much trust in it compared to the [RW-VR]'. * (3) Realism of people and the environment: We found that the different levels of realism in the depiction of people and the environment in the RW representations compared to CG-VR influenced the trust of participants towards the situation (n=7). Two female participants who experienced CG-VR as the initial representation stated in the first interview that they felt not very trustworthy towards the male characters in the scene. P14 mentioned that she was wondering `what [her] relationship to that [...] man was meant to be' and that `it took a lot of [her] attention at the beginning'. Similarly, P18 stated that `[she] was constantly looking at the guy next to [her] and at some stage asked herself `what if I punch him?'. She explained this reaction as a `fight or flight response', which was likely triggered by a slight uncanny valley affect and aggravated by the fact that the animated character did not respond in any way to the participant looking at the character. In the post-study interview, she referred back to this observation, commenting on a different emotional response when experiencing the RW-VR prototype representation: `I didn't feel like I wanted to hit the people because [in the RW-VR environment] they were understandable and they made sense'. Similarly, P3 referred to the people in the RW-VR as (`look[ing] quite relaxed'), which she linked to her perceived level of trust. The lower trust towards people in CG-VR was also influenced by the environment. For example, P14 mentioned that she instantly thought that `this is a place [she] would not stand as a woman to wait for a car', whereas in regards to the RW-VR prototype representation she mentioned that `this would be a situation I would be catching an Uber'. The difference in the feelings reported towards people and the environment, however, was not matched by the perceptions of trust towards the vehicle. For example, P13 stated: `I trust more the situation [in RW-VR], but it doesn't affect my feeling to the car'. Those participants also reported that trust towards the overall experience is more important for them. As explained by P18: `I trust the car, but I don't trust the other person [in the car]'. §.§ User Experience (RQ3) §.§.§ UEQ Questionnaire Figure <ref> shows the results of our descriptive data analysis of the UEQ scales across the three prototype representations. We were not able to find any significant differences between the different prototype representations. In other words, participants rated the UX of the eHMI similarly across CG-VR, RW-VR and RW-Video. §.§.§ Differences in UX Feedback The analysis of the data from across both interviews revealed several aspects in relation to how participants assessed the UX of the eHMI. * (1) Comprehension: There were several lighting patterns introduced throughout the ride-sharing scenario. The analysis of the first interviews revealed that participants did not notice or pay attention to all of them. For example, the pattern when the vehicle was about to pull over was not often recalled upon when prompted in the post-study interview. We found the lack of attention towards the individual light patterns occurred more frequently in the VR representations (RW-VR=10, CG-VR=10) compared to the video representation (n=6). Similarly, the number of participants who correctly interpreted multiple light patterns was lower in CG-VR (n=6) and RW-VR (n=9) compared to RW-Video (n=12). Further, we only found explicit statements from participants in CG-VR that they were not able to understand the eHMI (n=3). Participants reasoned in this regard that in CG-VR they were distracted by the virtual depiction of people and environment (n=10). P10 stated that `everything looks kind of funny you tend to look around a lot’. For P33, `the presence of that guy standing next to [her] waiting was really disturbing’. Meanwhile, in RW-VR participants would get preoccupied by `other things' (n=8). For example, P25 stated that `because it was too real, [she] focus[ed] on the surroundings, [...] and was looking at the sky as well for a moment’. Another reason for distraction reported was the focus on finding the assigned car (n=3). As P19 expressed: `I was actually just trying to concentrate on which vehicle was mine. I didn’t think to look for any other additional information'. * (2) Light colours: The participants were briefed before the experiment to wait for a car with the low-res light display showing a purple colour, while other cars had their light displays in blue. The analysis of the first interviews revealed that issues with distinguishing between the two colours was brought up by more participants in RW-VR (n=13) than in the other two representations (CG-VR=4, RW-Video=6). For example, they expressed confusion when a `seemingly purple’ car did not stop to pick them up. In this regard, they commented on the limitations of using colours for identifying a ride-sharing AV, which would be aggravated by scalability. For example, P19 stated: `If you’re talking about the use of colour as a vehicle distinguisher, it really depends on how concentrated the use of this type of vehicle is going to be. [...] imagine a crowd coming out of a sports stadium, there’s not going to be enough colours'. Thus, participants experiencing the RW-VR prototype representation suggested the use of more unique combinations of colours, bespoke expressive light patterns, and high-resolution text displays. The reason why this was particularly critical in the RW-VR was that participants could not clearly distinguish between blue and purple from a distance. User feedback indicated a difference in display contrast between the two VR prototype representations. Half of the participants (n=21) mentioned that in CG-VR things looked clearer, `more sharp' (P30), and `more vivid' (P30), compared to the natural ambient light in the RW-VR (n=12). One participant pointed out that in regards to the CG-VR: `[...] in reality the lighting display will not be as noticeable and outstanding as in a [CG] simulation’ (P6). * (3) Contextual factors: A large number of participants also reflected in the first interviews on experiential aspects beyond the eHMI and the AV system, and thereby considered various situations they might face if they were to use the AV in real life. The majority of those statements were made by participants who experienced the VR representations (n=22, RW-VR=7, CG-VR=11, RW-Video=4). Nine participants expected extra cues, for example, via public displays in the environment or information displayed on their smartphones. P7 for example referred to displays at bus stops (`five minutes until your next bus’) explaining that the information could enable her to `relax, sit down a bit, [...] have a drink, go to [have] a bathroom break’. Participants also related to their personal habits and experiences (n=4). P15 thought that being able to identify the AV from afar based on the colour was really helpful, given that she might need time to `get off the phone and [...] grab [her] bags’. P7 brought up a scenario in which she travelled with much luggage: `Is it actually going to start moving before I am able to get on safely? I’m halfway in and it [laughs] starts driving off?’. P16 was anxious not knowing how long the AV would wait for her arrival as in reality she would `usually send the [driver] a message and ask them to wait’. § DISCUSSION Our study results reveal a number of themes regarding the way participants responded to the three prototype representations and the feedback reported in the interviews. In this section, we discuss those themes and how they relate to our research questions regarding perceived sense of presence, trust and UX, followed by a series of design guidelines for context-based interface prototyping, and a reflection on study limitations. §.§ Effect of Prototype Representation on User Feedback §.§.§ Sense of Presence In terms of how the different prototype representations affected the sense of presence (RQ1), our results show, as expected, that the two VR representations (RW-VR and CG-VR) induced higher spatial presence and engagement compared to the video representation, echoing results from other VR simulator studies <cit.>. Interestingly, however, there was no significant difference in spatial presence between the two VR representations, despite various participants commenting on the unnatural impossibility of moving around in RW-VR. Regarding the naturalism of the scene (i.e. ecological validity), the quantitative and qualitative results both show that the RW-VR prototype representation more accurately depicted a real-world situation. Interviews confirmed that the lower perceived ecological validity of the scene in the CG-VR prototype representation was mainly induced by the diminished level of naturalism of the virtual characters and animated objects in the scene. Diminished immersion seemed also to have affected perceived ecological validity, which was ranked lower for RW-Video than for RW-VR, despite both displaying the same video material. The results from the ITC-SOPI questionnaire, emphasised through the semi-structured interviews, imply that RW-VR was best suited to induce a sense of sharing the same spatial context as the AVs in the scene. The high preference towards RW-VR (n=30) suggests that both immersion and perceived naturalism are important design factors to consider when evaluating eHMI concepts with users. §.§.§ Trust In regards to how the prototype representations affected perceived user’s trust in the eHMI (RQ2), we can report that there were in fact three levels of trust simultaneously at play: (a) system trust, that is trust towards the vehicle and eHMI; (b) trust in the environment itself, including other people within it; and (c) trust in the real-world potential of the eHMI as a viable urban technology solution. In terms of system trust, the quantitative results of our study indicated no significant difference in ratings between the three representations. Yet, qualitative feedback suggests that participants generally trusted the vehicle, and that they mainly derived their trust from the way the vehicle interacted with other pedestrians. Given that this behaviour (e.g. giving way to a pedestrian) was identical in all three prototype representations, this explains why there were no significant differences in the perception of trust towards the AV. Interestingly, in the post-study interviews, more than a third of the participants reported that they would trust the vehicle more in the RW representations. However, the detailed analysis revealed that participants' assessment of trust is based not just on the AV itself but also determined by trust towards the overall experience. The increased sense of presence provided by the VR representations led participants to express feedback directed at particular elements of the environment (e.g. texture of trees) and, crucially, at other human beings in the scene, who they more tangibly felt to be sharing the experience with. That is relevant, as the VR representations seemed to elicit a feedback loop between participants and the environment, prompting varying feelings of relatedness to strangers around them, and causing them to consider different behaviour in response to the different levels of realism provided by the prototype representation. Female participants feeling unsafe in the presence of a seemingly unresponsive (CG simulated) male character near to them, in a situation they felt having little control over, are an insightful example of how lower realism can affect the trust in the environment negatively. This is an important observation as it highlights that when assessing trust towards an AV prototype in a simulated environment, other factors might be at play that influence participants' responses. The realism of the experience offered by the RW representations, particularly the RW-VR, also seemed to boost participants' trust in the real-world viability of the eHMI in urban spaces. The direct experience of the technology contextualised to a real street and surrounded by real people conveyed to participants a sense of 'this already [being] reality, not fiction', prompting them to reflect upon practical aspects such as safety (their own and others'), ambience (emotional cues given by other people in the scene) and the autonomous nature of the technology (more clearly decoupled from the surrounding environment, in comparison to the CG-VR representation). §.§.§ User Experience Regarding RQ3, the quantitative results of our study show that there is no significant difference in UEQ ratings between the three prototype representations. However, there are some tendencies in the ratings that are supported by the qualitative data. For example, attractiveness and stimulation is rated slightly higher in CG-VR, which also relates to the increased colour contrast participants reported on, and therefore offering a `cleaner' depiction of the low-res lighting interface. Further, higher ratings for perspicuity were matched by participant comments revealing that they found it easier to comprehend the lighting patterns in the RW representations. This can be linked to the fact that participants reported being distracted by various other aspects in CG-VR (such as the texture of trees). Interestingly, in RW-Video, participants noticed and were able to reflect on more of the eHMI light patterns in the subsequent interviews. This contradicts previous research which reported better memory assessment in immersive experiences <cit.>, and indicates that the VR experience itself - although being more vividly - can distract from the assessment of singular user interface elements. Participants confirmed this observation in the post-study interviews, for example, stating that they were more concerned about `finding their car'. On the other hand, the VR prototype representations allowed them to `understand the user experience more holistically' (P14), which also led to more detailed feedback on aspects beyond the eHMI. §.§ Guidelines for Prototyping and Evaluating Context-Based Interfaces Based on our comparative study involving a shared AV scenario and the evaluation of trust and UX towards a custom-designed eHMI, we propose a series of preliminary guidelines for prototyping and evaluating context-based interfaces for autonomous systems through simulations and videos. §.§.§ Choosing a Simulation Platform and Representation. The choice of simulation platform (e.g. VR or video) and prototype representation (e.g. CG or RW) depends on the specific questions that the evaluation seeks to address. * GL1 - Use non-immersive prototypes for focused interface evaluations: We found the assessment of trust towards AVs in a simulated scenario to be heavily based on how the vehicle interacted with other pedestrians. These interactions between autonomous systems and other people sharing the same urban environment can easily be captured in video prototype representations, eliminating the need for a costly VR setup and supporting online evaluation studies <cit.>. Indeed, we found that participants were able to remember and comment on the light patterns used in our eHMI better in RW-Video than the VR representations. * GL2 - Use immersive prototypes for holistic assessment and evaluation of contextual aspects: We learnt that the VR representations allowed for a more holistic assessment of the user's relationship with the eHMI in the simulated urban environment, due to increased spatial awareness and stronger sense of being actively present in the scene. We therefore propose that VR representations (RW and CG) are better suited when seeking user feedback on not only the interface but how the interface influences the user's experiential and perceptual aspects within a particular context. * GL3 - Use real-world representations to increase familiarity and assess overall trust: Previous work on driving simulators stressed that familiarity with the environment in real-world videos increases feeling of safety and leads to richer feedback <cit.>. High realism and influence of environmental factors as well as social interactions between multiple people sharing an urban space with an eHMI were also deemed important by our participants. RW-VR, thus, is especially well-suited for capturing the more nuanced aspects of trust beyond the system itself (linked to the complex and dynamic context within which the system operates). * GL4 - Use real-world representations to uncover interface anomalies under more natural conditions: We received more responses on potential interface anomalies (i.e. use of colour to encode information) and potential alternatives (i.e. text displays) in the real-world representations due to the more natural ambient lighting and lower contrast compared to the CG-VR. Therefore, we conclude that real-world representations might be better suited to evaluate the viability of visual-based interface design proposals. §.§.§ Composition of Scenes. Prototyping and evaluating context-based interfaces within a simulated or captured real-world urban environment comes with a range of challenges and confounding factors compared to decontextualised evaluation setups (as used e.g. in <cit.>). Scenes should be carefully composed and their composition has implications on perceived trust as well as keeping participant engagement high. * GL5 - Stage interactions with context-based interfaces: Our findings show that interactions between other pedestrians and the eHMI mainly contributed to the assessment of trust towards the AV. The high number of responses on those interactions further indicates that staged interactions increase memory assessment which can in turn lead to richer feedback in post-experience interviews. Additionally, staged interactions might prevent survey fatigue which is in particular important in RW-VR with the participant's own interaction radius being limited. * GL6 - Consider effect of environment and people: Our qualitative data showed that aspects beyond the system influenced participants' perception of trust and user experience. Indeed some participants deemed those aspects, such as with whom they would be sharing an AV and waiting for the vehicle in a dark, empty location, as more critical than the vehicle itself. We therefore conclude that it's crucial to consider the effect of surrounding entities, and in turn stress for the importance of contextualised simulation setups for a more holistic evaluation of interactions with autonomous systems. * GL7 - Carefully consider camera position and constrain movement in RW-VR: Due to the lack of freedom to move in RW-VR, the camera position for recording has to be carefully chosen. In our specific context, it was important that the camera was not positioned in the vehicle's potential trajectory, while still warranting a good viewing angle to observe the interactions. Positioning people next to the camera or recording in a physically constraint environment can further help to create a visual bounding box to deter participant's urge to move within the RW-VR environment. §.§.§ Designing CG-VR Prototypes. Finally, our findings suggests considerations to be made for the specific case of prototyping context-based interfaces through CG-VR representations. * GL8 - Avoid virtual avatars in intimate or personal proxemic zones of participants: Our study results indicate that computer-generated context-based interface evaluations, which require simulated avatars to interact with autonomous systems, can be affected by the uncanny valley phenomenon <cit.>. This not only leads to decreased perception of realism, but also in decreased trust towards the overall experience and feelings of distraction compromising the assessment of the actual prototype. Based on that observation, we recommend that virtual avatars, if possible, should not be placed in the intimate or personal proxemic zones of participants. * GL9 - Avoid unnecessary details to prevent distraction by imperfections in CG representations: Aiming for an accurate copy of the RW-VR source in CG-VR, our 3D designer carefully crafted fine details, such as tree textures and animation of leafs to simulate wind. However, our participants reported those details as having been distracting and emphasising imperfections in CG-VR. Due to the still apparent lack of realism in CG representations and given the often limited budget for research prototypes, we therefore recommend to limit unnecessary details (and in particular animations) concerning the surrounding environment. §.§ Limitations and Future Work The presented study has some limitations that we would like to acknowledge. To minimise the learning effect and transfer across conditions, we opted for a between-subject design, which, however, comes with the limitation that less data points per participant are taken. Although the number of participants is in the range of other similar studies (cf. <cit.>), we acknowledge that the sample size for the quantitative data analysis is rather small. Yet, we would also argue that the quantitative data is only part of the broader scope of participant data collected, and feeds into the additional analysis of qualitative data from 11.5 hours of interviews. The novelty effect inherent to emerging technologies, such as VR, and participants' previous experience with VR, might also have had an impact on the study results. We tried to address this as much as possible by counterbalancing previous experiences in VR in our study design. We further investigated the collected data for differences between participants linked to their previous experience. We found that participants with no previous VR experience were impressed by the high realism and immersion of the CG-VR when experiencing this representation as the first condition. After subsequently having experienced the RW-VR, they stated that they would have assessed their sense of presence in CG-VR differently. The novelty effect in our study may be further affected by the fact that none of the participants (including those with previous VR experience) had experienced 360-degree VR before. Previous experimental research studies with autonomous driving simulators have acknowledged the limitations of measuring trust based on post-experiment questionnaires and interviews <cit.>. They also refer to previous work in human-robot interaction, which highlights that a widely accepted definition of trust is missing <cit.>. While acknowledging this as a limitation, we also want to emphasise the exploratory findings we gained through interviews, for example, showing that participants assess trust towards various entities in a VR simulation. Furthermore, when using RW representations, participants seem to factor in potential real-life consequences in their perception of trust, resulting in increased feelings of alertness and awareness of the environment. We therefore posit that these findings offer new insights on the multifaceted and complex aspects of measuring trust towards autonomous systems in VR. Further, given that we only conducted a single user study, we see our findings and synthesised guidelines as preliminary and not set in stone, indicating areas of future work and requiring more focused investigations: for example, in regards to the use of avatars in CG-VR, future research should investigate the effects of low-realism or abstract representations of avatars on user feedback during context-based interface evaluations. Plus, as suggested by a participant, it might be helpful to allow users to visualise their own body parts in the same visual style as the avatars in the scene <cit.>, so that they can better relate their own virtual self to the simulated characters. Another open challenge for evaluating context-based interfaces is to find a sweet spot between preventing survey fatigue and offering sufficient time to experience the prototype. A potential solution for keeping participants engaged also during longer scenarios in VR could be to enable them to interact with a smartphone in meaningful ways that support the scenario. § CONCLUSION To sum up, the advent of AVs brings new challenges into the domain of interaction design, such as prototyping and evaluating context-based interfaces (e.g. eHMIs). At the same time, technological advances in immersive video capturing and VR hardware offers designers and researchers a wider range of possible prototyping representations and platforms to choose from. By systematically studying the effect of prototyping representations on study results, our paper adds to previous work on virtual field studies <cit.> and context-based interface prototyping <cit.>. This research was supported partially by the Sydney Institute for Robotics and Intelligent Systems (SIRIS) and ARC Discovery Project DP200102604 Trust and Safety in Autonomous Mobility Systems: A Human-centred Approach. The authors acknowledge the statistical assistance of Kathrin Schemann of the Sydney Informatics Hub, a Core Research Facility of the University of Sydney. We thank all the participants for taking part in this research. We also thank the anonymous CHI’21 reviewers and ACs for their constructive feedback and suggestions how to make this contribution stronger. ACM-Reference-Format
http://arxiv.org/abs/2406.08528v2
20240612085108
Adaptive Teaching with Shared Classifier for Knowledge Distillation
[ "Jaeyeon Jang", "Young-Ik Kim", "Jisu Lim", "Hyeonseong Lee" ]
cs.CV
[ "cs.CV", "cs.LG" ]
OpenObj: Open-Vocabulary Object-Level Neural Radiance Fields with Fine-Grained Understanding Yinan Deng, Jiahui Wang, Jingyu Zhao, Jianyu Dou, Yi Yang, and Yufeng Yue^* This work is supported by the National Natural Science Foundation of China under Grant 62003039, 61973034, U193203, 62173042. (Corresponding Author: Yufeng Yue, yueyufeng@bit.edu.cn) All authors are with School of Automation, Beijing Institute of Technology, Beijing, 100081, China. ============================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Knowledge distillation (KD) is a technique used to transfer knowledge from an overparameterized teacher network to a less-parameterized student network, thereby minimizing the incurred performance loss. KD methods can be categorized into offline and online approaches. Offline KD leverages a powerful pretrained teacher network, while online KD allows the teacher network to be adjusted dynamically to enhance the learning effectiveness of the student network. Recently, it has been discovered that sharing the classifier of the teacher network can significantly boost the performance of the student network with only a minimal increase in the number of network parameters. Building on these insights, we propose adaptive teaching with a shared classifier (ATSC). In ATSC, the pretrained teacher network self-adjusts to better align with the learning needs of the student network based on its capabilities, and the student network benefits from the shared classifier, enhancing its performance. Additionally, we extend ATSC to environments with multiple teachers. We conduct extensive experiments, demonstrating the effectiveness of the proposed KD method. Our approach achieves state-of-the-art results on the CIFAR-100 and ImageNet datasets in both single-teacher and multiteacher scenarios, with only a modest increase in the number of required model parameters. The source code is publicly available at <https://github.com/random2314235/ATSC>. § INTRODUCTION In recent decades, deep neural networks (DNNs) have achieved significant success across various real-world tasks. These include a range of visual <cit.>, natural language processing <cit.>, and automatic speech recognition tasks <cit.>. However, the achievements of DNNs largely depend on a considerable number of parameters, making them challenging to deploy on resource-constrained edge devices due to their high computational and storage demands. To address this issue, the technique of knowledge distillation (KD) has been employed to distill knowledge from an overparameterized teacher network to a less-parameterized student network <cit.>. KD aims to significantly enhance the performance of the student model over that attained when the student model is trained independently. The KD process is often focused on the final layer of the utilized network, as shown in Fig. <ref>, where the student learns “softened” versions of the outputs yielded by the teacher network. This approach enables the student to gain insights into the probability landscape that the teacher perceives rather than only hard labels <cit.>. Two categories of methods are primarily used to apply KD. Offline KD focuses on transferring knowledge from an already-trained, larger teacher model to the student (see Figs. <ref> and <ref>). This method allows for a more focused distillation process, leveraging the fully developed knowledge of the teacher model <cit.>. Conversely, in the online KD approach shown in Fig. <ref>, both the teacher and student models are trained simultaneously from scratch. This approach allows the student to learn directly from the ongoing learning process of the teacher <cit.>. However, irrespective of the chosen KD approach, researchers have established the notion that the distillation of intermediate feature maps, often in conjunction with the final layer, yields significantly enhanced effectiveness <cit.> (see Figs. <ref> and <ref>). Chen et al. <cit.> further enhanced this domain by arguing that the robust predictive ability of the teacher model stems not only from its expressive features but also significantly from its discriminative classifier. They suggested that considerable enhancements can be attained by simply replicating the classifier of the teacher in the student after aligning the features of the student with those of the teacher using a projector that requires only a small number of parameters, as shown in Fig. <ref>. By reviewing the literature, we identified three critical insights. First, offline KD methods leverage pretrained teacher models that possess strong discriminative capabilities. Conversely, in online KD approaches, the teacher model can be adjusted to enhance the learning effectiveness of the student model. Third, by adopting the classifier of the teacher, the student can significantly enhance its discriminative ability. Integrating these insights, we propose a novel KD method named adaptive teaching with a shared classifier (ATSC), as illustrated in Fig. <ref>. Our approach is characterized as follows. Initially, a teacher network is pretrained to harness the discriminative power of large models. Subsequently, both the teacher and student are trained collaboratively to boost the performance of the student. Specifically, the teacher network adjusts itself to better align with the learning needs of the student based on its capabilities. Essentially, our method leverages the strengths of the online KD approach, starting with a pretrained teacher network. Furthermore, the student gains direct access to the powerful classifier of the teacher, which is enabled by a projector that requires only a small number of parameters. We conduct extensive experiments on standard benchmark datasets to validate the effectiveness of our proposed ATSC method. The results demonstrate that ATSC consistently outperforms the existing KD methods across various settings. Specifically, on the CIFAR-100 dataset, ATSC achieves a 5.30% accuracy improvement over the baseline student network without KD in a single-teacher setting and a 6.70% improvement in a multiple-teacher setting. These results establish ATSC as the new state-of-the-art approach for both settings. Furthermore, on the challenging ImageNet dataset, ATSC enhances the accuracy of the student model (ResNet-18) by 1.19% with ResNet-50 as the teacher, achieving both the best performance and the fastest training convergence process. Our contributions are summarized as follows. * We believe this study is the first to effectively integrate three key components of KD techniques: a large pretrained teacher network with powerful discriminative capabilities, an adaptive teaching-based KD method that guides the teacher to self-adjust its parameters to enhance the student learning process, and a shared classifier that leverages the capabilities of the teacher. * The concept of adaptive teaching, initially introduced in the field of KD, involves the teacher model sacrificing a portion of its own discriminative power to more effectively assist the student model in learning representations. The obtained experimental results demonstrate that this slight reduction in the discriminative power of the teacher can lead to significant performance gains for the student model. * We achieve state-of-the-art performance on the CIFAR-100 and ImageNet datasets under various experimental settings. * Furthermore, we empirically demonstrate that ATSC is robust across a diverse range of balancing parameter settings required for training, thereby reducing the effort needed for hyperparameter optimization purposes. § RELATED WORKS Offline KD approaches. Vanilla KD was proposed to distill knowledge based on the logits of a teacher network using temperature-scaled soft supervision in an offline manner <cit.>. Romero et al. <cit.> demonstrated that the intermediate representations learned by the teacher can be utilized as hints to achieve improved distillation performance. Following this work, in the last few years, many follow-up studies have aimed to enhance the transfer of knowledge from a pretrained teacher network by applying diverse techniques, such as feature encoding <cit.>, sample relations encoded using pairwise similarity matrices <cit.> or modeled using contrastive learning <cit.>, distribution learning <cit.>, attention rephrasing <cit.>, cross-layer association learning <cit.>, many-to-one representation matching <cit.>, and reuse of the classifier possessed by the teacher <cit.>. Search methods based on reinforcement learning have also recently been proposed to improve the feature distillation process <cit.>. Online KD approaches. This category of approaches has been less studied because they require more training time than offline approaches. Nonetheless, the principle of tailoring instructions to the aptitude of the student network to maximize its potential should not be overlooked <cit.>. This category focuses on jointly training multiple models. For example, Zhang et al. <cit.> used mutual learning to jointly train a set of models from scratch, where each model acted as a teacher to the others. Lan et al. <cit.> introduced a strategy to guide the training processes of individual branches (students) using a multibranch ensemble (teacher). Similarly, Wu et al. <cit.> enhanced the collaboration among peers through mutual peer distillation. Additionally, Guo et al. <cit.> implemented a dynamic ensemble of soft predictions derived from multiple branches, distorting the input samples to create soft targets for branch supervision. Recently, researchers have explored the diversity in the logits of branches through feature fusion and learning in combination with classifier diversification <cit.>. Additionally, several effective strategies have been employed to enhance online KD: feature-level adversarial training <cit.>, two-level distillation based on an attention mechanism <cit.>, and gradual hierarchical distillation <cit.>. § METHOD §.§ Background: KD with a reused teacher classifier In this section, we revisit the concept of KD with a reused teacher classifier <cit.>. The core premise of using a pretrained teacher classifier is based on the assumption that the given data contain capability-invariant information that can be easily transferred between different models. Additionally, the final classifier of the teacher model often holds crucial capability-specific information, which may be challenging for a simpler student model to replicate. Therefore, during the learning process, this method focuses on transferring knowledge derived solely from the output of the encoder, which is directly fed into the classifier. Let x be an input sample and y be its corresponding ground-truth label. Consider E_T and E_S as the encoders of a pretrained teacher network and a student network, respectively, and let C be their shared classifier. The corresponding parameter sets are θ_E_T, θ_E_S, and θ_C. Then, the objective for this method is formulated as min_θ_E_S, θ_𝒫ℒ_MSE(E_T(x), 𝒫(E_S(x))), where ℒ_MSE is the mean squared error (MSE) loss function and 𝒫 represents a projector, which is introduced to match the feature dimensions of the outputs obtained from both encoders with the corresponding parameter set θ_𝒫. It has been shown that the introduction of a projector can significantly alleviate the performance degradation incurred from the teacher to the student, with a relatively small increase in the number of required parameters. Detailed information about the architecture of the projector is available in our Appendix. §.§ Adaptive teaching with a shared classifier In this section, we propose a novel KD method comprising two steps, as shown in Fig. <ref>. If the capability-specific information of the teacher model is not readily captured by a simpler student model, the teacher should adjust to provide information that is more easily learnable, even though it may not be optimal from the perspective of the teacher. Therefore, we allow some distortion from the pretrained teacher network if it enables the student model to more effectively match its representations. With this objective, we first optimize the encoders of both networks based on the following objective function: min_θ_E_T, θ_E_S, θ_𝒫ℒ_MSE(E_T(x), 𝒫(E_S(x)))+α * ℒ_MSE(θ^*_E_T, θ_E_T), where θ^*_E_T represents the parameter set of the pretrained teacher encoder before conducting collaborative learning between the teacher and student models, and α is a balancing parameter that constrains the distortion within θ_E_T. Through this process, the student model can acquire more knowledge by learning representations that are easier for its encoder to produce. However, this process may inevitably degrade the classifier of the teacher since it relies on the undistorted outputs of the teacher encoder. To maximally maintain the discriminative power of the classifier, it should be fine-tuned following changes in its input space. Thus, we update the shared classifier based on the following equation: min_θ_Cℒ_CE(y, σ(C(E_T(x)))), where ℒ_CE denotes the standard cross-entropy loss and σ represents the softmax function. In this second step, the encoder of the teacher remains frozen. During the training process, equations (<ref>) and (<ref>) are alternated for each batch. The complete pseudocode of our ATSC method can be found in the Appendix. The shared classifier can be updated with the projector based on the encoder of the student by using the loss function ℒ_CE(y, σ(C(𝒫(E_S(x))))). However, we argue that leveraging the classifier of the teacher model based on its encoder is crucial for minimizing the performance loss incurred by the student model. Related experimental results are detailed in Section <ref>. §.§ Extending ATSC to multiteacher models Our proposed ATSC method is readily adaptable to scenarios in which multiple teachers are available for the student training process. Let T_1, ⋯, T_N represent N teachers. Then, (<ref>) is extended to the following: min_{θ_E_T_i|i ∈{1, ⋯, N}}, θ_E_S, {θ_𝒫_i|i ∈{1, ⋯, N}}∑_i=1^N ℒ_MSE(E_T_i(x), 𝒫_i(E_S(x)))+α * ℒ_MSE(θ^*_E_T_i, θ_E_T_i), where P_i denotes the projector for teacher i. In this first step, the student model learns from the average adjusted representations of the teachers. Subsequently, we fine-tune the classifiers to ensure that the average teacher outputs accurately map to the ground-truth labels as follows: min_{θ_C_i|i ∈{1, ⋯, N}}ℒ_CE(y, σ(1/N∑_i=1^N C_i(E_T_i(x)))), where C_i denotes the classifier of teacher i. Finally, by using the optimized projectors and shared classifiers, the student predicts the ground-truth label by applying the equation below: σ(1/N∑_i=1^N C_i(P_i(E_S(x)))). § EXPERIMENTS In this section, we conduct extensive experiments to validate the effectiveness of our proposed ATSC method on two benchmark datasets: CIFAR-100 <cit.> and ImageNet <cit.>. We initially compare the performance of ATSC with that of diverse state-of-the-art offline and online KD methods in scenarios involving both single-teacher and multiteacher configurations. Subsequently, we conduct ablation studies and sensitivity analyses to further substantiate the contributions of our approach. §.§ Experimental setups Training details. We follow the training procedure used in prior studies <cit.>. Specifically, we deploy the stochastic gradient descent optimizer with a Nesterov momentum of 0.9 across all datasets. We use a batch size of 64 and apply weight decay rates of 5 × 10^-4 for CIFAR-100 and 1 × 10^-4 for ImageNet. For CIFAR-100, the training process spans 240 epochs, with the learning rate reduced by a factor of 10 at the 150th, 180th, and 210th epochs. The starting learning rates are 0.01 for the models in the MobileNet/ShuffleNet series and 0.05 for the other models. The balancing parameter α is typically set to 1, and the reduction factor, which influences the number of filters in the convolutional layers of the projectors, is set to 2 by default, as recommended by Chen et al. <cit.>. We only report hyperparameter settings for the cases in which different values are applied to obtain the experimental results. In the case of ImageNet, training lasts 120 epochs, starting with an initial learning rate of 0.1, which is decreased by a factor of 10 at the 30th, 60th, and 90th epochs. α is set to 10, and the reduction factor is set to 2 for this large-scale dataset. The details of the two utilized datasets can be found in our Appendix. In this study, all the experiments are conducted using the PyTorch framework <cit.>. The models are trained on a machine equipped with an Intel i9-12900k CPU and two NVIDIA GeForce RTX 4090 GPUs, each with 24 GB of RAM. However, only one GPU is utilized for all the experiments. §.§ Comparison with the state-of-the-art KD methods Results on CIFAR-100. We compare various KD methods with our approach across a range of teacher-student combinations using popular network architectures. The compared KD methods and the details of the utilized network architectures are both summarized in our Appendix. We conduct comparison experiments exclusively in scenarios, among those used in SimKD <cit.> and SHAKE <cit.>, where the accuracy gaps between the teacher model and the student model trained with the optimal KD method for those scenarios exceed 1%. Specifically, we focus on environments where significant room remains for achieving performance improvements. As shown in Tables <ref> and <ref>, ATSC provides a 5.30% accuracy gain to the baseline student model, with a maximum gain of 7.12%. Overall, ATSC demonstrates highly competitive results by achieving the best performance in 8 scenarios and the second-best performance in 1 scenario out of a total of 10 scenarios. This performance improvement comes at the cost of approximately 6.11% more parameters used. Considering that the teacher network achieves an 8.10% performance improvement at the cost of a 78.06% increase in the number of parameters, the efficiency of ATSC in terms of the ratio of parameter usage to performance gain is substantial. Details regarding the changes in the number of parameters required by the student network are provided in Tables <ref> and <ref>. We also achieve a 0.44% average accuracy improvement over SimKD, which uses the same number of parameters for its student. Since the distillation loss between the encoders remains the same, as shown in (<ref>) and (<ref>), this result underscores the contribution of the self-adaptive teaching process executed by the pretrained teacher. Multiteacher KD results. We also conduct comparative experiments to demonstrate the applicability of our approach in multiteacher scenarios. As demonstrated in Table <ref>, ATSC outperforms all the other scenarios, achieving a 6.70% accuracy improvement over the baseline student model. It also yields an average improvement of 0.71% over SimKD despite both models requiring the same number of parameters. Results on ImageNet. We evaluate the performances achieved by various state-of-the-art KD methods on the large-scale ImageNet dataset across different numbers of training epochs to compare their convergence speeds and postconvergence performances. As detailed in Table <ref>, the ATSC method not only converges faster than the other methods but also achieves the highest top-1 accuracy upon convergence, demonstrating its superior effectiveness for use with large-scale datasets. §.§ Ablation study The effect of the classifier used by the fine-tuning teacher network. During the learning process of ATSC, the shared classifier is updated according to (<ref>) using the encoder of the teacher after updating the encoders of both the teacher and the student, as specified in (<ref>). However, without loss of generality, the classifier could alternatively be updated based on the encoder of the student, as illustrated in the following equation: min_θ_Cℒ_CE(y, σ(C(𝒫(E_S(x))))). This modification aims to directly align the classifier with the representations of the student model. We compare teacher-based fine-tuning, as specified in (<ref>), and student-based fine-tuning, as specified in (<ref>). As presented in Table <ref>, our approach with teacher-based fine-tuning achieves a 0.43% accuracy improvement over student-based fine-tuning, which directly learns to map its own representations to the ground-truth labels. This outcome demonstrates that leveraging the advanced classifier of a powerful teacher can enhance the performance of a student more effectively than developing a student-specific classifier. Comparison with online baselines. To evaluate the effectiveness of using a pretrained teacher in ATSC, we compare it with the following three baselines. * SimKD <cit.>: This baseline utilizes offline feature distillation while retaining the classifier from its pretrained teacher network. * Online SimKD (O-SimKD): An online variant of SimKD, this approach begins with an initialized teacher network. For each batch, the parameters of the student network and the projector are updated after updating the teacher. * Online ATSC (O-ATSC): This baseline also starts with an initialized teacher network. Specifically, updates to the teacher network, (<ref>), and (<ref>) are sequentially applied for each batch. The experimental setups described in Table <ref> are also used for this comparison. Table <ref> demonstrates that applying ATSC with an initialized teacher in an online manner results in a performance that is inferior to those of both SimKD and O-SimKD. However, starting the training process with pretrained teachers leads to accuracy improvements of 1.43% and 1.09% in the ResNet-32x4 & WRN-16-2 and ResNet-32x4 & MobileNetV2x2 scenarios, respectively. These results showcase the best performance achieved in each scenario and underscore the significant contribution of the discriminative power provided by the pretrained teacher in ATSC. Changes in the performance of the teacher model. Since the teacher model adapts its parameters to aid the student model during the ATSC training process, it may sacrifice some discriminative power. Consequently, we measure this change as an accuracy loss from the perspective of the teacher, as detailed in Table <ref>. Overall, an accuracy loss of 0.32% is observed. Generally, during KD, high-performance teachers are more likely to transfer effective learning patterns, thereby producing high-performance students. However, a comparison between the performances of SimKD and ATSC suggests that students can achieve more significant learning gains if the teacher provides more effective guidance, even at the cost of slightly diminished capabilities for the teacher. This is supported by the fact that other existing methods attain inferior performance in the ResNet-32x4 & WRN-16-2 and ResNet-32x4 & MobileNetV2x2 scenarios, as shown in Table <ref>. §.§ Sensitivity analysis Fig. <ref> presents the results of a sensitivity analysis conducted on the balancing parameter α within the ResNet-32x4 & WRN-16-2 and ResNet-32x4 & MobileNetV2x2 scenarios. We discover that when α exceeds 33.33, specifically reaching 100.00, the training loss diverges. Consequently, we set 33.33 as the maximum value for this analysis. The results indicate that ATSC is relatively robust to variations in α, provided that it is neither too small nor too large (specifically, when it ranges between 1 and 33.33 in Fig. <ref>), which reduces the effort required for hyperparameter optimization. We also verify the performance changes produced according to different reduction factors, and the results are provided in the Appendix. § DISCUSSION In this paper, we introduce ATSC, a novel KD method grounded in three key insights: the robust discriminative power of a large-scale, pretrained teacher model; the self-adaptive teaching ability of this teacher during the training process; and the benefits of sharing the advanced classifier of the teacher. This study introduces the concept of adaptive teaching to the KD field for the first time, and we empirically demonstrate its significant contributions to the learning process. We conduct extensive experiments, and the results indicate that our proposed ATSC method substantially outperforms other state-of-the-art KD methods in both single-teacher and multiteacher scenarios. Furthermore, ATSC exhibits robustness across a wide range of balancing parameter settings, thereby simplifying the hyperparameter optimization process. §.§ Limitations and future work Although our method achieves enhanced classification performance with only a small increase in the number of parameters required for the projector, this addition may still impose a burden on devices with limited resources. Therefore, we plan to develop a projector-free architecture that is suitable for ATSC or an enhanced version of it. In this study, we have validated the effectiveness of ATSC solely for classification tasks. However, given that our method is readily adaptable to other applications, such as segmentation and object detection, and can be extended to fields such as natural language processing and speech recognition, we intend to further explore these possibilities. §.§ Broader impacts This work aims to make a significant contribution to the field of KD. Although the social impact of our proposed method is challenging to forecast due to its universal applicability across diverse fields, the findings of this study are expected to have a positive influence on a wide range of applications. abbrv § APPENDIX §.§ Pseudocode for ATSC The proposed ATSC method implements (<ref>) to update the encoders of both the teacher and student models, as well as the projector, and it uses (<ref>) to align the classifier of the teacher with the updated encoder for each batch. Accordingly, ATSC is summarized in the following pseudocode. §.§ Projector To allow the student model to utilize the classifier of a teacher model with a different structure, an additional projector is necessary to align the student encoder with the classifier of the teacher. The structure of this projector is summarized in Table <ref>. The projector consists of three convolutional layers, each of which is followed by standard batch normalization and rectified linear unit (ReLU) activation. It is assumed that the spatial dimensions of the feature maps (with the height and width denoted as H and W, respectively) in both the teacher and student networks are the same. If the feature map of the teacher is larger, we apply an average pooling operation to the encoder of the teacher beforehand to align the spatial dimensions, reducing the imposed computational demand. In the projector, the number of filters in each layer, and thereby the total number of parameters, is controlled by a single hyperparameter r. A lower r value allows the student to learn more extensively, generally resulting in higher performance. However, a lower r value also leads to a heavier projector, increasing the total number of parameters required by the student model, which may not always be desirable. Therefore, we report the performance achieved by the learned student model with different reduction factors in our experimental results. The changes in the number of parameters are detailed in Section <ref> of the Appendix. §.§ Experimental settings Datasets. Our experiments utilize two widely recognized image classification datasets: CIFAR-100 and ImageNet. The images acquired from each dataset are normalized based on their channel means and standard deviations. CIFAR-100 comprises 50,000 training images and 10,000 test images across 100 classes; each training image is padded by 4 pixels on all sides before being randomly cropped to a 32x32 format. Moreover, ImageNet consists of approximately 1.3 million training images and 50,000 validation images across 1,000 classes, where each image is randomly cropped to 224x224 without padding. Network structures. We evaluate the performance achieved when using a wide array of teacher-student combinations; these include several widely used neural network architectures: VGG <cit.>, ResNet <cit.>, WRN <cit.>, MobileNetV2 <cit.>, ShuffleNetV1 <cit.>, and ShuffleNetV2 <cit.>. The suffixes in networks labeled 'VGG-' and 'ResNet-' indicate the depths of the respective networks. For 'WRN-d-w', 'd' represents the depth, and 'w' indicates the width factor of the wide-ResNet. Following previous works <cit.>, we adjust the number of convolution filters contained in the intermediate layers of some architectures by a specific ratio, which is denoted as 'x' in their names. For example, the notation 'ResNet-32x4' specifies a ResNet architecture that is 32 layers deep and whose convolution filters are expanded by a factor of four. Comparison methods. In this paper, we compare various offline and online KD methods. A summary of the KD methods used for the comparison is provided in Table <ref>. A more detailed summary of these key KD methods is available in Section <ref>. We categorize the methods as online approaches if they adjust the parameters of their teachers during the training phase. §.§ Experimental results In our method, the reduction factor r can significantly impact the resulting performance. Therefore, we evaluate the performance achieved not only with the default r value of 2 but also with an r value of 1. We additionally report the performance attained for the ResNet-50 & VGG-8 scenario with an r value of 4, as lower reduction factors of 1 and 2 do not substantially increase the parameter counts required in other scenarios, thus eliminating the need for a higher r, which typically degrades performance. The top-1 test accuracy results produced across diverse teacher & student scenarios are summarized in Tables <ref> and <ref>. Since our method involves adding a projector to the student model with a shared classifier obtained from the teacher, the parameter count of the student model varies with r. The numbers of network parameters required for the teacher model, the student model without a projector, and the student model with a projector are detailed in Tables <ref> and <ref>. A comparison between the results obtained with an r of 1 and an r of 2 reveal that a lower r value yields greater performance. Specifically, setting r to 1 results in a 0.38% accuracy gain over that attained within an r of 2 but at the cost of a 21.04% increase in the required number of parameters, which represents a significant tradeoff in terms of parameter efficiency. Therefore, we recommend selecting an r value that balances performance with the required parameter count. In this paper, r is set to 2 by default, with exceptions in a few specific cases.
http://arxiv.org/abs/2406.07811v1
20240612020624
Evolutionary Computation and Explainable AI: A Roadmap to Transparent Intelligent Systems
[ "Ryan Zhou", "Jaume Bacardit", "Alexander Brownlee", "Stefano Cagnoni", "Martin Fyvie", "Giovanni Iacca", "John McCall", "Niki van Stein", "David Walker", "Ting Hu" ]
cs.NE
[ "cs.NE", "cs.AI", "cs.LG" ]
Efficient Arbitrated Quantum Digital Signature with Multi-Receiver Verification Bo Liu December 2024 ================================================================================ § ABSTRACT AI methods are finding an increasing number of applications, but their often black-box nature has raised concerns about accountability and trust. The field of explainable artificial intelligence (XAI) has emerged in response to the need for human understanding of AI models. Evolutionary computation (EC), as a family of powerful optimization and learning tools, has significant potential to contribute to XAI. In this paper, we provide an introduction to XAI and review various techniques in current use for explaining machine learning (ML) models. We then focus on how EC can be used in XAI, and review some XAI approaches which incorporate EC techniques. Additionally, we discuss the application of XAI principles within EC itself, examining how these principles can shed some light on the behavior and outcomes of EC algorithms in general, on the (automatic) configuration of these algorithms, and on the underlying problem landscapes that these algorithms optimize. Finally, we discuss some open challenges in XAI and opportunities for future research in this field using EC. Our aim is to demonstrate that EC is well-suited for addressing current problems in explainability and to encourage further exploration of these methods to contribute to the development of more transparent and trustworthy ML models and EC algorithms. § INTRODUCTION The use of AI has become increasingly widespread, and with it, there is a growing need to understand the reasoning behind the outputs and decisions it produces. Although AI methods can learn complex relationships in data or provide solutions to challenging problems, decisions based on the outputs of models can have real-world impacts. For example, the use of predictive models in applications such as medicine, hiring, and the justice system has raised concerns about the fairness and transparency of such models; the increasing use of large language models in commercial products has made avoiding harmful content ever more important; and the adoption of black-box optimization in areas such as scheduling and logistics <cit.> requires the trust of the users who are accountable if things go wrong. Therefore, it is essential not only to improve the models we create but also to understand and explain what led to their decisions. There are active lines of research into improving the fairness and safety of models, but in this survey, we focus on the latter problem: understanding, explaining, and increasing transparency in AI systems. Recent advances in AI have drawn heavily on “black-box" approaches. Deep learning, ensemble models, and stochastic optimization algorithms may have clearly defined structures, but the processes leading to the decisions they make are often sufficiently complex to be opaque to humans. The field of explainable artificial intelligence (XAI) has emerged in response to this need <cit.>. XAI is an umbrella term that covers research on methods designed to provide human-understandable explanations of the decisions made/knowledge captured by AI models. XAI research aims to develop methods to explain the decisions, predictions, or recommendations made by AI processes in terms that humans can understand. These explanations foster trust and improve a system's robustness by highlighting potential biases and failures. They also provide researchers with insights to better understand, validate, and debug the system effectively. Beyond this, they also play a pivotal role in ensuring regulatory compliance and improving human-machine interactions, allowing users a better understanding of when they can rely on a model's conclusions. In the context of evolutionary computation (EC), in our view, two directions associated with XAI emerge. First, the application of XAI principles to decision-making within EC, and second, the use of EC to enhance explainability within ML. A body of work is developing in both areas and is gathering pace – in part due to events such as a workshop on EC and XAI held at GECCO in 2022 and 2023. The aim of this paper is to provide a critical review of research conducted at the intersection of EC and XAI. We provide a taxonomy of methods and highlight potential avenues of future work, expanding on the initial directions proposed in <cit.> and <cit.>. The remainder of this paper provides a discussion around these themes. First, in <Ref>, we introduce foundational concepts in XAI such as the nature of explanations and the distinctions between interpretability and explainability, and provide motivation for strengthening the link between XAI and EC. Then, in <Ref>, we discuss how EC can be used for XAI. while in <Ref> we discuss how XAI can be applied to EC. We then discuss the ongoing challenges and potential opportunities in <Ref>. Lastly, we provide final thoughts and conclusions in <Ref>. § EXPLAINABLE AI At its core, XAI aims to provide methods and tools for humans to understand the decision-making processes of AI systems. These tools provide insights in the form of explanations, shedding light on how such systems produce their outputs and solutions, highlighting significant features and interactions that influence the results, and revealing potential issues in their workings. Even though such a system may be too complex for a human to interpret directly, it can be considered explainable if it can be understood. Explainability is important for several reasons. Perhaps the most crucial is trust. Trust in the workings of an AI algorithm directly influences users' willingness to adopt and adhere to the results and ensures that users not only accept but also can confidently and justifiably rely on the answers these models provide. In the case of optimization, this may mean convincing users that they can trust the solutions by knowing what makes that solution better than anything (or at least something) else, which might be seen as synonymous with knowing why the solution was chosen. In the case of a model created by ML, it may mean allowing users to understand the decisions that a model makes leading to the final output. It is also important to consider that such an explanation will likely be important in the future to provide an audit trail for the decisions underpinning an implemented solution, as legislation regulating the use of AI increases. Extending this theme is that of validity. EC methods (and optimizers in general) only optimize the function they have been given, and ML methods learn from the data they are provided. Explaining why a solution was chosen or how a prediction was made might help us know if it solves the actual problem, or if it just exploits an error or loophole in the problem’s definition or a spurious relationship in the data. This not only can lead to surprising or even amusing results <cit.>, but can also simply yield frustratingly incorrect solutions to a problem. EC is stochastic and, as a result, some noise in the generated solutions is likely if not unavoidable. Different runs can produce similar solutions of equal quality but solutions can also feature artifacts that have no impact on their quality. Thus, another fundamental question is whether we can explain which characteristics of the solution are crucial: its malleability. In other words: Which variables could be refined or amended for aesthetic or implementation purposes? Finally, when we define a problem, it is often hard to fully codify all the real-world goals of the system. For ML, this can lead to unwanted biases in the predictions if goals such as “fairness” are not explicitly coded in the cost function used for training. In optimization, subtle rules (for example, “I prefer not to work late on Fridays”; or “Joe likes to drive that route because it ends near his house”) are typically used to judge solutions after the optimization is completed. We can generate lots of diverse solutions in order to “optimize” these goals post-hoc but we propose that, better still, an explanation could again reveal which characteristics are important for optimality, allowing one to refine the solutions and better fit the real-world problem. This also relates to one of the motivating factors behind interactive EC – we want something that is mathematically optimized, but also something that corresponds to the problem owner's hard-to-codify intuition. By incorporating XAI into interactive EC we could make it easier for the problem owner to interact with the optimizer <cit.>. More concretely, the types of questions we wish to answer with an explanation include <cit.>: * Has the problem been formulated correctly? * Are the patterns the model is drawing on to make its prediction the ones we expect? * Why did the model make this prediction instead of a different one, and what would it take to make it change its prediction? * Is the model biased and are the decisions made by the model fair? §.§ What is an explanation? It is difficult to define exactly what makes an explanation. Informally, an explanation aims to answer the question: “why?”. Previous work has considered explanations to provide causal information <cit.>, non-causal explanations <cit.>, or as deductive arguments <cit.>. In this paper, we will consider an explanation to be an aid for a human to understand something about a model. The end goal of an explanation is to act as an interface between the model and the human, presenting information about the model in a way that is easier to understand. This means that the explanation does not need to capture the entire behavior of the model, but must communicate something important about it. Explanations can be provided in many forms; examples include visualizations, numerical values, data instances, or text explanations <cit.>. They can also be provided as part of a dialogue between a human and an explainer <cit.>. §.§ Explainability and Interpretability The terms interpretability and explainability are often used interchangeably by researchers. In this paper, we distinguish between them as referring to two different but related aspects of attempting to understand a model <cit.>. For our purposes, interpretability refers to whether a human can follow a model's decision-making process on its own. Such interpretable models do not require explanations because they are intrinsically self-explanatory. For example, models with a simple symbolic representation or small decision trees are generally considered interpretable. Note, however, that as the size of a model grows, it may become more difficult to follow the logic without external aid. Random forests and neural networks are examples of models that are theoretically interpretable at small scales but, due to their usually large size or ensemble behavior, are no longer considered to be interpretable. This aligns with Lipton's notion that interpretability is not one-size-fits-all <cit.>; instead, it covers a broad spectrum where complex models may be less transparent but achieve higher accuracy, while, conversely, simpler models are easier to understand but may sacrifice performance. On the other hand, even if we cannot trace the exact logic, a model can still be considered explainable if a human-understandable explanation can be provided for what the model is doing or why a decision is made. Explanations do not need to capture the full behavior of the model, and in general, encompassing the full behavior in a single explanation is nearly impossible without creating an equally complex explanation. However, explanations can provide windows into particular aspects of the model's behavior. Some practical methods for providing explanations for particular aspects of a model include evaluating feature importance, approximating the local or global behavior with a simpler model, or comparing the prediction to be explained with other similar inputs. As illustrated in Figure <ref>, as an AI system becomes more difficult to understand the effort required to explain them adequately becomes greater. Below a certain threshold, depending on the audience, a system is easy enough to understand that it can be considered to be intrinsically interpretable. Above this threshold, explanations can provide sufficient aid that a normally uninterpretable system can still be understood. For example, a model might be multidimensional and uninterpretable when looking at its equations alone, but with the aid of visualizations, feature importances, and local approximations we can grasp its general behavior. As we consider larger or more complex models, we require more and more of these explanations to be confident we understand how the model works. At a certain point, it becomes impractical to explain to a satisfactory level how a system works, or the explanations we have are insufficient to capture everything. For example, although explanations can provide some insight into a large language model, there are still many aspects of its behavior that remains unknown. These systems are ones which lie beyond our current threshold of “understandable with explanation". This also points towards two ways of addressing this problem: first, by reducing the complexity of the model or otherwise bringing down the difficulty of understanding, so that existing explanation techniques can be used; second, by improving our ability to explain so that we can explain more uninterpretable systems. We will discuss specific ways of doing so in the later sections. §.§ Why EC and XAI? EC is an approach to AI inspired by the principles of biological evolution that have found use in a wide variety of applications, including optimization, ML, engineering design, and artificial life. This field encompasses evolutionary algorithms such as genetic algorithms (GA), genetic programming (GP), and evolution strategies (ES), as well as, for extension, swarm intelligence algorithms such as particle swarm optimization. EC techniques often employ populations of solutions and operators that introduce variation and diversity to explore larger regions of the search space. Evolutionary techniques have unique strengths that can offer potential solutions to current challenges in XAI <cit.>. First of all, as detailed in later sections, EC has a long track record of successful applications to create symbolic or interpretable models (e.g., decision trees or rule systems). By constructing solutions using intrinsically interpretable components, EC-derived solutions can guarantee the interpretability of the evolved representation. EC can also be used to create interpretable approximations of other complex models, i.e., to produce explanations for their behavior. Second, the inherent flexibility of evolutionary methods, such as the ability to perform derivative-free, black-box optimization, positions them as a versatile tool to perform optimization where other methods struggle. For instance, they can be applied in scenarios where access to an ML model is only available through an API that returns predictions and no other information about the model's confidence or internal logic. Such a scenario is becoming increasingly common, but evolutionary methods can still perform optimization in these cases, to probe the model for patterns in its behavior or generate counterfactuals or adversarial examples. This also enables the optimization of unusual or customized metrics (for example, for measuring interpretability) without constructing metrics that can be readily optimized through gradient descent. Additionally, this flexibility also paves the way for hybrid methods when combined with other algorithms or to build meta-optimizers. Additionally, one especially useful capability offered by EC is multi-objective optimization. Many problems in XAI are inherently multi-objective, requiring a balance between model faithfulness and human interpretability, or complexity of the explanation. In many cases we may also want a diverse range of explanations – for example, different people may find different explanations to be helpful, or single explanations may not fully explain all the relevant characteristics of the model. The use of diversity metrics as well as quality-diversity algorithms can allow us to generate a range of different explanations for the problem. Conversely, we also believe XAI principles can offer useful insights to EC and are currently under-utilized. XAI can provide insights into the decision-making process of evolutionary algorithms, explaining why certain solutions were selected in the end. Not only is this invaluable for debugging and improving algorithms, but end users may want to understand the reasoning behind why a particular solution was chosen. This is especially important in fields where the outputs of any algorithm must be justified or understood by decision-makers without deep technical knowledge of EC, such as in engineering design or policy-making. Furthermore, XAI principles can aid in the development of more interpretable and transparent fitness landscape analyses. Understanding the fitness landscape is critical for understanding the difficulty of finding a solution as well as the effectiveness of EC algorithms. XAI-inspired approaches can improve our existing tools for visualizing and interpreting these landscapes, and understanding the landscape can serve as an explanation in and of itself. § EC FOR XAI In this section, we discuss methods for XAI and the applications of evolutionary algorithms to this task. ML is a powerful tool for building a model of a system from data. As ML models have advanced, so has their complexity, often resulting in increased performance at the cost of interpretability. Explainability becomes a critical component to address this trade-off, ensuring that the models we rely on are not only effective but also understandable and trustworthy. §.§ Explainability and Complexity The interpretability of a model is inherently linked to its complexity <cit.>. Simpler models, such as linear regressions, are considered highly interpretable due to their straightforward decision-making processes which humans can follow unaided. However, as model complexity increases, we lose interpretability and must rely on explainability instead. To clarify this relationship, we introduce a framework shown in Figure <ref>, mapping problem complexity against model complexity. This begs the question: What is the complexity of a problem? We define problem complexity informally here, drawing parallels with the concept of computational complexity. We consider a problem's complexity to be the complexity of the model required to adequately capture its behavior up to the desired level of accuracy. The more complex the problem, the more complex the model must be to represent it faithfully. This also means that some problems may be simple if we are satisfied with a certain level of performance but may be complex if we aim to capture all the nuances and relationships in the data. The complexity of a model (or a solution to an optimization problem) includes aspects such as the number of parameters, the depth of the structure, and the amount of computation required. This complexity can be measured by a model’s description length <cit.>, inspired by the concept of Kolmogorov complexity <cit.>, or by parameterized complexity <cit.>. A model with a lengthy description, numerous parameters, or complex functions is considered more complex for our purposes. Although allowing for greater model complexity might increase a model's capacity to solve problems, this can lead to a loss of interpretability simply due to the size of the model. For example, a neural network with billions of parameters, a genetic program with a large number of instructions, or an extremely deep decision tree can produce accurate models for problems but be difficult to understand due to their size. With these two axes in mind – that is, problem and model complexity – we can identify four main areas of concern, each of which represents a distinct combination of problem and model complexity. They are as follows: * Simple problem, simple model: In this scenario, the desired behavior can be captured by a simple model. The model is accurate but also intrinsically interpretable, so there is no need for explainability in this case. * Simple problem, complex model: When a complex model, such as a deep learning model, is used for a simple problem, the mismatch in complexity leads to a model that is excessively complex for the task at hand. While the model may perform well, it is difficult to interpret and offers no advantages for the complexity over a simple model which may perform equally well for the simple problem. In this case, the issue is not one of explainability but a result of the mismatch between the problem requirements and the model used. Rather than aiming to explain the complex model, a more fitting approach would be to use a model of the appropriate complexity and avoid the issue altogether. * Complex problem, simple model: Conversely, applying a simple model such as linear regression to a complex problem can result in a model that is interpretable but with inadequate performance. Such a model may fail to capture the characteristics of the data to the degree of accuracy required. While the model remains interpretable, the issue is again a mismatch between the problem and model, as the simple model cannot accurately model the data. * Complex problem, complex model: We argue that this quadrant is the main area of concern for XAI, i.e., the area where explainability is most relevant. In these cases, a complex model is necessary to capture the nuances of the data, but this complexity renders the model opaque and uninterpretable. Explainability methods enable users to navigate and understand complex models, building trust in the model, even when the model cannot be understood wholly on its own. §.§ Types of Explanations A wide variety of explanations focus on different aspects of the modeling process. In this survey, we take a problem-focused approach and consider the ML pipeline from data to trained model (Figure <ref>). we structure our categorization around the stage of the ML pipeline where they can be applied. This is meant to highlight that explainability is not only applied at the end, to a fully trained model, but that it is a tool that can used to understand the entire model-building process. By tying the categorization to the stages of the pipeline, we also aim to provide practitioners with a clear roadmap on areas where explainability can be applied. In the next sections, we will address each area in turn. First, as an introduction to each category, we will describe some examples of conventional approaches. This overview is not meant to be exhaustive but rather to provide a primer on the current popular approaches. For a more extensive survey of current methods in XAI, we direct the reader to recent reviews <cit.> on the topic. We will then survey EC-based approaches in each area, providing an overview of the current state-of-the-art with respect to combining EC and XAI. §.§ Interpretability by Design A growing concern on the importance of having ML models that are interpretable by design, rather than explainable post-hoc, has been recently advocated by many researchers in the XAI field <cit.>, arguing that whenever appropriate and possible, one should opt for models that are inherently explainable, or interpretable (white-box models). The main argument to support this statement is that post-hoc explanations often provide only local approximations of ML models, hence being limited: 1) because they rarely capture the whole (global) decision-making process of the model; and 2) because being approximations (essentially, they model other models) they can possibly introduce errors and thus not reflect the original decision-making process of the main model. For these reasons, “interpretable by design” models, based on some form of knowledge representation, should be preferred. Unlike traditional ML approaches for the generation of models using such knowledge representations, which mostly use greedy heuristics, EC methods leverage the global optimization capabilities of evolutionary search. Learning Classifier Systems (LCS) are a notable instance of EC methods applied to ML. Within LCS, some methods apply batch learning <cit.> while others use online learning <cit.> using either reinforcement learning (RL) <cit.> or supervised learning <cit.>. Most LCS approaches are applied to the generation of rule-based ML models, although other representations, such as decision trees <cit.> or hyper-ellipsoids <cit.>, have also been explored. GP methods also have a long history of their application to symbolic regression <cit.>. Model complexity has been addressed in a variety of ways. For the EC methods generating variable-length models, the broad range of standard techniques to deal with the bloat effect <cit.> can be used to promote the generation of more compact models. For instance, specific fitness functions can be used to promote the generation of compact rule sets based on the minimum description length (MDL) principle <cit.>, while other methods achieve this through multi-objective optimization <cit.>. Moreover, rule-based ML models can be simplified through the use of rule/rule set editing operators, which can be hybridized with the standard global search of EC methods (i.e., a memetic algorithm) <cit.> or used as post-processing operators <cit.>. Sparsity in neural networks can be promoted through regularization, or through evolutionary pruning <cit.>. Similar attempts at combining RL and EC have tried to obtain interpretable policies for RL tasks by combining decision trees induced by GP or Grammatical Evolution with RL acting on the leaves while the policy interacts with the environment <cit.>, also through quality-diversity approaches <cit.> and in multi-agent settings <cit.>. Some other works in this area have explicitly focused on addressing the interpretability question in white-box models. The balance between accuracy and interpretability has been explored in the context of genetic fuzzy systems <cit.>. In this regard, some recent studies have proposed machine-learned quantifiable measures of interpretability <cit.>, while others <cit.> have emphasized the importance of focusing on low-complexity models, especially in the context of GP. Another important aspect in ML, fairness, has been addressed in <cit.>, where explicit fairness constraints have been introduced in GP to obtain fair classifiers. Visualization techniques in the shape of heatmaps have been used to represent the sets of classification rules generated by LCS <cit.>. This technique was particularly effective when combined with hierarchical clustering to reorder a dataset's rows (instances) and columns (features), as this enabled an effective global view of how the problem domain was partitioned across the classification rules and what features were relevant for each partition. Alternatively, 3D visualization approaches have also been shown to be a very effective tool to represent complete rule sets generated by LCS <cit.>, by using different axes to represent attributes, levels of generality of the rules in which these attributes were involved, and estimated attribute importance. §.§ Explaining Data and Preprocessing We begin with a discussion of methods that can be used to explain the data. These are methods that aim to understand the structure of the data, for example through clustering, even though they may not become part of the final model. Although this category is often omitted in discussions of XAI, it is worth mentioning it as part of the overall pipeline. Every ML model begins with the data, and any pattern learned by the model is derived from the data, so it would be remiss not to discuss this as a component of the whole process. We should remark that methods under this category do not necessarily explain the model itself, but as said aim to explain the underlying data on which the model was trained, focusing on understanding the data distribution and its characteristics. Techniques such as exploratory analysis, data visualization, and dimensionality reduction can be used to better understand the patterns in the underlying data that the model might learn, as well as identify any potential biases. Examples of these techniques include Principal Component Analysis (PCA) <cit.> and t-Distributed Stochastic Neighbor Embedding (t-SNE) <cit.>, which reduce the dimensionality of data to allow for easy visualization. In addition, methods such as clustering and outlier detection can help identify patterns or anomalies in the data that may impact the model's performance and aid in feature selection and engineering. These include methods such as k-means clustering and DBSCAN <cit.>. These explanations can help identify data quality issues, biases, and preprocessing requirements, as well as build trust. §.§.§ Dimensionality reduction EC can be used to explain data by means of dimensionality reduction and visualization. One approach is GP-tSNE <cit.>, which adapts the classic t-SNE <cit.> algorithm to use evolved trees to provide an interpretable mapping from the original data points to the embedded points. Similarly, Schofield and Lensen <cit.> use tree-GP to produce an interpretable mapping for Uniform Manifold Approximation and Projection (UMAP). By producing an explicit mapping function rather than simply the embedded points, we can not only make the process more transparent but also reuse the mapping on new data. In some cases, we may want to use the lower-dimensional representation for prediction as well as visualization. This is useful for interpretability as it allows us to visualize exactly the same data representation the model sees. Therefore, another approach is to construct features that are both amenable to visualization and well-suited for downstream tasks. Icke and Rosenberg <cit.> proposed a multi-objective GP algorithm to optimize three objective measures desirable for constructed features – classifiability, visual interpretability and semantic interpretability. Similarly, Cano et al. <cit.> developed a method using multi-objective GP to construct features for visualization and downstream analysis, optimizing for six classification and visualization metrics. The classification metrics (accuracy, AUC, and Cohen's kappa rate) aim to improve the performance of the downstream classifier, while the visualization metrics (C-index, Davies-Bouldin index, and Dunn's index) aim to improve the clustering and separability of the features. Moreover, GP has been effectively used for the ML task of manifold learning <cit.>, i.e., the creation of (ideally) reduced data representations for high-dimensional datasets to facilitate the work of downstream ML algorithms. Often, this task is solved by black-box algorithms that perform a mapping from an original space to a reduced one without a clear explanation of how this mapping is designed. On the other hand, GP trees offer an interpretable alternative for this task with white-box transformation operations. §.§.§ Feature selection and feature engineering Feature selection is a common preprocessing step in which a relevant subset of features is selected from the original dataset. The latter is used to improve the model's performance and interpretability by narrowing down the features the model can draw on. As an explanation, feature selection shares some similarities with feature importance, which identifies the features a model is drawing on but, instead, restricts the model explicitly so it can only draw on the chosen features. Genetic algorithms are a straightforward and effective approach to feature selection, with a natural representation in the form of strings of 1s and 0s, making them a popular choice for feature selection <cit.>. GP can also be used for feature selection since the inclusion of features in a tree or linear genetic program is intrinsically evolved with the program <cit.>. For an in-depth review of GP methods, we refer the reader to <cit.>. Swarm intelligence methods, such as particle swarm optimization, have been applied to feature selection as well <cit.>. For a more detailed review of these methods, we direct the reader to <cit.>. In addition to selecting features for a model, feature selection can also be used to improve data understanding by integrating it with techniques such as clustering <cit.>. Feature engineering, also referred to as feature construction, is a related approach that involves building higher-level condensed features out of basic features. GP can be used to evolve these higher-level features for downstream tasks such as classification and regression <cit.>. This approach can also help improve a model's interpretability since it can reduce a large number of low-level features to a smaller number of higher-level features which may be easier to understand for humans. Moreover, it removes some of the modeling needs from the black-box, replacing them with an a-priori (pre-processing) transparent step, thereby reducing the amount of explanation needed. These methods also share many similarities with dimensionality reduction techniques and, in some cases, can fall under both categories. Uriot et al. <cit.> compared a variety of multi-tree GP algorithms for dimensionality reduction, as well as a tree-based autoencoder architecture. In the multi-tree representation, each individual in the population is a collection of trees each of which maps the input to one feature in the latent dimension. In order to reconstruct the input for the autoencoder, a multi-tree decoder is simultaneously evolved with one tree per input dimension. Their results showed that GP-based dimensionality reduction was on par with the conventional methods they tested (PCA, LLE, and Isomap). §.§ Explaining Model Behavior Once we have the trained model, it may still be difficult to understand how it works, even in cases where it is transparent and we can inspect the internal mechanisms. Consider, for example, a trained neural network. Even if in principle we are allowed to inspect each weight and internal operation, the model as a whole is still hard to understand. This is where we turn to explanations to bridge the gap. Methods in this category attempt to explain the internal function of the model, for example, by inspecting the structure of a tree or the weights in a neural network. These approaches can either attempt to explain the entire model or to understand smaller components of the model. §.§.§ Feature importance Global feature importance aims to explain the dependence of a model on each feature it uses. For example, feature importance returns a score that represents the significance of each feature to the model. This helps identify which features impact the model's predictions most and provides insights into how the model is making decisions. This type of explanation can also be used to verify whether the model is behaving as expected – for example, by checking whether it is using the same features a human would to solve the problem. In the case of a computer vision model, this type of explanation can be used to determine if the features used to classify a particular image as a cat make sense or if the model is using spurious patterns in the data, such as identifying the cat based on its surroundings. This type of explanation can also aid in optimizing models and performing feature selection by identifying less important features. Some models, such as decision trees and tree ensembles like random forests, provide built-in feature importance measures <cit.>. For models without built-in feature importance measures, more general methods such as partial dependence plots <cit.> and permutation feature importance <cit.> can be used to determine which features have the largest impact. Evolutionary computation can be used to go further and measure the strength of higher-order interactions between features by evolving groups of features <cit.>. §.§.§ Global model approximations This approach, also known as model extraction or a global surrogate model, aims to approximate a black-box model with a more interpretable model. This idea is closely related to knowledge distillation <cit.> in deep learning, but rather than simply making the model smaller, we also want to make it more interpretable. This is done by training or evolving a secondary model, which both approximates the original model and is more interpretable. An example of this approach was proposed by Lakkaraju et al. <cit.>. Their approach approximates the behavior of the model using a small number of decision sets, providing an interpretable proxy for the entire model. Evolutionary computation methods such as genetic programming are well-suited for this approach as they can guarantee interpretable models while optimizing for one or more objectives. Evans et al. <cit.> propose a model extraction method using multi-objective GP to construct decision trees that accurately represent a given black-box classifier while being more interpretable. This method aims to simultaneously maximize the ability of the tree to reconstruct (replicate) the predictions of a black-box model and maximize interpretability by minimizing the decision tree's complexity. The reconstruction ability is measured by the weighted F1 score over cross-validation, and the complexity of the decision tree is measured by the number of splitting points in the tree. The overall evolutionary process uses a modified version of NSGA-II <cit.>. In their experiments on a range of classification problems, the authors found that the accuracy remained commensurate with other model extraction methods (namely Bayesian rule lists, logistic regression, and two types of decision trees) while significantly reducing the complexity of the models produced. §.§.§ Domain-specific knowledge extraction from machine learning models Finally, domain-specific studies have also been performed. For instance, the classification rules evolved by EC methods have been analyzed in the domain of protein structure prediction <cit.>. Furthermore, biological functional networks (i.e., graphs) can be inferred by mining the structure of ensembles of rule sets evolved by EC methods <cit.>. A topological analysis of such networks led to the experimentally-verified discovery of the function of several genes (in the biological sense of the word) for the Arabidopsis Thaliana plant organism <cit.>. Knowledge representations for rules can be constrained in a variety of ways, which shape the data patterns captured by the sets of classification rules using such representations. This potentially leads to the extraction of different knowledge from the same data depending on the chosen representation, as was studied for molecular biology datasets <cit.>. In the field of neuro-evolution, EC methods have instead been used to discover interpretable plasticity rules <cit.> or to produce self-interpretable agents <cit.>, i.e., agents that (through self-attention) access a smaller fraction of the input, for which interpretable policies are possible. §.§.§ Explaining neural networks Thus far, the methods we have covered are general and can be used with a variety of models. However, given the popularity of deep learning methods, we would be remiss not to discuss methods specifically tailored to explaining these models. The rise in popularity of deep learning, combined with the inherent black-box nature of neural networks and their large number of parameters, makes explaining them challenging but increasingly important. For image classification, the large number of input features (pixels) poses a significant problem for many explanation methods. As such, it is necessary to reduce the dimension first, for example by clustering similar pixels into “superpixels”. Wang et al. <cit.> propose using a multi-objective genetic algorithm to identify superpixels of importance for the final prediction and using this set of superpixels as an explanation. The genetic algorithm uses NSGA-II to optimize for the least number of superpixels used while maximizing the model's confidence in its prediction. Methods have been developed for explaining the internals of deep learning models <cit.>. As an example of one such method, Interpretable Lens Variable Models <cit.> train an invertible mapping from a complex internal representation inside a neural network (i.e., the latent space in a generative or discriminative model) to a simpler, interpretable one. More recently, the field of “mechanistic interpretability” has gained popularity, aiming to understand the internal operations and mechanisms of neural networks. This field attempts to reverse-engineer and describe the algorithm performed by the layers of a neural network. One promising thread of work has been the Circuits approach, which discovered curve detectors in vision models <cit.> and interpretable circuits in small transformer models <cit.>. As an example of this approach, mechanistic interpretability has been used to explain the “grokking” phenomenon seen when training neural networks <cit.>, by which some networks learn the training data quickly but only generalize well after a long period of further training in which little appears to happen. In this work, Nanda et al. show that, when training a transformer model to perform modular addition, the network first memorizes the training data directly before eventually learning a general algorithm for the problem. They are able to describe the exact algorithm used by the network, by inspecting the activations and ablating specific components. This is a space that is ripe for innovation in the evolutionary computation community. Evolutionary methods have been used to explore the decision boundary <cit.> and prompt space <cit.> of language models, attempting to map out the space of inputs along relevant dimensions. Another approach is to directly interpret the network, such as by using symbolic regression to find a expression that matches the gradients of the network <cit.>. §.§ Explaining Predictions This type of approach aims to explain a specific prediction made by a model. As such, the explanation only needs to capture the behavior of the model with respect to the prediction in question, rather than the model as a whole. §.§.§ Local explanations Instead of creating an interpretable model to approximate the global performance of a black-box model, which may not be possible, these approaches only attempt to approximate the local behavior using an evolutionary algorithm. One notable method in this category is Local Interpretable Model-agnostic Explanations (LIME) <cit.>. LIME constructs an explanation by generating a collection of instances in the vicinity of the input to be explained, each accompanied by the model's prediction. It then fits a linear model to this new dataset, serving as a local surrogate that approximates the original model's behavior in that specific region. While this explanation does not necessarily reflect the global behavior of the model, it is locally faithful and can be used to understand the behavior of the model around that point. Ferreira et al. <cit.> proposed Genetic Programming Explainer (GPX), a GP-based method that fits a local explanation model for a given input example. Similar to LIME, when given a sample input to be explained, the method samples a set of neighboring data points around the input and fits a local explanation. However, rather than a linear model, GPX uses a GP to evolve symbolic expression trees that best capture the behavior of the pre-trained black-box model over the neighboring data points. The authors tested this approach on both classification and regression datasets and reported that the GP-based approach captured the model's behavior better than LIME, as the assumption of linear local behavior was not always valid, and also outperformed a decision tree used as an explainer for the same neighbor set. On the other hand, Guidotti et al. <cit.> proposed a method called Local Rule-based Explanations (LORE), which applies an evolutionary algorithm to neighborhood generation rather than evolving the explanation itself. Specifically, a genetic algorithm generates a set of points near the prediction to be explained, which are either classified the same as or differently from the original prediction while being nearby. A decision tree is then used to fit the local behavior of the black-box model. The use of a genetic algorithm here ensures a dense sampling of points in the local neighborhood that lie on both sides of the decision boundary. Feature importance can also be provided for individual predictions. Shapley additive explanations (SHAP) <cit.> attempts to provide a universal method for assessing feature importance that can be applied to most ML models. This is based on the Shapley value, a concept from cooperative game theory that assigns a value to each player in a game based on their contribution to the overall outcome. In the context of ML, the “players” are the features in the data, and the “game” is the prediction task. The Shapley value scores each feature based on its contribution to each prediction. The exact calculation of Shapley values is usually computationally impractical, as it involves evaluating every possible combination of features. However, SHAP proposes approximating these values using sampling and regression, making the estimation of feature importance computationally feasible. This method is widely used in the field of XAI. §.§.§ Counterfactuals Counterfactual explanations are another type of explanation that provides insight through a hypothetical example where the model would have made a different decision. For instance, “the model would have approved the loan if the income were $5000 higher” is a counterfactual explanation that identifies how the input should change in order to change the model's result <cit.>. This form of explanation is intuitive and can be performed on a model in a black-box manner without access to the internal logic of the model. Another notable advantage of counterfactual explanations is that they afford users actionable steps or recourse to achieve a desired result <cit.>. They are also inherently faithful to the model's behavior since they are grounded in actual model evaluations. However, because they consist of single instances or data points, they only provide limited insight into the model's global behavior. Diverse Counterfactual Explanations (DiCE) <cit.> is an method of constructing counterfactual predictions. The aim of this method is to produce counterfactuals that are valid (produce a different result when fed into the model), proximal (are similar to the input), and diverse (different from each other). Diversity is desirable here as it increases the likelihood of finding a useful explanation and provides a more complete picture of the model's behavior. DiCE generates a diverse set of counterfactual examples using a diversity metric based on determinantal point processes <cit.>, a probabilistic model that can solve subset selection problems under diversity constraints. This diversity constraint forces the various examples apart, while an additional proximity constraint forces the examples to lie close to the original input. The method also attempts to make the counterfactual examples differ from the input in as few features as possible (feature sparsity). Evolutionary computation is well suited to this task as a black-box, possibly multi-objective optimizer, as it allows us to find counterfactuals without knowing the internal workings of the model while also optimizing for multiple desirable criteria in the counterfactuals. CERTIFAI <cit.> generates a population of counterfactual explanations using a model-agnostic genetic algorithm. The initial population is generated by sampling instances that lie on the other side of the decision boundary of the model (i.e., are classified differently from the instance to be explained). Then, the genetic algorithm optimizes the population to minimize the distance (for some notion of distance, depending on the type of data) from each counterfactual instance to the input instance. The population is then analyzed for (1) robustness, which increases if the best counterfactual examples found are farther away from the input, and (2) fairness, which is measured by comparing robustness across different values of a particular feature. GeCo <cit.> uses a genetic algorithm with feasibility and plausibility constraints on the features, specified using the constraint language PLAF. This allows one to rule out certain counterfactuals that would be useless to the user (e.g., counterfactuals where the user changes their gender or decreases their age). Like CERTIFAI, the genetic algorithm minimizes the distance from the input instance to the counterfactual examples, prioritizing examples on the other side of the decision boundary while keeping examples close to the decision boundary if not enough counterfactuals are available. The fitness function does not consider how many features are changed relative to the input instance (with a smaller number being preferred for ease of understanding), but the algorithm is biased toward a smaller number of changes by initializing the population with only one feature changed. Multi-objective counterfactuals (MOC) <cit.> explicitly use multi-objective optimization to consider multiple desirable properties of the explanations. MOC uses a modified version of NSGA-II to perform its search. Among the changes are the use of mixed integer evolution strategies (MIES) <cit.> to search a mixed discrete and continuous space and a different crowding-distance sorting algorithm which prioritizes diversity in feature space. A total of four objectives are used, optimizing for these four desirable properties: the model output for the example should be close to the desired output; the example should lie close (in the feature space) to the input to be explained; the example should not differ from the input in too many features; and, the example should be plausible (i.e., likely to be drawn from the same distribution as the real data), which is measured by its distance to the closest k data points. §.§.§ Adversarial examples Adversarial examples are closely related to counterfactuals. An adversarial example is counterfactual, but it intends to create an incorrect prediction <cit.>. This is done by applying a small perturbation to an example to change its classification. Most approaches search for examples that are as close to the original input as possible and perceptually similar to the input. These examples are a method to highlight failure modes of the model as well as a potential attack vector on deep learning models. Su et al. <cit.> propose a method of finding adversarial examples which modify only one pixel in an image. This contrasts previous methods that modify multiple pixels in the image and are more obvious to humans. Their method uses differential evolution, where each individual is encoded by the coordinate of the pixel to be modified and the perturbation in the RGB space. They find that, in many cases, one pixel is sufficient to deceive the model. Other works explored the generation of adversarial image perturbations through evolution strategies <cit.> and the clonal selection algorithm <cit.>. Adversarial examples are also present in models built for other domains, such as natural language processing. Alzantot et al. <cit.> generate adversarial examples on a sentiment analysis model and a textual entailment model. In addition, the examples they produce are designed to be semantically and syntactically similar to the original input, making the attack more difficult to spot. A genetic algorithm is used to optimize for a different target label than the original. Mutation occurs by changing words in the input to similar words as measured by a word embedding model (GloVe) and filtering out words that do not fit the context. §.§ Assessing Explanations Finally, rather than using EC to generate the explanations themselves, we will discuss some ways in which EC can be used to assess or improve the quality of other explanation methods. Huang et al. <cit.> propose two metrics to assess the robustness of an explanation: worst-case misinterpretation discrepancy and probabilistic interpretation robustness. Interpretation discrepancy measures the difference between two interpretations, one before and one after perturbation of the input. For an interpretation to be robust to perturbations, it is desirable for this value to be low. The authors then measure the discrepancy in two worst cases: the largest interpretation discrepancy possible while still being classified as the same class and the smallest interpretation discrepancy possible while being classified differently (adversarial example). These values are optimized using a GA. The other metric, probability of misinterpretation, calculates probabilistic versions of the above: the probability of an example having the same classification but a significantly different interpretation and the probability of an example having a different classification but a similar interpretation. This is estimated using subset simulation. It is also possible to perform an adversarial attack on the explanations themselves. Tamam et al. <cit.> do this with AttaXAI, a black-box approach based on evolution. AttaXAI tries to evolve an image similar in appearance to the original input that produces the same prediction from the model but with an arbitrary explanation map. In their experiments, pairs of images were selected and were shown to be able to generate a new image with the appearance and prediction of the first image but with a similar explanation map to the second. Much of the visualization work described above has a considerable drawback when considering explainability in that only a limited amount of evaluation has been undertaken with explainability in mind. A standard approach to evaluating visualization research within EC is to apply visualizations to a benchmark dataset – perhaps a benchmark approximation set, a group of Pareto front approximations generated on multi-objective test problems with known characteristics <cit.>, or a run of an algorithm on a problem that has a specific type of landscape whose performance we would like to visualize. These are valuable approaches, as they confirm that a proposed visualization technique is able to represent the characteristics of solutions or algorithm execution that we seek to present to users. Other approaches included examining the features offered by a visualization according to a usage taxonomy and undertaking a usability study. These latter approaches are important to the analysis of visualizations from an explainability perspective. A usability study is a process by which a human user's ability to use a computer system is formally evaluated. In the context of visualization, this typically assesses the extent to which a user can interpret the information presented in a visualization. A few examples can be found in the EC literature wherein a usability study has been conducted. A small-scale usability study was conducted in <cit.>, wherein participants were asked to engage in a number of tasks (selecting the best and worst solutions from a number of visualizations) and were assessed on their accuracy and time taken to complete the task. Another study <cit.> asked users to reflect on their use of a visualization tool using a questionnaire with a range of scored and open questions. Considering the range of cases in which usability studies have been used within the wider visualization community, we argue that the EC community can gain much from incorporating them. As a first step, studying the accessibility of existing methods on a considerably larger scale – both in terms of respondents and the tasks they are asked to complete – is recommended. This, in turn, will require careful consideration since, for example, the DTLZ test problem suite used to showcase many of the visualization tools discussed above is not easily interpretable by non-experts. Instead, benchmark tasks that human users can easily understand must be identified. How best to leverage usability studies within the wider context of XAI is still an open question <cit.>, with proposals including the creation of question banks <cit.>, or evaluating different query and modality types <cit.>. All of these approaches can be readily adapted for use within EC. § XAI FOR EC In this section, we consider a complementary perspective to that in <Ref>: explainability for EC and optimization approaches in general. The motivation is similar: an optimization algorithm will follow lengthy and often complex processes to find optimal or near-optimal solutions that are presented to a decision-maker. Explanations here also help the decision-maker answer our general questions set out in <Ref>. Overall, we view the process (<Ref>) of optimization as having three stages: problem setup or definition, iterative optimization or search, and analysis of solutions. There is scope for explainability at each of these stages, which we will elaborate on in the following sections. §.§ Interpretability by Design Much of the challenge in tackling an optimization problem is in designing and formulating the objectives and solution representation. Interpretability can be a key factor in the design choices made at this stage; conversely, the problem's design is an important part of explaining it. Thus, more direct representations and explicit encoding of the variables, objectives, and constraints of real-world problems might be favored if interpretability is important. In this setting, a MILP formulation of a problem's objectives and constraints, whether to be solved by mathematical optimization or EC, would be preferable to a “black-box” function evaluation. Matheuristics <cit.> are a successful development in this area. An alternative approach <cit.> used decision trees to provide interpretable rules to choose solutions for optimization problems with the trees constructed by MILP or heuristics. Handling the components of an objective separately rather than together can permit post-hoc analysis of how solutions have been chosen throughout the evolutionary process; this motivates the use of lexicographical approaches to tournament selection such as those proposed by Deb for constraints <cit.> and multiple objectives <cit.>, or lexicase selection in GP <cit.>. The tackling of multi-objective problems can also be made more transparent through the use of post-hoc multi-objective evolutionary algorithms <cit.> that approximate a Pareto front, allowing the decision-maker to understand the trade-off between objectives, rather than having to guess the trade-off when choosing weightings for a priori optimization (e.g., a weighted sum of objectives). We will return to this theme in <Ref>. Explainability also motivates the use of direct over indirect representations: it is self-evident that the closer the solution representation and decision variables are to the real-world application, the easier the solution explanations are in the applied setting. There is often a trade-off here. For example, indirect representations such as hyperNEAT <cit.> and Grammatical Evolution have generally been found to outperform direct representations such as the tree structures of classical GP <cit.>. On the other hand, more explicit formulations of the problem allow for greater control of the operators that can be customized to the problem at hand. For example, grey-box optimization <cit.> exploits knowledge of the problem domain using direct encodings for combinatorial problems in order to improve performance. As with ML, as noted in <Ref>, direct representations should still be at the right level for interpretability: too low and the information is too fine-grained and dense for a human to interpret; too high and the real-world meaning is lost. We suggest that there could also be scope for evolutionary algorithms in engineering and selecting features with respect to interpretability for optimization problems themselves; this may be accomplished by following a cooperative coevolution approach as is already successful in large-scale global optimization <cit.>. The algorithmic framework itself also has an impact on explainability. A greedy or steepest ascent hill-climber is deterministic, with a single point to follow, resulting in an easily understood search process more interpretable than a stochastic search or a population-based algorithm. In this regard, Estimation of Distribution Algorithms <cit.> construct explicit representations of the problem, highlighting a clear mathematical route to the solutions, though this task can itself be sufficiently complex as to be non-interpretable. §.§ Explaining Problem Landscapes Landscape analysis aims to capture the interactions between algorithms and their operators with solution representations. Such approaches might be considered as being about understanding how the search proceeds rather than why particular solutions are chosen, but both aspects can be viewed as important for explainability. §.§.§ Landscape analysis and trajectories Landscape analysis <cit.> is, arguably, one of the main points of contact between XAI and EC. Landscape analysis, in fact, encompasses a set of tools that aim to understand and explain algorithm behavior based on the problem features, as well as predict algorithm performance and perform automatic algorithm configuration and selection. In this area, some works that explicitly aim at explainable landscape-aware prediction <cit.> have been proposed recently. We might consider an algorithm's behavior to be defined in terms of its trajectory through the search space. Such a trajectory is the sequence of points occupied in the search space by the algorithm's population (or even a single solution, in the case of single-solution algorithms) over the course of the algorithm's run. In this way, the trajectory captures the algorithm's progress: when particular features of the solutions were discovered, when the algorithm got stuck in a local optimum or on a plateau, when premature convergence occurred, and so on. <cit.> introduced the concept of search trajectory networks as a tool to visualize these trajectories, demonstrating the approach for several combinations of algorithms and problems. Search trajectories have also been explicitly proposed as a promising route towards XAI for EC <cit.>. In this work, Principal Component Analysis is applied to solutions visited by an EA in order to capture features prevalent in the population of an algorithm at each generation. Each component captures features in the decision space, with the loadings for each component identifying correlations between groups of variables. As variables begin to converge over the run, each component varies in prominence within the population, allowing for visualization of the algorithm's progress. The authors also demonstrated strong connections between the loadings of each component and known global optima for multiple bit-string encoded benchmark functions (e.g., groups of k-bits being correlated for trap-k functions and alternate bits negatively correlated for alternating-ones functions). In a related work <cit.>, the authors propose a feature extraction method that describes the trajectories of optimization algorithms using simple descriptive statistics. These statistics can then be used by ML methods for performance prediction or the automatic configuration of an algorithm on unseen problems. Population Dynamics Plots were also recently proposed in <cit.> as a way to visualize the progress of an EA as the search proceeds, allowing the lineage of solutions to be traced back to their origins and providing a route to explain the behavior of different algorithms. The authors visualized solutions to multi-objective knapsack problems in terms of their objectives, projecting multi-objective values down to two dimensions for visualization. The proximity of solutions to the feasible/infeasible boundary was captured, as were the convergence behaviors of different algorithm configurations. A further alternative approach to capturing the trajectory of a metaheuristic run is the creation and mining of surrogate fitness models fitted to the population <cit.>. Surrogate models are most commonly used to speed up the runs of EAs by training a model that takes the place of the fitness function. The idea proposed by <cit.> is that the surrogate model is biased towards the solutions visited by the EA as it runs, and probing the model reveals the algorithm's perspective of characteristics like the sensitivity of the objectives to each variable, as well as inter-variable relationships. These papers presented some preliminary results on well-known bitstring-encoded benchmark functions. Studies about hyper-heuristics <cit.> and parameter selection <cit.>, instead, have highlighted that specific parameter settings allow EC methods to exhibit a “generalistic” behavior, i.e., to perform generally well even on very different types of functions. The search for such settings has been shown to be effective, for instance, in selecting solutions from the Pareto fronts of multi-objective optimization problems <cit.>. Stemming from these considerations, it might be worth exploring whether the search for simple parameter configurations motivated by an easier explainability of the corresponding algorithm may also lead to generalistic solutions, as the ML theory (and Occam's razor) seems to suggest. §.§.§ User-guided evolution Allowing the user to influence or provide input into the model-building process can also improve trust. An approach described by <cit.> combines the principles of parallel coordinate plots with a multi-objective EA to allow users to define areas of interest where they would like to find solutions. Another mechanism for understanding the solution landscape is through quality-diversity or illumination algorithms, such as MAP-Elites <cit.>. These algorithms can generate diverse high-quality solutions varying along user-defined dimensions. This allows the user to understand how the quality of a solution varies with respect to different parameters, which may differ completely from the underlying parameters used by the model. An interesting future direction here could be the design of algorithms that provide human-interpretable explanations as they proceed, incorporating human feedback as part of the search. This might resemble the preference-based approach used in multi-objective optimization <cit.> but focused on the decision space. Gaier et al. <cit.> proposed a hybrid approach by using MAP-Elites alongside a surrogate model to add efficiency to the MAP-Elites process. They proposed reducing the need for the large number of checks normally required for MAP-Elites. The proposed solution, Surrogate-assisted illumination (SAIL), aims to achieve this by integrating an approximation model (surrogate) alongside an intelligent sampling of the fitness function. As with MAP-Elites, the search space is partitioned into shape bins, each of which holds a map with a different layout of feature values. Firstly, a surrogate is constructed based on an initial population of possible solutions, also including their fitness scores. MAP-Elites is then used to produce solutions to maximize the fitness function and generate an acquisition map. Thereafter, new solutions are sampled from this map and additional observations are used to iteratively improve the model, looping through this process to generate increasingly better solutions. The performance predictions are then used by MAP-Elites in place of the original fitness function to generate a prediction map of near-optimal representations. Urquhart et al. proposed in <cit.> an application of MAP-Elites to increase trust in metaheuristics. This paper specifically aimed to address the criticism that end-users have no role in the construction of the end solution. The authors proposed that MAP-Elites can be used to filter the solution space and provide a set of solutions for the users, from which they can select the one most applicable to them and their needs. This increases trust in the selected solution because the user is provided with an opening to the process and a measure of influence as to what constitutes a good solution. More recently, <cit.> presented an extension of the MAP-Elite process that extracts explainable rules from MAP-Elite archives. This work addresses the issue that MAP-Elites generate thousands of solutions to a problem; extracting information from such a large number of solutions is a challenge for a decision-maker. Instead, the authors proposed the use of GP and a rule-induction approach that generates a small number of rules that capture the characteristics of the solutions generated by the optimizer. §.§ Explaining Solutions XAI for EC is applicable in many stages of the EC pipeline as outlined in Figure <ref>. The output of the optimization run – the solutions – could also be mined for explanatory artefacts. The generation of such artefacts requires the post-hoc evaluation of the solutions created by the optimizer, whether for Pareto fronts, populations, or single solutions. This post-hoc analysis involves, in essence, the exploration of alternative causes to generate an explanation regarding solution quality and to reveal something about the model used. §.§.§ Interpreting solutions The interpretability of solutions can often be difficult to define. As noted by <cit.>, it may broadly be understood as the “extraction of relevant knowledge from a machine-learning model concerning relationships either contained in data or learned by the model”. This is connected to the older concept of backbones <cit.>, which represent components of a solution that are critical to its optimality. In a satisfiability decision problem, the backbone of a formula is the set of literals which are true in every mode. Identification of such characteristics in a solution could form part of an explanation of its quality. Dimensionality reduction techniques have been shown to help explain optimizer solutions, as proposed in <cit.>. Here, the latent problem structure and its effect on optimizer output are investigated by decomposing the search trajectories using Multiple Correspondence Analysis (MCA). By projecting the trajectories into variance-based lower-dimension spaces, feature importance at various stages of the search can be determined. These, in turn, may be used to aid end-users in interpreting both high- and low-impact influences to a solution in single-objective problems. In multi-objective space, the trade-off between a solution's explainability and its representation's accuracy has been explored in <cit.>. Here, a successful reduction in the complexity of the explanation representation is achieved through the step-wise regularization of the set of linear regression models generated from the output of the optimizer. This reduction retains the interpretability of the solution explanation while maintaining the predictive ability to outline domain-relevant mappings between the regressors and the objective function. Innovisation <cit.> was proposed by Deb et al. so that design principles shared between solutions to multi-objective optimization problems can be identified, explaining optimality in the decision space by highlighting the principles that ensure Pareto-optimality. While innovisation emphasizes understanding the underlying principles that lead to optimal solutions, more recently a focus on exploring the factors that lead to maintaining a level of coherence between solutions can be observed in <cit.>. Here, methods for maintaining the similarity of multi-objective solutions comprising the Pareto front are investigated. This is done to provide experts with a smoother view of the transition in the solution space between the solutions in a Pareto front approximation. §.§.§ Visualization of solutions Within the many-objective optimization community, a considerable body of work exists around visualizing Pareto front approximations. The challenge therein is representing solutions’ objective vectors in cases with M>3 objectives. Human cognition prevents decision-makers from comprehending four or more spatial dimensions, and work has therefore focused on three approaches: (1) identifying visualization techniques that can present solutions in terms of the full set of objective vectors; (2) identifying objectives that are redundant and can be discarded so that a standard visualization tool can be used; and (3) approaches for applying feature extraction to identify new coordinate sets that can more easily be visualized. The first category comprises techniques such as parallel coordinate plots <cit.> and heatmaps <cit.>. Both are popular techniques in the visualization community for visualizing large datasets because of their ability to scale. Both can handle many data items (such as the objective vectors in a Pareto front approximation) and features (corresponding to the objectives in this case). Unfortunately, both techniques suffer from a lack of clarity in their basic form. In the case of the parallel coordinate plot, solutions overlay each other, which leads to a large proportion of the solution set being obscured. Heatmaps are arbitrarily ordered –- in terms of both rows and columns -– causing the relationship between pairs (or larger groups) of solutions or objectives to be extremely difficult to observe. In both cases, simply reordering the data can help with the accessibility of the methods. Parallel coordinate plots have seen the objectives reordered for the trade-off between objectives to be more readily identified <cit.> while the clarity of heatmaps has been improved by reordering both the rows (solutions) and the columns (objectives) with agglomerative clustering <cit.> and spectral clustering <cit.> to better reveal patterns and trends. Parallel coordinate plots have been further enhanced using user interaction, such that the users can filter out solutions that are outside of the objective value bounds they specify. This reduces their cognitive load by requiring them to focus on fewer objective vectors <cit.>. The latter two categories both deal with dimensionality reduction. Feature extraction techniques that have been used include PCA <cit.>; self-organizing maps (SOM) <cit.>; and multidimensional scaling (MDS) <cit.>, among others. Whatever the technique, the approach relies on projecting the objective vectors from ℝ^M>3 into a new space ℝ^M∈{2,3}. This makes it possible to use a standard visualization technique such as a scatter plot, which enables the Gestalt principles of presentation to be followed – similar points in the visualization are placed close together, for example. From an explainability perspective, however, projecting objective vectors into a new space that bears no resemblance to the original objectives around which the problem was formulated can confuse a decision-maker. This situation can be improved with simple approaches, for example, by allowing the user to vary the color scheme according to different objectives; however, it can be difficult for the users to orientate themselves in terms of multiple objectives such that, in some cases, the trade-off can be difficult to observe. A proposal <cit.> designed to address this issue was to annotate the projected solutions with information such as the best and worst solution on each objective, as well as projecting samples drawn from the coordinate axes into the visualization. Further work sought to identify the edges of the original high-dimensional Pareto front approximation so that the distance from extremes in the front can be identified in a low-dimensional space <cit.>. §.§ Explaining optimizer behavior The analysis of different optimizers is closely related to landscape analysis, regardless of the optimization problem. In this field, researchers try to decouple the effects of the optimizers' internal workings and the effect of the search landscape imposed on the algorithm. One way of doing this is by using special functions such as the f_0 function proposed in <cit.>, a uniform random fitness function, or a constant function to assess certain behavioral patterns in algorithms by running them repeatedly and observing patterns in the distribution of the points finally found. Another way is to use a large and diverse set of benchmark functions or gradually change the properties of benchmark functions using affine combinations <cit.>. Behavior-based benchmarks are a sophisticated means to analyze the operational dynamics of metaheuristics, especially under varying conditions. One example of such a benchmarking tool is the BIAS toolbox <cit.>. This tool stands out by offering a behavior-centric analysis framework, enabling researchers to scrutinize whether and how different algorithms or their specific components may introduce structural bias (SB) into the optimization process. SB refers to a bias intrinsic to iterative optimization algorithms, which may drive the optimization process towards parts of the search space independently of the objective function, thus influencing the algorithm's efficiency and its outcome. Through the application of the BIAS toolbox, one can detect the presence, intensity, and nature of SB within these algorithms. A deep learning approach to detect SB was introduced in <cit.>, where XAI techniques are used to highlight different SB patterns. Such insights are important, as they shed light on potential improvement areas, helping to refine these algorithms for enhanced performance. For the purpose of gaining additional insights by benchmarking and analyzing algorithmic performance, the paper <cit.> introduces a concept termed “explainable benchmarking”. Specifically, the authors propose a framework and a software package designed to dissect and analyze the performance of various optimization algorithms alongside the influences wielded by their different algorithmic components and hyper-parameters. This methodology is applied to two modular optimization frameworks, facilitating a granular analysis of the effects of various algorithmic elements and configurations on performance across many scenarios. This work uses TreeSHAP and other global and local XAI techniques to calculate and visualize the performance contributions of each of these algorithmic components and hyper-parameters, providing several insights into what drives performance on different types of objective functions. In a similar work <cit.>, the f-ANOVA method is used to derive insights into which components of modular algorithms contribute to optimization performance. Data gathered from these experiments can then be used in combination with landscape analysis methods to derive additional insights and eventually learn the mapping between algorithm configuration, problem landscape characteristics, and performance. Another way of explaining algorithm behavior and, more specifically, benchmarking results is by comparing many different benchmark experiments reported in the literature over the years and combining these result datasets in a unifying ontology. For this reason, the optimization algorithm benchmarking ONtology (OPTION) was proposed in <cit.>, with an earlier similar attempt done in <cit.>. OPTION provides the vocabulary needed for semantic annotation of entities such as algorithms, problems, and evaluation measures. It also provides means for improved interoperability and reasoning, making these benchmark experiments much more explainable. In <cit.>, a performance prediction model built on top of OPTION was proposed. More specifically, the authors extended the OPTION ontology with the vocabulary needed to represent modular black-box optimization algorithms. They then derived knowledge graphs with fixed-budget performance data for two modular algorithm frameworks, modCMA and modDE. On top of that, a performance prediction model was proposed using the derived knowledge graphs, leading to explainable predictions of different modular algorithm configurations. § RESEARCH OUTLOOK The list of works mentioned above is not meant to be exhaustive. As the XAI field is rapidly growing, it is likely that more studies based on EC aimed at achieving XAI will appear in the near future. For instance, we believe that ever more studies will focus on hybrid systems, e.g., combining EC-induced interpretable models and black-box models for feature extraction and low-level data manipulation. Such a combination has the potential to leverage the benefit of both areas of ML, and fully exploit the exploration capabilities that represent a unique feature of EC. §.§ Challenges One major challenge for evolutionary approaches to XAI (but faced to some extent by any XAI method) is scalability. As data continues to grow and ML models become increasingly more complex, the number of parameters and features to be optimized grows as well. On the one hand, methods that work well on small models and datasets may become too expensive on larger ones. On the other hand, large models are the most opaque and most in need of explanation, so improving the scalability of XAI methods is necessary to ensure they can be applied to even the largest models. In particular, producing fully interpretable global explanations that accurately capture model behavior while being simple enough to understand may become too challenging as models become larger – necessitating more local explanations or a more focused approach concentrating on explaining particular properties or components of the model. Here, too, we see the potential for more use of automated approaches to explainability – for example, by using evolutionary search to find local explanations of interest and optimize for particular properties. This idea has been explored with counterfactual examples, but it could be extended to other types of explanations. In any case, evolutionary ML has proposed a broad range of scaling-up mechanisms over the years <cit.> that, to some extent, can also be applied to EC-based XAI methods. Another challenge for all kinds of XAI methods is the incorporation of domain knowledge. This can include knowledge from subject matter experts as well as prior knowledge about the dataset or problem. Current approaches to XAI are broad and aim to provide explanations that are independent of the problem setting or, at most, are model-specific rather than problem-specific. However, it can be useful to see how well a solution found by an ML model aligns with current knowledge in the field to evaluate the quality of the solution or, conversely, to identify areas where the model deviates from current understanding. For example, a practitioner may want to see how well the gene associations found by a genomics model align with the literature and which associations are novel. This domain knowledge can be provided in the form of expert rules, constraints, or structured data, such as a graph structure or a tree from which metrics can be defined for model evaluation. Domain knowledge can also be incorporated into the model-building process to improve interpretability, for instance, by constraining the models to focus on associations known to be plausible (e.g., by incorporating causality) or excluding irrelevant features. We believe that EC methods are particularly suited for effectively leveraging domain knowledge for building better models because of (1) their global search capacity enabling robust and complex optimization processes, (2) the possibility of hybridizing them with local search mechanisms tailored to exploit domain knowledge, and (3) their flexibility in exploration mechanisms, which provide yet more opportunities to use the domain knowledge. §.§ Opportunities We see some additional opportunities for future work employing EC for XAI. One promising direction in current research is the use of multiple objectives to optimize explanations. Explainability is inherently a multi-objective problem, requiring the explanation to be both faithful to the ML model and simple enough to be interpretable. EC is well-suited to explicitly optimizing for this. Thus, we believe introducing these ideas into current and future explanation methods can be a straightforward but effective way of improving the quality of explanations. Along similar lines, diversity metrics and novelty search are another unique strength available to evolutionary algorithms that can help improve the explanations provided. The use of quality-diversity (illumination) algorithms can produce a range of explanations that are both accurate and provide different perspectives on the model's behavior. For example, a quality-diversity approach to counterfactual explanations could ensure that a range of behaviors are showcased in the examples. Existing work <cit.> has already shown some explanatory value of search space illumination for optimization problems, but there are still many opportunities to identify ways to interpret and analyze sets of solutions to better support decision-making. New approaches to visualization, interactivity, and sensitivity analysis on solutions will all add to the XAI picture for EC. Another opportunity for EC – both in the EC for XAI and XAI for EC settings – is the incorporation of user feedback, considering the evolution of explanations as an open-ended evolution process. Explainability is intended for the human user, and, as such, explanation quality is ultimately subjective and can only be approximated by metrics. Users may also have their own unique preferences for what constitutes a useful explanation. Incorporating user feedback into the evolution process can allow better-tailored explanations that continue to improve. At the same time, better metrics measuring an explanation's quality are also necessary to avoid overwhelming the user. The design of new operators and algorithmic approaches that explicitly generate explanations as part of the search would also be an interesting direction for future EC. §.§ Real-world Impacts As AI becomes increasingly integrated into real-world applications, developing better methods for providing explanations is essential for ensuring safety and trust across various domains. With this in mind, it is also crucial to consider the practical effects and benefits that XAI research can have. We would like to highlight here a few application areas where work on evolutionary approaches to XAI can have a substantial impact. Healthcare is a domain where the consequences of errors can be especially high. Untrusted models may be ignored by clinicians, wasting resources and providing no benefit. Even worse, seemingly trustworthy but flawed models may cause harm to patients. Even models with few errors may exhibit systematic biases, such as diagnostic models under-diagnosing certain patient groups while appearing accurate <cit.>. Explainability can help identify these systematic errors and biases <cit.>. AI models are employed in the financial sector for fraud detection and risk assessment. Similar systematic biases in these models can also be harmful, for example, by disproportionately denying loans to certain groups. In addition, regulatory bodies often require explanations for these models to ensure compliance and maintain transparency. Explainability also holds significant potential to advance engineering and scientific discovery. AI models are used in various engineering applications, such as AI-driven materials design and drug discovery, and to produce scientific insights in fields like genomics and astrophysics. Explanations can offer insight into the underlying mechanisms and relationships, improving hypothesis generation and validating domain knowledge. Natural language processing has experienced many recent breakthroughs, with the development and deployment of models of unprecedented size. In particular, there is an emerging paradigm of building “foundation models”, generalist deep learning models that are trained on a wide range of data for general capabilities and can be further fine-tuned for downstream tasks <cit.>. These models can perform tasks they are not specifically trained for, but it is still unclear how they make decisions or generate outputs. Any flaws in these foundation models may be carried over to application-specific models built on top of them. As these models become more pervasive and their applications expand, understanding them and identifying their failure modes becomes increasingly important. § CONCLUSION We have shown that there is a strong mutual connection between XAI and EC. However, we believe that there are still several research opportunities that have not been thoroughly explored yet, which should mainly aim at: 1) devising tools, be them analytical, visual, data-driven, model-based, etc., to explain EC methods, i.e., their internal functioning, their results, and what properties/settings/instances make an algorithm suitable for achieving the result; 2) defining how solutions provided by EC methods should be checked and verified, and evaluating how much problem knowledge is actually needed to understand these solutions; and 3) fully exploiting the main features of EC methods (e.g., their exploration of “illumination” capabilities) to either provide post-hoc explanations (e.g., in the form of local explanations, or approximations of black-box models) or generate white-box models that are explainable by design. Another important challenge relates to the connection between XAI and neuroevolution (and, in general, neural architecture search): for instance, is there any link between optimized architectures and explainability? (e.g., smaller networks may be easier to explain). We consider these opportunities the basis for a potential bridge between EC and general AI (where machine/deep learning is currently mainstream) and believe that the EC community may play a fundamental role in the promising research area of XAI. Evidently, XAI is an emerging field with important implications for AI as a whole. With the increasing use of systems built on ML and optimization in real-world applications, it is more important than ever that we understand such systems and what they learn. EC is well-poised to contribute to the field, bringing a rich toolbox of tools for performing black-box optimization. In this paper, we introduced various paradigms for explaining an ML model and the current methods of doing so. We then discussed how EC can fit into these paradigms and the advantages of employing it. In particular, EC as an optimizer is well suited for tricky interpretability metrics that are difficult to handle due to reasons such as non-differentiability, as well as for population-based metrics such as diversity, and for optimizing a multitude of these metrics at the same time. We highlighted a few methods in each category that leverage some of these strengths. However, there is still significant room for more exploration and more advanced evolutionary algorithms. To conclude, much knowledge remains locked away within trained models that we still do not have the means to decipher. The use of EC for XAI is still uncommon, but there are many opportunities ripe for the picking, and we believe that it has the potential to play a key part in the future of XAI. unsrtnat
http://arxiv.org/abs/2406.08399v1
20240612164754
Differentiable Cost-Parameterized Monge Map Estimators
[ "Samuel Howard", "George Deligiannidis", "Patrick Rebeschini", "James Thornton" ]
stat.ML
[ "stat.ML", "cs.LG" ]
[ Differentiable Cost-Parameterized Monge Map Estimators equal* Samuel Howardsch George Deligiannidissch Patrick Rebeschinisch James Thorntonsch,comp compApple schDepartment of Statistics, University of Oxford Samuel Howardhoward@stats.ox.ac.uk Machine Learning, ICML 0.3in ] § ABSTRACT Within the field of optimal transport (OT), the choice of ground cost is crucial to ensuring that the optimality of a transport map corresponds to usefulness in real-world applications. It is therefore desirable to use known information to tailor cost functions and hence learn OT maps which are adapted to the problem at hand. By considering a class of neural ground costs whose Monge maps have a known form, we construct a differentiable Monge map estimator which can be optimized to be consistent with known information about an OT map. In doing so, we simultaneously learn both an OT map estimator and a corresponding adapted cost function. Through suitable choices of loss function, our method provides a general approach for incorporating prior information about the Monge map itself when learning adapted OT maps and cost functions. § INTRODUCTION Optimal transport. Mapping samples between two datasets in a meaningful way is one of the most fundamental tasks across the sciences. Optimal transport (OT) provides a principled framework for such mapping problems, and has enjoyed success in many fields including throughout biology <cit.>, and extensively in machine learning with applications in generative modelling <cit.>, differentiable sorting <cit.>, clustering <cit.>, resampling <cit.> and self supervised learning <cit.>. For two probability measures μ and ν on ^d, the OT problem finds the most efficient way to transport μ to ν relative to a ground cost function c : ^d ×^d →. The choice of cost function therefore determines the notion of optimality. Choice of Ground Cost. The squared-Euclidean distance c(x,y) = is the default and most commonly used ground cost in computational OT methods, due to its desirable theoretical properties and ease of implementation. It allows the use of the closed-form transport map from Brenier's theorem <cit.>, and an elegant connection to convex analysis upon which many methods are based <cit.>. However, the squared-Euclidean cost can be an arbitrary choice and may not be suitable for the given problem <cit.>. Moreover, it can result in erroneous mappings that do not agree with known ground-truth information <cit.>, as demonstrated in Figure <ref>. Recently, several works have proposed methods to estimate OT maps for more general ground costs <cit.>, however it remains unclear how such a cost should be chosen. Cost Learning. Ambiguity surrounding the choice of ground cost has motivated recent interest in learning adapted cost functions by leveraging assumed additional information about the transport map (see Appendix <ref> for a discussion of related work). A frequently studied setting is inverse OT, which aims to learn cost functions from paired samples assumed to come from a transport coupling. Access to completely paired datasets is however unlikely to arise in practical applications. Instead, we may aim to learn an improved cost function from partial information about the mapping, such as a limited number of paired points or known structure in the map displacements. This motivates a need for general cost learning frameworks that are able to learn from a variety of types of available information. One general approach for learning cost functions is a bi-level approach entailing solving a regularized OT problem for given cost function with Sinkhorn's algorithm, then differentiating through this solution to optimize the cost (see Appendix <ref>). In many OT applications however, one wishes to approximate the Monge map T^⋆ itself, rather than the cost function alone. This can be used to transport out-of-sample points, enabling applications such as generative modelling and prediction <cit.>. While a learned cost can be plugged-in to an OT map estimator, if an accurate approximation is not obtained then the resulting estimator may not agree with the information used to learn the cost. Contributions. We introduce an alternative approach for learning adapted cost functions and OT maps by instead optimizing a cost-parameterized Monge map estimator directly to be consistent with known information. This approach simultaneously learns a map estimator and a corresponding cost function, ensuring that the resulting map has the desired properties. We make the following contributions: * We propose a differentiable structured Monge map estimator, incorporating costs c(x,y) = h(x-y) for strictly convex h. We parameterize h using an Input Convex Neural Network (ICNN) <cit.>, h_θ, and use the differentiable entropic map estimator <cit.> to facilitate gradient based training. * We then extend our construction to costs c(x,y) = h(Φ_μ(x) - Φ_ν(y)) for invertible functions Φ_·, providing additional flexibility whilst retaining a structured Monge map. * We showcase how our approach can incorporate partially known information, including aligning mapping estimators with known data associations, and encouraging desirable properties in the resulting transport map. § BACKGROUND ON OPTIMAL TRANSPORT Monge. The original OT formulation given by <cit.> seeks a map T^⋆ minimizing the total transportation cost amongst maps T:^d →^d pushing μ onto ν: min_T: T#μ = ν∫_^d c(x, T(x)) μ(x). Kantorovich. A more general formulation, permitting mass splitting, by <cit.> seeks a joint distribution π^⋆∈(^d ×^d) minimizing the cost over the set of couplings Γ(μ,ν) between marginals μ and ν: min_π∈Γ(μ,ν)∬_^d ×^d c(x,y) π(x,y). The Kantorovich formulation admits the following dual formulation, where ℛ_c = {(f,g) ∈ L^1(μ)× L^1(ν) : f(x)+g(y) ≤ c(x,y)  μ⊗ν a.e.}. The solutions (f^⋆,g^⋆) to the dual problem are known as the Kantorovich potentials, sup_(f,g) ∈ℛ_c{∫ f dμ + ∫ g dν}. Closed-form Monge maps. The following theorem details sufficient conditions for equivalence of the Monge and Kantorovich problems, and motivates solving for the Kantorovich potentials to obtain the Monge mapping. For measures μ and ν on a compact domain Ω⊂^d and a cost of the form c(x,y) = h(x-y) for a strictly convex function h, there exists an optimal plan π^⋆ for the Kantorovich problem. If μ is absolutely continuous and ∂Ω is negligible, then this optimal plan is unique and of the form (Id, T^⋆)#μ, there exists a Kantorovich potential f^⋆, and T^⋆(x) = x - (∇ h)^-1(∇ f^⋆(x)). Entropic OT. The entropic OT problem instead smooths the transport plan by adding an entropic penalty term to (<ref>), π^⋆ = _π∈Γ(μ,ν)∬_^d ×^d c(x,y) π(x,y) + KL(π | μ⊗ν). This relaxes the constraints on the dual potentials. The solutions (f_^⋆,g_^⋆) to the dual problem are known as the entropic potentials, sup_f ∈() g ∈() {∫ f μ + ∫ g ν +R(f,g) }, where R(f,g):=- ∬ e^f(x)+g(y) - c(x,y)/μ(x) ν(y). Entropic OT approaches and hence approximates OT in the limit as ↘ 0, but the entropic solution enjoys the key benefit of differentiability with respect to the inputs and enables efficient computation for discrete measures μ̂, ν̂ using Sinkhorn's algorithm <cit.>. § A DIFFERENTIABLE MONGE MAP ESTIMATOR We consider learning costs of the form c(x,y) = h_θ(x-y) for a strictly convex h_θ, for which the Monge map has the form in (<ref>). This enables direct optimization according to conditions on the map itself, ensuring that the resulting mapping is consistent with known prior information. Such costs allow the use of methods such as the entropic mapping estimator <cit.> and c-rectified flow <cit.>, ensuring reliable estimation of the corresponding OT map. In contrast, learning OT map estimators for arbitrary costs requires a trade-off between distribution fitting and optimality (see Appendix <ref>), and if the resulting mapping is inaccurate it may not be consistent with the information from which the cost was learned. Moreover, additional known information about an OT map will not uniquely determine a cost function. By considering convex costs, we provide an inductive bias towards simpler and more interpretable costs. §.§ Differentiable, Cost-Parameterized Transport Maps We parameterize h_θ using an Input Convex Neural Network (ICNN) <cit.>. We also enforce α-strong convexity and (optionally) symmetry of h_θ (see Appendix <ref>). We use the entropic potential f̂_θ^ <cit.> for the discrete empirical measures μ̂, ν̂ as a differentiable proxy for the Kantorovich potential f^⋆. This is constructed from the Sinkhorn potential _^⋆ solving the discrete OT problem between μ̂, ν̂: f̂_θ^(x) = - log∑_j exp((_^⋆)_j - h_θ(x-y_j)/). With (<ref>), this gives the entropic mapping estimator T_θ^(x) = x - (∇ h_θ)^-1 (∇f̂_θ^ (x)). Differentiability. We can differentiate through the output of Sinkhorn's algorithm using implicit differentiation <cit.>, or by unrolling the iterates <cit.>. We also note the following well-known relation, enabling differentiation through (∇ h)^-1. propositiongradinv For a strictly convex function h, we have (∇ h)^-1(x) = _z { h(z) - z,x }. As we enforce α-strong convexity of h, the minimization in (<ref>) can be solved efficiently using numerical methods. We can then use implicit differentiation (see Chapter 10, <cit.>) to differentiate through this inner minimization with respect to the cost parameters. We implement this using the JAXopt libary <cit.>. We now have a Monge map estimator that is end-to-end differentiable with respect to its cost function parameters. Choice of loss function. By choosing an appropriate loss function (θ), we can incorporate desired explicit biases into the learned cost through map estimator T_θ^. The training procedure is described in Algorithm <ref> in Appendix <ref>. There are many possible training losses that can be used to encourage some desired behaviour. A natural objective, for example, is to ensure the learned mapping correctly matches known paired points. Given a known subset of paired points (x_i,y_i)_i=1^N, the loss ℒ(θ) := 1/N∑_i=1^N ‖ T_θ^(x_i)-y_i ‖_2^2 encourages the learned map to respect the known pairs. We provide other choices of loss functions suitable for a range of problems in Appendix <ref>. We can also construct the reverse map estimator ( T_θ^(x) )^-1. We find it beneficial to train using both mappings jointly; see Appendix <ref> for details. Warmstarting the inner optimizations. Each iteration of our procedure involves solving two types of inner optimization problem, (1) the discrete entropy-regularized OT problem using Sinkhorn, and (2) the evaluation of (∇ h)^-1. While both problems are convex and thus can be solved easily, naive initializations would result in unnecessary computational cost. We instead warmstart each optimization from the corresponding solutions at the previous iteration, which significantly improves training speed <cit.>. §.§ Augmenting with Diffeomorphisms In the presence of extensive information about the map, the choice of strictly convex costs c(x,y)=h(x-y) can be overly restrictive, as there may not be a convex cost that allows the Monge map to be consistent with the known information. We therefore extend our framework by first transforming the marginal measures using diffeomorphisms Φ_μ, Φ_ν, then applying our method to μ = Φ_μ#μ and ν = Φ_ν#ν. The following extension of Theorem <ref>, proved in Appendix <ref>, shows that this amounts to learning a map with cost c(x,y) = h(Φ_μ(x) - Φ_ν(y)). theoremGenGM Under the conditions of Theorem <ref> with a cost of the form c(x,y) = h(Φ_μ(x) - Φ_ν(y)) for a strictly convex function h, the optimal plan is unique and of the form (Id,T^⋆) #μ, and T^⋆ can be written as T^⋆(x) = Φ_ν^-1[ Φ_μ(x) - (∇ h)^-1∘∇ f^⋆∘Φ_μ (x) ]. We parameterize Φ_μ, Φ_ν using normalizing flows <cit.>, and train end-to-end along with the procedure in Algorithm <ref>. This increases the expressivity of the class of cost functions we consider. As we would prefer to learn simpler costs, we can take Φ_μ=Φ_ν which preserves symmetry when h is symmetric. <ref> resembles similar recent results which use learnable invertible matrices <cit.> and fixed mirror-maps <cit.>. § EXPERIMENTS Inverse OT. In Figure <ref> we verify that our method can indeed learn valid cost functions in the Inverse OT setting using synthetic 2d distributions. We are able to learn a consistent map estimator for the T-shape dataset using only a symmetric convex cost c(x,y)=h(x-y). For the moon and spiral datasets, we require costs of form c(x,y) = h(Φ_μ(x) - Φ_ν(y)). Solving the discrete OT problem on the training datasets using the learned cost recovers the correct pairs, demonstrating that the learned costs are indeed valid. Limited labelled pairs. Optimal transport is commonly used to infer single-cell transcriptome trajectories between unaligned datasets, under the assumption that a cell population has moved `efficiently' between the observed timesteps <cit.>. The recent Live-seq profiling technique <cit.> avoids the destruction of measured cells, enabling a number of individual cell trajectories to be traced. These trajectories appear not to agree with those predicted by OT methods that use the squared-Euclidean cost. Nevertheless, the assumption that cells move `efficiently' is reasonable, raising the question of whether a different choice of ground-cost is more suitable. To learn an expressive but interpretable cost, we consider learning symmetric costs of the form h(Φ(x)-Φ(y)). We fit the Monge map estimator to the first 10 principal components of the Live-seq data consisting of cells pre- and post cell-differentiation, so that it agrees with the known trajectories. We plot the resulting mapping estimators in Figure <ref>. In Table <ref> we report the total incorrectly transported mass in the resulting coupling (denoted I.M.; see Appendix <ref>), as a measure of the validity of the learned cost. To assess the adapted OT map, we also report the RMSE of the predictions for the known cell trajectories, and the Sinkhorn divergence S_(T̂#μ, ν). Experimental details are provided in Appendix <ref>. We compare against costs learned by optimizing the Sinkhorn coupling matrix to minimize the incorrect mass reported in Table <ref>. We first use the same cost parameterization h(Φ(x)-Φ(y)) and obtain a map using the entropic mapping estimator. We also use an unstructured MLP cost c(x,y) and a learn a map estimator using the Monge Gap regularizer <cit.>. Optimizing the map directly consistently obtained both an interpretable cost function and a good transport estimator agreeing with the observed trajectories. Given the scarcity of known Live-seq trajectories, we cannot test out-of-sample performance of the resulting map estimator. We therefore perform the same procedure on synthetic data with different numbers of labelled pairs in Appendix <ref>, reporting results on training and newly-sampled data. Additional Experiments. We provide additional experiments in Appendix <ref>, including the ability to induce desirable properties that hold on the map displacements themselves, such as being low-rank or k-directional. § CONCLUSION We have introduced a Monge map estimator parameterized by a learnable cost function, which is differentiable with respect its parameters. This enables learning adapted cost functions and OT map estimators through gradient based training so that the map is consistent with known prior information. We have demonstrated the ability to learn costs by aligning with paired data samples, and by inducing desirable properties on the Monge map estimator itself. Future directions include: conducting further comparisons of optimizing the entropic map compared to the coupling matrix, and investigating the use of adapted costs and OT maps in downstream tasks. § ACKNOWLEDGEMENTS Samuel Howard is supported by the EPSRC CDT in Modern Statistics and Statistical Machine Learning [grant number EP/S023151/1]. Patrick Rebeschini was funded by UK Research and Innovation (UKRI) under the UK government’s Horizon Europe funding guarantee [grant number EP/Y028333/1]. plainnat § METHOD We implement the differentiation through Sinkhorn using the OTT-JAX library <cit.>. We use the L-BFGS solver <cit.> from the JAXopt library <cit.> to evaluate and implicitly differentiate through the inner minimization when evaluating (∇ h_θ)^-1. §.§ Cost function parameterization §.§.§ Convex function h Input Convex Neural Networks. ICNNs <cit.> are a class of neural networks f_θ : ^d → with an architecture that ensures that the mapping x ↦ f_θ(x) is convex. An ICNN consists of k feedforward layers, where for each layer i = 0,...,k-1 the activations are given by z_i+1 = σ_i(W_i^(z)z_i + W_i^(x)x + b_i), f_θ(x) = z_k. The network ensures convexity by consisting only of non-negative sums and compositions of convex non-decreasing functions with convex functions. It therefore requires the nonlinear activation functions to be convex and non-decreasing, and the weight matrices W_i^(z) to be non-negative (no such requirements are required for the passthrough matrices W_i^(x)). The first layer does not require an additional passthrough layer, so we have W_0^(z) = 0. ICNNs have previously been utilised in computational optimal transport methods to parameterize Kantorovich potentials, which via a reparameterization are known to be convex in the case of the squared-Euclidean cost <cit.>. Although mathematically elegant, <cit.> note that constraining the potentials to be ICNNs can in fact result in worse performance versus vanilla MLPs. We remark that in our case, the use of ICNNs instead of MLPs to parameterize the cost is crucial to our methodology, as it ensures that the inner optimization in (<ref>) is convex and can be solved numerically. Symmetry. The additional information used to learn an adapted Monge map and cost will not uniquely determine the cost function. This is especially the case when there is little additional information. We therefore wish to favour learning simpler and more interpretable costs. Translation-invariant convex costs c(x,y)=h(x-y) provide a good inductive bias towards such costs. We may also wish for the cost to be symmetric; this can be optionally enforced by parameterizing as h(z) = h_θ(z) + h_θ(-z) for an ICNN h_θ. α-strong convexity. The closed-form of the Monge map in (<ref>) requires strict convexity of h. We enforce this by adding a quadratic term α‖ x - y ‖_2^2 to the cost function. This also ensures that the inner minimization (<ref>) is strongly convex and thus the unique solution can be solved for efficiently with numerical methods. Parameterizing the convex conjugate. We remark that after having learned an adapted transport map, evaluating the learned mapping at a point x requires a minimization problem to evaluate (∇ h)^-1. This can be avoided by instead parameterizing the convex conjugate h^* rather than h, and using the relation (∇ h)^-1 = ∇ h^*. While avoiding the inner minimization when evaluating the map, this instead requires an inner minimization to evaluate each entry of the cost matrix _i,j = c(x_i, y_j), which is then fed into Sinkhorn at each training iteration. The result is that significantly more inner minimizations are required during training, though with the benefit that the resulting learned map can be evaluated directly. The choice to parameterize h or h^* should therefore depend on whether fast evaluation of the learned mapping is preferred over training speed. §.§.§ Diffeomorphisms Φ_μ, Φ_ν In Section <ref>, we extend our framework to incorporate cost functions of the form c(x,y) = h(Φ_μ(x) - Φ_ν(y)). In our experiments, we use the MADE flow architecture <cit.> to parameterize Φ_μ, Φ_ν, implemented using the Jax-Flows library <cit.>. Relation to metric learning. When taking Φ_μ = Φ_ν the cost c(x,y) = h(Φ(x) - Φ(y)) resembles a Siamese network <cit.>, a common approach in the metric learning literature. Siamese networks seek to learn encoders Φ so that the resulting function d(x,y) = ‖Φ(x) - Φ(y)‖_2^2 is an improved notion of distance for the data. However, <cit.> note that such compositions can be insufficiently expressive to precisely model certain metrics. Motivated by this, <cit.> replace the squared-Euclidean distance with a modified ICNN, providing a more expressive class of neural distances. Our motivation is similar. Note that we could use an ICNN ψ to directly parameterize a Monge map for a cost c(x,y) = ‖Φ_μ(x) - Φ_ν(y) ‖_2^2 according to the composition Φ_ν^-1∘∇ψ∘Φ_μ (via Brenier's theorem and a change of variable as in Theorem <ref>). However, by incorporating an ICNN we add flexibility in the cost parameterization, allowing us to favour simpler and more interpretable learned costs. §.§ Choice of Loss Function To illustrate the generality of our method, we provide here potential choices of loss function suitable for different applications. They encourage the map estimator to be consistent with known information or desired behaviour. We remark that these suggestions are only examples of possible loss functions that can be used, and a practitioner is free to choose any differentiable loss function suitable for their needs. In particular, our method is most suitable when behaviour is desired to hold on the map estimator itself, as this is the object being optimized. §.§.§ Labelled datapoints Paired datapoints. Consider assuming access to a subset {(x_i,y_i)}_i=1^N of μ̂, ν̂ consisting of known pairings from the optimal mapping, so y_i = T^⋆(x_i). We can optimize our map to agree with the observed pairings by penalizing the L_2 distance between the predictions and the known targets, ℒ(θ) := 1/N∑_i=1^N ‖ T_θ^(x_i)-y_i ‖_2^2. In the case where all samples in μ̂, ν̂ are paired, this recovers the Inverse OT setting. Note that during training, we only need to evaluate T_θ^ at the paired data points. Subset-to-subset correspondence. <cit.> consider a setting in which certain source subsets are known to map to certain target subsets. If our empirical source and target distributions are μ = ⋃_i μ_i, ν = ⋃_i ν_i with μ_i known to map to ν_i, then the loss function can be chosen as ℒ(θ) = ∑_i S_(T_θ^#μ̂_̂î, ν̂_̂î), where S_ denotes the Sinkhorn divergence <cit.> with regularization parameter . §.§.§ Properties of Map Displacements Low-rank displacements. To encourage the map estimator T_θ^ to transport along a lower p-dimensional subspace, we can penalize the magnitude of the trailing singular values σ_i^(θ) of the displacement matrix (T_θ^(x_i) - x_i)_i, (θ) = ∑_i>p‖σ_i^(θ)‖^2. k-directional displacements. We can encourage the displacements to lie primarily along at most k distinct directions. We can parameterize k directions v^ϕ = (v_1^ϕ, ..., v_k^ϕ), then maximize the cumulative smoothed minimum of the cosine distance d_cos(u,v) = 1 - u · v/‖ u ‖‖ v ‖, (θ, ϕ) = - ∑_i _j [ - d_cos( T_θ^(x_i) - x_i,v_j^ϕ) ]. §.§.§ Reverse map estimators We have presented our method as optimizing according to a loss on the forward mapping estimator (θ) = (T_θ^). However, we can also construct the reverse entropic mapping estimator as ĝ_θ^(y) = - log∑_i exp((_^⋆)_i - h_θ(x_i-y)/), (T_θ^)^-1(y) = y - (∇h_θ)^-1∘ (∇ĝ_θ^)(y), where h_θ(z) = h_θ(-z). The reverse map estimator should also be consistent with the known information, so we optimize according to the sum of the losses of the forward and backwards mapping estimators, (θ) = 1/2(T_θ^) + 1/2( (T_θ^)^-1). §.§ Regularization To ensure stable training, we add a regularization term (θ) to the loss used during training. Cost matrix. During optimization, the procedure may learn a cost function which places very large absolute values on certain displacements in order to encourage the desired behaviour. Although the OT map is invariant to scaling of the cost function, if such values become extreme this can result in numerical instability. To prevent this, we add a regularization term penalizing the softmax of the largest absolute value in the cost matrix _θ, λ_max_i,j( | c_θ(x_i, y_j) |)^2. The value of λ_max can be taken to be very small, so that it has minimal effect on optimization apart from preventing extreme values. Flows. Recall that often we prefer to learn symmetric cost functions. However, when there is a large amount of available information about the transport map, it might not be possible to fit such information with a symmetric cost. In such cases, we can use two separate flows, and we can encourage the cost towards symmetricity by adding a regularizing function so that Φ_μ≈Φ_ν, ∑_i ‖Φ_μ(x_i) - Φ_ν(x_i) ‖^2 + ∑_j ‖Φ_μ(y_j) - Φ_ν(y_j) ‖^2. As we wish to prefer simpler cost functions, we can control the complexity of the flow by regularizing using the Dirichlet energy, ∑_i ‖∇Φ_μ(x_i)‖_2^2 + ∑_j ‖∇Φ_ν(y_j)‖_2^2. § RELATED WORK §.§ Monge Map Estimation Often in applications it is desired to construct an estimator for the Monge map itself. This motivates our approach of optimizing a Monge map estimator directly, to ensure that the resulting mapping agrees with known information. We here provide an overview of methods to construct a Monge map estimator. §.§.§ Neural OT map estimation Many approaches utilise neural networks to parameterize the Kantorovich potentials, or alternatively the transport map itself. Squared-Euclidean cost. In the case of the squared-Euclidean cost c(x,y) =, the closed form expression for the Monge map given in Theorem <ref> reduces to the celebrated Brenier's theorem, T^⋆(x) = x - ∇ f^⋆(x) <cit.>. In this case, it is common to reparameterize the potentials by letting ψ = 1/2‖ x ‖^2 - f(x) and φ(y) = 1/2‖ y ‖^2 - g(y). The Kantorovich dual problem (<ref>) becomes inf_(ψ,φ) ∈Φ{∫ψ dμ + ∫φ dν}, where Φ = {(ψ,φ) ∈ L^1(μ)× L^1(ν) : ψ(x)+φ(y) ≥⟨ x,y ⟩ μ⊗ν a.e.}. There exists an optimal potential ψ^⋆ that is convex <cit.>, and the Monge map is given by T^⋆(x) = ∇ψ^⋆(x). The dual objective (<ref>) and the link to convex analysis form the basis for many successful computational techniques, though as a result they are restricted to only the squared-Euclidean cost. <cit.> parameterize the dual potentials in (<ref>) using ICNNs. <cit.> solve for the convex conjugate ψ^* as an inner minimization step, and subsequent work <cit.> instead parameterize both potentials as ICNNs and optimize a minimax objective. <cit.> remove the minimax objective by adding a cycle-consistency regularizer to encourage the potentials to be convex-conjugate up to a constant. While the use of ICNNs to parameterize the potentials is appealing from a mathematical standpoint, results in <cit.> suggest that it may hinder optimization. Although the squared-Euclidean cost has desirable theoretical properties, it may not be an appropriate choice in applications. <cit.> comment that methods approximating the squared-Euclidean OT map can result in erroneous matchings when ground truth couplings are known (demonstrated in Figure <ref>), and <cit.> have shown it to have poor discriminative properties for image data. This motivates learning an improved cost function more suitable for the problem at hand. General Cost Functions. Recently, several alternative methods have been proposed that are suitable for general cost functions. <cit.> and <cit.> optimize a saddle point formulation of the OT problem, directly parameterizing the transport map and a dual potential using neural networks. <cit.> take an alternative approach, instead parameterizing the mapping with a neural network and training so that it fits to the source measure, with a Monge Gap regularizer that encourages the map to be optimal with respect to the chosen cost. Such methods present a trade-off between fitting to the distribution, and optimality with respect to the chosen cost, both of which need to be minimized if the resulting map is to be an accurate approximation to the OT map. §.§.§ Entropic map estimation Entropic OT. The entropy regularized OT problem adds a entropic penalty term to the primal objective, π_^⋆ = _π∈Γ(μ,ν)∬_^d ×^d c(x,y) π(x,y) + KL(π | μ⊗ν). Note that only the support of the reference measure μ⊗ν affects the optimization problem <cit.>. The entropy regularization `blurs' the solution, enabling differentiability of both the entropy-regularized OT cost and plan with respect to the input measures and cost function. As a result, optimal transport has enjoyed success in machine learning applications for which differentiability is required. The dual formulation of the unregularized Kantorovich problem (<ref>) has strict constraints on the potentials, leading to a difficult optimization problem. The entropic-regularized dual formulation relaxes these constraints, making it more amenable to optimization, sup_f ∈() g ∈() {∫ f μ + ∫ g ν - ∬ e^f(x)+g(y) - c(x,y)/μ(x) ν(y) }. Stochastic dual optimization. <cit.> optimize this dual objective with stochastic gradient descent using samples from the measures. In the case of continuous measures, they parameterize the potentials as kernel expansions in a Reproducing Kernel Hilbert Space. <cit.> instead propose using neural networks to parameterize the potentials, and also construct a deterministic map using a neural network to approximate the resulting barycentric projection. In the case of the squared-Euclidean cost, this barycentric projection corresponds to the Monge map. Sinkhorn. In practice, we often have access to discrete empirical approximations to underlying continuous distributions. In the discrete case with measures μ = ∑_i a_i δ_x_i, ν = ∑_j b_j δ_y_j and cost matrix _i,j = c(x_i, y_j), the entropy-regularized OT problem can be written as Π_^⋆ = min_Π∈ U(,)Π, - H(Π), H(Π) = - ∑_i,jΠ_i,j (logΠ_i,j - 1). The Sinkhorn algorithm <cit.> provides an efficient way to compute the solution to the discrete entropy-regularized OT problem, and is also well-suited for parallelization on GPUs to solve multiple OT problems simultaneously. From an arbitary initialization ^(0) and using the kernel matrix _i,j = exp( -_i,j/), the Sinkhorn iterates are defined as ^(ℓ+1) = log - log( e^^(ℓ) / ), ^(ℓ+1) = log - log( ^⊤ e^^(ℓ+1) / ). Entropic mapping estimator. Recall that for a cost function c(x,y) = h(x-y) for strictly convex h, the Monge map has form T^⋆(x) = x - (∇ h)^-1(∇ f^⋆(x)), where f^⋆ is the Kantorovich potential. The Kantorovich potentials can be chosen to satisfy the following h-conjugacy property <cit.>, which is unfortunately a difficult property to enforce, f^⋆(x) = min_y { h(x-y) - g^⋆(y) } g^⋆(y) = min_x { h(x-y) - f^⋆(x) }. In the entropically regularized case, the entropic potentials f_^⋆, g_^⋆ instead satisfy the following relation, which is a softmin relaxation of the above h-conjugacy property, f_^⋆(x) = - log∫ e^g_^⋆(y) - h(x-y)/ dν(y), g_^⋆(y) = - log∫ e^f_^⋆(x) - h(x-y)/ dμ(x). This suggests using the entropic potentials in place of the true Kantorovich potential. In practice, we only have access to discrete empirical measures μ̂, ν̂ when constructing a Monge map estimator. The entropic mapping estimator therefore solves the discrete entropic OT problem between μ̂, ν̂ using Sinkhorn, and uses the output to construct the entropic potential f̂_ which is used in place of f^* in the Monge map expression. The resulting entropic map estimator is T_h^(x) = x - (∇ h)^-1 (∇f̂_ (x) ). This estimator was proposed in <cit.> for the squared-Euclidean cost, along with finite-sample guarantees on its performance. It was extended to general convex functions h in <cit.>, and <cit.> demonstrate the versatility and good performance of the resulting estimator. As it is constructed from the Sinkhorn iterations, the entropic mapping estimator can be computed efficiently in comparison to the alternative approaches outlined above. As we can differentiate through the Sinkhorn iterations, it is therefore suitable for our aim of constructing a differentiable Monge map estimator. Our additional contributions lie in the use of ICNNs to parameterize h, and in the observation that the evaluation of (∇ h)^-1 can be differentiated through using implicit differentiation, enabling end-to-end differentiability of the estimator with respect to the cost function parameters. §.§ Cost Learning Methods to learn an adapted cost functions have attracted attention since the problem was introduced by <cit.>, but remain relatively unexplored in comparison to standard OT. Existing methods in the literature consider a variety of different problem settings. §.§.§ Inverse Optimal Transport Inverse optimal transport (iOT) aims to learn a cost function given paired samples (x_i, y_i) from the optimal or entropic coupling. Optimizing a dual objective. Existing iOT approaches typically assume access to samples from the entropic coupling (x_i,y_i) ∼π_, which form an empirical joint measure π̂_n = 1/n∑_i δ_(x_i,y_i). They then perform maximum likelihood estimation for a parameterized cost c_θ by minimizing the negative log-likelihood given by the convex function ℓ(θ) = - c_θ, π̂_n + sup_π∈(a,b){ c_θ, π - H(π) } . The cost is usually parameterized as a linear combination of convex basis functions, and a convex regularizer R(θ) is added to encourage the parameter to be sparse <cit.> or low-rank <cit.>. To avoid a bilevel optimization procedure, the problem is reformulated as minimizing the dual objective 𝒥 (θ,f,g) = ∫ c_θ(x,y)-f(x)-g(y) dπ̂_n(x,y) + ∫exp(f(x)+g(y)-c(x,y)/) dμ(x) dν(y) + λ R(θ). <cit.> optimize a similar dual objective, and parameterize both the cost and the Kantorovich potentials using MLPs. The numerical algorithm proposed in <cit.> uses the known form of the optimal potential g to instead optimize a semi-dual formulation of (<ref>), which results in a better-conditioned optimization problem. Discrete inverse OT has been considered from the perspective of matching problems <cit.>, contrastive learning <cit.>, and economics <cit.>, as well as from a Bayesian perspective in <cit.>. Rather than learning a specific cost matrix, <cit.> provide a theoretical analysis of the set of possible cost matrices. The inverse OT setting assumes access to paired samples from an transport plan, which is an unlikely scenario in practice. It is more likely that we have partial information about data associations, such as a smaller subset of paired points. Alternatively, we may leverage known information about the transport map specific to the problem at hand as an inductive bias. The above inverse OT methods are unable to utilise such partial information. §.§.§ Differentiating through Sinkhorn An alternative approach for learning cost functions is to optimize according a loss that is a function of the entropy-regularized coupling matrix, which can be obtained using Sinkhorn's algorithm. The cost parameter is then updated through the bilevel optimization problem by unrolling the Sinkhorn iterations, or using implicit differentiation. Differentiating through Sinkhorn provides a more flexible cost-learning approach in comparison to inverse OT, as a specific loss function can be chosen to encourage the desired behaviour in the coupling matrix. <cit.> use this procedure to learn a cost function from known subset correspondences. They use Sinkhorn's algorithm to solve for the entropic coupling matrix according to the current parameterized cost function. The loss is then the sum of the squares of the entries in the resulting matrix corresponding to incorrect mappings between the known subset assignments. In the extreme case, this recovers inverse OT and the loss function is the sum of the squares of the off-diagonal elements. <cit.> differentiate through Sinkhorn to instead learn cost functions based on structural assumptions on the displacements. They minimize a loss (θ) = π_θ^, M(θ), where the matrix entries M(θ)_ij = τ_θ(x_i - y_j) are a parameterized convex regularization function evaluated on the displacements. As such, they aim to learn a cost so that the resulting coupling places low mass on values with large regularization value. In particular, they consider regularizing functions τ_A^(z) = ‖ A^ z ‖_2^2 promoting displacements in the span of the orthonormal matrix A, which they aim to learn. As discussed in Appendix <ref>, it is often the case that we wish to obtain an estimator for the Monge map itself. Learned costs can be plugged in to Monge map estimators, though it is only recently that such solvers have been developed that can handle general ground-cost functions (see Appendix <ref>). Such estimators can be difficult to interpret as they use a neural network rather than a closed-form mapping, and if they fail to learn accurate approximations to the OT map then the resulting mapping may not display the desired properties. Moreover, it could be desired for properties to hold in the displacements of the mapping estimator itself, rather than for the displacements in the coupling matrix which are fixed according to the empirical distributions. In contrast to the aforementioned approaches, we optimize a map estimator itself. This avoids the need for a two-step procedure and instead optimizes the object of interest directly, ensuring that the resulting mapping displays the desired behaviour. §.§.§ Alternative approaches to cost learning Learning improved cost functions has been considered from the supervised metric-learning perspective, aiming to learn a ground cost between histograms that agrees with labelled `similarity' coefficients between pairings <cit.>. <cit.> and <cit.> learn a cost from observations of a density that is assumed to be evolving optimally. <cit.> consider discrete measures supported on graphs, in which the cost is given by a geodesic on the graph parameterized by weights on each edge, whereas <cit.> learn a Riemannian metric on the underlying continuous space. <cit.> consider an adversarial setting in which the cost is chosen to maximize its discriminative power. § EXPERIMENTAL DETAILS §.§ Inverse OT We train according to the Inverse OT loss function given in Appendix <ref> using 128 pairs sampled from 2-dimensional T-shape, moon, and spiral distributions. We plot the ground-truth pairings and learned predictions for 32 newly-sampled test points in Figure <ref>, along with the squared-Euclidean entropic map estimator as a comparison. In all experiments, we parameterize h to be symmetric and 0.01-strongly convex, and use an ICNN with hidden layers of size [32,32]. We train for 500 iterations. For the moon and spiral datasets we use a relative epsilon value of 0.01; for the T-shape dataset we decay the relative epsilon from 0.05 to 0.002. We are able to learn a mapping estimator resembling the T-shape data using a simple symmetric cost of the form c(x,y)=h(x-y). For the moon and spiral datasets, we require costs of the form c(x,y) = h(Φ_μ(x) - Φ_ν(y)). §.§ Aligning OT maps to Live-seq trajectories OT for cell profiling. Tracing individual cell transcriptome trajectories is an important problem in biological applications, and aims to identify and predict responses of cells to biological processes or external treatments. Most profiling techniques result in the destruction of the cell. The resulting data therefore consist only of individual `snapshots' of the overall cell population without alignment. In order to infer how individual cells have developed between these snapshots, optimal transport methods can be used as an inductive bias to align the observed distributions, as it is assumed that the cell population has moved `efficiently' in the time between observations <cit.>. Such methods typically use the squared-Euclidean cost, but it is unclear whether this is a suitable notion of distance for gene-expression data. Live-seq. Recently, <cit.> propose the Live-seq profiling technique, which conducts genetic profiling using only a small extract of cytoplasm from the cell. This does not require the destruction of the cell and has minimal effect on its development, therefore enabling individual cell trajectories to be observed. As only cells which provide viable samples at both timesteps can be traced, the number of individual cell trajectories that are observed is far smaller than the number of observations in each cell population. <cit.> observe the trajectories of 12 individual cells before and after treatment, along with many more unpaired samples. It is therefore desirable to use this observed data to learn an improved cost function and corresponding OT map, which can then be used in downstream tasks such as trajectory prediction or lineage tracing. Inverse OT methods could be used to learn from only the paired samples, but they are unable to make use of the unpaired majority in the cell populations. We restrict our attention to the adipose stem and progenitor cell (ASPC) populations before and after cell-differentiation, as these contain the majority of observed cell trajectories. Our source and target distributions therefore consist of 121 and 72 cell measurements respectively, including 9 trajectory pairings tracing the same cell. Experiment details. We train a Monge map estimator for a symmetric cost of form h(Φ(x) - Φ(y)) to agree with these observed trajectories, using the first 10 principal components. We use a relative epsilon value of 0.01, enforce 0.01-strong convexity of h, and use an ICNN with hidden dimensions [64,64,64]. We train both the ICNNs and flows using the Adam optimizer with learning rates 3e-3 and 1e-3 respectively, and train for 1000 iterations. For comparison, we consider optimizing the coupling matrix rather than the mapping. To ensure a fair comparison, we first consider using the same cost parameterization h(Φ(x) - Φ(y)), and we use the entropic mapping estimator to obtain a Monge map estimator. We also compare to learning a general unstructured cost c(x,y) parameterized by an MLP. To obtain a map estimator for this cost, we use the Monge gap regularizer <cit.>. When optimizing the coupling matrix, we maximize the total `correct mass' according to the known trajectories. That is, we use the loss function (θ) = - ⟨ M, π_θ^⟩, M_ij = 1 if x_i is known to map to y_j 0 otherwise. In Figure <ref>, we compare the predicted trajectories of the resulting mapping estimators to see whether they are indeed consistent with the known ground-truth trajectories. In Table <ref> we report the RMSE of the predictions of the known points, and also the Sinkhorn divergence between the predicted points and the target distribution, averaged over 5 initializations. We also report the total incorrectly transported mass in the resulting entropy-regularized coupling matrices, defined as ⟨ M, π_θ^⟩, M_ij = 1 if x_i is known to map to y_j' for j ≠ j', or x_i' is known to map to y_j for i ≠ i' 0 otherwise. For a good learned cost, this value should be low as the entropic transport plan will transport a large amount of the mass for a point x_i to its known endpoint y_j. Note that as the coupling matrix has the correct marginals, minimizing this quantity is equivalent to the objective used when optimizing according to the matrix. Results. By optimizing the map directly, we are able to learn a mapping that aligns with the known pairings (Figure <ref>), which should be expected given that this was the objective used during training. The incorrectly transported mass in Table <ref> is also low, indicating that we have also learned a good cost function which places mass along the known trajectory pairings. In contrast, the maps obtained by optimizing the matrix do not align well with the ground-truth. For costs h(Φ(x) - Φ(y)) optimizing the matrix often struggles to learn a good cost, as evidenced by large amounts of incorrectly transport mass in Table <ref>. Some of the points for the cost h(Φ(x) - Φ(y)) are approximately correct, indicating that the learned cost has placed the mass correctly along some rows of the coupling matrix but has converged to a local minimum. The general MLP c(x,y) learns a cost that places mass along the correct trajectories, which is unsurprising given the flexibility of such a cost parameterization. However, as the cost is unstructured it is difficult to interpret. Moreover, the unstructured cost means that it is difficult to obtain an estimator for the OT map, and consequently the map obtained using the Monge Gap regularizer does not align well with the Live-seq trajectories from which the cost was learned. § ADDITIONAL EXPERIMENTS §.§ Synthetic Limited Labelled Pairs We validate the findings from the Live-seq experiment by performing the same procedure on similar synthetic datasets, which enables us to evaluate performance on newly sampled points. We also investigate the effect of increasing the number of paired points on the out-of-sample performance of the learned mapping. Data generation. We generate the source and target datasets by pushing samples from a = Unif([0,1]^d) distribution through two respective randomly generated diffeomorphisms Φ_1, Φ_2, so we have μ = Φ_1#, ν = Φ_2#. This corresponds to a continuous transformation from the source to the target via the composition of the diffeomorphisms, ν = (Φ_2 ∘Φ_1^-1) #μ. Note that while it is likely not the case that such mappings constitute an optimal transport map, such transformations provide data distributions that resemble the Live-seq data. In the experiments, the datasets consist of an unpaired majority (i.e. most are generated by independently sampling from and pushing through the corresponding mapping). A subset of the datasets are paired, meaning that y = Φ_2 ∘Φ_1^-1(x). Such pairs simulate the known trajectories in the Live-seq data. Experimental details. We perform the procedure in 2, 4 and 6 dimensions, with 32, 128 and 256 source samples and 40, 160 and 320 target samples respectively. A subset of these datasets are known pairings. We use the same experimental setup as for the Live-seq data. In Figure <ref>, we report the incorrectly transported mass in the entropy-regularized coupling matrix for the training data as previously, as well as the RMSE error for the observed paired points. We also generate unseen test samples μ, ν consisting of fully paired points, and report the Sinkhorn divergence S_(T̂#μ, ν) and the RMSE of the resulting mapping estimator T̂ for these newly-sampled points. The results are averaged over 5 different randomly-generated data distributions. Results. The results are consistent with the observations in the Live-seq experiment. Optimizing the mapping appears to result in good learned costs. The amount of mass placed along incorrect directions in the coupling matrix is generally similar to those learned by optimizing the matrix, which are in fact optimized to minimize this objective. The OT map estimator obtained from optimizing the mapping shows significantly better alignment with the known pairings, which is as expected given that is the objective being optimized. The resulting map estimator also provides better out-of-sample performance on the newly-sampled test points. In contrast, those learned from optimizing the matrix are much less consistent with the known ground-truths and generally perform worse on the out-of-sample points. Optimizing the mapping also appears to result in a mapping giving a lower Sinkhorn divergence when transporting the newly sampled points, demonstrating an improved fit to the target at a distributional level. We also remark that the entropic mapping obtained from the cost h(Φ(x) - Φ(y)) learned by optimizing the matrix occasionally failed to give an appropriate mapping (giving very large Sinkhorn divergences between the mapped points and the target distribution). This is presumably because of a poor choice of when constructing the estimator. We disregard such results when calculating the averages in Table <ref>. In contrast, optimizing the map directly ensured that the final entropic mapping was always reasonable. §.§ Inducing properties on the transport map displacements As we are optimizing according to the OT mapping estimator itself, we can also optimize to encourage properties we wish to hold on the resulting displacements themselves. This allows us to leverage knowledge about the structure of the mapping as an inductive bias when learning adapted cost functions and OT map estimators. We demonstrate the ability to induce low-rank and 2-directional displacements by training according to the loss functions proposed in Appendix <ref>, with an addition Sinkhorn divergence term S_ϵ(T̂#μ, ν) to ensure a good fit to the target distribution. We train the map estimators using 3-dimensional empirical measures, each consisting of 128 datapoints, and use a symmetric cost of the form c(x,y)=h(x-y) with α=0.01 and an ICNN with hidden dimensions [64,64]. Figure <ref> plots the learned mappings applied to 128 newly sampled points, again with the squared-Euclidean entropic map estimator as a comparison. We see that the learned mappings display the desired structural properties. § PROOFS * Fix x ∈^d. As h is strictly convex, so is the function g_x(z) = h(z) - z, x. Denote the unique minimizer of g_x(z) by z^*(x), which is uniquely determined by the first-order optimality condition, ∇ g_x(z^*(x)) = ∇ h (z^*(x)) - x = 0. Rearranging and inverting ∇ h, we obtain (∇ h)^-1(x) = z^*(x) as required. * Define the push-forward measures μ=Φ_μ#μ, ν=Φ_ν#ν and consider the following two Kantorovich problems, denoting the original problem (K) and a transformed version (K). _π∈Γ(μ,ν)∬_^d ×^d h(Φ_μ(x) - Φ_ν(y)) π(x,y) K _π∈Γ(μ,ν)∬_^d ×^d h(x-y) π(x,y) K We can construct a mapping F: Γ(μ,ν) →Γ(μ,ν) between the respective sets of admissible transport plans defined as π↦π = (Φ_μ⊗Φ_ν) #π. The fact that the coupling π has the correct marginals μ, ν is a consequence of the definition of push-forward; for a test function φ∈^∞(^d), we have ∬_^d ×^dφ(x) π(x,y) = ∬_^d ×^dφ(x) (Φ_μ⊗Φ_ν) #π(x,y) = ∬_^d ×^dφ(Φ_μ(x)) π(x,y) = ∫_^dφ(Φ_μ(x)) μ(x) = ∫_^dφ(x) μ(x). This shows that π has correct first marginal μ and it can be shown similarly that the second marginal is ν, confirming that π∈Γ(μ,ν). Consider too the mapping G given by π↦π = (Φ_μ^-1⊗Φ_ν^-1) #π, which defines a map from Γ(μ,ν) to Γ(μ,ν). For a test function φ∈^∞(^d ×^d), ∬_^d ×^dφ(x, y) (Φ_μ^-1⊗Φ_ν^-1) # (Φ_μ⊗Φ_ν) #π (x,y) = ∬_^d ×^dφ(Φ_μ^-1(x), Φ_ν^-1(y)) (Φ_μ⊗Φ_ν) #π (x,y) = ∬_^d ×^dφ((Φ_μ∘Φ_μ^-1)(x), (Φ_ν∘Φ_ν^-1)(y)) π (x,y) = ∬_^d ×^dφ(x, y) π (x,y). This shows that G ∘ F = Id, and similarly we can show that F ∘ G = Id. We thus conclude G=F^-1, and that F is a bijection between Γ(μ,ν) and Γ(μ,ν). Define I_K and I_K to be the respective infimums for the Kantorovich problems (K) and (K). For any π∈Γ(μ,ν), the above shows that π = F(π) is an admissible transport plan for (K) and thus I_K≤∬_^d ×^d h(x-y) π(x,y) = ∬_^d ×^d h(Φ_μ(x) - Φ_ν(y)) π(x,y). Note that as Φ_μ, Φ_ν are diffeomorphisms, the transformed Kantorovich problem (K) satisfies the conditions of Theorem <ref> so has a unique solution π^⋆. Letting π̂ = F^-1(π^⋆), we have ∬_^d ×^d h(Φ_μ(x) - Φ_ν(y)) π̂(x,y) = ∬_^d ×^d h(x-y) π^⋆(x,y) = I_K. We see that π̂ attains the lower bound in (<ref>), and is therefore an optimal plan for the original Kantorovich problem (K). For uniqueness, note that if there are two such optimal plans π_1^⋆, π_1^⋆∈Γ(μ,ν), then (again using the definition of pushforwards) we have that both F(π_1^⋆), F(π_2^⋆) attain the infimum I_K. By uniqueness of π^⋆, we must have F(π_1^⋆) = F(π_2^⋆) = π^⋆, and inverting F we see π_1^⋆ = π_2^⋆ as required. The structure of the optimal plan π^⋆ follows from that of π^⋆. Recall from Theorem <ref> that π^⋆ is of the form (Id,T^⋆) where T^⋆ = Id - (∇ h)^-1∘ (∇ f^⋆) for the Kantorovich potential f^⋆. Applying the above result, we therefore see that the optimal plan π^⋆ for our original Kantorovich problem (K) is also of the form (Id, T^⋆). The map T^⋆ is now given by T^⋆(x) = Φ_ν^-1[ Φ_μ(x) - (∇ h)^-1∘∇ f^⋆∘Φ_μ (x) ], where f^⋆ is the Kantorovich potential for the transformed Kantorovich problem (K).
http://arxiv.org/abs/2406.08212v1
20240612134301
Designing metasurface optical interfaces for solid-state qubits using many-body adjoint shape optimization
[ "Amelia R. Klein", "Nader Engheta", "Lee C. Bassett" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall", "quant-ph" ]
Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia PA 19104, USA Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia PA 19104, USA lbassett@seas.upenn.edu Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia PA 19104, USA § ABSTRACT We present a general strategy for the inverse design of metasurfaces composed of elementary shapes. We use it to design a structure that collects and collimates light from nitrogen-vacancy centers in diamond. Such metasurfaces constitute scalable optical interfaces for solid-state qubits, enabling efficient photon coupling into optical fibers and eliminating free-space collection optics. The many-body shape optimization strategy is a practical alternative to topology optimization that explicitly enforces material and fabrication constraints throughout the optimization, while still achieving high performance. The metasurface is easily adaptable to other solid-state qubits, and the optimization method is broadly applicable to fabrication-constrained photonic design problems. Designing metasurface optical interfaces for solid-state qubits using many-body adjoint shape optimization Lee C. Bassett Received 09 Mar 2024 / Accepted 27 May 2024 ========================================================================================================== § INTRODUCTION Solid-state quantum defects — exemplified by the nitrogen-vacancy (NV) center in diamond—have emerged at the forefront of quantum technologies in recent years, seeing applications in core research areas of quantum computing <cit.>, communication <cit.>, and sensing <cit.>. Measurements are performed optically, with the defect's spin state being encoded in its single-photon photoluminescence. As a result, efficient photon capture is essential, with readout fidelity, sensitivity, and operational speed being limited by optical losses. One fundamental source of optical loss is diamond's high refractive index at optical wavelengths (around 2.4), which causes the majority of emitted photons to be trapped at the flat air-diamond interface due to total internal reflection. Past works have addressed this challenge by embedding NV centers in nanophotonic structures such as waveguides, nanopillars, and photonic crystal cavities<cit.>; such structures in the near-field overcome total internal reflection and increase the photon emission rate through Purcell enhancement. However, near-field structures also situate NV centers near diamond surfaces, which conversely degrade their crucial spin and optical features <cit.>. For applications such as quantum memories in which maximizing the spin coherence time is paramount, it is necessary to use NV centers embedded deeper in the bulk diamond and to only modify the far-field propagation of emitted photons. The prototypical structure used in this scenario is the solid immersion lens (SIL), which consists of a hemisphere centered around a defect, such that any emitted photon will be incident normal to the air-diamond interface and not experience total internal reflection. These hemispheres are typically etched into the diamond <cit.> but can also be created with additive fabrication methods <cit.>. However, SILs require a difficult and time-consuming volumetric fabrication process, and they do not adjust the propagation direction of emitted light, so high numerical aperture (NA) free-space microscope objectives are still required to collimate the diverging beam. In contrast, flat optics etched on the diamond surface can both transmit and redirect incident photons <cit.>. Flat optics are easier and cheaper to fabricate in diamond than volumetric structures like SILs, making them an attractive option for scaled-up quantum computation or quantum communication systems. In addition, by simultaneously collimating the emitted photons, a flat optic can circumvent the need for expensive, bulky, free-space optics and ultimately allow the photons to be collected by an optical fiber in a compact, robust, inexpensive device. In this work, we use inverse design methods to design a monolithic metasurface in diamond tailored to maximize the directional collection of photons emitted from an NV center. In contrast to other flat optics that are designed by local approximations, inverse design techniques capture the complete optical response of a metasurface geometry through full-wave simulations and target a figure of merit that represents explicitly the experimentally-relevant metric — in this case, the total photon collection efficiency from an NV center, averaged over multiple optical dipole orientations as well as the broadband photoluminescence spectrum. § MANY-BODY SHAPE OPTIMIZATION §.§ Background We utilize an inverse design method based on the adjoint method, which has been utilized extensively in the photonics field in recent years for applications varying in complexity from simple components to arbitrary equation solvers <cit.>. With just two physics simulations — a typical “forward" simulation of optical propagation and an “adjoint" simulation that approximately represents the desired outcome sent backwards — an arbitrary figure of merit and its gradient can be calculated with respect to any number of design parameters. This can then be wrapped into a gradient-descent optimization algorithm to arrive at an optimized structure. Adjoint-based optimizations are categorized based on their parameterization method as either topology optimization or shape optimization. Most adjoint-based methods in recent years have made use of topology optimization, which treats the relative permittivity ϵ at each point in the optimization region as a free continuous parameter in order to explore a maximally broad parameter space. Typically, binarization and smoothing filters are applied gradually over the course of the optimization to ensure a realistic final structure made of discrete materials (e.g., using only two values of ϵ) with smooth features <cit.>. Previous authors have used topology optimization to design optical metasurfaces based on patterned thin films <cit.> and nanophotonic structures using diamond membranes <cit.>. Topology optimization has also been utilized to design near-field antenna-like structures to extract photons from shallow NV centers <cit.>. In contrast, shape optimization considers a fixed number of objects, whose boundaries are parameterized according to their size, position, orientation, etc,. Shape optimizations have been performed on individual objects to design structures such as Y-splitters <cit.> and to generate unit cell libraries for larger metasurfaces <cit.>. Other works have performed shape optimizations on periodically ordered arrays of objects <cit.> and for disordered photonic crystals <cit.>. §.§ Implementation In this work, we parameterize the metasurface as an array of elliptical diamond nanopillars and perform a many-body shape optimization (MBSO) over the entire set. Compared to unit-cell-based designs, we explore a broader parameter space by allowing the positions of the nanopillars, in addition to their sizes, ellipticity, and orientation, to be free parameters in the optimization. Each nanopillar P_i is defined by a fixed height and an elliptical cross-section described by five optimization parameters P_i(x_i, y_i, a_i, b_i, ϕ_i): two center coordinates, two axial lengths, and one rotation angle. Nanopillar heights are fixed uniformly at a predefined value such that the final surface could be fabricated with a single etch. Figures <ref>a and <ref>b show an optimized metasurface along with these shape-optimization parameters. The fabrication of diamond nanopillar arrays is well established. They are used, for example, in quantum sensing applications in which NV centers are embedded directly in the nanopillars <cit.>. Previous work has also shown how diamond nanopillar arrays can function as a metalens to collect light from deep NV centers <cit.>, motivating the present work. The device efficiency and NA of the metalens in Ref. <cit.> was limited by the classical design strategy, which treated each pillar as a localized phase-shifting element. The explicit formulation of the metasurface as an array of nanopillars ensures a fabricable design at each iteration of the optimization process. Fabrication constraints are enforced through lower bounds on the axial lengths to ensure a minimum pillar size as well as differentiable inequality constraint functions between nearest-neighbor pillars to ensure a minimum separation, d: c_ij(P_i, P_j) = √((x_i-x_j)^2 + (y_i - y_j)^2) - r_i - r_j ≥ d Since there is no analytical expression for the distance between ellipses, the elliptical nanopillars are instead conservatively approximated as circles with r_i= max(a_i, b_i) for the constraint calculations. To perform the optimizations, we use these constraint functions along with adjoint-calculated gradients in an open-source implementation of the Method of Moving Asymptotes algorithm <cit.>. §.§ Figure of Merit We construct the figure of merit for adjoint optimization to maximize the percentage of emitted photons that escape the diamond surface into a desired collection angle. We consider the full NV emission spectrum, which at room temperature is dominated by a wide phonon sideband spanning ≈650-750 nm. At a given collection plane above the NV surface, the propagation direction of the forward fields can be calculated by taking a spatial Fourier transform of the electric and magnetic fields. The angular acceptance constraint is equivalent to a circular region in k-space given by: R = {k_x, k_y: √(k_x^2 + k_y^2)/k_0 ≤NA_target} where k_x and k_y are the x and y wavevector components, k_0 = 2π/λ_0, and NA_target is the numerical aperture corresponding to the desired collection angle. For a given dipole orientation, i, the directional transmission into NA_target is given by integrating the time-averaged Poynting vector over this region in k-space: f_i(λ) = 1/P_dipole∬_R1/2Re(ℱ[𝐄_i]×ℱ[𝐇_i^*])·ẑ dk_xdk_y where P_dipole is the total power injected into the simulation and ℱ[𝐄_i] and ℱ[𝐇_i^*] are the spatial Fourier-transformed electric and magnetic fields on a plane above the diamond surface. R in this case is the region defined in Eqn. <ref> but could be any arbitrary region in k-space. The total figure of merit (FOM) is subsequently constructed as follows: FOM = 1/2∑_i=1,2∫ f_i(λ)η_NV(λ)dλ where η_NV(λ) is the normalized room-temperature NV emission spectrum, and i=1,2 represent two dipole orientations that are mutually orthogonal to each other and to the NV-axis. We consider a diamond with a top surface aligned with a (100) crystal plane, which is typical of synthetic diamond plates used in NV-center experiments. The NV center itself is oriented along a ⟨111⟩ axis, and its emission pattern results from an incoherent sum of two optical dipoles orthogonal to each other and to the NV axis, as illustrated in Fig. <ref>c. Less commonly, diamond can be grown <cit.> or cleaved <cit.> to create a (111) top surface. The (111)-oriented diamond hosts NV centers whose optical dipoles are oriented parallel to the surface and subsequently achieves higher efficiency. The optimization process can be easily performed under the assumption of (111) diamond; see Supplement 1 for a comparison of the final results. The source for the adjoint simulation is derived from the figure of merit, and the gradients are calculated by integrating the forward and adjoint fields over the nanopillar surfaces according Eqns. (S1-S5) in the Supplement; see also Ref. <cit.>. The simulations required to perform this optimization are depicted in Fig. <ref>c. In order to simultaneously optimize over a broad frequency range, we perform simulations in the time domain using Lumerical FDTD <cit.>. The optimization procedure is implemented using the LumOpt package <cit.> as a base for additional custom Python code, which implements the geometry construction, figure of merit and gradient calculations, and optimization algorithm wrapper (See Code 1, Ref. <cit.>). § RESULTS §.§ Performance We performed a many-body shape optimization to design a collimating collection metasurface for a NV center. We targeted a free-space collection numerical aperture of 0.2, roughly corresponding to a typical multi-mode fiber that could be placed above a metasurface. The design targeted an NV center centered 1 µm beneath the metasurface. The height of the nanopillars composing the metasurface was fixed to 750 nm, and the total design area was 5 µm × 5 µm. We enforced a minimum pillar diameter and minimum gap size d of 50 nm, using the formulation in Eqn. <ref> for the latter. The resulting structure is depicted in Fig. <ref>a-b, and its performance is characterized in Fig. <ref>. In order to benchmark device performance, we calculate the percentage of total photons emitted by an NV center that are collected into the desired free-space NA. This total dipole transmission efficiency represents the most important performance metric, since it determines the photon count rate and the spin-readout signal-to-noise ratio. It can also be easily compared across different configurations of diamond surface orientation, NV depth, and metasurface size. For the simulated geometry, only 31.4% of emitted photons are incident on the surface, and hence the right axes in Fig. <ref> are normalized by this factor to show the metasurface coupling efficiency. The optimized structure exhibits a directional transmission efficiency (within NA_target = 0.2 and collectible by a multi-mode fiber) of 3.61%, corresponding to a normalized coupling efficiency of 11.5%. The maximum transmission efficiency (within NA = 1 and requiring a high-NA microscope objective) is 6.05%, so the majority of outcoupled light is collimated into the targeted NA, despite NA_target representing only 4% of possible propagation directions. For comparison, we performed a topology optimization using the same objective function. We obtained a directional transmission efficiency of 3.80% and a maximum transmission efficiency of 8.97%. The topology-optimized structure has similar performance to the shape-optimized metasurface for the target NA, however it does transmit more of the light in total. While the SIL's maximum transmission efficiency (24.1%) is much higher, both the topology- and shape-optimized structures offer a nearly fourfold enhancement over the SIL's directional transmission efficiency of 1.03%. The performance of all three structures is described further in the supplementary information and plotted in Fig. S2. In Figure <ref>, we show the simulated cross-sectional and collection plane field profiles as an incoherent weighted average over the dipole orientations and wavelengths. The metasurface collimating effect is clearly visible in the real-space field profile of Fig. <ref>a and in the Fourier-space fields in Fig. <ref>c. §.§ Effect of Initial Conditions While the optimization process, by and large, determines the optimal parameters automatically, two key parameters are pre-selected and not variable during the optimization process. First, the nanopillar height was chosen to be a single, uniform value of 750 nm. In assuming a uniform height, we ensure that the final structure can be fabricated in a single etch step, but there is no technical reason why the pillar height could not be an additional optimization parameter. The second pre-selected parameter was the initial spacing of nanopillars, which determines the total number of nanopillars present in the optimization. Although the nanopillars are allowed to move freely during the optimization process, the initial condition for the structure discussed earlier was a regular grid of nanopillars with a spacing of 300 nm. Over a 5 µm x 5 µm area, this corresponds to a total of 256 nanopillars, or 1280 optimization parameters. No nanopillars were added or removed over the course of the optimization in order to maintain a fixed number of optimization parameters. In principle, the optimization procedure could be adapted to allow for the addition or removal of nanopillars between optimization phases. In order to determine the effect of nanopillar height and spacing, we performed several additional optimizations varying these pre-selected values. The results are quantified in Fig. <ref>, and the final geometries from each optimization are shown in Figs. S3-S4 in the supplementary information. In order to account for variations in convergence, we performed five optimizations for each initial condition in order to quantify the range of potential outcomes. The five optimizations used identical starting grids but varied the initial axial lengths of the nanopillars. In one optimization, each nanopillar was initialized with a 100-nm-diameter circular cross-section; for the other four, the axial lengths were initially set to random values between 25 nm and 125 nm. We did not observe any trend as to whether the uniform or randomized starting axial lengths led to better-performing final structures. From Fig. <ref>a, we observe that the metasurface performance tends to improve with increasing nanopillar height (the pitch was fixed at 300 nm). The simulations for 750 nm and 1 µm tall nanopillars showed similar average performance; we chose 750-nm-tall nanopillars for subsequent optimizations due to their higher maximum performance and their shorter height being easier to fabricate accurately. Using in each case 750-nm-tall pillars, Fig. <ref>b shows the performance as a function of initial pillar pitch. We observe comparable average performance with an initial nanopillar pitch of 250, 300, and 350 nm, but significantly worse performance at 200 nm and 400 nm. The 400 nm pitch has the fewest total nanopillars in the optimization region (144), and the performance was likely poor here due insufficient degrees of freedom. Conversely, for an initial pitch of 200 nm, the nanopillar density was likely too high to allow room for the nanopillar positions or sizes to change without violating the constraint functions. Amongst the 250 nm, 300 nm, and 350 nm batches, there is a noticeable trend that a larger number of pillars (and therefore more optimization parameters) leads to a larger deviation in performance as a result of different initial conditions, albeit with similar average performance. §.§ Tolerance to Fabrication Errors We simulated the effects of potential fabrication errors and plotted the results in Fig. <ref>. The performance losses are expressed in units of decibels (dB) compared to the performance as designed. Figures <ref>a and <ref>b show the performance loss due to lateral and vertical misalignment, respectively. The loss is calculated by performing many separate FDTD simulations with the emitter placed at varying positions. For lateral misalignments, the 3 dB loss contour occurs for displacements around 470 nm. The structure shows similar sensitivity to displacements in the emitter depth. With the design targeted to a depth of 1 µm beneath the metasurface, we predict a loss within 3 dB with the emitter instead placed as shallow as 0.35 µm or as deep as 1.35 µm. Lithographic fabrication methods can be used to align the metasurface over a chosen NV center well within these fabrication tolerances — previous work has successfully aligned solid immersion lenses onto pre-selected NV centers of a particular depth with lateral accuracy less than 100 nm and higher accuracy is possible <cit.>. If desired, the alignment sensitivity could be further decreased by explicitly incorporating it into the optimization process by co-optimizing the performance from multiple emitter locations. We also simulate the effect of common errors in the etching process. Fig. <ref>c accounts for the effects of over- or under-etching by universally perturbing the axial lengths on each nanopillar in the metastructure. Interestingly, the structure actually performs slightly better with slightly larger (under-etched) nanopillars than those given by the final optimization, with peak performance around Δ r = +6 nm. This likely occurs due to the constraints placed on nanopillar spacing described in Eqn. <ref>. In the optimized structure, 35 pairs of nanopillars are within 1 nm of the minimum separation, and the optimization would likely have converged on a solution with smaller spacings if the constraints were not enforced. In any case, the 3 dB point occurs at perturbed axial lengths Δ r of +44 nm and -26 nm for under- and over-etched nanopillars, respectively. Finally, in Fig. <ref>d, we simulate the effect on performance if the nanopillar sidewalls are not etched with perfectly vertical, 90-degree angles. Similarly to the over/under-etch analysis, the maximum performance occurs for a sidewall angle of 92.0 degrees, corresponding to nanopillars that are effectively somewhat larger than designed. The 3 dB points for sidewall angles are at 85.8 and 103.4 degrees. § DISCUSSION AND OUTLOOK The goal of the optimized structure is to maximize photon collection from a 1-µm-deep NV center directly into an optical fiber, obviating the need for high-NA free-space optics. We are not aware of any superior solution for this task. We simulate an overall transmission efficiency of 3.61% (including all emitted broadband photons assuming a (100) diamond surface) into the collection aperture of a multi-mode fiber. This is a factor of 3.5 improvement over a fiber-coupled SIL or a factor of 26 improvement over a fiber-coupled flat air-diamond interface; a SIL can only reach its peak efficiency when combined with a high-NA optic and remains the best-performing option in that case. To predict the device performance in an experimental setting, we compare to Ref. <cit.>, in which the authors measured a saturation photon count rate of 87.3 ± 2.8 kcps (kilocounts per second) from an NV center imaged through a similar metasurface coupled to a multi-mode fiber. In simulations, the optimized metasurface outperforms that structure by roughly 60%, and we would expect to obtain a photon count rate around 150 kcps under otherwise identical experimental conditions and with similar losses from fabrication errors. Additionally, the metasurface achieves this greater performance while being much more compact (5 µm vs 28 µm diameter) and having a shorter depth of focus (1 µm vs 20 µm), which allows for easier alignment and fabrication. The many-body shape optimization methodology we present achieves comparable performance to the more common adjoint-based technique of topology optimization. While topology optimization enforces its fabrication constraints by subsequently applying filters over multiple optimization phases, MBSO allows for constraint functions to be defined explicitly from the parameterization and applied throughout a single-phase optimization. As a result, the shape-optimized structure converges much faster, using less than half as many sets of simulations as a topology-optimized structure (200 vs 471) — the computational requirements of both types of adjoint optimization are primarily limited by the number of simulations required. The shape-optimized structure also required an order of magnitude fewer parameters than the topology-optimized structure (1280 vs 40,401); although this did not strongly impact the computational performance in this case, it could result in significantly faster gradient calculations for larger-scale metasurfaces. As an adjoint-based optimization strategy, MBSO's scalability to larger geometrical areas is primarily limited by the computational requirements in memory and the time to perform the full-wave simulations at each iteration. Several techniques may be used to increase the viability of large-scale optimizations. Computational advancements such as GPU acceleration can enable larger-scale full-wave simulations <cit.>. Past authors have performed shape optimizations on structures an order of magnitude larger than the present work by applying symmetrical boundary conditions and using rectangular elements aligned to the FDTD mesh <cit.> or by using Maxwell solvers based on boundary integral methods <cit.> — both strategies are particularly well-suited to shape optimizations. Other works have avoided full-wave simulations by using local periodic approximations <cit.> and coupled mode theory <cit.> in limited regimes or deep learning methods to predict forward and inverse problems <cit.>. The MBSO parameterization method could easily be adapted into any of these alternative optimization strategies. The MBSO methodology is readily applicable to the inverse design a variety of structures, especially when there is a need for specific shapes or distinct elements. Broadly, it can be used in lieu of topology optimization for any inverse design problem. In particular, MBSO is naturally suited for the inverse design of photonic structures such as photonic crystals, large-area metasurfaces, and diffraction gratings that are typically composed of arrays of simple shapes. Additionally, MBSO could be useful for designing reconfigurable metasurfaces — which have potential applications including imaging, beam steering for LiDAR, and displays — such that the design space is limited to a fixed number of meta-atoms that can be independently addressed by some tuning mechanism <cit.>. For certain materials, it may be challenging to fabricate arbitrary shapes due to facet-selective etching or growth. In these instances, MBSO could be easily adapted to keep any edges aligned to particular crystal facets in order to optimize structures exactly how they would be fabricated, potentially enabling the inverse design of photonic structures onto new material platforms. This work was supported by the National Science Foundation under awards ECCS-1842655 (A.R.K., L.C.B.) and ECCS-2129183 (A.R.K., L.C.B.), by the National Science Foundation Graduate Research Fellowship Program, Grant No. DGE-1845298 (A.R.K.), and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative under grant FA9550-21-1-0312 (N.E.). The authors thank Brian Edwards for fruitful conversations and Mathieu Ouellet for assistance with the three-dimensional image in Fig. 1(a).
http://arxiv.org/abs/2406.08274v1
20240612143938
The Camera and Readout for the Trinity Demonstrator and the EUSO-SPB2 Cherenkov Telescope
[ "Mahdi Bagheri", "Srikar Gadamsetty", "Eliza Gazda", "Eleanor Judd", "Evgeny Kuznetsov", "A. Nepomuk Otte", "Mathew Potts", "Oscar Romero Matamala", "Noah Shapera", "Joshua Sorell", "Svanik Tandon", "Andrew Wang" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.HE", "hep-ex", "physics.ins-det" ]
a]Mahdi Bagheri a]Srikar Gadamsetty a]Eliza Gazdacor1 elizagazda@gatech.edu c]Eleanor Judd b]Evgeny Kuznetsov a]A. Nepomuk Otte a]Mathew Potts a]Oscar Romero Matamalacor1 oromero@gatech.edu a]Noah Shapera a]Joshua Sorell a]Svanik Tandon a]Andrew Wang [a] organization=Georgia Institute of Technology, School of Physics, Center for Relativistic Astrophysics, addressline=837 State Street NW, city=Atlanta, state=GA, postcode=30332-0430, country=U.S.A. [b] organization=University of Alabama in Huntsville, Center for Space Plasma and Aeronomic Research, addressline=NSSTC, CSPAR, 320 Sparkman Drive, city=Huntsville, state=AL, postcode=35805, country=U.S.A. [c] organization=University of California at Berkeley, Space Sciences Laboratory, addressline=7 Gauss Way, city=Berkeley, state=CA, postcode=94720, country=U.S.A [cor1]Corresponding Authors § ABSTRACT We developed a modular silicon photomultiplier camera to detect Earth-skimming PeV to EeV tau neutrinos with the imaging atmospheric Cherenkov technique. We built two cameras, a 256-pixel camera with S14161-6050HS SiPMs for the Trinity Demonstrator located on Frisco Peak, Utah, and a 512-pixel camera with S14521-6050AN SiPMs for the EUSO-SPB2 Cherenkov Telescope. The front-end electronics are based on the eMUSIC ASIC, and the camera signals are sampled and digitized with the 100 MS/s and 12-bit AGET system. Both cameras are liquid-cooled. We detail the camera concept and the results from characterizing the SiPMs, bench testing, and calibrating the two cameras. Cherenkov telescope neutrinos silicon photomultipliers camera cosmic rays air-shower imaging Earth-skimming § INTRODUCTION The neutrino sky at very-high (VHE, >PeV) and ultra-high (UHE, >EeV) energies is still dark. However, IceCube's transformational detection of diffuse astrophysical neutrinos <cit.>, evidence of two neutrino point sources TXS 0506+056 <cit.> and NGC 1068 <cit.>, and the galactic plane <cit.> at high energies (>1TeV) highlight the tremendous potential that neutrinos offer to gain fundamental new insight into the non-thermal universe. Extending neutrino observations to higher energies and opening the VHE/UHE neutrino band will provide us with a unique view of the most extreme cosmic particle accelerators, help us understand cosmic-ray propagation and the evolution of the universe, and allow us to study fundamental neutrino physics and probe new physics beyond the standard model of particle physics at the highest possible energies <cit.>. Detecting neutrinos is challenging because interaction cross-sections are extremely small. The much lower neutrino fluxes in the VHE/UHE band compared to the HE band exasperate the problem. Overcoming these challenges requires instruments with orders of magnitude larger detector volumes than IceCube's. Of the different proposed detection techniques <cit.>, we pursue detecting Earth-skimming tau neutrinos with the imaging atmospheric Cherenkov technique <cit.>. The Earth-skimming technique is sensitive to >PeV tau neutrinos that enter the Earth under a small (<10^∘) angle, undergo charged current interaction, and produce a tau lepton. The tau continues on the trajectory of the neutrino, emerges from the Earth, and decays, starting a massive shower of mostly Cherenkov-light emitting electrons and positrons. A Cherenkov telescope captures some of the light and generates an image of the air shower onto a pixelated focal plane, the camera. This paper discusses the cameras we have developed for the VHE/UHE neutrino instruments the Trinity Demonstrator and the EUSO-SPB2 Cherenkov telescope. Trinity is a proposed system of 18 Cherenkov Telescopes <cit.> on mountaintops. Its first development stage is the Trinity Demonstrator, which we deployed on Frisco Peak, UT, in the Summer of 2023 instrumented with the camera described here. The Trinity Demonstrator is a one-square meter class Cherenkov telescope with Davis Cotton optics and a 5^∘× 5^∘ field of view (FoV). The EUSO-SPB2 long-duration balloon mission is a precursor to the proposed POEMMA mission that aims to detect neutrinos from space by looking at the Earth's limb <cit.>. The EUSO-SPB2 balloon flew in the Spring of 2023, and we observed the Earth limb with the Cherenkov telescope for two nights from a float altitude of ∼35 km before it crashed into the Pacific due to a leaking balloon <cit.>. The Cherenkov telescope on EUSO-SPB2 was a modified Schmidt optics with a 0.8 m^2 light-collection surface and a 6.4^∘× 12.8^∘ FoV instrumented with the camera described here. We start the paper with a discussion of camera design considerations in Section <ref> followed by a description of the modular architecture of the camera in Section <ref>. A description of the main components of the camera and the readout is given in Section <ref>. The cooling system and thermal vacuum testing are described in Section <ref>. The characterization of the photon detectors are detailed in Section <ref> and the signal chain in Section <ref>. The flatfielding of the camera response is discussed in Section <ref> and the current monitor in Section <ref>. § DESIGN CONSIDERATIONS The purpose of a Cherenkov-telescope camera is to detect the faint and fast transient flashes of Cherenkov light emitted by the electrons and positrons in an air shower. A typical Cherenkov flash lasts between a few nanoseconds and a few hundred nanoseconds, depending on the viewing angle of the air shower relative to the air-shower axis. A Cherenkov telescope camera needs to typically detect at least 100 photons (photoelectrons) to guarantee a reliable event reconstruction <cit.>. Furthermore, the few Cherenkov photons must be distinguished from the fluctuations of the ambient light, which is called the night-sky background (NSB) <cit.>. These conditions translate into requirements for the analog bandwidth, dynamic range, digitizer sampling speed, maximum acceptable electronic noise, and the trigger of the Cherenkov-telescope camera discussed in the remainder of this section. For the best separability of the Cherenkov flash from NSB fluctuations, the signal in the readout should have a width of about 10 ns. However, power constraints and other practical considerations had us design a system with a considerably lower bandwidth, resulting in signals that are 30 ns full-width at half maximum. Because fluctuations in the NSB are irreducible, they define the noise floor, and the signal chain must be designed such that any additional noise contributions are much below the NSB fluctuations. On the opposite end of the signal chain's dynamic range, we require a linear response of up to a few hundred photoelectrons, which covers the expected range for most events. For rare extreme events with signals beyond the linear range, the dynamic range can be extended by considering the non-linear response of the signal chain in the analysis. The angular size of the camera pixels, is driven by the science goals for the Trinity Demonstrator and the EUSO-SPB2 Cherenkov telescope. Among other goals, both missions aim to measure backgrounds that mimic air showers from Earth-skimming tau neutrinos. In Trinity's case, we want to record these events with the same 0.3^∘ pixel size we plan for the final Trinity telescopes. In EUSO-SPB2's case, it was more important to cover a large field-of-view to better study the spatial and temporal characteristics of the NSB, even if it meant that the pixel size would be larger than the 0.083^∘ anticipated for POEMMA <cit.>. Eventually, we fixed the pixel size for EUSO-SPB2 at 0.4^∘ based on the 6^∘×12^∘ field-of-view of the EUSO-SPB2 Cherenkov telescope optics, the available power, and the projected power consumption per camera channel. Another benefit of this choice is that the required physical pixel size is the same 6 mm required for Trinity. The only difference between the two cameras is the number of pixels, which is 256 pixels for Trinity and 512 pixels for EUSO-SPB2 to cover the fields of view of the respective telescopes they instrument. We, therefore, designed the modular camera architecture described in the next section that meets the requirements for both instruments. The signals of a Cherenkov telescope camera are not continuously recorded. Instead, the trigger electronics continuously scans the camera signals for a potentially interesting signal topology in the camera. If the electronics senses the required topology, the trigger sends a readout command to the digitizer. We provide more details about the trigger topologies required in the Trinity Demonstrator and the EUSO-SPB2 Cherenkov telescope in Section <ref>. From an operational point of view, the EUSO-SPB2 camera operated in a much more challenging environment. As convective cooling becomes ineffective at 33 km altitude, the design of the camera cooling was driven by the EUSO-SPB2 requirements and adapted for the Trinity camera (see Section <ref>.) Based on these design considerations, we devised a modular camera that meets the Trinity Demonstrator and EUSO-SPB2 requirements and is described in the following sections. § TOP-LEVEL ARCHITECTURE OF THE CAMERA AND READOUT The top-level architecture is divided into the camera unit and the readout unit. Figure <ref> shows the block diagram of the system and how it breaks down into the two physically separate units. The camera unit is mounted in the telescope's focal plane, and the readout unit is placed in a convenient location. In Trinity, the readout resides inside a cabinet on wheels next to the telescope, and in EUSO-SPB2, the readout is inside a box mounted below the telescope. The camera unit Figure <ref> shows CAD drawings of the Trinity and EUSO-SPB2 cameras, and Figures <ref> and <ref> show pictures of the two built cameras. The cameras are composed of modules. Figure <ref> shows the exploded view of one of the modules, which consists of a square matrix of sixteen 6 mm × 6 mm silicon photomultipliers (SiPM). Each SiPM constitutes one camera pixel. The SiPM matrix attaches to the front-end electronics board, the Sensor Interface and Amplifier Board (SIABs). To approximate the curved focal plane of the EUSO-SPB2 optics, an angled interface board is inserted between the SiPM matrix and the SIAB. The focal plane in the Trinity Demonstrator is flat, and the interface board is unnecessary and not used. The 16 modules of the Trinity camera and the 32 modules of the EUSO-SPB2 camera (see Figure <ref>) are inserted into the camera backplane, which is custom-designed for each camera to accommodate the different number of camera modules. The backplanes provide a mechanical mount point for the modules and the electrical interface to the readout, the trigger, and power. The readout unit Figure <ref> shows the block diagram of the readout unit, and Figures <ref> and <ref> show pictures of the assembled units for both telescopes. The centerpiece of the readout unit is the central processing unit, also called the camera computer, which serves as the communication gateway to all system components and stores the digitized camera signals. The camera computer interfaces with a high-speed ethernet link to the digitizer system, and a separate network connection is provided to the trigger board. The camera and power supplies are configured via I2C and SPI interfaces. The readout unit also includes the digitizers for the camera signals, the power supply units, and the trigger board, which we will discuss in more detail in the following sections. § DESCRIPTION OF THE MAIN CAMERA AND READOUT COMPONENTS In this section, we detail the main components of the camera unit and the readout unit: the photosensors, the SIAB front-end electronic boards, the camera computer, the digitizer, the trigger board, and the power supply boards. §.§ Photosensors Imaging air showers with Cherenkov light is best done with fast, single-photon resolving photosensors to improve the separability of air-shower signals from NSB fluctuations. Natural photosensor candidates are bialkali photomultiplier tubes (PMTs) or silicon photomultipliers (SiPMs). We chose SiPMs because their spectral response is a better fit for the red-peaking Cherenkov spectrum of air showers developing in the lower atmosphere a hundred kilometers from the Trinity telescopes <cit.>. After evaluating devices from different vendors, we chose the Hamamatsu S14520-6050CN, which has high efficiency in the red, low 1.5% optical crosstalk, low afterpulsing, and only ∼ 0.5%/^∘C gain drift. The SiPMs are 6.4 mm×6.4 mm in size and composed of 50 μm cells. The SiPMs came assembled in 4×4 matrices closely packed to minimize the dead space between SiPMs and with minimal dead space at the matrices' edges. Measurements of the S14520-6050CN we did during the selection process are presented in section <ref>. The actual SiPMs integrated into the cameras are minor evolutions of the S14520-6050CN, the S14521-6050AN is integrated into the EUSO-SPB2 camera, and the S14161-6050HS is integrated into the Trinity Demonstrator. The basic characteristics of both series are comparable to the S14520-6050CN. Figure <ref> shows pictures of the front and back of a S14521-6050AN matrix. To interface with the matrix, which has two connectors on its back, we designed the Connector Adaptor printed circuit board (PCB) shown on the figure's right side. Also visible in the picture of the Connector Adaptor is a wired thermistor, which pushes against the SiPM matrix when the matrix is plugged into the Connector Adaptor. Similarly, Figure <ref> shows pictures of the front and back of a S14161-6050HS matrix used in the Trinity Demonstrator and the Interface PCB onto which we reflow soldered the matrices because they came without connectors. The interface PCB also has a surface mounted thermistor, which is not yet populated onto the PCB shown in the picture. §.§ Sensor Interface and Amplification Board The SiPM matrices connect with their adaptor boards and, in the case of EUSO-SPB2, with the additional angle adaptor PCB (see Figure <ref>) to the Sensor Interface and Amplification Boards (SIABs). The SIABs amplify and shape the SiPM signals with the Multipurpose Integrated Circuit (eMUSIC) Application Specific Integrated Circuits (ASICs), which is designed as the front end for SiPMs in Cherenkov telescope applications <cit.>. Figure <ref> shows the block diagram of the SIAB, and Figure <ref> shows the CAD drawing and the electrical layout of the SIAB with signal lines shown in blue. Each SIAB has an eMUSIC on the top of its PCB and one on the bottom side because the eMUSIC has only 8 input channels, but a SiPM matrix has 16 pixels. Besides shaping and amplifying the SiPM signals, the eMUSIC also has a leading-edge discriminator for each channel, which we use to derive the trigger for the readout, as we explain in Section <ref>. The signals of all discriminators are OR'd, and only the OR'd signal is available as an output of the eMUSIC. The eMUSIC, furthermore, provides a SiPM bias trim voltage for each channel that is adjustable over 890 mV in steps of 3.2 mV, and it features a current monitor output for each channel which we digitize with an AD7173 16-channel analog-to-digital converter on the SIAB. The eMUSICs and the AD7173 are configured and monitored via SPI by an Atmega328p microcontroller. (see Figure <ref>). The Atmega328p also records the thermistor values, it controls the 3.3 V and 5 V regulators that bias the eMUSICs and the ADC, and it turns the SiPM bias voltage on and off. The microcontroller is connected to an I2C bus to communicate with the camera computer. We placed the eMUSICs as close as possible to the SiPM-facing end of the SIAB PCB to minimize the pick up of electronic noise. We also wanted the SiPM matrices to directly connect to the SIABs, which constrained the width of the SIAB boards to be less than the size of a SiPM matrix (see Figure <ref>). The SIAB width is 21 mm about 4.6 mm narrower than the size of a SiPM matrix. §.§ Digitizer The SiPM signals connect via 2 m long micro-coaxial cables from Samtec into an ASIC for General Electronics for TPC (AGET) based digitizer system <cit.>. The AGET is a 64-channel switched capacitor array (SCA) ASIC with a buffer depth of 512 cells that is sampled with 100 MS/s and, therefore, records 5.12 μs long traces. When the digitizer system receives a readout command from the trigger, the analog signals in the SCA are digitized with 12-bit resolution and transferred into the camera computer. Important for EUSO-SPB2 was the low power consumption of the AGET system of <10 mW per channel. The AGET ASICs are integrated into groups of four on ASIC Support & Analog-Digital conversion (AsAd) boards, providing 256 channels per board. Up to four AsAd boards are connected to a Concentration Board (CoBo), which, besides managing the AsAd boards and collecting the digitized traces, applies time stamps, zero suppression, and compression algorithms to the digitized signals. We use one AsAd board for the Trinity Demonstrator and two boards for EUSO-SPB2. The AGET is designed to digitize signals from time projection chambers, which produce much slower signals with rise times on the order of 100 ns. However, our camera signals have <10 ns rise times, which result in a non-linear response of the AGET. We regained a linear response after inserting a third-order low-pass Butterworth filter with a cut-off frequency of 15 MHz at the input of each AGET channel, resulting in a 20 ns rise time. §.§ Trigger The command to digitize and save an event comes from the trigger system, which continuously searches the camera signals for signatures that could be due to an air-shower-generated Cherenkov-light flash. We implemented a two-level trigger, where the first level is the leading-edge discriminators in the eMUSICs. The signals from the discriminators in each eMUSIC are combined in a logic OR and provided as one output signal of the eMUSIC. With only the OR'd signal, the trigger system does not know which of the eight pixels connected to the eMUSIC has a signal above the discriminator threshold. In the EUSO-SPB2 camera, we have mapped the camera pixels to eMUSIC inputs as indicated by the red rectangles in Figure <ref>. The EUSO-SPB2 optics are split to form bi-focal optics, which means that the four mirror segments of the primary mirror are aligned such that an object is imaged twice on the camera with a separation of two pixels. To illustrate the split optics, the figure gives an example of a point source at infinity that gets imaged twice onto the camera. Because of how we have mapped the camera pixels into the eMUSICs and because the image copies are separated by two pixels, there will always be two discriminators in two neighboring eMUSICs that trigger. We exploit this spatial and temporal coincidence in EUSO-SPB2 by requiring a temporal 100 ns coincidence between the eMUSIC outputs of two neighboring eMUSICs to trigger the readout. The bi-focal optics help us better reject events due to fluctuation in the NSB. We do not split the optics in the Trinity Demonstrator because we are much closer to the air shower and, therefore, can spatially resolve it with the Demonstrator's 0.3^∘ pixel size <cit.>. Because the eMUSIC only provides the OR'd discriminator output of 8 channels, we could not implement a standard next neighbor trigger commonly used in VHE gamma-ray Cherenkov telescopes <cit.>. Without an additional coincidence requirement, we, thus, record an event whenever a discriminator produces an output signal. However, we require a spatially extended air-shower image in the event reconstruction or otherwise reject the event. The discriminator eMUSIC outputs are connected to a Mesa Electronics' 7I80HD-25 Field Programmable Gate Array (FPGA) Ethernet Anything I/O card. The 7I80HD-25 has 72 programmable I/O pins, of which we configured 64 inputs for EUSO-SPB2 and 32 inputs for the Trinity Demonstrator, respectively, one for each eMUSIC. Programmed into the FPGA is a finite state machine, which constantly evaluates the input signals and sends a logic signal to the CoBo, triggering the readout of the digitizer whenever the trigger condition is met. In the split-optics case of EUSO-SPB2, the trigger condition is a 100 ns time coincidence between the signals of two neighboring eMUSICs. In Trinity's case, the FPGA just passes the eMUSIC logic signals through, triggering the readout whenever it records a discriminator signal without requiring an additional coincidence. In addition, the trigger board can simultaneously trigger an external LED-based light flasher and the readout system. The flasher provides signals to calibrate and monitor the camera's health. §.§ Power Supply and Distribution Power for the camera electronics and the SiPMs is provided by a custom dual power system consisting of the Low-Voltage Power Supply (LVPS) and the High-Voltage Power Supply (HVPS) boards. Power for the LVPS/HVPS boards comes from an external 18-30 V source, which is an array of batteries in the case of EUSO-SPB2 and a programmable SL32-46/U power supply from MAGNA Power in the case of the Trinity Demonstrator. The LVPS board integrates multiple DC-DC converters that generate the +3.3V, +5V, and +12V needed for the different components of the camera and the readout. In EUSO-SPB2, the LVPS also powers the camera computer and the CoBo module of the AGET digitizer. The voltages and currents on the LVPS board are monitored with four 24-bit 16-channel Analog-to-Digital (ADC) converters (AD7173-8BCPZ). The HVPS board generates the SiPM bias voltages in eight independent high-voltage (HV) channels, each channel powering 64 SiPMs. The voltage of each channel is adjustable with 16-bit Digital-to-Analog (DAC) converters (AD5686R) up to 50 V, with a resolution of 1 mV. The current of each HV channel is limited to 20 mA to protect the eMUSICs, which are damaged if the SiPM currents are too high. The EUSO-SPB2 system also included a power distribution unit, which was commanded by the camera computer via CAN bus and controlled the power to several auxiliary systems. §.§ Camera Computer The camera computer for EUSO-SPB2 was a dual-core single-board computer from RTD Embedded Technologies (CMA24CRD1700HR). The camera computer for the Trinity Demonstrator is a Sintrones EBOX-7000 edge computing device designed for industrial automation that is passively cooled and operates over a wide temperature range of -40^∘C - 70^∘C. Its CPU is an Intel Gen9 Core i7-9700TE (12MB Cache 1.8GHz up to 3.8GHz). It has 8GB of RAM and a 10-minute backup UPS. The EBOX includes two PCIe 3.0 x 8 slots, with one slot currently housing a 10 GB SFP network card. The camera computers receive the digitized waveforms from the AGET system and perform several management and monitoring tasks. They send commands to the Trigger board and the AGET digitizer via an Ethernet connection and to the SIAB microcontrollers and LVPS/HVPS boards via a System Management Bus (SMBus or I2C). The camera computer's control software is programmed in C++ and relays commands to the corresponding processes running on the camera computer using POSIX message queues. Replies are sent back in a separate message queue. § COOLING SYSTEM AND THERMAL VACUUM TESTING The design of the cooling system is driven by the requirement to operate the EUSO-SPB2 camera at 33 km altitude where the ambient pressure is ∼1 hPA, and convective cooling becomes inefficient. We devised a system of heat pipes for the camera unit that transports heat from inside the camera to its sides (see Figure <ref>). Inside the camera, the heat pipes are thermally coupled with copper blocks to the eMUSIC packages. Each eMUSIC generates 0.50 W of heat, which, when the power of all eMUSICs is totaled, amounts to about 90% of the power of the camera unit. On both sides of the camera, the heat pipes couple to liquid-cooled cold plates (see also Figure <ref>). Figure <ref> shows simulations of the temperature distribution inside the camera cooling system, and Figure <ref> shows the simulation of the temperature distribution of the cooling liquid inside the cold plate. For EUSO-SPB2, the glycol-based antifreeze liquid was circulated by two parallel connected gear pumps ZY-1305 from Speck and circulated through two radiators salvaged from a CPU cooling system, CORSAIR iCUE H150i RGB PRO XT 360 mm Radiator. The radiators were mounted outside the telescope radiating into space (see Figure <ref>). In the case of the Trinity Demonstrator, the liquid circulates through an OMTech 6L Industrial Water Chiller, set to a temperature of 6^∘C (see Figure <ref>). The electronics in the readout unit, i.e., the digitizer, computer, switches, etc, are passively cooled in EUSO-SPB2 with heatsinks mounted on the outside of the enclosure of the readout unit (see Figure <ref>). For the design of the heatsinks, we thermally modeled the readout unit with all components at their respective positions and their respective power consumptions (see Figure <ref>). In the Trinity Demonstrator, the digitizer components are cooled by forced air and all other components by convection. §.§ Thermal Vacuum Testing We thermal-vacuum tested the EUSO-SPB2 camera and readout over a wide temperature range (-20^∘C - 40^∘C) at ambient pressure and at 1 hPa. Figure <ref> shows the EUSO-SPB2 camera and readout in the thermal vacuum chamber. The radiator faced a cold plate whose temperature we varied over the aforementioned temperature range, while the temperature of the chamber wall remained at room temperature of ∼20^∘C. The thermal vacuum tests confirmed our thermal simulations, validated the performance of the EUSO-SPB2 cooling system, and showed that the readout and camera function over the entire tested temperature range at the expected ambient pressure at a float altitude of 33 km. With the cold-plate temperature at -40^∘C, the SiPM temperature stayed below 30^∘ also when we illuminated the SiPMs with a steady light source mimicking expected and extreme night-sky background (NSB) levels, thus keeping the SiPM dark-count rates below the NSB. Figure <ref> shows temperature measurements of different components of the system during a 6-hour test cycle while the cold plate was held at -40^∘C. § PHOTOSENSOR CHARACTERISTICS AND INTEGRATION §.§ Characterization of the S14520-6050CN We evaluated devices from different vendors before purchasing the SiPMs. Here, we present the characteristics of the Hamamatsu S14520-6050CN, which we measured at different temperatures. The measured characteristics include dark count rates, direct and delayed optical crosstalk, effective cell capacitance, quench resistor, recovery time constant, afterpulsing, and breakdown voltage. Based on these measurements, we made the decision to purchase the commercially available versions of the S14520-6050CN, the S14521-6050CN, and the S14161-6050HS. Instead of fully characterizing these, we focused on photon detection efficiency, spectral response, gain, and breakdown voltage measurements. We performed all measurements with the setups and methods described in <cit.>. Figure <ref> shows the photon detection efficiency (PDE) for four wavelengths as a function of SiPM bias at room temperature. The red lines fit the data points with an exponential function <cit.>. From the fit results, we read the bias voltage where the PDE reaches 90% of the maximum PDE, which is at about 9% overvoltage for all wavelengths. In the following, we adopt a 9% relative overvoltage as the operating point of the SiPMs and discuss the impact that the measured SiPM nuisance parameters at that voltage have on the operation of the cameras. The 9% relative overvoltage operating point is marked with a black arrow on the abscissa in the following figures. Figure <ref> shows the direct optical crosstalk as a function of relative overvoltage. The measurement at 40^∘ is unreliable because the high dark count rate at that temperature prevents a clear separation of SiPM signals. A ∼1% optical crosstalk is extremely low and does not impact the camera's performance. Figure <ref> shows the effective cell capacitance derived from gain vs. bias measurements at different temperatures. The effective cell capacitance is defined as C=Δ Q/Δ U, i.e. gain in charge divided by the overvoltage. The capacitance is stable within ∼2%, which means any gain changes can be solely attributed to the temperature-dependent breakdown voltage, which shows Figure <ref>. The breakdown voltage changes linearly with a slope of 37 mV/C^∘ between -75^∘C and 40^∘. Or normalized to the breakdown voltage with a slope of 0.1%/C^∘, which is typical for SiPMs. Because we operate the SiPMs with a relative overvoltage of ∼10%, the temperature dependence of the gain at our operating voltage is only 0.1%/10%=0.01 per 1^∘C or 1%/^∘C. Such a small temperature dependence significantly reduces the need to keep the temperature of the SiPMs stable or to readjust the SiPM voltages constantly. The same argument also applies to the temperature dependence of the breakdown probability, which factors into the PDE. However, because we operate the SiPMs at a bias where the PDE and thus the breakdown probability are already saturating, the relative changes of the PDE are even less than 1%/^∘C. Figure <ref> shows the charging time constant of a SiPM cell after a breakdown. Within our measurement uncertainties, the time constant does not depend on overvoltage. It does, however, depend on temperature, which we attribute to a changing quenching resistor of 0.3%/C^∘. With a 100 ns recovery time and an average night-sky background rate of 200 MHz per SiPM, only 16 cells recover at any time while the remaining >14,000 cells of the SiPM can accept photons. The cell recovery time, therefore, does not limit the dynamic range or PDE of the camera. Afterpulsing has a similar impact on the camera performance as direct optical crosstalk. It adds to the primary signal and needs to be accounted for in the event reconstruction. The S14520 (see Figure <ref>) has an acceptable ∼5% afterpulsing probability at the operating point with a time constant of a few ten nanoseconds. Delayed optical crosstalk effectively increases dark count rates as it happens significantly after the primary signal. It is less than 5%, which adds to the dark-count rate by the same amount and thus does not impact the camera performance. That is because the dark-count rates, shown in Figure <ref>, are ten times less than the expected night-sky background rate of 5 MHz/mm^2 even at 40^∘C. We expect to operate at 20^∘C or below where the dark count rate is <1% of the night-sky background rate. The dark count rates, thus, do not impact the camera's performance. §.§ Photon Detector Assembly and Acceptance Testing After reflow soldering the SiPM matrices for Trinity onto their carrier boards, we measured the breakdown voltage and relative gain as part of the acceptance testing. For the testing, the SiPM matrix is mounted onto a T-LSM100A x-y stage from Zaber, as shown in Figure <ref>, and the stage aligns the SiPM under test with the mask's opening. The size of the opening guarantees that only the SiPM under test is illuminated. The SiPM is flashed with 120 ps long pulses of 638 nm light from a Picoquant picosecond laser PDL 800-B with a 638 nm LDH 8-1-469 laser head. Before the light hits the SiPM, it's intensity is attenuated by a factor of 1,000 with neutral density filters to fit within the linear range of the SiPM. The SiPM signals are amplified with a custom amplifier <cit.> and sampled with 5 GS/s by a DRS evaluation board <cit.>. We measured each SiPM's amplitude response at 41 V and 42 V bias voltage (U_bias). Because the amplitude response A of the SiPM is proportional to A ∝ C_eff· (U_bias-U_BD), we obtain from these measurements at two bias voltages the average effective cell capacitance C_eff, and the breakdown voltage U_BD for each SiPM. Figure <ref> shows the distribution of the breakdown voltages from these measurements normalized to the 39.5 V average breakdown voltage of all the Trinity SiPMs. The breakdown voltage distribution is sufficiently narrow, so SiPM matrices did not have to be grouped by similar breakdown voltage and all SiPM matrices are biased with the same global HV. Figure <ref> shows the relative gain distribution of the SiPMs for the EUSO-SPB2 camera normalized to the camera average. The distribution is due to differences in the effective cell capacitances. For operating the camera, the bias voltages of the SiPMs are adjusted with the SiPM bias trim voltages provided by the eMUSICs to produce a uniform response of the camera to external light, which we discuss in section <ref>. §.§ Photon Detection Efficiency Measurements A careful evaluation of the photon detection efficiency (PDE) is necessary because it is a critical factor in determining the telescope's photon collection efficiency. Uncertainties in the PDE worsen the energy resolution and result in systematic offsets in the energy scale during event reconstruction. All PDE measurements have been performed with the setups described in <cit.> and directly feed into the Monte Carlo simulations for the Trinity Demonstrator and EUSO-SPB2. Figure <ref> shows the PDE of a S14521 versus wavelength measured at six angles of incidence between 0^∘ normal incidence and 80^∘. In these measurements, the SiPM was biased 2.25 V above breakdown, which equates to a 6% relative overvoltage, which is below our nominal SiPM bias of about 12% overvoltage for the S14521. However, we find that the normalized spectral response of the S14521 shows little dependence on the bias voltage and thus conclude that the change of the PDE normalized to normal incidence shown in Figure <ref> is the same for all bias voltages, including the ones we will use in the operation of the Trinity and EUSO-SPB2 cameras. The oscillations in Figure <ref> are due to interference at and below the surface of the SiPM. The oscillations shift to higher and lower wavelengths with changing angle of incidence, resulting in the pattern observed in Figure <ref>. Within the systematic uncertainties of our measurements, the PDE does not decrease up to angles of 60^∘. To calibrate the absolute photo response of the camera, we have measured the PDE vs. wavelength for one SiPM in each SiPM matrix with light under normal incidence. The remaining SiPMs are calibrated in situ by comparing their response to the calibrated SiPMs when the entire camera is uniformly illuminated with flashes from an LED light pulser. In the in-situ calibration, we make use of the known breakdown voltages and the effective cell capacitances of each SiPM discussed in Section <ref> to separate PDE from gain. The relative calibration exploits that the spectral response does not change from one SiPM to the next except for a scaling factor, which is illustrated in Figure <ref>. The response of all the 32 measured EUSO-SPB2 SiPMs relative to the average of all 32 measurements is flat versus wavelengths. The small oscillations in Figure <ref> are due to slight wavelength shifts of the oscillations seen in the spectral response of each SiPM. For comparison, Figure <ref> shows the average PDE of all 32 EUSO-SPB2 SiPMs biased at 4.5 V overvoltage or 12% above breakdown voltage. The relative response curves of the 32 measured SiPMs vary by 9% (one standard deviation marked by the red line in Figure <ref>) around the average. The statistical uncertainty of the PDE of one SiPM is about 2%. § CHARACTERIZATION OF THE SIGNAL CHAIN AND DISCRIMINATOR §.§ Linearity of the Signal Chain For an absolute calibration of the cameras, we also need to know the conversion factor from the digitized amplitude of the SiPM signals into the number of photoelectrons or photons detected by the SiPM. We obtained that conversion factor and characterized the linearity of the signal chain by flashing the SiPMs with our Picoquant laser and varying the intensity of the light flashes with neutral density filters to scan the entire dynamic range of the digitizer. In these measurements, it is safe to assume that the 120 ps-wide laser pulse does not add additional width to the 30 ns Full-Width-at-Half-Maximum (FWHM) pulse shape recorded by the digitizer. The pulse width is mostly determined by the transfer function of the low-pass filter in front of the digitizer. Thus, the normalized pulse shape does not change with the intensity of the light pulse within the linear range of the signal chain and it is also representative of the pulse shape expected for a single photoelectron signal. A drawback of the 30 ns wide signals is that they significantly overlap single photoelectron signals from dark counts, thus inhibiting the identification of single photoelectron signals in the digitized traces. In order to obtain an absolute conversion factor based on single-photoelectron signals, we also recorded the signals with a 500 MHz bandwidth oscilloscope (Tektronix TDS3054C) by tapping into the signal chain before the low-pass filter. From the clearly identifiable single photoelectron signals on the oscilloscope, we thus obtained the conversion factor from signal amplitude into photoelectrons for any signal recorded with the oscilloscope. With that conversion factor, we measured the average number of photoelectrons for each laser intensity. Combining the average number of photoelectrons from the oscilloscope measurement with the average digitized amplitude from the AGET system, we then obtain a conversion factor of 10.07±0.09 digital counts per photoelectron for the digitized signal amplitudes in photoelectrons for the EUSO-SPB2 SiPMs. Figure <ref> shows the measurements for different laser intensities. The SiPMs are biased at the nominal 4.5 V above breakdown in this calibration. We find that the signal chain is linear up to a signal of 300 photoelectrons. For more intense signals, the digitizer frontend saturates, and the response becomes nonlinear. Figure <ref> shows the relative standard deviation of the recorded amplitudes for each laser intensity between 20 photoelectrons and 300 photoelectrons. In case the pulse-to-pulse fluctuations are only due to Poisson statistics, the expected distribution of the recorded amplitude distribution has a relative standard deviation of √(μ)/μ, where μ is the average number of photoelectrons per pulse and √(μ) is the standard deviation of the amplitude distribution in units of photoelectrons. The relative width for each intensity agrees well with expectations for Poisson statistics indicated by the red line in the figure. We conclude that noise in the electronics does not significantly contribute to the fluctuations for recorded signals above 20 photoelectrons. For amplitudes below 20 photoelectrons, the fluctuations are increasingly dominated by baseline fluctuations due to dark counts. For signals with 300 or more photoelectrons, we observe that the relative width increases (see the last data point in Figure <ref>) due to the onset of saturation in the signal chain. For each laser intensity, we also measured the average pulse shape, which we are using in our Monte Carlo simulation to model the camera response. When we recorded the laser pulses with the AGET system, we also recorded with the AGET the gate-generator signal that triggered the picosecond laser. In the averaging procedure, we then shifted the recorded trigger pulse relative to the trigger pulse recorded with the first laser pulse until the Chi-square between the two recorded trigger pulses was minimized. We then added the time-shifted samples to the average trace. In this way, we eliminate the phase jitter between the gate-generator signal and the sampling clock of the AGET system and obtain a pulse shape with sub-nanosecond resolution. Figure <ref> shows one example. §.§ Crosstalk between camera pixels We characterized the crosstalk between camera pixels by flashing our picosecond laser at a SiPM and recording the response in all 64 channels that connect to the same AGET chip. Unlike in the acceptance testing, where we illuminated the entire SiPM, we placed an adjustable pinhole in front of the SiPM to ensure light does not spill over into neighboring channels. With the pinhole in place and the SiPM biased 4.5 V above breakdown, we typically detected 150 photoelectrons per laser pulse. We recorded 1,000 laser pulses for each crosstalk measurement and shifted the recorded traces in time as we did above to obtain one average trace for each pixel. Figure <ref> shows the average traces of all 63 pixels excluding the pixel we flashed with the laser. We observe that the crosstalk signals in all 63 pixels are 30 ns delayed relative to the laser signal. We quantify the crosstalk in a pixel by recording the maximum amplitude in its average trace and normalizing it to the signal amplitude in the flashed pixel. Figure <ref> gives one example measurement where the white pixel has been flashed. The sixteen pixels in the lower left quadrant belong to the same SIAB and show the highest crosstalk of 7%. The crosstalk in the remaining channels amounts to 5% and is due to crosstalk in the low-pass filter boards attached to the digitizer. Repeating the measurement by successively flashing all 64 pixels, we arrive at the crosstalk distribution in Figure <ref>. The majority of the crosstalk is below 5%. The average is 5%, and in a handful of cases, we observe a crosstalk of almost 20%. §.§ Linearity of the Discriminator We characterized the linearity of the discriminator and calibrated the discriminator threshold in units of photoelectrons with the same setup we used to measure the linearity of the signal chain. The SiPM bias voltage is set to 4.5 V above the breakdown voltage. The discriminator threshold is adjustable in 512 steps, where 512 is the lowest and 0 is the highest threshold. We scanned the discriminator threshold for a given laser intensity and recorded the trigger rate for each discriminator setting. Figure <ref> shows the trigger rate versus discriminator setting for one such measurement when the laser intensity was set to 88 photoelectrons. The trigger rate is normalized to the pulse rate of the laser. As the discriminator threshold goes from a small value to a high value (high to low threshold), the trigger rate increases until the discriminator triggers on all laser pulses. We define as threshold the discriminator setting where 50% of the laser pulses are registered. Figure <ref> shows the thus-determined discriminator thresholds versus different laser intensities for one camera pixel. From 0 to 200 photoelectrons amplitudes, the discriminator responds linearly before it reaches its highest threshold. We fit the linear range and obtain a calibration factor of -1.4±0.1 discriminator threshold steps per photoelectron with a positive offset of 303. The discriminators of all camera pixels are calibrated in this way. § FLATFIELDING A uniform camera response simplifies its operation and data analysis. It is achieved by adjusting the SiPM bias such that the product of PDE, SiPM gain, and signal-chain gains and losses is the same for all pixels in the camera. Flat-fielding is the procedure of adjusting the SiPM bias to achieve a uniform camera response. In the flat-fielding procedure, the camera is uniformly illuminated with a pulsed light source. For the first round of light flashes, all SiPMs are biased 4.5 V above their respective breakdown voltage, the bias voltage where the PDE reaches 90% of its maximum value. From the recorded signals, an average amplitude is calculated per pixel A_p, and then the average pixel values are averaged across the entire camera A_c. The SiPM bias correction Δ U_ for a pixel is then obtained by multiplying the nominal 4.5 V overvoltage with the relative deviation of the pixel average from the camera average. Δ U_ = 4.5 ·A_c-A_p/A_c The SiPM bias correction is then subtracted from the eMUSIC trim voltage for that pixel. After updating the bias voltages of all pixels accordingly, the camera is flashed again with light pulses, and the procedure is repeated until a uniform camera response is achieved. After completion of the flat-fielding, the pixel response is uniform, with a standard deviation of 0.05 (see Figure <ref>). This is mostly dominated by the finite number of 1,000 flashes, which results in a relative uncertainty of the per-pixel average amplitude of 1/√(1,000)=0.03. § CURRENT MONITOR We characterized and calibrated the current monitor output of the eMUSICs by illuminating the SiPMs with a steady LED. By varying the current through the LED, we varied its intensity and thus could scan a wide range of SiPM currents from ∼10 μA to 450 μA. For comparison, the typical SiPM current in the Trinity Demonstrator is 14 μA per pixel. Figure <ref> shows that the current monitor output is linear over the tested range and shows very little dependence on temperature. We tested the current monitor at -40^∘C and 25^∘C. At low SiPM currents, the current monitor is limited by the small 5.3 μV/μA slope such that despite the 16-bit effective resolution of the ADC, we obtain a resolution of 7.6 μA/ADC count. § SUMMARY AND OUTLOOK We have developed and built the cameras and readout for the Trinity Demonstrator and the EUSO-SPB2 Cherenkov telescope. Both are pathfinder and pioneering experiments exploring the detection of Earth-skimming VHE and UHE neutrinos with the imaging atmospheric Cherenkov technique. Our cameras are the first SiPM cameras aiming at detecting Earth-skimming neutrinos. The spectral response of the SiPMs is a good fit for the Cherenkov spectrum expected to be detected by Trinity's telescopes <cit.>, allowing for compact telescopes with good sensitivity and low energy threshold. In building and bench testing the cameras, we verified their functionality and characterized and calibrated the photon detection efficiency, the response of the signal chain, the trigger system, and the current monitor. These characterizations provide the necessary parameters to model our cameras in the Monte Carlo simulation of both experiments and evaluate their sensitivity to Earth-skimming neutrinos. Our instruments meet the requirements of the Trinity Demonstrator and EUSO-SPB2 discussed in Section <ref> and have been successfully integrated into their telescopes. The EUSO-SPB2 project had its flight in May 2023, and despite an unexpectedly short flight of only 36 hours and 52 min before the balloon plunged into the South Pacific, we could demonstrate that the camera performed as expected <cit.>. The Trinity Demonstrator camera had been integrated into its telescope in October 2023, with regular observations starting shortly thereafter. The camera and readout are performing as expected and have been operating without technical issues since. Figure <ref> shows an air-shower image of a cosmic ray recorded with the Trinity Demonstrator during the commissioning phase. The development of this first-generation SiPM camera for Earth-skimming neutrino detection is only the first step toward a mature design, particularly for Trinity. Operating the Trinity Demonstrator camera for an extended period will give us the experience of observing Earth-skimming neutrinos from mountaintops and maybe even yield the detection of the first VHE neutrino. With the experience gained by operating the Trinity Demonstrator, we will design the next-generation camera, which we plan for the Trinity Prototype telescope, which has a 60^∘ horizontal field of view and will need to be instrumented with a 3,300-pixel SiPM camera. §.§ Acknowledgments We acknowledge the excellent work done by the Georgia Tech Montgomery Machining Mall staff and are grateful for the many collegial and inspiring discussions with our EUSO-SPB2 and Trinity collaborators over the past five years. This work was funded with NASA APRA awards 80NSSC19K0627 and 80NSSC22K0426 and funding from the National Science Foundation with award PHY-2112769. elsarticle-num-names.bst
http://arxiv.org/abs/2406.08080v1
20240612110411
AustroTox: A Dataset for Target-Based Austrian German Offensive Language Detection
[ "Pia Pachinger", "Janis Goldzycher", "Anna Maria Planitzer", "Wojciech Kusa", "Allan Hanbury", "Julia Neidhardt" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CY", "I.2.7" ]
Classical simulability of constant-depth linear-optical circuits with noise Changhun Oh June 17, 2024 =========================================================================== fancy § ABSTRACT Model interpretability in toxicity detection greatly profits from token-level annotations. However, currently such annotations are only available in English. We introduce a dataset annotated for offensive language detection sourced from a news forum, notable for its incorporation of the Austrian German dialect, comprising 4,562 user comments. In addition to binary offensiveness classification, we identify spans within each comment constituting vulgar language or representing targets of offensive statements. We evaluate fine-tuned language models as well as large language models in a zero- and few-shot fashion. The results indicate that while fine-tuned models excel in detecting linguistic peculiarities such as vulgar dialect, large language models demonstrate superior performance in detecting offensiveness in AustroTox. We publish the data and code[<https://www.pia.wien/austrotox/>, <https://web.ds-ifs.tuwien.ac.at/austrotox/>]. Content warning: This paper contains examples of offensive language to describe the annotation scheme. § INTRODUCTION In recent years, research in the domain of content moderation has transitioned from a unidimensional to a multidimensional perspective <cit.>. A one-size-fits-all approach is unable to accommodate the diverse needs of global users <cit.> whose perceptions of what constitutes harmful content is contingent upon individual, contextual, and geographical factors <cit.>. Scholars call for less centralized and more personalized mechanisms of moderation to account for such multifaceted differences <cit.>, particularly when it comes to country-specific and subsequently linguistic nuances <cit.>. Intolerant user comments (e.g., offensive stereotyping), in contrast to incivil user comments (e.g., vulgarity), are perceived as more offensive and as a stronger threat to democratic values and society, as well as receiving a stronger support for deletion <cit.>. This highlights the importance of moderation approaches that include a more nuanced understanding of online norm violations. For determining the harmfulness of an offensive statement, its target is decisive <cit.>. Perceptions of targets of offensive comments are subject to change based on individual, contextual, cultural and intersectional factors <cit.> making it, therefore, crucial to effectively identify emerging targets of such statements. Figure <ref> depicts examples highlighting the importance of the target of an offensive statement in determining its severity. In order to study the detection capabilities of language models in an Austrian cultural and linguistic context, we create a corpus of Austrian German comments. Our main contributions are: * 4,562 user comments from a newspaper discussion forum in Austrian German annotated for offensiveness[As there are no generally accepted definitions nor distinctions for abusiveness, offensiveness and toxicity <cit.>, we use these terms interchangeably.] with the article title used as context. The majority of posts is annotated by five annotators. We additionally publish the disaggregated binary offensiveness annotations. * Annotated spans in comments comprising targeted individuals, groups or other entities by offensive statements, and vulgarities. * An evaluation of fine-tuned smaller language models and large language models in a zero- and five-shot scenario. § RELATED WORK Research focused on identifying spans within offensive statements is primarily focused on English user comments. Examples of annotated spans in English comments are the targets of offensive statements <cit.>, the spans contributing to the offensiveness label <cit.>, and the spans comprising a violation of a moderation policy <cit.>. We list all public German datasets covering tasks related to offensiveness detection in Table <ref>. All German datasets containing labels related to offensiveness except for the One Million Posts and the GerMSDetect dataset focus on different varieties of German from Austrian German. AustroTox contains the same definitions for annotating vulgarities as GermEval <cit.>, this dataset contains annotations of vulgar posts. According to their definitions, the classes Insult from GermEval <cit.>, Hate Speech from DeTox and GAHD <cit.>, and Offense from HASOC <cit.> can be merged into the class Offensive from AustroTox. This does not imply that the class Offensive from AustroTox can be merged into the respective classes as their definition might be more narrow. Additionally, these datasets stem from other sources than AustroTox. AustroTox is the first German dataset related to offensiveness classification containing annotated spans. § DATASET CREATION Data source We source AustroTox from the Austrian newspaper DerStandard[<https://www.derstandard.at/>], a Viennese daily publication with a left-liberal stance covering domestic and international news and topics such as economy, panorama, web, sport, culture, lifestyle, science, health, education, and family. The DerStandard forum is one of the largest discourse platforms in the German-speaking world. Despite the left-liberal stance of the newspaper, this perspective is not reflective of the forum's community, as DerStandard is actively working on being a low-threshold discussion platform open to everybody. As we focus on the Austrian dialect, this Austrian news media outlet’s comment sections are a suitable sample to draw from. We argue that the forum's expansive community and the diverse range of articles and forums offered on the DerStandard website help towards minimizing bias in AustroTox. Professional moderators ensure the exclusion of hate speech which is illegal in Austria <cit.> in the forum, this results in hardly any hate speech and a focus on offensive speech in the AustroTox dataset. Pre-filtering comments In order to pre-filter potentially toxic comments and comments which are not considered as toxic by existing moderation technologies, we apply stratified sampling based on the toxicity score provided by the Perspective API <cit.>. The toxicity score is between 0 (not toxic) and 1 (severely toxic). We compute the toxicity score for 123,108 posts. Out of these posts, 873 exhibit a toxicity score between 0.9 and 1. We add these comments to the data to be annotated. Furthermore, we create the following strata defined by the toxicity score: 0-0.3, 0.3-0.5, 0.5-0.7, 0.7-0.9. Then, we randomly sample comments from each stratum. We use the following proportions for the counts of comments from the different strata: 9 : 9 : 9 : 11. AustroTox encompasses responses to 532 articles or discussion forums on any topics covered by DerStandard. The comments were posted between November 4, 2021, and November 10, 2021. The articles and forums where the comments appear stem from a broader time period. Annotation campaign We conduct the annotation with participation from master's students specializing in Data Science and undergraduate students majoring in Linguistics, as an integral component of their academic curriculum. 30% of the annotators are registered as female through the courses registration platform, which does not necessarily mean that they self-identify as female. The majority of the annotators are Austrian and between 19 and 26 years old, annotators are required to have at least a German level of C1. The vast majority of annotators speak German as a native language. Ethical considerations pertaining to the annotation task are expounded upon in the Ethics Statement (Section <ref>). The title of the article under which the comment was posted is taken into account as context when annotating the comment. While our annotation guidelines (Appendix <ref>) include numerous examples with the intention of being prescriptive <cit.>, it is important to note that due to the low number of comments per annotator and the limited time allocated for training the annotators, the procedure unavoidably incorporates a subjective element. We classify each comment as offensive or non-offensive. For non-offensive and offensive comments, we annotate spans in the text comprising vulgarities. Both, offensive and non-offensive posts may contain an unspecified number of vulgarities, as vulgar language can exist separate from offensiveness. For offensive posts, we additionally annotate spans comprising the target of the offensive statement and the type of target (Examples in Figure <ref>). If the target is only mentioned via a pronoun, we select the pronoun as the span comprising the target. Adopting a definition of vulgarity similar to that employed by <cit.>, we define classes and spans as follows: Offensive: An offensive comment includes disparaging statements towards persons, groups of persons or other entities or incites to hate or violence against a person or a group of people. Not Offensive: A non-offensive comment does not include disparaging statements or incites to hate or violence. Vulgarity: Obscene, foul or boorish language that is inappropriate for civilized discourse. Target Group: The target of an offensive post is a group of persons or an individual insulted based on shared group characteristics. Target Individual: The target of an offensive post is a single person not insulted based on shared group characteristics. Target Other: The target of an offensive post is not a person or a group of people. Data aggregation Each post is annotated by 2 to 5 annotators, the majority of posts is annotated by 5 annotators. We choose an aggregation approach that prioritizes sensitivity, where a comment requires fewer votes to be labeled as offensive compared to the number of votes needed to consider it non-offensive. A post is solely annotated as non-offensive if 2 ≤ v_n and v_o≤v_n/2, where v_n and v_o denote the votes for the class non-offensive and offensive. The post is labelled as offensive if 2 ≤ v_o and v_n≤2/3· v_o. Posts that do not meet one of these criteria are discarded. This implies that posts which are labelled as offensive by 3 annotators and as non-offensive by 2 annotators are labelled as offensive while posts which are labelled as offensive by 2 annotators and as non-offensive by 3 annotators are discarded. Spans comprising the different target types are annotated by majority voting of those who labelled the post as offensive. Vulgarities are annotated if 2≤ v and v_a≤ v + 2, where v denotes the number of votes for a span being a vulgarity and v_a denotes the sum of all class votes. Table <ref> contains the size of AustroTox. Inter Annotator Agreement After curating 390 posts with implausible span annotations (e.g. offensive but no target), we report a Krippendorff's Alpha of α = 0.49 on the binary offensiveness classification, which is comparable to related work using crowdsourcing: <cit.> report 0.51 and <cit.> report 0.45. An α of 0.5 is between random annotation (α = 0) and full agreement (α = 1). In a prescriptive annotation paradigm <cit.>, tentative conclusions are still acceptable with α≥ 0.667 <cit.>. While our annotation guidelines include numerous examples, it is important to note that due to the low number of comments per annotator and the limited time allocated for training the annotators, the procedure incorporates a subjective element. Care should be taken when aggregating data in cases of moderate agreement. We argue that our aggregation approach prioritizing sensitivity provides a larger decision boundary. Cross-validation splits We make AustroTox available with predetermined splits for cross-validation stratified using fine-grained classes determined by the label of the post and the types of spans it contains. The splits consist of a ratio of about 80% for training, 10% for development, and 10% for testing. Appendix <ref> contains more details on the dataset creation. § EXPERIMENTS Fine-tuned language models We fine-tune and evaluate German BERT and Electra models <cit.> (Table <ref>, Appendix <ref>). We define three tasks: Binary offensiveness classification as sequence classification, vulgarity extraction as token classification and target extraction as token classification task. For offensiveness classification, we concatenate the article title given as context and the comment as input for the models: . Prompted LLMs We additionally evaluate the class and span detection capabilities of not fine-tuned LLMs. We use the following large language models for our experiments: GPT 3.5[<https://platform.openai.com/docs/models/gpt-3-5>] (gpt-3.5-turbo-1106) <cit.>, GPT 4 [<https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo>] (gpt-4-1106-preview) <cit.>, LeoLM 7B Chat [<https://huggingface.co/LeoLM/leo-hessianai-7b-chat>], and Mistral [<https://huggingface.co/mistralai/Mistral-7B-v0.1>] <cit.>. We avaluate them in a zero-shot and five-shot scenario (Table <ref>, Appendix <ref>). For the LLM evaluation, we distinguish between multitask prediction (predicting offensiveness, vulgarities and targets) and offensiveness-only classification. We create prompts that contain an offensiveness definition, article title, and the post to be classified. In the five-shot scenario, we additionally provide five titles and posts with labels that are randomly sampled from the training set for each prediction. We require the LLM to respond in JSON[ <https://platform.openai.com/docs/guides/text-generation/json-mode>]. Preliminary experiments showed that only GPT-3.5 and GPT-4 were able to produce consistently valid JSON responses. We thus only evaluate these two models in the multi-task setup. For LeoLM and Mistral, we adjust the prompt, requiring them to respond with only 0 or 1, and define the token with the higher logit as the model's prediction. To ensure comparability for the token-level classification tasks, we tokenize the spans generated by the GPT-models with the GBERT tokenizer. Evaluation outcomes Table <ref> contains the evaluation outcomes. The proprietary LLMs outperform the open-source fine-tuned models in binary offensiveness classification. We attribute the superiority of the fine-tuned models in the vulgarity detection task to the lexical nature of the vulgarity detection task. Notably, the dataset features vulgarities in Austrian dialect that are rarely encountered elsewhere. There are 437 non-offensive but vulgar comments in AustroTox. Being able to detect vulgarities can help with debiasing vulgar False Positives and vulgar False Negatives. In especially, the results suggest that marking vulgarities using fine-tuned models and then classifying the comment with marked vulgarities using GPT-4 leads to an improvement of GPT-4’s performance. Even dictionary-based detection of vulgarities might lead to an improvement of GPT-4’s performance and to more explainable results. The span annotations allow for analysis beyond comparing disagreements with binary gold labels. The micro F_1 on the targets for the four-class target classification is generally low due to a high prevalence of the negative class. The fine-tuned models slightly outperform the LLMs at detecting the targets of offensive statements. § CONCLUSION We presented AustroTox, a dataset comprising user comments in Austrian German, annotated for offensiveness. We annotated spans within the comments comprising targeted individuals, groups, or other entities through offensive statements and spans comprising vulgarities. An evaluation on our dataset indicates that the smaller language models we fine-tuned and tested excel in detecting vulgar dialect, whereas the LLMs we tested demonstrate superior performance in identifying offensiveness within AustroTox. § ACKNOWLEDGEMENTS Pia and Anna are funded by the Vienna Science and Technology Fund (WWTF) [10.47379/ICT20015]. Janis is funded by the University of Zurich Research Priority Program project Digital Religion(s). We would like to thank the students participating in the annotation campaign. Their dedication and effort have been invaluable to the success of this project. Furthermore, we would like to thank Rebekah Wegener who helped with the annotation campaign. We would like to extend our gratitude to DerStandard for sharing their data, thereby contributing significantly to the advancement of semi-automated content moderation. Lastly, the financial support by the Christian Doppler Research Association is gratefully acknowledged. § ETHICS STATEMENT Annotators’ Risks The repeated exposure of annotators to offensive content carries risks. Therefore, the annotation campaign was reviewed by the ethics committee of our institution. In the course of our work, the annotators engaged in the annotation of comments for a duration of approximately 1.5 to 3 hours. It is noteworthy that the dataset contained a higher proportion of offensive comments than the typical distribution in a user forum. The comments were sourced from a publicly accessible, moderated forum by DerStandard, ensuring that none of them could be categorized as illegal under Austrian law <cit.>. To mitigate potential distress, the annotators were explicitly informed that they had the option to cease annotation if they felt overwhelmed by the task without facing consequences (Appendix <ref>). Compensation for Annotators Participants in the annotation campaign are predominantly Master students engaged in courses focused on introductory language technology, data annotation, and natural language processing. We consider hands-on experience in annotation tasks to be highly valuable for these students, as it equips them with the necessary skills to potentially design annotation tasks in the future and to be aware of potential pitfalls and difficulties of such tasks. Moreover, we are confident that the expected workload of 1.5 to 3 hours is suitable for the participants. The annotators were informed about the publication of the data and as data annotation is a tedious task, they received a comprehensive compensation through course credits for their efforts. Risks of Publication of the Data There is a potential for exploitation of our results to generate offensive online content that may elude contemporary detection systems. We believe that these risks are manageable when weighed against the improvement of the detection of offensive statements facilitated by AustroTox. We urge researchers and practitioners to uphold the privacy of the authors of posts when working with the data. And while the data is publicly available on the website of DerStandard, in order to preserve the privacy of users, we replace mentions of users and URLs with special tokens in AustroTox. DerStandard agrees to the publication of the data. Regarding copyright concerns, simple comments in online forums are usually not covered by copyright law §Section 1 UrhG (Austrian Copyright Act). § LIMITATIONS Time Span and Range of Topics The dataset comprises comments from November 4, 2021, to November 10, 2021 and therefore consists of a higher proportion of COVID-19 related topics. However, we source comments appearing in over 532 varying articles and discussion forums, ensuring diversity in topics in the dataset. The articles and forums where the comments appear stem from a broader time period. Thus, posts in our dataset refer to more events than the ones covered in between November 4, 2021, to November 10, 2021. Subjectivity In the realm of human data annotation for tasks related to sentiment, a degree of subjectivity exists. Due to the small load of comments per annotator, a large pool of annotators, and limited time allocated for training the annotators, this subjective element is reflected by the dataset and is learnt by the models during the training process. We posit that the token-level annotations included in AustroTox elevate the quality of annotation by providing clearer guidance to the annotators. Furthermore, we choose an aggregation approach that prioritizes sensitivity, where a comment requires fewer votes to be labeled as offensive compared to the number of votes needed to consider it non-offensive. We posit that this method mitigates lower agreement levels. Additionally, upon publication, we publish the disaggregated annotations for binary offensiveness. Limitations of Experiments Our computational experiments on the dataset are not yet exhaustive, as certain models, notably larger encoder-only Transformer models trained on German data (like GELECTRA-large), may outperform the encoder-only Transformer models examined in our study. We did not conduct hyperparameter-search which would further improve the outcomes of the evaluation. Moreover, it is essential to include more thorough testing of open-source LLMs alongside the GPT models. Our future aim is to deliver a more comprehensive evaluation on the dataset, enabling a nuanced consideration of factors influencing the performance difference of fine-tuning and zero-shot classification on larger models. § ANNOTATION GUIDELINES We depict the annotation guidelines used for the dataset creation. For the paper, we translate the example comments (except for vulgarities in dialect) with DeepL and correct the translations manually where the overall meaning of the comment is not preserved, note that we do not try to make the comment sound as if it was written by a native English speaker. We add the original comments as footnotes. Vulgar passages might not be vulgar in the English version, nevertheless, we indicate the annotation of the original German comment. Following <cit.>, we obfuscate vulgarities. Originally, we annotated insults and incites to hate or violence. Nevertheless, by majority voting only 60 comments were labelled as Incite to hate or violence, therefore, we merge the two classes into an Offensiveness class. §.§ Generalities Your mental health If the comments are disturbing for you, please stop annotating and contact us. This would of course not affect your grade, we would find a solution together on how to grade you. Classes There are three exclusive classes. You select a class for each comment. See Section Classes for details. Spans There are four spans. The spans are used for tagging passages. In order to do so, mark the text you would like to tag and select one of the four tags for the spans. See Section Spans for details. Title of article You can see the title of the article which was commented at the right side of the page. Please take it into account when classifying and tagging the comment. Subjectivity It is often hard to classify a comment with one of these classes as there are many nuances of insults and incites to hate and violence (such as for example irony). If you are not sure about how to label a comment, choose the most reasonable option to you, sometimes there is no right or wrong. German level If you realize that the level of German in these comments is too hard for you or there is too much dialect in them, don't hesitate to contact us, we can easily assign you the English task and you annotate the remaining comments in English. §.§ Class Insult An insult includes disparaging statements towards persons or groups of persons as well as towards other entities. Insults pursue the recognisable goal of disparaging the addressee or the object of reference. Examples: * Arguments? If this group of vaccination refusers could be reached with reason and arguments, our situation would be different. [Argumente?. Wenn diese Bagg*ge von Impfverweigerern mit Vernunft und Argumenten erreichbar wäre, sähe unsere Lage anders aus.] * Class Insult * Span Target Group: vaccination refusers * Span Vulgarity: group * They're definitely crazy... I'm not allowed to go to the Inn, I'm not allowed to ski, I'm supposed to make up for their losses with my taxes. You're out of your mind! And no thanks, you can save yourselves the st*pid slogans, well then just get vaccinated. [Die sp*nnen doch endgültig.. Ich darf nicht ins Gasthaus, ich darf nicht Skifahren, soll deren Ausfälle mit meinen Steuern ausgleichen. Ihr h*bt sie doch nicht alle! Und nein danke, die besch**erten Sprüche na dann geh halt impfen, könnt ihr euch gleich sparen.] * Class Insult * Span Target Group: They * Span Vulgarity: crazy * Span Vulgarity: out of your mind * Span Vulgarity: st*pid * LOL Our vaccine refusers in the football club are all pissed off because they can't go to the Christmas party now XD g**ns.[Lol Unsere impfverweigerer im Fußball Verein sein voll angfressn weil's jetzt ned auf die Weihnachtsfeier gehen können XD D*llos] * Class Insult * Span Target Group: vaccination refusers in the football club * Span Vulgarity: g**ns. * Should i applaud to that sh*t? [Soll i zu dem Sch**ß auch noch applaudieren?] * Class Insult * Span Target Other: sh*t * Span Vulgarity: sh*t §.§ Class Incite to Hate or Violence An incite to hate or violence against a person or a group of people. It is hard to draw the line between insults and incites to hate, as insults always somewhat incite hate. Try to decide for yourself what you actually consider hate and what is more of an insult. Insults are usually less severe. If a comment includes an insult and an incite to hate or violence, please choose the class Incite to hate or violence. Examples: * You just have to stand up to them. Yesterday you could see migrants with tools trying to cut the fence. You simply have to drive over it. [Da muss man einfach hart dagegen halten. Gestern konnte man ja Migranten mit Werkzeug sehen, die den Zaun zerschneiden wollten. Da gehört einfach drübergefahren.] * Class Incite to hate or violence * Span Target Group: migrants * Refugees should face the squad! * Class Incite to hate or violence * Span Target Group: Refugees * All Austrian people are dirty! * Class Incite to hate or violence * Span Target Group: Austrian people §.§ Class None None of the above classes. Examples: * KC-GB is already a pretty weak game... Will Mahomes find his old strength at some point this season? I don't think he will this year...[KC-GB ist schon ein ziemlich schwaches Spiel...Ob Mahomes irgendwann in dieser Saison zu alter Stärke findet? Glaub heuer wird des nix mehr...] * Class None * they think something will happen with these rules. And the Burgenlanders look st*pid again. That's how "motivation" works. [die glauben mit diesen Regeln passiert irgendwas. Und die Burgenländer schauen wieder bl*d aus. So funktioniert M̈otivation] * Class None * Span Vulgarity: st*pid * yes anyway, the Ministry of Finance obviously paid for it ... wtf [ja eh hat ja offensichtlich das finanzministerium bezahlt ... wtf] * Class None * Span Vulgarity: wtf * Oh, oh, why are we sh*t? [Oh, oh, warum samma sch**ße?] * Class None * Span Vulgarity: sh*t * She's upset about 2G but what does she suggest? Should we just let people die? [Sie regt sich über 2G auf aber was schlägt sie denn vor? Sollen wir die Leute einfach verrecken lassen? ] §.§ Spans In contrast to the classes, comments can have zero, one, or multiple spans. Insults and incitements to hate or violence are targeted at an individual person, a group of persons, or something else, such as, for example, democracy (Target Other). Therefore, for comments classified as Insult or Incite to hate or violence, you tag at least one person or thing as Target Individual, Target Group, or Target Other. For the class None, you don't tag Target Individual, Target Group, or Target Other. Vulgar passages may be found in comments of all three classes. §.§ Span Vulgarity Obscene, foul or boorish language that is inappropriate or improper for civilised discourse. * Example What the F*****ck... Magnificent Interception. [What the F*****ck... Grandiose Interception.] * Class None * Span Vulgarity: What the F*****ck Examples of vulgar expressions: P*mmel, M*st, schw*chsinnig, Sh*tty, Gsch*ssene, v*rtrottelten, D*mmling-Däumlinge, WTF, Schn*sel, v*rsaut, W*ppla, K*ffer, zum T**fel, Tr*ttel, D*pp, D*dl, Sch**xx, d*mn, Hosensch**sser-Nerds, N*zipack, Gsch*ssene, Ges*ndel, verbl*dete, Schw*chköpfe, sch**ßen gehen, Schattenschw*nzler, p*ppn, P**fkinesen §.§ Span Target Individual The target of an insult or an incite to hate or violence is a single person not insulted based on shared group characteristics. * Example f*ck you, Max Mustermann * Class Insult * Span Target Individual: Max Mustermann * Span Vulgarity: f*ck §.§ Span Target Group The target of an insult or an incite to hate or violence is a group of persons or an or an individual insulted based on shared group characteristics. * Example You have to treat id**ts like id**ts! Therefore, of course, lockdown to save the economy! [Mit Id**ten muss man wie Id**ten umgehen! Daher selbstverständlich Lockdown, um die Wirtschaft zu retten!] * Class Insult * Span Target Group: id**ts * Span Vulgarity: id**ts §.§ Span Target Other The target of an insult or an incite is not a person or a group of people. Examples: * Please what kind of st*pid regulation is this USRTOK [Bitte was ist denn das für eine d*mme Regelung USRTOK] * Class Insult * Span Target Other: regulation * Span Vulgarity: st*pid * yes please f*ck around a little longer, finally make some decisions, the actions of our government are a disaster [ja bitte sch**ßts noch a bisserl länger um, endlich mal Entscheidungen treffen, das Vorgehen unserer Regierung ist ein Desaster] * Class Insult * Span Target Other: government * Span Vulgarity: f*ck around a little longer § DETAILS ON DATASET CREATION We exclusively select original comments for the dataset while excluding responses to other comments. We use the annotation tool LightTag [<https://www.lighttag.io/>] (Figure <ref>). Each annotator undertakes the task of annotating a volume ranging from 200 to 300 comments. We use castro-2017-fast-krippendorff implementation for the Krippendorff's Alpha. Mentions of users and URLs are replaced with USRTOK and URLTOK. § DETAILS ON EXPERIMENTS We utilize OpenAI Copilot for code implementation. Models We use the following models (with licenses in parenthesis) suitable for fine-tuning for downstream tasks or for few-shot classification for our experiments: * BERT_de cased [<https://huggingface.co/bert-base-german-cased>] (MIT) * BERT-dbmdz[<https://huggingface.co/dbmdz/bert-base-german-cased>] (MIT) * GELECTRA [<https://huggingface.co/deepset/gelectra-base>] (MIT) * GBERT-base [<https://huggingface.co/deepset/gbert-base>] (MIT) * GBERT-large [<https://huggingface.co/deepset/gbert-large>] (MIT) * LeoLM 7B Chat [<https://huggingface.co/LeoLM/leo-hessianai-7b-chat>] (LLAMA 2 COMMUNITY LICENSE AGREEMENT) * Mistral [<https://huggingface.co/mistralai/Mistral-7B-v0.1>] (Apache 2.0) * ChatGPT – gpt-3.5-turbo-1106 [<https://platform.openai.com/docs/models/gpt-3-5>] * GPT4 – gpt-4-1106-preview [<https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo>] Fine-Tuning We use the Transformer-models' implementation from the huggingface library with the default values (huggingface-hub version 0.17.3, Transformers version 4.34.0). For all models, we use a learning rate of 5e^-5, weight decay 0.01 with 200 warm-up steps. We use a per-device train batch size of 8 examples. We train the models for a maximum of 10 epochs, with early early stopping at a patience of 3 epochs. We keep the model with the best Binary or Micro F1 score. We use four Nvidia GTX 1080 TI GPUs with 11GB RAM each to train each model. Training all offensiveness classification models took about 20 GPU hours in total, whereas vulgarity and target extraction models 16 GPU hours each. Prompting Figure <ref> contains the zero-shot prompt for the multitask-setup. For the Llama 2 based models, we add the Llama-style start and end spans to the prompts. To evaluate LeoLM and Mistral AI we use a cluster with eight NVIDIA GeForce RTX 3090 GPUs (24 GB RAM per GPU). We estimate two GPU hours per evaluated model. Evaluation We compute the Micro F1 for the target classification using the framework of .
http://arxiv.org/abs/2406.08891v1
20240613074421
Robust Information Retrieval
[ "Yu-An Liu", "Ruqing Zhang", "Jiafeng Guo", "Maarten de Rijke" ]
cs.IR
[ "cs.IR" ]
Robust Information Retrieval]Robust Information Retrieval § ABSTRACT Beyond effectiveness, the robustness of an information retrieval (IR) system is increasingly attracting attention. When deployed, a critical technology such as IR should not only deliver strong performance on average but also have the ability to handle a variety of exceptional situations. In recent years, research into the robustness of IR has seen significant growth, with numerous researchers offering extensive analyses and proposing myriad strategies to address robustness challenges. In this tutorial, we first provide background information covering the basics and a taxonomy of robustness in IR. Then, we examine adversarial robustness and out-of-distribution (OOD) robustness within IR-specific contexts, extensively reviewing recent progress in methods to enhance robustness. The tutorial concludes with a discussion on the robustness of IR in the context of large language models (LLMs), highlighting ongoing challenges and promising directions for future research. This tutorial aims to generate broader attention to robustness issues in IR, facilitate an understanding of the relevant literature, and lower the barrier to entry for interested researchers and practitioners. [ Takeru Matsudaaddr2,t2[label=e2]matsuda@mist.i.u-tokyo.ac.jp Received day month year; Accepted ... ================================================================ 0 § TUTORIAL INFORMATION On-site tutorial At least two presenters will attend SIGIR 2024 in person to deliver this tutorial and engage in Q&A with the audience. Intended audience The tutorial is open to those with a basic understanding of information retrieval (IR) and natural language processing (NLP). It will appeal to both academic researchers specializing in IR/NLP and industry practitioners. Length This tutorial is scheduled to last for three hours. § PRESENTERS Yu-An Liu is a Ph.D. student at the Institute of Computing Technology, Chinese Academy of Sciences. He obtained his B.Eng. from Shandong University. His research centers on information retrieval, with a particular focus on adversarial and out-of-distribution robustness of IR systems. He is the first author of several full papers on adversarial robustness in IR, presented at SIGIR’23 <cit.>, CIKM’23 <cit.>, AAAI’24 <cit.>, and SIGIR'24 <cit.>, as well as a paper on OOD robustness in IR at Gen-IR@SIGIR'23 <cit.>. Ruqing Zhang is an Associate Researcher at the Institute of Computing Technology, Chinese Academy of Sciences. Her recent research focuses on information retrieval, with a particular emphasis on the robustness of information retrieval systems, trustworthy retrieval through the lens of causality, and generative information retrieval. She has authored several papers in the field of robust information retrieval <cit.>. Additionally, she was a co-organizer of tutorials and workshops at SIGIR, WWW, ECIR, and SIGIR-AP, e.g., Gen-IR workshops at SIGIR'23 and SIGIR'24, and Gen-IR tutorials at SIGIR-AP'23/WWW'24/ECIR'24. Jiafeng Guo is a Researcher at the Institute of Computing Technology, Chinese Academy of Sciences (CAS) and a Professor at the University of Chinese Academy of Sciences. He is the director of the CAS key lab of network data science and technology. He has worked on a number of topics related to Web search and data mining, with a current focus on neural models for information retrieval and natural language understanding. He has received multiple best paper (runner-up) awards at leading conferences (CIKM’11, SIGIR’12, CIKM’17, WSDM’22). He has been (co)chair for many conferences, e.g., reproducibility track co-chair of SIGIR'23, workshop co-chair of SIGIR'21, and short paper co-chair of SIGIR'20. He served as an associate editor for ACM Transactions on Information Systems and Information Retrieval Journal. Jiafeng has previously taught tutorials at many IR-related conferences. Maarten de Rijke is a Distinguished University Professor of Artificial Intelligence and Information Retrieval at the University of Amsterdam. His research is focused on designing and evaluating trustworthy technology to connect people to information, particularly search engines, recommender systems, and conversational assistants. He is the scientific director of the Innovation Center for Artificial Intelligence and a former editor-in-chief of ACM Transactions on Information Systems and of Foundations and Trends in Information Retrieval, and a current co-editor-in-chief of Springer’s Information Retrieval book series, (associate) editor for various journals and book series. He has been general (co)chair or program (co)chair for CIKM, ECIR, ICTIR, SIGIR, WSDM, WWW, and has previously taught tutorials at these same venues and AAAI. § MOTIVATION Information retrieval (IR) systems are an important way for people to access information. In recent years, with the development of deep learning, deep neural networks have begun to be applied in IR systems <cit.>, achieving remarkable effectiveness. However, beyond their effectiveness, these neural IR models also inherit the inherent robustness flaws of neural networks <cit.>. This poses a hindrance to their widespread application in the real world. In the past few years, the issue of the robustness of IR has received wide attention, e.g., <cit.> analyzed the robustness of neural ranking models (NRMs), and a perspective paper on competitive search <cit.> discussed adversarial environments in search engines. Since then, there has been a lot of work that focuses on different robustness aspects in IR, such as adversarial robustness <cit.>, out-of-distribution (OOD) robustness <cit.>, performance variance <cit.>, robustness under long-tailed data <cit.>, and on the corresponding improvement options. Today, the research community can effectively scrutinize IR models leading to more robust and reliable IR systems. To ensure the quality of the tutorial, we will focus on the two most widely studied types of robustness issues, namely adversarial robustness and OOD robustness. There are many analyses and suggestions for improvement around these two robustness issues, but it has not yet been systematically organized. Through this tutorial, we aim to summarize and review the progress of robust IR to attract attention and promote widespread development. § OBJECTIVES 1. Introduction We start by reminding our audience of the required background and introducing the motivation and scope of the robustness issue in IR in our tutorial. 2. Preliminaries In IR, robustness signifies an IR system's consistent performance and resilience against diverse unexpected situations. There is a large volume of work that covers many aspects of IR robustness, e.g., [label=(*)] * Adversarial robustness <cit.>, which focuses on the ability of the IR model to defend against malicious adversarial attacks aimed at manipulating rankings; * OOD robustness <cit.>, which measures the performance of an IR model on unseen queries and documents from different distributions of the training dataset; * Performance variance <cit.>, which emphasizes the worst-case performance across different individual queries under the independent and identically distributed (IID) data; and * Robustness under long-tailed data <cit.>, which refers to the capacity to effectively handle and retrieve relevant information from less common, infrequently occurring queries or documents. In this tutorial, we focus on adversarial robustness and OOD robustness, which have received the most attention. Interest in adversarial robustness stems largely from the widespread practice of search engine optimization (SEO) <cit.>. Concerns about OOD robustness are primarily due to the need for adaptation across diverse and complex real-world scenarios. Moreover, as large language models (LLMs) are being integrated into IR, new robustness challenges emerge; LLMs also offer novel opportunities for enhancing the robustness of IR systems. Building on these preliminaries, we will cover adversarial robustness, OOD robustness, and robust IR in the age of LLMs. 3. Adversarial robustness The web is a competitive search environment, which can lead to the emergence of SEO, in turn causing a decline in the content quality of search engines <cit.>. With the gradual rise of SEO, traditional web spamming <cit.> started to become an effective way to attack IR systems. However, this approach based on keyword stacking is easily detected by statistical-based spamming detection methods <cit.>. Adversarial attacks. In order to exploit the vulnerability of neural IR models, many research works have simulated real black-hat SEO scenarios and proposed a lot of adversarial attack methods. [label=(*)] * First, we introduce the differences between attacks in IR and CV/NLP, including task scenarios and attack targets; * Then, we present adversarial retrieval attacks <cit.> against the first-stage retrieval models, including the task definition and evaluation. Current retrieval attack methods mainly include corpus poison attacks <cit.>, backdoor attacks <cit.>, and encoding attacks <cit.>; and * Finally, we introduce adversarial ranking attacks <cit.> against NRMs with task definitions and evaluation setups. These include word substitution attacks <cit.>, trigger attacks <cit.>, prompt attacks <cit.>, and multi-granular attacks <cit.>. Adversarial defense. To cope with adversarial attacks, research has proposed a series of adversarial defense methods to enhance the robustness of IR models. [label=(*)] * We introduce the objective and evaluation of IR defense tasks. Based on these defense principles, adversarial defense methods in IR can be classified as attack detection, empirical defense, and certified robustness; * We turn to attack detection, which includes perplexity-based, linguistic-based, and learning-based detection <cit.>; * We present empirical defenses, which encompass data augmentation <cit.>, traditional adversarial training <cit.>, and theory-guided adversarial training <cit.>; and * We introduce the certified robustness in IR <cit.>. 4. Out-of-distribution robustness In real-world scenarios, search engines are in an ever-changing data environment, and new data are often not IID with the training data. Therefore, the ability to generalize to OOD data or not is the basis for the evaluation of IR systems in terms of OOD robustness <cit.>. OOD generalizability on unseen documents. In IR, the OOD robustness scenarios that have been examined can be categorized into unseen documents and unseen queries. The unseen documents scenario may be caused by adaptation to new corpus <cit.> or by incrementation of original corpus <cit.>. [label=(*)] * Adaptation to new corpus usually refers to the phenomenon that the corpus on which an IR model is trained is not in the same domain as the corpus on which it is tested. Due to the overhead of retraining, the performance of the model on the new domain needs to be guaranteed under zero/few-shot scenario, which is usually solved by data augmentation <cit.>, domain modeling <cit.>, architectural modification <cit.>, scaling up the model capacity <cit.>; and * Incrementation of original corpus refers to the scenario where new documents are continuously added to the corpus with potential distribution drift. In this situation, the IR model should effectively adapt to the evolving distribution with the unlabeled new-coming data, which is usually solved by continual learning <cit.>. OOD generalizability on unseen queries. Unseen queries concern query variations <cit.> and unseen query types <cit.>. [label=(*)] * The query variations are usually different expressions of the same information need <cit.> which may impact the effectiveness of IR models. Many noise-resistant approaches such as self-teaching methods <cit.>, contrastive learning methods <cit.>, hybrid methods <cit.> have been proposed for neural IR models; and * Unseen query types refer to the unfamiliar query type with new query intents <cit.>. Domain regularization <cit.> is effective for dealing with new query types. 5. Robust IR in the age of LLMs [label=(*)] * We first discuss the potential robustness challenges with applications of LLMs in IR, such as retrieval augmentation <cit.>, and LLMs for ranking <cit.>; and * Then, we will discuss how LLMs can be used to enhance the robustness of IR systems. These explorations will inspire many novel attempts in this area. 6. Conclusions and future directions We conclude our tutorial by discussing several important questions and future directions, including [label=(*)] * There is a diverse focus on the robustness of IR models from multiple perspectives. Establishing a unified benchmark of analysis to systematically analyze the robustness of all aspects of existing models. * For adversarial robustness, existing work on adversarial attacks focuses on specific stages (first-stage retrieval or re-ranking) <cit.> in IR systems. Customizing adversarial examples to make them effective for all stages is challenging. Therefore, one potential future direction is to explore how we can design a general unified attack method that can cater to every IR stage. * For OOD robustness, the main limitation of existing work is the difficulty of seeing enough diverse domain data in advance, leading to insufficient transfer capabilities of the model. Using the generation capabilities of LLMs to synthesize corpora for adaptation domains seems to be a promising direction. § RELEVANCE TO THE IR COMMUNITY In recent years, a considerable number of tutorials focusing on the topic of robustness have emerged across disciplines within computer science. In KDD'21 <cit.>, CVPR'21 <cit.>, and AAAI'22, there were tutorials on robustness for AI and computer vision. In EMNLP'21 <cit.> and EMNLP'23 <cit.>, there were tutorials on robustness and security challenges in NLP. The focus of these tutorials was not on search tasks and models. Search and ranking is a core theme at SIGIR. Evaluation, another core theme at SIGIR, encompasses multiple critical criteria beyond effectiveness for evaluating an IR system. Robust information retrieval aligns well with these core themes. Recently, robust information retrieval has gained considerable attention as more and more work is now devoted to analyzing and improving the robustness of information retrieval systems <cit.>. Our tutorial will describe recent advances in robust information retrieval and shed light on future research directions. It would benefit the community and help to encourage further research into robust IR. § FORMAT AND DETAILED SCHEDULE A detailed schedule for our proposed half-day tutorial (three hours plus breaks), which is aimed at delivering a high-quality presentation within the selected time frame, is as follows: * Introduction (15 minutes) * Introduction to robust IR: motivation and scope * Tutorial overview * Preliminaries (20 minutes) * Definition of robustness in IR * Taxonomy of robustness in IR * Adversarial Robustness (50 minutes) * Traditional Web spamming * Adversarial attacks * Comparison: IR attacks vs. CV/NLP attacks * Retrieval attacks: definition, evaluation, method, etc. * Ranking attacks: definition, evaluation, method, etc. * Adversarial defense * IR defense tasks: objective & evaluation * Empirical defense: adversarial training, detection, etc. * Theoretical defense: certified defense, etc. * Out-of-distribution Robustness (45 minutes) * OOD generalizability in IR * OOD generalizability on unforeseen corpus * Definition & evaluation * Adaptation to new corpus * Incrementation of original corpus * OOD generalizability on unforeseen queries * Definition & evaluation * Query variation * Unseen query type * Robust IR in the Age of LLMs (20 minutes) * New challenges to IR robustness from LLMs * New solutions for IR robustness via LLMs * Challenges and Future Directions (20 minutes) * QA Session (10 minutes) § TUTORIAL MATERIALS We plan to make all teaching materials available online for attendees, including: [label=(*)] * Slides: The slides will be made publicly available. * Annotated bibliography: This compilation will contain references listing all works discussed in the tutorial, serving as a valuable resource for further study. * Reading list: We will provide a reading list with a compendium of existing work, open-source code libraries, and datasets relevant to the work discussed in the tutorial. We intend to ensure that all instructional materials are available online.[<https://robust-information-retrieval.github.io>] Moreover, we grant permission to include slides and video recordings in the ACM anthology. This work was funded by the National Key Research and Development Program of China under Grants No. 2023YFA1011602, the Strategic Priority Research Program of the CAS under Grants No. XDB0680102, the project under Grants No. JCKY2022130C039, and the Lenovo-CAS Joint Lab Youth Scientist Project. This work was also (partially) funded by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, <https://hybrid-intelligence-centre.nl>, project LESSEN with project number NWA.1389.20.183 of the research program NWA ORC 2020/21, which is (partly) financed by the Dutch Research Council (NWO), project ROBUST with project number KICH3.LTP.­20.006, which is (partly) financed by the Dutch Research Council (NWO), DPG Media, RTL, and the Dutch Ministry of Economic Affairs and Climate Policy (EZK) under the program LTP KIC 2020-2023, and the FINDHR (Fairness and Intersectional Non-Discrimi­nation in Human Recommendation) project that received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No 101070212. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. ACM-Reference-Format
http://arxiv.org/abs/2406.07973v1
20240612075532
Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey
[ "Shang Wang", "Tianqing Zhu", "Bo Liu", "Ding Ming", "Xu Guo", "Dayong Ye", "Wanlei Zhou" ]
cs.CR
[ "cs.CR" ]
shang.wang-1@student.uts.edu.au University of Technology Sydney Australia [1] tqzhu@cityu.edu.mo City University of Macau China bo.liu@uts.edu.au University of Technology Sydney Australia xxx@xxx.com CSIRO Australia 1906106220@qq.com Dayong.ye@uts.edu.au University of Technology Sydney Australia wlzhou@cityu.edu.mo City University of Macau China § ABSTRACT With the rapid development of artificial intelligence, large language models (LLMs) have made remarkable progress in natural language processing. These models are trained on large amounts of data to demonstrate powerful language understanding and generation capabilities for various applications, from machine translation and chatbots to agents. However, LLMs have exposed a variety of privacy and security issues during their life cycle, which have become the focus of academic and industrial attention. Moreover, these risks LLMs face are pretty different from previous traditional language models. Since current surveys lack a clear taxonomy of unique threat models based on diverse scenarios, we highlight unique privacy and security issues based on five scenarios: pre-training, fine-tuning, RAG system, deploying, and LLM-based agent. Concerning the characteristics of each risk, this survey provides potential threats and countermeasures. The research on attack and defense situations LLMs face can provide feasible research directions, making more areas reap LLMs' benefits. <ccs2012> <concept> <concept_id>00000000.0000000.0000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>00000000.00000000.00000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>00000000.00000000.00000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>100</concept_significance> </concept> <concept> <concept_id>00000000.00000000.00000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> [500]Security and privacy Human and societal aspects of security and privacy 20 February 2018 [revised]12 March 2018 [accepted]5 June 2018 Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey Wanlei Zhou June 17, 2024 =================================================================================== § INTRODUCTION With the rapid development of artificial intelligence (AI) technology, researchers have progressively expanded the scale of training data and model architectures <cit.>. Learned with massive amounts of data, extremely large-scale models demonstrate impressive language understanding and generation capabilities <cit.>, marking a significant breakthrough in natural language processing (NLP). Referred to as Large Language Models (LLMs), those models provide strong support for machine translation, text summary, automatic coding, and other NLP tasks. However, the in-depth application of LLMs across various industries, such as chatbots <cit.> and medical diagnosis <cit.>, expose their life cycle to various privacy and security issues. More importantly, LLMs face unique privacy and security risks <cit.> that never posed in traditional language models, demanding higher requirements for privacy protection and security defenses. §.§ Motivation Compared to traditional single-function language models, LLMs demonstrate remarkable understanding abilities, deploying across various applications such as logical reasoning and code generation. Recently, an increasing number of companies are launching universal or domain-specific LLMs, such as ChatGPT <cit.> and LLaMA <cit.>, offering users with versatile and intelligent services. However, due to LLMs' unique power and structures, throughout their life cycle, LLMs meet with unique security and privacy threats from society compared with previous single-function models or small-scale models <cit.>. Existing surveys describe various risks and countermeasures by method type, and there is a lack of exploration of those unique threats. We divide LLMs' life cycle into five scenarios and discuss the unique privacy and security risks <cit.> at each scenario. Unique privacy risks. When learning language knowledge from the training data, LLMs would memorize the data <cit.>. It allows adversaries to steal private information. For example, Carlini et al. <cit.> found that prompts with specific prefixes could make GPT-2 generate content containing personal information, such as email addresses and phone numbers. When running inference, the unlimited use of LLMs would provide adversaries with opportunities to steal model-related information <cit.> and functionalities <cit.>. In summary, throughout the life cycle of LLMs, adversaries can steal or infer various sensitive information, thus threatening specific individuals and institutions. Unique security risks. Since the training data contains malicious, illegal, hallucinatory, and biased texts, LLMs inevitably learn negative language knowledge. Moreover, malicious third parties responsible for developing LLMs in outsourcing scenarios can affect these models' integrity and utility by poisoning attacks <cit.> and backdoor attacks <cit.>. For example, an attacker could implant a backdoor in an LLM-based automated customer service system, causing the system to respond automatically with a predetermined fraudulent link when asked specific questions. When running inference, the unlimited use of LLMs allows adversaries to obtain targeted responses <cit.>, such as fake news, phishing sites, and illegal content. In summary, adversaries can exploit internal and external vulnerabilities in LLMs throughout their life cycle to implement various security attacks <cit.>. These unique privacy and security risks would pose more severe attacks to our society, such as reducing the credibility of LLMs and hindering their application or popularity. At the same time, these risks threaten the safety of LLM owners and users, violating existing laws, such as the General Data Protection Regulation (GDPR). In this context, systematic research on LLMs' unique privacy and security issues is lacking. It prompts us to analyze, categorize and summarize the existing research to complete a comprehensive survey in this field. This research can help the technical community to develop safe and reliable LLM-based applications, making more areas reap LLMs' benefits. §.§ Comparison with existing surveys Research on LLMs' privacy and security is rapidly developing while existing surveys need a comprehensive taxonomy and summary. As shown in Table <ref>, we compared our survey with the existing surveys proposed since June 2024. The main differences lie in four aspects. ∙ Threat scenarios. We explicitly divided the life cycle of LLMs into five threat scenarios, which most surveys overlooked. Each scenario corresponds to multiple threat models, such as a pre-training scenario with malicious contributors, developers and users, like Figure <ref>. For each threat model, adversaries can compromise the LLMs' privacy and security via various attack methods. ∙ Taxonomy. We categorized the risks LLMs face based on their life cycle. Other surveys lacked such fine-grained taxonomy, making distinguishing the characteristics of various risks hard. ∙ Unique privacy/security. Concerning privacy/security risks, we focused on the unique risks to LLMs. Meanwhile, we also explored the common risks of all language models. In response to these risks, we summarized potential countermeasures. However, most of the surveys lacked depth and comprehensiveness in this analysis, simply listing attacks and defense methods. ∙ Other unique privacy/security scenarios. We incorporated LLMs with three scenarios: federated learning, machine unlearning and watermarking. Since these scenarios involve privacy/security issues, we explored them in Section <ref> and gave a systematic study that most surveys overlooked. §.§ Contributions of this survey LLMs have a strong emergent ability, which is applied to many industries. However, many vulnerabilities in their life cycle pose privacy and security risks. It seriously hinders the application and extension of LLMs. Hence, we aim to analyze, categorize, and summarize privacy and security issues. Specifically, we propose a novel taxonomy for these risks, which offers a clear and comprehensive analysis of their goals, causes, and implementation methods. We hope this survey can provide researchers with feasible research directions and challenges. The main contributions of this survey are as follows. ∙ Taking the LLMs' life cycle as a clue, we considered risks and countermeasures in five different scenarios from those of traditional models. These scenarios include pre-training, fine-tuning, RAG system, deploying and LLM-based agent. ∙ For each scenario, we highlighted the differences in privacy and security risks between LLMs and traditional models. Specifically, we described unique risks to LLMs and common risks to all models. Given a risk, we detailed its attack goal and capacity and combed through the related research using the attack method. Meanwhile, we analyzed the mapping between risks and countermeasures, thus listing potential defense methods. ∙ We conducted an in-depth discussion on the other unique privacy/security scenarios for LLMs, including federated learning, machine unlearning and watermarking. ∙ We provided critical and comprehensive explorations on LLMs' privacy and security issues and pointed out feasible research directions. § PRELIMINARIES §.§ Definition of LLM LLMs represent a revolution in the field of NLP. To improve the efficiency of text processing, researchers proposed pre-trained language models based on transformers. Google released the BERT model that uses bidirectional transformers, solving downstream tasks in the `pre-train + fine-tune' paradigm. Subsequently, they expanded the scale of pre-trained models, more than billions of parameters (e.g., GPT-3), and introduced novel techniques like zero-shot or few-shot learning. Compared to small-scale pre-trained models, large-scale pre-trained models showcase remarkable emergent abilities not found in these regular-scale models, capable of handling unseen tasks through in-context learning <cit.> (i.e., without retraining) and instruction tuning <cit.> (i.e., lightweight fine-tuning). Recent studies summarize four key characteristics that LLMs should have <cit.>. First, an LLM could understand various texts, like training data and natural language instructions. Second, it should be able to solve unseen NLP tasks without updating the model parameters. Third, it should generate high-quality texts that align with humans when providing designed prompts. Fourth, an LLM should showcase contextual awareness considering specific factors such as domain expertise. Moreover, Wei et al. <cit.> found that language models with more than 1 billion parameters have jump performance on multiple NLP tasks. Therefore, we believe that LLMs must have more than a billion parameters. Numerous institutions have developed their LLMs with these characteristics. §.§ Traditional privacy and security risks Recent research on privacy and security risks in artificial intelligence focuses on small-scale models. For privacy risks, small-scale models' life cycle contains confidential information like raw data and model details. It could lead to severe economic losses if this information is leaked <cit.>. Raw data exposes PII, such as facial images. Reconstruction attacks <cit.> and model inversion attacks <cit.> can steal raw data using gradients or logits. In addition, member and attribute information is sensitive. Taking medical tasks as an example, adversaries can adopt membership inference attacks <cit.> to judge if an input belongs to the training set, revealing some users' health conditions. Model details have significant commercial value and are subject to model extraction attacks <cit.>, targeting black-box victim models to obtain substitute counterparts or partial model information (e.g., network architecture and optimization algorithm) via multiple queries. Then, adversaries who know partial model details could launch stronger privacy and security attacks. For security risks, small-scale models face poisoning attacks <cit.> that compromise model utility by modifying the training data. A backdoor attack is a variant of poisoning attacks <cit.>. It can inject hidden backdoors into the victim model by manipulating training data or model parameters, thus controlling the returned outputs. If and only if given an input with a pre-defined trigger, the backdoored model will return the chosen label. During inference, adversarial example attacks <cit.> craft adversarial inputs by adding imperceptible perturbation, making them cause incorrect predictions. In summary, these security attacks can compromise model utility and integrity and severely threaten public safety in practical applications. Taking sentiment analysis tasks on social media platforms as an example, all attacks could lead to racially discriminatory texts being classified as benign, which results in the widespread dissemination of harmful information. The life cycle of LLMs shares similarities with and differs from small-scale models. From traditional privacy and security risks, we outline various potential risks LLMs face in different scenarios, as shown in Figure <ref>. Each scenario has a unique data type and implementation process, showing different risks and their countermeasures. We will start with each scenario and its threat models. § THREAT SCENARIOS FOR LLMS Although many institutions have disclosed the implementation methods of their LLMs, some details are still unknown. We collect literature and divide the life cycle of an LLM into five scenarios at a finer granularity rather than only training and inference phases. Figure <ref> gives these unique threat scenarios: pre-training LLMs, fine-tuning LLMs, retrieval augmented generation (RAG) system, deploying LLMs and deploying LLM-based agents. Then, we list extended risks for each scenario and use underlined texts to highlight unique risks for LLMs. §.§ Pre-training LLMs In this scenario, model developers collect a large corpus as a pre-trained data set containing books <cit.>, web pages (e.g., Wikipedia), conversational texts (e.g., Reddit), and code (e.g., Stack Exchange). Then, they use large-scale, Transformer-based networks and advanced training algorithms, enabling the models to learn rich language knowledge from vast amounts of unlabeled texts. After obtaining the pre-trained LLM, developers upload it to open-source community platforms to either gain profits or contribute to the community's development, as shown in Figure <ref>. In this case, we consider three malicious entities: contributors, developers, and users. * Malicious contributors. Unlike small-scale language models, the corpora involved in pre-training LLMs are so large that developers can not audit all the data, resulting in the inevitable existence of negative texts(e.g., toxic data and privacy data). These negative texts directly affect the safety of LLMs. For example, an LLM can learn steps to make a bomb from illegal data and feed the details back to the user. In this survey, we focus on the privacy and security risks posed by toxic data and privacy data without discussing the issues of fairness and hallucination. * Malicious developers. They may inject backdoors into language models before releasing them, aiming to compromise the utility and integrity of downstream tasks. If victim users download and deploy a compromised model, the attacker who knows the trigger can easily activate the hidden backdoor, thus manipulating the compromised model. * Malicious users. After downloading public models, they gain access to their information except for the training data, effectively becoming white-box attackers. Thus, these users can perform inference and data extraction attacks in a white-box setting. §.§ Fine-tuning LLMs In this scenario, users would customize LLMs for specific NLP tasks. They download pre-trained LLMs from open-source platforms and fine-tune them on the customized dataset. There are three fine-tuning methods: supervised learning, instruction and alignment tuning. The first manner is the commonly used training algorithm. For the second manner, the instruction is in natural language format containing a task description, an optional demonstration, and an input-output pair <cit.>. Through a sequence-to-sequence loss, instruction-tuning can help LLMs understand and generalize to unseen tasks. The third manner aligns LLMs' outputs with human preferences, like usefulness, honesty and harmlessness. To meet these goals, Ziegler et al. <cit.> proposed reinforcement learning from human feedback (RLHF). Figure <ref> shows two malicious entities: contributors and third parties. * Malicious contributors. In general, users need to collect specific samples that are used to fine-tune downstream tasks. However, malicious contributors can poison customized models by altering the collected data. In this scenario, the adversary can only modify a fraction of the contributed data without access to the fine-tuning. * Malicious third-parties. When outsourcing customized LLMs, users share their local data with third-party trainers with computational resources and expertise. However, malicious trainers could poison these customized LLMs to manipulate their responses before delivering them to users. For example, in a Question-Answer task, the adversary could manipulate the customized LLM to return misleading responses (e.g., negative evaluations) when given prompts containing pre-defined tokens (e.g., celebrity names). Compared to small-scale language models, fine-tuning LLMs involves two unique techniques: instruction-tuning and alignment-tuning. Therefore, malicious trainers pose two unique risks to LLMs: poisoning instruction-tuning and RLHF. In addition, we also consider a security risk common to all language models, such as poisoning supervised learning. §.§ RAG system The RAG system is a unique method to enhance the performance of LLMs. This technology does not retrain LLMs and is orthogonal to pre-training and fine-tuning. As shown in Figure <ref>, it constructs external knowledge bases. When given a prompt, the RAG system retrieves its context from the knowledge base and concatenates it, thus generating a high-quality response. Figure <ref> shows the detail of the RAG system and gives two malicious entities: contributors and users. * Malicious contributors. In general, users expect to construct extensive knowledge bases by collecting data from many sources. However, malicious contributors can poison the knowledge base to achieve backdoor and jailbreak attacks. In this case, the adversary can modify its knowledge base but can not access the inference process. * Malicious users. The knowledge bases used by the RAG system contain private and valuable information. Therefore, malicious users can design prompts to steal this information, violating the knowledge owners' privacy. Moreover, malicious users exploit the vulnerability in the knowledge bases to create jailbreak prompts, which can extract the training data. In this case, the adversary can only access the input interfaces of LLMs. §.§ Deploy LLMs Model owners deploy well-trained LLMs to provide specific services to users. Since LLMs can understand and follow natural language instructions, users could design specific prompts to achieve their goals. That is prompt engineering, like AUTOPROMPT <cit.>, Prompt Tuning <cit.> and P-Tuning <cit.>. As shown in Figure <ref>, the model owners only provide user access interfaces to minimize privacy and security risks. Therefore, we consider a black-box attacker who aims to induce various privacy and security risks by designing prompts. Then, we divide these risks by specificity to LLMs. * Unique risks for LLMs. Compared to small-scale language models, LLMs have safety guardrails. Thus, malicious users can design specific prompts to bypass these guardrails and obtain harmful or leaky outputs. Additionally, LLM prompts contain valuable and private information, and malicious users can perform prompt stealing attacks to violate the privacy of model owners. * Common risks to all language models. Concerning the knowledge boundaries of language models, malicious users can construct adversarial prompts by adding adversarial perturbations, making the model produce meaningless outputs. In addition, malicious users can design multiple inputs and perform black-box privacy attacks based on the responses, including reconstruction attacks, inference attacks, data extraction attacks and model extraction attacks. §.§ Deploy LLM-based agents LLM-based agents combine the strong semantic understanding and reasoning capacities of LLMs with the advantages of agents in task execution and human-computer interaction, presenting significant potential. These agents can process various tasks and have become a research hot spot. Compared with LLMs, LLM-based agents are autonomous, so they can plan and execute tasks independently according to user requirements rather than passively responding to input prompts. As shown in Figure <ref>, many privacy and security issues exist when deploying these agents. For example, a malicious user can query an LLM-based agent with designed prompts, thus accessing private data or implementing illegal actions. In this scenario, we consider two malicious entities: users and agents. * Malicious users. Due to alignment tuning, LLM-based agents also have safety guardrails. Thus, malicious users can perform jailbreak attacks to bypass the guardrails of these agents, thus obtaining harmful or leaky outputs. Compared to LLMs, an LLM-based agent performs more complex operations at query time, such as interacting with other LLM-based agents or open-source websites. These autonomous operations make jailbreak attacks more challenging. Similar to the attacker in Section <ref>, it only has access to the interfaces of LLM-based agents. * Malicious agents. Before deploying an LLM-based agent, attackers can inject a backdoor into the agent following Section <ref>. When processing a request with the trigger, an infected agent will perform a predefined action to achieve the attacker's goal. Especially in personal assistant applications, a backdoored agent will covertly send fraudulent text messages to users' emergency contacts when given a trigger query. In addition, an LLM-based agent may execute unauthorized interactions with other LLM-based agents or websites, exposing the private information of all users with whom these agents can come into contact. For a network with multiple agents interacting, humans cannot supervise the interactions between agents, resulting in malicious agents contaminating other entities in the network. § THE RISKS AND COUNTERMEASURES OF PRE-TRAINING LLMS Section <ref> gives three threat models when pre-training LLMs and Figure <ref> shows the detail of each adversary. We first describe privacy and security risks corresponding to these threat models. Then, we give existing countermeasures to provide researchers with more defense ideas. §.§ Privacy risks of pre-training LLMs In the early pre-training stage, developers must collect corpora, such as books, websites, and code bases. Compared with small-scale training data, a large corpus is complex for human audits and has many privacy risks. Inevitably, massive texts contain PII (e.g., names and addresses) and other sensitive information (e.g., health records). Kim et al. <cit.> found that the quality of corpora has an essential impact on the privacy of LLMs. Due to the excellent learning ability, LLMs would output the private information when given some specified prefixes <cit.>. For example, one of the training records is `Alice's email is alice08@gmail.com'. When using `Alice's email is' as a prompt, the trained LLM will output the sensitive information `alice08@gmail.com'. Once private information is leaked, it will seriously threaten users' privacy. The privacy risk malicious users pose is common to all language models. It indicates that pre-trained LLMs may leak private data. Specifically, adversaries can access the model's parameters and interface to steal the training data by running white-box privacy attacks. Nasr et al. <cit.> applied existing data extraction attacks to measure the privacy protection capabilities of open-source models such as Pythia and GPT-Neo. They found that the larger models generally leaked more data. Assuming a white-box adversary, Rahil et al. <cit.> changed the data extraction into an optimization problem and successfully stole the four digits hidden in the training data. Zhang et al. <cit.> attempted to recover training data for Transformer-based models. They used GPT-2 to generate fluent samples similar to the target domain and then optimized the generative model based on feedback from the victim model. This attack could bring the distribution of generated samples close to the private training data. Then, a more powerful extraction attack is proposed to steal targeted training data <cit.>. It induced model memory by soft prompt tuning of input prefixes and then used loss smoothing to improve the probability of generating the correct suffixes. Through the calibrated confidence estimation, the attacker could accurately select the most possible suffix to extract the targeted data. §.§ Security risks of pre-training LLMs Similar to the risks posed by privacy data, toxic data in corpora also leads to inherent risks in LLMs, as shown in Figure <ref>. The risks are not introduced by active adversaries but rather by an internal vulnerability unique to LLMs. Welbl et al. <cit.> and Huang et al. <cit.> defined toxic data as disrespectful and unreasonable language, including illegal texts, offensive texts, and threatening texts. As ousidhoum et al. <cit.> point out, the toxicity of LLMs is mainly due to the toxic content in the training data. Gehman et al. <cit.> found pre-trained language models such as GPT-2 were easy to output toxic content. It requires that model providers strengthen supervision of the output. Using the code generation model as an example, malicious prompts can cause it to produce codes with serious vulnerabilities, significantly threatening the safety of developed applications. Shaikh et al. <cit.> explored the CoT scenario in LLMs. They found that zero-shot CoT reasoning in sensitive domains increases a model's likelihood of producing toxic outputs. While Deshpande et al. <cit.> found role-playing can increase the probability that LLMs generate toxic content. Additionally, over-detection would degrade the performance of LLMs. Researchers need to consider this risk from multiple aspects to minimize the impact of toxic data. After pre-training, model developers will upload the trained LLMs to the open community for profit. In this case, a security risk is common to all language models. Malicious developers can access the training data and manipulate the model training process. Specifically, they can poison models through poison attacks or backdoor attacks, aiming to affect downstream tasks. Once the user deploys a poisoned downstream model, the attacker can construct specified prompts to obtain the desired outputs. Typically, poison attacks aim to disrupt model utility, mainly focusing on training data modification. Shan et al. <cit.> found limited training data for a specific concept and designed an efficient poisoning attack against text-to-image models. They bind the target concept to other images, which causes the model to produce meaningless images when given the selected concept. Backdoor attacks aim to compromise the model's integrity by injecting hidden backdoors <cit.>. As demonstrated in Figure <ref>, the adversaries set a trigger mode and target content. Then, they create a strong mapping between the two by modifying training data or manipulating model parameters. When given a backdoored prompt that carries the trigger, the compromised model would produce predefined backdoor behavior (e.g., misclassification). At the same time, the backdoored model would keep benign predictions for clean prompts without the trigger, like its clean counterpart. Some researchers first used static texts as triggers in the NLP domain, such as low-frequency words or sentences <cit.>. Therefore, Wallace et al. <cit.> used a gradient optimization algorithm to construct backdoored samples but not carry explicit triggers, improving the concealment of backdoor attacks. Qi et al. <cit.> adopted style information as a trigger mode and extended the backdoor attack to the semantic aspect. Similarly, You et al. <cit.> and Li et al. <cit.> used state-of-the-art LLMs (e.g., ChatGPT) to construct backdoored samples by rewriting text styles. Chen et al. <cit.> focused on a more complex language model (that is, seq2seq). They generated 0.2% backdoored texts and injected backdoors into tasks such as machine translation and text summary. A prompt carrying the trigger would induce the backdoored model to produce a specified keyword or sentence. Sun et al. <cit.> injected backdoors into the neural code search task. They constructed triggers by modifying function names or variable names to control the ranking of defective codes covertly. Zhao et al. <cit.> found that mislabeled texts are complex to bypass human audit and proposed a clean label backdoor attack for LLMs. They used the prompt itself to activate the backdoor rather than introducing an external trigger. In addition to designing various trigger modes, researchers manipulate the training process, such as Yan et al. <cit.>. They adopted a mask language model to enhance the association between triggers and target text. Yang et al. <cit.> replaced the trigger embedding vector with the optimized embedding vector, which poisoned the embedding layer. Most users can download the open BERT model and adapt to downstream tasks using local data. Du et al. <cit.> designed a supervised contrast learning. This attack could extract more robust trigger features using various pre-trained models, improving backdoor attacks' effectiveness. Kurita et al. <cit.> designed a weight poisoning attack that can preserve the backdoor to the fine-tuned model. They used regularization methods to adjust the model weights, which can reduce the impact of the fine-tuning process on the backdoor. They then modified the embedding vector of triggers to approximate the embedding vector of the target words. Huang et al. <cit.> believed that backdoor attacks for LLMs required a lot of data and computing resources, reducing the backdoor attacks' practicality. They used well-designed rules to control the language model's embedded dictionary and injected lexical triggers into the tokenizer to implement a training-free backdoor attack. Inspired by model editing, Li et al. <cit.> designed a lightweight backdoor editing method for LLMs. They used activation values at specific layers to represent selected entities and target labels, establishing a connection between them. §.§ Countermeasures of pre-training LLMs For the various risks during the pre-training phase, we explored countermeasures from two aspects: privacy protection and security defense. §.§.§ Privacy protection As shown in Figure <ref>, defenders can use two types of countermeasures to mitigate the privacy risks faced in the pre-training scenario: corpora cleaning and privacy pre-training. Corpora cleaning. LLMs tend to memorize private information from the training data, resulting in privacy leakage risks in their outputs. Currently, the mainstream method for mitigating such risks involves corpora cleaning <cit.>. For example, Subramani et al. <cit.> and Uzuner et al. <cit.> identified texts carrying personal information from datasets and removed them. While Ruch et al. <cit.> proposed a dictionary-based method that uses predefined rules to identify personal information. Then, some researchers <cit.> employed neural networks to detect personal information, including OpenAI. Kandpal et al. <cit.> and Lee et al. <cit.> noted the significant impact of data duplication on privacy protection. According to experiments, Johnson et al. <cit.> demonstrated that removing duplicated data and personal information can reduce the risk of LLMs' privacy leakage. However, cleaning massive corpora presents several challenges. Defenders must keep data utility when removing private information and consider the computational resources of privacy protection methods. Privacy pre-training. Concerning malicious users (i.e., white-box privacy attackers), model developers can design privacy protection methods from two aspects: the model architecture and the training process. The model architecture determines how knowledge is stored and how the model works during the training and inference phases, which impacts the privacy protection capabilities of LLMs. Jagannatha et al. <cit.> explored privacy leakage in various language models and found that larger models pose higher risks of information leakage. Currently, research on optimizing model architecture for privacy protection is not extensive and can be used as an empirical approach. Improving the training process can reduce the privacy risk malicious users pose, differential privacy <cit.>. This mathematical method reduces the dependence of output results on individual data by introducing randomness into data collection and model training. At first, Abadi et al. <cit.> introduced the DPSGD algorithm, which injects Gaussian noise of a given magnitude into the computed gradients. Specifically, this method could meet the privacy budget when training models. Li et al. <cit.> found that the training process for LLMs is suitable for the DPSGD algorithm. Their experiments demonstrated that non-standard hyperparameters can balance privacy protection and model performance. For the BERT model, Yu et al. <cit.> proposed a differential privacy-based selective training algorithm, enhancing the performance of the fine-tuned model. Similarly, Igamberdiev et al. <cit.> adopted a local differential privacy algorithm to pre-train the BART model. To thoroughly eliminate privacy risks, Mattern et al. <cit.> trained generative language models using a global differential privacy algorithm. They designed a new mismatch loss function and applied natural language instructions to generate high-quality synthetic. §.§.§ Security defense As shown in Figure <ref>, defenders can use two types of countermeasures to mitigate the security risks faced in the pre-training scenario: corpora cleaning and model-based defense. Corpora cleaning. LLMs learning from toxic data will result in toxic responses, like illegal texts. Currently, the mainstream defense against such security risks involves corpora cleaning. At first, Welbl et al. <cit.> and Zhao et al. <cit.> explored toxicity detection and mitigation methods for NLP domain, which could reduce this risk in small-scale models. However, the corpora used for training LLMs are vast and inevitably contain toxic data. For example, the corpora used for training LLaMA2 contains 0.2% toxic documents <cit.>. To address this problem, Dale et al. <cit.> and Moskovskiy et al. <cit.> adopted paraphrasing models for style transfer, removing toxicity while keeping the content of texts. Similarly, Logacheva et al. <cit.> collected toxic texts and their detoxified counterparts to train a detoxification model. For discriminatory texts, Zhao et al. <cit.> replaced these texts with their antonyms. However, Maudslay et al. <cit.> swapped these texts for neutral ones. There are several challenges associated with corpora cleaning. Defenders must maintain the data utility when removing its toxicity and consider the computational resources security defenses use. Model-based defense. Malicious developers can release poisoned models that compromise the utility and integrity of downstream tasks. In such scenarios, users can access the model but not the training data. Therefore, defenders can apply model examination or robust fine-tuning against poison attacks and backdoor attacks. Wallace et al. <cit.> found that early stopping could mitigate the effects of poison attacks. Then, Liu et al. <cit.> used benign texts to identify infrequently activated neurons and designed a pruning method to repair these neurons. Following this, Wang et al. <cit.> proposed a backdoor detection and mitigation approach. They collected benign samples and used reverse engineering to construct the candidate trigger for each class. Qi et al. <cit.> found that triggers in the NLP domain are often low-frequency words. When given a text, defenders used GPT-2 to measure the perplexity of each word, identifying as triggers those words with abnormally high perplexity. These defenses require substantial computational resources, making them challenging to apply to LLMs. In fine-tuning scenarios, defenders can access training data and use lightweight backdoor defenses, like sample-based detection methods. We will explore them in Section <ref>. § THE RISKS AND COUNTERMEASURES OF FINE-TUNING LLMS As shown in Section <ref>, there are two types of threat scenarios in the fine-tuning LLMs stage: outsourcing customization and self-customization. Since the privacy risks faced in both scenarios are the same as in Section <ref> and Section <ref>, they are not discussed here. Primarily, we detail the risks specific to LLMs. Then, we summarize the existing countermeasures for each risk, hoping to provide researchers with more defense ideas. §.§ Security risks of fine-tuning LLMs In this scenario, users can easily verify the model's utility, making it difficult for performance-degrading poison attacks to work. Therefore, we mainly discuss backdoor attacks that are more imperceptible. Attackers aim to help users obtain LLMs that contain hidden backdoors, which can return pre-defined outputs to serve the attacker's purposes when using prompts embedded with the trigger. As shown in Figure <ref>, outsourcing the customization of LLMs allows malicious third parties to inject backdoors. In this scenario, attackers (i.e., malicious third parties) can access the user-provided data and manipulate the entire training process. Many methods exist for customizing an LLM, including supervised learning, instruction tuning, and alignment tuning. The latter two customizing methods are unique to LLMs. We will describe these risks below: Instruction tuning <cit.>. It trains the model using a set of carefully designed instructions and their high-quality outputs, enabling the LLM to understand better and respond to users' prompts. As illustrated in Figure <ref>, the attack can implant backdoors through instruction modification and fine-tuning manipulation. Wan et al. <cit.> created 100 poisoned instructions that caused the LLM to produce negative results in various downstream tasks. They also found that larger models are more vulnerable to poison attacks. Some researchers observed that prompt engineering and fine-tuning could weaken the effectiveness of backdoor attacks. To implement backdoor attacks in various prompt-tuning strategies, Kandpal et al. <cit.> designed a multi-objective loss function to ensure the effectiveness of triggers under various prompt-tuning strategies. In contrast, Mei et al. <cit.> mapped triggers to specific tokens (i.e., anchors). Similarly, Yao et al. <cit.> applied backdoor attacks to AUTOPROMPT <cit.>, prompt-tuning <cit.>, and p-tuning v2 <cit.> scenarios. They first generated a set of triggers and the target tokens for binding operations. Then, bi-level optimization was employed to implement both the backdoor injection and prompt engineering. Cai et al. <cit.> found that the few-shot method could mitigate existing backdoor attacks, making it difficult to balance the utility and concealment of triggers. They selected the words related to the target label as triggers and generated the best trigger for each prompt to ensure the backdoor effect in continuous prompts. Alignment tuning. It can add additional information during the customizing process, such as values and ethical standards <cit.>. The common alignment tuning method is RLHF. As shown in Figure <ref>, the existing backdoor injection methods for RLHF are the modification of preference data. For instance, Rando et al. <cit.> injected backdoored data into the preference dataset of RLHF, thus implementing a universal backdoor attack. Attackers simply need to add the trigger (e.g., `SUDO') in any instruction to bypass the model's safety guardrail, making the infected LLMs produce harmful responses. Similarly, Baumgärtner et al. <cit.> found that poisoning 5% of preference data significantly affects the generated tendencies of the compromised LLM. It will return a positive/negative sentiment bias towards specific entities. Following that, Wang et al. <cit.> modified some feedback samples' preference labels to poison the LLM, causing it to return longer responses to instructions carrying the trigger. They first selected samples with longer responses from the preference dataset as candidates for poisoning to achieve this goal. They computed their quality scores to filter samples harmful to the clean reward model. Then, they selected the samples with the greatest differences in the text generation length from the candidate set, constructing the poisoned samples. For the supervised learning common to all language models, poison attacks and backdoor attacks in this scenario are similar to those detailed in Section <ref> and are not discussed here. Figure <ref> also shows the security risk self-customizing LLMs face. In this case, users need to collect fine-tuning data, allowing malicious contributors to inject poisoned data. Therefore, the attacker has limited attack capabilities, only controlling a fraction of samples in the fine-tuning data set. We summarized the security attacks that only modify training data in the previous Section <ref> <cit.>. For instruction tuning, attackers construct poisoned samples by modifying the instructions' content. For alignment tuning, attackers poison the reward model and LLM by altering the content and labels of preference data <cit.>. §.§ Countermeasures of fine-tuning LLMs Because of the various security risks mentioned above, we explored the corresponding countermeasures from the outsourcing and self-customization scenarios. Outsourcing-customization Scenario. Defenders (i.e., users) can access the customized model and the clean training data. Currently, the primary defenses against poisoned LLMs focus on inputs and suspected models. For input prompts, Yang et al. <cit.> proposed a robustness-aware perturbation scheme for detecting backdoored prompts. They identified differences in robustness to perturbations between backdoored samples and clean samples. Similarly, Gao et al. <cit.> found that strong perturbations could not affect texts carrying triggers. They proposed an online input detection scheme. Shen et al. <cit.> broke sentence-level triggers by shuffling the order of words in prompts. Furthermore, Sagar et al. <cit.> considered four rewriting methods to remove potential triggers while preserving input semantics, including word synonym replacement <cit.>, random character deletion <cit.>, back-translation <cit.>, and mask word replacement <cit.>. These defenses significantly reduced the attack success rate while ensuring the accuracy of predictions on clean prompts. For model-based defenses, beyond the approaches proposed by Liu et al. <cit.> and Wang et al. <cit.>, Li et al. <cit.> used clean samples and knowledge distillation to eliminate backdoors. They first fine-tuned the original model to obtain a teacher model. Then, the teacher model made a student model (i.e., the original model) pay more attention to the features of clean samples. For task-agnostic backdoors in language models, Wei et al. <cit.> designed a backdoor detection and removal method aimed at reversing specific attack vectors (i.e., harmful outputs) rather than directly reversing triggers. Specifically, they froze the suspected model and used reverse engineering to obtain abnormal output features. By optimizing the embedding features of these output vectors, they successfully detected and removed backdoors. Garcia-soto et al. <cit.> found differences in robustness to specific perturbations between clean models and backdoored models. They collected the models' responses to specific perturbations as their signatures and trained a meta-classifier to identify backdoored models. Following Wang et al. <cit.>, Shen et al. <cit.> tried to reverse the trigger tokens/words for a given label. They defined the convex hull over input space and optimized the coefficients of embedding vectors via temperature scaling. Self-customization Scenario. Defenders (i.e., users) can access the customized model and all its training data. In addition to the defense methods described in the previous paragraph and Section <ref>, users can also detect and filter poisoned data from the training set. Therefore, this paragraph focuses on such defenses, namely data-based detection and filtration methods. Chen et al. <cit.> collected activation values from the training data and then used an anomaly detection scheme to identify poisoned samples. Following this, Cui et al. <cit.> adopted the HDBSCAN clustering algorithm to distinguish between poisoned samples and clean samples. Similarly, Shao et al. <cit.> noted that trigger words significantly contribute to prediction results. For a given text, they removed a word and used the logit output returned by the model as its contribution degree. A word is identified as the trigger if it has a high contribution score. Wan et al. <cit.> proposed a robust training algorithm that removes samples with the highest loss from the training data. Inspired by generative models, Azizi et al. <cit.> used a seq-to-seq model to generate specific words (i.e., disturbance) for a given class. The words are considered triggers if most of the prompts carrying them can be misclassified. Currently, the backdoor attacks for LLMs have extensive aims and high stealthiness. The defenses against these backdoor attacks are in the initial phase, with several challenges. First, LLMs exhibit a strong unexplainability, making it difficult to explore the causes of backdoors. Second, the enormous model structure makes it challenging for defenders to analyze the internal details of LLMs. Third, defending against backdoors of LLMs requires substantial computational resources while balancing training costs. Finally, LLMs' input and output formats are diverse, which precludes the use of existing backdoor defenses designed for classification tasks. Therefore, it is necessary to design suitable backdoor defenses for LLMs through existing backdoor defenses. § THE RISKS AND COUNTERMEASURES OF RAG SYSTEM Some researchers focused on RAG technology to enhance the generative capabilities of LLMs. They constructed knowledge bases (e.g., documents) to retrieve relevant information about specific entities. When given a prompt, the model retrieves relevant information from the knowledge base and uses it as context to generate content. As shown in Figure <ref>, the RAG system faces two types of malicious entities: contributors and users. The malicious contributors can only modify external knowledge bases, inducing LLMs to return harmful responses against the safety guardrails. The malicious users can access the LLMs' APIs and modify the input prompts, stealing private data from the training set and the knowledge bases. We first introduce the privacy and security risks of the RAG system. Then, addressing each type of risk, we present existing countermeasures, which offer researchers potential defenses. §.§ Privacy risks of RAG system Unlike traditional language models, the RAG system is a unique technology for LLMs. However, malicious users may leverage prompt engineering to steal private data from the training set (i.e., data extraction) and the knowledge bases (i.e., knowledge stealing). As the former will be detailed in Section <ref>, we discuss the latter risk in this part. Knowledge Stealing Attacks. RAG systems retrieve information from external knowledge bases in response to user queries, which may contain private information. Zeng et al. <cit.> found that RAG systems are highly susceptible to knowledge stealing attacks. Attackers (i.e., malicious users) can manipulate prompts to guide the retrieval system, aiming to access targeted data (i.e., private information). Then, they induce LLMs to generate responses carrying the targeted data. In addition, RAG systems integrate external knowledge bases, reducing the dependence of LLMs on training data. Although RAG systems mitigate the risk of data extraction attacks, attackers can steal training data from LLMs through targeted attacks <cit.> and prefix attacks <cit.>. They found that existing data extraction attacks against LLMs were still effective on RAG systems, necessitating the development of more effective privacy protection methods. §.§ Security risks of RAG system For the RAG system, users need to collect public knowledge bases, allowing malicious contributors to inject poisoned data. Therefore, we consider an attack with limited capabilities. They can control a fraction of texts of public knowledge bases so that they manipulate retrieval and generation processes. Cho et al. <cit.> proposed a poison attack to disrupt the retrieval and generation process within the RAG framework, thus reducing the utility of LLM-based applications. They injected noise (e.g., typing errors) into the retrieval knowledge base and used the crossover and mutation processes of the genetic algorithm to create a poisoned knowledge base. Then, they optimized the knowledge base according to retrieval relevance and generation accuracy. Zou et al. <cit.> defined the poison attack as an optimization problem in the RAG framework, maximizing the probability of generating target texts for a specific prompt. They constructed poisoned texts related to the target entity to poison the knowledge base, misleading the LLM with wrong answers. Similarly, Zhang et al. <cit.> explored the RAG framework (that is, document analyzer and tokenizer) and designed attack sequences. They covertly injected them into the knowledge base using a rich text format. At the same time, they used the prompt template to ensure that the attack sequences could be used as context, causing the LLM to produce predefined outputs. §.§ Countermeasures of RAG system We explored privacy protection and security defense for the various risks during the RAG system. §.§.§ Privacy protection For knowledge stealing attacks, malicious users induce the retrieval system to output private information from the knowledge base by manipulating prompts. As shown in Figure <ref>, to mitigate this risk, we consider countermeasures from two aspects: External knowledge bases. As illustrated in Section <ref>, defenders can also employ corpus cleaning to filter out private data from the knowledge base. For example, deduplication can reduce the risk of data leakage. Moreover, defenders can identify and filter private data in the knowledge base using rule-based and classifier-based detection schemes. Retrieval process. Defenders can improve the retrieval process to protect privacy from knowledge stealing attacks. Firstly, defenders add random tokens and defensive prompts to the input. It may reduce the probability of retrieving private information. Secondly, defenders can use another LLM to re-rank the knowledge base. It can focus the LLM's attention on relevant information and only generate related content. For the retrieved context, defenders can summarize this information to mitigate the risk of knowledge leakage. These countermeasures may have a certain effect in mitigating knowledge stealing attacks, but a systematic evaluation is lacking. Defenders need to adjust and optimize potential defenses according to specific scenarios, balancing privacy protection and system utility. §.§.§ Security defense Malicious users attempt to poison RAG systems to induce harmful outputs from LLMs. However, there is a lack of countermeasures to mitigate this risk. Similarly to the defenses employed against jailbreak attacks, we consider countermeasures from the external knowledge base and the retrieval process, as shown in Figure <ref>. External knowledge bases. The documents of the knowledge base are collected from various data sources. Defenders can use corpus cleaning to filter poisoned data from the knowledge base, like Section <ref>. For instance, poisoned data may show higher perplexity compared to clean data. Therefore, perplexity-based detection schemes can filter out high-perplexity data. Additionally, Zou et al. <cit.> found that the poison rate affected the effectiveness of such the attack. Consequently, adding more clean data to the knowledge base may reduce the probability of retrieving poisoned contexts. Retrieval process. Poisoning RAG systems primarily influences the generated results by retrieving malicious contexts. Therefore, defenders can improve the robustness of the retrieval process to defeat this attack. Firstly, defenders can use the LLM to rewrite the input prompts before retrieving contexts from the knowledge base. It may alter the structure of the prompts, reducing the probability of retrieving poisoned contexts. Secondly, more sophisticated embedding vectors and similarity measurement methods can improve retrieval. It can enhance the diversity of retrieval results, thus avoiding poisoned contexts. Finally, the RAG system can leverage a multi-model verification scheme against poisoned prompts. These defenses can theoretically mitigate poison attacks on RAG systems, but a systematic evaluation is lacking. Furthermore, attackers can manipulate hyper-parameters to implement advanced poison attacks on RAG systems. It indicates the need to develop new defenses to address this risk further. § THE RISKS AND COUNTERMEASURES OF DEPLOYING LLMS As shown in Section <ref>, deploying LLMs faces only one threat scenario, that is malicious users induce LLMs to return privacy/harmful responses. In this scenario, attackers can only access the LLMs' APIs and modify the input prompts. We first introduce the privacy and security risks and especially provide a detailed discussion of risks unique to LLMs. Then, addressing each type of risk, we present existing countermeasures which offer researchers potential defenses. §.§ Privacy risks of deploying LLMs Compared to traditional language models, LLMs possess unique privacy data, that is, prompts. The corresponding privacy risk is prompt extraction attacks. Subsequently, we explore privacy risks common to all language models, including reconstruction attacks <cit.>, inference attacks <cit.>, data extraction attacks <cit.>, and model extraction attacks <cit.>. These risks and their attack methods are shown in Figure <ref>. §.§.§ Unique privacy risks for LLMs Prompt extraction attacks. Obviously, carefully designed prompts fully leverage the emergent abilities of LLMs to generate high-quality content. Thus, attackers (i.e., malicious users) can use prompt engineering to steal these queried prompts for profit, as shown in Figure <ref>. It reveals unique vulnerabilities in LLMs during the deployment phase. Perez et al. <cit.> and Liu et al. <cit.> injected malicious commands into prompts to override their original commands, causing applications based on LLMs to leak these carefully designed prompts. Subsequently, Zhang et al. <cit.> proposed a measurement criterion for prompt extraction attacks. In this work, they designed two metrics, exact-match and approx-match. The former detected whether the extracted prompts contained the real secret words. While the latter used the Rouge-L recall rate to calculate the length of the longest common sub-sequence between two texts. They conducted experiments on 11 different LLMS and three different prompt sets and found most prompt extraction attacks were effective. It challenges the privacy protection ability of LLM service providers. §.§.§ Common privacy risks for all Language Models Reconstruction attacks. In this case, the attacker is a malicious third party that acquires embedding vectors and output results through eavesdropping. Such the attack attempts to reconstruct the input prompts based on these data. Morris et al. <cit.> found that the outputs of LLMs had reversibility. They trained a conditional language model that reconstructed the input prompts based on the distribution probability over the next token. In addition, several researchers used embedding vectors to reconstruct inputs. Gu et al. <cit.> and Li et al. <cit.> applied generative decoders to progressively restore the target sequence. Morris et al. <cit.> designed a reconstruction attack with the state-of-the-art performance. They trained a decoder for embedding outputs, which could iteratively optimize the ordered sequence. Inference attacks. The output generated by LLMs can infer private information, including membership inference attacks and attribute inference attacks. For the first manner, it creates specific queries to determine whether a given text is trained on the victim model. For the embedding model, Song and Raghunathan <cit.> designed a threshold-based method for both word-level and sentence-level membership inference attacks. Mireshghallah et al. <cit.> observed that LLMs were unlikely to exhibit over-fitting during training. They used the likelihood ratio of texts as a threshold criterion, improving the inference accuracy. Subsequently, Mattern et al. <cit.> proposed a simple and effective membership inference attack for language models. They calculated membership by comparing the loss of target samples with neighbor samples. Another membership inference method adopts the shadow model, which depends on the unlimited prompt assumption (i.e., attackers have unlimited access to the victim model). To overcome this challenge, Abascal et al. <cit.> used only one shadow model for membership inference. They used the k-nearest neighbors algorithm to train the attack model on a similar dataset, eliminating the unlimited prompt assumption. The second manner involves posing a series of carefully designed prompts to the model and inferring attributes of the training dataset. Li et al. <cit.> used embedding vectors to attack chatbot-based language models, successfully inferring 4000 private attributes. Then, Robin et al. <cit.> proposed an attribute inference attack targeted at LLMs. They accurately inferred personal information (location, income, and gender) from the existing LLMs. Data extraction attacks. LLMs are trained or fine-tuned on massive texts and tend to memorize private information from these data. Malicious users can design a series of prompts that induce the model to regurgitate segments from the training set. Carlini et al. <cit.> crafted a series of prefixes that guided the GPT-2 model to complete sensitive information, like email addresses and phone numbers. Some researchers found that LLMs can associate contextual information, leading to privacy leaks when models respondrespond to contextual information. Huang et al. <cit.> considered three LLM-based applications: prefix, zero-shot, and few-shot scenarios. While LLMs could memorize numerous email addresses, they failed to understand the association between names and email addresses. Yu et al. <cit.> proposed several prefix and suffix extraction optimizations. They adjusted probability distributions and dynamic positional offsets, improving the effectiveness of data extraction attacks. Zhang et al. <cit.> used soft prompt-tuning to optimize the embedding of inputs, thus leaking more training data. Then, some attackers use well-crafted prompts that induce LLMs to generate private information. Li et al. <cit.> and Deng et al. <cit.> focused on jailbreak attacks, designing prompts that bypass the safety guardrail of LLMs (e.g., ChatGPT). Nasr et al. <cit.> proposed a divergence attack aimed at shifting the alignment of LLMs. For the commercial LLMs, they demonstrated that alignment-tuning still posed a risk of data extraction. Model extraction attacks. LLMs have a high commercial value, where training details and the model itself are the property of the owner. Malicious users aim to steal model information from the responses, such as its structure, hyper-parameters, and functionalities. Such the attack can lead to other privacy and security threats, like membership inference attacks and reconstruction attacks. Li et al. <cit.> constructed domain-specific prompts and queried the LLM. For example, they trained a medium-sized model on collected responses, stealing knowledge from the victim LLM. Ippolito et al. <cit.> introduced a method to distinguish between two types of decoding strategies: top-k and nucleus sampling. By analyzing the generated texts, adversaries can infer the decoding strategy and parameters the victim LLM uses. Similarly, Naseh et al. <cit.> leveraged the unique fingerprints left by different decoding algorithms and hyper-parameters to steal this information at a relatively low cost. §.§ Security risks of deploying LLMs Compared to traditional language models, LLMs have a unique safety guardrail that protects against harmful content. However, prompt injection attacks and jailbreak attacks can bypass the guardrail, inducing LLMs to produce harmful content. We focus on adversarial example attacks for the security risk common to all language models. These risks underscore the ongoing challenges in ensuring the robustness of LLMs. §.§.§ Unique security risks for LLMs When deploying an LLM, malicious users can optimize prompts (i.e., prompt engineering) to produce harmful outputs. To achieve this goal, malicious users can leverage prompt injection attacks and jailbreak attacks. Prompt Injection attacks. Instructions usually consist of task descriptions, demonstrations, and input. Figure <ref> demonstrates that attackers can insert malicious content into the task description or input, thereby hijacking the original instruction <cit.>. For example, Perez et al. <cit.> noted that attackers could insert commands like `ignore previous instructions, then execute...'. Kang et al. <cit.> found that mainstream LLMs integrated malicious detection mechanisms in their input modules. Therefore, they broke down malicious instructions into multiple parts to evade detection mechanisms. Moreover, Liu et al. <cit.> leveraged gradient-based optimization to efficiently generate injected data. Currently, many LLM-based applications provide opportunities for malicious users to launch such the attack <cit.>. Jailbreak attacks. Most LLMs use alignment-tuning to construct a safety guardrail, posing challenges for prompt injection attacks. To overcome this, jailbreak attacks are implemented through carefully designed prompts rather than simple malicious injections, like Figure <ref>. There are two types of jailbreak attacks: single-step and multi-step. For single-step jailbreaks, attackers target single queries. Some researchers found that role-playing instructions can weaken the safety guardrail <cit.>, thereby enhancing the effectiveness of jailbreak attacks. Yuan et al. <cit.> adopted encrypted prompts (e.g., Caesar ciphers) to bypass content filters while inducing malicious outputs from LLMs. Beyond manually creating jailbreak prompts, Yao et al. <cit.> combined fuzzing frameworks with jailbreak attacks to actively explore multiple jailbreak vulnerabilities in the victim LLM. Inspired by adversarial example attacks, Wei et al. <cit.> generated adversarial prompts, such as harmless prefixes that bypass the safety guardrail. Following this, Zou et al. <cit.> combined greedy and gradient-based search algorithms to craft advanced jailbreak prompts. Deng et al. <cit.> even used reverse engineering to locate potential defenses in existing LLMs. Then, they leveraged external LLMs to improve the threat of jailbreak attacks. For multi-step jailbreaks, attackers focus on multi-round interaction scenarios. Inspired by Chain-of-Thought (COT), Li et al. <cit.> broke down the target task into multiple steps, constructing jailbreak prompts at each step to gradually achieve malicious goals. For large text-to-image models, Yang et al. <cit.> used reinforcement learning to guide the generation of perturbed tokens. §.§.§ Common security risks for all Language Models Adversarial examples of attacks targeting output utility are a security threat all language models face. Specifically, attackers create imperceptible perturbations in inputs to affect output results. Such the attack typically involves four steps: selecting benchmark inputs, constructing adversarial perturbations, assessing model outputs and iterative optimization. For small-scale transformer models, Guo et al. <cit.> demonstrated that effective adversarial examples can be generated using only hard labels. Sadrizadeh et al. <cit.> attempted adversarial example attacks on machine translation tasks, using gradient projection and polynomial optimization methods to maintain semantic similarity between adversarial examples and clean samples. For LLMs, Maus et al. <cit.> proposed a black-box algorithm to generate adversarial prompts that make the victim model return confusing texts and images. Wang et al. <cit.> divided the input instructions into three parts and added adversarial perturbations to the demonstration part. Following this, Carlini et al. <cit.> improved adversarial prompts against alignment-tuning. §.§ Countermeasures of deploying LLMs In addressing the various risks during the deployment phase of LLMs, we explored countermeasures from two aspects: privacy protection and security defense. These countermeasures and defense methods are displayed in Figure <ref>. §.§.§ Privacy protection Data-based privacy protection. It aims to mitigate privacy leaks by detecting the output results. Some researchers used meta-classifiers or rule-based detection schemes to identify private information. Moreover, Cui et al. <cit.> believed that detecting private information needed to balance the privacy and utility of outputs. In medical scenarios, diagnostic results contain users' private information that should not be filtered out. Next, we will introduce model-based privacy protection methods. Differential privacy. In Section <ref>, we introduced the differential privacy methods during the pre-training phase. This section mainly discussed the differential privacy methods used in the fine-tuning and inference phases. Shi et al. <cit.> proposed a selective differential privacy algorithm to protect sensitive data. They implemented a privacy-preserving fine-tuning process for RoBERTa and GPT-2. Tian et al. <cit.> integrated the PATE framework with differential privacy. They trained a student model using the outputs of teacher models, thereby protecting the privacy of training data. Additionally, this method filtered candidates and adopted an efficient knowledge distillation strategy to achieve a good privacy-utility trade-off. Majmudar et al. <cit.> introduced differential privacy into the inference phase. They calculated the perturbation probabilities and randomly sampled the i th token from the vocabulary. In addition, Li et al. <cit.> designed a differential privacy-based prompt tuning algorithm and used attribute inference attacks and reconstruction attacks to evaluate its privacy protection effect. Subsequently, Duan et al. <cit.> combined differential privacy with knowledge distillation to enhance privacy protection for prompt tuning scenarios. Alignment-tuning. The safety guardrail of LLMs not only defends against prompt engineering attacks but also reduces the risk of privacy leaks. Defenders can use the RLHF fine-tuning scheme to penalize outputs that leak private information, improving the privacy protection capabilities of LLMs. Xiao et al. <cit.> constructed instructions containing positive and negative samples, effectively protecting the training data while enhancing the model's performance. Secure computing. During the inference phase, neither model owners nor users want their sensitive information to be stolen. On the one hand, users prefer not to allow semi-honest model owners to access their inputs containing private information. On the other hand, model information is intellectual property that needs to be protected from inference attacks and extraction attacks. Chen et al. <cit.> applied homomorphic encryption to perform privacy-preserving inference on the BERT model. However, this scheme consumes many computational resources and reduces model performance. Therefore, some researchers explored Secure Multi-Party Computation (SMPC) techniques to implement forward propagation without accessing the plain texts. Li et al. <cit.> found SMPC is challenging to use with non-linear operations. They replaced the non-linear layers in LLMs with polynomials while maintaining model performance through knowledge distillation. While Zheng et al. <cit.> implemented non-linear operations with the garbled circuit. Dong et al. <cit.> performed high-precision fitting for exponential and GeLU operations through piecewise polynomials. They successfully executed privacy-preserving inference on LLMs like LLaMA-7B. Although the existing SMPC technologies in LLMs still face challenges in efficiency, performance and cost, their prospects are extensive. §.§.§ Security defense Regarding adversarial example attacks and jailbreak attacks, existing countermeasures mainly consider output detection and processing, prompt engineering, and robustness training. We detail these defense details below. Output detection and processing. Some researchers detect and process malicious outputs during the generation phase to resist jailbreak attacks. Deng et al. <cit.> proved many closed-source LLMs have defense mechanisms, including keyword and semantic detection schemes. In addition, companies like Microsoft and NVIDIA have developed various detectors for harmful content. However, the training data limits classifier-based detection schemes, and adaptive jailbreak attacks can bypass them <cit.>. To improve detection effectiveness, OpenAI and Meta directly use GPT-4 and LLaMA2 to detect harmful content. Chen et al. <cit.> used multiple LLMs to resist jailbreak attacks. Specifically, they randomly selected responses from these LLMs as the output result. Prompt engineering. Users can manually or automatically optimize prompts, making LLMs better understand the context of these instructions. Therefore, some researchers used prompt engineering to eliminate the malicious goals of prompts, resulting in useful and harmless responses. Li et al. <cit.> designed a purification scheme. They introduced random noise into the prompts and reconstructed them using a BERT-based mask language model. Robey et al. <cit.> found that jailbreak prompts are vulnerable to character-level perturbations. Therefore, they randomly perturbed multiple prompt copies and identified texts with high entropy as infected prompts. Jain et al. <cit.> proposed two prompt processing methods: paraphrasing and re-tokenization. Specifically, paraphrasing uses a generative model to modify the prompts, retaining the original instructions while removing adversarial tokens. Re-tokenization represents the prompts with multiple smaller tokens. It can disrupt the effect of malicious tokens while having minimal impact on clean samples. Mo et al. <cit.> and Wei et al. <cit.> considered the few-shot scenario. They inserted a small number of defensive demonstrations into the prompts, mitigating jailbreak attacks and backdoor attacks. Robustness training. Developers can control the learning process to defend against various security attacks. During the fine-tuning phase, Dong et al. <cit.> found that adversarial training can lead to catastrophic forgetting. They proposed a robust fine-tuning strategy to preserve the features learned by the LLM. Currently, most LLMs establish safety guardrails through the RLHF technology, defending against jailbreak attacks <cit.>. Bianchi et al. <cit.> constructed a small amount of safety instructions (i.e., 3%) to improve the robustness of the LLaMA model. Sun et al. <cit.> argued that alignment tuning using human supervision was too costly. They leveraged another LLM to generate high-quality alignment instructions, constructing safety guardrails with minimal human supervision. Similarly, Shi et al. <cit.> generated high-quality preference data through three steps: reverse instruction tuning, instruction induction, and expert model evaluation, addressing the high labor costs of RLHF in preference data annotation. § THE RISKS OF DEPLOYING LLM-BASED AGENTS Some researchers leverage LLMs to construct agent systems, which are LLM-based agents. They are application systems that can understand natural language instructions and interact with humans. In general, LLM-based agents integrate various functional modules to perform complex tasks rather than merely running multiple rounds of queries. Currently, there are two application scenarios: single-agent systems and multi-agent systems. Where a multi-agent system consists of many LLM-based agents, each responsible for a specific task or role. For example, a health management system includes a data collection agent, an analysis agent, a report generation agent, and an interaction agent. As illustrated in Figure <ref>, deploying LLM-based agents faces two malicious entities: users and agents. Firstly, malicious users can manipulate prompts to launch backdoor attacks and jailbreak attacks that compromise the privacy and security of the agent system. Secondly, interactions between malicious agents pose privacy and security risks, such as unauthorized interactions and domino effects. Concerning these scenarios, we introduce privacy and security risks, respectively. Subsequently, we propose potential countermeasures to provide researchers with more defenses. Meanwhile, Figure <ref> shows how these risks map to defense, attack and defense methods. §.§ Privacy risks of deploying LLM-based agents Like the privacy risks discussed in Section <ref>, deploying LLM-based agents also faces prompt extraction attacks and various privacy attacks. We merely focus on privacy risks unique to LLM-based agents. Firstly, Users inadvertently share their private information during interactions with LLM-based agents. The system may memorize user interaction history, allowing malicious users to induce the agents to disclose private information through prompt engineering. To achieve this goal, malicious users can leverage jailbreak attacks and prompt injection attacks. Moreover, the multi-agent system faces more privacy risks due to high-frequency interactions. Different agents in the system undertake various roles and permissions. Some agents may access sensitive data beyond their permission scope or expose sensitive information to agents who should not access it. When performing collaborative tasks, multiple agents will share and process private information. In such cases, an attacker can access private information across the system by compromising one agent, as shown in Figure <ref>. Li et al. <cit.> found that data transmission between agents can be stolen, leading to privacy leakage. They also explored unauthorized interactions among agents, which may generate much private information. However, the interactions are not transparent, so it is hard to supervise this generated information, as illustrated in Figure <ref>. Therefore, it is essential to control and transparently manage access to private data strictly. §.§ Security risks of deploying LLM-based agents Similar to the security risks discussed in Section <ref>, deploying LLM-based agents also faces jailbreak attacks and backdoor attacks. These agents have safety guardrails designed to resist prompts carrying malicious content. Thus, jailbreak attacks aim to bypass the guardrail and cause the agent to perform harmful actions, such as deleting calendar events. Zhu et al. <cit.> proposed various granularity perturbation schemes to construct jailbreak prompts, including character-level, word-level, sentence-level, and semantic-level perturbations. Similarly, Li et al. <cit.> found prompt injection attacks could induce the LLM-based agents to perform malicious actions, such as `ignore previous instructions' or `complete instructions under administrator privileges'. Yang et al. <cit.> explored backdoor attacks on LLM-based agents, proposing query-attack, observation-attack and thought-attack. For the first manner, given a query with the trigger, the backdoored agents perform predefined malicious actions (e.g., send scam messages). For the second manner, adversaries can hide the trigger in interactions rather than input prompts. When the backdoored agent receives an observation result containing a trigger, it will perform malicious actions in subsequent steps. In the third manner, adversaries aim to alter the behaviors of specific steps during inference while keeping benign actions. For instance, a backdoored agent might invoke a specific tool (e.g., Google Translate) under a trigger prompt. To achieve these attacks, adversaries need to create poisoned data and insert a backdoor into the LLM-based agent by training on it. In addition to both attacks, malicious agents also pose security issues. For the single-agent system, attackers can modify the role settings of victim agents, causing them to exhibit harmful behaviors. Tian et al. <cit.> found that malicious agents can interact harmful content with other agents, affecting their behavior in a domino effect, as displayed in Figure <ref>. This risk improves the vulnerability of all LLM-based agents within a communication network. Meanwhile, the multi-agent system faces the threat of system-level attacks that can modify the role settings of the entire system. §.§ Countermeasures of deploying LLM-based agents As shown in Figure <ref>, we explore relevant countermeasures from privacy protection and security defense perspectives to address various risks when deploying LLM-based agents. §.§.§ Privacy protection Potential defenses focus on memorized data and output results to address privacy leaks caused by malicious users. For memorized data, defenders can identify PII and use data masking techniques to hide sensitive information, thereby reducing the risk of PII leakage. For output results, defenders can implement filtering and auditing processes to prevent sensitive information from being transmitted to other entities. As introduced in Section <ref>, both rule-based and classifier-based detection schemes can be applied. To mitigate privacy issues arising from agent interactions, potential defenses include authority management and real-time feedback. In the first manner, defenders can establish clear controls and transparency for private data access, setting access permissions for different roles within multi-agent systems. For real-time feedback, defenders can dynamically monitor and adjust output results to mitigate privacy risks caused by unauthorized interactions. While these countermeasures can effectively mitigate privacy risks discussed in Section <ref>, comprehensive research is lacking. Therefore, future efforts focus on targeted defenses against unique privacy issues for LLM-based agents. §.§.§ Security defense Existing countermeasures focus on input, model, and agent to address the security risks LLM-based agents face. Input Processing. As discussed in Section <ref>, defenders can process prompts to detect and defeat jailbreak attacks targeting LLM-based agents. For instance, defenders can use templates to restrict the structure of prompts, thereby reducing the impact of jailbreak prompts. Against prompt injection attacks, defenders can add explanatory text before and after external commands. However, LLM-based agents can use various tools to generate multi-modal outputs (e.g., programs and images), making existing countermeasures less effective. Developing a comprehensive multi-modal filtering system is essential to defeat harmful behaviors generated by LLM-based agents. Model Processing. Defenders can employ adversarial training to improve the robustness of LLM-based agents against jailbreak attacks. Meanwhile, backdoor removal methods discussed in Section <ref> may be effective against backdoor attacks on LLM-based agents. Han et al. <cit.> used knowledge distillation techniques to extract benign knowledge from the poisoned pre-trained encoder and transferred it to a new encoder. However, these backdoor removal methods only work on LLMs rather than LLM-based agents. Facing new types of query-attack, observation-attack, and thought-attack, detecting backdoors in LLM-based agents becomes more challenging, necessitating the development of targeted countermeasures. Agent Processing. These countermeasures mainly address the security risks posed by malicious agents. Defenders can establish multi-level consistency frameworks in multi-agent systems to ensure alignment with human values. Defenders can design robust filters for role attacks to detect malicious agents based on role attributes, improving system safety. In addition, Li et al. <cit.> found that a high-level agent guides its subordinate agents. Thus, constraining the high-level agent can prevent subordinate agents from running harmful behaviors. These countermeasures help reduce the domino effect in multi-agent interactions. While these countermeasures can effectively mitigate security risks discussed in Section <ref>, comprehensive research is also lacking. Comprehensive research is still needed to address evolving security issues for LLM-based agents. § FURTHER WORK AND DISCUSSION §.§ Federated learning for LLMS In practice, massive corpora come from many sources. Therefore, some researchers explore the distributed training framework. They assume that individual participants have local corpora and use federated learning to develop an LLM collaboratively. They can allow for the development of LLMs while protecting the privacy of all participants. For example, McMahan et al. <cit.> improved the federated averaging algorithm, providing high-quality privacy protection for language models. However, pre-training LLMs consumes a lot of computing and communication resources. The federated learning technologies applied to LLMs are primarily used in fine-tuning. Zhang et al. <cit.> combined federated learning with LoRA, achieving efficient fine-tuning and privacy protection. Subsequently, Xu et al. <cit.> adopted differential privacy, partial embedding updates and LoRA to balance resource usage, privacy protection, and performance better. Kuang et al. <cit.> provided a comprehensive federated parameter-efficient fine-tuning algorithm, including dataset pre-processing, collaborative fine-tuning, and performance evaluation. Similarly, Fan et al. <cit.> and Jiang et al. <cit.> focused on parameter-efficient federated fine-tuning methods for LLMs. Although federated learning ensures that participants' data not leave the local environment, the updated gradient information during the collaboration process might make this framework susceptible to reconstruction and inference attacks. In this context, attackers (i.e., malicious participants) could access shared gradient values of victim participants, inferring or reconstructing their private information. For reconstruction attacks, Balunovic et al. <cit.> assumed attackers could access the global model weights and the training data's gradientsdata. They modeled the prior probabilities using an auxiliary language model (e.g., GPT-3) and alternately applied continuous and discrete optimization algorithms to reconstruct the training data. Chu et al. <cit.> allowed attackers to modify model parameters to reconstruct specific text sequences for Transformer-based models. Li et al. <cit.> reduced the strength of the assumption and designed a reconstruction attack that does not require the modification of the model parameters. They used gradient information and intermediate features to enhance the reconstruction effects. For inference attacks, some researchers found that such the risk could occur in federated learning framework <cit.>. Specifically, attackers have some auxiliary data labeled with the target attribute. They could run attribution inference attacks, either passively or actively, to infer whether victim participants' data contains the attribute. The passive attacks use the gradients of the auxiliary data to train a binary classifier to identify the target attribute. At the same time, the active attacks employ multi-task learning to make the global model more effective at identifying the target attribute. Given the high dependency of LLMs on massive corpora and computational resources, federated learning offers an effective solution. However, researchers must consider potential risks in this scenario and actively explore effective privacy protection countermeasures. §.§ Watermarks for LLMs LLMs have two copyright issues that affect privacy and security. First, LLMs cost massive corpora and computational resources, thus, the model is considered the owner's property. Second, to clarify the responsibility for generated content, researchers consider copyright issues of the content produced by LLMs, which can mitigate the misuse of LLMs. For model copyright, He et al. <cit.> proposed a watermark injection scheme that does not modify model parameters. They applied a word replacement strategy to add watermarks to the output. However, Zhao et al. <cit.> found that knowledge distillation could remove such watermarks. Therefore, Li et al. <cit.> used backdoor attacks to inject watermarks into customized LLMs. Subsequently, the model owner can efficiently complete verification by checking the backdoor effect. Similarly, Liu et al. <cit.> embedded the trigger into private texts to trace the data intelligence. For output copyright, it aims to identify whether a specific LLM generates a given text. Some researchers focus on injecting unique and imperceptible watermarks into generated texts without affecting their semantics. Sato et al. <cit.> modified the output format without changing its content. For instance, they used different types of spaces to replace standard spaces, which could be implemented with minimal change. Yoo et al. <cit.> found that paraphrasing would remove such watermarks, hence they designed a robust multi-bit watermarking scheme. Specifically, they identified features invariant to minor perturbations and used a BERT-based mask language model to embed watermarks. Zhang et al. <cit.> mixed the generated text with binary signatures in the feature space. They used the Gumbel-Softmax function during the encoding phase to transform the generated dense distribution into a sparse distribution. It can significantly enhance the coherence and semantic integrity of watermarked texts. Other researchers focus on the generation process, directly returning watermarked texts instead of modifying output results. At first, Kirchenbauer et al. <cit.> embedded watermarks by manipulating the log of LLMs. They divided the vocabulary into red and green lists based on a random seed, encouraging the LLM to choose tokens from the green list. Then, users who know the partition mode can implement the verification by calculating the number of green tokens in the generated text. However, Hu et al. <cit.> found a trade-off between watermark strength and output perplexity. Thus, they embedded watermarks by manipulating the probability distribution of the next token, preferring specific tokens without changing the text semantics. Similarly, Liu et al. <cit.> proposed a semantic-invariant watermark scheme, generating watermark logits based on the semantics of all preceding tokens instead of simply token IDs. It maintains the semantics of generated texts, successfully resisting synonym replacement attacks and text paraphrasing attacks. Additionally, they found that these watermark schemes used the same key for generation and verification, which posed a forgery risk in the public verification scenario. Inspired by this observation, literature <cit.> improved the watermarking scheme proposed by Kirchenbauer et al. <cit.>. They adopted two neural networks for watermark generation and verification, improving the accuracy of verification and the non-forgery of watermarks. Kuditipudi et al. <cit.> found that this scheme caused the LLM to generate identical watermarked texts for the same prompt, affecting the LLM's utility. They used longer pseudorandom number sequences to solve this issue and randomly set the initial position during watermark embedding. Subsequently, Hou et al. <cit.> focused on sentence-level sampling and injected watermarks into it, making the watermark resistant to text paraphrasing attacks. §.§ Machine unlearning for LLMs. As users increasingly focus on personal privacy protection, especially under legal frameworks such as the GDPR, the right to forget has become a critical problem. Upon receiving a forgetting request, the model owner can leverage machine unlearning to protect their data from unauthorized use. This technology effectively removes the influence of specific user data. Machine unlearning for LLMs faces three challenges compared to traditional small-scale language models. First, LLMs are usually trained on a large amount of data. However, the contribution of a single sample point is not significant, making it challenging to exact unlearning for specific data. Second, machine unlearning will affect the performance of LLMs, requiring a balance between protecting user privacy and maintaining model performance. Third, concerning LLMs, there needs to be more research on verifying the effectiveness of forgetting. Currently, Yao et al. <cit.> studied machine unlearning for LLMs. They found that the memory capabilities of LLMs far exceeded those of traditional models, running multiple unlearning operations to eliminate the impact of specific data. They also identified the issue of catastrophic forgetting, which affects model performance. Meanwhile, Eldan et al. <cit.> addressed copyright issues in corpora, replacing `Harry Potter' with other concepts. They made the target LLM forget content related to `Harry Potter'. In terms of verification, Viswanath et al. <cit.> evaluated the model's response to forgot samples. In addition, they explored some verification methods, such as data extraction attacks. Despite the many challenges machine unlearning and verification face, research in this area is crucial for improving the transparency of LLMs. Many countries have introduced regulations on AI models, aiming to improve the transparency of AI-based applications. With transparency, it is easier for researchers to understand the inference of LLMs. Improving transparency will be conducive to further research on LLMs' privacy and security. Mitchell et al. <cit.> improve the transparency of models by establishing model cards. Then, Wahle et al. <cit.> proved that model cards were suitable for LLMs. It helps users better understand model performance and potential risks to train and deploy LLMs more safely. § CONCLUSION This paper provides a comprehensive overview of privacy risks, security risks, and defenses, focusing on LLMs. Specifically, we introduce various privacy and security attacks, including training-time attacks and inference-time attacks, along with their respective subcategories. We also discuss general and specific protection strategies for privacy and security risks. Furthermore, we present LLMs' evaluation and provide details on the datasets and evaluation metrics regarding utility and robustness. Concerning LLM privacy and security aspects, this paper gives a comprehensive view of attacks and defenses; we believe it is a significant contribution to help researchers understand LLMs' issues, making them be applied to more fields. ACM-Reference-Format
http://arxiv.org/abs/2406.09038v1
20240613122208
CGP++ : A Modern C++ Implementation of Cartesian Genetic Programming
[ "Roman Kalkreuth", "Thomas Baeck" ]
cs.NE
[ "cs.NE", "cs.LG" ]
§ ABSTRACT The reference implementation of Cartesian Genetic Programming (CGP) was written in the C programming language. C inherently follows a procedural programming paradigm, which entails challenges in providing a reusable and scalable implementation model for complex structures and methods. Moreover, due to the limiting factors of C, the reference implementation of CGP does not provide a generic framework and is therefore restricted to a set of predefined evaluation types. Besides the reference implementation, we also observe that other existing implementations are limited with respect to the features provided. In this work, we therefore propose the first version of a modern C++ implementation of CGP that pursues object-oriented design and generic programming paradigm to provide an efficient implementation model that can facilitate the discovery of new problem domains and the implementation of complex advanced methods that have been proposed for CGP over time. With the proposal of our new implementation, we aim to generally promote interpretability, accessibility and reproducibility in the field of CGP. <ccs2012> <concept> <concept_id>10010147.10010178.10010205.10010207</concept_id> <concept_desc>Computing methodologies Discrete space search</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010257.10010293.10011809.10011813</concept_id> <concept_desc>Computing methodologies Genetic programming</concept_desc> <concept_significance>500</concept_significance> </concept> <concept_id>10011007.10011006.10011008.10011009.10011011</concept_id> <concept_desc>Software and its engineering Object oriented languages</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011006.10011008.10011009.10011014</concept_id> <concept_desc>Software and its engineering Concurrent programming languages</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Computing methodologies Discrete space search [500]Computing methodologies Genetic programming [500]Software and its engineering Object oriented languages [500]Software and its engineering Concurrent programming languages CGP++ : A Modern C++ Implementation of Cartesian Genetic Programming Zhiheng Yu, Jiancheng An, Member, IEEE, Ertugrul Basar, Fellow, IEEE, Lu Gan, and Chau Yuen, Fellow, IEEE This article was presented in part at the IEEE VTC2024-Spring <cit.>. This work was partially supported by Sichuan Science and Technology Program under Grant 2023YFSY0008 and 2023YFG0291. The work of Ertugrul Basar was supported by TUBITAK under Grant 120E401. The work of Chau Yuen was supported in part by the Ministry of Education, Singapore, under its Ministry of Education (MOE) Tier 2 under Award MOE-T2EP50220-0019; and in part by the Science and Engineering Research Council of Agency for Science, Technology and Research (A*STAR) Singapore, under Grant M22L1b0110. Z. Yu and L. Gan are with the national key laboratory on blind signal processing. School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China, and also with the Yibin Institute of UESTC, Yibin 644000, China (e-mail: zhihengyu2000@163.com; ganlu@uestc.edu.cn). J. An and C. Yuen are with the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798 (e-mail: jiancheng an@163.com; chau.yuen@ntu.edu.sg). E. Basar is with the Communications Research and Innovation Laboratory (CoreLab), Department of Electrical and Electronics Engineering, Koç University, Sariyer, Istanbul 34450, Turkey. (e-mail: ebasar@ku.edu.tr). June 17, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Cartesian Genetic Programming (CGP) can be considered a well-established graph-based representation model of GP. The first pioneering work towards CGP was done by Miller, Thompson, Kalganova, and Fogarty <cit.> by the introduction of a graph encoding model based on a two-dimensional array of functional nodes. CGP can be considered an extension to the traditional tree-based representation model of GP since it enables many applications in problem domains that are well-suited for graph-based representations such as circuit design <cit.>, neural architecture search <cit.> and image processing <cit.>. Miller officially introduced CGP over 20 years ago <cit.> and provided a reference implementation written in the C programming language. Since then, several implementations have been provided in other popular programming languages such as Java or Python, which follow modern programming paradigms. Besides implementations, several sophisticated methods for CGP have been proposed over time, and the significance of various developments has been recently surveyed and discussed in the context of the status and future of CGP <cit.>. Miller's reference implementation is based on the procedural programming paradigm, which naturally entails challenges and limitations to provide a flexible, reusable and generic architecture that can facilitate the implementation of complex methods and their corresponding structures. Moreover, Julian F. Miller passed away in 2022 <cit.>[<http://www.evostar.org/2022/julian-francis-miller/>] and his website[<http://www.cartesiangp.co.uk/>] which served as a resource for his original implementation, disappeared shortly after his death for unknown reason to the authors of this paper. The above-described points and circumstances motivates our work, in which we present a modern implementation of CGP written in C++ called . Our implementation builds upon paradigms and methodologies commonly associated with the modern interpretation of the C++ programming language, such as generic programming. Since C++ has a reputation for providing excellent performance while representing a high-level object-oriented language that offers many features for generic programming, we feel that C++ is a suitable choice for a modern and contemporary implementation of CGP. This paper is structured as follows: In Section <ref> we describe GP and CGP and address major problem domains in these fields. Section <ref> surveys existing implementations of CGP that have been proposed for various programming languages. In Section <ref> we introduce our new implementation by presenting key features and addressing relevant implementation details. Section <ref> gives an overview of the architecture and workflow of . In Section <ref> we compare our implementation to the implementations that have been addressed in this paper. Section <ref> discusses the potential role of in the ecosystem of CGP implementations and addresses prospects as well as challenges of enhancements that will be considered by future work. Finally, Section <ref> concludes our work. § RELATED WORK §.§ Genetic Programming In the wider taxonomy of heuristics, Genetic Programming (GP) can be considered an evolutionary-inspired search heuristic that enables the synthesis of computer programs for problem-solving. The fundamental paradigm of GP aims at evolving a population of candidate computer programs towards an algorithmic solution of a predefined problem. GP transforms candidate genetic programs (Definition <ref>), that are traditionally represented as parse-trees, iteratively from generation to generation into new populations of programs with (hopefully) better fitness. However, since GP is a stochastic optimization process, obtaining the optimal solution is consequently not guaranteed. A formal definition of GP is provided in Definition <ref>. GP traditionally uses a parse-tree representation model that is inspired by LISP S-expressions. An example of a parse tree is illustrated in Figure <ref>. In addition to the conventional (tree-based) GP, GP is also used with linear sequence representations <cit.>, graph-based representation models <cit.>, or grammar-based representations <cit.>. A genetic program 𝔓 is an element of 𝔗×𝔉×𝔈: * 𝔉 is a finite non-empty set of functions * 𝔗 is a finite non-empty set of terminals * 𝔈 is a finite non-empty set of edges Let ϕ: 𝒫↦Ψ be a decode function which maps 𝒫 to a phenotype Ψ Genetic Programming is an evolutionary algorithm-based method for the automatic derivation of computer programs. Let 𝔅_μ^(g) be a population of μ individuals and let 𝔅_μ^(g+1) be the population of the following generation: * Each individual is represented with a genetic program and a fitness value. * Genetic Programming transforms 𝔅_μ^(g)↦𝔅_μ^(g+1) by the adaptation of selection, recombination and mutation. §.§ Cartesian Genetic Programming CGP can be considered an extension of conventional tree-based GP since it represents a genetic program as an acyclic and directed graph, and trees as data structures naturally entail combinational limitations. A genetic program is encoded in the genotype of an individual and is decoded to its corresponding phenotype before evaluation. Originally, the programs were represented with a rectangular n_r × n_c grid of nodes. However, later work focused on a representation with merely one row. A formal definition of a cartesian genetic program (CP) is given in Definition <ref>. In CGP, function nodes are used to execute functions, defined in the function set, on the input values. The decoding routine distinguishes between groups of genes, and each group represents a node of the graph, except the last one, which refers to the outputs. Two types of genes are used to encode a node: 1) the function gene, that indexes the function number in the function set and 2) the connection genes, which index the inputs of the node. The number of connection genes varies based on the on the predefined maximum arity n_a of the function set. The decoding of function nodes is embedded in a backward search that is performed for all output genes. The backward search is illustrated in Figure <ref> for the commonly used single-row integer representation, which starts from the program output and processes all linked nodes in the genotype until the inputs are reached. Consequently, only active nodes are processed during evaluation. The genotype illustrated in Figure <ref> is grouped whereby the first (underlined) gene of each group refers to the function number in the function set. In contrast, the non-underlined genes which refer to the respective input connections of the node. Function nodes, that are not linked in the genotype, remain inactive and are visualized in gray color as well as dashed lines. A parameter called levels-back l is commonly used to control the connectivity of the graph by constraining the node index from which a function or output node can get its inputs. A cartesian genetic program is an element of the Cartesian product 𝔑_𝔦×𝔑_𝔣×𝔑_𝔬×𝔉: * 𝔑_𝔦 is a finite non-empty set of input nodes * 𝔑_𝔣 is a finite set of function nodes * 𝔑_𝔬 is a finite non-empty set of output nodes * 𝔉 is a finite non-empty set of functions The number of inputs n_i, outputs n_o, and the length of the genotype remain static during a run of CGP. Therefore, each candidate program can be represented with n_r * n_c * (n_a +1) + n_o integers. However, although the length of the genotype is static, the length of the corresponding phenotype can vary during a run, which enables a certain degree of flexibility of the CGP representation model. CGP is commonly used with a (1+λ) evolutionary algorithm (EA) and a selection strategy called neutrality, which is based on the idea that the adaption of a neutral genetic drift mechanism (NGD) can contribute significantly to the escape from local optima. NGD is implemented by extending the classical selection mechanism in such a way that individuals which have equal fitness as the normally selected parent are first determined, and one equal-fitness individual is then selected uniformly at random. NGD can be therefore considered as a random walk on the neutral neighborhood of equal-fitness offspring. A new population is formed in each generation with the selected parent from the previous population and the λ bred offspring. An exemplification of the (1+λ)-EA variant used in CGP is provided in Algorithm <ref>. For the breeding procedure point mutation is predominantly used to exchange gene values of the genotype within the valid range by chance. The mutations triggered by this operator can alter the functionality of the phenotype as well as the connectivity depending on which type of gene is mutated. Genetic programs are mostly encoded with natural numbers in CGP that is commonly refereed to as integer-based or standard CGP. However, an alternative encoding model that is called real-valued CGP <cit.> has been proposed. It uses real numbers to encode candidate programs with the intention to adapt intermediate recombination which is commonly used for real-valued representations in Genetic Algorithms <cit.>. In contrast, integer-based CGP is predominantly used merely with mutation due to its long history of stagnation regarding the question about the effectiveness of recombination <cit.>. Recent studies, however, have found that various recombination operators such as subgraph <cit.>, block <cit.> and discrete crossover <cit.> can be effectively used for various problems <cit.>. §.§ Problem Domains in GP GP gained significant popularity when Koza <cit.> applied his parse tree representation model to practically relevant problem domains, for instance, symbolic regression, algorithm construction, logic synthesis, or classification. In this section, we describe two popular representatives of the GP application scope in more detail which have a reputation for being major real-world application scopes for GP as well as for their relevance for benchmarking GP methods: §.§.§ Symbolic Regression Symbolic Regression (SR) is located in the broader taxonomy of regression analysis, where a symbolic search is performed in a space of mathematical compositions to obtain candidate functions that match the ideal input-output mapping of a given data set as closely as possible. Symbolic regression in the context of GP can therefore be considered a black-box problem domain. In general, SR by means of GP relates to the application of GP models to synthesize mathematical expressions that represent input-output mapping of the the unknown function as closely as possible. Quite recently, it has been proved SR to be a NP-hard problem, since it is not always possible to find the best fitting mathematical expression for a given data set in polynomial time <cit.>. §.§.§ Logic Synthesis Logic synthesis <cit.> as tackled with GP comprises the synthesis of Boolean expressions that match input-output mappings of given Boolean functions. Boolean expressions are generally a way of formally expressing Boolean functions. LS as approached with GP predominantly addresses two major tasks located in the scope of this problem domain: * Synthesis of a Boolean expression that matches the correct input-output mapping of a given Boolean function. * Optimization of a Boolean expression (i.e. reduction of complexity). Both tasks are carried out with respect to Boolean logic and algebra. Truth tables are a common way to represent Boolean functions and to describe their input-output mapping besides to representing them with algebraic expressions. Synthesis of Boolean expressions is typically approached by defining one or multiple respective optimization objectives. LS as an GP application area was greatly popularized by Koza when he started addressing LS by using his parse tree representation model <cit.>. Moreover, Koza utilized his approach to evolve expressions for Boolean functions such as digital multiplexers and parity since these functions can be represented as LISP S-expressions. However, digital circuits are often characterized by Boolean functions with multiple outputs such as digital adders or multipliers. This resulted in the predominant use of CGP for LS since its graph encoding model is well-equipped to represent such functions <cit.>. A real-world application of the LS domain is the automatic design of digital circuits. §.§ Modern C++ C++ as a versatile and powerful programming language, has evolved significantly over the course of the last decade. Starting with the release of  <cit.> and the subsequent versions C++14, C++17, and C++20, various new features and corresponding best practices have been introduced, allowing developers to write more efficient and maintainable programs. Moreover, features that are associated with modern C++ have noticeably changed the way code is written in C++ remarkably improved the safety and expressiveness of C++ and are provided in the C++ Standard Library. Some of the language features that shaped modern C++ are: * Template type deduction Templates are a feature that enables the use of generic types for functions and classes. Template type deduction therefore allows the creation of functions or classes that can be adapted to more than one type without re-implementing the code constructs for each type. In C++ this can be achieved using template parameters. * Smart Pointers Smart Pointers provide a wrapper class around a raw pointer that have overloaded access operators such as and . Smart pointer managed objects have similar appearance as regular (raw) pointers. However, smart pointers can be deallocated automatically, in contrast to raw pointers. Smart pointers are therefore used to ensure that programs are free of memory leaks and, in this way, simplify the dynamic memory allocation in C++ while maintaining efficiency. * Lambda Expressions: A concise method for defining inline functions or function objects is to use lambda expressions, especially when working with algorithms or when a function is used as parameter. Lambdas can make the code more readable by allowing more direct expressions of intentions, since they do not require explicit function declarations. Lambdas are, therefore, also called anonymous functions. * Constexpr The primary intention behind constant expression is to enable performance improvement of programs by doing computations at compile time rather than runtime. C++11 introduced the keyword , which declares that it is possible to evaluate the value of a certain function or variable at compile time. * Concurrency Concurrency support was initially introduced in in order to boost program efficiency and allow multitasking. The Concurrency Support Library of C++ provides support for threads, atomic operations, mutual exclusion, and condition variables. Although concurrency enables multitasking, it does not necessarily mean that the desired tasks are executed simultaneously but are more approached by efficient switching between tasks. § EXISTING CGP IMPLEMENTATIONS This section reviews existing implementations of CGP that are later considered for a comparison to . Some implementations have already been addressed in Miller's review on the state and future of CGP <cit.>. However, with the intention to complete the picture further and to allow a more comprehensive comparison, we consider additional implementations and address their key features and purpose briefly. Despite the fact that the resource for Julian Miller's C reference implementation went down a while ago, a modified version still exists and is publicly available[<http://github.com/paul-kaufmann/cgp/>]. The modified version has been adapted for hyperparameter tuning experiments and search performance evaluations across several methods suitable for combinatorial optimization and combinational synthesis <cit.>. The implementation has therefore been additionally equipped with search algorithms such as simulated annealing. Another C implementation is the [<http://www.cgplibrary.co.uk/>] published by Turner and Miller <cit.>. It supports standard CGP as well as the recurrent CGP <cit.> variant and provides the functionality for evolving artificial neural networks <cit.>. The popular Java-based Evolutionary Computation Research System (ECJ) <cit.> provides a CGP contrib package that supports integer-based CGP as well as real-valued CGP. Moreover, the ECJ CGP contrib package covers functionality, data and benchmarks for applications such as logic synthesis, classification and symbolic regression. Recently, a set of implementations of various advanced genetic operators has been added to the repository. The [<http://www.fit.vutbr.cz/ vasicek/cgp/>] is a framework that primarily focuses on LS addressed with CGP and has been proposed by Vasicek and Sekanina <cit.>. It is shipped in four different versions that, in each case, support LS or SR for either 32 or 64 bit architecture and enables efficient phenotype evaluation based on machine code vectorization. A CGP toolbox for Matlab that focuses on audio and image processing called  has been proposed by Miragaia et al.  <cit.> which was used to apply CGP to the problem of pitch estimation. [<http://github.com/Happy-Algorithms-League/hal-cgp>] is a pure Python implementation of CGP designed to target applications that are characterized by computationally expensive fitness evaluations <cit.>.  [<http://github.com/um-tech-evolution/CartesianGP.jl>] is a library for using CGP in Julia. However, according to the authors, the code should be considered pre-alpha at the moment. § THE PROPOSED IMPLEMENTATION §.§ General Motivation and Philosophy Since Miller officially proposed CGP, increasing development has taken place over the course of the past two decades in the relatively young field of graph-based GP by proposing new representation variants, promising forms of crossover, mutation and search algorithms, as well as benchmarks. With the proposal of , we think that our proposed implementation can address the following aspects to enhance the following points in the field of CGP: * Maintaining accessibility for the use of CGP by extending the ecosystem of existing CGP implementations * Improving the interpretability of sophisticated methods by providing a comprehensible architecture. * Facilitating reproducibility of existing results by supporting benchmarking frameworks. The fundamental philosophy behind is to utilize aspects of modern C++ that have been described in Section <ref> to implement features and properties that are provided by state-of-the-art (SOTA) heuristic frameworks. In the following subsections, we will address the key features and properties of and share some details about the respective implementation details of that we used from modern C++. §.§ Key Features and Properties §.§.§ Object-oriented and Generic Design pursues an object-oriented and generic design to maintain an interpretable and reusable architecture for fundamental as well as sophisticated functionality, with the intention to assist further implementations of new techniques and the corresponding extension of the underlying architecture. §.§.§ Advanced Genetic Variation Since CGP has been predominantly used without recombination in the past, most implementations only support CGP in the standard mutation-only fashion. However, since recent work proposed new recombination operators and demonstrated the effectiveness for various problems <cit.>, block  <cit.> and discrete recombination <cit.> have been integrated into . Besides to the (1+λ)-EA variant used in CGP that has been exemplified in Algorithm <ref>, an implementation of a (μ+λ)-EA is provided to allow the recombination-based use of CGP. Furthermore, since recent work demonstrated that the consecutive execution of multiple mutation operators can benefit the search performance of CGP <cit.>, therefore supports mutation pipelining and provides advanced mutations such as inversion and duplication. §.§.§ Benchmarking provides an interface to the benchmarks that have been recently proposed for the General Boolean Function Benchmark Suite (GBFS) which provides a diverse set of LS problems for GP <cit.>. The provided files contain compressed truth tables that can be used to set up the corresponding black-box problem. Moreover, also provides a dataset generator and a set of objective functions for the SR benchmarks that have been proposed by McDermott et al. <cit.> in the framework of the first review on benchmarking standards in GP. §.§.§ Hyperparameter Configuration Hyperparameters related to the CGP functionality can be configured by using either a provided command-line interface or a parameter file, offering a flexible approach that can be used to apply to contemporary frameworks for hyperparameter tuning such as  <cit.>. §.§.§ Checkpointing To ensure caching of intermediate search results that have been obtained over the course of the search process, supports the creation of checkpoints that are automatically written during a run. The created checkpoint file can be used to resume a run in the case that it has been disrupted. §.§ Implementation Details and Challenges §.§.§ Generic Template The generic template of can be formally described as a tuple 𝒯 = (ℰ, 𝒢, ℱ) where ℰ defines the evaluation type, 𝒢 the type of the CGP genome and ℱ the type of fitness. 's generic approach is achieved by using C++ class templates. The respective data types can be configured via . To restrict the data type of certain template classes such as the type of the genome, we use to evaluate the defined template type at compile time. §.§.§ Smart Memory Management utilizes two types of smart pointers: and to provide safe memory allocation as well as efficient passing of objects, containers and data to functions and classes. In our implementation, is used for shared ownership of objects among instances of different classes. In contrast, is used in cases where single or exclusive ownership of a resource is desired. §.§.§ Memorization Memorization is used to speed up genotype-phenotype decoding by caching the immediate results of function nodes and consequently preventing reevaluating already computed results. The node-value mapping are stored by using during the decoding routine. §.§.§ Concurrency Besides to consecutive evaluation of individuals, takes the first step towards concurrency by providing concurrent evaluation of the population. For this purpose, the population is divided into chunks of individuals whose number is defined by the number of evaluation threads that can be set in the configuration. The chunks are then evaluated within several instances of . supports deep cloning of problem instances to create the corresponding thread pool. The pool of threads are synchronized after evaluation via . However, the genotype-phenotype mapping of CGP and the corresponding requirement of a decoding procedure poses a bottleneck for the use of concurrency. At this time, we have to limit the concurrency feature in for parts of the decoding and evaluation procedure of the genotype by using mutual exclusion via . We will address potential solutions for this issue in the discussion. §.§ Resources The source code of and a user guide are publicly available in our GitHub repository[<http://github.com/RomanKalkreuth/cgp-plusplus>]. § ARCHITECTURE AND WORKFLOW OVERVIEW §.§ Fundamental Architecture Abstraction and inheritance depict fundamental pillars of the top level architecture of- to enable a high degree of reusability of its core functionality. Figure <ref> provides a high-level class diagram that covers its key architecture elements. The Initializer is designed to instantiate the core elements for the heuristic search performed by CGP, such as the selected search algorithm and defined problem. To bundle essential sub-elements for the heuristic search process, a composite is initialized, which can be accessed by other core elements such as the problem and algorithm instances. The composite includes crucial elements and features for the GP search process, such as the population, breeding execution frameworks for crossover and mutation, function and terminal (constants) sets but also backbone elements such as hyperparameter interfaces as well as checkpointing. supports the generation of ephemeral random constants (ERC) to create the terminal set, which, together with the function set, is an integral part of the Composite. After initialization, the Evolver executes the considered number of instances (jobs) and reports final as well as immediate results via command line and output file. The high-level architecture of is fundamentally inspired by ECJ <cit.>. §.§ Top-level Workflow The top-level workflow of is shown in Figure <ref> that illustrates the interplay between the elements that have been described related to the fundamental architecture. can be used to run experiments that require several instances to ensure statistical validity. The Evolver therefore supports the execution of consecutive jobs, whose numbers can be configured via the parameter interface. The workflow within the framework of a job instance maps the typical workflow of the adapted (1+λ) and (μ+λ) strategies. To facilitate the integration of other types of evolutionary algorithm, we provide an abstract base class that can be used as a design pattern. §.§ Concurrent Evaluation The concurrent evaluation architecture is illustrated in Figure <ref>. When concurrent evaluation of the individuals is used, the evaluation procedure forks and joins a thread pool and each thread is equipped with a chunk of individuals as well as a deep copy of the problem instance. The respective functionalities such as deep cloning are provided in the Population and Problem classes. Since the Decoder is the shared resource in this framework, its access is maintained via mutual exclusion, as already mentioned in the previous section. §.§ Checkpointing The generation of a checkpoint is shown in Figure <ref>. For a checkpoint, we consider the random seed, generation number, genomes of the population, and the constants. These attributes are obtained from the respective instances and are then considered as a checkpoint instance that is written to a checkpoint file. Instances can be resumed by using the same configuration as the aborted run and passing the checkpoint file to . When a checkpoint file is detected, it is loaded by the Checkpointer and the run instance is resumed at the given generation number. § IMPLEMENTATION COMPARISON We considered various features for our comparison that can be found in modern metaheuristics toolkits. These include programming design, generic properties, checkpointing, variation pipelining, concurrency, and the existence of a parameter interface that can be used for hyperparameter tuning. We consider vectorization as a feature for our comparison, since it can be seen as a competitive feature to concurrency in CGP. With respect to recent findings about the role of crossover in CGP, we also consider this feature in our comparison. Table <ref> shows the result of our comparison, and it is visible that the the supported features of our implementation are on the level of a modern metaheuristics toolkit such as ECJ. Please note that the evaluation of Julian Miller's reference implementation is based on its modified version. The ECJ CGP contrib package offers a wide range of features that are derived from the underlying ECJ framework. For the other implementations, we notice that the number of features is quite limited. However, the supports vectorization, which is currently not supported by . A finding that we will address in the following discussion. and use programming languages that pursue dynamic typing concepts. However, we do not consider these concepts as generic, since generic programming aims at enabling data type independence while maintaining compile-time type safety. To our best knowledge, this is not covered by default by the dynamic typing concepts of these languages, and the generic extensions and features are currently not used for the respective implementations. § DISCUSSION AND FUTURE WORK The primary intention of this work is to propose the first version of a modern implementation of CGP in the C++ programming language. Another intention behind our work is to propose and establish a flexible and reusable architecture that can facilitate and simplify the implementation of further extensions. We therefore deliberately chose C++ over Rust which would also have been a suitable option. However, we consider Rust more a procedural and functional-oriented programming language rather than a strong object-oriented one. Moreover, since we also focus on interpretability of CGP methodologies implemented in , we find C++ code more approachable than Rust code for this purpose. We want to stress that with we do not intend to propose a framework that we generally consider superior over other implementations. With our contribution, we more intend to extend the already-existing ecosystem of CGP implementations. We acknowledge that every programming language has its own specific characteristics, and each implementation has its own purpose and philosophy. However, we feel that most CGP implementations fall short of providing features that can facilitate the discovery of novel applications and the integration of new techniques. Moreover, since the majority of the implementations that we considered for our comparison follow the procedural programming paradigm, we think that there is a need for frameworks that can facilitate the implementation and maintenance of larger and more complex methods that have been proposed for CGP. Another point that should be discussed is related to how CGP is effectively used and how this is related to future work on . CGP has a reputation for being effectively used with relatively small population sizes due to early experiments with Boolean functions <cit.>. However, recent studies on the parametrization of CGP demonstrated that CGP can be also effectively used with large population sizes in the SR domain <cit.>. In contrast, these studies also demonstrated that CGP performs best in the LS domain when a (1+λ)-ES with a very small population size is used. Moreover, very recent work found the (1+1)-ES to be the best choice for the evaluation of the General Boolean Function Benchmark Suite (GBFS) <cit.> for LS. Based on these findings, we think that at least two modalities in CGP have to be considered for future work. Therefore, we would like to address the following point as natural next steps for : §.§ Concurrency Since we already raised the issue of using concurrency for CGP, we would like to discuss how the corresponding challenges could be tackled in the future. In the first place, this would imply extending the thread pool design by using multiple evaluator instances. However, we have to stress here that related elements such as the CGP decoder have to be multiplied, and this would lead away from the idea of using a lightweight thread pool inside . Another idea would be to consider an alternative concurrency design pattern that could enable highly concurrent use and implements aspects from parallel dynamic programming <cit.>. Currently, only supports concurrency for the evaluation process. Therefore, another contribution would be to enable breeding concurrency. In general, despite the highlighted challenges, we think it is worthy to explore whether concurrency could be effectively used for problems where CGP seems to work well when large population sizes are being considered. §.§ Vectorization The use of vector operations by using related extended SIMD instructions is another feature that we would like to consider for future work. Vectorization with machine code that has been generated from CGP primitives and which contains SIMD instructions has been successfully used to speed up the CGP evaluation procedure <cit.>. The used SSE/SSE2 SIMD instruction calls operated with 128-bit vectors in that case. However, providing such a feature from today's perspective could also enable support for contemporary instruction sets such as Advanced Vector Extensions (i.e. AVX-2 or AVX-512). In view of the fact that the (1+1)-ES has been found to be the best choice on GBFS, vectorization could be used to provide a way to use CGP effectively in a consecutive fashion. §.§ Towards a Modern General GP Toolkit Even if this paper proposes the first version of , we do not only see it as a implementation of CGP but also as a blueprint that can shape the way towards a modern and general GP toolkit that allows the use of multiple GP variants in a flexible and effective way. Therefore, we consider extending to in the future that can benefit the GP domain across different representations with the contributing factors that we intend to achieve with the proposal of our implementation. As a first step towards that goal, we plan to integrate tree-based and linear GP as popular representatives of the GP domain. § CONCLUSION In this paper we presented the first version of a modern C++ implementation of Cartesian Genetic Programming, which closes a major gap in the framework of existing implementations. Our implementation provides key features and characteristics of modern heuristics frameworks. Our proposed implementation offers a genetic design and provides a reusable architecture that can facilitate the discovery of new problem domains and the integration of new methods for CGP. Equipped with interfaces and generators for benchmarking in Logic Synthesis and Symbolic Regression, , also provides a framework that can be used for the reproducibility of existing results. Acknowledgments is dedicated to Dr. Julian Francis Miller (1955 - 2022), who as the founder of Cartesian Genetic Programming devoted a large part of his scientific life to its proposal, development and analysis. With the introduction of we pay tribute to Julian's pioneering effort in the field of graph-based GP and acknowledge his lifework. The project was financially supported by ANR project HQI ANR-22-PNCQ-0002. § APPENDIX ACM-Reference-Format
http://arxiv.org/abs/2406.08067v1
20240612103511
Synchronous and Asynchronous Updates of Active Ising Spins in One Dimension
[ "Anish Kumar", "Sudipta Pattanayak", "R. K. Singh", "Shradha Mishra" ]
cond-mat.soft
[ "cond-mat.soft" ]
[]anishkumar.rs.phy22@itbhu.ac.in Department of Physics, Indian Institute of Technology(BHU), Varanasi- 221005, India []sudipta.pattanayak@cyu.fr Collège de France, Université Paris Sciences et Lettres, Paris, France. Institut Curie, Université Paris Sciences et Lettres, Physique de la Cellule et cancer, UMR 168, Paris, France []rksinghmp@gmail.com Department of Physics, Bar-Ilan University, Ramat-Gan 5290002, Israel []smishra.phy@itbhu.ac.in Department of Physics, Indian Institute of Technology(BHU), Varanasi- 221005, India § ABSTRACT How do update rules affect the dynamical and steady state properties of a flock? In this study, we have explored the active Ising spins (s = ± 1) in one dimension, where spin updates its orientation according to the Metropolis algorithm (based on the neighbors) via two different update rules. (i) Parallel, and (ii) Random-sequential. We explore the effect of Parallel and Random-sequential updates on the dynamical properties of flocks in one dimension. Due to the inherent asynchronous nature of the Random-sequential update, the directional switching of the flock is increased compared to the Parallel one. The nature of phase transition is affected by the difference in the updating mechanism: discontinuous for Parallel and continuous for Random-sequential updates. Synchronous and Asynchronous Updates of Active Ising Spins in One Dimension Shradha Mishra Dedicated to professor for Hélène Frankowska for her 70th anniversary =========================================================================== § INTRODUCTION Collective motion is a ubiquitous phenomenon observed in active systems driven out of equilibrium across widely separated length scales from single cells <cit.>, to unicellular organisms <cit.>, to bird flocks <cit.>, and human crowd <cit.>. The emergence of such a motion in a group of self-propelled units is termed as a flocking transition <cit.> and was reported by Vicsek and coworkers for a system of particles in two dimensions <cit.>. While most natural systems of interest are two or three-dimensional, the formation of collective motion in one dimension has gained attention in recent years <cit.>. Such one dimensional flocks exhibit the interesting property of direction switching <cit.> and recent theoretical and experimental studies have proven the usefulness of the study of collective motion in one dimension, in particular relevance to the phenomena of directional switching <cit.>. Most such models studying flocking in one dimension generally employ discrete time evolutions of the system, which are closer in nature to the sense of time as represented in digital simulations. However, it is known that on a digital time scale, a system of multiple particles can exhibit properties that are not a true representation of the original dynamical system, as has been observed in equilibrium <cit.> and nonequilibrium systems <cit.>. Such differences in update rules have led to the appearance of new universality classes in coupled map lattices <cit.>. The observations motivate us to study the effects of different update rules on flocking dynamics in a collection of self-propelled particles. In order to proceed with our goal, we introduce a system of active Ising spins moving in the unit interval [0, 1] with constant speed v_0, which interact locally in a neighborhood of range δ x. We find that the differences in update rules reflect in both the transient and steady state properties of the flocks. The nature of phase transition also changes with the change in the update rule. The paper is organized as follows: The details of the two update rules are covered in section <ref>. Following that, section <ref> discusses how the update rules change the system's properties, and section <ref> presents the conclusion. § COMPARISON OF THE TWO UPDATE RULES In this work, we consider N active Ising spins (s = ± 1) moving along a unit interval [0,1] with a fixed speed v_0 with periodic boundary conditions. Initially, all the spins are randomly distributed within the unit interval. Each i^th spin s_i interacts with all other spins within the interval [x_i-δ x, x_i+δ x] and updates its spin orientation according to the Metropolis algorithm <cit.>. Where δ x is the interaction range and x_i is the position of i^th spin. If f is the net spin within the interaction range δ x, then the spin updates its orientation depending on the product s_i f. If the product is negative, the spin is certainly flipped, and if the product is positive, then the spin is flipped with probability exp(-β s_if), where β is the inverse temperature. Each i^th spin updates its position according to the given equation: x̃_i = x_i + v_0 s̃_i where x̃_i, s̃_i is the updated position and orientation of i^th spin respectively. To observe the effect of the update rule on the dynamical properties of the system, we have updated the above system with two different update rules. (i) Parallel and (ii) Random-sequential update. In the Parallel update rule, each spin updates its orientation (s_i →s̃_i) based on the Metropolis algorithm. Once the orientation of all the spins is updated by s̃_i, then everyone's position is updated simultaneously using Eq. (<ref>). This counts as the one Monte-Carlo step for Parallel update. In contrast, an active spin i is randomly selected from the set of N spins in the Random-sequential update, and its spin s_i is modified using the Metropolis method, and then its position is updated in accordance with (<ref>). The updated value of spin s_i is applied right away to change the position of the i^th particle. In order to give each spin an equal chance of updating, this process is done N times. This N random flip process makes up one unit of time that is equivalent to one Parallel update of N spins. It may be concluded from the definitions of these update rules that the Random-sequential is asynchronous, whereas the Parallel update is synchronous. The Random-sequential updating rule has an underlying randomness. It also demonstrates a significant impact on the dynamics and steady state characteristics of the system. To characterise the orientational ordering among the spins we defined m (Net magnetization) as the order parameter of the system. m is defined as: m = 1/N∑_i s_i The magnitude |m| takes values in the interval (0, 1) with 0 representing a completely disordered state and 1 the state of a completely flocking state. The other parameters in the system: the self-propulsion speed v_0 is varied from [0.001, 0.003] and interaction range δ x is varied from [0.01, 0.05] and inverse temperature β from [1,5]. The system is studied for N=500, 1000 number of spins. The total simulation time step is 5 × 10^5. The thermal averaging is performed over approximately 100 independent realisations for better statistics. We started with an initially random arrangement of spins with random orientation. With time, the system evolves and reaches a steady state determined by the parameters. The steady state is defined when the characteristics of the system remain statistically the same with respect to time. § RESULTS In Fig. <ref>, we have shown the variation of the order-parameter |m| time series for a system of N = 1000 spins moving in the unit interval [0,1] for different (v_o,δ x) combinations for Parallel (a) and Random-sequential (b) updates respectively. For small inverse temperature β = 1 starting from the random initial state |m| ≃ 0, the system remains in the disordered state with |m| ≃ 0, whereas for large β = 4, it reaches to the ordered state with |m| ≃ 1 in the steady state. The |m| vs. t curves are similar for the two update rules in the steady state for β=4. As the v_0 or δ x is increased, the system takes less time to reach the ordered state. However, for β = 1 (blue), we observe relatively larger fluctuations in the order parameter for the Random-sequential update compared to the Parallel one. These are the consequences of the inherent randomness in the Random-sequential update rule. Starting with a random distribution of the spins at t = 0, the spins tend to move with the spins of the same s when the system exhibits long-range order, implying that the emergence of long-range order for high β values is a mean-field effect. The cause for this effect is the propagation of the local interaction amongst the spins across the interval due to the finite speed of movement of the spins, i.e., v_0 > 0. Such propagation of the local interaction leads to the emergence of a long-range order when global fluctuations are less (high β values). This is because for high β, the probability of flipping against the majority exp(-β s_i f) is less at any instant t, and hence the alignment of all the spins along the interval is achieved. On the other hand, for low values of β, the enhanced magnitude of global fluctuations increases the chance of any given spin s_i to flip against the majority. As a result, even when the spins are moving with a constant speed v_0, long-range order is not established because of the increased strength of global fluctuations, which tends to disrupt the established local order at every instant. Later, we will show that the fluctuations are relevant not only for low values of β but also for higher values. To further demonstrate the system properties in the ordered region, Fig. <ref>(a-b) shows order parameter m time series for two different realizations for the Parallel and Random-sequential update for (v_0, δ x) = (0.001, 0.01) and high β = 4.0. For both updates, the system shows the globally ordered state with |m| ≃ 1. But the directions of spins globally switches from one orientation type to another, hence m shows switching from +1 to -1 state. The frequency of switching is higher for the Random sequential update in contrast to the Parallel update. The two independent realisations shown by black and red are equivalent for Random sequential update, whereas the black shows zero switching and red shows the few numbers of switching for Parallel update. The synchronous nature of dynamics for the Parallel update makes the global switching more costly in comparison to Random-sequential, where each spin can flip randomly and slowly can turn the whole system to reorient its direction. Further, this leads to a stronger global orientation for Parallel updates. Now, we look at how the distribution of the density of spins in the system leads to the difference in the fluctuations in the magnetisation. We analyse the distribution of spins in space for the two updates and find that for the Parallel update, spins are either homogeneously arranged in the space with the same orientation or a strong density band with a globally ordered state. For the Random sequential update, the ordered state always results in the clustering of spins in the system. This motivated us to calculate the density fluctuations in the system for the two updates. In Fig. <ref>(c), the probability distribution function (PDF) of density fluctuations P(Δρ) for Parallel and the Random-sequential updates is shown. The other parameters are the same as in (a-b). The density fluctuation is defined as Δρ(t) = √(⟨ρ(t)^2 ⟩ -⟨ρ(t) ⟩^2). The local density ρ(t) is obtained by dividing the whole system in smaller bins (coarse-grained regions) and calculating the number of particles in the coarse-grained region. Further ⟨ ... ⟩ mean average over all the bins. It is calculated as a function of time and for different independent realisations. In the Random-sequential update, the system reverses its direction of motion frequently, so there is a dense cluster in the system that reverses its direction of motion. However, in the parallel update, due to the synchronous nature of the update, the reversal time is quite high. Sometimes, within the given time interval, it does not show any reversal, i.e. homogeneous distribution of the spins in the system. But occasionally it also switches the direction and dense clusters can also form. So, in the Parallel update case, we have found that the probability distribution function P(Δρ) shows a bimodality. But in the Random-sequential case, there is a single peak in the distribution. Till now, we focused on the nature of the cluster of spins for the two updates. Further, we calculated the number fluctuation in the system to investigate the effect of the inherent randomness in the Random sequential case on the local density of particles. For comparison we calculate the number fluctuation for both the updates. To calculate the number fluctuation, we start with a small length at the center of the system and gradually increase the length and calculate the number fluctuation in the different lengths for both update rules. We find that for both the cases Δ N ∼⟨ N ⟩ ^α. The α = 1/2 for the equilibrium systems <cit.> and α≃ 1 for self-propelled particles <cit.>. For the Parallel update, similar to the density fluctuation which is bimodal in nature as shown in Fig. <ref>(c), the Δ N also shows the dual character. For some of the realisations, the density remains homogeneous and Δ N ≃ N^α with smaller α and for many other realisations α≃ 1. Whereas for the Random-sequential update, the density fluctuation remains the same for all realisations. The difference over the realisations for Parallel update can also be obtained over time, but the time needed for the same is much larger than the maximum simulation time in our present study. In Fig. <ref>(a-b), we show the probability distribution function (PDF) of the exponent P(α) is shown for Parallel and Random-sequential update, respectively. For the Parallel update, the distribution shows the large variation of α at the front of the distribution, whereas for the Random-sequential update, it is unimodal with a single peak close to α≃ 0.97. Hence, the system shows the Giant number fluctuation (GNF) for both the updates. It should be noticed that the GNF, reported in previous studies, is for a system of dimensions two and higher. The appearance of large density fluctuation in a one-dimensional system is truly remarkable. The previous results here have focused on the steady state characteristics of the system for the two updates when the system is in the deep ordered state. Now, we discuss the effect of update rules when the system is close to the order-disorder transition point. For that, we vary the β and explore the system near critical β. The system shows the order-to-disorder phase transition as β is varied from large to small values. We characterise the nature of phase transition for the two types of updates. To check the effect of the update rules on the nature of phase transition, in Fig. <ref>(a-c), (b-d), we show the order-parameter, fourth-order Cumulant (Binder Cumulant) for a range of inverse temperature β. The Binder Cumulant is defined by; G = 1-⟨ |m|^4 ⟩/3 ⟨ |m|^2 ⟩^2 is useful for characterizing the nature of phase transition in many nonequilibrium systems <cit.>. In the case of first-order transition, there is a sharp drop toward the negative value near the transition region. For the case of Parallel update as shown in Fig. <ref>(b), G goes from the value 1/3 for low β value (disordered region) to 2/3 value for large β values (ordered region) Fig.<ref>(b)) with a sharp dip to negative values near transition. Accordingly, the order parameter ⟨ |m| ⟩ vs. β plot shows the jump around the order-disorder transition as shown in Fig. <ref>(a). Whereas for the Random-sequential update, it goes continuously from small ⟨ |m| ⟩≃ 0 values to finite ⟨ |m| ⟩ as we increase β Fig. <ref> (c). In the Random-sequential case, G goes from 2/3 (ordered state) to zero (disordered state) smoothly, as shown in Fig. <ref>(d). To further understand the behaviour of phase transition, we calculated the order-parameter distribution P(|m|) <cit.>near the transition region (β = [2.45, 2.70]) for the Parallel and Random-sequential update case as shown in Fig.  <ref>(a-b) respectively. For the Parallel update P(|m|), shows the bimodality (phase coexistence) whereas it always unimodal for the random case and peak of the P(|m|) continuously shifts to lower values of |m| as we decrease β. Hence, we can say that the transition from order-to-disorder is the discontinuous type for the Parallel update and continuous type for the Random-sequential update. We further distinguish the difference in the nature of phase transition for the two cases by showing the presence of a hysteresis loop in the system. The presence of hysteresis is another signature of the first-order transition <cit.>. The instantaneous order parameter displays hysteresis if the control parameter is ramped up and down across the transition point at a modest (constant) ramp rate. The area of the loop varies depending on the ramp rate. In Fig. <ref>(a-b), we show the hysteresis for N = 500, 1000 spins in the order parameter and the inverse temperature with a ramp rate of 3.6 x 10^-6 per unit time for both the update rules. The open symbols are for the ramp-up and the solid filled for the ramp-down case. The two symbols, circle and square, are for two values of N = 500 and 1000, respectively. In the case of Parallel update, we get the finite area in the hysteresis loop (between ramp-up and down), but in random sequential, both curves coincide, so no area is enclosed. This indicates that Parallel update gives us the discontinuous transition while later gives the continuous transition. The presence of finite area for the hysteresis loop is again due to the coexistence of ordered and disordered state near the critical point for the Parallel update. § CONCLUSIONS We have studied flocking in one dimension using a collection of active Ising spins moving in the unit interval. We find that the dynamical properties of the system, both transient and steady state, intrinsically depend on the nature of the update rules in the system. The magnitude of the orientation fluctuations in the disordered state of the spin system is greater for random-sequential updates than for parallel updates. In the state of long-range order, the flocks alternate between the allowed orientations for the two update rules, the frequency of which depends on the type of update. For the fixed set of system parameters, the system with Parallel update is less alternating in comparison to its Random-sequential counterpart. The density fluctuations are bimodal for the Parallel update, whereas it is unimodal for the Random sequential update. Further, we find that the differences in the density fluctuations also reflect in the nature of the transition from disorder to long-range order, with discontinuous for the Parallel update and continuous for the Random-sequential update. The differences arise due to intrinsic randomness in the Random-sequential update, which makes such an evolution asynchronous as opposed to the Parallel update, which is inherently synchronous. The present study has implications for the current understanding of collective motion in one dimension and opens a similar question for other active matter systems in higher dimensions. § ACKNOWLEDGEMENT A.K. and S.M. thank PARAM Shivay for the computational facility under the National Supercomputing Mission, Government of India, at the Indian Institute of Technology (BHU) Varanasi and also I.I.T. (BHU) Varanasi computational facility. A. K. thanks PMRF, INDIA for the research fellowship. S. M. thanks DST, SERB (INDIA), Project No. CRG/2021/006945 and MTR/2021/000438 for partial financial support. 30 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL [Kemkemer et al.(2000)Kemkemer, Teichgräber, Schrank-Kaufmann, Kaufmann, and Gruler]kemkemer2000nematic authorR. Kemkemer, authorV. Teichgräber, authorS. Schrank-Kaufmann, authorD. Kaufmann, and authorH. Gruler, journalThe European Physical Journal E volume3, pages101 (year2000). [Bonner(1998)]bonner1998way authorJ. T. Bonner, journalProceedings of the National Academy of Sciences volume95, pages9355 (year1998). [Parrish and Hamner(1997)]parrish1997three authorJ. Parrish and authorW. Hamner, journalCambridge, England: Cambridge University Phys. Rev. Ess (year1997). [Helbing et al.(2000)Helbing, Farkas, and Vicsek]helbing2000simulating authorD. Helbing, authorI. Farkas, and authorT. Vicsek, journalNature volume407, pages487 (year2000). [Vicsek(2001)]vicsek2001question authorT. Vicsek, journalNature volume411, pages421 (year2001). [Pattanayak and Mishra(2018)]pattanayak2018collection authorS. Pattanayak and authorS. Mishra, journalJournal of Physics Communications volume2, pages045007 (year2018). [Vicsek et al.(1995)Vicsek, Czirók, Ben-Jacob, Cohen, and Shochet]vicsek1995novel authorT. Vicsek, authorA. Czirók, authorE. Ben-Jacob, authorI. Cohen, and authorO. Shochet, journalPhysical review letters volume75, pages1226 (year1995). [Czirók et al.(1999)Czirók, Barabási, and Vicsek]czirok1999collective authorA. Czirók, authorA.-L. Barabási, and authorT. Vicsek, journalPhysical Review Letters volume82, pages209 (year1999). [Solon and Tailleur(2013)]solon2013revisiting authorA. P. Solon and authorJ. Tailleur, journalPhysical review letters volume111, pages078101 (year2013). [O'Loan and Evans(1999)]o1999alternating authorO. O'Loan and authorM. Evans, journalJournal of Physics A: Mathematical and General volume32, pagesL99 (year1999). [Dossetti(2011)]dossetti2011cohesive authorV. Dossetti, journalJournal of Physics A: Mathematical and Theoretical volume45, pages035003 (year2011). [Yates et al.(2009)Yates, Erban, Escudero, Couzin, Buhl, Kevrekidis, Maini, and Sumpter]yates2009inherent authorC. A. Yates, authorR. Erban, authorC. Escudero, authorI. D. Couzin, authorJ. Buhl, authorI. G. Kevrekidis, authorP. K. Maini, and authorD. J. Sumpter, journalProceedings of the National Academy of Sciences volume106, pages5464 (year2009). [Buhl et al.(2006)Buhl, Sumpter, Couzin, Hale, Despland, Miller, and Simpson]buhl2006disorder authorJ. Buhl, authorD. J. Sumpter, authorI. D. Couzin, authorJ. J. Hale, authorE. Despland, authorE. R. Miller, and authorS. J. Simpson, journalScience volume312, pages1402 (year2006). [Choi and Huberman(1983)]choi1983digital authorM. Choi and authorB. Huberman, journalPhysical Review B volume28, pages2547 (year1983). [Blok and Bergersen(1999)]blok1999synchronous authorH. J. Blok and authorB. Bergersen, journalPhysical Review E volume59, pages3876 (year1999). [Marcq et al.(1997)Marcq, Chaté, and Manneville]marcq1997universality authorP. Marcq, authorH. Chaté, and authorP. Manneville, journalPhysical Review E volume55, pages2606 (year1997). [Rolf et al.(1998)Rolf, Bohr, and Jensen]rolf1998directed authorJ. Rolf, authorT. Bohr, and authorM. H. Jensen, journalPhysical Review E volume57, pagesR2503 (year1998). [Newman and Barkema(1999)]newman1999monte authorM. E. Newman and authorG. T. Barkema, titleMonte Carlo methods in statistical physics (publisherClarendon Press, year1999). [Beale and Pathria(2021)]beale2021statistical authorP. D. Beale and authorR. Pathria (year2021). [Mishra and Ramaswamy(2006)]mishra2006active authorS. Mishra and authorS. Ramaswamy, journalPhysical review letters volume97, pages090602 (year2006). [Chaté et al.(2008)Chaté, Ginelli, Grégoire, and Raynaud]chate2008collective authorH. Chaté, authorF. Ginelli, authorG. Grégoire, and authorF. Raynaud, journalPhysical Review E volume77, pages046113 (year2008). [Dey et al.(2012)Dey, Das, and Rajesh]dey2012spatial authorS. Dey, authorD. Das, and authorR. Rajesh, journalPhysical review letters volume108, pages238001 (year2012). [Ramaswamy et al.(2003)Ramaswamy, Simha, and Toner]ramaswamy2003active authorS. Ramaswamy, authorR. A. Simha, and authorJ. Toner, journalEurophysics Letters volume62, pages196 (year2003). [Mishra et al.(2014)Mishra, Puri, and Ramaswamy]mishra2014aspects authorS. Mishra, authorS. Puri, and authorS. Ramaswamy, journalPhilosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences volume372, pages20130364 (year2014). [Singh et al.(2021a)Singh, Pattanayak, and Mishra]singh2021ordering authorJ. P. Singh, authorS. Pattanayak, and authorS. Mishra, journalJournal of Physics A: Mathematical and Theoretical volume54, pages115001 (year2021a). [Jena and Mishra(2023)]jena2023ordering authorP. Jena and authorS. Mishra, journalPhysics of Fluids volume35 (year2023). [Semwal et al.(2024)Semwal, Joshi, Dikshit, and Mishra]semwal2024macro authorV. Semwal, authorJ. Joshi, authorS. Dikshit, and authorS. Mishra, journalPhysica A: Statistical Mechanics and its Applications volume634, pages129435 (year2024). [Durve and Sayeed(2016)]durve2016first authorM. Durve and authorA. Sayeed, journalPhysical Review E volume93, pages052115 (year2016). [Bhattacherjee et al.(2015)Bhattacherjee, Mishra, and Manna]bhattacherjee2015topological authorB. Bhattacherjee, authorS. Mishra, and authorS. Manna, journalPhysical Review E volume92, pages062134 (year2015). [Singh et al.(2021b)Singh, Kumar, and Mishra]singh2021bond authorJ. P. Singh, authorS. Kumar, and authorS. Mishra, journalJournal of Statistical Mechanics: Theory and Experiment volume2021, pages083217 (year2021b).
http://arxiv.org/abs/2406.08570v1
20240612181132
HDNet: Physics-Inspired Neural Network for Flow Estimation based on Helmholtz Decomposition
[ "Miao Qi", "Ramzi Idoughi", "Wolfgang Heidrich" ]
cs.LG
[ "cs.LG", "cs.AI" ]
FastEEC: Fast Evaluation of N-point Energy Correlators [ June 17, 2024 ====================================================== § ABSTRACT Flow estimation problems are ubiquitous in scientific imaging. Often, the underlying flows are subject to physical constraints that can be exploited in the flow estimation; for example, incompressible (divergence-free) flows are expected for many fluid experiments, while irrotational (curl-free) flows arise in the analysis of optical distortions and wavefront sensing. In this work, we propose a Physics-Inspired Neural Network (PINN) named HDNet, which performs a Helmholtz decomposition of an arbitrary flow field, i.e., it decomposes the input flow into a divergence-only and a curl-only component. HDNet can be trained exclusively on synthetic data generated by reverse Helmholtz decomposition, which we call Helmholtz synthesis. As a PINN, HDNet is fully differentiable and can easily be integrated into arbitrary flow estimation problems. § INTRODUCTION In many flow estimation problems, the reconstructed flows are governed by physical properties. For example, incompressibility (divergence-free) holds for many flows in fluid simulations and experiments, while irrotationality (curl-free) of optical flows is expected in some applications of optical distortion analysis. Incorporating these physical constraints into the reconstruction framework can significantly improve the accuracy of the results in such inverse problems, as proven by several existing works <cit.>. However, enforcing these constraints in a differentiable manner, compatible with popular deep learning reconstruction methods, remains a challenging problem. A straightforward approach involves incorporating physical laws as loss terms of the neural network pipeline <cit.>. This approach is better known as Physics-informed Neural Networks (PINNs). For instance, in the case of incompressibility and irrotationality, an ℓ_2 norm of the divergence or the curl would be added to the total loss. These soft constraints are easy to implement but do not guarantee that the reconstruction aligns perfectly with the physical constraints, as illustrated in Fig. <ref>. Enforcing physical properties as a hard constraint has already been proposed for the incompressibility of the flow through the use of the pressure projection <cit.>. However, this technique suffers from its lack of differentiability and its incompatibility with popular deep learning-based reconstruction methods. To address this challenge, we propose a novel approach based on the fundamental theorem of vector calculus also known as Helmholtz decomposition. Specifically, we introduce the Helmholtz Decomposition Network (HDNet), which is a flexible and differentiable network that enforces physical constraints during flow reconstructions. HDNet decomposes any flow field into two components: a solenoidal (incompressible) flow field and an irrotational flow field. In addition to these two vector fields, HDNet also outputs the scalar potential of the irrotational field component, which has different physical interpretations according to the application context. For example, this scalar potential corresponds to the normalized pressure in fluid dynamics applications, while it represents the phase profile in distortion problems like Background-Oriented Schlieren (BOS) imaging and wavefront sensing (see Section <ref>). To enable a supervised training of HDNet, it is necessary to generate a large-scale dataset of paired input-output data. Conventional fluid simulation software, however, is computationally expensive and time-consuming, hindering the generation of such datasets. Therefore, we propose the Helmholtz synthesis module to generate efficiently a large-scale fluid training dataset. This module, based on the reverse process of Helmholtz decomposition, enables the creation of large-scale, highly variable fluid data pairs in a short timeframe, which makes supervised learning for HDNet possible. Furthermore, we propose a PINN-based flow reconstruction pipeline (Fig. <ref> (b)), that leverages HDNet to enforce physical constraints. This pipeline is demonstrated in the context of incompressible flow in fluid dynamics applications. We also illustrate two examples of irrotational flow in the context of optical flow estimation in phase distortion problems. With an extensive comparative study, we show that our method outperforms the conventional Horn-Schunk optical flow and soft constraint methods in terms of reconstruction accuracy and preservation of physical properties. Our HDNet exhibits significant flexibility beyond its application in our PINN-based flow reconstruction pipeline. Its capabilities extend to a wide range of deep learning applications, including inverse imaging pipelines, differential reconstruction frameworks, and even forward simulation problems. In summary, our contributions are: * We propose HDNet, a differentiable network to enforce the physical constraints for flow reconstruction problems. HDNet is highly versatile and capable of simultaneously obtaining solenoidal, irrotational fields, and scalar potential fields at the same time. * We propose Helmholtz synthesis, an efficient data generation method capable of creating large-scale paired datasets for the training of HDNet. * As an example application, we demonstrate how to integrate HDNet into a fully differentiable PINN-based flow reconstruction pipeline, resulting in exceptional reconstruction performance while rigorously preserving physical constraints. * We evaluate our approach on different real applications and show improvement in comparison to existing methods. § RELATED WORK Physics-informed learning. Physics-Informed Learning <cit.> is a series of strategies that leverage physical laws and constraints to improve machine learning models' predictions. This approach has applications in several domains, including fluid dynamics, quantum mechanics, electromagnetics, and biology <cit.>. One popular strategy involves the use of Physics-informed Neural Networks (PINNs). Firstly introduced by Raissi  <cit.>, PINNs incorporate physical equations directly into the loss function, ensuring that the network's outputs align with the governing physical principles. PINNs have been extensively used in the field of fluid dynamics. For example, Cai  <cit.> employed a PINN to predict the pressure and the velocity field from the Tomographic background-oriented Schlieren temperature field for the flow over an espresso cup. Wang  <cit.> proposed a PINN to estimate the velocity field of fluids in microchannels. By their design, PINNs do not incorporate a physical forward model in their reconstruction process but rely only on integrating the physical equations in the form of soft constraints within the loss function. Another strategy consists of combining neural models with traditional physics-based simulations to incorporate physics through generated datasets <cit.>. Additionally, some existing methods embedded physical priors or constraints into the network architecture. For instance, Cao  <cit.> propose a neural space-time model for representing dynamic samples captured using speckle structured illumination. Specifically, their approach utilizes two Multi-Layer Perceptrons (MLPs): one to model the motion field and another to represent a canonical configuration of the dynamic sample. The reconstructed sample at different times is obtained by warping the canonical scene with the estimated motion field. The reconstructed frames are then processed through a forward model simulating the imaging process, and the resulting outputs are compared to the captured images to compute the training loss. Enforcing incompressibility in fluid simulation and reconstruction. Fluid simulation and reconstruction tasks often require enforcing incompressibility as a hard constraint, which ensures the conservation of the fluid volume. This constraint is equivalent to having a divergence-free flow. In the literature, several conventional methods have been proposed to enforce this physical constraint. PCISPH <cit.> corrects pressure terms in the Smoothed Particle Hydrodynamics (SPH) representation by assuming constant density. Vortex methods <cit.> compute velocity from vortex strength using the Biot-Savart formula, facilitating incompressible fluid simulation. The pressure projection <cit.> approach decomposes an arbitrary field into solenoidal (divergence-free) and irrotational (curl-free) components. This involves solving the Poisson equation, often using iterative gradient-based methods like Preconditioned Conjugate Gradient (PCG). The solenoidal component will be selected to satisfy the incompressibility constraint. While these methods are effective, they share a common limitation: they are non-differentiable. Thus, they cannot be integrated into popular differentiable and deep learning pipelines. To address this challenge, recent research has explored differentiable physical constraint methods. A first approach involves applying a divergence or curl as a penalty term <cit.>, offering a simple and forward differentiable approach. However, the soft nature of these constraints may not always guarantee strict incompressibility. Moreover, some works <cit.> propose a Convolutional Neural Network (CNN) Poisson solver to replace the iterative solver in pressure projection. However, the lack of sufficient training datasets has led these approaches towards unsupervised learning, which may reduce their performance. Particle Image Velocimetry (PIV). PIV is a powerful imaging technique widely used for measuring fluid flow velocities in various fields <cit.>. The fundamental principle of PIV involves seeding the studied fluid or gas flow with particles. By illuminating the region of interest and recording the particles' advected motion, the fluid flows can be retrieved. In basic PIV techniques, the region of interest of the fluid is simply a plane illuminated with a laser light sheet and captured from a single camera, which leads to the reconstruction of an in-plane 2D velocity field. In this work, we demonstrate our approach using data captured with a basic 2D PIV approach. Imaging optical distortion. Imaging optical distortion is a critical task in several imaging applications like microscopy, telescopes, and machine vision systems. Optical distortion occurs when light rays are refracted during their path, causing deviations from the ideal rectilinear light propagation. Several techniques have been developed to correct optical distortion since it can lead to inaccurate measurements, blurred images, and reduced resolution. However, in some applications, optical distortion is not a nuisance but rather a valuable tool. For example, in techniques such as Background-Oriented Schlieren (BOS) imaging <cit.>, wavefront sensing <cit.>, and phase retrieval <cit.>, the distortion is leveraged to reconstruct a signal of interest. One solution to these problems is to capture the distortion of a patterned background viewed through the transparent medium of interest (i.e., gas, optical system, etc.). By comparing images of the undisturbed and disturbed background, these techniques can quantitatively measure the displacements induced by the distortion and, therefore, infer the underlying dynamics inside the medium. § METHOD In the following we first describe the mathematical concept behind Helmholtz decomposition, then introduce the HDNet architecture, and finally introduce Helmholtz Synthesis as way of efficiently generating training data. §.§ Helmholtz Decomposition and Physical Interpretation The key idea behind our approach is to utilize the Helmholtz decomposition of vector fields, which is based on the fundamental theorem of vector calculus. This theorem states that any arbitrary vector field can be decomposed into two orthogonal components: an irrotational (curl-free) and a solenoidal (divergence-free) vector field: = +. Classically, the Helmholtz decomposition is computed by leveraging well-known identities from vector calculus, and then (numerically) solving a Poisson equation. Specifically, any irrotational flow can be expressed as the gradient of a scalar potential field : = ∇ϕ, Additionally, the curl of a gradient field is equal to zero, as is the divergence of a curl field: (∇)=0 and ()=0. By applying the divergence operator to both sides of Eq. <ref>, we obtain: = + = , which yields the Poisson equation: = . Once the potential field is retreived using a Poisson solver, the component fields can be calculated as follows: =∇ and =-. It is important to note that the potential field has physical significance in many different application domains. For example in fluid simulation = P/ρ represents the normalized pressure, where P is the pressure and ρ is the mass density. This term plays a crucial role in the incompressible Navier-Stokes equations, which govern many physical fluid flows. In incompressible fluid simulation, a preliminary flow estimate is often forced to be divergence-free (incompressible) by computing the term according to Eq. <ref>, a process commonly known as the pressure projection step. In some optical applications, such as wavefront sensing <cit.>, phase retrieval <cit.>, or Background-Oriented Schlieren imaging <cit.>, the Helmholtz decomposition takes on another physical interpretation. In these applications, the flow fields correspond to optical flow, which describes how light rays bend due to an optical distortion. This distortion is caused by a spatially varying phase delay in the optical wavefront. The potential field ϕ is precisely this phase profile, while the observed optical flow is proportional to its gradient, ∇ϕ, making it an irrotational field (see Supplement for details). Therefore, when reconstructing optical flow for an optical distortion inverse problem, it is valid to employ the Helmholtz decomposition to ensure the reconstructed flow is curl-free. However, the classical Poisson solver approach is not differentiable, making it challenging to integrate into modern PINN pipelines for either forward simulation or inverse reconstruction tasks. With the recent advancements in deep learning, numerous deep learning solvers for PDEs have been proposed <cit.>. Inspired by these ideas, we propose HDNet (Helmholtz decomposition Network) a novel neural network designed to perform Helmholtz decomposition based on conventional HD (Helmholtz decomposition) operations as described by Eqs.<ref> and <ref>. §.§ HDNet Architecture. The HDNet architecture, depicted in Fig. <ref> (a), takes as input either an arbitrary flow field or an "initial" estimation of the reconstructed flow. Instead of relying on the commonly used iterative Poisson solver, HDNet employs a deep learning (DL) solver based on a Convolutional Neural Network (CNN) encoder-decoder with a UNet architecture. To mitigate grid artifacts caused by max-pooling layers, we replace them with convolutions with stride, and similarly, we replace all the transpose convolutions with up-sampling layers. From Equation <ref>, the input of the network is a general velocity field . To facilitate learning of the Helmholtz decomposition, we compute the divergence , and concatenate it with this input. The UNet core of the HDNet takes this concatenated input and computes the desired potential field . From Equations <ref> and <ref>, we can compute the gradient of the output scalar ∇ to get the curl-free component . By subtracting this curl-free component from the input field, we obtain the divergence-free component (). Training loss. To train the proposed network, we aim to minimize the discrepancy between the predicted scalar field and the ground truth scalar field , while simultaneously minimizing the difference between the predicted divergence-free field component (_𝐬𝐨𝐥) and the ground truth component (_𝐬𝐨𝐥). Thus, the training loss is defined as follows: ℒ= ‖_𝐬𝐨𝐥 - _𝐬𝐨𝐥‖_2^2+ ‖ - ‖_2^2. The ground truth _𝐬𝐨𝐥 and come from synthesized training data which will be introduced in the following subsection. §.§ Training Data Generation with Helmholtz Synthesis To enable supervised training of the network, it is necessary to generate training pairs for HDNet. The training result highly relies on the training dataset quantity and quality. However, conventional commercial fluid simulation software are computationally expensive and time-consuming, limiting the generation of large-scale datasets. To overcome this challenge, we introduce the Helmholtz synthesis module, a novel approach for generating large-scale fluid training datasets. The Helmholtz synthesis module generates data through the reverse process of the Helmholtz decomposition. By exploiting the vector identities in Eq. <ref>, we can compute irrotational and solenoidal fields from pseudo-random scalar fields and combine them to obtain arbitrary flow fields. To ensure a realistic quality of the generated dataset, we require scalar fields that are smooth, bandwidth-limited, and exhibit large variability, thereby mimicking real-world complexity. Perlin Noise. In Computer Graphics, Perlin noise <cit.> was developed as a way to produce random band-limited scalar fields that satisfy our requirements for spatial smoothness. Different frequency bands of Perlin noise can be composited to generate random patterns of varying spatial detail, known as “turbulence” in the graphics literature. Thus, the pseudo-random generated scalar fields are expressed as: =∑ A_n_n, where A_n=1/2^n is the amplitude of the Perlin noise for the scale n, and _n is the Perlin noise generated with a 2^n × 2^n grid. The grid number 2^n controls the frequency of the generated Perlin noise: the higher the grid number, the higher the frequency of the Perlin noise, and the smaller its amplitude. This amplitude rule is designed in such a way to better simulate natural signals. For example, in Fig.<ref>, A_2_2 denotes a Perlin noise with a grid number of 4 and an amplitude of 1/4. While A_3_3 denotes a Perlin noise with a grid number of 8 and an amplitude of 1/8. In our application, we require a Perlin turbulence scalar field to generate the irrotational velocity field. We also need a Perlin turbulence vector field =(_1,_2,_3), composed of three Perlin turbulence scalar fields _1,_2,_3, for the generation of the solenoidal velocity field. Therefore, the irrotational and the solenoidal velocity fields are respectively constructed as the gradient of the Perlin noise scalar field <cit.>, and the curl of the Perlin noise vector field: =∇ and = ∇×. Note that for 2D flows, degenerates into a scalar field , and the solenoidal velocity field is constructed as: _𝐬𝐨𝐥=(∂/∂, -∂/∂). From the vector identities in Eq. <ref>, we know that the constructed fields satisfy the following properties: _𝐬𝐨𝐥=∇× is divergence-free (divergence of curl is zero) and _𝐢𝐫𝐫=∇ is curl-free (curl of gradient is zero). By combining these two fields, we can obtain an arbitrary flow field using Helmholtz decomposition: ^*=_𝐢𝐫𝐫+χ·_𝐬𝐨𝐥, where χ is the weight controlling the relative strengths of the two components. During the training, we use this generated flow field ^* as the input for our HDNet, while the corresponding divergence-free field _𝐬𝐨𝐥 and the scalar field are used as the ground truth of the network. Using this method, we can generate the 20000 fluid training data pairs with a resolution of 128 × 128 within approximately half an hour. § APPLICATION: USE OF HDNET FOR FLOW RECONSTRUCTION The proposed HDNet network is versatile and can be incorporated into any differentiable flow simulation or reconstruction pipeline, to enforce hard constraints on the physical properties of the flow in various applications. In this work we are primarily interested in inverse problems that can be expressed as flow estimation tasks. We show quite different applications from fluid flow to optical distortion imaging, all using the same general experimental framework outlined in the following. Our pipeline consists of a Physics-Informed Neural Network (PINN) for flow reconstruction, as illustrated in Fig. <ref> (b). First, we use a coordinate-based MLP network to represent the flow field ^* = (, , ; ), where is the Motion MLP network weight. This network takes as input the spatial and temporal coordinates (, , ), and outputs an "initial" reconstructed motion field ^* = (^*, ^*) for each frame. We employ the Wavelet Implicit neural REpresentation (WIRE) <cit.> for the Motion MLP, which utilizes a Gabor wavelet as the activation function to learn high-frequency flow motion. Indeed, this activation function has a controllable parameter that represents the frequency of the signal. By adjusting during the learning process, we can achieve a coarse-to-fine reconstruction. A smaller generates smoother results, corresponding to coarse reconstruction, while a larger generates high-frequency details, corresponding to fine reconstruction. More details about the WIRE representation and the coarse-to-fine reconstruction strategy are respectively discussed in the Supplement Sections <ref> and <ref>. In the applications we investigate, the reconstructed flows are not arbitrary but exhibit specific physical properties. For example, in the particle imaging velocimetry (PIV) application, the flow of incompressible fluids is divergence-free, while in phase distortion problem imaging, the gradient of the air refractive index reconstruction is curl-free (see Section <ref>). To impose these physical constraints on the initial reconstruction, we apply the pre-trained HDNet to the velocity field ^*. The output of the HDNet is the pair of the irrotational and solenoidal fields: (^*) = (_𝐬𝐨𝐥, _𝐢𝐫𝐫). According to the application, we select the component of the velocity that satisfies the physical constraints. Using the HDNet output (_𝐬𝐨𝐥/_𝐢𝐫𝐫), we can warp the canonical template field _0 to obtain the scene field _ for each frame. Mathematically, this process is expressed as: _()=_0 (+_𝐬𝐨𝐥/𝐢𝐫𝐫), where =(, ) is the spatial coordinates. ^_𝐬𝐨𝐥/𝐢𝐫𝐫 is the HDNet output: _𝐬𝐨𝐥 or _𝐢𝐫𝐫 at time . _() is the predicted “scene” field at time . It represents the particle density field in the PIV application or the background texture in the case of BOS or wavefront sensor problem. _0 is the canonical configuration of the “scene” field. In our framework, we represent this reference configuration using simply a template image, which is a variable to be optimized during the learning process. Eq. <ref> is equivalent to the non-linearized brightness consistency <cit.> in the optical flow problems. Therefore, our pipeline inherently incorporates the non-linearized brightness consistency. Our pipeline is defined as a joint optimization problem, where we aim to retrieve both the canonical scene field and the motion field representation: ℒ= __0,∑_, , ‖_0(𝐫+(f(, , ; )))- _‖_2^2 + ‖∇_, (f(, , ; )) ‖ The second term corresponds to the total variation (TV) of the velocity field, which promotes smoothness in the velocity field, limiting changes within the neighborhood. is the weight of the TV term in the total loss. § EXPERIMENTS In this section, we demonstrate the application of HDNet within our flow reconstruction pipeline. We first evaluate the reconstruction performance HDNet performance and flow reconstruction pipeline of by using synthetic data in order to assess numerical metrics. Then, we showcase reconstruction outcomes obtained from real PIV, and BOS experimental data, providing visualizations of divergence and curl maps alongside with scalar potential fields. Similar experiments are also presented in the Supplement (Section <ref>) for another phase distortion problem: wavefront sensing. The obtained results demonstrate the superior performance of our method in terms of reconstruction quality, physical constraint enforcement, and robustness against noise. §.§ Synthetic Data HDNet evaluation. We demonstrate the performance of HDNet in enforcing incompressibility on the outputed solenoidal field using synthetic data generated by Helmholtz synthesis module. We create five groups of paired datasets, each containing 800 samples, with varying Perlin noise scales n and the weight χ. We assess the divergence of the solenoidal field output by HDNet and report the mean squared error (MSE) with respect to zero (mean and standard deviation) in Tab. <ref>. Flow pipeline evaluation. To quantitatively assess the performance of our flow reconstruction pipeline, we generated synthetic PIV particle image sequences as described in the Supplement <ref>. We then compare the predicted flows using different reconstruction methods with the ground truth one. The results of this comparison are reported in Fig.<ref>. When using our pipeline without HDNet, the reconstruction exhibits significant artifacts and has larger errors than the baseline flow reconstruction using Horn-Schunck <cit.> optical flow, in both used metrics: AAE (Average Angular Error value) for the flow and the MSE (Mean Square Error) for the divergence. When using the HDNet, our pipeline improves the flow reconstruction with a better AAE, while at the same time reducing the divergence MSE by more than an order of magnitude. §.§ Real Experiment Data We also verify our flow reconstruction pipeline in real experiments for both PIV and optical distortion applications. Details of the experimental settings and image sequences can be found in Supplement Section <ref> and Fig. <ref>. §.§.§ PIV Fig. <ref> shows the real PIV experiment flow reconstructions. Here, “Soft Constraint” reconstruction consists of adding an ℓ_2 norm of the divergence (‖∇·_𝐭 ‖_2^2) to the total loss (Equation <ref>) to penalize the divergence. It is a very straightforward idea, but the solution does not always satisfy the constraint. From Fig. <ref> (d), we can see that there is still some divergence error. With the help of HDNet, our pipeline can reconstruct the flow very well and remove the diverging error. Our method can be thought of as a differentiable hard constraint for the flow reconstruction. §.§.§ Background-Oriented Schlieren Imaging In optical distortion applications like BOS, the reconstructed optical flow is the gradient of refractive index (phase) <cit.>, also see Supplement <ref>. Therefore, this reconstructed flow should be curl-free (i.e., the curl of the gradient is zero). We compare different methods of reconstruction in Fig.<ref>, and illustrate the reconstructed flows and their corresponding curls . The flow reconstruction pipeline is similar to the one used for PIV experiments. The only differences are the input images (see Fig. <ref> in Supplement), and the use of the output of HDNet instead of the output. The compressed video data used in the original paper <cit.> exhibits some compression artifacts, leading to noisy flow reconstruction using traditional methods like Horn-Schunck (Fig. <ref> (a,b)). Our proposed PINN flow reconstruction pipeline produces a significantly cleaner flow field, effectively removing even the vertical line artifact present in the HS reconstruction Fig. <ref> (a-b). Without the use of HDNet, the PINN flow reconstruction pipeline still exhibits a relatively large curl error (see Fig. <ref> (d)). By incorporating HDNet, we achieve a significantly smaller curl error Fig. <ref> (f), indicating a highly accurate and physically consistent flow reconstruction. Notably, by using HDNet, our method not only reconstructs the flow but also recovers the corresponding phase as illustrated in Fig. <ref> (g). § LIMITATIONS AND FUTURE WORK Although HDNet provides a convenient and effective way to inject physical priors into PINNs, the current work has several limitations. First, while the mathematical derivation of the approach holds both in 2D and 3D, currently only the 2D version is implemented. However, the network architecture should be straightforward to adapt to 3D, and since Perlin noise is also defined in 3D, data generation with multi-scale Helmholtz Synthesis is also straightforward. Therefore we do not expect that the generalization of HDNet to 3D will require more than hyperparameter tuning. A more fundamental limitation is that, as a supervised method, HDNet does not guarantee an exact Helmholtz decomposition of the input flow; in particular the solenoidal component is not guaranteed to be strictly divergence free. The irrotational component is computed as the gradient of an estimated potential field (_𝐢𝐫𝐫=∇), and is therefore always curl free. However, any mis-estimation of the potential field results in an imprecise decomposition, and thus the calculated solenoidal flow _𝐬𝐨𝐥=^*-_𝐢𝐫𝐫 may still have a remaining divergence component. Our experiments show that this effect is small, however if it is a concern in a particular application, it is also possible to penalize ∇·_𝐬𝐨𝐥 in the loss function for a larger PINN architecture. § CONCLUSION In this paper, we propose HDNet, a novel network based on the fundamental theorem of vector calculus and Helmholtz decomposition theorem. By employing HDNet, we can effectively impose differentiable hard constraints on inverse imaging problem. We further propose the Helmholtz synthesis module that efficiently generates paired data by reversing Helmholtz decomposition. This module enables the rapid creation of 20000 data pairs within half an hour, making large-scale flow dataset construction feasible and the supervised training of HDNet possible. Finally, we demonstrate the integration of HDNet into a PINN pipeline for flow reconstruction, showcasing its applicability with examples from PIV and BOS imaging data. Experimental results prove that our HDNet-empowered PINN pipeline outperforms conventional flow reconstruction method. Notably, our approach exhibits versatility and flexibility in satisfying both curl-free and divergence-free constraints while also outputting the scalar potential field. The authors would like to thank Congli Wang, Ivo Ihrke and Abdullah Alhareth for providing data. This work was supported by KAUST individual baseline funding. plain § IMPLEMENTATION DETAILS The HDNet architecture is an MLP with 6 layers, 4 of which are hidden layers. Each hidden layer has 64 neurons. For the WIRE activation functions, the ω_0 value ranges from 1.0 to 1.5. σ_0 value ranges from 0.8 to 1.2. We choose the Adam optimizer with learning rate is 1 × 10^-6. We train the HDNet on 20000 data pairs, with 2000 data pairs as the evaluation dataset. Training takes 72 hours on a single A100 GPU. HDNet is applied after 30000 epoch after the coarse reconstruction is almost done. For the data generation, we use Perlin noise at scales n from 1 to 5. The relative strength of the irrotational and the solenoidal, controlled by the weight χ, can be tuned to the specific application. For example in PIV, the basic Horn Schunck optical flow for an incompressible flow would already have a small divergence that just needs to be reduced further. Therefore we can estimate appropriate weights for the two terms by analyzing the divergence of the basic flow estimate for a representative flow, and choose χ appropriately. Using this approach, we chose χ to be a random number from a normal distribution with a mean of 0.0002. For the full flow estimation pipeline, we chose one of the input frames _0 as the template, and initialize accordingly. § EXPERIMENTS §.§ Phase Distortion Problem Principle Optical distortion imaging like BOS, wavefront sensing, and phase retrieval can be approached in different ways, but one common approach is to track the apparent motion (optical flow) of a high frequency pattern imaged through the distortion. An example geometry for BOS is shown in Figure <ref>. The goal of BOS imaging is to measure the phase delay in the distortion plane. A patterned background some distance away is observed with a camera. Since light rays propagate perpendicular to the phase profile , the observed optical flow is proportional to the ∇ for small angles (“paraxial approximation”). The factor of proportionality is the propagation distance between the distortion plane and the background. Because the optical flow is proportional to the phase, the flow is curl free, and the phase actually corresponds to the potential function in the Helmholtz decomposition The coded wavefront sensor is also a variant of the classic phase distortion problem <cit.>. The principle is also similar to what is explained in Section <ref>. The distortion is related to the gradient of phase. A mask is placed in the front of the camera. A frame without any distortion is captured as a reference frame (Fig <ref> (a)). After the phase lens array (Fig <ref> (c)) causing distortion, phase distorted image (Fig <ref> (b)) is captured. Fig <ref> (c) is a Zygo (interferometric) measurement of the lens array, where the data was provided by the authors of <cit.>. The reconstruction in Fig. <ref> is a crop of the lens array as shown in the black dash square in the Fig. <ref> (c). In the unconstrained optical flow measurement we can see that the flow estimates contain erroneous stair-stepping which is not present in the high accuracy interferometric measurements. Comparing (a),(c),(e), we can see that our method has better flow reconstruction quality and fewer artifacts. Comparing (b),(d),(f), we can see that our method have physical constraint performance and curl error value is close to zero. (g) is the phase reconstruction of our method. It is HDNet scalar output. We can see it is symmetrical which match with but have better reconstruction quality than the Zygo measurement in Fig <ref> (c). §.§ PIV Synethetic Data For the PIV simulations, we first generated 1 frame with 10000 particles in random position with pixels size about 1-2 pixels. The particle pixels value was generated in a normal distribution with mean 1 and variance 0.2. The particle frame figure is shown in Supplement Fig. <ref>. Then, we used the flow from Helmholtz Synthesis inference dataset to warp the particle to get the other particle image sequence. HDNet+ It is very straightforward to add a divergence penalty term to the total loss for the flow reconstruction pipeline. To compare with different settings and show the flexibility of HDNet, we also did a comparison experiment that adds a divergence penalty to the total loss for our HDNet flow reconstruction pipeline to explore better reconstruction quality and physical constraint performance. The results are shown in Fig. <ref> (h),(i). The reconstruction quality and physical constraint performance are a little bit better. §.§ PIV Real Experiment Data The real PIV experiment data is provided to us by [omitted for anonymity]. It is captured with a Phantom2640 camera with a resolution of 1024 × 976, an exposure time of 99.540 μ s, and a frame rate of 10000 fps. We cropped the image to have a 256 × 256 size. Flow particle size is 10 μ m. §.§ BOS Experiment Data The data for the BOS experiment is taken from <cit.> and was provided to us by the authors. The target air distortion for reconstruction is the hot air plume of a burning candle. Image resolution in this case is 270 × 270. The reconstructed region corresponds to the upper area of the candle hot air. The distortion in these datasets is very small and often only introduces subpixel shifts in the images. The dataset is also particularly challenging since it uses cameras that record a compressed video stream, so that MPEG artifacts further alter the small distortions. The results in the main paper show that HDNet can provide crucial physical regularization to this very difficult inverse problem. § WAVELET IMPLICIT NEURAL REPRESENTATION Study reveals that directly learning the image or 2D/3D field with MLP leads to very poor accuracy <cit.>. One reason is that only the MLP can not learn high frequency of the image. Employing Gabor wavelet as the activation function can enable the MLP to learn the high frequency of the image <cit.>. Every layer in MLP can be expressed as: 𝐲_𝐦=σ(W_m 𝐲_𝐦-1+𝐛_𝐦), where W_m, 𝐛_𝐦 are the weight and bias for the m layers <cit.>; σ is the activation function. σ(x)=e^j xe^-|s_0x|^2 determine the frequency level that it represents (Supplement Fig. <ref>). A smaller generates smoother results corresponding to the “coarse" reconstruction. A large generates more high-frequency detail corresponding to “fine" reconstruction. By using this property, we can achieve the coarse-to-fine reconstruction which will be discussed in Supplement Section <ref>. Adaptive Learnable Parameter Moreover, the WIRE representation exhibits adaptability, as its representation parameters, and σ_0, are learnable according to the characteristics of the scene being represented. Comparing with NeRF position encoding neural representation, WIRE neural presentation is more continuous for changing the parameter. WIRE is faster than the Fourier Feature <cit.> and robust for inverse problems of images and video <cit.>. § FREQUENCY COARSE TO FINE There is always a trade-off between achieving accuracy at the local and global levels in flow motion reconstruction. For example, as shown in Supplement Fig. <ref>, if w_0 is small, such as w_0=1.2, the reconstructed flow exhibits an accurate overall shape, but lack the detailed information as shown in Supplement Fig. <ref> (b) the black rectangular . Conversely, when w_0 is too large, such as w_0=2, there is detail, but the global flow is not correct. This phenomenon arises due to the presence of local minima in the training process for a specific frequency representation (see Supplement Fig. <ref> for more detail). To overcome this trade-off we propose a coarse-to-fine approach that starts with low frequency to high frequency. As discussed in Supplement Section <ref>, small w_0 means coarse representation and larger w_0 means fine representation. This property is used to implement the coarse-to-fine strategy. We first start with small w_0 and then progressively increase w_0 to a large value as the epoch number increases. The last step, activating learnability for the parameter w_0. As explained in the Section <ref>, learnable parameter undergoes automatic fine adjustments based on the scene. § NEURIPS PAPER CHECKLIST * Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: Justification: the abstract clearly states the contribution and describes the experimental results. * Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: Justification: a section on Limitations and Future Work has been included. * Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: Justification: the paper does not include theoretical results. * Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: Justification: The method is fully described in the paper and supplement, including the data sources and the synthetic data generation. Full code will be provided with the final paper. * Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: Justification: it was not possible to anonymize the code and data for public release before the deadline. Both will be released upon acceptance. * Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: Justification: Training is described in the main paper with some additional details in the supplement. In addition all code will be provided after acceptance. * Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: Justification: Mean and variance are provided for the HDNet itself. In addition we provide example of applications as individual case studies. These do not have large enough sample size to compute statistics. * Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: Justification: The compute resources (single user workstation) are detailed in the paper. * Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics <https://neurips.cc/public/EthicsGuidelines>? Answer: Justification: all guidelines were followed. * Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: Justification: flow estimation is a technical problem for many scientific and engineering tasks, but without broad societal impact. * Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: Justification: the work poses no risk for misuse. * Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: Justification: data sources are provided and author permission has been obtained. * New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: Justification: the text contains a full description of the generation of the synthetic training data. Code will also be provided after acceptance. * Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: Justification: the paper does not involve crowdsourcing nor research with human subjects. * Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: Justification: the paper does not involve crowdsourcing nor research with human subjects.
http://arxiv.org/abs/2406.09361v1
20240613174629
On the independence number of sparser random Cayley graphs
[ "Marcelo Campos", "Gabriel Dahia", "João Pedro Marciano" ]
math.CO
[ "math.CO", "math.NT" ]
§ ABSTRACT The Cayley sum graph Γ_A of a set A ⊆ is defined to have vertex set and an edge between two distinct vertices x, y ∈ if x + y ∈ A. proved that if the set A is a p-random subset of with p = 1/2, then the independence number of Γ_A is asymptotically equal to α(G(n, 1/2)) with high probability. Our main theorem is the first extension of their result to p = o(1): we show that, with high probability, α(Γ_A) = (1 + o(1)) α(G(n, p)) as long as p ≥ (log n)^-1/80. One of the tools in our proof is a geometric-flavoured theorem that generalizes 's lemma, the classical lower bound on the size of high dimensional sumsets. We also give a short proof of this result up to a constant factor; this version yields a much simpler proof of our main theorem at the expense of a worse constant. A More Practical Approach to Machine Unlearning David Zagardo, dave@greenwillowstudios.com June 2024 =============================================== § INTRODUCTION Let n ∈ℕ be a large prime. The Cayley sum graph Γ_A of a set A ⊆ is defined to have as its vertices and an edge between two distinct x, y ∈ if x + y ∈ A. <cit.> conjectured that for each t ∈{1, …, n}, there exists a set A ⊆ such that |A| = t and Γ_A has independence number at most n/t (log n)^O(1), and moreover remarked that “the conjecture may well hold for a random choice of A.” more generally asked (see <cit.>) which parameters of Γ_A match those of the random graph G(n, p) when A is a random subset of . In that same work, he showed that if A is chosen uniformly at random over all subsets of , then with high probability the size of the largest independent set[ actually states this for the clique number, but this is equivalent to what we state since a 1/2-random subset of has the same distribution of its complement.] of Γ_A is at most α(G(n, 1/2)), up to a constant factor. This result is in fact a consequence of a theorem in additive combinatorics. Recall that the sumset X + Y and the restricted sumset X X are defined as X + Y = {x + y : x ∈ X, y ∈ Y}, X X = {x_1 + x_2 : x_1, x_2 ∈ X, x_1 ≠ x_2}, and observe that X ⊆ is independent in Γ_A exactly when X X ⊆∁A. deduced his bound on the typical size of α(Γ_A) from estimates on the number of sets X ⊆ such that |X| = k and |X + X| ≤σ k, where σ is some upper bound on the doubling of X. When referring to the doubling of the set itself, we will use σ[X], σ[X] = |X + X| / |X|. Later, <cit.> showed that better estimates on the number of such sets are possible if one handles ranges of σ[X] differently. When σ[X] is constant, they used an arithmetic regularity lemma to obtain a tighter bound. When the doubling is close to its maximum, they refined 's estimate by leveraging the isoperimetric inequality on ^d to bound the number of “quasi-random” elements in those sets. As a consequence, they determined the correct constant for the independence number of Γ_A when A ⊆ is uniform random, showing that it asymptotically matches α(G(n, 1/2)) with high probability <cit.>. Our main stmt:main-result extends the above result of to the more challenging setting when A is a sparse random subset of , and can be seen as progress towards 's aforementioned conjecture. To state it, it is useful to denote by Γ_p the (random) Cayley sum graph Γ_A when A is a p-random subset of for p = p(n). Let n be a prime number and let p = p(n) satisfy p ≥ (log n)^-1/80. The random Cayley sum graph Γ_p of satisfies α(Γ_p) = (2 + o(1)) log_1/1 - p n with high probability as n →∞. When p = o(1), this bound is asymptotically equal to (2 + o(1)) (log n) / p; it also matches α(G(n, p)) in the corresponding range of p <cit.>. We remark that we do not believe the lower bound on p in <Ref> to be close to optimal, so we do not optimize the constant in the exponent of the log. Nonetheless, there is a natural barrier for our methods when proving upper bounds at around p ≈ (log n)^-1 (see <Ref>). The lower bound in (<ref>) follows from a second moment computation together with a simple combinatorial argument (see <Ref>). The proof of the upper bound, on the other hand, is much more challenging, even though it has the same broad outline pioneered by <cit.>. In fact, we use his bounds on the number of sets with given doubling for the “middle” range k^β < σ[X] = o(k), and a minor extension (that follows from the same proof) of the theorem of <cit.> for the “upper” range σ[X] = Θ(k). Our contribution is therefore in the “lower” doubling range, σ[X] ≤ k^β for a small constant β > 0, where even the correct count of such sets would not be enough to prove our main stmt:main-result. To overcome this obstacle, we show that each of those sets contains a much smaller subset F – we call it a “fingerprint” of X – so that, after determining that F possesses some special properties, it suffices to count the fingerprints to deduce <Ref>. This idea is inspired by the application of the asymmetric container method of <cit.> to the problem of counting sets with small sumset by the first author <cit.>. In fact, we could also use asymmetric containers as a tool to prove our main stmt:main-result, but we prefer this simpler approach as it results in better bounds and a more self-contained proof. Specifically, the special property we require of each fingerprint F ⊆ X is having a sufficiently large sumset. Ideally, we would like F + F to be as large as X + X; this obviously imposes a lower bound of |F| ≥ |X + X|^1/2. recently studied the question of finding such F ⊆ X and showed how to obtain one satisfying |F + F| ≈ |X + X| for general Abelian groups when X has bounded doubling <cit.>. However, their result does not serve our needs because it does not handle the case of σ[X] being k^β for fixed, small β > 0. We pursue a different approach to obtain our collection of fingerprints, one that is able to handle sets with doubling that is polynomial in their size. This approach yields fingerprints such that |F| ≈ |X + X|^1/2log |X|, which is only a logarithmic factor away from the ideal bound, but |F + F| is in general far from |X + X|. What we can show is that the size of the sumset is as large as in 's lemma, which is just enough for our application. It is therefore useful to recall 's lemma, and to do so we need a definition. We will write (X) for the minimum dimension of a hyperplane that contains a set X ⊆^d; if (X) = d, then we say X has full rank. The statement of the lemma is then: Let X ⊆^d be a finite set of full rank. Then, |X + X| ≥ (d + 1)|X| - d + 12. Observe that <Ref> is a statement about subsets of ^d rather than . To amend this, we can use the (by now) standard machinery of isomorphisms (see, for example, <cit.>) to reach similar conclusions for subsets of the integers modulo n, changing rank for dimension when appropriate. We therefore change the setting to ^d for the rest of this informal discussion. Like applications of the hypergraph container method, the way to show that F + F is almost as large as (d + 1)|X| is by proving a suitable supersaturation result (see <Ref>, below). In fact, from this result we will be able to show that picking a fingerprint randomly inside X works. <Ref> informally says the following: if a set Y approximates X + X in terms of its popular sums, then it is (almost) as large as the bound in 's lemma. This key step in the proof of <Ref> is a result stating that we can find some small T ⊆ X such that X + T almost attains the bound in 's lemma. Proving this stmt:freiman-lemma-via-few-translates is the most difficult part of our proof, and we consider it to be of independent interest. thmfreimanlemmafewtrans Let d, r ∈, let γ > r^-1/3 and let X ⊆^d be a finite set with (X) ≥ r. There exists T ⊆ X such that |T| ≤ 4(r + 1)/γ and |X + T| ≥ (1 - 5γ) (r + 1)|X|/2. The proof of this theorem is technical; however, we will also show that if we were satisfied with 1/6 being the leading constant instead of (1 - 5γ)/2, then a much simpler approach would suffice. This weaker version would also be enough to prove the upper bound in <Ref> with 6 being the leading constant instead of 2. Recently, <cit.> proved the following stmt:jing-mudgal-freiman that is closely related to <Ref>. It obtains the correct leading constant, without the (1 - 5γ)/2 or 1/6 loss in our bound, at the expense of a worse relationship between the number of translates and the dimension of the set: Given d ∈, there exists a constant C = C(d) > 0 such that, for every finite set of full rank X ⊆^d, there exist x_1, …, x_C ∈ X satisfying |X + {x_1, . . . , x_C}| ≥ (d + 1)|X| - 5(d + 1)^3. <Ref> is part of a recent line of work <cit.> that relies on a beautiful theorem of <cit.> to obtain sumset lower bounds via few-translates. Unfortunately, the proof of this theorem uses 's <cit.> almost-all Balog–Szemerédi–Gowers theorem, and, as a consequence, the dependency of the number of translates C on the dimension d is of tower-type <cit.>. A related result, which avoids super-polynomial dependencies between its parameters, is the following elegant stmt:flpz proved by <cit.>. Its proof relies on a clever path counting argument akin to Gowers' proof of the Balog–Szemerédi theorem. Note that we state a specialized version of their much more general result. There exists c > 0 such that the following holds. Let X ⊆^d with |X| = k. For every s ∈{1, …, k}, there exists T ⊆ X such that |T| ≤ s and |X + T| ≥ c min{σ[X]^1/3, s}|X|. This result would work for our needs if we could ensure σ[X] > c^-1 d^3, but we cannot. Alas, also exhibit a construction <cit.> which shows that we cannot even replace σ[X]^1/3 in (<ref>) by σ[X]^1/1.29; this means that our requirement for the rank of the set in <Ref> is essential. In the next sec:overview, we give a detailed sketch of our proof strategy, and a proof of our fingerprint stmt:fingerprints. Then, in <Ref>, we give a simple proof of a weaker version of our “'s lemma via few translates” theorem that nevertheless contains some of the main ideas required for the full proof of <Ref>. <Ref> is dedicated to improving the constant to (1 - 5γ)/2, assuming two additional technical lemmas, which we prove in <Ref>. In <Ref>, we derive our supersaturation result from <Ref>, and in <Ref> we complete the proof of our main stmt:main-result. Finally, in <Ref>, we discuss future work and open problems. § OVERVIEW OF THE PROOF Throughout, n ∈ will always be a sufficiently large prime number; we will also adopt the standard convention of omitting floors and ceilings whenever they are not essential. Let k ∈ be the bound that we want to show for the independence number and let A_p be a p-random subset of . Denoting := k, we will show that (∃ X ∈ : X X ⊆∁A_p) → 0 as n →∞, which is equivalent to proving that α(Γ_p) < k . We will follow 's general approach of partitioning = _1 ∪_2 ∪_3 where each sub-collection is defined based on the doubling σ[X], _1 = { X ∈ : σ[X] ≤ k^β}, _2 = { X ∈ : k^β < σ[X] ≤δ k/10 }, and _3 = { X ∈ : δ k/10 < σ[X] }, for β = 1/40 and some δ = o(1), and handle each one differently. Explicitly, we use the union bound to deduce (∃ X ∈ : X X ⊆∁A) ≤∑_i = 1^3 (∃ X ∈_i : X X ⊆∁A). It turns out that we can bound the terms related to _2 and _3 using the same techniques of <cit.> and <cit.>; we therefore defer working out the details regarding these terms to <Ref>. For the remaining term, we could try to follow the methods in <cit.> and take a union bound over the sets in _1: (∃ X ∈_1 : X X ⊆∁A_p) ≤∑_X ∈_1(X X ⊆∁A_p). Observe that, for each X ⊆, the probability in the summand is (X X ⊆∁A_p) = (1 - p)^|X X|. Since there are more than m / 2 k sets X such that |X X| = m, the right-hand side of (<ref>) is at least ∑_m = 2k - 1^k^1 + βm / 2 k (1 - p)^m ≥2k kexp(-5 k p) →∞, as n →∞, whenever p = o(1) and k →∞. The conclusion is that any approach that uses the union bound over all sets with small doubling cannot give the optimal upper bound on the independence number for p smaller than some explicit constant, like 1/5. As we mentioned in the introduction, one of the key new ideas in this paper is to show that there exists a family of fingerprints such that it suffices to consider only the events {F F ⊆∁A_p}_F ∈ instead of the collection {X X ⊆∁A_p}_X ∈_1. Trivially, = _1 is such a family, so, to make this strategy work and improve upon taking the union bound over all X, we must choose in a more clever way. The first property of these fingerprints that we require is that || is sufficiently small, where small vaguely means that a union bound over all F ∈ works. One direct way to achieve that is picking each fingerprint F to be small, but there is a subtle trade-off between the size of F and the upper bound on the probability term (1 - p)^|F F|. To circumvent this issue, we can use the fact that we have a bound (even if a polynomially large one) on the doubling of each X ∈_1. In this setting, 's theorem <cit.> says that X is contained in a generalised arithmetic progression P such that both the size s and dimension d of P are small. Recall that a d-dimensional generalised arithmetic progression is a set of the form { a_0 + ∑_i = 1^d w_i a_i : w_i ∈, 0 ≤ w_i < ℓ_i } for some differences a_0, …, a_d ∈ and side-lengths ℓ_1, …, ℓ_d ∈, and P is proper when every possible choice of {w_1, …, w_d} corresponds to a distinct element of P. Therefore, instead of choosing F directly from , we first choose P and then select F inside P. Now we can state the first requirement for F ∈ in more detail. Let P be the generalised arithmetic progression given by 's theorem for X. Then, F should be small enough compared to |F F| for a union bound over choices of F ⊆ P to work: (∃ F ⊆ P : F F ⊆∁A_p) ≤s |F| (1 - p)^|F F|→ 0 as n →∞. To get to this point, however, we must also choose this generalised arithmetic progression in a previous union bound. Hence, our second requirement for each F is that F F is sufficiently large to pay for the number of choices for the generalised arithmetic progression P. As we can count generalised arithmetic progressions with dimension d and size s by choosing the a_0, a_1, …, a_d and l_1, …, l_d, there are at most (n s)^d + 1 of them. Temporarily ignoring the number of choices for the fingerprint inside the progression (which we already dealt with in (<ref>)), this amounts to requiring that F satisfies (n s)^d + 1 (1 - p)^|F F|→ 0 as n →∞. While these conditions may seem too strict – small F with large F F for every X – this is exactly what we are able to do. To state the actual stmt:fingerprints, we need to relate the dimension of the progression to some notion of dimension for X. The notion that we use is the dimension of X, but to state its definition, we must first define what are homomorphisms and isomorphisms. A homomorphism is a function ϕ : X → Y such that for every a_1, a_2, b_1, b_2 ∈ X, a_1 + b_1 = a_2 + b_2 implies ϕ(a_1) + ϕ(b_1) = ϕ(a_2) + ϕ(b_2). A isomorphism is a bijection ϕ such that both ϕ and its inverse ϕ^-1 are homomorphisms. The dimension of X, (X), is then defined to be the largest d ∈ for which there is a full rank subset of ^d that is isomorphic to X. It is furthermore useful to define the robustness of the dimension of X: we say that X has -robust dimension d if (X) ≥ d and there is no X' ⊆ X such that |X'| ≥ (1 - )|X| and (X') < d. In words, this means that the dimension of X is at least d even if we remove an proportion of its elements. With these definitions, we can state our fingerprint stmt:fingerprints: thmfingerprints Let n be a large enough prime and let k, d ∈. For every 0 < γ < 1/2, there exists C = C(γ) > 0 such that the following holds for all m ≥ (d + 1)k/2 and C/k < < γ. For each d-dimensional generalised arithmetic progression P ⊆_n, there exists a collection = _k, m, (P) of subsets of P satisfying: * For every F ∈, we have |F| ≤ C ^-1√( m log m ) and |F F| ≥(1 - γ) (d + 1) k/2. * For all X ∈Pk with |X X| ≤ m and -robust dimension d, there exists F ∈ such that F ⊆ X. We will deduce this stmt:fingerprints from the following supersaturation result. thmsupsat For every 0 < γ < 1, there exists a constant c = c(γ) > 0 such that, for every sufficiently large set X ⊆, every d ∈ and every 0 < < γ, the following holds. If X has -robust dimension d and Y ⊆ X + X satisfies | {(x_1, x_2) ∈ X^2 : x_1 + x_2 ∉Y }| ≤ c |X|^2, then |Y| ≥ (1 - γ)(d + 1)|X|/2. In words, whenever Y ≈ X + X in the sense of (<ref>), then it also (almost) satisfies the lower bound given by 's lemma (<Ref>), up to a factor of 1/2. Once we have this stmt:supsat, the proof of <Ref> is substantially simpler than usual for a similarly strong container theorem. In fact, it is also the only part of the proof that we could omit by using the container theorem for sumsets <cit.>. The self-contained proof is so simple that we include it here in the overview to motivate the usefulness of <Ref>. Our aim is to show that, for every X ∈Pk with |X X| ≤ m and -robust dimension d, there exists a subset F ⊆ X such that (<ref>) holds. We will show that a q-random subset F of X satisfies the first inequality by a suitable choice of q, and the second one via an application of <Ref> with Y = F F. In order to apply <Ref> to Y = F F, we need to show that Y satisfies (<ref>). To this end, we will first prove that a random choice of F makes it unlikely for F F to miss any y such that ρ_X X(y) ≥2 k^2/C m, where ρ_X X(y) = | {(x_1, x_2) ∈ X^2 : x_1 ≠ x_2, x_1 + x_2 = y }| is the number of pairs of distinct elements of X that sum to a given y ∈ X X. Hereon, we will refer to the y satisfying (<ref>) as the popular elements of X X. We will take a q-random subset where either q = C √( m log m )/2 k if the right-hand side is at most 1, and q = 1 otherwise (a trivial case which we will ignore). Notice that we can upper bound the number of missing pairs by: | {(x_1, x_2) ∈ X^2 : x_1 + x_2 ∉F F}| ≤ |X| + ∑_y ∈ (X X) ∖ (F F)ρ_X X(y) where the term |X| comes from the pairs (x, x) for x ∈ X. Once we have proved that F F contains all popular y ∈ X X, we will have an upper bound on ρ_X X(y) for every y ∈ (X X) ∖ (F F). Inserting this into (<ref>), we deduce that the number of missing pairs is at most |X| + ∑_y ∈ (X X) ∖ (F F)ρ_X X(y) < |X| + 2 k^2/C m|X X| ≤ c |X|^2, if we take C ≥ 3/c, where c = c(γ) is the constant in <Ref>, since |X| = k, |X X| ≤ m and k > C. The upper bound in (<ref>) is what we need to apply <Ref>; doing so gives |F F| ≥(1 - γ)((X) + 1)k/2, from which we can use our assumption that (X) ≥ d to complete the proof. It therefore only remains to prove our claim that with positive probability F contains all popular elements of X X while also being sufficiently small. Notice that our choice of F as a q-random subset of X implies, for each y ∈ X X, that (y ∉F F) = (1 - q^2)^ρ_X X(y)/2 because (i) each pair (x_1, x_2) ∈ X^2 that satisfies x_1 + x_2 = y and x_1 ≠ x_2 is counted twice in ρ_X X(y) and (ii) the probability that such a pair is chosen in F is q^2. Take B_F to be the random variable counting the number of y ∈ X X such that ρ_X X(y) ≥2 k^2/C m and y ∉F F. By linearity of expectation and (<ref>), we have [B_F] = ∑_y ∈ X X ρ_X X(y) ≥ 2 k^2 / C m(y ∉F F) ≤ m (1 - q^2)^ k^2 / C m≤ m exp(- q^2 k^2/C m), where we used |X X| ≤ m to bound the number of terms in the sum. Since q^2 k^2/(C m) = (C/4 ) log m by our choice (<ref>) of q, it follows by Markov's inequality that (B_F > 0) ≤ m^-1, where we also used that < 1 and C ≥ 8. Using Chernoff's inequality to bound the probability that |F| > 2 q k, we deduce that (|F| > 2 q k) + (B_F > 0) < 1, which proves that there exists a fingerprint F ⊆ X satisfying (<ref>), as required. Before moving on with the overview, let us briefly discuss the robustness property in <Ref>. This condition may at first seem unnatural, but the following simple construction shows that some form of robustness is necessary: take X to be the union of d - 1 random points with a progression P, and define Y = P + P. We have simultaneously that (X) = d, |Y| ≈ 2|X| and the sum of almost all pairs of elements of X are in Y. With <Ref> in hand, we can now check that for the family of all sets satisfying (<ref>), our requirements for the fingerprints F are satisfied. Recall that what we need from the size of the sumset is (ns)^d + 1 (1 - p)^|F F|→ 0, where s = |P| and d = (P). Modern formulations of 's theorem tells us that we can take s ≤exp(σ^1 + o(1))k, which in our range of σ and k corresponds to n^o(1). However, we can only apply <Ref> to sets X ⊆ P such that (X) ≥(P) (ignoring the robustness for the moment). Standard formulations of 's theorem only guarantee that (P) is at most σ[X], which would not be good enough for us. Fortunately, <cit.> proved a version of 's theorem that guarantees that (P) is at most (X), at the cost of a weaker bound on the size of P as compared to the more recent results of <cit.> and <cit.>. The impact of the suboptimal size of the progression is a slight reduction in the range of p for which our proof works. There exists C' > 0 such that for all finite subsets X ⊆ with |X| ≥ 2 and σ[X] ≤σ, there is a d-dimensional generalised arithmetic progression P such that X ⊆ P, |P| ≤exp(C' σ^2 (logσ)^3 ) |X| and d ≤(X). Now, we can use the lower bound on |F F| given by <Ref> to obtain (1 - p)^|F F|≤exp( -(1 - γ) (d + 1) k p/2 ), which, choosing k = (2 + o(1)) log_1 / (1 - p) n (a little larger than (2 log n)/p), is at most exp( - (1 + γ) (d + 1) log n ). Replacing this back in the previous equation, and recalling that s = n^o(1), thus yields (ns)^d + 1 (1 - p)^|F F|≤ n^-γ→ 0. The attentive reader may have noticed that 's stmt:chang-freiman-ruzsa is stated for subsets of instead of . To use it for subsets of , we will use instead a version of the Green–Ruzsa theorem ('s theorem for general Abelian groups) due to <cit.>; even though it does not bound (P) ≤(X) directly, it yields a proper generalised arithmetic progression, which is enough by using a simple argument (see <Ref>). Moreover, it is not true that every X has -robust dimension equal to (X). This is not a problem, however, since it is straightforward to prove (see <Ref>) that every set X has a large enough subset X' with -robust dimension d' ∈. Finally, let us check that the size of F given to us by <Ref> is compatible with the range of p in <Ref>. To do so, we need to show that, as n →∞, s |F| (1 - p)^|F F|→ 0. As we already know from (<ref>) that the second term is at most n^-d, we need s |F|≤ s^|F|≤exp( C ^-1 ( m log m )^1/2log s ) = n^o(1). The second inequality in (<ref>) follows from (<ref>) in <Ref>, while the third is a consequence of our choice of k and the bound on s given by <Ref>. Indeed, we have C ^-1 ( m log m )^1/2log s ≪ k^3/4 = o(log n), because m ≤ k^1 + β for X ∈_1, which implies that log s ≤ k^3 β, and we can take = k^-2β and C to be a constant. Our choice of β = 1/40 is therefore more than sufficient to prove (<ref>). At this point, one might ask why we decided to keep track of the constant C up to the final computation. Note that it is crucial that the value of C does not increase too quickly with d growing, since otherwise (<ref>) would not hold for large d. Recall that in the proof of <Ref>, we took C ≈ c^-1. The constant c is essentially given to us by our supersaturation result and its value is tied to how many translates we need in our approximate bound for 's lemma (<Ref>). To see where the dependency of c on the number of translates comes from, consider the contrapositive of <Ref>: if the set Y is small, then it misses many pairs (x_1, x_2) ∈ X^2. By <Ref>, we can find a small T ⊆ X such that |X + T| - |Y| ≥γ (d + 1)|X|. The pigeonhole principle then ensures us that there is some x ∈ T satisfying | (X + x) ∖ Y | ≥γ (d + 1)|X|/|T| = c |X| for c = γ (d + 1)/|T|. Now, if we add the c |X| pairs determined by (X + x) ∖ Y to our collection of missing pairs (x_1, x_2) ∈ X^2 such that x_1 + x_2 ∉Y, remove x from X and repeat this procedure t times, we would have at least c |X| t missing pairs in total. Recalling that X has -robust dimension d, we can take t = |X| while maintaining (X) ≥ d, and hence obtain c |X|^2 missing pairs, as required. The above sketch proof of <Ref> shows that to prove our supersaturation result with c being an absolute constant, we really need the size of T to be a linear function of d, as in <Ref>. In fact, for (<ref>), a bound of the form d^O(1) would suffice. Unfortunately, the stmt:jing-mudgal-freiman of  (<Ref>) gives a super-polynomial dependency on d. Nevertheless, we use their result to prove <Ref> in the range d = O(1). The only missing part in this overview is a proof of <Ref> itself. Instead of sketching it, we complete the picture with the (short) proof of its weaker version in the next section. § A SIMPLE PROOF OF A WEAKER 'S LEMMA VIA FEW TRANSLATES This sec:freiman-via-few-translates is dedicated to proving a weaker form of <Ref>. Although it is not strong enough to prove the upper bound in our main stmt:main-result, it does imply a weaker version where the constant 2 in (<ref>) is replaced by a 6. The deduction of this weaker result is the same as that of <Ref> (see <Ref>) simply replacing <Ref> by <Ref>. Let d, r ∈ and let X ⊆^d be a finite set with (X) ≥ r. There exists a set T ⊆ X such that |T| ≤ r/2 + 1 and |X + T| ≥r|X|/6. The idea of the proof is to add elements of X to T one by one, picking in each step a new translate that adds a substantial number of new elements to the sumset. This suggests a greedy argument, picking x ∈ X ∖ T that increases the size of the sumset the most. However, it is not clear how to show that we can make substantial progress for a sufficient number of steps. Previous arguments, such as the one by <cit.>, show that if progress stops, then X must have small doubling; as we must handle polynomially large σ[X], such strategies do not work in our case. Instead, we adopt a variation of this idea that incorporates the geometry of the set, allowing us to reach conclusions that do not rely on the doubling of the set and depend only on its rank. Roughly speaking, in the proof of <Ref> below, we will show that we can add a new element to T so that it increases the size of X + T by a factor proportional to |X ∖(T)|. Notice that, as long as |T| < (X), we can take a non-trivial step. Now, if we have enough steps that add |X|/3 elements to the sumset, then after r/2 steps we will have both the bound for the sumset and for |T| in <Ref>. Otherwise, as we will show that every step makes at least |X ∖(T)|/2 “progress”, it follows that, after the last “good” step, we must have |X ∩(T)| ≥ |X|/3. In this scenario, we define X^* = X ∩(T) and X' = X ∖(T), and observe that (X') ≥(X) - (T) ≥ r - r/2 = r/2, since |T| ≤ r/2. At this point, we now discard our old T and greedily choose elements of a new set of translates Z ⊆ X' each increasing the rank of X^* ∪ Z by one. Each new element that we add yields a disjoint, translated copy of X^* in the sumset. We then obtain |X + Z| ≥ |X^* + Z| = r|X^*|/2≥r|X|/6, because, by (<ref>), we can greedily add r/2 elements to Z, and |X^*| ≥ |X|/3. We proceed by formalizing the notion that either the greedy argument suffices, or a significant part of X lies in the span of the elements already in T. The version we state below is more general than we need to prove <Ref> because we will reuse it when proving <Ref>. Let d, r ∈, let γ > 0 and let X ⊆^d be a finite set with (X) ≥ r. If T ⊆^d satisfies 0 ∈ T, (T) < r and |X ∩(T)| ≤γ |X|, then there exists an element x_* ∈ X such that |X + (T ∪{x_*})| ≥ |X + T| + (1 - γ)|X|/2. The key idea here is to find a co-dimension 1 hyperplane ℋ which intersects X exactly in X ∩(T). We can find such a hyperplane because X is finite and (T) < (X). Note that the two open half-spaces defined by ℋ induce a partition X' = X_+ ∪ X_-. Without loss of generality, we assume that |X_+| ≥ |X_-|, and our goal is now to add, for each point in X_+, a new element to the sumset. To achieve this, we let u be a normal vector of ℋ, choose y_+ ∈ X_+ to maximise y_+, u, and observe that y_+ + X_+ is disjoint from X + T. First, choose a vector u ∈^d to be the normal of the hyperplane ℋ discussed above. It should satisfy the following properties * u ≠ 0, * z, u = 0, for all z ∈(T), and * z, u≠ 0, for all z ∈ X ∖(T). There is a u satisfying <ref> since 0 ∈ T and (T) < r. We can furthermore find a u for which <ref> also holds because there are only finitely many elements in X ∖(T), as X itself is finite. We partition X = X_+ ∪ X_* ∪ X_- according to the position of its elements relative to the hyperplane defined by x, u = 0, X_+ = {x ∈ X : x, u > 0}, X_* = {x ∈ X : x, u = 0}, X_- = {x ∈ X : x, u < 0}. Assume without loss of generality that |X_+| ≥ |X_-| by swapping the sign of u if necessary, and take y_+ ∈ X_+ to be a maximiser of f(y) = y, u. We claim that X_+ + y_+ is disjoint from X + T. In fact, if we let a ∈ X_+, b ∈ X and c ∈ T, then: a + y_+, u > y_+, u, as a ∈ X_+, ≥b, u, by the maximality of y_+, = b + c, u, as we chose u with c ⊥ u. Therefore, if |X_+| ≥ (1 - γ) |X|/2, we can pick x_* = y_+ and prove the stmt:greedy-phase: |X + (T ∪{x_*})| ≥ |X + T| + |X_+ + x_*| ≥ |X + T| + (1 - γ) |X|/2. Otherwise, since we took |X_+| ≥ |X_-| and X_+ ∪ X_* ∪ X_- partition X, we have |X_*| = |X| - |X_-| - |X_+| > γ |X|. By our choice of u and the definition of X_*, we have X_* = X ∩(T), which, by (<ref>), contradicts our assumption that |X ∩(T)| ≤γ |X|. The 1-dimensional perspective we took in this proof suggests that instead of adding a single maximiser y_+ to T at each step, we should pick both the maximiser y_+ and the minimiser y_-. Unfortunately, if we add {y_+, y_-} to T, then we could increase (T) by two instead of one, causing the greedy argument to run for only half as many steps. We need the following simple lemma to handle the case when |X ∩(T)| is large. Let d, r, s ∈, let γ > 0 and let X ⊆^d be a finite set with (X) ≥ r. If X_* ⊆ X with (X_*) < s, then there exists a set Z ⊆ X ∖ X_* such that |X_* + Z| ≥ (r - s)|X_*| and |Z| ≤ r - s. First, note that for any set S ⊆^d such that (S) < s, we have ((S)) ≤ s. That is, taking the of a set that does not contain 0 may increase its rank by 1. We will construct Z = {z_1, …, z_r - s}⊆ X ∖ X_* one element at a time, also defining Z_i = {z_1, …, z_i} and W_i = (X_* ∪ Z_i) for i ∈{0, …, r - s}. Notice that if i < r - s, then (W_i) ≤ s + i < r, by our assumption on X^* and the definition of W_i. Therefore, there exists z_i + 1∈ X ∖ W_i, as (X) ≥ r. Since X^* ⊆ W_i, this implies that X^* + z_i + 1 and W_i are disjoint, and hence that X^* + z_i + 1 is disjoint from X^* + Z_i. This readily implies the stmt:disjoint-translates-with-X* because |X_* + Z| = ∑_i = 1^r - s |X_* + z_i| = (r - s) |X_*|, where in the first equality we repeatedly used that X_* + z_i + 1 and X_* + Z_i are disjoint. The proof of <Ref> now follows easily from <Ref> and <Ref>: We may assume that 0 ∈ X; notice that this translation does not change (X). First, we construct sets T_i = {x_0, …, x_i}⊆ X by choosing x_0 = 0 and x_i + 1 as given by <Ref>. In details, let t be the first index for which |X ∩(T_t)| > |X|/3. Note that while (<ref>) does not hold, we have (T_i) < (X) since T_i ⊆ X. We may therefore apply <Ref> to T_i with γ = 1/3 to define x_i, which then implies that |X + T_i| ≥|T_i| |X|/3 = (i + 1)|X|/3 for all i ≤ t. Hence, if t ≥ r/2, then, by (<ref>), we have |X + T_t| ≥(t + 1)|X|/3≥r|X|/6, and taking T = T_t concludes the proof. We may therefore assume that t < r/2. In this case, we want to apply <Ref> with X_* = X ∩(T_t) and s = r/2. Note that, as (T_t) ≤ t and 0 ∈ T_t, we have (X_*) ≤((T_t)) ≤ t < r/2, where in the last inequality we used our assumption that t < r/2. This application yields a set Z ⊆ X ∖ X_* such that |Z| ≤ r - s = r/2 and |X + Z| ≥ |X_* + Z| because X_* ⊆ X, ≥r|X_*|/2 by <Ref>, ≥r|X|/6 since |X_*| ≥ |X|/3 by (<ref>). Taking T = Z concludes the proof. § IMPROVING THE BOUND FOR 'S LEMMA VIA FEW-TRANSLATES To obtain the sharp upper bound for α(Γ_p), the bound |X + T| ≥ r|X|/6 is not enough; we need at least |X + T| ≥ (1 - 5γ)(r + 1)|X| / 2 for small γ > 0. In this section, we describe the overall approach and state the intermediate results we require to improve the bound. Once we have stated these results, we will show that assuming them is sufficient to prove <Ref> – this will motivate their statements, since neither they nor their proofs are straightforward. We will prove that these results hold in subsequent sections. To discuss the methods that we will use to improve the bound, it will be convenient to first make some definitions. For finite sets X, W ⊆^d, we define Z(X, W) = Π_W^⊥(X), where Π_U(V) denotes the orthogonal projection of V onto U. We also partition ^d into equivalence classes in that projection, denoting these[To avoid confusion between the equivalence class [z] for z ∈^d and [m] = {1, …, m} for m ∈, we avoid using the latter notation.] by [z]_W = {x ∈^d : Π_W^⊥(x) = z}, and partition X into equivalence classes in the same way: z_W, X = [z]_W ∩ X. It will be convenient to omit the dependency of those definitions on X and W whenever these sets are clear from context, leaving us with the notation Z, [z], z. We also refer to [z] as a “fibre”, and say that a fibre [z] is “empty” if z ∉Z. The start of the proof follows the same idea as in <Ref>: to obtain T_i + 1 from T_i, at each step we add the element provided by <Ref>. We do this for t steps, where t is the first index for which |X ∩(T_t)| > γ|X|. If t ≥ r, then, by <Ref>, we have |X + T_t| ≥ (1 - γ)(t + 1)|X|/2, and we are done. Otherwise, we define T^* = T_t - 1 and W = (T_t), and we look into the set of non-empty fibres Z = Z(X, W). Notice that W is neither ^d nor {0} since 0 < t < r. The rest of the argument is divided into two different cases. If there are many distinct non-empty fibres in Z = {z_1, …, z_m}, then we use the following generalization of <Ref>. Whereas for that previous result we needed one point per dimension, in <Ref> we take one point y_i ∈z_i per non-empty fibre to be our set T, and show that such a choice yields disjoint translates y_i + X_* for X_* = X ∩ W. Let d, r ∈, let γ > 0, and let X ⊆^d be a finite set with (X) ≥ r. If W ⊆^d is such that |X ∩ W| ≥γ |X| and |Z| ≥ (r + 1)/γ, where Z = Z(X, W), then there exists T ⊆ X such that |X + T| ≥ (r + 1)|X| and |T| = (r + 1)/γ. Define m = (r + 1)/γ. Take {z_1, …, z_m}⊆ Z and, for each z_i, pick some arbitrary y_i ∈z_i. Note that 0 = X ∩ W. As we have chosen y_i ∈ [z_i], we have 0 + y_i ⊆ [z_i]. Doing the same with i ≠ j shows that (0 + y_i) ∩(0 + y_j) ⊆ [z_i] ∩ [z_j] = ∅. We can therefore conclude that taking T = {y_1, …, y_m}⊆ X satisfies |X + T| ≥|0 + T| as 0⊆ X by definition, ≥∑_i = 1^m | 0 + y_i | by (<ref>), ≥ m γ |X| as | 0| ≥γ|X|, by assumption, = (r + 1)|X|, as required. The case when there are few non-empty fibres, Z = Z(X, W) is small, is more complex. Letting r_W = (W) and recalling (<ref>), we have |X + T^*| ≥ (1 - γ)(r_W + 1)|X|/2. Therefore, to obtain a final set of translates T such that |X + T| ≥ (1 - 5γ)(r + 1)|X|/2, we need to find a set T' that roughly satisfies |X + T'| ≥(r - r_W)|X|/2. To combine T^* and T', though, we must take care not to count overlaps between the sumsets X + T' and X + T^* more than once. The content of the next stmt:phase-3 shows that choosing T' as discussed above is possible. It says that we can choose a T' which almost attains (<ref>) while avoiding not only X + T^*, but the whole of X + 0 – recall that T^* ⊆ W ∩ X = 0. Let d, r, r_W ∈, η > 0, and let X ⊆^d be a finite set with (X) ≥ r. Let also W ⊆^d be a subspace with dimension r_W, and Z = Z(X, W). If |z| ≤η |X| for every z ∈ Z ∖{0}, then there exists T' ⊆ X such that |(X + T') ∖(X + 0)| ≥r - r_W/2(|X| - |0|) - η|Z||X| and |T'| ≤ |Z|. To gain some intuition for why <Ref> is true, consider the following. Applying 's lemma in the projected world W^⊥, if we could obtain a lower bound depending on |X|, instead of |Z| – which is what we actually get – then we would be done. Notice, however, that this naive application considers every non-empty fibre z as a single element to avoid repeated counts. That is, for each z_1 + z_2 ∈ Z + Z, this approach counts only x + y for a single choice of x ∈z_1 and y ∈ T' ∩z_2. In the proof of <Ref>, we will show that we can instead consider z_1 + y and not overcount. To achieve that, we assign to each z ∈ Z a weight that encodes the size of the corresponding fibre z, and incorporate this weight into the proof of 's lemma in the projected world. Although considering z_1 + y is still not enough to replace |Z| with |X| in the bound, we can crucially choose z_1, z_2 ∈ Z whichever way we want, as long as y ∈z_2. We therefore choose the representation that maximises |z_1| and show that this modification is enough to obtain the bound in <Ref>. The above discussion overlooks the removal of the zero fibre from the sumset, it gives a lower bound for |X + T'| instead of one for |(X + T') ∖ (X + 0)|. To incorporate it, we must take into account our choice maximizing |z_1|, which imposes the (technical) requirement that every non-zero fibre has size at most η |X|. As this is not always the case, in order to apply <Ref>, we will change W slightly to make it so: we will “add” to W the large non-zero fibres until we have | z| ≤η |X| for all remaining z ∈ Z. As we will do this economically, using this new W will not hurt our bound too much. Nevertheless, combining T' and T^* had another unanticipated cost: it incurred a negative (r - r_W) | 0| / 2 term in the bound of <Ref>. The following stmt:phase-2 shows that a simple change suffices to offset this loss: we add two extra points from each non-empty fibre to our final T. Let d ∈, and let X ⊆^d be a finite set. For every W ⊆^d and u^* ∈^d, there is T”⊆ X satisfying |T”| ≤ 2|Z| and |(0 + T”) ∖(X + 0_*)| ≥ |Z| (|0| - |0_*|), where Z = Z(X, W) and 0_* = {x ∈0 : x, u^* = 0}. It is not immediate that <Ref> is really enough to make up for what we lost in <Ref>, as we are removing X + 0_* instead of X + T^*, and 0_* is determined by W and u^*. What we need is that, for our choice of u^*, both | 0_* | ≤γ |X| and T_t - 1 = T^* ⊆0_*. This condition, combined with the following stmt:gluing-phase-2-and-phase-3, is enough to prove <Ref>. The proof of the stmt:gluing-phase-2-and-phase-3 is a simple (but slightly tedious) manipulation of set relations. For every T', T”⊆ X, we have |(X + T̂) ∖(X + 0_*)| ≥|(X + T') ∖(X + 0)| + |(0 + T”) ∖(X + 0_*)| where T̂ = T' ∪ T”. Note first that 0 + T” is a subset of X + T̂: this follows easily from 0⊆ X and T”⊆T̂. We therefore have |X + T̂| ≥ |S| + |0 + T”| where S = (X + T') ∖(0 + T”), and moreover, |(X + T̂) ∖ (X + 0_*)| ≥|S ∖ (X + 0_*)| + |(0 + T”) ∖ (X + 0_*)|. As the last term in (<ref>) already matches what appears in (<ref>), the proof is reduced to showing that |S ∖ (X + 0_*)| ≥|(X + T') ∖(X + 0)|, where, recall, S ∖ (X + 0_*) = (X + T') ∖((0 + T”) ∪ (X + 0_*)). It is enough, therefore, to prove that 0 + T”⊆0 + X and X + 0_* ⊆ X + 0. Both inclusions are trivial, as T”⊆ X by assumption, and 0_* ⊆0 by definition. We are now ready to put the pieces together and prove <Ref>. By translating if necessary, we may assume that 0 ∈ X; note that this does not change (X). As in the proof of the weaker version, <Ref>, we start by defining T_0 = {0}: observe that (T_0) = 0 and |X + 0| = |X|. The next steps consist of taking T_i + 1 = T_i ∪{x_i + 1}, where x_i + 1∈ X is defined to be the x^* given by <Ref> applied to T_i. We can do so as long as |X ∩(T_i)| < γ|X| and i < r, because |T_i| = i + 1 implies that (T_i) ≤ i. We also stop this process for the first t such that |X ∩(T_t)| ≥γ |X|. By (<ref>) and our choice of x_i + 1 via <Ref>, we have, for all i ≤ t, |X + T_i| ≥ |X| + (1 - γ) i|X|/2≥ (1 - γ) (i + 2)|X|/2, and |T_i| ≤ i + 1. Now, if t ≥ r, then we can take T = T_t, and we have completed the proof by (<ref>). Otherwise, we may assume that t < r and define W_1 = (T_t). We would like to use W_1 for the rest of the proof, but <Ref> requires that all non-zero fibres have size at most η |X|. To continue, then, we need to define a subspace W such that for every z ∈ Z(W, X) ∖{0} and given η > 0, | z_W,X| ≤η |X|. We will do so by iteratively projecting these large fibres onto the 0 fibre until none remain. With foresight, we set η = γ^2. Formally, our process is: * Start with ℓ = 1 and W_1 = (T_t). * If there exists z_ℓ∈ Z(W_ℓ, X) ∖{0} such that | z_ℓ| ≥η |X|, then we let W_ℓ + 1 = (W_ℓ∪{z_ℓ}). * If there is no such z_ℓ, we stop with output W_ℓ. A simple and important observation is that ℓ≤ 1/η, since η < γ and |X| ≥ |W_ℓ∩ X| ≥ℓη |X|. Noting that (W_1) ≤ t, since |T_t| ≤ t + 1 and 0 ∈ T_t, it follows that r_W := (W_ℓ) ≤(W_1) + 1/η≤ t + 1/η. Take W to be the output of the above process. Moreover, let Z = Z(W, X) and divide the rest of the proof into two cases depending on |Z|. The first case is if |Z| ≥ (r + 1)/γ. Here, we claim that applying <Ref> completes the proof. The set T ⊆ X provided by this application satisfies |X + T| ≥ (r + 1)|X| ≥ (1 - 5γ)(r + 1)|X|/2, and |T| = r + 1/γ≤4(r + 1)/γ, as required. It remains to deal with the other case, and we may thus assume that |Z| < (r + 1)/γ. In this scenario, our set T will be the union of three sets: the second to last set T_t - 1, the set T' given by <Ref>, and T”, the output of <Ref> for a suitable choice of u^* ∈^d. Note that it consists of few translates: |T| ≤ |T_t - 1| + |T'| + |T”| ≤ t + |Z| + 2|Z| ≤4(r + 1)/γ, where the last inequality follows from our assumptions that t < r and |Z| < (r + 1)/γ. It remains to show that X + T has the appropriate size. First, we separate the contributions of T_t - 1 and T̂ = T' ∪ T” to the sumset |X + T| ≥ |X + T_t - 1| + | (X + T̂) ∖(X + T_t - 1)|, and we would like to apply <Ref> to bound the second term in the right-hand side. To do this, we need to show that T_t - 1⊆0_* for some u^* ∈^d, as that would imply (X + T̂) ∖(X + 0_*) ⊆(X + T̂) ∖(X + T_t - 1). Besides T_t - 1⊆0_*, our choice of u^* will also need to satisfy | 0_* | < γ |X|. To achieve that, we define the subspace W_0 = (T_t - 1) and claim that we can pick u^* ∈^d such that 0_* = {x ∈0 : x, u^* = 0} = X ∩ W_0. As we have stopped the greedy process at t, we must have |X ∩ W_0| < γ |X|, so this choice also satisfies (<ref>). We can choose u^* ∈ W_0^⊥ satisfying (<ref>) because X is finite and (W_0) ≤ t < r ≤ d, since |T_t - 1| ≤ t and 0 ∈ T_t - 1. Notice that this choice of u^* mimics that of u in the proof of <Ref>. With this choice of u^*, we have T_t - 1⊆0_*, as required, and so combining (<ref>) with <Ref> yields, in (<ref>), |X + T| ≥ |X + T_t - 1| + |(X + T') ∖(X + 0)| + |(0 + T”) ∖(X + 0_*)| for T = T_t - 1∪ T' ∪ T”, and T' and T” as given by <Ref> and <Ref>, respectively. As T' originates from <Ref>, we have |(X + T') ∖(X + 0)| ≥(r - r_W)/2(|X| - |0|) - η|Z||X|, where, recall, r_W ≤ t + 1/η, by (<ref>). Moreover, by <Ref>, we have | (0 + T”) ∖(X + 0_*)| ≥ |Z| ( |0| - |0_*|), which, by (<ref>), implies that | (0 + T”) ∖(X + 0_*)| ≥ |Z| ( |0| - γ|X|). Recall that, by (<ref>), we have |X + T_t - 1| ≥ (1 - γ) (t + 1)|X|/2, which, replaced alongside (<ref>) and (<ref>) in (<ref>), yields |X + T| ≥ (1 - γ) (t + 1)/2|X| + (r - r_W)/2(|X| - |0|) - η|Z||X| + |Z| ( |0| - γ|X|). The rest of the proof is dedicated to showing that this expression is at least the bound we need. To this end, observe that |Z| ≥(Z) ≥(X) - (W) ≥ r - r_W. Another important observation is that |0| ≥ |X ∩ W_1| ≥γ |X|, since X ∩ W_1 ⊆0. It follows from (<ref>) and (<ref>) that the sum of (<ref>) and (<ref>) is at least r - r_W/2(|X| - |0|) + |Z| ( |0| - γ|X|) ≥ (1 - γ) (r - r_W)/2|X|. Replacing (<ref>) in our lower bound for the size of X + T, we obtain |X + T| ≥ (1 - γ) (t + 1)/2|X| + (1 - γ) (r - r_W)/2|X| - η |Z| |X| and using that r_W ≤ t + 1/η by (<ref>), yields |X + T| ≥ (1 - γ) (r + 1)/2|X| - (1 - γ)/2η|X| - η|Z||X|. It is now enough to determine that (1 -γ) 1/2η + η |Z| ≤ 2 γ (r + 1) as replacing it in (<ref>) gives the desired bound: |X + T| ≥ (1 - 5 γ) r + 1/2|X|. To obtain (<ref>), observe that η |Z| ≤γ (r + 1) follows from our choice of η = γ^2 and our assumption that, in the current case, |Z| ≤ (r + 1)/γ. Moreover, the following holds 1/2 η < γ (r + 1) since γ^3 r ≥ 1 by assumption. The proof is complete by substituting (<ref>) and (<ref>) in (<ref>). § OFFSETTING THE LOSS OF THE ZERO FIBRE: PROOF OF PROPOSITION <REF> As we remarked in the previous section, we offset the loss of the zero fibre by adding two carefully selected points from each non-empty fibre to the final set of translates. These points are the maximiser and minimiser of the linear function x ↦x, u^* in each fibre. Let Z = {z_1, …, z_m}. For each z_i ∈ Z, we choose y_i^+, y_i^- ∈z_i to be, respectively, a maximiser and a minimiser of y ↦y, u^* in z_i, and set Y_i := {y_i^+, y_i^-}. Recall that Z = Z(X, W) is now defined as the projection of X onto W^⊥, where X and W come from the statement of the stmt:phase-2. Similarly to the proof of <Ref>, we will define “positive” and “negative” parts of 0 as 0_+ = {x ∈0 : x, u^* > 0} and 0_- = {x ∈0 : x, u^* < 0}, which complement the “null part” 0_* = {x ∈0 : x, u^* = 0} defined in the statement and complete a partition of the zero fibre. Each pair of minimiser and maximiser we put in the set of translates will add to the sumset a translated copy of the sets 0_+ and 0_-. As they form a partition of 0∖0_*, this will correspond to adding |0| - |0_*| elements to the sumset for each of the m = |Z| non-empty fibres. Showing this only requires the following simple geometric observations. The first says that the translates 0 + Y_i and 0 + Y_j are disjoint if Y_i and Y_j lie in distinct fibres. For all i, j ∈{1, …, m}, if i ≠ j, then (0 + Y_i) ∩(0 + Y_j) = ∅. Note that 0 + Y_i ⊆ [z_i], and recall from (<ref>) that [z_i] ∩ [z_j] = ∅. The second obs:pos-and-neg-disjointness says that the positive part of 0 translated by the fibre maximiser y_i^+ is disjoint from its negative part translated by the corresponding fibre minimiser. Its proof is essentially contained in the proof of <Ref>. For all i ∈{1, …, m}, we have (y_i^+ + 0_+) ∩(y_i^- + 0_-) = ∅. Note that for any a ∈0_+, b ∈0_-, we have y_i^+ + a, u^* > y_i^+, u^*≥y_i^-, u^* > y_i^- + b, u^*. Hence, y_i^+ + a ≠ y_i^- + b for every a ∈0_+ and b ∈0_-, which implies (<ref>). The final obs:empty-intersections says that the original set translated by the “null part” of 0 does not intersect the positive part of 0 translated by the fibre maximiser y_i^+, and that the analogous statement holds for the negative part and the fibre minimiser y_i^-. For all i ∈{1, …, m}, we have (X + 0_*) ∩(y_i^+ + 0_+) = (X + 0_*) ∩(y_i^- + 0_-) = ∅. Recall that X = ⋃_j = 1^m z_j is a partition. Whenever z_j and y_i^+ are in distinct fibres, we have that z_j + 0_* and y_i^+ + 0_+ are disjoint by (<ref>), z_j + 0_* ⊆ [z_j] and y_i^+ + 0_+ ⊆ [z_i]. We now consider the case when they are in the same fibre. Take a ∈0_+, c ∈0_* and y ∈z_i. Note that a, u^* > 0 since a ∈0_+, and c, u^* = 0 as c ∈0_*. Then c + y ≠ a + y_i^+, because a + y_i^+, u^* > c + y_i^+, u^*≥c + y, u^*, and this completes the proof in the y_i^+ case. The proof in the y_i^- case is analogous. We are now ready to prove <Ref>. Recall that m = |Z|, where Z = Z(X, W), and Y_i = {y_i^+, y_i^-} consists of a maximiser and minimiser of y ↦y, u^* in z_i for each z_i ∈ Z. Y_i is well defined because Z is the set of non-empty fibres, although it may be a singleton for example if |z_i| = 1. We define T” to be the following set with (trivially) at most 2m elements T” = ⋃_i = 1^m Y_i. To avoid cumbersome notation, we will first prove the weaker inequality |0 + T”| ≥∑_i = 1^m |y_i^+ + 0_+ | + | y_i^- + 0_- |, and then show that the same steps can be applied removing the set X + 0_* in the left-hand side to obtain the desired bound. As T” is the union of the Y_i, we use <Ref> to obtain | 0 + T”| = ∑_i = 1^m | 0 + Y_i |. Now, because 0_+, 0_* and 0_- partition 0, we may decompose, for each Y_i, the term in (<ref>) as | 0 + Y_i | = | ( 0_+ + Y_i ) ∪( 0_* + Y_i ) ∪( 0_- + Y_i ) |, which, ignoring the term corresponding to 0_* and the “mixed-sign” terms, 0_- + y_+ and 0_+ + y_-, yields | 0 + Y_i | ≥| ( 0_+ + y_i^+ ) ∪( 0_- + y_i^- ) |. We can now apply <Ref>, which implies that the right-hand side of (<ref>) is at least | ( 0_+ + y_i^+ ) ∪( 0_- + y_i^- ) | = | 0_+ + y_i^+ | + | 0_- + y_i^- |, hence establishing (<ref>) via (<ref>). To obtain the desired bound, we now show that the same steps can be applied removing the set X + 0_* in the left-hand side of (<ref>). First, (<ref>) holds if we remove X + 0_* from the set in the left-hand side and those in the sum in the right-hand side because removing a set cannot create intersections in disjoint sets. We then repeat the steps in (<ref>) and (<ref>) – they are also possible regardless of the set removal – and, upon reaching (<ref>), we again use that disjointness is preserved under set removal. This gives us | (0 + T”) ∖( X + 0_* ) | ≥∑_i = 1^m | ( y_i^+ + 0_+ ) ∖( X + 0_* ) | + | ( y_i^- + 0_- ) ∖( X + 0_* ) |. We are now in a position to use <Ref> to deduce that the set we removed is disjoint from the ones in the right-hand side above, and recover the lower bound prior to the removal | (0 + T”) ∖( X + 0_* ) | ≥∑_i = 1^m |y_i^+ + 0_+ | + | y_i^- + 0_- | = m ( | 0_+ | + | 0_- | ). Again using that 0_- ∪0_* ∪0_+ is a partition of 0, we get m ( |0_+| + |0_-|) = m ( |0| - |0_*|), as required to complete the proof. § WEIGHTED 'S LEMMA: PROOF OF PROPOSITION <REF> The final piece we need to prove <Ref> is detailing how to choose T' ⊆ X such that |T'| ≤ |Z| and (<ref>) holds, and how to prove a variant of 's lemma where the size of the original, unprojected set X appears in the lower bound, instead of simply |Z|. We will first prove a weaker, insufficient statement to make the reader comfortable with the notation and ideas in the proof of <Ref>. Our goal here is to show that we can choose T' ⊆ X such that |T'| ≤ |Z| and[Notice that (w - Z) ∩ Z = {z ∈ Z : ∃ z' ∈ Z such that z + z' = w}.] |X + T'| ≥∑_w ∈ Z + Zmax_z ∈ (w - Z) ∩ Z| z|. Before proceeding, it will be useful to let Z = {z_1, …, z_m}. Here (and in the proof itself), we will take T' = {y_1, …, y_m}, where, for each i ∈{1, …, m}, y_i ∈z_i is arbitrary. It is immediate that the size of T' is appropriate, that is, |T'| ≤ |Z|, so we must show that it also satisfies (<ref>). Having defined T', we start this warm-up by partitioning X + T' into fibres of Z + Z: X + T' = ⋃_w ∈ Z + Z [w] ∩ (X + T'). As (<ref>) defines a partition, we also have | ⋃_w ∈ Z + Z( [w] ∩ (X + T') ) | = ∑_w ∈ Z + Z| ( [w] ∩ (X + T') ) |. Note also that if w = z_i + z_j ∈ Z + Z, then z_i + y_j ⊆( [w] ∩ (X + T') ) since z_i⊆ X and y_j ∈z_j∩ T'. Our approach will be to count only the elements in a single (translated) fibre z_i + y_j and ignore the rest of [z_i + z_j] ∩ (X + T') – by (<ref>) and (<ref>), this is a valid lower bound for |X + T'|. As the union in (<ref>) ranges over all w ∈ Z + Z, we can pick any possible representation z_i + z_j for w. Our choice will be the one for which |z_i| is as large as possible, resulting in ∑_w ∈ Z + Z| ( [w] ∩ (X + T') ) | ≥∑_w ∈ Z + Zmax_z ∈ (w - Z) ∩ Z| z|, since | z + y | = | z|. This completes the proof of (<ref>). In the proof of <Ref>, the statement that is analogous to (<ref>) is |(X + T') ∖(X + 0) | ≥∑_w ∈ (Z^* + Z^*) ∖ Zmax_z ∈ (w - Z^*) ∩ Z^*| z|, where Z^* = Z ∖{0} – in words, we “remove the 0 from Z” before doing the sumset. The proof of (<ref>) is essentially the same as the warm-up above, but that is still not enough. We must have a good lower bound for its right-hand side in order to complete a proof of <Ref>. To prove a lower bound for (<ref>), we will use the following classical result of <cit.>. For a convex polytope K, its skeleton graph (K) has vertex set corresponding to the vertices of K, and its edges are pairs of vertices that lie in a one-dimensional face of K. If K ⊆^d is a d-dimensional convex polytope, then (K) is d-vertex connected. That is, one must remove at least d vertices to disconnect (K). <Ref>, below, is the lower bound we need for the right-hand side of (<ref>): when applying it to prove <Ref>, we will set U = Z^* and f(z) = | z|. The proof of the stmt:weighted-freiman is similar to traditional proofs of 's lemma, but we must take into account the weight f(u) of each u ∈ U when selecting vertices of (U), the convex hull of U, instead of choosing an arbitrary vertex[Hence the name “weighted 's lemma”.]. Let d ∈. For every finite U ⊆^d and f: U →_+, we have ∑_w ∈ U + Umax_u ∈ (w - U) ∩ U f(u) ≥(U) + 1/2∑_u ∈ U f(u). We prove the stmt:weighted-freiman by induction on |U|. In the base case, we have an empty set, so the left-hand side of (<ref>) is equal to 0, as is the right-hand side. We can therefore assume that for every U' ⊊ U, we have (<ref>). For the induction step, we proceed similarly to the standard proof of 's lemma. The main difference is that instead of choosing an arbitrary element from V, the vertices of (U), we must choose a vertex considering the weight function f. To be precise, we choose a vertex v ∈ V such that f(v) ≤1/s + 1∑_u ∈ U f(u), where s = (U), noting that such a choice exists by the pigeonhole principle, (s + 1) min_v ∈ V f(v) ≤ |V| min_v ∈ V f(v) ≤∑_v ∈ V f(v) ≤∑_u ∈ U f(u). We fix one such v ∈ V, define U' = U ∖{v} and divide the remainder of the proof based on whether or not (U') = s. Consider first the case (U') = s. The induction hypothesis implies that ∑_w ∈ U' + U'max_u ∈ (w - U) ∩ U f(u) ≥∑_w ∈ U' + U'max_u ∈ (w - U') ∩ U' f(u) ≥s + 1/2∑_u' ∈ U' f(u') since U' ⊆ U. Therefore, by (<ref>), we can accomplish our goal by finding a set S ⊆ (U + U) ∖ (U' + U') such that ∑_w ∈ Smax_u ∈ (w - U) ∩ U f(u) ≥s + 1/2 f(v). In order to define our candidate set for S, let H = ((U)) and let N_H(v) be the neighbourhood of v in the graph H. We claim that v + v, where v = N_H(v) ∪{v}, is disjoint from U' + U' and satisfies (<ref>), and is therefore a suitable choice for S. To see that v + v and U' + U' are disjoint, first notice that 2v ∉U' + U' follows from v being a vertex of (U). For the remaining elements of v, v' ∈ N_H(v), v' + v is not in U' + U' because (v + v')/2 is a midpoint of the segment connecting v and v', and this midpoint clearly lies outside (U'). It remains to show that (<ref>) holds with S = v + v. Observe that if w = v' + v for some v' ∈v, then v ∈ (w - U) ∩ U. Hence, ∑_w ∈v + vmax_u ∈ (w - U) ∩ U f(u) ≥∑_w ∈v + v f(v) = |v| f(v) ≥ (s + 1) f(v), where in the last inequality we used that H has mininum degree at least s, by <Ref>. Combining (<ref>) and (<ref>), this completes the induction step in the case (U') = s. We may therefore assume that (U') = s - 1; that is, the removal of the vertex v decreases the rank of U. It will be useful for this case to recall (<ref>), our criterion for the choice of v, in the following (trivially) equivalent form: ∑_u ∈ U f(u) ≥ (s + 1) f(v). Applying our induction hypothesis to U' yields ∑_w ∈ U' + U'max_u ∈ (w - U') ∩ U' f(u) ≥s/2∑_u' ∈ U' f(u'), so, to deduce the bound (<ref>), it is enough to add to (<ref>) the term (1/2∑_u' ∈ U' f(u')) + s + 1/2 f(v) using elements of (U + U) ∖ (U' + U'). In order to add (<ref>) to (<ref>), we argue that U' + U', {2v} and U' + v are disjoint. This follows from our assumption that (U') < (U), which implies that v ∉𝒰, where 𝒰 is an affine subspace with dimension (U') containing U'. We can therefore conclude that ∑_w ∈ U + Umax_u ∈ (w - U) ∩ U f(u) ≥s/2∑_u' ∈ U' f(u') + f(v) + ∑_w ∈ U' + vmax_u ∈ (w - U) ∩ U f(u). Our first observation towards bounding the right-hand side of (<ref>) is that ∑_w ∈ U' + vmax_u ∈ (w - U) ∩ U f(u) ≥∑_u' ∈ U' f(u') because if w = u' + v for some u' ∈ U', then u' ∈ (w - U) ∩ U. It will therefore suffice to show that f(v) + ∑_u' ∈ U' f(u') ≥(1/2∑_u' ∈ U' f(u')) + s + 1/2 f(v), or, equivalently, 2f(v) + ∑_u' ∈ U' f(u') ≥ (s + 1) f(v). We now use that our choice of v satisfies (<ref>), which implies that 2f(v) + ∑_u' ∈ U' f(u') ≥∑_u ∈ U f(u) ≥ (s + 1) f(v), and hence completes the proof of the induction step. With <Ref>, we are now ready to prove <Ref>. Our first claim is that finding a T' ⊆ X such that |T'| ≤ |Z| and |(X + T') ∖(X + 0) | ≥∑_w ∈ (Z^* + Z^*) ∖ Zmax_z ∈ (w - Z^*) ∩ Z^*| z|, where Z^* = Z ∖{0}, is enough to complete the proof. To see that, apply <Ref> with U = Z^* and f(z) = | z| to get the lower bound ∑_w ∈ Z^* + Z^*max_z ∈ (w - Z^*) ∩ Z^*| z| ≥(Z^*) + 1/2∑_z ∈ Z^*| z| ≥r - r_W/2(|X| - | 0|), where in the last equality we have used that Z^* = Z ∖{0}, X = ⋃_z ∈ Zz and (Z^*) ≥ r - r_W - 1, by our assumptions that Z = Π_W^⊥(X), (X) ≥ r and r_W = (W). However, the left-hand side of (<ref>) is considering elements w ∈ Z that are not in the sum in (<ref>). This is not an issue because ∑_w ∈ Zmax_z ∈ (w - Z^*) ∩ Z^*| z| ≤η |Z| |X| follows from our assumption that | z| ≤η |X| for all z ∈ Z^*. We obtain the claim that T' as specified finishes the proof by substituting (<ref>) and (<ref>) into (<ref>): ∑_w ∈ (Z + Z) ∖ Zmax_z ∈ (w - Z^*) ∩ Z^*| z| ≥r - r_W/2(|X| - | 0|) - η |Z| |X|. As it now suffices to find T' ⊆ X such that |T'| ≤ |Z| and (<ref>) holds, we simply need to repeat the proof of (<ref>) given in the warm-up, but with the set X + 0 removed. That is, let Z^* = {z_1, …, z_m}, and define T' = {y_1, …, y_m} where each y_i is an arbitrary element of z_i for i ∈{1, …, m}. The first step to show that T' satisfies (<ref>) is partitioning X + T' into fibres of Z + Z to obtain | (X + T') ∖(X + 0) | = ∑_w ∈ Z + Z| ( [w] ∩ (X + T') ) ∖( X + 0) |. In order to handle the set removal, we claim that if w ∉Z, then the sets [w] and X + 0 are disjoint, and therefore ∑_w ∈ Z + Z| ( [w] ∩ (X + T') ) ∖( X + 0) | ≥∑_w ∈ (Z + Z) ∖ Z| [w] ∩ (X + T') |. To see this, simply note that if x ∈ X and x' ∈0, then Π_W^⊥(x + x') = Π_W^⊥(x) + Π_W^⊥(x') ∈ Z + 0 = Z by the definitions of 0 and Z, and therefore Π_W^⊥(X + 0) ⊆ Z. Having established (<ref>), we can proceed (almost) like in the warm-up. Restricting our attention to Z^* + Z^* ⊆ Z + Z, observe that every w ∈ Z^* + Z^* can be written as w = z + z' where z, z' ∈ (w - Z^*) ∩ Z^*. For each w ∈ Z^* + Z^*, then, we pick the pair (z, z') ∈ (Z^*)^2 satisfying (<ref>) that maximises | z|. Let y be the (unique) element of T' ∩ [z'], and note that, as z + y ⊆ [w] ∩ (X + T'), we can conclude that |[w] ∩ (X + T')| ≥max_z ∈ (w - Z^*) ∩ Z^*| z|. Replacing (<ref>) into (<ref>) yields that (<ref>) holds for T' ∑_w ∈ (Z + Z) ∖ Z| [w] ∩ (X + T') | ≥∑_w ∈ (Z^* + Z^*) ∖ Zmax_z ∈ (w - Z^*) ∩ Z^*| z|, and the proof therefore follows from our first claim. § THE SUPERSATURATION RESULT This sec:supsat is dedicated to the proof of the following theorem, restated for convenience: * Now that we have <Ref>, proving <Ref> is simple. Assume that Y is small, that is, |Y| ≤ (1 - γ)(d + 1)|X|/2; our goal is to show that it misses many pairs of X^2. If we can find |X| elements x^(i)∈ X such that Y contains at most a 1 - c proportion of X + x^(i), we are done. To do that, we will use <Ref> to find a small set T ⊆ X such that |X + T| - |Y| ≳γ (d + 1)|X|. By the pigeonhole principle, it follows that there exists an x^(i)∈ T such that Y misses many elements of X + x^(i). We can then remove x^(i) from X and repeat the process until the dimension of X drops below its original value (removing the x^(i) is a simple way to avoid using the same element twice while keeping our working set large). As we assumed that X has -robust dimension d, this can only happen after we have removed |X| translates. This is essentially the proof except for the fact that <Ref> requires X ⊆^d, and the set in <Ref> is a subset of . To handle this, we use isomorphisms, relying on the fact that if ϕ is a isomorphism, then |X_1 + X_2| = |ϕ(X_1) + ϕ(X_2)|. Assume that[We use γ' instead of γ because its value is (slightly) less than the γ in the application of <Ref>.] |Y| ≤ (1 - γ')(d + 1)|X|/2. We claim that there exists a sequence of distinct elements x^(1), …, x^(t)∈ X, where t = |X| / 4, such that the sets X_i = X ∖{x^(1), …, x^(i)} have the following two properties d_i = (X_i) ≥ d and |(X_i + x^(i + 1)) ∖ Y| ≥ 4 c|X|. The first property holds because, for all i ≤ t, |X_i| ≥ |X| - i ≥ |X| - t = ( 1 - /4 ) |X|, so we still have (X_i) ≥ d, since X has -robust dimension d. To prove that X_i satisfies the second property in (<ref>), we will show how to select each x^(i + 1). That is, assume that we have distinct translates {x^(1), …, x^(i)} such that the set X_i satisfies (<ref>). Since (X_i) = d_i ≥ d, there exists a isomorphism ϕ_i : X_i → X_i' such that X_i' ⊆^d_i has full rank. If γ' > 2^6 d^-1/3, then we can apply <Ref> to the set X_i' with r = d and γ = 2^-6γ', to obtain a set T_i' ⊆ X_i' such that |T_i'| ≤2^6 (d + 1)/γ' and |X_i' + T_i'| ≥(1 - γ'/4) (d + 1)|X_i'|/2. Otherwise, we have γ' ≤ 2^6 d^-1/3, d ≤ 2^18γ'^-3. In this case, we can[As it is stated, <Ref> can only be used for d = d_i, but that could result in too many translates. To circumvent this issue, we can randomly project the set to ^d, and apply <Ref> to the projected set instead, using the randomness to avoid collisions.] apply <Ref> and obtain T_i' = {x_1, …, x_C}⊆ X_i' such that |X_i' + T_i'| ≥ (d + 1)|X_i'| - 5(d + 1)^3 ≥(d + 1)|X_i'|/2, for some constant C = C(γ'), since X_i satisfies (<ref>), X is sufficiently large and d ≤ 2^18γ'^-3. Therefore, it follows that we have a set of translates T_i' of size at most 2^-6γ' (d + 1)/c for some constant c = c(γ') > 0, in either case. As ϕ_i is a isomorphism, we know that the preimage T_i = ϕ_i^-1(T_i') satisfies |X_i + T_i| ≥( 1 - γ'/4) (d + 1)|X_i|/2 = ( 1 - γ'/4 ) (d + 1)(|X| - i)/2 and |T_i| ≤ 2^-6γ' (d + 1)/c. Since i ≤ t = |X| / 4 < γ' |X| / 4, this is at least |X_i + T_i| ≥(1 - γ'/4)^2 (d + 1)|X|/2≥(1 - γ'/2) (d + 1) |X|/2. Recalling our assumption |Y| ≤ (1 - γ') (d + 1)|X|/2, it follows that |(X_i + T_i) ∖ Y| ≥ |X_i + T_i| - |Y| ≥γ' (d + 1) |X|/4. Thus, by the pigeonhole principle, there exists x^(i + 1)∈ T_i such that |(X_i + x^(i + 1)) ∖ Y| ≥γ' (d + 1) |X|/4 ( γ' (d + 1)/2^6 c)^-1 > 4 c|X|, since |T_i| ≤ 2^-6γ' (d + 1)/c and T_i ⊆ X_i. Repeating this for each i from 1 to t thus yields the sets X_i and {x^(1), …, x^(t)} satisfying (<ref>). Now that we have the sets X_i, note that each X + x^(i) contributes 4 c |X| ordered pairs whose sum are not in Y. This gives us a total of 4 c |X| t ≥ c |X|^2 such pairs, because t = |X| /4. § AN UPPER BOUND FOR THE INDEPENDENCE NUMBER In this section, we prove the upper bound part of our main result, <Ref>: Let n be a prime number and let p = p(n) satisfy p ≥ (log n)^-1/80. The random Cayley sum graph Γ_p of satisfies α(Γ_p) ≤(2 + o(1)) log_1/1 - p n with high probability as n →∞. Throughout, we fix a small enough δ > 0 and k = (2 + 4δ) log_1/1 - pn. We will also follow the outline presented in <Ref> and use the notation defined there. Each sub-collection _i requires different techniques to bound the probability that α(Γ_p) > k, so we handle each separately and show all three go to 0 as n →∞. §.§ Bounding the probability over choices in _1 A brief recap: in this doubling range, there are far too many choices for X ∈_1 for a union bound to work. The key observation is that we do not need to count every such X: if somehow going to a smaller subset Λ⊆ X reduces our choices considerably, this would also be enough, since the event {ΛΛ⊆∁A_p} contains {X X ⊆∁A_p}. As a matter of fact, we will use this idea twice. The first time we use this idea is in order to replace each set X ∈_1 by a large subset X' ⊆ X whose dimension is robust in the sense required by <Ref>, our supersaturation result. We build the set X' from X by greedily removing small, “bad” subsets B ⊆ X such that removing them from X reduces its dimension. Each such removal reduces (X) by at least one, so we finish in at most (X) steps. A step reduces the size of the working set by at most |X|, which would already give |X'| ≥ (1 - (X) ) |X|. To get a bound that depends on σ instead of on (X), we use the following trivial consequence of 's lemma, <Ref>, which we record for ease of reference later. Let X ⊆ with σ[X] ≤σ. Then, (X) + 1 ≤ 2 σ. Let d = (X), and take X' ⊆^d of full rank to be the image of X under a isomorphism ϕ. Applying 's lemma to X' yields |X' + X'| ≥ (d + 1)|X'| - d + 12 but |X' + X'| ≤σ |X'| and |X'| = |X| by our choice of ϕ. Dividing both sides by |X| yields σ≥ d + 1 - d + 1/2 where we used |X| ≥ d to simplify the right-hand side. The proof follows by rearranging. The proof of the following stmt:robust-subset is just formalizing the sketch using <Ref>. Let X ⊆ with σ[X] ≤σ and let < 1/2σ. There exists d ∈ and X' ⊆ X such that |X'| > (1 - 2 σ) |X| and X' has -robust dimension d. We start an iterative process with X_0 = X and i = 0. While there exists B ⊆ X_i such that |B| ≤ |X| and (X_i ∖ B) < (X_i), we define X_i + 1 = X_i ∖ B. Let t be the number of steps in this process, and let X' = X_t. First, notice that t ≤(X) since each step reduces (X_i) by at least one. We argue that X' has -robust dimension by considering two cases. If t < (X), then X_t has -robust dimension (X_t). Otherwise, we have stopped when (X') = 1, and such sets trivially have -robust dimension 1. Since |X_i| ≥ |X| - i |X|, it follows that |X'| ≥ (1 - t) |X| ≥ (1 - (X)) |X| > (1 - 2 σ) |X|, where in the last inequality we used <Ref>. The second time we employ the idea of going to smaller subsets is when we use the fingerprints given by <Ref>, whose statement we repeat here for convenience. stmt:fingerprints-repeated* As we remarked in the overview, applying the previous stmt:fingerprints to the (trivial) generalised arithmetic progression would result in too many fingerprints. To overcome this, we will use <Ref> below, which is a simple consequence of the following strengthening of the Green–Ruzsa theorem due to <cit.>. There exists C > 0 such that the following holds. Let G be an Abelian group, and let X ⊆ G be a finite set with σ[X] ≤κ. Either there exists a proper coset progression P + H such that X ⊆ P + H, (P + H) ≤ 2 κ, and |P + H| ≤exp(C κ^4 (log (κ + 2))^2) |X| or X is fully contained in at most C κ^3 (logκ)^2 cosets, whose total cardinality is bounded by exp(C κ^4 (log (κ + 2))^2)|X|, of some subgroup of G. As our group of interest is , a proper coset progression P + H is either all of , or simply a proper generalised arithmetic progression P + a = P'. We can therefore deduce the following stmt:green-ruzsa-with-dim(P)<=dimF(X). There exists C > 0 such that the following holds. Let n ∈ be a prime, and let κ≥ 2. If X ⊆ satisfies σ[X] ≤κ and C κ^3 (logκ)^2 < |X| < exp(-C κ^4 (logκ)^2) n, then there is a proper generalised arithmetic progression P ⊆ such that X ⊆ P, |P| ≤exp(C κ^4 (logκ)^2 ) |X| and (P) ≤(X). As contains no non-trivial subgroups when n is a prime, we claim that we cannot obtain the second case when applying <Ref> to X. None of the cosets obtained in the second outcome can be itself, since, by (<ref>), its cardinality alone would exceed exp(C κ^4 (log (κ + 2))^2)|X|. The claim follows by noting that the cosets cannot also all be of the form a ∈, since their total number is bounded by C κ^3 (logκ)^2, and, by (<ref>), this is insufficient to cover X. We have shown that we always get the first outcome of <Ref> under our assumptions, so applying it to X yields a proper coset progression P' + H ⊆ such that X ⊆ P' + H and |P' + H| ≤exp(C κ^4 (logκ)^2) |X|, where we used that κ≥ 2 to turn log(κ + 2) into logκ for a slightly larger value of C. Moreover, P” = P' + H is actually a proper generalised arithmetic progression, because exp(C κ^4 (logκ)^2) |X| < n = |P' + | follows from (<ref>), hence H must be a singleton. Now, let P be the proper generalised arithmetic progression obtained as follows: keep only the differences a_j” in P” that have a corresponding x ∈ X ∩ P” such that x = a_0” + ∑_i = 1^(P”) w_i a_i” with w_j”≠ 0. Observe that P trivially satisfies (<ref>). We furthermore claim that d = (P) ≤(X). To see this, let P = { a_0 + ∑_i = 1^d w_i a_i : w_i ∈, 0 ≤ w_i < ℓ_i } and let ϕ : P →ϕ(P) ⊆^d be the function defined by ϕ(y) = (w^(y)_1, …, w^(y)_d) where y = a_0 + ∑_i = 1^d w_i^(y) a_i. Since P is proper, we know that ϕ is a isomorphism; this and X ⊆ P imply that so is ϕ|_X : X →ϕ(X). By the definition of P, we have that ϕ(X) ⊆^d has full rank, and the bound d ≤(X) follows from the definition of dimension. We are now ready to prove that (∃ X ∈_1 : X X ⊆∁A_p) → 0 as n →∞. Recall that k = (2 + 4δ) log_1/1 - pn, and that for all X ∈_1, we know that |X| = k and σ[X] ≤ k^1/40. Fixing = k^-1/20, we can apply <Ref> to X with and σ = k^1/40 to conclude that every such set contains a X' of size at least |X'| ≥ (1 - 2 σ)k ≥ (1 - 2 k^-1/40)k with -robust dimension d_X, for some d_X ∈. Observe that (<ref>) implies that the doubling of X' is at most 2σ: σ[X'] = |X' + X'|/|X'|≤|X + X|/|X'|≤ 2 σ, for all sufficiently large k. We fix[Abusing notation to denote such a mapping via the ' symbol.] one X' for each X ∈_1, and denote by _1' the collection of all such X'. As we remarked before, X' X' ⊆∁A_p is implied by X X ⊆∁A_p for each X ∈_1. Therefore, we have the bound ( ∃ X ∈_1 : X X ⊆∁A_p) ≤( ∃ X' ∈_1' : X' X' ⊆∁A_p). Moreover, let (P) be the collection of subsets Y ⊆ P with (1 - 2 k^-1/40)k ≤ |Y| ≤ k and σ[Y] ≤ 2 σ such that Y has -robust dimension d_Y for some d_Y ≥(P). We claim that we can take another union bound: ( ∃ X' ∈_1' : X' X' ⊆∁A_p ) ≤∑_P ∈()(∃ Y ∈(P) : Y Y ⊆∁A_p ) where () is the collection of generalised arithmetic progressions P ⊆ such that |P| ≤exp(k^1/5). In order to prove (<ref>), it is enough to show that, for every X' ∈_1', there exists a generalised arithmetic progression P ∈() such that X' ∈(P); we will do so by applying <Ref> with κ = 2σ. From this application, we will obtain a generalised arithmetic progression P ∈() such that (P) ≤(X') = d_X and X' ⊆ P. The property P ∈() follows from (<ref>), as |P| ≤exp(C' σ^4 (logσ)^2) k ≤exp(C' k^1/10 (log k)^2 ) k ≤exp(k^1/5), where we used that σ = k^1/40 in the second inequality, and in the last one we used that k is sufficiently large. We now confirm that we can apply <Ref> as we want, by verifying that every X' ∈_1' satisfies (<ref>). The lower bound holds because |X'| ≥ (1 - 2 k^-1/40)k ≥ k^1/10≥ C' σ^3 (logσ)^2, by (<ref>) and using that k is sufficiently large, whereas the upper bound in (<ref>) follows from exp(C' σ^4 (logσ)^2) |X'| ≤exp(k^1/5) ≤exp( (log n)^2/5 ) < n, where we used (<ref>), and that k ≤ (log n)^2 and n is sufficiently large. We now claim that X' ∈(P), where P ∈() is the generalised arithmetic progression given by applying <Ref> to X' ∈_1'. To see this, simply note that the conditions in (<ref>) follow from (<ref>) and (<ref>), and the -robust dimension bound d_X ≥(P) follows from (<ref>) and <Ref>, so this completes the proof of (<ref>). In order to bound the term in the right-hand side of (<ref>), we analyse the contribution of each fixed P ∈(). Notice that if (P) is empty, then the probability term is equal to 0, so we may assume that (P) is non-empty. Rather than directly taking a union bound over choices of Y ∈(P), our final union bound is over a collection of fingerprints (P): ( ∃ Y ∈(P) : Y Y ⊆∁A_p ) ≤( ∃ F ∈(P) : F F ⊆∁A_p ) ≤ |(P)| max_F ∈(P)( F F ⊆∁A_p ). That is true as long as, for each Y ∈(P), there exists F ∈(P) such that F ⊆ Y; again we are using that F F ⊆ Y Y. We claim that applying stmt:fingerprints-repeatedstmt:fingerprints <ref>stmt:fingerprints to P with γ = γ(δ) (to be determined later) and m = 2σ k yields such a collection of fingerprints (P). First, we define a candidate for (P) which proves our claim, and later we show that we can construct this candidate. Our candidate for (P) is defined as (P) = ⋃_Y ∈(P)_|Y|, m, (P) where _|Y|, m, (P) is the collection of fingerprints given by stmt:fingerprints-repeatedstmt:fingerprints <ref>stmt:fingerprints. For (P) to prove our claim, we must thus show that, for each Y ∈(P), there is a F ∈_|Y|, m, (P) such that F ⊆ Y. By stmt:fingerprints-repeatedproperty <ref>item:fingerprint-is-contained-in-X of stmt:fingerprints <ref>stmt:fingerprints, it suffices for each Y to have -robust dimension d_Y ≥(P) and |Y + Y| ≤ m = 2σ k. These, however, hold for Y by (<ref>), and so we have that (P) is a valid candidate of fingerprints for P. To confirm that we can apply stmt:fingerprints-repeatedstmt:fingerprints <ref>stmt:fingerprints to P as in (<ref>), we need to check that m ≥|Y|((P) + 1)/2 for all Y ∈(P). (<ref>) follows from |Y|((P) + 1)/2≤|Y|((Y) + 1)/2≤ |Y + Y| ≤ 2 σ k = m where the first inequality uses that (P) ≤(Y) by definition of (P), the second inequality relies on <Ref> and the third one on (<ref>). Therefore, (<ref>) is a valid definition for (P). We proceed to give an upper bound to the right-hand side of (<ref>). With the goal of first bounding the size of (P), define Φ(P) = max{|F| : F ∈(P)} and note that, trivially, |(P)| ≤∑_q = 0^Φ(P)|P| q≤ (|P| + 1)^Φ(P). Since, by stmt:fingerprints-repeated(<ref>eq:fingerprint-reqs) in stmt:fingerprints <ref>stmt:fingerprints, |F| ≤ C ^-1√(m log m) for all F ∈(P), we can use that m = 2σ k, = k^-1/20 and σ = k^1/40 to obtain |(P)| ≤exp(2 C k^3/5 log |P|) ≤exp(k^4/5) where in the first inequality we used that k is sufficiently large, and in the last we used (<ref>). To obtain an upper bound on (F F ⊆∁A_p) for all F ∈(P), we use stmt:fingerprints-repeated(<ref>eq:fingerprint-reqs), the lower bound on |F F| given by stmt:fingerprints-repeatedstmt:fingerprints <ref>stmt:fingerprints: |F F| ≥(1 - γ)((P) + 1)/2min_Y ∈(P) |Y|. Since γ = γ(δ) is a constant and |Y| ≥ (1 - 2 σ)k for all Y ∈(P) by (<ref>), we obtain max_F ∈(P)( F F ⊆∁A_p ) ≤ (1 - p)^(1 - 2 γ)((P) + 1)k/2, as k is sufficiently large. Replacing (<ref>) and (<ref>) back into (<ref>) yields ( ∃ Y ∈(P) : Y Y ⊆∁A_p ) ≤exp(k^4/5) (1 - p)^(1 - 2 γ)((P) + 1)k/2. This will be our bound for each term in the right-hand side of (<ref>). Observe that (<ref>) does not depend on the specific choice of P, only on its size and dimension. Recalling that |P| ≤exp(k^1/5) =: s for every P ∈() by (<ref>), we can group terms in the right-hand side of (<ref>) based on d = (P) to deduce that, by (<ref>), we have ( ∃ X ∈_1 : X X ⊆∁A_p ) ≤∑_d = 1^∞ s (n s)^d + 1exp(k^4/5) (1 - p)^(1 - 2 γ)(d + 1)k/2, where we have bounded the number of d-dimensional generalised arithmetic progressions in with size at most s by s (n s)^d + 1. It is therefore enough to prove that the right-hand side of (<ref>) goes to 0 as n →∞. Note that, as k = (2 + 4δ) log_1/1 - pn, we have (1 - p)^k/2 = n^-(1 + 2δ), which combined with a suitably small choice of γ = γ(δ), implies that n^d + 1 (1 - p)^(1 - 2γ)(d + 1)k/2 = n^d + 1 -(1 - 2γ)(1 + 2δ)(d + 1)≤ n^-δ (d + 1). The final observation is that it follows from s = exp(k^1/5) that there is 1 > ν > 0 such that s^d + 2exp(k^4/5) ≤exp( (d + 1) (log n)^1 - ν) since k ≤ 3 (log n)/p for δ < 1/4. The value of ν depends on the specific exponent of the log n term in our choice of p. For instance, ν = 0.18 works for p ≥ (log n)^-1/80. Combining (<ref>) with (<ref>), we obtain, for sufficiently large n, the following bound for (<ref>): ∑_d = 1^∞ s (n s)^d + 1exp(k^4/5) (1 - p)^(1 - 2 γ)(d + 1)k/2≤∑_d = 1^∞ n^-δ d / 2 which goes to 0 as n →∞, as we wanted to show. §.§ Bounding the probability over choices in _2 Our goal is to show that the term corresponding to choices in _2 tends to 0 as n →∞. We emphasize that, in this sec:bound-over-X2, no new ideas, or even modification of previous, existing results, are needed. We just need to use the following result of <cit.>. For every k ∈ and m ≥ 2k - 1, there exists r ∈ with r ≤min{4 m/k, k} and r ≤2m/k +1/kr2, such that |_2^(m)| ≤ n^r k^4k, where _2^(m) = {X ∈_2 : |X X| = m}. With <Ref>, we can take a union bound over each X ∈_2 to obtain (∃ X ∈_2 : X X ⊆∁A_p) ≤∑_m = k^1 + 1/40^δ k^2/10|_2^(m)| (1 - p)^m → 0 as n →∞, and we prove the last step below. First, we bound the r in <Ref> using (<ref>): r ≤2 m/k + 1/k( 4m/k )^2 ≤m/k(2 + 2 δ) since sets X ∈_2^(m)⊆_2 satisfy m/k = σ[X] ≤δ k / 10. Now, applying <Ref> to each _2^(m), we obtain that |_2^(m)| ≤ n^r k^4k≤ n^(2 + 2 δ) m / k + (log n)^1/50loglog n and the last inequality follows from k ≤ (log n)^51/50, which is implied by p ≥ (log n)^-1/80. Notice that it follows from k = (2 + 4δ) log_1/1 - p n that (1 - p)^m = n^-(2 + 4 δ) m / k. Together with (<ref>) and the fact that m/k = σ[X] ≥ k^1/40≥ (log n)^1/40, (<ref>) yields |_2^(m)| (1 - p)^m ≤ n^-δ m / k from which the result follows by summing over all k^1 + 1/40≤ m ≤δ k^2/10. §.§ Bounding the probability over choices in _3 Similarly to the previous case, we start with a union bound (∃ X ∈_3 : X X ⊆∁A_p) ≤∑_m = δ k^2/10^k^2|_3^(m)| (1 - p)^m, letting _3^(m) be the sub-collection of _3 consisting of subsets X ⊆ with |X X| = m. We will need the following slight strengthening of Proposition 5.1 of <cit.>, here stated in a way that matches our notation: propappendixprop Let δ > 0 be sufficiently small and let η > 0. If k ≤ (log n)^2 - η and m ≥δ k^2/10, then |_3^(m)| ≤ n^(2 + δ + o(1)) m / k, as k →∞, where _3^(m) = {X ∈_3 : |X X| = m}. The proof of <Ref> is the same as the one given in <cit.>; we only optimize constants and exponents. Therefore, we defer its presentation to <Ref>. Observe that k ≤3 log n/p≤ (log n)^41/40≤ (log n)^2 - η, by our choice of p ≥ (log n)^-1/80. Hence, we can apply <Ref> to bound, for every m ≥δ k^2/10, the size of _3^(m) by |_3^(m)| ≤ n^(2 + δ + o(1))m/k≤ n^(2 + 2 δ) m / k where the last inequality holds for large k. We can bound each term of (<ref>) by: |_3^(m)| (1 - p)^m ≤ n^(2 + 2 δ) m / k n^-(2 - 4δ) m / k≤ n^- δ m / k. Summing over m ≥δ k^2/10 yields the desired result. § THE LOWER BOUND In this sec:lower-bound, we prove the lower bound in <Ref>. Let n be a prime number and let p = p(n) satisfy 1/2 ≥ p ≥ n^-o(1). The random Cayley sum graph Γ_p of satisfies α(Γ_p) ≥(2 + o(1)) log_1/1 - p n with high probability as n →∞. The proof of this result is significantly easier than the upper bound. In fact, using only the pseudorandom properties of Γ_p is enough to obtain α(Γ_p) ≥(1/2 + o(1))(log n)/p (see <cit.> and <cit.>). To improve the leading constant to 2, we use both the randomness (as opposed to only the pseudorandomness) and the fact that we can restrict our attention to any sub-collection of potential independent sets in Γ_p. More precisely, for each k ∈, we define Z_k to be the random variable counting all independent k-sets in Γ_p with maximal doubling, that is Z_k = |{X ∈_k: X X⊂ A_p^c}| where _k = {X ⊆ : |X| = k, |X X| = k2}. If Z_k > 0, then α(Γ_p) ≥ k regardless of the potential independent k-sets that Z_k overlooks. In order to prove the lower bound, it is enough to show that (Z_k) = o([Z_k]^2) since (α(Γ_p) ≥ k) ≥(Z_k > 0) ≥ 1 - (Z_k)/[Z_k]^2, by Chebyshev's inequality. The first step is to estimate [Z_k], which we do by showing that _k is large and using linearity of expectation. For each k ∈ such that k = o(n^1/4), we have |_k| = (1 - o(1)) nk. We will (equivalently) show that almost all X ⊆ with |X| = k satisfy |X X| = k2. Observe that if |X X| < k2, then there are distinct x_1, x_2, x_1', x_2' ∈ X such that x_1 + x_2 = x_1' + x_2'. Motivated by that observation, define = {{x_1, x_2, x_1', x_2'}⊆ : x_1 + x_2 = x_1' + x_2', and x_1, x_2, x_1', x_2' are distinct}, and let Y be a uniformly random k-set in . Taking a union bound over yields (Y ∉_k) ≤∑_Q ∈(Q ⊆ Y) ≤ n^3 (k/n)^4 = k^4/n, where the second inequality is due to the choices of x_1, x_2 and x_1' determining x_2' = x_1 + x_2 - x_1' for {x_1, x_2, x_1', x_2'}∈. Since k = o(n^1/4), the stmt:size-of-Zk follows. The second stmt:var that we need to prove <Ref> gives a bound on (Z_k). For every k ∈ and p = p(n) ∈ (0, 1), we have (Z_k) ≤[Z_k] + nk∑_s = 1^k k^3 snk - s (1 - p)^2 k2 - k s/2. The main step in the proof of <Ref> is to show, for every X ∈_k, that ∑_Y ∈_k Y ∼ X (1 - p)^|(X X) ∪ (Y Y)|≤∑_s = 1^k k^3 snk - s (1 - p)^2 k2 - k s/2 where Y ∼ X if Y ≠ X and (X X) ∩ (Y Y) ≠∅. In order to do that, define (X, Y) = {Y' ⊆ Y : (X X) ∩ (Y' Y') = ∅}, and ^*(X, Y) = {Y' ∈(X, Y) : (y + Y') ∩ (X X) ≠∅ for all y ∈ Y ∖ Y'}. The definition of ^*(X, Y) is motivated by the following stmt:size-of-X+X-cap-Y+Y, which gives an upper bound on | (X X) ∩ (Y Y) |: Let k ∈ and X, Y ∈_k. If Y ∼ X, then |(X X) ∩ (Y Y)| ≤k (k - t)/2, where t = min{|Y'|: Y' ∈^*(X, Y)}. To prove <Ref>, we will need the following simple observation about graphs. Let G be a graph with k vertices. If all maximal independent sets of G have at least t vertices, then G has at most k (k - t) / 2 edges. Take v ∈ V(G) to be a vertex of maximum degree, and I_v ⊆ V(G) to be a maximal independent set containing v. Then, t ≤ |I_v| ≤ k - d(v) = k - Δ(G). Thus, Δ(G) ≤ k - t, and the claimed bound follows. To deduce <Ref>, we will apply <Ref> to Γ_X X[Y], the Cayley sum graph over X X restricted to the vertex set Y. We claim that |(X X) ∩ (Y Y)| = e(Γ_X X[Y]), which not only implies that the collection ^*(X, Y) is exactly the collection of maximal independent sets in Γ_X X[Y], but also reduces the proof to applying <Ref> with t to Γ_X X[Y]. Therefore, we first establish (<ref>), and then the characterization of ^*(X, Y). To show (<ref>), recall that, by the definition of the Cayley sum graph, y_1 y_2 ∈ Y2 is an edge of Γ_X X[Y] if and only if y_1 + y_2 ∈ X X. We obtain equality by observing that each sum in Y Y corresponds to exactly one pair y_1 y_2 ∈ Y2, since |Y Y| = k2 by Y ∈_k. It follows from (<ref>) that (X, Y) corresponds to independent sets in Γ_X X[Y]. Moreover, for every Y^* ∈^*(X, Y), we know that there are no Y' ∈(X, Y) such that Y^* ⊊ Y' by the condition in (<ref>). Our definition of t then corresponds to all maximal independent sets in Γ_X X[Y] having at least t vertices, so we can apply <Ref> as desired. With <Ref> in hand, the proof of (<ref>) now relies on an efficient count of Y ∈_k with Y ∼ X, for each fixed X ∈_k. Notice that ^*(X, Y) is not empty, as every graph has at least one maximal independent set, so we can fix a set Y^* ∈^*(X, Y) of minimum size. Our counting strategy is to consider the choices for elements in Y^* and then the choices for Y ∖ Y^*; in fact, a trivial count of all sets Y^* ⊆ suffices, so we only need to count the possible elements in Y ∖ Y^* efficiently. In order to bound the choices for Y ∖ Y^*, notice that for every y ∈ Y ∖ Y^*, we have (y + Y^*) ∩ (X X) ≠∅ by (<ref>). It follows that there are y^* ∈ Y^* and distinct x_1, x_2 ∈ X such that y + y^* = x_1 + x_2, or, equivalently, Y ∖ Y^* ⊆ X X - Y^*. We can therefore choose the elements of Y ∖ Y^* from a set of size at most |X X - Y^*| ≤ |X|^2 |Y^*| ≤ k^3, so there are at most k^3s choices for Y ∖ Y^* if s = |Y ∖ Y^*|. Bounding the number of choices for Y^* by nk - s yields the bound in (<ref>), except for the (1 - p)^2 k2 - k s / 2 term, which we obtain by using <Ref>. We now have all the ingredients to prove <Ref>. Observe that, via standard calculations, we have that (Z_k) ≤[Z_k] + ∑_X ∈_k∑_Y ∈_k Y ∼ X (1 - p)^|(X X) ∪ (Y Y)|, where, recall, Y ∼ X was defined in (<ref>). We therefore need to show that ∑_Y ∈_k Y ∼ X (1 - p)^|(X X) ∪ (Y Y)|≤∑_s = 1^k k^3 snk - s (1 - p)^2 k2 - k s/2 for each X ∈_k. To prove (<ref>), fix X ∈_k and, for each Y ∈_k with Y ∼ X, choose a set Y^* = Y^*(X, Y) of minimum size. If we group the sets Y by the size of their corresponding Y^*, we can count them by first enumerating the choices for Y^* and then the choices for Y ∖ Y^*. For fixed s = |Y ∖ Y^*| = k - |Y^*|, there are at most nk - s choices for Y^*, and there are at most k^3 s choices for Y ∖ Y^* since Y ∖ Y^* ⊆ X X - Y^*. We then bound the size of the union in (<ref>) by applying <Ref> to the pair (X, Y) |(X X) ∩ (Y Y)| ≤k s/2 which means that we can ignore the case s = 0 because it would contradict Y ∼ X. A trivial inclusion-exclusion now yields the bound we need for the size of the union: |(X X) ∪ (Y Y)| = 2 k2 - |(X X) ∩ (Y Y)| ≥ 2 k2 - k s/2. Replacing (<ref>) and the above count of Y for fixed s into the left-hand side of (<ref>) gives ∑_X ∈_k∑_Y ∈_k Y ∼ X (1 - p)^|(X X) ∪ (Y Y)|≤∑_X ∈_k∑_s = 1^k k^3 snk - s (1 - p)^2k2 - k s/2. Trivially bounding the number of choices for X ∈_k by nk and plugging the result into (<ref>) completes the proof. With <Ref> and <Ref>, the proof of <Ref> is just checking that the bounds match those of the statement. Fix p = p(n) satisfying 1/2 ≥ p ≥ n^-δ/8 for some δ > 0, and let k = (2 - 2δ) log_1/1 - p n. It suffices to show that (Z_k)/[Z_k]^2→ 0 as n →∞. First, we compute the expected value of Z_k using <Ref> and linearity of expectation: [Z_k] = (1 - o(1)) nk (1 - p)^k2≥1/2nk n^-(1 - δ) (k - 1)→∞ as n →∞, by our choice of k. Now, if we assume that [Z_k] ≥nk∑_s = 1^k k^3 snk - s (1 - p)^2k2 - k s/2, then, by <Ref>, we have (Z_k) ≤ 2 [Z_k] and (Z_k)/[Z_k]^2≤2/[Z_k]→ 0 as n →∞, by (<ref>). We therefore assume that the converse of (<ref>) holds. Before we proceed, observe that applying the standard binomial inequality nk - s≤(k/n)^s nk to the right-hand side of (<ref>) yields nk∑_s = 1^k k^3 snk - s (1 - p)^2k2 - k s/2 ≤nk^2 (1 - p)^2 k2∑_s = 1^k k^3 s(k/n)^s (1 - p)^-k s/2 ≤ 4 [Z_k]^2 ∑_s = 1^k (k^4/n^δ)^s, where in the last inequality we used (<ref>) with n sufficiently large and also (1 - p)^- k s / 2 = n^(1 - δ) s because k = (2 - 2δ) log_1/1 - p n. By <Ref>, our assumption that the converse of (<ref>) holds, and (<ref>), we have (Z_k) ≤ 8 [Z_k]^2 ∑_s = 1^k (k^4/n^δ)^s. Replacing (<ref>) into (<ref>), we conclude that the proof is complete if we show that ∑_s = 1^k (k^4/n^δ)^s → 0 as n →∞. This is easily seen to be true when k = o(n^δ/4), which holds for our choice of p ≥ n^-δ/8. § CONCLUDING REMARKS The most important open question left by our work is extending the upper bound in <Ref> to p as small as possible. However, already when p ≤ (log n)^-1, there is an obstacle that prevents any approach similar to ours from working. To understand the barrier, consider the following approximate summary of our strategy. We find a family = {F(X) + F(X) : X ∈ k} of subsets of _n of size s (for some s ∈) with the following two properties: * For each set X ∈ k, there exists S ∈ with S ⊆ X + X. * The family is small, that is, || ≤ (1 - p)^-s. Observe that each S ∈ is of the form F(X) + F(X) and has size s, so we must trivially have |F(X)| ≥√(s) for all X ∈ k. Naively counting every set F ⊆ with |F| = √(s) when bounding the size of , we obtain || ≥n√(s)≈exp(√(s)log n). In our proof, we show that we can choose F inside a small generalised arithmetic progression P, thus replacing the log n term in (<ref>) with log |P|. However, even if we could find such a set P with |P| = O(√(s)), we could not improve the bound in (<ref>) beyond exp(√(s)). Combining this lower bound on the size of with the upper bound that we require in property (2) gives exp(√(s)) ≤ || ≤ (1 - p)^-s, which implies that s ≥ p^-2. Now, consider any X ∈ k with σ[X] = O(1): the corresponding set S ∈ satisfies S ⊆ X + X and therefore p^-2≤ s = |S| ≤ |X + X| ≤ O(k). As k is the upper bound we are trying to prove for α(Γ_p), it follows that the best we can hope for this approach is, for some constant C > 2, α(Γ_p) ≤ C max{p^-2, p^-1log n}, where the second term in the maximum is the lower bound that we proved in <Ref>. While (<ref>) would still be far from what we believe is true for p much smaller than (log n)^-1, proving it would still be very interesting; even if we optimized our approach, we could not get close to proving (<ref>) for all p ≥ (log n)^-1. Improving the upper bound is related to the following problem due to Shachar Lovett: is there a list of 2^n^O(1) subsets of _2^n with density at least 1/100 such that, whenever X ⊆𝔽_2^n has density at least 1/3, X + X contains one of these sets? This question is relevant because we can embed sets X ⊆ with σ[X] = O(1) as dense subsets of an Abelian group using Ruzsa's model lemma. Now, if the positive answer to this problem does not rely too much on the details of the setting (the structure of 𝔽_2^n and the particular densities of the subsets) and a similar list exists for any Abelian group, then we could take a union bound over its elements and hope to circumvent (<ref>) in the bounded doubling setting. We would also like to highlight that for very small p, close to n^-1log n, upper and lower bounds for α(Γ_p) follow from a theorem of <cit.>; this is the only other result that we are aware of in the regime p = o(1). Another way to obtain an upper bound in this range is as follows: observe that the first eigenvalue of Γ_p is highly concentrated around n p, and that we can use elementary Fourier analysis to show that its second eigenvalue is, with high probability, at most O( √(n p log n) ) (see, for example, <cit.>). Using Hoffman's ratio bound, we then obtain α(Γ_p) ≤( C n log n/p)^1/2, which matches α(G(n, p)) up to a logarithmic factor if p ≈ n^-1log n. Another interesting open question is to improve the bounds on 's lemma via few translates. One possibility is to attempt to reduce the d^3 error term obtained by <cit.> in <Ref>, which is the best known result if we require the error term to depend only on the dimension of the set. A different avenue which we believe is worth pursuing is to determine the minimum number of translates necessary to obtain the correct leading constant of d + 1. This is closely related to the question asked (and partially settled) by <cit.> on whether three translates suffice to obtain the Cauchy–Davenport lower bound for two sets A, B ⊆. Even though their question was recently resolved by <cit.> in the affirmative, it is not clear what is the truth in higher dimensions, even in the simpler case of A = B. We conjecture the following: There exists C > 0 such that the following holds. For all d ∈ and all finite sets X ⊆^d of full rank, X contains a subset T such that |T| ≤ C d and |X + T| ≥ (d + 1)|X| - d^C. The best negative result we have for this problem is given by the following simple construction. Let d ≥ 3, and let {e_1, e_2, …, e_d} be the canonical basis of ^d. Consider X = P + {0, e_1, …, e_d - 1} where P = {0, e_d, 2e_d, …, ke_d} for some k ∈. It is not hard to show that the following holds for this example: for every γ > 0, there is c = c(γ) > 0 such that if T ⊆ X with |T| < (2 - γ)d, then |X + T| ≤ (1 - c) (d + 1)|X|. <Ref> is actually not the best result that we know towards <Ref>. While we were finishing the writing of this paper, <cit.> made a breakthrough and proved Marton's conjecture, also known as the Polynomial –Ruzsa conjecture for finite fields. In a previous work, <cit.> proved that a positive answer to Marton's conjecture would imply what is known as the “weak” Polynomial –Ruzsa conjecture for ^d: for every X ⊆^d with σ[X] ≤σ, there exists X' ⊆ X such that (X') ≤ C logσ and |X'| ≥σ^-C |X| for some absolute constant C > 0. Combining this weak version of PFR in ^d with a variation of the argument we developed in <Ref>, we can prove a version of <Ref> that attains a bound of the form |X + T| ≥ (d + 1 - O(log d))|X| at the cost of d^C translates, for C > 0 an absolute constant. We develop these ideas in a separate work, dedicated to answering a question of <cit.> about the number of subsets of {1, …, n} with prescribed doubling, where we require that both the number of translates is a polynomial in d, and the leading constant is at least 1 - o(1). Nonetheless, we think that <Ref> should not depend on results as deep as variants of PFR. The final future research direction we would like to highlight is the extension of our result to other groups. During the preparation of this paper, it came to our knowledge that <cit.> proved an upper bound of O(log N loglog N) for the clique/independence number in the uniform random Cayley graph of any group G, where N is the order of G. As there are some groups, like _2^n, where this is tight up to the constant factor, their result can be seen as a generalization of 's theorem <cit.> about α(Γ_1/2). Moreover, over certain groups, like _5^n, they can show that there exist Cayley graphs which have both clique and independence number (2 + o(1)) log N, even though a uniform random Cayley graph has clique number O(log N loglog N). It would be interesting if our methods, combined with their techniques, can extend these results to sparser graphs. § ACKNOWLEDGEMENTS We would like to greatly thank Rob Morris for carefully reading the paper, detecting mistakes and suggesting improvements and corrections. Without his support, this work would not have its current form. We would also like to thank Lucas Aragão for helpful discussions, and David Conlon, Huy Tuan Pham and Shachar Lovett for comments on the manuscript. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brasil (CAPES). plainnat § COUNTING SETS WITH LARGE DOUBLING Recall that n is a large prime and that we are dealing with sets X ⊆ of size k. We restate the result which we aim to prove for the reader's convenience: * The proof of <Ref> is essentially identical to that of <cit.>, and to obtain the statement above we only need to optimize the dependencies between the parameters. Before proceeding, we recall a key definition from <cit.>: A set {x_1, …, x_d}⊆ is M-dissociated if ∑_i = 1^d λ_i x_i ≠ 0 for every (λ_1, …, λ_d) ∈^d ∖{0} such that ∑_i = 1^d |λ_i| ≤ M. It is useful to understand this notion as an analogue of linear independence for elements of taking coefficients in , with a restriction on the sum of their magnitudes. The first result that we need to optimize is <Ref>, a slight strengthening of <cit.>. For every sufficiently small δ > 0 and η > 0, the following holds. Fix a large prime n and M = (log n) / (loglog n)^4. If k ≤ (log n)^2 - η and m ≥δ k^2 / 10, then any M-dissociated subset of X ∈_3^(m) has size at most (2 + δ + o(1))m / k, as k →∞. We will begin by giving an overview of the proof from <cit.>, alongside some useful definitions of our own. Whenever there is (λ_1, …, λ_d) ∈^d such that x = ∑_i = 1^d λ_i x_i and ∑_i = 1^d |λ_i| ≤ M, we will say that x is in the M-span of {x_1, …, x_d}. In that same situation, we will say that x can be written as an M-bounded combination of {x_1, …, x_d}. The proof starts by fixing the value of M = (log n)/(loglog n)^4 and any M-dissociated set D = {x_1, …, x_d}⊆ X. We define G' to be the graph with vertex set and edges between a pair of vertices if their difference can be written as ± x_i ± x_j for some i, j ∈{1, …, d}. The graph G' is useful because of the following stmt:connected-components-of-G-are-spans: For each a ∈, let V_a ⊆ V(G') denote its connected component in G'. Every element b ∈ V_a can be expressed as b = a + ∑_j = 1^d λ_j x_j with ∑_j = 1^d |λ_j| ≤ 2 (V_a). Take b ∈ V_a, and consider any shortest path P connecting a to b in G'. Observe that b - a = ∑_j = 1^d (λ_j^+ - λ_j^-) x_j where λ_j^+, λ_j^- count respectively the number of times x_j, -x_j appear as terms in P. Then, ∑_j = 1^d (|λ_j^+| + |λ_j^-|) ≤ 2 |P| ≤ 2 (V_a) where the last step follows from P being a shortest path in V_a. Having thus related the span of D with the diameter of connected components in G', we will decompose G = G'[X] using the following stmt:weak-regularity-appendix, also from <cit.>, with Δ = k^1/2log k: Let G be a graph with vertex set V(G) and let Δ > 1. There exists a partition V(G) = X_* ∪ X_1 ∪⋯∪ X_t such that * |X_*| ≤ 32 (v(G)/Δ)^2. * e(X_i, X_j) = 0 for every i ≠ j. * The diameter of G[X_i] is at most Δ for every i ∈{1, …, t}. Applying <Ref> to G yields a partition X = X_* ∪ X_1 ∪⋯∪ X_t such that * if X_i ≠ X_j, then X_i + D and X_j + D are disjoint, and * every X_i has a y_i ∈ X_i such that X_i - y_i is contained in the 2Δ-span of D. To see that property (i) holds, notice that if X_i + D and X_j + D intersect, then there is an edge of G connecting X_i and X_j, contradicting item (2) of <Ref>. Property (ii), on the other hand, follows directly from an application of <Ref> to X_i, the diameter of which is bounded by item (3) in <Ref>. To combine these two properties into a proof of <Ref>, notice that the disjointness given by property (i) yields a lower bound for X + X in terms of each X_i: |X + X| ≥∑_i |X_i + D|. We now want a lower bound for each |X_i + D| in terms of d. Towards that goal, we will define, for every i ∈{1, …, t}, a isomorphism ϕ_i : X_i → X_i' ⊆^d. Each will map (a translate of) D to a structured set, where structured only means that we have a lower bound for its sumset with any sufficiently small set. These isomorphisms are naturally defined by the 2Δ-bounded decomposition of X_i - y_i given by property (ii). The function ϕ_i : X_i → X_i' ⊆^d defined by ϕ_i(x) = (λ_1, …, λ_d), where x - y_i = ∑_j = 1^d λ_j x_j is a 2Δ-bounded decomposition of x - y_i, is a isomorphism. The fact that ϕ^-1 : X_i' → X_i is a homomorphism follows from its linearity, so we focus on the other direction of the implication. Take a_1, a_2, a_3, a_4 ∈ X_i such that a_1 + a_2 = a_3 + a_4, and we want to show that ϕ_i(a_1) + ϕ_i(a_2) = ϕ_i(a_3) + ϕ_i(a_4). For ℓ∈{1, …, 4}, write a_ℓ - y_i = ∑_i = 1^d x_i λ^(ℓ)_i, a 2 Δ-bounded combination of D. We now replace (<ref>) in (<ref>) and rearrange to obtain ∑_j = 1^d x_j (λ_j^(1) + λ_j^(2) - λ_j^(3) - λ_j^(4)) = 0. Using the fact that each (<ref>) is 2Δ-bounded yields, as k →∞, ∑_ℓ = 1^4 ∑_j = 1^d |λ_j^(ℓ)| ≤ 4 (2Δ) ≤ 8 k^1/2log k ≤ (log n)^1 - η / 3≤ M, where the inequalities follow by the definitions/assumptions which we now recall Δ = k^1/2log k, k ≤ (log n)^2 - η and M = (log n)/(loglog n)^4. However, D is M-dissociated, so we conclude that, for all j ∈{1, …, d}, (λ_j^(1) + λ_j^(2)) - (λ_j^(3) + λ_j^(4)) = 0, which, by the definition of ϕ_i, is just another way to write ϕ_i(a_1) + ϕ_i(a_2) = ϕ_i(a_3) + ϕ_i(a_4). Observe that ϕ_i(D + y_i) = {e_1, …, e_d}, where {e_1, …, e_d} is the canonical basis of ^d. Hence, to obtain a lower bound for the size of X_i + D in terms of d and complete the proof of <Ref>, we will bound |ϕ_i(X_i) + ϕ_i(D + y_i)| using the isoperimetric inequality of <cit.>, as stated in <cit.> (see also <cit.>). For every γ > 0 and C > 0, there exists d_0 = d_0(γ, C) such that the following holds for every d ≥ d_0. If S ⊆^d is a set of size at most C d, then |S + {e_1,…,e_d}| ≥(1/2 - γ) d |S|. Fix X ∈_3^(m) and let D = {x_1, …, x_d} be any M-dissociated subset of X. Towards our aim of showing that |D| = d ≤(2 + δ + o(1)) m / k, recall the definition of the graph G': its vertices are and there are edges between vertices a, b ∈ only when some of ± (a - b) can be written as either x_i - x_j or x_i + x_j for i, j ∈{1, …, d}. We apply <Ref> to G = G'[X] with Δ = k^1/2log k and obtain X_* ∪ X_1 ∪⋯∪ X_t, a partition of X such that e(X_i, X_j) = 0 for all i ≠ j, (G[X_i]) ≤ k^1/2log k and |X_*| ≤ 32 (k/k^1/2log k)^2 = o(k). In order to obtain a lower bound for X + X, we use property (i) of G to show that |X + X| ≥∑_i = 1^t |X_i + D|. Now, we define, for each i ∈{1, …, t}, the function ϕ_i; by property (ii) and <Ref>, we have that ϕ_i is a isomorphism from X_i to X_i' ⊆^d. By the definition of isomorphisms, we obtain |X_i + D| = |X_i + (D + y_i)| = |X_i' + ϕ_i(D + y_i)|. Recall that the definition of ϕ_i implies that ϕ_i(D + y_i) = {e_1, …, e_d}. Hence, to apply <Ref> to the right-hand side of (<ref>), we need to bound |X_i'| in terms of d and show that d is sufficiently large. We will show both of these by observing that if k > 10d/δ, then we are already done, since d ≤δ k/10≤m/k≤(2 + δ + o(1))m/k, which is what we wanted to show. We can therefore assume that d ≥δ k / 10, which simultaneously implies that d is large when k →∞, and that |X_i'| ≤ k ≤ 10d/δ. Applying <Ref> to X_i' with γ = δ^2 and C = 10/δ, we obtain |X_i' + {e_1, …, e_d}| ≥(1/2 - δ^2 ) d |X_i'| for each i ∈{1, …, t}. Replacing (<ref>) in (<ref>), and using (<ref>), we conclude that |X + X| ≥ d ( 1/2 - δ^2 ) ∑_i = 1^t |X_i'|. Thus, using the fact that |X_i'| = |X_i|, which follows from each ϕ_i being a isomorphism, and that X = X_* ∪ X_1 ∪⋯ X_t is a partition, we have d ( 1/2 - δ^2 ) ∑_i = 1^t |X_i'| = d ( 1/2 - δ^2 ) (|X| - |X_*|) = d ( 1/2 - δ^2 ) (1 - o(1)) k where we used (<ref>) to bound |X_*|. Finally, it follows from |X + X| ≤ m and δ being sufficiently small that d ≤(2 + δ + o(1))m / k as k →∞. With <Ref>, we need only one more piece, the following stmt:count-of-dissociated-coefficients. It is a simple count of the number of choices for coefficients in a dissociated set, and we will use it to repeat the proof of Proposition 5.1 in <cit.> and obtain <Ref>. For every M, d ∈, the number of choices for λ_1, …, λ_d ∈ such that ∑_i = 1^d |λ_i| ≤ M is at most (4d)^M. Fix M = (log n) / (loglog n)^4 and take X ∈_3^(m). We will count the choices for X by first choosing a maximal M-dissociated subset D = {x_1, …, x_d}⊆ X and then choosing the remaining elements of X ∖ D using the properties of D. First, we count the choices for D naively, and rely on <Ref> to bound its size. That is, we apply <Ref> to D, obtain |D| ≤ d ≤(2 + δ + o(1))m / k, and thus deduce that the number of choices for D is at most: ∑_t = 1^d nt≤ (n + 1)^d ≤ n^(2 + δ + o(1)) m / k. The second step is counting the choices for X ∖ D, and we do so by counting the possible ways to write each of its elements as an M-bounded combination of D. Fix x' ∈ X ∖ D, and note that the maximality of D implies that there is Λ = {λ_0, λ_1, …, λ_d}⊆ such that λ_0 x' + ∑_j = 1^d λ_j x_j = 0 and the elements of Λ satisfy ∑_j = 0^d |λ_j| ≤ M, λ_0 ≠ 0 and λ_i ≠ 0 for some i > 0. We can therefore use <Ref> to count the number of choices for Λ and observe that (4d + 4)^M ≤exp(3 M log k) ≤exp(log n/(loglog n)^2) = n^1/log k where the first inequality follows from the (trivial) observation that d ≤ k, and the rest is a consequence of our choice of M = (log n) / (loglog n)^4 and our assumption that k ≤ (log n)^2 - η. We can now choose each of the k - d elements of X ∖ D by the above procedure, and obtain that there are at most n^(k - d)/(log k) = n^o(k) such elements. Combining (<ref>) and (<ref>) with m ≥δ k^2 / 10 thus yields | _3^(m)| ≤ n^(2 + δ + o(1)) m / k + o(k) = n^(2 + δ + o(1)) m / k, as required.
http://arxiv.org/abs/2406.08258v1
20240612142919
Chiral edge transport along domain walls in magnetic topological insulator nanoribbons
[ "Nezhat Pournaghavi", "Carlo M. Canali" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
nezhat@kth.se (Corresponding author) § ABSTRACT Quantum anomalous Hall insulators are topologically characterized by non-zero integer Chern numbers, the sign of which depends on the direction of the exchange field that breaks time-reversal symmetry. This feature allows the manipulation of the conducting chiral edge states present at the interface of two magnetic domains with opposite magnetization and opposite Chern numbers. Motivated by this broad understanding, the present study investigates the quantum transport properties of a magnetized Bi_2Se_3 topological insulator nanoribbon with a domain wall oriented either parallel or perpendicular to the transport direction. Employing an atomistic tight-binding model and a non-equilibrium Green's function formalism, we calculate the quantum conductance and explore the nature of the edge states. We elucidate the conditions leading to exact conductance quantization and identify the origin of deviations from this behavior. Our analysis shows that although the conductance is quantized in the presence of the horizontal domain wall, the quantization is absent in the perpendicular domain wall case. Furthermore, the investigation of the spin character of the edge modes confirms that the conductance in the horizontal domain wall configuration is spin polarized. This finding underscores the potential of our system as a simple three dimensional spin-filter device. Chiral edge transport along domain walls in magnetic topological insulator nanoribbons C. M. Canali June 17, 2024 ====================================================================================== § INTRODUCTION Topological materials are a new state of matter that display topologically protected conducting boundary states, e.g., at the edges of a two-dimensional (2D) sample or on the surface of a three-dimensional (3D) system<cit.>. The existence of these boundary states is ensured by non-zero topological indexes or invariants of the bulk band structure, an interplay usually refereed as boundary-bulk correspondence. It is precisely at the interface between two regions with different bulk topological invariants (one possibly being the vacuum) that topological boundary states emerge. The integer quantum Hall effect in a 2D electron gas in the presence of a strong perpendicular magnetic field discovered more than 40 years ago<cit.> was the first example of this correspondence. Here the 2D bulk topology is characterized by a non-zero Chern number equal to the quantized Hall conductance and to the number of chiral gapless edge states responsible for the dissipationless transport. In more recent years, topological band theory has found that 2D and 3D topological insulators (TI) display similar gapless boundary states related to another bulk topological invariant, the Z_2 number, which is protected most notably by time-reversal symmetry (TRS)<cit.> (or by a crystal symmetry in case of topological crystal insulators). When TRS is broken by the presence of magnetism either in the bulk (via magnetic doping<cit.>) or on the surfaces of a TI thin film (via surface doping or by proximity to a magnetic layers<cit.>) the 2D surface states acquire an energy gap, signalling a topological phase transition to either a Chern or to an axion insulator<cit.>. In the first case, when the film is arranged into a Hall-bar geometry, a quantum transport phenomenon known as Quantum Anomalous Hall Effect (QAHE) emerges<cit.>. Predicted in 2013<cit.>, the QAHE has been observed in uniformly-doped magnetic TI thin films<cit.>, in modulation-doped TI films <cit.> that have surface magnetism only, and recently also in TI films with proximity-induced 2D magnetism<cit.>. The QAHE is a quantum Hall effect without Landau levels<cit.> due to spontaneously broken TRS. It is characterized again by the same topological bulk integer Chern number of the ordinary QHE, which is equal to number of spin-polarized gapless chiral edge states hosted on the sidewalls of the TI film and carrying a dissipationless current. The QAHE dissipationless chiral edge states have a great potential in spintronics for the next generation of information processing devices operating at zero magnetic field with low power consumption<cit.>. Furthermore, it is possible to envision the realization of more complex topological chiral networks based on 1D chiral interface states which, according to topological band theory, also appear at the boundary of two QAHE insulators with different Chern numbers C<cit.>. The difference in C between two adjacent QAHE insulator domains determine the number of chiral interface states at a given edge. One simple way to generate such chiral interface states is to create two regions of opposite magnetizations in a QAHE insulator, separated by a sharp magnetic domain wall (DW). The experimental realization of these systems has shown evidence of quantized chiral edge transport<cit.>, but has been so far quite challenging. However, quite recently, by employing an in-situ mechanical mask in magnetically doped (Bi,Sb)_2Se_3 multilayers, Zhao et, al. have used molecular beam epitaxy to synthesize efficiently robust QAHE 1D junctions along a DW separating two regions of opposite magnetization with Chern number ± C. In a QAHE bar with a DW orthogonal to the current direction, they demonstrated the existence of two parallel chiral edge states along the DW leading to quantized transport at zero magnetic field<cit.>. Quasi-1D chiral edge and interface states and their quantum transport properties in QAHE set-ups in the presence of DWs have been studied theoretically in a few papers using effective continuum models for the magnetic TI surfaces  <cit.>. The nature of the states at the domain wall can be controlled by an external electric field. This is due to the fact that depending on the position of the Fermi energy, the chirality as well as the coupling between the magnetic moments with TI states can change <cit.>. It can be shown that the equilibrium charge current along the domain wall in a 3D TI in the presence of a local Zeeman field, is equal to the sum of counter-propagating equilibrium currents flowing along the external boundaries of the domains <cit.>. The number of chiral interface modes along the DW is determined by the difference of Chern numbers of adjacent regions, the tuning of which can be used to manipulate the current partition at the DW junctions<cit.>. Note that earlier theoretical work had predicted that robust edge and interface states should occur in graphene heterostructure QHE bars along the edges and the DWs created at the interface of two adjacent regions under the effect of oppositely oriented magnetic fields. In this case, when the DW is orthogonal to the direction of the current, the DW interface states affect the edge states by opening a back scattering channel which prevents the perfect quantization of the two-terminal conductance <cit.>. In this paper, we consider a nanoribbon of magnetized Bi_2Se_3 3D TI, hosting a magnetic domain wall (DWs) either parallel or perpendicular to the nanoribbon direction, see Fig. <ref>. The purpose of the study is to investigate how spin-polarized quasi 1D chiral states along the interface of two magnetic regions with opposite bulk topological invariants, affect the electronic and spin transport. In particular, we explore the influence of these interface states and possible bulk states on the quantized conductance carried by the chiral edge states of the nanoribbon, which are responsible for the QAHE in a system without DWs. We use an atomistic tight-binding model<cit.> which provides an adequate microscopic description of the electronic band structure, along with the non-equilibrium Green's function formalism to study the two-terminal longitudinal conductance. Importantly, in contrast with the case of effective models employed in previous studies, in our approach not only dissipationless 1D chiral edge/interface states but also dissipative bulk and non-chiral side-wall states, which are always present in the system <cit.>, emerge directly from the band structure and are treated on the same footing. The use of a microscopic tight-binding model adds considerable computational complexity but it provides an unbiased description of physics and realistic estimate of the typical system sizes where the effects described here are likely to be observed. On the basis of the spin character of the edge states we also propose to use this set-up as a spin filter device. The paper is organized as follows. In Sec.II we describe the theoretical tight-binding model and the non-equilibrium Green's function approach to quantum transport. Sec. III presents the results of the electronic structure and conductance. Sec. IV contains concluding remarks. § THEORETICAL MODEL AND TRANSPORT FORMALISM §.§ Tight-binding model for a thin-film heterostructure nanoribbon In order to consider a domain wall in the system, we study two distinct two-terminal heterostructures with a parallel and a perpendicular domain wall as shown in Fig. <ref>. In both cases a homogeneous exchange field is considered all over the ribbon as a result of a magnetic substrate with a high Curie temperature which induces a proximity coupling to the nearby layers of the Bi_2Se_3. The direction of the magnetization however depends on the two cases that we have considered. We make the simplifying assumption that the magnetization changes abruptly at the interface since we are aiming at developing a qualitative analysis and clarify the basic physical concepts rather than attaining precise numerical results. However, it's important to note that earlier work <cit.> on transport in graphene in the presence of DWs has shown that a linearly smooth transition introduces just some small fluctuations in the conductance steps; yet the qualitative outcome remains consistent with that one obtained assuming abrupt DWs. Also, the recent results shown in Ref <cit.> point to DWs that are indeed rather narrow (of the order of a few nm). Additionally, atomically sharp DWs in an antiferromagnet thin film have been realized recently <cit.>. To model Bi_2Se_3 thin films we use an atomistic tight-binding (TB) model based on s and p orbitals of Bi and Se atoms (sp^3 )<cit.>. TB model incorporates atomic orbitals and their interactions, providing a microscopic description of electron dynamics. By considering the spin-orbit coupling and crystal symmetry, this model effectively describes the emergence of topological surface states. The Hamiltonian can be written as follows <cit.> H_C = ∑ _ii',σαα't_ii'^αα'e^i k· r_ii'c_iα^σ†c_i'α'^σ + ∑ _i,σσ',αα'λ_i <i,α,σ| L· S| i,α',σ'> c_iα^σ†c_iα'^σ' + ∑ _i,σ ,α M_i c_iα^σ†σ_z^σσc_iα^σ† . The first (kinetic-energy) term of Eq. <ref> contains Slater-Koster hopping and site-energy parameters<cit.>, which in the present case have been extracted from density functional theory (DFT) calculations<cit.>. Here c_iα^σ†(c_iα^σ) is the creation (annihilation) operator for an electron with spin σ and atomic orbital α∈ (s, p_x, p_y p_z) at site i. k is the reciprocal-lattice vector that spans the Brillouin zone. i^'≠ i runs over all neighbors of atom i in the same atomic layer as well as the first and second nearest-neighbor layers in the adjacent cells, and r_ii' represents the vector connecting two neighbor atoms. In the second term, an on-site spin-orbit coupling (SOC) interaction is implemented in the intra-atomic matrix elements <cit.>, in which |i,α,σ> are spin- and orbital-resolved atomic orbitals. L and S are the orbital angular momentum and the spin operators, respectively, and λ_i is the SOC strength <cit.>. The last term represents the exchange field with the strength |M_i|=0.2 eV at different sites and can be either positive or negative with respect to the positive direction of the z-axis<cit.>. This is the crucial term for breaking TRS in the system, which opens an exchange band gap at the Dirac point of the surface states and induces a finite Berry curvature. Below we will specify the exchange field configuration for two different cases. §.§ Green's function and transport formalism In order to calculate the transport properties, we use the non-equilibrium Green's function (NEGF) method which describes the propagation of electrons in a system under the influence of an external voltage or perturbation. Given the Hamiltonian of the central channel at the Γ point, the spin-dependent retarded (r) and advanced (a) Green's functions are given by <cit.> G^r (E) =(E^+I-H_C - ∑_n Σ_n(E))^-1 =[ G^a(E)]^†, where E^+≡ E + i 0^+ and I is the identity matrix. H_C is defined in Eq <ref> in which k=0. The sum n= L, R is over the two self-energies, which account for the interactions and boundary conditions due to the connection of the left (right) semi-infinite electrode to the central channel. The self-energies can be expresses as Σ _L(R)(E)= H_L(R),C^†g_L(R)(E)H_L(R),C, where H_L(R),C is the tunneling Hamiltonian between the central region and the left(right) lead. The Green's function g_L(R) is the surface Green's function of the left(right) lead that captures the specific behavior of electrons at the material's surface and can be calculated using the Sancho-Lopez-Rubio recursive method <cit.>. In this method, the semi-infinite lead is divided into layers perpendicular to the interface and each layer is treated as a finite system, allowing for the calculation of its Green's function. This recursive method relates the Green's function of a layer to the Green's function of the neighboring layer through the self-energy term. We first calculate the effect of the surface connected to the right semi-infinite lead using this iterative method  <cit.>, which is then used to connect the central region to the leads in such a way that a unit cell from the central region is added to the right lead one at a time. In this way, a surface Green’s function g_R^l is calculated for each unit cell l starting from l=M, where M is the number of unit cells in the central region, to l=2. It should be noted that the surface Green's function is calculated using Hamiltonian of the leads , which may differ from that of the central region. However, here we are considering the leads to be simply an extension of the central part which therefore do not introduce any additional scattering effects, focusing solely on the influence of the the DWs. Additionally, due to ballistic transport, the length of the channel does not impact the results. The g_R^ls are calculated from g_R^l+1 according to g_R^l=(E^+I-H_C - H_R,Cg_R^l+1H_R,C^†)^-1 . Once we have obtained the Green's function of the whole system, we can calculate the longitudinal two-terminal conductance as G (E) =e^2/hTr[ Γ ^L (E) G^r (E) Γ^R (E) G^a (E)] =e^2/h T(E), where Γ^L/R = i[ Σ^r_L/R- ( Σ^r_L/R )^†] is the broadening due to the left and right electrode contacts. In order to calculate the steady-state local transport properties in the full NEGF formalism, we need both the retarded and lesser Green's functions, which contain information about the density of available states and how electrons occupy these states, respectively. In the phase-coherent regime where interaction self-energy functionals are zero, by using the Keldysh formalism we can simply write the lesser Green's function in terms of the retarded Green's function and the broadening matrices as <cit.> G_i σ , j σ'^< (E) =[ G^r (E) Σ^<_L+Σ^<_R G^a (E)]_i σ, j σ' =i[ G^r(E)(f_L(E)Γ^L + f_R (E) Γ^R) G^a(E) ]_i σ , j σ' . Once the lesser Green's function is known, we can also calculate both the equilibrium and the out-of-equilibrium (when an applied voltage is present), spin-resolved, local density of states n_i^σ (E) at space position i and at a given energy E <cit.> n_i^σ (E) =e/4π G_iσ,i σ^< (E) . § RESULTS AND DISCUSSION §.§ Parallel domain wall We start by considering the heterostructure in Fig <ref>(a) (parallel domain wall along the longitudinal direction y). The system is translational invariant along the y-direction, and we can plot the quasi-1D band structure as a function of the wave vector k_y. Since we have a nanoribbon, both the width and the thickness of the central channel should be chosen carefully to avoid coupling between surfaces. Here we have considered a thickness of three quintuple layers of Bi_2 Se_3 (2.6 nm) and a width of 21.0 nm. The size of the matrices that this structure entails will be 12360 × 12360. Fig. <ref>(a) shows the band structure for this system, which clearly displays the presence of the Dirac point and crossing edge states inside the bulk gap. The Fermi energy is located at 0.09 eV. In Fig. <ref>(b) we have plotted the spatial distribution of the modulus square of the wave function |Ψ|^2 for the two right-moving states (one of predominantly spin-up and the other of predominantly spin-down) and the two left-moving states at the 4-fold degenerate energy E=0.1eV. We can see that two right-moving states are edge states localized at the two external borders of the nanoribbon, while the left-moving states are interface states both localized in the region of the DW in the middle of the nanoribbon. As shown in the inset, the spin-resolved analysis of these edge states shows that the external boundaries have spin polarized states with the majority spin being opposite at the two boundaries. However, the states at the domain wall in the middle of the ribbon are non-polarized. This is in perfect agreement with the fact that each half of the ribbon is a QAHE system, where in the upper (lower) part with positive (negative) magnetization, the majority spin-up (spin-down) electrons are moving clockwise (anti-clockwise). Therefore, if we have magnetized leads, which allow only electrons of a given spin to enter the channel, the electrons basically transport only via one edge, and by changing the direction of the current flow, the opposite spin will flow in the system through the other edge. In other words, we have a spin filter device that, by switching the direction of the voltage (or injected current) yields a spin-polarized current of opposite spin-polarization sign. We now discuss the longitudinal conductance through the central channel, calculated using the formalism of Sec. <ref>. In Fig <ref>(a) we plot the two-terminal conductance as a function of the Fermi energy. Due to the large computational cost involved in this calculation (the Green's function matrices that we have to invert become exceptionally large), we have calculated the conductance for only a few values of the Fermi energy in the relevant energy window 0 ≤ E_ F≤ 0.2 eV dictated by the band structure of Fig. <ref>(a) in order to ascertain its general behavior. We can clearly distinguish three different regimes. In the first one, 0 ≤ E_ F≤ 0.06 eV, the Fermi energy in inside the continuum of the bulk states of the valence bands. The conductance is large and a monotonically decreasing function of the energy. The second regime, when the Fermi energy varies within the energy interval [0.06, 0.16]eV which corresponds to the exchange gap. Here we only have the four quasi-degenerate chiral states (two right-moving edge states and two left-moving interface states). Therefore the two-terminal longitudinal conductance is quantized and equal to G=2e^2/h=2 G_0 corresponding to the contribution of the dissipationless current carried by two edge states. In a four-terminal system, this situation would give rise to a QHAE with Hall resistance equal to R_ H = h/2 e^2. Interestingly, when the current is injected from the left lead into the right lead, the chiral nature of the 1D conducting boundary states implies that transport takes place along the spin-polarized edges of the ribbon. Viceversa, when the current is injected from the right lead, it flows dissipationlessly into the right lead via the unpolarized interface states along the DW in the middle of ribbon. Finally, the third regime corresponds to the energy interval E_ F> 0.16 eV, where the Fermi energy starts to probe states above the exchange gap belonging to the conduction band. In contrast to the first regime, where E_ F is below the exchange gap inside the bulk valence band continuum, here the states of the bottom of the conduction band are discrete. Clearly this is in part caused by the finite width of the nanoribbon. As shown in Ref. <cit.>, these states are non-chiral and appear outside the exchange gap. Just like in a quantum point contact, when by increasing the Fermi energy, additional discrete 1D states get occupied and contribute to the transport, each increasing the conductance by G_0, here these non-chiral states cause the quantized step-wise increase of the conductance. However, in contrast to the in-gap chiral states, these additional contribution are not robust against disorder and are going to be back-scattered by impurities and imperfections. Note that deviations from perfect quantization of the QAH resistance in magnetic TIs, signalled by a non-zero longitudinal resistance, have recently been discussed in Ref.  <cit.>, where it was pointed out that, in contrast to what is generally assumed, current can flow not only via chiral edge states but also through dissipative 2D bulk states. In Fig <ref>(b) we plot the spin-resolved and the total local density of states n_i^σ (E_ F) along the nanoribbon width, calculated using Eq.<ref> at equilibrium, that is at zero applied voltage, when E_ F is at the Dirac point. This figure shows that the states inside the exchange gap displayed in Fig. <ref>(a) are indeed spin-polarized edge and interface states localized respectively along the top and bottom nanoribbon edges as well as in the middle of the nanoribbon where the DW is located. This figure is therefore completely consistent with the results of the longitudinal conductance shown in Fig. <ref>(a). §.§ Perpendicular domain wall Next, we consider the geometry shown in Fig <ref>(b), The DW, placed in the middle of central channel, is now orthogonal to the nanoribbon length and to the direction of the longitudinal transport, and it is going to act as a spin-dependent potential barrier that will affect the two terminal longitudinal conductance. In this case, translational invariance along the longitudinal (y) direction is broken, and therefore we cannot plot the 1D band structure as in the previous case. Nevertheless, we can gain an understanding of the electronic structure of the system by plotting the local density density of states (LDOS) close to the left and right leads. For this purpose, we make use of the iterative Green's function formalism also employed to analyze transport. As shown in Eq. <ref>, at zero applied voltage the lesser Green's function yields precisely the equilibrium LDOS at a given position at energy E. To compute the LDOS at a specific space position of the system, we essentially address different parts of the nanoribbon by introducing a sequence of different layers that make up the central Hamiltonian H_C in Eq <ref>. Each layer of H_C is a unit cell inside the channel, which is repeated along its length. In contrast to the previous case of longitudinal DW where iterative layers defining H_C are all the same, now for this nanoribbon configuration H_C will change when the iterative process reaches the DW. In Fig <ref> we plot the LDOS along the DW direction at a very close distance from the DW itself, when E_ F = 0.01 eV. Note that this is essentially the energy at the Dirac point of the edge states in the band structure for the parallel DW case shown shown in Fig. <ref>, located in the middle of the exchange gap. In principle, it is not clear that this energy should be at all relevant for the present case. However, not having other reference points, it is useful to consider it. Here panels (b)-(e) represent the LDOS plotted along the width of the nanoribbon just on the left (b and c, positive magnetization) and on the right (d and e, negative magnetization) of the DW. Red and blue curves refer again to spin-up and spin-down states respectively. The figures exhibits two main features. The first one is the opposite character of the strong spin polarization on the left and on the right of the DW, where majority and minority spins are interchanged. The second feature is the presence of a non-zero LDOS along all the width of nanoribbon for both spin-up and spin-down, suggesting the existence of 1D chiral edge and interface states as indicated schematically in panel (a). The rapid oscillations of n_i in panels (b) and (e) as a function of the perpendicular coordinate x come from the contribution of atoms positioned in different atomic layers (that is, having different z-coordinates) but having close x-coordinate values. Physically they might be caused by backscattaring at the sharp corners where edge and interface states join. We now analyze the two-terminal conductance for this geometry as a function of E_ F, as shown in Fig. <ref>. Two main energy regimes stand out: (i) when E_ F < 0.06 eV the conductance is very small, G << G_0e; for E_ F > 0.06 eV, the conductance grows approximately linearly with E_ F. In order to elucidate the mechanism of this surprising behavior, we first note that the value E_ F = 0.06 eV signals the beginning of the exchange gap for the parallel DW case, hosting chiral edge and interface states, and it is exactly the lower limit of the energy interval where the conductance in that case is quantized. Again, although we do not have direct strong physical arguments supporting the assumption that this energy should also be relevant for the present case, our intuition suggests that this occurrence is not a sheer coincidence. We have already observed that, on the two sides of the DW and very close to it, the LDOS at E_ F = 0.1 eV (the energy at the Dirac point for the parallel DW case) is consistent with the one of chiral spin polarized interface states joining the corresponding edge states which propagate on the two nanoribbon external edges, see Fig. <ref>. We pursue this trail by computing the LDOS at the two ends of the central channel, far away from the DW for a few different energies. We choose these value guided by the band structure of Fig. <ref>, which now should be somewhat relevant for this case given the large distance from the DW. The panels on the left (a-c) show the LDOS calculated at the position the first unit cell in the channel, which is closest to the left lead, while in the panels on the right (d-f) LDOS is calculated for the last unit cell in the channel, which is closest to the right lead. At E= 0.002, (panels a and d) the LDOS shows the expected strong spin-polarization; however it is pretty much uniform along the width of the nanoribbon. Given the fact that this energy is located deep in the valence bands of Fig. <ref>(a), we interpret this LDOS as the one pertaining to bulk, non chiral states spread throughout the nanoribbon. When E_ F is in this region, it is primarily these states which would contribute to the longitudinal conductance  <cit.>. However, when the injected current from the left lead carried by these states reaches the DW, finds on the other side a LDOS with where only states with the opposite spin sign are available and cannot propagate further. Hence the conductance is very small. In way this is the same mechanism responsible for the giant magneto-resistance in magnetic multilayers. When E= 0.002 eV, (panels c and f) we expect LDOS to still correspond to non chiral bulk states, since this value of the energy is located way-up in the conduction band of the band structure of Fig. <ref>(a). However, now the magnetic properties are quite different. Indeed the spin polarization on both sides of the DW is much smaller. In this case, a current injected from the left lead at this value of E_ F and should the able to flow (unhindered in the absence of imperfections) and be collected at the right lead. The conductance should be proportional to the cumulative contribution of all the occupied states up to E_ F, as found in the calculation. The intermediate energy region between these two values, represented by panels b and, is the the most interesting one, since E is now inside the exchange gap and therefore 1D chiral edge and interface states are the only ones active. We can imagine that when the current is injected from the left lead, it flows first along the upper edge state. At the DW is continues along the chiral interface state moving down until it reaches the lower edge state when it can continue to the right until it is collected at the right lead. Now the opposite spin polarization in the two halves of the nanoribbons does not prevent current flow completely. Two possible reasons for this are: (i) some minority spin states of the the minority spin contribution in the lower edge (having therefore the same “correct´´ spin of the chiral states originating from the upper edge) states might be larger; (ii) while majority up-spin electrons travel downward along the DW interface state might spin-flip due to the presence of spin-orbit interaction. However, when the Fermi energy is in this energy window transport in general will not be quantized, although it amusing to note that the longitudinal conductance is G ≈ 2 G_0.. § CONCLUSION Using an atomistic tight-binding model that captures realistically the microscopic features of magnetic TI heterostructures, we have analyzed the charge and spin characteristics of the 1D chiral edge and interface states in a magnetized Bi_2Se_3 TI nanoribbon in the presence of one magnetic domain wall (DW) of two possible different kinds For a DW parallel to the longitudinal direction of the nanoribbon, exact quantization of the anomalous Hall conductance is guaranteed provided that (i) the Fermi energy is located inside the magnetic exchange energy gap where only chiral edge and interface states are present (ii) the width of the nanoribbon is large enough to prevent the coupling of opposite channels. The edge states at the external boundaries of the system are spin-polarized and can be tuned by means of an external voltage. In the case of perpendicular domain wall, back-scattering mechanisms caused by the DW give rise to a deviation from the perfect quantization. In particular, the longitudinal conductance vanishes identically when the Fermi energy is smaller than a critical value identified as the lower edge of the exchange gap in the 1D band structure of the parallel case, and increases linearly afterward. A more detailed understanding of these results could perhaps be achieved by examining the space dependence of the steady-state spin-polarized current density  <cit.>. Further application of the approach presented here could yield valuable insights into the impact of more intricate spin textures, for example topological spin excitations such as skyrmions, on both charge and spin conductance. The size of these skyrmions is particularly crucial for a detailed analytical and computational analysis of the electronic structure and especially the resulting quantum transport. Recent studies of TI/magnetic-layer heterostructures have reported evidence of the existence of skyrmions of relatively small size of the order of 15 nm at room temperature  <cit.> on the surface of a topological insulator which could be analyzed with our microscopic approach. Outstanding issues, presently intensively investigated, include the use of the unique spin properties of the conducting TI gapless surface and edge states to manipulate skyrmions<cit.>. Similarly, spin-orbit torque, thought to be particularly large in TI thin films<cit.>, can be used to affect and possibly revert the magnetic properties of these topological spin structures. § ACKNOWLEDGMENTS This work was supported by the Faculty of Technology and by the Department of Physics and Electrical Engineering at Linnaeus University (Sweden). We acknowledge financial support from the Swedish Research Council (VR 2021-046229) and Carl Tryggers Stiftelsen (CTS 20:71). The computations at Linnaeus University were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at Dardel, partially funded by the Swedish Research Council through grant agreement no. 2022-06725.
http://arxiv.org/abs/2406.08401v1
20240612165012
Nyström Kernel Stein Discrepancy
[ "Florian Kalinke", "Zoltan Szabo", "Bharath K. Sriperumbudur" ]
stat.ML
[ "stat.ML", "cs.LG", "math.ST", "stat.TH", "46E22 (Primary) 62G10 (Secondary)", "G.3; I.2.6" ]
[ [ June 11, 2024 ================= § ABSTRACT Kernel methods underpin many of the most successful approaches in data science and statistics, and they allow representing probability measures as elements of a reproducing kernel Hilbert space without loss of information. Recently, the kernel Stein discrepancy (KSD), which combines Stein's method with kernel techniques, gained considerable attention. Through the Stein operator, KSD allows the construction of powerful goodness-of-fit tests where it is sufficient to know the target distribution up to a multiplicative constant. However, the typical U- and V-statistic-based KSD estimators suffer from a quadratic runtime complexity, which hinders their application in large-scale settings. In this work, we propose a Nyström-based KSD acceleration—with runtime Ø(mn+m^3) for n samples and m≪ n Nyström points—, show its √(n)-consistency under the null with a classical sub-Gaussian assumption, and demonstrate its applicability for goodness-of-fit testing on a suite of benchmarks. § INTRODUCTION The kernel mean embedding, which involves mapping probabilities into a reproducing kernel Hilbert spaces (RKHS; ) has found far-reaching applications in the last 20 years. For example, it allows to measure the discrepancy between probability distributions through maximum mean discrepancy (MMD; ), defined as the distance between the corresponding mean embeddings, which underpins powerful two-sample tests. MMD is also known as energy distance <cit.> in the statistics literature; see <cit.> for the equivalence. We refer to <cit.> for a recent overview of kernel mean embeddings. In addition to two-sample tests, testing for goodness-of-fit (GoF; ) is also of central importance in data science and statistics, which involves testing H_0:= vs. H_1 : ≠ based on samples from an unknown sampling distribution and a (fixed known) target distribution . Classical GoF tests, e.g., the Kolmogorov-Smirnov test <cit.>, or the test for normality by <cit.>, usually require explicit knowledge of the target distribution. However, in practical applications, the target distribution is frequently only known up to a normalizing constant. Examples include validating the output of Markov Chain Monte Carlo (MCMC) samplers <cit.>, or assessing deep generative models <cit.>. In all these examples, one desires a powerful test, even though the normalization constant might be difficult to obtain. A recent approach to tackle GoF testing involves applying a Stein operator <cit.> to functions in an RKHS and using them as test functions to measure the discrepancy between distributions, referred to as kernel Stein discrepancies (KSD; ). An empirical estimator of KSD can be used as a test statistic to address the GoF problem. In particular, the Langevin Stein operator <cit.> in combination with the kernel mean embedding gives rise to a KSD on the Euclidean space ^d, which we consider in this work. As a test statistic, KSD has many desirable properties. In particular, KSD requires only knowledge of the derivative of the score function of the target distribution — implying that KSD is agnostic to the normalization of the target and therefore does not require solving, either analytically or numerically, complex normalization integrals in Bayesian settings. This property has led to its widespread use, e.g., for assessing and improving sample quality <cit.>, validating MCMC methods <cit.>, comparing deep generative models <cit.>, detecting out-of-distribution inputs <cit.>, assessing Bayesian seismic inversion <cit.>, modeling counterfactuals <cit.>, and explaining predictions <cit.>. GoF testing with KSDs has been explored on Euclidean data <cit.>, discrete data <cit.>, point processes <cit.>, time-to-event data <cit.>, graph data <cit.>, sequential models <cit.>, and functional data <cit.>. The KSD statistic has also been extended to the conditional case <cit.>. Estimators for Langevin Stein operator-based KSD exist. But, the classical U-statistic- <cit.> and V-statistic-based <cit.> estimators have a runtime complexity that scales quadratically with the number of samples of the sampling distribution, which limits their deployment to large-scale settings. To address this bottleneck, <cit.> introduced a linear-time statistic that suffers from low statistical power compared to its quadratic-time counterpart. <cit.> proposed the finite set Stein discrepancy (FSSD), a linear-time approach that replaces the RKHS-norm by the L_2-norm approximated by sampling; the sampling can either be random (FSSD-rand) or optimized w.r.t. a power proxy (FSSD-opt). Another approach <cit.> is employing the random Fourier feature (RFF; ) method to accelerate the KSD estimation. However, it is known <cit.> that the resulting statistic fails to distinguish a large class of measures. <cit.> generalize the idea of replacing the RKHS-norm by going from L_2-norms to L_p ones, to obtain feature Stein discrepancies. They present an efficient approximation, random feature Stein discrepancies (RFSD), which is a near-linear time estimator. However, successful deployment of the method depends on a good choice of parameters, which, while the authors provide guidelines, can be challenging to select and tune in practice. Our work alleviates these severe bottlenecks. We employ the Nyström method <cit.> to accelerate KSD estimation and show the √(n)-consistency of our proposed estimator under the null. The main technical challenge is that the Stein kernel (induced by the Langevin Stein operator and the original kernel) is typically unbounded while existing statistical Nyström analysis <cit.> usually considers bounded kernels. To tackle unbounded kernels, we select a classical sub-Gaussian assumption, which we impose on the feature map associated to the kernel, and show that existing methods of analysis can successfully be extended to handle this novel case. In this sense, our work, besides <cit.>, which requires a similar sub-Gaussian condition for analyzing empirical risk minimization on random subspaces, is a first step in analyzing the consistency of the unbounded case in the Nyström setting. Our main contributions can be summarized as follows. * We introduce a Nyström-based acceleration of kernel Stein discrepancy. The proposed estimator runs in Ø(mn + m^3) time, with n samples and m ≪ n Nyström points. * We prove the √(n)-consistency under the null of our estimator in a classical sub-Gaussian setting, which extends (in a non-trivial fashion) existing results for Nyström-based methods <cit.> focusing on bounded kernels. * We perform an extensive suite of experiments to demonstrate the applicability of the proposed method. Our proposed approach achieves competitive results throughout all experiments. The paper is structured as follows. We introduce the notations used throughout the article (Section <ref>) followed by recalling the classical quadratic-time KSD estimators (Section <ref>). In Section <ref>, we detail our proposed Nyström-based estimator, alongside with its adaptation to a modified wild bootstrap goodness-of-fit test (Section <ref>), and our theoretical guarantees (Section <ref>). Experiments demonstrating the efficiency of our Nyström-KSD estimator are provided in Section <ref>. Proofs are deferred to the appendices. § NOTATIONS In this section, we introduce our notations. Let [N] := {1,…,N} for a positive integer N. For a_1,a_2 ≥ 0, a_1 ≲ a_2 (resp. a_1 ≳ a_2) means that a_1≤ ca_2 (resp. a_1≥ c'a_2) for an absolute constant c>0 (resp. c' > 0), and we write a_1 ≍ a_2 iff. a_1≲ a_2 and a_1≳ a_2. We write 1_(·) for the indicator function and {{·}} for a multiset. The n-dimensional vector of ones is denoted by 1_n = (1,…,1)∈^n , that of n zeros by 0_n = (0,…,0)∈^n. The identity matrix is I̱_n∈^n× n. For a matrix A̱∈^d_1× d_2, A̱^- ∈^d_2× d_1 denotes its (Moore-Penrose) pseudo-inverse, and A̱∈^d_2× d_1 stands for the transpose of A̱. We write A̱^-1∈^d× d for the inverse of a non-singular matrix A̱∈^d× d. For a differentiable function f:^d → , let ∇_x̱f(x̱)=( f(x̱)/ x_i)_i=1^d ∈^d. Let (,τ_) be a topological space and ℬ(τ_) the corresponding Borel σ-algebra. Probability measures in this article are meant w.r.t. the measurable space (, ℬ(τ_)) and are written as ℳ_1^+(); for instance, the set of Borel probability measures on ^d is ℳ_1^+(^d). The n-fold product measure of ∈ℳ_1^+() is denoted by ^n ∈ℳ_1^+(^n). For a sequence of real-valued random variables X_n and a sequence of positive r_n-s, X_n = Ø_(r_n) means that X_n/r_n is bounded in probability. The unit ball in a Hilbert space $̋ is denoted byB()̋={f∈|̋f_≤1 }. The reproducing kernel Hilbert space withk:^d×^d→ℝas the reproducing kernel is denoted by_̋k. Throughout the paper,kis assumed to be measurable and_̋kto be separable; for instance, a continuous kernelk:^d ×^d →implies the latter property <cit.>. The canonical feature mapϕ_k : ^d →_̋kis defined asx̱ ↦k(·,x̱); with this feature mapk(x̱,y̱) = k(·,x̱),k(·,y̱)_̋k = ϕ_k(x̱),ϕ_k(y̱)_̋kfor allx̱,y̱∈^d. The Gram matrix associated with the observations( x̱_i )_i=1^n∈(^d)^nand kernelkis Ḵ_k,n,n = [ k(x̱_i,x̱_j) ]_i,j=1^n∈^n×n. Given a closed linear subspaceU⊆_̋k, the (orthogonal) projection ofh∈_̋konUis denoted byP_Uh∈U;u=P_Uhis the unique vector such thath-u ⊥U. For anyu∈U,h-P_Uh_̋k≤h-u_̋k, that is,P_Uhis the closest element inUtoh. A linear operatorA : _̋k →_̋kis called bounded ifA := sup_h_̋k=1Ah_̋k < ∞; the set of_̋k→_̋kbounded linear operators is denoted byŁ(_̋k). AnA ∈Ł(_̋k)is called positive (shortlyA≥0) if it is self-adjoint (A^*=A, whereA^*∈Ł(_̋k)is defined by⟨Af,g ⟩__̋k = ⟨f,A^*g ⟩__̋kfor allf,g∈_̋k), and⟨Ah,h⟩__̋k ≥0for allh∈_̋k. IfA≥0, then there exists a uniqueB ≥0such thatB^2 = A; we writeB = A^1/2and callBthe square root ofA. AnA ∈Ł(_̋k)is called trace-class if∑_i∈I ⟨(A^*A)^1/2e_i,e_i ⟩__̋k <∞for some countable orthonormal basis (ONB)(e_i)_i∈Iof_̋k, and in this case(A):=∑_i∈I ⟨Ae_i,e_i ⟩__̋k <∞.[The trace-class property and the value of (A) is independent of the specific ONB chosen. The separability of _̋k implies the existence of a countable ONB in it.] For a self-adjoint trace-class operatorAwith eigenvalues(λ_i)_i∈I,(A)=∑_i∈I λ_i. An operatorA ∈Ł(_̋k)is called compact if{Ah | h∈B(_̋k)}is compact, where·denotes the closure. A trace class operator is compact, and a compact positive operatorAhas largest eigenvalueA. For anyA∈Ł(_̋k), it holds thatA^*A = A^2(which is called theC^*property). The mean embedding of a probability measure∈ℳ_1^+(^d)into the RKHS associated to kernelk:^d ×^d →isμ_k() = ∫_^d ϕ_k(x̱)(x̱) ∈_̋k, where the integral is meant in Bochner's sense <cit.>. The mean elementμ_k()exists iff.∫_^dϕ_k(x̱)_̋k(x̱) < ∞<cit.>; this condition is satisfied for instance whenϕ_kis bounded, in other wordssup_x̱∈^dϕ_k(x̱)_̋k < ∞. Letf,g ∈_̋k. Their tensor product is written asf⊗g ∈_̋k⊗_̋k, where_̋k⊗_̋kis the tensor product Hilbert space; further,f⊗g : _̋k→_̋kdefines a rank-one operator byh↦fg,h_̋k. It is known that_̋k⊗_̋kis also an RKHS <cit.>. Given a probability measure∈ℳ_1^+(^d)and a kernelk:^d ×^d→, the uncentered covariance operator C_,k = ∫_^dϕ_k(x̱)⊗ϕ_k(x̱) (x̱) ∈_̋k⊗_̋k exists if∫_ϕ_k(x̱)_̋k^2(x̱) < ∞;C_,kis a positive trace-class operator. We defineC_,k,λ = C_,k+λI, whereIdenotes the identity operator andλ>0. The effective dimension of∈ℳ_1^+(^d)is defined as𝒩_,k(λ) : = ( C_,kC_,k,λ^-1 )≤(C_,k)/λ.[This inequality is implied by (C_,kC_,k,λ^-1) = ∑_i ∈ Iλ_i/λ_i+λ≤1/λ∑_i∈ Iλ_i = (C_,k)/λ, where (λ_i)_i∈ I denote the eigenvalues of C_,k.] Withp≥1and a real-valued random variableX: (Ω,𝒜,) →(, ℬ(τ_)), whereℬ(τ_)denotes the Borelσ-field on, letX_L_p()= [∫_Ω |X(ω)|^p (ω)]^1/p. Forp ∈{1,2}, letψ_p(u) = e^u^p-1andXψ_p := inf{C > 0 |_X∼ψ_p(|X|/C) ≤1}. A real-valued random variableX∼∈ℳ_1^+()is called sub-exponential ifXψ_1<∞and sub-Gaussian ifXψ_2 < ∞. In the following, we specialize Definition 2 by <cit.> stated for Banach spaces to (reproducing kernel) Hilbert spaces by using the Riesz representation theorem. A centered random variableX∼∈ℳ_1^+(_̋k)taking values in an RKHS_̋kis called sub-Gaussian iff. there exists a universal constantC>0such that X,u_̋kψ_2≤ C X,u_̋kL_2() <∞ holds for allu ∈_̋k. § PROBLEM FORMULATION We now introduce our quantity of interest, the kernel Stein discrepancy. Let_̋k^d:=×_i=1^d_̋kbe the product RKHS with inner product defined byf̱,g̱_̋k^d = ∑_i=1^df_i,g_i_̋kforf̱=( f_i )_i=1^d, g̱ = ( g_i )_i=1^d ∈_̋k^d. Let, ∈ℳ_1^+(^d)be fixed; we refer toas the target distribution and toas the sampling distribution. Assume thatis absolutely continuous w.r.t. the Lebesgue measureλ_don^d; let the corresponding Radon-Nikodym derivative—in other words, the probability density function (pdf) of—bep= λ̣_d. We assume thatpis continuously differentiable with support^d,p(x̱) >0for allx̱ ∈^d, andlim_x̱→∞f(x̱)p(x̱) = 0for allf ∈_̋k. The last property holds for instance ifpis bounded andlim_x̱→∞f(x̱) = 0for allf ∈_̋k. The Stein operator is defined as(T_p f̱)(x̱) = ⟨∇_x̱ [logp(x̱)], f̱(x̱)⟩+ ⟨∇_x̱f̱(x̱), 1̱_d⟩(f̱ ∈_̋k^d, x̱ ∈^d). With this notation at hand, for allf̱ ∈_̋k^dandx̱ ∈^done has (T_p f̱)(x̱) = f̱, ξ_p(x̱)_̋k^d, ξ_p(x̱) = [∇_x̱( log p(x̱))ϕ_k(x̱) + ∇_x̱ϕ_k(x̱) ]∈_̋k^d, with the kernel h_p(x̱, y̱) = ξ_p(x̱),ξ_p(y̱)_̋k^d=⟨ h_p(·,x̱),h_p(·, y̱)⟩__̋h_p, x̱,y̱∈^d; notice thatξ_p(x̱)andh_p(·,x̱)map to different feature spaces (_̋k^dand_̋h_p, respectively) but yield the same kernelh_p. The kernel Stein discrepancy (KSD) <cit.> then is defined as S_p() = sup_f̱∈ B(_̋k^d)f̱,_X∼ξ_p(X)_̋k^d = _X∼ξ_p(X)_̋k^d =_X∼X_̋h_p, where the last equality follows from (<ref>). By the construction of KSD, S_p() = _X∼X_̋h_p = 0, that is, the expected value of the Stein feature map under the target distributionis zero. Given a sample_n = {x̱_i}_i=1^n ∼^n, the popular V-statistic-based estimator <cit.> is constructed by replacingwith the empirical measure_n; it takes the form S^2_p(_n) = 1/n^2∑_i,j=1^nh_p(x̱_i,x̱_j), and can be computed inØ(n^2)time. The corresponding U-statistic-based estimator <cit.> has a similar expression but omits the diagonal terms, that is, S^2_p,u(_n) = 1/n(n-1)∑_1≤i≠j ≤n^nh_p(x̱_i,x̱_j); it also has a runtime costs ofØ(n^2). For large-scale applications, the quadratic runtime is a significant bottleneck; this is the shortcoming we tackle in the following. § PROPOSED NYSTRÖM-KSD To enable the efficient estimation of (<ref>), we propose a Nyström technique-based estimator in Section <ref> and an accelerated wild bootstrap test in Section <ref>. In Section <ref>, our consistency results are collected. §.§ The Nyström-KSD estimator We consider a subsample_m = {{x̱̃̃_1,…,x̱̃̃_m}}of sizem(sampled with replacement), the so-called Nyström sample, of the original sample_n = {x̱_1,…,x̱_n}; the tilde indicates a relabeling. The Nyström sample spans _̋p,m = {ξ_p (x̱̃̃_i) |i ∈[m]} ⊂_̋k^d, with feature mapξ_pand associated kernelh_pdefined in (<ref>). One idea is to restrict the space over which one takes the supremum in (<ref>), that is, one could consider S_p() (<ref>)=sup_f̱∈ B(_̋k^d)f̱,_X∼ξ_p(X)_̋k^d(i)≈sup_f̱∈ B(_̋p,m)f̱,_X∼ξ_p(X)_̋k^d (ii)≈sup_f̱∈ B(_̋p,m)f̱,_X∼_nξ_p(X)_̋k^d(iii)=√(β_pḴ_h_p,m,m^-1β_p), withβ_p = 1/nḴ_h_p,m,n 1_n . The details are as follows. In approximation(i), we restrict the supremum to_̋p,m ⊂_̋k^d, in step(ii)the empirical measure_nis used instead of, and(iii)is detailed in Appendix <ref>. However, one can see that the intuitive estimator (<ref>) has limited applicability, as the inverse ofḴ_h_p,m,mdoes not necessarily exist (for instance, by the non-strictly positive definiteness ofkor the sampling with replacement); hence, we take an alternative route which computationally leads to a similar expression and also allows statistical analysis. The best approximation ofS_p()in RKHS-norm-sense, when usingmNyström samples, could be obtained by considering the orthogonal projection of_X∼Xonto_̋h_p,m :={x̱̃̃_i|i ∈[m]} ⊂_̋h_p. Asis unknown in practice and only available via samples_n ∼^n, we consider the orthogonal projection of_X∼_nXonto_̋h_p,minstead. In other words, we aim to find the weightsα= (α_i)_i=1^m ∈^mthat correspond to the minimum norm solution of the cost function min_α∈^m1/n∑_i=1^nx̱_i_=_X∼_n X-∑_i=1^mα_ix̱̃̃_i__̋h_p, which gives rise to the squared KSD estimator[S̃_p^2(_n) indicates dependence on _n.] S̃_p^2(_n):=∑_i=1^mα_ix̱̃̃_i_̋h_p,m^2 (=P__̋h_p,m_X∼_n X)_̋h_p,m^2). The squared KSD estimator (<ref>) takes the form S̃_p^2(_n) = β_pḴ_h_p,m,m^-β_p, where we define β_p = 1/nḴ_h_p,m,n1_n∈^m, Ḵ_h_p,m,m = [h_p(x̱̃̃_i,x̱̃̃_j)]_i,j=1^m ∈^m× m, and Ḵ_h_p,m,n = [h_p(x̱̃̃_i,x̱_j)]_i,j=1^m,n∈^m× n.   * Runtime complexity. The computation of (<ref>) consists of calculating β_p, pseudo-inverting K_h_p,m,m, and obtaining the quadratic form β_pḴ_h_p,m,m^-β_p. The calculation of β_p requires Ø(mn) operations, due to the multiplication of an m× n matrix with a vector of length n. Inverting the m× m matrix K_h_p,m,m costs Ø(m^3),[Although faster algorithms for matrix inversion exist, e.g., Strassen's algorithm, we consider the runtime that one typically encounters in practice.] dominating the cost of Ø(m^2) needed for the computation of K_h_p,m,m. The quadratic form β_pḴ_h_p,m,m^-β_p has a computational cost of Ø(m^2). Hence, (<ref>) can be computed in Ø(mn + m^3), which means that for m=o(n^2/3), our proposed Nyström-KSD estimator guarantees an asymptotic speedup. * Comparison of (<ref>) and (<ref>). The two presented Nyström approaches result in comparable estimators — the inverse becomes a pseudo-inverse — but the projection perspective lends itself to a principled analysis that we detail in Section <ref>. * Comparison of (<ref>) and (<ref>). The Nyström estimator (<ref>) recovers the V-statistic-based estimator (<ref>) when no subsampling is performed and provided that Ḵ_h_p,n,n is invertible. §.§ Nyström bootstrap testing In this section, we discuss how one can use (<ref>) for accelerated goodness-of-fit testing. We recall that the goal of goodness-of-fit testing is to testH_0 : = versusH_1 : ≠, given samples_n = {x̱_1,…,x̱_n}and target distribution. Recall that KSD relies on score functions (∇_x̱[logp(x̱)]); hence knowingup to a multiplicative constant is enough. To use the Nyström-based estimator (<ref>) for goodness-of-fit testing, we propose to obtain its null distribution by a Nyström-based bootstrap procedure. Our method builds on the existing test for the V-statistic-based KSD, detailed in <cit.>, which we quote in the following. Define the bootstrapped statistic by B_n = 1/n^2∑_i,j=1^nw_iw_jh_p(x̱_i,x̱_j), withw_i ∈{-1,1}an auxiliary Markov chain defined by w_i = 1_(U_i > 0.5)w_i-1- 1_(U_i ≤ 0.5)w_i-1, where U_i i.i.d.∼Unif(0,1), that is,w_ichanges sign with probability0.5. The test procedure is as follows. * Calculate the test statistic (<ref>). * Obtain D wild bootstrap samples {B_n,i}_i=1^D with (<ref>) and estimate the 1-α empirical quantile of these samples. * Reject the null hypothesis if the test statistic (<ref>) exceeds the quantile. (<ref>) requiresØ( n^2 )computations, which yields a total cost ofØ( Dn^2 )for obtainingDbootstrap samples, rendering large-scale goodness-of-fit tests unpractical. We propose the Nyström-based bootstrap B_n^ = 1/n^2w̱Ḵ_h_p,n,mḴ_h_p,m,m^-Ḵ_h_p,m,nw̱, withw̱ = (w_i )_i=1^n∈^ncollecting thew_i-s (i∈[n]) defined in (<ref>);Ḵ_h_p,n,m(= Ḵ_h_p,m,n)andḴ_h_p,m,mare defined as in Lemma <ref>. The approximation is based on the fact <cit.> thatḴ_h_p,n,mḴ_h_p,m,m^-Ḵ_h_p,m,nis a low-rank approximation ofḴ_h_p,n,n, that is,Ḵ_h_p,n,mḴ_h_p,m,m^-Ḵ_h_p,m,n ≈Ḵ_h_p,n,n. Hence, our proposed procedure (<ref>) approximates (<ref>) but reduces the cost fromØ( n^2 )toØ(nm + m^3)if one computes from left to right (also refer to Remark <ref>); in the case ofm=o ( n^2/3 )this guarantees an asymptotic speedup. We obtain a total cost ofØ( D ( nm+m^3 ) )for obtaining the wild bootstrap samples. This acceleration allows KSD-based goodness-of-fit tests to be applied on large data sets. §.§ Guarantees This section is dedicated to the statistical behavior of the proposed Nyström-KSD estimator (<ref>). The existing analysis of Nyström estimators <cit.> considers bounded kernels only. Indeed, ifsup_x̱∈^dx̱_̋h_p <∞, the consistency of (<ref>) is implied by <cit.>, which we include here for convenience of comparison. Assume the setting of Lemma <ref>, C_,h_p 0, m≥ 4 Nyström samples, and bounded Stein feature map (sup_x̱∈^dx̱_̋h_p =: K <∞). Then, for any δ∈(0,1) it holds with probability at least 1-δ that |S_p() - S̃_p(_n)| ≤c_1/√(n) + c_2/m + c_3√(log(m/δ))/m√(𝒩_,h_p(12K^2log(m/δ)/m)), when m ≥max(67,12K^2C_,h_p^-1)log(m/δ), where c_1, c_2, and c_3 are positive constants. However, in practice, the feature map of KSD is typically unbounded and Theorem <ref> is not applicable, as it is illustrated in the following example with the frequently-used Gaussian kernel. [KSD yields unbounded kernel] Consider univariate data (d=1), unnormalized target density p(x) = e^-x^2/2, and the Gaussian kernel k(x,y) =e^-γ(x-y)^2. Notice that, by using (<ref>), we have ξ_p(x)_̋k^d^2 = h_p(x,x) = x^2 + constx→∞→∞. In fact, a more general result holds: If one considers a bounded continuously differentiable translation-invariant kernel k, the induced Stein kernel is only bounded provided that the target density p(x̱) has tails that are no thinner than e^-∑_i=1^d|x_i|<cit.>, which clearly rules out Gaussian targets. For analyzing the setting of unbounded feature maps, we make the following assumption. Assume that the Stein feature map X and the sampling distribution ∈ℳ_1^+(^d) are (i) sub-Gaussian in the sense of (<ref>), that is, the condition X,u_̋h_pψ_2≲ X,u_̋h_pL_2()< ∞ holds for all u∈_̋h_p, and (ii) it holds that 0 < X_̋h_pψ_2≲ X_̋h_pL_2() < ∞. We elaborate further on Assumption <ref> in Remark <ref>(c), after we state our following main result. Let Assumption <ref> hold, C_,h_p 0, assume the setting of Lemma <ref>, m ≥ 4 Nyström samples, and =. Then, for any δ∈ (0,1) it holds with probability at least 1-δ that |S_p() - S̃_p(_n)| ≲√((C_,h_p))log(6/δ)/n + √((C_,h_p)log(6/δ)/n) + + √((C_,h_p)log(12n/δ)log(12/δ))/m√(𝒩_,h_p(c(C_,h_p)/m)) when m≳max{C_,h_p^-1(C_,h_p),log (3/δ)}, where c>0 is a constant. To interpret the consistency guarantee of Theorem <ref>, we consider the three terms on the r.h.s. w.r.t. the magnitude ofm. The first two terms converge withØ(n^-1/2), independent of the choice ofm. By using the universal upper bound𝒩_,h_p(c(C_,h_p)/m) ≲mon the effective dimension, the last term reveals that an overall rate ofØ(n^-1/2)can only be achieved with further assumptions regarding the rate of decay of the effective dimension if one also requiresm = o(n^2/3)— as is necessary for a speed-up, see Remark <ref>(a). Indeed, the rate of decay of the effective dimension can be linked to the rate of decay of the eigenvalues of the covariance operator <cit.>, which is known to frequently decay exponentially, or, at least, polynomially. In this sense, the last term acts as a balance, which takes the characteristics of the data and of the kernel into account. The next corollary shows that an overall rate ofØ(n^-1/2)can be achieved, depending on the properties of the covariance operator. In the setting of Theorem <ref>, assume that the spectrum of the covariance operator C_,h_p decays either (i) polynomially, implying that 𝒩_,h_p(λ) ≲λ^-γ for some γ∈(0,1], or (ii) exponentially, implying that, 𝒩_,h_p(λ) ≲log(1+c_1/λ) for some c_1>0. Then it holds that |S_p() - S̃_p(_n)| = Ø_(1/√(n)), assuming that the number of Nyström points satisfies (i) m≳ n^1/2-γlog^1/2-γ(12n/δ)log^1/2-γ(12/δ) in the first case, or (ii) m≳√(n)(log(1+c_1n/c(C_,h_p))log(12n/δ)log(12/δ))^1/2 in the second case.   * Runtime benefit. Recall that — see Remark <ref>(a) —, our proposed Nyström estimator (<ref>) requires m = o(n^2/3) Nyström samples to achieve a speed-up. Hence, in the case of polynomial decay, an asymptotic speed-up with a statistical accuracy that matches the quadratic time estimator (<ref>) is guaranteed for γ < 1/2; in the case of exponential decay, large enough n always suffices. * Comparison of Theorem <ref> and Theorem <ref>. The key takeaway of Theorem <ref> is that the boundedness requirement on the kernel can be relaxed to a sub-Gaussian assumption for KSD estimation. * Sub-Gaussian assumption. Key to the proof of Theorem <ref> is having an adequate notion of non-boundedness of the feature map. One approach—common for controlling unbounded real-valued random variables— is to impose a sub-Gaussian assumption. In Hilbert spaces, various definitions of sub-Gaussian behavior have been investigated <cit.>; see <cit.> for a recent survey. Classically, these require centered random variables. We note that with (<ref>) the mean-zero requirement is satisfied for =. Among the definitions of sub-Gaussianity, we carefully selected (i) <cit.>, and (ii) Assumption <ref>(ii).[The former condition is also referred to as sub-Gaussian in Fukuda's sense<cit.>.] Specifically, (ii) is needed for our key Lemma <ref> which roughly states that sub-Gaussian vectors whitened by C_,h_p,λ^-1/2 inherit the sub-Gaussian property, which opens the door for proving Theorem <ref>. § EXPERIMENTS We verify the viability of our proposed method, abbreviated as N-KSD in this section, by comparing its runtime and its power to existing methods: the quadratic time KSD <cit.>, the linear-time goodness-of-fit test finite set Stein discrepancy (FSSD; ), and the near-linear-time goodness-of-fit test using random feature Stein discrepancy (L1 IMQ, L2 SechExp; ).[All the code replicating our experiments is available at <https://github.com/FlopsKa/nystroem-ksd>.] For FSSD, we consider randomized test locations (FSSD-rand) and optimized test locations (FSSD-opt); the optimality is meant w.r.t. a power proxy detailed in the cited work <cit.>. For all competitors, we use the settings and implementations provided by the respective authors. We do not compare against RFF KSD, as <cit.> show that RFSDs, to which we compare against, achieve a better performance on the same set of experiments. We use the well-known Gaussian kernelk(x̱, y̱) = exp(-γx̱- y̱^d^2)(γ> 0) with the median heuristic <cit.>, and the IMQ kernelk(x̱, y̱) = (c^2+x̱ - y̱^d^2)^β<cit.>, with the choices ofβ< 0andc >0detailed in the respective experiment description. To approximate the null distribution of N-KSD, we perform a bootstrap with (<ref>), settingD=500. To allow an easy comparison, our experiments replicate goodness-of-fit testing experiments from <cit.> and <cit.>. We ran all experiments on a PC with Ubuntu 20.04, 124GB RAM, and 32 cores with 2GHz each. Runtime. We setm=√(n)for N-KSD. As per recommendation, we fix the number of test locationsJ=10for L1 IMQ, L2 SechExp, and both FSSD methods. The runtime, see Figure <ref>(a) for the average over 10 repetitions (the error bars indicate the estimated95%quantile), behaves as predicted by the complexity analysis. The proposed approach runs orders of magnitudes faster than the quadratic time KSD estimator (<ref>), and, for small samples sizes (n≤500), N-KSD is the fastest among all tested methods. Fromn=2000, all (near-)linear-time approaches are faster (excluding FSSD-opt, which has a relatively large fixed cost). Still, N-KSD achieves competitive runtime results even forn=5000. Laplace vs. standard normal. We fix the target distribution= 𝒩(0_d,I̱_d)and obtainn=1000samples from the alternative= Lap(0,1/√(2))^d, a product ofdLaplace distributions. We testH_0 : = vs.H_1 : ≠with a level ofα=0.05. We set the kernel parameterscandβfor KSD IMQ and N-KSD IMQ as per the recommendation for L1 IMQ in the corresponding experiment by <cit.>. Figure <ref>(b) reports the power (obtained over500draws of the data) of the different approaches. KSD Gauss and its approximation N-KSD Gauss perform similarly but their power diminishes fromd=3. KSD IMQ achieves full power for all tested dimensions and performs best overall. N-KSD IMQ (m=4√(n)) achieves comparable results, with a small decline fromD=15. Our proposed method outperforms all existing KSD accelerations. Student-t vs. standard normal. The setup is similar to that of the previous experiment, but we consider samples froma multivariate student-t distribution with5degrees of freedom, setn=2000, and repeat the experiment 250 times to estimate the power. We show the results in Figure <ref>(c). All approaches employing the Gaussian kernel quickly loose in power; all techniques utilizing the IMQ kernel, including N-KSD IMQ, achieve comparably high power throughout. Restricted Boltzmann machine (RBM). Similar to <cit.>, we consider the case where the targetis the non-normalized density of an RBM; the samples_nare obtained from the same RBM perturbed by independent Gaussian noise with varianceσ^2. Forσ^2 = 0,H_0 : = holds, and forσ^2 >0, implying that the alternativeH_1 : ≠holds, the goal is to detect that then=1000samples come from a forged RBM. For the IMQ kernel (L1 IMQ, N-KSD IMQ, KSD IMQ), we setc=1andβ=-1/2. We show the results in Figure <ref>(d), using100repetitions to obtain the power. KSD with the IMQ and with the Gaussian kernel performs best. Our proposed Nyström-based method (m=4√(n)) nearly matches its performance with both kernels while requiring only a fraction of the runtime. All other approaches achieve less power forσ∈{0.02,0.04}. These experiments demonstrate the efficiency of the proposed Nyström-KSD method. § ACKNOWLEDGEMENTS This work was supported by the pilot program Core-Informatics of the Helmholtz Association (HGF). BKS is partially supported by the National Science Foundation (NSF) CAREER award DMS-1945396. § PROOFS This section is dedicated to the proofs of our results in the main text. Lemma <ref> is proved in Section <ref>. We prove our main result (Theorem <ref>) in Section <ref>; Corollary <ref> is shown in Section <ref>. §.§ Proof of Lemma <ref> By (<ref>), KSD is the norm of the mean embedding ofunder·, that is, S_p() = ∫_^dx̱(x̱)_̋h_p = μ_h_p()_̋h_p . Hence, with <cit.>, the optimization problem (<ref>) has the solutionα= (α_i)_i=1^m = 1/nḴ_h_p,m,m^- Ḵ_h_p,m,n1_n ∈^m. Now, using (<ref>), we have ∑_i=1^mα_i x̱̃̃_i_̋h_p,m^2 (a)=⟨∑_i=1^mα_ix̱̃̃_i, ∑_i=1^mα_ix̱̃̃_i⟩__̋h_p,m (b)=∑_i=1^m ∑_j=1^m α_i α_j⟨x̱̃̃_i, x̱̃̃_j⟩__̋h_p,m(c)=∑_i=1^m∑_j=1^mα_iα_jh_p(x̱̃̃_i,x̱̃̃_j) (d)=αḴ_h_p,m,mα (e)=1/n^21_nḴ_h_p,n,mḴ_h_p,m,m^-Ḵ_h_p,m,mḴ_h_p,m,m^-_=Ḵ_h_p,m,m^-Ḵ_h_p,m,n1_n = β_pḴ_h_p,m,m^-β_p. In (a) we used that·__̋h_p,mis inner product induced, (b) follows from the linearity of the inner product, (c) is implied by the reproducing property, (d) is by the definition the Gram matrix, in (e) we made use of the explicit form ofα, the symmetry of Gram matrices, the propertyḴ_h_p,m,n= Ḵ_h_p,n,m, and that the Moore-Penrose inverse satisfiesA̱^-A̱ A̱^- = A̱^-for any matrixA̱. §.§ Proof of Theorem <ref> Contrasting the existing related work <cit.>, we do not impose a boundedness assumption on the feature map. This relaxation leads to new technical difficulties that we resolve in the following. Still, we start our analysis from a known decomposition <cit.>. To simplify notation, letμ_h_p := μ_h_p(),μ̂_h_p := μ_h_p(_n), andμ̂_h_p^:= P__̋h_p,mμ_h_p(_n). First, we decompose the error as follows. |S_p() - S̃_p(_n)| (a)=|μ_h_p_̋h_p- μ̂_h_p^_̋h_p| (b)≤μ_h_p-μ̂_h_p^_̋h_p (c)≤μ_h_p-μ̂_h_p_̋h_p + μ̂_h_p-μ̂_h_p^_̋h_p (d)=μ_h_p-μ̂_h_p_̋h_p + (I-P__̋h_p,m)(μ̂_h_p-1/m∑_i=1^mx̱̃̃_i)_̋h_p (e)≤μ_h_p-μ̂_h_p_̋h_p_=:t_1 + (I-P__̋h_p,m)C_,h_p,λ^1/2_=:t_2C_,h_p,λ^-1/2(μ̂_h_p-1/m∑_i=1^mx̱̃̃_i)_̋h_p_=:t_3. (a) is implied by (<ref>) and (<ref>); (b) follows from the reverse triangle inequality;±μ̂_h_pand the triangle inequality yield (c); in (d), we use the distributive property and introduce1/m∑_i=1^mx̱̃̃_i ∈_̋h_p,mwhose projection onto the orthogonal complement of_̋h_p,mis zero; in (e)I = C_,h_p,λ^1/2C_,h_p,λ^-1/2was introduced and we used the definition of the operator norm. We next obtain individual probabilistic bounds for the three termst_1,t_2, andt_3, which we subsequently combine by union bound. We will then conclude the proof by showing that all assumptions that we imposed along the way are satisfied. * Term t_1. The first term measures the deviation of an empirical mean μ̂_h_p to its population counterpart μ_h_p. To bound this deviation μ̂_h_p-μ_h_p_̋h_p=1/n∑_i=1^n [x̱_i-_X∼ X_=0]__̋h_p, we will use the Bernstein inequality (Theorem <ref>) with the η_i := x̱_i∈_̋h_p(i∈ [n]) centered random variables, by gaining control on the moments of Y:=h_p(·,X)__̋h_p. This is what we elaborate next. By Assumption <ref>(ii), Y is sub-Gaussian; hence it is also sub-exponential (Lemma <ref>(3)), and therefore (Lemma <ref>) it satisfies the Bernstein condition |Y|^p ≤1/2p!σ^2B^p-2 < ∞, with σ = √(2)Yψ_1 , B = Yψ_1, for any p≥ 2. Notice that B = Yψ_1(a)≲Yψ_2(b)≲YL_2()(c)=(_X∼h_p(·,X)__̋h_p^2)^1/2 (d)=[_X∼(X⊗X)]^1/2(e)=√((C_,h_p)), (a) follows from Lemma <ref>(3), (b) is implied by Assumption <ref>(ii), (c) comes from the definition of Y, in (d) we used Lemma <ref>. <cit.> allows commuting and integration, which, with (<ref>), yields (e). As σ≍ B, we also got that σ≲√((C_,h_p)). Having obtained a bound on the moments, we can apply Bernstein's inequality for separable Hilbert spaces (; recalled in Theorem <ref>) to the centered η_i = x̱_i∈_̋h_p-s (i∈[n]), and obtain that for any δ∈ (0,1) it holds with probability at least 1-δ/3 that μ_h_p-μ̂_h_p_̋h_p = 1/n∑_i=1^nη_i_̋h_p≲√((C_,h_p))log(6/δ)/n + √((C_,h_p)log(6/δ)/n). * Term t_2. Assume that 0 < λ≤C_,h_p. Then, we can handle the second term with Lemma <ref> and obtain that for any δ∈(0,1) it holds with probability at least 1-δ/3 that ( I-P__̋h_p,m)C_,h_p,λ^1/2≲√(λ) provided that m≳max{(C_,h_p)/λ,log (3/δ)}. * Term t_3. The third term depends on the sample (x̱_i)_i=1^n i.i.d.∼ and on the Nyström selection (i_j)_j=1^m i.i.d.∼Unif([n]) =: Λ; with this notation x̱̃̃_j = x̱_i_j (j∈ [m]). Our goal is to take both sources of randomness into account. Fixed x̱_i-s, randomness in i_j-s: Let the sample (x̱_i)_i=1^n be fixed. As the (x̱_i_j)_j=1^m-s are i.i.d., t_3 =C_,h_p,λ^-1/2(μ̂_h_p-1/m∑_i=1^mx̱̃̃_i)__̋h_p = 1/m∑_i=1^m[C_,h_p,λ^-1/2(x̱̃̃_i-μ̂_h_p)]_:=Y_i__̋h_p measures the concentration of the sum 1/m∑_i=1^mY_i around its expectation, which is zero as _J[h_p(·,x̱_J)] = μ̂_h_p with J ∼Λ. Notice that max_i∈[m]Y_i_̋h_p = max_i∈[m]C_,h_p,λ^-1/2(x̱̃̃_i-μ̂_h_p)_̋h_p ≤max_i∈[m][C_,h_p,λ^-1/2h_p(·, x̱̃̃_i)_̋h_p + 1/n∑_j=1^n C_,h_p,λ^-1/2 h_p(·,x̱_j)_̋h_p] ≤ 2 max_i∈[n]C_,h_p,λ^-1/2x̱_i_̋h_p =: K = K(x̱_1,…,x̱_n), where we used the triangle inequality and homogeneity of the norm. An application of Theorem <ref> yields that, conditioned on the sample (x̱_i)_i=1^n, it holds that Λ^m ( (i_j)_j=1^m : t_3 ≤ K√(2log(12/δ))/√(m)| (x̱_i)_i=1^n) ≥ 1-δ/6. Randomness in x̱_i-s: Let Z_i:=C_,h_p,λ^-1/2x̱_i_̋h_p (i∈[n]) with (x̱_i)_i=1^n i.i.d.∼. By Assumption <ref>(ii) and Lemma <ref>, the Z_i-s are sub-Gaussian random variable provided that 0<λ≤C_,h_p. Hence, by Lemma <ref>, with probability at least 1-δ/6, it holds that K = 2max_i∈[n]|Z_i|≲√(Z_1ψ_2^2log(12n/δ)). We now bound Z_1ψ_2^2: Z_1ψ_2^2 (a)=C_,h_p,λ^-1/2x̱_1_̋h_pψ_2^2 (b)≲C_,h_p,λ^-1/2x̱_1_̋h_pL_2()^2 (c) = (C_,h_p,λ^-1C_,h_p). (a) follows from the definition of Z_1. (b) is implied by Assumption <ref>(ii) and Lemma <ref>. (c) comes from Lemma <ref> with A:=C_,h_p,λ^-1. We have shown that ^n( (x̱_i)_i=1^n : K≲√((C_,h_p,λ^-1C_,h_p)log(12n/δ))) ≥ 1-δ/6. Combination: We now combine the intermediate results. Let A = {((x̱_i)_i=1^n,(i_j)_j=1^m) : t_3 ≲√((C_,h_p,λ^-1C_,h_p) log(12n/δ)log(12/δ))/√(m)}, B = {(x̱_i)_i=1^n : K≲√((C_,h_p,λ^-1C_,h_p)log(12n/δ))}, C = {((x̱_i)_i=1^n,(i_j)_j=1^m) : t_3 ≤ K√(2log(12/δ))/√(m)}⊆ A. Then, with ^n⊗Λ^m denoting the product measure of ^n and Λ^m, we obtain (^n⊗Λ^m)( A ) = _^n[Λ^m(A |)] = ∫_(^d)^nΛ^m(A |)^n(x̱_1,…, x̱_n) ≥∫_B Λ^m(A |)^n(x̱_1,…, x̱_n) ≥∫_B Λ^m(C |)^n(x̱_1,…, x̱_n) (a)≥(1-δ/6) ^n(B) (b)≥ (1-δ/6)^2=1-δ/3+δ^2/6^2>1-δ/3. (a) is implied by the uniform lower bound established in (<ref>). (b) was shown in (<ref>). Combination of t_1, t_2, and t_3. To conclude, we use decomposition (<ref>), and union bound (<ref>), (<ref>), and (<ref>). Further, we observe that(C_,h_p,λ^-1C_,h_p) = 𝒩_,h_p(λ), and obtain that with probability at least1-δit holds that |S_p() - S̃_p(_n)| ≲√((C_,h_p))log(6/δ)/n + √((C_,h_p)log(6/δ)/n) + + √(λ𝒩_,h_p(λ)log(12n/δ)log(12/δ)/m) provided thatm≳max{(C_,h_p)/λ,log(3/δ)}and0 < λ≤C_,h_pboth hold. Now, specializingλ= c(C_,h_p)/mfor some absolute constantc>0, all constraints are satisfied form≳max{log(3/δ), (C_,h_p)C_,h_p^-1}. Using our choice ofλ, after rearranging, we get the stated claim. §.§ Proof of Corollary <ref> The proof is split into two parts. The first one considers the polynomial decay assumption, the second one is about the exponential decay assumption. * Polynomial decay. The √(n)-consistency of the first two addends in Theorem <ref> was established in the discussion following the statement. Hence, we limit our considerations to the last addend. Assume that 𝒩_,h_p(λ) ≲λ^-γ for γ∈ (0,1]. Observing that the trace expression is constant, the last addend in Theorem <ref> yields that √(log(12/δ)log(12n/δ)𝒩_,h_p(c(C_,h_p)/m)/m^2)(a)≲√(log(12/δ)log(12n/δ)/m^2-γ)(b)=Ø(1/√(n)), with (a) implied by the polynomial decay assumption and (b) follows from our choice of m≳ n^1/2-γlog^1/2-γ(12n/δ)log^1/2-γ(12/δ). This derivation confirms the first stated result. * Exponential decay. Assume it holds that 𝒩_,h_p(λ) ≲log(1+c_1/λ). Observe that as per the discussion following Theorem <ref>, the first two addends are Ø(n^-1/2). For the last addend, again noticing that the trace is constant, we have √(log(12/δ)log(12n/δ)𝒩_,h_p(c(C_,h_p)/m)/m^2)(a)≲√(log(12/δ)log(12n/δ)log(1+c_1m/c(C_,h_p))/m^2) (b)≲√(log(12/δ)log(12n/δ)log(1+c_1n/c(C_,h_p))/m^2)(c)=Ø(1/√(n)), where (a) uses the exponential decay assumption. (b) uses that n≥ m and that the logarithm is a monotonically increasing function. (c) follows from our choice of m≳√(n)√(log(1+c_1n/c(C_,h_p))log(12n/δ)log(12/δ)), finishing the proof of the corollary. § AUXILIARY RESULTS This section collects our auxiliary results. Lemma <ref> builds on <cit.>, which assumes bounded feature maps, and on <cit.>, which is stated in the context of leverage scores. Lemma <ref> states that a sub-exponential random variable satisfies Bernstein's conditions, and Lemma <ref> shows that a “whitened” sub-Gaussian vector remains sub-Gaussian. Lemma <ref> expresses theL_2-norm of the RKHS-norm as trace. In Lemma <ref>, we show how tensor products interplay with linearly transformed vectors. Lemma <ref> is about the maximum of real-valued sub-Gaussian random variables; it is a slightly altered restatement of <cit.>. Let Assumption <ref> hold, and assume 0 < λ≤C_,h_p. Then, for any δ∈(0,1), it holds with probability at least 1-δ that ( I-P__̋h_p,m)C_,h_p,λ^1/2^2≲λ provided that m≳max{(C_,h_p)/λ,log (1/δ)}. Let the covariance of the Nyström samples be C̃__n,h_p,m = 1/m∑_i=1^mx̱̃̃_i⊗x̱̃̃_i, and β(λ) = λ_max(C_,h_p,λ^-1/2(C_,h_p- C̃__n, h_p,m)C_,h_p,λ^-1/2). The application of <cit.> (recalled in Theorem <ref>) yields the non-probabilistic inequality (I-P__̋h_p,m)C_,h_p,λ^1/2^2 ≤λ/1-β(λ), when β(λ) < 1. β(λ) measures the deviation—in the sense of λ_max(·)—of an empirical covariance to the population one. We will establish that our setting satisfies the conditions of <cit.> (recalled in Theorem <ref>), with which we will show that the stronger requirement β(λ) (a)≤C_,h_p,λ^-1/2(C_,h_p- C̃__n, h_p,m)C_,h_p,λ^-1/2 (b)=C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2- C_,h_p,λ^-1/2C̃__n, h_p,m C_,h_p,λ^-1/2 (c)=1/m∑_i=1^mC_,h_p,λ^-1/2x̱̃̃_i⊗x̱̃̃_i C_,h_p,λ^-1/2-C_,h_p,λ^-1/2C_, h_p C_,h_p,λ^-1/2< 1 is satisfied for m large enough. In (a), we used that the spectral radius is bounded by the operator norm. (b) is the distributive property, and (c) uses linearity. In the following we prove (<ref>). Let the Nyström selection be (i_j)_j=1^mi.i.d.∼Unif([n]); with this notation x̱̃̃_j = x̱_i_j (j∈[m]). First, we condition on a fixed choice of -s. Let us define the Y=C_,h_p,λ^-1/2 X∈_̋h_p random variable with X∼, and consider the samples y_i_j := C_,h_p,λ^-1/2x̱_i_j∈_̋h_p. Y has covariance _X∼[Y⊗ Y] (a)=_X∼[C_,h_p,λ^-1/2 X⊗ XC_,h_p,λ^-1/2] (b)= C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2, where (a) is implied by Lemma <ref> and (b) follows from <cit.> by swapping the expectation with bounded linear operators. Similarly, we observe that the empirical covariance is 1/m∑_j=1^my_i_j⊗ y_i_j = C_,h_p,λ^-1/2C̃__n,h_p,mC_,h_p,λ^-1/2. In other words, this setup brings us into the setting of Theorem <ref>, and it remains to show that Y is sub-Gaussian in the sense of (<ref>). Notice that Y satisfies both conditions: * Mean-zero: By = and (<ref>), Y = C_,h_p,λ^-1/2 X = 0. * (<ref>) holds: Our goal now is to show that Y,u_̋h_pψ_2≲Y,u_̋h_pL_2() < ∞ holds for all u∈_̋h_p. Let u∈_̋h_p be arbitrary, and v=C_,h_p,λ^-1/2u ∈_̋h_p; v is well-defined due to the invertibility of C_,h_p,λ. Using this notation, we have Y,u_̋h_pψ_2(a)=C_,h_p,λ^-1/2X,u_̋h_pψ_2(b)=X,C_,h_p,λ^-1/2u_̋h_pψ_2 (c)=X,v_̋h_pψ_2(d)≲X,v_̋h_pL_2()(e)=X,C_,h_p,λ^-1/2u_̋h_pL_2() (f)=C_,h_p,λ^-1/2X,u_̋h_pL_2()(g)=Y,u_̋h_pL_2()(h)<∞. In (a), we use the definition of Y. (b) follows from the self-adjointness of C_,h_p,λ^-1/2. (c) uses the definition of v. (d) is implied by Assumption <ref>(i) holding for all v∈_̋h_p. In (e) we again used the definition of v. (f) is implied by the self-adjointness of C_,h_p,λ^-1/2. (g) is based on the definition of Y. (h) holds as X,v_̋h_pL_2()<∞ by Assumption <ref>(i) and the expressions in-between were equalities. This establishes the sub-Gaussianity of Y. Now, we can invoke <cit.> (recalled in Theorem <ref>), and, by using (<ref>), we obtain that with probability at least 1-δ it holds that β(λ)≲C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2max{√(r(C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2)/m),√(log(1/δ)/m)} = max{C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2^1/2_<1√((C_,h_p,λ^-1C_,h_p)/m), C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2_<1√(log(1/δ)/m)} provided that m≥max{r(C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2),log (1/δ)}, where r(A) := (A)/A. We next simplify the requirements on m. By using that 0 < λ≤C_,h_p =: λ_1, we have that C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2 = λ_1/λ_1+λ≥1/2. Now also observing that (C_,h_pλ^-1C_,h_p) ≤1/λ(C_,h_p) and by upper bounding the nominator and lower bounding the denominator in r: r(C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2) = (C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2)/C_,h_p,λ^-1/2C_,h_pC_,h_p,λ^-1/2≤(C_,h_p)/λ/1/2, we can take m≥max{2/λ(C_,h_p),log (1/δ)}. To sum up, we got by (<ref>) and (<ref>) that, conditioned on the Nyström selection, with probability of at least 1-δ it holds that C_,h_p,λ^-1/2(C_,h_p-C̃_,h_p,m)C_,h_p,λ^-1/2 ^2 ≲max{(C_,h_p)/λ m,log (1/δ)/m}. We remove the conditioning by taking the expectation over all Nyström selections. For m≳max{(C_,h_p)/λ,log (1/δ)}, we have that β(λ) < 1, which, together with (<ref>), implies the stated result. [Sub-exponential satisfies Bernstein conditions] Let Y be a real-valued random variable which is sub-exponential, i.e. Yψ_1 < ∞. Let σ := √(2)Yψ_1, B:= Yψ_1 >0. Then the Bernstein condition |Y|^p ≤1/2p!σ^2B^p-2 < ∞ holds for any p≥ 2. For any p≥ 2, we have |Y|^p = p! B^p |Y|^p/B^p p!(a)< p! B^p [exp(|Y|/B) -1]_(b)≤1 = 1/2p!B^p-2(√(2) B)^2, where in (a) we use that x^n/n! < e^x-1 holds for all n,x > 0, (b) follows from the definition of the sub-exponential Orlicz norm. The next lemma shows that the “whitened” random variableC_,h_p,λ^-1/2Xinherits Assumption <ref>(ii) fromX. Let Assumption <ref>(ii) hold. Let X∼, C_,h_p be the covariance operator, C_,h_p,λ = C_,h_p+λ I, and 0 < λ≤C_,h_p. Then, there exists an absolute constant c>0 such that C_,h_p,λ^-1/2 X_̋h_pψ_2(†)≤ c C_,h_p,λ^-1/2 X_̋h_pL_2()()<∞. We will refer to the '≤' in the statement by (†), and to the '<' by (). First, we observe that by the imposed Assumption <ref>(ii), there exists a constant c_1>0 such that X_̋h_pψ_2≤ c_1 X_̋h_pL_2() < ∞. which, by the definition of the ·ψ_2-norm, implies that _X∼exp( X_̋h_p^2/c_1^2 X_̋h_pL_2()^2) ≤ 2. Our goal for (†) is to prove that there exists c>0 such that _X∼exp(f(X;c)) ≤ 2, with f(X;c) :=C_,h_p,λ^-1/2 X_̋h_p^2/c^2 C_,h_p,λ^-1/2 X_̋h_pL_2()^2, and we will show () along the way. We bound the terms in the numerator and the denominator of f(X;c) separately, and choose c>0 at the end. * Upper bound on C_,h_p,λ^-1/2 X_̋h_p^2, and C_,h_p,λ^-1/2 X_̋h_pL_2() < ∞: C_,h_p,λ^-1/2 X_̋h_p^2 (a)≤C_,h_p,λ^-1/2^2 X_̋h_p^2 (b)=C_,h_p,λ^-1 X_̋h_p^2. (a) is implied by C_,h_p,λ^-1/2 X_̋h_p≤C_,h_p,λ^-1/2 X_̋h_p following from the definition of the operator norm, (b) comes from C_,h_p,λ^-1/2^2 = C_,h_p,λ^-1 holding by the C^* property and the self-adjointness of C_,h_p,λ^-1/2. Specifically, we got that C_,h_p,λ^-1/2 X_̋h_pL_2() (a)≤√(C_,h_p,λ^-1) X_̋h_pL_2() (b)=√(C_,h_p,λ^-1) X_̋h_pL_2()(c)<∞, where (a) holds by (<ref>) and the monotonicity of the integration, (b) is implied by the homogeneity of norms. (c), i.e., (), holds since X_̋h_pL_2()<∞ by our assumption recalled in (<ref>). * Lower bound on C_,h_p,λ^-1/2 X_̋h_pL_2()^2: C_,h_p,λ^-1/2 X_̋h_pL_2()^2 (a)=(C_,h_p,λ^-1C_,h_p) (b)=∑_i∈ Iλ_i/λ_i+λ(c)≥∑_i∈ Iλ_i/2C_,h_p (d)=(C_,h_p)/2C_,h_p. (a) is stated in Lemma <ref>. (b) holds by the eigenvalue-based form of the , where (λ_i)_i∈ I are the eigenvalues of C_,h_p. (c) is implied by λ_i ≤C_,h_p holding for all (λ_i)_i∈ I by the compact positive property of C_,h_p and the assumption 0<λ≤C_,h_p. (d) follows again from the eigenvalue-based form of the . One can now rewrite (C_,h_p) in the resulting expression (<ref>) as (C_,h_p) (a)=[ ∫_^dx̱⊗x̱(x̱)] (b)=∫_^d[ x̱⊗x̱] (x̱) (c)=∫_^dx̱_̋h_p^2 (x̱) (d)= X)_̋h_pL_2()^2. In (a) we used the definition of C_,h_p, in (b) we flipped the integration and the <cit.>. (c) follows from Lemma <ref> by choosing f:=x̱, in (d) we used the definition of ·L_2(). Combining this form with (<ref>), we got the lower bound C_,h_p,λ^-1/2 X_̋h_pL_2()^2 ≥ X_̋h_pL_2()^2/2C_,h_p. Taking into account the upper bound (<ref>) and the lower bound (<ref>), we arrived at f(X;c) ≤2C_,h_pC_,h_p,λ^-1 X_̋h_p^2/c^2 X_̋k^dL_2()^2, which by the monotonicity of the exponential function, means that _X∼exp(f(X;c)) ≤_X∼exp(2C_,h_pC_,h_p,λ^-1/c^2_=1/c_1^2 X_̋h_p^2/ X_̋h_pL_2()^2) (a)≤ 2. In (a), we set c^2 = 2C_,h_pC_,h_p,λ^-1c_1^2 and used (<ref>). Let X∼, C_,h_p=∫_^dx̱⊗x̱(x̱), and A∈ℒ(_̋k) a positive operator. Then A^1/2 X_̋h_pL_2()^2 = (AC_,h_p). We have the chain of equalities A^1/2 X_̋h_pL_2()^2 (a)=∫_^dA^1/2x̱_̋k^d^2 (x̱) (b)=∫_^d[(A^1/2x̱) ⊗( A^1/2x̱)] (x̱) (c)=[∫_^d(A^1/2x̱) ⊗(A^1/2x̱) (x̱)] (d)=[∫_^d A^1/2(x̱⊗x̱) A^1/2(x̱)] (e)=[ A^1/2(∫_^dx̱⊗x̱(x̱) ) A^1/2] (f)=(A^1/2 C_,h_p A^1/2) (g)=(AC_,h_p). The details are as follows. In (a) we used the definition of ·L_2(), (b) follows from Lemma <ref> by choosing f:=A^1/2x̱, in (c) the integration was swapped with the <cit.>. (d) holds by Lemma <ref> with choosing a:= b:=x̱, C:=D:= A^1/2 and using the self-adjointness of A and hence that of A^1/2. In (e) the integration was flipped with A^1/2<cit.>. (f) holds by the definition of C_,h_p. The cyclic invariance property of the trace implies (g), which concludes the derivation. The following lemma is a natural generalization of the property(C̱a̱)(Ḏḇ)= C̱(a̱ḇ)Ḏ, whereC̱,Ḏ∈^d×danda̱,ḇ∈^d. [Tensor product of linearly transformed vectors] Let $̋ be a Hilbert space andC,D ∈Ł()̋. Then for anya,b∈$̋, (Ca) ⊗ (Db) = C (a⊗ b) D^*. Specifically, when D is self-adjoint, it holds that (Ca) ⊗ (Db) = C (a⊗ b) D. Let h∈$̋ be arbitrary and fixed. Then, [(Ca) ⊗ (Db)](h) (a)= Ca ⟨ Db,h⟩_, [C (a⊗ b) D^*](h) = C(a⊗ b) (D^* h) (b)= Ca ⟨ b, D^*h⟩_(c)= Ca ⟨ Db,h⟩_. In (a) and (b), we used the definition of⊗, (c) follows from the definition of the adjoint and by the property(D^*)^*=D. The shown equality of[(Ca) ⊗ (Db)](h) = [C (a⊗ b) D^*](h)for anyh∈$̋ proves the claimed statement. [Maximum of sub-Gaussian random variables] Let (X_i)_i=1^n i.i.d.∼ be real-valued sub-Gaussian random variables. Then (max_i∈[n]|X_i| ≲√(X_1ψ_2^2log(2n/δ))) ≥ 1-δ holds for any δ∈(0,1). Let c>0 be an absolute constant. As X_1 is sub-Gaussian, by <cit.>, there exists K_1≤ cX_1ψ_2 such that (|X_1|≥ t) ≤ 2e^-t^2/K_1^2 for all t≥ 0. Let u= √(K_1^2(log(2n) + t)) (max_i∈[n]|X_i|≥ u) (a)≤∑_i=1^n (|X_i| ≥ u) (b)≤ 2ne^-u^2/K_1^2(c)= e^-t, where (a) uses that the probability of a maximum exceeding a value is less than the sum of the probability of its arguments exceeding the value, (b) uses the mentioned property of sub-Gaussian random variables, and (c) is our definition of u. Solving for δ := e^-t gives t=log(1/δ), and considering the complement yields (max_i∈[n]|X_i|≤√(K_1^2log(2n/δ))) ≥ 1-δ. Using that K_1 ≤ cX_1ψ_2 concludes the proof. § DERIVATION OF (<REF>) We detail the derivation of (<ref>).(iii)in the following. Indeed, forf̱∈_̋k^dandα∈^d, we obtain sup_f̱_̋p,m≤ 1f̱,_X∼_nξ_p(X)_̋k^d(i)=sup_f̱_̋p,m^2≤11/n∑_i=1^nf̱, ξ_p(x̱_i)_̋k^d (ii)=sup_αḴ_h_p,mα≤11/n∑_i=1^n∑_j=1^mα_iξ_p(x̱̃̃_j) ,ξ_p(x̱_i)_̋k^d (iii)=sup_αḴ_h_p,mα≤1∑_j=1^mα_i1/n∑_i=1^nh_p(x̱̃̃_j,x̱_i)_=:β_p,j(iv)=sup_αḴ_h_p,mα≤1αβ_p(v)=sup_γ2^2≤1γḴ_h_p,m,m^-1/2β_p (vi)=√(β_pḴ_h_p,m,m^-1β_p), where(i)follows from the definition of the sample distribution_nand linearity of the inner product,(ii)uses thatf̱ ∈_̋p,m, and(iii)follows from the linearity of the inner product and the definition ofh_p. By definingβ_p = (β_p,j)_j=1^m = 1/nḴ_h_p,m,n 1_n , we obtain(iv); in(v), we introduceγ= Ḵ_h_p,m,m^1/2α, and(vi)is implied by the Cauchy-Schwarz inequality. § EXTERNAL STATEMENTS This section collects the external statements that we use. Lemma <ref> gives equivalent norms forf⊗f. We collect properties of Orlicz norms in Lemma <ref>. Theorem <ref> is about the concentration of the empirical covariance, and Theorem <ref> recalls the Bernstein's inequality for separable Hilbert spaces. Theorem <ref> is a concentration result for bounded random variables in a separable Hilbert space. Theorem <ref> recalls the combination of <cit.>. [Lemma B.8; ] Define B = f ⊗ f, where f∈$̋ and$̋ is a separable Hilbert space. Then B=B = B = f^2. We refer to the following sources for the items in Lemma <ref>. Item 1 is <cit.>, Item 2 is <cit.>, Item 3 recalls <cit.>, and Item 4 is <cit.>. [Collection of Orlicz properties] Let X be a real-valued random variable. * If X is sub-Gaussian, then X- X is also sub-Gaussian, and X- Xψ_2≤Xψ_2 + Xψ_2≲Xψ_2. * If X is sub-exponential, then X- X is also sub-exponential, and satisfies X- Xψ_1≤Xψ_1 + Xψ_1≲Xψ_1. * If X is sub-Gaussian, it is sub-exponential. Specifically, it holds that Xψ_1≤√(log 2)Xψ_2. * X is sub-Gaussian if and only if X^2 is sub-exponential. Moreover, X^2ψ_1 = Xψ_2^2. [Theorem 9; ] Let X, X_1,…, X_n be i.i.d. square integrable centered random vectors in a Hilbert space $̋ with covariance operatorC. Let the empirical covariance operator beĈ_n = 1/n∑_i=1^nX_i⊗ X_i. IfXis sub-Gaussian, then there exists a constantc>0such that, for allδ∈(0,1), with probability at least1-δ, Ĉ_n- C≤ cCmax(√(r(C)/n), √(log(1/δ)/n)), provided thatn≥max{r(C),log (1/δ)}, wherer(C) := (C)/C. The following theorem by <cit.> is quoted from <cit.>. [Bernstein bound for separable Hilbert spaces; Theorem 3.3.4; ] Let ( Ω,𝒜,) be a probability space, $̋ a separable Hilbert space,B> 0,σ >0, andη_1,…,η_n : Ω→$̋ centered i.i.d. random variables that satisfy η_1^p≤1/2p!σ^2B^p-2 for all p≥ 2. Then, for any δ∈ (0,1) it holds with probability at least 1-δ that 1/n∑_i=1^nη_i≤2Blog(2/δ)/n+√(2σ^2log(2/δ)/n). [Concentration in separable Hilbert spaces; Lemma E.1; ] Let X_1,…,X_n be i.i.d. random variables with zero mean in a separable Hilbert space (,̋·) such that max_i∈[n]X_i≤ b almost surely, for some b > 0. Then for any δ∈ (0,1), it holds with probability at least 1-δ that 1/n∑_i=1^nX_i≤ b√(2log (2/δ))/√(n). The following combination of <cit.> is standard. In the following, we adapt it to our notation and give a short proof to ensure self-containedness. [Proposition 3, Proposition 7; ] Let C_,h_p =∫_^dx̱⊗x̱(x̱), C_,h_p,λ = C_,h_p + λ I (λ > 0), C̃__n,h_p,m = 1/m∑_i=1^mx̱̃̃_i⊗x̱̃̃_i. For λ > 0, let β(λ) = λ_max(C_,h_p,λ^-1/2(C_,h_p- C̃__n, h_p,m)C_,h_p,λ^-1/2), (x̱̃̃_i)_i=1^m are the Nyström points, as defined in Section <ref>, _̋h_p,m := {x̱̃̃_i| i∈[m]}, and P__̋h_p,m denotes the orthogonal projection onto _̋h_p,m. Then, it holds that (I-P__̋h_p,m)C_,h_p,λ^1/2^2 ≤λ/1-β(λ), when β(λ) < 1. Define the sampling operator Z_m : _̋h_p→^m by f ↦1/√(m)(f(x̱̃̃_i))_i=1^m. Its adjoint Z^*_m : ^m →_̋h_p (see <cit.> is given by α = (α_i)_i=1^m ↦1/√(m)∑_i=1^mα_ix̱̃̃_i. Recall that _̋h_p,m={x̱̃̃_i| i ∈ [m]} and notice that rangeP__̋h_p = rangeZ^*_m. We obtain (I-P_h_p,m)C_,h_p,λ^1/2^2 (a)≤λ(Z^*_mZ_m+λ I)^-1/2C_,h_p,λ^1/2^2 (b)=λC̃_Q̂_n,h_p,λ^-1/2C_,h_p,λ^1/2^2 where (a) follows from <cit.> with X:=C_,h_p,λ^1/2 therein. (b) is by <cit.>. Applying the second inequality in the statement of <cit.> to (<ref>) (we specialize A:=C̃__n,h_p and B:=C_,h_p therein), we obtain λC̃_Q̂_n,h_p,λ^-1/2C_,h_p,λ^1/2^2 ≤λ/1-β(λ), when β(λ)<1. The combination of (<ref>) and (<ref>) yields the stated claim.
http://arxiv.org/abs/2406.08175v1
20240612130828
Certificates and Witnesses for Multi-Objective Queries in Markov Decision Processes
[ "Christel Baier", "Calvin Chau", "Sascha Klüppelholz" ]
cs.LO
[ "cs.LO" ]
Certificates and Witnesses for Multi-Objective Queries Baier et al. Technische Universität Dresden, Dresden, Germany {christel.baier, calvin.chau, sascha.klueppelholz}@tu-dresden.de Certificates and Witnesses for Multi-Objective Queries in Markov Decision Processes The authors were supported by the German Federal Ministry of Education and Research (BMBF) within the project SEMECO Q1 (03ZU1210AG) and by the German Research Foundation (DFG) through the Cluster of Excellence EXC 2050/1 (CeTI, project ID 390696704, as part of Germany’s Excellence Strategy) and the DFG Grant 389792660 as part of TRR 248 (Foundations of Perspicuous Software System). Christel Baier0000-0002-5321-9343 Calvin Chau0000-0002-3437-0240 Sascha Klüppelholz0000-0003-1724-2586 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Certifying verification algorithms not only return whether a given property holds or not, but also provide an accompanying independently checkable certificate and a corresponding witness. The certificate can be used to easily validate the correctness of the result and the witness provides useful diagnostic information, e.g. for debugging purposes. Thus, certificates and witnesses substantially increase the trustworthiness and understandability of the verification process. In this work, we consider certificates and witnesses for multi-objective reachability-invariant and mean-payoff queries in Markov decision processes, that is conjunctions or disjunctions either of reachability and invariant or mean-payoff predicates, both universally and existentially quantified. Thereby, we generalize previous works on certificates and witnesses for single reachability and invariant constraints. To this end, we turn known linear programming techniques into certifying algorithms and show that witnesses in the form of schedulers and subsystems can be obtained. As a proof-of-concept, we report on implementations of certifying verification algorithms and experimental results. § INTRODUCTION Probabilistic model checking (PMC) is a technique for analysing and formally verifying probabilistic models, inter alia, aiming to enable higher trustworthiness of correctness of systems. However, PMC tools have been observed to contain bugs themselves <cit.>, thereby diminishing trust in the verification results. The paradigm of certifying algorithms <cit.> is a well-accepted way of addressing this issue. Instead of solely returning a result, a certifying algorithm is required to also provide an accompanying certificate, which can be used to easily check the correctness of the result in a mathematically rigorous manner. There is a plethora of certifying algorithms <cit.> and certifying verification algorithms <cit.>. Most relevant for this paper are the existing certification and explication techniques for probability or expectation constraints in Markovian models. Early work towards the explication of PMC results introduced probabilistic counterexamples as sets of paths (see e.g. <cit.>) which however tend to be huge. This motivated the generation of more concise explications, including the generation of fault-trees from probabilistic counterexamples <cit.>, causality-based explanations <cit.> and the concept of witnessing subsystems <cit.>. Witnessing subsystems are parts of a system that demonstrate the satisfaction of a property and provide useful insights into why a property is violated or satisfied. Multi-objective queries are existentially or universally quantified disjunctions or conjunctions of either multiple invariant and reachability or mean-payoff predicates, e.g. “Is it possible to reach the goal with probability at least 0.9 and encounter an error with probability at most 0.2?”. Thus, in many settings they are useful for reasoning about multiple conflicting goals <cit.>. Reachability probabilities and expected mean-payoffs in Markov decision processes (MDPs) can be characterized as linear programs (LP), extensively studied in <cit.>. The techniques for verifying existentially quantified multi-objective reachability <cit.> and multi-objective mean-payoff queries <cit.> are also based on LP characterizations. However, the authors have not considered the solutions of the LP in the context of certificates nor have witnesses in the form of subsystems been addressed. In <cit.> and its extension to mean-payoff <cit.>, the certificates for universally quantified queries are only implicitly considered and the connection to subsystems has not been studied. The verification of multi-objective queries is supported by Prism <cit.>, Multigain <cit.> and Storm <cit.>. The work of <cit.> considers certificates and witnesses based on Farkas lemma' and the LP characterizations <cit.>, referred to as Farkas certificates. The techniques for finding certificates and minimal witnessing subsystems are implemented in the tool Switss <cit.>. However, certificates and witnesses have only been considered for single reachability and invariant probabilities. Further, the computation of subsystems for invariants is not supported by Switss. The purpose of this paper is to study certificates and witnesses in the context of multi-objective queries in MDPs. Building on the characterization considered in <cit.>, we derive certificates using Farkas' lemma and show that they can be used to identify witnesses, both in the form of schedulers and subsystems, generalizing the results from <cit.>. In particular, we show how to devise witnesses in the form of schedulers with stochastic memory updates as introduced in <cit.>. Lastly, we present an implementation of our techniques and experimentally evaluate it on several benchmarks. All omitted proofs are in the Appendix. Contributions. * We present the foundations of Farkas certificates for existentially and universally quantified multi-objective reachability-invariant (<Ref>) and mean-payoff queries (<Ref>) in MDPs. * Farkas certificates for multi-objective queries are shown to have a direct correspondence to witnessing subsystems and enable the computation of minimal witnessing subsystems. We hereby generalize prior work <cit.> on single-objective reachability and invariant constraints. * We show that witnesses in the form of schedulers can first be computed for the MEC quotient <cit.> (see <Ref>) and then transferred to the underlying MDP, using schedulers with stochastic memory updates to traverse the end components of the MDP. * An implementation of our techniques with experimental results on several case studies is presented. § PRELIMINARIES Notation and Farkas' lemma. We write [k] to denote the set {1, …, k}. Let = {_0, …, _n } be a finite set. In this work, vectors and matrices are written in boldface, e.g. x and . Instead of writing x∈^, we write x∈^ and x(_i) to denote the ith entry of x. Matrices are treated similarly. The support of a vector x is defined as x = {∈|x() > 0}. Throughout this work we consider ⋈ ∈{<, ≤, >, ≥}, ≳ ∈{>, ≥} and ≲ ∈{<, ≤}. Farkas' lemma is a fundamental result of linear algebra and linear programming. Essentially, it relates the solvability of a linear system with the unsolvability of another one. Let A∈^m × n and b∈^m, then the following holds: * ∃x∈^n Ax≤b∃y∈^m A^⊤y≥ 0 b^⊤y < 0 * ∃x∈^n Ax = b∃y∈^m A^⊤y = 0 b^⊤y≠ 0 Markov decision processes. A Markov decision process (MDP) <cit.> is a tuple (, , , ) where is a finite set of states, a finite set of actions, ∈ [0, 1]^ an initial distribution and ×→S a partial transition function, where S denotes the set of all probability distributions over S. We often write (, , ') instead of (, )('). A state-action pair (, ) ∈× is said to be enabled if (, ) is defined and we write _ to denote the set of all enabled pairs. The set of enabled actions in is defined by () = {∈| (, ) ∈_}. A state is absorbing if (, , ) = 1 for all ∈(). We write (, , , ) for the MDP (, , , ) where is Dirac in . A path in is a sequence = _0 _0 _1 _1 … where (_i, _i, _i+1) > 0 for all i. refers to the last state of a finite path and () (()) is the set of all infinite (finite) paths starting in . An end component (EC) of is a set ∅⊂⊆_ such that the induced sub-MDP is strongly connected. The states of are denoted with () and we refer to (, ) ∈ as internal. An EC is a maximal EC (MEC) if there is no another EC ' such that ⊂'. () denotes the set of MECs of and ⊆ the set of states contained in a MEC. We consider the MEC quotient from <cit.>, akin to the quotient from <cit.>. W.l.o.g. we assume that the actions of the states are pairwise disjoint. The MEC quotient of is then given by = ({_|∈()}, {τ}, _in, ) where = ({_|∈() }) ∖. We define ι→ as ι() = if ∈∖ and ι() = _ if ∈(). The initial state is given by _in = ι(). For states ∈∖ we define (, , ') = (, , ι^-1(')) for all ' ∈. For all MECs , ∈() and ∈(), we set (_, , ') = (, , ι^-1(')) for all ' ∈ if (, ) ∉ and set (_, τ, _) = 1, i.e. taking τ corresponds to staying in forever. We consider a discrete-time Markov chain (DTMC) to be an MDP with a single action that is enabled in all states. Thus, we omit the actions when speaking of paths in DTMCs and write (, , ) instead of (, , , ). Schedulers and probability measure. A scheduler maps a finite path in an MDP to a distribution over the available actions, i.e. () → with ()⊆(), and is memoryless if it can be seen as a function of the form →. Let ^ and ^ denote the set of unrestricted and memoryless schedulers of . A scheduler can also be represented as a tuple (α_𝗎𝗉𝖽𝖺𝗍𝖾, α_𝗇𝖾𝗑𝗍, , _) where is a set of memory locations[Infinitely many memory locations might be needed.], _∈ an initial memory distribution, α_𝗎𝗉𝖽𝖺𝗍𝖾××→ a stochastic memory update and α_𝗇𝖾𝗑𝗍×→ the next move function <cit.>. The update α_𝗎𝗉𝖽𝖺𝗍𝖾 takes an action that has lead to state and current memory location to update the memory location. Depending on the current location , α_𝗇𝖾𝗑𝗍 schedules the available actions in . We consider the standard probability measure _^ <cit.>. For ⊆, we write _, ^() and _, ^() to denote the probability of eventually reaching and only visiting under when starting in , respectively. We define _^(, ) = lim_n →∞1/n∑_t=0^n-1_, ^{_0 _0 _1 _1 …| (_t, _t)= (, )} for all ∈ and ∈() if the value exists <cit.>. _^(, ) describes the expected frequency of playing state action pair (, ) under . For a given reward vector r∈^_ and a path = _0 _0 _1 _1 …∈(), the mean-payoff is defined as r() = lim inf_n →∞1/n∑_t=0^n-1r(_t, _t) and r --r. The expected mean-payoff is then defined as [, ]r for ∈ and ∈_. Whenever we omit the subscript , we refer to the probability and expectation in . Subsystems. A subsystem of an MDP = (, , , ) is an MDP ' = (' {}, , , ') if ∈' ⊆, is absorbing and for all , ' ∈' and ∈ we have '(, , ') ∈{0, (, , ')} <cit.>. Further, an action is enabled in in ' if and only if is enabled in in . Additionally, for reward vectors r∈^_ for , we consider the corresponding reward vector r' ∈^_' where r'(, ) = r(, ) for all ∈' and ∈() and r'(, ) = min_(', ') ∈r(', ') for all ∈. Intuitively, once is reached the smallest possible reward is collected. A subsystem _' is said to be induced by a set ' ⊆ if it consists of the states ' {} and all transitions leading to ∖' are redirected to <cit.>. More precisely, for all , ' ∈' and ∈ we have '(, , ') = (, , ') and '(, , ) = ∑_' ∈∖'(, , '). Multi-objective queries. A reachability, invariant or mean-payoff predicate is an expression of the form _^() ⋈λ, _^() ⋈λ or []r⋈λ where λ∈. A multi-objective reachability, reachability-invariant or mean-payoff property () is then a conjunction or disjunction of reachability, reachability and invariant or mean-payoff predicates where = (λ_1, …, λ_k)^⊤ contains the bounds. We refer to the former as conjunctive and the latter as disjunctive property and write ⋈ to refer to a property where all predicates have ⋈ as comparison operator. A multi-objective query is then an existentially or universally quantified property, i.e. ∃∈() or ∀∈(). We distinguish between reachability (), reachability-invariant () and mean-payoff () queries and use the quantifier and logical connective to refer to the query type, e.g. to refer to existentially-quantified conjunctive queries. We omit the super- and subscript and term “multi-objective” whenever it is clear from the context. § CERTIFICATES AND WITNESSES FOR -QUERIES Now we consider certificates and witnesses for -queries. An overview of our approach is shown in <Ref>. We start from an arbitrary MDP 𝒩 and a -query _𝒩 containing only lower bounds. Note that every -query can be rephrased to a -query containing only lower bounds[^() ≲λ⇔^( (∖)) ≳ 1-λ and ^() ≲λ⇔^( (∖)) ≳ 1-λ]. Then, we construct the product MDP =𝒩× and corresponding -query _. The automaton keeps track of the state sets that have already been visited (see e.g. <cit.>). This is necessary because schedulers generally require exponential memory for such queries <cit.>. Motivated by the fact that the computation of the MECs can be made certifying <cit.>, we then consider the MEC quotient . Crucially, this allows us to rephrase _𝒩 to a -query _ℳ̂ containing only lower bounds. More precisely, invariant predicates _𝒩^() ≳λ can be rephrased to reachability predicates of the form _^( T) ≳λ where T ⊆{_1, …, _ℓ}. Note that the quotient is in reachability form <cit.>, i.e. its target states are absorbing. This allows us to restrict our attention to “simple” certificates for -queries and MDPs in reachability form, instead of tackling certificates for 𝒩 and _𝒩 directly. The reduction from 𝒩 to (upper part in <Ref>) uses well-known methods from the literature. Since the reduction is simple and can be made certifying, we use the certificates for to act as certificates for 𝒩 and . We detail the reduction in <Ref>. In <Ref> we then derive Farkas certificates for -queries and MDPs in reachability form from known techniques for multi-objective model checking <cit.>, which have not been considered through the lens of certifying algorithms yet. We make the certificates explicit and show that they yield witnesses for , both in the form of witnessing schedulers and witnessing subsystems. Conceptually, a scheduler describes how to control the MDP, whereas a subsystem highlights relevant parts of the MDP. Depending on the use case, one or the other may be more appropriate, and we enable a more flexible perspective. Lastly, we present new techniques for transferring witnesses from to 𝒩 in <Ref> (lower part in <Ref>). We discuss how schedulers and subsystems for 𝒩 can be constructed from their respective counterpart in . If contains many large MECs, this allows us to tackle each MEC individually, resulting in smaller and potentially more tractable subproblems. §.§ Farkas Certificates and Witnesses for -Queries For the remainder of this subsection, we fix an MDP = with absorbing states and consider reachability properties ⋈() with predicates ^(_1)⋈λ_1,…,^(_k) ⋈λ_k where _1, …, _k ⊆. W.l.o.g. we assume that for every ∈ there exists such that ^_() > 0 <cit.>. For a concise presentation, we define ∈^× where ((, ), ') = 1 -(, , ') if =' and ((, ), ') =-(, , ') otherwise for all (, ) ∈ and ' ∈ <cit.>. Let ∈^× [k] be defined as ((s,a), i) = ∑_' ∈_i(, , ') for all (s,a) ∈ and i ∈ [k]. is said to be EC-free if its only ECs are formed by states in . Farkas certificates are vectors that satisfy linear inequalities derived from LP-characterizations for MDPs <cit.> and Farkas' lemma. Given a certificate, we can easily validate whether a query is indeed satisfied, by checking whether the certificate satisfies the inequalities. In contrast, if a user is given a scheduler, they need to compute the probability in the induced Markov chain to validate the result, which is not as straightforward. For -queries, certificates have been considered in <cit.> and we summarize their results in our notation and setting. [Certificates for -queries]lemmacertificatesExistsCq For a conjunctive reachability property ≳() we have: * ∃∈≳() ∃y∈^^⊤y≤^⊤y≳ and if is EC-free we also have: * ∃∈≲() ∃y∈^^⊤y≥^⊤y≲ Follows from <cit.> and <cit.>. The value y(, ) can be interpreted as the frequency of playing (, ) under a scheduler that reaches almost surely <cit.>. To satisfy queries with upper bounds in MDPs with ECs, it might be required to reach F with probability smaller than 1. Hence, the restriction to EC-free MDPs. The certificates for -queries can be derived by using the distributivity of the existential quantifier and applying the results from the single-objective setting to each disjunct <cit.>. Likewise, the certificates for -queries can be derived. To the best of our knowledge, no explicit characterization of the certificates for -queries that also enable finding witnessing subsystems has been discussed yet. The works <cit.> are mainly interested in checking the query and do so by considering the dual -query. The following lemma provides an explicit presentation of the certificates. An overview of certificates for all query types can be found in the Appendix, including - and -queries. [Farkas certificates for -queries]lemmacertificatesUniversalDq For a disjunctive reachability property ⋈() we have: * ∀∈≤() ∃x∈^∃z∈^[k]∖{0}x≥zx() ≤λ^⊤z * ∀∈<() ∃x∈^∃z∈^[k]x≥zx() < λ^⊤z and if is EC-free we also have: * ∀∈≥() ∃x∈^∃z∈^[k]∖{0}x≤zx() ≥λ^⊤z * ∀∈>() ∃x∈^∃z∈^[k]x≤zx() > λ^⊤z Application of Farkas' lemma (<Ref>) to <Ref>. We can now devise a simple certifying verification algorithm based on <Ref> and <Ref>. Given a query , the algorithm tries to find a certificate for and if it cannot find such certificate, it computes a certificate for . Note that certificates can be computed via linear programming in polynomial time <cit.>. The decision algorithm for -queries in <cit.> checks satisfaction by computing a sequence of vectors w that are akin to the vector z in <Ref>. However, x is not computed nor characterized. We note that it is not obvious how to turn it into a certifying algorithm, because no easily checkable certificate arises from the computations when the query holds. The certificates from Lemma <ref> and <ref> are related to witnessing schedulers and subsystems. The relation to schedulers is well-known and we summarize existing results. For subsystems this is less obvious and we now generalize <cit.>. Witnessing schedulers and Farkas certificates. For -queries, the correspondence between the certificates y and memoryless schedulers in is well-known <cit.>. The memoryless scheduler , defined by ()() = y(, ) /∑_' ∈()y(, ') if ∑_' ∈()y(, ') > 0 and any distribution over the available actions otherwise for all (, ) ∈, is known to satisfy the query. A -query asks a property to hold under all schedulers and it is less clear how to obtain a single scheduler demonstrating the satisfaction. Let ⋈() be a disjunctive and ⋈() be a conjunctive property with the same predicates and let 𝖠𝖼𝗁={' ∈ [0, 1]^k |∃⋈̸(')}. Observe that ∀⋈() if and only if ∉𝖠𝖼𝗁. In <cit.>, this relation is used to determine a vector z (as described in <Ref>) that separates from 𝖠𝖼𝗁, i.e. z^⊤ > z^⊤' for all ' ∈𝖠𝖼𝗁. This amounts to finding a scheduler ^* such that ∑_i=1^k z(i) ·^^*(_i) γ^* is maximal. If z^⊤≳γ^* holds, we can then conclude that ∉𝖠𝖼𝗁 and consequently ^* can then serve as witness for the satisfaction of ∀⋈(). Witnessing subsystems and Farkas certificates. To consider witnesses in the form of subsystems, we first show that the satisfaction of a lower-bounded query (not necessarily -query) in a subsystem implies that the query is also satisfied in the original MDP. Crucially, this allows us to use a subsystem as a witness for the satisfaction in the original MDP. [Monotonicity]theoremsubsystemsLowerBounds Let 𝒩 be an arbitrary MDP and 𝒩' be a subsystem of 𝒩. Further, let ≳() be a multi-objective property. Then we have: * ∃' ∈^𝒩'≳'() ∃∈^𝒩≳() * ∀' ∈^𝒩'≳'() ∀∈^𝒩≳() <Ref> is precisely the reason for considering lower-bounded -queries instead of -queries with mixed bounds. For the latter, monotonicity does not hold in general, as adding states might result in surpassing a threshold (also see <cit.>). It has been shown that there is a correspondence between witnessing subsystems and Farkas certificates for single-objective reachability <cit.>. Now we generalize the previous results to multi-objective reachability. Let ≳() be the polyhedron formed by the conditions in <Ref> for -queries and ≳(λ) the polyhedron formed by the conditions in <Ref>. Let y = {∈|∃∈() y(, ) > 0} <cit.>. theoremfarkasSupportSubsystem Let ≳() be a disjunctive and ≳() be a conjunctive reachability property and ' ⊆. Then we have: * ∃y∈≳(λ) y⊆' ∃' ∈^_'≳'() and if is EC-free we also have: * ∃ (x, z) ∈≳(λ) x⊆' x≥ 0 ∀' ∈^_'≳'() In essence, the subsystems that are induced by the support of the certificates satisfy the query. Consequently, finding witnessing subsystems with a small number of states corresponds to finding certificates with a small support. For the single-objective setting, this observation has been made in <cit.>, where mixed-integer LPs are used to find minimal certificates. Note that for downstream tasks, e.g. for manual inspection <cit.>, it can be desirable to obtain minimal witnessing subsystems. Based on <Ref>, we also use MILPs to compute certificates with a minimal support, thereby yielding minimal witnessing subsystems. The MILPs in <Ref> use a Big-M encoding (see e.g. <cit.>) where M is a sufficiently large constant. We refer to the <Ref> for a discussion on the choice of M. §.§ Transferring Witnesses Recall that we reduce a -query _𝒩 in an MDP 𝒩 to a corresponding query _ in the product MDP and then to a -query _ in the MEC quotient , allowing us to restrict our attention to certificates and witnesses for -queries for MDPs in reachability form. Now we describe the transfer of witnesses for _ in to witnesses for _𝒩 in 𝒩 (lower part in <Ref>). Transferring witnessing subsystems. We can easily obtain a witnessing subsystem for 𝒩 from a witnessing subsystem of . Essentially, every state of corresponds to a set of states of 𝒩. For an arbitrary subsystem ' induced by ' ⊆, we consider the corresponding set of states _𝒩' ⊆_𝒩. Let 𝒩' be the subsystem induced by _𝒩'. Then the following holds: lemmatransferSubsystem If ' satisfies _, then 𝒩' satisfies _𝒩. Recall that if 𝒩' satisfies _𝒩, then so does 𝒩 (<Ref>). While the minimality of a subsystem for is generally not preserved when transferring the subsystem to 𝒩, we can weight states of the MEC quotient by the number of states of 𝒩 they represent in the MILPs, resulting in small subsystems for 𝒩. Consider the MDP 𝒩 in <Ref> and query _𝒩=∀∈^𝒩_𝒩^({_0, _1}) ≥ 0.25 _𝒩^({_2}) ≥ 0.25. We construct the product (<Ref>), where an automaton state q_i with i > 0 indicates that a state outside {_0, _1} and q_2 the state _2 has been visited. We then consider the MEC quotient (<Ref>) and rephrase _𝒩 to _=∀∈^_^(_2) ≥ 0.25 _^(_3) ≥ 0.25. Reaching _2 corresponds to staying in a MEC without seeing a state outside {_0, _1} and reaching _3 corresponds to having visited _2. A witnessing subsystem for is given by the non-grayed out states in <Ref> and a subsystem for 𝒩 is obtained by considering the corresponding states. Transferring witnessing schedulers. Let us now construct a scheduler for = (, , , ) from a memoryless scheduler ∈^. We then obtain a scheduler for 𝒩 from , by interpreting the automaton component as additional memory locations <cit.>. To construct a scheduler , special care needs to be taken when leaves a MEC state _ with probability 0 < p < 1. Here, a standard memoryless scheduler for does not suffice[A memoryless scheduler either leaves or stays in a MEC almost surely.]. Instead we construct an equivalent scheduler with only 2 memory locations for and allow stochastic memory updates <cit.>. We proceed as follows: [label=(*)] * For every MEC ∈() we construct a scheduler _𝗌𝗍𝖺𝗒 that stays in almost surely, * and a scheduler _𝗅𝖾𝖺𝗏𝖾 that leaves it almost surely with the same probabilities as (normalized with p), * and finally use these as building blocks for a scheduler with 2 memory locations and stochastic memory update for . Conceptually, upon entering a MEC in , either switches to _𝗌𝗍𝖺𝗒 or _𝗅𝖾𝖺𝗏𝖾. <ref> Construction of _𝗌𝗍𝖺𝗒: A memoryless scheduler _𝗌𝗍𝖺𝗒 that stays inside a MEC can be constructed by taking every internal action with a positive probability. <ref> Construction of _𝗅𝖾𝖺𝗏𝖾: Let _ be the state corresponding to in . Let p = 1 -(_)(τ) be the probability with which leaves the MEC . The construction of the memoryless scheduler _𝗅𝖾𝖺𝗏𝖾 is intricate, as we have to ensure that we leave via a state-action pair (, ) with the same probability as plays (_, ) normalized with p. To show that such a scheduler can be constructed, we first establish a result for strongly connected DTMCs. Let = (, , ) be a strongly connected DTMC. Let ∈ [0,1]^ and _ be the DTMC resulting from by adding fresh copies ' for all states ∈ and transitions from to ' with probability () (the other transitions are rescaled with 1 - ()). lemmadtmcFrequencies For every distribution ∈ there exists a vector ∈ [0,1]^ such that for all states we have __(') = (). We show that can be obtained by solving a system of linear equations that characterize the expected frequencies of each state in _. For any distribution , we can redirect transitions of a strongly connected DTMC such that it is left according to . We consider the scheduler ' that takes every internal action in uniformly and as such induces a strongly connected DTMC. We instantiate such that it captures the probability with which leaves _ via a state ∈() and use the resulting to “alter” ', thereby obtaining _𝗅𝖾𝖺𝗏𝖾. We instantiate and as follows. For all ∈() we define Δ() = ∑_(t, ) ∈∖_^(ι(t), ) ·(t, , ) + () Recall that ι maps a state from to the corresponding one in and that is Dirac in the initial state of . Then Δ() describes the frequency with which is entered via state . We then define the initial distribution () = Δ()/∑_t ∈()Δ(t) for ∈(). Let ^() = ∑_(, ) ∈∖ C_^(_, ) for ∈(), i.e. frequency of leaving via . Let ^(_) = ∑_∈()^(). For every state ∈() and ∈() with (, ) ∉ we define: (, ) = ^(, )/^(_) and () = ∑_(, b) ∈∖(, b) For each state ∈() and ∈() we then define _𝗅𝖾𝖺𝗏𝖾()() = (1 - ()) ·'()() if (, ) ∈ and _𝗅𝖾𝖺𝗏𝖾()() = () · (^(, ) / ^()) if () > 0 and _𝗅𝖾𝖺𝗏𝖾()() = '()() otherwise. <ref> Construction of witnessing scheduler: Let _𝗌𝗍𝖺𝗒_ and _𝗅𝖾𝖺𝗏𝖾_ be the schedulers that stay in and leave MEC almost surely, respectively, as previously described. Let p_ be the probability with which leaves MEC . If is in a MEC, then p_𝗂𝗇𝗂𝗍 is the probability with which leaves the containing MEC, otherwise p_𝗂𝗇𝗂𝗍 = 1. We define = (α_𝗎𝗉𝖽𝖺𝗍𝖾, α_𝗇𝖾𝗑𝗍, {_0, _1}, _) where _(_0) = p_𝗂𝗇𝗂𝗍 and _(_1) = 1-p_𝗂𝗇𝗂𝗍. Further, the next move function is given as α_𝗇𝖾𝗑𝗍(, ) = (), if ∉ _𝗌𝗍𝖺𝗒_(), if = _1, ∈() _𝗅𝖾𝖺𝗏𝖾_(), if = _0, ∈() The memory update function is defined as α_𝗎𝗉𝖽𝖺𝗍𝖾(, , )(_0) = p_ and α_𝗎𝗉𝖽𝖺𝗍𝖾(, , )(_1) = 1-p_ if ∈() and there does not exist t ∈() with (t, ) ∈. Otherwise, we set α_𝗎𝗉𝖽𝖺𝗍𝖾(, , )() = 1. The scheduler “flips a coin” upon entering MECs to decide whether it stays in or leaves the MEC. Depending on the outcome, it either switches to a scheduler _𝗌𝗍𝖺𝗒 or _𝗅𝖾𝖺𝗏𝖾 that stay in or leave the MEC, respectively. Thereby, we ensure that stays in and leaves a MEC with same probability as . Outside MECs, behaves like . Note that once switches to _1 it cannot change back to _0 and stays in the MEC almost surely. Consider MDP 𝒩 from <Ref> and query ∃∈^𝒩_𝒩^( (∖{_2})) ≥ 0.5 _𝒩^({_2}) ≥ 0.5. The query for is given by ∃∈^_^({_1,_2})≥ 0.5 _^({_3}) ≥ 0.5. We consider a witnessing scheduler that takes a, d and c with probability 1/2, 1/4 and 1/4, respectively. We construct a corresponding scheduler for , focusing on MEC _1. Note that stays with probability 1/4 in _1, i.e. (__1)(τ) = 1/4. A scheduler _𝗌𝗍𝖺𝗒 can be constructed by only taking internal actions. For _𝗅𝖾𝖺𝗏𝖾 we need to ensure that _1 is left correctly, that is, if leaves _1 it does so with probability 1/3 via d and 2/3 via a, hence = (1/3, 2/3) (see <ref>). Because _1 is only entered via (_3, q_1), we set = (1, 0) and applying <Ref> then yields = (1/6, 2/5). Consequently, we define _𝗅𝖾𝖺𝗏𝖾(_3, q_1)(d) = 1/6 and _𝗅𝖾𝖺𝗏𝖾(_4, q_1)( a) = 2/5. We then construct by combining the different schedulers. Particularly, changes its memory location from _0 to _1 with probability 0.25 when taking b and arriving in (_3, q_1). § CERTIFICATES AND WITNESSES FOR -QUERIES Building on the ideas for -queries, we address certificates and witnesses for multi-objective -queries. We first discuss the certificates for - and -queries. The former characterization is well-studied <cit.>, while the latter again has only been implicitly considered <cit.>. Analogously, we use the certificates to find witnessing subsystems. For the remainder of this section, we fix an arbitrary MDP = (, , , ) with reward vectors r_1, …, r_k ∈^. Farkas Certificates for -queries. While our certificates closely resemble classical results from <cit.> and <cit.>, the conditions are slight variations thereof, allowing us to find minimal witnessing subsystems. We define r_min∈^[k] by r_min(i) = min_(, ) ∈r_i(, ) for all i ∈ [k], i.e. the vector containing the minimal reward for each reward vector r_i. Similarly, we define R_min∈^× [k] by R_min(, i) = r_min(i) for all ∈ and i ∈ [k]. [Certificates for -mean-payoff queries]lemmacertificatesExistsCQ There exists a scheduler ∈^ such that _i=1^k [, ]r_i≥λ_i if and only if there exist x, y∈^ and z∈^ such that: * ∀∈() +∑_(', ') ∈(', ', ) ·y(', ') =z() +∑_∈()y(, ) +x(, ) * ∀∈∑_(', ') ∈(', ', ) ·x(', ') = ∑_∈()x(, ) * ∀ i ∈ [k] ∑_(, ) ∈x(, ) ·r_i(, ) + ∑_∈z() ·r_min(i) ≥λ_i Let ℋ_^𝖬𝖯() ⊆^×^×^ denote the corresponding polyhedron. In <cit.>, x and y correspond to the recurrent and transient flows, capturing the frequency of the state-action pairs in the limit and transient part, respectively. Compared to <cit.>, we consider the additional variable z, allowing flow to be “redirected” to an implicit state where the worst possible reward is collected. [Certificates for -mean-payoff queries]lemmacertificatesForallDQ For all schedulers ∈^ we have _i=1^k [, ]r_i≥λ_i if and only if there exist g, b∈^ and z∈^[k] such that: * ∀ (, ) ∈g() ≤∑_' ∈(, , ') ·g(') * ∀ (, ) ∈g() +b() ≤∑_' ∈(, , ') ·b(') +∑_i=1^k z(i) ·r_i(, ) * ∀∈g() ≥∑_i=1^k z(i) ·r_min(i) * g() ≥∑_i=1^k λ_i ·z(i) and ∑_i=1^k z(i) = 1 Let ℱ_^𝖬𝖯() ⊆^×^×^[k] denote the corresponding polyhedron. We obtain the certificates via application of Farkas' lemma to the characterizations given in <cit.> and <cit.>. Analogous to the reachability setting <cit.>, one can interpret z as a separating vector. This is used in <cit.> where the vector z arises as by-product of verifying the dual -query. However, neither have certificates nor witnessing subsystems been addressed. In <cit.>, g and b are referred to as gain and bias, capturing the mean-payoff and the expected deviation until the mean-payoff “stabilizes” <cit.>, respectively. Witnessing Subsystems for -queries. We focus on obtaining witnessing from the certificates. Schedulers have been extensively studied in <cit.>. Recall that in subsystems (<Ref>), the smallest possible reward is collected in . [Certificates and subsystems]theoremcertificatesAndSubsystems Let ' ⊆. Then we have: * ∃' ∈^_'_i=1^k [_', ]'r_i≥λ_i if and only if there exists (x, y, z) ∈ℋ_^𝖬𝖯() such that xy⊆'. * ∀' ∈^_'_i=1^k [_', ]'r_i≥λ_i if and only if there exists (g, b, z) ∈≥() such that g - R_minz⊆'. To find minimal witnessing subsystems for -queries, we need to set as many entries of x and y to zero as possible, effectively redirecting the flow to . For -queries we strive to set as many entries g() to the minimal possible reward as possible, indicating that transitions to such states can be safely redirected to . The corresponding MILPs for minimizing the support are similar to the ones for reachability (<Ref>) and are described in <Ref>. § EXPERIMENTS Setup. We have implemented the computation of certificates and witnesses for multi-objective queries in a prototypical Python tool, using Storm's Python interface <cit.> for model parsing and MEC quotienting and Gurobi <cit.> for solving (MI)LPs. Our tool exports certificates as JSON files, witnessing subsystems in Storm's explicit format and schedulers as DOT file. For -queries, witnessing subsystems can also be exported as Prism programs. Our experiments have been run on a machine with an AMD Ryzen 5 3600 CPU (3.6 GHz) and 16 GB RAM. The time limit of Gurobi has been set to 5 minutes. All time measurements are given in seconds and correspond to wall clock times. We consider the consensus () and firewire () models from the Prism benchmark <cit.>, describing a shared coin and network protocol, respectively. Further, the zeroconf model () from <cit.> describes the configuration of IP addresses under certain environment assumptions. Additionally, a client-server mutex protocol from <cit.> (), a dining philosophers model from <cit.> () and a model describing a network of sensors communicating over a lossy channel <cit.> (). We compute certificates for queries considered for the mentioned models in <cit.>, compute schedulers as described in <Ref> for -reachability queries and compute witnessing subsystems via MILPs. We are unaware of other tools for computing witnessing subsystems for multi-objective queries and verified all queries with Storm <cit.>. Results. The results are summarized in <Ref>, where the upper part is concerned with - and the lower with -queries. We now describe the columns. The column k is the number of predicates and # the number of different bounds we considered. shows the time for building the LP (for -queries this includes the time for construction of product MDP and MEC quotient), the LP solving time and the time for computing schedulers from the certificates. The times for computing witnessing subsystems is shown in the column . We provide mean, min, max and standard deviation because the times vary strongly. The number of timeouts is shown in and TO means that all computations timed out. In case of a timeout, the best subsystem found so far is returned. is the number of states in the subsystem relative to the original MDP (in percentage). Cost of certifying algorithm. Following an LP-based approach <cit.> for verifying a given multi-objective query , the simple certifying algorithm from <Ref>, needs to solve an LP for both and in the worst case. Thus, the total costs of a certifying algorithm arises from solving two LPs instead of a single one. Our experiments (detailed in <Ref>) indicate that the solving time for the LP for and are comparable. Thus, can be interpreted as the overhead of such certifying algorithm. We observe that solving the LPs is relatively fast and that model building is currently the bottleneck in our prototypical implementation. We plan on providing a more efficient and competitive implementation in future work. Storm verified most queries in less than 0.1 seconds. We refer to <Ref> for details and note that the verification algorithm as implemented in Storm <cit.> is based on value-iteration and non-certifying (c.f. <Ref>). Witnesses. Schedulers can quickly be computed from the certificates. For models where the quotient is smaller than the product MDP, e.g. , our techniques can be useful. As for single-objectives <cit.>, finding small witnessing subsystems is challenging, particularly for -queries. For -queries, we often find subsystems in a reasonable amount of time. The number of states in the subsystem heavily depends on the bounds, query type and model. The subsystems for -queries can be substantially smaller than the original MDP, e.g. for even 0.61% the size of the original MDP, but can vary strongly for -queries, e.g. for between 2% to 100%. We refer to <Ref> for plots and details. Our implementation, experiments and results are made available on Zenodo <cit.>. § CONCLUSION We have given an explicit presentation of certificates for multi-objective queries and their relation to schedulers and witnessing subsystems, thereby generalizing <cit.>. Our prototypical tool implements the presented techniques and has been applied on several case studies. In future work, we want to provide tool support for computing certificates more efficiently and address certificates and witnesses for richer modeling formalisms. § PROOFS FOR <REF> §.§ Proofs for <Ref> §.§.§ Farkas Certificates In this section we provide proofs for results in <Ref> and also certificates for -queries and -queries, which work via simple reduction to the single-objective case <cit.>. The following lemma summarizes the results from Theorem 3.2 in <cit.> in our setting and notation. Let = be an MDP in reachability form and let ≳() be a conjunctive property. Then we have: * ∃∈≳() ∃y∈^^⊤y = ^⊤y≳ and if is EC-free we also have: * ∃∈≳() ∃y∈^^⊤y = ^⊤y≲ <ref> follows from Theorem 3.2 in <cit.>. For EC-free MDPs, we can directly change the lower bounds in the proof of Theorem 3.2 to upper bounds. The reason is that we know that in an EC-free MDP the absorbing states are reached almost surely. Thus, we get <ref>. * To prove <ref>, we simply apply Lemma 3.17 from <cit.> to <Ref> <ref>. Because Lemma 3.17 in <cit.> relies on EC-freeness, we cannot do the same for <ref>. We show that if there exists a y∈^ that satisfies ^⊤y≤^⊤y≳, then there also exists y' ∈^ such that ^⊤y' = ^⊤y' ≳. Consider the MDP ' resulting from by adding a fresh action τ to each state, that moves to with probability 1. Clearly, if a -query with lower bounds is satisfied in , then also in ' and vice versa. The reason is that the set of paths that reach the targets in and ' is equivalent. Let ' and ' be defined as follows: ' = [ ; I ] ' = [ ; I· 0 ] where I is the identity matrix I∈{0, 1}^×. Now suppose we have y∈^ that satisfies ^⊤y≤^⊤y≳. Then there exists z∈^ such that ^⊤y + z =. In particular, we have (')^⊤[ y; z ] = (')^⊤[ y; z ]≳ From <Ref> <ref>, we then know that (y, z) is a certificate for the satisfaction of the -query = ∃' ∈^'≳() in '. As described above, it is directly clear that is also satisfied in and applying <Ref> <ref> again, yields the existence of a desired y' with ^⊤y' = ^⊤y' ≳. This concludes the proofs. Let y() for all states ∈ be defined as follows: y() = ∑_' ∈()y(, ') From Theorem 3.2 in <cit.> we know that a corresponding memoryless scheduler ' ∈^' that satisfies ≳'() can be constructed by setting '()() = y(, )/y() + z() '()(τ) = z()/y() + z() for all (, ) ∈ with y() + z() > 0. For the other states, we define to play any available action except τ. Observe that once the τ action is played, the probability to reach any target is 0. Thus, we cannot decrease the probability of reaching the target states, if we redistribute the probability of playing the τ action. Consider the scheduler ∈^ with ()() = '()() + y(, )/y()·'()(τ) for all (, ) ∈ with y() > 0. Due to the observation that not playing the τ action cannot decrease the probability of reaching the targets, also satisfies the query and can also be used as scheduler for as it does not play the τ action. Further, we have: ()() = '()() + y(, )/y()·'()(τ) = y(, )/y() + z() + y(, )/y()· (1 - y()/y() + z()) = y(, )/y() + z() + y(, )/y() - y(, )/y() + z()) = y(, )/y() for all states ∈ with y() > 0. This detour shows us that we can directly use a certificate y that satisfies ^⊤y≤^⊤y≳ to construct a scheduler for . Let us now discuss the proof of <Ref>. To this end, we show <Ref> and <Ref> first. <Ref> then follows from these lemmas. Let = be an MDP in reachability form (possibly with ECs) and ⋈() a disjunctive property. Then we have: * ∀∈<() ∃x∈^∃z∈^[k]x≥zx() < λ^⊤z and if is EC-free we also have: * ∀∈>() ∃x∈^∃z∈^[k]x≤zx() > λ^⊤z Let us prove the statement for DQs with strict upper bounds. ∀∈_i=1^k_, ^(_i) < λ_i ∃∈_i=1^k_, ^(_i) ≥λ_i <Ref><ref> ∃y∈^^⊤y = ^⊤y≥λ ∃y∈^[ A^⊤; -A^⊤; -T^⊤ ]y≤[ ; -; -λ ] <Ref><ref> ∃x_1, x_2∈^S ∃z∈^[k](A -A -T) [ x_1; x_2; z ]≥ 0 (^⊤ -^⊤ -λ^⊤) [ x_1; x_2; z ] < 0 ∃x∈^S ∃z∈^[k]x≥z^⊤x < λ^⊤z Observe that ^⊤x = x(). This completes the proof. For strict lower bounds we assume to be EC-free. Then the proof is analogous and we can apply <Ref> <ref>. Note that the statement for lower bounds only holds for EC-free MDPs, because <Ref> <ref> relies on EC-freeness. Let = be an MDP in reachability form (possibly with ECs) and ⋈() a disjunctive property. Then we have: * ∀∈≤() ∃x∈^∃z∈^[k]∖{0}x≥zx() ≤λ^⊤z and if is EC-free we also have: * ∀∈≥() ∃x∈^∃z∈^[k]∖{0}x≤zx() ≥λ^⊤z Again, we only prove the statement for DQs with upper bounds. For lower bounds the proof is analogous. ∀∈_i=1^k_, ^(_i) ≤λ_i ∃∈_i=1^k_, ^(_i) > λ_i ∃ε∈_>0∃∈_i=1^k_, ^(_i ) ≥λ_i + ε <Ref><ref> ∃ε∈_>0∃y∈^^⊤y = _i=1^kt_i^⊤y≥λ_i + ε ∃ε∈_>0∃y∈^^⊤y = ^⊤y≥λ + 1·ε To apply Farkas' lemma, we need to scale the right-hand side of the equality and inequality with a variable. We show that ^⊤y = ^⊤y≥λ + 1·ε has a solution if and only if ^⊤y = ·γ^⊤y≥λ·γ + 1·ε has a solution, where ε∈_>0, γ∈ and y∈^. Obviously, the former implies the latter, since we can use the solution of the former and choose γ = 1 to obtain a solution for the latter. For the other direction, suppose we have a solution ε, y and γ with γ > 0. Let y' = y/γ and ε' = ε/γ, then we have ^⊤y' = ^⊤y/γ = ·γ/γ = and ^⊤y' = ^⊤y/γ≥λ·γ + 1·ε/γ = λ + 1·ε'. Hence, y' and ε' are a solution to the first system. Now suppose γ = 0, so we have ^⊤y = 0. Suppose y = 0, then we get ^⊤y = 0 ≥1·ε > 0. Hence y = 0 cannot hold. Suppose y≠ 0. The following observation is from the proof of Lemma 3.8 in <cit.>. Since we have ^⊤y = 0, we also have 1^⊤^⊤y = 0. Observe that 1^⊤^⊤ corresponds to 1 - ∑_' ∈(, , ') for every (, ) ∈. From 1^⊤^⊤y = 0 we have for all y(, ) > 0 that ∑_' ∈(, , ') = 1 and (, , ) = 0. This implies ^⊤y = 0, again yielding a contradiction and γ≠ 0 has to hold. Thus we have: ∃ε∈_>0∃y∈^^⊤y = ^⊤y≥λ + 1·ε ∃γ∈∃ε∈_>0∃y∈^^⊤y = ·γ^⊤y≥λ·γ + 1·ε ∃γ, ε∈∃y∈^[ ^⊤ - 0; -^⊤ 0; ^⊤ -λ -1 ][ y; γ; ε ]≥ 0 -ε < 0 <Ref><ref> ∃x_1, x_2 ∈^∃z∈^[k][ - ; -^⊤ ^⊤ -λ^⊤; 0^⊤ 0^⊤ -1^⊤ ][ x_1; x_2; z ]≤[ 0; 0; -1 ] ∃x∈^∃z∈^[k]x≤ - z - ^⊤x≤λ^⊤z1^⊤z≥ 1 ∃x∈^∃z∈^[k]x≥z^⊤x≤λ^⊤z1^⊤z≥ 1 We claim that x≥z^⊤x≤λ^⊤z1^⊤z≥ 1 has a solution if and only if x≥z^⊤x≤λ^⊤zz≠ 0 does. Firstly, the solution to the former is a solution to the latter, since 1^⊤z≥ 1 implies z≠ 0. Now let x and z be a solution of the latter. Since z≠ 0 and z≥ 0 there exists β∈ such that β·1^⊤z≥ 1. Let x' = x·β and z' = z·β. Then we get x' = x·β≥z·β = z' and ^⊤x' = ^⊤x·β≤λ^⊤z·β = λ^⊤z' and by construction 1^⊤z≥ 1. Hence x' and z' are a solution to the former system. Altogether this shows the equivalence. Since ^⊤x = x(), this completes the proof. Again, for lower bounds we assume to be EC-free. Then the proof is analogous and we apply <Ref> <ref> instead of <Ref> <ref>. * Directly follows from <Ref> and <Ref>. Let us now briefly show how the certificates for -queries and -queries can be derived. To this end, let t_i denote the ith column of T and = ≤ if ⋈ ∈{≥, > } and = ≥ if ⋈ ∈{≤, <}. Let = be an MDP in reachability form without ECs and let _1, …, _k be target sets. Let λ_i ∈ [0, 1] for all i ∈ [k] and ⋈ ∈{<, ≤, >, ≥}. Then we have: ∃∈_i=1^k_^(_i ) ⋈λ_i ∃y∈^_i=1^k^⊤yt_i^⊤y⋈λ_i Observe that we have: ∃∈_i=1^k_^(_i ) ⋈λ_i _i=1^k∃∈_, ^(_i ) ⋈λ_i We can then directly apply the results from the single-objective case <cit.> to each disjunct, thereby yielding the statement. Let = be an MDP in reachability form without ECs and let _1, …, _k be target sets. Let λ_i ∈ [0, 1] for all i ∈ [k] and ⋈ ∈{<, ≤, >, ≥}. Then we have: ∀∈_i=1^k_^(_i ) ⋈λ_i ∃x_1, …, x_k ∈^_i=1^kxt_i ^⊤x⋈λ_i ∀∈_i=1^k_^(_i) ⋈λ_i _i=1^k∀∈_, ^(_i ) ⋈λ_i We then obtain certificate conditions for each conjunct by using results from the single-objective case <cit.>, yielding the statement. An overview of the certificates and their conditions for EC-free MDPs in reachability form is shown in <Ref>. §.§.§ Farkas certificates and witnessing subsystems In this section we provide proofs for <Ref> and <Ref>. Let 𝒩 = (, , , ) be an MDP. Let _1, …, _k ⊆ and _1, …,_ℓ⊆. Further, let r_1, …, r_p ∈^ be reward vectors. Let 𝒩' = (' {}, , , ') be a subsystem of 𝒩. * For every scheduler in ∈^𝒩 there exists a scheduler ' ∈^𝒩' such that for all i ∈ [k] we have _𝒩'^'(_i) ≤_𝒩^(_i), for all j ∈ [ℓ] we have _𝒩'^'(_j) ≤_𝒩^(_j) and for all h ∈ [p] we have [𝒩']'r'_h≤[𝒩]r_h ([𝒩']'r'_h≤[𝒩]r_h) * Vice versa, for every scheduler ' ∈^𝒩' there exists a scheduler in ∈^𝒩 such that for all i ∈ [k] we have _𝒩'^'(_i) ≤_𝒩^(_i), for all j ∈ [ℓ] we have _𝒩'^'(_j) ≤_𝒩^(_j) and for all h ∈ [p] we have [𝒩']'r'_h≤[𝒩]r_h ([𝒩']'r'_h≤[𝒩]r_h) The proof follows the ideas from <cit.>. The set of paths in 𝒩' never visiting are a subset of paths in 𝒩, i.e. {' ∈(𝒩') |' never visits }⊆(𝒩) Further, for all paths _∈{' ∈(𝒩') |' visits }, i.e. paths in 𝒩' that visit (and hence stay in forever, as cannot be left), and ∈(𝒩) we have r_h'(_) ≤r_h() (r_h'(_) ≤r_h()) for all h ∈ [p]. Intuitively, by construction of r_h', the smallest possible reward is collected in and a path in 𝒩' visiting cannot achieve a higher mean-payoff than any path in 𝒩. Let us prove <ref> first. Given a scheduler ∈^𝒩, we choose a scheduler ' ∈^𝒩' that behaves like in ' (in the choice does not matter). Recall that the set of actions that are enabled in a state in 𝒩 and 𝒩' coincide by definition and that once is entered in the subsystem the smallest possible reward is collected. Because the paths in 𝒩 under and 𝒩' under ' carry the same probability and the state-action pairs in ' have the same reward, the statement follows with the observations above. For <ref>, let ' ∈^𝒩' be given. We choose a scheduler that behaves like ' for paths ∈(𝒩) that only visit '. Otherwise, is allowed to play any available action. Similarly, the statement then follows. * For <ref>, we can directly apply <Ref> <ref>. For <ref> we prove via contraposition, i.e. we show: ∃∈^≲() ∃' ∈^'≲'() where ≲() is a corresponding conjunctive query if ≳() is a disjunctive query and ≲() is a corresponding disjunctive query if ≳() is a conjunctive query. Further, we choose ≲ = < if ≳ = ≥ and ≲ = ≤ if ≳ = >. The statement then follows from <Ref> <ref>. *   Let ' = |_'×' = __' and ' = |_'× [k] = __' where ' = {(, ) ∈|∈'} = __'. Let us prove <ref> first. We first note that if there exists (x', z') ∈≳(λ), then there also exist (x, z) ∈≳(λ) with x≥ 0, namely x() = max{0, x'()} for all ∈ and z = z'. ⇒: Let (x, z) ∈≳(λ) with x≥ 0. Then we have x≤zx() ≳λ^⊤z (and additionally z≠ 0 if we have non-strict inequalities). Let x' = x|_' (i.e. x restricted to '). From Lemma 4.22 in <cit.> we know that ' x' ≤' z and x'() ≳λ^⊤z hold. Intuitively, by omitting columns in (that is columns corresponding to states in ∖' and which are thus not in the support of x) where the corresponding value of x is zero does not change the value of the left-hand side. Additionally, omitting rows on both sides also preserves the satisfaction of the inequalities. Consequently, x' and z are Farkas certificates for the satisfaction of the query in _'. Using <Ref> we can conclude ∀∈_i=1^k__', ^(_i ) ≳λ_i. ⇐: Because ∀∈_i=1^k__', ^(_i) ≳λ_i holds, we know by <Ref> that there exists x' ∈^' and z∈^[k] such that (x', z) ∈_'≳(λ), i.e. ' x' ≤' z, x'() ≳λ^⊤z and 1^⊤z≤ 1 (or 1^⊤z = 1 if we have non-strict inequalities). Let x∈^ with x() = x'() if ∈' and x() = 0 otherwise. Clearly, we have x⊆'. Again, applying Lemma 4.22 from <cit.> we know that x≤z and x() ≥λ^⊤z hold. Intuitively, adding columns corresponding to states where x is zero does not change the left-hand side of the inequalities. Rows corresponding to (, ) ∈ with ∈∖' are of the form - ∑_' ∈(, , ') ·x(') ≤∑_i=1^k ∑_∈_i(, , ) ·z(_i) because x() = 0. Since x≥ 0 and the right-hand side is non-negative, such rows are also satisfied. Thus we have (x, z) ∈≳(λ). The proof for <ref> is analogous. ⇒: Let y∈≳() with ' = y. For such y we have ^⊤y≤^⊤y≳. Now we consider the restriction of y to the state action pairs in ', i.e. y' = y|_'. Again, following the reasoning of Lemma 4.22 from <cit.> we have that omitting columns of ^⊤ where y is zero does not change the value. Similarly, omitting rows preserves the satisfaction of the inequality. Because ^⊤y = (')^⊤y', we then have (')^⊤y' ≤ (')^⊤y' ≳. Applying <Ref> then concludes the proof. ⇐: Now suppose we have ' ⊆ such that ∃' ∈^_'≳'(). By <Ref> we have that there exists y' ∈^' such that (')^⊤y' ≤ (')^⊤y' ≳. Now let y∈^ and we set y(, ) = y'(, ) if (, ) ∈' and y(, ) = 0 otherwise. Observe that y⊆' and for every state ∈∖' we have ∑_∈()y(, ) - ∑_(t, )(t, , ) ·y(, ) - () ≤ 0, because we have ∑_∈()y(, ) = 0. By construction we have ^⊤y = (')^⊤y'. In total, we then have ^⊤y≤^⊤y≳ because adding rows corresponding to ∈∖' preserves the satisfaction, as well as adding columns where y is zero. §.§ Proofs for <Ref> §.§.§ Reduction and transfer of subsystems Let us discuss the reduction described in <Ref> and shown in the upper part of <Ref> in detail. Recall that 𝒩 = (_𝒩, , , _𝒩) is an arbitrary MDP and _𝒩 is a -query containing lower-bounded predicates _𝒩^( T_1) ≳λ_1, …, _𝒩^( T_k) ≳λ_k and _𝒩^( G_1) ≳ξ_1, …, _𝒩^( G_ℓ) ≳ξ_ℓ. We follow the construction from <cit.> for the product MDP . Let = (, , , ) where = _𝒩× 2^[k]× 2^[ℓ], = (, ∅) and: ((, u, v), , (', u', v')) = _𝒩(, , '), if u' = u {i ∈ [k] |∈ T_i } and v' = v {j ∈ [ℓ] |∈∖ G_j } 0, otherwise. Intuitively, u keeps track of the “good” and v the “bad” states that have been visited. The predicates can be easily rephrased, i.e. _𝒩^( T) to _^( (T × 2^[k]× 2^[ℓ])) and analogously for invariant probabilities. For brevity, we write _^( T) instead. Because almost all paths eventually stay in a MEC <cit.>, instead of considering _^( T_i), we can consider the probability of eventually staying in a MEC ∈() where T_i has already been visited, that is there exists a (, u, v) ∈() with i ∈ u. Analogously, for _^(_j) we consider MECs where there exists (, u, v) ∈() with j ∉ v. Note that inside MECs, the u and v component of the states are identical. Let A_i ⊆() denote the set of these MECs for predicates _^(_i) and analogously B_j ⊆() for predicates _^(_j). We then consider the quotient , where reaching _ corresponds to staying in MEC forever <cit.>. Clearly, we can then consider corresponding predicates of the form _^({_|∈ A_i }) and _^({_|∈ B_j }). Recall that ι→ maps a state of the product MDP to the corresponding state in . Given a set of states ' of the MEC quotient, the corresponding set of states in and 𝒩 is given by ' = {(, u, v) ∈|ι((, u, v)) ∈' } and _𝒩' = {∈_𝒩|∃ u, v ι((, u, v)) ∈}, respectively. * Let ' be the subsystem of induced by a set ' that satisfies _. Let ' be the corresponding subsystem for induced by ' and 𝒩' the subsystem of 𝒩 induced by _𝒩'. Observe that ' corresponds to the MEC quotient of '. From <cit.> we then have that for any scheduler ∈^', there exists a scheduler ∈^' such that for all i ∈ [k] and j ∈ [ℓ] we have * _'^({_|∈ A_i }) = _'^(_∈ A_i()) * _'^({_|∈ B_j }) = _'^(_∈ B_j()) and vice versa. Additionally, for any scheduler ∈^' there exists a scheduler ' ∈^𝒩' such that for all i ∈ [k] and j ∈ [ℓ] we have * _'^(_∈ A_i()) = _𝒩'^'( T_i) * _'^(_∈ B_j()) = _𝒩'^'( G_j) and vice versa. This follows from the fact there is a one-to-one correspondence between schedulers of the MDP and its product and that almost all paths stay in an MEC forever <cit.>. The statement then follows. §.§.§ Transferring witnessing schedulers * Let x∈^. We consider the linear equation system with ∈: x() = () + ∑_u ∈ (x(u) - (u)) ·(u, ) = () + ∑_u ∈x(u) ·(u, ) - ∑_u ∈(u) ·(u, ) Intuitively, the equations describe the expected frequencies of state subtracted by the frequencies that are redirected to the copies of the states. Equivalently, the system can be written in vector-matrix notation as follows: x (I - ) = - · Observe that the steady-state distribution of satisfies (I - ) = 0 and also > 0 since is strongly connected. Given a solution x^* to (<ref>), we know that x^* + r · is also a solution to (<ref>) for all r ∈. Thus, if there exists a solution, there also exists a solution x^* such that x^*() > () for all states . Let () = ()/x^*(s), then () ∈ [0, 1]. Setting () = () ·x^*() in (<ref>) yields for all states : x^*() = () + ∑_u ∈x^*(u) · (1 - (u)) ·(u, ) = () + ∑_u ∈x^*(u) ·__(u, ) Considering the DTMC _, the expected frequencies __() are the unique solution of the following system with variables z∈^ and for all states : z() = () + ∑_u ∈z(u) · (1 - (u)) ·(u, ) z(') = () ·z() Thus, x^*() = __() and __(') = __(') = () ·x^*() = (). Hence it remains to be shown that (<ref>) has a solution. We apply Farkas' lemma (<Ref> <ref>) on (<ref>) and show that the resulting system (shown below) cannot have a solution. (I - ) y = 0 and ( - )^⊤y≠ 0 Since is a stochastic matrix (all rows sum up to 1), we have (I - ) 1 = 0. Because is strongly connected, I - has rank - 1 and thus all solutions of (I - ) y = 0 are multiples of 1. Let y = r ·1 for some r ∈. For all distributions we have ^⊤y = r ·^⊤1 = r. In particular, we have ^⊤y = r. Observe that is again a distribution and thus ^⊤^⊤y = r. We then get ( - )^⊤y = 0 contradicting (<ref>). Thus, we can conclude that (<ref>) has a solution. § PROOFS FOR <REF> *   ⇒: Directly follows from <cit.> and <cit.>. ⇐: Let MDP ' = ({}, {τ}, , ') be the MDP obtained from by adding a fresh state and transitions to under a fresh action τ in all states. Let ' = {}. In particular the enabled state action pairs in ' are ' = (' ×{τ}). For all i ∈ [k] we define r_i' ∈_≥0^' and set r_i'(, ) = r_i(, ) if (, ) ∈ and r_i'(, τ) = min_(, )r_i(, ). Because the added transitions under action τ have lowest possible reward for each reward function, the existence of a strategy ' for ' that satisfies the mean-payoff constraints implies the existence of satisfying strategy for . Suppose we have x, y∈^ and z∈^ that satisfy the constraints. Then for all states ∈ let: We define y', x' ∈^' for all (, ) ∈' as follows: y'(, ) = 0, if = z(), if ≠ = τ y(, ), otherwise and x'(, ) = ∑_' ∈z('), if = 0, if ≠ = τ x(, ), otherwise We then have for all ∈: () + ∑_(', ') ∈''(', ', ) ·y'(', ') = () + ∑_(', ') ∈(', ', ) ·y(', ') = ∑_∈()y(, ) + x(, ) + z() = ∑_∈() {τ}y'(, ) + x'(, ) Further, we have () + ∑_(', ') ∈''(', ', ) ·y'(', ') = ∑_∈z() = x'(, τ). Further, we also have for all ∈: ∑_(', ') ∈''(', ', ) ·x'(', ') = ∑_(', ') ∈(', ', ) ·x(', ') = ∑_∈()x(, ) = ∑_∈() {τ}x'(, ) Analogously, we have ∑_(', ') ∈''(', ', ) ·x'(', ') = ∑_∈z() = x'(, τ). Lastly, for all i ∈ [k] we have: ∑_(, ) ∈'x'(, ) ·r_i'(,) = ∑_(, ) ∈x(, ) ·r_i(,) + ∑_∈z() ·r_min(i, ) ≥λ_i From <cit.> and <cit.> we then know that there exists a scheduler ' ∈^' such that _i=1^k [', ]'r'_i≥λ_i. However, as mentioned above, this also implies the existence of a scheduler ∈^ such that _i=1^k [, ]r_i≥λ_i. The variables y_s, constraint 2 and 3 in <cit.> are redundant (as also noted in the work). Let us briefly comment on this redundancy. Consider an MDP where each state has a copy state '. Then y_s describes the probability of reaching this copy state ' <cit.>. The sum ∑_s ∈ S y_s equals 1 because of the fact that y_a corresponds to the expected frequencies of a scheduler that reaches the absorbing states almost surely <cit.> (also see <cit.>). From <cit.> it follows that x_a is 0 for state-action pairs not contained in MECs. Altogether, this makes y_s and constraints 2 and 3 redundant. * We prove the statement via application of Farkas' lemma to the linear system given in <cit.>. Observe that we have [, ]r_i = -[, ]-r_i for all schedulers ∈^. The statement can then be shown as follows: ∀∈_i=1^k r_i≥λ_i ∃∈_i=1^k r_i < λ_i ∃∈_i=1^k --r_i < λ_i ∃∈_i=1^k -r_i > -λ_i By <cit.> and the remark that in <cit.> that the constraints in <cit.> are partly redundant, the existence of a scheduler ∈ that satisfies _i=1^k -r_i > -λ_i is equivalent of the satisfiability of the following system of linear equations: () + ∑_(', ') ∈(', ', ) ·y(', ') = ∑_∈()y(, ) + x(, ) for all ∈ ∑_(', ') ∈(', ', ) ·x(', ') = ∑_∈()x(, ) for all ∈ ∑_(, ) ∈x(, ) ·(- r_i(,) ) ≥ - λ_i + ε for all i ∈ [k] where x, y∈^ and ε > 0. Equivalently, we can write the system in matrix vector notation as follows: (D - )^⊤y + D^⊤x = (D - )^⊤x = 0 R^⊤x + 1·ε ≤ Here, D∈{0, 1}^× is defined as D((, ), ) = 1 for all (, ) ∈ and 0 otherwise. In order to derive certificates and conditions for the universally quantified queries, we instead consider the following system: (D - )^⊤y + D^⊤x = ·γ (D - )^⊤x = 0 R^⊤x + 1·ε ≤·γ γ ≥ε where again x, y∈^ and γ, ε > 0. Let us now show the equivalence in terms of satisfiability of those two systems. (<ref>) ⇒ (<ref>): Let x, y∈^ and ε > 0 be a solution of (<ref>). We can simply choose γ = 1 and choose ε' = min{γ, ε}. Then x, y, ε' and γ are a solution to (<ref>). (<ref>) ⇐ (<ref>): Let x, y∈^ and γ, ε > 0 be a solution of (<ref>). Let x' = x· 1 / γ, y' = y· 1 / γ and ε' = ε· 1 / γ. Clearly, we have (D - )^⊤x' = 0. Further, we have (D - )^⊤y' + D^⊤x' = 1/γ((D - )^⊤y + D^⊤x) = 1/γ··γ = , and R^⊤x' + 1·ε' = 1/γ( R^⊤x + 1·ε) ≤1/γ( ·γ) ≤ Hence x', y' and ε' are a solution to (<ref>). We are concerned with the non-existence of a scheduler and thus equivalently the unsatisfiability of (<ref>). We can write (<ref>) as follows: [ (D - )^⊤ D^⊤ - 0; -(D - )^⊤ -D^⊤ 0; 0 (D - )^⊤ 0 0; 0 -(D - )^⊤ 0 0; 0 - R^⊤ -1; 0 0 1 -1 ][ y; x; γ; ε ]≥[ 0; 0; 0; 0; 0; 0 ], [ 0; 0; 0; 0; -1 ]^⊤[ y; x; γ; ε ] < 0 We then apply Farkas' lemma (<Ref> <ref>), yielding the following system: [ D - -(D - ) 0 0 0 0; D -D D - -(D - ) - R 0; - ^⊤ ^⊤ 0 0 ^⊤ 1; 0 0 0 0 -1^⊤ 1 ][ g_+; g_-; b_+; b_-; z; β ]≤[ 0; 0; 0; -1 ] where g_+, g_-, b_+, b_-∈^, z∈^[k] and β∈. We can further simplify inequalities by defining gg_+ - g_- and bb_+ - b_-, yielding: (D - ) g ≤0 Dg + (D - ) b ≤Rz ^⊤g ≥^⊤z + β 1^⊤z ≥ 1 + β Observe that any solution of (<ref>) where β > 0 is also a solution to (<ref>) when setting β = 0 because ^⊤z + β≥^⊤z and 1 + β≥ 1. Hence we can assume β = 0 and obtain the following conditions: (D - ) g ≤0 Dg + (D - ) b ≤Rz ^⊤g ≥^⊤z 1^⊤z ≥ 1 or equivalently written out explicitly: g() ≤∑_' ∈(, , ') ·g(') for all (, ) ∈ g() + b() ≤∑_' ∈(, , ') ·b(') + ∑_i=1^k r_i(, ) ·z(i) for all (, ) ∈ g() ≥∑_i=1^k λ_i ·z(i) ∑_i=1^k z(i) ≥ 1 Observe that if ∑_i=1^k z(i) > 1, then we can simply rescale x, y and z by 1 / ∑_i=1^k z(i). Hence, we replace the constraint ∑_i=1^k z(i) ≥ 1 with ∑_i=1^k z(i) = 1. Lastly, from <cit.> we can conclude that imposing g() ≥∑_i=1^k z(i) ·r_min(i) does not change the satisfaction of the system. Let = (, , , ) be an MDP and ' = (' {}, , , ') be an induced subsystem of . * If (x', y', z') ∈ℋ_'^𝖬𝖯(), then there exists (x, y, z) ∈ℋ_^𝖬𝖯() such that x, y⊆x', y'. * If (g', b', z') ∈ℱ_'^𝖬𝖯(), then there exists (g, b, z) ∈ℱ_^𝖬𝖯() such that g - R_minz⊆g' - R'_minz'. Proof of <ref>: Let γ_i min_(, ) ∈r_i(, ) for all i ∈ [k]. Suppose we have (x', y', z') ∈ℋ_'^𝖬𝖯. Then for all ∈' {} we have: '() + ∑_(', ') ∈''(', ', ) ·y'(', ') = ∑_∈'()y'(, ) + x'(, ) + z'() ∑_(', ') ∈''(', ', ) ·x'(', ') = ∑_∈'()x'(, ) and for all i ∈ [k] we have: ∑_(, ) ∈'x'(, ) ·r'_i(,) + ∑_∈' {}z'() ·γ_i ≥λ_i Let us define x, y∈^ as follows: x(, ) = x'(, ), if ∈' 0, otherwise y(, ) = y'(, ), if ∈' 0, otherwise Further, we define z∈^ for all ∈ as follows: z() = z'(), if ∈' ∑_' ∈'∑_' ∈(')(', ' , ) ·y(', '), otherwise By construction, we have xy⊆x'y'. Now it remains to be shown that (x, y, z) ∈ℋ_^𝖬𝖯. To this end, we observe that x'(, ) = 0 for all states ∈' and ∈() if '(, , ) > 0 because otherwise (<ref>) would not be satisfied. Hence x(, ) = 0 if (, , ') > '(, , ') for some ' ∈. We then get for all states ∈∖': ∑_(', ') ∈(', ', ) ·x(', ') = 0 = ∑_∈()x(, ) For all states ∈' we have: ∑_(', ') ∈(', ', ) ·x(', ') = ∑_(', ') ∈'(', ', ) ·x(', ') = ∑_(', ') ∈''(', ', ) ·x(', ') (<ref>) =∑_∈()x(, ) So in total, we have ∑_(', ') ∈(', ', ) = ∑_∈()x(, ) for all ∈. Further, for all ∈' we have: () + ∑_(', ') ∈(', ', ) ·y(', ') = () + ∑_(', ') ∈''(', ', ) ·y'(', ') (<ref>)= ∑_∈()y'(, ) + x'(, ) + z'() = ∑_∈()y(, ) + x(, ) + z() For all ∈∖' we have: () + ∑_(', ') ∈(', ', ) ·y(', ') = () + ∑_' ∈'∑_' ∈(')(', ', ) ·y(', ') = z() = ∑_∈()y(, ) + x(, ) + z() Lastly, for all i ∈ [k] we have: ∑_(, ) ∈x(, ) ·r_i(,) + ∑_∈z() ·γ_i = ∑_∈'∑_∈()x'(, ) ·r_i'(, ) + ∑_∈∖'∑_' ∈'∑_' ∈(')(', ' , ) ·y(', ') ·γ_i + ∑_∈'z'() ·γ_i = ∑_∈'∑_∈()x'(, ) ·r_i'(, ) + ∑_' ∈'∑_' ∈(')'(', ' , ) ·y(', ') ·γ_i + ∑_∈'z'() ·γ_i = ∑_(, ) ∈'x'(, ) ·r_i'(, ) + ∑_' {}z'() ·γ_i ≥λ_i In total, we can therefore conclude that (x, y, z) ∈ℋ_^𝖬𝖯. Proof of <ref>: Let (g', b', z') ∈ℱ_'^𝖬𝖯(). Then the following holds for all (, ) ∈': g'() ≤∑_' ∈' {}'(, , ') ·g'(') g'() + b'() ≤∑_' ∈' {}'(, , ') ·b'(') + ∑_i=1^k r'_i(, ) ·z'(i) and for all ∈' {}: g'() ≥∑_i=1^k z'(i) ·r_min(i) and for all i ∈ [k]: g'() ≥∑_i=1^k λ_i ·z'(i) Let us define z = z' and g∈^ as follows: g() = g'(), if ∈' ∑_i=1^k z'(i) ·r_min(i), otherwise By construction we have g - R_minz⊆g' - R'_minz'. Analogously, we now need to show that there exists a b∈^ such that (g, b, z) ∈≥(). For the sake of readability, let us define c∈^ as c(, ) = ∑_i=1^k r_i(, ) ·z(i) for all (, ) ∈ and c' ∈^' as c'(, ) = ∑_i=1^k r_i'(, ) ·z(i) for all (, ) ∈'. We observe that we have g'() ≥∑_i=1^k z'(i) ·r_min'(i) = ∑_i=1^k z(i) ·r_min(i). From (<ref>), we then have for all ∈'() that g'() + b'() ≤b'() + ∑_i=1^k c'(, ) = b'() + ∑_i=1^k z(i) ·r_min(i) So in total we have g'() = ∑_i=1^k z(i) ·r_min(i). Then, for all states ∈' and ∈() we get g() = g'() ≤∑_' ∈' {}'(, , ') ·g'(') = (∑_' ∈''(, , ') ·g'(') ) + '(, , ) ·g'() = (∑_' ∈''(, , ') ·g'(') ) + '(, , ) · (∑_i=1^k z(i) ·r_min(i)) = (∑_' ∈''(, , ') ·g'(') ) + ∑_' ∈∖'(, , ') ·g(') = ∑_' ∈(, , ') ·g(') Because g() = ∑_i=1^k z'(i) ·r_min'(i) for ∈∖' and the g(') ≥∑_i=1^k z'(i) ·r_min'(i) for all ' ∈', we can conclude that g() ≤∑_' ∈(, , ') ·g(') for all (, ) ∈. Further, observe that because ∈' we have g() = g'() ≥∑_i=1^k λ_i ·z(i). Now it only remains to be shown that there exists a b∈^ such that for all (, ) ∈ we have: g() + b() ≤∑_' ∈(, , ') ·b(') + c(, ) For the sake of contradiction, suppose this was not the case. Then, by <Ref> <ref> there exists x∈^ such that ∑_(' ') ∈(', ', ) ·x(', ') = ∑_∈()x(, ) for all ∈ ∑_(, ) ∈c(, ) ·x(, ) < ∑_(, ) ∈x(, ) ·g() We note that the first equation describes a recurrent flow. In particular, x(, ) = 0 if (, ) is not contained in a MEC <cit.>. This allows us to write the second inequality as follows: ∑_∈()∑_(, ) ∈c(, ) ·x(, ) = ∑_(, ) ∈c(, ) ·x(, ) < ∑_(, ) ∈x(, ) ·g() = ∑_∈()∑_(, ) ∈x(, ) ·g() In particular, there exists a MEC ∈() such that ∑_(, ) ∈c(, ) ·x(, ) < ∑_(, ) ∈x(, ) ·g() From <cit.>, we know that g() ≤inf_∈^'[',]c'≤inf_∈^[,]c for all ∈'. We write ^* ∈^ to denote such optimal scheduler for . Further, observe that all states in a MEC have the same optimal expected mean-payoff. Let us denote this common value by ν, i.e. ν = [,]^*c for some ∈(). Then we get: ∑_(, ) ∈c(, ) ·x(, ) < ∑_(, ) ∈[,]^*c·x(, ) = ν·∑_(, ) ∈x(, ) However, this implies that there exists a scheduler ∈^ that achieves a strictly lower value inside than ^*. More precisely, inside the scheduler ensures that states with x(, ) > 0 for some (, ) ∈ are reached almost surely and then switches to the strategy _∈^ with _(, ) = x(, ) / ∑_∈()x(, ) (cf. <cit.>). This contradicts the optimality of ^* and we can conclude that such x cannot exist in the first place. Thus there exists b∈^ such that (g, b, z) ∈ℱ_^𝖬𝖯(). * The directions from left to right directly follow from <Ref>, <Ref> and <Ref>. Hence, we only need to show that if there exists a certificate for , then the corresponding support induces a subsystem that also satisfies the query. In the following, we write ' = _' = (' {}, , , ') to denote the subsystem induced by '. Recall that ' = _' = {(, ) ∈|∈' }{(, ) |∈}. Proof of <ref>: Suppose there exists (x, y, z) ∈ℋ_^𝖬𝖯() such that x, y⊆'. Then for all ∈' ⊆ we have: () + ∑_(', ') ∈(', ', ) ·y(', ') = ∑_∈()y(, ) + x(, ) + z() ∑_(', ') ∈(', ', ) ·x(', ') = ∑_∈()x(, ) and for all i ∈ [k] we have: ∑_(, ) ∈x(, ) ·r_i(,) + ∑_∈z() ·r_min(i) ≥λ_i Let be an arbitrary a' ∈() and let us define x', y' ∈^' as follows: y'(, ) = 0, if = y(, ), otherwise x'(, ) = ∑_(', ') ∈''(', ', ) ·y'(', ') , if = a = a' 0 , if = a ≠ a' x(, ), otherwise Further, let z' ∈^' {} with z'() = z() for ∈' and z'() = 0. We now show that the constructed vectors (x', y', z') ∈ℋ_'^𝖬𝖯(). We then get for all ∈': '() + ∑_(', ') ∈''(', ', ) ·y'(', ') = () + ∑_(', ') ∈'(, , ) ·y(', ') = ∑_∈()y(, ) + x(, )+ z() = ∑_∈()y'(, ) + x'(, ) + z'() Further, we have '() + ∑_(', ') ∈''(', ', ) ·y'(', ') = ∑_∈()y'(, ) + x'(, ). Similarly, for all states ∈': ∑_(', ') ∈''(', ', ) ·x'(', ') = ∑_(', ') ∈(', ', ) ·x(', ') = ∑_∈()x(, ) = ∑_∈()x'(, ) Again, we have ∑_(', ') ∈''(', ', ) ·x'(', ') = ∑_∈()x'(, ). Lastly, we have for all i ∈ [k]: ∑_(, ) ∈'x'(, ) ·r'_i(,) + ∑_∈' {}z'() ·r_min'(i) = ∑_(, ) ∈x(, ) ·r_i(,) + ∑_∈z() ·r_min(i) ≥ λ_i Hence (x', y') ∈'≥() and by <Ref> the statement follows. Proof of <ref>: Let (g, b, z) ∈≥() such that g - R_minz⊆'. Then we have: g() ≤∑_' ∈(, , ') ·g(') for all (, ) ∈ g() + b() ≤∑_' ∈(, , ') ·b(') + ∑_i=1^k r_i(, ) ·z(i) for all (, ) ∈ g() ≥∑_i=1^k λ_i ·z(i) ∑_i=1^k z(i) ≥ 1 g() ≥∑_i=1^k z(i) ·r_min(i) for all ∈ We now construct corresponding g' ∈^' {}, b' ∈^' {} and z' ∈^[k] and show that (g', b', z') ∈'≥(). We set z' = z and define g' and b' as follows: g'() = ∑_i=1^k z(i) ·r_min(i), if = g(), otherwise b'() = max_' ∈b('), if = b(), otherwise We directly see that g'() = g() ≤∑_' ∈(, , ') ·g(') = ∑_' ∈'(, , ') ·g(') + ∑_' ∈∖'(, , ') ·g(') = ∑_' ∈'(, , ') ·g(') + ∑_' ∈∖'(, , ') · (∑_i=1^k z(i) ·r_min(i)) = ∑_' ∈' {}'(, , ') ·g'(') for all (, ) ∈' with ≠. Let us define c(, ) = ∑_i=1^k r_i(, ) ·z(i) for all (, ) ∈'. Then, for all (, ) ∈' with ≠ we have: g'() + b'() = g() + b() ≤∑_' ∈(, , ') ·b(') + c(, ) = ∑_' ∈'(, , ') ·b(') + ∑_' ∈∖'(, , ') ·b(') + c(, ) ≤∑_' ∈'(, , ') ·b(') + ∑_' ∈∖'(, , ') · (max_”∈b(”))+ c(, ) = ∑_' ∈'(, , ') ·b'(') + ∑_' ∈∖'(, , ') ·b'()+ c(, ) = ∑_' ∈' {}'(, , ') ·b'(') + c(, ) For (, ) ∈' we have g'() + b'() =∑_i=1^k z(i) ·r_min(i) + '(, , ) ·b'() ≤∑_i=1^k z'(i) ·r'_i(, ) + ∑_' ∈' {}'(, , ') ·b'(') Lastly, we have g'() = g(). With that, we can conclude (g', b', z') ∈'≥() and with <Ref> the statement follows. § MILPS FOR FINDING WITNESSING SUBSYSTEMS §.§ MILPs for Reachability Recall the MILPs in <Ref>. We now touch upon the choice of M. MILPs for -queries. Firstly, we note that for the MILP of -queries, we can impose the additional constraint ∑_i ∈ [k]z(i) = 1 if ≳ = ≥ and ∑_i ∈ [k]z(i) ≤ 1 if ≳ = >. For ≥, observe that given a certificate (x, z) ∈≥(λ), we can simply rescale with γ = 1/∑_i ∈ [k]z(i), i.e. (γ·x, γ·z) ∈≳(λ), ∑_i ∈ [k]γ·z(i) = 1 and γ·x = x. Analogously, we proceed for >. Imposing these additional constraints, ensures that x is bounded and an upper bound can be found via LP <cit.>. As a consequence of <cit.>, we can also simply choose k as upper bound. MILPs for -queries. Unlike in the single-objective setting <cit.>, the set ≳() is generally unbounded (see <cit.>). Note that if a minimal witnessing subsystem is given, its certificate can be easily determined and can serve as upper bound. Obviously, it is thus difficult to determine an upper bound M a priori. Here, we resort to indicator constraints, i.e. constraints of the form γ() = 0 y(, ) = 0. These constraints are supported by Gurobi <cit.>. §.§ MILPs for Mean-Payoff To find minimal witnessing subsystem for mean-payoff queries, we can consider the MILPs shown in <Ref>. Like for reachability, we use the Big-M encoding. Let us briefly discuss the choice of M. Like for reachability, we can impose the additional constraints on z. Then g can again be bounded, e.g. by considering the absolute sum of the smallest and largest rewards (see e.g. <cit.>). It is well known that x is bounded from above by 1, see e.g. <cit.>. For the MILP for -queries, y is again generally unbounded. Here, we also resort to indicator constraints. § SUPPLEMENTARY MATERIAL FOR <REF> Our implementation, experiments and results are made available on Zenodo <cit.>. Storm results. The runtimes of Storm in seconds are shown in <Ref>. We remark that we verified -queries by considering the dual -queries . Note that for some queries, we encountered an error, denoted with err. Note that we were unable to verify the queries of with Storm, as the queries were not supported. We refer to the log files in <cit.>. Lastly, we note that the Storm build time is faster than the build time of our implementation, because we have implemented the product construction in Python. The reason is that Storm's product construction is not available through its Python API. Sizes of witnessing subsystems. Recall that in our experiments we have considered queries with 5 different bounds for the consensus and firewire models. More specifically, for firewire we consider the labels and queries: * ∃∈^() ≥λ^() ≥λ * ∀∈^() ≥λ^() ≥λ where λ∈{0.01, 0.1325, 0.255, 0.3775, 0.5}. For consensus, we consider the labels and queries: * ∃∈^() ≥λ^() ≥λ * ∀∈^() ≥λ^() ≥λ where λ∈{0.05, 0.1125, 0.175, 0.2375, 0.3}. Our implementation computes the witnessing subsystems for theses queries using our MILP approach and returns the best solution that has been found after the time limit. The sizes of the subsystems (relative to the original MDP) are shown in <Ref> and <Ref>. We observe that the subsystems for -queries are significantly larger than for -queries. Additionally, the bound λ has a significant influence on the size, particularly for -queries. Certification of dual queries. In our experiments, we consider queries that are satisfied and, thus, for which certificates exist. We also investigate the time it takes for the solver to determine that no certificate exists for the dual query, e.g. for a satisfied -query we measure the time it takes for the tool to conclude that no certificates exist for the -query . The results are shown in <Ref>. The column describes the time for building the model for the dual. The column describes the time for concluding that no certificate exists for the dual query. The column describes the total time of and . We observe that the time for determining that no certificate exists seems to be slightly faster than the time for computing the certificate.
http://arxiv.org/abs/2406.07803v1
20240612014029
EmoSphere-TTS: Emotional Style and Intensity Modeling via Spherical Emotion Vector for Controllable Emotional Text-to-Speech
[ "Deok-Hyeon Cho", "Hyung-Seok Oh", "Seung-Bin Kim", "Sang-Hoon Lee", "Seong-Whan Lee" ]
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
Dynamic Energy-Saving Design for Double-Faced Active RIS Assisted Communications with Perfect/Imperfect CSI Yang Cao, Graduate Student Member, IEEE, Wenchi Cheng, Senior Member, IEEE, Jingqing Wang, Member, IEEE, and Wei Zhang, Fellow, IEEE Yang Cao, Wenchi Cheng and Jingqing Wang are with the State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, 710071, China (e-mails: caoyang@stu.xidian.edu.cn, wccheng@xidian.edu.cn, wangjingqing00@gmail.com). Wei Zhang is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia (e-mail: w.zhang@unsw.edu.au). ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Despite rapid advances in the field of emotional text-to-speech (TTS), recent studies primarily focus on mimicking the average style of a particular emotion. As a result, the ability to manipulate speech emotion remains constrained to several predefined labels, compromising the ability to reflect the nuanced variations of emotion. In this paper, we propose EmoSphere-TTS, which synthesizes expressive emotional speech by using a spherical emotion vector to control the emotional style and intensity of the synthetic speech. Without any human annotation, we use the arousal, valence, and dominance pseudo-labels to model the complex nature of emotion via a Cartesian-spherical transformation. Furthermore, we propose a dual conditional adversarial network to improve the quality of generated speech by reflecting the multi-aspect characteristics. The experimental results demonstrate the model’s ability to control emotional style and intensity with high-quality expressive speech. § INTRODUCTION Recently, text-to-speech (TTS) has shown rapid progress <cit.>, and interest in emotional TTS has also increased. Although emotional TTS technology has seen significant improvements in recent years, there remains a limitation in high-level interpretable emotion control <cit.>. In particular, emotional control remains challenging, as speech labeled with the same emotion can exhibit diverse emotional expressions greatly influenced by the variability in acting performances. For emotional TTS, a common approach is to control the diverse emotional expressions from the emotion labels and the reference audio. The emotion label-based approach models the complex nature of emotion to serve as an auxiliary input for the TTS system. Relative attribute <cit.> is one of the most representative methods, utilizing a learned ranking function <cit.> to delineate differences between binary classes. EmoQ-TTS <cit.> employs distance-based quantization via linear discriminant analysis to control fine-grained emotional intensity. Another way to control the diverse expression of emotion is through reference-based emotional TTS. In this context, a scaling factor <cit.> is multiplied by the emotion embedding to control emotion intensity precisely. However, these methods have several limitations. All methods based on emotion labels focus on transferring emotions using discrete labels that ignore the complex nature of emotion conveyed in human speech <cit.>. For example, while sad spans emotions like lonely and hurt, its categorization typically reduces expressions to a uniform style. Using references to control emotional expression is further complicated by the difficulty in capturing the nuances of references due to a mismatch between the reference and the synthesized speech. Furthermore, these methods face difficulty finding suitable scaling factors, and making adjustments often results in unstable audio quality. Another strategy to control emotional expression involves leveraging the emotional dimensions. Russell <cit.> proposed a continuous emotion space as an alternative for human emotions. Building on this, researchers attempt to control emotional expression by utilizing the extended emotional dimensions of arousal, valence, and dominance (AVD) <cit.>. Emotional dimensions provide a continuous and fine-grained description, offering more detailed control than discrete emotions. However, only a few emotional speech databases provide these annotations due to the inherent subjectivity and the high costs associated with collecting such data. Furthermore, emotional dimensions are challenging for models to control intuitively and distinctly characterize the diverse emotional expression. To address the above limitations, we propose EmoSphere-TTS, a novel approach to control emotional style and intensity with spherical emotion vector space in emotional TTS. We adopt the emotional dimensions of AVD from pseudo-labeling in speech emotion recognition (SER). We also propose a spherical emotion vector space via Cartesian-spherical transformation to model the complex nature of emotion that has been difficult in Cartesian coordinate systems. We found that this space is the key to controlling the emotional style and intensity of the synthetic speech. Furthermore, we introduce dual conditional adversarial training to improve the quality of generated speech by reflecting the emotion and speaker-specific characteristics. The experimental results demonstrate that our model outperforms the others in terms of audio quality and emotion similarity on the controllable emotional TTS. Audio samples are available at <https://EmoSphere-TTS.github.io/>. § EMOSPHERE-TTS We present a controllable emotional TTS system, EmoSphere-TTS. We introduce spherical emotion vector space and spherical emotion encoder to deliver speech with the complex nature of emotion. Furthermore, we introduce a dual conditional discriminator for better audio quality and expressiveness. The details are described in the following subsections. §.§ Emotional style and intensity modeling In this section, we model the diverse emotional expressions through a spherical emotion vector space. The approach to building the space is structured around two key components: i) the AVD encoder and ii) the Cartesian-spherical transformation. §.§.§ AVD encoder Instead of using emotional dimensions of human annotation, we adopt wav2vec 2.0 <cit.>-based SER <cit.> to extract consistently continuous and detailed representations from audio. The model generates predictions for e_ki=(d_a,d_v,d_d), where d_a represents arousal, d_v valence, and d_d dominance, each ranging approximately from 0 to 1 in Cartesian coordinates. Here, e_ki denotes the i-th coordinate of the k-th emotion. §.§.§ Cartesian-spherical transformation For modeling the complex nature of emotion, we introduce the spherical emotion vector space, which represents the relative distance and angle vector from the neutral center. Inspired by coordinate transformations <cit.>, which can be easily controlled by a continuous scalar indicating emotional style and intensity. We transform all points for AVD pseudo-labels in spherical coordinates by following the assumptions: i) the emotional intensity increases as it moves farther from the neutral emotion center, and ii) the angle from the neutral emotion center determines the emotional style. First, we obtain transformed Cartesian coordinates e'_ki=(d'_a,d'_v,d'_d) by setting the neutral emotion center M as the origin, e'_ki = e_ki - M where M = 1/N_n∑_i=1^N_n e_ni, where N_n represents the total number of neutral coordinates e_ni. Then, the transformation from Cartesian coordinates to spherical coordinate (r,ϑ,φ) can be formulated as: r = √(d'_a^2 + d'_v^2 + d'_d^2), ϑ = arccos(d'_d/r), φ = arctan(d'_v/d'_a). After the Cartesian-spherical transformation, we normalize the intensity of the emotion by scaling the radial distance r to a range of 0 to 1. To achieve this, the min-max normalization process utilizes the interquartile range technique <cit.>, effectively determining the minimum and maximum values for the scale. Additionally, we quantize the emotion style by segmenting directional angles ϑ and φ into eight octants, each defined by the positive and negative directions of the A, V, and D axes. §.§ Spherical emotion encoder After building spherical emotion vector space, the spherical emotion encoder blends them with emotion ID to compose their spherical emotion embedding. Initially, we use a projection layer to align the dimensions of the emotion style vector and emotion class embedding. Then, we concatenate these projections and apply a softplus activation <cit.> similar to <cit.> followed by Layer Normalization (LN) <cit.>. Finally, the spherical emotion embedding, 𝐡_emo, is merged with projected emotion intensity vectors, as the following equation: 𝐡_emo = LN( softplus( concat(𝐡_sty, 𝐡_cls) ) ) + 𝐡_int. Here, 𝐡_sty, 𝐡_int, and 𝐡_cls denote the outputs of the projection layer related to the emotional style vector, emotional intensity vector, and emotion class embedding, respectively. §.§ Dual conditional adversarial training We adopt the structure of multiple CNN-based discriminators <cit.> for adversarial training to improve the quality of the TTS model. These discriminators comprise a Conv2D stack that consists of multiple stacked 2D-convolutional layers and fully connected (FC) layers. The input value is the random Mel-spectrogram clip (Mel clip) using random windows of different lengths t. To improve quality and further expressiveness, we utilize emotion and speaker embeddings to capture the multi-aspect characteristics more effectively, inspired by <cit.>. One Conv2D stack receives only the Mel clip, while the others receive a combination of condition embedding and the Mel clip. We extend the condition embedding to match the length of the Mel clip for concatenation. The loss functions ℒ of the discriminator D and generator G are shown in Equation (<ref>) and (<ref>): ℒ_D=∑_c ∈{spk, emo}∑_t𝔼[(1-D_t(y_t,c))^2+ D_t(ŷ_t,c)^2], ℒ_G=∑_c ∈{spk, emo}∑_t𝔼[(1-D_t(ŷ_t,c))^2], where y_t and ŷ_t respectively represent the ground truth and generated Mel-spectrograms, with c denoting the condition type. §.§ TTS model We retain the original architecture and objective function of FastSpeech 2 <cit.> except for using an emotion spherical vector to provide emotional style and intensity information. Additionally, the speaker ID is mapped into an embedding h_spk to represent different speaker characteristics. Then, the speaker and emotion embedding are concatenated and provided to the variance adaptor. During inference, we use manual style and intensity vectors to control the diverse emotional expressions. By manipulating the emotional style and intensity in the spherical emotion vector space, we can efficiently synthesize the complex nature of emotion and control the diverse emotional expressions in synthesized speech. § EXPERIMENTS AND RESULTS §.§ Experimental setup We use the emotional speech dataset (ESD) <cit.>, which consists of about 350 parallel utterances spoken by 10 English speakers with five emotional states (neutral, happy, angry, sad, and surprise). Following the prescribed data partitioning criteria, we extracted one sample for each emotion from every speaker, resulting in a total of 17,500 samples. For the Mel-spectrogram, we transform audio using the short-time Fourier transform with a hop size of 256, a window size of 1,024, an FFT size of 1,024, and 80 bins of Mel-filter. We employ the AdamW optimizer <cit.>, setting the hyperparameters β_1 to 0.9 and β_2 to 0.98. For the training of the TTS system and discriminator, the learning rates were configured at 5× 10^-4 and 1× 10^-4, respectively. The training process of the TTS module was conducted over approximately 24 hours on a single NVIDIA RTX A6000 GPU. For the audio synthesis in our experiments, we utilize the official implementation of BigVGAN <cit.>, along with its pre-trained model. §.§ Implementation details For the acoustic model, following the FastSpeech 2 <cit.> configuration, in the FFT block of the phoneme encoder and decoder, we configure the number of layers to 4, hidden size to 256, filter size to 1024, and kernel size to 9. Regarding the AVD encoder, we adopt a system proposed in <cit.>, which predicts AVD using wav2vec 2.0 <cit.> and a linear predictor. For the discriminator, we use projection layers for each condition with a hidden size of 128 and three different sizes of windows ([32, 64, 96]). §.§ Model performance We compare EmoSphere-TTS with other systems: 1) FastSpeech 2 w/ emotion label <cit.>, which adopts emotion id with a look-up table. 2) FastSpeech 2 w/ relative attribute <cit.>, which assigns emotion intensity via learned ranking function <cit.>. 3) FastSpeech 2 w/ scaling factor <cit.>, which is multiplied by the emotion embedding to control emotion intensity precisely. We follow the same experimental setup and model configurations to ensure a fair comparison. For evaluation metrics, we use a naturalness mean opinion score (nMOS) and a similarity mean opinion score (sMOS). The nMOS and sMOS are reported with 95% confidence intervals. Additionally, we utilize the open-source UTMOS[https://github.com/tarepan/SpeechMOS] <cit.> as a MOS prediction model for the naturalness metric. To evaluate linguistic consistency, we calculate the word error rate (WER) and character error rate (CER) by Whisper large model <cit.>. For the speaker similarity measurements, we calculate the speaker embedding cosine similarity (SECS) via Resemblyzer[https://github.com/resemble-ai/Resemblyzer] between the target and converted speech and equal error rate (EER) via the automatic speech recognition model <cit.>. For emotionally expressive evaluation, we conduct emotion classification accuracy (ECA) using a pre-built emotion classification model <cit.>. For prosodic evaluation, we compute the root mean square error for both pitch error (RMSE_f_0) and periodicity error (RMSE_period), along with the F1 score of voiced/unvoiced classification (F1_v/uv). Given our primary on quality and expressiveness, manual intensity and style vectors are not employed in these experiments. In the ablation study, w/o spherical emotion vector is a model that uses emotion id with a lookup table instead of a spherical emotion vector and encoder. As shown in Table <ref>, our model achieves significant improvements, and this can be explained by: 1) as opposed to transferring emotion from emotion label-based or reference audio methods, directly assigning spherical emotion vector is easier for the model to generate good quality speech. Our spherical emotion vector exhibits better expressiveness and audio quality, including naturalness and pronunciation, even without a dual conditional discriminator. 2) The dual conditional discriminator improves the quality of the generated speech by reflecting both emotion and speaker characteristics. §.§ Emotion intensity controllability In this section, we conduct a subjective evaluation to determine the discernibility of synthesized speech samples exhibiting varying intensity levels. To demonstrate the intensity control capability of our model, we synthesize speech with three different levels of emotion intensity (weak, medium, and strong). Evaluators are presented with two different sentences, each with varying intensities, and tasked with selecting the one that exhibits stronger emotion. In relative attribute <cit.> and EmoSphere-TTS, we uniformly refer to scores 0.1 as weak, 0.5 as medium, and 0.9 as strong. Scaling factor <cit.> cannot assign intensity scores, so we set the scalar factor to 1, 2, and 3 to represent the weak, medium, and strong emotion strength the same as in the original setting. Additionally, we visualize the tendency of pitch to demonstrate the ability to control emotional expression as shown in Figure <ref>. We computed these values by averaging the pitch from the synthesized speeches by all combinations of emotion labels and intensity vectors for all test sentences. As shown in Table <ref>, relative attribute <cit.> effectively controls the intensity. However, in the sad emotional speech, the pitch increases as the intensity increases, as shown in Figure <ref> (a). This indicates that subtle emotional nuances are complex to capture when considering emotion labels alone, and expressions are often reduced to a uniform style. Scaling factor <cit.> might not be efficient for performing intensity control; in some cases, the intensity difference is not readily perceivable, as shown in Figure <ref> (b). However, as shown in Table 2, the scaling factor outperformed the other models for the sad emotion. Still, the scaling factor focuses on reducing pitch and decelerating speech rate in static emotions, overlooking the complex nature of emotion. On the other hand, compared to the baseline models, EmoSphere-TTS performs the best. Furthermore, the pitch tendency plot reflects the intensity according to the emotion. This indicates that the proposed model synthesizes speech according to a given intensity scale. §.§ Emotion style shift To demonstrate the changing patterns of emotion intensity based on the shifted emotion style, we visualize the pitch tracks of a sample. In Figure <ref>, we observe that when the base style vector is input, emotion intensity changing patterns reflect the characteristics of the AVD axes. For example, style vectors with positive A axes have a pitch that tends to increase the changing patterns, positive V axes have higher average pitch values, and positive D axes have a narrow range in changing patterns. This indicates that arousal, valence, and dominance reflect each meaning of axes, representing the level of excitement or energy, positivity or negativity of emotion, and control level within an emotional state, respectively. By shifting the style vector, the emotion intensity patterns change with the shifted axis. This indicates that the proposed spherical emotion vector reflects diverse emotional expressions and offers detailed manipulation of emotional expressions. § CONCLUSION We present EmoSphere-TTS, a system that synthesizes expressive emotional speech through a spherical emotion vector space, controlling the diverse emotion expression. With only a speech dataset, we extract AVD pseudo-labels and model generalized representations of the emotional style and intensity through Cartesian-spherical transformation. Furthermore, we improve the quality and emotional expressiveness of the overall model using the dual conditional adversarial discriminator and spherical emotion encoder. The experimental results demonstrate that our proposed spherical emotion vector effectively synthesizes the complex nature of emotion and controls diverse emotion expression. In this article, we only focused exclusively on the global style present within sentence-level emotional information. In future work, we aim to extend our approach to include phoneme-level emotional information and allow for fine-grained control. We also expect that the proposed method can be utilized for emotional voice conversion like <cit.>. § ACKNOWLEDGEMENTS This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University), No. 2021-0-02068, Artificial Intelligence Innovation Hub, and AI Technology for Interactive Communication of Language Impaired Individuals). IEEEtran
http://arxiv.org/abs/2406.09144v1
20240613141148
fast-resolve: Fast Bayesian Radio Interferometric Imaging
[ "Jakob Roth", "Philipp Frank", "Hertzog L. Bester", "Oleg M. Smirnov", "Rüdiger Westermann", "Torsten A. Enßlin" ]
astro-ph.IM
[ "astro-ph.IM" ]
Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany Ludwig-Maximilians-Universität, Geschwister-Scholl-Platz 1, 80539 Munich, Germany Technische Universität München (TUM), Boltzmannstr. 3, 85748 Garching, Germany South African Radio Astronomy Observatory (SARAO), Cape Town, 7925, South Africa Centre for Radio Astronomy Techniques and Technologies (RATT), Department of Physics and Electronics, Rhodes University, Makhanda, 6140, South Africa Institute for Radioastronomy, National Institute of Astrophysics (INAF IRA), Via Gobetti 101, 40129 Bologna, Italy Interferometric imaging is algorithmically and computationally challenging as there is no unique inversion from the measurement data back to the sky maps, and the datasets can be very large. Many imaging methods already exist, but most of them focus either on the accuracy or the computational aspect. This paper aims to reduce the computational complexity of the Bayesian imaging algorithm , enabling the application of Bayesian imaging for larger datasets. By combining computational shortcuts of the algorithm with the Bayesian imaging algorithm we developed an accurate and fast imaging algorithm which we name . We validate the accuracy of the presented algorithm by comparing it with results from on VLA Cygnus A data. Furthermore, we demonstrate the computational advantages of on a large MeerKAT ESO 137-006 dataset which is computationally out of reach for . The presented algorithm is significantly faster than previous Bayesian imaging algorithms, broadening the applicability of Bayesian interferometric imaging. Specifically for the single channel VLA Cygnus A datasets is about 144 times faster than . For the MeerKAT dataset with multiple channels the computational speedup of is even larger. : Fast Bayesian Radio Interferometric Imaging Jakob Roth 123roth@mpa-garching.mpg.de Philipp Frank 1 Hertzog L. Bester 45 Oleg M. Smirnov 546 Rüdiger Westermann 3 Torsten A. Enßlin 12 Received XXXX; accepted XXXX ========================================================================================================================================================================================================================= § INTRODUCTION Interferometric imaging is a versatile technique in astronomy that allows us to achieve enormous sensitivity and resolution by combining multiple telescopes. The effective resolution is roughly equivalent to that of a single telescope with a diameter equal to the largest distance between individual stations of the interferometer, which can be thousands of kilometers. For the upcoming Square Kilometer Array <cit.>, the total collecting area might eventually approach one square kilometer, resulting in superior sensitivity compared to any single-dish telescope. Overcoming the resolution and sensitivity limitations of single telescopes comes at the cost of making it more difficult to retrieve images from the observational data. Mathematically, the recovery of the sky images from interferometric data can be formulated as an inverse problem. The forward relation that computes the corresponding measurement data from a given sky image is known as the radio interferometric measurement equation <cit.>. As discussed in detail in Sec. <ref>, the measured data points are essentially noisy and undersampled Fourier modes of the sky image. This makes direct inversion of the measurement equation to obtain the sky image impossible, turning radio interferometric imaging into an ill-posed inverse problem. Solving the inverse problem of radio interferometric imaging requires sophisticated algorithms that impose additional constraints on the sky brightness and regularize possible solutions to it. Historically, CLEAN <cit.> has been by far the most widely used imaging algorithm because it is computationally efficient, simple, and easy to use. Over the last decades, CLEAN-based imaging algorithms have been significantly improved, especially for diffuse emission imaging, spectral imaging, and wide-field imaging <cit.>. However, CLEAN-based algorithms have several drawbacks, such as limited image fidelity, suboptimal resolution of recovered images, and lack of uncertainty quantification (see eg. <cit.> for a detailed discussion). To improve these limitations, many other imaging algorithms have been developed. A large class of new imaging algorithms builds on applying compressed sensing techniques to astronomical data <cit.>. Several incarnations of such algorithms have shown significant improvements in terms of image fidelity and resolution over CLEAN based reconstructions. Recent examples are <cit.> utilizing sparsity based regularizers in combination with convex optimization. <cit.> extended these approaches to full polarization imaging. In <cit.> and <cit.>, some form of uncertainty quantification was added. <cit.> parallelized the image regularizer to distribute it over multiple CPUs. <cit.> presented a regularizer building on a neural network image denoisers. <cit.> further explored neural network regularizers and studied the influence of the training dataset on the regularizer. Bayesian imaging algorithms are another important class of imaging algorithms addressing the uncertainty quantification problem. Early Bayesian imaging approaches such as <cit.> built on the maximum entropy principle. Other Bayesian imaging techniques such as <cit.> rely on posterior sampling techniques. The Bayesian imaging framework [https://gitlab.mpcdf.mpg.de/ift/resolve] originally proposed by <cit.> builds on variational inference instead of sampling based techniques for posterior approximation, reducing the computational costs. has already successfully been applied to VLA, EHT, VLBA, GRAVITY, and ALMA data, providing high-quality radio maps with superior resolution compared to CLEAN, as well as an uncertainty map. Recent examples are <cit.> joining Bayesian imaging with calibration. In <cit.> is compared with CLEAN, while in <cit.>, direction-dependent calibration is added and compared with sparsity based imaging results of <cit.>. In <cit.> is applied to the optical interferometer GRAVITY. In <cit.> is applied to EHT data. The improvements in imaging algorithms mentioned above come at the cost of increased computational complexity. For Very Long Baseline Interferometer (VLBI) observations, data sizes are typically small and the higher computational cost of advanced imaging methods is unproblematic. However, for large arrays such as MeerKAT <cit.>, this limits the applicability of many of these algorithms. For this reason, only very few advanced imaging algorithms have so far been successfully applied to data sets the size of a typical MeerKAT observation. <cit.> presents an application of an imaging algorithm using sparsity based and neural network regularizers to the MeerKAT ESO 137-006 observation. To handle the enormous computational complexity, <cit.> massively parallelize their imaging algorithm and distribute it across a high performance computing system, requiring hundreds to thousands of CPU hours to converge. In <cit.>, this approach of parallelizing the regularizer is extended to the spectral domain. To the best of our knowledge, no Bayesian radio interferometric imaging algorithm has so far been applied to a similar sized dataset. The goal of the framework has always been to enable Bayesian imaging for a wide range of radio telescopes, not just VLBI observations. However, for instruments such as MeerKAT, imaging with becomes computationally prohibitively expensive. In this paper, we present the algorithm , which significantly reduces the computational complexity of classical and enables Bayesian image reconstruction for large data sets. <cit.> previously already named an imaging method fast. The goal of the fast algorithm of <cit.> is the same as the new incarnation of presented in this paper. However, although the new and the old variants of share some ideas, there are significant differences. In Sec. <ref>, we will highlight the common concepts between the old the and the new . Note that the framework behind both and has evolved significantly since the time of <cit.>. Machine learning-based imaging algorithms are a third relatively new class of imaging algorithms. Once the underlying machine learning models are trained, the computational cost for imaging is often lower than for other algorithms. Nevertheless, assessing image fidelity is challenging in such a framework given the lack of interpretability of machine learning models. Recent examples of such algorithms are <cit.>. The remainder of the paper is organized as follows. In Sec. <ref>, we discuss the radio interferometric measurement equation and the imaging inverse problem in detail. In Sec. <ref>, we briefly review the existing framework in its current form and outline the algorithm. Building on some of the computational shortcuts of the algorithm, we derive , and in Sec. <ref>, we show several applications of it. In particular, in Sec. <ref> we compare imaging results of with the classical framework on VLA <cit.> data to validate the fidelity of the resulting sky images, and in Sec. <ref> we present a reconstruction of the ESO 137-006 MeerKAT observation to demonstrate the computational speedup. § THE INVERSE PROBLEM The radio interferometric measurement equation, derived, for example, in <cit.>, relates the radio sky brightness to the datapoints, often called visibilities. More specifically, via the measurement equation, model visibilities Ṽ can be computed for an assumed sky brightness I and antenna sensitivity G. Under the assumption of scalar antenna gains G, the model visibility Ṽ_pqt of antennas p and q at time t is given as: Ṽ_pqt = ∫ C(l,w_pqt)I(l)G_p(t,l)G^*_q(t,l) e^-2π i (k_pqt·l) dl, where * l = (l,m) are the sky coordinates, * t is the time coordinate, * k_pqt = (u_pqt, v_pqt) are the baseline uv-coordinates in units of the imaging wavelength, * C(l,w_pqt) = exp(-2π i w_pqt(√(1-l^2) - 1)) / √(1-l^2) is the w- or non-coplanar baselines effect, * I(l) is the sky brightness, * G_p(t,l) is the gain of antenna p depending on time and potentially also direction. The model visibilities are therefore Fourier-like components of the sky brightness I modulated by the antenna gains G_p and G_q as well as the w-effect C. The visibilities actually recorded in the measurement are related to the model visibilities according to V_pqt = Ṽ_pqt + n = R(I) + n, with n representing some unknown noise in the measurement, and R the mapping from sky brightness to visibilities defined in Eq. <ref>. With Eq. <ref> and Eq. <ref>, it is straightforward to compute simulated visibilities for an assumed sky brightness I, antenna gain G, and noise statistics. Inverting this relation is not possible without additional assumptions, since first, the measured visibilities V are corrupted by the measurement noise n, and second, Eq. <ref> is generally not invertible since in practical applications not all Fourier components are measured. The non-uniqueness of solutions to Eq. <ref> means that additional regularisation needs to be imposed to discriminate between all possible sky images compatible with the data. This additional information is either an explicit prior or regularization term, or is implicitly encoded in the structure of the imaging algorithm. § METHODS In this section, we derive the algorithm (Sec. <ref>). Since builds on the already existing Bayesian imaging framework , we start with a brief review of the classic imaging method (Sec. <ref>). As some of the computational speedups of are inspired by the imaging algorithm, we also outline the basic concepts behind (Sec. <ref>). §.§ The imaging algorithm addresses the imaging inverse problem from a probabilistic perspective. Thus, instead of reconstructing a single estimate of the sky brightness, it infers the posterior probability distribution P(I|V) of possible sky images given the measured visibilities. Bayes' Theorem P(I|V) = P(V|I)P(I)/P(V) expresses the posterior probability P(I|V) in terms of the likelihood P(V|I), the prior P(I), and the evidence P(V). provides models for the likelihood P(V|I) and the prior P(I). The posterior distribution can be inferred for a given prior and likelihood model building on the functionality of the Bayesian inference package NIFTy[https://github.com/NIFTy-PPL/NIFTy] <cit.>. In the next three subsections we briefly outline the likelihood and prior of as well as the variational inference algorithms of NIFTy. §.§.§ prior The framework provides predefined prior models for the two types of radio emission, point sources and extended diffuse emission. Both priors encode that the brightness must be positive, since there is no negative flux. Furthermore, both priors are very flexible and allow for brightness variations over several orders of magnitude. For the point source prior, pixels are independently modeled with an inverse gamma prior for their intensity. The inverse gamma distribution is strictly positive and has a wide tail, allowing extremely bright sources. In the example in Sec. <ref>, such a prior is applied for the two bright point sources in the core of Cygnus A. While the brightness of point sources is reconstructed from the data, the locations currently need to be manually set. This limits the applicability of the current point source prior, as discussed in the application to MeerKAT ESO 137-006 data (Sec. <ref>). Besides positivity and possible variations over several orders of magnitude, the diffuse emission prior also encodes correlations of the brightness of nearby pixels, which is essentially the defining property of diffuse emission. The correlation of nearby pixels in the diffuse emission prior is modeled by Gaussian processes. The result of the Gaussian process is exponentiated to ensure positivity. Detailed explanations of the prior models can be found in <cit.>. An important aspect of all the prior models in is that they are fast and scalable to very large numbers of pixels. For example, in <cit.> a 4096×2048 pixel Gaussian process-based diffuse emission prior is used to image Cygnus A at various frequencies. §.§.§ likelihood The likelihood is evaluated in using Eq. <ref>. More specifically, using Eq. <ref>, we can write the likelihood as P(V|I) = P(V|R(I)) = P(V|Ṽ). The noise statistics in Eq. <ref> then determines the likelihood. In , Gaussian noise statistics are assumed. For numerical reasons, works with the negative logarithm of the likelihood, named the likelihood Hamiltonian, instead of the likelihood itself. With the Gaussian noise assumption, the likelihood Hamiltonian is given by H(V|I) = 1/2(V - R(I) )^† N^-1(V - R(I) ) + 1/2ln |2π N|, with † denoting complex conjugate transpose and 1/2ln |2π N| coming form the normalization of the Gaussian, which can be ignored in many applications. Different possibilities are implemented for the covariance of the noise N. In the simplest case, the weights of the visibilities are used as the inverse noise covariance. Alternatively, the noise covariance can also be estimated during the image reconstruction. The computationally important aspect of the likelihood in the classic framework is that for every update step where the likelihood is evaluated, also R(I) and thus Eq. <ref> need to be computed. This can be computationally expensive, especially for datasets with many visibilities, as we will discuss in detail later. §.§.§ posterior inference is built on the probabilistic programming package NIFTy, which provides variational inference methods <cit.> to approximate the posterior distribution for a given prior and likelihood function. The advantage of variational inference techniques over sampling techniques such as MCMC or HMC is that they scale better with the number of parameters, which is the number of pixels in the imaging context. For example, the variational inference method of NIFTy has recently been used in a 3D reconstruction with 607 million voxels <cit.>. While the variational inference algorithms scale very well with the number of parameters, they still need to evaluate the likelihood very often. This means variational inference is fast as long as the evaluation of the likelihood and prior is fast. As discussed in Sec. <ref>, evaluating the likelihood in boils down to evaluating the radio interferometric measurement equation (Eq. <ref>). To do so relies on the parallelizable <cit.> implemented in the [https://gitlab.mpcdf.mpg.de/mtr/ducc] library. Nevertheless, evaluating the measurement equation becomes computationally intensive for data sets with many visibilities. For example, for the MeerKAT data set considered in Sec. <ref>, evaluating Eq. <ref> on eight threads takes about 45 seconds. This becomes prohibitive for algorithms which require many thousands of likelihood evaluations. §.§ In this section, we briefly outline some of the concepts behind the algorithm <cit.>, as the computational speedups of over classic are partly inspired by . We will refrain from delving into the details behind the numerous improvements that have been made to the algorithm over the last decades (see <cit.> for example) and focus only on the aspects which are relevant to speeding up . We refer the reader to <cit.> for a detailed comparison between and . is an iterative optimization algorithm minimizing the weighted square residuals between the measured visibilities and the model visibilities computed from the sky brightness model. Expressed as a formula, the objective function minimized by is identical to Eq.<ref>, the likelihood Hamiltonian of . Minimizing Eq. <ref> with respect to the sky brightness I is equivalent to solving R^† N^-1 R I = R^† N^-1 V, with R^† being the adjoint operation of R, thus mapping from visibilities to the sky brightness. Neglecting wide-field effects originating from non-coplanar baselines, the operation R^† N^-1 R is, equivalent to a convolution with the effective point spread function of the interferometer I^PSF. R^† N^-1 V is the back projection of the noise-weighted data into the sky domain, called dirty image I^. This can be expressed with the formula I^PSF * I ≈ R^† N^-1 V = I^D, with I^PSF being the PSF of the interferometer and * denoting convolution. Thus, the dirty image I^D = R^† N^-1 V is approximately the true sky brightness I convolved with the PSF I^PSF of the interferometer. Radio interferometric imaging is, therefore, nearly equivalent to deconvolving the dirty image. As discussed in Sec. <ref>, no unique solution to imaging inverse problems exists. The absence of a unique solution manifests as R^† N^-1 R and I^PSF * not being invertible operations. While in , additional regularization was provided via an explicit prior, regularization of the sky images in is implicitly encoded into the structure of the algorithm. More specifically, starts with an empty sky model as an initial estimate I^m_0 for the true sky brightness and iteratively adds components, in the simplest form point sources, to this estimate until a stopping criterion is met. This introduces the implicit prior that the sky is sparsely represented by a finite number of components. In the following, we briefly summarize the algorithmic structure of as relevant for . The iterative procedure of adding components to the current model image I^m is split into major and minor cycles. In the major cycles, a current residual image I^RES = R^† N^-1 (V - R I^m_i) is computed, with I^m_i being the current model image. In the minor cycle, additional components are added to the model, and their PSFs are removed from the residual image. In the subsequent major cycle, the residual image is recomputed for the updated model image, and in the following new minor cycle, more components get added to the model. This scheme is iterated until a global stopping criterion is met. From a computational perspective, the important aspect of the major/minor scheme is that R and R^† only need to be evaluated in major cycles. In minor cycles, the PSF of the added components is subtracted from the residual image, but for this, no evaluations of R and R^† are needed since the PSF can be precomputed once. Most of the computations of the algorithm are performed in image space, and only major cycles go back to the visibility space. In contrast, needs to map from image to data space by applying R for every likelihood evaluation. §.§ As outlined in the previous section, evaluating the radio interferometric instrument response (Eq. <ref>) contributes a substantial fraction to the overall runtime of a image reconstruction. Therefore, reducing the number of necessary evaluations of Eq. <ref> has the potential for significant speedups. The basic idea of is to perform most of the computations in image space, similar to , and only evaluate the radio interferometric instrument response once in a while. §.§.§ measurement equation The radio interferometric measurement equation in the form of Eq. <ref> leads to the likelihood Hamiltonian Eq. <ref> of the classic framework, which involves an evaluation of the interferometric instrument response R. To get a likelihood Hamiltonian that does not include evaluating R, one has to transform the measurement equation such that all involved quantities live in image space, not data space. To project all involved quantities to image space, we apply R^† N^-1 from the left to Eq. <ref>, and get: R^† N^-1 V = R^† N^-1 R I + R^† N^-1 n I^D = R' I + n', with R' = R^† N^-1 R, and n' = R^† N^-1 n. The quantities of the new measurement equation are I^D, I, and n' and are all defined in image space and not in data space. The statistics of the new noise n' remains Gaussian as R^† N^-1 is a linear transformation. The corresponding likelihood Hamiltonian is given by: H(I^D|I) = 1/2(I^D - R' I)^† N'^-1(I^D - R' I) + 1/2ln |2π N'|. The covariance of the transformed noise is given by N' = < n'n'^†> = R^† N^-1<n n^†> N^-1 R = R^† N^-1 R. This new noise covariance N' is not invertible as R^† N^-1 R is not invertible, which is problematic as the likelihood Hamiltonian contains the inverse noise covariance N'^-1. To mitigate the problem of a singular noise covariance, we modify the measurement equation once more by adding uncorrelated Gaussian noise with a small amplitude. This leads to a noise covariance N' = R^† N^-1 R + ϵ𝕀, with 𝕀 being the unit matrix and ϵ a small number, which is the variance of the additional noise. With this artificially introduced additional noise, the full noise covariance N' is invertible, and the new likelihood Hamiltonian Eq. <ref> becomes well defined. §.§.§ response R' How large the speedup of using the new likelihood Hamiltonian Eq. <ref> is compared to the old Hamiltonian Eq. <ref> depends on how much faster the new Hamiltonian can be evaluated. The idea of is to approximate R' = R^† N^-1 R with a PSF convolution R' = R^† N^-1 R ≈ I^PSF*, as it is done in a similar way in the algorithm. Approximating R' = R^† N^-1 R by a convolution with the PSF is only exact for coplanar arrays. Corrections for the inaccuracy of the approximation are applied in some major cycles in analogy to the algorithm, as we describe in Sec. <ref>. The convolution with the PSF I^PSF* can be efficiently applied via an FFT convolution. For data sets with many visibilities, this FFT-based convolution with the PSF I^PSF has the potential to be significantly faster than an evaluation of the interferometer response R (Eq. <ref>). In Sec. <ref>, we will compare the computational speedup of the new likelihood for different data sets. As the sky brightness I and the PSF I^PSF are non-periodic, some padding of the sky is needed for an FFT-based convolution. More specifically, to exactly evaluate I^PSF*I, the PSF with which we convolve needs to be twice as big as the field of view we want to image since some emission in the sky I could be at the edge of the field we are imaging. The sky image needs to be padded with zeros to the same size as the PSF. As a formula, this can be noted as R'I = R^† N^-1 R I ≈ I^PSF* I = P^†FFT^-1[FFT[I^PSF] ·FFT[PI] ], with P denoting the padding operation, and P^† for slicing out the region not padded. By neglecting PSF sidelobes and reducing its size, the necessary amount of zero padding can be reduced. This, however, reduces the accuracy of the approximation R' ≈ I^PSF*, which might make it necessary to perform more major cycles in the image reconstruction. §.§.§ noise model To evaluate the likelihood Hamiltonian of , we need to apply both R'= R^† N^-1 R and N'^-1= (R^† N^-1 R + ϵ𝕀)^-1. Similar to R', we need to approximate N'^-1 such that we can apply it without having to evaluate R and R^† every time. The basic idea of the approximation of N'^-1 is the same as for R', thus replacing the exact operation with an FFT convolution. Expressed as a formula, we compute the application of N' to some input x via N'(x) ≈FFT^-1[ K ·FFT(x) ], with some appropriate Kernel K. The approximate noise covariance still needs to fulfill the mathematical properties of covariances, because for our Bayesian model, the likelihood is a probability and not just an arbitrary cost function. Therefore, the approximated noise covariance matrix needs to be Hermitian and positive definite. Specifically, for the convolution approximation, this implies that all entries of the convolution kernel K must be real and strictly positive. To fulfill these constraints, we parameterize K as K_ξ = expξ + ϵ, with ϵ as defined in Eq. <ref> and ξ being an implicitly defined real-valued vector. We set this vector ξ such that it minimizes the square residual between the true and the approximate noise covariance when applied to a test image with a point source in the center. Thus ξ is set to ξ̅ = argmin_ξ(N'(I_δ) - FFT^-1[ (expξ + ϵ) ·FFT(I_δ) ] )^2 with I_δ having a point source or delta peak in the center of the field of view and otherwise being zeros. In words, we approximate the noise covariance N' with an appropriate Kernel K that yields a proper covariance by construction, is easy to invert, and minimizes the squared distance to the effect that N' has on a point source. To evaluate the likelihood (Eq. <ref>) we need to apply N'^-1. We do this by convolving with the inverse kernel: N'^-1(x) ≈FFT^-1[ K_ξ̅^-1·FFT(x) ]. This involves a second approximation, as we here implicitly assume periodic boundary conditions. As long as the main lobe of the PSF is much smaller than the field of view, this assumption does not create large errors. §.§.§ inference scheme The previous sections derived the approximate likelihood for inspired by the algorithm. In the same spirit, we adapt the major/minor cycle scheme of to utilize it with Bayesian inference for a probabilistic sky brightness reconstruction. In the minor cycles, we optimize the current estimate of the posterior distribution P(I|V) for the sky brightness I using the above approximations for a fast likelihood evaluation. In the major cycles, we apply, similar to , the exact response operations to correct for the approximation error. The algorithm starts by initializing the dirty image I^D = R^†N^-1V from the visibilities V using the exact response function R. The dirty image is the input data d_0 = I^D for the first minor cycle. The first minor cycle computes an initial estimate of the posterior distribution P_0 of the sky brightness using the measurement equation d_0 = I^PSF*I + n', with the approximations discussed in Sec. <ref> and Sec. <ref>. We will discuss the exact algorithm used to estimate the posterior distribution in Sec. <ref>. From this initial estimate of the posterior distribution P_0, we compute the posterior mean of the sky brightness I_0 = <I>_P_0. The posterior mean I_0 is the output of the first minor cycle and the input to the first major cycle. The first major cycle computes the residual d_1 between the dirty image I^D and the posterior mean of the first minor cycle passed through the response R^†N^-1R of the fast resolve measurement equation: d_1 = I^D - R^†N^-1RI_0. In contrast to the minor cycle, R^†N^-1R is here computed exactly and not approximated with I^PSF*. The residual d_1 is the output of the major cycle and the input to the second minor cycle. In the second minor cycle, the posterior P_1 of I for the measurement equation d_1 = I^PSF*(I - I_0) + n', is approximated. This approximation can be done efficiently by starting with the posterior estimate of the previous minor cycle and refining this estimate. The output of the second minor cycle is the updated posterior mean I_1 = <I>_P_1 which is the input to the next major cycle computing the new residual to the dirty image using the exact response: d_2 = I^D - R^†N^-1RI_1. The new residual data d_2 is the input to the next minor cycle. This scheme is iterated until converged, thus until no significant structures are left in the residual d_n and the posterior estimates I_n are not changing anymore. In algorithm <ref> the full inference scheme is summarized as a pseudocode algorithm. The overall structure of the major/minor scheme is very similar to the major/minor scheme of . The main difference is that updates in the minor cycles a probability distribution for possible sky images instead of a simple model image, as is the case for . The algorithm for inferring the posterior distribution is outlined in the following subsection <ref>. §.§.§ minor cycles In the minor cycles, we optimize the approximation of the posterior distribution P(I|V) using the scalable variational inference algorithms <cit.> of the NIFTy package already employed in the classic algorithm. These variational inference algorithms account for correlations between parameters of the model. While the method Metric Gaussian Variational Inference (MGVI) of <cit.> relies on a Gaussian approximation of the posterior distribution, the algorithm geometric Variational Inference (geoVI) of <cit.> can also capture non-Gaussian posterior distributions. Although relies for the minor cycles on the same inference algorithms as , updating the posterior distribution is much faster in since the approximate likelihood described above is used instead of evaluating the exact measurement equation. Furthermore, the NIFTy package has also undergone a major rewrite, switching from a NumPy based <cit.> implementation to a JAX[https://github.com/google/jax] <cit.> based backend, allowing for GPU-accelerated computing. While was so far building on the old NumPy-based NIFTy, makes use of the JAX accelerated NIFTy version, also named NIFTy.re <cit.>. Especially the FFT convolutions in have the potential for significant GPU acceleration. In Sec. <ref>, we compare the runtime of and using the CPU and the GPU backend. In the future, we also plan to port the classic framework to JAX. Porting is more involved than porting because it requires binding a high performance implementation of the radio interferometric measurement operator to JAX. As a preparatory work, we have developed the JAXbind[https://github.com/NIFTy-PPL/JAXbind] package <cit.> which allows to bind custom functions to JAX. However, since the from ducc[https://gitlab.mpcdf.mpg.de/mtr/ducc] is used to evaluate the radio interferometric measurement equation, the JAX version of will also be limited to run on the CPU. §.§ Previous fast of <cit.> The algorithm named fast introduced by <cit.> was an earlier attempt to speed up . Similar to the of this work, <cit.> used approximations of the likelihood avoiding applications of R in every step. More specifically, <cit.> derived a maximum a posteriori estimator of the sky brightness using the gridded weights of the visibilities as a noise covariance. Since fast had some limitations, such as it could not account for the w-term and could not provide uncertainty estimates, this approach was not followed up in subsequent developments of in <cit.>, or <cit.>. § APPLICATIONS For verification, we demonstrate the proposed method on different data sets. First, we reconstruct the Cygnus A VLA observation at four different frequency bands, and second, we image ESO 137-006 using a MeerKAT observation. The Cygnus A data is suitable to validate the accuracy as the images can be compared to previous results from <cit.> and <cit.> obtained on the same dataset. The Cygnus A reconstruction has a relatively small field of view, and wide field effects should be negligible. The MeerKAT observation is significantly larger than the VLA observation, and imaging with is computationally out of reach, allowing to demonstrate the computational advantages of over . Furthermore, the field of view of the ESO 137-006 observations is also significantly larger such that the major cycles of correcting for wide field effects become more important. §.§ Application to VLA Cygnus A data §.§.§ Data and algorithm setup For imaging Cygnus A, we use the exact same data already imaged in <cit.> with and . The data contains single frequency channels at 2052 MHz (S-band), 4811 MHz (C-band), 8427 MHz (X-band), and 13360 MHz (Ku-band). Also, in <cit.>, an uncalibrated version of the S-band data was jointly calibrated and imaged with . In all observing bands, all four VLA configurations were used. Computationally relevant details of the observations, such as the number of visibilities and the grid size of the reconstructed images, are listed in Tab. <ref>. For details on the calibration of the data, we refer to <cit.>. As a prior model, we use the combination of the diffuse emission prior and the point source prior described in Sec. <ref>. This prior model is identical to the prior model already used in the existing framework. Specifically, in <cit.> and <cit.>, this prior model was already used for Cygnus A reconstructions. For completeness, we summarize the main aspects of the prior model here. We refer to <cit.> for details on the prior model. We model the diffuse emission with an exponentiated Gaussian Process spanning the entire field of view. For the two point sources in the nucleus of Cygnus A, we insert two separate point source models at the locations of these sources. For the brightness of these sources we use an inverse gamma prior. The exact hyper-parameters for the Gaussian process and the inverse gamma prior are listed in appendix <ref>. The number of pixels used to model the diffuse emission is listed in Tab. <ref> for the different frequencies. In <cit.>, the MGVI algorithm was used for inferring the posterior of the sky brightness map. For direct comparability between the results of this work and the previous maps, we also use the MGVI in the minor cycles (Sec. <ref>) for the posterior approximation. For the imaging of the S-band data in <cit.>, the geoVI algorithm was used. However, we do not expect significant differences in the resulting sky maps. To summarize, <cit.> and <cit.> used the same prior model setup as we use for imaging Cygnus A with . Furthermore, the posterior inference algorithm is expected to produce similar results for a given likelihood. Therefore, the previous results of <cit.> and <cit.> are ideally suited to validate the likelihood approximation in a direct comparison on a dataset with negligible wide field effects. Furthermore, we compare with the multi-scale reconstruction of the S-band data from <cit.>. For a more detailed comparison with , we refer to <cit.>, where was extensively compared to single and multi-scale for all frequency bands. §.§.§ Comparison with previous results In the following, we compare the reconstructions with previous results to validate the accuracy of . Fig. <ref> displays the reconstructions of <cit.> and <cit.> as well as the S-band data multi-scale reconstruction in comparison with the reconstructions of this work. The overall quality of the maps is on par with the results. The multi-scale reconstruction has a lower resolution in bright regions of the lobes than the based reconstructions. All bright emission features are consistently reconstructed by all algorithms in all frequency bands. The and maps have a higher dynamic range than the results of <cit.>. Nevertheless, this is not due to a conceptional problem of the algorithm, but rather the reconstructions of <cit.> were not fully converged due to the very high computational cost of . reconstructs the brightest emission of the field before modeling fainter features. Thus, faint features are missing in the radio map when a reconstruction is stopped before convergence. For , where the overall runtime of the algorithm is much shorter, it is easier to ensure that the reconstruction is fully converged. In Sec. <ref> the convergence of and is analyzed in detail. The reconstruction of <cit.> has a similar dynamic range compared to the reconstructions. In the comparison between and in <cit.>, produced significantly higher resolved maps than in regions with high surface brightness. A zoom into such a region, the eastern hotspot of Cygnus A, is depicted in Fig. <ref>. In this region, the results are consistent, and achieves the same resolution as the based reconstructions.[The pixelizations of the multi-scale , the previous , and the reconstructions differ as they originate from different works.] Furthermore, also shows minimal imaging artifacts around the hotspot, as does . The multi-scale map from <cit.> is added again for comparison. As already discussed in <cit.>, the reconstruction has a significantly lower resolution than the reconstruction. These super-resolution capabilities where validated by comparing the morphological features of the reconstructions with higher frequency observations under the assumption of spectral smoothness. Fig. <ref> zooms on the nucleus and jet of Cygnus A. In the zooms on the nucleus the pixels modeling the two point sources are clearly visible. The S-band reconstruction and the reconstruction of <cit.> show consistent results depicting the nucleus and jet. In the S-band reconstruction of <cit.>, the jet is also visible, although due to the smaller dynamic range of the map, it is less pronounced. In the higher frequency bands, the shape of the core of Cygnus A is consistent between the reconstructions of and . As the old reconstructions are not fully converged, their dynamic range is lower, and the fainter emission of the jet is barely visible at higher frequencies. At the highest frequencies 13360 MHz, the jet is also barely visible in the reconstruction. The image of the jet is consistent with the reconstruction and the reconstruction from <cit.>. as well as provide posterior samples of the sky brightness distribution. From these samples, not only the posterior mean but also other summary statistics can be computed. As an example, we show in Fig. <ref> the pixel-wise relative uncertainty of the S-band data reconstruction of in comparison with the corresponding uncertainty map of <cit.>. The estimated uncertainty of tends to be slightly higher than the uncertainty estimate of . This difference might come from the fact that the reconstruction of <cit.> was not fully converged since the uncertainty estimate of usually becomes smaller when the algorithm converges. Nevertheless, we cannot exclude that this is related to the approximations of . §.§.§ Computational time of In the previous subsection, we compared the results of with , confirming that can deliver the same high quality radio maps as . In this subsection, we want to analyze the computational speed of and compare it to the existing framework. To have a direct comparison between the runtimes, we imaged the S-band data with exactly the same hyperparameters for the prior model with and and saved snapshots of the reconstructions at several points during the optimization. While can only be executed on a CPU, can also run on a GPU. Fig. <ref> depicts the direct comparison between and , with both algorithms being executed on the same CPU. Specifically, the algorithms were run on an Intel Xeon W-1270P CPU with 3.80 GHz clock speed and 8 cores. For the reconstruction, the MGVI algorithm inferring the posterior distribution was parallelized with 4 MPI tasks, each using 2 cores for evaluating the radio interferometric measurement equation (Eq. <ref>), thus utilizing 8 cores in total. The reconstruction ran on the single python process without manual threading, relying on JAX to parallelize the computations over the cores. Snapshots of the reconstructions after 10, 60, 300, 600, and 1440 min are displayed. After 10 min, the hotspot and parts of the lobes are visible in the reconstructions of both algorithms. Nevertheless, the core and the jet are missing, and both reconstructions have significant background artifacts. After 60 min runtime, fainter parts of the lobes become visible, and the map shows significantly more details in the lower surface brightness regions. After 300 min the map also depicts the very low surface brightness regions with high resolution, while in the map, the outflow is still partially missing, and the lower surface brightness regions of the lobes are reconstructed with low resolution. After 600 min the map has only slightly changed compared to the 300 min snapshot. At this point, we assume the algorithm to be fully converged. The reconstruction is also after 600 min not yet fully converged. The final snapshot of the reconstruction is after 1440 min. At this stage also the reconstruction is nearly converged. Only in the very low surface brightness regions of the lobes, the resolution is still lower than in the reconstruction. While Fig. <ref> analyzes the convergence of on the CPU, is mainly designed for GPUs. The performance of on the S-band data using GPUs is shown in Fig. <ref>. Specifically, Fig. <ref> displays snapshots of the reconstruction after 1, 5, 10, 22, and 46 min using an NVIDIA A100 high-performance computing GPU compared with a reconstruction on a consumer-level GPU, an NVIDIA GeForce RTX 3090. When comparing the results, one should consider that the A100 GPU is on the order of 10 times more expensive than the RTX 3090. On both GPUs, we executed the exact same reconstruction we ran on a CPU in Fig. <ref>. Similar to the CPU run, the high surface brightness regions are reconstructed first before the method picks up the low surface brightness flux. On both GPUs, the algorithm is much faster than on the CPU. While on the CPU, the reconstruction was finished after 600 min, the same number of major and minor cycles were finished on the RTX 3090 GPU in 46 min and on the A100 GPU in 22 min. Thus, the reconstruction was 13 times faster on the RTX 3090 GPU and 27 times faster on the A100 GPU than the CPU reconstruction. In comparison, the reconstruction on the CPU was not fully converged even after 1416 min. To quantify the convergence rate of the reconstructions, we computed the mean square residual between the logarithmic brightness of the final iteration on the NVIDIA A100 GPU and earlier iterations of and reconstructions. We reran the reconstructions with different random seeds to be independent of the random seed used for the NVIDIA A100 reconstruction with respect to which the residuals are computed. Fig. <ref> shows the mean square residuals for and the three reconstructions as a function of wall time. The reconstructions converge within the displayed time, and their curves of the mean square residual flatten. The reconstruction's final mean square residuals are not exactly the same since the final maps are not numerically identical because of the different hardware, JAX, and CUDA versions. The reconstruction does not converge in the displayed time interval and the residual error keeps falling until the end after 1416 min (1 day). Of course, using the residual of the logarithmic brightness is an arbitrary metric for quantifying convergence speed. Nevertheless, it can roughly quantify the speedups of . After 1416 min, the mean square residual of the reconstruction is around 10^-3. In comparison, the reconstructions reach the same mean squared residual after around 10 min, 20 min, and 200 min for runs on the NVIDIA A100 GPU, the NVIDIA RTX 3090 GPU and the Intel Xeon CPU, respectively. Thus, the indicative speedup of over on the Cygnus A S-band data is a factor of 144 for the A100 GPU, a factor of 72 for the RTX 3090 GPU and a factor of 7.2 for the CPU run. The timings reported above do not include the time needed to precompute the convolution kernels for the response and noise of . Nevertheless, these are small compared to the time needed for imaging. For the S-band data, for example, the computation of the kernels takes only 0.6 min, which is much shorter than the imaging runtime, even on the GPU. As indicated in Tab. <ref>, the Cygnus A data has only a single frequency channel. For datasets with more baselines and frequency channels, such as the MeerKAT datasets considered in the next section (see Tab. <ref>), the algorithmic advantage of of not having to compute the radio response in each evaluation of the likelihood is much larger. Thus, for such datasets, the speedup of will be significantly larger than for the Cygnus A single channel imaging. Indeed for the MeerKAT dataset of the next section (Sec. <ref>) imaging with is computationally out of scope, while with images can still be reconstructed with moderate computational costs. The same comparison as in Fig. <ref> but for the C-band data is displayed in Fig. <ref>. As indicated in Tab. <ref>, the grid size we used for the C-band sky map is a factor of two larger along both spatial axes. Thus, in total, we have four times more pixels, increasing the computational cost of the algorithm. On the A100 GPU, the reconstruction was finished after 56 min, while on the RTX 3090 GPU, the reconstruction took 165 min. We believe that the larger difference between the two GPUs for the C-band data compared to the S-band data might be because, for the smaller grid size of the S-band reconstruction, the A100 GPUs were not fully utilized. For completeness, we also display snapshots of the X and Ku-band reconstructions in Fig. <ref>. These reconstructions were only carried out on the A100 GPU. After 132 min both reconstructions were finished. §.§ Application to MeerKAT data In this section, we present an application of to an L-band (856-1712 MHz) MeerKAT <cit.> observation of the radio galaxy ESO 137-006. Originally, this observation was presented in <cit.>. The observation utilized all 64 MeerKAT antennas and the 4k mode of the SKARAB correlator. The total on target observation time is 14 h in full polarization with 4096 channels. For the VLA Cygnus A observations above, only a single frequency channel was used for each band. Here, we use two sub-bands with each about 200 channels (after averaging) and a bandwidth of approximately 200 MHz. Consequently the MeerKAT datasets are more than 400 times larger than the VLA datasets of the previous section, making an evaluation of the radio interferometer response (Eq. <ref>) significantly more expensive. A reconstruction, where the exact response needs to be computed for each evaluation of the likelihood (Eq. <ref>), is computationally unfeasible for such a dataset. <cit.> detected hitherto unknown collimated synchrotron threads linking the lobes of the radio galaxy ESO 137-006 in this observation. Since then, this data set has also been used by <cit.> for demonstrating a sparsity-based imaging algorithm. <cit.> also published the results as files, allowing us to use them as a reference for validating the algorithm for a MeerKAT-sized reconstruction. Technical details relating to the initial flagging and transfer calibration of the ESO 137-006 data using the CARACAL pipeline <cit.> are given in <cit.>. As in <cit.>, the data is averaged from 4096 to 1024 frequency channels and split into two sub-bands spanning 961-1145 MHz and 1295-1503 MHz that are relative free from radio frequency intereference, known as the LO and HI bands respectively. These subbands are then phase self-calibrated using WSClean multi-scale CLEAN <cit.> for imaging and CubiCal <cit.> for calibration. We then image the data independently in the LO and HI band with . Computational relevant details of the calibrated data in the two bands are listed in Tab. <ref>. §.§.§ prior model As for the Cygnus A observation, we built our prior model around the exponentiated Gaussian process model described in Sec. <ref> and in detail in <cit.>. Nevertheless, for the MeerKAT observation, there is, besides the main source, ESO 137-006, also a second galaxy, ESO 137-007, in the field of view. Due to the nature of the generative prior models in , the full prior model can easily be composed of multiple components. We model each of these sources with a separate exponentiated Gaussian process to decouple the prior models. Furthermore, due to the high sensitivity of MeerKAT, many compact background sources are detected. For the Cygnus A reconstruction, we placed two point source models at the locations of the two point sources in the core of Cygnus A. However, for the MeerKAT observation, the number of background sources is far too large to manually place point source models at their locations. Therefore, we use a third Gaussian process model to represent all the background sources outside of the models for ESO 137-006 and ESO 137-007. A sketch of the layout of the three models is shown in Fig. <ref>. The exact parameters for all Gaussian process models are listed in appendix <ref>. §.§.§ Imaging of <cit.> In <cit.>, ESO 137-006 is imaged with WSClean <cit.> and two convex optimization algorithms in the same bands 961-1145 MHz and 1295-1503 MHz using the same data as we do. The two convex optimization algorithms of <cit.> utilize two different regularizers. One regularizer is named uSARA, promoting sparsity in a wavelet basis. The other regularizer, AIRI, originally presented in <cit.>, is based on a neural network denoiser. The resulting images for both regularizers are compared with results from WSClean, showing significant improvements in the image quality. The exact computational costs of all three imaging algorithms are reported in <cit.>. Imaging with the sparsity promoting regularizer, uSARA needed about 1500 to 3000 CPU hours per band to converge. With the neural network-based denoiser AIRI, around 900 to 1600 CPU hours plus around 5 GPU hours where needed. §.§.§ Imaging results Fig. <ref> depicts the results of the reconstruction for the LO band data. Zooms on the two radio galaxies are shown in Fig. <ref> and Fig. <ref> for the LO band and in Fig. <ref> and Fig. <ref> for the HI band. Thereby, Fig. <ref> and Fig. <ref> display the ESO 137-006 radio galaxy while Fig. <ref> and <ref> show the radio galaxy ESO 137-007 north of it. These figures also include the convex optimization reconstruction results of <cit.> for comparison and validation. The imaging results of and <cit.> are consistent. The maps have higher background artifacts than the convex optimization maps. In <cit.>, collimated synchrotron threads between the two lobes of the galaxy ESO 137-006 were detected. Besides these already known threads, the map shows additional threads north of the core of the galaxy. These additional threads are probably artifacts in the image. With a lower intensity, similar features are also found in the maps of <cit.> and are mostly considered artifacts. We believe these features are imaging artifacts and originate from suboptimal calibration. Due to the very flexible prior model, both and are very sensitive the data. In the case of suboptimal calibration, this leads to imaging artifacts. At present, self-calibration with is not yet possible. For the future, the integration of a self-calibration routine into is planned, which could potentially mitigate such artifacts. Additionally, a dedicated prior model for compact sources could improve the reconstructions. At present, point sources can either be modeled by manually placing inverse gamma distributed pixels at their locations (see Sec. <ref>) or by including them in the Gaussian process model. For the VLA Cygnus A reconstruction, the point sources in the nucleus were modeled by manually placing inverse gamma distributed pixels at their locations. Thereby, imaging artifacts around the two-point source could be avoided, despite them being very bright. Due to the high number of background sources, this is impractical for the ESO 137-006 reconstruction. Compact sources are therefore also modelled using an exponentiated Gaussian process prior. A dedicated prior model that is also applicable to observations with a very high number of background sources could improve their reconstruction. §.§.§ Computational costs Despite the high number of visibilities and frequency channels, the computational costs of are still moderate since only needs to evaluate the full radio interferometric measurement equation (Eq. <ref>) in the major cycles. The reconstructions of both frequency bands were done on an NVIDIA A100 GPU. Before the actual imaging, the convolution kernels for the response and the noise were constructed on a CPU. For both frequency bands, less than two hours were needed on a single core of an Intel Xeon CPU to compute the convolution kernel. For imaging, 24 hours using an NVIDIA A100 GPU and eight cores of an Intel Xeon CPU were needed. Since most of the operators, except for the major update steps, are done on the GPU, the CPU was mostly idle during imaging. In comparison, the computational costs of the <cit.> reconstructions are significantly higher as they range between 900 and 3000 CPU hours per band and algorithm. <cit.> for comparison also presented a WSClean reconstruction. For the WSClean reconstruction <cit.> report a total computational cost of 132 and 236 CPU hours for the two imaging bands. Although CPU and GPU hours are not directly comparable, this underlines the computational performance and applicability of to imaging setups with massive datasets. § CONCLUSION This paper introduces the fast Bayesian imaging algorithm . combines the accuracy of the Bayesian imaging framework with computational shortcuts of the algorithm. This significantly broadens the applicability of Bayesian radio interferometric imaging. transforms the likelihood of the Bayesian radio interferometric imaging problem into a likelihood of a deconvolution problem, which is much faster to evaluate. Using the major/minor cycle scheme of , corrects for inaccuracies of the transformed likelihood and accounts for the w-effect. The accuracy of is validated on Cygnus A VLA data by comparing with previous and multi-scale reconstructions. The comparison shows that achieves the same resolution as . Likewise, the imaging artifacts are comparable and on a very low level. Furthermore, the computational speed of is analyzed and compared to , showing significant speedups for the VLA Cygnus A data. As is implemented in JAX, it can also be executed on a GPU, accelerating the reconstruction compared to the CPU by more than an order of magnitude. For the single channel Cygnus A VLA dataset, is at least 140 times faster than when executed on a GPU. For datasets with more frequency channels, the computational advantages of can be even larger. Additionally, we present a Bayesian image reconstruction of the radio galaxies ESO 137-006 and ESO 137-007 from MeerKAT data with and compare the results for validation with <cit.>. The MeerKAT dataset is significantly larger than the VLA datasets, but the computational costs of remain moderate due to the major/minor cycle scheme. A reconstruction of these sources with the classic algorithm using the same amount of data would be computationally out of scope. To the best of our knowledge, no other Bayesian radio interferometric imaging algorithm has been successfully applied to a dataset of similar size before. § DATA AVAILABILITY The raw data of the Cygnus A observation is publicly available in the NRAO Data Archive[<https://data.nrao.edu/portal/>] under project ID 14B-336. The raw data for the ESO137-006 observation is publicly available via the SARAO archive[<https://archive.sarao.ac.za>] (project ID SCI-20190418-SM-01). The reconstruction results are archived on zenodo[<https://doi.org/10.5281/zenodo.11549302>]. The implementation of the algorithm will be integrated into the algorithm[<https://gitlab.mpcdf.mpg.de/ift/resolve>]. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. J. R. acknowledges financial support from the German Federal Ministry of Education and Research (BMBF) under grant 05A23WO1 (Verbundprojekt D-MeerKAT III). P. F. acknowledges funding through the German Federal Ministry of Education and Research for the project “ErUM-IFT: Informationsfeldtheorie für Experimente an Großforschungsanlagen” (Förderkennzeichen: 05D23EO1). O. M. S.'s research is supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation (grant No. 81737). aa § PRIOR PARAMETERS FOR CYGNUS A IMAGING In Tab. <ref> we list the hyper parameters of the Gaussian process model for the diffuse emission in the Cygnus A reconstructions. The exact definition of these parameters is explained in detail in <cit.>. § PRIOR PARAMETERS FOR ESO 137 IMAGING In Tab. <ref> we list the hyper parameters of the Gaussian process model for the diffuse emission in the ESO 137 reconstructions. The exact definition of these parameters is explained in detail in <cit.>.
http://arxiv.org/abs/2406.08524v1
20240612071600
Federated Incomplete Multi-View Clustering with Heterogeneous Graph Neural Networks
[ "Xueming Yan", "Ziqi Wang", "Yaochu Jin" ]
cs.LG
[ "cs.LG", "cs.DC" ]
Spatial-Frequency Dual Domain Attention Network For Medical Image Segmentation The corresponding authors: Xueshuo Xie* and Tao Li* 1st Zhenhuan Zhou0009-0000-2187-7184 College of Computer Science Nankai University Tianjin, China zhouzhenhuan@mail.nankai.edu.cn 4th Rui Yao0009-0008-0949-2247 The Department of Pediatric Dentistry Tianjin stomatological hospital Tianjin, China yaorui73@163.com 2nd Along He0000-0003-1356-8757 College of Computer Science Nankai University Tianjin, China healong2020@163.com 5th Xueshuo Xie*0000-0002-8245-8415 Haihe Lab of ITAI Tianjin, China xueshuoxie@nankai.edu.cn 3rd Yanlin Wu0000-0002-2087-275X College of Computer Science Nankai University Tianjin, China 1229145158@qq.com 6th Tao Li*0000-0002-1273-0487 College of Computer Science Nankai University and Haihe Lab of ITAI Tianjin, China litao@nankai.edu.cn June 17, 2024 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Federated multi-view clustering offers the potential to develop a global clustering model using data distributed across multiple devices. However, current methods face challenges due to the absence of label information and the paramount importance of data privacy. A significant issue is the feature heterogeneity across multi-view data, which complicates the effective mining of complementary clustering information. Additionally, the inherent incompleteness of multi-view data in a distributed setting can further complicate the clustering process. To address these challenges, we introduce a federated incomplete multi-view clustering framework with heterogeneous graph neural networks (FIM-GNNs). In the proposed FIM-GNNs, autoencoders built on heterogeneous graph neural network models are employed for feature extraction of multi-view data at each client site. At the server level, heterogeneous features from overlapping samples of each client are aggregated into a global feature representation. Global pseudo-labels are generated at the server to enhance the handling of incomplete view data, where these labels serve as a guide for integrating and refining the clustering process across different data views. Comprehensive experiments have been conducted on public benchmark datasets to verify the performance of the proposed FIM-GNNs in comparison with state-of-the-art algorithms. § INTRODUCTION Multi-view clustering is a fundamental machine learning task aiming to improve clustering performance by leveraging the consistency and complementary information from different views <cit.>. Traditional multi-view clustering methods typically process raw data directly or through basic feature transformations, but the performance of these approaches tend to deteriorate on high-dimensional data <cit.>. To overcome these limitations, deep learning technologies have been introduced into multi-view clustering. By utilizing neural networks to extract complex and abstract features from views, deep multi-view clustering methods are capable of handling more complex data structures and relationships <cit.>. Despite significant achievements of deep multi-view clustering methods in dealing with traditional Euclidean space data <cit.>, real-world data often naturally exists in graph forms, such as social networks and knowledge graphs, presenting new challenges for clustering tasks. In particular, the complexity of graph structures makes it difficult for traditional clustering methods to be effective <cit.>. Graph neural networks (GNNs) have gained widespread attention for their ability to capture structural information effectively <cit.>. In multi-view clustering applications <cit.>, GNNs can utilize the relationships between views as well as structural and feature information to obtain node features suited for clustering, effectively integrating multi-view data and enhancing clustering performance. However, existing methods based on GNNs often overlook the heterogeneity of data across different views, and the node features obtained may be unsuited for clustering outcomes <cit.>. Additionally, these methods typically employ centralized training, which can raise privacy concerns of multi-view data <cit.>. Federated learning, as a distributed learning model, offers a solution to address data heterogeneity issues while protecting data privacy <cit.>. By combining federated learning with graph neural networks, it is possible to capture graph structures and node features while ensuring data privacy<cit.>. However, federated GNN-based methods face dual challenges in solving clustering problems: inconsistency and heterogeneity of data across different views <cit.>. Data from different views are not always perfectly overlapping and vary in data features and sizes. Moreover, due to various uncontrollable factors in practical applications, as shown in Fig. <ref>, multi-view data may be incomplete, further complicating clustering tasks <cit.>. To address these challenges, this paper proposes a federated incomplete multi-view clustering framework based on heterogeneous graph neural networks (FIM-GNNs). This framework aims to capture node features with heterogeneous GNNs from different views, and the global aggregation is designed to merge the complementary information from various views. Additionally, with the help of global pseudo labels, we merge feature extraction and clustering into a unified process, and use these labels to assist client training to achieve consistency in incomplete multi-view data. The main contributions of this paper can be summarized as follows: * We propose a federated incomplete multi-view clustering framework utilizing heterogeneous GNNs. Local training with heterogeneous GNNs and a global aggregation are introduced to effectively harness the complementary information in multi-view data, significantly enhancing the clustering performance. * A global pseudo-label mechanism is designed with heterogeneous aggregation in a federated environment, enhancing the ability of the FIM-GNNs to handle incomplete view data and improving the consistency of features as well as the performance of the clustering results. * Comprehensive experiments demonstrate the competitive performance of the FIM-GNNs in handling data heterogeneity and incompleteness compared to four state-of-the-art incomplete multi-view clustering methods across three public datasets. § RELATED WORK This section provides the problem formulation of federated incomplete multi-view clustering and a short overview of existing multi-view clustering techniques. The notations in this study are listed in Table <ref>. §.§ Problem formulation For a multi-view dataset with N samples, where X_c = {X_c^1, …, X_c^i, …, X_c^m}, X_c^i ∈ℝ^N × D_m represents the feature set of the nodes, and the sample features D_i of different views exhibit differences. In practice, some views may have missing sample data. Therefore, the description of an incomplete multi-view clustering dataset is as follows: X^i = (M^i ⊗1) ⊙ X_c^i which is subject to the following constraint: ∑_i=1^N M_j^i ≥ 1 Here, M = {M^1, …,M^i, …,M^m} represents a set of binary selection matrices, with M^i ∈{0,1}^N × 1. The entries of M^i = 1 indicate that the corresponding features are retained, whereas M^i = 0 means that the features are discarded. 1∈ℝ^1 × N, and M^i ⊗1 indicates that either all sample features are missing or all are present, meaning that partial missing features within the samples are not considered. ⊙ represents the Hadamard product. ∑_j=1^N M_j^i ≥ 1 ensures that each sample retains at least one feature, and no sample is completely devoid of features. In this study, we consider the problem of incomplete multi-view clustering under a federated framework, as shown in Fig. <ref>. Assume there is an incomplete multi-view dataset X = {X_1, …, X_m} and A = {A_1, …, A_m} distributed across m clients. For client i, X_i ∈ℝ^N_i × D_i represents the node features, and A_i ∈ℝ^N_i × N_i represents the graph structure. Due to the data heterogeneity in a federated setting, both the sample dimensions D_i and the number of samples N_i vary across clients. Additionally, due to missing data, N_i < N. The incomplete multi-view clustering problem involves extracting features from the incomplete node features and graph structures of each view, followed by an aggregation to produce the final clustering results. §.§ Multi-view clustering Recently, GNNs have emerged as effective tools for graph clustering tasks <cit.> to deal with multi-view data. GNN-based graph clustering techniques combine GNNs with autoencoders to perform clustering tasks in an unsupervised manner. For example, O2MAC <cit.> captures features using a shared multilayer graph convolutional encoder across different views and performs graph reconstruction with a multilayer graph convolutional decoder in each view. DAEGC <cit.> utilizes a graph attention-based autoencoder that considers both node attributes and the structural information of the graph. MAGCN <cit.> extends this approach to multi-view graph clustering, taking into account the geometric relationships and probabilistic distribution consistency between multi-view data to further enhance clustering tasks. GC-VAE<cit.> combines variational graph autoencoders with graph convolutional networks, clustering nodes based on graph topology and node features. However, these methods utilize centralized training and overlook the importance of data privacy protection. GNNs-based multi-view clustering aims to enhance clustering performance by leveraging consistent and complementary information across multiple views <cit.>. Meanwhile, federated multi-view clustering provides a promising solution for protecting data privacy in various distributed devices/silos, as discussed in <cit.>. For example, a distributed multi-view spectral clustering method called FMSC <cit.> employs homomorphic encryption and differential privacy techniques to protect data privacy. However, the time-consuming encryption and decryption processes reduce the efficiency of the model. By considering the communication issues, FedMVL <cit.> proposes a vertically federated learning framework based on non-negative orthogonal decomposition, effectively reducing the cost of the model. However, these shallow methods struggle to effectively extract node information, limiting model performance. FL-MV-DSSM <cit.> utilizes a deep structured semantic model to construct a multi-perspective recommendation framework. However, these federated learning multi-view clustering methods overlook the incompleteness of data across different views and are limited by their clustering approaches, which constrains model performance. Recently, some researchers <cit.> have started paying attention to incomplete clustering algorithms, which employ the complete views to predict the missing data. For example, EE-IMVC <cit.> imputed each incomplete base matrix generated by incomplete views with a learned consensus clustering matrix. Moreover, IMVC-CBG <cit.> reconstructed missing views with a consensus bipartite graph by minimizing the conditional entropy of multiple views using dual prediction. After that, IMVC <cit.> learns the features for each view by autoencoders and utilizes an adaptive feature projection to avoid the imputation for missing data. Fed-DMVC <cit.> addresses the incompleteness issue in federated multi-view clustering with an autoencoder model. However, these methods do not simultaneously consider both data heterogeneity and data incompleteness issues in the process of multi-view clustering. § PROPOSED METHOD In this section, we present the federated incomplete multi-view clustering with heterogeneous graph neural networks (FIM-GNNs), illustrated in Figure <ref>. Heterogeneous graph neural networks, such as GCN <cit.> and GAT <cit.>, are utilized to extract features and tailored for local client-side training. These incomplete multi-view features are then combined at the server to generate global pseudo-labels. Following this, clustering results are derived through the algorithm optimization. Lastly, we provide an analysis of the time complexity for the proposed FIM-GNNs. §.§ Local training with heterogeneous GNNs Based on the heterogeneity of the multi-view data at each client side, we utilize the different GNNs as graph autoencoders for feature extraction in different view data. In addition, we introduce global pseudo-labels P acquired from the server as labels to assist client-side training. In the graph autoencoder process, we extract low-dimensional features from the client side, aiming to capture the latent features of the multi-view data while preserving data privacy. For the ith client 's data X^i and graph structure A^i, we use the GNN-based encoder method to project them into low-dimensional features Z^i, and utilize a decoder to reconstruct the graph structure A^i. Considering the differences in sample quantity and dimension across different views (clients), we adopt two types of GNN models, GCN and GAT, to construct graph autoencoders for feature extraction. When the missing rate of the i-th view data is not too low, we construct a two-layer GCN as the encoder for feature extraction: Z = f(X, A) = softmax(ÂReLU( X W^(0)) W^(1)) where  = D̃^-1/2ÃD̃^-1/2, and W^(0), W^(1) are the parameters for the first and second layers of the GCN, respectively. When the missing rate of the i-th view data is low, we use a two-layer GAT as the encoder for feature extraction: z_j^(1) = σ( ∑_k ∈ N_jα_jk W^(0) x_k ) z_j^(2) = σ( ∑_k ∈ N_jα_jk W^(1) z_k^(1)) where W^(0) and W^(1) are the parameters for the first and second layers of the GAT, respectively. The attention coefficients α_jk are computed as follows: α_jk = exp(σ(a⃗^T[Wx_ij‖ Wx_k]))/∑_r ∈ N_jexp(σ(a⃗^T[Wx_j ‖ Wx_r])) where z_j^(1) and z_j^(2) are the features of vertex j obtained from the first layer GAT and the second layer GAT, respectively. For updating the adjacency matrix, it is defined as:  = sigmoid(Z^T ∙ Z). Next, for updating the loss function, it is defined as: L_r^i = loss(A^i, Â^i) After feature extraction, a self-supervised clustering layer is introduced for each client. The cluster membership matrix U^i = [u_1^i, …, u_k^i] ∈ℝ^K × d_v determines the soft cluster assignments Q^i, which are calculated as follows: q_ij^i = (1 + z_j^m - u_k^m^2)^-1/∑_k (1 + z_j^m - u_k^m^2)^-1 where q_ij^i represents the probability that sample j from the i-th client is assigned to cluster k. Next, for client i, we use the global pseudo-labels P and the soft cluster assignments Q^i to compute the KL divergence loss: L_c^i = KL(P ‖ Q^i) = ∑_j∑_k p_jklogp_jk/q_jk^i Then, the total loss function is defined as: L = L_r + γ L_c Here, L_r is the reconstruction loss, and L_c is the clustering loss, where γ is a hyper-parameter that balances the importance of the reconstruction loss relative to the clustering loss. §.§ Global aggregation After obtaining the features Z^i of different clients and the clustering centers U^i, the server uses heterogeneous aggregation on the overlapping sample features Z^i_Cto obtain the global feature Z, and then updates the global pseudo-labels P based on Z. To leverage the consistency and complementarity of different views, we perform heterogeneous aggregation on the overlapping sample features of different clients. Due to the heterogeneity of client data, the feature dimensions obtained are different, making direct aggregation challenging. Therefore, we assign different weights W_i to the features of different clients based on U^i. Then, we concatenate the weighted overlapping sample features Z_C^i for each sample obtained from different clients to produce the high-dimensional feature Z: Z = [w_1Z_C^1, w_2Z_C^2, …, w_mZ_C^m] where w_i is calculated by: w_i = 1 + log( 1 + σ(U^i)/∑_v σ(U^i)) where Z ∈ℝ^N ×∑ d_i represents the global feature, Z_C^i indicats the overlapping sample features of the i-th view, and σ(.) represents the variance. Clearly, a higher variance indicates better clustering results. Therefore, by performing weighted aggregation through W_i, it is possible to enhance the influence of features from views with better clustering results and reduce the influence of features from views with poorer clustering results. The optimal cluster assignment C is then obtained using the K-means algorithm by minimizing: min_C∑_j=1^N_c∑_k=1^K z_j - c_k ^2 where A is the adjustment matrix used to align the two instances of S. The server sends the global pseudo-label P to the clients as global information to integrate the complementary features of different views. The final global pseudo-labels P is obtained through the following calculation: P = ε (s_j)A, s_jk = (1 + z_j - c_k ^2)^-1/∑_j (1 + z_j - c_k ^2)^-1, s_jk∈ S and ε(s_i) = (s_jk/∑_j s_jk)^2/∑_j (s_jk/∑_j s_jk)^2 Due to differences in the cluster centers in each round of aggregation, the Hungarian algorithm is used to introduce A to align S across different communication rounds. Finally, the most likely cluster assignment for each data point is determined by: y_j = max_jk(1/m∑_m q_jk^i) §.§ Algorithm optimization As shown in Algorithm 1, the optimization in the proposed FIM-GNNs primarily consists of the client and server components. Firstly, we carry out the pre-training of the heterogeneous GNNs at the client side. We use all the data X^i from each view, and train the encoder and decoder only using the reconstruction loss to enhance the model's capability in feature extraction. In the proposed FIM-GNNs, we initialize k cluster centers using the K-means algorithm. Subsequently, we train the FIM-GNNs using the holistic federated framework, where the server aggregates the heterogeneous features Z_C^i of overlapping samples from each client into a global feature Z, based on which global pseudo-labels P are generated and sent back to the clients. Each client then uses the overlapping sample data X_C^i and the global pseudo-labels P to train the model in parallel for T rounds, obtaining features Z_C^i and cluster centers U_i. The server and clients update alternately over E communication rounds. §.§ Complexity Analysis Algorithm 1 consists of two main phases, which together determines its computational complexity. The algorithm begins with training each view separately. This step has a complexity of O(N(dd_1 + d_1d_2)), where N and d represent the sample size and sample dimension, respectively. d_1 and d_2 are the feature dimensions of the views. Next, in the aggregation phase, each client contributes to the global model by calculating the feature matrix P and cluster centers C, resulting in a complexity of O(NDK), where D is the feature dimension, and K is the number of clusters. For updating P, the complexity is O(K^3 + NK). Therefore, the proposed FIM-GNNs is with a complexity of O(K^3 + NDK + N(dd_1 + d_1d_2)). § EXPERIMENTS §.§ Experimental Setup Based on the differences in sample features across various views within the federated framework, we conducted training on two widely used datasets, Caltech-7 and BDGP, to validate the performance of our proposed FIM-GNNs. Table <ref> show the description of the datasets. In the experiments, we construct the graph structure information as in <cit.>. Taking into account the differences in dimensionality and quantity of data across different clients, we select partial views as the dataset. Simultaneously, based on the specific missing rates R_i of each view, parts of the samples are missing, but we ensure that each sample exists in at least one view. We firstly use two standard clustering metrics, namely accuracy (ACC) and Normalized Mutual Information (NMI) for evaluations. Additionally, we use the Adjusted Rand Index (ARI) to assess the quality of clustering. We compared the proposed FIM-GNNs with four state-of-the-art incomplete multi-view clustering methods. CDIMC-NET <cit.> is a method that integrates deep encoders and graph embedding strategies, and introduces a self-paced strategy to select optimal samples for model training. COMPLETER <cit.> is a contrastive learning-based method that achieves cross-view data recovery from an information-theoretic perspective. APADC <cit.> is an adaptive feature projection-based incomplete multi-view clustering method, which introduces distribution-aligned feature learning. IMVC-CBG <cit.> is an anchor-based multi-view learning method, introducing a bipartite graph framework to address incomplete multi-view clustering. In our experiments, we set the number of epochs to 10, with a learning rate initialized at 0.005 and reduced by half every 50 epochs. We use Adam optimizer. For both the GAT and GCN frameworks, the Adam optimizer is used, with a pre-training learning rate of 0.005 and a training learning rate of 0.001. For the GAT framework, on the BDGP dataset, the dimensions of the two GAT layers are [128,16], and on the Caltech-7 dataset, the dimensions of the two layers are [512,32]. For the GCN framework, the dimensions of the two GCN layers are [128,16]. When the missing rate β does not exceed 0.1, the GCN framework is used; when the missing rate exceeds 0.1, the GAT framework is used. The hyperparameter γ is set to 1. §.§ Performance Evaluation Table <ref> summarizes the experimental results on two datasets, where [R_1,R_2,R_3] means the missing rates on different clients.It can be observed that our method basically achieves the best results in five incomplete multi-view clustering methods, demonstrating the effectiveness of our approach. Compared with three deep neural network-based methods (CDIMC-NET, COMPLETER and APADC), the FIM-GNNs achieves better results, indicating that the introduction of heterogeneous graph structure as auxiliary information can effectively improve model performance. Particularly on the BBCSport dataset, our method outperforms the other three deep methods significantly, suggesting that the simultaneous use of heterogeneous graph structure and global node feature information can more effectively capture features, especially in federated settings, thereby improving clustering results for incomplete view data in different scenarios. In addition, to visually observe the clustering results more intuitively, we utilized t-SNE for visualization of the results with a missing rate of [0.2,0.2,0.1] on the BDGP dataset. Figure <ref> presents the visualization results after reducing the features of complete samples to two-dimensional, where different colors represent different clusters. It can be observed that initially, the nodes are relatively scattered, and the boundaries between different clusters are not distinct. As the communication rounds increase, nodes within the same cluster gradually aggregate, and the boundaries between different clusters become clearer. This is because a global pseudo-label mechanism with heterogeneous aggregation is beneficial for obtaining the accuracy of the federated incomplete multi-view clustering results. §.§ Effect of heterogeneous GNNs To evaluate the impact of heterogeneous GNNs, we assessed the performance of FIM-GNNs with GAT, GCN, and a combination of GCN and GAT, respectively. As shown in Fig. <ref>, the experiments across two datasets indicated that the combination of GCN and GAT generally yields better results, while the individual models (GAT and GCN) also perform well. The performance of the combination of GCN and GAT demonstrates the effectiveness of heterogeneous GNNs in dealing with the incompleteness of performance across different views. §.§ Parameter Sensitivity We determine the model for each client based on the missing rate threshold β in the each view. When the view missing rate does not exceed β, we use GCN; otherwise we use GAT. To verify the effectiveness of β, we run the FIM-GNNs with β set at [0.05, 0.10, 0.15, 0.20, 0.25, 0.30], and the results are shown in the Fig. <ref>. We can see from the observations that β = 0.1 yields the best performance on Caltech-7 and BDGP datasets. During local training on the client side, a hyperparameter γ is used to balance the clustering loss and reconstruction loss. We conducted experiments on the BDGP dataset by varying the values of the hyperparameter γ from 10^-3, 10^-2, ..., 10^2, 10^3 to test parameter sensitivity, as shown in Fig. <ref>. It can be observed that different clients exhibit varying sensitivities to γ, but overall, the performance is optimal when γ=1, so we set γ=1 in this study. § CONCLUSION In this study, we present a federated incomplete multi-view clustering framework with heterogeneous GNNs. Local client-side training and global heterogeneous aggregation are introduced to effectively enhance the performance of federated incomplete multi-view clustering. Moreover, we employ a global pseudo-label mechanism with heterogeneous aggregation to deal with incomplete view data. The effectiveness of FIM-GNNs is evaluated on public datasets compared to state-of-the-art incomplete multi-view clustering methods. Although BGNAS can obtain effective and efficient clustering results for incomplete multi-view data, it is still worth considering its applicability to extremely imbalanced multi-view data or other complex clustering tasks. In the future, the proposed FIM-GNNs can potentially be integrated with heterogeneous GNNs for incomplete multi-modal clustering tasks or specific domains. § ETHICAL STATEMENT There are no ethical issues. § ACKNOWLEDGMENTS This work was supported in part by the National Natural Science Foundation of China under Grant No. 62136003, and in part by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2024A1515011729 and No. 2023A1515012534. named
http://arxiv.org/abs/2406.08205v1
20240612133848
What do we know about Hugging Face? A systematic literature review and quantitative validation of qualitative claims
[ "Jason Jones", "Wenxin Jiang", "Nicholas Synovic", "George K. Thiruvathukal", "James C. Davis" ]
cs.SE
[ "cs.SE", "cs.LG" ]
@hang@@ @hang @chapter section#2#1#3 figureFigureFigures appendixAppendixAppendices tableTableTables algorithmAlgorithmAlgorithms listingListingListings theoremTheoremTheorems thmTheoremTheorems lemmaLemmaLemmata equationEqt.Eqts. GrammarGrammar #1 RQList finding [table]font=small,skip=3.5pt [figure]font=small,skip=3.5pt acmlicensed 2024 2024 XXXXXXX.XXXXXXX [Conference acronym 'XX]Make sure to enter the correct conference title from your rights confirmation emaiJune 03–05, 2018Woodstock, NY 978-1-4503-XXXX-X/18/06 SLR and Missing Measurements of Hugging Face]A Quantitative Comparison of Pre-Trained Model Registries to Traditional Software Package Registries What do we know about Hugging Face?]What do we know about Hugging Face? A systematic literature review and quantitative validation of qualitative claims jone2078@purdue.edu 0009-0005-7088-0597 Purdue University 610 Purdue Mall West Lafayette Indiana USA jiang784@purdue.edu 0000-0003-2608-8576 Purdue University 610 Purdue Mall West Lafayette Indiana USA nsynovic@luc.edu 0000-0003-0413-4594 Loyola University Chicago Chicago Illinois USA gthiruv@luc.edu 0000-0002-0452-5571 Loyola University Chicago Chicago Illinois USA davisjam@purdue.edu 0000-0003-2495-686X Purdue University 610 Purdue Mall West Lafayette Indiana USA § ABSTRACT Background Software Package Registries (SPRs) are an integral part of the software supply chain. These collaborative platforms unite contributors, users, and packages, and they streamline package management. Much engineering work focuses on synthesizing packages from SPRs into a downstream project. Prior work has thoroughly characterized the SPRs associated with traditional software, such as NPM (JavaScript) and PyPI (Python). Pre-Trained Model (PTM) Registries are an emerging class of SPR of increasing importance, because they support the deep learning supply chain. Aims A growing body of empirical research has examined PTM registries from various angles, such as vulnerabilities, reuse processes, and evolution. However, no existing research synthesizes them to provide a systematic understanding of the current knowledge. Furthermore, much of the existing research includes unsupported qualitative claims and lacks sufficient quantitative analysis. Our research aims to fill these gaps by providing a thorough knowledge synthesis and use it to inform further quantitative analysis. Methods To consolidate existing knowledge on PTM reuse, we first conduct a systematic literature review (SLR). We then observe that some of the claims are qualitative and lack quantitative evidence. We identify quantifiable metrics assoiated with those claims, and measure in order to substantiate these claims. Results From our SLR, we identify 12 claims about PTM reuse on the HuggingFace platform, 4 of which lack quantitative validation. We successfully test 3 of these claims through a quantitative analysis, and directly compare one with traditional software. Our findings corroborate qualitative claims with quantitative measurements. Our two most notable findings are: (1) PTMs have a significantly higher turnover rate than traditional software, indicating a dynamic and rapidly evolving reuse environment within the PTM ecosystem; and (2) There is a strong correlation between documentation quality and PTM popularity. Conclusions Our findings validate several qualitative research claims with concrete metrics, confirming prior qualitative and case study research. Our measures show further dynamics of PTM reuse, motivating further research infrastructure and new kinds of measurements. [ James C. Davis Received May 18, 2024; accepted June 6, 2024 ================================================ § INTRODUCTION As the size and cost of developing deep learning (DL) models from scratch continue to rise, engineers are increasingly turning to adapt open-source Pre-trained Models (PTMs) as a cost-effective alternative <cit.>. PTM registries facilitate the reuse of open-source models by providing packages that include pre-trained weights, configuration, and documentation <cit.>. Hugging Face has become a prominent PTM registry, comparable in popularity to traditional software registries like and  <cit.>. Prior research on traditional Software Package Resigtries (SPRs) such as NPM and Maven has benefited greatly from both qualitative and quantitative studies, thereby enhancing their comprehension and application by engineers and researchers <cit.>. These registries are typically analyzed first through qualitative data collection, which is then validated by quantitative methods. However, this approach has yet to be fully applied to PTM registries. Research typically starts with qualitative data collection, which is then validated through quantitative methods <cit.> Prior research has empirically compared PTM registries to traditional software package registries such as NPM and PyPI, tackling diverse issues such as carbon emissions, model selection, and vulnerabilities <cit.>. Despite these efforts, no work has yet synthesized the existing knowledge of PTM registries. Furthermore, not all topics about PTM registries such as Hugging Face have been studied from both qualitative and quantitative perspectives. This lack of a comprehensive approach has led to a lack of quantitative validation, leaving numerous qualitative insights about PTM registries under-verified. Software Package Registries (SPRs) are an integral part of the software supply chain <cit.>. They serve as hubs of collaboration that bring together contributors, reusers, and packages within a shared ecosystem. Additionally, they provide tools to streamline package selection, downloading, and integration. They also track and present important metadata to engineers about packages, including package maintainers, download counts, package licenses, and other context-dependent information. I'd add (i.e) + two examples after "context-dependent" or provide concrete examples instead Extensive prior work has used this information, augmented with additional information as needed, to measure different facets of SPRs to understand their influence on the software supply chain <cit.>. Recently, registries that specialize in providing Pre-Trained Models (PTMs) have been examined to determine if the PTM Supply Chain is particularly different than the Traditional Software Supply Chain <cit.>. These works suggest that many aspects of software reuse engineering differ between traditional software and PTMs, leading to the hypothesis that “.” However, findings from these works have been primarily qualitative and lack quantitative evidence for differences. Our work aims to bridge the knowledge gap concerning the synthesis of extensive research on PTM reuse facilitated by Hugging Face, and provide further measurements to validate the prior qualitative and quantitative claims. An overview is shown in <ref>. Our work consists of two parts: First, we conduct a systematic literature review to transform qualitative insights about PTM package reuse on Hugging Face into quantitative metrics. Second, we evaluate the robustness of these insights through quantitative validation. This approach enhances our understanding of package reuse dynamics on Hugging Face compared to traditional SPRs. The next paragraph does not make sense. Let $3 be your guide — we need to come back to the Intro and make the Intro aligned with what we did. The goal was to test the hypothesis “”. We operationalized this goal by focusing on package (1) Evolution and (2) Reuse; concentrations that were selected by drawing on claims from prior work using a Systematic Literature Review (SLR). We utilize the established Goal, Question, Method (GQM) conceptual framework to organize this comparison <cit.>. For each operationalized theme, we developed a series of metrics to measure them. These metrics measured: package interdependency networks and the patterns of downstream reuse. Once each registry had been evaluated, a detailed comparison of the distinctive characteristics between SPRs and PTM registries was performed. Our findings indicate that most of the claims from prior qualitative results are correct. Of the 3 prior claims we quantitatively evaluated, 2 were supported by our results, and 1 was supported by 1 measurement and not supported by the other. Our measurement of library usage indicated a preference for using the Transformers library when creating descendents of models, with 80% of descendents of models choosing to utilize Transformers. Our measurement of package turnover indicates that HuggingFace has a significantly higher turnover rate than traditional package registries. Our measurement of model popularity and descendent count found a correlation between the two, further supporting claims that popularity is a driver of model selection. Finally, we found a strong correlation between documentation quality and PTM popularity, with the top 1000 models outperforming the bottom 1000 in documentation. Our contributions are: * We conduct a systematic literature review on PTM reuse in the Hugging Face registry, and extract a list of qualitative claims from prior work. (<ref>) * We map the qualitative claims to quantitative measurements. We use these measurements to validate the prior findings, via comparison of our quantitative measurements of to representative traditional SPRs. (<ref>) * Our work provides recommendations for future work on developing tools to analyze and keeping datasets updated to support further investigations on the PTM supply chain. (<ref>) Significance for Software Engineering: Prior work has made both quantitative and qualitative claims about software engineers' reuse of PTMs. We systematically synthesized this knowledge through a literature review and developed quantifiable metrics to corroborate the qualitative claims. Our findings confirm several qualitative results about PTM reuse, and also quantify the dynamic reuse environment within the PTM ecosystem. Our results will inform research infrastructure and the development of new metrics to guide and refine PTM development and reuse. Software supply chains are important because they reduce the cost of developing software applications by operationalizing the distribution of third-party packages. Check wording PTMs are an emerging class of software package with unique characteristics, resulting in a distinctive software supply chain compared to traditional software. It is distinctive but let's be clear about whether there is any overlap. Would a diagram help? I ask this because it might be nice to have a pretty/cute picture on page one that shows commonalities vs. differences at a 30,000 ft. view. Agreed Add figure comparing and contrasting the two software supply chains This work advances the understanding of PTM reuse, confirms prior work suggesting that PTM selection is prioritized over synthesis, and asserts the importance of tools to streamline and improve the process of PTM selection. The practice of reusing PTMs has been proposed to be analogous to traditional software package reuse <cit.>. This may also speak to having a figure per my previous comment. Both reuse practices utilize pre-existing software packages to leverage existing functionality instead of developing it from scratch. In traditional software, there exist ecosystems that facilitate the reuse of software <cit.>. These ecosystems encourage package contributors and downstream reusers to gather and interact <cit.>. is a platform similar to traditional software registries that facilitates the reuse of PTMs <cit.>. Prior work has indicated that the novel construct and attributes of a PTM result in a more complicated process of reuse than traditional software <cit.>. However, the literature lacks measurements of PTM reuse metrics, thereby preventing quantitative comparisons to traditional software registries. Being able to make direct comparisons between traditional software registries and would motivate future studies depending on whether or not the metrics chosen matched or were different. Odd wording Software registries bring together three main actors in the software supply chain: contributors, reusers, and software packages. It is important to characterize each one of these actors to better understand how a registry impacts the larger supply chain. Generally, traditional software studies have focused on characterizing one or more of these actors. The selection patterns of downstream users <cit.> has been examined to identify factors that influence the popularity and selection of packages, with the aim in order of improving package visibility and recommendation. The social dynamics <cit.> of traditional software registries was examined as well to understand how contributors drive the development of packages. These studies have also led to a greater understanding of the risks present in both traditional software registries <cit.>, as well as the software supply chain as a whole <cit.>. Studies have also been done to examine how the reuse process of PTMs as facilitated by differs from that of traditional software <cit.>. These studies have found new considerations that impact the reuse process of PTMs. Additionally, engineers believe that the reuse process for PTMs builds on the reuse process for traditional software reuse. However, to the best of our knowledge, no study has utilized comprehensive quantitative measurements to evaluate as an analog of traditional software registries such as NPM, [https://www.pypi.org], Maven[https://maven.apache.org], and RubyGems[https://rubygems.org]. "as an analog of" = odd wording? Maybe replace? In order to close this gap, this paper examines how engineers interact with , both as contributors and users, compared to traditional software package registries. The PeaTMOSS dataset <cit.> is used to create a dependency graph to understand the landscape of , and measure the interconnectedness of PTMs compared to traditional software packages. This dependency graph is augmented with model contributor information, and downstream reuse information, to further enhance our understanding of the ecosystem. Using this graph, quantitative metrics are applied to characterize patterns of contribution and reuse on compared to traditional software registries. This comparison is done in order to evaluate if percieved differences in PTM contribution, evolution, and reuse are substantiated by quantitative evidence. Contributions: * We have selected and adapted a set of qualitative metrics that can be used to characterize several facets of a software registry, * An application of these metrics to both traditional software registries and PTM registries (i.e ), and * A comparison of traditional software registries and PTM registries using these metrics. Significance: Prior work has made qualititave claims about how engineers interact with PTMs, suggesting that the primary issue of reuse is selection, not synthesis. Our work corroborates the findings from prior work, demonstrating that exhibits distinct evolution and reuse patterns compared to traditional software registries. This observation confirms that PTM selection is a more impactful problem that PTM synthesis, motivates future work alleviating this problem for downstream reusers, and prompts further investigation on the broader implications for the PTM supple chain. § BACKGROUND AND RELATED WORK In this section, we first explain how software package registries facilitate the reuse of software (<ref>). We then discuss the rise of Deep Neural Networks (DNNs) and their presence in software engineering reuse as PTM packages (<ref>). We also detail how software package registries have been measured in previous works, and the limited effort towards quantitatively evaluating PTM registries (<ref>). Our work aims to advance the state of the art when it comes to quantitative evaluation of PTM registries. §.§ Software Package Registries <ref> depicts the software supply chain. Software Package Registries (SPRs) serve as collaborative hubs that connect package contributors, reusers, and the packages themselves, facilitating software reuse. These registries are important for engineers because they provide comprehensive information that significantly enhances downstream software development. SPRs act as platforms for creators to upload and share their software packages, featuring version control to allow users to access previous package versions as needed <cit.>. These platforms promote package discoverability and perceived quality through user engagement tools like comments, likes, and ratings. Additionally, collaboration is fostered by discussion threads and version management, enhancing user interaction and visibility. Registries also provide comprehensive metadata, improving package visibility in search results and aiding in package selection. Tools for package downloading, bundling, and managing versions and updates streamline the reuse process, simplifying package lifecycle management. Recently, PTM package registries (Hugging Face <cit.>, PyTorch Hub <cit.>, ONNX Model Zoo <cit.>) emerged to support efficient development of AI systems <cit.>. PTM packages include traditional components (documentation, dependencies). However, they also include additional DNN-specific components, such as pre-trained weights, training dataset, and model architecture <cit.>. <ref> depicts the structural similarity in package reuse between traditional and PTM packages. Need to cref Fig 1. Software Package Registries are hubs of collaboration that connect package contributors, reusers, and the packages themselves, thereby facilitating software reuse. These registries are invaluable for a variety of reasons, particularly due to their robust features that enhance software development and distribution. I attempted to list every relevant feature that a registry might have. I don't think all of this is needed, and I don't think all of this is well substantiated/in the right order. Software Package Registries provide a platform where creators can upload and share their software packages. These platforms often offer version control, allowing users to access and utilize previous versions of packages as needed. Moreover, they enhance the discoverability and percieved quality of a package through social tools and matadata such as comments, likes, and stars, which helps users track packages of interest. Collaboration within these registries is supplemented by features such as discussion threads and version management fostering an environment where users can engage in meaningful conversations about the software. This visibility aids both potential reusers and those looking to contribute. Additionally, registries provide detailed metadata on software packages, aiding in package selection by offering extra information and enhancing visibility in search results. Lastly, these registries provide tools that streamline the downstream reuse of packages. These tools assist with package downloading, bundling, and version and update managing, thus simplifying the entire lifecycle of package management. Each Software Package Registry is constructed to meet the needs of its users, thereby playing a crucial role in both traditional software and PTM ecosystems. Promininent examples include NPM and for traditional software, which provide a plethora of packages for popular programming languages, and Hugging Face for PTMs, distinguished by its open contribution model and broad array of tools for PTM reuse. These registries have been extensively studied in prior research, underscoring their significance and emphasizing the importance of examining Software Package Registries in order to fully understand the ecosystems they support. §.§ Deep Neural Networks and Pre-Trained Model Packages Deep Neural Networks (DNNs), comprising numerous hidden layers, have gained popularity as a cutting-edge solution for complex problems in various fields, such as image recognition in autonomous vehicles <cit.> and AI voice assistant systems <cit.>. Developing and training these models from scratch requires significant time and resources <cit.>. For example, a Llama-2-70B model needs 1720K GPU hours to train from scratch <cit.>. To overcome this, engineers increasingly opt for using Pre-Trained Models (PTMs), which allows them to forego the lengthy and resource-intensive initial training phase. By leveraging PTMs, they can focus on a much shorter training period to fine-tune the models for specific tasks <cit.>. Within the context of PTMs, the term “software reuse” refers to the reuse of models along with their training configurations and weights, reuse of the PTM packages. <ref> shows the comparison of reuse process between PTM and traditional packages. Reusing DNNs as PTM packages mirrors traditional practices of software package reuse, where existing software components are integrated and adapted rather than built from scratch <cit.>. This practice can enhance efficiency and reducing development time <cit.>. This adaptation involves not only the models but also their pre-trained states, which can be adjusted for specific applications, thereby constituting a combination of model reuse and customization rather than traditional software package reuse alone. Neural networks with a large amount of hidden layers, known as Deep Neural Networks (DNNs), have become more popular in recent years as a state-of-the-art solution to several complex problems across various domains (Computer Vision, Natural Language Processing, Image Generation, etc.) Constructing and training these models from scratch costs substantial time and resources, making the use of DNNs prohibitive. To make these capabilites more accessible, engineers have turned away from creating models from scratch, preferring to reuse already trained DNNs known as Pre-Trained Models (PTMs) instead. This approach allows them to bypass an initial extensive training process required to get the model to "learn" how to identify patterns, and instead only use a much shorter training regimen. Reusing DNNs as PTMs resembles traditional software reuse practices. In both cases, software components are integrated and adapted rather than being built anew, enhancing efficiency and reducing development time. A key facilitator of both Most DNN reuse occurs through an alternative to Software Package Registries known as PTM Package Registries. Somewhere near or above here, we really need to define what is being reused. There are a few situations where we are saying software reuse when talking about PTM Registries. I don't think it's accurate to say software reuse when talking about PTMs. It is model reuse + training/weights + how it has been adapted. Is that software? We may be able to grab some wording from Wenxin's reengineering paper where we provided some working definitions when talking about reuse. And we've done it in other papers. §.§ Measurements of Software Package Registries Research on software package registries has traditionally focused on quantifying various aspects <cit.>. However, PTM registry research has primarily focused on qualitative and small-scale quantitative measurements. §.§.§ General Comparisons: The measurement of software package registries is an established research area within traditional software. <ref> shows the metrics used in prior work user reach <cit.>, license <cit.>, and technical lag <cit.>. These studies provide critical insights for software engineers, aiding in effective package selection and reuse, and for researchers, deepening understanding of software supply chains and registry dynamics. This research approach has been adapted to PTM registries recently <cit.>. Early studies have explored how PTMs are reused and adapted, examining both qualitative and quantitative aspects. For instance, research has started to analyze contribution patterns and the reuse dynamics specific to PTM registries, such as those found in TensorFlow Hub and Hugging Face <cit.>. However, there are still gaps in understanding the evolution and reuse patterns in the PTM registries <cit.>. Our work bridges this gap by providing additional quantitative measurements and comparing our results to traditional SPRs. §.§.§ To Explain or Quantify Phenomena: Despite these efforts, there remains a gap in research that connects qualitative observations with quantitative measurements across these registries. Recent studies within traditional SPRs typically start with qualitative observations that are later supported by quantitative data. Issues such as package obsolescence and the spread of vulnerabilities, initially observed quantitatively, have been rigorously quantified in ecosystems like NPM <cit.>. These investigations form a crucial basis for comprehending how features of software registries impact the practices of software development. In the research domain of PTM registries, similar investigations are needed. For example, Jiang provide a comprehensive analysis on the risks while using PTMs, and the PTM reuse process <cit.>. As a follow-up work, they also collected both qualitative and quantitative data on PTM naming practices <cit.>. This line of research is crucial for understanding PTM package registries. However, their qualitative insights have not been substantiated by quantitative analysis. We address this gap by conducting a systematic literature review and deriving quantifiable metrics to assess the claims from previous studies. § KNOWLEDGE GAP AND RESEARCH QUESTIONS To summarize the knowledge gap, we lack a cohesive understanding on PTM package reuse that synthesizes prior research on Hugging Face. Additionally, there is a lack of some quantitative measurements, which reduce our understanding of the registry. To address the gap of knowledge synthesis, we ask: RQ1 What claims about package reuse on Hugging Face are made by prior research? By gathering and analyzing these claims, we aim to convert the qualitative data into a set of quantifiable metrics. This effort will not only enrich our understanding of Hugging Face as a PTM platform but also enhance the comparability of data across different software ecosystems. Specifically, this analysis enables future researchers to establish metrics that facilitate the comparison of Hugging Face and traditional software registries similarities and differences in package reuse practices. Answering RQ1 also prepares us for further empirical inquiry: RQ2 Do the qualitative claims about package reuse (PTM) on Hugging Face hold up when quantified? §.§ Overview of Methodology <ref> shows the overview of our work's context and approach. We answered these questions through three steps: * We conducted a systematic literature review (SLR) on PTM reuse, aiming to compile a comprehensive list of claims about package reuse. * We synthesized these claims into quantifiable measurements, performed the measurements, and then compared them to the previous findings. * Where possible, we compared these to those from traditional software to see differences between the reuse patterns in different registries. The detailed method per RQ is in the corresponding sections. § RQ1: WHAT CLAIMS ABOUT PACKAGE REUSE ON HUGGING FACE ARE MADE BY PRIOR RESEARCH? [width=, colback=yellow!30!white, top=1pt, bottom=1pt, left=2pt, right=2pt] The Systematic Literature Review (SLR) identified 12 quantifiable claims about package reuse on Hugging Face. The claims are distributed among five quantifiable categories of methods: small- and large-scale quantitative measurements, qualitative surveys, interviews, and case studies. After this classification, four quantitatively unevaluated claims remained, within the categories of design trends, documentation and understanding, and selection considerations. This finding shows a gap that we address in RQ2. To enhance our understanding of the existing claims related to PTM reuse within package registries, we followed empirical standards and conducted a systematic literature review <cit.>. Our initial step involved a pilot study to define the scope of our review (<ref>). A systematic literature review entails five steps: identification of research, study selection, study quality assessment, data extraction, and data analysis <cit.>. <ref> illustrates the results of each step. We detail them next. §.§ Methods §.§.§ Pilot Study We define the scope of our review by conducting a pilot study. In the pilot study, we first search for papers about “pre-trained model reuse” using Google Scholar and looked at the first three results, which were <cit.>. The papers indicated that Hugging Face is the only “open” model registry <cit.> and is the most popular model registry. Additionally, Hugging Face hosts the largest number of PTM packages, and provides useful tools to facilitate PTM reuse. We then decided to scope down our study on Hugging Face model registry specifically to represent the PTM supply chain, as indicated by Jiang  <cit.>. §.§.§ Search Strategy and Query The goal of our search is to identify papers that are relevant to the categories of PTM reuse, the PTM supply chain, or the PTM ecosystem. Informed by our pilot study ( <ref>), the final search query we used is indicated in <ref>. These search queries gave us 45 papers. We then removed duplicate entries, reducing the number of papers to 31. To verify the efficacy of our search queries, we employed papers from the pilot study as benchmarks to assess each query's retrieval effectiveness. We ensured that all papers identified in the pilot study were also retrieved by our final search query. This step was essential to confirm the robustness of our search strategy, ensuring it was capable of capturing the most relevant studies. Such a comprehensive approach allowed for an exhaustive review of the literature concerning PTM reuse within its operational ecosystem. §.§.§ Selection Criteria The goal of our selection criteria is to identify the most relevant and rigorously supported research that specifically addresses the context and impact of PTM reuse. During our study, we applied two types of criteria: inclusive and exclusive. @Jason double check if the updated criteria are accurate.s Inclusion Criteria We applied the following inclusion criterion: a paper is included if it describes the reuse of models within a specific PTM registry. This criterion excluded papers that apply PTMs to specific tasks, reducing our set from 31 papers to 20. Exclusion Criteria Our exclusion criterion was that we excluded non-primary sources. We exclude works whose claims are not substantiated directly through qualitative or quantitative methods. This reduced our set from 20 papers to 18. §.§.§ Data Extraction Once we identified the most relevant and rigorously supported research that specifically addresses the context and impact of PTM reuse, the next step is to extract “claims” from these papers that provide evidence of qualitative or quantitative methods that could inform our quantitative study. A “claim” in this context refers to a statement or assertion made in a research paper, which is supported by evidence. The data extraction involve four steps. First, two co-authors went through a paper from the pilot study together to get an agreement of the data extraction process. The goal of this process was to identify and extract all claims that might be tangentially related to PTM reuse. Second, they individually extracted the claims from the papers. This involved reading each paper to identify the key claims, with an emphasis placed on the abstract, introduction, and, if available, finding boxes of each paper. Exact quotations from the papers were extracted. Third, the two co-authors met and presented their extracted claims from each paper, with an explanation of why it was chosen, and a discussion of the relevance of the claim if it was not immediately obvious. A total of 256 claims were discussed in this step. Finally, for each paper, they discussed which claims were the most descriptive of PTM reuse and discarded the rest. This selection step resulted in a total of 49 claims. §.§ Analysis and Results We categorized these claims into two categories: (1) “Motivation claims” and (2) “Work claims”. The detailed definitions and examples of each claim are shown in <ref>. This classification process yielded 19 motivation claims and 30 work claims. One of our goals (RQ2) was to substantiate prior findings with further quantitative measurements, so we chose to focus more deeply on the work claims. Within this subset, we identified overlapping claims and consolidated them, resulting in a refined set of 12 distinct claims. These consolidated claims are detailed in <ref>. (1) “Motivation claims” articulate the rationale behind a paper's problem statement, illustrating why the work is significant and worthy of investigation, We believe that the accessibility of data and model analysis tools is crucial to building both the understanding of and the trust in the underlying resources.. (2) “Work claims” refer to the assertions a paper makes based on its collected data and analyses, Model properties are under-documented across Hugging Face. The primary aim of our SLR is to summarize the existing claims about package reuse on Hugging Face and to extract quantifiable measurements from these claims. From the consolidated set of 12 claims, we categorized the basis of each claim into one of five methods: small- and large-scale quantitative measurements, qualitative surveys, interviews, and case studies. Small-scale measurements involved less than 10% of a population, whereas large-scale measurements encompassed more than 10%. After this classification, five quantitatively unevaluated claims remained: one concerning design trends, two regarding selection considerations, and two about documentation and understanding. The categorization result and extracted themes are detailed in <ref>. Our results provide a thorough knowledge synthesis which are then used to answer RQ2 (<ref>). § RQ2: DO THE QUALITATIVE CLAIMS ABOUT PACKAGE REUSE (PTM) ON HUGGING FACE HOLD UP WHEN QUANTIFIED? [width=, colback=yellow!30!white, top=1pt, bottom=1pt, left=2pt, right=2pt] Our study shows that the Transformers library is preferred in over 80% of PTM descendants, surpassing PyTorch. The rise of the SafeTensors library underscores a shift toward prioritizing security in PTM development. [width=, colback=yellow!30!white, top=1pt, bottom=1pt, left=2pt, right=2pt] <ref> reveals that Hugging Face has a significantly higher package turnover rate than traditional software registries indicative of a fast-paced, innovation-driven PTM ecosystem. There is a correlation between model popularity and descendant count, indicating that while popular models have more descendants, other factors influence model selection decisions. There is a strong correlation between documentation quality and PTM popularity, with the top 1000 models significantly outperforming the bottom 1000 in documentation, highlighting its importance in model selection. To answer this question, we first derive quantifiable metrics from the claims we extracted from our SLR (<ref>) on Hugging Face (<ref>). Then we present the available datasets and the specific data we used from each (<ref>). Subsequently, we present our methods and results for measurement on metrics for each claims (<ref>–<ref>). §.§ Metrics Developed from Claims I am trying to be repetitive in this section by rewriting the caption of Table 4 as prose. In this section, we explain how we developed quantifiable metrics from the claims identified in our SLR (<ref>). The metrics were specifically designed to quantify the hypothesized results inferred from these claims. Drawing on traditional software engineering practices, we adopted metrics that have been previously used to evaluate similar claims in other contexts if available. Particular attention was paid to metrics that are widely recognized and have been frequently cited in the literature, as well as those that have been implemented across various traditional software registries. This approach ensures that our metrics are grounded in established methodologies and are robust enough to provide meaningful insights. <ref> shows the relationship between the claims extracted and the corresponding metrics, along with the expected measurements. This mapping both validaties the claims and contextualizes their implications within the context of package reuse on . §.§ Available Datasets The section presents the PTM package datasets (<ref>) and traditional software package dataset (<ref>) we used in our work. §.§.§ PTM Datasets In the PTM literature, there are four datasets available publicly: HF Model Metadata <cit.>, PTMTorrent <cit.>, HFCommunity <cit.>, and PeaTMOSS <cit.>. * HF Model Metadata provides a snapshot of 10,406 HuggingFace model metadata as of 11/2022, including details of model label, README and length of each README file <cit.>. * PTMTorrent encompasses a snapshot of five model hubs, totaling 15,913 PTM packages as of 08/2023, all formatted in a uniform data schema to facilitate cross-hub mining <cit.>. * HFCommunity is an offline relational database constructed from the data at the Hugging Face Hub. It allows for queries on the repositories hosted within the Hugging Face platform <cit.>. * PeaTMOSS offers comprehensive metadata on 281,638 PTM packages as of 10/2023, including 281,276 from Hugging Face and 362 from PyTorch Hub, along with details on 28,575 GitHub projects that use PTMs as dependencies and 44,337 links from these GitHub repositories back to the PTMs they depend on <cit.>. In this work, we primarily used the PeaTMOSS dataset, as the accessibility of metadata allowed for easier measurements. We also used the HF Model Metadata and PTMTorrent datasets for longitudinal trends in the turnover metric (<ref>), and an April 2024 recent snapshot of the most popular models from the Hugging Face Hub API for the same reason. The detailed data used for each measurement are presented in their respective sections. §.§.§ Traditional package datasets To directly compare measurements between the PTM registry and traditional Software Package Registries (SPRs), we utilized the Ecosyste.ms software package dataset for traditional packages, following the approach outlined in prior work <cit.>. This dataset provides a set of free and open resources for those working to sustain and secure open source software. Ecosyste.ms publishes open data and APIs that map software interdependencies, as well as providing data on the usage, creation, and potential impact of packages. In this work, we used the version of the dataset from October 2023, as this is the closest in time to when the PeaTMOSS dataset was created. This includes the usage of data from two software package registries: and NPM. §.§ C_1: The Transformers library increases the accessibility of PTM creation and downstream reuse. §.§.§ Method We present the metric we developed for claim 1. Metric 1: Preservation rate of libraries to descendents: Some models support multiple libraries. If the claim holds, then descendants of those models should make use of (and thus support) the more reuse-friendly libraries. We consider each library L in turn to assess how frequently its descendants continue to use it. The ingredients of our measure are: the library L being assessed, the set of base models B that use library L and at least one other library, the set D_b of direct descendants of base model b ∈ B, and a function S(d, L) that returns 1 if descendant d ∈ D_b supports library L, otherwise 0. We can then calculate the preservation rate P_L of a library L as: P_L = ∑_b ∈ B∑_d ∈ D_b S(d, L)/∑_b ∈ B |D_b| The numerator is the total descendants supporting library L, calculated as: ∑_b ∈ B∑_d ∈ D_b S(d, L). The denominator is the total count of direct descendants across all base models in set B: ∑_b ∈ B |D_b|. This equation will give the percentage of direct descendants that support library L. The raw count of descendants supporting the library can be found using the numerator. §.§.§ Results Metric 1: Preservation rate of libraries to descendents: As depicted in <ref>, the Transformers library has the highest survival rate from a parent model to its descendant among all libraries used. Specifically, over 80% of descendant models continue to employ the Transformers library, establishing it as the preferred choice for generating descendant models. Transformers is also the most popular library (has the largest population of models that use it). Contrarily, despite previous studies suggesting high prevalence, PyTorch was not favored among PTM descendants, a departure from claims within our systematic literature review that both Transformers and PyTorch are dominant on Hugging Face <cit.>. Instead, our findings highlight the Transformers and SafeTensors libraries as the most prevalent, suggesting a community shift towards prioritizing security, as SafeTensors is designed to replace the commonly used Python pickle library with a more secure container for deploying models. We attribute this discrepancy to prior studies’ broader approach, which did not differentiate between model types and analyzed only a single snapshot of Hugging Face, thus lacking the detailed analysis presented here. Our results suggest that PTM synthesizers are willing to compromise on functionality and portability to ensure the distribution of more secure PTM packages. We found that Transformers library has the highest rate of survival of every library from a parent model to its descendent. Of all the descendant models that utilizes at least two libraries with at least one of those libraries being the transformers library, more than 80% utilize the transformers library. This suggests that transformers is the preferred library of choice when selecting between multiple libraries to produce a descendent model. Contrary to the results discussed within papers in our SLR, pytorch is not among the top libraries with respect to utilization within PTM descendants <cit.>. Previous work identified both the transformers and pytorch libraries as being the most prevelant libraries on . However, our work does not support this claim when strictly evaluating the libraries inherited by PTM descendants as we found the transformers and safetensors libraries to be the most prevelant. This suggests that members of the PTM package syntesis community may prioritize the security benefits of deploying models with safetensors (a library meant to replace the common Python pickle library deployment with a secure container) over the functional benefits of deploying a model to support multiple runtimes. §.§ C_2: Popularity Affects PTM Selection and Reuse More than Other Trad'l. Attributes Our claim interpretation is that popular models are more likely to be used, so that the “rich get richer”. We examined this claim with two metrics. First, we measure the stability (non-turnover) of the top PTM packages over time, expecting it to be low (popular packages are used directly). Second, we measure the correlation between popularity and the number of descendants of top PTM packages, expecting it to be positive (popular packages are fine-tuned). §.§.§ Methods We present the metrics we developed for claim 2. Metric 2: Turnover of Top PTMs: Drawing on prior work characterizing the stability of top packages over time <cit.>, we measured the top-K turnover for each registry. Let S_current be the set of K most popular packages in the current snapshot, and S_last be the set of top K packages in the last snapshot. We also consider S_history, the set of packages in any snapshot before S_last. We then distinguish three categories for packages in S_current: * Remained: Packages that were in both S_last and S_current. Remained = S_last∩ S_current * Newcomers: Packages in S_current in no previous snapshot. Newcomers = S_current∖ (S_history∪ S_last) * Returning: Packages that were once in the top 1000 (S_history), were not in S_last, but are in S_current again. Returning = (S_history∖ S_last) ∩ S_current For the measurement, we defined popularity by the number of downloads, and examined the top-1000 packages. We obtained snapshots across four dates for both traditional software and PTMs. Data came from several HuggingFace snapshots (datasets: HF Model Metadata–June 2022, PTMTorrent–May 2023, PeaTMOSS–Oct. 2023). We took a current snapshot of the top 1000 PTMs directly from Hugging Face (April 2024). We used the Ecosyste.ms dataset (NPM, PyPI) for a comparison with traditional software package registries. Metric 3: Number of Descendents of Top Packages: Our second measure of the impact of popularity on reuse was the number of descendent models. In this case, we defined descendent models as a downstream model that is fine-tuned and references the original model as a base model. We compare the number of descendent models with the popularity of model and determine the strength of correlation between them — the claim implies it should be positive. For the measurement, we again defined popularity by the number of downloads. The descendent-base relation is available from the PeaTMOSS dataset, for the 15,000 most popular PTMs on Hugging Face. Given that these models account for ∼99% of the downloads in the snapshot, we believe this is representative. We initially planned to compare our findings with traditional software, similar to Metric 2. However, identifying a direct counterpart to PTM descendants in traditional software proved challenging. We considered using GitHub forks and GitHub or registry dependencies as analogs, but each presented unique implications that complicated a direct comparison with PTM descendants. §.§.§ Results Metric 2: Turnover of Traditional Software and PTM registries: <ref> shows the results for Hugging Face. The Hugging Face data does not match our interpretation of the claim. About half of the top-1K Hugging Face PTMs turned over in each snapshot. This high turnover rate suggests that packages on Hugging Face have a shorter lifespan, indicating a dynamic PTM environment where the requirements and preferred models frequently change <cit.>. Further investigation could analyze the lifecycle of these PTMs to determine whether they are newer models briefly appearing or established ones losing prominence, as well as identifying the traits of PTMs or maintainers who consistently stay within the top rankings. This result is consistent with the claim, but implies that popularity is likely driven by performance (a latent variable not present in the claim).  <ref> also shows NPM for comparison. The results for PyPI were similar. Traditional registries show stability with their most popular packages. Quantitatively, 2535 distinct packages were in the HuggingFace top-1K, while only 1127 were there for NPM. The rapid turnover on Hugging Face compared to the stability observed in traditional software packages indicates that PTM needs are continually evolving, unlike the more established needs in traditional software domains. This evolution often leads to older PTMs being supplanted by newer models that better meet current requirements or offer superior performance, highlighting a market driven by innovation and rapid adaptation. This trend suggests that PTM users are quick to adopt new advancements, reflecting the field’s fast-paced development and the shifting demands across its various application domains. Figure <ref> shows very clear differences between traditional software and PTM registries. While it can be seen that the most popular packages remained the most popular on traditional software registries like NPM and , Hugging Face experiences a large amount of turnover at the top, seeing a total of 2535 packages enter the Top 1000 compared to 1429 on and just 1127 for NPM. This suggests that a package has a much shorter "lifespan" on the PTM platform than on traditional software registries. I will continue to examine this metric by looking at the average life of packages that enter into and exit out of the top thousand to see if it is new packages that are briefly entering and then leaving or if it is more established packages that are leaving the top 1000. The packages captured by this metric will also be examined to see which maintainers enter into and remain within the top 1000 both in terms of number of packages and just any presence at all. Figure <ref> demonstrates that popular packages on Hugging Face have a much greater turnover rate than traditional software packages do on NPM. In fact, unlike in the other platforms where popular packages become more stable over time, the top packages remained similarly unstable. This would indicate that the lifespan of PTM relevance is shorter than the longevity of traditional software package relevance. It would also indicate that a traditional software needs are more stable than PTM needs. It is possible that since traditional software needs are established, that they change less and so the same packages are needed In contrast, it may be that the wide range of domains in which PTMs are applicable means that at different times, different kinds of PTMs are needed. This would also be supported by the increased amount of Returning packages on Hugging Face compared to NPM. Metric 3: Number of Descendents of Top Packages: <ref> shows the results for this metric. We observe a weak positive correlation between popularity (downloads) and the number of descendent models. Popular models tend to have more descendants, but the correlation was weaker than expected. Notably, some highly popular models had relatively few descendants, while others with similar popularity levels had many. This suggests that while popularity influences the likelihood of a model being chosen for further development, it is not the sole factor. Further analysis, including a breakdown by model task and domain, is necessary to understand if variations in descendant counts are influenced by the specific popularity within less prominent domains or tasks. This analysis could expose the decision-making processes engineers use when selecting models to fine-tune and develop further. §.§ C_3: Docs Quality Impacts Model Selection Our interpretation of Claim 3 is that PTMs with better documentation will be more popular. §.§.§ Method Metric 4: Documentation Quality: Prior work has examined documentation quality in many ways <cit.>. Help We used those ideas to develop our measure of quality. The primary documentation for PTMs is called a “model card”, which is similar to the README of a GitHub repository or the landing page of an NPM or PyPI package. We considered two factors: (1) the completeness of the model card; and (2) the availability of metadata. To measure completeness, we identified five typical sections found in highly popular PTMs such as Google's Bet base uncased: Model Description, Limitations, How to Use, Training, and Evaluation. We scored model cards on an integer scale from 0 to 5 — to receive a score of 5, a PTM's card needed all of these sections. To assess whether each section was present, we queried OpenAI's ChatGPT-4 with the prompt shown in Listing <ref>. [caption=ChatGPT-4 prompt for evaluating model cards., label=lst:prompt4docquality, breaklines=true, basicstyle=, frame=tb] You will receive a model card and are expected to analyze it for the following details: 1. Model description: A description of the model itself 2. Limitations: Any limitations of the model 3. How to use: Instructions on how to use the model downstream 4. Training: Details of the training process or data 5. Evaluation: Reports on the model's performance evaluation Please respond with a JSON object indicating whether each of these points is present with true/false. Here is the model card to evaluate: To measure the availability of metadata, we referenced the PeaTMOSS dataset, which extracted over 20 distinct pieces of metadata if they were present in a PTM's model card and associated configuration files. We scored PTMs on an integer scale from 0 to the maximum, the number of distinct pieces of metadata considered in the PeaTMOSS database schema. A PTM scoring perfectly in this category would possess all available metadata according to PeatMOSS. The PeaTMOSS dataset comprises extracted metadata from all models on Hugging Face and includes snapshots of the top 15,000 most popular models, each with over 50 monthly downloads. These models were further analyzed using an LLM, which extracted additional metadata from the model cards and the config.json files. Consequently, our analysis focused on these top 15,000 models to assess whether documentation quality influences model selection. The metric “Documentation Quality” was defined by two key factors: (1) the availability of metadata and (2) the quality of the information provided in the model card. This metric was developed for several reasons: Firstly, there is a lack of tools specifically designed to evaluate the content of model cards, unlike tools available for assessing README files in traditional software. Given the distinct challenges posed by the PTM domain, there was concern that existing tools might not effectively assess model card quality. Secondly, the choice was influenced by the data available on PeaTMOSS, which primarily includes metadata with limited details on the model cards themselves. Consequently, the availability and detail of metadata played a significant role in designing this metric. For this measure, we considered popularity in three ways: Downloads and Likes, according to Hugging Face, and Downstream Dependents, based on the mapping offered by the PeaTMOSS dataset. To test our interpretation of the metric, we selected the top 1000 and bottom 1000 models from PeaTMOSS's set of 15,000 PTMs. We evaluated their documentation quality, summing an overall documentation score as a (0,1) metric that normalized and then weighted the two components equally. Then we compared the distributions using a box-and-whisker plot and statistical tests. §.§.§ Results Metric 4: Documentation Quality: We evaluated the impact of documentation quality on the popularity of PTMs. Two representative box and whisker plots are shown in <ref>. Specifically, the top 1000 most popular models consistently exhibited significantly better documentation than the bottom 1000 models (p<0.01. This finding supports the claim that documentation quality significantly influences model selection. If the relation is causative (documentation → popularity), then research can focus on developing tools and methods to enhance the documentation quality of models, supporting better usability and adoption. § THREATS TO VALIDITY We discuss three types of threats to validity <cit.>, while considering the criticisms of Verdecchia  <cit.>. Construct Threats are potential limitations of how we operationalized concepts. In the systematic literature review (RQ1), we manually extracted claims from papers, which might introduce potential bias to our results. As a mitigation, to improve objectivity two authors worked together on the process and the filtering of claims. In the validation of non-quantified claims, we proposed metrics that seemed suitable based on our judgment. As a mitigation, where possible we used multiple measures and leveraged previously-defined metrics. Internal threats are those that affect cause-effect relationships. We emphasize that our approach for RQ2 is of the form: “If claim C is true, then measurement M should show us that...”. In each case the measurement produced the expected result. However, this result is correlative, not causative — there may be a latent variable in each case, or the causative relationship may be reversed. For example, in <ref> we found that better documentation correlates with greater popularity. It may be that the latent variable is performance, such that models become popular because they have good performance, and they accrue documentation because they are popular. When qualitative and quantitative claims agree, as is the case in this study, we learn both “Why?” and “How much?”. External threats may impact generalizability. We recognize both immediate and longitudinal threats in this regard. Immediately, we were interested in studying PTM reuse, but we only examined one registry for PTMs: the Hugging Face platform. While this is by far the most popular and feature-rich platform, other platforms exist, such as PyTorch Hub (less popular), PapersWithCode (fewer features), and GitHub (not PTM-specific). In terms of longitudinally, keep in mind that Hugging Face is itself relatively young — created in 2016 and only seeing major use beginning in 2020 — so developer practices may not have stabilized. Lastly, we consider that the technologies and platforms that support PTMs are rapidly evolving, so current claims (whether qualitative or quantitative) may change over time and require ongoing reassessment. As indicated in the discussion, this property suggests an opportunity for further research, but it also means that our findings may be unstable. New trends, technologies, or practices emerging after the study's completion could alter the dynamics of PTM reuse, dependencies, and licensing. It is also entirely possible that the trends shown in this study are purely due to the realtively young age of Hugging Face compared to NPM and . Hugging Face was created in 2016, and began to explode in popularity as a platform in 2020. For comparison, NPM was created in 2010, and was created in the early 2000s. The robustness of our results may be influenced by limitations associated with our data sources. Our reliance on third-party metadata, such as that provided by Ecosyste.ms, could potentially introduce errors. Additionally, our analysis of PTM datasets is derived from three distinct datasets, where the temporal data is discrete rather than continuous. This discontinuity in the data we used might introduce bias, potentially affecting the reliability of our findings. The following threats are from Jason's thesis Constructs are the operationalizations of the concepts measured in this work. Here we highlight threats, i.e., ways in which our operationalizations may be invalid. There may be issues in how well the existing constructs for traditional software supply chains map onto the PTM supply chain. The traditional concept of dependency in software packages may not directly apply to PTMs because dependencies in PTMs can manifest differently. In traditional software, a dependency can be present in a dev environment, at build, or at runtime. Each of these kinds of dependencies ultimately involves incorporating the functionality of a outside package into your own. However, the nature of PTMs means that they do not have the same kinds of opportunities for other packages to incorporate into thems. PTMs often depend on other models not for functionality but for their pretrained weights or architectures. As a result, there is not a direct translation of dependencies from traditional software to PTMs that exists. PTMs have novel attributes compared to traditional software. Additionally, the unique functionality that PTMs provide has also given rise to additional ethical and moral considerations compared to traditional software. As a result, existing definitions of software licenses may not fully encompass the range of licensing arrangements used for PTMs. It's possible that licenses which appear identical through the lens of traditional software licensing may, in fact, differ significantly when considering the impact on a PTM's reuse. If so, these ostensibly similar licenses could have distinct implications for the reuse of PTMs, despite being classified under the same category As a result, the differences in these licenses would not actually be expressed in the popularity of associated models. In order to address our concerns about dependencies, a functional comparison approach was taken. A definition of dependencies for PTMs was chosen that mirrors the impact of dependencies in traditional software as closely as possible. This approach sought to preserve the conceptual integrity of dependency while adapting it to the unique context of PTMs. In the future, a definition of dependency that encompasses PTM attributes such as architecture and framework would also better represent the dependency in a PTM context. We do not have a plan to mitigate our concerns about licenses at the moment. §.§ Internal Threats We highlight ways in which our results may be inaccurate and unreproducible. The choice and application of metrics to compare traditional software registries with the Hugging Face registry might introduce bias. Many metrics that come from traditional software were developed for, and applied to a single registry in order to characterize that registry. If the selected metrics inherently favor the characteristics of one registry over another, the results could be skewed. Another possible issue is the completeness of the datasets used in the study (PeaTMOSS for PTMs and Packages dataset for traditional software). These datasets aggregate self-reported metadata from the registries. Missing data, inaccuracies, or biases could result in a misrepresentation of the studied registries. Additionally, these datsets are ultimately snapshots of inherently volatile platforms. As such, it is possible that these snapshots contain results that would be irreproducible with snapshots that were taken just a few months separate. To mitigate concerns regarding metric selection, we chose metrics with a proven track record across diverse software platforms, reducing the potential for bias. Furthermore, we selected a broad spectrum of registry aspects for measurement, ensuring a comprehensive analysis of the registries in question. §.§ External Threats Data is from a snapshot from 280k repos on Hugging Face, current Hugging Face has 610k+. That affects our generalizability to the present. This study focuses on comparing the Hugging Face ecosystem to NPM and . Existing ecosystems like NPM, , Cargo, and others have all been studied together to see if they display similar patterns of behaviors<cit.>. It was found that they exhibit general behaviors that are similar, but that there are differences between ecosystems in the degree of those behaviors. It is important to acknowledge that our conclusions about the unique nature of Hugging Face, in contrast to NPM and , may not directly apply to other software registries such as Crates or RubyGems, which could exhibit different characteristics when evaluated using these same metrics. The rapidly evolving nature of PTMs and their ecosystems might limit the longevity of the study's findings. New trends, technologies, or practices emerging after the study's completion could alter the dynamics of PTM reuse, dependencies, and licensing. It is also entirely possible that the trends shown in this study are purely due to the realtively young age of Hugging Face compared to NPM and . Hugging Face was created in 2016, and began to explode in popularity as a platform in 2020. For comparison, NPM was created in 2010, and was created in the early 2000s. It may be that as Hugging Face matures as a platform, that the patterns shown here begin to resemble those of their contemporaries, especially when it comes to interpackage dependency networks and the turnover rate of top packages. There could be external factors not examined in the study that significantly influence the reuse patterns of PTMs and traditional software. These might include community norms, regulatory changes, or shifts in technology that impact how engineers interact with these ecosystems. § FUTURE WORK Need at least 1-2 paragraphs that directly talk about our data. Right now it all feels a bit imprecise and hand-wavey (because Jason was dictating without the figures in front of him). Each paragraph under `Interpreting results' subsection should refer to a table or figure (or multiple) from our paper, and say something clever. For example, we have the question “We failed to operationalize the notion of forks for comparison to traditional registries...how?” which Nick Synovic can expand on. We also have the observation “Well, we did *one* comparison to traditional registires and things were quite different. Someone should write a paper where they carefully do side-by-side”. Need to make a clearer distinction between what we measured, and possible directions for future work. I suggest we use the headings I just put here. §.§ Interpreting results In this paper, we have identified N metrics arising from our systematic literature review. Nick if you could add to this metric Our first metric measured the proportion PTM (e.g. analagous to "forks" in traditional software engineering) that support models that are direct descendantsed two or more libraries for downstream reuse. The transformers library <cit.> had the highest proportion of models that supported it, followed by the safetensors library. This implies that the transformers and safetensor libraries are the preferred libraries to support when producing model descendants. Our meausrement contradicts previous findings within our SLR that measured transformers and pytorch as the two most popular libraries on  <cit.>. We believe that this discrepency is due to two factors: One, previous work made no distinction between model types and measured a snapshot of , thereby preventing a finer-grain analysis. Two, that members of the PTM synthesis intend to promote model security over functionality by leveraging the safetensors library as it is designed to meet the functional requirements of the Python pickle package while increasing security measures to prevent code execution at runtime. In other words, PTM synthesizers potentially trade functionality and portability of their PTM packages in order to create secure PTM packages for distribution. Our second metric was the turnover of top packages overtime. This revealed that hugging face has a very high turnover rate of packages, with more than 2500 packages turning over. Additionally, a very high rate of turnover occurred every time, with more remaining packages than new coming packages occurring only once in terms of snapshots. This metric was also applied to traditional software registries to make it comparison. In this, we found that the rate of turnover on hugging face was distinct compared to traditional software. In registries such as NPM and , the top popular packages remain fairly stable and established overtime. What this suggest is two things. First of all, as reuse of PTMs becomes more mature, the set of requirements for what is needed from a model drifts, resulting in pre-existing PTMs becoming more popular over time not necesarily due to an update to them but rather a shift of needs. Secondly, for domains in which they set up requirements stays the same, it is possible possible that models that are released, and our newer, outperform, older models, and so the cost associated with switching over from a used model to a new model or the decision is being made to use a model will prioritize a newer model with higher performance rather than sticking with an establish model. Our third metric was the descendent amount of models. But this looked at was the strength of correlation between the popularity of a model, and then the number of models that were descended from it. What we found here is that the models with greater popularity had a greater number of descendants. This is pretty straightforward, however, something that is significant to note is that this isn't as strong of a correlation as one might expect. There are plenty of popular models with low amounts of descendants, and several popular models with a high number of descendants. Further analysis of this with a breakdown by model task and domain to determine if those early spikes are because less popular domains or model tasks only have so much popularity, but still the most popular models of those kinds are being used in the same way would answer some more questions about this. This implies that popularity is a factor, but not the only one when a engineer decides which model to fine-tune and descend from. Our fourth metric was the popularity of PTMs based on their documentation quality. What we found here is that there is a clear impact of documentation quality popularity, regardless of the metric utilized to determine popularity. In all three cases top 1000 most popular models that we looked at had significantly higher documentation qualities than all of the bottom 1000 models that we looked at. This supports prior claims the many many many prior claims that documentation quality affects model selection. As a result, it is clearly important that future work provides tools that improve the documentation quality of models. In <ref>, we identified one claim that we could not operationalize for measurement: C_4, that different deep learning-specific attributes would affect PTM selection and reuse. Although the PeaTMOSS dataset does include some of these measures in a structured format, the claim is sufficiently broad that quantifying it was beyond the scope of this study. Future work could explore the multi-variable relationship implied by this claim. Our study provides the first systematic literature review of current knowledge about PTM reuse. Another opportunity is a systematic comparison of the software package registries associated with traditional software packages, and the software package registries associated with PTMs. Jiang observed similarities and differences in the reuse processes <cit.> — how and to what extent can we measure the differences? As discussed in <ref> with respect to an analogue for PTM descendants, finding the limits of comparison (appropriate measurements) between traditional vs. PTM software package registries is an open challenge. The rapid development of PTM technologies presents an ongoing challenge for empirical research on PTM reuse. The Hugging Face platform continues to grow exponentially <cit.>, the state of the art performance of models at all sizes continues to advance <cit.>, and the tooling available to adapt and deploy these models continues to improve <cit.>. make sure to get the citations in. While datasets and tooling such as HFCommunity and PeaTMOSS support studies like ours, they also present some limitations. Innovation is needed to help empirical software engineering researchers keep up with the scale of activity and volume of data that we are seeing in the context of PTM reuse. Given Hugging Face's dynamic growth, even data that is a few months old may not accurately reflect the current state of the platform, suggesting a need for regular snapshotting, which imposes significant storage requirements (beyond the already-substantial requirements of >50 TB). This highlights the need for tools that can provide real-time, incrementally-updated data to keep pace with rapid changes, ensuring that analyses remain relevant and reflective of the present situation in PTM reuse. This paper discusses the reuse of pretrained models (PTMs) on the Hugging Face platform and highlights the necessity of rigorous validation for the claims made about PTM reuse. Several researchers have mentioned these claims as motivating factors for their studies; however, these claims often lack empirical support in the form of qualitative or quantitative analysis. To enhance the credibility and understanding of PTM reuse, conducting thorough studies to substantiate these claims is essential. This approach not only provides a stronger foundation for these works but also enhances their validity by grounding them in concrete data. While some prior claims have been supported by quantitative or qualitative evidence, there remains a significant portion that requires updated verification. The rapid growth of Hugging Face suggests that reuse patterns might have evolved, mirroring trends observed in traditional software package registries. Thus, examining current reuse patterns is crucial to determine if these patterns are stable over time or if they are subject to change. Notably, few studies have compared PTM reuse to traditional software reuse, and those that did lacked contemporary data, relying instead on historical comparisons. Understanding how PTM reuse compares to traditional software can provide valuable insights into the lifecycle and evolution of software practices, which is vital for predicting future trends and informing users/stakeholders. In terms of future implications, there are emerging tools available that have shown potential in analyzing PTM data on a small scale, but future work is necessary to develop a more comprehensive and up-to-date dataset. While the PeaTMOSS framework has significant merit for systematic studies as performed here, it also presents some limitations. Additional methods are needed to gathering current data and augment that PeatMOSS dataset. Given Hugging Face's dynamic growth, even data that is a few months old may not accurately reflect the current state of the platform, suggesting a need for regular snapshotting, which imposes significant storage requirements (beyond the already substantial requirements in the tens of TB). This highlights the urgent need for tools that can provide real-time, incrementally-updated data to keep pace with rapid changes, ensuring that analyses remain relevant and reflective of the present situation in PTM reuse. § CONCLUSION Pre-trained models are the motive force of the new generation of software engineering. Understanding engineers' PTM reuse practices is crucial to optimizing and securing the process. Our systematic literature review and quantification of claims has illuminated significant aspects of PTM reuse. We also shed light on unique dynamics within the PTM reuse landscape as compared to traditional software package registries. Specifically, we observed a shorter lifespan of packages in PTM registries compared to traditional SPRs. Our findings underscore the need for research infrastructure and novel tools to support PTM reuse, adapting to the much higher turnover of popular PTM packages. We must ensure that PTM registries can meet the evolving demands of the software engineering community. § DATA AVAILABILITY An anonymous artifact containing the results of the systematic literature review (RQ1) as well as our software and results for quantification of claims (RQ2) is available at <https://github.com/anonsub1234/ptm-quantify-esem-2024>. § RESEARCH ETHICS No human subjects were involved in the conduct of this project. We are aware of no other ethical concerns. § ACKNOWLEDGMENTS We acknowledge support from NSF awards OAC-2107020 and OAC-2107230. ACM-Reference-Format
http://arxiv.org/abs/2406.09248v1
20240613154955
Wigner non-negative states that verify the Wigner entropy conjecture
[ "Qipeng Qian", "Christos N. Gagatsos" ]
quant-ph
[ "quant-ph" ]
Program in Applied Mathematics, The University of Arizona, Tucson, Arizona 85721, USA Department of Electrical and Computer Engineering, The University of Arizona, Tucson, Arizona, 85721, USA Wyant College of Optical Sciences, The University of Arizona, Tucson, Arizona, 85721, USA Program in Applied Mathematics, The University of Arizona, Tucson, Arizona 85721, USA § ABSTRACT We present further progress, in the form of analytical results, on the Wigner entropy conjecture set forth in [https://link.aps.org/doi/10.1103/PhysRevA.104.042211Phys. Rev. A 104, 042211 (2021)] and [https://iopscience.iop.org/article/10.1088/1751-8121/aa852f/metaJ. Phys. A: Math. Theor. 50 385301]. Said conjecture asserts that the differential entropy defined for non-negative, yet physical, Wigner functions is minimized by pure Gaussian states while the minimum entropy is equal to 1+lnπ. We prove this conjecture for the qubits formed by Fock states |0⟩ and |1⟩ that correspond to non-negative Wigner functions. In particular, we derive an explicit form of the Wigner entropy for those states lying on the boundary of the set of Wigner non-negative qubits. We then consider general mixed states and derive a sufficient condition for the conjecture's validity. Lastly, we elaborate on the states which are in accordance with our condition. Wigner non-negative states that verify the Wigner entropy conjecture Christos N. Gagatsos June 17, 2024 ==================================================================== § INTRODUCTION Uncertainty relations are of fundamental interest in quantum information theory. They are closely related to the wave-particle duality in quantum mechanics and also illustrate one of the essential difference between quantum and classical mechanics. Furthermore, uncertainty relations directly put constraints on the precision of measurements and indicates inherent limitations in our understanding of quantum systems. The exploration of uncertainty relations traces back to the foundational Heisenberg uncertainty principle <cit.>, in which the variance of the quadrature operators q̂ and p̂ is used as the quantifier of uncertainty. Later studies on uncertainty relations resulted in a natural generalization of classical information-related entropies to quantum systems (for an overview of entropic uncertainty relations see for example <cit.>). In <cit.> an entropic uncertainty relation has been presented setting a lower bound on the summation of the Shannon entropy of the probability distribution function (PDF) of the position and the Shannon entropy of the PDF of the momentum of a quantum system. Said lower bound is stronger than the Heisenberg uncertainty relation. Furthermore, considering the subadditivity of Shannon’s differential entropy, one can expect that the entropy of the joint distribution of q and p, i.e., of the Wigner function, will induce an even stronger bound, which would nevertheless imply the bound in <cit.>. The Wigner entropy S[W] is defined in <cit.> as the differential Shannon entropy of the Wigner function W(q,p) of the state with non-negative Wigner function (not necessarily corresponding to a classical state), S[W]=-∫ dqdp W(q,p)ln W(q,p). It possesses several properties <cit.> such as additivity (for product states) and, unlike the Wehrl entropy <cit.>, invariance under symplectic transformations. The Wigner entropy has been used in the analysis of noisy polarizers <cit.>, high energy physics <cit.>, and non-equilibrium field theory <cit.>. In a broader view, phase space methods exploring the properties of quantum states are always of current interest, see for example <cit.>. In this work, we focus on the following conjecture, presented in <cit.>, For any Wigner non-negative state, S[W]≥ 1+lnπ, while the lower bound is attained by any pure Gaussian state. It is known that the marginals of the Wigner function coincide with probability densities of q and p, denoted as P_q=∫ dp W(q,p) and P_p=∫ dq W(q,p) respectively. As discussed before, the Bialynicki-Birula-Mycielski inequality <cit.> and the subadditivity of Shannon’s differential entropy give, S[P_q]+S[P_p] ≥ 1+lnπ, S[P_q]+S[P_p] ≥ S[W], If Conjecture <ref> is true, inequalities (<ref>) and (<ref>) can be written as, S[P_q]+S[P_p]≥ S[W] ≥ 1+lnπ, i.e., we get a stronger entropic uncertainty relation for Wigner non-negative states. In <cit.>, Conjecture <ref> was proven analytically for passive states, i.e., states whose Fock basis representation has the form ρ̂_p = ∑_n=0^∞ q_n |n⟩⟨ n| under the condition q_n+1≤ q_n, where q_n are probabilities (non-negative real numbers satisfying ∑_n=0^∞ q_n = 1), and provided semi-numerical evidence for states that can be produced by mixing Wigner non-negative states in a balanced beam splitter. In this paper, we prove analytically Conjecture <ref> for: (i) Qubits in the basis {|0⟩, |1⟩}, where |0⟩ and |1⟩ are Fock states, and (ii) for general mixed states which are Wigner non-negative and satisfy a sufficient condition. Throughout the paper we consider single-mode states. It is worthwhile to mention that recent progress has been made in similar conjectures relating to the family of Rényi entropies for non-negative Wigner functions <cit.>. This paper is organized as follows: In Section <ref>, we analyze the conditions for any qubit in the basis {|0⟩, |1⟩} to have a non-negative Wigner function. We then demonstrate that the Wigner entropy attains its minimum on the boundary of the Wigner non-negative set as defined by the derived condition. We explicitly derive the form of the Wigner entropy for these states and indeed verify that the lower bound of the Wigner entropy of such Wigner non-negative qubits matches the one of Conjecture <ref>. In Section <ref>, we consider the more general case of any mixed state and derive a sufficient condition such that Wigner non-negative states satisfy the lower bound stated in the Conjecture <ref>. In Section <ref>, by showing a few concrete examples, we demonstrate that the set of states in accordance with our condition is non-empty and distinct from the set described in <cit.>. Finally, in Section <ref> we summarize our results very briefly and we discuss future directions. § QUBIT STATES We denote the matrix representation of the density operator of a qubit formed by Fock states |0⟩ and |1⟩ in Bloch ball representation as, ρ=1/2(I+r_1σ_x+r_2σ_y+r_3σ_z), where {σ_x,σ_y,σ_z} are the Pauli matrices, σ_x = [ 0 1; 1 0 ],σ_y=[ 0 -i; i 0 ],σ_z=[ 1 0; 0 -1 ], I is the identity matrix, and r_i∈[-1,1] for i=1,2,3 satisfy, r_1^2+r_2^2+r_3^2≤ 1. Following standard procedures (e.g. the compact formulation presented in <cit.>) we can find the Wigner function corresponding to Eq. (<ref>) to be, W(q,p) = 1/πe^-q^2-p^2[ (q^2+p^2)(1-r_3) +√(2)r_1q+√(2)r_2p+r_3 ]. Since Eq. (<ref>) under condition (<ref>) represents a physical state, the Wigner function of Eq. (<ref>) is naturally physical under the same condition. However, we need to identify a condition on r_i, i=1,2,3, such that Eq. (<ref>) also represents a non-negative Wigner function. To this end, we require W(q,p)≥ 0 and we derive the following condition, 2(r_1^2+r_2^2)+(1-2r_3)^2≤1. Under the conditions of Eqs. (<ref>) and (<ref>), we are able to analyze whether the Conjecture <ref> is true for the qubit case. However, some observations leading to simplifications are due. First, the Wigner entropy is invariant under symplectic transformations. Therefore, we can set r_2=0 in Eq. (<ref>) since arbitrary values of r_2 correspond to optical phase shifts, i.e., a Gaussian unitary transformation [This can be proven as follows: Use spherical coordinates in Eq. (<ref>), i.e., for |r|∈ [0,1], θ∈ [0,π], ϕ∈ [0,2π), we write r_1=|r| sinθcosϕ, r_2=|r| sinθsinϕ, and r_3=|r| cosθ. Then, applying the (Gaussian unitary) phase shift operator e^-i ϕn̂ on the state of Eq. (<ref>), i.e., calculating e^-i ϕn̂ρ e^i ϕn̂ and using the fact that the Pauli operators are represented as matrices on the {|0⟩,|1⟩} (Fock states) basis, we find that the resulting state to be |r|(I+sinθσ_1+cosθσ_3)/2 or equivalently in Cartesian coordinates (I+ r_1 σ_1+r_3 σ_3)/2.] (optical phase shifting corresponds to a symplectic transformation on phase space). Therefore, the Wigner function and the conditions we consider, respectively become, W_13(q,p) = 1/πe^-q^2-p^2[ (q^2+p^2)(1-r_3) +√(2)r_1 q+r_3 ], r_1^2+r_3^2≤1, 2 r_1^2+(1-2r_3)^2≤1. The Bloch ball with the Wigner non-negative regions of our system is depicted in Fig. <ref>. Second, for any fixed q,p,r_3, the second derivative on -W_13(q,p)ln W_13(q,p) with respect to r_1 gives, ∂^2/∂ r_1^2[-W_13(q,p)ln(W_13(q,p))] =-2q^2 e^-2q^2-2p^2/π^2W_13(q,p)≤0, implying that the Wigner entropy is concave with respect to r_1 due to the linearity of integration in Eq. (<ref>). Thus, the minimum of the Wigner entropy can only exist at some point along the boundary defined by the condition (<ref>), i.e., 2r_1^2+(1-2r_3)^2=1. Finally, using Eq. (<ref>) we simplify further Eq. (<ref>), W_3^±(q,p) = 1/πe^-q^2-p^2[ (q^2+p^2)(1-r_3) ± 2 q √(r_3 (1-r_3))+r_3 ], where the ± corresponds to the two possible solutions of Eq. (<ref>) with respect to r_1. Both Wigner functions of Eq. (<ref>) are equivalent up to an optical phase. Therefore, they correspond to the same Wigner entropy. We choose to work with W_3^+(q,p), W_3(q,p) ≡ W_3^+ (q,p). The explicit form of the Wigner entropy on the boundary defined by Eq. (<ref>) (see Appendix <ref>) is, S_b≡ S [W_3^±] = e^-r_3/1-r_3(1-r_3)+r_3+lnπ/r_3 +Ei(-r_3/1-r_3), where Ei(x)=∫_-∞^x dt e^t/t is the exponential integral function and the subscript b denotes that we work on the boundary of the Wigner non-negative qubits. In Fig. <ref>, we plot the Wigner entropy S[W_13]. We find that the entropy S_b is minimized at r_3=1 and its minimum value is 1+lnπ, which is consistent with the Conjecture <ref>. For r_3=1 and r_2=0, per Eq. (<ref>) we get r_1=0, which means that the state minimizing the Wigner entropy is |0⟩, which is the only pure Gaussian state in the Bloch ball. The details on the minimization are provided in Appendix <ref>. § A SUBSET OF WIGNER NON-NEGATIVE STATES In this section, for a restricted set of physical and Wigner non-negative (in general mixed) states ρ̂, we find that the Wigner entropy is in accordance with Conjecture <ref>. Let k∈ and k≥1. For any non-negative Wigner function W(q,p), we define the functional, μ_k[W]=kπ^k-1∫ dqdpW^k(q,p). ∂μ_k[W]/∂ k|_k→ 1≤ 0. The left hand side of Eq. (<ref>) is equal to 1+lnπ-S[W]. Therefore, Conjecture <ref> is equivalent to Conjecture <ref>. Additionally, for k=1 we get the normalization property of Wigner functions, μ_1[W]=∫ dpdqW(q,p)=1, Therefore, Conjecture <ref> is true if the following sufficient condition holds, μ_k[W]≤ 1 for k∈[1,2]. Furthermore, utilizing the fact that ∫ dqdpW_0^k(q,p)=1/kπ^k-1, where W_0(q,p) denotes the Wigner function of |0⟩, we can rewrite Eq. (<ref>) as, ν_k[W]≤ν_k[W_0], where, ν_k[W]=∫ dqdpW^k(q,p) for k∈[1,2]. We now derive a sufficient condition such that Eq. (<ref>) is true. For any (generally mixed) state ρ̂, its Wigner function has the form W_ρ̂(q,p)=W_0(q,p)P(q,p), where P(q,p)=∑_a,b=0^∞c_abq^ap^b is a polynomial in q,p. To prove Eq. (<ref>), one can start by writing ρ̂ on the Fock basis and then calculate the generic Wigner function using well-known methods (see for example <cit.>). Since Wigner functions are real-valued, c_ab∈ for any (a,b)∈ℕ^2. To make W(q,p) normalized, we can rewrite W(q,p) as W(q,p)=W_0(q,p) ∑_a,b=0^∞πc_ab/Γ(1+a/2)Γ(1+b/2)q^ap^b, where Γ(x)=∫_0^∞s^x-1e^-sds and ∑_a,b=0^∞c̃_ab = 1. We define F_ab(q,p) as, F_ab(q,p) =W_0(q,p)π/Γ(1 + a/2) Γ(1 + b/2)q^ap^b. In Appendix <ref> we prove that, F_k^k≤ν_k[W_0], where F_k=(∫ dqdp |F_ab(q,p)|^k)^1/k for any (a,b)∈^2 and k∈[1,2]. Let us impose the following condition, ∑_a,b=0^∞|c_ab|=1. For a non-negative Wigner function W(q,p) satisfying Condition <ref>, we utilize Eq. (<ref>) and the triangle inequality to get, ν_k[W] ≡W_k^k ≤ [∑_a,b=0^∞|c_ab|F_ab(q,p)_k]^k ≤ [∑_a,b=0^∞|c_ab|(ν_k[W_0])^1/k]^k = [(ν_k[W_0])^1/k]^k=ν_k[W_0]. Therefore, Condition <ref> is a sufficient condition for Conjecture <ref> and thus Conjecture <ref> to hold. § EXAMPLES We provide a few examples demonstrating that the set of states satisfying our conditions is non-empty and distinct from the set of passive states explored in <cit.>. In particular, we give three examples to show how the set of states explored in Section <ref> intersects with both the sets of passive states and of Fock-diagonal states. Example 1: Consider states of the form p_0|0⟩⟨0|+p_1|1⟩⟨1| with 0 ≤ p_i≤ 1, i=0,1, and ∑_i=0^1 p_i=1. All states of this form which additionally satisfy p_1≤1/2 (e.g. passive states) satisfy Condition <ref>. Example 2: Consider states of the form p_0|0⟩⟨0|+p_1|1⟩⟨1|+p_2|2⟩⟨2| with 0 ≤ p_i≤ 1, i=0,1,2, and ∑_i=0^2 p_i=1. All such states with p_1 ≤ 1/2, p_1-2p_2 ≥ 0 satisfy Condition <ref>. This set of states does not necessarily include passive states. For example, for p_0=p_2 = 1/4, p_1 = 1/2 the state is not passive but still satisfies Condition <ref> and thus satisfies the Conjecture <ref>. Example 3: Consider the state of p_0|0⟩⟨0|+p_1|1⟩⟨1|+p_2|2⟩⟨2|+p_3|3⟩⟨3| with 0 ≤ p_i≤ 1, i=0,1,2,3, and ∑_i=0^3 p_i=1. All such states with, p_1-2p_2+3p_3 ≥ 0, p_2-3p_3 ≥ 0, p_0+p_2 ≥ 1/2 satisfy Condition <ref>. We note that when p_0=p_1=p_2=p_3=1/4, the state is passive but does not satisfy Condition <ref>. From the examples above, we observe that our set only intersects with the set of passive states but does not contain it. Moreover, it directly follows that if a state is Fock-diagonal and Wigner non-negative, the state is not necessarily in compliance with Condition <ref>. In Appendix <ref>, we delve deeper into the relationship between our set and the set of Fock-diagonal states. We find that our Condition <ref> does not imply that a state ρ̂ is Fock-diagonal but it does imply it when ρ_n,m=0 if |n-m| is odd, where ρ_n,m is the element of the matrix representation of ρ̂ on the Fock basis. § CONCLUSIONS In this paper, we proved that the newly introduced Conjecture <ref> <cit.> holds true for two cases: for (generally mixed) qubits formed by Fock states |0⟩ and |1⟩ and for states that satisfy Condition <ref>. Therefore, for Wigner non-negative states, we presented progress towards a stronger position-momentum uncertainty relation compared to the one derived in the seminal work <cit.> and expanded the results of <cit.>. Moreover, the entropic uncertainty relation considered in this work, subsumes <cit.> the Wehrl entropy inequality for the always non-negative Q functions. We note that the Wehrl entropy is minimized for coherent states while it is not in general invariant under symplectic transformations, e.g., the Wehrl entropy of a coherent state and a squeezed state are not in general equal. The relationship between the set defined by Condition <ref> and the sets of passive states and Fock-diagonal states is depicted in Fig. <ref>. The difficulty of proving Conjecture <ref> for all W(q,p)≥ 0 lies in lacking a computationally-useful criterion for Wigner non-negativity which also excludes non-physical states: The condition W(q,p)≥ 0 merely imposes non-negativity on the the function, while one would need to take into account the condition, ∫ dq dp W(q,p) W_|ψ⟩(q,p) ≥ 0 for all pure states |ψ⟩, as well to ensure that W(q,p) corresponds to a physical state. One way forward could be to consider a set of functions W̃(q,p) that includes all physical Wigner functions, plus a subset of functions that are non-negative but do not correspond to physical states. For example, this can be done by considering only a (convenient) subset of pure states satisfying Eq. (<ref>). We note that an approach leading to the conclusion that the Wigner non-negative state minimizing the Wigner entropy is a pure state would prove Conjecture <ref> in general. This would be an immediate consequence of Hudson's theorem stating that any pure state with a non-negative Wigner function is necessarily a Gaussian state <cit.>. Lastly, we envision future works elaborating on: Conjecture <ref> for states defined across multiple modes, entropy power inequalities <cit.>, or even on the properties for the complex-valued Wigner entropy, i.e., for partly negative Wigner functions, in the direction of <cit.>. The authors are thankful to Zacharie Van Herstraeten, Michael G. Jabbour, Anaelle Hertz, and Nicolas Cerf for numerous fruitful discussions. C.N.G. and Q.Q. acknowledge financial support from the National Science Foundation, FET, Award No. 2122337. § MINIMIZATION OF EQ. (<REF>) Aequation In this Appendix, we prove that the Wigner entropy S_b of Eq. (<ref>) attains its minimum at r_3=1 and the corresponding value is 1+lnπ. Under the change of variables, q'=√(1-r_3)q and p'=√(1-r_3)p and using the condition of Eq. (<ref>), Eq. (<ref>) and its corresponding Wigner entropy become, W'_3(q',p') = 1/πe^-q'^2+p'^2/1-r_3[p'^2+(q'+√(r_3))^2], S_b≡ S[W'_3] = -1/1-r_3∫ dq'dp'W'_3ln(W'_3). With further change of variables, q'=Rsinθ and p'=Rcosθ, satisfying dq'dp'=RdRdθ, Eq. (<ref>) becomes, W”_3(R,θ)=1/πe^-R^2/1-r_3[R^2+2√(r_3)Rsinθ+r_3]. and the Wigner entropy S_b of Eq. (<ref>) in the new variables can be computed as, S_b ≡ S[W”_3] = -1/1-r_3∫_0^∞∫_0^2πdRdθ R1/πe^-R^2/1-r_3(R^2+2√(r_3)Rsinθ+r_3)ln(W_3) = -1/1-r_3∫_0^∞∫_0^2πdRdθ R1/πe^-R^2/1-r_3(R^2+2√(r_3)Rsinθ+r_3) ×[(-lnπ-R^2/1-r_3)+ln(R^2+r_3+2√(r_3)Rsinθ)] = -1/1-r_3∫_0^∞∫_0^2πdRdθ R1/πe^-R^2/1-r_3{(R^2+r_3)(-lnπ-R^2/1-r_3) +2√(r_3)Rsinθ(-lnπ-R^2/1-r_3)+(R^2+r_3)ln(R^2+r_3+2√(r_3)Rsinθ) +2√(r_3)Rsinθln(R^2+r_3+2√(r_3)Rsinθ)}. We have the following useful relations pertaining to the previous integrals, ∫_0^2πdθ sinθ=0, ∫_0^2πdθ ln(R^2+r_3+2√(r_3)Rsinθ)=2π(lnR^2+r_3/2+lnR^2+r_3+√((R^2-r_3)^2)/R^2+r_3), ∫_0^2πdθ sinθln(R^2+r_3+2√(r_3)Rsinθ)=2πR^2+r_3-√((R^2-r_3)^2)/2√(r_3)R, which when used in Eq. (<ref>) we get, S_b =2e^-r_3/1-r_3(1-r_3)+r_3-2U/1-r_3, where 2U/1-r_3 =-e^-r_3/1-r_3(r_3-1)-Ei(r_3/-1+r_3)-lnπ/r_3. Therefore, S_b=e^-r_3/1-r_3(1-r_3)+r_3+lnπ/r_3+Ei(-r_3/1-r_3). Then, we calculate the derivative d/dr_3S_b = e^-r_3/1-r_3r_3-2/1-r_3+1-1/r_3+e^-r_3/1-r_3/r_3(1-r_3) = e^-r_3/1-r_3r_3^2-2r_3+1/r_3(1-r_3)-1-r_3/r_3 = e^-r_3/1-r_31-r_3/r_3-1-r_3/r_3 = 1-r_3/r_3(e^-r_3/1-r_3-1) ≤ 0. It is clear that d/dr_3S_b=0 is only possible when r_3=0,1. By L'Hôpital's rule, we have lim_r_3→0 (1-r_3)(e^-r_3/1-r_3-1)/r_3=1+e^-r_3/1-r_3(r_3-2)/1-r_3/1=-1, lim_r_3→1 (1-r_3)(e^-r_3/1-r_3-1)/r_3=0(0-1)/1=0. Therefore, we conclude that S_b obtain its minimum value 1+lnπ at r_3=1. § PROOF OF EQ. (<REF>) Bequation First, by plugging in F_ab(q,p) to Eq. (<ref>), we get ln( π^k-1k^-a+b/2kΓ(1+ak/2)Γ(1+bk/2)/[Γ(1+a/2)Γ(1+b/2)]^k)≤0, which is equivalent to (k-1)lnπ-a+b/2kln k+ln(Γ(1+ak/2))+ln(Γ(1+bk/2))-kln(Γ(1+a/2))- kln(Γ(1+b/2))≤0. Denote by f(a,b,k) the left hand side of the above inequality and use the result from <cit.>, ψ(x)≤ln x-1/2x, where ψ(x)=d /dz[ln(Γ(x) )], we get ∂ f(a,b,k)/∂ k = lnπ-a+b/2(1+ln k)+a/2ψ(1+ak/2)+b/2ψ(1+bk/2) -ln(Γ(1+a/2))-ln(Γ(1+b/2)) ≤ lnπ-a+b/2(1+ln k)+a/2ln1+ak/2-a/2(1+ak) +b/2ln1+bk/2-b/2(1+bk)-ln(Γ(1+a/2))-ln(Γ(1+b/2)) =: g(a,b,k). We can then calculate ∂ g(a,b,k)/∂ k = -a+b/2k+a^2/2(1+ak)+a^2/2(1+ak)^2+b^2/2(1+bk)+b^2/2(1+bk)^2 = -a/2k(1+ak)^2-b/2k(1+bk)^2<0, where we always work with k∈[1,2]. Denote h(a,b) as h(a,b) = g(a,b,1) = lnπ-a+b/2+a/2ln1+a/2-a/2(1+a)+b/2ln1+b/2 -b/2(1+b)-ln(Γ(1+a/2))-ln(Γ(1+b/2)). When a>0, we apply results from <cit.>, ψ(x)>ln x-1/2x-1/12x^2, to get ∂ h(a,b)/∂ a = -1/2+1/2ln1+a/2+a/2(1+a)-1/2(1+a)^2-1/2ψ(1+a/2) = 1/2( -1+ln1+a/2+a/1+a-1/(1+a)^2-ψ(1+a/2)) < 1/2( -1+ln1+a/2+a/1+a-1/(1+a)^2-ln1+a/2+1/1+a+1/3(1+a)^2) = -1/3(1+a)^2 < 0. Similar for b, we have ∂ h(a,b)/∂ b<0. Now, set a or b equal to 0, we also have ∂ h(a,b)/∂ a|_a=0 < 0 ∂ h(a,b)/∂ b|_b=0 < 0. Thus, for any a,b∈, we have h(a,b) ≤ max{h(0,0),h(0,1),h(1,0),h(1,1)} = max{0,1/2(lnπ-3/2),lnπ-3/2} ≤ 0. Then combining with ∂ g(a,b,k)/∂ k<0, for any a,b∈ and k∈[1,2], we have g(a,b,k)≤ g(a,b,1)=h(a,b)≤0, which implies ∂ f(a,b,k)/∂ k≤0. Thus, we conclude for any (a,b)∈^2 and k∈[1,2], we have f(a,b,k)≤ f(a,b,1)=0, which proved the Eq. (<ref>) for any (a,b)∈^2 and k∈[1,2]. § FOCK BASIS PROPERTIES FOR STATES SATISFYING CONDITION <REF> Cequation In this Appendix, we show that Condition <ref> does not imply that the state ρ̂ is Fock-diagonal but it implies ρ̂_n,m=0 if |n-m| is odd. Let W(q,p) be a Wigner function satisfies Condition <ref> and χ_W(η), η∈ℂ, be its corresponding Wigner characteristic function. Without loss of generality, we can assume n-m=l≥0 and get ⟨ n|ρ̂|m⟩ = 1/π∫χ_W(η)⟨ n|D̂^†(η)|m⟩ d^2η = 1/π∫χ_W(η)e^-|η|^2/2⟨ n|e^-ηâ^†e^η^*â|m⟩ d^2η = 1/π∫χ_W(η)e^|η|^2/2(∑_s=0^∞(-η^*)^s/s!â^s|n⟩)^†(∑_t=0^∞(η^*)^t/t!â^t|m⟩)d^2η = 1/π∫χ_W(η)e^-|η|^2/2(∑_s=0^n(-η^*)^s/s!√(n!/(n-s)!)|n-s⟩)^†(∑_t=0^m(η^*)^t/t!√(m!/(m-t)!)|m-t⟩)d^2η = 1/π∫χ_W(η)e^-|η|^2/2( ∑_t=0^m(-η)^t+l(η^*)^t/(t+l)!t!√((m+l)!m!)/(m-t)!)d^2η = 1/π∫χ_W(η)e^-|η|^2/2 (-η)^l ( ∑_t=0^m(-1)^t|η|^2t/(t+l)!t!√((m+l)!m!)/(m-t)!)d^2η. The case for n-m<0 can be easily handled by switching m to n and t to s. Since the (inverse) Fourier transform conserves the parity of the function, we have that χ_W(η) is an even function of η. Together with the fact that ( ∑_t=0^m(-1)^t|η|^2t/(t+l)!t!√((m+l)!m!)/(m-t)!) is also an even function of η, the parity of the integrand only depends on l. Therefore, when l is odd, which in turn means the integrand is odd in η, we have ⟨ n|ρ̂|m⟩=0. When k is even, ⟨ n|ρ̂|m⟩ can be non-zero. The example below shows this. Consider a more general form of the Wigner function, p_0|0⟩⟨0|+p_1|1⟩⟨1|+p_2|2⟩⟨2|+c|0⟩⟨2|+c^*|2⟩⟨0|, we have W(q,p) = W_0[(1-2p_1)+(2p_1-4p_2+2√(2)c_1)q^2+4p_2q^2p^2. +.(2p_1-4p_2-2√(2)c_1)p^2+2p_2q^4+2p_2p^4-4√(2)c_2qp]. Therefore, all states with p_1≤1/2, p_1-2p_2-√(2)c_1≥0, c_2≤0 satisfies Condition <ref>. So, when p_0=1/3,p_1=1/2,p_2=1/6,c=√(2)/16-i, the state is not Fock-diagonal but still satisfies Condition <ref>.
http://arxiv.org/abs/2406.07972v1
20240612075414
Expected value and a Cayley-Menger formula for the generalized earth mover's distance
[ "William Q. Erickson" ]
math.ST
[ "math.ST", "stat.TH", "60B05 (Primary) 49Q22 (Secondary)" ]
§ ABSTRACT The earth mover's distance (EMD), also known as the 1-Wasserstein metric, measures the minimum amount of work required to transform one probability distribution into another. The EMD can be naturally generalized to measure the “distance” between any number (say d) of distributions. In previous work (2021), we found a recursive formula for the expected value of the generalized EMD, assuming the uniform distribution on the standard n-simplex. This recursion, however, was computationally expensive, requiring d+nd iterations. The main result of the present paper is a nonrecursive formula for this expected value, expressed as the integral of a certain polynomial of degree at most dn. As a secondary result, we resolve an unanswered problem by giving a formula for the generalized EMD in terms of pairwise EMDs; this can be viewed as an analogue of the Cayley–Menger determinant formula that gives the hypervolume of a simplex in terms of its edge lengths. Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey Wanlei Zhou June 17, 2024 =================================================================================== § INTRODUCTION §.§ Generalized earth mover's distance The earth mover's distance (EMD) is a widely used method for comparing two probability distributions on a common ground space; roughly speaking, the EMD measures the minimum amount of work required to transform one distribution into the other. The notion of “work” here (also called “cost”) depends on the geometry of the underlying ground space; for this reason, the EMD is often able to capture the similarity of datasets more accurately than other methods of comparison. Indeed, in recent decades, the EMD (also called the 1-Wasserstein metric) has been used in a growing range of applications, far beyond its origins in optimal transport theory Monge,Villani, in both the physical and social sciences. We highlight especially the recent survey <cit.> treating statistical aspects of the EMD. The EMD can naturally be adapted to compare any number of distributions, rather than only two at a time: intuitively, this generalized EMD measures the minimum amount of work required to transform all of the distributions into some common distribution, called their Wasserstein barycenter <cit.>. This generalization was introduced by Bein et al. <cit.>*p. 99, motivated by the statistical problem of assigning joint probabilities to samples in a survey. We refer the reader to the many references in <cit.> describing current applications of Wasserstein barycenters in natural language processing and machine learning. §.§ Overview of results The present paper develops our previous treatment <cit.> of the generalized earth mover's distance in two specific directions. In that treatment, we considered an arbitrary number (say d) of probability distributions in the standard n-simplex 𝒫_n, i.e., distributions on the ground space {1, …, n+1}, equipped with the usual Euclidean distance. The main result <cit.>*Thm. 7 was a recursive formula for the expected value of the generalized EMD, assuming the uniform distribution on 𝒫_n. In particular, we presented this formula as a special case of the following: 𝔼[] on 𝒫_n_1×⋯×𝒫_n_d = ∑_i=1^d n_i (𝔼[] on 𝒫_n_1×⋯×𝒫_n_i-1×⋯×𝒫_n_d) + C(n_1, …, n_d)/1 + ∑_i=1^d n_i, where C denotes the “cost” of moving the points n_1, …, n_d to a common barycenter. The desired formula is obtained from the specialization n_1 = ⋯ = n_d = n. Note, however, that this specialization depends upon first obtaining 𝔼[] for every tuple (n_1, …, n_d) such that each n_i ≤ n. Although one can exploit symmetry to restrict the recursion to, say, only the weakly increasing sequences 0 ≤ n_1 ≤⋯≤ n_d ≤ n, nevertheless this still requires d+nd many iterations of the formula (<ref>) before arriving at the desired expected value; hence for fixed n, the length of this recursion is O(d^n), and vice versa. The primary purpose of this paper is to give a new, nonrecursive formula for the expected value. Our main result (Theorem <ref>) is such a formula: 𝔼[] on (𝒫_n)^d = ∫_0^1 ∑_j=1^n ∑_k=1^d-1min{k, d-k}dk F_j(z)^k (1-F_j(z))^d-k dz, where each F_j(z) is a certain polynomial of degree n (see Lemma <ref>). Hence the integrand is itself a polynomial of degree at most dn. In terms of computational complexity, then, this integral formula (<ref>) is a vast improvement upon the recursive approach above: for example, when n=6 and d=100, the recursion (<ref>) requires well over one billion iterations, whereas the integral (<ref>) can be evaluated in roughly two seconds using Mathematica on a standard PC (returning the expected value 72.6685). We would still like to resolve this formula into a “closed” form without the integral, however, and we leave this as an open problem. The second purpose of this paper is to answer a question we posed in <cit.>*p. 150. In that paper, we observed that when d=3, the EMD of the three distributions equals the half-sum of the three pairwise EMDs. We also observed that such a phenomenon did not continue for d > 3: in general, it is impossible to reconstruct the EMD of d distributions from the d2 many pairwise EMDs. Hence the generalized EMD does indeed contain more information than the “classical” EMD (i.e., the d=2 case). Having made these observations, a natural question was to describe the exact relationship between the generalized EMD and the pairwise EMDs: in particular, what additional information is required (beyond the pairwise EMDs) to compute the generalized EMD? Also, for arbitrary d, can we characterize those d-tuples of distributions whose EMD can be determined from the pairwise EMDs alone, just as in the d=3 case? We settle both these questions in Section <ref>. First, we explicitly determine the “obstruction” to reconstructing the EMD from the pairwise EMDs: to each d-tuple ∈ (𝒫_n)^d, we associate a formal polynomial G(;q) whose second derivative at q=1 measures this obstruction, resulting in the following general formula (see Theorem <ref>): () = 1/d-1[G”(;1) + ∑pairwise EMDs]. Second, we give a statistical interpretation (<ref>) for the vanishing of G”(;1), in terms of the cumulative distributions of the distributions in . In this way we characterize those tuples whose EMD truly is just the scaled sum of the pairwise EMDs. We emphasize that the formula (<ref>) can be viewed as an analogue to the celebrated Cayley–Menger determinant formula (see (<ref>) below), which expresses the volume of a simplex in ℝ^d in terms of its edge lengths. Indeed, if one views the generalized EMD as a kind of “volume” determined by d distributions, then the pairwise EMDs are precisely the “edge lengths.” From this viewpoint, the result (<ref>) truly is a Cayley–Menger formula for the EMD, despite the fact that it looks quite different than the Euclidean version. §.§ Related work The question of computing the expected value of the EMD (in the classical case d=2) was first addressed by Bourn and Willenbring <cit.>, who found a recursive formula by means of generating functions. Subsequently, Frohmader and Volkmer <cit.> solved this recursion using analytic methods. The papers EK1,EK2 addressed the discrete version of the problem, studying integer-valued histograms rather than probability distributions. § PRELIMINARIES §.§ Basic notation Throughout the paper, we fix a positive integer n, and we study the elements of the standard n-simplex 𝒫_n { x = (x_1, …, x_n+1) : ∑_j x_j = 1, where each x_j ≥ 0 }⊂ℝ^n+1. (To avoid confusion later, we use 𝒫_n rather than the more usual Δ^n, since we will make frequent use of the symbol Δ to denote successive differences between vector components.) Each x ∈𝒫_n can be viewed as a probability distribution on the set [n+1] {1, …, n+1}. We will denote the cumulative distribution of x by a capital X, where X_j x_1 + ⋯ + x_j. Since necessarily X_n+1 = 1, we will suppress this final component and simply write X = (X_1, …, X_n). We fix a positive integer d ≥ 2, and we use a boldface to denote a d-tuple of distributions, while a capital boldface 𝐗 denotes the d-tuple of cumulative distributions. Within a d-tuple, we index the individual distributions by superscripts 1, …, d. Hence when there is a superscript only, we are denoting a distribution in 𝒫_n; when there is a subscript as well, we are denoting a certain component of that distribution. As is standard, we use parentheses around indices to denote the order statistics, and we use the symbol Δ whenever referring to the difference between successive order statistics. Below in (<ref>), for the reader's convenience, we set down all of the notation we will use throughout the paper: x = (x_1, …, x_n+1) ∈𝒫_n, X = (X_1, …, X_n), where X_j x_1 + ⋯ + x_j, x_(i) the ith smallest component of x, Δ x_(i) x_(i+1) - x_(i), where Δ x_(0) x_(1), = (x^1, …, x^d) ∈ (𝒫_n)^d, 𝐗 = (X^1, …, X^d), x^i_j the jth component of x^i, X^i_j x^i_1 + ⋯ + x^i_j = the jth component of X^i, 𝐗^∙_j (X^1_j, …, X^d_j), X^(i)_j the ith smallest component of 𝐗^∙_j, Δ X^(i)_j X^(i+1)_j - X^(i)_j. §.§ Dirichlet and beta distributions; order statistics As usual in the context of continuous random variables, we write PDF for probability density function, and CDF for cumulative distribution function. The Dirichlet distribution, denoted by Dir(α) with parameter α = (α_1, …, α_n+1), is a certain distribution on 𝒫_n. For our purposes, we will need only the following standard facts regarding the Dirichlet distribution, the first of which involves the beta distribution, denoted by Beta(a,b): PDF of Beta(a,b) is given by f(z) = Γ(a+b)/Γ(a) Γ(b) z^a-1 (1-z)^b-1, z ∈ [0,1]. (To avoid confusion with points x ∈𝒫_n, we will take z as our variable for all PDFs and CDFs in this paper.) In general, if x ∼ Dir(α), then the marginal distributions of Dir(α) are the probability distributions of the individual components x_j of x, and are given by beta distributions as follows: x_j ∼ Beta(α_j, α_1 + ⋯ + α_j + ⋯ + α_n+1), where the hat denotes an omitted term. The Dirichlet distribution enjoys the aggregation property, meaning that if x ∼ Dir(α), then removing two components x_j and x_k and replacing them by their sum yields (x_1, …, x_j, …, x_k, x_j + x_k, …, x_n+1) ∼ Dir(α_1, …, α_j, …, α_k, α_j + α_k, …, α_n+1). In particular, by repeatedly aggregating initial components, we have the following formula for the aggregation of the first j components: (x_1 + ⋯ + x_j, x_j+1, …, x_n+1) ∼ Dir(α_1 + ⋯ + α_j, α_j+1, …, α_n+1). Recalling from (<ref>) that X_j x_1 + ⋯ + x_j, we conclude from (<ref>) and (<ref>) that X_j ∼ Beta(α_1 + ⋯α_j, α_j+1 + ⋯α_n+1). In this paper, we will consider only the uniform distribution on 𝒫_n, which corresponds to the parameter α = (1, …, 1), also known as the flat Dirichlet distribution. In this case, the equation (<ref>) specializes to X_j ∼ Beta(j, n-j+1). The CDF of Beta(a,b) is called the regularized incomplete beta function, typically denoted by I_z(a,b). In this paper, we will be interested in the CDF of X_j, to which we will give the simpler notation F_j(z), rather than I_z(j, n-j+1). We will compute F_j(z) explicitly in Lemma <ref>. Finally, we recall the following standard fact regarding order statistics. Let Y_1, …, Y_d form a random sample from a random variable Y, whose CDF is F_Y(z). Let Y_(i) denote the ith order statistic, i.e., the ith smallest element of {Y_1, …, Y_d}. It is well known that, for each 1 ≤ i ≤ d, the ith order statistic Y_(i) is a random variable whose CDF is given by F_Y_(i)(z) = ∑_k=i^d dk F_Y(z)^k (1-F_Y(z))^d-k. § THE GENERALIZED EARTH MOVER'S DISTANCE This section defines the generalized earth mover's distance between a tuple of distributions = (x^1, …, x^d) ∈ (𝒫_n)^d, and culminates in the key Proposition <ref>, where we prove a straightforward way to compute this distance in terms of the cumulative distributions X^1, …, X^d. §.§ Classical and generalized EMD For the classical case d=2, the earth mover's distance (EMD) is typically defined in terms of the optimal solution to the transportation problem (often called the Hitchcock, Kantorovich, Koopman, and/or Monge problem). When d=2, the problem is to determine a minimal-cost plan for transporting mass from supply sites to demand sites; both the distribution of the mass among the supply sites and the desired distribution of the mass among the demand sites are viewed as probability distributions, and the cost of moving one unit of mass between supply site j and demand site k is determined by some cost function C(j,k). In applications where the number of supply and demand sites is the same, the problem can be viewed as redistributing either distribution in order to obtain the other. This problem generalizes naturally to any number d of distributions, in which case the problem is to find the minimal-cost way to transform them all into the same distribution (their Wasserstein barycenter, as mentioned in the introduction). Explicitly, the generalized transportation problem can be stated as follows. Let = (x^1, …, x^d) ∈ (𝒫_n)^d. Let C be a d-dimensional, (n+1) ×⋯× (n+1) array, whose entries we write as nonnegative real numbers C(y) for each y = (y_1, …, y_d) ∈ [n+1]^d. This C (called the cost array) depends on the geometry one imposes on the underlying ground space [n+1]. Intuitively, each entry C(y) gives the minimum cost of moving d units of mass (placed at sites y_1, …, y_d) to a common site. The generalized transportation problem asks us to find an array T_ (called a transport plan with respect to ), with the same dimensions as C, that solves the following linear programming problem: Minimize ∑_y ∈ [n+1]^d C(y) T_(y), subject to T_(y) ≥ 0 for all y ∈ [n+1]^d, and ∑_y: y_i = j T_(y) = x^i_j for all i ∈ [d] and j ∈ [n+1]. Upon finding an optimal transport plan T^*_, we define the (generalized) earth mover's distance, still denoted by (), to be the value of the objective function ∑_y C(y) T^*_(y) in (<ref>). In other words, () gives the minimal cost of equalizing the distributions x^1, …, x^d. §.§ Specialization to one-dimensional ground space In this paper, we view the ground space [n+1] = {1, …, n+1} embedded naturally in the real line, equipped with the usual Euclidean distance. Thus the cost of moving one unit of mass from i to j is simply |i-j|. Therefore, to determine a formula for the entries C(y) of the cost array, we observe that if one places one unit of mass (with repetition allowed) at each of the points y_1, …, y_d ∈ [n+1], then the minimum cost of moving all d units of mass to a common point in [n+1] is given by the ℓ_1-distance between y = (y_1, …, y_d) and the “main diagonal” ℝ(1, …, 1). In <cit.>*Prop. 1, we formulated this cost as an alternating sum of “sorted” coordinates: i.e., we took the greatest of the y_i, then subtracted the smallest, then added the next greatest, then subtracted the next smallest, and so on. In Proposition <ref> below, we will give three equivalent formulas for the entries C(y), which will be useful in different contexts. Before stating the proposition, we define the following sign function for 1 ≤ i ≤ d: ϵ(i) -1, i < d+1/2, +0, i = d+1/2, +1, i > d+1/2. Note that ∑_i=1^d ϵ(i) = 0. We also recall the notion of Lee weight, which was introduced in <cit.> in the context of coding theory. The Lee weight, in a sense, measures an element's “absolute value” in the cyclic group ℤ/dℤ, and is denoted as follows: : {0,1,…, d} ⟶{0,1,…, ⌊ d/2 ⌋}, k ⟼min{k, d-k}. We include both 0 and d in the domain for the sake of convenience; in this paper we have no need for the cyclic group structure, and so we view the domain of as merely a subset of ℤ. Note that (<ref>) can be viewed as a difference operator on (<ref>), in the sense that ϵ(i) = (i-1) - (i). Consequently, we have (i) = - ∑_k=1^i ϵ(k). Consider the generalized transportation problem described above, with the Euclidean distance on [n+1]. In the objective function (<ref>), the cost C : ℝ^d ⟶ℝ_≥ 0 is given by the two equivalent formulas C(y) = ∑_i=1^d ϵ(i) y_(i) = ∑_i=1^d-1(i) ·Δ y_(i), where ϵ(i) is defined in (<ref>), (i) is defined in (<ref>), and all other notation is defined in (<ref>). Moreover, upon restricting the domain to [n+1]^d, we have a third equivalent formula C(y) = ∑_j=1^n( #{i : y_i > j}). The content of <cit.>*Prop. 1, adapted to our present notation, is that the cost in this transportation problem (i.e., the ℓ_1-distance between y ∈ℝ^d and the main diagonal) is given by C(y) = ∑_i=1^⌊ d/2 ⌋ y_(d-i+1) - y_(i), which is clearly equivalent to our first formula (<ref>). To show that (<ref>) is equivalent to (<ref>), we observe that ∑_k=1^d ϵ(k) y_(k) = ∑_k=1^d ϵ(k) ∑_i=0^k-1Δ y_(i) = ∑_i=0^d-1(∑_k=i+1^d ϵ(k) ) Δ y_(i) = ∑_i=0^d-1(- ∑_k=1^i ϵ(k) )_since ∑_k=1^d ϵ(k) = 0Δ y_(i) = ∑_i=0^d-1(i)_by (<ref>)·Δ y_(i), whre the i=0 term vanishes since (0)=0. Finally, supposing that y ∈ [n+1]^d, we show that (<ref>) is equivalent to (<ref>) as follows: ∑_i=1^d-1(i) ·Δ y_(i) = ∑_i=1^d-1(i) ·#{ j ∈ [n] : y_(i)≤ j < y_(i+1)} = ∑_i=1^d-1(i) ∑_j=y_(i)^y_(i+1)-11 = ∑_j=1^n(max{ i : y_(i)≤ j}_#{i : y_i ≤ j }) = ∑_j=1^n ( #{i: y_i > j }), where the last line follows from the fact that (k) = (d-k). Recall that the generalized EMD is defined as the minimum value of the objective function (<ref>). We showed in <cit.>*9 that the cost array C has an important property called the Monge property, in any dimension d. Consequently, by <cit.>*Thm. 4.1, an optimal transport plan T^*_ can be constructed via a natural d-dimensional analogue of the well-known “northwest corner rule” <cit.>. Bein et al. <cit.>*p. 106 name this generalized algorithm GREEDY_d, and define it as follows: begin by setting the entry T^*_(1, …, 1) = min{x^1_1, …, x^d_1}. Then reduce each x^i_1 by this entry (thereby reducing at least one of them to 0). For each x^k_1 that is reduced to 0, the rest of the entries (in T^*_) in the coordinate hyperplane y_k = 1 are set to 0. Thus at least one of the dimensions has been reduced from n+1 to n. The remaining subarray of T^*_ is now filled in recursively. It is straightforward to verify that the output T^*_ of the algorithm GREEDY_d can be described explicitly as follows. For each y ∈ [n+1]^d, define the interval I(y) ⋂_i=1^d [X^i_y_i-1, X^i_y_i) where for convenience we set X^i_0 0. Then T^*_(y) = length of I(y). For example, revisiting the first step of GREEDY_d described above, we verify that indeed T^*_(1, …, 1) = length of [X^1_0, X^1_1) ∩⋯∩ [X^d_0, X^d_1) = length of [0, x^1_1) ∩⋯∩ [0, x^d_1) = length of [0, min{x^1_1, …, x^d_1}) = min{x^1_1, …, x^d_1}. Likewise, we confirm that if k is such that x^k_1 = min{x^1_1, …, x^d_1}, then for every subsequent y ≠ (1, …, 1) such that y_k = 1, there is some ℓ such that y_ℓ≥ 2, and thus [X^k_0, X^k_1) ∩ (X^ℓ_y_ℓ - 1, X^ℓ_y_ℓ) = ∅, since x^k_1 ≤ x^ℓ_1 implies that X^k_1 ≤ X^ℓ_1≤ X^ℓ_2≤⋯≤ X^ℓ_n. Hence every subsequent entry T^*_(y) in the hyperplane y_k = 1 is automatically 0, as stipulated by the GREEDY_d algorithm. Let ∈ (𝒫_n)^d, with 𝐗^∙_j as in (<ref>), and C as in Proposition <ref>. We have () = ∑_j=1^n C(𝐗^∙_j). We have () = ∑_y C(y) T^*_(y) where T^*_ is the output of the GREEDY_d algorithm, as discussed above. Hence by (<ref>), we have () = ∑_y ∈ [n+1]^d C(y) ·length of I(y). Observe from (<ref>) that ⋃_y ∈ [n+1]^d I(y) = [0,1), and this union is pairwise disjoint. Hence for each t ∈ [0,1), we can set y(t) the unique y such that t ∈ I(y), whose ith component y_i(t) is given by y_i(t) = #{ 0 ≤ k ≤ n : X^i_k ≤ t }. It follows immediately that y_i(t) > j if and only if X^i_j ≤ t. Therefore for each 1 ≤ j ≤ n, we have #{i: y_i(t) > j} = #{ i : X^i_j ≤ t}. Now we can rewrite (<ref>) as () = ∑_y ∈ [n+1]^d∫_I(y) C(y) dt = ∫_[0,1) C(y(t)) dt = ∫_[0,1)(∑_j=1^n(#{i: y_i(t) > j}) ) dt by (<ref>) = ∑_j=1^n∫_[0,1)(#{i: X^i_j ≤ t }) dt by (<ref>) = ∑_j=1^n ∑_i=1^d-1(i) ·Δ X_j^(i) = ∑_j=1^n C(𝐗^∙_j) by (<ref>). § MAIN RESULT: EXPECTED VALUE Before stating our main result, we first obtain an explicit formula for the CDF of the random variable X_j from (<ref>) and (<ref>). We record this CDF, which we call F_j(z), in the following lemma. Let x ∈𝒫_n be chosen uniformly at random, and let F_j(z) denote the CDF of the random variable X_j. We have F_j(z) = ∑_m=j^n (-1)^m-jnmm-1j-1 z^m, for z ∈ [0,1]. We have F_j(z) ∫_0^z PDF of Beta(j, n-j+1) dt by (<ref>) = ∫_0^z Γ(n+1)/Γ(j)Γ(n-j+1) t^j-1 (1-t)^n-j dt by (<ref>) = ∫_0^z Γ(n+1)/Γ(j)Γ(n-j+1)∑_ℓ = 0^n-j (-1)^ℓn-jℓ t^ℓ + j - 1 dt = ∑_ℓ = 0^n-jΓ(n+1)/Γ(j)Γ(n-j+1) (-1)^ℓn-jℓz^ℓ + j/ℓ+j. Since n and j are positive integers, we can replace the gamma functions by factorials; then upon re-indexing the sum via m ℓ + j, we have F_j(z) = ∑_m=j^n (-1)^m-jn!/(j-1)!(n-j)!n-jm-jz^m/m = ∑_m=j^n (-1)^m-jn!/(j-1)!(n-j)!(n-j)!/(m-j)!(n-m)!z^m/m = ∑_m=j^n (-1)^m-jn!/(n-m)!1/m!(m-1)!/1_1/m1/(j-1)!(m-j)! z^m, which simplifies to the formula given in the lemma. Our main result is the following formula for the expected value of the EMD on (𝒫_n)^d. Let ∈ (𝒫_n)^d be chosen uniformly at random. The expected value of the EMD is given by 𝔼[ () ] = ∫_0^1 ∑_j=1^n ∑_k=1^d-1(k) dk F_j(z)^k (1-F_j(z))^d-k dz, where (k) min{k, d-k} is the Lee weight, and F_j(z) is the polynomial given in Lemma <ref>. We have 𝔼[()] = 𝔼[∑_j=1^n C(𝐗^∙_j) ] by Proposition <ref> = ∑_j=1^n 𝔼[C(𝐗^∙_j)] = ∑_j=1^n 𝔼[ ∑_i=1^d ϵ(i) X^(i)_j ] by (<ref>) = ∑_j=1^n ∑_i=1^d ϵ(i) ·𝔼[X^(i)_j] . Since X^(i)_j is the ith order statistic on X_j, it follows from (<ref>) that CDF of X^(i)_j F_j^(i)(z) = ∑_k=i^d dk F_j(z)^k (1-F_j(z))^d-k, where F_j(z) is the CDF of the random variable X_j, as given in Lemma <ref>. Therefore we obtain 𝔼[X^(i)_j] = ∫_0^1 1- F^(i)_j(z) dz = 1 - ∫_0^1 F_j^(i)(z) dz = 1 - ∫_0^1 ∑_k=i^d dk F_j(z)^k (1-F_j(x))^d-k, where the second line follows directly from (<ref>). Substituting this in (<ref>), we have 𝔼[ () ] = ∑_j=1^n ∑_i=1^d ϵ(i) ·(1 - ∫_0^1 ∑_k=i^d dk F_j(z)^k (1-F_j(x))^d-k) = ∑_j=1^n (∑_i=1^d ϵ(i) - ∫_0^1 ∑_i=1^d ∑_k=i^d ϵ(i) dk F_j(z)^k (1-F_j(x))^d-k), which we will now simplify using the definition of ϵ(i) in (<ref>). Since ∑_i=1^d ϵ(i) = 0, the first sum in the large parentheses vanishes in (<ref>). Moreover, we have ∑_i=1^d ∑_k=i^d ϵ(i) = ∑_k=1^d ∑_i=1^k ϵ(i) = -∑_k=1^d min{k, d-k} = - ∑_k=1^d-1(k), where is the Lee weight defined in (<ref>). Applying these two observations to (<ref>), we obtain the formula stated in the theorem. As a sanity check, one can quickly use Theorem <ref> to recover the table of expected values we included in our previous paper <cit.>*Table 2. First note, however, that the parameter n in that paper corresponds to n-1 in the present paper; also, the table mentioned above gives the unit normalized expected values. Hence, in order to recover Table 2 exactly as it is printed in <cit.>, we must decrease n by 1 before using Theorem <ref>, and then divide by n ⌊ d/2 ⌋. As an example, let d=10 and n=9; the corresponding entry in <cit.>*Table 2 is 0.1975 (rounded to four decimal places). To recover this value, we evaluate our new formula in Theorem <ref> at d=10 and n=9-1=8, which yields approximately 7.9002814. Upon unit normalizing by dividing by n ⌊ d/2 ⌋ = 8 · 5 = 40, we indeed recover 0.1975. § A CAYLEY–MENGER FORMULA FOR THE EMD §.§ The EMD as hypervolume In this section, we solve a problem we had previously left open regarding the relationship between the EMD of a d-tuple = (x^1, …, x^d), on one hand, and the EMDs of the individual pairs (x^i, x^j) on the other hand. In particular, in <cit.>*Prop. 5 we observed that in the special case d=3, the EMD equals half the sum of the pairwise EMDs. That is, for x,y,z ∈𝒫_n, we showed that (x,y,z) = (x,y) + (x,z) + (y,z)/2. For d>3, however, there was no such formula solely in terms of the pairwise EMDs, and it was unclear what (if anything) could be said in general. Motivating this problem is its well-known (and ancient) analogue in Euclidean geometry, namely Heron's formula for the area of a triangle ABC in terms of its side lengths a,b,c: area of ABC = √(s(s-a)(s-b)(s-c)), where s = (a+b+c)/2 is the semiperimeter. Indeed, one can view (<ref>) as a sort of Heron's formula for the EMD, expressing the “area” between three distributions in terms of the three “side lengths.” Notice that for the EMD, unlike the Euclidean setting, the formula (<ref>) suggests that this “area” equals exactly the “semiperimeter.” In the Euclidean setting, a natural question is whether Heron's formula generalizes to a formula for (hyper)volume in higher dimensions; in other words, can one express the volume of a (d-1)-dimensional simplex Δ solely in terms of its edge lengths (i.e., the volumes of its 1-dimensional faces)? The answer to this problem is affirmative, made precise by the Cayley–Menger determinant below (see <cit.>): (volume of Δ)^2 = (-1)^d/2^d-1 (d-1)!^2·[ℓ^2_ij]_i,j=1^d+1, where ℓ_ij is the length of the edge between the ith and jth vertices (and ℓ_i,d+1 = ℓ_d+1,i 1 for all i ≤ d, and ℓ_d+1,d+1 0). Hence the volume can be computed from the edge lengths alone. By analogy, it makes sense to restate the main problem in this section as follows: find a Cayley–Menger formula for the EMD. By this, we mean a general formula relating the generalized (d-fold) EMD to the pairwise EMDs in a d-tuple. In this analogy, we view () as the volume of the (d-1)-dimensional simplex whose d vertices are given by = (x^1, …, x^d). The edges of this simplex are the pairs (x^k, x^ℓ) for all 1 ≤ k < ℓ≤ d, and so each pairwise (x^k, x^ℓ) is viewed as an edge length. Note that the EMD of a pair (i.e., the setting where d=2) takes a particularly simple form (following directly from Proposition <ref>): (x^1, x^2) = ∑_j=1^n C(X^1_j, X^2_j) = ∑_j=1^n | X^1_j - X^2_j|. When the EMD is viewed from the geometric perspective outlined above, the formula (<ref>) does indeed state that the area of a triangle is exactly its semiperimeter. This is the desired Cayley–Menger EMD formula for d=3, and the problem now is to generalize this for all values of d. §.§ A Cayley–Menger formula for the EMD (The reader may find it helpful to look ahead at Example <ref>, which illustrates the results in this final subsection.) Let ∈ (𝒫_n)^d, and define the following polynomial in the formal indeterminate q: G(𝐱; q) ∑_i=1^d-1∑_j=1^n Δ X^(i)_j · q^(i). Note that the derivative of G(;q), when evaluated at q=1, recovers the EMD: G'(;1) = ∑_j=1^n ∑_i=1^d-1(i) ·Δ X^(i)_j_C(𝐗^∙_j), by (<ref>) = (), where the second equality follows from Proposition <ref>. Moreover, it is the second derivative of G(;q) that plays the key role in our following analogue of the Cayley–Menger formula. Let = (x^1, …, x^d) ∈ (𝒫_n)^d. We have () = 1/d-1[G”(;1) + ∑_1 ≤ k < ℓ≤ d(x^k, x^ℓ)]. Starting with (<ref>) and multiplying both sides by d-1, we have (d-1) ·() = ∑_j=1^n ∑_i=1^d-1(i) ·Δ X^(i)_j · (d-1) = ∑_j=1^n ∑_i=1^d-1(i) ·Δ X^(i)_j ·( ((i) - 1) + (d-(i))) = ∑_j=1^n ∑_i=1^d-1(i) ·( (i) - 1 ) ·Δ X^(i)_j_G”(; 1) + ∑_j=1^n ∑_i=1^d-1(i) ·( d - (i) ) ·Δ X^(i)_j. We now rewrite the second sum on the right-hand side of (<ref>) as follows: ∑_j=1^n ∑_i=1^d-1(i) ·( d - (i) )_i(d-i)· Δ X^(i)_j = ∑_j=1^n ∑_i=1^d-1∑_(k,ℓ): 1 ≤ k ≤ i < ℓ≤ d_i(d-i) many termsΔ X^(i)_j = ∑_j=1^n ∑_(k,ℓ): 1 ≤ k < ℓ≤ d(∑_i = k^ℓ - 1Δ X^(i)_j ) = ∑_(k,ℓ): 1 ≤ k < ℓ≤ d∑_j=1^n (X^(ℓ)_j - X^(k)_j) =∑_(k',ℓ'): 1 ≤ k' < ℓ' ≤ d∑_j=1^n | X^ℓ'_j - X^k'_j| = ∑_(k',ℓ'): 1 ≤ k' < ℓ' ≤ d(x^k', x^ℓ') by (<ref>). Upon substituting this expression into (<ref>) and dividing both sides by d-1, we obtain the desired result. Unlike the original Cayley–Menger formula (<ref>) in the Euclidean setting, our EMD analogue in Theorem <ref> shows that in general, the EMD cannot be expressed in terms of the “edge lengths” (i.e., pairwise EMDs) alone. The quantity G”(; 1) measures the “obstruction” in this regard, namely, how far away the EMD is from a scaled sum of the edge lengths. In order to gain a more precise understanding of this obstruction measured by G”(;1), we revisit our observation (<ref>) above, which identified () with G'(;1). The form of (<ref>) makes it clear that () = 0 if and only if every Δ X^(i)_j = 0; this, in turn, occurs if and only if every x^i has the same cumulative distribution X^i, meaning that x^1 = ⋯ = x^d. This is exactly what we would expect, of course: the EMD measures the work required to equalize d distributions, and therefore vanishes if and only if those distributions are already all equal. In fact, as we show below, this first derivative is just a special case (and the strongest case) of a general statistical interpretation of G(;q). Indeed, the higher-order derivatives of G(;q) evaluated at q=1 also measure the similarity of the cumulative distributions X^i; with each successive derivative, however, we “relax” this measurement by disregarding the remaining minimum and maximum value for each j, treating them as if they are outliers: Let ∈ (𝒫_n)^d, with G(;q) as defined in (<ref>). Let 1 ≤ k ≤⌈ d/2 ⌉. The value of d^k/dq^k G(;q) |_q=1 is nonnegative, and it attains zero if and only if X^(k)_j = X^(k+1)_j = ⋯ = X^(d-k)_j = X^(d-k+1)_j, for each 1 ≤ j ≤ n. We have d^k/dq^k G(;q) = ∑_i=1^d-1∑_j=1^n Δ X^(i)_j ·((i))((i)-1) ⋯ ((i) - k+1)_vanishes if and only if (i) < k· q^(i) - k, and since the condition (i) < k is equivalent to i<k or d-i<k, all nonzero terms must correspond to an index i such that k ≤ i ≤ d-k. Thus since every Δ X^(i)_j X^(i+1)_j - X^(i)_j is nonnegative by definition, the kth derivative at q=1 is zero if and only if Δ X^(i)_j = 0 for all 1 ≤ j ≤ n and for all i such that k ≤ i ≤ d-k. Combining Theorem <ref> with the k=2 case in Proposition <ref>, we are now able to characterize the condition under which the EMD really is just a scaled sum of its edge lengths: [ () = 1/d-1∑_k < ℓ(x^k, x^ℓ); if and only if; X^(2)_j = X^(3)_j = ⋯ = X^(d-2)_j = X^(d-1)_j for all 1 ≤ j ≤ n. ] This now explains the motivating example (<ref>) from our previous paper, where d=3 and the EMD is always just a scaled sum of edge lengths: when d=3, the chain of equalities in (<ref>) becomes trivial. This simple example summarizes the various interpretations of the polynomial G(;q) described above. We take n=3 and d=6. Let = (x^1, …, x^6) consist of the distributions below, along with their cumulative distributions X^i (written, as above, without the last component, which is necessarily 1): x^1 = (.2, .2, .2, .4), X^1 = (.2, .4, .6) x^2 = (.3, .0, .4, .3), X^2 = (.3, .3, .7) x^3 = (.6, .0, .3, .1), X^3 = (.6, .6, .9) x^4 = (.0, .2, .1, .7), X^4 = (.0, .2, .3) x^5 = (.7, .1, .2, .0), X^5 = (.7, .8, 1.0) x^6 = (.1, .4, .0, .5), X^6 = (.1, .5, .5). First we may as well compute () directly. To do this, we recognize that 𝐗^∙_1, 𝐗^∙_2, and 𝐗^∙_3 are the three “columns” of coordinates in the cumulative distributions we have displayed above; we compute the cost of each of them using Proposition <ref>: 𝐗^∙_1 = (.2, .3, .6, .0, .7, , .1) C(𝐗^∙_1) = 1.3, 𝐗^∙_2 = (.4, .3, .6, .2, .8, , .5) C(𝐗^∙_2) = 1.0, 𝐗^∙_3 = (.6, .7, .9, .3, .1, , .5) C(𝐗^∙_1) = 1.2. Thus by Proposition <ref>, we have () = 1.3 + 1.0 + 1.2 = 3.5. Next, we will construct the polynomial G(;q). An intuitive approach is to start with the histograms of X^1, …, X^6, shown below: [scale=.4] at (3,0) [label=right,lightgray,scale=.75:0.0] ; at (3,2) [label=right,lightgray,scale=.75:0.2] ; at (3,4) [label=right,lightgray,scale=.75:0.4] ; at (3,6) [label=right,lightgray,scale=.75:0.6] ; at (3,8) [label=right,lightgray,scale=.75:0.8] ; at (3,10) [label=right,lightgray,scale=.75:1.0] ; [lightgray] (3,10) – (3,0) – (0,0); (0,0) – ++(0,2) (1,0) – ++(0,4) (2,0) – ++(0,6); [ultra thick,red!80!black] (0,2) – ++(1,0) (1,4) – ++(1,0) (2,6) – ++(1,0) ; at (1.5, -1) X^1; [scale=.4] at (3,0) [label=right,lightgray,scale=.75:0.0] ; at (3,2) [label=right,lightgray,scale=.75:0.2] ; at (3,4) [label=right,lightgray,scale=.75:0.4] ; at (3,6) [label=right,lightgray,scale=.75:0.6] ; at (3,8) [label=right,lightgray,scale=.75:0.8] ; at (3,10) [label=right,lightgray,scale=.75:1.0] ; [lightgray] (3,10) – (3,0) – (0,0); (0,0) – ++(0,3) (1,0) – ++(0,3) (2,0) – ++(0,7); [ultra thick,orange!90!black] (0,3) – ++(1,0) (1,3) – ++(1,0) (2,7) – ++(1,0) ; at (1.5, -1) X^2; [scale=.4] at (3,0) [label=right,lightgray,scale=.75:0.0] ; at (3,2) [label=right,lightgray,scale=.75:0.2] ; at (3,4) [label=right,lightgray,scale=.75:0.4] ; at (3,6) [label=right,lightgray,scale=.75:0.6] ; at (3,8) [label=right,lightgray,scale=.75:0.8] ; at (3,10) [label=right,lightgray,scale=.75:1.0] ; [lightgray] (3,10) – (3,0) – (0,0); (0,0) – ++(0,6) (1,0) – ++(0,6) (2,0) – ++(0,9); [ultra thick,yellow!90!black] (0,6) – ++(1,0) (1,6) – ++(1,0) (2,9) – ++(1,0) ; at (1.5, -1) X^3; [scale=.4] at (3,0) [label=right,lightgray,scale=.75:0.0] ; at (3,2) [label=right,lightgray,scale=.75:0.2] ; at (3,4) [label=right,lightgray,scale=.75:0.4] ; at (3,6) [label=right,lightgray,scale=.75:0.6] ; at (3,8) [label=right,lightgray,scale=.75:0.8] ; at (3,10) [label=right,lightgray,scale=.75:1.0] ; [lightgray] (3,10) – (3,0) – (0,0); (0,0) – ++(0,0) (1,0) – ++(0,2) (2,0) – ++(0,3); [ultra thick,green!70!black] (0,0) – ++(1,0) (1,2) – ++(1,0) (2,3) – ++(1,0) ; at (1.5, -1) X^4; [scale=.4] at (3,0) [label=right,lightgray,scale=.75:0.0] ; at (3,2) [label=right,lightgray,scale=.75:0.2] ; at (3,4) [label=right,lightgray,scale=.75:0.4] ; at (3,6) [label=right,lightgray,scale=.75:0.6] ; at (3,8) [label=right,lightgray,scale=.75:0.8] ; at (3,10) [label=right,lightgray,scale=.75:1.0] ; [lightgray] (3,10) – (3,0) – (0,0); (0,0) – ++(0,7) (1,0) – ++(0,8) (2,0) – ++(0,10); [ultra thick,blue!70!black] (0,7) – ++(1,0) (1,8) – ++(1,0) (2,10) – ++(1,0) ; at (1.5, -1) X^5; [scale=.4] at (3,0) [label=right,lightgray,scale=.75:0.0] ; at (3,2) [label=right,lightgray,scale=.75:0.2] ; at (3,4) [label=right,lightgray,scale=.75:0.4] ; at (3,6) [label=right,lightgray,scale=.75:0.6] ; at (3,8) [label=right,lightgray,scale=.75:0.8] ; at (3,10) [label=right,lightgray,scale=.75:1.0] ; [lightgray] (3,10) – (3,0) – (0,0); (0,0) – ++(0,1) (1,0) – ++(0,5) (2,0) – ++(0,5); [ultra thick,violet] (0,1) – ++(1,0) (1,5) – ++(1,0) (2,5) – ++(1,0) ; at (1.5, -1) X^6; Now if we superimpose all of these histograms on each other, we can visualize each difference Δ X^(i)_j as the distance between the ith and (i+1)th horizontal bar (counting from the bottom) in the jth column (counting from the left); in the picture below, we display Δ X^(4)_1 as an example. In the space between these two bars, we write q^(i), thus obtaining a full visualization of the terms in the polynomial G(;q) as defined in (<ref>): [scale=.7] at (3,0) [label=right,gray,scale=.75:0.0] ; at (3,2) [label=right,gray,scale=.75:0.2] ; at (3,4) [label=right,gray,scale=.75:0.4] ; at (3,6) [label=right,gray,scale=.75:0.6] ; at (3,8) [label=right,gray,scale=.75:0.8] ; at (3,10) [label=right,gray,scale=.75:1.0] ; [gray] (3,10) – (3,0) – (0,0); [ultra thick,red!80!black] (0,2) – ++(1,0) (1,4) – ++(1,0) (2,6) – ++(1,0) ; [ultra thick,orange!90!black] (0,3) – ++(1,0) (1,3) – ++(1,0) (2,7) – ++(1,0) ; [ultra thick,yellow!90!black] (0,6) – ++(1,0) (1,6) – ++(1,0) (2,9) – ++(1,0) ; [ultra thick,green!70!black] (0,0) – ++(1,0) (1,2) – ++(1,0) (2,3) – ++(1,0) ; [ultra thick,blue!70!black] (0,7) – ++(1,0) (1,8) – ++(1,0) (2,10) – ++(1,0) ; [ultra thick,violet] (0,1) – ++(1,0) (1,5) – ++(1,0) (2,5) – ++(1,0) ; (0,0) – ++(0,7) (1,0) – ++(0,8) (2,0) – ++(0,10); at (.5,.5) q; at (.5, 1.5) q^2; at (.5, 2.5) q^3; at (.5, 4.5) q^2; at (.5, 6.5) q; at (1.5, 2.5) q; at (1.5, 3.5) q^2; at (1.5, 4.5) q^3; at (1.5, 5.5) q^2; at (1.5, 7) q; at (2.5, 4) q; at (2.5, 5.5) q^2; at (2.5, 6.5) q^3; at (2.5, 8) q^2; at (2.5, 9.5) q; [gray] (-.5,6) – (-.5,3) (-.75,6) – (-.25,6) (-.75,3) – (-.25,3); at (-2,4.5) [text=gray,align=center,scale=.75] Δ X^(4)_1 = 0.3; at (6,0) ; Following (<ref>), we scale each term by the vertical distance it occupies in the picture above, then combine like terms to obtain G(;q) = (.1+.1+.1+.2+.2+.1)q + (.1+.3+.1+.1+.1+.2)q^2 + (.1+.1+.1)q^3 = .8q + .9q^2 + .3q^3. Now recalling our direct EMD computation in (<ref>), we can immediately verify the fact (<ref>): G'(;1) = (.8 + 1.8q + .9q^2)|_q=1 = 3.5 = (). More importantly, we wish to verify Theorem <ref>. We start by computing G”(;1) = (1.8 + 1.8q) |_q=1 = 3.6. The pairwise EMDs are easily computed via (<ref>): [ (x^1,x^2) = .3, (x^1, x^3) = .9, (x^1, x^4) = .7, (x^1, x^5) = 1.3,; (x^1, x^6) = .3, (x^2, x^3) = .8, (x^2, x^4) = .8, (x^2, x^5) = 1.2,; (x^2, x^6) = .6, (x^3, x^4) = 1.6, (x^3, x^5) = .4, (x^3, x^6) = 1.0,; (x^4, x^5) = 2.0, (x^4, x^6) = .6, (x^5, x^6) = 1.4. ] The sum of these pairwise EMDs is 13.9. Thus, once again recalling our direct EMD computation in (<ref>), along with (<ref>), we verify Theorem <ref> as follows: 1/6-1[ 3.6_G”(;1) + 13.9_sum of pairwise EMDs] = 17.5/5 = 3.5 = (). alpha
http://arxiv.org/abs/2406.08555v1
20240612180021
Buoyancy torques prevent low-mass planets from stalling in low-turbulence radiative disks
[ "Alexandros Ziampras", "Richard P. Nelson", "Sijme-Jan Paardekooper" ]
astro-ph.EP
[ "astro-ph.EP" ]
firstpage–lastpage Parallel trusted node approach for satellite quantum key distribution James A. Grieve0000-0002-2800-8317 June 17, 2024 ===================================================================== § ABSTRACT Low-mass planets migrating inwards in laminar protoplanetary disks (PPDs) experience a dynamical corotation torque, which is expected to slow down migration to a stall. However, baroclinic effects can reduce or even reverse this effect, leading to rapid inward migration. In the radiatively inefficient inner disk, one such mechanism is the buoyancy response of the disk to an embedded planet. Recent work has suggested that radiative cooling can quench this response, but for parameters that are not necessarily representative of the inner regions of PPDs. We perform global three dimensional inviscid radiation hydrodynamics simulations of planet–disk interaction to investigate the effect of radiative cooling on the buoyancy-driven torque in a more realistic disk model. We find that the buoyancy response exerts a negative dynamical corotation torque — albeit partially damped due to radiative cooling — resulting in sustained, rapid inward migration. Models that adopt a local cooling prescription significantly overestimate the impact of the buoyancy response, highlighting the importance of a realistic treatment of radiation transport that includes radiative diffusion. Our results suggest that low-mass planets should migrate inwards faster than has been previously expected in radiative disks, with implications for the formation and orbital distribution of super-Earths and sub-Neptunes at intermediate distances from their host stars, unless additional physical processes that can slow down migration are considered. planet–disc interactions — protoplanetary discs — hydrodynamics — radiation: dynamics — methods: numerical § INTRODUCTION Planets are born in protoplanetary disks (PPDs), and their observed location after the disk has dispersed is highly sensitive to their interaction with it. In particular, for low-mass planets, addressing their origin is a long-standing problem given how rapidly they migrate through the disk <cit.>. A solution to this problem is crucial in order to explain the large population of super-Earths and sub-Neptunes in the observed exoplanet demographics. For a recent review, see <cit.>. Regardless of planet mass, different mechanisms have been proposed to halt or slow the inward migration of young planets or even reverse it. Massive enough planets that can open a deep gap around their orbit <cit.> can migrate very slowly in disks that sustain very low levels of turbulent viscosity <cit.> although interaction with the Rossby-wave unstable gap edge can lead to vortex-assisted inward or outward migration <cit.>. For intermediate-mass planets, migration has previously been expected to eventually stall as the planet acts akin to a snow-plough and shovels a wall of gas ahead of its corotating region, in a phenomenon dubbed the “inertial limit” <cit.>. However, more recent work has shown that vortex activity can help sustain inward migration <cit.>. For low-mass planets (type-I regime), the disk remains relatively unperturbed by the planet's presence beyond the formation of spiral arms <cit.> and the planet's corotating region <cit.>. Here, corotation torques have been shown to efficiently stop or even reverse migration when the turbulent viscosity is sufficiently high <cit.>, and thermal torques due to the planet's accretion luminosity can do the same for roughly Earth-mass planets <cit.>. For super-Earths and sub-Neptunes in laminar disks however, none of these effects operate efficiently enough to significantly slow down type-I migration. In laminar disks, where accretion is driven through the disk surface in the form of a magnetothermal wind <cit.>, the corotation torque described above vanishes and the planet is forced to drag its corotating region along as it migrates inwards <cit.>. As long as the vortensity of the material enclosed in that region is conserved (i.e., in the absence of baroclinic effects), the associated drag takes the form of a dynamical corotation torque (DCT) and can slow the planet down to an effective halt <cit.> even though migration doesn't formally stop. This last scenario could help reconcile theoretical modeling with exoplanet demographics by dramatically slowing down type-I migration, thereby preventing low-mass planets from migrating to the inner rim of the disk. However, as stated above, it relies on the conservation of vortensity in the corotating region. While this condition is satisfied for barotropic flows (e.g., isothermal disks), radiative effects have been shown to generate vortensity near the planet <cit.>, resulting in a negative torque that can not only reduce the drag due to the DCT but even accelerate inward migration beyond the expected type-I speed. At the same time, the buoyancy response of the disk to a low-mass embedded planet has been shown to produce a similar vorticity-generating effect in radiatively inefficient, adiabatic disks <cit.>, in addition to providing an apparently smaller and separate buoyancy torque that arises through direct interaction between the planet and the buoyant modes in the disk <cit.>. Recent work has shown that, in the context of a specific disk model, radiative cooling can act to quench this buoyancy response, preventing vortensity generation and erasing its contribution to the total torque <cit.>. While hydrodynamical modeling with a treatment of radiation transport is far more realistic than a simplified isothermal or adiabatic equation of state, it is significantly more expensive and the results are highly sensitive to the underlying disk model, which determines the cooling timescale and therefore the efficiency of radiative processes. With that in mind, while <cit.> have demonstrated that radiative cooling can damp the disk buoyancy response, and given that its operation can decide the fate of embedded, low-mass planets <cit.>, it is crucial to investigate in which disk conditions their result holds true. As shown by <cit.>, this requires simulations that treat radiative cooling in a realistic way. Therefore, in this study, we perform hydrodynamical simulations that feature the disk buoyancy response to an embedded planet, designed to represent the inner few au of the disk and with a realistic treatment of radiation transport. Our goal is to examine whether the buoyancy-associated torque is ever relevant in real disks, and, if so, to what extent it is capable of modifying the planet's migration rate. We describe our physical framework in Sect. <ref>. We motivate our numerical parameters in Sect. <ref>, present our results from numerical simulations in Sect. <ref>, and discuss their implications in Sect. <ref>. We finally summarize our findings in Sect. <ref>. § PHYSICAL FRAMEWORK In this section we lay out the physical framework of our models. We describe in detail our approach to radiation transport, and motivate our numerical experiments by investigating the conditions under which buoyancy torques could be relevant. §.§ Equations of hydrodynamics We consider a disk of ideal gas with adiabatic index γ=7/5 and mean molecular weight μ=2.353, orbiting a star with mass = and luminosity =. The volume density ρ, velocity field , and pressure P of the gas evolve according to the (inviscid) Euler equations ρt + ·∇ρ=-ρ∇·, t+ (·∇)=-1/ρ∇ P -∇Φ, et + ·∇ e=-γ e∇· + Q, where e=P/(γ-1) is the internal energy density given by the ideal gas law, Φ = Φ_⋆ = /r is the gravitational potential of the star at distance r, is the gravitational constant, and Q encapsulates any additional sources of heating or cooling. The isothermal sound speed is then =√(P/ρ), and the temperature is T=μ^2/, with denoting the gas constant. In a cylindrical coordinate system (R,φ,z) with r=√(R^2+z^2), by assuming that the disk is non-accreting and axisymmetric and further requiring that the midplane density and vertically constant temperature follow radial power-law profiles such that (R) = ρ_0 (R/R_0)^p, T(R) = T_0 (R/R_0)^q, the disk structure in equilibrium can be described by <cit.> ρ^eq(R,z) = (R) exp[-1/h^2(1-R/r)], u_ϕ(R,z) = R[1 + (p+q)h^2 + q(1-R/r)]^1/2. Here, =√(/R^3) is the Keplerian angular velocity, h=H/R is the aspect ratio of the disk, and H=/ is the pressure scale height. Finally, we can define the surface density through Σ=∫_-∞^∞ρ dz. §.§ Radiative cooling We model the radiative cooling of the disk using the flux-limited diffusion (FLD) approximation <cit.>. In this approach, the radiation energy density evolves as t + ∇·F=ρ c( T^4 - ), and is coupled to the gas energy density with a source term = -ρ c( T^4 - ) in Eq. (<ref>). Here, is the Planck mean opacity, c is the speed of light, and is the radiation constant. The radiation flux F is given by F = -λ c/ρ∇, where is the Rosseland mean opacity and λ is the flux limiter following <cit.>: λ = 2/3+√(9+10 x^2), x ≤ 2, 10/10x + 9 + √(180x+81), x > 2. , x := |∇|/ρ. We do not explicitly include the heating due to stellar irradiation in our models. Although this implies that our disk model is only accurate up to the disk optical surface <cit.>, this is not a problem as we are mainly interested in the region z≲2H. We can sidestep this issue by prescribing = T_0^4 at the upper boundary of our disk which prevents the gas from rapidly cooling off (see also Sect. <ref>). In the FLD approach, the cooling timescale = can be approximated following <cit.> = /η(H^2 + ^2/3), η=16 T^3/3ρ^2, = 1/ρ. Here, is the Stefan–Boltzmann constant, and =/(μ(γ-1)) is the specific heat at constant volume. The above expression for β is valid under the assumption that the solid particles (“dust”) in the disk, which facilitate cooling, are well-coupled to the gas. In particular, we focus on the small grains with size =0.1 μ m, as they provide the most efficient cooling channel. We can then express the collision timescale between the gas and small grains following <cit.> = e/ϵρ^2C̅_H/, C̅_H = /√(2π) Here, =4π^2 and =4π/3^3 are the surface area and mass of a single dust particle with bulk density =2.08 g/cm^3, and ϵ is the dust-to-gas mass ratio. The combined cooling timescale is then given by = + β_coll. We note that the cooling timescale is a sensitive function of the disk's temperature and density structure, and depends strongly on the choice of opacity model. In our models, we use the density- and temperature-dependent opacity model of <cit.> with =. §.§ Buoyancy-driven torque The gravitational interaction between disk and planet excites a buoyancy response in the disk, which can generate a torque on the planet. <cit.> showed with local shearing box simulations that the instantaneous torque due to the direct interaction between the planet and the buoyantly excited gas can become comparable to the Lindblad torque <cit.> in the adiabatic limit (→∞). <cit.> then carried out global adiabatic models with a migrating planet and showed that, in addition to the effect shown by <cit.>, buoyancy oscillations result in a long-term build-up of vortensity (ϖ) in the planet's corotating region, which can erase or even reverse the stalling effect of the dynamical corotation torque <cit.> Γ_h = 2π(1-ϖ()/ϖ_h)Σ_p^2 Ω_p(t - u_R). Here, a subscript “p” denotes quantities at the planet's location, u_R is the radial velocity of the background disk, and is the horseshoe half-width <cit.> defined as = 1.1/γ^1/4(0.4/ϵ/H_p)^1/4√(/), ϵ = 0.6H_p. The smoothing length ϵ is chosen to match 3D models <cit.>. In Eq. (<ref>), the vortensity is given in the vertically integrated approximation by ϖ=(∇×)/Σ·ẑ, while <cit.> derived the equivalent quantity in 3D to be ϖ = [∫_-∞^∞ρ/(∇×)·ẑdz]^-1. Then, ϖ() is the vortensity at for an unperturbed disk, and ϖ_h is the characteristic vortensity enclosed in the planet's corotating region. As the planet migrates inwards it carries with it the corotating material that was present at its initial location, while preserving the value of ϖ_h in the absence of turbulent viscosity or vortensity-generating baroclinic effects. This in turns leads to evolution of the quantity ϖ()/ϖ_h, and a change in the corotation torque. In the absence of vortensity-generating (baroclinic) effects the unperturbed vortensity in the background disk is given by ϖ_0(R) = 1/2/Σ^eq, which would result in a continuously increasing positive torque as the planet migrates inwards for Σ(R) shallower than R^-3/2. However, the excitation of buoyancy oscillations in the planet's horseshoe region induces a baroclinic forcing <cit.>, generates vortensity, and results eventually in a negative dynamical corotation torque <cit.>. <cit.> have called into question the relevance of the buoyancy-driven torque in radiative disks, showing that radiative diffusion can quench the buoyancy response and erase the associated torque when the cooling timescale due to radiative diffusion (= H^2/η, see Eq. (<ref>)) is comparable to or shorter than the buoyancy timescale . The latter is related to the Brunt–Väisälä frequency N = √(-1/γΦ_⋆zz[ln(P/ρ^γ)]) = √(γ-1/γ)z/H(R/r)^3, through which we can write = /N = √(γ/γ-1)H/z(1 + z/R)^3/2. In their simulations and for their disk model, <cit.> found that ≲ for z≳ 2H, and concluded that the buoyancy-driven torque is quenched in radiative disks. However, as stated above, is a sensitive function of the underlying disk model. It is therefore necessary to investigate the conditions under which the buoyancy-driven torque is relevant in a more slowly cooling, passively irradiated disk model <cit.>. § COOLING TIMESCALE MAPS In this section we present maps of the cooling timescale and the ratio / for several different disk models. We use these maps to motivate our choice of numerical parameters for our hydrodynamical simulations. §.§ Connecting to previous work We first compute the cooling timescale for the disk model of <cit.> and <cit.>. Their model assumes a disk with Σ(R) = 3400 (R/au)^-1/2 g/cm^2, a constant aspect ratio h=0.05 (i.e., T∝1/R) and a constant dust-to-gas mass ratio ϵ=0.01 for submicron grains. Combining Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) we compute the cooling timescale =+ and the ratio / for this disk model, and present the results in the left panels of Fig. <ref>. In agreement with the findings of <cit.>, we find that /∼0.1 at z∼2H in the inner few au, which is why buoyancy torques were quenched in their disk model. In general, with the exception of a small region around the water iceline at 5 au, we find that /≳0.01 between z∼1–2H throughout the whole disk, and therefore expect that buoyancy torques are indeed not relevant for this disk model. §.§ A more realistic disk model The disk model used by <cit.>, while both simple and effective in demonstrating the effect of buoyancy torques in a scale-free environment, is not quite representative of a real PPD. Passively heated disks are flared <cit.> and significantly thinner (h≈0.02–0.025 at 1 au) than the constant aspect ratio disk model used above. Achieving such a comparatively high value for h would require considerable internal heating due to turbulence, with a corresponding α∼10^-3–10^-2 <cit.>. This is both difficult to reconcile with the low turbulence expected in the dead zone <cit.> and inconsistent with the assumption that the models by both <cit.> and <cit.> reflect laminar disks, where buoyancy torques can operate in the first place. With that in mind, we repeat the above exercise assuming a passively heated disk model. We assume that the aspect ratio h follows a power-law profile h(R) = 0.02 (R/au)^2/7, or equivalently that q=-3/7 <cit.>. We also assume that the surface density profile is given by Σ(R) = 1700 (R/au)^-1 g/cm^2, matching the Minimum Mass Solar Nebula model <cit.> at R=1 au. We then recompute the cooling timescale and the ratio /, and present the results in the middle panels of Fig. <ref>. Given that ∝ T^-3∝ h^-6, such a significant drop in the aspect ratio results in dramatically longer cooling timescales in the inner disk, even though the surface density is lower. In fact, we find that /≲10^-3 up to z∼2–3H for R≲2 au, which could be sufficient for buoyancy torques to operate in the inner disk. §.§ A computationally friendly hybrid model The model computed in the previous section, while more realistic and certainly more optimistic for the operation of buoyancy torques, is also significantly more expensive to use in the global models we would like to execute. It would also result in much more rapid migration as the Lindblad torque scales with h^-2 <cit.>. Given that the buoyancy response of the disk is a subtle effect highly sensitive to the numerical resolution <cit.>, achieving a high enough resolution (∼16 cells per scale height) to capture the effect of buoyancy torques in the inner disk (h∼0.02) while also integrating the radiation hydrodynamics equations in Eqs. (<ref>) & (<ref>) for several hundred planetary orbits is computationally prohibitive. We note here that our goal is not to measure the precise migration rate of a planet in a realistic disk model, but rather to investigate whether the buoyancy-driven torque is even active in the inner few au of a disk, where there still exists the possibility that radiative diffusion can damp the buoyancy response of the disk. With the optimistic model in Sect. <ref> in mind, we therefore choose to use a hybrid disk model that is both computationally friendly and still representative of the radiative environment in a real protoplanetary disk. To that end, we assume a flared disk with q=-3/7 and Σ∝1/R as before, but adjust Σ_0 and h_0 such that the values of Σ and h at the planet's initial radial location of 2 au are identical to those in the model of <cit.> and <cit.>. In this way, our model represents similar conditions to the realistic model in Sect. <ref>, is computationally feasible, and also maintains compatibility with previous work to an extent. We therefore choose Σ_0=4800 g/cm^2 (R/au)^-1 and h=0.041 (R/au)^2/7. Of course, adjusting both Σ and h in this way will inevitably also change the cooling timescale compared to our previous model. To account for this difference and ensure that the disk buoyancy response is similar between the model in Sect. <ref> and this hybrid model, we artificially increase the dust opacity by a factor of ≈7. The result is a near-perfect match between the two models in terms of the ratio / both radially and vertically, as shown in Fig. <ref>. We also show the cooling timescale maps for this model in the right panels of Fig. <ref>. We will be using this hybrid disk model in our hydrodynamical simulations in Sect. <ref>. § HYDRODYNAMICAL SIMULATIONS In this section we present the results from our hydrodynamical simulations. We first describe our numerical setup, and then present the results from our models with a migrating planet. §.§ Numerical setup We use the numerical hydrodynamics Godunov code ♇ <cit.> with a radiation transport module following the implementation of the FLD closure by <cit.> and described in Appendix <ref>. We also utilize the FARGO algorithm <cit.>, implemented in ♇ by <cit.>. This method alleviates the strict timestep limitation imposed by the rapidly rotating inner radial boundary by subtracting the Keplerian rotation profile of the background disk before advecting the gas. In doing so, it provides a substantial speedup while also reducing numerical diffusion. We use the option to offset the susceptibility of ♇ to the high-Mach problem <cit.>. We also use the HLLC Riemann solver <cit.>, a second-order () time marching scheme, a third-order weighted essentially non-oscillatory reconstruction <cit.> and the flux limiter by <cit.>. This combination of numerical options has been shown to be both accurate and robust in the context of resolving the disk buoyancy response to a low-mass planet <cit.>. <cit.> showed that, even at high resolution and with a sophisticated numerical setup, ♇ struggles to capture the disk buoyancy response and the associated torque compared to finite-difference, upwind codes such as <cit.>. For this purpose, while we carry out our radiative simulations with ♇, we also perform a set of simulations in the adiabatic limit and with a simplified, local cooling prescription with both ♇ and . In this way, we can gauge the extent to which ♇ might underestimate the disk's buoyancy response and correct for it, allowing us to estimate the planet's migration track in a scenario where the buoyancy torque is captured more appropriately. Our setup is based on the hybrid MPI/OpenMP parallelism implemented by <cit.> and solves the specific entropy s instead of the energy equation in Eq. (<ref>): st + ·∇ s = Q^', s≡log[T/T_ref(ρ/ρ_ref)^1-γ], where T_ref and ρ_ref are arbitrary constants. The thermal relaxation source term Q^', when applicable, essentially represents the same physics as in Eq. (<ref>) but is rewritten in the specific entropy formulation. In both codes our grid extends radially between 0.8–4 au with a logarithmic spacing, vertically from the midplane up to z=5H at R_0=2au, and spans the full azimuthal range 0–2π. We use a grid of N_r× N_θ× N_ϕ = 528×80×2048 cells, which achieves a resolution of 16 cells per scale height at R_0 in all directions. Similar to <cit.> and <cit.>, we employ wave-damping zones near the radial boundaries of the domain following <cit.> and smooth the planet's potential with the cubic spline curve described in <cit.>. The radiation energy is initialized as E_rad,0 = T_0^4. At the radial and vertical edges of the domain, all parameters are set to their initial values. For , this implies that the disk is shielded from direct stellar irradiation, which is satisfied in our models (see green lines in Fig. <ref>). The planet is held fixed at R_0 for 200 orbits and is then allowed to migrate for an additional 250, for a total of 450 orbits at R_0 by the end of the simulations. The planet grows over 200 orbits to its final mass of =2×10^-5 ≈6.7 M_⊕ using the formula by <cit.>. Since we do not consider the disk self-gravity, we subtract the azimuthal average of the gas surface density before computing the acceleration on the planet <cit.>. Finally, we include the indirect term due to the star–planet system orbiting about their center of mass, and further include the indirect term due to star–disk interaction during the migration phase <cit.>. In our adiabatic runs, the source terms Q and Q^' in Eqs. (<ref>) & (<ref>) are set to zero. In our runs with a simplified local β cooling prescription we implement a thermal relaxation term similar to <cit.> and <cit.> for ♇ and , respectively: = -ρT-T_0/, ^' = -s(ρ,T)-s(ρ,T_0)/, where is given by Eqs. (<ref>)–(<ref>). Our fully radiative run in ♇ solves Eqs. (<ref>) & (<ref>) instead. By comparing a fully radiative to a locally cooled model, we can assess the effect of radiation transport on the buoyancy torque as not only a cooling effect, but also a diffusion mechanism. Finally, in order to isolate the effect of the buoyancy torque, we supplement our set of simulations with three vertically integrated (R,ϕ) models using ♇. Similar to the 3D runs, we employ the following three cooling prescriptions: * adiabatic: no cooling (Q=0), * local cooling: β relaxation term following the prescription of surface and in-plane cooling in <cit.>, * radiative: stellar irradiation, surface cooling, and in-plane radiative diffusion following the implementation of FLD in <cit.>. In these 2D models we use a Plummer potential for the planet's gravity, with a smoothing length ϵ=0.6 <cit.>. §.§ Static planet phase During the first 200 orbits, the planet amasses a vortensity excess around the edge of its horseshoe region in all 3D models, with an additional excess in the center of the horseshoe region for the adiabatic and β-cooled models. We show the azimuthally averaged perturbed vortensity in Fig. <ref>. This already suggests that models with local cooling (i.e., that do not account for the effect of radiative diffusion) do not correctly capture the dissipation of buoyancy oscillations in the planet's horseshoe region, and therefore overestimate the associated vortensity generation. We then plot perturbed vortensity heatmaps for the adiabatic and radiative 3D models in Fig. <ref>. Models with β cooling are not shown as they are identical to their adiabatic counterparts. Similar to Fig. <ref>, we find that the vortensity excess is more centered to R= in the adiabatic models, suggesting that higher-order buoyancy modes, which are excited radially closer to the planet, are damped due to radiative diffusion—fully consistent with the findings of <cit.>. The steeper vortensity gradients induced by the stronger vortensity growth in the adiabatic models also result in the excitation of vortices, which orbit around the horseshoe region and are not present in the radiative models. Overall, until the planet is allowed to migrate, our results are consistent with the findings of <cit.> and <cit.>, at least to an extent: the buoyancy response is partially damped in radiative models, although a vortensity excess in the horseshoe region is still visible. This indicates that the longer cooling timescales in the inner disk are indeed sufficient for buoyancy torques to operate, albeit not at the same level as in the adiabatic models. §.§ Migrating planet phase We now allow the planet to migrate through the disk for an additional 250 orbits. The migration tracks for all models and the associated migration timescales τ_mig=/Ṙ_p are shown in Fig. <ref>. Several takeaways can be drawn from this figure: * In 2D, the planet migrates inward slower than the type-I rate, indicating a positive dynamical corotation torque (DCT) for all radiative treatments. This is expected given the very long cooling timescale and the lack of a buoyancy response in these 2D models, and leads to nearly perfectly overlapping tracks in all 2D models. * The buoyancy response in our adiabatic 3D models induces a negative DCT, resulting in rapid inward migration. This is consistent with the findings of <cit.>. * The planet initially migrates at the type-I rate in our radiative 3D models, but accelerates after ≈80 orbits. This suggests that the buoyancy-driven torque is operating, and reaches full strength as the planet continues to migrate. We investigate this behavior further below, but it is clear that the buoyancy torque is indeed active in our radiative models. * Models with local cooling show a migration rate practically identical to the adiabatic models for both codes, indicating that the buoyancy torque is higher than in the radiative case and highlighting the need for realistic radiative modeling in order to correctly capture the buoyancy response of the disk. To address the “knee” in the migration track of the 3D radiative model, we plot vortensity heatmaps near the planet's corotating region for this model and for several snapshots in Fig. <ref>. While the corotating region of the planet initially spans the full azimuthal range, it shrinks to a tadpole-shaped libration island <cit.> ahead of the planet as the latter migrates inwards <cit.>. At the same time, vortensity is continuously generated due to the dissipation of buoyancy modes close to the planet. Tracing the section of the corotating region where the gas maintains its original vortensity ϖ_ref=ϖ(R_0) shows that this section shrinks abruptly to approximately a third of its initial azimuthal extent after ≈80 orbits, at which point the planet accelerates inwards. Observing the migration timescale in the bottom panel of Fig. <ref>, we find that this delayed transition to faster migration is present to an extent in all 3D models and is consistent with the libration timescale τ_lib = 8π/3Ω_p≈ 72 orbits, which is the typical time it takes a gas streamline at the edge of the corotating region to complete a closed orbit and therefore allow the torque on the migrating planet to stabilize. We note that the planet's migration rate is roughly constant for all 3D models after ≈80 orbits, in line with this argument. Consistent with the findings of <cit.>, we find that ♇ underestimates the buoyancy torque compared to . In our models this translates to a factor of τ^Pluto_mig/τ^Fargo_mig≈3.5 between the two codes for both adiabatic and β-cooled models. It is tempting to also scale the migration timescales of our radiative models by this factor, but we caution that the thermal diffusion associated with FLD results in some physical damping of the buoyancy response <cit.>. As a result, the final migration timescale of the planet in a radiative disk model could in principle be up to 3.5× shorter that what we find in our ♇ models, but the difference between the two codes is likely much smaller in the radiative case. Regardless, we stress that the buoyancy response exerts a substantial negative DCT on the planet in our radiative model, in contrast to the findings of <cit.> who concluded that the buoyancy torque is quenched in radiative disks. This does not invalidate their results for their disk model, but rather shows that whether the disk buoyancy response can exert a significant torque on a migrating planet is highly sensitive to the underlying disk conditions. In the next section we compare the excited buoyancy modes between the two models in an attempt to identify key differences in the buoyancy response of the disk. §.§ Buoyancy mode excitation In the previous section we showed that the disk buoyancy response can exert a significant torque on a migrating planet in favorable conditions, even when radiative cooling is considered. At the same time, <cit.> showed that the buoyancy torque is already quenched when /≲0.1 at z∼2H. Here, we attempt to quantify the difference in the buoyancy response between the two disk models. To that end, we carry out an additional set of short-term, global simulations with a fixed planet using ♇, and compare the amplitude and phase of the buoyancy modes generated by the planet in the disk model presented in <cit.> and our hybrid model (see Sect. <ref>). For these models the planet grows over one orbit and the simulation runtime is 10 orbits, similar to <cit.>. Figure <ref> shows an azimuthal slice of the gas vertical velocity u_z normalized to the local sound speed at z={H, 2H} and R=-, tracing the excited buoyancy oscillations at the inner edge of the horseshoe region. We find that: * Both models agree very well in the adiabatic case (blue curves), with our model showing very slightly stronger buoyancy modes on average. This difference is due to the disk flaring in our model, which results in a slightly smaller aspect ratio at that location. We find the opposite behavior at R=+. * Cooling damps buoyancy modes, more so for fully radiative (green) than for β models (orange). This further highlights the need for realistic radiative modeling, and is consistent with the faster inward migration found for β models in Fig. <ref>. * Buoyancy modes in our radiative models (green) are more spaced out in azimuth compared to the adiabatic and β-cooled cases. This is consistent with the findings of <cit.>, and is most likely due to the diffusive nature of FLD. The latter in particular might be a secondary reason for the weaker buoyancy torque, as a combination of weaker and fewer buoyancy modes will quickly diminish the rate at which vortensity can be generated in the corotating region. We expect that a “cutoff” in the buoyancy response of the disk will occur for /≳0.01 at z∼2H, but finding the exact value is beyond the scope of this work. § DISCUSSION In this section we discuss the implications and limitations of our findings. We then outline the current challenges in planet migration modeling, and possible pathways to reconciling with the observed exoplanet population. §.§ Vortensity growth due to a radial entropy gradient In this work we focused on the vortensity generation due to the disk buoyancy response. We note that, in principle, a radial entropy gradient can also lead to vortensity spikes around the edges of the horseshoe region <cit.>, although this has only been investigated in 2D. In our models, the entropy function is P/ρ^γ∝ R^0.48 and P_2D/Σ^γ∝ R^-0.03 in 3D and 2D, respectively. Thus, we can expect that a fraction of the vortensity growth shown for 3D models in Figs. <ref> & <ref> could be due to the contribution of this mechanism, rather than the disk buoyancy response <cit.>. Nevertheless, as found by <cit.>, the contribution by buoyancy modes—when they are active—dominates the vortensity growth in the horseshoe region, evident by the vortensity excess being centered on the planet in the adiabatic models while the planet is held fixed (see Fig. <ref>). Once the planet is allowed to migrate, the contribution by the buoyancy response dominates in the radiative model as well (see Fig. <ref>). Overall, we expect that the vortensity growth due to the disk buoyancy response is the dominant mechanism in accelerating migration in our models, and that the contribution by the radial entropy gradient is secondary. §.§ Observational constraints on the cooling timescale In Sects. <ref> & <ref> we showed that, for the disk buoyancy response to drive meaningful vortensity generation, a very long cooling timescale (technically, a large ratio /) is required. This condition is expected to be met in the inner, optically thick region of protoplanetary disks, but it is worth pointing out that will depend sensitively on the disk density and temperature as well as the properties of dust particles, which facilitate cooling. Assuming a passive, irradiated disk model allows us to constrain its temperature structure <cit.>, but the gas surface density and the fraction of small, efficiently-cooling dust grains are both largely unknown. With recent observational evidence[Based on unpublished data from the AGE-PRO ALMA large program, private communication.] pointing towards relatively low disk masses of ⟨ M_disk⟩≲ 0.01, and assuming a median disk cutoff radius of ⟨ R_out⟩∼30 au, a median stellar mass of ⟨⟩≈0.5, and Σ∝ R^-1, we find that the surface density at 1 au for a typical star should be of the order of ≲400 g/cm^2. At the same time, dust growth models predict that a significant fraction of the submicron–micron-sized grains should be depleted within ∼100 local orbits as small grains coagulate to larger grains that no longer contribute to cooling, significantly lowering the opacity <cit.>. As a result, we can expect that we could easily be overestimating the cooling timescale by one, two, or more orders of magnitude, and that buoyancy-driven torques should not be active for the typical planet-forming disk, or be limited to a very narrow radial range near the magnetically active inner rim of the disk <cit.>. §.§ Additional physical mechanisms While the previous paragraph suggests that buoyancy torques might not be ubiquitous, this does not imply that the stalling effect of the dynamical corotation torque (DCT) can efficiently slow down inward migration. <cit.> showed that for intermediate cooling timescales (∼0.3–100) the DCT is significantly weaker due to cooling-induced vortensity generation in the corotating region of a low-mass planet. As a result, a low-mass planet can be efficiently transported to the optically thick, slowly-cooling inner disk where ∼1000 and buoyancy torques are active. The combined effect of these two mechanisms highlights the importance of radiation transport in hydrodynamical modeling and suggests that additional physical processes are necessary to prevent rapid inward migration and thus explain the population of super-Earths and sub-Neptunes at separations of ≲2 au from their host stars. Directly relevant to our discussion of dust coagulation in Sect. <ref>, <cit.> showed that a low-mass planet can efficiently trap a large amount of dust in its corotating region. The torque exerted by this concentration of solids was shown to be positive and comparable to or even larger in magnitude than the Lindblad torque for Stokes numbers of St∼0.1, able to stop or even reverse migration. Given that St∼Σ^-1, achieving St∼0.1 at 10 au would require the presence of ≳cm-sized grains, even with the lower surface densities discussed in Sect. <ref>. This requirement is in tension with both observational evidence that point to a typical maximum grain size a_max≲3 mm <cit.>, as well as dust evolution models <cit.>. Nevertheless, dust dynamics could help slow down planet migration to an extent even for St∼0.01. Moreover, thermal torques due to the planet's accretion luminosity could further slow down or reverse the direction of migration <cit.>. Combining dust dynamics with an accretion luminosity due to a pebble flux, <cit.> showed that the picture complicates significantly as the edge of the horseshoe region becomes unstable to the Rossby Wave Instability <cit.>, leading to vortices which can induce a stochastic element to the planet's migration <cit.>. We note, however, that none of these models included both a dust component and a self-consistent treatment of radiative cooling. Combining these mechanisms in one model while maintaining control over all processes involved is a challenge in itself, and will be the focus of follow-up work. §.§ Transition to type-II regime and the “inertial limit” In the above, we used the term “low-mass” in a rather liberal way, referring in fact to planets well below the gap-opening or thermal mass ≈ h^3 and below the disk feedback mass <cit.>. Of course, in a cold, passively heated protoplanetary disk, where we typically expect h∼0.02 at 1 au, even an Earth-mass-sized planet can imprint on its disk and open a partial gap, slowing down migration. This concept of an “inertial limit” <cit.> could in theory prevent the rapid inward migration discussed above, although <cit.> showed that intermittent vortex formation and dissipation can sustain an inward track. Based on the above paragraphs, investigating the effect of dust dynamics and vortex activity in disk models with realistic hydrodynamics is necessary in approaching a definitive answer to how low-mass planets actually interact with their surrounding disk. § SUMMARY We have investigated the presence of a negative dynamical corotation torque driven by the buoyancy response of a protoplanetary disk in the presence of a migrating planet <cit.>. Our approach involved constructing a quasi-realistic yet computationally feasible disk model and then carrying out global, inviscid, radiation hydrodynamics simulations in the flux-limited diffusion (FLD) approximation with an embedded migrating planet. We first computed the cooling timescale for a disk model used by <cit.> and <cit.>, and confirmed that the ratio / is ≲0.01 at z∼2H for R∼2 au, consistent with the quenching of buoyancy torques in their work. We then constructed a more realistic, passive, colder and flared disk model, and found that the conditions for the buoyancy torque to operate are more favorable at the same radius. However, due to the prohibitive computational cost of this model, we reach a compromise by constructing a hybrid setup that combines the reference parameters of the model used by <cit.> with the radial structure and longer cooling timescales of our realistic setup. We used this model in our hydrodynamical simulations. Our numerical calculations showed that the planet migrates inwards significantly faster than in equivalent 2D models, indicating that the buoyancy torque is indeed active in the inner disk even when radiative diffusion is considered. In our radiative models, the planet's migration track exhibits a “knee” during its orbital evolution. Further analysis showed that this happens over a libration timescale, as the corotating region shrinks to a tadpole-shaped libration island and the torque on the planet stabilizes to a more negative value sustained by the vortensity generation due to the dissipation of buoyancy modes inside the planet's corotating region. At the same time, models with a local, β cooling prescription matched the migration rate of our adiabatic models for both ♇ and . Comparing individual buoyancy modes between our models revealed that FLD not only damps the disk buoyancy response, but also stretches the modes in azimuth, resulting in fewer and weaker oscillations and therefore much less efficient vortensity growth in the planet's corotating region <cit.>. These findings highlight that the diffusive component of realistic radiation transport is necessary to correctly capture the buoyancy response of the disk, and further that the buoyancy response of the disk is likely to be rapidly quenched for larger /, as weaker and fewer buoyancy modes will be excited. Comparing ♇ with , we found that the former underestimates the buoyancy torque by a factor of ≈3.5, consistent with the findings of <cit.>. While in the adiabatic and locally cooled models this discrepancy is of numerical origin, the thermal diffusion due to FLD should result in a physical damping of the buoyancy response in our radiative models. It is therefore plausible that the true migration timescale of the planet in a radiative model with could be up to 3.5× shorter than what we find in our ♇ models, although we expect the two codes to largely agree in the radiative case due to the damping associated with radiative diffusion. Overall, our results show that the buoyancy torque can indeed reverse the dynamical corotation torque acting on a migrating planet in a realistic protoplanetary disk, and highlight the sensitivity of the buoyancy torque to the disk model and treatment of radiation transport. This suggests that planet migration in the low-turbulence dead zone might not ever slow down due to dynamical corotation torques <cit.>, and that low-mass planets <cit.> will likely migrate rapidly to the inner edge of the dead zone unless additional physical processes are considered. With that in mind, investigating the interplay of dust dynamics, radiation transport, and nonlinear processes such as gap opening and vortex formation will be critical to understanding the origin of super-Earths and sub-Neptunes in the disk dead zone. § ACKNOWLEDGEMENTS AZ would like to thank Will Béthune for their help in implementing and testing the FLD module, and Mario Flock for their suggestions and helpful discussions. This research utilized Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT (http://doi.org/10.5281/zenodo.438045). This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. AZ and RPN are supported by STFC grants ST/T000341/1 and ST/X000931/1. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 101054502). All plots in this paper were made with the Python library <cit.>. § DATA AVAILABILITY Data from our numerical models are available upon reasonable request to the corresponding author. mnras § IMPLEMENTATION OF FLD SOLVER Our FLD module closely follows the numerical implementation of <cit.>. We nevertheless provide a brief description of our implementation, and carry out a supplementary validation test to ensure that it is correct. §.§ Numerics Similarly to the implementation of <cit.> (see Sect. 3.2 therein), we write Eq. (<ref>) in implicit form for , taking into account the finite-volume nature of ♇. Following <cit.>, the nonlinear term (T^n+1)^4 in is approximated as (T^n+1)^4 ≈ 4(T^n)^3T^n+1-3(T^n)^4. We then solve the resulting linear system of equations using the BiCGStab method, implemented in the sparse matrix solver module of the PetSc library <cit.>. After obtaining the updated , we update the gas temperature as T^n+1 = c(3 T^4 + ^n+1)Δ t + T/ + 4 c T^3 Δ t. §.§ Test problem: radiatively damped linear wave To test our implementation, we design a 1D test problem similar to <cit.> where we excite small perturbations that correspond to eigenvectors of the coupled system of equations (<ref>), (<ref>) & (<ref>) and measure their oscillation frequency and damping rate. This both ensures that our implementation of the FLD solver for in Eq. (<ref>) is correct, and that the thermal energy and radiation energy field are coupled correctly in Eq. (<ref>). §.§.§ Dispersion relation and eigenvectors We assume that λ, , and are constant, and that all quantities q ∈{ρ, u, P, } can be expressed in the form q=q_0 + q_1, with q_0 being a constant background state and q_1 = A q^' e^ikx -ω t a wave-like perturbation with A≪ 1. By plugging them into the set of Eqs. (<ref>), (<ref>) & (<ref>) and discarding terms of 𝒪(A^2) or higher we obtain ρ_1t + ρ_0 u_1x = 0, u_1t = -1/ρ_0P_1x, P_1t = -γ P_0 u_1x + (γ-1) ^' _1t - λ c/ρ_0_1x = -^', where ^' is the linearized radiation source term after expanding T^4∝ P^4/ρ^4 to first order and using that _0 = T_0^4: ^' = -ρ_0c(4P_1/P_0 - 4ρ_1/ρ_0 - _1/_0). By combining Eqs. (<ref>)–(<ref>) and seeking a solution for ω we obtain the dispersion relation ω^4 [χ] +ω^3 [-χ^2-(4+χ)] +ω^2 [χ^2+4^2] +ω [-γχ^2^2-(4+γχ)^2]+4^2^2 = 0, where we have defined for convenience = k _0, =ρ_0 c, =ρ_0 c, =√(λ) k c, and χ=e_0/_0. This equation can be solved numerically to obtain the oscillation frequency and damping rate of radiatively damped sound waves, provided that Im(ω)≠ 0. The components of the related eigenvectors are then u^'/_0 = -ω/ρ^'/ρ_0, P^'/P_0 = -ω^2/^2ρ^'/ρ_0, ^'/_0 = -(γχω/ - 4ω^2/^2 - 4 + χω^3/^2)ρ^'/ρ_0. §.§.§ Numerical solution In our numerical setup we consider a periodic Cartesian domain x∈[0, L] with L=1 au and 1000 uniformly spaced grid cells, and define k=2π/L. We set μ=2.353, γ=7/5, λ=1/3, and T_0=30 K. The opacity ==κ is set to a constant value within each setup and varies between 0.003–300 cm^2/g_gas throughout the set of models we run. The background state is given by ρ_0 = {10^-13, 10^-12} g/cm^3, u_0 = 0, P_0 = /μρ_0 T_0, _0 = T_0^4, and the perturbed quantities are initialized as q_1 = A[Re(q^')cos(kx) - Im(q^')sin(kx)], where A=10^-4 is the amplitude of the perturbation. We then integrate for a time corresponding to 150 years. Finally, we extract the oscillation frequency ω_osc and damping rate ω_damp from the perturbed quantities by fitting the gas pressure at x=0 as a function of time with a function of the form P(t) = P_0 + c_0 P_0 sin(ω_osc t + c_1) e^ω_damp t, where c_0 and c_1 are additional, free fitting parameters to account for the amplitude and phase of the excited eigenmode. An example of the fitted oscillation is shown in Fig. <ref> for all quantities. We then present our results in Fig. <ref>, showing that we recover the expected damping and oscillation frequencies to very good degree for all values of κ and both reference values of ρ_0. We therefore conclude that our FLD module is suitable for use in our models.
http://arxiv.org/abs/2406.08826v1
20240613053724
Topological Corner States in Bilayer and Trilayer Systems with Vertically Stacked Topologically Distinct Layers
[ "Natsuko Ishida", "Motohiko Ezawa", "Guangtai Lu", "Wenbo Lin", "Yasutomo Ota", "Yasuhiko Arakawa", "Satoshi Iwamoto" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
n-ishida@iis.u-tokyo.ac.jp Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan Department of Applied Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan Institute of Innovative Research, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8550, Japan Research Center for Department of Applied Physics and Physico-Informatics, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa 223-8522, Japan Institute for Nano Quantum Information Electronics, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan § ABSTRACT We investigate bilayer and trilayer systems composed of topologically distinct, vertically stacked layers based on the Benalcazar-Bernevig-Hughes model. We have identified a topological phase transition that significantly alters the number of the topological corner states in these systems. Additionally, we find that traditional nested Wilson loop analysis inaccurately classifies certain phases, leading us to evaluate multipole chiral numbers (MCNs) as a more appropriate topological invariant. This approach accurately identifies topological phases and the number of MCNs coincides with the number of corner modes, enhancing our understanding of topological insulators and opening avenues for further applications. Topological Corner States in Bilayer and Trilayer Systems with Vertically Stacked Topologically Distinct Layers Satoshi Iwamoto June 17, 2024 ================================================================================================================== § INTRODUCTION Topological insulators are fascinating emerging materials found in condensed matter physics<cit.>. These materials are distinguished by their unique combination of an insulating bulk and metallic surfaces, i.e., edge states. The distinctive behavior of the edge states originates from the topological features in the bulk band structure giving a non-zero topological invariant, demonstrating the essential concept of bulk-boundary correspondence<cit.>. The basic concept of topological materials has been expanded into classical wave phenomena, including photonics<cit.>, acoustic waves<cit.>, mechanical waves<cit.>, thermal waves<cit.>, as well as electric circuits<cit.>. This expansion has led to attractive applications such as topological lasers<cit.>. Expanding beyond the conventional topological insulators, the concept of a higher-order topological insulators (HOTIs <cit.>) has been proposed and is gaining significant attention. This class of materials is characterized by topological states that are at least two dimensions lower than the system itself. For instance, in a two-dimensional (2D) HOTI, one can observe zero-dimensional corner states. Building on this framework, the majority of research has been directed towards the study of 2D monolayers<cit.> and their vertically stacked three-dimensional structures<cit.>. These investigations have a broad scope, aiming to reveal the distinct topological properties intrinsic to these structures<cit.>, as well as to examine their potential applications in diverse fields, including nanocavity lasers <cit.>, nonlinear optics<cit.>, cavity quantum electrodynamics (QED) experiments<cit.> and interferometer<cit.>. There are some studies on vertically stacking of identical models in multilayer topological systems <cit.>. However, multilayer systems composed of topologically distinct layers have not been investigated. In this paper, we investigate multilayer systems composed of topologically distinct layers. Our work is based on the Benalcazar-Bernevig-Hughes (BBH) model <cit.>, a simplest model for studying quadrupole topological insulators. This model consists of π-flux square lattices and it supports topological corner modes. The realization of quadrupole insulators has been demonstrated through various experimental setups, including optical ring resonator arrays <cit.>, waveguide arrays<cit.>, electric circuits<cit.>, acoustic system<cit.>, and microwave resonator<cit.>. Our findings indicate a phase transition induced by the interlayer coupling, which significantly influences the emergence and annihilation of topological corner states. Interestingly, we identified a phase where the conventional method, using the nested Wilson loop, incorrectly classifies the phase as trivial, despite the presence of corner states. Therefore, we calculated a recently developed alternative topological invariant known as multipole chiral numbers (MCNs) <cit.> to evaluate the topological phases. Due to its capability to handle ℤ-class higher-order topological systems, the MCN has begun to be applied across various models<cit.>. We found that the MCN precisely identifies the phases present in our model and accurately matches the number of corner modes at zero-energy. By adopting the MCN as a topological invariant, we not only identify changes in the number of corner modes but also uncover the phase transition induced by interlayer coupling. § TOPOLOGICAL CORNER STATES The BBH lattice consists of four sites in a unit cell where the π-flux is inserted by introducing the negative coupling in a plaquette, as shown with the red bond in Fig. 1 (a). There are two topologically distinct phases, determined by the ratio of the intra-cell coupling to the inter-cell coupling. When the intra-cell coupling is weaker than inter-cell coupling, there exists four degenerate states that form a single set of corner modes, localized at each corner of the structure within a band gap at energy E=0. The bulk Hamiltonian for the system shown in Fig. 1 (a) is given by ℋ_BBH(k_x,k_y) = [ 0 0 -(κ_1+κ_2 e^-i k_x) κ_1+κ_2 e^-i k_y; 0 0 κ_1+κ_2 e^i k_y κ_1+κ_2 e^i k_x; -(κ_1+κ_2 e^i k_x) κ_1+κ_2 e^-i k_y 0 0; κ_1+κ_2 e^i k_y κ_1+κ_2 e^-i k_x 0 0; ]. Throughout this paper, we use κ_1 (κ_2) and κ_2 (κ_1) as the intra-cell and inter-cell couplings for the topological (trivial) system, respectively, with its corresponding Hamiltonian ℋ_BBH^Topo( k_x,k_y) (ℋ_BBH^Tri( k_x,k_y)). Here, we suppose κ_1 < κ_2, and both are positive. The BBH system has two occupied degenerate bands that possess two cancelling dipole moments and a non-vanishing quadrupole moment. By adapting the Wilson loop calculation, the degeneracy is lifted, resulting in two distinct single-band subspaces where Wannier bands become non-degenerate. Consequently, each Wannier band carries its own topological invariant, which can be obtained through the nested Wilson loop calculation <cit.>. Following the calculation, a quadrupole moment q_xy=1/2 signifies that the system is topological, while q_xy=0 signifies that the system is trivial. In our study, we consider a vertically stacked configuration composed of topologically distinct layers, taking into account the vertical nearest-neighbor site coupling denoted by t_v. Fig. <ref> (b) illustrates the considered BBH bilayer system and its unit cell, where the top and bottom layers are topological and trivial, respectively. For trilayer systems, we consider two cases as shown in Figs. <ref> (d) and (e), where the arrangements of the topological and trivial layers are flipped. §.§ Topological corner states in BBH bilayer lattice The Hamiltonian describing the bilayer system, shown in Fig. <ref> (b), is given by ℋ_bilayer= ( [ ℋ_BBH^Topo ℋ_v; ℋ_v ℋ_BBH^Tri ]), where ℋ_v=t_v( [ 0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0 ]) = t_vσ_x ⊗𝕀_2. Here, we use the site index shown in Fig. 1(c) in which chiral symmetry is preserved. This ordering is essential for the theoretical analysis presented later in this paper. σ_x and 𝕀_2 are Pauli matrix and two by two identical matrix, respectively. The band structure has four pairs of doubly degenerate bands, and exhibits both opening and closing of a band gap around the Γ point as t_v changes. (Fig. <ref>). The critical value t_c^bilayer corresponds to the point at which the band gap closes, defined as t_c^bilayer=√(2)(κ_1+κ_2). See appendix A for details. We now consider a finite BBH model with square geometry, with each layer forming a BBH lattice consisting of 2N× 2N sites. Fig. <ref>(a) shows the energy spectrum as a function of varying t_v. In the weak interlayer coupling regime (t_v<t_c^bilayer), the system possesses a single set of corner modes, consisting of a total of four degenerate states, each predominantly localized at the corners of the layers. These are depicted by the red line in the figure. Fig. <ref>(b) shows the field amplitude of the corner state, summed over for all four states. Notably, these corner states disappear when the interlayer coupling exceeds the critical value t_c^bilayer. Sudden disappearance of the corner states, accompanied by a gap closing, suggests a topological phase transition associated with increasing interlayer coupling strengths. In order to determine the topological phases, we calculate the quadrupole moment q_xy <cit.> via the nested Wilson loop. As a result, we find that q_xy=1/2 for t_v<t_c^bilayer, identifying the system as topological, while q_xy=0 for t_v>t_c^bilayer, indicating a trivial phase. These results indicate that the interlayer coupling induces a topological phase transition in the BBH bilayer system. The presence and absence of topological corner states change at the critical value t_c^bilayer bilayer as the band gap closes and re-opens. This critical value marks the transition point where the number of topological corner states changes. The detailed calculation of the quadrupole moment is shown in the Appendix B. §.§ Topological corner states in BBH trilayer lattice We proceed to study a BBH trilayer system depicted in Fig. 1(d). This arrangement consists of a trivial layer positioned between two topological layers, where the layers are coupled by an interlayer coupling t_v. The system Hamiltonian is given by ℋ_trilayerI= ( [ ℋ_BBH^Topo ℋ_v 0; ℋ_v ℋ_BBH^Tri ℋ_v; 0 ℋ_v ℋ_BBH^Topo ]), with the interlayer coupling matrix ℋ_v. Similar to the bilayer system, the labeling of sites in the Hamiltonian varies between odd and even layers to maintain the chiral symmetry, as shown in Fig. 1(c). This labeling is crucial for applying the methods we will discuss in the next section. The trilayer system possesses six pairs of doubly degenerate bands. Analogous to the bilayer system, band gap closes at a critical value t_c^trilayer=κ_1+κ_2 (See appendix A). The band gap re-opens again when t_v exceeds the critical value. For a finite BBH trilayer system with square geometry, with each layer consisting of 2N × 2N sites, we calculate the eigenenergies as a function of the interlayer coupling strength(Fig <ref>(a)). When t_v<t_c^trilayer, which is below the critical value, there are two sets of corner states, with a total number of eight (illustrated by the green line in the figure). In contrast, when t_v exceeds t_c^trilayer, only a single set of corner states survives, with a total number of four (indicated by the red line). Each of these sets consists of four degenerate modes located at E=0. These two sets of corner states are distinguished by their unique field distributions(Fig. <ref> (b)). Specifically, the corner states that disappear at the critical interlayer coupling show non-zero field amplitudes across all layers, which corresponds to symmetric coupling between the corners (corner 2). Conversely, the corner states that remain after exceeding the critical interlayer coupling exhibit a complete absence of field amplitude in the middle layer, which corresponds to anti-symmetric coupling between the corners (corner 1). The presence of topological corner states, 'Corner 1', at large interlayer couplings, is understood by an effective model in the vicinity of the zero-energy, derived based on the perturbation theory. See Appendix C. As discussed in the subsection II.A of the bilayer system, we calculate the quadrupole moment q_xy to identify the topological phases of the trilayer system. Consequently, our analysis shows that q_xy=0 for t_v<t_c^trilayer, classifying the system as trivial, despite the presence of two sets of corner states at E=0. In the case where t_v>t_c^trilayer, we have q_xy=1/2, indicating the system as topological. This leads to the issue where the quadrupole moment fails to fully capture the topological phases in certain instances. Therefore, the introduction of another invariant is essential to accurately describe the topological phases of both bilayer and trilayer BBH systems. We will calculate and discuss this alternative invariant in the next section. Next, we study a trilayer system which involves a topological layer sandwiched between two trivial layers, with each layer coupled by an interlayer coupling t_v. This arrangement is depicted in Fig. <ref> (e). In this case, the Hamiltonian is given by ℋ_trilayerII= ( [ ℋ_BBH^Tri ℋ_v 0; ℋ_v ℋ_BBH^Topo ℋ_v; 0 ℋ_v ℋ_BBH^Tri ]). Its critical value is determined by t_c^trilayer=κ_1+κ_2, which is identical to that in the previous trilayer case. Fig. <ref>(a) shows the energy spectrum as a function of t_v. There is a single set of corner modes that includes four degenerate corner states at E=0 (depicted by the red line) when t_v<t_c^trilayer; however, all corner modes disappear above the critical value t_c^trilayer. The field amplitudes of the corner modes, shown in Fig. <ref> (b), are distributed across each layer. The quadrupole moments are obtained as q_xy=1/2 for t_v<t_c^trilayer, identifying the phase as topological, while q_xy=0 for t_v>t_c^trilayer, classifying it as trivial. We note that in BBH trilayer systems with large interlayer coupling t_v, the size of the band-gap varies between two distinct systems, despite them having identical band structures, as shown in Figs 4(a) and 5(a). This discrepancy arises due to the presence of gapped edge states within the band gap as illustrated in Fig. 4(a). § 2D WINDING NUMBER IN REAL SPACE As we observed a discrepancy between the quadrupole moment and the presence of corner states when t_v<t_c^trilayer, in the trilayer case ('topological-trivial-topological' structure), we need to calculate another topological invariant to discuss its topological phase in detail. We choose the MCN as the additional invariant, which is capable of accurately evaluating the topological phases of ℤ-class chiral-symmetric higher-order topological systems, as recently introduced by <cit.>. The MCNs are analogous to the 2D winding number in real space, based on the work described in Ref.<cit.>. This development is especially crucial in situations where existing theoretical frameworks, such as the nested Wilson loop, may erroneously categorize certain phases as trivial. The MCN provides an innovative approach for analyzing these systems. Notably, the value of MCN directly correlates with the number of degenerate zero-energy corner states present in a finite system, thereby offering a more precise invariant for examining these unique topological states. In this study, we use the MCNs to identify the topological phases of both BBH bilayer and trilayer systems in their chiral-symmetric forms. This is particularly relevant in cases where the conventional nested Wilson approach inaccurately classified the phase as trivial, despite the presence of zero-energy corner states. To compute the topological invariant for BBH bilayer system, we modify the Hamiltonian in its chiral-symmetric form for its finite-size system as follows: ℋ_bilayer= [ 0 h_bilayer; h^†_bilayer 0 ], Note that the labeling of sites in the Hamiltonian differs between layer 1 and layer 2 in order to preserve the chiral symmetry, as shown in Fig. 1(c). Also, sites a and b belong to sublattice A, while sites c and d belong to sublattice B. When a 2D square-lattice has N unit cells in the x and y-directions (i.e. each layer consists of N × N unit cells), h_bilayer is a 4N^2 × 4N^2 matrix. Here, the eigenstates of ℋ_bilayer in equation (6) are denoted as |ψ⟩^T=(a_1^1,⋯,a_N^2^1,b_1^1,⋯,b_N^2^1,a_1^2,⋯, a_N^2^2,b_1^2,⋯,b_N^2^2,c_1^1,⋯,c_N^2^1,d_1^1,⋯,d_N^2^1,c_1^2,⋯,c_N^2^2,d_1^2,⋯,d_N^2^2). Superscripts represent the layer index, where layer 1 and 2 correspond to upper and lower layer, respectively. The quadrupole moment operator for the BBH bilayer system is represented by a following sublattice quadrupole moment operator<cit.> 𝒬_xy^S=∑_𝐑,α∈𝒮 e^ -i 2π xy/N^2|𝐑,α⟩⟨𝐑,α| for 𝒮=A,B. Each layer consists of N × N unit cells, each labeled as 𝐑=(x,y). We diagonalize the Hamiltonian ℋ_bilayer by using the singular value decomposition (SVD) of h_bilayer=U_AΣ U_B^†, where U_A (U_B) denotes a 4N^2 × 4N^2 unitary matrix while Σ is a diagonal matrix of singular values. Using 𝒬̃_xy^S=U_𝒮^†𝒬_xy^S U_𝒮, the MCN is defined as <cit.> N_xy=1/2π iTr[ log ( 𝒬̃_xy^A 𝒬̃_xy^B †) ]. Our calculations reveal that the MCN invariant is N_xy=1 for t_v<t_c^bilayer while N_xy=0 for t_v>t_c^bilayer, where the MCN number matches the number of corner states at E=0. This result is consistent with the Wilson loop calculation. The comprehensive calculations for MCNs are presented in the Appendix D. It is important to note that to calculate the topological invariant, sufficiently large systems are required. This calculation can readily be adapted for trilayer cases. First, we rewrite the trilayer Hamiltonian in equation (4) and (5) into a form that preserves chiral symmetry, given by ℋ_trilayer= [ 0 h_trilayer; h^†_trilayer 0 ]. The Hamiltonian h_trilayer is represented by a matrix of size 6N^2 × 6N^2. By using the quadrupole moment operators in equation (7), we obtain the MCN, given by equation (8). For a trilayer system depicted in Fig. 1(d), the invariant is obtained as N_xy=2 for t_v<t_c^trilayer while N_xy=1 for t_v>t_c^trilayer. The MCN number perfectly matches the number of corner states at E=0. Similarly, for the alternative trilayer system illustrated in Fig. 1(e), we obtain N_xy=1 for t_v<t_c^trilayer and N_xy=0 for t_v>t_c^trilayer, which again coincides with the number of corner states at E=0. Therefore, we conclude that the MCN accurately captures the topological phases that even the quadrupole moment may fail to identify. § DISCUSSION AND CONCLUSIONS We investigated bilayer and trilayer BBH structures, constructed by stacking topologically distinct layers. By adjusting the interlayer coupling strengths, we identified changes in the number of corner modes, indicating a topological phase transition. This topological phase transition, accompanied by band gap closing, is induced by the interlayer coupling. To identify the topological phases, we first calculated the quadrupole moments using the nested Wilson loop approach, a well-known method for analyzing quadrupole insulators. The quadrupole moments explained the topological phases of BBH bilayer and 'trivial-topological-trivial' trilayer structures. However, we observed a mismatch between the quadrupole moments and the presence of corner modes in 'topological-trivial-topological' trilayer structure when the interlayer coupling was below the critical value. In contrast, MCNs, which are a topological invariant for ℤ-class chiral-symmetric higher-order topological systems recently reported, can systematically explain the results. We also confirmed that the number of corner states coincides with the MCNs across all interlayer coupling regimes for all three structures we considered. As we identified a topological phase that may have been misclassified as trivial when analyzed through the conventional nested Wilson loop approach, this finding highlights the need for the topological invariant, the MCN, as a more appropriate method of evaluating ℤ-class higher-order topological systems. The categorizing the phase transition induced by the interlayer coupling is below: Bilayer (topological-trivial) from the topological phase with N_xy=1 to the trivial phase with N_xy=0. Trilayer (topological-trivial-topological) from the topological phase with N_xy=2 to the topological phase with N_xy=1. Trilayer (trivial-topological-trivial) from the topological phase with N_xy=1 to the trivial phase with N_xy=0. Figure 6 displays the topological phase diagram, which illustrates both the MCNs and the quadrupole moments in BBH trilayer systems. This study demonstrates that adjusting the interlayer coupling manipulates the topological phase transition, which is accompanied by the appearance and disappearance of corner modes, potentially expanding the scope of applications. The configuration of vertically stacking layers, as discussed in our study, can be implemented in existing platforms, such as electrical circuits. Additionally, the field distribution of corner modes suggests that the intensity of corner modes can be accumulated through stacked layers, which can be adapted to develop a novel topological laser. We believe that our findings provide valuable insights into the manipulation of localized modes within multilayer systems, opening avenues for innovative device engineering. § APPENDIX A: BULK ENERGY FOR THE BBH BILAYER AND TRILAYER SYSTEM (i) The bulk spectrum of the bilayer system ℋ_bilayer is obtained as E= ±√(2F+t_v^2±√(2G)), with F = κ _1^2+κ _2^2+κ _1κ _2( cos k_x+cos k_y) , G = ( κ _1+κ _2) ^2t_v^2( cos k_x+cos k_y+2) , where each energy is two-fold degenerated. At the Γ point, the energy is obtained as E=±√(2)|κ_1+κ_2|± t_v. The gap closes at t_v=±√(2)(κ_1+κ_2). (ii) The bulk spectrum of the trilayer systems ℋ_trilayerI and ℋ_trilayerII is obtained as E=±√(2)√(F), ±√(2)√(F+t_v^2±√(G)), where each energy is two-fold degenerated. It is remarkable that the bulk spectrum is identical between ℋ_trilayerI and ℋ_trilayerII. At the Γ point, the energy is obtained as E=±√(2)|κ_1+κ_2|, ±√(2)(κ_1+κ_2± t_v). The gap closes at t_v=± (κ_1+κ_2). § APPENDIX B: QUADRUPOLE MOMENT Q_XY IN BBH SYSTEM In the BBH model, the bulk quadrupole moment q_xy, its boundary polarizations p_x,y and corner charges Q have the following relationship: |p^edge± y_x|=|p^edge± x_y|=|Q^corner± x, ± y|=|q_xy| <cit.>. The requirements for quantizing the quadrupole moment q_xy are existence of three symmetries, which are inversion symmetry and x,y-reflection symmetry under a gauge transformation. These constraints lead to quantized values of the Wannier band polarizations p_y^v_x^±,p_x^v_y^±ℐ,ℳ_x,ℳ_y= 0 or 1/2. Furthermore, when the system preserves C_4 symmetry, the classification of the Wannier bands is ℤ_2 <cit.>. Therefore, the quadrupole moment q_xy can be described by q_xy=p_y^v_x^±=p_x^v_y^±=0 or 1/2 and the topological invariant is defined by the polarizations which are obtained as follows: p_y^v_x^± = -i/2 π1/N_x∑_k_xlog[W_y,k_x^±]= 0 or 1/2 where W_y,k_x^± is a nested Wilson loop along k_y <cit.>. The quadrupole moment of 1/2 and 0 imply that the system is topological and trivial, respectively. By adapting the Wilson loop calculation to the occupied bands of the BBH bilayer shown in Fig. 2, we obtain four non-degenerate Wannier bands, as depicted in Fig. <ref>(a1,a2). The gapping of the Wannier bands results from the non-commutativity of the two reflection symmetries, ℳ_x^bi and ℳ_y^bi, as discussed in <cit.>. These two reflection symmetries in the bilayer system are derived from the direct product of the monolayer symmetries, ℳ_x,y^mono, expressed as ℳ_x,y^bi=𝕀_2 ⊗ℳ_x,y^mono. The Wannier bands exhibit distinct structures below and above the critical interlayer coupling strength, t_c^bilayer. The nested Wilson loop calculation of the Wannier bands yields band polarizations of p_y={ 1/2, 0, 0, 1/2 } for t_v < t_c^bilayer and p_y={ 0, 0, 0, 0 } for t_v > t_c^bilayer. This change in polarization, corresponding with the disappearance of corner states in finite systems, indicates a transition to a strong interlayer coupling regime, reducing the quantized quadrupole moment from one to zero. The Wilson loop calculation applied to the occupied bands of the trilayer structure in Fig. 1(d), yields six non-degenerate Wannier bands (Fig. <ref>(b1,b2)). As t_v increases, the Wannier bands experience changes, leading to different polarizations. For t_v < t_c^trilayer, we obtain polarizations of p_y = { 1/2, 1/2, 0, 0, 1/2, 1/2 }, identifying the phase as trivial even with the existence of corner states. In contrast, for t_v > t_c^trilayer, the polarizations are p_y = { 1/2, 0, 0, 0, 0, 1/2 }, indicating a topological phase. For the alternative trilayer structure depicted in Fig. 1(e), the Wilson loop calculation generates six non-degenerate Wannier bands (Fig. <ref>(c1,c2)). The nested Wilson loop computation yields polarizations of p_y = { 1/2, 0, 0, 0, 0, 1/2 } for t_v < t_c^trilayer and p_y = { 0, 0, 0, 0, 0, 0 } for t_v > t_c^trilayer. These polarization changes correspond with the number of corners obtained in finite-sized system calculations. § APPENDIX C: PERTURBATION THEORY A numerical simulation shows that there are four corner states at large interlayer couplings t_v in the 'topological-trivial-topological' trilayer structure. In order to understand it, we derive the effective Hamiltonian for large interlayer couplings t_v. We first diagonalize the interlayer coupling Hamiltonian H_0 defined by ℋ_0=( [ 0 ℋ_v 0; ℋ_v 0 ℋ_v; 0 ℋ_v 0 ]) , where only the interlayer coupling exist. It is diagonalized as U_0^-1ℋ_0U_0=( [ -√(2)𝕀_4 0 0; 0 0 0; 0 0 √(2)𝕀_4 ]) , where 𝕀_4 is the four by four identical matrix and the transformation matrix U_0 is given by U_0=( [ 𝕀_4 -√(2)𝕀_2 ⊗σ_x 𝕀_4; -𝕀_4 0 𝕀_4; 𝕀_4 √(2)𝕀_2 ⊗σ_x 𝕀_4 ]) . We are interested in the middle four by four matrix, where there are three degenerate zero energy states in (<ref>). Next, we transform the full Hamiltonian and take the middle four by four matrix, which is given by P_0^-1U_0^-1ℋ_trilayerIU_0P_0=( [ 0 0 κ _1+κ _2e^-ik_x κ _1+κ _2e^ik_y; 0 0 κ _1+κ _2e^-ik_y - ( κ _1+κ _2e^ik_x); κ _1+κ _2e^ik_x κ _1+κ_2e^ik_y 0 0; κ _1+κ _2e^-ik_y -( κ _1+κ_2e^-ik_x) 0 0 ]) , where P_0 is a projection operator taking the middle four by four matrix. It is the effective Hamiltonian, which is also the BBH model. Hence, there are four corner modes for large interlayer couplings t_v. § APPENDIX D: MCN ANALYSIS FOR THE MONOLAYER BBH SYSTEM Here, we review the MCN analysis for the monolayer BBH system<cit.>. In order to calculate MCNs, we utilize the Hamiltonian in its chiral-symmetric form ℋ= [ 0 h; h^† 0 ], which satisfies ΓℋΓ=-ℋ with the chiral operator Γ=σ_z ⊗ I_N =∑_𝐑,α∈ A|𝐑,α⟩⟨𝐑,α|-∑_𝐑,β∈ B |𝐑,β⟩⟨𝐑,β|. Here, we consider a unit cell consisting of four sites (labeled a-d in Fig. 1(b)) where sites a and b belong to sublattice A, while sites c and d belong to sublattice B. When a 2D square-lattice has N unit cells in the x and y-directions (i.e. each layer consists of N × N unit cells), h is a 2N^2 × 2N^2 matrix. Note that the eigenstates of Hamiltonian ℋ is |ψ⟩=(a_1,⋯, a_N^2,b_1,⋯,b_N^2,c_1,⋯,c_N^2,d_1,⋯,d_N^2)^T. The quadrupole moment operator is represented by equation (7). By using the SVD of h, the sublattice unitary matrix U_𝒮 is obtained, followed by the MCN as described in equation (8). When the system is trivial (κ_1>κ_2), we find that N_xy=0. In constrast, when the system is topological (κ_1 <κ_2), we obtain N_xy=1. Additionally, the value N_xy coincides with the number of degenerate zero-energy corner states. § APPENDIX D: CHIRAL SYMMETRIC HAMILTONIAN FOR BBH BILAYER SYSTEM As an illustrative example, we provide a detailed calculation for N=2 for bilayer systems. The submatrix of BBH bilayer Hamiltonian in equation (6) is composed of four 2N^2 × 2N^2 submatrices, described as h_bilayer= [ h_Topo ℋ_vp; ℋ_vp h_Tri ], where h_Topo= [ -κ_1 0 0 0 κ_1 0 0 0; -κ_2 -κ_1 0 0 0 κ_1 0 0; 0 0 -κ_1 0 κ_2 0 κ_1 0; 0 0 -κ_2 -κ_1 0 κ_2 0 κ_1; κ_1 0 κ_2 0 κ_1 κ_2 0 0; 0 κ_1 0 κ_2 0 κ_1 0 0; 0 0 κ_1 0 0 0 κ_1 κ_2; 0 0 0 κ_1 0 0 0 κ_1 ], and ℋ_vp=t_v×𝕀_2N^2. Note that h_Topo and h_Tri have an inverse relationship with respect to the ratio between κ_1 and κ_2. For N=2, the sublattice quadrupole moment operator 𝒬_xy^S of the BBH bilayer system is a diagonal matrix where the diagonal elements form a repeating sequence of (-i,-1,-1,1), this sequence repeats 2N times. Thus, 𝒬_xy^S = 𝕀_4⊗diag.(-i,-1,-1,1). The calculation process for bilayer systems can be readily extended to trilayer systems. The submatrix of BBH trilayer Hamiltonian is h_trilayer= [ h_Topo ℋ_vp 0; ℋ_vp h_Tri ℋ_vp; 0 ℋ_vp h_Topo ], The matrix 𝒬_xy^S of the BBH trilayer system is given by 𝒬_xy^S = 𝕀_6⊗diag.(-i,-1,-1,1) for N=2. § ACKNOWLEDGEMENT N.I. was supported by the JSPS KAKENHI (Grant No. JP21J40088). M.E. was supported by CREST, JST (Grants No. JPMJCR20T2) and by the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grant No.23H00171). Y.O. was supported by the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grants No.22H01994 and No. 22H00298). S.I. was supported by CREST, JST (Grant No. JPMJCR19T1) and the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grants No 22H00298 and No. 22H01994). *
http://arxiv.org/abs/2406.08333v1
20240612153935
Review of Autonomous Mobile Robots for the Warehouse Environment
[ "Russell Keith", "Hung Manh La" ]
cs.RO
[ "cs.RO" ]
Article Title]Review of Autonomous Mobile Robots for the Warehouse Environment [1]Russell Keithrkeith@nevada.unr.edu 1]Hung Manh Lahla@unr.edu These authors contributed equally to this work. [1]Advanced Robotics and Automation (ARA) Lab, University of Nevada, Reno, 1664 N Virginia St., Reno, 89557, NV, U.S Autonomous mobile robots (AMRs) have been a rapidly expanding research topic for the past decade. Unlike their counterpart, the automated guided vehicle (AGV), AMRs can make decisions and do not need any previously installed infrastructure to navigate. Recent technological developments in hardware and software have made them more feasible, especially in warehouse environments. Traditionally, most wasted warehouse expenses come from the logistics of moving material from one point to another, and is exhaustive for humans to continuously walk those distances while carrying a load. Here, AMRs can help by working with humans to cut down the time and effort of these repetitive tasks, improving performance and reducing the fatigue of their human collaborators. This literature review covers the recent developments in AMR technology including hardware, robotic control, and system control. This paper also discusses examples of current AMR producers, their robots, and the software that is used to control them. We conclude with future research topics and where we see AMRs developing in the warehouse environment. [ [ June 17, 2024 ================= § INTRODUCTION Autonomous mobile robots (AMRs) have been an active research field since the 1950s when the first Automated Guided Vehicle (AGV) was introduced <cit.>. Recently, with the hardware advances in sensors and computing power, they have become more feasible. They have now been introduced into the manufacturing environment, particularly with logistics and material handling. In a traditional warehouse environment, goods are received on a truck and placed on a shelf. Then a team of pickers receives a list of orders that need to be filled for that day. An individual will then be assigned an order and then proceed to walk about the warehouse picking each item off a shelf and placing it in a container. Once the picker has finished the container will then be packaged, shipped, or knitted depending on the nature of the warehouse. Some companies modify this process and assign floor zones to individuals. Each person is responsible for picking items in their zone and bringing them to a sorting area. Traditional warehouse structures suffer from motion waste, which is one of the 8 wastes in Lean manufacturing <cit.>. For companies to stay competitive, and lower logistic costs, many of them have turned to “smart warehouses”, which deploy a series of tools to lower waste and increase production. Automated robots are used that either pick items or work with pickers to reduce the walking distance needed [see Fig. <ref>]. A good example of this is Amazon with their Kiva robots [see Fig. <ref>]  <cit.>. Smart warehouses are not limited to robots, they also implement better planning strategies through warehouse management software (WMS), resource management, and path planning. Converting to smart warehousing does include some downsides. For example, the initial investment is high $150 - $200 per square foot but, the returns from reducing waste have shown to be substantial in the longer term <cit.>. In the manufacturing environment, robot-human interactions are becoming more important in daily operations. To help with efficiency, humans work alongside robots to pick up the shortcomings of each side. In warehouses, products of different sizes or shapes are handled from shelves and placed into bins. Robots have a particularly difficult time with this task since they are limited to the end effector installed as well as figuring out how to hold the object without it falling over. Human operators can easily do this task, but they are limited to fatigue after traveling back and forth across the warehouse. A solution would be to have the robot meet the picker at the product location and have the picker move the product off the shelf and onto the AMR. This paper specifically looks at this scenario and filters the search results based on their relevance. §.§ Review Methodology This paper looks at the current state of AMRs that are being used in the warehouse environment. It includes not only research papers but also covers current AMR company hardware and the adjacent software. This paper used previous literature reviews, and research articles from IEEE Explore and Google Scholar. AMR-producing companies were found using a basic Google search. The starting keywords that were used were “AMR” and “warehouse”. To narrow down the search results other keywords were used such as “scheduling,” "decentralized,” “AMR localization,” “fleet size,” “path planning,” “battery management,” [“AMR/AGV” + “hardware”], “zoning,” “warehouse layout,” and [“AMR/AGV” + ”smart manufacturing”]. Combinations of the above keywords were also used to narrow down the search results. Papers were filtered from 2013 to 2024 with a few references to older papers. Table <ref> summarizes the references used in this literature review. This paper is organized as follows: Section 2 covers the recent AMR technological and software developments, Section 3 goes over robotic control through localization, path planning, and artificial intelligence (AI), Section 4 covers AMR system control by resource management, scheduling, human-robot picking methods, and warehouse flow and layout design, Section 5 goes over future research, and Section 6 is the conclusion. § HARDWARE AMRs are a rapidly growing field and several companies have started to produce robots and software that manage warehouses. A common issue of AMRs in warehouses is localization. In recent years, the cost of sensors has come down allowing companies more options to solve this issue. In the past, AGVs had to rely on a track or previously installed infrastructure to navigate (magnetic strip, rail, etc.). However, AMRs now have the advantage of not needing any previously installed infrastructure to operate <cit.>. §.§ Sensors Typical AMRs are equipped with a wide variety of 2D and 3D cameras, accelerometers, gyroscopes, and Light Detection and Ranging (LiDAR). The data from these sensors are then brought together in a technique called sensor fusion to generate a map that the AMR can use to locate itself <cit.>. These sensors have become popular due to their speedy rendering and positional accuracy. Some popular 2D and 3D cameras are the Balser Dart series <cit.>, Zivid M60 <cit.>, and Nerian Ruby <cit.>. One of the most common sensors used in industry is LiDAR. The sensor works by sending out a known waveform into a scene, it is then reflected off of an object, and the LiDAR receiver captures a portion of the waveform and estimates the time it took, which distance is then derived from <cit.>. The sensors are very accurate and can have a working range of .05m to 25m <cit.>. Some LiDAR sensors rely on artificial landmarks, like reflective surfaces, already placed in the warehouse. These landmarks could become damaged, which could comprise the AMR's localization properties. Table <ref> describes a few AMRs used in the industry today. It shows the hardware that AMRs use as well as describes a few safety features. The SICK laser safety scanner is an example of an AMR safety feature [see Fig. <ref>]. The sensors have a scanning angle of 270° and a response time of 67ms <cit.>. Scanners like these are often combined with LiDAR to create an all-around safe robot. §.§ Processors In order to handle a dynamic work environment, AMRs need to be able to take in data and process it in real-time. Coupled with the advancement of AI, the computational requirements have significantly increased <cit.>. However with the onset of new AI-specific CPUs, GPUs, and AI acceleration units like that of Jetson Xavier NX 16GB [see Fig. <ref>], Luxonis DepthAI <cit.>, and Google’s Coral Accelerator Module <cit.>, AI computing has become more reasonable especially for edge computing <cit.>. These chips have also become essential to scheduling, especially in decentralized algorithms where each robot must make decisions and communicate messages in a short time frame. §.§ Batteries Batteries that have a high capacity and faster charging have made a significant impact on the robotics industry with most commercially available AMRs using lithium-ion batteries  <cit.>, <cit.>. Charge times like that of the MiR series of robots have a 1:6 charging to run-time ratio and can have a full charge in 50 minutes <cit.>. Charging mechanisms have also changed with the onset of wireless power transfer, allowing them to charge without a conventional connector <cit.>. With the onset of these batteries, resource management for battery consumption has become less of a hindrance to warehouse use of AMRs. However, for 24-hour operations, this still has to be taken into consideration. It is important to note that the environmental impact of lithium-ion batteries is still being debated. Currently, there is no method for recycling lithium-ion batteries besides incineration, which only extracts material that cannot be used to make more batteries <cit.>. §.§ Robot Configuration and Locomotion The locomotion of a robot has a huge impact on its effectiveness. For outdoor environments, legged-style robots perform better since they generally slip less than wheeled robots and are more versatile. Wheeled robots use less energy and are not as complicated to program <cit.>. In the past, wheeled robots suffered from a lack of maneuverability when compared to legged robots but with the development of Swedish wheels, wheeled robots can now move in any direction without changing orientation <cit.>. In the warehouse environment, the floor is level with only a few inclines. For this reason, many companies have opted for wheeled-style robots with varying configurations. Examples include Matthews AMR <cit.>, which features four differential drive castor wheels or the Fetch Roller Top  <cit.> with its 6-wheel configuration. § ROBOTIC CONTROL §.§ Localization Many of these robots are equipped with an array of sensors that help locate the robot such as 3D cameras, LiDAR, and encoders. Methods for localization vary from company to company but a common method is SLAM <cit.>. SLAM navigation uses features within the environment to build a map. The software gathers information from where the robot is and where it has been to build a probability distribution of all of the likely robot locations <cit.>. Another common localization method for warehouses is hybrid navigation. This method uses sensor fusion from devices that can locate the robot in large and medium ranges like global positioning system (GPS) GPS (for outdoor environments) or ultrasonic sensors. Then when the robot needs to be located precisely, a short-range sensor (like a magnetic strip) is used to position the robot <cit.>. An example would be the Max series of robots from ForwardX Fig. <ref>. These AMRs have the capability to use SLAM, visual tag, or visual semantics for positioning <cit.>. §.§ Artificial Intelligence (AI) AMRs are unique in that they operate in a dynamic environment. When there is an obstacle, the robot needs to be able to autonomously avoid or maneuver itself around the obstruction to continue its mission. Classification of obstacles is commonly done by using a visual system and machine learning. Neural networks, fuzzy logic, or recently deep reinforcement learning are then used to plan a path around the obstacle <cit.>. Sensor fusion is also used when certain sensors become unreliable such as GPS when in a covered environment <cit.>. Without these techniques, AMRs would react to different obstacles in the same manor which becomes impractical when obstacles are consistently changing and moving <cit.>. This field of research is very active and continues to improve. §.§ Path Planning Path planning is used to plan a trajectory from the current position of the robot to the target using the extracted information from the environment. This problem is typically approached either with a static or dynamic environment. In the warehouse, there are many moving objects like personnel, forklifts, or other autonomous vehicles <cit.>. For this paper, we will only focus on dynamic environments <cit.> comparing classical, heuristic, and evolutionary techniques. For obstacle avoidance, bug algorithms are a common approach to path planning. In general, once an obstacle is detected, the robot will move around the object until it finds the path back to the target objective. Many variations of the bug algorithm exist like that of TangentBug, which finds the shortest path based on a local tangent graph from range data <cit.>. Vector field histograms (VFH) are also a common obstacle avoidance algorithm. In this, a robot detects an obstacle and builds a cell histogram based on the range data. That cell histogram is then converted to a polar histogram where valleys are detected and a path is chosen while taking into consideration the size of the robot <cit.>. A variation of the VFH is the VFH* algorithm which uses A* search to build a tree of candidate directions <cit.>. Another common approach are artificial potential fields <cit.>. In this, calculated forces are generated for the target and obstacles. When the vehicle moves closer to a obstacle, a repulsive force is acted upon the vehicle avoiding the obstacle while an attractive force from the target keeps the vehicle moving towards the goal. Dang and La <cit.>, extend the artificial potential field by incorporating a rotational force field and a repulsive force field. This new combination helps move a vehicles around complex obstacle shapes were it would otherwise get stuck. Heuristic algorithms for path planning have recently been a popular research subject. Examples include neural networks, fuzzy logic, genetic algorithms, and particle swarm optimization. Genetic algorithms <cit.> iterate through potential solutions and find the best selection. This selection is then used to build more solutions and eventually find the optimal path taking advantage of operators like mutation, crossover, and selection <cit.>. A hybrid adaptive genetic algorithm is proposed in Zhou et al. <cit.>. The paper addresses the issue of preventing a local optimum by initially performing genetic operations with fixed parameters and then applying adaptive genetic operations. The algorithm adjusts the crossover and mutation probability in sinusoidal adaptive transformation. Fuzzy logic uses the concept of True and False (1 or 0) but extends it to include more freedom between states. First introduced in 1965 by Zadeh <cit.>, fuzzy logic algorithms have now advanced to include hybrid path planning like that of Hassanzadeh and Sadigh. <cit.> or using fuzzy logic with ant colony optimization Song et al. <cit.>. In Song et al., the robots were able to find an optimal path by not only considering the shortest length but factors like the least number of intersections, least traffic, and safety. In another paper by Young and La <cit.>, they combine consensus, cooperative learning, and flocking. In this, a multi agent system is used to first group together the many different actors. Then the algorithm is trained via reinforcement learning to avoid obstacles. Lastly, to figure out where a moving obstacle is coming from, a consensus algorithm <cit.> is used. This allows the multi agent system to move as group to avoid oncoming obstacles. Often, path planning for a single robot is not the most optimal, especially in a warehouse setting where there could be hundreds of robots. Factors like traffic congestion can slow the entire system if not considered. A genetic multi-robot path planning (GMPP) algorithm was proposed in Fan et al. <cit.>. To avoid collisions with obstacles or other robots the authors integrated a collision detection and elimination mechanism. The collision mechanism calculates where robots will collide then a central planner decides which robots have priority by a random dynamic priority policy. In a recent paper Pradhan et al. <cit.>, the authors were able to coordinate multi-robot navigation by having a particle swarm optimization algorithm train a feed-forward neural network. They simulated the results in a dynamic environment and compared them to a traditional potential field method <cit.>. A hybrid approach using a convolutional neural network and a graph neural network was proposed in Li et al. <cit.>. Their approach focuses on decentralizing the multi-robot path planning problem. In Chen et al. <cit.>, the authors propose a decentralized multi-agent collision avoidance algorithm based on deep reinforcement learning. The approach takes online predicting data for interaction patterns and offloads it to an offline learning procedure. Through this, a robot can predict the behavior of its neighbors even when those neighbors are non-communicative. § SYSTEM CONTROL §.§ Resource Management §.§.§ Battery Use In a warehouse environment, demand often fluctuates. Sometimes orders will surge and strain the available resources within the warehouse. Often bottlenecks are created when there are not enough staff to handle these fluctuations. This can lead to increased lead time for customers, which is undesirable in a very competitive market where on-time delivery is a selling point <cit.>. For a multi-robot system, this must be taken into consideration. Companies like that of ADDVERB <cit.> and their AMR called Dynamo, take advantage of lows in demand by using that time to send robots to charge. This helps maximize the robot's utilization and energy management. Traditionally, battery management is usually handled by a rule-based policy. Whenever the battery is low the robot will head to the charging station regardless of where the robot is, demand, or current schedule. This can lead to issues where too many robots charge while there is a surge in demand. Zou et al. <cit.> compared battery swapping, inductive, and plug-in charging techniques. They showed that inductive charging has the best throughput time and that battery swapping outperforms plug-in charging. Fig. <ref> shows the comparison of the three strategies at different robot price points. Mu et al. <cit.> propose a method that models battery management into a Markov Decision Process and then uses a deep reinforcement learning algorithm to solve it. Their results show that their method outperforms standard rule-based strategies by 5% in fulfillment rate and has significantly fewer backlogged orders after a long period of time. In another paper De Ryck et al. <cit.>, the authors use a decentralized approach based on an extension of the traveling salesman problem. Their focus was based on when to charge and how long. Their algorithm also allows the robot to be charged along its current operation trajectory. Battery life is also critical for the maintenance of an AMR fleet. Traditional, models mainly focus on when is the best time to charge but not battery quality of life. Chavan et al., propose an algorithm that schedules charging for minimized task downtime and maximized battery quality of life. To do this, they developed a TCM (Task and Charging schedule manager) algorithm that coordinates useful tasks in between charging schedules. §.§.§ Fleet Size Determining the correct number of AMRs to use on the floor is important to maintain a consistent lead time. Areas on the warehouse floor where there is a lot of demand can turn into a bottleneck if there are many robots or personnel in the area. Likewise, if there are too few robots, shipments will be paused while orders are being fulfilled. An example of how companies handle this problem is the MiRFleet Software from MiR <cit.>. This software can handle up to 100 robots with different modules and can coordinate traffic control in critical zones. It allows the user with a basic level of programming to help manage the fleet's priorities. Once the setup and programming are completed, it carries out operations based on the position and availability of the robot. In literature, the fleet size problem can be solved by queuing theory, linear and integer programming, or discrete and continuous event simulation <cit.>. In a paper by Chaikovskaia et al. <cit.>, the authors, using integer linear programming, considered a fleet under a homogeneous system with robots that can cooperate with each other to carry a load. The paper uses different scenarios where cooperation is inefficient vs. when it is optimal when carrying a load. Fig. <ref> shows an example of how p and m-bots work together to carry a load, the p-bot being composed of two m-bots. A modified memetic particle swarm optimization (MMPSO) algorithm was proposed in Chawla et al. <cit.>. They first estimated the fleet size by a analytical model and then optimized it using a MMPSO algorithm. They tested with three different floor layouts and compared MMPSO to the analytical model. Vivaldini et al. <cit.>, created a task assignment module that estimates the number of AGVs based on the ratio of defined execution time and total time spent routing. Most articles deal with homogeneous systems with a fleet of the same robot. Rjeb et al. <cit.> proposed a method to tackle fleet sizing of a heterogeneous system by an integer linear program. In a 2024 paper by Maurizio et al. <cit.>, the authors propose a genetic algorithm and a mixed-integer linear program to tackle a multi-objective problem consisting of figuring out the best schedule for minimizing makespan and the number of AGVs that are under battery constraints. They compare their results to actual data that was used in a manufacturing facility. The genetic algorithm was shown to outperform the solution that the manufacturing facility was currently implementing. To determine the number of robots and speed needed in a robotic mobile fulfillment system, Yuan and Gong <cit.>, used two different models based on open queuing networks for muli-service and single-service robots. The number of robots and speed are calculated based on the optimal throughput time of the warehouse. In Tang et al. <cit.>, the authors developed a two-layer genetic algorithm that optimizes the configurations of AGVs, picking station equipment, and task allocation. The number of AGVs and picking stations is determined by the optimal equipment configuration. In their model, the AGVs move entire racks to picking stations in a unmanned warehouse. In a case study that looked to make a production line of an automotive company more efficient <cit.>, the number of AGVs was determined by calculating the number of round-trips a vehicle can make per hour and the total number of round-trips that must be made in the system <cit.>. The equations were used to get a rough estimation of the number of AGVs and did little to account for delays due to congestion or inefficient flow path. In the case study, by implementing a small AGV network they were able to reduce the walking distance of a worker by 1.75 to 2.6 kilometers, saving 50 minutes of time traveling time. §.§ Scheduling In academia, a large amount of research has gone into the decision-making process in scheduling orders. Traditionally, these systems are centralized and use hierarchical control. However, with the advancement of computational power, AI has played more of a role in task scheduling in a manufacturing environment <cit.>. In industry, scheduling is handled by a warehouse management system (WMS). This software allows administrators to control the daily warehouse operations from the time goods enter to when they move out <cit.>. A common example of a WMS software is SAP <cit.>. One of the benefits of using a WMS is that work orders are only given when there is enough inventory to fulfill them. Often this software controls inventory at a high level leaving the planning of how orders are physically picked to lower-level software. ADDVERB Fleet Management System is an example of this lower-level control <cit.>. It handles the task allocation along with the navigation of each robot. Fig. <ref> shows what this software controls and how the user and robot integrate with each other. The traditional approach to scheduling handles task assignment as a path planning problem, assuming the AGV/AMR plans the path from the current location to the target. These algorithms are limited by their flexibility since they do not consider a dynamic work environment. Scheduling problems are considered to be NP-hard problems <cit.>. Due to the complexity, they are normally solved through heuristic algorithms like genetic algorithms, particle swarm optimization, or reinforcement learning. Yokota <cit.> proposed an algorithm that uses a min-max strategy to coordinate between robots and humans. The algorithm centers around the co-involvement of robots and humans with the humans picking an item off the shelf and placing it onto an AGV. Each AGV is equipped with a balanced workload based on minimizing the total travel path. Then a heuristic algorithm, with two different rules, is then used to match a picker with a robot. In Yu et al. <cit.>, a algorithm is proposed using a two-stage heuristic algorithm. The first stage uses clustering to divide packages based on similarity and group them together with the idea that a multi-load AGV can sort the packages of one group in one operation. Then in the second stage, a load-balancing scheduling algorithm assigns and sorts each of the groups to the AGVs. A hierarchical soft actor-critic algorithm is proposed in Tang et al. <cit.>. Hierarchical reinforcement learning algorithms attempt to reduce the computational complexity of a dynamic environment by layering learning strategies. Soft Actor-Critic (SAC) algorithms are more practical in robot control since they are able to solve both discrete and continuous problems. The algorithm centrally assign tasks to the AGVs then the robots collaborate in planning the actual path. In another paper Li and Wu <cit.>, proposed a Genetic Algorithm Considering Genome, which takes into account dynamic task scheduling as well as determining the charging schedule. It is able to iterate through a genome, which is composed of several neighborhood solutions. Their algorithm can quickly find a local optimal solution and, in a later stage, explore the neighborhood of the the local optimal solution using internal cross and mutation. In a 2024 paper, Li and Huang <cit.> propose a scheduling algorithm for heterogeneous AGVs. This allows for the use of several different kinds of AGVs performing different tasks. They were able to implement several different algorithms including Fist Come First Assign, Heterogeneous Task Assignment, and Optimization Assignment and compare them to past works. Few studies have been conducted that offer a way to decentralize scheduling. Hierarchical control often finds the optimal solution but suffers from a lack of robustness or flexibility. If the central control unit goes down the entire system cannot operate. Decentralized control has the advantage of scalability and the ability to continue operation even if one unit fails. Basile et al. <cit.>, used a version of an auction-based approach called the sealed-bid method in which an agent cannot see other agents’ bids. Any robot can play the role of auctioneer and takes bids in from other robots acting as an agent. The auctioneer selects the best bid based on the time it takes to complete a mission. In another paper by Warita and Fujita <cit.>, the authors set up a multi-agent decentralized algorithm using a fully decoupled upper confidence tree (FDUCT). The idea being that each AGV will use FDUCT to plan its own actions while predicting the actions of other agents. They also implement an item-exchange strategy to help balance the load of each AGV and avoid idle time. §.§ Human-Robot Picking Methods §.§.§ Picking Methods Picking is defined as the process of scheduling customers' orders, assigning stock, releasing orders, picking items off of storage locations, and disposal. The majority of warehouses that employ humans deploy a picker-to-parts system, which involves the picker walking or driving to pick items. More automated warehouses will use a parts-to-picker method where shelving robot units or cranes move products to the picker[see Fig: <ref>]. Within picker-to-parts systems, there are several variants like batch picking or zoning. Batch picking has the picker pick multiple orders at once then sorted in a later sequence. Zoning is a system in which the warehouse floor is divided into areas where humans pick items that are only in their designated area <cit.>. §.§.§ Human-Robot Collaboration In AMR-assisted picking, the laborious task of traveling with a picked item is reduced by the use of a robot. In a paper by Srinivas and Yu <cit.>, a collaborative human-robot order-picking system (CHR-OPS) was presented with the goal of minimizing the tardiness in their batch order system. Three sub-problems were made to optimize; the number of items picked in one tour, batch assignment and sequencing, and picker-robot routing. Their results emphasized the impact of AMR cart capacity, AMR speed, and human-robot team composition. In another picker-to-parts system, Zulj Ivan et al. <cit.> used zoning to have humans pick batched items and drop them off in pick location for that zone. A robot would then collect and transport the batched items to a depot where they will be sorted. Their approach focuses on minimizing the tardiness of all customer orders. The algorithm used a two-stage heuristic of adaptive large neighborhood search (ALNS) for batching and NEH heuristic for the sequencing the batches. Zhao et al. <cit.> proposed a human-robot collaboration algorithm in a parts-to-picker system. Their algorithm uses an adaptive large neighborhood search method that is embedded in a tabu search algorithm. They compare their algorithm with others and show it effectiveness at lowering makespan. In evaluating the performance of human-robot collaboration, two different collaboration types need to be considered, human-leading and human-following. Human-leading involves the robot following the human to a pick location and holding the picked items until it is sent back. Human-following has the robot autonomously go to the pick location and have the human assist in picking the item. The robot then tells the human the next pick location. In a study done by Pasparakis et al. <cit.>, the authors found that human-following has a greater pick accuracy when compared to human-leading but has less productivity. §.§ Warehouse Flow and Layout Design Traditionally, warehouse layouts can be classified as a conventional, non-conventional, and general warehouse. Conventional warehouses are those with a rectangular shape and parallel aisles that are perpendicular to straight cross aisles. Layouts with two cross aisles are considered as single-block warehouses. Fig. <ref> shows a two-block layout with three cross aisles. Non-conventional warehouses arrange their aisles in a manner that allows easier access to certain areas of the warehouse [see Fig. <ref>]. For example, fishbone warehouse layouts offer a scheme that is shown to reduce travel distance by 10-15% when compared to conventional warehouses <cit.>. General warehouses do not make any assumptions about the aisle locations. Rather, they are modeled after general distance matrices <cit.>. The layout also defines aisle characteristics. For a multi-robot system, wide aisles are a benefit since they allow room for robots to move around. However, narrow aisles reduce the distance traveled for items that are picked on either side of the aisle. Items that are picked low to the ground and do require any vertical tools to grab are considered low-level <cit.> In a paper by Hamzeei et al. <cit.>, they investigated the bidirectional topology of a block layout design of a warehouse floor and the locations of delivery and pickup stations. They developed two algorithms; one used a cutting-plane algorithm, and the other used simulated annealing to solve the problem heuristically. Li and Li  <cit.> looked at optimizing a multi-row layout of a machining workshop while considering AGV path flow. The multi-row layout consists of robots covering an area that has multiple rows and can be picked on either side [see Fig. <ref>]. The proposed algorithm uses a hybrid method between non-dominated sorting genetic algorithm-II and tabu search. By focusing on the AGV path they were able to lower the material handling cost of the facility <cit.>. In a 2023 paper by Zhang et al. <cit.>, the authors consider the case of a fully automated warehouse with no human personal. Since robots do not need a structured layout to locate parts, the authors make the case that an optimized layout, that does not resemble a human-designed layout, can be used to increase throughput. The layouts that are generated from their algorithm are non-traditional and do not have column-row aisles in a typical warehouse layout. To determine the storage location of items Bao et al. <cit.>, classified items based on their profit or throughput rate. Products that are labeled as “A” would be placed closest to the output gate to reduce travel time. In a case study by Chen and Tiong <cit.>, an algorithm, using an open queuing network, modeled the flow production system of a manufacturing facility that made prefabricated bathroom units. A simulated annealing approach was then used to find the optimal production flow process. Their facility used AGVs to transport the large units between each workstation. Warehouses can be large facilities. A sortable Amazon Fulfillment Center is around 800,000 square feet in size and can employ more than 1,500 employees <cit.>. With facilities that large, an order may take a large amount of time to fill if items are spread out across the facility. For AMRs, assigning singular or multiple robots to just one section of the floor (zoning) could be beneficial to response time and resource management since the robot would only be making short trips. Multiple factors are considered when zoning like what areas need to be zoned, service points, configuration of zones, and how many vehicles are in one zone <cit.>. Zoning can also be decided dynamically and respond to changes in workload. In an older paper, Ho and Liao <cit.> propose a dynamic zone layout scheme that limits the number of AGVs in an area to limit vehicle collisions and to maintain load balancing. Saylam et al. <cit.> considered the case of synchronized zone picking were an order is fulfilled simultaneously in each zone creating a picking wave. The algorithm not only minimizes the lead time of each picking wave but balances the pickers’ workload. § FUTURE WORK Scheduling has normally been focused on a centralized network of robots. However, this creates a dependency on the central unit and demands one unit to do most of the computational work. Decentralizing this task could prove to be more flexible and scalable. The papers presented <cit.> simulate 3 to 10 robots at a time. In real-world applications, warehouses could deploy hundreds <cit.>. Only a few studies were found that deal with decentralized scheduling  <cit.>,  <cit.>. Warehouse zoning is important for dividing the workload on the warehouse floor. For AMR networks this means faster response time and less likelihood of congestion. Most of the studies presented <cit.> focus on AGV vehicles. One of the downsides of using AGVs, are the pre-planned routes they must follow. AMRs do have to deal with this issue since they can find alternate routes around obstacles or other robots without the need of a user designing an optimal path. More research needs to be done into modeling dynamic zones for AMRs to balance workloads and ensure a rapid response. Robotic failure is bound to happen in an AMR or AGV system. This could often lead to the failure propagating to other systems, potentially halting operations. Currently, most scheduling algorithm listed in the paper do not account for when a robot goes down. To be used in a real world application a scheduling algorithm needs to be able to adapt by either replacing the robot that went down or reorganizing the load queue so that other robots can take over. More robust and predictive models are needed to support the maintenance of the AMR system and reduce the number of failures <cit.>. Often papers that deal with scheduling will assume that demand for the product is already known. However, in real-world applications, demand is often influenced by other factors outside of the control of the warehouse. This especially is a problem for warehouses that pick small order parts that give a short lead time  <cit.>. Future research should be dedicated towards a more adaptive system that can change with customer demand on the warehouse floor. Though human-robot collaboration has been studied and evaluated in general use cases, the specific methods for how they interact needs to be extended. More research needs to be done into comparing different warehouse layouts with picking strategies and human-robot routing strategies. Another interesting path to take would be to develop algorithms that can change between the different strategies listed above to accommodate changes in the warehouse. Many papers use performance indicators like lead time, tardiness, distance traveled, and operational efficiency to measure how well their algorithm works when compared to others. However, in human/robot collaborative spaces, human factors are rarely considered. In another literature review that looked at parts-to-picker systems, Jaghbeer et al. <cit.> highlighted that human factors are rarely focused on in picker-less, robot-to-parts, or parts-to-robot OPSs (order picking systems) [see Fig: <ref> ]. Warehouses often will have different picking loads split up among the floor. Typically shelving units will have smaller items while racks will contain pallets or heavier items. This emphasizes the use of heterogeneous AMRs in a multi-robot system. Though a growing research topic, currently only a few research papers emphasize heterogeneous robots in scheduling algorithms <cit.>. Implementing this could have a larger effect on the diminishing importance of large, medium, or small loads allowing a warehouse to stock a more diverse range of products. § CONCLUSION Autonomous mobile robots (AMRs) have made a significant contribution to manufacturing specifically in warehousing improving performance and productivity. Several technological developments in localization, sensors, and battery management have AMRs more feasible to be deployed. AI techniques have also played an important role in the decision-making of AMRs. Genetic algorithms, deep reinforcement learning, and neural networks have reduced the complexity of scheduling paving the way for decentralization. Different warehouse layouts, picking strategies, and human-robot routing strategies have helped in the development of human-robot collaboration. With this research, AMRs have shared the workload of laborious and repetitive tasks allowing humans to be less fatigued and focused. Though most of the research that has been presented in this paper has focused on warehousing, not enough has been done studying applications in other environments such as outdoor warehousing of docking yards or hazardous areas. It is concluded that the research in this field is growing rapidly and is currently changing the manufacturing industry. §.§.§ This is an example for third level head—subsubsection head Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. § EQUATIONS Equations in can either be inline or on-a-line by itself (“display equations”). For inline equations use the commands. E.g.: The equation Hψ = E ψ is written via the command . For display equations (with auto generated equation numbers) one can use the equation or align environments: X̃(k)^2 ≤∑_i=1^pỸ_i(k)^2+∑_j=1^qZ̃_j(k)^2 /p+q. where, D_μ = ∂_μ - ig λ^a/2 A^a_μ F^a_μν = ∂_μ A^a_ν - ∂_ν A^a_μ + g f^abc A^b_μ A^a_ν Notice the use of in the align environment at the end of each line, except the last, so as not to produce equation numbers on lines where no equation numbers are required. The command should only be used at the last line of an align environment where is not used. Y_∞ = ( m/GeV)^-3[ 1 + 3 ln(m/GeV)/15 + ln(c_2/5)/15] The class file also supports the use of , and commands. As such , and produces ℝ, ℛ and ℛ respectively (refer Subsubsection <ref>). § TABLES Tables can be inserted via the normal table and tabular environment. To put footnotes inside tables you should use tag. The footnote appears just below the table itself (refer Tables <ref> and <ref>). For the corresponding footnotemark use The input format for the above table is as follows: In case of double column layout, tables which do not fit in single column width should be set to full text width. For this, you need to use instead of environment. Lengthy tables which do not fit in textwidth should be set as rotated table. For this, you need to use instead of environment. This environment puts tables rotated to single column width. For tables rotated to double column width, use . Tables which are too long to fit, should be written using the “sidewaystable” environment as shown here 3@c@Element 1[1] 3@c@Element[2] 2-45-7 Projectile Energy σ_calc σ_expt Energy σ_calc σ_expt Element 3 990 A 1168 1547±12 780 A 1166 1239±100 Element 4 500 A 961 922±10 900 A 1268 1092±40 Element 5 990 A 1168 1547±12 780 A 1166 1239±100 Element 6 500 A 961 922±10 900 A 1268 1092±40 Note: This is an example of table footnote this is an example of table footnote this is an example of table footnote this is an example of table footnote this is an example of table footnote. [1]This is an example of table footnote. § FIGURES As per the standards you need to use eps images for compilation and images for compilation. This is one of the major difference between and . Each image should be from a single input .eps/vector image file. Avoid using subfigures. The command for inserting images for and can be generalized. The package used to insert images in is the graphicx package. Figures can be inserted via the normal figure environment as shown in the below example: In case of double column layout, the above format puts figure captions/images to single column width. To get spanned images, we need to provide . For sample purpose, we have included the width of images in the optional argument of tag. Please ignore this. § ALGORITHMS, PROGRAM CODES AND LISTINGS Packages , and are used for setting algorithms in using the format: You may refer above listed package documentations for more details before setting environment. For program codes, the “verbatim” package is required and the command to be used is . Similarly, for , use the package. is used to set environments similar to environment. Refer to the package documentation for more details. A fast exponentiation procedure: texcl=true,basicstyle=,commentstyle=,mathescape=true,escapeinside=(**) begin for i:=1 to 10 step 1 do expt(2,i); newline() od (*Comments will be set flush to the right margin*) where proc expt(x,n) ≡ z:=1; do if n=0 then exit fi; do if odd(n) then exit fi; comment: (*This is a comment statement;*) n:=n/2; x:=x*x od; n>0 ; n:=n-1; z:=z*x od; print(z). end frame=single,framexleftmargin=-1pt,framexrightmargin=-17pt,framesep=12pt,linewidth=0.98,language=pascal for i:=maxint to 0 do begin do nothing end; Write('Case insensitive '); Write('Pascal keywords.'); § CROSS REFERENCING Environments such as figure, table, equation and align can have a label declared via the command. For figures and table environments use the command inside or just below the command. You can then use the command to cross-reference them. As an example, consider the label declared for Figure <ref> which is . To cross-reference it, use the command , for which it comes up as “Figure <ref>”. To reference line numbers in an algorithm, consider the label declared for the line number 2 of Algorithm <ref> is . To cross-reference it, use the command for which it comes up as line <ref> of Algorithm <ref>. §.§ Details on reference citations Standard permits only numerical citations. To support both numerical and author-year citations this template uses package. For style guidance please refer to the template user manual. Here is an example for : <cit.>. Another example for : <cit.>. For author-year citation mode, prints Jones et al. (1990) and prints (Jones et al., 1990). All cited bib entries are printed at the end of this article: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. § EXAMPLES FOR THEOREM LIKE ENVIRONMENTS For theorem like environments, we require package. There are three types of predefined theorem styles exists—, and Numbered, theorem head in bold font and theorem text in italic style Numbered, theorem head in roman font and theorem text in italic style Numbered, theorem head in bold font and theorem text in roman style For mathematics journals, theorem styles can be included as shown in the following examples: Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Phasellus adipiscing semper elit. Proin fermentum massa ac quam. Sed diam turpis, molestie vitae, placerat a, molestie nec, leo. Maecenas lacinia. Nam ipsum ligula, eleifend at, accumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna. Nunc eleifend consequat lorem. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Phasellus adipiscing semper elit. Proin fermentum massa ac quam. Sed diam turpis, molestie vitae, placerat a, molestie nec, leo. Maecenas lacinia. Nam ipsum ligula, eleifend at, accumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna. Nunc eleifend consequat lorem. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Additionally a predefined “proof” environment is available: . This prints a “Proof” head in italic font style and the “body text” in roman font style with an open square at the end of each proof environment. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. For a quote environment, use Quoted text example. Aliquam porttitor quam a lacus. Praesent vel arcu ut tortor cursus volutpat. In vitae pede quis diam bibendum placerat. Fusce elementum convallis neque. Sed dolor orci, scelerisque ac, dapibus nec, ultricies ut, mi. Duis nec dui quis leo sagittis commodo. Sample body text. Sample body text. Sample body text. Sample body text. Sample body text (refer Figure <ref>). Sample body text. Sample body text. Sample body text (refer Table <ref>). § METHODS Topical subheadings are allowed. Authors must ensure that their Methods section includes adequate experimental and characterization data necessary for others in the field to reproduce their work. Authors are encouraged to include RIIDs where appropriate. Ethical approval declarations (only required where applicable) Any article reporting experiment/s carried out on (i) live vertebrate (or higher invertebrates), (ii) humans or (iii) human samples must include an unambiguous statement within the methods section that meets the following requirements: * Approval: a statement which confirms that all experimental protocols were approved by a named institutional and/or licensing committee. Please identify the approving body in the methods section * Accordance: a statement explicitly saying that the methods were carried out in accordance with the relevant guidelines and regulations * Informed consent (for experiments involving humans or human tissue samples): include a statement confirming that informed consent was obtained from all participants and/or their legal guardian/s If your manuscript includes potentially identifying patient/participant information, or if it describes human transplantation research, or if it reports results of a clinical trial then additional information will be required. Please visit (<https://www.nature.com/nature-research/editorial-policies>) for Nature Portfolio journals, (<https://www.springer.com/gp/authors-editors/journal-author/journal-author-helpdesk/publishing-ethics/14214>) for Springer Nature journals, or (<https://www.biomedcentral.com/getpublished/editorial-policies#ethics+and+consent>) for BMC. § DISCUSSION Discussions should be brief and focused. In some disciplines use of Discussion or `Conclusion' is interchangeable. It is not mandatory to use both. Some journals prefer a section `Results and Discussion' followed by a section `Conclusion'. Please refer to Journal-level guidance for any specific requirements. § CONCLUSION Conclusions may be used to restate your hypothesis or research question, restate your major findings, explain the relevance and the added value of your work, highlight any limitations of your study, describe future directions for research and recommendations. In some disciplines use of Discussion or 'Conclusion' is interchangeable. It is not mandatory to use both. Please refer to Journal-level guidance for any specific requirements. Supplementary information If your article has accompanying supplementary file/s please state so here. Authors reporting data from electrophoretic gels and blots should supply the full unprocessed scans for key as part of their Supplementary information. This may be requested by the editorial team/s if it is missing. Please refer to Journal-level guidance for any specific requirements. Acknowledgements Acknowledgements are not compulsory. Where included they should be brief. Grant or contribution numbers may be acknowledged. Please refer to Journal-level guidance for any specific requirements. § DECLARATIONS Some journals require declarations to be submitted in a standardised format. Please check the Instructions for Authors of the journal to which you are submitting to see if you need to complete this section. If yes, your manuscript must contain the following sections under the heading `Declarations': * Funding * Conflict of interest/Competing interests (check journal-specific guidelines for which heading to use) * Ethics approval and consent to participate * Consent for publication * Data availability * Materials availability * Code availability * Author contribution If any of the sections are not relevant to your manuscript, please include the heading and write `Not applicable' for that section. Editorial Policies for: Springer journals and proceedings: <https://www.springer.com/gp/editorial-policies> Nature Portfolio journals: <https://www.nature.com/nature-research/editorial-policies> Scientific Reports: <https://www.nature.com/srep/journal-policies/editorial-policies> BMC journals: <https://www.biomedcentral.com/getpublished/editorial-policies> § SECTION TITLE OF FIRST APPENDIX An appendix contains supplementary information that is not an essential part of the text itself but which may be helpful in providing a more comprehensive understanding of the research problem or it is information that is too cumbersome to be included in the body of the paper.
http://arxiv.org/abs/2406.08934v1
20240613090518
Tully-Fisher Relation of Late-type Galaxies at $0.6 \leq z \leq 2.5$
[ "Gauri Sharma", "Varenya Upadhyaya", "Paolo Salucci", "Shantanu Desai" ]
astro-ph.GA
[ "astro-ph.GA" ]
Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS UMR 7550, F-67000 Strasbourg, France University of Strasbourg Institute for Advanced Study, 5 allée du Général Rouvillois, F-67083 Strasbourg, France Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535, South Africa INFN-Sezione di Trieste, via Valerio 2, I-34127 Trieste, Italy IFPU Institute for Fundamental Physics of the Universe, Via Beirut, 2, 34151 Trieste, Italy SISSA International School for Advanced Studies, Via Bonomea 265, I-34136 Trieste, Italy Department of Physics, Indian Institute of Technology, Hyderabad, Telangana-502284, India We present a study of the stellar and baryonic Tully-Fisher relation within the redshift range of 0.6 ≤ z ≤ 2.5 utilizing observations of . This dataset, as explored in <cit.>, comprises of disk-like galaxies spanning a stellar mass range of 8.89 ≤log(M_star [M_⊙]) ≤ 11.5, baryonic mass range of 9.0 ≤log(M_bar [M_⊙]) ≤ 11.5, and circular velocity range of 1.65 ≤log(V_c [ km/s]) ≤ 2.85. Stellar masses of these objects are estimated using spectral energy distribution fitting techniques, while gas masses are determined via scaling relations. Circular velocities are directly derived from the Rotation Curves (RCs), after meticulously correcting for beam smearing and pressure support. Our analysis confirms that our sample adheres to the fundamental mass-size relations of galaxies and reflects the evolution of velocity dispersion in galaxies, in line with previous findings. This reaffirms the reliability of our photometric and kinematic parameters (i.e., M_star and V_c), thereby enabling a comprehensive examination of the Tully-Fisher relation. To attain robust results, we employed a novel orthogonal likelihood fitting technique designed to minimize intrinsic scatter around the best-fit line, as required at . For the STFR, we obtained a slope of α=3.03± 0.25, an offset of β = 3.34± 0.53, and an intrinsic scatter of ζ_int=0.08 dex. Correspondingly, the BTFR yielded α=3.21± 0.28, β=3.16± 0.61, and ζ_int=0.09 dex. Our findings suggest a subtle deviation in the stellar and baryonic Tully-Fisher relation with respect to local studies, which is most-likely due to the evolutionary processes governing disk formation. Tully-Fisher Relation of Late-type Galaxies at 0.6 ≤ z ≤ 2.5 Gauri Sharma, 1,2,3,4,5,6 Contact: gsharma@uwc.ac.za Varenya Upadhyaya, 7 Paolo Salucci 4,5,6 Shantanu Desai 7 Received XXXX; accepted XXXX ==================================================================================================================================== § INTRODUCTION Scaling relations in galaxies refer to the empirical correlations between diverse observable properties, such as luminosity, mass, size, and rotational velocity. These relations offer invaluable insights into the fundamental physics and evolutionary dynamics shaping galaxies, and serve as rigorous benchmarks against which theoretical models of galaxy formation and evolution are tested. Among the various scaling relations, the Tully-Fisher relation holds a place of particular significance in galaxy evolution and cosmology. This correlation acts as an analytical cornerstone, unraveling the complexities of galaxy dynamics and morphology, thereby deepening our understanding of interplay between physical properties of galaxies. In the realm of galaxy dynamics, the Tully-Fisher Relation (TFR) is one of the most studied empirical scaling relations that correlates the properties of luminous matter with those of the dark halo. In the traditional TFR, which originated from the seminal work of <cit.>, the luminosity of galaxies scales with their characteristic velocity (i.e., circular velocity V_c) via a power-law, L∝β V_c^α, where α is the slope, and β is the intercept in the relation. The slope indicates the extent of the circular velocity's dependency on the luminosity, while the quantity β/α represents the zero-point, which indicates the origin of the relation. In the local Universe, this relation is remarkably tight (α∼ 4, β/α∼ 2, σ_int≲ 0.1 dex) for star-forming disk galaxies <cit.>. Consequently, it is widely used in redshift-independent distance measurements <cit.>, for example, knowing the luminosity and flux (L and F), one can relate the observed flux to the distance (D) of the object via F∝ L/4π D^2. Furthermore, the TFR has played a significant role in determining cosmological parameters, particularly by enabling the measurement of the Hubble constant H_0 out to the local Universe <cit.>. The TFR serves not only as a distance indicator in cosmology but also as a powerful tool for understanding the complex interaction between dark and luminous matter in galaxies. This is substantiated by a diverse range of observations <cit.> and simulations <cit.>. The underlying rationale lies in the relationship between the circular velocity and the total gravitational potential of a galaxy, coupled with the luminosity serving as a tracer for the total stellar mass <cit.>. An interaction between these physical quantities manifests as a correlation, thus giving rise to the well-known TFR. This foundational concept has also led to a generalized form of the TFR, expressed as M∝β V_c^α, where M represents the galaxy's stellar or baryonic mass. This generalized TFR has undergone extensive study and exhibits remarkable tightness, particularly in the local Universe <cit.>. A note of caution is warranted when discussing the generalized TFR. In optical and infrared astronomy, luminosity primarily serves as a proxy for stellar mass, giving rise to the Stellar Tully-Fisher Relation (STFR). Conversely, at radio wavelengths, luminosity predominantly traces the mass of neutral hydrogen gas. When combined with the stellar mass, this provides an approximation of the total baryonic mass of a galaxy, M_bar∝ M_star + M_gas, thereby leading to the Baryonic Tully-Fisher Relation (BTFR). It is noteworthy that the slope of the BTFR closely resembles that of the seminal TFR, with a typical value around 4 and an intrinsic scatter below 0.1 dex <cit.>. In contrast, the slope of the STFR generally ranges between 3 and 3.5– depending on the wavelength range, accompanied by a larger intrinsic scatter of approximately ∼ 0.25 dex <cit.>. Some of the previous studies have investigated the seminal and generalized TFR of star-forming galaxies (SFGs) in cluster environments <cit.>. These studies suggest that the slope of the TFR at intermediate redshifts (z∼ 0.5) is shallower compared to local measurements, which has prompted discussions on potential selection bias effects <cit.>. However, other studies have reported little to no evolution in the seminal TFR slope from z∼ 1 to z≈ 0 <cit.>. Other high-z studies, utilizing state-of-the-art Integral Field Unit (IFU) observations of isolated SFGs <cit.> have found mixed results. Note however that most of these works have mainly focussed on the evolution of the TFR zeropoint compared to the local Universe values after assuming a fixed slope. <cit.> found that the slope in K-band TFR at z ∼ 0.6 is consistent with the local value after allowing the slope to vary.  <cit.> found a large scatter in the TFR at z ∼ 3 and consequently used a fixed slope having the value same as that of the local Universe. <cit.> studied the K-band TFR at z ∼ 1. They fit the TFR using both a fixed slope (obtained from the local Universe value) as well as keeping it as a free parameter. When the slope was kept as a free parameter, significant differences were found compared to the local Universe value (cf. Table 3 of  ). <cit.> studied the stellar and baryonic TFR at redshifts z∼ 0.9 and z ∼ 2.3 by using a fixed slope (fixed to the value in  ) and looking for the variation of zeropoint with redshift. A whole bunch of studies have also been carried out on the stellar TFR <cit.>. In addition to fitting for the stellar TFR slope, many works have also assumed a constant slope and evaluated the scatter <cit.>. Searches for an abrupt transition in the TFR slope using low redshift data have also been carried, which reported null results <cit.>. Here, it is important to note that the early IFU studies have inherent uncertainties, primarily due to their 1D or 2D kinematic modeling approaches <cit.>. This is because telescopes equipped with IFUs can achieve only a spatial resolution of 0.5-1.0, while a galaxy at z ∼ 1 typically has an angular size ranging from 2 - 3. As a result, a finite beam size leads to smearing of the line emission across adjacent pixels. Consequently, the gradient in the velocity fields tends to become flattened, and the line emission begins to broaden, creating a degeneracy in the calculation of rotation velocity and velocity dispersion. This observational effect is referred to as `Beam smearing,' which affects the kinematic properties of galaxies by underestimating the rotation velocity and overestimating the velocity dispersion. Therefore, it is essential to model the kinematics in 3D space, taking into account the beam smearing on a per-spaxel basis. Recent studies by <cit.>, and <cit.> have modeled the kinematics of high-z galaxies in 3D space, and demonstrated significant improvements in overall kinematics, including 2D velocity maps and position-velocity diagrams (i.e., observed rotation curves). It is noteworthy that although some of the other high-z TFR studies have accounted for beam-smearing in the forward-modelling approach, none of them have have fitted for kinematics in full 3D space similar to <cit.> and <cit.>. Moreover, at high-z, the Inter-Stellar Medium (ISM) in galaxies is highly turbulent <cit.>. This turbulence within the ISM generates a force, which counteracts gravity in the galactic disk via a radial gradient, which in turn suppresses the rotation velocity of gas and stars. This phenomenon is commonly referred to as `Asymmetric Drift' for the stellar component and `Pressure Gradient' for the gas component, as defined in <cit.>. While the latter effect is generally negligible in local rotation-dominated galaxies (i.e., late-type galaxies), it is significantly observed in local dwarf and early-type galaxies <cit.>. The highly turbulent conditions of the ISM and the dominance of gas at high-z <cit.> makes their velocity dispersion variable and anisotropic across the galactic scales <cit.>. Consequently, the observed rotation velocity measurements are underestimated throughout the galactic radius, and one may even observe a decline in the shape of rotation curves at high-z <cit.>. We point out that among the aforementioned high-z TFR studies, only <cit.> has accounted for pressure gradient corrections by assuming a constant and isotropic velocity dispersion. However, recent studies of observations <cit.> and simulations <cit.> indicate that pressure support corrections under the assumption of constant and isotropic velocity dispersion can lead to an overestimation of the circular velocities. This is particularly relevant for galaxies with low rotation-to-dispersion ratios (v/σ < 1.5). Given these findings, there is a compelling need to re-examine the TFR at , employing more precise kinematic measurements as recommended in <cit.>, and <cit.>. This study aims to revisit and refine our understanding of the TFR at high-redshifts. Specifically, we utilize a large dataset recently analyzed by <cit.>, which models the kinematics using 3D forward modeling and incorporates the pressure gradient while allowing for varying and non-isotropic velocity dispersion. The aim of this work is to investigate the cosmic-evolution of TFR in star-forming galaxies– disk-like systems, within the redshift range of 0.6≤ z≤ 2.5. We focus on disk-like systems since they form and evolve predominantly at z≤ 1.5 and exhibit homogeneous and controlled evolution <cit.>. Thus, these systems serve as a valuable tool to infer the cosmic evolution of baryons and dark matter. At z≈ 1, nearly 50% of the Universe's stellar mass assembles in galactic halos <cit.>, and this marks the peak of cosmic star-formation density <cit.>. Therefore, it is crucial to compare the baryonic and dark matter properties of galaxies at 0.6≤ z≤ 2.5 with those in the local Universe. This comparison provides insights into (1) the evolution of disk-like systems after their formation at z≤ 1.5 and (2) the nature of dark matter because these systems are more-or-less in dynamical equilibrium. This paper is organized as follows, Section <ref> discusses the dataset, relevant parameters of STFR and BTFR relations, and assess the quality of these parameters using fundamental scaling relations. In Section <ref>, we present the STFR and BTFR relations, and in Section <ref> we discuss these relations in comparison with previous studies. Finally, in Section <ref> we summarize our work and conclude main findings. In this work, we have assumed a flat ΛCDM cosmology with Ω_m,0 =0.27, Ω_Λ,0=0.73 and H_0=70 km s^-1 Mpc^-1. § DATA In this study, we make use of the dataset recently examined by <cit.>. As discussed in GS23, this sample was initially selected based on the assessment of kinematic modeling outputs. Briefly, kinematic modeling was based on the following primary criteria: (1) confirmed Hα detection and spectroscopic redshift, (2) inclination angles within the range of 25^∘≤θ_i ≤ 75^∘, and (3) SNR > 3 (in Hα datacubes). GS23 employed the 3DBarolo code to model the kinematics, allowing for beam smearing corrections and inclination within a 3D space. This results in velocity maps, major and minor axis position-velocity (PV) diagrams, surface brightness curves, rotation curves, and velocity dispersion curves. Following the kinematic modeling outcomes GS23 implemented secondary selection criteria, according to which galaxies were excluded if they met the following conditions: (1) 3DBarolo run did not succeed, (2) No mask was created, implying 3DBarolo's failure to mask the true emission due to a moderate signal-to-noise; (3) maximum observed radius smaller than the point spread function, i.e., R_ max< PSF, indicating 3DBarolo's inability to create rings and hence fails to produce kinematic models; (4) R_ max=PSF, in this case resulting kinematic models provide only two measurements in , which were insufficient for dynamical modeling or reliable measurements of circular velocities. This secondary selection criteria resulted in a final sample of 263 galaxies, comprising 169 from KROSS, 73 from KMOS3D, and 21 from KGES. For the distribution of relevant physical quantities of the final sample we refer the reader to <cit.>. The rotation curves inferred from 3DBarolo are further corrected for pressure support through the `Pressure Gradient Correction,' method as established by <cit.>, and referred to as intrinsic rotation curves. We utilize these intrinsic rotation curves to estimate the circular velocities (V_c) of galaxies. The velocity dispersion (σ) is an average value estimated from velocity dispersion curves obtained from 3DBarolo, for more details we refer the reader to <cit.> and <cit.>. GS23 sample spans a stellar mass range of 8.89 ≤log (M_ star [ M_⊙]) ≤ 11.5, effective radii -0.2 ≤log(R_e [ kpc]) ≤ 0.85, star formation rates between 0.49 ≤log(SFR [M_⊙ yr^-1]) ≤ 2.5, and a redshift range of 0.6 ≤ z < 2.5. This sample is a fair representative of main-sequence star-forming galaxies, shown in Figure <ref>. In the subsequent sections, we briefly examine the circular velocity and velocity dispersion estimates, discuss the baryonic mass estimates, and justify the accuracy of photometric and kinematic properties relevant for TFR study. §.§ Velocity Measurements In this study, we have examined the circular velocity of rotation curves at three distinct scale lengths, specifically R_e, R_ opt, and R_ out (≈ 5 R_ D), which we denote as V^Re_c, V^Ropt_c, and V^Rout_c, respectively.[For an exponential thin disk, the stellar-disk radius is defined as R_D = 0.59 R_e. Under the same assumption, the scale length that encloses 80% of the stellar mass is referred to as the optical radius and defined as R_ opt=3.2 R_D. For more comprehensive details, we refer the reader to <cit.>.] It is worth noting that the effective radius for the majority of our sample falls below the resolution limit, which is approximately 4.0 kpc with a median seeing of 0.5. On the other hand, the optical radius remains on the verge of resolution limit. Thus, in order to be conservative, we only utilized circular velocity measurements that were obtained at R_out. This is one of the reasons of not plotting TFR for V_c(2.2 R_D) as adopted in previous studies <cit.>. However, the choice of V_c(2.2 R_D) aims to capture the flat portion of the rotation curves, akin to V_c^Rout in our case, which represents the circular velocity in the outer regions of the rotation curves assumed to be flat. Finally, it is important to remark that ∼ 92% and 65% of galaxy rotation curves are sampled up to R_ opt and R_ out, respectively. In cases where the rotation curve is not sampled up to the reference radius, we interpolate (or extrapolate) the velocity estimates. Our approach is as follows: (1) If R_opt exceeds R_last (the maximum observed radius), V_c is computed at R_last. (2) If R_out > R_last, V_c is computed at R_opt. This approach ensures that we remain within the outer regions of galaxies, which are assumed to have flat rotation curves based on local observations. Note that we did not interpolate (or extrapolate) the velocities beyond the maximum observed radius. Moreover, for interpolation, we do not employ any specific functional form of the rotation curve; instead, we utilize routine. This ensures that if the rotation curve is declining, it will continue to decline, and vice versa. Additionally, it's worth noting that TFR studies in the local Universe occasionally utilize the maximum velocity of the system <cit.>. Therefore, we also examined the maximum velocity in relation of V_c^Rout. We extracted the maximum circular velocity (V_ max) from the rotation curves. We note to the reader that V_max is not the asymptotic rotation velocity and hence involves no interpolation/extrapolation. The results of this comparison are presented in Figure <ref>. Our analysis revealed a strong positive correlation of ∼ 97% between V_ max and V^Rout_c, with an intrinsic scatter of 0.15 dex. Although we have observed that ∼ 30% of the sample exhibits V_max values 0.1 dex higher than V^Rout_c, we consider to use V^Rout_c as the circular velocity. The rationale behind this decision is the uncertainty in capturing the entire flat part of the at  . As a result, V_ max might not accurately represent the maximum velocity of the galaxy. Hence, it can not be compared neither with the locals nor with studies. Therefore, to maintain uniformity across the sample, treat all galaxies consistently, and facilitate comparisons with previous studies <cit.>, we have chosen to utilize V^Rout_c as the circular velocity, hereafter denoted as V_c. In Figure <ref>, we show the rotation-to-dispersion ratio of before and after pressure support corrected GS23 sample. The velocities before pressure support corrections are referred to as rotation velocity (V_ rot), while after pressure support corrections its circular velocity (V_ c) of the system. We notice that, prior to the implementation of pressure support corrections, there were only 9 dispersion dominated galaxies (3 KMOS^ 3D, 1 KGES, and 5 KROSS). However, after applying pressure support corrections, none of these galaxies have V_ c/σ < 1, as depicted in Figure <ref>. Therefore, we do not exclude these galaxies from our analysis. Hence, the full GS23 sample is a good representative of rotation supported systems, which we employ to study TFR. We remark to the reader that underlying assumptions of GS23 consist of three key criteria: * Galaxies should be located on or around the star-forming main sequence; * They should exhibit a disk-like morphology in high-resolution images, with no nearby neighbors within 150 kpc; and * The ratio of circular velocity to velocity dispersion (V_c/σ >1). These three assumptions enable them to select disk-like galaxies from high-z sample. Notably, our main findings in Section <ref> are consistent with those of local studies <cit.> which generally select the star-forming galaxies with V_ c/σ >1. However, we notice that previous high-z studies apply higher V_ rot/σ cuts to investigate the TFR, which we briefly discuss in Appendix <ref>. §.§ Baryonic Masses Observations show that typical star-forming galaxies lie on a relatively tight, almost linear, redshift-dependent relation between their stellar mass and star formation rate, the so-called main sequence of star formation <cit.>. Most stars since z∼ 2.5 were formed on and around this MS <cit.>, and galaxies that constitute it, usually exhibit a rotating disk morphology <cit.>. Figure <ref> shows the position of the GS23 sample on the main sequence of typical star-forming galaxies (MS), the analytical prescription for the centre of the MS as a function of redshift and stellar mass proposed in the compilation by <cit.>, as a function of stellar mass. The figure shows that the all sources are on and around the main sequence between 0.65 ≤ z ≤ 2.5. A normalized main sequence plot of this dataset is shown in <cit.>. This suggests that GS23 sample is a good representative of disk-like star-forming galaxies. This enables us to estimate their molecular gas masses (M_H2) using the <cit.> scaling relations, which provide a parameterization of the molecular gas mass as a function of redshift, stellar mass, and offset from the MS, stemming from a large sample of about 1400 sources on and around the MS in the range z=0-4.5 (cf. also and ). The scatter around these molecular gas scaling relations and the stellar mass induce a 0.3 dex uncertainty in the molecular gas mass estimates. The H2 mass of GS23 sample is 9.14 ≤log(M_ H2 [M_⊙]) ≤ 10.63, with an average molecular gas fraction of f__ H2 = 0.12± 0.04. To calculate the atomic mass (M_HI) content of galaxies within the redshift range 0.6 ≤ z ≤ 1.04, we use the HI scaling relation presented by <cit.>, which provides the first M_star-M_HI relation at z≈ 1, encompassing 11,419 star-forming galaxies. The relation was derived using a stacking analysis across three stellar mass bins, each bin with a 4σ detection and an average uncertainty of ∼ 0.3 dex. To compute the HI mass at z> 1.04, we employ the M_star-M_HI scaling relation derived from a galaxy formation model under the ΛCDM framework <cit.>. This scaling relation successfully reproduces both the HI mass functions <cit.> and the ^12CO luminosity functions <cit.> at z≈ 0 with an uncertainty of around 0.25 dex, as well as follows the observations of quasars from z=0-6.4 (see Fig.12, ). The HI Mass range of GS23 sample is 9.57 ≤log(M_ HI [M_⊙]) ≤ 11.05, with an average atomic gas fraction f__ HI = 0.41± 0.13. Finally, the total baryonic mass of galaxies is the sum of molecular and atomic gas : M_ bar = M_ H2+1.33 M_ HI, where the factor of 1.33 accounts for the Helium content. §.§ Quality Assessment of Data As shown in Sections <ref> and <ref>, the GS23 sample contains rotation supported systems, which lies on-and-around the main-sequence of star-forming galaxies. In this section, we focus on the quality assessment of our dataset, particularly emphasizing the verification of key scaling relations such as the mass-size relation and the redshift evolution of the velocity dispersion. The consistency of these relations serves as a benchmark for the overall integrity of our dataset and the subsequent analysis of TFR across cosmic time. Mass-Size Relation: In the local Universe, galaxies are broadly categorized into two main classes: early-type and late-type, commonly identified as the red-sequence and blue-cloud, respectively <cit.>. These classes exhibit distinct relationships between stellar-disc size and total stellar mass <cit.>. However, for nearly a decade, cosmic evolution of the mass-size relation for galaxies was an open question, (e.g. early-type: ; late-type: ). Recently, with a large dataset of CANDELS survey, <cit.> statistically studied the mass-size relation of early- and late-type galaxies through the redshift range: 0<z<3. Their findings indicate that while the intercept of the mass-size relation varies, the slope remains constant across different epochs, suggesting that the different assembly mechanism acts similarly on both types of galaxies at different epochs. Moreover, the early type galaxies have a steep relation between mass-size, and they evolve faster with time. Whereas, late-type galaxies show a moderate evolution with time, as well as a shallow mass-size relationship, given as: Early-types: R_e ∝ M_*^0.75 (for M_*> 2× 10^10 M_⊙), R_e ∝ (1+z)^-1.48 (fast evolution) Late-types: R_e ∝ M_*^0.22 (for M_*> 3× 10^9 M_⊙), R_e ∝ (1+z)^-0.75 (moderate evolution) We assess the quality of our dataset consisting of star-forming, disk-like galaxies (i.e., late-types) by comparing it with the above mass-size relation (Equation <ref>). As illustrated in the left panel of Figure <ref>, our dataset aligns well with the established relation. Utilizing the least-squares method of linear fitting, we obtain a slope of 0.22, which closely matches the value reported in <cit.>, and an intrinsic scatter of 0.13 dex. This confirms the robustness of photometric quantities of our sample. Evolution of Velocity Dispersion: The velocity dispersion of a galaxy is tightly coupled to its dynamical state and serves as an effective measure of turbulence. Its cosmic evolution can provide critical insights into the efficiency and nature of the underlying driving mechanisms, such as the baryonic feedback processes and gravitational interactions <cit.>. Moreover, variations in the velocity dispersion with redshift could potentially suggest how galaxies interact with their environments, particularly the cosmic web <cit.>. The correlation between the ionized gas velocity dispersion and redshift is a well-established phenomenon, as reviewed comprehensively by <cit.> and <cit.>. In the right panel of Figure <ref>, we show this relation for our sample and compared with those of <cit.>. We infer that both datasets are in fair agreement across all redshifts with similar intrinsic scatter (≈ 0.2 dex). The slight offset in this relation can be attributed to the difference in the kinematic modeling techniques used in our analysis. § TULLY FISHER RELATION We assume that the galaxy masses (baryonic: stars and gas) scale with the circular velocities as a power-law with slope (α) and intercept (β), which can mathematically be defined as: log(Y)=β + α log(X) where, Y is the list of stellar (or baryonic) masses and X corresponds to the circular velocities (V_c). To obtain the best-fit to the data, we sample the likelihood using Markov Chain Monte Carlo (MCMC), which uses an orthogonal likelihood, defined as: -2lnℒ = ∑_iln(2πσ_i^2) +∑_i(y_i-mx_i-b)^2/σ_i^2(m^2+1) where,σ_i^2 = m^2σ_x_i^2+σ_y_i^2/m^2+1 +ζ_int^2 where, x_i and y_i denote the stellar mass and circular velocity lists, respectively, while σ_x_i and σ_y_i represent their associated errors. The parameter ζ_int refers to the intrinsic scatter in the direction orthogonal to the best-fit line, and σ_i gives the total scatter in the relation. We adopt this orthogonal likelihood fitting technique due to the significant scatter observed in galaxies (∼ 0.25 dex) in both the stellar mass (at fixed velocity) and circular velocity (at fixed stellar mass). This scatter results in a dispersed relation, which is more accurately constrained by minimizing the scatter orthogonally along the best-fit line. This approach contrasts with the case of local disk galaxies, which exhibit a remarkably tight correlation in the M_star-V_c (or M_bar-V_C) plane with a scatter of approximately 0.026-0.1 dex on both the axes. In such cases, employing a vertical likelihood method (as described in Equation <ref>) is more appropriate, as demonstrated by <cit.>. Further justification for the choice of orthogonal likelihood over vertical likelihood in the context of data is provided in Appendix <ref>. Additionally, we remark that when fitting the STFR and BTFR, we utilize circular velocities calculated at R_out. As a result, the stellar and baryonic masses used in these fits are also constrained within the R_out region, and denoted as M_star and M_bar. However, wherever we use the total stellar or baryonic masses, they are denoted as M_star^Tot or M_bar^Tot. For detailed discussion on the choice of global and constrained masses, we refer the reader to Appendix <ref>. In Figure <ref>, we present the orthogonal likelihood fits for the STFR and BTFR, left and right panels respectively. For the STFR, we obtained a slope of α=3.03± 0.25, an offset of β = 3.34± 0.53, and an intrinsic scatter of ζ_int=0.08 dex. Correspondingly, the BTFR yielded α=3.21± 0.28, β=3.16± 0.61, and ζ_int=0.09 dex. These results are compared with previous studies, including both local and high-redshift studies of STFR and BTFR. For the STFR, we reference works by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. For the BTFR, we consider studies of <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. As evident from Figure <ref>, although previous studies of STFR and BTFR, both in locals and at high-redshift align well within 3σ uncertainties, the new data from <cit.> offers evidence for a marginal evolution in both the slope and zero-point of these relations. Specifically, we report a slightly shallower slope and an increase in the STFR zero-point compared to most previous studies, as reported in Table <ref>. 1.2 Initially, we assumed that the observed change in the slope might be solely attributable to the fitting technique. To understand this, we fitted our data using the slope and zero-point values reported in the previous studies (listed in Table <ref>) and calculated the intrinsic scatter around these reference lines. In Figure <ref>, we show the orthogonal intrinsic scatter as a function of the slopes obtained from prior studies, for both STFR (in orange) and BTFR (in purple). Our analyses indicates consistency with the slope and intrinsic scatter observed in previous studies. However, notably, our fitting technique yields shallower slope and reduced intrinsic scatter compared to previous studies, (see also Table <ref>). Although, previous studies have reported similar results, we place greater trust in our measurements. The reason is the outcome of a comparative analysis of orthogonal and vertical likelihood fitting techniques, as detailed in Appendix <ref>. In our study, we modeled the mock STFR data with high-z errors on individual measurements and scatter, akin to observations at high redshifts. We observed that the vertical likelihood method could not retrieve the true slope at high-redshift, whereas the orthogonal likelihood method performed exceptionally well. We noted that the slope of the vertical likelihood differs by a factor of 1.5± 1 compared to the orthogonal likelihood. Upon comparing our best-fit STFR/BTFR slopes with those of previous studies that minimize vertical scatter <cit.>, we found the difference of factor ∼ 1.2 to 1.5, similar to the one we just stated. Therefore, we suggest that orthogonal likelihood fitting techniques work best for high-z datasets, which are prone to large scatter. Finally, we also fitted the STFR and BTFR using V_max, and observed that the slope only varies by about ± 0.15 dex, which falls within the uncertainty range of the slope and zero-point provided using V_c. Based on these findings, we suggest that both the slope and the zero-point of the Tully-Fisher relation evolve modestly over cosmic time. Moreover, we learned that the slope, zero-point, and intrinsic scatter are all very sensitive to the preferred fitting technique. We would also like to note the reader that in this study, we do not focus on the evolution of the zero-point. This decision is based on our understanding that at high redshifts (z), the GS23 sample lacks low-mass galaxies due to Tolman surface brightness dimming <cit.>, which are crucial for accurately constraining the zero-point of the TFR. Furthermore, we explored the STFR and BTFR relations within different redshift bins, as shown in the left and right panels of Figure <ref>, respectively. In particular, we divided our galaxies into two redshift bins: 0.6≤ z ≤ 1.2 (z≈ 1) and 1.2 < z ≤ 2.3 (z≈ 1.5), fitting each bin independently using the aforementioned technique. Although Figure <ref> displays the best-fit results for both redshift bins, it is important to note that the z∼ 1.5 bin is biased towards massive galaxies and does not encompass the typical mass (log(M_star/bar [ M_⊙])=9.0-11.5) and circular velocity (log(V_c km/s)= 1.6-2.85) ranges upon which the fundamental TFR is established. Therefore, the results of the STFR and BTFR relations of z∼ 1.5 bin are not representative (or pertinent); hence we do not draw any conclusions for this redshift bin. Conversely, the STFR and BTFR relations at z∼ 1 cover typical mass and velocity range, and the fitting results are very similar to the one those derived from the full dataset. To be precise, for STFR, we find α=3.13, β=3.20, and ζ_int=0.07 dex, while for BTFR, we have α=3.35, β=2.89, and ζ_int=0.08 dex. Thus, even when we restrict our analysis to galaxies at z∼ 1, we discern a nominal evolution in the slope and zero-point (β/α) of the TFR relation at high-redshift. § DISCUSSION To reaffirm the validity of the GS23 dataset , which is a fair representative of the main sequence of star-forming galaxies as shown in Figure <ref>, we further demonstrate its ability to accurately represent fundamental relations previously explored within similar redshift ranges using high-resolution photometry and resolved kinematics. Specifically, the mass-size relation <cit.> and cosmic evolution of the velocity dispersion <cit.> are shown in the left and right panels of Figure <ref>, respectively. It is evident from these figures that the dataset studied in <cit.> is fairly representing these fundamental scaling relations, thereby reinforcing the robustness of GS23 data and its suitability in studying the TFR. Moreover, the GS23 dataset spans stellar mass and circular velocity range as explored in local STFR studies. In particular, circular velocities ranges between 1.6 ≲log(V_c [ km/s]) ≲ 2.85 and stellar masses 8.89 ≲log(M_star [ M_⊙]) ≲ 11.5, which is the same range as explored in <cit.> and <cit.>. Therefore our sample is relatively free of selection bias (in terms of mass and velocity range) and hence allows us to study the STFR, as well as BTFR, as shown in Figure <ref> left and right panels, respectively. We report a marginal evolution in the slope and zero-point of the STFR and BTFR relations for z≤ 1. Whereas, at z∼ 1.5 we do not draw conclusions on the evolution of the slope or zero-point due to insufficient data in the lower mass and velocity end. In subsequent sections, we discuss our results in light of previous local and studies. §.§ Comparison with local studies To compare the STFR, we utilize the data from <cit.> as a benchmark. In the left panel of Figure <ref>, we juxtapose the dataset of Lapi2018 with GS23. While the velocity ranges of both datasets overlap significantly, we observe that at higher velocities, local galaxies are more massive compared to their high-redshift counterparts. In other words, at fixed stellar masses (bench-marking against local galaxies), high-redshift galaxies exhibit fast rotation, a phenomenon also reported in previous studies such as <cit.>. In particular, the slope and zero-point of the STFR deviate from their standard values <cit.> by approximately a factor of 1.2 and 0.72, respectively. Specifically, in the redshift range 0.6≤ z ≤ 2.5, we obtain a slope of α=3.03± 0.25 and an offset of β = 3.34± 0.53. Thus, we report a divergent evolution in the STFR over cosmic time. This marginal evolution is most-likely due the evolutionary stages of galaxies, which we plan to investigate in future work using cosmological simulations. To compare the BTFR relation, we utilize the dataset from <cit.> and juxtapose their data in the right panel of Figure <ref>. Although GS23 dataset overlap seamlessly with Lelli2019, we notice that our dataset does not encompass galaxies with lower baryonic masses (M_bar<10^9.35 M_⊙) and lower velocities (V_c < 40 km/s) as observed in local galaxies. This absence could be attributed to the Tolman dimming effect <cit.>, as suggested in GS23. Due to these missing galaxies in the lower mass and velocity range, we refrain from making definitive conclusions regarding the zero-point of the BTFR at high-redshift. Secondly, similar to the STFR, at fixed baryonic mass, galaxies at seems to rotate faster. Consequently, we observe a slightly shallower slope at with respect to the locals. §.§ Comparison with high-z studies We acknowledge that <cit.> and <cit.> pioneered the study of the TFR at using large-datasets of IFU surveys: KMOS3D and KROSS surveys, respectively. Although, GS23 dataset consists of a sub-sample from both the KMOS3D and KROSS surveys, there exists discrepancies between the STFR and BTFR fits of U17, AT19, and this work. To understand these discrepancies, we present a comparative analysis of the STFR and BTFR datasets in Figure <ref> and <ref>, respectively. Moreover, we provide a detailed tailored comparison in Appendix <ref>. We remark the reader that AT19 only study the STFR, while U17 studies both STFR and BTFR. First, we note that the kinematic modeling techniques employed by the three studies are distinct. Differently to the approaches in A19 and U17 (see details in respective studies or briefly in Appendix C), GS23 fit the kinematics in 3D space. Some previous studies have shown that the 3D forward modeling allows for more accurate estimates of observed rotation velocities compared to 2D methods <cit.>. In particular, the 2D kinematic modeling techniques overestimate the velocity dispersion and provide underestimated rotation velocities. Consequently, the discrepancies between these studies are expected. However, other factors that may contribute to these discrepancies are discussed below for each study separately: * AT19: Rotation curves are derived along the major axis of the 2D velocity map, and beam-smearing corrections are applied only at the outer radius (2-5 R_D). Moreover, the rotation curves were not corrected for the pressure gradients. Consequently, we anticipate lower circular velocity estimates in comparison to GS23. This is indeed evident in Figure <ref>. The median value of the circular velocity distribution in the AT19 dataset is ≈ 100 km/s, whereas it is ≈ 150 km/s in GS23, despite clear overlap of stellar mass distributions. Additionally, upon implementing the sample selection criteria (V_c/σ > 3) used in AT19 and utilizing rotation velocities (without pressure corrections), as illustrated in Figure <ref>, we still evident discrepancies in both the distributions and the best-fit results. Moreover, as shown in Figure <ref>, when we apply the AT19 best-fit to the GS23 dataset, we observe an intrinsic scatter of 0.16 dex, which is a factor of 2 higher than our estimates. Therefore, the AT19 fit is not applicable to the GS23 dataset. Finally, we suggest that the observed differences between the best fits of AT19 and GS23 are primarily due to discrepant kinematic modeling methods and variations in fitting techniques. * U17: Rotation curves are derived from the 2D velocity maps accounting for beam smearing and pressure gradient corrections. However, their pressure gradient corrections are applied under the assumption of constant and isotropic velocity dispersions. In <cit.> and <cit.>, it is shown that the assumption of constant and isotropic velocity dispersion leads to overestimated circular velocities, when corrected for pressure gradients, especially, in low rotation to dispersion ratio galaxies (v/σ≲ 1.5). Hence, circular velocity estimates of U17 are expected to be higher than GS23 estimates. It is indeed evident in Figures <ref>, <ref>, and <ref>. Even when we apply the sample selection cut (V_c/σ > √(4.4)) used in U17, we still observe high circular velocities at fixed stellar mass, as shown in Figure <ref>. In particular, we notice that U17 objects are biased towards higher velocities (V_c≈ 250 km/s) and stellar/baryonic masses (M_star≈ 10^10.5M_⊙ / M_bar≈ 10^10.8M_⊙ ). While, GS23 dataset covers a typical mass and velocity ranges, we suggest that this is most-likely the primary reason for the discrepant STFR and BTFR fits in U17 with respect to our work. Furthermore, when we apply the U17 best-fit to the GS23 dataset, we observe an intrinsic scatter of ∼ 0.1 dex, which is a factor of 1.25 higher than our estimates. This suggest that discrepancies are also induced due to fitting techniques. §.§ Comparison of fitting techniques In this work, we performed a mock analysis of orthogonal and vertical likelihood fitting techniques on the stellar Tully-Fisher relation, as discussed in Section <ref> and detailed in Appendix <ref>. The mock analysis results are shown in Figures <ref> and <ref>. We observe that the vertical likelihood fitting technique works well for the local Universe, where the intrinsic scatter is of the order of 0.01-0.1 dex. However, it underestimates the slope when intrinsic scatter exceeds 0.1 dex, as observed in the high-redshift data. In contrast, the orthogonal likelihood fitting technique performs best in both cases. Notably, it retrieves the correct slope with a precision error of less than ± 0.02 dex. Consequently, we employed the orthogonal likelihood fitting technique in our work. The results of high-redshift STFR and BTFR fits obtained using orthogonal and vertical likelihood methods are presented in Figures <ref> and <ref>, respectively. Interestingly, the slopes obtained using the vertical likelihood for the STFR and BTFR differ from the orthogonal method's best fits by 1.72 dex and 1.98 dex, respectively. Therefore, we recommend using the orthogonal likelihood fitting technique for STFR and BTFR studies, or any scaling relations where data is subject to large uncertainties. For the reference of the reader, we have made our code publicly available via a https://github.com/varenya27/Orthogonal-Fitting-Technique/tree/mainGitHub repository. § CONCLUSIONS In this study, we investigated the Stellar and Baryonic Tully-Fisher relations over a broad redshift range of 0.6≤ z ≤ 2.5 using data from <cit.>. To effectively address the substantial scatter prevalent among high-redshift galaxies, as elaborated in Appendix <ref>. We employed an orthogonal likelihood fitting technique, which minimizes the intrinsic scatter orthogonal to the best-fit line. The outcomes of our fitting methodology are presented in Figure <ref>. For the STFR, our analysis yielded a slope of α=3.03± 0.25, an intercept β = 3.34± 0.53, and an intrinsic scatter of ζ_int=0.08 dex. Correspondingly, the best-fit BTFR parameters are: α=3.21± 0.28, β=3.16± 0.61, and ζ_int=0.09 dex. That is, the slopes of the STFR and BTFR are slower by a factor of ∼ 1.23 and ∼ 1.15, respectively, compared to those observed in the local Universe. We also explored the relations for different redshift bins and found that the z∼ 1.5 bin was biased towards massive galaxies and hence inconclusive. Conversely, the z∼ 1 bin, devoid of such a bias, yielded results within the agreement to those derived from the complete (full) dataset, and affirmed the presence of minimal evolution in both the STFR and BTFR. When comparing our findings with local studies, we observed slight deviations, as shown in Figure <ref>. Moreover, a comparison with previous studies highlight differences due to kinematic modeling methods, fitting techniques, and sample selection, see Section <ref> and Appendix <ref>. Through a comparative analysis of the outcomes obtained using orthogonal and vertical likelihood fitting methods, we have discerned a significant impact of fitting techniques on determining the slope and zero-point of scaling relations. Specifically, employing the vertical likelihood fitting technique at high redshifts (outlined in Appendix <ref>) led to a shallower slope of the STFR/BTFR by a factor of ∼ 2.5, along with a correspondingly higher zero-point, as shown in Figures <ref>. This discrepancy arises from the inherent scatter within the observed data. Therefore, before picking a specific fitting technique, we suggest for conducting mock data analyses (including observed scatter) to evaluate the performance of different fitting techniques on given observations, as we demonstrated in Appendix <ref>. Based on our findings, we conclude that the Tully-Fisher relation (TFR) exhibits a subtle shift in both the slope and zero-point values across cosmic time. This variation is most-likely due to dominant mechanisms driving galaxy evolution, such as gas accretion, star formation, mergers, or baryonic feedback. Therefore, we propose that the TFR is an empirical relation rather than a fundamental one in galaxy evolution, as it seems to show a dependency on galaxy's physical condition at a given epoch. Therefore, we emphasize the importance of studying the TFR across cosmic time using cosmological galaxy simulations to gain deeper insights into the underlying physical processes shaping the galaxy properties and its evolution across cosmic scales. We thank the anonymous referee for constructive feedback. GS acknowledges the SARAO postdoctoral fellowship (UID No.: 97882) and the support provided by the University of Strasbourg Institute for Advanced Study (USIAS) within the French national programme Investment for the Future (Excellence Initiative) IdEx-Unistra. GS also thanks IIT Hyderabad for funding the January 2024 collaboration visit, which has led to this publication. aa § FITTING TECHNIQUES In this section we discuss the fitting techniques used in estimating the slopes and intercepts. We use the MCMC sampler <cit.> to estimate the best-fit parameters. In our work, we fit a linear model y=α x+β to a set of N datapoints (x_i, y_i) with errors (σ_x_i, σ_y_i). We mainly consider two possible likelihoods for the parameter estimation. The first is a vertical likelihood that considers intrinsic scatter in the vertical direction, i.e., the intrinsic scatter varies only along the y-direction and not the x-direction <cit.>. The second considers the intrinsic scatter to be orthogonal to the best-fit line <cit.>. The vertical log-likelihood function follows: -2lnℒ = ∑_i^N ln(2πσ_i^2) +∑_i(y_i-α x_i-β)^2/σ_i^2 σ_i^2 = α^2σ_x_i^2+σ_y_i^2 +ζ_int^2 whereas, the orthogonal log-likelihood function is defined as: -2lnℒ = ∑_i^N ln(2πσ_i^2) +∑_i(y_i-α x_i-β)^2/σ_i^2(α^2+1) σ_i^2 = α^2σ_x_i^2+σ_y_i^2/α^2+1 +ζ_int^2 where σ_i in Equation (<ref>) and (<ref>) encapsulate the expressions for the total scatter in each case, calculated by taking into account the individual errors on the data points (σ_x_i, σ_y_i) along with the intrinsic scatter ζ_int. As it is not immediately evident which method is more suitable for our purposes, we use synthetic data to test the efficacy of both the methods by determining which of these likelihoods can recover the correct regression relation. First, we generate mock data using the commonly accepted value of the STFR, α=3.5, and error values in the range of 0.12-0.15 and 0.03-0.08 for the y_i (stellar mass) and x_i (circular velocity) variables generated using a uniform distribution. The scatter (spread) in the data-points is taken to be about 0.1 dex, which is an average value obtained from previous studies <cit.>. Note that the scatter is applied in both, x and y, directions separately using a uniform distribution between -1 and 1. The mock dataset is shown in the top panel of Fig. <ref>. The constraints ensure that median values of the errors and scatter are equivalent or higher than data in aforementioned literature. Subsequently, we fit this mock data using Bayesian inference with the emcee sampler, employing both likelihoods, as illustrated in the bottom panel of Fig. <ref>. It is noteworthy that both likelihood functions reproduce the true values within 1σ significance: m_VertL = 3.22±0.06 and m_OrthL=3.54±0.08. However, it is evident that the orthogonal likelihood closely constrains the true value. Next, we assess the applicability of these likelihoods on mock data that closely mimics the high-redshift observations. Following a similar procedure as before, we generate mock data using the widely accepted value of the STFR, α=3.5. However, in this case, we introduce a scatter of 0.25 dex, in both directions, consistent with the dataset of <cit.>, as shown in the left panel of Figure <ref>. The individual uncertainties on stellar masses and circular velocity measurements lie in the ranges 0.28-0.34 and 0.08-0.12 respectively, and they are applied using uniform distributions. Subsequently, we fit this data using both likelihoods, as shown in the right panel of Figure <ref>. Notably, the vertical likelihood recovers a slope of 2.20±0.12, deviating by 1.30± 0.12 from the true value (3.5), while the orthogonal likelihood accurately recovers the true value with nearly 100% precision. Consequently, we propose that data exhibiting higher scatter, as often observed at high redshifts, necessitates advanced fitting techniques, such as an orthogonal likelihood which minimizes the intrinsic scatter perpendicular to the best-fit. Thus, in this work we employ the orthogonal likelihood to estimate the best-fit for the STFR and BTFR at high-redshift. § STFR AND BTFR WITH TOTAL MASSES In Section <ref>, while computing the Tully-Fisher relations, we use the stellar and baryonic masses of the galaxies contained within R_out. In the rotation curves for galaxies at low redshifts (z∼0), we observe a clear maximum value of the velocity, followed by a flat curve. In such cases, it is therefore more meaningful to use the velocity of the flat portion along with the total mass (stellar or baryonic) or luminosity in the study of Tully-Fisher Relation. At high redshifts, however, since we do not observe a substantial portion of the curve flattening in all galaxies, we cannot be sure if the V_max measurements indeed represent the maximum circular velocity of the galaxies, as apparent in Figure <ref>. Therefore, to maintain uniformity across the sample, to treat all galaxies consistently, and facilitate comparisons with previous studies <cit.>, we use the circular velocity computed at R_out. Consequently, in Section <ref> for STFR and BTFR, we use stellar and baryonic masses computed within R_out. However, in Figure <ref>, we present the STFR and BTFR using the total stellar and baryonic mass (M_star^Tot and M_bar^Tot) of a galaxy, employing the same techniques as described in Section <ref>. For the STFR, we find a slope of α = 3.55±0.32, an offset of β=2.42±0.69, and an intrinsic scatter of ζ_int=0.13 dex. In the case of the BTFR, we observe a slope of α = 2.27±0.13, an offset of β=5.74±0.29, and an intrinsic scatter of ζ_int=0.09 dex. It is noteworthy that the STFR maintains nearly the same slope (steeper by 0.52 dex) and offset, as when using the stellar mass within R_out (see Figure <ref>). However, the slope and offset of the BTFR notably differ, shallower by 0.94 dex and higher by 2.58 dex respectively. The results of BTFR are rather surprising as they suggest that low-mass galaxies at high redshifts are highly gas-dominated systems. The later is previously suggested in <cit.>. However, given the shallowness of the relation, we underscore the need for accurate estimates of gas mass at high redshift. Most likely, the limitations lie in the HI scaling relations, which appear insufficient to constrain the total HI mass at high redshift. For these reasons, we do not report STFR and BTFR from the total mass in the main text. However, it could also be attributed to the lack of deep observations, potentially resulting in incomplete mapping of the circular velocity in low-mass galaxies, as reported in the latest study of <cit.>. Most likely, high-resolution (deep integration time) observations are required to put tighter constrains on TFR at high redshift. § TAILORED COMPARISON OF STFR AND BTFR WITH HIGH-Z STUDIES As discussed in Section <ref>, the GS23 sample represents a rotation-supported system, lies on and around the main sequence of star-forming galaxies, and adheres to fundamental scaling relations (see Figure <ref>). This suggests that the TFR of the full GS23 sample can be directly compared with local TFR studies as shown in Figure <ref>. As well as to keep the consistency towards sample selection between the local and the high-z Universe, we compare TFR of full GS23 sample with previous high-z studies as shown in Figure <ref> and <ref>. However, it is evident from the works of previous high-z studies, namely <cit.> and <cit.>, that their sample selection, measurements of stellar and gas mass, and fitting techniques differ from those employed in local studies, as well as being discrepant from our approach. Therefore, in this section, we briefly discuss the physical properties of the aforementioned studies that are relevant to the TFR, emphasize their sample selection, and then compare their TFR fits with our datasets. In particular, <cit.> study is compared with KROSS sample and <cit.> with KMOS3D sample. Comparison with Tiley+19: Following <cit.>, Tiley et al. constructed line-of-sight velocity maps using H_α emission. To determine the rotation velocity of the system, a 0.7" slit was placed along the major axis of the velocity map, enabling them to determine the rotation velocity using dynamical modeling based on the arctangent disk model <cit.>. Subsequently, were corrected for beam-smearing using the prescription outlined in <cit.>. We note that their are not corrected for pressure support; therefore, when comparing STFR, we use the rotation velocity of the GS23 sample instead of circular velocity (which incorporates pressure support corrections). The stellar masses of AT19 sample are derived using SED-fitting with the code <cit.> as discussed in <cit.>. In this work, we utilize the same stellar masses received through private correspondence with Alfred Tiley. The study by <cit.> focuses only on STFR for two distinct cases: (1) a disky sample characterized by V_ rot/σ > 3, and (2) a rotation-supported sample, i.e., V_ rot/σ > 1. We compare our sample and its best fit to both cases, as illustrated in Figure <ref>, with the upper left and right panels representing the disky and rotation-supported samples, respectively. In case of disky sample, upper left panel of Figure <ref> , we notice that AT19 sample is dominant towards massive systems (hence fast rotating). Conversely, the GS23 sample contains a larger dynamic range in velocities, i.e., also the lower velocities (low masses). This discrepancy is most-likely due to the capability of 3D forward modeling to account for low-mass systems, which are often overlooked in 2D kinematic modeling. Our analysis yields a slope of 3.20±0.48, diverging by a factor of 2.0 from the slope observed in AT19. This difference is yet again attributed to variances in kinematic modeling and fitting methodologies, which likely contribute to the shallow slope and distinct offset in our work. In the case of rotation-supported systems (upper right panel), the distributions of both samples match very closely in terms of velocity and stellar masses, as expected. However, there is a notable discrepancy in the best-fit relation. In particular, our slope of 2.29±0.22 differs from <cit.> by a factor of 1.14 dex. To understand this discrepancy, we matched the AT19 and KROSS samples from <cit.>, finding 96 matches, as shown in the bottom panel of Figure <ref>. We noticed that the discrepancy is only in terms of velocity estimates. When we fit this matched data of AT19 and GS23 using the orthogonal likelihood fitting technique, we obtained similar slopes that differ only in terms of offset due to differences in velocity estimates. This suggests that the discrepancy in the slope between AT19 and this work arises mainly from the fitting techniques. However, a minor difference could also stem from variations in beam-smearing corrections, which are implemented differently in both studies, or from the circular velocity measurements which are taken at different radii. Comparison with Ubler+17.: In accordance with <cit.>, <cit.> determined the radial velocity and velocity dispersion of the KMOS3D sample by placing a circular aperture with a diameter of 0.8 along the kinematic major axis, utilizing the LINEFIT code <cit.>, which considers spectral resolution. To obtain circular velocity profiles (rotation curves) for the system, they employed dynamic mass modeling of kinematic profiles using DYSMAL <cit.>, allowing for an exponential disk with a Sersic index of n_s =1. This modeling procedure encompasses the coupled treatment of radial velocity and velocity dispersion, incorporating beam-smearing <cit.> and pressure support correction assuming constant and isotropic velocity dispersion <cit.>. It is crucial to note that <cit.> employs 3DBarolo to estimate circular velocity profiles, utilizing a non-parametric approach while simultaneously correcting for spectral and spatial resolution (i.e., beam-smearing) in 3D space. Subsequently, these rotation curves () underwent pressure support corrections, as detailed in <cit.>, following the methodology of <cit.>. In the latter, pressure support corrections do not assume constant and isotropic velocity dispersion unlike <cit.>; instead, they account for velocity anisotropies. For further details we refer the reader to <cit.> and <cit.>. The star-formation rates and stellar masses in <cit.> are estimated through proper SED fitting techniques discussed in <cit.>, and <cit.>. Molecular mass estimates are obtained using the scaling relation of <cit.>. The HI gas mass is considered negligible within 1-3 R_e, i.e., M_ bar = M_⋆ + M_ H2. However, following <cit.>, the author applied larger uncertainties (0.2 dex) to total gas mass measurements to account for missing HI mass. In comparison, <cit.> consider M_ bar = M_⋆ + M_ H2 + M_ HI, where the estimates for stellar and molecular gas mass align with those of <cit.>. However, the HI mass is derived using a scaling relation. Further details on baryonic mass estimates are provided in Section <ref>. In terms of sample selection, <cit.> focuses on galaxies on-and-around the main-sequence of star-forming galaxies. However, to select the most disk-like systems, they apply a V_ rot/σ > √(4.4) cut and then study the STFR and BTFR. To facilitate one-to-one comparison of our fitting techniques, we select only KMOS3D sample from GS23 and apply a V_ rot/σ > √(4.4) cut, resulting in 53 remaining galaxies. We show the results of this tailored comparison in Figure <ref>, with the STFR in the left panel and the BTFR in the right panel. Notably, the distribution of stellar/baryonic mass and circular velocities is skewed in our sub-sample. In contrast, the <cit.> sample comprises 135 galaxies with a Gaussian distribution in both mass and velocities. In the GS23 sub-sample, the median circular velocity is approximately 150 km/s, with a stellar mass of ∼ 10^9.7 M_⊙ and a baryonic mass of ∼ 10^10.3 M_⊙. Conversely, in the <cit.> sample, the median circular velocities are around 250 km/s, with stellar and gas masses at ∼ 10^10.5 M_⊙. For this tailored comparison, our sample is biased toward intermediate-mass systems, while the <cit.> sample consists mainly massive systems. To fit the STFR and BTFR of the GS23 KMOS3D sub-sample, we employ the same techniques established in Appendix <ref> and applied in Section <ref>. The results are presented in Figure <ref>. For the STFR, we report a slope of α = 5.07^+1.09_-1.17, intercept β = -1.49^+2.64_-2.28, intrinsic scatter of 0.15 dex. For the BTFR, the reported values for the slope is α = 3.96^+0.81_-1.24, intercept corresponds to β = 1.36^+2.81_-1.84, and an intrinsic scatter of 0.14 dex. It is evident that at fixed stellar or baryonic mass, the circular velocity in <cit.> sample is higher by factors of 1.3-1.5. This is most-likely due to their pressure support correction method that assumes constant and isotropic velocity dispersion, which overestimates the circular velocity across the galactic scales. Additionally, we compare the STFR and BTFR of full KMOS3D sample of GS23 with <cit.> in second row of Figure <ref>. Nevertheless, we encountered the similar evolution in TFR slopes. This suggests that differences are arising due to difference in kinematic modeling and fitting techniques employed in our work. § FITS WITH VERTICAL SCATTER In this section we briefly discuss the results obtained using vertical likelihood that minimizes the intrinsic scatter in the vertical direction as defined in Eq. (<ref>). In Fig. <ref>, we show the best-fit and corner plots for the STFR and BTFR on the full GS23 dataset in the left and right panels respectively. For STFR, we report α = 1.31±0.10, β=7.063±0.21 and ζ_int=0.097. Similarly for BTFR we find, α=1.23±0.1, β=7.46±0.22 and ζ_int=0.15. It is interesting to note that the slopes for the STFR and BTFR differ from the orthogonal best fit slopes by 1.72 dex and 1.98 dex respectively.
http://arxiv.org/abs/2406.09067v1
20240613125420
How structured are the representations in transformer-based vision encoders? An analysis of multi-object representations in vision-language models
[ "Tarun Khajuria", "Braian Olmiro Dias", "Jaan Aru" ]
cs.CV
[ "cs.CV", "cs.CL", "q-bio.NC" ]
How structured are the representations in transformer-based vision encoders? Khajuria et al. Institute of Computer Science, University of Tartu {tarun.khajuria@ut.ee, braian.d@gmail.com, jaan.aru@ut.ee} How structured are the representations in transformer-based vision encoders? An analysis of multi-object representations in vision-language models Tarun Khajuria1 Braian Olmiro Dias1 Jaan Aru1 June 17, 2024 ================================================================================================================================================== § ABSTRACT Forming and using symbol-like structured representations for reasoning has been considered essential for generalising over novel inputs. The primary tool that allows generalisation outside training data distribution is the ability to abstract away irrelevant information into a compact form relevant to the task. An extreme form of such abstract representations is symbols. Humans make use of symbols to bind information while abstracting away irrelevant parts to utilise the information consistently and meaningfully. This work estimates the state of such structured representations in vision encoders. Specifically, we evaluate image encoders in large vision-language pre-trained models to address the question of which desirable properties their representations lack by applying the criteria of symbolic structured reasoning described for LLMs to the image models. We test the representation space of image encoders like VIT, BLIP, CLIP, and FLAVA to characterise the distribution of the object representations in these models. In particular, we create decoding tasks using multi-object scenes from the COCO dataset, relating the token space to its input content for various objects in the scene. We use these tasks to characterise the network's token and layer-wise information modelling. Our analysis highlights that the CLS token, used for the downstream task, only focuses on a few objects necessary for the trained downstream task. Still, other individual objects are well-modelled separately by the tokens in the network originating from those objects. We further observed a widespread distribution of scene information. This demonstrates that information is far more entangled in tokens than optimal for them to represent objects similar to symbols. Given these symbolic properties, we show the network dynamics that cause failure modes of these models on basic downstream tasks in a multi-object scene. § INTRODUCTION The representation of transformers have been analysed in multiple ways starting from This study investigates whether symbol-like structured representations may emerge in transformer-based vision models. In vision models, some studies have evaluated their representations, looking for aspects of human-like symbolic cognitive processes <cit.>. An advantage of searching for symbolic representations in vision models is that the input information is distributed in pixels, allowing explicit binding dynamics to be observed. Binding dynamics refers to the information about entities (objects, living things, see Fig <ref>.C) being bound together into smaller representational units, describing specific useful aspects of that entity <cit.>. Further, in vision models, the intermediate representations of objects are natural candidates to be formed into symbols, allowing one to formulate hypotheses about the nature of the representations. Building upon the notion of symbols and binding described in previous studies <cit.>, we propose three levels of properties for an image encoder representation to be more symbolic: 1) The model should be able to aggregate information specific to objects in the image into smaller units; 2) The model should represent various logical components of the input image in separate computational units, i.e., there should be object-wise information disentanglement; 3) A downstream processing network should be able to utilise the composition of these units for the required task, e.g., scene modelling for captioning. The first two of these proposed properties of being symbol-like promote better out-of-distribution generalisation by themselves. 1) Better binding of object information into a discrete set of units allows useful properties of the representation to be fully available for novel scenarios in solving downstream task. 2) Disentanglement of these units of representation makes sure that the presence of one object does not affect the representation of others. This is important for extreme cases where the objects are present out of the scene statistics of the trained distribution and these spurious correlations in representations of object presence may cause failure for the downstream task. The compositional abilities of these Vision-Language Models (VLMs) have already been tested and shown to be inadequate <cit.>. However, there is some evidence that while doing downstream tasks, primitive concepts emerge in model representations and are compositionally employed by the downstream network <cit.>. Transformers have already been shown to have global and local information aggregation within a single layer facilitated by the attention mechanism <cit.>. Also, due to the retention of the token space across layers, they could have enough computational units to maintain object information separately while binding them locally. Hence, there is a need to estimate the extent to which the first two properties of symbolic representations listed above are implemented in vision transformer models. For this purpose, we analyse the spatial representations in the vision transformers' token space. Our primary questions testing for properties 1 and 2 are the following: Do transformers represent and maintain object-wise representations? Are these representations disentangled, i.e. does a particular set of tokens only represent a particular object? We utilise the COCO dataset<cit.> with its instance object masks to characterise the token representations in relation to their input patch information. We set up decoding experiments in a two-object setting (in a multi-object scene) to determine how the encoders manage the representations of the two objects. We try encoder of three VLMs (CLIP <cit.>, BLIP<cit.> and FLAVA<cit.>) and compare them with the larger versions (CLIP-L, BLIP-L) and also check the representations against VIT trained for image classification and CLIP (Resnet X4) with a CNN backbone. We generally observe that object-specific areas hold the most discriminative features about the objects until the last layers. The token representations are not disentangled since they can decode other objects in the scene with accuracy far above random guesses. Across networks, we see similar patterns of decoding accuracies in transformer-based VLMs. We observe the VLMs trained on objectives requiring the modelling of multiple objects have better symbol-like differentiated object representations than VIT encoders trained for image classification. On the other hand, all VIT-based encoders have less differentiated and disentangled object representation in CNN-based CLIP (Resnet X4). Our results suggest a preference for modelling objects relevant to the downstream task (e.g., captioning for BLIP). This effect is more pronounced for the CLS token directly optimised for the task; other object-specific tokens still retain more high-level information about most objects. We analyse and discuss the implications of representation loss in CLS for failure in out-of-distribution task scenarios. § RELATED WORKS The overall problem with generalisation in DL methods and the need for explicit inductive biases that allow for discrete yet flexible information binding has been comprehensively discussed <cit.>. Discrete symbolic representations are considered a prerequisite for robust compositionality and reasoning <cit.> and new studies propose that such discrete information binding may originate from training existing models on large datasets <cit.>. This issue carries over to the field of meta-learning with its aim to introduce and compositionally utilise modularity in neural networks to improve generalisation <cit.>. Concept bottleneck models are a series of models that try to explicitly learn human interpretable intermediate outputs to be composed into final output labels <cit.>. This interpretability usually comes at the cost of downstream task performance. <cit.> introduced an extra loss in the intermediate layer on the networks for units to align to the interpretable concepts while preserving task performance. Many other works do not bind representations to any explicit intermediate concept but still have explicit discrete bottlenecks in their networks <cit.>. It has been shown that these networks learn better representations which further help generalisation on downstream tasks <cit.>. §.§ Evaluating model's reasoning capabilities Many works have proposed benchmarks, and others have tried to analyse the existing networks to estimate the reasoning capabilities of the ANN models. <cit.> proposed a visual question-answering benchmark to evaluate the reasoning skills on images, with open-ended and free-form questions and expected solutions in free-form natural language. Visual genome <cit.> provided content-rich images with explicit annotation of object position and relationships, promoting models that exploit such information. Datasets such as CLEVR <cit.> were designed to correct for biases the models would utilise in the existing benchmarks to perform well without explicit reasoning. The ARO (Attribute, Relation, Order) dataset <cit.> has been proposed to test the correct binding of information when compositionally representing multi-object scenarios. Particularly in VLMs such as BLIP and CLIP, they find that the model outputs do not bind the compositional properties well and attribute the wrong order and features to objects when describing them. §.§ Interpreting model's representations In terms of analysis of model representations for concepts, early work <cit.> showed the layer-wise progression of decoding accuracy for concepts in CNNs. <cit.> reveals the inner workings of transformers by showing how trained transformers can implement convolutions and, in the initial layers, form grid-like local attention patterns like a convolution filter. <cit.> shows that vision transformers differ from CNNs because of their ability to encode both local and global information in the initial layer. Instead, in CNNs, there is a multi-scale feature representation going from lower-scale local information captured in the initial layers to higher-scale global information captured in the higher layers <cit.>. Particularly for vision-language models, <cit.> analysed various models trained using cross-model attention, providing insights into the attention patterns between the two modality streams and the relative contribution of each modality towards downstream tasks. Further, they found function-specific attention heads in the pre-trained models. VL Interpret <cit.> was designed as a visualisation tool to interpret the vision-language model's instance-specific and aggregate statistics over attention distribution. Further, the tool helps visualise token representation as it passes through various network layers. In contrast to the generic visualisation tools that look into the model’s functioning, many studies inspect their representations, evaluating a specific computational functionality. <cit.> finds modular subnetworks in trained ANN models functionally responsible for separate tasks. Multiple studies further look into the notion of concepts in trained vision-language model representations. One study <cit.> designed a test to check if primitive concepts emerge in the network’s representations, which are used compositionally for downstream tasks. <cit.> defines tests for concepts in visual representations according to Fodor’s criteria <cit.> and tests these criteria using a controlled synthetic dataset. In our work, we build on the intuitions from these works while trying to formulate our hypothesis about symbolic representations in a limited scope but testing it with images from natural scenes. § METHODS We design probes and use similarity measures to evaluate the network's representational structure. In the following section, we describe the details of the experimental setup, data and the networks analysed. Experimental Setup: We decode the combination of objects in the image from a single token or an average of tokens obtained from various parts of the image (see Fig. <ref>.A). The tokens we are interested in include the 1) CLS token (the token used and trained for the downstream task), 2) Average token (avg_obj): the average of token representation obtained from the object, 3) Random object (random_obj) token: a single token randomly sampled from the object 4) Random token (Random): a randomly sampled token from the image that served as a baseline. The token representation is obtained at the output of each layer. In this process, to identify the tokens originating from an object, we scale the segmentation mask of the object to the size of the token space. The probing task is designed as a classification task with the following settings: we use the tokens originating from 1) Primary object, 2) Secondary object and use it to decode 1) Primary object category, 2) Secondary object category, 3) Combination of primary and secondary object categories (see Fig. <ref>.B). We train and evaluate the probes with a train/val/test setup with 80/10/10 percent data splits. The reported accuracies are all final test set accuracies. All trained probes are linear and use the Scikit-learn<cit.>'s perceptron implementation, with its default parameters. To check the baseline performance of these representations to decode objects in a complex (20 classes) task, we train layer-wise global object classification probes for each layer of the networks. These probes utilise first 40000 images from the MSCOCO train set for training the probes and the 5000 images in the validation set for testing. To make the linear probe with less learning capacity, we use only the top 20 frequent object categories in the 40000 training images in the probing task. Dataset: We needed instance segmentation masks to associate the tokens to the object in the image. Hence we used the COCO dataset and created subsets with enough combination of two object categories co-occurring in them. We then excluded images with more than one object category of primary or secondary objects. While choosing the objects across primary and secondary categories we preferred objects more likely to interact in the scene. Within the primary and secondary categories we preferred objects that are similar to each other, so their embedded representations are not naturally distinct, increasing our probe's sensitivity. We used six task sets with a total of 16,288 images. For example, the first task contains a combination of objects from two sets, i.e. a primary object: animal (cat/dog) and a secondary object: furniture (chair/bench/bed/couch). The dataset and its tasks are detailed in Table <ref>. We call this set of tasks 'object pair decoding tasks'. The global probes were trained and tested on a larger dataset. We selected the first 40000 images in COCO's training set for training and the 5000 images in the validation set for testing. Linear Probing: We probe the representations in pre-trained networks for our analysis. Probing an information system involves obtaining a representation, usually in the form of a vector from the system in response to an image. Then, we estimate if that particular vector can classify information about the stimuli correctly. The kind of information the probe can learn to classify and the complexity of the probe (i.e. is it just a linear classifier or a complex multilayer NN) indicates the nature of information present in that layer’s representation. We obtain representations from various parts (layers and spatial sections tokens/cells) of the network and use them to understand the network by looking at the classification ability of a linear probe. Analysed Networks: We analysed the image encoders of seven models: BLIP (ViT-B/16 and ViT-L/14) model for image captioning, CLIP for image-text matching, FLAVA model with an additional multimodal encoder on top of vision and language encoders and finally, VIT-B/16 encoder trained on imagenet21k image for image classification task. We analysed both VIT B/16 and L/14 (Image Transformer) and the Resnet X4 (CNN) image encoder for the CLIP. In CNN analysis, we obtained feature cells instead of tokens, i.e., we used the vector representation of a cell in the feature map by accumulating all the filter outputs at the cell location. In both transformers and CNNs, the place on the feature map representing the object is computed by scaling the object segmentation map to the size of the feature map at the layer. The images are pre-processed with the standard pre-processing function and setting provided along with the pre-trained network instances. § RESULTS §.§ A snapshot of network's global and image-specific representation Decoding the objects in a multi-object scene using the trained model's representations gives us a picture of how the information about the scene is organised in the network. In our study, we have used two kinds of decoding tasks (object-paired tasks and global decoding tasks) to estimate the architecture-level organisation of these representations. We also generate representation similarity maps to analyse the representation of a single image in a particular model. Using the paired object decoding tasks, we can see that the objects in the network can be linearly decoded considerably above random accuracy by using a single token representation originating from the object or by using an average of token representations from the object. The general trend followed by these decoding accuracies can be seen in Fig <ref>. The global object decoding probes, with their 20 class classification setting, provide a good baseline of the ability of the token's representations to decode between multiple objects. The high correlation of these accuracies for each network (see Fig. <ref>) to paired-object classification tasks' results shows that the accuracies are not obtained due to an easy 2-way or 4-way classification task set up in the paired object tasks. Finally, in Fig. <ref>, you can see the detailed results for decoding accuracies for the four types of token representation used in our analysis for the paired object decoding tasks. This provides a more detailed global picture of the model's representations of object pairs from the images. For an instance-wise image-level analysis of representations, we generate similarity maps of the object representations that show how a particular token's relation to other tokens changes across layers in the model for a particular image. In <ref>.b, an image shows a representation similarity map for two tokens in an image 1) from the primary object 2) from the secondary object. One can notice how the representations of tokens originating from the same token start having similar representations as we move toward the upper layers. The tokens from the primary object acquire similar representation, but for the secondary object, many tokens that are outside the object also have high cosine similarity in the last layer. §.§ How different token representations encode objects, their interaction and their importance In the paired-object tasks, the images consist of a primary object and a secondary object. We see the representation of these two object combinations in each image in four type of token representations from the models. Based on the results from the BLIP model shown in Fig <ref> we now discuss the general trend of representations across models. Specific differences are discussed in subsection <ref>. In our results (see Fig. <ref>), in all pre-trained models, the primary objects are decoded equally well by the CLS token as the average token representation of the object. There is a decrease in decoding accuracy from primary to secondary object categories. This is partially due to the added complexity of a 4-way classification in the secondary object. However, the CLS token decodes the secondary objects with much less accuracy than both the object-specific tokens (avg_obj and random_obj). The CLS tokens that are optimised for the downstream tasks in each of these networks are expected to model the best information about the scene. But this also means that not all objects are linearly decodable by the CLS token. We observe that the object-specific tokens have better object decoding accuracy as compared to the CLS token (In BLIP Fig <ref> primary object CLS:0.94 vs Avg_Obj: 0.97 and secondary object CLS: 0.69 vs Avg_Obj: 0.84). There is a particular degradation in decoding performance using the CLS token for secondary objects, where the accuracy for the CLS token is far below the decoding accuracy of object-specific tokens (avg_obj and random_obj). Previous analysis showed that the object-specific tokens have the highest accuracy for decoding the particular object from which they originated. Still, they also have high decoding accuracy for the other objects in the image. This accuracy is far above random guess and allows these tokens to decode a combination of objects in the current image (see Fig. <ref>, for results on BLIP). The final decoding accuracy for the combination is 0.59 when using the primary (avg_obj) object token and 0.69 when using the secondary one. In Fig. <ref>, one can notice that the primary avg_obj token is much worse at representing the secondary objects than the other way around. These results have implications for the disentanglement of these object-specific tokens. Further, the CLS token also shows a decent decoding accuracy as it is optimised to capture more global information from the image. This accuracy is not higher than the object-specific secondary tokens due to its poor capturing of information about these secondary objects in the scene (CLS token decoding accuracy for the secondary object is 0.69 compared to 0.94 for the primary object, as seen in Fig. <ref>). §.§ VLM models trained on multi-object images have more modular representations than VIT (trained for single object classification) In this section, we compare the representations of the models with each other. For the global decoding task, we see that the newer VLMs like FLAVA and BLIP have higher decoding accuracy with their object-specific representations than the other counterparts like CLIP (see Fig <ref>). Larger models like CLIP-L and BLIP-L counterintuitively perform worse than their base counterparts. Specifically, CLIP-L shows fairly lower decoding accuracy for both single random object and average object token representation (random_obj and Average_obj). In the paired-object decoding tasks, the VLM models follow the overall trend of decoding accuracy being correlated to the results in the global object probing tasks (see<ref>). The relative accuracy trends for various tokens are similar across VLM models. The notable observation is the higher overall decoding performance of FLAVA models, which also shows a higher differentiation of object-wise representations, i.e the random token accuracy is fairly lower in the last layer compared to object-specific tokens. We observe a difference in the structure of representations due to architecture (Transformer vs CNN) and specific training on multi-object tasks i.e ViT vs other models. Hence, we observe a significant decrease in decoding accuracy using the random CNN unit representation compared to random ViT tokens (In the last layers, primary object ViT: 0.84 vs CNN: 0.72; secondary object ViT: 0.54 vs CNN: 0.45)(see <ref>). Further, the object-specific tokens in CNNs have lower accuracy while decoding the other objects than their ViT counterpart (In the last layers, primary object ViT: 0.88 vs CNN: 0.8; secondary object ViT: 0.6 vs CNN: 0.59). This indicates that CNN has less entanglement of object-specific information across objects than the ViT counterpart in CLIP. We attribute these results to CNNs not having the ability for the information to travel across units in each layer, hence the object information remains more localised. VIT trained on ImageNet21k shows the least differentiation between object-specific and other tokens as compared to the other Transformer models. Here a random token decoded the object with almost similar accuracy as compared to the CLS token (see Fig. <ref>). And tokens from one object and similarly decode the category of other objects in the scene. Hence it seems like the scene-level information is more uniformly dispersed in the representations of ViT tokens trained only on single object classification task. This is in comparison to all the ViT models trained on tasks requiring modelling of more than one object for a correct representation of the information required for downstream tasks like text matching, captioning etc. Further, the differentiation of object-specific token and consequently the disentanglement of information is more explicit in these networks than VIT. §.§ Special tokens may not be special for your downstream task with multi-object images Following the observations about the lower decoding accuracy of secondary objects using CLS tokens, we wanted to check if the low accuracy results from the object’s importance in the downstream task (captioning in the case of BLIP). Hence, we analysed the decoding accuracy for two sets of data. The set was divided based on whether the object was directly mentioned in the caption generated by the model. Due to this split, we rejected tasks with less than 400 samples in either of the new sets. Therefore, we are left with 3 tasks whose average results are reported in Fig. <ref>a. In our results in Fig. <ref>a, we report a decrease in decoding accuracy of the objects not mentioned in captions using all kinds of tokens considered in our analysis (Average final layer accuracy for primary object, which was for 'in caption': CLS:1 and avg_token: 0.99, drops to CLS: 0.87 and avg_obj 0.93 when the object is 'not in caption'. In the secondary object, the drop is from 'in caption': CLS: 0.96 and avg_obj: 0.99 to 'not in caption': CLS: 0.79 and avg_obj: 0.82). This means the network pays more attention to certain objects, and its learning of discriminative features deteriorates for objects not mentioned in the captions. We note that the decrease in the decoding is most pronounced for the CLS token, showing the direct effect of the downstream task on the representation. We further use this understanding to check for failure in zero shot object detection (i.e. retrieval) tasks using CLIP in a multi-object setting. We use the classification task setting for the global probe. We use the object-specific representation using 1 token and avg_obj token to learn probe to classify the top 20 objects in test set images. Then we evaluate the CLIP zero shot performance to detect objects in the image using the prompt 'An image containing a 'category”. Where the category was replaced with 20 candidate objects. We finally evaluate the accuracy of finding the objects. A fourth multi-class object detection probe is learnt using the CLS token representation. As we compare the accuracy of these four methods on the simple object retrieval task, we find that the probes using the average patch representation gives the best accuracy followed by random_obj token and zero shot model accuracy and multi-class probe (See Fig. <ref>b). § DISCUSSION In this work, we started by identifying three characteristics required in representations of image models for them to be symbol-like: 1) The architecture should be able to aggregate information specific to objects on the image into smaller units; 2) The architecture should represent various logical units of the input image in separate computational units, i.e., there should be object-wise information disentanglement; 3) A downstream processing network should be able to utilise the composition of these units for the required task. Most vision encoders fulfill the first criterion we described as they aggregate useful information from the image into smaller representation space useful for the downstream task. However, we were interested in whether separate representational units specialise in representing the image's separate constituent parts (objects). Our analysis shows that the object-specific (avg_obj, random_obj) tokens show decent decoding accuracy across the models. Each object-wise token’s accuracy for classifying itself is our estimate for fulfilling this criterion. For the second characteristic, we looked at the network’s capability to form disentangled representations of objects. We saw evidence that the information about other objects also leaks into the object-wise representations. Ideally, each object representation can only decode that particular object and other objects in the scene cannot be decoded by that representation. As the other objects can be decoded in our object representations (with accuracies of 0.89 by secondary object representations and 0.65 by primary object representations), their representations are already affected by the context, i.e. other objects. This finding implies that the downstream task using these representations compositionally still cannot perform structured composition as with abstract symbols. Further, the higher the decoding accuracy of other objects in the scene, the higher the entanglement in the representation. The entangled object representation will affect the downstream network’s outputs if the context information is irrelevant to the task. Our comparative analysis of the BLIP model's performance showed a notable degradation in decoding efficacy across all representations when objects were omitted from its captions. This observation underscores the significant influence of the model's downstream objective on its representational capabilities. On the one hand, this enhances the disentanglement of object representations, reducing interference from objects insignificant to the task. Conversely, it reduces the model's generalizability, particularly when deployed in diverse tasks or faced with unfamiliar input distributions. Our results show that this problem is most exacerbated in the representation of the CLS token. Most research predominantly assesses these networks' capabilities based on their final outputs, which rely on representations like a CLS token (for example in <cit.>). Our findings show a loss in representing multiple objects during this process of funneling information into the CLS token. Further, the downstream objective's effect is the most extreme on the nature of these representations. We show that this causes suboptimal performance of these networks in simple tasks such as retrieval in the multi-object setting. We further postulate that these phenomena (non-binding of certain objects representations or entangled representations) significantly contribute to the models' suboptimal performance in tasks that require to compose using the representation of objects in scenarios involving multiple objects. In evaluations of CLIP, FLAVA and similar models within a multi-object context, the ramifications of the CLS token bottleneck and its training procedure (training data and objective) must be meticulously considered. The objects represented and the nature of their representation in this bottleneck are heavily influenced by the downstream objective or a co-occurring object, thereby affecting the accuracy of outputs in tasks that diverge from the types of instances encountered during training. Future work can utilise our method to look into training better readouts and fine-tuning on specific underrepresented object images. § LIMITATIONS The probing and representation analysis methods we used in our analysis have some limitations, and the results must be carefully interpreted. High decoding accuracy in a decoding task with two or four objects may be due to the easy decoding task (if the object classes are naturally distinct and it is only a 2/4 way classification). Further, the fact that one can classify a particular object from a few other classes may depend on representing a single or a few features distinguishing between the classes. This representation may not model many other aspects of the object. We controlled for this limitation by training global probes for 20-class classification using the same representations, showing reasonably high accuracy. Likewise, a lower decoding accuracy does not mean that the model does not have information about the particular object; the information is just not encoded with linear distinction at that layer/token. Overall, we are averaging over six tasks in many object categories. Hence, even though the exact numbers may not reflect the model's exact state, the relative numbers are helpful in making inferences about the model and its representations. § CONCLUSION In this work, we first formulated three characteristics for symbolic representations in an image encoder. We then analysed the representations of transformer-based image encoders in VIT, BLIP, CLIP, FLAVA and CLIP (Resnet X 4) image encoders. Through probing tasks, we observed that object-specific areas hold the most discriminative features about the objects until the last layers. Their discriminative ability decreased when the objects were not important for the downstream task but was still reasonably maintained. The token representations are not disentangled since they can decode other objects in the scene with accuracy far above random guesses. We found that the aggregate image representation, i.e. CLS token for transformers, does not represent all objects, and its discriminative ability degrades the most when the object is not useful for the trained downstream task. Our work thus characterises the extent to which the representations of these image encoders are symbolic and gives insights into the failures on out-of-distribution downstream tasks that utilise the composition of these features. splncs04
http://arxiv.org/abs/2406.08199v1
20240612133102
Why does the Milky Way have a metallicity floor?
[ "Britton D. Smith", "Brian W. O'Shea", "Sadegh Khochfar", "Matthew J. Turk", "John H. Wise", "Michael L. Norman" ]
astro-ph.GA
[ "astro-ph.GA" ]
firstpage–lastpage HTIM: Hybrid Text-Interaction Modeling for Broadening Political Leaning Inference in Social Media [ Received May 18, 2024; accepted June 6, 2024 ================================================================================================= § ABSTRACT The prevalence of light element enhancement in the most metal-poor stars is potentially an indication that the Milky Way has a metallicity floor for star formation around ∼10^-3.5 . We propose that this metallicity floor has its origins in metal-enriched star formation in the minihalos present during the Galaxy's initial formation. To arrive at this conclusion, we analyze a cosmological radiation hydrodynamics simulation that follows the concurrent evolution of multiple Population III star-forming minihalos. The main driver for the central gas within minihalos is the steady increase in hydrostatic pressure as the halos grow. We incorporate this insight into a hybrid one-zone model that switches between pressure-confined and modified free-fall modes to evolve the gas density with time according to the ratio of the free-fall and sound-crossing timescales. This model is able to accurately reproduce the density and chemo-thermal evolution of the gas in each of the simulated minihalos up to the point of runaway collapse. We then use this model to investigate how the gas responds to the absence of H_2. Without metals, the central gas becomes increasingly stable against collapse as it grows to the atomic cooling limit. When metals are present in the halo at a level of ∼10^-3.7 , however, the gas is able to achieve gravitational instability while still in the minihalo regime. Thus, we conclude that the Galaxy's metallicity floor is set by the balance within minihalos of gas-phase metal cooling and the radiation background associated with its early formation environment. stars: Population III – stars: Population II – stars: abundances – galaxies: formation – galaxies: high-redshift § INTRODUCTION The abundance patterns of extremely metal-pool stars comprise nearly all of the observational data that we have on the process of star formation below a metallicity of about 10^-3 . One of the most notable features of these objects, collectively, is their enhancement in light elements, namely C and O, with respect to Fe, the classical proxy for “metallicity.” From this, we have the notion of Carbon-Enhanced Metal-Poor (CEMP) stars <cit.>, which comprise a majority of stars with [Fe/H] < -3. If we consider the combined metal content of these stars, however, another interpretation of the CEMP phenomenon is that of a metallicity floor <cit.>. Indeed, as noted there and elsewhere, [C/Fe] seems to increase with decreasing [Fe/H] for the most metal-poor objects. This point was made succinctly by <cit.>, who introduce a combined C/O abundance measure known as the transition discriminant (D_ trans), given by D_ trans≡log_10 (10^[C/H] + 0.3 × 10^[O/H]). When viewed from this perspective <cit.>, the Galaxy appears to have a metallicity floor of D_ trans∼ -3.5, equivalent to an absolute metallicity of about 10^-3.6 . This number has very interesting astrophysical significance. Well above this metallicity, observations overwhelmingly support the existence of a roughly universal stellar initial mass function (IMF), robust to physical conditions and producing predominantly low mass stars <cit.>[In practice, direct observational evidence of the IMF extends down to around ∼10^-2 <cit.>.]. Well below it (specifically, zero metallicity), we expect the star formation process to have fundamentally different results, i.e., a distinctly top-heavy IMF that may also have environmental dependence <cit.>. To zeroth order we know that H_2 (the main coolant in cold, metal-free gas) plants the initial seed for a top-heavy IMF through its inefficient cooling, which limits Jeans fragmentation to mass scales of ∼1000 [We note that most simulations now show the dense accretion disk surrounding the massive pre-stellar core will readily fragment into small clumps <cit.>, but this does not alter the main conclusion.]. If we then consider the point during the collapse of metal-free gas where H_2 cooling becomes inefficient and fragmentation ends, it is possible to calculate the amount of metal required for continued cooling and fragmentation, arriving at a metallicity of ∼10^-3.5 . Specifically, this is done by equating the cooling time with the free-fall time at the relevant density and temperature (n ∼ 10^4 cm^-3 and T ∼ 200 K). What's more, as was first pointed out by <cit.>, the most important metal coolants in this regime come from atomic C and O. Thus, it is tempting to conclude that the Galaxy's metallicity floor traces the “gas-phase critical metallicity” for the transition from Population (Pop) III to Pop II star formation. Problems exist with this interpretation, however. Another significantly lower critical metallicity for fragmentation exists due to cooling from dust <cit.>, with a value of ∼10^-5.5 <cit.>. Decoupling dust from gas-phase metallicity, this value has also been expressed as a critical “dusticity,” D_cr∼ 5×10^-9 <cit.>[This is about ∼5×10^-7 of the local dust-to-gas ratio of 0.009387 <cit.>.], still far less total metal than the previously stated gas-phase critical metallicity. As well, both observations and simulations indicate that the Universe has the ability to create low mass stars above the dust-phase critical metallicity but below the gas-phase one. Famously, the star SDSS J102915+172927 <cit.> has [Fe/H] ∼ -5 and [C/H] ∼ -4.4. Complementing that, the fully cosmological simulations of <cit.> showed that such a low metallicity (in that work, Z ∼ 2×10^-5 ) star-forming environment could arise in a minihalo on the cusp of Pop III star formation that is enriched by a supernova blast-wave originating from another nearby minihalo, i.e., prompt external enrichment. In summary, the Galactic metallicity floor does not appear to be a signal of IMF transition. It seems logical, though, that the physical context in which to place the supposed metallicity floor is enriched minihalos. Cosmological simulations show them to be particularly capable of forming extremely metal-poor stars, either by external <cit.> or internal enrichment <cit.>. Crucially, we define a minihalo as a dark matter halo with virial temperature too low to cool via Lyman-α emission (i.e., less than ∼10^4 K). Thus, without metals it must cool via H_2. Once again, it becomes tempting to try and equate H_2 and metal cooling rates. And so we have. In this paper, we present a novel explanation for the existence of a Galactic metallicity floor in which minihalos must reach a minimum metallicity to form stars in the presence of a photodissociating Lyman-Werner background associated with the early formation environment of the Milky Way. We arrive at this result via a somewhat indirect route. First, we use an extremely high resolution cosmological simulation to characterize the origin of gravitationally unstable gas in minihalos. We then utilize the lessons learned to construct a one-zone model capable of reproducing the long term evolution of minihalo gas. Following this, we employ the model to investigate the reaction of these halos to the loss of H_2 and the metals required to supplement this. Finally, we introduce our interpretation of the metallicity floor and/or CEMP stars. In Section <ref> we describe the simulation that forms the foundation of this work. In Section <ref> we present the relevant results from the simulation, the subsequent “Minihalo model,” and the most important metallicity thought experiment. We discuss our key interpretation in Section <ref>, potential caveats in Section <ref>, and provide a summary of results and concluding remarks in Section <ref>. § THE SIMULATION The simulation used in this work is of the family of simulations. It is a variant of the one presented in <cit.>, with two key differences. The simulation of the previous work was designed to investigate the conditions leading to metal-enriched star formation above the dust-phase critical metallicity, whereas here the goal is to do the same, but for the gas-phase critical metallicity. Hence, the two changes are an increase in the threshold metallicity for Pop III star formation from 10^-6 to 10^-4 and the exclusion of dust grains in the chemistry and cooling. We briefly describe the simulation code and setup below but refer the reader to that work for a thorough discussion of the methodology. We also refer the reader to <cit.> for a description of the evolution of the Pop III star forming minihalos in the simulation. §.§ Simulation Code We use the simulation code <cit.> for the work presented here. is an open source, adaptive mesh refinement (AMR) + N-body cosmological simulation code that has been used to study a wide variety of astrophysical phenomena, especially in the era of Cosmic Dawn. solves the equations of ideal hydrodynamics (adapted for cosmology) in the comoving frame using a block-structured Eulerian AMR framework <cit.> to dynamically add resolution to regions of interest based on several criteria. Here we employ the Piecewise Parabolic Method hydro solver of <cit.> coupled to an N-body adaptive particle-mesh gravity solver <cit.>. We use the adaptive ray-tracing radiation transport solver <cit.> to propagate H/He ionizing radiation from individual Pop III stars. We model the chemistry and radiative cooling of the gas using machinery that was, at the time of simulation, a precursor to the library <cit.>. This solves a non-equilibrium network of 12 primordial species (H, H^+, He, He^+, He^++, e^-, H_2, H_2^+, H^-, D, D^+) coupled to a table of metal cooling rates computed under the assumption of collisional ionization equilibrium with the 2013 release of the photo-ionization code <cit.>. The photo-heating, -ionization, and -dissociation rates from the radiation transport are coupled directly to the non-equilibrium primordial chemistry network. As mentioned earlier, we do not include dust grains and instead assume all metal remains in the gas phase, in contrast to <cit.>. Finally, we model the formation and feedback of individual Pop III stars using the method described in <cit.>. We insert a star particle representing a 40 Pop III star when the following criteria are satisfied: the proper baryon number density is greater than 10^7 cm^-3; the velocity field at the grid cell has negative divergence; the molecular hydrogen mass fraction (H_2 plus H_2^+) is greater than 5×10^-4 with respect to the total density; and the metallicity is less than 10^-4 . After a Pop III star particle is created, the radiation transport solver <cit.> propagates H, He, and He^+ ionizing radiation, discretized into energy groups of 28 eV, 30 eV, and 58 eV, respectively. The H_2 photo-dissociating Lyman-Werner (LW) radiation is assumed to be optically thin. Its intensity is scaled as r^-2 from the source and decremented by H_2 self-shielding following <cit.>. The 40 star particle has properties derived from the stellar models of <cit.>: a main-sequence lifetime of 3.86 Myr and photon luminosities of 2.47×10^49 s^-1 (28 eV), 1.32×10^49 s^-1 (30 eV), 8.80×10^46 s^-1 (58 eV), and 2.90×10^49 s^-1 (LW). After the star's lifetime, it explodes as a standard Type II core-collapse supernova with energy of 10^51 erg, metal yield of 11.19 , and total ejecta mass of 38.6 <cit.>. We emulate an explosion by depositing the energy and mass into a sphere of radius 10 proper pc. When gas meets the first three formation conditions but exceeds the threshold metallicity, we instead use the AMR to follow the gravitational collapse to densities at which our chemistry assumptions break down, then end the simulation. §.§ Simulation Setup This is a cosmological simulation with a 500 kpc/h comoving box size following the formation of a dark matter halo reaching ∼1.7×10^7 by a redshift of 10. The simulation is initialized at z = 180 with the initial conditions generator <cit.> with the WMAP 7 best-fitting cosmological parameters: Ω_ m = 0.266, Ω_λ = 0.732, Ω_ b = 0.0449, H_0 = 71.0 km/s/Mpc, σ_8 = 0.801, and n_ s = 0.963 <cit.>, with an <cit.> transfer function and using second-order perturbation theory. In an exploratory dark matter-only simulation with 512^3 particles, we identify our target halo as that with the most unique progenitor halos with masses of at least 10^6 , thereby maximizing opportunities for Pop III star formation. We then regenerate the initial conditions with two levels of telescoping refinement surrounding the Lagrangian region of the target halo. This corresponds to an effective resolution of 2,048^3 particles and a dark matter and baryon mass resolution of 1.274 and 0.259 , respectively. As the simulation evolves we allow AMR within the high resolution region when any of the following criteria are met: there are more than four dark matter particles within a cell; the baryon mass exceeds four times the mean baryon mass per cell multiplied by a factor of 2^-0.2L, where L is the refinement level (making baryon refinement slightly super-Lagrangian); or the local Jeans length is resolved by fewer than 64 cells. This Jeans length resolution has been shown to be sufficient to avoid numerical fragmentation <cit.>. The above criteria are given in roughly the order in which they become relevant. That is, refinement is first triggered by dark matter overdensity as a halo assembles, then baryon overdensity as gas settles into the potential well, and then by the Jeans length criterion when the gas becomes self-gravitating <cit.>. We place no practical limit on the number of AMR levels and instead allow the simulation to proceed at the numerically appropriate resolution given the refinement criteria. In practice, the “cruising altitude” of the simulation is generally 8–9 levels of AMR (∼3-5 pc comoving spatial resolution), reaching around 15 levels (0.04 pc comoving) just prior to Pop III star particle creation. § THE EVOLUTION OF GAS IN MINIHALOS The model presented here is motivated by the behavior observed in the simulation of metal-free gas collapsing within the minihalos that go on to form Pop III stars. We first describe the evolution observed in the simulation, then present a simple model capable of reproducing the evolution of the central gas. Finally, we use this model to study a scenario in which minihalos are exposed to sufficient photo-dissociating radiation to totally destroy their molecular gas. §.§ From the Simulation Nine distinct Pop III star formation events occur within the simulation by the final output at z ∼ 11.8. Of these, three are doubles (for a total of 12 stars) wherein two star particles are created in the same molecular cloud within 1000 years of each other. We do not examine further in this study the nature of these double events. Following the convention of <cit.>, we refer to each halo according to the star particle(s) created therein. Only seven of the nine halos are completely devoid of metals. The other two halos (3 and 12) are enriched by nearby supernovae. The metallicity of the gas in which their star particles form is below the threshold we use to consider them Pop II, hence they form as Pop III stars. Nevertheless, we omit these two halos from the study. We trace the merger history of each of the seven halos back to the point at which the most massive progenitor has a mass of 10^4 . Figure <ref> shows radial profiles of the first halo to form a Pop III star. Being the first halo to undergo star formation, its gas is unaffected by radiative or supernova feedback from neighbors and it is thus the simplest case to display. We show the evolution of gas within the halo for the ∼60 Myr leading up to the creation of the star particle, which occurs at z ∼ 24. The final profile shown corresponds to the last snapshot before the star particle was created. Over this time period the halo grows from a mass of ∼1.1×10^4 to ∼1.2×10^5 . The central ∼10% of the halo's gas is nearly in hydrostatic equilibrium for most of its evolution. Only in the final 7 Myr, when the temperature drops rapidly due to cooling by HD, is the central gas under-pressured with respect to hydrostatic equilibrium. The Bonnor-Ebert mass is the minimum mass in which a cored, isothermal sphere of gas experiencing a fixed external pressure becomes gravitationally unstable. The middle-right panel of Figure <ref> shows the ratio of the enclosed gas mass to the local Bonnor-Ebert mass, given by M_ BE≃ 0.8 ×c_s^4/G^3/2 p^1/2, where c_s is the local sound speed, G is the gravitational constant, and p is the external pressure. Here, we approximate the external pressure as the maximum of the local thermal pressure and the hydrostatic pressure (i.e., the pressure exerted by the weight of the gas at larger radii, Equation <ref>). At an enclosed gas mass of roughly 40 (denoted by the vertical, dashed line in Figure <ref>) this ratio grows quickly at late times, approaching unity as the time of star particle creation nears. This formalism as we have employed it does not include the influence of the dark matter, but we can do so approximately by considering the ratio of the sound-crossing time and the free-fall time, which is expressed as t_ ff = √(3 π/32 G ρ_ tot), where ρ_ tot is the sum of the gas and dark matter densities. This is the criterion for the Jeans mass, which applies to a parcel of gas at constant density and temperature. We show this in the lower-left panel of Figure <ref>. At the 40 mass coordinate, the sound-crossing time is lower than the free-fall time until late times. At early times the presence of the dark matter keeps the two timescales within a factor of a few, with the cooling time much longer than the free-fall time. This is consistent with the notion of quasi-hydrostatic evolution evidenced by the pressure profiles. Later, when the sound-crossing and free-fall times are similar, the gas appears to undergo runaway collapse. During this period, the cooling time becomes very long with respect to the free-fall time. This is due to the steep drop-off in the cooling rate as a function of temperature. The peak in ratio of enclosed gas mass to Bonnor-Ebert mass occurs at the same mass coordinate as the peak in the ratio of the sound-crossing time to free-fall time. At this point the gas density dominates over that of the dark matter, so ρ_gas≈ρ_tot. As well, if one takes the external pressure to be the local pressure, as we have done in Equation <ref>, then c_s/√(p) reduces to ρ^-1/2, giving the Bonnor-Ebert and Jeans masses the same proportionality and a normalization constant differing only by a factor of ∼1.5. We observe the behavior described above in all Pop III star-forming halos within the simulation. The central gas remains in approximate hydrostatic equilibrium as it slowly cools and condenses, eventually becoming under-pressured with respect to hydrostatic equilibrium at late times. The primary difference between each of the Pop III star-forming halos is the mass coordinate at which the Bonnor-Ebert mass is finally exceeded (Figure <ref>, i.e., the mass of gas that becomes gravitationally unstable). This ranges from about 40 in halo 1 to nearly 1,600 in halo 4, similar to the range observed in the sample of 100 minihalos simulated by <cit.>. The ratio of gravitationally unstable cloud mass to halo mass shows a weakly positive trend with halo mass (albeit, in our very small sample), with a mean value of roughly 10^-3. The correlation between cloud mass and halo mass is not surprising as the Bonnor-Ebert mass increases with temperature, which in turn increases with virial mass (i.e., T_v∝ M_v^2/3 (1+z)). §.§ The Minihalo Model What drives the gas in these halos to condense, collapse, and eventually form stars? Our aim is to model the evolution of gas in each minihalo from the time it was first resolved in the simulation, taking into account their unique growth histories and external influences. In particular, we wish to follow the parcel of gas that ultimately becomes gravitationally unstable. Hence, it is crucial to reproduce the temporal evolution of the gas as well as its path through phase space (i.e., density vs. temperature). Below we present our one-zone Minihalo model, which takes into account the physical conditions experienced by individual halos and augments conventional free-fall evolution with a pressure-driven component. We first describe the model in full, then evaluate the importance of various physical processes. §.§.§ Model Summary As we wish to model the gas in each of our simulated minihalos individually, we must first construct the time-dependent data characterizing the influences experienced as the halos grow and other stars form nearby. We create radial profiles of each halo from every simulation snapshot in which it exists and remap them from radius to enclosed gas mass (e.g., Figure <ref>). We identify the mass coordinate with the highest ratio of enclosed mass to Bonnor-Ebert mass in the snapshot immediately prior to Pop III star particle formation. We then initialize the relevant gas quantities (i.e., density, internal energy, chemical species fractions) using the values in that mass coordinate at the time when the halo's most massive progenitor has a virial mass of roughly 10^4 . We also make note of the radius corresponding to the target mass coordinate in the profile. This serves as the initial radius of the gas parcel, which we update as the density changes, assuming spherical symmetry. As the model runs we update the dark matter density (used in the modified free-fall component) and photo-chemical/heating rates (stimuli to the chemistry solver) using the radial profiles at the relevant cosmic time and mass coordinate. From one time step to the next, we update the internal energy and chemical species using the [https://grackle.readthedocs.io/] chemistry and cooling library <cit.> with the same settings as were used in the simulation. Once the model is initialized, the procedure is as follows: * Calculate time step as the minimum of the cooling time and free-fall time multiplied by a safety factor, here 0.01. * With current density (ρ_n), internal energy (e_n), and chemical species fractions (𝐲_n), solve chemistry and cooling to update e_n and 𝐲_n to e_n+1 and 𝐲_n+1 with ρ unchanged. * Update ρ_n to ρ_n+1 following the process described in Section <ref> below. * Update the internal energy to account for the change in density from step <ref> using the energy equation with the radiative cooling term removed (as it was applied in step <ref>). This is expressed as de/dt = -p d/dt1/ρ. * Update the cloud radius assuming conservation of mass (i.e., ρ_n r_n^3 = ρ_0 r_0^3). The model iterates these steps until the Bonnor-Ebert mass is exceeded, at which time we consider it to have entered runaway collapse. For the time-dependent inputs into the model (e.g., dark matter density, photo-ionization/heating rates, etc.), we perform a two-dimensional linear interpolation over time and radius from the radial profiles of the given halo. §.§.§ Density Evolution At step <ref>, we first determine the method of density update by comparing the sound-crossing time (2r/c_s) to free-fall time including the dark matter (i.e., Equation <ref>). If the free-fall time is shorter we evolve the gas density according to the free-fall model of <cit.>, which they modify to include the effect of pressure gradients acting to slow collapse (Equations 6–9 of that work). As we employ this in the model presented here, we outline the approach below. The evolution of the gas density in modified free-fall is expressed as d ρ/dt = ρ/t_col, where the collapse time, t_col, is altered from pure free-fall (i.e., Equation <ref>) as t_col = t_ff/√(1-f), and f is the ratio of the force due to the local pressure gradient and gravity. This factor is approximated as a function of the effective adiabatic index (γ≡∂log p/∂logρ) as f = {[ 0, γ < 0.83,; 0.6 + 2.5(γ - 1) - 6.0(γ - 1)^2, 0.83 < γ < 1,; 1.0 + 0.2(γ - 4/3) - 2.9(γ - 4/3)^2, γ > 1. ]. As γ→ 4/3 and f → 1, the gas is internally pressure supported and the collapse is halted. <cit.> note that the evolution now depends on continued accretion of the surrounding material. They model this phase by setting an upper limit for f of 0.95. In this work we allow f to reach unity, however, at which time we consider the collapse to be fully stalled, thereby entering the pressure-driven mode. We evaluate the consequences of various choices for the value of f below in Section <ref>. When the sound-crossing time is shorter than the free-fall time or when f = 1 in the modified free-fall model, the density is updated according to a pressure-driven model. In this mode, the change in density between two time steps is given by ρ_n+1/ρ_n = p_n+1/p_nμ_n+1/μ_nT_n/T_n+1. The variables, p, μ, and ρ, with “n” subscripts represent the thermal pressure, mean molecular weight, and gas density from the previous time step. Given a time step, dt, we integrate the chemistry and cooling starting with the gas described by the n-subscript variables. This integration evolves the chemical species and internal energy at a constant density, ρ_n. After this time step, the temperature, T_n+1, and mean molecular, μ_n+1, are calculated from the updated species densities and internal energy. The new density, ρ_n+1, is then calculated following Equation <ref>. The key variable driving the system is, thus, p_n+1, which we take to be the maximum of the thermal pressure and the hydrostatic pressure at the new time and radius as derived from the radial profiles of the halo. The hydrostatic pressure is given by P_hyd(r) = ∫_R^rG m_tot, enc/r^2ρ_gas dr, where R is the outer radius (here taken to be the virial radius) and m_tot, enc is the total (gas and dark matter) enclosed mass. Here, we label the gas density as ρ_gas to emphasize that while the dark matter and gas act in tandem to pull material down toward the center, only the gas weighs down on the material from above (i.e., the outside). §.§.§ The Importance of Gravity, Pressure, and Radiation In Figure <ref>, we show the results of the full model, alongside several variants, for halo 10/11. This halo, the last in our sample to form a star, has been influenced by the seven prior instances of star formation (the earliest of which does not appear in the figure) and, thus, represents one of the more complex cases. The Minihalo model reproduces the time to runaway collapses to within about 1 Myr over a total time of 169 Myr. Gravity alone would drive the system to collapse on the timescale proportional to Equation <ref>. In the case of halo 10/11, the initial gas and dark matter densities are 8×10^-25 g/cm^3 and 8×10^-24 g/cm^3, respectively, corresponding to a free-fall time of about 22 Myr and an overall collapse time about twice that (i.e., the dotted, green line in Figure <ref>). What is also striking is the degree to which just six brief episodes of star formation (3.86 Myr each) delay the ultimate onset of free-fall by more than 100 Myr. At z ∼ 19, the gas density is evolving via the free-fall mode until a Pop III star forms just prior to z ∼ 18. At this point the halo enters a cycle of seeing its molecular content recover and be destroyed again <cit.>, all the while evolving in the pressure-driven mode. The white bands that appear in Figure <ref> during this period illustrate how close the halo comes to breaking out of this cycle on multiple occasions as f_ H_2 exceeds 10^-4. Another key to this is the relatively low gas density, which prevents effective self-shielding. Further inside the halo where densities are higher, self-shielding is certainly taking place, but this is ultimately unimportant as it is this mass coordinate which determines when star formation occurs in this halo. Thinking back to the far simpler case of halo 1, where Figure <ref> shows evolution over a 57 Myr period, the initial proper total density at the relevant mass coordinate (here, ∼40 ) is roughly 2×10^-22 g/cm^3, corresponding to a free-fall time of just under 5 Myr. However, it is apparent from the pressure profiles that the gas is evolving very close to hydrostatic equilibrium until just about 7 Myr before final collapse. This phenomenon is also observed at the extreme opposite end of the halo mass spectrum, where hydrostatic equilibrium accounts for up to 95% of the gas pressure in galaxy clusters <cit.>. What links the scenarios is the ineffectual radiative cooling, which here is due to H_2 and in galaxy clusters is offset by feedback from active galactic nuclei <cit.>. Regardless, it is clear that thermal pressure is important for regulating the rate of collapse. In their one-zone models, <cit.> account for this by modifying the free-fall collapse model introduced in <cit.> to slow the density evolution by a factor relating to the ratio of pressure gradient forces to gravity (i.e., Equations <ref>, <ref>–<ref>), and setting a maximum value of f = 0.95. For the halos in this work this generally results in collapse proceeding too quickly, as illustrated by the dashed line in Figure <ref>. Increasing the upper limit on f to 0.99 further slows collapse, but in the case of halo 10/11 (Figure <ref>, long dashed line) by too much, whereas in other cases it is still not enough. It is also clear from Figure <ref> that this approximation fails to reproduce the low density, pressure-dominated evolution entirely[It should be noted that, while upper limits on f greater than 0.95 resulted in collapse times both too short and too long, a value of 0.95 was fairly consistently too short and likely the best value to use.]. Finally, despite the improvement made by the addition of hydrostatic pressure, we note that slowing free-fall by pressure-gradient forces is still a necessary component for the late stages of the evolution, as is illustrated by the dash-dot line in Figure <ref>. This term could be implemented more directly as essentially the derivative of Equation <ref>, but for simplicity we choose to stay with the approximation of Equation <ref> for this work. §.§ A Critical Metallicity for Star Formation We have constructed a model that describes reasonably well the behavior of gas in the interiors of minihalos. We now undertake a thought experiment and ask how these halos would evolve in the total absence of H_2 resulting from a strong UV background or repeated nearby star formation. In the metal-free limit the gas would not be able to cool until its halo's virial temperature reaches ∼10^4 K at a mass of roughly 10^7 . How much metal is required to supplement the loss of H_2 and induce gravitational instability before the halo reaches the atomic cooling limit? We try to answer this question in the simplest way possible, by deactivating the H_2-related chemistry altogether. This requires the Minihalo model to run past the point in which each of the halos originally formed its Pop III star. After this point in the simulation, the halo is evacuated of its baryon content by the stellar radiation and subsequent supernova. To approximate how the hydrostatic pressure would have evolved if no star had formed, we assume the gas density profile of the interior of the halo remains fixed in the state just prior to star formation. We continue to sample the dark matter profile of the halo from the simulation. As the halo grows in dark matter mass we assume that it accretes a mass of baryons proportional to the cosmic ratio. For the purposes of the model we deposit the accreted baryons into the outermost bin (i.e., the current virial radius) of the gas density profile to act as additional weight on the inner gas. As significant as these assumptions are, we find that the model behaves fairly sensibly in the limit of no metals or molecules (H_2/HD). The density and temperature both increase steadily as the halo grows, and the ratio of enclosed mass to Bonnor-Ebert mass decreases to ∼10^-4. For each halo we measure the time required for the gas temperature in the metal-free control run to reach 8000 K, where cooling from H Lyman-α starts to become significant. We set this as the maximum run time for the models discussed next. Beyond this, the minihalos will grow into atomic cooling halos where H_2 and star formation are significantly less inhibited by external UV radiation <cit.>. To this point, we have focused on the evolution of the mass coordinate of halo gas which becomes the most gravitationally unstable. However, the characteristic mass of gravitational instability is likely to change as we increase the metallicity <cit.>. To account for this, we modify the Minihalo model to simultaneously follow a contiguous range of mass coordinates, ranging from 5 to twice the original target coordinate of each halo (i.e., Figure <ref>). For our purposes this only involves modifying the hydrostatic pressure calculation to account for the change in position of the modeled mass coordinates. We confirm that this variant of the model reproduces the behavior of each of the mass coordinates modeled individually using the original one-zone version. At last, we run the expanded model with the primordial molecular chemistry disabled and gradually increase the metallicity. We use the tabulated metal cooling described in <cit.>, which assumes a Solar abundance pattern and collisional ionization equilibrium. The results of this experiment are shown for halo 10/11 in Figure <ref>. At a metallicity of 10^-5 the gas evolves similarly to the metal-free case. Prior to the onset of Lyman-α cooling, the gas is actually more stable against collapse than at the start of the model. At higher metallicities, the gas is able to cool and become gravitationally unstable. For each halo, we calculate a “critical metallicity,” Z_cr, as the minimum metallicity required for at least one mass coordinate to exceed the Bonnor-Ebert mass by the maximum run time. We also calculate a “minimum metallicity,” Z_min, as that required for the time derivative of the Bonnor-Ebert ratio for at least one mass coordinate to become positive. This is the metallicity at which cooling is just beginning to counterbalance the compression from hydrostatic pressure. These two metallicity values are illustrated as the black lines in Figure <ref>. We present the values for the critical and minimum metallicities for all halos in Table <ref>. We find mean values of [Z_ cr] = -3.71 ± 0.16 and [Z_ min] = -4.04 ± 0.23. Finally, we observe a sharp transition in the most gravitationally unstable mass coordinate. At low metallicities it is always the largest mass coordinate with the highest Bonnor-Ebert ratio at the end of the model, albeit significantly less than unity. In each halo this drops abruptly to roughly 10s of between the minimum and critical metallicities, then increases gradually with metallicity as more of the gas is able to reach the CMB temperature. This is similar to the behavior reported by <cit.>, although with significant differences in the physical setup and details of the results. We mention this result mainly for completeness, but caution against further interpretation as the nature of the stellar IMF is not the focus of this work. § THE METALLICITY FLOOR AND CEMP STARS The goal of this work has been to understand the conditions which lead to the formation of star-forming gas in minihalos. We have refined one-zone models based on free-fall collapse to match the evolution observed in cosmological simulations. In doing so, we have established the importance of hydrostatic pressure in driving the gradual increase in gas density as the halo grows. We have also confirmed the validity of the modifications made by <cit.> to the free-fall model to approximate the slowing of collapse by pressure gradient forces. The real question we are seeking to answer, however, is how would these minihalos behave if they grew up in the early formation environment of the Milky Way? Such a large-scale overdensity would have a highly clustered population of Pop III stars and thus be replete with photo-dissociating radiation and metals, two powerful terms primarily acting in opposition. In the extreme limit of full suppression of H_2, what abundance of metal is required such that gravitationally unstable gas can still form? Taking the mean of the sample of minihalos studied here, we find this value to be roughly 10^-3.7 for a Solar abundance pattern. This value is remarkably close to the apparent floor in C/O abundances observed in metal-poor stars, first quantified by <cit.> as D_ trans,crit = -3.5 ± 0.2. Equation <ref> derives from the fact that C ii and O i fine-structure emission dominate the metal cooling at low density, with the factor of 0.3 acting to normalize the cooling rates of the two terms. With the abundance pattern used here, our prediction of [Z_ cr] = -3.71 is equivalent to D_ trans,crit = -3.60. In Figure <ref>, we show D_ trans vs. [Fe/H] for all objects from the JINAbase metal-poor star database <cit.> with [Fe/H] < -2, with our estimate of D_ trans,crit overplotted. Of the 1,463 stars plotted, only 23 (∼1.6%) have D_ trans values below our one-sigma lower bound prediction of -3.76. All of those 23 stars only have measured C abundances, making their D_ trans values lower limits. For example, if these stars had [C/O] = 0 (i.e., a Solar ratio of C to O), their associated D_ trans values would increase by 0.11. For reference, the mean value of [C/O] for the stars shown in Figure <ref> with both C and O abundances is -0.46 ± 0.98. If the stars with unmeasured O had [C/O] = -0.46, this would increase their D_trans values by 0.27. The interpretation here is that the apparent D_ trans floor traces the metallicity at which minihalos are no longer prevented from forming stars by Lyman-Werner radiation of any intensity. In a cosmological context, these objects would be the most metal-poor “metal-cooling halos,” halos just below the atomic cooling limit whose radiative cooling is dominated by gas-phase metals <cit.>. This is a variation on the original notion of D_ trans as the minimum metallicity at which fragmentation, and hence low mass star formation, can first occur. The similarity between our prediction and the fragmentation-based one <cit.> derives from their common foundation of metal coolants supplementing H_2 at low densities (1 cm^-3≲ n ≲ 10^4 cm^-3) and temperatures <cit.>. The primary obstacle for the fragmentation argument is the ability of dust to induce fragmentation at much lower metallicities <cit.>. Although we do not include dust in our model, this does not present an issue for our interpretation. The effects of dust (i.e., heat exchange with the gas and catalyst for H_2 formation) are negligible at low densities in low metallicity environments <cit.>. As well, the minihalo interpretation has the added benefit of a degree of natural variability from halo to halo, as illustrated by Table <ref>, and also from variations in the local radiation field. Thus, it is fairly robust to a small number of exceptions. It should also be noted here that prompt external enrichment <cit.> provides a mechanism for forming stars well below D_ trans,crit. Our prediction bears some similarity to that of <cit.>, who argue that carbon-enhanced gas clouds would collapse more quickly than a relatively carbon-deficient neighbor, then form massive stars (along with low-mass stars) and suppress star formation in the neighbor. Following this, they predict that star formation would be prevented below a critical carbon abundance, [C/H] < -3.6. This, too, derives mainly from being roughly the crossover point between H_2 and metal line cooling. In both scenarios, the setting is an enriched minihalo bathed in H_2-dissociating radiation. The primary difference is the source of the radiation. In their model it is the newly-formed carbon-enhanced star, with some dependence on the stellar mass, distance, and density of the secondary cloud. In ours it is the radiation from distant Pop III stars. The two lines of reasoning are not mutually exclusive, although their scenario requires a particular configuration and also would not prevent lower metallicity carbon-normal (i.e., [C/Fe] = 0) stars from forming in the case where metals were mixed evenly. On the other hand, as we have shown in Section <ref>, even a handful of nearby, short-lived star formation events producing J_21 values of roughly 0.1 <cit.> can render H_2 cooling insufficient in small minihalos (≲10^6 ) for long periods of time <cit.>. Within the overdense environment that would characterize the early history of the Milky Way, such a value should also be reasonable to achieve as a background <cit.>. During this time, metals created by the stars that did form will steadily enrich these irradiated minihalos <cit.> to the required metallicity. Another commonality with <cit.>, and any other cooling-based explanations for CEMP stars or a C/O abundance floor, is the need to suppress iron enrichment at low metallicity. <cit.> highlight two avenues for this: SN that inherently produce enhanced light element yields <cit.> and varied mixing of different elements created in more conventional explosions <cit.>. Regardless of the mechanism that drives the carbon enhancement, there is compelling evidence from chemical enrichment that minihalos play a unique role in forming the lowest metallicity stars. In particular, minihalos appear to be necessary for capturing the extreme iron-poor tail of the Milky Way MDF <cit.> and uniquely capable of hosting true second-generation and CEMP stars <cit.>. This second point originally relied on faint (low-energy) Pop III SNe where timely fallback onto minihalos was possible <cit.>, but the discovery of prompt external enrichment <cit.> has provided avenue for second-generation star formation from virtually any progenitor. § CAVEATS It is important to note that the experiment presented here is but a crude simulacrum of star formation in metal-enriched minihalos during the early history of the Milky Way. The precise critical metallicity (or D_ trans,crit) values we calculate are the product of several simplifications, which we outline here. Indeed, visual inspection of Figure <ref> suggests our value could be slightly high for the observational data. Positive contributions (i.e., leading to cooler gas) that we have omitted or oversimplified will lower the required metallicity. Likewise, any negative contributions (i.e., sources of heat) that we may have underestimated will raise it. First and foremost, we have disabled H_2 formation entirely in the chemistry network as a means of approximating complete dissociation by a Lyman-Werner background. More realistically, H_2 will start to be effectively self-shielded from J_21 values of ∼1 for halos above 10^6 <cit.>. We have similarly ignored the effects of dust grains, whose positive contributions include providing an additional channel for H_2 formation, a heat sink for the gas <cit.>, and shielding H_2 from Lyman-Werner radiation <cit.>. As pointed out recently by <cit.>, however, the role of dust as a coolant for the gas (i.e., transferring heat by inelastic collisions and re-radiating it as thermal emission) is an exception confined to when the gas is hotter than the dust. Stellar radiation couples strongly to dust and can easily turn dust into a source of heat. While only FUV photons will lead to photoelectric heating from grains, dust can be heated by radiation at much lower energies <cit.>. Lastly, the metal cooling we employ assumes collisional ionization equilibrium (i.e., no ambient radiation background), which is clearly at odds with the physical scenario. Incident radiation adds a heating term, but also alters the ionization balance in a crucial way. Specifically, far more of the cooling from carbon then comes from fine-structure lines of C ii rather than C i, as the C i ionization energy is less than 1 Ryd. At low densities (n ≲ 10^3 cm^-3) and temperatures in the hundreds of K, C ii cooling is ∼10–50 times more efficient than C i <cit.>. However, at higher temperatures, where we first see metals beginning to counteract compression heating (i.e., Figure <ref>), cooling from O i (whose ionization energy is greater than 1 Ryd) is far more important. A more sophisticated treatment combining non-equilibrium metal and dust chemistry with a radiation background appropriate to the Milky Way's early formation environment <cit.> is beyond the scope of this work. However, such an endeavor could establish the sensitivity of this critical metallicity to the intensity of the radiation field, and thus open another avenue for observations of metal-poor stars to tell us something about the origins of the Galaxy. § SUMMARY AND CONCLUSIONS We present in this work a case that the metallicity floor observed in Galactic metal-poor stars is directly tied to the conditions that allow for star formation in the minihalos present in the early formation environment of the Milky Way. When combined with a mechanism for suppressing iron at low metallicity, this provides a natural explanation for the origin CEMP stars. We summarize the key points of this argument below: * We analyze a cosmological simulation that follows an environment of co-evolving Pop III star-forming minihalos with a dark matter mass resolution of ∼1 and radiation hydrodynamics. We characterize the evolution of the baryon content of these halos from their earliest history through the development of gravitational instability. Until very late times the central gas evolves in approximate hydrostatic equilibrium as it cools slowly through H_2 and HD. This period is easily prolonged by brief episodes of Pop III star formation that photo-dissociate the molecules within the mass coordinate that will eventually become gravitationally unstable. Peaks eventually emerge in radial profiles of the ratio of enclosed baryon mass to Bonnor-Ebert mass and generally align with peaks in the ratio of the sound-crossing time to free-fall time, albeit with different absolute values. The gas becomes under-pressured with respect to hydrostatic equilibrium when the sound-crossing time is roughly the free-fall time. * Motivated by the simulation data, we construct a one-zone model for the evolution of gas in minihalo cores. A parcel of gas evolves according to pressure confinement when the sound-crossing time is shorter than the free-fall time, where the pressure is the maximum of the local thermal pressure and the hydrostatic pressure. When the opposite is true, the density follows a free-fall model modified to include an opposing force from thermal pressure gradients. We couple this model to radial profiles of the simulated minihalos to provide time-dependent hydrostatic pressure values and radiative input from nearby Pop III stars. The model is able to accurately reproduce the density and temperature evolution of core gas from an arbitrarily early time in the halo's history up to when the gas becomes Bonnor-Ebert unstable. * We use the model to investigate how the simulated minihalos would evolve in the presence of a strong LW background by re-running them with molecular chemistry disabled. As expected, the gas that formed stars under normal conditions now remains stable as the halo grows to the atomic cooling limit. If we then add metals to supplement the loss of primordial coolants, we find on average that minihalos are able to form stars before reaching the atomic cooling limit when enriched to a critical metallicity, Z_ cr∼ 10^-3.71 . The original aim of the simulation presented here was to ascertain what, if any, relevance the gas-phase critical metallicity of (∼10^-3.5 ) has to the transition from Pop III to Pop II star formation. It has been fairly clear for some time that it is not related to fragmentation. That role belongs to the dust and its associated critical metallicity of ∼10^-5.5 . Is is vexing, though, why the dust-phase critical metallicity is not more apparent in observational data. For now, however, the story of Milky Way's oldest stars will remain a tale of two metallicities. § SOFTWARE The simulation was initialized with the initial conditions generator <cit.>, run using the code <cit.> and the chemistry and cooling library <cit.>, and analyzed with <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. The main software dependencies were <cit.>, <cit.>, <cit.>, and <cit.>. § ACKNOWLEDGEMENTS BDS wishes to thank Heather Jacobson, John Regan, and the entire FOGGIE collaboration (in particular, Molly Peeples) for useful conversations and continued support. BDS and SK are supported by STFC Consolidated Grant RA5496. BWO acknowledges support from NSF grants #1908109 and #2106575 and NASA ATP grants NNX15AP39G and 80NSSC18K1105. JHW acknowledges support from NSF grant #2108020 and NASA grant 80NSSC20K0520. BWO and JHW acknowledge support from NASA TCAN grant 80NSSC21K1053. This research is part of the Blue Waters sustained-Petascale Computing Project, which is supported by the NSF (award number ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and the National Center for Supercomputing Applications. The simulations described by this paper were run using an NSF PRAC allocation (award number OCI-0832662). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. § DATA AVAILABILITY All codes to produce the analysis and figures for this work are available in the [https://github.com/brittonsmith/yt_p2p] extension and the “p2p” branch of the lead author's fork of [https://github.com/brittonsmith/grackle/tree/p2p]. Data products used in this work include simulation snapshots, halo catalogs, merger trees, and halo radial profiles. These will be made available upon reasonable request to the lead author. mnras
http://arxiv.org/abs/2406.08898v1
20240613075108
Computer Vision Approaches for Automated Bee Counting Application
[ "Simon Bilik", "Ilona Janakova", "Adam Ligocki", "Dominik Ficek", "Karel Horak" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
[footnoteinfo]The completion of this paper was made possible by grant No. FEKT-S-23-8451 - "Research on advanced methods and technologies in cybernetics, robotics, artificial intelligence, automation and measurement" financially supported by the Internal science fund of Brno University of Technology. First]Simon Bilik Second]Ilona Janakova Second]Adam Ligocki Second]Karel Horak [First]Department of Control and Instrumentation, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic and Computer Vision and Pattern Recognition Laboratory, Department of Computational Engineering, Lappeenranta-Lahti University of Technology LUT, Lappeenranta, Finland (e-mail: bilik@vut.cz). [Second]Department of Control and Instrumentation, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic (e-mail: janakova@vut.cz, ligocki@vut.cz, horak@vut.cz). § ABSTRACT Many application from the bee colony health state monitoring could be efficiently solved using a computer vision techniques. One of such challenges is an efficient way for counting the number of incoming and outcoming bees, which could be used to further analyse many trends, such as the bee colony health state, blooming periods, or for investigating the effects of agricultural spraying. In this paper, we compare three methods for the automated bee counting over two own datasets. The best performing method is based on the ResNet-50 convolutional neural network classifier, which achieved accuracy of 87% over the BUT1 dataset and the accuracy of 93% over the BUT2 dataset. Signal Processing; Biomedical systems; bee counting; bee traffic monitoring; computer vision; deep classification; object detection § INTRODUCTION The automated bee counting based on computer vision techniques was solved with numerous methods involving the conventional computer vision approach, deep learning classification and object detection as extensively described in <cit.>. The current trends in this field show a decline of the conventional computer vision techniques in favor of the object detectors and CNN-based classifiers. Nevertheless, the conventional computer vision solutions are still more explainable and might be more computationally efficient, which is important especially on the embedded devices and testers. In this work, we compare three approaches for the automated bee counting based on the computer vision techniques. The first solution is our own, based on conventional computer vision-based motion signal analysis. It is applicable on an embedded device, allowing real-time processing at a 5 Hz sampling frequency. The second proposed approach is based on deep learning classifiers, and finally, the third proposed approach utilizes an object detector technique. In the conclusions, we compare the achieved results and possible improvements to the proposed methods. § RELATED WORK The research task is analyzing the movement of individual objects, such as bees traversing through a predefined environment. Numerous existing techniques tackle this problem, each with its own pros and cons. In this chapter, we briefly discuss the practical applicability of several techniques to our task. With the uprising of convolutional neural networks and specifically object detectors many works focus on tracking objects with direct access to the detector's predictions on each frame, for example, the Deep SORT <cit.> algorithm. Another motion analysis technique is to use the initial state of objects in their first appearance in an analyzed sequence to track them through the sequence using just visual information. A common approach for this problem is to use Siamese neural networks <cit.> that formulate the problem as convolutional feature cross-correlation between objects and regions of interest. Again this approach requires the use of computationally expensive neural networks. Furthermore, due to the large bee traffic through the observed alley, the initialization step would've had to be run on every time step, increasing the computational cost even more. Some works focus on analyzing temporal information through a low-dimensional subspace of ambient space with clustering techniques, known as subspace clustering techniques <cit.> and are used to separate low-dimensional data to their parent subspaces. This approach seems feasible for our needs at first glance but is not applicable for numerous reasons. The most common problem with subspace clustering variants is that they set requirements on the low-dimensional data that are not realistic for us to meet; for example, they require prior knowledge of several subspaces <cit.> or prior knowledge of some data points each of the subspaces contains <cit.>. Other approaches generally rely on building a similarity matrix from input data matrix <cit.>. The computer vision based bee counting have been described in numerous papers. The conventional approach could be presented by numerous papers including for example the 3D tracking techniques <cit.>, image velocimetry algorithm integrated with a RaspberryPi 3 based system <cit.> supplemented also with the sensor data in <cit.>. The CNN based solutions are represented in a smaller count, for example with the paper <cit.> describing a similar system as in <cit.>, but using the CNN classifier to detect and count the bees. An alternative to this approach is described in <cit.>, where the authors detect parts of the bee's body in order to perform its tracking. The most popular approach seems to be the object detection represented by papers <cit.> utilizing the YoloV4 and device with tunnels allowing the bees to pass through only in one direction, followed by the YoloV5 video analysis presented in <cit.>, where the authors investigate bee traffic in front of the hive. A similar approach using the SSD is presented in <cit.>. § EXPERIMENTAL DESCRIPTION §.§ Dataset description For our experiments, we used two datasets containing bees incoming and leaving the beehive through narrow passages inspired by <cit.> and <cit.>, which should mechanically separate the bees for their easier image processing. The first dataset BUT1 contains 1153 images with 2737 bee annotations in total and it is available online in <cit.>. Because of the unsuitable illumination and background of the BUT1 dataset, the second dataset BUT2 was created as an annotated subset from three captured days from the improved dataset <cit.> and it contains 2154 images with 8185 annotations. The annotated variant is available upon request and the whole dataset is shown in Table <ref>. To collect these datasets, we used our device described in <cit.>. Both datasets distinguish five classes - a complete bee heading in or out of the beehive, the head part, the abdominal part, and a cluster of several bees. Frames were captured with the frequency of 5 FPS, which ensures that every passing bee will be captured at least three times (head part - complete bee - abdominal part), and the annotations are in Yolo format. An illustration of both datasets is shown in Fig. <ref> with the measurement setup shown in Fig. <ref>. §.§ Conventional computer vision-based approach As none of the previously mentioned conventional motion tracking techniques proved to be suitable for our needs, we deviate from analyzing the movement of individual objects of interest utilizing our prior knowledge of the environment and analyze only an overall movement in the region of interest. To keep a reliable representation of our environment even after a longer period, we utilize a dynamic background model. We initialize our dynamic model m(x,y,k) optionally with the first frame of the sequence or with a previously captured frame of the environment f(x,y,k=0). m(x,y,0) = f(x,y,0) In each time step of the sequence we analyze the overall dynamic properties of the frame with thresholded subtraction d_1(x,y,k) = 1 if | f(x,y,k) - f(x,y,k-1) | > T_1 0 otherwise where T_1 is the empirically selected threshold. And based on the overall number of dynamic pixels, we flag the scene as either dynamic or non-dynamic D_1(k) = ∑ _x,yd_1(x, y, k) > T_2 where T_2 is again the empirically selected threshold. Our dynamic model is updated only when the scene is flagged as static in sequence, meaning D_1 is flagged as false, and the scene is flagged as static concerning the dynamic model. This flag is calculated similarly as the D_1 flag. d_2(x,y,k) = 1 if | f(x,y,k) - m(x,y,k-1) | > T_1 0 otherwise D_2(k) = ∑ _x,yd_2(x, y, k) > T_2 The dynamic model is then updated with a simple adaptive filter m(x, y, k) = α· f(x,y,k) + (1-α)· m(x, y, k-1) if D1(k) and D2(k) m(x, y, k-1) otherwise where α is the learning rate of the dynamic model. To track the level of dynamic activity in our region of interest, we inspect the ratio of dynamically flagged pixels in the subtracted frame from the current time step with our dynamic environment model to the number of pixels in the region. However, this metric alone does not carry any information in terms of the movement's direction. To compensate for this, we split the inspected region into N sections along the y axis. f(x,y) → g(x,y,n) IR^X× Y→ IR^X×⌊Y/N⌋× N In each of these N sections, we calculate the already mentioned metric of dynamic activity d(x,y,n,k) = 1 if | f(x,y,n,k) - m(x,y,n,k) | > T_1 0 otherwise r(n, k) = 1/X · Y∑ _x,y d(x,y,n,k) this r(n,k) signal now carries enough information to determine the level and direction of movement in the observed region. To further ease this signal's processing, we approximate the first-order partial derivative concerning the time step dimension with differentiation dr(n, k) = r(n, k) - r(n, k-1) In the resulting signal dr(n, k), we threshold its peaks to classify the current timestep with one of three classes: bee arrival (class = 1), idle state (class = 0), and bee departure (class = -1). class(n, k) = 1 if dr(n, k) > T_3 -1 if dr(n, k) < -T_3 0 otherwise where T_3 ∈⟨ 0, 1⟩ is empirically selected threshold value. To implement the actual counter of bees that traverse the region in one way or the other, we keep track of classes from the last K_max time steps. We add a track to a list on each bee arrival, and with each bee departure, we flag an unflagged track as valid. We increment a counter once a valid track is present in all N sections. The direction of the bee's movement is based on the age of the valid tracks on the region's edges. As this is quite a simple approach, it's bound to have limitations; its main disadvantage is that if the bees do not travel independently but in packs, r(n,k) will remain close to constant, dr(n, k) will be close to zero. The bees won't be accounted for as dr(n, k) does not cross the T_3 threshold value. Another limitation we have observed in experiments is that once a bee slows down at some point of its traverse through the observed area or the lightning in the observed area lowers its intensity, the dr(n, k) values reduce, sometimes even to a noise level. This leads to a missing track entry in one or more sections of a tunnel, the bee is not accounted for and there may be hanging tracks left in sections where the bee was registered. This effect can have a detrimental effect on the next registered bees as hanging tracks on the edges of the observed region define the estimated traverse direction. This effect can be suppressed by lowering the maximum track age K_max, but lowering this value also effectively reduces sensitivity when the bee's velocity lowers. On the bright side, this approach runs under 20ms on Raspberry Pi 4B, sequentially processing 12 bee tunnels in each frame. §.§ CNN based approach For our experiments, we had decided to use the pre-trained SqueezeNet v1.1, ResNet-50 and DarkNet-53 network models. The built-in application of Matlab - Deep Network Designer was used. All these networks have been trained on more than a million images from the ImageNet database. They can classify images into 1000 categories of objects such as a keyboard, a coffee mug, a pencil, and many animals. As a result, the networks have learned rich feature representations for a wide range of images. ResNet-50 is a 50-layer CNN (48 convolutional layers, one MaxPool layer, and one average pool layer) with 25.6 million parameters and a model size of 96 MB. Darknet-53 has a similar number of layers (53 layers) but is roughly 1.6 times larger (41.6 M parameters, 155 MB model size). In contrast, SqueezeNet is considerably smaller (18 layers deep, 1.2 M parameters, and 5.2 MB model size), yet achieves relatively high accuracy. In general, larger CNNs require more memory and computing power and consume more energy. It takes longer to train or retrain the network and classify a new image itself. Therefore, smaller networks are more feasible to deploy on edge devices with constrained resources, such as various embedded devices. For all networks, the last convolutional and classification layers were adjusted to sort the images into five categories. Several combinations of network settings were always tested, mainly different mini-batch sizes, learning rates, solver, and number of epochs. The resulting retrained networks were applied to the test part of the datasets. §.§ Object detection based approach To evaluate the object detector approach we decided to test the current superstar in the neural network-based object detector field, the Yolov8 (You Only Look Once) architecture which is a evolution of the original Yolo <cit.>. The Yolov8 provides the wide range of various architecture complexities. We decided to utilize the two extremes, the smallest and the largest ones, to test the miniature network that is easy to deploy on the edge device and the maximal approach that evaluates the maximal capabilities of the given architecture. For each combination of architecture and dataset we trained network for 100 epochs, with dynamic batch size and tested results on a separated test data subset as shown in Table <ref>. The training time was from few minutes in case of Yolov8n on BUT1 data, and about an hour and a half for Yolov8x on BUT2 data. For this experiment, we utilize the official PyTorch-based Yolov8 Ultralytics library <cit.>, which provides a flexible and easy-to-use environment for model training and evaluation. The computer hardware configuration for this project included an NVIDIA RTX 3090 GPU with 24 GB memory complemented by an AMD Ryzen 9 processor and 64 GB of RAM, ensuring efficient data handling and preprocessing capabilities. § EXPERIMENTAL RESULTS Experimental results for all investigated methods are shown in Table. <ref>. The detection accuracies were computed as relative error of the total number of predictions in respect to the ground truth annotations. It could be seen that the worst performed the conventional computer vision approach, which highly underestimates the number of bees in both directions and on both datasets. This is most likely caused by the bees coming through in clusters, which was not expected during the algorithm design, but proved to be very common during the performed measurements. The best detection accuracies were achieved using the ResNet-50 CNN classifier consistently on the both datasets. The object detection using both variants of the YoloV8 model did not fulfilled the expectations while achieving significantly worse results than the ResNet-50. CNN classifier Resnet-50 was trained with the following parameters: batch size 15, learning rate 0.01, and SGDM solver. Fig. <ref> shows the confusion matrix of this model determined on the test data of the BUT2 dataset. A higher classification accuracy was achieved on the BUT2 dataset (98% versus 87.30% on the BUT1 dataset), probably due to the overall higher number of images and the more even number of images in each category. If both datasets were experimentally combined, an accuracy of 93.22% was achieved on all 4438 test image cutouts. As expected, SqueezeNet performed worse (90.26% on BUT2 and 84.48% on BUT1), but given the significantly smaller size, the roughly 3-4% drop in accuracy is not that significant. On the other hand, the larger DarkNet-53 network did not provide better results. Although it learned more stably with an accuracy greater than 93%, it did not outperform the most successful ResNet-50 network model in any combination of parameters. The YoloV8 object detector's variants did not perform consistently over the datasets - in the case of the BUT1 dataset, the YoloV8n performed better in the case of the outcoming accuracy (0.84 % over 0.46 %) and in the case of the BUT2 dataset, better results in both directions were achieved with the YoloV8x. From the embedded point of view, it might be more practical to use the smaller model YoloV8n as it has lower computational demands. § CONCLUSIONS In this work, we tackle the problem of counting bees leaving and arriving at a hive. After a summary of motion-tracking and bee-counting techniques in the second chapter, we propose conventional computer vision, CNN classifier-based and object detector-based approaches to solve this problem in the third chapter. The first proposed method is based on conventional computer vision techniques and did not perform as expected. Probably due to high sensitivity to the bees entering the hive close to each other in the form of a cluster or a row, the proposed method fails to distinguish individual bees, and therefore, it highly underestimates the number of bees entering and leaving the hive. Nevertheless, this method has a low computational demand and was successfully tested on the RaspberryPi 4 embedded computer. As the most important are the trends over the time, underestimating the number of bees might not be such critical if it is consistent. Therefore, we will focus on this analysis in our future research. The CNN classifier-based approach performed the best of all three investigated methods using the ResNet-50 classifier. This approach achieved the highest accuracy and successfully counted almost the same number of bees as in the case of ground truth labels. It was also able to train on limited datasets, especially in the case of the BUT1. As our future work, we will focus on exploring a computational less demanding networks to be used for an automated bee counting module within our device under development. The object detector approach did not performed as well as expected on both investigated datasets, when it performed significantly worse in comparison with the ResNet-50. This is probably caused by mismatching the bees if they are advancing not individually, but in the form of cluster, or a row as in the case of the conventional computer vision approach. The object detectors are also probably too complex and computationally demanding for this task. The bee counting task is relatively complex due to a high similarity of bees in both orientations and due to the nature of their movement in our measurement system, where they usually touch each other, sometimes stand on place, or return after passing underneath the camera. Further accuracy improvement will also require a state machine, which will consider the other detections, such as the bee head or abdomen, to track the individual bees more precisely. Nevertheless, as said above, capturing a trend in the count of incoming and outcoming bees is sufficient for most of the apiary analysis. The completion of this paper was made possible by grant No. FEKT-S-23-8451 - "Research on advanced methods and technologies in cybernetics, robotics, artificial intelligence, automation and measurement" financially supported by the Internal science fund of Brno University of Technology. 25 urlstyle[Benahmed et al.(2022)Benahmed, Bensaad, and Chaib]YOLO_Vcely Benahmed, H.K., Bensaad, M.L., and Chaib, N. (2022). Detection and tracking of honeybees using yolo and strongsort. In 2022 2nd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS), 18–23. 10.1109/ICE3IS56585.2022.10010142. [Bilik and Horak(2021)]Bilik2021 Bilik, S. and Horak, K. (2021). Mvg-bvi: Bee visual inspector. Function sample. [Bilik et al.(2023a)Bilik, Ligocki, Horak, Kratochvila, and Zemcik]BeeDS1 Bilik, S., Ligocki, A., Horak, K., Kratochvila, L., and Zemcik, T. (2023a). Bee dataset but-1. 10.34740/KAGGLE/DSV/7091285. <https://www.kaggle.com/dsv/7091285>. [Bilik et al.(2023b)Bilik, Ligocki, Horak, Kratochvila, and Zemcik]BeeDS2 Bilik, S., Ligocki, A., Horak, K., Kratochvila, L., and Zemcik, T. (2023b). Bee dataset but-2. 10.34740/KAGGLE/DSV/5655997. <https://www.kaggle.com/dsv/5655997>. [Bilik et al.(2024)Bilik, Zemcik, Kratochvila, Ricanek, Richter, Zambanini, and Horak]BILIK2024108560 Bilik, S., Zemcik, T., Kratochvila, L., Ricanek, D., Richter, M., Zambanini, S., and Horak, K. (2024). Machine learning and computer vision techniques in continuous beehive monitoring applications: A survey. Computers and Electronics in Agriculture, 217, 108560. https://doi.org/10.1016/j.compag.2023.108560. [Bjerge et al.(2019)Bjerge, Frigaard, Mikkelsen, Nielsen, Misbih, and Kryger]BJERGE2019104898 Bjerge, K., Frigaard, C.E., Mikkelsen, P.H., Nielsen, T.H., Misbih, M., and Kryger, P. (2019). A computer vision system to monitor the infestation level of varroa destructor in a honeybee colony. Computers and Electronics in Agriculture, 164, 104898. https://doi.org/10.1016/j.compag.2019.104898. [Chen et al.(2012)Chen, Yang, Jiang, and Lin]CHEN2012100 Chen, C., Yang, E.C., Jiang, J.A., and Lin, T.T. (2012). An imaging system for monitoring the in-and-out activity of honey bees. Computers and Electronics in Agriculture, 89, 100–109. https://doi.org/10.1016/j.compag.2012.08.006. [Chiron et al.(2013)Chiron, Gomez-Krämer, and Ménard]Chiron2013 Chiron, G., Gomez-Krämer, P., and Ménard, M. (2013). Detecting and tracking honeybees in 3d at the beehive entrance using stereo vision. EURASIP Journal on Image and Video Processing, 2013(1), 59. 10.1186/1687-5281-2013-59. <https://doi.org/10.1186/1687-5281-2013-59>. [Elhamifar and Vidal(2013)]6482137 Elhamifar, E. and Vidal, R. (2013). Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11), 2765–2781. 10.1109/TPAMI.2013.57. [Goh and Vidal(2007)]4270260 Goh, A. and Vidal, R. (2007). Segmenting motions of different types by unsupervised manifold clustering. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, 1–6. 10.1109/CVPR.2007.383235. [Ho et al.(2003)Ho, Yang, Lim, Lee, and Kriegman]1211332 Ho, J., Yang, M.H., Lim, J., Lee, K.C., and Kriegman, D. (2003). Clustering appearances of objects under varying illumination conditions. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings., volume 1, I–I. 10.1109/CVPR.2003.1211332. [Jocher et al.(2023)Jocher, Chaurasia, and Qiu]yolov8_ultralytics Jocher, G., Chaurasia, A., and Qiu, J. (2023). Ultralytics yolov8. <https://github.com/ultralytics/ultralytics>. [Kulyukin and Mukherjee(2019)]app9183743 Kulyukin, V. and Mukherjee, S. (2019). On video analysis of omnidirectional bee traffic: Counting bee motions with motion detection and image classification. Applied Sciences, 9(18). 10.3390/app9183743. <https://www.mdpi.com/2076-3417/9/18/3743>. [Kulyukin et al.(2022)Kulyukin, Tkachenko, Price, Meikle, and Weiss]kulyukin2022integration Kulyukin, V., Tkachenko, A., Price, K., Meikle, W., and Weiss, M. (2022). Integration of scales and cameras in nondisruptive electronic beehive monitoring: On the within-day relationship of hive weight and traffic in honeybee (apis mellifera) colonies in langstroth hives in tucson, arizona, usa. Sensors, 22(13), 4824. [Li et al.(2019)Li, Wu, Wang, Zhang, Xing, and Yan]Li_2019_CVPR Li, B., Wu, W., Wang, Q., Zhang, F., Xing, J., and Yan, J. (2019). Siamrpn++: Evolution of siamese visual tracking with very deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). [Mukherjee and Kulyukin(2020)]app10062042 Mukherjee, S. and Kulyukin, V. (2020). Application of digital particle image velocimetry to insect motion: Measurement of incoming, outgoing, and lateral honeybee traffic. Applied Sciences, 10(6). 10.3390/app10062042. <https://www.mdpi.com/2076-3417/10/6/2042>. [Ng et al.(2001)Ng, Jordan, and Weiss]NIPS2001_801272ee Ng, A., Jordan, M., and Weiss, Y. (2001). On spectral clustering: Analysis and an algorithm. In T. Dietterich, S. Becker, and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems, volume 14. MIT Press. [Redmon et al.(2016)Redmon, Divvala, Girshick, and Farhadi]yolo Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You only look once: Unified, real-time object detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779–788. 10.1109/CVPR.2016.91. [Rodriguez et al.(2022)Rodriguez, Chan, Alvarez Rios, Branson, Agosto-Rivera, Giray, and Mégret]10.3389/fcomp.2021.769338 Rodriguez, I.F., Chan, J., Alvarez Rios, M., Branson, K., Agosto-Rivera, J.L., Giray, T., and Mégret, R. (2022). Automated video monitoring of unmarked and marked honey bees at the hive entrance. Frontiers in Computer Science, 3. 10.3389/fcomp.2021.769338. [Ryu et al.(2021)Ryu, Jung, Jeong, Choi, Lee, and Kwon]ryu2021honeybee Ryu, J.S., Jung, J.W., Jeong, C.H., Choi, B.J., Lee, M.l., and Kwon, H.W. (2021). Honeybee in-out monitoring system by object recognition and tracking from real-time webcams. Journal of Apiculture, 36(4), 273–280. [Sugaya and Kanatani(2004)]10.1007/978-3-540-30212-4_2 Sugaya, Y. and Kanatani, K. (2004). Geometric structure of degeneracy for multi-body motion segmentation. In D. Comaniciu, R. Mester, K. Kanatani, and D. Suter (eds.), Statistical Methods in Video Processing, 13–25. Springer Berlin Heidelberg, Berlin, Heidelberg. [Wojke et al.(2017)Wojke, Bewley, and Paulus]8296962 Wojke, N., Bewley, A., and Paulus, D. (2017). Simple online and realtime tracking with a deep association metric. In 2017 IEEE International Conference on Image Processing (ICIP), 3645–3649. 10.1109/ICIP.2017.8296962. [Yan and Pollefeys(2006)]10.1007/11744085_8 Yan, J. and Pollefeys, M. (2006). A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In A. Leonardis, H. Bischof, and A. Pinz (eds.), Computer Vision – ECCV 2006, 94–106. Springer Berlin Heidelberg, Berlin, Heidelberg. [Zhang et al.(2009)Zhang, Szlam, and Lerman]5457695 Zhang, T., Szlam, A., and Lerman, G. (2009). Median k-flats for hybrid linear modeling with many outliers. In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, 234–241. 10.1109/ICCVW.2009.5457695. [Zhuang et al.(2022)Zhuang, Huang, and Ye]9788643 Zhuang, J., Huang, X., and Ye, X. (2022). Bee colony flow monitoring system based on ssd algorithm. In ICETIS 2022; 7th International Conference on Electronic Technology and Information Science, 1–3.
http://arxiv.org/abs/2406.08372v2
20240612162058
APSeg: Auto-Prompt Network for Cross-Domain Few-Shot Semantic Segmentation
[ "Weizhao He", "Yang Zhang", "Wei Zhuo", "Linlin Shen", "Jiaqi Yang", "Songhe Deng", "Liang Sun" ]
cs.CV
[ "cs.CV" ]
Deep Learning Based Joint Multi-User MISO Power Allocation and Beamforming Design Cemil Vahapoglu^†, Timothy J. O’Shea^*, Tamoghna Roy^*, Sennur Ulukus^† ^†University of Maryland, College Park, MD, ^*DeepSig Inc., Arlington, VA cemilnv@umd.edu, tim@deepsig.io, tamoghna.roy@deepsig.io, ulukus@umd.edu ^1University of Stuttgart, Germany ^2University of Cambridge, UK 13 May 2024 ==================================================================================================================================================================================================================================== ^†Equal Contribution: Weizhao He and Yang Zhang ^*Corresponding Author: Yang Zhang and Wei Zhuo § ABSTRACT Few-shot semantic segmentation (FSS) endeavors to segment unseen classes with only a few labeled samples. Current FSS methods are commonly built on the assumption that their training and application scenarios share similar domains, and their performances degrade significantly while applied to a distinct domain. To this end, we propose to leverage the cutting-edge foundation model, the Segment Anything Model (SAM), for generalization enhancement. The SAM however performs unsatisfactorily on domains that are distinct from its training data, which primarily comprise natural scene images, and it does not support automatic segmentation of specific semantics due to its interactive prompting mechanism. In our work, we introduce APSeg, a novel auto-prompt network for cross-domain few-shot semantic segmentation (CD-FSS), which is designed to be auto-prompted for guiding cross-domain segmentation. Specifically, we propose a Dual Prototype Anchor Transformation (DPAT) module that fuses pseudo query prototypes extracted based on cycle-consistency with support prototypes, allowing features to be transformed into a more stable domain-agnostic space. Additionally, a Meta Prompt Generator (MPG) module is introduced to automatically generate prompt embeddings, eliminating the need for manual visual prompts. We build an efficient model which can be applied directly to target domains without fine-tuning. Extensive experiments on four cross-domain datasets show that our model outperforms the state-of-the-art CD-FSS method by 5.24% and 3.10% in average accuracy on 1-shot and 5-shot settings, respectively. § INTRODUCTION Current deep neural networks <cit.> depend heavily on extensive annotated data to attain satisfactory performance. Data annotation is a time-consuming task that requires significant human resources, especially for dense pixel-wise annotation for segmentation tasks <cit.>. The few-shot semantic segmentation (FSS) <cit.> is therefore introduced to close this gap, aiming to produce pixel-level predictions for a novel category with only a few labeled samples. Although existing FSS methods <cit.> have achieved significant progress, they are commonly built on the assumption that their training and test images are from the same domain. When applied to a different domain, their performance decreases dramatically <cit.>. The cross-domain generalizability is thus significant and necessary. FSS models typically require a large amount of base data for training. However, images for some tasks, such as cancer diagnosis and remote-sensing analysis, are scarce and challenging to obtain, making training a powerful model directly on their own data nearly impossible. In our work, we aim to transfer knowledge from easily accessible natural domains to data-scarce domains, as shown in Fig. <ref>(a). We believe that the transfer ability of general models trained on large-scale natural scene datasets will benefit domains that own only a handful of data. To accomplish the cross-domain few-shot semantic segmentation (CD-FSS) task, PATNet <cit.> utilizes support prototypes for computing a transformation matrix, facilitating the conversion of domain-specific features into domain-agnostic ones. In addition, PATNet has a further fine-tuning process using the target domain data during the testing phase. Built on <cit.>, RestNet <cit.> introduces a unified attention module to enhance query and support features prior to transformation. Residual connections are integrated to fuse features before and after the transformation, preserving important information from the original space. Furthermore, RestNet achieves better segmentation results by predicting twice. These works, however, are inefficient in application due to an additional finetuning process or double predictions. In addition, their backbone pretrained on ImageNet <cit.> limits their performance. To this end, we attempt to take advantage of recent achievements on the foundation model, the Segment Anything Model (SAM) <cit.> to assist CD-FSS. With more than one billion masks under its training, SAM exhibits strong feature extraction and generalization abilities. However, recent studies <cit.> have reported that applying SAM directly to new domains often yields subpar performance on zero-shot segmentation tasks, particularly when the data distribution significantly differs from the natural domain data used in SAM training. In addition, its interactive framework necessitates manual visual prompts, such as points or boxes, for precise segmentation, which restricts its capability for full automation. Furthermore, some recent works <cit.> demonstrate SAM is sensitive to manual visual prompts. Even slight deviations in provided prompts can remarkedly affect segmentation accuracy. As shown in Fig. <ref>(b), PerSAM presents unsatisfactory performance due to its inability to extract high-quality visual prompts. In our work, we propose an auto-prompt network for cross-domain few-shot semantic segmentation (APSeg), which builds a novel end-to-end framework that efficiently adapts SAM to CD-FSS tasks for accurate segmentation. As shown in Fig. <ref>(c), the core of our framework is feature transformation and meta prompt generation. In particular, for feature transformation, we propose a Dual Prototype Anchor Transformation (DPAT) module to extract pseudo query prototypes based on cycle-consistency between support and query features. By fusing the pseudo query prototypes with the support prototypes, the computation of the transformation matrix can incorporate information from both support and query samples, which facilitates transforming input features into a more resilient domain-agnostic feature space. Combined with DPAT, the potential of SAM in cross-domain scenarios can be unleashed. In addition, for automatic prompt-embedding generation, we introduce a Meta Prompt Generator (MPG) module via a meta-learning procedure. Rather than relying on manual visual prompts such as point and box prompts, MPG leverages support features to guide the generation of meta prompt embeddings associated with target objects to substitute the output of the SAM's prompt encoder. With MPG, our method is robust and general for automatic segmentation. Our main contributions are summarized as follows: * We propose a novel model that integrates a dual prototype anchor transformation (DPAT) module and a meta prompt generator (MPG) module for efficiently adapting SAM to CD-FSS tasks. * The DPAT module is proposed for cross-domain feature transformation, which integrates support prototypes and pseudo query prototypes and transforms input features into a stable domain-agnostic space. * The MPG module is introduced to generate prompt embeddings through meta-learning to establish a fully automatic framework for segmentation. * Extensive experiments on four cross-domain datasets demonstrate that our model outperforms the state-of-the-art CD-FSS method by 5.24% and 3.10% in average accuracy on 1-shot and 5-shot settings, respectively. Especially, we attain 17.49% (1-shot) and 14.30% (5-shot) improvements on the Chest X-ray dataset. In our method, the parameters of SAM’s image encoder and mask decoder are frozen, and only a few parameters are trainable. Notably, our trained model can directly achieve promising results when applied to target domains without fine-tuning or multiple rounds of inference. § RELATED WORK §.§ Few-shot Segmentation The goal of few-shot semantic segmentation (FSS) is to segment new classes with a few annotated examples. Current FSS methods are commonly based on meta-learning, which can be largely grouped into two types: prototype-based methods d<cit.> and matching-based methods <cit.>. Motivated by PrototypicalNet <cit.> for few-shot learning, the prevalent FSS models utilize prototypes for specific-class representation. Recent works <cit.> point out that a single prototype has a limitation to cover all regions of an object, especially for pixel-wise dense segmentation tasks. To alleviate this problem, methods of <cit.> use expectation-maximization and cluster algorithms to generate multiple prototypes for different parts of the objects. Compared with prototype-based methods, matching-based ones <cit.> are designed to extract dense correspondences between query images and support annotations, harnessing pixel-level features to augment the support context with more intricate details. These methods, however, only focus on segmenting new categories from the same domain and fail to generalize unseen domains. §.§ Cross-domain Few-shot Segmentation In contrast to the traditional FSS setting, CD-FSS necessitates that models refrain from accessing target data during the training process. Furthermore, the data distribution and labeling space in the test phase differ from those in the training phase. This is a more realistic setting. PATNet <cit.> proposes a feature transformation module which aims to convert domain-specific features into domain-agnostic features. In addition, the target domain data are desired to be utilized to fine-tune the model during the testing phase. Notably, PATNet outperforms current FSS methods on its established benchmark. RestNet <cit.> utilizes a lightweight attention module to enhance pre-transformation features and merge post-transformation features through residual connections to maintain the key information in the original domain. Meanwhile, a mask prediction strategy is introduced to mitigate the issue of overfitting to support samples in FSS and facilitates the model in a gradual acquisition of cross-domain segmentation capabilities. However, existing methods still utilize classical classification models <cit.> as feature extractors with limited feature extraction capabilities compared to existing visual foundation models <cit.>. The performance of cross-domain segmentation is limited. Moreover, either additional training or multiple inferences is required when predicting masks, which makes the inference process complex and inefficient. §.§ Segment Anything Model The Segment Anything Model (SAM) <cit.>, pretrained on massive amounts of labeled data, first introduced a foundation model for image segmentation. SAM relies on explicit points and bounding boxes at precise locations for accurate segmentation <cit.>. Therefore, extensive manual guidance or a specialist detector is often required, leading to a complex multi-stage pipeline <cit.>. SAM cannot achieve automatic segmentation for specific semantics. To address this, PerSAM <cit.> proposes automatic sampling of visual prompts, and some other methods suggest directly generating prompt embeddings <cit.>. Inspired by these works, we propose an automatic prompting method in a meta-learning manner to adapt SAM to CD-FSS. § METHOD §.§ Problem Definition For CD-FSS, a source domain (X_s, Y_s) and a target domain (X_t, Y_t) exist, where the input distribution of the source domain and target domain are different, and their label space has no intersection as well, i.e., X_s ≠ X_t and Y_s ∩ Y_t = ∅. Here X denotes input distribution, and Y denotes the label space. In our work, we train and test our model in a meta-learning episodic manner following <cit.>, and our model is only trained on the source domain and has no access to the target data. Each episode data consists of a support set S and a query set Q with a specific category. The support set S = (I_i^s, M_i^s)^K_i = 1 contains K image-mask pairs, where I_i^s denotes the i-th support image and M_i^s denotes the corresponding binary mask. Similarly, the query set is defined as Q = (I_i^q, M_i^q)^K_i = 1. To train our model, support sets and images from query sets are used as model inputs to predict the query masks. To assess the trained model's performance, we test it on a support set and a query set from the target domain. §.§ Method Overview Our target is to train a general model on natural domains with rich annotations and transfer the knowledge to target domains with limited labeled data. As illustrated in Fig. <ref>, our proposed APSeg consists of two key modules: the Dual Prototype Anchor Transformation (DPAT) module and the Meta Prompt Generator (MPG) module. Specifically, given the support image I^s and query image I^q, the SAM image encoder is used to extract multi-level features from different layers. DPAT module is then employed to map support and query features to a new domain-agnostic space, facilitating the rapid adaptation of subsequent modules for previously unseen domains. Next, we introduce the MPG module whose task is to generate sparse and dense prompt embeddings through the meta-learning procedure for the SAM's mask decoder by utilizing the transformed features. At last, the generated prompt embeddings and the transformed high-level query features are passed into the mask decoder for target mask prediction. §.§ Dual Prototype Anchor Transformation To map support and query features into a new domain-agnostic space, we introduce a Dual Prototype Anchor Transformation (DPAT) module, as shown in Fig. <ref>. Previous method <cit.> solely relies on a set of support prototypes and anchor layers to compute a transformation matrix. However, support prototypes cannot well represent complete information of a category due to intra-class variance. Therefore, we propose to enhance the support prototype set with query prototypes for better feature transformation. Specifically, our proposed DPAT consists of two procedures: dual prototype enhancement and cross-domain feature transformation. Inspired by <cit.>, we propose cycle-consistent selection (CCS) to extract both the query foreground and background prototypes in the absence of query masks, which is used to enhance the support prototypes. Based on these enhanced prototypes which represent a category and its surroundings, an effective transformation matrix can be computed with a learnable domain-agnostic module. The transformation matrix is then applied to support and query features for cross-domain feature transformation. Dual Prototype Enhancement Representative prototypes are significant for our cross-domain transformation. To this end, we build a cycling-examine procedure that reasons on both the query foregrounds and backgrounds to augment support prototypes. The process is shown in Fig. <ref>. We conduct forward matching to attain query features that have the highest similarity with the support foregrounds. We then use these identified forward-matched query features to re-locate corresponding support features reversely. If the located support features through reverse matching fall within support foregrounds, the identified query features will be averaged and used to derive the foreground prototypes. The background prototypes are obtained with the same process. Finally, support prototypes and query prototypes are fused by addition. Specifically, given a support image I^s and a query image I^q, we initially employ a shared-weight image encoder to extract their multi-layer feature maps {f^s_l }^3_l=1 and {f^q_l } ^3_l=1 respectively, where f^s_l, f^q_l∈ℝ^c_l× h × w. c_l, h, w are the channel dimension, height, and width of the feature map. DPAT takes support features{f^s_l }^3_l=1, query features{f^q_l } ^3_l=1 and support mask M^s as input. To simplify the notations, f^s and f^q are used to represent any one of {f^s_l }^3_l=1 and {f^q_l } ^3_l=1. We then employ masked average pooling (MAP) on the support feature to obtain the foreground and background prototype, denoted as 𝐩^s_fg and 𝐩^s_bg. By concatenating 𝐩^s_fg and 𝐩^s_bg, we get the support prototype matrix 𝐏^s=[𝐩^s_fg, 𝐩^s_bg]. Considering that the query mask cannot be accessed during training, we extract pseudo query prototypes through CCS. First, the support foreground feature is obtained by multiplying the support mask M^s with the support feature f^s. Next, the similarity between the support foreground feature and the query feature is calculated. For each element of the support foreground feature, we search for the element in the query feature map with the highest similarity score, and acquire the matched position set i^s→q from support to query as follows, i^s→q=i∈{0,1,...,h× w -1}*max(sim(f^s⊙M^s, f^q_i)) where sim(·,·) is a cosine function. Based on i^s→q, the matched query feature f^q_i^s→q can be extracted. Similarly, the matched positions j^s←q from query to support can obtained as below, f^q_i^s→q={f^q[i]:i∈ i^s→q} j^s←q=j∈{0,1,...,h× w -1}*max(sim(f^q_i^s→q, f^s_j)) where j^s←q is the positions on support that have the most similar features with the corresponding reference query features f^q_i^s→q. If the position in the matched position set j^s←q does not fall in the support mask M^s, we filter out the position from i^s→q and obtain the final matched position set i^'s→q. According to the set i^'s→q, the corresponding features are extracted from f^q and averaged to obtain the pseudo foreground query prototype 𝐩^q_fg. The pseudo background query prototype 𝐩^q_bg can be obtained through a similar process, resulting in the query prototype matrix 𝐏^q=[𝐩^q_fg, 𝐩^q_bg]. Finally, a mixed prototype matrix can be obtained by 𝐏^m = 𝐏^s + 𝐏^q. Cross-domain Feature Transformation Features of the same class yield similar results when they are transformed in the same way. Support features and query features are transformed into a domain-agnostic space using the same transformation matrix 𝐖 to avoid the detrimental impact caused by domain shift. Given the weight matrix of an anchor layer 𝐀, the definition of a transformation matrix is as follows, 𝐖𝐏^m=𝐀 where 𝐏^m = [𝐩^m_fg/𝐩^m_fg,𝐩^m_bg/𝐩^m_bg], 𝐀 = [𝐚_fg/𝐚_fg,𝐚_bg/𝐚_bg] and 𝐚 is the anchor vector which is independent of the input. Different from previous work <cit.>, we leverage the mixed prototype matrix that incorporates both support and query feature information during the computation process. Since the prototype 𝐏^m is a non-square matrix, the generalized inverse <cit.> of 𝐏^m is calculated with 𝐏^m+={𝐏^m^𝐓𝐏^m}^-1𝐏^m^𝐓. Therefore, the transformation matrix is calculated as 𝐖=𝐀𝐏^m+. In our work, We have two different anchor layers for mid-level features {f^s_1, f^s_2, f^q_1, f^q_2 } and high-level features {f^s_3, f^q_3}. Finally, we can efficiently map support and query features to a stable, domain-agnostic space by multiplying them with 𝐖. Objects and things of even the same class can differ in shape and appearance. Due to limited samples of supports, it is intrinsically challenging to represent all the variance within objects and things of a class. Through the double-check procedure for both foreground and background regions, our proposed DPAT can effectively mitigate the challenges aroused by intra-class variances and generate a more stable transformation matrix for cross-domain feature transformation. §.§ Meta Prompt Generator To construct an end-to-end fully automated SAM-based segmentation framework for CD-FSS, we utilize the features of support-query pairs to directly generate prompt embeddings. In particular, a meta prompt generator (MPG) module is designed to obtain both sparse and dense embeddings simultaneously, as shown in Fig. <ref>. Different from <cit.> that uses single support embeddings without alignment, our new pipeline extends to leverage multiple support embeddings and integrates feature alignment. Our design comprehensively takes account of intra-class variance and the multiple prompts coming along with the support embeddings enhance the segmentation. For clarity, we refer to the process of generating sparse embeddings and dense embeddings as sparse path and dense path respectively. In this way, our method eliminates the need for external manual visual prompts, such as points or boxes. Sparse path. In this process, the query features and several support embeddings augmented from the support prototypes are utilized to generate sparse embeddings through a transformer decoder <cit.>, which then replace the original sparse embeddings in SAM. First, we concatenate the transformed mid-level support features f̂^s_1∈ℝ^c_1 × h × w and f̂^s_2∈ℝ^c_2 × h × w along the channel dimension, and then perform dimension reduction through a convolution layer to obtain f̂^s∈ℝ^c_r× h × w. In the same way, we can also get f̂^q∈ℝ^c_r× h × w, f̂^s=ℱ_conv(f̂^s_1⊕f̂^s_2) f̂^q=ℱ_conv(f̂^q_1⊕f̂^q_2) where ℱ_conv means performing a 1× 1 convolution followed by a ReLU activation function and ⊕ denotes the concatenation operation in channel dimension. Next, we take f̂^s and M^s as input and apply MAP to obtain the foreground class prototypes 𝐩̂^s∈ℝ^c_r. Then, a linear layer maps the 𝐩̂^s to multiple augmented support embeddings 𝐄^aug∈ℝ^k × c_r, 𝐄^aug=ℱ_linear(𝐩̂^s) where k denotes the number of the augmented support embeddings. Here we generate several embeddings instead of only a single embedding for simulating multiple point prompts. To supplement positional information, learnable position encodings are applied to 𝐄^aug and fixed sine-cosine positional encodings are applied to f̂^q. We then input them into the transformer decoder and its output is further processed by a two-layer MLP to increase the channel dimensions, which yields 𝐄̂^aug∈ℝ^k × c_o. Ê^aug=ℱ_mlp(ℱ_trans(𝐄^aug, f̂^q)) To align the generated sparse embeddings with those produced by SAM’s prompt encoder, the sine function is employed to generate the final sparse embeddings 𝐄^spa∈ℝ^k × c_o following <cit.>. 𝐄^spa= Ê^aug + ℱ_sine(Ê^aug) Dense path. In this process, the dense embeddings are modulated by query features and support prototypes. f̂^s_3, f̂^q_3 and M^s are first passed to prior mask generation (PMG) module <cit.> to generate a prior mask M^pr∈ℝ^1× h × w. After concatenating 𝐩̂^s, f̂^q and M^pr, we perform a 1× 1 convolution for dimension reduction to obtain f̂^pr∈ℝ^c_r × h × w. The output is then passed into the feature enhancement (FEM) module <cit.> to get f̂^fem. f̂^fem is further processed by a 1×1 convolution layer to increase the channel dimensions to obtain 𝐄^den∈ℝ^c_o × h × w. §.§ Training Loss In the training of APSeg, we employ a Dice loss function, computed between the predicted mask M̂ and the corresponding ground truth query mask M^q. The loss function, denoted as ℒ, is expressed as: ℒ = 1/n∑_i=1^nDICE(ℐ(M̂), M^q) Here, n represents the total number of training episodes in each batch, and DICE signifies the Dice loss function. The function ℐ serves as an interpolation function, ensuring that M̂ shares the same spatial size as M^q. § EXPERIMENT Datasets. Following the previous approach <cit.>, we use PASCAL VOC 2012 <cit.> with SBD <cit.> augmentation as training dataset and then evaluate the trained model on Chest X-ray <cit.>, ISIC <cit.>, FSS-1000 <cit.> and Deepglobe <cit.> respectively. Metric and Evaluation. We use the mean intersection-over-union (mIoU) as the evaluation metric, which is the same as the previous method. We take the mean-IoU of 5 runs <cit.> with different random seeds for each test. For all datasets except FSS-1000, each run has 1200 tasks. Every run of FSS-1000 has 2400 tasks. Implementation Details. In our experiments, we employ the base version of the SAM and keep it frozen during training. To be consistent with the original input to SAM, we set spatial sizes of both support and query images to 1024 × 1024. For the SAM image encoder, we utilize feature maps derived from the mid-level features output by the 5th and 8th transformer blocks, in addition to the high-level features obtained from the final output of the image encoder. Concerning the DPAT module, we create two anchor layers dedicated to mid-level and high-level features, each configured with channel numbers set to 768 and 256, respectively. For the MPG module, the number of feature channels after dimension reduction is set to 64. Additionally, the number of sparse embeddings and dense embeddings channels output by MPG is set to 256. We implement the model in PyTorch and utilize the Adam <cit.> optimizer with a learning rate of 1e-3. §.§ Comparison with State-of-the-Arts We compare our method against existing CNN-based and SAM-based approaches for CD-FSS. As shown in Tab. <ref>, the results demonstrate the superiority of the proposed method in this challenging task. Specifically, our approach exhibits improvements of 5.40% and 3.10% compared to the PATNet <cit.>, under 1-shot and 5-shot settings, respectively. Moreover, APSeg surpasses the SAM-based method PerSAM <cit.> by 23.74% and 24.45%, affirming the effectiveness of our approach. Notably, our method showcases significantly superior performance compared to the current methods when confronted with large domain gaps between the testing and training datasets. Specifically, compared with PATNet, our method achieves improvements of 17.49% and 14.30% on the chest X-ray dataset under the 1-shot and 5-shot settings, respectively. Similarly, on the ISIC dataset, improvements of 4.27% are observed for the 1-shot setting. Furthermore, our automatic prompting approach significantly surpasses the manual prompting strategy proposed by <cit.> in cross-domain performance. For a fair comparison, we also implement PATNet based on the SAM's image encoder. The model is trained with the same input image size, and test-time fine-tuning is not employed. The results demonstrate that APSeg still maintains a significant superiority. Qualitative results, illustrated in Fig. <ref>, validate that our proposed method attains substantial improvements in generalization performance in the presence of large domain gap while maintaining considerable accuracy with minor domain shift. More visualization results are provided in the supplementary materials. §.§ Ablation study Components analysis. We assess the effectiveness of our proposed DPAT module and MPG module by using the 1-shot mIoU averaged on four datasets. To establish a baseline model, we first remove the DPAT module and the sparse path in the MPG module and then replace the final layer of the dense path with a 1× 1 convolution layer for predicting segmentation masks. Tab. <ref> illustrates the impact of each component on model performance. Overall, the incorporation of the two components suggested in this paper enhances the baseline by 18.44%. In the second row, MPG leverages the segmentation capabilities of SAM by autonomously generating semantic-aware prompt embeddings, eliminating the need for manual prompts and improving the baseline by 2.80%. Upon combining MPG and DPAT, DPAT unleashes the segmentation capabilities of SAM in cross-domain scenarios. It achieves this by transforming input features into a more stable domain-agnostic feature space, resulting in a significant performance improvement of 15.46% compared to the second row. More analysis and discussion about DPAT are shown in supplementary materials. Meta Prompt Generator. Tab. <ref> shows the impact of the main components in the MPG, namely sparse embeddings and dense embeddings. We show the results for three combination scenarios: using only sparse embeddings, only dense embeddings, and both. Our observation reveals that better performance can be attained when combining sparse and dense embeddings. This emphasizes the significance of leveraging both types of embeddings to harness the segmentation capabilities of SAM in cross-domain scenarios. The number of feature channels. After cross-domain feature transformation, we fuse the transformed mid-level features and perform dimension reduction, which can reduce the number of learnable parameters and avoid the trained model overfitting the training dataset. Observation in Tab. <ref> shows that reducing the number of channels to 64 allows our method to achieve better CD-FSS performance with only a small number of additional learnable parameters. The number of sparse embeddings. Our proposed MPG introduces an automatic generation mechanism for prompt embeddings, replacing SAM's original method of obtaining them through manual prompts fed into the prompt encoder. Employing more manual visual prompts has been shown to enhance SAM's interactive segmentation performance. Therefore, Tab. <ref> shows the association between the number of sparse embeddings generated by MPG and the performance of cross-domain few-shot segmentation with APSeg. Notably, generating 4 sparse embeddings results in a 0.55% improvement in 1-shot scenarios. However, as indicated in the third row, continuing to generate more prompt embeddings may lead to a decline in performance. § CONCLUSION In this paper, we introduce APSeg, an auto-prompt method for guiding SAM to complete CD-FSS tasks. To achieve fully automatic segmentation based on SAM and release the segmentation capability of SAM in cross-domain scenarios, we propose the Meta Prompt Generator (MPG) module and Dual Prototype Anchor Transformation (DPAT) module to achieve this goal. By fusing the extracted pseudo query prototypes with support prototypes, DPAT enables domain-specific input features to be more stably converted into domain-agnostic features, significantly improving cross-domain generalization capabilities. In addition, MPG generates semantic-aware prompt embeddings with meta-learning, promoting the construction of a fully automatic CD-FSS framework based on SAM. Combining DPAT and MPG, extensive experimental results show that our APSeg achieves a new state-of-the-art in CD-FSS. § ACKNOWLEDGEMENTS This work is supported by the National Natural Science Foundation of China under Grant 62176163, 62306183 and 82261138629; the Science and Technology Foundation of Guangdong Province under Grant 2021A1515012303, 2024A1515010194 and 2023A1515010688; the Science and Technology Foundation of Shenzhen under Grant JCYJ20210324094602007 and JCYJ20220531101412030; and Shenzhen University Startup Funding. ieeenat_fullname § EXPERIMENT §.§ Datasets FSS-1000. FSS-1000 li2020fss_SUPP is a natural scenario dataset containing 1000 class categories with 10 samples per category. The evaluation procedure is conducted on 2400 randomly sampled support-query pairs. Chest X-ray. Chest X-ray candemir2013lung_SUPP, jaeger2013automatic_SUPP is an X-ray image dataset of 566 images collected from 58 cases with manifestations of tuberculosis and 80 normal cases. ISIC. ISIC codella2019skin_SUPP is a skin lesion image dataset from the ISIC-2018 challenge. Following the previous approach lei2022cross_SUPP, the evaluation procedure is conducted on the training set, which includes 2596 images and the corresponding annotations. Deepglobe. Deepglobe demir2018deepglobe_SUPP is a remote sensing image dataset that can be used for land cover segmentation. The dataset contains 7 categories: areas of urban, agriculture, rangeland, forest, water, barrel, and unknown. Following the previous method lei2022cross_SUPP, we filter the unknown class in the training set and chunk the images to obtain 5666 images. We report the test results on the processed training set. §.§ Implementation Details In the meta prompt generator (MPG) module, the spatial size of the feature enrichment module (FEM) is set to {60, 30, 15, 8}, maintaining consistency with PFENet tian2020prior_SUPP. The transformer decoder carion2020end_SUPP block consists of a self-attention mechanism, a cross-attention mechanism, and a feed-forward network. Its configuration is in line with Protoformer cao2022prototype_SUPP. §.§ Ablation Study Effect of the cycle consistent selection. In the dual prototype anchor transformation (DPAT) module, pseudo query prototypes are extracted through cycle-consistent selection (CSS) to enhance the feature transformation process. To further validate the effectiveness of CCS, we explore an alternative method for extracting pseudo query prototypes. Analogous to ResNet huang2023restnet_SUPP, we first obtain the coarse prediction mask of the query image and then perform the MAP operation to obtain query prototypes. This method is referred to as prediction mask-based MAP (PM-MAP). We conduct an experiment to evaluate the model without CCS, with CCS, and with PM-MAP, respectively, to better analyze the contribution of our CCS. The results in Tab. <ref> show that CCS achieves better performance, with an average improvement of 2.67% mIoU on four datasets on the 1-shot setting. This indicates that CCS can extract reliable query prototypes to enhance the support prototypes, allowing features to be transformed into a more stable domain-agnostic space. Qualitative results on the effectiveness of CSS are provided in Fig. <ref>. §.§ Additional Analysis PerSAM zhang2023personalize_SUPP is a few-shot segmentation method based on SAM. To enable automatic segmentation, PerSAM extracts point prompts based on cosine similarity measure and obtains box/mask prompts from the coarse predictions. The final prediction results are produced under the guidance of three types of visual prompts. The performance of PerSAM is far inferior to our proposed APSeg and PATNet lei2022cross_SUPP due to the inability to extract precise prompts. In contrast, our APSeg introduces MPG and DPAT, avoiding reliance on precise visual prompts and achieving competitive results in cross-domain scenarios. Qualitative results are provided in Fig. <ref>. It can be observed that our method outperforms PerSAM by a large margin, which validates the effectiveness of our automatic way of generating prompt embeddings. In addition, we provide more qualitative segmentation results of our proposed method on four datasets in Fig. <ref>. ieeenat_fullname supp
http://arxiv.org/abs/2406.08875v1
20240613072248
NICER: A New and Improved Consumed Endurance and Recovery Metric to Quantify Muscle Fatigue of Mid-Air Interactions
[ "Yi Li", "Benjamin Tag", "Shaozhang Dai", "Robert Crowther", "Tim Dwyer", "Pourang Irani", "Barrett Ens" ]
cs.HC
[ "cs.HC" ]
NICER: A New and Improved Consumed Endurance and Recovery Metric]NICER: A New and Improved Consumed Endurance and Recovery Metric to Quantify Muscle Fatigue of Mid-Air Interactions Monash University Melbourne Australia yi.li5@monash.edu Monash University Melbourne Australia Benjamin.Tag@monash.edu Monash University Melbourne Australia Shaozhang.Dai1@monash.edu University of New England Armidale Australia rcrowth2@une.edu.au Monash University Melbourne Australia Tim.Dwyer@monash.edu University of British Columbia British Columbia Canada pourang.irani@ubc.ca University of British Columbia British Columbia Canada barrett.ens@ubc.ca § ABSTRACT Natural gestures are crucial for mid-air interaction, but predicting and managing muscle fatigue is challenging. Existing torque-based models are limited in their ability to model above-shoulder interactions and to account for fatigue recovery. We introduce a new hybrid model, NICER, which combines a torque-based approach with a new term derived from the empirical measurement of muscle contraction and a recovery factor to account for decreasing fatigue during rest. We evaluated NICER in a mid-air selection task using two interaction methods with different degrees of perceived fatigue. Results show that NICER can accurately model above-shoulder interactions as well as reflect fatigue recovery during rest periods. Moreover, both interaction methods show a stronger correlation with subjective fatigue measurement (ρ = 0.978 0.976) than a previous model, Cumulative Fatigue (ρ = 0.966 0.923), confirming that NICER is a powerful analytical tool to predict fatigue across a variety of gesture-based interactive applications. <ccs2012> <concept> <concept_id>10003120.10003123.10011760</concept_id> <concept_desc>Human-centered computing Systems and tools for interaction design</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Human-centered computing Systems and tools for interaction design acmlicensed TOG 2024 43 4 102 710.1145/3658230 [ Barrett Ens June 17, 2024 ================= § INTRODUCTION Gesture Interaction has long been applied by Human-Computer Interaction (HCI) researchers to support direct manipulation <cit.>, externalise cognition <cit.>, facilitate remote collaboration <cit.>, and provide a “natural” user experience <cit.>. Multi-touch surface gestures have become the dominant mode of computer interaction and 3D mid-air gestures hold similar promise for the future of spatial computing <cit.>. Mid-air hand tracking is now supported by the newest generation of wearable Augmented Reality (AR) and Virtual Reality (VR) devices (e.g. Apple Vision Pro[<https://www.apple.com/apple-vision-pro/>], Meta Quest 3[<https://www.meta.com/quest/quest-3/>]). Games that employ gesture input and fitness applications that use full-body gesture interaction are among the currently most popular VR experiences. One vital aspect in designing such experiences is to present users with an engaging embodied experience without tiring them through excessive activity. Furthermore, the required level of exertion must be tuned to the specific application. For instance, to present a physical challenge when desired, but without inducing fatigue prematurely. Thus, it will be helpful for designers to have access to tools that can help them predict the exertion level of a particular activity and estimate the time the activity can be sustained. Producing this type of information precisely is the goal of recent research on modelling fatigue in mid-air arm gestures. However, each of these approaches has particular benefits and drawbacks. For instance, the Consumed Endurance (CE) model <cit.> was the first to predict endurance time but is limited in its ability to predict the effort of low-intensity activity that is typical of mid-air computer interactions. The Cumulative Fatigue (CF) model <cit.> was introduced soon after, with a proposed solution to the identified limitation in CE, as well as the ability to model fatigue recovery during interactions. However, the CF model takes a supervised learning approach to fit the model to subjective fatigue scales and, therefore, requires time-consuming model training on a specific interaction task and cannot directly predict the maximum interaction time. Both of these previous models have a further inherent limitation in their assumption of shoulder torque as an accurate predictor of exertion in shoulder muscles which causes overestimation of fatigue in above-shoulder interactions. Recent work <cit.> provides strong evidence countering this assumption, as well as a proposed solution, NICE, for addressing this limitation. This tentative model takes a hybrid approach that supplements torque with direct observations of muscle activation, thereby adding a correction term for underestimated exertion for large shoulder angles. Our work completes the previous explorationfollows this previous research using a `NICER' approach (NICE + Recovery factor): We propose a complete version of the theoretical correction term proposed in <cit.> that improves fatigue prediction in over-shoulder exertion based on the empirical investigation of muscle fatigue development across different shoulder abduction angles in a long-duration endurance study and new arm movement data in unconstrained mid-air interactions.conduct a new endurance arm-lifting study with a design similar to the one in NICE <cit.> but with a higher granularity of static shoulder poses (including seven angles between 45° and 135° versus only five angles in the literature). To that end, we establish the shape of the curve of objective muscle fatigue at different vertical shoulder positions. Next, a mid-air interaction task with constraint-free arm movement is run to obtain the precise coefficient of the curve and further refine the correction term initially proposed by NICE. Additionally, we adapt a recovery factor that was initially applied in industrial work to model fully relaxed breaks <cit.> in NICER to adjust for intermittent interactions for a more natural modelling of fatigue without a full stop of tasks. Ours is the first comprehensive model to allows us to present a complete, refined model thataddress the full set of prior limitations (summarised in Table <ref> and explained in detail in Section <ref>). Lastly, we evaluate the model performance of NICER as an analytical tool for interaction design in another mid-air selection task with high- and low-fatigue interaction methods. The study results confirm that NICER is better able to differentiate between two interaction methods with different degrees of perceived fatigue as well as better capture of fatigue recovery than the previous models, CE and CF. The main contribution of this work is a refined fatigue model that is ready-to-use[The model source code is available at <https://github.com/ylii0411/NICER_Unity_API>] (does not require any further model training), applicable to any arm pose (from hand, wrist, elbow, and shoulder joint coordinates), and comprehensive (addresses the limitations of the original CE and CF models outlined in Section <ref>). Specific contributions include: * An empirical study of muscle fatigue development: the first investigation at different vertical arm positions with low-intensity, long-duration tasks (Study 1 in Section <ref>). * A finalised comprehensive fatigue model: NICER that accounts for additional exertion of above-shoulder exertion and recovery periods (Study 2 in Section <ref>). * An evaluation study to confirm that NICER is a reliable objective fatigue measurement to assist future interaction design (Study 3 in Section <ref>). § RELATED WORK Fatigue is a cumulative phenomenon that manifests after prolonged sustained physical exertion. In the following section, we discuss the literature on both subjective and objective fatigue assessments and the development of existing fatigue models in mid-air interactions. It is necessary to clarify that this paper focuses exclusively on physical fatigue, specifically muscular fatigue, with mental fatigue being outside the scope of our study. §.§ Subjective Fatigue Measurement Subjective approaches to quantify perceived fatigue are common in Human-Computer Interaction (HCI) as they require no physical instrumentation or additional setup. Two widely accepted subjective techniques are the Borg CR10 Rating of Perceived Exertion (RPE) scale <cit.> and the National Aeronautics and Space Administration Task Load Index (NASA-TLX) <cit.>. While the Borg CR10 RPE employs discrete values ranging from 0 to 10 to categorise physical exertion, the NASA-TLX evaluates perceived workload by considering dimensions such as physical demand, mental demand, and frustration using a 21-point scale. Borg CR10 RPE and NASA-TLX provide a relatively reliable rough estimation of fatigue levels. However, two noteworthy issues limit their application in managing fatigue during mid-air interactions. First, the finite scales in the subjective approach restrict their performance in detecting subtle yet crucial differences <cit.>. Second, participants with diverse backgrounds may interpret and apply the scales differently and introduce potential bias to the results <cit.>. Furthermore, completing the questionnaire is disruptive to the activity under investigation and is not practical in applied settings. §.§ Objective Fatigue Measurement Compared to subjective fatigue approaches, objective measurements involve monitoring physiological properties, e.g., heart rate <cit.>, muscle oxygenation <cit.>, thermoregulation <cit.>, and eye-movement <cit.>. While objective measurements eliminate the influence of cognitive bias across participants, they are inferential rather than direct indicators of muscle fatigue and could introduce noise into the fatigue assessment process. For example, lightweight wearable devices like smartwatches or fitness trackers allow real-time heart rate monitoring. Yet, the subtle variation in heart rate during low-intensity tasks may be caused by many factors other than fatigue induced by interaction <cit.> (e.g., it can even be induced by the subject's awareness of displayed heart rate, i.e. the well-known observer effect). A non-invasive option to estimate participants' fatigue is through their expended muscle capacity <cit.>. Force transducers record the Maximum Voluntary Contraction (MVC) value representing the maximum muscle contraction capacity for a given participant. Since muscle capacity is reduced over the course of a physical task, a decline in MVC over time serves as an indicator of muscle fatigue. However, this direct approach requires additional setup (i.e. measuring muscle capacity with a force transducer) that will interrupt the interaction, so it fails to provide feedback on fatigue during real-time interactions. Similarly, MVC can be measured directly through surface electromyographic (EMG) signals. After normalization by MVC, the real-time sub-maximal EMG signals during interactions will be expressed in the unit of %MVC. An increasing magnitude in time-domain features of EMG signals (i.e. root-mean-square (RMS), mean-absolute-value (MAV)) is considered an indicator of muscle fatigue <cit.>. Consequently, slopes of the time-domain features (%MVC) are applicable as fatigue indices to monitor changes in muscle contraction in real-time interactions <cit.>. This approach is recommended if the study duration is long enough to observe a longitudinal linear change in EMG signal above the level of intermittent noise. Training classification models on instantaneous EMG signals is also feasible to predict fatigue <cit.>, but this approach is limited to discrete fatigue levels and may overlook the subtle difference between scales. §.§ Modelling Fatigue in Mid-air Interaction In the field of HCI, modelling fatigue in mid-air interactions is an emerging area, building upon established research in ergonomics. <cit.> integrated the widely-recognized Rapid Upper Limb Assessment (RULA) ergonomic metric with a markerless motion tracking system, facilitating real-time assessment of ergonomic costs. <cit.> sectioned the 3D interaction volume, utilizing biomechanical simulations to estimate muscle activation costs for various interaction clusters. More recently, an AR toolkit named Xrgonomics <cit.> incorporated these metrics to inform 3D user interface design. While these prior works contribute to reducing muscle fatigue by minimizing ergonomic costs, their focus has primarily been on evaluating static postures, rather than dynamic movements that involve changes in gestures, velocity, and acceleration. The process of modelling physical fatigue in mid-air interaction requires defining two measures (Fig <ref>): Exertion – measuring instantaneous physical exertion of body gestures performed during the interaction (%MVC); and Fatigue – a mapping from the accumulated physical exertion (%MVC) to a fatigue level (%). In general, there are empirical and theoretical approaches to model muscle fatigue. Empirical models rely on the observation of maximum holding time, A.K.A. endurance time (ET), or MVC. They consider the decay of ET or MVC to be equivalent to fatigue development. <cit.> defined a lightweight fatigue model that can estimate fatigue levels by normalising actual holding time with the maximum holding time of a specific joint as the denominator. Even though the Rodriguez model has reduced parameters and considers resting periods compared to previous models, it cannot be applied to body segments containing multiple joints. Another empirical study by <cit.> effectively estimated ET by modelling the decay coefficient of muscle capacity for a given external load. However, the model is limited to isometric tasks, a type of continuous contraction that does not require changes in body postures. On the other hand, theoretical fatigue models quantify muscle fatigue based on the biophysical properties of the muscle fibres. The dynamic fatigue model proposed by <cit.> was the first study to model fatigue by considering the transition cycle of the muscle unit, where the proportion of the fatigued muscle units (between 0-100%) can be estimated for a constant intensity. The Liu model was the predecessor of the three-compartment model (TCM) <cit.> that enables the muscle transition cycle to work with varying external loads. The above literature laid a solid foundation for the real-time fatigue modelling process and inspired the development of two milestone fatigue models of mid-air interactions: Consumed Endurance (CE) <cit.> and Cumulative Fatigue (CF) <cit.>[CF, as the model name, refer to the "TCM" model in <cit.> and the "LIN" model in <cit.>.]. CE was the first model that successfully estimated fatigue from dynamic arm movements as described in the pipeline (Figure<ref>). It first quantifies the instantaneous exertion at time i by the shoulder torque (Torque_i) based on the performed gesture, then normalises the accumulative average torque (Torque) with a constant Max_Torque retrieved from literature. Rohmert's ET function (Equation (<ref>)) is applied next to map the accumulative exertion to its maximum holding time (see the orange curve in Figure <ref>). The CE score is predicted by the ratio between the SpentTime and the estimated ET_Rohmert from Rohmert's ET function (see Equation (<ref>)). ET_Rohmert = 1236.5/(Torque/Max_Torque *100 - 15 )^0.618 -72.5 CE = SpentTime/ET_Rohmert *100 In an initial evaluation, CE was strongly correlated with Borg CR10 and demonstrated the ability to guide interaction designs <cit.>. However, CE has a major limitation when working with low-intensity interactions due to the biased assumption of Rohmert's ET function (the orange curve in Figure <ref>-left) that low-intensity interaction (i.e. less than 15% of an individual's maximum strength) is considered to last forever without subjects becoming fatigued. This was addressed in NICE <cit.> where Rohmert's ET function was replaced by a revised ET function constructed from empirical evidence (the blue curve in Figure <ref>-left). However, the revised ET function in NICE only considered static shoulder positions and overlooked dynamic arm gestures. In our proposed improved model, we overcome the previous limitation by using a shoulder-specific ET function (the green curve in Figure <ref>-left) built from a meta ET function fitted to both static and dynamic arm gesture data from literature <cit.>. Alternatively, the original CE uses a constant Max_Torque (N· m) to normalise instantaneous shoulder torque Torque (N· m) to Torque (%MVC). Yet, this lightweight approach ignores variations between participants. The original CF <cit.> used a long-duration arm-lifting task to determine one's maximum strength (MVC), but this pre-calibration was time-consuming and may induce fatigue before the interaction. The pre-calibration was recently replaced by a gesture-based maximum strength model <cit.> implemented in the revised CF <cit.>. The same solution is included in our proposed improved model to capture variations between participants and dynamic gestures. See APPENDIX <ref> for more details. Furthermore, CE and NICE do not consider rest periods during interactions. As such, model predictions will overestimate the fatigue level when an interaction is resumed after a break. Concerns related to fatigue recovery in interaction were explored in CF <cit.> where a TCM was used to estimate the proportion of fatigued (F%) and resting (R%) muscle units under the current shoulder torque of given gestures. Relying on a supervised-learning approach where CF model parameters (fatiguing and recovery rates) were trained to fit Borg CR10 collected from the interaction study, CF predicts subjective fatigue ratings with low error. However, predicting Borg CR10 limits CF in estimating the ET of interaction. Most importantly, CE and CF quantify the instantaneous exertion solely based on shoulder torque, which assumes the exertion below and above the shoulder is of the same level. Yet, this assumption is contradicted by subjective feedback <cit.> and measured muscle contraction <cit.>. In NICE, an additional logarithmic term was added to the original torque calculation and an improved correlation with muscle contraction in above-shoulder interactions was observed. Though the approach was not validated, the work pointed out that a hybrid approach combining torque and muscle contraction may be able to effectively quantify the additional exertion in above-shoulder interactions. This realisation, along with the above concerns, motivated our creation of a comprehensive fatigue model that can: * work with interactions of varying physical intensities, * consider gesture-based maximum strength, * account for fatigue recovery during interaction breaks, * predict the endurance time of interactions, and * make reliable predictions for above-shoulder interactions. § NICER: NICE AND IMPROVED CONSUMED ENDURANCE AND RECOVERY METRIC Our work aims to develop a ready-to-use comprehensive fatigue model for mid-air interactions. We investigated the strengths and limitations of the existing fatigue models in the literature and summarised them in Table <ref>. We plan to integrate four new features into the original CE model to develop a comprehensive model that preserves the advantages of existing models and overcomes the identified limitations related to above-shoulder exertion. The order of the proposed changes is based on the computation order of the original CE formula: (1) A revised ET curve to enable the refined model to work with low-intensity interactions; (2) A correction term derived from muscle contraction to account for the additional physical effort needed in above-shoulder interactions (Section <ref>); (3) Chaffin's maximum strength model to replace the constant maximum torque to account for the variation in the maximum strength of dynamic gestures;  (4) A recovery factor to reflect decreasing fatigue during rest periods (Section <ref>). Next, we discuss the detailed motivation and methodology for our novel hybrid exertion estimation and the recovery factoreach new model feature, as well as the model evaluation in the following sections. §.§ Hybrid Approach of Real-time Exertion Estimation CE and CF rely on shoulder torque to estimate the instantaneous exertion of the currently performed arm gesture. However, the property of shoulder torque[Torque = r*F*sin(θ). θ is the angle between the upper arm and the body, r is the distance between the shoulder joint and the centre of the mass of the arm, and F is the force exerted by the arm.] suggests that exertion is symmetrical at equal shoulder angles above and below 90° (see the purple curve in Figure <ref>-right). In the evaluation of CE and CF in mid-air pointing tasks, subjective feedback of perceived fatigue <cit.> and objective muscle contraction through EMG signals <cit.> indicated that CE and CF overlooked the additional physical effort needed for lifting the arm above 90° and therefore underestimated the fatigue level induced by above-shoulder interactions. NICE was the first study investigating the limitations of only using torque to quantify instantaneous exertion of shoulder movements, and it proposed a correction term for exertion estimation based on torque measurements at 120° and 90° of shoulder elevation, assuming a linear increase in exertion between these elevations. However, this assumption has not been tested previously. On the contrary, using solely muscle contraction to predict shoulder fatigue requires extensive mappings between muscle activation and properties of dynamic interactions, for example, contributing shoulder muscles, arm movement velocity, interactive target positions, and so on. Unfortunately, research done in the above areas is limited. The refined fatigue model proposed in this paper takes a hybrid approach, integrating muscle contraction and torque to address the identified gap. We use the mapping between shoulder angles and muscle fatigue as the objective ground truth to correct the torque calculation. The development of the correction term takes three steps: * Obtain the shape of the curve of objective muscle fatigue under varying shoulder angles from low-intensity, long-duration tasks (i.e. 10 minutes per task); * Investigate the magnitude of shoulder torque under continuously varying shoulder angles in constraint-free mid-air arm movement; * Develop the correction term based on the difference between shoulder torque and the ground truth curve. As introduced in Section <ref>, EMG signals with an increasing magnitude for a given period indicate the occurrence of muscle fatigue. The slope of EMG features reflects the severity of fatigue, such that a steeper slope indicates more rapid fatigue development. In Study 1 (Section <ref>), we investigate objective fatigue from the distribution of EMG-based fatigue indices at different vertical shoulder angles. Later, in Study 2 (Section <ref>), we use data collected during mid-air interaction tasks with no constraints on shoulder abductions and elbow extensions to obtain the distribution of shoulder torque under continuously varying shoulder angles. Finally, we scale up the ground truth curve to match the magnitude of shoulder torque and take the difference between the ground truth curve and the shoulder torque distribution to obtain the correction term C(θ) in Figure <ref>-right. Chaffin’s Model of Maximum Strength Estimation In the original CE, a constant Max_Torque (N· m) was used to normalise instantaneous shoulder torque Torque (N· m) to Torque (%MVC). Yet, this lightweight approach ignored the variation between participants. Conversely, the original CF estimated participants’ Max_Torque through a long-duration arm lifting task, and the results showed a strong correlation with the direct measurement. Regardless, this pre-calibration process is time-consuming and can cause muscle fatigue before the interaction tasks. A potential solution is to estimate the Max_Torque based on kinematic information like velocity and joint angles. However, prior works have so far only explored such techniques in the knee <cit.>, elbow <cit.>, and wrist <cit.>, but not in the shoulder. An alternative approach is Chaffin's strength model <cit.> implemented in the revised CF <cit.>. Chaffin's model outputs the current Max_Torque (N· m) at the shoulder joint based on the performed arm gestures represented in the elbow extension angle (α_e) and shoulder abduction angle (α_s) while considering the difference of physical capacity between gender by factor G (G_female = 0.1495, G_male = 0.2845). The full model formula is shown in Equation <ref>. In a direct comparison of the revised CF <cit.> with the original CF model <cit.>, adding Chaffin's model significantly reduced the error between CF predictions and Borg CR10. Therefore, we include Chaffin’s model in NICER to capture variations between participants and dynamic gestures. §.§ Recovery Factor to Account for Rest Periods During a continuous mid-air interaction, the original CE outputs increasing model predictions to reflect the growing cumulative fatigue. However, the model prediction will become constant when the interaction is paused for a short break. Meanwhile, CF follows the life cycle of muscle units and considers muscles transitioning between fatigued and resting states during the entire periodic interaction. This conservative approach leads to an inconsistent performance of CF between active interaction periods and rest periods. In an initial validation <cit.>, CF achieved a smaller error with Borg CR10 during the interaction break and reported a higher error during active target-pointing tasks. A similar observation was found in applying CF in a multi-touch display, where the error between CF predictions and Borg CR10 is significantly higher during the active interaction than during the study break <cit.>. The above concern highlights the need to consider fatigue recovery separately for active and rest periods. A revised recovery factor was proposed by <cit.> where an additional scalar was applied to increase the recovery factor when the current exertion is low. However, this approach did not reduce the error between the fatigue prediction and the subjective ground truth and has only been evaluated in simulation data <cit.>. In an early exploration of modelling shoulder and elbow fatigue for static physical tasks, <cit.> evaluated the effectiveness of applying a recovery factor R = 0.04 (s^-1) to estimate the extended endurance after 30s breaks. The same recovery factor and the transition function (see Equation (<ref>)) from this prior work <cit.> are used in NICER to reduce the estimated fatigue level (Fatigue_t) at time t when the shift between active interaction and the rest period is detected. Fatigue_t (%) = Fatigue_t-1 (%) ·exp ^-R ·δ t §.§ Evaluating NICER as An Interaction Analytical Tool We evaluate the refined model NICER against CF as interaction analytic tools by comparing them with subjective interaction measures in a mid-air selection task with different degrees of perceived fatigue. Three evaluation criteria are considered: * whether fatigue models can generalise to different interaction designs, * whether fatigue models can reflect fatigue recovery during breaks in activity, and * whether fatigue models can accurately predict fatigue levels. This evaluation strategy was inspired by the three evaluations done in CE <cit.>. CE was evaluated under different mid-air interaction factors, and the model predictions showed agreement with Borg CR10 and completion time. Meanwhile, CF <cit.> has yet to be evaluated as such a tool by being compared to objective measures rather than subjective Borg CR10 scores. CF may perform better than NICER if we evaluate the model performance based on the error between fatigue predictions and Borg CR10 scores. Doing so would require a conversion from NICER predictions to Borg CR10 scores. The original CF evaluation <cit.> proposed a linear factor for similarly converting CE to Borg CR10, however, they noted that such an approach has not been validated. On the other hand, prior studies <cit.> have confirmed that both CE and CF can differentiate interaction designs based on different target placements below the shoulder level. This differentiation relies on a torque-based approach to quantify exertion. Subsequently, the limitations of a torque-based approach were empirically confirmed in a study with above-shoulder target placements <cit.>. Our work seeks to further explore and assess the performance of NICER and CF in studies of varying durations. CE predicts fatigue levels by estimating the maximum interaction duration, whereas CF bases its fatigue predictions on estimating the proportion of fatigued muscle units. Given the above concerns, we evaluate our proposed comprehensive model (derived from the results of Studies 1 and 2) against the three criteria listed above. Our evaluation presented in Study 3 (Section <ref>) uses data collected from an unconstrained mid-air interaction task similar to Study 2. In this case, we contrast two interaction methods known to create different levels of subjective fatigue in a task with controlled rest periods. In addition to testing each model's prediction of fatigue recovery, this allows us to test their capability in differentiating between tasks requiring higher or lower levels of relative exertion, thus demonstrating the model's ability to generalise to unknown conditions. A summary of all three studies conducted in the current paper can be seen in Figure <ref>. §.§ Dataset Used for Analyses in Studies 2 and 3 Whereas previous studies <cit.> have implemented abstract tasks (e.g. text entry, docking) to test the ability of the proposed models to generalise, we aimed to evaluate our model with a more complex, externally valid task, with minimal constraints on the user's arm movement. We thus identified a task from a separate research study <cit.> on interaction methods for mid-air selection of targets from a 3D scatterplot visualisation as it meets several criteria for our evaluation: * The controlled study is run in an immersive environment using a commodity VR display. * The target selection task allows unconstrained shoulder motion to select targets in a large 3D volume. * The task is unconstrained in time, providing interactions of varying time duration. * The study was designed to measure task fatigue using subjective measures. * The study involved multiple interaction methods, two of which were shown to cause statistically significant differences in subjective fatigue. Our analyses in Studies 2 and 3 use the same target selection tasks with multiple different interaction methods (four in total). Data from two of these interaction methods were used for refinement of our model in Section <ref>. The two remaining methods, which were shown to induce different levels of subjective fatigue (high and low) were used for model evaluation in Section <ref>. The study descriptions in Studies 2 and 3 include only relevant details. A summary of the task and interaction methods is in Appendix <ref>. Although we use the complete dataset, the assigned variables in our study design differ from those of the original study. § STUDY 1: UNDERSTANDING MUSCLE FATIGUE AT DIFFERENT SHOULDER ABDUCTION ANGLES As explained in Section <ref>, we take three steps to develop the correction term needed in the muscle-torque hybrid model to accurately quantify exertion in above-shoulder interactions. In Study 1, we aim to obtain the shape of the curve of EMG-based fatigue indices at different vertical arm positions. We closely follow the experimental setup of the endurance arm-lifting task used in NICE <cit.>, while introducing the following key changes: Angle granularity - While NICE divided the vertical range of arm motion (30° - 150°) into five regions of 30°, in this study, we reduce the region size to 15° over a narrower range of arm motion (45° to 135°), resulting in a total of seven regions. Smaller regions better approximate a continuous distribution for vertical arm positions, enabling us to narrow the pivot angle of sudden changes. Maximum contraction duration - We increase the maximum contraction duration from five to ten minutes. Previous works found the minimum contraction time to observe the linear relationship between EMG signals and the interaction time in bare-hand activity is 207-347s <cit.>. Limiting the study duration to 5 minutes, as in <cit.>, prevents EMG signals from showing a clear increasing pattern and being used as fatigue indices. Natural arm weight - We use self-weight only to make the study results valuable for bare-hand interactions. Meanwhile, <cit.> implemented varying arm weights to explore muscle fatigue of high-intensity tasks. Within-participant design - Instead of the previous between-participant design, all participants finish the arm-lifting tasks in two separate sessions. A bigger sample size (n = 24) in the current study can better estimate the expected results in population. Participants - We recruited 24 (compared to 12 in <cit.>) right-handed volunteers (12 female and 12 male - no participants identified as non-binary gender), aged 18-74 years, height 1.51-1.90 m, and weight 47-87 kg. Inclusion criteria for participants included: age between 18 and 75 years, being right-handed, reading and understanding English and numbers, not having a history of seizures, epilepsy or motion sickness, or any head/arm disability or impairment. §.§ Study Setup Details Task The study task is side-lifting the right arm with no extra weight at the desired Shoulder Angle, the independent variable in Study 1, for up to 10 minutes. The refers to the angle between the participants' arms and torsos. The chosen angles in Study 1 are 45°, 60°, 75°, 90°, 105°, 120°, and 135°. Each condition is performed only once, and participants have a 5-minute break between conditions. The order of the conditions is balanced using a Latin Square. Since the current study follows a within-participant design, we separate seven conditions into two sessions with over 48 hours in between to mitigate participants' fatigue (three conditions in one session and four conditions in another session). The maximum study duration is one hour, including the break. The total study duration for each session, including MVC collection, is approximately 90 minutes. Participants report their perceived fatigue through Borg CR10 ratings at a one-minute interval from the start till the end of the study. Dependent Variables The dependent variables used to assess muscle fatigue for each in the current study are: : Duration of the trial in seconds. A trial is concluded either after 10 minutes or if there are any failures in maintaining the arms at the specified . : We collect the self-reported perceived fatigue (between 0-10) through Borg CR10 ratings every minute during the study and fit a linear regression model to all ratings to obtain the slope that shows the growing rate of subjective fatigue. The EMG-based fatigue indices (δ%MVC· s^-1) for the investigated muscle groups are: : upper trapezius; : middle deltoid; : anterior deltoid; : infraspinatus. In total, we have 7×24 = 168 measurements for , , (δ%MVC· s^-1), (δ%MVC· s^-1), (δ%MVC· s^-1), and (δ%MVC· s^-1). Apparatus The current study uses four EMG sensors (Trigno, Delsys Inc, Boston, MA) to collect muscle contraction in the Upper Trapezius (UT), Middle Deltoid (MD), Anterior Deltoid (AD), and Infraspinatus (IF) muscles, the primary contributors of shoulder movement. EMG sensors captured at two kHz were digitally integrated with the Vicon Nexus Motion Capturing software (v2.12, VICON, Oxford, UK) and synchronised with arm motion tracking. After obtaining the participants' maximum muscle capacity during the MVC collection, we placed 14 reflective markers on the participants' right arm, shoulder, and torso. The marker-based arm motion tracking allowed us to precisely monitor arm lifting and elbow extension during the study at 50 Hz using 10 VICON Vantage cameras. Signal Processing The maximum amplitudes of EMG of all four muscles from MVC collection were identified using Delsys EMG Acquisition (v4.7.9, Delsys, MA, USA). These maximum amplitudes were then utilized to normalize the EMG signals in the endurance study trials. Furthermore, the EMG recordings from the study conditions underwent bandpass filtering ([50-250] Hz, 4th order Butterworth) using Visual3D (v6.0, c-motion, MD, USA). We calculated EMG-based fatigue indices as dependent variables using custom-written scripts in Matlab (MathWorks, Natick, MA, USA). After extracting the time domain feature-RMS and down-sampling each signal to 50 Hz, we fitted a linear regression model to the signal and recorded the slope for further fatigue analysis. Moreover, we analysed Borg CR10 data using the slope of a linear regression model fitted to all ratings recorded during the study. §.§ Results: EMG-based Fatigue Indices under Different Shoulder Angles The current study implemented seven shoulder abduction angles in an endurance study to investigate the effect of vertical arm positions over fatigue measured in , , and EMG-based fatigue indices (i.e. , , , ). The findings are critical to understanding the mapping between fatigue and muscle contraction under different vertical arm positions. Prior studies have found that Borg CR10 increases with the shoulder abduction angle between 30°–90° during 10s contractions <cit.> while the endurance time decreases with the shoulder abduction angles between 0°, 45°, and 90° <cit.>. However, the effects at angles above 90° for both short and long-duration contractions have not been fully explored. As shown in Figure <ref>, the patterns in and during shoulder abduction below 90° are in line with previous studies <cit.> (i.e. increases with shoulder angle, while decreases). In contrast, results above 90° deviate from this trend: in contrast to the symmetry predicted by prior torque-based models, we instead observe that both and are relatively constant at between 90° and 135° . A similar pattern to and can be seen in Figure <ref>, which shows the individual response of each muscle. We can see that , , and indicated that fatigue development was faster above 90° than below 90° while there was no significant difference between 90° and 135°. This observation is consistent with literature <cit.> where in a 5s shoulder abduction from 0° to 165°, the primary and secondary muscles of shoulder motion have the fastest growing rate of contraction between 0° and 90°, then reach the peak of contraction between 90° and 110°, and decrease after that with a slower rate than the rate at angles below 90°. Unlike the other muscles, MD exhibits positive slopes, indicating a failure to show fatigue, likely due to the compensatory effect of nearby muscles. The consistent findings between the short-duration study <cit.> and the current long-duration study reveal the ground truth pattern of muscle fatigue development under different vertical arm positions. Therefore, both instantaneous physical exertion (i.e. shoulder torque measurements) and cumulative fatigue (i.e. existing fatigue models like CE and CF) should follow the pattern in Figure <ref> where ensemble averages (sum before average) for all four muscles were taken to allow joint-level comparison between different s. A sigmoid curve (Equation (<ref>)) was estimated to fit the trend in Figure <ref>. The sigmoid function is used in Section <ref> to develop the correction term, C(θ), as visualised in Figure <ref>. The correction term will help the previous torque calculation to accurately account for the instantaneous exertion during above-shoulder interactions. f(θ) = 0.0095/1+exp((66.40-θ)/ 7.83){0^∘<θ<180^∘} § STUDY 2: REFINEMENT OF NICER MODEL WITH MID-AIR INTERACTION TASKS Our findings from the endurance arm-lifting task in Study 1 establish the shape of the curve of muscle fatigue across different shoulder abduction angles. In Study 2, we use data from mid-air interaction tasks to determine the magnitude of shoulder torque and the curve's formula. Subsequently, we derive the necessary correction term needed for the muscle-torque hybrid model, which is previously explained in Section <ref>. We then finalise the refined fatigue model, NICER, combining features of all previous models as per Table <ref>, at the end of the current section. Task For our analysis, we use data from an immersive point selection task within 3D scatterplots (discussed in Section <ref>). This task allows participants to perform full-range mid-air gestures without constraining shoulder abduction and elbow extension. We combine data from two different interaction methods designed to explore distant selection in the original study. The first (called Linear Gain in the original study) is a variation of traditional `virtual hand extension' methods (e.g. Go-Go <cit.>) in VR, in this case connected by a telescopic arm. The second (called Haptic Portal) is a novel implementation that allows the user to reach through a pair of portals to reach a robotic arm-assisted haptic target. (See Appendix <ref> for further details.) For the purposes of our model refinement, we do not differentiate these methods as a study variable. We combine the data from both interaction methods to provide a high variation in task time and arm motion. Independent Variables As described in Section <ref>, we aim to use data from a representative constraint-free mid-air interaction task to investigate the distribution of shoulder torque under varying shoulder angles and later use it to develop the refined model formulation. Therefore, the sole independent variable is the continuous real-time Shoulder Angle (0° ≤θ≤ 180°) performed during the interaction task. Dependent Variables To understand the distribution of shoulder torque during mid-air gestures of extended arms, we collect measures of , defined as follows: The real-time shoulder torque in Newton–meter measured in the mid-air selection task when elbow flex angle ≤ 35°. Detailed calculation is in <cit.>. Apparatus The arm movement was tracked by a Vicon motion-capture system. Additionally, a MANUS glove was used to track the precise hand gestures when grabbing the virtual target in VR. Participants The study included 24 right-handed volunteers (7 female and 17 male - no participants identified as non-binary gender), aged 19-35 years, height 1.58-1.82 m, and weight 43-90 kg. Only seven participants had no prior experience with VR. It is worth noticing that participants involved in Studies 2 and 3 do not overlap. One participant appeared in Studies 1 and 2. §.§ Correction Term to Revise the Torque Calculation in Dynamic 3D Interactions The earlier exploration of CE <cit.> revealed a limitation in using only shoulder torque to account for exertion above the shoulder when the elbow is extended (i.e. elbow flex angle ≤ 35°). This underscores the need for a muscle-torque hybrid approach to accurately quantify instantaneous exertion. After establishing the shape of the fatigue curve (Equation (<ref>)) in the above-shoulder activity in Study 1, we now use data collected in Study 2 to get the additional term to match torque measures with the curve. To determine the required scaling to make the decreasing shoulder torque increase to a similar level when the shoulder angle θ > 90°, we first fitted a sine function: t(θ) to analyze the raw torque distribution during the extended-arm mid-air selection task for female and male participants. This is detailed in Equations (<ref>) and (<ref>) (see Figure <ref>-left). t_female(θ) = sin(θ·2π/360)/0.11{0^∘ <θ<180^∘} t_male(θ) = sin(θ·2π/360)/0.09{0^∘<θ<180^∘} We then obtained the scale-up factor β of 1005 for female participants and 1230 for male participants by setting the ground truth curve f(θ) to the same level as the raw shoulder torque t(θ) at 90°, i.e. f(90^∘) = t(90^∘) (see Figure <ref>-right). Subsequently, we defined the correction term C as the difference between β· f(θ) and t(θ) for θ > 90^∘ in Equation (<ref>) and (<ref>). As shown in Figure <ref>-right, the shoulder torque measured above 90° has been successfully revised and now aligns with the pattern in Figure <ref>. Moving forward, we will integrate C_female(θ) and C_male(θ) with the original torque calculation from CE to revise the shoulder torque value when the elbow is extended and the interaction occurs above the shoulder. C_female(θ) = 0.0095·1005/1+exp((66.40-θ)/ 7.83)-sin(θ·2π/360)/0.11{90^∘<θ<180^∘} C_male(θ) = 0.0095·1230/1+exp((66.40-θ)/ 7.83)-sin(θ·2π/360)/0.09{90^∘<θ<180^∘} The current study retrieved the correction term from arm movement data collected in a constraint-free interaction task. This allows the refined model to work with any bare-hand interactions without requiring an additional model training process. It is worth noting that our approach is different from the supervised training approach of CF. While CF obtains model parameters by minimising the error between fatigue predictions and subjective fatigue ratings, our study develops the correction term based on the distribution of shoulder torque in bare-hand arm movements. §.§ Finalising the NICER Model Formulation To finalise the comprehensive refined fatigue model, NICER, we took the following steps. We updated the ET function in CE to a shoulder joint-specific ET function built on the grand average ET of the literature  <cit.>. Combining with the correction term C(θ) developed in Section <ref>, Max_Torque from Chaffin's maximum strength model in Appendix <ref>, and the recovery factor in Section <ref>, we finalise NICER formulation in Equation (<ref>) and (<ref>). NICER_active = Duration· (Torque + C(θ)/Max_Torque*100)^1.83· 0.000218/14.86 * 100 NICER_rest = NICER_active·exp ^-0.04 ·δ t § STUDY 3: NICER MODEL VALIDATION The comprehensive NICER model that includes all desired features summarised in Table <ref> is finalised in the last section. The goals of Study 3 are twofold: first, we compare NICER with subjective interaction measures to see whether NICER can distinguish the low-fatigue condition from the high-fatigue condition by the estimated fatigue levels. Second, we evaluate NICER and CF (2017 and 2023 versions) in a study of varying interaction durations to assess their performance in a complex interaction task. We exclude NICE from this comparison as it has yet to be validated in interaction tasks. In our experiments, CE fails to make fatigue predictions for the current mid-air selection tasks due to its limitations in low-intensity exertion. Therefore, we did not consider CE in the following analysis. Furthermore, as the latest CF model is not readily accessible, we can only discuss the results of the original CF model. However, prior work found no significant difference between the original CF and its recently revised variant before adding Chaffin's model. Task For this evaluation, we use two interaction methods from our external dataset. As discussed in Section <ref>, we chose these methods for our evaluation because they were previously shown to have significantly different subjective fatigue scores [The observation is due to differences in selection difficulty. This also resulted in apparent differences in interaction time, however, the difference in means was not statistically significant. See original paper <cit.> for details.]. For our evaluation purposes, we aptly label these High_Fatigue and Low_Fatigue. The High_Fatigue method (called Adaptive Gain in the original study) is a variation of the Linear Gain method described in Section <ref> (with an adaptive gain function in place of a linear one. See Appendix <ref> for implementation details). Likewise, the Low_Fatigue method (called Portals) is a variant of the previous Haptic Portal method (without the robot-assisted haptic feature). The target selections were divided into several blocks. Each block included six target selections, after which participants took a 15s break to report their perceived fatigue through Borg CR10. We call every set of six targets one "block" in the result section, and for every block, we report two Borg CR10 values, one collected at the beginning and the one at the end of the block. Participants completed one full set of tasks (30 trials/5 blocks) with each interaction method. (see Figure <ref>). Independent Variables As described in Section <ref>, we evaluate different fatigue models' strengths for better guiding interaction designs. For this evaluation, we use one independent variable: Interaction Technique with two levels: High_Fatigue and Low_Fatigue. Dependent Variables In the current study design, we aim to control the target placement and hand trajectory between conditions to discover the influence of varying study durations on the dependent variables below: : Self-reported perceived fatigue on a 0–10 scale (0: nothing at all, 10: extremely strong). Target-wise fatigue predictions at the beginning of the study and after each successful target selection: (%), (%), and (%). Block-wise fatigue predictions generated when participants report the Borg score after every six targets: (%), (%), and (%). For each condition, we have 10×12 = 120 measures for , block-wise and , and 31×12 = 372 measures of target-wise , , and . Participants Data were collected from 12 right-handed volunteers (5 female and 7 male - no participants identified as non-binary gender), aged 18-35 years, height 1.53-1.80 m, and weight 54-82 kg. Only two participants had no prior experience using VR. It is important to note that the participants in Studies 2 and 3 do not overlap. One participant appeared in Studies 1 and 3. §.§ Results We used a pair-wise t-test with an alternative hypothesis that the mean of fatigue predictions under the High_Fatigue condition are greater than the mean of predictions under the Low_Fatigue condition to comparetwo conditions for all dependent variables. The detailed results are reported below and summarized in Table <ref>. Borg CR10 The Low_Fatigue condition achieved a significantly lower than the High_Fatigue condition (t_119 = 6.264, p < .001). See Figure <ref>. CF_2017 The fatigue predictions estimated by consider Low_Fatigue conditions as more fatiguing than High_Fatigue conditionsfor Low_Fatigue and High_Fatigue conditions were too similar to differentiate in both target-wise (t_371 = -1.362, p =.913.174) and block-wise (t_119 = -1.028, p =.847.306) results. See Figure <ref>-top. CF_2023 The fatigue predictions estimated by consider Low_Fatigue conditions as more fatiguing than High_Fatigue conditions in both target-wise (t_371 = -4.868, p = 1) and block-wise (t_119 = -2.709, p = .996) results. See Figure <ref>-middle. NICER From Figure <ref>-bottom, we can see that the objective fatigue measure considered the High_Fatigue condition as significantly more fatiguing than the Low_Fatigue condition in both target-wise (t_371 = 8.147, p < .001) and block-wise (t_119 = 4.503, p < .001) results. We calculated the Pearson Correlation Coefficient ρ to assess the similarity between fatigue model predictions (i.e. NICER and CF scores) and subjective fatigue ground truth (i.e. ) in comparing Low_Fatigue and High_Fatigue conditions. NICER achieved the highest ρ with in the Low_Fatigue condition (ρ = 0.976) and in the High_Fatigue condition (ρ = 0.978). Meanwhile, CF_2023 showed a higher correlation (ρ = 0.966) than CF_2017 (ρ = 0.954) in the High_Fatigue condition but a lower correlation (ρ = 0.923) than CF_2017 (ρ = 0.940) in the Low_Fatigue condition. (see Table <ref>). The above results show that NICER makes more accurate fatigue predictions than CF_2017 and CF_2023. §.§ Discussion: Is NICER a Reliable Analytical Tool? To determine whether NICER provides a reliable tool for the analysis of potential interaction methods, we consider the three criteria proposed in Section <ref>. Criterion 1: Can fatigue models generalise to different interaction designs? A key attribute of a reliable tool is to generalise to different interactions, including variations in intensity, range of motion and duration. Our evaluation tests the ability of the models to differentiate between two different interaction methods with significant differences in subjective fatigue scores (see Table <ref>). Results in Section <ref> show that NICER is able to differentiate between these tasks at both target-wise and pair-wise levels, whereas CF_2017 and CF_2023CF are not. This confirms that NICER has a stronger power in distinguishing interaction techniques based on the predicted fatigue than CF_2017 and CF_2023CF. Criterion 2: Can fatigue models reflect fatigue recovery during breaks in activity? In the mid-air selection tasks, was collected once before the break and once after the break to directly observe varying subjective perceived fatigue during rest periods in the study trial. Generally, participants perceived increasing fatigue during task blocks but received noticeable fatigue recovery after the 15s break. As can be seen in Figure <ref>, NICER shows a similar fatigue variation as , where the predictions increased with the target blocks and decreased after the break started. Compared with the original CE, which considers the estimated fatigue level as constant during the interaction break, NICER improves significantly in quantifying fatigue recovery by showing a clear drop between the active task and the rest periods. On the other hand, CF_2017CF shows a continuously increasing pattern during the study and fails to reflect the rest period in reducing the estimated fatigue levels. CF_2023 performs better than CF_2017 by showing reduced fatigue predictions after nearly every break. However, the decrease is only subtle throughout the entire study. We calculated the Pearson Correlation Coefficient ρ to assess the similarity between fatigue model predictions (i.e. NICER and CF scores) and subjective fatigue ground truth (i.e. ) in comparing Low_Fatigue and High_Fatigue conditions. NICER achieved a higher ρ with in the Low_Fatigue condition (ρ = 0.976) and in the High_Fatigue condition (ρ = 0.978) than CF with in the Low_Fatigue condition (ρ = 0.940) and in the High_Fatigue condition (ρ = 0.954) (see Table <ref>). Criterion 3: Can fatigue models accurately predict fatigue levels? As discussed in Section <ref>, NICER and CF provide different predictions (endurance time versus perceived fatigue). Whereas CF units can be directly converted to Borg units using a conversion factor of 0.0875, a similar conversion of NICER has not been validated <cit.>. To evaluate their predictive capability, we compare the overall range of NICER and CF to the range of reported Borg scores. Overall, the results of CF_2017 and CF_2023 (range between 30%-50%) have a similar magnitude as (range between 13%-48% of the maximum value 10) while NICER (range between 24%-69%) has a higher magnitude that will potentially overestimate the perceived fatigue. This observation is expected since model parameters of CF were optimised to fit scores. We will discuss this further in Section <ref>. §.§ Summary In comparing fatigue model predictions with the subjective measures reported in Section <ref>, we found varying strengths and weaknesses of NICER and CF. We found that NICER can differentiate between two interaction methods as similarly was observed in the objective measures. This indicates that NICER possesses a greater ability to generalise to different tasks than CF. The inability of CF to differentiate these interaction methods may be due to overfitting Borg CR10 scores in its supervised training approach. We also found that NICER shows a greater variation between periods of activity and recovery than CF for the interactions investigated in our study. A visual inspection of the predictions in Figure <ref>, in comparison with the subjective scores in Figure <ref>, shows that the recovery approach in NICER may be too aggressive, whereas CF fails to drop after every break. Additionally, Pearson Correlation Coefficients in Table <ref> have a stronger fit for NICER than CF for both interaction methods, however, overlapping confidence intervals indicate that these differences may not be significant. Overall, these results indicate that NICER is able to better predict fatigue recovery than previous models. When looking at the predicted range of fatigue levels induced by different activities, we found that fNICER may overestimate fatigue in comparison to the subjective measures. Fatigue predictions by CF achieved a smaller error to the subjective measures than NICER, which implies that NICER may overestimate the subjective perceived fatigue. § APPLICATIONS AND FUTURE WORK The study results of the mid-air selection task in Section <ref> show that our proposed refined model NICER can effectively distinguish the study conditions and make accurate fatigue predictions that capture fatigue recovery during breaks. We are confident that from now on, NICER can serve as the objective fatigue measurement and guide future interaction design. §.§ Applications of NICER NICER has numerous applications due to its comprehensiveness. We anticipate its integration into wearable and mobile interaction design, the Internet of Things, mobile VR, AR, and Mixed Reality (MR) systems, character simulation <cit.>, and mid-air haptics <cit.>, among other exciting applications. A straightforward use of NICER is as an objective metric to optimise the design of graphical user interfaces <cit.> in order to develop a safe and ergonomic-friendly user experience. NICER can assist designers in identifying the most comfortable positions to place static 3D UIs <cit.> by adapting the current arm gestures and the interaction time. For example, the 3D object position can be re-targeted to reduce arm movement in VR object retrieval <cit.> to enable an extended engagement. Similarly, interaction techniques can be made adaptive by shifting input modality, for instance, from gesture to speech. §.§ Limitations and Future Work First, our investigation of muscle fatigue development in Section <ref> was limited to the vertical plane only (i.e. 0° in the horizontal plane). This was chosen to focus on muscles that are the primary contributors to vertical shoulder movements and to prevent interference from muscles involved in horizontal movements. However, since shoulder torque remains at the same level when the shoulder is at a fixed vertical position, torque may fail to capture the physical exertion that varies between different horizontal positions. A similar concern was raised in applying CE in 3D UI design <cit.>, where participants found the interaction at their left sides was more fatiguing than the right side when interacting with their right arms. Future work could follow a similar design in the endurance study in Section <ref> but with several arm angles at the horizontal plane to investigate muscle fatigue development under different horizontal arm positions. The empirical comparison between shoulder torque and EMG-based fatigue indices may contribute to another correction term for interaction above 90° in the horizontal plane. Moreover, results combined with our findings in the vertical plane (Section <ref>) will lead to the complete picture of muscle fatigue development in the full range of shoulder motion. As such, future studies can apply these results in biomechanical simulations to estimate the fatigue indices for a given arm position, similar to the prior study <cit.> on estimating the muscle activation cost based on the coordinate in the interaction volume, and the work on modelling neck muscle fatigue in XR <cit.>. The current study evaluated the model performance by comparing fatigue predictions with subjective interaction measures. Though results confirm the effectiveness of using NICER in analysing the difference between the conditions chosen for this work, further study is needed to determine whether the models generalise to a greater variety of interactions (e.g., dynamic task intensities and varying recovery times) in real-world settings. It remains unclear how close the fatigue prediction of NICER is to the ground truth subjective sores. Previous evaluation done in CF <cit.> applied a linear conversion factor to assess the root mean squared errors (RMSE) between model predictions and Borg CR10. However, such a conversion has not been empirically validated. Since we cannot assume that NICER and Borg CR10 use the same scale, we instead applied Pearson Correlation to compare the general patterns. A potential future direction is to explore a feasible way to quantify the distance between model predictions of endurance time and subjective fatigue measures. In the model development of NICER, a young group of participants contributed to the study. Furthermore, the current model formulation only considers the variation of participants between sexes.In the original CE implementation, only a sex-based factor was taken to consider the variation of participants. Therefore, our NICER model has the same limitation. However, physical effort is sometimes affected by gender <cit.>, age <cit.>, and height <cit.>. We encourage future studies to investigate the effect of age on fatigue models. We plan to represent these human factors from a high-level perspective by introducing the upper limb length as one of the model parameters in future implementation. CF uses participants' arm length as one of the model inputs. However, a time-consuming pre-study calibration is needed to take the measurement. A future improvement can be using the inverse kinematic toolkit to estimate the arm length before running the model. Lastly, literature, including the current study, only considers single right-arm interactions when developing the fatigue model. However, inputs from bimanual interaction and full-body movement are becoming popular in 3D gaming and multi-touch large-screen interactions. Future work can generalise the takeaway of the current literature and extend the improved models to consider body segments rather than shoulder and arm. § CONCLUSION In this paper, we introduce a refined model to quantify shoulder fatigue in mid-air interaction, called NICER, which is short for “New and Improved Consumed Endurance and Recovery Metric". NICER is a ready-to-use, lightweight but comprehensive model that can: (1) predict the maximum duration of interaction from arm position; (2) work with interactions of varying physical intensities; (3) capture the extra exertion of above-shoulder arm movement; (4) consider gesture-based maximum strength; and (5) reflect fatigue recovery after rest periods. With the latest head-mounted display (HMD) devices, the necessary inputs for NICER can be easily retrieved from real-time hand-tracking. Thus, NICER is ready to be used in immersive interaction design. In Study 1, we investigated muscle fatigue at different shoulder abduction angles to obtain the curve of objective muscle fatigue. We used arm movement data collected from a mid-air interaction task with no constraints on shoulder positions and elbow extension to finalise the refined model formulation in Study 2. Finally, in Study 3, we evaluated the model performance as a design analytical tool in a mid-air selection task of varying durations. The promising study results show that NICER can objectively distinguish the interaction designs of different perceived fatigue and accurately reflect the decreasing fatigue after the break. A stronger correlation with the subjective ground truth of perceived fatigue than the previous model (<cit.>) confirms that NICER is a reliable fatigue measurement to assist future interaction design. We thank Sujin Jang and Ana Villanueva for providing access to the source code for their CF model. We also thank our study participants for their time and exertion, as well as anonymous reviewers for their insightful feedback. ACM-Reference-Format § APPENDIX § CHAFFIN’S MODEL OF MAXIMUM STRENGTH ESTIMATION Chaffin's model outputs the current Max_Torque (N· m) at the shoulder joint based on the performed arm gestures represented in the elbow extension angle (α_e) and shoulder abduction angle (α_s) while considering the difference of physical capacity between gender by factor G (G_female = 0.1495, G_male = 0.2845). The full model formula is shown in Equation <ref>. Max_Torque = (227.338 + 0.525 ·α_e - 0.296 ·α_s)· G § SUMMARY OF EXTERNAL DATASET As mentioned in Section <ref>, a target selection task is chosen in Studies 2 and 3 to finalise and evaluate NICER. This section provides a summary of the study task and interaction method implemented in the original study <cit.>. We refer readers to the original paper for complete details. §.§ Interaction Task: Distant Object Manipulation Participants are asked to interact with distant objects using their hands after being introduced to the study environment in a virtual room in VR. In detail, participants must successfully reach and select a set of 30 randomly ordered targets, one by one, from a cluster of data points in each trial, using a specific interaction method, with an extended virtual arm, controlled by their right arms, as visualised in Figure <ref>-right.All 30 targets are pre-selected data points in a large-scale 3D scatter plot from a public dataset. The 3D scatter plot is 10×10×8 m (width × height × depth). Participants are instructed to stand at a fixed point five meters away from the scatter plot to maintain consistency in extended arm calibration. Before each study trial, participants take a training session with only 12 targets to practise the required interaction technique and their understanding of Borg CR10 scores. After the training session, participants are told to take a break. They can only start the study trials if they feel refreshed from the break (i.e. Borg CR10 ≤ 0.5). During the study, participants are encouraged to finish the task as quickly as possible to the best of their ability. Participants are told to take a 15s break after every six targets to maintain the same speed of arm movement. §.§ Interaction Methods: Extendable Hands and Movable Portals The interaction task described above was used with four different interaction methods for distant target selection, described as follows. * Linear Gain: The Linear Gain method amplifies participants’ maximum reach by lengthening the extended virtual arm with a constant scalar (60 < α < 70 based on participants’ actual arm length) (see Figure <ref>-left). * Haptic Portal: The setup of the Haptic Portal method includes a non-linear gain, a pair of portals, and a robotic arm-assisted haptic target. The non-linear gain α amplifies participants’ maximum reach with an adaptive scalar based on the real-time hand movement speed v (cm · s^-1). If v < 3, α = 5. Otherwise, α = 60. Of the two portals, one is controlled by the extended arm and can be placed anywhere close to the target to enable a clear view of the target location through the other static portal on the left side of the display (see Figure <ref>-right). After successfully selecting the target, participants receive haptic feedback from a robotic arm. * Adaptive Gain (labelled as High_Fatigue in Study 3): A non-linear scalar α applied to the extended virtual arm amplifies participants' maximum reach. This adaptive scalar (α) is determined by the hand movement speed. If the physical hand moves slower than a threshold of 3 cm · s^-1, α = 5, otherwise, α = 60 (see Figure <ref>-left). * Portal (labelled as Low_Fatigue in Study 3): The test condition featuring a pair of portals (0.5 m^2) that enable a clear view of the target location within the data cluster. One movable portal added to the right hand is controlled by the extended virtual arm. Another static portal serving as the visual reference of the dynamic portal is placed left of the point-of-view (POV) on the display. Participants can first place the movable portal with the extended virtual arm anywhere near the target before reaching out and selecting the target accurately from the portal (see Figure <ref>-right).
http://arxiv.org/abs/2406.08872v1
20240613071900
Testing MOND using the dynamics of nearby stellar streams
[ "Orlin Koop", "Amina Helmi" ]
astro-ph.GA
[ "astro-ph.GA" ]
Kapteyn Astronomical Institute, University of Groningen, Landleven 12, NL-9747 AD Groningen, the Netherlands The stellar halo of the Milky Way is built up, at least in part, from debris from past mergers. Stars from such merger events define substructures in phase-space, for example in the form of streams, which are groups of stars moving on similar trajectories. The nearby Helmi streams discovered more than two decades ago are a well-known example. Using 6D phase-space information from the Gaia space mission, <cit.> have recently reported that the Helmi streams are split into two clumps in angular momentum space. Such substructure can be explained and sustained in time if the dark matter halo of the Milky Way takes a prolate shape in the region probed by the orbits of the stars in the streams. Here, we explore the behaviour of the two clumps identified in the Helmi streams in a Modified Newtonian Dynamics (MOND) framework to test this alternative model of gravity. We perform orbit integrations of Helmi streams member stars in a simplified MOND model of the Milky Way and using the more sophisticated Phantom of RAMSES simulation framework. We find with both approaches that the two Helmi streams' clumps do not retain their identity and dissolve after merely 100 Myr. This extremely short timescale would render the detection of two separate clumps as very unlikely in MONDian gravity. The observational constraints provided by the streams, which MOND fails to reproduce in its current formulation, could potentially also be used to test other alternative gravity models. Testing MOND using the dynamics of nearby stellar streams Orlin Koope-mail: koop@astro.rug.nl Amina Helmi Received xxxx; accepted yyyy ======================================================================== § INTRODUCTION A plentitude of empirical evidence points to mass discrepancies in the Universe. <cit.> found the luminous matter in the Solar vicinity to be insufficient to explain the vertical motions of stars near the Sun, and <cit.> reported that the velocity dispersion in (clusters of) galaxies was too high for them to stay bound by the visible mass. Later <cit.> showed that dynamically cold disks in galaxies are prone to instability unless embedded in some potential well like a halo of dark matter. Then <cit.> and <cit.> showed that the rotation curves of spiral galaxies remain approximately flat with increasing radius. With the baryonic mass inferred from the Big Bang Nucleosynthesis model and the observation of the cosmic microwave background and present-day inhomogeneity, the need for a boost of the growth rate with invisible mass also arose from cosmology. The currently preferred cosmological model includes a cosmological constant (Λ) and involves significant amounts of cold dark-matter (CDM). The ΛCDM model is supported by a wealth of observational data over many different scales. It does however face some challenges on small scales such as the predicted excess of dark satellites (or subhalos), i.e. the `missing satellite problem' <cit.>, or the `plane-of-satellites problem' where the observed spatial distribution of the satellites of Milky Way seems to be too planar in comparison to predictions from simulations (see e.g <cit.>, <cit.>, <cit.> and <cit.> for a review and possible solutions and outstanding problems on the dwarf galaxy scale). Another challenge is the `radial acceleration relation' (RAR), where the rotation curve of many galaxies can be predicted from the Newtonian gravitational acceleration alone in a universal way <cit.>. An alternative way of looking at the hidden mass hypothesis is that the problem signals a breakdown of Newtonian dynamics in the weak acceleration regime, defined by a new constant a_0, of the order of 10^-10 m.s^-2, found to be appropriate to the tiny accelerations encountered in galaxies beyond the Solar radius <cit.>. MOND (MOdified Newtonian Dynamics) is able to predict the observation of flat rotation curves, the RAR and the baryonic Tully-Fisher relation –a scaling relation between the luminosity and the circular velocity of disk galaxies– <cit.>. However, MOND runs into challenges on more cosmological scales, even needing additional dark matter to explain certain observations of galaxy clusters <cit.>. With the increasing amount of data on our own Galaxy, especially from the Gaia mission <cit.>, more tests of MOND on the Galactic scale can be performed. However, due to the nonlinearity of MOND (i.e. forces are not additive), attempts have been somewhat limited thus far, except for instance the work done by <cit.>. Some other examples include using the kinematics of stars to model the rotation curve and mass distribution in the Galaxy <cit.> and the modeling of stellar streams <cit.> using the Phantom of RAMSES patch <cit.>. Stellar streams are the products of the tidal stripping of satellites like globular clusters or dwarf galaxies. These streams are a great probe of the galactic mass distribution and potential. Not only are their characteristics dependent on the overall gravitational potential, but the presence of dark satellites (often referred to as subhalos, predicted in the thousands in the case of cold dark matter) can result in gaps, spurs or other substructures in streams <cit.>. Such features would have to be attributed to interactions with baryonic structures in the case of MOND. Furthermore, tidal streams resulting from globular clusters in MOND will experience an altered internal potential, leading to an asymmetry between the leading and the trailing tidal tails <cit.>, which is generally not expected in Newtonian gravity <cit.>. Among the stellar streams identified thus far are the Helmi streams crossing the Solar neighbourhood <cit.>. They are thought to be the debris of a massive dwarf galaxy with a stellar mass of M_*∼10^8 M_⊙, that was accreted approximately 5–8 Gyr ago. Its stars have phase-mixed, implying that they do not define spatially tight structures but that the streams' stars cross the Solar vicinity on different phases of their orbit. They define clumps in velocity space as well as integrals of motion space (see Fig. <ref>). The latter is defined by (quasi-)conserved quantities along the orbit of the stars, such as their energy and angular momenta. These can be used to track dynamically groups of stars of similar origin <cit.>. <cit.> (hereafter D22) have shown that the Helmi streams in fact define two distinct clumps in angular-momentum space (L_z-L_⊥, where L_⊥ = √(L_x^2+L_y^2)). The existence of these two subclumps has been confirmed by a clustering algorithm using both Gaia EDR3 <cit.> and the more recent Gaia DR3 dataset <cit.>. The stellar populations of these two clumps are indistinguishable, which means that they must have originated in the same system. D22 demonstrate that the two clumps in (L_z-L_⊥) space are able to survive in time as such if the Galactic dark matter halo has a prolate shape with axis ratio q_ρ∼ 1.2. In this case, one of the clumps is placed on the Ω_ϕ:Ω_z=1:1 orbital resonance. The need for a flattened shape of the dark matter halo that does not follow the distribution of baryons, prompted us to want to explore the dynamics of the Helmi streams in the context of MOND. In particular, we study here the behaviour of the two clumps in L_z-L_⊥ space in a MOND potential that is constrained to follow the known distribution of baryons in the Milky Way. In Section <ref> we summarize the selection criteria on Helmi streams stars from D22. In Section <ref> we explain the MOND framework and how we apply it to study the Helmi streams with simple orbit integrations. Section <ref> shows our results using the Phantom of RAMSES N-body code. We present a discussion and our conclusions in Section <ref>. § DATA We follow D22 in selecting Helmi streams stars from data provided by Gaia EDR3 <cit.>. This mission has provided 6D information for 7,209,831 of their ∼1.7 billion sources. After applying two quality cuts, namely >5 and RUWE <1.4, there remain 4,496,187 sources within 2.5 kpc of the Sun. This sample was extended by including radial velocities from crossmatches with spectroscopic surveys, namely APOGEE DR16 <cit.>, LAMOST DR6 <cit.>, Galah DR3 <cit.> and RAVE DR6 <cit.>. This results in a sample of 7,531,934 sources. These stars have been corrected for the parallax zero point offset <cit.>, solar motion <cit.> and motion of the local standard of rest <cit.>, to transform their coordinates and proper motions to Galactocentric Cartesian coordinates assuming R_⊙=8.2 kpc and z_⊙=0.014 kpc <cit.>. We calculate angular momenta, L_z and L_⊥ with the sign of L_z flipped to that L_z>0 for prograde orbits. To select stars belonging to the Helmi streams, D22 calculate the orbital energy of the stars with <cit.> in an axi-symmetric potential consisting of a stellar thin and thick disk, an HI gas disk, a molecular gas disc, a spherical bulge and an NFW halo, which is spherical by default <cit.>. D22 define ellipses to select two clumps in L_z-L_⊥ after removing less-bound stars (with E<-1.2× 10^5 km^2.s^-2) which are unlikely to be members of the streams. The clump at higher L_⊥ contains 154 stars and the clump at lower L_⊥ contains 130 stars. For more information, see Section 2 in D22. Figure <ref> shows the kinematics of the Helmi streams stars as well as their distribution in L_z-L_⊥ space, where the gap between the clumps is very apparent for the stars with Gaia radial velocities because of their much smaller measurement uncertainties. § MOND To investigate the behaviour of the Helmi streams in the context of MOND, we use the AQUAL formulation of MOND <cit.>. The original formulation of MOND states that the MOND gravitational acceleration g is given by <cit.>: μ(ga_0)g=g_ N, where g_ N is the Newtonian gravitational acceleration and μ is the interpolation function between the `deep-MOND' regime at low accelerations and the Newtonian regime at high accelerations. At these accelerations, one must recover the observed behaviour, namely that the total gravitational attraction approaches g_ N, and therefore μ(x)→1 for x≫1. On the other hand, for low accelerations g→√(g_ N a_0), where a_0=1.2×10^-10 m.s^-1 is Milgrom's acceleration constant, and so μ(x)→ x for x≪1. In this paper, we employ the following widely used interpolating function: μ(x)=x1+x, which has been shown to provide good fits in the intermediate to weak gravity regimes of galaxies <cit.>. In the AQUAL formulation of MOND, Eq. (<ref>) results from a modification of the gravitational action (or Lagrangian) at a classical level, which also changes the Poisson equation. The AQUAL Poisson equation is: ∇[μ(|∇Φ|a_0)∇Φ]=4π Gρ_ b, where Φ is the MOND gravitational potential, and ρ_ b is the baryonic matter density. A general solution for the gravitational force arising from the MOND potential can be written as: μ(ga_0)g=g_ N + S, where S is a curl field so that ∇·S=0, and g_ N=-∇Φ_ N, where Φ_ N is the Newtonian gravitational potential. Hence, the original formulation of MOND (i.e. Eq. <ref>) would only be applicable in systems where S vanishes. This has been shown to hold in highly symmetric and/or one-dimensional systems, i.e. when one can write |∇Φ_ N|=f(Φ_ N) for some smooth function f. An important example for which this holds are flattened systems where the isopotential surfaces are locally spherically symmetric, such as Kuzmin disks and disk-plus-bulge generalizations thereof <cit.>. Indeed, for a Kuzmin disk of mass M, it can be shown that f(Φ_ N)=Φ_ N^2/MG. §.§ Milky Way Models To derive a Milky Way potential in the MOND framework, and hence the force field necessary to integrate the orbits of the Helmi streams' stars, we focus on the baryonic components of our Galaxy, namely its disk and bulge. For simplicity, we use the descriptions of these components from two widely employed models that we describe below. In this way we extend the pioneering work of <cit.> by using more complex potentials than the Kuzmin disk. We note that <cit.> have shown for disk rotation curves that the original approximation given by Eq. (<ref>) holds beyond R=5 kpc in Kuzmin and exponential disks, and that this equation provides a good approximation to the rotation curves for bulge plus disk potentials <cit.>. This means we can use the constraint of the circular velocity at R_⊙=8.2 kpc to fit our MOND models. We focus first on the McMillan model (McM model), which stems from the baryonic component from <cit.>. In order to match the measured circular velocity at the position of the Sun, we needed to increase the surface mass density of the thin disk component slightly by 10% from Σ_ d= 9×10^8M_⊙ to 1×10^9M_⊙. The rotation curve provided by this model can be seen in the left panel of Fig. <ref> and compared to the original Newtonian McM model. Note that we have modified the dark matter halo of the latter model to have q_ρ=1.2, following D22. This however, does not have any effect on the rotation curve in the disk plane. We also explore the Price-Whelan model (PW model), which consists of a disk and a bulge, adapted from the baryonic component of the in <cit.>, building on the disk model from <cit.>. The disk has a Miyamoto-Nagai potential <cit.>: Φ_ MN(R,z) = -GM_ d√(R^2+(a_ MN+√(z^2+b_ MN^2))^2), with R=√(x^2+y^2), M_ d = 7.5× 10^10M_⊙, a_ MN = 3 kpc and b_ MN = 280 pc. We again have increased the mass of the disk in this model from its default value by ∼ 15%. The bulge has a potential following a <cit.> profile: Φ_ H(r) = -GM_ br+a_ H, with r=√(R^2+z^2), M_ b = 5×10^9M_⊙ and a_ H = 1 kpc. In Newtonian gravity, the default Price-Whelan model has a dark matter halo that follows a flattened NFW potential <cit.>: Φ_DM, PW=-GM_ hr̃ln(1+r̃r_ a), where r̃=√(R^2+(z/q)^2), q=0.95 is the flattening in the z-direction, M_ h=7·10^11M_⊙ is the halo mass and r_ a=15.62 kpc is the scale radius. The left panel of Fig. <ref> shows the rotation curves of the McM model in both the Newtonian (i.e. including the dark matter halo, solid) and MOND (dashed) frameworks. We calculated circular velocities as v_ c(r)=√(rF(r)) in the disk plane z=0, where we use Eq. (<ref>) for the MOND forces. The right panel shows the results for the PW model, where we also show the contribution from the phantom dark matter density. This phantom dark matter density is the effective density of dark matter that would have caused the MOND force field in Newtonian gravity, i.e. ∇^2Φ=4π G(ρ_b + ρ_ph), where ρ_b is the baryonic density, and ρ_ph is the phantom dark matter density. Therefore, if the Newtonian potential is known we can directly infer the functional form of the phantom dark matter density: ρ_ph=∇·((ν(|∇Φ_ N|/a_0)-1)∇Φ_ N)4π G. Here ν(x) = √(1+4/x)+12 is a function such that one can write g=ν(g_ N/a_0)g_ N, which in terms of Eq. (<ref>) means that ν(y)=1/μ(x), where y=xμ(x) for S = 0 <cit.>. Figure <ref> compares the density of the PW model in the x-z-plane for the Newtonian (bottom row) and MOND (top row) frameworks. The contours reveal how the phantom dark matter follows the shape of the disk at low heights above the plane, but is spherically symmetric farther away. §.§ The EFE An important characteristic of MOND, due to its nonlocality, is that the external field acting on a system can have a significant effect <cit.>. In our case, when studying the individual orbits of Helmi streams stars in the Milky Way, this external field effect (EFE) could arise from the presence of dwarf galaxies like Sagittarius (Sgr) or the Large Magellanic Cloud (LMC), or due to Andromeda (M31), the largest galaxy closest to the Milky Way. To study the importance of accounting for the EFE due to these objects, we calculate where an external field produces a force equal to the internal field of the Milky Way. This happens at a radius of: R_EFE = √(GM_bary, MWa_0)g_ext, where M_bary, MW is the total baryonic mass of the Milky Way in the MOND framework, and g_ext is the MOND gravitational acceleration due to the external object. For an object with total baryonic mass M_bary and radius r, the rotational velocity flattens at a plateau with a final velocity v_ f= √(GM_barya_0), so the MOND gravitational acceleration felt at a distance d≫ r from such an object is g_ext=v_ f^2/d. For M31, v_ f≈225 km.s^-1 <cit.> and d≈770 kpc, so g_ext=0.02a_0, which is well within the deep MOND limit. Therefore, R_EFE∼ 450 - 490 kpc for the external field effect from Andromeda on the Milky Way for the McM and PW models. In our orbit integrations Helmi streams stars reach an apocenter of 21 kpc, which is much smaller than R_EFE, and hence the orbits should not be affected significantly by the external field effect due to M31 <cit.>. Additionally, it has been shown <cit.> that M31 has remained farther away than 600 kpc from the Milky Way in the past ∼8 Gyr, meaning that the Helmi streams have been orbiting the Galaxy without considerable influence from M31 since their accretion, which has been estimated to have taken place 5 - 8 Gyr ago <cit.>. This conclusion is supported by <cit.> who find that the external field effect of M31 and also of the Virgo cluster can be neglected (for objects orbiting) in the inner regions of the Milky Way. For the LMC and Sgr, using a similar argument we find that their influence evolves over time while they orbit the MW, and that they enter regimes where their influence on the Helmi stream stars is not negligible <cit.>. Therefore we choose to run additional tests where we include Sgr or the LMC while integrating the Helmi stream stars orbits. To this end we model both Sgr and the LMC using Hernquist spherical potentials with M_Sgr=1×10^8M_⊙, a_ H,Sgr=0.5 kpc <cit.>, and M_LMC=3.2×10^9M_⊙, a_ H,LMC=2 kpc <cit.>, where their masses correspond to estimates of their baryonic content. §.§ Simplified Orbit Integrations As a first step to investigate the Helmi streams in MOND, we performed orbit integrations in the MOND framework, simplified by omitting the influence of the curl field S and compare these with orbits integrated in the original McM and PW Milky Way models that include dark matter halos and use Newtonian gravity. We note that we can only safely omit the curl field S from our analysis and use g_MOND=ν(|g_ N|/a_0)g_ N if one can use the relation |∇Φ_N|=Φ_ N^2/GM_ b, where M_ b is the baryonic mass of the model <cit.>. Close to the disk where |z|<1.5 kpc (2.5 kpc for the McM model) and within R≲30 kpc the above relation does not hold. The LMC does not pass this region on its orbit, so we can use Eq. (<ref>) for integrating its orbit. Sgr has a pericentre of ∼ 20 kpc and the stars in our Helmi Streams sample are currently located within 2.5kpc from the Sun. Still, for the majority of the time the orbits of Sgr and the Helmi streams are in a region where our simplification holds. Reassuringly, in Section <ref>, where we explore the behaviour of the Helmi Streams using PoR, we will see that their orbits are barely affected by our simplification. The validity of this approach for the orbit integrations outside of the disk area does not change if we add Sgr and the LMC to the MOND Milky Way potential. Their contribution is both small and locally spherically symmetric, so that S can still be neglected since |∇Φ_ N|=Φ_ N^2/MG holds in the region of the Helmi streams at each time in the presence of either Sgr or the LMC. To compute the orbit of a star from the Helmi streams, we use its present-day position and velocity and integrate the equation of motion backward in time using the MOND force associated to the Milky Way potential under consideration. We use a Leapfrog integrator, with a timestep of Δ t=0.2 Myr. For the Newtonian models we use the integrator implemented in with the same timestep. To include either of the Milky Way satellites, namely Sgr or the LMC, we first determine their orbit in the MOND Milky Way model without the Helmi stream stars. Then we use the appropriate Hernquist potential representing the baryonic component of the satellite at the position of the satellite through time as an addition to the background potential we integrate our Helmi stream stars in. Figure <ref> shows the orbit of one of the Helmi streams stars in the PW model. For comparison we also plot the orbit obtained including Sgr, as well as that using Newtonian gravity. This figure shows that the presence of Sgr does not lead to significant differences in the trajectory of the Helmi stream stars but that there are important differences between the MOND and Newtonian frameworks. Figures <ref> and <ref> show the behaviour of the Helmi streams stars in the L_z-L_⊥ plane stemming from our simplified orbit integrations, for the McM and PW models respectively. In Fig. <ref>, the top row corresponds to the Newtonian model, the second row to the corresponding MOND model, and the bottom two rows include Sgr and the LMC. The columns show four snapshots through the time of integration, at 0, 0.1, 2 and 4 Gyr. Because the results for adding Sgr and LMC to the PW model were similar to the results for the McM model, we do not show these in Fig. <ref> again. Clearly the gap between the two Helmi streams clumps does not persist in the MOND models, irrespective of the inclusion of Sgr or the LMC, which do not change the orbital behaviour visibly. The result is also valid irrespective of the choice of baryonic potential for the Milky Way. The two clumps mix significantly already within the first ∼100 Myr in all MOND models, with blue and red stars changing their L_⊥ enough that a distinction between the two groups is no longer possible. This extremely fast mixing behaviour is not seen for the Newtonian PW model that has an oblate dark matter halo with flattening of q=0.95. However, in this Newtonian model the gap does not persist either over long timescales as it disappears after ∼200 Myr, although the degree and rate by which the stars in the red and blue clumps are mixing is lower in comparison to the MOND models. In the Newtonian McM model with q_ρ=1.2 there is no mixing by construction (see D22 for details). § PHANTOM OF RAMSES We also calculate the orbits of the Helmi streams stars in the McM model using the Phantom of RAMSES (PoR) patch to the RAMSES N-body code to provide insight in a more sophisticated MOND framework <cit.>. We note that the PoR patch is based on the QUMOND formulation of MOND, whereas our orbit integrations and discussion of the S field were based on the AQUAL formulation of MOND. Hence, results between the two frameworks need not necessarily agree <cit.>. We employ the flag to represent the baryonic components of the McM model with a collection of particles that continually generate the MONDian gravitational potential the Helmi streams stars are integrated in. We generate the positions of these static particles using <cit.> to represent the baryonic density profiles. We integrate the orbits of the Helmi streams stars for 2 Gyr, using a minimum and maximum grid level of 7 and 18 respectively. This is sufficiently precise to model the orbit well. The boxsize of the simulation is 1024kpc, ensuring that the potential can be approximated by a point source at the boundary of the box. The top row of Figure <ref> shows the orbit of one of the Helmi streams stars in the PoR simulation compared to its orbit in the simplified orbit integration from Section <ref>. Even though there is a difference that builds up over this time, the nature of the orbit is not altered. The bottom row shows the L_z-L_⊥ plane for the Helmi streams particles in the PoR simulation at timestamps of t=0,0.1 and 2 Gyr. Again, we see that the gap between the two Helmi streams clumps dissolves within 100 Myr, although the degree of mixing of the stars in the two clumps is less than in the simplified orbit integrations shown in the second row of Fig. <ref>. After 2 Gyr however, the two clumps show a similar degree of mixing in both the PoR and simplified orbit integrations. We may therefore conclude from Figs. <ref>, <ref> and <ref> that the simplified modelling as well as the sophisticated PoR integrations of the Milky Way in MOND employed here fail to reproduce the observations of the Helmi streams. Whether this is indicative of a shortcoming of our implementation of MOND or that MOND theory, perhaps as a modification of inertia, should be altered, remains to be seen. § DISCUSSION AND CONCLUSION We have shown by way of simplified orbit integrations in MOND models of the Milky Way using AQUAL as well as orbit integrations in QUMOND using the PoR patch for RAMSES that the gap present between the two observed clumps in L_z-L_⊥ space for the Helmi streams does not persist through time. The stars in the clumps intermix very quickly, namely already in the first ∼100 Myr of integration as can be seen in Figs. <ref>, <ref> and <ref>, leading to the dissolution of the gap and the mixing of the clumps. If we take our implementation of MOND at face value, this puts us in the uncomfortable position of having to argue that we are living at a truly special time in Galactic history. A question that may naturally arise is whether the presence of two separate clumps could be reflecting some internal substructure already existing in the progenitor system. And in that case, whether the gap between the clumps was initially larger such that it could have developed to its present observed form after a few Gyr of evolution in the Milky Way (although this may even be challenging for the MOND models). There is however a fundamental problem with this argument. By definition, substructure is localized in phase-space in a (progenitor) system. As the system is accreted, phase-space folds leading to the presence of multiple streams. Each of the streams stems from a (very) small region of the original system <cit.>. Therefore, substructure in the progenitor system should be apparent in a localized portion of a stream. Fig. <ref> shows that multiple kinematic streams (e.g. with positive and negative v_z) are associated to each of the two clumps in angular momentum space. This indicates that the clumps cannot be the result a local feature/substructure in the progenitor system, as stars in a given clump stem from a large region of phase-space of this system. Our results can be compared to Figs. 8 and 9 of D22, who have shown that for the clumps to be distinguishable over long timescales, a prolate halo with a density flattening q_ρ∼ 1.2 would seem to be necessary, as we have exemplified in the top panels of Fig. <ref>. This renders the effective potential to be close to spherical somewhere in the region probed by the stream stars, which results in better conservation of L_⊥, and hence of the clumps individuality. Such a prolate shape for the dark matter halo cannot be naturally produced in a MOND framework, since the gravitational field is constrained to follow the shape of the baryonic distribution, which is very oblate (see Fig. <ref>). The failure of our MONDian orbit integrations in reproducing the observed properties of the streams is thus not so surprising. We expect our conclusions to be robust since we have shown here that the dissolution of the gap between the clumps happens both in AQUAL with simplified orbit integrations either with or without the influence of the two biggest satellites of the Milky Way, as well as in the QUMOND framework studied through the PoR patch for RAMSES. We have also run our integrations using a Kuzmin model for the Milky Way potential as in <cit.>, where S=0 holds strictly, and the results were not different from those presented here. Nonetheless it would be important to explore other possible formulations of MOND, and specifically as a modification of inertia, which unfortunately is not completely developed yet. Complications arising in this framework are, for instance, the non-locality in time and the fact that the definition of actions and conserved quantities like momentum could be different from the Newtonian definitions <cit.>. The observations of the Helmi streams clumps provide a particularly good test for MOND on Galactic scales, because the stars probe a region governed mainly by the gravitational field of the Galaxy. Furthermore, their orbits are elongated in the direction perpendicular to the Galactic disk, a regime that has not been explored much in the context of MOND. As discussed in e.g. <cit.>, MOND and Newtonian gravity including a dark matter halo are able to fit equally well the Galaxy's rotation curve. There is thus a need for more studies like ours that probe a larger distance from the disk plane. An interesting example is the proposal to use high-velocity stars, as these objects probe regions where the two theories would predict rather different behaviour <cit.>. We thank Bob Sanders (and indirectly M. Milgrom) for sharing his knowledge on MOND and very useful discussions. We are also grateful to the referee for their very constructive report. We acknowledge financial support from a Spinoza prize. We have made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Throughout this work, we have made use of the following packages: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and Jupyter Notebooks <cit.>. aa
http://arxiv.org/abs/2406.08044v1
20240612095309
Hofstadter spectrum in a semiconductor moiré lattice
[ "Chen Zhao", "Ming Wu", "Zhen Ma", "Miao Liang", "Ming Lu", "Jin-Hua Gao", "X. C. Xie" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
School of Physics and Institute for Quantum Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China School of Physics and Institute for Quantum Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China School of Physics and Institute for Quantum Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China School of Physics and Institute for Quantum Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China Beijing Academy of Quantum Information Sciences, Beijing 100193, China jinhua@hust.edu.cn School of Physics and Institute for Quantum Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China International Center for Quantum Materials, School of Physics, Peking University, Beijing 100871, China Institute for Nanoelectronic Devices and Quantum Computing, Fudan University, Shanghai 200433, China Hefei National Laboratory, Hefei 230088, China § ABSTRACT Recently, the Hofstadter spectrum of a twisted WSe_2/MoSe_2 heterobilayer has been observed in experiment [C. R. Kometter, et al. https://doi.org/10.1038/s41567-023-02195-0Nat. Phys. 19, 1861 (2023)], but the origin of Hofstadter states remains unclear. Here, we present a comprehensive theoretical interpretation of the observed Hofstadter states by calculating its accurate Hofstadter spectrum. We point out that the valley Zeeman effect, a unique feature of the transition metal dichalcogenide (TMD) materials, plays a crucial role in determining the shape of the Hofstadter spectrum, due to the narrow bandwidth of the moiré bands. This is distinct from the graphene-based moiré systems. We further predict that the Hofstadter spectrum of the moiré flat band, which was not observed in experiment, can be observed in the same system with a larger twist angle 2^∘≲θ≲ 3^∘. Our theory paves the way for further studies of the interplay between the Hofstadter states and correlated insulting states in such moiré lattice systems. Hofstadter spectrum in a semiconductor moiré lattice X. C. Xie Received xx; accepted xx ==================================================== Introduction.—The Hofstadter spectrum, an extraordinary self-similar fractal spectrum, emerges in two-dimensional electron systems when exposed to both periodic potential and perpendicular magnetic fields<cit.>. It is of special interest because it is not only a very fundamental issue in quantum theory of solids, but also one of the earliest discovered quantum fractals in physics. To obtain the Hofstadter spectrum, it is required that the magnetic length (ℓ_B=√(ħ/eB)) should be comparable with the length of the unit cell. This requirement makes the moiré systems an ideal platform for realizing such novel fractal energy spectrum, because the large moiré unit cell facilitates the manifestation of Hofstadter states in moderate magnetic fields<cit.>. Recently, a remarkable experiment reports the observation of the Hofstadter states in a twisted transition metal dichalcogenide (TMD) heterobilayer (WSe_2/MoSe_2), revealing a rich phase diagram of interpenetrating Hofstadter states and charge-ordered states through local electronic compressibility measurements<cit.>. The twisted TMD heterobilayer has received significant attention in the last few years, because it can be equivalently mapped into a triangle lattice Hubbard model with adjustable bandwidth and Hubbard interaction, making such a semiconductor moiré lattice an ideal platform for simulating strongly correlated electron states<cit.>. This experiment reveals that, in the presence of a strong magnetic field, the twisted TMD heterobilayer provides unprecedented opportunities for systematic exploration of the interplay between the Hofstadter states and the strongly correlated states in artificial electron lattices. However, from a theoretical standpoint, there is still a lack of quantitative understanding of this crucial experiment. Most importantly, the origin of Hofstadter states in twisted TMD heterobilayers still remains unclear. In this work, we present a comprehensive theoretical interpretation of the observed Hofstadter states in the twisted TMD heterobilayer by directly calculating the Hofstadter spectrum. The key finding of our theory is that the valley Zeeman effect<cit.>, a distinctive characteristic of TMD materials, plays a crucial role in determining the Hofstadter spectrum in such system, which is essentially different from graphene-based moiré systems<cit.>. Once the valley Zeeman term is properly considered, the single particle Hofstadter spectrum can well interpret almost all the experimental observations regarding the Hofstadter states. In fact, the observed Hofstadter spectrum in WSe_2/MoSe_2 bilayer can be roughly approximated as that of a spinful p_x,y-orbital triangle lattice. The tight-binding model is given in the supplementary materials [See Supplemental Material at [URL] for the tight-binding model.]. Finally, we predict the primary characteristics of the Hofstadter spectrum of the moiré flat band, which has not been observed in the current experiment but should be detected with a larger twist angle 2^∘≲θ≲ 3^∘. Our theory interprets the origin of the observed Hofstadter states in the twisted TMD heterobilayer, while also paving the way for quantitative understanding of the intriguing interplay between the Hofstadter states and the correlated electron states in TMD moiré lattices. Model.—We consider a twisted WSe_2/MoSe_2 heterobilayer in the presence of a perpendicular magnetic field 𝐁=(0,0,B_z) with a twist angle θ, as illustrated in Fig. <ref>(a). Fig. <ref>(b) is the schematic of the bands of WSe_2 monolayer at the K_+ and K_- valley near E_F. Due to the strong spin-orbit coupling, the topmost valence bands have opposite spin at the two valleys, i.e. the spin-valley locking<cit.>. Meanwhile, the MoSe_2 monolayer behaves like a twist angle dependent moiré periodic potential applied on the WSe_2 monolayer. So, considering the topmost valence band at one valley, the Hamiltonian is H =-(𝐩+e𝐀)^2/2m^∗+Δ(𝐫)-𝐦·𝐁 where the first term is the kinetic energy, Δ(𝐫) is the moiré periodic potential and the last term represents the effective Zeeman effect. Here, Δ(𝐫) is of the moiré period a_M ≈ a_0/θ, where a_0 is the lattice constant of WSe_2. Since the TMD monolayer has threefold-rotational symmetry, Δ(𝐫) can be well approximated by only six moiré reciprocal lattice vectors Δ(𝐫) =∑^6_j=1V(𝐛_j)exp(i𝐛_j ·𝐫), with 𝐛_𝐣=4π/√(3)a_M(cosπ(j-1)/3, sinπ(j-1)/3) and V(𝐛_𝐣) = Vexp[(-1)^(j-1)iψ]. Here, V and ψ are two fitting parameters for the moiré potential. For the AA-stacked WSe_2/MoSe_2 bilayer with θ=1.33^∘, we use the parameters: (V,ψ)=(9meV,-125.1^∘), a_M=13.5nm and m^∗=0.5m_0. Note that, unless otherwise specified, we always set θ=1.33^∘, which is the value in the experiment<cit.>. Such moiré potential actually forms a triangle lattice, as shown in Fig. <ref>(c), where lattice sites correspond to the maxima of the moiré potential. The holes are localized around the lattice sites, forming artificial atoms of the triangle lattice. The energy bands with 𝐁=0 are plotted in Fig. <ref>(d). The first band (purple line) is just the well-known moiré flat band, which is also called moiré s-band here because it corresponds to the s-band of the triangle lattice. The other two dispersive bands (blue lines) are called moiré p-bands, due to the correspondence to the p-bands of the triangle lattice forming by the artificial p_x,y-orbitals. Here, s-orbital refers to the first orbital (for holes) confined by the moiré potential at the lattice sites. p_x,y-orbitals have similar definitions. When a perpendicular magnetic field is applied on a TMD monolayer, there are two effects of the magnetic field, which are of equal importance for the Hofstadter spectrum. One is the formation of Landau levels (LL), represented by the Peierls substitution, see the first term in Eq. (<ref>). The other is a valley-dependent Zeeman splitting, resulting from the spin and orbital magnetic momentum of the valence electrons, i.e. the last term in Eq. (<ref>). Here, the total magnetic momentum includes contributions from both orbital and spin magnetic momentum, which can be described by an effective g-factor g^τ_eff 𝐦·𝐁=(𝐦^τ_orb+𝐦^τ_spin)·𝐁=-g^τ_effμ_BB_z where τ=± is the valley index. As well known, the orbital magnetic momentum of the valence electron in TMD are related to its Berry curvature, which have opposite sign in the two valleys<cit.>. Meanwhile, the spin of the valence electron in the two valleys are opposite as well due to the strong spin-orbit coupling<cit.>. So, in the presence of magnetic field, the valence electrons in the two valleys have opposite Zeeman shifts, i.e. g^τ_eff with opposite sign, which is so-called valley Zeeman effect. The g^τ_eff of the WSe_2 monolayer can be calculated with various theoretical methods like DFT or tight-binding model, with values ranging from 1.19 to 6.1<cit.>. However, g^τ_eff in the moiré heterobilayer should differ from that in TMD monolayer due to the modifications of orbital magnetic momentum caused by the moiré potential. Here, we choose g^τ_eff=5.2 for valley K_+ (τ=+1), which is in good agreement with the experimental observations. The details in the determination of the g-factor are given in the supplementary materials [See Supplemental Material at [URL] for the determination of the g-factor.]. Hofstadter spectrum.—To calculate the Hofstadter spectrum, we use the Landau gauge 𝐀 = (0, xB_z, 0) and diagonalize the Hamiltonian (<ref>) with the wave functions of LL |n,k_y⟩ as the basis<cit.>, where |n, k_y⟩ = L_y^1/2exp(ik_yy) ϕ_n(x-x_0). corresponds to the LL with E_n=-ħω_c (n+1/2), ϕ_n (x-x_0) is the normalized oscillator function and x_0=-ℓ_B^2k_y. The matrix element of the moiré periodic potential can be calculated with the formula ⟨n^',k_y^'| exp(i𝐪·𝐫) |n,k_y⟩ =δ_k_y^',k_y+q_yexp[-i/2ℓ_B^2 q_x (k_y^' + k_y)] ℒ_n^' n(𝐪) with ℒ_n^',n(𝐪)= (m!/M!)^1/2i^| n^' -n|[(q_x+iq_y)/q]^n-n^' × e^-1/2QQ^1/2| n^' -n|L_m^(| n^' -n|)(Q), where 𝐪=(q_x,q_y), q=|𝐪| and Q=1/2ℓ_B^2q^2. m and M are the minimum and the maximum of n^' and n. L^(α)_j (Q) is the associated Laguerre polynomial. Here, the inter-LL matrix elements need to be considered. The calculation details are given in the supplementary materials [See Supplemental Material at [URL] for the calculation details.]. In such semiconductor moiré superlattice, a twist angle θ=1.33^∘ gives a_M≈ 13.5 nm, so that the requirement to realize Hofstadter spectrum ℓ_B ≲ a_M desires B ≳ 3.6 T. In this situation, the applied magnetic field will split the moiré bands to form a fractal energy spectrum, i.e. the Hofstadter spectrum. We plot the calculated Hofstadter spectrum of the first three moiré bands in Fig. <ref>. Fig <ref>(a) shows the Hofstadter spectrum of the moiré s-band, which corresponds to an s-orbital triangle lattice model. The Hofstadter spectra for the up and down spin (or K_± valleys) are split by the effective Zeeman term, where the red (blue) lines in Fig. <ref>(a) represent up (down) spin. With θ=1.33^∘, the moiré s-band is a flat band with a very narrow bandwidth, see Fig. <ref>(d). Consequently, the gaps between the Hofstadter minibands are very tiny, which are hard to detect in experiment. When ν<-2 (ν is the hole filling per moiré unit cell), the Fermi level E_F shifts to the moiré p-bands. The Hofstadter spectrum of the moiré p-bands is given in Fig. <ref>(c). First of all, the most notable feature of the Hofstadter spectrum is its division into four distinct sets as the magnetic field increases. This can be intuitively understood from the equivalent lattice model. As illustrated in Fig. <ref>(b), the moiré p-bands correspond to a spinful p_x,y-orbital triangle lattice, which can be described by four basis functions {|p_±,↑⟩, |p_±,↓⟩} with |p_±⟩=|p_x⟩± i |p_y⟩. Here, the spin of the lattice model is just the valley degree of freedom of the moiré heterobilayer, so that the valley Zeeman term in Eq. (<ref>) becomes an effective spin Zeeman splitting in the lattice model, leading to the splitting of the up (red lines) and down (blue lines) spin when a magnetic field is applied. Meanwhile, the artificial p-orbitals have their own orbital magnetic momentum, which means that the |p_±,↑⟩ orbitals will be split further. Therefore, the Hofstadter spectrum of the up spin (red lines) is partitioned into two distinct sets: the upper one for |p_+,↑⟩ orbital and the lower one for |p_-,↑⟩. The case of down spin (blue lines) is similar. Thus, we finally get four different sets of Hofstadter spectrum for the moiré p-bands. The gaps between the Hofstadter minibands in the p-band spectrum are significantly larger than those in the moiré s-bands, since the moiré p-bands are more dispersive than the flat s-band, as illustrated in Fig. <ref>(d). If we set 0.12 meV as the minimum energy resolution, the detectable minigaps are marked on the enlarged Hofstadter spectrum, see Fig. <ref>(d). As well known, the gaps of the Hofstadter spectrum, i.e. incompressible Hofstadter states, can be described by two topological integers (t,s) through the Diophantine equation<cit.> n/n_0 = t(ϕ/ϕ_0)+s, which correspond to linear trajectories in the Wannier diagram<cit.>. Here, n/n_0 and ϕ / ϕ_0 are the normalized carrier density and magnetic flux, respectively (n_0 is the carrier density of a completely filled Bloch band and ϕ_0 is the magnetic flux quantum). t reflects the Hall conductivity σ_xy=t e^2/h associated with each minigap of the Hofstadter spectrum, and s represents the Bloch band filling at each gap<cit.>. The calculated Wannier diagram is plotted in Fig. <ref>(e). In the region of -2<ν<-1, E_F falls within the moiré s-band and there are no observable minigaps in the Hofstadter spectrum. Thus, no corresponding linear trajectories are found in the Wannier diagram. The most intriguing region is -4<ν<-2, which corresponds to the p-band Hofstadter spectrum arising from the |p_±,↑⟩ orbitals, i.e. the red lines in Fig. <ref>(c) and (d). The observable gaps of the Hofstadter spectrum here do give rise to linear trajectories in the Wannier diagram. As shown in Fig. <ref>(e), with a small magnetic field (B < 4 T), the gap trajectories mainly stem from s=-3, resembling an asymmetric Landau fan that is denser in the low hole filling region. When B>6 T, obvious Hofstadter states with s=-2,-3,-4 appear. The (t,s) of each minigaps are given in Fig. <ref>(d) and (e) accordingly. In Fig. <ref>(d), we see that the Hall conductivity exhibits a non-monotonic behavior as a function of E_F in the presence of a strong magnetic field, which is the characteristic feature of the Hofstadter spectrum. Note that the Wannier diagram in Fig. <ref>(e) almost perfectly reproduces the Hofstadter states observed in experiment. The comparison with the experimental observation is given in the supplementary materials [See Supplemental Material at [URL] for the comparison with the experimental observation.]. The Wannier diagram in Fig. <ref>(e) can be well interpreted by the single particle Hofstadter spectrum. First, with a small B, the effective Zeeman splitting is not large enough, so that the Hofstadter spectrum of the |p_-,↑⟩ overlaps with that of the |p_+,↓⟩, as shown in Fig. <ref>(c). Thus, the gaps will gradually disappear as hole filling increases, resulting in the asymmetric Landau fan around ν=-3 in the Wannier diagram. Second, by increasing the magnetic field (B>6 T), the effective Zeeman splitting becomes large enough to separate the Hofstadter spectrum of different orbitals. As a result, the gap trajectories can be found in the region of large hole filling (ν <-3) in Fig. <ref>(e). Third, in the Wannier diagram, there are gap trajectories with t=0 at ν=-1,-2,-3,-4 (the black vertical lines). From the energy spectrum in Fig. <ref>(c), we see that these vertical trajectories correspond to the band gaps between different orbitals, which are different from the cyclotron gaps resulting from the magnetic field. So, the Hall conductivity is zero (t=0) here. The analysis above clearly indicates that the effective Zeeman splitting, resulting from both the spin and orbital magnetic momentum, plays a key role in determining the Hofstadter spectrum and the Wannier diagram. Proper Zeeman splitting ensures the correct order of the Hofstadter spectrum for different orbitals, which is crucial for the Wannier diagram. Therefore, the experiment in turn provides an accurate estimation of the magnitude of the Zeeman splitting in such a semiconductor moiré lattice. In experiment, higher precision measurements unveil a more intricate Hofstadter spectrum pattern<cit.>. We thus calculate the Wannier diagram with higher energy resolution for the gaps (0.004 meV), while keeping all the system parameters unchanged. The results are plotted in Fig. <ref>, where the gray lines represent all the gap trajectories of the Hofstadter spectrum in the whole region -6<ν<-2. As expected, almost all of the gap trajectories observed in the experiment can be found in the calculated Wannier diagram in Fig. <ref>, which are marked in color. At low field, the gap trajectories remain the Landau fan around ν=-3, while more gaps with negative t around ν=-3, -4 become visible. At high fields, gap trajectories with s=-1, -2, -3, -4,-5 appear in the region of large hole filling, which originates from the fine structure of the Hofstadter spectrum. Now, except for the correlation-induced charge-ordered states, almost all the Hofstadter states observed in the experiment are well explained by our single particle theory. But, it is worth noting that the energy resolution for the gaps used in our calculations in Fig. <ref> is higher than what was claimed in the experiment, and the reason for this is currently unclear. Meanwhile, our calculation also shows that the calculated Hofstadter spectrum of the moiré s- and p-bands are quite like that of the triangle lattice. Hofstadter spectrum of the moiré s-band.—With a twist angle θ=1.33^∘, the moiré s-band is extremely flat, so that the corresponding gaps in the Hofstadter spectrum are hard to detect even if the requirement ℓ_B ≲ a_M is well satisfied. Thus, an intriguing question is whether it is possible to observe the s-band Hofstadter spectrum in such moiré semiconductor lattice system. A natural way is to increase the twist angle, in order to get a large bandwidth. However, a larger twist angle means a smaller a_M, and the corresponding magnetic field required to achieve the Hofstadter spectrum becomes larger. Based on our numerical calculation, we predict that 2^∘≲θ≲ 3^∘ is a reasonable region to observe the Hofstadter spectrum in experiment. The Hofstadter spectrum of the moiré s-bands in the same system with θ=2^∘ is plotted in Fig. <ref>(a) and (b). As we see, the Hofstadter spectrum of the up and down spin are split by the valley Zeeman term. Most importantly, under a reasonable magnetic field B<20 T, the gaps between the Hofstadter spectrum become visible and the corresponding (t,s) are denoted in Fig. <ref>(b). Nonmonotonic variation of t is observed here. The corresponding Wannier diagram is shown in Fig. <ref>(c), where the energy resolution of the gaps is the same as that in Fig. <ref>. We expect that the Hofstadter spectrum for the down spin (blue lines), i.e. the gap trajectories in the region ­2<ν<-1 may be modified by the Hubbard U term in such moiré lattice. It is thus also a suitable platform to investigate the competition between the Hofstadter states and correlation-induced ordered states. Summary.— In short, we interpret the origin of the Hofstadter states observed in a recent experiment by calculating its single particle Hofstadter spectrum. Unlike the graphene-based moiré systems, the valley Zeeman splitting plays a key role in determining the shape of the Hofstadter spectrum. The reason is that the bandwidth of the moiré bands here is very narrow, comparable to the valley Zeeman splitting. Our theory provides a good theoretical foundation for further studies of the competition between the Hofstadter states and correlated ordered states in semiconductor moiré lattice systems. We thank Prof. Hua Chen and Wenyang Zhao for helpful discussions. This work was supported by the National Key Research and Development Program of China (No. 2022YFA1403501), and the National Natural Science Foundation of China(Grants No. 12141401, No. 11874160, No. 12204044).
http://arxiv.org/abs/2406.09214v1
20240613151534
Applying Multi-Agent Negotiation to Solve the Production Routing Problem With Privacy Preserving
[ "Luiza Pellin Biasoto", "Vinicius Renan de Carvalho", "Jaime Simão Sichman" ]
cs.AI
[ "cs.AI", "cs.MA" ]
fancy Probing atmospheric escape through metastable He I triplet lines in 15 exoplanets observed with A. Masson1 S. Vinatier1 B. Bézard1 M. López-Puertas2 M. Lampón2 F. Debras3 A. Carmona4 B. Klein5 E. Artigau6 W. Dethier4 S. Pelletier6 T. Hood3 R. Allart6 V. Bourrier7 C. Cadieux6 B. Charnay1 N. B. Cowan8 N. J. Cook6 X. Delfosse4 J.-F. Donati3 P.-G. Gu9 G. Hébrard10,11 E. Martioli10,12 C. Moutou3 O. Venot13 A. Wyttenbach4,14 Received Month dd, yyyy; accepted Month dd, yyyy ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § INTRODUCTION Supply chain decisions proceed sequentially, with steps that typically operate in isolation with fixed parameters set by adjacent ones. For instance, in the distribution of paper production, a supply team identifies a distribution center that needs a specific product quantity, requesting it from a designated factory. Subsequently, the factory initiates production to meet the predetermined demands. Usually aiming to optimize operational costs, they are framed as combinatorial optimization problems. Methods like Mixed-Integer Programming or heuristics are then employed for effective planning. Though solving steps separately may yield optimal or suboptimal, yet effective, solutions, integrated planning offers financial benefits. Studies show operational cost reductions ranging from 3% to 20% <cit.>, and a systematic review estimates an 11.08% <cit.> cost reduction compared to sequential solutions. Several challenges make the integrated optimization of these decisions difficult in real-world industry applications: a) Increased complexity of decisions: Unifying business rules makes problems too large for commercial solutions, increasing the complexity of added variables and constraints significantly; b) Discrepancy between planning and execution: Operational reality complexities, like unforeseen events or lack of confidence in the provided solution, may lead to deviations between execution and planning strategies, necessitating real-time replanning. When these replannings do not occur through an optimization method, the decisions made can be suboptimal, resulting in financial losses; c) Constraints on information sharing: Privacy and information protection play an important role in real-world applications. Organizational constraints and privacy-preserving may limit access to essential information for decision-making. This can result in suboptimal plans or increased operational-level replanning due to a lack of a faithful representation of reality. The increased complexity of decisions resulting from the integration can be mitigated by applying advanced optimization methods, such as decompositions, meta-heuristics, matheuristics, and hybrid optimization algorithms. An automated system can be implemented to tackle the discrepancy between planning and execution. This system reads real-time data as input, identifies deviations, and re-executes optimization algorithms to reconstruct the optimal plan for immediate adherence. However, conventional optimization methods may not effectively address constraints on information sharing, such as privacy preservation or information protection between departments. If crucial information is withheld from the optimization algorithm, it cannot incorporate it into its search for the optimal solution, potentially leading to suboptimal outcomes. This paper addresses the three aforementioned challenges, creating a hybrid Multi-Agent System (MAS) integrated with optimization algorithms to solve the Production Routing Problem with Privacy Preserving (PRPPP). In short, we foresee that agents representing different clients can propose alternative solutions using exclusively local data, therefore enhancing privacy-preserving. A MAS automates plan generation and decision negotiation among entities, allowing diverse forms of reasoning to collaborate for solutions. Intelligent agents integrate with optimization algorithms to address complex optimization problems by decomposing them into simpler subproblems. Additionally, agents encapsulate private information, enabling negotiation without revealing strategies yet incorporating them into final planning solutions. In <cit.>, applications and synergies between agents and automation are explored. Additionally in <cit.>, collaborative control mechanisms are introduced to address real-time optimization challenges in the planning and resource allocation of small to medium-sized enterprises. Furthermore, an application case of intelligent agents is presented in <cit.>, where they handle decisions related to production and transportation. Additionally, agents can be used to reformulate problems as Distributed Constraint Optimization Problems (DCOP) when the problem is so complex or requires information privacy that its integrated resolution becomes unfeasible <cit.>. Agents can also coordinate solutions from different meta-heuristics to address Production and Distribution Planning Problems (PDPP) <cit.> and even negotiate among themselves to determine the best meta-heuristic for solving a specific multi-objective problem <cit.>. Other synergies between Evolutionary Computation (EC) and MAS are discussed in <cit.>. § THE PRODUCTION ROUTING PROBLEM (PRP) Back to the paper industry example, the supply chain can be segmented into four distinct steps, each involving specific decisions: a) Production: determining the choice of product, size and its timing for manufacturing; b) Inventory: deciding the optimal size and duration for a product to remain in warehouses; c) Distribution: assigning products to specific distribution centers and scheduling their arrival times; and d) Routing: identifying the most efficient routing option based on a distribution plan. The Production Routing Problem (PRP) includes all four steps integrated within its decision-making framework. Classical mathematical models compose the PRP, such as the Vehicle Routing Problem (VRP) <cit.>, a well-known NP-hard problem, and the Lot-Sizing Problem (LSP) <cit.>. The domain of the PRP is defined by a complete graph G = (N, A), where N represents the set of the supplier and retailers indexed by i ∈{0, ..., n} and A = {(i,j): i, j ∈ N, i ≠ j} is the set of arcs connecting the supplier and retailer. The supplier is represented by node 0, and the set of retailers is defined as N_C = N \{0}. A single product is manufactured in the factory and delivered to retailers by a set of identical vehicles K = {1, ..., m} over a discrete and finite set of periods T = {1, ..., l}, aiming to satisfy their demands in each period. Figure <ref> depicts the graph G with its nodes N and arcs A. The objective of the PRP is to provide the planning of deliveries and production for a determined time horizon while minimizing production costs, inventory costs (both at the supplier's and retailers' levels), and transportation costs: min ∑_t ∈ T(up_t + fy_t + ∑_i ∈ N(h_iI_it) +∑_(i,j) ∈ A(c_ij∑_k ∈ Kx_ijkt )) p_t - production quantity in period t; y_t - equal to 1 if there is production at the factory in period t, 0 otherwise; I_it - inventory at node i at the end of period t; x_ijkt - equal to 1 if a vehicle k travels directly from node i to node j in period t (see full PRP's formulation, decision variables and parameters description in Appendix). A review of its mathematical formulations can be found in <cit.>. The PRP holds practical significance within a Vendor Managed Inventory (VMI) approach <cit.>. In this context, the supplier not only monitors retailers' inventory levels but also makes decisions regarding the replenishment policy for each retailer. It functions effectively when assuming the supplier has complete control over retailers' decisions. However, let's consider a scenario where the supplier lacks crucial information about retailers, such as their inventory costs, and retailers have the ability to negotiate with the supplier to expedite or defer certain deliveries. In this scenario, the PRP may no longer be entirely applicable, and the methods previously studied for its resolution may not be entirely suitable. The model that represents this specific scenario is the Production Routing Problem with Privacy Preserving (PRPPP). Due to the privacy preservation, the term ∑_i ∈ N(h_iI_it) from the PRP's objective function, regarding the inventory costs of every node (i.e., supplier and retailers'), is affected and becomes h_0I_0t, keeping only the inventory costs from the supplier, which they have access. § THE PRODUCTION ROUTING PROBLEM WITH PRIVACY PRESERVING (PRPPP) Assuming a PRPPP instance with a 6-month horizon, the solution output must contemplate the supplier's delivery plan to each retailer for the whole six months. Notably, not every retailer will receive deliveries every month; they may be concentrated in specific months to meet their demand. The variable transportation and production costs associated with each delivery period are reflected by the supplier in the form of shipping and product prices charged to retailers. In order to fulfill each retailer's demand plan with overall cost reduction, the supplier will receive the delivery preferences from retailers and propose optimal agendas for negotiation. These negotiations will be influenced by the changing shipping and product costs at each proposed delivery period, as well as the inventory costs unique to each retailer, which only they are aware of. The optimal negotiation agendas continue until a stopping criterion is met. §.§ Input Data The model takes as input data the parameters described in Section 2, including: * Demand plan (d_it): Each retailer (index i) must fulfill a specific demand plan; for example, requiring eight product units in months 3, 4, and 5 within a 6-month horizon (index t), totaling 24 units. * Supplier and retailers inventory costs (h_i): Distinct inventory costs influence retailer preferences when deciding on delivery negotiations. Supplier inventory costs are translated into product prices charged to retailers. These costs are fixed and known only to the respective supplier or retailer. * Unit production cost (u) and setup cost (f): These impact the supplier's production expenses, translated into product prices charged to retailers. Setup costs are fixed charges during production. Unit production costs fluctuate with the quantity produced; e.g., if the supplier produces 140 units in month 2 and 200 units in month 4, with variable production costs of 8 and setup costs of 1500, the total cost for the horizon is (140 * 8 + 1500) + (200 * 8 + 1500) = 5720. * Coordinates of the supplier and retailers: Geographical locations used to generate routes and compute transportation costs. * Transportation costs (c_ij): Calculated proportional to the Euclidean distance between the supplier and a retailer or between retailers. The total cost of a route is the sum of costs for each segment. * Maximum capacities: Maximum production (C), vehicle (Q) and inventory (L_i) capacities may limit overall decisions. * Initial inventory levels (I_i0): Initial levels for both suppliers and retailers at the planning horizon's start. §.§ Agents As seen in Figure <ref>, the suppliers and retailers are the two agent types. a) Supplier agent (Node 0 in set N): aims to determine deliveries for each period to retailers while minimizing production and transportation costs. The supplier agent is a coordinator agent, responsible for coordinating retailer preferences, proposing optimal negotiation agendas, and mediating negotiations. The coordinator has total system knowledge, except for specific retailers' inventory costs. Their actions include: * initialsol: Generates an initial solution based on input data and retailers' ordered delivery preferences, considering the first delivery preference of each retailer while respecting production, vehicle and inventory capacities constraints. * optagenda: Generates an optimal negotiation agenda, proposing insertions, removals and substitutions of retailers' deliveries. b) Retailer agents (N_C = N \{0}): they have a demand plan and communicate delivery preferences to the supplier. Represented by agents with partial knowledge, they are aware of specific inventory costs but lack information about shipping and product prices until proposed during negotiations. Retailers know which retailers are part of their neighborhoods and their proximity. Their actions include: * negotiate: Participate in a negotiation agenda transaction, deciding on the suggested change based on their delta utility, i.e., selecting the change that would most satisfy the agent's preferences or goals. * vote: Changes in neighborhood delivery plans may impact shipping and product prices from their retailers. Retailers are then selected to vote on the proposed changes; if the majority agrees, changes are implemented. §.§ Other Components * Neighborhood: Retailers access information about their neighborhood, i.e., other retailers with deliveries in the same period and corresponding Euclidean distances. The supplier updates the neighborhood with any solution plan changes. * Planning board: Carries initial and current solutions, updated after each successful negotiation. The supplier has access to all information, while retailers only see what directly affects them. * Transaction pool: Contains information about the current transaction, including negotiating retailers, affected retailers selected to vote on proposed changes of their neighborhoods, and resulting outcomes. § NEGOTIATION PROTOCOL §.§ Utility Let Nb_t∈ N \{0} the set of the retailers in the neighborhood of each period t, nb_t the total quantity of retailers in Nb_t and n the total quantity of retailers in N_C. The utility of each retailer c is defined by the following equation: U_c = - ( ∑_t ∈ T ( h_cI_ct + ∑_(i,j) ∈ A(c_ij∑_k ∈ Kx_ijkt)/nb_t + (up_t + fy_t + h_0I_0t)/n )) The first term explains retailer c's specific inventory costs. The second term is the shipping price from the supplier, translated into transportation costs for neighborhoods where c has deliveries and normalized by total retailers in those areas. The last term is the supplier's product price, translated into production, setup, and inventory costs for the entire horizon and normalized by the total quantity of retailers in the plan. Retailers negotiate or vote in favor of a transaction only if their delta utility is positive, indicating lower costs and prices after changes. Notably, alterations to another retailer's delivery plan can affect the overall utility of that retailer. §.§ Agenda Transactions From retailers' delivery preferences, the supplier generates an optimal negotiation agenda and proposes changes to delivery plans, as following: * Removal: The supplier identifies a chance to eliminate a retailer's delivery in a specific period, reducing shipping prices for that neighborhood. Note that if a removal happens, the same quantity must be added to another period where the retailer already has a delivery, adhering to demand and inventory constraints. This shouldn't trigger voting in the other neighborhood, as shipping prices aren't unit-specific. * Insertion: This happens when the supplier suggests adding a delivery for a retailer in a neighborhood where they had none before, increasing shipping prices for that area and reducing approval chances. The inserted quantity should be taken from another neighborhood with a current delivery to avoid triggering its voting phase. Insertions are accepted only if they significantly reduce product prices or when proposed along with removals. * Substitution: The supplier identifies an opportunity for an insertion and removal affecting two different neighborhoods. Voting in the first neighborhood is mostly against the change, while the second neighborhood is in favor. Since the first neighborhood outnumbers the second, the overall voting fails. To address this, the supplier proposes a substitution with another retailer's delivery in those neighborhoods to enhance acceptance chances. Each proposal undergoes a transaction, negotiated among the retailers directly affected by changes to their delivery plans, and is subsequently voted on by retailers who are indirectly affected (experiencing an increase or decrease in shipping/product prices). §.§ Pseudo-Algorithm The full system flow is represented by the Algorithm 1. § CONCLUSIONS AND FUTURE WORK The study highlights MAS benefits in addressing critical challenges in applying optimization to supply chain planning, especially in the context of privacy preservation. The PRP, a known literature problem, is adapted to consider information constraints (PRPPP). This work proposes a hybrid MAS and optimization framework to solve it. Future work involves defining algorithms for optimal agenda and initial solution generation, integrating heuristic algorithms. The establishment of a stopping criterion is necessary, considering that it should not be based only on a percentage of initial cost reduction, since the optimal solution is unknown. Furthermore, discrepancies between planning and execution may be addressed by adapting and automating the proposed framework for real-time optimization. ACM-Reference-Format § APPENDIX- PRP FORMULATION Formulation by <cit.>: Parameters: * u unit production cost; * f fixed production setup cost; * h_i unit inventory cost at node i (supplier and retailers); * c_ij transportation cost from node i to node j; * d_it demand from retailer i in period t; * C production capacity; * Q vehicle capacity; * L_i maximum or target inventory level at node i; * I_i0 initial inventory availabe at node i. Decision variables: * p_t production quantity in period t; * I_it inventory at node i at the end of period t; * y_t equal to 1 if there is production at the factory in period t, 0 otherwise; * z_0kt equal to 1 if vehicle k left the factory (node 0) in period t, 0 otherwise; * z_ikt equal to 1 if customer i was visited by vehicle k in period t, 0 otherwise; * x_ijkt equal to 1 if a vehicle travels directly from node i to node j in period t; * q_ikt quantity delivered to customer i in period t. min ∑_t ∈ T(up_t + fy_t + ∑_i ∈ N(h_iI_it) +∑_(i,j) ∈ A(c_ij∑_k ∈ Kx_ijkt )) s.t. I_0,t-1 + p_t = ∑_i ∈ N_C∑_k ∈ K q_ikt + I_0t ∀ t ∈ T I_i, t-1 + ∑_k ∈ K q_ikt = d_it + I_it ∀ i ∈ N_C, ∀ t ∈ T p_t≤ M_ty_t ∀ t ∈ T I_0t≤ L_0 ∀ t ∈ T I_i,t-1+∑_k ∈ K q_ikt≤ L_i ∀ i ∈ N_C, ∀ t ∈ T q_ikt≤ M_it z_ikt ∀ k ∈ K, ∀ i ∈ N_C, ∀ t ∈ T ∑_k ∈ Kz_ikt≤ 1 ∀ i ∈ N_C, ∀ t ∈ T ∑_j ∈ Nx_jikt + ∑_j ∈ Nx_ijkt = 2z_ikt ∀ k ∈ K, ∀ i ∈ N, ∀ t ∈ T ∑_i ∈ N_Cq_ikt≤ Qz_0kt ∀ k ∈ K, ∀ t ∈ T ∑_i ∈ S∑_j ∈ Sx_jikt≤ |S| - 1 ∀ S ⊆ N_C : |S| ≥ 2, ∀ k ∈ K, ∀ t ∈ T p_t, I_it, q_ikt≥ 0 ∀ i ∈ N, ∀ k ∈ K, ∀ t ∈ T y_t, z_ikt, x_ijkt∈0,1 ∀ i,j ∈ N, ∀ k ∈ K, ∀ t ∈ T The objective function (<ref>) minimizes the total costs of production, production setup, factory and customer inventories, and delivery routing. Constraints (<ref>)-(<ref>) represent the lot-sizing problem. Constraints (<ref>) and (<ref>) enforce the stock flow balance at the factory and customers, respectively. Constraint (<ref>) ensures that the production setup variable (y_t) equals one if production occurs in a specific period and limits the production quantity to the minimum between the production capacity and the total demand in the remaining periods (M_t). Constraints (<ref>) and (<ref>) restrict the maximum inventory at the factory and customers, respectively. The remaining constraints, i.e., (<ref>)-(<ref>), are the vehicle load and routing constraints. Constraints (<ref>) allow a positive delivery quantity only if customer i is visited in period t, and each customer can be visited by at most one vehicle (<ref>). Constraints (<ref>) ensure the flow conservation of vehicles. Constraints (<ref>) limit the quantity of product that can be transported by each vehicle. Constraints (<ref>) are the Subtour Elimination Constraints (SECs), similar to those in the Traveling Salesman Problem (TSP). Constraints (<ref>) and (<ref>) represent the domains of non-negative continuous variables and binary variables, respectively. § TRANSACTIONS' CALCULATION RECORD Assuming no change was made in the production plan, the retailer's utility every period t is calculated as following: U_c^t = (h_cI_ct) + ∑_(i,j) ∈ A(c_ij∑_k ∈ Kx_ijkt)/nb_t The delta utility between two states of the same period t can be calculated by subtracting the utility of the initial state from that of the final state: Δ U_c^t = U_c, f^t - U_c, 0^t Then, the total delta utility for two periods, e.g. t = 1 and t = 2, can be calculated as follows: ∑_t ∈ [1, 2]Δ U_c^t = Δ U_c^1 + Δ U_c^2 The inventory cost h_c is an intrinsic parameter from every retailer. The term I_ct represents the decision variables of a retailer's demand plan. The shipping cost, denoted by ∑_(i,j) ∈ A(c_ij∑_k ∈ Kx_ijkt)/nb_t, illustrates the aggregate costs of each route segment highlighted in green in Figures <ref> and <ref> - the numerical values in green represent their calculated costs. §.§ Transaction 1 Δ U_3^1 = U_3, f^1 - U_3, 0^1 = [- (2 * 0 + (200+320+200+200+300)/4)] - (0) = -325 Δ U_3^2 = U_3, f^2 - U_3, 0^2 = (0) - [-(2*10 + (200+320+200+300)/3)] = +340 Δ U_1,2,4^1 = - (200+320+200+200+300)/4) - [-(200+500+200+300)/3]= 95 Δ U_1,2^2 = - (200+400+300)/2) - [-(200+320+200+300)/3]= -110 Δ U_4^2 = 0 Thus, ∑_t ∈ [1, 2]Δ U_3^t = Δ U_3^1 + Δ U_3^2 = - 325 + 340 = 15 ∑_t ∈ [1, 2]Δ U_1,2^t = Δ U_1,2^1 + Δ U_1,2^2 = 95 + (- 110) = -15 ∑_t ∈ [1, 2]Δ U_4^t = Δ U_4^1 + Δ U_4^2 = 95 + 0 = 95 §.§ Transaction 2 Along with Figure <ref>, Figure <ref> and <ref> illustrates the Transaction 2 outcomes (Y, N) and (N, Y) respectively. Their updated shipping costs are represented in green. The total delta utility is calculated for every outcome. For the (Y, Y) outcome: Δ U_2^1 = U_2, f^1 - U_2, 0^1 = 0 - [- (2 * 15 + (200+400+350+300)/3)] = 446.7 Δ U_2^2 = U_2, f^2 - U_2, 0^2 = [- (2 * 0 + (200+320+200+300)/3)] - (0) = -340 Δ U_4^1 = - (3 * 5 + (450+200+200+300)/3) - (0) = -398.3 Δ U_4^2 = 0 - [- (3 * 0 + (400+150+200+300)/3)] = 350 Thus, ∑_t ∈ [1, 2]Δ U_2^t = Δ U_2^1 + Δ U_2^2 = 446.7 - 340 = 106.7 ∑_t ∈ [1, 2]Δ U_4^t = Δ U_4^1 + Δ U_4^2 = -398.3 + 350 = -48.3 For the (Y, N) outcome: Δ U_2^1 = 0 - [- (2 * 15 + (200+400+350+300)/3)] = 446.7 Δ U_2^2 = - (2 * 0 + (200+320+150+200+300)/4)] - (0) = -292.5 Δ U_4^1 = 0 Δ U_4^2 = - (3*0 + (200+320+150+200+300))/4 - [-(3*0 + (400+150+200+300)/3)] = 57.5 Thus, ∑_t ∈ [1, 2]Δ U_2^t = Δ U_2^1 + Δ U_2^2 = 446.7 - 292.5 = 154.2 ∑_t ∈ [1, 2]Δ U_4^t = Δ U_4^1 + Δ U_4^2 = 0 + 57.8 = 57.5 For the (N, Y) outcome: Δ U_2^1 = - [2*15 + (200+400+200+200+300)/4] - [-(2*15 + (200+400+350+300)/3] = 91.7 Δ U_2^2 = 0 Δ U_4^1 = - (3 * 5 + (200+400+200+200+300)/4) - (0) = -340 Δ U_4^2 = 0 - [- (3 * 0 + (400+150+200+300)/3)] = 350 Thus, ∑_t ∈ [1, 2]Δ U_2^t = Δ U_2^1 + Δ U_2^2 = 91.7 + 0 = 91.7 ∑_t ∈ [1, 2]Δ U_4^t = Δ U_4^1 + Δ U_4^2 = -340 + 350 = 10 Since the (Y, N) transaction outcome was chosen by retailers 2 and 4 in the negotiation, retailers 1, 3 and 5 are asked to vote in favor or against it. Thus, their delta utility for the voting phase is calculated as: Δ U_1,5^1 = - (450+350+300)/2) - [-(200+400+350+300)/3] = -133.3 Δ U_3^1 = 0 Δ U_1,3^2 = - (200+320+150+200+300)/4) - [-(400+150+200+300)/3] = 57.5 Δ U_5^2 = 0 Thus, ∑_t ∈ [1, 2]Δ U_1^t = Δ U_1^1 + Δ U_1^2 = -133.3 + 57.5 = -75.8 ∑_t ∈ [1, 2]Δ U_3^t = Δ U_3^1 + Δ U_3^2 = 0+ 57.5 = 57.5 ∑_t ∈ [1, 2]Δ U_5^t = Δ U_5^1 + Δ U_5^2 = -133.3 + 0 = -133.3
http://arxiv.org/abs/2406.08247v1
20240612141524
Tamarkin's separation theorem for non-compact objects in cotangent bundles
[ "Yuichi Ike", "Tatsuki Kuwagaki" ]
math.SG
[ "math.SG" ]
Casimir Wormholes with GUP Correction in the Loop Quantum Cosmology M. B. Cruz June 17, 2024 =================================================================== § ABSTRACT In this short note, we prove a Tamarkin-type separation theorem for possibly non-compact subsets in cotangent bundles. § INTRODUCTION Let M be a manifold and denote by π T^*M → M its cotangent bundle. Let be a subgroup of the additive group and denote by Λ_0^ the Novikov ring associated with . For c ∈ (0,∞], we denote by μ^(T^*M;u<c) the microlocal category introduced in <cit.>, which is a Λ_0^-linear category. For an object ∈μ^(T^*M;u<c), its non-conic microsupport is denote by () ⊂ T^*M. For a closed subset A' of T^*M, we define μ_A'^(X;u<c) ∈μ^(X;u<c) ()⊂ A'. A closed subset A' of T^*M is said to be end-conic if there exists a disk bundle D^*M in T^*M cut out by some fiber metric such that A' ∖ D^*M is conic, i.e., stable under the scaling action of _≥ 1. In this paper, we prove the following separation theorem, which is a slight generalization of the separation theorem for the original Tamarkin category <cit.>. Let A' and B' be end-conic closed subsets T^*M. Suppose that π(A') ∩π(B') is compact and A' ∩ B' = ∅. Then for any ∈μ_A'^(T^*M;u<c) and any ∈μ_B'^(T^*M;u<c), one has _μ^(T^*M;u<c)(,)=0. We prepare some lemmas in <ref> and give a proof in <ref>. §.§ Notation Throughout the paper, we fix a unital integral commutative ring . Let X be a manifold and let π T^*X → X denote its cotangent bundle. We write T^*_XX for the zero-section of T^*X. We also let ∂ T^*X denote the contact boundary defined as (T^*X ∖ T^*_XX)/_>0. We denote by _X the constant sheaf on X with stalk . A category (resp. triangulated category) means a dg-category (resp. pre-triangulated dg-category) unless specified. For example, (X) the derived category of the abelian category of _X-modules, is a triangulated category in our sense. For an object ∈(X), we write () for the microsupport of (see <cit.> for the definition), which is a closed conic subset of T^*X. § CUT-OFF RESULT In this subsection, we prove the following: For ∈μ^(T^*M;u<c), () ⊂ T^*M controls the whole microsupport () ⊂ T^*(M ×_u<c×_t), where denote the trivial subgruoup of and is regarded as an object of (M ×_u<c×_t) through the projector (-)⋆_≥ 0. First, we give a cut-off lemma in the special case. Let V be a finite-dimensional real vector space and γ be a closed convex proper cone in V with 0 ∈γ and γ≠{0}. We also set U_γ T^*M × V ×γ^∘ and Z_γ T^*M × T^*V ∖ U_γ as in <cit.>, where γ^∘ denotes the polar cone of γ: {θ∈ V^* |⟨θ,v ⟩≥ 0 for any v ∈γ}. We recall known cut-off results from <cit.>, which we will use in the following proofs. For ∈(_V), we have (⋆_γ)⊂ (()∩ U_γ)∪ V×∂γ^∘. In particular, (⋆_γ)∩ U_γ⊂ (() ∩ U_γ). We slightly generalize this and <cit.> as follows. Let ∈^⊥_Z_γ(_V) and assume that there exists a closed cone C^* ⊂ V^* such that * C^* ⊂γ^∘ and * () ∩ U_ γ⊂ V × C^*. Then one has () ⊂ (() ∩ U_ γ) ∪ (V × (C^* ∩∂γ^∘)). We mimic the proof of <cit.>. We first claim the following. For any proper closed convex cone β of V, we have () ∩ U_β⊂ (() ∩ U_γ) ∪ V × ((C^* ∩β^∘) ∩∂γ^∘) ∩ U_β. Let us consider the sheaf = ⋆_β. By <ref>, we have () ∩ U_β = () ∩ U_β, which implies () ∩ U_γ∩ U_β⊂ V × C^* ∩ U_β by the condition (2). Set λ (C^* ∩β^∘)^∘⊂ V. We shall show (⋆_λ) ∩ U_β⊂ (() ∩ U_γ) ∪ V × ((C^* ∩β^∘) ∩∂γ^∘) ∩ U_β and ≅⋆_λ. Since ∈^⊥_Z_γ(_V), by <ref>, we have ()=(⋆_γ) ⊂ V ×γ^∘ and () ∩ U_β = (⋆_γ) ∩ U_β ⊂ ((() ∩ U_γ) ∪ (V ×∂γ^∘)) ∩ U_β ⊂ V × ((λ^∘∪∂γ^∘) ∩β^∘ ). In particular, we have () ⊂ V × (λ^∘∪∂γ^∘∪∂β^∘ ) by <ref>. Since ⋆_λ≅ (⋆_γ) ⋆_λ, by the microsupport estimate for ⋆, we have (⋆_λ) ∩ U_β ⊂ V × (λ^∘∩ (λ^∘∪∂γ^∘∪∂β^∘ ) ∩β^∘ ) ⊂ V × (λ^∘∪ (λ^∘∩∂γ^∘) ∩β^∘) ⊂ (U_γ∪ (V × ((C^* ∩β^∘) ∩∂γ^∘))) ∩ U_β, which proves (<ref>). Moreover, since (⋆_λ∖γ) ∩ U_β ⊂ V × ((γ^∘∖λ^∘) ∩ (λ^∘∪∂γ^∘∪∂β^∘) ∩β^∘) ⊂ V× (∂γ^∘∩β^∘)⊂ Z_γ∩ U_β. From the exact triangle ⋆_λ∖γ→⋆_λ→⋆_γ, we find that ⋆_λ→⋆_γ is an isomorphism in (_V;U_γ∩ U_β)=(_V;U_(γ^∘∩β^∘)^∘) by the above microsupport estimate. Since = ⋆_γ⋆_β, we have ()⊂ V× (γ^∘∩β^∘). Hence we have ⋆_(γ^∘∩β^∘)^∘≅. Through the equivalence (-) ⋆_(γ^∘∩β^∘)^∘(_V;U_(γ^∘∩β^∘)^∘)→^⊥_Z_(γ^∘∩β^∘)^∘(_V), we have an isomorphism ⋆_λ→ in (_V). Hence, we obtain (<ref>), which proves the claim. Let us return to the proof of <ref>. If we set β={0} in the claim, we get () ⊂ (() ∩ U_γ) ∪ V × ((C^*) ∩∂γ^∘). Suppose (x,ξ) ∈() ∩ V × (((C^*) ∖ C^*) ∩∂γ^∘). Then, we can find a proper closed convex cone β such that ξ∈β^∘ and C^* ∩β^∘={0}. By applying the claim, we get (x,ξ) ∈() ∩ V ×β^∘⊂ (() ∩ U_γ) ∪ V × 0 ∩ V ×β^∘, which is a contradiction. This proves <ref>. In the above lemma, if C^* is a strict γ-cone in the sense of <cit.>, then C^* ∩∂γ^∘={0}. Hence, the lemma generalizes <cit.>. Let M be an open subset of E=^d and ∈^⊥_Z_γ(_M × V). Assume that there exists a closed cone C^* ⊂ E^* × V^* such that * C^* ⊂ E^* ×γ^∘ and * () ∩ U_ γ⊂ (M × V) × C^*. Then one has () ⊂ (() ∩ U_ γ) ∪ (M × V) × (C^* ∩ (E^* ×∂γ^∘)). The proof is similar to <cit.>. We replace <cit.> with <ref>. The statement is local on M. Let x_0 ∈ M and K be a compact neighborhood of x_0. We choose an open neighborhood W of K and an diffeomorphism W ≅^d=E satisfying (-1,1)^d ⊂ K. We take a diffeomorphism φ (-1,1) such that dφ(t) ≥ 1 for any t ∈ (-1,1). Define Φ U (-1,1)^d × V E × V by Φ(x'_1,…,x'_d, x”) (φ(x'_1),…,φ(x'_d),x”). Then, we obtain Φ_πΦ_d^-1(U × C^*) ⊂ E × C^* (see <cit.>). Hence, we can apply <ref> to the sheaf Φ_*(|_U) on the vector space E × V with the cone {0}×γ. For an end-conic closed subset A' of T^*M, we set ∂ A' A' ∩∂ T^*M ⊂∂ T^*M. Set ρ T^*M × T^*_τ >0_t → T^*M; (p,t,τ) ↦ p/τ. Let A' be an end-conic closed subset of T^*M and set A=ρ^-1(A'). Then for any ∈μ^_A'(T^*M;u<c), one has (|_u>0) ⊂ AA ∪{ (x,aξ,u,0,t,0) | (x,ξ) ∈∂ A',a>0 }∪ T^*_M ×_u<c×_t(M ×_u<c×_t), where AA ⊂ T^*M × T^*_u<c× T^*_τ>0_t denotes the doubling of A (see <cit.>). Fix (x,u)∈ M×_u<c and take a coordinate compact neighborhood K. Then the cotangent bundle is trivialized over K, namely, T^*(M×_u<c)|_K ≅ K × V^* with V=^n+1. Set A=ρ^-1(A'). For each (x',u') ∈ K, we set C_(x',u')^* T^*_(x',u')(M×_u<c) ×_τ∩ AA⊂ V^* ×_τ. We also define C_K^* ⋃_(x',u')∈ KC_(x',u')^* ⊂ V^* ×_τ, C^∞,*_(x',u') _> 0· (∂ A'∩∂ T^*_x' M) ×{υ=0}×{τ=0}∪{(0,0)}⊂ V^* ×_τ. Since A' is end-conic, we find that C_K^* ∩{τ=0 }⊂⋃_(x',u') ∈ KC^∞,*_(x',u'). We take an open subset U ⊂ K containing (x,u) and set _U|_U ×_t. Then we have (_U) ∩{τ>0}⊂ (U ×_t) × C_K^*. Hence, by applying <ref> to _U with V=_t and γ=_≥ 0, we obtain (_U) ⊂ ((_U) ∩ U_γ) ∪ ((U×_t) × (C_K^* ∩{τ =0})). Here, (_U) ∩ U_γ⊂ AA and C_K^* ∩{τ=0}⊂⋃_(x',u') ∈ KC^∞,*_(x',u'). In particular, (|_{(x,u) }×_t) ⊂ AA ∪ (U×_t) ×⋃_(x',u') ∈ KC^∞,*_(x',u') for any coordinate compact neighborhood K of (x,u). By taking the intersection over the neighborhoods of x, we get the desired estimate over x. This completes the proof. § PROOF OF SEPARATION THEOREM We first prove the separation theorem in the non-equivariant case. Let A' and B' be end-conic closed subsets T^*M. Suppose that π(A') ∩π(B') is compact and A' ∩ B' = ∅. Then for any ∈μ^_A'(T^*M;u<c) and any ∈μ^_B'(T^*M;u<c), one has _μ^(T^*M;u<c)(,)=0. We will prove that _(M ×_u<c×_t)(_M × (-∞,a) ×_t,)=0 for a>0. If the claim holds, since _a ↗ c_M× (-∞, a)×_t=_M×_u<c×_t, we have _(M ×_u<c×_t)(_M ×_u<c×_t,)≅lim_a ↗ c_(M ×_u<c×_t)(_M × (-∞,a) ×_t,)≅ 0. Hence, we will check the claim below. We argue similarly to <cit.> to estimate (_M × (-∞,a) ×_t). Since (|_u>0) ∩{τ=0}⊂{υ=0} and N^*(M × (0,a) ×_t) ⊂{ξ=0, τ =0 }, we find that () ∩ N^*(M × (0,a) ×_t) ⊂ T^*_M ×_u<c×_t(M ×_u<c×_t). Noticing that |_u ≤ 0≅ 0 and applying <cit.>, we get (_M × (-∞,a) ×_t) ⊂ N^*(M × (0,a) ×_t)^a + (). By <ref>, we have (_M × (-∞,a) ×_t) ∩{ u=a}⊂ { (x,ξ,a,υ,t,τ) | (x,ξ,t-a,τ) ∈ A, υ≥ -τ} ∪{ (x,ξ,a,υ,t,0) | (x,ξ) ∈∂ A', υ≥ 0 }, where A=ρ^-1(A'). Since _M × (-∞,a) ×_t≅_M ×_u<c× [0,∞)⋆_M × (-∞,a) ×_t, we have isomorphisms _(M ×_u<c×_t)(_M × (-∞,a) ×_t, ) ≅ _(M ×_u<c×_t)(_M ×_u<c× [0,∞)⋆_M × (-∞,a) ×_t, ) ≅ _(M ×_u<c×_t)(_M ×_u<c× [0,∞), ^⋆(_M × (-∞,a) ×_t, )). Here, ^⋆ is the right adjoint of ⋆. By <cit.>, ^⋆ can be written as ^⋆(,) ≅ m_* (p_2^-1i^-1, p_1^!), where p_i M ×_u<c×^2 → M ×_u<c×_t; (x,u,t_1,t_2) ↦ (x,u,t_i), m M ×_u<c×^2 → M ×_u<c×_t; (x,u,t_1,t_2) ↦ (x,u,t_1+t_2), i M ×_u<c×_t → M ×_u<c×_t; (x,u,t) ↦ (x,u,-t). By adjunction, we find that q_* ^⋆(_M × (-∞,a) ×_t, ) ∈_{τ≤ 0}(_t)^⊥, where q M ×_u<c×_t→_t is the projection. Since (i^-1_M × (-∞,a) ×_t) ∩() ⊂ T^*_M ×_u<c×_t(M ×_u<c×_t), by the microsupport estimate, we have (^⋆(_M × (-∞,a) ×_t, )) ⊂ { (x,ξ,u,υ,t,τ) | there exist (ξ_1,υ_1), (ξ_2,υ_2) ∈ T^*_(x,u)(M ×_u<c) and t_1, t_2 ∈_t with t=t_1+t_2 such that (x,ξ_1,u,υ_1,t_1,τ) ∈(_M × (-∞,a) ×_t), (x,ξ_2,u,υ_1,t_2,τ) ∈(), ξ = -ξ_1+ξ_2, and υ=-υ_1+υ_2 }. Again by <ref>, we obtain (^⋆(_M × (-∞,a) ×_t, )) ∩ (T^*_M ×_u<c(M ×_u<c) × T^*_t) ⊂ T^*_M ×_u<c×_t(M ×_u<c×_t). Since the map q is proper on (^⋆(_M × (-∞,a) ×_t, )), we get (q_* ^⋆(_M × (-∞,a) ×_t, )) ⊂ 0__t. By combining this estimate with the fact that it is in _{τ≤ 0}(_t)^⊥, we find that q_* ^⋆(_M × (-∞,a) ×_t, ) ≅ 0. This implies _(M ×_u<c×_t)(_M × (-∞,a) ×_t, ) ≅ _(_t)(_[0,∞), q_* ^⋆(_M × (-∞,a) ×_t, )) ≅ 0 as desired. Finally, we prove the separation theorem for our equivariant microlocal category. For the operations ⋆_ and ^⋆_, we refer to <cit.>. We have _μ^(T^*M;u<c)(, ) ≅ _μ^(T^*M;u<c)(⊕_d ∈_M ×_u<c× [d,∞)⋆_, ) ≅ _μ^(T^*M;u<c)(⊕_d ∈_M ×_u<c× [d,∞) ,^⋆_(, )) ≅ _(M×_u<c×_t)(_M ×_u<c× [d,∞) ,(^⋆_(, )) ≅ _^()(M×_u<c×_t)(_M ×_u<c× [d,∞) , p_1_* _M ×^2(p_2^-1, m^!)). Here ^()(M×_u<c×_t) is the derived category of equivariant sheaves with respect to the trivial -action. The last space is -invariant of the -representation _(M×_u<c×_t)(_M ×_u<c× [c,∞) , p_1_* _M ×^2(p_2^-1, m^!)). Hence the vanishing follows from that of the non-equivariant version (<ref>). Yuichi Ike: Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395, Japan. E-mail address: , Tatsuki Kuwagaki: Department of Mathematics, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan. E-mail address:
http://arxiv.org/abs/2406.08771v1
20240613030302
MFF-EINV2: Multi-scale Feature Fusion across Spectral-Spatial-Temporal Domains for Sound Event Localization and Detection
[ "Da Mu", "Zhicheng Zhang", "Haobo Yue" ]
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
Revealing hidden medium-range order in silicate glass-formers using many-body correlation functions Walter Kob June 17, 2024 ===================================================================================================== § ABSTRACT Sound Event Localization and Detection (SELD) involves detecting and localizing sound events using multichannel sound recordings. Previously proposed Event-Independent Network V2 (EINV2) has achieved outstanding performance on SELD. However, it still faces challenges in effectively extracting features across spectral, spatial, and temporal domains. This paper proposes a three-stage network structure named Multi-scale Feature Fusion (MFF) module to fully extract multi-scale features across spectral, spatial, and temporal domains. The MFF module utilizes parallel subnetworks architecture to generate multi-scale spectral and spatial features. The TF-Convolution Module is employed to provide multi-scale temporal features. We incorporated MFF into EINV2 and term the proposed method as MFF-EINV2. Experimental results in 2022 and 2023 DCASE challenge task3 datasets show the effectiveness of our MFF-EINV2, which achieves state-of-the-art (SOTA) performance compared to published methods. § INTRODUCTION Sound Event Localization and Detection (SELD) aims to use multichannel sound recordings to detect the onset and offset of sound events within specific target classes and estimate their direction of arrival (DoA). Its applications cover various fields, including smart homes, surveillance systems, and human-computer interaction. Since the introduction of SELD in the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge as task3<cit.>, deep neural network (DNN) based methods have been widely studied. These methods can be implemented using either single-branch or dual-branch neural networks. Single-branch approaches <cit.> commonly utilize a combination of log-mel spectrograms and intensity vectors (IVs) as input and output the results in a class-wise format. Dual-branch approaches<cit.> divide the network into sound event detection (SED) branch and DoA branch, treating SELD as a multi-task learning problem. The SED branch takes log-mel spectrograms as input, while the DoA branch uses both log-mel spectrograms and IVs as input. These methods usually generate results in a track-wise format <cit.>. The Event-Independent Network V2 (EINV2) <cit.> is a dual-branch network architecture that has demonstrated promising performance in the 2022 DCASE challenge. It utilizes four dual convolution (Dual Conv) layers as the encoder and Conformer blocks <cit.> as the decoder. The two branches are connected through soft parameter sharing <cit.>. However, most widely used architectures <cit.> simply stack CNN or ResNet in their encoders, which limits their ability to effectively exploit the spectral, spatial, and temporal domains when extracting features. Research conducted in other fields, such as speech enhancement <cit.> and sound event detection <cit.>, has proved the importance of fully utilizing these three domains. Similarly, we argue that leveraging the spectral, spatial, and temporal domains is crucial for SELD. 1) Spectral Domain: The spectral domain offers distinctive cues related to the frequency content of sound events. Each sound event exhibits unique spectral characteristics, encompassing frequency details and broader spectral trends. Leveraging multi-scale spectral information allows the model to focus on both local and global spectral patterns, thereby enhancing the identification of sound events. 2) Spatial Domain: The spatial domain provides critical locational information about sound sources. Multichannel audio has the potential to model numerous aspects of spatial scenes, making the simultaneous determination of multiple sound sources possible. Utilizing multi-scale spatial information enables the model to localize sound sources from different directions, which greatly benefits DoA estimation. 3) Temporal Domain: The temporal domain provides details about the evolution of sound events over time. This enables the model with the ability to track temporal context dynamics. Exploiting multi-scale temporal information, the model is able to capture the characteristics of target sound events with short- or long-duration, assisting in comprehension of the entire audio segment. In this paper, instead of simply stacking CNN or ResNet in the encoder, we propose a three-stage neural network structure called Multi-scale Feature Fusion (MFF) module to improve the ability to learn multi-scale features across spectral, spatial, and temporal domains. We incorporated the MFF module into EINV2 and term the proposed method as MFF-EINV2. The main contributions of this work are as follows: * We employ parallel subnetworks architecture with the TF-Convolution Module (TFCM) <cit.> to extract multi-scale features across spectral, spatial, and temporal domains. Repeated multi-scale fusion is used to improve the representation capabilities of the subnetworks, enhancing feature extraction. * Our MFF-EINV2 outperforms EINV2 by reducing parameters by 68.5% and improving the SELD_score by 18.2%. It also surpasses other published methods and achieves state-of-the-art (SOTA) performance. § PROPOSED METHOD The proposed MFF-EINV2 architecture is illustrated in Fig.<ref>.fig1a. We concatenate log-mel spectrograms and IVs along the channel dimension as input. The first step in the process involves a Dual Conv layer, which transforms the input into a feature map with dimensions [C, T, F], where C, T, and F represent the dimension size of the channel, time, and frequency, respectively. Next, we introduce the MFF module to fully extract multi-scale features from spectral, spatial, and temporal domains, as shown in Fig.<ref>.fig1b. The network is then divided into two branches: SED branch and DoA branch. Each branch consists of three Dual Conv layers. After each Dual Conv layer, a pooling layer with a kernel size of (2,2) is applied, which aims to align the temporal resolution with the target label. To prevent the MFF module from losing original information or extracting irrelevant features, a residual connection is established between the output of the MFF module and the first Dual Conv. Our encoder architecture maintains a balance between network complexity and soft connections. The subsequent network structure is the same as EINV2. The decoder utilizes Conformer blocks, which integrate convolution layers and multi-head self-attention (MHSA) mechanisms to extract both local and global time context information of feature sequence simultaneously. Finally, the fully connected (FC) layer outputs three tracks, enabling the handling of up to three overlapping sound events. In the following sections, we will present the details of our MFF module one by one. §.§ Parallel Multi-resolution Subnetworks We introduce parallel multi-resolution subnetworks to generate multi-scale spectral and spatial features. The input to the MFF module is a feature map of dimensions [C, T, F]. We utilize frequency downsampling (FD) to generate parallel subnetworks in stage 1 and stage 2. FD is implemented through a block that consists of a 2D convolution (Conv2D) layer, a batch normalization (BN) layer, and a Rectified Linear Unit (ReLU) activation layer. Parameters for Conv2D include a kernel size of (1, 7), stride of (1, 4), and padding of (0, 2), resulting in reducing the frequency dimension by 4 times and expanding the channel dimension by 2 times. Throughout the generation of these subnetworks, we ensure that the time dimension (T) remains at a high resolution, which is able to provide more detailed temporal context information. As a result, the feature map dimensions of the second subnetwork become [2C, T, F/4], and the third subnetwork has dimensions [4C, T, F/16], as shown in Fig.<ref>.fig1b. The spectral information of sound events is mainly reflected in the frequency dimension<cit.>. The MFF module consists of three subnetworks with different frequency resolutions, each focusing on different scales of spectral patterns. In the first subnetwork, when the feature map passes through the convolution layer in TFCM, the convolution kernel interacts with a specific frequency and its neighboring frequencies, as depicted in Fig.<ref>fig2a. This process allows the model to focus on local spectral patterns and frequency details of sound events. Next, the second subnetwork handles a feature map of dimensions [2C, T, F/4]. Due to the FD operation, each frequency unit now incorporates information from seven original contiguous frequencies. It effectively compresses high-resolution frequency information to a lower resolution. Even though the size of the convolution kernel remains the same, its coverage on the original feature map increases as resolution decreases. This means that the convolution layer is able to model global spectral patterns and broader frequency trends of sound events, as shown in Fig.<ref>fig2b. Similarly, the third subnetwork focuses on more global spectral patterns. This method enables the model to attend to spectral patterns at various scales, leading to improved discrimination and identification of different sound events. The spatial information of sound sources is encoded in both the channel and frequency dimensions<cit.>. FD changes these two dimensions, establishing a complex mapping between feature maps and spatial information. Each subnetwork extracts features from distinct aspects of the spatial domain, promoting a comprehensive understanding of the acoustic scene. This approach improves the model's capacity to simultaneously estimate the DoA of sound sources from different directions. §.§ TF-Convolution Module We use TFCM in each subnetwork in order to extract multi-scale temporal features. The TFCM is constructed by sequentially concatenating m convolutional blocks. The convolutional block is composed of two pointwise convolution (P-Conv) layers and a 2D depthwise convolution (D-Conv) layer. The D-Conv layer utilizes small convolution kernels of size (3, 3) to maintain computational efficiency. The dilation rate of the D-Conv layer's time dimension increases from 1 to 2^m-1. When the dilation rate is low, the TFCM has the ability to gather brief and fine-grained temporal context information, which is especially useful for identifying short-duration sound events. As the dilation rate increases, the TFCM is able to capture extended and coarse-grained temporal context information, which is effective in recognizing long-duration sound events. Additionally, the time evolution of the log-mel spectrograms amplitude potentially indicates the stationarity of sound events<cit.>. Therefore, TFCM facilitates multi-scale modeling of signal stationarity, enabling the distinction between stationary and non-stationary signals. This method enables the model to learn the multi-grained temporal context dynamics of sound events, thereby improving the capabilities of continuous SED and DoA estimation. §.§ Repeated Multi-scale Fusion We employ repeated multi-scale fusion to enhance the representation capacities of each subnetwork by continuously exchanging information with other parallel subnetworks. During stages 1 and 2, each subnetwork merges features from all other subnetworks, enabling a comprehensive fusion. Since analyzing the frequency dimension at high resolution provides more detailed spectral and spatial information, the last two subnetworks are selectively fused with the first subnetwork in stage 3 to restore the frequency dimension to F. We use FD and frequency upsampling (FU) to meet the dimension requirements of feature fusion. FD comprises FD×4 and FD×16. FD×16 includes an additional Conv2D layer compared to FD×4, enabling frequency downsampling by a factor of 16 and channel upsampling by a factor of 4. FU consists of FU×4 and FU×16. FU×n changes the channel dimension size using a Conv2D layer with a kernel size of (1,1), followed by applying the nearest neighbor algorithm for n times frequency upsampling, where n is 4 or 16. For instance, the multi-scale fusion process in stage 3 is as follows: Y_1 = X_1 + FU× 4(X_2) + FU×16(X_3). where X_1∈ℝ^C× T× F, X_2∈ℝ^2C× T ×F/4, X_3∈ℝ^4C× T ×F/16 represent the feature map of the first, second, and third subnetwork, respectively. Y_1∈ℝ^C× T ×F denotes the feature map after multi-scale fusion. Each parallel subnetwork is responsible for processing feature maps at different scales, enabling them to capture various aspects of features in spectral, spatial, and temporal domains. As a result, these subnetworks contain complementary information to each other. Through multi-scale fusion over and over again, these subnetworks facilitate information exchanges, allowing each to simultaneously consider various scale information across the three domains. This repeated fusion process enhances their ability to extract more meaningful features from the input. § EXPERIMENTS §.§ Datasets We trained and evaluated MFF-EINV2 on the STARSS22<cit.> and STARSS23<cit.> datasets, which are used for 2022 and 2023 DCASE challenge task3, respectively. The difference between the two datasets is that STARSS23 has an additional 2 hours and 30 minutes of recordings in the development set. Both datasets contain thirteen sound event classes and two types of multichannel array signals, including first-order Ambisonics (FOA) and tetrahedral microphone array signals. We trained our model only on FOA array signals, which comprise four channels: an omnidirectional channel(w), and three directional channels (x, y, and z). In addition, we used synthetic data<cit.> as external data. The synthetic data was generated by convolving single-channel data from FSD50K<cit.> and TAU-SRIRs<cit.>. For both STARSS22 and STARSS23, we utilized the dev-set-train and the synthetic data for training, and the dev-set-test for evaluation. §.§ Hyper-parameters and Evaluation Metrics We processed audio clips into fixed 5-second segments without overlap for both training and evaluation. The sampling rate of the audio is 24. We applied a Short-Time Fourier Transform (STFT) with a hop size of 300, using a 1024-point Hanning window. Next, we generated log-mel spectrograms and IVs in the log-mel space, with 128 frequency bins. IVs were concatenated with log-mel spectrograms along the channel dimension, resulting in 7-channel input. After each Dual Conv layer in the encoder, the channel dimension of the feature map is 64, 128, 256, and 256, respectively. The structure of the Conformer blocks remains consistent with EINV2, but the embedding dimension is reduced from 512 to 256. The number of Conformer layers is 2. Loss weight is 0.8 for SED and 0.2 for DoA. We utilized the AdamW optimizer, training for a total of 100 epochs. For both STARSS22 and STARSS23, the initial learning rate is 0.0003. With STARSS22, it drops to 0.00003 after 80 epochs, and for STARSS23, after 60 epochs. We set the batch size to 6 and trained for 43 hours using an NVIDIA GeForce RTX 3090 GPU. We utilized five metrics for evaluation<cit.>. Error rate (ER_20^∘) and macro-averaged F1-score (F_20^∘) are location-dependent used for SED. Localization error (LE_CD) and localization recall (LR_CD) are class-dependent employed for DoA estimation. Furthermore, we calculate the average of these four metrics to obtain SELD_score. §.§ Experimental Results §.§.§ Comparison with other methods To investigate the effectiveness of our proposed MFF-EINV2, we compare it with several other models<cit.>. The 2022 baseline<cit.> uses a convolutional recurrent neural network (CRNN) and was updated in 2023<cit.> to include two additional MHSA layers. The ResNet-Conformer<cit.> utilizes ResNet as the encoder and Conformer as the decoder. The DST attention<cit.> introduces a spectral attention layer into the decoder, enhancing the performance over the 2023 baseline. The CST-former<cit.> incorporates distinct attention mechanisms to process channel, spectral, and temporal information independently. For our MFF-EINV2, the MFF module contains three stages, while TFCM consists of six convolutional blocks. All models were trained without any data augmentation techniques. The performance comparison is shown in Table <ref>. First, we compare the results on the STARSS22 dataset. Our model significantly outperforms all evaluation metrics of the 2022 Baseline and ResNet-Conformer. Moreover, our model performs slightly better than CST-former, showing a 1.8% (0.0073) improvement in SELD_score. It particularly excels in LE_CD and LR_CD, but falls short in ER_20^∘ and F_20^∘. This may be attributed to more complex and significant features extracted from the spatial domain compared to the spectral domain during the generation of subnetworks. In comparison to EINV2, our model reduces the number of parameters by 68.5% (58.4M), primarily due to the embedding dimension of the Conformer is reduced from 512 to 256. Nevertheless, our model has an 18.2% (0.0911) decrease in SELD_score. This is because our model fully leverages the spectral, spatial, and temporal domains to extract more meaningful features. We now compare the performance achieved on the STARSS23 dataset. Our model outperforms 2023 baseline and DST attention by a significant margin in all metrics. Although the lead is modest compared to CST-former, our model achieves the best SELD_score and surpasses CST-former in ER_20^∘ and LR_CD. This may be due to our method enhancing the model's focus on both fine details and the overall structure of sound events, improving ER_20^∘. Moreover, it integrates spatial information across multiple scales, enhancing the model's understanding of the sound scene and likely leading to a higher LR_CD. §.§.§ Number of parallel subnetworks The number of parallel subnetworks, denoted as s, directly impacts the model's representation capability and computational cost. A larger s yields more spectral and spatial feature maps at various scales, thus enhancing the expressiveness of the trained SELD model. However, too large s leads to higher computational costs and might cause overfitting of the model. Table <ref> shows the SELD performance of MFF-EINV2 with different numbers of parallel subnetworks. From the table, using three parallel networks strikes a balance between representation ability and fitting ability, resulting in the best SELD performance. §.§.§ Number of convolutional blocks in TFCM The number of convolutional blocks m in TFCM influences the extraction of multi-scale temporal information. A larger m will extract more scales of temporal information. However, too large m might extract irrelevant features due to extensive dilation in D-Conv. Table <ref> shows the SELD performance of MFF-EINV2 with different numbers of convolutional blocks in TFCM. From the table, using six convolutional blocks effectively captures multi-scale temporal information. § CONCLUSION In this paper, we propose MFF-EINV2, a novel approach for SELD. In the MFF module, we introduce parallel subnetworks and employ TFCM to extract multi-scale features across spectral, spatial, and temporal domains. In addition, we leverage repeated multi-scale fusion between parallel subnetworks so that each subnetwork continuously receives information from other parallel representations. Results show that compared to EINV2, our MFF-EINV2 significantly reduces parameters by 68.5% while improving the SELD_score by 18.2%, demonstrating the effectiveness of our MFF module. Notably, without data augmentation, our proposed MFF-EINV2 outperforms other published methods and achieves SOTA performance. IEEEtran
http://arxiv.org/abs/2406.08315v1
20240612151626
Improving Policy Optimization via $\varepsilon$-Retrain
[ "Luca Marzari", "Changliu Liu", "Priya L. Donti", "Enrico Marchesini" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Invariant multiscale neural networks for data-scarce scientific applications I. Schurov^1 ^a, D. Alforov^2, M. Katsnelson^1, A. Bagrov^1 and A. Itin^3 June 17, 2024 ================================================================================ § ABSTRACT We present , an exploration strategy designed to encourage a behavioral preference while optimizing policies with monotonic improvement guarantees. To this end, we introduce an iterative procedure for collecting retrain areas—parts of the state space where an agent did not follow the behavioral preference. Our method then switches between the typical uniform restart state distribution and the retrain areas using a decaying factor ε, allowing agents to retrain on situations where they violated the preference. Experiments over hundreds of seeds across locomotion, navigation, and power network tasks show that our method yields agents that exhibit significant performance and sample efficiency improvements. Moreover, we employ formal verification of neural networks to provably quantify the degree to which agents adhere to behavioral preferences. § INTRODUCTION By balancing the trade-off between exploration and exploitation, a reinforcement learning (RL) agent can typically rely on a scalar reward function to learn behaviors capable of solving a task <cit.>. However, these functions often lead to unforeseen behaviors, making it difficult to enforce particular behavioral preferences that we desire the system to exhibit <cit.>. For example, consider applying a policy optimization RL method to a robot learning how to reach random targets. Commonly, the agent is rewarded based on its distance from the goal and penalized for collisions <cit.>, and a uniform restart distribution throughout the state space randomly initializes the environment at each episode. In this setup, learning good navigation behaviors while satisfying the preference (or desiderata, interchangeably) “avoid collisions” requires many collisions around the same state. However, while the uniform restart distribution is pivotal to guarantee monotonic policy improvement <cit.>, it naturally makes it harder for agents to experience these similar collisions over time. This potentially translates into a higher variance in the local estimate of the objective <cit.>, thus making it hard to effectively enforce the desired behavior <cit.>. To address such an issue, previous policy iteration and gradient approaches investigated the impact of restoring the environment to specific states to improve performance while maintaining theoretical guarantees on monotonic improvement <cit.>. A leading example is the vine Trust Region Policy Optimization (TRPO) algorithm <cit.>, designed to enhance exploration and reduce the variance of gradient updates. In detail, vine restarts the agent in states visited under the current policy to generate additional rollouts from that state and reduce the policy update variance. The authors also demonstrate how the theoretically justified procedure retains monotonic policy improvement guarantees. Crucially, despite the approximations required by the practical algorithm, TRPO vine empirically results in superior performance over previous policy gradient methods based on uniform restart distributions. However, this method and, more generally, designing a poor restart state distribution has three critical downsides we address in our work. (i) To the best of our knowledge, policy optimization works have not considered restart distribution mechanisms geared towards improving specific behavioral preferences; (ii) sub-optimal distributions can cause approximate methods to get stuck in local optima, potentially resulting in poor agent performance <cit.>; (iii) vine significantly hinders sample efficiency, requiring many additional rollouts for each policy update. r0.49 < g r a p h i c s > Explanatory overview of . This paper presents , a novel exploration strategy designed to optimize policies, maintaining monotonic improvement guarantees while encouraging a behavioral preference. Our method, depicted in Figure <ref>, is inspired by human learning, where consistently repeating tasks enhances learning of a particular behavior. In detail, we exploit an ε decay strategy to combine the uniform restart state distribution over the state space (blue), typical of RL algorithms, with a restart strategy over retrain areas (green). The latter is modeled using an iterative procedure that collects and refines (i.e., creates and merges) portions of the state space where the agent violated the behavioral preference at training time. The proposed approach thus aims at retraining the agent from these areas, improving the advantage estimation of actions that violate the desiderata, according to a probability ε (purple). Crucially, our decaying schedule for ε allows us to maintain the same asymptotic convergence properties of the underlying RL algorithm and, along with the design of retraining areas, ensure that we avoid sub-optimal distributions. Our method also does not require additional rollouts and we demonstrate that using mixed uniform restart distributions leads, in the worst case, to the same monotonic improvement guarantees as in <cit.>. We first empirically demonstrate the benefits of employing  on top of policy optimization methods (i.e., TRPO <cit.> and Proximal Policy Optimization (PPO) <cit.>) over hundreds of seeds and different behavioral preferences. We also combine  with the Lagrangian implementations of TRPO and PPO since they have been recently employed to enforce specific behavior <cit.>. Our evaluation considers diversified tasks ranging from simulated locomotion to optimizing power networks and robotic navigation. The results show that enhancing policy optimization methods with  leads to significantly higher sample efficiency and better enforce the behavioral desiderata while solving the tasks. Crucially, we note that system designers typically evaluate if agents comply with the expected behavior empirically and can not provably quantify the degree to which they adhere to the preferences. Since our theory refers to the improvement over the main reward objective, we employ a formal verification (FV) of neural networks tool to provably quantify the rate at which the agent trained for the navigation task avoids collisions in the retrain areas.[We use navigation as an explanatory task for clarity since it allows us to visualize retrain areas.] Finally, the realistic Unity environment employed in the robotic navigation task enables the transfer of policies trained in simulation on ROS-enabled platforms. Hence, we show the effectiveness of in realistic unsafe navigation scenarios. § PRELIMINARIES AND RELATED WORK We consider problems defined as Markov decision processes (MDPs), modeled as a tuple (𝒮, 𝒜, 𝒫, ρ, R, γ); 𝒮 and 𝒜 are the finite sets of states and actions, respectively, 𝒫:𝒮 ×𝒜×𝒮→ [0, 1] is the state transition probability distribution, ρ: 𝒮→ [0, 1] is the initial uniform state distribution, R: 𝒮×𝒜→ℝ is a reward function, and γ∈ [0,1) is the discount factor. In policy optimization algorithms, agents learn a parameterized stochastic policy π:𝒮×𝒜→ [0,1], modeling the probability to take an action a_t ∈𝒜 in a state s_t ∈𝒮 at a certain step t. The goal is to find the parameters that maximize the expected discounted reward ψ(π) = 𝔼_τ∼π [∑_t=0^∞γ^t R(s_t, a_t)], where τ:= (s_0, a_0, s_1, a_1, …) is a trajectory with s_0 ∼ρ(s_0), a_t ∼π(a_t | s_t), s_t+1∼𝒫(s_t+1| s_t, a_t). We also define state and action value functions V_π and Q_π modeling the expected discount return starting from the state s_t (and action a_t for Q_π) and following the policy π thereafter as: V_π(s_t) = 𝔼_a_t, s_t+1, a_t+1,…[ ∑_i=0^∞γ^i R(s_t+i, a_t+i)] and Q_π(s_t, a_t) = 𝔼_s_t+1, a_t+1,…[ ∑_i=0^∞γ^i R(s_t+i, a_t+i)]. Moreover, given the current state and action, we can measure how much better or worse the agent performs compared to its expected performance—the advantage function: A_π(s,a) = Q_π(s,a) - V_π(s) = 𝔼_s' ∼𝒫(s' | s,a)[R(s,a)+ γ V_π(s')-V_π(s)]. To derive a bound on the policy improvement, <cit.> also define the expected advantage of a new policy π' over the old π, and relate the expected discounted return of π' to π: A(s) = 𝔼_a ∼π'(·| s)[A_π(s,a)], and ψ(π') = ψ(π) + 𝔼_τ∼π'[∑_t=0^∞γ^t A_π(s_t, a_t)]. In practice, the dependency on trajectories induced by π' makes the above equation hard to optimize. To address this, <cit.> introduce a surrogate local approximation L_π(π') to ψ(π'), using the state distribution over the current policy π rather than π': L_π(π') = ψ(π) + ∑_s ρ_π(s) ∑_a π'(a | s) A_π(s,a) = ψ(π) + 𝔼_τ∼π[∑_t=0^∞γ^t A(s_t) ]. With the above intuitions, it is possible to derive an upper bound on the absolute difference between |ψ(π') - L_π(π')|≤4α^2 γ k/(1-γ)^2, with k= max_s,a| A_π(s,a) |. Finally, by setting α = D^max_KL(π, π') = max_s D_KL(π(·| s) || π'(·| s)) and employing the relationship between the total variation (TV) divergence and the Kullback–Leibler (KL) divergence D_TV(p|| q)^2 ≤ D_KL(p|| q) <cit.>, <cit.> derive and prove the following lower bound on the policy improvement: ψ(π') ≥ L_π(π') - CD^max_KL(π, π'), with C=4kγ/1-γ^2. §.§ Constrained MDP Constrained optimization has been recently employed to encourage a behavioral preference, or a safety specification <cit.>. To this end, the classical MDP extends to a constrained MDP (CMDP) considering an additional set of 𝒞 := {C_i}_i∈ n indicator cost functions and 𝐥∈ℝ^n hard-coded thresholds for the constraints <cit.>. The goal of constrained RL algorithms is to maximize the expected reward while limiting the accumulation of costs under the thresholds. To this end, policy optimization algorithms typically employ the Lagrangian to transform the problem into an unconstrained one that is easy to implement over existing algorithms <cit.>. Consider the case of a single constraint characterized by a cost function C: 𝒮×𝒜→{0, 1}, for which we can define the expected cost function ψ_C(π) := 𝔼_τ∼π[∑_t=0^∞γ^t C(s_t, a_t) ], as well as a cost threshold l. The Lagrangian applies a differentiable penalty ℒ_𝒞(λ) = -λ(ψ_C(π) - l) to the policy optimization objective, where λ is the so-called Lagrangian multiplier. These algorithms thus take an additional gradient descent step in λ: ∇_λℒ_𝒞(λ) = l - ψ_C(π). The multiplier is forced to be ≥ 0 as it acts as a penalty when the constraint is not satisfied (i.e., λ increases) while decreasing to 0 and removing any penalty when the constraint holds. However, choosing arbitrary small values for the threshold potentially causes a detrimental trade-off between the main task and cost objectives, ultimately leading to policies that fail to solve the problem they are trained for. Moreover, the cost metric employed in the safety evaluation is purely empirical and does not provide any provable guarantees on the actual adherence to behavioral preferences. To address such an issue, employing formal verification of neural networks becomes of pivotal importance. §.§ Formal Verification of Neural Networks A reachability-based FV for neural networks tool takes as input a tuple 𝒯 = ⟨ℱ, 𝒳, 𝒴⟩, where ℱ is the trained policy (i.e., the neural network), and ⟨𝒳, 𝒴⟩ encodes a behavioral preference in terms of input-output relationships <cit.>. Specifically, 𝒳 is a precondition defined on the portion of the state space we are interested in, and 𝒴 models the postcondition specifying the desiderata. r0.49 < g r a p h i c s > Overview of FV for neural networks. An FV tool propagates intervals 𝒳 through ℱ and performs a layer-by-layer reachability analysis to compute the output reachable set ℛ(𝒳, ℱ). The tool then checks if ℛ(𝒳, ℱ) ⊆𝒴, meaning that the agent satisfies the preference for all the states in 𝒳. Figure <ref> shows a simplified overview of the verification process, which checks if (at least) one violation of the behavioral preference exists in 𝒳. Due to over-approximation errors introduced by the propagations, FV tools iteratively split 𝒳 into sub-domains 𝒳_i <cit.> (the first two blocks). When the output reachable set ℛ(𝒳_i, ℱ) is not included in 𝒴 (the second to last block), the iterative procedure ends—the behavioral preference is violated if at least one portion of the domain 𝒳 falls within this scenario. As a natural extension of the FV problem, recent works <cit.> propose to enumerate all the portions of 𝒳 violating the desiderata, thus provably quantifying the rate at which agents satisfy the input-output relationships. In this work, we rely on the tool proposed by <cit.> to quantify the degree to which agents adhere to behavioral preferences in the robotic navigation task. § POLICY OPTIMIZATION VIA We introduce  to restart an agent from regions of the state space where it previously violated a behavioral preference. Our goal is to encourage a policy to follow the desiderata while improving performance and sample efficiency. To this end,  collects retrain areas—subsets of the state space 𝒮⊆𝒮 defined using an iterative procedure that merges parts of 𝒮 where the agent violated the preference during training. We then introduce a mixed restart distribution, combining the typical uniform restart distribution ρ over the entire state space 𝒮, with another uniform restart distribution ρ:𝒮→ [0, 1] that considers retrain areas. Crucially, such a procedure is simple to implement and can potentially be applied to any policy optimization RL algorithm. l0.45 < g r a p h i c s > (left) The agent collides with an obstacle, receiving a positive cost. (right) A retrain area is created from that state. In the RL training loop, the iterative procedure for collecting and merging retrain areas begins upon receiving a positive cost. We use an indicator cost signal (as in the safe RL literature <cit.>) to detect the interactions where the agent violates the desiderata. Upon each violation, a retrain area is created and collected into a buffer 𝒮 that is initially empty.[Note that it is possible to set a maximum size for 𝒮 to limit the horizon over which retraining an agent.] Such an area is a portion of the space surrounding the state that led to a violation, created by encoding each feature of the agent's state as an interval with fixed size ω (i.e., a “bubble” around the state). Figure <ref> shows an explanatory example in the navigation task, where collision avoidance is our desiderata. By leveraging commonly available information (in this case, the sensors' precision), we encode a bubble of size ω around the state that led to the collision. The idea is that retraining an agent from these slightly different, collision-prone situations could improve performance. After modeling the new retrain area, a refinement procedure starts. In detail, we define a similarity rule to decide whether to merge two retrain areas, indicating they model a similar violation. Specifically, if two retrain areas are within a distance β (a hyper-parameter of ), they are combined by taking the minimum lower and maximum upper bounds of the two for each input feature's interval. The refinement offers two advantages: (i) it allows us to maintain a reasonable size for 𝒮, and (ii) it clusters similar behavioral violations within the same retrain area, guaranteeing a uniform sampling over different violations. The new or refined area is then inserted into 𝒮 and can be used to retrain the agent. Finally, before starting the next episode, the new initial state of the environment is either randomly sampled from the whole state space 𝒮 with probability 1-ε, or randomly sampled from a retrain area with probability ε. If the realized sample is from a retrain area, this means the environment resets to a configuration where the agent violated the behavioral preference in the previous trajectories. Crucially,  employs a linear decay for the mixed restarting distribution, avoiding the problem of getting stuck in suboptimal restart distributions. We refer to Appendix <ref> for an explanatory example of the generation and refinement of retrain areas and for the detailed pseudocode of our method. § MONOTONIC IMPROVEMENT GUARANTEES In this section, we theoretically derive a bound on the monotonic policy improvement for approaches that employ a mixture of uniform restart distributions, similar to . We show that the original bound on monotonic policy improvement presented by <cit.> still holds and, in some cases, can be tighter. Notably, our result motivates both the design of our method as well as its superior performance (Section <ref>). For the sake of clarity, we first recall the definition of α-coupled policies and the related lemma introduced to derive the bound in case of a single uniform restart distribution over 𝒮. First, Definition <ref> couples two policies that behave in the same way (i.e., given a state, they pick the same action) with probability ≥ 1-α, and Lemma <ref> bounds the gap between policy advantages satisfying such policies. We say that π and π' are two α-coupled policies if ∀ s ∈𝒮, we can define a joint distribution (a, a') | s such that P(a ≠ a'| s) ≤α. Given two α-coupled policies, π and π', we have that: |𝔼_s_t∼π'[A(s_t)] - 𝔼_s_t∼π[A(s_t)]|≤ 4α(1-(1-α)^t)max_s,a| A_π(s,a)|. It follows that: |ψ(π') - L_π(π')| ≤4α^2 γ k/(1-γ)^2 with k= max_s,a| A_π(s,a) |. Lemma <ref>, however, only considers the case where a uniform restart distribution over 𝒮 is used. We extend such result to the mixture of restarting distributions used by  (right side of Equation <ref>, where s_0∼ρ), starting by defining the expected discounted return of a new policy π' over the current π as: ψ(π') = (1-ε)[ψ(π) + 𝔼_τ∼π' s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]]+(ε)[ψ(π) + 𝔼_τ∼π' s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]]. For the surrogate loss (see Equation <ref>), we obtain its local approximation: L_π(π') = (1-ε)[ψ(π) + 𝔼_τ∼π s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]] + (ε)[ψ(π) + 𝔼_τ∼π s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]]. We thus extend Lemma <ref> to provide an upper bound on the distance between ψ(π') and L_π(π') under the mixed restart state distribution of . Considering a mixed restart distribution over ρ and ρ using , it holds that |ψ(π') - L_π(π')|≤4α^2 γ/(1-γ)^2[k(1-ε)+ k'ε] ≤4α^2 γ k/(1-γ)^2 with k'= max_ s_0 ∼ρ τ∼π'| A_π'(s,a) |≤ k= max_s_0 ∼ρ τ∼π'| A_π'(s,a) |, ε∈ [0,1]. Proof. Full proof of Lemma <ref> is presented in Appendix <ref>. Finally, by exploiting the relationship between the total variation divergence and the KL divergence <cit.>, we derive the following corollary on the monotonic improvement guarantee under -based methods. Let ρ and ρ be two different restart distributions. Combining ρ and ρ during the training as in , and by setting α=D_KL^max(π, π'), the bound on the monotonic improvement for the policy update ψ(π') ≥L_π(π') - CD^max_KL(π, π') with C=4kγ/1-γ^2 is still guaranteed. The result naturally follows from Lemma <ref> and the derivation of <cit.>. § LIMITATIONS AND ASSUMPTIONS OF Our algorithm requires a simulator to train the agent and the possibility of resetting the system to specific states. We believe the latter requirement is reasonable, and it has been also considered by previous work <cit.>. In  , we assume having access to an additional indicator cost signal from the environment. Such a signal is widely adopted in the safe RL literature, where system designers assume having access to a cost function that deems a state-action pair as safe or unsafe <cit.>. Finally, we assume the area surrounding a collision state is also prone to violations of the desiderata. When such an assumption does not hold (e.g., in highly non-linear systems such as power grids), we could use the exact feature values to determine a retrain area instead of intervals of size ω. § EXPERIMENTS We present a comprehensive evaluation of  applied to TRPO <cit.> that approximates the policy optimization theory of Section <ref>, and PPO <cit.> which relaxes the computational demands of TRPO. We refer to these methods as ε-TRPO and ε-PPO. Additionally, we investigate the impact of the proposed approach on the CMDP framework <cit.>, using the Lagrangian method on top of TRPO and PPO to model behavioral preferences as constraints. The resulting algorithms are named TRPOLagr, PPOLagr, ε-TRPOLagr, and ε-PPOLagr. Our experiments address the following questions: (i) does  allow agents to better adhere to behavioral preferences while solving the task? (ii) How does  impact existing CMDP-based methods aimed at satisfying these preferences? (iii) How often do agents satisfy the behaviors provably and empirically? To answer these questions, we begin our experiments using two known safety-oriented tasks, SafetyHopperVelocity-v1, and SafetyHalfCheetahVelocity-v1, from the Safety-Gymnasium benchmark <cit.>. To evaluate our method in a variety of setups and different behavioral desiderata, we also employ two practical scenarios based on an active network management task for a power system <cit.>, and navigation for a mobile robot <cit.>. These tasks are described in detail in Sec.<ref> in the supplementary material. For simplicity, we will refer to them as Hopper (Hop), HalfCheetah (Che), Active Network Management (ANM) and Navigation (Nav), respectively. Implementation Details. Data collection is performed on Xeon E5-2650 CPU nodes with 64GB of RAM, using existing implementations for PPO, TRPO, and their Lagrangian, based on the omnisafe library <cit.>. Complete hyperparameters are in Appendix <ref>. To ensure statistical significance <cit.>, we report the average return, cost, and standard error as shaded regions over the last training epoch of 50 independent runs per method. Figures <ref> and <ref> (the latter reported in the supplementary) show the average return in the first row and the average cost in the second row, where each column represents a different task. Notably, we are seeking agents that achieve a lower cost, which indicates they better adhere to the desired behavioral preferences. Our claims on the performance improvement of  are supported by 1600 training runs, which significantly surpasses the typical 3-10 runs per method used in previous policy optimization works <cit.>. We remark that due to employing a small penalty in the reward function to encourage specific behavioral preference, our results are not directly comparable to the published baselines <cit.>. For a fair comparison, we first collect the baseline with this new setting and then compare the performance with our approach. Considering the computational resources used for our experiments, Appendix <ref> also addresses the environmental impact. Empirical Evaluation (Performance of ε-TRPO and ε-PPO). Figure <ref> shows that TRPO and PPO enhanced with our  improve sample efficiency and allow agents to better adhere to the behavioral preferences. In Hopper and HalfCheetah, TRPO and PPO achieve substantially higher returns than their version; a result that could be easily misunderstood. In fact, this is related to the nature of the task, where the reward is directly proportional to the agents' velocities. For this reason, an agent that violates the behavioral preference “limit velocity under a threshold”, achieves higher returns. This is clearly shown in the first two columns of Figure <ref>, where ε-TRPO and ε-PPO resulted in notably lower cost compared to the baselines, indicating they lead the agents towards adhering to the velocity limit significantly more often than TRPO and PPO. Similar results are achieved in the ANM task where ε-TRPO and ε-PPO are notably safer and more sample efficient (see Pareto frontiers for convergence results in Figure <ref> in the supplementary) than the baseline counterparts. The benefits of  are also confirmed in the navigation task, where violating the safety desiderata “avoid collisions” leads to more collisions. Ultimately, achieving a higher cost (i.e., more collisions) hinders the navigation performance of the agent and leads to inferior returns. Specifically, TRPO and ε-TRPO converge to the same average cost. However, the higher sample efficiency of the latter through the training, in terms of learning collision avoidance behaviors more quickly, allows ε-TRPO to learn better navigation behaviors, outperforming TRPO in terms of average return. Moreover, ε-PPO significantly outperforms PPO both in terms of average cost and return. Performance of ε-TRPOLagr and ε-PPOLagr. During training (learning curves are reported l0.55 Hop Che ANM Nav PPOLagr 0.57 0.33 0.44 0.46 ε-PPOLagr 0.04 0 0.59 0.44 TRPOLagr 0.51 0.17 0.58 0.64 ε-TRPOLagr 0.25 0 0.79 0.56 Average fraction of the training steps where agents violate the constraints (lower is better). in Figure <ref> in the supplementary), both ε-TRPOLagr and ε-PPOLagr drastically reduce the amount of constraint violations in the Hopper and HalfCheetah velocity environments. We specify the fraction of training steps where agents violate their constraint in Table <ref>. However, at convergence, all the approaches satisfy the imposed thresholds. These results lead to some interesting considerations based on the setup of interest. In safety-critical contexts where it is crucial to satisfy constraints at training time,  showed significant empirical benefits. On the other hand, in non-critical contexts where performance at convergence is the main evaluation metric, the naive Lagrangian methods have superior return performance. Intuitively, this relates to the fact that Lagrangian methods often violate the constraints at training time, allowing agents to explore more and thus learn higher-performing behaviors. In the more complex, realistic scenarios, our empirical analysis leads to different considerations. Specifically, in navigation, -based methods and the Lagrangian baselines achieve comparable results in terms of cost (i.e., constraint satisfaction). However, retraining agents in areas that are collision-prone allowed them to learn policies with better navigation skills and higher performance. In the ANM task, retraining an agent during grid instability increases the frequency of constraint violations compared to the baseline. Nonetheless, our approach helps agents learn to manage the grid effectively over time in contrast to Lagrangian baselines, which in the end, fails to solve the problem efficiently (see the Pareto frontier in the supplementary material). Provably Verifying Navigation Behaviors. To further assess the benefit that  has over the behavioral preferences, we formally verify the policies trained for the navigation task. We consider this problem as an explanatory task for clarity since it allows us to easily visualize the retrain areas generated for “collision avoidance” on top of the environment. l0.6 < g r a p h i c s > (left) Density map of the retrain areas collected in the first and last training epochs; yellow indicates higher density. (right) Average behavioral violations percentage for policies trained with TRPO, PPO, PPOLagr, TRPOLagr, and their version. Figure <ref> on the left, shows a kernel density estimation map of the retrain areas distribution at the beginning (left) and final stages (right) of the training for the ε-TRPO agent. Here we can notice how the agent successfully learns to navigate the environment over time since the retrain areas are more equally distributed through the entire scenario. Using a recent verification tool <cit.>, we aim to quantify the probability that a navigation policy violates the collision avoidance preference. To this end, we consider three representative retrain areas collected while training the different -based algorithms (depicted as red dots in Figure <ref>). For each area, we encode the input-output relationship required by the tool (see Section <ref>), considering the retrain area as the precondition, and the minimum linear and angular velocities that would cause a collision as the postcondition. Broadly speaking, the FV tool checks where the trained policies do not exceed such minimum velocities (i.e., they do not collide), and returns the portion of each retrain area for which the given policy violates the postcondition (i.e., the probability of colliding in that area). The table in Figure <ref> reports the probability that policies at convergence collide in the chosen retrain areas, averaged over all the runs. Overall, this additional analysis based on FV highlights that -enhanced algorithms better adhere to the behavioral preference, further confirming our intuitions and the merits of our approach. Real experiments. To conclude our comprehensive evaluation, we perform an additional evaluation in realistic unsafe mapless navigation settings. Due to the similar performance at r0.45 < g r a p h i c s > Real-world experiments comparing ε-TRPO and TRPO in corner-case scenarios. convergence for TRPO and ε-TRPO, we report the evaluation for these two approaches. Specifically, we compare ε-TRPO and TRPO in scenarios where the agent has either all or only partially occluded LiDAR information. We hypothesize that if the agent is not exposed to multiple unsafe situations during the training, i.e., without an strategy, it is less likely to select a longer but safer trajectory, and eventually, the agent will prefer a straight trajectory leading to a collision. To this end, the Unity framework <cit.> we used to create the navigation task allows us to transfer the policies trained in simulation on ROS-enabled platforms such as our Turtlebot3 (Fig. <ref>). Hence, we test several corner-case situations, comparing the safer (from the formal verification results) trained agent at convergence for both ε-TRPO and TRPO. Realistic experimental results confirm our hypothesis, showing the benefit of retraining the agent in specific regions of the state space deemed unsafe (we refer to the video in the supplementary material). § DISCUSSION This work presented , a novel exploration strategy with monotonic improvement guarantees that optimizes policies while encouraging specific behavioral preferences. Our approach aims at retraining an RL agent from retrain areas where it violated a desired behavioral preference at training time. Our empirical and formal evaluation over hundreds of seeds considering various tasks and behavioral preferences, demonstrated the effectiveness in terms of higher sample efficiency and superior performance of  when integrated with existing policy optimization methods. Real-world experiments confirmed the benefit of the proposed approach in realistic setups. § APPENDIX § PSEUDO CODE AND FURTHER DETAILS We present in Algorithm <ref> the complete procedure of . Our approach requires the following inputs: a bubble size ω, which is used to determine the size of the retrain area; a similarity threshold β, used to merge similar retrain areas; and all the parameters for the ε decay strategy, as discussed in the main paper. More specifically, we start initializing the memory buffer of the retrain areas 𝒮 as an empty set, and we select the starting state s_0 using the initial uniform state distribution ρ over the entire state space 𝒮 (line 2). Then, upon each unsafe interaction with the environment (i.e., after the trigger of a cost signal), we generate a retrain area calling the method, which requires the previous state s_t-1 and the ω-bubble (line 3-7). After creating the retrain area r, checks if a refinement process can be performed. Specifically, our approach automatically checks for the existence of another similar retrain area r' to merge with, using the similarity threshold β provided as a parameter. If such an r' exists, the method is called (line 9). Refer to the next section for a complete overview of the and methods. If no similar retrain area is found, r is stored in the buffer 𝒮 (line 11). Finally, if there is at least one retrain area in 𝒮, the new initial state of the environment is either randomly sampled from the entire state space 𝒮 with probability 1-ε, or randomly sampled from a retrain area with probability ε. If the sample is from a retrain area, the environment resets to a configuration where the agent previously violated the behavioral preference. Importantly, employs a linear decay for the mixed restarting distribution, avoiding the problem of getting stuck in suboptimal restart distributions (line 16). §.§ Retrain Area Generation and Refinement Process This section shows an explanatory example of the and methods. We first report an example of retrain area generation in the robotic navigation context, then show an explanatory example of the refinement process in the HalfCheetah locomotion task. Retrain Area Generation. Suppose we are in a robotic navigation scenario, and an environment cost signal is triggered. This indicates that a collision between the agent and an obstacle has occurred, as depicted in Figure <ref>(a). Hence, our procedure selects the previous state s_t-1 that led to the collision as reported in Figure <ref>(b) and considers an ω-bubble around this state to generate a retrain area as in Figure <ref>(c). For instance, assuming a value ω=0.05, i.e., a small value that encodes the surroundings of an unsafe situation, we obtain one interval for each input feature as X: {x_0=[0.95,1], x_1=[0.03, 0.08], x_2=(0, 0.05], x_3=[0.03, 0.08], x_4,x_5,x_6=[0.95,1]} In particular, we assume that all the states that can be sampled from X are undesirable (i.e., potentially risky). Therefore, when the initial state s_0 is sampled from this region X using , the agent has to learn how to satisfy the behavioral desiderata. Refinement Procedure. Once a retrain area has been created, checks whether it is possible to perform a refinement with an existing retrain area. To this end, our approach checks if the distance between each corresponding interval in one X and X' is less than or equal to β—a similarity threshold received as a parameter. Formally, let X = {x_0, x_1, …, x_n} and X' = {x_0', x_1', …, x_n'}, where each x_i and x_i' are intervals that encode possible value for each feature x_i (∀ i ∈{0, …, n}). We use Moore's interval algebra <cit.> and define the distance between two intervals as [x_i, x_i] and [x'_i, x'_i] as: d([x_i, x_i], [x'_i, x'_i]) = max(|x_i - x'_i|, |x_i - x'_i|). Hence, two sets of intervals X and X', i.e., two retrain areas, are similar if and only if: ∀ i ∈{0, …, n}, d(x_i, x_i') ≤β We report in Figure <ref> an explanatory example of the locomotion task HalfCheetah showing two similar and not similar unsafe situations. In detail, this environment allows to detect unsafe situations such as the violation of the velocity threshold of the Cheetah along the x axis using a red ball in the center of the image. For the sake of clarity, each explanatory example use two single unsafe states (one with the original color and the other with a fixed red or green color) instead of intervals. In the case that two sets of intervals X and X' are similar, combines these areas into a novel area into a new set of intervals X”. For each pair of corresponding intervals x_i ∈ X and x'_i ∈ X' for all i ∈{0, …, n}, the new interval x”∈ X” is computed as: x_i” = [min(x_i, x'_i), max(x_i, x'_i)] i.e., our method considers the minimum of the lower bounds and the maximum of the upper bounds of the corresponding intervals. § PROOF OF LEMMA <REF>. In order to show the monotonic improvement guarantees of , we want to bound the difference |ψ(π') - L_π(π')| where ψ(π') = (1-ε)[ψ(π) + 𝔼_τ∼π' s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]]+(ε)[ψ(π) + 𝔼_τ∼π' s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]] and L_π(π') = (1-ε)[ψ(π) + 𝔼_τ∼π s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]] + (ε)[ψ(π) + 𝔼_τ∼π s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]]. Hence, following the proof of <cit.> we can derive a similar bound with simple algebra as: |ψ(π') - L_π(π')| = (1-ε)[ψ(π) + 𝔼_τ∼π' s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]]+(ε)[ψ(π) + 𝔼_τ∼π' s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]] - (1-ε)[ψ(π) + 𝔼_τ∼π s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]] + (ε)[ψ(π) + 𝔼_τ∼π s_0 ∼ρ[∑_t=0^∞γ^t A(s_t) ]] = (1-ε) [∑_t=0^∞γ^t |𝔼_τ∼π' s_0 ∼ρA(s_t) - 𝔼_τ∼π s_0 ∼ρA(s_t)|]_≤4kα^2γ/(1-γ)^2by <cit.> + (ε) [∑_t=0^∞γ^t |𝔼_τ∼π' s_0 ∼ρA(s_t) - 𝔼_τ∼π s_0 ∼ρA(s_t)|] ≤ (1-ε)[4kα^2γ/(1-γ)^2] + (ε) [∑_t=0^∞γ^t |𝔼_τ∼π' s_0 ∼ρA(s_t) - 𝔼_τ∼π s_0 ∼ρA(s_t)|]. As stated in the main paper, from the fact that the α-coupling definition (reported in Def. <ref>) is expressed over all possible states s, without any assumption or restriction on the initial state distribution, we have that π and π' are still α-coupled even when we chose s_0 using a ρ as initial state distribution. Hence, |ψ(π') - L_π(π')| ≤ (1-ε)[4kα^2γ/(1-γ)^2] + (ε)[4k'α^2γ/(1-γ)^2] = 4α^2γ/(1-γ)^2[k + ε(k'-k)] = 4α^2 γ/(1-γ)^2[k(1-ε) + k'ε]. Next, note that since 0 ≤ε≤ 1, we can derive that k' ≤ k. In fact, we have: k(1-ε) + k'ε ≤ k -ε k + ϵ k' ≤ 0 k ≥ k'. Thus we conclude that 4α^2 γ/(1-γ)^2[k(1-ε) + k'ε] ≤4α^2 γ k/(1-γ)^2. The correctness of our result stems from the following facts: (i) α-coupled policies are coupled over the entire 𝒮, ensuring that policies are coupled even when combining ρ and ρ, and (ii) k is defined as the maximum over all (s, a) ∈ (𝒮⊇𝒮, 𝒜) and, as such, k ≥ k'. In more detail, Lemma <ref> shows that enforcing the agent to start from retraining areas can help in narrowing the gap between the ψ(π') and the local approximation L_π(π'). In fact, when k=k', the difference reduces to the original case of Lemma <ref>. In contrast, when k' < k, the trajectories induced by ρ are not optimal since agents are penalized for violating the preference. Thus, the agent gains a more precise understanding of the portion of the state space where the policy does not yield the maximum possible advantage. This motivates the design of that linearly scales down ε→ 0, avoiding getting stuck in suboptimal restart distributions. § ENVIRONMENTS We briefly describe the tasks and the desired behavioral specifications, referring to the original works for more details <cit.>. Hopper, HalfCheetah (Figures <ref>a, b). The robots have to learn how to run forward by exerting torques on the joints and observing the body parts' angles and velocities (for a total of 12 and 18 input features). The actions control the torques applied to the (3 and 6) joints of the robot. The agents are rewarded based on the distance between two consecutive time steps (i.e., positive and negative values for forward and backward movements). In these tasks, we have a velocity preference—agents should not go faster than 0.7402m/s and 3.2096m/s, respectively. These are the same limitations considered in the safe RL literature <cit.>, and the indicator cost deeming a velocity violation triggers when the limits are violated. The positive cost has a three-fold role: it (i) starts the generation of a retrain area as described in Section <ref>, using an initial size ϕ=0.01 given by our initial grid search; (ii) triggers a small reward penalty to encourage the unconstrained baselines to avoid such an undesired behavior; and (iii) gets accumulated to model the constraints of the Lagrangian implementations. As in relevant literature, we set the constraints threshold to 25. Active Network Management (Figure <ref>c). The agent has to reschedule the power generation of different renewable and fossil generators, to satisfy the energy demand of three loads connected to the power grid. Specifically, we observe the state of the power network through 18 features (i.e., active and reactive power injections, charge levels, and maximum productions). Moreover, we control power injections and curtailments using 6 continuous actions. The behavioral preference here models energy losses and a grid's operational and transmission constraints. For this reason, we want our agent to limit penalties associated with violating these constraints. This task is significantly more complex than the previous ones since the agent is negatively rewarded based on the energy losses, the generation cost, and the violation of the operational constraints—however, to successfully solve the task in this particular environment, the agent must receive penalties associated with energy losses, which are a natural consequence of power transmission. The indicator cost has the same role and consequences as the previous environments and is triggered when the agent violates the system's operational constraints. Based on the performance of the unconstrained baselines, we set the constraints threshold to 350. Due to the non-linear dynamics of power networks, a retrain area is generated using the exact violation state, with an initial bubble size ω=0. Navigation (Figure <ref>d). A mobile robot has to control its motor velocities to reach goals that randomly spawn in an obstacle-occluded environment without having a map. The agent observes the relative position of the goal and sparse lidar values sampled at a fixed angle (for a total of 22 features) and controls linear and angular velocity using 2 continuous actions. Intuitively, the behavioral preference here models a safe behavior since we want the robot to avoid collisions. Similarly to the previous environments, the agent is positively (or negatively) rewarded based on its distance from the goal in two consecutive timesteps. The indicator cost has the same consequences as the previous environments, and it is triggered upon every collision. We set the constraints threshold to 20, indicating the robots should not collide for more than 20 steps in a training episode that lasts for 500 steps. We consider the simulated lidar precision to initialize the bubble size ϕ=0.025. § FULL RESULTS EMPIRICAL EVALUATION Figure <ref> shows the Pareto frontier reporting on the y-axis the average reward and on the x-axis the average cost at convergence. in general allows to achieve the best trade-off between average reward and cost (no other methods reach better performance in the upper-left corner), i.e., less violation of the behavioral desiderata while still successfully solving the task. It is important to notice that only in the navigation scenario the cost value of TRPO at convergence is slightly lower than ε-TRPO one. However, as reported in the learning curves in the main papers, our approach results in significantly more sample efficiency than the baseline counterparts. In Figure <ref>, we report the training performance of original constrained policy optimization algorithms and the one enhanced with . In detail, our approach allows us to reduce the constraint violations and, in the more complex tasks, allow agents to improve performance. For the sake of clarity, we also report the Pareto frontier at convergence in Figure <ref>. § HYPER-PARAMETERS Regarding the baselines, we performed an initial grid search, but the original parameters of the omnisafe library resulted in the best performance <cit.>. Table <ref> lists the key hyper-parameters considered in our initial grid search for TRPO, PPO, their Lagrangian versions, and . The best parameters used in our evaluation are highlighted in the last column. § ENVIRONMENTAL IMPACT Despite each individual training run being “relatively” computationally inexpensive due to the use of CPUs, the ≈1600 experiments of our evaluation led to cumulative environmental impacts due to computations that run on computer clusters for an extended time. Our experiments were conducted using a private infrastructure with a carbon efficiency of ≈ 0.275 kgCO_2eq/kWh, requiring a cumulative ≈240 hours of computation. Total emissions are estimated to be ≈6.93 kgCO_2eq using the https://mlco2.github.io/impact#computeMachine Learning Impact calculator, and we purchased offsets for this amount through https://www.treedom.netTreedom.
http://arxiv.org/abs/2406.09343v1
20240613172512
Frameworks, Modeling and Simulations of Misinformation and Disinformation: A Systematic Literature Review
[ "Alejandro Buitrago López", "Javier Pastor-Galindo", "José A. Ruipérez-Valiente" ]
cs.SI
[ "cs.SI" ]
alejandro.buitragol@um.es 0009-0002-1606-8766 Department of Information and Communications Engineering, University of Murcia Murcia Spain javierpg@um.es 0000-0003-4827-6682 Department of Information and Communications Engineering, University of Murcia Murcia Spain jruiperez@um.es 0000-0002-2304-6365 Department of Information and Communications Engineering, University of Murcia Murcia Spain § ABSTRACT The prevalence of misinformation and disinformation poses a significant challenge in today's digital landscape. That is why several methods and tools are proposed to analyze and understand these phenomena from a scientific perspective. To assess how the mis/disinformation is being conceptualized and evaluated in the literature, this paper surveys the existing frameworks, models and simulations of mis/disinformation dynamics by performing a systematic literature review up to 2023. After applying the PRISMA methodology, 57 research papers are inspected to determine (1) the terminology and definitions of mis/disinformation, (2) the methods used to represent mis/disinformation, (3) the primary purpose beyond modeling and simulating mis/disinformation, (4) the context where the mis/disinformation is studied, and (5) the validation of the proposed methods for understanding mis/disinformation. The main findings reveal a consistent essence definition of misinformation and disinformation across studies, with intent as the key distinguishing factor. Research predominantly uses social frameworks, epidemiological models, and belief updating simulations. These studies aim to estimate the effectiveness of mis/disinformation, primarily in health and politics. The preferred validation strategy is to compare methods with real-world data and statistics. Finally, this paper identifies current trends and open challenges in the mis/disinformation research field, providing recommendations for future work agenda. <ccs2012> <concept> <concept_id>10002978.10003029</concept_id> <concept_desc>Security and privacy Human and societal aspects of security and privacy</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> [300]Security and privacy Human and societal aspects of security and privacy Frameworks, Modeling and Simulations of Misinformation and Disinformation: A Systematic Literature Review José A. Ruipérez-Valiente June 17, 2024 ========================================================================================================= § INTRODUCTION Individuals navigate in a world inundated with an unprecedented volume of information. The rise of the internet has fundamentally changed how citizens become informed, how they discuss and how they form their opinions <cit.>. Online Social Networks (OSNs) play a pivotal role in this new information ecosystem, allowing individuals to disseminate information globally easily. However, the democratization of information comes with inherent challenges, as the content shared on these platforms may not always be accurate, giving rise to misinformation and disinformation <cit.>. Particularly, the emergence of an interconnected digital environment, where data moves without restraint among various platforms, has brought to light a pressing concern: the exploitation of information to deceive, mislead or shape public opinion. Misinformation and disinformation (mis/disinformation) existed before OSNs, but the diffusion of false information is a phenomenon that has gained significant notoriety in the digital era, primarily due to the facilities for creating and disseminating content offered by OSNs, being a major threat to democracy, information-seeking, open debate, and a free and modern society <cit.>. The propagation of false information in OSNs affects public perception of important events<cit.>, undermining trust in institutions and affecting political decision-making <cit.>. Another example may be that of fake news in the healthcare setting. This can have serious consequences, leading people to make decisions that put their health or even their lives at risk, may delay appropriate treatment, and worsen the prognosis of serious diseases <cit.>. The escalation in the popularity of disinformation materialized in 2016, when the term “post-truth” was coined as the “word of the year” by Oxford Languages, reflecting when emotions and personal beliefs replace objective facts <cit.>, emphasizing the multidisciplinary nature of the phenomenon. From a social aspect and analyzing people's behavior in OSNs, previous works have found a cognitive phenomenon called confirmation bias, in which people tend to interpret, remember or favor information that confirms their pre-existing beliefs or hypotheses <cit.>. Since individuals more inclined to believe such information are similarly more likely to share it, this fact is related to another previous finding which analyzed that negative messages spread faster than positive ones <cit.>. This phenomenon results in echo chambers emerging in OSNs, where individuals are exposed to information that aligns with their beliefs, reinforcing preconceptions and limiting exposure to alternative perspectives <cit.>. This context has been recently exploited through a great wave of disinformation attacks that anticipated and accompanied events such as the Russia-Ukraine war <cit.>, the Israel-Palestine conflict <cit.> or the COVID-19 pandemic <cit.>. Gaining attention from the research community, governments and the public at large, the study and modeling of mis/disinformation leads to the utilization of frameworks, models and simulations to comprehend and conceptualize this phenomenon. Frameworks provide high-level structured and organized schemes to structure processes, concepts or dimensions of mis/disinformation <cit.>. Among the different approaches, technical ones are usually oriented to support the detection and combat disinformation <cit.>, recently including AI components <cit.>. Models are formal proposals that explain processes or activities of mis/disinformation dynamics, such as utilizing epidemic models <cit.>. Finally, simulations are computational executions that understand and evaluate the operation of certain mis/disinformation processes, usually designed under certain frameworks or models. Thanks to advances in artificial intelligence (AI), they do not necessarily have to be based on real data or people <cit.>. Despite the existing reviews in the field of misinformation and disinformation, we have not found any specific frameworks, modeling or simulations. For example, the authors in <cit.> and in <cit.> review fake news detection models contributed by various machine learning and deep learning algorithms categorized and described in different datasets. Moreover, in <cit.>, a comprehensive analysis is provided of the various techniques employed for rumor and fake news detection and not only considers AI techniques. On the other hand, Kapantai et al. <cit.> provided a systematic review of 10 experimental studies to create a unified taxonomic framework for understanding and categorizing different types of disinformation. Finally, Gupta et al. <cit.> reviewed 92 research articles to analyze existing solutions in the fight against disinformation and propose a potential solution to combat fake news through a news verification service. The surveys mentioned above focus on detection methodologies, particularly using AI, and move around the concept of fake news, a very limited vision of disinformation. Given the rapid proliferation of misleading information in various domains, understanding and researching these phenomena has gained significance. In this sense, our goal is to unveil how mis/disinformation is conceptualized, modeled and simulated in the literature in studies that aim to understand, research and assess mis/disinformation dynamics and processes. This situation calls for a comprehensive review of the current state of the art on frameworks, models and simulations around misinformation/disinformation. The current paper aims to conduct the first systematic literature review in these respects and answer the following research questions: * What are key commonalities and differences in defining misinformation and disinformation? * How has the mis/disinformation phenomenon been analyzed, modeled and simulated in the literature? * What purposes motivate the existing frameworks, models and simulations around mis/disinformation? * Which contexts are addressed by the existing frameworks, models and simulations around mis/disinformation? * Which validation is performed in the existing frameworks, models and simulations around mis/disinformation? The rest of the paper is organized as follows. Section <ref> describes the methodology followed in the systematic review, including some terminology clarifications, the research questions, databases and search terms, research selection and review process. Section <ref> presents the analysis and synthesis of the survey results. Section <ref> includes an analysis of our findings by extracting the current trends and open challenges. Finally, Section <ref> incorporates the conclusions of this work. § METHODOLOGY A standard systematic literature review methodology was followed, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) <cit.> as a basis for conducting our study. Figure <ref> shows the PRISMA diagram representing the different stages of our systematic review (inspired by the original proposal <cit.>) and is explained below. * Definition of key concepts and formulation of each RQ. * Application of search queries to pre-identified bibliographical databases. * Definition and application of a set of inclusion and exclusion criteria. * Conducted a comprehensive paper review and coding process for the research questions, followed by synthesis and analysis. §.§ Definition of key concepts and research questions This section presents a set of definitions to clarify the concepts of this systematic literature review. Figure <ref> illustrates this study's key concepts and RQs. Initially, the concepts of misinformation and disinformation were explored within the mis/disinformation phenomena. There is not an established consensus on the definitions of both terms. Following the guidelines of the European Commission <cit.>, in the present study, disinformation is defined as “false or misleading content spread to deceive or secure economic or political gain, which may cause public harm”. In contrast, misinformation is “false or misleading content shared without harmful intent, the effects can still be harmful”. Secondly, these two phenomena are analyzed through different means: frameworks as conceptual abstractions that can be social or computational for studying or analyzing mis/disinformation phenomena, models as logical or mathematical representations of any mis/disinformation dimension, and simulations as experiments that emulate mis/disinformation dynamics. Specifically, five RQs were formulated to guide the investigation. The first RQ searched to clarify the definitions of misinformation and disinformation, given the absence of an official definition. This takes on great importance in the scientific field, where it is necessary to be very conscientious with the terms used. Subsequently, the study analyzed the methods used to represent, model, and simulate the mis/disinformation phenomenon. The objectives and purposes behind these approaches were also examined. Furthermore, the study investigated the contextual factors influencing these frameworks, models, and simulations. Finally, the validation methods employed were explored. §.§ Identification of research works The article collection was done on February 21, 2024, in two bibliographic databases: Scopus and the Web of Science (WoS), since they are the most widely used databases in different scientific fields <cit.>. Scopus is the world's largest citation database of peer-reviewed research literature, with 27,950 active peer-reviewed journals <cit.>. Moreover, WOS is the second biggest bibliographic database, with almost 2.2 billion cited references from over 196 million records <cit.>. In order to perform the search on both databases, we restricted the query to title due to the high number of records. We used two pairs of terms to perform the query, one encompassing disinformation and misinformation. Therefore, papers were searched by title through the keywords “disinformation” or “misinformation”, and “framework” or “simulat*” or “model*” like the following: This query generated 303 initial studies (175 from Scopus and 128 from WoS). §.§ Screening of articles After obtaining the initial collection of 303 studies, 115 duplicated studies were identified and removed. With the collection of 188 studies, we formulated the following mandatory criteria to include a paper in the survey. The three authors conducted the screening process in consensus, applying the exclusion criteria sequentially so that a paper not matching a condition was removed. §.§.§ Exclusion by title and abstract * The paper is not written in English or Spanish: three studies (1.6% of the collection with 188 studies) were excluded. * The paper is not published in conference proceedings, journals, or edited books/volumes (i.e., book chapters) to be peer-reviewed: four studies (2.13%) were excluded. * The paper does not contain a conceptual proposal related to mis/disinformation. Studies which not propose a framework, model, or simulation were excluded: 56 studies (29.8%) of the papers were excluded. * The paper does not address mis/disinformation in the context of digital media, including online platforms, social networks, the internet, or other digital media. Therefore, studies that addressed the phenomenon in the context of human memory and cognitive or clinical psychology were excluded: seven studies (3.72%) of the papers were excluded. The first screening excluded 72 works, with the remaining 116 studies to be evaluated next. §.§.§ Exclusion by full text * The full text is open access, accessible through subscription, or sent by the authors upon request: two studies could not be obtained (1.72% of the remaining 116 studies). * The paper contains a conceptual proposal related to mis/disinformation: 54 studies (46.55%) were excluded. One challenge we faced is that the new wave of AI produced a large volume of works in which the model is not inspected <cit.>. Of the 54 studies excluded for not offering a conceptual proposal, 34 studies (62.96% of the 54 papers) were presenting only applied ML. * The paper addresses mis/disinformation in the context of digital media, including online platforms, social networks, the internet, or other digital media: 3 studies (2.59%) were excluded. The second screening excluded 59 works, with the remaining 57 studies to include for review. The `BibTex' of the final collection with these studies is available online[<https://osf.io/58r2w>]. §.§ Inclusion of papers for review The keywords from the resulting 57 selected papers for the systematic review were collected.. The total sum is 211, while there are 149 unique keywords. The most frequent ones are presented in Figure <ref>, having a high variability and strongly focused on misinformation (50%), social media (23.81%), and disinformation (23.81%). Alternatively, the research topic covers other lines, such as agent-based modeling, fact-checking, fake news, models, social networks, and COVID-19. On the other hand, Figure <ref> shows the distribution of the 57 papers by publication year. Only five studies are found from 2007 to 2017 matching the review criteria. We see an increasing interest in this topic from 2018 onwards, having important peaks in 2020 (8 works) and 2022 (11 works). Finally, the increase in 2023 (20 works) aligns with the growth trend observed in recent years. §.§ Review and coding process The 57 articles were reviewed in detail during this phase, and relevant data were extracted from each study. Subsequently, a structured coding system was used to organize and categorize the information collected uniformly to answer the research questions in Section <ref>. It is worth noting that RQ1 is not coded due to its nature. In particular, an inductive coding scheme (also called open coding) was used for this coding process for RQ3, RQ4, and RQ5. This means that the codes created were based on the qualitative data itself <cit.>. This is an iterative process since researchers can add new codes, split an existing code into two, or compress two existing codes into one as they review data. Only a deductive coding scheme was used for RQ2, starting from three selected codes (Model, Framework, and Simulation). Within each of these codes, further sub-codes were derived inductively. For all the RQs, the assigned codes are non-exclusive, i.e., one article can have several codes for one RQ. As a result, Table <ref> outlines the variable coding scheme we followed in this survey, indicating each RQ with its possible codes representing qualitative data. All data generated from the screening and coding processes were in an Excel file, which is available online [<https://osf.io/e62px/>]. § RESULTS §.§ What are key commonalities and differences in defining misinformation and disinformation? (RQ1) In the analyzed studies, particularly in the literature from Anglo-Saxon countries, there is a plethora of terms and concepts that are used to refer to false, untrue, or half-true information, such as “fake news” <cit.>, “misinformation” <cit.>, or “disinformation” <cit.>. Sometimes, these terms are used interchangeably <cit.>, which can be low discriminant. This RQ aims to uncover the key commonalities and differences that underlie its definitions, as well as disclose the number of papers that use the term `disinformation,' `misinformation,' or both terms and of the latter how many do so by giving definitions to discriminate between them making different use of each. Analyzing our collection of papers, the 57 papers, 37 employed the term misinformation, 17 used disinformation, and 3 employed both. However, 36 papers provide an explicit definition (17 articles explain the concept of misinformation, 11 studies define disinformation, and 8 works do it for both). Regarding the definition of disinformation. By way of example, in <cit.>, disinformation has been defined as false information designed to mislead. Moreover, in <cit.>, the phenomenon of disinformation is defined as a virus dangerous to the social organism capable of exponentially spreading in the information space in a short time, parasitizing primarily on the painful problems of the world as a whole, individual regions, and countries. Finally, D. Brody <cit.> defined disinformation as modifying the information process. In the case of misinformation, in <cit.> it is defined as false or misleading information that promotes claims already debunked by scientific evidence or expert opinions. Furthermore, in <cit.>, misinformation is used as an umbrella term to cover related concepts, including fake news, disinformation, false information, rumors, conspiracy theories, malignant information, inaccurate information, and more. Finally, in <cit.>, misinformation is defined as false claims that are mostly spread unintentionally. While, Safieddine et al. <cit.> defined misinformation as a piece of malicious information intended to cause undesirable effects on the general public, such as panic and misunderstanding, or to supplant valuable information and provide more studies that support that affirmation that is to say, i.e., it assigns the concept misinformation intentionality when spreading false information as in <cit.>. The relationship between both terms, disinformation and misinformation, is the consequences of both can be the same. For example, the authors in <cit.> “assume disinformation to be any deviant information that is intended to distort and mislead a target audience in a predetermined way”, argue as a key difference that disinformation is not only about the message itself but as a practice it has the potential to discredit the messenger and true information due to its close relationship with multiple social sectors, especially politics. Moreover, some papers defined misinformation and disinformation as terms that refer to information that does not directly reflect the `true' state of the world (e.g., distorted information or falsehoods). It does not differentiate between intentionality as in <cit.>. On the other hand, 26 articles argue that the difference between disinformation and misinformation lies in the intentionality with which the information is disseminated. In conclusion, this analysis reveals a consensus on the definitions of disinformation and misinformation. Misinformation refers to unintentionally false or inaccurate information, while disinformation entails deliberately spreading false or misleading information with the intent to deceive or manipulate. Figure <ref> shows that 63.2% of the papers provide a definition and 72.22% of these papers highlight intentionality as a key concept in their definition. On the other hand, 27.78% of these papers highlight any other aspect as a key concept in their definition, such as a virus, malicious information, or information that does not directly reflect the `true' or a modification of the information process. §.§ How has the mis/disinformation phenomenon been analyzed, modeled and simulated in the literature? (RQ2) In RQ2, our objective was to explore the existing literature on the representation of mis/disinformation. We coded the papers into three categories: frameworks, models, and simulations, which will be divided into several subcategories: * Frameworks: Technical (9 studies, 52.94%), Social (6 studies, 35.29%), Technical and Social (2 studies, 11.77%). * Models: Epidemiology (18 studies, 41.86%), Opinion dynamics diffusion (7 studies, 16.28%), Game-theoretic (6 studies, 13.95%), Information diffusion (5 studies, 11.62%), Causal (2 studies, 4.65%), Bayesian (2 studies, 4.65%), Linguistic Pattern Recognition (1 study, 2.33%), Ontologies (1 study, 2.33%), Social learning (1 study, 2.33%). * Simulations: Belief updating (20 studies, 50%), Countermeasures evaluation (10 studies, 25%), Mis/Disinformation diffusion (9 studies, 22.5%), Offensive evaluation (1 study, 2.5%) §.§.§ Types of frameworks Frameworks serve as organizational structures and schemes of work that facilitate a holistic understanding of the mis/disinformation phenomena (such as actors, methods, means, consequences, types, etc). Our analysis reveals that there are three main types of frameworks: * Technical: A technical framework in the context of mis/disinformation refers to a structured approach that focuses on utilizing computational and engineering methodologies to understand, analyze, and address the spread and impact of mis/disinformation. The primary objective of such frameworks is to develop tools, algorithms, and systems that can effectively analyze, detect, and combat mis/disinformation in digital information environments. For example, the European project CALYPSO <cit.> proposed the design of a framework that seeks to involve the audience in the early detection of mis/disinformation through a digital platform. The framework is based on gamification techniques to motivate users to participate in detecting and verifying mis/disinformation and in creating communities of practice involving different actors with different levels of responsibilities and privileges, such as readers, journalists, and experts. Moreover, in <cit.>, the authors proposed a technical framework that materialized in a modular implementation in Python that combines AI models and data visualization techniques to provide a practical resolution to emergency responders to help support their decision-making when working on an infodemic crisis. Furthermore, in <cit.>, the authors proposed a framework to provide real-time, comprehensive, and explainable detection measures for combating mis/disinformation in the era of Generative AI Models (GenAI). The framework operates on four levels: signal, perceptual, semantic, and human, aiming to detect manipulation traces in AI-generated content and address technical flaws, misleading content, and propagation processes contributing to the spread of mis/disinformation. * Social: These frameworks operate on the premise that combating mis/disinformation requires more than just technical solutions; it necessitates a comprehensive understanding of societal dynamics, human behavior, and the intricate networks through which information propagates. Social frameworks seek to manage mis/disinformation from a broader perspective, recognizing that the challenge extends beyond technological facets. By way of example, in <cit.>, Karlova and Fisher proposed a social diffusion framework for illustrating how information, misinformation, and disinformation are formed, disseminated, judged, and used in terms of key elements, beginning with milieux. Moreover, the authors in <cit.> proposed a framework called “the disinformation and misinformation triangle” to show the three determinants of disinformation and information proliferation in digital news: the pathogen, host, and environment. Furthermore, in <cit.>, the authors proposed a framework called “Misinformation Recognition and Response Model (MRRM),” which explores how individuals recognize and respond to disinformation. It emphasizes individual factors, situational factors, motivational goals, and message characteristics that influence information processing. The framework also highlights the importance of prebunking, source credibility, message format, and cognitive coping strategies in dealing with disinformation. Finally, in <cit.>, the authors proposed a framework that focuses on understanding the effects of AI on disinformation attacks and counterattacks and how organizations can proactively manage and respond to disinformation. It emphasizes the importance of a systematic and coordinated approach to counter disinformation efforts. The framework includes strategies such as detection and monitoring, corrective communication, flagging and removing content, and promoting media literacy. Additionally, the framework differentiates between malicious and non-malicious human and AI actors in digital environments. * Technical and social: These hybrid frameworks bridge the gap between human-centered thinking and innovative technical solutions, recognizing that an effective response to disinformation requires a multidisciplinary strategy. For example, the framework DISARM <cit.> is the open-source, master framework for fighting disinformation through sharing data and analysis and coordinating effective action. The framework has been developed, drawing on global cybersecurity best practices. But it is not only technical, it is used to help communicators, from whichever discipline or sector, to gain a clear shared understanding of disinformation incidents and to immediately identify defensive and mitigation actions that are available to them. Moreover, the disinformation threat framework proposed in 2023 by <cit.> characterizes the disinformation threat across domains by mapping out potential threat actors, their motives and capabilities, their observed patterns of attack, the attack channels they use, and the audiences they target, considering the attacker side, their tactics and approaches. In our analysis, there are 17 studies (26.32%) that proposed a framework. The social framework is the most used (9 studies, 52.94%), followed by technical frameworks (6 studies, 35.29%). At the same time, only two studies (11.77%) used both approaches. §.§.§ Types of models To understand the phenomenon of disinformation, having structures to represent its different elements is a key aspect. However, these models can be conceptual and supported by robust mathematical models to enhance their validity and analytical capabilities. From our analysis, ten primary categories are found: * Epidemiology: As infectious diseases spread in populations, disinformation can spread rapidly, affecting public opinion, attitudes, and, ultimately, people's behavior. The simile between an epidemic and disinformation is that exposure to false information can influence the adoption of erroneous beliefs, generating a kind of social “contagion.” Changes in people's susceptibility to disinformation and their susceptibility to corrective actions can be modeled like that of acquired immunity in epidemiology, adapting to the disinformation phenomenon, which an individual can adopt different roles. For example, the authors in <cit.> proposed a novel epidemiology model called “SEHIR” with an additional compartment of people not reacting instantly or at all to the disinformation called hibernators, to obtain more clarity on the pattern of spread of fake news. Moreover, in <cit.>, the authors proposed a “fuzzy” version from the epidemiology model SI (Susceptible - Infected), incorporating fuzzy parameters and factors such as user influence, the likelihood of sharing disinformation and the strength of association between different users in the social network to more accurately describe the spread of disinformation in the social network. Other epidemiology models such as SIR (Susceptible-Infected-Recovered) were used, for instance, in <cit.> where authors employed the SIR model as a basis, to understand how disinformation spreads in an epidemic-like manner, with misinformed individuals seeking to affect a susceptible population by transmitting messages with false information including specific elements of the disinformation diplomacy strategy, such as organic and paid reach, bots and trolls, providing a clearer picture of how the elements of disinformation interact to affect a susceptible population. As a last example, the authors in <cit.> used an existing epidemiology model called “SBFC,” which has three states (Susceptible - Believer - Fact Checker), which allows them to study how fake news spreads and is countered in a population of agents, who can decide whether to believe or check the veracity of the information they receive. * Information diffusion: Information diffusion models provide conceptual and mathematical tools for understanding how ideas, information, and narratives spread through social networks and society, influencing public opinion and individual and collective decision-making, representing the flow of information between individuals or nodes in a network, and considering factors such as the speed of information transmission, or the susceptibility of individuals to accept or reject false information. These models help to comprehend the mechanisms underlying the spread of disinformation, identify key factors that contribute to its proliferation, and develop targeted interventions or strategies to curb its impact and promote accurate information dissemination in society. By way of example, in <cit.>, the authors proposed a new information diffusion model composed of two models. One of them is responsible for predicting the behavior of individual users on the social network based on a Multivariate Hawkes process (MHP), which models the occurrence of temporal or spatiotemporal asynchronous events by capturing their mutual dependencies. The other introduces additional data to the network to alter the diffusion's future outcomes. On the other hand, in <cit.>, the authors used the existing multicampaign independent cascade model (MCICM) introduced in 2011. The MCICM simulates the spread of true and false information through a network's positive and negative seed nodes. It is used to study how false information spreads and how to counteract its impact, being able to identify key nodes to disseminate correct information and combat disinformation. Finally, the authors in <cit.> adapted the drift-diffusion model (DDM) to model the decision-making process. The model assumes a decision-making process in which an initial bias in favor of one discrete option over another changes gradually until a decision boundary is reached, i.e., a critical point in the model where a discrete decision is made based on the accumulation of evidence over time. This model helps illuminate why precision prompts (signals or information provided to individuals to help them discern the integrity of the information they are receiving) can decrease the spread of disinformation by examining how people think more or differently when given these prompts. * Opinion dynamics: These models draw on social network theory and social psychology to explore how opinions are transmitted and modified over time, how social interactions can shape individual beliefs and attitudes, and how these dynamics affect the collective perception of a topic. These models consider factors such as social influence, confirmation bias, cognitive biases, and information exposure to understand the evolution of opinions toward disinformation. In the context of disinformation, where false information can spread rapidly, understanding how opinions spread and persist is a research topic. For example, in <cit.>, the authors proposed a new opinion dynamics model to discuss how a disinformation campaign can change people's minds. Modeling the interaction between conspirators who spread false information and inoculators who attempt to protect the population susceptible to being influenced by such disinformation. Something similar is done in <cit.>, where the authors analyze with a novel opinion dynamics model the impact of disinformation on the results of a political election, introducing disinformation as an additional term in the information process. * Game-theoretic: Game theory is a multidisciplinary field that originated in economics and mathematics. It focuses on studying rational agents' strategic decisions to maximize their goals in an interactive environment. This modeling is conducted by formulating a game or strategic interaction between agents within a system or network. In the case of disinformation, game theoretic models allow us to examine how different actors make decisions informed by incentives and consequences. For instance, in <cit.>, the authors proposed a game theoretic model consisting of a repeated series of adapted impartial games from the original Nim game, played by a reinforcement learning agent (Q-learning), to learn the `truth' modeled by the optimal Nim strategy. Another example is the approach in <cit.>, where the authors develop a novel tripartite evolutionary game consisting of three players, including network media, government, and netizens, and analyze the mutual influences of the player. * Causal: Causal models represent causal relationships within an individual system or population. They facilitate inferences about causal relationships from statistical data. This modeling analyzes the cause-and-effect relationships between several variables. In the context of disinformation can help to understand the cause-and-effect relationships underlying the generation, propagation, and counteraction of disinformation in diverse information environments. For example, in <cit.>, the authors proposed a causal model adapted from the epidemiology model SIR (Susceptible - Infected - Recovered) to model and simulate scenarios of disinformation propagation in social networks caused by bots. Moreover, the authors in <cit.> examined how cognitive, affective, and environmental factors interplay to affect the acquisition and diffusion of food safety disinformation by analyzing a national sample of Chinese Internet users using surveys and proposed a novel causal model called “emotion-driven cognitive dissonance model” to explain how disinformation about food safety is diffused both online and offline and provide recommendations to help battle it. * Bayesian: Bayesian inference techniques refer to statistical methods and algorithms that update beliefs or make predictions based on Bayesian principles. These techniques are pivotal in Bayesian models, which allow the modeling of the spread of disinformation from a probabilistic perspective, considering the uncertainty associated with how individuals perceive and interpret information. These models allow for prior knowledge, such as on the reliability of sources or the susceptibility of certain groups to disinformation. For instance, in <cit.>, Cohen et al. proposed a Bayesian multi-agent model for social networks. This model is based on a partially observable Markov decision process (POMDP). This mathematical model of sequential decision-making where complete information about the system's current state is unavailable. The model proposed in this study considers three main factors: the similarity of the user with the evaluator, the credibility of the evaluator, and the actual ratings provided. Moreover, in <cit.> Zmigrod et al. built a Bayesian model of cognition and conceptualizes political disinformation receptivity as a cognitive inference problem where the reliability of incoming disinformation is weighed against the reliability of prior beliefs. By modeling the integration of prior beliefs and new information, the model aims to explain when individuals adopt or reject political disinformation and how they creatively generate interpretations rather than passively discern truth versus falsehood. * Linguistic Pattern Recognition: A Linguistic Pattern Recognition model is a computational approach that identifies and analyzes linguistic patterns in textual data. These models focus on recognizing specific linguistic and syntactic features that may indicate the presence of inaccurate or misleading information. The purpose of using these models in the context of disinformation is to automate the detection process, enhance accuracy in identifying deceptive content, and provide insights into the linguistic features associated with disinformation. For example, in <cit.>, a new model is proposed to use non-parametric statistical methods to identify linguistic patterns in text data and compile “hits” and scores for article content that is directly related to the indicators of multiple computational social science models. * Ontologies: In computer science and communication science, an ontology is a formal definition of types, properties, and relationships between entities that actually or fundamentally exist for a particular domain of discourse. It is a practical application of philosophical ontology with a taxonomy. In disinformation, ontologies can be designed to represent the structure of information and the relationships between entities of interest. This modeling approach can help to categorize different types of deceptive content, identify patterns of disinformation propagation, and analyze the impact of disinformation on various stakeholders. Moreover, ontologies facilitate the integration of diverse data sources and developing automated systems for disinformation detection, mitigation, and response. Motivated by the importance of ontology support in information sharing and integration, in <cit.>, the authors made the first effort toward building an ontology-supported disinformation model. They employed ontology, in general, and W3C standards[<https://www.w3.org/TR/owl-features>], in particular, to represent disinformation. This model enhances disinformation detection while laying the theoretical foundation for building a digital disinformation library that can foster disinformation sharing in the research community. This model promotes interdisciplinary collaboration by linking information and social science theories, artificial intelligence, systems analysis, and design. * Social learning: Social learning modeling is constructing computational models of how individuals acquire knowledge, beliefs, and behaviors through social interactions within a community or society, considering social factors such as peer influence, homophily, and network structure. These models provide a broader, empirically grounded, and analytically tractable framework for understanding information aggregation. However, their main drawback is that they tend to create a long-term population consensus and do not account for opinion heterogeneity or polarization. Polarization can be generated by introducing `stubborn' agents who remain fully committed to their original views rather than interacting and learning from their neighbors, a mechanism reminiscent of confirmation bias. To capture the effect of large-scale confirmation bias on social learning, the authors in <cit.> proposed a novel social learning model where most participants in a network update their beliefs unbiasedly based on new information. In contrast, some participants reject information incongruent with their preexisting beliefs. In total, 43 studies are identified that proposed a model. Only one study was identified that made use of two types of the above-described models at the same time. In <cit.>, the authors proposed a game-theoretic opinion model for the subjective opinions and behavioral strategies of attackers, users, and defenders. Further, they investigated which opinion model(s) can better help combat disinformation (i.e., not believing disinformation). Moreover, we distinguish when a proposal is completely new and novel (New proposal), when a proposal already existing in the literature is reused (Existing proposal), and when an existing proposal is adapted (Adapted proposal). In Figure <ref>, we can see that epidemiology models are the most used in the literature, simulating its propagation similar to the spread of diseases in a population (18 studies, 41.86% of studies that proposed a model). Most of these studies are novel proposals incorporating more or different states an agent can transit. The opinion dynamics models are the second most used (7 studies, 16.28%). In this case, except one, all studies proposed novel models for representing and analyzing disinformation. Following are the game-theoretic models (6 studies, 13.95%) and information diffusion models (5 studies, 11.62%). The rest of the models are not extensively used, with two types of models used in two studies and three types of models in only one study. §.§.§ Types of simulations Simulations offer a dynamic perspective on spreading mis/disinformation in controlled environments and provide a valuable tool for evaluating mis/disinformation models. From the coding process, we present four categories: * Belief updating: Belief updating simulation is a type of simulation employed to examine the dynamics of belief alteration among individuals following exposure to mis/disinformation. This type of simulation is used to study how people change their beliefs when exposed to mis/disinformation. These simulations aim to determine how misleading information affects people's beliefs. This type of simulation can be used to study various factors, such as the credibility of the information source, the presence of cognitive biases, and the participants' prior knowledge level. For example, in <cit.>, the authors simulated their stochastic epidemic model to analyze the transition between states (susceptible, believer, and fact-checker) when mis/disinformation and its relative debunking were spread in a social network. On the other hand, in <cit.>, the authors employed “NetLogo”, one of the most relevant simulation platforms, to simulate the diffusion of a piece of mis/disinformation. In particular, they used the “SBFC” model that describes the spread of misinformation as a competition between a fake news story and its debunking in a population of agents. Agents can switch between being Susceptible, Believers, or Fact-checkers, reflecting how individuals' beliefs evolve depending on the information they receive from their neighbors and their environment. * Countermeasures evaluation: Countermeasures evaluation simulations aim to analyze how corrective or preventive measures can affect the spread and perception of mis/disinformation in specific information environments, exploring hypothetical scenarios and assessing the impact of various countermeasures. This makes it possible to identify more effective strategies to combat mis/disinformation and improve society's resilience in the face of this phenomenon. For instance, the authors in <cit.> developed an agent-based model in Python to simulate and investigate the diffusion of prebunking interventions through three different stereotypical mis/disinformation campaign attack scenarios from a macro-level perspective and how prebunking intervention can help in the combat with mis/disinformation. On the other hand, in <cit.>, the authors used the third-party software called “Biolayout” as a three-dimensional modeling tool and a platform for simulating different scenarios. They propose the implementation of a “right-click authenticate” button as a countermeasure to combat the spread of mis/disinformation online. This button would allow users to right-click on a news item, image or video to perform real-time verification of its origin, original metadata, observations cited by editors, and crowd-sourced feedback. Moreover, the authors in <cit.> simulated their model to examine and optimize strategies for combating mis/disinformation in social networks, such as content moderation, education, and counter-campaigns. Concluding that the most promising strategies for combating mis/disinformation are education-based policies that increase skepticism and counter-campaign policies. Furthermore, the authors in <cit.> simulated mis/disinformation using LLMs. Specifically, they used GPT-3.5 as the generator. Countermeasures against mis/disinformation have been simulated through three defense strategies: prompting, detection, and majority voting. These strategies were evaluated to mitigate the harm caused by LLM-generated mis/disinformation in Open-Domain question-answering (ODQA) systems. * Mis/Disinformation diffusion: Mis/Disinformation diffusion simulations replicate and analyze the spread of false or misleading information throughout interconnected networks, such as social networks, information networks, or other graph structures. These simulations consider user interactions, content-sharing mechanisms, network topology, and information propagation pathways. The purpose of conducting these simulations is to gain insights into the mechanisms and patterns of mis/disinformation spread, understand the dynamics of information flow within networks, identify influential nodes or factors that accelerate or hinder dissemination and develop strategies to mitigate the harmful effects of mis/disinformation. For example, the authors in <cit.> used the third-party software called “Stella Architect” to simulate their causal model, using system dynamics as the main technique, in scenarios of mis/disinformation propagation in social networks caused by bots creating and sharing content, while modifying parameters, such as the rate of activation and deactivation of bots, to assess their impact on the spread of mis/disinformation on social networks caused by bots. In addition, in <cit.>, the authors simulated their TRAQ-M (Tracking Analysis, Quantification-Mitigation) framework, which is a platform with a computational system that applies Social Science Models and non-parametric statistical methods to understand complex human behavior patterns through language using Language Pattern Recognition models. These simulations focus on extracting linguistic patterns to compile reliable information and detect possible sources of misinformation in the open press. Furthermore, in <cit.>, the authors simulated their proposed new model called Susceptible, Exposed, Negative-Infected, Positive-Infected, Recovered Population (SENPR) to describe the dynamics of mis/disinformation in OSNs, analyzing the factors influencing its spread, and suggesting management strategies to suppress false information in emergencies. The model was simulated using the third-party software “AnyLogic” to build the multi-agent simulation model of the SENPR model and construct the network propagation model. * Offensive evaluation: As simulations can serve as a valuable tool to evaluate the impact of countermeasures, they can likewise serve to evaluate attacks of mis/disinformation that can consist of introducing fake or misleading content in a simulated information environment. Offensive evaluations are designed to assess the effectiveness and impact of these offensive strategies, such as coordinated campaigns or targeted messaging strategies. Simulating offensive tactics makes it possible to evaluate the potential risks posed by such strategies, identify vulnerabilities in simulated information dissemination systems, and develop countermeasures to mitigate the harmful effects of offensive mis/disinformation campaigns. Only one study was identified in this category, in <cit.> Beskow et al. employed bots to simulate distinct mis/disinformation attacks on Twitter. In the “Backing” attack, bots are programmed to bolster key influencers within a network, amplifying their messages and expanding their reach. Alternatively, in the “Bridging” attack, bots are configured to support various communities, facilitating connections and the dissemination of information across these diverse groups. As a result, 40 studies that proposed simulations are identified. Of these studies, three were identified that made use of two types of the above-described simulation types at the same time. By way of example, in <cit.>, the dissemination of mis/disinformation and how it affects people's opinions is simulated using a game-theoretic model previously defined. Moreover, in <cit.>, the authors simulated the spread of mis/disinformation and the opinion updating to study a possible relationship between echo chambers and the viral spread of mis/disinformation. Lastly, in <cit.>, the authors simulated the belief change in social networks through models based on evolutionary game theory and evolutionary graph theory. At the same time, extensive simulations are performed using real social network datasets to study the process of online information diffusion. Similar to what has been done in the models, we will distinguish between simulations conducted autonomously by researchers, such as programming their own simulations (Self-developed), and simulations that utilize existing third-party software (Third-party software). Figure <ref> illustrates that the most used type of simulations is Belief updating (20 studies, 50% of studies that proposed simulations). Of these simulations, the majority (18 studies, 90% of belief updating simulations) were self-developed by the authors. The second type of most used simulations was Countermeasures evaluation (10 studies, 25%) equal to the previous type. The authors self-developed the majority (8 studies, 10% of countermeasures evaluation simulations). Following this is the Mis/Disinformation diffusion (9 studies, 22.5%) and Offensive evaluation (1 study, 2.5%). §.§.§ Relationship between frameworks, models and simulations To analyze the relationship between frameworks, models and simulations, we employ a log-linear analysis to compare the three dimensions to determine if there is an association between them. Table <ref> shows the cell count of a log-linear analysis and the most typical associations in percentage. The reviewed studies tend to propose models together with simulations (50.88%). Then, it is typical for the framework proposals (24.56%) without modeling or simulation. Finally, studies propose unique models (12.28%) and unique simulations (7.02%) or jointly present a framework with modeling and simulation (5.26%). As a result, the z-score values from the log-linear analysis reveal significant relationships among the use of frameworks, models and simulations. While, p represents the probability value associated with the observed relationship in the log-linear analysis, indicating the significance level of the result. Notably, a substantial negative association is observed between using a framework and the presence of a model or simulation (z = -3.362, p = 0.001), indicating that articles proposing frameworks do not usually incorporate modeling or simulations. This fact makes sense due to a framework's conceptual or theoretical nature. Conversely, a positive and significant relationship is identified between the use of models and simulations (z = 1.771, p = 0.077), indicating that simulations precise a certain modeling or validate a particular model. Once the relationship between the presence or absence of a proposal related to a framework, simulation, or model has been analyzed, we will examine whether there is a connection between the specific type of each proposal. Table <ref> has been created to illustrate the number of combinations present for each type of proposal that appears more than once. We can see that 11 articles combine epidemiology models with belief updating simulation (19.3%). Then, 9 studies proposed a social framework without a model or simulation (15.79%). As expected, 4 opinion dynamics models are validated with belief updating simulations (7.02%). Finally, it was found that sometimes simulation of countermeasures is not based on particular models or frameworks (4 studies, 7.02%), but in other situations, it is employed through information diffusion models (3 articles, 5.26%). §.§ What purposes motivate the existing frameworks, models and simulations around mis/disinformation? (RQ3) RQ3 has as its objective to establish the main purpose of the proposal of each study. Based on the review, the purposes of the frameworks, models, and simulations to understand mis/disinformation can be classified into the five following categories: mis/disinformation conceptualization, effectiveness evaluation, cognitive assessment, and mis/disinformation detection. * Mis/Disinformation conceptualization Mis/Disinformation conceptualization refers to defining and structuring the fundamental concepts and elements of mis/disinformation. This involves identifying key attributes, characteristics, and mechanisms of mis/disinformation propagation and dissemination and analyzing its impact on individuals or communities. These frameworks, models, and simulations aim to increase the growing literature on the analysis and conceptualization of mis/disinformation in information environments. For instance, the main goal of the framework proposed in <cit.> was to help researchers and information professionals understand the fake news phenomenon and ways to fight it. It also helped design research studies to investigate the phenomenon of mis/disinformation empirically. Moreover, in <cit.>, the main purpose of their model and simulation was to help identify the impact of credible message dissemination patterns on the overall exposure to mis/disinformation on Twitter. Finally, in <cit.> Tambuscio et al. conceptualized the mis/disinformation phenomenon, in their model and simulations, taking into account the combination of selective exposure due to network segregation, forgetting (i.e., finite memory), and fact-checking. * Effectiveness evaluation In the context of mis/disinformation, effectiveness refers to the ability of proposed interventions and strategies to counteract or mitigate the impact of mis/disinformation on society. This includes assessing how these measures succeed in reducing the spread of mis/disinformation. These studies evaluate the effectiveness of mis/disinformation interventions involving countermeasures proposals or attacks. Countermeasures include a wide range of strategies, from technical solutions to algorithmic interventions to human-based approaches, offering solutions to tackle the problem of mis/disinformation targeted as increased media literacy and critical thinking skills will be addressed. However, despite the efforts of scientists and experts in the field, many challenges remain regarding mitigation, response, education, and awareness of mis/disinformation. For example, in <cit.>, the author discussed and evaluated, using simulations, different approaches, such as social constructionism, linguistics, heuristics, and empiric methods, to address the spread of mis/disinformation. Furthermore, the authors in <cit.> proposed an authentication method to reduce the spread of mis/disinformation on social media that consists of applying a verification system that can be accessed by right-clicking and providing a two-dimensional simulation for a proof-of-concept. As mentioned above, there are not many studies that investigate mis/disinformation from an attack perspective. While a large body of research has focused on detection, mitigation, and countermeasures, studies explicitly framing mis/disinformation as a targeted attack are scarce. We identified only one study <cit.> described above whose model and simulation main purpose was to evaluate the impact of different mis/disinformation attacks. * Cognitive assessment We have already mentioned the importance of considering human cognition in the context of information processing, belief formation, and susceptibility to deceptive information. These studies aim to study and analyze these aspects of mis/disinformation. For example, in <cit.>, the authors analyzed the effect of large-scale confirmation bias on social learning and examined how it can drastically change the way a networked, decentralized society processes information. Moreover, the authors in <cit.> analyzed how people digest information and how mis/disinformation exposure affects this process. * Mis/Disinformation detection A significant amount of work has been dedicated to the main purpose of mis/disinformation detection. Studies focusing on mis/disinformation detection aim to develop new models to distinguish between authentic and misleading or false information. By way of example, the main purpose of the EU platform CALYPSO <cit.> is to detect and fact-check suspected mis/disinformation cases in real-time early. This platform aimed to involve the audience in promptly detecting harmful content. Moreover, the authors in <cit.> aimed to demonstrate that their ontology-supported mis/disinformation model can enhance mis/disinformation detection. Furthermore, in <cit.>, the authors proposed a framework called “LIEALERT”, which combines advanced NLP techniques, a graph convolutional network, and attention mechanisms to detect mis/disinformation, predict its propagation depth, and cluster users based on their reactions to false information. Only one study was identified that was coded with two types of the above-described purposes simultaneously. The DISARM framework, described in the work by Terp et al. <cit.>, is a notable example of a dual-purpose approach. DISARM not only aims to increase the understanding of mis/disinformation processes but also seeks to provide practical guidance on mitigating the impact of mis/disinformation. The main purpose of the selected paper is Effectiveness evaluation (21 studies, 36.21%). Followed by Mis/Disinformation conceptualization (18 studies, 31.03%), and Cognitive assessment (10 studies, 17.24%). Finally, Mis/Disinformation detection is less common (9 studies, 15.52%). §.§ Which contexts are addressed by the existing frameworks, models and simulations around mis/disinformation? (RQ4) Mis/disinformation can be present in very different environments. Our analysis reveals six main contexts where mis/disinformation has been present: * Health In the context of health, mis/disinformation encompasses false or misleading information deliberately spread to influence beliefs, attitudes, decisions, or behaviors related to health and medical matters. For instance, vaccine hesitancy, mis/disinformation about COVID-19, and food safety are major concerns that directly affect the well-being of individuals and communities. In this context, mis/disinformation represents a significant threat that can directly affect public health and consumer confidence. For example, the authors in <cit.> tweets related to COVID-19 vaccination are used to perform the experiments and discuss the control strategy to minimize the misinformation and disinformation related to vaccination. In the same way, the model proposed in <cit.> adapted the Stimulus-Organism-Response (S-O-R) model where COVID-19 vaccine mis/disinformation exposure and information overload serve as environmental stimuli, perceived vaccination benefits, and barriers as cognitive organisms, fear as an affective organism, and information avoidance as a behavioral response. Lastly, in <cit.>, the authors explore the cognitive, affective, and environmental factors affecting the acquisition and diffusion of food safety mis/disinformation and show how mis/disinformation about food safety has become a serious problem in China. * Politics Mis/Disinformation in politics encompasses deliberately spreading false or misleading information to alter public opinion, shape political narratives, influence voting behaviors, and ultimately impact electoral outcomes. The objectives of political mis/disinformation can vary widely. They may include discrediting political opponents, manipulating public perceptions of policies or candidates, creating social divisions, fostering distrust in democratic institutions, and undermining the integrity of electoral processes. We can see an example in the model proposed in <cit.>, where the authors analyzed tweets from the 2018 elections in the US when Donald Trump became the president and tweets from the Swedish mid-term elections to compare the updating of opinions. Moreover, in <cit.>, the authors proposed a framework for analyzing mis/disinformation policies in the United States and China, identifying various dimensions of mis/disinformation policies, including agents, objectives, use of technology, governmental actions, and social context. * Defense Mis/Disinformation in defense encompasses intentionally disseminating false or misleading information to undermine the integrity and effectiveness of defense operations. The objectives of using mis/disinformation in defense contexts can include deceiving adversaries about military capabilities, intentions, or strategies; creating confusion or chaos within enemy ranks; concealing actual military plans or movements; sowing distrust or misinformation among allied forces; and influencing public perception or support for military actions or policies. For example, the framework proposed in <cit.> was applied to analyze mis/disinformation as a significant factor in hybrid threats, impacting the information environment and challenging distinguishing truth from lies. Hybrid threats typically combine conventional and unconventional methods, blending military and non-military tactics to achieve strategic objectives. Moreover, in <cit.>, the authors proposed a framework for counteracting mis/disinformation's negative impact on the cybersecurity system. This framework aims to combat mis/disinformation, strengthen the defense mechanisms of cybersecurity systems, and protect against potential threats posed by mis/disinformation and cyber attacks. Lastly, the framework mentioned above proposed in <cit.> called “DISARM” aims to manage information security-based standards for detecting and responding to information harms, including mis/disinformation. * News articles Mis/Disinformation in news articles encompasses the deliberate spread of false or misleading information with the intent to undermine trust in media sources, distort the public's perception of events, and contribute to the dissemination of biased narratives. Its primary objectives include influencing public opinion, shaping political discourse, and manipulating the narrative surrounding specific topics or events. For example, the framework proposed in the EU project CALYPSO <cit.> is developing a digital platform to source citizens’ contributions to false and misleading news articles. Moreover, in <cit.>, the authors review the literature on mis/disinformation, primarily from a news article perspective. * Crises In humanitarian crises, characterized by situations where communities or large populations are exposed to life-threatening conditions or imminent danger, mis/disinformation can exacerbate the severity of the crisis and lead to increased harm. Its primary objectives include spreading confusion, undermining trust in official sources of information, and impeding coordinated responses to the crisis, thereby prolonging and intensifying its impact on affected populations. Examples of such crises include global warming, which emerges as a pressing and multifaceted challenge that extends beyond environmental concerns. Mis/Disinformation can contribute to climate change since it can cause large segments of our societies to not believe in the reality of climate change. Natural disasters are another type of crisis. In this context, mis/disinformation can exacerbate the devastating consequences of earthquakes, hurricanes, floods, and other natural disasters. It can significantly impact decision-making, emergency response, and public safety. For instance, in <cit.>, the authors investigate human-machine interactions generating and mitigating mis/disinformation during humanitarian crises and propose a conceptual framework based on two activity systems: generating and mitigating mis/disinformation. Moreover, the authors in <cit.> investigate the model's predictions empirically using US county-level data on the impact of internet access on the formation of beliefs about global warming. Finally, the authors in <cit.>, acknowledging that effective debunking strategy is a potential tool to reduce the loss of massive digital mis/disinformation, proposed a novel rumor spreading–debunking (RSD) model by ordinary differential equation (ODE) system to explore the interplay mechanism between rumor spreading and debunking processes. Using a real-world rumor case, the “immigration rumor” during Hurricane Harvey in 2017, for their model's simulations. * E-commerce Mis/Disinformation in the context of e-commerce encompasses the deliberate dissemination of false or misleading information related to products, services, or reviews on online platforms. This misleading information can include fake product reviews, manipulated ratings, deceptive advertising, or fraudulent product features or benefits claims. The primary objective of mis/disinformation in e-commerce is to mislead consumers, influence their purchasing decisions, and create a false perception of products or services offered on these platforms. This can lead to a lack of trust among users, reduced credibility of online reviews, decreased customer satisfaction, and financial losses for consumers and businesses. In this context, the authors in <cit.> analyzed the social and emotional intelligence of communities and consumers frequently exposed to mis/disinformation and sharing fake news and reviews. Despite the different contexts identified in the analysis, most studies did not specify in what context mis/disinformation was used (32 studies, 56.14%). Figure <ref> presents the distribution of the environments across the selected articles. Health and Politics are the most employed context (9 studies, 15.79%). Followed by Defense, Crises and News articles (3 studies, 5.26%). E-commerce is the less employed context (1 study, 1.76%) §.§ Which validation is performed in the existing frameworks, models and simulations around mis/disinformation? (RQ5) Validation refers to confirming the accuracy, reliability, and applicability of a framework, model, or simulation by comparing and thoroughly analyzing it against independent external data. Validation is relevant for establishing the reliability and robustness of methods, models, and frameworks designed to address the challenge of mis/disinformation. From our analysis, we find five primary categories: * Simulations with real-world data Simulations with real-world data refer to a validation technique where frameworks and models are tested using actual data collected from the real world. This process involves running simulations based on authentic datasets to assess how well the framework or model performs in replicating real-world scenarios and outcomes. For example, in <cit.>, the authors take a sample of 1,000 users from a real Twitter dataset and evaluate the proposed game-theoretic opinion model, using these data to simulate five different types of networks (uncertainty, homophily, assertion, herding, and encounter-based) to analyze how each opinion model handles mis/disinformation. Moreover, the authors in <cit.> evaluate the proposed framework and models' impact on social network data like Twitter and Facebook, demonstrating effectiveness in simulating information diffusion and identifying mis/disinformation mitigation strategies. Finally, in <cit.>, the authors used extensive simulation based on a real-world online network data set (with 1,899 nodes) and another real-world dataset (with 9,877 nodes). To validate their game-theoretic model in the first effort to use the concepts of `Evolutionary graph theory' in game-theoretical applications to study the information diffusion process in social networks. To analyze and predict the spread of mis/disinformation. * Comparing with real-world data Comparing real-world data involves verifying the outcomes or predictions generated by frameworks or models with empirical observations obtained from real-world sources. This validation technique assesses the consistency between simulated results and actual data to determine the framework or model's accuracy and reliability. For instance, in <cit.>, the authors proposed a model that addresses mis/disinformation in emergency events by considering it a key factor in the formation of multiple opinions in online social networks to analyze how mis/disinformation affects online opinion dynamics during emergency events and how different official responses can intervene to manage evolving public opinion. This proposed model is validated by comparing simulation results with real-world data collected from online behaviors on the “Sina Weibo” platform during emergencies. Data on user interactions, such as likes on comments, reposts, and user profiles, are collected. * Surveys Surveys are quantitative research methods that consist of collecting data from people to evaluate methods' effectiveness and obtain valuable information on the acceptability and usefulness of theoretical and practical proposals. Typically, surveys consist of questions designed to elicit responses from a specific sample of participants. For example, in <cit.>, the authors employed the DISCERN questionnaire to evaluate their framework for analyzing YouTube content recommendation paths to monitor and detect health mis/disinformation on YouTube by assessing content networks and identifying key nodes within those networks. In the same way in <cit.>, Lin et al. used surveys to measure the effectiveness of proposed countermeasures, simulating the drift-diffusion model (DDM) to investigate the impact of accuracy prompts on individuals' sharing intentions. The score is based on the credibility of the news. Moreover, the authors in <cit.> explored different `folk' models of mis/disinformation on social media to understand individuals' perceptions and responses to mis/disinformation. The model has been validated through surveys, which consist of questions that addressed issues such as the origin of mis/disinformation, the perceived purpose of mis/disinformation, users' responses to mis/disinformation, and how they evaluate whether or not mis/disinformation. * Focus groups Focus groups are a type of research method in which a small group of people discuss a specific topic guided by a trained moderator. This group interaction helps researchers uncover deeper reasons for people's opinions and behaviors. It is a valuable tool for exploring why people think or feel a certain way. For example, in <cit.>, Tran et al. proposed a framework that focuses on examining human-machine interactions in the context of mis/disinformation generation and mitigation and validated their framework by engaging two groups of graduate students with two defined scenarios of humanitarian crises' mis/disinformation to validate the conceptual framework. * Mathematical proof To measure the correctness of a mathematical model, it is not necessary to use real data. Theoretical model proposals can be mathematically proven. By way of example, the authors in <cit.> propose a model called “SEDIS”, which includes states such as Susceptible, Exposed, Doubtful, and Infected, reflecting how people react to mis/disinformation online, and evaluate the proposed model by solving different equations by varying values of the parameters that form it. Furthermore, in <cit.> the authors proposed an opinion dynamics model to examine the relationship between users' personality traits such as extroversion, agreeableness, conscientiousness, and neuroticism and their engagement with mis/disinformation on social media during the COVID-19 pandemic. The proposed model has been validated through multinomial logistic regression analysis to predict mis/disinformation engagement from social characteristics and personality traits. Moreover, in <cit.>, the authors extended the traditional Susceptible-Exposed-Infected-Recovered (SEIR) model to study the dynamics of propagation of online mis/disinformation, considering four categories of users of social networking sites, namely, ignorant population, believers, active spreaders and stiflers. The model has been validated by determining the critical value of the spread of mis/disinformation, which regulates the stability switch from a state without spreaders to a state with spreaders. Having analyzed the articles of this survey, nearly half of them (25 studies, 43.86%) did not validate their proposal. On the contrary, validations through Simulations with real-world data, and Comparing with real-world data are the most common (11 studies, 19.30%). Followed by Surveys (6 studies, 10.53%). The less common ways of validation are Mathematical proof (3 studies, 5.26%) and Focus groups (1 study, 1.75%). To analyze the relationship between the type of validation (Simulations with real-world data, Comparing with real-world data, Surveys, Mathematical proof, Focus group) and the type of validated element (Framework, Model, Simulation), we employ a chi-square test of independence to determine if there is an association between them. The analysis involved creating a contingency table and performing a chi-square test to determine if there was a significant association between the variables. However, the results revealed a high p-value of 0.816, suggesting insufficient evidence to reject the null hypothesis of independence. Therefore, we cannot conclude that there is a significant relationship between the type of validation and the type of validated element in this dataset. These findings imply that the type of validation does not exhibit a meaningful association with the type of validated element, highlighting potential complexities or nuances in the validation processes across different contexts or domains. Regarding combinations that appeared more than twice, excluding not validated approaches. The most frequent combination (4 appearances) is validated epidemiology models and belief updating simulations with simulations with real-world data. The last combination appears twice and is validated again by epidemiology models and belief updating simulations in this case by comparing with real-world data. § FINDINGS ANALYSIS In this section, we provide a summary of our key findings, with a visual representation encapsulated in Figure <ref>, which shows the distribution of papers in each category along the RQs explored in the paper. Next, the past and future challenges in frameworks, models and simulation along the mis/disinformation phenomena are discussed. Finally, the existing limitations in this study and the implications of our research are addressed. §.§ Current trends The current trends in frameworks, models, and simulations within the mis/disinformation phenomenon extracted from the 57 papers analyzed are described below to answer each RQ proposed in this study. §.§.§ Mis/Disinformation terminology (RQ1) First, we analyzed the definitions and concepts the studies applied to mis/disinformation (RQ1), finding that most provided a definition, which can be: misinformation refers to unintentionally false or inaccurate information, while disinformation entails deliberately spreading false or misleading information with the intent to deceive or manipulate. Still, many studies did not provide a definition or a definition unrelated to intentionality, generating controversy around the concept of disinformation. In this sense, Kapantai et al. <cit.> emphasizes the importance of clear and commonly accepted definitions since different disinformation types might require different theoretical analyses and one of their objectives was to identify and organize the diverse definitions of the concept. In this study, the term “disinformation” has been used to refer to information that meets the European Commission's definition. This term is preferred to “fake news” or “misinformation” because it is more accurate and covers a wider range of misleading, inaccurate, or false information. §.§.§ Mis/Disinformation representations: Frameworks, models, and simulations (RQ2) Across papers, researchers used many different representations (RQ2) to mis/disinformation. We classified them into three main categories that are frameworks, models and simulations. We found that epidemiology is the most common model. While epidemiology is particularly suitable for capturing the dynamics of mis/disinformation since it can represent the spread of mis/disinformation and the change in people's beliefs <cit.>. It is important to accept its assumptions and inherent limitations, such as the notion of information `contagion' and the possibility of oversimplification of complex social and political dynamics. One of the most significant discoveries we have is the relationship between models, simulations, and frameworks. We found that model proposals were combined with simulations in order to prove the proposed model. On the other hand, frameworks usually appear without models or simulations. These findings align with the expectation that an association with the use of a framework is typically linked to a conceptual framework without proposed models or simulations. Unlike models, which predominantly focus on representing the dynamics of mis/disinformation spread, frameworks serve as organizational structures encompassing various aspects of mis/disinformation. We found that only a small portion of the studies propose a framework and most of them proposed social and high-level approaches. In this scenario, few frameworks address all aspects of mis/disinformation, from detection to mitigation. One of the most complete is DISARM <cit.>; the EU, NATO, and the ONU are organizations that support it. Others, like the proposed by Kapantai et al. <cit.>, emphasize the different mis/disinformation types. The authors proposed three independent dimensions with controlled values per dimension as categorization criteria for all types of mis/disinformation but did not propose any countermeasure. This absence of countermeasures also occurs in the SCOTCH framework <cit.>. This framework was developed closely with the United States Government (USG). Unlike the two previous ones, it proposes an analysis of the actors involved in the dissemination of mis/disinformation. This suggests a potential gap in the literature concerning the development of frameworks that holistically encompass the diverse dimensions of the mis/disinformation phenomenon. Concerning simulations, we found that belief updating is the most common simulation type for mis/disinformation, followed by countermeasures evaluation. There is a strong interest in investigating mis/disinformation's effects on individual and collective beliefs, as evidenced by the widespread use of belief updating simulations. Moreover, the focus on countermeasures evaluation simulations indicates a concerted effort to assess strategies to mitigate mis/disinformation's impact. We also noticed that few studies mainly aimed to simulate mis/disinformation attacks (Offensive evaluation), specifically only one study. This reflects a greater emphasis on developing defense and mitigation strategies, prioritizing identifying and countering existing mis/disinformation rather than actively simulating new attacks. §.§.§ Mis/Disinformation purposes (RQ3) Regarding the purposes (RQ3), efectiveness evaluation is the most common purpose. This observation highlights the research community's efforts in developing effective strategies and interventions to evaluate and combat the negative impact of mis/disinformation. In addition to the scientific community's dedication to addressing mis/disinformation with countermeasures, it is also important to recognize the involvement of external entities and organizations in this crucial effort. A prime example is the EU's active engagement in this fight against mis/disinformation, as seen in “EU vs Disinfo” proposed in 2015 <cit.>. §.§.§ Mis/Disinformation contexts (RQ4) We also extracted six main contexts (RQ4) across studies. We found that most of them occurred in health, especially in COVID-19, which shows that this context is a fertile ground for possible mis/disinformation campaigns. With the same number of studies, we have politics, underscoring the pervasive nature of political mis/disinformation in shaping public discourse and opinion formation, with potentially far-reaching implications for democratic processes and societal cohesion. Furthermore, most studies did not specify the context where mis/disinformation was applied. However, due to the limitations imposed by our inclusion and exclusion criteria, mis/disinformation will always be approached from a digital context. We have focused only on academic studies and current contexts where mis/disinformation is critical, such as wars, political conflicts, etc. This can lead to a biased view of the problem since mis/disinformation is also present in everyday contexts, such as sports, where its impact is not as critical. §.§.§ Mis/Disinformation validation methods (RQ5) Finally, we wish to report the extent of validation of the approaches (RQ5). We found that half of the studies had not validated their proposal, and most of the validated studies performed simulations with real-world data or compared their results with these types of data (comparing with real-world data). This suggests a trend utilizing realistic scenarios or benchmarks for validation, highlighting the importance of linking theoretical proposals with real-world situations. §.§ Open challenges Based on our findings and previous related studies, we find some open challenges in the area authors usually report. A description of each of these challenges is found below: §.§.§ Standardizing mis/disinformation frameworks, models and simulations An important open challenge is establishing a standardized and universally accepted mis/disinformation model to facilitate study comparability and interoperability. The same applies to frameworks. Our review found that DISARM <cit.> was the most complete approach with the European Union's support. Still, their authors mentioned that the framework should be tested in more scenarios and expanded with more incident types. Moreover, more proposals aim to cover new areas, like the analysis of the actors involved in the dissemination of mis/disinformation <cit.>. Future research should focus on developing new frameworks that include a mis/disinformation model. These new frameworks could be the combinations or extensions of available frameworks that should be tested and validated. One possible way to test frameworks and models can be real scenario simulations. Based on our findings, we noted that, at present, most studies that proposed mis/disinformation frameworks or models applied simulations of their approach. However, we identified a few papers that fully utilize newer and more powerful technologies, such as GenAI, particularly LLMs, which do not need real-world data to simulate mis/disinformation scenarios. For example, the authors in <cit.> discussed the potential of LLMs as an innovative agent-based method for comprehending, simulating, and evaluating mis/disinformation within controlled experimental settings. Thus, future research should consider LLMs' affordances in mis/disinformation research. §.§.§ Contextual variability in mis/disinformation frameworks, models and simulations Mis/Disinformation can be spread in any context and situation. For example, Capuano et al. <cit.> indicated that most mis/disinformation datasets are about politics and highlighted the need for a standard method for storing mis/disinformation data with more diverse topics. Moreover, the importance of the context is highlighted in <cit.>. The authors focused on fake news detection and discussed the challenge of how models with high accuracy on one concrete topic do not work as well as on another topic. Future mis/disinformation proposals should be independent of context to adapt to a dynamic and changing environment like digital media. §.§.§ Education as countermeasure Regarding the purposes, we found that `countermeasures' are the most common purpose in our results. However, we identified the challenges in implementing tools that aim to eradicate mis/disinformation at its root by focusing on educating and training individuals. Among our findings, only the CALYPSO platform <cit.>, part of an EU project, attempts to involve the public in mis/disinformation detection. However, it is currently in a prototype stage, with its sole function being to have individuals classify content as true, false, or undefined. A potential solution to educate the population about mis/disinformation could involve incorporating serious games with this theme. Although we did not find any proposals among our results, there are indeed serious games addressing mis/disinformation available. For instance, Tilt was established to enhance resilience against online manipulation. One of their offerings is the serious game called “Harmony Square” <cit.>, centered around the theme of fake news. The game unfolds in the idyllic Harmony Square, a small neighborhood with a mild obsession with democracy. As the player, you assume the `Chief Disinformation Officer' role. Throughout four brief levels, your task is to disrupt the square's peace by fostering internal divisions and turning its residents against each other. The game aims to expose tactics and manipulation techniques used to mislead people, garner a following, or exploit societal tensions for political purposes. Another potential solution involves using Cyber Ranges, a platform that simulates real operational environments for the individual or collective training of professionals. While no proposals were identified in the literature, the FFI (Norwegian Defence Research Establishment) is currently developing “Somulator” <cit.>, a solution designed to simulate social networks in exercises, capable of simulating online newspapers, microblogs, image sharing, video sharing, and more generic social networks. Moreover, in 2024, the NATO Strategic Communications Centre of Excellence (NATO StratCom COE) has unveiled its latest initiative to revolutionize military and strategic communications (StratCom) training. Known as the Information Environment Simulation Range (InfoRange) <cit.>, this platform offers an immersive simulated training environment that leverages advanced technology to elevate tabletop exercises and provide comprehensive crisis or conflict training in a responsive digital information environment. The InfoRange incorporates generative AI to facilitate exercise design and audience application, offering dynamic training opportunities by simulating realistic information environment infrastructure, with the aim to training audiences across various StratCom sub-disciplines, including public diplomacy, civilian and military public affairs, InfoOps, and PsyOps. §.§ Survey limitations and implications This study has inherent limitations, which may impact our findings: §.§.§ Paper selection The paper selection mainly limits this review. First, we have only used the key terms “disinformation” and “misinformation” to perform our document search based on the papers' titles. Other studies could also be working on disinformation, but they might use slightly different terms to describe their work. As we have seen in RQ1, there is no consensus on the terminology of the phenomenon. Therefore, those studies might not be included in our review. Nevertheless, we purposely opted for these terms to analyze the core of disinformation while having a manageable selection of papers for this study. Furthermore, we focused on the primary academic databases of Scopus and Web of Science. However, other peer-reviewed academic papers could be indexed in different databases and non-peer-reviewed publications, including pre-prints, technical or white reports that could be missing in our review, and non-academic work being conducted in industrial companies and by practitioners. Finally, we have based our RQ generation with a focus on frameworks, models, or simulations related to mis/disinformation. Still, other potential and valuable RQs about the mis/disinformation phenomenon might be missing in this review. §.§.§ Ethical concerns The study of mis/disinformation raises ethical challenges regarding the social responsibility of the media and freedom of expression <cit.>. Regulating mis/disinformation also poses challenges, as it can lead to sensitive situations, such as censorship of information and non-censorship of mis/disinformation. Delving into the regulation of mis/disinformation introduces its own set of complexities. A balance is needed, as strict rules can inadvertently censor legitimate information, undermining the principle of free expression, while inappropriate measures may allow mis/disinformation to proliferate unchecked. Another challenge is navigating the subjective nature of distinguishing truth from falsehood <cit.>. Recognizing that determinate truth often requires the establishment of a subject, attempts to establish universal truths raise concerns about the imposition of a single narrative or perspective. A balance must be struck against avoiding further imposition of a single truth. § CONCLUSION This study presents a systematic literature review encompassing frameworks, models, and simulations proposed up to 2023 for investigating the phenomena of misinformation and mis/disinformation. To our knowledge, this paper stands as the pioneering effort in reviewing methodologies aimed at representing and comprehending the dynamics of misinformation and mis/disinformation. Utilizing the widely recognized PRISMA methodology, we comprehensively searched the two foremost bibliographic databases, Scopus and Web of Science. Following stringent selection criteria, 57 papers were identified for further in-depth analysis to respond to five research questions (RQ). Firstly, RQ1 focused on exploring the terminology and definitions utilized to describe misinformation and disinformation, as the absence of an official definition necessitates clarification in the academic discourse. We concluded that there is a common consensus regarding the definitions of disinformation and misinformation, which revolves around intentionally disseminating false information. Particularly, disinformation refers to false information deliberately spread to manipulate or influence, while misinformation refers to false information without intent to deceive. Secondly, RQ2 delves into the formal representation of mis/disinformation within frameworks, models, and simulations, shedding light on how this complex phenomenon is conceptualized and operationalized. Among the identified frameworks, most studies propose social frameworks, while a subset focuses on technical frameworks; intriguingly, only two studies integrate both approaches. Epidemiology models emerge as the predominant approach in modeling mis/disinformation, leveraging them to simulate propagation akin to the spread of disease within populations, often introducing novel models that incorporate additional or diverse agent states. Opinion dynamics models follow closely, offering innovative approaches to representing mis/disinformation and analyzing the evolution of opinions surrounding it. Game-theoretic models and information diffusion models are also present in the landscape. In terms of simulations, belief updating stands out as the predominant simulation type employed in mis/disinformation studies, followed by countermeasures evaluation. Mis/Disinformation diffusion is also utilized alongside offensive evaluation, which appears in only one study. RQ3 delves into the primary objectives of the research being done through the frameworks, models, and simulations. The proposals' purposes are related to evaluating the effectiveness of mis/disinformation attacks and associated countermeasures, formalizing mis/disinformation, assessing cognitive impact, and detecting mis/disinformation. RQ4 aims to identify the contexts where misinformation and mis/disinformation are most prevalent, offering valuable insights into the domains and subjects of attention from the research community. Some studies and solutions are mainly proposed in the context of health and politics, but also defence, crises, news articles and e-commerce are areas for mis/disinformation research. However, most proposals were generic and agnostic from the field of application, with no context being defined. Finally, RQ5 examines the methods employed to validate the proposed frameworks, models, and simulations to ensure the feasibility and reliability of the approaches. A significant portion did not validate their proposals was observed. Nevertheless, among the studies that did validate their frameworks, models and simulations, most studies validated their proposals using simulations or comparisons with real-world data, followed by surveys. Conversely, mathematical proof and focus groups were less commonly used for validation. Mis/Disinformation has become an integral aspect of our digital lives. It has already proven to be potentially very dangerous in the digital ecosystem and outside of it, especially with the rise of fake news, memes, deepfakes, and emotionalism. In such a scenario, a trend in the model and simulating mis/disinformation offers insights into its dynamics and evolution and evaluates the effectiveness of mis/disinformation attacks and countermeasures. However, a holistic approach with models, frameworks, and simulations that consider social, psychological, technological, and contextual factors is lacking. Additionally, these approaches should be validated rigorously, ensuring their reliability and accuracy. Finally, these holistic approaches should prioritize empowering individuals at the heart of the issue by providing them with the necessary training and resources to detect and combat mis/disinformation effectively. Future work in the mis/disinformation area may prioritize standardizing frameworks, models and simulations to enhance comparability and interoperability across studies, adapting to diverse contexts and situations. Additionally, educational interventions to empower individuals to identify and mitigate the impact of mis/disinformation, leveraging innovative approaches such as serious games, need to be developed. This work has been partially funded by the strategic project CDL-TALENTUM from the Spanish National Institute of Cybersecurity (INCIBE), the Recovery, Transformation, and Resilience Plan, Next Generation EU, and the University of Murcia by FPU contract. ACM-Reference-Format
http://arxiv.org/abs/2406.09333v1
20240613171430
Memory-Efficient Sparse Pyramid Attention Networks for Whole Slide Image Analysis
[ "Weiyi Wu", "Chongyang Gao", "Xinwen Xu", "Siting Li", "Jiang Gui" ]
cs.CV
[ "cs.CV" ]
Instance-level quantitative saliency in multiple sclerosis lesion segmentation [ June 17, 2024 ============================================================================== § ABSTRACT Whole Slide Images (WSIs) are crucial for modern pathological diagnosis, yet their gigapixel-scale resolutions and sparse informative regions pose significant computational challenges. Traditional dense attention mechanisms, widely used in computer vision and natural language processing, are impractical for WSI analysis due to the substantial data scale and the redundant processing of uninformative areas. To address these challenges, we propose Memory-Efficient Sparse Pyramid Attention Networks with Shifted Windows (SPAN), drawing inspiration from state-of-the-art sparse attention techniques in other domains. SPAN introduces a sparse pyramid attention architecture that hierarchically focuses on informative regions within the WSI, aiming to reduce memory overhead while preserving critical features. Additionally, the incorporation of shifted windows enables the model to capture long-range contextual dependencies essential for accurate classification. We evaluated SPAN on multiple public WSI datasets, observing its competitive performance. Unlike existing methods that often struggle to model spatial and contextual information due to memory constraints, our approach enables the accurate modeling of these crucial features. Our study also highlights the importance of key design elements in attention mechanisms, such as the shifted-window scheme and the hierarchical structure, which contribute substantially to the effectiveness of SPAN in WSI analysis. The potential of SPAN for memory-efficient and effective analysis of WSI data is thus demonstrated, and the code will be made publicly available following the publication of this work. § INTRODUCTION Whole Slide Images (WSIs) have become an indispensable tool in modern digital pathology, enabling the digitization of histopathological slides and facilitating computer-aided diagnosis <cit.>. However, the gigapixel resolution of WSIs presents significant computational challenges for automated analysis, with the amount of data far surpassing the capacity of traditional image analysis techniques designed for natural images. In recent years, deep learning has made remarkable progress across various domains, revolutionizing the way we approach and solve complex problems. This progress has been largely driven by the development of powerful architectures that can learn rich, hierarchical representations from vast amounts of data. In particular, the natural language processing (NLP) domain has witnessed significant breakthroughs with the introduction of transformer-based models <cit.>. These models have revolutionized tasks such as language understanding, generation, and translation by effectively capturing long-range dependencies and contextual information in text data. Similarly, the field of computer vision (CV) has experienced rapid advancements, primarily due to the success of convolutional neural networks (CNNs) <cit.> and, more recently, Vision Transformers (ViTs) <cit.>. These state-of-the-art architectures have achieved exceptional performance in various tasks, including image classification, object detection, and semantic segmentation, by learning to extract meaningful features and representations from visual data. While these advancements have revolutionized the field of deep learning and introduced attention mechanisms that effectively capture long-range dependencies and focus on relevant information <cit.>, they also present challenges in terms of scalability and efficiency. The quadratic complexity of dense attention presents a significant challenge when dealing with longer sequences or a larger number of data. Various techniques have been proposed to address this computational bottleneck. Sparse Transformers <cit.> selectively attend to a subset of tokens, reducing computational complexity from quadratic to sub-quadratic. Linear Transformers <cit.>, on the other hand, approximate the self-attention mechanism to achieve linear computational complexity, enabling the processing of much longer sequences. Furthermore, there are many other advancements in general domains, such as position encoding techniques <cit.>. The success of these advancements suggests the potential of applying similar techniques to the analysis of WSIs, as they may help address the challenges posed by the large, spatially complex nature of WSIs. The predominant paradigm in WSI analysis has been the adoption of a two-stage patch-based framework. This approach begins by segmenting WSIs into smaller, non-overlapping patches, with the background removed. Each patch is then processed by a fixed feature extractor to generate high-dimensional feature representations. These features are aggregated using multiple instance learning (MIL) models, such as attention-based MIL (ABMIL) <cit.>, to predict slide-level outcomes. While many WSI analysis methods focus on extending ABMIL by incorporating additional losses or training strategies <cit.>, they treat patches as independent and identically distributed (i.i.d.) entities (Figure <ref>, Bottom), overlooking the rich spatial structures and long-range dependencies intrinsic to WSIs. The gigapixel nature of WSIs and the presence of vast uninformative regions pose challenges to the direct application of the advancements in general CV and NLP for dependency modeling. To bridge this gap between general deep learning domains and WSI analysis, we propose Memory-Efficient Sparse Pyramid Attention Networks (SPAN). SPAN introduces a novel framework that efficiently leverages the hierarchical nature and long-range contextual information in WSIs while maintaining computational efficiency. The key components of SPAN are designed to address the limitations of current patch-based methods. The sparse pyramid attention architecture hierarchically focuses on informative regions within the WSI, reducing computational overhead while preserving critical diagnostic features. By employing a pyramid structure, SPAN efficiently processes WSIs at multiple scales, capturing both local and global context. The sparse attention mechanism selectively attends to informative regions, alleviating the computational burden imposed by large, uninformative areas. Furthermore, SPAN incorporates shifted windows and global tokens to enhance the model’s ability to capture long-range contextual dependencies and global information. Moreover, SPAN is compatible with various general-purpose techniques, allowing for seamless integration and adaptation to the specific properties of WSI data. This flexibility opens up opportunities for future exploration and refinement of the SPAN framework. The main contributions of this paper are as follows: * We propose SPAN, a novel framework that combines sparse pyramid attention with shifted windows, specifically designed for efficient and effective WSI analysis. * We introduce a sparse pyramid attention architecture that hierarchically focuses on informative regions, reducing computational complexity while preserving critical diagnostic features. * We incorporate shifted windows and global information carrier tokens to enhance the model’s ability to capture long-range contextual dependencies, which are essential for accurate disease classification. * We evaluate SPAN on multiple public WSI datasets, demonstrating its superior performance compared to state-of-the-art methods in downstream classification tasks. Our approach enables the precise modeling of both spatial and contextual information, which is often challenging for existing methods due to memory constraints. § RELATED WORKS §.§ Attention Mechanisms Attention mechanisms, particularly self-attention, have revolutionized various domains, including natural language processing (NLP) and computer vision (CV). The introduction of Transformer-based models, such as BERT <cit.> and GPT <cit.>, has marked a paradigm shift from traditional recurrent neural networks in language modeling. By leveraging the power of self-attention to capture long-range dependencies in text, Transformers have achieved state-of-the-art performance on a wide range of tasks, establishing themselves as the dominant architecture in NLP. However, the quadratic computational complexity of self-attention can be prohibitive for processing long sequences. To address this issue, sparse attention mechanisms, such as Longformer <cit.> and BigBird <cit.>, have been proposed, limiting the attention computation to fixed windows and significantly reducing the computational complexity while still capturing important long-range dependencies. The Vision Transformer (ViT) <cit.> has challenged the long-standing dominance of convolutional neural networks (CNNs) by demonstrating the effectiveness of self-attention in learning visual representations. To further improve the performance and efficiency of ViT, several variants, such as Swin Transformer <cit.> and FasterViT <cit.>, introduce window attention mechanisms. Unlike in NLP, where window attention is primarily used to reduce computational complexity, the main purpose of window attention in CV is to introduce a hierarchical structure and incorporate inductive biases, leading to state-of-the-art performance on various computer vision tasks. Position encoding is another crucial aspect of attention mechanisms, allowing the model to incorporate positional information of the input tokens. In NLP, absolute position encoding <cit.> and relative position encoding <cit.> have been widely studied. Similarly, research on position encoding in ViT has been a highly active area, ranging from the initial Absolute Position Embedding (APE) <cit.> to the more recent Relative Position Bias (RPB) <cit.>. Recent studies have also actively sought to introduce rotary position encoding techniques from large language models (LLMs) into CV models to enhance the performance of downstream classification, segmentation tasks, and high-resolution image generation <cit.>. The combined use of self-attention mechanisms and position encoding has substantially improved models' ability to capture long-range dependencies and relationships, enhancing their performance across a wide range of tasks. §.§ Pyramid Structures in Computer Vision The concept of multi-scale feature extraction and representation has been a fundamental aspect of computer vision for decades. Unlike in natural language processing, where data is often treated uniformly, visual data is inherently hierarchical, with information present at various scales. The early recognition of this hierarchical nature can be traced back to seminal works like SIFT descriptors <cit.>, which employed a scale-space pyramid to extract scale-invariant features. The advent of deep learning and CNNs further ingrained the importance of hierarchical processing in computer vision. From the pioneering AlexNet <cit.> to more advanced architectures like ResNet <cit.> and ConvNeXt <cit.>, CNNs inherently process visual data in a hierarchical manner. The progressive downsampling of feature maps and increase in channel depth allow these networks to capture features at multiple scales, with shallow layers extracting fine details and deeper layers capturing more abstract semantics. Building upon this implicit hierarchical structure, WE propose explicit pyramid architectures to further enhance the multi-scale capabilities of CNNs. SPP-Net <cit.> introduced spatial pyramid pooling to aggregate context at multiple scales, inspiring a wave of multi-scale CNN designs. FPN <cit.> proposed a top-down architecture with lateral connections to build high-level semantic feature maps at all scales. HRNet <cit.> took a different approach, maintaining high-resolution representations throughout the network via parallel multi-resolution convolutions and repeated multi-scale fusions. The crucial role of pyramid structures has been recognized in ViTs as well. While the original ViT <cit.> processes image patches uniformly using an isotropic structure, many subsequent studies have explored integrating pyramid structures with efficient attention mechanisms to enhance the performance and efficiency of ViTs. The PVT <cit.> integrates pyramid structures into the transformer architecture, progressively reducing spatial resolution and increasing channel dimension to create a hierarchical representation. The Swin Transformer <cit.> combines a hierarchical design with a shifted window mechanism to enable better cross-window information exchange. The Focal Transformer <cit.> proposes a focal self-attention mechanism that operates on both fine-grain and coarse-grain levels, creating a multi-level hierarchy. FasterViT <cit.> combines CNNs and ViTs with carrier tokens to facilitate global information exchange among local windows at different scales. These concurrent developments reinforce the fundamental importance of multi-scale pyramid representation learning in computer vision. §.§ Whole Slide Image Analysis: Characteristics and Challenges The advancements in transformer-based models, as well as the effectiveness of pyramid structures in capturing multi-scale information, have the potential to improve performance significantly in the NLP and CV domains. However, applying these advancements directly to WSIs is challenging due to their sparse nature post-preprocessing and their gigapixel size. The original WSIs are typically stored in a pyramid format, with multiple magnification levels available for pathologists to examine the tissue at different scales. However, the pyramid structure underscores the importance of hierarchical information for human analysis of WSIs. However, mainstream WSI analysis methods typically operate in an isotropic fashion, treating input data uniformly without considering the inherent multi-scale nature of the WSI pyramid structure <cit.>. These methods are built upon ABMIL <cit.>. For instance, CLAM <cit.> utilizes an additional network to predict patches with high attention scores from ABMIL, grouping them into the same class as the corresponding WSIs. MHIM <cit.> employs a Siamese ABMIL network's attention outputs to drop patches randomly. DTFD <cit.> divides bags into sub-bags and uses these sub-bags for ABMIL training. However, these approaches fail to capture the crucial spatial and hierarchical features necessary for accurate WSI analysis. Another approach, introduced by TransMIL <cit.>, flattens patches into a sequence and then reshapes them into a square to preserve some spatial information. Nonetheless, this method distorts the true spatial relationships and may not perform well in certain situations, also ignoring hierarchical information. Drawing from the success of multi-scale and hierarchical structures in computer vision and the advancements in position encoding techniques, it is evident that effectively incorporating spatial information and multi-scale features in WSI analysis is crucial. Our proposed framework, SPAN, employs an efficient encoding strategy that enables precise analysis of WSIs with feasible memory usage and speed, making it possible to use the same modeling techniques as in other active research domains. § METHOD §.§ Overview SPAN is a sparse pyramid attention architecture designed for efficient and effective WSI analysis. The main components of SPAN include a sparse convolutional block, a window generation block, and a sparse attention block. Given the inputs, a feature matrix, and a coordinates matrix, SPAN first indexes the inputs. The architecture alternates between parameterized convolutional layers and parameterized sparse attention layers. The sparse attention layers capture local dependencies within the windows, as well as long-range dependencies with global attention. This process focuses on informative regions and interactions at the current scale. The convolutional layers gradually reduce the spatial resolution to capture spatial and hierarchical features. The pipeline of the SPAN architecture is presented in Figure <ref>. Finally, the classification head aggregates the learned features to make a slide-level prediction. §.§ Window Generation Block Given feature inputs 𝐗∈ℝ^N × d, where N is the number of non-empty patches and d is the feature dimension, conventional window generation methods <cit.> used in general domains are likely suboptimal in terms of efficiency for our sparse matrix scenario. These methods typically operate directly on dense feature matrices, obtaining different views of the same dense matrix by striding over a certain number of elements in the matrix's memory. However, due to the sparsity of the matrix positions, applying the same processing method would require first padding the feature matrix and coordinate matrix into a dense form. Since d is usually large, this approach would lead to a significant increase in memory consumption and computational overhead due to the inclusion of many unnecessary padding operations. To address this issue in window generation, we propose a block that utilizes indices for efficient window generation and attention mechanisms. By performing padding on the index matrix and using indices to specify the subsequent window attention computation, we avoid the padding of high-dimensional zero vectors and the duplication of feature matrices. The process, as illustrated in Figure <ref>, is a parallelizable high-speed module that involves only index operations and execution. We represent the input WSI as a sparse tensor 𝐗∈ℝ^N × d, and coordinates inputs 𝐂∈ℕ^N × 2. Additionally, we introduce an index matrix 𝐈∈ℕ^N × 1 that encodes the original spatial locations of the non-empty patches, establishing an index-feature-position mapping. This mapping allows us to perform padding operations on the 1D index matrix instead of directly padding the high-dimensional feature vectors, thereby substantially reducing memory overhead. The window generation block proceeds as follows: §.§ Parameterized Feature Extraction Block Our proposed model incorporates parameterized feature extraction blocks that enable the efficient capture of hierarchical features and long-range dependencies in sparse WSIs. The architecture is composed of two kinds of layers: convolutional layers and transformer layers. Convolutional Layers Given the sparse nature of the input features 𝐗∈ℝ^N × d and their corresponding positions 𝐂∈ℝ^N × 2, we employ sparse convolutions <cit.> to perform downsampling and feature encoding. Sparse convolutions operate directly on the non-zero elements of the input, making them computationally efficient and memory-friendly compared to dense convolutions. In the first feature extraction block, we apply a 1 × 1 convolution to the input features to avoid direct downsampling and preserve the initial spatial resolution. This helps to maintain the fine-grained details of the input data. In subsequent layers, sparse convolutions with a kernel size of 2 and a stride of 2 are used to downsample the spatial shape progressively. This downsampling operation reduces the number of patches by approximately a factor of 4, resulting in a hierarchical encoding structure. The reduction in the number of patches also accelerates subsequent attention computations and improves both computational and memory efficiency. Transformer Layers After the convolutional layers, we leverage the sparse window attention mechanism by utilizing the computation graph generated by the window generation block. It involves customized graph operations to efficiently manage sparse data structures and optimize the attention mechanism <cit.>. This block generates both non-overlapping and shifted windows, and the Transformer layers utilize these windows to perform attention computations within local contexts. By using indices computed from these non-overlapping and shifted windows, we avoid duplicating a large number of samples and can perform transformer operations directly on the original feature vectors. Although this approach extends the receptive field and captures dependencies within shifted windows, it may still fall short in capturing long-range dependencies beyond windows. To address this limitation, we introduce learnable global information carrier tokens. These tokens serve as a global context that can be accessed by all patch tokens, regardless of their local window. By attending to the global tokens, each patch token can incorporate global information into its representation. Similarly, the global tokens attend to all patch tokens, allowing them to gather information from the entire input sequence. This bidirectional interaction between global tokens and patch tokens enables the model to capture long-range dependencies that span multiple windows, enhancing its ability to model complex relationships within the WSI. We initialize learnable relative position biases of size (2w+1) × (2w+1) to encode positional information within each window, enhancing the model's ability to consider the relative spatial arrangement of tokens. In all models, the window size w is set to a default value of 6. §.§ Classification Head After stacking three blocks, we obtain a condensed and hierarchical representation of the WSI. To perform the final classification task, we introduce an additional attention pooling layer to aggregate the obtained feature maps for classification. Unlike traditional CNNs that often employ global average pooling or max pooling for final feature aggregation, we use an attention pooling layer. This choice is driven by the unique characteristics of WSI data. Even after two downsampling steps, the number of samples (i.e., patches) remains relatively large and varies considerably between different WSIs. Simple pooling methods may not be effective in handling this variability and could lead to a loss of important information. The attention pooling layer dynamically weights the importance of each patch based on its contribution to the final classification task. § EXPERIMENTS §.§ Experimental Setup To evaluate the performance of our proposed SPAN architecture, we conducted experiments on two open WSI datasets: CAMELYON-16 <cit.> and BRACS <cit.>. We adhered to the official training, validation, and test splits provided by the datasets. If an official split was not available, we used the following protocol: * Test Set Splitting: If the official test set is not provided, we randomly selected one-third of the samples as the test set with seed 42. * Validation Set Splitting: If the official validation set is not provided, we randomly selected 15% of the training set as the validation set with seed 42. The preprocessing pipeline adopted in this study is almost identical to that of CLAM <cit.> for all the datasets, with an additional crucial step introduced to align the patches with a grid of patch-sized cells. This alignment step is essential for preserving the spatial relationships between patches accurately. By extending the patch boundaries to the nearest multiple of the patch size (e.g., 224), we ensure that the patch dimensions are consistent with the input grid of the model. This approach allows the patches to be seamlessly mapped to integer coordinates, maintaining the true spatial relationships between patches. In contrast, the patch coordinates would be floating-point values without this alignment step, requiring rounding to the nearest integers. This rounding process can potentially distort the spatial relationships between patches. This preprocessing step may result in a slightly larger number of patches compared to the original CLAM preprocessing pipeline. In line with common practice in WSI analysis, we employed a ResNet50 encoder as the feature extractor in our experiments. Specifically, we used the outputs of the penultimate layer of the ResNet50 encoder. To ensure reproducibility and consistency across experiments, we used fixed random seeds. Each baseline model was run five times with different random initializations to account for variability in training. All the models adhered to the same hyperparameter settings, with a learning rate of 1e-4, the AdamW optimizer, and a weight decay of 5e-5. §.§ Main Results Table <ref> and Table <ref> demonstrate that our proposed SPAN method significantly outperforms existing baselines on the CAMELYON-16 and BRACS datasets in terms of accuracy and AUC. It's worth noting that TransMIL shows significant instability on the CAMELYON-16, with one run completely failing to learn any effective features, leading to much worse performance. Although SPAN incurs higher computational costs, we argue that the substantial performance gains justify this, considering the importance of accuracy in medical diagnosis. The results suggest that SPAN's architecture, designed to capture hierarchical structure and long-range dependencies in WSIs, contributes to its performance, highlighting its potential for enhancing the accuracy of computational pathology workflows. §.§ Ablation Studies To assess the contributions of various components of our model to its performance, we conducted ablation studies using the CAMELYON-16 dataset. The baseline model includes learnable relative position biases, attention pooling layers, a global token, a pyramid structure with downsampling convolutional layers, a window size of 6, and a shifted-window mechanism. We modified aspects such as position encoding, aggregation methods, the presence or absence of global is, pyramid structure, shifted-window mechanisms, and window sizes to explore different configurations and their effects on model performance. The ablation study results (Table <ref>) demonstrate the robustness and flexibility of our model. Even without positional encoding, our model performs well, probably due to the inherent positional information captured by the shifted-window attention and convolutional layers. Moreover, our model is compatible with various advanced positional encodings from other domains, such as Alibi and RoPE, highlighting its extensibility to integrate future advancements in positional encoding techniques. Although these advanced encodings do not currently outperform our baseline with learnable relative position biases, future work may uncover more suitable frequencies or variants tailored to WSI characteristics, further enhancing model performance. Crucially, our experiments underscore the importance of the shifted-window mechanism and hierarchical downsampling through convolutional layers in WSI analysis. Unlike textual data with a 1D sequential structure, WSIs possess a 2D spatial structure. This necessitates careful consideration of adjacent regions. The shifted window mechanism ensures effective communication between adjacent windows, capturing essential spatial relationships. Without this mechanism, as demonstrated by the performance drop in the ablation study, non-overlapping windows would result in some adjacent visual tokens failing to compute any attention interactions. Additionally, the removal of the pyramid structure, i.e., downsampling convolutional layers, led to a decrease in performance, highlighting its importance in capturing multi-scale features. Furthermore, we observed that increasing the window size beyond a certain point does not necessarily improve performance. This is despite the higher computational resources used (Figure <ref>). This phenomenon may be attributable to diminishing returns on capturing long-range dependencies and increased complexity in the learning process, which can hinder the model's efficiency at generalizing from training data as the window size becomes excessively large. We recommend that future research in WSI analysis consider incorporating these techniques to improve performance and carefully balance window size and computational efficiency. Lastly, our experiments with global tokens show that they are effective in carrying global information. Although using the global token representation directly for classification did not outperform the additional attention pooling layer, it still yielded competitive results. This finding suggests that global tokens can be a valuable tool for capturing global context in WSI analysis. § CONCLUSION AND LIMITATIONS We introduced SPAN, a memory-efficient Sparse Pyramid Attention Network designed specifically for the analysis of gigapixel Whole Slide Images (WSIs). In our experiments, SPAN demonstrated competitive performance in downstream WSI classification tasks. However, we recognize several limitations of our approach. Despite SPAN's compatibility with a variety of positional encoding techniques, directly applying modern encoding methods did not yield performance improvements in our tests. Future research could explore learnable positional encoding frequencies or WSI-specific frequency values to potentially further enhance SPAN's effectiveness. Our ablation studies also highlight the critical roles of the downsampling pyramid structure and the shifted-window mechanism in the efficacy of sparse attention models for WSI analysis. These elements are crucial to SPAN’s performance and could inform future innovations in this field. In conclusion, SPAN represents a notable development in the efficient analysis of gigapixel WSIs, offering enhanced accuracy and reduced computational demands in our studies. Opportunities for further enhancements remain, such as extending SPAN's applications to additional tasks like segmentation and integrating insights from related fields. Addressing these challenges could lead to more accurate, reliable, and efficient tools for computer-aided diagnosis in pathology. plainnat
http://arxiv.org/abs/2406.09261v1
20240613160142
Non-Invertible Surface Defects in 2+1d QFTs from Half Spacetime Gauging
[ "Wei Cui", "Babak Haghighat", "Lorenzo Ruggeri" ]
hep-th
[ "hep-th" ]
=1 cd shapes,arrows shapes plotmarks calc ℤ ℝ ℚ ℂ ℙ 𝔸 𝔽 𝒜 𝒞 ℋ ℒ 𝒩 ℳ 𝒪 𝒞 𝒬 𝒯 𝒫 𝒵 𝒰 𝒳 SL SO SU Sp U RCFT MCG Aut Arf a b Calabi-Yau tr ch @←→ #1#2#1[#2] footnote-1 [#1](#2)(#3:#4:#5) [#1] ((#2)+(#5*cos(#3),#5*sin(#3))) arc (#3:#4:#5); equationsection TheoremTheorem[section] Corollary[Theorem]Corollary definitionDefinition[section]
http://arxiv.org/abs/2406.08483v1
20240612175932
A suite of classical Cepheids tied to the binary cluster Berkeley 58 \& NGC 7790
[ "Daniel Majaess", "David G. Turner" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
To be accepted for publication The Cepheid hosting binary cluster: NGC 7790 and Berkeley 58 Mount Saint Vincent University, Halifax, Canada Daniel.Majaess@msvu.ca Saint Mary's University, Halifax, Canada § ABSTRACT The classical Cepheids CE Cas A, CE Cas B, CF Cas, and CG Cas are likely members of the binary open cluster comprising NGC 7790 and Berkeley 58. The clusters are of comparable age and in close proximity, as deduced from differentially dereddened UuB_PBVGR_P photometry, and Cepheid period-age relations. Gaia DR3 astrometric and spectroscopic solutions for the clusters are likewise consistent. Conversely, the seemingly adjacent open cluster NGC 7788 is substantially younger and less distant. § INTRODUCTION The coauthor (D.G.T.) long suspected that the open clusters Berkeley 58 and NGC 7790 (Fig. <ref>) might be associated <cit.>. <cit.> established an age for Berkeley 58 of logτ=8.0 ± 0.1 using <cit.> isochrones, and <cit.> concluded that NGC 7790 appears the same age according to Padova models <cit.>. Importantly, the classical Cepheids CE Cas A, CE Cas B, and CF Cas are members of NGC 7790 <cit.>, and <cit.> argued that Berkeley 58 hosts CG Cas as a coronal member (Fig. <ref>, near bottom left). The other binary open cluster that may host a Cepheid is NGC 6716 and Collinder 394 <cit.>. However, there are ambiguities concerning the membership of the classical Cepheid BB Sgr therein, and a separate effort is underway to clarify that star's status. Here, differentially dereddened UuB_PBVGR_P photometry is employed to assess whether Berkeley 58 and NGC 7790 form a binary pair, in conjunction with Gaia DR3 astrometry and spectroscopy. § ANALYSIS Gaia DR3 data were utilized to inspect the field of view (Fig. <ref>), and the clusters share comparable proper motions (Table <ref>). Yet unrelated open clusters along an adjacent sightline (e.g., NGC 7788) feature similar astrometry. Consequently, a holistic approach was pursued whereby cluster ages and potential binarity were examined via dereddened color-magnitude diagrams, along with period-age relations for the Cepheids. In addition, a debate continues regarding the Gaia zero-point <cit.>, and a similar situation transpired for Hipparcos parallaxes <cit.>. Hence the present reliance on dereddened color-magnitude diagrams. A color-magnitude diagram of differentially dereddened Gaia B_P G R_P photometry is plotted in Fig. <ref>. Stars were individually dereddened using extinction estimates inferred from low-resolution Gaia spectroscopy (λ≃330-1050 nm). <cit.> and <cit.> provide preliminary estimates for T_ eff, logg, A_G, and E(B_P-R_P). <cit.> stress that work on refining their initial approach continues. Indeed, there are discernible offsets between DR3 spectroscopically dereddened main-sequences and unobscured clusters (e.g., NGC 2451). Nevertheless, Fig. <ref> confirms that Berkeley 58 and NGC 7790 are coeval, whereas NGC 7788 appears younger <cit.>. Only the brightest turnoff stars for NGC 7788 are shown in Fig. <ref>, since its main-sequence bisects the older and more distant binary cluster. Regarding the latter, Berkeley 58 appears marginally closer than NGC 7790. A small subset of rogue points were removed from Fig. <ref>. lcccccc|c Gaia DR3 and Isochrone results. Berkeley 58 NGC 7790 CE Cas A CE Cas B CF Cas CG Cas NGC 7788 π 0.30±0.04 0.29±0.04 0.31±0.02 0.31±0.02 0.29±0.01 0.27±0.01 0.33±0.04 μ_α -3.49±0.13 -3.24±0.10 -3.30±0.01 -3.30±0.02 -3.24±0.01 -3.24±0.01 -3.16±0.20 μ_δ -1.81±0.11 -1.73±0.09 -1.81±0.02 -1.87±0.02 -1.77±0.01 -1.67±0.02 -1.80±0.11 logτ 8.0±0.1a 8.0±0.1b 7.99±0.07 8.03±0.06 8.01±0.06 8.04±0.06 7.3-7.6c * Uncertainties for cluster astrometry and Cepheid logτ represent the standard deviation. a <cit.> b <cit.> c <cit.> A differential extinction analysis was likewise undertaken using independent UuBV photometry, and the Gaia DR3 astrometric solutions (Table <ref>). The ultraviolet data utilized are characterized as approximating Johnson U and UVEX Sloan u <cit.>. Standardizing terrestrial ultraviolet photometry is a longstanding challenge <cit.>, and there exists the Hyades anomaly <cit.>. UVEX u advantageously samples faint stars in both fields (Berkeley 58 & NGC 7790). Therefore, UVEX u was paired with BV data from <cit.>[see <cit.>] and <cit.>, and standardized to the coauthor's (D.G.T.) unpublished photoelectric U observations hosted at WebDA[<https://webda.physics.muni.cz/>] <cit.>. Importantly, the independent UuBV and B_P G R_P results converge upon the same conclusion (Fig. <ref>): Berkeley 58 and NGC 7790 are two clusters of comparable age which are in close proximity, and thus form a binary cluster. Early-type stars yield mean reddenings of E(B-V)≃0.7 and 0.5 for Berkeley 58 and NGC 7790, accordingly, which agree with a subset of published findings <cit.>. Intrinsic UBV colors stemmed from <cit.>. The following relationship was adopted to determine the reddening trend, E(U-B)≃ E(B-V)X + E(B-V)^2Y + Z, and constrain remaining photometric inhomogeneities rather than dust properties. A cutoff was imposed for faint Berkeley 58 photometry <cit.>. The differential dereddening results were further validated by constructing a V-BV color-magnitude diagram (not shown) tied to the mean extinction. <cit.> determined <E(B-V)>=0.52±0.05 for NGC 7790 <cit.>, while Berkeley 58 is observed through increased obscuration <cit.>. The cluster sequences once again align. Cepheid ages can be compared to the clusters using the framework of <cit.>, <cit.>, and <cit.>. Pulsation periods for the Cepheids CG Cas (P≃ 4^ d.4), CF Cas (P≃ 4^ d.8), CE Cas A (P≃ 5^ d.1), and CE Cas B (P≃ 4^ d.5) are comparable. The mean Cepheid ages and standard deviations are logτ=8.04±0.06, 8.01±0.06, 7.99±0.07, 8.03±0.06, respectively (Table <ref>). That matches the evolutionary age of the clusters Berkeley 58 and NGC 7790 <cit.>. § CONCLUSIONS A multifaceted approach indicates that Berkeley 58 and NGC 7790 are in close proximity, share a common age, and constitute a binary open cluster (Fig. <ref>, Table <ref>). That finding is supported by dereddened multiband UuB_PBVGR_P photometry, and DR3 astrometry and spectroscopy. A suite of four Cepheid members have ages consistent with that for the clusters (i.e., logτ≃ 8.0, Table <ref>). NGC 7788 is discernibly younger, and lies to the foreground, and is likely unrelated. Continued research on Cepheid variables in open clusters is desirable <cit.>. Acknowledgments: this research relies on initiatives such as CDS, NASA ADS, arXiv, Gaia, WebDA (Paunzen, Stütz, Janik, Mermilliod), <cit.>, UVEX. Janet Drew kindly responded to questions regarding the latter. aasjournal
http://arxiv.org/abs/2406.09362v1
20240613174756
Lévy measures on Banach spaces
[ "Jan van Neerven", "Markus Riedle" ]
math.FA
[ "math.FA", "math.PR", "Primary: 60B05, Secondary: 46B09, 60E05 60G57, 60H05" ]
§ ABSTRACT In this work, we establish an explicit characterisation of Lévy measures on both L^p-spaces and UMD Banach spaces. In the case of L^p-spaces, Lévy measures are characterised by an integrability condition, which directly generalises the known description of Lévy measures on sequence spaces. The latter has been the only known description of Lévy measures on infinite dimensional Banach spaces that are not Hilbert. Lévy measures on UMD Banach spaces are characterised by the finiteness of the expectation of a random γ-radonifying norm. Although this description is more abstract, it reduces to simple integrability conditions in the case of L^p-spaces. Is every knot isotopic to the unknot? Sergey A. Melikhov June 17, 2024 ===================================== § INTRODUCTION A σ-finite measure λ on the Borel σ-algebra (U) over a separable Banach space U with λ({0})=0 is called a Lévy measure if the function ϕ_ϱ U^∗→, defined by ϕ_ϱ(u^∗)=exp(∫_U(e^iuu^∗-1-iuu^∗_B_U(u)) λ(ụ)), u^∗∈ U^∗, is the characteristic function of a probability measure ϱ on (U). Here, B_U is the closed unit ball of U, and U^∗ denotes the Banach space dual of U. In the case where U is finite-dimensional, it is well known that a σ-finite measure λ on the Borel σ-algebra (^d) with λ({0})=0 is a Lévy measure if and only if ∫_^d(|r|^2∧ 1) λ (ṛ)<∞. Indeed, the latter often serves as the definition of a Lévy measure on ^d in the literature. Replacing the Euclidean norm · by the Hilbert space norm ·, the above equivalent characterisation of Lévy measures extends to separable Hilbert spaces; see Parthasarathy <cit.>. Surprisingly, although the integrability condition (<ref>) can be formulated in Banach spaces, this characterisation of Lévy measures ceases to hold in arbitrary Banach spaces U. In fact, for the case U=C[0,1] of continuous functions on [0,1] it is shown in Araujo <cit.> that there exists a σ-finite Borel measure λ on (U) with λ({0})=0 and satisfying ∫_U(u^2∧ 1) λ (ụ)<∞, but the function ϕ_ϱ defined in (<ref>) is not the characteristic function of a Borel measure ϱ on (U). Vice versa, there exists a σ-finite Borel measure λ on (U) with λ({0})=0 such that (<ref>) is the characteristic function of a probability Borel measure on (U) but λ does not satisfy (<ref>). The only explicit characterisation of Lévy measures on an infinite dimensional Banach space, which is not a Hilbert space, is known for the spaces ℓ^p() of summable sequences for p≥ 1. This result was derived in Yurinskii <cit.> by means of a two-sided L^p-bound of compensated Poisson measure (for which he credited Novikov, who is also credited in Marinelli and Röckner <cit.> for similar estimates). A sufficient condition in terms of an integrability condition similar to (<ref>) is known in Banach spaces U of Rademacher type p∈ [1,2). In the converse direction, in Banach spaces U of Rademacher cotype q∈ [2,∞) it is known for a σ-finite measure λ, that if (<ref>) defines the characteristic function of a probability measure on U then λ satisfies an integrability condition similar to (<ref>). In fact, these necessary or sufficient conditions can be used to characterise Banach spaces of Rademacher type p∈ [1,2] or of Rademacher cotype q∈ [2,∞). These results can be found in Araujo and Giné <cit.>. In the present paper, we derive explicit characterisations of Lévy measures for both L^p-spaces and UMD Banach spaces. In the case of L^p-spaces, Lévy measures are characterised by an integrability condition, which directly generalises the aforementioned results for ℓ^p() for p≥ 2 by Yurinskii <cit.>. Lévy measures on UMD Banach spaces are characterised by the finiteness of the expectation of a random γ-radonifying norm. Although the latter description is more abstract, we demonstrate its applicability by deducing similar integrability conditions for the special cases of L^p-spaces as obtained earlier by different arguments. For both L^p-spaces and UMD spaces, our method relies on recently achieved two-sided L^p-estimates of integrals of vector-valued deterministic functions with respect to a compensated Poisson random measure in Dirksen <cit.> and Yaroslavtsev <cit.>. Such inequalities are sometimes called Bichteler-Jacod or Kunita inequalities and suggested to be called Novikov inequalities in <cit.>, where more historical details can be found. Since the results in Dirksen <cit.> and Yaroslavtsev <cit.> are only formulated for simple functions, we provide the straightforward arguments for their extension to arbitrary vector-valued deterministic functions. Throughout the paper, all vector spaces are real. We write _+ = (0,∞) and :={1,2,3,…}. We use the shorthand notation A≃_q B to express that the two-sided inequality c_q A ≤ B ≤ c_q'A holds with constants 0<c_q≤ c_q'<∞ depending only on q. § PRELIMINARIES Throughout this paper, we let U be a separable Banach space with dual space U^∗ and duality pairing ··. The Borel σ-algebra on U is denoted by (U). By a standard result in measure theory (e.g., <cit.>) the separability of U implies that every finite Borel measure ϱ on U is a Radon measure, that is, that for all Borel sets B∈(U) and ε>0 there exists a compact set K such that K⊆ B and ϱ(B ∖ K)<ε. Such measures are uniquely described by their characteristic function, which is the function ϕ_ϱ U^∗→ given by ϕ_ϱ(u^∗)=∫_U e^iuu^∗ ϱ(ụ). §.§ Lévy measures In what follows we write B_U,r := {u∈ U: u≤ r} for the closed ball in U of radius r>0 centred at the origin. Its complement is denoted by B_U,r^c. We furthermore write B_U:= B_U,1 and B_U^c:= B_U,1^c for the closed unit ball of U and its complement. A σ-finite measure λ on the Borel σ-algebra (U) with λ({0})=0 is called a Lévy measure if the function ϕ U^∗→ defined by ϕ(u^∗)=exp(∫_U (e^iuu^∗-1-i uu^∗_B_U(u)) λ(ụ)) is the characteristic function of a probability measure η(λ) on (U). For any r>0, we often decompose λ=λ|_r+λ|_r^c, where λ_r(·):=λ(·∩ B_U,r) and λ|_r^c(·):=λ(·∩ B_U,r^c). Every finite measure λ on (U) with λ({0})=0 is a Lévy measure. In this case, a probability measure π(λ) on (U) is defined by π(λ)(B)=e^-λ(U)∑_k=0^∞λ^∗ k(B)/k!, and η(λ):= π(λ)∗δ_u(λ), where u(λ):=-∫_B_U u λ(ụ), has characteristic function given by (<ref>). Let λ be a σ-finite measure measure on (U) satisfying λ({0})=0. The following assertions are equivalent: (a) λ is a Lévy measure; (b) the measure λ|_r^c is finite for each r>0, and for some (equivalently, for each) sequence (δ_k)_k∈ decreasing to 0 the set {η(λ|_δ_k^c): k∈} is weakly relatively compact. In Hilbert spaces, Lévy measures can be characterised by an integrability condition: Let H be a separable Hilbert space. A σ-finite measure λ satisfying ({0})=0 on (H) is a Lévy measure if and only if ∫_H ( u^2∧ 1) λ(ụ) < ∞. §.§ Poisson random measures Let (,,) be a probability space and let (E,) be a measurable space. An integer-valued random measure is a mapping N: Ω×→∪{∞} with the following properties: * For all B∈ the mapping N(B): ω↦ N(,B) is measurable; * For all ∈ the mapping N_ω: B↦ N(,B) is a measure. The measure ν(B):= (N(B)) is called the intensity measure of N. An integer-valued random measure N : Ω×→∪{∞} with σ-finite intensity measure ν is called a Poisson random measure (see <cit.>) if the following conditions are satisfied: enumi2 * For all B ∈ the random variable N_B is Poisson distributed with parameter ν(B); * For all finite collections of pairwise disjoint sets B_1,…,B_n in the random variables N(B_1), … , N(B_n) are independent. In the converse direction, if ν is a σ-finite measure on , then by <cit.> there exists a probability space (,,) and a Poisson random measure N:Ω×→∪{∞} with intensity measure ν. The Poisson integral of a measurable function F: E → [0,∞) with respect to the Poisson random measure N is the random variable ∫_E F Ṇ defined pathwise by (∫_E F(σ) N(σ̣))(ω) := ∫_E F Ṇ_ = ∫_E F(σ) Ṇ_(σ̣), where N_ is the ∪{∞}-valued measure of part (ii) of the above definition. If N:Ω×→∪{∞} is a Poisson random measure with intensity measure ν, the compensated Poisson random measure is defined, for B∈ with ν(B)<∞, by (B):=N(B)-ν(B). In what follows we consider the special case where E = I× U, where I is an interval in _+ and U is a separable Banach space, and consider Poisson random measures N:Ω×(I× U) →∪{∞} whose intensity measure is of the form ν = ⊗λ, where is the Lebesgue measure on the Borel σ-algebra (I) and λ a σ-finite measure satisfying λ({0}) = 0 on the Borel σ-algebra (U). These assumptions will always be in force and will not be repeated at every instance. The compensated Poisson random measure is then given, for all t>0 and all B∈(U) with λ(B)<∞, by (t,B):=N(t,B)-tλ (B), using the shorthand notation N(t,B):= N((0,t]× B). For fixed t>0, a simple function with values in another Banach space V is a function F (0,t]× U→ V of the form F=∑_i=1^m∑_j=1^n _ (t_i,t_i+1]× B_j⊗ v_i,j, where 0=t_1<… < t_m+1=t, v_i,j∈ V, and the disjoint sets B_j∈(U) satisfy λ(B_j)<∞ for i=1,…, m and j=1,…, n. Given B∈(U), the compensated Poisson integral over (0,t]× B of a simple function F (0,t] × U→ V of the above form is the V-valued random variable I_B(F):=∫_(0,t]× B F(s,u) (ṣ, ụ):= ∑_i=1^m∑_j=1^n ( (t_i∧ t, t_i+1∧ t], B_j∩ B )⊗ v_i,j. A strongly measurable function F (0,t]× U→ V is said to be integrable with respect to if there exists a sequence of simple functions F_n (0,t]× U→ V such that (a) F_n→ F pointwise (⊗λ)-almost everywhere; (b) for any B∈(U), the sequence (I_B(F_n))_n∈ converges in probability as n→∞. We say that F is L^p-integrable with respect to , where p∈ [1,∞), when the simple functions can be chosen in such a way that I_B(F_n) ∈ L^p(;V) for all n∈ and the convergence in (b) takes place with respect to the norm of L^p(;V). It is easily checked that the limit of the sequence (I_B(F_n))_n∈ is well defined in the sense that it does not depend on the choice of the approximating sequence (F_n)_n∈. In this situation, the limit is defined as I_B(F) :=∫_(0,t]× B F(s,u) (ṣ,ụ) :=lim_n→∞∫_(0,t]× B F_n(s,u) (ṣ,ụ) = lim_n→∞I_B(F_n). If F is L^p-integrable with respect to N, then the limit I_B(F) belongs to L^p(;V), the Bochner space of V-valued random variables X with (X^p)<∞. For V=, the space of integrable functions can be explicitly characterised as follows. Let U be a separable Banach space, and consider a measurable function F (0,t]× U→ for some t>0. * F is integrable with respect to N if and only if ∫_(0,t]× U (F(s,u)∧ 1) ṣ λ(ụ)<∞. * F is integrable with respect to if and only if ∫_(0,t]× U (F(s,u)∧F(s,u)^2) ṣ λ(ụ)<∞. In the second case, the characteristic function ϕ_I_B(F)→ of the real-valued random variable I_B(F) is given, for β∈, by ϕ_I_B(F)(β)= exp( ∫_(0,t]× B( e^iβ F(s,u)-1-iβ F(s,u)) λ(ụ) ṣ). This theorem has the following straightforward vector-valued corollary. If F (0,t]× U→ V is integrable with respect to , then for all B∈(U) the characteristic function ϕ_I_B(F) V^∗→ of I_B(F) is given, for v^∗∈ V^∗, by ϕ_I_B(F)(v^∗)= exp( ∫_(0,t]× B( e^iF(s,u)v^∗-1-iF(s,u)v^∗) λ(ụ) ṣ). We choose a sequence (F_n)_n∈ of simple functions F_n (0,t]× U→ V converging to F in V (⊗λ)-almost everywhere such that the sequence (I_B(F_n))_n∈ converges in probability for all B∈(U). Denoting the limit by I_B(F), for fixed B∈(U) and v^∗∈ V^∗, it follows that F_n(·,·)v^∗ converges to F(·,·)v^∗ (⊗λ)- almost everywhere in (0,t], and from ∫_(0,t]× BF_n(s,u)v^∗ (ṣ, ụ) = ⟨∫_(0,t]× B F_n(s,u) (ṣ, ụ) , v^∗⟩, it follows that the sequence (I_B(F_n(·,·)v^∗))_n∈ converges in probability to the real-valued random variable I_B(F)v^∗. We conclude that F(·,·)v^∗ is integrable with respect to and, for all B∈(U), ∫_(0,t]× BF(s,u)v^∗ (ṣ, ụ) =⟨∫_(0,t]× B F(s,u) (ṣ, ụ) , v^∗⟩. Theorem <ref> implies that the characteristic function of the real-valued random variable I(v^∗):= I_B(F(·,·)v^∗) is, for β∈, given by ϕ_I(v^∗)(β)=exp( ∫_(0,t]× B( e^iβF(s,u)v^∗-1-iβF(s,u)v^∗) λ(ụ) ṣ). Letting I:=I_B(F), it follows from (<ref>) that ϕ_I(v^∗)=[e^iIv^∗]=[e^i I(v^∗)]=ϕ_I(v^∗)(1). This completes the proof. Every function F (0,t]× U→ V belonging to L^1_⊗λ((0,t] × U;V) is integrable with respect to . Let F (0,t]× U→ V be a simple function of the form (<ref>). Recalling that is the intensity measure of N, it follows for any B∈(U) that [ ∫_(0,t]× B F(s,u) (ṣ,ụ)] ≤∑_i=1^m∑_j=1^n [ N( (t_i∧ t, t_i+1∧ t], B_j∩ B )] ⊗v_i,j + ( t_i+1∧ t- t_i∧ t) λ(B_j∩ B)⊗v_i,j = 2 ∑_i=1^m∑_j=1^n ( t_i+1∧ t- t_i∧ t) λ(B_j∩ B)⊗v_i,j = 2∫_(0,t]× BF(s,u) λ(ụ) ṣ. Now let F (0,t]× U→ V be an arbitrary function in L^1_⊗λ((0,t]× U;V). Then there exists a sequence (F_n)_n∈ of simple functions converging to F in L_⊗λ^1((0,t]× U;V); by a routine argument, we may assume that this sequence also converges to F pointwise (⊗λ)-almost everywhere in V. Since (<ref>) shows that the integrals I_t,B(F_n) converge in mean and thus in probability, it follows that F is integrable with respect to . §.§ γ-Radonifying operators Consider a Hilbert space H and a Banach space V. The set of finite rank operators from H to V is represented as H⊗ V. Any finite rank operator T∈ℒ(H,V) can be expressed as T = ∑_n=1^N h_n⊗ v_n, where N≥ 1, with the sequence (h_n)_n=1^N being orthonormal in H, and (v_n)_n=1^N being a sequence in V. Here, the rank one operator h⊗ v is the operator mapping h'∈ H to [h',h]_H v∈ V. We introduce γ(H,V) as the completion of the space of finite rank operators from H to V under the norm ∑_n=1^N h_n⊗ v_n_γ(H,V)^2 := ∑_n=1^N γ_n⊗ v_n^2 , where (γ_n)_n=1^N is a sequence of independent, real-valued, standard normally distributed random variables. This norm is independent of the particular representation of the operator as a sum of finite rank operators, provided the sequence (h_n)_n=1^N used in the representation is orthonormal in H. The identity mapping h⊗ v↦ h⊗ v is extended to a contractive embedding from γ(H,V) into ℒ(H,V). Consequently, elements of γ(H,V) can be identified with bounded linear operators from H to V. These operators are called γ-radonifying operators. For comprehensive insights into γ-radonifying operators, the reader is referred to <cit.> and the review paper <cit.>. Spaces of γ-radonifying operators enjoy the following ideal property. Given Hilbert spaces H_1, H_2 and Banach spaces V_1, V_2, for every R∈ℒ(H_1, H_2), S∈γ(H_2,V_2), and T∈ℒ(V_2,V_1), it holds that TSR∈γ(H_1,V_1) and TSR_γ(H_1,V_1)≤T_ℒ(V_2,V_1) S_γ(H_2,V_2) R_ℒ(H_1,H_2). We will need various other standard results on γ-radonifying operators; these will be quoted as soon as the need arises. Let us finally give some examples: [Hilbert spaces] When both H and V are Hilbert spaces we have a natural isometric isomorphism γ(H,V) = ℒ_2(H,V), the space of Hilbert-Schmidt operators from H to V. [L^p-spaces] When H is a Hilbert space and V = L^p(S,μ), where (S,μ) is a σ-finite measure space and p∈ [1,∞), the mapping J:L^p(S,μ;H)→γ(H,L^p(S,μ)) given by (Jf)h : =f(·)h defines an isomorphism of Banach spaces γ(H,L^p(S,μ))≃_p L^p(S,μ;H) with isomorphism constants only depending on p. In the particular case when H is an L^2-space, the spaces on the right-hand side are usually referred to as spaces of square functions and play a prominent role in Harmonic Analysis. It is worth mentioning that the isomorphism (<ref>) extends to Banach lattices V with finite cotype. § LÉVY MEASURES ON L^P-SPACES We will now specialise to V = L^p_μ:=L^p_μ(S):= L^p(S,μ) for some p∈ (1,∞) and a measure space (S,,μ). It will always be assumed that μ is σ-finite and that L^p_μ is separable; these assumptions are for example satisfied if the measure space (S,,μ) is μ-countably generated according to <cit.>. The σ-finiteness of μ implies that the various uses of Fubini's theorem in this paper are justified, and also that we may identify the dual space (L_μ^p)^* with L_μ^p', where 1/p+1/p'=1 (although, as is well known, σ-finiteness is not needed for this identification in the regime p∈ (1,∞)). The separability of L_μ^p implies that the mapping f↦ f, viewed as a function from (L_μ^p,(L_μ^p)) to L_μ^p, is strongly measurable (by Pettis's measurability theorem, see <cit.>). When U is a Banach space, L_μ^p(S;U) denotes the Banach space of all (equivalence classes of) strongly μ-measurable f S→ U for which f_L_μ^p(S;U) := (∫_S f(s)^p μ(ṣ))^1/p is finite. With this notation, L_μ^p(S;) = L_μ^p. As before, let U be a separable Banach space, and let λ a σ-finite measure on (U) which satisfies ({0})=0. Let N denote a Poisson random measure on × U with intensity measure ⊗λ, and let denote the associated compensated Poisson random measure. For fixed t>0 and for a simple function F (0,t]× U→ L_μ^p(S) of the form (<ref>), for each B∈(U), the compensated Poisson integral I_B(F):=∫_(0,t]× B F(s,u) (ṣ, ụ) is defined as in Subsection <ref>. As a special case of the result in <cit.>, for simple functions F:(0,t]× U→ L^p_μ(S), B∈(U), exponents p∈ (1,∞), we have the equivalence of norms ([ sup_0<s≤ t∫_(0,s]× B F(r,u) (ṛ,ụ) _L^p_μ(S)^p])^1/p≃_pF _(0,t]× B__p, where _p:= _λ^p + _λ^p if 1<p<2, _λ^p∩_λ^p if 2≤ p<∞, with _λ^p = L^p_μ(S,L^2_⊗λ(× U)), _λ^p = L^p_⊗λ(× U;L^p_μ(S)), and F__λ^p + _λ^p := inf{F_1__λ^p+F_2__λ^p: F=F_1+F_2, F_1∈_λ^p, F_2∈_λ^p}, F__λ^p ∩_λ^p := max{F__λ^p, F__λ^p}. Here, both _λ^p and _λ^p are viewed as Banach spaces of (equivalence classes of) measurable real-valued functions on × S× U. Explicitly, the norms in these spaces are defined by F__λ^p^p : = F_L^p_μ(S,L^2_⊗λ(× U))^p =∫_S ( ∫_(0,∞)× UF(t,u)(s)^2 λ(ụ) ṭ)^p/2μ(ṣ) , F__λ^p^p : = F_L^p_⊗λ(× U;L^p_μ(S))^p =∫_(0,∞)× U∫_S |F(t,u)(s)|^p μ(ṣ) λ(ụ) ṭ, where the second identity in the first line follows from <cit.> or <cit.>, which allow us to take the point evaluation with respect to s inside the integral. For later use we observe that _p is a Banach function space in the sense of Bennett and Sharpley <cit.>; this follows from <cit.>. In particular, if 0≤ f≤ g almost everywhere with g∈_p, then f∈_p and f__p≤g__p. Keeping in mind that we are assuming p∈ (1,∞), the spaces _p are reflexive as Banach spaces; this follows from, e.g., <cit.> along with the easy fact that if (X_0,X_1) is an interpolation couple of reflexive Banach spaces, then the spaces X_0∩ X_1 and X_0+X_1 are reflexive. By <cit.>, this implies that the norm of _p is absolutely continuous. The class of simple functions F× U→ L_μ^p(S) is dense in ℐ_p. This follows from <cit.>. As a consequence of the lemma above, for all t∈_+ and B∈(U), the compensated Poisson integral I_B(F) can now be defined by a standard density argument for all strongly measurable functions F (0,t]× U→ L_μ^p(S) such that F_(0,t]× B∈ℐ_p. We now set U=L_μ^p(S) to obtain the following characterisation of Lévy measures in L_μ^p(S). A σ-finite measure λ on (L^p_μ) with λ({0})=0 is a Lévy measure if and only if λ|_r^c is a finite measure for all r>0 and, moreover, * if p∈ [2,∞), it satisfies max{∫_S(∫_B_L_μ^pf(s)^2 λ(f̣) )^p/2μ(ṣ) , ∫_B_L_μ^pf^p_L^p_μ λ(f̣) } <∞. * if p∈ (1,2), it satisfies inf{∫_(∫_B_L_μ^pF_1(f)(s)^2 λ(f̣) )^p/2μ(ṣ) + ∫_B_L_μ^pF_2(f)^p_L_μ^p λ(f̣) }<∞, where the infimum is taken over all functions F_1∈𝒮^p_λ and F_2∈_λ^p with F_1(f)+F_2(f)=f for all f∈ B_L_μ^p={g∈ L_μ^p:g_Ł_μ^p≤ 1}. For the closed unit ball B_L_μ^p={f∈ L^p_μ: f_L_μ^p≤ 1}, we define the function G (0,1]× L_μ^p→ L_μ^p, G(t,f) =f _B_L_μ^p(f), and, letting D_δ:={f∈ L^p_μ: δ < f_L_μ^p≤ 1} for some δ∈ (0,1), we introduce analogously G_δ (0,1]× L_μ^p→ L_μ^p, G_δ(t,f) =f _D_δ(f). Note, that G and G_δ do not depend on t, but we left the previous notation for consistency. `If': Let N be a Poisson random measure with intensity ⊗λ. Assume first that the support of λ is contained in the closed unit ball B_L_μ^p. Let (δ_k)_k∈⊆ (0,1) be a sequence decreasing to 0. The assumed integrability conditions guarantee that G belongs to ℐ_p and thus also G_δ_k for each k∈, because ℐ_p is a Banach function space, as previously pointed out in Remark <ref>. It follows that the functions G and G_δ_k for k∈ are integrable with respect to . Thus, we can define the random variables X:=∫_(0,1]× B_L_μ^p f (ṣ,f̣) and X_k:=∫_(0,1]× D_δ_k f (ṣ,f̣), k∈, and Corollary <ref> shows that the probability distributions of X and X_k coincide with η(λ) and η(λ|_δ_k^c) for each k∈. Since ℐ_p is reflexive, it has absolutely continuous norm according to <cit.>. This enables us to conclude that lim_k→∞G-G_δ_k_ℐ_p=0. The isometry (<ref>) implies that X_k→ X in L^p(Ω; L_μ^p), and thus η(λ|_δ_k^c)→η(λ) weakly in the space of Borel probability measures as k→∞. Together with the assumed finiteness of λ|_r^c<∞ for all r>0, Theorem <ref> shows that λ is a Lévy measure. For the general case of a σ-finite measure λ with ({0})=0, we apply the decomposition λ=λ|_1+λ|_1^c of (<ref>). The measure λ|_1 is a Lévy measure by the first part, and λ|_1^c is a Lévy measure since it is finite by assumption. Now <cit.> guarantees that λ is a Lévy measure. `Only if': Assume that λ is a Lévy measure. Theorem <ref> implies that λ|_r^c is a finite measure for all r>0. To establish the integrability conditions we can assume that the Lévy measure λ has support in the closed unit ball B_L_μ^p. Let N be a Poisson random measure with intensity ⊗λ. Let (δ_k)_k∈⊆ (0,1) be an arbitrary sequence decreasing to 0. Since λ(D_δ_k)<∞, Lemma <ref> guarantees that G_δ_k is integrable with respect to for each k∈. Thus, we can define the random variables X_k:=∫_(0,1]× D_δ_k f (ṣ,f̣) for k∈. Since X_k has the same distribution as η(λ|_δ_k^c) by Corollary <ref>, Theorem <ref> implies that (X_k)_k∈ converges weakly to η(λ) in the space of Borel probability measures on (L^p_μ). Letting Y_k:=X_k-X_k-1 for k∈, with X_0:=0, it follows that the random variables Y_k are independent as the sets D_δ_k ∖ D_δ_k-1 are disjoint for all k∈. Since X_k=Y_1+… +Y_k is a sum of independent random variables converging weakly, Lévy's theorem in Banach spaces (see, e.g., <cit.>) implies that the random variables X_k converge almost surely to a random variable X, which must have distribution η(λ). Since (X^p)<∞ by <cit.>, it follows from <cit.> that X_k→ X in L^p(Ω;L_μ^p) as k→∞. The isometry (<ref>) implies that G_δ_k→ G in _p as k→∞. If p=2, Fubini's theorem implies ∫_S∫_B_L_μ^2f(s)^2 λ(f̣) μ(ṣ) = ∫_B_L_μ^2∫_S f(s)^2 μ(ṣ) λ(f̣) = ∫_B_L_μ^2f^2_L^2_μ λ(f̣). Consequently, the two integrals in part (a) of Theorem <ref> coincide, and it follows that a σ-finite measure λ on L^2_μ with λ({0})=0 is a Lévy measure if and only if ∫_L^2_μ(f^2_L_μ^2∧ 1) λ(f̣)<∞. This corresponds to the well known characterisation of Lévy measures on Hilbert spaces in Theorem <ref>. In this Example, we consider the sequence space ℓ^p = ℓ^p() with p∈ [2,∞). The canonical sequence of unit vectors in ℓ^p is denoted by (e_k)_k∈. Let λ be a σ-finite measure on (ℓ^p) with λ({0})=0 and λ|_r^c finite for all r>0. Then the condition in part (a) of Theorem <ref> is satisfied if ∑_k=1^∞(∫_B_ℓ^pfe_k^2 λ(f̣))^p/2<∞ and ∫_B_ℓ^pf^p_ℓ^p λ(f̣)<∞. Taking into account the assumption that λ|_r^c is finite for all r>0, we can conclude that λ is a Lévy measure if and only if ∑_k=1^∞(∫_B_ℓ^pfe_k^2 λ(f̣))^p/2<∞ and ∫_ℓ^p(f^p_ℓ^p∧ 1) λ(f̣)<∞. This characterisation coincides with the result derived in <cit.>. In this example, we consider the sequence space ℓ^p = ℓ^p() for p∈ (1,2). Let λ be σ-finite measure on (ℓ^p) with λ({0})=0 and λ|_r^c finite for all r>0. Theorem <ref> shows that λ is a Lévy measure if and only if inf{∑_k=1^∞∫_B_ℓ^pF_1(f)^2 λ(f̣)e_k^p/2 + ∫_B_ℓ^pF_2(f)_ℓ^p^p λ(f̣)}<∞, where the infimum is taken over all functions F_1∈_λ^p and F_2∈_λ^p with F_1(f)+F_2(f)=f for all f∈ B_ℓ^p={g∈ℓ^p:g_ℓ^p≤ 1}. If we take F_1=_ℓ^p_B_ℓ^p and F_2=0 or F_1=0 and F_2=_ℓ^p_B_ℓ^p then we obtain the sufficient conditions ∑_k=1^∞(∫_B_ℓ^pfe_k^2 λ(f̣))^p/2<∞ or ∫_B_ℓ^pf^p_ℓ^p λ(f̣)<∞. The second condition is known as a sufficient condition due to the fact that the space ℓ^p is of type p for p∈ [1,2]; see <cit.>. With the same methods as in Theorem <ref>, but using the L^p-estimates in martingale type and cotype spaces from <cit.>, one can show that if U is a separable Banach space with martingale type p∈ (1,2] and λ is a σ-finite measure on (U) with ({0}) =0, then ∫_U ( u^p ∧ 1) λ(ụ) < ∞ implies that λ is a Lévy measure. In the converse direction, if U has martingale cotype q∈ [2,∞), and if λ is a Lévy measure on (U), one can similarly show that ∫_U ( u^q ∧ 1) λ(ụ) < ∞. We leave the details to the reader, since these results are already covered, with a different method of proof, in <cit.>. The following result provides an alternative characterisation of Lévy measures on ℓ^p for p∈ (1,2). It does does not rely on Theorem <ref> but on the following formula for a real-valued random variable Y with characteristic function ϕ_Y and p∈ (0,2): [Y^p]=c_p ∫_0^∞1/β^1+p(1-ϕ_Y(β)) β̣ for a constant c_p depending only on p; see <cit.>. The following result coincides with <cit.>. Let p∈ (1,2) and λ be a σ-finite measure on (ℓ^p) with λ({0})=0. Then the following are equivalent: * λ is a Lévy measure; * λ|_r^c is finite for all r>0 and ∑_k=1^∞∫_0^∞1/β^1+p(1-exp(∫_f_ℓ^p≤ 1 (cos (βfe_k)-1) λ (f̣))) β̣< ∞ . Let λ be a Lévy measure, which immediately implies λ|_r^c<∞ for all r>0 by Theorem <ref>. Thus, we can assume that λ(B_U^c)=0. Let X be a random variable in ℓ^p with distribution η(λ). Denoting the canonical base in ℓ^p by (e_k)_k∈, it follows that ϕ_Xe_k(β)=ϕ_X(β e_k) for all β∈. Applying (<ref>) results in [ X^p] = ∑_k=1^∞[Xe_k^p] = c_p ∑_k=1^∞∫_0^∞1/β^1+p(1-ϕ_X(β e_k)) β̣ = c_p∑_k=1^∞∫_0^∞1/β^1+p(1-exp(∫_B_U (cos (βfe_k)-1) λ (f̣))) β̣. Since <cit.> guarantees [X^p]<∞, we obtain the claimed summability. For the converse direction, we assume first that the support of λ is contained in B_U. Let N be a Poisson random measure on ×ℓ^p with intensity measure ⊗λ. We define D_m:={f∈ℓ^p: δ_m < f_ℓ^p≤ 1} for a sequence (δ_m)_m∈ decreasing to 0. Since λ(D_m)<∞, Lemma <ref> guarantees that _(0,1]× D_m is integrable with respect to for each m∈. Thus, we can define the random variables X_m:=∫_(0,1]× D_m f (ṣ,f̣) for m∈. Corollary <ref> implies for m>n that ϕ_X_m-X_n(g)=exp( ∫_D_m∖ D_n( e^ifg-1-ifg) λ(f̣)) g∈ℓ^p^'. Equation (<ref>) implies for every m,n∈ and m>n that X_m-X_n^p =∑_k=1^∞[X_m-X_n)e_k^p ] =c_p ∑_k=1^∞∫_0^∞1/β^1+p(1-ϕ_X_m-X_n(β e_k)) β̣ =c_p∑_k=1^∞∫_0^∞1/β^1+p(1-exp(∫_D_m ∖ D_n (cos (βfe_k)-1) λ (f̣)))β̣. Lebesgue's dominated convergence theorem shows that [X_m-X_n^p]→ 0 as m,n→∞, and thus X_n converges to a random variable X in L^p(Ω;ℓ^p). Since the monotone convergence theorem shows for all g∈ℓ^p^' that lim_m→∞ϕ_X_m(g) =exp( ∫_B_ℓ^p( e^ifg-1-ifg) λ(f̣)), it follows that the distribution of X equals η(λ). Thus, λ is a Lévy measure on (ℓ^p). § LÉVY MEASURES ON BANACH SPACES WITH TYPE OR COTYPE Intro with definitions to be written by Jan Let U be a separable Banach space and λ be a σ-finite measure on (U) with λ({0})=0, and let N a Poisson random measure on × U with intensity measure ⊗λ. It is shown in <cit.> that if V is a Banach space of martingale type p∈ (1,2] then every simple function F× U → V satisfies for every B∈(U) that ([sup_t > 0∫_(0,t]× B F(r,u) (ṛ, ụ) ^p ])^1/p≤_B F__λ^p. Similarly, if V has martingale cotype q∈ [2,∞), then by <cit.> for every simple function F× U → V and every B∈(U) we have _B F__λ^q≤( [∫_(0,t]× B F(r,u) (ṛ, ụ)^q ])^1/q. Here, as before but with a different target space, _λ^p = L^p_⊗λ(× U;V), with F__λ^p^p : = F_L^p_⊗λ(× U;V)^p =∫_(0,∞)× UF(s,u)^p λ(ụ) ṣ. Let V be of martingale type p∈ (1,2]. If a function F× U → V satisfies F__λ^p<∞, then F is integrable with respect to . Since the function F belongs to _λ^p, there exists a sequence of simple functions F_n converging to F in _λ^p and pointwise (⊗λ)-almost everywhere. It follows for all t>0 and B∈(U) from (<ref>) that ∫_(0,t]× B F_n(r,u) (ṛ, ụ) converges in L^p_ℙ(Ω;V), which completes the proof. Let U be a separable Banach space, and let λ be a σ-finite measure on (U) with λ({0})=0. * If U is of martingale type p∈ (1,2], then λ is a Lévy measure if ∫_U (u^p ∧ 1) λ(ụ)<∞. * If U is of martingale cotype q∈ [2,∞) and λ is a Lévy measure, then ∫_U (u^q ∧ 1) λ(ụ)<∞. We introduce for any set B∈(U) the notation _(0,1]× B(t,u)↦ u _(0,1]× B(t,u). Part (a): Let N be a Poisson random measure with intensity λ and assume first that the support of λ is contained in B_U. Let (δ_k)_k∈⊆ (0,1) be a sequence decreasing to 0 and define D_k:={u∈ U: δ_k < u≤ 1}. Since _(0,1]× B_U__λ^p<∞ and _(0,1]× D_k__λ^p<∞, Lemma <ref> guarantees that the functions _(0,1]× B_U and _(0,1]× D_k for k∈ are integrable with respect to . Thus, we can define the U-valued random variables X:=∫_(0,1]× B_U u (ṣ,ụ) and X_k:=∫_(0,1]× D_k u (ṣ,ụ), k∈, and Corollary <ref> shows that the probability distributions of X and X_k coincide with η(λ) and η(λ|_δ_k^c) for each k∈. Since Lebesgue's theorem on dominated convergence shows lim_k→∞_(0,1]× B_U- _(0,1]× D_k__λ^p=0, inequality (<ref>) implies that X_k→ X in L^p(Ω;V), and thus η(λ|_δ_k^c)→η(λ) weakly in the space of Borel probability measures as k→∞. Since condition (<ref>) yields λ|_r^c<∞ for all r>0, Theorem <ref> shows that λ is a Lévy measure. For the general case of a measure λ with arbitrary support, we apply the decomposition λ=λ|_1+λ|_1^c. The measure λ_1 is a Lévy measure by the first part, and λ|_1^c is a Lévy measure since it is finite. Now <cit.> guarantees that λ is a Lévy measure. Part (b): Assume that λ is a Lévy measure. Theorem <ref> implies that λ|_1^c<∞. Thus, to establish the integrability conditions we can assume that the Lévy measure λ has support in B_U. Let N be a Poisson random measure with intensity ⊗λ. For an arbitrary sequence (δ_k)_k∈⊆ (0,1) decreasing to 0 we define D_k:={u∈ U: δ_k < u≤ 1} with D_0:=∅. Since λ(D_k)<∞ according to Theorem <ref>, Lemma <ref> guarantees that _(0,1]× D_k is integrable with respect to for each k∈. Thus, we can define the U-valued random variables X_k:=∫_(0,1]× D_k u (ṣ,ụ) for k∈. Since X_k has the same distribution as η(λ|_δ_k^c) by Corollary <ref>, Theorem <ref> implies that (X_k)_k∈ converges weakly to η(λ) in the space of Borel probability measures on (U). Letting Y_k:=X_k-X_k-1 for k∈ and X_0=0, it follows that the random variables Y_k are independent as the sets D_k ∖ D_k-1 are disjoint for all k∈. Since X_k=Y_1+… +Y_k is a sum of independent random variables converging weakly, Lévy's theorem in Banach spaces (see, e.g., <cit.>) implies that X_k converges almost surely to a random variable X, which must have distribution η(λ). Since Corollary 3.3 in <cit.> guarantees (X^q)<∞, Corollary 3.3 in <cit.> implies that X_k→ X in L^q(Ω;U) as k→∞. It follows from (<ref>) that ∫_U u^q λ(ụ) =sup_k∈_(0,1]× D_k__λ^q^q ≤sup_k∈( [X_k^q])^1/q<∞, which completes the proof. Banach spaces of Rademacher type and cotype are characterised in terms of Lévy measures on <cit.>: a Banach space is of Rademacher type p for p∈ [1,2) if and only if every σ-finite measure on (U) with λ({0})=0 and satisfying (<ref>) is a Lévy measure. Theorem <ref> does not characterise Banach spaces of type p but gives the same sufficient condition to be a Lévy measure. In fact, Theorem <ref> is slightly less general as martingale type p implies Rademacher type p (but not the other way). A similar comment applies to cotype. § LÉVY MEASURES ON UMD BANACH SPACES The aim of this section is to extend the results of the preceding section to UMD-spaces. This class of Banach spaces plays a prominent role in both stochastic analysis, where it provides the correct setting for Banach space-valued martingale theory and stochastic integration (see <cit.> and the references therein) and vector-valued harmonic analysis (see <cit.> and the references therein). A Banach space V is said to be a UMD-space when, for some (or equivalently, any) given p∈ (1,∞), there exists a constant β_p,V≥ 1 such that for every V-valued martingale difference sequence (d_j)_j=1^n and every {-1, 1}-valued sequence (ϵ_j)_j=1^n we have (∑_j=1^n ϵ_j d_j ^p)^1/p≤β_p,V (∑_j=1^n d_j^p)^1/p. Hilbert spaces and the spaces L^p(S,μ) for 1<p<∞ and measure spaces (S,μ) are examples of UMD-spaces. Additionally, when V is a UMD-space, then for any 1<p<∞, the Bochner spaces L^p(S,μ;V) are UMD-spaces. A comprehensive treatment of UMD-spaces is offered in <cit.> and the references therein. Let V be a UMD-space and M×Ω→ V a purely discontinuous martingale. For a fixed time t>0, define a random operator J_Δ M: ℓ^2((0,t]) → V by J_Δ Mh :=∑_s∈ (0,t] h_s Δ M(s), h = (h_s)_s∈ (0,t]∈ℓ^2((0,t]), where Δ M is the jump process associated with M, and ℓ^2((0,t]) is the Hilbert space of all mappings f:(0,t]→ satisfying f_ℓ^2((0,t])^2 := ∑_s∈ (0,t] |f(s)|^2 <∞, the sum on the right-hand side being understood as the supremum of all sums ∑_s∈ F |f(s)|^2 with F⊆ (0,t] finite. Now let p∈ [1,∞) be given and M be a V-valued purely discontinuous martingale. It is shown in <cit.> that M is an L^p-martingale if and only if for each t≥ 0 we have J_Δ M∈γ(ℓ^2((0,t]),V) almost surely and [ J_Δ M_γ(ℓ^2((0,t]),V)^p] <∞, where γ(ℓ^2((0,t]),V) is the Banach space of γ-radonifying operators from ℓ^2((0,t]) to V, and that, moreover, in this situation one has the equivalence of norms [sup_0< s≤ tM(s)^p] ≃_p,V[ J_Δ M_γ(ℓ^2((0,t]),V)^p]. In the remainder of this section, we let U be a separable Banach space, and consider a Poisson random measure N with intensity measure ⊗λ for a σ-finite measure λ on (U) with λ({0})=0. The compensated Poisson random measure is denoted by . Our aim is to apply (<ref>) to obtain an L^p-bound (see <cit.>) for martingales M of the form M_B(s):=∫_(0,s]× B F(r,u) (ṛ,ụ), s∈ (0,t], for simple functions F (0,t] × U→ V and some t>0 and B∈(U) fixed. This L^p-bound will allow us to extend the class of functions integrable with respect to N to a more general class of integrands. For a measurable function g: (0,t] × U→ we write g∈ L_N^2((0,t]× U) if for all ω∈Ω we have g_L_N^2((0,t]× U)(ω):= ∫_(0,t]× U |g(r,u)|^2 N(,ṛ, ụ) < ∞. In this way we may interpret the expression g_L_N^2((0,t] × U) as a nonnegative random variable on Ω. Now, for a strongly measurable function F (0,t] × U→ V introduce the restriction F_B (0,t] × U→ V defined by F_B(r,u):=_B(u)F(r,u) for the set B defining the martingale M_B in (<ref>). If F satisfies ∫_(0,t]× U |F_B(r,u)v^∗|^2 N(,ṛ, ụ) <∞ ∀ω∈Ω, v^∗∈ V^∗, we may now define, for every ω∈Ω, a bounded operator T_F_B(ω): L_N(ω)^2((0,t]× U)→ V by the Pettis integral (which is well defined by <cit.>) T_F_B(ω) g :=∫_(0,t]× U g(r,u) F_B(r,u) N(,ṛ, ụ). The Pettis measurability theorem implies that the V-valued random variable ω↦ T_F_B(ω) g is strongly measurable. Pointwise on Ω, the following identities holds for all v^∗∈ V^∗: J_Δ M_B^∗ v^∗_ℓ^2((0,t])^2 =∑_s∈ (0,t]Δ M_B(s)v^∗^2 = ∫_(0,t]× U |F_B(r,u)v^∗|^2 N(ṛ, ụ) = T_F_B^∗ v^∗_L_N^2((0,t]× U)^2, the middle identity being a consequence of <cit.>. Hence, as consequence of the comparison theorem for γ-radonifying operators (see <cit.>), applied pointwise on Ω, we obtain that T_F_B∈γ(L^2_N((0,t]× U), V) almost surely if and only if J_Δ M_B∈γ(ℓ^2((0,t]),V) almost surely, in which case we have almost surely the identity of norms J_Δ M_B_γ(ℓ^2((0,t]),V) = T_F_B_γ(L^2_N((0,t]× U), V). These considerations are key to proving the following theorem. Let V be a UMD-space and let p∈ [1,∞). For fixed t>0, let F (0,t]× U→ V be a strongly measurable function satisfying the weak L^2-integrability condition (<ref>) for all B∈(U). Then the following assertions are equivalent: * F is L^p-integrable with respect to N and satisfies, for all B∈(U), [ sup_0< s≤ t∫_(0,s]× B F(r,u) (ṛ,ụ)^p] < ∞; * T_F_B is in γ(L^2_N((0,t]× U), V) almost surely for all B∈(U) and T_F_B_γ(L^2_N((0,t]× U), V)^p <∞. In this situation, for all B∈(U), one has [ sup_0< s≤ t∫_(0,s]× B F(r,u) (ṛ,ụ)^p] ≃_p,VT_F_B_γ(L^2_N((0,t]× U), V)^p, with constants depending only on p and V. <ref><ref>: If F is simple and satisfies the conditions of <ref>, this implication follows by combining (<ref>) and (<ref>), the point here being that the L^p-integrability of F with respect to N holds by definition. Suppose now that F is strongly measurable and satisfies the conditions of <ref>. The idea of the proof is to approximate F with simple functions satisfying the conditions of the theorem. To this end, fix B∈(U) and let (_n)_n∈ and (_n)_n∈ be filtrations generating the Borel σ-algebras of _+ and U, such that each _n and _n consists of finitely many Borel sets. For each ω∈Ω, let _n L_N(ω)^2((0,t]× U)→ L_N(ω)^2((0,t]× U) denote the conditional expectation with respect to the product σ-algebra _n×_n. The functions F_n,B := _n F_B are simple, and each of them satisfies the conditions of the theorem. For each ω∈Ω and v^∗∈ V^∗, the L^2-contractivity of conditional expectations gives ∫_(0,t]× U |(_n F_B)(r,u)v^∗|^2 N(,ṛ, ụ) = ∫_(0,t]× U |_nF_Bv^∗(r,u)|^2 N(,ṛ, ụ) = _nF_Bv^∗_L_N(ω)^2((0,t]× U)^2 ≤F_Bv^∗_L_N(ω)^2((0,t]× U))^2 <∞. The self-adjointness of _n, see <cit.>, implies for each ω∈Ω and g∈ L_N(ω)^2((0,t]× U) that T_F_n,B(ω)g = ∫_(0,t]× B g(r,u) (_n F_B)(r,u) N(,ṛ, ụ) = ∫_(0,t]× B (_n g)(r,u) F_B(r,u) N(,ṛ, ụ) = (T_F_B(ω) ∘_n)g. Therefore, we conclude T_F_n,B(ω) = T_F_B(ω) ∘_n ∈γ(L_N(ω)^2((0,t]× U),V) by the right ideal property and, using again that _n is contractive, T_F_n,B(ω) _γ(L_N(ω)^2((0,t]× U),V) = T_F_B(ω) ∘_n _γ(L_N(ω)^2((0,t]× U),V) ≤ T_F_B(ω) _γ(L_N(ω)^2((0,t]× U),V). Next, since _n→ I strongly, it follows from <cit.> that lim_n→∞ T_F_B-F_n,B(ω)_γ(L_N(ω)^2((0,t]× U),V) = T_F_B(ω) - T_F_n,B(ω)_γ(L_N(ω)^2((0,t]× U),V) = 0. Finally, by monotone convergence, lim_n→∞ T_F_n,B_γ(L_N^2((0,t]× U),V)^p = T_F_B_γ(L_N^2((0,t]× U),V)^p. Since the theorem holds for each of the F_n,B, using routine arguments the theorem now follows by letting n→∞. <ref><ref>: Suppose that F is strongly measurable and satisfies the conditions of <ref>. Choose a sequence of simple functions F_n (0,t]× U→ V such that F_n→ F pointwise (⊗λ)-almost everywhere and, for any B∈(U), one has ∫_(0,t]× B F_n(r,u) (ṛ,ụ) →∫_(0,t]× B F(r,u) (ṛ,ụ) in L^p(;V) as n→∞. By Doob's inequality, one then also has lim_n,m→∞[ sup_0< s≤ t∫_(0,s]× B (F_n(r,u) - F_m(r,u)) (ṛ,ụ)^p] = 0. Letting F_n,B:=_BF_n and F_B:=_BF, it follows from (<ref>) and (<ref>) that lim_n,m→∞T_F_n,B - T_F_m,B_γ(L^2_N((0,t]× U), V)^p = 0. Passing to a subsequence, we may assume that, for almost all ∈, lim_n→∞ T_F_n,B() -T_F_m,B() _γ(L^2_N()((0,t]× U), V) = 0. By completeness of γ(L^2_N()((0,t]× U), V) it follows that, for almost all ∈, the limit T_B():= lim_n→∞ T_F_n,B() exists in γ(L^2_N()((0,t]× U), V). Then also, for any v^∗∈ V^∗, (T_F_n,B())^∗ v^∗→ (T_B())^∗ v^∗ in L^2_N()((0,t]× U). Hence, for all g∈ L^2_N()((0,t]× U) and v^∗∈ V^∗, it follows that T_B()gv^∗ =lim_n→∞T_F_n,B()gv^∗ = lim_n→∞∫_(0,t]× U gF_n,Bv^∗Ṇ(). This shows that F_n,Bv^∗→ (T_B())^∗ v^∗ weakly in L^2_N()((0,t]× U). Since F_n,Bv^∗→F_Bv^∗ pointwise, a standard argument establishes that F_Bv^∗ = (T_B())^∗ v^∗ (⊗λ)-almost everywhere, and hence as elements of L^2_N()((0,t]× U). But (by pairing with functions g∈ L^2_N()((0,t]× U)) this is the same as saying that T_B = T_F_B. Putting things together, we have shown that, for almost all ∈, lim_n→∞ T_F_n,B() = T_F_B() with convergence in γ(L^2_N()((0,t]× U), V). The finiteness of T_F_B_γ(L^2_N((0,t]× U), V)^p now follows from Fatou's lemma. This completes the proof of the implication <ref><ref>. The assertion about equivalence of norms follows by passing to the limit n→∞ in the preceding argument. We will apply this theorem to obtain a necessary and sufficient condition for a σ-finite measure on a separable UMD space to be a Lévy measure. To this end, let U be a separable Banach space. It will be useful to introduce the function G: (0,1] × U→ U defined by G(t,u):= u _B_U(u), where B_U={u∈ U:u≤ 1} as before. Note, that G does not depend on t, but we left the previous notation for consistency. Suppose that λ is a σ-finite measure on (U) with λ({0})=0. If the image measure λu^∗ is a Lévy measure on for all u^∗∈ U^∗, then G satisfies the weak L^2-integrability condition (<ref>) for all B∈(U), or equivalently, for all u^∗∈ U^∗ we have ∫_(0,1]× B_U |uu^∗|^2 N(ṣ,ụ) <∞ almost surely. Assuming without loss of generality that u^∗≤ 1, this follows from Theorem <ref>, because ∫_(0,1]× B_U |uu^∗|^2 ṣ λ(ụ) = ∫_B_U |uu^∗|^2 λ(ụ) ≤∫ _[-1,1] r^2 λu^∗(ṛ), and the last expression is finite (take H = in Theorem <ref>) since by assumption λu^∗ is a Lévy measure on . We now obtain the following characterisation of Lévy measures in the setting of UMD-spaces. Let U be a separable UMD-space and λ a σ-finite measure on (U) with λ({0})=0. Then λ is a Lévy measure if and only if the following conditions are satisfied: * λ|_r^c is a finite measure for all r>0; * λu^∗ is a Lévy measure on for all u^∗∈ U^∗; * for some (equivalently, for all) p∈ [1,∞) we have T_G_γ(L^2_N((0,1]× U), U)^p<∞, where N denotes a Poisson random measure with intensity measure ⊗λ. The proof follows the lines of Theorem <ref>. Let (δ_k)_k∈⊆ (0,1) be a sequence decreasing to 0. Define D_k:={u∈ U: δ_k < u≤ 1} and the functions G_k (0,1]× U→ U, G_k(t,u):= u _D_k(u), `If': Assume first that the support of λ is contained in B_U. By the lemma and condition (ii), F satisfies the weak L^2-integrability condition (<ref>). Condition (iii) guarantees by Theorem <ref> that the function G is integrable with respect to . By covariance domination, the same is true for the functions G_k for k∈. Thus, we can define the U-valued random variables X:=∫_(0,1]× B_U u (ṣ,ụ) and X_k:=∫_(0,1]× D_k u (ṣ,ụ), k∈, and Corollary <ref> shows that the probability distributions of X and X_k coincide with η(λ) and η(λ|_δ_k^c) for each k∈. Applying pointwise on Ω the γ-dominated convergence theorem (see <cit.>), and using the fact that (G_k)_k∈ increases pointwise to G, it follows that lim_k→∞T_G-T_G_k_γ(L^2_N((0,1]× U), U)^p=0 pointwise on Ω. The equivalence of norms of Theorem <ref> and dominated convergence imply that X_k→ X in L^p(Ω; U), and thus η(λ|_δ_k^c)→η(λ) weakly in the space of Borel probability measures as k→∞. Together with the finiteness of λ|_r^c<∞ for all r>0, which was assumed in condition (i), Theorem <ref> shows that λ is a Lévy measure. For the general case of a measure λ with arbitrary support, we apply the decomposition λ=λ|_1+λ|_1^c. The measure λ_1 is a Lévy measure by the first part, and λ|_1^c is a Lévy measure since it is finite. <cit.> guarantees that λ is a Lévy measure. `Only if': Assume that λ is a Lévy measure. Theorem <ref> implies that λ|_r^c is a finite measure for all r>0. This gives (i). It is clear that the image measures λu^∗ are Lévy measures, which is (ii). To establish the integrability condition (iii), we can assume that the Lévy measure λ has support in B_U. Let N be a Poisson random measure with intensity ⊗λ. As in the proof of Theorem <ref> one sees that the U-valued random variables X_k:=∫_(0,1]× D_k u (ṣ,ụ) for k∈ converge almost surely to a random variable X, which must have distribution η(λ) as defined in Subsection <ref>. Since <cit.> guarantees (X^p)<∞, <cit.> implies that X_k→ X in L^p(Ω;U) as k→∞. We claim that from this it follows that G is L^p-integrable with respect to N and X = ∫_(0,1]× B_U F = ∫_(0,1]× B_U u (ṣ,ụ). All this follows from the arguments in the proof of Theorem <ref>: As in the proof of <ref><ref>, the fact that (X_k)_k∈ is Cauchy in L^p(;U) implies Cauchyness of (T_G_k())_k∈ with respect to the norm of γ(L^2_N()(0,t]× B_U,U) for a.a. ω∈Ω. The proof of <ref><ref> in Theorem <ref> establishes that G is L^p-integrable with respect to N with integral X. Another application of the argument of <ref><ref> now shows that (iii) holds. Let U be the UMD-space L^p_μ(S), where (S,𝒮, μ) is a measure space and p∈ (1,∞). By the identification of <cit.> we have a natural isomorphism of Banach spaces γ(L^2_N()((0,1]× L^p_μ(S)), L^p_μ(S))≃ L^p_μ(S;L_N(ω)^2((0,1] × L^p_μ(S))) with norm equivalence constants depending only on p. Set G(s,f) = f_B_L_μ^p(f) as before, and write L^p_μ:=L^p_μ(S) for brevity. Reasoning formally (a rigorous version can be obtained by an additional mollification or averaging argument), it follows from Theorem <ref> (with t=1), Doob's inequality (applied twice), and Fubini's theorem that T_G_γ(L^2_N((0,1]× L_μ^p), L^p_μ)^p ≃_p sup_0≤ s≤ 1∫_(0,s]× B_L_μ^p f (ṛ,f̣)_L_μ^p^p ≃_p ∫_(0,1]× B_L_μ^p f (ṛ,f̣)_L_μ^p^p = ∫_S|∫_(0,1]× B_L_μ^p f(σ) (ṛ,f̣)|^p μ(σ̣) = ∫_S|∫_(0,1]× B_L_μ^p f(σ) (ṛ,f̣)|^p μ(σ̣) ≃_p ∫_Ssup_0≤ s≤ 1|∫_(0,s]× B_L_μ^p f(σ) (ṛ,f̣)|^p μ(σ̣). As the expectation is for the supremum of a real-valued martingale, we can apply <cit.>. This enables us to conclude in the case p∈ [2,∞) that [ T_G_γ(L^2_N((0,1]× L^p_μ), L^p_μ)^p] ≃_p ∫_S ( ( ∫_(0,1]× B_L_μ^pf(σ)^2 ṛ λ(f̣))^p/2 + ∫_(0,1]× B_L_μ^pf(σ)^p ṛ λ(f̣)) μ(σ̣) = ∫_S ( ∫_ B_L_μ^pf(σ)^2 λ(f̣))^p/2 μ(σ̣) + ∫_ B_L_μ^pf^p_L_μ^p λ(f̣). Thus, we obtain the same characterisation of a Lévy measure on L^p_μ for p∈ [2,∞) as in Theorem <ref>. In the case p∈ (1,2], we obtain by <cit.>, that [ T_G_γ(L^2_N((0,1]× L^p_μ), L^p_μ)^p] ≃_p ∫_S inf{( ∫_B_L_μ^pg_1,σ(f)^2 λ(f̣))^p/2 + ∫_ B_L_μ^pg_2,σ(f)^p ṛ λ(f̣)} μ(σ̣), where the infimum is taken over all functions g_1,σ∈ L^2_λ(B_L_μ^p) and g_2,σ∈ L^p_λ(B_L_μ^p) with f(σ)=g_1,σ(f)+g_2,σ(f) for all f∈ B_L_μ^p and σ∈ S. The expression on right-hand side is subtly different from the corresponding expression in Theorem <ref>. However, the present derivation, combined with Theorem <ref>, establish the equivalence of these expressions. Theorem <ref> continues to hold if the UMD property on V is weakened to reflexivity with finite cotype, by making the following adjustments. First of all, a version of <cit.> for V-valued martingales with independent increments is obtained in <cit.> for Banach spaces V with finite cotype. Using this result, the proof of <cit.> can be repeated, resulting in a version of this theorem for reflexive space with finite cotype; see <cit.>. Reflexivity enters in view of the results in <cit.> that are still needed in their stated forms. Our Theorem <ref> can be extended accordingly. We thank Ivan Yaroslavstev and Gergely Bódo for kindly pointing this out to us. § OUTLOOK Similarly as in finite dimensions, infinitely divisible measures on a Banach space U are characterised by triplets (a,Q,λ) where a∈ U, Q U^∗→ U is a nonnegative, symmetric trace class operator and λ is a Lévy measure on (U). For weak convergence of a sequence (μ_n)_n∈ of infinitely divisible measures with characteristics (a_n,Q_n, λ_n) necessary conditions are known in Banach spaces, but in general they are not sufficient; see <cit.>. Only in separable Hilbert spaces, necessary conditions are known, which are established in <cit.>. In fact, as pointed out in <cit.>, necessary conditions in Banach spaces would have allowed for an explicit characterisation for Lévy measures. As we have now such a characterisation, our result should enable the derivation for necessary conditions for the weak convergence of a sequence of infinitely divisible measures on L^p-spaces or in UMD-spaces. In the current work, using the L^p-estimates for simple functions in <cit.>, we have already introduced a description of the largest space of vector-valued deterministic functions integrable with respect to a compensated Poisson random measure in either L^p-spaces or UMD-spaces; see Lemma <ref> and Theorem <ref>. Such a description of the space of deterministic integrands can be used to derive the existence of a stochastic integral for random vector-valued integrands with respect to a compensated Poisson random measure, similarly as in <cit.>. Since the compensated Poisson random measure has independent increments, the decoupled tangent sequence can be constructed, and thus the decoupling inequalities in UMD-spaces enables to derive the existence of the stochastic integral. Acknowledgement. The authors would like to thank Gergely Bódo for proofreading an earlier version of this article and providing helpful comments, and Ivan Yaroslavtsev for helpful suggestions. plain
http://arxiv.org/abs/2406.08544v1
20240612180001
A practical framework for analyzing high-dimensional QKD setups
[ "Florian Kanitschar", "Marcus Huber" ]
quant-ph
[ "quant-ph" ]
florian.kanitschar@outlook.com Vienna Center for Quantum Science and Technology (VCQ), Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria AIT Austrian Institute of Technology, Center for Digital Safety&Security, Giefinggasse 4, 1210 Vienna, Austria Vienna Center for Quantum Science and Technology (VCQ), Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria § ABSTRACT High-dimensional (HD) entanglement promises both enhanced key rates and overcoming obstacles faced by modern-day quantum communication. However, modern convex optimization-based security arguments are limited by computational constraints; thus, accessible dimensions are far exceeded by progress in HD photonics, bringing forth a need for efficient methods to compute key rates for large encoding dimensions. In response to this problem, we present a flexible analytic framework facilitated by the dual of a semi-definite program and diagonalizing operators inspired by entanglement-witness theory, enabling the efficient computation of key rates in high-dimensional systems. To facilitate the latter, we show how matrix completion techniques can be incorporated to effectively yield improved, computable bounds on the key rate in paradigmatic high-dimensional systems of time- or frequency-bin entangled photons and beyond. A practical framework for analyzing high-dimensional QKD setups Marcus Huber June 17, 2024 =============================================================== Introduction.— Quantum key distribution is one of the most mature quantum technologies, yet still faces fundamental challenges to be overcome for long-distance applications. While the exponential loss in optical fibers can be overcome by moving to free space, other issues such as low key rate and, in particular, low noise tolerance prevail. The latter limits free-space and satellite-based realizations to nighttime operations, severely reducing up-time and a significant impediment to practical feasibility. Recent works have shown efforts to extend operation times gradually, mainly by experimental adaptations <cit.>. However, this did not turn out sufficient to close this gap and hints that additional developments from the theoretical and protocol side are required. Elaborate forms of entanglement beyond qubit entanglement are a promising platform to address this issue. High-dimensional (HD) entanglement <cit.>, besides naturally increasing the key rate per signal, has proven to enhance background noise-resistance <cit.> in entanglement-distribution tasks. States that are entangled in high dimensions can be produced in labs in various degrees of freedom, among others in the temporal domain <cit.>, the frequency domain <cit.>, and in the form orbital angular momentum entanglement <cit.>, as well as in combinations of those, leading to hyperentangled states <cit.>. However, from the theoretical side, the available tools for calculating secure key rates in high-dimensional quantum systems are rather limited. On one hand, there are methods <cit.> requiring Alice and Bob to measure between two and d+1 mutually unbiased bases, which is already practically infeasible in low to medium dimensions. On the other hand, numerical methods <cit.> can avoid impractical measurements but rely on computationally very expensive and RAM-intensive convex optimization procedures. These demands are particularly challenging for high-dimensional problems, limiting the practical dimensionality to the low teens with state-of-the-art hardware <cit.>, while cutoffs <cit.> or established reduction methods <cit.> known from DM CVQKD cannot be applied for HD QKD protocols. At the same time, recent progress in high-dimensional photonics exceeds those limitations by far and brings forth the need for methods to compute secure key rates in the regime of significantly larger encoding dimensions. Our work addresses this issue by introducing a framework for analyzing practical High-Dimensional QKD setups without relying on infeasible measurements or computationally expensive convex optimization methods. We rewrite the Devetak-Winter formula <cit.> for the secure key rate as a semi-definite program (SDP) constrained by linear functions of certain observables that can be derived from the actual measurements inspired by entanglement witnesses. Instead of solving this SDP directly, we derive its dual, which is guaranteed to lower bound the primal SDP, hence the secure key rate. We show that the dual problem can be simplified and rewritten so that it boils down to finding the largest eigenvalue of a parametrized matrix and solving a scalar-valued optimization problem constrained by linear functions of the largest eigenvalue. Additionally, we introduce a method that allows us to conveniently express or at least upper-bound those largest eigenvalues for a broad class of relevant matrices. Then, the resulting optimization problem can be solved using either Lagrange's or numerical methods. The reduction to a dimension-independent optimization problem (besides the eigenvalue-calculation) significantly decreases the computational demands of the key rate calculation task while still yielding reliable lower bounds. The choice of the operators used can be tailored to a) meet the requirements of the concrete physical measurement setup and/or b) simplify the expression for the largest eigenvalue, hence the whole optimization problem. Our method accommodates subspace postselection, which is known to improve key rates further, particularly in very noisy scenarios <cit.>, and we demonstrate how matrix completion techniques can be used to extend the range of possible observables used further by providing bounds on their expectations. Finally, we illustrate our method and calculate asymptotic secure key rates for a high-dimensional temporal entanglement setup analyzed in <cit.>. Protocol.— We analyze a general high-dimensional QKD protocol, comprised of the following steps. 1.) State Generation. A photon source distributes entangled quantum states ρ_AB to Alice and Bob. 2.) Measurement. Alice and Bob randomly and independently decide to measure either in their computational bases {A_1^x}_x=0^d-1, {B_1^y}_x=0^d-1 or in one of the test bases {A_2^x}_x=0^d-1, {B_2^y}_x=0^d-1 and record their outcomes in their respective registers. Steps 1.) and 2.) are repeated many times. 3.) Sifting. Alice and Bob use the classical authenticated channel to communicate their measurement choice to each other and may discard certain results. They also may choose to perform subspace postselection. 4.) Parameter Estimation. The communicating parties disclose some of their measurement results over the public channel to estimate the correlations between their bit-strings. 5.) Error-Correction & Privacy Amplification. Finally, on the remaining rounds, Alice and Bob perform error-correction and privacy amplification to reconcile their raw keys X and Y and decouple them from Eve. Key Rate Calculation.— As in the asymptotic regime collective i.i.d. attacks are known to be essentially optimal <cit.>, worst-case, after protocol execution, Eve holds a purification ρ_ABE of Alice's and Bob's shared state ρ_AB. Let us denote the map representing the (unknown) quantum channel by ℰ^ch, the map describing Alice's and Bob's measurements by ℰ^meas and the map describing the classical postprocessing steps by ℰ^PP. Summarizing the action of all those maps by ℰ:= ℰ^PP∘ℰ^meas∘ℰ^ch, the state after protocol execution reads σ_XYE':=ℰ( ρ_ABE). The following does not explicitly assume that subspace postselection is performed but generally treats a QKD protocol in dimension d. However, according to <cit.>, the asymptotic key rates for the protocol including subspace postselection, employing l subspaces of dimension D,(s.t. d=l × D), are simply given by the weighted average of l full space protocols of dimension D, K ≥∑_m=0^ℓ-1 P(M=m) K_m such that we can easily relate them to each other by replacing d by D and building the weighted sum. Here, P(M=m) is the probability that Alice and Bob obtain an outcome in the same subspace and K_m is the key rate obtained from this subspace. The Devetak-Winter formula <cit.> R^∞ = H(X|E) - H(X|Y) quantifies the asymptotic key rate of a QKD protocol as the difference between Eve's lack of knowledge about Alice's bitstring X and the amount of information Alice needs to communicate to Bob in order to reconcile their keys X and Y. Since the second term is purely classical and can be directly calculated from Alice's and Bob's data, we focus on the first term which can be lower bounded by the min-entropy, H(X|E)_ρ≥ H_min(X|E)_ρ. The latter is connected to the logarithm of Eve's average probability of guessing Alice's key string correctly, H_min(X|E)_ρ = -log_2(p_guess). Thus, in order to lower bound the asymptotic secure key rate, it suffices to find a way of calculating Eve's average guessing probability. We can formulate (see, for example, Ref. <cit.>) the average guessing probability as an optimization problem, where we maximize the probability that Eve guesses Alice's measurement outcome correctly over all purifications ρ_ABE (which we assume to be held by Eve) of Alice's and Bob's shared state ρ_AB and over all possible measurements {E^e}_e Eve might carry out, constrained by physical requirements and Alice's and Bob's observations. p_guess = max_ρ_ABE, {E^ℓ}_ℓ=0^d-1∑_ℓ,yρ_ABE A_1^ℓ⊗ B_1^k⊗ E^ℓ s.t.: ∑_ℓ E^ℓ = 1, ρ_ABE = 1, w_k = (Ŵ^(e)_k⊗1_E) ρ_ABE, w_j^L≤(Ŵ^(i)_j⊗1_E) ρ_ABE≤ w_j^U, E^ℓ≥ 0, ρ_ABE≥ 0, for k∈{1,..., N_eW}, j∈{1,..., N_iW} and ℓ∈{ 0,..., d-1}. While A_1 and B_1 denote Alice's and Bob's computational basis, the operators Ŵ^(e)_k are observables with expectations w_k known with equality and Ŵ^(i)_j are observables where we can only find upper and/or lower bounds w_j^U and w_j^L for it's expectations. As we derive in the Appendix, the dual of this SDP can be brought into the form min y_0 + ∑_k=1^N_eW y_k w_k + ∑_j=1^N_iW(z_j^U w_j^U - z_j^L w_j^L) s.t. y_0 ≥λ_max(M_ℓ)    ∀ℓ=0,...,d-1 z_j^L, z_j^U≥ 0    ∀ j=1,...,N_iW y_k ∈ℝ   ∀ k=0,...,N_eW, where λ_max(M_ℓ) denotes the largest eigenvalue of M_ℓ:=|ℓ⟩⟨⊗|1_d -∑_k=1^N_eW y_k W̅^(e)_k - ∑_j=1^N_iW(z_j^U-z_j^L)W̅^(i)_j. Due to the SDP duality theory, every solution of this dual problem is a valid upper bound for the guessing probability, hence giving rise to a valid lower bound on the secure key rate. By this procedure, we reduced the task of solving a computationally expensive optimization problem that turns out to be computationally infeasible already for medium dimensions to finding the largest eigenvalue of a matrix and solving a much simpler optimization problem. The matrix M_ℓ is a function of the observables chosen to formulate the initial optimization problem for the guessing probability. Thus, obviously, the choice of the observables heavily influences both the structure of the optimization problem, hence difficulty, and the final result, hence the obtained secure key rates. At the same time, possible choices are limited to observables that are accessible by the measurements Alice and Bob can carry out in their labs. For practical setups, this often will limit possible choices severely. We overcome this limitation by applying a matrix completion technique known from Refs. <cit.> (see also Appendix <ref>) to r:= (ρ_AB) and obtain r_j,l≥-r_j,kr_k,l-√((r_j,jr_k,k-r_j,k^2)(r_k,kr_l,l-r_k,l^2 ))/r_k,k. Starting from density matrix entries that are known from (or at least bounded by) measurements, this allows us to iteratively obtain bounds on missing entries, extending the number of possible observables. It remains to calculate the largest eigenvalue of M_ℓ, or bounds thereof, for a particular choice of observables. Strategies for upper-bounding or calculating eigenvalues heavily depend on the exact structure of M_ℓ, hence the choice of observables, and thus are highly problem-specific. Certain choices allow for a direct calculation with particular matrix structure-specific formulas, others can be tackled via generalized versions of the Sherman-Morrison formula, while some choices only allow for eigenvalue bounds, which introduce looseness in the obtained key rates but might simplify the problem further. For all those approaches, the optimisation problem then can be tackled using Lagrange's method, yielding quick results for the key rate at the cost of potentially lower key rates due to tailoring the observables for analytic solvability rather than maximal rates. Alternatively, one may pursue a semi-analytic approach, choosing observables with the goal of maximizing the key rate and solving for the largest eigenvalue numerically. This, however, means we require numerical optimisation for the solution of the optimisation problem in Eq. (<ref>). Since each solution of the dual problem is a valid upper bound for the guessing probability by construction, this still leads to reliable lower bounds on the secure key rates. While this method is certainly more time-consuming than the purely analytical methods outlined earlier, it nevertheless reduces both computational complexity and time demands compared to direct numerical convex-optimisation approaches by far and opens the path to high dimensions, while still remaining flexible. Demonstration and Results.— We illustrate our method for a high-dimensional temporal entanglement setup, analyzed earlier in Ref. <cit.> (Protocol 1), where a source prepares a d-dimensional state |Ψ_1⟩ = |DD⟩⊗1/√(d)∑_k=0^d-1|kk⟩ and Alice and Bob either measure the Time-of-Arrival (ToA), denoted as TT, or the temporal superposition of (not necessarily) neighboring time-bins (TSUP), denoted as SS. We obtain the following relations between the coincidence-click elements and density matrix elements, TT(i,j) = ⟨i,j|ρ||i,j⟩ and (i,jρ^Ti-1,j-1) = 1/4(D(i,j,0, 0) - D(i,j,π/2, π/2)), (i,j-1ρ^Ti-1,j) = 1/4(D(i,j,0, 0) + D(i,j,π/2, π/2)), where D(i,j,ϕ^A, ϕ^B) is a quantity derived from generalized x- and y-measurements, as elaborated on in Appendix <ref>. This means by performing x, y, and z measurements, we have access to all diagonal elements and the real parts of certain off-diagonal elements. This can now be used to bound additional off-diagonal density matrix elements with the matrix completion technique mentioned earlier (see also Appendix <ref>). Inspired by entanglement witness theory <cit.>, and based on the measurements performed, we choose observables Ŵ_1:=q_0 ∑_i=0^d-1|i,i⟩⟨i,i| + ∑_z=1^d-1∑_i=0^d-1 q_z (|i,i⟩⟨i+z, i+z| + |i+z,i+z⟩⟨i, i|) and Ŵ_2:= ∑_i,j=0i≠ j^d-1 p |i,j⟩⟨i,j|, where p and q_0, ...,. q_d-1 are real numbers. While w_2:=ρ_ABŴ_2 is known with equality (at least in the asymptotic setting) from z-measurements, note that w_1:= ρ_ABŴ_1 is a function of diagonal elements and real parts of off-diagonal elements we can either measure directly or at least bound by the matrix completion technique. This, finally leads to the following optimization problem minγ + S w_1^U + T w_2 s.t. γ≥λ_max(M_ℓ)    ∀ℓ=0,...,d-1 S ∈ℝ T ≥ 0. We note that our choice of observables Ŵ_1 and Ŵ_2 leads to different dual variables appearing exclusively in orthogonal subspaces of M_ℓ, which simplifies the eigenvalue problem significantly. Our choice of observables used for this demonstration aims for high key rate rather than fastest possible evaluation, since some of the eigenvalues cannot be calculated analytically directly. Therefore, we split off those subspaces where analytic solutions are known and apply numerical methods to determine the largest eigenvalue of the remaining part each of the matrices M_ℓ rather than using eigenvalue bounds that introduce looseness. Then, we optimise the objective function numerically, emphasizing that we do not rely on finding the minimum exactly, as every found minimum upper-bounds the guessing probability by construction. In what follows we assume an isotropic noise model, ρ = v |Ψ_1⟩⟨+|(1-v)/d^21_d^2, although we want to emphasize that this is only for demonstration purposes and our method does not rely on any particular noise model. We showcase our method for the high-dimensional QKD protocol discussed in Ref. <cit.> (see also Appendix <ref>). In Figure <ref>, we compare our method for two operator choices (see Appendix <ref> for details) denoted by KH1 (solid blue) and KH2 (solid red) with secure key rates obtained by Ref. <cit.> (black dotted) and Ref. <cit.> (black dashed). While KH1 outperforms both in terms of maximal tolerable noise resistance (minimal tolerable visibility v), KH2 performs comparably to the results by Doda et. al. for high visibilities. This demonstrates the flexibility of or approach which allows choosing the operator based on the present noise level. Next, we made a general operator Ansatz by choosing a parametrized exponential witness Ŵ_1 (see Appendix <ref>) q_z=e^-c(z-s), denoted KHexp, providing a two-parameter family of possible duals independent of system dimension. This already transcends the capabilities of SDP-based methods via opening up very high dimensions, concurrent with developments in experimental photonics <cit.>. In Figure <ref>, we see that increasing dimension, with equal parameters c and s, already improves key rates across the entire range of visibilities and, moreover, increases noise resistance even without further subspace postselection, a feature not observed for Ref. <cit.> for limited measurement setups. When combining this Ansatz with the idea of subspace postselection QKD <cit.>, we see that we can just as well harness the inherent noise resistance and dramatically decrease the visibility necessary for a positive key rate, while consistently outperforming the original subspace protocol. All of this was achieved without extensive optimization of the witness choice, structure, and parameters p and q_z. Further improvement is possible with more in-depth optimization, facilitated by our flexible method that accommodates a wide range of arbitrary observables and observable combinations. As this is not the main goal of this paper, maximizing key rates through witness optimization is left for future analysis of practical setups. Discussion.— To summarise, we set out to overcome the constraints of high-dimensional QKD by unlocking high-dimensional protocols without the need for mutually unbiased measurements or SDP optimization. We derive a general framework based on analytic SDP duals for efficient key rate estimation and propose a family of witnesses suitable for time-bin or frequency-bin photonic setups. The speed of our method is basically dimension-independent, which unlocks high dimensions that remain inaccessible with numerical techniques. To our surprise, based on the same data, the operators employed for demonstration purposes already outperform standard techniques that have additional measurement capabilities and feature key rates comparable to the full SDP in dimensions where the latter is still computationally accessible. This was achieved without thorough witness optimization which hints that further improvement is possible. We believe that this solves a key critical issue in quantum communication, for the first time harnessing the full potential of genuinely high-dimensional entangled states in quantum key distribution with feasible measurement settings. The protocol considered for illustration is directly implementable in all higher-dimensional setups based on interfering neighboring bins, and we expect experimental demonstrations very soon. The obvious next step will be to go from asymptotic key rates to a finite key analysis, to which our min-entropy-based formulation is already very amendable. F.K. thanks Matej Pivoluska for fruitful discussions and precious feedback and Fabien Clivaz for enlightening discussions. This work has received funding from the Horizon-Europe research and innovation programme under grant agreement No 101070168 (HyperSpace). § MATRIX COMPLETION TECHNIQUE In what follows, we briefly describe the matrix completion technique by Refs. <cit.> and it's application to our present problem. A principle minor of M is M_I, J where I=J ⊆{1,...,n}. According to Silvester's Criterion, every Hermitian n× n matrix M is positive semi-definite if and only if every nested sequence of principle minors has a non-negative determinant. Thus, in particular, if M is hermitian and positive semi-definite, (M_I,J)≥ 0 holds. This can be applied to the real part of a density matrix r = (ρ) = (r_i,j)_i,j, which is positive semi-definite (as r = 1/2(ρ+ρ̅)) and Hermitian (since symmetric). The condition that the determinant of every proper minor is non-negative, translates to [ r_j,j r_j,k r_j,l; r_k,j r_k,k r_k,l; r_l,j r_l,k r_l,l ]≥ 0   ⇔ r_j,l≥r_j,kr_k,l±√((r_j,jr_k,k-r_j,k^2)(r_k,kr_l,l-r_k,l^2 ))/r_k,k. Note that both r_j,jr_k,k-r_j,k^2≥ 0 and r_k,kr_l,l-r_k,l^2≥ 0 since both are the determinants of a 2× 2 minor. Thus, the solution with minus is smaller than the solution using plus and we obtain r_j,l≥r_j,kr_k,l-√((r_j,jr_k,k-r_j,k^2)(r_k,kr_l,l-r_k,l^2 ))/r_k,k. This relation can be used to iteratively derive lower bounds on the entries of r as needed to bound the expectation of the chosen observable. Thus, at the price of obtaining only bounds on the expectations instead of equalities, we can significantly extend the set of accessible observables. § APPLICATION TO THE HD TEMPORAL ENTANGLEMENT PROTOCOL In this section, we detail the application of our method to the high-dimensional temporal entanglement protocol analysed in Ref. <cit.>, sticking close to the notation chosen there. In this protocol, a source prepares the state |Ψ_target^P1⟩ := |DD⟩⊗1/√(d)∑_k=0^d-1|kk⟩ which is distributed to Alice and Bob over the quantum channel. They then perform either a Time of Arrival (ToA) measurement or a Temporal Superposition (TSUP) measurement. We start with a brief recap of the most important definitions from <cit.>. The action of the temporal-superposition setup is described by Û := |HH⟩⟨HH|⊗1_T⊗1_T + |HV⟩⟨HV|⊗1_T⊗Q̂_ϕT̂ + |VH⟩⟨VH|⊗Q̂_ϕT̂⊗1_T + |VV⟩⟨VV|⊗Q̂_ϕT̂⊗Q̂_ϕT̂. The TSUP measurement is then given by M̃_a,b(i,j,ϕ^A, ϕ^B) := |Ψ_a,b(i,j,ϕ^A,ϕ^B)⟩⟨$|, where |Ψ̃_1,1(i,j,ϕ^A, ϕ^B)⟩:= Û^†|DD, i,j⟩ = |Ψ̃_1(i,ϕ^A)⟩⊗|Ψ̃_1(j,ϕ^B)⟩, |Ψ̃_1,2(i,j,ϕ^A, ϕ^B)⟩ := Û^†|DA, i,j⟩ =|Ψ̃_1(i,ϕ^A)⟩⊗|Ψ̃_2(j,ϕ^B)⟩, |Ψ̃_2,1(i,j,ϕ^A, ϕ^B)⟩ := Û^†|AD, i,j⟩ = |Ψ̃_2(i,ϕ^A)⟩⊗|Ψ̃_1(j,ϕ^B)⟩ , |Ψ̃_2,2(i,j,ϕ^A, ϕ^B)⟩ := Û^†|AA, i,j⟩ =|Ψ̃_2(i,ϕ^A)⟩⊗|Ψ̃_2(j,ϕ^B)⟩, and we introduced |Ψ̃_x(i,ϕ)⟩ :=1/√(2)( |H,i⟩+ (-1)^x-1 e^-i ϕ|V, i-1⟩), forx ∈{1,2}. The ToA measurement is simply given by M(i,j) := 1_Pol⊗|i⟩⟨⊗||j⟩⟨.| We denote ToA measurement clicks byTT(i,j), while TSUP clicks are denoted bySS_a,b(i,j,ϕ^A, ϕ^B), where the subscript denotes which of Alice's and Bob's detectors clicked. It can be seen straight-forwardly that the ToA clicks directly give access to the diagonal elements of Alice's and Bob's shared density matrixρ_AB. The action of the TSUP measurements read as follows, 4 SS_1,1(i,j,ϕ^A, ϕ^B) = M̃_1,1(i,j,ϕ^A, ϕ^B) ρ_AB =i,jρ^Ti,j + i,j-1ρ^Ti,je^-ıϕ^B + i-1,jρ^Ti,j e^-ıϕ^A +i-1,j-1ρ^Ti,je^-ı(ϕ^A+ϕ^B) + i, jρ^Ti,j-1e^ıϕ^B+ i,j-1ρ^Ti,j-1 + i-1,jρ^Ti,j-1e^ı(ϕ^B-ϕ^A) + i-1,j-1ρ^Ti,j-1 e^-ıϕ^A + i,jρ^Ti-1,j e^ıϕ^A + i,j-1ρ^Ti-1,j e^ı(ϕ^A-ϕ^B) + i-1,jρ^Ti-1,j + i-1,j-1ρ^Ti-1,j e^-ıϕ^B + i,jρ^Ti-1,j-1 e^ı (ϕ^A+ϕ^B) + i,j-1ρ^Ti-1,j-1 e^ıϕ^A + i-1,jρ^Ti-1,j-1e^ıϕ^B + i-1,j-1ρ^Ti-1,j-1 4 SS_1,2(i,j,ϕ^A, ϕ^B) = M̃_1,2(i,j,ϕ^A, ϕ^B) ρ_AB =i,jρ^Ti,j - i,j-1ρ^Ti,je^-ıϕ^B + i-1,jρ^Ti,j e^-ıϕ^A -i-1,j-1ρ^Ti,je^-ı(ϕ^A+ϕ^B) - i, jρ^Ti,j-1e^ıϕ^B+ i,j-1ρ^Ti,j-1 - i-1,jρ^Ti,j-1e^ı(ϕ^B-ϕ^A) + i-1,j-1ρ^Ti,j-1 e^-ıϕ^A + i,jρ^Ti-1,j e^ıϕ^A - i,j-1ρ^Ti-1,j e^ı(ϕ^A-ϕ^B) + i-1,´jρ^Ti-1,j - i-1,j-1ρ^Ti-1,j e^-ıϕ^B - i,jρ^Ti-1,j-1 e^ı (ϕ^A+ϕ^B) + i,j-1ρ^Ti-1,j-1 e^ıϕ^A - i-1,jρ^Ti-1,j-1e^ıϕ^B + i-1,j-1ρ^Ti-1,j-1, 4 SS_2,1(i,j,ϕ^A, ϕ^B) = M̃_2,1(i,j,ϕ^A, ϕ^B) ρ_AB =i,jρ^Ti,j + i,j-1ρ^Ti,je^-ıϕ^B - i-1,jρ^Ti,j e^-ıϕ^A -i-1,j-1ρ^Ti,je^-ı(ϕ^A+ϕ^B) +i, jρ^Ti,j-1e^ıϕ^B+ i,j-1ρ^Ti,j-1 -i-1,jρ^Ti,j-1e^ı(ϕ^B-ϕ^A) - i-1,j-1ρ^Ti,j-1 e^-ıϕ^A -i,jρ^Ti-1,j e^ıϕ^A - i,j-1ρ^Ti-1,j e^ı(ϕ^A-ϕ^B) + i-1,jρ^Ti-1,j + i-1,j-1ρ^Ti-1,j e^-ıϕ^B -i,jρ^Ti-1,j-1 e^ı (ϕ^A+ϕ^B) - i,j-1ρ^Ti-1,j-1 e^ıϕ^A +i-1,jρ^Ti-1,j-1e^ıϕ^B + i-1,j-1ρ^Ti-1,j-1, 4 SS_2,2(i,j,ϕ^A, ϕ^B) = M̃_2,2(i,j,ϕ^A, ϕ^B) ρ_AB =i,jρ^Ti,j - i,j-1ρ^Ti,je^-ıϕ^B -i-1,jρ^Ti,j e^-ıϕ^A +i-1,j-1ρ^Ti,je^-ı(ϕ^A+ϕ^B) - i, jρ^Ti,j-1e^ıϕ^B+i,j-1ρ^Ti,j-1 +i-1,jρ^Ti,j-1e^ı(ϕ^B-ϕ^A) -i-1,j-1ρ^Ti,j-1 e^-ıϕ^A -i,jρ^Ti-1,j e^ıϕ^A + i,j-1ρ^Ti-1,j e^ı(ϕ^A-ϕ^B) +i-1,jρ^Ti-1,j - i-1,j-1ρ^Ti-1,j e^-ıϕ^B + i,jρ^Ti-1,j-1 e^ı (ϕ^A+ϕ^B) - i,j-1ρ^Ti-1,j-1 e^ıϕ^A - i-1,jρ^Ti-1,j-1e^ıϕ^B + i-1,j-1ρ^Ti-1,j-1. Now, we combine Eqs. (<ref>) - (<ref>) in the following way and obtain D(i,j,ϕ^A, ϕ^B) := SS_1,1(i,j,ϕ^A, ϕ^B)-SS_2,2(i,j,ϕ^A, ϕ^B) -SS_2,1(i,j, ϕ^A, ϕ^B) +SS_2,2(i,j, ϕ^A, ϕ^B) = e^-ı(ϕ^A+ϕ^B)i-1,j-1ρ_ABi,j +e^ı(ϕ^A-ϕ^B)i,j-1ρ_ABi-1,j + e^-ı(ϕ^A-ϕ^B)i-1,jρ_ABi,j-1 + e^ı(ϕ^A+ϕ^B)i,jρ_ABi-1,j-1. Choosingϕ_A = ϕ_B = 0(corresponding to a generalizedxmeasurement) yields D(i,j,0,0) = 2(i,j-1ρ_ABi-1,j) + 2 (i,jρ_ABi-1,j-1), while forϕ_A = ϕ_B = π/2(corresponding to a generalizedymeasurement) leads to D(i,j,π/2,π/2) = 2 ( i,j-1ρ_ABi-1,j) - 2( i,jρ_ABi-1,j-1). Adding and subtracting Eqs. (<ref>) and (<ref>) respectively, yields (i,jρ^Ti-1,j-1) = 1/4(D(i,j,0, 0) - D(i,j,π/2, π/2)), (i,j-1ρ^Ti-1,j) = 1/4(D(i,j,0, 0) + D(i,j,π/2, π/2)). Thus, performingx- andy-measurements suffices to give us access to the real parts of certain off-diagonal elements. Note that in case we do only perform a generalizedx-measurement, we can obtain at least a bound on the real parts of those off-diagonal elements, (i,jρ^Ti-1,j-1) = D(i,j,0, 0)/4 - √(TT(i-1,j) - TT(i,j-1)), which we can still use for our method. Based on the obtained real parts, the method explained in Appendix <ref>, allows us to iteratively bound additional off-diagonal elements. § OPERATOR CHOICES USED FOR DEMONSTRATION In this section we state the choices made for the observablesŴ_1andŴ_2used forKH1andKH2in Figures <ref> and <ref> in the main text. Recall that for demonstration purposes, we chose observables of the formŴ_1:=q_0 ∑_i=0^d-1 |i,i⟩⟨i,i| + ∑_z=1^d-1∑_i=0^d-1 q_z (|i,i⟩⟨i+z, i+z| + |i+z,i+z⟩⟨i, i|)andŴ_2:= ∑_i,j=0i≠j^d-1 p |i,j⟩⟨i,j|, wherepandq_0, ...,. q_d-1are chosen to be real numbers. ForKH1, we setp=1and for simplicity setq_5 = ... = q_15 = 0, while choosingq_0 = -1,q_2 = 1andq_3 = 2.7andq_4=0.47. ForKH2, we setp=1and choseq_0 = 0,q_1 = ... = q_11 = 1andq_12 = ... = q_15 = 0. Finally, for the exponential witnessKHexp, we chose, againp=1as well asq_z = e^-c(z-s)forc=0.75ands=4. To ease comparison, we kept those parameters fixed throughout the whole paper, despite dimension-dependent parameter choices would have increased key rates significantly in particular when subspace postselection was applied. Both the choices of the form ofŴ_1andŴ_2and the particular values inserted were solely for demonstration purposes and were not optimized systematically. Therefore, we expect different choices to lead to improved key rates for certain scenarios.
http://arxiv.org/abs/2406.08264v1
20240612143426
Emergent spinon-holon Feshbach resonance in a doped Majumdar-Ghosh model
[ "Simon M. Linsel", "Ulrich Schollwöck", "Annabelle Bohrdt", "Fabian Grusdt" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.quant-gas", "cond-mat.supr-con", "quant-ph" ]
simon.linsel@physik.uni-muenchen.de Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany Department of Physics, Harvard University, Cambridge MA 02138, USA Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany Institute of Theoretical Physics, University of Regensburg, D-93053, Germany fabian.grusdt@physik.uni-muenchen.de Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany § ABSTRACT Experimental and numerical spectroscopy have revealed rich physics in antiferromagnets, in particular in frustrated and doped systems. The Majumdar-Ghosh (MG) model has an analytically known spin-disordered ground state of dimerized singlets as a result of magnetic frustration. Here we study the single-hole angle-resolved photoemission spectrum (ARPES) of a doped MG model, where we introduce a spin-hole interaction that is experimentally accessible with ultracold molecules. We report a bound spinon-holon ground state and clear signatures of a spinon-holon molecule state and polarons in the ARPES spectrum at different magnetizations. Moreover, we find signatures of an emergent Feshbach resonance with tunable interactions associated with the unbinding of the spinon and the holon. Our results provide new insights into the physics of dopants in frustrated t-J models and establish the latter as a new platform for studies of emergent few-body phenomena. Emergent spinon-holon Feshbach resonance in a doped Majumdar-Ghosh model Fabian Grusdt June 17, 2024 ======================================================================== Introduction.—The resonating valence bond theory (RVB), developed by Anderson and Fazekas <cit.>, describes a quantum spin liquid (QSL) on a triangular lattice with featureless constituents: holons and spinons. Historically, RVB was proposed to describe high-temperature superconductivity in the 2D Hubbard model <cit.>. While the RVB paradigm is still widely applied for describing spin liquids, in the context of doped antiferromagnets theories with confined phases or non-trivial constituents have emerged in recent years. A prominent example are fractionalized Fermi liquids (FL*) <cit.>, often studied in the context of doped quantum dimer models <cit.>. This parton picture is in line with microscopic studies of doped holes <cit.> and hole pairs <cit.> in t-J and Hubbard models. Feshbach resonances have originally been introduced in the context of particle physics, where slow-moving colliding particles undergo resonant scattering <cit.>. Since then, Feshbach resonances have been widely used to realize tunable interactions in cold-atom experiments <cit.> and 2D semiconductors <cit.>. Recently, Feshbach resonances have further been proposed as a possible pairing mechanism for high-temperature superconductivity in cuprates <cit.>. In this Letter, we report Feshbach-like resonant interactions upon tuning across the spinon/holon unbinding in a paradigmatic doped frustrated quantum magnet. We study the doped Majumdar-Ghosh model <cit.> extended by spin-hole interactions that can be realized e.g. by ultracold polar molecules <cit.>, serving as a toy model relevant for other settings featuring spinon-holon bound states. The resonant spinon-holon interactions we reveal are directly probed by varying the density of unpaired spinons. Using matrix product states (MPS), we study the ground state properties and calculate the single-hole ARPES spectrum. In addition to the Feshbach-like resonance, we find a rich set of emergent few-body states realized in the doped MG model. Our results have possible implications for the physics of cuprates in the pseudogap regime. Doped Majumdar-Ghosh model.—We study the frustrated Hamiltonian ℋ̂ = -t ∑_⟨ı,ȷ⟩, σ𝒫̂_GW[ ĉ_ı,σ^†ĉ_ȷ, σ + H.c.] 𝒫̂_GW + J ∑_ȷ=1^NŜ_ȷ·Ŝ_ȷ+1 + J/2∑_ȷ=1^NŜ_ȷ·Ŝ_ȷ+2 - g ∑_⟨ı,ȷ⟩[ n̂^h_ıŜ^z_ȷ + H.c.] on the triangular ladder, see Fig. <ref>, where ĉ_ȷ, σ is a fermionic annihilation operator on site ȷ with spin σ∈{↑, ↓}, Ŝ_ȷ is the spin operator and n̂^h_ȷ = ∏_σ (1 - ĉ_ȷ, σ^†ĉ_ȷ, σ) is the hole density. It includes the bare Majumdar-Ghosh model featuring Heisenberg interactions between nearest neighbors (NN) and next-nearest neighbors (NNN) ∝ J. We allow NN hopping of fermions ∝ t; here 𝒫̂_GW denotes the Gutzwiller projector on states with no more than one fermion per site. In addition, we add a spin-hole interaction ∝ g that can be experimentally realized using e.g. ultracold polar molecules <cit.>, ultracold atoms <cit.>, or in Rydberg tweezer arrays <cit.>. We set t=J; this choice is not particularly special. We microscopically simulate the system using MPS and apply the density-matrix renormalization group (DMRG) <cit.> to calculate the ground state (GS). Time evolutions of the MPS are obtained using generalized subspace expansion (GSE) <cit.> for the first few time steps and then using the two-site time-dependent variational principle (TDVP2) <cit.>. In addition, we use linear extrapolation to improve the quality of the ARPES spectrum <cit.>. We enforce a global U(1) ⊗ U(1) symmetry of the hole number and the magnetization m = 2 ⟨∑_ȷŜ^z_ȷ⟩ / L. The magnetization is always positive in this Letter. We use the SyTen toolkit <cit.>. Spinon-holon Feshbach resonance.—The undoped Majumdar-Ghosh model, i.e. Hamiltonian (<ref>) at n̂_ȷ^h = 0, features a GS of dimerized singlets, see Fig. <ref>. We dope one hole into the system and set the magnetization to a small but non-zero value, i.e. we have some unpaired spinons in the system that are not bound in a singlet. Consequently, the GS of this doped and magnetized system is translationally invariant. By tuning the microscopic spin-hole interaction ∝ g, we can realize both an unbound and a bound regime, in which a spinon-holon bound state forms. The existence of the bound state itself is very natural and we will study the microscopic details later; for now, it is only important that its existence can be controlled via g. This brings us to the central idea put forward in this Letter: We propose that the unbinding transition of the spinon-holon bound state is associated with a Feshbach-type resonance with resonantly enhanced scattering. To reveal signatures of such resonantly enhanced spinon-holon interactions, we consider a system at finite magnetization. Assuming a Feshbach-like resonance, in which the spinon and the holon undergo two-body scattering, the interaction energy near the transition is given by ℋ̂_int∼ g_eff(g) n̂^h⟨n̂^S ⟩, where g_eff(g) is the resonantly enhanced interaction that depends on the bare coupling g and n̂^S is the unpaired spinon density. Here we employed a mean-field ansatz, replacing the unpaired spinon density by its average, i.e. n̂^S →⟨n̂^S ⟩ = m. Thus increasing m leads to a stronger interaction energy shift in the system. This situation resembles a 1D Fermi polaron problem <cit.>, where the holon corresponds to an impurity. The unpaired spinons around the holon form a Luttinger liquid and “dress” the holon in the following sense: The attractive (repulsive) polaron is a holon that is surrounded by ↑-spinons (singlets). Since the unpaired spinons are mutually hard-core, we expect the polarons to resemble 1D Fermi polarons. We can effectively extract g_eff(g) by tuning g, m in the vicinity of the Feshbach resonance by searching for Fermi polaron signatures and interaction shifts in the ladder in the one-hole ARPES spectrum A_σ(,̨ω) = 1/2 πRe∫_- ∞^∞dt e^i ω t⟨ψ_0| e^i ℋ̂ tĉ_,̨σ^† e^-i ℋ̂ tĉ_,̨σ|ψ_0⟩, where |ψ_0⟩ is the GS without holes. We calculate the full one-hole ARPES spectrum, thus going beyond mean-field theory. In addition to Fermi polaron-like branches, this gives rise to a rich set of few-body states, e.g. molecular spinon-holon states. First, we will focus on the polaron branches as they are immediately relevant to establish proof of the Feshbach resonance; However, we will discuss the other branches later. We fix ka=1.0 (corresponding approximately to the GS of the one-hole dispersion; a is the lattice constant) and plot the majority (σ = ↑) ARPES spectrum with respect to g for different magnetizations m; our result for m=37/81 is shown in Fig. <ref>a. Indeed, we identify two branches with attractive and repulsive Fermi polaron characters, respectively. The attractive Fermi polaron has a negative energy compared to the zero-hole GS energy E_0 ≡ 0. It has a particularly large spectral weight in the repulsive region (g<0) of the bare coupling g. This is in line with the Feshbach picture: A repulsive bare interaction corresponds to a strong resonantly enhanced effective attractive interaction and vice versa, giving rise to the existence of the two polaron branches. As a side note, the attractive polaron might still exist deep in the attractive region g>0 while the repulsive polaron will typically decay because of its high energy compared to the GS. The repulsive Fermi polaron has a positive energy and a large spectral weight for attractive bare interactions (g>0). In Fig. <ref>a we only fit the parts of the two polaron branches with high spectral weight. We repeat these calculations for different magnetizations and show the fitted polaron branches in Fig. <ref>b. We clearly observe a repulsion of the attractive/repulsive branches with increasing m, as expected from Eq. (<ref>), thus strongly suggesting a resonantly enhanced Feshbach interaction. We will discuss the microscopic details of the spinon-holon bound state next, after making one more remark: The Feshbach resonance is connected to the unbinding of a spinon-holon pair, our ARPES calculation is just a way to probe it. In fact, we argue that Feshbach resonances might be more generally prevalent in scenarios where partons form bound states, e.g. in the context of FL-FL* transitions <cit.>: in the FL* phase deconfined spinons and holons exist but the relevant charge carriers at low energies are constituted by bound pairs of spinons and holons, yielding electron-like quantum numbers. Free molecule limit.—We study the microscopic structure of the bound spinon-holon state at minimal magnetization, i.e. with only one hole and one unpaired spinon in the system. To study the effect of tuning the spin-hole interaction ∝ g, we calculate the QP weight Z = ∑_ȷ | ⟨ψ_0^1h|ĉ_ȷ, ↓|ψ_0^0h⟩ |^2, where |ψ_0^0(1)h⟩ is the DMRG GS with zero (one) holes at m=0(1/2). It includes the MPS overlap between the one-hole DMRG GS and the undoped GS in which we put a spinon directly next to a holon (by removing one spin from a singlet). Thus Z probes the existence of a spinon-holon bound state. Note that in contrast to the ARPES simulations, here we use an even system size of L=80. We perform a scan over -2 ≤ g/J ≤ 10 and plot Z in Fig. <ref>a. Z is zero for g/J ≪ -1 and features a maximum around g/J = 2, where sizable values Z ∼ 0.4 are realized. This confirms the formation of a spinon-holon bound state for g>0. Interestingly, the bare interaction g at the approximate location of the resonance coincides with the transition from the unbound to the bound regime in the free molecule limit, see Fig. <ref>a which is consistent with the bare interaction g driving the transition. To study the microscopic structure of the bound state further, we calculate the spin-hole correlator ⟨n̂_ȷ^hŜ^z_ȷ + d⟩ for g/J=0 and 2. For g/J=0, we observe a large bound state that extends over approximately 20 lattice sites, still well below the system size L=80. The size of the bound state decreases significantly down to a few lattice sites for g/J=2. Thus, the existence and the size of a bound spinon-holon state are directly tunable by g/J. Emergent few-body physics.—The calculated ARPES spectra feature not only Fermi polarons but also a rich set of further branches, some of which we will identify here. We show the majority (σ = ↑) ARPES spectrum for different spin-hole interactions ∝ g at m=25/81 in Fig. <ref>a. In addition to the Fermi polarons, we identify 4 branches where a spinon and the holon form a molecular state, signaled by a linear dependence of the energy E ∼± |g|. We call the state where the holon binds to a ↑-spinon (↓-spinon) a majority (minority) molecule. Both molecule types can be either attractively (negative energy) or repulsively (positive energy) bound. A repulsively bound molecule is an excited state that can be long-lived due to a lack of low-order resonant decay processes. We show the minority (σ = ↓) ARPES spectrum for different spin-hole interactions ∝ g at m=25/81 in Fig. <ref>b. In addition to the attractive and repulsive majority molecule, we observe a polaron-spinon continuum enclosed by a cosine dispersion in the momentum-dependent ARPES spectrum at fixed g/J, see inset of Fig. <ref>b. Further, we observe spinon particle-hole dressing as a continuum of excited states above the attractive majority molecule, where the interaction between the impurity (i.e. the holon) with the Luttinger liquid results in particle-hole excitations (i.e. collective excitations of spinons). Attentive readers may have noticed that the minority molecule and the Fermi polarons are not visible in the minority spectrum. This is due to their low spectral weight as a result of vanishing wave function overlaps in the ARPES spectrum, as we discuss further in <cit.> Sec. <ref>. More detailed studies of the molecular branches and the other few-body excitations not discussed here are tasks left for future research. Discussion and outlook.— We have investigated a doped Majumdar-Ghosh model as a paradigmatic frustrated quantum magnet. The model can be experimentally implemented using e.g. ultracold dipolar molecules <cit.>, and for g=0 using ultracold fermions in quantum gas microscopes <cit.>. Using ARPES based on MPS, we have found signatures of a spinon-holon Feshbach-like resonance which can be probed by tuning the density of unpaired spinons (i.e. magnetization). We have also found emergent few-body physics in the ARPES spectra, including different (partly repulsively bound) molecular branches of a spinon and a holon. Our results suggest wider applicability of this emergent Feshbach-like resonance to study the physics of superconducting phases or parton unbinding. It is potentially relevant to clarify the physics of doped quantum spin liquids and has potential applications in the context of heavy electrons <cit.> and cuprates. We thank T. Blatz, M. Grundner, L. Homeier, M. Kebrič, N. Mostaan, S. Paeckel, F. Palm, and H. Schlömer for fruitful discussions. This research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement no 948141) — ERC Starting Grant SimUcQuam, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2111 – 390814868. 42 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Anderson(1973)]Anderson1973 author author P. Anderson, title title Resonating valence bonds: A new kind of insulator? https://doi.org/10.1016/0025-5408(73)90167-0 journal journal Materials Research Bulletin volume 8, pages 153–160 (year 1973)NoStop [Fazekas and Anderson(1974)]Fazekas1974 author author P. Fazekas and author P. W. Anderson, title title On the ground state properties of the anisotropic triangular antiferromagnet, 10.1080/14786439808206568 journal journal The Philosophical Magazine: A Journal of Theoretical Experimental and Applied Physics volume 30, pages 423–440 (year 1974)NoStop [Anderson(1987)]Anderson1987 author author P. W. Anderson, title title The resonating valence bond state in la2cuo4 and superconductivity, 10.1126/science.235.4793.1196 journal journal Science volume 235, pages 1196–1198 (year 1987)NoStop [Baskaran et al.(1987)Baskaran, Zou, and Anderson]Baskaran1987 author author G. Baskaran, author Z. Zou, and author P. Anderson, title title The resonating valence bond state and high-tc superconductivity — a mean field theory, https://doi.org/10.1016/0038-1098(87)90642-9 journal journal Solid State Communications volume 63, pages 973–976 (year 1987)NoStop [Baskaran and Anderson(1988)]Baskaran1988 author author G. Baskaran and author P. W. Anderson, title title Gauge theory of high-temperature superconductors and strongly correlated fermi systems, 10.1103/PhysRevB.37.580 journal journal Phys. Rev. B volume 37, pages 580–583 (year 1988)NoStop [Senthil et al.(2003)Senthil, Sachdev, and Vojta]Senthil2003 author author T. Senthil, author S. Sachdev, and author M. Vojta, title title Fractionalized fermi liquids, 10.1103/PhysRevLett.90.216403 journal journal Phys. Rev. Lett. volume 90, pages 216403 (year 2003)NoStop [Senthil et al.(2004)Senthil, Vojta, and Sachdev]Senthil2004 author author T. Senthil, author M. Vojta, and author S. Sachdev, title title Weak magnetism and non-fermi liquids near heavy-fermion critical points, 10.1103/PhysRevB.69.035111 journal journal Phys. Rev. B volume 69, pages 035111 (year 2004)NoStop [Punk et al.(2015)Punk, Allais, and Sachdev]Punk2015 author author M. Punk, author A. Allais, and author S. Sachdev, title title Quantum dimer model for the pseudogap metal, 10.1073/pnas.1512206112 journal journal Proceedings of the National Academy of Sciences volume 112, pages 9552–9557 (year 2015)NoStop [Bohrdt et al.(2021a)Bohrdt, Demler, and Grusdt]Bohrdt2021 author author A. Bohrdt, author E. Demler, and author F. Grusdt, title title Rotational resonances and regge-like trajectories in lightly doped antiferromagnets, 10.1103/PhysRevLett.127.197004 journal journal Phys. Rev. Lett. volume 127, pages 197004 (year 2021a)NoStop [Bohrdt et al.(2023)Bohrdt, Demler, and Grusdt]Bohrdt2023 author author A. Bohrdt, author E. Demler, and author F. Grusdt, title title Dichotomy of heavy and light pairs of holes in the t-j model, 10.1038/s41467-023-43453-2 journal journal Nature Communications volume 14, pages 8017 (year 2023)NoStop [Feshbach(1962)]Feshbach1962 author author H. Feshbach, title title A unified theory of nuclear reactions. ii, https://doi.org/10.1016/0003-4916(62)90221-X journal journal Annals of Physics volume 19, pages 287–313 (year 1962)NoStop [Courteille et al.(1998)Courteille, Freeland, Heinzen, van Abeelen, and Verhaar]Courteille1998 author author P. Courteille, author R. S. Freeland, author D. J. Heinzen, author F. A. van Abeelen, and author B. J. Verhaar, title title Observation of a feshbach resonance in cold atom scattering, 10.1103/PhysRevLett.81.69 journal journal Phys. Rev. Lett. volume 81, pages 69–72 (year 1998)NoStop [Inouye et al.(1998)Inouye, Andrews, Stenger, Miesner, Stamper-Kurn, and Ketterle]Inouye1998 author author S. Inouye, author M. R. Andrews, author J. Stenger, author H. J. Miesner, author D. M. Stamper-Kurn, and author W. Ketterle, title title Observation of feshbach resonances in a bose–einstein condensate, 10.1038/32354 journal journal Nature volume 392, pages 151–154 (year 1998)NoStop [Bloch et al.(2008)Bloch, Dalibard, and Zwerger]Bloch2008 author author I. Bloch, author J. Dalibard, and author W. Zwerger, title title Many-body physics with ultracold gases, 10.1103/RevModPhys.80.885 journal journal Rev. Mod. Phys. volume 80, pages 885–964 (year 2008)NoStop [Chin et al.(2010)Chin, Grimm, Julienne, and Tiesinga]Chin2010 author author C. Chin, author R. Grimm, author P. Julienne, and author E. Tiesinga, title title Feshbach resonances in ultracold gases, 10.1103/RevModPhys.82.1225 journal journal Rev. Mod. Phys. volume 82, pages 1225–1286 (year 2010)NoStop [Sidler et al.(2017)Sidler, Back, Cotlet, Srivastava, Fink, Kroner, Demler, and Imamoglu]Sidler2017 author author M. Sidler, author P. Back, author O. Cotlet, author A. Srivastava, author T. Fink, author M. Kroner, author E. Demler, and author A. Imamoglu, title title Fermi polaron-polaritons in charge-tunable atomically thin semiconductors, 10.1038/nphys3949 journal journal Nature Physics volume 13, pages 255–261 (year 2017)NoStop [Schwartz et al.(2021)Schwartz, Shimazaki, Kuhlenkamp, Watanabe, Taniguchi, Kroner, and Imamoğlu]Schwartz2021 author author I. Schwartz, author Y. Shimazaki, author C. Kuhlenkamp, author K. Watanabe, author T. Taniguchi, author M. Kroner, and author A. Imamoğlu, title title Electrically tunable feshbach resonances in twisted bilayer semiconductors, 10.1126/science.abj3831 journal journal Science volume 374, pages 336–340 (year 2021)NoStop [Squire and March(2010)]Squire2010 author author R. H. Squire and author N. H. March, title title Microscopic model of cuprate superconductivity, https://doi.org/10.1002/qua.22853 journal journal International Journal of Quantum Chemistry volume 110, pages 2808–2822 (year 2010), http://arxiv.org/abs/https://onlinelibrary.wiley.com/doi/pdf/10.1002/qua.22853 https://onlinelibrary.wiley.com/doi/pdf/10.1002/qua.22853 NoStop [Homeier et al.(2023)Homeier, Lange, Demler, Bohrdt, and Grusdt]Homeier2023 author author L. Homeier, author H. Lange, author E. Demler, author A. Bohrdt, and author F. Grusdt, @noop title Feshbach hypothesis of high-tc superconductivity in cuprates, (year 2023), http://arxiv.org/abs/2312.02982 arXiv:2312.02982 [cond-mat.str-el] NoStop [Homeier et al.(2024a)Homeier, Bermes, and Grusdt]Homeier2024 author author L. Homeier, author P. Bermes, and author F. Grusdt, title title Scattering theory of mesons in doped antiferromagnetic mott insulators: Multichannel perspective and feshbach resonance, 10.1103/PhysRevB.109.125135 journal journal Phys. Rev. B volume 109, pages 125135 (year 2024a)NoStop [Majumdar and Ghosh(1969a)]Majumdar1969a author author C. K. Majumdar and author D. K. Ghosh, title title On Next‐Nearest‐Neighbor Interaction in Linear Chain. I, 10.1063/1.1664978 journal journal Journal of Mathematical Physics volume 10, pages 1388–1398 (year 1969a)NoStop [Majumdar and Ghosh(1969b)]Majumdar1969b author author C. K. Majumdar and author D. K. Ghosh, title title On Next‐Nearest‐Neighbor Interaction in Linear Chain. II, 10.1063/1.1664979 journal journal Journal of Mathematical Physics volume 10, pages 1399–1402 (year 1969b)NoStop [Gorshkov et al.(2011)Gorshkov, Manmana, Chen, Demler, Lukin, and Rey]Gorshkov2011 author author A. V. Gorshkov, author S. R. Manmana, author G. Chen, author E. Demler, author M. D. Lukin, and author A. M. Rey, title title Quantum magnetism with polar alkali-metal dimers, 10.1103/PhysRevA.84.033619 journal journal Phys. Rev. A volume 84, pages 033619 (year 2011)NoStop [Carroll et al.(2024)Carroll, Hirzler, Miller, Wellnitz, Muleady, Lin, Zamarski, Wang, Bohn, Rey, and Ye]Carroll2024 author author A. N. Carroll, author H. Hirzler, author C. Miller, author D. Wellnitz, author S. R. Muleady, author J. Lin, author K. P. Zamarski, author R. R. W. Wang, author J. L. Bohn, author A. M. Rey, and author J. Ye, @noop title Observation of generalized t-j spin dynamics with tunable dipolar interactions, (year 2024), http://arxiv.org/abs/2404.18916 arXiv:2404.18916 [cond-mat.quant-gas] NoStop [Jepsen et al.(2021)Jepsen, Ho, Amato-Grill, Dimitrova, Demler, and Ketterle]Jepsen2021 author author P. N. Jepsen, author W. W. Ho, author J. Amato-Grill, author I. Dimitrova, author E. Demler, and author W. Ketterle, title title Transverse spin dynamics in the anisotropic heisenberg model realized with ultracold atoms, 10.1103/PhysRevX.11.041054 journal journal Phys. Rev. X volume 11, pages 041054 (year 2021)NoStop [Homeier et al.(2024b)Homeier, Harris, Blatz, Geier, Hollerith, Schollwöck, Grusdt, and Bohrdt]Homeier2024_2 author author L. Homeier, author T. J. Harris, author T. Blatz, author S. Geier, author S. Hollerith, author U. Schollwöck, author F. Grusdt, and author A. Bohrdt, title title Antiferromagnetic bosonic t-j models and their quantum simulation in tweezer arrays, 10.1103/PhysRevLett.132.230401 journal journal Phys. Rev. Lett. volume 132, pages 230401 (year 2024b)NoStop [Sup()]Supplements @noop journal See Supplemental Material at [URL will be inserted by publisher] for details on the ARPES calculations NoStop [Schollwöck(2011)]Schollwoeck2011 journal author author U. Schollwöck, title title The density-matrix renormalization group in the age of matrix product states, https://doi.org/10.1016/j.aop.2010.09.012 journal journal Annals of Physics volume 326, pages 96–192 (year 2011), note january 2011 Special IssueNoStop [Yang and White(2020)]Yang2020 author author M. Yang and author S. R. White, title title Time-dependent variational principle with ancillary krylov subspace, 10.1103/PhysRevB.102.094315 journal journal Phys. Rev. B volume 102, pages 094315 (year 2020)NoStop [Paeckel et al.(2019)Paeckel, Köhler, Swoboda, Manmana, Schollwöck, and Hubig]Paeckel2019 author author S. Paeckel, author T. Köhler, author A. Swoboda, author S. R. Manmana, author U. Schollwöck, and author C. Hubig, title title Time-evolution methods for matrix-product states, https://doi.org/10.1016/j.aop.2019.167998 journal journal Annals of Physics volume 411, pages 167998 (year 2019)NoStop [Barthel et al.(2009)Barthel, Schollwöck, and White]Barthel2009 author author T. Barthel, author U. Schollwöck, and author S. R. White, title title Spectral functions in one-dimensional quantum systems at finite temperature using the density matrix renormalization group, 10.1103/PhysRevB.79.245101 journal journal Phys. Rev. B volume 79, pages 245101 (year 2009)NoStop [Hubig et al.(2024)Hubig, Lachenmaier, Linden, Reinhard, Stenzel, Swoboda, Grundner, and Mardazad]Syten author author C. Hubig, author F. Lachenmaier, author N.-O. Linden, author T. Reinhard, author L. Stenzel, author A. Swoboda, author M. Grundner, and author S. Mardazad, https://syten.eu journal journal The SyTen Toolkit (year 2024)NoStop [Massignan et al.(2014)Massignan, Zaccanti, and Bruun]Massignan2014 author author P. Massignan, author M. Zaccanti, and author G. M. Bruun, title title Polarons, dressed molecules and itinerant ferromagnetism in ultracold fermi gases, 10.1088/0034-4885/77/3/034401 journal journal Reports on Progress in Physics volume 77, pages 034401 (year 2014)NoStop [Bohrdt et al.(2021b)Bohrdt, Homeier, Reinmoser, Demler, and Grusdt]Bohrdt2021_1 author author A. Bohrdt, author L. Homeier, author C. Reinmoser, author E. Demler, and author F. Grusdt, title title Exploration of doped quantum magnets with ultracold atoms, https://doi.org/10.1016/j.aop.2021.168651 journal journal Annals of Physics volume 435, pages 168651 (year 2021b), note special issue on Philip W. AndersonNoStop [Tarruell and Sanchez-Palencia(2018)]Tarruell2018 author author L. Tarruell and author L. Sanchez-Palencia, title title Quantum simulation of the hubbard model with ultracold fermions in optical lattices, https://doi.org/10.1016/j.crhy.2018.10.013 journal journal Comptes Rendus Physique volume 19, pages 365–393 (year 2018), note quantum simulation / Simulation quantiqueNoStop [Esslinger(2010)]Esslinger2010 author author T. Esslinger, title title Fermi-hubbard physics with atoms in an optical lattice, https://doi.org/10.1146/annurev-conmatphys-070909-104059 journal journal Annual Review of Condensed Matter Physics volume 1, pages 129–152 (year 2010)NoStop [Prichard et al.(2024)Prichard, Spar, Morera, Demler, Yan, and Bakr]Prichard2024 author author M. L. Prichard, author B. M. Spar, author I. Morera, author E. Demler, author Z. Z. Yan, and author W. S. Bakr, title title Directly imaging spin polarons in a kinetically frustrated hubbard system, 10.1038/s41586-024-07356-6 journal journal Nature volume 629, pages 323–328 (year 2024)NoStop [Xu et al.(2023)Xu, Kendrick, Kale, Gang, Ji, Scalettar, Lebrat, and Greiner]Xu2023 author author M. Xu, author L. H. Kendrick, author A. Kale, author Y. Gang, author G. Ji, author R. T. Scalettar, author M. Lebrat, and author M. Greiner, title title Frustration- and doping-induced magnetism in a fermi–hubbard simulator, 10.1038/s41586-023-06280-5 journal journal Nature volume 620, pages 971–976 (year 2023)NoStop [Yang et al.(2021)Yang, Liu, Mongkolkiattichai, and Schauss]Yang2021 author author J. Yang, author L. Liu, author J. Mongkolkiattichai, and author P. Schauss, title title Site-resolved imaging of ultracold fermions in a triangular-lattice quantum gas microscope, 10.1103/PRXQuantum.2.020344 journal journal PRX Quantum volume 2, pages 020344 (year 2021)NoStop [Löhneysen et al.(1994)Löhneysen, Pietrus, Portisch, Schlager, Schröder, Sieck, and Trappmann]Loehneysen1994 author author H. v. Löhneysen, author T. Pietrus, author G. Portisch, author H. G. Schlager, author A. Schröder, author M. Sieck, and author T. Trappmann, title title Non-fermi-liquid behavior in a heavy-fermion alloy at a magnetic instability, 10.1103/PhysRevLett.72.3262 journal journal Phys. Rev. Lett. volume 72, pages 3262–3265 (year 1994)NoStop [Löhneysen(1996)]Loehneysen1996 author author H. v. Löhneysen, title title Non-fermi-liquid behaviour in the heavy-fermion system, 10.1088/0953-8984/8/48/003 journal journal Journal of Physics: Condensed Matter volume 8, pages 9689 (year 1996)NoStop [Bohrdt et al.(2020)Bohrdt, Demler, Pollmann, Knap, and Grusdt]Bohrdt2020 author author A. Bohrdt, author E. Demler, author F. Pollmann, author M. Knap, and author F. Grusdt, title title Parton theory of angle-resolved photoemission spectroscopy spectra in antiferromagnetic mott insulators, 10.1103/PhysRevB.102.035139 journal journal Phys. Rev. B volume 102, pages 035139 (year 2020)NoStop     Supplemental Material: Emergent Feshbach-like interactions in a doped Majumdar-Ghosh model Fabian Grusdt June 17, 2024 ========================================================================================== § CALCULATING THE ARPES SPECTRUM & MPS CONVERGENCE We perform microscopic MPS simulations of Hamiltonian (<ref>) for magnetization m=13/81,25/81,37/81,51/81 and spin-hole interaction -5 < g/J < 5. Note that in our convention m = 2 ⟨∑_ȷŜ^z_ȷ⟩ / L > 0, i.e. each spin contributes ± 1/L to m. Our procedure is similar to <cit.> but here we use the SyTen toolkit <cit.>. To probe signatures of Feshbach-like resonances, we calculate the majority (σ = ↑) and minority (σ = ↓) ARPES spectrum Eq. (<ref>): A_σ(,̨ω) = 1/2 πRe∫_- ∞^∞dt e^i ω t A_σ(,̨ t), where A_σ(,̨ t) = 1/L∑_ı, ȷ e^-i (̨ı - ȷ )⟨ψ_0| e^i ℋ̂ tĉ_ȷ, σ^† e^-i ℋ̂ tĉ_ı, σ|ψ_0⟩. First, we calculate the MPS groundstate (GS) |ψ_0⟩ without holes using DMRG with bond dimension χ=1000. The GS is well converged with variance ⟨ℋ̂^2 ⟩ - ⟨ℋ̂⟩^2 < 10^-11. Then, we dope a hole into the system and calculate the time evolution up to times t = 20/J using generalized subspace expansion (GSE) <cit.> for the first few time steps and then using the two-site time-dependent variational principle (TDVP2) <cit.> for later times. We choose the time step Δ t = 0.02/J. In Fig. <ref>, we compare the minority Green's function ⟨ e^i ℋ̂ t ĉ^†_L/2 + d e^-i ℋ̂ t ĉ_L/2⟩ for bond dimensions χ=800 and 1000, at time t=13/J. The difference is ∼ 10^-5, thus the time evolution is well converged. We calculate the MPS overlap with the original GS |ψ_0⟩. Next, we perform a Fourier transform into momentum space to obtain A_σ(,̨ t). To prevent unphysical oscillations in the time Fourier transform resulting from a cutoff at t=20/J, we apply linear extrapolation <cit.>. As a tradeoff between the error of the TDVP2 time evolution and the error of extrapolating the signal, we choose to suppress parts of our original signal and the complete extrapolated signal. We exponentially suppress times after t=40/3J with a Gaussian w(t) = exp(-2(t/t_0)^2) where t_0 = 20/3J. Finally, we perform a time Fourier transform into energy space which grants us access to the full ARPES spectrum A_σ(,̨ω). § ARPES SPECTRA In Fig. <ref>, we show the interaction-dependent majority (σ = ↑, a-d) and minority (σ = ↓, e-h) ARPES spectra at fixed momentum k=1.0 at different magnetizations. We label branches according to their qualitative behavior and illustrate the fitting of the polaron branches. Here we also briefly discuss the spectral weights of the polaron and the molecular branches. The minority molecule is not visible in the minority spectrum because of its vanishing spectral weight: We remove a ↓-spinon from a GS where practically all ↓-spinons are bound in singlets, see Fig. <ref>e-h. Conversely, the majority molecule ARPES weight increases for increasing m since we have more free ↑-spinons in the GS. This is especially relevant for the majority spectrum where the entire majority molecule spectral weight originates from free ↑-spinons in the GS, see Fig. <ref>a-d The polaron branches are very dominant in the highly magnetized majority spectrum because of a high chance to remove a free ↑-spinon. Vice versa, in a highly magnetized minority spectrum, the removal of a down spin will break a singlet; the holon and the adjacent ↑-spinon form a molecule, resulting in a vanishing polaron spectral weight.