Dataset columns: id (string, 10 chars) · title (string, 5–246 chars) · abstract (string, 42–3.32k chars) · authors (string, 5–21.5k chars) · published_date (timestamp[s]) · link (string, 33–34 chars) · markdown (string, 140–1.08M chars) · abstract_ja (string, 0–1.35k chars)
2309.05472
LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
Self-supervised learning (SSL) is at the origin of unprecedented improvements in many different domains including computer vision and natural language processing. Speech processing drastically benefitted from SSL as most of the current domain-related tasks are now being approached with pre-trained models. This work introduces LeBenchmark 2.0 an open-source framework for assessing and building SSL-equipped French speech technologies. It includes documented, large-scale and heterogeneous corpora with up to 14,000 hours of heterogeneous speech, ten pre-trained SSL wav2vec 2.0 models containing from 26 million to one billion learnable parameters shared with the community, and an evaluation protocol made of six downstream tasks to complement existing benchmarks. LeBenchmark 2.0 also presents unique perspectives on pre-trained SSL models for speech with the investigation of frozen versus fine-tuned downstream models, task-agnostic versus task-specific pre-trained models as well as a discussion on the carbon footprint of large-scale model training. Overall, the newly introduced models trained on 14,000 hours of French speech outperform multilingual and previous LeBenchmark SSL models across the benchmark but also required up to four times more energy for pre-training.
Titouan Parcollet, Ha Nguyen, Solene Evain, Marcely Zanon Boito, Adrien Pupier, Salima Mdhaffar, Hang Le, Sina Alisamir, Natalia Tomashenko, Marco Dinarelli, Shucong Zhang, Alexandre Allauzen, Maximin Coavoux, Yannick Esteve, Mickael Rouvier, Jerome Goulian, Benjamin Lecouteux, Francois Portet, Solange Rossato, Fabien Ringeval, Didier Schwab, Laurent Besacier
2023-09-11T14:13:09
http://arxiv.org/abs/2309.05472v2
LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech ###### Abstract Self-supervised learning (SSL) is at the origin of unprecedented improvements in many different domains including computer vision and natural language processing. Speech processing drastically benefitted from SSL as most of the current domain-related tasks are now being approached with pre-trained models. This work introduces _LeBenchmark 2.0_, an open-source framework for assessing and building SSL-equipped French speech technologies. It includes documented, large-scale and heterogeneous corpora with up to 14,000 hours of heterogeneous speech, ten pre-trained SSL wav2vec 2.0 models containing from 26 million to one billion learnable parameters shared with the community, and an evaluation protocol made of six downstream tasks to complement existing benchmarks. _LeBenchmark 2.0_ also presents unique perspectives on pre-trained SSL models for speech with the investigation of frozen versus fine-tuned downstream models, task-agnostic versus task-specific pre-trained models as well as a discussion on the carbon footprint of large-scale model training. Keywords: Self-supervised learning, speech processing, dataset, speech benchmark, French language. PACS: 89.20.Ff, 07.05.Mh. MSC: 68T07, 68-04. Journal: Computer Speech & Language ## 1 Introduction By solving pretext tasks automatically extracted from massive unlabeled data, deep learning systems powered by Self-Supervised Learning (SSL) deliver groundbreaking performance across a wide range of domains including audio, speech, and language processing [1; 2; 3], computer vision [4; 5], robotics [6], embedded devices and sensors [7], and medicine [8; 9]. In the specific context of speech processing, almost every sub-field has been largely impacted by newly available large pre-trained SSL models. Indeed, impressive improvements and state-of-the-art performance on competitive datasets have been reported for Automatic Speech Recognition (ASR) [10; 11; 12], Automatic Emotion Recognition (AER) [13; 14; 15], Automatic Speaker Verification (ASV) [16; 17; 15], Automatic Speech Translation (AST) [18; 19], Spoken Language Understanding (SLU) [15; 20; 21; 22], Speech Enhancement (SE) [23; 24; 25], Speech Separation (SS) [23; 26], and many others. Despite most leaderboards being conceived around the English language, SSL has also been reported to be remarkably useful for under-resourced languages, as demonstrated by A. Babu et al. [19] and H. Nguyen et al. [18], drastically increasing the accessibility of cutting-edge speech technologies across many languages. Naturally, the flourishing field of SSL for speech calls for fair comparisons and standardized evaluation protocols properly assessing the added value of each newly introduced architecture. Following other early-adopter domains including Natural Language Processing (NLP) with, for instance, the GLUE [27] and SuperGLUE benchmarks [28], a first English-only evaluation suite appeared: SUPERB [15]. In the latter, 13 different tasks based on well-known datasets have been bundled together to benchmark novel SSL models following a common fine-tuning evaluation protocol. Nonetheless, SUPERB does not standardize the pre-training process and hyperparameters, and models trained on hundreds of thousands of hours of speech appear in the same leaderboard as those learned with a few hundred hours or even in different languages. 
SUPERB has been extended to generative tasks such as speech enhancement following the same standardized evaluation protocol in SUPERB-SG [26]. _LeBenchmark_ approached the issue of SSL benchmarking in French from a unified perspective by freezing the available pre-training data as well as the fine-tuning procedure [29; 14]. It also introduced a set of pre-trained SSL models available to the community including the largest and best-performing French SSL systems. Aside from these two attempts, most SSL models currently are being compared in an arbitrary and heterogeneous fashion across different pre-training and fine-tuning datasets, evaluation protocols, and hyperparameters tuning. As a matter of fact, the standardization and available resources revolving around SSL evaluation remain scarce, and it is of crucial interest to the community that further efforts are put in those directions. Indeed, the scientific value of a released model may only be validated if proven against a rigorous, fair and replicable evaluation protocol. With the first version of _LeBenchmark_[29; 14], and following the definition of D. Schlangen [30], we aimed at providing the necessary foundations for the investigation and comparison of SSL models towards French-based downstream tasks. _LeBenchmark 2.0_ builds upon the latter accomplishment to provide a standardized, fully replicable, and extended framework for the assessment and development of SSL representations of French speech. In particular, we release a well-curated pre-training set containing up to fourteen thousand hours of heterogeneous French speech (Section 3), three novel pre-trained SSL models ranging from thirty million to one billion parameters (Section 4), as well as two new evaluation tasks for ASV and syntactic analysis of spoken French (Section 5). _LeBenchmark 2.0_ also widens the discussions on the topics of the energy footprint of SSL models (Section 6), the difference between language-specific and language-agnostic pre-training (Section 5). In short, _LeBenchmark 2.0_ is a collective attempt at unifying the community of SSL for the French language around common models, datasets and evaluation protocols. ## 2 Evaluating SSL models with LeBenchmark 2.0: background and motivations SSL for audio and speech is the process of defining and solving unsupervised proxy tasks, also referred to as pretext tasks or workers, motivated by the nature of the input signal itself. Proxy tasks define both an objective and a transformation applied to the training samples to extract the training targets. In practice, SSL models are first pre-trained following the latter strategy before turning into frozen or fine-tuned feature extractors for common supervised learning tasks. A major benefit of SSL approaches is the ability to leverage the ever-growing mass of available unlabeled data to drastically increase the performance observed on much more expensive and complex to obtain human-labeled tasks. In the context of audio and speech, SSL-based systems occupy the top ranks of most leaderboards and are widely adopted by the community with up to a tenth of recent proceedings from top-tier speech conferences (i.e. year 2023) containing at least one reference to SSL. SSL strategies for audio and speech may be divided into four different families: generative, predictive, contrastive, and multi-task. Generative methods aim at reconstructing the input audio signal after various corruptions ranging from masking to additive noises. 
For instance, Mockingjay [31], Tera [20], DecoAR 2.0 [32], Speech-XLNet [33], MPC, pMPC [34] and data2vec [35] optimize their parameters towards the reconstruction or reordering of masked/shuffled input frames while Autoregressive Predictive Coding (APC) [36] reconstructs the input signal. Predictive systems, including WavLM [12], HuBERT [10], or BEST-RQ [37] aim at predicting unsupervised discrete labels (e.g. clustering) obtained from the input samples. Contrastive approaches such as wav2vec 2.0 [11] or Contrastive Predicting Coding (CPC) [38], on the other hand, optimize their latent representation to facilitate the distinction between positive and negative candidates originating from the given signal. Finally, multi-task SSL proposes to combine different objectives or modalities to build a rich feature extractor. For example, PASE+ [39] merges up to ten different workers ranging from signal reconstruction to contrastive learning during the pre-training process. Such a rich landscape of models may be seen both as a curse and a blessing by the scientific community. It offers a wide range of possibilities and research directions but also suffers from a strong lack of evaluation standards. In fact, even the simple task of identifying the best-performing paradigm for a specific downstream task remains impossible with the current state of the art in SSL for audio and speech. Indeed, the construction and evaluation of SSL models may vary along numerous axes, hence drastically hindering the ease of comparison between novel solutions. _LeBenchmark 2.0_ specifically aims at standardizing those axes to speed up, facilitate, and democratize research around SSL pre-training in French. More precisely, the life cycle of any SSL model is comprised of three major events: pre-training data gathering, training, and downstream evaluation. Ideally, the two latter steps should be merged, enabling the evaluation and comparison of SSL models at pre-training, hence alleviating a time-consuming downstream assessment. In practice, however, this idea appears as a major scientific and technical challenge as the literature relies entirely on the above-described three-step process. Unfortunately, each step may introduce important variations leading to heterogeneous and unreplicable evaluation protocols. For instance, PASE+ was trained on 100 hours of speech while HuBERT processed 60,000 hours, making it easy to define the best-performing model but practically impossible to distinguish the best pre-training strategy. Other variations include, but are not limited to: differences in pre-training languages and data type during step one (e.g. spontaneous against read speech), compute resources at step two (e.g., a single Nvidia GTX 1080 Ti for Mockingjay against 128 Nvidia Tesla V100 for wav2vec 2.0) or the lack of standards during downstream fine-tuning at step three (e.g., fine-tuning against frozen feature extractors, pre-training dataset included or excluded from the downstream evaluation, or simply the list of downstream tasks to include). Ultimately, such requirements, and particularly the need for large compute resources limit the access to SSL pre-training research to a tiny subset of well-equipped institutions and companies, drastically limiting the exploration and emergence of novel paradigms. Aside from pre-training efficiency, the community naturally attempted to standardize the third step while developing and comparing their models. 
For instance, ASR evaluation using the Librispeech dataset can be found for MockingJay, wav2vec 2.0, HuBERT, or WavLM, while speaker recognition with VoxCeleb has been reported in PASE+ and MockingJay. Nonetheless, in most cases, the employed downstream architectures, evaluation protocols, or hyper-parameters are entirely different, making it impossible to distinguish models that differ strongly in their pre-training process (e.g. PASE+ and HuBERT). This also prevents a strict comparison between close-performing models (e.g. WavLM and HuBERT). The increasingly adopted SUPERB benchmark [15] defines a set of English downstream tasks to compare SSL models, hence facilitating step three. Despite a long list of 13 tasks, SUPERB suffers from a constrained fine-tuning procedure that forces all pre-trained SSL feature extractors to remain frozen and use a fixed decoder to solve the task of interest. Unfortunately, state-of-the-art SSL results and real-life use cases mostly, if not only, come with a joint fine-tuning of the SSL extractor and the decoder. S. Zaiem et al. [40] have also demonstrated that freezing all the downstream architectures and reducing them to a tiny subset could lead to misleading leaderboard rankings. Since the data preparation of step one is not standardized within SUPERB, it remains challenging to compare different SSL pre-training methodologies as the amount and quality of the data often vary between available SSL models. _LeBenchmark_ is the first attempt at standardizing both steps one and three as well as providing replicable and competitive baselines for step two for further investigation from the community interested in the French language. Finally, the current trend in large-scale SSL is to associate hundreds of languages [19] during pre-training without any regard to potential biases or degradation in performance induced by such a mixing. However, it remains unclear if combining unrelated and potentially distant dialects may harm the performance observed compared to a model trained on a single and well-resourced language (e.g. English). In particular, with _LeBenchmark_, we decided to benefit from the available unsupervised and heterogeneous French speech corpora available to train multiple language-specific SSL models [29, 14], and we have demonstrated that such well-designed models usually outperform massively multilingual alternatives. Interestingly enough, and despite French being the fifth most spoken language, _LeBenchmark_ is the only attempt at standardizing the data collection, pre-training, and evaluation phases of French SSL models. With _LeBenchmark 2.0_ we wish to further enhance our already adopted unified SSL framework for the French community, as both industry and academic institutions delivering state-of-the-art speech technologies are now building SSL-powered solutions. More precisely, _LeBenchmark 2.0_ extends [29, 14] in every aspect composing the framework and the three steps of the SSL life-cycle: * _SSL data collection_. [29] and [14] offered carefully curated and documented corpora with 3,000 and 7,000 hours of French respectively. _LeBenchmark 2.0_ extends the openly available pretraining resources to 14,000 hours with the same quality of documentation. * _SSL models pre-training_. [29] and [14] delivered up to seven pre-trained SSL models to the community based on the well-known Fairseq toolkit [41]. 
Following our newly introduced 14,000 hours of data, _LeBenchmark 2.0_ brings three more models, of which two are the largest ones available, to the community. Pre-training and model sharing is conducted with HuggingFace and SpeechBrain [42], two frameworks renowned for their open-science-minded approach. We also propose to extend the analysis and discussion on the energy footprint of large SSL models. * _SSL Benchmarking._[29] and [14] released four standardized tasks to evaluate and compare SSL models in French: ASR, AST, SLU, and AER. _LeBenchmark 2.0_ extends this evaluation protocol to six tasks with the introduction of automatic speaker verification and syntactic analysis. We also widened the comparison with the state-of-the-art, and language-specific against language-agnostic models. ## 3 Gathering large collections of datasets Up until recently, it was difficult to find publicly available large datasets of French speech (with the exception of EPAC). Recently, large multilingual corpora that include French have been made available, such as MLS [43] (1,096 h), or voxpopuli [44] (+4,500 h). However, these are restricted to either read or well-prepared speech, failing to provide diversity in the speech samples, such as accented, spontaneous and/or affective speech. In this work, we gathered a large variety of speech corpora in French that cover different accents (MLS, African Accented Speech, CaFE), acted emotions (GEMEP, CaFE, Att-Hack), telephone dialogues (PORTMEDIA), read (MLS, African Accented French, MaSS) and spontaneous sentences (CFPP2000, ESLO2, MPF, TCOF, NCCFr), broadcast speech (EPAC) and professional speech (Voxpopuli). Furthermore, to extend the amount of speech data used for pre-training by around 7k hours we also collected the audiocite.net dataset of non-professional read speech. Compared to MLS and Voxpopuli, our dataset is more diverse, carefully sourced and contains detailed metadata (speech type, and speaker gender). Moreover, it has a more realistic representation of speech turns in real life, compared to MLS and VoxPopuli. Each dataset is documented and can be accessed at least for research purposes.1 This section summarizes the datasets collected and how they were organized for the pre-training step and gives a short overview of the new _audiocite.net_ dataset. Footnote 1: Some of them being released by ELRA, they are available for a small fee. ### Overview of the Datasets Used for Pre-training Table 1 summarizes the statistics of the complete list of datasets considered for the study. The datasets have been organized in five main groups. **Small dataset (\(\approx\) 1K hours)** is only composed of the MLS corpus for comparison with Wav2Vec2.0 [11] which uses only read English speech. It is also gender-balanced. **Medium-clean dataset (\(\approx\) 2.7K hours)** contains MLS and EPAC only to enable further investigation on the impact of spontaneous speech on SSL representations. EPAC is a corpus of conversational speech in broadcast news. **Medium dataset (\(\approx\) 3K hours)** includes 2,933 h of speech, from which 1,115 h is read speech, 1,626 h broadcast speech, 123 h spontaneous speech, 38 h acted telephone dialogues, and 29 h acted emotional speech. Regarding gender, we collected 1,824 h from male speakers, 1,034 h from female speakers, and 74 h from unknown gender. **Large dataset (\(\approx\) 7.7K hours)** has 4 additional corpora: MaSS, NCCFr and Voxpopuli (unlabeled + transcribed). 
It includes 7,739 h of speech, of which 1,135 h is read speech, 1,626 h broadcast speech, 165 h spontaneous speech, 38 h acted telephone dialogues, 29 h acted emotional speech, and 4,744 h professional speech. Except for NCCFr, no information about gender is given in these datasets. **New Extra large dataset (\(\approx\) 14K hours)** has two additional corpora: audiocite.net and the Niger-Mali Audio Collection. Audiocite.net includes more than 6,600 hours of freely shareable audiobooks. We created this dataset specifically for the project, and Section 3.2 gives details about how it was acquired. The Niger-Mali Audio Collection is data web-crawled from the Studio Kalangou and Studio Tamani websites, with the authorization of Fondation Hirondelle. The gender labels were automatically produced by the LIUM_SpkDiarization tool [61]. With these two added datasets, the Extra-large dataset is composed of read speech (7,834 h), broadcast speech (1,737 h), spontaneous speech (165 h), acted telephone dialogues (38 h), acted emotional speech (29 h), and professional speech (4,744 h). **New Gender-specific datasets (\(\approx\) 1k hours each)** are built using all datasets present in the Large dataset that contain gender information: MLS, Att-Hack, CaFE, CFPP2000, ESLO2, EPAC, GEMEP, PORTMEDIA, TCOF, NCCFr. For EPAC, we keep the totality of female speech (385 h), and downsample the male speech to a comparable amount (413 h). This results in 1,041 h of female speech, and 1,006 h of male speech in the final gender-specific datasets. **Pre-processing for SSL training:** Recordings were segmented using time stamps from transcriptions, or cut every 30 seconds when there was no transcription (VoxPopuli _unlabeled_, audiocite.net). When available, we retrieved speaker labels and gender information. Following [11], we removed utterances shorter than 1 s and longer than 30 s. When possible, overlapping speech sentences were also removed. When necessary, audio segments were converted to mono PCM 16 bits, 16 kHz. [Table 1: Per-corpus statistics of the pre-training datasets (number of utterances, duration, number of speakers, mean utterance duration, and speech type, with male / female / unknown breakdowns), grouped into the Small (≈1K h), Medium-clean (≈2.7K h), Medium (≈3K h), Large (≈7.7K h), and Extra Large (≈14K h) collections.] ### Audiocite.net Dataset Audiocite.net is a corpus of read French speech scraped from the www.audiocite.net website in November 2021, thanks to the kind authorization of the website administrator. The website is composed of voluntary work of speakers who explicitly uploaded their content under a CC-BY license2. The audiobooks are available online for free and are classified into 15 categories: tales, world news, short stories, poetry, erotic stories, documentaries, science fiction, novels, animals, audiocite-juniors, religions, kitchen, philosophies, history and theatre. All the original texts are either in the public domain or under an open license. Footnote 2: Some of them have supplementary conditions: SA (Share Alike), ND (No Modification) or NC (No Commercial Use). The Audiocite.net corpus is composed of more than 6,600 hours of recordings from 130 speakers and is distributed on OpenSLR (www.openslr.org/139/) with the same license as the original work. All the recordings are distributed in the raw format in which we downloaded them from audiocite.net (with background music, noise, unexpected speakers, mp3 format, mono or stereo). No pre-processing was applied to the files, nor was ASR performed on them. We did, however, add gender information in a 'best effort' manner by guessing the gender from the name and checking the voice in case of uncertainty. This information must not be considered ground truth and is only intended to be used for a rough statistical estimate. No attempt was made to remove speech that could be seen as offensive or sensitive. Although the dataset is provided with training, validation and testing partitions, the whole corpus was used for LeBenchmark 14K model training. ## 4 Building an Open Collection of Pre-trained French SSL Models _LeBenchmark 2.0_ introduces three novel pre-trained French wav2vec 2.0 models to the community, based on the Extra Large dataset (i.e. 14,000 hours of speech): 14K-light, 14K-large and 14K-xlarge. More precisely, _LeBenchmark 2.0_ is an open collection of 14 pre-trained SSL models made entirely available on the HuggingFace platform3. It is worth noting that the number of released SSL models has doubled from _LeBenchmark_ to _LeBenchmark 2.0_, as four others have been added for the preliminary gender analyses of M. Z. Boito et al. [62]. The latter four models are not depicted in Table 2, as they were introduced in [62]. In practice, the three new models cover different use cases: the 14K-large and 14K-xlarge are expected to deliver top-notch performance in unconstrained and centralized environments, while our 14K-light brings SSL features to more resource-constrained devices. As of now, these additions represent both the most powerful and the most parameter-efficient SSL-powered models for the French language. Footnote 3: [https://huggingface.co/LeBenchmark](https://huggingface.co/LeBenchmark)
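As an illustration of how these checkpoints can be used as frozen feature extractors, the short sketch below loads one of the released wav2vec 2.0 models from the HuggingFace hub with the `transformers` library; the repository name and the two-second dummy waveform are placeholders, and the exact identifiers should be checked on the LeBenchmark organization page.

```python
# Hedged sketch: extracting frame-level features with a LeBenchmark wav2vec 2.0
# checkpoint hosted on https://huggingface.co/LeBenchmark.
import torch
from transformers import Wav2Vec2Model

model_name = "LeBenchmark/wav2vec2-FR-14K-large"  # assumed repository id
model = Wav2Vec2Model.from_pretrained(model_name).eval()

# 16 kHz mono waveform; two seconds of silence stand in for real speech.
# In practice, the matching Wav2Vec2FeatureExtractor should be used to
# normalize the input signal before the forward pass.
waveform = torch.zeros(1, 32000)

with torch.no_grad():
    features = model(waveform).last_hidden_state  # shape: (1, frames, 1024)
print(features.shape)
```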
### On the Choice of Wav2vec 2.0 At the time of _LeBenchmark_, wav2vec 2.0 was the only open-source and available SSL pre-training strategy. It naturally fitted our requirements as it was also achieving state-of-the-art performance. According to the SUPERB benchmark [26], the three best-performing pre-training strategies to date are WavLM, HuBERT, and wav2vec 2.0. However, no implementation of WavLM may be found to replicate the pre-training process and the reported results. HuBERT, on the other hand, suffers from a much more complex training process due to the iterative refining of the discrete targets obtained with k-means. Furthermore, and as depicted in [26], the downstream performances of _BASE_ and _LARGE_ models for HuBERT and wav2vec 2.0 are similar despite a slight advantage for HuBERT, potentially originating from extensive hyperparameter tuning. Indeed, from our experience, the SSL pre-training behavior varies significantly following hyperparameter changes. In summary, the wav2vec 2.0 architecture enables _LeBenchmark 2.0_ to compare fairly with previously introduced French models while retaining state-of-the-art performance compared to existing alternatives. ### Pre-training Hardware and Software Environments Large-scale SSL pre-training was mostly conducted within the Jean Zay French supercomputer converged platform made available to researchers by GENCI4. As of January 2023, Jean Zay offers access to 2,696 Nvidia Tesla V100 (16GB and 32GB) split between four and eight GPUs per node with dual Intel CPUs, as well as 416 Nvidia Tesla A100 80GB with dual AMD CPUs. Jean Zay is mostly powered by nuclear energy, hence facilitating a low carbon emission rate per FLOPS. The reported Power Usage Effectiveness (PUE) ratios are 1.21 and 0.65 depending on the scenario (i.e. considering the heat transfer to nearby infrastructures), putting Jean Zay in the list of the most efficient supercomputers worldwide5. Most models except 14K-xlarge have been trained on nodes equipped with four 32GB Nvidia Tesla V100, hence requiring multi-node training to reach the desired 32 or 64 GPUs. 14K-xlarge was trained on 80GB Nvidia Tesla A100 nodes equipped with eight GPUs each. Data read and write operations were made through a fast Network File System (NFS) without any streaming library. A total of 2.9 TB of storage was necessary to hold the entire 14,000-hour dataset. In practice, wav2vec 2.0 models could be trained with far fewer and less powerful GPUs, but at the cost of significantly longer (i.e. mostly intractable) training times due to the gradient accumulation needed to reach the large batch size required by the contrastive learning nature of wav2vec 2.0. Footnote 4: GENCI Jean Zay official presentation: [http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html) Footnote 5: [https://systematic-paris-region.org/wp-content/uploads/2022/06/slideshow-Hub-Day-HPPC-Hybride.pdf](https://systematic-paris-region.org/wp-content/uploads/2022/06/slideshow-Hub-Day-HPPC-Hybride.pdf) The speech and audio processing toolkits landscape has significantly expanded in the last decade; however, only two tools support the full wav2vec 2.0 pre-training: Fairseq [41] and SpeechBrain [42]. 
In practice, all models trained with the Large dataset (7,000 hours) or smaller sets have been produced with Fairseq, while the others have been trained with SpeechBrain by adapting the wav2vec 2.0 CommonVoice recipe to our data. The Python environments correspond to those detailed in the toolkit installation scripts attached to each commit. ### Wav2vec 2.0 Hyperparameters The wav2vec 2.0 architecture can be summarized in four distinct blocks: an acoustic feature extractor made of a convolutional neural network, a latent or contextual extractor composed of a Transformer network, a quantization module, and the final contrastive block. The entire detailed list of architectural hyperparameters (i.e. more than 70 parameters) for each model can be found in the corresponding HuggingFace repository6. In short, all models share the same CNN encoder architecture and mostly differ in the hidden dimension size and depth of the Transformer and quantizer. For instance, the sizes of the intermediate and hidden layers are \([3072, 4096, 5120]\) and \([768, 1024, 1280]\) for the _base_, _large_, and _xlarge_ models respectively. The number of blocks in the Transformer also increases with the model size: 12, 24, and 48 respectively. In practice, _LeBenchmark 2.0_ follows the configurations initially reported by A. Baevski et al. [11] and A. Babu et al. [19], as extensive hyperparameter and architecture searches are intractable even with Jean Zay resources.

| **Model** | **Pre-training data** | **Parameter count** | **Output dimension** | **Updates** | **GPU count** | **GPU hours** |
|---|---|---|---|---|---|---|
| 1K-_base_ [29] | 1,096 h | 90M | 768 | 200K | 4 | 1,000 |
| 1K-_large_ [29] | 1,096 h | 330M | 1,024 | 200K | 32 | 3,700 |
| 2.7K-_base_ [29] | 2,773 h | 90M | 768 | 500K | 32 | 4,100 |
| 3K-_base_ [29] | 2,933 h | 90M | 768 | 500K | 32 | 4,100 |
| 3K-_large_ [29] | 2,933 h | 330M | 1,024 | 500K | 32 | 10,900 |
| 7K-_base_ [14] | 7,739 h | 90M | 768 | 500K | 64 | 7,900 |
| 7K-_large_ [14] | 7,739 h | 330M | 1,024 | 500K | 64 | 13,500 |
| **14K-_light_** | 14,000 h | 26M | 512 | 500K | 32 | 5,000 |
| **14K-_large_** | 14,000 h | 330M | 1,024 | 1M | 64 | 28,800 |
| **14K-_xlarge_** | 14,000 h | 965M | 1,280 | 1M | 104 | 54,600 |

Table 2: Summary of the pre-trained wav2vec 2.0 models delivered with LeBenchmark and LeBenchmark 2.0. Newly released models are denoted in bold. "GPU hours" refers to the total training time accumulated over "GPU count" to reach the number of "Updates".
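To make these configurations concrete, the hedged sketch below instantiates the three Transformer sizes with HuggingFace's `Wav2Vec2Config`; the attention-head counts and all unspecified hyperparameters are assumptions taken from the library defaults, so the printed counts only approximate the released 90M / 330M / 965M models.

```python
# Hedged sketch: building wav2vec 2.0 backbones with the hidden / intermediate
# sizes and depths quoted above. Everything not listed keeps transformers'
# defaults, which may differ from the exact released configurations.
from transformers import Wav2Vec2Config, Wav2Vec2Model

sizes = {
    "base":   dict(hidden_size=768,  intermediate_size=3072,
                   num_hidden_layers=12, num_attention_heads=12),
    "large":  dict(hidden_size=1024, intermediate_size=4096,
                   num_hidden_layers=24, num_attention_heads=16),
    "xlarge": dict(hidden_size=1280, intermediate_size=5120,
                   num_hidden_layers=48, num_attention_heads=16),
}

for name, kwargs in sizes.items():
    model = Wav2Vec2Model(Wav2Vec2Config(**kwargs))
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: ~{n_params / 1e6:.0f}M parameters")
```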
### Pre-training Hyperparameters The extensive list of pre-training hyperparameters is reported in the _LeBenchmark_ HuggingFace repository while the most impactful ones are given in the following. The duration of each model pre-training is measured in _"steps"_ or _"updates"_, referring to an effective update of the neural network weights (i.e. an optimizer step in PyTorch). This quantity varies with the amount of available data as well as the number of neural parameters to optimize. For instance, the 14K-xlarge made one million updates compared to 200,000 for the 1K-large. Increasing the number of updates ultimately leads to better downstream performance. Nevertheless, the latter behavior must be contrasted with the high cost associated with longer training times as many dozens of GPUs are being used at once. Again, we fixed the number of steps according to the original wav2vec 2.0 and XLS-R. All models are trained with the Adam optimizer and decoupled weight decay (i.e. AdamW) [63] following a two-step scheduler made of around 8% of warmup steps and a polynomial decay to zero. Each training step is then associated with an effective batch size measured, for convenience, in seconds. All models from _LeBenchmark 2.0_ have been trained with an effective batch size of between two and three hours. For instance, the 14K-large model used 40 GPUs that could each fit 118 seconds of speech per batch alongside a gradient accumulation factor of two, resulting in a total effective batch size of \((40\times 118\times 2)/3600\approx 2.6\) hours of speech signal per step. Ensuring a constant effective batch size necessitates a constant amount of signal per GPU. To this extent, both the Fairseq and SpeechBrain toolkits implement a dynamic batching mechanism that automatically bundles samples together depending on their size to match the desired maximum boundary. The latter boundary depends on the VRAM capacity of the GPU and varies with the size of the model. For instance, the 14K-large model was trained with a boundary of 118 seconds on 32GB GPUs while the 14K-xlarge model stayed at 60 seconds with 80GB GPUs. To limit the VRAM consumption due to the quadratic increase in complexity of the Transformer self-attention with the sequence length, all samples are cropped at 20 seconds. The remaining signal is simply used as a different example. Similarly, and to avoid masking the entirety of the sample for the contrastive loss, segments shorter than 2 seconds are removed for Fairseq models while they are concatenated with SpeechBrain. For the largest model, i.e. 14K-xlarge, audio samples are cropped at 15 seconds as no real degradation from slightly shorter sequences is to be expected, as demonstrated by Y. Gao et al. [64]. Finally, masks applied to the output of the feature extractor (i.e. CNN) are of 10 consecutive frames for all models. Masking probabilities, however, change with the model size. Indeed, 50% of the frames are masked for _base_ models compared to 75% for the _large_ and _xlarge_ models. ### Wav2vec 2.0 Pre-training: tips and tricks Due to the very high entry ticket to pre-training large SSL models, almost no resources exist to help the community with this process. In particular, we found that both the Fairseq and SpeechBrain wav2vec 2.0 pre-training recipes do not transfer seamlessly to the _LeBenchmark 2.0_ datasets. Such difficulties in training certainly originate from the high complexity of the pipeline: hundreds of millions of neural parameters on thousands of hours of speech with dozens of distributed GPUs. In the following, we propose a few tips and tricks to avoid the two most common issues encountered while pre-training _LeBenchmark 2.0_ models: exploding losses and collapsing representations. Exploding losses (i.e. NaN values) were the most common issue faced when training for longer periods of time. Indeed, all models trained for more than 500K steps experienced infinite losses at some point, whether with Fairseq or SpeechBrain. As expected, simply changing the learning rate did not help, as both toolkits already carefully designed the scheduler as well as gradient clipping strategies to avoid any perturbation in the gradient flow. In fact, mixed precision training was most of the time responsible for this issue. Hence, upon explosion, simply resuming the training with full precision (i.e. fp32) was sufficient to finish the training. 
This may be explained by extremely small weights or values appearing after extended training with AdamW due to the weight decay, hence reaching the rounding-to-zero floor of 16-bit floats. It is worth noticing that switching to bfloat16 (i.e. fp32 values range) instead of float16 solved entirely the issue with SpeechBrain while preserving the training speed. Collapsing representations may happen when the contrastive task is too hard or too simple. It may easily be spotted at training time as the accuracy, defined as the cosine similarity between projected and quantized latent representations over the initially masked frames, quickly jumps to values close to 100%. In practice, this can easily be avoided by increasing the diversity loss and the mask length or probability. Indeed, a very high accuracy may simply mean that only very few entries of the quantization codebook are used, hence easy to predict. The latter phenomenon may especially arise with a dataset composed of short audio segments. ## 5 Standardized and Replicable Benchmark of French Tasks for SSL Models _LeBenchmark 2.0_ introduces two new candidates to _LeBenchmark_ for a total of six tasks: automatic speech recognition (ASR), spoken language understanding (SLU), automatic speech translation (AST), automatic emotion recognition (AER), syntactic analysis (SA) and automatic speaker verification (ASV). We designed our benchmark to best cover the different criteria that the community may expect from pre-trained extractors. In particular, SSL models must integrate information related to transcription (ASR), semantics (SLU, SA), translation (AST), and paralinguistics (AER, ASV). To validate the robustness of SSL to varying data volumes, we also selected corpora matching all potential use cases: high (ASR), medium (SLU, AST, ASV), and low (AER, SA) resource. **SSL Models configurations.** In all the following evaluations, SSL models may be described as _"fine-tuned"_ or _"frozen"_ and _"task-agnostic"_ or _"task-specific"_. Indeed, and contrary to the SUPERB benchmark, we are investigating different scenarios corresponding to various real-life applications. In SUPERB, all models are frozen, meaning that the neural parameters of the SSL models are not updated with the downstream task of interest. Having a heterogeneous set of downstream decoders is of crucial importance to avoid variations in the final ranking as demonstrated by S. Zaiem et al. [40]. In _LeBenchmark 2.0_, we also investigate the results obtained with fine-tuned models, where both the SSL model and the downstream decoder are trained to solve a given task, hence reaching much higher levels of performance. The latter is done at the cost of a more compute resource-intensive fine-tuning stage. Task-agnosticism or specificity simply defines the standard pre-trained SSL models or their already finetuned equivalent. For instance, one may wish to first fine-tune a wav2vec 2.0 architecture on speech recognition before evaluating it on SLU. The latter ASR-specific model is referred to as being "task-specific". **Considered SSL baselines.** Among the different evaluations reported below, _LeBenchmark 2.0_ aims at providing two different levels of study: (a) evaluate the relative performance improvement between the different provided French SSL models, and (b) evaluate the relevance of language-specific SSL models against multilingual or out-of-domain large-scale models. 
First, as _LeBenchmark 2.0_ is the only resource available for French SSL, the former comparison will only be conducted with our own models. Second, _LeBenchmark 2.0_ wav2vec 2.0 models will be compared to XLS-R-300M and XLS-R-1B [19] for the multilingual aspect. Whisper [65] will only be considered in a few tasks as two major drawbacks prevent its use in a fair comparison: (a) the training data is virtually unknown and may already contain the train or test sets of our downstream tasks, and (b) it is based on weakly supervised training, not SSL. ### Automatic Speech Recognition Automatic speech recognition is a common downstream task to evaluate SSL models. We investigated the behavior of the different _LeBenchmark 2.0_ models in two scenarios: a challenging low-resource scenario with only 22 hours of training data from TV and radio shows, and a high-resource scenario with 428 hours of read speech. **Downstream datasets.** Two different French datasets were used. The first one is the ETAPE corpus, the official dataset released during the French ETAPE evaluation campaign in 2011 [66]. It is composed of diverse French shows: TV news, TV debates, TV amusement, and radio shows. These shows are split into three subcorpora: training (22 h), validation (7 h), and testing (7 h). ETAPE is distributed in the ELRA catalogue 7. It is free for academic research. The second dataset is the French part, version 6.2, of the well-known Common Voice project [67]. This project started in July 2017 and employs crowdsourcing for both speech data collection and human validation for a large number of languages. The speech data is made of read sentences extracted from Wikipedia. It contains 428 h, 24 h, and 25 h of speech data for the training, validation, and testing sets respectively. **Downstream models and hyperparameters.** To conduct our experiments, we employed the SpeechBrain toolkit [42], which is built on PyTorch and designed specifically for speech-processing tasks. Additionally, we utilized the Hugging Face version of the _LeBenchmark_ models even though fairseq checkpoints are also available. The SpeechBrain toolkit offers a diverse array of recipes, and to ensure the reproducibility of our experiments, we followed the SpeechBrain recipe specific to the CommonVoice ASR task. In both of the aforementioned scenarios, we started from a pre-trained _LeBenchmark_ model and added three additional dense hidden layers of size 1,024 on top with random initialization. Each of these layers was associated with the LeakyReLU activation function. Subsequently, we performed fine-tuning for speech recognition on the training data while applying the SpecAugment data augmentation technique. For optimization, we employed the Connectionist Temporal Classification (CTC) loss function [68]. The overall model's output consists of 78 tokens, encompassing characters, sub-word units, and the CTC blank character. This number is higher than in English due to the presence of numerous accepted letters in French. For example, variations derived from the letter e include é, è, ê, and ë. Other letters like ç or œ also have to be taken into account. Following the SpeechBrain recipe for CommonVoice, we optimized the model using two optimizers. The Adam optimizer was dedicated to the parameters derived from the _LeBenchmark_ wav2vec 2.0 model, while the AdaDelta optimizer was used for all the parameters on top of it. We applied dropout with a probability of 0.15 when training the three top dense hidden layers. For each experiment, training was run for 80 epochs, keeping the model that reached the best results on the development data.
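As an illustration, a schematic PyTorch sketch of this downstream head is shown below (three 1,024-dimensional dense layers with LeakyReLU and dropout on top of the SSL features, followed by a linear layer over the 78 tokens and a CTC loss); layer names and the blank index are illustrative, and this is not the exact SpeechBrain recipe.

```python
# Schematic sketch of the ASR probing head described above. The SSL feature
# dimension matches the large models' 1,024-dimensional output.
import torch.nn as nn

class CTCHead(nn.Module):
    def __init__(self, ssl_dim=1024, hidden_dim=1024, n_tokens=78):
        super().__init__()
        self.dnn = nn.Sequential(
            nn.Linear(ssl_dim, hidden_dim), nn.LeakyReLU(), nn.Dropout(0.15),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(), nn.Dropout(0.15),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(), nn.Dropout(0.15),
        )
        self.out = nn.Linear(hidden_dim, n_tokens)  # token 0 used as CTC blank here

    def forward(self, ssl_features):            # (batch, frames, ssl_dim)
        logits = self.out(self.dnn(ssl_features))
        return logits.log_softmax(dim=-1)        # log-probs expected by nn.CTCLoss

ctc_loss = nn.CTCLoss(blank=0)                   # combined with the head above
```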
**Results analysis and discussions.** In our prior study [14], we demonstrated that _LeBenchmark_ SSL models pre-trained solely on French data outperformed equivalent models pre-trained on English or multilingual data, which also included French. Our new findings focus on assessing the impact of additional pre-training data on the French SSL models. As depicted in Table 3, and consistent with the results described in [14], the 1K-_large_ model exhibits a higher word error rate than the 3K-_large_ model. Section 3.1 provides detailed information about the content of the pre-training datasets used for the different models. Incorporating 2,000 hours of primarily broadcast news speech with the initial 1,000 hours of read speech used to pre-train the 1K-large model significantly improves the performance of the 3K-large model for speech recognition. However, further augmentation with 7,000 hours of formal read or prepared speech (parliamentary events) does not yield substantial improvement for the 7K-large model. Nevertheless, the performance of the 7K-large model is still significantly better than the 3K-large on broadcast news (ETAPE) and comparable on read speech (CommonVoice). This phenomenon is worsened by the introduction of the 14K hours dataset, as 14K-large and 14K-light are not able to outperform even the 3K-large. This can be explained by the nature of the added data, i.e. read speech, which may simply not help to reach better performance above a certain threshold. In most cases, the 14K-light model exhibits degraded performance for automatic speech recognition, even though its speech recognition rates remain better than those of XLSR-53 as reported in [14].

| **Features** | **CommonVoice Dev** | **CommonVoice Test** | **ETAPE Dev** | **ETAPE Test** |
|---|---|---|---|---|
| 1K-large | 9.49±0.20 | 11.21±0.23 | 28.57±0.79 | 30.58±0.88 |
| 3K-large | **8.00**±0.19 | **9.27**±0.20 | **22.26**±0.76 | 24.21±0.85 |
| 7K-large | 8.02±0.18 | 9.39±0.21 | **21.34**±0.74 | **23.46**±0.83 |
| 14K-light | 19.86±0.28 | 22.81±0.34 | 58.30±0.66 | 59.82±0.70 |
| 14K-large | 8.39±0.19 | 9.83±0.21 | 23.67±0.81 | 26.03±0.89 |
| 14K-xlarge | 8.26±0.19 | 9.83±0.21 | 22.38±0.95 | 24.67±0.83 |

Table 3: ASR results in terms of word error rate (WER %, lower is better) on the Common Voice and ETAPE corpora, with pre-trained wav2vec 2.0 models further fine-tuned on labeled ASR data. ### Spoken Language Understanding Spoken Language Understanding (SLU) aims at extracting semantic representations from speech signals containing sentences in natural language [69]. Classical approaches to SLU used a cascade model made of an Automatic Speech Recognizer (ASR) feeding a Natural Language Understanding (NLU) module [70; 71; 72; 73; 74; 75; 76]. Neural networks led to large advances for _end-to-end_ SLU systems [77; 78; 79; 80; 81; 82; 83; 84], which are preferred to cascade systems, in particular for their ability to reduce error propagation effects and to exploit acoustic components to deduce semantic information [85]. 
**Downstream datasets.** For French SLU benchmarking we used the well-known MEDIA corpus [86], used also in [14] and allowing thus for direct comparison. The MEDIA corpus focuses on the domain of hotel information and reservations in France. It is made of 1,250 human-machine dialogues transcribed and annotated with 76 semantic concepts. The corpus is split into 12,908 utterances (41.5 hours of speech) for training, 1,259 for development (3.5 hours), and 3,005 for test (11.3 hours). **Downstream models and hyperparameters.** SLU models used in this paper are the same as in [14], few modifications have been introduced which will be described along this section. Such models have a _sequence-to-sequence_ architecture based on LSTMs and attention mechanisms [87; 88]. The encoder has a similar pyramidal structure as the one proposed in [89], the decoder uses two attention mechanisms, one for attending the encoder's hidden states, and one for attending the decoder's previous predictions, like the self-attention module of Transformers [90]. One difference with respect to models proposed in [14] is that we added a layer normalization after each decoder layer, which made learning more stable when using features extracted with SSL models as input, and improved the model's generalization. All models were trained to minimize the CTC loss [68]. In all our experiments we use SSL models as feature extractors. Features were given as input to SLU models as an alternative to traditional features (e.g. MFCC). Following [14], we used both task-agnostic and task-specific SSL models. In the task-specific case, SSL models were fine-tuned for the ASR output like in [14]. Models described in [14] were learned with three training steps. Each training step uses the model learned in the previous step for initializing the current model's parameters. While this strategy is the most effective, it implies a relatively high training cost. In this work we instead use full end-to-end training such as in [91]. Hence, models are learned from scratch in a single training procedure. In addition, in this work, we tested a multi-task learning setting where the encoder and the decoder are learned with two different CTC losses: with respect to the ASR output for the encoder; with respect to the SLU output for the decoder following a standardized output format [14]. This multi-task learning setting will be indicated with _mt_ in Table 4. In order to study the impact of the SLU model size on results, especially when using features from SSL models, we tested hidden layers of size 256 and 300. Since these gave similar results in most cases, we did not further optimize the model size. We also found it beneficial, when using fine-tuned SSL model's features, to increase the temperature at the output softmax layer to 2 (indicated as _t2_ in the table). This strategy has been used successfully for model distillation [92], and intuitively has a similar effect as using smoothed labels as targets [93]. Beyond these differences, all models use the same hyper-parameters as those used in [14]. **Results analysis and discussion.** All results on the SLU task are depicted in table 4. We report Concept Error Rate (CER, the lower the better) on development and test data (respectively columns **Dev** and **Test** in the table), as well as the raw error rate (column **RawER**) on the development data, which is the error rate computed on the raw output of the system. 
The raw output of the system includes words, concept boundaries and concepts, please refer to the appendix of [14] for details. In order to compare with previous work [14], we report results obtained using basic features as input, with features extracted with the French 7K-large wav2vec 2.0 model, the 14K-large and 14K-xlarge models, the XLSR-53-large and XLS-R-xlarge models, and with the Whisper small, medium and large models. Results with the spectrograms, 7K-large, XLSR-53-large features are comparable with [14]. The other results are contributions of this work. For each experiment, we report the best results with respect to the hidden layer size on the development data, and its corresponding multi-task learning setting (_mt_). We also specify _t2_ in the table if the best results were obtained with a temperature of 2 at the output softmax. Our best result on validation data with spectrogram features is 30.44, which is only slightly worse than the 29.07 obtained in [14], with the advantage that in this work, the model is trained end-to-end in one step with the multi-task setting. Additionally, the increased generalization power of the model allows us to reach an error rate of 30.39 on test data, which is slightly better than the 31.10 reported in [14]. Using input features from _LeBenchmark_ models (7K-large; 14K-light, large, and xlarge) improvements on SLU results are impressive, with the best CER respectively on validation and test data of 15.66 and 14.43 obtained with the French wav2vec2 14K-xlarge model's features. It is interesting to see, yet expected, that the more data is used to train the SSL model, and the bigger the SSL model in terms of parameters, the better the SLU results are. This trend is not completely respected with task-specific fine-tuned models, in particular with the 14K-large model. Most intuitively, this is because of the small amount of data available for the downstream task, which does not allow for an optimal fine-tuning of so many parameters. The fact that SSL models are tuned for ASR output may also play a role since SLU output already contains tokens instantiating concepts and this may lead the model toward more accurate token predictions which do not always correspond to tokens triggering concepts. This last fact is supported by raw error rates (**RawER** column), accounting for tokens, concept boundaries, and concepts, where not always the best result corresponds to the best CER on validation or test data. On an absolute scale nevertheless, results obtained with fine-tuned French wav2vec2 models are the best. The concept error rate of 12.71 on test data, obtained with the _7K-large_ model, is the new state-of-the-art on this task with an end-to-end model. When using fine-tuned model features, the best results on the test data do not always correspond to the best results on validation data, underlying that probably a more accurate hyper-parameter tuning is needed. An interesting outcome of SSL fine-tuned models is that the best results are almost always obtained with an increased softmax temperature (\(t2\)). In particular, we observed erratic training behaviors with the default temperature, leading often to gradient explosions. Using an increased temperature not only allows us to obtain good results but also stabilizes model training. 
For comparison, we also experimented with some multi-lingual models from the literature, namely XLSR-53-large and XLS-R-xlarge [19], and _whisper_[65] with small, medium (corresponding to a larger model than our large size, please refer to the number of parameters in the table) and large size. While the XLSR-53 large model, both \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{5}{c|}{Corpus: MEDIA, Metric: Concept Error Rate (CER \%) \(\downarrow\)} \\ \hline **Model** & **Params (SSL/SLU)** & **Input** & **RawER** & **Dev** & **Test** \\ \hline \hline \multicolumn{5}{|c|}{**Task-agnostic models**} \\ \hline h300 & -/13.18 & spectrogram & 59.36 & 64.96 & 58.12 \\ h300 mt & -/13.18 & spectrogram & **30.44** & 30.24 & **30.39** \\ \hline \hline h256 [14] & 330/12.18 & TF-large & - & 19.68 & 18.77 \\ h300 & 330/15.45 & TF-large & 17.26 & 18.57 & 16.99 \\ h300 mt & 330/15.45 & TF-large & **15.36** & **16.62** & **15.47** \\ \hline h256 & 26/12.18 & 14K-light & 22.71 & 22.62 & 20.92 \\ h256 & 26/12.18 & 14K-light & **19.29** & **19.41** & **18.67** \\ \hline h300 & 330/15.45 & 14K-large & 18.26 & 18.63 & 17.35 \\ h300 mt & 330/15.45 & 14K-large & **15.91** & **16.62** & **14.43** \\ \hline h256 & 965/12.70 & 14K-xlarge & 21.44 & 17.52 & 16.24 \\ h256 mt & 965/12.70 & 14K-xlarge & **14.73** & **15.66** & **14.43** \\ \hline \hline h256 [14] & 330/12.18 & XLSR-53-large & - & **18.45** & 18.78 \\ h256 & 330/12.18 & XLSR-53-large & 18.38 & 18.99 & 18.68 \\ h256 mt & 330/12.18 & XLSR-53-large & **17.74** & 18.84 & **17.16** \\ \hline h300 & 965/16.07 & XL-8rlarge & 18.02 & 20.75 & 30.08 \\ h300 mt & 965/16.07 & XL-8rlarge & 18.53 & **18.57** & **29.07** \\ \hline \hline h256 & 244/11.66 & whisper-s & 27.86 & **28.44** & 27.19 \\ h256 & 769/12.18 & whisper-m & 34.03 & 33.75 & 31.10 \\ h256 & 1550/12.71 & whisper-t & **26.39** & 28.92 & **26.90** \\ \hline \hline \multicolumn{5}{|c|}{**Task-specific models (ASK fine-tuning)**} \\ \hline h256 [14] & 330/12.18 & 7K-large & - & 14.58 & 13.78 \\ h300 t2 & 330/15.45 & 7K-large & 10.82 & **13.53** & 12.80 \\ h300 mt \(\underline{t2}\) & 330/15.45 & TF-large & 10.83 & 13.95 & **12.71** \\ \hline h300 & 26/15.45 & 14K-light & 15.07 & 17.03 & 20.02 \\ h300 mt & 26/15.45 & 14K-light & **13.66** & **15.48** & **18.22** \\ \hline h256 t2 & 330/12.18 & 14K-large & **10.43** & **12.96** & 13.78 \\ h256 mt \(\underline{t2}\) & 330/12.18 & 14K-large & 13.76 & 14.43 & **12.85** \\ \hline h256 t2 & 965/12.70 & 14K-xlarge & **11.19** & **13.74** & **13.88** \\ h256 mt \(\underline{t2}\) & 965/12.70 & 14K-xlarge & 11.35 & 14.58 & 14.53 \\ \hline \hline h300 & 330/15.45 & XLSR-53 & 18.45 & 17.42 & **15.01** \\ h300 mt & 330/15.45 & XLSR-53 & **13.09** & **15.22** & 16.60 \\ \hline h300 t2 & 965/16.07 & XLS-R & 14.87 & 17.22 & 24.15 \\ h300 mt \(\underline{t2}\) & 965/16.07 & XLS-R & **13.39** & **15.42** & **23.30** \\ \hline \end{tabular} \end{table} Table 4: End2End SLU results in concept error rate (CER %, lower is better \(\downarrow\)) on the MEDIA corpus. “h300” and “h256” refer to a hidden size of 300 and 256 neurons respectively, while “mt” is a multi-task ASR-SLU training. “t2” means that a Softmax temperature of value 2 was applied. without and with fine-tuning, provides interesting results considering that it is not a model specialized in French, the extra-large model and all the whisper models provide clearly inferior performances on the test data compared to French models. 
For the XLS-R extra-large model, we hypothesize that the larger size of the model, together with its multilingualism, does not allow a good generalization on a specific French task like SLU. For whisper models, we hypothesize that the poor performance is related to the particular strategy used for training such models. Given the poor performance of whisper models compared to the French models, we decided to save energy and not fine-tune them.

### Automatic Speech Translation

Automatic speech-to-text translation (AST) consists of translating a speech utterance in a source language into a text in a target language. In this work, we are interested in translating directly from French speech into text in another language, without the use of transcriptions. We investigate two downstream applications for _LeBenchmark_ models: _hybrid_ models and _end-to-end_ fine-tuning. For the former, the pre-trained model is leveraged as a feature extractor, i.e. frozen. For the latter, extra layers are appended to the pre-trained model, and the whole architecture is fine-tuned on the target dataset. Training an end-to-end AST model from a pre-trained speech encoder was first proposed in [94].

**Downstream datasets.** For both AST downstream strategies, we use the multilingual TEDx dataset [95]. It covers translation directions from French to three target languages: English (en), Spanish (es), and Portuguese (pt), with the following training sizes: \(50\,\mathrm{h}\) (en), \(38\,\mathrm{h}\) (es), and \(25\,\mathrm{h}\) (pt). For end-to-end fine-tuning, we also present results for the CoVoST V2 dataset [96], containing 302 hours of French speech from CommonVoice version 4.0 translated to English.

**Hybrid downstream models and hyperparameters.** In this set of experiments, we focus on leveraging the pre-trained models as feature extractors, using their output speech representation as input to an end-to-end AST model trained from randomly initialized parameters. Inspired by [18; 29], this AST model is an encoder-decoder architecture which takes SSL features as input, passing them through a Linear-ReLU block followed by two 1D-convolutional layers with strides of \([2,2]\) and kernel sizes of \([5,5]\). These convolutional layers reduce the sequence length by a factor of 4; the resulting sequence is then fed to a Transformer [90] model with 6 encoder layers, 3 decoder layers, and hidden dimension \(D=256\). This is inspired by the s2t_transformer_xs recipe from the fairseq s2t toolkit [97]. For each language pair, we train in total 13 end-to-end models which take as input features extracted from the different SSL pre-trained models shown in Table 5. We normalize the punctuation of the text before building a \(1K\) unigram vocabulary using SentencePiece [98] without pre-tokenization. For GPU efficiency, utterances with more than \(3,000\) frames are filtered out. Each of these AST models is trained for \(500\) epochs. For all our experiments, we use the Adam optimizer [99] with an initial learning rate of \(2e-3\). This learning rate is linearly increased during the first \(10K\) warm-up steps and then decreased proportionally to the inverse square root of the step counter. The last 10 checkpoints are averaged and used for decoding with a beam size of 5. Table 5 reports the detokenized case-sensitive BLEU computed using sacreBLEU [100].
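The convolutional front-end and learning-rate schedule described above are small enough to sketch. The snippet below is a minimal, illustrative PyTorch implementation of such a subsampler (a Linear-ReLU block followed by two strided 1D convolutions reducing the sequence length by a factor of 4) and of a linear warm-up followed by inverse-square-root decay; module and parameter names are our own, and the actual fairseq recipe may differ in details.

```python
import torch
import torch.nn as nn

class ConvSubsampler(nn.Module):
    """Linear-ReLU block followed by two strided 1D convolutions.

    Each convolution has kernel size 5 and stride 2, so the input
    sequence of SSL features is shortened by a factor of ~4 before
    being passed to the Transformer encoder-decoder.
    """

    def __init__(self, feat_dim: int = 1024, model_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, model_dim), nn.ReLU())
        self.convs = nn.Sequential(
            nn.Conv1d(model_dim, model_dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(model_dim, model_dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) SSL features
        x = self.proj(x)                   # (batch, time, model_dim)
        x = self.convs(x.transpose(1, 2))  # (batch, model_dim, time/4)
        return x.transpose(1, 2)           # (batch, time/4, model_dim)

def inverse_sqrt_lr(step: int, base_lr: float = 2e-3, warmup: int = 10_000) -> float:
    """Linear warm-up followed by inverse-square-root decay of the learning rate."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * (warmup / step) ** 0.5

feats = torch.randn(2, 200, 1024)      # e.g. wav2vec 2.0 large features
print(ConvSubsampler()(feats).shape)   # torch.Size([2, 50, 256])
```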
**End-to-end downstream models and hyperparameters.** End-to-end AST models are trained with SpeechBrain [42] using the HuggingFace Transformers [101] wav2vec 2.0 interface with spectrogram augmentation enabled. The encoder stack is made of a wav2vec 2.0 model followed by a linear projection with an output dimension of 512. The decoder stack is an 8-head, 6-layer Transformer with feed-forward projections of 2,048 neurons and an embedding size of 512. The weights of the wav2vec 2.0 model are initialized from one of the models in Table 2, and the model is trained with an NLL loss. As for the end-to-end ASR models (Section 5.1), two different instances of the Adam optimizer manage the weight updates: one dedicated to the wav2vec 2.0 module, the other one to the following layers. The learning rates are respectively \(1e-5\) and \(1e-4\) (a minimal sketch of this dual-optimizer setup is given after the results analysis below). The models are trained on a single A100 80GB Nvidia GPU, for 50 (CoVoST) or 100 epochs (mTEDx). In all cases, sentences longer than 35 s are removed for GPU efficiency. For models trained on the mTEDx dataset, we found it beneficial for performance to remove layer-dropout and dropout within the wav2vec 2.0 stack during training. We hypothesize that this is due to the limited amount of data available for fine-tuning, as large architectures seemed to benefit the most from this modification. The total number of trainable parameters depends on the wav2vec 2.0 model used: 121.3M (base), 342.5M (large), or 989.7M (xlarge). The pre-tokenization strategy is the same as in the hybrid AST setup. Lastly, we do not use pre-trained weights to initialize the AST decoder, and we do not partially freeze the wav2vec 2.0 encoder, due to poor performance.

**Hybrid results analysis and discussion.** Results are presented in Table 5 and analyzed along the following aspects:

* **Monolingual versus multilingual.** Comparing SSL models of the same size (large models), training on monolingual data (LB models) seems to be beneficial in comparison with training on multilingual data (XLSR-53 and XLS-R models). From \(3K\) hours of French data, all _LeBenchmark_ large models outperform both XLSR-53 and XLS-R models.
* **Pre-training data.** Concerning the amount of monolingual data, we observe that with the same model size (base or large), SSL models tend to improve when the amount of pre-training data increases, except for the 14K-large model whose performance is on par with that of the 7K-large model. We suspect that adding too much read speech data (\(6,600\) h) might lead to stagnation in terms of BLEU scores when jumping from \(7K\) to \(14K\) hours of training data on the mTEDx domain.
* **Model size.** Table 5 illustrates that with the same amount of pre-training data, larger models tend to be better than smaller ones for producing speech features. Surprisingly, xlarge models underperform large models, as observed with both _LeBenchmark_ 14K and XLS-R. We suspect that this is because the features generated by xlarge models are too high-dimensional for the task. Lastly, we observe that the 14K-light model significantly underperforms its base and large counterparts, hinting that it insufficiently represents speech signals due to its limited capacity.

**End-to-End results analysis and discussion.** Tables 6 and 7 present BLEU scores for the mTEDx and CoVoST datasets respectively.
* **Monolingual versus multilingual.** Overall, we notice that the choice of backbone model (monolingual or multilingual) matters less in end-to-end mode than in hybrid mode (the XLS-R model is not too far behind the best-performing monolingual _LeBenchmark_ model). Nevertheless, between the two best-performing models for both CoVoST and mTEDx, _LeBenchmark_ 14K outperforms XLS-R in all settings. It should however be highlighted that XLS-R is a model covering 128 languages, with the potential to reach similarly good results for at least a small portion of its covered languages.

\begin{table} \begin{tabular}{l l c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c|}{**fr-en**} & \multicolumn{2}{c|}{**fr-es**} & \multicolumn{2}{c}{**fr-pt**} \\ & & **Dev** & **Test** & **Dev** & **Test** & **Dev** & **Test** \\ \hline \multirow{2}{*}{1K} & base & 9.18\(\pm\)0.36 & 8.98\(\pm\)0.36 & 5.09\(\pm\)0.27 & 5.64\(\pm\)0.30 & 0.39\(\pm\)0.05 & 0.49\(\pm\)0.08 \\ & large & 15.31\(\pm\)0.46 & 14.46\(\pm\)0.46 & 13.74\(\pm\)0.43 & 14.77\(\pm\)0.46 & 8.29\(\pm\)0.34 & 9.37\(\pm\)0.38 \\ \hline 2.7K & base & 15.09\(\pm\)0.49 & 14.69\(\pm\)0.48 & 13.27\(\pm\)0.43 & 14.04\(\pm\)0.43 & 4.72\(\pm\)0.27 & 5.51\(\pm\)0.28 \\ \hline \multirow{2}{*}{3K} & base & 15.05\(\pm\)0.49 & 14.80\(\pm\)0.47 & 13.19\(\pm\)0.44 & 14.27\(\pm\)0.44 & 4.44\(\pm\)0.29 & 4.72\(\pm\)0.25 \\ & large & 17.94\(\pm\)0.51 & 18.00\(\pm\)0.51 & 16.40\(\pm\)0.49 & 18.12\(\pm\)0.48 & 8.64\(\pm\)0.34 & 9.55\(\pm\)0.36 \\ \hline \multirow{2}{*}{7K} & base & 15.13\(\pm\)0.45 & 14.50\(\pm\)0.45 & 12.78\(\pm\)0.40 & 13.61\(\pm\)0.44 & 2.65\(\pm\)0.20 & 2.66\(\pm\)0.23 \\ & large & **19.23\(\pm\)**0.54 & **19.04\(\pm\)**0.53 & **17.59\(\pm\)**0.49 & **18.24\(\pm\)**0.49 & **9.68\(\pm\)**0.37 & **10.98\(\pm\)**0.41 \\ \hline \multirow{3}{*}{14K} & light & 10.31\(\pm\)0.38 & 10.92\(\pm\)0.43 & 9.83\(\pm\)0.33 & 10.52\(\pm\)0.42 & 4.96\(\pm\)0.31 & 5.79\(\pm\)0.33 \\ & large & **18.93\(\pm\)**0.40 & **18.97\(\pm\)**0.47 & **17.22\(\pm\)**0.41 & **18.12\(\pm\)**0.42 & 9.03\(\pm\)0.35 & 10.11\(\pm\)0.39 \\ & xlarge & 18.14\(\pm\)0.42 & 18.35\(\pm\)0.48 & 15.90\(\pm\)0.39 & 17.19\(\pm\)0.43 & 5.46\(\pm\)0.29 & 6.59\(\pm\)0.35 \\ \hline XLSR-53 & large & 7.81\(\pm\)0.33 & 6.75\(\pm\)0.29 & 0.49\(\pm\)0.13 & 0.52\(\pm\)0.08 & 0.43\(\pm\)0.07 & 0.36\(\pm\)0.05 \\ \hline \multirow{2}{*}{XLS-R} & large & 17.03\(\pm\)0.40 & 16.52\(\pm\)0.45 & 15.12\(\pm\)0.35 & 16.34\(\pm\)0.41 & 7.56\(\pm\)0.33 & 8.40\(\pm\)0.36 \\ & xlarge & 13.80\(\pm\)0.37 & 13.88\(\pm\)0.38 & 11.45\(\pm\)0.37 & 12.56\(\pm\)0.40 & 1.59\(\pm\)0.29 & 1.77\(\pm\)0.31 \\ \hline \hline \end{tabular} \end{table} Table 5: AST BLEU results (higher is better) of the feature extraction experiments (Hybrid with frozen SSL encoders) on the mTEDx dataset. The best results are in **bold**. Gray numbers denote the standard deviation computed using bootstrap re-sampling [102].

* **Pre-training data.** Focusing on monolingual models, and looking at results for large architectures only, we do not see any hints of saturation: _LeBenchmark_ models pre-trained with more speech data tend to provide better initializations for the AST task (Tables 6 and 7).
Looking at base architectures, the exception to this seems to be the 2.7K-base model, which performs on par with the 3K and 7K base and large models. This model differs from 3K by not including spontaneous speech. We hypothesize that this pre-training setting could provide a better initialization for the mTEDx and CoVoST datasets, which are made of prepared and read speech respectively.
* **Model size.** Focusing on the mTEDx results (Table 6), and looking at models trained with equal amounts of speech data, we observe that larger pre-trained models tend to increase AST performance (1K, 14K, XLS-R), with 3K and 7K being the two exceptions where large pre-trained models underperform compared to their base counterparts. The difference between base and large models on the test sets for English, Spanish, and Portuguese is respectively -1, -1.5, and -2.7 for 3K, and -0.8, -0.6, and -1.4 for 7K. The same is not observed in the hybrid setting (Table 5). We believe that, with the limited amount of training examples available in the mTEDx dataset, those results need to be taken with caution: on one side, larger models might be _less adaptable_ than their smaller counterparts, as they have more parameters to adapt with the same amount of fine-tuning data; on the other side, we observe an improvement from large to xlarge using _LeBenchmark 2.0_, which contradicts this previous observation. However, in the setting where training data is abundantly available (Table 7), this counter-intuitive trend vanishes. This hints that the choice between different model sizes should also consider the amount of data available for task adaptation. Finally, it seems to always be beneficial for end-to-end AST fine-tuning to use an xlarge wav2vec 2.0 model rather than a large one, but this marginal difference in performance adds a considerable overhead in the number of trainable parameters (647.1M extra trainable parameters). Lastly, we observe that the 14K-light model is a poor initialization choice for end-to-end AST. We believe this highlights how the capacity of the model is related to the encoding of high-abstraction-level speech features: smaller Transformer stacks result in poor speech features (Table 5) and encoders (Tables 6 and 7). Indeed, Pasad et al. [103; 104] argue that the wav2vec 2.0 pretext task forces a drop in abstraction level at the last layers. Due to this, the middle of the Transformer stack is where most of the high-level (phonemic and word-level) information is encoded.
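As referenced in the end-to-end hyperparameter description above, two separate Adam optimizers are used for the wav2vec 2.0 encoder and for the randomly initialized layers that follow it. The snippet below is only an illustrative PyTorch sketch of such a dual-optimizer setup; the model class and attribute names are hypothetical placeholders and the actual SpeechBrain recipe may differ.

```python
import torch
import torch.nn as nn

class DummyASTModel(nn.Module):
    """Stand-in for the real AST model: a 'wav2vec2' encoder plus extra layers."""
    def __init__(self):
        super().__init__()
        self.wav2vec2 = nn.Linear(1024, 1024)  # placeholder for the SSL encoder
        self.decoder = nn.Linear(1024, 512)    # placeholder for projection + decoder

def build_optimizers(model: nn.Module, lr_ssl: float = 1e-5, lr_rest: float = 1e-4):
    """Two Adam optimizers: one for the pre-trained wav2vec 2.0 module,
    one for the randomly initialized layers that follow it."""
    ssl_params = list(model.wav2vec2.parameters())
    ssl_ids = {id(p) for p in ssl_params}
    rest_params = [p for p in model.parameters() if id(p) not in ssl_ids]
    return torch.optim.Adam(ssl_params, lr=lr_ssl), torch.optim.Adam(rest_params, lr=lr_rest)

model = DummyASTModel()
opt_ssl, opt_rest = build_optimizers(model)
# Training step (schematically): loss.backward(); opt_ssl.step(); opt_rest.step()
```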
\begin{table} \begin{tabular}{l l l l|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c|}{**fr-en**} & \multicolumn{2}{c|}{**fr-es**} & \multicolumn{2}{c}{**fr-pt**} \\ & & **Dev** & **Test** & **Dev** & **Test** & **Dev** & **Test** \\ \hline \multirow{2}{*}{1K} & base & 15.2\({}_{\pm 0.48}\) & 14.0\({}_{\pm 0.54}\) & 13.0\({}_{\pm 0.42}\) & 13.2\({}_{\pm 0.40}\) & 8.2\({}_{\pm 0.33}\) & 8.6\({}_{\pm 0.34}\) \\ & large & 16.7\({}_{\pm 0.49}\) & 16.6\({}_{\pm 0.46}\) & 15.3\({}_{\pm 0.46}\) & 16.1\({}_{\pm 0.45}\) & 9.4\({}_{\pm 0.33}\) & 10.7\({}_{\pm 0.38}\) \\ \hline 2.7K & base & 18.9\({}_{\pm 0.52}\) & 18.7\({}_{\pm 0.52}\) & 17.9\({}_{\pm 0.50}\) & 17.8\({}_{\pm 0.49}\) & 11.7\({}_{\pm 0.39}\) & 12.3\({}_{\pm 0.40}\) \\ \hline \multirow{2}{*}{3K} & base & 17.9\({}_{\pm 0.48}\) & 17.9\({}_{\pm 0.51}\) & 16.8\({}_{\pm 0.49}\) & 17.1\({}_{\pm 0.46}\) & 11.3\({}_{\pm 0.42}\) & 12.4\({}_{\pm 0.42}\) \\ & large & 17.6\({}_{\pm 0.51}\) & 16.9\({}_{\pm 0.47}\) & 15.1\({}_{\pm 0.45}\) & 15.6\({}_{\pm 0.46}\) & 8.6\({}_{\pm 0.34}\) & 9.7\({}_{\pm 0.37}\) \\ \hline \multirow{2}{*}{7K} & base & 18.8\({}_{\pm 0.51}\) & 18.2\({}_{\pm 0.50}\) & 18.4\({}_{\pm 0.52}\) & 18.2\({}_{\pm 0.68}\) & 12.6\({}_{\pm 0.41}\) & 13.4\({}_{\pm 0.44}\) \\ & large & 20.1\({}_{\pm 0.52}\) & 19.0\({}_{\pm 0.57}\) & 17.4\({}_{\pm 0.52}\) & 18.8\({}_{\pm 0.49}\) & 10.7\({}_{\pm 0.37}\) & 12.0\({}_{\pm 0.41}\) \\ \hline \multirow{3}{*}{14K} & light & 6.5\({}_{\pm 0.27}\) & 5.9\({}_{\pm 0.28}\) & 5.7\({}_{\pm 0.27}\) & 5.7\({}_{\pm 0.26}\) & 3.0\({}_{\pm 0.21}\) & 2.9\({}_{\pm 0.17}\) \\ & large & 23.6\({}_{\pm 0.59}\) & 23.1\({}_{\pm 0.55}\) & 23.3\({}_{\pm 0.58}\) & 24.2\({}_{\pm 0.62}\) & 18.7\({}_{\pm 0.54}\) & 21.8\({}_{\pm 0.58}\) \\ & xlarge & **25.1\({}_{\pm 0.59}\)** & **24.4\({}_{\pm 0.60}\)** & **23.7\({}_{\pm 0.56}\)** & **25.5\({}_{\pm 0.59}\)** & **20.7\({}_{\pm 0.58}\)** & **23.7\({}_{\pm 0.62}\)** \\ \hline XLSR-53 & large & 15.6\({}_{\pm 0.49}\) & 12.5\({}_{\pm 0.47}\) & 15.6\({}_{\pm 0.45}\) & 15.8\({}_{\pm 0.44}\) & 8.4\({}_{\pm 0.31}\) & 9.1\({}_{\pm 0.36}\) \\ \hline \multirow{2}{*}{XLS-R} & large & 19.2\({}_{\pm 0.51}\) & 18.0\({}_{\pm 0.63}\) & 19.2\({}_{\pm 0.53}\) & 19.7\({}_{\pm 0.51}\) & 13.4\({}_{\pm 0.47}\) & 14.9\({}_{\pm 0.44}\) \\ & xlarge & 23.4\({}_{\pm 0.58}\) & 22.7\({}_{\pm 0.55}\) & **23.3\({}_{\pm 0.61}\)** & **25.0\({}_{\pm 0.60}\)** & 19.3\({}_{\pm 0.54}\) & 21.3\({}_{\pm 0.56}\) \\ \hline \hline \end{tabular} \end{table} Table 6: AST end-to-end BLEU results (higher is better) for the mTEDx dataset. The best results are in **bold**. Gray numbers denote the standard deviation computed using bootstrap re-sampling [102].

### Automatic Emotion Recognition

Recent psychological studies suggest that emotion is involved in every aspect of our lives, from the filtering of sensory information and our perception of events to reasoning and, ultimately, the decisions we make [105; 106]. The automatic recognition of human emotions from audio recordings is therefore a technology that can influence many areas such as education, healthcare, and entertainment.
Although much progress has been made in emotion recognition in recent years, challenges remain, including different emotional expressions by different speakers or different microphones, making AER not quite ready for everyday use [107]. SSL models, being trained on large amounts of data, have been shown to be exceptionally good at addressing such generalisation issues [14]. _LeBenchmark 2.0_ further evaluates such methods for French speech with the newly trained models.

**Downstream datasets.** Following _LeBenchmark_, we used the RECOLA [108] and AlloSat [109] corpora, which contain continuous conversations, and the Theradia corpus, which contains utterance-based conversations. Both the RECOLA and AlloSat datasets contain spontaneous French speech. However, the RECOLA recordings are emotionally induced conversations recorded in a laboratory environment, whereas the AlloSat recordings are telephone conversations. The annotations for both datasets are time-continuous and dimensional. For RECOLA, the emotion dimensions are arousal (from passive to active) and valence (from negative to positive), sampled at a 25 Hz rate. For AlloSat, a frustration-to-satisfaction dimension is used, with a sampling rate of 4 Hz. Since the AlloSat dataset contains long continuous audio files, we were not able to fine-tune the SSL models on it. The AlloSat dataset contains a total of 29,704 utterances (21 h), divided into 20,785 utterances (15 h) for the training set, 4,272 utterances (3 h) for development, and 4,643 utterances (3 h) for the test partition. On the other hand, the RECOLA dataset is much smaller, with 9 files of 5 minutes each for the training, development, and test sets. Moreover, the continuous conversations used in these two datasets differ from the utterance-based training of the SSL models. Thus, we also used the Theradia corpus to investigate the effect of fine-tuning for emotion recognition. This dataset contains 61 senior participants, nine of whom had Mild Cognitive Impairments (MCIs). The participants performed digital cognitively stimulating exercises while interacting with a virtual assistant in a natural way. The Theradia corpus contains different dimensional annotations according to appraisal theory, as well as emotion labels. The emotion labels are annotated based on the perceived intensity of the label, on a scale from zero (non-existent) to 100. We report results on the prediction of the ten most common core-set labels in the Theradia corpus: relaxed, interested, frustrated, confident, satisfied, happy, annoyed, surprised, desperate, and anxious. The Theradia corpus contains 2,735 utterances (6 h) in total, divided into 1,110 utterances for the training partition, 851 utterances for validation, and 774 utterances for testing.
\begin{table} \begin{tabular}{l l l l} \hline \hline **Model** & **Size** & **Dev** & **Test** \\ \hline \multirow{2}{*}{1K} & base & 28.5\({}_{\pm 0.21}\) & 27.9\({}_{\pm 0.20}\) \\ & large & 30.1\({}_{\pm 0.21}\) & 30.0\({}_{\pm 0.21}\) \\ \hline 2.7K & base & 30.8\({}_{\pm 0.21}\) & 30.2\({}_{\pm 0.21}\) \\ \hline \multirow{2}{*}{3K} & base & 29.8\({}_{\pm 0.21}\) & 29.4\({}_{\pm 0.21}\) \\ & large & 29.4\({}_{\pm 0.21}\) & 29.0\({}_{\pm 0.21}\) \\ \hline \multirow{2}{*}{7K} & base & 30.1\({}_{\pm 0.21}\) & 29.7\({}_{\pm 0.20}\) \\ & large & 32.7\({}_{\pm 0.22}\) & 32.5\({}_{\pm 0.21}\) \\ \hline \multirow{3}{*}{14K} & light & 20.5\({}_{\pm 0.18}\) & 20.0\({}_{\pm 0.18}\) \\ & large & 32.1\({}_{\pm 0.21}\) & 31.7\({}_{\pm 0.21}\) \\ & xlarge & **33.9\({}_{\pm 0.22}\)** & **33.7\({}_{\pm 0.21}\)** \\ \hline \hline XLSR-53 & large & 30.4\({}_{\pm 0.21}\) & 29.6\({}_{\pm 0.20}\) \\ \hline \multirow{2}{*}{XLS-R} & large & 30.6\({}_{\pm 0.21}\) & 30.3\({}_{\pm 0.21}\) \\ & xlarge & 32.9\({}_{\pm 0.21}\) & 32.5\({}_{\pm 0.21}\) \\ \hline \hline \end{tabular} \end{table} Table 7: AST end-to-end BLEU results (higher is better) for the CoVoST dataset. The best results are in **bold**. Gray numbers denote the standard deviation computed using bootstrap re-sampling [102].

**Downstream models and hyperparameters.** The experiments are conducted using a one-to-one sequence model with either SSL representations or Mel filterbank features as input. The experiments on the RECOLA and AlloSat datasets consist of time-linear sequence-to-sequence prediction of continuous dimensions of emotion. A GRU with one layer and 32 hidden units was trained with the CCC as the loss function, similarly to [14]. For the experiments on the Theradia corpus, on the other hand, we used one linear layer trained with the mean squared error as the loss function. We also tried using the CCC as the loss function and a GRU as the emotion prediction model for Theradia, but did not find any significant improvement in the results. Furthermore, for all the experiments, the training was done with the Adam optimizer with a learning rate of 0.001, for 200 epochs with early stopping. It should be noted that the sampling rate of the dimensional annotations differs from that of the Mel features, which are sampled at a rate of 100 Hz, and of the wav2vec representations, which are sampled at a rate of 50 Hz. Thus, during training on the RECOLA and AlloSat datasets, the targets are resampled to match the sampling rate of the representations using linear interpolation, so as to keep the computation graph active for backpropagation, while during testing the outputs of the model are mapped to the sampling rate of the targets, keeping the targets untouched for evaluation. On the other hand, for the Theradia corpus, the outputs of the model are averaged over the sequence, because each emotion label is defined per sequence and not continuously over it.
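Since the concordance correlation coefficient (CCC) is used here both as the evaluation metric and as the training loss for the continuous AER experiments, a minimal sketch may be useful. The following is an illustrative PyTorch implementation of a CCC-based loss (1 - CCC) over frame-level predictions; it is not the exact code used in this work, and function names are our own.

```python
import torch

def ccc_loss(pred: torch.Tensor, gold: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """1 - concordance correlation coefficient (CCC), computed over all frames.

    pred, gold: 1D tensors of the same length (e.g. flattened frame-level
    arousal/valence or satisfaction values). Lower loss means higher agreement.
    """
    pred_mean, gold_mean = pred.mean(), gold.mean()
    pred_var, gold_var = pred.var(unbiased=False), gold.var(unbiased=False)
    covariance = ((pred - pred_mean) * (gold - gold_mean)).mean()
    ccc = (2 * covariance) / (pred_var + gold_var + (pred_mean - gold_mean) ** 2 + eps)
    return 1.0 - ccc

# Illustrative usage with random frame-level predictions and targets:
pred = torch.randn(500, requires_grad=True)
gold = torch.randn(500)
loss = ccc_loss(pred, gold)
loss.backward()
```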
**Results analysis and discussion.** The results are shown in Table 8. For the prediction of the frustration-satisfaction dimension of the AlloSat corpus, the best results were obtained with the 14K-xlarge model, showing that using more read speech from audio books and radio broadcasts, together with more parameters, can be more effective for the continuous prediction of frustration-satisfaction. However, for the prediction of the arousal and valence dimensions of emotion on the RECOLA corpus, the 14K-large and 14K-xlarge models achieve similar results. This may be because RECOLA contains clean speech, and thus the smaller parameter size may be sufficient to extract useful features for continuous emotion recognition on this dataset. The prediction of emotion labels for the Theradia corpus also shows a similar trend, with the frozen 14K models performing best. However, when fine-tuning the wav2vec 2.0 models, most of them perform better than their frozen counterparts (except for 2.7K-base, 3K-base, and 7K-base), but similarly to each other. The better performance of the fine-tuned wav2vec 2.0 models is consistent with the literature. Indeed, fine-tuning specializes the representations, which results in better AER performance for a particular data distribution. However, we may lose the ability to generalize to other data distributions [110]. On the other hand, the similarity of the results for the fine-tuned models across different data types, amounts, and _LeBenchmark_ architectures suggests that pre-training the wav2vec 2.0 models with more data or more parameters does not affect the results of fine-tuning for predicting emotion labels on the THERADIA corpus.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Representation** & **Satisfaction** & **Arousal** & **Valence** \\ \hline Mel Filter Bank & .413 & .313 & .258 \\ \hline WavLM-large & .537 & **.690** & .450 \\ XLS-R-large & .279 & .455 & .002 \\ XLS-R-xlarge & .415 & .311 & .229 \\ \hline 1K-base & .487 & .427 & .055 \\ 1K-large & .021 & .018 & .001 \\ \hline 2.7K-base & .596 & .629 & .455 \\ \hline 3K-base & .602 & .358 & .007 \\ 3K-large & .040 & .097 & .000 \\ \hline 7K-base & .470 & .335 & .116 \\ 7K-large & .050 & .009 & .037 \\ \hline 14K-light & .518 & .614 & .348 \\ 14K-large & .462 & .664 & **.466** \\ 14K-xlarge & **.657** & .649 & .437 \\ \hline \hline \end{tabular} \begin{tabular}{l c c} \hline \hline **Representation** & **Frozen** & **Fine-tuned** \\ \hline Mel Filter Bank & .075 (.120) & - \\ \hline WavLM-large & .166 (.158) & .241 (.136) \\ XLS-R-large & .086 (.064) & .269 (.159) \\ XLS-R-xlarge & .072 (.059) & .205 (.132) \\ \hline 1K-base & .151 (.103) & .246 (.161) \\ 1K-large & .001 (.002) & **.319 (.172)** \\ \hline 2.7K-base & .083 (.094) & .013 (.015) \\ \hline 3K-base & .061 (.069) & .019 (.016) \\ 3K-large & .002 (.004) & .224 (.151) \\ \hline 7K-base & .106 (.082) & .000 (.000) \\ 7K-large & .000 (.002) & .230 (.143) \\ \hline 14K-light & **.241 (.133)** & .283 (.129) \\ 14K-large & .190 (.127) & .229 (.151) \\ 14K-xlarge & .237 (.131) & .226 (.145) \\ \hline \hline \end{tabular} \end{table} Table 8: The automatic emotion recognition results are expressed in terms of the concordance correlation coefficient (CCC, higher is better). The top table describes the continuous prediction of the frustration-satisfaction, arousal and valence dimensions for the AlloSat and RECOLA corpora, with frozen representations. The bottom table reports the average (and standard deviation) emotion prediction performance across the core-set emotion labels of the THERADIA corpus.

### Syntactic Analysis

The syntactic analysis task (also known as parsing) is a staple task in natural language processing, and historically one of the first. Syntactic parsing consists of assigning a syntactic structure to a given sentence; it is thus a structured prediction task.
Corpora annotated with syntactic trees are key to data-driven linguistic studies, and syntactic trees may also provide useful features for downstream tasks. We focus on evaluating the self-supervised models on the task of joint automatic speech recognition and syntactic analysis. The traditional technique to obtain the syntactic structure from speech would be a pipeline approach: using an ASR model to obtain the transcription, and then a pre-trained model such as BERT to predict the syntactic structure. However, this method discards important cues contained in the signal, such as prosody, that are useful for syntactic prediction. Moreover, it has been shown that end-to-end speech parsing models perform better, despite having far fewer parameters than pipeline-based parsers [111].

**Downstream datasets.** We use the CEFC-ORFEO [112] dataset. This corpus is a collection of multiple sub-corpora [113; 114; 115; 116; 117; 118; 119; 120] annotated with syntactic dependencies in the _conll_ format. It contains a wide variety of speech situations, such as store owner/customer interactions in a cheese shop, or informal conversations between friends. We removed the TCOF sub-corpus from the dataset, as it was included in the pre-training data of the _LeBenchmark 2.0_ models. The Orfeo treebank contains both human-annotated syntactic trees and automatically annotated syntactic trees [121], henceforth referred to as silver data. Partitions are built such that the dev and test sets only contain gold trees, i.e. all silver trees are in the training set.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Model** & **Frozen** & **WER \%** & **CER \%** & **POS** & **UAS** & **LAS** \\ \hline \multirow{2}{*}{XLSR-53-_large_} & No & 44.57 & 26.77 & 66.40 & 61.20 & 54.69 \\ & Yes & 97.51 & 77.81 & — & — & — \\ \hline 1K-_large_ & No & 45.99 & 28.60 & 65.18 & 59.98 & 53.48 \\ & Yes & 99.82 & 93.80 & — & — & — \\ \hline 1K-_base_ & No & 51.26 & 31.32 & — & — & — \\ & Yes & 80.35 & 48.67 & — & — & — \\ \hline 3K-_large_ & No & 44.81 & 27.55 & 65.81 & 61.54 & 55.11 \\ & Yes & 99.94 & 88.33 & — & — & — \\ \hline 3K-_base_ & No & 100 & 88.40 & — & — & — \\ & Yes & 81.54 & 48.16 & — & — & — \\ \hline 7K-_large_ & **No** & **42.39** & **26.59** & **67.15** & **63.34** & **56.94** \\ & Yes & 99.81 & 78.59 & — & — & — \\ \hline 7K-_base_ & No & 100 & 88.40 & — & — & — \\ & Yes & 70.70 & 39.97 & — & — & — \\ \hline 14K-_light_ & No & 71.08 & 40.51 & — & — & — \\ & Yes & 76.56 & 46.28 & — & — & — \\ \hline 14K-_large_ & No & 43.04 & 27.12 & 66.71 & 62.72 & 56.45 \\ & Yes & 52.16 & 30.51 & — & — & — \\ \hline 14K-_xlarge_ & No & 44.82 & 28.00 & 65.09 & 61.17 & 54.61 \\ & Yes & 45.43 & 26.51 & 66.60 & 60.63 & 53.94 \\ \hline \hline \end{tabular} \end{table} Table 9: End-to-end results for the syntactic analysis task in terms of part-of-speech tagging accuracy (POS), unlabeled attachment score (UAS), and labeled attachment score (LAS) metrics (higher is better). Results are correlated with the speech recognition results expressed in CER and WER.

**Downstream models and hyperparameters.** The downstream model is wav2tree [111]. This model is composed of an encoder module, an ASR module, and a syntax module. It performs joint syntactic analysis and automatic speech recognition in a multi-task manner. In the fine-tuning setting, the encoder is composed of three fully connected layers of size 1024. In the frozen setting, the encoder is a two-layer bi-LSTM with a hidden size of 1024 and a dropout of 0.4 between the two layers.
In wav2tree, the speech recognition module has two purposes: the first is the standard speech recognition task, producing the transcriptions; the second is the segmentation of the representations learned by wav2vec 2.0. The CTC scheme labels each frame representation with either a letter, a blank, or a whitespace, and wav2tree uses the whitespace information to segment the representation. The syntax module is composed of two elements. The first one uses the segmentation from the ASR module to create word-level representations from the encoder representations via a 2-layer LSTM with a hidden size of 500. These word-level representations are then processed by a two-layer bi-LSTM with a hidden size of 800, whose outputs are classified in parallel into three tasks with a simple linear layer each. The first task predicts the part of speech (POS) of the word, the second predicts the head of the current word (UAS), and the last predicts the syntactic function (e.g. subject, dependent, ...), i.e. the relationship between the head and the dependent. Each model is trained with a batch size of 8, except for the fine-tuning of the xlarge model, which is trained with a batch size of 2 and a gradient accumulation of 4, in order to maintain the comparability of results. The ASR module is trained with a CTC loss and the classification tasks are trained with a negative log-likelihood loss. The optimizer is AdaDelta [122] with a learning rate of 1 and \(\rho\) of 0.95. Each model is first optimized on the word error rate; once the WER decreases below a threshold of 50, the training activates the syntax learning and the model is then optimized on the LAS metric.

**Parsing evaluation metrics.** The three metrics used for the parsing task are the F-measure for part-of-speech tagging (POS); the unlabeled attachment score (UAS), i.e. the accuracy of the syntactic links, measuring whether a word is correctly attached to its head; and the labeled attachment score (LAS), which extends UAS by also taking into account the nature of the link (root, subject, ...).

**Results analysis and discussion.** All results for the syntactic analysis task are reported in Table 9. The parsing results are heavily correlated with the speech recognition metrics. This is expected behavior, since a correct tree cannot be produced if some words are missing. The WER and CER clearly reflect the inherent difficulty of the dataset: with a similar architecture, a WER of 10% is obtained on CommonVoice 6.1 [123]. This difficulty implies that none of the base models can get good enough results to start learning the parsing task. All the models also need to be fine-tuned on the dataset, with the notable exception of the 14K-xlarge, suggesting that the pre-trained representation of this model is general enough to fit more out-of-domain data like the CEFC-Orfeo dataset. We observe that the quantity of data used to pre-train the model is important and seems to follow the classic machine learning paradigm that more data and scale are better. However, the best model for this task is the 7K-large and not the 14K-large or 14K-xlarge. Our hypothesis is that this model is trained on a more balanced distribution of speech types (read, prepared and spontaneous), thus being more suited to learning good representations for spontaneous speech. Another interesting fact is that the 14K-large outperforms the 14K-xlarge.
This may simply be because the bigger model needs more data, whereas the smaller one is more easily tunable to the downstream dataset. One of the most surprising results concerns the 3K-base and 7K-base models, which perform better without fine-tuning. We also compare with the multilingual XLSR-53 model. The multilingual model has slightly worse performance compared to most of our models, but exhibits similar properties, such as the need to fine-tune it on the downstream corpus to reach good performance.

### Automatic Speaker Verification

Automatic Speaker Verification (ASV) refers to the task of verifying the identity claimed by a speaker from that person's voice [124]. In ASV, deep neural networks have brought significant advancements in voice representations, outperforming the previous state-of-the-art \(i\)-vector framework [125]. One of these DNN approaches seeks to extract a high-level speaker representation, known as a _speaker embedding_, directly from acoustic excerpts. To achieve this, DNN models are trained through a speaker identification task, where speech segments are classified into distinct speaker identities. Each layer of the DNN is trained to extract relevant information for discriminating between different speakers, and one of the hidden layers is used as the speaker embedding. One of the main advantages is that speaker embeddings produced by the DNN can generalize well to speakers beyond those present in the training set. The benefits of speaker embeddings, in terms of speaker detection accuracy, have been demonstrated during the last evaluation campaigns: NIST SRE [126; 127; 128] and VoxCeleb 2020 [129; 130; 131]. Recently, much progress has been achieved in ASV through the utilization of SSL models. In [16], the authors propose a novel approach: instead of exclusively using the representations from the final layer of the SSL model, they employ a weighted average of the representations from all hidden layers of the SSL model. This approach allows for harnessing speaker-related information embedded throughout the entire SSL model.

**Downstream datasets.** For evaluation, we used the Fabiole dataset [132]. Fabiole is a French speaker verification dataset collected to highlight the importance of the "speaker factor" in forensic voice comparison. It contains a total of 7,372 segments from 130 male native French speakers. Due to the absence of an established evaluation protocol, we created one ourselves. We removed all the segments with less than 2 seconds of voice and those exceeding 12 seconds of voice. Then, we randomly selected 300,000 target pairs (i.e. enrollment and test segments from the same speaker) and 300,000 non-target pairs (i.e. enrollment and test segments from different speakers). We trained the systems using the ESTER-1 [133], ESTER-2 [134], ETAPE [66] and REPERE [135] training datasets (corresponding to 2,911 speakers and more than 250 hours of data). Voice Activity Detection (VAD) was not applied to the training datasets. Additionally, we applied data augmentation during training by incorporating noise from the MUSAN dataset and reverberation using the RIR dataset [136]. The Equal Error Rate (EER) and the Detection Cost Function (DCF) are used as the performance criteria for ASV. The EER is the error rate at the threshold for which the false acceptance rate and the miss rate are equal.
The DCF is defined as a weighted sum:
\[C_{det}=C_{Miss}\times P_{Miss|Target}\times P_{Target}+C_{FalseAlarm}\times P_{FalseAlarm|NonTarget}\times P_{NonTarget}, \tag{1}\]
with the prior probabilities \(P_{Target}\) and \(P_{NonTarget}=1-P_{Target}\) of target and impostor speakers, respectively. The relative costs of detection errors in this function are the cost of misses \(C_{Miss}\) and the cost of false alarms \(C_{FalseAlarm}\). These parameters were set as follows: \(P_{Target}=0.01\) (or \(P_{Target}=0.001\)), \(C_{Miss}=1\) and \(C_{FalseAlarm}=1\).

**Downstream models and hyperparameters.** We use the ECAPA-TDNN classifier [137], which relies on cutting-edge techniques: Multilayer Feature Aggregation (MFA), Squeeze-Excitation (SE) and residual blocks. This classifier, when combined with SSL models, has demonstrated impressive performance in ASV [138]. Our ECAPA-TDNN has the following parameters: the number of SE-Res2Net blocks is set to 3 with dilation values of 2, 3 and 4, the number of filters in the convolutional frame layers is set to 512, and the embedding layer size is set to 256. The training lasted for 8 epochs with the _AdamW_ optimizer. We trained all the models with the Circle Margin Loss and set the margin to 0.35. During the training process, we randomly sampled 3 s segments from each utterance to construct a training batch. Recall that the ECAPA-TDNN takes as input a weighted average of the representations from all hidden layers of the SSL model.

\begin{table} \begin{tabular}{l c c c} \multicolumn{4}{c}{Corpora: Fabiole} \\ \multicolumn{4}{c}{Metric: EER and minDCF \(\downarrow\)} \\ \hline \hline **Representation** & **EER** & **minDCF\({}^{-10}\)** & **minDCF\({}^{-100}\)** \\ \hline XLSR-53-large & 6.68 & 0.492 & 0.677 \\ \hline 1K-base & 8.27 & 0.556 & 0.722 \\ 1K-large & 6.75 & 0.508 & 0.705 \\ \hline 3K-base & 4.82 & 0.374 & 0.567 \\ 3K-large & 5.06 & 0.374 & 0.521 \\ \hline 7K-base & 4.73 & 0.364 & 0.538 \\ 7K-large & 5.23 & 0.383 & 0.575 \\ \hline 14K-large & **3.54** & **0.297** & **0.480** \\ \hline \hline \end{tabular} \end{table} Table 10: Results for the downstream task of speaker verification. Performance is expressed in terms of Equal Error Rate (EER, lower is better) and Minimum of the Detection Cost Function (minDCF, lower is better).

**Results analysis and discussions.** All results for the ASV task are reported in Table 10. XLSR-53 was used as a baseline. The findings can be summarized as follows:
* **Monolingual versus multilingual.** We observed that, except for the 1K models, systems trained on monolingual models (i.e. _LeBenchmark_) achieved better performance than the multilingual model (XLSR-53). The best monolingual model (_LeBenchmark_ 14K-large) obtained 3.54% EER, while the multilingual model (XLSR-53-large) obtained 6.68% EER.
* **Pre-training data.** Focusing on monolingual models, we observed a link between performance and the quantity of pre-training data: _LeBenchmark_ models pre-trained with larger speech datasets tend to provide better performance. Indeed, the _LeBenchmark_ model trained on 1,000 hours of data (1K-large) obtained 6.75% EER, while the model trained on 14,000 hours of data (14K-large) obtained 3.54% EER.
* **Model size.** Still focusing on monolingual models, we observed that, except for the _LeBenchmark_ 1K model, for the same amount of pre-training data, base models tend to obtain better performance than larger models.

Figure 1 shows the contribution from each layer of various SSL models.
Recall that _LeBenchmark_ base models contain 12 layers, whereas _LeBenchmark_ large models contain 24. In general, we observe that speaker-related information is most pronounced in the first (lower) layers of the SSL model. Even though the speaker-related information is most pronounced in the first layers, we notice that for _LeBenchmark_ base models all layers contribute to the construction of the representations. In contrast, for _LeBenchmark_ large models, the higher layers contribute less than the lower layers.

### Summary of results

Table 11 summarizes the best models for each evaluated task and setup according to our experiments. Overall, the 7K and newly introduced 14K models perform the best across the whole benchmark. In all cases, the large models obtain better performance. However, the 14K-xlarge underperforms compared to the smaller 14K-large in syntactic analysis and speech recognition. The smaller models, including base and light ones, simply offer the worst performance while still being acceptable. In all cases, _LeBenchmark 2.0_ models reach a higher level of performance than the multilingual XLSR-53 and XLS-R systems.

Figure 1: The visualization of the normalized weight values in the proposed architecture. Each weight can be interpreted as the relevance of a given layer for the ASV task. Earlier layers are particularly relevant for ASV.

## 6 Carbon Footprint

This section gives an overview of the carbon footprint of the SSL pre-training. The fine-tuning footprint is omitted as it was performed on many different and heterogeneous platforms, making it impossible to compute proper estimates. The carbon footprint of each model may be estimated following the protocol defined by T. Parcollet et al. [139]. In practice, it is a function of the PUE of the compute infrastructure, the total energy consumed, and the carbon emission rate of the energy grid. The Jean-Zay supercomputer is energy efficient, with a PUE of 0.65 (including heat transfer to nearby infrastructures). We only consider the energy consumed by the GPUs and CPUs. Power draw measurements are taken every five seconds and then averaged. France's carbon rate is 52 gCO2/kWh [140]. Table 11 reports the total energy consumed as well as the estimated carbon emissions of all _LeBenchmark 2.0_ models. First, it is quite clear that the carbon footprint of training large SSL models is far from negligible, even in countries with a relatively clean energy mix. As supported by previous research, the location of the training must be considered a critical design choice when starting such a project, as the total CO\({}_{2}\) emissions can easily be multiplied by four to eight if gas and oil are part of the energy mix. Finally, one may wonder whether the extra kWhs thrown at the model are worth it, given the relatively small downstream performance improvement between the 3K-large, 7K-large, and 14K-xlarge models, in contrast to the energy consumption being multiplied by a factor of 3.5.
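Although the exact estimation protocol of [139] is not reproduced here, the description above implies an estimate of the following form, where \(E_{\mathrm{GPU+CPU}}\) is the measured energy, PUE the power usage effectiveness, and \(I_{\mathrm{grid}}\) the carbon intensity of the grid (the symbol names are ours, not those of the protocol):

\[\mathrm{CO_{2}e}\;\approx\;\mathrm{PUE}\times E_{\mathrm{GPU+CPU}}\times I_{\mathrm{grid}}.\]

As a sanity check, applying the French carbon intensity of 52 gCO2/kWh to the energy figures reported in Table 11 recovers the CO\({}_{2}\) column, e.g. \(16{,}511.2\,\mathrm{kWh}\times 52\,\mathrm{gCO_{2}/kWh}\approx 859\,\mathrm{kg}\) for the 14K-xlarge model.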
## 7 Conclusion

_LeBenchmark 2.0_ establishes new foundations for the development of French SSL-equipped speech technologies. Following the three steps of the lifecycle of any SSL model, we gathered and documented the largest available collection of unlabeled French speech for SSL pre-training, we trained three new pre-trained SSL models for a total of 10 available checkpoints, and we evaluated them on two new tasks, increasing the total number of tasks to six. _LeBenchmark 2.0_ models are shared with the community via the HuggingFace Hub.

## 8 Acknowledgements

This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-A0091012047). This work benefited from the 'Grand Challenge Jean Zay' program and was also partially supported by MIAI@Grenoble-Alpes (ANR-19-P3IA-0003). This paper was also partially funded by the European Commission through the SELMA project under grant number 957017, and the UTTER project under grant number 101070631.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Train. Time** & **GPUs** & **Energy** & **CO\({}_{2}\)** \\ & _(hours)_ & & _(kWh)_ & _(kg)_ \\ \hline 1K-_base_ & 250 & 4 Tesla V100 & 195.0 & 10.5 \\ 1K-_large_ & 925 & 4 Tesla V100 & 721.5 & 37.5 \\ 2.7K-_base_ & 128 & 32 Tesla V100 & 682.2 & 35.4 \\ 3K-_base_ & 128 & 32 Tesla V100 & 682.2 & 35.4 \\ 3K-_large_ & 341 & 32 Tesla V100 & 1,817.5 & 94.5 \\ 7K-_base_ & 123 & 64 Tesla V100 & 1,535.0 & 79.8 \\ 7K-_large_ & 211 & 64 Tesla V100 & 4,501.0 & 234 \\ 14K-_light_ & 156 & 32 Tesla V100 & 1,497.6 & 77.8 \\ 14K-_large_ & 436 & 64 Tesla V100 & 8,371.2 & 435 \\ 14K-_xlarge_ & 525 & 104 Tesla A100 & 16,511.2 & 859 \\ \hline \hline \end{tabular} \end{table} Table 11: Summary of the best SSL models for each downstream task (left table) as well as estimates of the energy in kilowatt hours (kWh) and CO\({}_{2}\) equivalent in kilograms produced by the training of the _LeBenchmark 2.0_ models (right table).
2309.05314
Semantic Latent Decomposition with Normalizing Flows for Face Editing
Navigating in the latent space of StyleGAN has shown effectiveness for face editing. However, the resulting methods usually encounter challenges in complicated navigation due to the entanglement among different attributes in the latent space. To address this issue, this paper proposes a novel framework, termed SDFlow, with a semantic decomposition in original latent space using continuous conditional normalizing flows. Specifically, SDFlow decomposes the original latent code into different irrelevant variables by jointly optimizing two components: (i) a semantic encoder to estimate semantic variables from input faces and (ii) a flow-based transformation module to map the latent code into a semantic-irrelevant variable in Gaussian distribution, conditioned on the learned semantic variables. To eliminate the entanglement between variables, we employ a disentangled learning strategy under a mutual information framework, thereby providing precise manipulation controls. Experimental results demonstrate that SDFlow outperforms existing state-of-the-art face editing methods both qualitatively and quantitatively. The source code is made available at https://github.com/phil329/SDFlow.
Binglei Li, Zhizhong Huang, Hongming Shan, Junping Zhang
2023-09-11T08:59:15
http://arxiv.org/abs/2309.05314v1
# Semantic Latent Decomposition with Normalizing Flows ###### Abstract Navigating in the latent space of StyleGAN has shown effectiveness for face editing. However, the resulting methods usually encounter challenges in complicated navigation due to the entanglement among different attributes in the latent space. To address this issue, this paper proposes a novel framework, termed SDFlow, with a semantic decomposition in original latent space using continuous conditional normalizing flows. Specifically, SDFlow decomposes the original latent code into different irrelevant variables by jointly optimizing two components: **(i)** a semantic encoder to estimate semantic variables from input faces and **(ii)** a flow-based transformation module to map the latent code into a semantic-irrelevant variable in Gaussian distribution, conditioned on the learned semantic variables. To eliminate the entanglement between variables, we employ a disentangled learning strategy under a mutual information framework, thereby providing precise manipulation controls. Experimental results demonstrate that SDFlow outperforms existing state-of-the-art face editing methods both qualitatively and quantitatively. The source code is made available at [https://github.com/phil329/SDFlow](https://github.com/phil329/SDFlow). Binglei Li\({}^{1}\) Zhizhong Huang\({}^{1}\) Hongming Shan\({}^{2}\) Junping Zhang\({}^{1}\)\({}^{\dagger}\)+\({}^{1}\)Shanghai Key Lab of Intelligent Information Processing, School of Computer Science \({}^{2}\)Institute of Science and Technology for Brain-inspired Intelligence Fudan University, Shanghai 200433, China Footnote †: dagger\) Corresponding author Face Editing, Disentangle Learning, Generative Adversarial Network ## 1 Introduction Face editing is to change the desired facial attributes while keeping image quality and other undesired attributes. In recent years, the generator of StyleGAN [1, 2] has achieved significant progress in synthesizing high-fidelity images, which presents a semantically abundant latent space. Most importantly, the faces can be manipulated by traversing on such latent space, which reduces the pain in training a good GAN [3] from scratch. In detail, the faces need to be inverted into the latent space of StyleGAN to obtain the corresponding latent code [4, 5], which can be manipulated by different methods [6, 7, 8, 9, 10, 11, 12, 13], and then decoded by the pre-trained generator to produce edited faces. Current methods for face editing can be roughly categorized as supervised and unsupervised. Unsupervised methods employ principal components to identify editing directions [14, 15] or operate under the prior of CLIP [6, 16, 17]. Although these approaches offer meaningful transformations, they still fall short of producing precise user-desired editing without the aid of human annotations. Supervised methods [7, 8, 9] typically use the attributes-labeled images to identify how to manipulate the faces in latent space. Some of them [8, 9] assume that face editing can be achieved by linearly interpolating along certain directions. InterFaceGAN [8] learns the editing directions by training a hyperplane in the latent space to separate the examples with binary attributes. Latent-Transformer [9] trains a transformation network to produce dynamic directions. However, these methods may fail to handle the scenario when linear assumption does not hold. Alternatively, nonlinear methods [7, 13] aim to learn a nonlinear transformation. 
StyleFlow [7] leverages normalizing flows to re-sample the edited latent codes and AdaTrans [13] splits the whole transformation into several finer steps. Unfortunately, they have not explicitly disentangled different facial attributes due to the use of binary attributes. To address these issues, this paper proposes SDFlow to achieve a semantic decomposition of the latent space of StyleGAN using continuous conditional normalizing flows. Specifically, there are two key components in SDFlow: **(i)** a semantic encoder and **(ii)** a flow-based transformation module. The semantic encoder produces semantic variables for different attributes directly from input faces. Under the conditions of semantic variables, the flow-based transformation module maps the latent codes into semantic-irrelevant variables. To achieve the disentanglement between variables, we jointly optimize these two components under a mutual information framework [18]. Moreover, a pre-trained attribute classifier is distilled to inject the supervision signals of human annotations into the semantic variables. Consequently, the edited latent codes can be obtained from the flows by only changing the semantic variables. Our contributions are summarized as follows: **(i)** we introduce SDFlow, a novel disentangled non-linear latent navi gation framework for face editing. It performs semantic decomposition in the latent space of StyleGAN with a disentangled learning strategy, which thus can eliminate the entanglement between attributes and enhance editing controls. (**ii**) Both qualitative and quantitative experiments demonstrate the effectiveness of the proposed method in terms of image quality, editing accuracy, and identity/attribute preservation. ## 2 Methodology Fig. 1 illustrates the framework of SDFlow. The face image \(\mathbf{I}\in\mathbb{R}^{3\times H\times W}\) are inverted into the latent space \(\in\mathcal{W}^{+}\) of StyleGAN to obtain the layer-wise latent code \(\mathbf{w}=E_{\rm img}(\mathbf{I})\in\mathbb{R}^{18\times 512}\), through the pre-trained encoder \(E_{\rm img}\)[5]. Face editing should transform \(\mathbf{w}\) to new latent codes \(\mathbf{w}_{e}\) for manipulating the faces into \(G(\mathbf{w}_{e})\) with target attributes, using the pre-trained StyleGAN generator \(G\). ### Model Architecture SDFlow contains a semantic encoder and a flow-based transformation module to learn disentangled variables. Flow-based transformation module.Conditioned on the given facial attributes of certain input face, the flow-based transformation module is to disentangle the semantic-irrelevant variables from the latent code. In this paper, we employ the conditional continuous normalizing flows (CNFs) [19] as the flow-based transformation module for disentangled internal representations inspired by [7]. The conditional CNFs are optimized by neural ODE [20], whose mathematical basis in differential equations can be expressed as \[\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t}=\Phi_{\theta}(\mathbf{z}(t),\mathbf{s},t), \tag{1}\] where \(\mathbf{z}\) is the variable of Gaussian distribution, \(t\) is the time variable, \(\mathbf{s}\) is the semantic variables, and \(\Phi_{\theta}\) is a neural network to produce \(\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t}\), same as [7]. Therefore, the inverse inference of CNFs can be defined as \[\mathbf{z}(t_{1})=\mathbf{z}(t_{0})+\int_{t_{0}}^{t_{1}}\Phi_{\theta}(\mathbf{z}(t),\mathbf{s},t)\mathrm{d}t, \tag{2}\] where \(t_{0}\) and \(t_{1}\) are the predefined start and end time. 
Here, \(\mathbf{z}(t_{0})=\mathbf{w}\) and \(\mathbf{z}(t_{1})\in\mathcal{N}(0,1)\) should be irrelevant to \(\mathbf{s}\). Therefore, the loss function to optimize the flow-based transformation module can be written as \[\mathcal{L}_{\rm nll}=-\log p(\mathbf{z}(t_{0}))+\int_{t_{0}}^{t_{1}}\mathrm{Tr} \left(\frac{\partial\Phi}{\partial\mathbf{z}(t)}\right)\mathrm{d}t, \tag{3}\] which maximizes the likelihood of the latent code \(\mathbf{w}\). Face editing can be achieved by only changing semantic variables \(\mathbf{s}\) in the reverse order of Eq. 2: \[\mathbf{w}_{e}=\mathbf{z}(t_{1})-\int_{t_{0}}^{t_{1}}\Phi_{\theta}(\mathbf{z}(t),\mathbf{\hat{ s}},t)\mathrm{d}t, \tag{4}\] where \(\mathbf{\hat{s}}\) is the target semantic variables. In practice, \(\mathbf{s}\) can be set to the predictions of a pre-trained attribute classifier. However, this strategy does not provide well-disentangled \(\mathbf{z}\) and \(\mathbf{s}\); manipulating \(\mathbf{s}\) would still change other undesired attributes. Semantic Estimator.SDFlow opts to employ an additional semantic encoder \(E_{\rm s}\) to estimate the semantic variables, Figure 1: Framework of SDFlow. SDFlow consists of two components: (a) the flow-based transformation module transforms the original latent codes to semantic-irrelevant variable \(\mathbf{z}\) in Gaussian distribution, conditioned on the semantic variables. The edited faces can be obtained in the forward inference of flows. (b) The semantic encoder estimates the semantic variables of input faces for the flow-based transformation module. They are jointly optimized to achieve a disentangled face editing. which are jointly trained with the flow-based transformation module. As a result, \(E_{\mathrm{s}}\) can adaptively adjust the semantic variables to produce semantic-irrelevant variables for the flows. Besides, \(E_{\mathrm{s}}\) works directly from input faces instead of latent codes \(\mathbf{w}\) which tends to be entangled [8]. In detail, \(E_{\mathrm{s}}\) contains ResNet-34 [21] as the backbone, following three linear layers and a sigmoid activation function. To inject the supervision signals of human annotations into the semantic variables, a pre-trained attribute classifier is distilled to \(E_{\mathrm{s}}\): \[\mathcal{L}_{\mathrm{kd}}=\frac{1}{N}\sum_{i=1}^{N}\|C(\mathbf{I})-E_{\mathrm{s}}( \mathbf{I})\|_{2}^{2}, \tag{5}\] where \(\|\cdot\|_{2}^{2}\) denotes Euclidean distance between predictions. ### Training and Inference Disentangled learning.It is hard to make different variables disentangled well if simply jointly training two components. To this end, we utilize the disentangled learning strategy in [18] to better regularize the semantic encoder and flow-based transformation module. Specifically, the semantic variables \(\mathbf{s}\) are randomly manipulated to \(\mathbf{\hat{s}}\) sampling from \([0,1]\), and then the faces are generated according to Eq. 4. If \(\mathbf{s}\) are disentangled well, \(E_{\mathrm{s}}\) should be able to reconstrcut \(\mathbf{\hat{s}}\). Therefore, we can maximize the mutual information between variables through: \[\mathcal{L}_{\mathrm{mi}}=\frac{1}{N}\sum_{i=1}^{N}\|E_{\mathrm{s}}(\hat{\mathbf{I }})-E_{\mathrm{s}}(\mathbf{I})\|_{2}^{2}, \tag{6}\] where \(\hat{\mathbf{I}}=G(\mathbf{w}_{e})\) is the edited faces. 
Training and inference.To compress the unnecessary changes in the edited latent codes, we restrict the Euclidean distance between original and edited latent codes [9]: \[\mathcal{L}_{\mathrm{reg}}=\|\mathbf{w}-\mathbf{w}_{e}\|_{2}^{2}. \tag{7}\] Combining the Eqs. 3, 5, 6, 7, the overall training objective is to minimize the sum of all losses: \[\mathcal{L}=\mathcal{L}_{\mathrm{nll}}+\mathcal{L}_{\mathrm{kd}}+\mathcal{L}_{ \mathrm{mi}}+\mathcal{L}_{\mathrm{reg}}. \tag{8}\] During inference, we only need to change the desired semantic variables while keeping \(\mathbf{z}\) and unrelated variables. ## 3 Experiments ### Implementation Details We performed SDFlow with the official StyleGAN2 generator [2] pre-trained on FFHQ [1]. We employed e4e [5] to invert the input faces, and trained the attribute classifiers on CelebA dataset [22]: a ResNet-34 [21] for knowledge distillation and a ResNet-50 [21] for evaluation. The flow-based transformation module follows the same architecture in [7] with \(t_{0}\) and \(t_{1}\) fixed as 0 and 1. The first 69k faces of FFHQ are used for training while the rest and CelebA-HQ [23] are used for testing data [13]. All modules are jointly trained for 10,000 iterations with batch size 16, Adam optimizer [24], learning rate \(10^{-4}\), \(\beta_{1}=0.9\), and \(\beta_{2}=0.99\). All experiments are conducted on a single NVIDIA 3090 GPU. ### Experimental Results Visualization of semantic variables.Fig. 2 visualizes the semantic variables of three facial attributes, _i.e_., _Eyeglasses_, _Gender_, and _Age_, predicted by \(E_{s}\). The results show that \(E_{s}\) can learn meaningful values for different variables. In terms of _Eyeglasses_, there is a clear decision boundary since it is distinct to identify the faces with/without glasses. On the contrary, _Age_ exhibits overlapping, which conforms to the practical scenario of facial aging process. In summary, Fig. 2 validates that binary attributes are insufficient to effectively describe the strength of attributes, emphasizing the importance of incorporating our proposed semantic decomposition. Qualitative evaluation.Fig. 3 showcases example results on manipulating the faces into target attributes. We compare our method with InterfaceGAN [8], Latent Transformer [9] and StyleFlow [7] and manually select the best editing strength. Previous methods [7, 8, 9] usually change one's identity or other unrelated attributes, such as adding glasses when aging. Fig. 2(a) showcases the effectiveness of our SDFlow in successfully disentangling highly correlated attributes [8]. Fig. 2(b) shows that SDFlow achieves the best results in multi-attribute editing. Our proposed SDFlow excels in producing photo-realistic and disentangled manipulations while preserving the identity and unrelated attributes. Quantitative evaluation.We desire a higher proportion of successfully manipulated samples with fewer changes in identity and unrelated attributes. We use three widely used metrics to compare different methods quantitatively, including editing accuracy, attribute preservation accuracy, and identity preservation [13]. An attribute classifier with ResNet-50 is trained from scratch to predict the attributes of the manipulated faces, which can be used to measure editing accuracy and attribute preservation. Identity preservation is the cosine similarity between the facial embeddings extracted by [25]. 
We gradually increase the editing strength of the manipulation towards the opposite attribute until the editing accuracy reaches 99%. Consequently, we can draw the identity/attribute preservation curves w.r.t. editing accuracy. The quantitative results are shown in Fig. 4. Our method outperforms the three competitors by a large margin, indicating better disentangled and more accurate face editing.

Figure 2: Histogram of semantic variables predicted by \(E_{\mathrm{s}}\).

### Ablation Study

We conduct ablation studies to validate the effectiveness of the different components of our method. We start from the baseline StyleFlow [7], which is directly optimized by \(\mathcal{L}_{\mathrm{nll}}\). Then we gradually add \(\mathcal{L}_{\mathrm{reg}}\), \(\mathcal{L}_{\mathrm{mi}}\), and \(\mathcal{L}_{\mathrm{kd}}\). It is worth noting that a pre-trained attribute classifier is employed as the semantic encoder when adding \(\mathcal{L}_{\mathrm{mi}}\), and the semantic encoder is jointly trained when adding \(\mathcal{L}_{\mathrm{kd}}\). The quantitative and qualitative results are presented in Fig. 5 and Fig. 6, respectively. Interestingly, adding \(\mathcal{L}_{\mathrm{reg}}\) achieves significant improvements over the baseline, indicating that inverse inference alone during optimization cannot preserve identity/attributes. Although \(\mathcal{L}_{\mathrm{mi}}\) with a pre-trained classifier is helpful for attribute preservation, identity preservation unexpectedly drops. We argue that the pre-trained classifier would over-manipulate the latent codes towards unrelated attributes while ignoring identity. On the contrary, combining all three components (our SDFlow) addresses this issue, as the semantic encoder is involved in the training process to estimate better semantic variables.

## 4 Conclusions

In this paper, we introduce SDFlow for face editing with a semantic decomposition in the original latent space using continuous conditional normalizing flows. A semantic encoder is introduced to estimate proper semantic variables. The flow-based transformation module enables nonlinear editing and produces semantic-irrelevant variables, under the conditions of the semantic variables. A disentangled learning strategy is adopted to eliminate the entanglement among the attributes. Extensive experiments demonstrate that our SDFlow can generate photo-realistic results with better disentanglement.

Figure 4: Quantitative results for multi-attribute editing. A higher curve indicates better performance.
Figure 5: Quantitative ablation study of proposed components.
Figure 3: Qualitative comparisons with recent methods for face editing. The competitors produce unexpected changes in unrelated attributes or fail to handle the hard cases when editing single or multiple attributes.
Figure 6: Qualitative ablation study of proposed components.
Navigating the latent space of StyleGAN has been shown to be effective for face editing. However, the resulting methods struggle with complex navigation because of the entanglement among different attributes in the latent space. To address this problem, this paper proposes a new framework called SDFlow. SDFlow performs a semantic decomposition in the original latent space using continuous conditional normalizing flows. Specifically, SDFlow decomposes the original latent code by jointly optimizing a semantic encoder, which estimates semantic variables from input faces, and a flow-based transformation module, which maps the latent code to semantic-irrelevant variables following a Gaussian distribution, conditioned on the learned semantic variables. This disentanglement-oriented approach uses a mutual information framework to ...
2309.12668
UWA360CAM: A 360$^{\circ}$ 24/7 Real-Time Streaming Camera System for Underwater Applications
Omnidirectional camera is a cost-effective and information-rich sensor highly suitable for many marine applications and the ocean scientific community, encompassing several domains such as augmented reality, mapping, motion estimation, visual surveillance, and simultaneous localization and mapping. However, designing and constructing such a high-quality 360$^{\circ}$ real-time streaming camera system for underwater applications is a challenging problem due to the technical complexity in several aspects including sensor resolution, wide field of view, power supply, optical design, system calibration, and overheating management. This paper presents a novel and comprehensive system that addresses the complexities associated with the design, construction, and implementation of a fully functional 360$^{\circ}$ real-time streaming camera system specifically tailored for underwater environments. Our proposed system, UWA360CAM, can stream video in real time, operate in 24/7, and capture 360$^{\circ}$ underwater panorama images. Notably, our work is the pioneering effort in providing a detailed and replicable account of this system. The experiments provide a comprehensive analysis of our proposed system.
Quan-Dung Pham, Yipeng Zhu, Tan-Sang Ha, K. H. Long Nguyen, Binh-Son Hua, Sai-Kit Yeung
2023-09-22T07:24:58
http://arxiv.org/abs/2309.12668v2
# UWA360CAM: A 360\({}^{\circ}\) 24/7 Real-Time Streaming Camera System for Underwater Applications ###### Abstract Omnidirectional camera is a cost-effective and information-rich sensor highly suitable for many marine applications and the ocean scientific community, encompassing several domains such as augmented reality, mapping, motion estimation, visual surveillance, and simultaneous localization and mapping. However, designing and constructing such a high-quality 360\({}^{\circ}\) real-time streaming camera system for underwater applications is a challenging problem due to the technical complexity in several aspects including sensor resolution, wide field of view, power supply, optical design, system calibration, and overheating management. This paper presents a novel and comprehensive system that addresses the complexities associated with the design, construction, and implementation of a fully functional 360\({}^{\circ}\) real-time streaming camera system specifically tailored for underwater environments. Our proposed system, UWA360CAM, can stream video in real time, operate in 24/7, and capture 360\({}^{\circ}\) underwater panorama images. Notably, our work is the pioneering effort in providing a detailed and replicable account of this system. The experiments provide a comprehensive analysis of our proposed system. ## I Introduction Comprehending the utilisation of habitats by species and the diversity of such habitats are of significant importance in the fields of ecology and conservation, both in marine and terrestrial ecosystems. Moreover, the ocean encompasses a substantial volume and represents the most extensive viable ecosystem on the planet. However, the acquisition of data pertaining to these aspects is frequently challenging. Color camera is an affordable and data-intensive sensor that is well suited for several marine applications, including the detection of aquatic animals, estimation of motion, coral cultivation and monitoring [1, 2]. Acquiring global scene information is important to understand our 3D environment [3], which can be implemented using an omnidirectional visual representation that encompasses a full 360\({}^{\circ}\) view along the vertical axis at every point of observation [4]. A traditional approach to obtain omnidirectional sensing relies on heavy and unreliable mechanical structures [5], e.g., the construction of an omnidirectional stereo system using plane mirrors [6]. Another potential approach is to install additional cameras [5], but this approach has financial implications and suffers from limitations in the architectural framework. A modern cost-effective approach is to instead use catadioptric cameras or fisheye cameras, which offer wider field of view for enhanced environmental coverage [7, 8], and thus the ability to capture panoramic views [9, 10]. Additionally, some recent studies [11, 12, 13] demonstrated promising results of omnidirectional depth estimation using fisheye images. Omnidirectional panoramic video systems therefore find extensive employment in various domains, including virtual reality, 360\({}^{\circ}\) movies, and video surveillance [14]. For submerged environments, the utilisation of omnidirectional cameras has numerous novel technological possibilities across a range of disciplines, including underwater robotics, marine science, oil and gas sectors, underwater archaeology, and public outreach. 
Nevertheless, the utilisation of these cameras remains significantly restricted compared to their use in the air and on land, mostly due to the inherent difficulties posed by the underwater environment. Numerous camera models have been suggested to offer projection and unprojection functionalities that are well-suited for accommodating large field-of-view lenses [15]. However, previous studies have failed to address all-encompassing optical concerns inside the underwater setting. In this paper, a new system is proposed with a detailed pipeline for design, construction, and implementation of a fully functional 360\({}^{\circ}\) camera suitable for real-time streaming in underwater environments. The main contributions of this work are: * A comprehensive hardware and software pipeline for the development of a complete underwater camera system capable of capturing a full 360\({}^{\circ}\) field of view and continuously stream in 24/7. This promotes further investigation into the construction of high-quality underwater camera systems with a 360\({}^{\circ}\) field of view. * A system that takes into account perspective projection and refraction of water, pressure housings and air for underwater applications. By employing this method, it is possible to more accurately estimate the imaging process and obtain optimal levels of precision. * Extensive experiments to analyze the performance of our proposed 360\({}^{\circ}\) camera system. To our knowledge, this is the most comprehensive documentation to date in the research community to detail the design of a 360\({}^{\circ}\) camera system for underwater applications. ## II UWA360CAM System Overview The UWA360CAM system comprises three distinct components as in Fig. 1, namely an underwater camera module, a transmission module, and a topside computer processing module. The design of the proposed system is shown in Fig. 1(a). The camera operates in three main stages: 1) The initial stage of the system triggers the fisheye cameras in a synchronised manner. 2) In the subsequent stage, the onboard processing unit allows the transmission of the images to the server. 3) The last stage is executed on a high-performance server to accommodate advanced computer vision algorithms such as video stitching. Our proposed system employs four fisheye cameras, which provides a bare minimum data acquisition that allows us to obtain 360\({}^{\circ}\) depth image. The design of the system involves the placement of a set of fisheye cameras in a configuration where two cameras are positioned in a front-backward orientation at the top, and another set of fisheye cameras is positioned at the bottom in a perpendicular direction as shown in Fig. 2b. This arrangement ensures that each adjacent pair of stereo cameras maintains an equal baseline distance. In order to synchronise multiple fisheye camera, a hardware board is used to allow for the connection of four same MIPI cameras and a software is developed to send a frame whenever the host places a request. The camera remains in a condition of continuous operation, and upon receiving a request from the host, it transmits the frame to facilitate synchronisation. The delay is contingent upon the performance of the host device. Hence, in instances where host devices exhibit sub-optimal performance, it is quite probable that latency will exhibit fluctuations. At the heart of our system, a Jetson Nano board is used to communicate with the synchronised fisheye cameras for processing and transmitting images. 
Power Line Communication (PLC) modules are employed through the use of Fathom Tether in order to supply power and transmit video data from an underwater camera to a computer located above the water's surface. Finally, the high-performance computer is used for the implementation of advanced computer vision algorithms that process input videos obtained from four synchronised fisheye cameras. ## III Hardware ### _Cameras and Fisheye Lens_ When choosing the appropriate camera sensor for underwater applications, it is essential to consider many aspects including a high level of resolution to effectively capture photos of marine organisms, a high signal-to-noise ratio in order to guarantee the production of clear images, an adequate dynamic range to capture both the brighter and darker regions within the scene. In our research, four SONY IMX477 sensor cameras are utilised since it offers high-resolution, high-speed image sharpness, improved low-light performance, and high sensitivity. The IMX477 provides the maximum resolution of \(4056\times 3040\), the maximum framerate 60fps, pixel size \(1.55\mu m\times 1.55\mu m\), optical format \(1/2.3"\), and the high-performance MIPI CSI-2 interface offers a maximum bandwidth of 10 Gb/s with four image data lanes and uses fewer CPU resources. The IMX477 image sensor offers a mechanical IR cut-off filter switched automatically based on light condition which is only visible light during the bright light and infrared sensitivity during low light condition. The cameras are oriented in four distinct directions in order to establish both horizontal and vertical baselines as shown in Fig. 2b. To maximize the field of view, 220\({}^{\circ}\) fisheye lens is utilized, which is among the widest options available in the market. ### _Processing Unit and Synchronized Cameras Unit_ The processing unit is responsible for the three primary functions. The first objective is to transmit the control signal to four fisheye cameras in order to facilitate the simultaneous capture of images or video recording. The second objective involves doing pre-processing on the images prior to their transmission to the topside computer via the transmission unit. The final step involves transferring images or streaming videos to the server in order to facilitate the application of more sophisticated computer vision algorithms such as video stitching and fish detection. It is essential to take into account the significance of a high processing speed and considerable bandwidth in order to effectively transmit high-resolution content with a frame rate of at least 24 frames per second to meet the need for real-time streaming. In this paper, a low-cost edge device is utilized, the NVIDIA Jetson Nano, which provides 4 GB 64-bit LPDDR4, 1600MHz 25.6 GB/s, and MIPI CSI-2 D-PHY 1.1 interfaces. For Multi-Stream HD Video, Jetson Nano can support up to H.264 2160p 60fps. A particular challenge of a multi-camera system is its synchronization mechanism in order to ensure all frames are captured at the same timestamp. To synchronize our fisheye cameras, a synchronized 4-camera hardware is used. The purpose of this hardware board is to provide the simultaneous connection of four identical MIPI cameras. Additionally, a firmware has been built to enable the transmission of a frame whenever the host initiates a request. The camera maintains a state of uninterrupted functioning, and when prompted by the host, it transmits the frame to enable synchronisation. 
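For illustration only, a host-side capture loop in the spirit of the request/transmit pattern described above might look as follows. This sketch assumes the four cameras appear as ordinary V4L2/OpenCV devices, which is an assumption on our part; the actual system uses the MIPI CSI synchronization board and its firmware.

```python
# A minimal, illustrative host-side loop for grabbing near-simultaneous frames
# from four cameras with OpenCV (not the system's actual firmware interface).
import cv2

caps = [cv2.VideoCapture(i) for i in range(4)]  # assumed devices /dev/video0-3

def grab_synced_frames():
    # grab() latches a frame on every camera as close together in time as possible,
    # then retrieve() decodes them; this mirrors the request/transmit pattern above.
    for cap in caps:
        cap.grab()
    frames = []
    for cap in caps:
        ok, frame = cap.retrieve()
        frames.append(frame if ok else None)
    return frames

frames = grab_synced_frames()
print([f.shape for f in frames if f is not None])
```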
### _Communication_

To stream high-resolution video at 24 fps from the underwater camera unit to the topside computer unit, it is necessary to take waterproof wired communication into account.

Fig. 1: Our system functions in three main stages: 1) The camera unit triggers the fisheye cameras in a synchronised manner. 2) The transmission unit sends the images to the server via Fathom Tether. 3) The high-performance server performs computer vision algorithms such as image stitching.

To avoid additional cables and costs, it is proposed to use Power Line Communication. The Fathom Tether, shown in Fig. 3a, is a high-quality tether cable designed specifically for subsea applications [16] and is used to transmit data from the offshore NVIDIA Jetson Nano to an onshore server. It is neutrally buoyant, has 300-350lb breaking strength, and is embedded with water-blocking fibers to seal any leaks. In order for the Fathom Tether to work well for tether lengths of up to 100m, it is necessary to utilize the Power Line Communication modules, as illustrated in Fig. 3b. The module is based on the Qualcomm QCA7420 SoC, which takes advantage of the robust HomePlug AV (IEEE-1901) standard to send Ethernet through powerlines. It offers a maximum data rate of up to 500 Mbps and 128-bit AES data encryption. The Real-Time Streaming Protocol (RTSP) is employed to facilitate the transmission of real-time video with minimal latency. This protocol enables the streaming of video content from the edge device located offshore to a server situated onshore.

### _Enclosure_

Our custom mechanical enclosure design effectively protects the onboard processing module and cameras from water leakage and avoids undesirable optic effects during functioning. The propagation of light involves traversing three distinct environments with different refractive indexes, namely water, a cast acrylic cage, and air, prior to reaching the camera sensor. This sequential passage through several environments introduces a refraction effect. The influence of refraction on the calibration of the camera system is a significant consideration, and therefore it is addressed in the subsequent section for calibration purposes. Furthermore, the acrylic material is selected in order to ensure the system's ability to withstand high pressure conditions of up to 100 metres in depth. In order to mitigate the problem of overheating during operation, the enclosure has a thermal forced convection mechanism that utilises 50% of its internal volume. Given the elevated temperatures experienced during the operation of cameras and CPUs, it is common practice to employ a fan to induce air circulation within the system, which enables efficient transportation of substantial amounts of thermal energy. The elevated temperature air emanating from the camera and CPU is conveyed to the vacant space within the tube, as shown in Fig. 3b. This leads to a decrease in the temperature of the system, which mitigates the problem of overheating. In our implementation, the proposed camera system is capable of uninterrupted 24/7 operation.

## IV Software

### _Fisheye Camera Model_

To produce a 360\({}^{\circ}\) panoramic view in our system, it is necessary to model the image formation for a fisheye camera. There are several criteria for selecting a camera model for our system. First, it is desirable to have a camera model specifically designed for fisheye lenses to account for the strong visual distortion on fisheye images.
Such distortions must be modeled for both the projection and unprojection functions, which map a 3D point to a 2D pixel and vice versa, so that the camera model can also support tasks such as depth estimation. Second, it is preferred to have a camera model with a simple and precise calibration and high-quality 3D reconstruction procedure. In the literature, Scaramuzza _et al._[17] present a camera projection model that uses high-order polynomials to represent distortions.

Fig. 3: a) Fathom Tether connection between camera unit and topside computer unit and b) Air flow in enclosure designed for forced convection.

Fig. 2: Overview of our 360\({}^{\circ}\) underwater camera. (a) 3D rendering and real-world prototype of our system. (b) Front view (left) and back view (middle) of our system with fisheye cameras, a camera-synchronized board and a computing unit. Topside computer unit (right) contains the PLC module.

This model exhibits a high level of generality and can be effectively used with a range of catadioptric and fisheye cameras. However, due to the absence of a closed-form solution, it becomes necessary to approximate the inverse projection equation using a high-order polynomial, resulting in the introduction of errors. The unified camera model (UCM) [18] is a general model for catadioptric and fisheye cameras, with enhanced calibration accuracy provided by the MUCM model [19]. More recently, the Double Sphere camera model [15] was introduced; it has only one more intrinsic parameter than the MUCM model but offers a substantial improvement in calibration accuracy. However, these models did not consider underwater applications, which involve light refraction effects. In this work, it is proposed to use the Triple Sphere camera model (TSCM) [20], because TSCM can accurately model the image formation of fisheye cameras stored in an enclosure and placed underwater. It is assumed that the enclosure is thin, so the light transport can be modeled by an incident light ray traversing the water and undergoing a single refraction off the housing before reaching the camera. This camera model calibration is highly effective in handling wide field-of-view (FOV) angles that exceed 180\({}^{\circ}\). The method calibrates intrinsic parameters precisely in underwater applications since it takes into account the refraction as incident light travels through water, enclosure, and air. Fig. 4 shows our camera model. TSCM is a projection model based on the so-called DSCM model. In particular, the camera projection model considers the incident light refracting three times, including once as it passes from the water to the camera and the other two times due to the double sphere model. Let the scalars \(\alpha\), \(\xi\) and \(\lambda\) be the displacements of the three unit spherical centers. Let \(\Omega\subset\mathbb{R}^{3}\) and \(\Theta\subset\mathbb{R}^{2}\) denote the set of \(3\)D points that result in valid projections and the image domain to which points can be projected, respectively. The camera projection function is defined by \(\pi_{\mathbf{c}}:\Omega\mapsto\Theta\), which models the relationship between points in the 3D space and pixels on the image plane. The unprojection model \(\pi_{\mathbf{c}}^{-1}:\Theta\mapsto\mathbb{R}^{3}\) inverts a pixel to an outgoing ray into the 3D space.
Given a 3D point \(\mathbf{P}=[X,Y,Z]^{T}\in\mathbb{R}^{3}\), the projection function of the camera can be defined as \[\pi_{\mathbf{c}}(\mathbf{P},\mathbf{i}) =\frac{1}{\phi}\begin{bmatrix}f_{x}&0&c_{x}\\ 0&f_{y}&c_{y}\end{bmatrix}\begin{bmatrix}X\\ Y\\ \phi\end{bmatrix} \tag{1}\] \[\phi =Z+\xi d_{0}+\lambda d_{1}+\frac{\alpha}{1-\alpha}d_{2} \tag{2}\] \[d_{0} =\sqrt{X^{2}+Y^{2}+Z^{2}} \tag{3}\] \[d_{1} =\sqrt{X^{2}+Y^{2}+(\xi d_{0}+Z)^{2}} \tag{4}\] \[d_{2} =\sqrt{X^{2}+Y^{2}+(\xi d_{0}+\lambda d_{1}+Z)^{2}} \tag{5}\] where \(\mathbf{i}\) is the vector of intrinsic parameters. The set of 3D points that result in a valid projection is expressed as follows: \[\Omega =\{x\in\mathbb{R}^{3}\ \mid\ z>-w_{2}d_{0}\} \tag{6}\] \[w_{2} =\frac{\xi+\lambda+w_{1}}{\sqrt{1+(\xi+\lambda)^{2}+2w_{1}(\xi+\lambda)}} \tag{7}\] \[w_{1} =\frac{\alpha}{1-\alpha} \tag{8}\] The unprojection function is computed as follows: \[\pi_{\mathbf{c}}^{-1}(\mathbf{p},\mathbf{i}) =\mu\begin{bmatrix}\eta\gamma x\\ \eta\gamma y\\ m_{z}\end{bmatrix}-\begin{bmatrix}0\\ 0\\ \xi\end{bmatrix} \tag{9}\] \[\mu =\xi m_{z}+\sqrt{\xi^{2}m_{z}^{2}-\xi^{2}+1} \tag{10}\] \[m_{z} =\eta(\gamma-\phi)-\lambda \tag{11}\] \[\eta =\lambda(\gamma-\phi)+\sqrt{\lambda^{2}(\gamma-\phi)^{2}-\lambda^{2}+1} \tag{12}\] \[\gamma =\frac{\phi+\sqrt{1+(1-\phi^{2})(x^{2}+y^{2})}}{x^{2}+y^{2}+1} \tag{13}\] \[\phi =\begin{cases}\frac{\alpha}{1-\alpha}&\text{if}\ \alpha\leq 0.5\\ \frac{1-\alpha}{\alpha}&\text{if}\ \alpha>0.5\end{cases} \tag{14}\] where \((x,y)\) is the normalized coordinate.

### _Camera Calibration_

To calibrate this camera model, corners are detected on the calibration board and the reprojection error of the corner points is minimised across all images. The projection point \(u_{nk}\) of the \(k^{th}\) corner \(x_{k}\) can be obtained using the corner detector for each picture \(n\) in the calibration sequence. The coordinate of \(u_{nk}\) is related to the camera intrinsic and extrinsic parameters. Let us denote by \(s=\left[\mathbf{i},\mathbf{T_{cam_{0}}},\mathbf{T_{cam_{1}}},\ldots,\mathbf{T_{cam_{N}}}\right]\) the parameters to optimize. The nonlinear optimization problem can then be constructed as follows: \[s^{*}=\arg\min_{s}\sum_{n=0}^{N}\sum_{k\in K}\rho\left(\left(\pi(\mathbf{T_{cam_{n}}}x_{k},\mathbf{i})-u_{nk}\right)^{2}\right) \tag{15}\] where \(\mathbf{T_{cam_{n}}}\in SE(3)\) is the transformation from the coordinate frame of the calibration grid to the camera coordinate frame for image \(n\), \(K\) is the set of detected corner points for image \(n\), and \(\rho\) is the robust Huber norm.

Fig. 4: Triple Sphere camera model. The incident light ray from a point \(P\) travels through water, the acrylic enclosure, and the air into the sensor.

Since the optimization is non-convex, good initialization of the intrinsic parameters \(\mathbf{i}\) and camera poses \(\mathbf{T_{cam}}\) is important for the optimization to converge. The intrinsic parameters are initialized using the method of [21] and initial poses are found using the UPnP algorithm [22].
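For concreteness, the following is a minimal NumPy sketch of the Triple Sphere projection of Eqs. (1)-(5) together with the validity condition of Eqs. (6)-(8). The intrinsic values used are placeholders rather than the calibrated parameters of the actual system.

```python
# Triple Sphere camera model: projection (Eqs. 1-5) and validity check (Eqs. 6-8).
import numpy as np

def tscm_project(P, fx, fy, cx, cy, alpha, xi, lam):
    X, Y, Z = P
    d0 = np.sqrt(X**2 + Y**2 + Z**2)                          # Eq. (3)
    d1 = np.sqrt(X**2 + Y**2 + (xi * d0 + Z)**2)              # Eq. (4)
    d2 = np.sqrt(X**2 + Y**2 + (xi * d0 + lam * d1 + Z)**2)   # Eq. (5)
    phi = Z + xi * d0 + lam * d1 + alpha / (1.0 - alpha) * d2  # Eq. (2)
    u = fx * X / phi + cx                                      # Eq. (1)
    v = fy * Y / phi + cy
    # Eqs. (6)-(8): the point projects validly only if Z > -w2 * d0.
    w1 = alpha / (1.0 - alpha)
    w2 = (xi + lam + w1) / np.sqrt(1.0 + (xi + lam)**2 + 2.0 * w1 * (xi + lam))
    return (u, v), Z > -w2 * d0

# Placeholder intrinsics (cx, cy chosen as half of the 2028x1520 image size).
print(tscm_project(np.array([0.3, -0.2, 1.0]),
                   fx=600.0, fy=600.0, cx=1014.0, cy=760.0,
                   alpha=0.6, xi=0.2, lam=0.1))
```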
### _Real-time Panorama Stitching_

To make the system suitable for streaming, the panorama stitching is expected to run in real time. The panorama stitching method is adopted from Fast Sphere Sweeping Stereo [13], which has many useful properties suitable for underwater tasks and can be modified to cope with multiple fisheye cameras and light refraction. Our stitching algorithm has three stages. 1) First, adaptive spherical matching is used to perform stereo matching on the fisheye images, while taking into account the regional discriminative capability of distance inside each fisheye image. 2) Second, an efficient spherical cost aggregation method with optimal complexity \(O(n)\) is performed to allow a stable sphere sweeping volume in noisy and textureless regions. This method can preserve edges with a coverage of 360\({}^{\circ}\). 3) Finally, distance-aware stitching is employed to generate a 360\({}^{\circ}\) panorama by integrating colours from several distance maps. This is achieved through efficient inpainting techniques.

In our stitching algorithm, spherical matching is a critical but also computationally expensive step, as it requires evaluating all depth candidates in all the combinations of overlapping regions along the baseline within the sphere sweeping volume. To achieve real-time performance, given a reference camera, a camera selection approach is employed to identify the most suitable camera pairs for correspondence search within the sphere sweeping volume. The optimal camera \(c^{*}\) for each pixel in the reference is determined by \[c^{*}(\theta,\phi)=\arg\max_{c_{k}}(q_{c_{k}}) \tag{16}\] where \(q_{c_{k}}\) is the angular change between the two 3D points \(p_{c_{k}}^{<0>}\) and \(p_{c_{k}}^{<N-1>}\); 0 and \(N-1\) are the first and last layers of the sphere sweeping volume.

Our adaptation of spherical matching to the Triple Sphere camera model is as follows. In typical in-air conditions, the spherical matching can be easily achieved by utilising the camera's intrinsic parameter matrix from the Double Sphere camera model. Nevertheless, the process of capturing images underwater becomes significantly more complex due to the need to consider the impact of the refractive interface. To measure the distance for distance discrimination in order to solve Eq. 16 for the Triple Sphere camera model, two 220\({}^{\circ}\) distance estimations are performed using the two opposed top cameras as references. For each pixel in each reference, the best camera is selected using selective matching. Let \(I_{c_{s}}\) be the image from the camera selected at pixel \((\theta,\phi)\) and \(I_{c_{0}}\) be the reference camera. The matching cost for the \(i_{th}\) distance candidate is: \[C(\theta,\phi,i)=\|V_{c_{s}\mapsto c_{0}}(\theta,\phi,i)-I_{c_{0}}(\theta,\phi)\|_{1} \tag{17}\] where \(V_{c_{s}\mapsto c_{0}}\) is the sphere sweeping volume from the selected camera to the reference camera. Then each slice of the spherical cost volume is regularized using a fast filtering method.

To aggregate the sparse distances into a dense distance map, downsampling is first performed with an edge-preserving filter using the bilateral weights between the guidance center and the neighbouring pixels. The bilateral weights are: \[w_{mn}(I,x,y)=\exp\left(\frac{\|I(x,y)-I(x+m,y+n)\|^{2}}{2\sigma_{I}^{2}}\right) \tag{18}\] where \(\sigma_{I}\) is the edge preservation parameter, and \((x,y)\) are pixel coordinates. The downsampling operation can then be defined by \[I(x,y)=\sum_{m,n=-1}^{1}I(2x+m,2y+n)w_{m,n}(I,2x,2y)/\tau \tag{19}\] where \(\tau\) is the normalizing constant. Then upsampling is performed using a minimal pixel support. Guidance weights are computed between the guidance centers and the pixels to aggregate at the lower scale. After cost volume filtering, the optimal distance is voted via winner-takes-all, and sub-candidate accuracy is achieved through quadratic fitting.
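As an illustration of the aggregation step, the sketch below implements the bilateral-weight downsampling of Eqs. (18)-(19) in NumPy for a single-channel guidance image. Following the usual bilateral-filter convention, the exponent is taken as negative so that large intensity differences receive small weights; Eq. (18) as printed omits the sign, so this is an interpretation on our part.

```python
# Bilateral-weight downsampling of a single-channel image (Eqs. 18-19, illustrative only).
import numpy as np

def bilateral_downsample(I, sigma_I=0.1):
    H, W = I.shape
    out = np.zeros((H // 2, W // 2))
    for y in range(0, H - 1, 2):
        for x in range(0, W - 1, 2):
            acc, tau = 0.0, 0.0
            for m in (-1, 0, 1):
                for n in (-1, 0, 1):
                    # neighbour of the guidance center (x, y), clamped at the borders
                    yy = min(max(y + n, 0), H - 1)
                    xx = min(max(x + m, 0), W - 1)
                    w = np.exp(-(I[y, x] - I[yy, xx])**2 / (2.0 * sigma_I**2))  # Eq. (18)
                    acc += I[yy, xx] * w
                    tau += w
            out[y // 2, x // 2] = acc / tau                                     # Eq. (19)
    return out

I = np.random.rand(16, 16)
print(bilateral_downsample(I).shape)  # -> (8, 8)
```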
Although this methodology results in improved precision, an additional procedure is necessary to combine the fisheye images. This method involves initially generating a distance map at a specified position, followed by projecting the image based on the corresponding 3D coordinates. Subsequently, the images are merged using a blending technique that assigns greater importance to pixels with little displacement.

## V Experimental Evaluation

Our proposed system is configured with a Fathom Tether cable of 10-meter length and operated continuously for 48 hours. The aim is to evaluate the system temperature and operational performance while it is functioning. The calibration of our system is also evaluated. Table I reports the capabilities of our camera system in detail. The proposed system offers a high video framerate of 30 fps, high-resolution image capture at \(2028\times 1520\) and a low latency of 5 ms, which is suitable for real-time applications and environmental monitoring. This camera system also offers 12-bit color depth and HDR.

\begin{table} \begin{tabular}{c|c} \hline \hline **Specification** & **Value** \\ \hline Frame rates & 30 fps \\ Resolution & 2028\(\times\)1520 \\ Stream time & 24 hours \\ Latency & 5 ms \\ Synchronization & YES \\ FOV & 360(H), 180(V) \\ Optical Format & 1/2.3” \\ Color depth & 12 bits \\ HDR & YES \\ \hline \hline \end{tabular} \end{table} TABLE I: Camera specifications

### _Operational Performance_

Fig. 5(a) shows pictures taken by the proposed system in a water tank for monitoring aquatic creatures such as starfish. Qualitatively, the pictures taken by this system are clear and of high quality in the underwater environment. In addition to the image quality, a further analysis of the system temperature is conducted, which is a crucial aspect for performance because high temperature can affect the overall stability of the system. In the event that the system experiences excessive heat, it might result in malfunctions, decreased frames per second (FPS), or even an abrupt shutdown. Fig. 6 shows the temperatures of the edge device and cameras during functioning. It is evident that within the initial two-hour period, there is a significant rise in temperature for both the CPU and cameras, with the former increasing from \(42.5^{\circ}C\) to \(54^{\circ}C\), and the latter increasing from \(26.5^{\circ}C\) to \(45^{\circ}C\). Due to the forced convection in our airflow design, the CPU and camera temperatures tend to remain stable thereafter.

### _Camera Calibration Accuracy_

The performance of our camera calibration is reported in Table II. It can be seen that the mean reprojection error of the TSCM camera model is smaller than that of the DSCM. In our calibration, the coordinates of the corners of the calibration pattern are calculated by optimising for both pose and intrinsic parameters. The TSCM, with a total of seven internal camera parameters, demonstrates a better reprojection error than the DSCM with six parameters.

### _Image Stitching Results_

Fig. 5(b) demonstrates the qualitative result of our \(360^{\circ}\) image stitching algorithm. The running time for each frame is approximately \(33\,ms\) on an NVIDIA RTX3090 GPU. This result is measured with real data collected at the Ocean Research Facility of the Hong Kong University of Science and Technology using the proposed camera system.
## VI Conclusions and Future Work

This paper presents a complete hardware and software framework for the creation of an all-encompassing underwater camera system with the ability to capture a complete \(360^{\circ}\) field of view. Our system can function underwater continuously and in real time. Our work encourages additional exploration of underwater camera systems with a wide field of view. The proposed system has some limitations to be addressed in future work. First, since there are no data for underwater omnidirectional depth estimation, it is worth building a new dataset for this research and developing more accurate and reliable image stitching and depth estimation for the proposed system. Second, the enclosure design can be improved to avoid light reflection and refraction effects. Finally, it is interesting to explore a mechanical design to keep the system balanced in the water. It is planned to integrate this system with BlueROV2 to develop navigation techniques, e.g., SLAM, for underwater robots.

\begin{table} \begin{tabular}{c|c c} \hline \hline **Camera** & **DSCM Error \(\downarrow\)** & **TSCM Error \(\downarrow\)** \\ \hline 1 & 2.12 & 2.11 \\ 2 & 1.89 & 1.81 \\ 3 & 3.45 & 3.26 \\ 4 & 1.65 & 1.58 \\ \hline Average & 2.28 & 2.19 \\ \hline \hline \end{tabular} \end{table} TABLE II: Mean reprojection error for the evaluated camera models, DSCM and TSCM (in pixels)

Fig. 5: The images taken by the camera system and the \(360^{\circ}\) panorama stitched from them.
Fig. 6: Temperatures of CPU and cameras over 48 hours.
An omnidirectional camera is a cost-effective and information-rich sensor well suited to many marine applications and the ocean science community, spanning fields such as augmented reality, mapping, motion estimation, visual surveillance, and simultaneous localization and mapping. However, designing and building a high-quality 360° real-time streaming camera system for underwater environments is a challenging problem because of the technical complexity involved in sensor resolution, wide field of view, power supply, optical design, system calibration, and overheating management. This paper proposes a novel, comprehensive system that resolves the complexities associated with the design, construction, and implementation of a fully functional 360° real-time streaming camera system. The proposed system, UWA360CAM, streams video in real time, can operate 24/7, and captures 360° underwater panoramic images. Notably, ...
2309.15107
AdS$_3$ Vacuum State from Four Minkowski Vacuum States
We show that a tensor product of four specific $1{+}2$ Minkowski vacuum states is a self-consistent vacuum state for an infinite set of three-dimensional anti-de Sitter (AdS$_3$) spacetimes if their parity and time-reversal symmetry are broken in a particular way. The infinite set consists of pairs of all AdS$_3$ with non-zero unique scalar curvatures.
Lucas Kocia Kovalsky
2023-09-26T17:54:49
http://arxiv.org/abs/2309.15107v5
# AdS\({}_{3}\) Vacuum State from Four Minkowski Vacuum States ###### Abstract We show that a tensor product of four specific 1+2 Minkowski vacuum states is a self-consistent vacuum state for an infinite set of three-dimensional anti-de Sitter (AdS\({}_{3}\)) spacetimes if their parity and time-reversal symmetry are broken in a particular way. The infinite set consists of pairs of all AdS\({}_{3}\) with non-zero unique scalar curvatures. While a vacuum state can be defined for quantum fields in Minkowski spacetimes, consistent definitions have only been found for curved spacetimes that are globally hyperbolic. Such manifolds possess a Cauchy surface with a domain of dependence that covers them fully. Without a Cauchy surface, the non-linear Einstein equations of general relativity lack a complete set of initial value conditions [1]. There have been many attempts to develop a more general class of vacuum states. These include constructions for static spacetimes with respect to measurements along proper time [1; 2] and the S-J vacuum state [3; 4; 5], among others [6; 7; 8]. There have also been efforts to algebraically define an equivalent class of states, such as the Hadamard state in the GNS construction [9]. However, so far, results generally lack all the features of vacuum states in globally hyperbolic spacetimes [10; 11; 12]. This challenge is one of the main outstanding obstacles to formulating a consistent theory of quantum gravity. Here, we will show that perhaps the key to generalization lies in expressing an infinite set of curved subspaces instead as a direct sum of a finite number of flat projected spaces. Furthermore, we will show that AdS's boundary, which is of central importance to the AdS/CFT correspondence but also prevents global hyperbolicity, can sometimes be tamed by breaking the parity and time-reversal symmetry of AdS in a particular way to produce a globally hyperbolic spacetime, while still preserving the key features of the AdS/CFT correspondence. We will accomplish this by developing a quantum field theory inspired by the curved spacetime emergently defined by a non-commutative algebra [13; 14; 15; 16]. In particular, our construction will respect the Cartan decomposition of a recent formalism, which produced a consistent stress-energy tensor for some non-globally hyperbolic spacetimes using the octonion algebra [17]. We will consider the simplest case: three-dimensional anti-de Sitter (AdS\({}_{3}\)) spacetime. Consider a 2+2 flat spacetime with metric \(ds^{2}=-dt^{2}+dx^{2}-dt^{\prime 2}+dx^{\prime 2}\) and embed AdS\({}_{3}\) spacetime by restricting to the hyperboloid \(-t^{2}+x^{2}-t^{\prime 2}+x^{\prime 2}=-l^{2}\) for \(l^{2}{>}0\). Equivalently, one can consider flipping the signature to the metric \(ds^{2}=dt^{2}-dx^{2}+dt^{\prime 2}-dx^{\prime 2}\) and restricting to the hyperboloid \(-t^{2}+x^{2}-t^{\prime 2}+x^{\prime 2}=l^{2}\) (see Fig. 1). 
Selecting the first signature henceforth, the six isometries of the embedding flat spacetime (excluding parity \(\mathcal{P}\) and time-reversal \(\mathcal{T}\) symmetry) are the usual rotations and boosts generated by \(K^{01}\), \(K^{23}\), \(K^{12}\), \(K^{03}\), \(J^{02}\), and \(J^{13}\), where \(K^{01}=t\partial_{x}+x\partial_{t}\), \(K^{23}=t^{\prime}\partial_{x^{\prime}}+x^{\prime}\partial_{t^{\prime}}\), \(K^{12}=x\partial_{t^{\prime}}+t^{\prime}\partial_{x}\), \(K^{03}=t\partial_{x^{\prime}}+x^{\prime}\partial_{t}\), \(J^{02}=t\partial_{t^{\prime}}-t^{\prime}\partial_{t}\), and \(J^{13}=x\partial_{x^{\prime}}-x^{\prime}\partial_{x}\). The embedded AdS\({}_{3}\) spacetime inherits these isometries. As a result, this set forms the isometry group \(\mathrm{SL}(2,\mathbb{R})\times\mathrm{SL}(2,\mathbb{R})\) of the double cover of the embedded AdS\({}_{3}\) manifolds, restricted to one side or the other of the 2+2 embedding spacetime's lightcone. Each commuting \(\mathrm{SL}(2,\mathbb{R})\) is generated by even/odd linear combinations of the former isometries: \(\{K^{01}+K^{23},\,K^{03}+K^{12},\,J^{02}+J^{13}\}\) generate one and \(\{K^{01}-K^{23},\,K^{03}-K^{12},\,J^{02}-J^{13}\}\) generate the other [18]. Similar embeddings of AdS\({}_{n+1}\) can be made in 2+\(n\) flat spacetimes. Generally, the generators of embedding spaces form an overcomplete basis for their embedded manifolds and cannot consistently generate them with their integral curves [19]. However, for \(n=2\), the generators of a flat 2+\(n\) embedding space do produce consistent integral manifolds of AdS\({}_{n+1}\). This can be seen more easily by explicitly treating the 2+2 embedding space as a homogeneous space where transformations look the same up to \(\mathrm{SL}(2,\mathbb{R})\), thereby producing a quotient space for the embedded manifolds: \(\frac{\mathrm{SL}(2,\mathbb{R})\times\mathrm{SL}(2,\mathbb{R})}{\mathrm{SL}(2,\mathbb{R})}\cong\mathrm{SL}(2,\mathbb{R})\). The new space has isometry generators corresponding to the former commuting sets of generators identified with each other--\(K^{01}\pm K^{23}\), \(K^{03}\pm K^{12}\), and \(J^{02}\pm J^{13}\)--and becomes a homogeneous symmetric space [20]. These three generators form a complete basis for _both_ the 2+2 embedding space and each three-dimensional AdS\({}_{3}\) manifold (see Appendix A for a derivation involving this space's Cartan decomposition).

Figure 1: A sketch of the AdS\({}_{2}\) embedded spacetimes in a flat 1+2 (left) and 2+1 (right) embedding space. The oppositely signed signatures restrict foliation to the outside or inside of the 1+2 embedding's lightcone, respectively. In AdS\({}_{3}\) the left and right foliations are non-overlapping inverted copies of each other.

In other words, each hyperboloid \(-t^{2}+x^{2}-t^{\prime 2}+x^{\prime 2}=-l^{2}\) precisely coincides with a manifold produced by the integral curves of the quotient 2+2 spacetime's isometries: \(\{K^{01}+K^{23}\), \(K^{03}+K^{12}\), \(J^{02}+J^{13}\}\) produce clockwise integral curves while \(\{K^{01}-K^{23}\), \(K^{03}-K^{12}\), \(J^{02}-J^{13}\}\) produce counter-clockwise integral curves. \(l^{2}>0\) indexes all such manifolds. Including both signs of the embedding metric results in double covers of two non-overlapping copies (see Fig. 1). The scalar curvature of AdS\({}_{3}\) spacetime is \(R=-6/l^{2}<0\). \(l^{2}\) is often set to 1 to represent the "open" universe equivalence class since any \(l^{2}>0\) produces the same rescaled physics [11].
This scale invariance along with the accidental symmetry of the embedding space's signature under inversion is responsible for the completeness of the isometry basis in AdS\({}_{n+1}\) for \(n=2\) without \(\mathcal{P}\) and \(\mathcal{T}\). The double cover of an AdS\({}_{3}\) embedded spacetime on either side of the 2+2 lightcone is not globally hyperbolic. This is due to the fact that its spatial infinity boundary is a timelike hypersurface. New information can always enter at this open boundary, thereby denying the sufficiency of any initial value conditions [1]. Note that while we have only been considering the continuous generators that produce SL\((2,\mathbb{R})\), by considering its action restricted to hyperboloids parametrized by \(l^{2}\), we have still implicitly been including \(\mathcal{P}\)- and \(\mathcal{T}\)-symmetry. This suggests that perhaps by taking advantage of the freedom in how \(\mathcal{P}\)- and \(\mathcal{T}\)-symmetry can be removed, we may be able to produce a spacetime that is globally hyperbolic while still preserving its continuous symmetries. In particular, in the following, we will see that global hyperbolicity can be satisfied if the 2+2 embedding space is built up with broken \(\mathcal{P}\)- and \(\mathcal{T}\)-symmetry by using a finite number of flat projections instead of an infinite number of curved subspaces defined by the hyperbolic restriction which preserve \(\mathcal{P}\) and \(\mathcal{T}\)-symmetry. The complete isometry generator basis' decomposition into pairs of generators can be usefully combined with the "intensive" property of homogeneous spaces: projections of homogeneous symmetric spaces are also homogeneous if their kernel is homogeneous. Such homogeneous projections can be defined for the original 2+2 embedding space by evenly splitting the pairs of generators of its quotient space into the following four sets: \(\{K^{01}\), \(K^{03}\), \(J^{13}\}\), \(\{K^{01}\), \(K^{12}\), \(J^{02}\}\), \(\{K^{23}\), \(K^{03}\), \(J^{02}\}\), and \(\{K^{23}\), \(K^{12}\), \(J^{13}\}\). Each set consists of a sub-basis produced by keeping only one generator from each pair of the full basis and generates the isometry group of a 1+2 or 2+1 Minkowski spacetime for which it is also a complete basis. Every set shares one and only one unique generator with any other set. 1+2 and 2+1 Minkowski spacetimes have unique vacuum states where positive and negative frequencies are defined with respect to the asymptotically timelike isometry generator and isometries preserve the zero frequency [1]. Consider taking a direct sum of all four of the 1+2 and 2+1 projected spaces from our selected signature of the 2+2 embedding space. Direct sums preserve completeness and separability. As a result, such a direct sum produces a space spanned by a new complete basis of isometry generators consisting of the even linear combinations of the separable generators of the projected spaces: \(K^{01}+K^{23}\), \(K^{03}+K^{12}\), and \(J^{02}+J^{13}\). These are the generators of one of the commuting SL\((2,\mathbb{R})\) groups discussed previously. (The other one can be similarly produced from the same four 1+2 and 2+1 projected spaces with appropriately signed generators.) This direct sum necessarily generates the 2+2 embedding spacetime on _both_ sides of the lightcone (as in Fig. 1) since the projections do not favor either side. 
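The statement that the even and odd combinations of these generators form two commuting copies of \(\mathfrak{sl}(2,\mathbb{R})\) can be checked directly at the Lie-algebra level. Below is a small NumPy check (not taken from the paper) using the 4x4 matrix realization of the \(\mathfrak{so}(2,2)\) generators on coordinates \((t,x,t^{\prime},x^{\prime})\); the identification of the index pairs with the Killing vectors above is up to sign conventions.

```python
# Verify that the even/odd combinations of so(2,2) generators form two commuting
# three-dimensional subalgebras, i.e. so(2,2) ~ sl(2,R) (+) sl(2,R).
import numpy as np

eta = np.diag([-1.0, 1.0, -1.0, 1.0])  # metric on (t, x, t', x')

def gen(a, b):
    # (L_ab)^mu_nu = delta^mu_a eta_{b nu} - delta^mu_b eta_{a nu}
    L = np.zeros((4, 4))
    L[a, :] += eta[b, :]
    L[b, :] -= eta[a, :]
    return L

# index pairs (0,1),(2,3),(1,2),(0,3),(0,2),(1,3) <-> K01, K23, K12, K03, J02, J13
K01, K23, K12, K03 = gen(0, 1), gen(2, 3), gen(1, 2), gen(0, 3)
J02, J13 = gen(0, 2), gen(1, 3)

plus = [K01 + K23, K03 + K12, J02 + J13]
minus = [K01 - K23, K03 - K12, J02 - J13]

def comm(A, B):
    return A @ B - B @ A

def closed(basis):
    # every commutator of basis elements must lie in span(basis)
    M = np.stack([b.ravel() for b in basis], axis=1)
    for A in basis:
        for B in basis:
            c = comm(A, B).ravel()
            coeffs, *_ = np.linalg.lstsq(M, c, rcond=None)
            if not np.allclose(M @ coeffs, c):
                return False
    return True

print("even set closes:", closed(plus))    # True
print("odd set closes:", closed(minus))    # True
print("the two sets commute:",
      all(np.allclose(comm(A, B), 0) for A in plus for B in minus))  # True
```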
While a double cover of the embedding spacetime with \(\mathcal{P}\)- and \(\mathcal{T}\)-symmetry does not possess a Cauchy surface, the double cover of the embedding spacetime with broken \(\mathcal{P}\)- and \(\mathcal{T}\)-symmetry, such as in the direct sum of four three-dimensional Minkowski spacetimes provided, _does_ possess a Cauchy surface - each projected Minkowski spacetime contains an infinite set of Cauchy surfaces in its direct sum decomposition corresponding to its space-like slices. This direct sum construction produces two non-overlapping copies of the AdS\({}_{3}\) embedded spacetimes without use of the \(\mathcal{P}\)- and \(\mathcal{T}\)-preserving hyperbolic restriction. As an aside, we note that this construction manifests many of the properties from the work it is inspired from [17], which showed that the full 2+2 embedding spacetime acts as a four-dimensional representation of the SL\((2,\mathbb{R})\) group, whereas the embedded AdS\({}_{3}\) spacetimes act as the infinite set of three-dimensional representations obtained by projecting down the four-dimensional representation. (States/stress-energy tensors act as the corresponding Lie algebra \(\mathfrak{sl}(2,\mathbb{R})\), and together these associations produce the complex octonion algebra - see Appendix B for more details.) The completeness of the isometry generators in each projected Minkowski space allows any state in the 2+2 embedding to be specified in terms of the tensor product of its projections. For instance, parametrizing the 1+2 spaces by \((x,t,x^{\prime})\) and \((x,t^{\prime},x^{\prime})\), and the 2+1 spaces by \((t,x,t^{\prime})\) and \((t,x^{\prime},t^{\prime})\), means that the Klein-Gordon Fourier modes for the 2+2 embedding of AdS\({}_{3}\) are given by \[\Phi(t,x,t^{\prime},x^{\prime})\equiv \tag{1}\] \[\phi_{+}(t,x,x^{\prime})\otimes\phi_{+}(t^{\prime},x,x^{\prime}) \otimes\phi_{-}(x,t,t^{\prime})\otimes\phi_{-}(x^{\prime},t,t^{\prime}),\] and \[\Pi(t,x,t^{\prime},x^{\prime})\equiv \tag{2}\] \[\pi_{+}(t,x,x^{\prime})\otimes\pi_{+}(t^{\prime},x,x^{\prime}) \otimes\pi_{-}(x,t,t^{\prime})\otimes\pi_{-}(x^{\prime},t,t^{\prime}),\] where the spaces are tensored together in the order that they were defined and \[\phi_{\pm}(\tau,\xi_{1},\xi_{2})=\int\frac{\mathrm{d}^{2}p}{(2\pi)^{2}}\frac{1}{ \sqrt{2\omega_{\mathbf{p}}}}\left(a_{\mathbf{p}}e^{\pm i\mathbf{p}\cdot\mathbf{\xi} }+a_{\mathbf{p}}^{\dagger}e^{\mp i\mathbf{p}\cdot\mathbf{\xi}}\right), \tag{3}\] \[\pi_{\pm}(\tau,\xi_{1},\xi_{2})=\int\frac{\mathrm{d}^{2}p}{(2\pi)^{2}}(\mp i) \sqrt{\frac{\omega_{\mathbf{p}}}{2}}\left(a_{\mathbf{p}}e^{\pm i\mathbf{p} \cdot\mathbf{\xi}}-a_{\mathbf{p}}^{\dagger}e^{\mp i\mathbf{p}\cdot\mathbf{\xi}}\right), \tag{4}\] for \(\omega_{\mathbf{p}}\) associated with the timelike coordinate \(\tau\) and \(\mathbf{\xi}\equiv(\xi_{1},\xi_{2})\). The isometries of the embedding \(2+2\) spacetime will preserve the positive and negative frequency modes of the tensor product of the four corresponding Minkowski vacuum states. Note that, in general, flat spacetimes with multiple timelike isometry generators do not possess a well-defined vacuum state since there is no reason to define frequencies with respect to any particular linear combination of the timelike generators. However, here we have argued that this is not the case for a particular four-dimensional flat spacetime with two timelike generators corresponding to our 2+2 embedding space. 
This spacetime is distinguished by its lack of \(\mathcal{P}\) and \(\mathcal{T}\) symmetry. Since the vacuum state we have defined is for an infinite set (\(R<0\)) of AdS\({}_{3}\) spacetimes, this suggests that its excitations can have preferred radii of curvature \(R\) and, therefore, non-stationary states can have dynamic \(R\) expectation values. In [17] it was shown that such evolution, where the \(R=0\) case is included, produces a two-dimensional renormalization group flow, and thus forward evolution between the critical points corresponding to zero and non-zero \(R\) is irreversible [21]. This suggests the existence of a preferred direction of evolution to this spacetime without \(\mathcal{P}\) and \(\mathcal{T}\) symmetry similar to a cosmological arrow of time [25; 26; 27]. The decomposition of the embedding 2+2 spacetime into tensor products of projected Minkowski spaces allows for the effects of the curved AdS\({}_{3}\) spacetimes with \(R<0\) to be independently determined from their properties on these projected flat spacetimes. In particular, examining the projections of the AdS\({}_{3}\) embeddings along their hyperbolic restrictions reveals that they correspond to curved trajectories in Minkowski spaces with acceleration proportional to \(l^{-2}\). As a result, the Unruh effect implies that states corresponding to vanishing \(R\) are pure while those with finite \(R\) are mixed states with temperature proportional to \(R\)[22; 23; 24]. Furthermore, the BTZ black hole spacetime is a discrete quotient of \(\mathrm{SL}(2,\mathbb{R})\)[28; 29; 30] such that it covers \(\mathrm{SL}(2,\mathbb{R})\) if closed timelike curves are permitted from timelike cylindrical coordinates at \(r^{2}<0\)[31]. In this manner, AdS\({}_{3}\) without \(\mathcal{P}\) and \(\mathcal{T}\) is locally isometric to such a BTZ black hole, which also has a temperature proportional to \(l^{-2}\)[32]. As a result, the state we have derived can be equivalently viewed as a consistent vacuum state for an infinite set of BTZ black holes at all temperatures. From this perspective, the preferred direction of evolution agrees with the evaporative evolution of BTZ black holes and captures the eternally radiating property of large thermodynamically stable BTZ black holes. The scale invariance in this formulation is responsible for most of its significant results. It can be equivalently described through the existence of the so-called holographic principle in this formulation: any lower dimensional AdS\({}_{3}\) manifold fully determines the properties of its larger dimensional embedding spacetime. It is also in this manner that the emergent property of the original algebraic formulation persists in this quantum field theory; the vacuum state is defined through a "macroscopic" embedding space, which is an uniform ensemble of "microscopic" embedded spacetimes. Indeed, it is due to these holographic and emergent properties that the isometry generators can produce the embedded spacetimes through their integral curves, and thus make the consistent formulation presented here possible. Notably, this also allows this formulation to be background-independent and for the radius of curvature \(R\) of the AdS\({}_{3}\) spacetime to become quantum state-dependent. Emergence and holography have long been considered to be important properties for producing a consistent quantum gravity theory. 
Their central role in the quantum field theory presented here supports this expectation and suggests that successfully extending this theory to other non-globally hyperbolic spacetimes will likely require their preservation.

-- L. K. Kovalsky thanks M. Sarovar, A. Dhumuntarao, and E. Knill for their helpful correspondence during this study. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) and Quantum Testbed Pathfinder programs. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
We show that the tensor product of four specific 1+2 Minkowski vacuum states becomes a self-consistent vacuum state for an infinite set of three-dimensional anti-de Sitter (AdS$_3$) spacetimes when their parity and time-reversal symmetry are broken in a particular way. The infinite set consists of pairs of all AdS$_3$ with non-zero unique scalar curvatures.
2309.03765
Equivariant Symmetries for Inertial Navigation Systems
This paper investigates the problem of inertial navigation system (INS) filter design through the lens of symmetry. The extended Kalman filter (EKF) and its variants, have been the staple of INS filtering for 50 years; however, recent advances in inertial navigation systems have exploited matrix Lie group structure to design stochastic filters and state observers that have been shown to display superior performance compared to classical solutions. In this work we consider the case where a vehicle has an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) receiver. We show that all the modern variants of the EKF for these sensors can be interpreted as the recently proposed Equivariant Filter (EqF) design methodology applied to different choices of symmetry group for the INS problem. This leads us to propose two new symmetries for the INS problem that have not been considered in the prior literature, and provide a discussion of the relative strengths and weaknesses of all the different algorithms. We believe the collection of symmetries that we present here capture all the sensible choices of symmetry for this problem and sensor suite, and that the analysis provided is indicative of the relative real-world performance potential of the different algorithms.
Alessandro Fornasier, Yixiao Ge, Pieter van Goor, Robert Mahony, Stephan Weiss
2023-09-07T15:13:28
http://arxiv.org/abs/2309.03765v2
# Equivariant Symmetries for Inertial Navigation Systems

###### Abstract This paper investigates the problem of inertial navigation system (INS) filter design through the lens of symmetry. The extended Kalman filter (EKF) and its variants have been the staple of INS filtering for 50 years; however, recent advances in inertial navigation systems have exploited matrix Lie group structure to design stochastic filters and state observers that have been shown to display superior performance compared to classical solutions. In this work, we consider the case where a vehicle has an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) receiver. We show that all the modern variants of the EKF for these sensors can be interpreted as the recently proposed Equivariant Filter (EqF) design methodology applied to different choices of symmetry group for the INS problem. This leads us to propose two new symmetries for the INS problem that have not been considered in the prior literature, and provide a discussion of the relative strengths and weaknesses of all the different algorithms. We believe the collection of symmetries that we present here capture all the sensible choices of symmetry for this problem and sensor suite, and that the analysis provided is indicative of the relative real-world performance potential of the different algorithms.

Keywords: Inertial navigation system; Symmetry; Equivariance; Equivariant filter.

## 1 Introduction

The theory of invariant filtering for group affine systems [3, 2] and the theory of equivariant filters [15, 19, 24] that generalizes to systems on homogeneous spaces have provided general design frameworks, as well as strong theoretical performance guarantees, for filter designs that exploit symmetry. This has motivated the widespread use of invariant filters in the robotics community, and their adoption for inertial navigation problems [10, 22, 14]. The application of these principles to inertial navigation systems (INS) has seen the most significant performance gains from algorithm design in this field for the last 40 years. There are now several competing modern INS filters based on geometric insights available in the literature [1, 3, 9] and the question of how to analyse and evaluate the similarities and differences is now of interest. A recent paper by Barrau et al. states _"The big question when it comes to invariant observers/filters is how do we find a group structure for the state space [...]"_[3]. The goal of the present paper is to convince the reader that the choice of symmetry structure is in fact the _only_ difference between different versions of modern geometric INS filters. In this paper we present six different symmetry groups that act on the state-space of the INS problem. We use the recent equivariant filter design methodology to generate INS filter algorithms for each of these symmetries. We show that the classical multiplicative extended Kalman filter (MEKF) [12], the Imperfect-invariant extended Kalman filter (IEKF) [1], the two-frame group invariant extended Kalman filter (TFG-IEKF) [3], and the authors' own recent work proposing an equivariant filter for the tangent group structure (TG-EqF) [9] are all associated with equivariant filter design [25] applied to different symmetry actions on the same state-space.
This leads us to consider the properties of the symmetry groups and suggests two additional symmetries, leading to filters that we term the Direct Position Equivariant Filter (DP-EqF) and the Semi-Direct Bias Equivariant Filter (SD-EqF), which are novel and do not correspond to prior algorithms in the literature. We derive equivariant filter (EqF) algorithms for all of the different symmetries, demonstrating that this approach provides a unifying analysis framework for modern INS filters. In doing this, we also make a minor contribution in demonstrating how fixed-frame measurements can be reformulated as body-frame relative measurements. This allows us to exploit output equivariance [25] for all the filter geometries, ensuring at least third-order linearization error in the output equations. We undertake a simple comparative study in concert with a linearization analysis of the error equations. Care should be taken in making general statements based on initial analysis; however, the key conclusions that we make are:

* The classical MEKF demonstrates noticeable performance limitations compared to the modern filters. In particular, it demonstrates worse transient response and reports significant overconfidence during the transient phase.
* The performance differences in modern filters are primarily visible during the transient phase of error response. The asymptotic behaviour of all filters is similar.
* Notwithstanding the above, the asymptotic performance of the TG-EqF appears superior to all other filters, demonstrating the best consistency and the lowest error.
* The TG-EqF filter is the only filter with exact linearization of the navigation states, with the nonlinearities shifted to the bias states. The authors believe that this property underlies its asymptotic performance advantage.
* The bias state transient response of the filters with semi-direct bias symmetry (TG-EqF, DP-EqF and SD-EqF) appears superior to that of filters without this geometric structure (MEKF, IEKF and TFG-IEKF).

The study concludes that any of the IEKF, TG-EqF, DP-EqF, and SD-EqF filters are candidates for a high-performance INS filter design. The lower filter error and energy properties of the TG-EqF recommend it as the primary choice, although the authors note that this filter also involves an over-parameterization of the state that is not present in the other filters.
## 2 Notation and Mathematical Preliminaries
In this paper, bold lowercase letters are used to indicate vector quantities. Bold capital letters are used to indicate matrices. Regular letters are used to indicate elements of a symmetry group. Frames of reference are denoted as \(\{A\}\) and \(\{B\}\). Vectors describing physical quantities expressed in a frame of reference \(\{A\}\) are denoted by \({}^{A}\mathbf{x}\). Rotation matrices encoding the orientation of a frame of reference \(\{B\}\) with respect to a reference \(\{A\}\) are denoted by \({}^{A}\mathbf{R}_{B}\); in particular, \({}^{A}\mathbf{R}_{B}\) expresses a vector \({}^{B}\mathbf{x}\) defined in the \(\{B\}\) frame of reference as a vector \({}^{A}\mathbf{x}={}^{A}\mathbf{R}_{B}\,{}^{B}\mathbf{x}\) expressed in the \(\{A\}\) frame of reference. Finally, \(\mathbf{I}_{n}\in\mathbb{R}^{n\times n}\) is the \(n\times n\) identity matrix, and \(\mathbf{0}_{n\times m}\in\mathbb{R}^{n\times m}\) is the \(n\times m\) zero matrix.
### Lie theory
A Lie group \(\mathbf{G}\) is a smooth manifold endowed with a smooth group structure.
For any \(X,Y\in\mathbf{G}\), the group multiplication is denoted \(XY\), the group inverse \(X^{-1}\) and the identity element \(I\). Given a Lie group \(\mathbf{G}\), \(\mathcal{G}\) denotes the \(\mathbf{G}\)-Torsor [16], which is defined as the set of elements of \(\mathbf{G}\) (the underlying manifold), but without the group structure. For a given Lie group \(\mathbf{G}\), the Lie algebra \(\mathfrak{g}\) can be modelled as a vector space corresponding to the tangent space at the identity of the group, together with a bilinear non-associative map \([\cdot,\cdot]:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}\) called the _Lie bracket_. For a matrix Lie group, the Lie bracket is equal to the matrix commutator: \[[\eta,\kappa]=\eta\kappa-\kappa\eta,\] for any \(\eta,\kappa\in\mathfrak{g}\subset\mathbb{R}^{n\times n}\). The Lie algebra \(\mathfrak{g}\) is isomorphic to a vector space \(\mathbb{R}^{n}\) of dimension \(n=\dim\left(\mathfrak{g}\right)\). Define the _wedge_ map and its inverse, the _vee_ map, as linear mappings between the vector space and the Lie algebra: \[\left(\cdot\right)^{\wedge}\,:\,\mathbb{R}^{n}\,\to\,\mathfrak{g},\qquad\left(\cdot\right)^{\vee}\,:\,\mathfrak{g}\,\to\,\mathbb{R}^{n}.\] For any \(X,Y\in\mathbf{G}\), define the left and right translation \[\begin{array}{lcl}\mathrm{L}_{X}&:&\mathbf{G}\,\to&\mathbf{G},&\mathrm{L}_{X}\left(Y\right)=XY,\\ \mathrm{R}_{X}&:&\mathbf{G}\,\to&\mathbf{G},&\mathrm{R}_{X}\left(Y\right)=YX.\end{array}\] In this paper, we will limit our consideration to matrix Lie-groups and products of matrix Lie-groups, although the results are expected to hold on a larger class of groups. The Adjoint map for the group \(\mathbf{G}\), \(\mathrm{Ad}_{X}\,:\,\mathfrak{g}\,\to\,\mathfrak{g}\), is defined by \[\mathrm{Ad}_{X}\left[\mathbf{u}^{\wedge}\right]=\mathrm{dL}_{X}\mathrm{dR}_{X^{-1}}\left[\mathbf{u}^{\wedge}\right],\] for every \(X\in\mathbf{G}\) and \(\mathbf{u}^{\wedge}\in\mathfrak{g}\), where \(\mathrm{dL}_{X}\) and \(\mathrm{dR}_{X}\) denote the differentials of the left and right translations, respectively. Given particular wedge and vee maps, the Adjoint matrix is defined as the map \(\mathbf{Ad}_{X}^{\vee}\,:\,\mathbb{R}^{n}\,\rightarrow\,\mathbb{R}^{n}\) \[\mathbf{Ad}_{X}^{\vee}\boldsymbol{u}=\left(\mathrm{Ad}_{X}\left[\boldsymbol{u}^{\wedge}\right]\right)^{\vee}.\] Note that for general groups constructed of products of matrix Lie groups, the Adjoint map cannot be directly expressed in the form \(X\boldsymbol{u}^{\wedge}X^{-1}\). In addition to the Adjoint map for the group \(\mathbf{G}\), the adjoint map for the Lie algebra \(\mathfrak{g}\) can be defined as the differential at the identity of the Adjoint map for the group \(\mathbf{G}\). The adjoint map for the Lie algebra \(\mathrm{ad}_{\boldsymbol{u}^{\wedge}}\,:\,\mathfrak{g}\,\rightarrow\,\mathfrak{g}\) is given by \[\mathrm{ad}_{\boldsymbol{u}^{\wedge}}\left[\boldsymbol{v}^{\wedge}\right]=\left[\boldsymbol{u}^{\wedge},\boldsymbol{v}^{\wedge}\right],\] and is equivalent to the Lie bracket.
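To make the wedge, vee and Adjoint notation concrete, the following is a minimal numerical sketch for the special case \(\mathbf{G}=\mathbf{SO}(3)\), where the Adjoint matrix reduces to the rotation matrix itself. It is illustrative only and not code from the paper; the function names are our own.

```python
import numpy as np
from scipy.linalg import expm

def wedge_so3(u):
    """Wedge map R^3 -> so(3): returns the skew-symmetric matrix u^."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def vee_so3(U):
    """Vee map so(3) -> R^3: inverse of the wedge map."""
    return np.array([U[2, 1], U[0, 2], U[1, 0]])

def Ad_so3(R):
    """Adjoint matrix of SO(3): Ad_R^vee u = (R u^ R^T)^vee = R u."""
    return R

rng = np.random.default_rng(0)
R = expm(wedge_so3(rng.normal(size=3)))   # a random rotation via the exponential map
u = rng.normal(size=3)

# Numerical check that Ad_R[u^] = (Ad_R^vee u)^.
print(np.allclose(R @ wedge_so3(u) @ R.T, wedge_so3(Ad_so3(R) @ u)))  # True
```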
Given particular wedge and vee maps, we define the adjoint matrix \(\mathbf{ad}_{\boldsymbol{u}}^{\vee}\,:\,\mathbb{R}^{n}\,\rightarrow\,\mathbb{R}^{n}\) to be \[\mathbf{ad}_{\boldsymbol{u}}^{\vee}\boldsymbol{v}=\left(\boldsymbol{u}^{\wedge}\boldsymbol{v}^{\wedge}-\boldsymbol{v}^{\wedge}\boldsymbol{u}^{\wedge}\right)^{\vee}=\left[\boldsymbol{u}^{\wedge},\boldsymbol{v}^{\wedge}\right]^{\vee}.\] Note that for a group constructed of products of matrix Lie groups, the adjoint map is not the matrix commutator. For all \(\boldsymbol{u},\boldsymbol{v}\in\mathbb{R}^{n}\) and \(X\in\mathbf{G}\), the two adjoints commute: \[\mathrm{Ad}_{X}\left[\mathrm{ad}_{\boldsymbol{u}^{\wedge}}\left[\boldsymbol{v}^{\wedge}\right]\right]=\mathrm{ad}_{(\mathrm{Ad}_{X}\left[\boldsymbol{u}^{\wedge}\right])}\left[\mathrm{Ad}_{X}\left[\boldsymbol{v}^{\wedge}\right]\right],\] \[\mathbf{Ad}_{X}^{\vee}\mathbf{ad}_{\boldsymbol{u}}^{\vee}\boldsymbol{v}=\mathbf{ad}_{\mathbf{Ad}_{X}^{\vee}\boldsymbol{u}}^{\vee}\,\mathbf{Ad}_{X}^{\vee}\boldsymbol{v}.\] A semi-direct product group \(\mathbf{G}\ltimes\mathbf{H}\) can be seen as a generalization of the direct product group \(\mathbf{G}\times\mathbf{H}\), where the underlying set is given by the cartesian product of two groups \(\mathbf{G}\) and \(\mathbf{H}\). Contrary to the direct product, in the semi-direct product the group multiplication is defined via an action of the group \(\mathbf{G}\) on the group \(\mathbf{H}\) by group automorphisms. In this work, we will consider a semi-direct product symmetry group [21, 19, 20] \(\mathbf{G}\ltimes\mathfrak{g}\), where \(\mathfrak{g}\) is the Lie algebra of \(\mathbf{G}\) (or a subalgebra thereof), that is, a vector space group under addition. Let \(A,B\in\mathbf{G}\) and \(a,b\in\mathfrak{g}\) and define \(X=(A,a)\) and \(Y=(B,b)\) elements of the symmetry group \(\mathbf{G}\ltimes\mathfrak{g}\). Group multiplication is defined to be the semi-direct product \(YX=(BA,b+\mathrm{Ad}_{B}\left[a\right])\). The inverse element is \(X^{-1}=\left(A^{-1},-\mathrm{Ad}_{A^{-1}}\left[a\right]\right)\) and the identity element is \((I,0)\).
### Lie group action and homogeneous space
A right group action of a Lie group \(\mathbf{G}\) on a differentiable manifold \(\mathcal{M}\) is a smooth map \(\phi:\mathbf{G}\times\mathcal{M}\rightarrow\mathcal{M}\) that satisfies \[\phi\left(I,\xi\right)=\xi,\qquad\phi\left(X,\phi\left(Y,\xi\right)\right)=\phi\left(YX,\xi\right),\] for any \(X,Y\in\mathbf{G}\) and \(\xi\in\mathcal{M}\). A right group action \(\phi\) induces a family of diffeomorphisms \(\phi_{X}\,:\,\mathcal{M}\,\rightarrow\,\mathcal{M}\) and smooth projections \(\phi_{\xi}\,:\,\mathbf{G}\,\rightarrow\,\mathcal{M}\). The group action \(\phi\) is said to be transitive if the induced projection \(\phi_{\xi}\) is surjective. In this case, \(\mathcal{M}\) is a homogeneous space of \(\mathbf{G}\).
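As an illustration of the semi-direct product rule \(YX=(BA,\,b+\mathrm{Ad}_{B}[a])\) and its inverse, here is a small sketch for the concrete case \(\mathbf{G}=\mathbf{SO}(3)\) with \(\mathfrak{g}\cong\mathbb{R}^{3}\), where the Adjoint acts as an ordinary rotation of the vector part. The function names are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def wedge(u):
    return np.array([[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]])

def Ad(R, a):
    # For SO(3) acting on so(3) ~ R^3, the Adjoint is ordinary rotation of the vector.
    return R @ a

def compose(Y, X):
    """Semi-direct product: for X = (A, a), Y = (B, b), Y X = (B A, b + Ad_B[a])."""
    A, a = X
    B, b = Y
    return (B @ A, b + Ad(B, a))

def inverse(X):
    """Inverse element X^{-1} = (A^{-1}, -Ad_{A^{-1}}[a])."""
    A, a = X
    return (A.T, -Ad(A.T, a))

rng = np.random.default_rng(1)
X = (expm(wedge(rng.normal(size=3))), rng.normal(size=3))

# X^{-1} X should be the identity element (I, 0).
AI, aI = compose(inverse(X), X)
print(np.allclose(AI, np.eye(3)), np.allclose(aI, np.zeros(3)))  # True True
```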
### Useful maps
For all \(\boldsymbol{x}\in\mathbb{R}^{n}\) define the maps: \[\overline{\left(\cdot\right)}\,:\,\mathbb{R}^{n}\,\rightarrow\,\mathbb{R}^{n+3},\qquad\boldsymbol{x}\mapsto\overline{\boldsymbol{x}}=\left(\mathbf{0}_{3\times 1},\boldsymbol{x}\right),\] \[\underline{\left(\cdot\right)}\,:\,\mathbb{R}^{n}\,\rightarrow\,\mathbb{R}^{n+3},\qquad\boldsymbol{x}\mapsto\underline{\boldsymbol{x}}=\left(\boldsymbol{x},\mathbf{0}_{3\times 1}\right).\] For all \(X=(A,a,b)\in\mathbf{SE}_{2}(3)\mid A\in\mathbf{SO}(3),a,b\in\mathbb{R}^{3}\), define the map: \[\Omega\left(\cdot\right)\,:\,\mathbf{SE}_{2}(3)\,\rightarrow\,\mathfrak{se}_{2}(3),\quad\Omega\left(X\right)=\left(\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1},a\right)^{\wedge}\in\mathfrak{se}_{2}(3).\] For all \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c}\in\mathbb{R}^{3}\mid(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c})\in\mathbb{R}^{9}\), define the map: \[\Pi\left(\cdot\right)\,:\,\mathfrak{se}_{2}(3)\,\rightarrow\,\mathfrak{se}(3),\quad\Pi\left(\left(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c}\right)^{\wedge}\right)=\left(\boldsymbol{a},\boldsymbol{b}\right)^{\wedge}\in\mathfrak{se}(3).\]
## 3 The Biased Inertial Navigation Problem
Consider a mobile robot equipped with an IMU providing angular velocity and acceleration measurements, as well as other sensors providing partial direct or indirect state measurements (e.g. a GNSS receiver providing position measurements or a magnetometer providing direction measurements). Let \(\{G\}\) denote the global inertial frame of reference and \(\{I\}\) denote the IMU frame of reference. Under the non-rotating, flat-earth assumption, the deterministic (noise-free) continuous-time biased inertial navigation system is \[{}^{G}\dot{\mathbf{R}}_{I}={}^{G}\mathbf{R}_{I}\left({}^{I}\boldsymbol{\omega}-{}^{I}\boldsymbol{b}_{\boldsymbol{\omega}}\right)^{\wedge}, \tag{1a}\] \[{}^{G}\dot{\boldsymbol{v}}_{I}={}^{G}\mathbf{R}_{I}\left({}^{I}\boldsymbol{a}-{}^{I}\boldsymbol{b}_{\boldsymbol{a}}\right)+{}^{G}\boldsymbol{g},\] (1b) \[{}^{G}\dot{\boldsymbol{p}}_{I}={}^{G}\boldsymbol{v}_{I},\] (1c) \[{}^{I}\dot{\boldsymbol{b}}_{\boldsymbol{\omega}}={}^{I}\boldsymbol{\tau}_{\boldsymbol{\omega}},\] (1d) \[{}^{I}\dot{\boldsymbol{b}}_{\boldsymbol{a}}={}^{I}\boldsymbol{\tau}_{a}. \tag{1e}\] Here, \({}^{G}\mathbf{R}_{I}\) denotes the rigid body orientation, and \({}^{G}\boldsymbol{p}_{I}\) and \({}^{G}\boldsymbol{v}_{I}\) denote the rigid body position and velocity expressed in the \(\{G\}\) frame, respectively. These variables are termed the _navigation states_. The gravity vector \({}^{G}\boldsymbol{g}\) is expressed in frame \(\{G\}\). The gyroscope measurement and accelerometer measurement are written \({}^{I}\boldsymbol{\omega}\) and \({}^{I}\boldsymbol{a}\), respectively. The two biases \({}^{I}\boldsymbol{b}_{\boldsymbol{\omega}}\) and \({}^{I}\boldsymbol{b}_{\boldsymbol{a}}\) are termed the _bias states_. The inputs \(\boldsymbol{\tau}_{\boldsymbol{\omega}}\), \(\boldsymbol{\tau}_{\boldsymbol{a}}\) are used to model the biases' dynamics, and are zero when the biases are modeled as constant quantities. The state space is \(\mathcal{M}=\mathcal{SO}(3)\times\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}\), where the 4 copies of \(\mathbb{R}^{3}\) model velocity, position, and angular and acceleration bias, respectively, and \(\mathcal{SO}(3)\) is the \(\mathbf{SO}(3)\)-torsor with rotation matrices representing coordinates of orientation rather than physical rotation of space.
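For readers who want to experiment with the kinematics in Equ. (1), the following is a rough forward-integration sketch (simple Euler steps with the rotation integrated on \(\mathbf{SO}(3)\), constant biases, and an assumed gravity magnitude of \(9.81\,\mathrm{m/s^{2}}\)); it is not the filter implementation used in the paper.

```python
import numpy as np
from scipy.linalg import expm

def wedge(u):
    return np.array([[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]])

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed gravity vector expressed in {G}

def propagate(state, omega, acc, dt):
    """One Euler step of the biased INS dynamics (1a)-(1e), biases held constant (tau = 0)."""
    R, v, p, b_w, b_a = state
    R_next = R @ expm(wedge((omega - b_w) * dt))   # Eq. (1a) integrated on SO(3)
    v_next = v + (R @ (acc - b_a) + GRAVITY) * dt  # Eq. (1b)
    p_next = p + v * dt                            # Eq. (1c)
    return (R_next, v_next, p_next, b_w, b_a)

# Example: one second of constant IMU readings at 200 Hz (hovering-like input).
state = (np.eye(3), np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(3))
for _ in range(200):
    state = propagate(state,
                      omega=np.array([0.0, 0.0, 0.1]),
                      acc=np.array([0.0, 0.0, 9.81]),
                      dt=1.0 / 200.0)
print(state[2])  # final position stays close to the origin
```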
Note that the state space itself is not a Lie-group in the EqF formulation. Rather, symmetry is modeled as a group action on \(\mathcal{M}\), allowing us to consider different symmetries acting on the same INS state. We write an element of the state space and an element of the input space, respectively, as \[\xi=\left({}^{G}\mathbf{R}_{I},{}^{G}\mathbf{v}_{I},{}^{G}\mathbf{p}_{I},{}^{I}\mathbf{b}_{\mathbf{\omega}},{}^{I}\mathbf{b}_{\mathbf{a}}\right)\in\mathcal{M}, \tag{2}\] \[u=\left({}^{I}\mathbf{\omega},{}^{I}\mathbf{a},{}^{I}\mathbf{\tau}_{\mathbf{\omega}},{}^{I}\mathbf{\tau}_{\mathbf{a}}\right)\in\mathbb{L}\subset\mathbb{R}^{12}. \tag{3}\] For the sake of clarity of the presentation, in the following sections, we drop subscripts and superscripts from state, input and output variables, and adopt the lean notation defined in Table 1.
## 4 INS Symmetries
In the following subsections, we analyze different symmetries of the biased inertial navigation system through the lens of equivariance; that is, we show how those symmetries relate to classical filter design when exploited within the equivariant filter framework, and how every filter is an EqF under an appropriate choice of symmetry. Starting with Tab. 2, we show the relation between INS filters and symmetry groups, as well as the differences in the state error linearization of filters built upon those symmetries. In Sec. 4.1, 4.2 and 4.3, we discuss the symmetry groups that lead to the design of equivariant filters equivalent to the widely-known MEKF, IEKF, and the recently published TFG-IEKF. In Sec. 4.4, we briefly recall the tangent group recently introduced and exploited for INS in our prior work [9, 8]. In Sec. 4.5 and 4.6, we introduce two new symmetry groups for biased inertial navigation systems. These groups are based on the semi-direct product and aim to address the over-parametrization of bias states introduced in our prior work [9].
### Special Orthogonal group \(\mathbf{G_{O}:SO(3)\times\mathbb{R}^{12}}\)
Lie group theory was first applied to navigation systems to overcome the limitations and singularities of using Euler angles as the parameterization of the attitude of a rigid body. Originally formulated on the quaternion group, the modern approach directly models attitude on the Special Orthogonal group \(\mathbf{SO}(3)\). Define the symmetry group \(\mathbf{G_{O}\coloneqq SO(3)\times\mathbb{R}^{12}}\), and let \(X=(A,a,b,\alpha,\beta)\in\mathbf{G_{O}}\), where \(A\in\mathbf{SO}(3)\), \(a,b,\alpha,\beta\in\mathbb{R}^{3}\). Let \(X=(A_{X},a_{X},b_{X},\alpha_{X},\beta_{X})\), \(Y=(A_{Y},a_{Y},b_{Y},\alpha_{Y},\beta_{Y})\) be two elements of the symmetry group, then the group product is written \(XY=(A_{X}A_{Y},a_{X}+a_{Y},b_{X}+b_{Y},\alpha_{X}+\alpha_{Y},\beta_{X}+\beta_{Y})\). The inverse of an element \(X\) is given by \(X^{-1}=(A^{\top},-a,-b,-\alpha,-\beta)\). **Lemma 1**: _Define \(\phi\,:\,\mathbf{G_{O}}\times\mathcal{M}\,\rightarrow\,\mathcal{M}\) as_ \[\phi\,(X,\xi)\coloneqq(\mathbf{R}A,\mathbf{v}+a,\mathbf{p}+b,\mathbf{b}_{\mathbf{\omega}}+\alpha,\mathbf{b}_{a}+\beta)\in\mathcal{M}. \tag{4}\] _Then, \(\phi\) is a transitive right group action of \(\mathbf{G_{O}}\) on \(\mathcal{M}\)._ The existence of a transitive group action of the symmetry group \(\mathbf{G_{O}}\) on the state space \(\mathcal{M}\) guarantees the existence of a lift [15].
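The group action of Lemma 1 is simple enough to verify numerically. The sketch below implements \(\phi\) for \(\mathbf{G}_{\mathbf{O}}\) together with the direct-product group multiplication and checks the right-action property \(\phi(X,\phi(Y,\xi))=\phi(YX,\xi)\); names and structure are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.linalg import expm

def wedge(u):
    return np.array([[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]])

def phi_GO(X, xi):
    """Lemma 1 action of G_O = SO(3) x R^12 on M:
    phi(X, xi) = (R A, v + a, p + b, b_w + alpha, b_a + beta)."""
    A, a, b, alpha, beta = X
    R, v, p, b_w, b_a = xi
    return (R @ A, v + a, p + b, b_w + alpha, b_a + beta)

def compose_GO(X, Y):
    """Direct-product multiplication of G_O (rotations multiply, vector parts add)."""
    return (X[0] @ Y[0],) + tuple(x + y for x, y in zip(X[1:], Y[1:]))

rng = np.random.default_rng(2)
rot = lambda: expm(wedge(rng.normal(size=3)))
vec = lambda: rng.normal(size=3)
X = (rot(), vec(), vec(), vec(), vec())
Y = (rot(), vec(), vec(), vec(), vec())
xi = (rot(), vec(), vec(), vec(), vec())

# Right-action property: phi(X, phi(Y, xi)) == phi(Y X, xi).
lhs = phi_GO(X, phi_GO(Y, xi))
rhs = phi_GO(compose_GO(Y, X), xi)
print(all(np.allclose(l, r) for l, r in zip(lhs, rhs)))  # True
```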
**Theorem 2**: _Define the map \(\Lambda\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{g_{O}}\) by_ \[\Lambda\,(\xi,u)\coloneqq\left(\Lambda_{1}\,(\xi,u)\,,\cdots,\Lambda_{5}\,(\xi,u)\right),\] _where \(\Lambda_{1}\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{so}(3)\), and \(\Lambda_{2},\cdots,\Lambda_{5}\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathbb{R}^{3}\) are given by_ \[\Lambda_{1}\,(\xi,u)\coloneqq\left({}^{I}\mathbf{\omega}-{}^{I}\mathbf{b}_{\mathbf{\omega}}\right)^{\wedge}, \tag{5}\] \[\Lambda_{2}\,(\xi,u)\coloneqq{}^{G}\mathbf{R}_{I}\left({}^{I}\mathbf{a}-{}^{I}\mathbf{b}_{\mathbf{a}}\right)+{}^{G}\mathbf{g},\] (6) \[\Lambda_{3}\,(\xi,u)\coloneqq{}^{G}\mathbf{v}_{I},\] (7) \[\Lambda_{4}\,(\xi,u)\coloneqq{}^{I}\mathbf{\tau}_{\mathbf{\omega}},\] (8) \[\Lambda_{5}\,(\xi,u)\coloneqq{}^{I}\mathbf{\tau}_{a}. \tag{9}\] _Then, \(\Lambda\) is a lift for the system in Equ. (1) with respect to the symmetry group \(\mathbf{G_{O}\coloneqq SO(3)\times\mathbb{R}^{12}}\)._ In the appendix, it is shown that an EqF designed using this symmetry results in the well-known MEKF [12].
### Extended Special Euclidean group \(\mathbf{G_{ES}:SE_{2}(3)\times\mathbb{R}^{6}}\)
Using the extended pose \(\mathbf{SE}_{2}(3)\) group to model the navigation states of the INS problem is one of the major developments in INS filtering in the last 10 years. Define \(\xi=(\mathbf{T},\mathbf{b})\in\mathcal{M}\coloneqq\mathcal{SE}_{2}(3)\times\mathbb{R}^{6}\) to be the state space of the system. \(\mathbf{T}=(\mathbf{R},\mathbf{v},\mathbf{p})\in\mathcal{SE}_{2}(3)\) is the extended pose [5], which includes the orientation, the velocity and the position of the rigid body, whereas \(\mathbf{b}=(\mathbf{b}_{\mathbf{\omega}},\mathbf{b}_{\mathbf{a}})\in\mathbb{R}^{6}\) denotes the IMU biases. Let \(u=(\mathbf{w},\mathbf{\tau})\in\mathbb{L}\subseteq\mathbb{R}^{12}\) denote the system input, where \(\mathbf{w}=(\mathbf{\omega},\mathbf{a})\in\mathbb{R}^{6}\) denotes the input given by the IMU readings, and \(\mathbf{\tau}=(\mathbf{\tau}_{\mathbf{\omega}},\mathbf{\tau}_{\mathbf{a}})\in\mathbb{R}^{6}\) denotes the input for the IMU biases.
\begin{table} \begin{tabular}{c c c} \hline \hline Description & Descriptive notation & Lean notation \\ \hline Rigid body orientation & \({}^{G}\mathbf{R}_{I}\) & \(\mathbf{R}\) \\ Rigid body velocity & \({}^{G}\mathbf{v}_{I}\) & \(\mathbf{v}\) \\ Rigid body position & \({}^{G}\mathbf{p}_{I}\) & \(\mathbf{p}\) \\ Angular velocity measurement & \({}^{I}\mathbf{\omega}\) & \(\mathbf{\omega}\) \\ Gyroscope bias & \({}^{I}\mathbf{b}_{\mathbf{\omega}}\) & \(\mathbf{b}_{\mathbf{\omega}}\) \\ Acceleration measurement & \({}^{I}\mathbf{a}\) & \(\mathbf{a}\) \\ Accelerometer bias & \({}^{I}\mathbf{b}_{\mathbf{a}}\) & \(\mathbf{b}_{\mathbf{a}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive-Lean Notation Conversion Table.
Define the matrices \[\mathbf{G}=(\overline{\mathbf{g}})^{\wedge}\in\mathfrak{se}_{2}(3),\] \[\mathbf{B}=(\underline{\mathbf{b}})^{\wedge}\in\mathfrak{se}_{2}(3),\qquad\mathbf{D}=\begin{bmatrix}\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 1}&\mathbf{0}_{3\times 1}\\ \mathbf{0}_{1\times 3}&0&1\\ \mathbf{0}_{1\times 3}&0&0\end{bmatrix}\in\mathbb{R}^{5\times 5},\] \[\mathbf{W}=(\underline{\mathbf{w}})^{\wedge}\in\mathfrak{se}_{2}(3).\] Then, the system in Equ.
(1) may then be written as \[\dot{\mathbf{T}}=\mathbf{T}\left(\mathbf{W}-\mathbf{B}+\mathbf{D}\right)+\left(\mathbf{G}-\mathbf{D}\right)\mathbf{T}, \tag{10a}\] \[\dot{\mathbf{b}}=\mathbf{\tau}. \tag{10b}\] Define the symmetry group \(\mathbf{G}_{\mathbf{ES}}\coloneqq\mathbf{SE}_{2}(3)\times\mathbb{R}^{6}\), and let \(X=(C,\gamma)\in\mathbf{G}_{\mathbf{ES}}\), where \(C=(A,a,b)\in\mathbf{SE}_{2}(3)\), \(A\in\mathbf{SO}(3)\), \(a,b\in\mathbb{R}^{3}\), \(\gamma\in\mathbb{R}^{6}\). Let \(X=(C_{X},\gamma_{X})\), \(Y=(C_{Y},\gamma_{Y})\) be two elements of the symmetry group, then the group product is written \(XY=(C_{X}C_{Y},\gamma_{X}+\gamma_{Y})\). The inverse of an element \(X\) is given by \(X^{-1}=\left(C^{-1},-\gamma\right)\). **Lemma 3**: _Define \(\phi\,:\,\mathbf{G}_{\mathbf{ES}}\times\mathcal{M}\,\rightarrow\,\mathcal{M}\) as_ \[\phi\left(X,\xi\right)=(\mathbf{T}C,\mathbf{b}+\gamma)\in\mathcal{M}. \tag{11}\] _Then, \(\phi\) is a transitive right group action of \(\mathbf{G}_{\mathbf{ES}}\) on \(\mathcal{M}\)._
\begin{table} \begin{tabular}{c c c} \hline Filter & Symmetry group & State error linearization \(\mathbf{A}\): \(\dot{\boldsymbol{\varepsilon}}\simeq\mathbf{A}\,\boldsymbol{\varepsilon}\) \\ \hline MEKF [12] & Special Orthogonal group \(\mathbf{G}_{\mathbf{O}}:\mathbf{SO}(3)\times\mathbb{R}^{12}\) & State-dependent attitude error dynamics. Linear state-dependent and input-dependent velocity error dynamics. Linear time-invariant position error dynamics. Linear time-invariant bias error dynamics. \\ Imperfect-IEKF [1] & Extended Special Euclidean group \(\mathbf{G}_{\mathbf{ES}}:\mathbf{SE}_{2}(3)\times\mathbb{R}^{6}\) & State-dependent attitude, velocity and position error dynamics through the coupling with the bias errors. Linear time-invariant bias error dynamics. \\ TFG-IEKF [3] & Two-Frame group \(\mathbf{G}_{\mathbf{TF}}:\mathbf{SO}(3)\ltimes(\mathbb{R}^{6}\oplus\mathbb{R}^{6})\) & State-dependent velocity and position error dynamics. State-dependent and input-dependent bias error dynamics. \\ TG-EqF [9] & Tangent group \(\mathbf{G}_{\mathbf{TG}}:\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}_{2}(3)\) & Linear time-invariant attitude, velocity and position error dynamics. State-dependent and input-dependent bias error dynamics. \\ DP-EqF & Direct Position group \(\mathbf{G}_{\mathbf{DP}}:\mathbf{HG}(3)\ltimes\mathfrak{hg}(3)\times\mathbb{R}^{3}\) & Linear time-invariant attitude and velocity error dynamics. State-dependent and input-dependent position and bias error dynamics. \\ SD-EqF & Semi-Direct Bias group \(\mathbf{G}_{\mathbf{SD}}:\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}(3)\) & Linear time-invariant attitude and velocity error dynamics. State-dependent position error dynamics. State-dependent and input-dependent bias error dynamics. \\ \hline \end{tabular} \end{table} Table 2: Qualitative overview of the differences in the presented symmetries when exploited for filter design.
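As a companion to Lemma 3, the following sketch builds the \(5\times 5\) matrix representation of an extended pose \(\mathbf{T}=(\mathbf{R},\boldsymbol{v},\boldsymbol{p})\in\mathbf{SE}_{2}(3)\) and applies the \(\mathbf{G}_{\mathbf{ES}}\) action \(\phi(X,\xi)=(\mathbf{T}C,\boldsymbol{b}+\gamma)\); the helper names are ours and the snippet is only an illustration of the group structure.

```python
import numpy as np
from scipy.linalg import expm

def wedge(u):
    return np.array([[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]])

def se23(R, v, p):
    """5x5 matrix representation of an extended pose (R, v, p) in SE_2(3)."""
    T = np.eye(5)
    T[:3, :3] = R
    T[:3, 3] = v
    T[:3, 4] = p
    return T

def phi_GES(X, xi):
    """Lemma 3 action of G_ES = SE_2(3) x R^6: phi(X, xi) = (T C, b + gamma)."""
    C, gamma = X
    T, b = xi
    return (T @ C, b + gamma)

rng = np.random.default_rng(3)
rand_pose = lambda: se23(expm(wedge(rng.normal(size=3))), rng.normal(size=3), rng.normal(size=3))
xi = (rand_pose(), rng.normal(size=6))   # state: extended pose + IMU biases
X = (rand_pose(), rng.normal(size=6))    # group element: SE_2(3) part + bias shift
T_new, b_new = phi_GES(X, xi)
print(np.allclose(T_new[:3, :3] @ T_new[:3, :3].T, np.eye(3)))  # rotation block stays orthogonal
```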
**Theorem 4**: _Define the map \(\Lambda:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{g}_{\mathbf{ES}}\) by_ \[\Lambda\left(\xi,u\right)\coloneqq\left(\Lambda_{1}\left(\xi,u\right),\Lambda_{2}\left(\xi,u\right)\right),\] _where \(\Lambda_{1}:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{se}_{2}(3)\), and \(\Lambda_{2}:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathbb{R}^{6}\) are given by_ \[\Lambda_{1}\left(\xi,u\right)\coloneqq\left(\mathbf{W}-\mathbf{B}+\mathbf{D}\right)+\mathbf{T}^{-1}\left(\mathbf{G}-\mathbf{D}\right)\mathbf{T}, \tag{12}\] \[\Lambda_{2}\left(\xi,u\right)\coloneqq\boldsymbol{\tau}. \tag{13}\] _Then, \(\Lambda\) is a lift for the system in Equ. (10) with respect to the symmetry group \(\mathbf{G}_{\mathbf{ES}}\coloneqq\mathbf{SE}_{2}(3)\times\mathbb{R}^{6}\)._ Applying the EqF filter design methodology to this symmetry leads to the Imperfect-IEKF [1, 2]. Note that ignoring the bias and considering only the navigation states yields the original IEKF filter [1]. The imperfect term comes from breaking the group-affine symmetry of the navigation states by adding the direct product terms for the bias.
### Two-Frame group \(\mathbf{G}_{\mathbf{TF}}:\,\mathbf{SO}(3)\ltimes(\mathbb{R}^{6}\oplus\mathbb{R}^{6})\)
The recently published two-frame group invariant extended Kalman filter [3] is one approach to address the theoretical issue in the imperfect IEKF for INS where the bias terms are not part of the symmetry structure. Consider the system in Equ. (10). Define the symmetry group \(\mathbf{G}_{\mathbf{TF}}\coloneqq\mathbf{SO}(3)\ltimes(\mathbb{R}^{6}\oplus\mathbb{R}^{6})\), where \(\mathbf{SO}(3)\) acts on two vector spaces of \(6\) dimensions, each defined with respect to a different frame of reference. Let \(X=\left(C,\gamma\right)\in\mathbf{G}_{\mathbf{TF}}\), with \(C=\left(A,\left(a,b\right)\right)\in\mathbf{SE}_{2}(3)\coloneqq\mathbf{SO}(3)\ltimes\mathbb{R}^{6}\) such that \(A\in\mathbf{SO}(3)\), \(\left(a,b\right)\in\mathbb{R}^{6}\). Let \(\ast\,:\,\mathbf{SO}(3)\times\mathbb{R}^{3N}\rightarrow\,\mathbb{R}^{3N}\) be the rotation term introduced in [3], such that \(\forall\,A\in\mathbf{SO}(3)\) and \(x=\left(x_{1},\cdots,x_{N}\right)\in\mathbb{R}^{3N}\), \(A\ast x=\left(Ax_{1},\cdots,Ax_{N}\right)\). Define the group product \(XY=\left(C_{X}C_{Y},\gamma_{X}+A_{X}\ast\gamma_{Y}\right)\). The inverse element of the symmetry group is written \(X^{-1}=\left(C^{-1},-A^{T}\ast\gamma\right)\in\mathbf{G}_{\mathbf{TF}}\). **Lemma 5**: _Define \(\phi:\,\mathbf{G}_{\mathbf{TF}}\times\mathcal{M}\,\rightarrow\,\mathcal{M}\) as_ \[\phi\left(X,\xi\right)\coloneqq\left(\mathbf{T}C,A^{T}\ast\left(\boldsymbol{b}-\gamma\right)\right)\in\mathcal{M}. \tag{14}\] _Then, \(\phi\) is a transitive right group action of \(\mathbf{G}_{\mathbf{TF}}\) on \(\mathcal{M}\)._ **Theorem 6**: _Define \(\Lambda_{1}:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{se}_{2}(3)\) as_ \[\Lambda_{1}\left(\xi,u\right)\coloneqq\left(\mathbf{W}-\mathbf{B}+\mathbf{D}\right)+\mathbf{T}^{-1}\left(\mathbf{G}-\mathbf{D}\right)\mathbf{T}, \tag{15}\] _define \(\Lambda_{2}:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathbb{R}^{6}\) as_ \[\Lambda_{2}\left(\xi,u\right)=\left(\boldsymbol{b}_{\omega}^{\wedge}\left(\boldsymbol{\omega}-\boldsymbol{b}_{\omega}\right)-\boldsymbol{\tau}_{\omega},\,\,\boldsymbol{b}_{\boldsymbol{a}}^{\wedge}\left(\boldsymbol{\omega}-\boldsymbol{b}_{\omega}\right)-\boldsymbol{\tau}_{a}\right).
\tag{16}\] _Then, the map \(\Lambda\left(\xi,u\right)=\left(\Lambda_{1}\left(\xi,u\right),\Lambda_{2}\left(\xi,u\right)\right)\) is a lift for the system in Equ. (10) with respect to the symmetry group \(\mathbf{G}_{\mathbf{TF}}\coloneqq\mathbf{SO}(3)\ltimes(\mathbb{R}^{6}\oplus\mathbb{R}^{6})\)._ In the appendix, it is shown that designing an EqF based on this symmetry leads to the recently published TFG-IEKF [3].
### Tangent group \(\mathbf{G}_{\mathbf{TG}}:\,\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}_{2}(3)\)
Recent work [21, 20] considered symmetries and EqF filter design on the tangent group \(\mathbf{TG}\) of a general Lie-group. Since bias states are closely related to velocities, these ideas can easily be extended to symmetries for bias states [9]. Define \(\xi=\left(\mathbf{T},\boldsymbol{b}\right)\in\mathcal{M}\coloneqq\mathcal{SE}_{2}(3)\times\mathbb{R}^{9}\) to be the state space of the system. \(\mathbf{T}\in\mathcal{SE}_{2}(3)\) represents the extended pose, whereas \(\boldsymbol{b}=\left(\boldsymbol{b}_{\omega},\boldsymbol{b}_{\boldsymbol{a}},\boldsymbol{b}_{\boldsymbol{\nu}}\right)\in\mathbb{R}^{9}\) represents the IMU biases together with an additional virtual bias \(\boldsymbol{b}_{\boldsymbol{\nu}}\). Let \(u=\left(\boldsymbol{w},\boldsymbol{\tau}\right)\in\mathbb{L}\subseteq\mathbb{R}^{18}\) denote the system input, where \(\boldsymbol{w}=\left(\boldsymbol{\omega},\boldsymbol{a},\boldsymbol{\nu}\right)\in\mathbb{R}^{9}\) denotes the input given by the IMU readings and an additional virtual input \(\boldsymbol{\nu}\), and \(\boldsymbol{\tau}=\left(\boldsymbol{\tau}_{\boldsymbol{\omega}},\boldsymbol{\tau}_{\boldsymbol{a}},\boldsymbol{\tau}_{\boldsymbol{\nu}}\right)\in\mathbb{R}^{9}\) denotes the input for the IMU biases. Define the matrices \[\mathbf{G}=\left(\overline{\boldsymbol{g}}\right)^{\wedge}\in\mathfrak{se}_{2}(3),\qquad\mathbf{W}=\boldsymbol{w}^{\wedge}\in\mathfrak{se}_{2}(3).\] With these newly defined matrices, the system in Equ. (1) may then be written in compact form as in Equ. (10). Define the symmetry group \(\mathbf{G}_{\mathbf{TG}}\coloneqq\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}_{2}(3)\), and let \(X=\left(C,\gamma\right)\in\mathbf{G}_{\mathbf{TG}}\), where \(C\in\mathbf{SE}_{2}(3)\), \(\gamma\in\mathfrak{se}_{2}(3)\). Let \(X=\left(C_{X},\gamma_{X}\right)\), \(Y=\left(C_{Y},\gamma_{Y}\right)\) be two elements of the symmetry group, then the group product is written \(XY=\left(C_{X}C_{Y},\gamma_{X}+\mathrm{Ad}_{C_{X}}\left[\gamma_{Y}\right]\right)\). The inverse of an element \(X\) is given by \(X^{-1}=\left(C^{-1},-\mathrm{Ad}_{C^{-1}}\left[\gamma\right]\right)\). **Lemma 7**: _Define \(\phi:\,\mathbf{G}_{\mathbf{TG}}\times\mathcal{M}\,\rightarrow\,\mathcal{M}\) as_ \[\phi\left(X,\xi\right)\coloneqq\left(\mathbf{T}C,\mathbf{Ad}_{C^{-1}}^{\vee}\left(\boldsymbol{b}-\gamma^{\vee}\right)\right)\in\mathcal{M}. \tag{17}\] _Then, \(\phi\) is a transitive right group action of \(\mathbf{G}_{\mathbf{TG}}\) on \(\mathcal{M}\)._ From here, we derive a compatible action of the symmetry group \(\mathbf{G}_{\mathbf{TG}}\) on the input space \(\mathbb{L}\) and derive the lift \(\Lambda\) via constructive design as described in [15, 24]. **Lemma 8**: _Define \(\psi:\,\mathbf{G}_{\mathbf{TG}}\times\mathbb{L}\,\rightarrow\,\mathbb{L}\,\) as_ \[\psi\left(X,u\right)\coloneqq\left(\mathbf{Ad}_{C^{-1}}^{\vee}\left(\boldsymbol{w}-\gamma^{\vee}\right)+\Omega^{\vee}\left(C^{-1}\right),\mathbf{Ad}_{C^{-1}}^{\vee}\,\boldsymbol{\tau}\right)\in\mathbb{L}.
\tag{18}\] _Then, \(\psi\) is a right group action of \(\mathbf{G}_{\mathbf{TG}}\) on \(\mathbb{L}\)._ The system in Equ. (10) is equivariant under the actions \(\phi\) in Equ. (17) and \(\psi\) in Equ. (18). The existence of a transitive group action of the symmetry group \(\mathbf{G_{TG}}\) on the state space \(\mathcal{M}\) and the equivariance of the system guarantee the existence of an equivariant lift [15]. **Theorem 9**: _Define the map \(\Lambda\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{g_{TG}}\) by_ \[\Lambda\left(\xi,u\right)\coloneqq\left(\Lambda_{1}\left(\xi,u\right),\Lambda_{2}\left(\xi,u\right)\right),\] _where \(\Lambda_{1}:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{se}_{2}(3)\), and \(\Lambda_{2}\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{se}_{2}(3)\) are given by_ \[\Lambda_{1}\left(\xi,u\right)\coloneqq\left(\mathbf{W}-\mathbf{B}+\mathbf{D}\right)+\mathbf{T}^{-1}\left(\mathbf{G}-\mathbf{D}\right)\mathbf{T}, \tag{19}\] \[\Lambda_{2}\left(\xi,u\right)\coloneqq\mathrm{ad}_{\boldsymbol{b}^{\wedge}}\left[\Lambda_{1}\left(\xi,u\right)\right]-\boldsymbol{\tau}^{\wedge}. \tag{20}\] _Then, \(\Lambda\) is an equivariant lift for the system in Equ. (10) with respect to the symmetry group \(\mathbf{G_{TG}}\coloneqq\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}_{2}(3)\)._ This approach requires the introduction of a new state \(\boldsymbol{b_{\nu}}\) in order to apply the full \(\mathfrak{se}_{2}(3)\) semi-direct symmetry on the bias states. This new state is entirely virtual; it does not exist in the real system. Since introducing an entirely virtual state just for the sake of the symmetry appears questionable, it is of interest to consider alternative symmetries that try to preserve the semi-direct group structure that models bias interaction, but do not require the additional bias filter state.
### Direct Position group \(\mathbf{G_{DP}}:\,\mathbf{HG}(3)\ltimes\mathfrak{hg}(3)\times\mathbb{R}^{3}\)
In this subsection, we define a symmetry to accomplish two goals simultaneously. First, we aim for a symmetry that does not require the over-parametrization of the state introduced in [9] given by the addition of a velocity bias state, and second, we aim for a symmetry with linear output for position measurements. We introduce the term \(\mathbf{HG}(3)\) for the _homogeneous Galilean_ group. This corresponds to extended pose transformations \(\mathbf{SE}_{2}(3)\) where the spatial translation is zero. That is, the symmetry acts on rotation and velocity only, with the semi-direct product induced by the \(\mathbf{SE}_{2}(3)\) geometry. The homogeneous Galilean group is isomorphic to \(\mathbf{SE}(3)\) in structure; however, since \(\mathbf{SE}(3)\) is synonymous with pose transformation, we use the \(\mathbf{HG}(3)\) notation to avoid confusion. The first step towards these goals is to introduce a virtual input \(\boldsymbol{\nu}\) and rewrite Equ. (1c) as \(\dot{\boldsymbol{p}}=\mathbf{R}\boldsymbol{\nu}+\boldsymbol{v}\). Define \(\xi=\left(\mathbf{T},\boldsymbol{p},\boldsymbol{b}\right)\in\mathcal{M}\coloneqq\mathcal{HG}(3)\times\mathbb{R}^{3}\times\mathbb{R}^{6}\) to be the state space of the system, where \(\mathbf{T}=\left(\mathbf{R},\,\boldsymbol{v}\right)\in\mathcal{HG}(3)\) includes the orientation and the velocity of the rigid body. Let \(u=\left(\boldsymbol{w},\boldsymbol{\nu},\boldsymbol{\tau}\right)\in\mathbb{L}\subseteq\mathbb{R}^{15}\) denote the system input.
Define the matrices \[\mathbf{G}=\left(\boldsymbol{\overline{g}}\right)^{\wedge}\in\mathfrak{se}(3),\quad\mathbf{B}=\boldsymbol{b}^{\wedge}\in\mathfrak{se}(3),\quad\mathbf{W}=\boldsymbol{w}^{\wedge}\in\mathfrak{se}(3).\] The system in Equ. (1) may then be written as \[\dot{\mathbf{T}}=\mathbf{T}\left(\mathbf{W}-\mathbf{B}\right)+\mathbf{G}\,\mathbf{T}, \tag{21a}\] \[\dot{\boldsymbol{p}}=\mathbf{R}\boldsymbol{\nu}+\boldsymbol{v},\] (21b) \[\dot{\boldsymbol{b}}=\boldsymbol{\tau}. \tag{21c}\] Define the symmetry group \(\mathbf{G_{DP}}\coloneqq\mathbf{HG}(3)\ltimes\mathfrak{hg}(3)\times\mathbb{R}^{3}\), and let \(X=\left(B,\beta,c\right)\in\mathbf{G_{DP}}\) with \(B=\left(A,a\right)\in\mathbf{HG}(3)\) such that \(A\in\mathbf{SO}(3)\), \(a\in\mathbb{R}^{3}\). Let \(X=\left(B_{X},\beta_{X},c_{X}\right),Y=\left(B_{Y},\beta_{Y},c_{Y}\right)\in\mathbf{G_{DP}}\); the group product is written \(XY=\left(B_{X}B_{Y},\beta_{X}+\mathrm{Ad}_{B_{X}}\left[\beta_{Y}\right],c_{X}+c_{Y}\right)\). The inverse of an element \(X\in\mathbf{G_{DP}}\) is given by \(X^{-1}=\left(B^{-1},-\mathrm{Ad}_{B^{-1}}\left[\beta\right],-c\right)\in\mathbf{G_{DP}}\). **Lemma 10**: _Define \(\phi\,:\,\mathbf{G_{DP}}\times\mathcal{M}\,\rightarrow\,\mathcal{M}\) as_ \[\phi\left(X,\xi\right)\coloneqq\left(\mathbf{T}B,\mathbf{Ad}_{B^{-1}}^{\vee}\left(\boldsymbol{b}-\beta^{\vee}\right),\boldsymbol{p}+c\right)\in\mathcal{M}. \tag{22}\] _Then, \(\phi\) is a transitive right group action of \(\mathbf{G_{DP}}\) on \(\mathcal{M}\)._ We derive a compatible action of the symmetry group \(\mathbf{G_{DP}}\) on the input space \(\mathbb{L}\). **Lemma 11**: _Define \(\psi\,:\,\mathbf{G_{DP}}\times\mathbb{L}\,\rightarrow\,\mathbb{L}\) as_ \[\psi\left(X,u\right)\coloneqq\left(\mathbf{Ad}_{B^{-1}}^{\vee}\left(\boldsymbol{w}-\beta^{\vee}\right),A^{T}\left(\boldsymbol{\nu}-a\right),\mathbf{Ad}_{B^{-1}}^{\vee}\,\boldsymbol{\tau}\right)\in\mathbb{L}. \tag{23}\] _Then, \(\psi\) is a right group action of \(\mathbf{G_{DP}}\) on \(\mathbb{L}\)._ The system in Equ. (21) is equivariant under the actions \(\phi\) in Equ. (22) and \(\psi\) in Equ. (23). Therefore, the existence of an equivariant lift is guaranteed. **Theorem 12**: _Define the map \(\Lambda\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{g_{DP}}\) by_ \[\Lambda\left(\xi,u\right)\coloneqq\left(\Lambda_{1}\left(\xi,u\right),\Lambda_{2}\left(\xi,u\right),\Lambda_{3}\left(\xi,u\right)\right),\] _where \(\Lambda_{1}\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{hg}(3)\), \(\Lambda_{2}\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathfrak{se}(3)\), and \(\Lambda_{3}\,:\,\mathcal{M}\times\mathbb{L}\,\rightarrow\,\mathbb{R}^{3}\) are given by_ \[\Lambda_{1}\left(\xi,u\right)\coloneqq\left(\mathbf{W}-\mathbf{B}\right)+\mathbf{T}^{-1}\mathbf{G}\,\mathbf{T}, \tag{24}\] \[\Lambda_{2}\left(\xi,u\right)\coloneqq\mathrm{ad}_{\boldsymbol{b}^{\wedge}}\left[\Lambda_{1}\left(\xi,u\right)\right]-\boldsymbol{\tau}^{\wedge},\] (25) \[\Lambda_{3}\left(\xi,u\right)\coloneqq\mathbf{R}\boldsymbol{\nu}+\boldsymbol{v}. \tag{26}\] _Then, \(\Lambda\) is an equivariant lift for the system in Equ. (21) with respect to the symmetry group \(\mathbf{G_{DP}}\coloneqq\mathbf{HG}(3)\ltimes\mathfrak{hg}(3)\times\mathbb{R}^{3}\)._ The symmetry proposed in this subsection allows for linear position measurements, while keeping a minimal state parametrization (i.e. no over-parametrization of the state with additional state variables).
However, the construction comes at the cost of separating the position state from the geometric \(\mathbf{SE}_{2}(3)\) structure and modelling it as a direct product linear space.
### Semi-Direct Bias group \(\mathbf{G_{SD}}:\;\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}(3)\)
In this subsection, we propose a symmetry that maintains a minimal state representation (not requiring the introduction of an additional velocity bias state) and keeps the position state within the geometric structure given by \(\mathbf{SE}_{2}(3)\), hopefully leading to better linearization of the error dynamics. Consider the system in Equ. (10). Define the symmetry group \(\mathbf{G_{SD}}\coloneqq\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}(3)\) with group product \(XY=\left(C_{X}C_{Y},\gamma_{X}+\mathrm{Ad}_{C_{X}}\left[\gamma_{Y}\right]\right)\) for \(X=\left(C_{X},\gamma_{X}\right),Y=\left(C_{Y},\gamma_{Y}\right)\in\mathbf{G_{SD}}\). Here, for \(X=\left(C,\gamma\right)\in\mathbf{G_{SD}}\) one has \(C=\left(A,a,b\right)=\left(B,b\right)\in\mathbf{SE}_{2}(3)\) such that \(A\in\mathbf{SO}(3),\;a,b\in\mathbb{R}^{3}\), and \(B=\left(A,a\right)\in\mathbf{HG}(3)\). The element \(C\in\mathbf{SE}_{2}(3)\) in its matrix representation is written \[C=\begin{bmatrix}A&a&b\\ \mathbf{0}_{1\times 3}&1&0\\ \mathbf{0}_{1\times 3}&0&1\end{bmatrix}\in\mathbf{SE}_{2}(3),\] where the upper-left \(4\times 4\) block is the matrix representation of \(B=\left(A,a\right)\in\mathbf{HG}(3)\). The inverse element is written \[X^{-1}=\left(C^{-1},-\mathrm{Ad}_{B^{-1}}\left[\gamma\right]\right)\in\mathbf{G_{SD}}.\] **Lemma 13**: _Define \(\phi\::\:\mathbf{G_{SD}}\times\mathcal{M}\:\rightarrow\:\mathcal{M}\) as_ \[\phi\left(X,\xi\right)\coloneqq\left(\mathbf{T}C,\mathbf{Ad}_{B^{-1}}^{\vee}\left(\boldsymbol{b}-\gamma^{\vee}\right)\right)\in\mathcal{M}. \tag{27}\] _Then, \(\phi\) is a transitive right group action of \(\mathbf{G_{SD}}\) on \(\mathcal{M}\)._ **Theorem 14**: _Define \(\Lambda_{1}\::\:\mathcal{M}\times\mathbb{L}\:\rightarrow\:\mathfrak{se}_{2}(3)\) as_ \[\Lambda_{1}\left(\xi,u\right)\coloneqq\left(\mathbf{W}-\mathbf{B}+\mathbf{D}\right)+\mathbf{T}^{-1}\left(\mathbf{G}-\mathbf{D}\right)\mathbf{T}, \tag{28}\] _define \(\Lambda_{2}\::\:\mathcal{M}\times\mathbb{L}\:\rightarrow\:\mathfrak{se}(3)\) as_ \[\Lambda_{2}\left(\xi,u\right)\coloneqq\mathrm{ad}_{\boldsymbol{b}^{\wedge}}\left[\Pi\left(\Lambda_{1}\left(\xi,u\right)\right)\right]-\boldsymbol{\tau}^{\wedge}. \tag{29}\] _Then, the map \(\Lambda\left(\xi,u\right)=\left(\Lambda_{1}\left(\xi,u\right),\Lambda_{2}\left(\xi,u\right)\right)\) is a lift for the system in Equ. (10) with respect to the symmetry group \(\mathbf{G_{SD}}\coloneqq\mathbf{SE}_{2}(3)\ltimes\mathfrak{se}(3)\)._ The symmetry proposed in this subsection is a variation of the symmetry defined in our previous work [9] that does not require over-parametrization of the state with additional state variables. Compared to the DP-EqF structure, the position measurements of the SD-EqF are not globally linear, and this may impact performance for certain transient conditions.
## 5 Reformulation of Position Measurements as Equivariant
It is straightforward to verify that the \(\mathbf{G_{ES}},\mathbf{G_{TG}},\mathbf{G_{SD}},\mathbf{G_{TF}}\) symmetries do not possess output equivariance [24, 25] for global position measurements directly.
In the present section, we show how position measurements can be reformulated as relative body-frame measurements, imposing a nonlinear constraint [11]. Using this trick, the modified measurement is output equivariant with respect to a suitable group action, and the linearization methodology proposed in [25] can be applied to generate cubic linearization error in the output. **Lemma 15**: _Let \(\boldsymbol{\pi}\) be a measurement of global position. Define a new measurement model \(h\left(\xi\right)\in\mathbb{R}^{3}\), describing the body-referenced difference between the measured global position and the position state, as follows_ \[h\left(\xi\right)=\mathbf{R}^{T}\left(\boldsymbol{\pi}-\boldsymbol{p}\right)\in\mathbb{R}^{3}. \tag{30}\] _Let \(y=h\left(\xi\right)\in\mathcal{N}\) be a measurement defined according to the above model in Equ. (30), and define \(\rho\::\:\mathbf{G}\times\mathbb{R}^{3}\:\rightarrow\:\mathbb{R}^{3}\) as_ \[\rho\left(X,y\right)=A^{T}\left(y-b\right). \tag{31}\] _Then, the configuration output defined in Equ. (30) is equivariant._ The noise-free value for \(y\) is zero, and the output innovation \(\delta\left(\rho_{\hat{X}^{-1}}\left(\boldsymbol{0}\right)\right)=\rho_{\hat{X}^{-1}}\left(\boldsymbol{0}\right)-\boldsymbol{\pi}\) measures the mismatch of the observer state in reconstructing the true state up to noise in the raw measurement \(\boldsymbol{\pi}\).
## 6 Linearization Error Analysis and EqF Design
In Sec. 4, we presented different symmetry groups for the inertial navigation problem. An indicator of the performance of an EqF with a particular symmetry is the order of approximation error in the associated linearization of error dynamics. For all symmetries, the origin \(\hat{\xi}\in\mathcal{M}\) is chosen to be \(\hat{\xi}\coloneqq\left(\hat{\mathbf{R}},\hat{\boldsymbol{v}},\hat{\boldsymbol{p}},\hat{\boldsymbol{b}}_{\boldsymbol{\omega}},\hat{\boldsymbol{b}}_{\boldsymbol{a}}\right)=\left(\mathbf{I}_{3},\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1}\right)\). Define the local coordinate chart \(\vartheta\,:\,\mathcal{U}_{\hat{\xi}}\subset\mathcal{M}\rightarrow\mathbb{R}^{n}\) to be \[\vartheta\coloneqq\left(\phi_{\hat{\xi}}\circ\exp_{\mathbf{G}}\right)^{-1}=\log_{\mathbf{G}}\circ\,\phi_{\hat{\xi}}^{-1}, \tag{32}\] on a neighborhood of \(\hat{\xi}\in\mathcal{M}\) such that \(\log_{\mathbf{G}}\circ\,\phi_{\hat{\xi}}^{-1}\) is bijective. The chart \(\vartheta\) is always well-defined locally since all group actions considered are free. The local error coordinates are defined by \(\varepsilon\coloneqq\vartheta(e)\), so long as \(e\coloneqq\phi(\hat{X}^{-1},\xi)\) remains in the domain of definition of \(\vartheta\). In Equ. (32), \(\log_{\mathbf{G}}\) denotes the log coordinates on the symmetry group considered. This map is different for each symmetry group. For a product Lie group \(\mathbf{G}\coloneqq\mathbf{G_{1}}\times\mathbf{G_{2}}\), the logarithm is given by \(\log_{\mathbf{G}}\left(\left(A,B\right)\right)=\left(\log_{\mathbf{G_{1}}}\left(A\right),\log_{\mathbf{G_{2}}}\left(B\right)\right)\). When the product groups are Lie groups with well-known exponential maps, then the standard expressions are used [7]. For the semi-direct product groups \(\mathbf{G}\ltimes\mathfrak{g}\) where \(\mathfrak{g}\) is the Lie algebra of \(\mathbf{G}\), we will use a matrix realization to compute the exponential algebraically. In Tab. 2, we present the linearization of the state error dynamics associated with each of the symmetries considered.
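Before detailing the linearization, the reformulated output of Lemma 15 and its equivariance are easy to check numerically. The sketch below evaluates \(h(\xi)=\mathbf{R}^{T}(\boldsymbol{\pi}-\boldsymbol{p})\) and verifies \(h(\phi(X,\xi))=\rho(X,h(\xi))\) for the \(\mathbf{G}_{\mathbf{ES}}\) action, using only the \((A,b)\) components of the group element; this is an assumption-laden illustration rather than the paper's code.

```python
import numpy as np
from scipy.linalg import expm

def wedge(u):
    return np.array([[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]])

def h_body(R, p, pi):
    """Equ. (30): body-referenced residual h(xi) = R^T (pi - p)."""
    return R.T @ (pi - p)

def rho(A, b, y):
    """Equ. (31): output action rho(X, y) = A^T (y - b)."""
    return A.T @ (y - b)

rng = np.random.default_rng(4)
R, p = expm(wedge(rng.normal(size=3))), rng.normal(size=3)
pi = p + 0.2 * rng.normal(size=3)        # noisy global position measurement

# Under the G_ES action, T C gives R' = R A and p' = R b + p, and the output
# equivariance h(phi(X, xi)) = rho(X, h(xi)) should then hold exactly.
A = expm(wedge(rng.normal(size=3)))
b = rng.normal(size=3)
R_new, p_new = R @ A, R @ b + p
print(np.allclose(h_body(R_new, p_new, pi), rho(A, b, h_body(R, p, pi))))  # True
```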
The linearization is expressed in terms of elements \(\varepsilon=\log(E)\in\mathfrak{g}\), where the element \(E\in\mathbf{G}\) corresponds bijectively to the error \(e\in\mathcal{M}\) through the free group action. That is, we solve \[\mathrm{D}\vartheta^{-1}(\varepsilon)[\dot{\varepsilon}]\approx\frac{\mathrm{d}}{\mathrm{d}t}e=\tilde{f}(\vartheta^{-1}(\varepsilon),u)\] for \(\dot{\varepsilon}\) to first order in \(\varepsilon\). Here \(\dot{e}=\tilde{f}(e,u)\) is the full error dynamics expressed as a function of \(e\) and the input \(u\) [17, 25]. Finally, the filter design follows the procedure outlined in the authors' earlier works [25, 18, 9, 8]. The detailed derivation of the error linearization for each symmetry, as well as the related equivariant filters, is provided in the appendix. Barrau and Bonnabel [1] developed the IEKF for the bias-free INS problem and showed that the linearization of the navigation states was exact. This was a significant improvement on the MEKF geometry, where the linearization of the navigation states is not exact, independently of the bias. However, this exact linearization property is lost when bias is added to the INS problem, as the system is no longer group affine [2]. Using a direct product geometric structure to add bias leads to the Imperfect-IEKF [2] and introduces linearization error in the navigation state equations (cf. Tab. 2). The remaining filters all model coupling between bias and navigation states using semi-direct geometry of some form or other. The TG-EqF is the only filter for which the linearization of the navigation state is exact. In this case, the linearization error is only present in the bias states. The DP-EqF, SD-EqF and TFG-IEKF all have semi-direct geometric coupling between part of their navigation states and the bias states, leading to exact or improved linearization where the coupling acts compatibly with the \(\mathbf{TG}\) structure.
## 7 Experiments
In the present section, we document results from a suite of experiments chosen to provide a comparison of the performance of the MEKF, the Imperfect-IEKF, an EqF based on the \(\mathbf{G_{TG}}\) symmetry [9] termed TG-EqF, an EqF based on the \(\mathbf{G_{DP}}\) symmetry termed DP-EqF, an EqF based on the \(\mathbf{G_{SD}}\) symmetry termed SD-EqF, and finally an implementation of the TFG-IEKF in [3] (implemented according to the authors' original manuscript, and verified to behave the same as an EqF based on the \(\mathbf{G_{TF}}\) symmetry). We undertake two separate experimental analyses. In the first experiment, we undertake a Monte-Carlo simulation of an unmanned aerial vehicle (UAV) equipped with an IMU receiving acceleration and angular velocity measurements at 200Hz and receiving global position measurements at 10Hz, simulating a GNSS receiver. In the second experiment, we compare all the filters with real data from the INSANE dataset [4].
### UAV Flight Simulation
In this experiment, we conducted a Monte-Carlo simulation, including four hundred runs of a simulated UAV equipped with an IMU and receiving global position measurements simulating a GNSS receiver. In order to simulate realistic flight conditions, we selected the initial 80s from four sequences in the Euroc dataset's vicon room [6] as reference trajectories.
For each sequence, we generated a hundred runs, incorporating synthetic IMU data and position measurements while varying the initial conditions for the position (distributed normally around zero with 1m standard deviation per axis) and the attitude (distributed normally around zero with 20\({}^{\circ}\) standard deviation per axis). The ground truth IMU biases are randomly generated every run following a Gaussian distribution with standard deviation of 0.01rad/s\(\sqrt{\mathrm{s}}\) for the gyro bias and 0.01m/s\({}^{2}\sqrt{\mathrm{s}}\) for the accelerometer bias. To simulate realistic global position measurements, additive Gaussian noise with a standard deviation of 0.2m per axis is added. For a fair comparison, we were careful to use the same prior distributions and noise parameters for all filters. This includes accounting for the different scaling and transformations of noise due to the input and state parametrizations for the different geometries. Similarly, each filter shares the same input and output measurement noise covariance adapted to the particular geometry of the filter. The validity of the noise models can be verified in the Average Energy plot (Fig. 1a), which plots the average normalized estimation error squared (ANEES) [13]. Here, all filters initialize with unity ANEES, demonstrating that the prior sampling and observer response correspond to the stochastic prior used, and all filters converge towards unity ANEES as expected from a filter driven by Gaussian noise. All the filters are initialized at the identity (zero attitude, zero position, zero velocity, and zero biases).
\begin{table} \begin{tabular}{c c c} \hline Filter & ANEES (T) & ANEES (A) \\ \hline MEKF & 3.11 & 1.69 \\ Imperfect-IEKF & 1.36 & 1.40 \\ TFG-IEKF & 1.71 & 1.43 \\ TG-EqF & **1.20** & **1.22** \\ DP-EqF & 1.44 & 1.42 \\ SD-EqF & 1.32 & 1.44 \\ \hline \end{tabular} \end{table} Table 3: ANEES in the first half (transient (T)) and the second half (asymptotic (A)) of the trajectory length.
The primary plots in Fig. 1a are RMSE plots for the navigation states (on the top) and the bias states (on the bottom). It is clear that the MEKF filter demonstrates worse performance than the modern geometric filters. There is little difference visible in the transient and asymptotic error response of the navigation states for the modern filters. The remaining attitude error is due to yaw error, which is poorly observable for this scenario. The position and velocity errors converge to the noise limits of the measurement signals. In contrast, there are clear differences visible in the transient response of the bias states. The filters split roughly into three categories: the three filters with semi-direct bias symmetry (TG-EqF, DP-EqF and SD-EqF) appear to display the best transient response in both gyrometer and accelerometer bias. The TG-EqF appears slightly better in the gyrometer bias. The IEKF and TFG-IEKF, which have the \(\mathbf{SE}_{2}(3)\) symmetry but do not use a semi-direct geometry for the bias geometry, have almost identical bias transients. The accelerometer bias, in particular, is clearly separated from the filters with the semi-direct group symmetry. Finally, the MEKF suffers from not modeling the \(\mathbf{SE}_{2}(3)\) symmetry at all. The average energy plot provides an additional important analysis tool.
This plot shows the ANEES [13] defined as \[\text{ANEES}=\frac{1}{nM}\sum_{i=1}^{M}\boldsymbol{\varepsilon}_{i}^{T}\boldsymbol{\Sigma}_{i}^{-1}\,\boldsymbol{\varepsilon}_{i},\] where \(\boldsymbol{\varepsilon}\) is the specific filter error state, \(\boldsymbol{\Sigma}\) is the error covariance, \(M=400\) is the number of runs in the Monte-Carlo simulation, and \(n\) is the dimension of the state space.
Figure 1: Simulation and real-world experiments' results. Orange: Imperfect-IEKF. Purple: TFG-IEKF. Magenta: TG-EqF. Green: DP-EqF. Yellow: SD-EqF.
The ANEES provides a measure of the consistency of the filter estimate. An ANEES of unity means that the observed error variance corresponds exactly to the estimated covariance of the information state. When ANEES is larger than unity, it indicates that the filter is overconfident; that is, the observed error is larger than the estimate of the state covariance predicts. All 'pure' extended Kalman filters tend to be overconfident since their derivation ignores linearization errors in the model. How close a filter remains to an ANEES of unity is directly correlated with the consistency of the filter estimate and is usually linked to smaller linearization errors. To provide numeric results, we have averaged the ANEES values over the transient and asymptotic sections of the filter response and presented them in Tab. 3. Here, it is clear that the TG-EqF is superior, the four filters IEKF, TFG-IEKF, DP-EqF and SD-EqF are similar, and the MEKF is worst. The ANEES of the MEKF diverges to over seven before converging, corresponding to an overconfidence of a factor of seven standard deviations in the state error. Such a level of overconfidence is dangerous in a real-world scenario and may indeed lead to divergence of the filter estimate in certain situations. Note that in practice, overconfidence of a filter is avoided by inflating the process noise model covariance to account for linearization error in the model. A more consistent filter requires a smaller covariance inflation and has correspondingly more confidence in its model than a filter that is less consistent. In conclusion, the \(\mathbf{G_{TG}}\) EqF exhibits the best convergence rate, particularly in orientation and IMU biases, as well as the best consistency of all the filters. We believe that this performance can be traced back to the coupling of the IMU bias with the navigation states that is inherent in the semi-direct product structure of the symmetry group \(\mathbf{G_{TG}}\) and the exact linearization of the navigation error dynamics (Tab. 2). Note that the bias states are poorly observable states and possess slow dynamics. Consequently, moving linearization error into these states heuristically appears better than leaving the linearization error in the main navigation states that are much more dynamic.
### Real-world UAV flight
In this experiment, we compared the performance of the discussed filters in a real-world UAV flight scenario from the INSANE dataset [4]. In particular, in this experiment, a quadcopter is flying for 50m at a maximum height of 13m covering an area of roughly 200m\({}^{2}\), at a maximum speed of 3m/s. The UAV is receiving IMU measurements at 200Hz, as well as measurements from an RTK-GNSS receiver at 8Hz with an accuracy between 0.1m and 0.6m. The position and orientation reference has been obtained as described in [4] from raw sensor measurements of two RTK-GNSS receivers and a magnetometer.
Similar to the previous experiments, all the filters share the same tuning parameters. Fig. 1b shows the evolution of each filter's orientation and position estimates, as well as the innovation energy, commonly referred to as the Normalised Innovation Squared (NIS) error \[\mathrm{NIS}=\frac{1}{n}\,\boldsymbol{r}^{T}\mathbf{S}^{-1}\,\boldsymbol{r},\] where \(\boldsymbol{r}\) is the specific measurement residual of dimension \(n\) computed via the output action \(\rho\) and \(\mathbf{S}\) is the innovation covariance. The results in Fig. 1b show that the high level conclusions from the simulations are confirmed on real data. There are slight differences between filters in the plotted results, but due to the lack of accurate ground truth, all that can be deduced is that all the filters provide high quality real-world INS solutions. This is not surprising, since the MEKF is the industry standard and is known to perform well in practice, and the more modern filters are expected to improve on this performance.
## 8 Conclusion
This study investigates inertial navigation system filter design from the perspective of symmetry. We establish a unifying framework, demonstrating that various modern INS filter variants can be interpreted as equivariant filters applied to distinct choices of symmetry, with the group structure being the only difference among those filter variants. With specific application to position measurements, we demonstrated that fixed-frame measurements can be reformulated as body-frame relative measurements. This allows one to exploit the equivariance of the output, ensuring third-order linearization error in the measurement equations. We discussed and presented different symmetry groups acting on the state-space of the INS problem. Novel symmetries are introduced alongside an analysis of similarities and differences in the context of filter design. Furthermore, we showed how different choices of symmetries lead to filters with different linearized error dynamics, and how the \(\mathbf{G_{TG}}\) symmetry yields exact linearization of the navigation error, shifting all the linearization error into the bias state dynamics. Comparative performance studies, in simulation and in the real world, of a vehicle equipped with an IMU and receiving position measurements from a GNSS receiver highlighted that any of the IEKF, TFG-IEKF, TG-EqF, DP-EqF, and SD-EqF are good candidates for high performance INS filter design, with the TG-EqF demonstrating superior performance.
## Appendix A Linearization Error and Equivariant Filter Design with Different Symmetries
In this section, we explicitly derive the linearization error in the error dynamics and the related filter matrices of each filter presented in Tab. 2.
### Logarithm map of semi-direct product group
As mentioned in Sec. 6, for the semi-direct product groups \(\mathbf{G}\ltimes\mathfrak{g}\) where \(\mathfrak{g}\) is the Lie algebra of \(\mathbf{G}\), we will use a matrix realization to compute the exponential algebraically.
Define \(X=(C,\gamma)\in\mathbf{G}\ltimes\mathfrak{g}\), then \(X\) has a matrix representation given by \[X=\begin{bmatrix}\mathbf{Ad}_{C}^{\vee}&\gamma^{\vee}\\ \mathbf{0}&1\end{bmatrix}\in\mathbb{R}^{(\dim\mathfrak{g}+1)\times(\dim\mathfrak{g}+1)}.\] The logarithm is then given by \[\log_{\mathbf{G}\ltimes\mathfrak{g}}(X)=\begin{bmatrix}\mathbf{ad}_{\log_{\mathbf{G}}(C)}^{\vee}&J_{l}(\log_{\mathbf{G}}(C))^{-1}\gamma^{\vee}\\ \mathbf{0}&0\end{bmatrix},\] where \(J_{l}(\log_{\mathbf{G}}(C))\) is the left Jacobian of \(\log_{\mathbf{G}}(C)\), \[J_{l}(\log_{\mathbf{G}}(C))=\sum_{k=0}^{\infty}\frac{1}{(k+1)!}\big(\mathbf{ad}_{\log_{\mathbf{G}}(C)}^{\vee}\big)^{k}.\]
### Equivariant filter design
Let the state origin be the identity of the state space, thus \(\hat{\xi}=\mathrm{id}\in\mathcal{M}\). Let \(e\coloneqq\phi_{\hat{X}^{-1}}\left(\xi\right)\) be the equivariant error, and define local coordinates of the state space, that is, a chart \(\vartheta\,:\,\mathcal{U}_{\hat{\xi}}\subset\mathcal{M}\,\to\,\mathbb{R}^{m}\), where \(m=\dim\left(\mathcal{M}\right)\) is the dimension of the state space. Let \(\hat{y}=h\left(\hat{\xi}\right)\) be the output origin, and define local coordinates of the output space, that is, a chart \(\delta\,:\,\mathcal{U}_{\hat{y}}\subset\mathcal{N}\,\to\,\mathbb{R}^{n}\), where \(n=\dim\left(\mathcal{N}\right)\) is the dimension of the output space. Let \(\varepsilon\) denote the linearization of the error \(e\) in the chart \(\vartheta\). The linearized error dynamics, and the linearized output, are defined according to [24] \[\dot{\varepsilon}\approx\mathbf{A}_{t}^{0}\varepsilon-\,\mathrm{D}_{e}\big{|}_{\hat{\xi}}\,\vartheta\left(e\right)\,\mathrm{D}_{E}\big{|}_{I}\,\phi_{\hat{\xi}}\left(E\right)\left[\Delta\right],\] \[\mathbf{A}_{t}^{0}=\mathrm{D}_{e}\big{|}_{\hat{\xi}}\,\vartheta\left(e\right)\,\mathrm{D}_{E}\big{|}_{I}\,\phi_{\hat{\xi}}\left(E\right)\,\mathrm{D}_{e}\big{|}_{\hat{\xi}}\,\Lambda\left(e,\hat{u}\right)\,\mathrm{D}_{e}\big{|}_{\mathbf{0}}\,\vartheta^{-1}\left(\varepsilon\right),\] \[\delta\left(h\left(\xi\right)\right)-\delta\left(h\left(\hat{\xi}\right)\right)\approx\mathbf{C}^{0}\varepsilon,\] \[\mathbf{C}^{0}=\mathrm{D}_{y}\big{|}_{\hat{y}}\,\delta\left(y\right)\,\mathrm{D}_{e}\big{|}_{\hat{\xi}}\,h\left(e\right)\,\mathrm{D}_{e}\big{|}_{\mathbf{0}}\,\vartheta^{-1}\left(\varepsilon\right).\] If no compatible action \(\psi\) of the symmetry group on the input space is found, the state matrix can be computed alternatively according to \[\mathbf{A}_{t}^{0}=\mathrm{D}_{e}\big{|}_{\hat{\xi}}\,\vartheta\left(e\right)\,\mathrm{D}_{\xi}\big{|}_{\hat{\xi}}\,\phi_{\hat{X}^{-1}}\left(\xi\right)\,\mathrm{D}_{E}\big{|}_{I}\,\phi_{\hat{\xi}}\left(E\right)\,\mathrm{D}_{\xi}\big{|}_{\phi_{\hat{X}}\left(\xi\right)}\,\Lambda\left(\xi,u\right)\,\mathrm{D}_{e}\big{|}_{\hat{\xi}}\,\phi_{\hat{X}}\left(e\right)\,\mathrm{D}_{e}\big{|}_{\mathbf{0}}\,\vartheta^{-1}\left(\varepsilon\right).\] If the output is equivariant, thus if an action \(\rho\) of the symmetry group on the output space exists, then this can be exploited to derive a linearized output with third-order error [25] as follows \[\delta\left(h\left(e\right)\right)=\delta\left(\rho_{\hat{X}^{-1}}\left(h\left(\xi\right)\right)\right)\approx\mathbf{C}^{*}\varepsilon+\mathbf{O}(\boldsymbol{\varepsilon}^{3}),\] \[\mathbf{C}^{*}\boldsymbol{\varepsilon}=\frac{1}{2}\,\mathrm{D}_{y}\big{|}_{\hat{y}}\,\delta\left(y\right)\left(\mathrm{D}_{E}\big{|}_{I}\,\rho_{E}\left(y\right)+\mathrm{D}_{E}\big{|}_{I}\,\rho_{E}\left(\rho_{\hat{X}^{-1}}\left(0\right)\right)\right)\varepsilon^{\wedge}.\]
### MEKF linearization error and filter design
#### a.3.1 Overview
The state space is defined to be \(\mathcal{M}=\mathcal{SO}(3)\times\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}\) with \(\xi\coloneqq\left(\mathbf{R},\boldsymbol{v},\boldsymbol{p},\boldsymbol{b}_{\boldsymbol{\omega}},\boldsymbol{b}_{a}\right)\in\mathcal{M}\). Choose the origin to be \(\hat{\xi}\coloneqq\left(\mathbf{I}_{3},\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1},\mathbf{0}_{3\times 1}\right)\in\mathcal{M}\). The velocity input is given by \(u\coloneqq\left(\boldsymbol{\omega},\boldsymbol{a},\boldsymbol{\tau}_{\boldsymbol{\omega}},\boldsymbol{\tau}_{a}\right)\). The symmetry group of the MEKF is given by \(\mathbf{G}_{\mathbf{O}}\coloneqq\mathbf{SO}(3)\times\mathbb{R}^{12}\). Define the filter state \(\hat{X}=(\hat{A},\hat{a},\hat{b},\hat{\alpha},\hat{\beta})\in\mathbf{G}_{\mathbf{O}}\), where \(\hat{A}\in\mathbf{SO}(3)\) and \(\hat{a},\hat{b},\hat{\alpha},\hat{\beta}\in\mathbb{R}^{3}\). The state estimate is given by \[\hat{\xi}=\phi(\hat{X},\hat{\xi})=(\hat{A},\hat{a},\hat{b},\hat{\alpha},\hat{\beta})=\left(\hat{\mathbf{R}},\hat{\boldsymbol{v}},\hat{\boldsymbol{p}},\hat{\boldsymbol{b}}_{\boldsymbol{\omega}},\hat{\boldsymbol{b}}_{\boldsymbol{a}}\right).\] (A.1) The state error is defined as \[e\coloneqq\phi(\hat{X}^{-1},\xi)=\left(\mathbf{R}\hat{A}^{\top},\boldsymbol{v}-\hat{a},\boldsymbol{p}-\hat{b},\boldsymbol{b}_{\boldsymbol{\omega}}-\hat{\alpha},\boldsymbol{b}_{a}-\hat{\beta}\right)=\left(\mathbf{R}\,\hat{\mathbf{R}}^{\top},\boldsymbol{v}-\hat{\boldsymbol{v}},\boldsymbol{p}-\hat{\boldsymbol{p}},\boldsymbol{b}_{\boldsymbol{\omega}}-\hat{\boldsymbol{b}}_{\boldsymbol{\omega}},\boldsymbol{b}_{a}-\hat{\boldsymbol{b}}_{a}\right).\] (A.2)
#### a.3.2 Error dynamics
The error dynamics related to Equ.
(A.2) for each state is given by \[\dot{e}_{R} =\frac{\mathrm{d}}{\mathrm{d}t}(\mathbf{R}\,\hat{\mathbf{R}}^{\top})\] \[=\mathbf{R}(\boldsymbol{\omega}-\boldsymbol{b}_{\boldsymbol{ \omega}}-\boldsymbol{\omega}+\hat{\boldsymbol{b}}_{\boldsymbol{\omega}})^{ \wedge}\,\hat{\mathbf{R}}^{\top}\] \[=-e_{R}\,\hat{\mathbf{R}}e_{b_{\omega}}^{\wedge}\,\hat{\mathbf{ R}}^{\top}\] \[=-e_{R}\,\hat{\mathbf{R}}e_{b_{\omega}}^{\wedge}\,;\] \[\dot{e}_{v} =\frac{\mathrm{d}}{\mathrm{d}t}(\boldsymbol{v}-\hat{\boldsymbol {v}})=\hat{\boldsymbol{v}}-\hat{\boldsymbol{v}}\] \[=\mathbf{R}(\boldsymbol{a}-\boldsymbol{b}_{a})^{\wedge}+ \boldsymbol{g}-\hat{\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})^{ \wedge}-\boldsymbol{g}\] \[=\mathbf{R}(\boldsymbol{a}-\boldsymbol{b}_{a})^{\wedge}-\hat{ \mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})^{\wedge}\] \[=e_{R}\,\hat{\mathbf{R}}(\boldsymbol{a}-\boldsymbol{b}_{a})-\hat {\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a});\] \[\dot{e}_{p} =\frac{\mathrm{d}}{\mathrm{d}t}(\boldsymbol{p}-\hat{\boldsymbol {p}})=\dot{\boldsymbol{p}}-\dot{\hat{\boldsymbol{p}}}\] \[=\boldsymbol{v}-\hat{\boldsymbol{v}}=e_{v};\] \[\dot{e}_{b} =\frac{\mathrm{d}}{\mathrm{d}t}(\boldsymbol{b}-\hat{\boldsymbol{b }})=\boldsymbol{0}.\] The local coordinate chart \(\varepsilon=\log_{\mathbf{G}_{\mathbf{O}}}\circ\phi_{\xi}^{-1}(e)\) for each state is given by \[\varepsilon_{R} \coloneqq\log_{\mathbf{SO}(3)}(e_{R})^{\vee}\in\mathbb{R}^{3}\] \[\varepsilon_{v,p,b_{\omega},b_{\omega}} \coloneqq e_{v,p,b_{\omega},b_{a}}\in\mathbb{R}^{3}.\] The linearization of \(\dot{e}_{R}=\mathrm{D}\exp(\varepsilon_{R}^{\wedge})[\dot{\varepsilon}_{R}^{ \wedge}]\) is given by \[e_{R}\frac{\mathrm{I}-\exp(-\mathrm{ad}_{\varepsilon_{R}^{ \wedge}})}{\mathrm{ad}_{\varepsilon_{R}^{\wedge}}}\dot{\varepsilon}_{R}^{ \wedge} =-e_{R}\,\big{(}\hat{\mathbf{R}}\varepsilon_{b_{\omega}}\big{)}^{\wedge}\] \[\big{(}\mathrm{I}+\mathcal{O}(\varepsilon_{R}^{\wedge})\big{)} \dot{\varepsilon}_{R}^{\wedge} =\big{(}\hat{\mathbf{R}}\varepsilon_{b_{\omega}}\big{)}^{\wedge}\] \[\dot{\varepsilon}_{R} =\hat{\mathbf{R}}\varepsilon_{b_{\omega}}+O(\varepsilon_{R}^{2}).\] The linearization of \(\dot{e}_{v}=\dot{\varepsilon}_{v}\) is given by \[\dot{\varepsilon}_{v} =e_{R}\,\hat{\mathbf{R}}(\boldsymbol{a}-\boldsymbol{b}_{a})-\hat {\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})\] \[=(\mathrm{I}+\varepsilon_{R}^{\wedge}+O(\varepsilon_{R}^{2}))\, \hat{\mathbf{R}}(\boldsymbol{a}-\boldsymbol{b}_{a})-\hat{\mathbf{R}}( \boldsymbol{a}-\hat{\boldsymbol{b}}_{a})\] \[=\hat{\mathbf{R}}(\hat{\boldsymbol{b}}_{a}-\boldsymbol{b}_{a})+ \varepsilon_{R}^{\wedge}\,\hat{\mathbf{R}}(\boldsymbol{a}-\boldsymbol{b}_{a}) +O(\varepsilon_{R}^{2})\] \[=-\hat{\mathbf{R}}\varepsilon_{b_{a}}+\varepsilon_{R}^{\wedge} \,\hat{\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})-\varepsilon_{R}^{ \wedge}\,\hat{\mathbf{R}}\varepsilon_{b_{a}}+O(\varepsilon_{R}^{2})\] \[=-(\hat{\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a}))^{ \wedge}\varepsilon_{R}-\hat{\mathbf{R}}\varepsilon_{b_{a}}+O(\varepsilon^{2 }).\] The linearization of \(\dot{e}_{p}=\dot{\varepsilon}_{p}\) is given by \[\dot{\varepsilon}_{p}=\varepsilon_{v}.\] The linearization of \(\dot{e}_{b}=\dot{\varepsilon}_{b}\) is given by \(\dot{\varepsilon}_{b}=\boldsymbol{0}\). 
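As a sanity check on such linearizations, the following hedged Python sketch (illustrative only, not from the paper) compares the exact velocity error rate \(\dot{e}_{v}=e_{R}\,\hat{\mathbf{R}}(\boldsymbol{a}-\boldsymbol{b}_{a})-\hat{\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})\) derived above with its first-order approximation \(-(\hat{\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a}))^{\wedge}\varepsilon_{R}-\hat{\mathbf{R}}\varepsilon_{b_{a}}\); the residual shrinks roughly quadratically with the error magnitude, as the \(O(\varepsilon^{2})\) remainder predicts.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

rng = np.random.default_rng(1)
R_hat = expm(skew(rng.normal(size=3)))      # attitude estimate
a = rng.normal(size=3)                      # accelerometer reading
b_a_hat = 0.1 * rng.normal(size=3)          # accelerometer bias estimate

for scale in (1e-1, 1e-2, 1e-3):
    eps_R = scale * rng.normal(size=3)      # attitude error in local coordinates
    eps_ba = scale * rng.normal(size=3)     # accelerometer-bias error
    e_R = expm(skew(eps_R))
    b_a = b_a_hat + eps_ba
    exact = e_R @ R_hat @ (a - b_a) - R_hat @ (a - b_a_hat)
    linear = -skew(R_hat @ (a - b_a_hat)) @ eps_R - R_hat @ eps_ba
    print(scale, np.linalg.norm(exact - linear))   # residual decreases ~ scale**2
```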
#### a.3.3 Filter design From the linearization error analysis, it is trivial to see that the linearized error state matrix \(\mathbf{A}_{t}^{0}\mid\dot{\boldsymbol{\varepsilon}}\simeq\mathbf{A}_{t}^{0}\, \boldsymbol{\varepsilon}\) is written \[\mathbf{A}_{t}^{0}=\left[\begin{array}{ccc}&\vdots&-\hat{\mathbf{R}}& \boldsymbol{0}_{3\times 3}\\ &1\,\mathbf{A}&\vdots&\boldsymbol{0}_{3\times 3}&-\hat{\mathbf{R}}\\ &\vdots&\boldsymbol{0}_{3\times 3}&\boldsymbol{0}_{3\times 3}\\ &\cdots&\cdots&\cdots&\cdots\\ \boldsymbol{0}_{6\times 9}&\vdots&\boldsymbol{0}_{6\times 6}\end{array} \right]\in\mathbb{R}^{15\times 15},\] (A.3) where \[{}_{1}\mathbf{A}=\left[\begin{array}{ccc}&\boldsymbol{0}_{3\times 3}& \boldsymbol{0}_{3\times 3}&\boldsymbol{0}_{3\times 3}\\ -\big{(}\hat{\mathbf{R}}\,\big{(}\boldsymbol{a}-\hat{\boldsymbol{b}}_{a}\big{)} \big{)}^{\wedge}&\boldsymbol{0}_{3\times 3}&\boldsymbol{0}_{3\times 3}\\ &\boldsymbol{0}_{3\times 3}&\mathbf{I}_{3}&\boldsymbol{0}_{3\times 3}\end{array} \right]\in\mathbb{R}^{9\times 9}.\] Position measurements are linear with respect to the defined error; therefore, the output matrix \(\mathbf{C}^{0}\) yields \[\mathbf{C}^{0}=\big{[}\boldsymbol{0}_{3\times 6}\,\,\,\mathbf{I}_{3}\,\, \boldsymbol{0}_{3\times 3}\,\,\boldsymbol{0}_{3\times 6}\big{]}\in\mathbb{R}^{3\times 15}.\] (A.4) It is straightforward to verify that the derived equivariant filter is equivalent to the well-known MeKF, and the EqF state matrix in Equ. (A.3) corresponds directly to the state matrix of the MeKF (23, Sec. 7) ### Imperfect-IEKF #### a.4.1 Overview The state space is defined as \(\mathcal{M}\coloneqq\mathcal{SE}_{2}(3)\times\mathbb{R}^{6}\) with \(\xi\coloneqq(\mathbf{T},\boldsymbol{b})\in\mathcal{M}\). One has \(\mathbf{T}=(\mathbf{R},\boldsymbol{v},\boldsymbol{p})\in\mathcal{SE}_{2}(3)\) and \(\hat{\boldsymbol{b}}=(\boldsymbol{b}_{\boldsymbol{\omega}},\boldsymbol{b}_{a}) \in\mathbb{R}^{6}\). Choose the origin to be \(\hat{\xi}=(\mathbf{I}_{5},\boldsymbol{0}_{6\times 1})\in\mathcal{M}\). The velocity input is given by \(u\coloneqq(\boldsymbol{\omega},\boldsymbol{a},\boldsymbol{\tau}_{ \boldsymbol{\omega}},\boldsymbol{\tau}_{a})\). The symmetry group of Imperfect-IEKF is given by \(\hat{\mathbf{C}}_{\mathbf{E}\mathbf{S}}:\mathbf{SE}_{2}(3)\times\mathbb{R}^{6}\). Define the filter state \(\hat{X}=(\hat{C},\hat{\gamma})\in\mathbf{G}_{\mathbf{E}\mathbf{S}}\) with \(\hat{C}=(\hat{A},\hat{a},\hat{b})\in\mathbf{SE}_{2}(3)\) and \(\hat{\gamma}=(\hat{\gamma}_{\omega},\hat{\gamma}_{a})\in\mathbb{R}^{6}\). The state estimate is given by \[\hat{\xi}\coloneqq\phi(\hat{X},\hat{\xi})=(\hat{A},\hat{\gamma})=(\hat{\Upsilon },\hat{b}).\] (A.5) The state error is defined as \[e\coloneqq\phi(\hat{X}^{-1},\xi) =(\mathbf{T}\hat{C}^{-1},\boldsymbol{b}-\hat{\gamma}),\] (A.6) \[=(\mathbf{T}\,\hat{\Upsilon}^{-1},\boldsymbol{b}-\hat{b}).\] #### a.4.2 Error dynamics The error dynamics related to Equ. 
(A.6) for each state is given by \[\dot{e}_{R} =-e_{R}(\hat{\mathbf{R}}e_{b_{\omega}})^{\wedge}\,\text{(Derivation same as MEKF)};\] \[\dot{e}_{v} =\frac{\mathrm{d}}{\mathrm{d}t}\big{(}-\mathbf{R}\,\hat{\mathbf{ R}}^{\top}+\mathbf{v}\big{)}=\frac{\mathrm{d}}{\mathrm{d}t}\big{(}-\mathbf{R}\,\hat{ \mathbf{R}}^{\top}\,\hat{\mathbf{v}}+\mathbf{v}\big{)}\] \[=-\dot{e}_{R}\,\hat{\mathbf{v}}-e_{R}\,\dot{\hat{\mathbf{v}}}+\dot{\mathbf{v}}\] \[=e_{R}(\hat{\mathbf{R}}e_{b_{\omega}})^{\wedge}\,\hat{\mathbf{v}}-e_{ R}\,\hat{\mathbf{R}}\big{(}\mathbf{a}-\hat{\mathbf{b}}_{a}\big{)}-e_{R}\,\mathbf{g}+\mathbf{R} \,\big{(}\mathbf{a}-\mathbf{b}_{a}\big{)}+\mathbf{g}\] \[=e_{R}(\hat{\mathbf{R}}e_{b_{\omega}})^{\wedge}\,\hat{\mathbf{v}}-e_{ R}\,\hat{\mathbf{R}}\big{(}\mathbf{a}-\hat{\mathbf{b}}_{a}\big{)}-e_{R}\,\mathbf{g}\] \[\dot{e}_{b} =\frac{\mathrm{d}}{\mathrm{d}t}\big{(}-\mathbf{R}\,\hat{\mathbf{ R}}^{\top}\,\hat{\mathbf{p}}+\mathbf{p}\big{)}\] \[=-\dot{e}_{R}\,\hat{\mathbf{p}}-e_{R}\,\dot{\hat{\mathbf{p}}}+\dot{\mathbf{p}}\] \[=e_{R}(\hat{\mathbf{R}}e_{b_{\omega}})^{\wedge}\,\hat{\mathbf{p}}-e_{ R}\,\hat{\mathbf{v}}+\mathbf{v}\] \[=e_{R}(\hat{\mathbf{R}}e_{b_{\omega}})^{\wedge}\,\hat{\mathbf{p}}+e_{ v};\] \[\dot{e}_{b} =\mathbf{0}.\] The local coordinate chart \(\varepsilon=\log_{\mathbf{G}\mathbf{E}\mathbf{S}}\circ\phi_{\hat{\xi}}^{-1}(e)\) for each state is given by \[\varepsilon_{T}\coloneqq\log_{\mathbf{S}\mathbf{E}_{2}(3)}(\phi_ {\hat{\xi}}^{-1}(e_{T}))^{\vee}=\log_{\mathbf{S}\mathbf{E}_{2}(3)}(e_{T})^{ \vee}\in\mathbb{R}^{9}\] \[\varepsilon_{b_{\omega},b_{a}}\coloneqq e_{b_{\omega},b_{a}} \in\mathbb{R}^{3}.\] The linearization of \(\dot{e}_{R}\) is the same as the derivation in MEKF, given by \[\dot{\varepsilon}_{R}=\hat{\mathbf{R}}\varepsilon_{b_{\omega}}+O(\varepsilon_ {R}{}^{2}).\] The linearization of \(\dot{e}_{v}=\dot{\varepsilon}_{v}+\mathcal{O}(\varepsilon^{2})\) is given by \[\dot{\varepsilon}_{v} =-(\mathrm{I}+\varepsilon_{R}^{\wedge}+O(\varepsilon_{R}{}^{2})) (\hat{\mathbf{R}}\varepsilon_{b_{\omega}})^{\wedge}\,\hat{\mathbf{v}}\] \[\quad-(\mathrm{I}+\varepsilon_{R}^{\wedge}+O(\varepsilon_{R}{}^{2 }))\,\hat{\mathbf{R}}\big{(}\mathbf{a}-\dot{b}_{a}\big{)}\] \[\quad-(\mathrm{I}+\varepsilon_{R}^{\wedge}+O(\varepsilon_{R}{}^{2 }))\,\hat{\mathbf{g}}\] \[\quad+(\mathrm{I}+\varepsilon_{R}^{\wedge}+O(\varepsilon_{R}{}^{2 }))\,\hat{\mathbf{R}}\big{(}\mathbf{a}-\varepsilon_{b_{a}}+\hat{\mathbf{b}}_{a}\big{)}\] \[\quad+\mathbf{g}+\mathcal{O}(\varepsilon^{2})\] \[=-\,\hat{\mathbf{v}}^{\wedge}\,\hat{\mathbf{R}}\varepsilon_{b_{\omega} }-\hat{\mathbf{R}}\varepsilon_{b_{\omega}}+\mathbf{g}^{\wedge}\varepsilon_{R}+ \mathcal{O}(\varepsilon^{2}).\] The linearization of \(\dot{e}_{p}=\dot{\varepsilon}_{p}+\mathcal{O}(\varepsilon^{2})\) is given by \[\dot{\varepsilon}_{p} =(\mathrm{I}+\varepsilon_{R}^{\wedge}+O(\varepsilon_{R}{}^{2})) (\hat{\mathbf{R}}e_{b_{\omega}})^{\wedge}\,\hat{\mathbf{p}}+\varepsilon_{v}+ \mathcal{O}(\varepsilon^{2})\] \[=\varepsilon_{v}-\hat{\mathbf{p}}^{\wedge}\varepsilon_{b_{\omega}}+ \mathcal{O}(\varepsilon^{2}).\] The linearization of \(\dot{e}_{b}=\dot{\varepsilon}_{b}\) is given by \(\dot{\varepsilon}_{b}=\mathbf{0}\). 
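The normal coordinates \(\varepsilon_{T}=\log_{\mathbf{SE}_{2}(3)}(e_{T})^{\vee}\) used above can be computed numerically from the standard \(5\times 5\) homogeneous realization of \(\mathbf{SE}_{2}(3)\). The sketch below is a hedged illustration; SciPy, the helper names, and the \((\theta,\nu,\rho)\) ordering of the \(\vee\) map are assumptions of this sketch, not conventions taken from the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se23_matrix(R, v, p):
    """Standard 5x5 realization of an extended pose (R, v, p) in SE_2(3)."""
    X = np.eye(5)
    X[:3, :3] = R
    X[:3, 3] = v
    X[:3, 4] = p
    return X

def se23_log_vee(X):
    """Local coordinates (theta, nu, rho) in R^9 of an SE_2(3) element."""
    L = logm(X).real
    theta = np.array([L[2, 1], L[0, 2], L[1, 0]])
    return np.concatenate([theta, L[:3, 3], L[:3, 4]])

# Round trip: coordinates -> group element -> coordinates.
eps = 0.1 * np.arange(1, 10)
L = np.zeros((5, 5))
L[:3, :3] = skew(eps[:3]); L[:3, 3] = eps[3:6]; L[:3, 4] = eps[6:]
X = expm(L)
print(np.allclose(se23_log_vee(X), eps))   # True
```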
#### a.4.3 Filter design The linearized error state matrix \(\mathbf{A}_{t}^{0}\mid\dot{\mathbf{\varepsilon}}\simeq\mathbf{A}_{t}^{0}\,\mathbf{\varepsilon}\) yields \[\mathbf{A}_{t}^{0}=\left[\begin{array}{ccc}\vdots&-\hat{\mathbf{R}}&\mathbf{0} _{3\times 3}\\ \mathbf{2}\,\hat{\mathbf{A}}\,\dot{\mathbf{v}}^{\wedge}\,\hat{\mathbf{R}}&-\hat{\mathbf{R}} \\ \vdots&-\hat{\mathbf{p}}^{\wedge}\,\hat{\mathbf{R}}&\mathbf{0}_{3\times 3}\\ \ldots\,\vdots&\cdots\,\cdots\,\cdots\,\cdots\,\cdots\,\cdots\\ \mathbf{0}_{6\times 9}\,\vdots&\mathbf{0}_{6\times 6}\end{array}\right]\in\mathbb{R}^{15 \times 15},\] (A.7) where \[{}_{2}\mathbf{A}=\left[\begin{array}{ccc}\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3 \times 3}\\ \mathbf{g}^{\wedge}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}\end{array}\right]\in \mathbb{R}^{9\times 9}\] (A.8) Position measurements formulated according to Equ. (30) are equivariant, yielding the following output matrix \[\mathbf{C}^{\star}=\left[\begin{array}{ccc}\frac{1}{2}\left(y+\hat{\mathbf{p}} \right)^{\wedge}&\mathbf{0}_{3\times 3}&-\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 6} \end{array}\right]\in\mathbb{R}^{3\times 15}.\] (A.9) It is trivial to verify that the state matrix in Equ. (A.7), derived according to equivariant filter design principles, directly corresponds to the state matrix in the Imperfect-IEKF [10, Sec. 7]. ### TG-EqF #### a.5.1 Overview The state space is defined as \(\mathcal{M}\coloneqq\mathcal{S}\mathcal{E}_{2}(3)\times\mathbb{R}^{9}\) with \(\xi\coloneqq(\mathbf{T},\mathbf{b})\in\mathcal{M}\). One has \(\mathbf{T}=(\mathbf{R},\mathbf{v},\mathbf{p})\in\mathcal{S}\mathcal{E}_{2}(3)\) and \(\mathbf{b}=(\mathbf{b}_{\omega},\mathbf{b}_{a},\mathbf{b}_{\nu})\in\mathbb{R}^{9}\). Choose the origin to be \(\dot{\xi}=(\mathbf{I}_{5},\mathbf{0}_{9\times 1})\in\mathcal{M}\). The velocity input is given by \(u\coloneqq(\mathbf{\omega},\mathbf{a},\mathbf{\nu},\mathbf{\tau}_{\omega},\mathbf{\tau}_{a},\mathbf{ \tau}_{\nu})\). The symmetry group of TG-EqF is given by \(\mathbf{G}_{\mathbf{T}\mathbf{G}}:\mathbf{S}\mathbf{E}_{2}(3)\times\mathbf{ \mathfrak{s}}\mathfrak{e}_{2}(3)\). Define the filter state \(\hat{X}=(\hat{C},\hat{\gamma})\in\mathbf{G}_{\mathbf{T}\mathbf{G}}\quad\text{ with}\quad\hat{C}=(\hat{A},\hat{a},\hat{b})\in\mathbf{S}\mathbf{E}_{2}(3)\) and \(\hat{\gamma}=(\hat{\gamma}_{\omega},\hat{\gamma}_{a},\hat{\gamma}_{\omega})^{ \wedge}\in\mathfrak{s}\mathfrak{e}_{2}(3)\). The state estimate is given by \[\dot{\xi}\coloneqq\phi(\hat{X},\hat{\xi})=(\hat{C},\text{Ad}_{\hat{C}^{-1}}(- \hat{\gamma}^{\vee}))=(\hat{\mathbf{T}},\hat{\mathbf{b}}).\] (A.10) The state error is defined as \[e\coloneqq\phi(\hat{X}^{-1},\xi) =(\mathbf{T}\hat{C}^{-1},\mathbf{Ad}_{\hat{C}}^{\vee}(\mathbf{b}+ \text{Ad}_{\hat{C}^{-1}}\,[\hat{\gamma}]^{\vee}))\] (A.11) \[=(\mathbf{T}\hat{C}^{-1},\mathbf{Ad}_{\hat{C}}^{\vee}\,\mathbf{b}+ \hat{\gamma}^{\vee})\] (A.12) \[=(\mathbf{T}\,\hat{\mathbf{T}}^{-1},\mathbf{Ad}_{\mathbf{T}}^{\vee} (\mathbf{b}- #### a.5.2 Error dynamics Navigation statesThe error dynamics related to Equ. 
(A.11) for the navigation states \(e_{T}=\mathbf{T}\,\hat{\mathbf{T}}^{-1}\) is given by \[\dot{e}_{T} =\dot{\mathbf{T}}\,\hat{\mathbf{T}}^{-1}-\mathbf{T}\,\hat{\mathbf{T }}^{-1}\,\hat{\mathbf{T}}\,\hat{\mathbf{T}}^{-1}\] \[=\mathbf{T}(\mathbf{W}-\mathbf{B}+\mathbf{D})\,\hat{\mathbf{T}}^ {-1}+(\mathbf{G}-\mathbf{D})\,\mathbf{T}\,\hat{\mathbf{T}}^{-1}\] \[\quad-e_{T}\,\hat{\mathbf{T}}(\mathbf{W}-\hat{\mathbf{B}}+ \mathbf{D})\,\hat{\mathbf{T}}^{-1}-e_{T}(\mathbf{G}-\mathbf{D})\,\hat{\mathbf{ T}}\,\hat{\mathbf{T}}^{-1}\] \[=e_{T}\,\hat{\mathbf{T}}(\mathbf{W}-\mathbf{B}+\mathbf{D})\,\hat{ \mathbf{T}}^{-1}-e_{T}\,\hat{\mathbf{T}}(\mathbf{W}-\hat{\mathbf{B}}+\mathbf{ D})\,\hat{\mathbf{T}}^{-1}\] \[\quad+(\mathbf{G}-\mathbf{D})e_{T}-e_{T}(\mathbf{G}-\mathbf{D})\] \[=e_{T}\mathrm{Ad}_{\hat{\mathbf{T}}}\left[\hat{\mathbf{B}}- \mathbf{B}\right]+(\mathbf{G}-\mathbf{D})e_{T}-e_{T}(\mathbf{G}-\mathbf{D}).\] The above dynamics can be separate to two parts: \(\dot{e}_{T}=\dot{e}_{T_{W}}+\dot{e}_{T_{G}}\), where \(\dot{e}_{T_{W}}=e_{T}\mathrm{Ad}_{\hat{\mathbf{T}}}\left[\hat{\mathbf{B}}- \mathbf{B}\right]\) and \(\dot{e}_{T_{G}}=(\mathbf{G}-\mathbf{D})e_{T}-e_{T}(\mathbf{G}-\mathbf{D})\). The linearization can be derived separately for each part. The local coordinate chart is given by \(\varepsilon=\log\phi_{\hat{\xi}}^{-1}(e)\). For each state, one has \[e_{T} =\exp_{\mathbf{SE}_{2}(3)}(\varepsilon_{T}{}^{\wedge}),\] \[e_{b} =\mathrm{Ad}_{e_{T}{}^{-1}}\left[(-J_{l}(\varepsilon_{T}) \varepsilon_{b})^{\wedge}\right],\] where the exponential map \(\exp_{\mathbf{SE}_{2}(3)\approx\mathrm{ad}_{2}(3)}\) is derived from the semi-direct product structure, and \(J_{l}(\varepsilon_{T})\) is the left Jacobian of \(\mathbf{SE}_{2}(3)\), given by \[J_{l}(\varepsilon_{T})=\sum_{k=0}^{\infty}\frac{1}{(k+1)!}\,\mathrm{ad}_{ \varepsilon_{T}}^{k}\,.\] Recall that by definition Equ. (A.11) one has \(e_{b}^{\wedge}=\mathrm{Ad}_{\hat{\mathbf{T}}}\left[(\boldsymbol{b}-\hat{ \boldsymbol{b}})^{\wedge}\right]\) with \(\boldsymbol{b}^{\wedge}=\mathbf{B}\), and \(\hat{\boldsymbol{b}}^{\wedge}=\hat{\mathbf{B}}\). Hence, for \(\dot{e}_{T_{W}}\) one has \[\dot{e}_{T_{W}}=e_{T}\mathrm{Ad}_{\hat{\mathbf{T}}}\left[\hat{\mathbf{B}}- \mathbf{B}\right]=-e_{T}e_{b}^{\wedge}.\] Substituting the local coordinate yields \[\mathrm{D}\exp_{\mathbf{SE}_{2}(3)}(\varepsilon_{T}{}^{\wedge})[ \hat{\varepsilon}_{T_{W}}{}^{\wedge}] =-e_{T}\mathrm{Ad}_{e_{T}{}^{-1}}\left[(-J_{l}(\varepsilon_{T}) \varepsilon_{b})^{\wedge}\right]\] \[e_{T}\frac{\mathrm{I}-\exp(-\,\mathrm{ad}_{\varepsilon_{T}{}^{ \wedge}})}{\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}}}\dot{\varepsilon}_{T_{W}}{} ^{\wedge} =-e_{T}\mathrm{Ad}_{e_{T}{}^{-1}}\left[(-J_{l}(\varepsilon_{T}) \varepsilon_{b})^{\wedge}\right]\] \[\mathrm{Ad}\,e_{T}\frac{\mathrm{I}-\exp(-\,\mathrm{ad}_{ \varepsilon_{T}{}^{\wedge}})}{\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}}}\dot{ \varepsilon}_{T_{W}}{}^{\wedge} =(J_{l}(\varepsilon_{T})\varepsilon_{b})^{\wedge}.\] (A.14) Because \(\mathrm{Ad}_{e_{T}}=\mathrm{Ad}_{\exp(\varepsilon_{T}{}^{\wedge})}=\exp( \mathrm{ad}_{\varepsilon_{T}{}^{\wedge})}\), the term on the left side in Equ. 
(A.14) can be modified as \[\mathrm{Ad}_{e_{T}}\,\frac{\mathrm{I}-\exp(-\,\mathrm{ad}_{ \varepsilon_{T}{}^{\wedge}})}{\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}}} =\exp(\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}})\frac{\mathrm{I}- \exp(-\,\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}})}{\mathrm{ad}_{ \varepsilon_{T}{}^{\wedge}}}\] \[=\frac{\exp(\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}})-\mathrm{I}} {\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}}}\quad(\text{Expanding exp},)\] \[=\sum_{k=0}^{\infty}\frac{1}{(k+1)!}\,\mathrm{ad}_{\varepsilon_{T }{}^{\wedge}}^{k}=J_{l}(\varepsilon_{T}).\] (A.15) Hence, for the linearization of \(\dot{e}_{T_{W}}\), one has \[\dot{\varepsilon}_{T_{W}}=\varepsilon_{b}.\] For the second part \(\dot{e}_{T_{G}}=\mathrm{D}\exp_{\mathbf{SE}_{2}(3)}(\varepsilon_{T}{}^{ \wedge})[\hat{\varepsilon}_{T_{G}}{}^{\wedge}]\), one has \[e_{T}\frac{\mathrm{I}-\exp(-\,\mathrm{ad}_{\varepsilon_{T}{}^{ \wedge}})}{\mathrm{ad}_{\varepsilon_{T}{}^{\wedge}}}\dot{\varepsilon}_{T_{G}}{} ^{\wedge}=\begin{bmatrix}\mathbf{0}_{3\times 3}&\boldsymbol{g}-e_{R}\,\boldsymbol{g}&e_{v}\\ \mathbf{0}_{1\times 3}&0&0\\ \mathbf{0}_{1\times 3}&0&0\end{bmatrix}.\] (A.16) Multiply both sides of Equ. (A.16) by \(e_{T}{}^{-1}\) and then apply \(\mathrm{Ad}_{e_{T}}\) to both sides: \[\mathrm{Ad}_{e_{T}}\,\frac{\mathrm{I}-\exp(-\,\mathrm{ad}_{ \varepsilon_{T}})}{\mathrm{ad}_{e_{T}}}\dot{\varepsilon}_{T_{G}} =\mathrm{Ad}_{e_{T}}\,e_{T}{}^{-1}\begin{bmatrix}\mathbf{0}_{3\times 3}& \boldsymbol{g}-e_{R}\,\boldsymbol{g}&e_{v}\\ \mathbf{0}_{1\times 3}&0&0\\ \mathbf{0}_{1\times 3}&0&0\end{bmatrix}\] \[=\begin{bmatrix}\mathbf{0}_{3\times 3}&\boldsymbol{g}-e_{R}\, \boldsymbol{g}&e_{v}\\ \mathbf{0}_{1\times 3}&0&0\\ \mathbf{0}_{1\times 3}&0&0\end{bmatrix}.\] Use the result from Equ. (A.15): \[J_{l}(\varepsilon_{T})\dot{\varepsilon}_{T_{G}}=\begin{bmatrix}\mathbf{0}_{3 \times 3}&(\mathrm{I}-e_{R})\,\boldsymbol{g}&J_{l}(\varepsilon_{R})\varepsilon_{v} \\ \mathbf{0}_{1\times 3}&0&0\\ \mathbf{0}_{1\times 3}&0&0\end{bmatrix}.\] (A.17) Note that: \[\mathrm{I}-e_{R} =\mathrm{I}-\exp(\varepsilon_{R}{}^{\wedge})\] \[=\mathrm{I}-\sum_{k=0}^{\infty}\frac{1}{k!}\varepsilon_{R}{}^{ \wedge k}\] \[=-(\sum_{k=0}^{\infty}\frac{1}{(k+1)!}\varepsilon_{R}{}^{\wedge k}) \varepsilon_{R}{}^{\wedge}\] \[=-J_{l}(\varepsilon_{R})\varepsilon_{R}{}^{\wedge}.\] One can then simplify Equ. 
(A.17) to \[\dot{\varepsilon}_{T_{G}}=\begin{bmatrix}\mathbf{0}_{3\times 3}&\boldsymbol{g}^{ \wedge}\varepsilon_{R}&\varepsilon_{v}\\ \mathbf{0}_{1\times 3}&0&0\\ \mathbf{0}_{1\times 3}&0&0\end{bmatrix}.\] Combining the results for \(\dot{\varepsilon}_{T_{W}}\) and \(\dot{\varepsilon}_{T_{G}}\), one has \[\dot{\varepsilon}_{T}=(\varepsilon_{h_{\omega}},\,\varepsilon_{h_{\omega}}+ \boldsymbol{g}^{\wedge}\varepsilon_{R},\,\varepsilon_{b_{\omega}}+\varepsilon _{v})^{\wedge}.\] Bias statesThe linearization of the bias error are derived from the formula given by \[\dot{\varepsilon}=\mathbf{A}_{t}^{0}\varepsilon+\mathcal{O}( \varepsilon^{2}),\] \[\mathbf{A}_{t}^{0}=\mathrm{D}_{e}\big{|}_{\dot{\xi}}\,\vartheta \left(e\right)\,\mathrm{D}_{E}\big{|}_{\dot{\xi}}\,\phi_{\dot{\xi}}\left(E \right)\,\mathrm{D}_{e}\big{|}_{\dot{\xi}}\,\Lambda\left(e,u^{\circ}\right)\, \mathrm{D}_{e}\big{|}_{\boldsymbol{0}}\,\vartheta^{-1}\left(\varepsilon \right).\] In this case, because we choose normal coordinates as the local coordinate chart, that is, \(\vartheta\coloneqq\log\circ\phi_{\dot{\xi}}^{-1}\), we have \[\dot{\varepsilon}=\mathrm{D}_{e}\big{|}_{\dot{\xi}}\,\Lambda\left(e,\dot{u} \right)\,\mathrm{D}_{E}\big{|}_{\dot{\eta}}\,\phi_{\dot{\xi}}(E)[\varepsilon ]+\mathcal{O}(\varepsilon^{2}).\] Evaluating \(\mathrm{D}\phi_{\dot{\xi}}\) at \(\mathrm{I}\) with direction \([\varepsilon_{T},\varepsilon_{b}]\) yields \[\mathrm{D}\phi_{\dot{\xi}}(\mathrm{I})[\varepsilon_{T},\varepsilon_{b}]=( \varepsilon_{T}{}^{\wedge},-\varepsilon_{b}{}^{\wedge}).\] Evaluating \(\mathrm{D}\Lambda_{\dot{u}}\) at \(\dot{\xi}\) with direction \([\varepsilon_{T}{}^{\wedge},-\varepsilon_{b}{}^{\wedge}]\) yields \[\mathrm{D}\Lambda_{\dot{u}}(\dot{\xi})[\varepsilon_{T}{}^{ \wedge},-\varepsilon_{b}{}^{\wedge}] =(\sim,\mathrm{ad}_{-\varepsilon_{b}{}^{\wedge}}\big{[}\Lambda _{1}(\dot{\xi},\dot{u})\big{]})\] \[=(\sim,\mathbf{ad}_{(\dot{u}\wedge+\mathbf{G})}^{\vee}\varepsilon _{b}).\] Hence the linearization of bias error is given by \[\dot{\varepsilon}_{b}=\mathbf{ad}_{(\dot{u}\wedge+\mathbf{G})}^{\vee} \varepsilon_{b}+\mathcal{O}(\varepsilon^{2}).\] #### a.5.3 Filter design The linearized error state matrix \(\mathbf{A}_{t}^{0}\,|\,\,\dot{\boldsymbol{\varepsilon}}\simeq\mathbf{A}_{t}^ {0}\,\boldsymbol{\varepsilon}\,\) is defined according to \[\mathbf{A}_{t}^{0}=\begin{bmatrix}\begin{smallmatrix}\vspace{0.05cm}2\mathbf{A }&\vdots&\mathbf{I}_{9}\\ \cdots\cdots\vdots\cdots\cdots\cdots\cdots\\ \mathbf{0}_{\eta\times\eta}&\vdots\mathbf{ad}_{(\dot{u}\wedge+\mathbf{G})}^{ \vee}\end{smallmatrix}\end{bmatrix}\in\mathbb{R}^{18\times 18},\] (A.18) with \({}_{2}\mathbf{A}\) in Equ. (A.8). Position measurements formulated according to Equ. 
(30) are equivariant, yielding the following output matrix \[\mathbf{C}^{\star}=\begin{bmatrix}\frac{1}{2}\left(y+\hat{\boldsymbol{p}} \right)^{\wedge}&\mathbf{0}_{3\times 3}&-\mathbf{I}_{3}&\mathbf{0}_{3\times 6}& \mathbf{0}_{3\times 3}\end{bmatrix}\in\mathbb{R}^{3\times 15}.\] (A.19) Moreover, an additional constraint can be imposed on the virtual bias \(\boldsymbol{b}_{\nu}\); that is, an additional measurement in the form of \(h\left(\xi\right)=\boldsymbol{b}_{\nu}=\mathbf{0}\in\mathbb{R}^{3}\) can be considered, leading to the following output matrix \[\mathbf{C}^{0}=\begin{bmatrix}\mathbf{0}_{3\times 3}&\mathbf{0}_{3 \times 3}&\mathbf{0}_{3\times 3}&-\hat{\mathbf{R}}^{\top}&\mathbf{0}_{3 \times 3}&\hat{\mathbf{R}}^{\top}\,\hat{\boldsymbol{p}}^{\wedge}\end{bmatrix}\in \mathbb{R}^{3\times 15}.\] (A.20) Note that for a practical implementation of the presented EqF the virtual inputs \(\boldsymbol{\nu}\) is set to zero. It is worth noticing that the EqF built on the \(\mathbf{G}_{\mathbf{TG}}\) symmetry group is the only filter with exact linearization of the navigation error dynamics. ### DP-EqF #### a.6.1 Overview The state space is defined as \(\mathcal{M}\coloneqq\mathcal{H}\mathcal{G}(3)\times\mathbb{R}^{3}\times \mathbb{R}^{6}\) with \(\xi\coloneqq(\mathbf{T},\boldsymbol{p},\boldsymbol{b})\in\mathcal{M}\). One has \(\mathbf{T}=(\mathbf{R},\boldsymbol{v})\in\mathcal{H}\mathcal{G}(3)\) and \(\boldsymbol{b}=(\boldsymbol{b}_{\omega},\boldsymbol{b}_{a})\in\mathbb{R}^{6}\). Choose the origin to be \(\dot{\xi}=(\mathbf{I}_{4},\mathbf{0}_{6\times 1},\mathbf{0}_{3\times 1})\in\mathcal{M}\). The velocity input is given by \(u\coloneqq(\boldsymbol{\omega},\boldsymbol{a},\boldsymbol{\tau}_{\boldsymbol{ \omega}},\boldsymbol{\tau}_{a},\boldsymbol{\nu})\). The symmetry group of DP-EqF is given by \(\mathbf{G}_{\mathbf{DP}}:\mathbf{HG}(3)\times\mathbb{R}^{3}\times\mathbb{R}^ {3}\). Define the filter state \(\hat{X}=(\hat{B},\hat{\beta},\hat{c})\in\mathbf{G}_{\mathbf{DP}}\quad\text{ with}\quad\hat{B}=(\hat{A},\hat{a})\in\mathbf{HG}(3)\quad\text{and}\quad\hat{\beta}=(\hat{ \beta}_{\omega},\hat{\beta}_{a})^{\wedge}\in\mathfrak{hg}(3)\). The state estimate is given by \[\hat{\xi}\coloneqq\phi(\hat{X},\hat{\xi})=(\hat{B},\mathrm{Ad}_{\hat{B}^{-1}} (-\hat{\beta}),\hat{c})=(\hat{\mathbf{T}},\hat{\boldsymbol{b}},\hat{ \boldsymbol{p}}).\] (A.21) The state error is defined as \[e=\phi(\hat{X}^{-1},\xi) =(\mathbf{T}\hat{B}^{-1},\mathbf{Ad}_{\hat{B}}^{\vee}\,\boldsymbol {b}+\hat{\beta}^{\vee},\boldsymbol{p}-\hat{c})\] (A.22) \[=(\mathbf{T}\,\hat{\mathbf{T}}^{-1},\mathbf{Ad}_{\hat{\mathbf{T}} }^{\vee}(\boldsymbol{b}-\hat{\boldsymbol{b}}),\boldsymbol{p}-\hat{p})\] (A.23) #### a.6.2 Error dynamics Navigation statesBecause of the semi-direct product structure related to the rotation and velocity states and the corresponding bias states, the derivation of the error dynamics of the \(\mathcal{H}\mathcal{G}(3)\) part is similar to the one in TG-EqF. 
In this case, one has \[\dot{\varepsilon}_{R} =\varepsilon_{b_{\omega}},\] \[\dot{\varepsilon}_{v} =\varepsilon_{b_{a}}+\boldsymbol{g}^{\wedge}\varepsilon_{R}.\] For the position error, one has \[\dot{\varepsilon}_{p} =\dot{\varepsilon}_{p}=\dot{\boldsymbol{p}}-\dot{\boldsymbol{p}}= \mathbf{R}\,\boldsymbol{\nu}+\boldsymbol{v}-\hat{\mathbf{R}}\,\boldsymbol{\nu}- \hat{\boldsymbol{v}}\] \[=e_{v}+e_{R}\,\hat{\boldsymbol{v}}-\hat{\boldsymbol{v}}+e_{R}\, \hat{\mathbf{R}}\,\boldsymbol{\nu}-\hat{\mathbf{R}}\,\boldsymbol{\nu}\] \[=J_{l}(\varepsilon_{R})\varepsilon_{v}+\left(\mathrm{I}+ \varepsilon_{R}^{\wedge}+O({\varepsilon_{R}}^{2})\right)\left(\hat{\mathbf{R} }\,\boldsymbol{\nu}+\hat{\boldsymbol{v}}\right)-\left(\hat{\mathbf{R}}\, \boldsymbol{\nu}+\hat{\boldsymbol{v}}\right)\] \[=J_{l}(\varepsilon_{R})\varepsilon_{v}+\left(\mathrm{I}+ \varepsilon_{R}^{\wedge}+O({\varepsilon_{R}}^{2})\right)\hat{\boldsymbol{\nu} }-\hat{\boldsymbol{\nu}}\] \[=\varepsilon_{v}-\dot{\boldsymbol{\nu}}^{\wedge}\varepsilon_{R}+ \mathcal{O}(\varepsilon^{2}).\] Bias statesThe derivation of bias error dynamics is the same as TG-EqF: \[\dot{\varepsilon}_{b}=\mathbf{ad}_{(\dot{\boldsymbol{w}}^{\wedge}+\mathbf{G} )}^{\vee}\varepsilon_{b}+\mathcal{O}(\varepsilon^{2}).\] #### a.6.3 Filter design The linearized error state matrix \(\mathbf{A}_{t}^{0}\mid\dot{\boldsymbol{\varepsilon}}\simeq\mathbf{A}_{t}^{0}\, \boldsymbol{\varepsilon}\) is defined according to \[\mathbf{A}_{t}^{0}=\begin{bmatrix}_{3}\mathbf{A}&\mathbf{I}_{6}&\mathbf{0}_{6 \times 3}\\ \mathbf{0}_{6\times 6}&\mathbf{ad}_{(\dot{\boldsymbol{w}}^{\wedge}+\mathbf{G})^{ \vee}}^{\vee}&\mathbf{0}_{6\times 3}\\ _{4}\mathbf{A}&\mathbf{0}_{3\times 6}&\mathbf{0}_{3\times 3}\end{bmatrix} \in\mathbb{R}^{15\times 15},\] (A.24) where \[{}_{3}\mathbf{A} =\begin{bmatrix}\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}\\ \boldsymbol{g}^{\wedge}&\mathbf{0}_{3\times 3}\end{bmatrix}\in\mathbb{R}^{6 \times 6}\] \[{}_{4}\mathbf{A} =\begin{bmatrix}-\hat{\boldsymbol{\nu}}^{\wedge}&\mathbf{I}_{3} \end{bmatrix}\in\mathbb{R}^{3\times 6}.\] Position measurements are linear; therefore, the output matrix is written \[\mathbf{C}^{0}=\begin{bmatrix}\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}& \mathbf{0}_{3\times 6}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}\end{bmatrix}\in\mathbb{R}^{3 \times 15}.\] (A.25) Similar to the previous filter, for a practical implementation of the presented EqF, the virtual input \(\boldsymbol{\nu}\) is set to zero. ### SD-EqF #### a.7.1 Overview The state space is defined as \(\mathcal{M}\!:=\!\mathcal{S}\mathcal{E}_{2}(3)\!\times\!\mathbb{R}^{6}\) with \(\xi\!:=\!(\mathbf{T},\boldsymbol{b})\!\in\!\mathcal{M}\). One has \(\mathbf{T}\!=\!(\mathbf{R},\boldsymbol{v},\boldsymbol{p})\!\in\!\mathcal{S} \mathcal{E}_{2}(3)\) and \(\boldsymbol{b}\!=\!(\boldsymbol{b}_{\boldsymbol{\omega}},\boldsymbol{b}_{a}) \!\in\!\mathbb{R}^{6}\). Choose the origin to be \(\dot{\xi}\!=\!(\mathbf{I}_{5},\mathbf{0}_{6\times 1})\!\in\!\mathcal{M}\). The velocity input is given by \(u\coloneqq(\boldsymbol{\omega},\boldsymbol{a},\boldsymbol{\tau}_{\boldsymbol{ \omega}},\boldsymbol{\tau}_{a})\). The symmetry group of TG-EqF is given by \(\mathbf{G}_{\mathbf{SD}}\!:\!\mathbf{SE}_{2}(3)\!\times\!\mathbf{se}(3)\). Define the filter state \(\hat{X}\!=\!(\hat{C},\hat{\gamma})\!\in\!\mathbf{G}_{\mathbf{SD}}\) with \(\hat{C}\!=\!(\hat{A},\hat{a},\hat{b})\!\in\!\mathbf{SE}_{2}(3)\) and \(\hat{\gamma}\!=\!(\hat{\gamma}_{\omega},\hat{\gamma}_{a})^{\wedge}\!\in\! \mathbf{se}(3)\). 
The \(\mathbf{SE}_{2}(3)\) component in \(\hat{X}\) can also be expressed in \(\hat{C}\!=\!(\hat{B},\hat{b})\) where \(\hat{B}\!=\!(\hat{A},\hat{a})\!\in\!\mathbf{HG}(3)\). The state estimate is given by \[\hat{\xi}\coloneqq\phi(\hat{X},\hat{\xi})=(\hat{C},\mathrm{Ad}_{\hat{B}^{-1}}( -\hat{\gamma}^{\vee}))=(\hat{\mathbf{T}},\hat{\boldsymbol{b}}).\] (A.26) The state error is defined as \[e=\phi(\hat{X}^{-1},\xi) =(\mathbf{T}\hat{C}^{-1},\mathbf{Ad}_{\hat{B}}^{\vee}( \boldsymbol{b}+\mathrm{Ad}_{\hat{B}^{-1}}\left[\hat{\gamma}\right]^{\vee}))\] (A.27) \[=(\mathbf{T}\hat{C}^{-1},\mathbf{Ad}_{\hat{B}}^{\vee}\, \boldsymbol{b}+\hat{\gamma}^{\vee}).\] (A.28) #### a.7.2 Error dynamics Navigation statesBecause of the semi-direct product structure related to the rotation and velocity states and the corresponding bias states, the derivation of the error dynamics of the \(\mathbf{HG}(3)\) part is similar to the one in TG-EqF. In this case, one has \[\dot{\varepsilon}_{R} =\varepsilon_{b_{\omega}},\] \[\dot{\varepsilon}_{v} =\varepsilon_{b_{a}}+\boldsymbol{g}^{\wedge}\varepsilon_{R}.\] For the position error \(\dot{\varepsilon}_{p}=\dot{\varepsilon}_{p}+\mathcal{O}(\varepsilon^{2})\), one has \[\dot{\varepsilon}_{p} =\frac{\mathrm{d}}{\mathrm{d}t}(-\mathbf{R}\,\hat{\mathbf{R}}^{ \top}\,\hat{\boldsymbol{p}}+\boldsymbol{p})\] \[=-\dot{e}_{R}\,\hat{\boldsymbol{p}}-e_{R}\hat{\boldsymbol{p}}+ \hat{\boldsymbol{p}}\] \[=e_{R}e_{b_{\omega}}^{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \ #### a.7.3 Filter design The linearized error state matrix \(\mathbf{A}_{t}^{0}\,|\,\,\dot{\boldsymbol{\varepsilon}}\simeq\mathbf{A}_{t}^{0}\, \boldsymbol{\varepsilon}\) is defined according to \[\mathbf{A}_{t}^{0}=\left[\begin{array}{ccc}\vdots&\mathbf{I}_{6}\\ \vdots&\vdots&\\ \vdots&\hat{\boldsymbol{p}}^{\wedge}&\mathbf{0}_{3\times 3}\\ \ldots\vdots&\ldots\ldots\ldots\ldots\\ \mathbf{0}_{6\times 9}\vdots\mathbf{ad}_{\left(\mathbf{Ad}_{\tilde{B}}^{\vee}\, \boldsymbol{w}+\hat{\gamma}\times\mathbf{G}^{\vee}\right)\right)}^{\vee} \in\mathbb{R}^{15\times 15},\] (A.29) When comparing \(\mathbf{A}_{t}^{0}\) in Equ. (A.29) with the one in Equ. (A.18), it is trivial to see the only difference between the two matrices is in the row of \(\mathbf{A}_{t}^{0}\) relative to the position error. This is where the major difference between filters employing the symmetries \(\mathbf{G_{TG}}\), and \(\mathbf{G_{SD}}\) is found. Position measurements formulated according to Equ. (30) are equivariant, yielding the following output matrix \[\mathbf{C}^{*}=\left[\tfrac{1}{2}\left(y+\hat{\boldsymbol{p}}\right)^{\wedge }\,\mathbf{0}_{3\times 3}\,-\mathbf{I}_{3}\,\mathbf{0}_{3\times 6}\,\,\mathbf{I}_{3} \right]\in\mathbb{R}^{3\times 15}.\] (A.30) ### Tfg-Iekf #### a.8.1 Overview The state space is defined as \(\mathcal{M}\coloneqq\mathcal{S}\mathcal{E}_{2}(3)\times\mathbb{R}^{6}\) with \(\xi=(\mathbf{T},\boldsymbol{b})\in\mathcal{M}\). 
One has \(\mathbf{T}=\left(\mathbf{R},\boldsymbol{v},\boldsymbol{p}\right)\in\mathcal{ S}\mathcal{E}_{2}(3)\) and \(\boldsymbol{b}=(\boldsymbol{b}_{\boldsymbol{\omega}},\boldsymbol{b}_{ \boldsymbol{\omega}})\in\mathbb{R}^{6}\). Choose the origin to be \(\hat{\xi}=(\mathbf{I}_{5},\mathbf{0}_{6\times 1})\in\mathcal{M}\). The velocity input is given by \(u\coloneqq(\boldsymbol{\omega},\boldsymbol{a},\boldsymbol{\tau}_{\boldsymbol {\omega}},\boldsymbol{\tau}_{\boldsymbol{a}})\). The symmetry group of TFG-IEKF is given by \(\mathbf{G_{TF}}:\mathbf{SO}(3)\ltimes(\mathbb{R}^{6}\oplus\mathbb{R}^{6})\). Define the filter state \(\hat{X}=(\hat{C},\hat{\gamma})\in\mathbf{G_{TF}}\) with \(\hat{C}=(\hat{A},(\hat{a},\hat{b}))\in\mathbf{SE}_{2}(3)=\mathbf{SO}(3)\ltimes \mathbb{R}^{6}\) and \(\hat{\gamma}=(\hat{\gamma}_{\omega},\hat{\gamma}_{a})\in\mathbb{R}^{6}\). The state estimate is given by \[\hat{\xi}\coloneqq\phi(\hat{X},\hat{\xi})=(\hat{C},\hat{A}^{-1}*(-\hat{\gamma }))=(\hat{\boldsymbol{T}},\hat{\boldsymbol{b}}).\] (A.31) The state error is defined as \[e\coloneqq\phi(\hat{X}^{-1},\xi) =\left(\mathbf{T}\hat{C}^{-1},\hat{A}*(\boldsymbol{b}+\hat{A}^{- 1}*\hat{\gamma})\right)\] (A.32) \[=\left(\mathbf{T}\,\hat{\mathbf{T}}^{-1},\hat{\mathbf{R}}*( \boldsymbol{b}-\hat{\boldsymbol{b}})\right).\] (A.33) #### a.8.2 Error dynamics Navigation statesBecause of the semi-direct product structure related to the rotation state, the error dynamics of the rotation part is the same as TG-EqF: \[\dot{\varepsilon}_{R}=\varepsilon_{b_{\omega}}.\] For the velocity error \(e_{v}=-\mathbf{R}\,\hat{\mathbf{R}}^{\top}\,\hat{\boldsymbol{v}}+\boldsymbol{v}\), one has \[\dot{e}_{v} =-\dot{e}_{R}\,\hat{\boldsymbol{v}}-e_{R}\,\dot{\hat{\boldsymbol{v }}}+\hat{v}\] \[=e_{R}(e_{b_{\omega}})^{\wedge}\,\hat{\boldsymbol{v}}-e_{R}\,\hat {\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})-e_{R}\,\boldsymbol{g}+ \mathbf{R}(\boldsymbol{a}-\boldsymbol{b}_{a})+\boldsymbol{g}\] \[=e_{R}(e_{b_{\omega}})^{\wedge}\,\hat{\boldsymbol{v}}-e_{R}\, \hat{\mathbf{R}}(\boldsymbol{a}-\hat{\boldsymbol{b}}_{a})-(e_{R}-\mathbf{I}) \,\boldsymbol{g}\] \[\quad+e_{R}\,\hat{\mathbf{R}}(\boldsymbol{a}-\hat{\mathbf{R}}^{ \top}(e_{b_{a}}-\gamma_{b_{a}}))\] \[=e_{R}e_{h}^{\wedge}\,\hat{\boldsymbol{v}}-e_{R}e_{b_{a}}-(e_{R}- \Gamma)\,\boldsymbol{g};\] \[\dot{\varepsilon}_{v} =\hat{\boldsymbol{v}}^{\wedge}\varepsilon_{b_{\omega}}+\boldsymbol {g}^{\wedge}\varepsilon_{R}+\varepsilon_{b_{a}}+\mathcal{O}(\varepsilon^{2}).\] The derivation for position error \(e_{p}=-\mathbf{R}\,\hat{\mathbf{R}}^{\top}\,\hat{\boldsymbol{p}}+\boldsymbol{p}\) is the same as SD-EqF, given by \[\dot{\varepsilon}_{p}=\varepsilon_{v}+\hat{\boldsymbol{p}}^{\wedge}\varepsilon_{b _{\omega}}+\mathcal{O}(\varepsilon^{2}).\] Bias statesThe error in bias state \(b_{\omega}\) is given by \(e_{b_{\omega}}=\hat{\mathbf{R}}\ast(\boldsymbol{b}_{\omega}-\hat{\boldsymbol{b }}_{\boldsymbol{\omega}})\). 
The dynamics can be derived: \[\dot{e}_{b_{\omega}} =\hat{\mathbf{R}}(\omega-\hat{\boldsymbol{b}}_{\boldsymbol{\omega }})^{\wedge}\ast(\boldsymbol{b}_{\omega}-\hat{\boldsymbol{b}}_{\boldsymbol{ \omega}})\] \[=\hat{\mathbf{R}}(\omega-\hat{\boldsymbol{b}}_{\boldsymbol{\omega }})^{\wedge}\,\hat{\mathbf{R}}^{\top}\,\hat{\mathbf{R}}\ast(\boldsymbol{b}_{ \boldsymbol{\omega}}-\hat{\boldsymbol{b}}_{\boldsymbol{\omega}})\] \[=(\hat{\mathbf{R}}(\omega-\hat{\boldsymbol{b}}_{\boldsymbol{ \omega}}))^{\wedge}e_{b_{\omega}}.\] In local coordinates, the linearization is given by \[\dot{\varepsilon}_{b_{\omega}}=(\hat{\mathbf{R}}(\omega-\hat{\boldsymbol{b}}_{ \boldsymbol{\omega}}))^{\wedge}\varepsilon_{b_{\omega}}+\mathcal{O}(\varepsilon^{2}).\] The error in \(b_{a}\) follows the same derivation, which is given by \[\dot{\varepsilon}_{b_{a}}=(\hat{\mathbf{R}}(\omega-\hat{\boldsymbol{b}}_{ \boldsymbol{\omega}}))^{\wedge}\varepsilon_{b_{a}}+\mathcal{O}(\varepsilon^{2}).\] #### a.8.3 Filter design The linearized error state matrix \(\mathbf{A}_{t}^{0}\,|\,\,\dot{\boldsymbol{\varepsilon}}\simeq\mathbf{A}_{t}^{0}\, \boldsymbol{\varepsilon}\) is defined according to \[\mathbf{A}_{t}^{0}=\left[\begin{array}{ccc}\vdots&\mathbf{I}_{3}&\mathbf{0}_{3 \times 3}\\ {}_{4}\mathbf{A}&\vdots&\hat{\boldsymbol{v}}^{\wedge}&\mathbf{I}_{3}\\ \vdots&\hat{\boldsymbol{p}}^{\wedge}&\mathbf{0}_{3\times 3}\\ \ldots\vdots&\hat{\boldsymbol{p}}^{\wedge}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 9}&\left(\hat{\mathbf{R}}\left(\boldsymbol{\omega}-\hat{ \boldsymbol{b}}_{\boldsymbol{\omega}}\right)\right)^{\wedge}&\mathbf{0}_{3 \times 3}\\ \mathbf{0}_{3\times 9}&\vdots&\mathbf{0}_{3\times 3}&\left(\hat{\mathbf{R}}\left(\boldsymbol{\omega}-\hat{ \boldsymbol{b}}_{\boldsymbol{\omega}}\right)\right)^{\wedge}\\ \end{array}\right]\in\mathbb{R}^{15\times 15}.\] (A.34) Position measurements formulated according to Equ. (30) are equivariant, yielding the following output matrix \[\mathbf{C}^{*}=\left[\tfrac{1}{2}\left(y+\hat{\boldsymbol{p}}\right)^{\wedge}\, \mathbf{0}_{3\times 3}\,-\mathbf{I}_{3}\,\,\mathbf{0}_{3\times 3}\,\,\mathbf{0}_{3\times 6}\right] \epsilon\,\mathbb{R}^{3\times 15}.\] (A.35)
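To make the block structure of Equ. (A.34)-(A.35) concrete, the following hedged sketch assembles the TFG-IEKF linearized state and output matrices from the error dynamics derived above. The function name, the error-state ordering, and the placeholder inputs are assumptions of this illustration, not part of the paper.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def tfg_iekf_matrices(R_hat, v_hat, p_hat, b_w_hat, omega, g, y):
    """Assemble the linearized state matrix (A.34) and equivariant output matrix (A.35).

    Assumed error-state ordering: (eps_R, eps_v, eps_p, eps_bw, eps_ba), each in R^3.
    """
    A = np.zeros((15, 15))
    W = skew(R_hat @ (omega - b_w_hat))
    A[0:3, 9:12] = np.eye(3)            # eps_R_dot = eps_bw
    A[3:6, 0:3] = skew(g)               # eps_v_dot = g^ eps_R + v^ eps_bw + eps_ba
    A[3:6, 9:12] = skew(v_hat)
    A[3:6, 12:15] = np.eye(3)
    A[6:9, 3:6] = np.eye(3)             # eps_p_dot = eps_v + p^ eps_bw
    A[6:9, 9:12] = skew(p_hat)
    A[9:12, 9:12] = W                   # bias errors rotate with R_hat(omega - b_w_hat)
    A[12:15, 12:15] = W
    C = np.zeros((3, 15))
    C[:, 0:3] = 0.5 * skew(y + p_hat)   # equivariant output matrix, Equ. (A.35)
    C[:, 6:9] = -np.eye(3)
    return A, C

# Example usage with placeholder values.
A, C = tfg_iekf_matrices(np.eye(3), np.zeros(3), np.zeros(3), np.zeros(3),
                         np.array([0.01, 0.0, 0.0]), np.array([0.0, 0.0, -9.81]),
                         np.array([1.0, 2.0, 0.5]))
print(A.shape, C.shape)   # (15, 15) (3, 15)
```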
This paper investigates the problem of inertial navigation system (INS) filter design from the perspective of symmetry. The extended Kalman filter (EKF) and its variants have been the mainstay of INS filtering for fifty years. Recently, however, progress in inertial navigation has shown that stochastic filters and state observers designed by exploiting matrix Lie group structure deliver superior performance compared with conventional solutions. For a vehicle equipped with an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) receiver, this work shows that modern EKF variants using these sensors can be interpreted as instances of the recently proposed equivariant filter (EqF) design methodology applied with different choices of symmetry group for the INS problem. In doing so, the paper introduces novel symmetries for the INS problem that had not previously been considered.
2309.13576
Unmotivated ergodic averages
We consider weighted ergodic averages indexed by primes, where the weight depends on the prime, and is a "trace function" coming from algebraic geometry. We obtain extensions of the classical mean-ergodic and pointwise ergodic theorems, as well as some results in the topological setting, and raise some further problems.
Emmanuel Kowalski
2023-09-24T08:12:13
http://arxiv.org/abs/2309.13576v1
# Unmotivated ergodic averages ###### Abstract. We consider weighted ergodic averages indexed by primes, where the weight depends on the prime, and is a "trace function" coming from algebraic geometry. We obtain extensions of classical results, in both \(\mathrm{L}^{2}\) and topological settings, and raise some further problems. Key words and phrases:Riemann Hypothesis over finite fields, ergodic averages, ergodic theorems, maximal inequality, conductor of a sheaf, Fourier sheaf 2010 Mathematics Subject Classification: 11T23, 11L05, 11N37, 11N75, 11F66, 14F20, 14D05 ###### Contents * 1 Introduction * 2 Examples of results * 3 Properties of trace functions * 4 The mean ergodic theorem in the Fourier case * 5 Weakly-mixing systems * 6 The topological case * 7 Mean-ergodic theorems in \(\mathrm{L}^{r}\) * 8 Maximal inequalities in \(\mathrm{L}^{2}\) * 9 Pointwise ergodic theorem * 10 Is sparseness necessary? * 11 Questions ## 1. Introduction We consider a dynamical system \((\mathrm{X},\mu,f)\). Thus, \((\mathrm{X},\mu)\) is a probability space and \(f\colon\mathrm{X}\to\mathrm{X}\) is a measurable map such that \(f_{*}\mu=\mu\). In this paper, motivated largely by simple curiosity (though see also Remark 1.5 for some arithmetical motivation), we consider weighted ergodic averages of _triangular_ form1, namely averages Footnote 1: In the sense of the “triangular arrays” of probability theory, e.g., in the Central Limit Theorem. \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ (\varphi\circ f^{n}), \tag{1.1}\] for some fixed function \(\varphi\colon\mathrm{X}\to\mathbf{C}\), where \(p\) is a prime and \(t_{p}\) is a function on \(\mathbf{Z}\) (depending on \(p\)) "of algebraic origin". Precisely, we are interested in the limit of such averages as \(p\to+\infty\) when the functions \(t_{p}\) are _trace function_ modulo \(p\) or short linear combinations of such functions. Since the general theory of trace functions (as amplified by Fouvry, Kowalski and Michel in particular) is probably not well-known to most readers, we present right away three basic examples that will indicate the flavor of these averages. **Example 1.1**.: (1) The function \(t_{p}(n)\) which is the characteristic function of the set of squares modulo \(p\) (quadratic residues) is essentially a linear combination \[\frac{1}{2}\Big{(}1+\Big{(}\frac{n}{p}\Big{)}\Big{)}\] of two trace functions. Thus the average (1.1) is then essentially the ergodic average where \(n\) is restricted to be a square modulo \(p\). The "triangularity" is very obvious here: as the prime \(p\) changes, the set of quadratic residues modulo \(p\) changes also. (2) Let \(q\in\mathbf{Z}[\mathrm{X}]\) be a fixed monic polynomial. Then \(t_{p}(n)=e(q(n)/p)\) is a trace function, where \(e(z)=\exp(2i\pi z)\) for any complex number \(z\). (3) Define \(t_{p}(n)=\mathrm{Kl}_{2}(n;p)\) where \[\mathrm{Kl}_{2}(n;p)=\frac{1}{\sqrt{p}}\sum_{1\leqslant x\leqslant p-1}e\Big{(} \frac{nx+\bar{x}}{p}\Big{)},\] where \(\bar{x}\) is the inverse of \(x\) modulo \(p\). These are the classical _Kloosterman sums_, which are of paramount importance in analytic number theory. The function \(t_{p}\) is then also a trace function. More generally, we will explain below the definition of two norms \(\|\cdot\|_{\mathrm{t}}\leqslant\|\cdot\|_{\mathrm{tr}}\) on the space \(\mathscr{C}(\mathbf{F}_{p})\) of complex-valued functions on \(\mathbf{F}_{p}=\mathbf{Z}/p\mathbf{Z}\), which we identify with the interval \(\{0,\dots,p-1\}\). 
For \(f\colon\mathbf{F}_{p}\to\mathbf{C}\), these norms measure the complexity of a decomposition of \(f\) into sums of certain trace functions. In the three examples above, we have \(\|t_{p}\|_{\mathrm{tf}}\leqslant c\), where \(c\) is independent of \(p\) (but depends on the degree of \(q\) in the case of Example (2)), and similarly \(\|t_{p}\|_{\mathrm{t}}\leqslant c^{\prime}\) for some constant \(c^{\prime}\), except in the case of polynomials \(q\) of degree \(1\) in Example (2). Then, exploiting the remarkable fundamental \(\mathrm{L}^{2}\) properties of trace functions (which are very deep, as they rely on Deligne's most general version of the Riemann Hypothesis over finite fields [11]), we will prove rather easily the following result. **Theorem 1.2** (\(\mathrm{L}^{2}\)-ergodic theorems).: _Let \((t_{p})_{p}\) be a sequence of functions \(t_{p}\colon\mathbf{F}_{p}\to\mathbf{C}\), indexed by an infinite subset \(\mathsf{P}\) of the primes. Let \((\mathrm{X},\mu,f)\) be a dynamical system and let_ \[\pi\colon\mathrm{L}^{1}(\mathrm{X},\mu)\to\mathrm{L}^{1}(\mathrm{X},\mu)\] _be the projection given by the ergodic theorem_ (see [14, Th. 2.30])_. Assume that there exists a constant \(c\geqslant 0\) such that either_ 1. _We have_ \(\|t_{p}\|_{\mathrm{tf}}\leqslant c\) _for_ \(p\in\mathsf{P}\)_,_ 2. _The system_ \((\mathrm{X},\mu,f)\) _is weakly-mixing and_ \(\|t_{p}\|_{\mathrm{t}}\leqslant c\) _for_ \(p\in\mathsf{P}\)_._ _Let \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\). Then the following results hold:_ 1. _We have_ \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\,\varphi\circ f^{n}-\Big{(}\frac{1}{ p}\sum_{0\leqslant n<p}t_{p}(n)\Big{)}\pi(f)\to 0\] _in \(\mathrm{L}^{2}(\mathrm{X},\mu)\) as \(p\to+\infty\) along \(\mathsf{P}\). Moreover, the convergence is uniform for \(\varphi\) in compact sets of \(\mathrm{L}^{2}(\mathrm{X},\mu)\)._ (2) _Suppose that_ \[\sum_{p\in\mathsf{P}}\frac{(\log p)^{2}}{p}<+\infty. \tag{1.2}\] _Then for \(\mu\)-almost all \(x\), we have_ \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\,\varphi(f^{n}(x))-\Big{(}\frac{1}{p }\sum_{0\leqslant n<p}t_{p}(n)\Big{)}\pi(f)(x)\to 0\] _as \(p\to+\infty\) along \(\mathsf{P}\)._ In addition, we consider the analogue of Sarnak's Mobius randomness conjecture [36] (one of the recent focus points at the intersection of analytic number theory and ergodic theory) for our weighted averages. We can prove a version of this conjecture for certain specific families of trace functions, but since their definition is non-trivial, we only state here some representative examples. **Theorem 1.3** (Topological ergodic theorems).: _Let \(\mathrm{X}\) be a compact topological space and \(f\colon\mathrm{X}\to\mathrm{X}\) a continuous map._ _Assume that the topological entropy of \(f\) is zero. Then for all continuous functions \(\varphi\colon\mathrm{X}\to\mathbf{C}\) and all \(x\in\mathrm{X}\), we have_ \[\lim_{p\to+\infty}\frac{1}{p}\sum_{0\leqslant n<p}\mathrm{Kl}_{2 }(n;p)\varphi(f^{n}(x))=0,\] \[\lim_{p\to+\infty}\frac{1}{p}\sum_{0\leqslant n<p}\Big{(}\frac{n }{p}\Big{)}\varphi(f^{n}(x))=0.\] **Remark 1.4**.: (1) Sequences of the form \((\varphi(f^{n}(x)))_{n}\), where \(f\) has topological entropy \(0\) and \(\varphi\) is continuous are called _deterministic_. Hence, the result shows that there is no deterministic sequence which can correlate non-trivially with an infinite sequence of Kloosterman sums, or Legendre symbols, modulo primes. (2) We will show that these families may be replaced by a fairly wide class of trace functions, but not all. 
(3) See [8] for Bowen's definition of topological entropy, which applies to uniformly continuous maps between metric spaces, and [1] for the definition of Adler, Konheim and McAndrew which applies to arbitrary compact spaces. It is known that these are equal (when both are defined), see, e.g., [13, Satz 4.8]. Theorem 1.3 seems likely to also hold for locally compact metric spaces and bounded uniformly continuous functions, but we haven't checked this (although it is a natural framework, e.g. for unipotent flows). (4) The special case of this theorem concerning Kloosterman sums was proved independently by El Abdalaoui, Shparlinski and Steiner [15, Th. 2.8]. **Remark 1.5**.: From the arithmetic point of view, it is a crucial fact that _there is no systematic rule to construct or constrain the sequences of trace functions that are used in the averages for each prime_. We think that the sequence of Kloosterman sums or Legendre symbols are natural, but the only constraint that we impose in Theorem 1.2 is the boundedness of the trace norms of the functions (as in much previous work). We will see that the situation is very unclear when the system \((\mathrm{X},\mu,f)\) is not weakly-mixing. In general, the search for natural stronger conditions that "bind" a sequence \((t_{p})\) of trace functions is, for the author, a very natural arithmetic motivation for the study of our weighted ergodic averages. In other words: is there a natural "coherence" condition for trace functions modulo primes that would naturally distinguish examples like Kloosterman sums or Legendre symbols? ### Outline of the paper We present some concrete "incarnations" of the results in Section 2. Then Section 3 gives the definitions and basic background results concerning trace functions, including defining the "trace norms" \(\|\cdot\|_{\mathrm{t}}\) and \(\|\cdot\|_{\mathrm{tf}}\). Sections 4 and 5 prove the mean ergodic theorem, and Section 6 discusses the topological case. We then conclude with a discussion section (including an easy maximal inequality in \(\mathrm{L}^{2}\)), and with some further questions that may be of interest in probing further the links between these two subjects. ### Notation For basic references concerning ergodic theory, we will refer to the books of Einsiedler and Ward [14] and of Einsiedler and Schmidt [13] (e.g., for topological entropy, which is not discussed in [14]). We will summarize in Section 3 the key facts concerning trace functions. More details and examples can be found for instance in the surveys [18, 25] of Fouvry, Kowalski, Michel and Sawin. We will say that an infinite set \(\mathsf{P}\) of primes that satisfies (1.2) is _sparse_. In order that \(\mathsf{P}\) be sparse, it is enough that there exists \(\delta>0\) such that the counting function \[\pi(x;\mathsf{P})=\sum_{\begin{subarray}{c}p\leq x\\ p\in\mathsf{P}\end{subarray}}1\] satisfies \[\pi(x;\mathsf{P})\ll\frac{x}{(\log x)^{3+\delta}}.\] ### Remark on the text The first draft of these notes was written in 2018/2019. At that time, I put them aside: the absence of applications diminished the interest of the questions, and moreover the results did not seem strong enough (or the proofs conceptually interesting enough) to compensate this fault. 
I came back to the text in 2023, first because the appearance of the preprint [15] of El Abdalaoui, Shparlinski and Steiner showed that at least a few other mathematicians did consider similar questions, and then because I decided to talk about this at least once, in the Number Theory Seminar of the University of Turku (where I was present to be the opponent in the PhD defense of O. Jarvienemi). Although the defects discussed above still apply,2 there is (I think) one interesting outcome from working on this topic, namely the diophantine approximation result of Lemma 10.2, which was actually stated without proof in the 2019 draft. Footnote 2: In addition to the fact that there might be lurking mistakes and imprecisions, and that there are significant redundancies in certain arguments. **Acknowledgements.** Thanks to M. Einsiedler for discussions about ergodic theory, and to L. Pierce for discussions concerning maximal theorems. Thanks to K. Matomaki for the invitation to be the opponent of O. Jarvienemi, which provided me with the occasion to revise these notes, and Y. Bugeaud for remarks and references concerning Lemma 10.2. ## 2. Examples of results Many of our results may be interpreted as leading to cancellation properties for certain sums involving trace functions. These are often of interest in analytic number theory, and we therefore state in this section a few examples with concrete choices of trace functions and of dynamical systems \((\mathrm{X},\mu,f)\). We also present examples which show that some of the assumptions of Theorems 1.2 and 1.3 are needed. **Example 2.1**.: We give first some examples related to continued fraction expansions. Let \((]0,1[,\mu,f)\) be the continued fraction dynamical system (see [14, Ch. 3]), in other words \[\mu=\frac{1}{\log(2)}\frac{dx}{1+x},\qquad f(x)=\frac{1}{x}-\Big{\lfloor} \frac{1}{x}\Big{\rfloor}.\] This system is ergodic (loc. cit.) and \(f\) has positive entropy. For \(x\in[0,1]\), let \((a_{n}(x))\) be the sequence of partial quotients in the continued fraction expansion of \(x\). We have \(a_{n+1}(x)=a_{n}(f(x))\). Maybe the simplest result that we can deduce from this work is that for a fixed integer \(k\geqslant 0\), and for almost all \(x\), we have \[\frac{1}{p}\Big{|}\Big{\{}1\leqslant n<p\,\mid\,\Big{(}\frac{n}{p}\Big{)}=1 \text{ and }a_{n}(x)=k\Big{\}}\Big{|}\to\frac{1}{2\log 2}\log\Bigl{(}\frac{(k+1)^{2}}{k(k+2)} \Bigr{)}\] as \(p\to+\infty\) along a sparse sequence, where \((n/p)\) is the Legendre symbol. This is one half of the density of occurence of \(a_{n}(x)=k\), see [14, Cor. 3.8]. This result follows from Theorem 1.2, (2) when we take \[t_{p}(n)=\frac{1}{2}\Big{(}1+\Big{(}\frac{n}{p}\Big{)}\Big{)},\] for \(p\) odd, and \(\varphi\) the characteristic function of \(a_{1}(x)=k\), since \(\varphi\circ f^{n}\) is the characteristic function of \(a_{n}(x)=k\), and moreover we have \(\|t_{p}\|_{\mathrm{tf}}\ll 1\) and \[\frac{1}{p}\sum_{n\in\mathbf{F}_{p}}t_{p}(n)=\frac{1}{2}.\] **Example 2.2**.: For \(p\) prime, let \(t_{p}\) be the Kloosterman sum function modulo \(p\) (Example 1.1, (3)). We have \(\|t_{p}\|_{\mathrm{tf}}\ll 1\) and \[\frac{1}{p}\sum_{n\in\mathbf{F}_{p}}t_{p}(n)=0.\] Define \(\mathrm{X}=\mathrm{SL}_{2}(\mathbf{Z})\backslash\,\mathrm{SL}_{2}(\mathbf{R})\) and denote by \(\mu\) the invariant probability measure on \(\mathrm{X}\) (induced by a normalized Haar measure on \(\mathrm{SL}_{2}(\mathbf{R})\)). 
Consider the dynamical system with \[f(g)=g\begin{pmatrix}2&0\\ 0&1/2\end{pmatrix}\] for \(g\in\mathrm{X}\) (a part of the geodesic flow). It is known that \((\mathrm{X},\mu,f)\) is ergodic and that \(f\) has positive topological entropy. Let \(\varphi\colon\mathrm{X}\to\mathbf{C}\) be an \(\mathrm{L}^{2}\)-function. Applying Theorem 1.2, (2), we deduce that for almost all \(z\in\mathrm{X}\), we have \[\frac{1}{p}\sum_{1\leqslant n<p}\mathrm{Kl}_{2}(n;p)\varphi\Big{(}z\begin{pmatrix} 2^{n}&0\\ 0&2^{-n}\end{pmatrix}\Big{)}\to 0\] as \(p\to+\infty\) along a sparse subsequence. On the other hand, let \[\widetilde{f}(g)=g\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\] for \(g\in\mathrm{X}\) (part of the horocycle flow). Then \((\mathrm{X},\mu,\widetilde{f})\) is ergodic and has zero entropy (note that \(\mathrm{X}\) is not compact, but \(\widetilde{f}\) is uniformly continuous, so Bowen's definition of entropy applies). Thus we have \[\frac{1}{p}\sum_{1\leqslant n<p}\mathrm{Kl}_{2}(n;p)\varphi\Big{(}z\begin{pmatrix} 1&n\\ 0&1\end{pmatrix}z\Big{)}\to 0\] for any bounded continuous function \(\varphi\) on \(\mathrm{X}\) and any \(z\in\mathrm{X}\) by Theorem 1.3. **Example 2.3**.: It is not surprising that pointwise convergence may fail in full generality, since this means considering arbitrary sequences \(a_{n}\) instead of \(\varphi(f^{n}(x))\) (using the shift on \([-1,1]\) on the space of bounded sequences). As a simple example, consider again \(t_{p}(n)=(n/p)\) (the Legendre symbol modulo \(p\)). Let \((p_{k})\) be an increasing sequence of primes with \(p_{k+1}/p_{k}\to+\infty\); the set of primes thus defined is of course sparse. Define a sequence \(a_{n}\) by \[a_{n}=\begin{cases}1\text{ if }n\text{ is a square modulo }p_{k+1}\\ 0\text{ if }n\text{ is not a square modulo }p_{k+1},\end{cases}\] where \(p_{k}\leqslant n<p_{k+1}\). Then \[\frac{1}{p_{k}}\sum_{0\leqslant n<p_{k}}t_{p_{k}}(n)a_{n}=1+\mathrm{O}\Big{(} \frac{p_{k-1}}{p_{k}}\Big{)}\to 1.\] This example can, for instance, be embedded in the continued fraction setting, and can be adapted to pretty arbitrary sequences of trace functions. **Example 2.4**.: Let \(p\) be a prime and \(t_{p}(n)=e(a_{p}n/p)\) for some \(a_{p}\in\mathbf{F}_{p}\). These are trace functions, but we will show that Theorem 1.3 does not hold with \(\mathrm{Kl}_{2}(n;p)\) replaced by \(t_{p}(n)\), at least if \((a_{p})\) is chosen in a suitable manner. Pick \(\theta\in\mathbf{R}/\mathbf{Z}\) which is irrational. There exists \(\delta>0\) such that there are infinitely many approximations \(a_{p}/p\) by rational numbers with prime denominators with \[\Big{|}\theta-\frac{a_{p}}{p}\Big{|}\leqslant\frac{1}{p^{1+\delta}}. \tag{2.1}\] Indeed, this was proved by Vinogradov for arbitrary \(\delta<1/5\), and the best-known result by Matomaki [31] applies for any \(\delta<1/3\). The irrational translation \(f(x)=x+\theta\) on \(\mathbf{R}/\mathbf{Z}\) has entropy zero; pick the starting point \(x=0\) and the continuous function \(\varphi(\alpha)=e(\alpha)\) on \(\mathbf{R}/\mathbf{Z}\). Then, for primes \(p\) for which (2.1) holds, we get \[\frac{1}{p}\sum_{0\leqslant n<p}e\Bigl{(}-\frac{na_{p}}{p}\Bigr{)}e(n\theta)= \frac{1}{p}\frac{1-e(p(\theta-a_{p}/p))}{1-e(\theta-a_{p}/p)}\to 1\] as \(p\to+\infty\) along this sequence. We note in passing that Theorem 1.2 does _not_ apply here (the system is not weakly-mixing, and the norms \(\|t_{p}\|_{\text{tf}}\) are not bounded). 
(We also note that, in Example 2.4, one could obtain easier examples using the fact that (2.1) holds with \(\delta=1\) for almost all \(\theta\in[0,1]\), which goes back at least to Duffin and Schaeffer [12].)

**Example 2.5**.: Here are some additional standard examples of functions on \(\mathbf{Z}\) that arise as trace functions modulo \(p\) with bounded conductor, and which moreover are "geometrically irreducible" (an important property which means essentially that their mean-square average modulo \(p\) is close to \(1\)). More examples are found, e.g., in [18]. This should give an idea of the variety of ergodic averages that we are considering.

(1) For any \(a\) modulo \(p\), the additive character \(n\mapsto e(an/p)\) is a trace function of a so-called Artin-Schreier sheaf; its conductor is uniformly bounded.

(2) For any non-trivial multiplicative character \(\chi\) modulo \(p\), extended by \(0\) to \(\mathbf{Z}/p\mathbf{Z}\), the corresponding Dirichlet character is a trace function of a so-called Kummer sheaf; its conductor is uniformly bounded.

(3) More generally, let \(f\in\mathbf{Z}[\mathrm{X}]\) be a non-constant polynomial. The functions \(n\mapsto e(f(n)/p)\) and \(n\mapsto\chi(f(n))\) are trace functions with conductor bounded in terms of the degree of \(f\) only. Similarly if \(f\) is a non-constant rational function, with the trace function having value \(0\) at the poles of \(f\), and with conductor depending only on the degrees of the numerator and denominator of \(f\).

(4) If \(t_{p}\) is a geometrically irreducible trace function modulo \(p\), and is not proportional to an additive character, then its normalized Fourier transform \[\widehat{t}_{p}(n)=\frac{1}{\sqrt{p}}\sum_{0\leqslant m<p}e\Bigl(\frac{mn}{p}\Bigr)t_{p}(m)\] is also a trace function with conductor bounded only in terms of that of \(t_{p}\) (see [19, Prop. 8.2]). So for instance, the fact that the Kloosterman sums used above (see Example 1.1), namely \[t_{p}(n)=\frac{1}{\sqrt{p}}\sum_{1\leqslant x\leqslant p-1}e\Bigl(\frac{nx+\bar{x}}{p}\Bigr),\] define a geometrically irreducible trace function modulo \(p\) with bounded conductor follows from this principle applied to the trace function \(n\mapsto e(\bar{n}/p)\) (extended by \(0\) for \(n=0\)), which is a special case of Example (3).

As a final remark, we emphasize that trace functions behave in many ways like random functions (e.g., they often have Gowers norms that are as small as those of random functions, as shown by Fouvry, Kowalski and Michel in [21]), and one can think of them in these terms in a first reading.

## 3. Properties of trace functions

We summarize here the properties of trace functions that we will use. These are essentially related to the Fourier transform (which was already mentioned in Example 2.5, (4), as an operation preserving trace functions). First, we fix throughout the paper a prime number \(\ell\), and impose that all other prime numbers we consider below are different from \(\ell\) (one can take \(\ell=2\) and only consider odd primes). We fix an isomorphism \(\iota\colon\overline{\mathbf{Q}}_{\ell}\to\mathbf{C}\). We first clarify our terminology and conventions concerning sheaves:

**Definition 3.1** (Sheaves and uniform sheaves).: Let \(p\neq\ell\) be a prime.

(1) A _sheaf_ \(\mathscr{F}\) modulo \(p\) is a middle extension \(\overline{\mathbf{Q}}_{\ell}\)-sheaf on \(\mathbf{A}^{1}_{\mathbf{F}_{p}}\), pure of weight \(0\). 
A _Fourier sheaf_ modulo \(p\) is a sheaf modulo \(p\) that is of Fourier type in the sense of Katz [27, 7.3.4], in other words, none of its geometrically irreducible components is geometrically isomorphic to an Artin-Schreier sheaf. (2) The _trace function_\(t_{\mathscr{F}}\) of a sheaf \(\mathscr{F}\) modulo \(p\) is the complex-valued function on \(\mathbf{Z}\) defined by \[t_{\mathscr{F}}(x)=\iota(\operatorname{Tr}(\operatorname{Fr}_{x,\mathbf{F}_{p }}|\mathscr{F}_{\bar{x}}))\] where \(\operatorname{Fr}_{x,\mathbf{F}_{p}}\) is the Frobenius at \(x\in\mathbf{F}_{p}\), and \(\bar{x}\) is a geometric point above \(x\). (3) Let \(\mathscr{F}\) be a sheaf modulo \(p\) and let \(k\geqslant 0\) be an integer. We say that \(\mathscr{F}\) is _\(k\)-uniform_ if no geometrically irreducible component of \(\mathscr{F}\) is geometrically isomorphic to a sheaf of the type \(\mathscr{L}_{\psi(\mathrm{P})}\) where \(\psi\) is a non-trivial additive character of \(\mathbf{F}_{p}\) and \(\mathrm{P}\in\mathbf{F}_{p}[\mathrm{X}]\) is a polynomial of degree \(\leqslant k\). (4) We say that \(\mathscr{F}\) is almost \(k\)-uniform with average \(\mu\) if \(\mathscr{F}\simeq\mu\overline{\mathbf{Q}}_{\ell}\oplus\mathscr{G}\) where \(\mathscr{G}\) is \(k\)-uniform. Note that speaking of geometrically irreducible components of a sheaf \(\mathscr{F}\) modulo \(p\) is legitimate, since such sheaves (being pure of some weight) are geometrically semisimple by work of Deligne. **Example 3.2**.: To say that \(\mathscr{F}\) is \(0\)-uniform (resp. \(1\)-uniform) means that \(\mathscr{F}\) has no trivial geometrically irreducible component (resp. is of Fourier type in the sense of Katz [27, 7.3.5]). Let \(\mathscr{F}\) be a sheaf modulo \(p\). Fouvry, Kowalski and Michel defined its _conductor_\(\mathbf{c}(\mathscr{F})\) in [19, Def. 1.13]; it is a positive integer which vanishes if and only if \(\mathscr{F}\) is zero. The conductor measures quantitatively the complexity of a sheaf in many estimates.. One essential property is a bound on the size of the trace function: for any sheaf \(\mathscr{F}\) modulo \(p\), and any \(x\in\mathbf{Z}\), we have \[|t_{\mathscr{F}}(x)|\leqslant\mathbf{c}(\mathscr{F}). \tag{3.1}\] Using the conductor, we define the trace norms as follows. **Definition 3.3** (Trace norms).: Let \(p\) be a prime different from \(\ell\). Let \(\mathscr{C}(\mathbf{F}_{p})\) denote the vector space of \(\mathbf{C}\)-valued functions on \(\mathbf{F}_{p}\). For \(f\in\mathscr{C}(\mathbf{F}_{p})\), we define \[\|f\|_{\mathrm{t}}=\inf\Bigl{\{}\sum_{i}\mathbf{c}(\mathscr{F}_{i})|a_{i}|\, \mid\,f=\sum_{i}a_{i}t_{\mathscr{F}_{i}},\,\,\mathscr{F}_{i}\text{ geometrically irreducible}\Bigr{\}},\] and \[\|f\|_{\mathrm{tf}}=\inf\Bigl{\{}\sum_{i}\mathbf{c}(\mathscr{F}_{i})|a_{i}|\,\mid \,f=\sum_{i}a_{i}t_{\mathscr{F}_{i}},\ \mathscr{F}_{i}\text{ geometrically irreducible Fourier}\Bigr{\}}.\] In both cases, the infimum runs over decompositions of \(f\) in linear combinations of trace functions of sheaves of the indicated type. It is straightforward that both of these are norms, and clear that \(\|f\|_{\mathrm{t}}\leqslant\|f\|_{\mathrm{tf}}\). 
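Before continuing, and to make these objects concrete, here is a minimal numerical sketch (in Python, purely illustrative) of the simplest non-trivial example used in this text, the normalized Kloosterman sums \(\mathrm{Kl}_{2}(n;p)\) of Example 2.2, computed directly from the definition. It checks that the values are real, that they satisfy Weil's bound \(|\mathrm{Kl}_{2}(n;p)|\leqslant 2\) for \(p\nmid n\) (an instance of the uniform pointwise boundedness expressed by (3.1)), and that the full average vanishes, as claimed in Example 2.2. The modular inverse uses `pow(x, -1, p)`, which requires Python 3.8 or later.

```python
# Normalized Kloosterman sums Kl_2(n;p) = p^(-1/2) * sum over x in F_p^* of
# e((n*x + x^{-1})/p), computed from the definition, with three sanity checks:
# real values, Weil's bound |Kl_2(n;p)| <= 2 for p not dividing n, and mean zero.
from cmath import exp, pi
from math import sqrt

def kloosterman(n, p):
    total = 0j
    for x in range(1, p):
        xinv = pow(x, -1, p)  # inverse of x modulo p (Python >= 3.8)
        total += exp(2j * pi * ((n * x + xinv) % p) / p)
    return total / sqrt(p)

p = 101  # any odd prime; kept small so the double loop stays fast
values = [kloosterman(n, p) for n in range(p)]

print("largest imaginary part        :", max(abs(v.imag) for v in values))          # ~ 1e-13
print("max |Kl_2(n;p)| over 1<=n<p   :", max(abs(values[n]) for n in range(1, p)))  # <= 2
print("|(1/p) * sum_n Kl_2(n;p)|     :", abs(sum(values)) / p)                      # ~ 1e-15
```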
**Remark 3.4**.: Although we mentioned that trace functions can be thought of as "random" functions, one should note that for most simple models of random functions \(f\colon\mathbf{F}_{p}\to\mathbf{C}\) (e.g., taking all \(f(n)\) to be independent and uniform over the unit disc), the norm \(\|f\|_{\mathrm{t}}\) will in fact be very large, as explained in a paper of Fouvry, Kowalski and Michel (see [22, Th. 5.1]). We now state some of the fundamental analytic properties of trace functions, starting with the general form of the "completion method" for short sums of trace functions. **Proposition 3.5** (Completion method).: _Let \(\mathscr{F}\) be a Fourier sheaf modulo \(p\) and \(t\colon\mathbf{Z}\to\mathbf{C}\) its trace function. For any interval \(\mathrm{I}\) in \(\mathbf{Z}\) of length \(\leqslant p\), we have_ \[\sum_{n\in\mathrm{I}}t(n)\ll\sqrt{p}(\log p)\] _where the implied constant depends only on the conductor of \(\mathscr{F}\)._ See [24, SS1.1, SS2.2] for the argument, which is straightforward, granted the very deep fact (a case of Deligne's Riemann Hypothesis in its strongest form) that the normalized discrete Fourier transform of \(t\) is the trace function of a sheaf \(\mathrm{FT}(\mathscr{F})\), which is also a middle-extension of weight \(0\), with conductor \(\leqslant 10\,\mathbf{c}(\mathscr{F})^{2}\) (this last important estimate is proved by Fouvry, Kowalski and Michel in [19, Prop. 8.2]). **Proposition 3.6**.: _Let \(\mathscr{F}\) and \(\mathscr{G}\) be middle-extension \(\ell\)-adic sheaves of weight \(0\) modulo \(p\)._ \((1)\) _The additive middle convolution \(\mathscr{F}\ast_{\mathrm{I}}\mathscr{G}\) is a middle-extension \(\ell\)-adic sheaf of weights \(\leqslant 0\), and it has conductor bounded in terms of the conductors of \(\mathscr{F}\) and \(\mathscr{G}\)._ \((2)\) _Suppose that \(\mathscr{F}\) is a Fourier sheaf. The additive middle convolution \(\mathscr{F}\ast_{\mathrm{I}}\mathrm{D}(\mathscr{F})\) contains no Artin-Schreier sheaf as geometrically irreducible component._ Proof.: For \((1)\), the first assertion follows from the definition of the middle convolution and from Deligne's Riemann Hypothesis. To estimate the conductor, it is simplest here to appply the Fourier transform, which is an exact functor transforming middle-convolution into tensor product, so that \[\mathscr{F}\ast_{\mathrm{I}}\mathscr{G}=\overline{\mathrm{FT}}(\mathrm{FT}( \mathscr{F})\otimes\mathrm{FT}(\mathscr{G})).\] We can then apply the estimate [19, Prop. 8.2] for the conductor of a Fourier transform. For \((2)\), applying the Fourier transform, a hypothetical injection \(\mathscr{L}_{\psi(ax)}\hookrightarrow\mathscr{F}\ast_{\mathrm{I}}\mathrm{D}( \mathscr{F})\) would imply the existence of an injection \[\delta_{a}\hookrightarrow\mathrm{FT}_{\psi}(\mathscr{F})\otimes\mathrm{D}( \mathrm{FT}_{\psi}(\mathscr{F}))\] of a punctual skyscraper sheaf into \(\mathrm{FT}_{\psi}(\mathscr{F})\otimes\mathrm{D}(\mathrm{FT}_{\psi}(\mathscr{ F}))\). However, since both \(\mathrm{FT}_{\psi}(\mathscr{F})\) and its dual are middle-extension sheaves when \(\mathscr{F}\) is a middle-extension, their tensor product has no punctual part. The following definition will be convenient in some places. 
**Definition 3.7**.: A family \((\mathscr{F}_{p})_{p}\) of sheaves modulo \(p\) indexed by (a subset of) the primes \(\neq\ell\) is an _almost Fourier family_ if the conductor of \(\mathscr{F}_{p}\) is bounded independently of \(p\), and if there exists an integer \(r\geqslant 0\) such that \(\mathscr{F}_{p}=r\overline{\mathbf{Q}}_{\ell}\oplus\widetilde{\mathscr{F}_{p}}\) for all \(p\), where \(\widetilde{\mathscr{F}_{p}}\) is a Fourier sheaf modulo \(p\). We say that \(r\) is the mean of the family. For an almost Fourier family, the trace functions \(t_{p}\) of \(\mathscr{F}_{p}\) satisfy \[t_{p}(x)=r+\widetilde{t}_{p}(x)\] where \(\widetilde{t}_{p}\) is the trace function of \(\widetilde{\mathscr{F}_{p}}\). The following proposition will be only be used for polynomials \(\mathrm{P}\) of degree \(1\), but since it is of independent interest, we state and prove it in general (see [15, Th. 2.7] for a special case). **Proposition 3.8**.: _Let \(k\geqslant 1\) be an integer and define \(\gamma_{k}=2^{-k}\). Let \(p\) be a prime and let \(\mathscr{F}\) be a \(k\)-uniform \(\ell\)-adic sheaf modulo \(p\) with trace function \(t(n)\). Let \(\mathrm{P}\in\mathbf{R}[\mathrm{X}]\) be a polynomial of degree \(\leqslant k\). Let \(\mathrm{I}\) be an interval in \(\mathbf{Z}\) of length \(|\mathrm{I}|\geqslant 1\). We have_ \[\sum_{n\in\mathrm{I}}t(n)e(\mathrm{P}(n))\ll\mathbf{c}(\mathscr{F})^{2}\Big{(} |\mathrm{I}|^{1-2\gamma_{k}}p^{\gamma_{k}}+|\mathrm{I}|p^{-\gamma_{k}}\Big{)}( \log p)^{2\gamma_{k}} \tag{3.2}\] _where the implied constant is absolute._ For \(k\geqslant 2\), the proof will use the following lemma; readers only interested in main results of this paper may skip this in a first reading. **Lemma 3.9**.: _Let \(k\geqslant 1\) be an integer and \(p\) a prime. Let \(\mathscr{F}\) be a geometrically isotypic \(k\)-uniform \(\ell\)-adic sheaf modulo \(p\) for some integer \(k\geqslant 1\). Let \(h\in\mathbf{F}_{p}\) be such that the set of singularities of \([+h]^{*}\mathscr{F}\) and \(\mathrm{D}(\mathscr{F})\) are disjoint. If \(p>k\) and \(\mathbf{c}(\mathscr{F})<p\), and if \(h\neq 0\), then \([+h]^{*}\mathscr{F}\otimes\mathrm{D}(\mathscr{F})\) is a \((k-1)\)-uniform \(\ell\)-adic sheaf modulo \(p\) with conductor \(\ll\mathbf{c}(\mathscr{F})^{2}\)._ Proof.: This is implicit in the work of Fouvry, Kowalski and Michel in [21, SS5]. Precisely, under the assumption on \(h\), the tensor product \([+h]^{*}\mathscr{F}\otimes\mathrm{D}(\mathscr{F})\) is an \(\ell\)-adic sheaf modulo \(p\) (the key point is that it is a middle-extension, see [21, Lemma 2.2]). If the conclusion does not hold, we deduce from the definition of \((k-1)\)-uniform sheaf that there exists a polynomial \(\mathrm{P}\) of degree \(\leqslant k-1\) such that \[\mathscr{F}\simeq[+h]^{*}\mathscr{F}\otimes\mathscr{L}_{\psi(\mathrm{P})}\] (see [21, Lemma 5.3 (2)]). From this, we see first that \(\mathbf{c}(\mathscr{F})\geqslant p\), if \(\mathscr{F}\) is not lisse on \(\mathbf{A}^{1}_{\mathbf{F}_{p}}\) (because the orbit of a singularity under \(x\mapsto x+h\) is contained in the set of singularities, so there are at least \(p\) of them, each of which contributes at least \(1\) to the sum of drops of \(\mathscr{F}\)). Otherwise, since \(p>k\), by [21, Lemma 5.4 (2)], it follows that either \(\mathbf{c}(\mathscr{F})\geqslant p\) (because of the contribution of the Swan conductor at \(\infty\)) or \(\mathscr{F}\) is isomorphic to \(\mathscr{L}_{\psi(\mathrm{Q})}\) for some polynomial of degree \(\leqslant k\). 
The lemma follows, by contraposition. Proof.: We first consider the case \(|\mathrm{I}|\leqslant p\). We then need to show that \[\sum_{n\in\mathrm{I}}t(n)e(\mathrm{P}(n))\ll\mathbf{c}(\mathscr{F})^{2}| \mathrm{I}|^{1-2\gamma_{k}}p^{\gamma_{k}}(\log p)^{2\gamma_{k}}, \tag{3.3}\] and we may assume (by additive change of variable) that \(\mathrm{I}\) is contained in \(\{0,\ldots,p-1\}\). We assume (as we may) that \(\mathrm{P}(0)=0\). If we decompose the arithmetic semisimplification of \(\mathscr{F}\) in arithmetically irreducible components, say \(\mathscr{F}_{i}\), then one of the following is true (see [21, Lemma 5.3]): (1) For some \(n\geqslant 2\), the sheaf \(\mathscr{F}_{i}\) is induced from some irreducible sheaf on \(\mathrm{Spec}(\mathbf{F}_{p^{n}})\) by pushforward along the map \(\mathrm{Spec}(\mathbf{F}_{p^{n}})\to\mathrm{Spec}(\mathbf{F}_{p})\); in this case, the trace function \(t_{i}\) of \(\mathscr{F}_{i}\) is identically \(0\) (see [21, Lemma 5.3] or [19, Proof of Prop. 8.3]), so that the estimate (3.2) is trivial. (2) The sheaf \(\mathscr{F}_{i}\) is geometrically isotypic. Since the estimate (3.2) is linear in \(\mathscr{F}\), we see that we may reduce the proof to the case where \(\mathscr{F}\) is geometrically isotypic. We now proceed by induction on \(k\). The key tool is Weyl differencing. Assume first that \(k=1\) and that \(\mathrm{P}(n)=\theta n\) (here we do not need to assume that \(\mathscr{F}\) is isotypic). By discrete Fourier inversion, we obtain \[\sum_{n\in\mathrm{I}}t(n)e(\theta n)=\sum_{0\leqslant h<p}\widehat{t}(h) \alpha_{p}(h,\theta)\] where \[\alpha_{p}(h,\theta)=\frac{1}{\sqrt{p}}\sum_{n\in\mathrm{I}}e\Big{(}n\Big{(} \frac{h}{p}+\theta\Big{)}\Big{)},\qquad\widehat{t}(h)=\frac{1}{\sqrt{p}}\sum_ {0\leqslant n<p}t(n)e\Big{(}\frac{nh}{p}\Big{)}.\] Since \(\mathscr{F}\) is \(1\)-uniform, it is a Fourier sheaf, and we have \(|\widehat{t}(h)|\leqslant\mathbf{c}(\mathrm{FT}(\mathscr{F}))\ll\mathbf{c}( \mathscr{F})^{2}\) (by [19, Prop. 8.2]). On the other hand, by summing the geometric sum, we have \[|\alpha_{p}(h,\theta)|\leqslant\min\Bigl{(}\frac{|\mathrm{I}|}{\sqrt{p}}, \frac{1}{\sqrt{p}}\frac{1}{\|\frac{h}{p}+\theta\|}\Bigr{)},\] where \(\|\cdot\|\) on the right-hand side is the distance to the nearest integer. We use the first bound for that value \(h_{0}\) of \(h\) where \(|h_{0}/p+\theta|\leqslant 1/p\), and the other values of \(\alpha_{p}(h,\theta)\) are then bounded by \[\frac{\sqrt{p}}{2},\quad\cdots,\quad\frac{\sqrt{p}}{p},\] so that \[\sum_{0\leqslant h<p}|\alpha_{p}(h,\theta)|\ll\sqrt{p}(\log p),\] with an absolute implied constant. Combining these results we obtain \[\Big{|}\sum_{0\leqslant h<p}\widehat{t}(h)\alpha_{p}(h,\theta)\Big{|}\ll \mathbf{c}(\mathscr{F})\sqrt{p}\log p,\] with an absolute implied constant, which implies the bound (3.3) for \(k=1\). Now assume that \(\deg(\mathrm{P})=k\geqslant 2\) and that the proposition is true for polynomials of degree \(k-1\); assume (as we saw that we may) that \(\mathscr{F}\) is geometrically isotypic. 
We write \[\left|\sum_{n\in\mathrm{I}}t(n)e(\mathrm{P}(n))\right|^{2} =\sum_{n,m\in\mathrm{I}}t(n)\overline{t(m)}e(\mathrm{P}(n)- \mathrm{P}(m))\] \[=\sum_{h\in\mathrm{I}-\mathrm{I}}\sum_{m\in\mathrm{I}_{h}}t(m+h) \overline{t(m)}e(\mathrm{P}(m+h)-\mathrm{P}(m))\] \[=\sum_{h}\sum_{m\in\mathrm{I}_{h}}t(m+h)\overline{t(m)}e( \mathrm{Q}_{h}(m))\] where \(\mathrm{Q}_{h}=\mathrm{P}(\mathrm{X}+h)-\mathrm{P}(\mathrm{X})\) is a polynomial of degree \(\leqslant k-1\) and \(\mathrm{I}_{h}\) is an interval, depending on \(h\), of length \(|\mathrm{I}_{h}|\leqslant|\mathrm{I}|\). For \(h\in\mathrm{I}-\mathrm{I}\) such that the set of singularities of \([+h]^{*}\mathscr{F}\) and \(\mathrm{D}(\mathscr{F})\) are not disjoint, we use the trivial bound \[\left|\sum_{m\in\mathrm{I}_{h}}t(m+h)\overline{t(m)}e(\mathrm{Q}_{h}(m)) \right|\leqslant\mathbf{c}(\mathscr{F})^{2}|\mathrm{I}|.\] Note that there are at most \(n^{2}\) values of \(h\) with this property, where \(n\leqslant\mathbf{c}(\mathscr{F})\) is the number of singularities of \(\mathscr{F}\). Now suppose that the set of singularities of \([+h]^{*}\mathscr{F}\) and \(\mathrm{D}(\mathscr{F})\) are disjoint. The function \[m\mapsto t(m+h)\overline{t(m)}\] is the trace function of the sheaf \([+h]^{*}\mathscr{F}\otimes\mathrm{D}(\mathscr{F})\), which is \((k-1)\)-uniform by Lemma 3.9. Hence, by induction, we have \[\sum_{m\in\mathrm{I}_{h}}t(m+h)\overline{t(m)}e(\mathrm{Q}_{h}(m))\ll\mathbf{c }(\mathscr{F})^{2}|\mathrm{I}_{h}|^{1-2\gamma_{k-1}}p^{\gamma_{k-1}}(\log p)^{ 2\gamma_{k-1}}\] where the implied constant is absolute. Finally, gathering the estimates together, since \(|\mathrm{I}-\mathrm{I}|\leqslant 2|\mathrm{I}|\) and \(|\mathrm{I}_{h}|\leqslant|\mathrm{I}|\), we obtain \[\left|\sum_{n\in\mathrm{I}}t(n)e(\mathrm{P}(n))\right|^{2}\ll\mathbf{c}( \mathscr{F})^{4}|\mathrm{I}|+\mathbf{c}(\mathscr{F})^{2}|\mathrm{I}|^{2-2 \gamma_{k-1}}p^{\gamma_{k-1}}(\log p)^{2\gamma_{k-1}},\] and (3.3) follows for degree \(k\) by taking the square root since \(\gamma_{k-1}=2\gamma_{k}\). We now assume that \(|\mathrm{I}|>p\). We can decompose the interval \(\mathrm{I}\) into \(\lfloor|\mathrm{I}|/p\rfloor\) intervals of length \(p\) and one remaining interval \(\mathrm{J}\) of length \(|\mathrm{J}|\leqslant p\). Using shifts, each of these sums is of the type above for a shifted sheaf, with the same conductor, and an interval of length \(\leqslant p\). The previous case therefore implies \[\sum_{n\in\mathrm{I}}t(n)e(\mathrm{P}(n))\ll\mathbf{c}(\mathscr{F})^{2}\frac{ |\mathrm{I}|}{p}\times p^{1-2\gamma_{k}}p^{\gamma_{k}}(\log p)^{2\gamma_{k}}\] (since the implied constant is independent of the coefficients of \(\mathrm{P}\)), and this is of the desired shape for \(|\mathrm{I}|>p\). **Corollary 3.10**.: _Let \(\mathscr{F}\) be a Fourier sheaf modulo \(p\) with trace function \(t_{p}\), and \(\theta\in\mathbf{R}/\mathbf{Z}\). We have_ \[\sum_{0\leqslant n<p}t_{p}(n)e(-\theta n)\ll\sqrt{p}\log p,\] _where the implied constant depends only on the conductor of \(\mathscr{F}\)._ **Remark 3.11**.: (1) The estimate of Proposition 3.8 cannot be improved without some additional assumption, since \(t(n)=e(n^{k}/p)\) is the trace function of a sheaf that is \((k-1)\)-uniform but not \(k\)-uniform. (2) Estimates similar to that of Proposition 3.8 have been proved by a number of authors when \(t(n)=\chi(n)\) is a multiplicative character, beginning with Enflo [16]; more recent works include those of Chang [10], Heath-Brown and Pierce [26] and Pierce [34]. 
In that special case, rather stronger results hold; as far as the size of I is concerned, they are comparable to the Burgess bound for short character sums, i.e., non-trivial provided that I is a bit larger than \(p^{1/4}\). (3) If \(\theta=a/p\) for some integer \(a\), then the estimate of Corollary 3.10 holds without the factor \(\log p\), by the existence of Deligne's Fourier transform. It would be interesting to know if this factor is really needed in general. ## 4. The mean ergodic theorem in the Fourier case This section considers the mean ergodic theorem in \(\mathrm{L}^{2}\). As can be expected from the good \(\mathrm{L}^{2}\) properties of trace functions, a very satisfactory theory exists, and it is reasonably easy to derive. Roughly speaking, we will see that non-trivial interactions arise only from the Artin-Schreier components (on the side of trace functions) and from the Kronecker factor (on the dynamical side). So if either the Artin-Schreier component or the Kronecker factor is trivial (the latter means that the dynamical system is weakly-mixing), then the statements are particularly clear. We fix a measurable dynamical system \((\mathrm{X},\mu,f)\). We denote by \[u_{f}\colon\mathrm{L}^{2}(\mathrm{X},\mu)\to\mathrm{L}^{2}(\mathrm{X},\mu)\] the associated unitary operator, defined by \(u_{f}(\varphi)=\varphi\circ f\) for all \(\varphi\). We also fix a family \((\mathscr{F}_{p})_{p}\) of sheaves modulo \(p\) with bounded conductor, indexed by an infinite set of primes \(\mathsf{P}\). We denote by \(t_{p}\) the trace function of \(\mathscr{F}_{p}\), viewed as a function on \(\mathbf{Z}\). Finally, we denote by \[v_{p}=\frac{1}{p}\sum_{0\leq n<p}t_{p}(n)\ u_{f}^{n}\] the ergodic averaging operator with weight \(t_{p}\) acting on \(\mathrm{L}^{2}(\mathrm{X},\mu)\). **Proposition 4.1**.: _Suppose that \(\mathscr{F}_{p}\) is a Fourier sheaf for all \(p\). The endomorphisms \((v_{p})_{p}\) of \(\mathrm{L}^{2}(\mathrm{X},\mu)\) converge to \(0\) as \(p\to+\infty\) with respect to the operator norm. In fact, we have_ \[\|v_{p}\|\ll p^{-1/2}(\log p), \tag{4.1}\] _where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\)._ Although the proof may seem rather trivial, it relies on the Riemann Hypothesis over finite fields. Proof.: Let \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\) have norm \(1\). Let \(\nu\) be the spectral measure of the unitary operator \(u_{f}\) relative to the unit vector \(\varphi\), i.e., the Borel probability measure on \(\mathbf{R}/\mathbf{Z}\) such that \[\int_{\mathbf{R}/\mathbf{Z}}\varrho(e(\theta))d\nu(\theta)=\langle\varrho(u_{ f})\varphi|\varphi\rangle\] for any continuous function \(\varrho\) on \(\mathbf{S}^{1}\) (see, e.g., [4, Def. 4, p. 268]). We obtain in particular \[\|v_{p}(\varphi)\|^{2}=\int_{0}^{1}\Bigl{|}\frac{1}{p}\sum_{0\leqslant n<p}t_{p }(n)e(n\theta)\Bigr{|}^{2}d\nu(\theta).\] Applying Corollary 3.10, we get \[\|v_{p}(\varphi)\|^{2}\ll\frac{(\log p)^{2}}{p}\] where the implied constant depends only on the conductor of \(\mathscr{F}_{p}\). This concludes the proof. This implies the first part of Theorem 1.2, in the case of Fourier sheaves (with uniform convergence over bounded sets), because \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ll\frac{1}{\sqrt{p}}\to 0\] in that case. 
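Before treating general weights, here is a quick numerical illustration (in Python, purely illustrative) of the decay that drives Proposition 4.1, in the simplest case where \(t_{p}\) is the Legendre character (the trace function of a Kummer sheaf, as in Example 2.5, (2)). By Corollary 3.10, the complete twisted sums \(\sum_{0\leqslant n<p}t_{p}(n)e(n\theta)\) are \(\ll\sqrt{p}\log p\) uniformly in \(\theta\); the sketch approximates the supremum over \(\theta\) by a maximum over a finite grid, which is only a lower bound for the true supremum but suffices for illustration.

```python
# Numerical illustration of Corollary 3.10 for the Legendre character:
# max over theta of |sum_{0 <= n < p} (n/p) e(n*theta)| stays of size
# about sqrt(p) * log(p).  The supremum over theta is approximated on a grid.
from cmath import exp, pi
from math import log, sqrt

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def grid_sup(p, grid=1000):
    chi = [legendre(n, p) for n in range(p)]
    best = 0.0
    for k in range(grid):
        theta = k / grid
        s = sum(chi[n] * exp(2j * pi * n * theta) for n in range(p))
        best = max(best, abs(s))
    return best

for p in (101, 211, 401):
    m = grid_sup(p)
    print(f"p = {p:4d}   grid sup = {m:8.2f}   sup / (sqrt(p)*log p) = {m / (sqrt(p) * log(p)):.3f}")
```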
For arbitrary functions \(t_{p}\colon\mathbf{F}_{p}\to\mathbf{C}\), provided they satisfy Assumption (a), namely \(\|t_{p}\|_{\mathrm{tf}}\ll 1\), we can represent \(t_{p}\) as a finite combination (with coefficients of bounded \(\ell^{1}\)-norm) of trace functions of Fourier sheaves, and obtain the same result by linearity.

Moreover, this also implies the second part, still in the case of Fourier sheaves, by a standard trick: if \(p\) ranges over a sparse set of primes \(\mathsf{P}\), then for any fixed \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\), the series \[\sum_{p}\|v_{p}(\varphi)\|^{2}\] converges (by (4.1) and the definition of sparseness), and this implies that the function \[x\mapsto\sum_{p}|v_{p}(\varphi)(x)|^{2}\] is finite almost surely, hence that \(v_{p}(\varphi)(x)\) converges to \(0\) for almost all \(x\). Once again, this gives the second part of Theorem 1.2 under Assumption (a) by linearity.

**Remark 4.2**.: For the sake of variety, here is an argument which provides a proof of the weaker result \[\|v_{p}\|\ll p^{-1/4}(\log p)^{1/2},\] without using the spectral theorem. Let \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\) and \[\psi_{p}=v_{p}(\varphi)=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ u_{f}^{n}(\varphi).\] We compute \[\|\psi_{p}\|^{2}=\frac{1}{p^{2}}\sum_{\begin{subarray}{c}0\leqslant n<p\\ 0\leqslant m<p\end{subarray}}t_{p}(n)\overline{t_{p}(m)}\langle u_{f}^{n}(\varphi)|u_{f}^{m}(\varphi)\rangle=\frac{1}{p^{2}}\sum_{|h|<p}\langle u_{f}^{h}(\varphi)|\varphi\rangle\sum_{\begin{subarray}{c}0\leqslant n,m<p\\ n-m=h\end{subarray}}t_{p}(n)\overline{t_{p}(m)}.\] The contribution coming from \(h=0\) is \[\frac{\|\varphi\|^{2}}{p^{2}}\sum_{x\in\mathbf{F}_{p}}|t_{p}(x)|^{2}\leqslant\mathbf{c}(\mathscr{F}_{p})^{2}\|\varphi\|^{2}p^{-1}\] by (3.1). Now fix \(h\) with \(1\leqslant|h|<p\). The corresponding summand is \(\langle u_{f}^{h}(\varphi)|\varphi\rangle\sigma_{h}\), where \[\sigma_{h}=\sum_{\begin{subarray}{c}0\leqslant n,m<p\\ n-m=h\end{subarray}}t_{p}(n)\overline{t_{p}(m)}=\sum_{\max(0,h)\leqslant n<\min(p,p+h)}t_{p}(n)\overline{t_{p}(n-h)}.\] By completion and by the properties of the additive convolution of trace functions of Fourier sheaves (see Proposition 3.5 and Proposition 3.6), we have \[\sigma_{h}\ll\sqrt{p}(\log p)\] for all \(h\neq 0\), where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\). Therefore we derive \[\|\psi_{p}\|^{2}\ll\|\varphi\|^{2}p^{-1}+\|\varphi\|^{2}p^{-1/2}(\log p)\] where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\). This gives the result.

We can immediately extend the mean-ergodic theorem for Fourier sheaves to \(\mathrm{L}^{r}\) when \(1\leqslant r\leqslant 2\). For \(r>2\), see Section 7.

**Corollary 4.3**.: _Suppose that \(\mathscr{F}_{p}\) is a Fourier sheaf for all \(p\). Let \(r\in[1,2]\). The endomorphisms_ \[\widetilde{v}_{p}=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ u_{f}^{n}\] _of \(\mathrm{L}^{r}(\mathrm{X},\mu)\) converge to \(0\) as \(p\to+\infty\) in the norm topology._

Proof.: Suppose first that \(\varphi\) is bounded. Since \(r\leqslant 2\) and \(\mu\) is a probability measure, we have \[\|\widetilde{v}_{p}(\varphi)\|_{r}\leqslant\|v_{p}(\varphi)\|_{2}\leqslant\|v_{p}\|\,\|\varphi\|_{2},\] which tends to \(0\) by Proposition 4.1. The general case follows by approximating \(\varphi\in\mathrm{L}^{r}(\mathrm{X},\mu)\) by bounded functions, since \(\|\widetilde{v}_{p}\|\leqslant\mathbf{c}(\mathscr{F}_{p})\ll 1\) for all \(p\) by (3.1).

Recall that we denote by \(\pi\) the ergodic projection \(\mathrm{L}^{1}(\mathrm{X},\mu)\to\mathrm{L}^{1}(\mathrm{X},\mu)\). It restricts to the orthogonal projection on the \(1\)-eigenspace of \(\mathrm{L}^{2}(\mathrm{X},\mu)\). 
The standard mean-ergodic theorem in \(\mathrm{L}^{2}\) implies that \[\frac{1}{p}\sum_{0\leqslant n<p}u_{f}^{n}\to\pi\] in the space of endomorphisms of \(\mathrm{L}^{2}(\mathrm{X},\mu)\) with the topology of pointwise convergence (see, e.g., [14, Th. 2.21]). Recall further that almost Fourier families are defined in Definition 3.7. **Corollary 4.4**.: _Assume that the family \((\mathscr{F}_{p})\) is almost Fourier with mean \(r\geqslant 0\). Then the sequence of endomorphisms \((v_{p})\) of \(\mathrm{L}^{2}(\mathrm{X},\mu)\) converges to \(r\pi\) as \(p\to+\infty\) with respect to the topology of uniform convergence on compact subsets of \(\mathrm{L}^{2}(\mathrm{X},\mu)\)._ Proof.: The assumption implies that \(t_{p}=r+\widetilde{t}_{p}\), where \(\widetilde{t}_{p}\) is the trace function of a Fourier sheaf with conductor \(\leqslant\mathbf{c}(\mathscr{F}_{p})\), and we may combine Proposition 4.1, applied to \(\widetilde{t}_{p}\), with the usual mean-ergodic theorem to derive the convergence of \(v_{p}\) to \(r\pi\) in the topology of pointwise convergence. Moreover, since \(\|v_{p}\|\leqslant\mathbf{c}(\mathscr{F}_{p})\) for all \(p\), the family \((v_{p})\) is equicontinuous, and hence the convergence holds in fact uniformly over compact subsets of \(\mathrm{L}^{2}(\mathrm{X},\mu)\) (see [3, p. 16, th. 1]). **Example 4.5**.: Let \(\mathrm{S}_{p}\) be the set of quadratic residues modulo \(p\). Assume that \(f\) is \(\mu\)-ergodic, so that the \(1\)-eigenspace is spanned by the constant function \(1\) and \(\pi(\varphi)=\int_{\mathrm{X}}\varphi d\mu\) for all \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\). We then have \[\frac{1}{p}\sum_{\begin{subarray}{c}0\leqslant n<p\\ n\in\mathrm{S}_{p}\end{subarray}}\varphi\circ f^{n}\to\frac{1}{2}\int_{ \mathrm{X}}\varphi\,d\mu\] uniformly for \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\) in compact subsets of \(\mathrm{L}^{2}(\mathrm{X},\mu)\). Indeed, the characteristic function of \(\mathrm{S}_{p}\), for \(p\geqslant 3\), is \[\frac{1}{2}(1+\chi_{p})\] where \(\chi_{p}\) is the Legendre character modulo \(p\), and the latter is the trace function of a rank \(1\) non-trivial Kummer sheaf. Using [20, SS6.2], one can extend straightforwardly this result by replacing \(\mathrm{S}_{p}\) with the set \(\mathrm{S}_{q,p}=q(\mathbf{F}_{p})\) of the values modulo \(p\) of a fixed polynomial \(q\in\mathbf{Z}[\mathrm{X}]\) (except that the leading constant \(1/2\) might be replaced by a value depending on \(p\)). ## 5. Weakly-mixing systems It remains to prove Theorem 1.2 under Assumption (b). By linearity, it suffices to treat the case of trace functions of (geometrically irreducible) sheaves modulo primes \(p\in\mathsf{P}\) with bounded conductor. We keep the notation of the previous section concerning the dynamical system and the family \((\mathscr{F}_{p})\) as well as the operator \(u_{f}\). We write \[\alpha_{p}=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n).\] We use a suitable decomposition of the trace function \(t_{p}\). We write \(t_{p}=t_{p}^{\mathrm{AS}}+\widetilde{t}_{p}\), where \(t_{p}^{\mathrm{AS}}\) is the Artin-Schreier component and \(\widetilde{t}_{p}\) is the trace function of a Fourier sheaf \(\widetilde{\mathscr{F}}_{p}\) with bounded conductor. 
Using the Riemann Hypothesis, we can express further \[t_{p}^{\mathrm{AS}}=\alpha_{p}+\widetilde{t}_{p}^{\mathrm{AS}}+\mathrm{O}(p^{- 1/2}),\] for all \(p\), where \(\widetilde{t}_{p}^{\mathrm{AS}}\) is the trace function of an Artin-Schreier sheaf \(\mathscr{A}_{p}\) with no trivial geometrically irreducible component and with bounded conductor, and where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\). **Proposition 5.1**.: _Suppose that the system \((\mathrm{X},\mu,f)\) is ergodic and that the Kronecker factor of \((\mathrm{X},\mu,f)\) is trivial, or in other words that \((\mathrm{X},\mu,f)\) is weakly mixing._ _The endomorphisms_ \[v_{p}-\alpha_{p}\pi=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ u_{f}^{n}-\alpha_{p}\pi\] _of \(\mathrm{L}^{2}(\mathrm{X},\mu)\) converge to \(0\) in the topology of uniform convergence on compact subsets, and_ \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x))-\alpha_{p}\to 0\] _for almost all \(x\)._ Proof.: Using the decomposition \[t_{p}=\alpha_{p}+\widetilde{t}_{p}+\widetilde{t}_{p}^{\mathrm{AS}},\] we have \[\frac{1}{p}\sum_{0\leqslant n<p}\widetilde{t}_{p}(n)\ u_{f}^{n}(\varphi)\to 0\] by Proposition 4.1 applied to the sheaves \(\widetilde{\mathscr{F}}_{p}\), and \[\frac{1}{p}\sum_{0\leqslant n<p}\alpha_{p}\ u_{f}^{n}(\varphi)-\alpha_{p}\pi (\varphi)\to 0\] by the classical mean-ergodic theorem [14, Th. 2.21]. Similarly, the pointwise convergence holds almost surely for these two components by Theorem 1.2 and the classical pointwise ergodic theorem (see, e.g., [14, Th. 2.30]). We now use the assumption that the dynamical system is weakly mixing: a result of Bourgain (the uniform Wiener-Wintner Theorem, see the proof by Assani [2, Th. 6]) then implies that \[\frac{1}{p}\sum_{0\leqslant n<p}e(n\theta)\varphi(f^{n}(x))\to 0\] for almost all \(x\), uniformly for \(\theta\in[0,1]\). Since the trace function of \(\widetilde{t}_{p}^{\mathrm{AS}}\) is a finite linear, combination with coefficients of size \(1\), of additive characters \(n\mapsto e(an/p)\), it follows that \[\frac{1}{p}\sum_{0\leqslant n<p}\widetilde{t}_{p}^{\mathrm{AS}}(n)\varphi(f^{ n}(x))\to 0\] almost surely (although that the number of such additive characters may depend on \(p\), this doesn't affect this argument). This concludes the proof of the pointwise part of Theorem 1.2 for weakly mixing systems. The mean-ergodic convergence follows by the dominated convergence theorem. Besides this proof, we now give an alternative argument for the mean-ergodic theorem in this case, which does not use the uniform Wiener-Wintner Theorem. This can be skipped (we include it since these are informal notes, and the arguments were elaborated before we were aware of this result of Bourgain). We will need the following definition to state the basic technical fact. **Definition 5.2**.: Let \(\theta\in\mathbf{R}/\mathbf{Z}\) and let \((a_{p})_{p}\) be a sequence of integers, indexed by an infinite set of primes. We say that \(a_{p}/p\)_converges emphatically_ to \(\theta\) if \[\limsup_{p\to+\infty}p\Big{|}\frac{a_{p}}{p}-\theta\Big{|}<+\infty,\] and if moreover no subsequence of \((p|\frac{a_{p}}{p}-\theta|)_{p}\) converges to a positive integer. **Remark 5.3**.: If \(a_{p}/p\) converges emphatically to \(\theta\), then \(a_{p}/p\) converges to \(\theta\). If \(\theta=0\), then one sees that the condition means that \(a_{p}=0\) for all but finitely many \(p\). 
For any \(\theta_{0}\in\mathbf{R}/\mathbf{Z}\), there is a sequence \((a_{p})\) indexed by primes such that \((a_{p})\) converges emphatically to \(\theta_{0}\), by taking \(a_{p}/p\) the closest to \(\theta_{0}\), so that \(p|\frac{a_{p}}{p}-\theta_{0}|<1\). **Proposition 5.4**.: _Let \((a_{p})\) be a sequence of integers indexed by an infinite subset of primes. Assume that \(a_{p}/p\) converges to \(\theta_{0}\) in \(\mathbf{R}/\mathbf{Z}\). Let \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\) of norm \(1\) and define_ \[\psi_{p}=\frac{1}{p}\sum_{0\leqslant n<p}e\Big{(}-\frac{na_{p}}{p}\Big{)}\ u_{ f}^{n}(\varphi).\] \((1)\) _Suppose that \(\theta_{0}\neq 0\) in \(\mathbf{R}/\mathbf{Z}\). If the sequence \((\|\psi_{p}\|)\) converges to a non-zero number, then the sequence \((a_{p}/p)\) converges emphatically to \(\theta_{0}\), and \(e(\theta_{0})\) is an eigenvalue of \(u_{f}\)._ (2) _Suppose that \(\theta_{0}=0\) and \(p\nmid a_{p}\) for all \(p\). Then \(\psi_{p}\to 0\)._ Proof.: As before, let \(\nu\) be the spectral measure of the unitary operator \(u_{f}\) relative to the unit vector \(\varphi\). We obtain \[\|\psi_{p}\|^{2}=\int_{\mathbf{R}/\mathbf{Z}}\Big{|}\frac{1}{p}\sum_{0\leqslant n <p}e\Big{(}n\Big{(}\theta-\frac{a_{p}}{p}\Big{)}\Big{)}\Big{|}^{2}d\nu(\theta )=\int_{\mathbf{R}/\mathbf{Z}}\frac{1}{p}\mathrm{F}_{p}\Big{(}\theta-\frac{a_ {p}}{p}\Big{)}d\nu(\theta),\] where \(\mathrm{F}_{p}\) is the Fejer kernel: \(\mathrm{F}_{p}(0)=p\) and \[\mathrm{F}_{p}(\theta)=\frac{1}{p}\Big{(}\frac{\sin(\pi p\theta)}{\sin(\pi \theta)}\Big{)}^{2}\] for \(\theta\neq 0\). Recall that \(0\leqslant\mathrm{F}_{p}\leqslant p\), so \(p^{-1}|\mathrm{F}_{p}|\leqslant 1\). Moreover, \(\mathrm{F}_{p}(\theta)\to 0\) uniformly on the complement of any neighborhood of \(0\) in \(\mathbf{R}/\mathbf{Z}\). Thus, using the limit assumption \(a_{p}/p\to\theta_{0}\), we have \[\mathrm{F}_{p}\Big{(}\theta-\frac{a_{p}}{p}\Big{)}\to 0\] as \(p\to+\infty\) for any fixed \(\theta\neq\theta_{0}\), and a fortiori we have the same limit after dividing the left-hand side by \(p\). We first prove \((2)\), and thus assume that \(\theta_{0}=0\) and \(p\nmid a_{p}\). Then \[\mathrm{F}_{p}(\theta_{0}-\tfrac{a_{p}}{p})=\mathrm{F}_{p}(-\tfrac{a_{p}}{p})=0,\] for all \(p\), hence we obtain \(\|\psi_{p}\|\to 0\) by the dominated convergence theorem. Now we prove \((1)\), and assume that \(\theta_{0}\neq 0\) and that \(\|\psi_{p}\|\) converges to a non-zero number. If the sequence \((p|\frac{a_{p}}{p}-\theta_{0}|)\) is unbounded, then using the assumption \(\theta_{0}\neq 0\) and the formula defining \(\mathrm{F}_{p}\), we see that there is a subsequence of primes such that \[\frac{1}{p}\mathrm{F}_{p}\Big{(}\theta_{0}-\frac{a_{p}}{p}\Big{)}\ll\frac{1}{p^ {2}\big{|}\theta_{0}-\tfrac{a_{p}}{p}\big{|}^{2}}\to 0\] as \(p\to+\infty\). We conclude using the dominated convergence theorem that \(\|\psi_{p}\|^{2}\to 0\) along this subsequence, contrary to the assumption. Thus we have \[\sup_{p\to+\infty}p\Big{|}\frac{a_{p}}{p}-\theta_{0}\Big{|}=\mathrm{C}<+\infty.\] Consider any subsequence of primes where the sequence \((p|\frac{a_{p}}{p}-\theta_{0}|)_{p}\) converges to some real number \(c\geqslant 0\). 
Then \[\frac{1}{p}\mathrm{F}_{p}\Big{(}\theta_{0}-\frac{a_{p}}{p}\Big{)}\to\Big{(} \frac{\sin(\pi c)}{\pi c}\Big{)}^{2},\] hence, along this subsequence, the dominated convergence theorem gives \[\lim_{p\to+\infty}\|\psi_{p}\|^{2}=\Big{(}\frac{\sin(\pi c)}{\pi c}\Big{)}^{2} \nu(\{\theta_{0}\}).\] Since we assumed that the left-hand side exists and is non-zero, we conclude that \(\theta_{0}\) is an atom of \(\nu\). As is well-known, this implies that \(e(\theta_{0})\) is an eigenvalue of \(u_{f}\) (because it implies that the spectral projector relative to \(\{e(\theta_{0})\}\) is non-zero; see, e.g., [4, p. 279, Cor.]). **Corollary 5.5**.: _Suppose that the system \((\mathrm{X},\mu,f)\) is ergodic and that the Kronecker factor of \((\mathrm{X},\mu,f)\) is trivial, or in other words that \((\mathrm{X},\mu,f)\) is weakly mixing. Let_ \[\alpha_{p}=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n).\] _Then the endomorphisms_ \[v_{p}-\alpha_{p}\pi=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ u_{f}^{n}-\alpha _{p}\pi\] _of \(\mathrm{L}^{2}(\mathrm{X},\mu)\) converge to \(0\) in the topology of uniform convergence on compact subsets._ Proof.: Since \(|\alpha_{p}|\leqslant\mathbf{c}(\mathscr{F}_{p})\), the family of endomorphisms \(v_{p}-\alpha_{p}\pi\) is equicontinuous, hence it suffices to prove pointwise convergence to \(0\) for all \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\). We may further assume that \(\varphi\) has norm \(1\). We write \(t_{p}=t_{p}^{\mathrm{AS}}+\widetilde{t}_{p}\), where \(t_{p}^{\mathrm{AS}}\) is the Artin-Schreier component and \(\widetilde{t}_{p}\) is the trace function of a Fourier sheaf \(\widetilde{\mathscr{F}}_{p}\) with bounded conductor. Using the Riemann Hypothesis, we can express further \[t_{p}^{\mathrm{AS}}=\alpha_{p}+\widetilde{t}_{p}^{\mathrm{AS}}+\mathrm{O}(p^{- 1/2}),\] for all \(p\), where \(\widetilde{t}_{p}^{\mathrm{AS}}\) is the trace function of an Artin-Schreier sheaf \(\mathscr{A}_{p}\) with no trivial geometrically irreducible component and with bounded conductor, and where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\). Then we have \[\frac{1}{p}\sum_{0\leqslant n<p}\widetilde{t}_{p}(n)\ u_{f}^{n}(\varphi)\to 0\] by Proposition 4.1 applied to the sheaves \(\widetilde{\mathscr{F}}_{p}\), and \[\frac{1}{p}\sum_{0\leqslant n<p}\alpha_{p}\ u_{f}^{n}(\varphi)-\alpha_{p}\pi( \varphi)\to 0\] by the classical mean-ergodic theorem [14, Th. 2.21]. We are now done unless \(\mathscr{A}_{p}\) has rank \(\geqslant 1\) for an infinite sequence of primes. We now assume this and consider only such primes. Let \[\psi_{p}=\frac{1}{p}\sum_{0\leqslant n<p}\widehat{t}_{p}^{\mathrm{AS}}(n)\ u_{f}^{n}( \varphi).\] The sequence \((\|\psi_{p}\|)_{p}\) is bounded by the maximum of the ranks of the sheaves \(\mathscr{A}_{p}\). Let \(c\geqslant 0\) be a limiting value, obtained for a subsequence of primes which we omit from the notation. Assume that \(c>0\). By passing to a further subsequence, we may assume that the rank of \(\mathscr{A}_{p}\) is a constant \(r\geqslant 1\). We have geometric isomorphisms \[\mathscr{A}_{p}\simeq\bigoplus_{j=1}^{r}\mathscr{L}_{\psi(-a_{p,j}x)}\] for some integers \(0<a_{p,j}<p\). There must exist some fixed \(j\) such that the norm of \[\widetilde{\psi}_{p}=\frac{1}{p}\sum_{0\leqslant n<p}e\Bigl{(}-\frac{a_{p,j}n} {p}\Bigr{)}\ u_{f}^{n}(\varphi)\] does not converge to \(0\), as otherwise we would obtain \(c=0\). 
We may then assume, again by passing to a subsequence, that \(-a_{p,j}/p\) converges to some \(\theta_{0}\in\mathbf{R}/\mathbf{Z}\). Since \(\widetilde{\psi}_{p}\) does not converge to \(0\), we have \(\theta_{0}\neq 0\) by Proposition 5.4, (2). Now, by definition (see [14, Th. 2.36 or SS6.4]), the assumption on \((\mathrm{X},\mu,f)\) means that \(u_{f}\) has no eigenvalue different from \(1\) (and that \(1\) is an eigenvalue of multiplicity one). We have then a contradiction to Proposition 5.4, (1). This means that all limit points of the bounded sequence \((\|\psi_{p}\|)_{p}\) are equal to \(0\), hence it converges to \(0\). Using linearity, this corollary implies Theorem 1.2, (1) under Assumption (b). **Example 5.6**.: Examples of weakly mixing systems \((\mathrm{X},\mu,f)\) are Bernoulli shifts, ergodic automorphisms of compact abelian groups (e.g., elements of \(\mathrm{SL}_{d}(\mathbf{Z})\) acting on \((\mathbf{R}/\mathbf{Z})^{d}\) which have no root of unity as an eigenvalue) or the Gauss map in the theory of continued fractions [33]. Another important class arises in homogeneous dynamics. Let \(\mathrm{G}\) be a locally compact group, \(\Gamma\) a lattice in \(\mathrm{G}\) and consider the action of \(\mathrm{G}\) on \(\mathrm{X}=\Gamma\backslash\mathrm{G}\). Denote by \(\mu_{\mathrm{X}}\) the \(\mathrm{G}\)-invariant probability measure on \(\mathrm{X}\). Assume that the action is mixing [14, SS 8.1]. Let \(x\in\mathrm{G}\) be such that \(x^{n}\to+\infty\) in \(\mathrm{G}\) as \(n\to+\infty\). Then defining \(f(\Gamma y)=\Gamma yx\), we obtain a system \((\mathrm{X},\mu_{\mathrm{X}},f)\) that is mixing by definition, hence weakly mixing. This applies for instance to \(\mathrm{G}=\mathrm{SL}_{2}(\mathbf{R})\) and \(x\) a non-trivial unipotent element. ## 6. The topological case In this section, we prove Theorem 1.3. Thus let \(\mathrm{X}\) be a compact topological space and \(f\colon\mathrm{X}\to\mathrm{X}\) a continuous map, such that the topological entropy \(h(f)\) is zero (see, e.g, [13, SS4] for an introduction to topological entropy). Let \(\varphi\colon\mathrm{X}\to\mathbf{C}\) be continuous and \(x\in\mathrm{X}\). The goal is to find conditions on a family \((\mathscr{F}_{p})\) of sheaves modulo \(p\) with bounded conductor which imply that \[\lim_{p\to+\infty}\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x))=0,\] with no exceptions or sparseness assumption. The claim of Theorem 1.3 is that this is the case when the family consists of Kloosterman sheaves or Kummer sheaves associated to real characters, for which \(t_{p}(n)=\operatorname{Kl}_{2}(n;p)\) or \(t_{p}(n)=(\frac{n}{p})\), respectively. The proof is in fact a straightforward adaptation of the combinatorial argument that shows that decay of multiple correlations of the Mobius function (what is called the Chowla conjecture) implies Sarnak's conjecture, as presented e.g. on Tao's blog [37], and extends to a certain class of sheaves introduced in [23] under the name of "bountiful sheaves" ([23, Def. 1.2]). For a clearer perspective, we make the following definition: **Definition 6.1**.: Let \((\mathscr{F}_{p})\) be a family of sheaves modulo \(p\) with bounded conductor. 
We say that it has _positive monodromy-entropy_ if for any integers \(k\geqslant 1\) and \(\operatorname{H}\geqslant 1\), the number \(\operatorname{N}_{p}(k,\operatorname{H})\) of tuples of non-negative integers \((h_{1},\dots,h_{k},h_{1}^{\prime},\dots,h_{k}^{\prime})\) with \(h_{i}\), \(h_{j}\leqslant\operatorname{H}\) such that \[\bigotimes_{i=1}^{k}[+h_{i}]^{*}\mathscr{F}_{p}\otimes\bigotimes_{i=1}^{k}[+h _{i}^{\prime}]^{*}\operatorname{D}(\mathscr{F}_{p})\] contains a geometrically trivial irreducible component satisfies \[\operatorname{N}_{p}(k,\operatorname{H})\ll(2k)^{k}\operatorname{H}^{k}.\] A key point in this definition is that the number \(\operatorname{N}_{p}(k,\operatorname{H})\) is bounded independently of \(p\), but it is also important that the exponent of \(\operatorname{H}\) is no larger than \(k\). Here is our general statement: **Proposition 6.2**.: _Let \((\mathscr{F}_{p})_{p}\) be a family of sheaves modulo \(p\) with positive monodromy-entropy and bounded conductor._ _Let \(\operatorname{X}\) be a locally compact topological space and \(f\colon\operatorname{X}\to\operatorname{X}\) a continuous map. Assume that either \(\operatorname{X}\) is compact or that \(\operatorname{X}\) is a metric space and \(f\) uniformly continuous._ _Assume that the topological entropy of \(f\) is zero. Then for all bounded3 continuous functions \(\varphi\colon\operatorname{X}\to\mathbf{C}\) and all \(x\in\operatorname{X}\), we have_ Footnote 3: Check \[\lim_{p\to+\infty}\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x))=0. \tag{6.1}\] This implies Theorem 1.3 in view of the following lemma: **Lemma 6.3**.: (1) _If \((\mathscr{F}_{p})_{p}\) is a family of bountiful sheaves, then it has positive monodromy-entropy._ (2) _If \((\mathscr{F}_{p})_{p}\) is a family such that \(\mathscr{F}_{p}\) is a non-trivial Kummer sheaf for all \(p\), then it has positive monodromy-entropy._ Proof.: In case (1), this follows immediately from [23, Def. 1.2,Th. 1.5] and elementary combinatorics, taking into account the definitions of normal and \(r\)-normal tuples (see [23, Def. 1.3]). In case (2), if \(\mathscr{F}_{p}=\mathscr{L}_{\chi}\), where \(\chi\) has order \(d\mid p-1\), with \(d\geqslant 2\), then note that \[\bigotimes_{i=1}^{k}[+h_{i}]^{*}\mathscr{F}_{p}\otimes\bigotimes_{i=1}^{k}[+h_ {i}^{\prime}]^{*}\operatorname{D}(\mathscr{F}_{p})=\mathscr{L}_{\chi( \operatorname{G}/\operatorname{H})},\] where \(\mathrm{G}\) and \(\mathrm{H}\) are the polynomials \[\mathrm{G}=\prod_{i=1}^{k}(\mathrm{X}+h_{i}),\qquad\mathrm{H}=\prod_{j=1}^{k}( \mathrm{X}+h_{j}^{\prime}).\] This contains a geometrically trivial component if and only if \(\mathrm{G}/\mathrm{H}\) is a \(d\)-th power of a rational function. The bound on \(\mathrm{N}_{p}(k,\mathrm{H})\) is therefore clear (the worse case is when \(d=2\), and then the estimate is the same as that for normal tuples, as in [23, Def. 1.5, (1)]). We now prove Proposition 6.2, following closely [37]. The next statement, which provides the analogue of decay of multiple correlations of the Mobius function, could also be derived from the work of Perret-Gentil [32] in most cases of interest. **Proposition 6.4**.: _Let \((\mathscr{F}_{p})_{p}\) be a family of sheaves modulo \(p\) with positive monodromy-entropy and bounded conductor. Let \((\alpha_{n})_{n\geqslant 0}\) be a sequence of complex numbers bounded by \(1\)._ _Fix an integer \(m\geqslant 1\). 
There exists an absolute constant \(\mathrm{C}>0\) such that, for any \(\varepsilon>0\), we have_ \[\frac{1}{p}\Big{|}\Big\{0\leqslant n<p\ \mid\ \Big{|}\sum_{0\leqslant i<m}t_{p}(n+i)\alpha_{i}\Big{|}\geqslant\varepsilon m\Big\}\Big{|}\leqslant\mathrm{C}\exp\Bigl(-\frac{\varepsilon^{2}m}{\mathrm{C}}\Bigr)+\mathrm{O}(\varepsilon^{-\varepsilon^{2}m}p^{-1/2}),\] _where the implied constant depends only on the conductor of \((\mathscr{F}_{p})\)._

Proof.: Let \(k\geqslant 1\) be an integer to be chosen later. We have \[\frac{1}{p}\Big{|}\Big\{0\leqslant n<p\ \mid\ \Big{|}\sum_{0\leqslant i<m}t_{p}(n+i)\alpha_{i}\Big{|}\geqslant\varepsilon m\Big\}\Big{|}\leqslant\frac{1}{(\varepsilon m)^{2k}}\frac{1}{p}\sum_{0\leqslant n<p}\Big{|}\sum_{0\leqslant i<m}t_{p}(n+i)\alpha_{i}\Big{|}^{2k}.\] Since \(|\alpha_{i}|\leqslant 1\), if we expand the right-hand side, we obtain the upper bound \[\frac{1}{(\varepsilon m)^{2k}}\sum_{\begin{subarray}{c}0\leqslant i_{1},\ldots,i_{k}<m\\ 0\leqslant j_{1},\ldots,j_{k}<m\end{subarray}}\Big{|}\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n+i_{1})\cdots t_{p}(n+i_{k})\overline{t_{p}(n+j_{1})\cdots t_{p}(n+j_{k})}\Big{|}.\] For each tuple \((i_{1},\ldots,i_{k},j_{1},\ldots,j_{k})\), the product \(t_{p}(n+i_{1})\cdots\overline{t_{p}(n+j_{k})}\) coincides, outside of at most \(\mathrm{O}_{k}(1)\) values of \(n\), with the value at \(n\) of the trace function of the sheaf \[\bigotimes_{r=1}^{k}[+i_{r}]^{*}\mathscr{F}_{p}\otimes\bigotimes_{s=1}^{k}[+j_{s}]^{*}\mathrm{D}(\mathscr{F}_{p}).\] If this sheaf contains no geometrically trivial irreducible component, then the complete sum of its trace function over \(\mathbf{F}_{p}\) is \(\ll\sqrt{p}\) by the Riemann Hypothesis over finite fields, so the inner average over \(n\) is \(\ll p^{-1/2}\); by the assumption of positive monodromy-entropy, this applies to all but at most \(\mathrm{N}_{p}(k,m)\ll(2k)^{k}m^{k}\) of the tuples. For the remaining tuples, we use the bound \(\ll 1\) which follows from (3.1). In both cases the implied constants depend only on \(k\) and on the conductor of \(\mathscr{F}_{p}\). The quantity above is therefore \[\ll\Big(\frac{2k}{\varepsilon^{2}m}\Big)^{k}+\varepsilon^{-2k}p^{-1/2}.\] Choosing \(k\) proportional to \(\varepsilon^{2}m\), with a small enough constant of proportionality, gives the bound of the statement.

We can now prove Proposition 6.2. We may assume that \(|\varphi|\leqslant 1\). Fix \(\varepsilon>0\), and fix a finite subset \(\mathrm{D}\) of the closed unit disc in \(\mathbf{C}\) such that every point of the disc is at distance \(\leqslant\varepsilon\) of a point of \(\mathrm{D}\). Define \(\varphi_{\varepsilon}\colon\mathbf{Z}_{\geqslant 0}\to\mathbf{C}\) by letting \(\varphi_{\varepsilon}(n)\) be a point of \(\mathrm{D}\) closest to \(\varphi(f^{n}(x))\), so that \(|\varphi_{\varepsilon}|\leqslant 1\) and \(|\varphi(f^{n}(x))-\varphi_{\varepsilon}(n)|\leqslant\varepsilon\) for all \(n\geqslant 0\). For \(m\geqslant 1\), define \(\kappa_{\varepsilon}(m)\geqslant 0\) by the condition that the tuples \[(\varphi_{\varepsilon}(n),\ldots,\varphi_{\varepsilon}(n+m-1)) \tag{6.2}\] take \(\exp(\kappa_{\varepsilon}(m))\) values as \(n\) ranges over the non-negative integers.

The fact that the topological entropy of \(f\) is zero (i.e., the sequence \((\varphi(f^{n}(x)))_{n}\) is deterministic) implies that \[\lim_{m\to+\infty}\frac{\kappa_{\varepsilon}(m)}{m}=0.\]

Let \(p\) be a large prime. For any tuple (6.2), say \((\alpha_{0},\ldots,\alpha_{m-1})\), Proposition 6.4 shows that we have \[\frac{1}{p}\Big{|}\Big\{0\leqslant n<p\ \mid\ \Big{|}\sum_{0\leqslant i<m}t_{p}(n+i)\alpha_{i}\Big{|}\geqslant\varepsilon m\Big\}\Big{|}\leqslant\mathrm{C}\exp\Bigl(-\frac{\varepsilon^{2}m}{\mathrm{C}}\Bigr)+\mathrm{O}(\varepsilon^{-\varepsilon^{2}m}p^{-1/2}),\] and hence \[\frac{1}{p}\Big{|}\Big\{0\leqslant n<p\ \mid\ \Big{|}\sum_{0\leqslant i<m}t_{p}(n+i)\varphi_{\varepsilon}(n+i)\Big{|}\geqslant\varepsilon m\Big\}\Big{|}\leqslant\mathrm{C}\exp\Bigl(-\frac{\varepsilon^{2}m}{\mathrm{C}}+\kappa_{\varepsilon}(m)\Bigr)+\mathrm{O}(\varepsilon^{-\varepsilon^{2}m}e^{\kappa_{\varepsilon}(m)}p^{-1/2}).\] Since \(\kappa_{\varepsilon}(m)/m\to 0\), we may take \(m\) large enough (depending on \(\varepsilon\)) so that this implies \[\frac{1}{p}\Big{|}\Big\{0\leqslant n<p\ \mid\ \Big{|}\sum_{0\leqslant i<m}t_{p}(n+i)\varphi_{\varepsilon}(n+i)\Big{|}\geqslant\varepsilon m\Big\}\Big{|}\leqslant\varepsilon+o(1)\] as \(p\to+\infty\). 
But then we deduce that \[\Big{|}\frac{1}{p}\sum_{0\leqslant n<p}\frac{1}{m}\sum_{0\leqslant i<m}t_{p}( n+i)\varphi_{\varepsilon}(n+i)\Big{|}\leqslant 2\varepsilon+o(1)\] because \(|\varphi_{\varepsilon}|\leqslant 1\) (write the average as the sum of a term where it is \(>\varepsilon\), handled by the above inequality, and one where it is \(\leqslant\varepsilon\), which has a contribution \(\leqslant\varepsilon\)). Now notice that for \(0\leqslant i<m\), we have \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n+i)\varphi_{\varepsilon}(n+i)=\frac{1} {p}\sum_{0\leqslant n<p}t_{p}(n)\varphi_{\varepsilon}(n)+\mathrm{O}\!\left( \frac{m\,\mathbf{c}(\mathscr{F}_{p})}{p}\right)\] with an absolute implied constant, so we get \[\Big{|}\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi_{\varepsilon}(n)\Big{|} \leqslant 2\varepsilon+o(1),\] hence \[\Big{|}\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x))\Big{|} \leqslant 3\varepsilon+o(1).\] The limit (6.1) follows. ## 7. Mean-ergodic theorems in \(\mathrm{L}^{r}\) The goal of this section is to see if one can extend the mean-ergodic theorem to the spaces \(\mathrm{L}^{r}(\mathrm{X},\mu)\) when \(r>2\). We will achieve this goal, however, only for sheaves satisfying the same extra condition which appeared in the previous section. **Proposition 7.1**.: _Suppose that \((\mathscr{F}_{p})\) is a family of sheaves with positive monodromy-entropy. Let \(r>2\) be fixed. The endomorphisms_ \[\widetilde{v}_{p}=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ u_{f}^{n}\] _of \(\mathrm{L}^{r}(\mathrm{X},\mu)\) converge to \(0\) as \(p\to+\infty\)._ Proof.: Using monotonicity, as in Corollary 4.3, it is enough to prove this when \(r=2k\) for some integer \(k\geqslant 2\) to deduce it for \(r\leqslant 2k\). Let \(\varphi\in\mathrm{L}^{2k}(\mathrm{X},\mu)\) and denote \(\psi_{p}=\widetilde{v}_{p}(\varphi)\). We have \[\|\psi_{p}\|_{2k}^{2k}=\frac{1}{p^{2k}}\sum_{\begin{subarray}{c}n _{1},\dots,n_{k}\\ 0\leqslant n_{i}<p\end{subarray}}\sum_{\begin{subarray}{c}m_{1},\dots,m_{k} \\ 0\leqslant m_{j}<p\end{subarray}}t_{p}(n_{1})\cdots t_{p}(n_{k})\overline{t_{p} (m_{1})\cdots t_{p}(m_{k})}\\ \langle u_{f}^{n_{1}}(\varphi)\cdots u_{f}^{n_{k}}(\varphi),u_{f} ^{m_{1}}(\varphi)\cdots u_{f}^{m_{k}}(\varphi)\rangle.\] Since \(u_{f}\) is isometric, we have \[\langle u_{f}^{n_{1}}(\varphi)\cdots u_{f}^{n_{k}}(\varphi),u_{f}^{m_{1}}( \varphi)\cdots u_{f}^{m_{k}}(\varphi)\rangle=\langle\varphi\cdots u_{f}^{n_{ k}-n_{1}}(\varphi),u_{f}^{m_{1}-n_{1}}(\varphi)\cdots u_{f}^{m_{k}-n_{1}}( \varphi)\rangle.\] Hence, we may sum over \(h=n_{1}\) first, obtaining \[\|\psi_{p}\|_{2k}^{2k}=\frac{1}{p^{2k}}\sum_{n_{2},\dots,n_{k}} \sum_{m_{1},\dots,m_{k}}\langle\varphi\ u_{f}^{n_{2}}(\varphi)\cdots u_{f}^{n_ {k}}(\varphi),u_{f}^{m_{1}}(\varphi)\cdots u_{f}^{m_{k}}(\varphi)\rangle\\ \sum_{h}t_{p}(h)t_{p}(h+n_{1})\cdots t_{p}(h+n_{k})\overline{t_{p }(h+m_{1})\cdots t_{p}(h+m_{k})},\] where the sum is over integers \(0\leqslant h<p\) such that \[0\leqslant h+n_{i}<p,\qquad 0\leqslant h+m_{j}<p\] for \(2\leqslant i\leqslant k\) and \(1\leqslant j\leqslant k\), respectively. This is a sum over an interval of length \(<p\). The assumption on \(\mathscr{F}_{p}\) then implies that \[\sum_{h}t_{p}(h)t_{p}(h+n_{1})\cdots t_{p}(h+n_{k})\overline{t_{p}(h+m_{1}) \cdots t_{p}(h+m_{k})}\ll p^{1/2}(\log p),\] where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\) and \(k\), unless \((0,n_{2},\dots,n_{k})\) is a permutation of \((m_{1},\dots,m_{k})\) (see [23, Th. 1.5]). 
This occurs for \(\ll p^{k-1}\) tuples \((n_{2},\dots,m_{k})\), and for these we have a bound \(\ll p\) for the sum, where the implied constant depends only on \(\mathbf{c}(\mathscr{F}_{p})\) and \(k\). Thus we derive \[\|\psi_{p}\|_{2k}^{2k}\ll p^{-1/2}(\log p)+p^{-k-1}.\] This shows that \(\|\widetilde{v}_{p}\|\to 0\) in the space of endomorphisms of \(\mathrm{L}^{2k}(\mathrm{X},\mu)\), and concludes the proof. **Remark 7.2**.: Analyzing the proof of the proposition further, we can reach a stronger conclusion and indeed derive a slightly stronger pointwise statement than the one in Theorem 1.2, although under assumptions that are reasonable in principle, but difficult to check. We take the case \(k=2\) of the proposition, and rewrite the first steps above: for \(\varphi\in\mathrm{L}^{4}(\mathrm{X},\mu)\), we have \[\|\psi_{p}\|_{4}^{4}=\frac{1}{p^{4}}\sum_{b}\sum_{c,d}\langle\varphi\ u_{f}^{b}( \varphi),u_{f}^{c}(\varphi)u_{f}^{d}(\varphi)\rangle\sum_{a}t_{p}(a)t_{p}(a+b) \overline{t_{p}(a+c)t_{p}(a+d)}.\] We rewrite the sum in the form \[\|\psi_{p}\|_{4}^{4}=\frac{1}{p^{7/2}}\sum_{c,d}\langle\varphi\ u_{f}^{b}( \varphi),u_{f}^{c}(\varphi)u_{f}^{d}(\varphi)\rangle\tau_{c,d}(b)\] where \[\tau_{c,d}(b)=\frac{1}{\sqrt{p}}\sum_{a}t_{p}(a)t_{p}(a+b)\overline{t_{p}(a+c) t_{p}(a+d)},\] hence \[\|\psi_{p}\|_{4}^{4}=\frac{1}{p^{5/2}}\sum_{c,d}\langle w_{p,c,d}(\varphi), \bar{\varphi}\ u_{f}^{c}(\varphi)u_{f}^{d}(\varphi)\rangle\] where \[w_{p,c,d}(\varphi)=\frac{1}{p}\sum_{0\leqslant b<p}\tau_{c,d}(b)\,\varphi \circ f^{b}.\] Now assume that \(\varphi\in\mathrm{L}^{6}(\mathrm{X},\mu)\), which implies that \(\bar{\varphi}\ u_{f}^{c}(\varphi)u_{f}^{d}(\varphi)\) belongs to \(\mathrm{L}^{2}(\mathrm{X},\mu)\) and has norm \(\ll 1\). Assume moreover that the family \((\mathscr{F}_{p})\) satisfies the condition that _for most \((c,d)\), with \(\ll p\) exceptions, the function \(\tau_{c,d}\) is a trace function of a Fourier sheaf, with weights \(\leqslant 0\)_. Note then that, by Sawin's Quantitative Sheaf Theory (see [35, Th. 1.1, Cor. 7.4]), the conductor of \(\tau_{c,d}\) is \(\ll 1\) for all \(p\). Under these conditions, by Proposition 4.1, we obtain \[\|w_{p,c,d}(\varphi)\|_{2}^{2}\ll p^{-1/2}\log p,\] for most \((c,d)\), and hence conclude that \[\|\psi_{p}\|_{4}^{4}\ll p^{-1}\log p.\] If we assume that the family \((\mathscr{F}_{p})\) is indexed by a set of primes \(\mathsf{P}\) such that \[\sum_{p\in\mathsf{P}}\frac{\log p}{p}<+\infty,\] then this result means that \[\sum_{p}\|\psi_{p}\|_{4}^{4}<+\infty,\] or in other words that the non-negative function \[\sum_{p}|\psi_{p}|^{4}\] is integrable on \(\mathrm{X}\). This implies that \(\psi_{p}=\widetilde{\upsilon}_{p}(\varphi)\) converges almost everywhere to \(0\), a pointwise theorem. This is a bit stronger than the pointwise part of Theorem 1.2, but the latter does not require any extra condition, and hence we do not pursue the verification that the assumption above holds in reasonable situations. ## 8. Maximal inequalities in \(\mathrm{L}^{2}\) We now consider maximal inequalities in \(\mathrm{L}^{2}\), i.e., we endeavor to estimate functions like \[\mathrm{M}\varphi\colon x\mapsto\sup_{p}\Bigl{|}\frac{1}{p}\sum_{0\leqslant n<p }t_{p}(n)\ \varphi(f^{n}(x))\Bigr{|}\] in \(\mathrm{L}^{2}\)-norm, where we have fixed the dynamical system \((\mathrm{X},\mu,f)\) and the family of sheaves \((\mathscr{F}_{p})\) with bounded conductor, and with trace functions \(t_{p}\). 
In fact, we will need to restrict the supremum to sparse subsets of the primes, and so we use the notation \[\mathrm{M}_{\mathsf{P}}(\varphi)(x)=\sup_{p\in\mathsf{P}}\Bigl{|}\frac{1}{p} \sum_{0\leqslant n<p}t_{p}(n)\ \varphi(f^{n}(x))\Bigr{|}\] for any set \(\mathsf{P}\) of primes. We write \[s(\mathsf{P})=\sum_{p\in\mathsf{P}}\frac{(\log p)^{2}}{p},\] which is finite if and only if \(\mathsf{P}\) is sparse. **Proposition 8.1**.: _Suppose that \((\mathscr{F}_{p})_{p}\) is an almost Fourier family (Definition 3.7) with mean \(r\geqslant 0\). Suppose further that \(\mathsf{P}\) is sparse. Let \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\). We have_ \[\|\mathrm{M}_{\mathsf{P}}\varphi\|_{2}\leqslant\mathrm{C}_{2}\|\varphi\|_{2}\] _for some constant \(\mathrm{C}_{2}\) depending only on the conductor of \((\mathscr{F}_{p})\) and on \(s(\mathsf{P})\)._ The method that we use is a direct adaptation of that of Bourgain [5, SS2, SS3] (it is in fact much simpler). In the remainder of this section, we fix the sparse set \(\mathsf{P}\), and we will omit it from the notation unless it is required for context. The first step is to transfer the problem to \(\mathbf{Z}\). For any bounded function \(\varpi\) on \(\mathbf{Z}\), we define \(\widetilde{\mathrm{M}}(\varpi)\colon\mathbf{Z}\to\mathbf{C}\) by \[\widetilde{\mathrm{M}}(\varpi)(k)=\sup_{p\in\mathsf{P}}\ \Bigl{|}\frac{1}{p}\sum_{0 \leqslant n<p}t_{p}(n)\ \varpi(k+n)\Bigr{|}.\] **Lemma 8.2**.: _Suppose that there exists \(\mathrm{C}_{3}\geqslant 0\), depending only on the conductor of \((\mathscr{F}_{p})\) and on \(s(\mathsf{F})\), such that_ \[\|\widetilde{\mathrm{M}}\varpi\|_{2}\leqslant\mathrm{C}_{3}\|\varpi\|_{2}\] _for all \(\varpi\) bounded on \(\mathbf{Z}\). Then Proposition 8.1 holds with \(\mathrm{C}_{2}=\mathrm{C}_{3}\)._ Proof.: We use the classical method of transfer to \(\mathbf{Z}\). It suffices to prove that for all \(\mathrm{P}\geqslant 2\) and all \(\varphi\in\mathrm{L}^{\infty}(\mathrm{X},\mu)\), we have \[\|\mathrm{M}^{\mathrm{P}}\varphi\|_{2}\leqslant 2\mathrm{C}_{3}\|\varphi\|_{2}\] where \[\mathrm{M}^{\mathrm{P}}(\varphi)=\sup_{p\leqslant\mathrm{P}}\Bigl{|}\frac{1}{p }\sum_{0\leqslant n<p}t_{p}(n)\ (\varphi\circ f^{n})\Bigr{|}\in\mathrm{L}^{2}(\mathrm{X},\mu).\] Fix such a \(\mathrm{P}\) and \(\varphi\) bounded and measurable on \(\mathrm{X}\). Let \(\lambda>1\) be a parameter and \(\mathrm{Q}=\lambda\mathrm{P}\). Let \(x\in\mathrm{X}\). Define \(\widetilde{\varphi}\colon\mathbf{Z}\to\mathbf{C}\) by \[\widetilde{\varphi}(n)=\begin{cases}\varphi(f^{n}(x))&\text{ if }0\leqslant n< \mathrm{Q}\\ 0&\text{ otherwise.}\end{cases}\] Note that for any prime \(p\leqslant\mathrm{P}\) and \(n\), \(k\) such that \(0\leqslant n+k<\mathrm{Q}\), we have \[\widetilde{\varphi}(n+k)=\varphi(f^{n+k}(x))=\varphi(f^{n}(f^{k}(x)))\] so that for \(0\leqslant k<\mathrm{Q}-\mathrm{P}\), we get \[\begin{split}\mathrm{M}^{\mathrm{P}}(\varphi)(f^{k}(x))& =\sup_{p\leqslant\mathrm{P}}\Bigl{|}\frac{1}{p}\sum_{0\leqslant n <p}t_{p}(n)\ (\varphi(f^{n}(f^{k}(x))))\Bigr{|}\\ &=\sup_{p\leqslant\mathrm{P}}\ \Bigl{|}\frac{1}{p}\sum_{0\leqslant n <p}t_{p}(n)\ \widetilde{\varphi}(k+n)\Bigr{|}=\widetilde{\mathrm{M}}^{\mathrm{P}}( \widetilde{\varphi})(k),\end{split} \tag{8.1}\] say. By assumption, we have \(\|\widetilde{\mathrm{M}}^{\mathrm{P}}(\widetilde{\varphi})\|_{2}\leqslant\| \widetilde{\mathrm{M}}(\widetilde{\varphi})\|_{2}\leqslant\mathrm{C}_{3}\| \widetilde{\varphi}\|_{2}\). 
This means that \[\sum_{k\in\mathbf{Z}}|\widetilde{\mathrm{M}}^{\mathrm{P}}(\widetilde{\varphi} )(k)|^{2}\leqslant\mathrm{C}_{3}^{2}\sum_{n\in\mathbf{Z}}|\widetilde{\varphi} (n)|^{2}=\mathrm{C}_{3}^{2}\sum_{0\leqslant n<\mathrm{Q}}|\varphi(f^{n}(x))|^ {2},\] hence by (8.1), we obtain \[\sum_{0\leqslant k<\mathrm{Q}-\mathrm{P}}|\mathrm{M}^{\mathrm{P}}(\varphi)(f^ {k}(x))|^{2}\leqslant\mathrm{C}_{3}^{2}\sum_{0\leqslant n<\mathrm{Q}}|\varphi( f^{n}(x))|^{2}.\] This inequality is valid for all \(x\in\mathrm{X}\). After integrating over \(\mathrm{X}\), we get \[\sum_{0\leqslant k<\mathrm{Q}-\mathrm{P}}\|\mathrm{M}^{\mathrm{P}}(\varphi) \circ f^{k}\|_{2}^{2}\leqslant\mathrm{C}_{3}^{2}\sum_{0\leqslant n<\mathrm{Q }}\|\varphi\circ f^{n}\|_{2}^{2}.\] But \(\mu\) is \(f\)-invariant, and therefore both sums are sums of equal terms, which means that \[(\lambda-1)\mathrm{P}\|\mathrm{M}^{\mathrm{P}}(\varphi)\|^{2}\leqslant \mathrm{C}_{3}^{2}\lambda\mathrm{P}\|\varphi\|^{2}.\] The result follows by taking \(\lambda\to+\infty\). Proof of Proposition 8.1.: We will prove Lemma 8.2. Since \((\mathscr{F}_{p})_{p}\) is an almost Fourier family of mean \(r\), we have \[t_{p}(n)=r+\tau_{p}(n)\] where \(\tau_{p}\) is the trace function of Fourier sheaves with bounded conductor. Let \(\varpi\) be a function on \(\mathbf{Z}\) with finite support. We denote by \[\widehat{\varpi}(\theta)=\sum_{k\in\mathbf{Z}}\varpi(k)e(-k\theta),\] and \[\widehat{v}_{p}(\theta)=\frac{1}{p}\sum_{0\leqslant n<p}\tau_{p}(n)e(n\theta)\] the Fourier transforms on \(\mathbf{R}/\mathbf{Z}\) of the function \(\varpi\) and of the discrete measures corresponding to the average \(\tau_{p}(n)\). We have \[\frac{1}{p}\sum_{0\leqslant n<p}\tau_{p}(n)\varpi(n+k)=\int_{\mathbf{R}/ \mathbf{Z}}\widehat{\varpi}(\theta)\widehat{v}_{p}(\theta)e(k\theta)d\theta \tag{8.2}\] for all \(k\in\mathbf{Z}\). For any \(k\in\mathbf{Z}\), we have \[\sup_{p}\frac{1}{p}\Bigl{|}\sum_{0\leqslant n<p}t_{p}(n)\varpi(n+k)\Bigr{|} \leqslant\sup_{p}\frac{1}{p}\Bigl{|}\sum_{0\leqslant n<p}\varpi(n+k)\Bigr{|} +\Bigl{(}\sum_{p}\Bigl{|}\frac{1}{p}\sum_{0\leqslant n<p}\tau_{p}(n)\varpi(n+k )\Bigr{|}^{2}\Bigr{)}^{1/2}\] (where \(p\) always ranges over \(\mathsf{P}\)). Hence \[\|\widetilde{\mathrm{M}}(\varpi)\|_{2}^{2}=\sum_{k\in\mathbf{Z}}\sup_ {p}\frac{1}{p}\Big{|}\!\sum_{0\leqslant n<p}t_{p}(n)\varpi(n+k)\Big{|}^{2}\\ \leqslant 2\sum_{k\in\mathbf{Z}}\sup_{p}\frac{1}{p}\Big{|}\!\sum_{0 \leqslant n<p}\varpi(n+k)\Big{|}^{2}+2\sum_{k\in\mathbf{Z}}\sum_{p}\Bigl{|} \frac{1}{p}\sum_{0\leqslant n<p}\tau_{p}(n)\varpi(n+k)\Bigr{|}^{2}.\] The first expression is \(\leqslant\mathrm{C}_{3}^{\prime}\|\varpi\|_{2}^{2}\) by the classical maximal ergodic theorem in \(\mathrm{L}^{2}\) for functions on \(\mathbf{Z}\) (see [14, SS 2.6]). 
By (8.2) and the Plancherel formula, we estimate the second one as follows: \[\sum_{k\in\mathbf{Z}}\sum_{p}\Bigl{|}\frac{1}{p}\sum_{0\leqslant n <p}\tau_{p}(n)\varpi(n+k)\Bigr{|}^{2} =\sum_{k\in\mathbf{Z}}\sum_{p}\Bigl{|}\int_{\mathbf{R}/\mathbf{Z}} \widehat{\varpi}(\theta)\widehat{v}_{p}(\theta)e(k\theta)d\theta\Bigr{|}^{2}\] \[=\sum_{p}\int_{\mathbf{R}/\mathbf{Z}}|\widehat{\varpi}(\theta) \widehat{v}_{p}(\theta)|^{2}d\theta\leqslant\Bigl{(}\sum_{p}\|\widehat{v}_{p} (\theta)\|_{\infty}^{2}\Bigr{)}\|\varpi\|_{2}^{2}.\] Applying Corollary 3.10, we have \[\sum_{p}\|\widehat{v}_{p}(\theta)\|_{\infty}^{2}\ll\sum_{p\in\mathsf{P}}\frac {(\log p)^{2}}{p}=s(\mathsf{P}),\] where the implied constant depends only on the conductor of \((\mathscr{F}_{p})\), and the result follows. ## 9. Pointwise ergodic theorem We give in this section a second proof of Theorem 1.2, (2), for almost Fourier families, arguing using a transfer principle as in the previous section. This is obviously more complicated than our first proof, but it is interesting that the sparseness condition which arises is the same as before. We consider a dynamical system \((\mathrm{X},\mu,f)\) and a family of sheaves \((\mathscr{F}_{p})\) with bounded conductor as in the previous section, with trace functions \(t_{p}\), defined for \(p\) in a sparse set \(\mathsf{P}\). We assume that the family is almost Fourier (Definition 3.7) of mean \(r\geqslant 0\). **Proposition 9.1**.: _Let \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\). Then_ \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ \varphi(f^{n}(x))\] _converges for \(\mu\)-almost all \(x\in\mathrm{X}\). If \(r=0\), or in other words, if all sheaves \(\mathscr{F}_{p}\) are Fourier sheaves, or if \((\mathrm{X},\mu,f)\) is weakly mixing, then the limit is zero._ For the proof, we reduce to the shift by means of an intermediate inequality. For a function \(\varpi\) on \(\mathbf{Z}\), we write as before \[u_{p}(\varpi)(k)=\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varpi(n+k).\] **Lemma 9.2**.: _Assume that for any infinite subset \(\mathsf{Q}\subset\mathsf{P}\), there exists a constant \(\mathrm{C}_{4}\) such that, for any function \(\varpi\) on \(\mathbf{Z}\) with bounded support, we have_ \[\sum_{\ell\in\mathsf{Q}}\Bigl{\|}\sup_{\ell<p<\ell^{+}}|u_{p}(\varpi)-u_{\ell^ {+}}(\varpi)|\,\Bigr{\|}_{2}^{2}\leqslant\mathrm{C}_{4}\|\varpi\|^{2},\] _where \(\ell^{+}\) is the element following \(\ell\) in the subset \(\mathsf{Q}\), and \(p\) ranges over elements in \(\mathsf{P}\). Then Proposition 9.1 holds._ Proof.: This has two steps. First, in the same manner that Lemma 8.2 is proved, the statement, if it holds, implies the corresponding bound \[\sum_{\ell\in\mathsf{Q}}\Bigl{\|}\sup_{\ell<p<\ell^{+}}|u_{p}(\varphi)-u_{\ell ^{+}}(\varphi)|\,\Bigr{\|}_{2}^{2}\leqslant\mathrm{C}_{4}\|\varphi\|^{2},\] for any \(\varphi\in\mathrm{L}^{2}(\mathrm{X},\mu)\), for any infinite subset \(\mathsf{Q}\). Next, one argues by contradiction that this last set of bounds, for a given \(\varphi\), implies that \(u_{p}(\varphi)\) converges \(\mu\)-almost everywhere. Finally, we prove the auxiliary bounds. **Proposition 9.3**.: _Let \(\mathsf{Q}\subset\mathsf{P}\) be an infinite subset. 
There exists a constant \(\mathrm{C}_{4}\) such that, for any function \(\varpi\) on \(\mathbf{Z}\) with bounded support, we have_ \[\sum_{\ell\in\mathsf{Q}}\Bigl{\|}\sup_{\ell<p<\ell^{+}}|u_{p}(\varpi)-u_{\ell^{+}}(\varpi)|\,\Bigr{\|}_{2}^{2}\leqslant\mathrm{C}_{4}\|\varpi\|^{2}.\] Proof.: Writing \(t_{p}(n)=r+\tau_{p}(n)\), where \(\tau_{p}(n)\) is the trace function of a Fourier sheaf of bounded conductor, and applying the known behavior from the standard pointwise ergodic theory to the first term, we are reduced to showing that \[\sum_{\ell\in\mathsf{Q}}\Bigl{\|}\sup_{\ell<p<\ell^{+}}|\nu_{p}(\varpi)-\nu_{\ell^{+}}(\varpi)|\,\Bigr{\|}_{2}^{2}\leqslant\mathrm{C}_{5}\|\varpi\|^{2}\] for some constant \(\mathrm{C}_{5}\), where \(\nu_{p}\) is the averaging operator for the trace function \(\tau_{p}\). The left-hand side of the inequality is equal to \[\sum_{\ell\in\mathsf{Q}}\sum_{k\in\mathbf{Z}}\Bigl{(}\sup_{\ell<p<\ell^{+}}|\nu_{p}(\varpi)(k)-\nu_{\ell^{+}}(\varpi)(k)|\Bigr{)}^{2}\ll\sum_{\ell\in\mathsf{Q}}\sum_{k\in\mathbf{Z}}\sum_{\ell<p<\ell^{+}}|\nu_{p}(\varpi)(k)|^{2}+\sum_{\ell\in\mathsf{Q}}\sum_{k\in\mathbf{Z}}|\nu_{\ell^{+}}(\varpi)(k)|^{2}\] where the implied constant is absolute. The first sum here is larger than the second, and it is at most \[\sum_{p\in\mathsf{P}}\sum_{k\in\mathbf{Z}}|\nu_{p}(\varpi)(k)|^{2}=\sum_{p\in\mathsf{P}}\int_{\mathbf{R}/\mathbf{Z}}|\widehat{\varpi}(\theta)\widehat{\nu}_{p}(\theta)|^{2}d\theta\] by the Plancherel formula and (8.2). Using Corollary 3.10, we obtain the desired bound. ## 10. Is sparseness necessary? It is now natural to ask whether the restriction to sparse sets of primes is necessary in the maximal and pointwise ergodic theorems, or not. The first remark is that, for a classical (even weighted) sequence of ergodic averages \[u_{\mathrm{N}}(x)=\frac{1}{\mathrm{N}}\sum_{0\leqslant n<\mathrm{N}}w(n)\varphi(f^{n}(x)),\] convergence along sparse sequences of \(\mathrm{N}\) implies convergence of the whole sequence. For instance, assume that there is convergence to \(0\) for \(\mathrm{N}\) growing at least like a geometric progression with ratio \(1+\delta\) for some \(\delta>0\), and assume that \(w\) and \(\varphi\) are bounded. For an arbitrary \(\mathrm{N}\geqslant 1\), pick \(\mathrm{M}\geqslant 1\) such that \(\mathrm{M}\leqslant\mathrm{N}<(1+\delta)\mathrm{M}\). We obtain an obvious upper bound \[|u_{\mathrm{N}}|\leqslant|u_{\mathrm{M}}|+\frac{\mathrm{C}\delta\mathrm{M}}{\mathrm{N}}\leqslant|u_{\mathrm{M}}|+\delta\mathrm{C}\] for some constant \(\mathrm{C}\geqslant 0\), so that \[\limsup_{\mathrm{N}\to+\infty}|u_{\mathrm{N}}|\leqslant\delta\mathrm{C},\] and if this holds for any \(\delta>0\), we obtain \(u_{\mathrm{N}}\to 0\). Here the key point is that the restriction of the weight \(w(n)\) to a shorter interval is the same as the weight used for the average over that interval - this property fails for "triangular" averages like those appearing in our situation. Here is an abstract example which could be a guide to an example where almost everywhere convergence is _not true_ in our setting.4 Let \(\mathrm{X}\) be the product over primes \(\ell\) of copies of \(\mathbf{R}/\mathbf{Z}\), viewed as a compact topological group and as a probability space with its Haar measure \(\mu\). For \(\ell\) prime, fix an arbitrary measurable subset \(\mathrm{A}_{\ell}\subset\mathbf{R}/\mathbf{Z}\) with measure \((\log\ell)^{2}/\ell\) (in \(\mathbf{R}/\mathbf{Z}\)). 
Footnote 4: This is related to the well-known fact that convergence almost everywhere is not convergence with respect to any topology. Now, for \(p\) prime, let \(\varphi_{p}\) be the characteristic function of the set \(\mathrm{Y}_{p}\subset\mathrm{X}\) of all \((\theta_{\ell})\in\mathrm{X}\) such that the \(p\)-component \(\theta_{p}\) belongs to \(\mathrm{A}_{p}\). Thus \(\mu(\mathrm{Y}_{p})=(\log p)^{2}/p\). We claim that: 1. the sequence \((\varphi_{p})\) does _not_ converge almost everywhere; 2. but, for any _sparse_ set of primes \(\mathsf{P}\), the subsequence \((\varphi_{p})_{p\in\mathsf{P}}\) converges almost everywhere to \(0\). Indeed, the first assertion results from the independence of the functions \(\varphi_{p}\) (in probabilistic terms, they are independent random variables on \(\mathrm{X}\)) and from the non-trivial direction of the Borel-Cantelli lemma, since \[\sum_{p}\mu(\mathrm{Y}_{p})=\sum_{p}\frac{(\log p)^{2}}{p}=+\infty,\qquad\sum_{p}\mu(\mathrm{X}\smallsetminus\mathrm{Y}_{p})=+\infty,\] so that almost every \(x\in\mathrm{X}\) belongs to infinitely many of the sets \(\mathrm{Y}_{p}\) and to infinitely many of the complements \(\mathrm{X}\smallsetminus\mathrm{Y}_{p}\), and hence \((\varphi_{p}(x))\) takes both values \(0\) and \(1\) infinitely often. The second assertion follows from the easy direction of the Borel-Cantelli lemma, since \(\sum_{p\in\mathsf{P}}\mu(\mathrm{Y}_{p})<+\infty\) when \(\mathsf{P}\) is sparse. One could hope to imitate this behavior with ergodic averages of trace functions, if these averages could be made close to \(1\) on a set of measure roughly \((\log p)^{2}/p\) and close to \(0\) elsewhere (it is enough that the two possibilities be separated enough that both occurring infinitely often excludes convergence). It does not seem impossible to have such a configuration, especially since the trace function is _a priori_ ours to select, with the condition that the conductors remain bounded, which might make it possible to exploit the frequent rough independence of primes. **Remark 10.1**.: (1) We would also show that convergence does not hold almost surely if the ergodic average converges to \(1\) with probability \(1/p\) (instead of \((\log p)^{2}/p\)), which would allow for convergence over all sets of primes with \[\sum_{p\in\mathsf{P}}\frac{1}{p}<+\infty.\] This configuration is maybe more likely to be possible. (2) If we have a system where the ergodic averages converge _everywhere_ for all sparse subsets of the primes, then they converge everywhere. (Indeed, the limit \(\psi\) would have to be independent of the sparse subset, since the union of two sparse sets is sparse, and then by contraposition, if the sequence was not convergent to \(\psi\), some subsequence would avoid a fixed neighborhood of \(\psi\), and some further subsequence would be sparse.) The following is currently the closest example that we know. It doesn't quite address the main question, since it involves non-Fourier sheaves and systems with non-trivial Kronecker factors. Let \(\mathrm{X}=(\mathbf{R}/\mathbf{Z})^{2}\) (viewed as column vectors) with the Haar measure \(\mu\). Let \(f(x,y)=(x+y,y)\), so that \(f\) is the action of an \(\mathrm{SL}_{2}(\mathbf{Z})\)-matrix, and therefore preserves \(\mu\). For \((x,y)\in\mathrm{X}\), we have \[f^{n}(x,y)=(x+ny,y).\] Define \(\varphi\colon\mathrm{X}\to\mathbf{C}\) by \(\varphi(x,y)=e(x)\). 
The ergodic averages are therefore \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x,y))=\frac{e(x)}{p} \frac{\sin(\pi p(y-a_{p}/p))}{\sin(\pi(y-a_{p}/p))}e\Big{(}\frac{(p-1)}{2}(y-a _{p}/p)\Big{)}.\] **Lemma 10.2**.: _There exists a sequence \((a_{p})_{p}\) of integers such that \(0\leqslant a_{p}<p\) for all primes \(p\), with the following property: for almost all \(\theta\in\mathbf{R}/\mathbf{Z}\), there exist infinitely many \(p\) such that \(|\theta-a_{p}/p|\leqslant 1/(100p)\)._ Proof.: Here is one quick proof using fairly standard (but non-trivial) facts about the distribution of primes. Another more elementary argument is explained in the note [29] for the simple proof, which also has some more discussion of this somewhat unusual diophantine approximation statement. Let \(\mathscr{A}\) be the product over primes of the sets \(\{0,\ldots,p-1\}\); it is a probability space with the product of the uniform probability measures. Let \(c=1/100\) (any other positive constant would work). For any prime \(p\) and \(\boldsymbol{a}\in\mathscr{A}\), we write \(\mathrm{I}_{p}(\boldsymbol{a})=[a_{p}/p-c/p,a_{p}/p+c/p]\), viewed as random intervals on \(\mathscr{A}\). Let \(x\in[0,1]\). We then have \[\mathbf{P}(x\in\mathrm{I}_{p})=\frac{1}{p}\sum_{\begin{subarray}{c}0\leqslant a <p\\ |x-a/p|<c/p\end{subarray}}1\] and hence \(\mathbf{P}(x\in\mathrm{I}_{p})\) is either \(0\) or \(1/p\), depending on whether there exists an integer \(a\) such that the fractional part of \(xp\) is \(<c\), or not. It is known that if \(x\) is irrational, then we have \[\sum_{\{xp\}<c}\frac{1}{p}=+\infty \tag{10.1}\] (precisely, this follows by summation by parts from the more precise results, first proved by Vinogradov, which give an asymptotic formula with main term \(c\pi(\mathrm{X})\) for the number of primes \(p\leqslant\mathrm{X}\) satisfying \(\{xp\}<c\), as \(\mathrm{X}\to+\infty\); see [38, Ch. XI], and note that this result has been improved and simplified since then). Thus, since the events \(\{x\in\mathrm{I}_{p}\}\) are independent by construction, the Borel-Cantelli Lemma implies \[\mathbf{P}(x\in\mathrm{I}_{p}\text{ for infinitely many }p)=1\] for any irrational \(x\). Now by Fubini's Theorem, we obtain \[\mathbf{E}(\lambda(\mathrm{A}_{\boldsymbol{a}})) =\mathbf{E}\Bigl{(}\int_{0}^{1}1_{\{x\in\mathrm{I}_{p}\text{ for infinitely many }p\}}\ dx\Bigr{)}\] \[=\int_{0}^{1}\mathbf{P}(x\in\mathrm{I}_{p}\text{ for infinitely many }p)dx=1,\] and since \(\lambda(\mathrm{A}_{\boldsymbol{a}})\leqslant 1\), this means that \(\mathrm{A}_{\boldsymbol{a}}\) has measure \(1\) for almost all sequences \((a_{p})\). Now fix a sequence \((a_{p})\) as given by that lemma and define \(t_{p}(n)=e(-a_{p}n/p)\). These are trace functions of Artin-Schreier sheaves with bounded conductor. Let \(\mathsf{P}\) be any set of primes with \[\sum_{p\in\mathsf{P}}\frac{\log p}{p}<+\infty.\] Then, for almost all \((x,y)\), we have \[\Bigl{|}y-\frac{a_{p}}{p}\Bigr{|}\geqslant\frac{\log p}{p}\] for all but finitely many \(p\in\mathsf{P}\), by the easy Borel-Cantelli lemma, hence \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x,y))\to 0\] almost surely along \(\mathsf{P}\). (And note that sparseness could be measured with \(\log p\) replaced by any function tending to infinity with \(p\).) 
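The dichotomy underlying this example can be observed numerically. The following sketch (ours, and only illustrative) evaluates the averages for \(\varphi(x,y)=e(x)\), \(f^{n}(x,y)=(x+ny,y)\) and \(t_{p}(n)=e(-a_{p}n/p)\), comparing a "generic" choice \(a_{p}=0\) with the choice of \(a_{p}\) closest to \(py\); the latter is a simple stand-in for the sequence produced by Lemma 10.2 and is not the construction used in the text.

```python
# Illustrative sketch (ours, not from the text): the ergodic averages
# (1/p) sum_n e(-a_p n / p) * e(x + n y) for f(x, y) = (x + y, y), phi = e(x).
# Two choices of a_p are compared:
#   * a_p = 0 for all p: the averages should tend to 0;
#   * a_p = round(p * y) mod p, a stand-in for the sequence of Lemma 10.2,
#     which keeps |y - a_p/p| <= 1/(2p), so the averages stay of order 1.
import cmath
import math

def e(t):
    return cmath.exp(2j * math.pi * t)

def average(p, a_p, x, y):
    return sum(e(-a_p * n / p) * e(x + n * y) for n in range(p)) / p

x, y = 0.25, math.sqrt(3) - 1          # an arbitrary point of X = (R/Z)^2
for p in [101, 211, 401, 809, 1609]:
    generic = abs(average(p, 0, x, y))
    tuned = abs(average(p, round(p * y) % p, x, y))
    print(p, round(generic, 4), round(tuned, 4))
```

The first column of moduli should shrink as \(p\) grows, while the second stays bounded away from \(0\), which is the mechanism exploited below.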
On the other hand, for almost all \((x,y)\in\mathrm{X}\), the properties of the sequence \((a_{p})\) prove that there exists a subsequence of primes for which \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)e(n\theta)\gg 1,\] hence for which \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi(f^{n}(x,y))\] does _not_ converge to \(0\) along the primes. Since the result for sparse sequences mean that this sequence could only converge to \(0\) almost surely, we conclude that the ergodic averages \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\varphi\circ f^{n}\] do not converge almost surely. ## 11. Questions The following further natural questions arise from this note: 1. Are there maximal and pointwise ergodic theorems with trace functions for \(\varphi\in\mathrm{L}^{p}\) where \(p\neq 2\), especially for \(p=1\)? For \(p>1\), one can certainly expect to be able to prove theorems in \(\mathrm{L}^{p}\) by adapting the ideas of Bourgain [7]. The case \(p=1\) might well be the most interesting; we recall here that Buczolich and Mauldin [9] have proved that there is no maximal or pointwise ergodic theorem in \(\mathrm{L}^{1}\) for averages along the squares (see also LaVictoire's generalization of this fact [30], which relies on non-trivial arithmetic information). 2. Are there similar results for "classical non-conventional averages" with trace functions, such as \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ (\varphi\circ f^{n})\ (\varphi\circ f^{2n}) \cdots(\varphi\circ f^{kn})\] (where \(k\) is fixed; these occur without weights in Furstenberg's approach to Szemeredi's Theorem, see [14, Ch. 7]) or \[\frac{1}{p}\sum_{0\leqslant n<p}t_{p}(n)\ \varphi\circ f^{n^{2}},\] and other polynomials in place of \(n^{2}\)? The versions without weights are parts of Bourgain's celebrated work [6, 7, 5]. The first type of averages is intriguing, if only because trace functions are known to satisfy a very strong from of the inverse theorem for Gowers norms (by work of Fouvry, Kowalski and Michel [21]). 3. Maybe most important: are there interesting applications of such ergodic averages?
We consider weighted ergodic averages indexed by prime numbers, where the weight depends on the prime and is given by a "trace function" coming from algebraic geometry. From this we derive extensions of the classical mean and pointwise ergodic theorems, as well as some results in the topological setting, and we also raise further questions.
2309.07503
K-cowaist of manifolds with boundary
We extend the K-cowaist inequality to generalized Dirac operators in the sense of Gromov and Lawson and study applications to manifolds with boundary.
Christian Baer, Bernhard Hanke
2023-09-14T08:13:13
http://arxiv.org/abs/2309.07503v1
# \(K\)-Cowaist of manifolds with boundary ###### Abstract. We extend the \(K\)-cowaist inequality to generalized Dirac operators in the sense of Gromov and Lawson and study applications to manifolds with boundary. Key words and phrases: Manifolds with boundary, lower scalar curvature bounds, lower mean curvature bounds, Atiyah-Patodi-Singer index formula, \(K\)-cowaist, \(\omega\)-cowaist 2010 Mathematics Subject Classification: 53C21, 53C23; Secondary: 53C27, 58J20 _Acknowledgment._ We thank the Special Priority Programme SPP 2026 "Geometry at Infinity" funded by Deutsche Forschungsgemeinschaft for financial support. B.H. thanks the University of Potsdam for its hospitality. Here \(\mathsf{c}(E)=\mathsf{c}_{0}(E)+\mathsf{c}_{1}(E)+\ldots+\mathsf{c}_{m}(E)=1+ \mathsf{c}_{1}(E)+\ldots+\mathsf{c}_{m}(E)\) is the Chern form of \(E\). Admissible bundles can exist only on even-dimensional manifolds because \(\mathsf{c}_{j}(E)\) has even degree \(2j\). Indeed, the dimension of \(M\) satisfies \(n=2(\gamma_{1}+\ldots+\gamma_{m})\). Equivalently, one may demand that \[\int_{M}\mathsf{ch}_{\gamma_{1}}(E)\wedge\cdots\wedge\mathsf{ch}_{\gamma_{m}}(E)\neq 0\] for some \(\gamma_{j}\in\mathbb{N}_{0}\). Here \(\mathsf{ch}(E)=\mathsf{ch}_{0}(E)+\mathsf{ch}_{1}(E)+\ldots+\mathsf{ch}_{m}(E)=\operatorname{rank}(E)+\mathsf{ch}_{1}(E)+\ldots+\mathsf{ch}_{m}(E)\) is the Chern character form of \(E\). The Chern numbers and the Chern character numbers can be expressed as linear combinations of each other. Note that the support of the curvature \(R^{E}\) and hence that of \(\mathsf{c}_{j}(E)\) and \(\mathsf{ch}_{j}(E)\) for \(j\geq 1\) is contained in the interior of \(M\) because \(E\) is boundary-adapted. Given a Hermitian vector bundle \(E\) with connection over a Riemannian manifold \(M\), let \(R^{E}\) be its curvature tensor. We define its norm by \[\|R^{E}\|:=\sup_{x\in M}\sup_{\begin{subarray}{c}X,Y\in\mathrm{T}_{x}M\\ |X|=|Y|=1\end{subarray}}|R^{E}(X,Y)|\] where \(|R^{E}(X,Y)|\) is the operator norm of the endomorphism \(R^{E}(X,Y)\). The rank of \(E\) is denoted by \(\operatorname{rk}(E)\). **Definition 1**.: The \(K\)_-cowaist_ of an oriented compact Riemannian manifold \(M\) with (possibly empty) boundary is defined by \[K\text{-}\mathrm{cw}_{2}(M):=\frac{1}{\inf\{\|R^{E}\|\mid E\text{ is an admissible bundle over }M\}}\in[0,\infty].\] **Remark 2**.: If we replace the Riemannian metric \(g\) on \(M\) by \(\lambda^{2}g\), where \(\lambda\) is a positive constant, then \(|R^{E}(X,Y)|\) gets replaced by \(\lambda^{-2}|R^{E}(X,Y)|\) and hence \(\|R^{E}\|\) is replaced by \(\lambda^{-2}\|R^{E}\|\). Therefore, \(K\text{-}\mathrm{cw}_{2}(M)\) is replaced by \(\lambda^{2}K\text{-}\mathrm{cw}_{2}(M)\), thus \(K\text{-}\mathrm{cw}_{2}(M)\) scales like an area. This motivates the terminology \(K\)-area for \(K\text{-}\mathrm{cw}_{2}(M)\) as introduced by Gromov in [6]. In [7], Gromov argues that the term \(K\)-cowaist is more appropriate. **Remark 3**.: The condition \(K\text{-}\mathrm{cw}_{2}(M)=\infty\) is independent of the metric on \(M\) since \(M\) is compact and any two metrics can be bounded by each other. There is a rich class of manifolds satisfying this condition, including enlargeable manifolds, see [4], for example. If \(M\) is connected, the condition \(K\text{-}\mathrm{cw}_{2}(M)=\infty\) only depends on the image of the fundamental class of \(M\) in the rational homology of \(B\pi_{1}(M)\) under the classifying map of the universal cover of \(M\), see Corollary 7.4 in [10]. 
**Definition 4**.: Let \(\omega=1+\omega_{1}+\ldots+\omega_{m}\) be a smooth mixed differential form on the oriented compact Riemannian manifold \(M\) with boundary, where \(\omega_{j}\) has degree \(2j\). The \(\omega\)-cowaist of \(M\) is defined by \[\omega\text{-}\mathrm{cw}_{2}(M):=\frac{1}{\inf\left\{\|R^{E}\|\mid E\text{ is boundary-adapted and }\int_{M}\omega\wedge[\mathsf{ch}(E)-\operatorname{rk}(E)]\neq 0\right\}}\in[0,\infty].\] The following lemma is the key to comparing the \(K\)-cowaist and the \(\omega\)-cowaist. The idea goes back to Gromov [6] and the lemma is essentially already contained as Lemma 7 in [4]. For the reader's convenience, we provide the full (short) proof here. **Lemma 5**.: _Let \(M\) be an oriented compact Riemannian manifold of even dimension \(n=2m\) with boundary. Let \(E\) be an admissible bundle. Let \(\omega=1+\omega_{1}+\ldots+\omega_{m}\) be a smooth mixed differential form on \(M\) where \(\omega_{j}\) has degree \(2j\)._ _Then there exists a boundary-adapted bundle \(E^{\prime}\) over \(M\) such that_ \[\int_{M}\omega\wedge[\mathsf{ch}(E^{\prime})-\operatorname{rk}(E^{\prime})]\neq 0 \tag{1}\] _and_ \[\|R^{E^{\prime}}\|\leq c(m)\|R^{E}\| \tag{2}\] _where \(c(m)\) is a constant only depending on \(m\)._ Proof.: For \(k\in\mathbb{N}_{0}\) there is a virtual bundle \(\Psi_{k}E=\Psi_{k}^{+}E-\Psi_{k}^{-}E\) with the property \[\mathsf{ch}_{j}(\Psi_{k}E)=\mathsf{ch}_{j}(\Psi_{k}^{+}E)-\mathsf{ch}_{j}(\Psi_{k}^{-}E)=k^{j}\mathsf{ch}_{j}(E). \tag{3}\] Here \(\Psi_{k}\) is known as the Adams operation. The case \(j=0\) shows that the Adams operations \(\Psi_{k}\) preserve the rank. Both bundles \(\Psi_{k}^{+}E\) and \(\Psi_{k}^{-}E\) are universal expressions in tensor products of exterior products of \(E\), see [1, Section 3.2] for details. For a multi-index \(k=(k_{1},\ldots,k_{m})\) we put \[\Psi_{k}E:=\Psi_{k_{1}}E\otimes\cdots\otimes\Psi_{k_{m}}E\] and rewrite this virtual bundle as a difference of honest bundles by \[\Psi_{k}E=\bigoplus_{\#\{-\}\text{ even}}\Psi_{k_{1}}^{\pm}E\otimes\cdots\otimes\Psi_{k_{m}}^{\pm}E\;-\;\bigoplus_{\#\{-\}\text{ odd}}\Psi_{k_{1}}^{\pm}E\otimes\cdots\otimes\Psi_{k_{m}}^{\pm}E=:\Psi_{k}^{+}E-\Psi_{k}^{-}E,\] where the first (resp. second) direct sum runs over all choices of signs containing an even (resp. odd) number of minus signs. Again, \(\Psi_{k}^{+}E\) and \(\Psi_{k}^{-}E\) are universal expressions in tensor products of exterior products of \(E\). Hence, they inherit natural Hermitian metrics and connections, and they are boundary-adapted. In particular, \[\|R^{\Psi_{k}^{\pm}E}\|\leq c_{k}\|R^{E}\| \tag{4}\] where the constant \(c_{k}\) depends only on \(k\). Note that \(\operatorname{rk}(\Psi_{k}^{+}E)-\operatorname{rk}(\Psi_{k}^{-}E)=\operatorname{rk}(\Psi_{k}E)=\operatorname{rk}(E)^{m}\). For \(k=(k_{1},\ldots,k_{m})\in\mathbb{N}_{0}^{m}\) we put \[P(k_{1},\ldots,k_{m}) :=\int_{M}\omega\wedge[\operatorname{ch}(\Psi_{k}E)-\operatorname{rk}(\Psi_{k}E)]\] \[=\int_{M}\omega\wedge[\operatorname{ch}(\Psi_{k_{1}}E)\wedge\cdots\wedge\operatorname{ch}(\Psi_{k_{m}}E)-\operatorname{rk}(E)^{m}].\] Expanding \(\omega=1+\omega_{1}+\ldots+\omega_{m}\) and the Chern characters yields, using (3), \[P(k_{1},\ldots,k_{m})=\sum_{\gamma_{1}+\ldots+\gamma_{m}=m}k_{1}^{\gamma_{1}}\cdots k_{m}^{\gamma_{m}}\int_{M}\operatorname{ch}_{\gamma_{1}}(E)\wedge\cdots\wedge\operatorname{ch}_{\gamma_{m}}(E)+\text{l.o.t.}\] where l.o.t. stands for terms of lower total order in \(k_{1},\ldots,k_{m}\). In particular, \(P\) is a polynomial in \(k_{1},\ldots,k_{m}\) of total degree at most \(m\). 
If \(P(k_{1},\ldots,k_{m})=0\) held for all \(k_{i}\in\{0,1,\ldots,m\}\), then \(P\) would vanish as a polynomial, hence \[\int_{M}\operatorname{ch}_{\gamma_{1}}(E)\wedge\cdots\wedge\operatorname{ch}_ {\gamma_{m}}(E)=0\] for all \(\gamma_{i}\in\mathbb{N}_{0}\) with \(\gamma_{1}+\ldots+\gamma_{m}=m\), contradicting the admissibility of \(E\). Thus we can choose some \(k\in\{0,1,\ldots,m\}^{m}\) such that \(P(k)\neq 0\), i.e. \[0 \neq\int_{M}\omega\wedge[\operatorname{ch}(\Psi_{k}E)-\operatorname {rk}(\Psi_{k}E)]\] \[=\int_{M}\omega\wedge[\operatorname{ch}(\Psi_{k}^{+}E)-\operatorname {rk}(\Psi_{k}^{+}E)]-\int_{M}\omega\wedge[\operatorname{ch}(\Psi_{k}^{-}E)- \operatorname{rk}(\Psi_{k}^{-}E)].\] Hence, \(E^{\prime}=\Psi_{k}^{+}E\) or \(E^{\prime}=\Psi_{k}^{-}E\) satisfies (1). Equation (4) implies \(\|R^{\Psi_{k}^{\pm}E}\|\leq c(m)\|R^{E}\|\) since there are only finitely many possibilities for \(k\). **Theorem 6**.: _Let \(M\) be an oriented compact Riemannian manifold with boundary of even dimension \(2m\). Let \(\omega=1+\omega_{1}+\ldots+\omega_{m}\) be a smooth mixed differential form on \(M\) where \(\omega_{j}\) has degree \(2j\). Then_ \[K\text{-}\mathrm{cw}_{2}(M)\leq c(m)\cdot\omega\text{-}\mathrm{cw}_{2}(M)\] _where \(c(m)\) is a constant which depends only on \(m\)._ Proof.: If there are no admissible bundles over \(M\), then \(K\text{-}\mathrm{cw}_{2}(M)=0\) and there is nothing to show. Thus, let \(E\to M\) be admissible and let \(E^{\prime}\) be the corresponding bundle from Lemma 5. Then \(\int_{M}\omega\wedge[\operatorname{ch}(E^{\prime})-\operatorname{rk}(E)]\neq 0\) and \[c(m)^{-1}\cdot\omega\text{-}\mathrm{cw}_{2}(M)^{-1}\leq c(m)^{-1}\cdot\|R^{E^{ \prime}}\|\leq\|R^{E}\|.\] Taking the infimum over all admissible \(E\) concludes the proof. Note that the constant \(c(m)\) does not depend on the form \(\omega\). ## 3. An application Let \(M\) be a differentiable manifold, let \(S^{+},S^{-}\to M\) be complex vector bundles and let \(D\) be a differential operator of first order mapping sections of \(S^{+}\) to sections of \(S^{-}\). We restrict our attention to operators such that \(\begin{pmatrix}0&D^{*}\\ D&0\end{pmatrix}\) is a generalized Dirac operator in the sense of Gromov and Lawson, see [8, Section 1]. We then call \(D\) a GL-Dirac operator for short. In particular, the bundles \(S^{\pm}\) are equipped with connections \(\nabla^{S^{\pm}}\) whose curvature tensors we denote by \(R^{S^{\pm}}\). The operator \(D\) satisfies the Weitzenbock formulas \[D^{*}D =(\nabla^{S^{+}})^{*}\nabla^{S^{+}}+\mathscr{K}^{+},\] \[DD^{*} =(\nabla^{S^{-}})^{*}\nabla^{S^{-}}+\mathscr{K}^{-}\] where \(\mathscr{K}^{\pm}=\frac{1}{2}\sum_{jk}e_{j}\cdot e_{k}\cdot R^{S^{\pm}}(e_{j},e_{k})\), see Proposition 2.5 in [8]. Here \(D^{*}\) denotes the formally adjoint operator of \(D\) and similarly for \(\nabla^{*}\). Given a GL-Dirac operator \(D\) and a Hermitian vector bundle with metric connection, one defines the _twisted_ Dirac operator \(D^{E}\) locally by \[D^{E}=\sum_{j}(e_{j}\cdot\otimes\mathrm{id})\nabla^{S^{+}\otimes E}_{e_{j}}\] The twisted Dirac operator maps sections of \(S^{+}\otimes E\) to sections of \(S^{-}\otimes E\) and is again a GL-Dirac operator. If \(M\) is a Riemannian manifold with boundary \(\partial M\), then we denote by \(H\in C^{\infty}(\partial M,\mathbb{R})\) the mean curvature of the boundary. The sign convention is such that \(H\) is positive if the mean curvature vector field is inward pointing. The mean curvature of a Euclidean ball is positive, for example. 
We say that the boundary is _mean convex_ if \(H\geq 0\). Given the GL-Dirac operator \(D\) on \(M\), there is an adapted Dirac operator \(A\) over \(\partial M\), acting on sections of \(S^{+}|_{\partial M}\). It is defined by \[A=-\nu\cdot D-\nabla^{S^{+}}_{\nu}+\tfrac{n-1}{2}H.\] Here \(\nu\) is the inward pointing unit normal vector field along \(\partial M\). The operator \(A\) anticommutes with Clifford multiplication by \(\nu\). Performing the integration by parts in the Weitzenbock formula, we find for smooth sections \(\varphi\) of \(S^{+}\): \[\int_{M}\left[|D\varphi|^{2}-|\nabla^{S^{+}}\varphi|^{2}-\left\langle\mathscr{K}^{+}\varphi,\varphi\right\rangle\right]=\int_{\partial M}\left\langle(\tfrac{n-1}{2}H-A)\varphi,\varphi\right\rangle, \tag{5}\] see Equation (27) in [3]. We say that a sufficiently smooth section \(\varphi\) of \(S^{+}\) satisfies the _strong Atiyah-Patodi-Singer (APS) boundary condition_ if \(\varphi|_{\partial M}\) is contained in the sum of the \(L^{2}\)-eigenspaces of \(A\) corresponding to negative eigenvalues. We say it satisfies the _weak APS boundary condition_ if \(\varphi|_{\partial M}\) is contained in the sum of the eigenspaces corresponding to nonpositive eigenvalues. Associated to \(D\) there is a mixed differential form \(\omega=1+\omega_{1}+\cdots+\omega_{m}\) where \(\omega_{j}\) has degree \(2j\) which is manufactured out of the short-time asymptotics of the corresponding heat kernel. By the Atiyah-Patodi-Singer index theorem [2], it has the property that each twisted operator \(D^{E}\) has the index \[\mathrm{ind}(D^{E})=\int_{M}\omega\wedge\mathsf{ch}(E)+\text{ boundary contribution}, \tag{6}\] if we impose weak or strong APS boundary conditions. The boundary contribution consists of the \(\eta\)-invariant of \(A^{E}\) and a transgression term. **Theorem 7**.: _Let \(M\) be a \(2m\)-dimensional compact oriented Riemannian manifold with (possibly empty) mean convex boundary \(\partial M\). Let \(D\) be a GL-Dirac operator with index form \(\omega\). Let \(\mathscr{K}^{\pm}\) be the curvature terms in the Weitzenbock formulas for \(D\). Suppose that \(\mathscr{K}^{+}\geq\kappa\) and \(\mathscr{K}^{-}\geq\kappa\) in the sense of symmetric endomorphisms where \(\kappa>0\) is a positive constant. Then_ \[\omega\text{-}\mathrm{cw}_{2}(M)\leq\frac{m(2m-1)}{\kappa}.\] Proof.: Let \(E\to M\) be boundary-adapted such that \(\int_{M}\omega\wedge[\mathrm{ch}(E)-\mathrm{rk}(E)]\neq 0\). If there are no such bundles, then \(\omega\text{-}\mathrm{cw}_{2}(M)=0\) and there is nothing to show. We write \(r=\mathrm{rk}(E)\) and denote by \(E_{0}^{r}\) the trivial flat bundle of rank \(r\). We impose the weak APS boundary condition. Now (6) yields \[\operatorname{ind}(D^{E}) =\int_{M}\omega\wedge\mathsf{ch}(E)+\text{ boundary contribution}, \tag{7}\] \[\operatorname{ind}(D^{E_{0}^{r}}) =r\int_{M}\omega+\text{ boundary contribution}. \tag{8}\] The boundary contributions in (7) and (8) coincide because \(E\) is boundary-adapted and hence \(D^{E}\) and \(D^{E_{0}^{r}}\) coincide in a neighborhood of \(\partial M\). Therefore, \[\operatorname{ind}(D^{E})-\operatorname{ind}(D^{E_{0}^{r}})=\int_{M}\omega\wedge\mathsf{ch}(E)-r\int_{M}\omega=\int_{M}\omega\wedge[\mathsf{ch}(E)-\operatorname{rk}(E)]\neq 0.\] It follows that \(\operatorname{ind}(D^{E})\neq 0\) or \(\operatorname{ind}(D^{E_{0}^{r}})\neq 0\). We discuss the case \(\operatorname{ind}(D^{E})\neq 0\), the second case being even simpler. 
If \(\operatorname{ind}(D^{E})>0\), then we find a smooth section \(\varphi\) of \(S^{+}\otimes E\) in the kernel of \(D^{E}\). Inserting this \(\varphi\) into (5) with \(D^{E}\) instead of \(D\), we get \[0=\int_{M}\big{[}|\nabla^{S^{+}\otimes E}\varphi|^{2}+\langle\mathscr{K}^{E,+ }\varphi,\varphi\rangle\big{]}+\int_{\partial M}\big{\langle}(\tfrac{2m-1}{2}H -A)\varphi,\varphi\big{\rangle}\geq\int_{M}\langle\mathscr{K}^{E,+}\varphi,\varphi\rangle \tag{9}\] since all other terms are nonnegative. Here we use \(H\geq 0\) and the fact that \(\varphi\) satisfies the weak APS boundary condition. The curvature term in the Weitzenbock formula for \(D^{E}\) is given by \[\mathscr{K}^{E,+}=\mathscr{K}^{+}\otimes\operatorname{id}+\tfrac{1}{2}\sum_{ j,k=1}^{2m}e_{j}e_{k}\otimes R^{E}(e_{j},e_{k}).\] The operator norm of the correction term satisfies \[\left|\tfrac{1}{2}\sum_{jk}e_{j}e_{k}\otimes R^{E}(e_{j},e_{k})\right|\leq m( 2m-1)\|R^{E}\|.\] If we had \(m(2m-1)\|R^{E}\|<\kappa\), then \(\mathscr{K}^{E,+}\) would be positive as an endomorphism, contradicting (9). Therefore, \(\|R^{E}\|\geq\frac{\kappa}{m(2m-1)}\). If \(\operatorname{ind}(D^{E})<0\), the adjoint operator has a nontrivial kernel. Since \(D^{E}\) is formally selfadjoint and the strong APS boundary condition is adjoint to the weak APS boundary condition1, this means that we find a smooth section \(\varphi\) of \(S^{-}\otimes E\) satisfying the strong APS boundary condition with \((D^{E})^{*}\varphi=0\). Now the proof proceeds as before and we again get \(\|R^{E}\|\geq\frac{\kappa}{m(2m-1)}\). Footnote 1: This uses that \(A\) anticommutes with \(\nu\), see Theorem 4.6 and Example 5.12 in [3]. Taking the infimum over all \(E\) yields \(\omega\text{-}\mathrm{cw}_{2}(M)\leq\frac{m(2m-1)}{\kappa}\). Theorems 6 and 7 combine to give **Corollary 8**.: _Under the assumptions of Theorem 7 we have_ \[K\text{-}\mathrm{cw}_{2}(M)\leq\frac{c_{1}(m)}{\kappa}\] _where \(c_{1}(m)\) is a positive constant which depends only on \(m\)._ **Corollary 9**.: _An oriented even-dimensional compact differentiable manifold \(M\) with \(K\text{-}\mathrm{cw}_{2}(M)=\infty\) does not admit a Riemannian metric with mean convex boundary and a GL-Dirac operator such that \(\mathscr{K}^{+}\) and \(\mathscr{K}^{-}\) are both positive. _ **Example 10**.: If \(M\) is a spin manifold and \(D\) is the spinorial Dirac operator, then \(\mathscr{K}^{\pm}=\frac{\mathrm{scal}}{4}\) where scal denotes the scalar curvature of \(M\). Corollary 9 says that compact spin manifolds \(M\) with \(K\text{-}\mathrm{cw}_{2}(M)=\infty\) do not admit metrics with positive scalar curvature and mean convex boundary, cf. Theorem 19 in [4]. In this example, \(\omega=\hat{A}\) is the \(\hat{A}\)-form. **Example 11**.: Let \(M\) be a compact Riemannian spin\({}^{c}\) manifold with determinant bundle \(L\). Let \(\Omega\) be a real 2-form such that \(\frac{\Omega}{2\pi}\) represents the first Chern class of \(L\) in deRham cohomology. Then there exists a metric connection on \(L\) whose curvature is given by \(i\Omega\). The curvature contribution to the Weitzenbock formula for the spinorial Dirac operator is given by \(\mathscr{K}^{\pm}=\frac{\mathrm{scal}}{4}-\frac{1}{2}\Omega\) where \(\Omega\) acts by Clifford multiplication, see Theorem D.12 in [12]. At each point of \(M\), we can find an orthonormal basis \(e_{1},\ldots,e_{m},e_{m+1},\ldots,e_{2m}\) of the cotangent space such that \[\Omega=\sum_{j=1}^{m}\lambda_{j}e_{j}\wedge e_{m+j}\] for \(\lambda_{j}\in\mathbb{R}\). 
The norm \(|\Omega|=\sum_{j=1}^{m}|\lambda_{j}|\) is defined independently of the choice of basis. For any spinor \(\varphi\) we have \[|\left<\Omega\cdot\varphi,\varphi\right>|\leq\sum_{j=1}^{m}|\left<\lambda_{j} e_{j}\cdot e_{m+j}\cdot\varphi,\varphi\right>|\leq|\Omega||\varphi|^{2}.\] Thus, \(\mathscr{K}^{\pm}\) is positive provided \(\operatorname{scal}>2|\Omega|\). Hence, if \(\Omega\) is a real 2-form such that \(\frac{\Omega}{2\pi}\) represents the first Chern class of the determinant bundle of the spin\({}^{c}\) manifold \(M\) in deRham cohomology, and \(M\) has a Riemannian metric with mean convex boundary with \(\operatorname{scal}>2|\Omega|\), then \(K\text{-}\mathrm{cw}_{2}(M)<\infty\). In this case, \(\omega=\hat{A}\wedge\exp\left(\frac{\Omega}{4\pi}\right)\).
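As a sanity check of the pointwise estimate used above, here is a minimal numerical verification in the lowest-dimensional case \(2m=2\); the representation of Clifford multiplication by \(i\sigma_{1}\) and \(i\sigma_{2}\) is a standard choice made here for illustration and is not taken from the text.

```python
# Minimal numerical check (ours) of |<Omega . phi, phi>| <= |Omega| |phi|^2
# in dimension 2m = 2.  Clifford multiplication by an orthonormal coframe
# e_1, e_2 is represented by e_1 -> i*sigma_1, e_2 -> i*sigma_2, so e_j^2 = -Id.
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e1, e2 = 1j * sigma1, 1j * sigma2        # Clifford generators

rng = np.random.default_rng(0)
for _ in range(1000):
    lam = rng.normal()                   # Omega = lam * e^1 wedge e^2, |Omega| = |lam|
    omega_cl = lam * e1 @ e2             # Clifford action of Omega
    phi = rng.normal(size=2) + 1j * rng.normal(size=2)
    lhs = abs(np.vdot(phi, omega_cl @ phi))      # |<Omega . phi, phi>|
    rhs = abs(lam) * np.vdot(phi, phi).real      # |Omega| * |phi|^2
    assert lhs <= rhs + 1e-9
print("pointwise bound verified on random samples")
```

In this case equality can be attained, which matches the fact that the bound \(|\left<\Omega\cdot\varphi,\varphi\right>|\leq|\Omega||\varphi|^{2}\) is sharp.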
2308.00145
Exponential decay of the critical points in a discrete model of polyacetylene
In this paper we consider stationary states of the SSH model for infinite polyacetylene chains that are homoclinic or heteroclinic connections between two-periodic dimerized states. We prove that such connections converge exponentially fast to the corresponding asymptotic periodic states.
David Gontier, Adechola E. K. Kouande, Éric Séré
2023-07-31T20:27:25
http://arxiv.org/abs/2308.00145v1
# Exponential decay of the critical points in a discrete model of polyacetylene ###### Abstract. In this paper we consider stationary states of the SSH model for infinite polyacetylene chains that are homoclinic or heteroclinic connections between two-periodic dimerized states. We prove that such connections converge exponentially fast to the corresponding asymptotic periodic states. David Gontier: CEREMADE, Universite Paris-Dauphine, PSL University,75016 Paris, France Email: sere@ceremade.dauphine.fr ###### Contents * 1 Introduction * 2 Critical points for the infinite SSH model, and main result * 2.1 The SSH model * 2.2 The SSH energy difference * 2.3 Critical points for the infinite SSH model, and main result * 2.4 Strategy of the proof * 3 Smoothness and Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\) * 3.1 The spectrum of homoclinic and heteroclinic configurations * 3.2 Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\) * 4 Proofs of the Lemmas * 4.1 Proof of Lemma 2.5 * 4.2 Proof of Lemma 2.6 * 4.3 Proof of Lemma 2.7 * 4.4 Coercivity of the Hessian at the dimerized configuration ## 1. Introduction The goal of this article is to prove an exponential decay property for critical points in the SSH model. This model was introduced by Su, Schrieffer and Heeger to describe polyacetylene, which is a long one-dimensional chain of carbon (and hydrogen) atoms. In this model, the chain can lower its total energy by dimerizing. This physical phenomenon was first predicted by Peierls [12] (see also [1]) and is now known as the Peierls distorsion or dimerization. Actually, Kennedy and Lieb [6], and Lieb and Nachtergaele [9] proved that the minimizers of the SSH energy associated to closed polyacetylene with an even number \(L\) of carbon atoms are always \(2\)-periodic. When \(L=2\) mod \(4\) or when \(L=0\) mod \(4\) is large enough, these minimizers are dimerized, in the sense that they are \(2\)-periodic, but not \(1\) periodic. ## 2. Critical points for the infinite SSH model, and main result ### The SSH model Here \(T=T(\{\mathbf{t}\})\) is the \(L\times L\) hermitian matrix \[T=T(\{\mathbf{t}\}):=\begin{pmatrix}0&t_{1}&0&0&\cdots&t_{L}\\ t_{1}&0&t_{2}&\cdots&0&0\\ 0&t_{2}&0&t_{3}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&t_{L-2}&0&t_{L-1}\\ t_{L}&0&\cdots&0&t_{L-1}&0\end{pmatrix}, \tag{5}\] and we set \(T_{-}=-T\mathds{1}(T<0)\). The first term in (4) is the distortion energy of the atoms: this energy depends quadratically on the distances \(d_{n}\) between successive atoms, but these distances are themselves affine functions of the amplitudes \(t_{n}\). The parameter \(\mu>0\) is the rigidity of the chain, and our units are such that the jump amplitude between two atoms is \(1\) when their distorsion energy is minimal. The second term in (4) models the electronic energy of the valence electrons under the Hamiltonian \(T\). It results from the identity \[\min_{0\leq\gamma=\gamma^{*}\leq 1}2\mathrm{Tr}\left(T\gamma\right)=2\mathrm{Tr}\left(T\mathds{1}(T<0)\right)=-2\mathrm{Tr}(T_{-}).\] The minimization on the left-hand side was performed for all one-body density matrices \(\gamma\) representing non-interacting electrons. The condition \(0\leq\gamma\leq 1\) is the Pauli principle, and the \(2\) factor stands for the spin. ### The SSH energy difference Let us fix a configuration \(\mathbf{t}\), and consider the energy difference functional \(\mathcal{F}_{\mathbf{t}}^{(L)}\), defined by \[\mathcal{F}_{\mathbf{t}}^{(L)}(\mathbf{h}):=\mathcal{E}^{(L)}(\mathbf{t}+\mathbf{h})-\mathcal{E}^{(L)}(\mathbf{t})=\frac{\mu}{2}\sum_{n=1}^{L}(h_{n}+2t_{n}-2)h_{n}-2\mathrm{Tr}((T+H)_{-}-T_{-}), \tag{6}\] where \(T=T(\{\mathbf{t}\})\) and \(H=T(\{\mathbf{h}\})\) are the hermitian matrices constructed from \(\{\mathbf{t}\}\) and \(\{\mathbf{h}\}\) respectively. 
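Before passing to the infinite chain, the finite model can be explored numerically. The following sketch is only illustrative: since (4) is not reproduced above, the distortion term is taken as \(\frac{\mu}{2}\sum_{n}(t_{n}-1)^{2}\), a form consistent with the difference formula (6), and the values \(\mu=4\) and \(L=22\) (so that \(L=2\) mod \(4\)) are arbitrary choices. Scanning dimerized profiles \(t_{n}=1+(-1)^{n}\delta\) should then exhibit the Peierls distortion, that is, a minimum at some \(\delta\neq 0\).

```python
# Illustrative sketch (ours): closed SSH chain with the matrix (5), and the
# energy taken as E(t) = (mu/2) * sum_n (t_n - 1)^2 - 2 * Tr(T_-), a form
# consistent with the difference formula (6).  mu = 4 and L = 22 are our choices.
import numpy as np

def hopping_matrix(t):
    """Periodic tridiagonal matrix (5) built from the amplitudes t_1, ..., t_L."""
    L = len(t)
    T = np.zeros((L, L))
    for n in range(L):
        T[n, (n + 1) % L] = T[(n + 1) % L, n] = t[n]
    return T

def energy(t, mu):
    eigs = np.linalg.eigvalsh(hopping_matrix(t))
    # -2 Tr(T_-) equals twice the sum of the negative eigenvalues of T
    return 0.5 * mu * np.sum((t - 1.0) ** 2) + 2.0 * np.sum(eigs[eigs < 0.0])

mu, L = 4.0, 22
deltas = np.linspace(0.0, 0.4, 81)
energies = [energy(1.0 + ((-1.0) ** np.arange(L)) * d, mu) for d in deltas]
best = deltas[int(np.argmin(energies))]
print("dimerization delta ~", best, " energy gain:", energies[0] - min(energies))
```

With these parameters one should observe a strictly positive optimal \(\delta\), in agreement with the dimerization results of Kennedy and Lieb recalled in the introduction.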
Clearly, \(\{\mathbf{t}\}\) is a critical point of \(\mathcal{E}^{(L)}\) iff \(\{\mathbf{0}\}\) is a critical point of \(\mathcal{F}_{\mathbf{t}}^{(L)}\). We have substracted the quantity \(\mathcal{E}^{(L)}(\mathbf{t})\) in order to have a finite energy difference at the limit \(L\to\infty\). Actually, Eqn. (6) admits a clear analog as \(L\to\infty\), namely, for two bounded sequences \(\mathbf{t}:\mathbb{Z}\to\mathbb{R}^{+}\) and \(\mathbf{h}:\mathbb{Z}\to\mathbb{R}\), assuming that \(h\in\ell^{1}(\mathbb{Z},\mathbb{R})\) and that \((T+H)_{-}-T_{-}\) is trace-class as an operator acting on \(\ell^{2}(\mathbb{Z},\mathbb{C})\), we set \[\boxed{\mathcal{F}_{\mathbf{t}}(\mathbf{h}):=\frac{\mu}{2}\sum_{n\in \mathbb{Z}}(h_{n}+2t_{n}-2)h_{n}-2\mathrm{Tr}((T+H)_{-}-T_{-})}. \tag{7}\] Now, the operator \(T:=T(\{\mathbf{t}\})\) (and similarly for \(T+H\)) is acting on the infinite dimensional Hilbert space \(\ell^{2}(\mathbb{Z},\mathbb{C})\), whose coefficients in the canonical basis are \[\forall n\in\mathbb{Z},\quad T_{n,n+1}=T_{n+1,n}=t_{n},\qquad T_{i,j}=0\quad \text{if }|i-j|\neq 1.\] In what follows, we denote by bold letters \(\mathbf{a},\mathbf{t},\mathbf{h},\mathbf{u},...\) sequences from \(\mathbb{Z}\) to \(\mathbb{R}\), and by capital letters \(A,T,H,U,...\) the corresponding operators acting on \(\ell^{2}(\mathbb{Z})\). The fact that the map \(\mathcal{F}_{\mathbf{t}}\) is well defined when \(\mathbf{t}\) is a homoclinic or heteroclinic configuration is given in the next two lemmas (see Section 3.1 for the proof). **Lemma 2.1**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic configuration such that \(\mathbf{t}\geq\tau\) for some \(\tau>0\). Then there is a positively oriented contour \(\mathscr{C}\) in the complex plane, a constant \(C\geq 0\) and a constant \(\eta>0\) so that, for all \(\{\mathbf{h}\}\) with \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\eta\) and for all \(z\in\mathscr{C}\), the operator \((z-(T+H))\) is invertible with \(\|(z-(T+H))^{-1}\|_{\mathrm{op}}\leq C\). In addition,_ \[-(T+H)_{-}=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{z}{z-(T+H)} \mathrm{d}z.\] The contour \(\mathscr{C}\) is independent of \(\mathbf{h}\), but depends on \(\mathbf{t}\). This Lemma might be surprising, as the energy \(0\) can be in the spectrum of \(T+H\). Actually, we will prove the following: * If \(\{\mathbf{t}\}\) is a homoclinic configuration with \(\mathbf{t}\geq\tau>0\), then \(0\) is never in the spectrum of \(T+H\), for \(\mathbf{h}\) small enough. * If \(\{\mathbf{t}\}\) is a heteroclinic configuration with \(\mathbf{t}\geq\tau>0\), then \(0\) is always an isolated eigenvalue of \(T+H\) of multiplicity \(1\), for all \(\mathbf{h}\) small enough. In both cases, one can choose a contour \(\mathscr{C}\) of the form (see Figure 1) \[\mathscr{C}:=(\Sigma+\mathrm{i})\to(\Sigma-\mathrm{i})\to(-g/2-\mathrm{i})\to( -g/2+\mathrm{i})\to(\Sigma+\mathrm{i}), \tag{8}\] where \(\Sigma\) is a negative enough number, and where \(g=\mathrm{dist}(0,\sigma(T)\setminus\{0\})\) is the distance between \(0\) and the (rest of the) spectrum. In the heteroclinic situation, \(0\) is a stable (or topologically protected) eigenvalue: it is unperturbed by the addition of \(H\). Actually, any \(T\) matrix coming from a heteroclinic configuration can be seen as a junction between two SSH chains with different indices [16, 5, 7]. This Lemma allows to prove that \(\mathcal{F}_{\mathbf{t}}\) is well-defined and smooth around \(\{\mathbf{0}\}\). We refer to Section 3.2 for the proof of the following result. 
**Lemma 2.2**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\), and let \(\eta>0\) and \(\mathscr{C}\) be a contour as in Lemma 2.1. The map \(\mathbf{h}\mapsto\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is \(C^{\infty}\) on \(\{\mathbf{h},\|\mathbf{h}\|_{\ell_{1}}\leq\eta\}\). In addition, there is \(C\geq 0\) so that, for all \(\{\mathbf{h}\}\) with \(\|\mathbf{h}\|_{\ell^{1}}<\eta\), we have_ \[\left|\mathcal{F}_{\mathbf{t}}(\mathbf{h})-L_{\mathbf{t}}(\mathbf{h})-\frac{1 }{2}H_{\mathbf{t}}(\mathbf{h},\mathbf{h})\right|\leq C\|\mathbf{h}\|_{\ell^{2} }^{3},\] _where \(L_{\mathbf{t}}\) (differential of \(\mathcal{F}_{\mathbf{t}}\)) is the continuous linear form on \(\ell^{1}(\mathbb{Z})\) defined by (we set \(\Gamma_{\mathbf{t}}:=\mathds{1}(T<0)\) the spectral projector of \(T\) on \(\mathbb{R}^{-}\))_ \[L_{\mathbf{t}}(\mathbf{h}):=\mu\sum_{n\in\mathbb{Z}}(t_{n}-1)h_{n}+2\mathrm{Tr }\left(\Gamma_{\mathbf{t}}H\right),\] _and \(H_{\mathbf{t}}\) (hessian of \(\mathcal{F}_{\mathbf{t}}\)) is the bilinear form on \(\ell^{1}(\mathbb{Z})\) defined by_ \[H_{\mathbf{t}}(\mathbf{h},\mathbf{k}):=\mu\sum_{n\in\mathbb{Z}}h_{n}k_{n}+2 \mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}H\frac{1}{z-T}K \frac{1}{z-T}\mathrm{d}z\right). \tag{9}\] _In addition, the bilinear map \(H_{\mathbf{t}}\) can be extended continuously as a bilinear map on \(\ell^{2}(\mathbb{Z})\)._ ### Critical points for the infinite SSH model, and main result We can now define the notion of critical points for the infinite SSH model. **Definition 2.3**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic configuration such that \(\mathbf{t}\geq\tau\) for some \(\tau>0\). We say that \(\{\mathbf{t}\}\) is a critical point if \(L_{\mathbf{t}}\) is the null map. Equivalently, using that_ \[\mathrm{Tr}(\Gamma_{\mathbf{t}}H)=\sum_{n\in\mathbb{Z}}h_{n}\left[(\Gamma_{ \mathbf{t}})_{n+1,n}+(\Gamma_{\mathbf{t}})_{n,n+1}\right]=2\sum_{n\in\mathbb{Z }}h_{n}\left(\Gamma_{\mathbf{t}}\right)_{n,n+1},\] _the configuration \(\mathbf{t}\) is a critical point if_ \[\forall n\in\mathbb{Z},\ \ \ \boxed{t_{n}=1-\frac{4}{\mu}\left(\Gamma_{ \mathbf{t}}\right)_{n,n+1}}\,. \tag{10}\] Figure 1. Contours used for the Cauchy integral, for a homoclinic configuration (Left), and a heteroclinic configuration (Right). The main difference is that \(0\) is in the spectrum in the heteroclinic case. We prove below that \(\sigma_{\mathrm{ess}}(T)=[-2W,-2\delta]\cup[2\delta,2W]\), and that the spectrum of \(T\) is symmetric with respect to \(0\). We implicitly used that \(\Gamma\) is symmetric and real-valued. With this definition, the dimerized configuration \(\mathbf{t}^{+}\) is a homoclinic critical point. The kink state constructed in [3] is a heteroclinic critical point. Now we can provide our main result, which states that all homoclinic or heteroclinic critical points of \(\mathcal{F}_{\mathbf{t}}\) converge exponentially fast to \(\mathbf{t}^{+}\) at \(+\infty\). **Theorem 2.4**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic critical point, and let \(\{\mathbf{u}\}\) be the sequence \(u_{n}:=t_{n}-t_{n}^{+}\). 
If \(\mathbf{u}\) is square integrable at \(+\infty\) (\(\mathbf{u}\in\ell^{2}(\mathbb{Z}^{+})\)), then \(\mathbf{u}\) is exponentially localized at \(+\infty\): there is \(C\geq 0\) and \(\alpha>0\) so that_ \[|u_{n}|\leq C\mathrm{e}^{-\alpha n}.\] Of course, the same applies in the \(-\infty\) direction, and we have exponential convergence to \(\mathbf{t}^{+}\) or \(\mathbf{t}^{-}\) at \(-\infty\) depending whether the critical configuration is homoclinic or heteroclinic. We note that there exist critical points (that is configurations satisfying (10)), which do not converge to \(\mathbf{t}^{\pm}\) at infinity. For instance, in [2, 3], the authors show the existence of kink-like solutions for a closed chain with an odd number of atoms (see also Figure 2). This solution satisfies the critical point equation (10), but, seeing the closed chain as a periodic configuration, it does not converge to \(\mathbf{t}^{\pm}\) at infinity. Note that this exponential localization was already known for the exactly soluble continuum model of Takayama, Lin-Liu and Maki [17]. ### Strategy of the proof Let us briefly explain the strategy to prove Theorem 2.4. We break the proof in several Lemmas, that we prove later in Section 4. Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic critical point, and let \(\{\mathbf{u}\}\) be the sequence \(u_{n}:=t_{n}-t_{n}^{+}\), so that \(T=T^{+}+U\). The configurations \(\{\mathbf{t}\}\) and \(\{\mathbf{t}^{+}\}\) are critical points, hence satisfy the Euler-Lagrange equations \[t_{n}=1-\frac{4}{\mu}\Gamma_{n,n+1},\qquad t_{n}^{+}=1-\frac{4}{\mu}\Gamma_{n,n+1}^{+},\] with \(\Gamma:=\Gamma_{\mathbf{t}}=\mathds{1}(T^{+}+U<0)\), and \(\Gamma^{+}:=\mathds{1}(T^{+}<0)\). According to Lemma 2.1, the expression of \(\Gamma\) and \(\Gamma_{\mathbf{t}}\) can be written using the Cauchy's residual formula using the _same_ contour \(\mathscr{C}\), that is \[\Gamma=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{\mathrm{d}z}{z-(T^{+ }+U)},\quad\text{and}\quad\Gamma^{+}=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{ C}}\frac{\mathrm{d}z}{z-T^{+}},\] and where the operators in the integrand are uniformly bounded in \(z\in\mathscr{C}\). Since \(u_{n}=t_{n}-t_{n}^{+}\), we obtain (we use the resolvent formula in the last line) \[u_{n} =\frac{4}{\mu}\left(\Gamma^{+}-\Gamma\right)_{n,n+1}=\frac{4}{\mu }\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left[\frac{1}{z-T^{+}}- \frac{1}{z-(T^{+}+U)}\right]\mathrm{d}z\right)_{n,n+1}\] \[=\frac{-4}{\mu}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \left(\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\right)\mathrm{d}z\right)_{n,n+1}+ \frac{1}{\mu}(\mathbf{Q}_{U}(U,U))_{n},\] with the remainder term \[(\mathbf{Q}_{U}(\mathbf{u}_{1},\mathbf{u}_{2}))_{n}=-4\left(\frac{1}{2 \mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T^{+}}U_{1}\frac{1}{z-(T^{+}+U)}U _{2}\frac{1}{z-T^{+}}\mathrm{d}z\right)_{n,n+1}.\] Figure 2. A localized kink appears in the chain with \(L=101\) carbon atoms. Multiplying by \(\mu\) and reordering the terms, this can be also written as \[\forall n\in\mathbb{Z},\quad(\mathscr{L}\mathbf{u})_{n}=(\mathbf{Q}_{U}(\mathbf{u },\mathbf{u}))_{n}, \tag{11}\] with the linear map \[(\mathscr{L}\mathbf{u})_{n}=\mu u_{n}+4\left(\frac{1}{2\mathrm{i}\pi}\oint_{ \mathscr{C}}\left(\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\right)\mathrm{d}z\right) _{n,n+1}. 
\tag{12}\] Formally, if \(\mathbf{v}\) is another real sequence, with corresponding operator \(V\), we have \[\langle\mathbf{v},(\mathcal{L}\mathbf{u})\rangle=\sum_{n\in\mathbb{Z}}v_{n}( \mathscr{L}\mathbf{u})_{n}=\mu\sum_{n\in\mathbb{Z}}v_{n}u_{n}+2\mathrm{Tr} \left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T^{+}}U\frac{1}{z- T^{+}}V\right)\mathrm{d}z \tag{13}\] and we recognize the expression of the Hessian \(H_{\mathbf{t}^{+}}(\mathbf{v},\mathbf{u})\) in (9). Unfortunately, the previous computations is formal, since \(\mathbf{u}\) is not necessary in \(\ell^{2}(\mathbb{Z})\). We only know that it is square integrable at \(+\infty\). Actually, for a heteroclinic configuration, we have \(\mathbf{u}\notin\ell^{2}(\mathbb{Z})\), since \(\mathbf{u}\) does not decay to \(0\) at \(-\infty\). In order to bypass this difficulty, we regularize \(\mathbf{u}\) using appropriate cut-off functions. For \(\alpha>0\) and \(s\in\mathbb{Z}\) that we choose later (\(\alpha\) will be small, and \(s\) will be large), we introduce the function \(\theta_{\alpha,s}:\mathbb{Z}\to\mathbb{R}^{+}\) defined by \[\theta_{\alpha,s}(n)=\min\{\mathrm{e}^{\alpha n},\mathrm{e}^{\alpha s}\}= \begin{cases}\mathrm{e}^{\alpha n},&\text{if }n<s\\ \mathrm{e}^{\alpha s},&\text{if }n\geq s\end{cases}, \tag{14}\] and denote by \(\Theta_{\alpha,s}\) the multiplication operator by \(\theta_{\alpha,s}\), defined by \((\Theta_{\alpha,s})_{n,m}=\theta_{\alpha,s}(n)\delta_{n,m}\). In what follows, we will consider the sequence \(\widetilde{\mathbf{u}}_{\alpha,s}\), defined by \[(\widetilde{u}_{\alpha,s})_{n}:=\theta_{\alpha,s}(n)\theta_{\alpha,s}(n+1)u_{ n},\quad\text{with corresponding operator}\quad\widetilde{U}_{\alpha,s}=\Theta_{\alpha,s}U\Theta_{\alpha,s}.\] Since \(\mathbf{u}\) is bounded and square integrable at \(+\infty\) the vector \(\widetilde{\mathbf{u}}_{\alpha,s}\) is in \(\ell^{2}(\mathbb{Z})\) for all \(\alpha>0\) and all \(s\in\mathbb{Z}\). We also introduce the operator \(\widetilde{T}^{+}_{\alpha,s}\) acting on \(\ell^{2}(\mathbb{Z})\), and defined in the canonical basis by \[\forall n\in\mathbb{Z},\qquad\left(\widetilde{T}^{+}_{\alpha,s}\right)_{n,n+1 }:=\frac{\theta_{\alpha,s}(n)}{\theta_{\alpha,s}(n+1)}t^{+}_{n},\quad\left( \widetilde{T}^{+}_{\alpha,s}\right)_{n+1,n}:=\frac{\theta_{\alpha,s}(n+1)}{ \theta_{\alpha,s}(n)}t^{+}_{n},\] and \(\left(\widetilde{T}^{+}_{\alpha,s}\right)_{i,j}=0\) if \(|i-j|\neq 1\). Note that \(\widetilde{T}^{+}_{\alpha,s}\) is not symmetric. Using that \[\frac{\theta_{\alpha,s}(n)}{\theta_{\alpha,s}(n+1)}=\begin{cases}\mathrm{e}^{ -\alpha}\quad\text{if}\quad n<s\\ 1\quad\text{if}\quad n\geq s,\end{cases}\] we see that \(\widetilde{T}^{+}_{\alpha,s}\) has the matrix form \[\widetilde{T}^{+}_{\alpha,s}=\begin{pmatrix}\ddots&\ddots&\ddots&\ddots&\ddots &\ddots&\ddots\\ \ddots&0&t^{+}_{s-2}\mathrm{e}^{-\alpha}&0&0&0&\ddots\\ \ddots&t^{+}_{s-2}\mathrm{e}^{\alpha}&0&t^{+}_{s-1}\mathrm{e}^{-\alpha}&0&0& \ldots\\ \ddots&0&t^{+}_{s-1}\mathrm{e}^{\alpha}&0&t^{+}_{s}&0&\ddots\\ \ddots&0&0&t^{+}_{s}&0&t^{+}_{s+1}&\ddots\\ \ddots&0&0&0&t^{+}_{s+1}&0&\ddots\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots\\ \end{pmatrix}. \tag{15}\] This operator is constructed to satisfy the following commutation relations (see Section 4.1 for the proof). **Lemma 2.5**.: _The operator \(\widetilde{T}^{+}_{\alpha,s}\) satisfies_ \[\Theta_{\alpha,s}T^{+}=\widetilde{T}^{+}_{\alpha,s}\Theta_{\alpha,s}\quad\text{ and}\quad T^{+}\Theta_{\alpha,s}=\Theta_{\alpha,s}\left(\widetilde{T}^{+}_{\alpha,s}\right)^{*}. 
\tag{16}\] _There is \(\alpha^{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha_{*}\), all \(s\in\mathbb{Z}\), and all \(z\in\mathscr{C}\), the operators \(z-\widetilde{T}^{+}_{\alpha,s}\) and \(z-(\widetilde{T}^{+}_{\alpha,s})^{*}\) are invertible, with \(\|(z-\widetilde{T}^{+}_{\alpha,s})^{-1}\|_{\mathrm{op}}\leq C\) and \(\|(z-(\widetilde{T}^{+}_{\alpha,s})^{*})^{-1}\|_{\mathrm{op}}\leq C\). In addition, we have_ \[\Theta_{\alpha,s}\frac{1}{z-T^{+}}=\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}} \Theta_{\alpha,s},\quad\text{and}\quad\frac{1}{z-T^{+}}\Theta_{\alpha,s}= \Theta_{\alpha,s}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}.\] We multiply (11) on the left by \(\theta_{s}(n)\), and on the right by \(\theta_{s}(n+1)\). Using that, for any operator \(A\) on \(\ell^{2}(\mathbb{Z})\), we have \(\theta_{\alpha,s}A_{n,n+1}\theta_{\alpha,s}(n+1)=(\Theta_{\alpha,s}A\Theta_{ \alpha,s})_{n,n+1}\), and the fact that \[\Theta_{\alpha,s}\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\Theta_{\alpha,s}=\frac{ 1}{z-\widetilde{T}^{+}_{\alpha,s}}\underbrace{\Theta_{\alpha,s}U\Theta_{ \alpha,s}}_{=\widetilde{U}_{\alpha,s}}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s })^{*}},\] we obtain an equation of the form \[\left(\widetilde{\mathscr{L}}_{\alpha,s}\widetilde{\mathbf{u}}_{\alpha,s} \right)_{n}=\left(\widetilde{\mathbf{Q}}_{\alpha,s,U}(\mathbf{u},\mathbf{u}) \right)_{n}, \tag{17}\] where \(\widetilde{\mathscr{L}}_{\alpha,s}\) is the operator defined on \(\ell^{2}(\mathbb{Z})\) by \[\forall\widetilde{\mathbf{v}}\in\ell^{2}(\mathbb{Z}),\quad\left(\widetilde{ \mathscr{L}}_{\alpha,s}\widetilde{\mathbf{v}}\right)_{n}:=\mu(\widetilde{v}_ {\alpha,s})_{n}+4\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left( \frac{1}{z-\widetilde{T}^{+}_{\alpha,s}}\widetilde{V}\frac{1}{z-(\widetilde{T }^{+}_{\alpha,s})^{*}}\right)\mathrm{d}z\right)_{n,n+1}, \tag{18}\] and with the right-hand side given by \[(\widetilde{\mathbf{Q}}_{\alpha,s,U}(\mathbf{u}_{1},\mathbf{u}_{2}))_{n}=-4 \left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-\widetilde{T}^{+}_ {\alpha,s}}(\Theta_{\alpha,s}U_{1})\frac{1}{z-(T^{+}+U)}(U_{2}\Theta_{\alpha,s })\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}\mathrm{d}z\right)_{n,n+1}. \tag{19}\] The exponential decay is a consequence of the following Lemmas. **Lemma 2.6**.: _The operator \(\mathscr{L}\) defined in (12), seen as an operator from \(\ell^{2}(\mathbb{Z})\) to itself, is bounded symmetric with bounded inverse. There is \(\alpha_{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha_{*}\) and all \(s\in\mathbb{Z}\), the operator \(\widetilde{\mathscr{L}}_{\alpha,s}\) defined in (18), seen as an operator from \(\ell^{2}(\mathbb{Z})\) to itself, is bounded with bounded inverse._ Note that the operator \(\widetilde{\mathscr{L}}_{\alpha,s}\) is not symmetric for \(\alpha>0\). We refer to Section 4.2 for the proof. A key property that we use in the proof is the fact that the Hessian \(H_{\mathbf{t}^{+}}\) is coercive (see Proposition 4.1 below). Due to the equality \(\langle\mathbf{v},\mathscr{L}\mathbf{v}\rangle_{\ell^{2}}=H_{\mathbf{t}^{+}}( \mathbf{v},\mathbf{v})\) for \(\mathbf{v}\in\ell^{2}(\mathbb{Z})\) (see (13)), this implies that \(\mathscr{L}\) is invertible. In order to control the right-hand side of (17), we use the following result (see Section 4.3 for the proof). 
**Lemma 2.7**.: _There is \(\alpha_{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha_{*}\) and all \(s\in\mathbb{Z}\), we have_ \[\forall\mathbf{u}_{1},\mathbf{u}_{2}\in\ell^{\infty}(\mathbb{Z})\cap\ell^{2}( \mathbb{Z}^{+}),\qquad\left\|\widetilde{Q}_{\alpha,s,U}(\mathbf{u}_{1}, \mathbf{u}_{2})\right\|_{\ell^{2}(\mathbb{Z})}\leq C\|\theta_{\alpha,s} \mathbf{u}_{1}\|_{\ell^{4}}\|\theta_{\alpha,s}\mathbf{u}_{2}\|_{\ell^{4}}.\] We can now prove the exponential decay of \(\mathbf{u}\). From (17) and the two Lemmas, we get that there is \(C\geq 0\) and \(\alpha^{*}>0\) so that, for all \(0<\alpha\leq\alpha^{*}\) and all \(s\in\mathbb{Z}\), we have \[\|\widetilde{\mathbf{u}}_{\alpha,s}\|_{\ell^{2}}^{2}\leq C\|\theta_{\alpha,s} \mathbf{u}\|_{\ell^{4}}^{4}. \tag{20}\] Concerning the left-hand side, we note that \(\theta_{\alpha,s}\) is increasing so that \(\theta_{\alpha,s}(n)\theta_{\alpha,s}(n+1)\geq\theta_{\alpha,s}^{2}(n)\). Hence \[\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}\leq\|\widetilde{\mathbf{u} }_{\alpha,s}\|_{\ell^{2}}^{2}.\] Let us now bound the right-hand side. We fix \(\varepsilon:=\frac{1}{2\sqrt{C}}\), where \(C\) is the constant appearing in (20). Since \(\mathbf{u}\) goes to \(0\) at \(+\infty\), there is \(M\) large enough so that \(|u_{n}|<\varepsilon\) for all \(n\geq M\). This gives \[\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{4}}^{4} =\sum_{n\leq M}\theta_{\alpha,s}^{4}(n)|u_{n}|^{4}+\sum_{n>M} \theta_{\alpha,s}^{4}(n)|u_{n}|^{4}\leq\|\mathbf{u}\|_{\ell^{\infty}}^{4}\sum_ {n\leq M}\theta_{\alpha,s}^{4}(n)+\varepsilon^{2}\sum_{n>M}\theta_{\alpha,s}^{ 4}(n)|u_{n}|^{2}\] \[=\|\mathbf{u}\|_{\ell^{\infty}}^{4}\sum_{n\leq M}\mathrm{e}^{4 \alpha n}+\varepsilon^{2}\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}= \|\mathbf{u}\|_{\ell^{\infty}}^{4}\frac{\mathrm{e}^{4\alpha M}}{1-\mathrm{e}^ {-4\alpha}}+\varepsilon^{2}\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}.\] Plugging these inequalities in (20) gives \[(1-C\varepsilon^{2})\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}\leq\| \mathbf{u}\|_{\ell^{\infty}}^{4}\frac{\mathrm{e}^{4\alpha M}}{1-\mathrm{e}^{- 4\alpha}}.\] With our choice of \(\varepsilon\), the quantity \(1-C\varepsilon^{2}=\frac{1}{2}\) is positive. The right-hand side is a bound independent of \(s\in\mathbb{Z}\). We can take the limit \(s\to\infty\), and conclude that \[\big{(}\mathrm{e}^{2\alpha n}u_{n}\big{)}_{n\in\mathbb{Z}}\in\ell^{2}(\mathbb{ Z}),\quad\text{with}\quad\left\|\big{(}\mathrm{e}^{2\alpha n}u_{n}\big{)}_{n\in \mathbb{Z}}\right\|_{\ell^{2}(\mathbb{Z})}\leq 2\|\mathbf{u}\|_{\ell^{\infty}}^{4} \frac{\mathrm{e}^{4\alpha M}}{1-\mathrm{e}^{-4\alpha}}.\] This proves as wanted that the sequence \(\mathbf{u}\) is exponentially decaying at \(+\infty\). ## 3. Smoothness and Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\) In this section, we prove Lemma 2.1, which states that \(\mathbf{h}\mapsto(T+H)_{-}\) is smooth locally around \(\mathbf{0}\), whenever \(T\) is a homoclinic or heteroclinic configurations. We first record a useful Lemma that we will use many times throughout the article. In what follows, we denote by \(\mathcal{B}:=\mathcal{B}(\ell^{2}(\mathbb{Z}))\) the set of bounded operators acting in \(\ell^{2}(\mathbb{Z})\), and by \(\mathfrak{S}_{p}:=\mathfrak{S}_{p}(\ell^{2}(\mathbb{Z}))\) the \(p\)-Schatten class: \(A\in\mathfrak{S}_{p}\) iff A is a compact operator with \(\|A\|_{\mathfrak{S}_{p}}:=\mathrm{Tr}(|A|^{p})^{1/p}<+\infty\). 
The set \(\mathfrak{S}_{\infty}\) is simply the set of compact operators, with \(\|A\|_{\mathfrak{S}_{\infty}}=\|A\|_{\mathrm{op}}\). **Lemma 3.1**.: _Let \(\mathbf{a}\) be a sequence from \(\mathbb{Z}\) to \(\mathbb{R}\), and let \(A\) be the corresponding operator._ * _If_ \(\mathbf{a}\in\ell^{\infty}\)_, then_ \(A\) _is a bounded operator (_\(A\in\mathcal{B}\)_), and_ \(\|A\|_{\mathrm{op}}\leq 2\|\mathbf{a}\|_{\ell^{\infty}}\) _;_ * _If_ \(\mathbf{a}\) _goes to_ \(0\) _at_ \(\pm\infty\)_, then_ \(A\) _is compact (_\(A\in\mathfrak{S}_{\infty}\)_) ;_ * _If_ \(\mathbf{a}\in\ell^{p}(\mathbb{Z})\) _for some_ \(1\leq p<\infty\)_, then_ \(A\) _is in the Schatten class_ \(\mathfrak{S}_{p}\)_, and_ \[\|A\|_{\mathfrak{S}^{p}}\leq 2\|\mathbf{a}\|_{\ell^{p}}.\] Proof.: For the first part, we note that, for all \(\psi\in\ell^{2}(\mathbb{Z})\), we have \[|\langle\psi,A\psi\rangle_{\ell^{2}}|=\left|\sum_{n\in\mathbb{Z}}a_{n}(\overline {\psi_{n}}\psi_{n+1}+\overline{\psi_{n+1}}\psi_{n})\right|\leq\|\mathbf{a}\|_{ \ell^{\infty}}\sum_{n\in\mathbb{Z}}\big{(}|\psi_{n}|^{2}+|\psi_{n+1}|^{2}\big{)} =2\|\mathbf{a}\|_{\ell^{\infty}}\|\psi\|_{\ell^{2}}^{2},\] where we used that \(\overline{a}b+a\overline{b}\leq|a|^{2}+|b|^{2}\) in the middle inequality. For the second part, we note that the operator \(A\) is the limit, for the operator norm, of the finite-rank operators \(A^{N}\) associated with the truncated configurations \(a^{N}:=(\mathbf{1}_{-N\leq n\leq N}\,a_{n})_{n\in\mathbb{Z}}\). Hence \(A\) is compact. For the last part, we first prove the result for \(p=1\). We have, by duality, \[\|A\|_{\mathfrak{S}_{1}} =\sup_{\stackrel{{ K\in\mathbb{R}}}{{|K|_{\mathrm{op }}=1}}}|\mathrm{Tr}(AK)|=\sup_{\stackrel{{ K\in\mathbb{R}}}{{|K|_{ \mathrm{op}}=1}}}\left|\sum_{n\in\mathbb{Z}}a_{n}(K_{n+1,n}+K_{n,n+1})\right|\] \[\leq\|\mathbf{a}\|_{\ell^{1}}\sup_{\stackrel{{ K\in\mathbb{R}}}{{|K|_{\mathrm{op}}=1}}}\sum_{n\in\mathbb{Z}}(|K_{n+1,n}|+|K_{n,n+1}|) \leq 2\|\mathbf{a}\|_{\ell^{1}}.\] We used in the last line that \(|K_{n,n+1}|=|\langle e_{n},Ke_{n+1}\rangle|\leq\|K\|_{\mathrm{op}}\). Finally, to conclude the proof, we proceed by interpolation using Riesz-Thorin interpolation theorem for Schatten spaces (see [14, Remark 1 p.23] and [13, p.115] for the version with \(\mathcal{B}\) instead of \(\mathfrak{S}_{\infty}\)). ### The spectrum of homoclinic and heteroclinic configurations In order to prove that \(\mathcal{F}_{\mathbf{t}}\) is smooth, we first study the spectrum of the operator \(T\) when \(\mathbf{t}\) is such a configuration. We treat the two cases separately. #### 3.1.1. Homoclinic configurations Let \(\{\mathbf{t}\}\) be a homoclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\). Then, we can write \(\mathbf{t}=\mathbf{t}^{+}+\mathbf{u}\), where we recall that \(\mathbf{t}^{+}\) is the dimerized configuration \(t_{n}^{+}=W+(-1)^{n}\delta\) with \(\delta>0\), and where the sequence \(\mathbf{u}\) goes to \(0\) at \(\pm\infty\). We have \(T=T^{+}+U\). The operator \(T^{+}\) has purely essential spectrum, of the form (see for instance [4] and references therein) \[\sigma\left(T^{+}\right)=\sigma_{\mathrm{ess}}\left(T^{+}\right)=[-2W,-2 \delta]\cup[2\delta,2W].\] In particular, \(T^{+}\) has a spectral gap of size \(4\delta\) around \(0\). On the other hand, since \(\mathbf{u}\) goes to \(0\) at \(\pm\infty\), \(U\) is compact, see Lemma 3.1. We thus deduce from Weyl's theorem that \[\sigma_{\mathrm{ess}}(T)=\sigma_{\mathrm{ess}}(T^{+})=[-2W,-2\delta]\cup[2 \delta,2W]. 
\tag{21}\] In particular, \(0\notin\sigma_{\mathrm{ess}}(T)\). In addition, we claim that \(0\) is not an eigenvalue of \(T\). More specifically, we have the following. **Lemma 3.2**.: _Let \(\{\mathbf{t}\}\) be_ **any** _configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\) (in particular, all coefficients \(t_{n}\) are non null). Assume there is \(N_{0}\in\mathbb{N}\) and \(0<\kappa<1\) so that_ \[\textbf{(Homoclinic case)}\quad\forall|n|\geq N_{0},\quad\left|\frac{t_{2n+1}}{t _{2n}}\right|\leq\kappa.\] _Then \(0\) is not an eigenvalue of \(T\). Conversely, if_ \[\textbf{(Heteroclinic case)}\quad\forall n\geq N_{0},\quad\left|\frac{t_{2n+1}}{t _{2n}}\right|\leq\kappa\quad\text{and}\quad\left|\frac{t_{-2n}}{t_{-2n-1}} \right|\leq\kappa,\] _then \(0\) is an eigenvalue of \(T\) of multiplicity \(1\)._ For a homoclinic (resp. heteroclinic) configurations, the first (resp. second) condition is satisfied with \(\kappa=\frac{W-\delta}{W+\delta}<1\). Proof.: The eigenvalue equation \(T\psi=0\) reads \[\forall n\in\mathbb{Z},\quad t_{n}\psi_{n}+t_{n+1}\psi_{n+2}=0.\] We obtain directly \[\psi_{2n}=(-1)^{n}\prod_{m=1}^{n}\left(\frac{t_{2m-2}}{t_{2m-1}}\right)\psi_{ 0},\quad\psi_{2n+1}=(-1)^{n}\prod_{m=1}^{n}\left(\frac{t_{2m-1}}{t_{2m}} \right)\psi_{1}.\] The vector space \(\{\psi,\ T\psi=0\}\) is therefore \(2\) dimensional, since \(\psi\in\{T\psi=0\}\) can be recovered from its values \(\psi_{0}\) and \(\psi_{1}\), and we have \(\mathrm{Ker}(T)=\{T\psi=0\}\cap\ell^{2}(\mathbb{Z})\). Let us first consider the homoclinic case, and let \(\psi\in\{T\psi=0\}\). Since \(|t_{2n}/t_{2n+1}|\geq\kappa^{-1}>1\) for \(n\geq N_{0}\), we have \(|\psi_{2N_{0}+2k}|\geq|\psi_{2N_{0}}|\kappa^{-k}\) as \(k\to\infty\), so \(\psi\) cannot be square integrable at \(+\infty\), unless \(\psi_{2N_{0}}=0\), which is equivalent to \(\psi_{0}=0\). Similarly, we have \(|\psi_{-2N_{0}-2k+1}|\geq|\psi_{-2N_{0}+1}|\kappa^{-k}\) as \(k\to\infty\), so \(\psi\) cannot be square integrable at \(-\infty\), unless \(\psi_{-2N_{0}+1}=0\), which gives \(\psi_{1}=0\) as well. So \(\mathrm{Ker}(T)=\{0\}\). In the heteroclinic case, the same reasoning shows that we must have \(\psi_{0}=0\). However, given \(\psi_{1}\in\mathbb{R}\), the function \(\psi\) with \(\psi_{2n+1}=(-1)^{n}\prod_{m=1}^{n}\left(\frac{t_{2m-1}}{t_{2m}}\right)\psi_{1}\) and \(\psi_{2n}=0\) is a square integrable non null eigenvector. In this case, \(\dim\mathrm{Ker}(T)=1\). **Remark 3.3**.: _In the heteroclinic case, the corresponding normalized eigenvector \(\psi\) is sometimes called an_ edge state_, or_ interface state _or_ zero mode_. As shown in the proof, it is exponentially decaying at \(\pm\infty\): there is \(C\geq 0\) and \(\beta:=-\log(\kappa)>0\) so that \(|\psi_{n}|\leq C\mathrm{e}^{-\beta|n|}\). It is always exponentially decaying, even though the sequence \(\mathbf{t}\) may converge to \(\mathbf{t}^{\pm}\) very slowly at \(\pm\infty\). Actually, we do not require \(\mathbf{t}\) to be a critical point here._ _Note that it is only supported on the odd integers: \(\psi_{2n}=0\) for all \(n\in\mathbb{Z}\). In particular, the corresponding projector \(Z:=|\psi\rangle\langle\psi|\) satisfies_ \[\forall n\in\mathbb{Z},\qquad Z_{n,n+1}=Z_{n+1,n}=0.\] Let us return to the homoclinic case. We proved that \(0\notin\sigma(T)\). Let \(g:=\operatorname{dist}(0,\sigma(T))\) be the distance between \(0\) and the spectrum of \(T\), and set \(\eta:=g/8\). Let \(\mathbf{h}\) be any perturbation with \(\|\mathbf{h}\|_{\infty}\leq\eta\). 
Then \(\|H\|_{\mathrm{op}}\leq 2\eta\) by Lemma 3.1. In particular, the spectrum of \(T+H\) is \(2\eta\)-close to the one of \(T\), hence \(\sigma(T+H)\cap[-g/2,g/2]=\emptyset\). Let us consider the positively oriented contour \(\mathscr{C}\) in (8). We deduce first that \((z-(T+H))\) is invertible for all \(z\in\mathscr{C}\), and with \(\|(z-(T+H))^{-1}\|_{\mathrm{op}}\leq C\) for a constant \(C\) independent of \(z\in\mathscr{C}\). Also, from the Cauchy residual formula, we have \[\mathds{1}(T+H<0)=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{\mathrm{d}z }{z-(T+H)},\] and \[(T+H)_{-}=-(T+H)\mathds{1}(T+H<0)=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \frac{z}{z-(T+H)}\mathrm{d}z.\] #### 3.1.2. Heteroclinic configurations Now, let \(\{\mathbf{t}\}\) be a heteroclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\). First, we claim that \(0\notin\sigma_{\mathrm{ess}}(T)\). **Lemma 3.4**.: _Let \(\mathbf{t}\) be a heteroclinic configuration. Then_ \[\sigma_{\mathrm{ess}}(T)=[-2W,-2\delta]\cup[2\delta,2W].\] _In particular, \(0\notin\sigma_{\mathrm{ess}}(T)\)._ Proof.: Introduce the sequence \(\widetilde{\mathbf{t}}\) with \(\widetilde{t_{n}}=t_{n}\) if \(n\neq 0\), and \(\widetilde{t_{0}}=0\). We denote by \(\widetilde{T}\) the corresponding operator, and set \(K:=T-\widetilde{T}\). In a matrix format, the decomposition \(T=\widetilde{T}+K\) reads \[\left(\begin{array}{cccc|c}\ddots&\ddots&0\\ \ddots&0&t_{-1}&0\\ 0&t_{-1}&0&t_{0}&0\\ \hline 0&t_{0}&0&t_{1}&0\\ &&0&t_{1}&0&\ddots\\ &&&0&\ddots&\ddots\end{array}\right)=\left(\begin{array}{cccc|c}\ddots&\ddots&0 \\ \ddots&0&t_{-1}&0\\ 0&t_{-1}&0&0&0\\ \hline 0&0&0&t_{1}&0\\ &&0&t_{1}&0&\ddots\\ &&&0&\ddots&\ddots\end{array}\right)+\left(\begin{array}{cccc|c}\ddots&\ddots& 0\\ \ddots&0&0&0\\ 0&0&t_{0}&0&0\\ \hline 0&t_{0}&0&0&0\\ &&0&0&\ddots\\ &&0&\ddots&\ddots\end{array}\right)\] The operator \(K\) is of rank \(2\), hence is compact, so \(\sigma_{\mathrm{ess}}(T)=\sigma_{\mathrm{ess}}(\widetilde{T})\) by Weyl's theorem. In addition, the operator \(\widetilde{T}\) is of the form \(\widetilde{T}=\widetilde{T}_{L}\oplus\widetilde{T}_{R}\) acting on \(\ell^{2}(\mathbb{Z})\sim\ell^{2}(\mathbb{Z}^{-})\oplus\ell^{2}(\mathbb{Z}^{+})\), hence \[\sigma_{\mathrm{ess}}(\widetilde{T})=\sigma_{\mathrm{ess}}(\widetilde{T}_{L} )\bigcup\sigma_{\mathrm{ess}}(\widetilde{T}_{R}).\] Let us first focus on the right operator \(\widetilde{T}_{R}\). The hopping amplitudes \(\widetilde{t_{n}}\) for \(n\geq 1\) are of the form \(\widetilde{t_{n}}=t_{n}^{+}+u_{n}\) with \(\lim_{n\to\infty}u_{n}=0\). So, with obvious notation, \(\widetilde{T}_{R}=\widetilde{T}_{R}^{+}+U_{R}\). The sequence \((u_{n})\) goes to zero, so \(U_{R}\) is a compact operator (the proof is similar than for Lemma (3.1)), and \(\sigma_{\mathrm{ess}}(\widetilde{T}_{R})=\sigma_{\mathrm{ess}}(\widetilde{T}_ {R}^{+})\). 
Finally, reasoning as before and introducing the cut compact operator \(K^{+}:=T^{+}-\widetilde{T}^{+}\), we have \[\sigma(T^{+})=\sigma_{\mathrm{ess}}(T^{+})=\sigma_{\mathrm{ess}}(\widetilde{T}_ {L}^{+})\cup\sigma_{\mathrm{ess}}(\widetilde{T}_{R}^{+}).\] In addition, since \(t_{-n}^{+}=t_{n}^{+}\) for the dimerized configuration \(\mathbf{t}^{+}\), \(\widetilde{T}_{L}^{+}\) is unitary equivalent to \(\widetilde{T}_{R}^{+}\), and in particular \[\sigma_{\mathrm{ess}}(\widetilde{T}_{R}^{+})=\sigma_{\mathrm{ess}}(\widetilde{ T}_{L}^{+})=[-2W,-2\delta]\cup[2\delta,2W].\] Altogether, we proved that \(\sigma_{\mathrm{ess}}(\widetilde{T}_{R})=[-2W,-2\delta]\bigcup[2\delta,2W]\). The proof for the left part is similar, upon replacing \(T^{+}\) by \(T^{-}\). In addition, using Lemma 3.2, we know that \(0\) is an eigenvalue of \(T\) of multiplicity \(1\). So \(0\) is an isolated eigenvalue, and we set \[g:=\operatorname{dist}\left(0,\sigma(T)\setminus\{0\}\right)\quad>0,\quad \text{and}\quad\eta:=\min\left\{\frac{g}{8},\frac{\tau}{2},\frac{\delta}{2} \right\}>0.\] By standard perturbation theory, for all \(\mathbf{h}\) with \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\eta\) (hence \(\|H\|_{\mathrm{op}}\leq 2\eta\) by Lemma 3.1), the spectrum of \(T+H\) is composed of an isolated eigenvalue \(\lambda_{0}(T+H)\) of multiplicity \(1\), with \(|\lambda_{0}(T+H)|\leq 2\eta\leq g/4\) corresponding to the perturbation of the \(0\) eigenvalue of \(T\), and the rest of the spectrum, at distance at least \(g-2\eta>3g/4\) from \(0\). Since \(\|\mathbf{h}\|_{\ell^{\infty}}<\tau/2\) and \(\|\mathbf{h}\|_{\ell^{\infty}}<\delta/2\), the vector \(\mathbf{t}+\mathbf{h}\) satisfies \(\mathbf{t}+\mathbf{h}\geq\tau/2>0\) and \((t_{n}+h_{n})\in(t_{n}-\delta/2,t_{n}+\delta/2)\). In particular, it satisfies the assumption of Lemma 3.2 (heteroclinic case) with \(\kappa=\frac{W-\delta/2}{W+\delta/2}<1\). So \(\lambda_{0}(T+H)=0\): the eigenvalue \(0\) is unperturbed by the addition of \(H\). We consider the positively oriented contour \(\mathscr{C}\) defined in (8). We deduce from the previous discussion that, for all \(\mathbf{h}\) with \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\eta\), we have \[(T+H)_{-}=-(T+H)\mathds{1}(T+H<0)=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \frac{z}{z-(T+H)}\mathrm{d}z,\] where all operators appearing are uniformly bounded by some constant \(C\geq 0\) independent of \(z\in\mathscr{C}\). We also remark that we have \[\mathds{1}(T+H<0)=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{\mathrm{d}z }{z-(T+H)},\quad\text{and}\quad\mathds{1}(T+H\leq 0)=\mathds{1}(T+H<0)+Z,\] where \(Z=|\psi\rangle\langle\psi|\) is the rank-1 projector onto the normalized zero-mode \(\psi\in\mathrm{Ker}(T+H)\), see Remark 3.3. ### Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\) In this section, we study the energy \(\mathcal{F}_{\mathbf{t}}\) in (7), and prove Lemma 2.2. Recall that \(\mathcal{F}_{\mathbf{t}}\) is defined by \[\mathcal{F}_{\mathbf{t}}(\mathbf{h}):=\frac{\mu}{2}\sum_{n\in\mathbb{Z}}(h_{n }+2t_{n}-2)h_{n}-2\mathrm{Tr}((T+H)_{-}-T_{-}).\] In what follows, \(\mathbf{t}\) is a homoclinic or heteroclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\). We introduce the constant \(\eta>0\) and the contour \(\mathscr{C}\) as in the previous section. First, we claim that for all \(\mathbf{h}\) with \(\|\mathbf{h}\|_{\ell^{1}}\leq\eta\) (we now use the \(\ell^{1}\) norm), the map \(\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is well-defined and finite. 
Since \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\|\mathbf{h}\|_{\ell^{1}}\leq\eta\), \(\mathbf{h}\) satisfies the conditions of the previous section. For the first part of the energy, we write that \[\sum_{n\in\mathbb{Z}}h_{n}^{2}=\|\mathbf{h}\|_{\ell^{2}}^{2}\leq\|\mathbf{h}\|_{\ell^{1}}^{2},\quad\text{and}\quad\left|\sum_{n\in\mathbb{Z}}(2t_{n}-2)h_{n}\right|\leq(2\|\mathbf{t}\|_{\ell^{\infty}}+2)\,\|\mathbf{h}\|_{\ell^{1}},\] so the first part is continuous from \(\ell^{1}\) to \(\mathbb{R}\). For the second part, we use the Cauchy residual formula, and get that \[(T+H)_{-}-T_{-}=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left(\frac{z}{z-(T+H)}-\frac{z}{z-T}\right)\mathrm{d}z=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left(\frac{1}{z-(T+H)}H\frac{1}{z-T}\right)z\mathrm{d}z,\] where we used the resolvent formula in the last equality. In particular, for all \(\mathbf{h}\in\ell^{1}\) with \(\|\mathbf{h}\|_{\ell^{1}}\leq\eta\) and all \(z\in\mathscr{C}\), we have, using Lemma 3.1, \[\left\|\frac{1}{z-(T+H)}H\frac{1}{z-T}\right\|_{\mathfrak{S}_{1}}\leq\left\|\frac{1}{z-(T+H)}\right\|_{\mathrm{op}}\|H\|_{\mathfrak{S}_{1}}\left\|\frac{1}{z-T}\right\|_{\mathrm{op}}\leq 2C^{2}\|\mathbf{h}\|_{\ell^{1}}. \tag{22}\] Integrating \(z\) over the compact contour \(\mathscr{C}\) eventually shows that \(\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is well-defined and continuous around \(\mathbf{0}\) for the \(\ell^{1}\) norm. We can push the resolvent expansion further, and write that \[\frac{1}{z-(T+H)}-\frac{1}{z-T}=\sum_{n=1}^{\infty}\frac{1}{z-T}\left(H\frac{1}{z-T}\right)^{n},\] where the sum on the right is absolutely convergent in \(\mathcal{B}\) whenever \[\sup_{z\in\mathscr{C}}\left\|\frac{1}{z-T}H\right\|_{\mathrm{op}}<1,\] which happens whenever \(\|\mathbf{h}\|_{\ell^{\infty}}\) is small enough, according to Lemma 3.1. Actually, it is also absolutely convergent in \(\mathfrak{S}_{1}\) whenever \(\|\mathbf{h}\|_{\ell^{1}}\) is small enough. We deduce directly that \(\mathbf{h}\mapsto\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is analytic on an \(\ell^{1}\) neighborhood of \(\mathbf{0}\). Let us compute the differential and hessian of this map. We write \[\mathcal{F}_{\mathbf{t}}(\mathbf{h})=L_{\mathbf{t}}(\mathbf{h})+\frac{1}{2}H_{\mathbf{t}}(\mathbf{h},\mathbf{h})+R_{\mathbf{t}}(\mathbf{h}),\] with the linear form (differential) \(L_{\mathbf{t}}\) on \(\ell^{1}(\mathbb{Z})\), defined by \[L_{\mathbf{t}}(\mathbf{h}):=\mu\sum_{n\in\mathbb{Z}}(t_{n}-1)h_{n}+2\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H\frac{1}{z-T}z\mathrm{d}z\right),\] the bilinear form (hessian) \(H_{\mathbf{t}}\) on \(\ell^{1}(\mathbb{Z})\times\ell^{1}(\mathbb{Z})\), defined by \[H_{\mathbf{t}}(\mathbf{h},\mathbf{k}):=\mu\sum_{n\in\mathbb{Z}}h_{n}k_{n}+4\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H\frac{1}{z-T}K\frac{1}{z-T}z\mathrm{d}z\right),\] and the remainder \[R_{\mathbf{t}}(\mathbf{h}):=2\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left[\frac{1}{z-T}H\right]^{3}\frac{1}{z-(T+H)}z\mathrm{d}z\right).\] Reasoning as in (22), we see that \(|R_{\mathbf{t}}(\mathbf{h})|\leq C\|\mathbf{h}\|_{\ell^{3}}^{3}\leq C\|\mathbf{h}\|_{\ell^{1}}^{3}\). Similarly, we have \[|H_{\mathbf{t}}(\mathbf{h},\mathbf{k})|\leq C\|H\|_{\mathfrak{S}_{2}}\|K\|_{\mathfrak{S}_{2}}\leq C^{\prime}\|\mathbf{h}\|_{\ell^{2}}\|\mathbf{k}\|_{\ell^{2}},\] so the \(H_{\mathbf{t}}\) bilinear form can be extended continuously on \(\ell^{2}(\mathbb{Z})\).
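As a quick numerical sanity check of the expression of the differential stated in Lemma 2.2, one can compare, on a large closed dimerized chain, a centered finite difference of the energy with the closed form \(\mu\sum_{n}(t_{n}-1)h_{n}+2\mathrm{Tr}(\Gamma_{\mathbf{t}}H)\). The short sketch below does this; the chain length and the values of \(W\), \(\delta\), \(\mu\) and of the perturbation \(\mathbf{h}\) are arbitrary choices made only for the illustration and are not taken from the text.

```python
import numpy as np

# Sanity check of the differential L_t of Lemma 2.2 on a closed dimerized chain.
# All numerical values (L, W, delta, mu, the perturbation h) are illustrative choices only.
L, W, delta, mu = 200, 1.0, 0.3, 2.0
idx = np.arange(L)
t = W + (-1.0) ** idx * delta                  # dimerized configuration t^+

def hop(c):
    """Closed-chain hopping matrix: T[k, k+1] = T[k+1, k] = c[k], indices taken mod L."""
    T = np.zeros((L, L))
    T[idx, (idx + 1) % L] = c
    T[(idx + 1) % L, idx] = c
    return T

def energy(c):
    """Finite-chain energy (mu/2) sum (c_n - 1)^2 - 2 Tr((T_c)_-), via the negative eigenvalues."""
    w = np.linalg.eigvalsh(hop(c))
    return 0.5 * mu * np.sum((c - 1.0) ** 2) + 2.0 * np.sum(w[w < 0.0])

rng = np.random.default_rng(1)
h = np.zeros(L)
h[90:110] = 0.01 * rng.standard_normal(20)     # small, compactly supported perturbation

eps = 1e-4
finite_diff = (energy(t + eps * h) - energy(t - eps * h)) / (2.0 * eps)

w, v = np.linalg.eigh(hop(t))
Gamma = v[:, w < 0.0] @ v[:, w < 0.0].T        # spectral projector 1(T < 0)
closed_form = mu * np.sum((t - 1.0) * h) + 2.0 * np.trace(Gamma @ hop(h))

# The two values agree up to the O(eps^2) error of the centered difference.
print(finite_diff, closed_form)
```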
To end the proof of Lemma 2.2, it remains to simplify the expressions of \(L_{\mathbf{t}}\) and \(H_{\mathbf{t}}\). We use the following result. **Lemma 3.5**.: _We have_ \[\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H \frac{1}{z-T}z\mathrm{d}z\right)=\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi} \oint_{\mathscr{C}}\frac{1}{z-T}H\mathrm{d}z\right)=\mathrm{Tr}(\Gamma_{ \mathbf{t}}H) \tag{23}\] _and_ \[4\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H \frac{1}{z-T}K\frac{1}{z-T}z\mathrm{d}z\right)=2\mathrm{Tr}\left(\frac{1}{2 \mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H\frac{1}{z-T}K\mathrm{d}z\right). \tag{24}\] Proof.: First, writing that \(z=(z-T)+T\), we get \[\frac{1}{2\mathrm{i}\pi}\oint\frac{z\mathrm{d}z}{(z-T)^{2}}=\frac{1}{2\mathrm{ i}\pi}\oint\frac{\mathrm{d}z}{z-T}+\frac{T}{2\mathrm{i}\pi}\oint\frac{ \mathrm{d}z}{(z-T)^{2}},\] and the second term vanishes by the Cauchy residual formula. We recognize the spectral projector \[\Gamma_{\mathbf{t}}:=\mathds{1}(T<0)=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C }}\frac{\mathrm{d}z}{z-T},\] in the first term. This and the cyclicity of the trace gives (23). We now differentiate this equality with respect to \(T\), in the direction \(K\). We get \[\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\mathrm{Tr}\left(\frac {1}{z-T}K\frac{1}{z-T}H\frac{1}{z-T}+\frac{1}{z-T}H\frac{1}{z-T}K\frac{1}{z-T }\right)z\mathrm{d}z\] \[\quad=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\mathrm{Tr}\left( \frac{1}{z-T}K\frac{1}{z-T}H\right)\mathrm{d}z.\] Using again the cyclicity of the trace gives (24). ## 4. Proofs of the Lemmas In this section, we provide the proofs of the Lemmas appearing in Section 2.4. ### Proof of Lemma 2.5 We first prove Lemma 2.5, which compares \(T\) and \(\widetilde{T}^{+}_{\alpha,s}\). The fact that \(\Theta_{\alpha,s}T^{+}=\widetilde{T}^{+}_{\alpha,s}\Theta_{\alpha,s}\) is a simple computation. Taking the adjoint gives the second equality of (16). Let us now prove that \((z-\widetilde{T}_{\alpha,s})\) is invertible for \(\alpha\) small enough. The operator \(T-T^{+}_{\alpha,s}\) satisfies \[\big{(}T-T^{+}_{\alpha,s}\big{)}_{n,n+1}=\begin{cases}t^{+}_{n}(1-\mathrm{e}^{ -\alpha})&\text{ if }n<s\\ 0&\text{ if }n\geq s\end{cases},\quad\big{(}T-T^{+}_{\alpha,s}\big{)}_{n+1,n}= \begin{cases}t^{+}_{n}(1-\mathrm{e}^{\alpha})&\text{ if }n<s\\ 0&\text{ if }n\geq s\end{cases},\] and \(\big{(}T-T^{+}_{\alpha,s}\big{)}_{i,j}=0\) if \(|i-j|\neq 1\). Reasoning as in Lemma 3.1, we deduce that \[\|T-T^{+}_{\alpha,s}\|_{\mathrm{op}}\leq 2\max_{n\in\mathbb{Z}}|t^{+}_{n}| \cdot\max\{|1-\mathrm{e}^{-\alpha}|,|1-\mathrm{e}^{\alpha}|\}=2(W+\delta)( \mathrm{e}^{\alpha}-1). \tag{25}\] This bound is independent of \(s\in\mathbb{Z}\), and goes to \(0\) as \(\alpha\to 0\). Since \((z-T^{+})\) is invertible for all \(z\in\mathscr{C}\), we deduce that for \(\alpha\) small enough, \((z-\widetilde{T}^{+}_{\alpha,s})\) is invertible with bounded inverse, and satisfies \(\|(z-\widetilde{T}^{+}_{\alpha,s})^{-1}\|\leq C\) for a constant \(C\) independent of \(z\in\mathscr{C}\). 
Finally, from the equality \(\Theta_{\alpha,s}T^{+}=\widetilde{T}^{+}_{\alpha,s}\Theta_{\alpha,s}\), we get \(\Theta_{\alpha,s}(z-T^{+})=(z-\widetilde{T}^{+}_{\alpha,s})\Theta_{\alpha,s}\), which gives, as wanted \[\Theta_{\alpha,s}\frac{1}{z-T^{+}}=\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}} \Theta_{\alpha,s},\quad\text{and}\quad\frac{1}{z-T^{+}}\Theta_{\alpha,s}= \Theta_{\alpha,s}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}.\] ### Proof of Lemma 2.6 We now prove Lemma 2.6. The first and most important step is to prove the following Proposition, whose proof is postponed to Section 4.4 **Proposition 4.1**.: _For the dimerized configuration \(\mathbf{t}^{+}\), the hessian \(H_{\mathbf{t}^{+}}\) is bounded on \(\ell^{2}(\mathbb{Z})\times\ell^{2}(\mathbb{Z})\) and coercive._ Using this result, we can prove that \(\mathscr{L}\) is a symmetric bounded invertible operator. Recall that \(\mathscr{L}\) is defined in (12) by \[(\mathscr{L}\mathbf{u})_{n}=\mu u_{n}+4\left(\frac{1}{2\mathrm{i}\pi}\oint_{ \mathscr{C}}\left(\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\right)\mathrm{d}z\right) _{n,n+1}.\] As we already noticed in (13), we have \[\langle\mathbf{v},\mathscr{L}\mathbf{w}\rangle_{\ell^{2}}=\langle\mathscr{L} \mathbf{v},\mathbf{w}\rangle_{\ell^{2}}=H_{\mathbf{t}^{+}}(\mathbf{v},\mathbf{ w}).\] This equality is first valid for \(\mathbf{v},\mathbf{w}\) compactly supported, but can be extended for \(\mathbf{v},\mathbf{w}\in\ell^{2}(\mathbb{Z})\) by continuity of \(H_{\mathbf{t}^{+}}\). This already proves that \(\mathscr{L}\) is a symmetric bounded operator on \(\ell^{2}(\mathbb{Z})\). In addition, the coercivity of \(H_{\mathbf{t}^{+}}\) shows that \(\mathscr{L}\) is invertible with bounded inverse (Lax-Milgram theorem). We now focus on the map \(\widetilde{\mathscr{L}}_{\alpha,s}\) defined in (18). We claim that \(\|\widetilde{\mathscr{L}}_{\alpha,s}-\mathscr{L}\|_{\mathrm{op}}\) goes to \(0\) as \(\alpha\to 0\). This will eventually prove that \(\widetilde{\mathscr{L}}_{\alpha,s}\) is also invertible with bounded inverse. 
We have, for \(\mathbf{v},\mathbf{w}\in\ell^{2}(\mathbb{Z})\), \[\langle\mathbf{v},(\mathscr{L}-\widetilde{\mathscr{L}}_{\alpha,s})\mathbf{u} \rangle_{\ell^{2}}=2\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C} }\left[\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}V-\frac{1}{z-\widetilde{T}^{+}_{ \alpha,s}}U\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}V\right]\mathrm{d}z \right).\] We have \[\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}-\frac{1}{z-\widetilde{T}^{+}_{ \alpha,s}}U\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}=\] \[\quad=\left(\frac{1}{z-T^{+}}-\frac{1}{z-\widetilde{T}^{+}_{ \alpha,s}}\right)U\frac{1}{z-T^{+}}+\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}}U \left(\frac{1}{z-T^{+}}-\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}\right)\] \[\quad=\frac{1}{z-T^{+}}(\widetilde{T}^{+}_{\alpha,s}-T^{+})\frac {1}{z-\widetilde{T}^{+}_{\alpha,s}}U\frac{1}{z-T^{+}}+\frac{1}{z-\widetilde{T}^ {+}_{\alpha,s}}U\frac{1}{z-T^{+}}((\widetilde{T}^{+}_{\alpha,s})^{*}-T^{+}) \frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}.\] We then use estimates of the form (\(R\) stands for resolvent) \[\mathrm{Tr}(R_{1}(T-\widetilde{T})R_{2}UR_{3}V)\leq\left\|R_{1}(T- \widetilde{T})R_{2}UR_{3}V\right\|_{\mathfrak{S}^{1}}\leq\|R_{1}\|_{\mathrm{op}} \|R_{2}\|_{\mathrm{op}}\|R_{3}\|_{\mathrm{op}}\|T-\widetilde{T}\|_{\mathrm{op}} \|U\|_{\mathfrak{S}^{2}}\|V\|_{\mathfrak{S}^{2}}.\] We deduce that there is \(C\geq 0\) so that, for \(\alpha\) small enough, \[\left|\langle\mathbf{v},(\mathscr{L}-\widetilde{\mathscr{L}}_{ \alpha,s})\mathbf{u}\rangle_{\ell^{2}}\right|\leq C\|\widetilde{T}_{\alpha,s}^ {+}-T^{+}\|_{\mathrm{op}}\|U\|_{\mathfrak{S}^{2}}\|V\|_{\mathfrak{S}^{2}}\leq 2 C\|\widetilde{T}_{\alpha,s}^{+}-T^{+}\|_{\mathrm{op}}\|\mathbf{u}\|_{ \ell^{2}}\|\mathbf{v}\|_{\ell^{2}},\] where we used Lemma 3.1 in the last inequality. We proved in Lemma 25 that \(\|\widetilde{T}_{\alpha,s}^{+}-T^{+}\|_{\mathrm{op}}\to 0\) as \(\alpha\to 0\). Together with the fact that \(\mathscr{L}\) is invertible, we deduce that for there is \(\alpha^{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha^{*}\) and all \(s\in\mathbb{Z}\), the operator \(\widetilde{\mathcal{L}}_{\alpha,s}\) is invertible with \(\|(\widetilde{\mathcal{L}}_{\alpha,s})^{-1}\|_{\mathrm{op}}\leq C\). This concludes the proof of Lemma 2.6 ### Proof of Lemma 2.7 Finally, we focus on the map \(\widetilde{Q}_{\alpha,s,U}\) defined in (19). First, using that \(\sum_{n}(A)_{n,n+1}^{2}\leq\|A\|_{\mathfrak{S}^{2}}^{2}\) and estimates of the form \[\|R_{1}(\Theta V)R_{2}(W\Theta)R_{3}\|_{\mathfrak{S}^{2}}\leq\|R_{1}\|_{ \mathrm{op}}\|R_{2}\|_{\mathrm{op}}\|R_{3}\|_{\mathrm{op}}\|\Theta V\|_{ \mathfrak{S}_{4}}\|W\Theta\|_{\mathfrak{S}_{4}},\] we get \[\left\|\widetilde{Q}_{\alpha,s,U}(\mathbf{v},\mathbf{w})\right\| _{\ell^{2}(\mathbb{Z})}^{2}\leq C\|\Theta_{\alpha,s}V\|_{\mathfrak{S}^{4}}\|W \Theta_{\alpha,s}\|_{\mathfrak{S}^{4}}.\] It remains to bound \(\|\Theta_{\alpha,s}U\|_{\mathfrak{S}_{4}}\) by \(\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{4}}\). To do so, we follow the steps of Lemma 3.1, and prove that for all \(1\leq p<\infty\) and all \(\mathbf{u}\in\ell^{p}(\mathbb{Z})\), we have \(\Theta_{\alpha,s}U\) in \(\mathfrak{S}_{p}\) (in \(\mathcal{B}\) if \(p=\infty\)), and \[\|\Theta_{\alpha,s}U\|_{\mathfrak{S}_{p}}\leq C_{p}\|\theta_{ \alpha,s}\mathbf{u}\|_{\ell^{p}} \tag{26}\] for a constant \(C_{p}\) independent of \(\mathbf{u}\) (and \(\|\Theta_{\alpha,s}U\|_{\mathrm{op}}\leq C_{\infty}\|\theta_{\alpha,s} \mathbf{u}\|_{\ell^{\infty}}\) for \(p=\infty\)). 
We use below the fact that \[\theta_{\alpha,s}(n)\leq\theta_{\alpha,s}(n+1)\leq\mathrm{e}^{ \alpha}\theta_{\alpha,s}(n). \tag{27}\] First, for \(p=\infty\), we have, for \(\psi\in\ell^{2}(\mathbb{Z})\), \[\|\Theta_{\alpha,s}U\psi\|_{\ell^{2}}^{2} =\sum_{n\in\mathbb{Z}}\theta_{\alpha,s}^{2}(n)|u_{n-1}\psi_{n-1}+ u_{n}\psi_{n+1}|^{2}\leq 2\sum_{n\in\mathbb{Z}}\theta_{\alpha,s}^{2}(n)|u_{n-1}|^{2}| \psi_{n-1}|^{2}+\theta_{\alpha,s}^{2}(n)|u_{n}|^{2}|\psi_{n+1}|^{2}\] \[\leq 2\mathrm{e}^{\alpha}\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{ \infty}}^{2}\|\psi\|_{\ell^{2}}^{2}+2\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{ \infty}}^{2}\|\psi\|_{\ell^{2}}^{2}.\] We used that \(|a+b|^{2}\leq 2|a|^{2}+2|b|^{2}\) for the first inequality, and (27) for the second. This proves the bound \[\|\Theta_{\alpha,s}U\|_{\mathrm{op}}^{2}\leq(2\mathrm{e}^{\alpha}+2)\left\| \theta_{\alpha,s}\mathbf{u}\right\|_{\ell^{\infty}}^{2}.\] In the case \(p=1\), we have by duality \[\|\Theta_{\alpha,s}U\|_{\mathfrak{S}_{1}} =\sup_{K\in\mathfrak{S}\atop\|K\|_{\mathrm{op}}=1}|\mathrm{Tr}( \Theta_{\alpha,s}UK)|=\sup_{K\in\mathfrak{B}\atop\|K\|_{\mathrm{op}}=1}|\sum_{ n\in\mathbb{Z}}u_{n}\theta_{\alpha,s}(n)K_{n+1,n}+u_{n}\theta_{\alpha,s}(n+1)K_{n,n+1}|\] \[\leq\sum_{n\in\mathbb{Z}}|u_{n}\theta_{\alpha,s}(n)|+\sum_{n\in \mathbb{Z}}|u_{n}\theta_{\alpha,s}(n+1)|\leq\|\theta_{\alpha,s}\mathbf{u}\|_{ \ell^{1}}+\mathrm{e}^{\alpha}\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{1}}.\] Here, we used that both \(|K_{n,n+1}|\) and \(|K_{n+1,n}|\) are smaller than \(1\), and (27) for the last inequality. We conclude that (26) holds for all \(1\leq p\leq\infty\) using Riesz-Thorin interpolation. ### Coercivity of the Hessian at the dimerized configuration In this section, we prove Proposition 4.1. Recall that \[H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})=\mu\|\mathbf{h}\|^{2}+2\mathrm{Tr} \left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T^{+}}H\frac{1}{z-T^ {+}}H\mathrm{d}z\right)\] We already proved that \(H_{\mathbf{t}^{+}}\) is a bounded quadratic form on \(\ell^{2}(\mathbb{Z})\). We now prove that, for the dimerized configuration \(\mathbf{t}^{+}\), the Hessian \(H_{\mathbf{t}^{+}}\) is a coercive bilinear map on \(\ell^{2}(\mathbb{Z})\), namely that there is \(C>0\) so that, for all \(\mathbf{h}\in\ell^{2}(\mathbb{Z})\), we have \[H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})\geq C\|\mathbf{h}\|_{\ell^{2}}^{2}.\] By density, it is enough to prove the result for all compactly supported sequences \(\mathbf{h}\). Assume that \(\mathbf{h}\) is such a sequence, so that \(h_{n}=0\) for all \(|n|\geq S\). First, we claim that there is \(C>0\) so that, for all \(L\) large enough, we have \[H^{(L)}_{\mathbf{t}_{L}^{+}}(\mathbf{h},\mathbf{h})\geq C\|\mathbf{h}\|_{\ell^{2 }}^{2}, \tag{28}\] where \(H^{(L)}_{\mathbf{t}_{L}^{+}}(\mathbf{h},\mathbf{h})\) is the hessian of the SSH model for the closed \(L=2N\) chain, defined by \[H^{(L)}_{\mathbf{t}_{L}^{+}}(\mathbf{h},\mathbf{h})=\mu\|\mathbf{h}\|_{\ell^{2 }}^{2}+2\mathrm{Tr}_{L}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{ 1}{z-T_{L}^{+}}H\frac{1}{z-T_{L}^{+}}H\mathrm{d}z\right).\] Here, \(\mathrm{Tr}_{L}\) is the trace for \(L\times L\) hermitian matrices, \(\mathbf{t}_{L}^{+}\) is the dimerized ground state of the closed \(L\)-chain (\(L\) even), of the dimerized form (see [6]) \[\mathbf{t}_{L}^{+}=W_{L}+(-1)^{n}\delta_{L}, \tag{29}\] and \(T_{L}^{+}\) is the associated \(L\times L\) hermitian matrix. It was proved in [3, 4] that \(W_{L}\to W\) and \(\delta_{L}\to\delta\) as \(L\to\infty\). 
Actually, the bound (28) was more of less proved in [3], with a constant \(C\) independent of \(L\) for \(L\) large enough. We provide here another proof. To prove (28), as in [6], we use the convexity of the function \(f:[0,1]\to\mathbb{R}\) defined by \[[0,1]\ni x\mapsto-\sqrt{x}-\frac{1}{8}x^{2}.\] As a consequence, the map \(A\mapsto\mathrm{Tr}f(A)\) is convex on the set of hermitian matrices with spectrum in \([0,1]\). This implies that, with \(A=\frac{1}{\|T\|_{\mathrm{op}}}T^{2}\), \[-\mathrm{Tr}(\sqrt{T^{2}})\geq-\mathrm{Tr}(\sqrt{\langle T^{2}\rangle})+\frac {1}{8}\frac{1}{\|T\|_{\mathrm{op}}^{3}}\mathrm{Tr}[T^{4}-\langle T^{2}\rangle ^{2}],\] where \(\langle A\rangle\) is the average of \(A\) over all translations, namely \[\langle A\rangle=\frac{1}{L}\sum_{k=0}^{L-1}\Theta_{1}^{k}A\Theta_{1}^{-k}, \qquad\Theta_{1}=\begin{pmatrix}0&1&0&\cdots&0\\ 0&0&1&0&\cdots\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0&1\\ 1&0&\cdots&0&0\end{pmatrix}.\] We deduce that \[\mathcal{E}^{(L)}(\mathbf{t})\geq\frac{\mu}{2}\sum_{n=1}^{L}(t_{n}-1)^{2}- \mathrm{Tr}_{L}(\sqrt{\langle T^{2}\rangle})+\frac{1}{8}\frac{1}{\|T\|^{3}} \mathrm{Tr}_{L}[T^{4}-\langle T^{2}\rangle^{2}]. \tag{30}\] Since the operator \(T_{L}^{\pm}\) always corresponds to a 2-periodic configuration, it holds \[\Theta_{1}\left(T_{L}^{\pm}\right)^{2}\Theta_{1}^{*}=\left(T_{L}^{\pm}\right)^ {2}\quad\text{and, in particular}\quad\left(T_{L}^{\pm}\right)^{2}=\left\langle \left(T_{L}^{\pm}\right)^{2}\right\rangle.\] We deduce that there is equality in (30) for \(\mathbf{t}_{L}^{+}\). We also deduce from (30) that, for all \(\mathbf{t}\), we have \[\mathcal{E}^{(L)}(\mathbf{t})-\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+})\geq\frac{ 1}{8}\frac{1}{\|T\|_{\mathrm{op}}^{3}}\mathrm{Tr}[T^{4}-\langle T^{2}\rangle ^{2}].\] We apply this inequality for \(\mathbf{t}=\mathbf{t}_{L}^{+}+s\mathbf{h}\) and get that \[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{(L)}(\mathbf{t }_{L}^{+})\geq\frac{1}{8}\frac{1}{\|T_{L}^{+}+sH\|_{\mathrm{op}}^{3}}\mathrm{Tr }\left[(T_{L}^{+}+sH)^{4}-\langle(T_{L}^{+}+sH)^{2}\rangle^{2}\right]\] For the denominator, we use the fact that, for \(s\) small enough, we have \(\|T_{L}^{+}+sH\|_{\mathrm{op}}\leq 2\|T_{L}^{+}\|_{\mathrm{op}}\). For the numerator, expanding the expression and using that \((T_{L}^{+})^{2}=\langle(T_{L}^{+})^{2}\rangle\), so that \(\mathrm{Tr}_{L}((T_{L}^{+})^{3}H)=\langle(T_{L}^{+})^{2}\rangle\). \(\operatorname{Tr}_{L}\left(\langle(T_{L}^{+})^{2}\rangle\langle T_{L}^{+}H\rangle\right)\) and \(\operatorname{Tr}_{L}(\langle(T_{L}^{+})^{2}\rangle\langle H^{2}\rangle)= \operatorname{Tr}_{L}((T_{L}^{+})^{2}H^{2})\) (so that the orders \(O(1)\) and \(O(s)\) vanish), we obtain that \[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{( L)}(\mathbf{t}_{L}^{+}) \geq\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\operatorname{op}}^{ 3}}\text{Tr}_{L}\Big{[}\left(T_{L}^{+}H+HT_{L}^{+}\right)^{2}-\langle T_{L}^{ +}H+HT_{L}^{+}\rangle^{2}\Big{]}+o(s^{2})\] \[=\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\operatorname{op}}^{3}} \left(\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^ {2}-\left\|\langle T_{L}^{+}H+HT_{L}^{+}\rangle\right\|_{\operatorname{ \mathfrak{S}}_{2}}^{2}\right)+o(s^{2})\] The previous computation is valid for all \(\mathbf{h}\). We now use the fact that \(\mathbf{h}\) is compactly supported in \([-S,S]\), and that \(L\gg S\). This allows to prove that the last term is small. More specifically, we have the following. 
**Lemma 4.2**.: _For all \(S\in\mathbb{N}\) all \(L\gg S\), all \(\mathbf{t}\in\mathbb{C}^{L}\) and all \(\mathbf{h}\in\mathbb{C}^{L}\) compactly supported in \([-S,S]\), we have_ \[\left\|\langle TH\rangle\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}\leq \frac{6S}{L}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}.\] Proof.: Set \(A=TH\), so that \[A_{n,n}=t_{n}h_{n}+h_{n-1}t_{n-1},\quad A_{n,n+2}=t_{n+1}h_{n},\quad A_{n,n-2}= t_{n-2}h_{n-1}.\] The matrix \(\langle A\rangle\) is of the form \[\langle A\rangle_{n,n}=2a_{0},\quad\langle A\rangle_{n,n+2}=a_{1},\quad\langle A \rangle_{n,n-2}=a_{-1},\qquad\text{with}\quad a_{m}:=\frac{1}{L}\sum_{k=0}^{L- 1}t_{k+m}h_{k}.\] Using that \(h\) is compactly supported and Cauchy-Schwarz we get \[|a_{m}|^{2}=\frac{1}{L^{2}}\left(\sum_{k=-S}^{S}t_{k+m}h_{k}\right)^{2}\leq \frac{1}{L^{2}}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\left(\sum_{k=-S}^{S}h_{k} \right)^{2}\leq\frac{1}{L^{2}}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\|\mathbf{h} \|_{\ell^{2}}^{2}S.\] We obtain \[\|A\|_{\operatorname{\mathfrak{S}}^{2}}^{2}=\sum_{n,m}\langle A\rangle_{n,m}^ {2}=L(2a_{0})^{2}+La_{1}^{2}+La_{-1}^{2}\leq 6L\frac{1}{L^{2}}\|\mathbf{t}\|_{ \ell^{\infty}}^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}S.\] This proves that \[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{(L)}(\mathbf{t }_{L}^{+})=\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\operatorname{op}}^{3}} \left(\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2} -\frac{12S}{L}\|\mathbf{t}_{L}^{+}\|_{\operatorname{\mathfrak{L}}_{\ell^{ \infty}}^{2}}^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}\right)+o(s^{2}).\] Finally, we bound from below the remaining \(\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}\). A computation shows that the matrix \(A:=TH+HT\) satisfies \[A_{n,n}=2(t_{n}h_{n}+t_{n-1}h_{n-1}),\quad A_{n,n+2}=t_{n}h_{n+1}+h_{n}t_{n+1}, \quad A_{n,n-2}=t_{n-1}h_{n-2}+h_{n-1}t_{n-2},\] and \(A_{i,j}=0\) otherwise. Squaring all terms and summing gives \[\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}= \sum_{n}4(t_{n}h_{n}+t_{n-1}h_{n-1})^{2}+(t_{n}h_{n+1}+h_{n}t_{n+1})^{2}+(t_{n- 1}h_{n-2}+h_{n-1}t_{n-2})^{2}.\] Expanding, relabelling all sums, and using that \(\mathbf{t}_{L}^{+}\) is dimerized, of the form (29), we obtain \[\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}=2 \sum_{n\in\mathbb{Z}}\left\langle\left(\begin{matrix}h_{n}\\ h_{n+1}\end{matrix}\right),Q_{n}\left(\begin{matrix}h_{n}\\ h_{n+1}\end{matrix}\right)\right\rangle,\quad\text{with}\quad Q_{n}=\begin{pmatrix}2t_{n} ^{2}+t_{n+1}^{2}&3t_{n}t_{n+1}\\ 3t_{n}t_{n+1}&t_{n}^{2}+2t_{n+1}^{2}\end{pmatrix}.\] We have \[\operatorname{Tr}(Q_{n})=3(t_{n}^{2}+t_{n+1}^{2})=6(W_{L}^{2}+\delta_{L}^{2})\] and \[\det Q_{n}=(2t_{n}^{2}+t_{n+1}^{2})(t_{n}^{2}+2t_{n+1}^{2})-9t_{n}^{2}t_{n+1}^{2 }=2t_{n}^{4}+2t_{n+1}^{4}-4t_{n}^{2}t_{n+1}^{2}=2(t_{n}^{2}-t_{n+1}^{2})^{2}=32 \,W_{L}^{2}\delta_{L}^{2}.\] Since \(\delta_{L}\to\delta>0\) and \(W_{L}\to W\) for \(L\) large enough, there is a constant \(C\geq 0\) such that \(Q_{n}\geq C>0\) for a constant \(C\) independent of \(n\) and \(L\) large enough. 
So \[\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\mathfrak{S}_{2}}^{2}\geq 2C\|\mathbf{h} \|_{\ell^{2}}^{2}.\] Altogether, we proved that for \(L\) large enough, we have \[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{(L )}(\mathbf{t}_{L}^{+}) \geq\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\mathrm{op}}^{3}} \left(2C-\frac{12S}{L}\|\mathbf{t}_{L}^{+}\|_{\ell^{\infty}}^{2}\right)\| \mathbf{h}\|_{\ell^{2}}^{2}+o(s^{2})\] \[\geq\widetilde{C}s^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}+o(s^{2}),\] where \(\widetilde{C}\) is independent of \(L\), for \(L\) large enough (\(L\geq L_{0}\), where \(L_{0}\) depends on the support \(S\) of \(\mathbf{h}\)). This proves the lower bound (28) for \(H_{\mathbf{t}_{L}^{+}}^{(L)}\). To conclude the proof, we note that \[\left|\mathrm{Tr}_{L}\left(\frac{1}{z-T^{+}}H\frac{1}{z-T^{+}}H\right)- \mathrm{Tr}_{L}\left(\frac{1}{z-T_{L}^{+}}H\frac{1}{z-T_{L}^{+}}H\right)\right| \leq C\|H\|_{\mathfrak{S}_{2}}^{2}\|T^{+}-T_{L}^{+}\|_{\mathrm{op,L}}.\] Since \(W_{L}\to W\) and \(\delta_{L}\to L\), we have \(\|T^{+}-T_{L}^{+}\|_{\mathrm{op,L}}\to 0\) as \(L=2N\) (even) goes to infinity. So for \(L\) large enough, we have \[\left|H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})-H_{\mathbf{t}_{L}^{+}}^{(L)}( \mathbf{h},\mathbf{h})\right|\leq\frac{C}{2}\|\mathbf{h}\|_{\ell^{2}}^{2},\] where \(C\) is the bound in (28). This proves \(H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})\geq\frac{C}{2}\|\mathbf{h}\|_{\ell^ {2}}^{2}\), where the constant \(C\) is independent of \(\mathbf{h}\). We proved the bound for \(\mathbf{h}\) compactly supported, but by density, it can be extended for all \(\mathbf{h}\in\ell^{2}(\mathbb{Z})\), hence the coercivity of \(H_{\mathbf{t}^{+}}\).
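To make the critical point equation (10) and the exponential relaxation of Theorem 2.4 concrete, the following sketch solves the closed-chain analogue of (10) on an odd chain of \(L=101\) sites, as in Figure 2, by a naive damped fixed-point iteration, and then measures how fast the staggered order parameter relaxes to its dimerized value away from the kink. It is only an illustration: the value \(\mu=2\), the initial guess, the damping factor and the stopping rule are arbitrary choices, and convergence of this simple iteration is not guaranteed in general.

```python
import numpy as np

# Illustrative numerical solution of the closed-chain analogue of the critical point
# equation (10) on an odd chain (L = 101, as in Figure 2). The value mu = 2.0, the initial
# guess, the 50% damping and the iteration count are arbitrary choices for this sketch.
L, mu = 101, 2.0
idx = np.arange(L)
rng = np.random.default_rng(0)
t = 1.5 + 0.1 * rng.standard_normal(L)          # initial hopping amplitudes

def gamma_offdiag(c):
    """Cyclic entries Gamma_{n,n+1} of the spectral projector 1(T < 0) of the closed chain."""
    T = np.zeros((L, L))
    T[idx, (idx + 1) % L] = c
    T[(idx + 1) % L, idx] = c
    w, v = np.linalg.eigh(T)
    P = v[:, w < 0.0] @ v[:, w < 0.0].T
    return P[idx, (idx + 1) % L]

for _ in range(3000):                           # damped fixed-point iteration on (10)
    t_new = 1.0 - (4.0 / mu) * gamma_offdiag(t)
    if np.max(np.abs(t_new - t)) < 1e-11:
        break
    t = 0.5 * (t + t_new)

stagger = (-1.0) ** idx * (t - t.mean())        # staggered order parameter
center = int(np.argmin(np.abs(stagger)))        # the kink sits where the alternation collapses
dist = np.arange(1, 11)
bulk = np.max(np.abs(stagger))
deviation = np.abs(np.abs(stagger[(center + dist) % L]) - bulk)
print("kink center:", center)
print("deviation from the dimerized value at distances 1..10:", np.round(deviation, 6))
```

When the iteration converges, the printed deviations decrease roughly geometrically with the distance to the kink, which is the discrete counterpart of the exponential localization stated in Theorem 2.4.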
In this paper, we consider the stationary states of the SSH model for the infinite polyacetylene chain. These states are homoclinic or heteroclinic connections between the dimerized configurations, and we prove that such connections converge exponentially fast to their respective stable periodic states.
2306.06073
Feature Selection on Sentinel-2 Multi-spectral Imagery for Efficient Tree Cover Estimation
This paper proposes a multi-spectral random forest classifier with suitable feature selection and masking for tree cover estimation in urban areas. The key feature of the proposed classifier is filtering out the built-up region using spectral indices, followed by random forest classification on the remaining mask with carefully selected features. Using Sentinel-2 satellite imagery, we evaluate the performance of the proposed technique on a specified area (approximately 82 acres) of Lahore University of Management Sciences (LUMS) and demonstrate that our method outperforms a conventional random forest classifier as well as state-of-the-art methods such as the European Space Agency (ESA) WorldCover 10m 2020 product and a DeepLabv3 deep learning architecture.
Usman Nazir, Momin Uppal, Muhammad Tahir, Zubair Khalid
2023-05-31T20:27:10
http://arxiv.org/abs/2306.06073v1
# Feature Selection on Sentinel-2 Multi-Spectral Imagery for Efficient Tree Cover Estimation ###### Abstract This paper proposes a multi-spectral random forest classifier with suitable feature selection and masking for tree cover estimation in urban areas. The key feature of the proposed classifier is filtering out the built-up region using spectral indices followed by random forest classification on the remaining mask with carefully selected features. Using Sentinel-2 satellite imagery, we evaluate the performance of the proposed technique on a specified area (approximately 82 acres) of Lahore University of Management Sciences (LUMS) and demonstrate that our method outperforms a conventional random forest classifier as well as state-of-the-art methods such as European Space Agency (ESA) WorldCover \(10\)m \(2020\) product as well as a DeepLabv3 deep learning architecture. Usman Nazir, Momin Uppal, Muhammad Tahir, and Zubair Khalid+Department of Electrical Engineering, Syed Babar Ali School of Science and Engineering Lahore University of Management Sciences (LUMS), Lahore, Pakistan {usman.nazir, momin.uppal, tahir, zubair.khalid}@lums.edu.pk Random Forest Classifier, Spectral Indices, Sentinel-2 Satellite, European Space Agency (ESA) WorldCover, DeepLabv3 Footnote †: We acknowledge the support of the Higher Education Commission of Pakistan under grant GCF-521. ## 1 Introduction The presence of easily accessible multispectral satellite imagery has expanded the range of potential applications across diverse fields. An important example is automated detection of trees and green spaces that are significant contributors to ecosystem services such as air purification and carbon sequestration. Recent studies include [1] and [2] for global monitoring of environment and forest cover using Sentinel-2 imagery. A Copernicus Sentinel-2B satellite, launched in 2017 provides 13 bands with spatial resolution from \(10\) m to \(60\) m. The high spatial and temporal resolution of data from this satellite is specifically designed for vegetation monitoring. For tree cover estimation, a broad range of methodologies have been presented in the literature, e.g., [1, 3, 4, 5]. The authors in [3] proposed a data fusion method of different spatial resolution satellite imagery for forest-type mapping. Forest cover change is mapped in [4] using spatial resolution of \(25\) m to \(30\) m. A spatio-temporal study on forest cover change using GIS and remote sensing techniques in Ethiopia valley is presented in [5]. In [1], a simple tree cover (referred to as 'forest cover') approach using three different land cover (LC) products is employed in Finland. Clearly, most of these approaches focus on forest mapping - a gap exists in urban tree cover estimation in developing countries with low resolution imagery. In this paper, We propose a multi-spectral classifier (that uses a mixture of spectral bands _and_ indices) for tree cover estimation in urban areas. The key aspects of the proposed classifier include a masking stage for filtering out built-up areas, followed by a random forest classifier operating on appropriately selected features. For performance evaluation, we manually annotate \(3768\) trees in Lahore, Pakistan1. 
We demonstrate that on account of suitable feature selection and masking mechanism, our proposed approach applied to low resolution imagery achieves a higher accuracy level compared to that obtained by the European Space Agency (ESA) WorldCover product [6] as well as a more computationally demanding deep learning architecture DeepLabv3 [7] applied on high-resolution imagery. Footnote 1: This dataset is being made publicly available at [https://city.lums.edu.pk/products/](https://city.lums.edu.pk/products/) The subsequent sections of this paper are structured as follows: Section \(2\) delves into a comprehensive analysis of the methodology, while Section \(3\) showcases the evaluation results comparing our proposed methodology with state-of-the-art models. Finally, Section 4 concludes the paper. ## 2 Methodology The proposed methodology, illustrated in Fig. 1, consists of four stages. These include 1) Pre-processing, 2) Feature selection, 3) Masking, and finally 4) Random Forest Classification. The details of each stage are provided in the text below. ### Pre-processing We divide the pre-processing of data into multiple steps. Initially, the images from a multi-spectral satellite containing less than \(10\%\) cloud cover for the region of interest (LUMS) are passed through a cloud masking operation that removes cloud cover from these images. Next a median of these images is taken for each month. Finally multiple images are stacked together to generate a single combined image of the region of interest. ### Feature selection For classification, we included eight bands of Sentinel-2 imagery as the feature set. These include B2 (Blue), B3 (Green), B4 (Red), B7 (Red Edge 3), B8 (NIR), B8A (Red Edge 4), B11 (SWIR 1) and B12 (SWIR 2). In addition, We also chose six spectral indices in our feature set. These include the Normalized Difference Vegetation Index (NDVI) [8], Enhanced Vegetation Index (EVI) [9], Normalized Difference Built-up Index (NDBI) [10], Normalized Difference Moisture Index (NDMI) [11], Leaf Area Index (LAI) [12] and Soil Adjusted Vegetation Index (SAVI) [13]. In general, regions with tree cover typically exhibit high vegetation indices (EVI, NDVI), NDMI, LAI, and SAVI, while showing notably low values for NDBI. Some background about these indicies is given below. **NDVI**: This index [8] describes the difference between visible and near-infrared reflectance of vegetation cover and can be used to estimate the density of green on an area of land. This is computed from the the NIR and the Red bands measurements as follows \[\text{NDVI}=\frac{\text{NIR}-\text{Red}}{\text{NIR}+\text{Red}} \tag{1}\] **EVI and LAI:** EVI [9] is similar to NDVI and can be used to quantify greenness of vegetation. However, EVI corrects for some atmospheric conditions and canopy background noise and is more sensitive in areas with dense vegetation. It is computed as \[\text{EVI}=2.5\times\frac{\text{NIR}-\text{Red}}{\text{NIR}+6\times\text{ Red}-7.5\times\text{Blue}+1} \tag{2}\] On the other hand, LAI [12] is used to estimate crop growth and yield through the following empirical formula \[\text{LAI}=3.618\times\text{EVI}-0.118 \tag{3}\] **SAVI**: This index [13] attempts to minimize soil brightness influences using a soil-brightness correction factor. 
This is often used in arid regions where vegetative cover is low, and it outputs values between \(-1\) and \(1\) through the following relationship \[\text{SAVI}=\frac{0.5\times(\text{NIR}-\text{Red})}{\text{NIR}+\text{Red}+0.5} \tag{4}\] **NDWI**: This is a satellite-derived index [14] from the NIR and the SWIR channels. The NDWI is used to monitor changes related to water content in water bodies as they strongly absorb light in visible to infrared electromagnetic spectrum. \[\text{NDWI}=\frac{\text{NIR}-\text{SWIR1}}{\text{NIR}+\text{SWIR1}} \tag{5}\] **NDBI:** This index [15] uses the NIR and SWIR bands to emphasize manufactured built-up areas. It aims to mitigate \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **ROI** & **Model** & **Pred. area (acres)** & **Masking** & **Spectral indices** & **Pixel-wise Test Accuracy (\%)** & Kappa Score \\ \hline LUMS & RF-spectral-bands & 29.5 & No & No & 0.93 & 0.81 \\ \hline LUMS & RF-spectral-indices & 28 & No & Yes & 0.95 & 0.88 \\ \hline LUMS & **Proposed** & 25 & Yes & Yes & **0.99** & **0.92** \\ \hline \hline LUMS & ESA WorldCover Product [6] & 16 & - & - & 0.74 & - \\ LUMS & DeepLabv3 [7] & 28 & No & No & 0.80 & - \\ \hline \end{tabular} \end{table} Table 1: Tree cover area is predicted using Sentinel-2 imagery and RF classifier by employing various feature selection techniques. The Lahore University of Management Sciences (LUMS) study region spans \(82.165\) acres of area with \(23\) acres area of actual tree cover. Figure 1: Proposed methodology for tree cover estimation with feature selection of spectral bands and indices. the effects of terrain illumination differences as well as atmospheric effects. \[\text{NDBI}=\frac{\text{SWIR}-\text{NIR2}}{\text{SWIR}+\text{NIR2}} \tag{6}\] ### Masking Masking process involves the following two steps. _Applying the Vegetation Index._ The EVI or NDVI values are calculated for each pixel in the satellite imagery. These values indicate the presence and density of vegetation. In this case, a threshold of 0.2 is set, implying that any pixel with an EVI or NDVI value equal to or below 0.2 is considered non-vegetated or sparsely vegetated. Such regions are likely to include built-up areas, as they have less vegetation cover. _Utilizing the Built-up Index._ Simultaneously, the NDBI values are computed for each pixel. This index highlights the presence and extent of built-up areas. High positive NDBI values indicate the dominance of built-up surfaces, while low or negative values represent non-built-up or natural areas. By combining the results of both the vegetation index and built-up index, the filtering process identifies and excludes pixels with low vegetation (pixels for which both EVI and NDVI are less than or equal to 0.2) _and_ high built-up signatures (pixels that have positive NDBI values). ### Random Forest (RF) Classification The masking operation described above aims to retain only the non-built-up or natural regions for input to the classification module. For the purpose, we utilize an RF classifier which is an example of ensemble learning where each model is a decision tree. Ensemble learning creates a stronger model by aggregating the predictions of multiple weak models, such as decision trees in our case. To train the RF classifier, we need to have at least two classes. 
We combine multiple sample points along with their corresponding class labels (representing trees or non-trees), divide the samples into an 80% training set and a 20% validation set, train a random forest classifier with the features described above, and then use the trained classifier to classify the input image. In the process of an RF model training, the user defines the number of features at each node in order to generate a tree. The classification of a new dataset is done by passing down each case of the dataset to each of the grown trees, then the forest chooses a class having the most votes of the trees for that case. More details on RF can be found in Breiman [16]. The main motivation behind choosing RF for this study is its ability to efficiently handle large and high dimensional datasets [17, 18]. ## 3 Evaluation The proposed methodology is applied to satellite imagery for the year 2021 and its performance compared to other benchmarks is shown in Table 1. As the results indicate, RF with all multi-spectral bands performs better than the ESA World-Cover product [6] and DeepLabv3 [7]. RF with spectral indices achieve higher accuracy as compared to RF with only spectral bands. Finally, the proposed model accomplishes higher pixel-wise accuracy and Kappa score as compared to all other models (see Table 2). Results indicating the effect of feature selection with the proposed methodology are provided in Table 2. Clearly, as the feature selection set increases, the pixel-wise accuracy and Kappa score increases. It implies pixel-wise accuracy is directly proportional to our feature selection. We choose the Kappa coefficient as a performance metric because it represents the extent to which classes on the ground are correct representations of the classes on the map. Finally, qualitative results are illustrated in Fig. 2. The ground truth tree cover of LUMS study region is \(23\) acres while the predicted area using the proposed model is \(25\) acres. It is important to note that our proposed model operating on low resolution imagery with suitable feature selection and masking operations performs better than a DeepLabv3 [7] \begin{table} \begin{tabular}{|c|c|c|} \hline **(RF + Masking +) Features set** & **Pixel-wise Test Accuracy (\%)** & Kappa Score \\ \hline Eight multispectral bands + NDVI & 0.96 & 0.76 \\ \hline Eight multispectral bands + NDVI + NDWI + NDBI + EVI & 0.97 & 0.80 \\ \hline Eight multispectral bands \& All spectral indices & **0.99** & **0.92** \\ \hline \end{tabular} \end{table} Table 2: Pixel-wise accuracy and Kappa score of proposed model with different feature set on LUMS study region. Figure 2: Qualitative results using feature selection on Sentinel-2 multi-spectral imagery for efficient tree cover estimation. deep learning architecture trained on high resolution imagery despite the computational complexity of the former being extremely low compared to the latter. ## 4 Conclusion The paper proposes a methodology for estimating urban tree cover using RF classification with appropriately selected multispectral features and masking. The proposed methodology exhibits superior performance compared to classical RF classifiers that solely utilize spectral bands, as well as surpassing state-of-the-art models such as the European Space Agency (ESA) WorldCover \(10\)m \(2020\) product [6] and DeepLabv3 [7] deep learning architecture trained on high resolution imagery. Our future work aims to apply the proposed technique to estimate tree cover across entire cities in Pakistan.
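For completeness, the classification stage described in Sec. 2.4 can be sketched with scikit-learn as follows; the 80/20 split and the tree/non-tree classes follow the text, while the sample data here is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# In practice, each row is a labeled sample point inside the mask and the
# columns are the eight Sentinel-2 bands plus the six spectral indices.
features = rng.random((500, 14))
labels = rng.integers(0, 2, 500)   # 1 = tree, 0 = non-tree (synthetic here)

X_train, X_val, y_train, y_val = train_test_split(
    features, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("pixel-wise validation accuracy:", clf.score(X_val, y_val))

# The trained model is then applied to every pixel retained by the mask, e.g.:
# tree_map = clf.predict(masked_pixel_features)
```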
This paper proposes a multispectral random forest classifier with suitable feature selection and masking for estimating tree cover in urban areas. A key feature of the proposed classifier is that built-up regions are first filtered out using spectral indices, followed by random forest classification performed on the remaining mask with carefully selected features. Using Sentinel-2 satellite imagery, we evaluate the performance of the proposed technique on a specific area (approximately 82 acres) of the Lahore University of Management Sciences (LUMS) and show that our method outperforms a conventional random forest classifier, the European Space Agency (ESA) WorldCover 10m 2020 product, and the DeepLabv3 deep learning architecture.
2309.08803
Robust Indoor Localization with Ranging-IMU Fusion
Indoor wireless ranging localization is a promising approach for low-power and high-accuracy localization of wearable devices. A primary challenge in this domain stems from non-line of sight propagation of radio waves. This study tackles a fundamental issue in wireless ranging: the unpredictability of real-time multipath determination, especially in challenging conditions such as when there is no direct line of sight. We achieve this by fusing range measurements with inertial measurements obtained from a low cost Inertial Measurement Unit (IMU). For this purpose, we introduce a novel asymmetric noise model crafted specifically for non-Gaussian multipath disturbances. Additionally, we present a novel Levenberg-Marquardt (LM)-family trust-region adaptation of the iSAM2 fusion algorithm, which is optimized for robust performance for our ranging-IMU fusion problem. We evaluate our solution in a densely occupied real office environment. Our proposed solution can achieve temporally consistent localization with an average absolute accuracy of $\sim$0.3m in real-world settings. Furthermore, our results indicate that we can achieve comparable accuracy even with infrequent (1Hz) range measurements.
Fan Jiang, David Caruso, Ashutosh Dhekne, Qi Qu, Jakob Julian Engel, Jing Dong
2023-09-15T22:54:06
http://arxiv.org/abs/2309.08803v1
# Robust Indoor Localization with Ranging-IMU Fusion ###### Abstract Indoor wireless ranging localization is a promising approach for low-power and high-accuracy localization of wearable devices. A primary challenge in this domain stems from non-line of sight propagation of radio waves. This study tackles a fundamental issue in wireless ranging: the unpredictability of real-time multipath determination, especially in challenging conditions such as when there is no direct line of sight. We achieve this by fusing range measurements with inertial measurements obtained from a low cost Inertial Measurement Unit (IMU). For this purpose, we introduce a novel asymmetric noise model crafted specifically for non-Gaussian multipath disturbances. Additionally, we present a novel Levenberg-Marquardt (LM)-family trust-region adaptation of the iSAM2 fusion algorithm, which is optimized for robust performance for our ranging-IMU fusion problem. We evaluate our solution in a densely occupied real office environment. Our proposed solution can achieve temporally consistent localization with an average absolute accuracy of \(\sim\)0.3m in real-world settings. Furthermore, our results indicate that we can achieve comparable accuracy even with infrequent range measurements down to 1Hz. ## I Introduction Indoor localization is a core requirement for virtual reality/augmented reality (VR/AR) devices and robots. Traditionally, accurate indoor 6 Degrees-of-Freedom (DoF) localization is performed using Visual-Inertial Odometry (VIO) [22], and visual Simultaneous Localization and Mapping (SLAM) [19, 26, 23], in which camera images are the main source of information. However, exposing raw images of the surrounding environment to localization algorithms has privacy implications, and capturing and processing a large amount of raw image pixels lead to significant energy consumption and heat dissipation, which is not ideal for small form factor wearable devices. In recent years, wireless ranging based indoor localization has attracted significant research attention, due to its good accuracy and low power consumption. For example, the average ranging power of an Ultra-Wide Band (UWB) system could be as low as 0.14mW [25], assuming 1Hz ranging frequency (2.5ms per ranging, 55mW average power during ranging), which fits well within the power envelope of wearable devices. While promising, wireless ranging is not free of issues. In particular, non-line of sight (NLOS) propagation of wireless signals, where radios waves undergo reflections from surrounding objects, can deteriorate ranging accuracy. In indoor environments cluttered with objects, NLOS measurements can become dominant, making conventional Gaussian noise models ill-suited. Even more complex scenarios arise when a user lacks line-of-sight to any localization transceivers, a situation frequently encountered in multi-room setups. Various hardware and signal processing methodologies have been proposed by the wireless community to mitigate multipath in ranging measurements. Notable strategies include using Channel Impulse Response (CIR) [14], multiple antennas (Multi-Input-Multi-Output, MIMO) [36], beamforming [28] and angle-of-arrival [36]. On the other hand, the state estimation community tries to address the same problem by fusing together wireless ranging measurements with other sensors, to reduce ambiguity in multi-path determination during measurements [27, 4, 5]. 
Depending on the current estimate's uncertainty, sufficient data may not be available for identifying NLOS measurement causally at all time, so the system needs to keep a long horizon of previous nonlinear information and correct past states as necessary. Such a requirement aligns seamlessly with incremental smoothing techniques like [15]. In this work, we present a novel solution to the ranging-IMU fusion problem. Our main contributions are: 1. Designed, built and evaluated a system that can robustly localize a custom device composed of an IMU and UWB receiver in real-time, even in challenging environments. 2. A novel asymmetric \(m\)-Estimator for wireless Time-of-Flight (ToF) ranging measurement, to correctly model non-Gaussian multi-path effects; 3. A practical way to improve the numerical stability of Fig. 1: Localizing in a cluttered, real office environment with sparsely populated anchors using proposed approach, with only UWB ranging and IMU sensor data. The average/max trajectory error are 0.3/1.0 meters. Green: ground truth trajectory. Blue: result trajectory. Red: UWB anchors. Note while the solution quality degrades in the corridor, due to very few line-of-sight ranging measurements in the corridor, it is able to recover once enough measurements are available. iSAM2, with presence of strong local non-linearity, without using complex trust region update strategies like Dogle; We demonstrate that even with single-sided two-way ranging (SS-TWR) and sparsely placed UWB anchors we can achieve consistent indoor localization with range measurements with a mean absolute accuracy of \(\sim\)0.3m, and maintain reasonable location accuracy even with 1Hz measurements (see Figure 1). The rest of the paper is organized as follows: we discuss related work in Sec. II followed by the model used for the sensors in Sec. III. In particular, it includes the model used for wireless ranging measurements: the main contribution of the paper. The improvement of the robustness of iSAM2, is described in Sec. IV, which forms the second contribution of this work. We finally discuss the implementation details of our prototype and our evaluation results in Sec. VI. ## II Related Work ### _Related Work in Wireless Signal Processing_ Many methods have been proposed to reduce the effect to multipath and improve the probability of detecting the "first path" (the shortest propagation path) in complex indoor environments, for example with neural networks [34] and with signal processing techniques [16], sometimes even with multiple antennae (Multi-Input-Multi-Output, MIMO) [36]. A notable strategy utilizes CIR of received signals to discern the direct-path measurements [14, 2], where each "peak" in the energy part of the CIR corresponds to a different propagation path of the original transmitted signal in the space between the transmitter and the receiver. The idea is to use the CIR to identify the first path in environments with complex multipath effects. However, this approach is not robust in real world applications since path resolution below the limitations imposed by the wireless signal's bandwidth is challenging. The CIR often provides ambiguous information without knowing the detailed environment structure. For example, when the direct path is blocked by walls, or has very low signal strength, any path identified by these approaches will lead to wrong range measurements. Efforts using signal processing to identify such edge cases can lead to excessive computation and diminishing returns [17]. 
While techniques like beamforming [28] and angle-of-arrival [36] provide supplementary insights, they often need additional hardware, and yet the fundamental challenges remain. ### _Related Work in State Estimation_ Existing work try to solve multipath problems with sensor fusion, where the ranging measurements are fused with a interoceptive motion sensor, usually an Inertial Measurement Unit (IMU). Some of the first works in this space use Extended Kalman Filter (EKF) and prediction error based outlier rejection ([27, 4, 5]). After the proposal of the pre-integration technique [20], researchers have used smoothing and mapping techniques, for example factor graph optimization with iSAM2 [33]. However, filtering approaches using IMU fusion without utilizing past data can only identify sparse outliers in range measurements. Such techniques cannot recover when valid ranging measurements are absent for significant time, since prediction is likely to drift away, worsening the problem for future valid measurements. One possible way of modeling the effect of non-line of sight measurement is to capture the inlier/outlier ambiguity explicitly by using binary discrete variables in a _hybrid_ factor graph. When incremental inference is required, explicit incremental solvers such as iMHS [13], NF-iSAM [12], or MH-iSAM2 [10] can be used. However, such hybrid modeling require mixed-integer program (MIP) solvers or Expectation-Maximization (EM) style solvers [7], whose cost could be prohibitive in real-time applications. Another common way to model outliers is _m_-Estimator. \(m\)-Estimators have already been shown to map directly to E-M methods [18] in the continuous formulation. The _m_-Estimator for Cauchy distributions can be attributed to Barnett's 1966 work [1]. The frequently referenced article on \(m\)-Estimators [35], makes an assertion--without specific citation--that the Cauchy \(m\)-Estimator often produces incorrect results without a means of verifying their accuracy. While this assertion holds merit, it somewhat oversimplifies Barnett's original proposition. Authors of [1] specifically postulated that any local approach resembling Newtonian methods has the possibility of being unsuccessful, however such instances might be infrequent in practice. Moreover, he emphasized that evading local minima is unfeasible without a comprehensive exploration of the likelihood function--a statement universally applicable to all gradient-driven local methodologies. Our proposed approach is based on \(m\)-Estimators, and we will demonstrate with our modified half-Cauchy \(m\)-Estimator, our approach delivers promising results. ## III Ranging-IMU Measurement Model ### _IMU Model_ We use a typical IMU model. 
Assuming zero noise, known initial conditions, and a flat-earth approximation, the simplified strapdown mechanization equations are given by: \[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{R}_{t}=\mathbf{R}_{t}[\mathbf{\omega}_{t}]_{\times}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{V}_{t}=\mathbf{g}+\mathbf{R}_{t}\mathbf{a}_{t} \tag{1}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{X}_{t}=\mathbf{V}_{t}\] where \(\mathbf{R}_{t}\) is the rotation matrix, \(\mathbf{V}_{t}\) the body velocity in the earth frame, \(\mathbf{X}_{t}\) the translation vector, \(\mathbf{g}\) the gravity vector in the earth frame, \(\mathbf{\omega}_{t}\) the current angular velocity in the IMU frame as measured by the gyroscope, and \(\mathbf{a}_{t}\) the current non-gravitational acceleration in the IMU frame as measured by the accelerometer. After applying a factory calibration to the IMU signal, we assume the rectified signal to be polluted by Gaussian noise and slowly varying biases: three components for the accelerometer bias \(\mathbf{b}_{t}^{a}\) and three for the gyroscope bias \(\mathbf{b}_{t}^{g}\). To model the latter, we chose a stochastic first-order Gauss-Markov random process described in [3] (Section 5.2.4): \[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{b}_{t}=-\frac{1}{\tau_{\text{bias}}}\mathbf{b}_{t}+w_{\text{bias}}(t) \tag{2}\] where \(\mathbf{b}_{t}=[\mathbf{b}_{t}^{a},\mathbf{b}_{t}^{g}]^{\mathrm{T}}\) denotes both bias components of the IMU, \(\tau_{\text{bias}}\) is a correlation time constant, and \(w_{\text{bias}}\) is a random variable following a centered Gaussian distribution with (diagonal) covariance \(\Sigma_{w_{\text{bias}}}\). The parameters of the model were derived from a recorded Allan variance analysis and slightly inflated to increase robustness to unmodeled effects. This IMU model is leveraged with two types of factors in our optimization-based smoother. The propagation constraint uses the _preintegration_ technique described in [20, 8], which allows us to easily write the likelihood of the preintegration measurement as: \[L_{\text{preint}}=\exp\left(-\frac{1}{2}\left\|r_{\text{IMU}}\left((\mathbf{RVX})_{t+1},(\mathbf{RVX})_{t},\mathbf{b}_{t}\right)\right\|_{\Sigma_{\text{IMU}}}^{2}\right) \tag{3}\] where the residual \(r_{\text{IMU}}\) and its covariance \(\Sigma_{\text{IMU}}\) are defined by Eq. 37 and Eq. 35 in [8], respectively, substituting \(i\) by \(t\) and \(j\) by \(t+1\). The bias constraints are written as a binary factor between consecutive bias estimates, which represents the likelihood: \[L_{\text{bias}}=\exp\left(-\frac{1}{2}\left\|\mathbf{b}_{t+1}-e^{-\frac{\Delta t_{t,t+1}}{\tau_{\text{bias}}}}\mathbf{b}_{t}\right\|_{\Sigma_{w_{\text{bias}}}}^{2}\right) \tag{4}\] ### _Ranging Measurement Model_ The wireless ranging measurement factor's likelihood is modeled with a mixture of two probability distributions. One distribution is for line-of-sight measurements, where the noise mainly comes from the inaccuracies of the First Path Estimation (FPE) algorithm. This error is mainly caused by inaccuracies in the radio hardware, as well as the limited energy and bandwidth of the transmitted radio signal, and it appears in the range measurement as Gaussian-like noise with a \(\sigma\) of about 0.1-0.2m [31]. The other distribution is for the NLOS measurements, which follow an **environment-dependent** unknown distribution: the error could be as large as twice the actual distance, or small enough to be indistinguishable from the Gaussian measurement noise. 
\[L_{\text{range}}(r|\bar{r},m)\sim\left\{\begin{array}{ll}\mathcal{N}(\bar{r };\sigma_{r}),&m=0\\ L_{\text{range}}(r|\bar{r},m=1),&m=1\end{array}\right. \tag{5}\] where \(r\) is the measured range \(r=\left\|\mathbf{X}_{i}-\mathbf{A}_{j}\right\|_{2}\) where \(\mathbf{A}_{j}\) is the anchor \(j\)'s 3D position, \(\bar{r}\) is the true range, \(\sigma_{r}\) is the range variance, and \(m=\{0,1\}\) is the discrete variable that indicates if the measurement is a direct path measurement or not, with \(m=0\) indicates the range measurement is direct, while \(m=1\) indicates it is not, which we will dwell on next. Existing work [29] factorizes the NLOS distribution into two distributions with a convolution of the Gaussian and an exponential distribution, which is the maximum entropy distribution supported on \([0,+\infty)\) with some statistical moment, with a sample mean and variance obtained by simulation of the ranging process in a simulator. However, in wireless ranging problems we cannot apply the same model as in [29], since the environment is unknown and hence we have little information about \(p(r|\bar{r},m=1)\), Instead, as the first contribution of this paper, we model the NLOS measurements with a half-Cauchy distribution, in line with the maximum entropy principle, with location \(\theta=0\) and scale \(\gamma\), supported also on \([0,+\infty)\): \[p(r^{\prime}-\bar{r}|m=1)\sim\frac{2}{\pi\gamma}\frac{1}{1+((r^{\prime}-r)/ \gamma)^{2}} \tag{6}\] where \(r^{\prime}\) is the measured range, \(\bar{r}\) the true range. The unknown scale parameter \(\gamma\) can be either determined by parameter estimation techniques using real or simulated data, or simply derived using the Interquartile Range of the (Gaussian) ranging noise heuristically. With this NLOS model, the observed range model for \(m=1\) is then (sum of independent variables is convolution) \[L_{\text{range}}(r|\bar{r},m=1)=\int_{-\infty}^{+\infty}p(r|r^{\prime})p(r^{ \prime}-\bar{r}|m=1)\mathrm{d}r^{\prime} \tag{7}\] which is shown in Fig. 2a. The physical intuition behind the use of a single-sided distribution for NLOS measurements, is that the true range is the shortest path in the space, and NLOS measurements are necessarily longer than the shortest path between anchor and receiver. In Fig. 3 we show the evolution of the UWB ranging measurement error distribution across trajectories, using the ground-truth ranges computed through ground truth trajectories and anchor positions. The x axis is time elapsed, y axis is the range error in meters, and the color represents the distribution density. It is apparent that the distribution is single-sided, with occasional large outliers, and frequent smaller outliers. It is important to note that while the marginal distribution for \(r-\bar{r}\) may seem light-tailed, the actual distribution considering the hidden variable, the current surroundings of the user, is not. This can be observed on Fig. 3, where persistent NLOS measurements dominate in some time slices. This Fig. 2: Qualitative comparison of using (a) a combined marginal model and (b) a mathematically simpler half-Cauchy model for the multipath. Note the less complex model does not significantly change the decision point. explains why we need to use a half-Cauchy distribution without priors on \(m\), instead of just using the measured range marginal density like in [29]. 
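To make the asymmetry of this model tangible, the following NumPy sketch evaluates such a one-sided mixture density. The half-Cauchy scale and the mixing weight are illustrative values, not the paper's calibrated parameters.

```python
import numpy as np

SIGMA_R = 0.15   # LOS ranging noise sigma in meters (0.1-0.2 m per the text)
GAMMA = 0.5      # half-Cauchy scale gamma; an illustrative value

def gaussian_pdf(e, sigma=SIGMA_R):
    """LOS component: zero-mean Gaussian over the range error e = r - r_bar."""
    return np.exp(-0.5 * (e / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def half_cauchy_pdf(e, gamma=GAMMA):
    """NLOS component (Eq. 6): supported only on e >= 0, with a heavy right tail."""
    return np.where(e >= 0, 2.0 / (np.pi * gamma) / (1.0 + (e / gamma) ** 2), 0.0)

def mixture_pdf(e, p_nlos=0.3):
    """One-sided mixture: negative errors can only come from the Gaussian,
    while large positive errors are explained by the NLOS tail (cf. Fig. 2)."""
    return (1.0 - p_nlos) * gaussian_pdf(e) + p_nlos * half_cauchy_pdf(e)

errors = np.linspace(-1.0, 5.0, 601)   # range error r - r_bar, in meters
density = mixture_pdf(errors)
```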
Other than considering explicit hybrid factor graph inferences to solve discrete variable \(m\), we use an _implicit_ approach to solve the range-IMU fusion problems, based on the \(m\)-Estimators to simplify the inference process. Since the decision boundary (where \(m=0\) or \(m=1\) is more likely) is the same, and the p.d.f. of both the marginal and the half-Cauchy distribution is very close after the decision boundary, we can use the half-Cauchy distribution directly as the mixture component (Fig. 2b). The IMU and ranging model presented are used in a factor graph formulation. An example factor graph is shown in Fig. 4. The states include the devices pose + velocity + IMU bias at each time point when we receive a range measurement, and also positions of all rang-able anchors. The factors include the IMU pre-integration factors and ranging factors we just introduced, plus prior factors on first IMU bias with factory calibration, and all anchors with pre-mapped positions. ## IV Trust-Region Variant of iSAM2 A known issue of using iSAM2 framework in incremental inference is the occurrence of indeterminant linear systems [11]. This is primarily because iSAM2 internally uses a Gauss-Newton like update for solving the nonlinear least-squares problem, which is not robust to ill-conditioned problems. While existing trust-region based methods like RISE [30] have been shown to perform well in regular SLAM problems, its convergence has not been validated with switchable methods like ours whose continuous error could change with the discrete decision variable. Same as previously reported [21], we observed that RISE does not work when the radius of the trust region changes. This leads to a performance similar to Gauss-Newton (vanilla iSAM2), which similarly fails to achieve optimal non-linear updates, resulting in worse performance in our application. In contrast to the Dogleg-like algorithm proposed by [21] (which is concurrent to our work), as the second contribution of this paper, we propose **D-iSAM2**, a simpler trust-region method, which works well in our real-world experiments, and only requires minimal changes in iSAM2. This method shares the same core idea with the Levenberg-Marquadt (L-M) algorithm, whose linear update (in the simplest form) is calculated from \[(J^{\mathrm{T}}J+\lambda I)\delta=J^{\mathrm{T}}z \tag{8}\] where \(J\) is the Jacobian, \(\lambda\) the damping factor, \(\delta\) the linear increment, \(z\) the linear error vector. This linear problem effectively is equivalent to solving a nonlinear factor graph with the following Jacobian structure \[\left[\begin{array}{cc}J^{\mathrm{T}}&\lambda^{1/2}I\end{array}\right]\left[ \begin{array}{c}J\\ \lambda^{1/2}I\end{array}\right]\delta=\left[\begin{array}{cc}J&\lambda^{1/2 }I\end{array}\right]\left[\begin{array}{c}z\\ 0\end{array}\right] \tag{9}\] which is equivalent to the original graph with a _special_ factor on each variable where the factor always has an error function of value \(0\), but a Jacobian of \(\lambda^{1/2}I\). When the trust region is an ellipse, \(\lambda I\) can be replaced by a diagonal of \(n\) lambdas, \(\Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{n})\). When new information is added, the algorithm only needs to check whether the current \(\lambda\) is valid, by checking if the error decreases after the iSAM2 step with the special factor added. If not, increase \(\lambda\), and if yes, decrease \(\lambda\), then repeat the process. 
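The equivalence between the damped normal equations (8) and the augmented system (9) — one zero-error "special factor" with Jacobian \(\lambda^{1/2}I\) per variable — can be checked numerically on a toy problem; the dimensions and values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((8, 3))     # Jacobian of a toy linearized problem
z = rng.standard_normal(8)          # linearized error vector
lam = 0.1                           # damping factor

# Damped normal equations, Eq. (8)
delta_lm = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ z)

# Augmented least-squares system, Eq. (9): append one zero-error "special
# factor" per variable with Jacobian sqrt(lambda) * I
J_aug = np.vstack([J, np.sqrt(lam) * np.eye(3)])
z_aug = np.concatenate([z, np.zeros(3)])
delta_aug, *_ = np.linalg.lstsq(J_aug, z_aug, rcond=None)

assert np.allclose(delta_lm, delta_aug)   # identical increments
```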
We refer the interested reader to [24] where the schedule for the \(\lambda\) modifications have been extensively covered. ``` 0:Bayes Tree \(\mathcal{T}\), current estimate \(\mathcal{X}\), new factors \(\mathcal{F}\)\(=\{\phi_{i}\}\), new variable initial estimates \(\{x_{i}\}\), current \(\lambda\) convergence indicator \(c\leftarrow\) False while not converged \(c\)do \(\mathcal{F}^{\prime}\leftarrow\) AddSpecialFactor(\(\mathcal{F}\), \(\lambda\)) \(\mathcal{T}^{\prime},\mathcal{X}^{\prime}\leftarrow\) iSAM2Update(\(\mathcal{T},\mathcal{F}^{\prime},\mathcal{X},\{x_{i}\}\)) if nonlinear error decreasedthen \(c\leftarrow\) True Decrease \(\lambda\) with schedule \(\triangleright\) until \(\lambda_{\text{min}}\) else Increase \(\lambda\) with schedule if \(\lambda=\lambda_{\text{max}}\) throw error endif endwhile new Bayes Tree \(\mathcal{T}^{\prime}\), new estimate \(\mathcal{X}^{\prime}\) ``` **Algorithm 1** D-iSAM2 Algorithm Note that the \(\lambda_{x_{i}}\) for an old variable \(x_{i}\) will not change after a successful D-iSAM2 step. This is intentional, since it Fig. 4: An example factor graph of the ranging-IMU fusion. Fig. 3: Range measurement error distribution. Top: marginal over all trajectories seems light-tailed. Bottom: heat-map over time for run 1 and 2 of Table I. Note distribution becomes fat tailed in a lot of places, for example \(t\sim 45s\) on the left, and \(t\sim 80s\) on the right. Color indicates the value of the probability density function at time \(t\). is very costly to update all old Bayes tree nodes for a new set of \(\{\lambda_{x_{n}}\}\), since all nodes will need to be re-factorized. However, since we operate incrementally, the last added variable is the most numerically challenging variable. Hence, only changing the trust region radius for the last variable is sufficient. The whole algorithm is described in Alg. 1. Changing the \(\lambda\) for past variables, which we explicitly choose not to do, is a "global" process similar to full relinearization in iSAM2. The process impacts every node in the factor graph, hence could be detrimental to the overall performance if done in a naive way. However, this may be required in some problems more difficult than ours. Also, child nodes can be marginalized once they are too old for the current estimate, like in a fixed lag smoother. Since this is not the primary aim of this paper, we will leave how to further optimize this trust-region based robust incremental optimization method for a future work. ## V Implementation The overall system diagram is shown in Figure 5. Input sensor data of the system include an IMU operated at high frequency, and a UWB transceiver that executes SS-TWR range measurements from fixed anchors in the space. During the initialization phase, a joint of factor graph of ranging data and preintegration of the IMU data is built for 10 seconds, and a batch optimization of the factor graph is performed to obtain initial values of the system states, to initialize an iSAM2 estimator. After initialization, the raw ranging measurements and preintegrations are directly fed into iSAM2 to get UWB-frequency estimation, and IMU-frequency estimation can be future obtained by extrapolation using IMU measurements. We implemented the wireless hardware using the Delaware DWM3000 module with ESP32-S3 as the main microcontroller. For the ranging protocol, we use a simple SS-TWR ranging scheme, where the device sends 1 packet to the anchor being ranged against and receive 1 packet with timestamp. 
We use SS-TWR because it possesses similar noise characteristics with Wi-Fi Fine Time Measurement (FTM), which enables our method to also apply to Wi-Fi localization. The IMU stream is directly recorded off the left IMU of the Project Aria device [32], which is a factory calibrated BMI263 from Bosch operating at 800Hz. The IMU stream is time synchronized with the UWB data, through a hardware time synchronization link between the UWB module and Project Aria device. The fusion algorithm is implemented with C++ using the GTSAM [6] library using the preintegrated IMU factor of Sec. III-A. All evaluations are conducted on a Macbook Pro with an Apple M1 Pro chip. ## VI Evaluation We evaluate the proposed system via experiments in a typical 30 by 50 meters office environment. Ground truth device trajectory is obtained with Project Aria Machine Perception Service [32], using Project Aria device's collected data as input. UWB anchors are co-located in the the ground truth trajectory frame of reference using 2D fiducials, with \(<1\,\mathrm{cm}\) accuracy. We evaluate the performance of our approach over 7 runs, each of which is a walk 1-3 minutes in duration and 50-130 meters long in travelled distance. The resulting trajectory is evaluated using the Evo library [9]. Ranging frequency of each anchor is set at maximum 10Hz, and the receiver is configured to output measurements from up to 4 UWB anchors which have strongest signal strength, so the cumulative received ranging measurements are up to 40Hz. We will use 40Hz ranging frequency in Section VI-A, and explore lower frequency in VI-B. ### _Evaluation of proposed ranging model: 40Hz_ We evaluate our proposed asymmetric model against Gaussian, also standard \(m\)-Estimators Cauchy and Huber, with the proposed D-iSAM2 estimator. The resulting metrics Fig. 5: System setup of our inertial-ranging fusion system. White blocks are algorithm, yellow blocks are hardware, and blue blocks are data. Fig. 6: Prototype hardware built, with a wireless module attached to a Project Aria device [32]. are shown in Table I. Our implicit hybrid method with asymmetric model beats all baselines in most runs, in both average and maximum trajectory errors. ### _Evaluation of proposed ranging model: lower frequency_ Reducing the ranging frequency will enable localization with lower power, at the cost of increased difficulty of outlier identification because open-loop IMU integration's accuracy degrades quickly over time. We still use D-iSAM2 as incremental estimator, and keep all other parameters same as previous section. In Table II we pushed our evaluation to only using 1Hz ranging measurements. We find using Huber model causes complete estimation divergence on most of the sequences, due to Huber loss function identifies the true line-of-sight measurements as NLOS measurements. The Gaussian model, effectively treating all measurements as inliers, does not fail, but also leads to higher trajectory error. In Figure 7 we show the trajectory error results on a range of frequencies between 1Hz to 40Hz. We can find that our proposed model always have the best trajectory accuracy on all frequencies, and Huber is the second best except for its divergence observed at 1Hz. ### _Evaluation of proposed D-iSAM2_ We also compare our proposed D-iSAM2 estimator against the RISE [30] Dogleg optimizer. All parameters remain the same except the type of optimizer used, and ranging frequency is set to 40Hz. Initial delta for the Dogleg algorithm is set at 0.1. 
The runtime of each estimator is shown in Table III, and the quality of the optimized solutions in the lower part of Table I. While the solution quality of RISE matches that of our algorithm on some sequences, the runtime of RISE is significantly longer. D-iSAM2 achieves a runtime similar to vanilla iSAM2 while beating iSAM2 on trajectory accuracy by a large margin. ## VII Conclusion We demonstrate that our proposed asymmetric noise model handles real-world range noise better than existing \(m\)-Estimators, with ranging measurement rates as low as 1Hz, which paves the way toward accurate, energy-efficient indoor navigation using wireless ranging. We also show that our novel trust-region-based incremental solver effectively handles the nonlinear ranging-IMU fusion problem: in most cases, we achieve both better convergence and lower runtime than RISE and vanilla iSAM2. Finally, with the hardware and software system we built, we demonstrate an end-to-end system that can localize accurately in indoor spaces using only IMU and UWB ranging measurements, which has great potential for future low-power wearable devices. Fig. 7: Mean 3D APE over all datasets with different ranging frequencies. Huber fails to give results at 1Hz (high APE due to estimation divergence).
Indoor wireless ranging localization is a promising approach for low-power, high-accuracy localization of wearable devices. A primary challenge in this domain stems from non-line-of-sight propagation of radio waves. This study tackles a fundamental issue in wireless ranging: the unpredictability of real-time multipath determination, which becomes especially pronounced in challenging conditions such as when there is no direct line of sight. We address this by fusing range measurements with inertial measurements obtained from a low-cost Inertial Measurement Unit (IMU). For this purpose, we introduce a novel asymmetric noise model crafted specifically for non-Gaussian multipath disturbances. Additionally, we present a novel Levenberg-Marquardt (LM)-family trust-region adaptation of the iSAM2 fusion algorithm, which is optimized for robust performance on our ranging-IMU fusion problem. We evaluate our solution in a densely occupied real office environment, achieving temporally consistent localization with an average absolute accuracy of about 0.3 m, and comparable accuracy even with infrequent (1 Hz) range measurements.
2309.10094
Data Formulator: AI-powered Concept-driven Visualization Authoring
With most modern visualization tools, authors need to transform their data into tidy formats to create visualizations they want. Because this requires experience with programming or separate data processing tools, data transformation remains a barrier in visualization authoring. To address this challenge, we present a new visualization paradigm, concept binding, that separates high-level visualization intents and low-level data transformation steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an interactive visualization authoring tool. With Data Formulator, authors first define data concepts they plan to visualize using natural languages or examples, and then bind them to visual channels. Data Formulator then dispatches its AI-agent to automatically transform the input data to surface these concepts and generate desired visualizations. When presenting the results (transformed table and output visualizations) from the AI agent, Data Formulator provides feedback to help authors inspect and understand them. A user study with 10 participants shows that participants could learn and use Data Formulator to create visualizations that involve challenging data transformations, and presents interesting future research directions.
Chenglong Wang, John Thompson, Bongshin Lee
2023-09-18T19:06:29
http://arxiv.org/abs/2309.10094v2
# Data Formulator: AI-powered Concept-driven Visualization Authoring ###### Abstract With most modern visualization tools, authors need to transform their data into tidy formats to create visualizations they want. Because this requires experience with programming or separate data processing tools, data transformation remains a barrier in visualization authoring. To address this challenge, we present a new visualization paradigm, _concept binding_, that separates high-level visualization intents and low-level data transformation steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an interactive visualization authoring tool. With Data Formulator, authors first define _data concepts_ they plan to visualize using natural languages or examples, and then bind them to visual channels. Data Formulator then dispatches its AI-agent to automatically transform the input data to surface these concepts and generate desired visualizations. When presenting the results (transformed table and output visualizations) from the AI agent, Data Formulator provides feedback to help authors inspect and understand them. A user study with 10 participants shows that participants could learn and use Data Formulator to create visualizations that involve challenging data transformations, and presents interesting future research directions. AI, visualization authoring, data transformation, programming by example, natural language, large language model ## 1 Introduction Most modern visualization authoring tools (e.g., Charticulator [41], Data Illustrator [29], Lyra [44]) and libraries (e.g., ggplot2 [55], Vega-Lite [46]) expect tidy data [56], where every variable to be visualized is a column and each observation is a row. When the input data is in the tidy format, authors simply need to bind data columns to visual channels (e.g., Date \(\mapsto\)\(x\)-axis, Temperature \(\mapsto\)\(y\)-axis, City \(\mapsto\) color in Fig. 1). Otherwise, they need to prepare the data, even if the original data is clean and contains all information needed [3]. Authors usually rely on data transformation libraries (e.g., tidyverse [57], pandas [35]) or separate interactive tools (e.g., Wrangler [19]) to transform data into the appropriate format. However, authors need either programming experience or tool expertise to transform data, and they have to withstand the overhead of switching between visualization and data transformation steps. The challenge of data transformation remains a barrier in visualization authoring. To address the data transformation challenge, we explore a fundamentally different approach for visualization authoring, leveraging an AI agent. We separate the high-level visualization intent "_what to visualize_" from the low-level data transformation steps of "_how to format data to visualize_," and automate the latter to reduce the data transformation burden. Specifically, we support two key types of data transformations (and their combinations) needed for visualization authoring: * **Reshaping**: A variable to be visualized is spread across multiple columns or one column includes multiple variables. For example, if authors want to create a different scatter plot from the table in Fig. 1 by mapping Seattle and Atlanta temperatures to \(x,y\)-axes (Fig. 2-2), they need to first "pivot" the table from long to wide format, because both variables of interest are stored in the Temperature column and are not readily available. 
* **Derivation**: A variable needs to be extracted or derived from one or more existing columns. For example, if authors want to create a bar chart to show daily temperature differences between two cities (Fig. 2-2) and a histogram to count the number of days which city is warmer (Fig. 2-3), they need to derive the temperature difference and the name of the warmer city from the two cities' temperature columns, and map them to the \(y\)-axis and \(x\)-axis, respectively, and the city name to color channels of the corresponding charts. The derivation is also needed when the variable to be visualized requires analytical computation (e.g., aggregation, moving average, percentile) across multiple rows from a column in the table. For example, to plot a line chart to visualize the 7-day moving averages of Seattle temperatures (Fig. 2-3), the authors need to calculate the moving average using a window function and map it to \(y\)-axis with Date on \(x\)-axis. In this paper, we introduce Data Formulator, an interactive visualization authoring tool that embodies a new paradigm, _concept binding_. To create a visualization with Data Formulator, authors provide their visualization intent by binding data concepts to visual channels. Upon loading of a data table, existing data columns are provided as known data concepts. When the required data concepts are not available to author a given chart, the authors can create the concepts: either using natural language prompts (for derivation) or by providing examples (for reshaping). Data Formulator handles these two cases differently, with different styles of input and feedback, and we provide a detailed description of how they are handled in Sec. 2. Once the necessary data concepts are available, the authors can select a chart type (e.g., scatter plot, histogram) and map data concepts to desired visual channels. If needed, Data Formulator dispatches the backend AI agent to infer necessary data transformations to instantiate these new concepts based on the input data and creates candidate visualizations. Because the authors' high-level specifications can be ambiguous and Data Formulator may generate multiple candidates, Data Formulator provides feedback to explain and compare the results. With this feedback, the authors can inspect, disambiguie, and refine the suggested visualizations. After that, they can reuse or create additional data concepts to continue their visualization authoring process. We also report a chart reproduction study conducted with 10 participants to gather feedback on the new concept binding approach that Fig. 1: A dataset of Seattle and Atlanta daily temperatures in 2020 (left) and a scatter plot that visualizes them by mapping Date to \(x\)-axis, Temperature to \(y\)-axis, and City to color (right). employs an AI agent, and to evaluate the usability of Data Formulator. After an hour-long tutorial and practice session, most participants could create desired charts by creating data concepts--both with derivation and reshaping transformations. We conclude with a discussion on the lessons learned from the design and evaluation of Data Formulator, as well as important future research directions. ## 2 Illustrative Scenarios In this section, we illustrate users' experiences to create visualizations in Figs. 1 and 2 using programs and Data Formulator from the initial input data in Fig. 1. We refer to this dataset as _at_ in this section. 
### Experience with Programming We first illustrate how an experienced data scientist, Eunice, uses programming to create the desired visualizations with pandas and Altair libraries in Python. **Daily Temperature Trends.** Eunice starts with the scatter plot in Fig. 1. Because of is in the tidy format with Date, City, and Temperature available, Eunice needs no data transformation and writes a simple Altair program to create the plot: \[\text{at.Chart(df).mark\_circle().encode(x-Date',y-Temperature',color-'Clity)}\] This program calls the Altair library (at), selects the input dataset of the scatter plot function mark_circle, and maps columns to \(x,y\) and color channels. It renders the desired scatter plot in Fig. 1. **Seattle vs. Atlanta Temperatures.** To make a more direct comparison of two cities' temperatures, Eunice wants to create a different scatter plot (Fig. 2-1) by mapping Seattle and Altair temperatures to \(x,y\)-axes. However, Seattle and Atlanta temperatures are not available as columns in _at_. She therefore needs to transform _at_ to surface them. Because of is in the "long" format, where temperatures of both cities are stored in one column Temperature, she needs to pivot the table to the "wide" format. Eunice switches to the data transformation step and uses the pivot function from the pandas library to reshape _at_ (Fig. 3). This program populates Seattle and Atlanta as new column names from the City column, and their corresponding Temperature values are moved to these new columns by Date. With _df2_, Eunice creates the desired visualization, which maps Seattle and Atlanta to \(x,y\)-axes of the scatter plot with the following program: \[\text{atl.Chart(df2).mark\_circle().encode(x- Seattle',y-ASanta)}\] **Temperature Differences.** Eunice wants to create two visualizations to show how much warmer is Atlanta compared to Seattle: a bar chart to visualize daily temperate differences (Fig. 2-1) and a histogram to show the number of days each city is warmer (Fig. 2-1). Again, because necessary fields Difference and Warmer are not in _df2_, Eunice needs to transform the data. This time, she writes a program to perform column-wise computation, which extends _df2_ with two new columns Warmer and Difference (Fig. 4). Eunice then creates the daily temperature differences chart by mapping Date and Difference to \(x,y\)-axes and the histogram by mapping Warmer to \(x\)-axis and the aggregation function, count(), to \(y\)-axis to calculate the number of entries. \[\begin{array}{l}\textit{if extended 4f2 with new columns Difference' and Warmer'}\\ \text{df2[ Difference] = d2[ Seattle] - d2[ Atlanta] }\\ \text{df2[ Warmer] = d2[ Difference] }\text{apply}\\ \text{lambdx : Seattle' if $x>$ }\text{else (Atlanta' if $x<$ }\text{else ('Same'))}\\ \textit{if create the bar chart}\\ \text{atl.Chart(df2).mark\_bar().encode(x-Date',y-Difference',\\ \text{color-'Warmer'})}\\ \textit{if create the histogram}\\ \text{atl.Chart(df2).mark\_bar().encode(x-Warmer',y-'count(),\\ \text{color-'Warmer'})}\end{array}\] **Remark.** In all cases, Eunice can specify visualizations using simple Altaiir programs by mapping data columns to visual channels. However, data transformation steps make the visualization process challenging. Eunice needs to choose the right type of transformation based on the input data and desired visualization (e.g., creating the scatter plot in Fig. 1 from dZ would require unpivot instead). 
Furthermore, Eunice needs knowledge about pandas to choose the right function and parameters per task (e.g., rolling will not fit if Eunice wants to calculate moving average for each city in dJ). Eunice's programming experience and data analysis expertise allowed her to successfully complete all tasks. But a less experienced data scientist, Megan, finds this process challenging. Megan decides to use Data Formulator to reduce the data transformation overhead. ### Experience with Data Formulator Data Formulator (Fig. 5) has a similar interface as "shelf-configuration"-style visualization tools like Tableau or Power BI. But unlike these tools that support only mappings from input data columns to visual channels, Data Formulator enables authors to create and derive new data concepts and map them to visual channels to create visualizations _without requiring manual data transformation_. **Daily Temperature Trends.** Once Megan loads the input data (Fig. 1), Data Formulator populates existing data columns (Date, City, and Temperature) as known _data concepts_ in the Concept Shelf. Because all three data concepts are already available, no data transformation is needed. Megan selects the visualization type "Scatter Plot" and maps these data concepts to \(x,y\) and color channels in Chart Builder through drag-and-drop interaction. Data Formulator then generates the desired scatter plot. **Seattle vs. Atlanta Temperatures.** To create the second scatter plot (Fig. 4-1), Megan needs to map Seattle and Atlanta temperatures to \(x,y\)-axes of a scatter plot. Because Seattle and Atlanta temperatures are not available as concepts yet, Megan starts out by creating a new data concept Atlanta Temp (Fig. 6-1): she clicks the new button in the Concept Shelf, which opens a concept card that asks her to name the new concept and provide some examples values; Megan provides four Atlanta temperatures (45, 47, 56, 41) from the input data as examples and saves it. Similarly, Megan creates another new concept Seattle Temp. Because Data Formulator's current knowledge to them is limited to their names and example values, both concepts are listed as an unknown concept for now. (They will be resolved later when more information is provided.) With these new concepts and the Scatter Plot selected, Megan maps new data concepts Seattle Temp and Atlanta Temp to \(x,y\)-axes (Fig. 6-1), and then clicks the FORMULATE button to let Data Formulator formulate the data and instantiate the chart. Based on the visualization spec, Data Formulator realizes that the two unknown concepts are related to each other but not yet certain how they relate to the input data. Thus, Data Formulator prompts Megan with an example table to complete: each row in the example table will be a data point in the desired scatter plot. Megan needs to provide at least two data points from the input data to guide Data Formulator on how to generate this transform (Fig. 6-1). Here, Megan provides the temperatures of Atlanta and Seattle on 01/01/2020 and 01/02/2020 from the table Fig. 1. When Megan submits the example, Data Formulator infers a program that can transform the input data to generate a new table with fields Atlanta Temp and Seattle Temp that subsumes the example table provided by Megan. Data Formulator generates the new table and renders the desired scatter plot (Fig. 6-1). Megan inspects the derived table and visualization and accepts them as correct. 
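The program inferred in this step is essentially a long-to-wide pivot. As a rough illustration of what such a transformation looks like in pandas (column names follow the scenario; the values and the code are a sketch, not Data Formulator's actual output):

```python
import pandas as pd

# A small slice of the long-format input table (temperature values illustrative).
df = pd.DataFrame({
    "Date": ["1/1/2020", "1/1/2020", "1/2/2020", "1/2/2020"],
    "City": ["Atlanta", "Seattle", "Atlanta", "Seattle"],
    "Temperature": [45, 51, 47, 45],
})

# Pivot long -> wide so that each city's temperature becomes its own column,
# matching the Seattle Temp / Atlanta Temp concepts bound to the x/y axes.
wide = (df.pivot(index="Date", columns="City", values="Temperature")
          .reset_index()
          .rename(columns={"Seattle": "Seattle Temp", "Atlanta": "Atlanta Temp"}))
```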
**Temperature Differences.** To create a bar chart and a histogram to visualize temperature differences between the two cities, Megan needs two new concepts, Difference and Warner. This time, Megan notices that both concepts can be _derived_ from existing fields based on column-wise mappings, and thus she uses the "derive" function of Data Formulator (Fig. 7). Megan first clicks the "derive new concept" option on the existing concept Seattle Temp, which opens up a concept card that Fig. 5: Data Formulator UI. After loading the input data, the authors interact with Data Formulator in four steps: (1) in the Concept Shelf, create (e.g., Seattle and Atlanta) or derive (e.g., Difference, Warner) new data concepts they plan to visualize, (2) encode data concepts to visual channels of a chart using Chart Builder and formulate the chart, (3) inspect the derived data automatically generated by Data Formulator, and (4) examine and save generated visualizations. Throughout the process, Data Formulator provides feedback to help authors understand generated data and visualizations. lets her describe the transformation she wants using natural language. Megan selects Seattle Temp and Atlanta Temp as the "derived from" concepts, provides a name Difference for the new concept, and describes the transform using natural language, "Calculate Seattle attlanta temp diff." Megan then clicks the generate button and Data Formulator dispatches its backend AI agent to generate code. Data Formulator returns two code candidates and presents the first one in the concept card. Megan opens up the dialog to inspect both candidates and learns that because her description did not clearly specify whether she wants the difference or its absolute value, Data Formulator returns both options as candidates. After inspecting the example table and the transformation code provided by Data Formulator, Megan confirms the first candidate and saves the concept Difference. Similarly, Megan creates a concept, Warmer, from Seattle Temp and Atlanta Temp with the description "check which city is warmer, Atlanta, Seattle, or same." Data Formulator applies the data transformation on top of the derived table from the last task and displays the extended table in Data View (Fig. 5). Because both concepts are now ready to use, Megan maps them to Chart Builder to create the desired visualizations (Fig. 8). **7-day Moving Average of Seattle's Temperature.** Last, Megan needs to create a line chart with 7-day moving average temperatures. Because the moving average can be derived from the Seattle Temp column, Megan again chooses to use the derive function. Megan starts with a brief description "calculate 7-day moving avg" and calls Data Formulator to generate the desired transformation. Upon inspection, Megan notices that the generated transformation is close but does not quite match her intent: the 7-day moving average starts from \(d-6\) to \(d\) for each day \(d\) as opposed to \(d-3\) to \(d+3\) (Fig. 9). Based on this observation, Megan changes the description into "calculate 7-day moving avg, starts with 3 days before, and ends with 3 days after" and re-runs Data Formulator. This time, Data Formulator generates the correct transformation and presents the extended data table in Fig. 5. Megan then maps Date and Seattle 7-day Moving avg to \(x,y\)-axes of a line chart. **Remark.** With the help of Data Formulator, Megan creates visualizations without manually transforming data. 
Instead, Megan specifies the data concepts she wants to visualize by: Fig. 8: Megan creates the bar chart using derived concepts, Difference and Warmer, as well as an original concept Date. Fig. 6: Megan (1) creates new data concepts, Seattle Temp and Atlanta Temp, by providing examples and (2) maps them to \(x,y\)-axes of a scatter plot to specify the visualization intent. (3) Data Formulator asks Megan to provide a small example to illustrate how these two concepts are related, and Megan confirms the example. (4) Based on the example, Data Formulator generates the data transformation and creates the desired visualization. Fig. 7: (1) Megan derives the new concept Difference from Atlanta Temp and Seattle Temp using natural language. Data Formulator generates two candidates and displays the first one in the concept card. (2) Megan opens the dialog to inspect both, confirms the first one, and saves the concept. * building new concepts using examples (when the new concept is spread among multiple columns or multiple concepts are stored in the same column, e.g., Seattle Temp and Atlanta Temp are both stored in the Temperature column); and * deriving new concepts using natural language (when the new concept can be computed from existing ones using column-wise operators, e.g., Difference from Seattle Temp and Atlanta Temp). Megan then drags-and-drops data concepts to visual channels of a chart. In this process, for derived concepts, Data Formulator displays generated candidate code and example table to help Megan inspect and select the transformation; for concepts created by example, Data Formulator prompts Megan to elaborate their relations by completing an example table. Data Formulator then transforms the data and generates the desired visualizations. Data Formulator reduces Megan's visualization overhead by shifting the task of specifying data transformation into the task of inspecting generated data. Because Data Formulator's interaction model centers around data concepts, Megan does not need to directly work with table-level operators, such as pivot, map/reduce and partioning, which are challenging to master. ## 3 The Data Formulator Design In this section, we describe our design principles, explain Data Formulator's interaction model, and how Data Formulator derives data concepts and formulates visualizations from the author's inputs. ### _Design Principles_ Data Formulator introduces _data concepts_, an abstraction of the columns needed for an author to specify their target visualization. To eliminate the author's burden to manually transform the data table before plotting, we designed Data Formulator based on the following guiding design principles. **Treat design concepts as first-class objects.** The notion of data concepts is a generalization of table columns: it is a reference to columns both from a current table and from a future transformed table. They offer two benefits. First, concept-level transformations are easier to describe and understand than table-level operators. Table-level transformations require either advanced operators like pivot and unpivot, or high-order functions like map and window, while concept-level operators are first-order functions over primitive elements (e.g., arithmetic) or lists (e.g., percentile). This makes it easier for the author to communicate with the AI agent and verify the results. 
Second, we can build the interaction experience on top of existing designs people are already familiar with: data concepts resemble data columns existing shelf-configuration tools commonly use. **Leverage benefits from multiple interaction approaches.** Data Formulator employs both natural language interaction (for deriving concepts) and programming-by-example approach (for building custom concepts). Natural language descriptions have a superior ability to translate high-level intent into executable code and large language models (LLMs) can reason about natural concepts (e.g., academic grades are A, B, C, D, and F; months are from January to December). However, it can be difficult for the author to provide proper descriptions if they do not understand notions like pivoting, and natural language descriptions can be imprecise and ambiguous. In contrast, while program synthesizers cannot reason about natural concepts, they are less ambiguous, and it is easier for the author to convey reshaping operations by demonstrating the output relation. By incorporating multiple approaches and feedback for different transformation types (derivation vs. reshaping), Data Formulator takes advantage of both, reducing the specification barrier and improving the likelihood for the AI agent to generate correct and interpretable codes. **Ensure correct data transformation and promote trust.** While LLM and program synthesizers can automatically generate code to eliminate the author's manual data transformation burden, they can incorrectly generalize the author's specification. Therefore, it is crucial for the author to view and verify the results. Our design employs mechanisms to ensure such inspection by the author: (1) display multiple candidates for the author to review, if available, (2) display both the code (process) and the sample output values (results) to help the author understand the transformation, and (3) allow the author to edit the generated transformation code to correct or refine it. **Improve the system expressiveness.** Data Formulator's expressiveness is defined by the combination of transformation function and visualization language. Data Formulator's visualization spec builds on top of Vega-Lite specifications. While Data Formulator's UI does not provide options to layer marks, the author can import their custom Vega-Lite specs of layered visualizations to achieve the same design. For data transformation, Data Formulator supports reshaping options from idyverses as described in Sec. 2, and it supports both column-wise derivation and analytical computation that can be generated by the LLM. Note that while our transformation language does not include aggregation, the author can achieve the same visualization by setting aggregation options on the desired axes (e.g., map Month to \(x\)-axis and avg(Seattle Temp) to \(y\)-axis to create a bar chart with average temperature). However, with the current design, the author cannot derive or reshape data that first require aggregation without re-importing the aggregated data. ### _Interaction Model_ Figure 10 shows Data Formulator's high-level interaction model. Data Formulator first loads data columns from the input table as original (and known) concepts (e.g., Date, City, and Temperature concepts in Fig. 5). The author uses the Concept Shelf to create new data concepts, if needed, in two ways (Sec. 
### _Interaction Model_

Figure 10 shows Data Formulator's high-level interaction model. Data Formulator first loads data columns from the input table as original (and known) concepts (e.g., Date, City, and Temperature concepts in Fig. 5). The author uses the Concept Shelf to create new data concepts, if needed, in two ways (Sec. 3.3): (1) derive a concept from existing ones by interacting with an AI agent using natural language or (2) build a custom concept by providing example values. If the new concept is derived from known concepts, Data Formulator immediately extends the current data table and registers it as a known concept. With the necessary data concepts known, the author uses the Chart Builder to map data concepts to visual channels of a chart. If unknown custom concepts are used to specify a visualization, Data Formulator asks the author to provide an example relation among the encoded concepts to transform the input table by using a programming-by-example approach. With the necessary data formulations applied, Data Formulator generates a Vega-Lite spec and renders the visualization.

### _Creating New Data Concepts_

The author can **derive** a concept from one or more data concepts by interacting with Data Formulator's AI agent (Fig. 10-1). In addition to a concept name, the author provides both a list of source concepts from which the new concept is derived and a natural language description of the transformation (Fig. 7-1). Data Formulator then generates a contextualized prompt that grounds the description in the context of the source concepts. This prompt combines the author's description and the descriptions of input parameters for all source concepts (with example values sampled from their domains) as comments, and joins it with the function prefix to instruct the AI agent to complete a Typescript function (as opposed to generating non-code text or uncontrolled code snippets). Data Formulator prepares two types of prompts for each query to cover simple derivation (Example 1) and analytical computation (Example 2) because it does not know beforehand whether analytical computation is needed.

**Example 1:** The prompt for "Calculate seattle atlanta temp diff" with source concepts Seattle Temp and Atlanta Temp (Fig. 7).

**Example 2:** The prompt for "calculate 7-day moving avg" with source concept Seattle Temp (Fig. 9). It provides index and seattleTempList so that the function can access other values of seattleTemp when analytical computation is needed (e.g., calculate the moving average for the current index, or derive the percentile of seattleTemp among all values).

Fig. 9: Megan derives the 7-day moving averages from Seattle Temp. After inspecting the results, she edits the description to be more precise.

Data Formulator sends both prompts to the LLM (we use Codex Davinci 2 [7]) to generate the transformation code (Fig. 10-2), asking for five candidate completions. When candidate programs are returned from the LLM, Data Formulator filters out programs that are not executable or that produce error outputs by executing them on sample values from the source domains. Data Formulator then presents the programs along with their example execution results for the author to inspect (Fig. 7). Once confirmed, a new derived concept is created and shown in the Concept Shelf. If all source fields are known concepts, Data Formulator derives a new column by applying the transformation function to every tuple from the source columns and appends the column to the current table for the author to review (e.g., Fig. 5).
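The exact prompt templates appear only in the paper's figures; the sketch below is a speculative reconstruction of the simple-derivation prompt (Example 1) based on the description above, written in Python for illustration. The helper name derivation_prompt, the comment format, and the parameter-naming rule are assumptions, and the analytical-computation variant (Example 2), which additionally exposes index and seattleTempList, is omitted.

```python
def derivation_prompt(description: str, source_concepts: dict[str, list]) -> str:
    """Assemble a code-completion prompt in the style described above.

    This is an illustrative reconstruction, not Data Formulator's actual template:
    the author's description and the source concepts (with sampled example values)
    become comments, followed by a TypeScript function prefix for the model to complete.
    """
    lines = ["// " + description]
    params = []
    for name, examples in source_concepts.items():
        param = name.replace(" ", "")
        param = param[0].lower() + param[1:]          # e.g., "Seattle Temp" -> "seattleTemp"
        lines.append(f"// @param {param}: example values {examples[:3]}")
        params.append(f"{param}: number")
    lines.append(f"function derive({', '.join(params)}) {{")
    return "\n".join(lines)

prompt = derivation_prompt(
    "Calculate seattle atlanta temp diff",
    {"Seattle Temp": [51, 45, 44], "Atlanta Temp": [45, 47, 50]},
)
print(prompt)  # sent to the code-generation model, which completes the function body
```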
The author can also **build** a custom concept by providing its name and a set of example values that belong to its domain (Fig. 10-3). Custom concepts are designed to support data reshaping: the author creates custom concepts when (1) the concept is spread across multiple columns and the author wants to combine multiple columns in a wide table to create one new concept in a long table; (2) multiple concepts are stored in one column and they want to surface fields from a long table; and (3) multiple values for a concept are collapsed in a column as a list (e.g., the value for an "actors" column is a list of actors for each movie) and the author wants to split the list into multiple rows (i.e., one actor per row). These custom concepts are _not_ known yet upon creation because Data Formulator needs additional information from the author to resolve their relation with the input data. As we will describe in the next section, the resolution is achieved by inferring the reshaping program based on the example relations provided by the user. With data concepts (including newly created ones) ready, the author can interact with the Chart Builder to create visualizations.

### _Specifying and Formulating the Visualization_

Chart Builder employs a shelf-configuration interface: authors drag-and-drop data concepts to visual channels of the selected visualization to specify the visual encoding. Based on the encoding, Data Formulator generates a Vega-Lite specification (e.g., Fig. 11) to render the visualization. Data Formulator adopts a chart-based specification: each chart type corresponds to a Vega-Lite template with placeholder encodings to be filled from the author's specification. Data Formulator currently supports scatter plots (circle-based, bubble chart, ranged dot plots), bar charts (single-column, stacked, layered, grouped, histogram), line charts (with and without dots), heatmaps, and custom charts (with all compatible visual channels). When all fields used in the visual encoding are available, Data Formulator combines the Vega-Lite spec with the input data to render the visualization (e.g., Fig. 1). Otherwise, when some concepts are unknown (unresolved custom concepts or concepts derived from unknown ones), Data Formulator first interacts with the author and then calls the program synthesis engine to create the transformed table.

Once the author specifies the visual encoding, Data Formulator first checks if any unknown concepts are used. If so, it asks the author to illustrate the relation of unknown concepts with other concepts used in the visual encoding by filling out an example relation in a sample table (e.g., Fig. 10-3). Data Formulator needs such an example relation to disambiguate the visualization intent because unknown concepts contain example values only from their own domains, missing information on how they will be related row-wise in the transformed table. For example, Data Formulator generates the example relation with Seattle Temp and Atlanta Temp fields as shown in Fig. 6-3 for the author to complete. To reduce the author's effort, Data Formulator pre-fills two example values of Atlanta Temp based on its sample domain and asks the author to complete their corresponding Seattle Temp values (e.g., what is Seattle Temp when Atlanta Temp is 45). Each row in the example relation will be a row in the transformed data, which will then be mapped to a point in the scatter plot. Once the author submits the example relation, Data Formulator calls the program synthesizer to solve the data reshaping problem (Fig. 10-3).
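To illustrate what "solving the data reshaping problem" means here (formalized in the next paragraph as finding a program \(p\) with \(E\subseteq p(T)\)), the following sketch enumerates a toy candidate set and keeps the candidates whose output contains the author's example rows. It uses pandas as a stand-in for the tidyverse reshaping operators and hard-codes the concept renaming; Data Formulator's actual synthesizer [53] works differently and is not reproduced here.

```python
import pandas as pd

def contains_example(table: pd.DataFrame, example: pd.DataFrame) -> bool:
    """Acceptance test E ⊆ p(T): every example row must appear in the candidate table."""
    cols = list(example.columns)
    if not set(cols).issubset(table.columns):
        return False
    merged = example.merge(table[cols].drop_duplicates(), on=cols, how="inner")
    return len(merged) == len(example)

# Toy long-format input T (invented values) and the author's example relation E.
T = pd.DataFrame({
    "Date": ["2020-01-01", "2020-01-01", "2020-01-02", "2020-01-02"],
    "City": ["Seattle", "Atlanta", "Seattle", "Atlanta"],
    "Temperature": [51, 45, 45, 47],
})
E = pd.DataFrame({"Seattle Temp": [51], "Atlanta Temp": [45]})

# A tiny candidate space: identity, and one long-to-wide pivot. In a real synthesizer,
# the mapping from pivoted columns to concept names is part of the searched program;
# here it is hard-coded for brevity.
def identity(t):
    return t

def pivot_wider(t):
    wide = t.pivot(index="Date", columns="City", values="Temperature").reset_index()
    return wide.rename(columns={"Seattle": "Seattle Temp", "Atlanta": "Atlanta Temp"})

candidates = [identity, pivot_wider]
solutions = [p for p in candidates if contains_example(p(T), E)]
print([p.__name__ for p in solutions])  # only the pivot generalizes the example
```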
Given an example relation \(E\) and input data \(T\), the program synthesizer solves the programming-by-example problem to find a reshaping program \(p\) such that \(E\subseteq p(T)\) (i.e., the transformed data should generalize the example \(E\)). The reshaping program \(p\) is defined by the grammar in Figure 12, where \(p\) is recursively defined over four core reshaping operators from the R tidyverse library. We include only reshaping operators because other operators like mutate and summarise are already supported by Data Formulator's ability to derive concepts from natural language. With this grammar, the program synthesizer performs an enumerative search in the program space for candidate programs. To speed up this combinatorial search process, we leverage abstract interpretation to prune the search space: the program synthesis engine returns candidate programs that satisfy the example relation to Chart Builder. Note that multiple candidates could be generated since the example relation is small and potentially ambiguous. In practice, unlike other programming-by-example tools, the small example relation is precise enough to constrain the program space such that only the correct candidate is returned, because the program synthesizer only needs to solve the reshaping problem.

Fig. 10: Data Formulator's interaction model.

Fig. 11: Vega-Lite specs for the scatter plots in Fig. 2-1 and Fig. 6.

With the generated reshaping programs, Chart Builder prepares the input data: it first generates a reshaped table from each reshaping program, and then, for every derived concept used in the encoding, it extends the reshaped table with a new column by applying the transformation function on every tuple in the table. This way, Data Formulator generates a new table with all necessary fields to instantiate the visualization. Data Formulator presents the prepared table and candidate visualizations for the author to inspect (Fig. 5-7). When the author confirms and saves a desired visualization, the transformed data is used to resolve unknown concepts: these concepts are now available as known concepts to be used to create visualizations.

### Implementation

Data Formulator is built as a React web application in Typescript; its backend is a Python server that runs on a Dv2-series CPU with 3.5 GiB RAM on Azure. Data Formulator's backend queries the OpenAI Codex API for concept derivation and runs the synthesis algorithm locally. Data Formulator's scalability to larger data relates to (1) the frontend's visualization rendering capability and (2) the backend's efficiency in executing data transformation scripts. To scale up Data Formulator for large datasets, we envision a sampling-based approach [31], where Data Formulator presents results on a representative subset of data to enhance interactivity and returns full results asynchronously.

## 4 Evaluation: Chart Reproduction Study

We conducted a chart reproduction study [42] to gather feedback on the new concept binding approach that employs an AI agent, and to evaluate the usability of Data Formulator.

### Study Design

**Participants.** We recruited 10 participants (3 female, 7 male) from a large technology company. All participants had experience creating (simple) charts and identified themselves as having normal or corrected-to-normal vision, without color vision deficiency. Six participants are data scientists, two are applied scientists, and the remaining two are data & applied scientists, and they are all located in the United States.
Four participants are in their 30's, three are in their 20's, and one participant each is in the 40's, 50's, and 18-19 age groups. They had varying levels of self-identified expertise in terms of chart authoring, computer programming, and experience with LLMs.

**Tasks and Datasets.** We prepared six chart reproduction tasks with two datasets (3 tasks for each dataset): daily COVID-19 cases from Jan 21, 2020 to Feb 28, 2023 (3 columns; 1,134 rows) for the first task set (Tasks 1-3) and daily temperatures in 2020 for Seattle and Atlanta (4 columns; 732 rows; Fig. 1) for the second set (Tasks 4-6). In both task sets, each subsequent task is built upon the previous one. One task (Task 4) required building two new concepts for reshaping, and the other five tasks required the use of derived concepts. We also prepared three tutorial tasks using a students' exam scores dataset (5 columns; 1,000 rows): in addition to the scores for three subjects (math, reading, and writing), the data table included two additional student attributes, including each student's major. The first tutorial task was about creating a chart with known/available concepts, while the second and third tutorial tasks were for creating charts using derived concepts and unknown concepts, respectively. Finally, we produced two practice tasks (one for reshaping and another for derivation). For these, the exam scores dataset was transformed into a long format, including math and reading scores under subject and score columns, resulting in 4 columns and 2,000 rows.

**Setup and Procedure.** We conducted sessions remotely via Microsoft Teams. Each session consisted of four segments: (1) a brief explanation of the study goals and procedure, (2) training with tutorial and practice, (3) chart reproduction tasks, and (4) debrief. The training segment started with a quick introduction of Data Formulator's basic interactions using a simple task that does not require data transformation. Then, with their screen shared and recorded with audio, participants went through a tutorial and created three visualizations following step-by-step instructions provided in slides. They next created two visualizations on their own as practice. After an optional break, the participants performed six reproduction tasks using the two datasets mentioned above. Each task included a description (e.g., "Create a Scatter Plot to compare Atlanta Temperature against Seattle Temperature."), the labels for axes and color legend (if necessary), and an image of the target visualization. (Study materials are included in the supplemental material.) We encouraged the participants to think aloud, describing their strategies, whether any feature of Data Formulator worked or made sense, whether the system behaved as they expected, etc. We recorded whether the participants required a hint (and which hint) and how long it took them to complete each task. The recorded completion time is not intended to indicate performance, as we wanted to gain insights about our approach using the think-aloud method. Instead, we wanted to see if and how the participants completed, faltered, or recovered on each task within a reasonable amount of time. The session ended with a debriefing after the participants filled out a questionnaire with 5 questions about their experience with Data Formulator. The entire session took about two hours to complete, while the training segment took about an hour. We compensated each participant with a $100 Amazon gift card.
### Results

After an hour-long tutorial and practice session, most participants could use Data Formulator to create different types of charts that involve advanced data transformations. Furthermore, they were generally positive about their experience with Data Formulator in chart authoring.

**Task Completion and Usability Issues.** Participants completed all tasks on average within 20 minutes, with a deviation of about four and a half minutes. Table 1 shows the average and standard deviation of task completion time in minutes, along with the total number of hints provided for each chart reproduction task (for all 10 participants). The participants spent most of their time (on average less than five minutes) on Task 6 because it was not trivial to inspect the code that generates the 7-day moving average. For Tasks 5 and 6, we had to give one hint (to two different participants) to guide them to use a different type of concept (they needed to derive a concept but initially tried to build a concept). There were a few cases where we had to provide a hint to a single participant: how to select multiple sources for derivation (Task 4), what the correct source concepts for derivation were (Tasks 2 & 5), and that the example values should be from the original table (Task 4). We had to provide the highest number of hints for Task 1. This was because when participants derived the year from the date value, its data type was set to number, and the participants did not know or remember how to change its data type to string. (As detailed below, some participants tried to fix it by providing a different natural language prompt.) For derived concepts, once the participants identified the correct interaction approach and input fields, they were able to describe and refine the transformation in natural language to solve the tasks. We recorded all participants' prompts (see supplementary material). On average, participants made 1.62 prompt attempts per derived concept, and the length of those prompts averaged 7.28 words. The system generated an average of 1.94 candidates per prompt attempt. Participants rated Data Formulator on five criteria using a 5-point Likert scale (5 being the most positive) as follows: easy to learn (\(M=3.90\), \(SD=0.88\)), easier than other tools to transform data (\(M=3.80\), \(SD=1.23\)), AI-agent's usefulness (\(M=4.4\), \(SD=0.70\)), helpful to verify generated data (\(M=4.1\), \(SD=0.74\)), and the trustworthiness of generated data (\(M=4.7\), \(SD=0.48\)).

Figure 12: Reshaping operators supported by Data Formulator. \(T\) refers to input data, and \(c\) refers to column names.

Participants provided feedback to improve the user interface. Four participants expected a way to multi-select on concept cards and click "derive" for deriving a concept from multiple existing ones. The current method of clicking "derive" on one concept and then multi-selecting is not intuitive. Two other participants expected the AI to select or identify which concepts to derive from based on their prompts. A few participants expected to change the data type using the prompt (e.g., "year as a string" when the year is extracted from date). Five participants wanted the derived examples table to show more values, or unique derived values. Reshaping data was at times a point of confusion: two participants found it difficult to understand how the AI formulated candidate datasets, while two others did not intuit or remember the task sequence to formulate data for unknown concepts.
When required to reshape data, three participants entered plausible, but not exact, values in the example table during the training: they misunderstood the rigid connection to the original dataset. To strengthen that connection, participants recommended including additional columns (especially a column that is unique for a pivot transform) or filtering or highlighting rows of the data table view that correspond to the values used in the example table. We also observed users' attempts to re-use a derived concept as a commutative function on other concepts: two participants tried to drag a derived concept and drop it on other concepts.

**Overall Reaction and Experience.** To understand participants' reactions to the new concept-driven approach employing an AI agent, we analyzed the debrief interview, during which participants stated something or confirmed an observation made by the experimenter. Using the transcription from the recorded sessions, one researcher applied an open coding method to surface all unique feedback, issues, and ideas from the participants. He expanded the codes to generalize over semantically similar participant statements. While counts of qualitative data do not provide a metric for importance, we counted how many participants mentioned each code, indicating how frequently our participants had shared experiences or ideas with Data Formulator. Overall, participants were positive about their experience with Data Formulator. All 10 participants said that natural language prompts work well for generating data transforms, and eight mentioned that AI is a helpful tool for the study tasks. Participants more frequently praised the derived concept method than the unknown concept method for transforming data. Specifically, when it comes to verifying candidate derived concepts, all except one participant commented that displaying code was helpful, and seven found the example derived values table to be useful. While only half of the participants commented that pivoting with unknown concepts is easier than with other tools, only three affirmed the example data table being helpful. Five participants mentioned that they were impressed by the power of the AI agent to generate data transforms. Five participants found having candidates (for both derived and formulated data) to be helpful because the candidates provided an opportunity to choose a correct answer, or at the least to select a promising direction to refine. Participants also explained that generating candidates increases trust in a collaborative experience. On the other hand, three participants mentioned they were reluctant to give much trust to the AI generative features of the tool.

## 5 Related Work

Data Formulator builds on top of prior research in visualization authoring tools, data transformation tools, and code generation techniques.

**Visualization Grammars and Tools.** The grammar of graphics [58] first introduced the representation of visualizations based on chart types and encodings of data columns to their visual channels. Many high-level grammars are designed to realize this idea. For example, ggplot2 [55] is a charting library in R based on visual encodings. Vega-Lite [46] and its Python wrapper Altair [51] extend the traditional grammar of graphics design with rules for layered and multi-view displays, as well as interactions, and Animated Vega-Lite [65] further extends it to support animations. These grammars hide low-level implementation details and are concise.
Therefore, they are generally preferred for the rapid creation of visualizations in exploratory settings over toolkits and libraries like Protovis [4], Atlas [28], and D3 [5] that are designed for more expressive and novel visualization authoring. High-level grammars inspire interactive visualization tools like Tableau [49], Power BI, Lyra [44], Charticulator [41], and Data Illustrator [29]. These tools adopt a shelf-configuration design: authors map data columns to visual encoding "shelves", often using drag-and-drop interaction, and generate specifications in high-level grammars to render visualizations. These grammars and tools require that the input data is in a tidy format, where all variables to be visualized are columns of the input data. Because this means authors often need to transform the data first to create any visualizations, Satyanarayan et al. recognized automatically inferring or suggesting appropriate transformations when necessary as an important research problem [45]. To reduce authors' efforts, visualization-by-demonstration [43, 47, 64] and visualization-by-example [54] tools have been introduced. Lyra 2 [64] generates interaction rules after authors perform an interaction on the visualization. VbD [43] lets users demonstrate transformations between different types of visualizations to produce new specifications. Although these approaches reduce chart specification effort, they require tidy input data. Falx [54], on the other hand, addresses the data transformation challenge with a visualization-by-example design. Falx lets authors specify visualizations via low-level example mappings from data points to primitive chart elements. However, Falx does not support derivation types of transformation because of the limitations of its underlying programming-by-example algorithm; its requirement to focus on low-level elements also introduces a challenging paradigm shift for users who are more familiar with tools that focus on high-level specifications [46, 49]. Natural language interfaces [8, 21, 27, 30, 37] enhance users' ability to author and reason about visualizations. NCNet [30] uses a Seq-to-Seq model to translate chart description texts into Vega-Lite specs. VisQA [21] is a pipeline that leverages semantic parsing techniques [36] to provide atomic data-related answers based on its visualizations. NL4DV [33] and Advisor [27] generate visualizations based on user questions. To manage ambiguity in natural language inputs [48], DataTone [13] ranks solutions based on user preference history, and Pumice [25] introduces a multi-modal approach that leverages examples to refine the initial ambiguous specification. Data Formulator's concept derivation interface is based on natural language. Data Formulator benefits from large language models' expressiveness [7], and manages ambiguity by restricting the target function type to columns-to-column mapping functions (as opposed to arbitrary data transformation scripts). In the future, more powerful language models can be plugged into Data Formulator to improve code generation quality. Data Formulator adopts the shelf-configuration approach like Tableau and Power BI, but it supports encoding from _data concepts_ to visual channels to address the data transformation burden. Because Data Formulator can automatically transform the input data based on the concepts used in the visualization specification, authors do not need to manually transform data.
Furthermore, because Data Formulator's Chart Builder resembles tools like Power BI and Tableau, it lets the authors focus on high-level designs. Data Formulator's multi-modal interaction approach supports both deriving concepts and reshaping tables. While Data Formulator currently focuses on standard visualizations supported by Vega-Lite, its AI-powered concept-driven approach can also work with expressive and creative visualization design tools like StructGraphics [50] and Data Illustrator [29] to automate data transformations.

Table 1: The average and standard deviation of task time (in minutes) and the total number of hints provided for chart reproduction tasks.

| Task | Average Time | Standard Deviation | Total Number of Hints |
| --- | --- | --- | --- |
| Task 1 | 2:21 | 0:45 | 7 |
| Task 2 | 3:19 | 2:09 | 2 |
| Task 3 | 3:45 | 1:33 | 2 |
| Task 4 | 2:43 | 1:33 | 2 |
| Task 5 | 2:22 | 1:55 | 3 |
| Task 6 | 4:29 | 1:39 | 2 |

**Data Transformation Tools.** Libraries and tools like tidyverse [57], pandas [35], Potter's Wheel [40], Wrangler [19], Tableau Prep, and Power Query are developed to support data transformation. They introduce operators to reshape, compute, and manipulate tabular data needed in data analysis. Automated data transformation tools, including programming-by-example tools [18, 38, 52] and mixed-initiative tools [15, 19, 20, 62], have been developed to reduce authors' specification effort. Data Formulator tailors key transformation operators from the tidyverse library (reshaping and derivation) for visualization authoring. Because the desired data shape changes with visualization goals, even with these tools, authors still need the knowledge and effort to first identify the desired data shape and then switch tools to transform the data. Data Formulator bridges visual encoding and data transformation with data concepts to reduce this overhead.

**Code Generation.** Code generation models [7, 9, 12] and program synthesis techniques [6, 14, 52, 63] enable users to complete tasks without programming by using easier specifications, including natural language, examples, and demonstrations. Code generation models like Codex [7], PaLM [9], and InCoder [12] are transformer-based causal language models (commonly referred to as LLMs) that complete text from natural language prompts. These LLMs can generate expressive programs to solve competitive programming [16, 26], data science [22], and software engineering tasks [1] from high-level descriptions. Programming-by-example [54] and programming-by-demonstration [2, 39] tools can synthesize programs based on users' output examples or demonstrations that illustrate the computation process. Natural language approaches are highly expressive, but some tasks can be challenging to phrase. On the other hand, while programming-by-example techniques are precise, they are less expressive and do not scale to large programs, as they require well-defined program spaces. Therefore, Data Formulator adopts a mixed-modality approach to solve the data transformation task. It leverages the Codex model [7] for concept derivations and the example-based synthesis algorithm [53] for reshaping, which takes advantage of both approaches to reduce authors' specification overhead. Because code generation techniques generalize programs from incomplete user specifications, generated programs are inherently ambiguous, and thus require disambiguation to identify a correct solution among candidates.
Prior work proposes techniques to visualize the search process [63], visualize code candidates [54, 61], and present distinguishing examples for authors to inspect [17]. Data Formulator provides feedback to the authors by presenting the generated code together with its execution results for them to inspect, select, and edit.

## 6 Discussion and Future Work

**Unified Interaction with Multiple Modalities.** Data Formulator employs two different modalities for authors to specify different types of data transformation: natural language for concept derivation and examples for table reshaping (Sec. 3). This design combines the strengths of both modalities so that the authors can better communicate their intent to the AI agent, and the AI agent can provide precise solutions from a more expressive program space. However, choosing the right input modality when creating a new concept can be challenging for inexperienced authors. To address this challenge, we envision a stratified approach where the authors just initiate the interaction in natural language, and the AI agent decides whether to ask the authors for example relations for clarification or to directly generate derivation code. This design will shift the effort of deciding which approach to start with from the authors to the AI agent, and "by-example" specification will become a follow-up interaction step to help the authors clarify their intent. We envision this mixed-initiative unified interaction will further reduce the authors' efforts in visualization authoring.

**Conversational Visualization Authoring with AI Agents.** Conversational AI agents [34] have the strength of leveraging the interaction context to better interpret user intent. They also provide opportunities for users to refine the intent when the task is complex or ambiguous. However, conversation with only natural language is often insufficient for visualization authoring because (1) it does not provide users with precise control over the authoring details (e.g., exploring different encoding options, changing design styles) and (2) the results can be challenging to inspect and verify without concrete artifacts (e.g., programs, transformed data). It would be useful to research how conversational AI can be integrated with Data Formulator's concept-driven approach to improve the overall visualization experience. First, with a conversational AI agent, the authors can incrementally specify and refine their intent for tasks that are difficult to solve in one shot. Second, a conversational agent complements Data Formulator by helping the authors explore and configure chart options. Because Data Formulator focuses on data transformation, it does not expose many chart options (e.g., axis labeling, legend style, visual mark styles) in its interface. A conversational AI agent can help the authors access and control these options without overwhelming them with complex menus. For example, when the authors describe chart styles they would like to change, Data Formulator can apply the options directly or dynamically generate editing panels for them to control. We envision that the effective combination of conversational AI experiences and the Data Formulator approach will let the authors confidently specify richer designs with less effort.

**Concept-driven Visual Data Exploration.** Visual data exploration tools [59, 23, 24, 32, 60] help data scientists understand data and gain meaningful insights in their analysis process.
These tools support a rich visualization space, yet still require datasets to be in the appropriate shape and schema. While Data Formulator is designed for visualization authoring, its concept-driven approach can be used in visual data exploration to expand the design space. Beyond the current concept-driven features of Data Formulator, the AI agent could be enhanced to recommend data concepts of interest based on the data context or the author's interaction history. Building on this idea, the tool could recommend charts based on all potentially relevant data concepts. This expansive leap could overcome one of the limitations of chart recommendation systems by enabling the authors to view charts beyond their input data columns without additional user intervention.

**Study Limitations.** While our participants had varying levels of expertise in chart authoring, computer programming, and experience with LLMs, many of them had considerable knowledge about data transformation methodology and programming. It would be useful to investigate if and how people with limited expertise could learn and use Data Formulator. The main goal of Data Formulator was to reduce manual data transformation in visualization authoring efforts. As such, in our study, we focused on derivation and reshaping types of data transformations with simple datasets. While they are key types of transformation and our tasks covered multiple styles of derivations, the transformations we studied are by no means comprehensive. It would be valuable to evaluate broader combinations and complexities of data transformations. Our study adopted a chart reproduction design [42], which is commonly used for evaluating chart authoring systems (e.g., [41, 29, 44]). Therefore, our study shares its inherent limitations: because we prepared datasets and tasks, and provided target visualizations as a reference, we do not know if and how people would use Data Formulator to create visualizations with their own data.

## 7 Conclusion

This paper introduces Data Formulator, a concept-driven visualization authoring tool that leverages an AI agent to address the data transformation challenge in visualization authoring. With Data Formulator, authors work with the AI agent to create new data concepts and then map data concepts to visual channels to specify visualizations. Data Formulator then automatically transforms the input data and instantiates the visualizations. Throughout the authoring process, Data Formulator provides feedback to the authors to inspect and refine the generated code to promote confidence. As discovered in the chart reproduction study, participants can learn and use Data Formulator to create visualizations that require advanced data transformations. In the future, the concept-driven visualization approach can potentially benefit the design of new visual data exploration tools and expressive visualization authoring tools to overcome the data transformation barrier.

## Appendix A Supplemental Material

We include three zip files in the supplemental material: (1) a 6-minute video that walks through the experience of creating the Seattle and Atlanta temperature visualizations described in Sec. 2 with Data Formulator, (2) a set of short videos that demonstrate additional Data Formulator scenarios, and (3) our user study materials, including the study script, tutorials, study tasks, and prompts created by participants.
Authors using modern visualization tools must transform their data into a form that fits the visualization they want to create. Because this requires programming experience or separate data processing tools, data transformation becomes a barrier to visualization authoring. To address this challenge, we propose concept binding, a new visualization paradigm that separates high-level visualization intent from low-level data transformation steps and realizes the latter through an AI agent. We implement this paradigm in Data Formulator, an interactive visualization authoring tool. With Data Formulator, authors first define the data concepts they want to visualize using natural language or examples, and then bind them to visual channels. Data Formulator then, through its AI agent, transforms the input data accordingly and creates the desired visualization. The results (
2303.17978
How Can Mixed Reality Benefit From Physiologically-Adaptive Systems? Challenges and Opportunities for Human Factors Applications
Mixed Reality (MR) allows users to interact with digital objects in a physical environment, but several limitations have hampered widespread adoption. Physiologically adaptive systems detecting user's states can drive interaction and address these limitations. Here, we highlight potential usability and interaction limitations in MR and how physiologically adaptive systems can benefit MR experiences and applications. We specifically address potential applications for human factors and operational settings such as healthcare, education, and entertainment. We further discuss benefits and applications in light of ethical and privacy concerns. The use of physiologically adaptive systems in MR has the potential to revolutionize human-computer interactions and provide users with a more personalized and engaging experience.
Francesco Chiossi, Sven Mayer
2023-03-31T11:25:10
http://arxiv.org/abs/2303.17978v1
How Can Mixed Reality Benefit From Physiologically-Adaptive Systems? Challenges and Opportunities for Human Factors Applications

###### Abstract.

Mixed Reality (MR) allows users to interact with digital objects in a physical environment, but several limitations have hampered widespread adoption. Physiologically adaptive systems detecting user's states can drive interaction and address these limitations. Here, we highlight potential usability and interaction limitations in MR and how physiologically adaptive systems can benefit MR experiences and applications. We specifically address potential applications for human factors and operational settings such as healthcare, education, and entertainment. We further discuss benefits and applications in light of ethical and privacy concerns. The use of physiologically adaptive systems in MR has the potential to revolutionize human-computer interactions and provide users with a more personalized and engaging experience.

Mixed Reality, Adaptive Systems, Physiological Computing, Human Factors, Usability

+ Footnote †: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

## 1. Introduction

Mixed reality (MR) systems encompass a broad spectrum that spans from physical reality to virtual reality (VR), including instances that involve overlaying virtual content over physical content, i.e., Augmented Reality (AR), as well as those that use physical content to enhance the realism of virtual environments, i.e., Augmented Virtuality (AV) (Steinteiner, 2017). These instances are typically predefined for a seamless blend of physical and virtual content. MR enables users to interact with digital objects in a physical environment, resulting in immersive and engaging experiences. However, several limitations have hampered the widespread adoption of MR technology (Steiner, 2017; Delfosse et al., 2017). In recent years, researchers have begun to investigate the use of physiologically adaptive systems to address these limitations by developing systems that can respond in real time to the user's physiological state (Beng et al., 2018). Physiologically adaptive systems belong to a group of adaptive systems that employ physiological signals to generate personalized and captivating experiences. They take the user's physiological signals as a form of input, such as peripheral measures, e.g., the electrocardiogram (ECG) (Krause et al., 2017) or electrodermal activity (EDA) (Beng et al., 2018), and central physiological measures, such as electroencephalography (EEG) (Beng et al., 2018; Beng et al., 2018) and functional near-infrared spectroscopy (fNIRS) (Beng et al., 2018), and produce real-time feedback and responses based on the user's physiological state. Physiologically-adaptive systems are based on classic control theory (Krause et al., 2017). This theory involves three main steps: physiological data acquisition and processing, transformation into a system response, and shaping the expected psychophysiological response from the user. These so-called "Biocybernetic control loops" (Beng et al., 2018; Beng et al., 2018) employ negative feedback control to detect deviations from the optimal state and prompt changes in the system to encourage a desirable user state. This process is crucial in creating a responsive and personalized experience for the user. Considering that physical and virtual reality are the two extremes of the MR continuum, this provides a favourable setting for developing adaptive systems.
Adaptive systems can tailor the MR experience to the user's needs and goals by leveraging this continuum, assisting them in achieving optimal performance (Dong et al., 2019), immersion (Song et al., 2019), and engagement (Han et al., 2020). This paper aims to investigate the potential applications of physiologically adaptive systems in MR and discuss their advantages and disadvantages. We will specifically look at the benefits of physiologically adaptive systems in addressing the limitations of MR technology and discuss their potential applications. First, we review the definition of MR and its various forms. We will also look at the current limitations of MR technology and the issues that must be addressed to improve its usability and effectiveness. We will then define physiologically adaptive systems and discuss their characteristics and potential benefits. Second, we discuss potential applications of physiologically adaptive systems in human factors and applied MR settings. For example, healthcare professionals can use such systems to create more engaging and effective patient therapies by providing real-time feedback and support based on the patient's physiological state. By adapting to the student's cognitive and physiological state, these systems can be used in education to create more immersive and engaging learning experiences. By adapting to the player's physiological state and creating more personalized and engaging experiences, these systems can be used in entertainment to create more engaging and immersive games and simulations. Finally, we highlight challenges for physiologically adaptive systems in MR, including technical and theoretical constraints and ethical and privacy concerns. We discuss potential solutions and strategies for dealing with such fundamental issues. ## 2. Mixed Reality The predominant definition of MR is the one provided in the seminal work by Milgram and Kishino (Milgram and Kishino, 1999), referring to the merging of real and virtual worlds in a seamless and interactive environment. It is an interaction spectrum that blends physical and digital realities to create a new, immersive experience for the user. Recently, this perspective has been reviewed by Skarbez et al. (Skarbez et al., 2019). Their revised taxonomy consists of three dimensions: immersion, coherence, and extent of world knowledge. Immersion is determined by a system's objective hardware device specifications and is related to the feeling of spatial presence experienced by the user (Sarbez et al., 2019). Coherence refers to the conformity of different sensory information perceived during an XR experience, leading to an increased plausibility illusion of the experience (Sarbez et al., 2019). The extent of world knowledge describes the degree of reality incorporated into an MR experience, influencing the user's real-world awareness (Sarbez et al., 2019). The authors focus on immersion and coherence and consider important environmental cues that influence the extent of world knowledge. Latoschik and Wienrich (Latoschik and Wienrich, 2019) provide a third perspective that emphasizes that congruence activations between cognitive, perceptual, and sensory layers contribute to MR plausibility. The authors argue that device specifications, like the field of view or resolution, impact device-specific sensory congruence, while content transparency affects congruence. These congruences ultimately affect the plausible generation of spatial cues and spatial presence. 
### Current Limitations for MR Systems Adoption

Despite technical and design advancements in Mixed Reality (MR) technology, significant limitations still prevent it from reaching its full potential and adoption by the general public and professionals. Here, we highlight four main factors that contribute to such limitations. First, a limited field of view (FoV) represents an initial issue in many MR systems. FoV is the area that the user can see through the display, and it is often constrained by the physical size of the device's screen or lenses (Krause et al., 2019). A limited FoV can reduce immersion and realism and lead to visual discomfort (Sarbes et al., 2019), especially when the user must frequently turn their head to view the content (Sarbes et al., 2019). Second, we identify limited interactivity as a primary constraint for MR adoption. MR systems often rely on gesture recognition or voice commands (Sarbes et al., 2019), which can be imprecise and unreliable, leading to frustration and reduced user engagement. This limitation can be a significant barrier to adopting MR in some domains, such as entertainment applications (Sarbes et al., 2019), i.e., gaming, or when it adds to an existing cognitive load, such as in education settings (Sarbes et al., 2019). Third, while modern MR devices can display highly detailed virtual content alone, their embedding into physical reality limits their plausibility (Sarbes et al., 2019), ultimately leading to reduced realism. On the contrary, high levels of realism can strengthen the efficiency of training simulations (Krause et al., 2019). Still, on the other hand, increasing the detail and amount of virtual content implicitly impacts the MR visual complexity (Sarbes et al., 2019), which has been shown to influence behavioural performance and physiological arousal (Sarbes et al., 2019; Skarbez et al., 2019; Sarbes et al., 2019). Finally, limited adaptability is another significant limitation of MR systems. Many MR applications are pre-defined and cannot adapt to the user's changing needs or physical state. This limitation can reduce the effectiveness of MR applications and lead to reduced user engagement and long-term usage.

## 3. Physiologically-adaptive systems in MR

Physiologically-adaptive systems are systems designed to interact with and respond to the physiological states and changes of the human body. These systems typically employ sensors and algorithms to monitor and analyze physiological signals such as ECG, EDA and EEG to drive interactions towards a specific state based on the cybernetics approach (Sarbes et al., 2019). The cybernetics approach has found various applications ranging from developing new control channels (Sarbes et al., 2019) to task adaptation in response to changes in workload (Sarbes et al., 2019) and motivation (Sarbes et al., 2019). However, most of the work focused on desktop settings. Only recently have MR settings begun proliferating, enabling the creation of environments and interactions far more engaging and expressive than traditional desktop programs (Han et al., 2017; Wang et al., 2018). MR is now one of the most favourable environments for physiological computing systems. MR enables online adjustments and adaptation of visualizations, digital content, blending, and interactions that resemble real-world ones. However, it is not currently feasible in physical settings (VR) or augmenting them (AR, AV).
Introducing physiological interaction into MR can increase its ability to monitor and adapt to implicit human behaviour. Physiologically adaptable MR systems can identify user states and direct interaction characteristics toward a (shared) objective depending on physiological input.

### Benefits of Physiologically-Adaptive systems for MR

With regard to the limitations of MR adoption, we identify how physiologically-adaptive systems can enhance MR interaction and address possible usability constraints. While the limited field of view (FoV) in MR devices is primarily a hardware limitation that may be challenging to address through physiological adaptivity alone, monitoring attention and gaze can still play a role in enhancing the user experience within the existing FoV limitations. Physiological inputs such as eye gaze and torso movements and their temporal alignment can be employed for attention, interest, and intent detection and as context and input (Sundundar et al., 2016). Moreover, EEG features such as alpha and theta oscillations discriminated between internally and externally directed attention in AR (Sundar et al., 2016) and VR settings (Sundar et al., 2017). This information can be used to dynamically adjust the field of view of the MR system, for example, by zooming in on areas of interest, providing multisensory cues to direct attention towards hidden areas, and blurring distracting information. Limited interactivity refers to situations where the user has limited ability to control or manipulate the virtual objects in the MR environment. This can occur due to factors such as the complexity of the interface or the user's cognitive workload. Limited interactivity can benefit from neuro- and electrophysiological measures such as EEG and fNIRS for workload (Han et al., 2017; Wang et al., 2018) and attention detection (Sundar et al., 2016), enlarging the design space for interaction. For instance, if the user is experiencing cognitive overload or boredom (Han et al., 2017), the system can simplify the interface or adjust the task difficulty level to maintain engagement and interest. Additionally, if the user is experiencing unpleasant states such as frustration or anxiety (Sundar et al., 2016), the system can present positive stimuli to divert the user from their emotional state (Sundar et al., 2016) and maintain their attention on the task (Han et al., 2017). Third, the limited realism could be controlled and adapted based on autonomic arousal, i.e., EDA or ECG, leveraging its effect on the user's physiological activation. This physiological input can be used to adjust the level of MR visual fidelity, for example, by adding or removing sensory cues to enhance the user's emotional experience (Beng et al., 2018; Wang et al., 2018), or to support target detection when the user is engaged in visual search (Beng et al., 2018; Wang et al., 2018). Lastly, physiologically-adaptive systems are central to increasing reactivity and adaptability. Employing physiological data as a passive input and concurrently adapting either task or environmental features allows for more dynamic interaction, controlling for undesirable states such as anxiety or boredom (Beng et al., 2018; Wang et al., 2018), improving motivational engagement (Han et al., 2017), and thereby allowing users to maintain focus on the current task and perform optimally.
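None of the systems cited above is reproduced here; as a generic illustration of the negative-feedback logic behind such adaptations, the following sketch maps a made-up normalized arousal estimate (standing in for real EDA/EEG processing) to an adjustment of MR visual complexity. All names, the target value, and the proportional update rule are assumptions for illustration only.

```python
import random
import time

TARGET_AROUSAL = 0.5   # desired operating point (normalized; illustrative)
GAIN = 0.2             # proportional gain of the negative-feedback adjustment

def read_arousal_estimate() -> float:
    """Placeholder for acquisition + feature extraction, e.g., a normalized
    arousal or workload index derived from EDA or EEG band power."""
    return random.random()

def set_visual_complexity(level: float) -> None:
    """Placeholder for the MR-side adaptation, e.g., amount of virtual content,
    level of detail, or task difficulty."""
    print(f"visual complexity -> {level:.2f}")

complexity = 0.5
for _ in range(10):                       # one pass per control cycle
    arousal = read_arousal_estimate()     # 1) acquire and process the signal
    error = arousal - TARGET_AROUSAL      # 2) detect deviation from the target state
    complexity = min(1.0, max(0.0, complexity - GAIN * error))  # 3) adapt the system
    set_visual_complexity(complexity)     # the user's response closes the loop
    time.sleep(0.1)
```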
Figure 2. In a biocybernetic control loop, the Adaptive System continuously processes and extracts informative features from the physiological signals and detects the user state based on an algorithm. It then adjusts its behaviour for device control across diverse applications and returns feedback to the user. This closed loop allows for iterative adaptation and optimization of visualization, content, and interaction.

### Potential Applications of Physiologically Adaptive Systems in Applied MR Settings

This combination of implicit physiological monitoring and MR environment adaptation can be defined as a closed-loop model. Since their original conception and design in the seminal work of Pope et al. (Pope et al., 2016), biocybernetic closed loops have had many implications in human factors and applied settings, such as aviation (Moschovitis and Denisova, 2017), healthcare (Moschovitis et al., 2017), and other highly demanding environments (Pope et al., 2016). We envision three operational settings where physiologically-adaptive MR environments can be beneficial: healthcare, education, and entertainment. Physiologically adaptive systems in the healthcare industry can deliver customized therapies suited to the patient's psychophysiological condition. Physiological measures, for example, can be used in mental health to assess signals related to stress, anxiety, and depression. Such information improves the patient's exposure therapy by adapting the degree of realism or intensity of the phobic stimuli presented either in VR or in AR (Moschovitis et al., 2017). Similarly, adaptive systems may be used in physical therapy to monitor patients' progress and offer real-time feedback on their movements, allowing therapists to change the intensity of exercises to guarantee optimal recovery and rehabilitation (Bouquet et al., 2017; Bohn et al., 2017). In educational settings, physiologically adaptive systems can be used to improve learning outcomes. Recently, many companies and educational institutions have allocated considerable resources to transitioning from traditional desktop education to immersive MR applications, expecting that a higher level of immersion would correspond to increased motivation and learning. Physiological monitoring can aid in technology-based educational decision-making to assist cognitive processes, i.e., information processing (Pope et al., 2016), emotions, i.e., frustration (Pope et al., 2016), and motivational and metacognitive processes (Moschovitis et al., 2017), i.e., learners' self-regulation behaviours (Pope et al., 2016). Related to educational settings are also the professional training MR environments (Pope et al., 2016). Finally, the entertainment industry can benefit from the design of physiologically-adaptive games (Pope et al., 2016). Besides adjusting the game realism to support immersion (Pope et al., 2016) or employing dynamic difficulty adjustments (Pope et al., 2016), adaptive gaming can pursue and drive interactions towards less socially acceptable goals. For example, Moschovitis and Denisova (Pope et al., 2016) showed how they could increase game engagement using a biofeedback-controlled game that elicited physiological responses associated with fear and anxiety. Their results show how stimuli perceived as unpleasant on the surface might result in a positive subjective outcome.
Finally, gamification approaches can benefit entertainment purposes and be applied and generalized to different settings, such as therapy, treatment of anxiety, and cognitive rehabilitation and training.

## 4. Ethical and Privacy Considerations for Implementing Physiologically Adaptive Systems in Mixed Reality

Within our perspective endorsing a progressive implementation and investigation of physiologically adaptive systems in MR, we have to foresee downsides and considerations regarding ethics and privacy. One of the primary ethical considerations for systems that employ data over which users do not have complete explicit control is the issue of informed consent. Users must be fully aware of how physiological data are collected, used, and shared. This is relevant when their data are employed for model training and validation. Second, physiological states can underlie different emotional valences, implying that such systems might manipulate or influence users' emotions. Therefore, researchers must prioritize ethical design and inform participants about which state the system is optimizing for. Lastly, they should allow participants to return to a neutral affective state if users perceive their final state as undesirable. This is critical as users must retain control over the adaptation and state adjustment process. Third, privacy concerns are associated with physiologically adaptive systems in MR. This perspective was already raised by Fairclough (Fairclough, 2017), highlighting how symmetrical interaction and adaptation between systems and users might lead to asymmetrical data usage and protection. Again, Hancock and Szalma (Hancock and Szalma, 2017) highlight that if a physiological computing system respects data protection rights, individuals should retain formal and legal ownership of their psychophysiological data. This implies that any third party should receive access to such information only with approval by the user. This is relevant considering that physiological data might not only underlie specific cognitive or affective states but could also be used for medical diagnostic purposes. An initial compromise solution is using a privacy-by-design approach by embedding privacy considerations into every stage of the design and development process. This includes conducting privacy impact assessments, implementing privacy-enhancing technologies, and using privacy-preserving data collection in every implementation stage of the physiologically-adaptive systems.

## 5. Conclusion

In conclusion, MR technology holds great potential for creating immersive and engaging experiences, especially when employing physiologically adaptive systems that allow users to interact with personalized visualizations, content, and interactions. We highlighted how MR experiences could overcome challenges and limitations by embedding biocybernetic paradigms in their systems and depicted future concerns for their implementation. HCI, MR, and adaptive systems research fields can all benefit from the enormous potential of adopting and exploring physiological computing and interaction paradigms. However, such opportunities will only be realized if these fundamental difficulties are addressed by present research in this area.

## Acknowledgments

Francesco Chiossi was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project ID 251654672, TRR 161.
Mixed Reality (MR) allows users to interact with digital objects in a physical environment, but several limitations have hampered its widespread adoption. Physiologically adaptive systems that detect the user's state can drive interaction and address these limitations. Here, we highlight potential usability and interaction limitations of MR and how physiologically adaptive systems can benefit MR experiences and applications. We specifically discuss in detail the benefits such systems can bring to human factors and operational settings such as healthcare, education, and entertainment. We further discuss these benefits and applications in light of ethical and privacy concerns. The use of physiologically adaptive systems in MR has the potential to revolutionize human-computer interaction and to provide users with personalized engagement
2309.12683
Accelerating the laser-induced phase transition in nanostructured FeRh via plasmonic absorption
By ultrafast x-ray diffraction we show that the laser-induced magnetostructural phase transition in FeRh nanoislands proceeds faster and more completely than in continuous films. We observe an intrinsic 8 ps timescale for nucleation of ferromagnetic (FM) domains in both types of samples. For the continuous film, the substrate-near regions, which are not directly exposed to light, are only slowly transformed to the FM state by domain wall motion following heat transport. In contrast, numerical modeling of the plasmonic absorption in the investigated nanostructure reveals a strong contribution near the FeRh/MgO interface. On average, the absorption is larger and more homogeneous in the nanoislands, enabling the phase transition throughout the entire volume at the intrinsic nucleation timescale.
Maximilian Mattern, Jan-Etienne Pudell, Jon Ander Arregi, Jakub Zlámal, Radek Kalousek, Vojtěch Uhlíř, Matthias Rössle, Matias Bargheer
2023-09-22T07:44:42
http://arxiv.org/abs/2309.12683v2
# Accelerating the laser-induced phase transition in nanostructured FeRh via plasmonic absorption ###### Abstract By ultrafast x-ray diffraction we show that the laser-induced magnetostructural phase transition in FeRh nanoislands proceeds faster and more completely than in continuous films. We observe an intrinsic 8 ps timescale for nucleation of ferromagnetic (FM) domains in both types of samples. For the continuous film, the substrate-near regions, which are not directly exposed to light, are only slowly transformed to the FM state by domain wall motion following heat transport. In contrast, numerical modeling of the plasmonic absorption in the investigated nanostructure reveals a strong contribution near the FeRh/MgO interface. On average, the absorption is larger and more homogeneous in the nanoislands, enabling the phase transition throughout the entire volume at the intrinsic nucleation timescale. + Footnote †: preprint: APS/123-QED

Reducing the structure size of metallic ferromagnets to the nanoscale not only helps increase the information storage density but also enables direct plasmonic coupling of light to the magnetic nano-bit for magnetoplasmonic control and readout [1; 2]. This is a particularly exciting perspective in the context of femtosecond optomagnetism [3] with ultrafast optical manipulation of magnetic properties such as a polarization control of two magnetic nanolayers mediated by plasmon-polaritons [4] and plasmonic enhanced all-optical switching in magnetic nanodisks [5]. Heat-assisted magnetic recording (HAMR) [6; 7] already uses optical near fields to confine the magnetic switching in the new generation of magnetic hard drives to a few nanometers. Resonant magnetic x-ray scattering studies confirmed plasmonically enhanced ultrafast switching for nano-granular FePt thin films, which constitute the classical HAMR material [8]. The potential consequences of nanostructuring FeRh go well beyond plasmonic coupling. Lateral nanostructuring limits the number of independent nucleation sites through the antiferromagnetic-to-ferromagnetic (AF-FM) phase transition around 370 K, which changes the nature of magnetization reversal from multi-domain to single-domain and results in discrete avalanche-like jumps of the order parameter upon cooling [9; 10]. In thermal equilibrium, the phase transition, which is accompanied by a 1% volume expansion, crucially depends on the lattice structure. The tetragonal distortion of the unit cell originating from an in-plane substrate-induced compression enhances the transition temperature [11; 10; 12]. In FeRh nanoislands, the partial relaxation of this tetragonal distortion reduces the transition temperature [10; 13]. Generally, in-plane nano-structuring unlocks in-plane expansion on the picosecond timescale, in contrast to the exclusive out-of-plane expansion of laser-excited continuous thin films [14]. The three-dimensional nature of the picosecond strain response of nanoislands preserves bulk-like material-specific expansion properties and results in a complex strain response due to cross-talk of in- and out-of-plane expansion via the Poisson effect [15; 16; 17; 18]. Previous experiments studied the laser-induced phase transition in FeRh by the emergence of magnetization [19; 20; 21; 22], changes in the electronic structure [23] and the rise of the larger FM lattice constant [24; 25; 26]. 
Probing the structural order parameter by ultrafast x-ray diffraction (UXRD), we recently disentangled FM domain nucleation and growth in inhomogeneously excited FeRh continuous films, whose thickness exceeds the optical penetration depth [26]. We identified a universal 8 ps nucleation timescale in FeRh, which does not depend on the film thickness and temperature, nor on the applied laser fluence and magnetic field [26]. The effects of nanostructuring on the coupled ultrafast dynamics of demagnetization [17], remagnetization [27] and strain [15; 16; 17] have been thoroughly studied for FePt. Ultrafast experiments on FeRh nanoislands that study the influence of the in-plane expansion, reduced number of nucleation sites and plasmonic excitation are still lacking. In this letter, we explore for the first time the kinetics of the laser-driven phase transition in FeRh nanoislands by probing, via UXRD, the rise of the larger lattice constant that parameterizes the FM phase as structural order parameter. In order to assess the effect of finite lateral dimensions, we compare the results to a similarly thick continuous FeRh film as reference. In the nanoislands, the AF-FM phase transition drives a partial in-plane expansion both in equilibrium and on ultrafast timescales. Upon laser excitation, we observe the same 8 ps nucleation timescale in both samples, indicating an intrinsic property irrespective of the sample morphology. However, while we observe a relatively slow heat transport-driven domain growth in the thin film, the phase transition of the nanostructured film occurs precisely on the intrinsic timescale of domain nucleation. By modeling the absorption of the nanostructures, we relate this acceleration of the phase transition to a homogeneous optical excitation due to plasmonic effects enabled by the size of the metal islands below the excitation wavelength. Figures 1(a) and (b) sketch the continuous and nanostructured FeRh films grown on MgO(001) substrates. The continuous 55 nm thick FeRh(001) film was grown by magnetron sputtering from an equiatomic FeRh target [11] and capped by 2 nm of Pt. The nanostructured sample is composed of epitaxial FeRh(001) nanoislands formed by solid state dewetting of an epitaxial FeRh(001) film via self-assembly, resulting in maze-like structures [13] with a mean height of 52 nm. Static and time-resolved reciprocal space maps around the (002) FeRh Bragg peak are recorded at the KMC-3 XPP endstation at BESSY II in the low-alpha operation mode [28] with monochromatized 9 keV hard x-ray photons. The diffracted intensity in Figs. 1(c) and (f) displays the emergence of an additional Bragg peak at lower values of the out-of-plane reciprocal lattice coordinate \(q_{z}\) when the temperature approaches the mean transition temperature of the thin film (370 K) and the nanoislands (350 K), respectively. The integrated intensity of this Bragg peak is directly proportional to the volume of FeRh exhibiting the FM phase [24] and thus parameterizes the FM phase during the temperature-driven AF-FM phase transition. The proportion of the FM Bragg peak in the total intensity yields the temperature-dependent FM volume fraction \(V_{\text{FM}}\). Figures 1(d) and (g) compare this structural parameterization of the phase transition (symbols) to the macroscopic magnetization normalized to its maximum (solid lines), which serves as a complementary order parameter of the FM phase. 
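As a concrete illustration of this decomposition (a minimal sketch rather than the authors' analysis code: the Gaussian line shape, peak positions, widths and noise level below are assumptions for demonstration only), the FM volume fraction can be extracted by fitting the intensity profile along \(q_z\) with two peaks and taking the ratio of integrated intensities:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, amp, q0, sigma):
    return amp * np.exp(-0.5 * ((q - q0) / sigma) ** 2)

def two_peaks(q, a_af, q_af, s_af, a_fm, q_fm, s_fm):
    # Superposition of an AF and an FM Bragg peak along q_z
    return gaussian(q, a_af, q_af, s_af) + gaussian(q, a_fm, q_fm, s_fm)

# Hypothetical intensity profile I(q_z); in practice this comes from the
# theta-2theta scans around the (002) reflection.
q = np.linspace(4.15, 4.25, 200)          # q_z in 1/Angstrom (illustrative)
I = two_peaks(q, 1.0, 4.21, 0.008, 0.4, 4.19, 0.008) + 0.01 * np.random.randn(q.size)

p0 = [1.0, 4.21, 0.01, 0.3, 4.19, 0.01]   # initial guess: AF peak, then FM peak
popt, _ = curve_fit(two_peaks, q, I, p0=p0)

# Integrated intensity of a Gaussian is amp * sigma * sqrt(2*pi)
area_af = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
area_fm = popt[3] * abs(popt[5]) * np.sqrt(2 * np.pi)
V_FM = area_fm / (area_af + area_fm)      # FM volume fraction
print(f"V_FM = {V_FM:.2f}")
```

The FM peak is placed at lower \(q_z\) than the AF peak, reflecting the larger FM lattice constant described in the text.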
The magnetization is measured via vibrating sample magnetometry (VSM) using a QuantumDesign VersaLab magnetometer, which results in a broadening of the hysteresis due to the heterogeneous transition temperature at different sample sites, in contrast to the narrow hysteresis of the structural order parameter measured locally (\(300\times 300\,\mu\mathrm{m}^{2}\)) via x-ray diffraction. The comparison of the two samples reveals a dependence of the AF-FM phase transition in thermal equilibrium on the sample morphology. The enhanced surface-to-volume ratio of the nanoislands results in a noticeable residual FM phase that persists below the transition temperature \(T_{\text{T}}\) at the symmetry-breaking surface [29] and at the film-substrate interface [30]. In addition, the small lateral extent of the islands partially relaxes the substrate-induced tetragonal distortion of FeRh, which lowers the transition temperature for the nanoislands [10; 11; 13]. This is indicated by the lower mean out-of-plane lattice constant \(d\) with respect to the continuous film (see Figs. 1(e) and (h)) given by the center-of-mass (COM) of the diffracted intensity via \(d=4\pi/q_{z,\text{COM}}\). This applies in particular to the out-of-plane expansion associated with the phase transition. While we find 0.4% expansion for the nanoislands, close to the bulk value of 0.3% [31], the substrate-induced clamping of the in-plane expansion suppresses the Poisson effect [14] and results in an out-of-plane expansion of 0.6% for the thin film.

Figure 1: **Morphology-dependent phase transition in thermal equilibrium:** Sketch of the UXRD experiment mapping the reciprocal space via \(\theta-2\theta\)-scans and the sample structure of the continuous film (a) and the nanoislands (b). Panels (c–e) and (f–h) characterize the equilibrium AF-FM phase transition in the continuous film and the nanostructures, respectively. (c, f) The diffracted intensity (grey symbols) is the superposition of an AF and an arising FM Bragg peak at a larger out-of-plane lattice constant during heating above \(T_{\text{T}}\). (d, g) Temperature-dependent ferromagnetic volume fraction \(V_{\text{FM}}\) determined by the relative integrated intensity of the FM Bragg peak (symbols) as structural order parameter and the magnetization normalized to its maximum value as magnetic order parameter (solid lines). (e, h) Temperature-dependent lattice constant (symbols) modeled by Eq. (1) using bulk expansion parameters (solid lines).

Accounting for the different substrate-induced constraints of the in-plane expansion, the out-of-plane expansion of the FeRh samples is described by [14]:
\[\alpha_{\rm eff}=\alpha_{\rm bulk}(T)+2\chi\frac{c_{1133}}{c_{3333}}\left(\alpha_{\rm bulk}(T)-\alpha_{\rm MgO}\right)\,, \tag{1}\]
where \(\alpha_{\rm bulk}(T)\) and \(\alpha_{\rm MgO}=10.5\cdot 10^{-6}\,\rm K^{-1}\) denote the thermal expansion coefficients of bulk FeRh and MgO, respectively. The expansion of FeRh \(\alpha_{\rm bulk}(T)\) is given by the expansion coefficients \(\alpha^{\rm AF}=9.7\cdot 10^{-6}\,\rm K^{-1}\) and \(\alpha^{\rm FM}=6.0\cdot 10^{-6}\,\rm K^{-1}\) in the AF and FM phase [32] and the expansion of 0.3% across the AF-FM phase transition [31], considering the temperature-dependent volume fraction in the FM phase \(V_{\rm FM}(T)\), which we derived from the integrated intensity of the FM Bragg peak in Figs. 1(c) and (f). The elastic constants of FeRh \(c_{1133}\) and \(c_{3333}\) quantify the effect of the in-plane expansion on the out-of-plane expansion via the Poisson effect [14], and \(\alpha_{\rm eff}\) denotes the modified expansion coefficient of the samples depending on the parameter \(\chi\). This parameter measures the epitaxy to the substrate, where \(\chi=0\) corresponds to pure bulk-like in-plane expansion and \(\chi=1\) to an in-plane expansion completely determined by the MgO substrate. Our modeling of the temperature-dependent lattice constant in Figs. 1(e) and (h) (symbols) by Eq. (1) (solid line) yields excellent agreement for \(\chi=1\) and \(\chi=0.42\) for the continuous thin film and the nanoislands, respectively. While the thin film purely follows the in-plane expansion of the MgO substrate (\(\chi=1\)), the nanoislands behave partially bulk-like (\(\chi=0.42\)). This relaxation of the in-plane constraints is expected to increase towards the surface and to depend on the in-plane dimensions of the different nanoislands [33].
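To make Eq. (1) concrete, the following sketch evaluates the effective out-of-plane expansion coefficient for the two limiting epitaxy parameters found here (\(\chi=1\) for the film, \(\chi=0.42\) for the islands). The elastic-constant ratio \(c_{1133}/c_{3333}\) is a placeholder, not a value quoted in the text, and the 0.3% expansion step at the AF-FM transition itself is omitted, so only the qualitative behaviour should be read off:

```python
# Minimal sketch of Eq. (1): effective out-of-plane expansion coefficient.
# The elastic-constant ratio below is a placeholder, not taken from the paper,
# and the 0.3% expansion jump across the transition is not included.
alpha_AF  = 9.7e-6   # 1/K, bulk FeRh expansion coefficient in the AF phase [32]
alpha_FM  = 6.0e-6   # 1/K, bulk FeRh expansion coefficient in the FM phase [32]
alpha_MgO = 10.5e-6  # 1/K, MgO substrate expansion coefficient
c_ratio   = 0.4      # c_1133 / c_3333 (assumed placeholder value)

def alpha_bulk(V_FM):
    """Bulk expansion coefficient interpolated by the FM volume fraction."""
    return (1.0 - V_FM) * alpha_AF + V_FM * alpha_FM

def alpha_eff(V_FM, chi):
    """Eq. (1): chi = 1 -> in-plane clamped to MgO, chi = 0 -> bulk-like."""
    a_b = alpha_bulk(V_FM)
    return a_b + 2.0 * chi * c_ratio * (a_b - alpha_MgO)

for chi, label in [(1.0, "continuous film"), (0.42, "nanoislands")]:
    print(label, "AF phase:", alpha_eff(0.0, chi), "FM phase:", alpha_eff(1.0, chi))
```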
In the UXRD experiment, the FeRh samples are excited by a 600 fs p-polarized pump pulse with a central wavelength of 1028 nm that is incident at \(\approx 30^{\circ}\) with respect to the sample surface. As sketched in Fig. 1(a), we probe the emergence of the FM Bragg peak that parameterizes the laser-induced AF-FM phase transition by 17 ps, 9 keV hard x-ray pulses [28], performing symmetric \(\theta\)-\(2\theta\) scans around the (002) Bragg reflection at \(28^{\circ}\). Figures 2(a) and (b) display the diffracted intensity along \(q_{z}\) before and after an excitation with 12.0 mJ/cm\({}^{2}\) at 340 K for the thin film and with 5.2 mJ/cm\({}^{2}\) at 230 K for the nanoislands, respectively. The emerging FM Bragg peaks indicate the optically induced AF-FM phase transition for both samples. The AF and FM Bragg peaks are well separated for the thin film. For the nanoislands, an ultrafast in-plane expansion on short timescales is enabled by their small lateral extent [14]. The concomitant transient out-of-plane Poisson-contraction results in less separated Bragg peaks (Fig. 2(b)) for the nanoislands. This indicates a reduced tetragonal distortion of the unit cell upon laser excitation and the emergence of more cubic FM unit cells upon nucleation in the nanoislands, as already discussed above for the phase transition in thermal equilibrium.

Figure 2: **Reduced laser-induced out-of-plane expansion in FeRh nanostructures:** (a) Transient Bragg peaks of the thin film for an excitation of 12.0 mJ/cm\({}^{2}\) at 340 K dissected into the FM (green) and AF (blue) Bragg peak that are well-separated in reciprocal space. (b) In the nanoislands, the FM (pink) and AF (purple) Bragg peak are less separated due to the partial in-plane expansion of the unit cell across the laser-induced phase transition for 5.2 mJ/cm\({}^{2}\) at 230 K. The data for different pump-probe delays are vertically offset to improve visibility.

Figure 3: **Nucleation-dominated phase transition in FeRh nanoislands:** (a) Transient FM volume fraction of the nanoislands at \(T=190\) K for various fluences \(F\). (b) Same for \(T=230\) K, which increases the conversion to the FM state at low fluence. (c) Temperature series at a relatively low fluence of \(F=8.6\) mJ/cm\({}^{2}\) for the thin film. (d) Same for \(F=12.0\) mJ/cm\({}^{2}\). The two-step rise of \(V_{\rm FM}\) in the thin film (c and d) indicates a growth of the nucleated FM domains into the depth of the layer driven by near-equilibrium heat transport. In all panels, solid lines denote the kinetics of FM domain nucleation according to Eq. (2) convolved with the time-resolution given by the duration of the x-ray pulse.
In addition to the lattice constant change across the phase transition, the dynamics of the laser-induced phase transition also depends on the sample morphology. While the integrated intensity of the FM Bragg peak barely changes between 40 and 240 ps for the nanoislands, the FM Bragg peak keeps increasing after 40 ps for the thin film. Figure 3 displays the resulting transient FM volume fraction \(V_{\mathrm{FM}}\) for both samples under various excitation conditions. The solid lines denote the expected dynamics for nucleation of FM domains at independent sites described by Eq. (2) with the previously identified universal nucleation timescale \(\tau=8\) ps [26], convolved with the 17 ps-long x-ray pulse limiting the time-resolution. The final FM volume fraction \(V_{\mathrm{FM}}^{*}\) is adjusted to the experimental value of \(V_{\mathrm{FM}}(t=40\) ps) for the respective measurement, and we include the finite residual FM phase that is present in the nanoislands before excitation: \[V_{\mathrm{FM}}(t)=V_{\mathrm{FM}}^{*}\cdot\left(1-e^{-t/\tau}\right)\;. \tag{2}\] For the nanoislands, the transient FM volume fraction in Figs. 3(a-b) is well described by Eq. (2), indicating that nucleation dominates the laser-induced AF-FM phase transition. With increasing fluence and initial sample temperature, a larger fraction of the nanoislands is excited above the critical threshold characteristic of first-order phase transitions [20; 24], which results in an enhanced \(V_{\mathrm{FM}}^{*}\). Figs. 3(c-d) show that within the first 40 ps the laser-induced phase transition in the continuous film is equally well described by Eq. (2) (solid lines) with the same 8 ps nucleation timescale. Thus, the nucleation kinetics of the AF-FM phase transition in FeRh is insensitive to the substrate-induced or transient tetragonal distortion of the unit cell, which is smaller in the case of the nanoislands, as identified by the less separated AF and FM Bragg peaks shown in Fig. 2. However, we observe a second, slow contribution to the rise of \(V_{\mathrm{FM}}\) in the continuous film at initial sample temperatures above 300 K. In a previous publication [26], we revealed this two-step rise of the FM phase to originate from a nucleation of FM domains within the optically excited near-surface region and a subsequent growth of the domains into the depth of the inhomogeneously excited layer driven by near-equilibrium heat transport. The equilibration of the phonon temperature within the continuous film via heat diffusion slowly heats the backside of the film and induces the phase transition if the temperature surpasses \(T_{\mathrm{T}}\). For initial sample temperatures only slightly below \(T_{\mathrm{T}}\) and higher excitation fluences, a larger fraction of the film is heated above \(T_{\mathrm{T}}\) and undergoes the phase transition [26]. This results in an enhanced delayed contribution arising from domain growth, as displayed in Figs. 3(c-d).
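As an illustration of how the model curves in Fig. 3 can be constructed (a sketch, not the authors' fitting code: the Gaussian pulse shape and the value of \(V_{\mathrm{FM}}^{*}\) are assumptions), Eq. (2) can be convolved numerically with the 17 ps probe duration:

```python
import numpy as np

tau     = 8.0     # ps, intrinsic nucleation timescale
t_pulse = 17.0    # ps, x-ray pulse duration (treated here as a Gaussian FWHM)
V_star  = 0.6     # illustrative final FM volume fraction, set to V_FM(t = 40 ps)

dt = 0.1
t = np.arange(-100.0, 250.0, dt)                      # pump-probe delay grid, ps
V_intrinsic = np.where(t > 0, V_star * (1 - np.exp(-t / tau)), 0.0)  # Eq. (2)

sigma = t_pulse / 2.355                               # FWHM -> standard deviation
n = int(4 * sigma / dt)
t_k = dt * np.arange(-n, n + 1)                       # symmetric kernel grid
kernel = np.exp(-0.5 * (t_k / sigma) ** 2)
kernel /= kernel.sum()

V_measured = np.convolve(V_intrinsic, kernel, mode="same")  # time-resolution blur
print(V_measured[np.searchsorted(t, 40.0)])           # value near t = 40 ps
```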
To explain the absence of such a heat transport-driven domain growth in the nanoislands, we calculate their optical absorption for the experimental excitation conditions in COMSOL, utilizing the refractive index of MgO \(n_{\mathrm{MgO}}=1.72\) [34] and of FeRh \(n_{\mathrm{FeRh}}=4.67+5.58i\). The latter was measured via spectroscopic ellipsometry for a similar FeRh film grown under the same conditions. For this, we reproduce the topography of the nanoislands characterized by AFM (Fig. 4(a)) in COMSOL by ellipsoids, utilizing an algorithm for rapid contour detection [35] (see Fig. 4(b)). Figure 4(c) displays the local power absorption \(P_{\mathrm{abs}}\) of the nanostructures, which reveals the existence of plasmonic hot-spots with drastically enhanced absorption.

Figure 4: **Optical excitation of nanostructured FeRh:** Re-build of the topography of the FeRh nanostructures characterized by AFM (a) in COMSOL (b). (c) Local absorbed power per area of the FeRh nanostructures calculated in COMSOL by solving the Maxwell equations. (d) Local optical penetration depth determined from \(P_{\mathrm{abs}}\) as a function of the depth relative to the local height \(h\). (e) Absorption of different nanoislands as a function of \(z\) at \(y=3.2\,\mu\mathrm{m}\). (f) Integrated absorption of the nanoislands as a function of \(z\) (blue symbols). The purple solid line displays the \(z\)-dependent absorption of a hypothetical ensemble of continuous FeRh films resembling the height distribution of the nanoislands, and the grey line denotes the absorption profile of the continuous 55 nm thick FeRh film.

By fitting an exponential decay function to the local \(z\)-dependent absorption (the FeRh-MgO interface corresponds to \(z=0\)), we find a large spread of the optical penetration depth \(\delta_{\text{p}}\). Figure 4(d) shows this distribution relative to the semi-infinite medium value \(\delta_{\text{p,0}}=14.7\,\)nm. The yellow color code indicates a locally strongly enhanced optical penetration depth due to nanostructuring. The exemplary lineout of the dissipated power at \(y=3.2\,\mu\mathrm{m}\) in Fig. 4(e) depicts the \(z\)-dependent absorption as a function of the in-plane \(x\) coordinate, with a characteristic plasmonic enhancement near the FeRh-MgO interface [36] that is responsible for the local absorption hot-spots shown in Fig. 4(c). This increases the absorption at depths that in thin films receive only a negligible, exponentially damped amount of light energy. Both the penetration depth enhancement and the optical power dissipation build-up near the FeRh-MgO interface make the absorption of the pump pulse in the nanostructures more homogeneous with respect to the out-of-plane \(z\)-coordinate than in a continuous thin film. The average total absorption in the nanostructures as a function of the distance from the FeRh-MgO interface (\(z=0\)) is displayed by symbols in Fig. 4(f). In addition, the grey solid line denotes the \(z\)-dependent absorption of a \(55\,\)nm FeRh film scaled by the surface coverage of the nanoislands (\(49\,\%\)) in the studied sample. Its thickness is comparable to the average nanoisland height of \(52\,\)nm. Lastly, the purple solid line represents the integrated \(z\)-dependent optical absorption from each pixel, assuming its local profile is identical to that of a continuous film of an equivalent thickness. The decrease of the absorption for large \(z\) values (purple line) agrees with the COMSOL simulation additionally including plasmonic effects (symbols) and exclusively originates from the decreasing number of nanoislands higher than the \(z\) value. However, the average absorption of the nanostructures (symbols) for \(z<30\,\)nm is much larger than that predicted by the pixel-integrated absorption of equivalent thin films, which neglects the plasmonic enhancement near the FeRh-MgO interface.
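The local penetration depths entering Fig. 4(d) follow from an exponential fit of the simulated absorption versus depth. A minimal version of such a fit is sketched below; the depth profile is synthetic (the real input is the COMSOL solution), and the sign and origin conventions are simplified:

```python
import numpy as np

# Synthetic z-dependent absorbed power at one lateral position (illustrative only);
# in the analysis this profile comes from the COMSOL solution of Maxwell's equations.
z = np.linspace(0.0, 50.0, 60)        # depth coordinate, nm (simplified convention)
delta_true = 25.0                     # nm, assumed local penetration depth
P_abs = 1.0 * np.exp(-z / delta_true) * (1 + 0.02 * np.random.randn(z.size))

# Fit ln(P) = ln(P0) - z/delta_p with a straight line (log-linear fit)
slope, intercept = np.polyfit(z, np.log(P_abs), 1)
delta_p = -1.0 / slope
print(f"delta_p ~ {delta_p:.1f} nm (semi-infinite reference: 14.7 nm)")
```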
Comparing the area under the grey and dotted curves in Fig. 4(f), we find that the total \(z\)-integrated optical absorption of the nanostructures amounts to \(34\,\%\) of the incident power, which exceeds the absorption of the \(55\,\)nm-thick continuous FeRh film with the same value by a factor of \(1.5\). In essence, the optical excitation of the nanostructures is significantly more homogeneous along their vertical extent than for the thin film case, where only the near-surface region receives a sizeable light intensity. This stronger and almost homogeneous excitation of the complete volume of the nanostructures supports a nucleation-driven phase transition throughout the entire nanoislands. This suppresses the slow phase transition by domain wall motion observed for the thin film (Figs. 3(c-d)), which is driven by near-equilibrium heat transport. This drastically accelerates the laser-induced phase transition in FeRh nanostructures with small lateral extension, which is even more efficiently driven due to the overall enhanced absorption. In summary, we studied the morphology dependence of the laser-induced AF-FM phase transition by comparing a continuous and a nanostructured thin FeRh film. We find an ultrafast in-plane expansion of the nanoislands, whereas the thin FeRh film is pinned to the MgO. This results in a smaller tetragonal distortion of the unit cell across the phase transition; however, it has no influence on the nucleation timescale of the FM domains. Instead, only plasmonic effects change the dynamics of the phase transition: By modelling the spatially resolved optical absorption of the FeRh nanostructures, we identified a plasmon-enhanced absorption near the FeRh-MgO interface and an enhanced optical penetration depth. This results in a homogeneous excitation of the nanoislands, which drives a nucleation of FM domains on an \(8\,\)ps timescale within the volume of the FeRh nanostructures and makes slow heat-transport-driven domain growth irrelevant. This accelerates the phase transition in comparison with the thin film, which exhibits nucleation only within the optically excited near-surface region and shows a subsequent slow growth of the FM phase into the depth of the film at initial sample temperatures slightly below the transition temperature. We acknowledge the DFG for financial support via Project-No. 328545488 - TRR 227 project A10 and the BMBF for funding via 05K22IP1. Access to the CEITEC Nano Research Infrastructure was supported by the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic under the project CzechNanoLab (LM2023051). Measurements were carried out at the KMC3-XPP instrument at the BESSY II electron storage ring operated by the Helmholtz-Zentrum Berlin für Materialien und Energie.
Using ultrafast x-ray diffraction, we show that the laser-induced magnetostructural phase transition in FeRh nanoislands proceeds faster and more completely than in continuous films. In both types of samples, ferromagnetic (FM) domains nucleate on an intrinsic 8 ps timescale. In the continuous film, the regions near the substrate, which are not directly exposed to the light, are only slowly transformed to the FM state by domain-wall motion following heat transport. In contrast, numerical modeling of the plasmonic absorption in the investigated nanostructure reveals a strong contribution near the FeRh/MgO interface. On average, the absorption in the nanoislands is larger and more homogeneous, enabling the phase transition throughout the entire volume on the intrinsic nucleation timescale.
2309.03956
Self-Interacting Neutrinos in Light of Large-Scale Structure Data
We explore a self-interacting neutrino cosmology in which neutrinos experience a delayed onset of free-streaming. We use the effective field theory of large-scale structure (LSS) to model matter distribution on mildly non-linear scales within the self-interacting neutrino cosmology for the first time. We perform the first combined likelihood analysis of BOSS full-shape galaxy clustering, weak lensing, and Lyman-$\alpha$ forest measurements, together with the cosmic microwave background (CMB) data from Planck. We find that the full data set strongly favors presence of a flavor-universal neutrino self-interaction, with a characteristic energy scale of order $10$ MeV. The preference is at the $\sim 5\sigma$ level and is primarily driven by the Lyman-$\alpha$ forest measurements and, to a lesser extent, the weak lensing data from DES. The self-interacting neutrino model eases both the Hubble tension and the $S_8$ tension between different cosmological data sets, but it does not resolve either. Finally, we note a preference for a non-zero sum of neutrino masses at the level of $\sim 0.3$ eV under this model, consistent with previous bounds. These results call for further investigation in several directions, and may have significant implications for neutrino physics and for future new-physics searches with galaxy surveys.
Adam He, Rui An, Mikhail M. Ivanov, Vera Gluscevic
2023-09-07T18:03:53
http://arxiv.org/abs/2309.03956v3
# Self-Interacting Neutrinos in Light of Large-Scale Structure Data ###### Abstract We explore a self-interacting neutrino cosmology in which neutrinos experience a delayed onset of free streaming. Using the effective field theory of large-scale structure (LSS), we perform the first combined likelihood analysis of BOSS full-shape galaxy clustering, weak lensing, and Lyman-\(\alpha\) forest measurements, together with the cosmic microwave background (CMB) temperature and polarization anisotropy data from _Planck_, in search for evidence of neutrino self-interactions. In agreement with previous results, we find a bimodal posterior distribution for the effective strength of neutrino self-interaction, showing that a vanishingly small interaction and a relatively strong interaction are both consistent with cosmological data, providing fits of nearly equal quality. We find that strong self-interactions in the neutrino sector can alleviate the \(H_{0}\) tension while maintaining a good fit to the LSS data. Our results may have implications for particle model-building and ongoing neutrino oscillation experiments, and motivate further exploration of particle interactions that can generate a delay in neutrino free-streaming. We discuss sensitivity of the upcoming galaxy surveys to ruling out neutrino self-interaction at the level consistent with the current data. + Footnote †: preprint: MIT-CTP/5608 ## I Introduction Since the discovery of neutrino oscillations and the resulting implication that neutrinos are _not_ massless, the neutrino sector of the Standard Model (SM) has received renewed attention as a potential window into physics beyond the SM. In particular, the existence of neutrino mass may imply that neutrinos experience additional couplings to particles that have not yet been observed [1]. While there are a number of terrestrial experiments capable of probing neutrino interactions [2; 3; 4; 5], cosmological observations also offer a rich landscape of information that may be used to explore and constrain the neutrino sector. For instance, cosmic microwave background (CMB) data from the _Planck_ satellite and baryon acoustic oscillation (BAO) measurements from galaxy redshift surveys have placed an upper limit on the sum of neutrino masses \(\sum m_{\nu}<0.12\) eV [6]. Moreover, Refs. [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45] have found that cosmological data are capable of detecting new interactions in the neutrino sector, particularly those that would change the free-streaming nature of neutrinos after they decouple from the SM. Cosmological studies have focused on parameterizing the neutrino self-interaction rate as \(\Gamma_{\nu}\propto G_{\rm eff}^{2}T_{\nu}^{5}\), where \(G_{\rm eff}\) is a Fermi-like coupling constant that describes a four-fermion interaction, and \(T_{\nu}\) is the background temperature of the neutrinos. An example model that would produce such an interaction rate is neutrino self-scattering through a massive scalar particle \(\phi\)[46]. In the presence of self-interactions, neutrino free-streaming is delayed, leaving an imprint on matter clustering in the early and late universe. Neutrino self-interactions have several effects on clustering of matter and the linear matter power spectrum \(P(k)\). 
Most notably, they delay the onset of neutrino free-streaming and reduce the size of the sound horizon at recombination, thus increasing the value of \(H_{0}\) that can fit the acoustic peaks in the CMB and alleviating the tension with the LSS data. Furthermore, neutrino self-interactions suppress power at small scales. Finally, the scale of the horizon at neutrino decoupling shows up as a bump-like feature at small values of the wavenumber \(k\); smaller scales (larger \(k\) values) correspond to weaker self-interactions; see Fig. 1. Refs. [47] and [48] have found that a significant self-interaction between massive neutrinos can provide a good fit to CMB data from the _Planck_ and ACT experiments. More precisely, the CMB anisotropy is consistent with zero interaction, as well as with a non-vanishing value of \(G_{\rm eff}\), and an associated delay in the onset of neutrino free-streaming [49; 50; 51; 52]. The 1D marginalized posterior probability distribution for the \(G_{\rm eff}\) parameter is thus found to be bimodal, featuring a mode consistent with \(\Lambda\)CDM cosmology (dubbed "moderately-interacting mode" \(\rm MI\nu\) because of its non-zero best-fit coupling constant), and a "strongly-interacting mode" \(\rm SI\nu\), peaked at \(G_{\rm eff}\sim 0.03\) MeV\({}^{-2}\). In addition, the strongly-interacting mode was found to increase the preferred value of the Hubble parameter \(H_{0}\) in the CMB analysis and alleviate the Hubble tension between large scale structure (LSS) and the CMB [47]. While numerous beyond-cold-dark-matter (beyond-CDM) models have been shown to alleviate cosmological tensions, few have been able to simultaneously address the Hubble tension and avoid exacerbating the \(S_{8}\) tension--the mild discrepancy in the late-time amplitude of matter perturbations \(S_{8}\), inferred from the CMB and the LSS probes [53]. Self-interacting neutrinos are a potential solution that may alleviate both tensions [47]. In this work, we analyze the bulk of available LSS data in the context of the self-interacting neutrino model. Specifically, we use data from the Baryon Oscillation Spectroscopic Survey (BOSS), Lyman-\(\alpha\) forest measurements from the Sloan Digital Sky Survey (SDSS eBOSS), and weak lensing measurements from the Dark Energy Survey (DES). We combine the LSS measurements with _Planck_ measurements of the temperature and polarization anisotropy in the CMB, modelling all observables self-consistently under the self-interacting neutrino model. This data combination allows a comprehensive assessment of the validity of neutrino self-interactions as an alternative to \(\Lambda\)CDM. We find that a delayed onset of neutrino free-streaming, produced by a flavor-universal neutrino self-interaction, is consistent with both LSS and _Planck_ data. Furthermore, we find that the self-interacting neutrino model presents a fit of comparable quality to \(\Lambda\)CDM (with no interactions). We find a bimodal posterior distribution for \(G_{\rm eff}\). The strongly-interacting mode has the best-fit coupling constant of \(G_{\rm eff}\sim(10\,{\rm MeV})^{-2}\), in agreement with previous studies [48]. The low-interaction mode of the posterior is indistinguishable from \(\Lambda\)CDM. Furthermore, the addition of \(G_{\rm eff}\) as a free parameter shifts the rest of the best-fit cosmology, affecting other standard cosmological parameters. In particular, the shift in \(H_{0}\) alleviates the tension with the Supernovae and \(H_{0}\) for the Equation of State (SH\({}_{0}\)ES) data [54; 55]. 
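For scale (a back-of-the-envelope comparison using the standard value of the Fermi constant, not a number quoted in this paper), the strongly-interacting best fit corresponds to a coupling enormously larger than the weak interaction:

```python
# Compare the strongly-interacting best fit with the Fermi constant (rough scale only).
G_eff = 10 ** (-1.73)       # MeV^-2, SI-nu best-fit value reported below
G_F   = 1.166e-11           # MeV^-2, Fermi constant (= 1.166e-5 GeV^-2)

print(f"G_eff ~ (1/{G_eff ** -0.5:.0f} MeV)^2 ~ {G_eff / G_F:.1e} x G_F")
```

This reproduces the quoted \(G_{\rm eff}\sim(10\,{\rm MeV})^{-2}\) scale and shows that it is roughly nine orders of magnitude stronger than the weak interaction.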
We find that the key to the interacting-neutrino model's ability to fit multiple cosmological data is the presence of features in the matter power spectrum and the freedom for data to "choose" their location in \(k\)-space, by varying \(G_{\rm eff}\), as shown below. These results could have implications for neutrino physics and cosmology. Indeed, the interacting-neutrino model is predictive and imminently falsifiable. Most notably, it predicts a decrement of power on scales corresponding to small halos in the local universe, at a level that may be probed in the near future with the Vera C. Rubin Observatory [56; 57; 58] and other observations [59]. Moreover, it tends to lead to a preference for a non-zero neutrino mass, with values that may be incompatible with certain neutrino hierarchies, with implications for observations that target measurement of the neutrino mass in the coming decade. Furthermore, new observations are underway to tighten the measurements of \(P(k)\) on a variety of scales, putting further pressure on models that cause deviations from \(\Lambda\)CDM in clustering of matter in the universe [60; 59]. In addition, the preference for a delayed onset of neutrino free-streaming calls for a detailed consideration of different types of particle interactions that could cause this delay. Different models for the interactions are constrained by experiments and astrophysical observations, particularly at the level associated with the strongly-interacting mode [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79]. However, in this study, we have only considered a flavor-universal coupling, where all three standard neutrinos interact and experience a delay in free-streaming. While flavor-universal couplings appear constrained by laboratory experiments under specific interaction models, flavor-specific interactions at high \(G_{\rm eff}\) are consistent with current laboratory bounds and may lead to similar cosmological consequences [85; 30; 32; 89]. This paper is organized as follows. In Sec. II, we describe the self-interacting neutrino cosmology and the effective field theory of LSS in the context of neutrino self-scattering. We describe our data analysis methods in Sec. III. In Sec. IV, we present our results, and we discuss and conclude in Sec. V.

## II Cosmological perturbations

In a self-interacting neutrino cosmology, the Boltzmann equations for the massive neutrino multipoles \(\nu_{\ell}\) contain additional collision terms that account for interactions between neutrinos [47]: \[\begin{split}\frac{\partial\nu_{\ell}}{\partial\tau}+k\frac{q}{\epsilon}\left(\frac{\ell+1}{2\ell+1}\nu_{\ell+1}-\frac{\ell}{2\ell+1}\nu_{\ell-1}\right)+\frac{2}{3}\left(\dot{h}\delta_{\ell 0}-\frac{4}{5}\left(\frac{\dot{h}+6\dot{\eta}}{2}\right)\delta_{\ell 2}\right)\\ =-\frac{aG_{\rm eff}^{2}T_{\nu}^{5}\nu_{\ell}}{f_{\nu}^{(0)}(q)}\left(\frac{T_{\nu,0}}{q}\right)\left(A\left(\frac{q}{T_{\nu,0}}\right)+B_{\ell}\left(\frac{q}{T_{\nu,0}}\right)-2D_{\ell}\left(\frac{q}{T_{\nu,0}}\right)\right)\end{split} \tag{1}\] where \(q\) is the magnitude of the comoving momentum, \(\epsilon=\sqrt{q^{2}+a^{2}m_{\nu}^{2}}\), \(T_{\nu,0}\) is the present-day temperature of the neutrinos, \(h\) and \(\eta\) are the standard metric perturbation variables, and \(A(x)\), \(B_{\ell}(x)\), and \(D_{\ell}(x)\) are functions that capture the collision term at first order in perturbation theory. 
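For orientation (an order-of-magnitude sketch only: the O(1) prefactor from \(\alpha_{\ell}\) and the detailed thermal history are ignored, and the cosmological parameter values are approximate), one can estimate when the scattering rate \(\Gamma_{\nu}\sim G_{\rm eff}^{2}T_{\nu}^{5}\) drops below the Hubble rate for a coupling of the strongly-interacting size:

```python
import numpy as np

# Rough estimate of the free-streaming onset: find z where G_eff^2 T_nu^5 ~ H(z).
# All O(1) prefactors (e.g. alpha_l) are dropped; numbers are indicative only.
G_eff = 10 ** (-1.73) * 1e-12        # MeV^-2 converted to eV^-2
T_nu0 = 1.68e-4                      # eV, present-day neutrino temperature (~1.95 K)
H0    = 1.44e-33                     # eV, Hubble constant for h ~ 0.68
Om, Orad = 0.31, 9.1e-5              # matter and radiation density parameters

z = np.logspace(2, 6, 4000)
Gamma = G_eff ** 2 * (T_nu0 * (1 + z)) ** 5                  # scattering rate, eV
H = H0 * np.sqrt(Om * (1 + z) ** 3 + Orad * (1 + z) ** 4)    # Hubble rate, eV

z_dec = z[np.argmin(np.abs(Gamma / H - 1.0))]
print(f"free-streaming begins around z ~ {z_dec:.0f}")       # a few times 10^3
```

Up to the neglected prefactors, this lands in the same ballpark as the \(z\sim 8300\) free-streaming delay reported below for the strongly-interacting mode.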
In the massless case, the neutrino multipole hierarchy simplifies to \[\begin{split}\frac{\partial F_{\ell}}{\partial\tau}+k\left(\frac{\ell+1}{2\ell+1}F_{\ell+1}-\frac{\ell}{2\ell+1}F_{\ell-1}\right)+\frac{2}{3}\left(\dot{h}\delta_{\ell 0}-\frac{4}{5}\left(\frac{\dot{h}+6\dot{\eta}}{2}\right)\delta_{\ell 2}\right)=-aG_{\rm eff}^{2}T_{\nu}^{5}\alpha_{\ell}F_{\ell}\end{split} \tag{2}\] where \(F_{\ell}\) are the massless neutrino multipoles and \(\alpha_{\ell}\) is a coefficient calculated from an integral over \(A(x)\), \(B_{\ell}(x)\), and \(D_{\ell}(x)\). See Ref. [47] for a detailed description of the collision terms \(A(x)\), \(B_{\ell}(x)\), \(D_{\ell}(x)\), and \(\alpha_{\ell}\). We configure the Boltzmann solver CLASS to allow for neutrino self-interactions as given by these modified Boltzmann equations. Following [47], we precompute \(A\), \(B_{\ell}\), and \(D_{\ell}\) on a 5-point grid of \(q/T_{\nu,0}\) values and access them via an interpolation routine as the equations are solved; similarly, we precompute \(\alpha_{\ell}\) for the case of massless neutrinos. Also following [47], we assume that the neutrino sector is comprised of one massive neutrino and allow the remaining number of massless neutrino species to vary. We implement a tight-coupling approximation in which multipoles \(\ell\geq 2\) are set to zero if \(\Gamma_{\nu}>1000\,aH\), where \(H\) is the Hubble parameter. We merge this modified CLASS code with CLASS-PT [90],1 which is tailored to compute LSS power spectra in the mildly nonlinear regime. CLASS-PT calculates non-linear 1-loop corrections to the linear matter power spectrum, and outputs the redshift-space galaxy power spectrum. CLASS-PT uses the effective field theory of large scale structure (EFT) [91, 92, 93, 94] to model the redshift-space galaxy power spectrum for \(0.01<k<0.2\) \(h\)/Mpc; in the context of neutrino self-interactions, the EFT should in principle be modified to account for such a scenario [95]. However, non-linear effects are entirely negligible at the high redshifts where neutrino self-interactions impact the evolution of matter perturbations. Neutrino self-interactions are effectively halted at the low redshifts relevant for galaxy surveys, and the matter perturbations evolve as in \(\Lambda\)CDM but with a modified initial power spectrum, shown in Fig. 1. Thus, the standard version of CLASS-PT is apt for predicting late-time LSS observables in the context of self-interacting neutrinos. We display the effect of strong neutrino self-interactions on the galaxy power spectrum in Fig. 2. Footnote 1: [https://github.com/Michalychforever/CLASS-PT](https://github.com/Michalychforever/CLASS-PT)

## III Data and Methodology

We analyze a combination of the following data sets:

* **_Planck_**: _Planck_ 2018 CMB plik_lite high-\(\ell\) TT/TE/EE likelihood, along with the commander low-\(\ell\) TT likelihood [96].
* **BOSS**: anisotropic galaxy clustering data from BOSS DR12 at \(z=0.38\) and \(0.61\) [97, 98, 99].
As in [100, 101], our analysis is performed up to \(k_{\rm max}=0.2\) \(h\)/Mpc for the galaxy power spectrum multipoles, for \(0.2<k<0.4\) \(h\)/Mpc for the real-space power spectrum proxy \(Q_{0}\) [102], and up to \(k_{\rm max}=0.08\) \(h\)/Mpc for the bispectrum monopole [103, 101, 104].2 We also add post-reconstructed BOSS DR12 BAO data following [106].3 Footnote 2: The BOSS full-shape likelihood that we use is available at [https://github.com/oliverphilcox/full_shape_likelihoods](https://github.com/oliverphilcox/full_shape_likelihoods), see [101] for more detail. Also see [104, 105] for alternative but equivalent likelihoods.
* **Lyman-\(\alpha\)**: 1D Lyman-\(\alpha\) flux power spectrum from SDSS DR14 BOSS and eBOSS quasars [110]. We use the compressed version of this likelihood presented as a Gaussian prior on the model-independent amplitude and slope of the power spectrum at a pivot redshift \(z_{p}=3\) and wavenumber \(k_{p}=0.009\) s/km \(\sim 1\) \(h\)/Mpc [111].
* **DES**: weak lensing data from Year 3 of the Dark Energy Survey (DES-Y3), in the form of a Gaussian prior on \(S_{8}\): \(0.776\pm 0.017\) [112].

In our EFT-based full-shape analysis, we consistently marginalize over all necessary nuisance parameters that capture galaxy bias, baryonic feedback, non-linear redshift-space distortions, etc. [101].4 Our analysis is thus robust to the specific details of galaxy formation physics.

Figure 1: Ratio of the linear matter power spectrum \(P(k)\) for a self-interacting neutrino cosmology and \(\Lambda\)CDM, generated with best-fit parameter values from our _Planck_ + BOSS + Lyman-\(\alpha\) + DES analysis. The green curve uses the best-fit values for neutrino self-coupling consistent with the SI\(\nu\) mode of the posterior distribution (discussed in the text). The blue curve displays the best-fit MI\(\nu\) model (note that \(\Lambda\)CDM is nested in MI\(\nu\)). The underlying \(\Lambda\)CDM spectrum is generated with best-fit parameter values from _Planck_ [6]. The shaded band indicates the range of scales probed by BOSS data, while the dashed line indicates the pivot wavenumber probed by the Lyman-\(\alpha\) data. An increase in the interaction strength \(G_{\rm eff}\) shifts the peak-like feature seen at \(k\sim 0.1\) \(h\)/Mpc in green towards larger physical scales (to the left in the plot), and is associated with a later onset of neutrino free-streaming.

We also apply our BOSS galaxy clustering data to the self-interacting neutrino scenario without any \(\Lambda\)CDM assumptions; namely, we do not use the "compressed" BOSS likelihood containing BAO and RSD parameters that are derived with a fixed _Planck_-like \(\Lambda\)CDM template [98, 114]. Finally, as in [98], our EFT-based likelihood includes galaxy power spectrum shape information that the standard BOSS likelihood does not contain [97]. The choice we make to impose a DES prior on \(S_{8}\) is equivalent to adding the complete DES-Y3 dataset to our analysis, as DES measures \(S_{8}\) to be the same value for \(\Lambda\)CDM, WDM, and \(\Lambda\)CDM extensions [115, 116, 117, 112]. The value of \(S_{8}\) is therefore robust under different cosmological models, as long as the late-time growth of structure is not modified; this is indeed the case for the interacting-neutrino model. Moreover, \(S_{8}\) is the primary directly observed principal component of the weak lensing data; it is thus close to being a model-independent quantity. 
Therefore, we safely leave details of the full calculation of the DES-Y3 likelihood in the context of neutrino self-interactions for future work. Finally, we note that the data we chose to analyze, including BOSS, Lyman-\(\alpha\), and DES, are good proxies for the information gleaned from LSS, but they do not represent a complete set of data currently available. In particular, we did not consider weak lensing from KiDS-1000 and HSC-Y3 because they have non-negligible covariance with the data sets we consider; this covariance is not yet available and must be modeled to analyze all these data in tandem [118]. This task is beyond the scope of our work. Furthermore, we do not analyze supernovae data from Pantheon+ [119], since these data would solely constrain \(\Omega_{\rm m}\), a background quantity that does not change from its \(\Lambda\)CDM value under neutrino self-interactions. In addition, we choose not to add CMB lensing data to our analysis, in order to directly reproduce the base _Planck_ results from [48]; however, we note that Refs. [47, 48] find a higher likelihood for the strongly interacting mode when adding CMB lensing to their analysis. Finally, we do not include eBOSS DR16 BAO data in our analysis, but we expect these data to further prefer a delayed onset of neutrino free-streaming, as was the case in previous analyses [47, 48]. Likewise, eBOSS DR16 has not yet been fully converted into a full-shape likelihood, so we do not include it here. A dedicated future study is warranted to analyze together all these data sets under cosmological models beyond \(\Lambda\)CDM. We use our modified CLASS code and MontePython + MultiNest to perform likelihood analyses [120, 121, 122]. For the case of interacting neutrinos, once we discern the location of the posterior mode consistent with \(\Lambda\)CDM using MultiNest, we additionally sample this mode with a Metropolis-Hastings algorithm, to speed up convergence and increase sample accuracy; we use sample chains from both approaches to reconstruct the final posterior distributions shown in Sec. IV. In MultiNest, we set the number of live points to 400, the target sampling efficiency to 0.8, and the accuracy threshold for the log Bayesian evidence to 20%, ensuring that our analysis is optimized for parameter estimation. In each run, we vary all standard cosmological parameters; we also treat the effective number of neutrino species \(N_{\rm eff}\) and the sum of the neutrino masses \(\sum m_{\nu}\) as free parameters in each analysis. We assume that the neutrino sector consists of one massive neutrino and that the rest of the species are massless (see Sec. II). We use the standard Big Bang nucleosynthesis (BBN) predictions to calculate the primordial Helium abundance \(Y_{\rm He}\) [47, 48]. We assume broad flat priors on \(\{\omega_{\rm b},\,\omega_{\rm c},\,100\theta_{\rm s},\,\ln(10^{10}A_{\rm s}),\,n_{\rm s},\,\log_{10}(G_{\rm eff}\,{\rm MeV}^{2}),\,N_{\rm eff},\,\sum m_{\nu}\}\). Following previous literature, we use a Gaussian prior on \(\tau_{\rm reio}=0.065\pm 0.015\), as a stand-in for low-\(\ell\) EE data [47]. We limit the upper bound of our prior on \(\log_{10}(G_{\rm eff}\,{\rm MeV}^{2})\) to \(-0.5\), as the equations of motion become too stiff for CLASS to evolve at \(\log_{10}(G_{\rm eff}\,{\rm MeV}^{2})>-0.5\) [52]. We do not expect this choice to impact our results, as Refs. [47, 48] do not find preference for any values of \(\log_{10}(G_{\rm eff}\,{\rm MeV}^{2})\gtrsim-0.8\) in _Planck_ data. In line with Ref. [48], we initially set the lower bound of our prior on \(\log_{10}(G_{\rm eff}\,{\rm MeV}^{2})\) to \(-5.5\) in MultiNest. Going beyond previous analyses, we then extend the lower bound to \(-8\) and use a Metropolis-Hastings algorithm to map the low-\(G_{\rm eff}\) mode of the posterior in detail. As discussed in Sec. IV, this is essential in order to capture the preference of the LSS data towards lower best-fit values.5 Footnote 5: As discussed in the following section, the best-fit \(G_{\rm eff}\) in the \(\Lambda\)CDM mode of the posterior is indistinguishable from vanilla \(\Lambda\)CDM, once LSS data are included.
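The prior choices above can be summarized schematically as follows. This is a plain-Python sketch rather than the actual MontePython configuration; the flat-prior ranges for the standard parameters are illustrative placeholders, and only the \(G_{\rm eff}\) bounds and the \(\tau_{\rm reio}\) Gaussian are quoted from the text:

```python
import numpy as np

# Schematic summary of the sampling priors described in the text.
flat_priors = {
    "omega_b":          (0.017, 0.027),   # illustrative range
    "omega_cdm":        (0.09, 0.15),     # illustrative range
    "100theta_s":       (1.03, 1.06),     # illustrative range
    "ln10^10A_s":       (2.7, 3.3),       # illustrative range
    "n_s":              (0.87, 1.01),     # illustrative range
    "log10_Geff_MeV2":  (-8.0, -0.5),     # stiffness-limited upper bound (from text)
    "N_eff":            (1.0, 5.0),       # illustrative range
    "sum_mnu_eV":       (0.0, 1.0),       # illustrative range
}
tau_prior = (0.065, 0.015)                # Gaussian prior on tau_reio (mean, sigma)

def log_prior(params):
    """Flat priors within bounds plus a Gaussian prior on tau_reio."""
    for name, (lo, hi) in flat_priors.items():
        if not (lo <= params[name] <= hi):
            return -np.inf
    mu, sigma = tau_prior
    return -0.5 * ((params["tau_reio"] - mu) / sigma) ** 2
```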
Figure 2: Galaxy power spectrum, measured by BOSS (with 68% confidence-level error bars). We display the galaxy power spectrum monopole \(P_{0}(k)\), quadrupole \(P_{2}(k)\), hexadecapole \(P_{4}(k)\), and the real-space power spectrum proxy \(Q_{0}(k)\) for the NGC low-\(z\) sample. The lines represent best-fit cosmologies for two models considered in our analysis: \(\Lambda\)CDM (with \(N_{\rm eff}\) and \(\sum m_{\nu}\) as free parameters; dashed) and a self-interacting neutrino cosmology with strong interactions (also with \(N_{\rm eff}\) and \(\sum m_{\nu}\) as free parameters; solid). Both sets of curves are generated with best-fit parameter values from our _Planck_ + BOSS + Lyman-\(\alpha\) + DES analyses.

## IV Results

In Fig. 3, we show the marginalized 1D posterior probability distribution for the neutrino self-interaction coupling constant \(G_{\rm eff}\), obtained from two analyses: i) the first full analysis of all data sets discussed in Sec. III, _Planck_ + BOSS + Lyman-\(\alpha\) + DES, and ii) an analysis of _Planck_ data only, presented for comparison. In both cases, the posterior probability distribution is bimodal, with mutually consistent locations of the two high-probability modes. In the all-data analysis, one high-probability mode is consistent with \(\Lambda\)CDM (a negligible amount of interactions and a negligible delay in neutrino free-streaming), and another one is located at \(\log_{10}(G_{\rm eff}\ {\rm MeV}^{2})=-1.73\pm 0.05\) and corresponds to a delay of free-streaming until \(z\sim 8300\), where the errors capture 68% confidence. The two posterior modes carry similar statistical weight; the inclusion of the LSS data leads to an increase in the relative significance of the SI\(\nu\) mode, as compared to the _Planck_-only analysis. As compared to the \(\Lambda\)CDM model, the self-interacting neutrino model with \(G_{\rm eff}\) consistent with the SI\(\nu\) mode presents an improvement in fit consistent with the addition of one free parameter, \(\Delta\chi^{2}_{\rm min}=-1.81\). To further quantify the relative significance of the two posterior modes, we show the difference in log Bayesian evidence and the maximum likelihood ratio between the SI\(\nu\) and the \(\Lambda\)CDM models in Table 2. We find a log evidence difference of \(1.46\pm 0.48\) in favor of the SI\(\nu\) mode; according to the Jeffreys scale, this indicates a "definite preference" for a delayed onset of neutrino free-streaming over the standard cosmological model [123]. As noted in [47; 48], there are significant parameter degeneracies in the interacting neutrino model; in many instances, they result in reconciliation of the best-fit parameter values inferred from the CMB, LSS, and other probes. For example, we confirm that the SI\(\nu\) mode leads to a significant reduction in the \(H_{0}\) tension between the CMB + LSS and the supernova measurements, as shown in Fig. 4. 
Specifically, the tension between SH\({}_{0}\)ES and _Planck_ + BOSS + Lyman-\(\alpha\) + DES is at \(4.6\sigma\) under the \(\Lambda\)CDM model, but it reduces to \(2.7\sigma\) under the self-interacting neutrino model. This indicates that a combined analysis of early- and late-time probes of the expansion may show a strong preference for the SI\(\nu\) mode in comparison to \(\Lambda\)CDM. At the same time, we see no significant effect of neutrino self-interaction on the \(S_{8}\) tension between DES and _Planck_ + BOSS + Lyman-\(\alpha\) data; the tension is only slightly reduced, from \(1.62\sigma\) in \(\Lambda\)CDM to \(1.47\sigma\) when self-interactions are added. However, we note that we have included a prior on \(S_{8}\) from DES in our analysis, pulling our results towards the DES value for \(S_{8}\). A future dedicated analysis is thus needed to fully understand the impact of the neutrino interactions on the \(S_{8}\) tension. Along similar lines, the inferred best-fit value for the number of effective neutrino species \(N_{\rm eff}\) under \(\Lambda\)CDM is \(N_{\rm eff}=2.5\pm 0.11\), which is difficult to model in standard cosmology [124; 125]. However, the same parameter becomes consistent with the known three neutrino species under the SI\(\nu\) mode, with the best-fit value of \(N_{\rm eff}=3\pm 0.14\); see Fig. 4. Finally, allowing for neutrino self-scattering leads to a mild preference for a non-zero sum of neutrino masses, with the best-fit value at \(\sum m_{\nu}=0.213^{+0.069}_{-0.076}\) eV at 68% confidence, consistent with the bounds from neutrino oscillation experiments [126]. This finding is consistent with the \(>2\sigma\) preference for non-vanishing neutrino mass found in previous _Planck_-only analyses in the context of self-interacting neutrinos [47]. Table 1 displays the breakdown of \(\Delta\chi^{2}\) contributions from each data set for the SI\(\nu\) model, as compared to \(\Lambda\)CDM. It is evident that BOSS, DES, and low-\(\ell\) CMB data present key contributions to the preference of SI\(\nu\) over \(\Lambda\)CDM. This preference can be understood in the context of modifications to the linear matter power spectrum \(P(k)\) caused by delayed neutrino free-streaming. As discussed in Sec. I, the free-streaming delay in the SI\(\nu\) posterior mode causes a peak-like feature at \(k\sim 0.2\ h/{\rm Mpc}\) (see Fig. 1). Combined with the lower \(A_{\rm s}\) and \(n_{\rm s}\) values that the SI\(\nu\) mode prefers [47], the neutrino self-interaction produces a suppression and a plateau-like feature at scales probed by these data sets. This specific \(k\)-dependent modification was previously shown to fit the LSS data better than \(\Lambda\)CDM, in the context of other beyond-CDM physics [127; 128; 129; 130]. The preference for the SI\(\nu\) mode of \(G_{\rm eff}\) is not uniformly present in all subsets of the data, however. Namely, the fit of the interacting-neutrino model to the high-\(\ell\) data from _Planck_ deteriorates in comparison to \(\Lambda\)CDM.

Figure 3: 1D marginalized posterior distribution for the neutrino self-interaction coupling parameter \(G_{\rm eff}\). We show the posterior derived from a joint analysis of _Planck_, BOSS, Lyman-\(\alpha\), and DES data (solid green) as well as the same posterior obtained from a _Planck_-only analysis (dashed blue). We note that the addition of the LSS data enhances the statistical significance of the strongly-interacting posterior mode, in comparison to \(\Lambda\)CDM.
While a detailed investigation of this effect is beyond the scope of the present analysis, we note that the discrepancies in high-\(\ell\) measurements between _Planck_ and ACT [131] have been previously noted and may play a role in this context [48]. Finally, the Lyman-\(\alpha\) data show no preference for the interacting-neutrino model. Indeed, Fig. 1 suggests that the SI\(\nu\) model actually over-suppresses the power spectrum at the Lyman-\(\alpha\) pivot scale of \(k_{p}\sim 1\;h/{\rm Mpc}\). Nevertheless, while the same data set strongly disfavors other solutions to the Hubble tension [111] because of its preference for somewhat low \(A_{\rm s}\) and \(n_{\rm s}\) values, Lyman-\(\alpha\) only mildly penalizes the SI\(\nu\) mode for neutrino self-interaction.

Figure 4: 2D marginalized posterior distribution for a subset of cosmological parameters, focusing on the strongly-interacting (SI\(\nu\)) mode of the bimodal posterior for the neutrino self-coupling \(G_{\rm eff}\). We show the results of a combined analysis of _Planck_, BOSS, Lyman-\(\alpha\), and DES data and a _Planck_-only analysis. The vertical and horizontal shaded bands show the DES measurement of \(S_{8}\) and the SH\({}_{0}\)ES measurement of \(H_{0}\). We see that the interacting neutrino model notably alleviates the Hubble tension, and restores consistency among the data considered here. We also emphasize the shift in the best-fit value of the effective number of neutrinos \(N_{\rm eff}\), and the mild preference for a non-zero neutrino mass sum \(\sum m_{\nu}\), under the interacting-neutrino model, as compared to \(\Lambda\)CDM.

We list the full set of constraints on all cosmological parameters for a _Planck_ + BOSS + Lyman-\(\alpha\) + DES analysis of the SI\(\nu\) model in Appendix A.

## V Conclusions

We considered a cosmological scenario in which flavor-universal neutrino self-interactions in the early universe delay the onset of neutrino free-streaming. For the first time, we used a combination of LSS data, including BOSS, eBOSS, and DES, together with the CMB measurements from _Planck_, to test the validity of this model. As in previous analyses of the CMB data alone, we find a bimodal posterior probability distribution for the neutrino self-coupling constant \(G_{\rm eff}\). However, the LSS data contribute to a mild preference towards the strongly-interacting mode over \(\Lambda\)CDM when all the data are analysed in combination; this preference is driven by BOSS and DES. The key result is shown in Fig. 3. We found that the success of the interacting neutrino model in fitting all the data simultaneously arises from subtle degeneracies in various standard cosmological parameter values, which all combine to give rise to a specific shape of the linear matter power spectrum. This power spectrum features a mild suppression at small scales, and a plateau-like feature at \(k\sim 0.2~h/{\rm Mpc}\) (Fig. 1). In particular, the neutrino self-scattering and the accompanying delay in free-streaming lead to a preference for slightly lower values of \(A_{\rm s}\) and \(n_{\rm s}\). This is preferred by the DES data, which favor lower values of \(S_{8}\) as compared to _Planck_. In addition, a delay in neutrino free-streaming results in a reduced sound horizon scale, which in turn leads to an alleviated \(H_{0}\) tension between the early universe and supernova data. These results are summarized in Fig. 4. We stress that our approach is to take all data sets at face value. 
It is, however, possible that the modeling of the LSS or the CMB data omits some unknown systematics. We leave such considerations for future work. There are several implications of our key results. First, self-scattering neutrinos suppress structure over a large range of scales, which may be used to further examine the validity of this scenario. For example, the Milky Way satellite galaxy census currently limits the suppression of power to \(\sim 30\%\) at \(k\sim 30~{}h/{\rm Mpc}\) compared to \(\Lambda\)CDM [57; 58]. Forthcoming galaxy surveys will tighten these error bars further [132; 133; 57; 134], putting pressure on all beyond-CDM models that alter the matter power spectrum on small scales. In addition, neutrino self-interactions require shifts in various cosmological parameters, including the sum of neutrino masses, in order to retain a good fit to the data. This implies that high-precision cosmological searches for \(\sum m_{\nu}\) and related physics may provide valuable information about neutrino interactions as well. Finally, this analysis has only focused on flavor-universal neutrino self-interactions. However, previous CMB-only analyses have found an even greater preference for the strongly-interacting mode in flavor-specific interactions [30], presenting a compelling direction for future exploration. In addition, laboratory experiments are already probing neutrino interaction physics at relevant levels [89; 32; 85], and a flavor-specific neutrino self-interaction may be consistent with laboratory data; however, a dedicated analysis is needed to combine laboratory results with cosmological searches for neutrino self-scattering.

_Note added_. While working on this paper, we became aware of Ref. [135], which carried out an analysis of the BOSS full-shape power spectrum in the context of self-interacting neutrinos. Where our results overlap, they agree. We have coordinated our submissions to arXiv.

## Acknowledgements

We gratefully acknowledge the support from explore.org and Sylvie and David Shapiro at USC. RA and VG acknowledge the support from NASA through the Astrophysics Theory Program, Award Number 21-ATP21-0135. VG acknowledges the support from the National Science Foundation (NSF) CAREER Grant No. PHY-2239205. Exploration of the parameter space in this work was done with extensive use of InViz6.

\begin{table} \begin{tabular}{|c||c|c|} \hline Statistic & _Planck_ & _Planck_ + BOSS + Lyman-\(\alpha\) + DES \\ \hline \hline \(\Delta\ln\mathcal{E}\) & \(-3.43\pm 0.23\) & \(1.46\pm 0.48\) \\ \hline \(\mathcal{R}\) & \(0.15\) & \(1.47\) \\ \(\Delta\chi^{2}_{\rm min}\) & \(+3.81\) & \(-1.81\) \\ \hline \end{tabular} \end{table} Table 2: Comparison of the interacting-neutrino model (with strong self-interactions, SI\(\nu\)) and the \(\Lambda\)CDM model, given _Planck_ + BOSS + Lyman-\(\alpha\) + DES and _Planck_-only analyses. \(\Delta\ln\mathcal{E}\) is the difference in log Bayesian evidence, \(\mathcal{R}\) is the maximum likelihood ratio, and \(\Delta\chi^{2}_{\rm min}\) is the difference between the minimum \(\chi^{2}\) values. We note an overall mild preference for non-vanishing neutrino self-interactions.
\begin{table} \begin{tabular}{|c|c|} \hline Data set & Contribution to \(\Delta\chi^{2}_{\rm min}\) \\ \hline \hline Low–\(\ell\) & \(-3.38\) \\ High–\(\ell\) & \(+5.98\) \\ BOSS & \(-3.04\) \\ Lyman–\(\alpha\) & \(+1.91\) \\ DES & \(-3.17\) \\ \hline Total & \(-1.81\) \\ \hline \end{tabular} \end{table} Table 1: \(\Delta\chi^{2}_{\rm min}\) for the strongly-coupled self-interacting neutrinos (SI\(\nu\) mode), compared to \(\Lambda\)CDM. Columns present individual contributions from different subsets of data to the full analysis of _Planck_ + BOSS + Lyman-\(\alpha\) + DES. ## Appendix A Full cosmological parameter constraints In Table 3, we show the full set of cosmological parameter constraints for the best-fit \(\chi^{2}\) in a _Planck_ + BOSS + Lyman-\(\alpha\) + DES analysis of the strongly-interacting mode in the self-interacting neutrino model. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Parameter & Best-fit & Marginalized max \(\pm\)\(\sigma\) & 95\% lower & 95\% upper \\ \hline 100 \(\omega_{\rm b}\) & 2.273 & \(2.266\pm 0.017\) & 2.232 & 2.3 \\ \(\omega_{\rm c}\) & 0.1176 & \(0.1184\pm 0.002\) & 0.1138 & 0.1235 \\ 100 \(\theta_{\rm s}\) & 1.0467 & \(1.0466^{+0.00048}_{-0.000038}\) & 1.0458 & 1.0474 \\ \(\ln(10^{10}A_{\rm s})\) & 2.986 & \(3.006\pm 0.027\) & 2.955 & 3.06 \\ \(n_{\rm s}\) & 0.9427 & \(0.9437^{+0.0045}_{-0.005}\) & 0.9345 & 0.9537 \\ \(\tau_{\rm reio}\) & 0.05919 & \(0.06853^{+0.0133}_{-0.0136}\) & 0.04221 & 0.09493 \\ \(\log_{10}(G_{\rm eff}\ {\rm MeV}^{2})\) & \(-1.745\) & \(-1.731^{+0.052}_{-0.053}\) & \(-1.829\) & \(-1.632\) \\ \(N_{\rm eff}\) & 2.918 & \(3^{+0.143}_{-0.1469}\) & 2.724 & 3.293 \\ \(\sum m_{\nu}\) [eV] & 0.172 & \(0.213^{+0.076}_{-0.076}\) & 0.075 & 0.364 \\ \(z_{\rm reio}\) & 8.069 & \(8.974^{+1.324}_{-1.191}\) & 6.365 & 11.365 \\ \(\Omega_{\Lambda}\) & 0.6892 & \(0.6888\pm 0.0074\) & 0.6741 & 0.7033 \\ \(Y_{\rm He}\) & 0.2462 & \(0.2473\pm 0.002\) & 0.2434 & 0.2513 \\ \(H_{0}\) & 67.637 & \(67.895^{+0.091}_{-0.99}\) & 66.084 & 69.773 \\ \(10^{+9}A_{\rm s}\) & 1.98 & \(2.022\pm 0.054\) & 1.92 & 2.133 \\ \(\sigma_{8}\) & 0.7902 & \(0.7912\pm 0.0135\) & 0.7645 & 0.8166 \\ \hline \(A_{\rm planck}\) & 0.99925 & \(1.00012^{+0.00243}_{-0.00246}\) & 0.99514 & 1.00519 \\ \(b_{1}^{(1)}\) & 2.033 & \(2.048^{+0.055}_{-0.052}\) & 1.946 & 2.148 \\ \(b_{2}^{(1)}\) & \(-0.5293\) & \(-0.4098^{+0.5323}_{-0.569}\) & \(-1.4495\) & 0.7372 \\ \(b_{2}^{(1)}\) & \(-0.3029\) & \(-0.3065^{+0.2792}_{-0.2717}\) & \(-0.8538\) & 0.2521 \\ \(b_{1}^{(2)}\) & 2.189 & \(2.213\pm 0.06\) & 2.087 & 2.335 \\ \(b_{2}^{(2)}\) & \(-0.0996\) & \(-0.3402^{+0.6331}_{-0.626}\) & \(-1.5119\) & 0.8852 \\ \(b_{2}^{(2)}\) & 0.2485 & \(-0.1471^{+0.3292}_{-0.34363}\) & \(-0.7968\) & 0.5086 \\ \(b_{1}^{(3)}\) & 2.008 & \(1.965^{+0.048}_{-0.057}\) & 1.869 & 2.065 \\ \(b_{2}^{(3)}\) & 0.0651 & \(-0.1001^{+0.483}_{-0.5037}\) & \(-1.0496\) & 0.9021 \\ \(b_{2}^{(3)}\) & \(-0.2771\) & \(-0.3293^{+0.2671}_{-0.2849}\) & \(-0.8465\) & 0.2239 \\ \(b_{1}^{(4)}\) & 2.031 & \(2.004\pm 0.059\) & 1.888 & 2.12 \\ \(b_{2}^{(4)}\) & \(-0.198\) & \(-0.3765^{+0.541}_{-0.563}\) & \(-1.4468\) & 0.7749 \\ \(b_{\mathcal{G}_{2}}^{(4)}\) & \(-0.4262\) & \(-0.3596^{+0.3285}_{-0.3147}\) & \(-1.0055\) & 0.2687 \\ \hline \end{tabular} \end{table} Table 3: Full parameter constraints for a _Planck_ + BOSS + Lyman-\(\alpha\) + DES analysis of the strongly-interacting mode of the self-interacting neutrino model. 
Bounds for standard cosmological parameters are given in the top half of the table, and bounds on EFT bias parameters are given in the bottom half. The maximum of the full posterior is labeled “Best-fit”, and the maxima of the marginalized posteriors are labeled “Marginalized max”. The superscripts (1), (2), (3), (4) of the galaxy bias parameters \(b_{1},b_{2},b_{\mathcal{G}_{2}}\) signify respectively the NGC \(z=0.61\), SGC \(z=0.61\), NGC \(z=0.38\), SGC \(z=0.38\) BOSS DR12 data chunks.
Neutrinos experience a delayed onset of free-streaming. We use the effective field theory of large-scale structure (LSS) to model the mass distribution on nonlinear scales and, for the first time in a self-interacting neutrino cosmology, apply these methods to a combined likelihood analysis of BOSS full-shape galaxy clustering, weak lensing, and Lyman-$\alpha$ forest measurements together with Planck cosmic microwave background data. We find that the combined data set strongly favors self-interacting neutrinos, with a characteristic energy scale of roughly 10 MeV. This preference is at the $\sim 5\sigma$ level and is driven by the Lyman-$\alpha$ forest measurements and, to a lesser degree, by the weak-lensing data from DES. This self-interacting neutrino model, the Hubble tension and
2310.00482
Enhanced many-body localization in a kinetically constrained model
In the study of the thermalization of closed quantum systems, the role of kinetic constraints on the temporal dynamics and the eventual thermalization is attracting significant interest. Kinetic constraints typically lead to long-lived metastable states depending on initial conditions. We consider a model of interacting hardcore bosons with an additional kinetic constraint that was originally devised to capture glassy dynamics at high densities. As a main result, we demonstrate that the system is highly prone to localization in the presence of uncorrelated disorder. Adding disorder quickly triggers long-lived dynamics as evidenced in the time evolution of density autocorrelations. Moreover, the kinetic constraint favors localization also in the eigenstates, where a finite-size transition to a many-body localized phase occurs for much lower disorder strengths than for the same model without a kinetic constraint. Our work sheds light on the intricate interplay of kinetic constraints and localization and may provide additional control over many-body localized phases in the time domain.
Karl Royen, Suman Mondal, Frank Pollmann, Fabian Heidrich-Meisner
2023-09-30T20:34:58
http://arxiv.org/abs/2310.00482v2
# Enhanced many-body localization in a kinetically constrained model ###### Abstract In the study of the thermalization of closed quantum systems, the role of kinetic constraints on the temporal dynamics and the eventual thermalization is attracting significant interest. Kinetic constraints typically lead to long-lived metastable states depending on initial conditions. We consider a model of interacting hardcore bosons with an additional kinetic constraint that was originally devised to capture glassy dynamics at high densities. As a main result, we demonstrate that the system is highly prone to localization in the presence of uncorrelated disorder. Adding disorder quickly triggers long-lived dynamics as evidenced in the time evolution of density autocorrelations. Moreover, the kinetic constraint favors localization also in the eigenstates, where a finite-size transition to a many-body localized phase occurs for much lower disorder strengths than for the same model without a kinetic constraint. Our work sheds light on the intricate interplay of kinetic constraints and localization and may provide additional control over many-body localized phases in the time domain. ## I Introduction In the theory of thermalization of closed quantum systems, two main pillars have emerged that capture generic behavior. On the one hand, systems that obey the eigenstate thermalization hypothesis (ETH) are expected to thermalize under their own dynamics [1, 2, 3, 4], i.e., information on initial conditions becomes inaccessible to local measurements and expectation values of local observables are identical to thermal expectation values, up to small finite-size corrections. On the other hand, many-body localized (MBL) systems constitute robust examples of non-thermalization [5, 6], with emergent local conserved quantities and persistent density inhomogeneities. Two recent developments have triggered a refinement of this picture. First, the stability of the MBL phase, even in a canonical system of a chain of interacting spinless fermions in presence of quenched disorder, has been challenged [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Notably, the critical disorder strength appears to be significantly larger than previously suggested and instead of a direct transition, the existence of a large prethermal MBL regime has been proposed [27, 28]. Second, quantum systems with various types of constrained dynamics have attracted significant attention, including systems with quantum scars [29, 30], Hilbert-space fragmentation (HSF) [31, 32, 33, 34], lattice gauge theories [35, 36, 37], or kinetically constrained models (KCMs) (see, e.g., [38, 39, 40, 41, 42]). Systems with constrained dynamics are interesting from several perspectives [30, 43, 29, 44]. In many cases, such systems eventually thermalize for the majority of initial conditions, yet break ETH in the weak sense [29, 44], however, even fully non-thermalizing situations have been suggested [36, 37]. Notably, long transient dynamics and metastable states exist in these systems for at least some initial conditions, thus shifting the focus from eigenstate properties, as often emphasized in the ETH and MBL contexts, to the temporal domain. The existence of quantum-scar states [45] and HSF in certain models has led to an improved view on local conservation laws responsible for these Hilbert-space structures [46]. 
Some of the models with HSF or kinetic constraints exhibit anomalously slow, subdiffusive transport [47, 48, 49, 50] or superdiffusive transport [51]. KCMs originate in two contexts, either in quantum systems with approximately hard-core short range interactions such as Rydberg atoms [38] or in the classical theory of glassy dynamics [52, 53, 54, 55]. Transferring models from the latter class into the quantum realm provides a rich playground to study the interplay of interactions, constraints, and disorder [38, 56]. First nonequilibrium experiments with quantum simulators succeeded in demonstrating the presence of constrained dynamics [57, 58, 59, 60, 61, 62, 63]. In our work, we are interested in the stability of localization in a KCM describing an interacting system of hard-core bosons or spin degrees of freedom on a triangular ladder (see Fig. 1). In this system, introduced as a quantum version in Ref. [40], particles can only move into an empty site if the origin and target site share an Figure 1: Illustration of the kinetically constrained model of consideration in this work, introduced in [40]. The black dots represent the sites on a triangular (zig-zag) ladder. The red hexagons represent particles that can only hop to the nearest neighbor if the origin and target site share an empty neighboring site. The blue double-ended arrows denote the particle-hole interactions and the allowed hopping processes for this example configuration [see Eq. (2)]. empty neighboring site (see the arrows in Fig. 1). Consequently, single vacancies in a high-density background cannot move at all, unless they are absorbed by groups of at least two neighboring vacancies. This mechanism causes the existence of quantum-scar states that are completely isolated from the rest of the Hilbert space (those are the computational-basis product states with only isolated vacancies) and states with metastable dynamics as witnessed in density autocorrelations [40] (see Fig. 3(b) for examples). The constraint becomes effective in the presence of sufficiently strong interactions between particles and holes. This interaction is chosen to realize a Rokhsar-Kivelson point in the many-body ground state of this model [40]. The kinetic constraints lead to the aforementioned metastable dynamics with slowly decaying autocorrelations computed from computational-basis product states, with an increasing fraction of such states as density grows. Consequently, the system has an inherent tendency towards localization. Since isolated holes can only propagate via high-order perturbative processes involving connected clusters of vacancies, a small amount of disorder will quickly affect these heavy objects. In our work, we demonstrate two main results. First, even a small amount of disorder is sufficient to induce long-lived, nondecaying dynamics for _all_ initial product states in the computational basis, at least on accessible finite system sizes. Second, the eigenstate transition into a possible many-body localized phase occurs at an order of magnitude smaller disorder strengths than for the model _without_ the kinetic constraint, consistent with the information from time-dependent simulations. Our results are based on extensive exact-diagonalization simulations. Our work complements previous studies of constrained quantum-lattice models in the presence of disorder. 
For the case of KCMs, there is so far no uniform picture as the constraints can apparently favor or disfavor localization [64; 65; 66; 67; 68; 69], with our case providing an example for the former. It appears that the type of constraint, dimensionality and range of the interactions may matter. A similar picture has been described for the quantum East random-energy model [67]. There, however, localization is absent without the constraints in the bulk of the spectrum [70]. Moreover, the random-energy model is infinite-dimensional, different from our one-dimensional example. With regard to the ongoing investigations about the stability of MBL, the combination of certain dynamical constraints and disorder may provide a path towards stable instances of MBL and possibly unexplored types of delocalization-localization transitions and crossovers. The rest of the paper is organized as follows. In Sec. II, we introduce the model while Sec. III provides a brief account of the numerical techniques utilized in our work. In Sec. IV, we present our results for the decay of density autocorrelations as a function of interaction strength and disorder strength. Section V summarizes our results for the eigenstate delocalization-localization transition extracted from the occupation distance [71; 72] and results for the eigenstate entanglement entropy. Our conclusions are presented in Sec. VI. ## II Model Here we consider a triangular ladder with interacting particles subject to a kinetic constraint introduced in [40] and uncorrelated disorder. A schematic picture of the system is shown in Fig. 1. The system is governed by the Hamiltonian \[\hat{H}=\hat{H}_{\text{KCM}}+\hat{H}_{\text{dis}} \tag{1}\] where \[\hat{H}_{\text{KCM}}= -J\sum_{\langle i,j\rangle}\hat{C}_{i,j}(\hat{b}_{i}^{\dagger} \hat{b}_{j}+\text{h.c.})\] \[+V\sum_{\langle i,j\rangle}\hat{C}_{i,j}\left[\hat{n}_{i}(1-\hat {n}_{j})+\hat{n}_{j}(1-\hat{n}_{i})\right], \tag{2}\] and \[\hat{H}_{\text{dis}}=\sum_{i=1}^{L}\epsilon_{i}\hat{n}_{i}. \tag{3}\] Here, \(\hat{b}_{i}^{\dagger}\) (\(\hat{b}_{i}\)) are bosonic creation (annihilation) operators subject to an onsite hardcore constraint and \(\hat{n}_{i}\) are the number operators at a given site \(i\), with \(L\) the number of sites. The first term of \(\hat{H}_{\text{KCM}}\) represents the hopping with amplitude \(J\), and \(\hat{C}_{i,j}=1-\prod_{k}\hat{n}_{k}\) defines the kinetic constraint where \(k\) denotes all the common neighbor sites of \(i\) and \(j\). The particle-hole interaction is defined by the second term with interaction strength \(V\), which is also subject to the constraint. \(\hat{H}_{\text{dis}}\) stands for the disorder in the system where \(\epsilon_{i}\) are uniform random numbers drawn from a box distribution \(\epsilon_{i}\in[-W,W]\). \(W\) is the strength of the disorder potential. We will also consider a system _without_ the constraint, that is, a Hamiltonian \(\hat{H}_{\text{un}}\) instead of \(\hat{H}_{\text{KCM}}\) obtained from setting \(\hat{C}_{i,j}=1\) in \(\hat{H}_{\text{KCM}}\). We define the filling as \(\nu=N/L\), where \(N\) is the particle number. A central quantity in our analysis will be density autocorrelations \(c(t)\), defined as \[c(t)=\frac{1}{L}\sum_{i=1}^{L}\frac{\langle\psi(0)|\hat{n}_{i}(t)\hat{n}_{i}(0 )|\psi(0)\rangle}{\nu(1-\nu)}-\frac{\nu}{1-\nu}\,, \tag{4}\] which we average over _all_ sites [40]. Here, \(|\psi(0)\rangle\) denotes the initial state, which in our case are always product states in the computational basis. 
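As a concrete illustration of Eqs. (1)-(4), the following minimal exact-diagonalization sketch builds the constrained Hamiltonian on a small zig-zag ladder and evaluates the density autocorrelation \(c(t)\) for one computational-basis initial state. The geometry (open boundaries), the parameter values, and the choice of initial state are illustrative assumptions, not the exact setup used in this work.

```python
import itertools
import numpy as np

# Illustrative parameters: L sites on an open zig-zag ladder at filling nu = N/L = 3/4
L, N = 12, 9
J, V, W = 1.0, 4.0, 0.5
rng = np.random.default_rng(1)

# Zig-zag (triangular) ladder with open boundaries: bonds (i, i+1) and (i, i+2)
bonds = [(i, i + 1) for i in range(L - 1)] + [(i, i + 2) for i in range(L - 2)]
neigh = {i: set() for i in range(L)}
for a, b in bonds:
    neigh[a].add(b)
    neigh[b].add(a)

# Hard-core boson occupation basis at fixed particle number N
basis = [s for s in itertools.product((0, 1), repeat=L) if sum(s) == N]
index = {s: k for k, s in enumerate(basis)}
D = len(basis)                              # C(12, 9) = 220 states

eps = rng.uniform(-W, W, size=L)            # onsite disorder, Eq. (3)

def constraint(s, i, j):
    """C_{ij} = 1 unless every common neighbour of i and j is occupied."""
    common = neigh[i] & neigh[j]
    return any(s[k] == 0 for k in common) if common else True

H = np.zeros((D, D))
for k, s in enumerate(basis):
    H[k, k] += np.dot(eps, s)                          # disorder term
    for i, j in bonds:
        if s[i] != s[j] and constraint(s, i, j):
            H[k, k] += V                               # constrained particle-hole interaction
            t = list(s)
            t[i], t[j] = t[j], t[i]                    # constrained hop i <-> j
            H[k, index[tuple(t)]] -= J

# Density autocorrelation c(t), Eq. (4), for one computational-basis initial state
evals, evecs = np.linalg.eigh(H)
psi0_cfg = basis[0]                                    # an arbitrary product state
psi0 = np.zeros(D)
psi0[index[psi0_cfg]] = 1.0
coef = evecs.T @ psi0
ndiag = np.array(basis, dtype=float)                   # <s|n_i|s>, shape (D, L)
nu = N / L
for t in (0.0, 1.0, 10.0, 100.0):
    psit = evecs @ (np.exp(-1j * evals * t) * coef)
    n_t = (np.abs(psit) ** 2) @ ndiag                  # <n_i(t)> for each site
    c_t = np.dot(n_t, psi0_cfg) / (L * nu * (1 - nu)) - nu / (1 - nu)
    print(f"t = {t:6.1f}   c(t) = {c_t: .3f}")
```

At \(t=0\) the snippet returns \(c(0)=1\) by construction, and the long-time behavior for different \(V\) and \(W\) reproduces the qualitative trends discussed in the following sections.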
## III Methods The Hamiltonian is implemented as the matrix representation in the basis of joint eigenvectors of the local density operators \(\hat{n}_{i}\). In this form the constraint is just an on/off flag for each matrix element depending on the existence of neighboring vacant sites. Time evolution and expectation values of energy eigenstates are obtained by exact diagonalization [73; 74] where eigenvalue decomposition of the Hamiltonian is calculated using LAPACK [75]. The time average \(\overline{e}(t)\) of the autocorrelation is obtained as a weighted average of values at times \(t_{i}J=10^{\alpha i}\) with \(0.05\leq t_{i}J\leq t\) and \(\alpha\approx 0.04\) where the weight of a point at time \(t_{i}\) is the length of the time interval \(t_{i+1}-t_{i}\). In principle, this leads to an overestimate of the long-time value, which, however, does not affect our analysis since we are considering the dynamics over many decades in time. Note that the plateaus in \(c(t)\) without a time average are obscured by large temporal fluctuations [40]. Disorder averages of the time average of the autocorrelation are taken over samples of different disorder realizations and all initial configurations equivalent under symmetry transformation (mainly translation symmetry) in the clean model. The latter is just done to reduce the number of necessary disorder realizations. The number of disorder samples is chosen such that the uncertainty of the average is smaller than the linewidth in the respective plots. ## IV Long-lived dynamics in the presence of disorder ### Clean case As mentioned above, \(\hat{H}_{\mathrm{KCM}}\) hosts metastable states in the \(V>J\) limit, where many initial states show a plateau in the density autocorrelation function \(\overline{e}(t)\). This behavior has been discussed in [40], which we here recapitulate to lay the ground for the discussion of the disorder case. We will focus on results for \(L=12\) and a high filling of \(\nu=3/4\). Before going further, we want to illustrate the underlying physics of the existence of the plateaus in the relaxation dynamics, which also gives insight into the non-thermalizing behavior in the presence of small disorder strength. In Figs. 3(a)-(c) and 4(a)-(c), we present the time evolution of the density autocorrelation function \(\overline{e}(t)\) computed for the model with constraint and without constraint, respectively. The results are computed for three interaction strengths, \(V/J=1,4,16\). As already shown in Ref. [40], a sufficiently large ratio of \(V/J\) causes metastable dynamics, as clearly seen in Fig. 3(b). Specifically, \(\overline{e}(t)\) develops a plateau for those initial states that involve one horizontal dimer and one isolated hole (solid orange lines). Increasing \(V/J\) leads to longer plateaus (note the logarithmic scale on the time axis). The emergence of this slow dynamics becomes particularly evident by comparison to the data for the model without a kinetic constraint, where such plateaus are absent. Moreover, the KCM exhibits completely frozen states, namely those with only isolated holes, for which \(c(t)=1\) for all times [see the solid blue lines in Figs. 3(a)-(c)]. These correspond to exact quantum-scar states with no hybridization with the rest of the spectrum and zero entanglement. The density autocorrelations for all other initial states decay quickly on a time scale set by \((V/J)^{2}\)[40]. 
#### iv.1.1 Perturbative estimate of time scales In order to guide the following discussion of the combined effect of disorder and interactions, we provide a discussion of the relevant time scales for the decay of the Figure 3: Dynamics of the constrained model: \(\overline{e}(t)\) for all initial product states plotted for a system of \(L=12\) sites for different interaction strength and disorder strength. The results are averaged over the groups of initial states according to the classification from Fig. 2 and 20 disorder samples. Figure 2: Sketch of classes of initial states for \(L=12\). We distinguish the initial states by their interaction energy. For \(E/V=2\), there are three different types of initial states. Those states with \(V=0\) that have only isolated holes are exact quantum-scar states. metastable states. These clearly depend on \(V\) and the basic mechanism can be extracted from considering initial states with one horizontal dimer and one hole (see the state with \(E=2V\) in Fig. 2). For an isolated hole to be able to move, a horizontal dimer needs to first flip into a vertical dimer which involves an energy cost of \(\Delta E=2V\). The vertical dimer can then propagate through the system and can absorb and reemit the hole, with eventually returning to the subspace with one horizontal dimer. There are several possible intermediate states that involve three connected vacancies. The lowest-order process involves a state with a trimer and \(E=4V\) (see Fig. 2) and leads to a contribution in the order of \(J^{4}/V^{3}\). Going through an intermediate state with vacancies in a triangle leads to \(J^{5}/V^{4}\). Note that the motion of a horizontal dimer itself in a background of occupied sites goes with \(J^{3}/V^{2}\). These simple arguments are consistent with the dependence of the plateau width of the metastable states at \(W=0\) [see Fig. 3(b)] on the interaction strength (not shown here). In summary, the perturbation theory argument explains the dependence of the plateau length on \(V\) and the sensitivity of different types of initial states to the constraint. Since we can therefore view single holes as heavy objects with a small tunneling amplitude, the addition of disorder should lead to a rapid localization. ### Disorder and interactions We next discuss the effect of disorder on the time dependence of the autocorrelation functions. These are shown in Figs. 3(d)-(i) and Figs. 4(d)-(i) for the constrained and unconstrained model, respectively. We show averages over those initial states that have the same configurations according to Fig. 2. Remarkably, even disorder strengths substantially smaller than the bare bandwidth \(W\sim J\lesssim 4J\) prevent the density autocorrelations from decaying over many decades for all initial states in the presence of constraints [see Figs. 3(d)-(f)]. Increasing the disorder strength leads to a higher long-time saturation value of \(\bar{c}(t)\). As we shall show later, the density autocorrelations do not decay at all for the system sizes considered. Therefore, on finite system sizes, disorder leads to nondecaying dynamics. We stress the difference to the definition of metastable dynamics used here which refers to long-lived correlations that eventually decay already on finite systems. For the model without a kinetic constraint, there are noticeably less states that acquire plateaus in \(\bar{c}(t)\) [see, e.g., Fig. 4(e)], which also requires higher values of \(W\). 
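As a back-of-the-envelope check of the perturbative estimates given above, each effective amplitude \(\sim J(J/V)^{n}\) can be translated into a characteristic time scale \(t\sim V^{n}/J^{n+1}\) (prefactors dropped, \(\hbar=1\)). The short sketch below, an illustration rather than a quantitative prediction, simply tabulates how quickly these scales grow with \(V/J\), consistent with the observed lengthening of the plateaus.

```python
# Order-of-magnitude time scales for the perturbative processes discussed above:
# an effective amplitude ~ J^(n+1)/V^n implies a time scale t ~ V^n/J^(n+1).
processes = {
    "dimer motion      ~ J^3/V^2": 2,
    "hole via trimer   ~ J^4/V^3": 3,
    "hole via triangle ~ J^5/V^4": 4,
}
J = 1.0
for V in (1.0, 4.0, 16.0):
    for name, n in processes.items():
        t = V**n / J**(n + 1)
        print(f"V/J = {V:4.0f}  {name}:  tJ ~ {t:,.0f}")
```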
The difference between the two cases is best illustrated by plotting the fraction of metastable initial states as a function of disorder strength shown in Fig. 5 for three system sizes \(L=12,16,20\) and both models at \(V=4J\). We consider a state to be metastable when \(c(t_{\rm thresh})>\epsilon\) with \(t_{\rm thresh}=500/J\) and \(\epsilon=0.15\). In this analysis, we are sensitive to density autocorrelations that remain large in the short-time regime, but at times larger than the generic decay time set by \((V/J)^{2}\)[40]. Clearly, the kinetic constraints lead to a short-time plateau of \(\bar{c}(t)\) for an order of magnitude smaller values of \(W\) than in the case without kinetic constraints. Increasing system size suppresses the metastable states in both cases, yet much more significantly so in the case without a constraint. Figure 4: Dynamics of the unconstrained model: \(\bar{c}(t)\) for all initial product states are plotted for a system of \(L=12\) sites for different interaction strength and disorder strength without the kinetic constraint. The results are averaged over the groups of initial states according to the classification from Fig. 2 and 20 disorder samples. Figure 5: Fraction of metastable states as a function of \(W\) for (a) constrained model and (b) unconstrained model for \(L=12,16,20\) and \(V=4J\). We consider the dynamics to be metastable when \(\bar{c}(t_{\rm thresh})>\epsilon\) with \(\epsilon=0.15\) and \(t_{\rm thresh}=500/J\). The results do not quantitatively depend on the choice of \(\epsilon\) and \(t_{\rm thresh}\) for reasonable choices of these parameters. Another noteworthy effect of the constraint is to affect the plateau height of \(\overline{c}(t)\) for the different groups of initial states as defined in Fig. 2. The comparison of, e.g., Figs. 3(b) and Fig. 4(b), shows that in the presence of the constraint, the states with one hole and one horizontal dimer are the most susceptible to disorder, while in the absence of the constraint, these states exhibit the lowest values and states with three vacancies in neighboring sites have the largest \(\overline{c}(t)\) in the plateau. The sequence of plateau values for the unconstrained model \(\hat{H}_{\text{un}}\) results from the interplay of available hopping processes versus the interactions that favors clusters of vacancies and is thus subject to details of the values of \(W\) and \(V\). One immediately wonders about the temporal extension of the plateaus in \(\bar{c}(t)\) at \(W>0\)[40, 42] and whether they persist to infinite times as expected for many-body-localization. While definite statements about the thermodynamic limit are difficult, we can compute the diagonal ensemble expectation values [76] of the autocorrelation, \(c_{\text{diag}}\). Figure 6(a) contains the data for \(L=12\) and the different groups of initial states. The difference to the unconstrained model is the most obvious from the comparison of Figs. 6(b) and 6(c) that display the infinite-temperature average over all states for different system sizes. Using such data, we can demonstrate that there is no decay on finite systems (ad-hoc measured by \(c(\infty)>0.1\) for \(L=20\) data) for \(W/J\gtrsim 0.4\cdot 10^{-4}\), as shown in Fig. 6(b) for the constrained model. For the unconstrained case [see Fig. 6(c)], the density autocorrelations acquire a nonzero long-time value for orders of magnitude larger values of \(W/J\gtrsim 7.74\). The dependence of \(c_{\text{diag}}\) on \(W\) from Fig. 
6(b) resembles the one of the fraction of states with metastable shown in Fig. 5(a) as it increases at around the same value of \(W\). However, the diagonal ensemble misses information about metastable states on finite systems, which are captured in Fig. 5(a). The temporal and long-time behavior of \(\overline{c}(t)\) is markedly different from the stretched exponential decay of autocorrelations suggested for the prethermal-MBL regime [28], which may thus be restricted to much smaller values of \(W/J\). While here we focus on the emergence of non-decaying autocorrelations (averages and for individual initial computational basis states), there is possibly another interesting regime at weaker disorder. Given the different dynamics of initial states, some classes of those lead to non-decaying dynamics for smaller values of disorder than others. The remaining fast decaying states may be sufficient to ensure delocalization for all states eventually. Substantiating this scenario is left for future work. ## V Localization-Delocalization transition So far, we have established that the kinetic constraints lead to non-decaying density auto-correlations for much smaller values of \(W\) compared to the case without constraints. We now complement this picture by studying the finite-size eigenstate transition from a delocalized regime to a putative many-body localized regime. To that end, we compute the occupation distance [71, 72] that is extracted from distributions of local densities \(\langle\hat{n}_{i}\rangle\) sampled over sites, disorder realizations, and eigenstates. The distributions exhibit a bimodal structure in the localized regime, while they are normal-distributed around the average filling in the delocalized case [77, 78]. The occupation distance \(\delta n_{i}\) is computed from each eigenstate expectation value of the onsite density \(n_{i}=\langle\psi|\hat{n}_{i}|\psi\rangle\) as the distance from the closest integer (\(|\psi\rangle\) denotes an eigenstate of the Hamiltonian). The definition reads \[\delta n_{i}=|n_{i}-[n_{i}]|. \tag{5}\] If the states are localized in nature, the average \(\overline{\delta n_{i}}\) taken over disorder, sites and eigenstates goes close to zero and is practically system-size independent, while if the states are extended, \(\overline{\delta n_{i}}\) approaches the filling \(\nu\) for \(\nu\leq 1/2\) or \(1-\nu\) for \(\nu>1/2\)[71, 72], where \(\nu=N/L\). The average of \(\delta n_{i}\) over sites and all eigenstates for a given parameter set has been shown to capture delocalization-localization Figure 6: Diagonal ensemble expectation value \(c_{\text{diag}}\) of the density autocorrelator for (a) \(L=12\) for all groups of initial states. In (b) and (c) we show the infinite-temperature averages for \(L=12,16,20\) for (b) the constrained model and (c) the unconstrained model. transitions [72], including the known critical behavior of transitions in non-interacting models. In Figs. 7(a) and (b), we plot \(\overline{\delta n_{i}}/(1-\nu)\), which is averaged over 20 disorder realizations, as a colour plot in the \(W-V\) parameter space for different system sizes for \(\nu=3/4\). These state diagrams show that disorder affects the system differently for the constrained and unconstrained cases. In the former, the transition sets in at values of \(W\) that are an order of magnitude smaller than in the latter case [compare the lines in the figures, indicating equal values of \(\overline{\delta n_{i}}/(1-\nu)\)]. 
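To make Eq. (5) concrete, the short sketch below evaluates the site- and eigenstate-averaged occupation distance in the two limiting cases quoted above: eigenstates that are computational-basis product states (the fully localized limit) and a fully delocalized uniform superposition, for which \(\overline{\delta n_{i}}\) tends to \(0\) and \(1-\nu\), respectively. In practice one would feed in the eigenvectors of the disordered Hamiltonian, e.g. from the exact-diagonalization sketch given earlier; the averaging over disorder realizations is omitted here.

```python
import itertools
import numpy as np

def occupation_distance(evecs, ndiag):
    """Mean occupation distance, Eq. (5): average over eigenstates and sites of
    |<n_i> - nearest integer|, with <n_i> computed from eigenvector weights."""
    n_exp = (np.abs(evecs.T) ** 2) @ ndiag       # <n_i> per eigenstate, shape (n_states, L)
    return np.mean(np.abs(n_exp - np.rint(n_exp)))

# Two analytic limits for nu = 3/4 on L = 8 sites (D = C(8, 6) = 28 basis states)
L, N = 8, 6
basis = [s for s in itertools.product((0, 1), repeat=L) if sum(s) == N]
ndiag = np.array(basis, dtype=float)
D = len(basis)

# (i) fully localized limit: eigenstates are product states  ->  delta n -> 0
print(occupation_distance(np.eye(D), ndiag))                 # 0.0

# (ii) fully delocalized limit: a single uniform superposition -> delta n -> 1 - nu
uniform = np.full((D, 1), 1 / np.sqrt(D))
print(occupation_distance(uniform, ndiag))                   # 0.25
```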
In the limit of small values of \(V\), neither system becomes Anderson localized for all values of \(W\) since even in the absence of particle-hole interactions \(V=0\), hard-core bosons are interacting particles. Large values of \(V\) favor localization and therefore, the localized region wins over the delocalized one there. The finite-size dependence of the occupation distance is illustrated in Fig. 8 for \(V=4J\). In the unconstrained model, the occupation distance already approaches its asymptotic value of \(\delta n_{i}=0.25\) in the range \(W\lesssim 10J\), indicating delocalization. In the constrained model, the data do not reach the limiting value anywhere yet, supporting the presence of much larger finite-size effects and the presumably larger extension of the localized phase. We have also studied the distribution of the half-chain entanglement entropy [79] computed in eigenstates. The \(S_{\rm vN}\) is calculated from a bipartition of the system into \(A\) and \(B\) subsystems (here of equal length) and calculating the reduced density matrix of one of the subsystems (\(\hat{\rho}_{A/B}=\mathrm{Tr}_{B/A}|\psi\rangle\langle\psi|\)), as \[S_{\rm vN}=-\mathrm{Tr}[\hat{\rho}_{A}\mathrm{ln}\hat{\rho}_{A}]. \tag{6}\] Our results for the distribution \(P(S_{\rm vN})\) sampled over eigenstates and disorder realizations displayed in Fig. 9 further corroborate the notion that the constrained model tends to localize much faster. Already at \(W/J=2\), there is a broad distribution around a small mean value, with a larger additional peak at \(S_{\rm vN}=0\) stemming from the fully localized states. At \(W/J=10\), \(P(S_{\rm vN})\) of the constrained model has the typical shape for a many-body localized system [77], with a maximum at \(S_{\rm vN}=0\), tails, and a local maximum at \(S_{\rm vN}\approx 0.7\approx\ln 2\) (related to two-body resonances [77]), while the unconstrained model still exhibits a broad distribution around much larger values of \(S_{\rm vN}\). Even at \(W/J=20\), the unconstrained model does not show the sharp global maximum at small values of \(S_{\rm vN}\) yet. Other quantities, such as the gap ratio [80], lead to the same picture (results not shown here). The analysis of the gap ratio is, however, plagued by regions with a vanishing density of states (that is, gaps in the many-body spectrum). In conclusion, both the analysis of the time-dependence of autocorrelations and of measures of a delocalization-localization transition yield the same picture, namely, the kinetic constraints significantly favor localization. A scaling analysis of the stability of the localized phase is beyond the scope of our work and left for future work, similar to the case of the localized phase in the East-random energy model [67]. ## VI Conclusions In our work, we provided numerical evidence that a certain type of kinetic constraints in cooperation with interactions leads to an enhanced tendency towards localization in the presence of uncorrelated disorder. We established this result by primarily considering the time evolution of density autocorrelations corroborated by measures for an eigenstate localization transition such as oc Figure 8: System-size dependence of the disorder-averaged \(\overline{\delta n_{i}}\) for (a) constrained model and (b) unconstrained model, both for \(V=4J\) and \(L=12,16,20\) sites averaged over (at least) \(400,30,5\) disorder configurations, respectively. 
Figure 7: Delocalization-localization state diagram in the \(W-V\) plane extracted from the normalized disorder-averaged occupation distance \(\frac{\overline{n}_{i}}{1-\nu}\), which is represented according to the color scale, for (a) the constrained and (b) the unconstrained model, both for \(L=16\). Here, \(\nu=3/4\) and the \(\frac{\overline{n}_{i}}{1-\nu}\) is averaged over 20 disorder realizations, leading to a statistical variation of less than 0.03. cupation distance of density distributions and entanglement entropy. Our conclusion relies strongly on the direct comparison to particles that live on the same lattice topology yet are not subject to the kinetic constraints. On finite systems, the crossover to localization occurs typically at an order of magnitude smaller values of disorder in the presence of the kinetic constraints, compared to when these are absent. While our study is subject to the limitations of exact diagonalization and therefore, small system sizes, they still suggest more stable localization in our KCM than in the absence of kinetic constraints. Putting this onto more theoretical grounds and on a broader data basis in terms of examples of KCMs is left for future work. Based on our results, it seems likely that other mechanisms will also render many-body systems more susceptible to disorder, at least in the sense of long-lived correlations in the time domain. These include flat-band systems [81, 82, 83], systems with frustration [84, 85], and systems with emergent particle excitations with a narrow bandwidth such as heavy fermions or polarons in electron-phonon systems [86, 87]. We acknowledge useful discussions with K. Hazzard, I. Lesanovsky, D. Luitz, and L. Vidmar. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 499180199, 436382789, 493420525 via FOR 5522 and large-equipment grants (GOEGrid cluster). This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. The data shown in the figures will be made available as ancilla files on the arxiv.
In the study of the thermalization of closed quantum systems, kinetic constraints play a significant role in the temporal dynamics and the eventual thermalization, and they are attracting considerable attention. Kinetic constraints typically give rise to long-lived metastable states that depend on the initial conditions. In this work, we consider a model of interacting hardcore bosons with an additional kinetic constraint originally devised to capture glassy dynamics. As a result, the system is shown to be highly prone to localization when subjected to uncorrelated disorder. The effect of adding disorder shows up in the time evolution of the density autocorrelations. Moreover, the kinetic constraint also favors localization in the eigenstates, and a finite-size transition to a many-body localized phase occurs at much lower disorder strengths than in the model without the kinetic constraint. Our work
2309.05499
Zero-Shot Co-salient Object Detection Framework
Co-salient Object Detection (CoSOD) endeavors to replicate the human visual system's capacity to recognize common and salient objects within a collection of images. Despite recent advancements in deep learning models, these models still rely on training with well-annotated CoSOD datasets. The exploration of training-free zero-shot CoSOD frameworks has been limited. In this paper, taking inspiration from the zero-shot transfer capabilities of foundational computer vision models, we introduce the first zero-shot CoSOD framework that harnesses these models without any training process. To achieve this, we introduce two novel components in our proposed framework: the group prompt generation (GPG) module and the co-saliency map generation (CMP) module. We evaluate the framework's performance on widely-used datasets and observe impressive results. Our approach surpasses existing unsupervised methods and even outperforms fully supervised methods developed before 2020, while remaining competitive with some fully supervised methods developed before 2022.
Haoke Xiao, Lv Tang, Bo Li, Zhiming Luo, Shaozi Li
2023-09-11T14:42:04
http://arxiv.org/abs/2309.05499v3
# Zero-shot Co-salient Object Detection Framework ###### Abstract Co-salient Object Detection (CoSOD) endeavors to replicate the human visual system's capacity to recognize common and salient objects within a collection of images. Despite recent advancements in deep learning models, these models still rely on training with well-annotated CoSOD datasets. The exploration of training-free zero-shot CoSOD frameworks has been limited. In this paper, taking inspiration from the zero-shot transfer capabilities of foundational computer vision models, we introduce the first zero-shot CoSOD framework that harnesses these models without any training process. To achieve this, we introduce two novel components in our proposed framework: the group prompt generation (GPG) module and the co-saliency map generation (CMP) module. We evaluate the framework's performance on widely-used datasets and observe impressive results. Our approach surpasses existing unsupervised methods and even outperforms fully supervised methods developed before 2020, while remaining competitive with some fully supervised methods developed before 2022. Haoke Xiao\({}^{1}\) Lv Tang\({}^{2}\) Bo Li\({}^{2}\) Zhiming Luo\({}^{1}\) Shaozi Li\({}^{1}\)\({}^{1}\)+\({}^{1}\) Institute of Artificial Intelligence, Xiamen University, Xiamen, China \({}^{2}\) VIVO Mobile Communication Company Ltd, Shanghai, China Zero-shot Co-saliency Detection, Foundational Computer Vision Model. Footnote †: Email: hk.xiao.me@gmail.com, luckybird1994@gmail.com, libra@vivo.com, zhiming.luo@xmu.edu.cn and sxlig@xmu.edu.cn. ## 1 Introduction Co-salient object detection (CoSOD) is a task that seeks to replicate the human visual system's ability to identify common and salient objects from a set of related images. One of the unique challenges in CoSOD is that co-salient objects belong to the same semantic category, but their specific category attributes remain unknown. These distinctive characteristics have made CoSOD an emerging and demanding task that has gained rapid traction in recent years [1, 2, 3]. Detecting co-saliency fundamentally hinges on accurately modeling the inter relation within an image group. To tackle this challenge, various meticulously designed architectures have been proposed, including RNN-based methods [5, 9], CNN-based methods [10, 6, 11], and Transformer-based methods [8, 12]. While these methods have achieved impressive performance, they often rely on small-scale datasets and require the integration of complex network modules. It's important to highlight that previous studies [1, 9] show that changing the training data while keeping the same network architecture, or modifying the backbone network while using the same training data can significantly impact network performance. This suggests that improving network performance might be attainable by employing a more robust backbone network or higher-quality training datasets. These considerations prompt us to reevaluate whether addressing the CoSOD task necessitates the design of intricate and diverse modules, or if an alternative approach should be explored. Recently, foundational computer vision (CV) models, such as SAM [13] and DINO [14], have emerged. These models, once trained, can be seamlessly applied to various downstream tasks in a zero-shot manner, eliminating the need for dataset-specific fine-tuning. This prompts us to explore whether these CV foundational models can be harnessed for CoSOD. 
However, existing CV foundational models, like SAM, are tailored for single-image tasks and lack the capacity to discern inter-saliency relations within an image group. Moreover, manually supplying SAM with inter-saliency or group prompts, which aid in co-saliency map generation, is impractical due to the nature of the CoSOD task. To tackle the aforementioned challenges, we present an innovative zero-shot CoSOD framework that leverages foundational computer vision models. As depicted in Fig. 1, our framework consists of two main components: group prompt generation (GPG) and co-saliency map generation (CMP). In the GPG module, we initially extract high-level semantic information from each image using the CV foundational model. We also explore the supplementation of low-level spatial details using Stable Diffusion (SD) [15], which may not be captured by the foundational model. Subsequently, we combine these pieces of information to generate group prompts. These prompts, created by the GPG module, serve as input for the CMP module. As depicted in Fig. 1, our network surpasses methods developed before 2021.

Figure 1: Left: The architecture of our proposed zero-shot CoSOD framework. Right: The performance of our proposed zero-shot CoSOD framework. GWD [4], RCAN [5], ICNet [6], CADC [7] and UFO [8] are five typical methods.

Our key contributions are:

* We take the pioneering step of introducing a zero-shot CoSOD framework, potentially inspiring researchers to address CoSOD from a fresh perspective.
* To address the limitations of existing CV foundational models when applied to the CoSOD task, we further design the GPG and CMP modules.
* We validate our zero-shot CoSOD framework on three widely used datasets (CoCA [16], CoSOD3k [2] and Cosal2015 [17]), and the performance shows the effectiveness of our proposed zero-shot framework.

## 2 Method

As shown in Fig. 2, since the CMP module directly utilizes SAM, our emphasis lies in providing an in-depth description of the key components within GPG. GPG encompasses feature extraction, group center proxy generation, and TopK selection.

### Feature Extraction

**High-level Feature Extraction.** Existing works [18, 14] demonstrate that self-supervised ViT features, as exemplified by DINO, contain explicit information for semantic segmentation and excel as KNN classifiers. In essence, DINO is adept at accurately extracting the semantic content of each image, a vital aspect in discerning the group features within an image set. Herein, we choose the 11th-layer feature \(\mathcal{F}_{DINO}\) to represent the semantic information of each image. **Low-level Feature Extraction.** While DINO excels in providing substantial high-level semantics, it falls short in delivering nuanced low-level spatial information. As shown in the second column of Fig. 3, the group feature generated solely through DINO lacks low-level detailed information. As emphasized in previous studies [19, 9], both low-level and high-level features are pivotal for modeling group features. However, there is a research gap regarding the supplementation of low-level spatial information to features extracted by DINO in a zero-shot manner. In our proposed network, the inclusion of a pre-trained model that specializes in low-level spatial information becomes crucial, particularly in scenarios lacking strong texture cues. Such a model can effectively complement the features extracted by DINO with low-level spatial information. Notably, SD [15] has recently showcased its exceptional ability to generate high-quality images.
This underscores its potential for robustly representing images, encompassing both content and spatial information. Consequently, our primary objective is to explore whether SD features can enhance the establishment of inter-relationships when combined with DINO.

Figure 3: The generated group features.

Figure 2: The architecture of our proposed zero-shot CoSOD framework. Feature extraction is accomplished by utilizing **DINO** and **SD** to extract both high-level and low-level information. The CMP module employs **SAM** to generate the co-saliency maps. Importantly, all parameters in the network remain frozen, eliminating the need for additional training.

The architecture of SD comprises three key components: an encoder \(\mathcal{E}\), a decoder \(\mathcal{D}\), and a denoising U-Net \(\mathcal{U}\) operating within the latent space. We begin by projecting an input image \(x_{0}\) into the latent space through the encoder \(\mathcal{E}\), resulting in a latent code \(z_{0}\). Subsequently, we add Gaussian noise \(\epsilon\) to the latent code according to a predefined time step \(t\). Lastly, with the latent code \(z_{t}\) at time step \(t\), we extract the SD features \(\mathcal{F}_{SD}\) using the denoising U-Net: \[\mathcal{F}_{SD}=\mathcal{U}(z_{t},t),\;z_{t}=\sqrt{\bar{a}_{t}}z_{0}+\sqrt{1-\bar{a}_{t}}\epsilon,\;z_{0}=\mathcal{E}(x_{0}). \tag{1}\] In accordance with the approach introduced in [20], we combine features extracted from different decoder \(\mathcal{D}\) layers, specifically layers 2, 5, and 8, to capture multi-scale features. However, a direct concatenation of features from these three layers results in an excessively high-dimensional feature vector, of approximately 5440 dimensions. To address this issue, we employ Principal Component Analysis (PCA) for each feature layer. Subsequently, we upsample the lower-resolution features (i.e., layers 2 and 5) to match the resolution of the higher-resolution layer (i.e., layer 8) before concatenation. **Feature Fusion.** Expanding on the aforementioned discussions, we introduce a simple yet remarkably effective fusion strategy aimed at harnessing the advantages of both SD and DINO features. The core idea is to independently normalize both sets of features, ensuring consistency in their scales and distributions, and subsequently concatenate them: \[\mathcal{F}_{FUSE}=Concat(\|\mathcal{F}_{SD}\|_{2},\|\mathcal{F}_{DINO}\|_{2}). \tag{2}\] As shown in the third column of Fig. 3, the fused feature aids in generating a smoother and more resilient group feature.

### Group Center Proxy Generation and TopK Selection

We acquire the feature \(\mathcal{F}_{FUSE}\in\mathbb{R}^{C\times H\times W}\) for each image, a valuable asset for robust group information generation. However, the subsequent challenge lies in disseminating the group information to individual images. In existing CoSOD methods, group information is often expressed as a feature map that is directly concatenated with the original image, followed by a trainable decoder for co-saliency map prediction. In our proposed zero-shot framework, training a new decoder network on a CoSOD training dataset is not feasible. To tackle this challenge, we transform the representation of group information and introduce a pixel-level group center proxy generation and TopK selection mechanism. Through this approach, we can create group prompt points that represent the co-salient objects in each image, which, in turn, prompt SAM to generate the corresponding maps.
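A minimal sketch of the feature-preparation pipeline described in Eqs. (1)-(2) is given below: per-layer PCA reduction and upsampling of the SD decoder features, followed by L2 normalization and concatenation with the DINO features. The tensor shapes, the number of retained PCA components, and the helper names (`pca_reduce`, `fuse`) are illustrative assumptions rather than the authors' implementation; in practice the SD and DINO feature maps would come from the pre-trained models discussed above.

```python
import torch
import torch.nn.functional as F

def pca_reduce(feat, q):
    """Reduce a (C, H, W) feature map to (q, H, W) with a per-layer PCA projection."""
    C, H, W = feat.shape
    X = feat.reshape(C, -1).T                       # (H*W, C) pixel embeddings
    Xc = X - X.mean(dim=0, keepdim=True)
    _, _, V = torch.pca_lowrank(Xc, q=q)
    return (Xc @ V).T.reshape(q, H, W)

def fuse(sd_layers, dino_feat, q=64):
    """Sketch of Eq. (2): PCA-reduce and upsample the SD decoder features, then
    L2-normalize SD and DINO features along channels and concatenate them."""
    target = sd_layers[-1].shape[-2:]               # resolution of the finest SD layer
    reduced = [F.interpolate(pca_reduce(f, q).unsqueeze(0), size=target,
                             mode="bilinear", align_corners=False)
               for f in sd_layers]
    f_sd = torch.cat(reduced, dim=1)                # (1, 3q, H, W)
    f_dino = F.interpolate(dino_feat.unsqueeze(0), size=target,
                           mode="bilinear", align_corners=False)
    return torch.cat([F.normalize(f_sd, dim=1), F.normalize(f_dino, dim=1)], dim=1)

# Dummy tensors standing in for SD decoder layers 2, 5, 8 and a DINO feature map.
sd_layers = [torch.randn(1280, 16, 16), torch.randn(960, 32, 32), torch.randn(640, 64, 64)]
dino_feat = torch.randn(768, 32, 32)
print(fuse(sd_layers, dino_feat).shape)             # torch.Size([1, 960, 64, 64])
```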
Assuming an image group comprises \(N\) images, we concatenate these features to generate the group feature \(\mathcal{F}_{G}\in\mathbb{R}^{N\times C\times H\times W}\). Subsequently, we reshape \(\mathcal{F}_{G}\) into the shape \(\mathbb{R}^{NHW\times C}\), denoting each pixel embedding as \(\mathcal{F}_{G}^{In}\), where \(l\in[1,NHW]\) and \(n\) means the \(n\)-th image. \(\mathcal{F}_{G}^{In}\) means that the \(l\)-th pixel embedding belongs to \(n\)-th image. Then, we use the easy averaging operation on these pixel embeddings to generate the group center proxy \(\mathcal{F}_{c}\). Moreover, to make the group center proxy focus on salient regions, we use the unsupervised SOD method TSDN [21] to filter pixels belonging to non-salient regions in \(\mathcal{F}_{G}^{In}\), generating the salient pixels \(\mathcal{F}_{G}^{In-s}\). The process of the generation of \(\mathcal{F}_{c}\) is written as: \[\mathcal{F}_{c}=Avg\left\{\mathcal{F}_{G}^{In-s}\right\}\in\mathbb{R}^{C}. \tag{3}\] Concretely, for \(N\)-th image which contains \(L\) salient pixels, we calculate the correlation score between \(\mathcal{F}_{c}\) and \(\mathcal{F}_{G}^{LN}\), and use TopK to select the point at position \(P^{N}\) in the image \(N\), which can represent common co-salient objects in this image: \[S^{LN}=\mathcal{F}_{c}\otimes\mathcal{F}^{LN},P^{N}=\text{TopK}(S^{LN})\in \mathbb{R}^{K}, \tag{4}\] where \(\otimes\) means matrix multiplication. Finally, for the \(N\)-th image, the generated prompts at position \(P^{N}\) (Fig. 1 and Fig. 4) and the corresponding original image is sent to CMP to generate the co-saliency maps. We set \(K=2\) in this paper. ## 3 Experiments ### Experimental Setup **Implementation Details.** We employ the Stable Diffusion v1-5 and DINov2 models as our feature extractors, with the DDIM timestep \(t\) in the denoising process set to be 50 by default. We use the SAM with the smallest vit-b backbone. \(N\) is the total image numbers of one certain image group. All experiments are conducted on a single RTX 3090 GPU. **Datasets.** We employ three benchmark datasets, including Cosal2015 [22], CoSOD3k [2] and CoCA [16], to evaluate our approach. Cosal2015 comprises 50 groups with a total of 2015 images. It presents numerous challenges such as complex environments. CoSOD3k, the largest-scale and most comprehensive benchmark, offers 160 groups with a total of 3000 images. CoCA features 80 groups with a total of 1297 images, posing a challenge due to the presence of multiple objects, including relatively small co-salient objects. **Evaluation Metrics.** we employ three widely used criteria: (1) F-measure \((F_{\beta}^{mean})\), representing the harmonic mean of precision and recall values, is calculated using a self-adaptive threshold. (2) Structure Measure \((S_{m})\), which is utilized to assess the spatial structural similarities of saliency maps. (3) Mean Absolute Error \((MAE)\), which quantifies the average L1 distance between ground truth maps and predictions. ### Comparison Methods We compare our method with 7 fully-supervised methods: CSMG [23], ICNet [6], CADC [7], GLNet [24], TCNet [25], CoRP [1] and UFO [8]. The unsupervised CoSOD models in barely explored, we only compare our method with 2 methods: PJO [26] and GOMAG [27]. ### _Quantitative and Qualitative Evaluation._ Table. 1 reveals that our proposed zero-shot CoSOD network consistently outperforms all other state-of-the-art (SOTA) unsupervised CoSOD methods across all evaluation metrics. 
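Before turning to the detailed comparison, a brief aside on the group center proxy and TopK selection of Eqs. (3)-(4) described above: the sketch below illustrates these two steps. The function and variable names are hypothetical, the saliency masks stand in for the output of the unsupervised SOD model (TSDN in the paper), and the random tensors are only there to make the snippet self-contained; in the actual pipeline the fused features of the previous section would be used, and the resulting \(K=2\) points per image would be passed to SAM as point prompts in the CMP stage.

```python
import torch

def group_prompts(group_feats, saliency_masks, K=2):
    """Hypothetical sketch of Eqs. (3)-(4): average all salient pixel embeddings
    in the group into a center proxy F_c, then select, per image, the K salient
    pixels whose embeddings correlate most strongly with F_c."""
    salient = [f.reshape(f.shape[0], -1)[:, m.reshape(-1)]        # (C, L_n)
               for f, m in zip(group_feats, saliency_masks)]
    f_c = torch.cat(salient, dim=1).mean(dim=1)                   # Eq. (3): (C,)

    prompts = []
    for f, m in zip(group_feats, saliency_masks):
        C, H, W = f.shape
        scores = f.reshape(C, -1).T @ f_c                         # Eq. (4): (H*W,)
        scores[~m.reshape(-1)] = float("-inf")                    # keep salient pixels only
        top = torch.topk(scores, k=K).indices
        prompts.append(torch.stack([top // W, top % W], dim=1))   # (K, 2) row/col points
    return prompts  # per-image point prompts for SAM in the CMP stage

# Toy usage with random features and masks (a group of 3 images).
feats = [torch.randn(832, 64, 64) for _ in range(3)]
masks = [torch.rand(64, 64) > 0.5 for _ in range(3)]
print([tuple(p.shape) for p in group_prompts(feats, masks)])      # [(2, 2), (2, 2), (2, 2)]
```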
These results underscore the efficacy of our zero-shot CoSOD network. In comparison to supervised CoSOD methods, our approach surpasses methods published in 2019 and achieves competitive performance compared to those published from 2020 to 2023, as evidenced by certain metrics. _It's worth noting that all modules in our network employ basic design principles, such as simple averaging operations for center proxy point feature extraction. In this configuration, our zero-shot framework achieves such remarkable performance, instilling confidence in researchers to explore CoSOD tasks from a novel zero-shot perspective._ Fig. 4 presents qualitative results from our proposed method, showcasing its ability to accurately detect co-salient objects even in complex scenes. ### _Ablation Analysis_ First, we propose that both high-level and low-level information are pivotal for generating group features. To validate this assertion, we conduct experiments. In the penultimate row of Table. 1, utilizing only the high-level features extracted from DINO already achieves competitive performance within the proposed network. However, it does not enable our zero-shot framework to completely surpass the CSMG method in all metrics. Additionally, the incorporation of low-level information from SD further enhances performance. Another noteworthy contribution of this paper is the assertion that features extracted from foundational models are valuable for generating group features. Consequently, when we incorporate these group features into existing frameworks, including TSCoSOD [9], TCNet [25] and GCoNet+ [28], we observe further performance improvements, as demonstrated in Table. 2. The values in parentheses represent results after retraining with the inclusion of new group features. This underscores that, beyond the framework itself, this paper contributes a zero-shot group feature generation approach. ## 4 Conclusion In this paper, we introduce an innovative zero-shot CoSOD framework. Leveraging the feature extraction capabilities of established DINO and SD, we have devised the GPG and CMP components, enabling the application of existing foundational models to the zero-shot CoSOD task. Our experiments demonstrate that these foundational models can effectively generate resilient group features, and our proposed framework can reasonably address the zero-shot CoSOD task. We envision that our work will serve as a cornerstone for the zero-shot/unsupervised CoSOD task, inspiring researchers to approach CoSOD from a novel perspective. 
\begin{table} \begin{tabular}{c|c|c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Performance Level**} & \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Year**} & \multicolumn{2}{c|}{**CoCA**} & \multicolumn{3}{c|}{**CoSODk**} & \multicolumn{3}{c}{**CoCoCoal2015**} \\ \cline{4-13} & & & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) \\ \hline \multirow{3}{*}{**Future**} & TCNet & TCSVT 2023 & Supervised & 0.685 & 0.548 & 0.101 & 0.832 & 0.797 & 0.668 & 0.870 & 0.859 & 0.054 \\ & UPO & TAM 2023 & Supervised & 0.697 & 0.555 & 0.095 & 0.819 & 0.783 & 0.073 & 0.860 & 0.848 & 0.064 \\ & CoRP & TPAMI 2023 & Supervised & 0.732 & 0.663 & 0.093 & 0.850 & 0.824 & 0.057 & 0.884 & 0.884 & 0.044 \\ \hline \multirow{3}{*}{**Competitive**} & ICNet & NeurIPS 2020 & Supervised & 0.651 & 0.503 & 0.148 & 0.780 & 0.734 & 0.097 & 0.856 & 0.846 & 0.058 \\ & CADC & ICCV 2021 & Supervised & 0.681 & 0.803 & 0.132 & 0.801 & 0.742 & 0.096 & 0.866 & 0.825 & 0.064 \\ & GLNet & TCSV 2022 & Supervised & 0.591 & 0.425 & 0.188 & - & - & - & 0.855 & 0.849 & 0.060 \\ \hline \multirow{3}{*}{**Outperform**} & CSMG & CVPR 2019 & Supervised & 0.632 & 0.494 & 0.124 & 0.711 & 0.662 & 0.157 & 0.774 & 0.775 & 0.130 \\ & PHO & TPAMI 2019 & Unsupervised & 0.573 & 0.362 & 0.175 & 0.677 & 0.631 & 0.188 & 0.721 & 0.687 & 0.192 \\ & GMOAG & TAM 2020 & Unsupervised & 0.587 & 0.387 & 0.170 & 0.687 & 0.642 & 0.180 & 0.734 & 0.698 & 0.187 \\ \hline \multirow{3}{*}{**Oma**} & Oma (SD) & 2023 & Zero Shot & 0.683 & 0.532 & 0.121 & 0.717 & 0.669 & 0.127 & 0.776 & 0.787 & 0.110 \\ & Ours & & 0.667 & 0.549 & 0.115 & 0.723 & 0.691 & 0.117 & 0.785 & 0.799 & 0.101 \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison with SOTA on CoSOD datasets. The red font means that our proposed network achieves the competitive performance compared to these methods. \begin{table} \begin{tabular}{c|c|c c c c c c|c c c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Pub. \&Year**} & \multicolumn{3}{c|}{**CoCA**} & \multicolumn{3}{c}{**CoSODk**} & \multicolumn{3}{c}{**CoCoal2015**} \\ \cline{4-13} & & & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) \\ \hline TCSSOOD & TP 2023 & 0.724 & 0.613 & 0.0825 & 0.099 & 0.893 & 0.834 & 0.816 & 0.810 & 0.012 & 0.062 & 0.085 & 0.095 & 0.097 & 0.090 & 0.095 & 0.093 \\ TCNet & TCSVT 2023 & 0.728 (**0.077**) & 0.548 (**0.051**) & 0.101 & 0.092 & 0.873 & 0.737 & 0.843 & 0.797 & 0.841 & 0.088 & 0.077 & 0.859 & 0.077 & 0.054 & 0.094 \\ GCoNet & TPAMI 2023 & 0.758 (**0.749**) & 0.612 (**0.264**) & 0.038 (**0.077**) & 0.843 (**0.855**) & 0.831 & 0.827 & 0.062 & **0.054** & 0.881 & **0.081** & 0.810 & 0.820 & 0.084 \\ \hline Average Improvement & 41.469k & 42.059k & 46.659k & 1.418 & +1.869k & +1.311k & +2.699k & +1.69k & +15.939k \\ \hline \end{tabular} \end{table} Table 2: Performance improvement of existing methods after adding group information extracted by DINO and SD. Figure 4: Visual comparison between our method and other methods.
Co-salient object detection (CoSOD) aims to mimic the ability of the human visual system to recognize the common and salient objects within a collection of images. Despite recent advances in deep learning models, these models still require training on well-annotated CoSOD datasets, and training-free, zero-shot CoSOD frameworks remain largely unexplored. In this paper, inspired by the zero-shot transfer capability of foundation computer vision models, we introduce the first zero-shot CoSOD framework that operates without any training. To this end, we equip the proposed framework with a group prompt generation (GPG) module and a co-saliency map generation (CMP) module. We evaluate the framework on widely used datasets and obtain strong results: our method outperforms existing unsupervised methods and achieves performance comparable to supervised methods published from 2020 onward.
2309.04787
An Analytic Method to Determine the Optimal Time for the Induction Phase of Anesthesia
We obtain an analytical solution for the time-optimal control problem in the induction phase of anesthesia. Our solution is shown to align numerically with the results obtained from the conventional shooting method. The induction phase of anesthesia relies on a pharmacokinetic/pharmacodynamic (PK/PD) model proposed by Bailey and Haddad in 2005 to regulate the infusion of propofol. In order to evaluate our approach and compare it with existing results in the literature, we examine a minimum-time problem for anesthetizing a patient. By applying the Pontryagin minimum principle, we introduce the shooting method as a means to solve the problem at hand. Additionally, we conducted numerical simulations using the MATLAB computing environment. We solve the time-optimal control problem using our newly proposed analytical method and discover that the optimal continuous infusion rate of the anesthetic and the minimum required time for transition from the awake state to an anesthetized state exhibit similarity between the two methods. However, the advantage of our new analytic method lies in its independence from unknown initial conditions for the adjoint variables.
Mohamed A. Zaitri, Cristiana J. Silva, Delfim F. M. Torres
2023-09-09T13:20:20
http://arxiv.org/abs/2309.04787v1
# An Analytic Method to Determine the Optimal Time for the Induction Phase of Anesthesia ###### Abstract We obtain an analytical solution for the time-optimal control problem in the induction phase of anesthesia. Our solution is shown to align numerically with the results obtained from the conventional shooting method. The induction phase of anesthesia relies on a pharmacokinetic/pharmacodynamic (PK/PD) model proposed by Bailey and Haddad in 2005 to regulate the infusion of propofol. In order to evaluate our approach and compare it with existing results in the literature, we examine a minimum-time problem for anesthetizing a patient. By applying the Pontryagin minimum principle, we introduce the shooting method as a means to solve the problem at hand. Additionally, we conducted numerical simulations using the MATLAB computing environment. We solve the time-optimal control problem using our newly proposed analytical method and discover that the optimal continuous infusion rate of the anesthetic and the minimum required time for transition from the awake state to an anesthetized state exhibit similarity between the two methods. However, the advantage of our new analytic method lies in its independence from unknown initial conditions for the adjoint variables. Keywords:pharmacokinetic/pharmacodynamic model; optimal control theory; time-optimal control of the induction phase of anesthesia; shooting method; analytical method; numerical simulations : 49M05; 49N90; 92C45 + Footnote †: journal: Article ## 1 Introduction Based on Guedel's classification, the first stage of anesthesia is the induction phase, which begins with the initial administration of anesthesia and ends with loss of consciousness [1]. Millions of people safely receive several types of anesthesia while undergoing medical procedures: local anesthesia, regional anesthesia, general anesthesia, and sedation [2]. However, there may be some potential complications of anesthesia including anesthetic awareness, collapsed lung, malignant hyperthermia, nerve damage, and postoperative delirium. Certain factors make it riskier to receive anesthesia, including advanced age, diabetes, kidney disease, heart disease, high blood pressure, and smoking [3]. To avoid the risk, administering anesthesia should be carried out on a scientific basis, based on modern pharmacotherapy, which relies on both pharmacokinetic (PK) and pharmacodynamic (PD) information [4]. Pharmacokinetics is used to describe the absorption and distribution of anesthesia in body fluids, resulting from the administration of a certain anesthesia dose. Pharmacodynamics is the study of the effect resulting from anesthesia [5]. Multiple mathematical models were already presented to predict the dynamics of the pharmacokinetics/pharmacodynamics (PK/PD) models [6; 7; 8; 9]. Some of these models were implemented following different methods [2; 10; 11]. The parameters of PK/PD models were fitted by Schnider et al. in [12]. In [6], the authors study pharmacokinetic models for propofol, comparing Schnider et al. and Marsh et al. models [13]. The authors of [6] conclude that Schnider's model should always be used in effect-site targeting mode, in which larger initial doses are administered but smaller than those obtained from Marsh's model. However, users of the Schnider model should be aware that in the morbidly obese, the lean body mass (LBM) equation can generate paradoxical values, resulting in excessive increases in maintenance infusion rates [12]. 
In [14], a new strategy is presented to develop a robust control of anesthesia for the maintenance phase, taking into account the saturation of the actuator. The authors of [15] address the problem of optimal control of the induction phase. For other related works, see [8; 16] and references therein. Here, we consider the problem proposed in [15], of transferring a patient from a state of consciousness to unconsciousness. We apply the shooting method [17] using the Pontryagin minimum principle [18], correcting some inconsistencies found in [15] related to the stopping criteria of the algorithm and the numerical computation of the equilibrium point. Secondly, we provide a new and different analytical method for the time-optimal control problem of the induction phase of anesthesia. While the shooting method, popularized by Zabi et al. [15], is widely employed for solving such control problems and determining the minimum time, its reliance on Newton's method makes it sensitive to initial conditions. The convergence of the shooting method depends heavily on a careful selection of initial values, particularly for the adjoint vectors. To overcome this limitation, we propose an alternative approach, which eliminates the need for initial value selection and convergence analysis. Our method offers a solution to the time-optimal control problem for the induction phase of anesthesia that is free from the drawbacks associated with the shooting method. Furthermore, we propose that our method can be extended to other PK/PD models to determine optimal timings for drug administration. To compare the methods, we perform numerical simulations to compute the minimum time to anesthetize a man of 53 years, 77 kg, and 177 cm, as considered in [15]. We find the optimal continuous infusion rate of the anesthetic and the minimum time that needs to be chosen for treatment, showing that the shooting method of [15] and the method proposed here coincide. This paper is organized as follows. In Section 2, we recall the pharmacokinetic and pharmacodynamic model of Bailey and Haddad [19], the Schnider model [12], the bispectral index (BIS), and the equilibrium point [14]. Then, in Section 3, a time-optimal control problem for the induction phase of anesthesia is posed and solved both by the shooting and the analytical methods. Finally, in Section 4, we compute the parameters of the model using the Schnider model [12], and we illustrate the results of the time-optimal control problem through numerical simulations. We conclude that the optimal continuous infusion rate for anesthesia and the minimum time that should be chosen for this treatment can be found by both the shooting and the analytical methods. The advantage of the new method proposed here is that it does not depend on the concrete initial conditions, while the shooting method is very sensitive to the choice of the initial conditions of the state and adjoint variables. We end with Section 5, where we draw conclusions and point out some directions for future research.

## 2 The PK/PD Model

The pharmacokinetic/pharmacodynamic (PK/PD) model consists of four compartments: intravascular blood \((x_{1}(t))\), muscle \((x_{2}(t))\), fat \((x_{3}(t))\), and the effect site \((x_{4}(t))\). The effect site compartment (brain) is introduced to account for the finite equilibration time between the central compartment and the central nervous system concentrations [19].
This model is used to describe the circulation of drugs in a patient's body, being expressed by a four-dimensional dynamical system as follows: \[\left\{\begin{array}{l}\dot{x}_{1}(t)=-(a_{10}+a_{12}+a_{13})\,x_{1}(t)+a_{21}\,x_{2}(t)+a_{31}\,x_{3}(t)+u(t),\\ \dot{x}_{2}(t)=a_{12}\,x_{1}(t)-a_{21}\,x_{2}(t),\\ \dot{x}_{3}(t)=a_{13}\,x_{1}(t)-a_{31}\,x_{3}(t),\\ \dot{x}_{4}(t)=\frac{a_{e0}}{v_{1}}\,x_{1}(t)-a_{e0}\,x_{4}(t).\end{array}\right. \tag{1}\] The state variables for system (1) are subject to the following initial conditions: \[x(0)=(x_{1}(0),x_{2}(0),x_{3}(0),x_{4}(0))=(0,0,0,0), \tag{2}\] where \(x_{1}(t),x_{2}(t),x_{3}(t)\), and \(x_{4}(t)\) represent, respectively, the masses of the propofol in the compartments of blood, muscle, fat, and effect site at time \(t\). The control \(u(t)\) is the continuous infusion rate of the anesthetic. The parameters \(a_{10}\) and \(a_{e0}\) represent, respectively, the rate of clearance from the central compartment and from the effect site. The parameters \(a_{12}\), \(a_{13}\), \(a_{21}\), \(a_{31}\), and \(a_{e0}/v_{1}\) are the transfer rates of the drug between compartments. A schematic diagram of the dynamical control system (1) is given in Figure 1.

### Schnider's Model

Following Schnider et al. [12], the lean body mass (LBM) is calculated using the James formula, which performs satisfactorily in normal and moderately obese patients, but not so well for severely obese cases [20]. The James formula calculates LBM as follows: \[\text{for Male, LBM}=1.1\times\text{weight}-128\times\left(\frac{\text{weight}}{\text{height}}\right)^{2}, \tag{3}\] \[\text{for Female, LBM}=1.07\times\text{weight}-148\times\left(\frac{\text{weight}}{\text{height}}\right)^{2}. \tag{4}\] The parameters of the PK/PD model (1) are then estimated according to Table 1.

| Parameter | Estimation |
| --- | --- |
| \(a_{10}\) (min\(^{-1}\)) | \(0.443+0.0107\,(\text{weight}-77)-0.0159\,(\text{LBM}-59)+0.0062\,(\text{height}-177)\) |
| \(a_{12}\) (min\(^{-1}\)) | \(0.302-0.0056\,(\text{age}-53)\) |
| \(a_{13}\) (min\(^{-1}\)) | \(0.196\) |
| \(a_{21}\) (min\(^{-1}\)) | \(\left(1.29-0.024\,(\text{age}-53)\right)/\left(18.9-0.391\,(\text{age}-53)\right)\) |
| \(a_{31}\) (min\(^{-1}\)) | \(0.0035\) |
| \(a_{e0}\) (min\(^{-1}\)) | \(0.456\) |
| \(v_{1}\) (L) | \(4.27\) |

Table 1: Parameter values for model (1) according to Schnider's model [12].

Figure 1: Schematic diagram of the PK/PD model with the effect site compartment of Bailey and Haddad [19].

### The Bispectral Index (BIS)

The BIS is the depth of anesthesia indicator, which is a signal derived from the EEG analysis and directly related to the effect site concentration \(x_{4}(t)\). It quantifies the level of consciousness of a patient from 0 (no cerebral activity) to 100 (fully awake patient), and can be described empirically by a decreasing sigmoid function [19]: \[BIS(x_{4}(t))=BIS_{0}\Bigg(1-\frac{x_{4}(t)^{\gamma}}{x_{4}(t)^{\gamma}+EC_{50}^{\gamma}}\Bigg), \tag{5}\] where \(BIS_{0}\) is the \(BIS\) value of an awake patient, typically set to 100, \(EC_{50}\) corresponds to the drug concentration associated with 50% of the maximum effect, and \(\gamma\) is a parameter modeling the degree of nonlinearity. According to [21], typical values for these parameters are \(EC_{50}=3.4\,\mathrm{mg/L}\) and \(\gamma=3\).
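For concreteness, the patient-specific parameters of Table 1, the dynamics (1), and the BIS relation (5) can be evaluated with a few lines of code. The following Python sketch (the paper itself uses MATLAB; the helper names are ours and purely illustrative) assembles the Schnider parameters from the James formula and exposes the right-hand side of (1) together with the BIS curve:

```python
import numpy as np

def schnider_params(age=53.0, weight=77.0, height=177.0, male=True):
    """Patient-specific parameters of model (1), following Table 1 (rates in 1/min, v1 in L)."""
    lbm = (1.1 * weight - 128.0 * (weight / height) ** 2 if male
           else 1.07 * weight - 148.0 * (weight / height) ** 2)  # James formula (3)-(4)
    return {
        "a10": 0.443 + 0.0107 * (weight - 77) - 0.0159 * (lbm - 59) + 0.0062 * (height - 177),
        "a12": 0.302 - 0.0056 * (age - 53),
        "a13": 0.196,
        "a21": (1.29 - 0.024 * (age - 53)) / (18.9 - 0.391 * (age - 53)),
        "a31": 0.0035,
        "ae0": 0.456,
        "v1": 4.27,
    }

def pkpd_rhs(x, u, p):
    """Right-hand side of the four-compartment system (1) for state x and infusion rate u."""
    x1, x2, x3, x4 = x
    return np.array([
        -(p["a10"] + p["a12"] + p["a13"]) * x1 + p["a21"] * x2 + p["a31"] * x3 + u,
        p["a12"] * x1 - p["a21"] * x2,
        p["a13"] * x1 - p["a31"] * x3,
        p["ae0"] / p["v1"] * x1 - p["ae0"] * x4,
    ])

def bis(x4, bis0=100.0, ec50=3.4, gamma=3.0):
    """Sigmoid BIS model (5), with the typical values EC50 = 3.4 mg/L and gamma = 3."""
    return bis0 * (1.0 - x4 ** gamma / (x4 ** gamma + ec50 ** gamma))
```

For the nominal patient considered later in Section 4 (53 years, 77 kg, 177 cm), these formulas give \(a_{10}\approx 0.4195\) min\(^{-1}\), which is consistent with the first entry of the matrix \(A\) reported in (39).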
### The Equilibrium Point

Following [14], the equilibrium point is obtained by equating the right-hand side of (1) to zero, \[\left\{\begin{array}{l}0=-(a_{10}+a_{12}+a_{13})\,x_{1}+a_{21}\,x_{2}+a_{31}\,x_{3}+u,\\ 0=a_{12}\,x_{1}-a_{21}\,x_{2},\\ 0=a_{13}\,x_{1}-a_{31}\,x_{3},\\ 0=\frac{a_{e0}}{v_{1}}\,x_{1}-a_{e0}\,x_{4},\end{array}\right. \tag{6}\] with the condition \[x_{4}=EC_{50}. \tag{7}\] It follows that the equilibrium point \(x_{e}=(x_{e1},x_{e2},x_{e3},x_{e4})\) is given by \[x_{e1}=v_{1}\,EC_{50},\quad x_{e2}=\frac{a_{12}\,v_{1}\,EC_{50}}{a_{21}},\quad x_{e3}=\frac{a_{13}\,v_{1}\,EC_{50}}{a_{31}},\quad x_{e4}=EC_{50}, \tag{8}\] and the value of the continuous infusion rate at this equilibrium is \[u_{e}=a_{10}\,v_{1}\,EC_{50}. \tag{9}\] The fast state is defined by \[x_{eF}(t)=(x_{1}(t),x_{4}(t)). \tag{10}\] The control of the fast dynamics is crucial because the BIS is a direct function of the concentration at the effect site.
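As a quick numerical check, the closed-form equilibrium (8)–(9) can be computed directly from the Schnider parameters; the short sketch below reuses the `schnider_params` helper introduced above (again an illustrative name, not code from the paper):

```python
def equilibrium(p, ec50=3.4):
    """Equilibrium state (8) and equilibrium infusion rate (9) for a target BIS of 50."""
    xe = (
        p["v1"] * ec50,                          # x_e1
        p["a12"] * p["v1"] * ec50 / p["a21"],    # x_e2
        p["a13"] * p["v1"] * ec50 / p["a31"],    # x_e3
        ec50,                                    # x_e4 = EC50
    )
    ue = p["a10"] * p["v1"] * ec50
    return xe, ue

# xe, ue = equilibrium(schnider_params())
# roughly (14.52, 64.2, 813.0, 3.4) mg and about 6.09 mg/min for the nominal patient
```

With the nominal parameters this reproduces, up to rounding, the equilibrium values used in the numerical example of Section 4.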
## 3 Time-Optimal Control Problem

Let \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t))\in\mathbb{R}^{4}\). We can write the dynamical system (1) in matrix form as follows: \[\dot{x}(t)=A\,x(t)+B\,u(t), \tag{11}\] where \[A=\left(\begin{array}{cccc}-(a_{10}+a_{12}+a_{13})&a_{21}&a_{31}&0\\ a_{12}&-a_{21}&0&0\\ a_{13}&0&-a_{31}&0\\ \frac{a_{e0}}{v_{1}}&0&0&-a_{e0}\end{array}\right)\quad\text{and}\quad B=\left(\begin{array}{c}1\\ 0\\ 0\\ 0\end{array}\right). \tag{12}\] Here, the continuous infusion rate \(u(t)\) is to be chosen so as to transfer the system (1) from the initial state (awake state) to the fast final state (anesthetized state) in the shortest possible time. Mathematically, we have the following time-optimal control problem [15]: \[\begin{cases}\min\limits_{t_{f}}J=\int\limits_{0}^{t_{f}}dt,\\ \dot{x}(t)=A\,x(t)+B\,u(t),\quad x(0)=(0,0,0,0),\\ C\,x_{eF}(t_{f})=x_{eF},\\ 0\leq u(t)\leq U_{max},\quad t\in[0,t_{f}],\quad t_{f}\,\text{is free},\end{cases} \tag{13}\] where \(t_{f}\) is the first instant of time at which the desired state is reached, and \(C\) and \(x_{eF}\) are given by \[C=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\quad x_{eF}=(x_{e1},\,x_{e4}), \tag{14}\] with \[x_{eF}(t_{f})=(x_{1}(t_{f}),x_{4}(t_{f})). \tag{15}\]

### Pontryagin Minimum Principle

According to the Pontryagin minimum principle (PMP) [18], if \(\tilde{u}\in L^{1}\) is optimal for Problem (13) and the final time \(t_{f}\) is free, then there exists \(\psi(t)=(\psi_{1}(t),\ldots,\psi_{4}(t))\), \(t\in[0,t_{f}]\), \(\psi\in AC([0,t_{f}];\mathbb{R}^{4})\), called the adjoint vector, such that \[\begin{cases}\dot{x}=\frac{\partial H}{\partial\psi},\\ \dot{\psi}=-\frac{\partial H}{\partial x},\end{cases} \tag{16}\] where the Hamiltonian \(H\) is defined by \[H(t,x,u,\psi)=1+\psi^{T}\,(A\,x+B\,u). \tag{17}\] Moreover, the minimality condition \[H(t,\tilde{x}(t),\tilde{u}(t),\tilde{\psi}(t))=\min\limits_{0\leq u\leq U_{max}}H(t,\tilde{x}(t),u,\tilde{\psi}(t)) \tag{18}\] holds almost everywhere on \([0,t_{f}]\). Since the final time \(t_{f}\) is free, the transversality condition of the PMP gives \[H(t_{f},x(t_{f}),u(t_{f}),\psi(t_{f}))=0. \tag{19}\] Solving the minimality condition (18) on the interior of the set of admissible controls gives the necessary condition \[\tilde{u}(t)=\begin{cases}0&\text{if }\tilde{\psi}_{1}(t)>0,\\ U_{max}&\text{if }\tilde{\psi}_{1}(t)<0,\end{cases} \tag{20}\] where \(\tilde{\psi}_{1}(t)\) is obtained from the adjoint system (16), that is, \(\dot{\tilde{\psi}}(t)=-A^{T}\tilde{\psi}(t)\), and the transversality condition (19). This is discussed in Sections 3.2 and 3.3.

### Shooting Method

The shooting method is a numerical technique used to solve boundary value problems, in particular in the context of differential equations and optimal control. It transforms the problem into an initial value problem by estimating the unknown boundary conditions. Through iterative adjustments of these estimates, the boundary conditions are gradually satisfied. In [17], the authors propose an algorithm for the numerical solution of parameterized optimal control problems. This algorithm combines multiple shooting and recursive quadratic programming, introducing a condensing algorithm for linearly constrained quadratic subproblems and high-rank update procedures. Its implementation leads to significant improvements in convergence behavior, computing time, and storage requirements. For more on numerical approaches to solving optimal control problems, we refer the reader to [22] and references therein. Using (16), (17), (19), and (20), we consider the following problem: \[\begin{cases}\dot{x}(t)=A\,x(t)+B\,\max\left(0,-U_{max}\,\mathrm{sign}(\psi_{1}(t))\right),\\ \dot{\psi}(t)=-A^{T}\,\psi(t),\\ x(0)=(0,0,0,0),\quad x_{1}(t_{f})=x_{e1},\quad x_{4}(t_{f})=x_{e4},\\ \psi(0)\text{ is free},\quad H\big(t_{f},x(t_{f}),\max\left(0,-U_{max}\,\mathrm{sign}(\psi_{1}(t_{f}))\right),\psi(t_{f})\big)=0.\end{cases} \tag{21}\] Let \(z(t)=(x(t),\psi(t))\). Then, we obtain the following two-point boundary value problem: \[\begin{cases}\dot{z}(t)=A^{*}z(t)+B^{*},\\ R(z(0),z(t_{f}))=0,\end{cases} \tag{22}\] where \(A^{*}\in M_{8\times 8}(\mathbb{R})\) is the matrix given by \[A^{*}=\left(\begin{array}{cc}A&0_{4\times 4}\\ 0_{4\times 4}&-A^{T}\end{array}\right), \tag{23}\] \(B^{*}\in\mathbb{R}^{8}\) is the vector given by \[B^{*}=\begin{cases}(0,\,0,\,0,\,0,\,0,\,0,\,0,\,0)&\text{if }\psi_{1}(t)>0,\\ (U_{max},\,0,\,0,\,0,\,0,\,0,\,0,\,0)&\text{if }\psi_{1}(t)<0,\end{cases} \tag{24}\] and \(R(z(0),z(t_{f}))\) is given by (2), (15), and (19). We consider the following Cauchy problem: \[\begin{cases}\dot{z}(t)=A^{*}z(t)+B^{*},\\ z(0)=z_{0}.\end{cases} \tag{25}\] If we define the shooting function \(S:\,\mathbb{R}^{4}\longrightarrow\mathbb{R}^{3}\) by \[S(z_{0})=R(t_{f},z(t_{f},z_{0})), \tag{26}\] where \(z(t,z_{0})\) is the solution of the Cauchy problem (25), then the two-point boundary value problem (21) is equivalent to \[S(z_{0})=0. \tag{27}\] To solve (27), we use Newton's method [23].
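For illustration, the shooting map can be evaluated by integrating the state–adjoint system under the bang-bang law (20) and returning the defects in the terminal conditions \(x_{1}(t_{f})=x_{e1}\), \(x_{4}(t_{f})=x_{e4}\) and in the transversality condition (19); a Newton-type iteration on the unknown initial adjoint \(\psi(0)\) and the free final time \(t_{f}\) (the paper uses MATLAB's fsolve) is then run on this residual. The Python sketch below is one possible implementation and is not taken from the authors' code:

```python
import numpy as np
from scipy.integrate import solve_ivp

U_MAX = 106.0907  # mg/min, the bound used in the numerical example of Section 4

def shooting_residual(guess, A, xe1, xe4):
    """Shooting map: integrate the state/adjoint system under the bang-bang law (20)
    and return the defects in x1(tf) = xe1, x4(tf) = xe4 and in the transversality
    condition (19)."""
    psi0, tf = np.asarray(guess[:4], dtype=float), float(guess[4])
    B = np.array([1.0, 0.0, 0.0, 0.0])

    def rhs(t, z):
        x, psi = z[:4], z[4:]
        u = max(0.0, -U_MAX * np.sign(psi[0]))          # control law (20)
        return np.concatenate([A @ x + B * u, -A.T @ psi])

    sol = solve_ivp(rhs, (0.0, tf), np.concatenate([np.zeros(4), psi0]),
                    rtol=1e-9, atol=1e-12)
    x_f, psi_f = sol.y[:4, -1], sol.y[4:, -1]
    u_f = max(0.0, -U_MAX * np.sign(psi_f[0]))
    hamiltonian_f = 1.0 + psi_f @ (A @ x_f + B * u_f)   # should vanish at t = tf
    return np.array([x_f[0] - xe1, x_f[3] - xe4, hamiltonian_f])
```

Driving this residual to zero with a Newton-type solver should reproduce the minimum time and initial adjoint values reported in Section 4, but, as discussed below, convergence depends strongly on the initial guess.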
### Analytical Method

We now propose a different method to choose the optimal control. If the pair \((A,B)\) satisfies the Kalman condition and all the eigenvalues of the matrix \(A\in\mathbb{R}^{n\times n}\) are real, then any extremal control has at most \(n-1\) commutations on \(\mathbb{R}^{+}\) (at most \(n-1\) switching times). We consider the following eight possible strategies:

Strategy 1 (zero switching times): \[u(t)=U_{max},\,\forall t\in[0,t_{f}]. \tag{28}\] Strategy 2 (zero switching times): \[u(t)=0,\,\forall t\in[0,t_{f}]. \tag{29}\] Strategy 3 (one switching time): \[u(t)=\begin{cases}U_{max}&\text{if }0\leq t<t_{c},\\ 0&\text{if }t_{c}<t\leq t_{f},\end{cases} \tag{30}\] where \(t_{c}\) is a switching time. Strategy 4 (one switching time): \[u(t)=\begin{cases}0&\text{if }0\leq t<t_{c},\\ U_{max}&\text{if }t_{c}<t\leq t_{f}.\end{cases} \tag{31}\] Strategy 5 (two switching times): \[u(t)=\begin{cases}U_{max}&\text{if }0<t<t_{c1},\\ 0&\text{if }t_{c1}<t<t_{c2},\\ U_{max}&\text{if }t_{c2}<t\leq t_{f},\end{cases} \tag{32}\] where \(t_{c1}\) and \(t_{c2}\) represent two switching times. Strategy 6 (two switching times): \[u(t)=\begin{cases}0&\text{if }0<t<t_{c1},\\ U_{max}&\text{if }t_{c1}<t<t_{c2},\\ 0&\text{if }t_{c2}<t\leq t_{f}.\end{cases} \tag{33}\] Strategy 7 (three switching times): \[u(t)=\begin{cases}U_{max}&\text{if }0<t<t_{c1},\\ 0&\text{if }t_{c1}<t<t_{c2},\\ U_{max}&\text{if }t_{c2}<t\leq t_{c3},\\ 0&\text{if }t_{c3}<t<t_{f},\end{cases} \tag{34}\] where \(t_{c1}\), \(t_{c2}\), and \(t_{c3}\) represent three switching times. Strategy 8 (three switching times): \[u(t)=\begin{cases}0&\text{if }0<t<t_{c1},\\ U_{max}&\text{if }t_{c1}<t<t_{c2},\\ 0&\text{if }t_{c2}<t\leq t_{c3},\\ U_{max}&\text{if }t_{c3}<t<t_{f}.\end{cases} \tag{35}\] Let \(x(t)\) be the trajectory associated with the control \(u(t)\), given by the relation \[x(t)=\exp(A\,t)\,x(0)+\int\limits_{0}^{t}\exp(A(t-s))\,B\,u(s)\,ds, \tag{36}\] where \(\exp(A)\) is the matrix exponential of \(A\). To calculate the switching times \(t_{c}\), \(t_{c1}\), \(t_{c2}\), \(t_{c3}\), and the final time \(t_{f}\), we have to solve the following nonlinear equation: \[x_{eF}(t_{f})=(x_{e1},\,x_{e4}). \tag{37}\] We also solve (37) using Newton's method [23].
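To make the procedure concrete for the strategy that turns out to be relevant below (Strategy 3, full infusion followed by stop-infusion), note that on the first arc (36) integrates to \(x(t_{c})=A^{-1}\left(e^{At_{c}}-I\right)BU_{max}\) (the matrix \(A\) is invertible since all of its eigenvalues are negative), after which the state evolves freely, so (37) reduces to two equations in the two unknowns \(t_{c}\) and \(t_{f}\). A minimal Python sketch (illustrative names, not the authors' code) is:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

U_MAX = 106.0907  # mg/min

def x_final_strategy3(tc, tf, A, B):
    """State at time tf under Strategy 3: full infusion on [0, tc], then u = 0 on [tc, tf]."""
    # First arc of (36): integral_0^tc exp(A(tc-s)) B U_max ds = A^{-1} (e^{A tc} - I) B U_max.
    x_tc = np.linalg.solve(A, (expm(A * tc) - np.eye(4)) @ B) * U_MAX
    return expm(A * (tf - tc)) @ x_tc        # free evolution after the switching time

def strategy3_times(A, B, xe1, xe4, guess=(0.5, 2.0)):
    """Solve (37) for the switching time tc and the final time tf under Strategy 3."""
    def residual(v):
        x_tf = x_final_strategy3(v[0], v[1], A, B)
        return [x_tf[0] - xe1, x_tf[3] - xe4]
    return fsolve(residual, guess)
```

With the matrices of the numerical example below and a coarse initial guess, this routine should recover switching and final times close to the values reported in Section 4; analogous closed forms hold for the other candidate strategies.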
## 4 Numerical Example

In this section, we use the shooting and analytical methods to calculate the minimum time \(t_{f}\) needed to anesthetize a man of 53 years, 77 kg, and 177 cm. The equilibrium point and the infusion rate corresponding to a BIS of 50 are: \[x_{e}=(14.518\,\mathrm{mg},\,64.2371\,\mathrm{mg},\,813.008\,\mathrm{mg},\,3.4\,\mathrm{mg}),\quad u_{e}=6.0907\,\mathrm{mg/min}. \tag{38}\] Following the Schnider model, the matrices \(A\) and \(B\) of the dynamical system (11) are given by: \[A=\left(\begin{array}{cccc}-0.9175&0.0683&0.0035&0\\ 0.3020&-0.0683&0&0\\ 0.1960&0&-0.0035&0\\ 0.1068&0&0&-0.4560\end{array}\right)\quad\text{and}\quad B=\left(\begin{array}{c}1\\ 0\\ 0\\ 0\end{array}\right). \tag{39}\] We are interested in solving the following minimum-time control problem: \[\begin{cases}\min\limits_{t_{f}}J=\int\limits_{0}^{t_{f}}dt,\\ \dot{x}(t)=A\,x(t)+B\,u(t),\quad x(0)=(0,\,0,\,0,\,0),\\ x_{1}(t_{f})=14.518\,\mathrm{mg},\quad x_{4}(t_{f})=3.4\,\mathrm{mg},\\ 0\leq u(t)\leq 106.0907\,\mathrm{mg/min},\quad t\in[0,t_{f}],\quad t_{f}\,\text{is free}.\end{cases} \tag{40}\]

### Numerical Resolution by the Shooting Method

Let \(z(t)=(x(t),\psi(t))\). We consider the following Cauchy problem: \[\begin{cases}\dot{z}(t)=A^{*}z(t)+B^{*},\\ z(0)=z_{0}=(0,\,0,\,0,\,0,\,\psi_{01},\,\psi_{02},\,\psi_{03},\,\psi_{04}),\end{cases} \tag{41}\] where \[A^{*}=10^{-4}\left(\begin{array}{cccccccc}-9175&683&35&0&0&0&0&0\\ 3020&-683&0&0&0&0&0&0\\ 196&0&-35&0&0&0&0&0\\ 1068&0&0&-456&0&0&0&0\\ 0&0&0&0&9175&-3020&-196&-1068\\ 0&0&0&0&-683&683&0&0\\ 0&0&0&0&-35&0&35&0\\ 0&0&0&0&0&0&0&456\end{array}\right) \tag{42}\] and \[B^{*}=\left(\begin{array}{c}\max\left(0,-106.0907\,\mathrm{sign}(\psi_{1}(t))\right)\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{array}\right). \tag{43}\] The shooting function \(S\) is given by
\[S(z_{0})=(S_{1}(z_{0}),\,S_{2}(z_{0}),\,S_{3}(z_{0})), \tag{44}\] where \[S_{1}(z_{0})=x_{1}(t_{f})-14.518,\] \[S_{2}(z_{0})=x_{4}(t_{f})-3.4,\] \[S_{3}(z_{0})=1+\psi^{T}(t_{f})\Big(Ax(t_{f})+B\max\left(0,-106.0907\,\mathrm{sign}(\psi_{1}(t_{f}))\right)\Big).\] All computations were performed with the MATLAB numeric computing environment, version R2020b, using the medium-order method and the function ode45 (Runge–Kutta method) to solve the nonstiff differential system (22). We used the variable-order method and the function ode113 (Adams–Bashforth–Moulton method) to solve the nonstiff differential system (25), and the function fsolve to solve the equation \(S(z_{0})=0\). Thus, we obtain that the minimum time is equal to \[t_{f}=1.8397\,\text{min}, \tag{45}\] with \[\psi^{T}(0)=(-0.0076,\,0.0031,\,-0.0393,\,-0.0374). \tag{46}\]

### Numerical Resolution by the Analytical Method

The pair \((A,B)\) satisfies the Kalman condition, and the matrix \(A\) has four real eigenvalues. Then, the extremal control \(u(t)\) has at most three commutations on \(\mathbb{R}^{+}\). Therefore, let us test the eight strategies provided in Section 3.3. Note that the anesthesiologist begins with a bolus injection to transfer the patient from the consciousness state \(x(0)\) to the unconsciousness state \[x_{eF}=(14.518,\,3.4),\] that is, \[u(0)=U_{max}=106.0907\,\mathrm{mg/min}. \tag{47}\] Thus, Strategies 2, 4, 6, and 8 are not feasible here. Therefore, in the sequel, we investigate Strategies 1, 3, 5, and 7 only.

Strategy 1: Let \(u(t)=106.0907\,\mathrm{mg/min}\) for all \(t\in[0,t_{f}]\). The trajectory \(x(t)\) associated with this control \(u(t)\) is given by the following relation: \[x(t)=\int\limits_{0}^{t}\exp(A(t-s))BU_{max}\,ds,\quad\forall t\in[0,t_{f}], \tag{48}\] where \[\exp(A\,(t-s))=V\,D(t-s)\,V^{-1} \tag{49}\] with \[V=\left(\begin{array}{cccc}0&0.9085&0.0720&-0.0058\\ 0&-0.3141&0.9377&-0.0266\\ 0&-0.1898&-0.3395&-0.9996\\ 1&-0.1997&0.0187&-0.0014\end{array}\right) \tag{50}\] and \[D(\tau)=\left(\begin{array}{cccc}e^{-0.4560\,\tau}&0&0&0\\ 0&e^{-0.9419\,\tau}&0&0\\ 0&0&e^{-0.0451\,\tau}&0\\ 0&0&0&e^{-0.0024\,\tau}\end{array}\right). \tag{51}\] System (37) takes the form \[\left\{\begin{aligned} & x_{1}(t_{f})=14.518,\\ & x_{4}(t_{f})=3.4,\end{aligned}\right. \tag{52}\] and has no solutions. Thus, Strategy 1 is not feasible.

Strategy 3: Let \(u(t)\), \(t\in[0,t_{f}]\), be the control defined by \[u(t)=\left\{\begin{aligned} & 106.0907\,\text{mg}/\text{min}&\text{if }0\leq t<t_{c},\\ & 0&\text{if }t_{c}<t\leq t_{f}.\end{aligned}\right. \tag{53}\] The trajectory \(x(t)\) associated with this control \(u(t)\) is given by \[x(t)=\left\{\begin{aligned} &\int\limits_{0}^{t}\exp(A(t-s))BU_{max}\,ds&\text{if }0\leq t\leq t_{c},\\ &\exp(A\left(t-t_{c}\right))\,x(t_{c})&\text{if }t_{c}<t\leq t_{f},\end{aligned}\right. \tag{54}\] where \[\exp(A\left(t-t_{c}\right))=V\,D(t-t_{c})\,V^{-1}. \tag{55}\] To calculate the switching time \(t_{c}\) and the final time \(t_{f}\), we have to solve the nonlinear system (52) with the new condition \[t_{c}<t_{f}. \tag{56}\] Similarly to Section 4.1, all numerical computations were performed with MATLAB R2020b using the command solve to solve Equation (52). The obtained minimum time is equal to \[t_{f}=1.8397\,\text{min}, \tag{57}\] with the switching time \[t_{c}=0.5467\,\text{min}. 
\tag{58}\] Strategy 5: Let \(u(t)\), \(t\in[0,t_{f}]\), be the control defined by the relation \[u(t)=\left\{\begin{aligned} & 106.0907\,\text{mg}/\text{min}& \text{if }0\leq t<t_{c1},\\ & 0&\text{if }t_{c1}<t<t_{c2}.\\ & 106.0907\,\text{mg}/\text{min}&\text{if }t_{c2}<t\leq t_{f}, \end{aligned}\right. \tag{59}\] where \(t_{c1}\) and \(t_{c2}\) are the two switching times. The trajectory \(x(t)\) associated with control (59) is given by \[x(t)=\left\{\begin{aligned} &\int\limits_{0}^{t}\exp(A(t-s))BU_{max}ds& \text{if }0\leq t\leq t_{c1},\\ &\exp(A\left(t-t_{c1}\right))\,x(t_{c1})&\text{if }t_{c1}<t\leq t_{c2},\\ &\exp(A\left(t-t_{c2}\right))\,x(t_{c2})+\int\limits_{t_{c2}}^{t} \exp(A(t-s))BU_{max}ds&\text{if }t_{c2}<t\leq t_{f}.\end{aligned}\right. \tag{60}\] To compute the two switching times \(t_{c1}\) and \(t_{c2}\) and the final time \(t_{f}\), we have to solve the nonlinear system (52) with \[0\leq t_{c1}\leq t_{c2}\leq t_{f}. \tag{61}\] It turns out that System (52) subject to Condition (61) has no solution. Thus, Strategy 5 is also not feasible. Strategy 7: Let \(u(t)\), \(t\in[0,t_{f}]\), be the control defined by the relation \[u(t)=\begin{cases}106.0907\,\mathrm{mg/min}&\text{if }0\leq t<t_{c1},\\ 0&\text{if }t_{c1}<t<t_{c2}.\\ 106.0907\,\mathrm{mg/min}&\text{if }t_{c2}<t\leq t_{c3},\\ 0\,\mathrm{mg/min}&\text{if }t_{c3}<t\leq t_{f},\end{cases} \tag{62}\] where \(t_{c1}\), \(t_{c2}\), and \(t_{c3}\) are the three switching times. The trajectory \(x(t)\) associated with Control (62) is given by \[x(t)=\begin{cases}\int\limits_{0}^{t}\exp(A(t-s))BU_{max}ds&\text{if }0\leq t \leq t_{c1},\\ \exp(A\left(t-t_{c1}\right))\,x(t_{c1})&\text{if }t_{c1}<t\leq t_{c2},\\ \exp(A\left(t-t_{c2}\right))\,x(t_{c2})+\int\limits_{t_{c2}}^{t}\exp(A(t-s))BU_ {max}ds&\text{if }t_{c2}<t\leq t_{c3},\\ \exp(A\left(t-t_{c3}\right))\,x(t_{c3})&\text{if }t_{c3}<t\leq t_{f}.\end{cases} \tag{63}\] To compute the three switching times \(t_{c1}\), \(t_{c2}\), and \(t_{c3}\) and the final time \(t_{f}\), we have to solve the nonlinear system (52) with \[0\leq t_{c1}\leq t_{c2}\leq t_{c3}\leq t_{f}. \tag{64}\] It turns out that System (52) subject to Condition (64) has no solution. Thus, Strategy 7 is also not feasible. In Figures 2 and 3, we present the solutions of the linear system of differential equations (40) under the optimal control \(u(t)\) illustrated in Figure 4, where the black curve corresponds to the one obtained by the shooting method, as explained in Section 3.2, while the blue curve corresponds to our analytical method, in the sense of Section 3.3. In addition, for both figures, we show the controlled BIS Index, the trajectory of fast states corresponding to the optimal continuous infusion rate of the anesthetic \(u(t)\), and the minimum time \(t_{f}\) required to transition System (40) from the initial (wake) state \[x_{0}=(0,\,0,\,0,\,0)\] to the fast final (anesthetized) state \[x_{eF}=(14.518,\,3.4)\] in the shortest possible time. The minimum time \(t_{f}\) is equal to \(t_{f}=1.8397\,\mathrm{min}\) by the shooting method (black curve in Figure 2), and it is equal to \(t_{f}=1.8397\,\mathrm{min}\) by the analytical method (blue curve in Figure 3). 
By using the shooting method, the black curve in Figure 4 shows that the optimal continuous infusion rate of the induction phase of anesthesia \(u(t)\) is equal to \(106.0907\,\mathrm{mg/min}\) until the switching time \[t_{c}=0.5467\,\mathrm{min}.\] Then, it is equal to \(0\,\mathrm{mg/min}\) (stop-infusion) until the final time \[t_{f}=1.8397\,\mathrm{min},\] Figure 4: The optimal continuous infusion rate \(u(t)\) of the induction phase of anesthesia, as obtained by the shooting and analytical methods. Figure 3: The state trajectory, controlled BIS index, and trajectory of the fast states corresponding to the optimal control \(u(t)\) of Figure 4, using the analytical method. Figure 2: The state trajectory, controlled BIS index, and trajectory of the fast states corresponding to the optimal control \(u(t)\) of Figure 4, using the shooting method. By using the analytical method, the blue curve in Figure 4 shows that the optimal continuous infusion rate of the induction phase of anesthesia \(u(t)\) is equal to \(106.0907\,\mathrm{mg/min}\) until the switching time \[t_{c}=0.5467\,\mathrm{min}.\] Then, it is equal to \(0\,\mathrm{mg/min}\) (stop-infusion) until the final time \[t_{f}=1.8397\,\mathrm{min}.\] We conclude that both methods work well and give similar results. However, in general, the shooting method does not always converge, depending on the initial conditions (46). To obtain such initial values is not an easy task since no theory is available to find them. For this reason, the proposed analytical method is logical, practical, and more suitable for real applications. ## 5 Conclusions The approach proposed by the theory of optimal control is very effective. The shooting method was proposed by Zabi et al. [15], which is used to solve the time-optimal control problem and calculate the minimum time. However, this approach is based on Newton's method. The convergence of Newton's method depends on the initial conditions, being necessary to select an appropriate initial value so that the function is differentiable and the derivative does not vanish. This implies that the convergence of the shooting method is attached to the choice of the initial values. Therefore, the difficulty of the shooting method is to find the initial conditions of the adjoint vectors. Here, the aim was to propose a different approach, which we call "the analytical method", that allows to solve the time-optimal control problem for the induction phase of anesthesia without such drawbacks. Our method is guided by the selection of the optimal strategy, without the need to choose initial values and study the convergence. We claim that our method can also be applied to other PK/PD models, in order to find the optimal time for the drug administration. In the context of PK/PD modeling, the challenges associated with uncertainties in plant model parameters and controller gains for achieving robust stability and controller non-fragility are significant [24]. These challenges arise from factors like inter-individual variability, measurement errors, and the dynamic nature of patient characteristics and drug response. Further investigation is needed to understand and develop effective strategies to mitigate the impact of these uncertainties in anesthesia-related PK/PD models. This research can lead to the development of robust and non-fragile control techniques that enhance the stability and performance of anesthesia delivery systems. 
By addressing these challenges, we can improve the precision and safety of drug administration during anesthesia procedures, ultimately benefiting patient outcomes and healthcare practices. In this direction, the recent results of [25] may be useful. Moreover, we plan to investigate PK/PD fractional-order models, which is a subject under strong current research [26]. This is under investigation and will be addressed elsewhere. **Author Contributions:** Conceptualization, M.A.Z., C.J.S., and D.F.M.T.; methodology, M.A.Z., C.J.S., and D.F.M.T.; software, M.A.Z.; validation, C.J.S. and D.F.M.T.; formal analysis, M.A.Z., C.J.S., and D.F.M.T.; investigation, M.A.Z., C.J.S., and D.F.M.T.; writing--original draft preparation, M.A.Z., C.J.S., and D.F.M.T.; writing--review and editing, M.A.Z., C.J.S. and D.F.M.T.; visualization, M.A.Z.; supervision, C.J.S. and D.F.M.T.; funding acquisition, M.A.Z., C.J.S., and D.F.M.T. All authors have read and agreed to the published version of the manuscript. **Funding:** This research was funded by the Portuguese Foundation for Science and Technology (FCT--Fundacao para a Ciencia e a Tecnologia) through the R&D Unit CIDMA, Grant Numbers UIDB/04106/2020 and UIDP/04106/2020, and within the project "Mathematical Modelling of Multiscale Control Systems: Applications to Human Diseases" (CoSysM3), Reference 2022.03091.PTDC, financially supported by national funds (OE) through FCT/MCTES. **Institutional Review Board Statement:** Not applicable. **Informed Consent Statement:** Not applicable. **Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is not applicable to this article. The numerical simulations of Section 4 were implemented in MATLAB R2022a. The computer code is available from the authors upon request. **Acknowledgments:** The authors are grateful to four anonymous referees for their constructive remarks and questions that helped to improve the paper. **Conflicts of Interest:** The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
We obtain an analytical solution for the time-optimal control problem in the induction phase of anesthesia. This solution agrees numerically with the results obtained from the conventional shooting method. The induction phase of anesthesia relies on the pharmacokinetic/pharmacodynamic (PK/PD) model proposed by Bailey and Haddad in 2005 to regulate the infusion of propofol. In order to evaluate this approach and compare it with existing results in the literature, we examine a minimum-time problem for anesthetizing a patient. By applying the Pontryagin minimum principle, we introduce the shooting method as a means of solving the problem at hand. In addition, we carried out numerical simulations in the MATLAB computing environment. Using the newly proposed analytical method, we find that the optimal continuous infusion rate of the anesthetic and the minimum time required for the transition from the awake state to the anesthetized state are similar for the two methods.
2309.14594
Learning Vision-Based Bipedal Locomotion for Challenging Terrain
Reinforcement learning (RL) for bipedal locomotion has recently demonstrated robust gaits over moderate terrains using only proprioceptive sensing. However, such blind controllers will fail in environments where robots must anticipate and adapt to local terrain, which requires visual perception. In this paper, we propose a fully-learned system that allows bipedal robots to react to local terrain while maintaining commanded travel speed and direction. Our approach first trains a controller in simulation using a heightmap expressed in the robot's local frame. Next, data is collected in simulation to train a heightmap predictor, whose input is the history of depth images and robot states. We demonstrate that with appropriate domain randomization, this approach allows for successful sim-to-real transfer with no explicit pose estimation and no fine-tuning using real-world data. To the best of our knowledge, this is the first example of sim-to-real learning for vision-based bipedal locomotion over challenging terrains.
Helei Duan, Bikram Pandit, Mohitvishnu S. Gadde, Bart van Marum, Jeremy Dao, Chanho Kim, Alan Fern
2023-09-26T00:59:59
http://arxiv.org/abs/2309.14594v2
# Learning Vision-Based Bipedal Locomotion for Challenging Terrain ###### Abstract Reinforcement learning (RL) for bipedal locomotion has recently demonstrated robust gaits over moderate terrains using only proprioceptive sensing. However, such blind controllers will fail in environments where robots must anticipate and adapt to local terrain, which requires visual perception. In this paper, we propose a fully-learned system that allows bipedal robots to react to local terrain while maintaining commanded travel speed and direction. Our approach first trains a controller in simulation using a heightmap expressed in the robot's local frame. Next, data is collected in simulation to train a heightmap predictor, whose input is the history of depth images and robot states. We demonstrate that with appropriate domain randomization, this approach allows for successful sim-to-real transfer with no explicit pose estimation and no fine-tuning using real-world data. To the best of our knowledge, this is the first example of sim-to-real learning for vision-based bipedal locomotion over challenging terrains. ## I Introduction A robot's utility for useful work often hinges on its capacity to maneuver effectively across a spectrum of natural and structured terrains. For this purpose, bipedal robots have the potential to match human locomotion capabilities, but currently are far inferior. Approaching human performance requires a robot to perceive its surroundings, assess its states relative to the upcoming terrain, and dynamically adapt its gait. Robustly achieving such an integration of vision and locomotion remains an open problem for bipedal robots. Modern control approaches for vision-based legged locomotion [1, 2, 3, 4, 5, 6, 7, 8] often decompose the problem into levels of control hierarchy, usually requiring robust whole-body control, predictive footstep planning, accurate odometry estimation, and terrain mapping. Each level requires a set of modeling assumptions in order to process high-dimensional and raw proprioceptive and visual exteroceptive information. On the other hand, recent learning-based approaches make fewer modeling assumptions, often learning a direct mapping from high-dimensional sensory inputs to actuator commands. These approaches have shown strong empirical demonstrations of blind bipedal locomotion [9] and vision-based quadrupedal locomotion [10, 11, 12, 13, 14, 15, 16] in real-world environments. However, they are typically trained in simulation and their success relies on designing techniques for reliable sim-to-real transfer, which can be particularly challenging when vision is one of the input modalities. In this paper, we design and demonstrate a sim-to-real learning approach for vision-based bipedal locomotion over challenging terrain. To the best of our knowledge, this is the first such successful demonstration. A distinctive aspect of our approach is that it avoids the estimation of global odometry, which can be particularly challenging for legged locomotion due to estimation drifts from frequent contacts and aggressive motions. Instead, our approach learns to directly combine proprioceptive and visual information in the robot's local frame to make control decisions. 
In particular, our architecture is composed of two primary learned components: 1) a control policy whose input is proprioceptive information and a heightmap of a local region in front of the robot (Section IV), and 2) a heightmap predictor, which uses proprioceptive and egocentric depth images to predict a heightmap for the control policy (Section V). The key contribution of our work is the sim-to-real pipeline and the system integration for these components, which allows the overall locomotion controller to transfer successfully to the real world. In particular, we demonstrate the learned controller on a camera-equipped bipedal Cassie robot, which can traverse challenging terrains constructed in a lab environment shown in Figure 1. Fig. 1: Our fully learned controller integrates vision and locomotion for reactive and agile gaits over terrains. The proposed approach enables bipedal robot Cassie traversing over challenging terrains, including random high blocks, stairs, 0.5m step up (\(\sim\)60% leg length), with speed up to 1m/s. ## II Related Work **Sim-to-Real Reinforcement Learning (RL) for Bipedal Robots.** Recently, RL-based controllers for bipedal locomotion have mostly focused on blind locomotion, where proprioception is the primary control input [17, 18, 19, 20]. These works have produced controllers that can handle a limited amount of non-flat terrains. The learned policy [17] tends to have a high cost of transport due to aggressive gaits and high swing trajectories that are required to maintain balance without knowledge of the local terrain. When traversing over challenging terrains, gait adaptation is highly required, and visual information becomes critical at certain times during the gait [21, 22]. The research question then is how to incorporate vision into RL policies so that bipedal robots can react to terrains and adapt their own gaits at the same time. **Learning Vision-based Legged Locomotion.** Previous work for quadrupedal robots has used RL methods to demonstrate successful sim-to-real transfer of vision-based locomotion controllers. One type of approach uses an elevation map or height scanners in the global reference frame as part of the RL policy input. For example, scattered elevation scans around each foot are used in quadrupeds [11] and bipeds [23], while other methods [13, 14, 15] use a uniformly structured elevation map around the robot, both of which require multiple sensors and careful calibration. In contrast, in this work we are interested in a solution that does not require careful calibration for global odometry. A second type of approach is to directly use vision inputs from a camera, such as depth images [10, 13, 24, 25] or RGB images [26], as the inputs to a RL policy. This end-to-end training is often carried out via teacher-student training [13, 25], which can exploit the teacher's access to privileged information in simulation. While these approaches have been successful on quadrupeds on hardware, it is unclear how well they can work for bipeds in the real world, where the contact locations become critical for stability and unintended contact forces with the ground can much more easily tip over the robot. **Local Terrain Mapping for Legged Robots.** Model-based terrain mapping techniques have shown successful deployment onto hardware via odometry and fusion of multiple visual sensors [27, 28, 29]. 
These techniques strongly rely on pose estimation of the floating base in the global frame, where the map can drift due to inaccuracies in pose estimation. Previous vision-based quadrupedal locomotion work [11] reported that large amounts of domain randomization are required to overcome the noise and drift from such mapping techniques. On the other hand, recent learning-based techniques [30, 31] have shown promising results when reconstructing the terrain from multiple cameras, but the robot's global pose is still required. In this paper, our focus is on responding to terrain changes in front of the robot, for which we use a single depth camera to provide an egocentric view of the terrain. Along with the robot states, the reconstructed heightmap is entirely in the robot's local frame. Our method removes the need for global estimation when the robot has to react rapidly to local terrain.

Figure 2 illustrates our overall system, which has two main components: 1) a locomotion policy, which outputs PD setpoints for the robot actuators based on proprioception, a local terrain heightmap, and user commands, and 2) a heightmap predictor, which outputs a predicted heightmap based on proprioceptive information and images from a depth camera. These components are learned in simulation and then transferred to the real robot. The training pipeline first uses RL to train the locomotion policy in simulation. This training process randomizes the terrain, user commands, and physical parameters to help with sim-to-real transfer. Next, using the learned policy, the pipeline collects data from a simulated depth camera attached to the robot, which is paired with ground-truth heightmap information to be used for supervised learning of the heightmap predictor. Training this predictor also involves domain randomization and added noise to facilitate sim-to-real transfer. Sections IV and V describe the architectural and training details of each component.

Fig. 2: Overview of the locomotion policy with vision module.

## IV Learning a Terrain-Aware Locomotion Policy

The main control objective is to follow speed and heading commands while maintaining balance over possibly challenging terrains. Below, we describe the observation space, action space, architecture of the policy, and training methods.

### _Control Policy Design_

**Observation Space.** The policy input includes: 1) _proprioceptive information_ containing the orientation (in quaternion) and angular velocity of the floating base, and the position and velocity of all measurable actuated and unactuated joints, 2) _terrain heightmap_ from a 1.5m by 1m area in front of the robot at a 5cm resolution (see Figure 4), which encodes the ground height at each point relative to the robot's floating base. The relative encoding means that the heights vary as the robot moves up and down during its gait, but it enables us to avoid using global mapping and odometry estimation techniques, 3) _user commands_, which include X and Y linear velocities along with the direction and rotational velocity around the robot's yaw axis, and 4) _periodic clock_, as used in prior work on locomotion control [9], which consists of two functions for each leg, \(\sin\left(2\pi(\phi_{t}+\gamma_{t}^{i})\right)\) and \(\cos\left(2\pi(\phi_{t}+\gamma_{t}^{i})\right)\). Here \(i\in\{\textit{left},\textit{right}\}\) indicates the leg, \(\phi\) is a monotonically increasing phase variable that is incremented as \(\phi_{t+1}=\phi_{t}+\Delta\phi_{t}\), so that \(\Delta\phi_{t}\) varies the gait frequency, and \([\gamma_{t}^{left},\gamma_{t}^{right}]\) are period shifts that alter the clock values for each leg, thus changing the contact sequence.
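As a small illustration, the four periodic clock features and the phase update described above can be written as follows (a sketch with variable names of our choosing, not taken from the paper's code):

```python
import numpy as np

def clock_inputs(phi, gamma_left, gamma_right):
    """The four periodic clock features given to the policy: sin/cos of the shared
    phase, offset by a per-leg shift gamma."""
    feats = []
    for gamma in (gamma_left, gamma_right):
        feats += [np.sin(2.0 * np.pi * (phi + gamma)),
                  np.cos(2.0 * np.pi * (phi + gamma))]
    return np.array(feats)

def advance_phase(phi, delta_phi):
    """Phase update phi_{t+1} = phi_t + delta_phi_t, where delta_phi_t is output by the policy."""
    return phi + delta_phi
```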
**Action Space.** The RL policy operates at 50Hz and outputs PD setpoints for all motors, which are provided to a PD controller operating at 2kHz. To enable gaits that can more flexibly adapt to the terrain, the RL policy also outputs three extra values, representing the clock increment \(\Delta\phi_{t}\) and the residuals of the shifts for both legs, \([\Delta\gamma_{t}^{left},\Delta\gamma_{t}^{right}]\).

**Policy Architecture.** We use a neural network to represent the policy for mapping observation sequences to actions. The policy architecture (Figure 3) contains two main components, a pretrained _blind policy_ and a _vision-based modulator_. The motivation for this architecture is for the blind policy to provide a baseline locomotion control signal, which is effective for moderate terrains. For more complex terrain, the vision-based modulator is then able to adjust the baseline control based on details of the local terrain. The _blind policy_ is based on prior work [9] and uses an LSTM network to produce actions. This policy uses all of the available inputs, except for the heightmap information, and is trained on relatively flat ground with various gait frequencies and shifts. The resulting policy is robust to moderate disturbances. The input to the _vision-based modulator_ includes all of the available observations, including the heightmap, in addition to the action produced by the blind policy. This modulator outputs a "modulating action" as well as clock actions to modify the clock parameters.

Fig. 3: Policy consists of a blind policy and a vision-based modulator.

### _Policy Training_

The policy is trained via the PPO actor-critic RL algorithm [32] in a MuJoCo simulation environment [33] where the robot aims to follow randomized commands over randomized terrains. The training is conducted using 80 cores on a dual Intel Xeon Platinum 8280 server on the Intel vLab Cluster. We used two modifications to standard PPO that helped speed up and stabilize learning. First, we modified the PPO loss function to include a mirror loss over the robot's proprioceptive inputs as well as its visual inputs. This loss encourages the policy to choose symmetric actions when facing the same terrain mirrored about the sagittal plane. Second, since our reward function (described below) uses privileged information, we found it useful to provide the critic with privileged inputs in addition to the observations given to the policy. These privileged inputs include the robot height and feet positions in the global frame, a square 0.5m heightmap around each foot, and two rangefinder sensors at the tip of each foot. This privileged information provides the critic with a more accurate 3D picture around each foot, which helps it more accurately predict future rewards, such as unfavorable collision events with the terrain, as shown to be effective in [34].
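The mirror loss can be sketched as an auxiliary term added to the PPO objective: the action chosen for a left/right-mirrored observation (proprioception and heightmap flipped about the sagittal plane) should match the mirrored action. The snippet below is an illustration only; `mirror_action` and the weighting of the term are assumptions, since the paper does not spell them out.

```python
import torch

def mirror_heightmap(hm):
    """Flip the local heightmap about the sagittal plane (lateral axis is the last dim)."""
    return torch.flip(hm, dims=[-1])

def mirror_loss(policy, obs, mirrored_obs, mirror_action):
    """Auxiliary symmetry term: the action chosen for a mirrored observation should be
    the mirrored version of the action chosen for the original observation."""
    a = policy(obs)                     # e.g. the mean of the action distribution
    a_mirrored = policy(mirrored_obs)
    return torch.mean((a_mirrored - mirror_action(a)) ** 2)

# total loss (sketch): ppo_surrogate + value_coef * value_loss + mirror_coef * mirror_loss(...)
```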
**Training Episode Generation.** Each training episode involves a randomly generated terrain map of size 20m x 20m. Each map is one of 5 terrain types, illustrated in Figure 4, that are reflective of different human-centric environments. These include: 1) flat - the easiest terrain with no features, 2) hills - a smoothly varying heightmap, 3) blocks - randomly located blocks of random lengths, widths, and heights, 4) ridges - a sequence of ridges of random width and height that span the length of the map, and 5) stairs - upward and downward stairs of varying width and height. The randomization parameters of each terrain type are listed in Table I, and terrains are generated in a way that avoids intersecting features. Each episode selects a single terrain type according to the probability distribution \([0.03,0.07,0.35,0.2,0.35]\), which puts the majority of the probability mass on the three most difficult terrains. This allows the policy to gain some experience on easier terrains, which is useful early in learning, but focuses most of the learning effort on the more difficult terrains that require careful integration of vision and locomotion. We found that the key to training a robust and non-aggressive control policy lies in this terrain generation distribution rather than in iterating on and adding heuristic-based reward terms. For example, stairs naturally regulate the step length, and ridges regulate the step height via the heightmap representation. Given a terrain map, each training episode starts with the robot being randomly spawned in a standing position near the center of the map and facing a random direction. Next, a random command is given to the policy from a list including: step-in-place, step-in-place-turn, walk, and walk-turn, with sampling probabilities of [0.05, 0.05, 0.6, 0.3]. The commanded X, Y, and turn velocities are uniformly sampled from the ranges [-0.5, 1.0]m/s, [-0.3, 0.3]m/s, and [-22.5, 22.5] degrees/s, respectively. When the robot is on the ridge, stair, or block terrain types, the velocity commands exclude backward and sideways movement, due to the use of only a single forward-facing camera. After the initial command, the command is randomly changed once during each episode, at a time randomly sampled from [200, 250] timesteps. Each episode runs for a maximum of 400 timesteps, which is 8 seconds of simulated time. Episode termination conditions include: 1) the roll or pitch angle of the floating base is greater than 15 degrees; 2) the norm of the linear velocities of the base is greater than 1 plus the commanded velocity; 3) the robot base height is below 40cm in the global frame; 4) the robot's body collides with the terrain. These conditions correspond to undesirable robot behavior and implicitly punish the robot by causing it to not receive future rewards. The terrain and command sampling is sketched below.

Fig. 4: Types of terrain used in training.

| Terrain | Parameter | Training Range | Evaluation (Easy) | Evaluation (Hard) |
| --- | --- | --- | --- | --- |
| Ridge | height [m] | [0.05, 0.6] | [0.05, 0.5] | [0.5, 0.6] |
| Stair | height [m] | [0.05, 0.2] | [0.05, 0.1] | [0.1, 0.2] |
| Stair | length [m] | [0.25, 0.4] | [0.4, 0.4] | [0.25, 0.4] |
| Stair | steps | [4, 28] | [4, 12] | [12, 28] |
| Block | length or width [m] | [0.4, 1] | [1, 1] | [0.4, 1] |
| Block | height [m] | [0.05, 0.4] | [0.05, 0.2] | [0.2, 0.4] |

TABLE I: Ranges for terrain randomization used in training and evaluation. All parameters are uniformly sampled within the given range during training or evaluation.
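The per-episode terrain and command sampling described above can be summarized in a short routine (a sketch; the exact handling of the step-in-place commands and of the single mid-episode command change is our assumption):

```python
import numpy as np

TERRAIN_TYPES = ["flat", "hills", "blocks", "ridges", "stairs"]
TERRAIN_PROBS = [0.03, 0.07, 0.35, 0.2, 0.35]        # most mass on the hardest terrains

COMMAND_MODES = ["step-in-place", "step-in-place-turn", "walk", "walk-turn"]
COMMAND_PROBS = [0.05, 0.05, 0.6, 0.3]

def sample_episode(rng):
    """Draw the terrain type and the initial command for one training episode."""
    terrain = str(rng.choice(TERRAIN_TYPES, p=TERRAIN_PROBS))
    mode = str(rng.choice(COMMAND_MODES, p=COMMAND_PROBS))
    cmd = {
        "vx": rng.uniform(-0.5, 1.0),                # m/s
        "vy": rng.uniform(-0.3, 0.3),                # m/s
        "yaw_rate": rng.uniform(-22.5, 22.5),        # deg/s
    }
    if terrain in ("ridges", "stairs", "blocks"):    # single forward-facing camera:
        cmd["vx"] = rng.uniform(0.0, 1.0)            # no backward commands
        cmd["vy"] = 0.0                              # and no sideways commands
    if mode.startswith("step-in-place"):
        cmd["vx"], cmd["vy"] = 0.0, 0.0
    return terrain, mode, cmd

# rng = np.random.default_rng(0); terrain, mode, cmd = sample_episode(rng)
```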
**Reward Function.** Our reward function aims to produce a gait with a well-regulated contact sequence that is able to traverse the simulated environment in a way that is likely to transfer to a real robot. In particular, we use a three-component reward function in which all components are weighted equally, \(R=R_{0}+R_{\text{accel}}+R_{\text{collision}}\). The base locomotion component \(R_{0}\) is the reward function used in prior work on blind locomotion [9], which regulates the contact sequence and timing. This reward component encourages alternating swing and stance phases to align with the clock values provided to the policy. Besides the base locomotion reward, we identified additional components as being important for facilitating sim-to-real transfer on complex terrains. The foot acceleration component \(R_{\text{accel}}\) penalizes the left and right foot accelerations \(\ddot{x}_{l}\) and \(\ddot{x}_{r}\). This reward helps prevent fast swing-leg motions, which we found could arise during training on difficult terrain. Specifically, the reward is defined as \(R_{\text{accel}}=0.05\exp\left(-0.02\cdot(\lVert\ddot{x}_{l}\rVert+\lVert\ddot{x}_{r}\rVert)\right)\), which provides a larger reward for smaller accelerations. The foot collision component \(R_{\text{collision}}\) adds a penalty of -5 whenever the front of the foot stumbles against the terrain, which helps to achieve collision-free leg-swing trajectories. However, due to the nature of RL, this term only acts as a soft constraint that the robot may violate in favor of not falling down, in order to collect more reward throughout the training episode. We found that training without this penalty term can work in simulation, but results in significantly more frequent foot stumble and collision events, and subsequently prevents sim-to-real transfer. Touch sensors added at the forefront of each foot's collision model detect collision events and trigger the negative penalty.

**Domain Randomization.** To enable successful sim-to-real transfer and diversify the data distribution during training, the training process involves a range of domain randomization, shown in Table II, over model parameters, actuation parameters, visual inputs, and delays. The model parameters are randomized per episode to simulate a range of robot models and also to provide a wide range of states that the policy can learn from. We found that randomizing the torque efficiency parameter is particularly important for sim-to-real on extreme terrain, such as a 0.5m step up, due to the torque saturation of the knee motor. In addition, the torque command sent to the simulator is delayed randomly by up to 3ms. The visual inputs are randomized to simulate noise from the heightmap estimator and to prevent the policy from over-fitting to the exact simulated heightmaps. Prior work [11] based on teacher-student training also found that heightmap randomization was important for sim-to-real transfer. The entire heightmap is shifted per episode and per policy step in all directions to simulate temporal noise. The heightmap is passed to the policy with a randomized delay of up to 100ms, in order to account for faster locomotion speeds.

| Group | Parameter | Range | Unit |
| --- | --- | --- | --- |
| Simulation | Joint Damping | [0.5, 2.5] | % |
| Simulation | Mass | [-0.25, 0.25] | % |
| Simulation | Center of Mass Location | [-0.01, 0.01] | m |
| Simulation | Passive Spring Stiffness | [-0.00, 0.00] | Nm/rad |
| Simulation | Torque Efficiency | [0.9, 1.0] | % |
| Simulation | Torque Delay | [0.5, 3] | ms |
| Heightmap | Shift in XY (per episode) | [-0.05, 0.05] | m |
| Heightmap | Shift in Z (per episode) | [-0.1, 0.1] | m |
| Heightmap | Shift in Z (per policy step) | [-0.02, 0.01] | m |
| Heightmap | Delay | [20, 100] | ms |

TABLE II: Parameters and ranges used in domain randomization. All parameters are uniformly sampled within the range.
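A per-episode draw of the domain-randomization values in Table II might look as follows; how each sampled value is applied to the simulator, and the interpretation of the percentage ranges, are assumptions not spelled out in the text, and the dictionary keys are ours:

```python
import numpy as np

DR_RANGES = {                                   # per-episode ranges from Table II
    "joint_damping": (0.5, 2.5),
    "mass": (-0.25, 0.25),
    "com_location_m": (-0.01, 0.01),
    "torque_efficiency": (0.9, 1.0),
    "torque_delay_ms": (0.5, 3.0),
    "heightmap_shift_xy_m": (-0.05, 0.05),
    "heightmap_shift_z_m": (-0.1, 0.1),
    "heightmap_delay_ms": (20.0, 100.0),
}

def sample_domain_randomization(rng):
    """Draw one set of model/heightmap randomization values for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in DR_RANGES.items()}

def jitter_heightmap(hm, rng):
    """Per-policy-step temporal noise: vertical shift of the whole map in [-0.02, 0.01] m."""
    return hm + rng.uniform(-0.02, 0.01)
```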
Training of this stage is done by minimizing the mean-squared error to the ground-truth heightmap, with heights expressed relative to the robot's floating base. We found that training just Stage 1 resulted in heightmaps with non-flat surfaces around corners and edges, which were difficult to correct via straightforward architecture and loss-function modifications. To improve over the raw heightmaps, Stage 2 utilizes a U-Net architecture [35], which has been shown to be effective in learning pixel-to-pixel transformations. This stage takes in the raw heightmap and outputs a cleaned version of the same size. Stage 2 uses an L1 loss and leads to a refined heightmap with sharper corners, sharper edges, and flatter surfaces. **Simulation-Based Data Generation.** Given a trained locomotion control policy, we use the simulator to execute episodes of the policy to collect training data. In particular, we run the policy in stochastic mode, where the actions are sampled from the action distribution at each time step. This has the effect of producing data around the typical data distribution the policy will encounter at runtime. The dataset contains one trajectory for each policy-execution episode, where each trajectory stores, at every time step \(t\), the robot state \(s_{t}\), the depth image \(I_{t}\), and the ground-truth heightmap \(m_{t}\). Note that during data collection with the policy, we use the same domain randomization as used during policy learning.

TABLE II: Parameters and ranges used in domain randomization. All parameters are uniformly sampled within the range.

| Group | Parameter | Range | Unit |
| --- | --- | --- | --- |
| Simulation | Joint Damping | [0.5, 2.5] | % |
| Simulation | Mass | [-0.25, 0.25] | % |
| Simulation | Center of Mass Location | [-0.01, 0.01] | m |
| Simulation | Passive Spring Stiffness | [-0.00, 0.00] | Nm/rad |
| Simulation | Torque Efficiency | [0.9, 1.0] | % |
| Simulation | Torque Delay | [0.5, 3] | ms |
| Heightmap | Shift in XY | [-0.05, 0.05] per episode | m |
| Heightmap | Shift in Z | [-0.1, 0.1] per episode | m |
| Heightmap | Shift in Z | [-0.02, 0.01] per policy step | m |
| Heightmap | Delay | [20, 100] | ms |

Fig. 4: Types of terrain used in training.

TABLE I: Ranges for terrain randomization used in training and evaluation. All terrain parameters are uniformly sampled within the range during training or evaluation.

| Terrain | Parameter | Training Range | Evaluation: Easy | Evaluation: Hard |
| --- | --- | --- | --- | --- |
| Ridge | height [m] | [0.05, 0.6] | [0.05, 0.5] | [0.5, 0.6] |
| Stair | height [m] | [0.05, 0.2] | [0.05, 0.1] | [0.1, 0.2] |
| Stair | length [m] | [0.25, 0.4] | [0.4, 0.4] | [0.25, 0.4] |
| Stair | steps | [4, 28] | [4, 12] | [12, 28] |
| Block | length or width | [0.4, 1] | [1, 1] | [0.4, 1] |
| Block | height [m] | [0.05, 0.4] | [0.05, 0.2] | [0.2, 0.4] |

Fig. 5: Predictor architecture. Heightmap is captured from hardware.
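A rough illustration of this data-collection loop is sketched below: a stochastic policy is rolled out and each step stores a \((s_t, I_t, m_t)\) tuple. The `SimStub` class, the policy call, and all array shapes are hypothetical stand-ins for the real simulator and controller, not the authors' implementation.

```python
import numpy as np

class SimStub:
    """Stand-in for the simulator used during data collection."""
    def reset(self):
        return np.zeros(12)                          # robot state (placeholder dimension)
    def step(self, action):
        s = np.random.randn(12)                      # next robot state
        depth = np.random.rand(48, 64)               # rendered depth image
        height = np.random.rand(20, 20)              # ground-truth local heightmap
        return s, depth, height, False               # (state, image, map, done)

def collect_episode(sim, policy_mean, policy_std, steps=400, rng=None):
    rng = rng or np.random.default_rng()
    traj = []
    s = sim.reset()
    for _ in range(steps):
        a = policy_mean(s) + policy_std * rng.standard_normal(10)   # stochastic actions
        s, depth, height, done = sim.step(a)
        traj.append((s.copy(), depth.copy(), height.copy()))        # store (s_t, I_t, m_t)
        if done:
            break
    return traj

dataset = [collect_episode(SimStub(), policy_mean=lambda s: np.zeros(10), policy_std=0.1)
           for _ in range(3)]                        # the paper reports 30,000 episodes
```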
The simulation model adds an egocentric camera that generates a depth image at each step. The camera is mounted at the top of the floating base and tilted by 60 degrees from the horizontal plane, giving a viewing area of approximately 2.5 m by 2.5 m in front of the robot, which is sufficient for the size of the heightmap. The depth images are rendered in MuJoCo. The robot states include only the foot positions expressed in the floating-base frame. The final training set contains 30,000 episodes. **Depth Image Randomization.** Depth images from real cameras are significantly noisier than the clean depth images from simulated cameras. During the generation of training episodes, we add a set of randomizations to the generated depth images to help bridge the sim-to-real gap. First, the camera pose and field of view (FOV) are randomized per episode. The camera pose shift has a range of \(\pm 1\) cm in XYZ and \(\pm 1\) degree in pitch angle. The FOV shift has a range of \(\pm 1\) degree. After data collection, a post-processing step adds a set of random noise patterns on top of the rendered depth images, including Gaussian noise, image rotation, edge noise, random small objects on the image, and spot noise. Each type of noise is applied to an image with a probability of 0.3. We found that this combination of depth-image randomizations enabled more effective sim-to-real transfer of the learned predictor. ## VI Simulation Results ### _Policy Performance_ We use the trained policy, along with a number of different policy setups, to evaluate performance in simulation for the ablation study. For each terrain we define an easy and a hard configuration as shown in Table I. For each policy setup, we collect 1000 episodes per terrain mode and compute the three metrics shown in Figure 7-A. _Ours_ is the policy trained with all components and is the one used for sim-to-real transfer. The other setups are controlled tests in which one feature is removed at a time. We also trained a blind policy across all terrains. In **Success Rate**, all policies have approximately the same performance on the easy mode of each terrain. For the more difficult terrain modes, however, the policies _w/o Learned Clock_ and _w/o Privilege Critic_ show significantly lower success rates. This means that control over the gait's contact timing and sequence, as well as more accurate value estimation, improve policy performance on hard terrains. **Episodes with foot collision** shows that, compared to _Ours_, the other policies have significantly more foot collision events. These random foot collisions with the terrain can lead to failures. Indeed, **Terminations due to foot collision** indicates that collisions account for most failure cases overall. Although foot collisions lead to frequent failures, the policy _w/o Foot Collision Reward_ has a success rate similar to _Ours_. When observing the policy _w/o Foot Collision Reward_ in simulation, we see that it learns to deal with collisions and to treat them as potentially useful proprioceptive feedback when the robot touches the terrain. For example, the robot will walk up high step-ups by sliding the foot along the vertical surface of the terrain. ### _Local Terrain Reconstruction_ We evaluated the heightmap predictor in simulation to validate the reconstruction quality, shown in Table III. We also implemented other architectures to use for ablations, including an MLP model and a transformer-based model. A key distinction among the model architectures is the history representation.
_LSTM_ has implicit history, _Transformer_ has a fixed window size of 0.6 seconds, and _MLP_ does not have history. Additionally, the comparison also includes an LSTM model, _LSTM w/o robot states_, that does not use robot states as input. Among all models, _LSTM_ achieves the best reconstruction loss. Without robot states as input, _LSTM w/o robot states_ produces worse reconstructions, indicating that proprioception and vision are both required to accurately estimate the local terrain. ### _Closed-loop Evaluation_ We couple each variant of the heightmap predictor with the policy (_Ours_ in Figure 7-A) to evaluate the closed-loop performance in simulation, shown in Figure 7-B. We use the same metrics as in the policy-performance evaluation. During each rollout, the predictor takes in the noisy depth images from simulation rendering, and the policy takes in the clean heightmap produced by the second stage in Figure 5. (Fig. 6: Depth image from simulation and real world, with corresponding real predicted heightmap and simulation heightmap.) In **Success Rate**, all predictors produce similar performance over each terrain mode. This potentially means that, regardless of the reconstruction errors, the learned policy is robust to estimation noise. In **Episodes with foot collision**, compared to _LSTM_, the other models show worse performance and produce more collision events. In **Termination due to foot collision**, compared to _LSTM_, the other models fail more often due to unfavorable foot collisions. ## VII Sim-to-real Transfer ### _Experimental Setup_ We deployed the proposed system on the bipedal robot Cassie. To endow the vision system with fast inference, we added an Intel RealSense D455 camera and an NVIDIA Jetson Orin Nano module to Cassie. Depth images are post-processed with a hole-filling filter and distance clipping before being sent to the heightmap predictor. The main control policy runs on the robot's main computer, while inference of the heightmap predictor runs asynchronously on the Jetson. The camera depth stream is set to 90 FPS and heightmap-prediction inference runs at 200 Hz. The end-to-end delay between the main control policy and the heightmap predictor is measured to be up to 20 ms, including the UDP communication delay, camera stream, and model inference. To create structured terrains similar to those used in simulation training, we built wooden blocks of various heights and sizes in the lab environment shown in Figure 1. ### _Evaluation_ We tested the proposed system over various terrains. Overall, the system is able to control the robot Cassie to traverse single high blocks, stairs, and random blocks of various dimensions. Although the egocentric view of the camera cannot look underneath the robot, we found that the policy is able to traverse terrain while reasoning about the swing trajectory over the terrain underneath it. This observation potentially means that the policy maintains an internal odometry estimate when approaching upcoming terrain. The system also enables the robot to climb a 0.5 m high step-up, where torque saturation occurs on the leading stance leg when lifting the entire robot from the lower ground to the high step. We also tested the learned system on a treadmill while a human operator continually fed random blocks down the treadmill. The robot is able to step over the incoming blocks and maintain the desired speed under these conditions. Please refer to the submission video for hardware demonstrations.
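To make the depth post-processing step above concrete, here is a minimal numpy sketch of distance clipping plus a crude row-wise hole-filling pass. It is only an illustration and stands in for, rather than reproduces, the RealSense SDK filters used on the robot; the 3 m clipping range and image size are arbitrary placeholders.

```python
import numpy as np

def preprocess_depth(depth_m: np.ndarray, max_range_m: float = 3.0) -> np.ndarray:
    """Fill invalid pixels with a nearby valid value in the same row, then clip the range."""
    out = depth_m.astype(np.float64).copy()
    invalid = ~np.isfinite(out) | (out <= 0.0)
    for r in range(out.shape[0]):
        row, bad = out[r], invalid[r]
        if not bad.any():
            continue
        if bad.all():
            row[:] = max_range_m                           # nothing valid in this row
            continue
        bad_pos = np.flatnonzero(bad)
        good_pos = np.flatnonzero(~bad)
        nearest = np.minimum(np.searchsorted(good_pos, bad_pos), len(good_pos) - 1)
        row[bad_pos] = row[good_pos[nearest]]              # crude hole filling
    return np.clip(out, 0.0, max_range_m)                  # distance clipping

depth = np.full((48, 64), 1.5)
depth[10:12, 20:25] = 0.0                                   # simulated sensor dropout
clean = preprocess_depth(depth)
print(float(clean.min()), float(clean.max()))
```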
## VIII Conclusion In this work, we proposed a fully learned visual-locomotion system that uses neural networks to control a highly dynamic bipedal robot over challenging terrain. To deal with constrained locomotion over complex terrains, we used simulation to train a robust control policy. Additionally, we identified several key factors that enable this level of policy performance, including adaptive gait control and collision-free swing-leg control. We also provided the key ingredients for converting an egocentric view from a single depth camera into a local terrain heightmap using simulation-only data. Using visual and proprioceptive information expressed entirely in the local frame, the entire system ran onboard and achieved successful sim-to-real transfer without the need for explicit odometry estimation. We believe these key components are fundamental to our study as well as to future research on learning vision-based dynamic locomotion for legged robots.

TABLE III: Reconstruction loss with various heightmap predictors.

| Model Architecture | Reconstruction Loss (MAE) [cm] |
| --- | --- |
| LSTM | **2.806** |
| Transformer | 4.221 |
| MLP | 4.932 |
| LSTM (w/o robot states) | 4.448 |

Fig. 7: **A.** Ablation study on policy training. **B.** Ablation study on heightmap predictor architecture. Each ablation study uses data collected from the range of terrains defined in Table I. **Success rate** indicates that the robot does not fall down during 10 seconds of rollout. **Episodes with foot collision** indicates the number of episodes with one or more foot collision events during rollouts; such random collision events are unfavorable for hardware deployment. **Termination due to foot collision** shows the percentage of foot collision events that lead to failures. All plots are evaluated with a 95% confidence interval.
Reinforcement learning (RL) for bipedal locomotion has recently demonstrated stable walking over moderate terrain based only on proprioceptive sensing. However, such blind controllers fail in settings where the robot must anticipate and adapt to the terrain, which requires visual perception. In this paper, we propose a fully learned system that allows a bipedal robot to maintain the commanded travel speed and direction while perceiving the terrain. The approach first trains a controller in simulation using a heightmap expressed in the robot's local frame. Next, data is collected in simulation to train a heightmap predictor, whose input is a history of depth images and robot states. With appropriate domain randomization, this approach enables transfer from simulation to the real world; no pose estimation or fine-tuning on real-world data is required. This approach
2306.17618
Polarimetric iToF: Measuring High-Fidelity Depth through Scattering Media
Indirect time-of-flight (iToF) imaging allows us to capture dense depth information at a low cost. However, iToF imaging often suffers from multipath interference (MPI) artifacts in the presence of scattering media, resulting in severe depth-accuracy degradation. For instance, iToF cameras cannot measure depth accurately through fog because ToF active illumination scatters back to the sensor before reaching the farther target surface. In this work, we propose a polarimetric iToF imaging method that can capture depth information robustly through scattering media. Our observations on the principle of indirect ToF imaging and polarization of light allow us to formulate a novel computational model of scattering-aware polarimetric phase measurements that enables us to correct MPI errors. We first devise a scattering-aware polarimetric iToF model that can estimate the phase of unpolarized backscattered light. We then combine the optical filtering of polarization and our computational modeling of unpolarized backscattered light via scattering analysis of phase and amplitude. This allows us to tackle the MPI problem by estimating the scattering energy through the participating media. We validate our method on an experimental setup using a customized off-the-shelf iToF camera. Our method outperforms baseline methods by a significant margin by means of our scattering model and polarimetric phase measurements.
Daniel S. Jeon, Andreas Meuleman, Seung-Hwan Baek, Min H. Kim
2023-06-30T12:42:40
http://arxiv.org/abs/2306.17618v1
# Polarimetric iToF: Measuring High-Fidelity Depth through Scattering Media ###### Abstract Indirect time-of-flight (iToF) imaging allows us to capture dense depth information at a low cost. However, iToF imaging often suffers from multipath interference (MPI) artifacts in the presence of scattering media, resulting in severe depth-accuracy degradation. For instance, iToF cameras cannot measure depth accurately through fog because ToF active illumination scatters back to the sensor before reaching the farther target surface. In this work, we propose a polarimetric iToF imaging method that can capture depth information robustly through scattering media. Our observations on the principle of indirect ToF imaging and polarization of light allow us to formulate a novel computational model of scattering-aware polarimetric phase measurements that enables us to correct MPI errors. We first devise a scattering-aware polarimetric iToF model that can estimate the phase of unpolarized backscattered light. We then combine the optical filtering of polarization and our computational modeling of unpolarized backscattered light via scattering analysis of phase and amplitude. This allows us to tackle the MPI problem by estimating the scattering energy through the participating media. We validate our method on an experimental setup using a customized off-the-shelf iToF camera. Our method outperforms baseline methods by a significant margin by means of our scattering model and polarimetric phase measurements. ## 1 Introduction Time-of-Flight (ToF) imaging is the cornerstone of modern 3D imaging technology that has received great attention across diverse fields, including computer graphics and vision. Its notable applications include autonomous driving, 3D motion capture, digital-human reconstruction, human-computer interfaces, robotics, etc. Modern ToF cameras can be broadly categorized into direct and indirect systems. Direct ToF measures the round-trip time of photons emitted from an illumination source until they travel back to the ToF detector. Indirect ToF, referred to as amplitude-modulated continuous-wave ToF, utilizes a temporally modulated illumination source and _computationally_ estimates the round-trip time of photons from modulation phase changes [21]. The indirect acquisition principle lowers the system-building cost by departing from the necessity of the _picosecond-accurate_ illumination, detector, and synchronization module used in direct ToF. Furthermore, indirect ToF achieves low-cost instant 3D imaging of the entire field of view with flood-fill illumination. As a result, indirect ToF cameras have achieved remarkable success in commercial markets, e.g., Microsoft Azure Kinect and PMD sensors. However, it is also the _indirect_-imaging scheme that poses critical limitations on robust 3D imaging. One of the notable resulting challenges is multi-path interference (MPI). Light emitted from the ToF illumination module travels through a scene and reaches the ToF sensor. During light transport, some photons interact with only one scene point via direct reflection, thus providing accurate depth in Figure 1: We introduce a polarimetric iToF imaging method that can estimate depth robustly through scattering media. (a) A photograph of the input scene without fog. (b) Ground-truth depth measure without fog. (c) Input iToF amplitude map captured with fog. (d) Depth estimated by a conventional iToF camera with fog. (e) Depth improved by naïve cross-polarization filtering. 
(f) Our iToF depth measurement result is fairly close to the GT depth. formation of that point. However, other photons undergo multiple reflections on different scene points because of indirect reflection. If a pixel on the ToF sensor receives a mixture of direct and indirect photons, the measured phase shift does not correspond to the analytical phase shift of the target scene depth anymore. Thus, it degrades the accuracy of the reconstructed depth. The MPI problem becomes more severe in the presence of scattering media such as fog (Figure 1(a) for example) because light photons experience numerous indirect reflections with the scattering particles. In this case, the scattered light energy often exceeds that of light interacting with a target scene point, resulting in extremely inaccurate scene depth estimation as shown in Figure 1(c), i.e., the measured distance through fog tends to be closer than the actual distance. This acts as a critical hurdle for indirect ToF cameras to be deployed in the wild, e.g., fire-rescuing robots, autonomous driving under fog, and underwater navigation. In this paper, we propose a polarimetric iToF imaging method robust to scattering environments. Our key idea is to revisit the polarization of light and the scattering theory about intensity attenuation and depolarization. Our method allows for accurate scene depth estimation even in the presence of severe scattering media, as shown in Figure 1(d). We leverage the polarization property of light that the backscattered light from scattering particles better maintains the polarization state of the emitted photons than the light that travels farther to a surface [6]. We first configure the orthogonal polarization modulation of ToF illumination and detection to initially filter out the polarized backscattered light optically. While existing methods [36, 39, 7, 13] also demonstrate the effectiveness of this cross-polarization setup, one critical problem of cross-polarization setup is that the assumption on the polarized state of backscattered light does not hold in practice because backscattered light undergoes a change of polarization throughout scattering events toward an unpolarized state [37]. This results in limited depth accuracy. To handle this, we devise a computational method that can eliminate the remaining unpolarized backscattered light based on the indirect ToF's signal representation: phase and amplitude. First, we estimate the phase of unpolarized backscattered light by revisiting the scattering model of intensity attenuation and depolarization [33]. Second, the amplitude of unpolarized backscattered light is estimated based on the observation that the amplitude-offset ratio is consistent for non-scattered light. Then, our method subtracts the unpolarized backscattered light from the initial cross-polarization measurements, resulting in the estimates of scattering-free indirect ToF measurements. Our polarimetric iToF imaging method can enhance depth accuracy significantly, outperforming existing baselines for depth estimation through scattering media, as shown in Figure 1(d). In summary, our contributions are: * A scattering-aware polarimetric phasor model specifically designed for polarimetric iToF imaging, based on the scattering theory of light intensity attenuation and depolarization. * An efficient scattering phasor optimization that can estimate the phase of unpolarized backscattered light via scattering analysis of phase and amplitude in iToF. 
## 2 Related Work Multi-path interference.Indirect ToF cameras measure the round-trip time of light emitted from an amplitude-modulated illumination source until it travels through a scene and is captured by a ToF detector. While being a practical depth-imaging technology, indirect ToF imaging suffers from MPI artifacts. As we capture the sum of directly-reflected light from a scene and indirectly-reflected light through multiple reflections, iToF often results in distorted phase measurements. The MPI problem can be mitigated by extracting direct-only reflection from such inter-graded measurements. One effective approach is to capture iToF measurements with multiple modulation frequencies [30, 15, 17, 9, 8]. Another direction is to utilize the data-driven depth prior of natural scenes to estimate the direct-only reflection from the mixture of direct/indirect reflections [14, 1, 34, 24]. While these methods can deal with scenes containing second-bounce reflections of light, they often fail to handle more extreme scenarios, such as scenes with scattering media, where scattering events make the number of reflections substantially higher than two. Using an analytical scattering model of intensity attenuation is an effective solution in ToF imaging [26]. Fujimura et al. [10] extend the scattering-model approach by utilizing segmented background pixels that are only contributed by backscattered light without any light from scene reflection. However, capturing natural scenes often violates this assumption. One can overcome this background-dependency problem by using relatively short-pulse ToF imaging [19] at an increased cost for building a picosecond-accurate synchronized ToF camera like direct TOF. **Polarization and scattering.** Polarization of light describes how its electric field oscillates in space [18]. As a wave property of light, polarization has been extensively utilized for many graphics and vision problems including shape from polarization [4, 11], appearance from polarization [5], light transport [2], direct ToF imaging [3], and reflection removal [29, 22]. Most relevant to us, polarization helps us see through scattering media by optically filtering out the backscattered light and only capturing light that has interacted with the target surface using polarization. As a common practice for achieving this goal, one can use two linear polarizers at a perpendicular configuration in front of an illumination module and a camera [39]. This setup, so-called cross-polarization, optically rejects backscattered light, which tends to maintain the polarization state of illumination, thus filtered out by the perpendicular polarizer on the detector [20]. In contrast, light that has traveled to a target surface mostly loses the polarization state of the original illumination, therefore, can be detected by the camera passing through the perpendicular polarizer. An extension to cross-polarization imaging is polarization-difference imaging (PDI) which takes an additional image with a parallel orientation of the two polarizers instead of the perpendicular configuration [28, 31]. Subtracting the cross-polarization measurements from the parallel-polarization measurements helps us estimate the backscattered light [35]. PDI offers a better imaging capability in the presence of scattering media than cross-polarization imaging, which can be further improved using the segmented MPI-free background pixels [40, 41]. 
However, they still suffer from limited depth-imaging capability because of the unmet assumption on the spatially-uniform polarization state of backscattered light. Real-world scattered light exhibits spatially-varying polarization states [10] especially when an active illumination is used as in ToF imaging. Our polarimetric iToF method does not make such assumptions and thus enables accurate 3D imaging even under severe scattering media. ## 3 Background Indirect ToF cameras emit and capture continuously amplitude-modulated light, which can be characterized with three parameters, called phasor representation [15]: amplitude \(a\), phase \(\varphi\), and offset \(s\). Figure 2 shows a polar-coordinate visualization of the phasor representation, where the length and angle of the vector correspond to the amplitude and phase of the signal. In iToF imaging, we obtain the phasor representation by capturing multiple samples of the returning light at different phases. A common choice is to use four-phase samples at \(\phi=\{0^{\circ},45^{\circ},90^{\circ},135^{\circ}\}\), resulting in the sampled intensities of light as \(\{I_{\phi}\}\). Once measured, the phasor representation is expressed as: \[\text{Amplitude:}\quad a= \frac{1}{2}\sqrt{(I_{135^{\circ}}-I_{45^{\circ}})^{2}+(I_{90^{ \circ}}-I_{0^{\circ}})^{2}},\] \[\text{Phase:}\quad\quad\varphi= \text{arctan2}\left(I_{135^{\circ}}-I_{45^{\circ}},I_{90^{\circ}} -I_{0^{\circ}}\right),\] \[\text{Offset:}\quad\quad s= \left(I_{0^{\circ}}+I_{45^{\circ}}+I_{90^{\circ}}+I_{135^{ \circ}}\right)/4. \tag{1}\] ## 4 Method We first model how the backscattered light distorts the true phasor of a scene point in consideration of light polarization. Our analytical model then enables us to remove the undesired phasor distortion from polarimetric iToF measurements for accurate 3D imaging. **Imaging setup.** We equip an off-the-shelf ToF module with two linear polarizers: one for the light source and another for the detector. While the linear polarizer on the light source is set to the horizontal orientation, the linear polarizer in front of the detector is mounted on a motorized rotation stage to provide two orthogonal angles. As input, we use two sets of four-tap \(\phi\) phase measurements of the parallel and perpendicular orientations of the detector's linear polarizer, respectively. See Figure 3. ### Input We describe the captured light \(I_{\phi}\) from the customized iToF camera as the sum of scattered light \(S_{\phi}\) and target light \(T_{\phi}\) that has interacted with scene objects: \[I_{\phi}=S_{\phi}+T_{\phi}=\left(S_{\phi}^{u}+S_{\phi}^{p}\right)+\left(T_{ \phi}^{u}+T_{\phi}^{p}\right), \tag{2}\] where \(\{S/T\}_{\phi}^{u}\) and \(\{S/T\}_{\phi}^{p}\) are the unpolarized and polarized components for each case. **Unpolarized input.** When a perpendicular orientation of the polarizers is set, i.e., _cross-polarization_ configuration, the measurement is not affected by the light polarized in the same direction as the illumination, resulting in the following image formation: \(I_{\phi}^{\perp}=\frac{1}{2}S_{\phi}^{u}+\frac{1}{2}T_{\phi}^{u}\), where \(I_{\phi}^{\perp}\) is the captured light intensity of cross-polarization. The unpolarized backscattered light \(S_{\phi}^{u}\) is often ignored (\(S_{\phi}^{u}\approx 0\)) in conventional cross-polarization imaging, enabling a straightforward computation of the target-only signal \(T_{\phi}^{u}\approx I_{\phi}^{\perp}\). However, this assumption does not practically hold. 
In fact, only a fraction of backscattered light Figure 3: (a) Our polarimetric iToF imaging setup. (b) We capture four-tap phase samples with cross and parallel polarization, respectively, as input. Figure 2: Phasor representation of iToF imaging. (a) Amplitude-modulated light returns to the iToF sensor after traveling a scene. Phasor representation includes amplitude \(a\), phase \(\varphi\), and offset \(s\) to describe the continuous signal. Indirect ToF imaging measures four samples at different phases indicated by the red dots. (b) We visualize the amplitude and phase of the continuous signal in the polar coordinates, where the length and direction of a vector are used to represent amplitude and phase. from a very near distance meets this requirement. Light photons that undergo multiple scattering events lose the polarization state of the original illumination, turning into unpolarized light \(S_{\phi}^{u}\)[23, 16]. The unmet assumption results in inaccurate 3D imaging under scattering media. **Polarized input.** To avoid the previous assumption, we capture another polarimetric phase image with the parallel orientation of the illumination-detection linear polarizers by rotating the linear polarizer on the detector side. Note that the parallel-polarization configuration does not reject the polarized component \(\{S/T\}_{\phi}^{p}\) as in the cross-polarization setup because the returning light with the same polarization state as the illumination still passes through the parallel-oriented polarizer on the detector. Hence, we model the captured light intensity \(I_{\phi}^{\parallel}\) as \(I_{\phi}^{\parallel}=S_{\phi}^{p}+\frac{1}{2}S_{\phi}^{u}+T_{\phi}^{p}+\frac{1 }{2}T_{\phi}^{u}\). ### Phasor Model Figure 4 depicts our phasor image formation model. With the ultimate goal of estimating the _phase_ information of the _target_ point (blue arrows) from given (a) parallel-polarized and (b) cross-polarized phasor measurements (green arrows), we want to estimate the phasor information of the backscattered light first. To this end, our method should know two additional phasor representations: (1) the polarized backscattered light (yellow arrow) and (2) the unpolarized backscattered light (red arrow). **Phasor of polarized scattering.** The first one is the phasor representation of the _polarized_ backscattered light (yellow arrow) which can be easily obtained following the principle of PDI [31]. We subtract the cross-polarization measurement \(I_{\phi}^{\perp}\) from the parallel measurement \(I_{\phi}^{\parallel}\): \(I_{\phi}^{-}=I_{\phi}^{\parallel}-I_{\phi}^{\perp}=S_{\phi}^{p}+T_{\phi}^{p} \approx S_{\phi}^{p}\). The target light reflected from a scene point is unlikely to have the same polarization state as the original illumination due to the numerous scattering events during its light transport. Since most light from the target surfaces is the diffuse reflection and has interacted with many scattering particles, it is unpolarized: \(T_{\phi}^{p}\approx 0\). As a result, we can obtain the phasor information of _polarized backscattered light_\(S_{\phi}^{p}\) from the PDI measurements \(I_{\phi}^{-}\) using using Equation (1), yielding phase shift \(\overline{\varphi}_{S}^{p}\), amplitude \(\overline{a}_{S}^{p}\), and offset \(\overline{s}_{S}^{p}\). 
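A minimal numpy sketch of this phasor arithmetic is given below: Equation (1) applied to the four-tap measurements, followed by the PDI subtraction \(I_{\phi}^{-}=I_{\phi}^{\parallel}-I_{\phi}^{\perp}\), whose phasor approximates the polarized backscatter \(S_{\phi}^{p}\). The image sizes and random array contents are placeholders, not captured data.

```python
import numpy as np

def phasor(I0, I45, I90, I135):
    """Equation (1): amplitude, phase, and offset from the four-tap measurements."""
    amplitude = 0.5 * np.sqrt((I135 - I45) ** 2 + (I90 - I0) ** 2)
    phase = np.arctan2(I135 - I45, I90 - I0)
    offset = (I0 + I45 + I90 + I135) / 4.0
    return amplitude, phase, offset

# four-tap captures for the parallel and cross polarizer orientations (placeholder arrays)
I_par = [np.random.rand(240, 380) for _ in range(4)]     # I^||_{0,45,90,135}
I_perp = [np.random.rand(240, 380) for _ in range(4)]    # I^perp_{0,45,90,135}

# cross-polarization phasor (used later as a^perp, phi^perp, s^perp)
a_perp, phi_perp, s_perp = phasor(*I_perp)

# PDI: subtract tap-by-tap, then take the phasor of the difference -> polarized backscatter
I_diff = [p - c for p, c in zip(I_par, I_perp)]
a_Sp, phi_Sp, s_Sp = phasor(*I_diff)
```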
**Phasor of unpolarized scattering.** Second, given the phasor information of the polarized backscattered light (orange arrow), we need to know the phasor representation of _unpolarized backscattered light_ (red arrow in Figure 4): phase shift \(\overline{\varphi}_{S}^{u}\), amplitude \(\overline{a}_{S}^{u}\), and offset \(\overline{s}_{S}^{u}\) to obtain the target phase information (blue arrow). The following section provides details of our solution. ### Phasor of Unpolarized Backscattered Light We estimate the phasor of unpolarized backscattered light. Phasor consists of phase and amplitude, which are optimized in two folds. We first estimate the _scattering decay parameter_ of backscattered light according to the volumetric scattering theory. We then obtain _phase_ and _amplitude_ from unpolarized backscattered light. #### 4.3.1 Scattering Decay To estimate the phase of the unpolarized backscattered light, we obtain the integrated phase shift of _polarized_ multiple backscattered light \(\overline{\varphi}_{S}^{p}\) based on volumetric integration over the entire range of phase shift \(\varphi\): \[\overline{\varphi}_{S}^{p}=\frac{\int_{\varphi_{0}}^{\infty}\varphi a_{S}^{p} (\varphi)d\varphi}{\int_{\varphi_{0}}^{\infty}a_{S}^{p}(\varphi)d\varphi}, \tag{3}\] where \(a_{S}^{p}(\varphi)\) is a phase-conditioned amplitude function, and \(\varphi_{0}\) is the phase shift corresponding to the nearest travel distance of ToF light. We can interpret Equation (3) as a weighted average of phase shift \(\varphi\) with the weight of the amplitude function \(a_{S}^{p}(\varphi)\). As shown in Figure 5, we formulate the amplitude function \(a_{S}^{p}(\varphi)\) based on the exponential decay of amplitude and degree-of-polarization (DoP) in scattering media [32, 33, 38] Figure 4: Phasor of the target, polarized backscatter, unpolarized backscatter, and measurements for (a) the parallel-polarization setting and (b) the cross-polarization setting. The phase of the target light stays the same and only the amplitude is different between the two captures. In contrast, both phase and amplitude change for the total backscattered light due to the contribution of polarized backscatter. Note that the polarized backscatter disappears in the cross-polarization setting. Figure 5: (a) Exponentially-decaying scattering model describes the amplitude changes of polarized and unpolarized backscattered light with respect to depth. For a far depth, the contribution of the unpolarized light becomes larger than that of the polarized light. (b) Phasor representation of unpolarized light and polarized light along depth, here only shown the first quadrant. The phasor is the sum of continuously varying phasors along depth. as follows: \[a_{S}^{p}\left(\varphi\right)\propto\frac{1}{\varphi^{2}}\underbrace{\exp(- \sigma_{i}\varphi)}_{\mathrm{intensity}}\underbrace{\exp(-\sigma_{p}\varphi)}_{ \mathrm{DoP}}, \tag{4}\] where \(\sigma_{i}\) and \(\sigma_{p}\) are the extinction coefficients of _intensity_ and _DoP_ attenuation. The first term \(\frac{1}{\varphi^{2}}\) comes from the inverse-square law of emitted light into a scene. The second and third terms describe the exponential decay of _polarized_ light amplitude and DoP. 
After substituting \(a_{S}^{p}\) in Equation (3) with Equation (4), we integrate the numerator and the denominator of Equation (3) using the exponential integral formula: \[\overline{\varphi}_{S}^{p}=f(\sigma)=\frac{\mathrm{Ei}\left(\sigma\varphi_{0} \right)}{-\sigma\mathrm{Ei}\left(\sigma\varphi_{0}\right)+\frac{1}{\varphi_{0 }}\exp(-\sigma\varphi_{0})}, \tag{5}\] where \(\sigma\) is the total decay rate, the sum of intensity \(\sigma_{i}\) and DOP \(\sigma_{p}\), and \(\mathrm{Ei}(\cdot)\) is the exponential-integral function. The initial phase \(\varphi_{0}\) can be obtained by geometric calibration of the system. As a result, we have established our model \(f(\cdot)\) on the phase shift of polarized backscatter \(\overline{\varphi}_{S}^{p}\) as a function of the decay parameter \(\sigma\). We estimate the best scattering decay parameter \(\sigma\) that produces the prediction of \(f(\sigma)\) most similar to the experimental data \(\overline{\varphi}_{S}^{p}\) obtained in Section 4.2: \[\underset{\sigma}{\mathrm{minimize}}\left\|\overline{\varphi}_{S}^{p}-f( \sigma)\right\|_{2}^{2}. \tag{6}\] We solve this using the Adam gradient-descent optimization. We obtain the decay rate \(\sigma\) as a median value from its per-pixel estimates. #### 4.3.2 Phase Estimation Once the decay parameter \(\sigma\) is estimated, we turn to estimate the phase of _unpolarized_ backscattered light. To this end, we develop an _unpolarized_ version of Equations (3) and (4). The integrated phase shift \(\overline{\varphi}_{S}^{u}\) of unpolarized backscattered light is defined as \[\overline{\varphi}_{S}^{u}=\frac{\int_{\varphi_{0}}^{\infty}\varphi a_{S}^{u} \left(\varphi\right)d\varphi}{\int_{\varphi_{0}}^{\infty}a_{S}^{u}\left( \varphi\right)d\varphi}, \tag{7}\] where we use the amplitude of unpolarized backscattered light \(a_{S}^{u}(\varphi)\) as a weight. We then define the amplitude function by considering the unpolarized ratio as \[a_{S}^{u}\left(\varphi\right)\propto\frac{1}{\varphi^{2}}\underbrace{\exp(- \sigma_{i}\varphi)}_{\mathrm{intensity}}\underbrace{\left(1-\exp(-\sigma_{p} \varphi)\right)}_{1-\mathrm{DoP}}. \tag{8}\] Note that the third term attenuates the amplitude with the ratio of unpolarized light. Similarly with the polarized case, we rewrite Equation (7) by substituting \(a_{S}^{u}\left(\varphi\right)\) with Equation (8): \[\overline{\varphi}_{S}^{u}= \tag{9}\] \[\frac{\mathrm{Ei}(\sigma_{1}\varphi_{0})-\mathrm{Ei}(\sigma \varphi_{0})}{-\sigma_{i}\mathrm{Ei}(\sigma_{1}\varphi_{0})+\sigma\mathrm{Ei}( \sigma\varphi_{0})+\frac{1}{\varphi_{0}}\exp(-\sigma_{1}\varphi_{0})-\frac{1}{ \varphi_{0}}\exp(-\sigma\varphi_{0})}.\] Since we have estimated the sum of decay rates \(\sigma\) from Equation (6) already, we can exclude \(\sigma\) from the function parameter. Lastly, Equation (9) is the analytical model of the phase of unpolarized backscattered light and allows us to compute the phase distortion if the decay rate of intensity \(\sigma_{i}\) is known. To this end, we utilize the previously estimated integrated decay rate \(\sigma\) from Equation (6). Both intensity and DoP decrease exponentially under the same scattering media with respect to travel distance. Thus, the total decay rate \(\sigma=\sigma_{i}+\sigma_{p}\) can be related to the intensity decay rate as \(\sigma_{i}=\alpha\sigma\), where \(\alpha\) is a global scalar. We calibrate the scalar \(\alpha\) using a fog chamber, the details of which are included in the supplemental document. 
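To make the preceding estimation step concrete, here is an illustrative sketch (not the authors' code) that evaluates the weighted-average phase of Equation (3), with the amplitude model of Equation (4), by direct numerical integration, and fits the decay rate \(\sigma\) by a simple one-dimensional search instead of the Adam optimizer used in the paper; \(\varphi_{0}\) and the measured phase value are placeholders.

```python
import numpy as np

def mean_polarized_phase(sigma, phi0=0.2, phi_max=50.0, n=20000):
    """Equation (3) with the amplitude model of Equation (4): amplitude-weighted mean phase."""
    phi = np.linspace(phi0, phi_max, n)
    a = np.exp(-sigma * phi) / phi ** 2            # exp(-(sigma_i + sigma_p) * phi) / phi^2
    return float(np.sum(phi * a) / np.sum(a))      # uniform grid, so the spacing cancels

def fit_sigma(measured_phase, phi0=0.2):
    """Equation (6): pick the sigma whose predicted mean phase best matches the measurement."""
    candidates = np.linspace(1e-3, 5.0, 2000)
    errors = [(mean_polarized_phase(s, phi0) - measured_phase) ** 2 for s in candidates]
    return float(candidates[int(np.argmin(errors))])

measured_phase_Sp = 0.45                 # per-pixel values would come from the PDI phasor;
sigma_hat = fit_sigma(measured_phase_Sp) # the paper takes the median over per-pixel estimates
print(sigma_hat)
```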
With the calibrated scalar \(\alpha\) and the previously estimated \(\sigma\), we compute \(\sigma_{i}\) which is then used to compute the phase of unpolarized backscattered light using Equation (9). #### 4.3.3 Amplitude Estimation Amplitude-to-offset ratio.Before estimating the scattered amplitude of unpolarized light, we first define the amplitude-to-offset ratio. Suppose a scene has no scattering media and interreflection. An indirect ToF camera then captures direct reflection \(T\) only. In this scenario, the ratio of phasor amplitude and offset is constant: \[a_{T}/s_{T}=k_{0}, \tag{10}\] where \(k_{0}\) is the amplitude-to-offset ratio that only depends on the power range of the ToF illumination module. Note that the ratio is independent of scene reflectance. We calibrate \(k_{0}\), which is 0.71 in our setup, by capturing a reference target in a darkroom. Amplitude modeling.Our main goal is to estimate the unknown scattered amplitude \(\overline{a}_{S}^{u}\) of unpolarized light from the given information: (a) the calibrated amplitude-offset constant \(k_{0}\), (b) the _analytical_ phasor representation of the polarized/unpolarized/PDI measurements and (c) our phase estimate of unpolarized backscattering \(\overline{\varphi}_{S}^{u}\) in Section 4.3.2. We first formulate the phasor of unpolarized target light as: \[a_{T}^{u} =\left\|a^{\perp}\mathrm{exp}(\mathrm{i}\varphi^{\perp})-\overline {a}_{S}^{u}\mathrm{exp}(\mathrm{i}\overline{\varphi}_{S}^{u})\right\|, \tag{11}\] \[s_{T}^{u} =s^{\perp}-\overline{s}_{S}^{u}, \tag{12}\] where \(a^{\perp}\), \(\varphi^{\perp}\), and \(s^{\perp}\) are the phasor of the cross-polarization measurements obtained by Equation (1). Note that we aim to estimate the amplitude \(\overline{a}_{S}^{u}\) in this equation. \(\overline{a}_{S}^{u}\), \(\overline{\varphi}_{S}^{u}\), and \(\overline{s}_{S}^{u}\) are the _integrated_ amplitude, phase, and offset of unpolarized multiple backscattered light, which will be modeled in the following. **Amplitude modeling with amplitude-to-offset ratio.** We now apply the constant amplitude-to-offset ratio of Equation (10) to the unpolarized _target_ light and the unpolarized _backscattered_ with a specific phase shift \(\varphi\): \[a_{T}^{u}/s_{T}^{u} =k_{0}, \tag{13}\] \[a_{S}^{u}(\varphi)/s_{S}^{u}(\varphi) =k_{0}. \tag{14}\] For the _integrated_ backscattered light over the phase shift \(\varphi\), the constant amplitude-to-offset ratio does not hold anymore. In fact, the ratio \(\overline{k}_{S}^{u}\) decreases lower than \(k_{0}\), depending on the thickness of the scattering media and interreflection: \[\overline{a}_{S}^{u} =\overline{k}_{S}^{u} \tag{15}\] \[=\frac{\left\|\int_{\varphi_{0}}^{\infty}a_{S}^{u}(\varphi) \mathrm{exp}(\mathrm{i}\varphi)d\varphi\right\|}{\int_{\varphi_{0}}^{\infty}a _{S}^{u}(\varphi)d\varphi}=k_{0}\frac{\left\|\int_{\varphi_{0}}^{\infty}a_{S}^ {u}(\varphi)\mathrm{exp}(\mathrm{i}\varphi)d\varphi\right\|}{\int_{\varphi_{ 0}}^{\infty}a_{S}^{u}(\varphi)d\varphi},\] where \(\overline{k}_{S}^{u}\) can be rewritten as a function of \(a_{S}^{u}(\varphi)\) using Equation (14) as shown on the right-hand-side of Equation (15). Note that \(k_{s}\) is the ratio of amplitude to offset, and \(k_{s}\) is smaller than \(k_{0}\) because the amplitude of the integrated phasor is smaller than the integration of the amplitudes themselves. 
Lastly, since the target amplitude model \(a_{T}^{u}\) in Equation (11) includes the _amplitude of unpolarized backscattered light_\(\overline{a}_{S}^{u}\) that we want to find out, we first combine Equations (11) and (12) by substituting \(a_{S}^{u}\) and \(s_{S}^{u}\) in Equation (13). We then write an equation by substituting \(\overline{s}_{S}^{u}\) with Equation (15) in the combined equation: \[k_{0}s^{\perp}= \left\|a^{\perp}\mathrm{exp}(\mathrm{i}\varphi^{\perp})- \overline{a}_{S}^{u}\mathrm{exp}(\mathrm{i}\overline{\varphi}_{S}^{u})\right\|+\] \[\frac{\overline{a}_{S}^{u}\int_{\varphi_{0}}^{\infty}a_{S}^{u}( \varphi)d\varphi}{\left\|\int_{\varphi_{0}}^{\infty}a_{S}^{u}(\varphi)\mathrm{ exp}(\mathrm{i}\varphi)d\varphi\right\|}. \tag{16}\] **Amplitude estimation.** All other variables in Equation (19) are already known: \(a^{\perp}\), \(\varphi^{\perp}\), \(s^{\perp}\) from the cross-polarization measurements, \(\overline{\varphi}_{S}^{u}\) from Section 4.3.2, and \(k_{0}\) from calibration. Hence, Equation (19) can be reformulated to find amplitude \(\overline{a}_{S}^{u}\) in a closed-form solution. We refer to the supplemental document for its analytic solution. ### Scattering Removal Now that we have obtained both phase \(\overline{\varphi}_{S}^{u}\) and amplitude \(\overline{a}_{S}^{u}\) of unpolarized backscattered light, we are ready to estimate the target depth by computational removing the distortion from backscattered light. We simply subtract the phase-amplitude distortion \(\overline{a}_{S}^{u}\mathrm{exp}(\mathrm{i}\overline{\varphi}_{S}^{u})\) from the cross-polarization measurements \(a^{\perp}\mathrm{exp}(\mathrm{i}\varphi^{\perp})\), resulting in the estimate of unpolarized target light: \[a_{T}^{u}\mathrm{exp}(\mathrm{i}\varphi_{T}^{u})=a^{\perp}\mathrm{exp}( \mathrm{i}\varphi^{\perp})-\overline{a}_{S}^{u}\mathrm{exp}(\mathrm{i} \overline{\varphi}_{S}^{u}). \tag{17}\] We can recover the target phase shift, corresponding to its depth, without backscattered distortion as \[\varphi_{T}^{u}=\mathrm{angle}\left(a_{T}^{u}\mathrm{exp}(\mathrm{i}\varphi_{T }^{u})\right), \tag{18}\] where \(\mathrm{angle}(\cdot)\) is the phase-extraction operator. ## 5 Results **Experiment details.** Figure 3 shows our experimental setup. We use an indirect ToF camera, Melexis VGA ToF sensor (MLX75027), of which modulation frequency is 80 MHz and the original spatial resolution of \(640\times 480\). We use film-based near-infrared (NIR) linear polarizers in front of the ToF illumination and the detector. The detector-side polarizer is installed on a rotation mount of Thorlabs K10CR1. Due to the rotation stage's occlusion, we use \(380\times 240\) center crops of the captured images. To test imaging through scattering media, we install an experimental setup consisting of a \(70\,\mathrm{cm}\times 38\,\mathrm{cm}\) dark chamber, in which we place scene objects. We generate artificial fog using an off-the-shelf fog generator. This stable scattering media allows apple-to-apple comparisons between different methods. For target scenes, we placed objects of diverse shape and appearance in the chamber and material examples include plastic, acrylic paint, wood, ceramic, or fabric. **3D imaging through fog.** Figure 6 shows that our polarimetric imaging allows us to see through dense fog and estimate an accurate depth map. 
Our method directly estimates the amplitude and phase distortion caused by unpolarized Figure 6: We estimate the amplitude and phase caused by unpolarized backscattered light due to fog. (a) For a challenging scene with dense fog, we show our estimated fog (b) amplitude \(\overline{a}_{S}^{u}\) and (c) phase \(\overline{\varphi}_{S}^{u}\). (d) While conventional depth estimation from cross-polarization setting fails due to the strong scattering effect, subtracting our estimated backscattered light from the cross-polarization measurements allows us to achieve (e) high-quality depth imaging. backscattered light. Figures 6(b) and (c) show that the estimated fog amplitude and phase match with our capture environment: amplitude is high (brighter in the figure) for far pixels and phase is small for left pixels. Note that our ToF illumination is located on the left side of the sensor, making the left pixels have smaller phases than the right pixels. **Depth accuracy.** We evaluate the effectiveness of our method by measuring depth-estimation error. We first obtain the ground-truth depth without generating any fog as shown in Figure 7. We capture six scenes each with three fog densities. Our quantitative results shown in Table 1 are averaged over the dataset. While conventional ToF imaging using parallel-polarization and cross-polarization configurations fails to handle the dense scattering environment, our method enables accurate depth reconstruction. **Impact of ambient light.** Our method captures an ambient light image without ToF illumination and filters out the ambient contribution to the four-phase correlation measurements like other conventional TOF cameras such as PMD sensors. Figure 8 shows reconstruction results. The RMSE of conventional ToF without ambient light is high as 11.62 cm. In contrast, the RMSEs of our results are 1.88 cm and 2.08 cm without and with ambient light. **Decaying factor.** We validate the amplitude decay model Equation (4) by capturing the energy of light through different distances, as shown in Figure 9. Intensity decay without fog follows the inverse-square law of emitted light. As our system has a linear polarizer in front of the light source, we measure intensity with/without a linear polarizer parallel to the sensor's polarizer in front of the camera. Our exponential decay model correctly represents the measured data. **Robustness against fog density.** We analyze the robustness of our method against fog density by capturing depth maps with different fog densities. We adjust the fog generator to realize three different fog densities: thin, medium, and thick. See Figure 10 for qualitative results and Table 1 for quantitative results. Our method achieves accurate depth imaging across diverse fog densities, whereas conventional cross-polarization ToF imaging degrades its performance for the medium fog and completely fails in dense fog. **Comparison.** We compare our method with state-of-the-art ToF imaging methods designed for foggy environments [10, 25, 40]. We capture ToF measurements at parallel- and cross-polarization configurations with multiple ToF modulation frequencies of 40MHz, 50MHz, 60MHz, 70MHz, and 80MHz, in order to provide inputs for the compared methods. 
We also compute the depth errors for all the tested scenes and methods by capturing ground-truth scenes without fog.

Table 1: Depth accuracy for diverse fog densities. We outperform conventional ToF imaging with cross-polarization for all tested environments.

| RMSE [cm] | Thin fog | Medium fog | Thick fog |
| --- | --- | --- | --- |
| Conventional ToF | 2.78 | 5.46 | 9.11 |
| Ours | **1.71** | **1.65** | **2.59** |

Figure 8: Impact of ambient illumination. Our method can estimate high-accuracy depth information through scattering media as it accounts for ambient illumination when calculating phase. Insets show depth errors. Figure 10: Our method reconstructs accurate depth for thin, medium, and thick fog environments. Conventional ToF imaging (with cross-polarization input) suffers from limited scene visibility. The estimated amplitude of backscattered unpolarized light becomes higher for denser fog, in line with the environment. Figure 7: Our method achieves accurate 3D imaging under scattering media compared to conventional ToF imaging with parallel-polarization and cross-polarization configurations. Figure 9: Measured and fitted intensity decay with respect to distance, without and with fog. We also measure intensity decay with and without a linear polarizer through the fog. Dots represent measurements and lines describe the fit with our decay model.

Figure 11 shows the estimated depth maps, and the corresponding depth errors are reported in Table 2. Fujimura et al. [10] and Zhang et al. [40] require a background region without any object in the captured scene, resulting in missing depth estimates for the background pixels. Moreover, Zhang et al. [40] assume that the DoP phasor is uniformly constant over the scene and that the reflected light is unpolarized, which often does not hold in practice [4]. In contrast, our method is free of these assumptions, resulting in accurate depth estimation under the scattering medium. Muraji et al. [25] use multiple ToF modulation frequencies to reduce the impact of scattering media on scene visibility. Unfortunately, they assume that the scene geometry is flat, which does not hold in practice, including in our tested scenes. Our method outperforms all the baseline methods both qualitatively and quantitatively. Compared to a structured-light-based method [27], our proposed method has the additional benefit that structured light suffers from large scattering volumes such as fog and requires capturing dozens of images, e.g., 25 captures in [27]. ## 6 Conclusion We have presented a polarimetric iToF imaging method that enables robust depth estimation even in the presence of scattering media. Our approach relies on a novel computational model that incorporates scattering-aware polarimetric phase measurements. Through experimentation, we showcase accurate 3D imaging in dense fog using a polarimetric iToF camera, surpassing the capabilities of other iToF methods. In the future, we aim to leverage micro-polarizer pixel arrays in iToF imaging for dynamic scene capture and to devise an efficient denoising algorithm for the reduced photon counts. ## Acknowledgements Min H.
Kim acknowledges the MSIT/IITP of Korea (RS-2022-00155620, 2022-0-00058, and 2017-0-00072), SK Hynix, and the Samsung Research Funding Center (SRFC-IT2001-04) for developing partial 3D imaging algorithms, in addition to the support of the NIRCH of Korea (2021A02P02-001), Samsung Electronics, and Microsoft Research Asia. Seung-Hwan Baek acknowledges the support from the Samsung Research Funding Center (SRFC-IT1801-52), Samsung Electronics, and Korea NRF grants (2022R1A6A1A03052954, RS-2023-00211658).

Table 2: We evaluate the depth-estimation accuracy by computing the relative depth error [12] and the RMSE values. Our method outperforms the state-of-the-art ToF imaging methods [10, 25, 40].

| Method | Muraji et al. | Fujimura et al. | Zhang et al. | Ours |
| --- | --- | --- | --- | --- |
| Rel. error | 0.087 | 0.658 | 0.140 | **0.021** |
| Std. dev. | 5.10 | 31.92 | 7.52 | **1.10** |
| RMSE [cm] | 5.73 | 18.64 | 8.17 | **1.31** |

Figure 11: We compare our method to state-of-the-art scattering-aware ToF methods. Our method outperforms the previous approaches by a large margin thanks to our polarimetric-ToF image formation.
Indirect time-of-flight (iToF) imaging allows us to capture dense depth information at a low cost. However, in the presence of scattering media, iToF imaging suffers from multipath interference (MPI) artifacts that degrade depth accuracy. For instance, iToF cameras cannot measure depth accurately through fog, because the ToF active illumination scatters back to the sensor before reaching the farther target surface. In this work, we propose a polarimetric iToF imaging method that can capture depth information robustly through scattering media. Our observations on the principle of indirect ToF imaging and the polarization of light allow us to formulate a computational model of scattering-aware polarimetric phase measurements, which enables us to correct MPI errors. We first devise a scattering-aware polarimetric iToF model that can estimate the phase of unpolarized backscattered light.
2309.06190
Spreading speeds of a nonlocal diffusion model with free boundaries in the time almost periodic media
In this paper, we mainly investigate the spreading dynamics of a nonlocal diffusion KPP model with free boundaries which is firstly explored in time almost periodic media. As the spreading occurs, the long-run dynamics are obtained. Especially, when the threshold condition for the kernel function is satisfied, applying the novel positive time almost periodic function, we accurately express the unique asymptotic spreading speed of the free boundary problem.
Chengcheng Cheng, Rong Yuan
2023-09-12T12:56:44
http://arxiv.org/abs/2309.06190v2
Spreading speeds of a nonlocal diffusion model with free boundaries in the time almost periodic media ###### Abstract In this paper, we mainly investigate the spreading dynamics of a nonlocal diffusion KPP model with free boundaries which is firstly explored in time almost periodic media. As the spreading occurs, the long-run dynamics are obtained. Especially, when the threshold condition for the kernel function is satisfied, applying the novel positive time almost periodic function, we accurately express the unique asymptotic spreading speed of the free boundary problem. keywords: Nonlocal diffusion, Free boundary, Time almost periodic media, Asymptotic spreading speed, Time almost periodic solution Msc: [2010] 35B15, 35R09, 35R35, 35B40 + Footnote †: journal: ## 1 Introduction Nonlocal diffusion equations take advantage of modeling the long-distance movements of species and the dense dispersal of populations [1; 2; 3]. Considering the environmental change, seasonal succession and resource distribution, people often choose periodic models to describe environmental parameters [4; 5; 6; 7]. However, natural fluctuations are difficult to be periodic. Almost periodicity is more likely to accurately describe natural recurrent changes and the almost periodic model can provide more biological insights to understand the invasion of populations and the propagation of epidemic diseases [8; 9]. It is well known that the spreading speed is an important indicator describing the scale of infectious disease transmission. The spreading speeds of nonlocal diffusion equations in the nonhomogeneous media have attracted increasingly more interest and attention in recent years [10; 11]. However, the spreading speed of the nonlocal KPP model with free boundaries in the time almost periodic environment has not been considered, which is a worthwhile problem to study. In this paper, we mainly aim to investigate the long-time propagation dynamics and spreading speed of the following almost periodic nonlocal diffusion model \[\left\{\begin{array}{ll}u_{t}=d\int_{g(t)}^{h(t)}\kappa\left(x-y\right)u\left(t, y\right)\mathrm{d}y-du+uf(t,x,u),&t>0,\;g\left(t\right)<x<h\left(t\right),\\ u\left(t,h(t)\right)=u(t,g(t))=0,&t>0,\\ h^{\prime}\left(t\right)=\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}\kappa \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t>0,\\ g^{\prime}\left(t\right)=-\mu\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}\kappa \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t>0,\\ u\left(0,x\right)=u_{0}\left(x\right),&x\in[-h_{0},h_{0}],\\ h\left(0\right)=h_{0},g\left(0\right)=-h_{0},\end{array}\right. \tag{1.1}\] where the initial function \(u_{0}(x)\) satisfies \[u_{0}(x)\in C([-h_{0},h_{0}]),\;u_{0}\left(-h_{0}\right)=u_{0}\left(h_{0} \right)=0,\;u_{0}(x)>0,\;x\in\left(-h_{0},h_{0}\right). \tag{1.2}\] Here \(u(t,x)\) represents the population density at time \(t\) on location \(x\). We assume that the density of species is \(0\) out of \([g(t),h(t)]\), where \(g(t)\) and \(h(t)\) are the leftward and rightward free boundaries to be determined later, respectively. The dispersal kernel \(\kappa(x-y)\) represents the probability distribution that jumps from location \(y\) to \(x\), then \(\kappa(x-y)u(t,y)\) is the rate at which the species arrive at \(x\) from position \(y\). And \(\int_{g(t)}^{h(t)}\kappa(x-y)u(t,y)\mathrm{d}y\) is the rate at which the species reach \(x\) from any other positions. 
Correspondingly, \(\int_{-\infty}^{\infty}\kappa(x-y)u(t,x)\mathrm{d}y\) is the rate at which the species leave \(x\) to go to any other positions. Let \(d\) be a positive constant as the nonlocal diffusion coefficient, then \(d\int_{g(t)}^{h(t)}\kappa(x-y)u(t,y)\mathrm{d}y-\int_{-\infty}^{\infty}\kappa (x-y)u(t,x)\mathrm{d}y\) can be seen as the nonlocal diffusion term. Moreover, similar to [12], suppose that the expanding rate of moving boundary is proportional to the outward flux at the boundary, that is, \[g^{\prime}\left(t\right)=-\mu\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}\kappa \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,\;t>0, \tag{1.3}\] and \[h^{\prime}\left(t\right)=\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}\kappa \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,\;t>0, \tag{1.4}\] where \(\mu>0\) describes the expanding capability. Moreover, we assume that the kernel function \(\kappa(x):\mathbb{R}\rightarrow\mathbb{R}\) satisfies the following properties. \(\mathbf{(H1)}\) (1) \(\kappa(\cdot)\in C^{1}(\mathbb{R},[0,\infty))\) is symmetric, \(\kappa(0)>0\), and \(\int_{\mathbb{R}}\kappa(x)dx=1\). \(\left(\ref{eq:1}\right)\) There are positive constants \(\alpha\) and \(x_{0}\) such that \(\kappa(x)\leq e^{-\alpha\left|x\right|}\) and \(\left|\nabla\kappa\right|\leq e^{-\alpha\left|x\right|}\) for \(\left|x\right|\geq x_{0}\). The growth function \(f(t,x,u):\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{+}\longrightarrow\mathbb{R}\) satisfies that for \(D\subset\mathbb{R}\) is smooth bounded or \(D=\mathbb{R}\), \(\mathbf{(H2)}\)\(f(t,x,u)\) is almost periodic in \(t\) uniformly for \(x\in\bar{D}\) and \(u\) in bounded sets of \(\mathbb{R}\). When \(D=\mathbb{R}\), \(f(t,x,u)\) is also almost periodic in \(x\) uniformly for \(t\in\mathbb{R}\) and \(u\) in bounded sets. \(\mathbf{(H2^{*})}\) (1) \(f(t,x,u)\) is \(C^{1}\) in \(u\); \(f(t,x,u)\) and \(f_{u}(t,x,u)\) are uniformly continuous in \((t,x,u)\in\mathbb{R}\times\bar{D}\times E\) for any bounded set \(E\subset\mathbb{R};f(t,x,u)\) is uniformly bounded in \((t,x)\in\mathbb{R}\times D\) for \(u\) in bounded sets. \(\left(\ref{eq:2}\right)\) There is \(M\gg 1\) such that \(f(t,x,u)<0\) for \((t,x)\in\mathbb{R}\times\bar{D}\) and \(u\geq M\); \(f_{u}(t,x,u)<0\) for \((t,x,u)\in\mathbb{R}\times\bar{D}\times[0,\infty)\). Let \(a(t,x)=f(t,x,0)\) satisfy \(\mathbf{(H3)}\)\(a(t,x)\) is bounded and uniformly continuous in \((t,x)\in\mathbb{R}\times\bar{D}\), and is almost periodic in \(t\) uniformly for \(x\in\bar{D}\). When \(D=\mathbb{R}\), \(a(t,x)\) is also almost periodic in \(x\) uniformly for \(t\in\mathbb{R}\). Additionally, for the principal Lyapunov exponent as the threshold condition in studying the spreading and vanishing of (1.1), we assume that \(\mathbf{(H4)}\) There exists a \(L^{*}>0\) such that \(\inf\limits_{L\geq L_{*}}\lambda_{\mathscr{P}\mathscr{L}}(a,L)>0,\) where \(\lambda_{\mathscr{P}\mathscr{L}}(a,L)\) is the principal Lyapunov exponent of the following equation \[\begin{cases}u_{t}=d\int_{-L}^{L}\kappa(y-x)u(t,y)\mathrm{d}y-du(t,x)+a(t,x)u(t,x),&x\in(-L,L),\\ u(t,-L)=u(t,L)=0.\end{cases} \tag{1.5}\] In this article, unless otherwise specified, \(u(t,x;u_{0})\) always denotes the solution of (1.1) with \(u(0,x;u_{0})=u_{0}(x).\) Sometimes, we write \(u(t,x;u_{0},a)\) to emphasize the solution dependent on \(a(t,x)\). 
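For orientation, a minimal concrete choice of data satisfying \(\mathbf{(H1)}\)-\(\mathbf{(H3)}\) (purely illustrative and not taken from the paper) is the Gaussian kernel together with a logistic-type nonlinearity, \[\kappa(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2},\qquad f(t,x,u)=a(t)-u,\qquad a(t)=1+\frac{1}{4}\left(\sin t+\sin\sqrt{2}\,t\right).\] Indeed, \(\kappa\in C^{1}(\mathbb{R},[0,\infty))\) is symmetric with \(\kappa(0)>0\) and \(\int_{\mathbb{R}}\kappa(x)\mathrm{d}x=1\), and both \(\kappa\) and \(\kappa^{\prime}\) eventually decay faster than \(e^{-|x|}\), so \(\mathbf{(H1)}\) holds; \(f\) is \(C^{1}\) in \(u\) with \(f_{u}\equiv-1<0\) and \(f(t,x,u)<0\) for \(u\geq M=2\), so \(\mathbf{(H2^{*})}\) holds; and \(a(t,x)=f(t,x,0)=a(t)\) is bounded, uniformly continuous and genuinely time almost periodic (the frequencies \(1\) and \(\sqrt{2}\) are rationally independent), and trivially almost periodic in \(x\), so \(\mathbf{(H2)}\) and \(\mathbf{(H3)}\) hold. Whether \(\mathbf{(H4)}\) holds depends on the principal Lyapunov exponent of (1.5); since \(a\) is spatially homogeneous with positive mean one expects \(\lambda_{\mathscr{P}\mathscr{L}}(a,L)>0\) for all large \(L\), but we do not verify this here.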
Assume that \(\mathbf{(H1)}-\mathbf{(H2^{*})}\) hold, for any given \(u_{0}\) satisfying (1.2), according to Theorem 1.1 [12], the problem (1.1) admits a unique solution \(u(t,x;u_{0},g,h)\) for all \(t>0\) with \[u(0,x;u_{0})=u_{0}(x)\text{ and }0\leq u(t,x;u_{0})\leq\max\left\{|u_{0}|_{L^{ \infty}},M\right\}. \tag{1.6}\] where \(M\) comes from \(\mathbf{(H2^{*})}\). Moreover, \(g(t;u_{0},h_{0})\) is strictly decreasing in \(t>0,\) and \(h(t;u_{0},h_{0})\) is strictly increasing in \(t>0.\) Thus, there are \(g_{\infty}\in[-\infty,0)\) and \(h_{\infty}\in(0,\infty]\) such that \(\lim\limits_{t\to\infty}g(t;u_{0},h_{0})=g_{\infty}\) and \(\lim\limits_{t\to\infty}h(t;u_{0},h_{0})=h_{\infty}.\) Simultaneously, according to Theorem 1.2 [13], we give the spreading-vanishing dichotomy regimes for (1.1). **Theorem 1.1** (Spreading-vanishing dichotomy).: _Assume that \(\mathbf{(H1)}-\mathbf{(H4)}\) hold. For any \(h_{0}>0\) and \(u_{0}\) satisfying (1.2), the either of the followings must hold:_ \((1)\) _Vanishing:_ \(h_{\infty}-g_{\infty}\leq 2L^{*}\)_, and_ \(\lim\limits_{t\to\infty}u(t,x;u_{0})=0\) _uniformly for_ \(x\in[g_{\infty},h_{\infty}];\)__ \((2)\) _Spreading:_ \(h_{\infty}-g_{\infty}=\infty\)_, and_ \(\lim\limits_{t\to\infty}\left[u(t,x;u_{0})-u^{*}(t,x)\right]=0\) _uniformly for_ \(x\) _in any compact subset of_ \(\mathbb{R},\) _where_ \(u^{*}(t,x)\) _is the unique time almost periodic solution of the following equation_ \[u_{t}=d\int_{\mathbb{R}}\kappa(y-x)u(t,y)\mathrm{d}y-du(t,x)+uf(t,x,u),\ x\in \mathbb{R}. \tag{1.7}\] ## 2 Main results In this section, we devote ourselves to exploring the spreading speed of the leftward and rightward front for (1.1). For the homogeneous nonlocal diffusion model with free boundaries, Du, Li and Zhou investigated the spreading speed of the moving boundaries of the nonlocal diffusion model by applying the semi-wave and traveling wave solutions in [14]. However, for the difficulties caused by the time almost periodicity, the semi-wave solution of the time almost periodic nonlocal diffusion equations could be a rough nut to crack. Thus, the subtle methods developed by them could not be directly used in the problem (1.1). We had to look for and develop new methods to solve this problem. Firstly, consider an assumption about the kernel function which is called satisfying the "thin-tailed" condition if the following condition holds. \[\mathbf{(H)}\qquad\qquad\qquad\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y) \mathrm{d}y\mathrm{d}x=\int_{0}^{\infty}x\kappa(x)\mathrm{d}x<\infty. \tag{2.1}\] In the case of \(f(t,x,u)\equiv f(t,u).\) Considering the following free boundary problem \[\left\{\begin{array}{ll}u_{t}=d\int_{g(t)}^{h(t)}\kappa\left(x-y\right)u\left(t,y\right)\mathrm{d}y-d\ u+uf(t,u),&t>0,\ g(t)<x<h\left(t\right),\\ u\left(t,h(t)\right)=0,u(t,g(t))=0,&t>0,\\ h^{\prime}\left(t\right)=\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}\kappa \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t>0,\\ g^{\prime}\left(t\right)=-\mu\int_{g(t)}^{0}\int_{-\infty}^{0}\kappa\left(x-y \right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t>0,\\ u\left(0,x\right)=u_{0}\left(x\right),\ h\left(0\right)=h_{0},\ g(0)=-h_{0},& x\in[-h_{0},h_{0}],\end{array}\right. \tag{2.2}\] and the fixed boundary problem \[u_{t}=d\int_{\mathbb{R}}\kappa(x-y)u(t,y)\mathrm{d}y-d\ u(t,x)+uf(t,u),\ x\in \mathbb{R}. 
\tag{2.3}\] by Theorem 4.1 [15] and Theorem 1.3 [16], there is a unique positive time almost periodic solution \(\hat{u}^{*}(t,x)\) of (2.3) with \[\hat{u}^{*}(t,x)\equiv\hat{u}^{*}(t), \tag{2.4}\] where \(\hat{u}^{*}(t)\) is the positive time almost periodic solution of the following problem \[u_{t}=uf(t,u). \tag{2.5}\] Since \(\hat{u}^{*}(t)\) is almost periodic in \(t\), by Theorem 3.1 [17], there is \(\hat{u}^{*}\) such that \[\hat{u}^{*}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\hat{u}^{*}(t)\mathrm{d}t. \tag{2.6}\] Denote \[c^{*}:=\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)\hat{u}^{*}\mathrm{d} y\mathrm{d}x.\] Assume that **(H)** holds, one can see that \(c^{*}\) is well defined. Now we mainly give an explicit estimate of the asymptotic spreading speed, that is, **Theorem 2.1** (Finite spreading).: _Assume that \(\mathbf{(H1)-(H4)}\) hold. When the spreading occurs, if \(\mathbf{(H)}\) holds, the spreading speeds of the leftward front and the rightward front satisfy that_ \[\lim_{t\to\infty}\frac{h(t)}{t}=\lim_{t\to\infty}\frac{-g(t)}{t}=c^{*}. \tag{2.7}\] **Theorem 2.2**.: _Under the conditions of Theorem 2.1, for any \(\varepsilon\in(0,c^{*})\), it follows_ \[\lim_{t\to\infty}\max_{|x|\leq(c^{*}-\varepsilon)\varepsilon}|u(t,x)-\hat{u}^ {*}(t)|=0. \tag{2.8}\] **Theorem 2.3** (Accelerated spreading).: _Assume that \(\mathbf{(H1)-(H4)}\) hold. When the spreading occurs, if \(\mathbf{(H)}\) does not hold, it follows that_ \[\lim_{t\to\infty}\frac{h(t)}{t}=\lim_{t\to\infty}\frac{-g(t)}{t}=\infty. \tag{2.9}\] ## 3 Proofs The proof of Theorem 2.1.: For simplify, denote \(u(t,x)=u(t,x;u_{0})\). As \(\hat{u}(t,x):=u(t,-x)\) satisfies (2.2) with free boundaries \[x=\hat{h}(t):=-g(t),\ x=\hat{g}(t):=-h(t)\] and initial function \(\hat{u}_{0}(x):=u_{0}(-x)\), we only need to prove the case of \(h(t)\). The spreading speed for \(g(t)\) can be directly obtained. Now we complete the proof of this theorem in two steps. \(Step\;1:\) Firstly, we prove that \(\limsup_{t\to\infty}\dfrac{h(t)}{t}\leq c^{*}.\) For \(0<\epsilon\ll 1\), let \(\overline{u}_{\epsilon}(t)\) and \(\underline{u}_{\epsilon}(t)\) be the positive time almost periodic solutions of the following problem \[u_{t}=u(f(t,u)+\epsilon) \tag{3.1}\] and \[u_{t}=u(f(t,u)-\epsilon), \tag{3.2}\] respectively. Thus, \[\lim_{\epsilon\to 0}\underline{u}_{\epsilon}(t)=\lim_{\epsilon\to 0} \overline{u}_{\epsilon}(t)=\hat{u}^{*}(t) \tag{3.3}\] uniformly in \(t\in\mathbb{R}.\) Applying a comparison argument, we can see that \[\underline{u}_{\epsilon}(t)\leq\hat{u}^{*}(t)\leq\overline{u}_{\epsilon}(t). 
\tag{3.4}\] Thus, according to Theorem 1.1, there exists \(T>0\) such that \(u(t+T,x)\leq\overline{u}_{\epsilon}(t+T),\;\text{for}\;t\geq 0,\;x\in[-h(t+T),h(t+T)].\) Let \[\tilde{u}(t,x)=u(t+T,x),\;\tilde{h}(t)=h(t+T),\;\tilde{g}(t)=g(t+T),\] then \(\tilde{u}(t,x)\) satisfies \[\left\{\begin{array}{ll}\tilde{u}_{t}=d\int_{\tilde{g}(t)}^{\tilde{h}(t)} \kappa\left(x-y\right)\tilde{u}\left(t,y\right)\mathrm{d}y-d\tilde{u}+\tilde{ u}f(t+T,\tilde{u}),&t>0,\;\tilde{g}\left(t\right)<x<\tilde{h}\left(t\right),\\ \tilde{u}(0,x)=u(T,x),&t>0,\;\tilde{g}(0)<x<\tilde{h}(0),\\ \tilde{h}^{\prime}\left(t\right)=\mu\int_{\tilde{g}(t)}^{\tilde{h}(t)}\int_{ \tilde{h}(t)}^{\infty}\kappa\left(x-y\right)\tilde{u}\left(t,x\right)\mathrm{ d}y\mathrm{d}x,&t>0,\\ \tilde{g}^{\prime}\left(t\right)=-\mu\int_{\tilde{g}(t)}^{\tilde{h}(t)}\int_{ -\infty}^{\tilde{g}(t)}\kappa\left(x-y\right)\tilde{u}\left(t,x\right) \mathrm{d}y\mathrm{d}x,&t>0,\\ \tilde{u}(t,\tilde{h}(t))=0,\;\tilde{u}(t,\tilde{g}(t))=0,&t>0.\end{array}\right. \tag{3.5}\] Let \(u^{*}(t)\) be the solution to the following initial problem \[\begin{cases}u_{t}=u(f(t,u)+\epsilon)),&t>0,\\ u(T)=\max\{\overline{u}_{\epsilon}(T),\|\tilde{u}(0,\cdot)\|_{\infty}\}, \end{cases} \tag{3.6}\] then \(u^{*}(t)\geq\overline{u}_{\epsilon}(t),\text{for}\;t\geq T,\) and \[\lim_{t\to\infty}(u^{*}(t)-\overline{u}_{\epsilon}(t))=0.\] Moreover, by Theorem 1.1, combining (2.4) with (3.3), we have \[\tilde{u}(t,x)\leq u^{*}(t+T),\;\text{for}\;t\geq 0,-\tilde{h}(t)<x<\tilde{h} (t),\] then \[\tilde{u}(t,x)\leq\dfrac{\overline{u}_{\epsilon}(t+T)}{1-\epsilon},\;\text{ for}\;t\gg 1,-\tilde{h}(t)<x<\tilde{h}(t).\] Let \(u_{\epsilon}(t,x)\) be the unique positive almost periodic solution of the following problem \[u_{t}=d\int_{\mathbb{R}}\kappa(x-y)u(t,y)\mathrm{d}y-du(t,x)+u(f(t,u)+ \epsilon),\;x\in\mathbb{R}, \tag{3.7}\] by the uniqueness of almost periodic solution for (3.7) explained in Theorem 1.4 [16], we have \[u_{\epsilon}(t,x)\equiv u_{\epsilon}(t)=\overline{u}_{\epsilon}(t).\] Thus, there is \(T^{*}>T\gg 1\) such that \[u_{\epsilon}(t,x)\geq(1-\epsilon)\overline{u}_{\epsilon}(t),\text{ for }t>T^{*},-\tilde{h}(t)<x<\tilde{h}(t).\] Therefore, \[\tilde{u}(t,x)\leq\frac{\overline{u}_{\epsilon}(t+T)}{1-\epsilon}\leq\frac{u _{\epsilon}(t+T,x)}{(1-\epsilon)^{2}},\text{ for }t>T^{*},-\tilde{h}(t)<x<\tilde{h}(t).\] For fixed \(L_{0}>0\), take \[\begin{split}&\overline{h}(t)=(1-\epsilon)^{-2}\mu\int_{0}^{t} \int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\epsilon}(s,x)\mathrm{d}y \mathrm{d}x\mathrm{d}s+\tilde{h}(T^{*})+L_{0},\text{ for }t\geq 0,\\ &\overline{u}(t,x)=(1-\epsilon)^{-2}u_{\epsilon}(t,x),\text{ for }t\geq 0,\ x\in\mathbb{R}.\end{split} \tag{3.8}\] According to assumption \((\mathbf{H2^{*}})\) for \(f(t,u)\), direct computations give \[\begin{split}&\overline{u}_{t}=(1-\epsilon)^{-2}u_{\epsilon,t}=(1- \epsilon)^{-2}\left[d\int_{\mathbb{R}}\kappa(x-y)u_{\epsilon}(t,y)\mathrm{d}y -d\ u_{\epsilon}(t,x)+u_{\epsilon}f(t,u_{\epsilon})\right]\\ &\geq d\int_{-\overline{h}(t)}^{\overline{h}(t)}\kappa(x-y) \overline{u}(t,y)\mathrm{d}y-d\ \overline{u}(t,x)+\overline{u}f(t,u_{\epsilon})\\ &\geq d\int_{-\overline{h}(t)}^{\overline{h}(t)}\kappa(x-y) \overline{u}(t,y)\mathrm{d}y-d\ \overline{u}(t,x)+\overline{u}f(t,\overline{u}).\end{split} \tag{3.9}\] Meanwhile, \[\begin{split}\overline{h}^{\prime}(t)&=\mu\int_{- \infty}^{0}\int_{0}^{\infty}\kappa(x-y)\overline{u}(t,x)\mathrm{d}y\mathrm{d }x\\ &=\mu\int_{-\infty}^{\overline{h}(t)}\int_{\overline{h}(t)}^{ 
\infty}\kappa(x-y)\overline{u}(t,x)\mathrm{d}y\mathrm{d}x\\ &\geq\mu\int_{-\overline{h}(t)}^{\overline{h}(t)}\int_{\overline {h}(t)}^{\infty}\kappa(x-y)\overline{u}(t,x)\mathrm{d}y\mathrm{d}x.\end{split} \tag{3.10}\] Moreover, \[\overline{u}(T,x)=(1-\epsilon)^{-2}u_{\epsilon}(T,x)\geq\tilde{u}(0,x),\text{ for }x\in(-\tilde{h}(T),\tilde{h}(T))\] with \(\overline{h}(T^{*})\geq\tilde{h}(T^{*})\) and \(\overline{u}(t,\overline{h}(t))\geq 0,\text{ for }t\geq 0.\) Applying the Comparison principle, one can see that \[\overline{u}(t+T,x)\geq\tilde{u}(t,x),\ \overline{h}(t)\geq\tilde{h}(t),\text{ for }t \geq T^{*},-\tilde{h}(t)<x<\tilde{h}(t), \tag{3.11}\] which implies that \[\begin{split}\limsup_{t\to\infty}\frac{h(t)}{t}&= \limsup_{t\to\infty}\frac{\tilde{h}(t-T)}{t}\leq\limsup_{t\to\infty}\frac{ \overline{h}(t-T)}{t}\\ &=\lim_{t\to\infty}\frac{(1-\epsilon)^{-2}\mu\int_{0}^{t-T}\int_{ -\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\epsilon}(s,x)\mathrm{d}y\mathrm{d }x\mathrm{d}s+\tilde{h}(T^{*})+L_{0}}{t}\\ &=\lim_{t\to\infty}\frac{(1-\epsilon)^{-2}\mu\int_{0}^{t-T}\int_ {-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\epsilon}(s,x)\mathrm{d}y\mathrm{d }x\mathrm{d}s}{t}.\end{split}\] Since \(u_{\epsilon}(t,x)\rightarrow\hat{u}^{*}(t,x)\) as \(\epsilon\to 0\) uniformly in \(t\in\mathbb{R}\) and \(x\) in any compact subsets of \(\mathbb{R}\), given \(\mathbf{(H)}\) and (2.6), it follows that \[\begin{split}\limsup_{t\rightarrow\infty}\frac{h(t)}{t}& \leq\limsup_{t\rightarrow\infty}\frac{\overline{h}(t-T)}{t}= \lim_{t\rightarrow\infty}\frac{\mu\int_{0}^{t-T}\int_{-\infty}^{0}\int_{0}^{ \infty}\kappa(x-y)\hat{u}^{*}(s,x)\mathrm{d}y\mathrm{d}x\mathrm{d}s}{t}\\ &=\lim_{t\rightarrow\infty}\frac{\mu\int_{0}^{t-T}\int_{-\infty}^ {0}\int_{0}^{\infty}\kappa(x-y)\hat{u}^{*}(s)\mathrm{d}y\mathrm{d}x\mathrm{d}s }{t}\\ &=\lim_{t\rightarrow\infty}\frac{\mu\int_{0}^{t}\int_{-\infty}^{0 }\int_{0}^{\infty}\kappa(x-y)\hat{u}^{*}(s)\mathrm{d}y\mathrm{d}x\mathrm{d}s }{t}\\ &=\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)\hat{u}^{*} \mathrm{d}y\mathrm{d}x=c^{*}.\end{split} \tag{3.12}\] \(Step\) 2 : We will prove \(\liminf_{t\rightarrow\infty}\frac{h(t)}{t}\geq c^{*}.\) Note that there exists a unique positive almost periodic solution \(u_{*}(t,x)\equiv u_{*}(t)\) of the problem \[u_{t}=d\int_{\mathbb{R}}\kappa\left(x-y\right)u\left(t,y\right)\mathrm{d}y-d \ u+uf(t+T,u),\ x\in\mathbb{R}.\] By Theorem 1.1, it follows that \[\lim_{t\rightarrow\infty}\left[\tilde{u}(t,x)-u_{*}(t,x)\right]=0\] locally uniformly in \(x\in\mathbb{R}\). In view of (3.2), there is \(T_{*}\gg 1\) such that \[u_{*}(t,x)\geq\underline{u}_{\epsilon}(t+T),\ \text{for}\ t\geq T_{*},\] locally uniformly for \(x\in\mathbb{R}\), which implies that \[\liminf_{t\rightarrow\infty}\left[u_{*}(t,x)-\underline{u}_{\epsilon}(t+T) \right]\geq 0\] locally uniformly for \(x\in\mathbb{R}\). Thus, \[\liminf_{t\rightarrow\infty}\left[\tilde{u}(t,x)-\underline{u}_{\epsilon}(t+ T)\right]\geq 0\] locally uniformly for \(x\in\mathbb{R}\). 
For fixed \(L\gg h(T)\), let \(u_{\epsilon^{-}}(t,x)\) be the unique time almost periodic solution of the following problem \[\begin{cases}u_{t}=d\int_{-L}^{L}\kappa(x-y)u(t,y)\mathrm{d}y-d\ u(t,x)+u(f(t, u)-\epsilon),&t>0,\ x\in(-L,L),\\ u(t,x)=0,&t>0,\ |x|\geq L.\end{cases} \tag{3.13}\] Applying a comparison argument, \(u_{\epsilon^{-}}(t,x)\leq\underline{u}_{\epsilon}(t)\), uniformly in \(t>0\) and \(x\) in any compact subsets of \(\mathbb{R}.\) And \(u_{\epsilon^{-}}(t,x)\rightarrow\underline{u}_{\epsilon}(t)\) as \(L\rightarrow\infty\) uniformly in \(t>0\) and \(x\) in any compact subsets of \(\mathbb{R}.\) Take \(\tilde{T}>T\) such that \(h(\tilde{T})>L\). Define \[\underline{h}(t)=(1-\epsilon)^{2}(1-2\epsilon)\mu\int_{\tilde{T}+T}^{t}\int_ {-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\epsilon^{-}}(s,x)\mathrm{d}y \mathrm{d}x\mathrm{d}s+\tilde{h}(\tilde{T}),\ \ \underline{u}(t,x)=(1-\epsilon)^{2}u_{\epsilon^{-}}(t,x),\] for \(t\geq\tilde{T}+T,\ x\in\mathbb{R}.\) Then \[\tilde{u}(t,0)\geq\underline{u}(t+T,0),\ \text{for}\ t\geq\tilde{T}\text{and}\ \tilde{u}(\tilde{T},x)\geq\underline{u}(\tilde{T}+T,x),\ \text{for}\ -\underline{h}(\tilde{T}+T)\leq x\leq \underline{h}(\tilde{T}+T).\] Thus, we have \[\underline{h}^{\prime}(t) =(1-2\epsilon)\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y) \underline{u}(t,x)\mathrm{d}y\mathrm{d}x\] \[=(1-2\epsilon)\mu\int_{-\underline{h}(t)}^{\underline{h}(t)}\int_ {\underline{h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,x)\mathrm{d}y\mathrm{d}x +\mu\int_{-\infty}^{-\underline{h}(t)}\int_{\underline{h}(t)}^{\infty}\kappa(x- y)\underline{u}(t,x)\mathrm{d}y\mathrm{d}x\] \[=\mu\int_{-\underline{h}(t)}^{\underline{h}(t)}\int_{\underline{ h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,x)\mathrm{d}y\mathrm{d}x+\mu\int_{- \infty}^{-\underline{h}(t)}\int_{\underline{h}(t)}^{\infty}\kappa(x-y) \underline{u}(t,x)\mathrm{d}y\mathrm{d}x\] \[-2\epsilon\mu\int_{-\underline{h}(t)}^{\underline{h}(t)}\int_{ \underline{h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,x)\mathrm{d}y\mathrm{d}x\] \[:=\mu\int_{-\underline{h}(t)}^{\underline{h}(t)}\int_{ \underline{h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,x)\mathrm{d}y\mathrm{d}x +\mathscr{I}(\epsilon,\mu,h(t))\text{ for }t\geq\tilde{T}+T.\] Now we only need to prove \(\mathscr{I}(\epsilon,\mu,h(t))\leq 0.\) In fact, given \((\mathbf{H})\), one can see that if \(t\gg\tilde{T}+T\), it follows that \[\mu\int_{-\infty}^{-\underline{h}(t)}\int_{\underline{h}(t)}^{\infty}\kappa(x- y)\underline{u}(t,x)\mathrm{d}y\mathrm{d}x\leq 2\epsilon\mu\int_{-\underline{h}(t)}^{ \underline{h}(t)}\int_{\underline{h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,x )\mathrm{d}y\mathrm{d}x,\] then \(\mathscr{I}(\epsilon,\mu,h(t))\leq 0,\) which implies \[\underline{h}^{\prime}(t)\leq\mu\int_{-\underline{h}(t)}^{\underline{h}(t)} \int_{\underline{h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,x)\mathrm{d}y \mathrm{d}x.\] Moreover, in view of (3.13), it follows that \[\underline{u}(t,\pm\underline{h}(t))\to 0,\text{ as }t\gg\tilde{T}+T.\] According to \((\mathbf{H1})\), we have \[\int_{\underline{h}(t)}^{\infty}\kappa(x-y)\mathrm{d}y+\int_{-\infty}^{- \underline{h}(t)}\kappa(x-y)\mathrm{d}y\leq\epsilon,\text{ for }t\gg\tilde{T}+T.\] Direct calculations show \[\underline{u}_{t} =(1-\epsilon)^{2}u_{\epsilon^{-},t}=(1-\epsilon)^{2}\left[d\int _{\mathbb{R}}\kappa(x-y)u_{\epsilon^{-}}(t,y)\mathrm{d}y-d\ u_{\epsilon^{-}}(t, x)+u_{\epsilon^{-}}(f(t,u_{\epsilon^{-}})-\epsilon)\right]\] \[\leq d\int_{-\underline{h}(t)}^{\underline{h}(t)}\kappa(x-y) \underline{u}(t,y)\mathrm{d}y-d\ 
u(t,x)+\underline{u}f(t,u_{\epsilon^{-}})+ \int_{\underline{h}(t)}^{\infty}\kappa(x-y)\underline{u}(t,y)\mathrm{d}y+\int _{-\infty}^{-\underline{h}(t)}\kappa(x-y)\underline{u}(t,y)\mathrm{d}y- \epsilon\ \underline{u}(t,x)\] \[\leq d\int_{-\underline{h}(t)}^{\underline{h}(t)}\kappa(x-y) \underline{u}(t,y)\mathrm{d}y-d\ \underline{u}(t,x)+\underline{u}f(t,\underline{u}).\] Hence, using the Comparison principle to conclude that \[\underline{h}(t+T)\leq\tilde{h}(t),\text{ for }t\gg\tilde{T},\text{ and } \underline{u}(t+T,x)\leq\tilde{u}(t,x),\text{ for }t\gg\tilde{T},-\underline{h}(t+T)<x<\underline{h}(t+T). \tag{3.14}\] It follows that \[\liminf_{t\to\infty}\frac{h(t)}{t} =\liminf_{t\to\infty}\frac{\tilde{h}(t-T)}{t}\geq\liminf_{t\to \infty}\frac{\underline{h}(t)}{t}\] \[=\liminf_{t\to\infty}\frac{(1-\epsilon)^{2}(1-2\epsilon)\mu\int_{ \tilde{T}+T}^{t}\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\epsilon^{-}} (s,x)\mathrm{d}y\mathrm{d}x\mathrm{d}s+\tilde{h}(\tilde{T})}{t}\] \[=(1-\epsilon)^{2}(1-2\epsilon)\lim_{t\to\infty}\frac{\mu\int_{0 }^{t}\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\epsilon^{-}}(s,x) \mathrm{d}y\mathrm{d}x\mathrm{d}s}{t}.\] For \(h_{\infty}=\infty\), there is \(t^{*}>\bar{T}+T>0\) such that \(h(t)-h_{\infty}<\epsilon,\text{ as }t\geq t^{*}.\) Let \(L=h(t^{*})\), note that \(u_{\epsilon^{-}}(t,x)\to\hat{u}^{*}(t,x)\) as \(\epsilon\to 0\) uniformly in \(t>0\) and \(x\in(\,-L,L)\). Thus, \[\liminf_{t\to\infty}\frac{h(t)}{t} \geq\lim_{t\to\infty}\frac{\mu\int_{0}^{t}\int_{-\infty}^{0} \int_{0}^{\infty}\kappa(x-y)\hat{u}^{*}(s,x)\mathrm{d}y\mathrm{d}x\mathrm{d}s }{t}=\lim_{t\to\infty}\frac{\mu\int_{0}^{t}\int_{-\infty}^{-h(t^{*})}\int_{0}^ {\infty}\kappa(x-y)\hat{u}^{*}(s)\mathrm{d}y\mathrm{d}x\mathrm{d}s}{t}\] \[+\lim_{t\to\infty}\frac{\mu\int_{0}^{t}\int_{-h(t^{*})}^{0}\int_ {0}^{\infty}\kappa(x-y)\hat{u}^{*}(s)\mathrm{d}y\mathrm{d}x\mathrm{d}s}{t}= \lim_{t\to\infty}\frac{\mu\int_{0}^{t}\int_{-\infty}^{0}\int_{0}^{\infty} \kappa(x-y)\hat{u}^{*}(s)\mathrm{d}y\mathrm{d}x\mathrm{d}s}{t} \tag{3.15}\] \[=\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)\hat{u}^{*} \mathrm{d}y\mathrm{d}x=c^{*}.\] Thus, combining (3.12) and (3.15), one can see that \[\lim_{t\to\infty}\frac{h(t)}{t}=c^{*}. \tag{3.16}\] _The proof of Theorem 2.2_. Now we intend to show (2.8). According to (3.11) in Step 1 of the proof, for any given small \(\sigma>0\), there is \(T_{\sigma}>T\) such that \[u(t,x)\leq(1-\sigma)^{-2}u_{\sigma}(t,x),\text{ for }t\geq T_{\sigma},-\tilde{h}(t -T)\leq x\leq\tilde{h}(t-T)\] where \(u_{\sigma}(t,x)\) is the unique almost periodic solution of (3.7) with \(\epsilon\) replaced by \(\sigma\) and \(\tilde{h}(t)=h(t+T)\). Moreover, by (3.14) in Step 2 of the proof, there are positive constants \(\underline{T}_{\sigma}>T\) and \(h_{\sigma}>0\) such that \[u(t,x)\geq(1-\sigma)^{2}u_{\sigma^{-}}(t,x),\text{ for }t\geq\underline{T}_{ \sigma},-\underline{h}(t)\leq x\leq\underline{h}(t)\] where \[\underline{h}(t)=(1-\sigma)^{2}(1-2\sigma)\mu\int_{\underline{T}_{\sigma}}^{t }\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\sigma^{-}}(s,x)\mathrm{d}y \mathrm{d}x\mathrm{d}s+h_{\sigma},\] and \(u_{\sigma^{-}}\) is the unique almost periodic solution of (3.13) with \(\epsilon\) replaced by \(\sigma\). 
As \[\lim_{\sigma\to 0}(1-\sigma)^{-2}\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x -y)u_{\sigma}(t,x)\mathrm{d}y\mathrm{d}x=\lim_{\sigma\to 0}(1-\sigma)^{2}(1-2 \sigma)\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\sigma^{-}}(t,x) \mathrm{d}y\mathrm{d}x=c^{*}\] uniformly for \(t\geq 0\). For any \(\varepsilon\in(0,c^{*})\), there are much small \(\sigma_{\varepsilon}\in(0,\varepsilon)\) and \(T_{\varepsilon}>0\) such that \[\left|(1-\sigma_{\varepsilon})^{-2}\mu\int_{0}^{t}\int_{-\infty}^{0}\int_{0}^ {\infty}\kappa(x-y)u_{\sigma_{*}}(s,x)\mathrm{d}y\mathrm{d}x\mathrm{d}s-c^{*} t\right|<\frac{\varepsilon}{2}t\] and \[\left|(1-\sigma_{\varepsilon})^{2}(1-2\sigma_{\varepsilon})\mu\int_{0}^{t} \int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)u_{\sigma_{\varepsilon}^{-}}(s,x )\mathrm{d}y\mathrm{d}x\mathrm{d}s-c^{*}t\right|<\frac{\varepsilon}{2}t\] for all \(t\geq T_{\varepsilon}\). Fix \(\sigma=\sigma_{\varepsilon}\) in \(u_{\sigma}\) and \(u_{\sigma^{-}}\). Motivated by the arguments in Step 1 and Step 2, we can find \[u_{\sigma_{\varepsilon}}(t,x)\leq\overline{u}_{\sigma_{\varepsilon}}(t),\text{ for all }t>0\] and \[u_{\sigma_{\varepsilon}^{-}}(t,x)\geq\underline{u}_{\sigma_{\varepsilon}}(t)- \varepsilon,\text{ for all }t>0\] locally uniformly for \(x\) in \(\mathbb{R}\), where \(\overline{u}_{\sigma_{\varepsilon}}(t)\) and \(\underline{u}_{\sigma_{\varepsilon}}(t)\) are the unique time almost periodic solutions of \[u_{t}=u(f(t,u)+\sigma_{\varepsilon})\] and \[u_{t}=u(f(t,u)-\sigma_{\varepsilon}),\] respectively. Further, according to (3.16), for such \(\varepsilon\), there is \(\hat{T}>T\) such that \(\tilde{h}(t-T)\geq(c^{*}-\varepsilon)t\) for \(t\geq\hat{T}\). Denote \[\mathscr{I}_{1}(\varepsilon):=(1-\sigma_{\varepsilon})^{2}(1-2\sigma_{ \varepsilon})\mu\int_{0}^{\underline{T}_{\sigma_{\varepsilon}}}\int_{-\infty} ^{0}\int_{0}^{\infty}\kappa(x-y)u_{\sigma_{\varepsilon}^{-}}(s,x)\mathrm{d}y \mathrm{d}x\mathrm{d}s-h_{\sigma_{\varepsilon}}.\] Take \(\underline{T}_{\sigma_{\varepsilon}}\gg 1\) such that \(\mathscr{I}_{1}(\varepsilon)>0.\) It follows that if \(t\geq\max\left\{T_{\sigma_{\varepsilon}}+\underline{T}_{\sigma_{\varepsilon} }+T_{\varepsilon}+\hat{T},2\frac{\mathscr{I}_{1}(\varepsilon)}{\varepsilon}\right\}\) and \(0\leq|x|\leq(c^{*}-\varepsilon)\,t,\) then it follows \[u(t,x)\leq\left(1-\sigma_{\varepsilon}\right)^{-2}u_{\sigma_{\varepsilon}}(t, x)\leq\left(1-\sigma_{\varepsilon}\right)^{-2}\overline{u}_{\sigma_{ \varepsilon}}(t)\] and \[u(t,x)\geq\left(1-\sigma_{\varepsilon}\right)^{2}u_{\sigma_{\varepsilon}^{-}} (t,x)\geq\left(1-\sigma_{\varepsilon}\right)^{2}\left(\underline{u}_{\sigma_ {\varepsilon}}(t)-\varepsilon\right).\] Let \(T^{**}=\max\left\{T_{\sigma_{\varepsilon}}+\underline{T}_{\sigma_{\varepsilon} }+T_{\varepsilon}+\hat{T},2\frac{\mathscr{I}_{1}(\varepsilon)}{\varepsilon}\right\},\) it follows \[\left(1-\sigma_{\varepsilon}\right)^{2}\left(\underline{u}_{\sigma_{ \varepsilon}}(t)-\varepsilon\right)\leq u(t,x)\leq\left(1-\sigma_{\varepsilon }\right)^{-2}\overline{u}_{\sigma_{\varepsilon}}(t),\] for \(t\geq T^{**}\) and \(0\leq|x|\leq(c^{*}-\varepsilon)\,t\). 
Moreover, by (3.4) in Step 1, it implies that \[\left(1-\sigma_{\varepsilon}\right)^{2}\left(\underline{u}_{\sigma_{ \varepsilon}}(t)-\varepsilon\right)-\overline{u}_{\sigma_{\varepsilon}}(t) \leq u(t,x)-\hat{u}^{*}(t)\leq\left(1-\sigma_{\varepsilon}\right)^{-2} \overline{u}_{\sigma_{\varepsilon}}(t)-\underline{u}_{\sigma_{\varepsilon}}(t)\] Let \[\mathscr{I}_{2}(\varepsilon)=\max\left\{\left|\left(1-\sigma_{\varepsilon} \right)^{2}\left(\underline{u}_{\sigma_{\varepsilon}}(t)-\varepsilon\right)- \overline{u}_{\sigma_{\varepsilon}}(t)\right|,\left|\left(1-\sigma_{ \varepsilon}\right)^{-2}\overline{u}_{\sigma_{\varepsilon}}(t)-\underline{u}_ {\sigma_{\varepsilon}}(t)\right|\right\}.\] we obtain that \(|u(t,x)-\hat{u}^{*}(t)|\leq\mathscr{I}_{2}(\varepsilon)\) for all \(t\geq T^{**}\) and \(0\leq|x|\leq(c^{*}-\varepsilon)\,t\). Since \(\mathscr{I}_{2}(\varepsilon)\to 0\) as \(\varepsilon\to 0\), it thus yields \(\lim\limits_{t\to\infty}\max\limits_{x\leq(c^{*}-\varepsilon)t}|u(t,x)-\hat{ u}^{*}(t)|=0.\) _The proof of Theorem 2.3._ Now we turn to prove (2.9). Choose a nonnegative, even function sequence \(\{\kappa_{n}\}\) such that each \(\kappa_{n}(x)\in C^{1}\) has nonempty compact support, and \[\kappa_{n}(x)\leq\kappa_{n+1}(x)\leq\kappa(x),\text{ and }\kappa_{n}(x)\to\kappa(x), \text{ in }L^{1}(\mathbb{R})\text{ as }n\to\infty. \tag{3.17}\] where \(\kappa_{n}(x)=\kappa(x)\chi_{n}(x)\) and \(\{\chi_{n}\}\) is a properly smooth cut-off function sequences such that \(\kappa_{n}(x)\) satisfies **(H)**. Replace \(\kappa(x)\) by \(\kappa_{n}(x)\) in (1.1), we can obtain the following auxiliary problem \[\left\{\begin{array}{ll}u_{t}=d\int_{g(t)}^{h(t)}\kappa_{n}\left(x-y\right)u \left(t,y\right)\mathrm{d}y-du+uf(t,u),&t>0,\ g\left(t\right)<x<h\left(t \right),\\ u\left(t,h(t)\right)=u(t,g(t))=0,&t>0,\\ h^{\prime}\left(t\right)=\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}\kappa_{n} \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t>0,\\ g^{\prime}\left(t\right)=-\mu\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}\kappa_{n} \left(x-y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t>0,\\ u\left(0,x\right)=u_{0}\left(x\right),\ h\left(0\right)=-g\left(0\right)=h_{0},&x \in[-h_{0},h_{0}].\end{array}\right. \tag{3.18}\] Let \((u_{n};g_{n},h_{n})\) be the solution of (3.18). Applying the similar arguments in Step 2, when the spreading happens, we can see that \[\lim_{t\to\infty}h_{n}(t)=\infty,\text{ for }n\gg 1\] and (3.15) still holds for (3.18), then \[\liminf_{t\to\infty}\frac{h_{n}(t)}{t} \geq\lim_{t\to\infty}\frac{\mu\int_{0}^{t}\int_{-\infty}^{0}\int_ {0}^{\infty}\kappa_{n}(x-y)\hat{u}_{n}^{*}(s,x)\mathrm{d}y\mathrm{d}x\mathrm{d }s}{t}\] \[=\lim_{t\to\infty}\frac{\mu\int_{0}^{t}\int_{-\infty}^{0}\int_{0}^ {\infty}\kappa_{n}(x-y)\hat{u}_{n}^{*}(s)\mathrm{d}y\mathrm{d}x\mathrm{d}s}{t}\] \[=\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa_{n}(x-y)\hat{u}_{n }^{*}\mathrm{d}y\mathrm{d}x:=\hat{c}_{n}^{*},\] where \(\hat{u}_{n}^{*}(t,x)\equiv\hat{u}_{n}^{*}(t)\) is the unique positive time almost periodic solution of \[u_{t}=d\int_{\mathbb{R}}\kappa_{n}(x-y)u(t,y)\mathrm{d}y-d\ u(t,x)+uf(t,u),\ x\in\mathbb{R}.\] Since \((\mathbf{H})\) does not hold for \(\kappa(x)\), by (3.17), it follows that \(\lim_{n\to\infty}\hat{c}_{n}^{*}=\infty.\) Further, by Lemma 4.1 in [14], we can get that for all \(n\gg 1\), \[\liminf_{t\to\infty}\frac{h(t)}{t}\geq\hat{c}_{n}^{*}\text{ and }\liminf_{t\to \infty}\frac{-g(t)}{t}\geq\hat{c}_{n}^{*}.\] Thus, the conclusion (2.9) holds. The proof of this theorem has been completed. 
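As a hedged illustration of Theorem 2.1, consider the spatially homogeneous data from the example in Section 1, \(\kappa(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\) and \(f(t,u)=a(t)-u\) with \(a(t)=1+\frac{1}{4}(\sin t+\sin\sqrt{2}\,t)\) (illustrative data only, and assuming in addition that \(\mathbf{(H4)}\) holds for them, which we do not verify). The kernel is thin-tailed since \(\int_{0}^{\infty}x\kappa(x)\mathrm{d}x=1/\sqrt{2\pi}<\infty\), so \(\mathbf{(H)}\) in (2.1) is satisfied. For the positive time almost periodic solution \(\hat{u}^{*}(t)\) of (2.5), dividing by \(\hat{u}^{*}\) gives \((\ln\hat{u}^{*})^{\prime}=a(t)-\hat{u}^{*}(t)\); averaging over \([0,T]\) and letting \(T\to\infty\), the left-hand side vanishes because \(\ln\hat{u}^{*}\) is bounded, so the mean value in (2.6) equals the mean of \(a\), that is \(\hat{u}^{*}=1\). Hence, by (2.1), \[c^{*}=\mu\int_{-\infty}^{0}\int_{0}^{\infty}\kappa(x-y)\hat{u}^{*}\mathrm{d}y\mathrm{d}x=\mu\,\hat{u}^{*}\int_{0}^{\infty}x\kappa(x)\mathrm{d}x=\frac{\mu}{\sqrt{2\pi}}\approx 0.4\,\mu,\] so by Theorems 2.1 and 2.2 both fronts spread at the asymptotic speed \(\approx 0.4\,\mu\) and the solution approaches \(\hat{u}^{*}(t)\) behind them whenever spreading occurs.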
## Acknowledgements This work is supported by the China Postdoctoral Science Foundation (No. 2022M710426) and the National Natural Science Foundation of China (Nos. 12171039 and 12271044).
In this paper, we investigate the spreading dynamics of a nonlocal diffusion KPP model, explored here for the first time in time almost periodic media. When spreading occurs, we obtain the long-run dynamics. In particular, when the threshold condition on the kernel function is satisfied, we use a suitably constructed positive time almost periodic function to express the unique asymptotic spreading speed of the free boundary problem exactly.
2309.16383
MWA rapid follow-up of gravitational wave transients: prospects for detecting prompt radio counterparts
We present and evaluate the prospects for detecting coherent radio counterparts to gravitational wave (GW) events using Murchison Widefield Array (MWA) triggered observations. The MWA rapid-response system, combined with its buffering mode ($\sim4$ minutes negative latency), enables us to catch any radio signals produced from seconds prior to hours after a binary neutron star (BNS) merger. The large field of view of the MWA ($\sim1000\,\text{deg}^2$ at 120\,MHz) and its location under the high sensitivity sky region of the LIGO-Virgo-KAGRA (LVK) detector network, forecast a high chance of being on-target for a GW event. We consider three observing configurations for the MWA to follow up GW BNS merger events, including a single dipole per tile, the full array, and four sub-arrays. We then perform a population synthesis of BNS systems to predict the radio detectable fraction of GW events using these configurations. We find that the configuration with four sub-arrays is the best compromise between sky coverage and sensitivity as it is capable of placing meaningful constraints on the radio emission from 12.6\% of GW BNS detections. Based on the timescales of four BNS merger coherent radio emission models, we propose an observing strategy that involves triggering the buffering mode to target coherent signals emitted prior to, during or shortly following the merger, which is then followed by continued recording for up to three hours to target later time post-merger emission. We expect MWA to trigger on $\sim5\text{--}22$ BNS merger events during the LVK O4 observing run, which could potentially result in two detections of predicted coherent emission.
J. Tian, G. E. Anderson, A. J. Cooper, K. Gourdji, M. Sokolowski, A. Rowlinson, A. Williams, G. Sleap, D. Dobie, D. L. Kaplan, Tara Murphy, S. J. Tingay, F. H. Panther, P. D. Lasky, A. Bahramian, J. C. A. Miller-Jones, C. W. James, B. W. Meyers, S. J. McSweeney, P. J. Hancock
2023-09-28T12:31:04
http://arxiv.org/abs/2309.16383v1
MWA rapid follow-up of gravitational wave transients: prospects for detecting prompt radio counterparts ###### Abstract We present and evaluate the prospects for detecting coherent radio counterparts to gravitational wave (GW) events using Murchison Widefield Array (MWA) triggered observations. The MWA rapid-response system, combined with its buffering mode (\(\sim 4\) minutes negative latency), enables us to catch any radio signals produced from seconds prior to hours after a binary neutron star (BNS) merger. The large field of view of the MWA (\(\sim 1000\,\mathrm{deg}^{2}\) at \(120\,\mathrm{MHz}\)) and its location under the high sensitivity sky region of the LIGO-Virgo-KAGRA (LVK) detector network, forecast a high chance of being on-target for a GW event. We consider three observing configurations for the MWA to follow up GW BNS merger events, including a single dipole per tile, the full array, and four sub-arrays. We then perform a population synthesis of BNS systems to predict the radio detectable fraction of GW events using these configurations. We find that the configuration with four sub-arrays is the best compromise between sky coverage and sensitivity as it is capable of placing meaningful constraints on the radio emission from \(12.6\%\) of GW BNS detections. Based on the timescales of four BNS merger coherent radio emission models, we propose an observing strategy that involves triggering the buffering mode to target coherent signals emitted prior to, during or shortly following the merger, which is then followed by continued recording for up to three hours to target later time post-merger emission. We expect MWA to trigger on \(\sim 5\)-\(22\) BNS merger events during the LVK O4 observing run, which could potentially result in two detections of predicted coherent emission. keywords: gravitational waves - methods: observational - radio continuum: general ## 1 Introduction On 2015 September 14, the LIGO-Virgo-KAGRA Collaboration (LVK) detected the first gravitational wave (GW) signal, GW150914, from a binary black hole (BBH) merger, marking the start of a new era in astronomy (Abbott et al., 2016). Since then, more GW signals have been detected, with most originating from BBH mergers (Abbott et al., 2016, 2019), two from binary neutron star (BNS) mergers (Abbott et al., 2017, 2020), and two to four from BH-NS mergers (Abbott et al., 2021, 2021) thanks to the commissioning of the Advanced LIGO and Virgo Interferometer (aLIGO/Virgo; Aasi et al. 2015; Acernese et al. 2015). Remarkably, electromagnetic (EM) counterparts of GWs were identified for a BNS merger (Abbott et al., 2017). The contemporaneous detection of GW170817 and short gamma-ray burst (GRB) 170817A (Abbott et al., 2017; Abbott et al., 2017; Goldstein et al., 2017) significantly increased the utility of GW signals and ignited a campaign of multi-wavelength follow-up. This led to observations in almost every EM band, yielding a wealth of information on compact binary merger physics including short GRB mechanisms (Mooley et al., 2018; Nakar et al., 2018) and the NS equation of state (e.g., Abbott et al., 2018; Raithel et al., 2018). However, GW170817 is the only gravitational wave-detected event with confirmed joint EM detections to date, although there was substantial effort devoted to following up GWs (e.g. Coughlin et al., 2019; Graham et al., 2020; Alexander et al., 2021; Dobie et al., 2022; Panther et al., 2023). 
While identifying EM counterparts is extremely useful for studying GW physics, a few factors such as the delay in issuing GW alerts and the error region of hundreds to thousands of square degrees (especially for early warning alerts) mean it is a challenging task (e.g. Kasliwal and Nissanke, 2014; Cowperthwaite and Berger, 2015). Among the EM counterparts associated with GW transients is the theorised coherent radio emission (e.g. Platts et al., 2019; Rowlinson and Anderson, 2019; Cooper et al., 2023). Many models predict prompt, fast radio burst (FRB) like signals or persistent pulsar-like emission in the course of compact binary mergers. While BH-NS mergers could produce some coherent radio emission, we are focusing on BNS mergers in this paper. The earliest radio emission could come from the inspiral phase, where interactions of the NS magnetic fields just preceding the merger could revive the pulsar emission mechanism (Lyutikov, 2019). The merger may launch an extremely relativistic jet, interacting with the interstellar medium (ISM), that produces an FRB-like signal (Usov and Katz, 2000). If the merger remnant is a supramassive (**i.e. mass larger than the maximum mass allowed for a static NS**), rapidly rotating, highly magnetised NS (from hereon referred to as a magnetar), we may expect pulsar-like emission powered by dipole magnetic braking during the lifetime of the magnetar (Totani, 2013) and/or magnetically powered radio bursts from the magnetar remnant (Lyubarsky, 2014). Finally, as the magnetar remnant spins down, it may collapse into a BH ejecting its magnetosphere and producing a prompt radio burst (Falcke and Rezzolla, 2014; Zhang, 2014). There have been several searches for prompt coherent radio counterparts to GW transients (Andreoni et al., 2017; Callister et al., 2019; Artkop et al., 2019; Bhakta et al., 2021; The LIGO Scientific Collaboration et al., 2022; Moroianu et al., 2023). The Murchison Widefield Array (MWA; Tingay et al., 2013; Wayth et al., 2018) participated in the Australian-led multi-wavelength follow-up program of GW sources, to search for coherent radio emission associated with GW170817 within five days of the GW trigger, but no signals were observed above 51 mJy on 150 min timescales (Andreoni et al., 2017). Figure 1: The LVK GW sensitivity map for O4 projected on the Earth. We used the sensitivity map generated by the LALSuite software suite (LIGO Scientific Collaboration, 2018), and assumed the same distribution of signal-to-noise ratio (S/N) of GW signals as simulated for O3 (The LIGO Scientific Collaboration et al., 2021, 2021). The colour scale corresponds to the probability of detecting a GW event at a particular sky position with respect to the Earth. The highest sensitivity region in the Southern Hemisphere (marked by a red plus) is at an elevation of \(\mathbf{58.5^{\circ}}\) in the MWA field (also discussed in Wang et al., 2020). The red star marks the location of MWA, the red contour shows the full sky coverage of MWA down to an elevation of \(\mathbf{30^{\circ}}\), and the grey contour shows the FoV of a standard MWA pointing centred on the highest sensitivity region down to 20% of the primary beam at 120 MHz. This map demonstrates that MWA is well placed to observe the highest sensitivity region of GW detection in the Southern Hemisphere, with 30.5% and 4.9% of events expected to be within the red and grey contours, respectively (see Section 3.4). Callister et al. 
(2019) used the Owens Valley Radio Observatory Long Wavelength Array (OVRO-LWA) to search a \(\sim 900\) deg\({}^{2}\) region for prompt radio transients between 27-84 MHz within the positional error of a BBH merger GW170104 (\(\sim 1600\) deg\({}^{2}\)) six hours after the GW detection, and obtained a typical upper limit of 2.4 Jy on 13 s timescales. Similar searches at higher frequencies were conducted with better sensitivity but in a much smaller search area. Using the Karl G. Jansky Very Large Array (VLA), Artkop et al. (2019) and Bhakta et al. (2021) searched only small regions (\(<1\) deg\({}^{2}\)) of possible gamma-ray counterparts identified in the GW localisation area days after the GW arrival (also from BBH mergers), resulting in an upper limit of 450 \(\mu\)Jy on 1 hr timescales at 1.4 GHz and 75 \(\mu\)Jy on 3 hr timescales at 6 GHz, respectively. Very recently, Movian et al. (2023) conducted a search for GW-FRB associations by cross matching the first CHIME/FRB catalogue (CHIME/FRB Collaboration et al., 2021) with the GW sources detected in the first half of the third GW observing run (O3a; Abbott et al. 2021), and reported a potential association, i.e. FRB 20190425A occurred 2.5 hr following GW190425 and within the GW sky localisation area, though at a low significance of 2.8\(\sigma\). With the LVK O4 observing run (Abbott et al., 2018) commencing, we are now presented with an unprecedented opportunity to search for the theorised coherent radio emission associated with BNS mergers. The lack of strong associations between BNS mergers and coherent radio emission in previous studies may be due to several factors, including the radio telescope having an insufficient field of view for covering the large uncertainty regions of GW events, a large delay between the GW detection and the radio follow-up, or insufficient sensitivity. In order to combat these issues, we present an observing strategy for searching for coherent radio counterparts to GW transients with the MWA. The MWA operates over a frequency range between 80 and 300 MHz, with an instantaneous bandwidth of 30.72 MHz, and a field of view (FoV) ranging from \(\sim 300-1000\) deg\({}^{2}\)(Tingay et al., 2013). It is suitable for finding prompt radio counterparts to GWs thanks to a few features. First, we have a unique opportunity as the MWA is well placed to target the highest sensitivity zone of the GW detector network over the Indian Ocean, as shown in Figure 1. Second is its large FoV. Given the poor localisation of GW events, especially for pre-merger detections (\(\sim 2000\) deg\({}^{2}\) expected for O4) 1, the MWA is able to cover a large proportion of the GW positional error regions. Third, the MWA has a rapid-response observing mode that can follow up a transient detection within 30 s of receiving an alert (Kaplan et al., 2015; Hancock et al., 2019; Anderson et al., 2021; Tian et al., 2022, 2020), and is now capable of storing high time resolution (781.25 ns) data in a ring buffer that can be used to search for signals up to 240 s prior to receiving an alert (Morrison et al., 2023). **For the utility of the ring buffer in the context of detecting coherent radio emission from BNS mergers see Section 4**. This, combined with the dispersive delay expected at the MWA observing frequencies, allows us to capture the earliest radio signals predicted to be produced by BNS mergers. 
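To give a sense of the timescales involved, the following minimal sketch (not part of the original analysis) evaluates the standard cold-plasma dispersion delay at MWA frequencies; the dispersion measure used is an assumed, illustrative value for an extragalactic source at \(\sim 100\)-\(200\) Mpc rather than a number quoted in the text.

```python
# Cold-plasma dispersion delay relative to infinite frequency:
#   dt ~= 4.149 ms * DM [pc cm^-3] * (nu / GHz)^-2.
# The DM below is an assumed, illustrative value, not taken from the text.

def dispersion_delay_s(dm_pc_cm3, freq_mhz):
    """Dispersive delay in seconds at freq_mhz for dispersion measure dm_pc_cm3."""
    return 4.149e-3 * dm_pc_cm3 * (freq_mhz / 1e3) ** -2

dm = 200.0  # pc cm^-3 (assumption)
for nu in (80.0, 120.0, 300.0):  # MHz, spanning the MWA observing band
    print(f"DM = {dm:.0f} pc cm^-3, nu = {nu:.0f} MHz: delay ~ {dispersion_delay_s(dm, nu):.0f} s")
```

At 120 MHz a dispersion measure of a few hundred pc cm\({}^{-3}\) corresponds to a delay of roughly a minute, comparable to the \(\sim 30\) s response time and the 240 s ring buffer described above.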
Furthermore, the MWA can trigger on transient alerts with the Voltage Capture System (VCS; Morrison et al., 2023), which enables the capture of Nyquist-sampled voltage data. The desired time and frequency resolution can then be defined by the use case, i.e., some combination of frequency and time binning between 1.28 MHz/781.25 ns, and 1 Hz/1 s. Footnote 1: [https://emfollow.docs.ligo.org/userguide/](https://emfollow.docs.ligo.org/userguide/) Given the above advantages, in this paper we discuss the prospect of detecting prompt radio emission from GW events with the MWA. Possible observing strategies for the MWA have already been investigated by Kaplan et al. (2016) and James et al. (2019). However, these works do not consider specific emission models and their detectability. Here we focus on our success rate based on the model predictions applicable to BNS mergers in the context of the LVK O4 observing run (Abbott et al., 2018). We need to consider two problems in order to maximise our success of detecting prompt radio emission from BNS mergers: how the viewing angle of these mergers affects our chance of detection; and how the MWA can overcome a significant limitation for observatories with smaller FoV - the ability to follow up the most poorly localised GW events, especially those that may be identified pre-merger from the gravitational waves emitted during the inspiral (e.g. Sachdev et al., 2020; Kovalam et al., 2022). The goal of this paper is to devise the optimal observing strategy based on our investigation of these two problems. In Section 2, we review some of the theoretical models that predict coherent radio emission to be produced by BNS mergers and how they are affected by inclination angle along our line of sight. In Section 3, we perform a population synthesis of GW sources and provide GW and radio detection criteria for deducing the jointly detectable population. We also calculate radio detection rates of GW events by taking into account the predicted distribution of GW detections in the sky and sensitivity variations of the MWA over different pointing directions. In Section 4, we propose a two-pronged triggering strategy for the MWA to follow up GW events based on the time frame that each of the coherent emission models are likely to occur during a BNS merger. ## 2 Coherent emission from BNS mergers A number of models predict that BNS mergers could give rise to coherent radio emission (for a review see Rowlinson & Anderson, 2019), which could potentially be detected using MWA rapid-response observations of GW transients. In this Section, we revisit the fluence or flux density predictions of these emission models but also accounting for the BNS merger viewing angle (i.e. the angle between the observer's line of sight and the orbital angular momentum) up to a maximum distance of \(190\,\mathrm{Mpc}\), which is the nominal horizon limit for O4 (Abbott et al., 2020). Note that BH-NS mergers are not discussed here because several of the emission models are not relevant, including the interaction of NS magnetic fields (which is not possible with just one NS; see Section 2.1), and the magnetar collapse model (see Section 2.4) as we expect the BH-NS to directly collapse to a BH upon merger. 
### Interactions of NS magnetic fields The earliest coherent radio emission produced by BNS mergers may occur during the inspiral phase, when the magnetospheric interaction between the two NSs could revive an enhanced pulsar emission mechanism (Lipunov & Panchenko, 1996; Metzger & Zivancev, 2016). In order to derive the luminosity of the pre-merger emission, here we consider a simple scenario that in the binary system one NS is highly magnetised (the primary NS) and the other moves in the magnetic field of the primary NS like a perfect conductor due to negligible magnetisation (the secondary NS; Lyutikov, 2019; Cooper et al., 2023). The pre-merger emission stems from an electric field induced by the motion of the secondary NS, which has a significant component parallel to the magnetic field, which accelerates particles. This parallel electric field \(E_{\parallel}\) increases as the binary separation \(a(t)\) shrinks due to gravitational radiation, and is given by (Cooper et al., 2023) \[E_{\parallel}=f(r,\theta,\phi)B(r,\theta,\phi)\beta, \tag{1}\] where \((r,\theta,\phi)\) is a spherical coordinate system centered on the secondary NS, \(f(r,\theta,\phi)\) is a position dependent prefactor (see Eq. 2 in Cooper et al. 2023), \(B(r,\theta,\phi)\) is the magnetic field of the primary NS, which can be approximated by a dipole i.e. \(B\approx B_{\mathrm{s}}(R_{\mathrm{NS}}/a)^{3}\)**(where \(B_{\mathrm{s}}\) is the surface magnetic field of the primary NS)**, and \(\beta=v/c\) is the speed of the secondary NS and varies with the binary separation as \[\beta=\frac{1}{c}\sqrt{\frac{GM}{a(t)}}, \tag{2}\] **where \(c\) is the speed of light, \(G\) is the gravitational constant, and \(M\) is the primary NS mass.** The orbit-induced electric field can accelerate particles along open magnetic field lines and move them out of the polar cap regions, creating vacuum like gaps in the magnetosphere (similar to the polar cap models of pulsar emission; Ruderman & Sutherland, 1975; Daugherty & Harding, 1982). We can estimate the gap height by the distance from the initial acceleration point to the point where pair production completely screens the electric field. Here, for simplicity, we assume a one-dimensional and stationary gap, and the electric field in the gap \(E_{\mathrm{gap}}=E_{\parallel}\). Then the gap height is \(h_{\mathrm{gap}}\propto\rho_{\mathrm{c}}^{1/2}B^{-1/4}E_{\parallel}^{-3/4}\), where \(\rho_{\mathrm{c}}\) is the curvature radius of the magnetic field (see Eq. 19 in Cooper et al. 2023). Assuming a fraction, \(\epsilon_{\mathrm{r}}\), of the acceleration power of the polar gap is converted to radio emission, we can calculate the radio luminosity, \[L_{\mathrm{r}}=\epsilon_{\mathrm{r}}e\Phi_{\mathrm{gap}}\dot{N}=\epsilon_{ \mathrm{r}}eE_{\mathrm{gap}}h_{\mathrm{gap}}nAc, \tag{3}\] where \(e\) is the electric charge, \(\Phi_{\mathrm{gap}}=E_{\mathrm{gap}}h_{\mathrm{gap}}\) is the electric potential difference along the gap, \(n=E_{\mathrm{gap}}/(4\pi eh_{\mathrm{gap}})\) is the charge number density in the gap, \(A\approx 4\pi R_{\mathrm{NS}}^{2}\) is the cross section of the gap, and \(\dot{N}=nAc\) is the rate of accelerated particles. With the above equations, we can calculate the radio luminosity from any point surrounding the secondary NS, which is time (or orbital separation) dependent and magnetic field line directed. In order to estimate the viewing angle dependence, following the prescription outlined in Cooper et al. 
(2023), we performed a numerical simulation that calculated the radio luminosity for each volume element \(\Delta V\) at \((r,\theta,\phi)\) and each timestep \(t\). For an observer at \((d_{L},\theta,\phi)\) in the frame of the secondary NS (\(d_{L}\) is the luminosity distance to the secondary NS), the observable emission is contributed by all volume elements with magnetic fields aligned with the observer (for more details about the numerical simulation see Cooper et al. 2023). We applied this numerical simulation to obtain the viewing angle dependent emission (see below). Figure 2 shows the radio emission predicted to be produced during the final \(3\,\mathrm{ms}\) of the BNS inspiral, encompassing the final two orbital periods and thus two peaks of emission (Cooper et al., 2023), for a range of viewing angles between \(\mathbf{0}^{\circ}\) and \(\mathbf{60}^{\circ}\) at an observing frequency of \(\nu_{\mathrm{obs}}=120\,\mathrm{MHz}\) (a plausible observing frequency for the MWA; see Section 3). Note that we do not expect coherent radio emission to be detectable beyond a viewing angle of \(\mathbf{60}^{\circ}\) as no magnetic field lines are perturbed away from the background magnetic field beyond this angle (see figure 1 in Cooper et al. 2023). We adopt the following NS parameters: a mass \(M=1.4\,\mathrm{M}_{\odot}\) and radius \(R_{\mathrm{NS}}=10^{6}\,\mathrm{cm}\) for both NSs, a surface magnetic field of the primary NS \(B_{\mathrm{s}}=10^{14}\,\mathrm{G}\), an angle between the magnetic axis and the orbital plane \(\alpha_{\mathrm{B,~{}orb}}=90^{\circ}\) for the primary NS, and an efficiency factor \(\epsilon_{\rm r}=10^{-2}\). This efficiency agrees with population studies of pulsar luminosity with voltage-like scaling and beaming models (e.g. Arzoumanian et al., 2002). Note that the magnetic axis of the primary NS is not necessarily perpendicular to the orbital plane, and as the magnetic axis tilts towards the orbital plane the magnetic field surrounding the secondary NS can increase by a factor of 2, corresponding to a radio luminosity increase by a factor of 4 (see Eq. 25 in Cooper et al., 2023). Also note that the radio luminosity scales with the magnetic field of the primary NS and the radio efficiency as \(\propto(B_{\rm s}/10^{14}\,{\rm G})\times(\epsilon_{\rm r}/10^{-2})\). In the case of a weaker magnetic field and a smaller efficiency factor, e.g. \(B_{\rm s}=10^{12}\,{\rm G}\) and \(\epsilon_{\rm r}=10^{-4}\), the radio luminosity could be attenuated by a factor of \(10^{4}\). In Figure 2, we can see the observable fluence of the 3 ms signal prior to the BNS merger decreases by a factor of \(\sim 1000\) as our line of sight moves away from the magnetic axis. At an observing angle of \(\theta_{\rm obs}\lesssim 30^{\circ}\) and a distance of \(\lesssim 150\) Mpc, the fluence can reach \(\gtrsim 1000\) Jy ms, which can be detected with the MWA (see Section 3). ### Relativistic jet and ISM interaction It has been suggested that the interaction between a Poynting flux dominated jet launched by BNS mergers and the ISM can produce a coherent radio pulse as well as prompt gamma-ray emission (Usov and Katz, 2000). Given the coincident detection of GRB 170817A just 2 s following the detection of GW170817 (Abbott et al., 2017; Abbott et al., 2017; Goldstein et al., 2017), we might expect this prompt radio emission to occur on similar timescales following BNS mergers. 
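Before moving to the jet-ISM fluence estimate, the following minimal numerical sketch (not part of the original analysis) evaluates the ingredients of Eqs. (1)-(2) for the Section 2.1 scenario: the orbital speed of the secondary NS and the dipole field of the primary NS at the companion's location over the last few orbits. The separations are assumed illustrative values; the full luminosity of Eq. (3) additionally requires the geometric prefactor and polar-gap physics of Cooper et al. (2023), which are not reproduced here.

```python
import math

# Ingredients of Eqs. (1)-(2): orbital speed of the secondary NS (Eq. 2) and the
# dipole field of the primary NS at the companion, B ~ B_s (R_NS / a)^3.
# The separations below are assumed, illustrative values.

G = 6.674e-8       # cm^3 g^-1 s^-2
c = 2.998e10       # cm s^-1
M_sun = 1.989e33   # g

M = 1.4 * M_sun    # primary NS mass (Section 2.1)
R_ns = 1e6         # cm, NS radius (Section 2.1)
B_s = 1e14         # G, surface field of the primary NS (Section 2.1)

for a in (5 * R_ns, 3 * R_ns, 2.5 * R_ns):        # assumed orbital separations
    beta = math.sqrt(G * M / a) / c               # Eq. (2)
    B_local = B_s * (R_ns / a) ** 3               # dipole field seen by the secondary NS
    print(f"a = {a / 1e5:.0f} km: beta ~ {beta:.2f}, B ~ {B_local:.1e} G")
```

The secondary NS is thus moving at a few tenths of the speed of light through a field of order \(10^{12}\)-\(10^{13}\) G just before coalescence, which is what drives the induced electric field of Eq. (1).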
The bolometric radio fluence, \(\Phi_{r}\) (erg cm\({}^{-2}\)), is proportional to the bolometric gamma-ray fluence observed from short GRBs, \(\Phi_{\gamma}\) (erg cm\({}^{-2}\)), with their ratio being \(\simeq 0.1\epsilon_{B}\), where \(\epsilon_{B}\) is the fraction of magnetic energy in the relativistic jet (Usov and Katz, 2000). The typical spectrum of these low-frequency waves is expected to peak at a frequency dependent on the magnetic field at the shock front \[\nu_{\rm max}\simeq[0.5\,-1]\frac{1}{1+z}\epsilon_{B}^{1/2}\times 10^{6}\,{\rm Hz} \tag{4}\] (in the observer's frame; Rowlinson et al., 2019), which is well below our observing frequency. The radio fluence at an observing frequency \(\nu_{\rm obs}\) is given by \[\Phi_{\nu_{\rm obs}}=\frac{\beta-1}{\nu_{\rm max}}\Phi_{r}\bigg{(}\frac{\nu_{\rm obs}}{\nu_{\rm max}}\bigg{)}^{-\beta}\,{\rm erg\,cm^{-2}\,Hz^{-1}}, \tag{5}\] where the spectral index is typically assumed to be \(\beta=1.6\) (Usov and Katz, 2000). Note that the bolometric radio fluence \(\Phi_{r}\) is the fluence integrated over frequency and thus has a different unit to \(\Phi_{\nu_{\rm obs}}\). We can predict the fluence of the coherent radio emission produced during the BNS merger using the above equations. The gamma-ray fluence may be inferred using \[\Phi_{\gamma}=\frac{(1+z)\,E_{\gamma,{\rm iso}}}{4\pi d_{L}^{2}}\,{\rm erg\,cm^{-2}}, \tag{6}\] where \(E_{\gamma,{\rm iso}}\) represents the isotropic-equivalent gamma-ray energy, and ranges between \((0.04\)-\(45)\times 10^{51}\) erg with a median value of \(1.8\times 10^{51}\) erg (inferred from a population of short GRBs; Fong et al., 2015). However, the above calculation applies only to an on-axis jet, i.e. the relativistic jet points along our line-of-sight, which is a reasonable assumption in searching for radio counterparts to GRBs (e.g. Rowlinson et al., 2019, 2021; Anderson et al., 2021; Tian et al., 2022, 2020). In the case of GW detections, the relativistic jet launched by the BNS merger is likely to point away from the Earth, resulting in no GRB detection. Therefore, for GW triggers with MWA it is necessary to consider the attenuation of the predicted emission with the viewing angle (see Section 3). Note that for discussions in this paper we assume the relativistic jet aligns with the orbital angular momentum of the binary system (e.g. Abbott et al., 2017).
Figure 2: The total fluence of the radio emission predicted to be produced during the last 3 ms of the BNS inspiral at 120 MHz assuming a mass \(M=1.4\,{\rm M_{\odot}}\) and radius \(R_{\rm NS}=10^{6}\) cm for both NSs, a surface magnetic field for the primary NS \(B_{\rm s}=10^{14}\) G, an angle between the magnetic axis and the orbital plane \(\alpha_{\rm B,\,orb}=90^{\circ}\) for the primary NS, and an efficiency factor \(\epsilon_{\rm r}=10^{-2}\). The solid lines represent the observable emission with the color corresponding to different viewing angles with respect to the binary merger axis based on the color bar. The three horizontal dashed lines in black, red, and cyan represent the expected sensitivity on \(\sim\)ms timescales of the MWA full array, four sub-arrays, and a single dipole per tile, respectively, all in the VCS mode and with incoherent beamforming (see Section 3).
In order to calculate the viewing angle-dependent radio emission, we assume a structured jet model, i.e. an angular distribution of kinetic energy within the jet, which may arise from the central engine activity and/or
the interaction of the jet with the ISM (Gottlieb et al., 2018; Lazzati et al., 2018; Xie et al., 2018). There are several variants of the structured jet models, including a top-hat jet (Donaghy, 2006), a power-law jet (Dobie et al., 2020), or a Gaussian jet (Resmi et al., 2018). Given that current observations of GRBs do not allow us to distinguish between these different jet structures and that much evidence appears to support Gaussian structured jets for GRBs (e.g. Lamb and Kobayashi, 2018; Lamb et al., 2019; Howell et al., 2019; Cunningham et al., 2020), here we adopt a Gaussian jet model where the distribution of kinetic energy and Lorentz factor within the jet is given by \[E(\theta)=E_{\rm iso}\,e^{-(\theta/\theta_{0})^{2}},\text{ and} \tag{7}\] \[\Gamma(\theta)=1+(\Gamma_{0}-1)\,e^{-(\theta/\theta_{0})^{2}}, \tag{8}\] where \(\theta\) is the polar angle from the jet's axis, \(\theta_{0}\) is the angular scale of the jet opening angle, and \(E_{\rm iso}\) and \(\Gamma_{0}\) are the isotropic-equivalent energy and Lorentz factor of the jet's core, respectively. There are different methods of constraining the jet opening angle. While observations of jet breaks in short GRB afterglows suggest a typical jet opening angle of \(\mathbf{16}^{\circ}\pm\mathbf{10}^{\circ}\)(Fong et al., 2015), a comparison between the rates of BNS mergers and short GRBs points to highly collimated GRB jets with opening angles \(\approx\mathbf{6}^{\circ}\)(Beniamini and Nakar, 2019). Note that the latter constraint on the jet opening angle was improved in Sarin et al. (2022) to \(\approx\mathbf{15}^{\circ}\). Here for completeness we consider the model emission under both a narrow (\(\theta_{0}=\mathbf{6}^{\circ}\)) and wide (\(\theta_{0}=\mathbf{16}^{\circ}\)) jet. The jet emission viewed off-axis may be calculated as follows. Assuming a Lorentz factor of \(\Gamma_{0}\sim 1000\)(e.g., Hotokezaka et al., 2019; Dobie et al., 2020), we have the relativistic beaming cone of emitters \(1/\Gamma\ll\theta_{0}\). In this case, the observed radio emission scales with the on-axis emission as \[\frac{\Phi_{\nu_{\rm obs}}(\theta)}{\Phi_{\nu_{\rm obs}}(\theta_{0})}=\begin{cases} \frac{E(\theta)}{E(\theta_{0})}&\theta<\theta_{0}\\ \max\left[\frac{E(\theta)}{E(\theta_{0})},q^{-4}\right]&\theta_{0}<\theta<2 \theta_{0}\\ \max\left[\frac{E(\theta)}{E(\theta_{0})},q^{-6}(\theta_{0}\Gamma)^{2}\right]& \theta>2\theta_{0}\end{cases}\] where the term in each row containing \(q=(\theta-\theta_{0})\Gamma\) represents the case when 'off line-of-sight' emitters (i.e., the angle between the velocity of emitters and our line-of-sight is larger than \(1/\Gamma\)) become dominant (Beniamini and Nakar, 2019; Beniamini et al., 2019). Figure 3 shows the radio emission predicted to be produced by the relativistic jet-ISM interaction for a range of viewing angles between \(\mathbf{0}^{\circ}\) (on-axis) and \(\mathbf{40}^{\circ}\) (off-axis) at an observing frequency of \(\nu_{\rm obs}=120\,\)MHz. We adopt the following jet parameters: \(E_{\rm iso}=1.8\times 10^{51}\,\)erg; \(\epsilon_{B}=10^{-2}\); \(\theta_{0}=\mathbf{16}^{\circ}\); and \(\Gamma_{0}=1000\)(Fong et al., 2015). We can see the observable model emission drops with viewing angle. At a distance of \(200\,\)Mpc, while the on-axis radio fluence can reach \(>1000\,\)Jy ms, the off-axis fluence for \(\theta_{\rm obs}=\mathbf{40}^{\circ}\) drops to below \(10\,\)Jy ms, which means the detectability of this emission model is largely determined by our viewing angle (see Section 3). 
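A compact way to see how steeply the predicted fluence falls with viewing angle is to evaluate the off-axis scaling above directly. The sketch below (not the authors' code) implements Eqs. (7)-(8) and the piecewise scaling for the wide jet (\(\theta_{0}=16^{\circ}\), \(\Gamma_{0}=1000\)), taking the Lorentz factor entering \(q\) to be the core value \(\Gamma_{0}\) (an assumption; for these parameters the energy-ratio term dominates the \(\max\) in any case) and normalising to an on-axis observer.

```python
import math

# Off-axis suppression of the jet-ISM prompt fluence for the Gaussian structured jet,
# Eqs. (7)-(8) and the piecewise scaling above, with theta_0 = 16 deg, Gamma_0 = 1000.
# The Lorentz factor entering q is assumed to be the core value Gamma_0.

theta0 = math.radians(16.0)
gamma0 = 1000.0

def energy_ratio(theta):
    """E(theta) / E(theta_0) for the Gaussian profile of Eq. (7)."""
    return math.exp(-(theta / theta0) ** 2) / math.exp(-1.0)

def fluence_ratio(theta):
    """Phi(theta) / Phi(theta_0) from the piecewise off-axis scaling."""
    if theta < theta0:
        return energy_ratio(theta)
    q = (theta - theta0) * gamma0
    if theta < 2.0 * theta0:
        return max(energy_ratio(theta), q ** -4)
    return max(energy_ratio(theta), q ** -6 * (theta0 * gamma0) ** 2)

on_axis = fluence_ratio(0.0)
for deg in (10, 20, 30, 40):
    suppression = on_axis / fluence_ratio(math.radians(deg))
    print(f"theta_obs = {deg:2d} deg: fluence below the on-axis value by a factor ~ {suppression:.1f}")
```

The suppression of a few hundred at \(\theta_{\rm obs}=40^{\circ}\) is consistent with the drop from \(>1000\) Jy ms on-axis to \(<10\) Jy ms quoted above.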
We note that in the case of a narrow jet with \(\theta_{0}=\mathbf{6}^{\circ}\), the decrease of fluence with viewing angle is more significant, with a viewing angle of \((\mathbf{6}^{\circ}/\mathbf{16}^{\circ})\times\mathbf{40}^{\circ}=\mathbf{15}^{\circ}\) resulting in a predicted radio fluence below \(10\,\)Jy ms (see Figure 8 in Appendix A).

Figure 3: The fluence of the prompt radio signal predicted to be produced by the relativistic jet and ISM interaction at \(120\,\)MHz assuming a Gaussian jet with an angular scale of \(\mathbf{16}^{\circ}\) (see Section 2.2). The regions in different colors show the radio fluence predictions corresponding to different viewing angles from \(\mathbf{0}^{\circ}\) (on-axis) to \(\mathbf{40}^{\circ}\) (off-axis), with the uncertainties (depicted by the width of the different color regions) resulting from the peak frequency of the prompt radio emission at the shock front (see Eq. 4). The three horizontal dashed lines are the same as for Figure 2.

### Persistent pulsar emission

If the merger remnant is a magnetar, we may expect there to be persistent radio emission powered by dipole magnetic braking during the lifetime of the magnetar (Totani, 2013; Metzger et al., 2017). The duration of this emission is largely uncertain due to the unknown equation of state and lifetime of the magnetar remnant. However, assuming the plateau phase observed in the X-ray afterglow of short GRBs is powered by the magnetar remnant and its ending is due to the magnetar collapse (see Section 2.4), we might expect this persistent radio emission to last until \(\sim 1000\)-\(10000\,\)s post-merger (Tang et al., 2019; Sarin et al., 2020). Note that in the case of an extremely low binary mass (i.e. \(\lesssim M_{\rm max}\), the maximum mass of stable NSs; Lattimer and Prakash, 2001), the magnetar remnant might be indefinitely stable and would therefore not collapse (e.g., Bucciantini et al., 2012; Giacomazzo & Perna, 2013). The luminosity of this emission is given by (Pshirkov & Postnov, 2010), \[L=\epsilon_{r}\,\dot{E}, \tag{9}\] where \(\epsilon_{r}\) is the radio emission efficiency and \(\dot{E}\) is the standard pulsar spin-down luminosity (Zhang & Meszaros, 2001), \[\dot{E}=\frac{16\pi^{4}}{3}\frac{B^{2}R^{6}}{P^{4}c^{3}}\sin^{2}\!\alpha, \tag{10}\] where \(P\), \(B\), \(R\), and \(\alpha\) are the spin period, surface magnetic field, radius, and magnetic inclination of the magnetar remnant, respectively, and \(c\) is the speed of light. Note that the above expression assumes a braking index of 3 for the magnetar, which usually differs from measured values of millisecond magnetars (Lasky et al., 2017; Sasmaz Mus et al., 2019). If we take into account the beaming fraction \(\Omega/(4\pi)\) of the radio emission, the detectable flux density is given by \[F_{\nu_{\rm obs}}=\frac{\left(1+z\right)L}{\Omega\,\nu_{\rm obs}\,d_{L}^{2}}. \tag{11}\] We assume the same radio emission efficiency as in Section 2.1, i.e. \(\epsilon_{\rm r}=10^{-2}\). Two main sources of uncertainty in the predicted flux density are: the magnetic inclination angle of the magnetar remnant \(\alpha\) (due to the unknown physics of the NS magnetic field and equation of state; Cutler, 2002) and the beaming fraction of the radio emission \(\Omega/(4\pi)\) (due to the unknown physics of the pulsar radio emission; Kalogera et al., 2001).
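As a rough illustration of Eqs. (9)-(11), the following Python sketch (our own, not code from the paper) evaluates the spin-down-powered flux density of a millisecond-magnetar remnant. The default parameter values are chosen for illustration; the fiducial choices actually adopted in the paper are given in the next paragraph and in Figure 4.

```python
import numpy as np

C = 2.998e10    # speed of light [cm/s]
MPC = 3.086e24  # cm per Mpc
JY = 1e-23      # erg s^-1 cm^-2 Hz^-1

def pulsar_flux_density(B, P, R=1e6, alpha_deg=30.0, eps_r=1e-2,
                        omega_frac=1.0, nu_obs=120e6, d_mpc=200.0, z=0.0):
    """Persistent pulsar flux density from Eqs. (9)-(11): spin-down luminosity
    times a radio efficiency eps_r, beamed into Omega = omega_frac * 4*pi.
    Returns the flux density in Jy."""
    e_dot = (16 * np.pi**4 / 3) * B**2 * R**6 / (P**4 * C**3) \
            * np.sin(np.deg2rad(alpha_deg))**2                 # Eq. (10)
    lum = eps_r * e_dot                                        # Eq. (9)
    omega = omega_frac * 4 * np.pi
    d_l = d_mpc * MPC
    return (1 + z) * lum / (omega * nu_obs * d_l**2) / JY      # Eq. (11)

# Illustrative remnant with B = 8e15 G, P = 30 ms at 200 Mpc:
# a few hundred Jy, far above the 30-min sensitivities listed in Table 1.
print(f"{pulsar_flux_density(B=8e15, P=0.03):.0f} Jy")
```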
As the magnetic pole of a NS is expected to align with the spin axis at the birth time and become misaligned with time (the orthogonalisation timescale due to bulk viscosity inside a NS is largely uncertain depending on the NS spin frequency, magnetic field strength, and temperature, and could be as short as seconds; Dall'Osso et al., 2009; Lander & Jones, 2018), here we adopt a fiducial value of \(\mathbf{30^{\circ}}\) for \(\alpha\). For the beaming fraction we consider a range of \(0.01<\Omega/(4\pi)<1\) (e.g. Gourdji et al., 2020). Note that here the beaming fraction accounts for the off-axis viewing angle rather than being physical, i.e. the observed flux density is given by Eq. 11 if the impact angle of our line of sight to the magnetic axis is within the solid angle \(\Omega\) and zero otherwise. Figure 4 shows the predicted persistent radio emission from the magnetar remnant formed by BNS mergers in the case of the radiation beam pointing towards us for a range of beaming fractions at an observing frequency of \(\nu_{\rm obs}=120\) MHz (see Section 3). We adopt magnetar parameters of \(B=8\times 10^{15}\) G and \(P=30\) ms, corresponding to a low luminosity magnetar given the distribution of magnetar parameters derived from a population of short GRBs (see figure 8 in Rowlinson & Anderson, 2019), and note that even in the case of a smaller radio emission efficiency (e.g. \(\epsilon_{\rm r}=10^{-4}\); Szary et al., 2014) the predicted persistent radio emission would still be bright enough to be detected by the MWA.

### Magnetar collapse

**If the magnetar remnant is supramassive**, it will inevitably collapse into a BH, ejecting its magnetosphere and possibly producing a short burst of coherent radio emission (Falcke & Rezzolla, 2014; Zhang, 2014). Given the timescale of the magnetar collapse inferred from the X-ray afterglow of short GRBs (see Section 2.3), we might expect this radio emission to occur \(\sim\)1000-10000 s post-merger. Assuming a fraction \(\epsilon\) of the magnetic energy in the magnetar's magnetosphere \(E_{B}\) is converted into the radio emission, we can write the bolometric radio fluence as \[\Phi_{r}=\epsilon\,E_{B}=\epsilon\,\frac{B^{2}R^{3}}{6}. \tag{12}\] Taking into account the beaming of the radio emission as in Section 2.3, we can show that the observable radio fluence is \[\Phi_{\nu_{\rm obs}}=\frac{\left(1+z\right)\Phi_{r}}{\Omega\,\nu_{\rm obs}\,d _{L}^{2}}. \tag{13}\] Note that the above equation applies only if our line of sight falls in the radiation beam, as noted in Section 2.3.

Figure 4: The predicted flux density for the persistent radio emission from the dipole radiation of a magnetar remnant at 120 MHz (see Section 2.3). We assumed a radio emission efficiency of \(\epsilon_{\rm r}=10^{-2}\) and a fiducial angle of \(\mathbf{30^{\circ}}\) for the magnetic inclination of the magnetar. The solid lines in different colors represent the observable emission from a low luminosity magnetar (i.e. \(B=8\times 10^{15}\) G and \(P=30\) ms; see Section 2.3) for different beaming fractions. The horizontal dashed lines in black, red, and cyan represent the expected sensitivity on 30 min timescales of the MWA full array, four sub-arrays and a single dipole per tile, respectively, all in the standard correlator mode (see Section 3).
Figure 5 shows the predicted radio burst resulting from the collapse of the magnetar remnant in the case of the radiation beam pointing towards us for a range of beaming fractions \(0.01<\Omega/(4\pi)<1\) at an observing frequency of \(\nu_{\rm obs}=120\) MHz (see Section 3). We assume an efficiency of converting magnetic energy into radio emission of \(\epsilon=10^{-6}\) (upper limit suggested by, e.g. Rowlinson et al., 2021), and a typical magnetar remnant (i.e. \(B=2\times 10^{16}\) G; Rowlinson and Anderson, 2019). We can see the predicted emission varies with beaming fraction by more than two orders of magnitude from \(\gtrsim 100\) Jy ms at \(\Omega/(4\pi)=1\) to \(\gtrsim 10000\) Jy ms at \(\Omega/(4\pi)=0.01\). Therefore, the detectability of this model emission is dependent on both beaming fraction and viewing angle (see Section 3).

## 3 A population study for the radio counterparts to GW sources

In this Section, we perform a population study for the radio counterparts to GW sources in the context of the four coherent emission models described in Section 2 in order to assess the viability of MWA detecting these signals in dedicated triggered follow-up during O4 and beyond. In order to detect prompt radio emission from BNS mergers we need to consider the sky coverage for the predicted GW detections in O4 (see Figure 1) as well as the viewing angle dependence of the emission models (see Section 2). We use a Monte Carlo method for simulating \(10^{7}\) binary systems with random inclinations (i.e. the angle between the orbital angular momentum of the binary and the line of sight) and distances within the LVK O4 horizon. Then we apply a GW detection criterion to determine the population of BNS mergers likely to be detected by LVK. Assuming the same intrinsic radio emission for all BNS mergers as derived in Section 2, we can calculate the observable radio fluence or flux density (depending on the source distance and inclination angle) for each simulated GW detection, and compare it to the instrument sensitivity for determining the LVK BNS merger GW-radio jointly detectable fraction with the MWA.

### GW detection criterion

The detectability of a GW inspiral signal by a LIGO-Virgo type interferometer depends on the properties of the binary system as well as the sensitivity profile of the interferometer (Finn and Chernoff, 1993). A full analysis requires the consideration of the chirp mass of the binary, the luminosity distance to the binary, and the binary localisation and orientation in the frame of the interferometer. Here, we need to consider only two parameters, the luminosity distance \(d_{L}\) and the inclination angle \(\theta_{\rm obs}\), as these are the parameters that determine the predicted radio emission from BNS mergers (see Section 2). Then the GW detection criterion for the LVK network assuming a S/N threshold of eight is given by (Duque et al., 2019) \[d_{L}<H(\theta_{\rm obs})=\sqrt{\frac{1+6\cos^{2}\theta_{\rm obs}+\cos^{4} \theta_{\rm obs}}{8}}\,\overline{H}, \tag{14}\] where \(\overline{H}\) is the sky position averaged horizon (\(190\) Mpc for \(1.4+1.4\) M\({}_{\odot}\) BNS systems during the O4 run; Abbott et al. 2020). We note that the early phase of O4 has a sensitivity close to O3 and an actual horizon limit of \(\sim 160\) Mpc. However, the simulation results below remain the same for different horizon limits.
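A minimal Monte Carlo version of this selection is sketched below (our own illustration, not the paper's code). One caveat: we normalise the inclination factor of Eq. (14) so that a face-on binary is detectable out to the edge of the simulated volume (\(\sqrt{5/2}\,\overline{H}\approx 300\) Mpc); how that maximum relates to the sky-averaged \(\overline{H}\) depends on the convention adopted in Eq. (14), so this normalisation is an assumption of the sketch. With it, the sketch recovers a detected fraction of \(\approx 29\%\) and a mean inclination of \(\approx 37^{\circ}\), close to the values quoted in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(1)
H_BAR = 190.0                        # sky-position-averaged horizon [Mpc]
H_MAX = np.sqrt(5.0 / 2.0) * H_BAR   # assumed face-on maximum (~300 Mpc, see text)
N = 10**6                            # 10^7 in the paper; 10^6 is enough here

# Homogeneous in volume, isotropic orbital orientations (folded to 0-90 deg)
d = H_MAX * rng.uniform(size=N) ** (1.0 / 3.0)
cos_i = rng.uniform(size=N)

# Inclination-dependent detection distance (Eq. 14), normalised to H_MAX face-on
h_incl = np.sqrt((1.0 + 6.0 * cos_i**2 + cos_i**4) / 8.0) * H_MAX
detected = d < h_incl

print(f"GW-detected fraction: {detected.mean():.1%}")
print(f"mean inclination: {np.degrees(np.arccos(cos_i[detected])).mean():.0f} deg")
```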
We started our population study by simulating \(10^{7}\) BNS systems that are homogeneously distributed within the horizon (\(H=\sqrt{\frac{5}{2}}\,\overline{H}\approx 300\) Mpc) and have isotropically distributed binary angular momentum directions. Applying the above criterion, we obtained \(\sim 29\%\) detected by the GW interferometer network, with their distribution in inclination angle shown in Figure 6 (black dotted line). We can see the mean inclination angle of the GW detected events is \(\sim 38^{\circ}\), which is consistent with previous works (e.g. Duque et al., 2019; Mochkovitch et al., 2021). This fraction of GW detected events was further filtered with a radio detection criterion for determining the GW-radio jointly detectable BNS merger population (see Section 3.2).

Figure 5: The predicted fluence for the radio burst produced during the collapse of the magnetar remnant at \(120\) MHz (see Section 2.4). We assumed a magnetic energy conversion efficiency of \(\epsilon=10^{-6}\). The solid lines in different colors represent the observable emission resulting from the collapse of a typical magnetar remnant (i.e. \(B=2\times 10^{16}\) G; see Section 2.4) for different beaming fractions. The three horizontal dashed lines are the same as for Figure 2.

### Radio detection criterion

For this analysis, we assume that a GW source is detectable in the radio band as long as its observable radio emission, as determined by the models in Section 2, is above the sensitivity of the radio telescope used for follow-up. Note that this criterion is necessary but not sufficient for a real detection, which also depends on follow-up time and the arrival of radio signals. Here, for simplicity we assume that the MWA is capable of capturing all the four model emissions presented in Section 2 regardless of their arrival times (for more discussion see Section 4). We chose to test radio detections at \(120\,\mathrm{MHz}\) and \(200\,\mathrm{MHz}\) for a balance between sky coverage and detection sensitivity. The MWA has a larger FoV at lower frequencies, which is better suited to covering the GW positional uncertainties as shown in Table 1. However, considering the model emission presented in Section 2 could potentially be FRB-like, and the fact that most FRB signals have been detected at \(>300\,\mathrm{MHz}\) (e.g. Chawla et al., 2020; Piant et al., 2020; CHIME/FRB Collaboration et al., 2021), we might expect a higher chance of detecting coherent radio counterparts to GWs at higher observing frequencies. As a compromise, in this paper all properties of the MWA including the FoV and the sensitivity are quoted for \(120\,\mathrm{MHz}\) and \(200\,\mathrm{MHz}\) (see Table 1). Note that while the MWA has the optimal sensitivity at \(150\,\mathrm{MHz}\), \(120\,\mathrm{MHz}\) will provide a larger FoV in an RFI-quiet part of the MWA band while gaining additional dispersion delay and therefore time for getting on-target. Therefore, we chose an observing frequency of \(120\,\mathrm{MHz}\), which is on the lower frequency end of FRB detections (Pleunis et al., 2021). As previously mentioned, the earliest LVK GW alerts of BNS mergers will have poor positional localisations; however, most of the models discussed in Section 2 strongly motivate the need for MWA to be on target during, if not before, the merger.
An exciting addition to the O4 public alerts is the Early-Warning Alerts from pipelines capable of detecting GWs from the inspiral before the merger of a binary with at least one NS component: GstLAL (Cannon et al., 2012; Sachdev et al., 2020), MBTAOnline (Adams et al., 2016), PyCBC Live (Nitz et al., 2018; Dal Canton et al., 2021), and SPIIR (Chu et al., 2022). However, such alerts will not contain any positional information.2 Footnote 2: [https://emfollow.docs.ligo.org/userguide/early_warning.html](https://emfollow.docs.ligo.org/userguide/early_warning.html) Rather than waiting for an accurate sky position, we instead need to configure the MWA to observe as much of the sky as possible on receiving a GW alert while also taking advantage of the telescope's fortuitous position under one of the two highest sensitivity sky regions of the LVK network (see Figure 1). In order to further increase our chances of a successful detection at early times, we experimented with three different ways of configuring the MWA that would test for the best compromise between sky coverage and sensitivity (see also Figure 7 and Table 1):

1. The full array (\(128\) tiles \(\times\)\(16\) dipoles) with a single primary beam that is centered on the position of the highest sensitivity region of the LVK network over the Indian Ocean (see Figure 1 and Figure 7a);
2. A single dipole per tile, which provides a very wide-field zenith pointing (see Figure 7a); and
3. Splitting the full array into four sub-arrays of \(32\) tiles each, creating four overlapping primary beams that tile the highest LVK network sensitivity region, overlapping at \(50\%\) power (see Figure 7 for the different MWA beam tiling configurations that we tested).

In the case of option 3, we further trialled four different sub-array beam pointing configurations to maximise our coverage of the highest sensitivity region of the LVK network. Specifically, one beam is always centred on the highest sensitivity GW region with the other three beams overlapping at their \(50\%\) power (the different pointing configurations are listed in Table 2 and depicted in Figure 7). We then tested each of these beam tiling configurations to determine which would provide the highest probability of detecting coherent radio emission from a BNS merger (see Section 3.4). In Table 1, we list the approximate MWA sensitivity for the three observing modes on timescales of \(1\,\mathrm{ms}\) (assuming an MWA VCS observation and incoherent beamforming) and \(30\,\mathrm{min}\) (assuming a standard MWA correlator observation). Note that the quoted sensitivities for the single dipole and the sub-array modes are estimated by simply assuming that the sensitivity for the full array pointed at the zenith scales with the number of dipoles in use. The sensitivity on \(1\,\mathrm{ms}\) timescales is appropriate for considering the detectability of prompt radio emission predicted to be produced by the NS magnetosphere interaction (see Section 2.1), jet-ISM interaction (see Section 2.2), and magnetar remnant collapse (see Section 2.4), and the sensitivity on \(30\,\mathrm{min}\) timescales is for persistent radio emission produced by the magnetar remnant (see Section 2.3). These sensitivities, combined with the GW detection criterion, form our criterion for the joint detection of simulated BNS mergers (for discussion on our observing strategies see Section 4).
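The dipole-count scaling assumption behind the quoted sensitivities can be checked in a few lines; this is our own illustration of the stated assumption, not the actual sensitivity calculation.

```python
# Sensitivity assumed to scale with the number of dipoles in use (Section 3.2):
# 128 tiles x 16 dipoles for the full array, 32 x 16 for one sub-array,
# and 128 x 1 for the single-dipole-per-tile mode.
full_1ms, full_30min = 136.0, 0.027  # Jy, full-array values from Table 1

for mode, dipole_ratio in [("four sub-arrays", 4), ("single dipole per tile", 16)]:
    print(mode, round(full_1ms * dipole_ratio), "Jy (1 ms),",
          round(full_30min * dipole_ratio, 3), "Jy (30 min)")
# -> ~544 / 2176 Jy and 0.108 / 0.432 Jy, matching Table 1 to within rounding
```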
### GW-radio jointly detectable population Using the simulated population of BNS mergers described in Section 3.1, from which we expect the LVK to detect \(\sim 29\%\) within the O4 horizon, we now use the models described in Section 2 to estimate the fraction of events that could be detected with the MWA. We assume all BNS mergers produce coherent radio emission described by all four models in Section 2, and we adopt the model parameters as described unless otherwise stated. Here we assume all GW sources are located in the MWA field of view and can be detected by the MWA as long as their predicted radio emission is above the MWA sensitivities given in Table 1. The LVK sky sensitivity to GW events projected for O4 (see Figure 1) and variations in the MWA sensitivity due to the beam response and observational elevation will be considered in Section 3.4. In Figure 6, we display the detectable fraction of coherent radio emission from BNS mergers for the four different models described in Section 2 as a function of merger inclination angle. In each subplot, the LVK-detectable BNS population is shown as a dotted black curve. The detectable fraction of radio emission for each model using the different observing modes (described in Section 3.2) or assuming different beaming fractions are shown as coloured curves. For the NS interaction model (see Section 2.1 and panel (a) of Figure 6), we assumed a pulse width of 3 ms and no scattering, and converted the sensitivity from a flux density limit (which scales as \(t^{-1/2}\)) to a fluence limit using \[\mathrm{Fluence}=\mathrm{Flux}\times(\mathrm{width}/1\,\mathrm{ms})^{1/2}\, \mathrm{Jy\,ms}. \tag{15}\] The fractions of GW-radio joint detections by the MWA full array, sub-arrays, and single dipole with respect to the LVK O4 detectable population are 59%, 38%, and 18%, respectively. **Note that the detectable fraction drops as we approach a viewing angle of \(\cos\theta_{v}=1.0\) for the single dipole, which can be attributed to a drop in the predicted radio fluence for NS interactions when \(\theta_{v}<10^{\circ}\) (see Section 3.5 in Cooper et al., 2023) and the lower sensitivity of the single dipole compared to the other two observing modes.** Similarly, for the jet-ISM interaction model (see Section 2.2) we plot the joint event detections by the different observing modes of the MWA in panel (b) of Figure 6. Here we assume a pulse width of 10 ms for our sensitivity estimate using Eq. 15, which, in the absence of detected prompt radio emission from BNS mergers, is based on known rest-frame intrinsic durations of FRBs with known redshifts and no scattering features (Hashimoto et al., 2019, 2020). The detection fractions for the prompt radio emission produced by the jet-ISM interaction by the MWA full array, sub-arrays, and single dipole in the LVK O4 detectable population are 27%, 11%, and 1%, respectively. For the persistent pulsar emission model, as demonstrated in Section 2.3, the MWA in any observing mode can detect the predicted emission up to the O4 horizon as long as the radiation beam of the magnetar remnant points towards us. We show the population with persistent radio emission detectable by the MWA as a function of inclination angle for three different beaming fractions (dashdot lines in different colors) in panel (c) of Figure 6. In the case of isotropic pulsar emission (i.e. beaming fraction \(=1\)) the MWA can detect all GW detected events, as shown by the overlapping of the black dotted and yellow dashdot lines. 
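The sensitivity conversion of Eq. (15) used above can be written compactly; the sketch below is our own reading of that equation, with the 1 ms flux-density limits taken from Table 1.

```python
import numpy as np

# 1-ms flux-density limits from Table 1 [Jy]
FLUX_LIMIT_1MS = {"full array": 136.0, "sub-arrays": 543.0, "single dipole": 2172.0}

def fluence_limit_jy_ms(mode, width_ms):
    """Eq. (15): convert a 1-ms flux-density limit into a fluence limit for a
    pulse of the given width (the flux-density limit scales as t^-1/2)."""
    return FLUX_LIMIT_1MS[mode] * np.sqrt(width_ms / 1.0)

# e.g. the 3 ms pulse assumed for the NS-interaction model
for mode in FLUX_LIMIT_1MS:
    print(mode, f"{fluence_limit_jy_ms(mode, 3.0):.0f} Jy ms")
```

A simulated merger then counts as a joint detection when its predicted fluence (e.g. Figure 2) exceeds this limit for the chosen observing mode.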
The detectable fractions for beaming fractions of 0.1 and 0.01 are 91% and 43%, respectively. There is a drop in radio detectable events around \(\cos\theta_{v}\approx 0.95\) for the beaming fraction \(=0.01\) (black dashdot line). This can be attributed to our assumption of the magnetic inclination \(=\mathbf{30}^{\circ}\) for the magnetar remnant (a major source of uncertainty; see Section 2.3), which means if our line of sight aligns with the spin axis of the magnetar (i.e. small \(\theta_{v}\)), the radiation beam along the magnetic axis will point away from us. Similarly, for the magnetar collapse model (see Section 2.4) we plot the distribution of radio detectable events for three different beaming fractions in panel (d) of Figure 6. As shown in Figure 5, the MWA can only detect the predicted emission up to a certain distance depending on the beaming fraction (different from the persistent pulsar emission model due to the faintness of the predicted emission). For this model, we only show the detectable fraction if using the full MWA array (128 tiles with a single primary beam). This is reasonable given the magnetar collapse is likely to occur minutes to hours following the BNS merger when we are likely to have better positional information (for a comparison between different observing modes see Section 3.4). The fractions of detectable events for beaming fractions of 0.01, 0.1, and 1 are 7%, 30%, and 5%, respectively. Note that the detectable fraction peaks at a beaming fraction of \(\sim 0.1\), which is due to a balance between the brightness of the emission (i.e. the maximum distance we are sensitive to and the number of GW sources within that volume) and the probability of the emission beam encompassing our line of sight.

\begin{table} \begin{tabular}{l l l l l} \hline Observing mode & \multicolumn{2}{c}{Field of view (deg\({}^{2}\))} & \multicolumn{2}{c}{Sensitivity (Jy)} \\ \cline{2-5} & 120 MHz & 200 MHz & 1 ms & 30 min \\ \hline Full array & 990 & 269 & 136 & 0.027 \\ Single dipole per tile & 4838 & 3196 & 2172 & 0.432 \\ Four sub-arrays & 3297 & 896 & 543 & 0.108 \\ \hline \end{tabular} \end{table} Table 1: A summary of the three observing modes (see Section 3.2), including the field of view (down to 20% of the primary beam) at both 120 MHz and 200 MHz, and the sensitivity for 1 ms and 30 min integrations. Here the sensitivity is quoted for 185 MHz (extensively used for MWA GRB triggered follow-up; Anderson et al., 2021; Tian et al., 2022, 2022), which we expect to be accurate to within \(\sim 30\%\) at 120 MHz and 200 MHz (note that the MWA sensitivity is extremely dependent on the sky position and observational elevation, Sokolowski et al., 2017).

Figure 6: Differential distributions as a function of inclination angle of GW detected events in the simulated BNS population (black dotted line) and GW-radio jointly detectable events (dashdot lines) for the four coherent emission models introduced in Section 2. (a) The interaction of NS magnetospheres. The dashdot lines represent those events with radio emission predicted by the NS interaction model to be detectable by the MWA with the black, red, and cyan corresponding to detections by the full array, sub-arrays and single dipole (see Section 3.2). (b) The jet - ISM interaction. Here we show the distribution of GW events with radio fluence predicted by the jet-ISM interaction model to be above the MWA sensitivities (assuming a 10 ms pulse). (c) The persistent pulsar emission from the magnetar remnant. Given the predicted emission is so bright that its detectability is only dependent on the viewing angle (see Section 2.3), here we show the distribution of radio detectable events for the three beaming fractions, with the dashdot lines in black, blue, and yellow representing those events with a pulsar beaming fraction of 0.01, 0.1, and 1, respectively. Note that the black dotted line representing the GW detected population overlaps with the yellow dashdot line (see Section 3.3). (d) The magnetar collapse. Here we show the distribution of GW events with radio emission predicted by the magnetar collapse model to be detectable by the MWA full array (assuming a 10 ms pulse).

### Radio detectability rate of GW events by MWA

Apart from the intrinsic brightness of coherent radio emission and the viewing geometry of BNS mergers (including inclination angle and distance), the radio detectability rate of GW events also depends on the GW localisation in the frame of the telescope. Taking the three MWA observing modes with different sensitivities and sky coverages, i.e. the full array, single dipole, and sub-arrays (see Section 3.2), we can estimate the radio detectability rates based on the sensitivity map of the LVK detector network, which will provide a metric for assessing the success rate of our observing strategies (see Section 4). We plot the FoV (down to 20% of the maximum power) of the three MWA observing modes in Panel (a) of Figure 7, with the red, gray, and magenta ellipses corresponding to the single dipole per tile, full array, and an example four sub-arrays configuration, respectively. As shown in this figure, the pointing strategies for the three observing modes are different. While the single dipole per tile has maximum sensitivity at zenith, we chose to point the full array and the sub-arrays towards the O4 LVK highest sensitivity sky region above the Indian Ocean (for our observing strategy see Section 4). The single dipole per tile, full array and sub-arrays can cover 12.4%, 4.9%, and 12.6% of GW detections, respectively. As a comparison of sky coverage, we plot three more sub-array configurations, as shown in Panels (b), (c), and (d) of Figure 7, which will be used to determine the optimal pointings of the sub-array observing mode (see Section 4). The Alt/Az pointings of each of the six tested observing mode beam configurations are listed in Table 2. In order to infer the radio detectable fraction of GW events by the MWA, we convolved the probability coverage of the three observing modes with the fraction of GW and radio joint detections from our simulation (see Section 3.3). First, we took into account the sensitivity change over the MWA primary beam by creating a similar sensitivity map as shown for the LVK in Figure 7 down to 20% of the maximum power for the three different observing modes outlined in Section 3.2 (for the beam response of each MWA observing mode explored in this analysis see Figure 9 and Appendix B). Note that in the sub-array observing mode where the primary beams overlap we compared the responses from all beams and chose the best one for the overlapping regions (see Panel c, d, e and f in Figure 9).
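The beam-weighted convolution described in the next paragraph can be summarised schematically as follows. This is our own illustration of the bookkeeping, not the actual pipeline; the array names and the `radio_detect_frac` callable are placeholders.

```python
import numpy as np

def joint_detection_fraction(gw_prob, mwa_response, sens_1ms, radio_detect_frac):
    """Schematic Section 3.4 convolution: for each sky pixel in the MWA beam,
    degrade the zenith 1-ms sensitivity by the beam response, look up the
    fraction of simulated mergers whose predicted fluence exceeds that limit,
    weight by the LVK detection probability of the pixel, and sum.

    gw_prob           : per-pixel probability that an O4 detection lands there
    mwa_response      : per-pixel primary-beam power (0-1; best beam if overlapping)
    sens_1ms          : zenith 1-ms sensitivity of the chosen mode [Jy]
    radio_detect_frac : callable, fluence limit [Jy ms] -> detectable fraction
    """
    limits = sens_1ms / np.clip(mwa_response, 1e-3, None)  # worse away from beam centre
    per_pixel = np.array([radio_detect_frac(l) for l in limits])
    return np.sum(gw_prob * per_pixel)
```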
Next, for each position in the MWA primary beam, we applied a radio detection criterion with the corresponding fluence or flux density limit to obtain a radio detection fraction for each of the four models in Section 2 based on our simulation results presented in Section 3.3. Multiplying this fraction by the GW detection probability, i.e. the fraction of BNS mergers expected to be detected at that position by LVK during O4, and integrating over the MWA beam, we obtained the total number of GW and radio joint detections for the six observing mode beam configurations listed in Table 2. The final detectable fraction of coherent radio emission from BNS mergers are summarised in Table 3, which will be used to justify our observing strategies (see Section 4). We did a similar analysis for 200 MHz and include the results in Table 4 in Appendix C. As can be seen from Table 3, the sub-array observing mode is expected to yield the most radio detections of BNS mergers due to a reasonable balance between sensitivity and sky coverage. While the full array has the best sensitivity, allowing a good fraction of the faint emission predicted by the magnetar collapse model to be detected, it cannot compete with the sub-arrays in regard to other emission models, especially the pulsar emission, which is so bright that sky coverage plays the dominant role. On the contrary, the single dipole has the largest sky coverage that can detect the most pulsar emission. However, its poor sensitivity is insufficient for detecting the emission from the magnetar collapse. In conclusion, sub-arrays are the best observing mode for searching for coherent radio emission from LVK O4 BNS mergers. Comparing the simulated results for the four sub-array configurations in Table 3, we find that sub-array (b) has the highest probability for detecting signals predicted by the four emission models and is thus the optimal configuration to use. ## 4 MWA observing strategies The MWA has a proven rapid-response system that can be on target within seconds of receiving a transient alert (Hancock et al., 2019), making it possible to catch any dispersion-delayed (FRB-like) radio signals emitted at the moment of a cataclysmic event by triggering observations with the high time resolution VCS. Combined with a wide FoV capable of covering a significant portion of the GW positional uncertainties, it is highly suitable for searching for coherent radio counterparts to BNS mergers. Here we discuss MWA observing strategies in targeting each of the four coherent emission models presented in Section 2 based on our simulation results in Section 3. Our focus is on two observing systems available on the MWA, the rapid-response system and the buffering system. Our choice between these two observing systems is motivated by the expected arrival time of coherent radio emission according to the models presented in Section 2 and the state of the telescope (whether it is observing or idle) at the time of an event. 
Note that the observing strategies discussed here are completely pre-programmed and automated via the Transient RApid-response using Coordinated Event Triggering (TRACE-T)3 web application built under the Astronomy Data and Computing Services (ADACS) Merit Allocation Program (project IDs: GAnderson_2022A, GAnderson_2023A). Footnote 3: [https://github.com/ADACS-Australia/TraceT](https://github.com/ADACS-Australia/TraceT)

\begin{table} \begin{tabular}{l|c c c c} \hline Mode & Pointing\#1 & Pointing\#2 & Pointing\#3 & Pointing\#4 \\ \hline Full array & \((58.48^{\circ},261.21^{\circ})\) & & & \\ Single dipole per tile & \((90^{\circ},0^{\circ})\) & & & \\ Sub-arrays a & \((90^{\circ},0^{\circ})\) & \((66.85^{\circ},270^{\circ})\) & \((43.97^{\circ},270^{\circ})\) & \((66.85^{\circ},180^{\circ})\) \\ Sub-arrays b & \((90^{\circ},0^{\circ})\) & \((66.85^{\circ},270^{\circ})\) & \((43.97^{\circ},270^{\circ})\) & \((59.35^{\circ},219.88^{\circ})\) \\ Sub-arrays c & \((90^{\circ},0^{\circ})\) & \((66.85^{\circ},270^{\circ})\) & \((43.97^{\circ},270^{\circ})\) & \((56.09^{\circ},314.59^{\circ})\) \\ Sub-arrays d & \((90^{\circ},0^{\circ})\) & \((66.85^{\circ},270^{\circ})\) & \((43.97^{\circ},270^{\circ})\) & \((66.85^{\circ},90^{\circ})\) \\ \hline \end{tabular} \end{table} Table 2: Pointings of the different observing mode beam configurations (see Section 3.2 and Figure 7) in the MWA frame in Alt/Az coordinates. Note that the full array and the single dipole per tile have a single beam, and the sub-arrays have four beams.

Figure 7: Similar to Figure 1, here we plot different observing strategies over the GW probability density map. In Panel (a), the lines in different colors show the MWA FoV (down to 20% of the maximum power) for different observing modes, with the red line corresponding to a single dipole per tile with maximum sensitivity at zenith, the gray line to the full array pointing to the most likely location of GW detections (red cross), and the magenta line to the four sub-arrays. The four pointings of the sub-array observing mode overlap at 50% of the primary beam response, and one of them (the rightmost close to the equator) is towards the zenith. The red, gray, and magenta contours cover 12.4%, 4.9%, and 12.6% of GW detections, respectively. Panels (b), (c), and (d) show different sub-array configurations in aid of determining the optimal pointings of the sub-array observing mode (see Section 4).

The first two models presented in Sections 2.1 and 2.2, which include the NS interaction and the jet-ISM interaction scenarios, predict prompt FRB-like emission to be emitted just prior to or during the merger. However, even if a BNS merger was detected at the nominal O4 and O5 horizon limit (190 and 300 Mpc; Abbott et al., 2020), the dispersion delay would be between \(\lesssim 14\)-\(40\) s and \(\lesssim 22\)-\(60\) s, respectively, at 120 MHz depending on the Galactic and host dispersion measure contributions. An event that occurred at the same distance as GW170817 (40 Mpc) would have an even smaller delay of \(\lesssim 3\)-\(30\) s (James et al., 2019). These dispersion delays are potentially much shorter than the delay in the LVK pipeline produced Preliminary Alerts (delay of minutes), which will be the first automatic alert to contain positional information4. Even with a best-case rapid response time (\(<14\) s; Hancock et al., 2019), it is unlikely the MWA will be on-target to detect the earliest predicted FRB-like signals if we rely solely on the Preliminary Alerts.
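The time budget set by dispersion can be estimated from the standard cold-plasma delay; the sketch below is our own illustration, and the dispersion-measure (DM) ranges are back-of-the-envelope assumptions chosen to bracket the delays quoted above rather than the paper's DM model.

```python
def dispersion_delay_s(dm, nu_mhz=120.0):
    """Cold-plasma dispersion delay relative to infinite frequency:
    t = 4.15 ms x DM x (nu/GHz)^-2, with DM in pc cm^-3."""
    return 4.15e-3 * dm * (nu_mhz / 1e3) ** -2

for label, dm_range in [("GW170817-like (~40 Mpc)", (10, 100)),
                        ("O4 horizon (~190 Mpc)", (50, 140)),
                        ("O5 horizon (~300 Mpc)", (75, 210))]:
    lo, hi = (dispersion_delay_s(dm) for dm in dm_range)
    print(f"{label}: {lo:.0f}-{hi:.0f} s at 120 MHz")
```

These assumed DM ranges reproduce delays of roughly 3-30 s, 14-40 s and 22-60 s, illustrating how little time the MWA has to get on target without an Early Warning Alert or negative-latency buffering.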
Footnote 4: [https://emfollow.docs.ligo.org/userguide/](https://emfollow.docs.ligo.org/userguide/) In order to overcome this expected large latency, we plan to trigger MWA observations in a pre-specified observing configuration on any LVK alert of a new merger involving at least one NS component. On receiving an alert, the MWA will automatically divide into four sub-array pointings (i.e. the sub-array observing mode; see Section 3.2), and shadow a large area of the sky that overlaps the highest sensitivity region of the LVK network over the Indian Ocean. Based on our simulation results in Section 3.4, the sub-array (b) in Figure 7 is the optimal observing mode for targeting the prompt radio emission predicted to be produced by NS interactions and jet-ISM interactions (see Table 3), with the four beam pointing directions given in Table 2. In addition, on receiving an LVK alert, we can also trigger a ring buffer voltage dump (Morrison et al., 2023), collecting 240 s of negative latency data to catch any early merger emission. As mentioned in Section 3.2, we anticipate that at least some mergers with an NS component will be detected during their inspiral, which will generate an Early Warning Alert. Such alerts are likely to be transmitted at the time of the merger, which means that when combined with our proposed MWA sub-array pointing configuration, we will be on target in time to detect the very earliest FRB-like signals. Using MWA to target the other two emission models, i.e. the persistent pulsar emission (see Section 2.3) and the magnetar collapse (see Section 2.4) is less time-critical as we expect any associated signals to occur between \(\sim 1000\)-\(10000\) s post-merger. This means that the expected detectable fraction of radio signals for these two emission models will improve far beyond what we expect for our fixed beam and sub-array pointings listed in Table 3 (close to the predictions in Section 3.3) as MWA repoints according to updated positional information in subsequent LVK alerts. Depending on the size and shape of the GW error distributions, we will either repoint the four sub-arrays to continue to cover large portions of the sky or drop to a single beam that utilises the full sensitivity of the array. In order to capture the FRB-like radio signal from the magnetar collapse that could occur \(\sim 10000\) s post-merger, we will continue recording with the VCS for up to 3 hours. In summary, we propose a two-pronged triggering strategy for the MWA during the LVK O4 run in order to maximise our chances of a successful detection of coherent radio counterparts to GW events. 
\begin{table} \begin{tabular}{l|c c c c} \hline Mode / Model & NS interaction & Jet-ISM interaction & Pulsar emission & Magnetar collapse \\ \hline Full array & 2.6\% & 1.2\% & 4.4\% & 1.1\% \\ Single dipole per tile & 2.2\% & 0.2\% & 11.2\% & 0.1\% \\ Sub-arrays a & 4.6\% & 1.2\% & 11.4\% & 0.5\% \\ **Sub-arrays b** & **4.6\%** & **1.2\%** & **11.8\%** & **0.5\%** \\ Sub-arrays c & 4.2\% & 1.1\% & 11.2\% & 0.5\% \\ Sub-arrays d & 4.2\% & 1.2\% & 10.6\% & 0.5\% \\ \hline \end{tabular} \end{table} Table 3: Detectable fraction of the four model emissions at 120 MHz (see Section 2) among all GW detections in O4 by the three MWA observing modes (see Section 4). The Sub-arrays a, b, c and d correspond to the four sub-array configurations displayed in Figure 7. **The bold row of Sub-arrays b is our preferred observing mode for searching for coherent radio emission from BNS mergers, as discussed in Section 4**.

If the MWA is idle then we will:

* Shadow the LVK network's highest sensitivity region over the Indian Ocean, maximising our sky coverage by pointing MWA using the 4 beam sub-array configuration (b) defined in Table 2 (see also Figure 7b) but operating in a no-capture or no-archive mode;
* On receiving an Early Warning Alert or a Preliminary Alert of a BNS or BH-NS merger, we will trigger the buffering mode, obtaining up to 240 s of negative latency data on a significant portion of the sky and record for a further 15 minutes;
* If the source is a BNS merger and we receive subsequent alerts (Preliminary Alerts and Initial Alerts) within 3 hours post-merger we will either
  * continue observing with the 4 beam sub-array configuration (b) if no sky map is provided or if only 1 GW detector detects the event (resulting in a poor position), recording up to 1 hour post-merger or
  * repoint the 4 sub-array beams individually to cover the best GW sky positions in the Southern Sky if the sky map is generated using two or more GW detectors, recording up to 3 hours post-merger.
* Continue to repoint the 4 sub-array beams individually on receiving subsequent alerts (Preliminary Alerts and Initial Alerts) if improved positions and/or positional errors become available, recording up to 3 hours post-burst; and
* Cancel the observation in the case of a Retraction Alert or if the GW event is outside the MWA sky for up to 3 hours into the future.

When the MWA is in use:

* On receiving an Early Warning Alert or Preliminary Alert, we will instead override the current observations. If there is no or poor localisation, we will use the 4 beam sub-array configuration (b) (see Figure 7b and Table 2), recording with the VCS for 15 minutes for a BH-NS merger or for up to one hour post-merger for a BNS merger;
* If a sky map is available with the alert that was generated using two or more GW detectors, we will repoint the array to cover the positional uncertainties in the Southern sky, either repointing the four sub-arrays individually or a single beam depending on the positional accuracy, recording up to 3 hours post-merger.
* Continue to repoint according to the above strategy on receiving subsequent alerts with updated positions for up to 3 hours post-merger.

The final data product collected by the VCS from each GW trigger will be raw voltages.
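Before turning to how the voltage data are searched, the branching logic of this two-pronged strategy can be summarised schematically. The sketch below is our own distillation of the bullet points above; the alert-type strings and function name are ours, not the TRACE-T API.

```python
def mwa_trigger_plan(alert, skymap_from_two_or_more_detectors, source, mwa_idle):
    """Schematic triggering logic (not TRACE-T code).
    Returns (pointing, recording) for a given LVK alert."""
    if alert == "Retraction":
        return "cancel", None
    if alert in ("EarlyWarning", "Preliminary") and mwa_idle:
        # Negative-latency buffer dump plus wide sub-array coverage of the LVK hotspot
        return "sub-array (b)", "buffer 240 s + VCS 15 min"
    if not skymap_from_two_or_more_detectors:
        # Poor or missing localisation: stay wide
        rec = "VCS up to 1 h" if source == "BNS" else "VCS 15 min"
        return "sub-array (b)", rec
    # Good localisation: repoint sub-arrays (or a single beam) onto the error region
    return "repoint on skymap", "VCS up to 3 h post-merger"

print(mwa_trigger_plan("EarlyWarning", False, "BNS", mwa_idle=True))
```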
In the case where the GW event is not well localised, we will perform an incoherent single pulse search for dispersed signals (Xue et al., 2017) using the presto software package (Ransom, 2001) to target prompt radio signals predicted by the NS interaction, jet-ISM interaction and magnetar collapse models. In the case where a GW event is well localised (positional error region of a few square degrees or with an identified electromagnetic counterpart), we will perform coherent beamforming (potentially at several positions, e.g. Tian et al., 2023) before conducting single pulse dedispersion searches. We can also perform offline correlation of the VCS data to create images over longer integrations to search for persistent pulsar emission or other predicted long-lived coherent radio emission (e.g. Starling et al., 2020; Tian et al., 2022). During O4, there will be between \(\sim 14-85\) BNS mergers per year (depending on the assumed merger rate, which is extremely uncertain; Abbott et al. 2023), out of which \(\sim 20\%\) (\(\sim 3-17\) BNS mergers) will have an Early Warning Alert (detected 10 s prior to merger; Magee et al. 2021). However, the BNS mergers with Early Warning Alerts may be outside the MWA field of view. Given the high sensitivity GW sky region constantly monitored by the MWA covers 12.6% of GW BNS detections (see Section 3.4), we expect up to 2 events with Early Warning Alerts to be within the MWA 4 beam sub-array configuration (b) field of view during O4. Note that for the BNS mergers without Early Warning Alerts, we will still trigger on Preliminary Alerts and/or Initial Alerts, which contain positional information for pointing the MWA. Assuming 30% sky coverage of the MWA (corresponding to the red contour in Figure 1), we would expect to trigger on \(\sim 3-20\) of these events per year. In summary, we expect to successfully trigger on \(\sim 5-22\) BNS mergers per year during O4, of which 2 might have Early Warning Alerts. If we assume that all four coherent emission mechanisms operate in all BNS mergers (see Section 2), then we can predict how many BNS mergers will be detected by the MWA. For the 2 triggers with Early Warning Alerts, based on the detectable fraction of the model emission given in table 3, we expect to detect the early emission models, i.e. the NS interaction and the jet-ISM interaction, from \(\lesssim 1\) BNS merger. For up to 22 triggers with Preliminary Alerts and/or Initial Alerts, we expect to detect the late-time emission models, i.e. the pulsar emission and the magnetar collapse, from \(\sim 2\) and \(\lesssim 1\) BNS mergers, respectively. ## 5 Conclusions In this paper, we have investigated the prospects of detecting coherent radio counterparts of GW events using MWA VCS triggered observations. We have considered four coherent emission models applicable to BNS mergers, including the interaction of NS magnetic fields, jet-ISM interaction, persistent pulsar emission, and magnetar collapse, which were extensively studied in previous works (Rowlinson and Anderson, 2019; Rowlinson et al., 2020, 2021; Anderson et al., 2021; Tian et al., 2022, 2022). However, different from previous works, here we have taken into account the viewing angle dependence of these coherent emission models and found their detectability is largely dependent on this parameter. In order to determine the radio detectable fraction of GW events, we have performed a population synthesis of binary mergers randomly distributed within the LVK O4 detector horizon. 
We have considered three observing modes for the MWA: the single dipole per tile with the largest FoV, the full array with the best sensitivity, and splitting MWA into four sub-arrays for obtaining the best coverage of the LVK O4 high sensitivity region over the Indian Ocean (see Figure 7). As a result of this work, we come to the following main conclusions:

1. Comparing the simulated radio detections by the three observing modes (see Table 3), we have shown that sub-arrays are the best compromise between sky coverage and sensitivity as MWA will be on-sky to detect coherent radio emission from 12.6% of the BNS merger population detected by LVK during O4.
2. The 4 sub-array configuration (b) with pointings given in Table 2 provides the best coverage of all configurations tested. Assuming all BNS mergers detected during O4 emit coherent radio signals similar to those explored in Section 2, we expect to detect 4.6%, 1.2%, 11.8% and 0.5% of coherent radio emission from the NS interaction, jet-ISM interaction, pulsar emission and magnetar collapse, respectively, when observing in this beam configuration.
3. We predict MWA will successfully trigger on between \(\sim 5\) - 22 BNS mergers per year during O4, 2 of which may have Early Warning Alerts and be in the 4 beam sub-array configuration (b) field of view, while the rest will have Preliminary Alerts and/or Initial Alerts and be above the MWA horizon. For all these triggers, including both Early Warning Alerts and Preliminary and/or Initial Alerts, we expect to detect coherent radio emission predicted to be produced by the NS interaction, jet-ISM interaction, pulsar emission, and magnetar collapse from \(\lesssim 1\), \(\lesssim 1\), \(\sim 2\), and \(\lesssim 1\) BNS mergers, respectively.

The MWA, with its rapid-response triggering system and buffering mode, is currently one of the most competitive radio telescopes for performing rapid follow-up observations of GWs in searching for coherent radio emission associated with BNS mergers. Based on the timescales of the various coherent emission models relative to the evolution of a BNS merger, we have proposed a triggering strategy to target each of them. We will keep the MWA pointed at the high sensitivity GW sky region and trigger the buffering mode to target the NS interaction and jet-ISM interaction models, and continue recording with the VCS for up to 3 hours to target the persistent pulsar emission and magnetar collapse models. With up to two successful early triggers during O4, we could potentially make the first detection of coherent radio emission from BNS mergers or place significant constraints on the models. **Looking forward to the future, the MWA will soon undergo an upgrade to Phase III where all 256 tiles will be connected to the correlator, which will double the sensitivity to the millisecond timescale signals we are searching for. Based on the model predictions in Section 3, this additional sensitivity will improve our chances of detection with the MWA by a factor of \(\sim 1.5\). Furthermore, our experiment with the MWA demonstrates the importance of incorporating rapid response and sub-array observing capabilities into other low-frequency facilities such as the SKA-Low, which will have superior instantaneous sensitivity on short timescales.
In addition, the ability to rapidly trigger observations that tile large portions of the sky with sub-array beams is useful for many transient science cases beyond GW astrophysics, particularly in the multi-messenger field when considering neutrino events, cosmic rays, and very high-energy (TeV) gamma-ray transients.** ## 6 Acknowledgements This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji People as the Traditional Owners and native title holders of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. KG acknowledges support through Australian Research Council Discovery Project DP200102243. FHP is supported by a Forrest Research Foundation Fellowship and acknowledges the support of the Australian Research Council (ARC) Centre of Excellence for Gravitational Wave Discovery under grant CE170100004. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia. The following software and packages were used to support this work: Astropy (The Astropy Collaboration et al., 2013; Astropy Collaboration et al., 2018), numpy (van der Walt et al., 2011), scipy (Jones et al., 2001), matplotlib(Hunter, 2007). This research has made use of NASA's Astrophysics Data System.
We present and assess the prospects of detecting coherent radio counterparts to gravitational wave (GW) events using MWA triggered observations. The MWA rapid-response system and buffering mode (roughly four minutes of negative latency) can capture radio emission appearing from seconds to hours after a binary neutron star (BNS) merger. The MWA's wide field of view (about 1000 deg^2 at 120 MHz) and its location beneath a high-sensitivity sky region of the LIGO-Virgo-KAGRA (LVK) detector network make it likely to be on target for GW events. We consider three observing configurations for the MWA to follow up GW BNS merger events, including a single dipole per tile, the full array, and four sub-arrays.
2309.17089
Too Big, so Fail? -- Enabling Neural Construction Methods to Solve Large-Scale Routing Problems
In recent years new deep learning approaches to solve combinatorial optimization problems, in particular NP-hard Vehicle Routing Problems (VRP), have been proposed. The most impactful of these methods are sequential neural construction approaches which are usually trained via reinforcement learning. Due to the high training costs of these models, they are usually trained on limited instance sizes (e.g. serving 100 customers) and later applied to vastly larger instance sizes (e.g. 2000 customers). By means of a systematic scale-up study we show that even state-of-the-art neural construction methods are outperformed by simple heuristics, failing to generalize to larger problem instances. We propose to use the ruin recreate principle that alternates between completely destroying a localized part of the solution and then recreating an improved variant. In this way, neural construction methods like POMO are never applied to the global problem but just in the reconstruction step, which only involves partial problems much closer in size to their original training instances. In thorough experiments on four datasets of varying distributions and modalities we show that our neural ruin recreate approach outperforms alternative forms of improving construction methods such as sampling and beam search and in several experiments also advanced local search approaches.
Jonas K. Falkner, Lars Schmidt-Thieme
2023-09-29T09:36:37
http://arxiv.org/abs/2309.17089v1
# Too Big, so Fail? - Enabling Neural Construction Methods to Solve Large-Scale Routing Problems

###### Abstract

In recent years new deep learning approaches to solve combinatorial optimization problems, in particular NP-hard Vehicle Routing Problems (VRP), have been proposed. The most impactful of these methods are sequential neural construction approaches which are usually trained via reinforcement learning. Due to the high training costs of these models, they are usually trained on limited instance sizes (e.g. serving 100 customers) and later applied to vastly larger instance sizes (e.g. 2000 customers). By means of a systematic scale-up study we show that even state-of-the-art neural construction methods are outperformed by simple heuristics, failing to generalize to larger problem instances. We propose to use the ruin recreate principle [42] that alternates between completely destroying a localized part of the solution and then recreating an improved variant. In this way, neural construction methods like POMO [31] are never applied to the global problem but just in the reconstruction step, which only involves partial problems much closer in size to their original training instances. In thorough experiments on four datasets of varying distributions and modalities we show that our neural ruin recreate approach outperforms alternative forms of improving construction methods such as sampling and beam search and in several experiments also advanced local search approaches.

## 1 Introduction

Neural Construction (NC) methods [5; 30; 25; 12; 31; 51] have been the driving force behind the success of data driven and machine learning based solution approaches for routing problems since the advent of pointer networks [49] in 2015. Apart from the well-known Traveling Salesman Problem (TSP), the most prominent of these problems is the Capacitated Vehicle Routing Problem (CVRP), which involves the planning of several tours to serve a number of \(N\) customers from a single depot with capacitated vehicles [46]. Notwithstanding their success and demonstrated performance on small scale (usually uniform) data, NC approaches exhibit major problems with generalization to larger instances. We perform a comprehensive computational study to discover and evaluate the weaknesses of current state-of-the-art NC methods. The results show that generalization to larger problem sizes remains an unsolved problem for all methods no matter the inference strategy. Even advanced and expensive search approaches like Simulation Guided Beam Search (SGBS) [8] only lead to marginal improvements over a greedy search strategy as soon as instance sizes are more than twice the size of the instances in the training data. Moreover, existing inference methods fail to make effective use of increased inference times to achieve significant improvements in terms of final performance. We take these results as motivation to design a new meta control method which transforms the constructive approach into an iterative improvement method applied to smaller sub-graphs of the original problem. The advantages of this formulation are twofold: (i) the constructive method is applied to promising sub-graphs with a size close to the training set where NC methods have been shown to outperform many heuristic approaches and (ii) the iterative formulation can make effective use of additional inference time by focusing on sub-graphs with high potential for further improvement.
### Generalization performance of NC methods

State-of-the-art constructive methods for the CVRP like POMO [31] create a solution sequentially by consecutively adding customer nodes to a tour until the capacity of the vehicle is exhausted. At this time the vehicle returns to the depot and a new tour is started. NC methods have been shown to outperform simple construction heuristics like the sweep method [16] and the Clarke-Wright savings algorithm [9] on problem instances with uniformly sampled coordinates close to the size \(N\) of their training set [30]. The sweep method rotates a beam around the depot node, adding customer nodes sequentially to a tour in the order they are passed by the beam, whereas the Savings algorithm starts with singleton routes and creates tours by consecutively merging the two routes which lead to the largest saving, i.e. reduction in total cost. In Figure 1 we show the performance of POMO and SGBS [8], an advanced beam search approach with efficient rollouts and backtracking. For POMO we evaluated two different inference strategies, greedy and sampling. The greedy approach performs a greedy rollout by taking the node with maximum probability specified by the learned policy at each step. The rollouts are done in parallel for each possible augmented starting node from which a tour can be created. In contrast, the sampling approach performs a number of rollouts by sampling the next node from the stochastic policy model.

Figure 1: **(a)** and **(b)**: Results of constructive methods POMO [31] (greedy, sampling), SGBS [8] and Clarke-Wright Savings [9] on uniform and mixed data for different instance sizes. **(c)**: Results on Uchoa benchmark [47]. Note that for TAM-AM only results for \(N>500\) were reported in [21]. For the benchmark we also report the _best known solution_ (BKS). **(d)**: Normalized cost and run time of the POMO sampling approach for different sampling sizes on uniform and mixed data for instance size \(N=500\). Please note the logarithmic scale of the x-axes.

Figure 1 shows that the models, which were trained on a set of instances of size \(N=100\), still achieve reasonable performance for problems of size \(N=200\). For these instances of twice the size of the training instances they perform close to or on par with the savings algorithm. However, all methods are already significantly outperformed by the heuristic for problems of size \(N=500\) and beyond. In sub-figure 1(c) we show the results on the well-known Uchoa benchmark [47]. The data involves varying coordinate, demand and capacity distributions. For that reason and in order to plot a smooth curve to enable a useful comparison, we put the results into bins of size 5 and take the mean over each bin (the non-binned results can be found in the Appendix). Furthermore, we also show the benchmark results of the recent real-time NC method TAM-AM [21] and the best known solution as officially reported on CVRPLIB1. As can be seen, even this new state-of-the-art method is outperformed by the savings algorithm on the benchmark. Similar behavior for NC approaches has been observed for the TSP in [24]. Even emerging methods like [4] which are concerned with improving generalization of NC models to larger instances fail to achieve better performance than the savings algorithm on problem sizes just beyond \(N=100\). While the performance can generally be improved by training on larger instances such that the training distribution is closer to the final test distribution, this is not practical.
The reason for that is the significant increase in complexity when training NC methods with reinforcement learning on large instances, since the sequential solution based on policy gradients requires caching all gradient values for the update. Therefore, in this work we focus explicitly on the _generalization_ performance for larger problems. Footnote 1: [http://vrp.galgos.inf.puc-rio.br/index.php/en/](http://vrp.galgos.inf.puc-rio.br/index.php/en/)

### Effective use of available inference time

Another dimension which has to be considered for NC methods is the actual use of the available inference time. In general, the argument can be made that NC approaches normally should quickly find a reasonable starting solution which then can be improved with heuristics or iterative methods in a second stage. However, because of their conceptual simplicity and the easy adaptation to new problems, different approaches were proposed to leverage the learned information of the NC policy to find better solutions. These methods usually define some kind of search on the policy model and aim to efficiently traverse the corresponding search space while escaping local optima. The first such approach for NC methods solving routing problems was proposed in [5] and simply samples a number of trajectories from the stochastic policy. This sampling strategy can be seen as a form of scatter search guided by the learned policy but without backtracking. Later methods added smarter starting points, uncorrelated samples, data augmentation and softmax sparsification to the sampling strategy to improve results [31; 51; 4]. Nevertheless, these strategies very quickly have diminishing returns as can be seen in Figure 1(d). Even the advanced POMO sampling strategy achieves diminishing gains when doubling the sampling size. Other work proposes more complex search methods utilizing the probabilistic assignment space of the policy model [19; 29; 8]. Although such advanced search methods can help to increase the generalization performance, they often incur a significant increase in terms of computational resources and run times while only leading to marginal gains, as can be seen in Figure 1 and our experiments in Section 5. Thus, we propose a new method we term _Neural Ruin Recreate_ (NRR) designed to enable the use of learned construction heuristics for significantly larger CVRP instances than encountered in the training data. The main idea is to embed learned NC methods into a powerful improvement procedure which can effectively leverage the learned information of the NC policy model for larger instances. To that end we combine recent ideas to train a scoring function to estimate the assignment probability of nodes to different subsets [13] and the expected improvement [33] of applying a solution method (which is treated as a black box) to a particular sub-problem. We use the resulting scoring function to select sub-graphs (SG) of the global solution graph which have a high remaining improvement potential. Our algorithm is defined in terms of a ruin-recreate procedure [42] and addresses several detailed design decisions consisting of: i) initial solution, ii) SG construction, iii) SG selection, iv) SG solution (update) and v) acceptance of the update. **Our contributions:** 1. We show in a systematic scale-up experiment that neural construction methods do not generalize well to problem sizes beyond those seen during training. 2.
We propose a new approach motivated by the well-established ruin recreate methodology to enable neural construction methods to efficiently solve routing problems which are up to 40x larger than the instances of their training set. 3. In a rigorous comparative study we evaluate the efficacy of our method compared to state-of-the-art constructive and improvement approaches. Our method shows competitive results, significantly outperforming all other methods on the most difficult instances of the Uchoa benchmark [47] and the real-world instances of [33]. We release all our models and code2.

## 2 Preliminaries

**Problem Formulation**: The capacitated vehicle routing problem (CVRP) is an important NP-hard combinatorial optimization problem [46]. It extends the well-known Traveling Salesman Problem (TSP) to the case with multiple vehicles. It is concerned with serving a number of \(N\) customers with coordinates in \(\mathbb{R}^{2}\) from a single depot. Each customer \(n\) has a demand \(q_{n}>0\) that needs to be served by \(K\) vehicles with homogeneous capacities \(Q\). Moreover, every tour has to start and end at the depot node and every customer node has to be visited exactly once. The objective is to minimize the total length of all tours in terms of a distance measure \(\delta:\mathbb{R}^{2}\times\mathbb{R}^{2}\rightarrow\mathbb{R}\) (usually the euclidean distance).

**Neural Construction Methods**: Neural construction (NC) methods [5; 30; 25; 12; 31; 51] create a solution sequentially one node at a time. They normally utilize an encoder-decoder model where the encoder embeds the current problem state and the decoder is queried at each step to select the next node to add to the currently constructed tour. If the depot node is selected, the vehicle returns and a new tour is started. A masking scheme ensures that the problem constraints are satisfied. In the simplest case each customer which has already been visited is masked, as well as any customer with a demand \(q\) larger than the remaining capacity \(Q_{k}\) of the currently employed vehicle \(k\). However, to tackle more complex problems advanced masking schemes can be employed [12; 32].

### Ruin Recreate Principle

The ruin recreate (RR) principle was coined by [42] and is a general approach to solve complex combinatorial optimization problems. The key idea of RR is to first _ruin_, i.e. destroy, a significant part of the current problem solution and then to _recreate_ the destroyed part, leading to a better overall solution. This concept is in strong contrast to local search (LS) approaches [2] which are commonly used to optimize routing problems and usually apply small changes to a solution to end up at a close but slightly different solution. In [42] it was demonstrated that, in particular for complex or very large problems, the RR approach leads to significantly better solutions than comparable LS methods, since it is able to better navigate the corresponding rough search spaces and to more easily escape from local optima by doing "large steps". On top of this procedure, an acceptance method like _Threshold Accepting_ [11] or _Simulated Annealing_ (SA) [28] is often used to guide the underlying search. The general procedure is described in Algorithm 1. For routing problems, the ruin step usually consists of removing a subset of the edges in a particular region of the global solution graph. Then the recreation step is concerned with reinserting useful edges to recreate a feasible solution.
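As a concrete reference for the problem formulation and the simple masking scheme described above, the following is a minimal Python sketch of the CVRP objective and the set of feasible next customers; the depot index, array layout and function names are illustrative choices and not part of the original method (the generic ruin-recreate loop follows in Algorithm 1 below).

```python
import numpy as np

def total_cost(coords, tours):
    """Total euclidean length of all tours; each tour starts and ends at the depot (node 0)."""
    cost = 0.0
    for tour in tours:
        route = [0] + list(tour) + [0]                    # close the tour at the depot
        for a, b in zip(route[:-1], route[1:]):
            cost += float(np.linalg.norm(coords[a] - coords[b]))
    return cost

def feasible_customers(visited, demands, remaining_capacity):
    """Simplest masking scheme: exclude already served customers and any customer whose
    demand exceeds the remaining capacity Q_k of the active vehicle."""
    visited = np.asarray(visited)
    demands = np.asarray(demands)
    return np.flatnonzero(~visited & (demands <= remaining_capacity))

# toy instance: node 0 is the depot, nodes 1..5 are customers
coords = np.array([[0.5, 0.5], [0.1, 0.2], [0.9, 0.8], [0.3, 0.7], [0.8, 0.1], [0.2, 0.9]])
demands = np.array([3, 4, 2, 5, 1])                       # q_n for customers 1..5
tours = [[1, 3], [2, 4], [5]]
print(total_cost(coords, tours))
visited = np.array([True, False, True, False, False])     # customers 1 and 3 already served
print(feasible_customers(visited, demands, remaining_capacity=4) + 1)  # map back to node ids
```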
```
input : Solution space S, cost function c, stopping criterion
1  s  ← init(S)                      // Construct initial solution
2  while not stopping criterion do
3      g  ← select_region(s)         // Choose part of solution to ruin
4      s' ← ruin(s, g)
5      s' ← recreate(S, s')
6      s  ← accept(s, s', c)         // Decide whether to accept new solution
7  return s
```
**Algorithm 1** Ruin Recreate

The RR concept has been reinvented several times over the years and is known under different names. To the best of our knowledge the principle was first applied in the method proposed in [10] for wiring of electronic circuits, which was called _Rip-Up and Reroute_. Later, _Large Neighborhood Search_ (LNS) [43], _Ruin Recreate_ (RR) [42] and _Partial OPtimization Metaheuristic Under Special Intensification Conditions_ (POPMUSIC) [40] were introduced, which apply the same principle to optimize VRPs. In the machine learning (ML) field the concept was first used with some learned components by _Neural Large Neighborhood Search_ (NLNS) [20] and _Learning to Delegate_ (L2D) [33]. However, all of these related approaches use slightly different strategies for the region selection, ruin and recreate steps (Alg. 1 lines 3, 4 and 5 respectively). Regarding the ML approaches, NLNS employs an NC method in the recreate step to repair randomly destroyed sub-solutions, whereas L2D utilizes a learned method for region selection while the recreation is done with heuristic solvers. Thus, we are the first to propose a combined neural RR procedure using learned methods for both the selection (neural scoring function) and the recreate (NC method) steps. Moreover, we are the first to draw attention to the existence of the different RR methods which implement similar concepts under varying names, and we conduct the first comprehensive comparison of the corresponding approaches (see Section 5).

## 3 Proposed Method

In order to apply NC methods to much larger problem sizes we propose to combine them with a ruin recreate type of improvement approach. The motivation is to apply the fully trained NC method to problems with a size close to that of their respective training instances. Our procedure first creates a set \(G\) of sub-graphs from the global graph \(s\) representing the current solution. Then we aim to select a suitable sub-graph \(g\) from the set \(G\). This SG is ruined by completely removing all of its edges (which represented the original part of the solution). Finally, we recreate a new solution \(s_{g}\) for sub-graph \(g\) with the respective NC method \(\pi\) and then re-insert it into the global solution \(s\). We describe the method in Algorithm 2 and visualize the general procedure in Figure 2. In the following sections we describe the different building blocks which constitute our method.

### Initial solution

Usually NC methods without advanced search procedures are concerned with constructing an initial solution. These initial solutions are crucial for subsequent iterative improvement algorithms to find good final solutions. In particular, existing work [7; 50] has shown that randomly constructed initial solutions can often lead to sub-optimal improvement results when used for procedures which do not require significant levels of randomness as an essential part of their improvement strategy (e.g. LKH3 [17] requires random starts for each of its trials).
However, since the studied NC methods fail to produce decent initial solutions for large-scale problems, we use the Clarke-Wright savings algorithm [9] which showed strong results in our preliminary generalization study (see section 1.1).

### SG construction

Another crucial component of our algorithm is the construction of the set of sub-graphs \(G_{t}\) from the global solution graph \(s_{t}\) at iteration \(t\), shown in Figure 2 (b) and (c) and Alg. 2 (line 3). In the following we will omit the iteration index \(t\) to simplify notation. The set \(G\) represents the available SGs which can be selected for further improvement. The key idea here is to put sub-graphs \(g\) into \(G\) which are of approximately the same size \(N_{g}\approx N_{\text{train}}\) as the training instances of the respective NC method. There are different possible approaches one can utilize to construct suitable SGs. We choose to use the tours in the solution graph \(r\in s\) as an intermediate graph structure to facilitate the efficient construction of SGs. Furthermore, this approach has the direct advantage of grouping relevant graph parts on a local scale, a concept which is used in many combinatorial optimization methods [44; 41]. The insight is that optimal tours for the CVRP usually consist of nodes which are in each other's local vicinity, while far away nodes normally do not belong to the same tour. The selection of different tours to create a respective sub-graph is facilitated by representing each tour by the geometric center \(\mu_{r}=\frac{1}{|r|}\sum_{n\in r}x_{n}\) of the coordinates \(x_{n}\in\mathbb{R}^{2}\) of all nodes \(n\in r\), where \(|r|\) is the size of tour \(r\), i.e. the total number of customers belonging to that tour. We experimented with different heuristics to construct suitable SGs from these tours. Details on the exact procedure can be found in the Appendix.

Figure 2: One iteration of our NRR procedure: from the current solution graph \(s\) **(a)** several SGs are constructed **(b)** and put into the set \(G\) **(c)**. Then a promising SG \(g\) is selected according to its improvement potential \(\hat{\gamma}_{g}\) estimated by the neural scoring function \(f_{\theta}\) and its edges are removed **(d)**. Finally, the NC method \(\pi\) is applied to recreate a new sub-graph solution \(s_{g}\) **(e)** which then is inserted into the global solution graph \(s\) to arrive at the new candidate solution \(s^{\prime}\) **(f)**.

### SG selection

The next step of our algorithm (Alg. 2, line 4) is concerned with the selection of SG \(g\in G\) (in case of disjoint optimization several \(g_{1},\ldots,g_{k}\in G\)) to be ruined and recreated in the following step. In order to select useful SGs with a high remaining improvement potential, we follow [33; 13] in learning a neural scoring function \(f:\mathcal{G}\rightarrow\mathbb{R}\). This scoring function takes a sub-graph \(g\) as input and assigns it a scalar score \(\hat{\gamma}_{g}\) signifying the remaining potential for improvement when applying the respective NC method to recreate its solution. It is trained via regression on the actual improvement \(\gamma_{g}\) achieved by the NC method on a large set \(D\) of sub-graphs to estimate the remaining potential for unseen SGs during inference. The training set is created by running the respective NC method on varying sub-graphs (whose edges were removed) produced by the prior construction step and registering the achieved improvement \(\gamma_{g}\) as regression target for SG \(g\).
Then we can learn a model to represent \(f_{\theta}\) with parameters \(\theta\) by minimizing the mean squared error (MSE) according to \[\mathcal{L}_{MSE}=\frac{1}{|D|}\sum_{i=1}^{|D|}(\gamma_{g_{i}}-f_{\theta}(g_{i}))^{2}. \tag{1}\] In order to fully capture the SG structure for accurate estimates, similar to [13] we use graph neural networks (GNNs) [37] to encode all nodes as latent vectors \(\omega_{\text{node}}\in\mathbb{R}^{d_{\text{emb}}}\) of dimension \(d_{\text{emb}}\): \[\omega_{i}^{(l)}=\operatorname{GNN}^{(l)}(\omega_{i}^{(l-1)})=\sigma\left( \operatorname{MLP}_{1}^{(l)}(\omega_{i}^{(l-1)})+\operatorname{MLP}_{2}^{(l)} (\sum_{j\in\mathcal{H}(i)}e_{ji}\cdot\omega_{j}^{(l-1)})\right), \tag{2}\] where \(\omega_{i}^{(l-1)}\in\mathbb{R}^{1\times d_{\text{emb}}}\) represents the embedding of node \(i\) at the previous layer \(l-1\), \(\mathcal{H}(i)\) is the 1-hop graph neighborhood of node \(i\), \(e_{ji}\) is the directed edge connecting nodes \(j\) and \(i\), \(\operatorname{MLP}_{1}\) and \(\operatorname{MLP}_{2}\) are Multi-Layer Perceptrons \(\operatorname{MLP}:\mathbb{R}^{d_{\text{emb}}}\rightarrow\mathbb{R}^{d_{ \text{emb}}}\) and \(\sigma()\) is a suitable activation function. In order to capture more global information, which is essential for accurate predictions, the node neighborhoods \(\mathcal{H}(i)\) are based on the fully connected representation of the original problem graph and consist of the \(k\) nearest neighbors of each node in terms of euclidean distance. The input features to the encoder model are the coordinates \(x_{n}\) and demands \(q_{n}\) of each customer node \(n\in s\). Next, pooling is used to aggregate the node information for each SG by summing over the node dimension, producing sub-graph embeddings \(\omega_{g}\). Moreover, we pool over all node embeddings after each GNN layer and aggregate them to create a complete embedding of the solution graph \(\omega_{s}\), which is concatenated with the SG embeddings \(\omega_{g}\) and fed into a final regression head consisting of a stack of MLP layers. Furthermore, we use ReLU [38] and layer normalization [3]. To better understand the effect of the chosen model architecture and pooling operators we perform an ablation study and report the results in the Appendix. Interestingly, our model based on the simple order-1 GNN proposed by [37] significantly outperforms the more sophisticated graph attention networks (GATs) [48] which were also used in [33]. Since the update of a particular SG leaves the rest of \(s\) unchanged, it is very likely that a large subset of the same SGs is encountered in \(G\) for several iterations. Hence, we can achieve efficient processing by caching the scores for all evaluated SGs in a hash table for fast lookup, allowing us to skip SGs which were already encountered in earlier iterations. After the score assignment, different strategies can be utilized for the final selection of sub-graphs \(g\). The simplest option is a _greedy_ approach which always selects the SG with the highest score: \(g=\operatorname*{argmax}(\Gamma_{G})\), where \(\Gamma_{G}\) is the set of scores \(\hat{\gamma}_{g}\ \forall g\in G\). Alternatively we may apply the softmax function to \(\Gamma_{G}\) and treat the set of scores as a distribution, _sampling_ according to the resulting probabilities. Finally, we can sample a subset of _disjoint_ SGs, optimize all of them and reinsert the ones into \(s\) which led to an improvement. In preliminary experiments we found that the last approach leads to the best results.
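A minimal sketch of the three selection strategies just described, operating on one score per candidate sub-graph; the array layout, the disjointness test via node sets and the tie-breaking are illustrative assumptions, not the exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_greedy(scores):
    """Pick the single SG with the highest estimated improvement."""
    return int(np.argmax(scores))

def select_sampling(scores):
    """Treat softmax(scores) as a distribution and sample one SG."""
    p = np.exp(scores - np.max(scores))
    p /= p.sum()
    return int(rng.choice(len(scores), p=p))

def select_disjoint(scores, sg_nodes, k):
    """Select up to k SGs that share no nodes, so they can be recreated in parallel."""
    order = list(rng.permutation(len(scores)))
    order.sort(key=lambda i: -scores[i])          # prefer high scores, random tie-breaking
    chosen, used = [], set()
    for i in order:
        if len(chosen) == k:
            break
        if used.isdisjoint(sg_nodes[i]):
            chosen.append(i)
            used |= set(sg_nodes[i])
    return chosen

# toy usage: 4 candidate SGs with scores and node sets
scores = np.array([0.3, 0.9, 0.5, 0.8])
sg_nodes = [{1, 2, 3}, {4, 5}, {5, 6}, {7, 8}]
print(select_greedy(scores), select_sampling(scores), select_disjoint(scores, sg_nodes, k=2))
```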
### SG solution

The solution of the SG is the main part of the ruin recreate procedure. First, the SG is ruined by completely dropping all of its edges (see Fig. 2 (d), Alg. 2, line 5). Thereby all existing tours in the SG are destroyed and all customer nodes are marked as not being served. This is in line with the general RR principle which requires the complete destruction of a substantial part of the solution [42]. After the ruin step the SG is treated as an independent CVRP and fed into the NC method \(\pi\) to recreate it by constructing a new solution (Fig. 2 (e), Alg. 2, line 6). A suitable configuration for the NC method has to be chosen to achieve useful improvement of \(g\) within reasonable time, to be able to execute a sufficient number of iterations. For POMO the trade-off is mostly between a greedy decoding strategy and sampling with different numbers of samples. In general, we found that using SGBS in the recreate step requires too much time, limiting the number of iterations and in turn hurting final performance.

### Acceptance

The canonical RR procedure includes an explicit acceptance strategy for updates to increase the likelihood of escaping local optima. Hence, we employ the well-known simulated annealing (SA) [28] approach to control the acceptance of altered candidate solutions \(s^{\prime}\) which are created through the insertion of the recreated sub-graph \(g\) into the previous solution graph \(s\).

## 4 Related Work

Construction methods are one of the most prominent ML methods to solve routing problems. First, Pointer Networks [49] were introduced, which use an encoder-decoder model with a masking mechanism called _pointer attention_. In subsequent works this approach was improved by adding RL-based training [5], extending it to VRPs [39], replacing RNNs with Transformers [30], stabilizing training via multiple starting points (POMO) [31] or multiple decoders (MDAM) [51], and adding advanced search strategies via simulation (SGBS) [8], dynamic programming [29] or active search [19]. TAM-AM [21] employs a different approach, learning a model to partition a VRP into subsets where each subset corresponds to a TSP which satisfies the global constraints, and then using an NC model to solve these TSP instances independently from the global CVRP. Apart from NC methods, neural improvement approaches have also been proposed, which start from a solution and then iteratively improve it. Wu et al. [50] propose to parameterize heuristic operators like 2-opt with a learned policy model, an approach further improved in [34] with Dual-Aspect Collaborative Transformers (DACT), while a meta-controller for local search is learned in [14]. NeuRewriter [7] selects a region and rewrites it based on a heuristic rule set. LCP [26] improves this approach by repeatedly rewriting several segments in parallel. NLNS [20] employs an NC method as repair operator in a Large Neighborhood Search (LNS) [43], whereas a hierarchical problem decomposition approach is proposed in [53]. While existing methods utilize the NC model to repair partial solutions, we use them on a much larger scale, employing them to fully recreate a complete sub-graph (which is a substantial part of the global solution graph) from scratch. Learning to Delegate (L2D) [33] selects regions to update based on the estimated improvement but uses heuristics to reconstruct the chosen sub-solutions.
In contrast, we reformulate the procedure in a more principled and general way by drawing the connection to the well-established RR principle and employ NC methods to recreate SG solutions. Our results show that the careful design of the algorithm leads to significant performance increases. A different approach is used for NeuroLKH [52], which employs a learned model for the prior selection of the edge candidate set for the well-known heuristic solver LKH [17], which is commonly used as a baseline to compare ML-based routing algorithms. Moreover, some prior work regarding generalization performance has been done by Jian et al. [23], who use meta-learning to generalize to different coordinate distributions, while Fu et al. [15] devise a method to generalize a smaller pre-trained model to larger TSP instances, but require the expensive solution of a set covering problem. A detailed overview is given in [6; 35].

## 5 Experiments

**Datasets** To evaluate the generalization performance of different methods we focus our comparison on the well-known Uchoa benchmark dataset [47]. Moreover, we evaluate the models for which code is available on the original real-world instances used in [33] and our own dataset, which consists of mixed uniform and clustered coordinate data of size \(N\in\{500,1000,2000,4000\}\) (see Appendix).

**Baselines** We compare our method against the state-of-the-art NC methods POMO [31], SGBS [8] and TAM-AM [21], as well as the neural improvement approaches NLNS [20], L2D [33], DACT [34] and NeuroLKH [52]. Furthermore, we include the heuristic methods LKH3 [17], LKH-POP [18; 45] (which uses POPMUSIC in combination with LKH3), and an implementation of the original RR procedure [42] using random region selection and best insertion [36] as recreate step (cp. Alg. 1).

**Evaluation protocol** We run all methods for a fixed time budget of T = 60/120/240/480s for problems of size N = 500/1000/2000/4000, respectively. All hyperparameters as well as the used hardware are described in detail in the Appendix. In order to support the reproducibility of our results we will make our datasets and the code for our model and all experiments available with the publication.

**Results** We report the final cost (total length of all tours) in Table 1 and plot the solution trajectories for the Uchoa and real-world datasets in Figs. 3 and 4. NRR shows very strong performance, in particular for larger and more difficult instances. It outperforms all NC methods as well as most improvement approaches (L2D, RR and DACT) on all datasets. Only the very complex LKH methods (LKH3, NeuroLKH and LKH-POP) are in some cases able to slightly outperform our method. However, the large standard deviation (STD) for some of these methods on the \(N=4000\) and Uchoa datasets demonstrates that these approaches do not reliably converge to such good results and often can lead to considerably worse performance than NRR. This can be seen in Fig. 4 where LKH-POP achieves a good result with one "lucky" seed while the other runs are far worse, leading to the large STD band displayed. The fluctuations in the mean cost also show that some runs require substantial run time to even find a first solution. In contrast, our method shows reliable performance with consistently low STD and significantly outperforms LKH-POP on all other datasets.
Furthermore, many of the competitive results of baselines were only achieved after vastly exceeding the time budget (marked with "*" in Table 1 and displayed with an "x" marker labeled with the actual run time in the plots). Our comparison of the original NRR algorithm using Savings initialization with the alternative Sweep init shows that our method is also capable of achieving competitive performance even when starting from very bad initial solutions. This can also be seen in Figs. 3 and 4, where _nrr_sweep_ shows a very steep initial decrease in cost while some other methods are not even able to beat the Savings baseline (we want to stress here again that the Savings algorithm is a very simple heuristic method). Moreover, the figures show that the anytime performance of our method is particularly good. To verify this, we compute the area between each solution trajectory and a baseline cost. Taking inspiration from the DIMACS challenge [1] we set this baseline cost to 110% of the cost achieved by the savings algorithm, which is the fundamental method on which our analysis is based. Then we re-normalize the area by this cost to achieve values in [0, 1]. By analogy to the area under curve (AUC) metric we call this measure _Area under Savings curve_ (AUSC) and report the results in Table 2 (the exact calculation is described in the Appendix). As can be seen, our method exhibits very high values of AUSC, significantly outperforming all other methods in all but two cases. Furthermore, in order to investigate the effect of different decisions in our algorithm design, we perform an additional ablation study (because of space constraints we report the corresponding results in the Appendix).

## 6 Conclusion

This paper presents a well-motivated new approach to enable NC methods, which were learned on small training instances, to scale to routing problems which are up to 40\(\times\) larger. NRR achieves this by introducing a meta procedure based on the ruin-recreate principle which is able to leverage the learned information of the neural construction model while focusing the improvement on regions with high potential. The comprehensive experiments show that NRR is able to significantly improve the performance and generalization capabilities of NC models, being able to substantially outperform all construction methods and even some advanced improvement approaches.
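As an illustration of the AUSC metric introduced in the results discussion above (the exact definition is given in the appendix), a minimal sketch of its computation with the composite trapezoidal rule; it assumes the cost trajectory is recorded as (time, cost) pairs covering the full budget [0, T], and the toy numbers are only loosely based on the N=500 savings and NRR costs from Table 1.

```python
import numpy as np

def ausc(times, costs, savings_cost, budget):
    """Area under Savings curve: normalized area between the 1.1 x savings baseline
    and the (clipped) solution-cost trajectory of a method over the time budget."""
    baseline = 1.1 * savings_cost
    clipped = np.minimum(np.asarray(costs, dtype=float), baseline)
    area_baseline = baseline * budget                           # constant baseline over [0, T]
    area_method = np.trapz(clipped, np.asarray(times, dtype=float))  # trapezoidal rule
    return (area_baseline - area_method) / area_baseline

# toy trajectory: a method starting at the savings cost and improving over a 60s budget
print(ausc(times=[0, 10, 30, 60], costs=[39.7, 39.0, 38.7, 38.6], savings_cost=39.7, budget=60))
```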
\begin{table} \begin{tabular}{l|c c c c c c|c} \hline \hline **Model** & **N=500** & **N=1000** & **N=2000** & **N=4000** & **Uchoa** & **real world** & **Avg** \\ \hline **Savings** & 39.7 (0.0) & 72.6 (0.0) & 139.8 (0.0) & 229.0 (0.0) & 107.5 (0.0) & 155.5 (0.0) & 124.0 \\ **POMO (g)** & 45.0 (0.0) & 112.4 (0.0) & 293.7 (0.0) & 575.8 (0.0) & 132.8 (0.0) & 367.0 (0.0) & 254.4 \\ **POMO (s)** & 75.1 (0.4) & 195.7 (0.9) & 454.8 (1.1) & 882.6 (1.9) & 205.0 (0.7) & 523.1 (1.4) & 389.4 \\ **SGBS** & _43.5_ (0.0) & _105.9_ (0.0) & _274.9_ (0.0) & _539.4_ (0.0) & _128.1_ (0.0) & _348.7_ (0.0) & 240.1 \\ **LKH3** & 38.4 (0.1) & 71.3 (0.2) & 141.1 (0.7) & _250.6_ (3.7) & 106.0 (0.3) & _163.3_ (2.5) & 128.5 \\ **NeuroLKH** & **38.3** (0.1) & **71.1** (0.2) & 140.3 (0.5) & _224.2_ (44.7) & 105.9 (0.2) & 158.6 (0.5) & 123.1 \\ **LKH-POP** & 39.1 (0.2) & 73.6 (0.5) & _148.4_ (1.1) & 244.4 (5.7) & **102.1** (1.9) & _168.0_ (1.6) & 129.3 \\ **L2D** & 39.3 (0.3) & 72.4 (0.4) & 141.1 (0.5) & 234.1 (1.0) & 106.9 (0.4) & 156.3 (0.5) & 125.0 \\ **RR** & 39.5 (0.0) & 72.4 (0.0) & 139.5 (0.1) & 228.7 (0.1) & 106.5 (0.1) & 148.6 (0.0) & 122.6 \\ **DACT** & 47.3 (0.0) & 83.6 (0.0) & 158.5 (0.0) & NA & 122.5 (0.0) & 172.3 (0.0) & 139.4 \\ \hline **NRR**-sweep & 39.0 (0.3) & 72.4 (0.5) & 141.6 (0.7) & 236.4 (0.7) & 104.3 (0.5) & 148.5 (0.3) & 123.7 \\ **NRR** & 38.6 (0.1) & 71.6 (0.1) & **138.9** (0.1) & **228.6** (0.1) & 103.8 (0.2) & **147.5** (0.1) & **121.5** \\ \hline \hline \end{tabular} \end{table} Table 1: Final cost of all methods on different datasets. Best result is shown in **bold**. Standard deviation over three runs with different random seeds in brackets. For values marked with “*” the first results were only achieved (long) after the total time budget was exceeded (sometimes by an order of magnitude). The DACT code breaks for \(N=4000\) because of exp. increasing complexity which is why we report “NA”. The results for TAM-AM and NLNS are only available for the Uchoa benchmark, thus we only report them in an extended table in the Appendix but show them in Fig. 3. \begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline **Dataset** & **LKH3** & **NeuroLKH** & **LKH-POP** & **L2D** & **RR** & **NRR-swp** & **NRR** \\ \hline \(N=500\) & 0.0282 & **0.0312** & 0.0064 & 0.0060 & 0.0050 & 0.0131 & 0.0253 \\ \(N=1000\) & 0.0105 & 0.0098 & 0.0 & 0.0003 & 0.0021 & 0.0010 & **0.0122** \\ \(N=2000\) & 0.0 & 0.0 & 0.0 & 0.0 & **0.0408** & 0.0 & 0.0011 \\ \(N=4000\) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0007 & 0.0 & **0.0014** \\ **Uchoa** & 0.0056 & 0.0094 & 0.0033 & 0.0032 & 0.0067 & 0.0254 & **0.0313** \\ **real world** & 0.0 & 0.0 & 0.0 & 0.0 & 0.0415 & 0.0418 & **0.0477** \\ \hline \hline \end{tabular} \end{table} Table 2: Area under Savings curve (AUSC). The shown methods achieved at least once a value better than 1.1 \(\times\) the savings cost. All other considered baselines always have an AUSC of 0.0. Appendix ### SG construction heuristics As described in the main paper, we use the tours \(r\) as sub-structures to simplify the selection of sub-graphs. One straight forward way of selecting such tours is to specify a number \(K\) of nearest tour neighbors (measured by the euclidean distance between their geometric centers) and to select each existing tour as reference and adding its \(K\) nearest neighbors to form the sub-graph consisting of \(K+1\) tours. 
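A small sketch of this tour-based grouping, assuming each tour is given as a list of customer indices; the geometric centers follow the \(\mu_{r}\) definition from the SG construction section above, while the variable names and the fixed-\(K\) grouping are illustrative.

```python
import numpy as np

def tour_centers(coords, tours):
    """Geometric center mu_r = (1/|r|) * sum of customer coordinates for each tour r."""
    return np.array([coords[list(t)].mean(axis=0) for t in tours])

def knn_tour_subgraphs(coords, tours, K):
    """For each reference tour, group it with its K nearest tours (by center distance)
    into one candidate sub-graph of K+1 tours; returns the node lists of the SGs."""
    centers = tour_centers(coords, tours)
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    subgraphs = []
    for i in range(len(tours)):
        neighbors = np.argsort(dists[i])[: K + 1]       # includes the reference tour itself
        nodes = sorted(n for j in neighbors for n in tours[j])
        subgraphs.append(nodes)
    return subgraphs

# toy usage: 8 customers split into 4 tours (node 0 is the depot)
rng = np.random.default_rng(1)
coords = rng.random((9, 2))
tours = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(knn_tour_subgraphs(coords, tours, K=1))
```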
The obvious downside to this approach is that depending on the size of the tours \(|r|\), the size \(N_{g}\) of the resulting sub-graph \(g\) can be much smaller or much larger than \(N_{\text{train}}\) (the size of the instances in the NC training set). While in this case we can still assume that \(N_{g}\ll N\), it can very well lead to worse performance. To circumvent this problem we propose two alternative methods for the construction of \(G\). First, instead of fixing \(K\), we can add neighboring tours sequentially until the SG size is very close to the size of the training instances, i.e. \(|N_{\text{train}}-N_{g}|<\epsilon\). The second approach follows the same concept but is motivated by the sweep construction method, adding tours sequentially to the SG when their center \(\mu_{r}\) is passed by a beam which is rotated around the depot. Then, when \(|N_{\text{train}}-N_{g}|<\epsilon\), the SG is complete and the next tour in sequence is added to a new SG. The direct advantage of this method is that the resulting SGs are completely disjoint. This enables our method to better use the computation capacity of modern GPUs by solving a full batch of SGs via the NC method in parallel. Otherwise, if no disjoint SGs are required, then in order to create a larger and more diverse set of potential SGs we can rotate the beam several times, restarting it at different tour centers and afterwards remove any duplicates.

### AUSC

We propose the _Area under Savings curve_ (AUSC) as a metric to measure the anytime performance of the compared methods w.r.t. a baseline cost, in our case the cost achieved by the savings algorithm. Since many methods are not even able to beat this very simple heuristic, we set the actual baseline cost to 110% of the savings performance. Then we compute the area enclosed between the savings baseline and the solution cost trajectory of each method and normalize it by the total area under the savings curve: \[\text{AUSC}(c_{\text{m}})=\frac{\int_{0}^{T}c_{\text{savings}}-\int_{0}^{T} \min(c_{\text{m}},c_{\text{savings}})}{\int_{0}^{T}c_{\text{savings}}} \tag{3}\] where \(T\) is the total time budget for the solution and \(c_{\text{m}}\) is the cost trajectory of the corresponding method **m**, while \(c_{\text{savings}}\) is \(1.1\times\) the constant savings cost. In order to compute the area under curves with discrete time stamps of measurements, we use the composite trapezoidal rule. We visualize the concept in Figure 5.

### Dataset generation

We generate datasets with different numbers \(N\in\{500,1000,2000,4000\}\) of customer nodes. The coordinates are sampled based on a mix of clustered and uniform data. For the uniform data we simply sample uniformly from the unit square like prior work [39; 30]. The clustered data is sampled from a Gaussian Mixture Model where the number \(K\) of mixture components is randomly selected between 1 and 10 for each instance. The mean of the components is sampled from a standard Normal distribution \(\mu\sim N(0,1)\) and the (diagonal) covariance matrix \(\Sigma\) is sampled uniformly from \([0.05,0.1]\). The demands are sampled as random integers between 1 and 9, normalized by a homogeneous constant vehicle capacity \(Q=50\) for all problem sizes. Finally, the fraction of uniformly sampled points compared to clustered points is sampled according to a beta distribution with parameters \(\alpha=0.5\) and \(\beta=9\). The resulting sampling procedure is able to generate diverse problem instances with varying coordinate distributions.
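A compact sketch of this sampling procedure; equal mixture weights, the use of the sampled covariance values directly as per-component spread, and the absence of any rescaling of the clustered coordinates are simplifying assumptions made here for illustration.

```python
import numpy as np

def sample_instance(n, capacity=50, seed=0):
    """Mixed uniform/clustered CVRP instance as described above (simplified sketch)."""
    rng = np.random.default_rng(seed)
    frac_uniform = rng.beta(0.5, 9.0)                     # fraction of uniform points
    n_uniform = int(round(frac_uniform * n))
    uniform_pts = rng.random((n_uniform, 2))              # uniform in the unit square

    k = rng.integers(1, 11)                               # 1..10 mixture components
    means = rng.normal(0.0, 1.0, size=(k, 2))
    spread = rng.uniform(0.05, 0.1, size=k)               # diagonal covariance per component
    comp = rng.integers(0, k, size=n - n_uniform)         # equal component weights (assumption)
    clustered_pts = means[comp] + rng.normal(size=(n - n_uniform, 2)) * spread[comp, None]

    coords = np.vstack([uniform_pts, clustered_pts])
    demands = rng.integers(1, 10, size=n) / capacity      # integer demands 1..9, normalized by Q
    return coords, demands

coords, demands = sample_instance(500)
print(coords.shape, demands.min(), demands.max())
```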
In Figure 6 we show some examples of the resulting instances.

### Hyperparameters and hardware

**Neural Scoring Function**: The embedding dimension \(d_{\text{emb}}\) of our neural scoring function \(f_{\theta}\) is set to 128, as is the hidden dimension of the node encoder and sub-graph encoder, whereas the decoder uses a hidden dimension of 256. The node encoder uses 4 GNN [37] layers while the SG encoder and the decoder use 3 linear feed-forward layers each. For each layer we utilize ReLU [38] activation functions and additional layer normalization [3]. Between GNN layers we add residual connections. For the decoder we add dropout with a dropout probability of 0.1. As pooling operators for the node encoder we use summation and standard deviation, while the SG encoder employs summation and maximum pooling, which are invariant with respect to padding the inputs with zeros in order to create SGs of similar size for batched processing. For training we use the Huber loss [22] as loss function and the Adam optimizer [27] with an initial learning rate of 0.0005, which is halved every 35 epochs. Moreover, we clip the gradient norm above a value of 0.5. We train our model for 80 epochs with a batch size of 128. The KNN neighborhood graph \(\mathcal{G}\) which is used for the node neighborhoods \(\mathcal{H}\) is created with \(K=25\).

**Neural Ruin Recreate**: For the training of the NC method employed in NRR we use the original hyperparameters reported in [31; 8]. The experiments on uniform data are performed with the checkpoints provided by the authors. For the other experiments we retrain the POMO model (which is also used for the POMO and SGBS baselines) on mixed data of size \(N=100\). NRR is initialized with a savings solution. It uses the sweeping-based tour selection for the SG creation (see section A.1) to create disjoint sub-graphs. Then it scores these SGs via \(f_{\theta}\) and selects up to 16 SGs which are fed to the NC method to be reconstructed. As NC method we employ the POMO [31] method with sampling and a sample size of 1024. For acceptance we use simulated annealing (SA) [28] with restarts after 25 iterations without improvement. All models and baselines were run on an Intel Xeon Gold 6230 CPU (with 8 cores available) and, if required, an NVIDIA GeForce RTX 3090 GPU. The model and experiment code as well as the used datasets are released via github: [https://github.com/jokofa/NRR](https://github.com/jokofa/NRR).

Figure 5: Visualization of AUSC. The green area between the savings cost and the method solution trajectory is the AUSC value.

Figure 6: Example plots showing the coordinates of generated instances. The red square represents the depot node.

## Appendix B Ablation Studies

### Neural scoring function

In order to evaluate the effect of our design decisions for the neural scoring function we report the results of an ablation study in Table 3. We compare our original model using HuberLoss, GraphConv [37], sum and std pooling, and aggregation of the node embeddings over all GNN layers with MSELoss, graph attention networks (GAT) [48], different pooling operators and no aggregation. The results show that the HuberLoss works much better than MSE loss and that the simple GNN proposed in [37] significantly outperforms the much more advanced GAT. Although the effect of different pooling operators and the aggregation is smaller, our configuration shows the best results, in particular in terms of MSE.
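To make the regression setup behind this ablation concrete, here is a heavily simplified PyTorch sketch: it replaces the GNN encoder with a small MLP over pre-pooled sub-graph features and only keeps the Huber loss, Adam optimizer, learning rate and batch size mentioned above; the tensor shapes and the synthetic data are illustrative assumptions and not the actual training pipeline.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# synthetic stand-in for the training set D: pooled sub-graph features -> achieved improvement gamma_g
features = torch.randn(1024, 16)
targets = features[:, :4].sum(dim=1, keepdim=True) + 0.1 * torch.randn(1024, 1)

model = nn.Sequential(nn.Linear(16, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 1))
loss_fn = nn.HuberLoss()                       # Huber loss worked better than plain MSE in the ablation
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

for epoch in range(5):                         # the full model is trained for 80 epochs
    perm = torch.randperm(features.size(0))
    for i in range(0, features.size(0), 128):  # batch size 128
        idx = perm[i:i + 128]
        loss = loss_fn(model(features[idx]), targets[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(epoch, float(loss))
```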
### Neural Ruin Recreate (NRR) A second ablation study is performed to evaluate the effect of different configurations for the NRR procedure. All models are run on the mixed data validation set of size \(N=500\). We compare different construction methods (see section A.1), selection approaches and different solver modes for the NC method, sampling (with the corresponding number of samples) vs. greedy (which always uses a rollout for each customer node, i.e. \(N\)). Furthermore, we compare SA to greedy acceptance of improving moves. The results in table 4 show the effect of the chosen sweep construction and disjoint SG selection methods for different numbers of selected SGs and samples. The first row represents our original model configuration. Under the time budget of 60s our configuration seems to be a good trade-off between the parallelization capacity on the GPU using more SGs and the solver performance controlled by the number of samples. SA which also accepts solutions which are slightly worse than the current best seems to help to escape from local optima which are encountered by greedy acceptance. Other combinations of construction and selection methods like sweep construction with greedy or multi selection as well as knn or add_nn (which adds neighboring tours until the SG size is close to a specified value) lead to significantly worse performance. \begin{table} \begin{tabular}{l|c c} \hline \hline **Configuration** & **MAE** & **MSE** \\ \hline _original_ & 0.1045 & 0.0239 \\ MSE loss & 0.1091 & 0.0263 \\ GAT & 0.1176 & 0.0328 \\ max pool & 0.1056 & 0.0249 \\ mean pool & 0.1053 & 0.0253 \\ sum pool & 0.1048 & 0.0261 \\ no aggr & 0.1050 & 0.0254 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study for neural scoring function model run on the validation set of scored sub-graphs. We report mean absolute error (MAE) and mean squared error (MSE). 
\begin{table} \begin{tabular}{l l l l l l|l|l} \hline \hline **init** & **construct** & **select** & **n\_mult** & **mode** & **n\_samp** & **accpt** & **cost** \\ \hline _savings_ & _sweep_ & _disjoint_ & _16_ & _sampl_ & _1024_ & _SA_ & **36.964** \\ \hline savings & sweep & disjoint & 12 & sampl & 1024 & SA & 37.005 \\ savings & sweep & disjoint & 8 & sampl & 1024 & SA & 37.015 \\ savings & sweep & disjoint & 24 & sampl & 1024 & SA & 37.015 \\ savings & sweep & disjoint & 32 & sampl & 1024 & SA & 36.970 \\ savings & sweep & disjoint & 8 & sampl & 2048 & SA & 37.042 \\ savings & sweep & disjoint & 24 & sampl & 2048 & SA & 37.052 \\ savings & sweep & disjoint & 32 & sampl & 2048 & SA & 36.981 \\ savings & sweep & disjoint & 8 & sampl & 512 & SA & 36.992 \\ savings & sweep & disjoint & 12 & sampl & 512 & SA & 36.997 \\ savings & sweep & disjoint & 16 & sampl & 1024 & greedy & 37.013 \\ savings & sweep & disjoint & 12 & sampl & 1024 & greedy & 36.993 \\ savings & sweep & disjoint & 8 & sampl & 1024 & greedy & 36.993 \\ savings & sweep & disjoint & 8 & sampl & 2048 & greedy & 36.996 \\ \hline savings & sweep & greedy & 1 & greedy & \(N\) & SA & 37.587 \\ savings & sweep & greedy & 1 & sampl & 512 & SA & 37.462 \\ savings & sweep & multi & 8 & greedy & \(N\) & SA & 37.226 \\ savings & sweep & multi & 8 & sampl & 512 & SA & 37.049 \\ savings & knn & greedy & 1 & greedy & \(N\) & SA & 37.448 \\ savings & knn & greedy & 1 & sampl & 512 & SA & 37.720 \\ savings & knn & multi & 8 & greedy & \(N\) & SA & 37.284 \\ savings & knn & multi & 8 & sampl & 512 & SA & 37.515 \\ savings & add\_nn & greedy & 1 & greedy & \(N\) & SA & 37.338 \\ savings & add\_nn & greedy & 1 & sampl & 512 & SA & 37.312 \\ savings & add\_nn & multi & 8 & greedy & \(N\) & SA & 37.033 \\ savings & add\_nn & multi & 8 & sampl & 512 & SA & 37.342 \\ savings & add\_nn & multi & 16 & greedy & \(N\) & SA & 37.063 \\ savings & add\_nn & multi & 16 & sampl & 128 & SA & 37.338 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study for NRR. We report the final cost, i.e. total length of all routes. ## Appendix C Additional plots Figure 7: Results of constructive methods POMO [31] (greedy, sampling), SGBS [8] and Clarke-Wright savingsings [9] on the Uchoa benchmark [47]. Non binned plots which show high fluctuation because of the different recurring configurations for the coordinate and demand distribution. Note that for TAM-AM only results for \(N>500\) were reported in [21]. We also report the _best known solution_ (BKS). Figure 8: Solution trajectories for methods on 50 problem instances of size \(N=500\) from a mixed uniform and clustered distribution. All methods were run for 3 random seeds, displaying the standard deviation bands. Methods which did not produce a first feasible result within the time budget are shown with a “x” marker on the right of the figure, labeled with the actual run time. Figure 9: Solution trajectories for methods on 50 problem instances of size \(N=1000\) from a mixed uniform and clustered distribution. All methods were run for 3 random seeds, displaying the standard deviation bands. Methods which did not produce a first feasible result within the time budget are shown with a “x” marker on the right of the figure, labeled with the actual run time.
In recent years, deep learning approaches for solving combinatorial optimization problems have been proposed and have had a major impact, in particular on the NP-hard vehicle routing problem (VRP). Among them, one of the most influential classes of methods is the sequential neural construction approach trained via reinforcement learning. Because training these models is costly, they are usually trained on limited instance sizes (e.g., 100 customers) and then applied to much larger instance sizes (e.g., 2000 customers). Through a systematic scale-up study, we show that state-of-the-art neural construction methods are outperformed by simple heuristics and fail to generalize to larger problem instances. We propose to apply the ruin recreate principle, which completely destroys the solution of a local part and then recreates an improved variant. With this approach, neural construction methods (e.g.,
2307.00059
A study of extreme CIII]1908 & [OIII]88/[CII]157 emission in Pox 186: implications for JWST+ALMA (FUV+FIR) studies of distant galaxies
Carbon spectral features are ubiquitous in the ultraviolet (UV) and far-infrared (FIR) spectra of galaxies in the epoch of reionization (EoR). We probe the ionized carbon content of a blue compact dwarf galaxy Pox 186 using the UV, optical, mid-infrared and FIR data taken with telescopes in space (Hubble, Spitzer, Herschel) and on the ground (Gemini). This local (z~0.0040705) galaxy is likely an analogue of EoR galaxies, as revealed by its extreme FIR emission line ratio, [OIII] 88/[CII] 157 (>10). The UV spectra reveal extreme CIII] 1907, 1909 emission with the strongest equivalent width (EW) = 35.85 $\pm$ 0.73 \AA detected so far in the local (z~0) Universe, a relatively strong CIV 1548, 1550 emission with EW = 7.95 $\pm$0.45\AA, but no He II 1640 detection. Several scenarios are explored to explain the high EW of carbon lines, including high effective temperature, high carbon-to-oxygen ratio, slope and upper mass of top-heavy initial mass function, hard ionizing radiation and in-homogeneous dust distribution. Both CIII] and CIV line profiles are broadened with respect to the OIII] 1660 emission line. Each emission line of CIV 1548, 1550 shows the most distinct double-peak structure ever detected, which we model via two scenarios, firstly a double-peaked profile that might emerge from resonant scattering and secondly, a single nebular emission line along with a weaker interstellar absorption. The study demonstrates that galaxies with extreme FIR emission line ratio may also show extreme UV properties, hence paving a promising avenue of using FIR+UV in the local (via HST+Herschel/SOFIA) and distant (via JWST+ALMA) Universe for unveiling the mysteries of the EoR.
Nimisha Kumari, Renske Smit, Claus Leitherer, Joris Witstok, Mike J Irwin, Marco Sirianni, Alessandra Aloisi
2023-06-30T18:00:13
http://arxiv.org/abs/2307.00059v1
A study of extreme CIII]1908 & [OIII]88/[CII]157 emission in Pox 186: implications for JWST+ALMA (FUV+FIR) studies of distant galaxies ###### Abstract Carbon spectral features are ubiquitous in the ultraviolet (UV) and far-infrared (FIR) spectra of galaxies in the epoch of reionization (EoR). We probe the ionized carbon content of a blue compact dwarf galaxy Pox 186 using the UV, optical, mid-infrared and FIR data taken with telescopes in space (Hubble, Spitzer, Herschel) and on the ground (Gemini). This local (z\(\sim\)0.0040705) galaxy is likely an analogue of EoR galaxies, as revealed by its extreme FIR emission line ratio, [O iii]88 um/[C ii] 157 um (\(>\)10). The UV spectra reveal extreme C iii]\(\lambda\lambda\) 1907, 1909 emission with the strongest equivalent width (EW) = 35.85 \(\pm\) 0.73 Å detected so far in the local (z\(\sim\)0) Universe, a relatively strong C iv\(\lambda\lambda\) 1548, 1550 emission with EW = 7.95 \(\pm\) 0.45 Å, but no He ii\(\lambda\) 1640 detection. Several scenarios are explored to explain the high EW of carbon lines, including high effective temperature, high carbon-to-oxygen ratio, slope and upper mass of a top-heavy initial mass function, hard ionizing radiation and in-homogeneous dust distribution. Both C iii] and C iv line profiles are broadened with respect to the O iii]\(\lambda\) 1660 emission line. Each emission line of C iv\(\lambda\lambda\) 1548, 1550 shows the most distinct double-peak structure ever detected, which we model via two scenarios, firstly a double-peaked profile that might emerge from resonant scattering and secondly, a single nebular emission line along with a weaker interstellar absorption. The study demonstrates that galaxies with extreme FIR emission line ratios may also show extreme UV properties, hence paving a promising avenue of using FIR+UV in the local (via HST+Herschel/SOFIA) and distant (via JWST+ALMA) Universe for unveiling the mysteries of the EoR. keywords: galaxies:dwarfs - galaxies:abundances - galaxies:high-redshift ## 1 Introduction Understanding the reionization of the Universe is one of the frontier goals of modern astronomy. Several theoretical and observational efforts have been made to answer the related pressing questions such as when and how the first galaxies formed (e.g., Stark, 2016) and whether these first galaxies reionized the intergalactic medium (IGM; e.g., Robertson et al., 2010). A first step to answer these questions is to search for the early galaxies in the epoch of reionization (EoR) and characterize their properties. Deep imaging campaigns have been quite successful in searching for such sources. For example, the Hubble Space Telescope (HST) allowed us to compile large samples of early galaxies via deep near-infrared (NIR) imaging programs such as the Great Observatories Origins Deep Survey (GOODS, Giavalisco et al., 2004), Extreme Deep Field (XDF, Illingworth et al., 2013) and Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS, Grogin et al., 2011; Koekemoer et al., 2011). However, spectroscopy is essential to characterize the properties of these sources. Until the launch of the James Webb Space Telescope (JWST), rest-frame ultraviolet (UV) spectroscopy was available for only a few EoR galaxies (e.g., Sobral et al., 2015; Stark et al., 2017; Topping et al., 2021; Hutchison et al., 2019).
The JWST observations are rapidly remedying the dearth of UV spectroscopy for the EoR galaxies via the planned follow-up spectroscopic surveys of the Hubble deep fields (Robertson, 2021; Curtis-Lake et al., 2022; Bunker et al., 2023). Similarly, the exquisite sensitivity of the Atacama Large Millimeter Array (ALMA) has made it possible to obtain the far-infrared (FIR) spectroscopy of EoR galaxies (e.g., Maiolino et al., 2015; Carniani et al., 2017; Smit et al., 2018; Bouwens et al., 2022; Witstok et al., 2022). The combined JWST+ALMA spectroscopic observations will significantly enhance our understanding of the EoR. An indirect approach to probe the nature of reionization sources while efficiently using the two simultaneously-operating state-of-the-art facilities, JWST and ALMA, is to perform detailed studies of the physical processes operating in local galaxies which might resemble EoR galaxies. Several different criteria have been devised so far to identify the local analogues of high-redshift galaxies, including gas-phase metallicity, star-formation rate, compactness, stellar mass, UV luminosity, dust attenuation, Ly\(\alpha\) emission, colour, and ionization state among many others. Some established classes of local analogues of high-redshift galaxies are blue compact dwarf galaxies (BCD, Searle & Sargent, 1972), green peas (GP, Cardamone et al., 2009) and blueberries (Yang et al., 2017), though it is not clear whether these galaxy populations also resemble the EoR galaxies, mainly because of the dearth of data available on EoR galaxies so far. One of the goals of this paper is to demonstrate the use of the FIR line ratio [O iii] 88 um/[C ii] 157 um for identifying the local analogues of the EoR galaxies. [O iii] 88 um originates from the ionized gas, [C ii] 157 um may originate from both the ionized as well as the neutral interstellar medium (ISM), and their relative strengths (i.e. the [O iii] 88 um/[C ii] 157 um line ratio) can potentially tell us about the porosity of the ISM (Chevance et al., 2016; Pelles et al., 2019). The ALMA observations of EoR galaxies (Figure 1, blue points) reveal that the [O iii] 88 um/[C ii] 157 um line ratio may vary in the range 1-10, indicating a highly porous ISM which will facilitate the leakage of ionizing photons required for reionization of the neutral IGM. The Herschel Dwarf Galaxies Survey (Madden et al., 2013; Cormier et al., 2015) reveals a large population of local dwarf galaxies with [O iii] 88 um/[C ii] 157 um \(>\) 1 (Figure 1, grey points), which are potentially the local analogues of EoR galaxies. In an attempt to explore and establish using [O iii] 88 um/[C ii] 157 um as a criterion for identifying the EoR local analogues, we obtained HST UV and spatially-resolved optical spectroscopy of Pox 186, a unique dwarf galaxy showing the highest [O iii] 88 um/[C ii] 157 um ever detected in the local Universe (Figure 1, red point). Moreover, Ramambason et al. (2022) shows that this galaxy has an ionizing photon escape fraction of \(\sim\) 40%, thus making it ideal for this study. Pox 186 was originally discovered in Kunth et al. (1981), and was thought to be a protogalaxy. Corbin & Vacca (2002) later showed that Pox 186 is an ultra-compact galaxy still in the process of formation with a majority of star-formation concentrated in the central star cluster of mass 10\({}^{5}\) M\({}_{\odot}\).
Figure 2 shows a narrow-band optical image of Pox 186 taken with the Wide Field Planetary Camera 2 onboard HST, along with the field-of-view (FOV) of instruments of primary observations used in this work. Table 1 lists some of the main physical properties of Pox 186, along with information about the UV and optical observing strategy. A typical UV spectrum of star-forming galaxies is known to show prominent spectral features such as Ly\(\alpha\lambda\) 1215, C iv\(\lambda\lambda\) 1548, 1550, He ii\(\lambda\) 1640, O iii]\(\lambda\lambda\) 1660, 1666, [C iii]\(\lambda\) 1907 and C iii]\(\lambda\) 1909 1, which have been used to infer information regarding the hardness of radiation fields, ionization conditions, metal content and wind properties within galaxies at all redshifts (e.g., Shapley et al., 2003; Senchyna et al., 2017; Nakajima et al., 2018; Schmidt et al., 2021). In this paper, we mainly focus on the ionized carbon spectral features, C iv\(\lambda\lambda\) 1548, 1550 and C iii]\(\lambda\lambda\) 1907,1909, however, we complement the UV analysis with the spatially-resolved optical, mid-infrared (MIR) and FIR data. Footnote 1: In the rest of the paper, we will refer to the two carbon emission lines [C iii]\(\lambda\) 1907 and C iii]\(\lambda\) 1909 as C iii]\(\lambda\lambda\) 1907,1909, which is a popular notation in the literature. The paper is organized as follows: Section 2 presents an overview of the data used in this work, including UV, optical, MIR and FIR. For UV and optical data, we explain the initial data reduction and processing. The MIR and FIR data are archival. In Section 3, we present the results of the multi-wavelength data analysis which includes the estimates of redshift, distance, flux and equivalent widths of detected emission lines and reddening. We also determine several physical properties of the ionized gas and the ionizing stellar population, such as electron temperature and density, gas-phase metallicity, ionization parameters, effective temperature and softness parameters. Section 4 presents a discussion focusing on UV carbon features including their large equivalent widths, line profiles and relative chemical abundance. We also discuss the implication of this study on future JWST+ALMA studies of reionization-era galaxies. Section 5 summarizes our main results. In the rest of the paper, we assume a flat \(\Lambda\)CDM cosmology with H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}\) = 0.3. The gas-phase solar metallicity is assumed to be 12 + log(O/H)\({}_{\odot}\) = 8.69 (Asplund et al., 2009).

## 2 Observations

### HST/UV spectroscopy

The HST/COS observations were taken as part of the General Observing programs in HST Cycles 27 and 28 (GO: 16071 and 16445, PI: N Kumari) at lifetime adjustment position 4 (LP=4). Before taking each UV spectrum, the NUV target acquisition image is taken using Mirror A. The FUV and NUV spectra were taken with the 2.5 arcsec diameter Primary Science Aperture (PSA) using the medium resolution gratings, G130M, G160M and G185M centred at 1291 Å, 1623 Å and 1913 Å, respectively. We used all FP-POS positions for better spectral sampling and increased signal-to-noise (S/N). Table 1 lists the exposure times for gratings and Mirror A used within the two HST programs. All HST/COS data were processed with the standard data reduction pipeline CALCOS version 3.4.0.
Figure 1: [O iii] 88 μm/[C ii] 157 μm line ratio versus IR luminosity for Pox 186 (red point), galaxies at z=7 (blue points, Hashimoto et al., 2019; Witstok et al., 2022; Ren et al., 2023) and local dwarf galaxies (grey points, Cormier et al., 2015).

The red circle in Figure 2 denotes the 2.5 arcsec COS spectroscopic aperture. The wavelength settings allow us to cover several spectral features consisting of ISM (red), photospheric (purple), wind (yellow) and nebular (brown) lines, as shown in Figures 3a and 3b.

### GMOS-N optical spectroscopy

We obtained the spatially-resolved optical spectroscopy of Pox 186 using the GMOS (Hook et al., 2004) IFU (GMOS-N IFU; Allington-Smith et al., 2002) at the Gemini-North telescope in Hawaii, as part of two separate programs (PID: GN-2020A-FT-105, GN-2021A-FT-111, PI: N Kumari). The first program focussed on covering the optical wavelength range of \(\sim\)3500-8000 Å, while the second program allowed us to cover the near-infrared (NIR) wavelength range of \(\sim\)8000-10000 Å. The observations were taken in one-slit queue mode providing a FOV of 3.5''\(\times\)5'', large enough to cover the entire galaxy (Figure 2). Along with the science exposures, standard calibration observations were obtained including GCAL flats, the CuAr lamp for wavelength calibration and the standard star HZ44 for flux calibration. We performed the basic steps of data reduction using the standard GEMINI reduction pipeline (version v1.15) written in the Image Reduction and Analysis Facility (iraf, version v2.16)2. These basic steps included bias subtraction, flat field correction, wavelength calibration, sky subtraction, and differential atmospheric correction, finally producing the 3D data cubes, and have been described in detail in Kumari (2018). New GMOS-N detectors were installed in 2017, which further required the quantum efficiency correction for all flats. We also used L.A.Cosmic (van Dokkum, 2001) to remove the cosmic rays from the science exposures. We chose a spatial sampling of 0.25'' for the final three-dimensional data cubes. We thus obtain three three-dimensional data cubes. We scale the flux of each of the three cubes using the methodology described in Appendix B.

Footnote 2: iraf is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.

Figure 3a: HST/COS FUV spectra of Pox 186 taken with G130M/1291 (upper two panels) and G160M/1623 (lower two panels). These fluxes are smoothed by a boxcar filter of 6 pixels for better visualization. These spectra show several spectral features consisting of ISM (red), photospheric (purple), wind (yellow) and nebular (brown) lines. The Milky Way lines are marked in green. Note that not all marked lines are detected, and are provided here for reference. The geocoronal emission is marked by a grey shaded region in the first panel.

slit width. Cormier et al. (2015) reports the MIR emission line fluxes after taking into account the PSF. _FIR:_ Pox 186 was also observed with Herschel using PACS, which covers a total FOV of 47''\(\times\)47'' and has a beam size of 9'' and 12'' at 60 um and 160 um, respectively. Cormier et al. (2015) finds that Pox 186 is well-centred on the brightest spaxel of their FIR maps and hence reports FIR emission line fluxes by applying a point-source correction to the brightest spaxel.
For completeness, in Table 2 we tabulate the line fluxes of Pox 186 in the MIR and FIR wavelength regime taken from Cormier et al. (2015).

## 3 Results

### Source Redshift and Distance

We determine the source redshift by measuring the observed wavelengths of four strong UV emission lines, O iii] and C iii], as shown in Table 3. We do not use the strong C iv emission line because it is double-peaked (Section 4.2). We do not use the optical emission lines because the zero-point of the wavelength calibration of the GMOS-N data is not very well-constrained (Kumari, 2018). The redshift of Pox 186 is 0.0040705\(\pm\)0.000013 from the presented HST/COS data and agrees with that of Guseva et al. (2004) and Eggen et al. (2021) within uncertainties. At H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}\) = 0.3, the observed redshift corresponds to a luminosity distance of 17.5 Mpc, and an angular scale of 84.1 pc per arcsec. However, we derive a value of 12.6 Mpc if we use Cosmicflows-3 (Kourkchi et al., 2020).

### Line fluxes and equivalent widths

#### 3.2.1 UV

Table 4 presents the fluxes and equivalent widths (EW) of the UV emission lines O iii], C iii] and C iv. We estimate fluxes by summing the fluxes in the spectral line after subtracting a local linear continuum fitted to either side of the three doublets. The continuum level at the central wavelength of each emission line is used to estimate their equivalent widths. We correct the UV emission line fluxes using the E(B-V) value determined from the optical Balmer decrement, which is described later in Section 3.3.

#### 3.2.2 Optical

We want to compare the UV, optical, MIR and FIR properties of Pox 186 together, for which we need to take into account the varying aperture sizes or FOVs of the different instruments with which these four different datasets are acquired. The GMOS-IFU data allow us to probe the optical properties for different apertures and sizes. We chose to extract two sets of integrated spectra. The first one is obtained by integrating all GMOS spectra within a circular aperture of 1.25'' radius centred on the brightest knot of Pox 186, hence coinciding with the HST/COS aperture, and is referred to as the 'COS-matched integrated' spectra, while the second set of spectra is obtained by integrating all spectra within the GMOS-IFU FOV and is referred to as the 'Gemini-FOV integrated' spectra. The main difference between the two sets of spectra is that the COS-matched integrated spectra only include the compact core of Pox 186, and exclude its plume, unlike the Gemini-FOV integrated spectra which include both. Figure 4 shows the COS-matched integrated spectra along with several optical and NIR emission lines. We measure the emission line fluxes for the recombination and collisionally excited emission lines (except H\(\alpha\) and [O iii]\(\lambda\) 5007) within the integrated spectra by fitting single Gaussian profiles after subtracting a linear continuum in the spectral region of interest via custom-written python codes using the lmfit package (Newville et al., 2014). Equal weight is given to the flux in each spectral pixel while fitting Gaussians, and the fitting uncertainties on the Gaussian parameters are propagated to calculate the flux uncertainty.
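As an illustration of this kind of single-Gaussian line fit with lmfit, a minimal sketch on synthetic data; the continuum windows, line parameters and spectrum below are invented for the example and do not correspond to the actual Pox 186 measurements.

```python
import numpy as np
from lmfit.models import GaussianModel, LinearModel

# synthetic spectrum: a weak linear continuum plus one emission line with noise
rng = np.random.default_rng(42)
wave = np.linspace(4950, 5070, 600)
flux = 2.0 + 1e-3 * (wave - 5000.0) + 40.0 * np.exp(-0.5 * ((wave - 5007.0) / 1.5) ** 2)
flux += rng.normal(0.0, 0.5, wave.size)

# fit a linear continuum to windows on either side of the line and subtract it
cont = (np.abs(wave - 5007.0) > 10) & (np.abs(wave - 5007.0) < 40)
lin = LinearModel()
cont_fit = lin.fit(flux[cont], lin.guess(flux[cont], x=wave[cont]), x=wave[cont])
residual = flux - cont_fit.eval(x=wave)

# fit a single Gaussian profile to the continuum-subtracted line
gauss = GaussianModel()
out = gauss.fit(residual, gauss.guess(residual, x=wave), x=wave)

# 'amplitude' of lmfit's GaussianModel is the integrated area, i.e. the line flux
print(out.params["amplitude"].value, out.params["amplitude"].stderr)
print(out.params["center"].value, out.params["sigma"].value)
```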
We also create emission line flux maps for all lines (including H\(\alpha\) and [O iii]\(\lambda\) 5007) of the \begin{table} \begin{tabular}{l c c c} \hline Emission lines & EW (Å) & F\({}_{A}\) & I\({}_{A}\) \\ \hline C iv\(\lambda\)1548 & 5.21 \(\pm\) 0.22 & 0.50 \(\pm\) 0.02 & 0.99 \(\pm\) 0.09 \\ C iv\(\lambda\)1550 & 2.54 \(\pm\) 0.18 & 0.27 \(\pm\) 0.02 & 0.53 \(\pm\) 0.06 \\ He ii\(\lambda\)1640 & – & \(<\)0.08\({}^{a}\) & \(<\)0.16\({}^{a}\) \\ O iii]\(\lambda\)1660 & 2.21 \(\pm\) 0.29 & 0.23 \(\pm\) 0.03 & 0.43 \(\pm\) 0.06 \\ O iii]\(\lambda\)1666 & 5.74 \(\pm\) 0.38 & 0.58 \(\pm\) 0.04 & 1.08 \(\pm\) 0.11 \\ C iii]\(\lambda\lambda\)1907, 1909 & 35.85 \(\pm\) 0.73 & 2.56 \(\pm\) 0.05 & 4.4 \(\pm\) 0.3 \\ \hline \end{tabular} Notes: Fluxes are in units of \(\times\) 10\({}^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\). \({}^{a}\) 2\(\sigma\) upper-limit \end{table} Table 4: Equivalent widths and emission line fluxes (observed \(F_{A}\) and intrinsic \(I_{A}\)) of the UV nebular emission lines. Figure 3b: HST/COS FUV spectra of Pox 186 taken with G185M/1913. These fluxes are smoothed by a boxcar filter of 2 pixels for better visualization. Note that the shown spectrum only represents the middle segment of the G185M/1913 grating, and the other two segments do not show any spectral feature. \begin{table} \begin{tabular}{l c c c} \hline Emission lines & \(\lambda_{Rest}\)(Å) & \(\lambda_{Obs}\)(Å) & z \\ \hline O iii]\(\lambda\)1660 & 1660.81 & 1667.6 & 0.004086 \\ O iii]\(\lambda\)1666 & 1666.15 & 1672.95 & 0.004079 \\ C iii]\(\lambda\)1907 & 1906.68 & 1914.43 & 0.004066 \\ C iii]\(\lambda\)1909 & 1908.73 & 1916.46 & 0.004051 \\ \hline z (Mean \(\pm\) Std)\({}^{a}\) & & 0.0040705 \(\pm\) 0.000013 \\ \hline \end{tabular} Notes: Mean and Std denote the mean and standard deviations of the redshifts estimated from the emission lines. \end{table} Table 3: Emission line redshift determinations entire GMOS FOVs in the same way, which are shown in Appendix C (Figures C1 and C3). The strong emission lines H\(\alpha\) and [O iii]\(\lambda\) 5007 show a weak broad component in the integrated spectra matching COS aperture. We estimate the H\(\alpha\) line flux by summing the fluxes under the line within the continuum subtracted spectrum and removing the flux contribution from the [N ii] lines in the wings of H\(\alpha\) broad component. The wing of [O iii]\(\lambda\) 5007 also shows the He I 5015 line, which we remove by modelling it with a Gaussian from the continuum subtracted spectrum. This spectrum is then used to estimate [O iii]\(\lambda\) 5007 line flux by summing over the fluxes under the line. Tables 5 presents the measured observed line fluxes for the emission lines measured in the two sets of the integrated GMOS spectra, the COS-matched integrated spectra (F\({}_{\lambda}\)(COS)) and the Gemini-FOV integrates spectra (F\({}_{\lambda}\) (Gemini FOV)). The uncertainties on the emission line fluxes presented in the Table are the random measurement uncertainties. However, we also include a systematic flux uncertainty of 50% (see Appendix B) and propagate in the inferred properties whenever it becomes relevant in the analysis, and is explicitly mentioned in the paper. Figure 4: GMOS-N IFU integrated spectra of Pox 186 obtained by summing the spatially-resolved spectra within a circular aperture of radius 1.25\({}^{\prime\prime}\) overlapping with the COS aperture covering a rest-frame wavelength range of \(\sim 3800\)–9900Å. The important optical emission lines are marked in blue. 
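The single-Gaussian-plus-linear-continuum fits described above are performed in the paper with custom Python code built on the lmfit package; a minimal sketch of one such fit is given below. The wavelength and flux arrays, initial guesses and window choices are placeholders, not the actual fitting setup.

```python
import numpy as np
from lmfit.models import GaussianModel, LinearModel

def fit_emission_line(wave, flux, line_centre):
    """Fit one emission line with a linear continuum plus a single Gaussian."""
    model = LinearModel(prefix="cont_") + GaussianModel(prefix="line_")
    params = model.make_params(
        cont_slope=0.0,
        cont_intercept=np.median(flux),
        line_center=line_centre,
        line_sigma=1.5,                      # Å, rough initial guess
        line_amplitude=flux.max() * 3.0,     # integrated area, rough initial guess
    )
    # Equal weight per spectral pixel, as in the paper.
    result = model.fit(flux, params, x=wave)
    line_flux = result.params["line_amplitude"].value
    line_flux_err = result.params["line_amplitude"].stderr
    return line_flux, line_flux_err, result
```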
We have also marked the location of He II 4686, which remains undetected in the optical spectrum. ### Reddening correction We estimate the colour excess E(B-V), by using the attenuation curve of the Small Magellanic Cloud (SMC; Gordon et al. 2003) along with the observed Balmer decrement (H\(\alpha\)\(\Pi\beta\) ) assuming a Case B recombination and an electron temperature and density of 10\({}^{4}\)K and 100 cm\({}^{-3}\), respectively. We estimate E(B-V) for both sets of integrated optical spectra, the one overlapping with COS and the other one corresponding to the entire GMOS FOV. The E(B-V) for the COS-matched integrated spectra is 0.053 \(\pm\) 0.004 which is close to the E(B-V) of the Milky Way in the line-of-sight of Pox 186, i.e., 0.0385 \(\pm\) 0.0016 (Schlafly & Finkbeiner 2011), hence indicating a very low amount of dust in the central region of Pox 186. The intrinsic fluxes for the UV and optical lines are estimated by correcting the observed line fluxes using the E(B-V). No reddening correction is done for the MIR or FIR emission line fluxes. Table 4 presents the intrinsic fluxes of the emission lines of the UV COS spectra, while Table 5 shows the intrinsic fluxes for the COS-matched and Gemini-FOV integrated spectra. The uncertainties on the intrinsic fluxes are derived from propagating the random uncertainties on fluxes measured while fitting the emission lines. ### Physical properties of ionized gas and ionizing stellar population #### 3.4.1 Electron temperature and Density The UV, optical and IR spectra have emission lines sensitive to the electron temperature (T\({}_{e}\)) and density (N\({}_{e}\)). For example, the UV line O iii]\(\lambda\lambda\) 1660,1666 is temperature-sensitive and when combined with the optical [O iii]\(\lambda\) 5007 line, probes T\({}_{e}\). Similarly, the optical line ratio of [O iii]\(\lambda\) 4363 and [O iii]\(\lambda\lambda\) 4959, 5007 is also sensitive to \begin{table} \begin{tabular}{l r} \hline Parameter & Value \\ \hline [O iii]/H\(\beta\) & 0.805 \(\pm\) 0.003 \\ [N ii] /H\(\alpha\) & \(-\)2.15 \(\pm\) 0.09 \\ [S ii] /H\(\alpha\) & \(-\)1.671 \(\pm\) 0.007 \\ EW([O iii]\(\lambda\lambda\) 4959,5007+H\(\beta\) ) (Å) & 1800 \(\pm\) 800 \\ EW(H\(\alpha\) ) (Å) & 820 \(\pm\) 20 \\ \(\beta\)-slope & \(-\)0.36 \(\pm\) 0.02 \\ T\({}_{e}\)([OIII]) (\(\times\) 10\({}^{4}\) K) & 1.63 \(\pm\) 0.05 \\ T\({}_{e}\)([OIII]) (\(\times\) 10\({}^{4}\) K) & 1.39 \(\pm\) 0.08 \\ N\({}_{e}\) ([SII]) (cm\({}^{-3}\)) & 110 \(\pm\) 30 \\ 12 + log(O\({}^{+}\)/H\({}^{+}\)) & 7.22 \(\pm\) 0.12 \\ 12 + log(O\({}^{2+}\)/H\({}^{+}\)) & 7.76 \(\pm\) 0.04 \\ 12 + log(O/H) & 7.87 \(\pm\) 0.04 \\ C\({}^{2+}\)/O\({}^{2+}\) & 0.21 \(\pm\) 0.03 \\ C\({}^{3+}\)/O\({}^{2+}\) & 0.041 \(\pm\) 0.004 \\ log(CO/O\({}^{2+}\))\({}^{\rm direct}\) & \(-\)0.59 \(\pm\) 0.04 \\ log(CO/O\({}^{2+}\))\({}^{\rm empirical}\) & \(-\)0.60 \(\pm\) 0.03 \\ T\({}_{\rm eff}\)\(\pm\) (kK) & 60 \(\pm\) 18 \\ log \(\mathcal{H}\)\(\xi\) & \(-\)2.4 \(\pm\) 0.4 \\ log (q/cm s\({}^{-1}\)\(\pm\)) & 8.1 \(\pm\) 0.4 \\ \(\log\)\(\mathcal{H}\)\(\star\) & \(-\)2.661 \(\pm\) 0.014 \\ \hline \end{tabular} Notes: \(\ddagger\): Output from i:ncm-ft T\({}_{e}\). Similarly, the optical to IR line ratio, [O iii]\(\lambda\) 5007/[O iii]\(\lambda\) 88 um is sensitive to T\({}_{e}\), but also to N\({}_{e}\)(Dinerstein et al., 1985). 
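The basic optical diagnostics just listed can be evaluated directly with PyNeb, the same package used for the emissivity grids shown in Figure 5. The sketch below uses placeholder dereddened flux ratios and PyNeb's 6716/6731 labels for the [S ii] doublet (quoted as 6717/6731 in the text).

```python
import pyneb as pn

O3 = pn.Atom("O", 3)
S2 = pn.Atom("S", 2)

# Placeholder dereddened flux ratios; the real values come from Table 5.
ratio_4363_5007 = 0.018      # [O iii] 4363 / 5007
ratio_6716_6731 = 1.35       # [S ii] 6716 / 6731

# Te from [O iii] assuming a representative low density of 100 cm^-3 ...
Te = O3.getTemDen(ratio_4363_5007, den=100.0, wave1=4363, wave2=5007)
# ... then Ne from [S ii] at that temperature, as done in the paper.
Ne = S2.getTemDen(ratio_6716_6731, tem=Te, wave1=6716, wave2=6731)
print(Te, Ne)
```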
Figure 5 shows the optical [O iii] lines flux ratio ([O iii]\(\lambda\) 4363 / [O iii]\(\lambda\) 5007) versus the optical-IR [O iii] lines flux ratio [O iii]\(\lambda\) 5007/[O iii]\(\lambda\) 88 um, where the grids are generated using the emissivities of the respective [O iii] lines from the rsync code for a set of T\({}_{e}\) and N\({}_{e}\) values. The grid lines are not orthogonal as [O iii]\(\lambda\) 5007/[O iii]\(\lambda\) 88 um is sensitive to both T\({}_{e}\) and N\({}_{e}\). We also show the optical [O iii] lines flux ratio and optical-IR [O iii] lines flux ratio [O iii]of Pox 186, obtained from Gemini-FOV integrated spectra. We deem the use of Gemini-FOV integrated spectra for this comparison instead of the COS-matched integrated spectra because of the large FOV and PSF of the Herschel data (see Section 2.3) which includes not only the compact core of Pox 186 but also its plume. Note that the COS-matched integrated spectra miss the plume of Pox 186. The observed lines ratio lies on the pyneb grids and is in contradiction to that found by Chen et al. (2023). The optical [O iii] emission line flux ratio for the Gemini-FOV integrated spectra corresponds to T\({}_{e}=15000\pm 1300\)K. From the COS-matched optical spectra, we estimate T\({}_{e}\) ([O iii]) = 16300\(\pm\)500K by using the emission line flux ratio of the auroral line [O iii]\(\lambda\) 4363 and [O iii]\(\lambda\lambda\) 4959, 5007. Since UV line O iii] \(\lambda\lambda\) 1660, 1666 is also temperature-sensitive, we also estimate T\({}_{e}\)([O iii]) by using the emission line flux ratio of [O iii]\(\lambda\) 5007/O iii]\(\lambda\lambda\) 1660,1666, which is in agreement with that obtained from optical emission line ratios. The electron temperatures derived from the COS-matched and Gemini-FOV integrated spectra agree with each other and are typical of the H ii regions within the star-forming galaxies (Kumari et al., 2017, 2018, 2019). We measure electron density using the density-sensitive line ratio [S ii] \(\lambda\lambda\) 6717, 6731 line ratio and T\({}_{e}\) ([O iii]) determined above for the COS-matched as well as the Gemini-FOV integrated spectra. For both datasets, the electron density indicates a low-density regime. We note that the density-sensitive line doublets [O ii] \(\lambda\lambda\) 3727, 3729 could not be used for determining density, as the sensitivity of GMOS-IFU in the blue end has degraded over time, and hence the blue-end data are unusable. We do not estimate N\({}_{e}\) from the density-sensitive UV doublet C iii] \(\lambda\lambda\)1907, 1909 available from the COS spectra as the doublet is blended and asymmetric. #### 3.4.2 Chemical abundances The chemical abundances of Pox 186 are only determined for the COS-overlapping central region, because of the non-detection of the necessary emission lines in the Gemini-FOV integrated spectra. Gas-phase metallicity: The gas-phase metallicity (12+log(O/H)) can be robustly estimated from the T\({}_{e}\)-base direct method, where the abundances of the dominant ionic states of oxygen (O\({}^{+}\) and O\({}^{2+}\)) are first determined from the temperatures of their respective ionization zones and then combined to estimate the total oxygen abundance. The temperature of the high-ionization zone T\({}_{e}\)([O iii]) is determined as derived in Section 3.4.1 and is combined with N\({}_{e}\) ([S ii] ) to estimate the temperature of the low-ionization zone by using the density-dependent calibration given in Perez-Montero (2017). Like Kumari et al. 
(2019) where [O ii] \(\lambda\lambda\) 3727, 3729 remain undetected, we measure O\({}^{+}\)/H\({}^{+}\) using [O ii] \(\lambda\lambda\) 7320, 7330 and the low-ionization temperature T\({}_{e}\)([O ii] ) by employing the formula given in Kniazev et al. (2003). We measure O\({}^{2+}\)/H\({}^{+}\) using [O iii]\(\lambda\lambda\) 4959, 5007 and T\({}_{e}\)([O iii]) in the formula given in Perez-Montero (2017). The oxygen ionic abundances are combined to calculate the oxygen elemental abundance, 12 + log(O/H) = 7.87\(\pm\)0.04, for the region of Pox 186 probed by COS, and agrees within 3\(\sigma\) with that derived by Guseva et al. (2004) for a larger region of this galaxy. Carbon-to-oxygen ratio: We follow the relations between T\({}_{e}\) ([O iii]) and dereddened line ratios C iii]/O iii] and C iv/O iii] provided in Perez-Montero (2017), we estimate C\({}^{2+}\)/O\({}^{2+}\) and C\({}^{3+}\)/O\({}^{2+}\). We estimate direct method C/O by combining C\({}^{2+}\)/O\({}^{2+}\) and C\({}^{3+}\)/O\({}^{2+}\), assuming \(\frac{\zeta}{Q}=\frac{C^{3+}+C^{3+}}{O^{3+}}\). We also estimate C/O using the empirical method given in (Perez-Montero, 2017). The estimates of C/O from the direct and empirical method are in excellent agreement with each other (Table 6). #### 3.4.3 Ionization parameter and Effective Temperature For estimating the ionization parameter (log \(\mathcal{U}\)) and effective temperature (T\({}_{\rm eff}\)) of the central region matching COS-aperture, we use the publicly available code Hcm-Teff code (v5.3 Perez-Montero et al., 2019) where we assume a blackbody model and use reddening-corrected optical emission line fluxes ([O iii]\(\lambda\lambda\) 4959, 5007, [S ii] \(\lambda\lambda\) 6717, 6731, He I\(\lambda\) 6678, Ar i\(\lambda\) 7135, [S iii] \(\lambda\lambda\) 9069, 9532) from COS-matched integrated and the gas-phase metallicity (12 + log(O/H) = 7.87\(\pm\)0.04, Section 3.4.2), and run the code for plane-parallel and spherical geometry separately. We find that the choice of geometry has no effect on either log \(\mathcal{U}\) or T\({}_{\rm eff}\) for the central region of Pox186. We also estimate log \(\mathcal{U}\) = -2.661 \(\pm\) 0.014 using the calibration involving [S iii] \(\lambda\lambda\) 9069,9532/[S ii] 6717,6731 given by Kewley et al. (2019), which agrees with that derived from HCM-Teff. For determining log \(\mathcal{U}\) for the entire Pox186 galaxy, we use calibrations from Kewley et al. (2019) including the MIR line ratios (Table 2), [Ne iii] /[Ne ii] which gives log \(\mathcal{U}\) = -2.55 \(\pm\) 0.22. Note that we do not use optical line ratio [S iii] /[S ii] mainly because [S iii] \(\lambda\lambda\) 9069, 9530 lines cover a slightly different FOV than the rest Figure 5: pyneb grids of [O iii]\(\lambda\) 4363/[O iii]\(\lambda\) 5007 versus [O iii]\(\lambda\) 5007/[O iii]88 μm for T\({}_{e}\)=8–20kK and N\({}_{e}\)=1–10000 cm\({}^{-3}\). The red point denotes the emission line flux ratios of Pox 186, where the optical [O iii] emission line fluxes correspond to the entire Gemini-FOV, and the FIR [O iii] line is taken from Cormier et al. (2015), hence both optical and FIR datasets shown here cover the entire galaxy. The uncertainties on the line ratio include the systematic uncertainties on the optical line fluxes (see Section B). of the optical emission lines including [S ii]. It is for the same reason that we could not use HCM-TEFF to estimate these two parameters for the Gemini FOV. 
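As a cross-check of the ionic abundances derived above with the Pérez-Montero (2017) and Kniazev et al. (2003) relations, a functionally similar route through PyNeb's getIonAbundance is sketched below. The line intensities are placeholders normalised to Hβ = 100, and the C/O sum follows the definition given in the text, C/O = (C2+ + C3+)/O2+, using the Table 6 ratios.

```python
import numpy as np
import pyneb as pn

O3 = pn.Atom("O", 3)

Te_high = 16300.0    # K, Te([O iii]) from the COS-matched spectra (Section 3.4.1)
Ne = 110.0           # cm^-3, Ne([S ii])

# Dereddened [O iii] 5007 intensity on a scale where I(Hbeta) = 100 (placeholder value).
I_5007 = 450.0
Opp_Hp = O3.getIonAbundance(I_5007, tem=Te_high, den=Ne, wave=5007, Hbeta=100.0)

# O+/H+ comes from the [O ii] 7320,7330 blend and Te([O ii]) via the
# Kniazev et al. (2003) relation used in the paper; a placeholder value is used here.
Op_Hp = 1.7e-5

print(f"12 + log(O/H) = {12 + np.log10(Op_Hp + Opp_Hp):.2f}")

# C/O as defined in the text, using the Table 6 ionic ratios:
C2_O2, C3_O2 = 0.21, 0.041
print(f"log(C/O) = {np.log10(C2_O2 + C3_O2):.2f}")   # ~ -0.60, cf. Table 6
```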
#### 3.4.4 Radiation hardness Hardness of the radiation field can be measured by the softness parameter log \(\eta\). It was initially defined in terms of optical ionic ratios \(\eta=(\rm{O^{+}/O^{2+}})/(\rm{S^{+}/S^{2+}})\)(Vilchez and Pagel, 1988), and can be estimated from the calibration provided in Kumari et al. (2021), i.e., log \(\eta=\rm{log}\,\eta\prime+0.16/t+0.22\), where t = T\({}_{e}\)/10\({}^{4}\) and log \(\eta\prime\) can be determined from the optical emission line ratios ( log \(\eta\prime=\rm{[O\,{\sc ii}]/[\rm{\sc iii}]\)) or the MIR infrared line ratios ( log \(\eta\prime=\rm{[Ne\,{\sc ii}]/[\rm{\sc iii}]\)) \(\rm{[S\,{\sc iii}]/[\rm{\sc iii}]}\)) as mentioned in Perez-Montero and Vilchez (2009). We estimate log \(\eta\) = -0.47 \(\pm\) 0.23 using MIR emission line fluxes (Table 2). We estimate log \(\eta\) = -0.14 \(\pm\) 0.23 using T\({}_{e}\) from the Gemini-FOV integrated spectra and log \(\eta\prime\) estimated earlier. We chose to use T\({}_{e}\) from the Gemini-FOV integrated spectra rather than COS-matched integrated spectra as the former covers the entire galaxy like MIR data. We do not use the optical emission line fluxes to determine log \(\eta\prime\) as [O ii] \(\lambda\lambda\) 3729, 3729 are not detected because of the decreased sensitivity of GMOS-N IFU in the blue wavelength end. Lower log \(\eta\) and log \(\eta\prime\) indicate a hard radiation field. Pox 186 exhibits lower values of log \(\eta\) and log \(\eta\) compared to the average values exhibited by star-forming regions or galaxies in the local Universe (Kumari et al., 2021), thus indicating that the radiation field in Pox 186 is harder than average. Table 6 summarizes all the physical properties derived in this section. The uncertainties on the derived quantities are estimated from the random uncertainties on the flux measured while fitting emission lines and excluding the systematic flux uncertainty. ## 4 Discussion ### Large equivalent widths of nebular carbon lines We find EW(C iii]\(\lambda\lambda\) 1906,1909) = 35.85 \(\pm\) 0.73A for Pox 186. Figure 6 shows a comparison of EW(C iii]\(\lambda\lambda\) 1906,1909) versus EW([O iii]\(\lambda\lambda\) 4959,5007+H\(\beta\) ) for Pox 186 with galaxies at z\(\sim\)0-4. We find that Pox 186 exhibits the highest EW(C iii]) among all the star-forming galaxies in the local Universe (z \(\sim\) 0, Leitherer et al., 2011; Senchyna et al., 2017, 2019, Figure 6), and the majority of the star-forming galaxies at the intermediate (z \(\sim\) 2-4, Erl et al., 2010; Stark et al., 2014; Vanzella et al., 2016; Maseda et al., 2017; Vanzella et al., 2017; Tang et al., 2021). However, EW(C iii]) is also reported to lie within 20-40A for a small fraction (\(\sim\)1.2%) of star-forming galaxies at the intermediate (z\(\sim\) 2-4, Le Fevre et al., 2019) and a few EoR galaxies (z\(\sim\)6, Stark et al., 2017; Hutchison et al., 2019; Topping et al., 2021; Jiang et al., 2021). We also measure the EW(C iv\(\lambda\lambda\) 1548,1550) = 7.75\(\pm\)0.28A for Pox 186, which is comparable to those found for local metal-poor galaxies with very young stellar populations (Berg et al., 2016, 2019; Senchyna et al., 2019). We explore the cause for the extreme EW of carbon lines in the following: #### 4.1.1 High effective temperature Nakajima et al. (2018) states that EW(C iii]) >30A can be caused by blackbody with extremely high effective temperature (T\({}_{\rm eff}\)), i.e. > 6\(\times\)10\({}^{4}\)K. 
We estimate T\({}_{\rm eff}\) = 60 \(\pm\) 18 kK assuming blackbody using the HCM-TEFF code (Section 3.4.3), thus indicating that the high effective temperature may be responsible for high carbon EW observed for Pox 186. Figure 6: EW(C iii]) versus EW([O iii]+ \(H\beta\) ) for Pox 186 (red dot) compared with galaxies at z\(\sim\)–0.4, taken from Stark et al. (2014); Maseda et al. (2017); Senchyna et al. (2017); Mainii et al. (2020); Tang et al. (2021) Figure 7: C/O versus O/H for Pox 186, along with average C/O for z\(\sim\)0 (horizontal blue line) and z\(\sim\)2 (horizontal green line) from Arellano-Córdova et al. (2022) and for DLAs from Cooke et al. (2011). The solid and dashed purple lines indicate the C/O-OH relation given in Nicholls et al. (2017) and Garnett et al. (1995), respectively. #### 4.1.2 High carbon-to-oxygen ratio Jiang et al. (2021) suggests that the high EW(C iii]) and EW(C iv) could be due to a higher carbon abundance. We explore this via Figure 7, which shows that the C/O abundance of Pox 186 (red point) is higher than the average for galaxies found in the same metallicity range at z \(\sim\) 0 (horizontal blue line) and z\(\sim\) 2 (horizontal green line) from Arellano-Cordova et al. (2022). It is also higher than that predicted by the best-fit line to C/O versus O/H for measurements of stars derived by Nicholls et al. (2017, solid purple curve) and for measurements of irregular dwarf galaxies derived from Garnett et al. (1995, dashed solid line). Thus, the higher C/O for Pox 186 supports the argument from Jiang et al. (2021) about higher carbon abundance causing the higher EW of carbon lines. We note that the optical spectrum of Pox186 does not show any signature of the carbon-rich Wolf-Rayet (WR) stars either as red or blue WR bump which could lead to a direct enhancement of carbon. Schaerer et al. (1999) lists Pox 186 as a WR galaxy on the basis of a broad He ii\(\lambda\)4686 at 0.8\(\sigma\) above background reported in Kunth & Joubert (1985). The high-quality HST/COS and GMOS-IFU data allow us to exclude Pox 186 as a WR candidate. #### 4.1.3 Slope and upper mass of top-heavy initial mass function To understand the origin of the extreme EW(C iii]) measured in Pox186, we consider Cloudy models broadly similar to those in Wisttok et al. (2022); here, we give a brief summary and highlight the differences with the models presented in Wisttok et al. (2022). The incident radiation field of a single burst of star formation with varying ages (1 Myr to 100 Myr) is generated by npass v2.1 stellar population synthesis models including binary stars under a top-heavy initial mass function (IMF; with slope \(-2\); Eldridge et al., 2017), ranging in stellar mass from 1 M\({}_{\odot}\) to 300 M\({}_{\odot}\). Unlike in Wisttok et al. (2022), calculations stop when a molecular fraction of \(10^{-6}\) is reached, such that the model does not extend into a photodissociation region beyond the central H ii region. We considered models with a wide range of base (stellar) metallicities, tuning the gas-phase metallicity to match the observed values of Pox186. We introduce an additional nebular \(\alpha\)/Fe enhancement, which is accomplished by increasing the nebular abundances of individual \(\alpha\) elements (for details, we refer to Wisttok et al. 2022). 
The nebular elemental abundances of the main \(\alpha\) elements (C, O, Ne, Mg, Si, S) are scaled up by 4\(\times\), except for carbon which is increased by a factor of 2, so that the C/O ratio is approximately half the solar value as appropriate for Pox186 (see Section 4.1.2). Moreover, this implies our fiducial model with stellar metallicity \(Z_{\star}=0.025\,Z_{\odot}\) has a nebular oxygen abundance of approximately 10% solar, as directly measured for Pox186 (see Section 4.1.2). We vary the ionisation parameter and hydrogen density between \(-4<\log_{10}U<-1\) and \(10^{-1}\,\rm cm^{-3}<\)\(r_{\rm H}<10^{4}\,\rm cm^{-3}\) (as measured at the illuminated face of the cloud), respectively. An overview of the models generated in Cloudy is shown in Figure 8. For simplicity, we only show models with a fixed density of \(n_{\rm H}=10^{2}\,\rm cm^{-3}\) and stellar metallicities of 0.001 Z\({}_{\odot}\), 0.025 Z\({}_{\odot}\), and 0.2 Z\({}_{\odot}\). We find particularly the slope and upper mass of the IMF are restrictive in reproducing the extreme EWs of C iii], as models with an upper mass of 100 M\({}_{\odot}\) only reach EWs of approximately 20 A3. However, none of these models could simultaneously reproduce the Figure 8: EW(C iii]) plotted against EW(H\(\alpha\) ) (left-hand panel), C iii]/He ii 1640 line ratio (middle panel) and C iv/C iii] (right-hand panel) using the cloudy models described in Section 4.1.3. The horizontal dashed line indicates the observed EW (C iii]) while the horizontal dotted line indicates the reddening-corrected EW(C iii]) where line flux is corrected using nebular E(B-V) while continuum is corrected using the stellar E(B-V). Similarly, the vertical dashed and dotted lines indicate the observed and reddening-corrected quantities, respectively. The reddening-corrected EW is obtained by using nebular E(B-V) for line fluxes and stellar E(B-V) for continuum, while the reddening-corrected emission line ratios are obtained by using nebular E(B-V) for both emission lines in the ratio. The right-ward pointing arrows in the middle panel indicates the lower limit on the C iii]/He ii where a 2\(\sigma\)-upper limit on He ii line is considered. observed EW of C iii] and H\(\alpha\) emission lines, denoted by dashed horizontal and vertical lines, respectively, in Figure 8). To explore this further, we considered the dust distribution which might affect the nebular C iii] and the underlying stellar continuum differently. We estimate the UV continuum slope \(\beta=\) -0.36\(\pm\)0.02 using the spectral windows from Calzetti et al. (1994) in the wavelength region of \(\sim\)1250-1850, which indicates a red continuum. If we use this \(\beta\) value to deredden the continuum using the SMC law from Reddy et al. (2018) and use E(B-V) derived from the optical data to deredden the emission line again using the SMC law, the EW(C iii]) may decrease by a factor of 3. The reddening-corrected values are shown by dotted vertical and horizontal lines, which can not be reproduced by the models either. The inability of the models to reproduce the observed or reddening-corrected properties of Pox 186 could be due to the simplistic assumptions on the geometry and relative distribution of dust and gas within the photoionization models. In summary, it indicates the need to improve the existing population synthesis and photoionization models. 
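The UV continuum slope quoted above is a power-law fit, Fλ proportional to λ^β, over emission-line-free windows in the roughly 1250 to 1850 Å range. A minimal sketch is given below; the window boundaries listed are placeholders to be replaced by the actual Calzetti et al. (1994) definitions, and the wavelength and flux arrays are assumed inputs.

```python
import numpy as np

def measure_beta(wave, flux, windows):
    """Fit log10(F_lambda) = beta * log10(lambda) + const over continuum windows."""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in windows:
        mask |= (wave >= lo) & (wave <= hi)
    good = mask & (flux > 0)
    beta, intercept = np.polyfit(np.log10(wave[good]), np.log10(flux[good]), 1)
    return beta

# Placeholder windows roughly spanning the line-free FUV continuum; substitute
# the actual Calzetti et al. (1994) window definitions before use.
windows = [(1268, 1284), (1309, 1316), (1342, 1371),
           (1407, 1515), (1562, 1583), (1677, 1740)]
# beta = measure_beta(wave_rest, flux, windows)
```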
#### 4.1.4 Hand Ionizing Radiation High EW(C iii]) is also proposed to be caused by the hard ionizing radiation from extreme stellar populations or AGN (Nakajima et al., 2018; Jiang et al., 2021) or shocks from the radio jets (Best et al., 2000). We rule out the possibility of an AGN/shocks causing the high EW(C iii]) as figure 9 shows that the optical emission line ratios obtained from the integrated spectrum, [O iii]/H\(\beta\) versus [N ii] /H\(\alpha\) (left-hand panel) and [O iii]/H\(\beta\) versus [S ii] /H\(\alpha\) (right-hand panel) do not occupy the AGN/shock region of the classical emission-line diagnostic diagrams (Baldwin et al., 1981; Veilleux & Osterbrock, 1987). Figure 9 also shows a few spaxel-based line ratios lying beyond the photoionization region on the BPT diagrams; however, they are too few (\(\sim\)2.5% and \(\sim\)3% for [N ii] -BPT and [S ii] -BPT diagrams, respectively) to be a conclusive indicator of hard AGN radiation. Hard ionizing radiation is also expected to produce He ii lines, so emission line ratio diagnostic diagrams including optical and UV He ii lines are also used to determine the presence of AGN (Feltre et al., 2016; Brinchmann et al., 2008). However, neither He ii\(\lambda\) 1640 nor He ii\(\lambda\) 4686 lines are detected in the spectrum corresponding to the COS pointing. The above discussion shows that hard ionizing radiation from AGN is unlikely to be the cause of high EW(C iii]). ### Line profiles of carbon lines Figure 10 (left-hand panel) shows that C iv line profile is broadened with respect to O iii] line profile, which appears to be caused by collisional excitation. Berg et al. (2019) observed similar behaviour in a couple of local metal-poor galaxies and attributed this to the resonant scattering nature of C iv. However, for Pox 186, we find that C iii] line is also broadened with respect to O iii] (Figure 10, right-hand panel). Given that the C iii] is not reported to come from resonant scattering, we rule out resonant scattering as the cause of broadening in carbon lines. It is unlikely that outflows could cause broadening in carbon emission lines because outflows would lead to broadening in all emission lines in the same way, which would result in similar line profiles (i.e., including O iii]). Still, we explore the outflows signatures in Figure 11, where we overplot the [O iii] emission line (normalized by its peak flux) along with Si ii\(\lambda\) 1260 (normalized by the median flux within the local spectral region). Both O iii] and Si in \(\lambda\) 1260 lines are centered at \(\sim\) 0 km s\({}^{-1}\), showing no signatures of outflow. However, a hint of ionized gas suffering turbulence is present in the line profile of H\(\alpha\) shown in Figure 12, which shows the presence of a broad underlying component of H\(\alpha\) along with a narrow component. Figure 13 shows the velocity profile of the resonantly scattered C iv\(\lambda\lambda\) 1548,1550 doublet. The distinct blue and red peaks exhibited by both C iv emission lines are quite interesting, as no previous studies have ever resolved the two peaks in both emission lines of the C iv doublet, though the stronger C iv\(\lambda\)1548 has been reported to have double peaks (Berg et al., 2019). The origin of double-peaked C iv is difficult to understand because C iv line profile will be impacted by the relative fraction of the gas emitting the narrow nebular emission and the foreground ISM resulting in absorption. 
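The velocity-space comparisons of Figures 10 and 11 amount to converting each line to a common velocity axis about the systemic redshift and normalising by the peak flux; a short sketch is given below, with the arrays assumed rather than taken from the actual spectra.

```python
import numpy as np
import astropy.constants as const

C_KMS = const.c.to("km/s").value
Z_SYS = 0.0040705                  # systemic redshift from Section 3.1

def to_velocity(wave, rest_wave, z=Z_SYS):
    """Observed wavelength (Å) -> velocity (km/s) about the systemic line centre."""
    centre = rest_wave * (1.0 + z)
    return C_KMS * (wave - centre) / centre

def peak_normalised(flux):
    """Normalise an emission-line profile by its peak flux, as in Figure 10."""
    return flux / np.nanmax(flux)

# e.g. compare O iii] 1666 with C iii] 1907 on a common velocity axis:
# v_oiii = to_velocity(wave_uv, 1666.15); prof_oiii = peak_normalised(flux_oiii)
```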
We address two possible scenarios here: (1) a pure nebular emission Figure 9: Classical optical emission line ratio diagrams: [O iii]/H\(\beta\) versus [N ii]/H\(\alpha\) (left-hand panel) and [O iii]/H\(\beta\) versus [S ii] /H\(\alpha\) (right-hand panel). Black solid curve and dashed curve represent the maximum starburst line from Kewley et al. (2001, theoretical) and Kauffmann et al. (2003, empirical), respectively, distinguishing between the photoionized and non-photoionized regions. The blue dot denotes the corresponding emission line ratios obtained from the optical spectra integrated over the COS aperture, while the grey dots indicate the spatially-resolved emission line ratios. with no ISM absorption and (2) nebular emission along with ISM absorption. For modelling a purely nebular C iv (Figure 13, left panel), we fit two Gaussian components to each C iv emission line consisting of a blue-shifted (dashed-blue fit) and a redshifted component (dashed-red fit). The peak separation between the blue and the red peak ( V\({}_{\rm sep}\)) is \(\sim\)132\(\pm\)2 km s\({}^{-1}\) for each C iv line, which is \(\sim\)25 km s\({}^{-1}\) (on average) higher than those found by Berg et al. (2019) for two local dwarf irregular galaxies. We note that the redshifted component is broader than the blue-shifted component. For modelling the second scenario comprising of nebular emission and interstellar absorption (Figure 13, right panel), we model the emission in each C iv line with a Gaussian profile (dashed-blue fit) and interstellar absorption via the Voigt profile (dashed-red fit). The Voigt profile corresponding to the ISM absorption is narrower compared to the broad Gaussian profile, pointing towards a lower fraction of foreground high ionization gas along the line of sight. ### Stellar Winds Stellar winds originate from hot and massive stars, i.e., with masses \(>\)\(\sim\)40 M\({}_{\odot}\) and temperatures \(>\) 25,000 K, and lead to P Cygni-type Figure 11: Comparison of velocity profiles of O iii]\(\lambda\) 1666 emission line (blue curve) and Si ii \(\lambda\) 1260 absorption line (purple curve), where O iii]\(\lambda\) 1666 is normalized by its peak flux, and Si ii \(\lambda\) 1260 is normalized by the median flux in the velocity range of -250 to 250 km s\({}^{-1}\). No signature of outflow is present. Figure 12: A narrow (purple Gaussian) and a broad (red Gaussian) component is needed to reproduce the H\(\alpha\) line profile, hinting towards the ISM turbulence within Pox 186. Figure 10: Comparison of O iii]\(\lambda\) 1666 line with respect to C iii]\(\lambda\) 1907 (left-hand panel) and C iv.\(\lambda\) 1548 (right-hand panel). In both panels, all emission lines (C iv\(\lambda\) 1548, O iii]\(\lambda\) 1666 and C iii]\(\lambda\) 1907) are normalized by their peak fluxes. UV line profiles, typically for the strongest lines, N v \(\lambda\) 1240, Si iv \(\lambda\) 1400 and C iv\(\lambda\) 1550. The UV spectra of Pox 186 show a strong P-Cygni N v \(\lambda\) 1400 feature (Figure 3a, 1st row), no Si iv \(\lambda\) 1400 and a weak P-Cygni C iv\(\lambda\) 1550 feature (Figure 3a, 3rd row). The weaker C iv\(\lambda\) 1550 compared to N v \(\lambda\) 1240 at lower metallicities, is indicative of a lower wind density and velocity (Leitherer et al., 2001, 2010). 
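Stepping back to the C iv doublet modelling described above, the two scenarios can be sketched in velocity space with lmfit: two nebular Gaussians per line for scenario 1, or one Gaussian plus a negative-amplitude Voigt component for the ISM absorption of scenario 2. Initial guesses and component choices are illustrative only.

```python
from lmfit.models import GaussianModel, ConstantModel, VoigtModel

def fit_pure_emission(velocity, flux):
    """Scenario 1: blue- and red-shifted Gaussians; returns the peak separation V_sep."""
    model = (ConstantModel(prefix="c_")
             + GaussianModel(prefix="blue_")
             + GaussianModel(prefix="red_"))
    pars = model.make_params(c_c=0.0,
                             blue_center=-65.0, blue_sigma=30.0, blue_amplitude=1.0,
                             red_center=65.0, red_sigma=50.0, red_amplitude=1.0)
    out = model.fit(flux, pars, x=velocity)
    v_sep = out.params["red_center"].value - out.params["blue_center"].value
    return v_sep, out

def fit_emission_plus_ism(velocity, flux):
    """Scenario 2: one nebular Gaussian plus a Voigt profile forced into absorption."""
    model = (ConstantModel(prefix="c_")
             + GaussianModel(prefix="em_")
             + VoigtModel(prefix="abs_"))
    pars = model.make_params(c_c=0.0,
                             em_center=0.0, em_sigma=60.0, em_amplitude=1.0,
                             abs_center=0.0, abs_sigma=15.0, abs_amplitude=-0.3)
    pars["abs_amplitude"].set(max=0.0)   # keep the Voigt component in absorption
    return model.fit(flux, pars, x=velocity)
```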
It is likely that the early O main-sequence stars are the main constituent of Pox 186, since these stars do not display wind effects in Si iv \(\lambda\) 1400, but in N v \(\lambda\) 1240 and C iv\(\lambda\) 1550 as we find in Pox 186 spectra. Figure 14 shows the BPASS models (v2.1, IMF slope = -2.35 and upper stellar mass-limit = 300 M\({}_{\odot}\)) overlaid on C iv P-Cygni profile (normalized by the continuum at \(\lambda\)\(\sim\)1560-1570A) both for the single stars (left-hand panel) and binary stars (right-hand panel) population for stellar ages varying from 1-2.5 Myr and metallicity of Z = 0.05Z\({}_{\odot}\). It appears that the stellar populations as young as 1.6 Myr are sufficient to reproduce the weak C iv P-Cygni profile. The inclusion of binary stars has no effect on the overall C iv profile, as the binary stellar population becomes more important only at later ages (3-5 Myr) (Eldridge et al., 2020). The blue-ward absorption in the N v P-Cygni is blended with the Ly\(\alpha\) absorption from Pox 186 which is itself blended with the Ly\(\alpha\) absorption from the MW. Before comparing the N v P-Cygni with the stellar population syntheses models such as BPASS, it is necessary to model and remove the Ly\(\alpha\) absorption from Pox 186 and the MW, which requires careful modelling of the stellar continuum. We will present the detailed modelling of these components in a follow-up paper. ### Implications for JWST+ALMA studies of early galaxies #### 4.4.1 Apparent absence of Ly\(\alpha\) The absence of resonantly-scattered Ly\(\alpha\) emission in spite of strong C iii] and C iv emissions is worth-noting (Figure 3a, upper-panel). A positive correlation is suggested/expected between EW(C iii] and EW(Ly\(\alpha\)) by a few studies (e.g., Stark et al., 2014), though Rigby et al. (2015) suggest that a positive correlation exists for strong emitters with EW(C iii])\(>\)5A and EW(Ly\(\alpha\)) \(>\) 50A, with correlation getting weaker for weaker emitters. Given that Pox 186 shows the highest EW(C iii]) detected in the local Universe so far, we expect at least some Ly\(\alpha\) emission. Similarly, Berg et al. (2019) propose that the double-peaked structure of resonantly-scattered C iv emission could be associated with a double peak in Ly\(\alpha\) as well. Moreover, Pox186 shows log \(\mathcal{U}\) = -2.4 \(\pm\) 0.3 by or log (q/cm s\({}^{-1}\)) = 8.1 \(\pm\) 0.4, which lies in the range of z\(-\)2-3 Ly\(\alpha\) emitters (Nakajima & Ouchi, 2014) further suggesting that Pox 186 could be a Ly\(\alpha\) emitter. To investigate this further, we inspected the UV spectrum of Pox 186 taken with Space Telescope Imaging Spectrograph (STIS/HST) dataset (PI: Corbin, PID: 8333) which indeed shows Ly\(\alpha\) emission (Figure 11). We note that the STIS and our COS pointings are offset by \(\sim\) 2 arcsec (\(\sim\) 168 pc), which indicates that the region emitting Ly\(\alpha\) is not entirely overlapping with that emitting ionized carbon, and lies at the outskirts of Pox 186 probably because of Ly\(\alpha\) escaping due to the concentrated feedback from the star-formation (Heckman et al., 2011). Only a deep spatial map of Ly\(\alpha\) can help identify any potential Ly\(\alpha\) emission within Pox 186. Moreover, a statistically significant sample of galaxies such as Pox 186 is required to establish any spatial offset between Ly\(\alpha\) emission and carbon emission. 
Such spatial offsets (\(\sim\)168 pc) between Ly\(\alpha\) emission and C iii] or C iv emission may not be probable/distinguishable within the reionization era galaxies, simply because of the angular resolution of the existing instruments. So, even if originating from different regions of galaxies, UV carbon emission lines, particularly the stronger C iii] line, might still be a good indicator of Ly\(\alpha\) emission emerging from the ISM of galaxies even when Ly\(\alpha\) is unavailable at redshifts \(>\) 6 caused by a significantly large IGM neutral fraction (e.g., Fan et al., 2006). #### 4.4.2 Lyman Continuum escape fraction Figure 15 shows C iv/C iii] versus EW(C iii]) for Pox 186 (red point), along with measurements for galaxies at different redshifts compiled in Schmidt et al. (2021). Schaerer et al. (2022) finds that a C iv/C iii]\(>\) 0.75 (shaded orange region) is characteristic of strong Lyman Continuum (LyC) leakers, i.e. galaxies with LyC escape fraction \(\rm f_{esc}(LyC)\)\(>\) 0.1. Pox 186 exhibits C iv/C iii]= 0.30\(\pm\)0.04, which is much smaller Figure 13: We model the resonantly scattered double-peaked C iv\(\lambda\lambda\) 1548,1550 doublet according to two possible scenarios: (1) Purely emission without any absorbing foreground interstellar medium in the line of sight (left-hand panel), for which we model the C iv doublet via multi-component Gaussian fits to identify the peaks of the blue-shifted (dashed blue curve) and the red-shifted (dashed red) components. (2) Nebular emission along with interstellar absorption (right-hand panel), where we model the C iv nebular emission via single Gaussian (dashed red curve) and the interstellar absorption via Voigt profile (dashed blue curve). On both panels, the overall best fit is given by the solid green line. than that exhibited by the strong LyC leakers known so far in the nearby Universe. Chisholm et al. (2022) suggests that strong LyC leakers have lower values of observed UV continuum slope (\(\beta\)). We estimate f\({}_{\rm esc}\)(LyC)\(\sim 10^{-4}\) from the \(\beta\) slope. A negligible f\({}_{\rm esc}\)(LyC) in Pox 186 is consistent with the absence of Ly\(\alpha\) for the COS pointing, however, it is inconsistent with an extreme [O iii] 88 um/[C ii] 157 um shown by the Herschel/PACS maps of Pox 186. Katz et al. (2023) demonstrates that [C ii] deficit (as in Pox 186) is a good indicator of LyC leakers. We note that the resolution of the PACS data is much worse (\(\geq 9^{\prime\prime}\)4) than that of our UV and optical data. It is possible that there might be a spatial offset between the region resulting in high [O iii] 88 um/[C ii] 157 um and region emitting bright UV lines. Footnote 4: For reference, the beam size of PACS spectrometer is \(\sim\)9′′and 12′′at 60 μm and 150 μm, respectively. Izotov et al. (2018) reports an increase in f\({}_{\rm esc}\)(LyC) with an increase in [O iii]/[O ii] though with a large scatter. We could not estimate [O iii]/[O ii] of Pox 186 either at the spatially-resolved data or using the integrated spectrum coinciding with the COS aperture as [O ii] \(\lambda\)\(\lambda\) 3727,3729 doublet is undetected. Guseva et al. (2004) present the emission line fluxes of Pox 186 within sl17''\(\times\)3.2''and 2''\(\times\)6''. Using those line fluxes, we estimate [O iii]/[O ii] = 18-22 for Pox 186 indicating a high f\({}_{\rm esc}\)(LyC). Moreover, Ramambason et al. (2022) reports a 40% escape fraction of ionizing photons for Pox 186 by using a suite of IR lines. 
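The C iv/C iii] ratio used in the LyC comparison above follows directly from the Table 4 fluxes; a short quadrature error-propagation snippet is shown below. Only the random measurement uncertainties are propagated here, so the result need not match the quoted uncertainty, which may include additional systematic terms.

```python
import numpy as np

# Observed fluxes from Table 4, in units of 1e-14 erg s^-1 cm^-2.
civ, civ_err = np.array([0.50, 0.27]), np.array([0.02, 0.02])
ciii, ciii_err = 2.56, 0.05

num = civ.sum()
num_err = np.sqrt((civ_err ** 2).sum())
ratio = num / ciii
ratio_err = ratio * np.sqrt((num_err / num) ** 2 + (ciii_err / ciii) ** 2)
print(f"C iv/C iii] = {ratio:.2f} +/- {ratio_err:.2f}")   # ~0.30; the paper quotes 0.30 +/- 0.04
```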
Figure 15 also shows a couple of reionization era galaxies which have C iv/C iii] below the threshold proposed by Schaerer et al. (2022), and close to Pox 186. This raises two questions: (1) Is C iv/C iii] or \(\beta\) slope a good enough predictor for LyC leakers? (2) Were there EoR galaxies which did not contribute to the reionization of the Universe? Such questions can be addressed by making the simultaneous use of UV and FIR observations of large samples of EoR galaxies as well as their local analogues. It would be crucial to do follow-up UV+optical observations via JWST of EoR galaxies, which show extreme IR properties via ALMA, and vice versa. ## 5 Summary We investigate the ionized carbon within a local dwarf galaxy Pox 186 using the HST/COS UV data complemented by the GMOS-IFU optical data and Herschel FIR data. Our main results are summarized as follows: 1. Using the HST/COS UV emission lines, we measure a redshift Figure 14: Comparison of observed C iv P-Cygni feature (grey spectrum) within Pox 186 normalized by the continuum level between 1560–1570 Å with the ippass models at Z=0.05Z\({}_{\odot}\) comprising single (left-hand panel) and binary stars (right-hand panel) at different ages, i.e., 1 Myr (brown), 1.3 Myr (magenta), 1.6 Myr (blue), 2 Myr (olive) and 2.5 Myr (yellow). The location of C iv emission and absorption from Pox 186 are marked in red. The C iv absorption originating from MW is marked in green. Figure 15: C iv/C iii] versus EW(C iii]) for Pox 186 (red point) and for published data at different redshifts presented in Schmidt et al. (2021). The orange band represents the C iv/C iii]\(>0.75\), which is suggested by Schaerer et al. (2022) as strong continuum leakers. Error bars are simply not shown for clarity. z = 0.0040705 \(\pm\) 0.000013 for Pox 186. This corresponds to a luminosity distance of 17.5 Mpc assuming a flat \(\Lambda\)CDM cosmology or 12.6 Mpc assuming Cosmicflows-3. * The COS/UV data reveals very high EW of carbon emission lines, i.e. EW(C iii] = 35.85\(\pm\)0.73A and EW(C iv) = 7.75\(\pm\)0.28A. * We explore several scenarios to explore the high EW of carbon lines, including a high effective temperature, higher than average carbon-to-oxygen ratio for a given gas-phase metallicity, photoionization cloudy models including binary stars, top-heavy IMF and nebular \(\alpha/Fe\) enhancement and in-homogeneous dust-distribution. The photoionization models could not simultaneously reproduce all the observables irrespective of dust-reddening, which could be due to the simplistic assumptions of the model parameters. * The C iii] and C iv lines also show broadening with respect to the O iii] emission lines though the cause of this broadening remains unknown. We rule out outflows causing broad carbon emission lines as no outflow signatures are found in the velocity profiles of O iii] and Si ii lines. * Optical integrated spectrum coinciding with COS aperture, shows a broad and faint underlying component in H\(\alpha\) along with a narrow component. Ruling out outflows on the basis of UV data, the H\(\alpha\) velocity profile indicates a turbulent ISM. * The C iv doublet shows clearly distinct double peaks for each of the two emission lines, which can be explained via two scenarios, such as pure emission with no absorbing foreground ISM or nebular emission along with a little absorbing ISM in the foreground. 
* The high EW(C iii]) and log \(\mathcal{U}\) = -2.4 \(\pm\) 0.4 suggests a high EW(Ly\(\alpha\)) for Pox 186; however, COS spectra do not show any signature of Ly\(\alpha\) though a spatially-offset STIS spectrum does show Ly\(\alpha\) emission. * We report an observed UV continuum slope \(\beta\) = -0.36\(\pm\)0.04 which corresponds to f\({}_{\rm esc}\)(LyC)\(\sim\)10\({}^{-4}\), indicating that Pox 186 is not a LyC leader. C iv/C iii] is also below the threshold for LyC leakers suggested by Schaerer et al. (2022). This is in contrast with the extreme [O iii](C ii] FIR line ratio, 40% escape fraction (Ramambason et al., 2022) and the high [O iii](O ii] values from the literature. This raises questions on the potential use of \(\beta\) or C iv/C iii] as tracers of LyC leakers. This work shows that the extreme IR [O iii]/[C ii] emission line ratios could correspond to extreme UV properties such as high EW of carbon lines (C iii] and C iv), high carbon-to-oxygen ratio, broadened emission carbon line profiles and double-peak within the resonant carbon line doublet, C iv. However, the apparent absence of Ly\(\alpha\) emission and negligible LyC escape fraction (as estimated from UV slope and C iv/C iii]ratio) within a dwarf galaxy with such extreme UV and IR properties are puzzling. This requires a similar investigation on a larger sample of similar galaxies with UV+FIR data. The combination of HST and Herschel data for the local Universe, and JWST and ALMA for the reionization era Universe are crucial in carrying out such studies and understanding the similarities and differences between the EoR galaxies and their local analogues. ## Acknowledgements NK thanks Joe Hunkeler for their help in setting up the new Gemini traf reduction pipeline and debugging the issues with the pipeline. NK thanks Elizabeth Stanway for clarifying information on bypass models. RS acknowledges financial support from the UK Science and Technology Facilities Council (STFC). JW acknowledges support from the ERC Advanced Grant 695671, "QUENCH", and the Fondation MERAC. This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO 16071 and 16445. This work is further partially based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministerio da Ciencia e Tecnologia (Brazil) and SECYT (Argentina) This research has made use of NASA's Astrophysics Data System Bibliographic Services'; SAOImage DS9, developed by Smithsonian Astrophysical Observatory'; Astropy, a community-developed core PYTHON package for Astronomy (Astropy Collaboration et al., 2013); matplotlib (Hunter, 2007) and numpy (Harris et al., 2020; Van Der Walt et al., 2011). This research has made use of the Spanish Virtual Observatory ([https://svo.cab.inta-csic.es](https://svo.cab.inta-csic.es)) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. 
## Data availability The data presented in this paper are available in the Multimission Archive at the Space Telescope Science Institute (MAST) and Gemini Observatory Archive.
Carbon spectral features are ubiquitous in the ultraviolet (UV) and far-infrared (FIR) spectra of galaxies in the epoch of reionization (EoR). We study Pox 186, a blue compact dwarf galaxy, using UV, optical, mid-infrared and FIR data taken with space-based (Hubble, Spitzer, Herschel) and ground-based (Gemini) telescopes. This nearby galaxy (z ~ 0.0040705) is a possible analogue of EoR galaxies, as indicated by its strong FIR line ratio [O iii] 88/[C ii] 157 (> 10). Its UV spectrum shows C iii] λλ1907,1909 emission with the strongest equivalent width (EW = 35.85 ± 0.73 Å) observed to date.
2309.06184
Design monolayer iodinenes based on halogen bond and tiling theory
Xenes, two-dimensional (2D) monolayers composed of a single element, with graphene as a typical representative, have attracted widespread attention. Most of the previously reported Xenes, with X drawn from group-IIIA to group-VIA elements, have the bonding characteristics of covalent bonds. In this work, we unveil for the first time the pivotal role of the halogen bond, a distinctive type of bonding with interaction strength between that of a covalent bond and a van der Waals interaction, in 2D group-VIIA monolayers. Combining the ingenious non-edge-to-edge tiling theory and a state-of-the-art ab initio method with the refined local density functional M06-L, we provide a precise and effective bottom-up construction of 2D iodine monolayer sheets, iodinenes, primarily governed by halogen bonds, and successfully design a category of stable iodinenes, encompassing the herringbone, Pythagorean, gyrated truncated hexagonal (i.e. diatomic-kagome), and gyrated hexagonal tiling patterns. These iodinene structures exhibit a wealth of properties, such as flat bands, nontrivial topology, and fascinating optical characteristics, offering valuable insights and guidance for future experimental investigations. Our work not only unveils the unexplored halogen bonding mechanism in 2D materials but also opens a new avenue for designing other non-covalent bonding 2D materials.
Kejun Yu, Botao Fu, Runwu Zhang, Da-shuai Ma, Xiao-ping Li, Zhi-Ming Yu, Cheng-Cheng Liu, Yugui Yao
2023-09-12T12:52:31
http://arxiv.org/abs/2309.06184v2
# Design monolayer iodinenes based on halogen bond and tiling theory ###### Abstract Xenes, two-dimensional (2D) monolayers composed of a single element, with graphene as a typical representative, have attracted widespread attention. Most of the previous Xenes, X from group-IIIA to group-VIA elements have bonding characteristics of covalent bonds. In this work, we for the first time unveil the pivotal role of a halogen bond, which is a distinctive type of bonding with interaction strength between that of a covalent bond and a van der Waals interaction, in 2D group-VIA monolayers. Combing the ingenious non-edge-to-edge tiling theory and state-of-the-art ab initio method with refined local density functional M06-L, we provide a precise and effective bottom-up construction of 2D iodine monolayer sheets, iodinenes, primarily governed by halogen bonds, and successfully design a category of stable iodinenes, encompassing herringbone, Pythagorean, gyrated truncated hexagonal, i.e. diatomic-kagome, and gyrated hexagonal tiling pattern. These iodinene structures exhibit a wealth of properties, such as nontrivial novel topology, flat bands and fascinating optical characteristics, offering valuable insights and guidance for future experimental investigations. Our work not only unveils the unexplored halogen bonding mechanism in 2D materials but also opens a new avenue for designing other non-covalent bonding 2D materials. _Introduction. --_ Two-dimensional (2D) materials have been a hot topic since the discovery of graphene. Up to now, as one of the most important members of 2D materials, increasing Xenes made from group-III to VI elements have been predicted and synthesized, e.g., borophene [1; 2], silicene [3], phosphorene [4] and tellurene [5]. Generally, disparate valence electron configurations among diverse main group elements engender markedly distinct structural arrangements, bonding behaviors, and material characteristics within the various Xene families. Yet, as one of the most intriguing members of the Xene family, rare study on group-VII Xenes has been done. Undoubtedly, exploring the theoretical foundations of group-VII Xenes will enhance our understanding of 2D materials and generate a more extensive range of practical applications. Solid halogen crystals exhibit a layered structure in the Cmca space group [6], with stacked planar monolayers. Among the halogens, only iodine maintains its crystalline phase under ambient conditions. The halogen bond (XB, X stands for halogen), akin to hydrogen bonds as a non-covalent interaction [7; 8], plays a crucial role in stabilizing bulk iodine, despite often being overlooked. Recently, few-layer iodine nanosheets have been experimentally obtained from bulk iodine by physical exfoliation [9; 10; 11; 12], but it remains unclear about the fine structure of the iodinenes. On the other hand, it is noteworthy that XBs have been utilized for fabricating halogen-bonded organic frameworks (XOFs) [13; 14] successfully with reliable stability in both solid and solution phases. Then the questions arise naturally: could 2D iodinenes be constructed by utilizing halogen bonding interactions, how to design them, and what interesting properties do such 2D materials have? In this work, based on the XBs, we predict a series of monolayer iodinenes as a new category of 2D materials utilizing the tiling theory and first-principle calculations. 
First of all, through our meticulous DFT calculations, including the calculations of lattice parameters and phonon spectra, we find that M06-L [15] is an appropriate exchange-correlation functional for evaluating the XB and realize that the XB plays a key role in the formation of stable structures of halogen crystals. Herringbone iodinene, as monolayer iodinene exfoliated from bulk iodine, shows dynamic stability resulting from XB networks. Afterward, according to the interaction model of XB and taking into account the diversity of XB networks, we systematically design a series of structures of iodinenes based on halogen-bonded iodine molecules and the tiling theory to map the structure. Finally, we investigate the electronic structures of our designed iodinenes. All of those halogen-bonded iodinenes are semiconductors and exhibit nontrivial band topology, including herringbone, Pythagorean, gyrated truncated hexagonal (GTH), i.e. diatomic-kagome, and gyrated hexagonal (GH) iodinenes. Specifically, herringbone, Pythagorean, GTH/diatomic-kagome, and GH iodinenes are intrinsic Stiefel-Whitney topological insulators with higher-order bulk-edge correspondences, among which Pythagorean, GTH/diatomic-kagome, and GH iodinenes are \(Z_{2}\) topological insulators exhibiting helical edge states with appropriate filling. These iodinenes possess flat bands that result from XB interactions and special structure, leading to remarkable absorption coefficients that exceed \(10^{5}cm^{-1}\) in the visible region, which reveals potential for photoelectronic application. _The halogen bond and selection of DFT functional._ -- Figure 1(a) depicts the layered structure of bulk iodine. Specifically, each planar layer within the bulk iodine is formed by halogen-bonded iodine molecules. The electrostatic interaction model offers an intuitive image of XBs, as depicted in Fig. 1(b), which displays the electrostatic potential (ESP) map of a monolayer iodine sheet. The unique ESP map is derived from the inherent anisotropic charge density distribution (see Fig. S1(b)) of iodine molecules themselves, where the electron density is accumulated around the equator and depleted on the elongation of the covalent bond. The depleted region is the so-called \(\sigma\)-hole [16], corresponding to the blue tip in Fig. 1(b) with positive ESP. The XB is defined as the attraction between a nucleophilic region and the positive electrostatic region of a \(\sigma\)-hole, analogous to a hydrogen bond [17]. Additionally, weak Van Der Waals (vdW) forces dominate the interaction between layers along the \(\mathbf{c}\) axis in the bulk iodine. This arrangement reveals the potential for exfoliation to yield a freestanding monolayer of iodine, given the significantly greater strength of the XB compared to vdW interactions. A comparative analysis of various GGA and meta-GGA functionals was conducted to evaluate their computational accuracy in modeling the interactions within bulk iodine. The results indicate that M06-L accurately captures the attractive XB interaction in bulk iodine, whereas SCAN [18] does not exhibit this capability (see Fig. S2). This finding is consistent with the study by George et al. [19]. It is pertinent to highlight that the utilization of PBE [20] and PBEsol [21] functionals for assessing the interaction associated with halogen bonding is deemed inappropriate (as delineated in Fig. S1&S2 and Table S1), despite their application in recent research [22]. 
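The geometry checks behind this functional comparison (intramolecular bond length, shortest intermolecular iodine contact and the XB angle in the relaxed bulk cell, to be compared with experiment) can be scripted with ASE as sketched below; the structure file name and atom indices are placeholders to be taken from the actual relaxed cell.

```python
from ase.io import read

# Hypothetical relaxed bulk-iodine structure, e.g. a CONTCAR from the M06-L+D3 run.
bulk = read("bulk_I2_M06L_D3.vasp")

# Indices of a covalently bonded I-I pair and its nearest intermolecular
# (halogen-bonded) neighbour; placeholders to be identified from the real cell.
i, j, k = 0, 1, 2

d_intra = bulk.get_distance(i, j, mic=True)    # intramolecular I-I bond length
d_xb = bulk.get_distance(j, k, mic=True)       # shortest intermolecular I...I contact
xb_angle = bulk.get_angle(i, j, k, mic=True)   # covalent-bond / XB angle, cf. Fig. 1(e)
print(d_intra, d_xb, xb_angle)
```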
To account for long-range dispersive interactions, the DFT-D3 method [23] was employed, resulting in a more accurate geometry configuration for bulk iodine (as demonstrated in Table. S1). Therefore, M06-L+D3 was chosen for the DFT calculations. Further computational details can be found in Supplemental Material (SM) [24]. _Design iodinenes with the tiling theory._ -- Firstly, a monolayer iodine is exfoliated from bulk iodine, and we call it herringbone iodine since the profile resembles herringbone. After relaxation, the intramolecular bond length and XB length are 2.75 and 3.60 A respectively with only minor deviations from the bulk counterparts. Additionally, the electrostatic potential (ESP) map of the herringbone iodine is similar to that of the monolayer iodine sheet in bulk [see Fig. 1(b)]. There is no imaginary frequency in phonon spectra (Fig. 1(c)) of freestanding monolayer iodine, revealing its dynamic stability. The XB network plays a crucial role in enabling individual iodine molecules to link with each other to form a planar structure, while herringbone iodine would bear dynamic instability if XB is not included (see Fig. S5). Considering the crucial role of XB in forming stable bulk iodine and exfoliated monolayer, we now provide a general and effective bottom-up construction of monolayer iodinenes based on XB, with the usage of ingenious non-edge-to-edge tiling theory, to find all the structures as exhaustively as possible. Geometrically, it is evident that herringbone iodine can be achieved by placing iodine atoms at each vertex of a hexagonal herringbone tiling, with the long edges representing the XBs. Therefore, we aim to construct iodinenes by bonding diatomic iodine molecules with XBs and utilizing the tiling theory [25; 26; 27]. The molecular orbital picture of the halogen bonding between two iodine molecules provides a valuable reference for constructing the halogen bonding network under investigation. As shown in Fig. 1(d), each molecule has one \(\sigma_{p_{x}}\) bond, two \(p_{y}\) orbitals (lone pairs) and one \(\sigma^{*}_{p_{x}}\) antibond orbital [28]. \(s\) orbital forms a core pair and the \(p_{z}\) orbital lies out of the plane, so they are ignored in the formation of XB. Electrons on the top diiodine could be transferred from \(p_{y}\) lone pair to the \(\sigma^{*}_{p_{x}}\) antibond orbital on the bottom diiodine, which is the orbital interaction picture of halogen bonding [29]. For the case of herringbone iodine, every molecule acts as both an electron donor and acceptor, constituting an XB net (red dashed line in Fig. 1(b)). Hence from the XB interaction picture in herringbone iodine, several essential points can be derived to guide the construction of iodine. (1) Each diio Figure 1: (a) 3D crystal structure (the upper panel) of bulk iodine and its monolayer (the lower panel). Black bold and red dashed lines indicate intramolecular covalent bonds and intermolecular halogen bonds (XBs), respectively. (b) Electrostatic potential (ESP) map on 0.015 a.u. charge density isosurface of monolayer herringbone iodine. \(\sigma\)-hole occurs at the blue tip on the isosurface with positive electrostatic potential. (c) Phonon spectra of monolayer herringbone iodine. (d) Schematic diagram of XB between two iodine molecules. The iodine atoms, \(p_{y}\) orbitals (lone pairs), \(\sigma_{p_{x}}\) bond and \(\sigma^{*}_{p_{z}}\) antibond orbital (\(\sigma\)-hole) are depicted as purple balls, green spindles, yellow ellipsoid and blue ellipsoid, respectively. 
\(p_{z}\) and \(s\) orbitals are not shown here because \(p_{z}\) sits out of the plane and \(s\) is the core orbital. The red dotted line signifies the presence of the XB, involving the transfer of electrons from a lone pair to a \(\sigma\)-hole through a donor-acceptor interaction. (e) Schematic diagram of XB angle. -dine molecule exhibits two \(\sigma\)-holes and two lone pairs within the same plane, enabling each atom to form two connections with other diiodines. Constructing monolayer iodinee based on the XB network is equivalent to a one-to-one correspondence between the \(\sigma\)-hole and the lone pair. (2) Iodineene is expected to be planar, as all \(\sigma\)-holes and lone pairs involved in the XB are confined to the same plane. (3) Every iodine atom should be considered equivalent to one another in principle. There is no justification for discrimination among them, as the XBs occur uniformly between diiodine molecules. The tiling theory can help us construct iodinenes systematically. If any two polygons are either disjoint or share one vertex or one entire edge in common, tiling by convex polygons is called edge-to-edge tiling [30; 31]. In contrast, if adjacent tiles share only part of their edge, this is so-called non-edge-to-edge tiling. Some works have predicted structures of 2D covalent materials utilizing the tiling theory [25; 26; 27], more specifically, within the edge-to-edge classification. Regarding iodinenes, there are refined concepts that involve non-edge-to-edge tilings. Firstly, each atom is treated as a vertex, and intramolecular bonds and XBs are represented as edges in the tiling. Secondly, each vertex is connected to three edges, two of which have equal length, which corresponds to the case that each iodine atom connects a covalent bond and two equidistant XBs. Additionally, adjacent edges with equal lengths cannot be collinear, because \(\sigma\)-hole and \(p_{y}\) lone pair on the same atom cannot be in alignment. Lastly, all vertices should be equivalent, meaning they are related by the symmetry of the tiling, known as vertex-transitive or uniform tiling [30; 31]. Based on this analysis, we can identify the required tilings from the existing non-edge-to-edge patterns [32]. The results are presented in Fig. 2. Five non-edge-to-edge tilings (see Fig. 2(a1)\(\sim\)(a5)) are selected from the existing 91 types of uniform tiling [32] to map the tiling pattern to iodinee structure. Hence, the nomenclatures of those iodinenes could be labeled by their tiling patterns. The initial length of the short and long edges are set as 2.75 and 3.60 A respectively, according to the intramolecular and XB length in the case of monolayer herringbone iodinee, and all XBs'angle (see sketch map at Fig. 1(e)) are set as 180 \({}^{o}\). After structural relaxation, all the initial structures are slightly distorted. Bonding features could be seen from the charge density and electrostatic potential (ESP) map (Fig. S6 & S7). The herringbone pattern (Fig. 2(b1)) is the same as the foregoing herringbone iodinee. The Pythagorean tiling (Fig. 2(a2)) is tessellated by two different size squares, whose name originates from the _Pythagorean_ theorem, so we call the corresponding structure as Pythagorean iodinee (Fig. 2(b2)). Gyrated truncated hexagonal tiling (GTH, see Fig. 2(a3)) is composed of regular hexagons and regular trigons, resembling the diatomic-kagome lattice topologically. As for the gyrated hexagonal tiling (GH, see (Fig. 
2(a4))), XB forms the entire edge of the hexagon and partial edge of the trigon, which is reversed from the case of GTH. The fifth pattern is truncated distorted trihexagonal tiling (TDT, see (Fig. 2(a5))), which is composed of small regular trigons and big distorted truncated trigons. More details such as the symmetry and Wyckoff position of these iodinenes are shown in Table. S2. The phonon spectra are calculated (see Fig. S10), revealing that herringbone, Pythagorean, and GTH/diatomic-kagome iodinenes exhibit no imaginary frequency. GH displayed a small imaginary frequency near the \(\Gamma\) point, but it could be eliminated with a mere 4% tensile strain. To comprehensively explore the potential structures of monolayer iodinenes, we employed additional conventional methods to generate more 2D configurations. (a) We assume some simple lattices, including square, triangle, honeycomb, lieb, kagome, etc.(see Fig. S19). These iodinenes behave as Figure 2: (a1-a5) Five non-edge-to-edge tilings for mapping to desired monolayer iodinenes and (b1-b5) corresponding relaxed crystal structures. The nomenclature of those tilings is adopted in a visualized way. herringbone: herringbone tiling; GTH: gyrated truncated hexagonal tiling, which is topologically equivalent to diatomic-kagome structure; GH: gyrated hexagonal tiling; TDT: truncated distorted trihexagonal tiling. In (b1-b5), the covalent bond and halogen bond are depicted by black and red lines respectively. Grey dot lines indicate the unit cell. conventional covalent 2D crystals where iodine serves as a polyvalent element and are highly energetic and dynamically unstable (see Fig. S21), part of which is even more energetic than gas diiodine that is not likely to form a stable freestanding crystal (see Table. S2). (b) We manually connect diiodine-based building blocks (see Fig. S8) solely through XBs, without employing the knowledge of the tiling theory. Nine configurations (see Fig. S9) are obtained, and two of them ("T4" and "T4+T4m", as detailed in SM [24]) didn't appear before. However, both "T4" and "T4+T4m" exhibit dynamic instability (Fig. S10). This result may be attributed to the violation of the requirement for all iodine atoms to be equivalent. (c) Additionally, a structure search using the particle swarm optimization (PSO) method [33] is conducted (see Fig. S19). The PSO search identifies the emergence of herringbone and Pythagorean iodinenes in our design. However, several low-energy configurations, such as GTH and GH iodinenes, are not detected. The contrast highlights the distinct effectiveness of our design. Formation energy (\(\Delta\)E, meV/molecule) of all configurations derived is listed in Table. S2, with the energy of gas diiodine being the reference zero, where both M06-L+D3 and PBE functionals are used. Under the M06-L+D3 scheme, herringbone is the most low-energetic iodinene structure (\(\Delta\)E = -509), followed by Pythagorean (\(\Delta\)E = -457), "T4+T4m", TDT, GH, "T4" and GTH. These seven configurations consist of halogen-bonded iodine molecules and possess the lowest energy exactly. Conversely, high-energetic configurations are comprised of iodine molecules without XB nets, like ITSS (isosceles triangle snub square) and OR (octagon rhombus), or manifest as covalent crystals (refer to Fig. S20). 
These outcomes collectively demonstrate the genuine inclination of monolayer iodinenes toward arrangements formed through halogen-bonded iodine molecules, thereby substantiating the value and validity of our devised approach. _Band topology in monolayer iodinenes.-_ We find herringbone, Pythagorean, GTH/diatomic-kagome and GH iodinenes are higher-order topological insulators. Due to both inversion symmetry and time-reversal symmetry are persevered in these four configurations, their higher-order topology can be characterized by the second Stiefel-Whitney number \(w_{2}\)[34]. Here, two methods are taken into consideration to prove their higher-order band topology: the parity criterion and the Wilson loop method. For the parity criterion, \(w_{2}\) can be calculated by the parity eigenvalues of the valence bands, \[(-1)^{w_{2}}=\Pi_{i=1}^{d}(-1)^{[N_{\text{occ}}^{-}(\Gamma_{i})/2]}, \tag{1}\] where \(\Gamma_{i}\) are the four time-reversal invariant momentums (TRIMs), \(N_{\text{occ}}^{-}\) denotes the number of valence bands with odd parity and \([\cdots]\) is the floor function. Taking herringbone iodinene as an example, we find that \(N_{\text{occ}}^{-}(\Gamma_{i})\) are 6, 8, 7, and 7 for \(\Gamma\), \(M\), \(X\) and \(Y\), respectively, leading to a nontrivial second Stiefel-Whitney number \(w_{2}=1\). This indicates that the herringbone iodinenes is a higher-order topological insulator. The nontrivial \(w_{2}\) can also be evidenced by the Wilson loop method. In Fig. 3(b), we plot the Wilson loop of the herringbone iodinene along \(k_{110}\) direction. One can find that the Wilson loop has only one crossing point on \(\Theta=\pi\). This again proves the higher-order topology of the herringbone iodinene. Therefore, the herringbone iodinene would have 0D-protected corner states. This is confirmed by our numerical calculation. The numerical result is plotted in Fig. 3(c), from which two corner states in the band gap can be clearly observed. The charge density distributions of corner states in real space are presented in Fig. 3(d). The other three kinds of iodinenes are also calculated as second Stiefel-Whitney insulators with nontrivial \(w_{2}\) (see Fig. S16-18 and Table. S3). Spin-orbit coupling (SOC) induced band topology is also studied considering the iodine element possesses a non-negligible SOC strength. As for the cases of Pythagorean, GTH/diatomic-kagome, and GH iodinenes, band crossings between the highest valence band and its lower band are all opened by SOC, which induces band inversions (see Fig. 4). And the same behavior occurs between the lowest conduction band and its upper band. For spatial inversion invariant systems [35], we can calculate the topological Z\({}_{2}\) invariant by counting the parity of the occupied states and find that these iodinenes are 2D topological insulators if the Fermi energy is shifted above the lowest unoccupied band or below highest occupied band. As shown in Fig. 4(b), the gapless helical edge states connect the bulk states of GTH/diatomic-kagome iodinene, corresponding to the Z\({}_{2}\) values labeled in Fig. 4(a), which originates from the bulk-boundary correspondence. Considering 2D materials are fabricated generally on substrates, the Fermi level could be shifted by choosing an appropriate substrate or doping. Hence these iodinenes could serve as a potential topological insulator. 
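As a concrete illustration of the parity criterion in Eq. (1), the short sketch below evaluates \(w_{2}\) from the odd-parity valence-band counts at the four TRIMs. The counts are those quoted above for herringbone iodinene; the function and variable names are ours and do not come from any published code.

```python
# Minimal sketch: parity criterion of Eq. (1) for the second
# Stiefel-Whitney number w2, evaluated from the number of odd-parity
# valence bands N_occ^-(Gamma_i) at the four TRIMs.

def second_stiefel_whitney(n_occ_odd):
    """n_occ_odd: dict mapping TRIM label -> number of odd-parity valence bands."""
    exponent = sum(n // 2 for n in n_occ_odd.values())  # floor function [N/2]
    return exponent % 2  # w2 = 1 if the product of signs is -1, else 0

# Herringbone iodinene counts quoted in the text: 6, 8, 7, 7 at Gamma, M, X, Y.
herringbone = {"Gamma": 6, "M": 8, "X": 7, "Y": 7}
print(second_stiefel_whitney(herringbone))  # -> 1, i.e. nontrivial w2
```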
_Flat bands and optical absorption.-_ As a noncovalent bond, XB could result in flat band structures in iodinenes because Figure 3: (a) Band structures of monolayer herringbone iodinene. (b) The Wilson loop along \(k_{11}\) direction. Insets represent the enlarged Wilson loop around \(\Theta=\pi\), indicating the nontrivial higher-order topology with the nonzero second Stiefel-Whitney number. (c) Energy spectra in real space, with the energy of the two corner states highlighted in red. (d) The distribution of the two corner states (depicted in red) in real space. the relatively weak interactions between iodine molecules, and the flat band would bring about some novel properties. The band structures of herringbone iodineene are depicted in Fig. S11(a), where a quasi-direct band gap (1.165 eV) emerges between the conduction band minima (CBM) at \(\Gamma\) and valence band maxima (VBM) near \(\Gamma\). Notably, the highest occupied band from \(\Gamma\) to VBM is almost flat with an energy shift of about only 8 meV (31 meV for HSE06), so it is a direct gap semiconductor approximately. The in-plane absorption coefficient curve is presented in Fig. S12(a), with two distinct peaks (around 2.04 and 2.74 eV) exceeding \(3\times 10^{5}\)\(cm^{-1}\) in the visible region. This value is considerably higher than that observed in many other 2D materials and organic perovskite materials (\(10^{4}\sim 10^{5}cm^{-1}\)[36]. As a result, herringbone iodine is promising for the photoelectric conversion of solar energy. As shown in Fig. 4, Pythagorean, GTH, and GH iodinenes are direct-gap semiconductors with the gap being 1.69, 1.42 and 1.67 eV respectively. Notably, GTH iodinene possesses a diatomic-kagome lattice leading to two flat bands [37; 38] around -0.27 and 2.12 eV without SOC and one flat band around 2.0 eV with SOC. If with an appropriate substrate or heterojunction, the flat band is promising for a strong correlation platform. Besides, the computed in-plane optical absorption coefficients for Pythagorean, GTH, and GH iodinenes all exceed \(10^{5}cm^{-1}\) within the visible range (with peaks between \(2\sim 3\) eV, see Fig. S12). These flat bands resulting from XBs and geometry contribute to a high density of states, and are beneficial to strong optical absorption, indicating their potential applications in photoelectronic devices. _Conclusion and discussion._ -- An innovative approach to the design of monolayer iodinenes has been put forward, involving the mapping of unique non-edge-to-edge tilings onto iodinene structures that consist of halogen-bonded iodine molecules. The effectiveness of this method has been validated through first-principles calculations, including the identification of four dynamically stable structures: herringbone, Pythagorean, GTH/diatomic-kagome, and GH. These iodinenes are confirmed as higher-order topological insulators with distinguishable corner states. And flat bands emerge, causing exceptional optical absorption within the visible light range. Furthermore, these iodinenes exhibit direct or quasi-direct band gaps, holding potential for optoelectronic application. Moreover, Pythagorean, GTH/diatomic-kagome, and GH iodinenes showcase nontrivial \(\mathrm{Z}_{2}\) band topology with appropriate doping. Notably, traditional approaches, including PSO searching, yield many structures devoid of halogen bonds. Those structures exhibit dynamic instability and higher energies than iodinenes formed by halogen-bonded iodine molecules. 
This outcome underscores the preference for iodinenes constructed from halogen-bonded iodine molecules, further substantiating the rationality of our design approach, and bringing about a new sight for the structural composition of 2D materials. Different from other Xenes as covalent crystals, iodinene belongs to a new category of 2D crystals where XBs dominate. As XBs have extensive applications in crystal engineering [39; 40], catalysis [41], supramolecular chemistry [42; 43], biomolecular systems [44], self-assembly [45; 46] and drug design [47], etc. already, monolayer iodinenes including herringbone, Pythagorean, GTH/diatomic-kagome and GH patterns would provide a new platform to explore more innovations. _Note added._ We become aware of an independent work recently [48]. The work also studies the 2D sheet of iodine. _Acknowledgements._ -- We would like to thank Jin Cao, Chaoxi Cui, Xiaodong Zhou, Xiuxian Yang, and Liping Liu for their helpful discussions. The work is supported by the National Key R&D Program of China (Grant No. 2020YFA0308800) and the National Natural Science Foundation of China (Grant Nos. 12374055, 12204330).
Xenes, two-dimensional (2D) monolayers composed of a single element, have attracted wide attention, with graphene as the prototypical representative. Most Xenes reported so far are covalently bonded and built from group-IIIA to group-VIA elements. In this work, we reveal for the first time that the halogen bond, a distinctive bond whose strength lies between that of covalent bonds and van der Waals interactions, plays a key role in 2D group-VIIA monolayers. By combining the non-edge-to-edge tiling approach with advanced ab initio methods based on the refined M06-L functional, we construct accurate ground-state structures of 2D iodine monolayers. We design iodinenes, 2D iodine monolayers dominated mainly by halogen bonds, including herringbone, Pythagorean, ...
2309.16397
Uncertainty-Aware Decision Transformer for Stochastic Driving Environments
Offline Reinforcement Learning (RL) enables policy learning without active interactions, making it especially appealing for self-driving tasks. Recent successes of Transformers inspire casting offline RL as sequence modeling, which, however, fails in stochastic environments with incorrect assumptions that identical actions can consistently achieve the same goal. In this paper, we introduce an UNcertainty-awaRE deciSion Transformer (UNREST) for planning in stochastic driving environments without introducing additional transition or complex generative models. Specifically, UNREST estimates uncertainties by conditional mutual information between transitions and returns. Discovering 'uncertainty accumulation' and 'temporal locality' properties of driving environments, we replace the global returns in decision transformers with truncated returns less affected by environments to learn from actual outcomes of actions rather than environment transitions. We also dynamically evaluate uncertainty at inference for cautious planning. Extensive experiments demonstrate UNREST's superior performance in various driving scenarios and the power of our uncertainty estimation strategy.
Zenan Li, Fan Nie, Qiao Sun, Fang Da, Hang Zhao
2023-09-28T12:44:51
http://arxiv.org/abs/2309.16397v3
# Uncertainty-Aware Decision Transformer for Stochastic Driving Environments ###### Abstract Offline Reinforcement Learning (RL) has emerged as a promising framework for learning policies without active interactions, making it especially appealing for autonomous driving tasks. Recent successes of Transformers inspire casting offline RL as sequence modeling, which performs well in long-horizon tasks. However, they are overly optimistic in stochastic environments with incorrect assumptions that the same goal can be consistently achieved by identical actions. In this paper, we introduce an **U**N**ceratently-awa**RE** deci**Sion **T**ransformer (UNREST) for planning in stochastic driving environments without introducing additional transition or complex generative models. Specifically, UNREST estimates state uncertainties by the conditional mutual information between transitions and returns, and segments sequences accordingly. Discovering the 'uncertainty accumulation' and 'temporal locality' properties of driving environments, UNREST replaces the global returns in decision transformers with less uncertain truncated returns, to learn from true outcomes of agent actions rather than environment transitions. We also dynamically evaluate environmental uncertainty during inference for cautious planning. Extensive experimental results demonstrate UNREST's superior performance in various driving scenarios and the power of our uncertainty estimation strategy. ## 1 Introduction Safe and efficient motion planning has been recognized as a crucial component and the bottleneck in autonomous driving systems (Yurtsever et al., 2020). Nowadays, learning-based planning algorithms like imitation learning (IL) (Bansal et al., 2018; Zeng et al., 2019) and reinforcement learning (RL) (Chen et al., 2019; Chen et al., 2020) have gained prominence with the advent of intelligent simulators (Dosovitskiy et al., 2017; Sun et al., 2022) and large-scale datasets (Caesar et al., 2021). Building on these, offline RL (Diehl et al., 2021; Li et al., 2022) becomes a promising framework for safety-critical driving tasks to learn policies from offline data while retaining the ability to leverage and improve over data of various quality (Fujimoto et al., 2019; Kumar et al., 2020). Nevertheless, the application of offline RL approaches still faces practical challenges. Specifically: (1) The driving task requires conducting _long-horizon planning_ to avoid shortsighted erroneous decisions (Zhang et al., 2022); (2) The _stochasticity_ of environmental objects during diving also demands real-time responses to their actions (Diehl et al., 2021; Villaflor et al., 2022). The recent success of the Transformer architecture (Vaswani et al., 2017; Brown et al., 2020; Dosovitskiy et al., 2020) has inspired researchers to reformulate offline RL as a sequence modeling problem (Chen et al., 2021), which naturally considers outcomes of multi-step decision-making and has demonstrated efficacy in long-horizon tasks. Typically, they train models to predict an action based on the current state and an outcome such as a desired future return. However, existing works (Brandfonbrener et al., 2022; Paster et al., 2022; Yang et al., 2022) have pointed out that these decision transformers (DTs) tend to be overly optimistic in stochastic environments because they incorrectly assume that actions, which successfully achieve a goal once, can consistently do so in subsequent attempts. 
This assumption is easily broken in stochastic environments, as the goal can be achieved by accidental environment transitions. For example, in Fig. 1(a), identical turning actions may yield entirely different outcomes w.r.t. the behavior of the surrounding vehicle. The key to overcoming the problem is to distinguish between outcomes of agent actions and environment transitions, and train models to pursue goals that are not affected by environmental stochasticity. To the best of our knowledge, only a handful of works (Villarloor et al., 2022; Paster et al., 2022; Yang et al., 2022) has attempted to solve the problem. Generally, they fit a state transition model, either for pessimistic planning (Villarloor et al., 2022) through sampling from VAEs (Zhang et al., 2018), or to disentangle the conditioning goal from environmental stochasticity (Paster et al., 2022; Yang et al., 2022) by adversarial training (Shafahi et al., 2019). While introducing additional complexity, these methods are only applicable when a highly representative environment model can be learned, which is often not the case for driving tasks with complex interactions (Sun et al., 2022). Furthermore, the final driving trajectory encompasses stochastic impact over an excessive number of timesteps. This dilutes the information related to current timestep decision-making in the outcome return and poses challenges for VAE and adversarial training (Wu et al., 2017; Burgess et al., 2018). In this paper, we take an initial step to customize DTs for stochastic driving environments without introducing transition or complex generative models. Specifically, our insight comes from Fig. 1(a): During the straight driving, the _global return_ (Task I+II) contains too much stochastic influence to provide effective supervision. However, we can utilize the _truncated return_ solely from the Task I as a condition, which mitigates the impact of environmental stochasticity (less stochastic timesteps, lower return variance) and considers rewards over a sufficient number of timesteps for optimizing current actions. Additional experimental results further validate the point and we summarize the following properties of driving environments, which may also hold in other stochastic environments. **Property 1** (Uncertainty Accumulation): _The impact of environmental stochasticity on the return distribution accumulates while considering more timesteps, as validated in Fig. 1(b)._ **Property 2** (Temporal Locality): _Driving can be divided into independent tasks, where we only need to focus on the current task without considering those much later. Hence, optimizing future returns over a sufficiently long horizon approximates global return optimization, as shown in Fig. 1(c)._ Based on these, the remaining problem is how to set the span of truncated return so that it can minimize the impact of environmental stochasticity. Specifically, our proposed \(\overline{\text{UN}}\)certainty-\(\text{awa}\)\(\underline{\text{RE}}\)\(\underline{\text{Sion}}\)\(\underline{\text{T}}\)ransformer (UNREST) quantifies the impacts of transition uncertainties by the conditional mutual information between transitions and returns, and segment sequences into _certain and uncertain parts_ accordingly. With the minimal impact of stochasticity in 'certain parts', we set the conditioning goal as the cumulative reward to the segmented position (with the number of timesteps), which can reflect the true outcomes of action selection and be generalized to attain higher rewards. 
In contrast, in 'uncertain parts' where the environment is highly stochastic, we disregard the erroneous information from returns (with dummy tokens as conditions) and let UNREST imitate expert actions. Dynamic uncertainty evaluation is introduced during inference for cautious planning. **The highlights are:** * We present UNREST, an uncertainty-aware sequential decision framework to apply offline RL in long-horizon and stochastic driving environments. The code will be public when published. * Recognizing the properties of driving environments, we propose a novel model-free environmental uncertainty measurement and segment sequences accordingly. Based on these, UNREST bypasses Figure 1: Motivations of UNREST. (a): An example driving scenario where the return uncertainty increases when accounting for multiple tasks. (b): Calibration results of return distribution over future 1,000 steps are obviously more uncertain than that over future 100 steps. (c): Rollout return and distance curves are close for policies that maximize the return of future 100, 500, and 1,000 steps. challenging generative training in previous works, by replacing global returns in DTs with less uncertain truncated returns (or dummy tokens) to learn from the true outcomes of agent actions. * We extensively evaluate UNREST on CARLA (Dosovitskiy et al., 2017), where it consistently outperforms strong baselines (5.2% and 6.5% driving score improvement in seen and unseen scenarios). Additional experiments also prove the efficacy of our uncertainty estimation strategy. ## 2 Related Works In this section, we review works about sequence modeling algorithms in RL and uncertainty estimation strategies as the foundation of our work and highlight the differences and contributions of UNREST. More related works on vehicle planning can be found in App. A.1. **Offline RL as Sequence Modeling:** Despite the potential to learn directly from offline data and the prospects of a higher performance ceiling (than IL) (Fujimoto et al., 2019), the long-horizon and stochastic nature of driving environments still undermine the efficacy of current offline RL applications in autonomous driving tasks (Diehl et al., 2021; Shi et al., 2021; Li et al., 2022). Encouraged by the success of Transformer (Vaswani et al., 2017) in sequence modeling tasks (Brown et al., 2020; OpenAI, 2023), a recent line of work (Chen et al., 2021; Janner et al., 2021; Furuta et al., 2022; Lee et al., 2022; Hu et al., 2023) adapts Transformer into RL by viewing offline RL as a sequence modeling problem. Typically, Decision Transformer (DT) (Chen et al., 2021) predicts actions by feeding target returns and history sequences, while Trajectory Transformer (TT) (Janner et al., 2021) further exploits the capacity of Transformer through jointly predicting states, actions, and rewards and planning by beam searching. The theoretical basis of these models is revealed in Generalized DT (Furuta et al., 2022): they are trained conditioned on hindsight (e.g. future returns) to reach desired goals. DTs naturally consider outcomes of multi-step decision-making and have demonstrated efficacy in long-horizon tasks (Chen et al., 2021). However, recent works (Brandfonbrener et al., 2022; Paster et al., 2022; Yang et al., 2022) have pointed out the fundamental problem with this training mechanism. Specifically, in stochastic environments like autonomous driving, certain outcomes can be achieved by accidental environment transitions, hence cannot provide effective action supervision. 
Notably, this is in fact a more general problem to do with all goal-conditioned behavior cloning and hindsight relabeling algorithms (Eysenbach et al., 2022; Strupl et al., 2022; Paster et al., 2020), but in this work we specifically focus on solutions to DTs. To tackle this issue, ESPER (Paster et al., 2022) adopts adversarial training (Shafai et al., 2019) to learn returns disentangled from environmental stochasticity as a condition. Similarly, DoC (Yang et al., 2022) utilizes variational inference (Zhang et al., 2018) to learn a latent representation of the trajectory, which simultaneously minimizes the mutual information (so disentangled) with environment transitions to serve as the conditioning target. Besides, SPLT (Villafor et al., 2022) leverages a conditional VAE (Zhang et al., 2018) to model the stochastic environment transitions. As the driving environment contains various interactions that make it difficult to model (Sun et al., 2022), and the long driving trajectory hinders generative training, our study proposes a novel uncertainty estimation strategy to customize DTs without transition or generative models. Besides, different from EDT (Wu et al., 2023) that dynamically adjusts history length to improve stitching ability, UNREST focuses on segmenting sequences and replacing conditions to address the overly optimistic problem. **Uncertainty Estimation:** Uncertainty estimation plays a pivotal role in reliable AI. One typical method for uncertainty estimation is probabilistic bayesian approximation, either in the form of dropout (Gal and Ghahramani, 2016) or conditional VAEs (Blundell et al., 2015; Dusenberry et al., 2020), which computes the posterior distribution of model parameters. On the contrary, the deep deterministic methods propose to estimate uncertainty by exploiting the implicit feature density (Franchi et al., 2022). Besides, deep ensembles (Lakshminarayanan et al., 2017) train \(K\) neural networks simultaneously, with each module being trained on a different subset of the data, and use the variance of the network outputs as a measure of uncertainty. In this work, we utilize the ensemble approach, which has been widely used in the literature for trajectory optimization (Vlastelica et al., 2021), online and offline RL (Wu et al., 2021; An et al., 2021), to jointly train \(K\) variance networks (Kendall and Gal, 2017) for estimating the uncertainty of returns. ## 3 Preliminary We introduce preliminary knowledge for UNREST in this section. To keep notations concise, we use subscripts \(t\) or numbers for variables at specific timesteps, Greek letter subscripts for parameterized variables, and bold symbols to denote variables spanning multiple timesteps. ### Online and Offline Reinforcement Learning We consider learning in a Markov decision process (MDP) (Puterman, 2014) denoted by the tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},P,r,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state space and the action space, respectively. Given states \(s,s^{\prime}\in\mathcal{S}\) and action \(a\in\mathcal{A}\), \(P(s^{\prime}|s,a):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]\) is the transition probability distribution and \(r(s,a):\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) defines the reward function. Besides, \(\gamma\in(0,1]\) is the discount factor. The agent takes action \(a\) at state \(s\) according to its policy \(\pi(a|s):\mathcal{S}\times\mathcal{A}\to[0,1]\). 
At timestep \(t\in[1,T]\), the accumulative discounted reward in the future, named reward-to-go (i.e. return), is \(R_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r_{t^{\prime}}\). The goal of _online RL_ is to find a policy \(\pi\) that maximizes the total expected return: \(J=\mathbb{E}_{q_{t}=\pi(\cdot|s_{t}),s_{t+1}=P(\cdot|s_{t},a_{t})}\big{[}\sum_ {t=1}^{T}\gamma^{t-1}r(s_{t},a_{t})\big{]}\) by learning from the transitions \((s,a,r,s^{\prime})\) through interacting with the real environment. In contrast, _Offline RL_ makes use of a static dataset with \(N\) trajectories \(\mathcal{D}=\{\pi_{t}\}_{t=1}^{N}\) collected by certain behavior policy \(\pi_{b}\) to learn a policy \(\pi\) that maximizes \(J\), thereby avoiding safety issues during online interaction. Here \(\mathbf{\tau}=\big{\{}(s_{t},a_{t},r_{t},s^{\prime}_{t})\big{\}}_{t=1}^{T}\) is a collected interaction trajectory composed of transitions with horizon \(T\). ### Offline Reinforcement Learning as Sequence Modeling Following DTs (Chen et al., 2021), we pose offline RL as a sequence modeling problem where we model the probability of the sequence token \(x_{t}\) conditioned on all tokens prior to it: \(p_{\theta}(x_{t}|\mathbf{x}_{<t})\), where \(\mathbf{x}_{<t}\) denotes tokens from step 1 to \((t-1)\), like the GPT (Brown et al., 2020) architecture. DTs consider a return-conditioned policy learning setting where the agent at step \(t\) receives an environment state \(s_{t}\), and chooses an action \(a_{t}\) conditioned on the future return \(R_{t}=\sum_{t^{\prime}=t}^{T}r_{t^{\prime}}\). This leads to the following trajectory representation: \[\mathbf{\tau}=(R_{1},s_{1},a_{1},R_{2},s_{2},a_{2},...,R_{T},s_{T},a_{T}), \tag{1}\] with the objective to minimize the action prediction loss, i.e. maximize the action log-likelihood: \[\mathcal{L}_{\text{DT}}(\theta)=\mathbb{E}_{\mathbf{\tau}\sim\mathcal{D}}\big{[}- \sum_{t=1}^{T}\log p_{\theta}(a_{t}|\mathbf{\tau}_{<t},R_{t},s_{t})\big{]}. \tag{2}\] This supervised learning objective is the cause for DTs' limitations in stochastic environments, as it over-optimistically assumes actions in the sequence can reliably achieve the corresponding returns. At inference time, given a prescribed high target return, DTs generate actions autoregressively while receiving new states and rewards to update the history trajectory. ## 4 Approach: UnREST In this section, we begin with an overview of UNREST's insight and architecture, followed by detailed explanations of the composition modules used in the approach. ### Model Overview An overview of the proposed approach UNREST is illustrated in Fig. 2. Specifically, to address the overly optimistic issue, our key idea is to quantify the impact of environmental uncertainty, and learn to perform aggressively or cautiously at states with different levels of environmental impacts. To achieve this, we first train _return transformers_ with different trajectory inputs to identify the impact of environmental uncertainty. The expert sequences are then segmented into certain and uncertain parts w.r.t. estimated uncertainties, each with relabeled conditioning goals to facilitate the _decision transformer_ to learn from outcomes of agent actions rather than environment transitions. At test time, an _uncertainty predictor_ is introduced to guide decision-making at different states. In the following, we introduce each module with more details. 
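To make the return-conditioned sequence notation concrete, here is a minimal sketch, with illustrative names rather than the authors' implementation, of how reward-to-go tokens and the \((R_t, s_t, a_t)\) ordering of Eq. (1) could be assembled from a logged trajectory.

```python
# Minimal sketch (not the authors' code): reward-to-go conditioning tokens
# and the (R_t, s_t, a_t) interleaving of Eq. (1).

def rewards_to_go(rewards, gamma=1.0):
    """R_t = sum_{t'>=t} gamma^(t'-t) * r_t', computed backwards in O(T)."""
    rtg, acc = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        acc = rewards[t] + gamma * acc
        rtg[t] = acc
    return rtg

def to_dt_sequence(states, actions, rewards):
    """Interleave tokens as (R_1, s_1, a_1, ..., R_T, s_T, a_T)."""
    rtg = rewards_to_go(rewards)
    seq = []
    for R, s, a in zip(rtg, states, actions):
        seq.extend([("return", R), ("state", s), ("action", a)])
    return seq
```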
### Return Transformers for Uncertainty Estimation Instead of conventional uncertainties that reflect variances of distributions (Wu et al., 2021; Kendall and Gal, 2017), in this paper we estimate the impact of environmental stochasticity, what we really care about for policy learning, as an indirect measure of environmental uncertainty. Specifically, we propose to model the impact of the transition \((s_{t-1},a_{t-1}\to s_{t})\) on return \(R_{t}\) through their conditional mutual information (Seitzer et al., 2021; Duong and Nguyen, 2022), i.e. the divergence between distributions \(P(R_{t}|\mathbf{\tau}_{<t}^{\text{ret}})\) and \(P(R_{t}|\mathbf{\tau}_{<t}^{\text{ret}},s_{t})\), where \(\mathbf{\tau}^{\text{ret}}=\{(s_{t},a_{t})\}_{t=1}^{T}\). Specifically, we train two'return transformers' to approximate future return distributions, in which states and actions are first embedded by a linear layer \(f_{\psi}(\cdot)\), then fed into the transformer: \[\begin{split} x_{s_{t}}=f_{\psi}^{s}(s_{t}),\quad x_{a_{t}}=f_{\psi }^{a}(a_{t}),\\...,\tilde{x}_{s_{t-1}},\tilde{x}_{a_{t-1}},\tilde{x}_{s_{t}}, \tilde{x}_{a_{t}}=&\text{Transformer}(...,x_{s_{t-1}},x_{a_{t-1}},x_{s_{t}},x_{s_{t}}).\end{split} \tag{3}\] Afterward, two models feed their respective outputs into variance networks (Kendall & Gal, 2017) to predict Gaussian distributions \(\mathcal{N}(\cdot)\) of returns, which are optimized by maximizing log-likelihoods: \[\begin{split} p_{\psi_{a}}(R_{t}|\tau_{<t}^{\text{ret}})=\mathcal{ N}\big{(}\mu_{\psi_{a}}(\tilde{x}_{a_{t-1}}),\sigma_{\psi_{a}}(\tilde{x}_{a_{t-1}}) \big{)},\,\mathcal{L}_{\text{return}}(\varphi_{a})=\mathbb{E}_{\tau\sim \mathcal{D}}\big{[}-\sum_{t=1}^{T}\log p_{\varphi_{a}}(R_{t}|\tau_{<t}^{\text{ ret}})\big{]},\\ p_{\psi_{a}}(R_{t}|\tau_{<t}^{\text{ret}},s_{t})=\mathcal{N} \big{(}\mu_{\psi_{a}}(\tilde{x}_{s_{t}}),\sigma_{\psi_{a}}(\tilde{x}_{s_{t}}) \big{)},\,\,\mathcal{L}_{\text{return}}(\varphi_{s})=\mathbb{E}_{\tau\sim \mathcal{D}}\big{[}-\sum_{t=1}^{T}\log p_{\varphi_{a}}(R_{t}|\tau_{<t}^{ \text{ret}},s_{t})\big{]}.\end{split} \tag{4}\] Practically, the networks are implemented as ensembles, which together form Gaussian Mixture Models (GMM) (Mai et al., 2022) to better capture the return distribution as in App. A.2. Finally, the impact of the stochastic environmental transition (i.e. the conditional mutual information) is calculated through the Kullback-Leibler (KL) divergence (Joyce, 2011) between these two distributions, as a means to measure the environmental uncertainty at timestep \(t\): \[u_{t}=D_{\text{KL}}\big{(}p_{\varphi_{a}}(p_{t}|\tau_{<t}^{\text{ret}}),\,p_{ \varphi_{s}}(R_{t}|\tau_{<t}^{\text{ret}},s_{t})\big{)}, \tag{5}\] where a larger divergence implies more influence on the return from the transition \((s_{t-1},a_{t-1}\to s_{t})\). ### Decision Transformer Trained on Segmented Sequences In this section, we step to aid DT training in stochastic driving environments. As discussed in Sec. 4.1, we expect UNREST to segment sequences into certain and uncertain parts according to uncertainty estimations and learn to perform aggressively or cautiously within them, respectively. To achieve this, we propose to replace the conditioning global returns with truncated returns in 'certain parts', which are less affected by the environment due to uncertainty accumulation (Prop. 1), thus reliably helping the planner generalize to higher returns after training. 
Otherwise, in 'uncertain parts', the seemingly high return actions may easily cause safety issues due to environmental uncertainty, thus we simply ignore the stochastic returns, setting conditions as dummy tokens for behavior cloning. **Segmentation strategy** is therefore a crucial point to reinvent DTs. To distinguish between different levels of environmental stochasticity, we define an uncertainty threshold \(\epsilon\). Next, we estimate uncertainties \(u_{t}\) by Eq. 5 for each timestep and record those larger than \(\epsilon\) as uncertain. The sequence is then divided into certain and uncertain parts according to these marked uncertain timesteps. Figure 2: Overview of UNREST. **Lower:** Two return prediction transformers are trained for uncertainty estimation. The sequence is then segmented into certain (no background) and uncertain (orange background) parts w.r.t. estimated uncertainties, with ‘certain parts’ conditioned on returns to the next segmentation positions, and dummy tokens in ‘uncertain parts’. **Upper:** The same architecture as DTs is used for action prediction, except that we add a return-span embedding on the truncated return embedding, and concatenate the discretized global return embedding on the transformer output. An intuitive example of our segmentation strategy is illustrated in the lower part of Fig. 2. Specifically, we define the 'certain part' to begin with a timestep with uncertainty lower than \(\epsilon\), and continue until being segmented at an uncertain timestep. Afterward, the 'uncertain part' starts with the newly encountered uncertain timestep and continues to include subsequent timesteps until the last \(c-1\) timesteps are all identified as certain. Since uncertain timesteps often occur intensively over a particular duration during driving (e.g. at an intersection), the hyperparameter \(c\) is introduced to ensure a minimum length of segmented sequence and avoid frequent switching between two parts of sequences. Finally, we represent the segmented sequence as: \[\mathbf{\tau}^{\text{seg}}=(h_{1},R_{1}^{h},s_{1},a_{1},h_{2},R_{2}^{h},s_{2},a_{2 },...,h_{T},R_{T}^{h},s_{T},a_{T}), \tag{6}\] where the conditioning return is modified as \(R_{t}^{h}=\sum_{k=0}^{h_{t}-1}r_{t+k}\), which only considers rewards in next \(h_{t}\) steps (called the return-span). In 'uncertain parts', \(h\) is set as empty for meaningless conditions (i.e. the dummy tokens), leading UNREST to ignore the conditioned targets and cautiously imitate expert actions, instead of being misguided by the uncertain return information. In 'certain parts', the return-span \(h\) is set to the number of timesteps to the next segmentation step, so that the conditioning return \(R^{h}\) takes into account the maximum duration that does not include any uncertain timesteps. **Proposition 1** (UNREST Alignment Bound): _Assuming that the rewards obtained are determined by transitions \((s,a\to s^{\prime})\) at each timestep and UNREST is perfectly trained to fit the expert demonstrations, then the discrepency between target truncated returns and URNREST's rollout returns is bounded by a factor of environment determinism and data coverage._ The above proposition reveals that under natural assumptions (transition-reward correspondence) in driving environments, UNREST can generalize to achieve (bounded) high returns at states with minimal environmental stochasticity if expert demonstrations cover corresponding actions. This supports our segmentation and condition design. 
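For illustration only, the following sketch shows one way the uncertainty measure of Eq. (5) and the segmentation rule could be realized: per-step uncertainty as the KL divergence between two Gaussian return predictions, thresholding with \(\epsilon\), a minimum run length \(c\) before leaving an uncertain part, and truncated returns \(R_t^h\) with return-spans running to the next segmentation point. All names are hypothetical, and the paper's actual models use GMM ensembles rather than single Gaussians.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for 1-D Gaussians (a proxy for Eq. 5)."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def segment(uncertainties, eps, c):
    """Label each timestep 'certain' or 'uncertain'.

    An uncertain part starts at a timestep with u_t > eps and only ends once the
    last c-1 timesteps have all been certain, which avoids rapid switching.
    """
    labels, in_uncertain, certain_run = [], False, 0
    for u in uncertainties:
        if not in_uncertain and u > eps:
            in_uncertain, certain_run = True, 0
        if in_uncertain:
            certain_run = certain_run + 1 if u <= eps else 0
            labels.append("uncertain")
            if certain_run >= c - 1:
                in_uncertain = False
        else:
            labels.append("certain")
    return labels

def truncated_returns(rewards, labels):
    """(R_t^h, h_t) for certain timesteps, with h_t running to the next
    segmentation point; uncertain timesteps get a dummy (None) condition."""
    T, out = len(rewards), []
    for t in range(T):
        if labels[t] == "uncertain":
            out.append((None, None))  # dummy token as condition
            continue
        h = 1
        while t + h < T and labels[t + h] == "certain":
            h += 1
        out.append((sum(rewards[t:t + h]), h))
    return out
```

The run-length parameter \(c\) in this sketch plays the role described above: because uncertain timesteps tend to occur in bursts (e.g. at an intersection), requiring a stretch of certain steps before closing an uncertain part keeps the segmentation from oscillating between the two regimes.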
The proof of the proposition are left to App. B. Notably, the accounted timestep number is variable for truncated returns at different states, which necessitates the return-span as a condition to provide information about the number of timesteps. This enables UNREST to learn to achieve return \(R_{t}^{h}\) over future \(h_{t}\) timesteps. Otherwise, the model may be confused by the substantial differences in the magnitude of return conditions (with varying timestep lengths) at similar states. **Policy formulation:** Unlike return transformers, DT takes the segmented sequence \(\mathbf{\tau}^{\text{seg}}\) as input: \[\begin{split}& x_{R_{t}^{h}}=f_{\theta}^{R^{h}}(R_{t}^{h})+f_{ \theta}^{h}(h_{t}),\quad x_{s_{t}}=f_{\theta}^{s}(s_{t}),\quad x_{a_{t}}=f_{ \theta}^{a}(a_{t}),\\ &...,\tilde{x}_{R_{t-1}^{h}},\tilde{x}_{s_{t-1}},\tilde{x}_{s_{t -1}},\tilde{x}_{s_{t-1}},\tilde{x}_{R_{t}^{h}},\tilde{x}_{s_{t}},\tilde{x}_{a_ {t}}=\text{Transformer}(...,x_{R_{t-1}^{h}},x_{s_{t-1}},x_{s_{t-1}},x_{s_{t -1}},x_{R_{t}^{h}},x_{s_{t}},x_{a_{t}}),\end{split} \tag{7}\] where a _return-span embedding_\(f_{\theta}^{h}(h_{t})\) is added to the return embedding to provide information about return-spans, like the drop-span embedding in (Hu et al., 2023). Besides, to get the action distribution, the non-truncated _global-return embedding_\(f^{R}(R_{t})\) is optionally added to the output \(\tilde{x}_{s_{t}}\) of Transformer to provide additional longer horizon guidance for planning. Using \([\cdot||\cdot]\) to denote the concatenation of two vectors along the last dimension, the final predicted action distribution is: \[\pi_{\theta}(a_{t}|\mathbf{\tau}^{\text{seg}}_{<t},h_{t},R_{t},R_{t}^{h},s_{t})= \mathcal{N}\big{(}\mu_{\theta}([\tilde{x}_{s_{t}}\;||\;f^{R}(R_{t})]),\sigma_ {\theta}([\tilde{x}_{s_{t}}\;||\;f^{R}(R_{t})])\big{)}. \tag{8}\] The learning objective can be directly modified from Eq. 2 in the original DT, and UNREST's training process is summarized in App. C: \[\mathcal{L}_{\text{UNREST}}(\theta)=\mathbb{E}_{\mathbf{\tau}^{\text{seg}}\sim \mathcal{D}}\big{[}-\sum_{t=1}^{T}\log\pi_{\theta}(a_{t}|\mathbf{\tau}^{\text{seg }}_{<t},h_{t},R_{t},R_{t}^{h},s_{t})\big{]}. \tag{9}\] ### Uncertainty-guided Planning To account for environmental stochasticity at inference time, we introduce a lightweight uncertainty prediction model \(u_{\theta}(\cdot)\) to predict environmental stochasticity in real-time. For instance, it can be implemented as a neural network or just a heuristic value like that in Tab. 8 and Tab. 9. Practically, we choose the KD-Tree (Redmond & Heneghan, 2007) for its high computational efficiency and favorable estimation performance, with states as tree nodes and uncertainties (i.e. impacts of environmental stochasticity) estimated by return transformers as node values. At each timestep we will query the predictor and if the current state transition is highly uncertain (e.g., encountering a traffic light), we set the conditioning target to a dummy token to conduct cautious planning, consistent with the training process. Otherwise at states with certain transitions, we act aggressively to attain high rewards. Different from the conventional planning procedure of DTs, as for UNREST, we need to specify not only the target global return \(R_{1}\), but also the truncated return \(R_{1}^{h}\) and the initial return-span \(h_{1}=H\). After segmentation, the effective planning horizon of the trained sequences is reduced to the return-span \(h_{t}\). 
Once \(h_{t}\) reaches 1, we need to reset the target return and the return-span. Practically, we simply reset \(h_{t}\) to a fixed return horizon \(H\). Besides, we train a return prediction model \(R_{\theta}^{h}(\cdot)\) similar to that defined in Eq. 4 and take the upper percentile \(\eta\) of the predicted distribution as the new target return to attain. The hyperparameter \(\eta\) can be tuned for a high target return, and we do not need to consider targets at 'uncertain states' since they are just set as dummy tokens. The complete planning process is summarized in Alg. 1. ## 5 Experiments In this section, we conduct extensive experiments to answer the following questions. Q1: How does UNREST perform in different driving scenarios? Q2: How do components of UNREST influence its overall performance? Q3: Does our uncertainty estimation possess interpretability? ### Experiment Setup In this section, we briefly describe the setup components of our experiments. Please find more details of model implementation, training, and evaluation processes in App. E. **Datasets:** The offline dataset \(\mathcal{D}\) is collected from the CARLA simulator (Dosovitskiy et al., 2017) with its built-in Autopilot. Specifically, we collect 30 hours of data from 4 towns (Town01, Town03, Town04, Town06) under 4 weather conditions (ClearNoon, WetNoon, HardRainNoon, ClearSunset), saving tuple \((s_{t},a_{t},r_{t})\) at each timestep. More details about the state, action compositions, and our reward definitions are left to App. D. **Metrics:** We evaluate models at training and new driving scenarios and report metrics from the CARLA challenge (Dosovitskiy et al., 2017; team, 2020) to measure planners' driving performance: infraction score, route completion, success rate, and driving score. Besides, as done in (Hu et al., 2022), we also report the normalized reward (the ratio of total return to the number of timesteps) to reflect driving performance at timestep level. Among them, the driving score is the most significant metric that accounts for various indicators like driving efficiency, safety, and comfort. **Baselines:** First, we choose two IL baselines: vanilla Behavior Cloning (BC) and Monotonic Advantage Re-Weighted Imitation Learning (MARWIL) (Wang et al., 2018). Apart from IL baselines, we also include state-of-the-art offline RL baselines: Conservative Q-Learning (CQL) (Kumar et al., 2020), Implict Q-Learning (IQL) (Kostrikov et al., 2021). Constraints-Penalized Q-Learning (CPQ) (Xu et al., 2022) is chosen as a safe (cautious) offline RL baseline. Besides, we select two classic Transformer-based offline RL algorithms: Decision Transformer (DT) (Chen et al., 2021) and Trajectory Transformer (TT) (Janner et al., 2021) as baselines. Finally, we adopt three algorithms as rigorous baselines: Separated Latent Trajectory Transformer (SPLT) (Villaffor et al., 2022), Environment-Stochasticity-Independent Representations (ESPER) (Paster et al., 2022), and Dichotomy of Control (DoC) (Yang et al., 2022). These algorithms fit state transition models and employ generative training to mitigate DTs' limitations in stochastic environments. ### Driving Performance Firstly, we implement all the models, then evaluate their performance at training (Town03) and new (Town05) scenarios (Q1), whose results are summarized in Tab. 1. 
Analyzing the results, we first notice that DT (Chen et al., 2021) performs worst among all sequence models, with a significant gap compared to the simple offline RL baseline IQL (Kostrikov et al., 2021). In new scenarios, it even shows a similar performance to BC. We attribute this to the uncertainty of global return it conditions on. When the conditioning target does not reflect the true outcomes of agent actions, DT learns to ignore the target return condition and makes decisions solely based on the current state like BC. Furthermore, ESPER (Paster et al., 2022) and DoC (Yang et al., 2022) also perform poorly in both scenarios, which may be a result of ineffective adversarial and VAE training of long-horizon and complex driving demonstrations. To tackle environmental stochasticity, SPLT learns to predict the worst state transitions and achieves the best infraction score. However, its overly cautious planning process leads it to stand still in many scenarios like Fig. 3(c), resulting in an extremely poor route completion rate and normalized reward. TT instead learns a transition model regardless of environmental stochasticity and behaves aggressively. It attains the highest normalized reward at training scenarios since the metric prioritizes planners that rapidly move forward (but with low cumulative reward because of its short trajectory length caused by frequent infractions). In unseen driving scenarios, TT often misjudges the speed of preceding vehicles, resulting in collisions and lower normalized rewards (than ours) like in Fig. 3(a). Notably, UNREST achieves the highest driving score, route completion rate, success rate in both seen and unseen scenarios, and highest normalized reward in new scenarios without the need to learn transition or complex generative models. For the driving score, it surpasses the strongest baselines, TT (Janner et al., 2021) over 5% in training scenarios, and SPLT (Villaflor et al., 2022) over 6% in new scenarios (in absolute value), achieving a reasonable balance between aggressive and cautious behavior. For all the metrics, the results demonstrate that UNREST obtains significant improvement in terms of safety, comfort, and efficiency, and effectively increases the success rate of driving tasks. It demonstrates that the truncated returns successfully mitigate impacts of stochasticity and provide effective action supervision. In App. F.2, we test and find that UNREST only occupies slightly more resources than DT, and runs significantly faster while consuming less space than TT and SPLT, which additionally fit state transition models. 
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multicolumn{1}{c}{Dosing} & \multicolumn{1}{c}{Success Rate} & \multicolumn{1}{c}{Range Composition} & \multicolumn{1}{c}{Inflation Score} & \multicolumn{1}{c}{Normalized Reward} \\ \hline BC & \(51.9\pm 0.94\) (\(\pm 5.2\)) & \(37.9\pm 4.70\) (\(\pm 3.6\)) & \(79.7\pm 6.0\) (\(\pm 7.1\)) & \(84.5\pm 1.84\) (\(\pm 0.7\)) & \(65.9\pm 0.02\) (\(0.61\pm 0.04\)) \\ MARU (Wang et al., 2018) & \(54.3\pm 0.78\) (\(\pm 3.4\)) & \(34.3\pm 2.22\) (\(\pm 2.2\)) & \(84.1\pm 3.20\) (\(\pm 3.8\)) & \(57.9\pm 0.88\) (\(\pm 4.4\)) & \(60.65\pm 0.02\) (\(0.63\pm 0.01\)) \\ COL (Kumar et al., 2020) & \(55.0\pm 2.04\) (\(\pm 2.8\)) & \(48.6\pm 4.52\) (\(\pm 2.8\)) & \(48.3\pm 4.78\) (\(\pm 3.9\)) & \(52.4\pm 3.07\) (\(\pm 3.6\)) & \(65.08\pm 0.03\) (\(0.62\)) \\ IOI (Kostrikov et al., 2021) & \(55.9\pm 3.52\) (\(\pm 3.5\)) & \(50.2\pm 6.42\) (\(\pm 2.4\)) & \(27.2\pm 4.68\) (\(\pm 4.8\)) & \(64.4\pm 2.62\) (\(\pm 6.6\)) & \(66.08\pm 0.00\) (\(0.01\)) \\ CPQ (Qu et al., 2022) & \(54.7\pm 3.08\) (\(\pm 3.8\)) & \(46.6\pm 3.03\) (\(\pm 3.5\)) & \(57.9\pm 4.24\) (\(\pm 7.08\)) & \(66.9\pm 2.18\) (\(\pm 4.4\)) & \(63.69\pm 0.02\) (\(0.64\pm 0.02\)) \\ \hline DT (Chen et al., 2021) & \(55.2\pm 2.04\) (\(\pm 6.1\)) & \(48.0\pm 4.7\) (\(\pm 6.0\)) & \(62.6\pm 1.01\) (\(\pm 8.1\)) & \(57.4\pm 1.3\) (\(\pm 3.4\)) & \(66.06\pm 0.01\) (\(0.64\pm 0.02\)) \\ TT (Janner et al., 2021) & \(58.3\pm 3.54\) (\(\pm 3.2\)) & \(45.9\pm 3.52\) (\(\pm 2.04\)) & \(74.8\pm 4.55\) (\(\pm 70.2\)) & \(63.6\pm 3.68\) (\(\pm 6.0\)) & \(50.48\pm 0.02\) (\(0.54\pm 0.03\)) \\ SPLT (Villaflor et al., 2022) & \(57.8\pm 4.9\) (\(\pm 6.4\)) & \(21.3\pm 1.9\) (\(\pm 9.5\)) & \(36.7\pm 4.82\) (\(\pm 8.7\)) & \(73.9\pm 4.07\) (\(\pm 8.0\)) & \(35.05\pm 0.03\) (\(0.57\pm 0.05\)) \\ ESPER (Paster et al., 2021) & \(54.8\pm 3.05\) (\(\pm 3.1\)) & \(34.8\pm 4.3\) (\(\pm 3.0\)) & \(22.9\pm 3.4\) (\(\pm 3.4\)) & \(14.1\pm 3.62\) (\(\pm 2.2\)) (\(\pm 3.6\)) & \(60.64\pm 0.01\) (\(0.63\pm 0.02\)) \\ DC (Yang et al., 2022) & \(56.9\pm 1.1\) (\(\pm 4.4\)) & \(42.3\pm 5.2\) (\(\pm 4.2\)) & \(72.9\pm 3.4\) (\(\pm 3.0\)) & \(80.2\pm 3.0\) (\(\pm 3.2\)) & \(60.3\pm 0.24\) (\(0.54\pm 0.03\)) \\ \hline UNREST & \(40.5\pm 3.8\) (\(\pm 2.0\)) & \(48.5\pm 3.02\) (\(\pm 6.0\)) & \(54.8\pm 3.0\) (\(\pm 3.1\)) & \(80.8\pm 3.1\) (\(\pm 3.0\)) (\(\pm 5.0\)) & \(70.2\pm 3.02\) (\(0.9\pm 3.3\)) & \(0.64\pm 0.01\)(\(0.65\pm 0.03\)) \\ \hline ESPER (Dosurdiky et al., 2017) & \(74.0\pm 6.0\) (\(\pm 5.3\)) & \(65.4\pm 8.0\) (\(\pm 6.5\)) & \(81.2\pm 4.6\) (\(\pm 5.8\) (\(\pm 1.1\)) & \(82.8\pm 3.1\) (\(\pm 7.5\)) & \(60.72\pm 0.01\) (\(0.69\pm 0.01\)) \\ \hline \hline \end{tabular} \end{table} Table 1: Driving performance on train (new) town and train (new) weather conditions in CARLA. Mean and standard deviation are computed over 3 seeds. All metrics are recorded in percentages (%) except the normalized reward. The best results are in bold and our method is colored in gray. Figure 3: Typical failing cases of DT and SPLT, where UNREST performs reasonably. ### Ablation Study The key components of UNREST include global return embedding, return-span embedding, ensemble-based uncertainty estimation, and the uncertainty-guided planning process. In this section, we conduct ablation experiments by separately removing these components to explore their impacts on the overall performance of UNREST (Q2). Results are shown in Tab. 2 and Tab. 3. 
The ablation results show pronounced differences, and it is apparent that the elimination of any part of the three components leads to a decline in UNREST's driving score in new scenarios. Among them, the global return embedding has the slightest impact, which suggests that the highly uncertain global return may not provide effective guidance, or that the truncated return is already sufficient for making reasonable decisions (temporal locality, Prop. 2). When the return-span embedding is removed, the absolute driving score drops by about 6%. This implies that the introduction of return-span embedding provides necessary information about the timesteps needed to achieve the target return. Removing the ensemble of return transformers (i.e. the GMM) induces a significant performance drop in both scenarios. It shows that a simple Gaussian distribution cannot well express the return distribution, resulting in poor uncertainty calibration capability (corresponding with results in Fig. 6). Finally, after canceling uncertainty estimation at test time, the driving and infraction scores of UNREST drop dramatically, which proves the importance of cautious planning. ### Uncertainty Visualization Finally, We verify the interpretability of UNREST's uncertainty estimation through visualizations (Q3). Typically, in Fig. 4(a) we observe an initial increase in uncertainty as the ego-vehicle enters the lane to follow another vehicle, owing to the lack of knowledge about the other vehicle's behavior. This uncertainty gradually decreases back below the threshold when the vehicle stabilizes in the following state. Fig. 4(b) shows a green light crossing scenario. While approaching the green light, the uncertainty about the light state causes the uncertainty to rise quickly. After the vehicle has moved away from the traffic light, the uncertainty immediately drops back below the threshold. ## 6 Conclusion: Summary and Limitations The paper presents UNREST, an uncertainty-aware decision transformer to apply offline RL in long-horizon and stochastic driving environments. Specifically, we propose a novel uncertainty measurement by computing the divergence of return prediction models. Then based on properties we discover in driving tasks, we segment the sequence w.r.t. estimated uncertainty and adopt truncated returns as conditioning goals. This new condition helps UNREST to learn policies that are less \begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{**Pruner**} & Design & Sensors & Route & Influenza & Non- \\ & Scouvel & Ravel & C1 & Scouvel & Reward \\ \hline Wire global emb. & \(\mathbf{44.5\pm 2.8}\) & \(\mathbf{55.8\pm 0.8}\) & \(\mathbf{50.2\pm 2.4}\) & \(\mathbf{50.0\pm 1.6}\) & \(\mathbf{0.05\pm 0.02}\) \\ Wire-span emb. & \(50.0\pm 1.2\) & \(\mathbf{33.3\pm 3.5}\) & \(\mathbf{55.8\pm 3.4}\) & \(\mathbf{55.8\pm 3.1}\) & \(\mathbf{55.7\pm 0.01}\) \\ Wire-modal fin. & \(\mathbf{50.1\pm 2.3}\) & \(\mathbf{51.3\pm 3.5}\) & \(\mathbf{52.6\pm 0.0}\) & \(\mathbf{53.2\pm 1.7}\) & \(\mathbf{57.0\pm 0.02}\) \\ Wire-modal fin. & \(\mathbf{50.1\pm 2.3}\) & \(\mathbf{54.5\pm 1.7}\) & \(\mathbf{50.8\pm 3.3}\) & \(\mathbf{70.2\pm 2.8}\) & \(\mathbf{96.4\pm 0.04}\) \\ Pull model & \(\mathbf{50.3\pm 3.2}\) & \(\mathbf{54.5\pm 7.0}\) & \(\mathbf{50.8\pm 3.3}\) & \(\mathbf{70.2\pm 2.8}\) & \(\mathbf{96.4\pm 0.04}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study results for UNREST on train town and train weather conditions. 
Figure 4: Visualizations of UNREST's uncertainty estimation results. Table 3: Ablation study results for UNREST on new town and new weather conditions. Dynamic uncertainty estimation is also integrated during inference for cautious planning. Empirical results demonstrate UNREST's superior performance in various driving scenarios, its lower resource occupation, and the effectiveness of our uncertainty estimation strategy. One limitation of this work is that the inference process of UNREST is somewhat complex, involving auxiliary return and uncertainty estimation models as well as additional hyperparameters. One possible direction for improvement is to integrate return and uncertainty predictions into the model architecture, which we leave for future work. Although this work is evaluated in the CARLA simulator, we believe the proposed framework can surmount the sim-to-real gap and benefit practical autonomous driving.
Offline reinforcement learning (RL) is particularly appealing for autonomous driving tasks because it enables policy learning without active interaction with the environment. The recent success of Transformers has inspired casting offline RL as sequence modeling, which, however, is difficult in stochastic environments, where it rests on the inaccurate assumption that the same goal can always be reached by the chosen actions. This paper introduces a Transformer for decision-making under uncertainty (UNREST) to plan in stochastic driving environments. UNREST learns from the actual outcomes of actions without needing to introduce transition or complex generative models, and estimates uncertainty using the conditional mutual information between transitions and returns. Discovering the "uncertainty accumulation" and "temporal locality" properties of driving environments, it replaces the global returns in the decision transformer with returns that are less affected by the environment.
2310.00496
The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks
We introduce the Sparsity Roofline, a visual performance model for evaluating sparsity in neural networks. The Sparsity Roofline jointly models network accuracy, sparsity, and theoretical inference speedup. Our approach does not require implementing and benchmarking optimized kernels, and the theoretical speedup becomes equal to the actual speedup when the corresponding dense and sparse kernels are well-optimized. We achieve this through a novel analytical model for predicting sparse network performance, and validate the predicted speedup using several real-world computer vision architectures pruned across a range of sparsity patterns and degrees. We demonstrate the utility and ease-of-use of our model through two case studies: (1) we show how machine learning researchers can predict the performance of unimplemented or unoptimized block-structured sparsity patterns, and (2) we show how hardware designers can predict the performance implications of new sparsity patterns and sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps performance experts identify sparsity regimes with the highest performance potential.
Cameron Shinn, Collin McCarthy, Saurav Muralidharan, Muhammad Osama, John D. Owens
2023-09-30T21:29:31
http://arxiv.org/abs/2310.00496v2
# The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks ###### Abstract We introduce the Sparsity Roofline, a visual performance model for evaluating sparsity in neural networks. The Sparsity Roofline jointly models network accuracy, sparsity, and theoretical inference speedup. Our approach does not require implementing and benchmarking optimized kernels, and the theoretical speedup becomes equal to the actual speedup when the corresponding dense and sparse kernels are well-optimized. We achieve this through a novel analytical model for predicting sparse network performance, and validate the predicted speedup using several real-world computer vision architectures pruned across a range of sparsity patterns and degrees. We demonstrate the utility and ease-of-use of our model through two case studies: (1) we show how machine learning researchers can predict the performance of unimplemented or unoptimized block-structured sparsity patterns, and (2) we show how hardware designers can predict the performance implications of new sparsity patterns and sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps performance experts identify sparsity regimes with the highest performance potential. ## 1 Introduction Deep neural networks are often over-parameterized (Howard et al., 2019; Tan and Le, 2019) and their weights or parameters can be eliminated (_pruned_) to improve inference latency and/or decrease network size (LeCun et al., 1989; Han et al., 2015; Molchanov et al., 2017; Zhu and Gupta, 2018) without affecting accuracy. Depending on the _pattern_ and _degree_ of sparsity, which together constitute a _sparsity configuration_, networks exhibit widely different accuracy and runtime behavior. This presents major problems for machine learning practitioners who wish to find the best sparsity pattern and degree that balances accuracy loss and performance constraints for their specific application. Obtaining the accuracy corresponding to a sparsity pattern and degree typically requires some form of network fine-tuning (Frankle and Carbin, 2019), making it highly inefficient to estimate the impact of different sparsity configurations by trying hundreds of combinations of hyperparameters. Thus we hope to predict which sparsity combinations might be most fruitful without fine-tuning them all. But accurately estimating the effects that a specific sparsity configuration has on inference runtime poses a different set of challenges: (1) which metric should we use to estimate runtime performance, and (2) how do we obtain the runtime performance of sparsity patterns that are either unimplemented or have unoptimized implementations? To illustrate the challenge of identifying the right metric, consider the total floating point operations (FLOPs) performed during sparse matrix operations such as matrix multiplication (a common operation in neural networks (Chetlur et al., 2014)). FLOPs are frequently used to evaluate the performance of pruned models (Han et al., 2015; Molchanov et al., 2017; Zhu and Gupta, 2018; Frankle and Carbin, 2019; Lee et al., 2019; Hoefler et al., 2021; Blalock et al., 2020). Table 1 illustrates the limitations of this metric. Here, we show two weight matrices that provide a counterexample to the notion that FLOPs are positively correlated with measured runtime. The structured weight matrix shown on the left side of the table has 1.57\(\times\) more FLOPs than the unstructured matrix on the right, but runs nearly 6\(\times\) faster. 
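As a quick sanity check of why raw FLOP counts mislead here, the back-of-the-envelope calculation below (a sketch using the values reported in Table 1, not an additional measurement) shows that runtime is simply work divided by the throughput each kernel actually achieves:

```python
# Back-of-the-envelope check using Table 1's GFLOPs and achieved TFLOP/s:
# the block-sparse kernel does more work, but on Tensor Cores it sustains
# roughly 9x the throughput of the unstructured kernel on CUDA cores.
block_gflops, block_tflops = 24.4, 39.9   # 32x32 block-sparse SpMM
unstr_gflops, unstr_tflops = 15.5, 4.4    # unstructured SpMM

block_ms = block_gflops / block_tflops    # ~0.61 ms
unstr_ms = unstr_gflops / unstr_tflops    # ~3.5 ms
print(block_gflops / unstr_gflops)        # ~1.57x more FLOPs for the block kernel
print(unstr_ms / block_ms)                # ...yet it runs ~5.8x faster
```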
Addressing the challenge of _estimating_ optimized runtime performance is even harder. While performance experts have implemented computation kernels specifically targeting sparse neural networks (Gale et al., 2020; Sarkar et al., 2020; Chen et al., 2021; Vooturi and Kothapalli, 2019), there are significant gaps. For example, NVIDIA's cuSparse library provides optimized GPU kernels for block-sparse matrices, but they are primarily optimized for larger block sizes such as 16\(\times\)16 and 32\(\times\)32 (Yamaguchi and Busato, 2021). As discussed in Section 4.1, using smaller block sizes often leads to higher accuracies; however, in the absence of computation kernels optimized for these sizes, it is impossible to estimate their effect on runtime via benchmarking. To help practitioners better understand the complex relationship between sparsity configuration, accuracy, and inference performance (both current and potential), we introduce a novel visual model named the _Sparsity Roofline_. Our work builds upon the well-known Roofline model (Williams et al., 2009), which provides a visual representation of the performance of a given computation kernel. In the Roofline model, users compute the _arithmetic intensity_ of the given kernel, and plot it against one or more hardware-specific upper limits (the Rooflines) defined by the peak memory bandwidth and peak floating-point throughput of that hardware architecture. In a similar vein, the Sparsity Roofline plots network accuracy against the theoretical speedup of sparse over dense models, with additional sparsity information. This clearly shows the two most important aspects of weight pruning to a machine learning practitioner--accuracy and performance--and can be analyzed across any model architecture, sparsity hyperparameters, or hardware accelerator. Plotting the Sparsity Roofline requires sampling the accuracy values corresponding to the sparsity configurations being analyzed, which can be easily done with masking-based approaches and existing software libraries (Paszke et al., 2019; Joseph et al., 2020). The only other metrics needed are the arithmetic intensity, which can be either profiled or computed by hand, and the hardware-specific peak computational throughput (in FLOPs/s) and memory bandwidth (in bytes/s). We validate and demonstrate the usefulness of the Sparsity Roofline by analyzing several real-world computer vision models, including convolutional neural networks (CNNs), vision transformers (ViT), and multi-layer perceptron (MLP)-based networks. We investigate which sparsity characteristics have the greatest impact on accuracy and GPU performance, and point out promising areas to focus on for kernel optimization. Finally, we present two case studies: (1) analyzing tradeoffs associated with block-structured sparsity for deep learning practitioners, and (2) efficient sparsity patterns for future hardware architectures. This paper makes the following contributions: 1. It introduces the Sparsity Roofline visual model for understanding accuracy vs. latency trade-offs for currently unoptimized and unimplemented kernel designs. 2. It uses the Sparsity Roofline to benchmark and analyze several real-world computer vision architectures pruned to a range of sparsity patterns and levels. 3. It demonstrates the use of the Sparsity Roofline in two distinct use cases: to analyze block-sparsity structures for DL practitioners, and to help inform future sparse hardware implementations. 
| | Block sparse (32\(\times\)32) | Unstructured |
| --- | --- | --- |
| **Runtime (ms)** | **0.613** | **3.526** |
| **GFLOPs** | **24.4** | **15.5** |
| TFLOPs/s | 39.9 | 4.4 |
| Number of nonzeros | 1.95M | 1.23M |
| \(m\times k\)-dimensions (sparse operand) | 3072\(\times\)768 | 3072\(\times\)768 |
| \(n\)-dimension (dense operand) | 6272 | 6272 |

Table 1: **Runtime vs. GFLOPs: SpMM performance on (32\(\times\)32) block sparsity vs. unstructured with a similar amount of nonzeros. White indicates zero-valued weights, blue non-zero. The block sparse matrix has more FLOPs but has a nearly 6\(\times\) better runtime latency vs. unstructured.**

## 2 Background In this section, we provide a brief overview of neural network pruning, followed by a description of the traditional Roofline model. ### Neural Network Pruning Weight pruning involves setting a subset of neural network weights to zero, followed by a training or fine-tuning stage that attempts to recover any lost accuracy (Hoefler et al., 2021). Pruning can be unstructured (fine-grained), where individual non-zero values are eliminated, or structured (coarse-grained), where groups of non-zero values are removed instead, each resulting in a different _sparsity pattern_. The _sparsity level_ refers to the fraction of zero weights to total weights and is expressed as a percentage in this paper. Structured pruning has been demonstrated to achieve better runtime performance, typically at the cost of decreased accuracy (Narang et al., 2017; Vooturi et al., 2018; Li et al., 2022). A number of algorithms have been proposed in the literature for accuracy recovery of pruned models (Deng et al., 2021; Renda et al., 2020; Hoefler et al., 2021). In this paper, we use the learning rate rewinding approach proposed by Renda et al. (2020). ### The Roofline Model The Roofline model (Williams et al., 2009) is a visual performance model that shows how well a computational kernel utilizes the hardware. The Roofline model plots the arithmetic intensity (FLOPs computed / bytes read and written) on the x-axis and the throughput (FLOPs per second) on the y-axis. This enables users to visually observe if their program is memory-bound or compute-bound, and to what extent. The upper bound (Roofline) of the model is determined by both the hardware's peak compute throughput and peak memory bandwidth. Although there are variants that consider cache hierarchies (Ilic et al., 2014), the traditional Roofline model that we discuss in this paper assumes perfect caching (including user-managed caching such as shared memory and local registers), which is necessary to achieve peak memory bandwidth utilization; we thus use DRAM memory bandwidth. The hardware throughput component can be increased with additional hardware acceleration for a specific application (e.g., Tensor Cores for deep learning (Jia et al., 2018)). The utility of the Roofline model comes from its ability to succinctly show potential improvement for a given program with respect to the hardware speed-of-light. ### Evaluating Sparse Neural Networks Figure 1 plots the Roofline for individual SpMM matrices across all benchmarked computer vision models. The line in each plot is the "Roofline", which slopes upwards during the memory-bound region where the arithmetic intensity (AI) is too low to saturate the compute resources, and flattens out once the AI reaches a hardware-specific point, called the _knee_.
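As a rough numerical illustration of the Roofline bound and its knee (a minimal sketch, not the paper's tooling; the peak figures below are nominal A100 specifications, and the choice of FP32 CUDA-core and TF32 Tensor-core peaks is an assumption):

```python
# Classic Roofline bound: attainable FLOP/s = min(peak compute, AI * peak bandwidth).
# Peak numbers are nominal NVIDIA A100 values and are assumptions for illustration.
PEAK_CUDA_GFLOPS = 19_500     # FP32 CUDA-core peak (~19.5 TFLOP/s)
PEAK_TENSOR_GFLOPS = 156_000  # TF32 Tensor-core peak (~156 TFLOP/s)
PEAK_BW_GBPS = 1_555          # HBM2 peak bandwidth (~1.56 TB/s)

def attainable_gflops(ai_flop_per_byte, peak_gflops, peak_gbps=PEAK_BW_GBPS):
    """Throughput ceiling for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, ai_flop_per_byte * peak_gbps)

# The knee is the arithmetic intensity at which the two ceilings meet.
knee_cuda = PEAK_CUDA_GFLOPS / PEAK_BW_GBPS      # ~12.5 FLOP/byte
knee_tensor = PEAK_TENSOR_GFLOPS / PEAK_BW_GBPS  # ~100 FLOP/byte, ~8x higher
print(knee_cuda, knee_tensor, attainable_gflops(5.0, PEAK_CUDA_GFLOPS))
```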
The dashed line is for Tensor Cores and the solid line for CUDA cores, where the Tensor Core knee has almost 10x the AI of CUDA cores. The points that are closest to the Roofline are utilizing the GPU the best, with higher sparsities being more memory bound and lower sparsities approaching and becoming compute bound in some situations, such as when the inner-dimension of the matrix product is higher.

Figure 1: **Roofline, Sparse vs. Dense**: Roofline model measuring throughput of SpMM on unstructured sparse layers and GEMM on dense layers from all trained models, on a single NVIDIA A100. The solid line is the CUDA core peak throughput, the dashed line the Tensor core peak throughput. Unstructured sparsity kernels in cuSPARSE do not use Tensor cores.

The Roofline model is a significant improvement over analyzing FLOPs, but it has three major drawbacks in optimizing sparse deep learning models: 1. The Roofline model lacks any concept of accuracy, and GFLOPs/s is challenging to use to compare the relative performance between sparse and dense layers. 2. The Roofline model is only meaningful per-layer instead of per-model. An entire model is almost always a combination of layers, where some are memory-bound and others are likely compute-bound. Therefore calling the entire model "compute bound" or "memory bound" is misleading at best. 3. The Roofline model requires benchmarking to compute GFLOPs/s. Even if optimal kernels exist, such as cuBLAS for dense GEMM operations, the surrounding benchmarking framework is time-consuming to implement, test, and maintain. Our proposed solution, the Sparsity Roofline, directly addresses these concerns. It is not meant to replace the Roofline model, but instead _complement_ it for the specific use case of designing and optimizing sparse deep-learning kernels. ## 3 The Sparsity Roofline The Sparsity Roofline is designed to be an easy-to-use tool for deep learning practitioners interested in sparsity, performance experts, and hardware designers. It achieves this goal by addressing the three major issues with the existing Roofline model described in Section 2.3. The Sparsity Roofline plots accuracy vs. theoretical speedup, as opposed to the traditional Roofline's GFLOPs/s vs. arithmetic intensity. Accuracy is almost always the most important optimization metric in DNNs, and therefore we place it on the \(y\) axis. Similarly, replacing GFLOPs/s with theoretical speedup makes it far easier to understand relative performance differences of a sparse and dense layer or model. Further, the sparsity configuration is encoded into the point and/or line style in order to easily compare different sparsity design decisions, which are crucial for optimal performance. The Sparsity Roofline converts per-layer peak GFLOPs/s to per-model minimum or _speed-of-light_ (SoL) latency. We first calculate a per-layer SoL latency, then sum the layer-wise latencies for the model SoL latency. This represents the true performance metric that practitioners care about: end-to-end latency of the entire model. Like the traditional Roofline, the Sparsity Roofline does not require benchmarking. We only need to look up the hardware peak GFLOPs/s and peak GB/s of a hardware architecture, and compute the per-layer GFLOPs and GBs read/written by hand in order to calculate arithmetic intensity. The Sparsity Roofline for unstructured sparsity is shown in Figure 2, and for ConvNeXt-Tiny and Swin-Tiny in Figures 3 and 4, respectively. We will now describe how these Sparsity Rooflines are constructed.
Figure 2: **Per-Model Sparsity Roofline**: The Sparsity Roofline for several computer vision models on ImageNet-100 pruned with global magnitude pruning. Speedup is calculated per-layer using the maximum compute or memory bound latency, and then summed per model.

Since the Sparsity Roofline reports accuracy, any model placed on it must be fine-tuned to a given sparsity from a pre-trained dense model. Fine-tuning for sparsification is a standard practice in deep learning, and the only way to quantify accuracy. We use the learning-rate rewinding technique proposed by Renda et al. (2020) and the Condensa library by Joseph et al. (2020). Our model is most accurate when the sparse kernels are well optimized and thus approach the speed of light. The sparse kernel does not need to be compute bound; if it is memory bound, then the closer it comes to the device's peak memory throughput, the more accurate our model is. This is discussed in detail in Section 3.4. ### Use Cases The Sparsity Roofline is designed to quantify the performance-accuracy tradeoff for a specific combination of hardware, model architecture and sparsity configuration, such as sparsity pattern, sparsity level or percent, and sparse data format. Thus it can be used by both software and hardware engineers who want to understand how an optimized kernel would perform, but do not want to go through the trouble of implementing and benchmarking sub-optimal scenarios. In Section 4.1, we show how a deep-learning practitioner may use this tool to investigate optimal block-structure sparsity patterns, and in Section 4.2 we show how a hardware engineer can investigate different N:M sparsity patterns and sparse data formats to implement in hardware, e.g., for new sparse Tensor core formats. In contrast, the Sparsity Roofline is not meant for engineers who already have a specific sparsity-configuration optimization target. In that scenario, a combination of the Roofline model, benchmarking / profiling, and lower-level optimizations are likely the correct tools to understand detailed performance statistics that would inform kernel design, such as load balancing and caching. ### Constructing the Sparsity Roofline The Sparsity Roofline plots accuracy vs. theoretical speedup from sparsity. We start by deriving the theoretical speedup. First, we need to define the kernel's GFLOPs and GBs read/written to global memory. Equation 1 shows this for SpMM (\(\text{Sparse}\times\text{Dense}=\text{Dense}\) matrix multiply); the index data depends on the sparse data format. For the compressed sparse row (CSR) format, it is \(\textit{nnz}+m+1\). \[\begin{split}\text{SpMM FLOPs}&=\textit{nnz}\times n\\ \text{SpMM GB}&=\textit{nnz}+n\times k+m\times n+\text{index data}\end{split} \tag{1}\] Next, we define the per-layer speed-of-light latency as the larger of the compute-limited and memory-limited runtimes for the kernel's given GFLOPs and GBs read/written to global memory. Using the device's peak GFLOPs and GB/s, this is computed as \[\text{Per-Layer SoL}=\max\left(\frac{\text{GFLOP}}{\text{Peak GFLOP/s}},\frac{\text{GB}}{\text{Peak GB/s}}\right) \tag{2}\] Finally, we sum the \(L\) per-layer runtimes for the dense model and the same corresponding sparse model, and take their runtime ratio as the speedup, using the dense computation as the baseline. For example, if the sparse latency is 1 ms and the dense latency is 2 ms, the speedup would be 2x. The machine learning engineer can choose the architecture that provides the optimal balance of accuracy, speedup, and implementation difficulty. \[\text{Speedup at SoL}=\frac{\sum_{l=1}^{L}\text{Dense SoL Runtime}_{l}}{\sum_{l=1}^{L}\text{Sparse SoL Runtime}_{l}} \tag{3}\] These equations make the same assumption as the Roofline model: the maximum achievable FLOPs/s is the hardware's peak compute throughput, and each byte of data may be read from or written to global memory once, at the hardware's peak memory throughput, with perfect caching (including shared memory or local registers) for any intermediate reads.
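The following minimal sketch (not the authors' released code) turns Eqs. (1)-(3) into a calculation; the peak rates, the 2-byte element size, and the CSR index counting are illustrative assumptions:

```python
# Eqs. (1)-(3) as a calculation. Peak rates, FP16 element size and CSR index
# counting are assumptions for illustration, not measured values.
PEAK_GFLOPS = 312_000  # nominal A100 FP16 Tensor-core peak, GFLOP/s (assumed)
PEAK_GBPS = 1_555      # nominal A100 HBM2 peak bandwidth, GB/s (assumed)
BYTES = 2              # bytes per element, FP16 (assumed)

def spmm_sol_ms(m, k, n, nnz, index_elems):
    """Eqs. (1)-(2): SoL latency of a sparse (m x k) times dense (k x n) product."""
    gflop = nnz * n / 1e9
    gb = (nnz + n * k + m * n + index_elems) * BYTES / 1e9
    return max(gflop / PEAK_GFLOPS, gb / PEAK_GBPS) * 1e3  # milliseconds

def gemm_sol_ms(m, k, n):
    """Dense baseline: every weight is a nonzero and there is no index data."""
    return spmm_sol_ms(m, k, n, nnz=m * k, index_elems=0)

def model_sol_speedup(layers, sparsity):
    """Eq. (3): summed dense SoL latency divided by summed sparse SoL latency."""
    dense = sum(gemm_sol_ms(m, k, n) for m, k, n in layers)
    sparse = 0.0
    for m, k, n in layers:
        nnz = int((1 - sparsity) * m * k)
        sparse += spmm_sol_ms(m, k, n, nnz, index_elems=nnz + m + 1)  # CSR
    return dense / sparse

layers = [(3072, 768, 6272), (768, 3072, 6272)]   # two FFN-like layers (illustrative)
print(model_sol_speedup(layers, sparsity=0.875))  # ~1.5x under these assumptions
```

Even at 87.5% sparsity, the predicted speedup for these shapes is only around 1.5x because the sparse layers are bandwidth bound; this is the same effect discussed in Section 5.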
### Evaluating Accuracy To compute accuracy for each model and sparsity configuration, we start by pre-training one baseline model per architecture. We pre-train without sparsity for 300 epochs on ImageNet-100 (Vinyals et al., 2016). This dataset is a subset of the ImageNet-1K dataset (Deng et al., 2009) created by sampling 100 of the 1000 classes in ImageNet-1K, which allows us to train a larger number of models, sparsity patterns, and sparsity levels. All model definitions are from the _timm_ library (Wightman, 2019) and each is trained with the same set of data augmentations, hyperparameters, and training schedules based on modern architectures such as DeiT (Touvron et al., 2021), Swin (Liu et al., 2021) and ConvNeXt (Liu et al., 2022). This includes data augmentations RandAugment (Cubuk et al., 2020), MixUp (Zhang et al., 2018) and CutMix (Yun et al., 2019), a cosine decay learning rate schedule (Loshchilov and Hutter, 2017), and the AdamW optimizer (Loshchilov and Hutter, 2019) with a base learning rate of \(10^{-3}\) and 20 epochs of warm up. Using these uniform settings across all models ensures a fair comparison with an identical training procedure. We store the checkpoint with the minimum validation loss and use this for fine-tuning. We apply an incremental fine-tuning algorithm based on learning rate rewinding (Renda et al., 2020) to the baseline model to obtain the accuracy values corresponding to the following sparsity levels: 50%, 75%, 87.5%, 93.75% and 96.875%. This pattern involves halving the number of nonzeros per iteration, which ends up slightly biasing the results towards higher sparsities where sparse kernels are typically more performant. For a given combination of model and sparsity pattern, e.g., ConvNeXt-Tiny with unstructured sparsity, we prune the weights with global magnitude pruning to the lowest sparsity level of 50%. We rewind the learning rate schedule but with a shorter 60 epoch total decay rather than 300 epochs. After 60 epochs we increase the sparsity level by \((1-\text{Sparsity})/2\), prune the additional weights, and rewind the learning rate again. We repeat this a total of five times within a single run to fine-tune five sparsity levels for our model / sparsity pattern combination in 300 epochs total, which is the same number as during training. We find this process to be simple and efficient, and it works well quantitatively for ImageNet-100. For more challenging datasets such as ImageNet-1k or ImageNet-22k, the fine-tuning schedule would likely need to be increased.
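The incremental schedule just described can be written down in a couple of lines (a trivial sketch):

```python
# Each fine-tuning stage halves the number of remaining nonzeros, i.e.
# sparsity <- sparsity + (1 - sparsity) / 2, starting from 50%.
sparsity, levels = 0.5, []
for _ in range(5):
    levels.append(sparsity)
    sparsity += (1 - sparsity) / 2
print([f"{100 * s:.3f}%" for s in levels])  # 50, 75, 87.5, 93.75, 96.875
```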
### Validation It is important to understand the cases where speed-of-light (SoL) speed-up equals the actual measured speed-up, without having to implement and optimize a specific sparse kernel. We can easily show that the speedup at SoL is precisely equal to the measured speed-up when the sparse and dense kernels are _equally optimized_. Specifically, at a per-layer level this occurs when the dense and sparse kernels achieve the same percentage of their per-layer SoL latency. For example, if a given GEMM kernel is compute bound and obtains 90% of the SoL GFLOPs/s, and the corresponding SpMM kernel is memory bound and also obtains 90% of the SoL GB/s, then the percent of SoL is identical and our model will predict a SoL speedup that is equal to the measured speedup. More formally: \[\text{Per-Layer Speedup at SoL}\stackrel{?}{=}\text{Per-Layer Speedup Meas.}\] \[\frac{\text{Dense SoL Runtime}}{\text{Sparse SoL Runtime}}=\frac{\text{Dense Meas. Runtime}}{\text{Sparse Meas. Runtime}}\] \[\frac{\text{Dense SoL Runtime}}{\text{Dense Meas. Runtime}}=\frac{\text{Sparse SoL Runtime}}{\text{Sparse Meas. Runtime}}\] \[\text{Dense Per-Layer \% of SoL}=\text{Sparse Per-Layer \% of SoL}\] In the last equation, note that the percent of speed-of-light (or fraction of speed-of-light) is defined as the ratio of the SoL latency to the measured latency. The measured latency can take on values as small as the SoL latency but no smaller, by definition. Therefore this ratio is bounded between 0 and 1 (or 0-100%). The same equation holds for per-model aggregation, but in this case each individual term is a summation over all layers. \[\text{Per-Model Speedup at SoL}\stackrel{?}{=}\text{Per-Model Speedup Meas.}\] \[\frac{\sum_{l=1}^{L}\text{Dense SoL Runtime}_{l}}{\sum_{l=1}^{L}\text{Dense Meas. Runtime}_{l}}=\frac{\sum_{l=1}^{L}\text{Sparse SoL Runtime}_{l}}{\sum_{l=1}^{L}\text{Sparse Meas. Runtime}_{l}}\] \[\text{Dense Per-Model \% of SoL}=\text{Sparse Per-Model \% of SoL}\] At the aggregated per-model level, the SoL speedup is equal to the measured speedup when the sparse and dense models are equally optimized, such that the percentages of the per-model SoL latency for dense and sparse are equal. ## 4 Case Study ### DL Practitioner Suppose Alice is researching pruning algorithms and wants to find out whether block sparsity can provide effective inference latency improvements on NVIDIA GPUs for ConvNeXt (Liu et al., 2022) and Swin (Liu et al., 2021), two state-of-the-art computer vision models. She would typically start by training, pruning and then fine-tuning these models for various block sizes, say, 2\(\times\)2, 4\(\times\)4, 8\(\times\)8, 16\(\times\)16 and 32\(\times\)32, to capture a sufficiently large sample of the search space. Alice would like to compare the speedups that her block pruning scheme achieves w.r.t. unstructured global magnitude pruning, but she would prefer to avoid implementing a custom block-sparse GPU kernel until she is sure it's the right approach. She then considers using existing kernels from a vendor-optimized library such as cuSparse (Yamaguchi and Busato, 2021), but backs off for two reasons: (1) writing a custom operator for a deep learning framework is not trivial, and (2) she notices in the documentation for the vendor-optimized library that it achieves poor performance for smaller block sizes, and may thus not provide a fair comparison across block sizes. Rather than trying to measure actual latency numbers, Alice now plans to use some simple metrics to estimate potential speedups. She starts by counting the FLOPs of each sparse model. However, since her blocked SpMM and unstructured SpMM kernels would be running on NVIDIA Tensor Cores and CUDA cores, respectively, the former will end up achieving higher throughput than the latter.
Additionally, since Tensor Cores necessitate more efficient memory bandwidth utilization, she would also need to account for the reads and writes that her sparse models perform during inference. To address the above concerns, Alice instead generates the Sparsity Roofline for the block-sparse models she has trained to quickly approximate the speedups she would achieve for various block sizes. Figures 3a and 3b show the Sparsity Roofline models Alice would generate for ConvNext and Swin with a batch size of 1. By observing the accuracy and performance tradeoffs that the Sparsity Roofline depicts, Alice is now able to determine that her models achieve higher speedups using larger block sizes, but they only maintain accuracy with smaller block sizes of 2\(\times\)2 and 4\(\times\)4. _Importantly, Alice was able to arrive at this conclusion without needing to go to the effort of writing her own optimized sparse kernels for a variety of block sizes._ She now realizes that if she invests her time in optimizing for smaller block sizes, she will get reasonable speedups without sacrificing accuracy. ### Hardware Architect Bob is a hardware architect designing next-generation Tensor Cores for future GPUs and is investigating alternative N:M patterns for future hardware support. He would like to quickly assess the accuracy and performance implications of the new N:M patterns before he puts in any effort into design and simulation. His goal is to find patterns that achieve accuracy numbers similar to the currently supported 2:4 pattern, but are at least 30% faster given the same Tensor Core throughput. Bob's target workload for these N:M patterns is inference with a batch size of 1 on ConvNeXt and Swin. These two network architectures, in addition to providing state-of-the-art accuracies on their tasks, are also comprised of a variety of layer types, and involve matrix operations of various shapes and sizes, making them fairly representative. The N:M schemes he chooses to investigate are 1:4, 2:8 and 2:16, in addition to the pre-existing 2:4 pattern. Bob works with a machine learning engineer to get these two networks trained, pruned, and fine-tuned for each of the above sparsity patterns, and then obtains the corresponding accuracy numbers. He now needs to determine how these models would perform if hardware support for the new N:M patterns was available. Instead of developing RTL code for these new hardware units and simulating the workloads, which would be labor-intensive and time-consuming, Bob would prefer a quicker way of estimating the runtime performance of each of these pruned models on their respective hypothetical hardware units. Bob could simply use FLOPs to estimate speedups for each pattern (e.g., going from 2:4 to 1:4 is a 2x speedup); however, note that Bob would also need to account for the memory system's ability to keep up with the Tensor Core's throughput to get a more accurate performance estimation. To address these concerns, Bob constructs the Sparsity Roofline for the N:M pruned models to quickly estimate the speedups he would achieve w.r.t. the accuracy. The resulting Sparsity Roofline plots are shown in Figures 4a and 4b. From the Sparsity Roofline, Bob notices that at the same Tensor Core throughput, 2:16 sparsity achieves nearly a 1.8\(\times\) speedup over dense and is over 30% faster than the 2:4 sparsity pattern, meeting his original goal. He also notices that the 1:4 and 2:8 patterns are promising in cases where accuracy preservation is more important than raw speedup. 
Similar to Alice (see Section 4.1), Bob was able to estimate his performance metrics significantly faster using the Sparsity Roofline. ## 5 Discussion ### Unstructured Sparsity Global magnitude pruning with re-training has become a widely applicable technique due to its simplicity and effectiveness. Figure 5 shows how this technique can reach almost 90% sparsity with minimal accuracy loss. In the context of small computer vision models, Figure 2 indicates that accuracy can only be preserved to about a \(1.5\times\) speedup over dense. While a 50% speedup would be somewhat substantial, the time cost of fine-tuning may not be worthwhile in every scenario. Additionally, a 50% speedup is far less than what FLOP counts would suggest. At 87.5% sparsity, a network requires only \(1/8\) the FLOPs of the original, yet Figure 2 tells us that an \(8\times\) speedup is infeasible in any case. To make sparsity generally viable from a performance perspective, we need to understand and alleviate the underlying factors that inhibit SpMM from achieving the speedups suggested by the FLOP reduction. Despite the wide range of factors that affect SpMM kernel performance on GPUs, such as load balancing and efficient data reuse (Gale et al., 2020; Bell and Garland, 2009), we only consider the factors that make up the Sparsity Roofline. Thus, in our analysis, we account for FLOPs, bytes read/written, hardware peak throughput, and hardware peak memory bandwidth (the same as the Roofline model). One of the most glaring downsides of unstructured sparsity is its inability to leverage the GPU's tensor cores, which are effectively leveraged by dense models. The Roofline model in Figure 1 shows the elevated peak tensor core throughput above the peak CUDA core throughput. For the A100, the tensor core throughput is 16x higher than the CUDA core throughput (NVIDIA, 2020). To address the hardware discrepancy and put sparse and dense on a level playing field, we opt to investigate sparsity structures that can leverage the tensor cores.

Figure 3: **Block-Sparsity Roofline: The Sparsity Roofline for (a) ConvNext-Tiny and (b) Swin-Tiny on ImageNet-100 pruned with various block pruning sizes. Calculations are done using a batch size of 1 and NVIDIA A100 hardware specs.**

Figure 4: **N:M Sparsity Roofline: The Sparsity Roofline for (a) ConvNext-Tiny and (b) Swin-Tiny on ImageNet-100 pruned with various N:M patterns. Calculations are done using a batch size of 1 and NVIDIA A100 hardware specs.**

### Block Sparsity The Sparsity Roofline shows two benefits of block sparsity: (1) the ability to use the high-throughput tensor cores, and (2) the reduced index data from the block sparse format. The reduced index data results from the sparsity pattern's more coarse-grained structure, where a single block index refers to multiple nonzeros. The index data is reduced by a factor of the block size. Despite the reduction in reads and writes from block sparsity, Figure 6 shows that the vast majority of block-pruned weights are still memory bound. Because of this, the Sparsity Rooflines for different block sizes in Figures 3(a) and 3(b) see only a small improvement compared to unstructured sparsity. The accuracy-speedup tradeoff is slightly better than unstructured sparsity at best, and only just as good in the worst case.
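To give a feel for the index-data savings just discussed, the sketch below compares the metadata footprint of CSR, block-CSR, and an N:M format for one layer; the 4-byte index size and the exact counting are simplifying assumptions, not measurements from any particular library:

```python
# Rough index-overhead comparison for one 3072 x 768 weight at 87.5% sparsity.
# Index/pointer sizes and the counting below are simplifying assumptions.
from math import log2

def csr_index_bytes(nnz, m, idx_bytes=4):
    # one column index per nonzero plus m + 1 row pointers (nnz + m + 1, Sec. 3.2)
    return (nnz + m + 1) * idx_bytes

def block_csr_index_bytes(nnz, m, block, idx_bytes=4):
    # one block-column index per nonzero block: metadata shrinks by ~block^2
    nnz_blocks = nnz // (block * block)
    return (nnz_blocks + m // block + 1) * idx_bytes

def n_of_m_index_bytes(nnz, M):
    # with hardware support, log2(M) bits per kept nonzero (see Sec. 5.3)
    return nnz * log2(M) / 8

m, k = 3072, 768
nnz = (m * k) // 8                       # 87.5% sparsity
print(csr_index_bytes(nnz, m),           # ~1.19 MB
      block_csr_index_bytes(nnz, m, 4),  # ~77 KB for 4x4 blocks
      n_of_m_index_bytes(nnz, 16))       # ~147 KB for, e.g., a 2:16 pattern
```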
While the heatmap in Table 1 suggests that block sparsity should perform much better than unstructured, we observe that the accuracy loss from large block sizes (16\(\times\)16 and 32\(\times\)32) is too significant to be viable. When we therefore restrict our analysis to smaller block sizes, we see that we can't achieve the full throughput from the tensor cores due to the memory bottleneck seen in Figure 6. The smaller block sizes are completely memory-bound, whilst the larger block sizes are less so, and can thus get more throughput from the tensor cores. ### N:M Sparsity NVIDIA's sparse tensor cores provide an interesting alternative to block sparsity, allowing adopters to leverage the throughput of the tensor cores whilst being able to prune weights in a fine-grained manner. While the coarse-grained structure of block sparsity restricts the freedom of pruning algorithms' weight selection and hurts accuracy, the fine-grained structured sparsity of the N:M patterns should theoretically hurt accuracy less. In addition to the accuracy benefits of a fine-grained structure, the N:M formats can reduce the memory overhead for indexing data. With dedicated hardware support, N:M formats only need \(\log_{2}(M)\) bits to store the index of each nonzero inside the \(M\)-wide blocks; for 2:4, that's only 2 bits per nonzero. Figures 4(a) and 4(b) show the Sparsity Roofline for N:M formats. We see that the various N:M patterns achieve a better performance-accuracy tradeoff over unstructured than what block sparsity was able to achieve. N:M is an improvement over block sparsity in our pruned networks due to the reduced accuracy degradation and minimal index data overhead.

Figure 5: **Accuracy vs. Sparsity and FLOPs: A common but misleading means of evaluating sparse models. Plotting accuracy (here ImageNet-100 top-1 accuracy) vs. sparsity (top) and FLOPs (bottom) for various models implies higher sparsity means higher GPU performance, which does not take memory bandwidth into account.**

Figure 6: **Roofline, Block Sparse vs. Dense: Roofline model measuring throughput of SpMM on all block sparse layers and GEMM on dense layers from all trained models, on a single NVIDIA A100.**

### Feature Overhead Finally, we have not yet mentioned the read and write overhead of the input and output features of each layer. Equation 1 shows the data for the input and output features as \(n\times k\) and \(m\times n\) (respectively). Akin to Amdahl's law, we can only expect to reduce the number of memory accesses for pruned matrices. Therefore, regardless of our pruning strategy, the input and output features will always incur a fixed number of reads and writes as overhead. Figure 7 shows the severity of this problem. For a batch size of 1, the feature memory accesses, which cannot be reduced via pruning, account for half of all accesses. For a batch size of 32, the feature memory accesses heavily dominate the overall number of accesses, making it difficult to decrease the memory bottleneck of our sparse models. The \(n\) dimension in Equation 1 is shared by the input and output feature matrices and is not one of the weight matrix dimensions. The size of \(n\) relative to \(m\) and \(k\) determines the appearance of the graphs in Figure 7. The \(n\) dimension scales linearly with both the batch size and the number of spatial locations in the feature data (for both convolution and transformer FFN layers). This suggests that we will see larger speedups from pruning when the model size (\(m\) and \(k\)) is large relative to the batch size and feature sizes (\(n\)).
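A short calculation makes the feature-overhead argument concrete (the layer shape and token count below are illustrative assumptions, not figures from the paper):

```python
# Share of memory traffic due to input/output features, which pruning cannot
# reduce: features = n*k + m*n elements vs. nnz weight elements (Eq. (1)).
m, k = 3072, 768        # FFN-like weight shape (assumed)
tokens = 196            # spatial locations per sample (assumed)

for batch in (1, 32):
    n = tokens * batch
    for sparsity in (0.0, 0.875):
        nnz = (1 - sparsity) * m * k
        features = n * k + m * n
        share = features / (nnz + features)
        print(f"batch={batch:2d}  sparsity={sparsity:.3f}  feature share={share:.2f}")
```

The feature share grows with both batch size and sparsity, which is why the memory bottleneck of sparse layers is hard to remove at large \(n\).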
## 6 Related Work **Automated Model Compression.** Recent work has explored various approaches for automatically inferring optimal sparsity levels using techniques such as Bayesian Optimization (Joseph et al., 2020) and reinforcement learning (He et al., 2018). Our work differs in two ways: we focus on providing (1) a _visual_ representation of the accuracy and performance landscape for different sparsity patterns and levels, and (2) meaningful estimates of potential inference runtimes to aid deep learning practitioners, performance experts and hardware designers. **Deep Learning Roofline Models.** The Roofline model has been applied to the deep learning problem space in the past (Yang et al., 2020; Wang et al., 2020; Czaja et al., 2020). However, this work primarily focuses on dense neural networks. Specifically, Wang et al. (2020) extend the Roofline model to deep learning by using latency and compute/bandwidth complexity. Yang et al. (2020) provide a toolkit extension for deep learning to support new precisions, tensor cores, and a tool for measuring performance metrics. Czaja et al. (2020) perform a Roofline analysis of DNNs accounting for non-uniform memory access (NUMA) systems.
2309.15846
Revised Enskog equation for hard rods
We point out that Percus's collision integral for one-dimensional hard rods [J. K. Percus, Physics of Fluids 12, 1560-1563 (1969)] does not preserve the thermal equilibrium state in an external trapping potential. We derive a revised Enskog equation for hard rods and show that it preserves this thermal state exactly. In contrast to recently proposed kinetic equations for dynamics in integrability-breaking traps, both our kinetic equation and its thermal states are explicitly nonlocal in space. Our equation differs from earlier proposals at third order in spatial derivatives and we attribute this discrepancy to the choice of collision integral underlying our approach.
Vir B. Bulchandani
2023-09-27T17:59:46
http://arxiv.org/abs/2309.15846v3
# Modified Enskog equation for hard rods ###### Abstract We point out that Percus's collision integral for one-dimensional hard rods [J. K. Percus, Physics of Fluids 12, 1560-1563 (1969)] does not preserve the thermal equilibrium state in an external trapping potential. We derive a modified Enskog equation for hard rods and show that it preserves this thermal state exactly. In contrast to recently proposed kinetic equations for dynamics in integrability-breaking traps, both our kinetic equation and its thermal states are explicitly nonlocal in space. Our equation differs from earlier proposals at third order in spatial derivatives and we attribute this discrepancy to the molecular chaos assumption underlying our approach. ## I Introduction Contemporary experiments [1; 2; 3; 4] on systems of ultracold atoms have revealed a class of physical systems that do not thermalize efficiently under their own dynamics. Such experiments realize quasi-one-dimensional gases of particles with integrable two-body interactions (usually the Lieb-Liniger model of delta-interacting bosons on a line) and subject them to external trapping potentials that break microscopic integrability of the gas. The experimentally observed phenomenon of delayed thermalization in an integrability-breaking trap has been reproduced numerically in a variety of classical systems, including one-dimensional hard rods [5; 6; 7], the Toda lattice [8; 9] and the rational Calogero model [10]. Strikingly, the presence of thermalization in these systems appears to depend on the shape of the trapping potential; while there is evidence that for anharmonic trapping potentials, the system will eventually thermalize [6; 7; 11], there is also evidence that in harmonic trapping potentials, these systems can exhibit nonergodic behaviour at all numerically accessible times [5; 6]. This is surprising given that the only additional microscopic conservation law in a harmonic trap appears to be the centre-of-mass energy [5; 6]. Although the short-time dynamics in a trap is widely believed to be captured by the generalized hydrodynamics (GHD) of integrable systems [5; 11; 12; 13; 14; 15; 16], the validity of the latter description is questionable at long times because it implicitly assumes a perturbative treatment of integrability breaking that is not justified far from equilibrium [17; 18]. Thus to address difficult questions concerning the presence or absence of thermalization at long times, a more systematic treatment of the trapping potential is desirable. In this paper, we propose such a treatment for systems of classical hard rods in an integrability-breaking trap. Such systems are appealing because they provide arguably the simplest nontrivial example of the phenomenology of interest; for example, the dynamics of hard rods between collisions is simply the dynamics of free particles in a trap. A single-particle kinetic equation for hard rods without a trap was first published by Percus [19] in 1969; this equation was subsequently shown to yield exact hydrodynamic predictions at both the ballistic [20] and diffusive [21] scales. In a later paper [22], Percus derived the exact thermal state of hard rods in a general confining external potential. However, a direct comparison between these two results reveals that they are mutually contradictory. This contradiction appears to have escaped notice until now, and turns out to originate in the "Enskog approximation" to the equilibrium contact pair correlation function made in the earlier paper [19].
Below we calculate this pair correlation function exactly for trapped hard rods, and show that it leads to a small modification of Percus's kinetic equation that is nevertheless necessary to preserve the exact thermal state in a trap. Modifying Enskog's prescription in this way was previously advocated [23] by Van Beijeren and Ernst as early as 1973 on general grounds, though they do not appear to have studied the specific problem of interest below. Conservation of the exact thermal state is a nontrivial check on our "improved" kinetic equation and suggests that this equation is the best possible single-particle kinetic equation that respects (the Enskog-Van Beijeren-Ernst statement of) Boltzmann's molecular chaos assumption. While this equation matches earlier predictions [19; 20; 21; 24; 25] up to quadratic order in spatial derivatives, it differs from such predictions at third order in spatial derivatives and beyond. This disagreement indicates an incompatibility between the molecular chaos assumption within a trap and previously proposed dynamical equations for the single-particle distribution function without a trap [24; 25; 26]. We do not expect this tension to arise in non-interacting theories for which dissipation is absent [27; 10; 26]. The paper is structured as follows. We first summarize the existing state of knowledge on kinetic equations for hard rods, and obtain a new characterization of the stationary states of the ballistic-scale kinetic equation. We then explain why existing kinetic theories are inconsistent with the thermal state in a trap, and propose a modified Enskog equation that resolves this inconsistency. Finally, we perform a derivative expansion on the modified Enskog equation and show that it begins to differ from earlier proposals at third order in spatial derivatives. We argue that this disagreement arises from the assumption of local equilibrium within single-particle kinetic theory, which is not always justified. We identify some possible limitations on the validity of hydrodynamics as a model for long-time dynamics in integrability-breaking traps. ## II Existing kinetic equations for hard rods ### Kinetic equations without a trap In the absence of a trapping potential, Percus derived [19] a kinetic equation for hard rods by starting from the second level of the BBGKY hierarchy \[\partial_{t}\varrho_{v}+v\partial_{x}\varrho_{v}= \int_{-\infty}^{v}dv^{\prime}\,(v-v^{\prime})[\varrho_{v}(x-a) \varrho_{v^{\prime}}(x)-\varrho_{v}(x)\varrho_{v^{\prime}}(x+a)]\] \[+ \int_{v}^{\infty}dv^{\prime}\,(v^{\prime}-v)[\varrho_{v^{\prime}} (x)\varrho_{v}(x+a)-\varrho_{v^{\prime}}(x-a)\varrho_{v}(x)], \tag{1}\] where \[\varrho_{v}(x,t)=\sum_{i=1}^{N}\delta(x-x_{i}(t))\delta(v-v_{i}(t)) \tag{2}\] denotes the microscopic phase-space distribution function. In order to do this, he used the Enskog form of Boltzmann's molecular chaos assumption \[\langle\varrho_{v}(x)\varrho_{v^{\prime}}(x+a)\rangle\approx\rho_{v}(x)\rho_{v^ {\prime}}(x+a)g(x+a/2), \tag{3}\] with \(\rho_{v}(x)=\langle\varrho_{v}(x)\rangle\) the single-particle distribution function and \(g(x+a/2)\) the local-equilibrium pair correlation function at the midpoint of the rod centers at the instant of collision, which he found to equal [28; 29] \[g(x+a/2)=\frac{1}{1-n(x+a/2)a}.
\tag{4}\] This yields the Enskog equation \[\partial_{t}\rho_{v}+v\partial_{x}\rho_{v}= \int_{-\infty}^{v}dv^{\prime}\,(v-v^{\prime})[g(x-a/2)\rho_{v}(x- a)\rho_{v^{\prime}}(x)-g(x+a/2)\rho_{v}(x)\rho_{v^{\prime}}(x+a)]\] \[+ \int_{v}^{\infty}dv^{\prime}\,(v^{\prime}-v)[g(x+a/2)\rho_{v^{ \prime}}(x)\rho_{v}(x+a)-g(x-a/2)\rho_{v^{\prime}}(x-a)\rho_{v}(x)]. \tag{5}\] We note that any translation-invariant state of the form \[\rho_{v}(x)=f(v) \tag{6}\] where \(f:\mathbb{R}\rightarrow\mathbb{R}\) is a non-negative function, lies in the kernel of this collision integral for each \(v^{\prime}\); such states are trivially stationary under the hard-rod dynamics and define generalized Gibbs ensembles (GGEs) for the untrapped hard rod gas [20; 30]. Eq. (1) had appeared in earlier literature without derivation on the grounds of being "self-evident" [31]. It can be deduced from the formal manipulations that yield the BBGKY hierarchy for the hard-sphere gas, whose justification requires certain technical continuity assumptions [32]. Truncating Eq. (5) at the ballistic (Euler) and diffusive (Navier-Stokes) scales yields kinetic equations that are known to be exact at these scales [19; 20; 21], respectively \[\partial_{t}\rho_{v}+\partial_{x}\left(\frac{v-aj}{1-an}\rho_{v}\right)=0 \tag{7}\] at the ballistic scale, where \[n=\int_{-\infty}^{\infty}dv^{\prime}\,\rho_{v^{\prime}},\quad j=\int_{-\infty }^{\infty}dv^{\prime}\,v^{\prime}\rho_{v^{\prime}}, \tag{8}\] and \[\partial_{t}\rho_{v}+\partial_{x}\left(\frac{v-aj}{1-an}\rho_{v}\right)=\frac {1}{2}a^{2}\partial_{x}\left(\frac{1}{1-an}\int_{-\infty}^{\infty}dv^{\prime} |v^{\prime}-v|(\partial_{x}\rho_{v}\rho_{v^{\prime}}-\rho_{v}\partial_{x}\rho _{v^{\prime}})\right) \tag{9}\] if diffusive corrections are included. Both these equations preserve the generalized Gibbs states Eq. (6). ### Kinetic theory in a trap The currently accepted [11; 14] diffusive-scale kinetic theory for hard rods in an external trapping potential \(V(x)\) consists solely of adding a standard Boltzmann forcing term to the above equations. At the ballistic scale, this yields (\(m=1\)) \[\partial_{t}\rho_{v}+\partial_{x}\left(\frac{v-aj}{1-an}\rho_{v}\right)-V^{ \prime}(x)\partial_{v}\rho_{v}=0, \tag{10}\] while including diffusive corrections, we have \[\partial_{t}\rho_{v}+\partial_{x}\left(\frac{v-aj}{1-an}\rho_{v}\right)-V^{ \prime}(x)\partial_{v}\rho_{v}=\frac{1}{2}a^{2}\partial_{x}\left(\frac{1}{1-an }\int_{-\infty}^{\infty}dv^{\prime}|v^{\prime}-v|(\partial_{x}\rho_{v}\rho_{v ^{\prime}}-\rho_{v}\partial_{x}\rho_{v^{\prime}})\right). \tag{11}\] In previous work, we both verified numerically that Eq. (10) remained valid until a state-dependent "time to chaos" at which diffusive corrections became important, and pointed out that these equations were only justified if the trapping potential did not modify the local-equilibrium pair correlation function of the gas [5]. We will discuss the appropriately corrected equation in a trap in more detail below, but to provide some context, we first describe the stationary states of Eqs. (10) and (11). To understand stationary states of the ballistic scale dynamics Eq. (10), it is easiest to change variables to the "free density" [19; 30] \[\theta_{v}=\rho_{v}/(1-an), \tag{12}\] to yield \[\partial_{t}\theta_{v}+\left(\frac{v-aj}{1-an}\right)\partial_{x}\theta_{v}- V^{\prime}(x)\partial_{v}\theta_{v}=0, \tag{13}\] so that stationary states satisfy \[\left(\frac{v-aj}{1-an}\right)\partial_{x}\theta_{v}=V^{\prime}(x)\partial_{v }\theta_{v}. 
\tag{14}\] The possibility of infinitely many stationary states solving this equation has been raised in previous work [5; 14]. Naively such states correspond to generalized Gibbs ensembles, but such states are not expected to survive the integrability-breaking effect of general trapping potentials. It has nevertheless been observed numerically that for hard rods in harmonic traps [5; 6] and rational Calogero particles in arbitrary traps [10], non-Maxwellian stationary states persist for all accessible times. Our first contribution below is an explicit construction of stationary states solving Eq. (14) for any differentiable trapping potential \(V(x)\). To this end let \(F:\mathbb{R}\rightarrow\mathbb{R}\) be a differentiable, non-decreasing function with \(F^{\prime}(x)\geq 0\) for all \(x\in\mathbb{R}\), such that there exists a solution \(\epsilon_{v}(x)\) to the integral equation \[\epsilon_{v}=\frac{v^{2}}{2}+V(x)-\mu-a\int_{-\infty}^{\infty}dv^{\prime}\,F (\epsilon_{v^{\prime}}). \tag{15}\] Then \(\theta_{v}=F^{\prime}(\epsilon_{v})\) solves Eq. (14), since in this state, \(j=0\) by evenness of \(\epsilon_{v}\) in \(v\) and one can show that \[\partial_{v}\epsilon_{v}=v,\quad\partial_{x}\epsilon_{v}=(1-an)V^{\prime}(x). \tag{16}\] Our construction, which is parameterized by the functional degree of freedom \(F\), extends straightforwardly to the Euler-scale hydrodynamics of other quantum and classical integrable systems. For classical hard rods, we can compare this solution directly against exact results for the (grand canonical) thermal state in a trap [22]. In the local density approximation (whose errors will be treated systematically below), the latter recovers Eq. (15) with the specific choice \[F(\epsilon)=-\frac{1}{\beta}e^{-\beta\epsilon}. \tag{17}\] The solutions corresponding to more general choices of \(F\) can be viewed as maximizers of classical entropies that are not the usual Boltzmann-Gibbs entropy, which reflects the freedom to choose an entropy function in ballistic-scale hydrodynamics [30]. The ballistic-scale hydrodynamics in a trap conserves an infinite family of such generalized entropies [5]. It is interesting to consider whether these stationary states remain stationary when diffusive corrections are included. It was argued in previous work [11] that only thermal states in the above sense, satisfying \(\theta_{v}=e^{-\beta\epsilon_{v}}\) and \[\epsilon_{v}=\frac{v^{2}}{2}+V(x)-\mu+\frac{a}{\beta}\int_{-\infty}^{\infty} dv^{\prime}\,e^{-\beta\epsilon_{v^{\prime}}} \tag{18}\] are stationary under the full dissipative evolution Eq. (11). We can see this directly by imposing vanishing of the dissipative part of the phase-space current in Eq. (11) for general stationary states \(\theta_{v}=F(\epsilon_{v})\) of the ballistic scale equation, which yields the condition \[\frac{F^{\prime\prime}(\epsilon_{v^{\prime}})}{F^{\prime}(\epsilon_{v^{\prime} })}=\frac{F^{\prime\prime}(\epsilon_{v})}{F^{\prime}(\epsilon_{v})},\quad v^{ \prime}\neq v, \tag{19}\] on \(F\). This implies that \(F(\epsilon)\) is exponential in \(\epsilon\), so that the corresponding stationary state \(\rho_{v}\) is always separable in \(x\) and \(v\) and therefore Maxwellian in \(v\) by Eq. (15).
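As a concrete numerical companion to Eq. (18) (a minimal sketch, not taken from the paper; the values of \(\beta\), \(\mu\), \(a\) and the harmonic trap are arbitrary illustrative choices), the local-density-approximation thermal state can be obtained pointwise in \(x\) by a damped fixed-point iteration:

```python
import numpy as np

# Solve Eq. (18) in the form eps_v = v^2/2 + W(x), with
# W = V - mu + (a/beta) * I and I(x) = int dv' exp(-beta*eps_v') = sqrt(2*pi/beta) * exp(-beta*W).
# Parameters and the harmonic trap are illustrative assumptions.
beta, mu, a = 1.0, 0.5, 0.2
x = np.linspace(-5.0, 5.0, 401)
V = 0.5 * x**2

W = V - mu
for _ in range(200):  # damped fixed-point iteration, converges for these parameters
    I = np.sqrt(2.0 * np.pi / beta) * np.exp(-beta * W)
    W = 0.5 * W + 0.5 * (V - mu + (a / beta) * I)

I = np.sqrt(2.0 * np.pi / beta) * np.exp(-beta * W)
n = I / (1.0 + a * I)   # particle density, using theta_v = rho_v / (1 - a*n), Eq. (12)
print(n.max(), n.sum() * (x[1] - x[0]))   # peak density and total particle number
```

The nonlocally corrected thermal state of Eq. (23) below can be treated in the same way, except that the pointwise update is replaced by one coupling each \(x\) to the interval \([x,x+a]\).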
## III Improved kinetic equation for hard rods ### A contradiction and its resolution We now explain why the collision integral proposed by Percus [19] does not preserve thermal states in an external trapping potential. To our knowledge this observation has not been made before, although as we shall discuss below, it is consistent with van Beijeren and Ernst's critique [23] of Enskog's prescription. The single-particle distribution function for trapped thermal states of hard rods at inverse temperature \(\beta\) and chemical potential \(\mu\) is given by [22] \[\rho_{v}(x)=\sqrt{\frac{\beta}{2\pi}}e^{-\beta v^{2}/2}n(x), \tag{20}\] where the local particle density \(n(x)\) satisfies the nonlocal integral equation \[\log n(x)+\beta(V(x)-\mu)-\log\sqrt{\frac{2\pi}{\beta}}=\log\left(1-\int_{x-a }^{x}dy\,n(y)\right)-\int_{x}^{x+a}dy\,\frac{n(y)}{1-\int_{y-a}^{y}dz\,n(z)}. \tag{21}\] Let us start by interpreting Eqs. (20) and (21) for the exact thermal state in a trap. To relate these expressions to the thermodynamic-Bethe-ansatz-like equation Eq. (18), we define a nonlocally corrected version of the free density \(\theta_{v}(x)\) in a trap, namely \[\theta_{v}(x)=\frac{\rho_{v}(x)}{1-\int_{x-a}^{x}dy\,n(y)}. \tag{22}\] Writing \(\theta_{v}(x)=e^{-\beta\epsilon_{v}(x)}\) as above, an exact and nonlocally corrected integral equation for \(\epsilon_{v}\) is given by \[\epsilon_{v}(x)=\frac{v^{2}}{2}+V(x)-\mu+\frac{1}{\beta}\int_{x}^{x+a}dy\, \int_{-\infty}^{\infty}dv^{\prime}\,e^{-\beta\epsilon_{v^{\prime}}(y)}. \tag{23}\] This integral equation (or indeed Percus's original treatment [22]) implies corrections to the local density approximation Eq. (18) for trapped hard rods in thermal equilibrium [11; 33] at all orders in the rod length \(a\). We expect that such nonlocal terms are generically present for equilibrium states of interacting particles in integrability-breaking traps, and thus correct predictions [11] that invoke the local density approximation, with the size of these corrections determined by the scattering length of the interactions. We are now in a position to correct Eq. (5). Let us suppose that the Enskog prescription Eq. (4) remains valid in the presence of a trapping potential. This predicts the kinetic equation \[\partial_{t}\rho_{v}+v\partial_{x}\rho_{v}-V^{\prime}(x)\partial _{v}\rho_{v}= \int_{-\infty}^{v}dv^{\prime}\,(v-v^{\prime})[g(x-a/2)\rho_{v}(x- a)\rho_{v^{\prime}}(x)-g(x+a/2)\rho_{v}(x)\rho_{v^{\prime}}(x+a)]\] \[+ \int_{v}^{\infty}dv^{\prime}\,(v^{\prime}-v)[g(x+a/2)\rho_{v^{ \prime}}(x)\rho_{v}(x+a)-g(x-a/2)\rho_{v^{\prime}}(x-a)\rho_{v}(x)]. \tag{24}\] However, upon substituting the exact equilibrium state Eq. (20) into this equation, the left-hand side yields \[\mathrm{L.\,H.\,S}=v\rho_{v}\left(\frac{n(x-a)}{1-\int_{x-a}^{x}dy\,n(y)}-\frac{n (x+a)}{1-\int_{x}^{x+a}dy\,n(y)}\right), \tag{25}\] while the right-hand side yields \[\mathrm{R.\,H.\,S}=v\rho_{v}\left(\frac{n(x-a)}{1-an(x-a/2)}-\frac{n(x+a)}{1- an(x+a/2)}\right). \tag{26}\] The problem is now apparent: in an inhomogeneous trapping potential, Percus's proposed collision integral will not preserve the exact thermal state Eq. (20). To preserve this thermal state to all orders in the rod length \(a\), we find that it suffices to replace the Enskog prescription Eq. (4) by the approximation \[\langle\varrho_{v}(x)\varrho_{v^{\prime}}(x+a)\rangle\approx\rho_{v}(x)\rho_{ v^{\prime}}(x+a)g^{(2)}(x,x+a), \tag{27}\] where \(g^{(2)}(x,x+a)\) denotes the exact local-equilibrium contact pair correlation function of the hard rod gas. In Appendix A, we show that for hard rods on an infinite line, in any potential that is confining as \(|x|\to\infty\), \[g^{(2)}(x,x+a)=\frac{1}{1-\int_{x}^{x+a}dy\,n(y)}.
\tag{28}\] We emphasize that Eq. (27) is a strict equality in thermal equilibrium within a trap, unlike the naive extrapolation from equilibrium without a trap [28; 29] represented by Eq. (4). Moreover, this correction should persist even in the absence of a trap, which follows by considering arbitarily weak but still confining potentials \(V(x)\to 0\). We note that "improving" Enskog's prescription by using the exact local-equilibrium pair correlation function at the instant of collision was previously advocated by van Beijeren and Ernst [23], who called the resulting kinetic equation the "modified Enskog equation" and argued that this equation should be exact at sufficiently short times. For trapped hard rods, the modified Enskog equation reads \[\partial_{t}\rho_{v}+v\partial_{x}\rho_{v}-V^{\prime}(x)\partial _{v}\rho_{v}= \int_{-\infty}^{v}dv^{\prime}\,(v-v^{\prime})[g^{(2)}(x-a,x)\rho_ {v}(x-a)\rho_{v^{\prime}}(x)-g^{(2)}(x,x+a)\rho_{v}(x)\rho_{v^{\prime}}(x+a)]\] \[+ \int_{v}^{\infty}dv^{\prime}\,(v^{\prime}-v)[g^{(2)}(x,x+a)\rho_ {v^{\prime}}(x)\rho_{v}(x+a)-g^{(2)}(x-a,x)\rho_{v^{\prime}}(x-a)\rho_{v}(x)] \tag{29}\] with \(g^{(2)}(x,x+a)\) given by Eq. (28), and one can verify explicitly that this preserves the thermal state Eq. (20). The only assumption required to derive Eq. (29) from the second level of the BBGKY hierarchy is (the Enskog-Van Beijeren-Ernst statement of) Boltzmann's molecular chaos assumption: Eq. (29) holds provided the two-particle distribution function \(\varrho_{v}(x)\varrho_{v^{\prime}}(x+a)\) is in thermal equilibrium over the interval \([x,x+a]\) for all \(x\). Thus, given Eq. (1), any corrections to Eq. (29) must depart from the assumption of local thermal equilibrium. It is interesting to note that the latter assumption can break down dynamically if the trap is removed [5]. One can verify that Eq. (29) respects the appropriate microscopic conservation laws, namely conservation of total particle number and total energy in a generic trap and conservation of centre-of-mass energy in a harmonic trap. We now characterize some of the stationary states of Eq. (29) for differentiable potentials \(V(x)\). Specifically, we will show that any such potential for which arbitrary GGEs are stationary in Eq. (29) is uniform, and that all separable, time-reversal symmetric stationary states of Eq. (29) are thermal. ### Stationary states of the modified Enskog equation #### iii.2.1 Stationary GGEs imply a uniform potential Generalized Gibbs states of the untrapped hard rod gas take the form of any spatially uniform profile \[\rho_{v}(x)=f(v), \tag{30}\] for some non-negative function \(f(v)\). Let us suppose that for some differentiable choice of trap \(V(x)\), the kinetic equation Eq. (29) is stationary with respect to all such \(\rho_{v}(x)\). Since the collision integral vanishes for any translation-invariant state, we deduce that \[-V^{\prime}(x)f^{\prime}(v)=0 \tag{31}\] for all non-negative \(f\), which implies that \[V^{\prime}(x)=0, \tag{32}\] i.e. the potential is uniform. #### iii.2.2 Separable stationary states are thermal Let us now suppose that Eq. (29) has a separable, time-reversal symmetric stationary state of the form \[\rho_{v}(x)=n(x)f(v) \tag{33}\] with \(f(v)\) even in \(v\). Without loss of generality we assume that \(\int_{-\infty}^{\infty}dv\,f(v)=1\), so that \(n(x)\) is the local particle density. 
In general, the condition for stationarity can be written as \[v\partial_{x}\rho_{v}-V^{\prime}(x)\partial_{v}\rho_{v}= g^{(2)}(x,x+a)\int_{-\infty}^{\infty}dv^{\prime}\left(v^{\prime}-v \right)\left(\mathbb{1}_{v^{\prime}<v}\rho_{v}(x)\rho_{v^{\prime}}(x+a)+ \mathbb{1}_{v^{\prime}>v}\rho_{v^{\prime}}(x)\rho_{v}(x+a)\right)\] \[-g^{(2)}(x-a,x)\int_{-\infty}^{\infty}dv^{\prime}\left(v^{\prime}- v\right)\left(\mathbb{1}_{v^{\prime}<v}\rho_{v}(x-a)\rho_{v^{\prime}}(x)+ \mathbb{1}_{v^{\prime}>v}\rho_{v^{\prime}}(x-a)\rho_{v}(x)\right). \tag{34}\] For separable states Eq. (33), this reduces to \[vf(v)\partial_{x}\log n(x)-V^{\prime}(x)f^{\prime}(v)= \left(\frac{n(x+a)}{1-\int_{x}^{x+a}dy\,n(y)}-\frac{n(x-a)}{1- \int_{x-a}^{x}dy\,n(y)}\right)\int_{-\infty}^{\infty}dv^{\prime}\left(v^{ \prime}-v\right)f(v)f(v^{\prime}).\] Using normalization and time-reversal symmetry of \(f\), we can write this as \[\partial_{x}\log n(x)+\frac{n(x+a)}{1-\int_{x}^{x+a}dy\,n(y)}- \frac{n(x-a)}{1-\int_{x-a}^{x}dy\,n(y)}=V^{\prime}(x)\frac{\partial_{v}\log f (v)}{v}. \tag{35}\] In order that the right hand side is independent of \(v\), we require that \[\partial_{v}\log f(v)=-\beta v \tag{36}\] for some constant \(\beta\). By normalization of \(f(v)\), this implies that \[f(v)=\sqrt{\frac{\beta}{2\pi}}e^{-\beta v^{2}/2}, \tag{37}\] i.e. \(f(v)\) is Maxwellian. (Note that this constraint is only imposed if \(V^{\prime}(x)\neq 0\) for some \(x\), so there is no contradiction with the previous result.) In particular, the density \(n(x)\) must be related to the trapping potential via the equation \[\partial_{x}\log n(x)+\beta V^{\prime}(x)=\frac{n(x-a)}{1-\int_{x-a}^{x}dy\,n (y)}-\frac{n(x+a)}{1-\int_{x}^{x+a}dy\,n(y)}, \tag{38}\] which is precisely the spatial derivative of Eq. (21). Thus \(n(x)\) coincides with the thermal state in a trap up to an integration constant that we identify as the chemical potential. ## IV Derivative expansion of the modified ENKGO equation In order to compare the predictions of our Eq. (29) against previous results on the higher-order hydrodynamics of hard rods [24; 25; 26], which are usually stated as spatially local equations, we must develop an expansion in powers of the rod length \(a\). At low orders, this expansion is identical to that of the Enskog equation Eq. (24) because our proposed "nonlocal" correction to the pair correlation function \[g^{(2)}(x-a/2,x+a/2)-g(x)=\frac{a^{3}}{24}\frac{n^{\prime\prime}(x)}{(1-an(x) )^{2}}+\mathcal{O}(a^{5}),\quad a\to 0, \tag{39}\] is only \(\mathcal{O}(a^{3})\) in the rod length, but at higher orders these nonlocal corrections will proliferate. To derive this expansion, it is helpful to define the functional \[G_{vv^{\prime}}(x)=\frac{1}{1-\int_{x-a/2}^{x+a/2}dy\,n(y)}\rho_{v}(x-a/2)\rho_{v ^{\prime}}(x+a/2), \tag{40}\] which recovers the contact two-particle distribution function in equilibrium. In terms of this functional, and provided that \(\rho_{v}(x)\) is smooth in \(x\), the collision integral in Eq. (29) can be Taylor expanded as \[I[\rho] =\int_{-\infty}^{v}dv^{\prime}(v-v^{\prime})(G_{vv^{\prime}}(x-a/ 2)-G_{vv^{\prime}}(x+a/2))+\int_{v}^{\infty}dv^{\prime}(v^{\prime}-v)(G_{v^{ \prime}v}(x+a/2)-G_{v^{\prime}v}(x-a/2))\] \[=2\sum_{n=0}^{\infty}\frac{(a/2)^{2n+1}}{(2n+1)!}\partial_{x}^{2 n+1}\left(\int_{-\infty}^{v}dv^{\prime}(v^{\prime}-v)G_{vv^{\prime}}(x)+\int_{v} ^{\infty}dv^{\prime}(v^{\prime}-v)G_{v^{\prime}v}(x)\right). 
\tag{41}\] To proceed further, we now expand (suppressing the argument of \(g^{(2)}(x-a/2,x+a/2)\)) \[G_{vv^{\prime}}(x)=g^{(2)}\sum_{m=0}^{\infty}\frac{(a/2)^{m}}{m!}\sum_{k=0}^{ m}\binom{m}{k}(-1)^{k}\rho_{v}^{(k)}(x)\rho_{v^{\prime}}^{(m-k)}(x) \tag{42}\] and note that these terms have alternating parity under interchange of velocities, to finally yield \[I[\rho]= 2\sum_{n,m=0}^{\infty}\frac{(a/2)^{2(n+m)+1}}{(2n+1)!(2m)!} \partial_{x}^{2n+1}\left(g^{(2)}\int_{-\infty}^{\infty}dv^{\prime}(v^{\prime}- v)\sum_{k=0}^{2m}\binom{2m}{k}(-1)^{k}\rho_{v}^{(2m-k)}(x)\rho_{v^{\prime}}^{(k )}(x)\right)\] \[+ 2\sum_{n,m=0}^{\infty}\frac{(a/2)^{2(n+m)+2}}{(2n+1)!(2m+1)!} \partial_{x}^{2n+1}\left(g^{(2)}\int_{-\infty}^{\infty}dv^{\prime}|v^{\prime} -v|\sum_{k=0}^{2m+1}\binom{2m+1}{k}(-1)^{k}\rho_{v}^{(2m+1-k)}(x)\rho_{v^{ \prime}}^{(k)}(x)\right). \tag{43}\] Approximating \(g^{(2)}(x-a/2,x+a/2)\) by \(g(x)\) yields the full derivative expansion of Eq. (24). Let us re-write the series expansion in Eq. (43) as \[I[\rho]=\sum_{l=1}^{\infty}a^{l}I^{(l)}[\rho], \tag{44}\] where \(I^{(l)}[\rho]\) can depend nontrivially on \(a\) through \(g^{(2)}\). It will be instructive to compute the first three terms in this series. For example, using Eq. (39) and suppressing arguments, we find that \[aI^{(1)}=a\partial_{x}\left(\frac{j-nv}{1-an}\rho_{v}\right)+\frac{a^{4}}{24 }\partial_{x}\left(\frac{n^{\prime\prime}(j-nv)}{(1-an)^{2}}\rho_{v}\right)+ \mathcal{O}(a^{6}). \tag{45}\] The first term is the usual velocity dressing for hard rods. The second term is new and arises from nonlocality of the pair correlation function \(g^{(2)}(x,x+a)\). Similarly \[a^{2}I^{(2)}=\frac{a^{2}}{2}\partial_{x}\left(\frac{1}{1-an}\int_{-\infty}^{ \infty}dv^{\prime}|v^{\prime}-v|(\rho_{v}^{\prime}\rho_{v^{\prime}}-\rho_{v} \rho_{v^{\prime}}^{\prime})\right)+\mathcal{O}(a^{5}) \tag{46}\] recovers the usual diffusive term at leading order in \(a\), with corrections arising from nonlocality at higher order in \(a\) (here primes on \(\rho\) denote spatial derivatives). Finally, we find that \[a^{3}I^{(3)}=\frac{a^{3}}{24}\left(\partial_{x}^{3}\left(\frac{j-nv}{1-an} \rho_{v}\right)+3\partial_{x}\left(\frac{1}{1-an}\int_{-\infty}^{\infty}dv^{ \prime}(v^{\prime}-v)(\rho_{v}^{\prime\prime}\rho_{v^{\prime}}-2\rho_{v}^{ \prime}\rho_{v^{\prime}}^{\prime}+\rho_{v}\rho_{v^{\prime}}^{\prime\prime}) \right)\right)+\mathcal{O}(a^{6}). \tag{47}\] Combining these three expressions, we deduce that truncating the expansion Eq. (44) at third order in spatial derivatives yields the kinetic equation \[\partial_{t}\rho_{v}+\partial_{x}\left(\frac{v-aj}{1-an}\rho_{v} \right)-V^{\prime}(x)\partial_{v}\rho_{v}=\frac{1}{2}a^{2}\partial_{x}\left( \frac{1}{1-an}\int_{-\infty}^{\infty}dv^{\prime}|v^{\prime}-v|(\partial_{x}\rho _{v}\rho_{v^{\prime}}-\rho_{v}\partial_{x}\rho_{v^{\prime}})\right)\] \[+\frac{a^{3}}{24}\left(\left(\partial_{x}\frac{an^{\prime\prime}}{1 -an}+\partial_{x}^{3}\right)\left(\frac{j-nv}{1-an}\rho_{v}\right)+3\partial_{x }\left(\frac{1}{1-an}\int_{-\infty}^{\infty}dv^{\prime}(v^{\prime}-v)(\rho_{v}^ {\prime\prime}\rho_{v^{\prime}}-2\rho_{v}^{\prime}\rho_{v^{\prime}}^{\prime}+ \rho_{v}\rho_{v^{\prime}}^{\prime\prime})\right)\right). \tag{48}\] The third-order term differs from a recent proposal for the dispersive corrections to generalized hydrodynamics [26], and indeed one can verify that the third-order contribution to Eq. (48) deviates from this earlier prediction even at the level of linear response. 
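The nonlocal correction to the contact factor quoted in Eq. (39), which seeds the new \(\mathcal{O}(a^{4})\) term in Eq. (45), can be verified symbolically. The short sympy sketch below is our own consistency check, not part of the derivation; since only low-order derivatives of \(n\) enter through \(\mathcal{O}(a^{4})\), a generic quartic density profile with free coefficients suffices.

```python
import sympy as sp

# Minimal symbolic check of Eq. (39): expand the nonlocal contact factor in the
# rod length a and compare with the local factor g(x) = 1/(1 - a n(x)).
x, a, y = sp.symbols('x a y', real=True)
c = sp.symbols('c0:5', real=True)
n = lambda z: sum(ck * z**k for k, ck in enumerate(c))   # generic smooth profile

window = sp.integrate(n(y), (y, x - a/2, x + a/2))       # int_{x-a/2}^{x+a/2} n(y) dy
g2 = 1 / (1 - window)                                     # nonlocal contact factor
g_local = 1 / (1 - a * n(x))                              # local Enskog factor g(x)

lhs = sp.series(g2 - g_local, a, 0, 5).removeO()
rhs = sp.series(a**3 * sp.diff(n(x), x, 2) / (24 * (1 - a * n(x))**2),
                a, 0, 5).removeO()
print(sp.simplify(lhs - rhs))                             # -> 0, confirming Eq. (39)
```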
This is surprising given that the latter was argued to recover known exact results for the hard-rod gas [24; 25] near the Tonks-Girardeau limit of the Lieb-Liniger model [26]. We expect that this discrepancy has its origin in the Enskog-Van Beijeren-Ernst form of the molecular chaos assumption Eq. (27), which imposes a stringent notion of local equilibrium that does not necessarily hold in the earlier analytical derivations [24; 25]. As noted above, numerical simulations are consistent with the dynamical breakdown of local equilibrium for unconfined hard rods [5]. Similarly, numerical evidence for hard rods in a harmonic trap suggests that a microscopic breakdown of ergodicity [6] is responsible for the failure of long-term thermalization [5; 6]. At the same time, it seems unlikely that there is another choice of single-particle collision integral in Eq. (29) that exactly preserves the thermal state of Eqs. (20) and (21). To summarize, we believe that the single-particle kinetic equation Eq. (29) is valid provided the assumption of local equilibrium holds, and that this assumption can break down both in the absence of a trap and in the presence of a harmonic trap. The broader lesson from our analysis is that single-particle kinetic theory within a trap must assume some notion of local equilibrium, and will only yield accurate predictions at long times (for example, regarding the presence or absence of thermalization [11]) if this notion of local equilibrium is preserved under time evolution within the trap in question. From this viewpoint, the question of why some trapping potentials appear to sustain local equilibrium better than other trapping potentials [6] lies beyond hydrodynamics, and demands a more microscopic explanation. An important open question is whether any of various proposed microscopic mechanisms [5; 7; 11; 18] for thermalization within a trap can explain these discrepancies. ## V Acknowledgements V.B.B. was supported by a fellowship at the Princeton Center for Theoretical Science during part of the completion of this work and thanks X. Cao, A. Dhar, F. Essler, D.A. Huse, M. Kulkarni and J.E. Moore for helpful discussions. ## Appendix A The exact contact pair correlation function In this Appendix, we derive the exact contact pair correlation function for trapped hard rods in thermal equilibrium using the method of Percus [22], who did not explicitly consider this quantity. To motivate Percus's method, which requires working in the grand canonical ensemble, it is helpful to first attempt the same derivation in the canonical ensemble. ### Canonical ensemble Consider \(N\) hard rods on an infinite line in an external trapping potential \(V(x)\) in thermal equilibrium at inverse temperature \(\beta\). The canonical partition function can be written as \[Z=Z_{k,N}Z_{N} \tag{41}\] where the "kinetic" partition function \[Z_{k,N}=\prod_{i=1}^{N}\int_{-\infty}^{\infty}dp_{i}\,e^{-\beta p_{i}^{2}/2}= \left(\frac{2\pi}{\beta}\right)^{N/2} \tag{42}\] and the "configurational" partition function \[Z_{N} = \int_{-\infty}^{\infty}dx_{1}\int_{x_{1}+a}^{\infty}dx_{2}\ldots \int_{x_{N-1}+a}^{\infty}dx_{N}e^{-\beta\sum_{j=1}^{N}V(x_{j})} \tag{43}\] \[= \int_{-\infty}^{\infty}dx_{N}\int_{-\infty}^{x_{N}-a}dx_{N-1} \ldots\int_{-\infty}^{x_{2}-a}dx_{1}e^{-\beta\sum_{j=1}^{N}V(x_{j})} \tag{44}\] can be written in two different ways depending on whether the rods are enumerated from left to right or right to left respectively. 
Then the one-particle density can be written as \[n(x)=e^{-\beta V(x)}\frac{1}{Z_{N}}\sum_{M=0}^{N-1}Z_{M}(-\infty,x)Z_{N-1-M}(x,\infty) \tag{45}\] where the sum is over all possible arrangements of \(M\) rods to the left and \(N-1-M\) rods to the right of the rod centred at \(x\) and the one-sided partition functions \[Z_{M}(-\infty,x)=\int_{-\infty}^{x-a}dx_{M}\int_{-\infty}^{x_{M}-a}dx_{M-1}\ldots \int_{-\infty}^{x_{2}-a}dx_{1}\,e^{-\beta\sum_{j=1}^{M}V(x_{j})} \tag{10}\] and \[Z_{M}(x,\infty)=\int_{x+a}^{\infty}dx_{1}\int_{x_{1}+a}^{\infty}dx_{2}\ldots \int_{x_{M-1}+a}^{\infty}dx_{M}\,e^{-\beta\sum_{j=1}^{M}V(x_{j})} \tag{11}\] weight these arrangements appropriately, with the convention that \(Z_{0}(-\infty,x)=Z_{0}(x,\infty)=1\). Similarly, the contact two-particle distribution function can be written as \[n^{(2)}(x,x+a)=e^{-\beta V(x)}e^{-\beta V(x+a)}\frac{1}{Z_{N}}\sum_{M=0}^{N-2} Z_{M}(-\infty,x)Z_{N-2-M}(x+a,\infty). \tag{12}\] In general, these expressions are difficult to extract predictions from as \(N\to\infty\), but their form is suggestive of multiplying power series with coefficients \(Z_{M}\). This in turn suggests that dramatic simplifications could occur in the grand canonical ensemble, which indeed turns out to be the case. ### Grand canonical ensemble Let us now consider the grand canonical partition for hard rods on an infinite line, following Percus [22]. This can be written as \[\Xi=\sum_{N=0}^{\infty}z^{N}Z_{k,N}Z_{N}. \tag{13}\] where the fugacity \(z=e^{\beta\mu}\). Introducing an effective fugacity \[\tilde{z}=z\sqrt{\frac{2\pi}{\beta}}, \tag{14}\] we can write this as \[\Xi=\sum_{N=0}^{\infty}\tilde{z}^{N}Z_{N}. \tag{15}\] Then, introducing the one-sided grand canonical partition functions \[\Xi(-\infty,x)=\sum_{N=0}^{\infty}\tilde{z}^{N}Z_{N}(-\infty,x) \tag{16}\] and \[\Xi(x,\infty)=\sum_{N=0}^{\infty}\tilde{z}^{N}Z_{N}(x,\infty), \tag{17}\] we can write the one-particle density as [22] \[n(x)=\frac{1}{\Xi}\tilde{z}e^{-\beta V(x)}\Xi(-\infty,x)\Xi(x,\infty) \tag{18}\] and the contact two-particle distribution function as \[n^{(2)}(x,x+a)=\frac{1}{\Xi}\tilde{z}^{2}e^{-\beta V(x)}e^{-\beta V(x+a)}\Xi( -\infty,x)\Xi(x+a,\infty). \tag{19}\] This implies a simple formula for the contact pair correlation function, namely \[g^{(2)}(x,x+a)=\frac{n^{(2)}(x,x+a)}{n(x)n(x+a)}=\frac{\Xi}{\Xi(-\infty,x+a)\Xi( x,\infty)}. \tag{51}\] Then, using the formula [22] \[\Xi(-\infty,x+a)\Xi(x,\infty)=\Xi\left(1-\int_{x}^{x+a}dy\,n(y)\right), \tag{52}\] we deduce that the exact contact pair correlation function for trapped hard rods in thermal equilibrium is given by \[g^{(2)}(x,x+a)=\frac{1}{1-\int_{x}^{x+a}dy\,n(y)}. \tag{53}\] We emphasize that this "equation of state" for the pair correlation function holds for even for generalized-Gibbs-type ensembles with an arbitrary probability distribution in momentum space (although these are not strictly stationary except in the limit that \(V(x)=0\)). For such ensembles, the configurational partition function is unchanged while the kinetic partition function is modified to \[Z_{k,N}=\prod_{i=1}^{N}\int_{-\infty}^{\infty}dp_{i}\,e^{-f(p_{i})}=\left(\int _{-\infty}^{\infty}dp\,e^{-f(p)}\right)^{N} \tag{54}\] where the function \(f\) uniquely determines the generalized Gibbs ensemble in question. In the treatment above, this change only modifies the effective fugacity \(\tilde{z}\) and therefore does not affect Eq. (53).
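To make this construction concrete, the following Python sketch (ours; the harmonic trap and all parameter values are illustrative) evaluates Percus's grand-canonical formulas numerically. Integrating out the rod nearest to \(x\) turns the one-sided partition functions into Volterra-type recursions (our rewriting, not stated explicitly above), which we iterate to convergence before checking Eq. (53) against the direct ratio Eq. (51) and against the local factor \(1/(1-an)\) that enters the unmodified Enskog equation.

```python
import numpy as np
from scipy.interpolate import interp1d

# Minimal numerical sketch of Percus's grand-canonical construction in a harmonic
# trap (illustrative parameters).  Integrating out the rod nearest to x gives the
# Volterra-type recursions (our rewriting):
#   Xi(-inf, x) = 1 + z * int_{-inf}^{x-a} e^{-beta V(y)} Xi(-inf, y) dy,
#   Xi(x, +inf) = 1 + z * int_{x+a}^{+inf} e^{-beta V(y)} Xi(y, +inf) dy,
#   Xi          = 1 + z * int e^{-beta V(y)} Xi(y, +inf) dy.
beta, mu, a = 1.0, 0.0, 0.4
ztil = np.exp(beta * mu) * np.sqrt(2.0 * np.pi / beta)   # effective fugacity
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
w = np.exp(-beta * 0.5 * x**2)                           # e^{-beta V(x)}

def cum(f):
    """Trapezoid cumulative integral of f from the left edge of the grid."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))

XiL, XiR = np.ones_like(x), np.ones_like(x)
for _ in range(500):
    CL = cum(w * XiL)                                    # int_{-inf}^{x} w*XiL
    CR = cum((w * XiR)[::-1])[::-1]                      # int_{x}^{+inf} w*XiR
    fL = interp1d(x, CL, bounds_error=False, fill_value=(0.0, CL[-1]))
    fR = interp1d(x, CR, bounds_error=False, fill_value=(CR[0], 0.0))
    XiL_new = 1.0 + ztil * fL(x - a)
    XiR_new = 1.0 + ztil * fR(x + a)
    if max(np.max(np.abs(XiL_new - XiL)), np.max(np.abs(XiR_new - XiR))) < 1e-10:
        break
    XiL, XiR = XiL_new, XiR_new

Xi = 1.0 + ztil * np.trapz(w * XiR, x)                   # full grand partition function
n = ztil * w * XiL * XiR / Xi                            # one-particle density
fXiL, fXiR, fN = interp1d(x, XiL), interp1d(x, XiR), interp1d(x, cum(n))

xs = x[(x > -5.0) & (x < 5.0 - a)]
g2_ratio = Xi / (fXiL(xs + a) * fXiR(xs))                # Eq. (51)
g2_dens = 1.0 / (1.0 - (fN(xs + a) - fN(xs)))            # Eq. (53)
g_local = 1.0 / (1.0 - a * interp1d(x, n)(xs + a / 2))   # local Enskog factor
print("Eq. (51) vs Eq. (53):", np.max(np.abs(g2_ratio - g2_dens)))
print("exact vs local Enskog factor:", np.max(np.abs(g2_dens - g_local)))
```

The first printed deviation is pure discretization error, while the second is the genuine \(\mathcal{O}(a^{3})\) nonlocal correction discussed in the main text.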
Percus's collision integral for one-dimensional hard rods fails to preserve thermal equilibrium states in integrability-breaking trapping potentials. We derive a modified Enskog equation for hard rods and show that it preserves trapped thermal states exactly. In contrast to recently proposed kinetic equations for dynamics in non-integrable traps, both our kinetic equation and its thermal states are explicitly nonlocal in space. Our equation differs from earlier proposals at third order in spatial derivatives, and this discrepancy can be traced to the choice of collision integral underlying those approaches.
2309.04793
Interpreting TSLS Estimators in Information Provision Experiments
To estimate the causal effects of beliefs on actions, researchers often run information provision experiments. We consider the causal interpretation of two-stage least squares (TSLS) estimators in these experiments. We characterize common TSLS estimators as weighted averages of causal effects, and interpret these weights under general belief updating conditions that nest parametric models from the literature. Our framework accommodates TSLS estimators for both passive and active control designs. Notably, we find that some passive control estimators allow for negative weights, which compromises their causal interpretation. We give practical guidance on such issues, and illustrate our results in two empirical applications.
Vod Vilfort, Whitney Zhang
2023-09-09T13:36:44
http://arxiv.org/abs/2309.04793v4
# Interpreting IV Estimators in Information Provision Experiments+ ###### Abstract A growing literature measures "belief effects" --that is, the causal effect of a change in beliefs on actions-- using information provision experiments, where the provision of information is used as an instrument for beliefs. In experimental designs with a passive control group, and under heterogeneous belief effects, we show that the use of information provision as an instrument may not produce a positive weighted average of belief effects. We develop an "information provision instrumental variables" (IPIV) framework that infers the direction of belief updating using information about prior beliefs. In this framework, we propose a class of IPIV estimators that recover positive weighted averages of belief effects. Relative to our preferred IPIV, commonly used specifications in the literature require additional assumptions to generate positive weights. And in the cases where these additional assumptions are satisfied, the identified parameters often up-weight individuals with priors that are further from the provided information, which may not be desirable. Introduction There is a growing experimental literature that seeks to estimate the causal effects of beliefs on actions. These "belief effects" are relevant for many economic applications. For example, Deshpande and Dizon-Ross (2023) evaluate the effect of parents' expectations of government benefits on their investments in human capital for their children, Jager et al. (2023) evaluate the effect of workers' beliefs about outside job options on their job search behaviors, Cullen and Perez-Truglia (2022) evaluate the effect of workers' beliefs about manager and coworker salaries on their effort, Coibion et al. (2022) evaluate the effect of households' inflation expectations on their spending decisions, and Roth and Wohlfart (2020) evaluate the effect of individuals' recession expectations on their consumption and stock market investments. These papers estimate belief effects through information provision experiments. Broadly, the literature considers two types of experimental design, a passive control group design and an active control group design. The passive control group design is generally as follows. First, prior beliefs (e.g. one's expectation of inflation, one's belief about the salary of their manager) are elicited. Second, subjects are split into a control group and a treatment group. The control group receives no information. The treatment group receives a signal, generally a prediction or measurement of the same quantitative value (eg. an analyst's inflation forecast, the salary of one's manager). Then, posterior beliefs are elicited. Finally, participants' actions are observed (eg. prices set, hours at work). Some papers consider variants of this design with multiple treatment arms, each of which is provided with different information (Coibion et al., 2021, 2022; Kumar et al., 2023). The active control group design is the same, except a signal (differing from the signal provided to the treatment group) is also provided to the control group (Akersson et al., 2020; Bottan and Perez-Truglia, 2020; Link et al., 2023; Roth et al., 2022; Settele, 2022). For example, Roth and Wohlfart (2020) provide two different recession forecasts to the treatment and control groups. In both designs, researchers employ instrumental variables (IV). 
In particular, the estimation procedure involves a two-stage least squares regression of the action on the posterior belief, where the posterior belief is instrumented by group assignment. The set of instruments may also include interactions of group assignment with the prior, signal, or a combination of the prior and signal. These IV estimators are often interpreted as weighted averages of belief effects. In settings where causal effects are heterogeneous, the baseline IV estimator generally requires the IV monotonicity condition in order to recover average causal effects with non-negative weights (Imbens and Angrist, 1994). In active control experiments, monotonicity often seems like a reasonable assumption: it is plausible that individuals who receive a pessimistic signal would conclude a more pessimistic state than had they received an optimistic signal. But in passive control experiments, the monotonicity condition is less plausible. In particular, if individuals update their priors towards the provided signal, then the simultaneous presence of upwards-biased priors and downwards-biased priors (relative to the signal) can generate negative weights. We propose an information provision instrumental variables (IPIV) framework for estimation in passive control experiments that collect prior beliefs. Our core assumption is that individuals update their prior beliefs towards1 the provided signals. Given this assumption, we can infer the "direction" in which group assignment affects prior beliefs. In turn, our solution to negative weighting is to augment the standard IV specification with a function that accounts for this inferred direction. The resulting IPIV estimator produces a positive-weighted average of belief effects. This procedure is valid for any number of information provision groups, and so IPIV accommodates settings with multiple treatment arms, such as Coibion et al. (2022). Footnote 1: The IPIV framework nests the Bayesian learning models that motivate the existing specifications in Cullen and Perez-Truglia (2022) and Galashin et al. (2020). We use the IPIV framework to analyze existing specifications from passive control experiments. We find that many of these specifications require specific conditions on the values of the first-stage coefficients in order to recover a positive-weighted average of belief effects. Our proposed IPIV estimators are robust to such concerns. In the cases where the existing specifications recover positive-weighted averages, the weights tend to up-weight observations with priors that are further from the signal, which may not be desirable. If such weights _are_ desired, then the IPIV estimator can be generalized to recover these up-weighted parameters, without requiring specific conditions on the first-stage coefficients. Our IPIV framework is designed for information provision experiments. That said, our recognition of negative weighting in IV estimators is similar in spirit to Blandhol et al. (2022) and Sloczynski (2022). The former show that linear IV specifications can generate negative weights in settings where the IV assumptions hold conditional on exogenous covariates. In our setting, however, the IV assumptions are unconditional. Sloczynski (2022) considers a similar setting to Blandhol et al. (2022), and develops a "reordered" IV approach that recovers non-negative weights whenever a "weak monotonicity" condition is satisfied. 
While our IPIV assumption can be viewed as an instance of weak monotonicity, the availability of data on priors beliefs allows us to exactly distinguish the "compliers" from the "defiers". In our setting, the endogenous variable is also continuous, which generates additional sources of heterogeneity. The interpretation of IV estimators with continuous endogenous variables is also considered in Angrist et al. (2000), but under the assumption of monotonicity. This paper proceeds as follows. In Section 2, we provide a model of beliefs, belief effects, and information provision experiments. In Section 3, we motivate and present the IPIV estimator in a general environment. In Section 4, we apply the IPIV framework to common estimators in passive control settings with a single signal. Throughout, we illustrate our results with simulations in a simple setting with Bayesian updating. All proofs are provided in the Appendix. Model There is an i.i.d. sample of individuals \(i=1,\ldots,N\), each with a latent belief probability distribution \(F_{i}\in\Delta(\mathcal{Q})\) over a set of states \(q\in\mathcal{Q}\subseteq\mathbb{R}\). We observe the action \(Y_{i}\in\mathbb{R}\) that individual \(i\) takes under \(F_{i}\). For example, \(F_{i}\) can represent employee \(i\)'s beliefs on the average earnings of her coworkers, and \(Y_{i}\) can be the effort that she exerts when working under these beliefs, as in Cullen and Perez-Truglia (2022). The goal is to estimate the causal effect of beliefs on actions. ### Beliefs For each individual, we observe \(B_{i}:=\int q\,dF_{i}\), which is a subjective expectation of the state. We suppose that \(B_{i}\) is action-relevant in the sense that there is some choice process that maps \(b\in\mathbb{R}\) to potential actions \(Y_{i}(b)\), where \(Y_{i}\equiv Y_{i}(B_{i})\). Because the literature generally elicits only the single expectation \(B_{i}\), we simply refer to \(B_{i}\in\mathbb{R}\) as "beliefs", and to \(b\in\mathbb{R}\) as "potential beliefs". ### Target Parameter In our framework, the effect of a marginal change in beliefs for individual \(i\) is \(\partial Y_{i}(b)/\partial b\). Therefore, given a weighting function \(\omega_{i}(b)\), the average belief effect (ABE) is \[\theta(\omega):=\mathbb{E}\left[\int\frac{\partial Y_{i}(b)}{\partial b} \omega_{i}(b)\,db\right]. \tag{1}\] The weighting function \(\omega_{i}(b)\) governs the importance of \(b\) in the space of potential beliefs, and the importance of \(i\) in the distribution of individuals. ### Weighting In order for \(\theta(\omega)\) to be a proper weighted average, we require that \(\omega_{i}(b)\geq 0\). Otherwise, the ABE can be misleading. For example, in a setting where all the belief effects are positive, the presence of negative weights can generate a negative ABE. We also require that \(\int\mathbb{E}[\omega_{i}(b)]\,db=1\). Together with \(\omega_{i}(b)\geq 0\), this condition ensures that the ABE is in the convex hull of the belief effects \(\partial Y_{i}(b)/\partial b\). ### Heterogeneity Our framework places no functional form restrictions on the choice process. In particular, we permit \(Y_{i}(b)\) to be a non-parametric function of \(b\), which allows for general forms of heterogeneity (Imbens, 2007). This level of generality is standard practice in applied econometrics. Given that beliefs are continuous, there are two main sources of heterogeneity (Angrist et al., 2000). 
First, the belief effect \(\partial Y_{i}(b)/\partial b\) is generally a nonlinear curve over the space of potential beliefs \(b\). In particular, the effect of a marginal change in beliefs depends on the initial belief. The second source of heterogeneity is across individuals2. In this general environment, we show that the baseline IV estimator recovers an ABE with weights that can be negative. Footnote 2: Individual-level heterogeneity is a pervasive feature of many microeconometric applications (Heckman, 2001). In the context of information provision experiments, there is often an implicit recognition of heterogeneity. For example, Cullen and Perez-Truglia (2022) suggest that their IV estimator recovers meaningful parameters in the presence of heterogeneous effects, and Deshpande and Dizon-Ross (2023) argue that their IV estimator mitigates the negative weighting issues that can arise from heterogeneous effects. In this paper, we explicitly consider heterogeneity for information provision experiments, and demonstrate its consequences for interpreting IV estimators. ### Covariates In practice, researchers use baseline covariates \(W_{i}\) to improve estimation precision. These covariates are a subset of the control variables \(X_{i}\) that are included in a regression specification. For example, we assume that \(X_{i}\) contains a constant \(1\), and so our regressions always include a constant \(1\) term. Moreover, numerous specifications in the information provision literature control for pre-determined variables that the information provision experiment generates (see Section 2.6). ### Information Provision To estimate ABEs, researchers often use information provision experiments. In these experiments, the random assignment to information provision groups \(g_{0},\ldots,g_{K}\) generates causal variation in beliefs. If individual \(i\) is assigned to group \(g_{k}\), then she receives signal \(S_{ki}\). For example, in Deshpande and Dizon-Ross (2023), the state space \(\mathcal{Q}=[0,1]\) indexes the probability of receiving government benefits in adulthood, and there is \(K=1\) treatment group. In this context, the experiment provides a predicted probability \(S_{1i}\) to parents in group \(g_{1}\). In our framework, we suppose that the experiment collects "prior beliefs" \(P_{i}:=\int q\,dF_{i}^{P}\) before providing information, where \(F_{i}^{P}\in\Delta(\mathcal{Q})\) is a prior distribution. For example, Jager et al. (2023) collect wage expectations. This collection assumption is essentially without loss of generality, since we are free to consider specifications where \(P_{i}\) is not included in the set of control variables \(X_{i}\). We suppose that the groups are distinct in the sense that \(g_{\ell}\neq g_{k}\) implies \(\mathbb{P}(S_{\ell i}=S_{ki})=0\), which is without loss of generality. In a passive control design, the control group \(g_{0}\) does not receive information3. For analytical convenience, we set \(S_{0i}:=P_{i}\). Footnote 3: This notation also accommodates applications in which a group of individuals receive “placebo” information, as in Deshpande and Dizon-Ross (2023). The intent of such experimental designs is to safeguard against experimenter demand effects (Haaland and Roth, 2023, Section 6). 
The counterfactual beliefs are \(B_{ki}:=\int q\,dF_{ki}\), where \(F_{ki}\in\Delta(\mathcal{Q})\) depends on the signal \(S_{ki}\). Observed beliefs are \(B_{i}:=\int q\,dF_{i}\), where \[B_{i}\equiv\sum_{k=0}^{K}\mathbf{1}(G_{i}=g_{k})B_{ki},\qquad F_{i}\equiv\sum_{k=0}^{K}\mathbf{1}(G_{i}=g_{k})F_{ki}. \tag{2}\] Here \(G_{i}\) is a random variable that indicates group membership. We sometimes refer to \(B_{i}\) and \(F_{i}\) as "posterior beliefs", and to \(P_{i}\) and \(F_{i}^{P}\) as "prior beliefs". Notice that the signals are chosen as part of the experimental design, and so \(S_{ki}\) is observed for all \(g_{k}\). So far, we are agnostic as to how exactly posteriors depend on the signal and prior, as well as how the signal is drawn; this structure nests the "model averaging" relationship between priors, signals, and posteriors common in this literature (Roth and Wohlfart, 2020; Cullen and Perez-Truglia, 2022; Haaland and Roth, 2023). **Example 1** (Model Averaging).: For each \(i\) and \(g_{k}\), there exists a "learning rate" \(\alpha_{ki}\in[0,1]\) such that \[B_{ki}=\alpha_{ki}S_{ki}+(1-\alpha_{ki})P_{i}. \tag{3}\] Example 1 is satisfied, for instance, in certain Bayesian learning models with Gaussian prior distributions \(F_{i}^{P}\) and a Gaussian-distributed signal \(S_{ki}\) (Hoff, 2009, Section 5). ### Instrumental Variables Taking stock, the observed variables in the experiment are \((G_{i},Y_{i},B_{i},S_{ki},P_{i},W_{i})\). We are now prepared to state the core IV assumption. **Assumption 1** (Valid Instrument).: \(G_{i}\perp\!\!\!\perp(\{Y_{i}(b)\}_{b},\{F_{ki}\}_{k},F_{i}^{P},\{S_{ki}\}_{k},W_{i})\). Assumption 1 requires that group assignment is independent of potential actions, latent belief distributions, signals, and baseline covariates. Randomization makes independence from these pre-determined quantities plausible, and, because potential actions are indexed by beliefs alone, the assumption also embeds an exclusion restriction: assignment affects actions only through beliefs. In practice, however, information provision may increase the salience of a topic, thereby affecting the unobserved components of \(Y_{i}(b)\). These salience concerns are beyond the scope of our analysis. The IV approach also requires that the variation in group assignment generates sufficient variation in beliefs. This is the IV relevance condition. In the context of information provision experiments, this condition often necessitates that some individuals are incorrect in their beliefs, and that these individuals update their beliefs based on the signals. The exact requirements for IV relevance depend on the specification. For our future identification results, it actually suffices to assume \(G_{i}\perp\!\!\!\perp(Y_{i}(b),B_{ki},S_{ki},P_{i},W_{i})\), which is implied by Assumption 1. Nevertheless, the latent belief distribution structure is useful for exploring potential sources of negative weighting (see Section 3.2) and interpreting potential solutions (see Sections 3.4 and 4). ## 3 IPIV Framework In this section, we propose a framework for constructing and interpreting IV estimators in information provision experiments. In this analysis, we consider \[Z_{i}:=\mathbf{1}(G_{i}\neq g_{0}), \tag{4}\] \[\sigma_{\ell k,i}:=[\mathbf{1}(B_{\ell i}\leq B_{ki})-\mathbf{1}(B_{\ell i}\geq B_{ki})]\mathbf{1}(B_{\ell i}\neq B_{ki}),\] \[\mathcal{M}_{\ell k,i}:=\{B_{\ell i}<b<B_{ki}\}\cup\{B_{\ell i}>b>B_{ki}\}.\] If \(B_{\ell i}\neq B_{ki}\), then \(\sigma_{\ell k,i}\) gives the sign of \(B_{\ell i}-B_{ki}\).
For these individuals, the set \(\mathcal{M}_{\ell k,i}\) collects the potential beliefs \(b\) that are in between \(B_{\ell i}\) and \(B_{ki}\). In the program evaluation literature with \(K=1\), recall that \(\sigma_{01,i}=1\) indicates compliance and \(\sigma_{01,i}=-1\) indicates defiance (Angrist et al., 1996). For our purposes, we say that \(|\sigma_{\ell k,i}|=1\) indicates movement. In particular, \(\mathcal{M}_{\ell k,i}\) gives the set of potential beliefs \(b\) that individual \(i\) moves over when updating from \(F_{\ell i}\) to \(F_{ki}\), and vice versa. In numerous applications, we have \(K=1\). Therefore, most of our results are stated for the case of \(K=1\). However, analogous results are valid for \(K>1\), as we demonstrate in Section 3.6. ### Baseline IV **Specification 1** (Baseline IV).: The baseline IV specification is \[B_{i} =X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}+v_{i}, \tag{5}\] \[Y_{i} =X_{i}^{\prime}\gamma_{0}+\gamma_{1}\hat{B}_{i}+e_{i},\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\hat{B}_{i}\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. Let \(\tilde{Z}_{i}:=Z_{i}-\mathbb{L}[Z_{i}|X_{i}]\), where \(\mathbb{L}[Z_{i}|X_{i}]\) is a regression of \(Z_{i}\) on \(X_{i}\). **Theorem 1**.: _Consider Specification 1 for \(K=1\). If Assumption 1 is satisfied, \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) is full-rank, \(\operatorname{var}(\tilde{Z}_{i})>0\), and \(\mathbb{E}[\tilde{Z}_{i}B_{i}]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{\sigma_{01,i}\mathbf{1}(b\in\mathcal{M}_{01,i})}{\int \mathbb{E}[\sigma_{01,i}\mathbf{1}(b\in\mathcal{M}_{01,i})]\,db}. \tag{6}\] Before we interpret the baseline IV estimator, we first discuss the technical assumptions in Theorem 1. It is worthwhile to have this discussion at least once, since we make similar assumptions in future specifications. To start, the rank condition on \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) and \(\operatorname{var}(\tilde{Z}_{i})>0\) ensure that the first-stage coefficients are identified. The former requires that there is sufficient variation in the control variables. Given this variation, the latter requires that there remains sufficient variation in group membership. These types of assumptions are straightforward to evaluate in practice, and so we take them as given. The assumption that \(\mathbb{E}[\tilde{Z}_{i}B_{i}]\neq 0\) is a sufficient IV relevance condition for Specification 1. The intuition is that the (remaining) variation in group membership generates variation in beliefs. We now interpret the baseline IV estimator. First, observe that the coefficient of interest recovers an ABE as defined in (1), with weights that depend on the set \(\mathcal{M}_{01,i}\). Inspecting the definition of \(\mathcal{M}_{01,i}\) in (4), we see that the exogenous group assignment in Assumption 1 generates the counterfactual of moving individuals from latent belief distribution \(F_{0i}\) to latent belief distribution \(F_{1i}\). The length of this move depends on the impact of the signals. For a given individual \(i\), this counterfactual change in beliefs affects choices \(Y_{i}(b)\). The ABE summarizes the impact of this change by averaging (integrating) the belief effect curve \(\partial Y_{i}(b)/\partial b\) over the set of potential beliefs \(b\) that are in between \(B_{0i}\) and \(B_{1i}\). The ABE then averages this process over all individuals \(i\). 
Notice that, conditional on the identification of \(\gamma_{1}\), the choice of control variables \(X_{i}\) does not affect the identified parameter \(\theta(\omega)\). This is another consequence of Assumption 1. That said, the choice of control variables may affect the variance of the IV estimator. ### Negative Weighting The above interpretations are generally valid across IV specifications with continuous endogenous variables (Angrist et al., 2000). The core similarity is the presence of \(\mathcal{M}_{\ell k,i}\). The point of departure, however, is in the additional components of the ABE weighting function. For example, the weights in Specification 1 are functions of \(\sigma_{01,i}\). If some individuals have \(\sigma_{01,i}=1\) while others have \(\sigma_{01,i}=-1\), then some of these weights will be negative. As noted, the presence of negative weights can produce misleading results. However, if \(\sigma_{01,i}\geq 0\) for all \(i\), or if \(\sigma_{01,i}\leq 0\) for all \(i\), then all the weights are non-negative. The latter scenarios are instances where the IV monotonicity condition is satisfied (cf. Angrist et al. (2000)). The design of the information provision experiment will dictate \(\sigma_{01,i}\). In passive control designs, some individuals will have priors that are less than the signal (under-estimators) and some individuals will have priors that are greater than the signal (over-estimators). If we assume that individuals move towards the signal, then under-estimators will have \(\sigma_{01,i}=1\) and over-estimators will have \(\sigma_{01,i}=-1\), violating monotonicity. In contrast, in active control designs, as long as \(S_{0i}\leq S_{1i}\) implies \(B_{0i}\leq B_{1i}\) and \(S_{0i}\geq S_{1i}\) implies \(B_{0i}\geq B_{1i}\), then monotonicity is satisfied5. However, active control designs are not always feasible. There may be settings in which providing two signals may be considered deceptive, or in which there are not two sufficiently different truthful signals to provide enough power. For example, population estimates from any source are likely very similar. In these settings, a passive control design is needed. Furthermore, passive control designs are prevalent in experiments that estimate belief effects (Deshpande and Dizon-Ross, 2023; Cullen and Perez-Truglia, 2022; Jager et al., 2023; Coibion et al., 2020, 2022). We focus the remainder of the paper on passive control designs. Footnote 5: In future work, we plan to formally consider this case. Consider a passive control baseline IV estimator applied to Example 1. Since \(S_{0i}:=P_{i}\), then (3) implies \(B_{0i}=P_{i}\). Therefore, \(P_{i}\leq S_{ki}\) implies \(B_{0i}\leq B_{ki}\), and \(P_{i}\geq S_{ki}\) implies \(B_{0i}\geq B_{ki}\). For individuals with priors \(P_{i}\) that are less than the signal \(S_{ki}\) (downward-biased priors), we have \(\sigma_{0k,i}\geq 0\). For individuals with priors \(P_{i}\) that are greater than the signal \(S_{ki}\) (upward-biased priors), we have \(\sigma_{0k,i}\leq 0\). Thus, the simultaneous presence of under-estimators and over-estimators violates IV monotonicity. Therefore, in this simple example, the baseline IV estimator produces an ABE that is contaminated by negative weights. ### Simulations We present simulations of a toy model based on Example 1. For simplicity, instead of heterogeneous learning rates, we use a single homogeneous learning rate \(\alpha_{1i}=0.5\) and include additive noise \(\varepsilon_{i}\). 
This toy model satisfies Assumption 1, and the upcoming IPIV Assumption 2. We generate \(N=10,000\) observations with parameters as follows. Throughout the rest of the paper, we provide simulated estimates of \(\gamma_{1}\) and illustrations of \(\omega_{i}(b)\). \[\begin{split}& P_{i}\sim U[0,10]\\ & S_{1i}=6\\ & Z_{i}\sim\text{Bernoulli}(0.5)\\ & B_{i}=\begin{cases}P_{i}+\varepsilon_{i}&\text{if }Z_{i}=0\\ 0.5P_{i}+0.5S_{1i}+\varepsilon_{i}&\text{if }Z_{i}=1\end{cases}\\ & Y_{i}(b)=b*P_{i}+\varepsilon_{i},\quad\varepsilon_{i}\sim\text{Normal}(0,1) \end{split} \tag{7}\] The belief effect for each individual is \(\partial Y_{i}(b)/\partial b=P_{i}\), which is positive and constant over the space of beliefs \(b\in\mathbb{R}\). Yet, the baseline IV specification produces -2.41 (s.e. 1.88), a statistically significant negative estimate. Figure 1 displays the weight \(\omega_{i}(b)\) for a given \(b\) and prior error \(P_{i}-S_{1i}\). There are three values in the plot. The turquoise region has \(\omega_{i}(b)=0\), the yellow region has \(\omega_{i}(b)=0.0019\), and the purple region has \(\omega_{i}(b)=-0.0019\). All individuals with upward-biased priors (\(P_{i}-S_{1i}>0\)) have negative weights, leading to an overall negative average. ### Ipiv In this section, we propose a class of IV specifications for passive control information provision experiments. The goal is to recover an ABE with non-negative weights. Given the results of Specification 1, a natural baseline target is \(\theta(\omega)\) with weighting function \[\omega_{i}(b)=\frac{\mathbf{1}(b\in\mathcal{M}_{01,i})}{\int\mathbb{E}[ \mathbf{1}(b\in\mathcal{M}_{01,i})]\,db}. \tag{8}\] In comparison to the weighting function from Specification 1, there is no contamination from \(\sigma_{01,i}\). In particular, while \(\omega_{i}(b)\) acknowledges the movement in belief, it remains agnostic to the direction \(\sigma_{01,i}\) of this movement. **Assumption 2** (Belief Updating).: \(P_{i}\leq S_{ki}\) _implies \(B_{0i}\leq B_{ki}\) and \(P_{i}\geq S_{ki}\) implies \(B_{0i}\geq B_{ki}\)._ **Remark 1** (Testable Implications).: Notice that Assumption 2 is satisfied in Example 1. But even if we do not want to impose the structure from Example 1, we can still test Assumption 2. Indeed, given Assumption 1, \(\mathbb{P}(B_{i}\neq P_{i}|G_{i}=g_{0})=\mathbb{P}(B_{0i}\neq P_{i})\). Therefore, if \(\mathbb{P}(B_{i}\neq P_{i}|G_{i}=g_{0})=0\), then Figure 1: IV weights we can use the observed priors \(P_{i}\) as surrogates for the counterfactual \(B_{0i}\). In turn, \[\begin{split}\mathbb{P}(\mathbf{1}(P_{i}\leq S_{ki})\mathbf{1}(P_{ i}>B_{i})|G_{i}=g_{k})&=\mathbb{P}(\mathbf{1}(P_{i}\leq S_{ki}) \mathbf{1}(P_{i}>B_{ki}))\\ &=\mathbb{P}(\mathbf{1}(P_{i}\leq S_{ki})\mathbf{1}(B_{0i}>B_{ki} )).\end{split} \tag{9}\] Assumption 2 implies \(\mathbb{P}(\mathbf{1}(P_{i}\leq S_{ki})\mathbf{1}(P_{i}>B_{i})|G_{i}=g_{k})=0\), which is a testable implication. Analogously, one can for test whether \(P_{i}\geq S_{ki}\) implies \(B_{0i}\geq B_{ki}\). 
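The mechanics of this comparison can be reproduced in a few lines. The Python sketch below is ours (the seed, sample size, and choice of controls are illustrative): it draws data from (7), runs the baseline IV of Specification 1, and then applies the direction correction \(\tilde{\sigma}_{i}=\mathrm{sign}(S_{1i}-P_{i})\) that Specification 2 below formalizes with \(\lambda_{i}=1\). The baseline estimate is negative even though every individual effect \(P_{i}\) is positive, while the corrected estimate is a positive weighted average close to 4.

```python
import numpy as np

# Replication-style sketch of the toy design in (7) (illustrative seed and controls).
rng = np.random.default_rng(0)
N = 10_000
P = rng.uniform(0, 10, N)                    # priors; the belief effect is dY/db = P_i > 0
S1 = np.full(N, 6.0)                         # provided signal
Z = rng.integers(0, 2, N).astype(float)      # treatment-group assignment
B = np.where(Z == 1, 0.5 * P + 0.5 * S1, P) + rng.normal(0, 1, N)
# (alpha = 0.5 is the Gaussian-learning case of Example 1 with equal prior and signal precisions)
Y = B * P + rng.normal(0, 1, N)

def tsls(outcome, endog, instrument, controls):
    """2SLS of outcome on [controls, endog] using instruments [controls, instrument]."""
    R = np.column_stack([controls, endog])
    W = np.column_stack([controls, instrument])
    R_hat = W @ np.linalg.lstsq(W, R, rcond=None)[0]     # first-stage fitted values
    return np.linalg.lstsq(R_hat, outcome, rcond=None)[0][-1]

X = np.column_stack([np.ones(N), P])         # controls: constant and prior

# Baseline IV (Specification 1): negative (about -3.3 in expectation under this
# design), because over-estimators (P_i > S_1i) enter with negative weights.
print("baseline IV :", round(tsls(Y, B, Z, X), 2))

# Direction-corrected IV (Specification 2 below, with lambda_i = 1): positive,
# close to 4, a convex average of the effects P_i over the beliefs moved across.
sigma_tilde = np.sign(S1 - P)
print("IPIV        :", round(tsls(sigma_tilde * Y, sigma_tilde * B, Z, X), 2))
```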
Given Assumption 2, we consider the functions \[\begin{split}\tilde{\sigma}_{i}:=\sum_{k=1}^{K}[q_{k}\mathbf{1}( G_{i}=g_{0})+\mathbf{1}(G_{i}=g_{k})]\tilde{\sigma}_{0k,i},& \tilde{\sigma}_{0k,i}:=\mathbf{1}(P_{i}\leq S_{ki})-\mathbf{1}(P_{i}\geq S_ {ki}),\\ q_{k}:=\frac{\mathbb{P}(G_{i}=g_{k})}{\mathbb{P}(G_{i}\neq g_{0} )}.\end{split} \tag{10}\] **Lemma 1**.: _If Assumption 2 is satisfied, then \(\tilde{\sigma}_{0k,i}\sigma_{0k,i}=\mathbf{1}(B_{0i}\neq B_{ki})\geq 0\)._ Our IPIV approach is to use these functions to correct for negative weighting. Most of our results are stated for \(K=1\), since that is the most common setting. In such cases, \(\tilde{\sigma}_{i}=\tilde{\sigma}_{01,i}\). That said, IPIV generalizes to \(K>1\), as we show in Section 3.6. **Specification 2** (IPIV).: Consider \(\lambda_{i}:=\lambda(S_{ki},P_{i},W_{i})\geq 0\) and \(\Lambda_{i}(u)=\lambda_{i}\tilde{\sigma}_{i}u\). The IPIV specification is \[\begin{split}\Lambda_{i}(B_{i})&=X_{i}^{\prime} \pi_{0}+\pi_{1}Z_{i}+v_{i},\\ \Lambda_{i}(Y_{i})&=X_{i}^{\prime}\gamma_{0}+\gamma_ {1}\hat{\Lambda}_{i}(B_{i})+e_{i},\end{split} \tag{11}\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\hat{\Lambda}_{i}(B_{i})\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. Let \(\tilde{Z}_{i}:=Z_{i}-\mathbb{L}[Z_{i}|X_{i}]\), where \(\mathbb{L}[Z_{i}|X_{i}]\) is a regression of \(Z_{i}\) on \(X_{i}\). **Theorem 2**.: _Consider Specification 2 for \(K=1\). If Assumptions 1 and 2 are satisfied, \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) is full-rank, \(\mathrm{var}(\tilde{Z}_{i})>0\), and \(\mathbb{E}[\tilde{Z}_{i}\Lambda_{i}(B_{i})]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}}{\int\mathbb{ E}[\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}]\,db}. \tag{12}\] For each \(\lambda(S_{ki},P_{i},W_{i})\geq 0\) such that the conditions of Theorem 2 are satisfied, we recover an ABE with convex weights. Notice that \(\lambda_{i}\) governs the importance that \(\theta(\omega)\) places on individuals with specific values of \((S_{ki},P_{i},W_{i})\). For \(\lambda_{i}=1\), the IPIV specification recovers an ABE with weighting function (8). In this case, the ABE places importance in accordance with the joint distribution of \((S_{ki},P_{i},W_{i})\). In Section 4, we show that other choices of \(\lambda_{i}\) link our IPIV approach to existing IV specifications that interact group membership with control variables. Using the simulated data, the IPIV estimate of \(\gamma_{1}\) is \(3.89\) (s.e. \(0.276\)) -- a significant positive effect. Figure 2 shows that weights over all individuals and over all beliefs are positive. The positive weights all take the same value. Figure 3 shows that like IV, IPIV has greater weight in regions where more individuals move across a given belief. For example, based on the distribution of priors, more individuals will have \(b=3\in[B_{0i},B_{1i}]\) than \(b=1\in[B_{0i},B_{1i}]\). ### Practical Implications Assumption 2 is an essential component of Theorem 2. As we noted in Remark 1, this assumption is testable. Furthermore, Assumption 2 has practical consequences for the design of information provision experiments. Notably, it requires that the experiment to collect prior beliefs. This is common practice in many experiments, including Coibion et al. (2022); Deshpande and Dizon-Ross (2023); Cullen and Perez-Truglia (2022); Jager et al. (2023). 
Second, Assumption 2 requires that the experiment provides signals for which individuals will update their priors in the direction of those signals. This seems natural in settings where the content of the signal is the same as the content of the priors (e.g. a prior belief for the average coworker wage and a signal of the true average coworker wage, as in Cullen and Perez-Truglia (2022)). But if the content of the signal is dissimilar to the content of the prior, then Assumption 2 is more tenuous. For example, one of the treatment arms in Coibion et al. (2022) elicits beliefs about the inflation, but provides a signal of recent unemployment. It is less clear how to proceed in such settings. Assumption 2 is also vulnerable to potential behavioral biases. For example, if some individuals have motivated beliefs or distrust the experimenter, then their priors may update in the "opposite" direction of the signals. We speculate that the testing strategy in Remark 1 would be useful for interrogating such biases. Figure 2: IPIV weights Figure 3: IPIV weights averaged over \(i\) ### IPIV Extensions There is a natural extension of IPIV to "belief elasticities" \(\{b/Y_{i}(b)\}\partial Y_{i}(b)/\partial b\), wherein potential actions \(Y_{i}(b)\) and potential beliefs \(b\) are positive. **Specification 3** (IPIV in Logs).: Consider \(\lambda_{i}:=\lambda(S_{ki},P_{i},W_{i})\geq 0\) and \(\Lambda_{i}(u)=\lambda_{i}\tilde{\sigma}_{i}\log(u)\). The IPIV specification in logs is \[\Lambda_{i}(B_{i}) =X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}+v_{i}, \tag{13}\] \[\Lambda_{i}(Y_{i}) =X_{i}^{\prime}\gamma_{0}+\gamma_{1}\hat{\Lambda}_{i}(B_{i})+e_{ i},\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\hat{\Lambda}_{i}(B_{i})\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. Let \(\tilde{Z}_{i}:=Z_{i}-\mathbb{L}[Z_{i}|X_{i}]\), where \(\mathbb{L}[Z_{i}|X_{i}]\) is a regression of \(Z_{i}\) on \(X_{i}\). **Theorem 3**.: _Consider Specification 3 for \(K=1\). If Assumptions 1 and 2 are satisfied, \(Y_{i}(b),b>0\), \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) is full-rank, \(\operatorname{var}(\tilde{Z}_{i})>0\), and \(\mathbb{E}[\tilde{Z}_{i}\Lambda_{i}(B_{i})]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{b}{Y_{i}(b)}*\frac{b^{-1}\mathbf{1}(b\in\mathcal{M}_{01,i })\lambda_{i}}{\int b^{-1}\mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{01,i}) \lambda_{i}]\,db}. \tag{14}\] In Theorem 3, the weighting function satisfies \(\int\mathbb{E}[\{b/Y_{i}(b)\}^{-1}\omega_{i}(b)]\,db=1\), which is appropriate for settings with belief elasticities. In particular, \[\theta(\omega)=\mathbb{E}\left[\int\frac{b}{Y_{i}(b)}\frac{\partial Y_{i}(b)} {\partial b}\tilde{\omega}_{i}(b)\,db\right],\quad\tilde{\omega}_{i}(b):=\frac {b^{-1}\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}}{\int b^{-1} \mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}]\,db}, \tag{15}\] where \(\tilde{\omega}_{i}(b)\equiv\{b/Y_{i}(b)\}^{-1}\omega_{i}(b)\) gives convex weights. In such cases, the parameter \(\theta(\omega)\) is unit-free, which facilitates comparisons across applications. For a related discussion, see Haaland and Roth (2023, Section 8). IPIV accommodates settings where \(K>1\), such as Coibion et al. (2021), Coibion et al. (2022), Kumar et al. (2023), and others. **Theorem 4**.: _Consider Specification 2. 
If Assumptions 1 and 2 are satisfied, \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) is full-rank, \(\operatorname{var}(\tilde{Z}_{i})>0\), and \(\mathbb{E}[\tilde{Z}_{i}\Lambda_{i}(B_{i})]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{\sum_{k=1}^{K}q_{k}\mathbf{1}(b\in\mathcal{M}_{0k,i}) \lambda_{i}}{\sum_{k=1}^{K}q_{k}\int\mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{0k,i})\lambda_{i}]\,db}. \tag{16}\] Theorem 4 shows that Specification 2 is valid for \(K>1\). In particular, we now recover a convex average of group specific ABEs. Indeed, we have \[\gamma_{1}=\sum_{k=1}^{K}\psi_{k}\theta(\omega_{k}),\quad\omega_{ki}(b)=\frac{ \mathbf{1}(b\in\mathcal{M}_{0k,i})\lambda_{i}}{\int\mathbb{E}[\mathbf{1}(b\in \mathcal{M}_{0k,i})\lambda_{i}]\,db},\quad\psi_{k}:=\frac{q_{k} \int\mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{0k,i})\lambda_{i}]\,db}{\sum_{k=1}^ {K}q_{k}\int\mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{0k,i})\lambda_{i}]\,db}. \tag{17}\] For each \(\theta(\omega_{k})\), the counterfactual comparison for individual \(i\) is between her posterior beliefs when assigned to group \(g_{0}\) versus group \(g_{k}\). ## 4 Linking IPIV to Existing Specifications Our IPIV approach is related to existing specifications that interact group membership with control variables. For each of these specifications, there are conditions on the first-stage coefficients such that IPIV with some function \(\lambda_{i}:=\lambda(S_{ki},P_{i},W_{i})\) estimates the same ABE. We give results for these various specifications, along with practical suggestions. ### Prior and Signal Interactions **Specification 4** (Prior and Signal Interactions).: This specification, as in Deshpande and Dizon-Ross (2023, Section 4), takes the form \[\begin{split} B_{i}&=X_{i}^{\prime}\pi_{0}+\pi_{1} Z_{i}+\pi_{2}Z_{i}P_{i}+\pi_{3}Z_{i}S_{1i}+v_{i},\\ Y_{i}&=X_{i}^{\prime}\gamma_{0}+\gamma_{1}\hat{B}_ {i}+e_{i},\end{split} \tag{18}\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\hat{B}_{i}\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}+\pi_{2}Z_{i}P_{i}+\pi_{3} Z_{i}S_{1i}\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. Denote \(m_{i}(Z_{i}):=(Z_{i},Z_{i}P_{i},Z_{i}S_{1i})^{\prime}\) and \(\tilde{m}_{i}(Z_{i})=m_{i}(Z_{i})-\mathbb{L}[m_{i}(Z_{i})|X_{i}]\), where \(\mathbb{L}[m_{i}(Z_{i})|X_{i}]\) is a regression of \(m_{i}(Z_{i})\) on \(X_{i}\). **Theorem 5**.: _Consider Specification 4 for \(K=1\). If Assumption 1 is satisfied, \(P_{i}\) and \(S_{1i}\) are included in the set of control variables \(X_{i}\), \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})\tilde{m}_{i}(Z_{i})^{\prime}]\) are full-rank, and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})B_{i}]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{\sigma_{01,i}\mathbf{1}(b\in\mathcal{M}_{01,i})(\pi_{1}+ \pi_{2}P_{i}+\pi_{3}S_{1i})}{\int\mathbb{E}[\sigma_{01,i}\mathbf{1}(b\in \mathcal{M}_{01,i})(\pi_{1}+\pi_{2}P_{i}+\pi_{3}S_{1i})]\,db}. \tag{19}\] _Moreover, if Assumption 2 is satisfied, \(\pi_{1}=0\), and \(\pi_{2}=-\pi_{3}\), then_ \[\omega_{i}(b)=\frac{\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}}{\int\mathbb{ E}[\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}]\,db},\quad\lambda_{i}=|S_{1i}-P_{i}|. \tag{20}\] Theorem 5 shows that Specification 4 recovers an ABE with non-negative weights, provided that \(\pi_{1}=0\) and \(\pi_{2}=-\pi_{3}\). 
If the first-stage coefficients attain these values, then Specification 4 recovers an IPIV parameter with \(\lambda_{i}=|S_{1i}-P_{i}|\). This means that individuals with larger "prior errors" in their beliefs are up-weighted relative to their share in the distribution of individuals. If we have reason to put greater focus on the actions of "misinformed" individuals, then this up-weighting feature is appealing. One can compute the first stage and inspect whether the sufficient condition for Specification 4 of \(\pi_{1}=0\) and \(\pi_{2}=-\pi_{3}\) is satisfied. One setting where the first-stage coefficients attain these is Example 1, with \(\alpha_{ki}\) independent of priors and signals. **Theorem 6**.: _In the context of Example 1, consider the first-stage in Specification 4 for \(K=1\). If Assumption 1 is satisfied, \(X_{i}=(1,P_{i},S_{1i})^{\prime}\), \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})\tilde{m}_{i}(Z_{i})^{\prime}]\) are full-rank, and \(\alpha_{1i}\bot\!\!\!\bot(F_{i}^{P},S_{1i})\), then the first-stage coefficients are identified such that \(\pi_{1}=0\) and \(\pi_{2}=-\pi_{3}\)._ That said, in general settings, it seems implausible that \(\alpha_{ki}\) would be independent of priors and signals. For example, if there is individual heterogeneity across prior distributions \(F_{i}^{P}\), and \(\alpha_{ki}\) depends on the variance induced by \(F_{i}^{P}\), then \(P_{i}:=\int q\,dF_{i}^{P}\) is likely correlated with \(\alpha_{ki}\). Moreover, it seems that the inclusion of covariates \(W_{i}\) would complicate the designation of primitive conditions for \(\pi_{1}=0\) and \(\pi_{2}=-\pi_{3}\). Given these various concerns, we caution against using Specification 4. Instead, if the goal is an ABE with weighting function (20), then we recommend using IPIV with \(\lambda_{i}=|S_{1i}-P_{i}|\). ### Prior Error Interaction **Specification 5** (Prior Error Interaction in Logs).: This specification, as in Jager et al. (2023, Section 4.4), takes the form \[\begin{split}\log(B_{i})&=X_{i}^{\prime}\pi_{0}+ \pi_{1}Z_{i}+\pi_{2}Z_{i}\log(P_{i}/S_{1i})+v_{i},\\ \log(Y_{i})&=X_{i}^{\prime}\gamma_{0}+\gamma_{1} \mathrm{\widetilde{og}}(B_{i})+e_{i},\end{split} \tag{21}\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\mathrm{\widetilde{og}}(B_{i})\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}+\pi_{2 }Z_{i}\log(P_{i}/S_{1i})\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. Let \(m_{i}(Z_{i}):=(Z_{i},Z_{i}\log(P_{i}/S_{1i}))^{\prime}\) and \(\tilde{m}_{i}(Z_{i})=m_{i}(Z_{i})-\mathbb{L}[m_{i}(Z_{i})|X_{i}]\), where \(\mathbb{L}[m_{i}(Z_{i})|X_{i}]\) is a regression of \(m_{i}(Z_{i})\) on \(X_{i}\). **Theorem 7**.: _Consider Specification 5 for \(K=1\). If Assumption 1 is satisfied, \(\log(P_{i}/S_{1i})\) is included in \(X_{i}\), \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})\tilde{m}_{i}(Z_{i})^{\prime}]\) are full-rank, and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})\log(B_{i})]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{b}{Y_{i}(b)}*\frac{b^{-1}\sigma_{01,i}\mathbf{1}(b\in \mathcal{M}_{01,i})(\pi_{1}+\pi_{2}\log(P_{i}/S_{1i}))}{\int b^{-1} \mathbb{E}[\sigma_{01,i}\mathbf{1}(b\in\mathcal{M}_{01,i})(\pi_{1}+\pi_{2} \log(P_{i}/S_{1i}))]\,db}. 
\tag{22}\] _Moreover, if Assumption 2 is satisfied and \(\pi_{1}=0\), then_ \[\omega_{i}(b)=\frac{b}{Y_{i}(b)}*\frac{b^{-1}\mathbf{1}(b\in \mathcal{M}_{01,i})\lambda_{i}}{\int b^{-1}\mathbb{E}[ \mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}]\,db},\quad\lambda_{i}=|\log(P_ {i}/S_{1i})|. \tag{23}\] Theorem 7 shows that Specification 5 recovers an ABE with non-negative weights, provided that \(\pi_{1}=0\). In particular, if \(\pi_{1}=0\), then \[\theta(\omega)=\mathbb{E}\left[\int\frac{b}{Y_{i}(b)}\frac{\partial Y _{i}(b)}{\partial b}\tilde{\omega}_{i}(b)\,db\right],\quad\tilde{\omega}_{i}(b ):=\frac{b^{-1}\mathbf{1}(b\in\mathcal{M}_{01,i})|\log(P_{i}/S_{1i})|}{\int b ^{-1}\mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{01,i})|\log(P_{i}/S_{1i})]\,db}. \tag{24}\] The interpretation for the identified \(\theta(\omega)\) mirrors the interpretation from Specification 4. In particular, the above ABE up-weights individuals with larger "prior errors". More importantly, the non-negative weighting result in Theorem 7 requires that \(\pi_{1}=0\). Thus, for reasons similar to the ones given for Specification 4, we caution against using Specification 5. Instead, if the goal is an ABE with weighting function (24), then we recommend using IPIV. In particular, Specification 3 with \(\lambda_{i}=|\log(P_{i}/S_{1i})|\) generates an ABE with weighting function (24). ### Prior Error Pure Interaction **Specification 6** (Prior Error Pure Interaction in Logs).: This specification takes the form \[\begin{split}\log(B_{i})&=X_{i}^{\prime}\pi_{0}+ \pi_{1}Z_{i}\log(S_{1i}/P_{i})+v_{i},\\ \log(Y_{i})&=X_{i}^{\prime}\gamma_{0}+\gamma_{1} \widetilde{\log}(B_{i})+e_{i},\end{split} \tag{25}\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\widetilde{\log}(B_{i})\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}\log(S_{1i}/P _{i})\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. In contrast to Specification 5, notice that there is no linear term for \(Z_{i}\). In that sense, the above first-stage is a "pure interaction" specification. This pure interaction specification is analogous to specifications from Cullen and Perez-Truglia (2022, Section 4) and Galashin et al. (2020, Section 6), which consider \(B_{i}\in\mathbb{R}^{2}\). Let \(m_{i}(Z_{i}):=Z_{i}\log(S_{1i}/P_{i})\) and \(\tilde{m}_{i}(Z_{i})=m_{i}(Z_{i})-\mathbb{L}[m_{i}(Z_{i})|X_{i}]\), where \(\mathbb{L}[m_{i}(Z_{i})|X_{i}]\) is a regression of \(m_{i}(Z_{i})\) on \(X_{i}\). **Theorem 8**.: _Consider Specification 6 for \(K=1\). If Assumption 1 is satisfied, \(\log(S_{1i}/P_{i})\) is included in the set of control variables \(X_{i}\), \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) is full-rank, \(\operatorname{var}(\tilde{m}_{i}(Z_{i}))>0\), and \(\mathbb{E}[\hat{m}_{i}(Z_{i})\log(B_{i})]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{b}{Y_{i}(b)}*\frac{b^{-1}\sigma_{01,i}\mathbf{1}(b\in \mathcal{M}_{01,i})\log(S_{1i}/P_{i})}{\int b^{-1}\mathbb{E}[\sigma_{01,i} \mathbf{1}(b\in\mathcal{M}_{01,i})\log(S_{1i}/P_{i})]\,db}. \tag{26}\] _Moreover, if Assumption 2 is satisfied, then_ \[\omega_{i}(b)=\frac{b}{Y_{i}(b)}*\frac{b^{-1}\mathbf{1}(b\in \mathcal{M}_{01,i})\lambda_{i}}{\int b^{-1}\mathbb{E}[ \mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}]\,db},\quad\lambda_{i}=|\log(S_ {1i}/P_{i})|. \tag{27}\] Theorem 8 shows that Specification 6 recovers \(\theta(\omega)\) with non-negative weights. 
Notably, this non-negative weighting result is agnostic to values of the first-stage coefficients, in contrast to the results for Specification 5. This difference arises because Specification 5 includes a linear term for \(Z_{i}\). Therefore, a researcher can use Specification 6 to recover \[\theta(\omega)=\mathbb{E}\left[\int\frac{b}{Y_{i}(b)}\frac{\partial Y_{i}(b)} {\partial b}\tilde{\omega}_{i}(b)\,db\right],\quad\tilde{\omega}_{i}(b):=\frac {b^{-1}\mathbf{1}(b\in\mathcal{M}_{01,i})|\log(S_{1i}/P_{i})|}{\int b^{-1} \mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{01,i})|\log(S_{1i}/P_{i})]|\,db}, \tag{28}\] without needing to be concerned about the values of the first-stage coefficients. Once again, individuals with larger "prior errors" are up-weighted. Theorem 8 supports the interpretation of weights that Cullen and Perez-Truglia (2022, Section 4) provide for their two-belief analogue of Specification 6. Given that IPIV Specification 3 with \(\lambda_{i}=|\log(S_{1i}/P_{i})|\) and Specification 6 both recover (28), a natural question is whether one procedure is more efficient than the other. We plan to investigate this in future work. ### Prior Interaction **Specification 7** (Prior Interaction).: This specification takes the form \[\begin{split}& B_{i}=X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}+\pi_{2}Z_{i }P_{i}+v_{i},\\ & Y_{i}=X_{i}^{\prime}\gamma_{0}+\gamma_{1}\hat{B}_{i}+e_{i}, \end{split} \tag{29}\] where \(v_{i}\) and \(e_{i}\) are regression residuals, \(\hat{B}_{i}\equiv X_{i}^{\prime}\pi_{0}+\pi_{1}Z_{i}+\pi_{2}Z_{i}P_{i}\) is the first-stage regression, and \(\gamma_{1}\) is the coefficient of interest. This prior interaction specification is analogous to specifications from (Coibion et al., 2022, Sections 3 and 5), Coibion et al. (2021, Sections 3 and 4), and (Kumar et al., 2023, Sections 3 and 4), which consider \(K>1\). The latter two also consider \(B_{i}\in\mathbb{R}^{2}\). Let \(m_{i}(Z_{i}):=\left(Z_{i},Z_{i}P_{i}\right)^{\prime}\) and \(\tilde{m}_{i}(Z_{i})=m_{i}(Z_{i})-\mathbb{L}[m_{i}(Z_{i})|X_{i}]\), where \(\mathbb{L}[m_{i}(Z_{i})|X_{i}]\) is a regression of \(m_{i}(Z_{i})\) on \(X_{i}\). **Theorem 9**.: _Consider Specification 7 for \(K=1\). If Assumption 1 is satisfied, \(P_{i}\) is included in the set of control variables \(X_{i}\), \(\mathbb{E}[X_{i}X_{i}^{\prime}]\) and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})\tilde{m}_{i}(Z_{i})^{\prime}]\) are full-rank, and \(\mathbb{E}[\tilde{m}_{i}(Z_{i})B_{i}]\neq 0\), then \(\gamma_{1}=\theta(\omega)\) with weighting function_ \[\omega_{i}(b)=\frac{\sigma_{01,i}\mathbf{1}(b\in\mathcal{M}_{01,i})(\pi_{1}+ \pi_{2}P_{i})}{\int\mathbb{E}[\sigma_{01,i}\mathbf{1}(b\in\mathcal{M}_{01,i})( \pi_{1}+\pi_{2}P_{i})]\,db}. \tag{30}\] _Moreover, if Assumption 2 is satisfied, \(\pi_{1}=0\), and either (i) \(P_{i}\leq 0\) implies \(P_{i}\leq S_{1i}\) and \(P_{i}\geq 0\) implies \(P_{i}\geq S_{1i}\); or (ii) \(P_{i}\leq 0\) implies \(P_{i}\geq S_{1i}\) and \(P_{i}\geq 0\) implies \(P_{i}\leq S_{1i}\), then_ \[\omega_{i}(b)=\frac{\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}}{\int \mathbb{E}[\mathbf{1}(b\in\mathcal{M}_{01,i})\lambda_{i}]\,db},\quad\lambda_{ i}=|P_{i}|. \tag{31}\] Theorem 9 shows that Specification 7 recovers an ABE with non-negative weights, provided that \(\pi_{1}=0\) and the signals are chosen dependent on the sign of the prior. For example, if a firm believes that inflation will be positive, then the experiment must provide that firm with a signal that is less than the prior. 
It seems difficult to imagine situations where such signal structures would be desirable; in fact, if only a single signal is provided, this implies that the signal must be smaller than all priors. And of course, the constraint that \(\pi_{1}=0\) means that Specification 7 suffers from the same weaknesses as Specifications 4 and 5. If all of the conditions in Theorem 9 are satisfied, we recover an IPIV parameter with \(\lambda_{i}=|P_{i}|\). This means that individuals with large priors (in magnitude) are up-weighted relative to their share in the distribution of individuals. The value of such a weighting scheme depends on the context. That said, due to the implausibility of the assumptions, if (31) is indeed the target parameter, then we recommend using IPIV Specification 2 with \(\lambda_{i}=|P_{i}|\).

### Implications for Estimation and Interpretation

We have shown that Specification 6 provides a positively weighted average of belief effects without assumptions beyond those for IPIV. We have also provided sufficient conditions for Specifications 4, 5, and 7. These sufficient conditions are easily testable by examining the coefficients in the first-stage regression. However, given the stringency of these assumptions, we recommend that researchers directly use IPIV with the corresponding \(\lambda_{i}\) instead.

Additionally, we have shown that interacting group status with prior beliefs and/or the signal results in a weight \(\lambda_{i}\) that generally up-weights individuals whose priors are far from the signal (large prior error) and down-weights individuals whose priors are close to the signal (small prior error). This may lead to markedly different estimates if the prior error is correlated with the belief effect. For example, consider a case in which individuals who have the largest belief effects have the greatest demand for information, and therefore have priors that are already close to the signal. Then, these specifications would underweight these individuals, leading the estimated ABE to be attenuated. We believe that using IPIV with \(\lambda_{i}=1\) provides a more "natural" object of interest.

Using the simulated data, Table 1 displays estimates of \(\gamma_{1}\) under the different specifications. To allow for comparison across estimates, we interact with \(P_{i}-S_{1i}\), rather than \(\log(P_{i}/S_{1i})\), in Specifications 5 and 6. Note that for regressions 4, 5, and 6, the data are generated such that the sufficient conditions for the specifications to produce IPIV estimates are met. Therefore, these regressions provide \(\lambda_{i}=|S_{1i}-P_{i}|\) IPIV estimates. The estimates are attenuated relative to the IPIV because they over-weight individuals with large prior errors and under-weight individuals with small prior errors. The final regression does not satisfy the sufficient assumption on the relationship between \(P_{i}\) and \(S_{1i}\) for an IPIV interpretation. Figure 4 shows the weights plotted over \(b\) and \(P_{i}-S_{1i}\); the weights shrink toward the center of the figure. Figure 5 shows a histogram of weights under the Baseline IV, IPIV (\(\lambda_{i}=1\)), and IPIV (\(\lambda_{i}=|P_{i}-S_{1i}|\)) specifications. Relative to IPIV (\(\lambda_{i}=1\)), IPIV (\(\lambda_{i}=|P_{i}-S_{1i}|\)) has a concentration of weights near 0 and a long right tail. Notice in Figure 6 that the weights near 0 correspond to the points near \(P_{i}-S_{1i}=0\), while the long tail corresponds to the far left and far right of \(P_{i}-S_{1i}\). 
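To make the first-stage check described in this section concrete, the sketch below simulates a toy passive-control experiment (the data-generating process is ours for illustration, not the simulation design behind Table 1), fits the Specification 4 first stage by least squares, inspects whether \(\pi_{1}\approx 0\) and \(\pi_{2}\approx-\pi_{3}\), and then computes the two-stage estimate of \(\gamma_{1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Toy data-generating process (purely illustrative): priors P_i, a single signal
# S_1i, random assignment Z_i, and posteriors that move from the prior toward the
# signal when the signal is provided, with updating weights independent of (P, S1).
P = rng.normal(0.0, 1.0, n)                   # prior beliefs
S1 = rng.normal(0.0, 1.0, n)                  # provided signal
Z = rng.integers(0, 2, n).astype(float)       # 1 = signal provided, 0 = passive control
alpha = rng.uniform(0.2, 0.8, n)              # heterogeneous updating weights
B = P + Z * alpha * (S1 - P) + 0.05 * rng.normal(size=n)   # posterior beliefs
Y = 1.0 + 0.5 * B + rng.normal(size=n)        # action depends linearly on beliefs

def ols(y, X):
    """OLS coefficients via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Specification 4 first stage: B = X'pi0 + pi1*Z + pi2*Z*P + pi3*Z*S1 + v,
# with P and S1 included among the controls X.
X = np.column_stack([np.ones(n), P, S1])
M = np.column_stack([Z, Z * P, Z * S1])
fs_coefs = ols(B, np.column_stack([X, M]))
pi1, pi2, pi3 = fs_coefs[3], fs_coefs[4], fs_coefs[5]
print(f"first stage: pi1={pi1:.3f}, pi2={pi2:.3f}, pi3={pi3:.3f}")
# Sufficient condition from Theorem 5: pi1 close to 0 and pi2 close to -pi3.

# Second stage (2SLS by hand): regress Y on X and the fitted value B_hat.
B_hat = np.column_stack([X, M]) @ fs_coefs
gamma = ols(Y, np.column_stack([X, B_hat]))
print(f"gamma_1 (Specification 4) = {gamma[-1]:.3f}")
```

Under this toy design the updating weights are independent of priors and signals, so the fitted first stage satisfies the Theorem 6 restrictions and the printed coefficients illustrate the kind of diagnostic a researcher can run before relying on Specification 4.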
Figure 5: Histogram of weights

Figure 6: Weights averaged over \(b\)

## 5 Conclusion

In this paper, we focus on the identification of the causal effects of beliefs on individuals' actions in passive-control information provision experiments. Within this domain, due to non-monotonicity, a baseline IV regression will generally produce negative weights. We introduce a family of information provision instrumental variables (IPIV) estimators to correct for this issue. Our framework requires the experiment to elicit prior beliefs and assumes that individuals update their beliefs in the direction of the signal. With this information, we can identify over- and under-estimators, providing a natural correction for non-monotonicity. Like IV, IPIV depends on individuals' beliefs being "moved" by the information provision. Therefore, when designing information provision experiments, researchers should be sure to collect priors and to design meaningful signals that induce predictable variation.

Additionally, we show that many existing specifications in fact produce a weighted form of IPIV. A specification based on that of Cullen and Perez-Truglia (2022) does so without additional assumptions; specifications from Jager et al. (2023) and Deshpande and Dizon-Ross (2023) do so under additional testable conditions on the first stage. However, relative to IPIV, these weights up-weight individuals with large prior errors and down-weight individuals with small prior errors, which may be undesirable. Similarly, a specification based on Coibion et al. (2022) requires additional assumptions and up-weights individuals with large priors. We illustrate the impacts of these differences in simulated data.
2309.14580
CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss
This paper considers contrastive training for cross-modal 0-shot transfer wherein a pre-trained model in one modality is used for representation learning in another domain using pairwise data. The learnt models in the latter domain can then be used for a diverse set of tasks in a zero-shot way, similar to ``Contrastive Language-Image Pre-training (CLIP)'' and ``Locked-image Tuning (LiT)'' that have recently gained considerable attention. Most existing works for cross-modal representation alignment (including CLIP and LiT) use the standard contrastive training objective, which employs sets of positive and negative examples to align similar and repel dissimilar training data samples. However, similarity amongst training examples has a more continuous nature, thus calling for a more `non-binary' treatment. To address this, we propose a novel loss function called Continuously Weighted Contrastive Loss (CWCL) that employs a continuous measure of similarity. With CWCL, we seek to align the embedding space of one modality with another. Owing to the continuous nature of similarity in the proposed loss function, these models outperform existing methods for 0-shot transfer across multiple models, datasets and modalities. Particularly, we consider the modality pairs of image-text and speech-text and our models achieve 5-8% (absolute) improvement over previous state-of-the-art methods in 0-shot image classification and 20-30% (absolute) improvement in 0-shot speech-to-intent classification and keyword classification.
Rakshith Sharma Srinivasa, Jaejin Cho, Chouchang Yang, Yashas Malur Saidutta, Ching-Hua Lee, Yilin Shen, Hongxia Jin
2023-09-26T00:03:25
http://arxiv.org/abs/2309.14580v1
# CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss ###### Abstract This paper considers contrastive training for _cross-modal 0-shot transfer_ wherein a pre-trained model in one modality is used for representation learning in another domain using pairwise data. The learnt models in the latter domain can then be used for a diverse set of tasks in a 0-shot way, similar to "Contrastive Language-Image Pre-training (CLIP)" [1] and "Locked-image Tuning (LiT)" [2] that have recently gained considerable attention. Most existing works for cross-modal representation alignment (including CLIP and LiT) use the standard contrastive training objective, which employs sets of _positive_ and _negative_ examples to align similar and repel dissimilar training data samples. However, similarity amongst training examples has a more continuous nature, thus calling for a more 'non-binary' treatment. To address this, we propose a novel loss function called Continuously Weighted Contrastive Loss (CWCL) that employs a continuous measure of similarity. With CWCL, we seek to align the embedding space of one modality with another. Owing to the continuous nature of similarity in the proposed loss function, these models outperform existing methods for 0-shot transfer across multiple models, datasets and modalities. Particularly, we consider the modality pairs of image-text and speech-text and our models achieve 5-8% (absolute) improvement over previous state-of-the-art methods in 0-shot image classification and 20-30% (absolute) improvement in 0-shot speech-to-intent classification and keyword classification. ## 1 Cross-modal alignment and transfer Learning visual representations using natural language supervision has proven to be a powerful way to unlock impressive zero-shot performance on a number of downstream tasks [1; 2; 3; 4; 5; 6]. In this paper, we draw inspiration from these works and study the task of _cross-modal alignment for 0-shot transfer_ for pairs of modalities. Let \(\mathcal{U}\) and \(\mathcal{V}\) denote a pair of modalities. For example, \(\mathcal{U}\) may be text modality, and \(\mathcal{V}\) maybe image modality. We are interested in the following problem: given a pre-trained model \(f_{\theta}:\mathcal{V}\rightarrow\mathcal{Q}\) for data in \(\mathcal{V}\) (where \(\mathcal{Q}\) denotes the embedding space), how can we use a paired dataset of the form \(\{u,v\},u\in\mathcal{U},\ v\in\mathcal{V}\), to best learn a model \(g_{\phi}:\mathcal{U}\rightarrow\mathcal{P}\) (where \(\mathcal{P}\) is the embedding space corresponding to \(\mathcal{U}\)) such that the learnt structure in the embedding space \(\mathcal{Q}\) can be aligned with that of \(\mathcal{P}\)? Once trained, the models \(g_{\phi}\) and \(f_{\theta}\) can be used on a diverse set of downstream tasks in a 0-shot way, thus avoiding the need for costly, task-specific, labeled datasets. Our motivation in studying the above problem lies in the fact that powerful pre-trained models existing in certain modalities, but are lacking in other modalities. For example, the recent advances in language models have resulted in very powerful models to process text data, while no such models exist for speech and audio data. Unlike text based models that can now generalize to new tasks in a 0-shot way, speech and audio models are still trained in a task-specific way (for example, automatic speech recognition (ASR)). 
Further, collecting labeled datasets in speech domain offers its own set of challenges including quality control, noise, removing silence to name a few [7; 8]. Similarly, even when pre-trained models are available for certain modalities such as images, there might be challenging sub-modalities (or domains) like medical imaging on which pre-trained models may not be trained on [9]. However, large scale _paired datasets_ maybe available, which connect the above modalities. For example, large datasets of speech and the associated (possibly noisy) transcripts are available easily on the internet. Similary, pairs of text and images, pairs of medical and raw text [9] maybe more easily available. Based on this observation, methods have been proposed to train image and text encoders by aligning features corresponding to paired image and text data [1; 3]. Upon training, these models demonstrate impressive 0-shot performance on a number of downstream tasks such as image classification and image-text retrieval. While in these works both encoders are trained from scratch, [2], showed that using a _frozen_ pre-trained image classification model as the image encoder and only training the text encoder significantly boosts downstream 0-shot performance. We observe that this abstract concept of _using a pre-trained model in one modality to supervise models in another modality using pairwise data_ can then be applied to any pair of modalities. Our main focus in this paper is on **how best to train such cross-modal models that leverage pre-trained models in one modality.** We find that standard contrastive loss used in training such models is _inefficient_ at _maximizing the amount of supervision_ that can be extracted from the pre-trained models. In particular, to learn the embeddings in the "unfrozen" modality, existing methods only use the embedding of the corresponding paired data from the other modality for supervision. However, there maybe many samples from the supervising modality that are similar, and to various degrees of similarity. To address this inefficiency, we propose a new loss function called **continuously weighted contrastive loss (CWCL)** for contrastive training of multi-modal models. The proposed loss function leads to better supervision and hence better alignment between the two modalities. We study the impact of our proposed loss function using two pairs of modalities, image-text and speech-text. For image-text pair, we find that the proposed loss function leads to an **improvement of 6-8% (absolute)** compared to the best baseline on 0-shot image classification tasks. For speech-text, we find that it leads to a **20-30% (absolute) improvement** on 0-shot speech-to-intent classification and 0-shot keyword spotting tasks. Further, the trained models achieve comparable performance to models fully supervised on task specific speech datasets. As shown in Figure 1, we find that models trained using the proposed loss function are data and compute-efficient. They achieve higher accuracies with fewer pairs of data samples during training. Further, as shown in Figure 2, embeddings extracted from test datasets of the downstream tasks show significantly improved sense of similarity for data from the same class, even though no label information was provided to the model. 
## 2 Continuously weighted contrastive loss ### Existing frameworks for contrastive training Various forms of contrastive learning has been successfully employed in both self-supervised learning [10; 11; 1; 2; 12; 13] and in supervised learning [14]. **Contrastive loss for self-supervised and multi-modal learning:** The traditional contrastive loss function is used in both single-modality self-supervised learning as well as multi-modal alignment. We explain the formulation used in multi-modal alignment and briefly explain how the same function Figure 1: Comparison of 0-shot transfer performance between baseline CL and proposed CWCL. (Left): zero-shot image classification accuracy measured across training epochs for the image-text modality pair. (Right): zero-shot speech-to-intent classification measured across training epochs for the speech-text modality pair. CWCL consistenly performs better than CL. is used in the single-modality setting. Let \(\mathcal{B}\) denote a batch of training data consisting of pairs of data samples from two modalities of size \(N\): \(\mathcal{B}=\{(u_{i},v_{i})\}_{i=1,\cdots,N}\), where \(u_{i}\) is from modality \(\mathcal{U}\) and \(v_{i}\) is from modality \(\mathcal{V}\). Let \(u_{i}\) and \(v_{i}\) be encoded into embeddings denoted as \(p_{i}\), \(q_{i}\) respectively. This can be done by separate, modality-specific encoders or by shared encoder. Then, the traditional contrastive loss function (CL) (to align \(\mathcal{U}\) with \(\mathcal{V}\)) is defined over the \(\mathcal{B}\) as \[\mathcal{L}_{CL,\mathcal{U}\rightarrow\mathcal{V}}=\frac{-1}{N}\sum_{i=1}^{N} \log\frac{\exp{(\langle p_{i},q_{i}\rangle/\tau)}}{\sum_{j\in[N]}\exp{( \langle p_{i},q_{j}\rangle/\tau)}}, \tag{1}\] where \([N]\) denotes the set \(\{1,2,\cdots,N\}\). Note that a similar loss function \(\mathcal{L}_{CL,\mathcal{V}\rightarrow\mathcal{U}}\) maybe defined and the total loss function is given as \(\mathcal{L}_{CL,\mathcal{U}\rightarrow\mathcal{V}}+\mathcal{L}_{CL,\mathcal{ V}\rightarrow\mathcal{U}}\). By minimizing (1), the encoders _learn to align pairs of data_. Note that in doing so, for each \(u_{i}\), \(v_{i}\) is considered as a _positive example_ and all other samples \(\{v_{i}\}_{j\in[N],j\neq i}\) are considered to be _negative examples_. This is also illustrated in Figure 4, where the diagonal matrix indicates the set of positive examples chosen (for each row and column). As an example, in [1; 2], for each image, _only the corresponding text_ is used as a positive example and _all other text samples_ are used as negative examples (and vice-versa). **Contrastive loss for supervised learning:** It is conceivable that in a given training batch, _there is more than one "positive" sample_. However the information about which samples are related to each other may be missing in self-supervised learning. However, this information is available in a supervised learning setup. Let \(\mathcal{T}\) denote a batch of training data of size \(M\) consisting of samples and labels: \(\mathcal{T}=\{(x_{i},y_{i})\}\). Further, let \(z_{i}\) be the embedding generated by the model. Then, it is clear that the set \(\mathcal{P}_{i}=\{x_{j},j\neq i|y_{j}=y_{i}\}\) forms _a set of positive examples_. This idea was explored in [14], where the following loss function 1 was proposed to leverage the label information: Footnote 1: The authors also propose another variant of the supervised constrastive loss function. Although we do not discuss it here, it is similar in spirit to (2). 
\[\mathcal{L}_{\text{sapcon}}=\frac{-1}{M}\sum_{i=1}^{M}\frac{1}{|P(i)|}\sum_{j \in P(i)}\log\frac{\exp{(\langle z_{i},z_{j}\rangle/\tau)}}{\sum_{k\in[N],k \neq i}\exp{(\langle z_{i},z_{k}\rangle/\tau)}}. \tag{2}\] Note that the above loss function can be interpreted as _taking the average of pair-wise \(\mathcal{L}_{CL}\) over the positive set_. The authors show that a combination of the above loss and the task loss yields better performance than using the task loss alone. However, this method _requires labeled datasets_. In the above two loss functions and other similar variants studied in the literature, we find two shortcomings. Firstly, **other similar examples that may be present in the training batch are not Figure 2: The similarity matrix between embeddings of the two modalities that are aligned via (Left): baseline CL and (Right): proposed CWCL. The axis labels correspond to the intent of utterances (for example, “news_query” represents utterances with news-related questions). CWCL results in a more “block” diagonal pattern than CL, indicating that speech and text samples with the same intent are more aligned while samples with different intents are more separated. This can be attributed to the continuous weighting mechanism of CWCL. Note that these embeddings are from a _downstream test dataset_ which was never exposed to the model during training. The visualization confirms that CWCL leads to a higher degree of alignment between similar data samples. considered**. In the self-supervised setting, all the other similar samples are considered as negative examples. In the supervised setting, some classes might be similar to each other (for example, multiple breeds of dogs), but are considered to be negative examples to each other. Secondly, **similarity is considered to be binary**. As a result, _all "positive examples" are attracted equally, and all "negative examples" are repelled equally._ However, we observe that _samples in a training batch maybe similar to each other to varying degrees_. Some samples might be _more similar_ to each other, a few others less so many others may be _dissimilar_. For a more detailed explanation, see Figure 3. ### Can we account for non-binary similarity? To address the above shortcomings, we propose a novel loss function called Continuously Weighted Contrastive Loss (CWCL). We use the same setup as that in multi-modal training used to define (1). The loss function (to align \(p_{i}\) with other \(q_{j}\)'s) is defined as \[\mathcal{L}_{\text{CWCL, }\mathcal{U}\rightarrow\mathcal{V}}=\frac{-1}{N}\sum_{i= 1}^{N}\frac{1}{\sum_{j\in[N]}w_{ij}^{\mathcal{V}}}\sum_{j\in[N]}w_{ij}^{ \mathcal{V}}\cdot\log\frac{\exp(\langle p_{i},q_{j}\rangle/\tau)}{\sum_{k\in[ N]}\exp(\langle p_{i},q_{k}\rangle/\tau)}, \tag{3}\] where \(w_{ij}^{\mathcal{V}}\)'s denote the **intra-modal similarity weights** between \(v_{i}\) and \(v_{j}\) in modality \(\mathcal{V}\). Note that a similar loss function to align modality \(\mathcal{V}\) with modality \(\mathcal{U}\) maybe defined, with the intra-modal similarity weights computed between \(u_{i}\) and \(u_{j}\). We will refer to the intra-modal similarity weights simply as weights for ease of usage and we will drop the superscript, unless the modality needs to be specified. Note that the weights are computed _pair-wise_, within each training batch. Before we describe how these weights may be computed, we highlight the properties that they need to have. Firstly, we normalize the weights to be between 0 and 1: \(w_{ij}\in[0,1]\). 
Secondly, _"similar" samples from within a given domain should have higher weights and dissimilar samples should have lower weights._ With these properties, note that \(\mathcal{L}_{\text{CWCL}}\) provides a way to interpolate between the self-supervised and fully-supervised variants described earlier. When the weights are given as \(w_{ij}=\mathbbm{1}_{\{i\}}(j)\) where \(\mathbbm{1}_{\mathcal{S}}\) denotes the indicator function w.r.t set \(\mathcal{S}\), it is equivalent to \(\mathcal{L}_{\mathcal{CC}}\). On the other hand, in the supervised setting, if \(w_{ij}\) is defined as \(w_{ij}=1\) for all pairs \(i,j\) belonging to the same class, but 0 otherwise, it is equivalent to \(\mathcal{L}_{\text{supcon}}\). More importantly, \(\mathcal{L}_{\text{CWCL}}\) allows the model to learn **a continuous sense of similarity**, by i) computing a softmax function for all pairs in the training batch (inner summation in Equation. (3)) and ii) weighting these softmax terms by the similarity weights. Further, note that all pair-wise inner products are already computed even in (1), (2). Therefore, computing \(\mathcal{L}_{\text{CWCL}}\) is similar in computational complexity to \(\mathcal{L}_{\text{CL}}\) and \(\mathcal{L}_{\text{supcon}}\). Figure 3: Existing contrastive learning methods treat samples in a batch as either strictly positive or negative. However, similarity between data samples has a more continuous and non-binary nature. In this figure, we provide an example of the nature of similarity in the context of paired image-text data. Note that the ‘weight’ terms in the figure are contrived for illustration purposes. The proposed CWCL loss function attracts all other data samples to a degree proportional to their similarity. Similarity itself is measured using intra-modal inner product between samples. ### How can we obtain intra-modal similarity weights? In the traditional self-supervised setting, no information about similarity between training data points maybe available. This might also be the case in multi-modal learning such as in[1], where the modality encoders are initialized randomly. However, authors in [2] explored the idea of using pre-trained models as initialization in multi-modal models. Further, they find that _freezing_ the pre-trained model (except maybe for a final linear layer) yields the best performance. This setup offers a natural way to obtain the similarity weights. We can measure the similarity between the embeddings from the pre-trained model. We focus on this setup, where we use frozen, pre-trained models for one modality to train models in another modality. Note that even though the model encoding the first modality is frozen, follow-up layers maybe added and trained. Let \(\mathcal{V}\) be the "frozen" modality with a pre-trained initialization. Then to align modality \(\mathcal{U}\) with \(\mathcal{V}\) using (3), \(w^{\mathcal{V}}_{ij}\) maybe computed as \(w^{\mathcal{V}}_{ij}=\langle q_{i},q_{j}\rangle/2+0.5\) in order for \(w_{ij}\in[0,1]\). We do not explore other formulations in this paper. A natural question is about how such weights can be computed for the modality \(\mathcal{U}\). If the model in modality \(\mathcal{U}\) is also initialized using a pre-trained model, the weights may be computed in a similar way. However, in this paper, we only focus on the _cross-modal transfer_, with similarity weights being computed only in the _frozen modality_ initialized with a pre-trained model. 
Assuming the modality \(\mathcal{V}\) is the frozen one, our loss function is given as \[\mathcal{L}_{\text{cross-modal transfer}}=\mathcal{L}_{\text{CWCL},\,\, \mathcal{U}\rightarrow\mathcal{V}}+\mathcal{L}_{CL},\,\,\mathcal{V} \rightarrow\mathcal{U}. \tag{4}\] Note that the exact configuration and choice of which modality to freeze will depend on the pairs of modalities being considered, the quality of pre-trained models and paired datasets available. ## 3 Related work **CLIP like models:** CLIP [1] and ALIGN [3] introduced a set of foundational vision-language models where the encoders, one for the image, another for the text modality output embeddings in a shared space. In this set of works the encoders for all the modalities are randomly initialized and trained from scratch. Other works have looked at extending the concept to other modalities like image-audio [15], others have explored richer embedding [16], adaptive prompt learning [5], architectural advancements like Mixture-of-Experts [6]. Some works also considered the problem setting where embeddings of both image and text are processed together so that specialized text query relevant image embeddings can be obtained [17; 18]. Another notable work, [4], the authors obtained impressive performance by improving individual encoders by processing image and text embeddings together to minimize caption-loss. Additionally, [19] proposed an extension to the cross-modal contrastive loss that can leverage labeled training data by combining it with SupCon [14]. Recent works such as [20; 21; 22] consider the alignment between images and text that do not belong to the same pair, similar to our proposed method. In [20; 22], both encoders are trained from scratch by using a self-distillation process. Such a training process requires careful parameter tuning and generally has lower performance ([20] achieves about 42.4% 0-shot on ImageNet) compared to using pre-trained models, as demonstrated by the metrics. Another difference between our work and the above works is that we consider intra-modal similarity to attract image-text pairs. Owing to the availability of strong pre-trained uni-modal models, intra-modal offers a clear way to identify similar Figure 4: The classical CL-based methods (e.g., CLIP [1], LiT [2], etc) can be interpretes as using a binary weight matrix for choosing the positive examples. The proposed CWCL utilizes a continuous weight matrix to account for the non-binary nature of similarity for improved alignment. data samples. In [21], the authors consider using a third, object detection model to obtain similarity between images. However, their method is specific to image-text modality pair. It may also lead to performance degradation, as seen in the zero-shot metrics reported. **LiT like models:** Alternatively, LiT [2] proposed the idea of leveraging strong pretrained models in one domain and aligning the embeddings of another domain to the pretrained model's embedding space. Works in this line have looked at extending to multiple domains like image-audio-text [23], music-audio [24], speech-text for speech translation [25]; fine-grained query specific image embeddings [26] and benefits of cross-modal alignment for regularizing unimodal classifiers [27]. Along with building a model capable of conversation, [28] proposed the use of cross-modal attention layers to improve image-text cross-modal alignment. 
However, none of these works consider the problem of similarity across samples within the same modality that is explored in our work. Further, all these works are complementary to CWCL and can be improved by it. Handful of works explore similarity amongst samples [29; 12; 29] propose removing certain samples from the negative set used to compute contrastive loss (1) if their average similarity to the other samples in the batch is greater than a certain threshold [29]; [12] propose using a threshold on similarity to decide which samples are positive pairs and negative pairs and combining it with [14]. However, CWCL is superior owing to the fact that it treats similarity as a continuous entity rather than a binary entity. **Incorrect negatives in contrastive learning:** Contrastive learning incorrectly assumes that for a given sample, every other sample in the dataset is dissimilar [30]. In the self-supervised learning one of the remedies proposed is to re-weight the negative part of the contrastive loss' denominator to account for the presence of similar (or positive) samples [31]. However, in the case of cross-modal alignment with pretrained models, the pretrained model is a better indicator of the similarity [29; 12]. ## 4 Experiments In this section, we provide experimental results that demonstrate that CWCL leads to better zero-shot transfer performance. We study two pairs of domains, namely image-text and speech-text. For image-text pair, we demonstrate zero-shot transfer to image classification and image/ text retrieval. On both tasks, CWCL shows improved performance over existing methods for zero-shot transfer. Next, we report results for speech-text modality pair, where we consider the tasks of speech-to-intent classification and keyword spotting. Given the difficulties in collecting task-specific speech datasets, we expect CWCL-based zero-shot transfer to have a large impact in this domain. Note that our main goal is to study the effect of using CWCL. _We use open source, publicly available and easily accessible datasets for our study and leave the task of training with larger datasets to future work._ ### Cross-modal transfer between image and text modalities **Model architecture:** Our model architecture follows that in [2] and has a vision encoder and a text encoder. For the vision encoder, we use the ViT-L/16 model architecture [32] pre-trained on ImageNet. We compute the similarity weights using the embeddings from before the final linear layer that is not frozen during training. For the text encoder, we consider two architectures: transformer encoder architecture with 12 layers,output dimension 768, and number of heads set to 12 and we also consider the BERT-large architecture. **Datasets for contrastive training:** All our experiments are based on the combination of two publicly available datasets, CC12M and YFCC15M. The CC12M dataset is a subset of the Conceptual Captions dataset [33] defined in [34]. We use a set of 10 million images that are still available in the set of URLs (since the rest of them have been taken down). The YFCC15M dataset is a subset of the Yahoo Flicker Creative Commons dataset [35] defined by [1] by filtering for high quality English text. It contains a set of 15 million image-text pairs. Model training details are provided in A.1. #### 4.1.1 Zero-shot image classification For zero-shot image classification, we experiment on 5 datasets: ImageNet [36] validation, ImageNet-V2 [37], ImageNet-R [38; 39], ImageNet-A[40]and ObjNet [41], similar to [2]. 
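As background for these results, zero-shot classification with aligned towers follows the usual CLIP/LiT recipe: each label is expanded into one or more template sentences, their text embeddings are averaged into a class embedding, and every image is assigned to the class with the highest cosine similarity. The sketch below illustrates this procedure; `image_encoder`, `text_encoder`, and their calling conventions are placeholders for the trained towers rather than the exact interfaces used in our experiments.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(image_encoder, text_encoder, images, class_names,
                       templates=("this is a photo of a {}.",)):
    """Zero-shot classification with a pair of aligned encoders (placeholder modules)."""
    # Build one classifier embedding per class: encode every template filled with
    # the class name and average the resulting text embeddings.
    class_embs = []
    for name in class_names:
        prompts = [t.format(name) for t in templates]
        emb = F.normalize(text_encoder(prompts), dim=-1)    # (T, d)
        class_embs.append(F.normalize(emb.mean(dim=0), dim=-1))
    class_embs = torch.stack(class_embs)                    # (C, d)

    # Assign each image to the most similar class embedding.
    img_embs = F.normalize(image_encoder(images), dim=-1)   # (B, d)
    logits = img_embs @ class_embs.t()                      # cosine similarities
    return logits.argmax(dim=1)
```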
We provide our experimental results in Tables. 1, 2. The results for SimCon [12], and LiT [2] are obtained from our own experimentation. For [12], we use their loss function in our set up. For [2], we use their recommended experimental settings from their paper. Note that the metrics are obtained by using the same model architecture and dataset for all the methods being compared. CWCL yields a significant boost over the other methods in zero-shot performance. Further, as shown in Figure1, CWCL achieves higher accuracy with fewer image-text training pairs. Owing to the CWCL formulation, the text embeddings generated by our model are designed to be similar to a larger set of similar images than the baseline methods, hence leading to better generalization. #### 4.1.2 Zero-shot Image-text retrieval We also examine the zero-shot image-text retrieval capabilities of our proposed method. Note that our experiments are only towards comparing standard contrastive loss with CWCL. We leave the task of training with larger datasets [1; 2; 3] and using multi-objective training (which maybe used along with contrastive tuning to obtain better retrieval performance) [33; 28; 18] for future exploration. In our experiment, we simply compare the performance of models trained with contrastive loss (as done in [2]) to that of models trained using CWCL. We use the MS-COCO validation dataset [42] to study zero-shot retrieval performance of these models. We report our results in Table 6. Models trained with CWCL outperform those trained using the standard contrastive loss function. #### 4.1.3 Robustness to templates for zero-shot classification An added benefit of the proposed CWCL formulation is that our model is robust to the templates/ prompts used in zero-shot tasks. For example, in zero-shot image classification, the labels are converted to text prompts in order to adapt the task of classification into that of alignment. In particular, both [1; 2] use a set of 80 "template" sentences to convert each label into 80 sentences, extract the text embeddings for all the sentences and use their mean embedding as the representation of the corresponding class. We expect that CWCL leads to robustness w.r.t the choice of such templates or prompts. We study this by changing the number of template sentences used to build the classifier embeddings. In particular, we design simple templates such as "this is a photo of ", "this is an image of " and experiment over \(k=1,5,10\) templates. We provide further details on the templates in Section A.1.2. We report the results for our model and that of [2] in Figure 5. As can be seen, models trained using CWCL are able to obtain peak performance with fewer number of templates, \begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & ImageNet (\%) & ImageNet-V2(\%) & ImageNet-R(\%) & ImageNet-A(\%) & ObjNet(\%) \\ \hline CLIP & 31.3 & - & - & - & - \\ OpenCLIP & 34.8 & 30 & - & - & - \\ SimCon & 67.9 & 58.57 & 59.32 & 37.16 & 44.9 \\ LiT & 66.84 & 58.82 & 61.28 & 37.31 & 45.08 \\ **CWCL (Ours)** & **74.41** & **66.25** & **67.37** & **45.58** & **50.5** \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot image classification performance using the ViT-L/16 + 12-layer transformer configuration. **CWCL achieves a significant improvement in zero-shot image classification** across multiple datasets, including out-of-domain datasets such as ObjectNet. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & ImageNet (\%) & ImageNet-V2(\%) & ImageNet-R(\%) & ImageNet-A(\%) & ObjNet(\%) \\ \hline LiT & 71.2 & 62.98 & 63.8 & 40.28 & 48.1 \\ **CWCL (Ours)** & **76.48** & **67.86** & **68.7** & **47.27** & **52.38** \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot image classification using the ViT-L/16 +BERT-large configuration. _CWCL-based training achieves state-of-the-art performance on all of zero-shot experiments._ \begin{table} \begin{tabular}{c|c c|c c c} \hline \hline Method & \multicolumn{3}{c|}{I \(\rightarrow\)T retrieval} & \multicolumn{3}{c}{T\(\rightarrow\)I retrieval} \\ \hline & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline LiT & 34.58 & 59.78 & 70.68 & 28.49 & 54.04 & 65.87 \\ \hline **CWCL (Ours)** & **40.36** & **66.62** & **77.76** & **30.04** & **54.84** & **66.06** \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-shot retrieval results on MS-COCO dataset by using ViT-L/16+BERT-large configuration as the image and text encoders respectively. whereas models trained using standard contrastive loss require a higher number of templates to build better classifier embeddings. We believe that robustness to the choice and the number of template sentences/prompts used is crucial to improve the ease of usage of such models. ### Cross-modal transfer between speech and text modalities The proposed method can be applied to speech-text cross-modal learning to transfer semantic knowledge from text embeddings to speech embeddings. Speech models with language understanding are desired for tasks in the field of spoken language understanding (SLU) [43; 44]. SLU differs from automatic speech recognition (ASR), which simply generates a transcription, but does not have language understanding. SLU models, unlike ASR models can then be used on a wide variety of downstream tasks such as intent classification (in multiple domains) [7], keyword spotting [45]. In general, speech pre-training schemes usually include information about the phonemes or paralinguistic information (e.g. speaker, emotion, pathology, etc.), but they do not include semantics in language. While some works have explored the usage of contrastive learning to train SLU models, they use the standard contrastive training method [46]. However, similar to the image-text case, this may not be efficient. For instance, "Turn on the volume", is closer in meaning to "Increase the sound" than "Set an alarm for 7:00 AM tomorrow morning". In this case, the standard cross-modal contrastive loss is unable to learn the cross-modal relationship between the text of the first sentence and the speech of the second sentence since they are considered to be a "negative pair". This is precisely what is address by CWCL. As we demonstrate later, CWCL achieves a significant boost in performance on downstream tasks. We train a speech-text multi-modal model with a dataset where speech and its corresponding transcript are available. Note that this is a generic dataset that is not specific to any SLU task. We use a pre-trained, frozen text encoder, owing to the availability of strong pre-trained language models. A trainable linear layer is added on top of the frozen text encoder to match the dimensionality of the speech and text embedding. We also use a pre-trained speech encoder that is robust to diverse acoustic condition and further train it using the proposed loss function. 
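The following sketch shows one way to wire up this speech-text setup in PyTorch; the encoder modules, embedding dimensions, and optimizer settings are placeholders chosen for illustration, and the training loss reuses the `cwcl_loss` sketch from Section 2.

```python
import torch
import torch.nn as nn

class SpeechTextAligner(nn.Module):
    """
    Minimal wiring of the setup described above (encoder modules are placeholders):
    a frozen pre-trained text encoder with a trainable linear projection, and a
    pre-trained speech encoder that keeps training under the cross-modal loss.
    """
    def __init__(self, speech_encoder: nn.Module, text_encoder: nn.Module,
                 text_dim: int, speech_dim: int):
        super().__init__()
        self.speech_encoder = speech_encoder      # pre-trained speech encoder, kept trainable
        self.text_encoder = text_encoder          # pre-trained language model, locked
        for param in self.text_encoder.parameters():
            param.requires_grad = False
        # Trainable linear layer on top of the frozen text tower so that the text
        # embedding dimension matches the speech embedding dimension.
        self.text_proj = nn.Linear(text_dim, speech_dim)

    def forward(self, speech_batch, text_batch):
        p = self.speech_encoder(speech_batch)     # trainable modality U
        with torch.no_grad():
            q = self.text_encoder(text_batch)     # frozen modality V
        q = self.text_proj(q)
        return p, q

# Only the speech encoder and the projection layer receive gradient updates, e.g.:
# model = SpeechTextAligner(speech_enc, text_enc, text_dim=768, speech_dim=512)
# optimizer = torch.optim.AdamW(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-4)
# loss = cwcl_loss(*model(speech_batch, text_batch))  # loss from the Section 2 sketch
```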
**Model architecture:** For the speech model, we used the encoder of the pre-trained Whisper ASR [47], which is expected to be robust to different acoustic conditions. For the text models, we found 49 publicly available hugging face models by searching with filters as, task: Zero-shot classification, libraries: Transformers, and languages: English. We manually added one RoBERTa-based model fine-tuned on MASSIVE [48] data. All 50 models were compared on zero-shot text intent classification using the SLURP dataset [7] and the top 2 models were selected. The best model (we call it RoBERTa+S) was the RoBERTa-based model fine-tuned on MASSIVE data since the data includes SLURP (only the text data) 2. The second model was a BART-based model fine-tuned on Yahoo Answers topic classification (we call it BART+Y) 3. Footnote 2: [https://huggingface.co/qanastek/XLMRoberta-Alexa-Intents-Classification](https://huggingface.co/qanastek/XLMRoberta-Alexa-Intents-Classification) Footnote 3: [https://huggingface.co/joeddav/bart-large-mnli-yahoo-answers](https://huggingface.co/joeddav/bart-large-mnli-yahoo-answers) **Datasets for contrastive training** For cross-modal training, we used the Common Voice Corpus 13.0 [49]. This dataset consists of roughly 2400 hours of speech data and the corresponding transcripts Figure 5: Comparison of robustness to templates. (Left): the baseline CL method of LiT [2]. (Right): the proposed CWCL approach. The number displayed at each bar reflects the decrease in accuracy due to using only a subset of templates compared to using the full set. obtained using crowd-sourcing and includes speech from a diverse set of demographics across age and gender. We use the English subset. Model training details are provided in A.2. #### 4.2.1 Zero-shot speech-to-intent classification After the cross-modal embedding alignment stage, we evaluated the models on the zero-shot speech-to-intent classification task. The task is to classify a given speech sequence into one of the intent classes. The main difference between the zero-shot and supervised intent classification is the zero-shot classification can be done without training a classifier. **Class embedding generation:** Similar to the image-text case, we compute the embeddings of a given speech signal and compute its similarity with the text embeddings for all the intent classes. These class embeddings are obtained as averaged embedding of text sentences' embeddings of the corresponding classes. During inference, the class embedding that has the highest similarity score with the input speech embedding is chosen as the predicted class. **Dataset:** We used the SLURP [7] and STOP [50] datasets for the experiments. In the SLURP dataset, we used all the text sentences in the _train_ subset to generate the class embeddings for 60 intent classes where intent is defined as the concatenation of scenario and action labels, following ESPnet-SLU [51]. We did not use the _train_synthetic_ subset since more than half of the text sentences overlap with the _devel_ and _test_ subsets. On average, 191 text sentences were used per class. We compare the systems by evaluating them on the _devel_ and _test_ subsets. In the STOP dataset, we used the 8 unique domain labels as intent labels. Although not intents in a strict sense, the domain labels can be considered a simpler version of the intent labels. Since significantly more sentences are available in STOP, we randomly extracted 200 sentences per domain from the training set to generate the class embeddings. 
The evaluation was done on the validation and test sets. **Results**: In previous works, speech-to-intent classification has been done with an ASR-NLU pipeline system where the speech is first transcribed by ASR (speech-to-text), after which the transcription is classified into an intent using NLU (text-to-intent) [7]. Considering this, we refer to the text-to-intent performance achieved by the pre-trained text encoders as the "reference" performance. This provides an estimate of the performance of the speech encoder on the speech-to-intent task. The first results of speech-to-intent classification are shown in the SLURP and STOP (without the superscript \({}^{\#}\)) columns in Table 4. In all cases, multi-modal training with the CWCL loss outperformed the CL loss. On the SLURP dataset, RoBERTa+S has a higher reference performance compared to BART+Y because the fine-tuning data for RoBERTa+S included the SLURP text data. This also leads to a better performance compared to using the BART+Y model as the text encoder. On the STOP dataset, RoBERTa+S has a lower reference compared to BART+Y, implying that the RoBERTa+S' text model overfits the SLURP data. However, the RoBERTa+S-based speech intent classification was still better than the BART+Y-based one. This implies that the text model architecture could be another factor that contributes to transferring performance to the speech model. To be specific, the RoBERTa+S was RoBERTa which consists of only encoder layers while the BART+Y was the encoder-decoder-based BART model. Another thing to note is that CWCL with RoBERTa+S outperforms the text-intent reference performance on the STOP (87.87 vs. 84.78) dataset. This is because, during the cross-modal alignment stage using CWCL, the speech tower might have learned how to utilize acoustic cues in addition to linguistic information from a given speech utterance, to align its embedding to the semantic embedding from the text tower. However, this did not happen in the case of SLURP, because the SLURP dataset includes more intent classes than STOP (60 vs 8 classes), thus being more challenging in transferring knowledge from text to speech during the cross-modal alignment stage. **Experimenting with different templates to generate text embeddings:** So far, each class embedding used in zero-shot intent classification was generated by averaging all the corresponding text sentences' embeddings from the class in the training subset. Although collecting the text data with the intent labels can be less expensive than collecting speech data with the intent labels, the former may not always be possible. To address this, we manually devised a fixed set of general templates that were applied to every class. For example, templates are of the form "This audio is about [class]", and "The utterance is related to [class]", and the text embeddings are averaged to obtain the class embedding. For the exact templates we used, readers may refer to Appendix A.2.5. The results are shown in the SLURP\({}^{\#}\) and STOP\({}^{\#}\) columns in Table 4. We again observe that the proposed CWCL loss outperforms the CL loss. **Comparison to supervised training**: We also present results of a supervised SLU model on SLURP, based on ESPnet-SLU [51]. Considering our system is zero-shot, the result is noteworthy. For STOP, we could not find previous supervised works that evaluated systems the same way. Due to lack of space, we present the following results in A.2.2. 
In Table 7, we found that leveraging pre-trained models was more beneficial than training from scratch for speech-text embedding alignment. As seen in Table 8, locking the text encoder and fine-tuning the speech encoder gave the best performance. We found that batch size is not a critical factor, as shown in Table 9. #### 4.2.2 Zero-shot keyword spotting (KWS) We also tested our model for KWS using the Google Speech Command Dataset V2 (GSCV2) [8] where we classified among the 35 keywords in the Google Speech Command. The result is shown in the columns after the thick vertical line in Table 4. For the results in the first column (without \({}^{\#}\)), we used each keyword as is to extract the class embedding from the text model. For the second column (with \({}^{\#}\)), we used the general template used in the speech-to-intent experiments. The results show that the proposed method outperforms the baseline. With CWCL, the KWS\({}^{\#}\) outperformed KWS. This could be because the text models that generate the class embeddings are usually trained with sentence-level samples, not word-level ones whereas the keywords are words, i.e., the KWS class embeddings are extracted from words whereas KWS\({}^{\#}\) are extracted from sentences constructed using templates, thus resulting in better class embedding. **Comparison to supervised training**: Interestingly, results achieved with the BART-based text model are comparable to the supervised learning mechanisms of [52; 53; 54]. Note that the self-supervised mechanisms use training data to train the final linear classifier [53; 54]. However, our models without any training data still achieve close to \(90\%\) accuracy. This will be useful when defining new keywords as collecting large datasets for keyword classification becomes difficult [8]. Additional results are provided in Table 10 and in Table 11, respectively in Appendix A.2. ## 5 Conclusion In this paper, we make the observation that existing contrastive learning based methods for cross-modal alignment using pre-trained models are not efficient in extracting supervision from the pre-trained embeddings. In particular, many similar examples that do not form pairs in the training data are ignored. We address this by developing a novel loss function that accounts for the continuous nature of similarity and uses information from all similar examples in a training batch. We train models for two pairs of modalities using this loss function, namely image-text and speech-text. In both cases, we observe a significant increase in 0-shot performance on downstream tasks. We believe that the proposed loss function will be impactful in leveraging powerful pre-trained models and transfering the learnt knowledge to other modalities and domains. \begin{table} \begin{tabular}{c c|c c|c c|c c} \hline \hline Method & Text model & SLURP & SLURP\({}^{\#}\) & STOP & STOP\({}^{\#}\) & GSCV2 & GSCV2\({}^{\#}\) \\ \hline \multirow{2}{*}{CL} & RoBERTa+S & 40.35 & 23.68 & 70.13 & 50.56 & 64.74 & 59.65 \\ & BART+Y & 22.73 & 8.06 & 55.67 & 42.07 & 56.33 & 45.54 \\ \hline \multirow{2}{*}{**CWCL** **(Ours)**} & RoBERTa+S & 63.80 & 40.75 & 87.87 & 67.77 & 81.02 & 82.77 \\ & BART+Y & 53.12 & 30.51 & 80.99 & 73.08 & 88.81 & 89.43 \\ \hline \multirow{2}{*}{Text-to-intent (reference)} & RoBERTa+S & 88.19 & 59.86 & 84.78 & 69.10 & 100 & 98.20 \\ & BART+Y & 77.03 & 45.93 & 92.93 & 79.11 & 100 & 100 \\ \hline \hline ESPnet [51] & - & 77.00 & - & - & - & - & - \\ Att. 
RNN [52] & - & - & - & - & - & 93.9 & - \\ Wav2Vec2 [53] & - & - & - & - & - & 96.6 & - \\ M2D [54] & - & - & - & - & - & 98.5 & - \\ \hline \hline \end{tabular} \end{table} Table 4: Top-1 accuracy for zero-shot speech-to-intent classification (SLURP and STOP) and keyword spotting (GSCV2) after thick vertical line. Superscript \({}^{\#}\) is used to indicate use of general templates for class embedding extraction. Supervised results are provided in gray after the double-horizontal line: [51] is for speech-to-intent and [52; 53; 54] are for keyword spotting.
This paper applies contrastive training to cross-modal 0-shot transfer, in which a pre-trained model in one modality is used for representation learning in another domain using paired data. The models learned in the latter domain can then be used for a diverse set of tasks in a zero-shot way, similar to CLIP and LiT, which have recently attracted considerable attention. Most existing work on cross-modal representation alignment (including CLIP and LiT) uses the standard contrastive training objective, which aligns similar training data samples and repels dissimilar ones. However, similarity among training examples has a more continuous nature and therefore calls for a non-binary treatment. To address this, we propose a novel loss function called Continuously Weighted Contrastive Loss (CWCL), which employs a continuous measure of similarity. With CWCL, we seek to align the embedding space of one modality with that of another.
2309.13589
Planets around evolved intermediate-mass stars III. Planet candidates and long-term activity signals in six open clusters
[abridged]The aim of this work is to search for planets around evolved stars, with a special focus on stars more massive than 2\,M$_\odot$ in light of previous findings that show a drop in planet occurrence around stars above this mass. We used \texttt{kima} to find the Keplerian orbits most capable of explaining the periodic signals observed in RV data. We also studied the variation of stellar activity indicators and photometry in order to discard stellar signals mimicking the presence of planets. We present a planet candidate in the open cluster NGC3680 that orbits the 1.64\,M$_\odot$ star No. 41. The planet has a minimum mass of 5.13M\,$_{J}$ and a period of 1155 days. We also present periodic and large-amplitude RV signals of probable stellar origin in two more massive stars (5.84 and 3.05\,M$_\odot$ in the clusters NGC2345 and NGC3532). Finally, using new data, we revise the RV signals of the three stars analysed in our previous paper. We confirm the stellar origin of the signals observed in NGC2423 No. 3 and NGC4349 No. 127. On the other hand, the new data collected for IC4651 No. 9122 (1.79\,M$_\odot$) seem to support the presence of a bona fide planet of 6.22M\,$_{J}$ at a period of 744 days, although more data will be needed to discard a possible correlation with the CCF-FWHM. The targets presented in this work showcase the difficulties in interpreting RV data for evolved massive stars. The use of several activity indicators (CCF-FWHM, CCF-BIS, \ha), photometry, and long-term observations (covering several orbital and stellar rotational periods) is required to discern the true nature of the signals. However, in some cases, all this information is insufficient, and the inclusion of additional data -- such as the determination of magnetic field variability or RV points in the near-infrared -- will be necessary to identify the nature of the discovered signals.
E. Delgado Mena, J. Gomes da Silva, J. P. Faria, N. C. Santos, J. H. Martins, M. Tsantaki, A. Mortier, S. G. Sousa, C. Lovis
2023-09-24T09:09:41
http://arxiv.org/abs/2309.13589v1
# Planets around evolved intermediate-mass stars+ ###### Abstract Context: We carried out a long-term campaign spanning 17 years to obtain high-precision radial velocities (RVs) with the HARPS spectrograph for a large sample of evolved stars in open clusters. Aims: The aim of this work is to search for planets around evolved stars, with a special focus on stars more massive than 2 M\({}_{\odot}\) in light of previous findings that show a drop in planet occurrence around stars above this mass. Methods: We used kima --a package for Bayesian modelling of RV and activity data with Gaussian process capability and nested sampling for model comparison-- to find the Keplerian orbits most capable of explaining the periodic signals observed in RV data, which have semi-amplitudes of between 75 and 500 m s\({}^{-1}\). We also studied the variation of stellar activity indicators and photometry in order to discard stellar signals mimicking the presence of planets. Results: We present a planet candidate in the open cluster NGC3680 that orbits the 1.64 M\({}_{\odot}\) star No. 41. The planet has a minimum mass of 5.13 M\({}_{J}\) and a period of 1155 days. We also present periodic and large-amplitude RV signals of probable stellar origin in two more massive stars (5.84 and 3.05 M\({}_{\odot}\), in the clusters NGC2345 and NGC3532). Finally, using new data, we revise the RV signals of the three stars analysed in our previous paper. We confirm the stellar origin of the signals observed in NGC2423 No. 3 and NGC4349 No. 127. On the other hand, the new data collected for IC4651 No. 9122 (1.79 M\({}_{\odot}\)) seem to support the presence of a bona fide planet of 6.22 M\({}_{J}\) at a period of 744 days, although more data will be needed to discard a possible correlation with the CCF-FWHM. Conclusions: The targets presented in this work showcase the difficulties in interpreting RV data for evolved massive stars. The use of several activity indicators (CCF-FWHM, CCF-BIS, H\(\alpha\)), photometry, and long-term observations (covering several orbital and stellar rotational periods) is required to discern the true nature of the signals. However, in some cases, all this information is insufficient, and the inclusion of additional data --such as the determination of magnetic field variability or RV points in the near-infrared-- will be necessary to identify the nature of the discovered signals. ## 1 Introduction In the last 30 years, more than 5000 planets have been discovered, mainly around main sequence (MS) solar-type stars1, that is, FGK dwarf stars and M dwarfs. However, the number of planets detected orbiting intermediate-mass stars is still low despite the increasing frequency of giant planets with stellar mass (e.g. Johnson et al., 2010). Moreover, several studies point to a sharp decrease in planet occurrence around stars more massive than \(\sim\)2 M\({}_{\odot}\) (Reffert et al., 2015; Wolthoff et al., 2022). The search for planets around these more massive stars is crucial to our understanding of the limits of planet formation and survival, but we are still limited by the applicability of different detection techniques to this kind of star. For example, the most successful planet-detection techniques (based on photometric transits and radial velocity (RV)) are not optimal for large, massive, and fast-rotating stars, such as early-type MS stars. Nevertheless, both methods have been applied to late-A and early-F stars, revealing a handful of substellar companions (e.g. 
Desort et al., 2008; Borgniet et al., 2019; Grandjean et al., 2023; Collier Cameron et al., 2010; Sebastian et al., 2022; Vowell et al., 2023). The largest fraction of planets around intermediate-mass stars was detected by successfully applying the above-mentioned techniques to K giants, the evolved counterparts of early-type MS stars. Photometric data from the Kepler and TESS missions led to the discovery of a good number of planets around subgiants and red giant branch (RGB) stars (e.g. Lillo-Box et al., 2014; Grunblatt et al., 2022). On the other hand, several long-term RV surveys are being carried out by different teams, such as the Lick survey (Frink et al., 2001), the Okayama Planet Search Program (Sato, 2005), the Tautenburg Observatory Planet Search (Hatzes et al., 2005), and the PTPS-TAPAS program (Niedzielski et al., 2015). We refer the reader to Table 1 in Ottoni et al. (2022) for an exhaustive compilation of the current RV surveys around giant stars. Nevertheless, the occurrence rates found by some of these surveys can also be affected by selection effects. Because redder stars tend to show larger RV jitter (e.g. Frink et al., 2001), it is common to apply a cutoff for a maximum \(B-V\). As a consequence, the most metal-rich stars with low log \(g\) values are left out of such surveys (Mortier et al., 2013). Alternatively, the direct imaging technique can be applied to early-type MS stars, especially those with larger masses and young ages. However, due to the inherent biases affecting the direct imaging technique, only substellar companions at large distances can be detected. The largest surveys to date, SHINE (with SPHERE@VLT) and GPIES (with GPI@Gemini South), only found a few substellar companions around A stars (Vigan et al., 2021; Nielsen et al., 2019, respectively). Indeed, most of the above-mentioned surveys (either around dwarf or evolved stars) only detected planets around stars below \(\sim\)2.5 M\({}_{\odot}\) (i.e. spectral type later than A0-A1). Some theoretical studies predict that planet formation for stars more massive than 3 M\({}_{\odot}\) is very difficult because in such stars irradiation overcomes accretion and the snowlines lie at greater distances, hindering the formation of cores (Kennedy and Kenyon, 2008). However, the very recent results of the BEAST survey, wherein the direct imaging technique is applied to B-type stars (M \(\gtrsim\)2.5 M\({}_{\odot}\)), show that substellar companions (most likely brown dwarfs) can form around this kind of star, which contradicts the low occurrence rates found in RV surveys of evolved stars. A brown dwarf was reported in an orbit of 290 AU around the 9 M\({}_{\odot}\) star \(\mu^{2}\) Sco (Squicciarini et al., 2022) and an 11 M\({}_{J}\) planet candidate was detected in a 556 AU orbit around the 6-10 M\({}_{\odot}\) binary b Cen AB (Janson et al., 2021). Given the importance of correctly determining the planet-occurrence rates for intermediate-mass stars and the difficulty in obtaining accurate masses for evolved stars, we began an RV survey around giant stars in open clusters. The advantage of open clusters is that the ages and masses of their stars can be much better constrained, meaning the planetary characterisation will be much more reliable. 
In the first paper of the survey, Lovis and Mayor (2007, hereafter Paper I) presented the discovery of a planet and a brown dwarf candidate orbiting a 2.3 M\({}_{\odot}\) red giant in the open cluster NGC2423 and a 3.8 M\({}_{\odot}\) red giant in NGC4349, respectively. In a subsequent work, Delgado Mena et al. (2018, hereafter Paper II) presented a planet candidate around a 2.1 M\({}_{\odot}\) giant in IC4651 that was found to show a suspicious signal in one activity indicator with a slightly shorter period than that of the RV variations. In addition, the analysis of new data obtained after the publication of Paper I pointed to a probable stellar origin of the RV signals found in NGC2423 and NGC4349. These results made evident the complexity of analysing these noisy stars, which have long rotational periods that are compatible in many cases with the orbital periods of the candidate substellar companions. Interestingly, the large-amplitude RV signals found in those three objects had periods of close to 700 days, as is also true for other suspicious cases in the literature such as Aldebaran (Hatzes et al., 2015; Reichert et al., 2019) or \(\gamma\) Draconis (Hatzes et al., 2018). Indeed, the planet-occurrence rate from three combined large RV surveys analysed by Wolthoff et al. (2022) shows a maximum at 720 days, which might be caused by the accumulation of planets with orbital periods around 600 days found orbiting the more massive stars in the surveys (those with M \(>\)1.4 M\({}_{\odot}\)). One of the explanations for this accumulation might be contamination by false positives, which are only starting to be revealed after years of observations. The aim of this work is to present new results of our RV survey for three stars that show periodic RV variations and may host substellar companions. The outline of the paper is as follows: in Sect. 2 we present the data and the stellar parameters. In Sect. 3, we describe the analysis of the RVs, stellar activity indicators, and photometry. A planet candidate in NGC3680 whose signal might be of stellar origin is presented in Sect. 4. In Sects. 5 and 6, we present the cases of two massive stars2 in NGC2345 and NGC3532, which show large-amplitude RV signals that are likely caused by modulation of stellar magnetic activity. However, these stars also present secondary RV signals that might be caused by a brown dwarf or a planet. In Sect. 7, we present additional data for three cluster stars with RV variations mimicking substellar bodies already discussed in Paper II. In Sect. 8, we discuss the possible origin of all the signals presented in the previous sections. Finally, in Sect. 9, we present our conclusions. Footnote 2: We note that this definition of a massive star is put forward in the context of planet search surveys where very few planets have been found around stars with M \(>\) 2 M\({}_{\odot}\). In stellar physics, massive stars are usually defined as those with M \(>\) 8-10 M\({}_{\odot}\). Figure 1: Hertzsprung-Russell diagram for the six open clusters analysed in this work. The upper row contains the older clusters (with less massive stars, as shown in the colour scale) and the bottom row contains the younger clusters, with more massive stars. The objects with periodic RV variations are depicted with a star symbol. ## 2 Observations and stellar parameters The RV survey on which this work is based is fully described in Paper I. The objects analysed in this survey were first observed over a time period of nearly 5 years (from March 2005 to October 2009, ESO periods 75-83, PI: Lovis). 
In total, 142 stars were monitored within 17 open clusters using the HARPS spectrograph (Mayor et al. 2003) at the ESO 3.6m telescope (\(R\sim 115000\)), and some of them were also observed with CORALIE (1.2 m Swiss Telescope, La Silla; Queloz et al. 2000; Udry et al. 2000) in previous years. For those stars showing large RV variations, we collected more observations between March 2017 and March 2022 (ESO periods 99-108, PI: Delgado Mena). In addition, we obtained a few RV points for a number of stars in the ESO archive (periods 91-94, PI: Alves, Canto-Martins). The RV values are provided in the online tables associated with this publication. The observations were made in _objA_ mode (no simultaneous RV calibration) and the exposure times were estimated in order to have individual spectra with a signal-to-noise ratio (S/N) of at least \(\sim\) 30 at \(\sim\) 6000 Å. This gives a typical RV photon-noise of \(\sim\) 3.5 m s\({}^{-1}\), which is sufficient to detect massive planets around the surveyed stars (with \(V\) magnitudes between 7 and 12). We note that we applied a small negative offset to the RV points taken after May 2015, when an upgrade of the HARPS fibres took place (Lo Curto et al. 2015). The shift was calculated by extrapolating (with a linear fit) the measurements of Table 3 in Lo Curto et al. (2015) to the average full width at half maximum (FWHM) of the cross-correlation function (CCF) for each star (see Appendix A). This offset is larger for stars with larger FWHM. The offset values for the stars3 discussed in this paper are 22.3 m s\({}^{-1}\) (NGC3680 No. 41) and 27.8 m s\({}^{-1}\) (NGC3532 No. 670). No offset is required for NGC2345 No. 50 because we only have data taken after 2015. In addition, the FWHM needs to be corrected due to an instrumental focus drift affecting the measurements obtained before the fibre change in 2015. The equations for this correction are provided by Gomes da Silva et al. (2012). After the fibre change, the cause of the focus drift was corrected but both the FWHM and the bisector inverse slope (BIS, Queloz et al. 2001) of the CCF4 present an offset with respect to previous values. As opposed to the RV offset, there are no published data with which to calculate the offset in the FWHM and BIS values. We refer the reader to Appendix A for more details about our attempts to correct this offset. Footnote 3: For homogeneity with our previous works, we name the stars with a No. corresponding to the numbering system of Mermilliod. For example, the full name of NGC3680 No. 41 in Simbad is Cl* NGC3680 MMU 41. Footnote 4: Hereafter we refer to these measurements simply as the FWHM and BIS. The stellar parameters of effective temperature (T\({}_{\rm eff}\)), surface gravity (log \(g\)), metallicity ([Fe/H]), and microturbulence (\(\xi_{t}\)) for most of our target stars in these clusters were presented by Santos et al. (2009, 2012) and were improved by Delgado Mena et al. (2016), who also derived stellar ages, masses, radii, and Li abundances. With the addition of more observations and some new targets, we presented a new set of more precise parameters for the complete sample of stars (Tsantaki et al. 2023). In this latter analysis, we derived stellar parameters with the spectral synthesis method and made use of MARCS model atmospheres, which are more appropriate for giant stars. In addition, the masses and radii of the stars were revised using Gaia DR2 parallaxes. We refer the reader to Tsantaki et al. (2023) for further information. 
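To illustrate the post-upgrade RV offset correction described above, the following minimal Python sketch fits a line to offset-versus-FWHM calibration points and evaluates it at a star's mean CCF FWHM. The calibration arrays are placeholders only, not the actual values of Table 3 in Lo Curto et al. (2015).

```python
# Linear extrapolation of the fibre-upgrade RV offset as a function of CCF FWHM.
import numpy as np

fwhm_calib = np.array([6.0, 8.0, 10.0, 12.0])      # km/s, hypothetical calibration FWHMs
offset_calib = np.array([14.0, 18.0, 22.0, 26.0])  # m/s, hypothetical published offsets

slope, intercept = np.polyfit(fwhm_calib, offset_calib, deg=1)

def rv_offset(mean_fwhm_kms):
    """Offset (m/s) to subtract from RV points taken after the May 2015 fibre upgrade."""
    return slope * mean_fwhm_kms + intercept

# Larger FWHM -> larger offset, as noted in the text.
print(rv_offset(7.5), rv_offset(9.8))
```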
## 3 Radial velocity, stellar activity, and photometry analysis The analysis of the RV data was carried out with kima(Faria et al. 2018). This code facilitates a Keplerian fit where the number of planets is a free parameter to be estimated from the data, using the Diffusive Nested Sampling algorithm (Brewer et al. \begin{table} \begin{tabular}{l l c c c|c c c} \hline \multicolumn{6}{c}{NGC3680 No. 41} & NGC2345 No. 50 & NGC3532 No. 670 & IC4651 No. 9122 & NGC2423 No. 3 & NGC4349 No. 127 \\ \hline \hline T\({}_{\rm eff}\) & K & 4612 \(\pm\) 12 & 3962 \(\pm\) 10 & 4347 \(\pm\) 11 & 4582 \(\pm\) 12 & 4534 \(\pm\) 12 & 4417 \(\pm\) 12 \\ \(\log g\) & (cm s\({}^{-2}\)) & 2.45 \(\pm\) 0.04 & 0.87 \(\pm\) 0.07 & 1.75 \(\pm\) 0.05 & 2.43 \(\pm\) 0.04 & 2.23 \(\pm\) 0.04 & 1.78 \(\pm\) 0.05 \\ \(\rm[Fe/H]\) & & \(-\)0.16 \(\pm\) 0.02 & \(-\)0.25 \(\pm\) 0.02 & \(-\)0.11 \(\pm\) 0.02 & \(-\)0.03 \(\pm\) 0.01 & \(-\)0.08 \(\pm\) 0.01 & \(-\)0.17 \(\pm\) 0.02 \\ \(v\sin i\) & (km s\({}^{-1}\)) & \(-\) & 5.27 & 4.64 & 0.68 & 2.19 & 4.81 \\ \(M\) & M\({}_{\odot}\) & 1.64 \(\pm\) 0.06 & 5.84 \(\pm\) 0.61 & 3.05 \(\pm\) 0.23 & 1.79 \(\pm\) 0.09 & 2.03 \(\pm\) 0.14 & 3.01 \(\pm\) 0.24 \\ \(R\) & R\({}_{\odot}\) & 11.44 \(\pm\) 0.57 & 152.28 \(\pm\) 17.78 & 40.95 \(\pm\) 2.37 & 13.36 \(\pm\) 0.72 & 17.71 \(\pm\) 1.04 & 37.97 \(\pm\) 2.56 \\ log(L) & L\({}_{\odot}\) & 1.85 & 3.86 & 2.73 & 1.94 & 2.12 & 2.76 \\ Age & Ga & 1.78 & 0.07 & 0.35 & 1.58 & 1.02 & 0.32 \\ \hline Distance & pc & 938 & 2251 & 486 & 888 & 766 & 2176 \\ \(V\) & mag & 10.88 & 10.40 & 6.98 & 10.91 & 10.04 \(\pm\) 0.04 & 10.82 \(\pm\) 0.08 \\ \(\alpha\) & & 11:25:48.5 & 07:08:27.0 & 11:07:57.3 & 17:24:50.1 & 07:37:09.2 & 12:24:35.5 \\ \(\delta\) & & \(-\)43:09:52.5 & \(-\)13:12:32.9 & \(-\)58:17:26.3 & \(-\)49:56:56.1 & \(-\)13:54:24.0 & \(-\)61:49:11.7 \\ \hline \end{tabular} \end{table} Table 1: Stellar characteristics of the analysed planet-host candidates (the first three stars are presented here for the first time). Stellar parameters (above the horizontal line) were derived by Tsantaki et al. (2023). Distances and magnitudes are extracted from WEBDA and SIMBAD, respectively. \begin{table} \begin{tabular}{c c} \hline Star name & Gaia DR3 \\ \hline \hline NGC3680 No. 41 & 5382186662956114304 \\ NGC2345 No. 50 & 3044666242714331008 \\ NGC3532 No. 670 & 5340186143451130112 \\ IC4651 No. 9122 & 59495535973093167104 \\ NGC2423 No. 3 & 3030262468592291072 \\ NGC4349 No. 127 & 6054914812279154176 \\ \hline \end{tabular} \end{table} Table 2: Gaia DR3 identification for each target. 2010) to sample from the posterior distribution of the model parameter. kima also allows the inclusion of Gaussian processes to model stellar activity, although we did not consider this component for the analysis of the stars presented here (as in many cases the signals show coherence). In the present work, we use kima by inputting wide and uninformative priors for the orbital parameters, that is, the orbital period (log-uniform prior from 0.2 to 2000 days), the semi-amplitudes (modified log-uniform prior from 1 m/s to 1 km/s), and the eccentricities (uniform prior between 0 and 1). These priors, and indeed the whole analysis with kima, are not informed by the periodogram of the RVs or the activity indicators. As a double check, we performed a second analysis of each dataset with the _yorbit_ algorithm (Segransan et al., 2011) in order to fit the whole dataset with a model composed of a Keplerian function. 
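For reference, the single-Keplerian RV model fitted by kima and _yorbit_ can be sketched in a few lines of Python. This is not the internal implementation of either package; the function names and the example orbital angles are illustrative, while P, K, and e in the example are taken from the NGC3680 No. 41 fit reported in Sect. 4.

```python
# Minimal sketch of a Keplerian radial-velocity model.
import numpy as np

def solve_kepler(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E (Newton's method)."""
    E = np.array(M, dtype=float)
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def keplerian_rv(t, P, K, e, omega, Tp, gamma=0.0):
    """RV (m/s) at times t (days) for period P (days), semi-amplitude K (m/s), eccentricity e,
    argument of periastron omega (rad), time of periastron Tp (days), systemic velocity gamma (m/s)."""
    M = 2.0 * np.pi * (t - Tp) / P                        # mean anomaly
    E = solve_kepler(np.mod(M, 2.0 * np.pi), e)           # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),  # true anomaly
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# Example with the NGC3680 No. 41 candidate (P ~ 1155 d, K ~ 75 m/s, e ~ 0.21);
# omega and Tp are arbitrary illustrative values.
t = np.linspace(0.0, 2310.0, 200)
rv = keplerian_rv(t, P=1155.0, K=74.8, e=0.21, omega=0.5, Tp=0.0)
```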
_Yorbit_ uses a hybrid method based on a fast least-squares algorithm (Levenberg-Marquardt) and genetic operators (breeding, mutations, crossover), and is optimised to explore the parameter space for Keplerian fitting of RV datasets. In our case, the global search for the best-fitting orbital parameters was made with a genetic algorithm. As we show in the following subsections, the photon noise \(RV\) errors of our observations are typically below 3 m s\({}^{-1}\). However, the \(RV\) variations are dominated by the stellar jitter, which is much higher than the photon noise in this kind of star (typically of \(\pm\) 15-20 m s\({}^{-1}\), as discussed in Hekker et al., 2008; Delgado Mena et al., 2018; Lovis and Mayor, 2007). Therefore, when using _Yorbit_, we added in quadrature a 15 m s\({}^{-1}\) noise to the photon noise before fitting the data. In the case of kima, the RV jitter is a free parameter (with a uniform prior) that is already considered during the fitting process. To gain insight into possible activity modulations that could interfere with RV, we used ACTIN5 (Gomes da Silva et al., 2018) to obtain the activity indices from the spectra based on the H\(\alpha\), Na i D\({}_{1}\) and D\({}_{2}\), and He i D\({}_{3}\) lines. The line parameters for the Na i and He i indices are described in Gomes da Silva et al. (2011). For the H\(\alpha\) index, we used two versions with different bandpasses: H\(\alpha\)16, using a 1.6 Å passband (as used in our previous work), and H\(\alpha\)06, using a 0.6 Å passband (optimal for FGK dwarfs; see Gomes da Silva et al. 2022). A description of the ACTIN flux and indices determination is available in Gomes da Silva et al. (2021, Appendix A). In general, we could not use the Ca ii H&K lines for any of the stars due to the very low S/N (\(<\) 3) in the spectral orders containing those lines. In addition, the Na i and He i indicators do not show any signal for our stars and therefore we do not show them in the figures. Footnote 5: [http://github.com/gomesdasilva/ACTIN2](http://github.com/gomesdasilva/ACTIN2) To further investigate RV variations caused by stellar atmospheric phenomena, we also used the FWHM. These values and their errors are provided by the HARPS pipeline. Moreover, we also analysed the BIS (Queloz et al., 2001), but we note that this diagnostic of line asymmetry loses sensitivity for slow rotators (as is the case for some of the stars studied here; e.g. Saar et al., 1998; Santos et al., 2003; Queloz et al., 2009; Santos et al., 2014). Nevertheless, we have already observed significant correlations between RV and BIS (e.g. see the case of NGC2423 No. 3 in Paper II). We note that we only computed the periodograms of the activity indicators for the HARPS data because the number of observations per star with CORALIE is too low for meaningful results. Most of the stars in our survey were observed within the All Sky Automated Survey (ASAS) from the Las Campanas Observatory (Chile) with observations available between the end of 2000 and 2009 (Pojmanski, 2002). We downloaded the light curves in V magnitude from The ASAS-3 Photometric V-band Catalogue6 and performed periodograms to detect any possible variability with the same period as the RV variability. Footnote 6: [http://www.astrouw.edu.pl/asas/?page-aas](http://www.astrouw.edu.pl/asas/?page-aas) ## 4 Is the RV variation of NGC3680 No. 41 due to the presence of a planet? 
### Parent-star characteristics Our sample contains five giant stars in the open cluster NGC3680 (Age = 1.78 Ga) with an average metallicity of [Fe/H] = \(-0.15\pm 0.02\) dex (see Tables 1 and 5 of Tsantaki et al., 2023). One of these stars, NGC3680 No. 41 with T\({}_{\rm eff}\) = \(4612\pm 12\) K, \(\log g\) = \(2.45\pm 0.04\) dex, M\({}_{*}\) = \(1.64\pm 0.05\) M\({}_{\odot}\), and R\({}_{*}\) = \(11.44\pm 0.47\) R\({}_{\odot}\) (see Table 1) seems to be on the first ascent of the RGB, and close to the luminosity bump (see Fig. 1). The mean RV of the giants in this cluster is \(1.23\pm 0.65\) km s\({}^{-1}\) while the mean RV of NGC3680 No. 41 is 1.59 km s\({}^{-1}\), and therefore this star is likely a cluster member. This is also supported by its parallax value. In order to obtain an estimation of the maximum rotational period of this star (which is relevant for the interpretation of the observed signals), we can use the projected rotational velocity (\(v\) sin \(i\)) and the radius. The \(v\) sin \(i\) of the sample stars were estimated by Tsantaki et al. (2023) using spectral synthesis and by considering a fixed macroturbulence velocity (with the empirical formula for giants by Hekker and Melendez, 2007), which in this case has a value of 5.04 km s\({}^{-1}\). For this star, the \(v\) sin \(i\) is very low and below the detectability threshold (i.e. the value of the macroturbulence velocity is large and can perfectly account for the broadening of the spectra). Therefore, we cannot provide a reliable estimation of the stellar rotational period but we can safely claim that this star is a slow rotator. ### Radial-velocity analysis For NGC3680 No. 41, we have 18 RV measurements obtained with HARPS over a time period of 9 years (between 2013 and 2022) and 5 measurements with CORALIE obtained over one year (2003-2004), therefore with a large time gap between the two data series. The average photon noise \(RV\) uncertainty is 2.6 and 2.1 m s\({}^{-1}\) for HARPS and CORALIE data, respectively. An analysis using generalised Lomb-Scargle (GLS) periodograms (e.g. Zechmeister and Kurster, 2009) was performed for RV (see upper panel of Fig. 2). For each star, the minimum period to search in the periodogram was defined as the minimum cadence of observations when these are done with a separation of more than 10 days. Otherwise, the minimum period is set at 10 days. The false-alarm probability (FAP) was computed by bootstrapping the data and statistically significant peaks were considered for values above FAP = 1%. A significant signal with a period of \(\sim\) 1127 days can be clearly observed in the periodogram of HARPS data only. Using kima, the two RV sets are well fitted by a Keplerian function with \(P\) = 1155 days, \(K\) = \(74.79\) m s\({}^{-1}\), and \(e\) = 0.21 (see Table 5) with a \(16.35\) m s\({}^{-1}\) dispersion of the residuals. The posterior distribution for the fitted extra white noise is centred at 20\(\pm\)5 m s\({}^{-1}\), which is in agreement with the added jitter when using _yorbit_ (as explained in Sect. 3). Considering the mass of the star, these values correspond to the expected signal induced by a planet with 5.13 M\({}_{J}\) and a 2.53 AU semi-major axis. The phase curve of the best-fit solution can be seen in Fig. 3. The results using _yorbit_ are very similar and the data can be best fit with a single Keplerian with a period of 1154 days, \(K\) = 74.7 m s\({}^{-1}\), and \(e\) = 0.19 (figures from _yorbit_ can be seen in the Appendix, Fig. 
1) The dispersion of the residuals is 16 m s\({}^{-1}\) and the reduced \(\chi^{2}\) is 1.37. In the following subsections, we assess whether or not this signal could be of stellar origin by evaluating different stellar activity or variability indicators. ### Photometry We found 664 photometric measurements classified as good quality (these are given grade A or B, with average errors of 0.045 mag), which show a \(\sim\) 0.3 peak-to-peak variability in V magnitude. In Fig. 4, we can see that the GLS periodogram shows no signals above the FAP level and therefore the RV variations are likely not linked to photometric variability. ### Stellar activity and line profile analysis The GLS periodograms of the FWHM and the BIS do not show any significant period (see Fig. 2). However, the RV variations from HARPS seem to be correlated with the BIS (a Pearson's coefficient value of 0.49 and a Spearman rank coefficient of 0.54; see Table 3 and Fig. 5), although these correlations are not significant (with a p-value above 0.01). On the other hand, for the CORALIE data, the RV-BIS correlation is much stronger (Pearson's coefficient value of 0.85, though this is based on five points and is not significant, with a p-value of 0.08). In addition, the analysis of the H\(\alpha\)06 index does not show any periodic behaviour, but the GLS of the H\(\alpha\)16 index has a peak at the 1% FAP level for a period of 312 days. As we cannot tightly constrain the rotational period of the star, it is difficult to confidently discern whether or not this activity signal is a manifestation of the rotation of the star. As a further test, we detrended the RV points based on a linear fit to the RV versus BIS correlation. When performing a periodogram of the detrended RVs, the peak at 1127 days still shows a strong power of close to 0.8, but below the 1% FAP line. Although the activity indicators do not exhibit any periodic behaviour with the same period as the RV, the possible RV-BIS correlation mentioned above casts doubt on the planetary hypothesis and is further discussed in Sect. 8. Furthermore, we note that the correlation between the BIS and the RV residuals of the Keplerian fit becomes weaker (with a Pearson's coefficient value of 0.06), which could point to a stellar origin of the variation. Figure 2: GLS periodogram of the RV, FWHM, BIS, and stellar activity indexes for NGC3680 No. 41 HARPS data. The blue dashed line marks the period of the planet candidate. The period of the strongest peak in each periodogram is marked with a red solid line. The horizontal line marks the 1% FAP level. The last panel shows the periodogram for the window function. Figure 3: One Keplerian fit for NGC3680 No. 41 using kima. The top panel shows the phase curve for the signal, with the residuals shown below, highlighting the rms of the residual RVs. The bottom panel shows the RV data from HARPS (red points) and CORALIE (blue points) and representative samples from the posterior distribution. The systemic RV has been subtracted and is shown at the top. ## 5 Large RV variations in NGC2345 No. 50 mimicking the presence of massive companions ### Parent star characteristics Our sample contains only four giant stars in the open cluster NGC2345 (Age = 0.07 Ga) with an average metallicity of [Fe/H] = \(-0.22\pm 0.02\) dex (see Tables 1 and 5 of Tsantaki et al. 2023). The star analysed here, NGC2345 No. 
50, with \(\rm\,T_{eff}=3962\pm 10\) K, \(\rm\,\log{\it g}=0.87\pm 0.07\) dex, \(\rm M_{*}=5.84\pm 0.61\) M\({}_{\odot}\), and \(\rm R_{*}=152.28\pm 17.78\) R\({}_{\odot}\) (see Table 1), is the most evolved star in the cluster and seems to be ascending the early asymptotic giant branch (AGB; see Fig. 1). This star is very interesting because it is the youngest and most massive in the full sample. Despite the fact that the other stars in the cluster have very few observations, we are able determine the mean RV of the cluster as \(58.61\pm 0.56\) km s\({}^{-1}\) while the mean RV of NGC2345 No. 50 is 59.21 km s\({}^{-1}\). Although the scatter is large due to the large RV variations observed in the star, its RV is within \(\sim 1\sigma\) of the RV dispersion in the cluster, and taking into consideration its parallax, this star seems to be a cluster member. We note that recent studies in this cluster by Holanda et al. (2019) and Alonso-Santiago et al. (2019) also catalogue this star as a member and obtain similar stellar parameters. Both mentioned studies place this star in the RGB, as also suggested by its not very low \({}^{12}\)C/\({}^{13}\)C ratio. We estimated a \(v\) sin \(i\) of 5.27 km s\({}^{-1}\) in Tsantaki et al. (2023) for a fixed macroturbulence velocity of 6.06 km s\({}^{-1}\) (with the empirical Figure 4: GLS of V magnitude for NGC3680 No. 41 using ASAS-3 data. The horizontal lines indicate the FAP at levels of 0.1%, 0.5%, and 1%. The vertical red lines show the periods of the three signals with the highest significance. Figure 5: RV versus BIS correlation for NGC3680 No. 41 Figure 6: GLS periodogram of the RV, FWHM, BIS, and stellar activity indexes for NGC2345 No. 50 HARPS data. The blue dashed line marks the period of the planet candidate. The period of the strongest peak in each periodogram is marked with a red solid line. The horizontal line marks the 1% FAP level. The last panel shows the periodogram for the window function. \begin{table} \begin{tabular}{l||c c|c c|c c|c c|c c|c c||c c|c c} & \multicolumn{4}{c||}{FWHM} & \multicolumn{4}{c||}{BIS} & \multicolumn{4}{c||}{H\(\alpha\)06} & \multicolumn{4}{c}{H\(\alpha\)16} \\ \hline star & P & p & S & p & P & p & S & p & P & p & S & p & P & p & S & p \\ NGC3680No.41 & 0.26 & 0.287 & 0.32 & 0.185 & 0.48 & 0.039 & **0.54** & 0.019 & 0.12 & 0.638 & 0.06 & 0.797 & 0.13 & 0.602 & 0.06 & 0.797 \\ NGC2345No.50 & **-0.72** & **710\({}^{*}\)** & **-0.76** & **1.10\({}^{*}\)** & **0.51** & 0.011 & 0.46 & 0.022 & **0.76** & **1.10\({}^{*}\)** & **0.76** & **1.10\({}^{*}\)** & **0.85** & **910\({}^{*}\)** & **0.76** & **1.10\({}^{*}\)** \\ NGC3532No.670 & 0.21 & 0.284 & 0.17 & 0.358 & 0.42 & 0.022 & 0.37 & 0.044 & 0.45 & 0.013 & 0.44 & 0.016 & **0.74** & **4.10\({}^{*}\)** & **0.75** & **2.10\({}^{*}\)** \\ \hline \hline IC4651No.9122 & 0.00 & 0.984 & -0.06 & 0.570 & 0.28 & 0.013 & 0.34 & **0.003** & 0.20 & 0.084 & 0.13 & 0.251 & 0.21 & 0.071 & 0.27 & 0.017 \\ NGC2423No.3 & 0.20 & 0.117 & 0.17 & 0.180 & **0.61** & **2.10\({}^{*}\)** & **0.54** & **7.10\({}^{*}\)** & 0.12 & 0.366 & 0.16 & 0.216 & 0.22 & 0.087 & 0.22 & 0.091 \\ NGC4349No.127 & -0.17 & 0.191 & -0.17 & 0.192 & 0.08 & 0.514 & 0.145 & 0.280 & 0.13 & 0.331 & 0.15 & 0.248 & 0.40 & **0.002** & 0.35 & **0.006** \\ \end{tabular} \end{table} Table 3: Pearson’s \(\rho\) (P) and Spearman rank (S) correlation coefficients, together with their respective p-values, for the correlations between RV and different activity indicators of only the HARPS data. 
We mark in boldface the strong correlations (those that have a coefficient with an absolute value of higher than 0.5) and the significant correlations, even if weak (those with a p-value below \(10^{-2}\)). formula for bright giants by Hekker & Melendez (2007)). Using the stellar radius, we can derive the maximum rotational period via \(P_{max}=2\pi R_{*}/vsini\). This leads to a maximum rotational period of \(\sim\) 1463 days. ### Radial velocity analysis A total of 24 RV measurements were obtained with HARPS over \(\sim\)5 years (between 2017 and 2022). The average photon noise error in \(RV\) is 3.3 m s\({}^{-1}\). The GLS periodogram was performed for RV (see upper panel of Fig. 6), which shows a strong signal (well above the 0.1% FAP level) at a period of 1007 days. The analysis with k\(\hat{\rm{m}}\)a reveals that the maximum likelihood is obtained for a model with one Keplerian with a period of 1001 days, \(K=488.17\) m s\({}^{-1}\), and \(e=0.03\). Considering the mass of the star, these values would correspond to the expected signal induced by a body with 3.5 AU semi-major axis and with 79.34 M\({}_{J}\), a value that places this companion at the threshold between brown dwarf and low-mass star. The phase curve of the best-fit solution can be seen in Fig. 7. The posterior distribution for the fitted extra white noise is centred at 55\(\pm\)18 m s\({}^{-1}\). This value is relatively high but is to be expected in very evolved stars, as is the case for this star (Hekker et al. 2008). The residuals of the fit have a dispersion of 60.4 m s\({}^{-1}\), which appears large and might be indicative of an additional signal. Indeed, the analysis performed by _yorbit_ with a single Keplerian provides very similar output parameters (P=1001 days, \(K=488.3\) m s\({}^{-1}\) and \(e=0.036\)), including the large dispersion of the residuals and a large reduced \(\chi^{2}\) of 22.5 (see Fig. 2). We therefore attempted to make a two-Keplerian fit with _yorbit_ to model both the large-amplitude signal (probably caused by activity, as we show below) and a possible existing substellar body in the system. We obtained a solution for a system with a low-mass star and a brown dwarf, with the period of the 'low-mass star' being 1007 days (close to the period of the single Keplerian fit explained above) and with a slightly larger semi-amplitude (\(K=554\) m s\({}^{-1}\)) corresponding to a mass of 87.36 M\({}_{J}\). The brown-dwarf candidate would have a period of 561 days, a mass of 23.27 M\({}_{J}\), and an eccentricity of 0.28 (see Fig. 1). In this case, the dispersion of the residuals is lower, 32 m s\({}^{-1}\), and the reduced \(\chi^{2}\) is 8.1; this model therefore better represents the observed data than the single Keplerian fit. We also performed a GLS periodogram of the residuals, which did not show any significant peak. A complete analysis of activity and photometry signals is presented in the following subsection, with the aim being to better understand if the brown-dwarf candidate is bona fide. ### Photometry We found 501 photometric measurements in the ASAS-3 catalogue classified as good quality (they are given a grade A or B, with average errors of 0.049 mag) that show a \(\sim\) 0.2 peak-to-peak variability in V magnitude (not considering the observations in the first epoch showing a large scatter). In Fig. 8, the GLS periodogram for this dataset shows several signals well above the FAP = 0.1% line with periods of 3000, 1239, 329, 422, and 286 days. 
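The companion parameters quoted above follow from the standard RV relation for the minimum mass, \(m\sin i \approx K\sqrt{1-e^{2}}\,(P/2\pi G)^{1/3}M_{*}^{2/3}\) (valid for \(m\ll M_{*}\)), together with Kepler's third law for the semi-major axis. The short sketch below (assuming astropy; function name is illustrative) reproduces the \(\sim\)79 M\({}_{J}\) and \(\sim\)3.5 AU values reported for the one-Keplerian fit of NGC2345 No. 50 to within rounding.

```python
# Minimum mass and semi-major axis from an RV orbital solution.
import numpy as np
from astropy import units as u
from astropy.constants import G

def msini_and_a(P_day, K_ms, e, M_star_sun):
    P = (P_day * u.day).to(u.s)
    K = K_ms * u.m / u.s
    M_star = (M_star_sun * u.M_sun).to(u.kg)
    msini = K * np.sqrt(1 - e**2) * (P / (2 * np.pi * G))**(1 / 3) * M_star**(2 / 3)
    a = (G * M_star * P**2 / (4 * np.pi**2))**(1 / 3)
    return msini.to(u.M_jup), a.to(u.au)

# NGC2345 No. 50 fit from the text: P = 1001 d, K = 488 m/s, e = 0.03, M* = 5.84 M_sun.
print(msini_and_a(1001.0, 488.2, 0.03, 5.84))   # ~78-79 M_Jup and ~3.5 au
```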
Considering the errors associated with the determination of the stellar radius and especially with the determination of \(v\) sin \(i\) (due to the degeneracy with the macroturbulence velocity), the signal at 1239 days is compatible with the maximum rotational period estimation of 1463 days. On the other hand, none of these photometric signals are close in period to the large RV variation observed at 1007 days, nor to the signal at 561 days of possible brown dwarf origin. ### Stellar activity and line-profile analysis When studying the RV signals of this star, we first noted a very strong correlation with the FWHM (with a Pearson's coefficient value of -0.72 and a p-value of \(7\times 10^{-5}\), also strong and significant with the Spearman statistics; see Table 3), which made us suspicious of the large RV variability observed. On the other hand, the correlation with the BIS is not significant. We also performed the GLS periodograms for the different activity indicators (see Fig. 6). In this case, the FWHM shows a non-significant peak at 957 days and the BIS has a non-significant peak at 1409 days. It is possible that this latter signal is related to the 1239 d signal observed in photometry. The most significant signals are seen in the H\(\alpha\) line. The H\(\alpha\)06 index shows a strong signal above the 1% FAP level at 962 days, close to the 1007 d period of the RV. Furthermore, there are two signals at 415 and 586 days, though these are not significant. The situation is much clearer when using the H\(\alpha\)16 index, which shows a strong signal above the 1% FAP level at 1113 days and another signal above the 1% FAP level at 545 days. These two periods are suspiciously close to the 1007 d and 561 d periods of the two-Keplerian fit and therefore challenge the presence of either of the two companions in the system. Furthermore, the cadence of the observations introduces yearly aliases in the periodogram (see the strong peak at 360 days in the window function periodogram). The yearly aliases of the signal at 1007 days are exactly at 286 and 573 days, with this second signal also being observed in the activity indicators and producing the RV variability that mimics a second body. We refer the reader to Table 4 for a summary of the different peaks in the periodograms. In addition, we checked whether the Gaia data reveal the presence of a low-mass star companion through elevated astrometric noise. We consulted the renormalised unit weight error (RUWE7) statistic, which has a value of 1.05 for this star (well below 1.4) and therefore does not show evidence of a companion. This fact further supports the conclusion that the large-amplitude signal is of stellar origin and not caused by a low-mass star companion. Footnote 7: [https://gea.esac.esa.int/archive/documentation/](https://gea.esac.esa.int/archive/documentation/) GRB2/Gaia_archive/chap_datamodel/sec_dh_main_tables/ ssec_dm_ruwe.html Figure 7: One Keplerian fit for NGC2345 No. 50 using kima. The top panel shows the phase curve for the signals, with the residuals shown below, highlighting the rms of the residual RVs. The bottom panel shows the RV data from HARPS and representative samples from the posterior distribution. The systemic RV has been subtracted and is shown at the top. ## 6 A planet candidate around NGC3532 No. 670 hidden in stellar activity? 
### Parent star characteristics Our sample contains seven giant stars in the open cluster NGC3532 (Age = 350 Ma) with an average metallicity of [Fe/H] = -0.08 \(\pm\) 0.03 dex (see Tables 1 and 5 of Tsantaki et al. 2023). One of its members, NGC3532 No. 670, with T\({}_{\rm eff}\) = 4347 \(\pm\) 11 K, \(\log g\) = 1.75 \(\pm\) 0.05 dex, M\({}_{*}\) = 3.05 \(\pm\) 0.23 M\({}_{\odot}\), and R\({}_{*}\) = 40.95 \(\pm\) 2.37 R\({}_{\odot}\) (see Table 1), seems to be at the tip of the RGB but it is not clear whether it is approaching the tip of the RGB (see Fig. 1). The mean RV of the giants in this cluster is 4.03 \(\pm\) 0.48 km s\({}^{-1}\) while the \begin{table} \begin{tabular}{l|c|c|c c c c c||c c c c} \hline Star & \(P_{\rm max}\) & RV & FWHM & BIS & H\(\alpha\)06 & H\(\alpha\)16 & Phot & RV & FWHM & BIS & H\(\alpha\)06 & H\(\alpha\)16 \\ \hline NGC3680N\(\alpha\). 41 & – & **1128** & – & – & – & – & & – & – & – & – & – \\ NGC2345N\(\alpha\). 50 & 1463 & **1007** & 957 & 1409 & **962** & **1113** & **1239** & 561 & – & – & 586 & **545** \\ NGC3532N\(\alpha\). 670 & 447 & **842** & **811** & – & – & **889** & **412** & 625 & **619** & – & – & 574 \\ \hline IC4651N\(\alpha\). 9122 & 995 & **744** & – & – & – & 985 & – & 384 & **364/393** & – & – & – \\ NGC2423N\(\alpha\). 3 & 407 & **697** & – & **697** & – & **417** & 435 & **380** & – & – & **393** \\ NGC4349N\(\alpha\). 127 & 399 & **674** & **675** & – & – & – & **342/428** & – & – & – & – \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of the strongest periods found in the different periodograms. First set of columns: Maximum stellar rotational period and periods of the most significant peak(s) in each GLS periodogram that match the most significant RV period. We highlight in blue the periods that are compatible with the estimated stellar rotation period. Second set of columns: For the stars with a possible two-Keplerian fit, we list the periods of the periodogram peaks that match the period of the hypothetical second companion in the system. The periodogram peaks that lie above the 1% FAP level are marked in boldface. Figure 8: GLS periodogram of V magnitude for NGC2345 No. 50 using ASAS-3 data. The horizontal lines indicate the FAP at the levels of 0.1%, 0.5%, and 1%. The vertical red lines show the periods of the five signals with the highest significance. The time series are fitted by a sinusoidal with the period showing the highest significance. Figure 9: GLS periodogram of the RV, FWHM, BIS, and stellar activity indexes for NGC3532 No. 670 HARPS data. The blue dashed line marks the period of the planet candidate. The period of the strongest peak in each periodogram is marked with a solid red line. The horizontal line marks the 1% FAP level. The last panel shows the periodogram for the window function. mean RV of NGC3532 No. 670 is 4.20 km s\({}^{-1}\). This similar value, together with its parallax, suggests that this star is likely a cluster member. We estimated a \(v\) sin \(i\) of 4.64 kms in Tsantaki et al. (2023) for a fixed macroturbulence velocity of 4.52 kms (with the empirical formula by Hekker & Melendez 2007, for giant stars). This leads to a maximum rotational period of \(\sim\) 447 days. ### Radial velocity analysis For this star, we collected a total of 29 RV measurements with HARPS over \(\sim\)9 years (between 2013 and 2022) although there is a gap of 2.5 years without observations. 
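The maximum rotational periods quoted in this work follow directly from \(P_{max}=2\pi R_{*}/v\sin i\); the short check below (assuming astropy units) reproduces the \(\sim\)447 d estimate for NGC3532 No. 670 and the \(\sim\)1463 d estimate for NGC2345 No. 50 from the stellar parameters of Table 1.

```python
# Maximum rotational period from the stellar radius and projected rotational velocity.
import numpy as np
from astropy import units as u

def p_max(radius_rsun, vsini_kms):
    return (2 * np.pi * radius_rsun * u.R_sun / (vsini_kms * u.km / u.s)).to(u.day)

print(p_max(40.95, 4.64))    # NGC3532 No. 670: ~447 d
print(p_max(152.28, 5.27))   # NGC2345 No. 50: ~1463 d
```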
The average photon noise error in \(RV\) is 1.3 m s\({}^{-1}\), which is due to the fact that this star is considerably brighter than the previous two described in Sections 4 and 5, respectively. The time series of the RV show a clear variation of large amplitude. This is also demonstrated by the GLS periodogram of the RV (see upper panel of Fig. 9), which presents a strong signal (well above the 1% FAP level) at a period of 842 days. The most probable solution obtained with kima is that with a single brown dwarf of 22.7 M\({}_{J}\) with a period of 842.4 days and an eccentricity of 0.12 (\(K\) = 235.3 m s\({}^{-1}\); see Fig. 10 for the phase-folded orbit and the \(RV\) time series). The posterior distribution for the fitted extra white noise is centred at 31\(\pm\)9 m s\({}^{-1}\), and the residuals of the fit show a dispersion of 45.1 m s\({}^{-1}\), which might be indicative of another signal that cannot be fitted with a Keplerian. The one-Keplerian fit with _yorbit_ provides almost identical parameters (see Fig. 11). The dispersion of the residuals is 48.8 m s\({}^{-1}\) and the reduced \(\chi^{2}\) is 13.24. Considering this dispersion and the hints of variability in the residuals, we also tried to perform a two-Keplerian fit with _yorbit_. In this case, the largest-amplitude RV signal has the same period (844 days), and slightly larger semiamplitude (\(K\) = 246.2 m s\({}^{-1}\)) and eccentricity than the single Keplerian fit. A second signal with a period of 625 days, \(K\) = 98.6 m s\({}^{-1}\), and \(e\) = 0.29 could be explained by a planet candidate of 8.3 M\({}_{J}\) and a 2.07 AU semi-major axis (see Fig. 12). ### Photometry We found 549 photometric measurements classified as good quality (they are given a grade A or B, with average errors of 0.046 mag) that show a \(\sim\) 0.3 peak-to-peak variability in V magnitude (excluding the probable outlier points above V = 7.2). The GLS periodogram depicted in Fig. 11 shows four signals above the FAP = 0.1% line with periods of 3333, 412, 1363, and 503 days. None of these periods match the signal at 842 days found in the RV time series or the planet candidate signal in the two-Keplerian fit. However, the period at 412 days is compatible with the estimated rotational period of the star and this would suggest that the RV signal at 842 days is a harmonic of the rotational period. ### Stellar activity and line profile analysis For this star, we observed that the RV variations are not strongly correlated with the BIS, nor with the FWHM (see Table 3 for Figure 11: GLS periodogram of V magnitude for NGC3532 No. 670 using ASAS-3 data. The horizontal lines indicate the FAP at levels of 0.1%, 0.5%, and 1%. The vertical red lines show the periods of the four signals with the highest significance. The time series are fitted by a sinusoidal with the period showing the highest significance. Figure 10: One Keplerian fit for NGC3532 No. 670 using kima. The top panel shows the phase curve for the signals, with the residuals shown below, highlighting the rms of the residual RVs. The bottom panel shows the RV data from HARPS and representative samples from the posterior distribution. The systemic RV has been subtracted and is shown at the top. the Spearman and Pearson coefficient and p-values), providing tentative evidence that the large-amplitude RV signal might be of true brown dwarf or planetary origin. However, as we also observed for similar stars in our previous work (e.g. NGC4349 No. 
127, Paper II), a lack of correlation can be produced if these activity indicators are not in phase with the RV. This can be assessed with the GLS periodogram of the different activity indicators (shown in Fig. 9). The FWHM shows a signal at 811 days that is probably related to the RV signal at 842 days and a second signal at 619 days also above the 1% FAP level. Therefore, the lower-amplitude signal for the two-Keplerian orbit seems to also be of stellar origin. Indeed, the window function has a significant peak at 363 days that is introducing yearly aliases in the periodograms. The yearly aliases of the 842 d signal fall at 254 and 641 days, whereas the yearly aliases of 625 days fall at 230 and 872 days. Therefore, these two signals are yearly aliases of each other. On the other hand, the BIS does not show any significant period. As was found for NGC2345 No. 50, the strongest signals are observed for the H\(\alpha\)16 indicator, which has a peak at 889 days that is well above the 1% FAP level. Curiously, this star shows no significant signal in the H\(\alpha\)06 indicator. ## 7 Revision of previous planetary candidates In this section, we present a summary of the updated results with the new data collected for the targets of Paper II, where we presented the cases of three stars that show large-amplitude and long-term signals disguised as substellar companions. ### IC4651No9122 We have collected 15 additional RV points since 2018 (for a total of 74). In paper II, this star is shown to have a strong RV signal at 747 days but with signals in FWHM and H\(\alpha\)16 at 714 days (above or close to the FAP level), which led us to refrain from claiming the presence of a planet around this star. However, in the GLS of the updated time series (see Fig. 12), none of the activity indicator signals has a significant peak at the same period as the most significant peak in the RV at 744 days. We also checked that the RV signal at 744 days is not an alias of other signals. The window function shows four significant peaks at 30, 365, 1785, and 4466 days. The three longer-period signals introduce significant aliases of the 744 d signal in the RV periodogram at periods of 245, 1275, and 893 days, respectively. The RV analysis with kima provides a two-Keplerian fit as the solution with the largest likelihood. The time series and orbital phase of the two planet candidates with periods of 743 days (\(K=99.6\) m s\({}^{-1}\)and \(e=0.13\)) and 384 days (\(K=27.7\) m s\({}^{-1}\)and \(e=0.25\)) can be seen in Fig. D.1. However, the period for the possible second planet in the system is very close to two significant peaks in the FWHM periodogram at 364 and 393 days. Figure 12: GLS periodogram of the RV, FWHM, BIS, and stellar activity indexes for IC4651 No. 9122 HARPS data. The blue dashed line marks the period of the planet candidate. The period of the strongest peak in each periodogram is marked with a red solid line. The horizontal line marks the 1% FAP level. The last panel shows the periodogram for the window function. Figure 13: One Keplerian fit for IC4651 No. 9122 using kima. The top panel shows the phase curve for the signal, with the residuals shown below, highlighting the rms of the residual RVs. The bottom panel shows the RV data from HARPS and representative samples from the posterior distribution. The systemic RV has been subtracted and is shown at the top. This case again highlights the importance of the FWHM as a reliable stellar indicator. 
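As a sanity check of the alias periods quoted above, an alias of a true period \(P_{true}\) caused by a window-function peak at \(P_{win}\) appears at \(f_{alias}=|1/P_{true}\pm 1/P_{win}|\). The sketch below recovers the 245, 1275, and 893 d aliases of the 744 d signal from the window-function peaks listed in the text.

```python
# Alias periods produced by peaks in the window function.
import numpy as np

def alias_periods(p_true, p_win):
    f = np.array([abs(1.0 / p_true + 1.0 / p_win), abs(1.0 / p_true - 1.0 / p_win)])
    return 1.0 / f

for p_win in (365.0, 1785.0, 4466.0):   # window-function peaks quoted for IC4651 No. 9122
    print(p_win, alias_periods(744.0, p_win))
# The 1-yr peak gives the ~245 d (sum) alias; the 1785 d and 4466 d peaks give the
# ~1276 d and ~893 d (difference) aliases, matching the values quoted in the text.
```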
Nevertheless, we note that these signals in FWHM could simply be caused by the window function, which has a peak at 366 days, and is not related to activity. On the other hand, if those are true stellar signals, then it is worth noting that they might be the first harmonic of the signal at 744 days (\(P/2=372\) days), which would cause us to doubt the hypothesis that IC4651No9122 is a true planet candidate. With the data in hand, we can at least be sure that the signal fitted with the second keplerian at 384 days is either of stellar origin (reflected in the FWHM if real) or is caused by the window function or a harmonic of the 744 days signal. Therefore, we consider only the signal at 744 days as a planet candidate and repeated the fit with kima, fixing the number of planets to just one (see Fig. 13). In this case, as expected, the residuals of the fit have a larger dispersion (20.39 m s\({}^{-1}\)) and the posterior distribution for the fitted extra white noise is also centred at a larger value (20.5\(\pm\)1.8 m s\({}^{-1}\)). The final planet candidate has a period of 746 days (\(K=94.98\) m s\({}^{-1}\)and \(e=0.04\)) and a minimum mass of 6.22M \(J\) (see Table 5 for the detailed parameters). We note that, in the present work, we are also using the updated stellar parameters from Tsantaki et al. (2023), and now the estimated \(v\sin i\) is 0.68 km s for a fixed macroturbulence velocity of 4.98 kms (with the empirical formula by Hekker & Melendez 2007, for giant stars). This leads to a maximum rotational period of \(\sim\) 995 days. This period is close to the period of 985 days seen in the H\(\alpha\)16 periodogram; however, it is not significant and prevents us from relating them. Therefore, we cannot completely rule out the possibility that the 744 d RV signal is related to the star's rotation (if this were shorter in reality). The lack of similar periods in the activity indicator periodograms (as well as the lack of strong correlations; see Table 3) suggests that this is a bona fide planet but the proximity of the two significant FWHM peaks to the first harmonic of the strong RV signal warns us of the possibility of a non-planetary signal. We note that if the recent FWHM values were corrected to take into account the observed offset (see a more detailed discussion in Appendix A), the GLS periodogram of the FWHM would show a significant peak at 687 days. This period seems safely far from the RV period at 744 days. Additionally, the peaks seen at 364 and 393 d would go below the 1% FAP level, which would support the planet-candidate hypothesis. Nevertheless, we can see with this example how the usefulness of the FWHM indicator can be limited if corrections need to be applied. Therefore, additional data will be needed to further assess the nature of the signals observed here. ### Ngc2423n03 We collected a further 16 RV points for this star (raising the total to 92, with 32 of them from CORALIE). In our previous work, this star had a strong RV signal at 698 days but with a strong RV-BIS correlation and a peak in the BIS periodogram close to the FAP level at a similar period. The GLS of the complete dataset is shown in Fig. 14. The RV periodogram presents the most significant peak at 697 days (with two aliases at 828 and 614 days caused by the long gap in time between the two sets of observations, specifically by the peak at 4506 days in the window function periodogram). 
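For readers who wish to reproduce the periodogram analysis used throughout this work (GLS periodogram plus a bootstrapped false-alarm probability and the 1% FAP level shown in the figures), the following is a minimal sketch using astropy; the authors do not state which implementation they used, and `t`, `rv`, and `rv_err` are placeholder arrays standing in for a HARPS time series (in the paper, a 15 m s\({}^{-1}\) jitter is also added in quadrature to the photon noise).

```python
# Generalised Lomb-Scargle periodogram with bootstrapped false-alarm levels.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 6000, 90))                                    # days, hypothetical sampling
rv = 130.0 * np.sin(2 * np.pi * t / 697.0) + rng.normal(0, 20, t.size)   # m/s, injected 697 d signal
rv_err = np.full(t.size, 20.0)                                           # m/s

ls = LombScargle(t, rv, rv_err)          # floating-mean model, i.e. a GLS periodogram
freq, power = ls.autopower(minimum_frequency=1 / 6000.0, maximum_frequency=1 / 10.0)

best_period = 1.0 / freq[np.argmax(power)]
fap = ls.false_alarm_probability(power.max(), method='bootstrap')
level_1pc = ls.false_alarm_level(0.01, method='bootstrap')   # the 1% FAP line of the figures
print(best_period, fap, power.max() > level_1pc)
```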
In addition, the window function shows several significant signals around one year, which in turn produce aliases around 230 days in the RV periodogram. As opposed to previous findings, with the new data the BIS periodogram shows a significant peak at 697 days. Moreover, the RV-BIS correlation has a significant Pearson coefficient as well Figure 14: GLS periodogram of the RV, FWHM, BIS, and stellar activity indexes for NGC2423 No. 3 HARPS data. The blue dashed line marks the period of the planet candidate. The period of the strongest peak in each periodogram is marked with a red solid line. The horizontal line marks the 1% FAP level. The last panel shows the periodogram for the window function. Figure 15: GLS periodogram of the RV, FWHM, BIS, and stellar activity indexes for NGC4349 No. 127 HARPS data. The blue dashed line marks the period of the planet candidate. The period of the strongest peak in each periodogram is marked with a solid red line. The horizontal line marks the 1% FAP level. The last panel shows the periodogram for the window function. as Spearman rank statistics (see Table 3). With the new data, both kima and _yorbit_ would fit the RV time series with a two-Keplerian orbit caused by planetary bodies at 700 and 435 days (with \(K\sim\) 130 and 30 m s\({}^{-1}\), respectively, see Fig D.2). However, the long-period signal appears to be of stellar origin given the correlation with the BIS, and the second signal has a period similar to one of the peaks seen in the periodogram of the photometric data (period of 417 days, see Fig. 8 in Paper II). In addition, both the FWHM and H\(\alpha\)16 periodograms show significant peaks at 380 and 393 days, respectively. We note that with the updated stellar parameters from Tsantaki et al. (2023), we estimated a \(v\) sin \(i\) of 2.19 kms (for a fixed macroturbulence velocity of 4.88 kms), which leads to a maximum rotational period of \(\sim\) 407 days. These findings lead us to suspect that all these signals around 400 days are caused by rotational modulation of active regions, which in turn also produces the 435 d RV signal fitted in the second Keplerian orbit. ### Ngc4349n0127 For this target, we previously worked with 46 HARPS measurements up to 2009 (as presented in our previous publication) and now we have 11 additional points (between 2018-2022). This star previously showed a large RV signal at 672 days but with a strong and clear signal in FWHM at 666 days (and also the periodogram of H\(\alpha\)16 had a close to significant peak at 689 days, see Paper II). Those findings indicated that the brown dwarf presented in Paper I was not real. We collected additional data to evaluate the stability of the RV and FWHM signals (as some examples in the literature have shown changes in phase for long-term RV signals, e.g. Aldebaran Reichert et al. 2019) and to learn more about its origin. With the new data, the RV time series can also be fitted with a single Keplerian with the same period, of namely 674 days (\(K\sim\) 226 m s\({}^{-1}\); see Fig. D.3). However, this long-term signal is also observed in the FWHM periodogram (see Fig. 15), as was previously found to be the case, but not in the H\(\alpha\) periodograms. However, we note that we find a weak but significant correlation between RV and H\(\alpha\)16 (see Table 3). Interestingly, we do not observe a change of phase in the RV (see the orbital phase in Fig. D.3), but if we fit the FWHM with a sinusoidal with a period of 675 days (see Fig. 
D.4), there seems to be a change in the phase. In addition, the window function shows a strong peak at 4719 days caused by the long gap in between the observations, which in turn produces significant aliases of 588 days (seen in the RV and FWHM periodograms) and 785 days, which can be seen in the FWHM periodogram. For this star, we also have updated stellar parameters from Tsantaki et al. (2023), for which we estimated a \(v\) sin \(i\) of 4.81 kms (for a fixed macroturbulence velocity of 4.66 kms), which leads to a maximum rotational period of \(\sim\) 399 days. The periodogram of the photometric data shows two significant peaks at 428 and 342 days, which may be related to the stellar rotation (see Fig. 12 in Paper II). As discussed in this latter work, we cannot discard that the estimated rotational period (\(<\)399 days) and the 342 d signal in the photometry are the first harmonics (\(P/2\)) of the RV period. Taken together, these findings point to a non-planetary origin of the signal. ## 8 Discussion ### The importance of each activity indicator The detailed analysis presented in this work (and our previous study) demonstrates the difficulty in confirming planetary signals in evolved intermediate-mass stars. Our most important finding is that there is not a single (e.g. universal) activity indicator that we can trust to unveil a spurious planetary signal; all of them have to be analysed in order to understand the origin of the different RV signals, and in some cases this might not be sufficient. We find hints that the CCF-BIS might be more useful for less massive stars (around 2 M\({}_{\odot}\)), because we find a possible RV-BIS correlation for NGC3680 No. 41 and a clear signal in the BIS periodogram (as well as a significant strong RV-BIS correlation) for NGC2423 No. 3. On the other hand, the CCF-FWHM and H\(\alpha\) lines seem to be more useful for tracking activity signals in more massive and evolved stars (NGC2345 No. 50, NGC3532 No. 670, and NGC4349 No. 127); see Table 4 for a summary of the periodogram results. However, we notice that conclusions on the origin of a given signal are highly dependent on the way the H\(\alpha\) line is measured. Meanwhile NGC2345 No. 50 has significant peaks at both the H\(\alpha\)16 and H\(\alpha\)06 indicators, NGC3532 No. 670 only shows a significant peak for H\(\alpha\)16, and NGC4349 No.127 shows no significant peak for any of them. Nevertheless, although not significant, the signal of H\(\alpha\)16 for NGC4349 No. 127 is much stronger than that of H\(\alpha\)06. We acknowledge that the sample analysed here is still not sufficiently large for us to reach clear conclusions, but there are indications that H\(\alpha\)16 is a more reliable indicator than H\(\alpha\)06 for these evolved stars (the opposite is true for FGK dwarfs, for which H\(\alpha\)06 is more appropriate; Gomes da Silva et al. 2022). A thorough analysis of stellar activity indicators in evolved stars is outside the scope of the present paper but will be explored in a future work. On the contrary, measuring the H\(\alpha\) line with a very wide passband can result in spurious periods in the periodogram, because there are contaminating lines in the wings of this strong line. This is the explanation for the recent confirmation of a brown dwarf around the largest star to date (HD 18184; Lee et al. 2023), whose RV variations were previously deemed to be of stellar origin given the similar period in the H\(\alpha\) periodogram (Bang et al. 2018). However, the work by Lee et al. 
(2023) warns that a \(\pm\)1 A passband around the line centre (and not a \(\pm\) 2A as used in Bang et al. 2018) is needed to avoid the blending lines. We note that our H\(\alpha\)16 index is measured with a \(\pm\) 0.8 A passband around the line centre and is therefore not affected by nearby blending lines (see Figs. E.1 and E.2 in the Appendix). Finally, we believe that the most critical indicator for this kind of star is the CCF-FWHM, because it has proven useful for detecting the probable stellar origin of the largest-amplitude signals (in the three most massive and evolved stars) but also that of the weaker RV signals (that can be fitted with a second Keplerian) found in stars with a broad range of stellar masses (the cases of NGC3532 No. 670, IC4651 No. 9122, and NGC2423 No. 3). We stress the fact that for NGC4349 No. 127, despite having nearly identical parameters to NGC3532 No. 670, only the FWHM has served to cast doubt on the planetary origin of the RV variability. Finally, the photometric data have been useful in some cases as well, but the long rotational periods of our targets make it difficult to find data of sufficient quantity to track the real period of the stars. ### Origin of RV variability From the six targets studied in this work, we find suggestions that it is more probable to find bona fide planets around less massive and less evolved stars. NGC3680 No. 41 and IC4651 No. 9122 both have a mass of around 1.7 M\({}_{\odot}\) and a radius of around 12 R\({}_{\odot}\) and seem to be in the first ascent of the RGB (see Fig. 1). On the other hand, for stars more massive than 2 M\({}_{\odot}\), we cannot confirm the planetary origin for any of the RV signals detected in our more massive targets. These four stars are also among the most evolved in their clusters. NGC2345 No. 50 is the most evolved of the full sample, already ascending towards the AGB. NGC 4349 No. 127 and NGC3532 No. 670 seem to be very close to the tip of the RGB and NGC2423 No. 3 is also approaching the RGB tip. This is in agreement with the decrease in planetary occurrence at M \(\ga\)2 M\({}_{\odot}\) found by the Lick (Reffert et al. 2015) and EXPRESS surveys (Jones et al. 2016) and the combination of both with the PPPS survey (Wohloff et al. 2022). Nevertheless, the targets presented here are a fraction of the complete sample and a more in depth analysis on planet frequencies will be presented in a future work along with results for the full sample. In recent years, an increasing number of suspected planet candidates have been found around evolved stars with orbital periods in the range of \(\sim\) 600-800 days. This is the case for NGC4349 No. 127, NGC2423 No. 3, NGC3532 No. 670 discussed here, Pollux (\(\beta\) Gem) (Hatzes et al. 2006; Auriere et al. 2021), Aldebaran (Hatzes et al. 2015; Reichert et al. 2019), and \(\gamma\) Draconis (Hatzes et al. 2018). In addition, the work by Tala Pinto et al. (2020) presents two evolved stars, 3Cnc and 44UMa, with periodic RV variations of \(\sim\) 800 days that do not show variations with a similar period in photometry or H\(\alpha\) index. However, given the similarity in stellar parameters and luminosities with the above-mentioned cases and the lack of line-shape-stability indicators such as FWHM (not available in non-stabilised spectrographs, as used in their work), Tala Pinto et al. (2020) classify these objects as planet candidates. 
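As an aside, the maximum rotational periods quoted in the previous section (\(\sim\) 407 days for NGC2423 No. 3 and \(\sim\) 399 days for NGC4349 No. 127) follow from the simple upper limit \(P_{\rm rot,max}=2\pi R_{\star}/(v\sin i)\), reached when \(\sin i=1\). The short sketch below illustrates this arithmetic; the \(v\sin i\) value is the one quoted in the text, while the stellar radius used in the example call is an assumed placeholder chosen only for illustration.

```python
import numpy as np

R_SUN_KM = 6.957e5   # solar radius [km]
DAY_S = 86400.0      # seconds per day

def max_rotation_period_days(radius_rsun, vsini_kms):
    """Upper limit on the rotational period: P_max = 2*pi*R / (v sin i)."""
    return 2.0 * np.pi * radius_rsun * R_SUN_KM / vsini_kms / DAY_S

# Illustrative call: v sin i = 2.19 km/s as quoted for NGC2423 No. 3;
# the radius is an assumed placeholder, not the measured value.
print(f"{max_rotation_period_days(radius_rsun=17.0, vsini_kms=2.19):.0f} days")
```

Signals with periods longer than this limit cannot be attributed to rotational modulation, whereas signals at or below it (such as the \(\sim\) 400 d peaks discussed above) remain compatible with it.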
The luminous red giant HD 81817 (4.3 M\({}_{\odot}\), 83.8 R\({}_{\odot}\)) also shows a secondary RV signal with a period of \(\sim\) 630 days (Bang et al. 2020) but the H\(\alpha\) line shows a significant peak at a similar period as well. The accumulation of planet candidates in that period range (and more specifically between 300 and 800 days) has also been noted by Dollinger and Hartmann (2021), who in addition points out that this only happens for stars with a radius of greater than 21 R\({}_{\odot}\) (see their Fig. 1). These authors also warn of the possibility that the observed periodic signals are of stellar origin and propose that a large stellar radius and a low stellar metallicity should be seen as a warning signal that RV signals may not be of planetary nature. It is worth noting that the three objects with the clearest RV variations in our sample have large radii (they are the most evolved in each cluster) and metallicities below -0.1 dex. As in many cases the rotational periods of these large, slow stars are compatible with the RV periods, the simplest explanation for the RV variability would be the rotational modulation of active regions, such as spots. However, long-lived stable spots in evolved single stars have only been reported in the literature for very active stars hosting strong magnetic fields, those that are believed to be the descendants of magnetic Ap-Bp stars. This is the case for EK Eri (Auriere et al. 2011) or \(\beta\) Ceti (Tsvetkova et al. 2013) with P\({}_{rot}\) of 309 and 215 days, respectively, which also present large photometric variations. However, the stars presented in this work seem to belong to another group of red giants; these show longer rotational periods, weak large-scale magnetic fields (sub-Gauss level), and higher-than-expected chromospheric activity (Auriere et al. 2015). The best studied case in the literature is Pollux, which shows RV variations with a period of 590 days --that is stable for 25 years-- and no apparent S-index variability (Hatzes et al. 2006). Nevertheless, the planet candidate around Pollux was challenged by the discovery of magnetic field variations with a similar period to the RV variations (Auriere et al. 2014). In a more recent work with additional data, Auriere et al. (2021) found a longer period (660 days) for the weak magnetic field variations, but this could still compatible with the RV period due to uncertainties. Therefore, doubt remains as to the true existence of a planet orbiting Pollux, as is also true for the cases presented in the present work. In the cases of red giants with weak magnetic fields such as Pollux, if the detected RV variations are due to the magnetic field, they would not be caused by photometric spots but by magnetic plages or other magnetic structures locally reducing convection (Auriere, _priv. comm_). Therefore, the lack of photometric variation in red giants (and thus of spots) is not a guarantee that the RVs are of planetary origin. We do not have sufficient S/N for our sample of stars in the blue region of the spectra to be able to study the S-index variability in order to compare it with the behaviour of Pollux and other giants with weak magnetic fields like Aldebaran. However, many of our stars show H\(\alpha\) variations, which is also a sign of chromo \begin{table} \begin{tabular}{l l c c} \hline \hline & & NGC3680 No.41 & IC4651 No. 
9122 \\ \hline \hline \(V_{sys}\) & [km s\({}^{-1}\)] & 1.603 \(\pm\) 0.013 & -30.258 \(\pm\) 0.003 \\ \(P\) & [days] & 1154.98 \(\pm\) 9.96 & 745.94 \(\pm\) 1.80 \\ \(K\) & [m s\({}^{-1}\)] & 74.79 \(\pm\) 9.49 & 94.98 \(\pm\) 4.38 \\ \(e\) & & 0.21 \(\pm\) 0.10 & 0.04 \(\pm\) 0.03 \\ \(\omega\) & [rad] & 0.992 & 0.937 \\ Tp & [BJD-2 400 000] & 51819.88 & 53004.39 \\ \(m_{2}\) sin \(i\) & [M\({}_{J}\)] & 5.13 \(\pm\) 0.66 & 6.22 \(\pm\) 0.36 \\ \(a\) & [AU] & 2.53 \(\pm\) 0.03 & 1.95 \(\pm\) 0.03 \\ \hline N\({}_{meas}\) & & 23 & 74 \\ Span & [years] & 19.07 & 17.02 \\ \(\Delta\)v(HARPS-Coralie) & [m s\({}^{-1}\)] & 14 \(\pm\) 14 & — \\ white noise & [m s\({}^{-1}\)] & 20.0 \(\pm\) 5(hargs) \(-\) 3.8\({}^{+11}\)\({}_{-3.1}\) (coraile) 2 & 20.5 \(\pm\) 1.8 (hargs) \\ (\(O-C\)) & [m s\({}^{-1}\)] & 16.35 & 20.39 \\ \hline \hline \end{tabular} \end{table} Table 5: Orbital and physical parameters for the planet candidates. spheric activity. It would be very interesting to further study the magnetic activity of these stars, although it is very challenging to detect weak magnetic fields for relatively dull stars (\(V\sim\) 10) with the currently available instrumentation. An alternative explanation for the large RV variations in evolved stars is the presence of a poorly understood class of oscillations called oscillatory convective modes (Saio et al., 2015). This class of oscillations is theoretically possible for 2 M\({}_{\odot}\) stars with high luminosity, that is, with log (L/L\({}_{\odot}\)) \(\gtrsim\) 3 dex; see more detailed discussion in Paper II. The clearest candidate in our sample would therefore be NGC2345 No. 508, with log (L/L\({}_{\odot}\)) \(\sim\) 3.9, though NGC4349 No. 127 and NGC3532 No. 670 also have high luminosities, as do other candidates in the literature (see Fig. 15 in Tala Pinto et al.2020). Footnote 8: We note that the exact minimum luminosity for these modes to appear depends on the mass and the mixing length adopted in the treatment of convection. Therefore, it is probable that for the large mass of this star (not covered by current models), the minimum needed luminosity is even higher and this star would not be a candidate to host this kind of oscillation. ## 9 Conclusions In this work, we present some of the results of a long-term survey of RVs in a sample of more than 140 giant stars in 17 open clusters. Data covering such a large time span are crucial in order to find long-period planets, whereas the probability of discovering short-period planets around giant stars diminishes as the stars evolve and their radii expand. The long-term monitoring of giant stars is also essential for distinguishing RV variations caused by the rotational modulation of active regions, because the rotational periods of evolved stars are much longer than those of dwarfs (from a few hundred to more than one thousand days). From the six stars analysed in this work, we are only confident in the planetary origin of the RV signals coming from two of those systems, and the star in both of them has a mass of below 2 M\({}_{\odot}\). Nevertheless, we believe that additional data are needed to fully confirm the presence of planets around those stars. On the other hand, the more massive stars in the sample show periodic variations similar to the RV in one or several stellar activity indicators (H\(\alpha\), BIS and FWHM), with the FWHM of the CCF being the most critical in our opinion. 
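As a quick check of the orbital solutions in Table 5, the minimum mass and semi-major axis follow from \(K\), \(P\), \(e\) and the stellar mass through the usual relations \(m_{2}\sin i\simeq K\sqrt{1-e^{2}}\,[P/(2\pi G)]^{1/3}M_{\star}^{2/3}\) and \(a\simeq[GM_{\star}P^{2}/(4\pi^{2})]^{1/3}\), valid for \(m_{2}\ll M_{\star}\). The sketch below is purely illustrative: it adopts the 1.64 M\({}_{\odot}\) mass quoted for NGC3680 No. 41 elsewhere in this work, and small differences with respect to the tabulated values are expected.

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
M_JUP = 1.898e27   # Jupiter mass [kg]
AU = 1.496e11      # astronomical unit [m]
DAY = 86400.0      # seconds per day

def min_mass_and_axis(K_ms, P_days, e, M_star_msun):
    """Minimum companion mass [M_J] and semi-major axis [AU] for m2 << M_star."""
    P = P_days * DAY
    M_star = M_star_msun * M_SUN
    m2sini = (K_ms * np.sqrt(1.0 - e**2)
              * (P / (2.0 * np.pi * G)) ** (1.0 / 3.0)
              * M_star ** (2.0 / 3.0))
    a = (G * M_star * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
    return m2sini / M_JUP, a / AU

# NGC3680 No. 41 solution from Table 5 (K = 74.79 m/s, P = 1154.98 d, e = 0.21)
# with an adopted stellar mass of 1.64 M_sun -> roughly 5.2 M_J and 2.5 AU.
print(min_mass_and_axis(K_ms=74.79, P_days=1154.98, e=0.21, M_star_msun=1.64))
```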
We also note that in the cases where a two-Keplerian fit was a good solution to model the RV variability, the H\(\alpha\)16 index and especially the FWHM were decisive in ruling out the presence of a smaller body in these systems. The cases presented here and others in the literature demonstrate the difficulty in finding even large planets around intermediate-mass evolved stars. It is therefore of utmost importance to obtain a comprehensive picture of the different phenomena causing RV variability in evolved stars, and especially of their magnetic fields and their manifestations on different timescales. Once we understand such phenomena, we will be in a much better position to detect giant planets but also lower-mass planets currently hidden in the stellar jitter. Near-infrared (NIR) spectra can be combined with optical spectra in order to discern between stellar activity and planetary signals, as the non-planetary RV signals are wavelength-dependent, whilst planetary signals are achromatic (e.g. Figueira et al., 2010; Trifonov et al., 2015; Carleo et al., 2020; Carmona et al., 2023). In addition, the study of other stellar-activity indicators in the NIR will help us to better understand the magnetic activity of these stars and its relation with their RV variability. Therefore, as a further step towards a complete understanding of the variability in red giants, we will observe these stars with the new high-resolution NIR spectrograph NIRPS (Bouchy et al., 2017; Wildi et al., 2017) now operational on the 3.6m ESO telescope in La Silla (Chile). ###### Acknowledgements. We thank Francois Bouchy and Xavier Dumusque for coordinating the shared observations with HARPS and all the observers who helped collecting the data. We thank the referee for useful review that helped to improve this paper. E.D.M., J.G.S., J.P.F., N.C.S., J.H.M., S.G. acknowledge the support from Fundacao para a Ciencia e a Tecnologia (FCT) through national funds and from FEDER through COMPETE2020 by the following grants: UIDB/04434/2020 and UID/04434/2020 and 2022.04416. PTDC. D.M. acknowledges the support from ICT through grants ICT contract 2021-01294CE/ICICID and Investigador FCT contract 200849/2015/CP1273/CT/0003 and in the form of an exploratory project with the same reference. This research has been Co-funded by the European Union (ERC, FIERCE, 100135247). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This research has made use of the Extrasolar Planets Encyclopaedia, SIMBAD and WEBDA databases. This work has also made use of the IRAF facility. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC_[https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
The aim of this work is to search for planets around evolved stars, focusing in particular on stars with masses above 2 M$_\odot$, given the previously reported decrease in planet occurrence around such stars. We used \texttt{kima} to find the most probable Keplerian orbits explaining the periodic signals observed in the RV data. In addition, we studied the variability of stellar activity indicators and photometric variations in order to rule out signals of stellar origin that could suggest the presence of a planet. We present a planet candidate orbiting the 1.64 M$_\odot$ star No. 41 of the open cluster NGC3680; this planet has a minimum mass of 5.13 M$_{J}$ and a period of 1155 days. NGC2345 and NGC353
2309.03942
21cmFirstCLASS I. Cosmological tool for $Λ$CDM and beyond
In this work we present 21cmFirstCLASS, a modified version of 21cmFAST, the most popular code in the literature for computing the anisotropies of the 21-cm signal. Our code uses the public cosmic microwave background (CMB) Boltzmann code CLASS, to establish consistent initial conditions at recombination for any set of cosmological parameters and evolves them throughout the dark ages, cosmic dawn, the epoch of heating and reionization. We account for inhomogeneity in the temperature and ionization fields throughout the evolution, crucial for a robust calculation of both the global 21-cm signal and its fluctuations. We demonstrate how future measurements of the CMB and the 21-cm signal can be combined and analyzed with 21cmFirstCLASS to obtain constraints on both cosmological and astrophysical parameters and examine degeneracies between them. As an example application, we show how 21cmFirstCLASS can be used to study cosmological models that exhibit non-linearities already at the dark ages, such as scattering dark matter (SDM). For the first time, we present self-consistent calculations of the 21-cm power spectrum in the presence of SDM during the non-linear epoch of cosmic dawn. The code is publicly available at https://github.com/jordanflitter/21cmFirstCLASS.
Jordan Flitter, Ely D. Kovetz
2023-09-07T18:00:01
http://arxiv.org/abs/2309.03942v4
# 21cmFirstCLASS I. Cosmological tool for \(\Lambda\)CDM and beyond ###### Abstract In this work we present 21cmFirstCLASS, a modified version of 21cmFAST, the most popular code in the literature for computing the anisotropies of the 21-cm signal. Our code uses the public cosmic microwave background (CMB) Boltzmann code CLASS, to establish consistent initial conditions at recombination for any set of cosmological parameters and evolves them throughout the dark ages, cosmic dawn, the epoch of heating and reionization. We account for inhomogeneity in the temperature and ionization fields throughout the evolution, crucial for a robust calculation of both the global 21-cm signal and its fluctuations. We demonstrate how future measurements of the CMB and the 21-cm signal can be combined and analyzed with 21cmFirstCLASS to obtain constraints on both cosmological and astrophysical parameters and examine degeneracies between them. As an example application, we show how 21cmFirstCLASS can be used to study non-linear cosmological models, such as scattering dark matter (SDM). For the first time, we present self-consistent calculations of the 21-cm power spectrum in the presence of SDM during the non-linear epoch of cosmic dawn. ## I Introduction Our Universe was born with a Big Bang about 13.8 billion years ago. After a rapid inflationary epoch ended, it continued to expand until its temperature was low enough that atoms could first form in a key cosmological moment called _recombination_. Ensingly, halos of cold dark matter (CDM) started collapsing, providing the first gravitational seeds for galaxy and star formation. Efficient X-ray radiation that was emitted from the first stars and their remnants then ionized the surrounding inter-galactic medium (IGM) during the epoch of _reionization_ (EoR). Recently on cosmic timescales, the expansion of the Universe has become dominated by the mysterious force of dark energy. This a brief description of the concordance cosmological model (\(\Lambda\)CDM) [1; 2; 3; 4]. Many observables support \(\Lambda\)CDM as being the correct cosmological model of our Universe. Galaxy surveys of our local Universe at redshift \(z\lesssim 1\)[5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], and measurements of the Ly\(\alpha\) forest (\(z\lesssim 2.5\)) [21; 22; 23; 24; 25] have found very good agreement between the observed spatial distribution of galaxies and the theoretical predicted distribution. But perhaps it was the cosmic microwave background (CMB) [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]--a form of radiation that has been nearly freely propagating since recombination at \(z\sim 1100\)--that gave \(\Lambda\)CDM its greatest triumph; measurements of the temperature and polarization anisotropies of the CMB, carried out most recently by the Planck satellite [38], allowed to constrain the six parameters of \(\Lambda\)CDM to a sub-percent precision level [39]. Despite its success, the \(\Lambda\)CDM model does suffer from tensions with observations (see recent reviews in Refs. [40; 41; 42; 43]), and more cosmological data is required to resolve them, especially in the large volume between \(2.5\lesssim z\lesssim 1100\) where our Universe has not been systematically mapped yet. 
Since according to Big-Bang nucleosynthesis (BBN) [44; 45] the IGM in our Universe is expected to contain huge amounts of neutral hydrogen before reionization, the 21-cm signal, being sourced by hyperfine energy transitions in hydrogen atoms [46; 47; 48; 49; 50; 51; 52; 53], has become an important target for cosmologists. Nowadays there are ongoing efforts to detect the 21-cm signal by many different collaborations. Some of them focus on detecting the global signal, that is the sky-averaged signal. These include the Experiment to Detect the Global reionization Signature (EDGES) [54], Shaped Antenna measurement of the background RA-dio Spectrum (SARAS) [55], Large-Aperture Experiment to Detect the Dark Ages (LEDA) [56], the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) [57] and Probing Radio Intensity at high-Z from Marion (PRzM) [58]. In addition, radio interferometer telescopes, such as the Giant Metrewave Radio Telescope (GMRT) [59], the Murchison Widefield Array (MWA) [60], Low Frequency Array (LOFAR) [61], the Precision Array for Probing the Epoch of Reionization (PAPER) [62], the Hydrogen Epoch of Reionization Array (HERA) [63] and the Square Kilometre Array (SKA) [64] are devoted to probe the spatial fluctuations in the signal. While most of these experiments are in the stages of noise calibration and have only placed upper bounds on the amplitude of the power spectrum of the signal, the HERA collaboration for example has already extracted a meaningful upper bound on the X-ray luminosity of the first stars [65; 66; 67] (see also Ref. [68]). There are several approaches in the literature for computing the anisotropies in the 21-cm signal. One way is to perform full radiative-transfer hydrodynamic simulations, e.g. CoDa [69; 70; 71], 21SSD [72] and THESAN [73]. Alternatively, post-processing of N-body simulations can be applied with ray-tracing algorithms such as C\({}^{2}\)-Ray [74] or CRASH [75]. While these simulations improved our understanding of the physics of the EoR and helped to refine reionization models, they are computationally expensive and cannot be used for parameter inference. Faster approximated schemes that solve the one dimensional radiative transfer equation can be found in the codes of BEARS [76], GRIZZLY [77] and BEoRN [78]. There are also approximated purely analytic prescriptions in the literature, e.g. [79]. In Zeus21[80] the 21-cm power spectrum at \(z\gtrsim 10\) can be evaluated in seconds, thanks to an approximated exponential fit that relates the linear matter density fluctuations to the non-linear fluctuations of the star formation rate density (SFRD). Finally, semi-numerical codes that implement the excursion-set formalism [81] are widely used in the literature, from [82] to SimFast21[83] to the ever-popular 21cmFAST[84, 85]. In this paper we introduce our code for calculating the 21-cm anisotropies. We call it 21cmFirstCLASS. It is essentially the merger of the two well-known codes--21cmFAST1 and the linear Boltzmann solver CLASS2[86]. In this version, the Lyman-Werner (LW) feedback [87, 88, 89] as well as the relative velocity between baryons and CDM (\(V_{\rm cb}\)) [90, 91, 92, 93] are taken into account in each cell, while pop-II and pop-III stars are separated into atomic and molecular cooling galaxies, respectively. 
In addition, the code contains our past modifications to 21cmFAST to incorporate the Ly\(\alpha\) heating mechanism [82, 94, 95], as well as the ability to consider fuzzy dark matter (FDM) with an arbitrary mass and fraction [96]. Footnote 1: github.com/21cmfast/21cmFAST (we currently use v. 3.3.1) There are three main advantages to 21cmFirstCLASS: (1) It generates consistent initial conditions (via CLASS) and thereby allows one to study degeneracies between cosmological parameters and astrophysical parameters. (2) It allows a combined analysis of CMB and 21-cm anisotropies, which improves constraining power and allows for degeneracy breaking. (3) Unlike the standard 21cmFAST code which is designed to begin the simulation at \(z=35\) with a homogeneous temperature field, the user can control the initial redshift of 21cmFirstCLASS, and even set it to recombination. As a consequence, the fields in the box are evolved non-uniformly from an early redshift, naturally leading to the correct state of the box at \(z=35\). This is particularly important for beyond \(\Lambda\)CDM models which exhibit non-linear fluctuations early on, e.g. in scenarios with primordial magnetic fields [97]. To demonstrate the last point, we consider as an example an exotic dark matter model which we refer to as scattering dark matter (SDM). In this model, some part of the dark matter is composed of particles which are able to interact non-gravitationally with ordinary matter and scatter off of it elastically [98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123]. In that context, this work resembles the work of Ref. [106], but there are a few important differences. First, Ref. [106] used ARES[124, 125] in their astrophysical calculations. This code assumes a simpler astrophysical model than 21cmFAST; it does not account for halo mass dependence in the calculation of the star formation efficiency and it lacks treatment for star suppression feedbacks in molecular cooling galaxies. Moreover, it computes global astrophysical quantities (e.g. emissivity) from global cosmological quantities (e.g. halo mass function) and therefore does not take into account important non-linear fluctuations at low redshifts. And secondly, in Ref. [106], the astrophysical parameters were fixed in the analysis and it focused on the global signal. We on the other hand vary both cosmological and astrophysical parameters and derive forecasts with respect to the 21-cm power spectrum while simulating HERA's noise. We demonstrate that HERA in its design sensitivity is expected to easily probe SDM with cross-sections smaller by an order of magnitude than e.g. forecasted constraints for CMB-S4 [100]. This is not the first work to consider the 21-cm power spectrum in the presence of SDM. However, it is the first work that computes _consistently_ the 21-cm power spectrum in the presence of SDM during the non-linear cosmic dawn epoch. For example, Refs. [101, 114, 123] have estimated the 21-cm power spectrum by considering only the initial Maxwellian fluctuations in the relative velocity between the SDM and the baryons. In 21cmFirstCLASS, non-linear fluctuations in the density and the SFRD fields are automatically captured. In follow-up work [126], we will use 21cmFirstCLASS to extend the work of Ref. [104], which focused on the linear dark ages epoch, to make detailed forecasts for constraining SDM at cosmic dawn. While working on this project, inspired by the work of Ref. 
[80] (that introduced the Zeus21 code), we have also studied in detail the impact of early linear fluctuations on the late non-linear 21-cm power spectrum at low redshifts. The results of that analysis can be found in a companion paper [127] (hereafter referred to as Paper II). The remaining parts of this paper are organized as follows. In Sec. II we briefly outline the physics of the 21-cm signal. In section III we describe the initial conditions used in our code and in Sec. IV we compare the output of 21cmFirstCLASS with 21cmFAST. In Sec. V we demonstrate how 21-cm and CMB data can be readily combined using 21cmFirstCLASS to relax degeneracies between cosmological parameters. We then move on to discuss the SDM physics and its implementation in 21cmFirstCLASS in Sec. VI. At the end of that section, the results of the SDM evolution and its impact on the 21-cm power spectrum are presented, as well as forecasts for its detectability by HERA. We provides our conclusions in Sec. VII. Throughout this paper, we adopt the best-fit values for the cosmological parameters from Planck 2018 [39] (without BAO), namely we assume a Hubble constant \(h=0.6736\), a primordial curvature amplitude \(A_{s}=2.1\times 10^{-9}\) with a spectral index \(n_{s}=0.9649\), and total matter and baryons density parameters \(\Omega_{m}=0.3153\), \(\Omega_{b}=0.0493\). For the CMB calculations we also assume an optical depth to reionization \(\tau_{\rm re}=0.0544\) and a single species of massive neutrinos with mass \(m_{\nu}=0.06\,\)eV. For the fiducial values of the astrophysical parameters in 21cmFAST and 21cmFirstCLASS, we adopt the EOS2021 values listed in Table 1 of Ref. [85]. All of our formulae are expressed in the CGS unit system. To reduce clutter, we often do not explicitly write the independent arguments of the physical quantities (e.g. redshift, wavenumber, etc.) and they should be inferred from the context. 21cm Theory The observed physical quantity of the 21-cm signal is known as the brightness temperature, which reflects the excess or deficit of CMB photons at a given frequency (or redshift), \[T_{21}=\frac{T_{s}-T_{\gamma}}{1+z}\left(1-\mathrm{e}^{-r_{21}}\right), \tag{1}\] where \(T_{\gamma}\propto(1+z)\) is the redshift-dependent CMB temperature, \(T_{s}\) is the spin temperature, and \(\tau_{21}\ll 1\) is the 21-cm optical depth (see classic reviews of the 21-cm signal in Refs. [46; 47; 49; 50; 51]). The spin temperature is a characteristic property of the IGM that measures the relative abundance of hydrogen atoms in the triplet and singlet states, in which the spins of the proton and the electron are aligned and anti-aligned, respectively. As the Universe evolves, various processes excite the hydrogen gas and compete between themselves on setting the value of the spin temperature. In thermal equilibrium the spin temperature reads \[T_{s}^{-1}=\frac{x_{\mathrm{CMB}}T_{\gamma}^{-1}+x_{\mathrm{coll}}T_{k}^{-1}+ \tilde{x}_{\alpha}T_{\alpha}^{-1}}{x_{\mathrm{CMB}}+x_{\mathrm{coll}}+\tilde{ x}_{\alpha}}. \tag{2}\] Here, \(T_{k}\) is the IGM gas kinetic temperature, \(T_{\alpha}\approx T_{k}\) is the color temperature of Ly\(\alpha\) photons, and \(x_{\mathrm{CMB}}=\left(1-\mathrm{e}^{-r_{21}}\right)/\tau_{21}\sim 1\), \(x_{\mathrm{coll}}\) and \(\tilde{x}_{\alpha}\) are the CMB [94], collisional [50], and Ly\(\alpha\)[128] couplings, respectively. As we demonstrate in Fig. 1, the globally averaged value of the spin temperature changes with time. 
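For intuition, Eqs. (1) and (2) can be evaluated directly once the couplings and temperatures are specified. The snippet below is a minimal numerical illustration with representative, made-up input values; it is not code extracted from 21cmFirstCLASS.

```python
import numpy as np

def spin_temperature(T_gamma, T_k, T_alpha, x_cmb, x_coll, x_alpha):
    """Eq. (2): weighted harmonic mean of the CMB, kinetic and Ly-alpha temperatures."""
    inv_Ts = (x_cmb / T_gamma + x_coll / T_k + x_alpha / T_alpha) \
             / (x_cmb + x_coll + x_alpha)
    return 1.0 / inv_Ts

def brightness_temperature_mK(T_s, T_gamma, z, tau_21):
    """Eq. (1) in mK; for tau_21 << 1 the last factor is approximately tau_21."""
    return 1e3 * (T_s - T_gamma) / (1.0 + z) * (1.0 - np.exp(-tau_21))

# Illustrative cosmic-dawn numbers (assumed, not outputs of the code): z = 17,
# a cold IGM with T_k = 10 K, efficient WF coupling and a small optical depth.
z = 17.0
T_gamma = 2.725 * (1.0 + z)
T_s = spin_temperature(T_gamma, T_k=10.0, T_alpha=10.0,
                       x_cmb=1.0, x_coll=0.05, x_alpha=5.0)
print(T_s, brightness_temperature_mK(T_s, T_gamma, z, tau_21=0.01))
```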
Not long after recombination, at \(z\sim 1000\), \(T_{s}\approx T_{k}\approx T_{\gamma}\). As the Universe expands, the gas adiabatically cools and its temperature departs from the CMB temperature, and so the spin temperature settles on an intermediate value, which is determined by the ratio of \(x_{\mathrm{coll}}\) and \(x_{\mathrm{CMB}}\). Since \(x_{\mathrm{coll}}\) is inversely proportional to the volume of the Universe, \(x_{\mathrm{coll}}>1\) at \(z\gtrsim 100\), and \(T_{s}\) approaches \(T_{k}\). Afterwards, the Universe becomes large enough so that collisional excitations are no longer efficient, \(x_{\mathrm{coll}}<1\), and \(T_{s}\) is driven back towards \(T_{\gamma}\). As can be seen in Fig. 2, the departure of \(T_{s}\) from \(T_{\gamma}\) at \(25\lesssim z\lesssim 700\) results in the first absorption feature in the globally averaged brightness temperature. This cosmological epoch is known as the _dark ages_. It should be stressed that during this epoch stars have not been formed yet, and therefore the signal is completely determined from cosmology. Hence, within the standard model paradigm, the 21-cm dark ages signal is considered to be well understood theoretically, although it has yet to be measured at that epoch. The theoretical uncertainty in the 21-cm signal begins after \(z\sim 25\), once the first stars have been formed. Ly\(\alpha\) radiation emitted from the first stars strongly couples the spin temperature back to \(T_{k}\) via the Wouthuysen-Field (WF) effect [129; 130], and a second absorption feature in the 21-cm signal is expected to be found, although its exact shape and location depend on the assumed astrophysical model and are thus highly uncertain. This epoch is known as _cosmic dawn_. Depending on the astrophysical parameters, X-rays emitted from stars may heat the surrounding IGM (taking the spin temperature with it) above the CMB temperature, which would lead to an emission feature in the signal. Eventually, stellar radiation reionizes the gas in the IGM and bubbles of ionized hydrogen begin to emerge. After the _reionization_ epoch is over, \(\tau_{21}\to 0\) and the 21-cm signal vanishes. There are three important quantities which govern the brightness temperature during the cosmic dawn and afterwards. These are the Ly\(\alpha\) flux \(J_{\alpha}\) (since \(\tilde{x}_{\alpha}\propto J_{\alpha}\)), the gas kinetic temperature \(T_{k}\), and the ionization fraction \(x_{e}\equiv n_{e}/\left(n_{\mathrm{H}}+n_{\mathrm{He}}\right)\), where \(n_{e}\), \(n_{\mathrm{H}}\) and \(n_{\mathrm{He}}\), are the free-electron, hydrogen-nuclei and helium-nuclei number-densities, respectively. We will not focus on prescriptions for evaluating \(J_{\alpha}\) in this paper and instead refer the reader to Refs. [124; 128; 84; 131] for more details. The evolution of \(T_{k}\) is determined from \[\frac{dT_{k}}{dz}=\frac{dt}{dz}\left[-2HT_{k}+\Gamma_{C}\left(T_{\gamma}-T_{k} \right)+\left.\frac{dT_{k}}{dt}\right|_{\mathrm{ext}}\right], \tag{3}\] where \(dt/dz=-\left[H\left(1+z\right)\right]^{-1}\), \(H\) is the Hubble parameter, and \(\Gamma_{C}\) is the Compton heating rate, \[\Gamma_{C}\equiv\frac{8\pi^{2}\sigma_{T}\left(k_{B}T_{\gamma}\right)^{4}}{45 \hbar^{3}c^{4}m_{e}}\frac{x_{e}}{1+x_{e}}. \tag{4}\] Here, \(c\) is the speed of light, \(\hbar\) is the (reduced) Planck constant, \(k_{B}\) is Boltzmann's constant, \(m_{e}\) is the electron mass, and \(\sigma_{T}\) is Thomson cross-section. The term \(dT_{k}/dt|_{\mathrm{ext}}\) that appears in Eq. 
(3) represents the "external" heating rates, \[\left.\frac{dT_{k}}{dt}\right|_{\rm ext}=\epsilon_{\rm ext}+\frac{2}{3}\frac{T_{k}}{1+\delta_{b}}\frac{d\delta_{b}}{dt}-\frac{T_{k}}{1+x_{e}}\frac{dx_{e}}{dt}, \tag{5}\] where \(\epsilon_{\rm ext}\) denotes the heating rates that come from external sources, mainly X-ray heating (but Ly\(\alpha\) as well as CMB heating rates [94; 95; 82] can be included), and \(\delta_{b}\equiv\delta\rho_{b}/\bar{\rho}_{b}\) is the contrast in the baryon-density fluctuations.

Figure 1: The evolution of the globally averaged CMB temperature, gas kinetic temperature and the spin temperature. This figure was made with 21cmFirstCLASS.

The reason why we classify the last two terms in Eq. (5) as "external" heating rates, even though they are sourced by the adiabatic cooling of the IGM, will become clear in Appendices B and C, where we derive the tight coupling approximations. From Eqs. (3)-(5) it can be seen that the evolution of \(T_{k}\) depends on \(x_{e}\), especially at early times when the Compton heating rate dominates. The exact detailed evolution of \(x_{e}\) at early times, on the other hand, is quite intricate, as it requires tracking the recombination states of both hydrogen and helium while taking into account excitations to high-order energy levels. In the seminal work of Refs. [132; 133], these effects have been shown to have a sub-percent impact on the evolution of \(x_{e}\), making them crucial for analyzing the CMB anisotropies at the precision level of the Planck satellite data. A state-of-the-art recombination code that implements these effects and is publicly available is HyRec3, which we have incorporated in our 21cmFirstCLASS code. Footnote 3: github.com/nanoomlee/HYREC-2

Figure 2: _Top panel_: global brightness temperature as a function of redshift. _Bottom panel_: fluctuations in the brightness temperature as a function of redshift. Here we present a facet of the lightcone box that is generated by 21cmFirstCLASS. For better visualization, the box that was used for this simulation was of size \(400\,\)Mpc and contained \(200^{3}\) cells. The fluctuations pattern seen here is derived from an approximated scale-independent growth, whereas in principle scale-dependent growth should be considered. This assumption will be relaxed in the next version of 21cmFirstCLASS. See more details on that point in Paper II.

Yet, in order to derive the evolution of temperature and ionization fluctuations to within a few percent error, we show in Paper II that it is sufficient to consider Peebles' effective three-level atom model [134], in which the evolution of \(x_{e}\) reads \[\frac{dx_{e}}{dz}=\frac{dt}{dz}\left[\left.\frac{dx_{e}}{dt}\right|_{\rm reio}+\mathcal{C}\left(\beta_{\rm ion}\left(1-x_{e}\right)-\alpha_{\rm rec}n_{\rm H}x_{e}^{2}\right)\right], \tag{6}\] where \(\alpha_{\rm rec}\) is the recombination rate (in units of cm\({}^{3}\)/sec), \(\beta_{\rm ion}\) is the early photoionization rate, and \(\mathcal{C}\) is the Peebles coefficient (see Appendix A in Paper II for more details on the Peebles coefficient). The term \(dx_{e}/dt|_{\rm reio}\) denotes the reionization rate at late times. At early times (long before reionization started), the recombination and photoionization rates were in equilibrium, implying that \[\beta_{\rm ion}=\alpha_{\rm rec}\left(\frac{m_{e}k_{B}T_{\gamma}}{2\pi\hbar^{2}}\right)^{3/2}{\rm e}^{-\epsilon_{0}/(k_{B}T_{\gamma})}, \tag{7}\]
where \(\epsilon_{0}=13.6\,\)eV is the ionization energy of the hydrogen atom from its ground state. Since the standard 21cmFAST code begins its calculations at \(z=35\), the \(\beta_{\rm ion}\) term is completely negligible and was omitted. The recombination rate is the case-A recombination rate which accounts for recombination to the ground state [135]. In addition, the factor \(\mathcal{C}\) is not interpreted as the Peebles coefficient but rather as the clumping factor \(\langle x_{e}^{2}\rangle/\langle x_{e}\rangle^{2}\)[136], which the code sets as a constant with a value of 2 to account for unresolved sub-grid fluctuations. This serves as an excellent approximation to the evolution of \(x_{e}\) at late times. We have confirmed with 21cmFirstCLASS that using HyRec at all redshifts almost replicates the same \(x_{e}\) evolution at low redshifts4, while not introducing errors in the 21-cm power spectrum that are larger than HERA's sensitivity (see for example Fig. 3 and further discussion in Sec. IV). Footnote 4: The incorporation of HyRec was done only at the code section of 21cmFAST that evolves \(x_{e}\), while leaving the reionization code unchanged. Therefore, the reionization history remains almost the same in 21cmFAST and 21cmFirstCLASS—see more in Sec. IV. ## III Initial conditions Our code, 21cmFirstCLASS, is composed of two main codes: (1) CLASS, which generates the consistent initial conditions at recombination, and (2) a modification of 21cmFAST, which uses the initial conditions from CLASS to generate the initial box and then evolves this box until the 21-cm signal vanishes. In the remaining parts of this paper, we use a box of comoving size 256 Mpc and a resolution5 of \(128^{3}\) cells, and initialize the evolution at recombination. We have confirmed that increasing these specifications does not alter the 21-cm power spectrum beyond HERA's sensitivity. Footnote 5: To be perfectly clear, by 128 we refer to the parameter HII_DIM, and not the parameter DIM, which is three times larger. ### Class In the standard 21cmFAST, the user can vary the cosmological parameters fairly easily from the python wrapper. The varied parameters however only enter in the C-code, while the initial conditions for the simulation remain the same, regardless the values of the cosmological parameters that were set by the user. This property of 21cmFAST makes it inadequate for studying degeneracies between the cosmological parameters and the astrophysical parameters, especially if physics beyond the standard model is considered (see Sec. VI). In our code, 21cmFirstCLASS, the initial conditions for the simulation are completely consistent with the input set of cosmological parameters. To get the correct initial conditions we use CLASS. We allow the user to work with either the primordial curvature amplitude \(A_{s}\) (which is commonly used in the CMB and inflation communities), or with the standard 21cmFAST\(\sigma_{8}\) parameter, the matter-density variance, smoothed on a sphere of radius \(R_{8}=8\,h^{-1}\,\)Mpc. 
Given the current matter-density transfer function \(\mathcal{T}_{m}\left(k,z=0\right)\), which is one of the outputs of CLASS, they are related by \[\sigma_{8}^{2}=A_{s}\int_{0}^{\infty}\frac{dk}{k}\left(\frac{k}{k_{\star}} \right)^{n_{s}-1}W^{2}\left(kR_{8}\right)\mathcal{T}_{m}^{2}\left(k,z=0\right), \tag{8}\] where \(k_{\star}=0.05\,{\rm Mpc}^{-1}\) is the CMB pivot scale and \(W\left(kR_{8}\right)=3\left(kR_{8}\right)^{-3}\left[\sin\left(kR_{8}\right)-kR _{8}\cos\left(kR_{8}\right)\right]\) is the Fourier transform of a top-hat filter of radius \(R_{8}\). We note that in \(\Lambda\)CDM simulations we run CLASS with a high \(k_{\rm max}=4000\,{\rm Mpc}^{-1}\), which is necessary to get the correct \(\sigma\left(R\right)\) at the relevant scales for 21cmFAST. CLASS also computes the background quantities \(\bar{T}_{k}\left(z\right)\), and \(\bar{x}_{e}\left(z\right)\), the latter via HyRec6. We then define the moment of recombination, and the starting point of our simulation, to be the redshift that solves \(x_{e}\left(z_{\rm rec}\right)\equiv 0.1\). For the fiducial set of cosmological parameters we use, it is \(z_{\rm rec}\approx 1069\). Footnote 6: Care has to be taken when converting from CLASS conventions for \(x_{e}\), which is \(n_{e}/n_{\rm H}\), to 21cmFAST conventions, for which \(x_{e}\equiv n_{e}/\left(n_{\rm H}+n_{\rm H_{\rm H}}\right)\). In addition, we also evaluate \(\mathcal{T}_{v_{\rm cb}}\left(k,z_{\rm rec}\right)\), the transfer function of \(V_{\rm cb}\) during recombination, with \[\mathcal{T}_{v_{\rm cb}}\left(k,z_{\rm rec}\right)=\left|\frac{\theta_{c} \left(k,z_{\rm rec}\right)-\theta_{b}\left(k,z_{\rm rec}\right)}{kc}\right|, \tag{9}\] where \(\theta_{c}\) (\(\theta_{b}\)) is the Fourier transform of the divergence of the baryons (CDM) velocity, quantities that are also given by CLASS. We construct interpolation tables for \(\mathcal{T}_{m}\left(k,z=0\right)\), \(\mathcal{T}_{v_{\rm cb}}\left(k,z_{\rm rec}\right)\), \(\bar{T}_{k}\left(z\right)\) and \(\bar{x}_{e}\left(z\right)\), and they are then used to replace the default tables used by 21cmFAST. Finally7, we also save CLASS's scale-independent growth factor \(D\left(z\right)\) in a new interpolation table that goes into 21cmFAST. This quantity is obtained in CLASS by solving a second order differential equation. To avoid solving this equation for \(D\left(z\right)\), in the standard 21cmFAST the "Dicke growth" factor is used [137; 138]. This is an analytical fit to the growth factor that works particularly well below \(z=35\). However, this fit under-estimates the true growth factor at \(z\gtrsim 100\), and that can lead to a few percent error in the fluctuations pattern of \(T_{k}\) and \(x_{e}\) at low redshifts. Moreover, these errors can propagate to the global signal at \(10\lesssim z\lesssim 20\), when non-linearities become important. To avoid introducing errors in the calculation, without sacrificing runtime or computational cost, we adopt CLASS's growth factor in 21cmFirstCLASS. For more details on the scale-independent growth factor and its effect on the 21-cm signal, see Appendix A. ### 21cmFAST The initial density and velocity boxes in 21cmFirstCLASS are generated in a similar manner as in the standard 21cmFAST. Prior to \(z=35\), we evolve the matter-density fluctuations linearly8, though we have confirmed that evolving the density box non-linearly at high redshifts yields the same 21-cm power spectrum at low redshifts. 
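As a brief aside on the cosmological inputs described above, Eq. (8) amounts to a one-dimensional quadrature once the \(z=0\) matter transfer function has been interpolated from the CLASS output. The sketch below assumes a callable transfer_fn(k) wrapping that interpolation (a hypothetical stand-in, not the actual 21cmFirstCLASS interface), and uses the fiducial \(k_{\star}\), \(n_{s}\) and \(h\) quoted in this paper.

```python
import numpy as np

k_star = 0.05          # CMB pivot scale [1/Mpc]
R8 = 8.0 / 0.6736      # 8 Mpc/h in Mpc for the fiducial h = 0.6736

def top_hat_window(kR):
    """Fourier transform of a real-space top-hat filter of radius R."""
    return 3.0 * (np.sin(kR) - kR * np.cos(kR)) / kR**3

def sigma8(A_s, n_s, transfer_fn, k_min=1e-4, k_max=4000.0, n_k=20000):
    """Numerical evaluation of Eq. (8); transfer_fn(k) is a stand-in for CLASS output."""
    k = np.logspace(np.log10(k_min), np.log10(k_max), n_k)
    integrand = ((k / k_star) ** (n_s - 1.0)
                 * top_hat_window(k * R8) ** 2
                 * transfer_fn(k) ** 2)
    # integrate over d(ln k) = dk / k
    return np.sqrt(A_s * np.trapz(integrand, np.log(k)))

# Usage (with a hypothetical callable wrapping the interpolated CLASS transfer function):
# sig8 = sigma8(A_s=2.1e-9, n_s=0.9649, transfer_fn=my_class_transfer)
```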
As for the initial \(T_{k}\left(z_{\rm rec}\right)\) and \(x_{e}\left(z_{\rm rec}\right)\) boxes, we assume that they are homogeneous. As we discuss in Paper II, an homogeneous \(T_{k}\left(z_{\rm rec}\right)\) box is an excellent assumption, much more than the homogeneous \(T_{k}\left(z=35\right)\) box that is assumed in the standard 21cmFAST. For the \(x_{e}\) box, the assumption of homogeneity at \(z_{\rm rec}\approx 1069\) is not justified (though it is still better than assuming homogeneity at \(z=35\)), but we show in Paper II that the resulting 21-cm power spectrum is not very sensitive to this assumption. Ideally, one could use the \(T_{k}\) and \(x_{e}\) transfer functions from CLASS to draw the initial boxes, as was done in Ref. [104]. In \(\Lambda\)CDM, such an approach would remove the necessity of starting the simulation at recombination, since all the fluctuations prior to \(z=35\) are linear to a very good approximation. Yet, we stress that this approach is no longer valid in some beyond \(\Lambda\)CDM cosmologies (like the one we discuss in Sec. VI) where non-linearities have an important role even before \(z=35\). Footnote 8: In this paper we assume \(\delta_{b}\left(z\right)=D\left(z\right)\delta_{b}\left(z=0\right)\). We comment that although such a scale-independent growth of \(\delta_{b}\) is inadequate at high redshifts, our conclusions in this paper are not affected by this crude assumption, which shall be relaxed in the version of 21cmFirstCLASS that will soon be made public. We elaborate much more on this subtlety in Paper II. We then solve numerically the differential equation for \(T_{k}\) (Eq. (3)) at each cell, using the Euler method, to promote \(T_{k}\) to the next redshift step, as in the standard 21cmFAST. The difference, though, is the step-size. In 21cmFAST, a logarithmic redshift sampling is used such that \(\left(1+z_{n}\right)/\left(1+z_{n+1}\right)=1.02\), where \(z_{n}\) is the \(n\)'th redshift sample in the simulation, such that the step-size \(\Delta z_{n}=z_{n}-z_{n+1}\) is \(\sim 0.1\) at \(z=6\) and \(\sim 0.6\) at \(z=35\). This redshift sampling scheme is insufficient at higher redshifts, and we therefore work with a constant step-size of \(\Delta z_{n}=0.1\) at \(35\leq z\leq 980\). Above \(z=980\) this step-size is also not enough, and we have to switch to \(\Delta z_{n}=0.01\) to simulate the evolution precisely. This fine redshift sampling above \(z=980\) comes with a price; although no computationally expensive astrophysical calculations are required, the many redshift samples extend the runtime of the code considerably. Yet, there is a much more clever way to evolve \(T_{k}\) above \(z=980\) with excellent precision, without generating so many redshift samples. Briefly, the idea is to treat the baryons and the CMB as a single fluid when the conditions for the _Compton-TCA_ (tight coupling approximation) are satisfied. We provide more details on that method in Appendix B. The normal evolution of \(x_{e}\) in 21cmFirstCLASS is done with HyRec, though our code can be configured to solve instead the Peebles model, Eq. (6), with the recombination rate \(\alpha_{\rm rec}=\alpha_{B}\) of RECFAST[139; 140], where \(\alpha_{B}\) is the case-B recombination rate (which accounts for recombination only to the first excited state). As in CLASS, we use the default "SWIFT" model of HyRec when \(T_{k}/T_{\gamma}<0.1\), and otherwise we use its "PEEBLES" model, which is quite similar to Eq. (6) above. 
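To make the Peebles option concrete, a single Euler update of Eq. (6) at early times, using Eq. (7) for \(\beta_{\rm ion}\) and neglecting the reionization term, can be sketched as follows. The recombination rate alpha_rec, Peebles coefficient C, hydrogen density n_H and Hubble rate H_z are placeholders to be supplied by the user; this is an illustrative sketch, not the actual implementation in 21cmFirstCLASS.

```python
import numpy as np

# CGS constants
M_E = 9.109e-28            # electron mass [g]
HBAR = 1.055e-27           # reduced Planck constant [erg s]
K_B = 1.381e-16            # Boltzmann constant [erg/K]
EPS0 = 13.6 * 1.602e-12    # hydrogen ionization energy [erg]

def beta_ion(alpha_rec, T_gamma):
    """Eq. (7): early photoionization rate [1/s], with alpha_rec in cm^3/s, T_gamma in K."""
    prefac = (M_E * K_B * T_gamma / (2.0 * np.pi * HBAR**2)) ** 1.5
    return alpha_rec * prefac * np.exp(-EPS0 / (K_B * T_gamma))

def peebles_euler_step(x_e, z, dz, T_gamma, n_H, alpha_rec, C, H_z):
    """One Euler step of Eq. (6) from z to z - dz (reionization term neglected).

    n_H in cm^-3, H_z in 1/s; alpha_rec and C are user-supplied placeholders.
    """
    dxe_dt = C * (beta_ion(alpha_rec, T_gamma) * (1.0 - x_e)
                  - alpha_rec * n_H * x_e**2)
    dt_dz = -1.0 / (H_z * (1.0 + z))
    return x_e + dxe_dt * dt_dz * (-dz)
```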
In CLASS, however, two quantities are solved with HyRec, these are \(x_{\rm H}\) and \(x_{\rm He}\). Their relation to \(x_{e}\) is \(x_{e}=x_{\rm H}+\left(n_{\rm He}/n_{\rm H}\right)x_{\rm He}\). From this equation the physical meaning of \(x_{\rm H}\) (\(x_{\rm He}\)) should be clear--it is the contribution of ionized hydrogen (helium) number-density to the total free-electron number-density. In 21cmFirstCLASS we assume \(x_{e}\approx x_{\rm H}\), which is justified because: (1) helium recombination is over long before hydrogen recombination begins, at \(z\sim 1500\); (2) the freezout value of \(x_{\rm He}\) is an order of magnitude smaller than the freezout value of \(x_{\rm H}\); and (3) the contribution of \(x_{\rm He}\) to \(x_{e}\) is suppressed by the factor \(n_{\rm He}/n_{\rm H}\approx 0.08\). As can be seen in Fig. 3, the assumption \(x_{e}\approx x_{\rm H}\) is indeed an excellent approximation. ## IV Comparing 21cmFirstCLASS WITH 21cmFAST In \(\Lambda\)CDM, all fluctuations at the relevant scales prior to \(z=35\) can be considered linear to a very good approximation. Consistency therefore implies that 21cmFirstCLASS must be able to generate the same initial conditions as in 21cmFAST, at \(z=35\). Such a sanity check is demonstrated in Fig. 3, where we present the evolution of \(\bar{x}_{e}\) in the two codes. At \(z=35\) the two codes agree. Afterwards, the solution of the two codes deviates because of the different evolution, as was outlined at the end of Sec. II. This leads to a maximum 5% difference. Yet, this error does not propagate to the observable--the brightness temperature--as can be seen in Fig. 4. This is because \(\tau_{21}\) is not proportional to \(x_{e}\) but rather to \(x_{\rm HI}\), the neutral hydrogen fraction. Before the onset of reionization, we can approximate \(x_{\rm HI}\approx 1-x_{e}\), and a simple calculation shows that the 5% difference in \(\bar{x}_{e}\) translates to merely a 0.001% error in \(\bar{x}_{\rm HI}\). Even though the first-order statistics of the box, namely its mean, is consistent in both codes, it does not imply that higher-order statistics, like the two-point function, are the same. The Fourier transform of the two-point correlation function is the power spectrum. For the 21-cm signal, it is customary to work with a power spectrum that has units of mK\({}^{2}\), \[\Delta_{21}^{2}\left(k,z\right)=\frac{k^{3}\bar{T}_{21}^{2}\left(z\right)P_{21} \left(k,z\right)}{2\pi^{2}}, \tag{10}\] where \(\bar{T}_{21}\) is the global brightness temperature and \(P_{21}\left(k,z\right)\) is the angle-averaged Fourier transform of the two-point function \(\left\langle\delta_{21}\left(\mathbf{x},z\right)\delta_{21}\left(\mathbf{x}^{ \prime},z\right)\right\rangle\), while \(\delta_{21}\) is the local contrast in the brightness temperature, \(\delta_{21}\left(\mathbf{x},z\right)\equiv T_{21}\left(\mathbf{x},z\right)/ \bar{T}_{21}\left(z\right)-1\). We use the powerbox2 package [141] to compute \(\Delta_{21}^{2}\left(k,z\right)\) from chunks of the light-cone box of 21cmFirstCLASS. Footnote 2: github.com/steven-murray/powerbox In Fig. 5, we compare the 21-cm power spectrum of 21cmFirstCLASS and 21cmFAST. Unlike the global signal, clear differences can be seen--only because we started the simulation at a different initial time (recombination in 21cmFirstCLASS and \(z=35\) in 21cmFAST). The origin of this effect comes from _early temperature and ionization fluctuations_. 
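As an implementation note, Eq. (10) is estimated from chunks of the simulated lightcone by binning an FFT of the brightness-temperature contrast. The function below is a bare-bones numpy stand-in for the powerbox-based estimate used in this work; it is illustrative only, and the binning choices are assumptions.

```python
import numpy as np

def delta21_squared(T21_box, box_length, n_bins=12):
    """Binned Delta^2_21(k) [mK^2] from a cubic brightness-temperature box [mK].

    T21_box    : (n, n, n) array of T_21 values
    box_length : comoving side length of the box [Mpc]
    """
    n = T21_box.shape[0]
    T21_mean = T21_box.mean()
    delta = T21_box / T21_mean - 1.0                   # local contrast delta_21
    dk = 2.0 * np.pi / box_length
    delta_k = np.fft.fftn(delta) * (box_length / n) ** 3
    kf = np.fft.fftfreq(n, d=1.0 / n) * dk
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    pk = np.abs(delta_k) ** 2 / box_length**3          # P_21(k) [Mpc^3]
    edges = np.logspace(np.log10(dk), np.log10(k_mag.max()), n_bins + 1)
    k_cen, d21 = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (k_mag >= lo) & (k_mag < hi)
        if sel.any():
            kc = k_mag[sel].mean()
            k_cen.append(kc)
            d21.append(kc**3 * T21_mean**2 * pk[sel].mean() / (2.0 * np.pi**2))
    return np.array(k_cen), np.array(d21)
```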
The impact of the former kind of fluctuations--temperature fluctuations--on the 21-cm power spectrum was first discussed in Ref. [80]. In Paper II we extend the discussion on early fluctuations and explore in great detail the contribution of both temperature and ionization fluctuations. Still, Fig. 5 suggests that taking into account early temperature and ionization fluctuations results in a maximum distortion of \(\sim\)20% to the 21-cm power spectrum at \(k=0.3\,\mathrm{Mpc}^{-1}\), \(z=17\), which is below HERA's noise level. We note that this is in slight contrast with the conclusions of Ref. [80], where larger deviations are claimed. Again, we refer the reader to Paper II for an elaborate discussion on that point. ## V Combining 21cm and CMB data The anisotropies in the CMB have proven to be an invaluable source for studying different cosmological models. Likewise, the global 21-cm signal and its anisotropies are expected to contain rich information that can be employed in cosmological studies of \(\Lambda\)CDM and beyond. Because 21cmFirstCLASS already calculates the CMB anisotropies via CLASS, it is only natural to include them as part of our analysis. These two observables are uncorrelated and thus can be used to break degeneracies in the other observable. We will demonstrate this point below while working with the Fisher formalism. ### Fisher Formalism In the Fisher formalism, the covariance matrix is given by the inverse of the Fisher matrix [142; 143], \[F_{\alpha,\beta}^{21\mathrm{cm}}=\sum_{k,z}\frac{\partial\Delta_{21}^{2} \left(k,z\right)}{\partial\alpha}\frac{\partial\Delta_{21}^{2}\left(k,z \right)}{\partial\beta}\frac{1}{\left[\delta\Delta_{21}^{2}\left(k,z\right) \right]^{2}}. \tag{11}\] Here, \(\alpha\) and \(\beta\) denote the free parameters that we vary. We vary both cosmological parameters and astrophysical Figure 4: Comparison of the global brightness temperature between 21cmFirstCLASS and 21cmFAST. The curves almost totally overlap. Figure 3: Comparison of the global \(x_{e}\) evolution between 21cmFirstCLASS and 21cmFAST (in both cases we use CLASS to obtain the correct initial conditions). In the former, HyRec is used all the way from recombination to \(z=6\). In the latter, the simulation begins at \(z=35\) and \(x_{e}\) is evolved differently (see more details at the end of Sec. II). Note the consistency at \(z=35\) (though early ionization fluctuations slightly change the mean of the box in 21cmFirstCLASS—see more details in Paper II). parameters10, Footnote 10: Although some of the astrophysical parameters in 21cmFAST are defined logarithmically (e.g. \(L_{X}^{\rm(II)}=40.5\)), in our analysis we make sure we vary the linear parameters (e.g. \(L_{X}^{\rm(II)}=10^{40.5}\)). In Fig. 6, when we present the confidence level ellipses of \(\log_{10}L_{X}\), we apply the appropriate Jacobian transformation. \[(\alpha,\beta) \in \{h,\Omega_{m},\Omega_{b},A_{s}, \tag{12}\] \[L_{X}^{\rm(II)},f_{*}^{\rm(II)},f_{\rm esc}^{\rm(II)},A_{\rm LW},A_{v_{\rm cs}}\}.\] The varied astrophysical parameters in our analysis are displayed in the second row. \(L_{X}^{\rm(II)}\) is the X-ray luminosity (normalized by the SFR, in units of \(\rm erg\,sec^{-1}\,M_{\odot}^{-1}\,year\)), \(f_{*}^{\rm(II)}\) is the star formation efficiency, \(f_{\rm esc}^{\rm(II)}\) is the escape fraction of ionizing photons, and \(A_{\rm LW}\) (\(A_{v_{\rm cs}}\)) characterizes the amplitude of the LW (\(V_{cb}\)) feedback on \(M_{\rm mol,min}\). 
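In practice, once the numerical derivatives \(\partial\Delta_{21}^{2}/\partial\alpha\) have been tabulated on the \((k,z)\) grid together with the corresponding noise \(\delta\Delta_{21}^{2}(k,z)\), Eq. (11) reduces to an inverse-variance-weighted sum. The sketch below is illustrative; the array names and shapes are assumptions, with the derivatives in a real analysis coming from finite differences of 21cmFirstCLASS runs and the noise from 21cmSense.

```python
import numpy as np

def fisher_21cm(derivs, noise):
    """Eq. (11): Fisher matrix from tabulated 21-cm power-spectrum derivatives.

    derivs : (n_params, n_k, n_z) array, d Delta^2_21 / d alpha [mK^2]
    noise  : (n_k, n_z) array, 1-sigma uncertainty delta Delta^2_21 [mK^2]
    """
    n_params = derivs.shape[0]
    F = np.zeros((n_params, n_params))
    w = 1.0 / noise**2                      # inverse-variance weights
    for a in range(n_params):
        for b in range(n_params):
            F[a, b] = np.sum(derivs[a] * derivs[b] * w)
    return F

# Parameter covariance: invert the total Fisher matrix (21-cm plus CMB plus
# Gaussian priors added as 1/sigma^2 on the diagonal), e.g.
# cov = np.linalg.inv(F_21cm + F_cmb + np.diag(1.0 / prior_sigmas**2))
```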
Quantities with a super-script (II) correspond to pop-II stars. We also vary the analogous pop-III parameters, around the same fiducial values as the pop-II ones. The quantity \(\delta\Delta_{21}^{2}\,(k,z)\) that appears in the denominator of Eq. (11) denotes the noise of the experiment, in our case, HERA. We simulate HERA's design sensitivity noise with 21cmSense11[144; 145]. In its final stage, HERA will have in its core 331 antennae, each of which has a diameter of \(14\,\rm m\), arranged in a hexagonal array with 11 antennae at its base. We assume the frequency range of HERA will span between \(50\,\rm MHz\) (\(z=27.4\)) and \(225\,\rm MHz\) (\(z=5.3\)) with a bandwidth of \(8\,\rm MHz\). This gives a total number of 22 different frequency bands, but in our analysis, to be conservative, we discard the bands below \(z\!=\!6\) as in that regime the exact reionization details are highly uncertain, leaving us with 19 redshift bins in total. In each frequency band we assume there are 82 channels, corresponding to 1024 channels over \(100\,\rm MHz\) bandwidth [63]. In addition, we assume HERA operates for six hours per night during 540 days in total, the receiver temperature is \(T_{\rm rec}=100\,\rm K\) and the sky temperature follows \(T_{\rm sky}\left(\nu\right)=60\,\rm K\left(\nu/300\,\rm MHz\right)^{-2.55}\). Footnote 11: Although some of the astrophysical parameters in 21cmFAST are defined logarithmically (e.g. \(L_{X}^{\rm(II)}=40.5\)), in our analysis we make sure we vary the linear parameters (e.g. \(L_{X}^{\rm(II)}=10^{40.5}\)). In Fig. 6, when we present the confidence level ellipses of \(\log_{10}L_{X}\), we apply the appropriate Jacobian transformation. Finally, we consider in our analysis "pessimistic", "moderate" and "optimistic" foregrounds scenarios. In the moderate (pessimistic) foregrounds scenario, the wedge is assumed to extend to \(\Delta k_{||}=0.1h\,\rm Mpc^{-1}\) beyond the horizon wedge limit, and all baselines are added coherently (incoherently), while in the optimistic foregrounds scenario the boundary of the foreground wedge is set by the FWHM of the primary beam of HERA and there is no contamination beyond this boundary. As motivated above, to break degeneracies in the 21-cm signal, we consider future measurements from CMB-S4 [146]. We follow Refs. [147; 148; 149; 150] and evaluate the Fisher matrix associated with CMB-S4 measurements via \[F_{\alpha,\beta}^{\rm CMB}=\sum_{\nu}\sum_{\ell=30}^{300}\frac{2 \ell+1}{2}f_{\rm sky}{\rm Tr}\left[C_{\ell}^{-1}\frac{\partial C_{\ell}}{ \partial\alpha}C_{\ell}^{-1}\frac{\partial C_{\ell}}{\partial\beta}\right], \tag{13}\] Figure 5: Comparison of the 21-cm power spectrum between 21cmFirstCLASS and 21cmFAST, for three different wavenumbers. The error bars correspond to HERA’s noise in its design sensitivity under the assumption of optimistic foregrounds (see Sec. V). As we explain in the text, the source for the differences between the curves is early temperature and ionization fluctuations—see more details in Paper II. where \(f_{\rm sky}=40\%\) is the sky-fraction coverage and the matrices \(C_{\ell}\left(\nu\right)\) are (neglecting the lensing contribution) \[C_{\ell}\left(\nu\right)=\begin{bmatrix}\tilde{C}_{\ell}^{\rm TT}\left(\nu \right)&C_{\ell}^{\rm TE}\left(\nu\right)\\ C_{\ell}^{\rm TE}\left(\nu\right)&\tilde{C}_{\ell}^{\rm EE}\left(\nu\right) \end{bmatrix}. 
\tag{14}\] Here, tilde-less quantities are the noise-free CMB anisotropies power spectrum that we take from CLASS, while tilde-full quantities include the CMB-S4 noise contribution, \(\tilde{C}_{\ell}^{\rm XX}=C_{\ell}^{\rm XX}+N_{\ell}^{\rm XX}\). The noise power spectra are given by \[N_{\ell}^{\rm TT}\left(\nu\right)=\Delta_{T}^{2}\left(\nu\right)\,{\rm e}^{ \ell\left(\ell+1\right)\sigma_{b}^{2}\left(\nu\right)},\quad N_{\ell}^{\rm EE }\left(\nu\right)=2\times N_{\ell}^{\rm TT}\left(\nu\right), \tag{15}\] where \(\Delta_{T}\left(\nu\right)\) is the temperature sensitivity and \(\sigma_{b}\left(\nu\right)=\theta_{\rm FWHM}\left(\nu\right)/\sqrt{8\ln 2}\), with the full-width-half-maximum \(\theta_{\rm FWHM}\) given in radians. We consider three frequency channels, centered at \(\nu=93,\,145,\,225\,\rm GHz\) with \(\Delta_{T}=1.5,1.5,4.8\,\mu\rm K\cdot arcmin\) and \(\theta_{\rm FWHM}=2.2,1.4,1\,\rm arcmin\). Finally, we add the HERA and CMB-S4 Fisher matrices, \[F_{\alpha,\beta}^{\rm tot}=F_{\alpha,\beta}^{\rm 21cm}+F_{\alpha,\beta}^{\rm CMB}. \tag{16}\] ### Forecasts Armed with our Fisher formalism, we now vary the free parameters of Eq. (12), while imposing Planck 2018 priors [39] on the cosmological parameters. Fig. 6 shows our results. As expected, adding the CMB-S4 information helps in mitigating all the degeneracies between the different parameters, especially in the cosmological parameters. Because the CMB anisotropies depend only on the cosmological parameters, and the cosmological parameters are not strongly degenerate with the astrophysical parameters, including the information of the CMB power spectra in the analysis does not help considerably in alleviating degeneracies in the astrophysical parameters. Unlike the cosmological parameters, the well-known degeneracy between \(f_{*}^{\rm(II)}\) and \(f_{\rm esc}^{\rm(II)}\) is evident [151, 143, 152]. These parameters exhibit a negative correlation as the ionization efficiency is proportional to the product of \(f_{*}^{\rm(II)}\) and \(f_{\rm esc}^{\rm(II)}\). Similarly, as the X-ray heating efficiency is proportional to the product of \(f_{*}^{\rm(II)}\) and \(L_{X}^{\rm(II)}\), there is a negative correlation between these parameters as well. Note that the degeneracy of \(f_{\rm esc}^{\rm(II)}\) and \(L_{X}^{\rm(II)}\) with \(f_{*}^{\rm(II)}\) is not complete because the latter also determines the efficiency of the Ly\(\alpha\) flux. Not unexpectedly, for \(\Lambda\)CDM, CMB-S4 will have more constraining power than HERA, as indicated by Fig. 7. Here, all the astrophysical parameters were marginalized (fixed) when only the information of HERA (CMB- S4) was considered. As we shall see in the next section, this statement can be different for beyond \(\Lambda\)CDM cosmologies, and the 21-cm data can play a more dominant role. Figure 6: Forecasts of 1-\(\sigma\) and 2-\(\sigma\) confidence levels of some of the free parameters in Eq. (12) (the rest of the parameters not shown here have been marginalized), while imposing Planck 2018 priors [39] on the cosmological parameters. Blue ellipses correspond to HERA-only forecasts, while the green ellipses account for information coming from CMB-S4 as well. Results are shown for the moderate foreground scenario, although they barely change when pessimistic foreground scenario is considered. ## VI Scattering dark matter To demonstrate the potential of 21cmFirstCLASS in studying non-linear models beyond the standard model, we now consider SDM. 
In this model, a fraction \(f_{\chi}\) of the dark matter consists of particles of mass \(m_{\chi}\) that interact directly in a non-gravitational manner with baryons. In this paper, we focus on \(f_{\chi}=100\%\) and \(m_{\chi}=1\,\mathrm{MeV}\), although these parameters can be varied in our code (we vary them in Ref. [126]). The cross section for the baryons-SDM interaction is parameterized by \(\sigma=\sigma_{n}\left(v/c\right)^{n}\), where \(v\) is the relative velocity between the interacting baryon and SDM particles. We also fix \(n=-4\) to correspond to a Coulomb-type interaction (we will relax this assumption in Ref. [126]) and thus \(\sigma_{-4}\) is the only free parameter in the model we are considering. There are two consequences to the direct interaction between the baryons and the SDM: (1) they transfer energy, thereby the cold SDM is able to cool the hotter baryonic gas, while the baryons heat the SDM and increase its temperature \(T_{\chi}\), and (2) the bulk relative velocity \(V_{\chi b}\) between the two fluids is decreased via a drag force that they apply on each other. The former effect made the SDM a very popular dark matter candidate after the EDGES collaboration announced they measured a minimum value of \(\tilde{T}_{21}=-500^{+200}_{-500}\,\mathrm{mK}\) (at 99% confidence level) [153], which is \(3.8\sigma\) below \(\Lambda\)CDM expectation. More recent results from the SARAS-3 experiment [154] do not reproduce the detection, however. ### Evolution equations Below we write the differential equations that have to be solved in the SDM model. These equations were originally derived in Ref. [99] and appeared since then in many works in the literature. We use a slightly different notation which will be useful for the derivation of the _DM-TCA_ equations (see Appendix C). The SDM interaction modifies the evolution equation for \(T_{k}\), Eq. (3), which now reads (note that from this point on we will mostly denote the gas temperature with \(T_{b}\) in order to have symmetrical expressions for the baryons and SDM) \[\frac{dT_{b}}{dz}=\frac{dt}{dz}\left[-2HT_{b}+\Gamma_{C}\left(T_{\gamma}-T_{b} \right)+\frac{2\dot{Q}_{b}}{3k_{B}}+\left.\frac{dT_{b}}{dt}\right|_{\text{ext }}\right], \tag{17}\] and a similar equation for the SDM temperature exists, \[\frac{dT_{\chi}}{dz}=\frac{dt}{dz}\left[-2HT_{\chi}+\frac{2\dot{Q}_{\chi}}{3k _{B}}+\left.\frac{dT_{\chi}}{dt}\right|_{\text{ext}}\right], \tag{18}\] where \[\left.\frac{dT_{\chi}}{dt}\right|_{\text{ext}}=\frac{2}{3}\frac{T_{\chi}}{1+ \delta_{\chi}}\frac{d\delta_{\chi}}{dt}, \tag{19}\] and \(\delta_{\chi}\equiv\delta\rho_{\chi}/\bar{\rho}_{\chi}\) is the SDM density contrast. To solve for \(T_{b}\) and \(T_{\chi}\), we need a third differential equation, for the evolution of the bulk relative velocity between the fluids \(V_{\chi b}\), \[\frac{dV_{\chi b}}{dz} = \frac{dt}{dz}\left[-HV_{\chi b}-D\left(V_{\chi b}\right)\right] \tag{20}\] \[= \frac{dt}{dz}\left[-HV_{\chi b}-\sum_{t}D_{t}\left(V_{\chi b} \right)\right],\] where \(D\left(V_{\chi b}\right)\) is the mutual drag force that acts on the baryons and SDM fluids. It is the sum of all the drag forces that arise from the interaction of an SDM particle with a standard model target particle of type \(t\), \[D_{t}\left(V_{\chi b}\right)=\frac{\rho_{\text{tot}}\sigma_{-4}c^{4}}{\rho_{b }u_{\chi t}^{2}}\frac{\rho_{t}\Gamma\left(r_{t}\right)}{m_{t}+m_{\chi}}. 
\tag{21}\] Here, \(\rho_{b}\) (\(\rho_{\chi}=f_{\chi}\rho_{c}\)) is the baryons (SDM) energy density, \(\rho_{\text{tot}}=\rho_{b}+\rho_{\chi}\) (it is not the total matter energy density if \(f_{\chi}<1\)), \(m_{t}\) is the mass of the target particle, and \(\rho_{t}\) is the energy density of the target particles. The function \(F\left(r_{t}\right)\) is \[F\left(r_{t}\right)=r_{t}^{-2}\left[\text{erf}\left(\frac{r_{t}}{\sqrt{2}} \right)-\sqrt{\frac{2}{\pi}}r_{t}\text{e}^{-r_{t}^{2}/2}\right]\underset{r_{t} \ll 1}{\approx}\sqrt{\frac{2}{9\pi}}r_{t}, \tag{22}\] Figure 7: Forecasts of 1-\(\sigma\) and 2-\(\sigma\) confidence levels of the free cosmological parameters in Eq. (12), while imposing Planck 2018 priors [39] on the cosmological parameters. Blue (orange) ellipses correspond to forecasts when only information from HERA (CMB-S4) is considered, while the green ellipses account for information coming from both HERA and CMB-S4. All the astrophysical parameters have been marginalized (fixed) in the calculation of HERA (CMB-S4) Fisher matrix. For HERA, results are shown for the moderate foreground scenario, although they barely change when the pessimistic foreground scenario is considered. where \(r_{t}\equiv V_{\chi b}/u_{\chi t}\) and \(u_{\chi t}\) is the thermal velocity, \[u_{\chi t}\equiv\sqrt{\frac{k_{B}T_{b}}{m_{t}}+\frac{k_{B}T_{\chi}}{m_{\chi}}}. \tag{23}\] The cooling/heating rates \(\dot{Q}_{b}\) and \(\dot{Q}_{\chi}\) that appear in Eqs. (17) and (18) are given by \[\dot{Q}_{b} = \frac{3}{2}\Gamma_{\chi b}k_{B}\left(T_{\chi}-T_{b}\right) \tag{24}\] \[+\frac{\rho_{\chi}}{\rho_{\rm tot}}V_{\chi b}\sum_{t}\frac{m_{ \chi}m_{b}}{m_{\chi}+m_{t}}D_{t}\left(V_{\chi b}\right)\] \[\dot{Q}_{\chi} = \frac{3}{2}\frac{n_{b}}{n_{\chi}}\Gamma_{\chi b}k_{B}\left(T_{b}- T_{\chi}\right) \tag{25}\] \[+\frac{\rho_{b}}{\rho_{\rm tot}}V_{\chi b}\sum_{t}\frac{m_{\chi}m _{t}}{m_{\chi}+m_{t}}D_{t}\left(V_{\chi b}\right),\] where \(n_{b}=\rho_{b}/m_{b}\) (\(n_{\chi}=\rho_{\chi}/m_{\chi}\)) is the baryons (SDM) number-density. Finally, the energy transfer rate \(\Gamma_{\chi b}\) is \[\Gamma_{\chi b}=\sqrt{\frac{2}{\pi}}\frac{2\sigma_{-4}c^{4}\rho_{\chi}}{3n_{b} }\sum_{t}\frac{\rho_{t}{\rm e}^{-r_{t}^{2}/2}}{\left(m_{t}+m_{\chi}\right)^{2 }u_{\chi t}^{3}}. \tag{26}\] Two SDM models are typically considered in the literature12. The first one considers millicharged DM [106; 107; 108; 109; 110; 111; 112; 113; 114], in which the target particles are _free_ electrons and protons, \(n_{t1}=n_{t2}=n_{e}\), \(m_{t1}=m_{e}\), \(m_{t2}=m_{p}\). Because the number-density of the target particles is proportional to \(x_{e}\), which is very small between recombination and reionization, this model does not generate strong signatures in the 21-cm signal, unless very large cross-sections (that are already ruled out by CMB measurements) are considered. Instead, we focus on a baryo-philic SDM [106; 107; 108; 109; 110; 111; 112; 113; 114], in which SDM interacts with _all_ standard model particles, i.e. \(\rho_{t}=\rho_{b}\) and \(m_{t}=m_{b}\), where the mean baryon mass is given by Footnote 12: There are also models in which the SDM interacts with either protons or electrons, but not both [115; 116; 117; 118; 119; 120; 121], and there are models in which the SDM directly interacts with CDM [122; 123]. 
\[m_{b}=\frac{m_{\rm H}}{\left[1-\left(1-m_{\rm H}/m_{\rm He}\right)Y_{\rm He}\right]\left(1-x_{e}\right)}, \tag{27}\] where \(m_{\rm H}\) (\(m_{\rm He}\)) is the mass of the hydrogen (helium) atom and \(Y_{\rm He}=\rho_{\rm He}/\rho_{b}\approx 0.245\) is the helium mass-fraction. As in \(\Lambda\)CDM, we solve Eqs. (17)-(20) at each cell using the Euler method, with a step-size of \(\Delta z_{n}=0.1\). At low temperatures, the logarithmic redshift sampling of the standard 21cmFAST below \(z=35\) is not sufficient, and we continue to work with \(\Delta z_{n}=0.1\) in the low-redshift regime. Furthermore, we note that attempting to solve Eqs. (17)-(20) via the Euler method for large cross-sections results in overshooting of the solution due to the strong coupling between the baryons and the SDM at low redshifts. We therefore had to devise a dedicated method for solving the equations in the strong coupling limit--this is the _DM-TCA_ (in contrast with the _Compton-TCA_). We elaborate more on that method in Appendix C. ### Initial conditions To generate the SDM initial conditions for 21cmFAST we use a modified version of CLASS13 in which the SDM fluid variables \(\delta_{\chi}\), \(\theta_{\chi}\), \(T_{\chi}\), are solved simultaneously with the rest of the standard fluid variables of the baryons and CDM (more details on that version can be found in Ref. [100]). The present total matter density transfer function is then given by \(\Omega_{m}\mathcal{T}_{m}=\Omega_{c}\left[\left(1-f_{\chi}\right)\mathcal{T}_{c}+f_{\chi}\mathcal{T}_{\chi}\right]+\Omega_{b}\mathcal{T}_{b}\). There is a subtlety in the calculation of \(\mathcal{T}_{m}\left(k,z=0\right)\) that we would like to address. The evolution of \(\dot{Q}_{b}\) and \(\delta_{\chi}\) depends on the gas temperature \(T_{b}\) since the momentum exchange rate depends on the thermal velocity \(u_{\chi t}\). Even though CLASS uses a toy model for the X-ray heating rate \(\epsilon_{X}\) (see Eq. (5)), the resulting transfer function is still correct. The reason for this is that \(u_{\chi t}\) competes with \(V_{\chi b}\) in the evolution equations of \(\delta_{b}\) and \(\delta_{\chi}\), and since \(V_{\chi b}\) becomes very small already at high redshifts (c.f. Fig. 8), \(u_{\chi t}\) turns out to have minimal impact on the low-redshift evolution. Footnote 13: github.com/kbody/class_public/tree/dmeff We also extract from CLASS the quantity \(\mathcal{T}_{v_{\chi b}}\left(k,z_{\rm rec}\right)\), the transfer function of the relative velocity between baryons and SDM at recombination, with an equation similar to Eq. (9). In 21cmFirstCLASS, we then generate a \(\mathbf{V}_{\chi b}\left(\mathbf{k},z_{\rm rec}\right)\) box in Fourier space via [155] \[\mathbf{V}_{\chi b}\left(\mathbf{k},z_{\rm rec}\right)=i\frac{\mathbf{k}}{k}\frac{\mathcal{T}_{v_{\chi b}}\left(k,z_{\rm rec}\right)}{\mathcal{T}_{m}\left(k,z=0\right)}\delta_{m}\left(\mathbf{k},z=0\right). \tag{28}\] This yields a \(\mathbf{V}_{\chi b}\left(\mathbf{k},z_{\rm rec}\right)\) field that is curl-free and completely correlated with \(\delta_{m}\left(\mathbf{k},z=0\right)\).
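To make the prescription of Eq. (28) concrete, the sketch below builds the \(V_{\chi b}\) magnitude box from a given \(\delta_{m}(\mathbf{x},z=0)\) box and the two transfer functions. This is a minimal illustration using numpy only; the cubic-grid assumption, the box length and the interpolated transfer functions (`T_vchib_of_k`, `T_m_of_k`) are placeholder inputs rather than the actual 21cmFirstCLASS internals.

```python
import numpy as np

def make_Vchib_box(delta_m_x, box_len_Mpc, T_vchib_of_k, T_m_of_k):
    """Real-space |V_chib|(x, z_rec) box following Eq. (28):
    V_chib(k) = i (k_vec/k) [T_vchib(k)/T_m(k)] delta_m(k, z=0).
    Assumes a cubic box; `T_vchib_of_k` and `T_m_of_k` must accept arrays of k (1/Mpc)."""
    n = delta_m_x.shape[0]
    delta_m_k = np.fft.fftn(delta_m_x)

    # Cartesian components of the wavevector on the FFT grid (1/Mpc)
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len_Mpc / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    k[0, 0, 0] = 1.0  # avoid 0/0; the k=0 mode is explicitly zeroed below

    ratio = T_vchib_of_k(k) / T_m_of_k(k)
    ratio[0, 0, 0] = 0.0  # no bulk relative velocity in the mean mode

    # Build each Cartesian component in Fourier space and accumulate |V|^2
    V2 = np.zeros_like(delta_m_x, dtype=float)
    for ki in (kx, ky, kz):
        Vi_x = np.fft.ifftn(1j * (ki / k) * ratio * delta_m_k).real
        V2 += Vi_x**2
    return np.sqrt(V2)
```

Because each Fourier component is proportional to \(ik_{i}/k\) times a real function of \(k\), the field is curl-free by construction; its real-space magnitude is the quantity whose statistics are described next.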
In real space, the box of \(V_{\chi b}\left(\mathbf{x},z_{\rm rec}\right)=\left[\mathbf{V}_{\chi b}\left( \mathbf{x},z_{\rm rec}\right)\cdot\mathbf{V}_{\chi b}\left(\mathbf{x},z_{\rm rec }\right)\right]^{1/2}\) has a Maxwell-Boltzmann distribution with an RMS of \[\langle V_{\chi b}^{2}\left(z_{\rm rec}\right)\rangle=A_{s}\int_{k_{\rm min}}^{k_{ \rm max}}\frac{dk}{k}\left(\frac{k}{k_{\star}}\right)^{n_{s}-1}\mathcal{T}_{v _{\chi b}}^{2}\left(k,z_{\rm rec}\right), \tag{29}\] where \(k_{\rm min}\) (\(k_{\rm max}\)) are determined from the box (cell) size. Two notes on the above prescription. First, the mean of the \(V_{\chi b}\left(\mathbf{x},z_{\rm rec}\right)\) box is \(\langle V_{\chi b}\left(z_{\rm rec}\right)\rangle=\sqrt{8/\left(3\pi\right)} \langle V_{\chi b}^{2}\left(z_{\rm rec}\right)\rangle^{1/2}\approx 0.92\langle V_{ \chi b}^{2}\left(z_{\rm rec}\right)\rangle^{1/2}\). Because of the finite box and cell size this is _not_ the true globally-averaged value of \(V_{\chi b}\) at recombination. For example, in Fig. 8 the initial value of \(V_{\chi b}\) in all curves is off by \(\sim 3\%\). As a consequence, when we plot the mean values of our box at Sec. VI.5, they do not correspond precisely to the true global values. Since in this paper we are mostly interested in the fluctuations of the 21-cm signal, we are not bothered by that nuance. Secondly, the Maxwellianity of \(V_{\chi b}\) breaks right after recombination. This is because of the drag term in Eq. (20), as it renders the differential equation for \(V_{\chi b}\) non-linear. Of course, there is no reason to expect that precisely at recombination \(V_{\chi b}\) was Maxwellian. In fact, in the derivation of Eqs. (17)-(20), Maxwellianity was assumed throughout. We are therefore being conservative and solve in this work the same equations commonly found in the general SDM literature, despite the inherent inconsistency that this model has. Clearly, the Maxwellianity assumption has to be relaxed, and we leave the study of non-Maxwellianities for future work (see, however, very interesting insights from Refs. [156; 157] on that particular subject). ### Small temperature corrections The direct coupling between SDM and baryons may cause the temperature of the latter to reach very low values, much less than \(1\,\mathrm{K}\) (c.f. Fig. 9). This requires modifying some of the key quantities used in 21cmFAST. We take the small temperature correction for the brightness temperature from Ref. [123], \[T_{21}=\frac{1}{1+z}\left[\frac{\zeta\left(z\right)}{\mathrm{e}^{\zeta\left(z \right)}-1}T_{s}-T_{\gamma}\right]\left(1-\mathrm{e}^{-\tau_{21}}\right), \tag{30}\] where \(\zeta\left(z\right)=T_{\star}/T_{s}\left(z\right)\) and \(T_{\star}=68.2\,\mathrm{mK}\) is the hydrogen hyperfine energy gap (in units of mK). Normally, \(T_{s}\gg T_{\star}\), and so the new \(\zeta\) correction in Eq. (30) approaches 1. When \(T_{s}\) becomes comparable to \(T_{\star}\), the new term becomes important. Yet, because of the following modification, we will see in Sec. VI.5 that \(T_{s}\) does not become very small even if \(T_{b}\) does. The \(\mathrm{Ly}\alpha\) coupling coefficient \(\tilde{x}_{\alpha}\) is proportional to the \(\mathrm{Ly}\alpha\) flux times a correction factor \(\tilde{S}_{\alpha}\). In the standard 21cmFAST, \(\tilde{S}_{\alpha}\) is evaluated from the fit of Ref. [128]. This fit becomes inadequate at low temperatures (when \(T_{b}\lesssim 2\,\mathrm{K}\)). We therefore follow Ref. [106] and adopt the wing approximation from Refs. 
[158; 159] to evaluate \(\tilde{S}_{\alpha}\) (see more details in Appendix D). Another modification that has to be done is in the recombination rate \(\alpha_{\mathrm{rec}}\). In the standard 21cmFAST, a fit for the case-A recombination rate is used [135]. Again, the validity of this fit breaks at low temperatures. We thus adopt our HyRec scheme that was described in Sec. III (the SDM does not alter the physics of recombination and so no further modifications in HyRec are required). Finally, we comment that in 21cmFAST, the collisional coupling \(x_{\mathrm{coll}}\) is evaluated from tabulated values of \(\kappa_{1-0}^{\mathrm{iH}}\). These are the collision rates of hydrogen atoms with species of type \(i\) (in units of \(\mathrm{cm}^{3}/\mathrm{sec}\)). The tabulated values stop at \(T_{b}=1\,\mathrm{K}\) and the logic of the code is to use \(\kappa_{1-0}^{\mathrm{iH}}\left(T_{b}=1\,\mathrm{K}\right)\) if \(T_{b}<1\,\mathrm{K}\). Because the extrapolation to lower temperatures is not trivial and is beyond the scope of this paper, we leave it for future work. Having said that, we emphasize that \(x_{\mathrm{coll}}\) is mainly relevant during the dark ages, and thus the forecasts we derive in Sec. V (which depend on the physics during cosmic dawn) are insensitive to the exact values of \(\kappa_{1-0}^{\mathrm{iH}}\). ### Small velocity corrections The contribution of pop-III stars comes from halos that are massive enough to host them. In 21cmFAST, pop-III stars reside in molecular cooling halos and the aforementioned minimum threshold halo mass is proportional to [85] \[M_{\mathrm{mol,min}}\left(\mathbf{x},z\right)\propto\left[1+A_{v_{cb}}\frac{V _{cb}\left(\mathbf{x},z_{\mathrm{rec}}\right)}{\langle V_{cb}^{2}\left(z_{ \mathrm{rec}}\right)\rangle^{1/2}}\right]^{\beta_{v_{cb}}}, \tag{31}\] where \(V_{cb}\left(\mathbf{x},z_{\mathrm{rec}}\right)\) is the relative velocity between baryons and CDM at the time of recombination (obtained in a very similar process to the one outlined in Sec. VI.2), and \(A_{v_{cb}}\), \(\beta_{v_{cb}}>0\) are free phenomenological parameters. Note that Eq. (31) is the source for the velocity acoustic oscillations (VAOs)--a standard ruler imprinted on the 21-cm power spectrum at large scales [160; 155; 161]. In the presence of SDM, there are two dark matter species that hamper pop-III structure formation due to their relative velocities with the baryons--CDM and SDM. We weigh their contributions to \(M_{\mathrm{mol,min}}\) in the following way, \[M_{\mathrm{mol,min}}\left(\mathbf{x},z\right) \propto \left\{1+A_{v_{cb}}\Bigg{[}\left(1-f_{\chi}\right)\frac{V_{cb} \left(\mathbf{x},z_{\mathrm{rec}}\right)}{\langle V_{cb}^{2}\left(z_{\mathrm{ rec}}\right)\rangle^{1/2}}\right. \tag{32}\] \[\left.+f_{\chi}\frac{V_{cb}\left(\mathbf{x},z\right)}{\langle V_{ cb}^{2}\left(z_{\mathrm{rec}}\right)\rangle^{1/2}}\frac{1+z_{\mathrm{rec}}}{1+z} \Bigg{]}\right\}^{\beta_{v_{cb}}}\] The reason for this modelling is because of the following. If \(f_{\chi}=0\) then Eq. (32) becomes identical to Eq. (31). If \(f_{\chi}\approx 1\), then the second term in Eq. (32) dominates. Note that in the special case of \(f_{\chi}\approx 1\) and very small \(\sigma_{-4}\), SDM behaves as CDM, \(V_{\chi b}\approx V_{cb}\propto(1+z)\), and Eq. (31) is again restored in that scenario. For cross-sections large enough, \(V_{\chi b}\ll V_{cb}\) (c.f. Fig. 8). Thus, in an SDM universe, Eq. 
(32) implies that \(M_{\mathrm{mol,min}}\) is smaller and hence more pop-III stars can be born, thereby pulling cosmic dawn to higher redshifts. The SFRD in 21cmFAST depends both on \(M_{\mathrm{mol,min}}\) and on the halo mass function (HMF). The latter is modified by SDM in two ways. First, the matter-density variance \(\sigma\left(M\right)\) is reduced because of the suppression in the matter power spectrum [106]. And secondly, the fitting function that is used for the evaluation of the HMF is modified. In this work, the former effect is already taken into account in our analysis, while the second is not. We use the Sheth-Tormen fitting function [162], which was calibrated based on CDM N-body simulations. It is not clear how the fitting parameters of the Sheth-Tormen HMF are modified if SDM is considered instead of CDM. We leave the exploration of this subtlety14 for future work. Footnote 14: We thank Mihir Kulkarni for drawing our attention to this assumption in our analysis. ### Results - SDM In what follows we will consider three case studies where \(\sigma_{-4}\) is equal to \(10^{-41}\,\)cm\({}^{2}\), \(10^{-42}\,\)cm\({}^{2}\) or \(10^{-43}\,\)cm\({}^{2}\). The impact of these cross-sections on the evolution of the baryons and SDM fluids is most clearly seen in Fig. 8 where we plot the "global" \(V_{\chi b}\) (see caveat below Eq. (29)). The green curve that corresponds to \(\sigma_{-4}=10^{-43}\,\)cm\({}^{2}\) can be considered as "almost \(\Lambda\)CDM" because \(V_{\chi b}\propto(1+z)\), which is indeed the evolution of \(V_{cb}\) when there is no drag term in Eq. (20). In contrast, the blue curve of \(\sigma_{-4}=10^{-41}\,\)cm\({}^{2}\) decays very quickly because the drag term in Eq. (20) dominates over the Hubble term. The case of \(\sigma_{-4}=10^{-42}\,\)cm\({}^{2}\) (orange curve) exhibits a mixed behavior; initially the Hubble term dominates, then at \(z\sim 100\) the drag term wins, and finally at \(z\sim 15\) the Hubble term dominates again once \(V_{\chi b}\) is small enough. Next, we consider the evolution of \(T_{b}\) and \(T_{\chi}\) as it appears in Fig. 9. Let us focus first on the solid curves of \(\sigma_{-4}=10^{-41}\,\)cm\({}^{2}\) where the new-physics is most extreme. As expected, the rapid interactions between the baryons and the cold SDM cools down the former considerably. Once stars have been formed, their radiated X-rays heat up the gas, as in \(\Lambda\)CDM. Note that the turning-point of the red solid curve appears before the other red curves, this is because a very cold baryonic gas reacts to the slightest source of heating. In fact, without X-rays, the baryons would have been tightly coupled to the SDM at \(z\sim 17\) because the interaction rate increases as \(V_{\chi b}\) decreases, and \(V_{\chi b}\) already approaches zero at low redshifts. As for \(T_{\chi}\), we can see that the Hubble cooling in Eq. (18) mostly dominates w.r.t the \(\dot{Q}_{\chi}\) heating term. Unlike the baryons, which undergo a lot of SDM scattering, the SDM particles barely feel the baryons. This is because \(\rho_{\chi}\) is comparable to \(\rho_{b}\) for \(f_{\chi}=100\%\). However, their number-densities are not; \(m_{b}\approx 1\,\)GeV \(\gg m_{\chi}\) in the model that we are considering and thus \(n_{\chi}\gg n_{b}\), namely the SDM particles vastly outnumber the baryons. Nevertheless, the SDM is not completely oblivious to the presence of the baryons and it begins to heat-up at \(z\sim 15\) once the temperature difference becomes large enough. 
Then, at \(z\sim 10\) the Hubble cooling wins again and the SDM is further cooled-down. All the physics discussed above applies as well to the dashed and dotted curves in Fig. 9, although to a much lesser extent. Fig. 9 also presents the evolution of the spin temperature. Let us begin the discussion this time with the dashed and dotted curves that correspond to \(\sigma_{-4}=10^{-42}\,\)cm\({}^{2}\) and \(\sigma_{-4}=10^{-43}\,\)cm\({}^{2}\), respectively. It appears that the WF coupling is stronger for the dashed curve and thus the cosmic dawn allegedly arrives earlier when the cross-section is larger. In the SDM model there are many factors that affect the onset of cosmic dawn. For example, as Ref. [106] pointed out, the matter power spectrum is suppressed on small scales due to the presence of the SDM. This fact contributes to the delaying of cosmic dawn (in a similar mechanism as in FDM [95; 96]). However, there are other competing effects. First, since \(T_{\alpha}\approx T_{k}\), smaller \(T_{k}\) tends to drive \(T_{s}\) to smaller values (note however that this effect has nothing to do with the onset of cosmic dawn). Secondly, the lower \(V_{\chi b}\) values imply a smaller \(M_{\rm mol,min}\) (c.f. Eq. (32)), which means Figure 8: Evolution of the “global” \(V_{\chi b}\) (see caveat below Eq. (29)) for three different cross-sections. In this figure and in the upcoming figures we fix \(m_{\chi}=1\,\)MeV and \(f_{\chi}=100\%\). Figure 9: Evolution of the global gas kinetic temperature, SDM kinetic temperature and the spin temperature, for three different cross-sections. Solid, dashed and dotted lines correspond to \(\sigma_{-4}=10^{-41},\,10^{-42},\,10^{-43}\,\)cm\({}^{2}\), respectively. We note that the \(T_{s}\) solid green curve is most likely too high between \(20\lesssim z\lesssim 30\), see text for further details. that smaller halos (that are much more abundant) can form stars more easily. On the other hand, there are two more effects that tend to weaken the coupling of \(T_{k}\) to \(T_{s}\) for larger cross-sections: (1) Smaller \(T_{k}\) implies smaller \(\tilde{S}_{\alpha}\) (see Appendix D), and (2) smaller \(M_{\rm mol,min}\) leads to a stronger LW radiation that impedes stars formation (although the LW feedback effect may yield a weaker LW flux, so it is not clear a priori if this effect enhances or degrades the coupling of \(T_{s}\) to \(T_{k}\)). All in all, we find that for the model that we are considering15, \(T_{s}\) is more strongly coupled to \(T_{k}\) when the cross-section is larger. Footnote 15: We did witness a weaker WF coupling with much stronger cross-sections, or when we considered \(m_{\chi}=1\,\mathrm{GeV}\). As for the solid green curve in Fig. 9, there is some interesting physics going on. At \(z\gtrsim 100\) the spin temperature departs from the CMB temperature and approaches \(T_{k}\), as in \(\Lambda\)CDM. Afterwards, at \(z\sim 100\) the spin temperature is driven back to \(T_{\gamma}\) because \(x_{\rm coll}\) becomes too small; \(x_{\rm coll}\propto\kappa^{\rm HI}_{1-0}\) and the latter decreases with \(T_{k}\). Then, at \(z\sim 50\) an unexpected feature occurs--instead of continuing to approach \(T_{\gamma}\), \(T_{s}\) is driven back towards \(T_{k}\). This peculiar feature is of course not a consequence of the cosmic dawn beginning at such a high redshift. It is simply because \(x_{\rm coll}T_{k}^{-1}\gg T_{\gamma}^{-1}\), even though \(x_{\rm coll}\ll 1\) at that stage, so \(T_{k}\) has more weight in determining the value of \(T_{s}\). 
At \(z\sim 30\) we then see another rise in \(T_{s}\). This one however is not the result of another physical mechanism, but rather an _artifact_ of our calculation at low temperatures. As was discussed at the end of Sec. VI.3, 21cmFAST does not have the ability to calculate \(\kappa^{\rm HI}_{i-0}\) reliably below \(T_{k}=1\,\mathrm{K}\). As clearly can be seen from Fig. 9, the sudden rise in \(T_{s}\) at \(z\sim 30\) corresponds to \(T_{k}\) crossing \(1\,\mathrm{K}\). We therefore believe that for \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\) our calculation overestimates \(T_{s}\) at \(20\lesssim z\lesssim 30\). At lower redshifts, after stars have started to emit \(\mathrm{Ly}\alpha\) radiation and \(T_{s}\) is coupled to \(T_{k}\) via the WF effect, our calculation most likely fixes itself. Nevertheless, we note that \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\) was already on the verge of being ruled out by Planck measurements [100; 102], so we will not be concerned by theoretical uncertainties from such large cross-sections. We leave further treatment of this to future work [126]. It is also interesting to inspect the evolution of \(x_{e}\) in an SDM universe. For \(\sigma_{-4}=10^{-42}\,\mathrm{cm}^{2}\) and \(\sigma_{-4}=10^{-43}\,\mathrm{cm}^{2}\), Fig. 10 shows that SDM barely makes any difference in the evolution of \(x_{e}\) compared to \(\Lambda\)CDM. However, a surprising feature can be seen when we consider \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\); at \(z\sim 100\) we see that \(x_{e}\) departs from the \(\Lambda\)CDM expectation towards lower values. Normally, in \(\Lambda\)CDM the temperature of the baryons at this redshift is insufficient to allow an efficient recombination, because their number-density is too low. But for the SDM that we are considering, recombination becomes efficient again at \(z\sim 100\) because baryons are combined into atoms more easily when the temperature decreases. Without X-ray heating, we find that for this scenario \(x_{e}\) would stabilize on a lower freezout value of \(\sim 10^{-6}\). Yet, it is important to stress that at low temperatures HyRec uses the fit of Ref. [163] for the recombination rate, but this fit is valid only to \(T_{k}=40\,\mathrm{K}\), so the second drop in \(x_{e}\) shown in Fig. 10 should not be taken too seriously. Figure 11: The global 21cm signal in SDM universe, for three different cross-sections. The blue curve of \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\) is likely too high between \(20\lesssim z\lesssim 30\), see text for further details. Figure 10: Evolution of the global \(x_{e}\) in SDM universe, for three different cross-sections. The green curve of \(\sigma_{-4}=10^{-43}\,\mathrm{cm}^{2}\) is practically indistinguishable from the \(\Lambda\)CDM curve shown in Fig. 3. The extra drop seen in the blue curve of \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\), although it can be physically justified, is subject to theoretical uncertainties, see main text for more details. The global 21-cm signal, shown in Fig. 11, reflects the same physics previously discussed. Larger cross-sections lead to a deeper absorption signal that begins at higher redshifts, but ends roughly at the same redshift. We show the corresponding 21-cm power spectrum in Fig. 12. It is clearly seen that HERA will be challenged to distinguish between ACDM and SDM of cross-section \(\sigma_{-4}=10^{-43}\,\mathrm{cm}^{2}\). 
In contrast, it appears that HERA will be able to easily detect SDM with cross-section \(\sigma_{-4}=10^{-42}\,\mathrm{cm}^{2}\) (or higher) but only in the low frequency band that corresponds to \(10\lesssim z\lesssim 20\). A few remarks on the blue curve of \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\): (1) Although its global signal reaches much lower values than the orange curve of \(\sigma_{-4}=10^{-42}\,\mathrm{cm}^{2}\), the amplitude of the 21-cm power spectrum for both cross-sections is of the same order of magnitude. This is most likely because the absorption profile of \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\) is quite narrow and we calculate the power spectrum from slices of the lightcone box, which unlike the coeval box contains samples from different redshifts along the line-of-sight. (2) The smaller power at low redshifts is due to a shallower emission profile which is caused by the extreme cooling. (3) We remind again the uncertainty in the evaluation of the spin temperature between \(20\lesssim z\lesssim 30\) and that the true signal most likely contains less features in that redshift regime. ### Forecasts - SDM Fig. 12 suggests that HERA will not be sensitive to cross-sections below \(10^{-43}\,\mathrm{cm}^{2}\), but cross-sections of the order of \(10^{-42}\,\mathrm{cm}^{2}\) or higher can be probed. Yet, all we did in Fig. 12 was to vary the cross-section while keeping other parameters fixed. If we wish to forecast the sensitivity of HERA to SDM, we must vary other cosmological and astrophysical parameters and study their degeneracies, like we did in Sec. V. For the following analysis, we focus on the SDM scenario where \(\sigma_{-4}=10^{-42}\,\mathrm{cm}^{2}\). This particular value has not been ruled out by Planck 2018 CMB measurements and it lies beyond the sensitivity range of CMB-S4 by almost an order of magnitude [100; 102]. Our forecasts are displayed in Fig. 13. Interestingly, the forecasts for \(\Omega_{m}\) seem to be less affected when combining the information from the two observables. We will see shortly why. Furthermore, we see a strong degeneracy between \(\sigma_{-4}\) and \(L_{X}^{(\mathrm{II})}\). This feature in our forecasts is not surprising; stronger \(\sigma_{-4}\) yields more efficient cooling, while stronger \(L_{X}^{(\mathrm{II})}\) yields more efficient heating, thereby any small correlated variation in both of them is almost canceled in the observed brightness temperature. Hence, these two parameters exhibit a positive correlation. Since the CMB anisotropies do not depend on the value of \(L_{X}^{(\mathrm{II})}\), their measurement cannot relax this degeneracy. It is also interesting to compare HERA's performance in detecting SDM with CMB-S4. We make this com Figure 12: The 21cm power spectrum in SDM universe, for three different cross-sections. The blue curve of \(\sigma_{-4}=10^{-41}\,\mathrm{cm}^{2}\) is likely too low between \(20\lesssim z\lesssim 30\), see text for further details. Here, unlike Fig. 5, we assume moderate foreground scenario for the error bars. parison in Fig. 14. As we saw in Fig. 7 when we discussed degeneracies in \(\Lambda\)CDM, for most cosmological parameters CMB-S4 has a better constraining power than HERA. However, in the SDM scenario, HERA has the upper hand when it comes to constraining \(\Omega_{m}\) (which is now comprised of SDM, unlike in \(\Lambda\)CDM) and \(\sigma_{-4}\). 
In particular, for SDM with \(\sigma_{-4}=10^{-42}\,\mathrm{cm}^{2}\), HERA will be able to constrain its value within 2-\(\sigma\) confidence level, while CMB-S4 will barely be able to do so within 1-\(\sigma\) confidence level. This demonstrates the potential of HERA in detecting new-physics that cannot be probed by CMB-S4. ## VII Conclusions In this paper we have introduced our novel pipeline, 21cmFirstCLASS, for studying the cosmological 21-cm signal and its anisotropies. It is composed of two codes that are commonly used in the literature--CLASS and 21cmFAST. Because CLASS provides the proper initial conditions for the simulation, as well as the more precise scale-independent growth factor, our code in that sense is more consistent than the standard 21cmFAST. Moreover, since our simulation begins from recombination, our calculations naturally capture early temperature and ionization fluctuations, an effect which distorts the 21-cm power spectrum to some extent16 (c.f. Fig. 5). To achieve the most precise evolution of the early Universe, we have incorporated in 21cmFirstCLASS the state-of-the-art recombination code HyRec as an integral part of our calculation. Footnote 16: We elaborate more on that subtle point in Paper II. Unlike 21cmFAST, our code is _not_ fast. For the box settings we used in this work, starting the simulation at recombination results in a runtime which is \(\sim 3\) times longer compared to the normal 21cmFAST simulation that begins at \(z=35\), even though no complicated astrophysics calculations are performed at high redshifts. The runtime ratio becomes even greater when either SDM (which requires more redshift samples below \(z=35\)) or higher resolution boxes are considered. The source for this longer runtime is the huge amount of redshift samples used in 21cmFirstCLASS and the current architecture of 21cmFAST; at each redshift iteration the evolution of the box is done at the C-level (where multiple CPUs Figure 13: Forecasts of 1-\(\sigma\) and 2-\(\sigma\) confidence levels of some of the free parameters in Eq. (12) and the SDM cross-section \(\sigma_{-4}\) (the rest of the parameters not shown here have been marginalized), while imposing Planck 2018 priors [39] on the \(\Lambda\)CDM cosmological parameters. Blue ellipses correspond to forecasts when only information from HERA is considered, while the green ellipses account for information coming from CMB-S4 as well. Results are shown for moderate foreground scenario, although they barely change when pessimistic foreground scenario is considered. can facilitate the computation) but at the end of the iteration the box is transferred back to the python-wrapper, where the box can be processed with only a single CPU. We therefore think that changing the 21cmFAST architecture such that the C-code will be able to promote the box over more than one redshift iteration may speed-up significantly the calculations of 21cmFirstCLASS. Implementing this is beyond the scope of this paper and we defer this necessary modification for future work. One of the main motivations to begin the simulation from recombination is to study highly non-linear models. As a case study, we focused on SDM, which is one of the most popular candidates of dark matter in the recent literature. This required us using the modified CLASS version of Ref. [100] to get the correct initial conditions. 
Moreover, besides implementing the SDM differential equations in 21cmFAST, we had to make several modifications in the astrophysics part, the most important one is the correction factor \(\tilde{S}_{\alpha}\) for the WF coupling. As a first thorough study of the effect that SDM has on the 21-cm power spectrum, we limited ourselves to SDM with parameters \(f_{\chi}=100\%\), \(m_{\chi}=1\,\)MeV and a velocity-dependent cross-section with a power-law of \(n=-4\). For very large cross-sections that change the 21-cm signal extremely, our results suffer from an inconsistency at \(z\gtrsim 20\) due to an approximated modelling of the collisional rates \(\kappa_{1-0}^{\rm iH}\) at low temperatures. For milder cross sections, our results are consistent at all redshifts. Focusing on \(\sigma_{-4}=10^{-42}\,\)cm\({}^{2}\), which on the one hand has not been ruled out by Planck 2018 measurements, but on the other hand lies beyond the CMB-S4 sensitivity range, we found that HERA in its design sensitivity will be able to easily probe SDM with that cross-section within 2-\(\sigma\) confidence level, under the assumption of either moderate or pessimistic foregrounds scenarios, and taking the degeneracies with astrophysical parameters into account. This serves as clear evidence to the very promising potential of HERA and the 21-cm signal in searching for signatures of physics beyond \(\Lambda\)CDM, provided that state-of-the-art, first-class codes are used. ###### Acknowledgements. It is our pleasure to thank Bradley Greig, Julian B. Munoz, Kimberly K. Boddy, Sarah Libanore, Manuel A. Buen-Abad, Hovav Lazare and Gali Shmueli for useful discussions. We also acknowledge the efforts of the 21cmFAST and CLASS authors to produce state-of-the-art public 21-cm and CMB codes. JF is supported by the Zin fellowship awarded by the BGU Kreitmann School. EDK acknowledges support from an Azrieli faculty fellowship. ## Appendix A Scale-independent growth factor Because CDM is collisionless and comprises most of the matter in the Universe, its evolution is nearly scale invariant (especially at high redshifts before baryons have clustered) and thus the growth in its density contrast is given by \(\delta_{c}\left(k,z\right)=D\left(z\right)\delta_{c}\left(k,z=0\right)\), where \(D\left(z\right)\) is the scale-independent growth factor. Using the continuity and Euler equations, together with the Poisson equation, one can show that the differential equation that governs \(D\left(z\right)\) is [164] \[\ddot{D}+2H\dot{D}-4\pi G\bar{\rho}_{m}D=0, \tag{10}\] where \(G\) is Newton's gravitational constant and over-dots represent derivatives with respect to the cosmological time \(t\). Among its calculations, CLASS solves Eq. (10) to find \(D\left(z\right)\). In contrast, 21cmFAST does not solve Eq. (10), and instead it adopts the fit of Refs. [137, 138], known in the code as the Dicke growth factor. In Fig. 15 we show the two growth factors of the two codes. The agreement between them becomes excellent at low redshifts, although percent-level errors can still be found for \(z\gtrsim 20\). At high redshifts, the error of the Dicke fit is no longer negligible and it reaches \(\sim 20\%\) at \(z=1000\). In order to simulate early temperature and ionization fluctuations as Figure 14: Forecasts of 1-\(\sigma\) and 2-\(\sigma\) confidence levels of the free cosmological parameters in Eq. (12) and the SDM cross-section \(\sigma_{-4}\), while imposing Planck 2018 priors [39] on the \(\Lambda\)CDM cosmological parameters. 
Blue (orange) ellipses correspond to forecasts when only information from HERA (CMB-S4) is considered, while the green ellipses account for information coming from both HERA and CMB-S4. All the astrophysical parameters have been marginalized (fixed) in the calculation of the HERA (CMB-S4) Fisher matrix. For HERA, results are shown for the moderate foreground scenario, although they barely change when a pessimistic foreground scenario is considered. precisely as possible (see more on them in Paper II), we therefore had to incorporate the CLASS growth factor in 21cmFirstCLASS. It is interesting however that the small errors of the Dicke fit below \(z=35\) can lead to a visible difference in the 21-cm global signal, even if early temperature and ionization fluctuations are discarded, as we show in Fig. 16. Above \(z~{}\sim 27\), the resulting global signal is the same because at this epoch the fluctuations are linear and they cancel each other when the mean of the box is evaluated. Below that redshift, non-linearities become important and the fluctuations (as well as the errors) are no longer canceled. At \(z\lesssim 15\) the SFRD dominates the fluctuations in the signal, and because the growth factors are nearly the same at that redshift, the Dicke solution to the global brightness temperature coincides with the CLASS solution. The errors induced by the Dicke growth factor are enhanced when the 21-cm power spectrum is considered, especially at \(z\gtrsim 20\), as can be seen in Fig. 17. Yet, within HERA's range, the errors do not surpass HERA's noise level. ## Appendix B Compton tight coupling approximation As was discussed in Sec. III.2, above \(z=980\) the temperature cannot be evolved precisely if one attempts to solve Eq. (3) numerically via the Euler method but without having a tiny step-size. The reason for this comes from the Compton term in Eq. (3). At high redshifts this term dominates, leading to \(dT_{k}/dz\propto(\Gamma_{C}/H)\,(T_{\gamma}-T_{k})\). Since the baryons are tightly coupled to the photons at this epoch, \(T_{k}\to T_{\gamma}\). However, because \(\Gamma_{C}\gg H\), small initial errors in \(T_{k}\) could cause the solution to overshoot or undershoot \(T_{\gamma}\), depending on the sign of \(T_{\gamma}-T_{k}\), with oscillations that grow in time. This numerical behavior is well known for interacting fluids in the tight coupling regime. It becomes worse when the temperatures of both fluids have to be simultaneously evolved in time--see Appendix C. To overcome this challenge, many codes use more advanced numerical schemes such as having an adaptive varying step-size or using values from more past samples instead of just the last one. In 21cmFAST we cannot use such schemes because the redshift samples (and their corresponding step-sizes) are determined before the evolution of the box begins, and only the last previous box is accessible during the calculation of the current one. Therefore, our numerical scheme is limited to the family of Runge-Kutta solutions. High order Runge-Kutta solutions could increase the required step-size at the price of calculating intermediate redshift samples, but we will see that the simplest lowest-order type of Runge-Kutta solution, namely the Euler method, can be still used without sacrificing valuable computational time. The trick is to track the difference and the average temperatures of the tightly coupled fluids, instead of tracking the temperatures of the individual fluids. 
A similar method to the one presented below is already implemented in CLASS. To see why such a method is helpful, let us rewrite Eq. (17) (that includes also interaction with SDM) in the following form (note we now denote the kinetic gas temperature with \(T_{b}\) to match our notation in Figure 16: Comparison of the 21-cm global signal when different growth factors are considered. In both curves early temperature and ionization fluctuations were discarded by starting the simulation at \(z=35\). Figure 15: Comparison between the CLASS growth factor (which solves Eq. (16)) and the Dicke growth factor, as implemented in the standard 21cmFAST. At \(z=35\) (\(z=20\)) the relative error is \(\sim 1.4\%\) (\(\sim 0.98\%\)). Sec. VI) \[\frac{dT_{b}}{dz}=\frac{1}{1+z}\left[2T_{b}-\frac{T_{\gamma}-T_{b}}{\epsilon_{ \gamma b}}-\frac{2\dot{Q}_{b}}{3k_{B}H}-\frac{1}{H}\left.\frac{dT_{b}}{dt} \right|_{\text{ext}}\right], \tag{14}\] where we defined \(\epsilon_{\gamma b}\equiv H/\Gamma_{C}\). Because \(T_{\gamma}\propto(1+z)\), we also know that \[\frac{dT_{\gamma}}{dz}=\frac{T_{\gamma}}{1+z}. \tag{15}\] When the two fluids are tightly coupled, we approximate \(T_{b}\approx T_{\gamma}+\mathcal{O}\left(\epsilon_{\gamma b}\right)\), which is valid as long as \(H\ll\Gamma_{C}\), or \(\epsilon_{vb}\ll 1\). This is the Compton _tight coupling approximation_ or TCA. Within this approximation, we can compare Eqs. (14) and (15), \[T_{\gamma}=2T_{b}-\frac{T_{\gamma}-T_{b}}{\epsilon_{\gamma b}}-\frac{2\dot{Q} _{b}}{3k_{B}H}-\frac{1}{H}\left.\frac{dT_{b}}{dt}\right|_{\text{ext}}+ \mathcal{O}\left(\epsilon_{\gamma b}\right), \tag{16}\] from which we find \[\Delta T_{\gamma b} \equiv T_{\gamma}-T_{b}\] \[=\epsilon_{\gamma b}\left[2T_{b}-T_{\gamma}-\frac{2\dot{Q}_{b}}{ 3k_{B}H}-\frac{1}{H}\left.\frac{dT_{b}}{dt}\right|_{\text{ext}}\right]+ \mathcal{O}\left(\epsilon_{\gamma b}^{2}\right). \tag{17}\] Furthermore, by adding Eqs. (14) and (15) we can find a differential equation for \(\bar{T}_{\gamma b}\equiv\left(T_{\gamma}+T_{b}\right)/2\), \[\frac{d\bar{T}_{\gamma b}}{dz} =\frac{1}{2\left(1+z\right)}\Bigg{[}T_{\gamma}+2T_{b}-\frac{T_{ \gamma}-T_{b}}{\epsilon_{\gamma b}}\] \[-\left.\frac{2\dot{Q}_{b}}{3k_{B}H}-\frac{1}{H}\left.\frac{dT_{b }}{dt}\right|_{\text{ext}}\right]=\frac{T_{\gamma}}{1+z}+\mathcal{O}\left( \epsilon_{\gamma b}\right), \tag{18}\] where the second line follows Eq. (16). Not surprisingly, we see that the average temperature of the tightly coupled baryon-photon fluid follows the CMB temperature. We would need a second differential equation, for the temperature difference \(\Delta T_{\gamma b}\). According to Eq. (17), this is equivalent to finding a differential equation for \(\epsilon_{\gamma b}\). From Eqs. (4) and (15), a simple calculation yields \[\frac{d\epsilon_{\gamma b}}{dz}=\epsilon_{\gamma b}\left(\frac{1}{H}\frac{dH}{ dz}-\frac{1}{x_{e}\left(1-x_{e}\right)}\frac{dx_{e}}{dz}-4\right). \tag{19}\] Eqs. (17)-(19) are the Compton-TCA equations. Unlike Eq. (3) or Eq. (17), they do not contain terms that approach zero or infinity in the strong coupling limit and they are thus numerically more stable. The strategy in our code for solving for \(T_{b}\) is as follows: 1. At each step, we calculate \(\epsilon_{\gamma b}\equiv H/\Gamma_{C}\). If \(\epsilon_{\gamma b}>\epsilon_{\gamma b}^{\text{th}}\), where \(\epsilon_{\gamma b}^{\text{th}}\) is some threshold value, the TCA does not have to be applied, and we solve Eq. (3). 2. 
Otherwise, we compute \(\bar{T}_{\gamma b}\), and evolve \(\bar{T}_{\gamma b}\) and \(\epsilon_{\gamma b}\) via Eqs. (18) and (19), respectively. 3. We then compute the current \(\Delta T_{\gamma b}\) via Eq. (17). 4. Finally, we find the current gas temperature with \(T_{b}=\bar{T}_{\gamma b}-\Delta T_{\gamma b}/2\). With this prescription, we can run 21cmFirstCLASS with a constant \(\Delta z_{n}=0.1\) from recombination to \(z=35\) and thus reduce the total amount of redshift samples by \(\sim 8000\). In our code we have set \(\epsilon_{\gamma b}^{\text{th}}=5\times 10^{-5}\) since this choice corresponds to \(\epsilon_{\gamma b}\left(z=980\right)\approx\epsilon_{\gamma b}^{\text{th}}\), though we comment that \(\epsilon_{\gamma b}^{\text{th}}\) can be even three orders of magnitude greater and the desired evolution would be still obtained. ## Appendix C Dark matter tight coupling approximation A similar problem to the one discussed in Appendix B happens when baryons interact with SDM. According to Eq. (17)-(18), the changes in \(T_{b}\) and \(T_{\chi}\) depend on \(\dot{Q}_{b}\) and \(\dot{Q}_{\chi}\), but according to Eq. (24)-(25), these quantities depend on the difference between \(T_{b}\) and \(T_{\chi}\). If \(\Gamma_{\chi b}\gg H\) (or \(\left(n_{b}/n_{\chi}\right)\Gamma_{\chi b}\gg H\)), then \(T_{b}-T_{\chi}\to 0\) and small numerical deviations from the true solution will cause the error to diverge, in both fluids. Moreover, it becomes unclear what happens in a scenario where the fluids are Figure 17: Comparison of the 21cm power spectrum when different growth factors are considered. In both curves early temperature and ionization fluctuations were discarded by starting the simulation at \(z=35\). Optimistic foreground scenario is assumed for the error bars. tightly coupled, but only one of them is strongly affected by an external source, e.g. X-rays that heat-up only the baryons fluid. This is why the DM-TCA algorithm that we derive below does not serve only as a means to reduce runtime, but in fact it is _indispensable_ to get the right evolution at low redshifts. We begin by rewriting Eq. (17)-(18) in the following form, \[\frac{1}{H}\frac{dT_{b}}{dt} = -2T_{b}+\frac{T_{\gamma}-T_{b}}{\epsilon_{\gamma b}}+\frac{T_{ \chi}-T_{b}}{\epsilon_{b}}+\frac{1}{H}\left.\frac{dT_{b}}{dt}\right|_{\rm ext} \tag{111}\] \[+\frac{2}{3k_{B}}\frac{\rho_{\chi}}{\rho_{\rm tot}}\frac{V_{ \chi b}}{H}\sum_{t}\frac{m_{\chi}m_{b}}{m_{\chi}+m_{t}}D_{t}\left(V_{\chi b} \right),\] \[\frac{1}{H}\frac{dT_{\chi}}{dt} = -2T_{\chi}+\frac{T_{b}-T_{\chi}}{\epsilon_{\chi}}+\frac{1}{H} \left.\frac{dT_{\chi}}{dt}\right|_{\rm ext} \tag{112}\] \[+\frac{2}{3k_{B}}\frac{\rho_{b}}{\rho_{\rm tot}}\frac{V_{\chi b} }{H}\sum_{t}\frac{m_{\chi}m_{t}}{m_{\chi}+m_{t}}D_{t}\left(V_{\chi b}\right),\] where we have defined the DM-TCA small parameters, \[\epsilon_{b}\equiv\frac{H}{\Gamma_{\chi b}},\qquad\epsilon_{\chi}\equiv\frac{ n_{\chi}}{n_{b}}\epsilon_{b}. \tag{113}\] It will be convenient to define a symmetrized small parameter, \[\epsilon_{\chi b} \equiv\frac{n_{\chi}}{n_{\chi}+n_{b}}\epsilon_{b}=\frac{n_{b}}{n _{\chi}+n_{b}}\epsilon_{\chi}\] \[=\frac{3H}{\sqrt{2\pi}\sigma_{-4}c^{4}\left(n_{\chi}+n_{b}\right) }\left[\sum_{t}\frac{n_{t}}{n_{b}}\frac{m_{t}e^{-r_{t}^{2}/2}}{\left(m_{t}+m_ {\chi}\right)^{2}u_{\chi t}^{3}}\right]^{-1}. \tag{114}\] In the DM-TCA we have \(\epsilon_{\chi b}\ll 1\) and \(T_{b}=T_{\chi}+\mathcal{O}\left(\epsilon_{\chi b}\right)\). This allows us to compare Eqs. 
(111) and (112), and find that in the strong coupling limit the temperature difference is \[\Delta T_{b\chi} \equiv T_{b}-T_{\chi}=\epsilon_{\chi b}\Bigg{[}\frac{T_{\gamma}-T_ {b}}{\epsilon_{\gamma b}}+\frac{1}{H}\left.\frac{dT_{b}}{dt}\right|_{\rm ext }-\frac{1}{H}\left.\frac{dT_{\chi}}{dt}\right|_{\rm ext}\] \[-\frac{2}{3k_{B}}\frac{V_{\chi b}m_{\chi}}{H\rho_{\rm tot}}\sum_ {t}\frac{\rho_{b}m_{t}-\rho_{\chi}m_{b}}{m_{\chi}+m_{t}}D_{t}\left(V_{\chi b} \right)\Bigg{]}+\mathcal{O}\left(\epsilon_{\chi b}^{2}\right). \tag{115}\] By adding together Eqs. (111) and (112), and using Eq. (115), we can also find a differential equation for \(\bar{T}_{\chi b}\equiv\left(T_{b}+T_{\chi}\right)/2\), \[\frac{d\bar{T}_{\chi b}}{dz} =\frac{1}{1+z}\Bigg{[}2\bar{T}_{\chi b}-\frac{n_{b}}{n_{b}+n_{ \chi}}\left(\frac{T_{\gamma}-T_{b}}{\epsilon_{\gamma b}}+\frac{1}{H}\left. \frac{dT_{b}}{dt}\right|_{\rm ext}\right)\] \[-\frac{n_{\chi}}{n_{b}+n_{\chi}}\frac{1}{H}\left.\frac{dT_{\chi}} {dt}\right|_{\rm ext}-\frac{1}{n_{b}+n_{\chi}}\frac{2}{3k_{B}}\frac{\rho_{b} \rho_{\chi}}{H\rho_{\rm tot}}V_{\chi b}D\left(V_{\chi b}\right)\Bigg{]}\] \[+\mathcal{O}\left(\epsilon_{\chi b}\right). \tag{116}\] To solve for \(T_{b}\) and \(T_{\chi}\) we require another equation for \(\Delta T_{b\chi}\). According to Eq. (115), this is equivalent to having an equation for \(\epsilon_{\chi b}\). Since \(n_{b}\propto n_{\chi}\propto\left(1+z\right)^{3}\), then from Eqs. (113) and (114) we have \[\frac{d\epsilon_{\chi b}}{dz}=\frac{d\epsilon_{b}}{dz}=\epsilon_{b}\left( \frac{1}{H}\frac{dH}{dz}-\frac{1}{\Gamma_{\chi b}}\frac{d\Gamma_{\chi b}}{dz} \right), \tag{117}\] where the derivative of the energy transfer rate \(\Gamma_{\chi b}\) can be evaluated from its definition, Eq. (26), \[\frac{d\Gamma_{\chi b}}{dz} =\sqrt{\frac{2}{\pi}}\frac{2\sigma_{-4}c^{4}\rho_{\chi}}{3n_{b}} \sum_{t}\Bigg{\{}\frac{\rho_{t}{\rm e}^{-r_{t}^{2}/2}}{\left(m_{t}+m_{\chi} \right)^{2}u_{\chi t}^{3}}\times\] \[\left[\frac{3}{1+z}-\frac{r_{t}}{u_{\chi t}}\frac{dV_{\chi b}}{dz }-\left(3-r_{t}^{2}\right)\frac{m_{t}+m_{\chi}}{m_{t}m_{\chi}}\frac{k_{B}}{u _{\chi t}^{2}}\frac{d\bar{T}_{\chi b}}{dz}\right]\Bigg{\}}. \tag{118}\] Note that \(d\Gamma_{\chi b}/dz\propto\Gamma_{\chi b}\) in the special case in which the SDM interacts with a single type of particles. Eqs. (115)-(118) are the DM-TCA equations. It is crucial to understand that if \(\epsilon_{\chi b}\ll 1\), that does not guarantee that both \(\epsilon_{b}\) and \(\epsilon_{\chi}\) are much smaller than unity. This is because \(\epsilon_{b}\propto\rho_{\chi}^{-1}\) and \(\epsilon_{\chi}\propto\rho_{t}^{-1}\). So for example, if \(f_{\chi}\ll 1\) such that \(\epsilon_{b}\gg 1\), then the baryons are not tightly coupled to the SDM. However, if \(\sigma_{-4}\) is large enough, then even if \(f_{\chi}\ll 1\), it might be that \(\epsilon_{\chi}\ll 1\) and the SDM is coupled to the baryons. This is similar to the early coupling between baryons and CMB photons; the latter outnumber the former, and thus the baryons are tightly coupled to the CMB, while the CMB photons are insensitive to the baryons. Since \(\epsilon_{\chi b}\leq\epsilon_{\chi},\epsilon_{b}\), if either \(\epsilon_{b}\ll 1\) or \(\epsilon_{\chi}\ll 1\), that implies that \(\epsilon_{\chi b}\ll 1\), and we can evolve \(\bar{T}_{\chi b}\) and \(\Delta T_{b\chi}\) with the DM-TCA equations we formulated above. All of these considerations have been implemented in 21cmFirstCLASS. 
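To illustrate how these relations are used in practice, the sketch below advances the DM-TCA pair \((\bar{T}_{\chi b},\epsilon_{\chi b})\) by one Euler step and then recovers \(T_{b}\) and \(T_{\chi}\). All source terms (the Compton term, the external heating rates and the drag-heating combinations) are assumed to be precomputed by the caller and passed in as plain numbers; the function and its argument names are illustrative, not the actual 21cmFirstCLASS routine.

```python
def dm_tca_euler_step(z, dz, Tbar, eps_b, eps_chib,
                      compton_term, ext_b, ext_chi,
                      drag_diff, drag_avg,
                      frac_b, frac_chi, dlnH_dz, dlnGamma_dz):
    """One Euler step of the DM-TCA system (Appendix C).

    Tbar         : (T_b + T_chi)/2 at the current redshift z
    eps_b        : H / Gamma_chib
    eps_chib     : n_chi/(n_chi + n_b) * eps_b (symmetrized small parameter)
    compton_term : (T_gamma - T_b)/eps_gamma_b, taken from the Compton-TCA
                   expression if the baryons are also locked to the CMB
    ext_b, ext_chi : (1/H) dT/dt|_ext for the baryons and the SDM
    drag_diff, drag_avg : the (2/3k_B) V_chib D(V_chib) heating combinations
                   entering the temperature-difference and average-temperature
                   relations above, respectively, already divided by H
    frac_b, frac_chi : n_b/(n_b + n_chi) and n_chi/(n_b + n_chi)
    dlnH_dz, dlnGamma_dz : (1/H) dH/dz and (1/Gamma_chib) dGamma_chib/dz
    dz           : signed redshift step (negative as the simulation marches forward)
    """
    # Temperature difference in the strong-coupling limit
    dT_bchi = eps_chib * (compton_term + ext_b - ext_chi - drag_diff)

    # Advance the average temperature of the coupled baryon-SDM fluid
    dTbar_dz = (2.0 * Tbar - frac_b * (compton_term + ext_b)
                - frac_chi * ext_chi - drag_avg) / (1.0 + z)
    Tbar_new = Tbar + dz * dTbar_dz

    # Advance the small parameters; both scale identically with redshift
    factor = dlnH_dz - dlnGamma_dz
    eps_b_new = eps_b + dz * eps_b * factor
    eps_chib_new = eps_chib + dz * eps_chib * factor

    # Recover the individual temperatures
    T_b = Tbar_new + 0.5 * dT_bchi
    T_chi = Tbar_new - 0.5 * dT_bchi
    return T_b, T_chi, Tbar_new, eps_b_new, eps_chib_new
```

Keeping the source terms as caller-supplied inputs keeps the step independent of the specific choice of SDM target particles.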
Below we present the algorithm we use in our code to solve for \(T_{b}\) and \(T_{\chi}\). 1. We begin by calculating \(\epsilon_{b}\) and \(\epsilon_{\chi}\) via Eq. (113). If at least one of them is smaller than the threshold \(\epsilon_{\chi b}^{\rm th}\), we use the DM-TCA equations, Eqs. (115)-(118), to find the updated values of \(\bar{T}_{\chi b}\) and \(\Delta T_{b\chi}\). 1. When we use the DM-TCA equations, we check if \(\epsilon_{\gamma b}<\epsilon_{\gamma b}^{\rm th}\). If this condition is satisfied, we are in the special scenario where the three fluids (baryons, SDM and CMB photons) are strongly coupled. In this case we evaluate \(\left(T_{\gamma}-T_{b}\right)/\epsilon_{\gamma b}\) that appears in Eqs. (115)-(116) with the Compton-TCA Eq. (114). 2. Next, we solve for the baryons temperature \(T_{b}\). 1. We check if \(\epsilon_{\gamma b}>\epsilon_{\gamma b}^{\rm th}\) and \(\epsilon_{b}>\epsilon_{\chi b}^{\rm th}\). If these two conditions are satisfied, or alternatively \(dT_{b}/dt|_{\rm ext}>2\dot{Q}_{b}/\left(3k_{B}\right)\), we solve the usual differential equation for \(T_{b}\), Eq. (17). The latter condition reflects the understanding that the baryons cannot be tightly coupled to the SDM if an external heating source, such as X-rays, is more dominant. 2. Otherwise, if \(\epsilon_{\gamma b}\leq\epsilon_{\gamma b}^{\rm th}\), we use the Compton-TCA equations, Eqs. (48)-(49), to solve for \(T_{b}\). This reflects our assumption that the coupling of the baryons with the SDM cannot be stronger than the coupling of the baryons with the CMB. Cross-sections that break this assumption imply that \(T_{b}\neq T_{\gamma}\) at recombination and have been ruled out by CMB observations. 3. Otherwise, the baryons are not tightly coupled to the CMB, but they are tightly coupled to the SDM, and we can use \(\bar{T}_{\chi b}\) and \(\Delta T_{b\chi}\) that we obtained in item 1 to find \(T_{b}\) via \(T_{b}=\bar{T}_{\chi b}+\Delta T_{b\chi}/2\). 3. Finally, we solve for the SDM temperature \(T_{\chi}\). 1. We check if \(\epsilon_{\chi}>\epsilon_{\chi b}^{\rm th}\) or if \(dT_{\chi}/dt|_{\rm ext}>2\dot{Q}_{\chi}/\left(3k_{B}\right)\) (the latter condition can be satisfied at low redshifts, when the clustering of SDM becomes important). If one of these conditions is satisfied, we solve the usual differential equation for \(T_{\chi}\), Eq. (18). 2. Otherwise, the SDM is tightly coupled to the baryons, and we can use \(\bar{T}_{\chi b}\) and \(\Delta T_{b\chi}\) that we obtained in item 1 to find \(T_{\chi}\) via \(T_{\chi}=\bar{T}_{\chi b}-\Delta T_{b\chi}/2\). For the threshold of the DM-TCA small parameter we use \(\epsilon_{\chi b}^{\rm th}=\epsilon_{\gamma b}^{\rm th}=5\times 10^{-5}\) at high redshifts (\(z>100\)) and \(\epsilon_{\chi b}^{\rm th}=10^{-2}\) at low redshifts. We have confirmed that the results of our code are insensitive to these particular values. Moreover, we have confirmed the correctness of our solutions to \(T_{b}\) and \(T_{\chi}\) by comparing them to the solutions that can be obtained by solving Eqs. (17)-(20) with Mathematica[165] (when all the fluctuations in the box are turned off and we set \(L_{X}=35\), namely no X-ray heating). Unlike our code, Mathematica solves differential equations by adjusting the step-size so that the estimated error in the solution is just within the specified absolute and relative tolerances. 
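The prescription enumerated above is essentially a small decision tree. The sketch below captures only that branching logic: given the current small parameters, thresholds and heating rates, it reports which equation set is used for \(T_{b}\) and for \(T_{\chi}\). The threshold values mirror those quoted in the text, the actual solvers are left as descriptive labels, and all argument names are illustrative.

```python
def select_temperature_solvers(z, eps_gamma_b, eps_b, eps_chi,
                               ext_b_rate, sdm_heat_b,
                               ext_chi_rate, sdm_heat_chi,
                               eps_gb_th=5e-5):
    """Decide how T_b and T_chi are updated at this step (Appendix C logic).

    ext_b_rate, ext_chi_rate : dT/dt|_ext for baryons and SDM (e.g. X-ray heating)
    sdm_heat_b, sdm_heat_chi : 2*Qdot/(3 k_B) for baryons and SDM
    """
    eps_chib_th = 5e-5 if z > 100 else 1e-2  # thresholds quoted in the text

    # Item 1: DM-TCA variables are evolved if either fluid is tightly coupled
    use_dm_tca = (eps_b < eps_chib_th) or (eps_chi < eps_chib_th)

    # Item 2: baryon temperature
    if (eps_gamma_b > eps_gb_th and eps_b > eps_chib_th) or ext_b_rate > sdm_heat_b:
        solver_b = "full ODE for T_b, Eq. (17)"
    elif eps_gamma_b <= eps_gb_th:
        solver_b = "Compton-TCA of Appendix B"
    else:
        solver_b = "DM-TCA: T_b = Tbar_chib + dT_bchi/2"

    # Item 3: SDM temperature
    if eps_chi > eps_chib_th or ext_chi_rate > sdm_heat_chi:
        solver_chi = "full ODE for T_chi, Eq. (18)"
    else:
        solver_chi = "DM-TCA: T_chi = Tbar_chib - dT_bchi/2"

    return use_dm_tca, solver_b, solver_chi
```

The special case of item 1(a), where the Compton term inside the DM-TCA equations must itself be evaluated with the Compton-TCA expression, is left to whatever routine supplies that term.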
In fact, our DM-TCA algorithm presented above allows solving correctly the differential equations even for cross-sections that are large enough such that the normal settings in Mathematica fail to solve the equations. ## Appendix D Small temperature correction for \(S_{\alpha}\) The Ly\(\alpha\) coupling in Eq. (2) is given by [159] \[\tilde{x}_{\alpha}=\frac{J_{\alpha}}{J_{0}}\tilde{S}_{\alpha}, \tag{50}\] where \(J_{\alpha}\) is the Ly\(\alpha\) flux, and \(J_{0}\) is \[J_{0} = \frac{9A_{10}T_{\gamma}}{8\pi\lambda_{\rm Ly\alpha}^{2}\gamma_{ \alpha}T_{\star}} \tag{51}\] \[= 5.54\times 10^{-12}\left(1+z\right)\rm{cm}^{-2}\,\rm{sec}^{-1}\, \rm{Hz}^{-1}\,\rm{sr}^{-1},\] where \(\lambda_{\rm Ly\alpha}^{2}=121.567\,\rm{nm}\) is the Ly\(\alpha\) frequency, \(A_{10}=2.85\times 10^{-15}\,\rm{sec}^{-1}\) is the spontaneous emission coefficient from the excited hyperfine level to the ground state, and \(\gamma_{\alpha}\approx 50\,\rm{MHz}\) is the half width at half maximum of the Ly\(\alpha\) resonance line. The quantity \(\tilde{S}_{\alpha}\) that appears in Eq. (50) is a correction to \(\tilde{x}_{\alpha}\) due to spectral distortions. In order to find it, one must solve the steady-state Fokker-Planck equation. This was first numerically solved by Chen & Miralde-Escude [131] and later was refined by Hirata [128], who found a complicated fit for \(\tilde{S}_{\alpha}\) that depends on \(T_{k}\), \(T_{s}\) and the Gunn-Peterson optical depth \(\tau_{\rm GP}\). In addition, Hirata found a fit for the color temperature, \(T_{\alpha}^{-1}=T_{k}^{-1}+T_{\rm se}T_{k}^{-1}\left(T_{s}^{-1}-T_{k}^{-1}\right)\), where \(T_{\rm se}\) accounts for the correction in the color temperature due spin exchange and is given by \[T_{\rm se}=\left(\frac{\lambda_{\rm Ly\alpha}}{\lambda_{21}}\right)^{2}\frac{m _{\rm H}c^{2}}{9k_{B}}\approx 0.4\,\rm{K}, \tag{52}\] where \(\lambda_{21}\approx 21\,\rm{cm}\) is the wavelength of a 21-cm photon. The fits discovered by Hirata are implemented in 21cmFAST. Shortly after Hirata's work, Chuzhoy & Shapiro [158] found an analytical solution to the steady-state Fokker-Planck equation by approximating the spectrum with the absorption profile appropriate to Lorentzian wings (this was first done by Grachev [166]). This is known as the wing approximation. Based on their work, Furlanetto & Pritchard [167] gave analytical estimates, including for the color temperature, \[T_{\alpha}=T_{s}\,\frac{T_{k}+T_{\rm sc}}{T_{s}+T_{\rm sc}}. \tag{47}\] Note that in the limit \(T_{\rm sc}\ll T_{k},T_{s}\) Eq. (47) converges to Hirata's fit. Recently, Ref. 
[159] used the results of Furlanetto & Pritchard to write the analytical estimate for \(\tilde{S}_{\alpha}\) in the following way, \[\tilde{S}_{\alpha}\left(\xi\right)=1-\int_{0}^{\infty}{\rm e}^{-\xi\left(u/3\right)^{3}}{\rm e}^{-u}\,du=\begin{cases}1&\xi\to\infty\\ \frac{2}{9}\xi&\xi\ll 1\end{cases}, \tag{48}\] where \[\xi \equiv \frac{3\nu_{\rm Ly\alpha}m_{\rm H}H\left(k_{B}T_{k}\right)^{2}}{\pi A_{\alpha}\gamma_{\alpha}c\hbar^{3}n_{\rm H}\left(1-x_{e}\right)} \tag{49}\] \[\approx 760\left(\frac{\Omega_{m}h^{2}}{0.143}\right)^{1/2}\left(\frac{\Omega_{b}h^{2}}{0.0223}\right)^{-1}\left(\frac{1-{\rm Y}_{\rm He}}{0.755}\right)^{-1}\] \[\times\left(\frac{T_{k}}{10\,{\rm K}}\right)^{2}\left(\frac{1+z}{15}\right)^{-3/2}\frac{1}{\left(1+\delta_{b}\right)\left(1-x_{e}\right)},\] where \(\nu_{\rm Ly\alpha}=2.47\times 10^{15}\,{\rm Hz}\) is the Ly\(\alpha\) frequency and \(A_{\alpha}=6.25\times 10^{8}\,{\rm sec}^{-1}\) is the spontaneous Ly\(\alpha\) emission coefficient. The fiducial values in Eq. (49) correspond to typical values of \(T_{k}\) at \(z=15\) in \(\Lambda\)CDM (cf. Fig. 1). According to Fig. 18, \(\xi\sim 10^{3}\) corresponds to \(\tilde{S}_{\alpha}\sim 0.5\). In SDM cosmologies, however, the gas kinetic temperature may reach very low values (cf. Fig. 9) for which Hirata's fit to \(\tilde{S}_{\alpha}\) no longer works. Because \(\xi\propto T_{k}^{2}\) and \(T_{k}\) can be smaller by a factor of \(\sim 10^{2}\) compared to \(\Lambda\)CDM, the value of \(\xi\) may drop below 0.1, and from Fig. 18 it is implied that \(\tilde{S}_{\alpha}<0.01\). This explains why in Fig. 9 \(T_{s}\) cannot reach very low temperatures even though \(T_{k}\) does. Therefore, for our SDM calculations in this paper we follow [106] and work with Eqs. (47)-(49).17 In Figs. 19 and 20 we verify that in \(\Lambda\)CDM the output of our code is not sensitive to the chosen method. Indeed, there is an excellent agreement between the two methods. Footnote 17: It is worth mentioning that another fit for \(\tilde{S}_{\alpha}\) at low temperature exists in the literature [168; 123]. However, implementing this fit in 21cmFirstCLASS did not yield results that agree with either [128] or [158]. Figure 19: Comparison of the 21cm global signal when \(\tilde{S}_{\alpha}\) is calculated from Ref. [128] (Hirata) or Ref. [158] (Chuzhoy & Shapiro). In both curves early temperature and ionization fluctuations were discarded by starting the simulation at \(z=35\). Figure 20: Comparison of the 21cm power spectrum when \(\tilde{S}_{\alpha}\) is calculated from Ref. [128] (Hirata) or Ref. [158] (Chuzhoy & Shapiro). In both curves early temperature and ionization fluctuations were discarded by starting the simulation at \(z=35\). Optimistic foreground scenario is assumed for the error bars.
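To make the behaviour of Eqs. (48)-(49) concrete, the short sketch below evaluates \(\tilde{S}_{\alpha}(\xi)\) by direct numerical integration and checks the small-\(\xi\) limit. The helper that builds \(\xi\) only reproduces the fiducial scaling quoted in Eq. (49) rather than the exact expression, and the default parameter values are illustrative assumptions.

```python
# Numerical sketch of Eq. (48) and the fiducial scaling of Eq. (49).
import numpy as np
from scipy.integrate import quad

def S_alpha(xi):
    """S_alpha(xi) = 1 - int_0^inf exp(-xi (u/3)^3) exp(-u) du,  Eq. (48)."""
    integrand = lambda u: np.exp(-xi * (u / 3.0) ** 3 - u)
    integral, _ = quad(integrand, 0.0, np.inf)
    return 1.0 - integral

def xi_fiducial(Tk, z, delta_b=0.0, x_e=2e-4,
                om_h2=0.143, ob_h2=0.0223, YHe=0.245):
    """Approximate xi from the scaling relation quoted in Eq. (49)."""
    return (760.0
            * (om_h2 / 0.143) ** 0.5
            * (ob_h2 / 0.0223) ** -1
            * ((1.0 - YHe) / 0.755) ** -1
            * (Tk / 10.0) ** 2
            * ((1.0 + z) / 15.0) ** -1.5
            / ((1.0 + delta_b) * (1.0 - x_e)))

print(S_alpha(xi_fiducial(Tk=10.0, z=14.0)))   # LCDM-like kinetic temperature
print(S_alpha(0.05), (2.0 / 9.0) * 0.05)       # small-xi limit ~ (2/9) xi
```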
In this work we present 21cmFirstCLASS, an extension of 21cmFAST, the most popular code in the literature for computing the anisotropies of the 21-cm signal. The code interfaces with the cosmic microwave background (CMB) Boltzmann code CLASS to establish consistent initial conditions at recombination for any combination of cosmological parameters, and evolves them through the dark ages, cosmic dawn, and the epochs of heating and reionization. By accounting for inhomogeneities in the temperature and ionization state, the code enables a comprehensive calculation of the global 21-cm signal and its fluctuations. Combining and analyzing future CMB and 21-cm measurements with 21cmFirstCLASS makes it possible to derive constraints on cosmological and astrophysical parameters and to study the degeneracies between them. As an example, ...
2309.12697
Semantic similarity prediction is better than other semantic similarity measures
Semantic similarity between natural language texts is typically measured either by looking at the overlap between subsequences (e.g., BLEU) or by using embeddings (e.g., BERTScore, S-BERT). Within this paper, we argue that when we are only interested in measuring the semantic similarity, it is better to directly predict the similarity using a fine-tuned model for such a task. Using a fine-tuned model for the Semantic Textual Similarity Benchmark task (STS-B) from the GLUE benchmark, we define the STSScore approach and show that the resulting similarity is better aligned with our expectations on a robust semantic similarity measure than other approaches.
Steffen Herbold
2023-09-22T08:11:01
http://arxiv.org/abs/2309.12697v2
# Semantic similarity prediction is better than other semantic similarity measures ###### Abstract Semantic similarity between natural language texts is typically measured either by looking at the overlap between subsequences (e.g., BLEU) or by using embeddings (e.g., BERTScore, S-BERT). Within this paper, we argue that when we are only interested in measuring the semantic similarity, it is better to directly predict the similarity using a fine-tuned model for such a task. Using a fine-tuned model for the STS-B task from the GLUE benchmark, we define the STSScore approach and show that the resulting similarity is better aligned with our expectations on a robust semantic similarity measure than other approaches. Semantic similarity, BERT, sentence embeddings, word embeddings, cosine similarity, measurement ## 1 Introduction Corpus-based measures (e.g., BLEU) go beyond the comparison of single solutions to a sample output and instead allow the comparison between corpora of generated and sample solutions. For individual words without considering them in their context, word embeddings like word2vec (Mikolov et al., 2013) in combination with cosine similarity are a generally accepted way to estimate the semantic similarity between words, even though this approach has problems with ambiguities, e.g., caused by polysemes (Del Tredici and Bel, 2015). Similar methods were extended to whole sentences (Sharma et al., 2017), but they did not achieve the same level of performance. The transformer architecture enabled the context-sensitive calculation of embeddings (Vaswani et al., 2017). Naturally, this was adopted for the calculation of the similarity. Two methods based on this are currently primarily used. The first is BERTScore by Zhang et al. (2020), which is based on an optimal matching of the pair-wise similarities of words within a contextual BERT embedding and has been used by thousands of publications since its publication in 2020. The other is Sentence-BERT (S-BERT) by Reimers and Gurevych (2019), who pool the embeddings of the tokens into an embedding for sentences. The cosine similarity can then be computed between these sentence embeddings, same as between words with word2vec or with earlier, less powerful, sentence embeddings (e.g. Sharma et al., 2017). This success notwithstanding, we want to argue that this approach should not be further pursued in favor of a simpler solution to estimate the semantic similarity, i.e., simply predicting the semantic similarity with a regression model. We note that this idea is not new and was, e.g., also used to define approaches like BEER (Stanojevic and Sima'an, 2014) or RUSE (Shimanaka et al., 2018). However, these models are from the pre-transformer era of natural language processing. As can be seen in large benchmarks like GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019), models based on the transformer architecture (Vaswani et al., 2017) provide much better performance. Therefore, we formulate the following hypothesis for our research: **Hypothesis:** Modern language models with an encoder-only transformer architecture similar to BERT (Devlin et al., 2019) that are fine-tuned as regression models for the similarity between sentence pairs are also capable of robustly measuring the semantic similarity beyond their training data and are better measures for the semantic similarity than embedding-based and n-gram approaches.
We derive this hypothesis from the assumption that if such a regression model fulfills its task, employing it as a semantic similarity measure would be the natural use case beyond just benchmarking model capabilities. Within this paper, we present the results of a confirmatory study that demonstrates that downloading a fine-tuned RoBERTa model (Liu et al., 2019) for the STS-B task (Cer et al., 2017) from the GLUE benchmark from Huggingface and using this model to predict the similarity of sentences fulfills our expectations on a robust similarity measure better than the other models we consider. We refer to this approach as STSScorer. To demonstrate this empirically, we compute the similarity score for similarity-related GLUE tasks and show that while the predictions with the STSScorer are not perfect, the distribution of the predicted scores is closer to what we would expect given the task description than for the other measures. ## 2 Method Within this section, we describe the research method we used to evaluate our hypothesis. We first describe the STSScorer model for the prediction of the similarity in Section 2.1. Then, we proceed to describe our analysis approach in Section 3, including the expectations we have regarding our hypothesis and the tools we used in Section 3.1. ### STSScorer Listing 1 describes our approach: we download a fine-tuned model from Huggingface for the STS-B task (Held, 2022), which was based on RoBERTa (Liu et al., 2019). The STS-B task contains sentence pairs from news, image captions and Web forums. Each sentence pair received a score between zero and five. These scores were computed as the average of the semantic similarity ratings conducted by three humans, such that five means that the raters believe the sentences mean exactly the same and zero means that the sentences are completely unrelated to each other. We simply use the model trained for this task and divide the results by five to scale them to the interval \([0,1]\). Hereafter, we refer to this model as STSScorer. ## 3 Analysis approach Within this section, we describe the research method we used to evaluate our hypothesis. Our approach is similar to the method used by Zhang et al. (2020) for the evaluation of the robustness of similarity measures and also by Reimers and Gurevych (2019) for the evaluation of S-BERT: we utilize labeled data for which we have an expectation of what to observe when measuring the semantic similarity. There are three such data sets within the GLUE benchmark: * the Semantic Textual Similarity Benchmark (STS-B) data we already discussed above; * the Microsoft Research Paraphrase Corpus (MRPC, Dolan and Brockett (2005)) data, where the task is to determine if two sentences are paraphrases; and * the Quora Question Pairs (QQP, Iyer et al. (2017)) data, where the task is to determine if two questions are duplicates. For STS-B and MRPC, we use the test data. Since the labels for QQP's test data are not shared, we use the training data instead. To the best of our knowledge, this data was not seen during the training of the STSScorer and S-BERT models we use, as Quora was not part of the pre-training corpus of RoBERTa, which mitigates the associated risks regarding data contamination. However, the model underlying S-BERT was fine-tuned using contrastive learning on a corpus of one billion sentences (Reimers and Gurevych, 2019), which contained about 100,000 instances from the QQP data, i.e., about a quarter.
Thus, S-BERT might have an advantage on this data. On each of these data sets, we compute the similarity between all pairs of sentences with BLEU, BERTScore, S-BERT,1 and STSScore. All methods we consider compute scores between zero (not similar at all) and one (same semantic meaning), which simplifies the direct comparison. This narrower view of a few models allows us to consider the results more in-depth. Specifically, we can go beyond the plain reporting of numbers, and instead look directly at the distributions of the similarity measures for different data sets. Due to the confirmatory design of our study, we formulate concrete expectations on the results, given the properties of each data set. How well the similarity measures fulfill these expectations will later be used to evaluate our hypothesis. Based on the strong performance of BERT-based models on the STS-B task in the GLUE benchmark, we predict, based on our hypothesis, that STSScorer should have the best alignment with our expectations for all data sets. Footnote 1: From now on, we simply refer to calculating the cosine similarity between embeddings with S-BERT as S-BERT for brevity. We note that while we could have added more approaches to this comparison, e.g., ROUGE (Lin, 2004), METEOR (Lavie and Agarwal, 2007), RUSE (Shimanaka et al., 2018), and BEER (Stanojevic and Sima'an, 2014), these models were all already compared to BERTScore, which was determined to provide a better measure for the similarity (Zhang et al., 2020). Further, we refer the reader to the work by Zhang et al. (2020) for a general overview of such metrics. Thus, instead of providing a broad comparison with many models, we rather compare our approach to the embedding-based approaches S-BERT and BERTScore, which are currently used as the de-facto state of the art by most publications, and the still very popular BLEU method, which uses an n-gram matching approach. #### 3.0.1 STS-B data While the comparison on the STS-B data may seem unfair, because the STSScorer was specifically trained for that task, the analysis of the behavior of the different scorers on this data still gives us interesting insights: for any semantic similarity measure, the distribution of the scores should be directly related to the label of STS-B,2 which is a human judgement of the semantic similarity. Consequently, when we plot the label on the x-axis versus a semantic similarity score on the y-axis, we would ideally observe a strong linear correlation. Visually, we would observe this by the data being close to the diagonal. A less ideal, but still good, result would be that the trend is monotonically increasing, indicating a rank correlation, which would mean that while the magnitudes of the similarity measure are not aligned with the human judgements from STS-B, at least the order of values is. Any other trend would mean that the similarity measure is not aligned with the human judgements of STS-B. In addition to this visualization, we also measure the linear correlation between the scores and the labels with Pearson's \(r\) and the rank correlation with Spearman's \(\rho\), as is common for the STS-B task within the GLUE benchmark (Wang et al., 2018) and was also used by Reimers and Gurevych (2019) for the evaluation of S-BERT. Footnote 2: While STS-B is a regression task and it would be better to speak of a dependent variable here, we rather speak of labels all the time to be consistent with the subsequent classification tasks.
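As an illustration of the STSScorer approach described in Section 2.1, a minimal sketch using the Hugging Face transformers API is shown below. This is not the paper's Listing 1, and the checkpoint identifier is a placeholder assumption; any RoBERTa-based model fine-tuned as a single-output regression head on STS-B would match the description.

```python
# Minimal sketch of the STSScorer idea: predict the 0-5 STS-B similarity for
# a sentence pair with a fine-tuned regression model and rescale it to [0, 1].
# The checkpoint name is a placeholder, not the one used in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "some-user/roberta-base-finetuned-stsb"  # placeholder checkpoint

class STSScorer:
    def __init__(self, model_name=MODEL_NAME):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name)
        self.model.eval()

    @torch.no_grad()
    def score(self, sentence1, sentence2):
        inputs = self.tokenizer(sentence1, sentence2,
                                return_tensors="pt", truncation=True)
        # The STS-B regression head outputs a value in [0, 5]; dividing by
        # five maps it to the [0, 1] range used for the comparisons.
        return float(self.model(**inputs).logits.squeeze()) / 5.0

# Example (requires downloading an actual STS-B checkpoint):
# scorer = STSScorer()
# print(scorer.score("A man plays the guitar.", "Someone is playing a guitar."))
```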
Because the STSScorer was fine-tuned on the STS-B data, we only utilize the test data. Nevertheless, because this is exactly the same context as during the training of the STSScorer model (same data curators, same humans creating the judgements), this model has a huge advantage over the other approaches. Due to this, we expect that STSScorer is well aligned with the human judgements and that we observe the linear trend described above. If this fails, this would directly counter our hypothesis, as this would not even work within-context. The BLEU, BERTScore, and S-BERT models were created independently of the STS-B data, but given their purpose to estimate the semantic similarity, they should still be able to fulfill the desired properties. If this is not the case, this would rather be an indication that these models are not measuring the semantic similarity - at least not according to the human judgements from the STS-B data. #### 3.0.2 MRPC and QQP data The MRPC and QQP data are similar: both provide binary classification problems. With MRPC, the problem is paraphrasing. With QQP the problem is duplicate questions, which can also be viewed as a type of paraphrasing, i.e., the paraphrasing of questions. Paraphrasing is directly related to semantic similarity, as paraphrased sentences should be semantically equal. Thus, similarity measures should yield high values for paraphrases and duplicate questions, ideally close to one. For the negative examples of MRPC, a look at the data helps to guide our expectations. When considering the MRPC data, we observe that the negative samples are all sentences on the same topic, with a different meaning. As an example, consider the first negative example from the training data: **Sentence 1:** _Yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $ 2.5 billion_ **Sentence 2:** _Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998._ Both sentences are related to the ownership of _Dominick's_ by _Yucaipa_ and the sale to _Safeway_, but consider different aspects regarding the time and different amounts of money for the payment. Thus, we observe a semantic relationship, but not a paraphrasing. While we have not read through all negative examples from the MRPC data, we also did not find any examples where both sentences were completely unrelated. Consequently, we expect values for the semantic similarity of the negative examples to be significantly larger than zero, but also smaller than those of the positive examples of actual paraphrases, with a notable gap. For the negative examples of QQP, the expectation is less clear. For most pairs, we observe that both questions are somewhat related, i.e., different questions regarding the same topic. As an example, consider this negative example from the training data: **Sentence 1:** _What causes stool color to change to yellow?_ **Sentence 2:** _What can cause stool to come out as little balls?_ Both questions are about _stool_, but about different aspects of stool. This kind of difference is similar to what we have within the MRPC data. However, we also observed examples where the questions are completely unrelated. Consider the following instance from the training data of QQP: **Sentence 1:** _How not to feel guilty since I am Muslim and I'm conscious we won't have sex together?_ **Sentence 2:** _I don't beleive I am bulimic, but I force throw up atleast once a day after I eat something and feel guilty.
Should I tell somebody, and if so who?_ While both questions are broadly related to the concept of _guilt_, the rest is completely unrelated and we would expect a very low semantic similarity. Consequently, while our expectation for the semantic similarity measures for the majority of the negative samples is similar to that of MRPC (i.e., significantly larger than zero, but smaller than for the positive examples), we also expect to observe a strong tail in the distribution with lower similarities. For both data sets, we visualize the distribution of the similarities per class. Additionally, we report the central tendency (arithmetic mean) and variability (standard deviation) per class in the data. ### Tools used We used the Huggingface transformer library to implement STSScore and the Huggingface evaluation library for BLEU. For S-BERT, we used the all-MiniLM-LG-v2 model, which was tuned for high-quality sentence embeddings using contrastive learning (Reimers, 2021; Reimers and Gurevych, 2019), and the Python package provided by Reimers and Gurevych (2019). For BERTScore we used the default RoBERTa model for the English language and the Python package provided by Zhang et al. (2020). We used Seaborn for all visualizations and Pandas for the computation of the correlation coefficients, mean values, and standard deviations. All implementations we created for this work are publicly available online: [https://github.com/aieng-lab/stsscore](https://github.com/aieng-lab/stsscore) ## 4 Results Figure 1 shows the results on the STS-B test data. STSScore has the expected strong linear correlation with the labels. However, this is not surprising, since the underlying model was fine-tuned for this task and the strong performance was already reported through the GLUE benchmark. Still, this confirms that semantic similarity prediction fulfills the expectation we have of a semantic similarity measure. S-BERT also seems to have the desired linear correlation, but with a general tendency to overestimate the similarity, as most values are above the diagonal. The same cannot be said about BLEU or BERTScore. Both are not aligned at all with the expectations from the STS-B task. The values of BERTScore rather seem fairly randomly distributed, and the values of BLEU are often exactly zero and otherwise often a lot lower than expected. An optimistic reading of the BERTScore results detects a weak upward slope in the similarity scores that would be expected. The correlation coefficients depicted in Table 1 match the results of our visual analysis: STSScore is strongly correlated (\(r=0.90\), \(\rho=0.89\)), followed by S-BERT, which is also strongly correlated, but generally a bit weaker than STSScore (\(r=0.83\), \(\rho=0.82\)). BERTScore has only a moderate correlation (\(r=0.53\), \(\rho=0.53\)), and the correlation of BLEU is weak (\(r=0.34\), \(\rho=0.32\)). Figure 2 shows the results for the MRPC data, and Table 1 shows the statistical markers. STSScore yields the expected results: the paraphrasings have a higher similarity than the negative examples, with typically high scores (mean=0.84). However, the density plot shows that the scores are not always close to one, though only a few scores are below 0.6. We also observe that the non-paraphrasings are almost always detected as semantically somewhat similar (mean=0.61), with a high variability that covers nearly the complete range. However, we note that the density drops to almost zero at very high values (\(>\)0.9) and very low values (\(<\)0.1).
This is in line with our expectations: the similarity measure typically does not indicate unwarranted equality, and it picks up the relationships within the data by not dropping to zero. S-BERT behaves similarly, with high scores for the paraphrasings (mean=0.83). The distribution looks a bit different from that of STSScore: while STSScore has a peak at around 0.82 and a second peak at exactly one, S-BERT has only a single peak at about 0.92, but drops sharply after this peak. The non-paraphrasings have a higher semantic similarity than for STSScore (mean=0.71), which aligns with the tendency of S-BERT to overestimate the similarity that we also observed with the STS-B data. The results of BERTScore exhibit a lot of the expected properties, i.e., a larger mean value for the paraphrases, and the similarity for the negative examples covers the whole range, except for the very high and very low values. However, we note that there are only a few cases with a similarity close to one for the paraphrases, even though this is actually the expected value. Moreover, the peak of the distribution is also a bit lower than for STSScore and the tail towards lower values for paraphrases is also stronger. As a result, the mean value of BERTScore for the paraphrases is only 0.68. Moreover, these downward shifts in distribution also lead to a larger overlap between the distributions for the paraphrases and the negative examples, i.e., there is a stronger random element in observing large scores with BERTScore than with STSScore. For BLEU, the results are not fully aligned with our expectations for the MRPC data. BLEU displays the same tendency to often report similarities of exactly zero for both classes that we also observed for STS-B. Similarly, the values for both classes are fairly low, with a mean similarity for the paraphrases of only 0.39. However, the visual analysis shows that this mean value is somewhat skewed by the many values that are exactly zero, as the peak for the non-zero similarities is rather around 0.6. Moreover, both the distribution as well as the lower mean value (0.26) indicate that the negative examples receive lower similarity scores, as expected. \begin{table} \begin{tabular}{l|c c|c c|c c} & \multicolumn{2}{c|}{STS-B} & \multicolumn{2}{c|}{MRPC} & \multicolumn{2}{c}{QQP} \\ & \(r\) & \(\rho\) & _neg_ & _pos_ & _neg_ & _pos_ \\ \hline BLEU & 0.34 & 0.32 & 0.26 (0.19) & 0.39 (0.20) & 0.11 (0.23) & 0.18 (0.25) \\ BERTScore & 0.53 & 0.53 & 0.53 (0.15) & 0.68 (0.13) & 0.44 (0.26) & 0.67 (0.17) \\ S-BERT & 0.83 & 0.82 & 0.71 (0.15) & 0.83 (0.12) & 0.56 (0.23) & 0.86 (0.10) \\ STSScore & 0.90 & 0.89 & 0.61 (0.18) & 0.84 (0.13) & 0.44 (0.24) & 0.76 (0.18) \\ \end{tabular} \end{table} Table 1: Summary statistics of the results. Pearson’s \(r\) and Spearman’s \(\rho\) between the labels and similarities for STS-B. Mean values with standard deviation in brackets for both classes of the MRPC and QQP data. We use _neg_/_pos_ to indicate the classes such that _pos_ is the semantically equal class. All values are rounded to the second digit. Figure 1: Evaluation of similarity measures on the test data of STS-B. Ideally, the similarity correlates linearly with the labels, i.e., the scores are close to the black line. Figure 2: Evaluation of similarity measures on the test data of MRPC. Ideally, the positive class (1) has scores close to one and the negative class (0) has smaller values, but not close to zero.
Still, based on the visual analysis, the distributions of the positive and negative samples strongly overlap, meaning that while the score trends in the expected direction at scale, it is not suited for individual results or calibrated as would be expected for this data. For QQP, the results for all three models are comparable to those for MRPC with respect to their alignment with our expectations: STSScore matches our expectations very well. We observe both a lot of very high values and overall rather high similarities for the duplicate questions. The mean is a bit lower than for MRPC. However, this rather seems to be a property of analyzing questions in comparison to regular sentences, as we observe such a downward shift across all classes and similarity measures. We also observe the expected trend towards very low values. S-BERT is a bit of a mixed bag for QQP. On the one hand, the results for the duplicate questions indicate a better measurement of the similarity than for STSScore. On the other hand, the negative examples also receive similarity scores that are higher by about the same amount and also have their peak at a very high similarity of 0.8, though the distribution is spread out such that it is almost uniform between about 0.5 and about 0.8. When we consider these results in the context of the other data sets, this can be explained by the general property of S-BERT to produce higher values for the measurement of the similarity. We note that S-BERT has seen some of the data from QQP during the pre-training, which may be the reason for the very high similarity of the duplicates. This advantage notwithstanding, it seems that STSScore and S-BERT are comparable on the QQP data, with a stronger separation observed with STSScore, but higher similarities for duplicates with S-BERT. BERTScore again exhibits a lot of the expected properties, but again fails to achieve very large values and has an overall lower similarity for the duplicate questions than STSScore. Same as above, this leads to a larger overlap between the distributions of the classes. The tendency to produce values of exactly zero is strongest for the QQP data. In general, one can say that BLEU failed for this data set: while there is still some difference in the mean value, most similarity scores of BLEU are exactly zero for both classes. Figure 3: Evaluation of similarity measures on the training data of QQP. Ideally, the positive class (1) has scores close to one and the negative class (0) has smaller values, with only a small fraction being close to zero. ## 5 Discussion Our most important result is that our experiments support our hypothesis: for all three data sets, the transformer-based prediction approach STSScorer aligned best with the expectation from the data and we suggest using such approaches for future calculations of the semantic similarity. The S-BERT approach in particular also yields good results, though it seems to have a tendency to overestimate the similarity, even on the QQP data which was partially seen during training. BERTScore also yields promising results, but the scores are only moderately correlated with human-labeled data (as measured with STS-B) and fail to fully capture semantic equality (as shown with MRPC and QQP). As could be expected, older n-gram-based approaches like BLEU are not comparable and yield significantly worse results: the distribution of the similarity scores is overall fairly random, although the general trend of the labels can still be (weakly) observed.
This means that rankings based on BLEU are likely not misleading, if a large amount of data is used and the effects between different models are large. Another important aspect that we want to stress is that the consideration of the actual distributions of the similarities computed by the different models gave us insights well beyond the statistical markers. Just looking at the statistics, we would have missed important aspects like the tendency of BLEU towards values of exactly zero, the tendency of BERTScore to not yield very large values close to one for semantically equal statements, or the differences between STSScorer and S-BERT on paraphrases within the MRPC data. We further note that our goal was to evaluate our hypothesis and provide an easy method for better semantic scoring. As a consequence, we chose to simply re-use an existing model from Huggingface instead of training our own model or using a checkpoint from elsewhere. While the model we use is not a lot worse than the top performers in the GLUE benchmark (the best-performing models on STS-B achieve correlations of up to 0.93), it cannot be considered state-of-the-art for STS-B. Nevertheless, our results hold, i.e., STSScorer is a very good approach, though plugging in other models fine-tuned for semantic similarity prediction might yield (slightly) better results. We also note that Reimers and Gurevych (2019) also considered a version of S-BERT fine-tuned on the STS-B data by using the STS-B scores divided by five (same as us) to define a cosine similarity loss, which seemed to improve the alignment with STS-B when computing the cosine similarity. Nevertheless, based on the results reported by Reimers and Gurevych (2019), using the predictions directly should still be a better similarity measure than computing the cosine similarity of the embeddings. However, embeddings have the advantage that they also enable other kinds of analysis, e.g., clustering or visualizations. Even though we believe that STSScorer is currently the best method for computing similarities, there is also an important drawback: when we use transformer-based approaches to evaluate other - likely transformer-based - approaches, the evaluation will possibly have all the problems regarding biases that transformers have. Thus, we may further encode these biases, because we selected models based on a biased evaluator. Additionally, all the current work ignores that semantic similarity is not absolute and also depends on the perspective of humans and can, e.g., be influenced by culture, social background, and other aspects, raising the question of whether we should rather specify our notion of similarity more carefully in the future and use multiple similarity measures that can capture such differences. Nevertheless, current technologies do not provide a better solution, though we suggest that future work should also consider ethical and fairness concerns of semantic similarity measures to understand - and hopefully mitigate - such problems. ## 6 Conclusion The jumps in performance for natural language processing enable us to directly predict the semantic similarity instead of using embedding-based approaches or heuristics. Due to the readily available models on platforms like Huggingface, switching to such models for the future evaluation of the semantic similarity of results should be easily possible. ### Acknowledgments and Disclosure of Funding No funds, grants, or other support was received.
Semantic similarity between natural language texts is typically measured either by looking at the overlap between subsequences (e.g., BLEU) or by using embeddings (e.g., BERTScore, S-BERT). In this paper, we argue that when we are only interested in measuring semantic similarity, it is better to directly predict the similarity with a model fine-tuned for such a task. Using a model fine-tuned for the Semantic Textual Similarity Benchmark task (STS-B) from the GLUE benchmark, we define the STSScore approach and show that the resulting similarity is better aligned with our expectations of a robust semantic similarity measure than the other approaches.
2309.08817
GPT as a Baseline for Recommendation Explanation Texts
In this work, we establish a baseline potential for how modern model-generated text explanations of movie recommendations may help users, and explore which components of these text explanations users like or dislike, especially in contrast to existing human movie reviews. We found that participants gave no significantly different rankings between movies, nor did they give significantly different individual quality scores to reviews of movies that they had never seen before. However, participants did mark reviews as significantly better when they were for movies they had seen before. We also explore specific aspects of movie review texts that participants marked as important for each quality. Overall, we establish that modern LLMs are a promising source of recommendation explanations, and we intend to further explore personalizable text explanations in the future.
Joyce Zhou, Thorsten Joachims
2023-09-16T00:00:44
http://arxiv.org/abs/2309.08817v1
# GPT as a Baseline for Recommendation Explanation Texts ###### Abstract In this work, we establish a baseline potential for how modern model-generated text explanations of movie recommendations may help users, and explore which components of these text explanations users like or dislike, especially in contrast to existing human movie reviews. We found that participants gave no significantly different rankings between movies, nor did they give significantly different individual quality scores to reviews of movies that they had never seen before. However, participants did mark reviews as significantly better when they were for movies they had seen before. We also explore specific aspects of movie review texts that participants marked as important for each quality. Overall, we establish that modern LLMs are a promising source of recommendation explanations, and we intend to further explore personalizable text explanations in the future. recommendation systems, text explanations, explainable recommendation 1 Footnote 1: _IntRS’23: Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, September 18, 2023, Singapore (hybrid event)_ \({}^{\star}\)Corresponding author. We found that participants gave no significantly different rankings between movies, nor did they give significantly different individual quality scores to reviews of movies that they had never seen before. However, participants did mark model-generated review texts as significantly better than human-written reviews when they were for movies they had seen before. We also explore specific aspects of movie review texts that participants marked as important or crippling for each quality. These include structured item features such as genre or famous actors, but they also include particular interests such as special effects or historical relevance, cultural context of a work, or personal experiences of the review-writer themselves. Overall, we establish that modern LLMs are a promising source of post-hoc explanations that could accompany item recommendations with relevant summaries to improve user satisfaction. We intend to further explore user-personalized text explanations in the future. ## 2 Related Work There exists a substantial amount of past work in explanations for recommender systems [8], which historically mostly used feature-based explanations [3]. More recently, there has been work on sentence generation or longer-form text generation for recommender system explanations. Some of this work makes use of crowdsourced texts and extracts only relevant segments to show together with a recommendation [9], or offers personalized messages based on how similar another user's preferences are [? ]. Other recommender system designs have been proposed that incorporate text explanations as part of the recommendation task itself, to make the recommender system more easily explainable from the start. For example, a recommender system may be trained to predict review scores for some user-item pair, based on past user and item reviews [10], extract specific product features [11], or predict the exact text that a user writes for a specific recommendation [12, 13]. However, most of this work has measured success through recommendation performance and general accuracy, not user experience. There has also been substantial work with conversational recommender systems [14], which arguably incorporate some kind of explanation in the form of the conversation itself.
For example, [15] build a full song recommendation chatbot, which serves songs to users together with a chat message sampled from a dataset of previously written human messages, evaluated on text coherence and click-through rate. Most efforts at personalizing a text explanation assume that the text should be personalized to contain features that a reader strongly cares about, and not so much the text format itself. Work on personalized explanations often evaluates for text quality and recommendation success rate, but often also includes user satisfaction. [? ] build a system that generates personalized natural language explanations for a given user-item recommendation, and evaluate on text quality and explanation quality (based on item features). [16] uses crowdsourced reviews to generate (sample from human texts) personalized explanations for recommendations. ## 3 Survey Design ### Research Questions We started with the following research questions: **RQ1:** How does presenting a movie (especially those that participants have not seen before) with model-generated texts, in contrast to human-generated texts, impact their watch preference rankings, if at all? **RQ2:** Do model-generated texts receive individual quality scores (for whether they are informative, persuasive, and interesting) on par with those of human-generated texts? We highlighted rankings of unseen movies specifically because participants who have already seen a movie are likely to have more concrete opinions on whether they like it, which are unlikely to be affected by reading one new review text. In contrast, individual quality scores are intended to focus on the text contents alone. ### Dataset To show a set of mildly customizable movie suggestions and texts for each participant, we needed to collect a broad set of relatively well-known movies to base suggestions on (the “seed” movie), a set of suggestions for each seed movie, and a set of human-generated and model-generated texts for each suggested movie. We manually selected 10 well-known seed movies to ideally cover a range of movie genres, as well as raise the chance of most participants recognizing at least one seed movie. For each seed movie, we collected a sorted list of similar movies using the first 5 pages of “similar movies” on MovieLens followed by 1 page of “similar movies” on BestSimilar. For each movie suggestion, we collected 5 human-generated reviews to use as a baseline comparison. These reviews consisted of the top 5 most “featured” non-spoiler reviews on IMDB1 for each movie. For each movie suggestion, we also synthesized one model-generated text using the 5 human-generated reviews as a source of information about the movie. Finally, we took the human-generated and model-generated texts, reformatted them to remove all linebreaks and extraneous whitespace, and truncated them to 100 tokens when necessary (with “...” appended in these cases to indicate that further text was not shown). Footnote 1: as of 2023/04/26, “featured” reviews seem to use a metric combining high helpfulness vote percentage and high total vote counts In summary, there are 10 well-known seed movies, 5 lesser-known suggestions based on each seed movie (50 total), 5 human-generated reviews for each suggestion (250 total), and 1 model-generated review for each suggestion (50 total).
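The text clean-up step described above can be sketched as follows; the whitespace-based notion of a "token" is an assumption, since the paper does not specify which tokenizer defines the 100-token limit.

```python
# Sketch of the review-text preprocessing: collapse linebreaks and extra
# whitespace, then truncate to 100 tokens, appending "..." when text was cut.
# Whitespace tokenization is an assumption made for illustration.
def clean_review(text, max_tokens=100):
    tokens = text.split()  # splitting on any whitespace also removes linebreaks
    if len(tokens) <= max_tokens:
        return " ".join(tokens)
    return " ".join(tokens[:max_tokens]) + " ..."

print(clean_review("A   movie review\nwith odd\n\nspacing."))
```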
### Survey Procedure We generated condition codes to ensure balanced representation between human-generated and bot-generated texts for each movie, as well as between different human authors of the human-generated texts. Each participant was assigned one condition code that determines which of the 6 review texts is shown for each individual movie suggestion, as well as whether they are shown sources for all review texts. Survey participants were asked to select one seed movie, then shown 5 suggested movies together with texts describing each suggestion. They were asked to rank suggestions, as well as mark which suggested movies they had seen before. Finally, for each suggestion, they were asked to rate how accurate (only if they had seen the movie before), informative, persuasive, and interesting they found the texts, as well as elaborate on what aspects of the text satisfied or did not satisfy these attributes. We recruited 120 participants from Amazon Mechanical Turk, limiting to workers who are within the United States, have a prior task approval rating of at least 95%, and have a minimum of 1000 approved tasks. Study participants received a minimum of $2.00 for completing the survey, with an additional bonus of up to $5.50 based on time spent within the survey to ideally reach a $15.00 hourly wage. Further details about the survey setup and procedure are available in our dataset repository2. Footnote 2: [https://github.com/cephcyn/gpt-reccexpl-mturksurvey](https://github.com/cephcyn/gpt-reccexpl-mturksurvey) ## 4 Results Overall, we found no significant difference between movie preference rankings with model-generated texts in contrast to human-generated texts (RQ1), even when focusing on rankings of solely unseen movies. We also found no significant difference overall in Likert quality scores between model-generated and human-generated review texts (RQ2) for movies that participants had never seen before. However, Likert quality scores of model-generated texts for movies that participants _had seen_ before were significantly higher than those of human-generated texts. ### Preference Rankings We saw no significant effect of showing model-generated or human-generated texts on movie preference rankings (related to RQ1). In Table 1, we show the number of top-ranked movies distributed across different text sources. To remove effects of some movies already being seen, we also show this comparison for the “unseen only” rankings (removing all seen movies, as well as removing rankings of respondents who had 4 or more seen movies) and “seen only” rankings (same criteria but removing unseen movies). Ranking sets which had too many seen or unseen movies to have any candidate for “top-ranked movie” were excluded. We were also interested in potential differences in average ranking position. As the “unseen only” and “seen only” filters often remove at least one movie from the ranking, we chose to measure average ranking by first applying any filters, then normalizing all remaining movies such that they are ranked with decimal values from 0 (top-ranked) to 1 (lowest-ranked). In Table 2, we show an average ranking for all movies shown with different text sources, across all respondents. Note that across all three “un/seen” condition filters, movies with model-generated texts are ranked marginally higher on average, but none of these differences are significant. In summary, while model-generated texts did not encourage participants to rank movies higher, they were not ranked significantly lower either!
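The filtering and 0-to-1 rescaling just described can be made concrete with the small sketch below; the handling of rankings with fewer than two surviving movies is an assumption, as the paper does not spell out that edge case.

```python
# Sketch of the normalized-ranking computation: filter a participant's ranked
# list (e.g., to unseen movies only), then map surviving positions to [0, 1],
# where 0 is top-ranked and 1 is lowest-ranked.
def normalized_ranks(ranked_movies, keep):
    """ranked_movies is ordered from most to least preferred;
    keep is a predicate implementing the un/seen filter."""
    kept = [m for m in ranked_movies if keep(m)]
    if len(kept) < 2:
        return {}  # assumed: a single surviving movie carries no rank information
    return {m: i / (len(kept) - 1) for i, m in enumerate(kept)}

ranking = ["movie_a", "movie_b", "movie_c", "movie_d", "movie_e"]
seen = {"movie_b"}
print(normalized_ranks(ranking, keep=lambda m: m not in seen))
```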
As a side note, this is a minor demonstration that it is possible to use model-generated texts in review style without significantly altering media preference rankings. Admittedly, we gave GPT a specific prompt to write a friendly review without the intention of promoting any particular item. A system designer with different goals would still be entirely able to prompt GPT to heavily promote or heavily criticize a particular piece of media. ### Likert Quality Scores Because a movie ranking position is not necessarily tied to how relevant each shown text is, we also asked participants to give individual feedback for each text3. We saw no significant differences between the three main text qualities (informative, persuasive, and interesting) for model-generated and human-generated texts across all movies (RQ2). Footnote 3: We found no correlation between Likert quality scores and movie ranking position for all qualities other than “persuasiveness”, and even then only a marginal one. However, when filtering solely on _seen_ movie responses, we found that model-generated texts received significantly higher Likert quality scores than human-generated texts (Figure 1, Table 3). This effect is especially curious when we contrast the marginal differences between the distributions of responses for seen vs. unseen movies. For seen films, model-generated text seems to get more strongly-decided rankings, while in unseen films it is human-generated text that is ranked more strongly instead. Again, we generally observe that model-generated texts are ranked at least as well as human-generated texts across a range of criteria, even without any level of customization for each participant. \begin{table} \begin{tabular}{c||c|c|c|c} \hline \hline Filter & Human & Model & Excluded & P-value \\ \hline All & 57 & 63 & - & 0.64 \\ Unseen only & 57 & 53 & 10 & 0.77 \\ Seen only & 12 & 17 & 91 & 0.45 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of top-rank choices for movies assigned model vs. human texts (n=120) \begin{table} \begin{tabular}{c||c|c|c} \hline \hline Filter & Human & Model & P-value \\ \hline All & 0.513 (n=300) & 0.486 (n=300) & 0.35 \\ Unseen only & 0.515 (n=220) & 0.488 (n=196) & 0.47 \\ Seen only & 0.544 (n=45) & 0.449 (n=50) & 0.26 \\ \hline \hline \end{tabular} \end{table} Table 2: Average ranking (normalized; 0 is top-ranked, 1 is lowest-ranked) for movies assigned model vs. human texts ### Qualitative Analysis of Quality Responses To explore what aspects of each review participants may particularly like or dislike, as well as how these aspects may relate to different qualities that make a review text appealing, we read through the qualitative responses that participants gave and summarize them here. In general, participants mentioned several recurring topics, regardless of which attribute we were asking about. These common topics are: * _Raw release information and movie synopsis._ This includes plot summary (often avoiding spoilers), story themes, content warnings, genre, setting, director or actor names, or technical details such as special effects or format. Participants tended to criticize texts missing this information: “It didn’t really tell me what the movie was about. The writer just said how much they like the movie.”
* _Context around movie production or release._ This includes descriptions of critical reception, awards, contrasts against other period-specific or genre-overlapping films, commentary on how other reviewers are correct or incorrect, historical impact, or how a movie otherwise fits into a larger body of work. * _Opinion and personal movie experience._ This includes personal opinions, emotional responses that reviewers had, or otherwise general praise or criticisms of a movie. For instance, participants highlighted how texts describe the "strong performance" of an actor, how a film is "fascinating and compelling", or how a review text "contained actual criticism". * _Review structure and writing style._ Several participants commented on the overall structure of a review text. This was usually criticism about texts focusing solely on a reviewer's personal experiences or anecdotes instead of information about the movie itself. Participants often also described review writing style: whether it was short or wordy, the tone or emotion of a review, or what kind of colorful language was used to describe a movie. \begin{table} \begin{tabular}{c|c||c|c|c} \hline \hline Quality & Filter & Human & Model & P-value (binomial) \\ \hline Accurate & Seen only & 4.063 (n\(\sim\)79) & 4.284 (n\(\sim\)102) & 0.02 \\ \hline Informational & All & 3.943 (n\(\sim\)300) & 4.053 (n\(\sim\)300) & 0.49 \\ Informational & Unseen only & 3.972 (n\(\sim\)221) & 3.984 (n\(\sim\)198) & 0.53 \\ Informational & Seen only & 3.860 (n\(\sim\)79) & 4.186 (n\(\sim\)102) & 0.03 \\ \hline Persuasive & All & 3.850 (n\(\sim\)300) & 3.966 (n\(\sim\)300) & 0.40 \\ Persuasive & Unseen only & 3.877 (n\(\sim\)221) & 3.863 (n\(\sim\)198) & 0.35 \\ Persuasive & Seen only & 3.772 (n\(\sim\)79) & 4.166 (n\(\sim\)102) & 0.002 \\ \hline Interesting & All & 3.913 (n\(\sim\)300) & 4.050 (n\(\sim\)300) & 0.10 \\ Interesting & Unseen only & 3.877 (n\(\sim\)221) & 3.904 (n\(\sim\)198) & 0.86 \\ Interesting & Seen only & 4.012 (n\(\sim\)79) & 4.333 (n\(\sim\)102) & 0.003 \\ \hline \hline \end{tabular} \end{table} Table 3: Average Likert score (5 is agree, 1 is disagree) for movies assigned model vs. human texts. P-values were calculated with a logistic regression model, treating Likert scores as ordinal. Across these responses, we noticed that human-generated review texts sometimes focused on personal experiences (occasionally excluding a plot summary entirely), while model-generated texts usually included a plot summary (described by respondents as vague or spoiler-free) and summary of critical reception to a movie. Participants tended to emphasize different attribute types more in different question types. With accuracy, participants focused on raw release information and release context more than they did on subjective experiences, except for when a participant specifically said they agreed with that experience. With informativeness, participants often summarized review attributes across all categories, and sometimes criticized reviews for saying nothing objective about a movie or being overly vague. With persuasiveness, there was more variation across participant responses. Some highlighted how a review conveys enthusiasm about a movie (or other personal experiences) or appreciated a review framing the achievements of a movie against others from the same era, while other participants criticized a review for containing too much subjective information and not enough plot summary. 
Finally, with interestingness, participants tended to focus more often on writing style or creative wording overall, as well as criticizing vagueness. However, like persuasiveness, there was a good amount of variation between participants for what other topics they found particularly interesting. Figure 1: Quality rating distribution across all movie responses, split by seen status. Left column is responses for already-seen films only, right side is responses for unseen films only. Note that there is no distribution for accuracy responses on unseen films, as the accuracy rating question was only shown for texts describing films that respondents marked as seen. ## 5 Discussion & Future Directions Overall, we established a baseline of model-generated recommendation texts being possible and promising! In addition, a good number of the survey participants responded to the final question saying they did want to see more reviews from the same authors, even for the ones that were model-generated. The Likert scale effect being stronger for movies that were seen already was unexpected and interesting. However, in retrospect, this makes sense for accuracy. Multiple respondents mentioned how a human-generated text contained only anecdotes or personal experiences with a movie and nothing that can be validated. For the other qualities, we suspect this is due to similar reasons: model-generated texts tend to summarize general opinions and offer vague summaries of a movie plot and critical reception, without sharing any strong personal opinion. One big weakness of this work is that GPT texts were still generated using human reviews. This is both a limitation for recommended movies that have no human reviews written yet and a general ethical flaw of LLMs (as they are often built atop unpaid human labor with model training and model input). We do not demonstrate any capacity to write competent reviews for media that have no already-written human reviews here, and this is a future work direction that deserves attention. Finally, one major future work direction that we are especially interested in includes exploring how we can allow users to customize different review attributes or otherwise learn these preferences to better generate summary texts that improve user satisfaction and focus on what they prioritize. ## 6 Conclusion Existing recommender systems have tried incorporating chatbots and customized text explanations, but have not had landmark well-performing LLMs to make use of until now. We ran a survey that simulates a very basic recommender system and a barely customized LLM, but it establishes that LLM-generated text explanations could be a good substitute, supplement, or complement to existing human review texts. We found that while LLM review text does not perform significantly better in rankings or direct scoring compared to human texts for unseen movies, it also does not do any worse. Ultimately, this is a good baseline to build on with future work. Based on the freeform responses we got from survey participants, there is a good amount of personalization that an explanation-text-offering LLM could provide to individual recommender system users. ## Acknowledgments This research was supported in part by the Graduate Fellowships for STEM Diversity (GFSD), as well as NSF Awards IIS-1901168 and IIS-2008139. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
In this work, we establish a baseline for how modern model-generated text explanations of movie recommendations may help users, and explore which components of these text explanations users like or dislike, especially in contrast to existing human movie reviews. Participants gave no significantly different rankings between movies, nor did they give significantly different individual quality scores to reviews of movies they had never seen before. However, participants did rate reviews as significantly better when they were for movies they had seen before. We also identify specific aspects of movie review texts that participants marked as important for each quality. Overall, modern LLMs are a promising source of recommendation explanations, and we intend to further explore personalizable text explanations in the future.
2308.16900
Learning to Taste: A Multimodal Wine Dataset
We present WineSensed, a large multimodal wine dataset for studying the relations between visual perception, language, and flavor. The dataset encompasses 897k images of wine labels and 824k reviews of wines curated from the Vivino platform. It has over 350k unique bottlings, annotated with year, region, rating, alcohol percentage, price, and grape composition. We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment with 256 participants who were asked to rank wines based on their similarity in flavor, resulting in more than 5k pairwise flavor distances. We propose a low-dimensional concept embedding algorithm that combines human experience with automatic machine similarity kernels. We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and aligns with the intricate human perception of flavor.
Thoranna Bender, Simon Moe Sørensen, Alireza Kashani, K. Eldjarn Hjorleifsson, Grethe Hyldig, Søren Hauberg, Serge Belongie, Frederik Warburg
2023-08-31T17:58:28
http://arxiv.org/abs/2308.16900v4
# Learning to Taste : A Multimodal Wine Dataset ###### Abstract We present _WineSensed_, a large multimodal wine dataset for studying the relations between visual perception, language, and flavor. The dataset encompasses 897k images of wine labels and 824k reviews of wines curated from the Vivino platform. It has over 350k unique vintages, annotated with year, region, rating, alcohol percentage, price, and grape composition. We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment with 256 participants who were asked to rank wines based on their similarity in flavor, resulting in more than 5k pairwise flavor distances. We propose a low-dimensional concept embedding algorithm that combines human experience with automatic machine similarity kernels. We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and aligns with the intricate human perception of flavor. ## 1 Introduction Vision, language, audio, touch, smell, and taste are sensory inputs that ground humans in a shared representation, which enables us to interact, converse, and create. Recent advances in multimodal learning have shown that combining diverse modalities in a shared representation leads to useful and better-grounded models (Girdhar et al., 2023; Chen et al., 2023). Inspired by recent progress, we propose to add flavor to the list of modalities used to learn shared representations. As a first step towards modeling flavor, we focus on wine since (1) wines have been studied for centuries, (2) their flavors have been carefully categorized, and (3) classification systems exist to ensure that flavor is near-consistent across bottles of the same vintage. We bridge the gap between the machine learning and food science communities by presenting WineSensed, a multimodal wine dataset that consists of images, user reviews, and flavor annotations. Our motivation is twofold. On one hand, internet photos and user reviews are a scalable source of data, offering abundant, diverse, and easily accessible insights into wine qualities. On the other hand, human flavor annotations, while not as scalable, provide a more direct and granular understanding of the wines' flavor profile. By combining these resources, we aim to capture the best of both worlds, yielding a richer, more intricate dataset. We organized a large sensory study to obtain human-annotated flavor profiles of the wines. The study applies the "Napping" methodology (Pages, 2005), which is commonly used to conduct consumer surveys (Kim et al., 2013; Ribeiro et al., 2020). In this study, 256 participants annotated their perceived taste similarities of various wines. In Fig. 1, the "human kernel" illustrates how participants were instructed to place wines on a sheet of paper based on how similar they perceived their flavor to be. The Napping method enabled us to annotate wine flavors with a high level of detail and harness the perception of a broad spectrum of individuals. It scales well, as asking a participant to annotate five wines yields 10 pairwise annotations. All participants combined annotated more than 5k flavor distances. 
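As an illustration of how the sheet placements turn into pairwise flavor annotations, consider the small sketch below; the coordinates and units are made up for illustration, and the (lack of) per-sheet normalization is an assumption rather than the dataset's documented convention.

```python
# Sketch of how a single Napping sheet yields pairwise flavor distances:
# each wine is a 2D position on the sheet, and every unordered pair of the
# five wines contributes one Euclidean distance, so 5 wines -> 10 pairs.
from itertools import combinations
import math

placements = {            # sheet coordinates in arbitrary units (illustrative)
    "wine_a": (3.0, 12.5),
    "wine_b": (4.5, 11.0),
    "wine_c": (20.0, 5.0),
    "wine_d": (22.5, 6.5),
    "wine_e": (10.0, 25.0),
}

pairwise = {
    (w1, w2): math.dist(placements[w1], placements[w2])
    for w1, w2 in combinations(placements, 2)
}
print(len(pairwise))  # 10 pairwise annotations from 5 wines
for pair, d in sorted(pairwise.items(), key=lambda kv: kv[1])[:3]:
    print(pair, round(d, 2))  # the closest (most similar-tasting) pairs
```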
To complement these annotations, we curate images of wine labels, user reviews, and wine attributes (country of origin, alcohol percentage, price, and grape composition) from the Vivino platform, a popular online social network for wine enthusiasts.1 WineSensed, therefore, represents a large, multimodal dataset that merges user-generated content with sensory assessments, bridging the gap between subjective consumer perception and objective flavor profiles. Footnote 1: [https://vivino.com](https://vivino.com) Along with the dataset, we propose _Flavor Embeddings from Annotated Similarity & Text-Image_ (FEAST), which leverages recent developments in large multimodal models to embed user reviews and images of wine labels into a low-dimensional latent representation that contains semantic and structural information correlating with taste. Our model aligns this representation with the flavor annotations from our user study. We find that this combined representation yields a "flavor space" that models coarse flavor concepts like alcohol percentage, country, grape, and the year of production, while also being aligned with more intricate human perception of flavor. Experimentally, we find (1) that using the pairwise distances (rather than ordering) of the annotated wines improves the flavor representation, which confirms the established methodology in food science and validates our annotation process. (2) We discover that using multiple data modalities (images, text, and flavor annotations) boosts the flavor representations, highlighting the usefulness of our multimodal dataset. (3) Finally, we show that the proposed multimodal model produces a flavor space with a high alignment with humans' perception of flavor. ## 2 Background and related work **Multimodal representations.** Learning a shared representation between modalities can reveal useful representations that generalize well and appear grounded in reality. Pioneering work [1] proposes to learn the correlation between vision and audio. A number of deep learning methods propose to use large collections of weakly annotated data to learn shared vision-language representations (Joulin et al., 2016; Desai and Johnson, 2021; Radford et al., 2021b; Mahajan et al., 2018), shared audio-text representations (Agostinelli et al., 2023), shared vision-audio representations (Ngiam et al., 2011; Owens et al., 2016; Arandjelovic and Zisserman, 2017; Narasimhan et al., 2022; Hu et al., 2022), shared vision-touch representations (Yang et al., 2022), or shared sound and Inertial Measurement Unit (IMU) representations (Chen et al., 2023). Recently, ImageBind (Girdhar et al., 2023) showed that images can bind multiple modalities (images, text, audio, depth, thermal, IMU) into a shared representation. Figure 1: **Flavor as an additional data modality. The WineSensed dataset consists of a large collection of images, user reviews, and metadata about vintages (upper left). In a large user study, we collected flavor annotations of over 100 wines using the “Napping” method [12], where participants were asked to place wines on a sheet of paper based on their perceived taste similarity (lower left). We propose an algorithm to combine these data modalities into a shared representation (right) and find that using taste annotations as an additional modality improves performance in downstream tasks.**
While recent advances in other areas of multimodal learning have been fueled by large datasets, the difficulty of quantifying and collecting high-quality flavor data has made it challenging for the machine learning community to develop similar representations for flavor. **Quantifying flavor.** Understanding and engineering _flavor_ is a central part of food science and essential in the quest towards healthy and sustainable food production (Savage, 2012), but the use of machine learning methods to this end is still in its infancy. Fuentes et al. (2019) found a correlation between seasonal weather characteristics, and wine quality and aroma profiles, thereby verifying what wine producers have long held to be true. Similarly, Gupta (2018) found that sulfur dioxide, pH, and alcohol levels are useful for predicting wine quality. Due to the difficulty of gathering quality perception data, much work focuses on how 'low-level' chemical aspects related to 'high-level' taste properties, e.g. in assessing the quality of chocolate and beer (Gunaratne et al., 2019; Gonzalez Viejo et al., 2018). Analyzing a person's perception of wine is challenging due to the complex nature of flavor, which remains ill-understood, and the difficulty in obtaining consistent verbal descriptions of taste across individuals. Napping (Pages, 2005) is the _de facto_ method to analyze perceived taste in consumer surveys. Participants receive taste samples and are instructed to place them on a sheet of paper based on how similar they perceive their taste to be, with closer meaning more similar. Such experiments are usually conducted with 10-25 participants and less than 20 variants of a product (Giacalone et al., 2013; Pages et al., 2010; Mayhew et al., 2016). In this study, we scale this data collection process to 256 participants and 108 vintages of red wine, resulting in over 400 mapping papers collected and more than 5k annotated flavor distances. In contrast to previous works (Giacalone et al., 2013; Pages et al., 2010; Mayhew et al., 2016) our objective is to incorporate taste as one of the modalities that contribute to the shared representations for improved grounding of machine learning models. Figure 2: **Examples from WineSensed. The dataset consists of images of wine labels, user-generated reviews, per-wine attributes (country, grape, region, alcohol percentage, rating, price), and flavor annotations. Here are examples of the images, reviews, and attributes.** **Human kernel learning.** Annotating flavor with Napping (Pages, 2005) does not provide image-flavor or text-flavor correspondences but rather relative flavor similarities between sampled products. According to (Miller, 2019) humans are better at describing abstract concepts such as taste with contrastive questions, such as _"does wine X taste more similar to wine Y or Z?"_ For this reason, the machine learning community has used contrastive questions in multiple settings, e.g., for understanding how humans perceive light reflection from surfaces by presenting annotators with image triplets depicting the Stanford Bunny with varying material properties (Agarwal et al., 2007), to produce a genre embedding of musical artists (Van Der Maaten and Weinberger, 2012), and for discovering underlying narratives in online discussions (Christensen et al., 2022). Most relevant to our work is SNaCK (Wilber et al., 2015), which presents annotators with image triplets depicting foods and asked which two of them taste more similar, to obtain flavor triplets. 
They proposed to combine this high-level human flavor understanding with low-level image statistics to learn food concepts, e.g., that even though guacamole and wasabi look similar, their taste is not. Having humans annotate image triplets of foods works well for coarse concepts, but does not encompass nuanced differences in taste. In this work, we focus on the much finer-grained taste difference found in wines. These nuances and the complex nature of wine tasting, which involves taste _and_ smell, are not easily conveyed through text or images. **Flavor datasets.** The machine learning community has produced numerous food datasets for classifying which meal is in an image (Bossard et al., 2014; Min et al., 2020), retrieving a recipe given an image (Salvador et al., 2017; Li et al., 2022), or predicting the origin of wines (Dua and Graff, 2017). While it is possible to extract coarse information about taste from such datasets (Wilber et al., 2015), they do not encompass higher resolution details of taste, such as the differences between a Cabernet Sauvignon and Pinot Noir. Similarly, the food science community has developed many datasets for understanding and predicting food flavors, nutrient content, and chemistry. Flavornet (Arn and Acree, 1998), a dataset on human-perceived aroma compounds, explores partly how smells relate to perceived bitterness or fruitiness in a wine. However, its limitation is its lack of context linking these odors to specific wine varieties and its limited focus on flavor aspects. FoodDB (Harrington et al., 2019) offers comprehensive information on a wide variety of food, its nutrient contents, potential health effects, and macro and micro constituents. However, it lacks user-generated reviews and sensory data, which are crucial for understanding the subjective human perception of food and wine. The Wine Data Set (Dua and Graff, 2017) focuses on wines, but only contains wines originating from one region in Italy, limiting the dataset's ability to capture the broader diversity of flavor profiles of wines from various regions worldwide. Furthermore, Dua and Graff (2017) solely incorporate the chemical compounds present in each wine, without annotations of flavors and information associating specific wines with each chemical compound. In contrast to previous work, we present a multimodal dataset that contains a large corpus of images and reviews, as well as human-annotated flavor similarities. ## 3 The _WineSensed_ dataset We present WineSensed, a large, multimodal wine dataset that combines human flavor annotations, images, and reviews. In this section, we provide an overview of the curation process for each of these modalities. **Annotated flavors.** The flavor data consists of over 5k human-annotated pairwise similarities between 108 vintages. Each annotated pair is annotated at least five times to reduce noise. These annotations are collected through a series of wine-tasting events attended by a total of \(256\) non-expert wine drinkers. Most participants were between 21-25 years old, and more than half of them were from Denmark. Each participant volunteered their time, dedicating a maximum of two hours to complete the annotations. The experiment was conducted in accordance with the "De Videnskabsetiske Komitee" (e. the Danish ethics committee for science) (see Appendix I). We randomly selected 5 wines for the participants to taste. The participants did not have access to any information regarding the individual wines. 
The wine was poured into non-transparent shot glasses and the labels of the wines were covered during the entire experiment. The participants were instructed to put colored stickers (representing each of the five wines) on a sheet of paper based on their taste similarity, closer meaning more similar. The participants could repeat the process up to three times, ensuring they did not consume more than 225 ml of wine. The average participant repeated the experiment two times. We automatically digitized the participants' annotations by taking a photo of each filled-out sheet. We used the Harris corner detector [Harris et al., 1988] to find the corners of the paper and a homographic projection to obtain an aligned top-down view of the paper. The images were mapped into HSV color space and a threshold filter was applied to find the different colored stickers that the participant used to represent the wines. Having identified the locations, we computed the Euclidean pixel-wise distance between all pairs of points, resulting in a distance matrix of wine similarities. A more detailed description of the collection and digitization of the mapping papers can be found in Appendix D. **User-reviews.** We curated 824k text reviews from the Vivino platform. The reviews were filtered to contain at least \(10\) characters to avoid non-informative reviews such as 'good' and 'bad.' Fig. 2 shows examples of user-reviews. The reviews are free text and can contain special tokens such as emojis. The reviews tend to describe price, pairing, and general terms of wine. Some also describe which flavors the reviewer tastes. These reviews are subjective and can vary based on personal factors and context, leading to inconsistent flavor profiles. Moreover, they only contain coarse flavor descriptions and focus more on aspects like preference, price, occasion, and so forth. Fig. 4 shows the distribution of word count per review, number of reviews per vintage, and the most common keywords. **Images.** The dataset has 897k images of wine labels. Wine labels are known to play a major role in a consumer's decision to purchase a particular wine, so it is reasonable to believe that label design carries information regarding the taste of the wine (Talbot, 2019). Fig. 3 shows examples of images from the dataset. The images vary in their viewing angle, illumination, and image composition. Figure 3: **Examples of images. The viewpoint, lighting, and composition vary across images.** Figure 4: **Summary statistics of user reviews and images. Most vintages have less than 10 images. The average review length is 16 words. Common keywords in the reviews include ‘fruit’, ‘dry’, and ‘smooth’, revealing coarse semantic information about the flavor of the wines, while other keywords such as ‘good’ and ‘great’ do not reveal flavor information.** **Attributes.** Each wine is associated with the geographical location of the vineyard (both country and region), grape varietal composition, vintage, alcohol content, pricing, and average user rating. Fig. 5 shows the distribution of these attributes. Most wines originate from Italy, with Sangiovese being the most commonly used grape. The wines occupy the lower range of the price spectrum, with the most expensive ones priced at around 40 USD. The attributes are available for 5% of the dataset entries.
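The sheet-digitization step described above (corner detection, homographic rectification, HSV thresholding, pairwise distances) can be sketched with standard OpenCV and NumPy calls. This is a minimal illustration rather than the authors' exact pipeline: the four paper corners are assumed to be already selected from the Harris response, and the HSV ranges for each sticker color are placeholder values.

```python
import cv2
import numpy as np
from itertools import combinations

def digitize_sheet(image_bgr, paper_corners, sticker_hsv_ranges, size=1000):
    """Rectify a photographed Napping sheet and return pairwise sticker distances (pixels)."""
    # Homographic projection to an aligned top-down view of the paper.
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    H = cv2.getPerspectiveTransform(np.float32(paper_corners), dst)
    top_down = cv2.warpPerspective(image_bgr, H, (size, size))

    # Threshold in HSV space to locate each colored sticker and take its centroid.
    hsv = cv2.cvtColor(top_down, cv2.COLOR_BGR2HSV)
    centroids = {}
    for wine_id, (lo, hi) in sticker_hsv_ranges.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] > 0:
            centroids[wine_id] = (m["m10"] / m["m00"], m["m01"] / m["m00"])

    # Euclidean pixel-wise distance between all pairs of detected stickers.
    return {
        (a, b): float(np.hypot(centroids[a][0] - centroids[b][0],
                               centroids[a][1] - centroids[b][1]))
        for a, b in combinations(sorted(centroids), 2)
    }

# Example call with hypothetical corner points and HSV ranges for two sticker colors.
# image = cv2.imread("sheet.jpg")
# distances = digitize_sheet(
#     image,
#     paper_corners=[(120, 80), (1850, 95), (1830, 1400), (100, 1380)],
#     sticker_hsv_ranges={"A": ((0, 120, 70), (10, 255, 255)),
#                         "B": ((50, 100, 70), (70, 255, 255))})
```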
## 4 Flavor Embeddings from Annotated Similarity & Text-Image (FEAST) The embeddings of recent large image and text networks contain structural and semantic information; however, they do not model the intricacies of human flavor. We propose FEAST, a method to align these embeddings to the human perception of flavor using a small set of human-annotated flavor similarities. FEAST takes text and/or images as input, as well as human-annotated flavor similarities. It outputs a unified embedding that aligns with human sensory perception. Fig. 6 provides an overview of the proposed method. We first embed the text and/or images into a latent space with CLIP (Radford et al., 2021). We use CLIP because of its large training corpus and its image-text aligned latent space; however, we note that other pretrained networks can be used. We use t-SNE (Van der Maaten and Hinton, 2008) to reduce the dimensionality of the latent space to 2, which simplifies and constrains the later alignment with the pairwise flavor annotations. The pairwise distances are embedded into a 2D representation using non-metric multidimensional scaling (NMDS) with the SMACOF strategy (de Leeuw and Mair, 2009). NMDS allows us to preserve the original flavor distances provided by humans in a shared space, where each vintage is represented by a point location rather than pairwise distances. MDS is commonly used in food science to analyze sensory annotations from Napping studies (Pineau et al., 2022; Varela and Ares, 2012; Nestrud and Lawless, 2010). We then align these two 2D representations to get a joint representation that benefits from the structural and semantic information of the image and/or text representations, scales to unobserved vintages, and is aligned with the human perception of flavor. We use Canonical Correlation Analysis (CCA) (Harold, 1936) to align the two representations. CCA identifies and connects common patterns between these representation spaces, ensuring that the final representation is consistent across all input modalities. Figure 5: **Wine attributes.** WineSensed contains attributes about the geolocation of production (country, region) and the grape composition of each wine. Furthermore, the dataset includes information on the average price of the wine, alcohol percentage, average rating on the Vivino platform, and the year of production. The histograms show the distribution of these attributes. ## 5 Experiments We conduct two experiments on the WineSensed dataset. First, we explore how well recent large pretrained language and image models explain wine attributes that correlate with the flavor of a wine. Second, we explore multimodal models' capabilities to represent more intricate flavors. **Experimental setup.** We explore several configurations of human kernels, machine kernels, and "combiners" that align the two representations. Fig. 6 provides an overview of our baselines. The **human kernel** is formed with t-STE (Van Der Maaten and Weinberger, 2012) or with a low-dimensional representation reduced with t-SNE or NMDS; the notable difference is that t-STE discards the flavor distances and solely optimizes for triplet orderings. The **machine kernel** consists of two steps: (1) we use a pretrained model to embed text and/or images into a low-dimensional space, (2) which is then compressed into a two-dimensional space.
For (1), we explore DistilBERT (Sanh et al., 2019), T5 (Raffel et al., 2020), ALBERT (Lan et al., 2019), BART (Lewis et al., 2019), PEGASUS (Zhang et al., 2020), FLAN-T5 (Chung et al., 2022), and CLIP for embedding text, and ViT (Dosovitskiy et al., 2020), ResNet (He et al., 2016), DeiT (Touvron et al., 2021), and CLIP for embedding images. For (2), we explore t-SNE, UMAP (McInnes et al., 2018), and PCA (Pearson, 1901). For the **combiners**, we experiment with CCA, Iterative Closest Point (ICP) (Chen and Medioni, 1992), Procrustes (Gower, 1975), and SNaCK. For a more detailed description of the implementation and software packages used, please refer to Appendix E. ### Coarse flavor predictions We first explore how well pretrained language and vision models explain wine attributes that correlate with flavor. We then investigate whether using FEAST to align the machine and human kernels improves the representation. **Implementation details.** We use a balanced SVM classifier with an RBF kernel as well as a Multi-layer Perceptron (MLP) neural network to predict wine attributes from the flavor embeddings. We predict price, alcohol percentage, rating, region, country, and grape variety, as these attributes are known to correlate with the perceived wine flavor. We mitigate imbalanced class distributions with class weight balancing and oversampling of the minority classes. We report the accuracy averaged over the seven attributes computed through 5-fold cross-validation. The accuracy measures how coherent the embeddings are with the flavor attributes. A more detailed description of the implementation can be found in Appendix J.2. Figure 6: **Model overview.** FEAST takes text and/or images as input as well as human-annotated flavor similarities. The text and/or images are embedded into a latent representation with CLIP. We use NMDS to embed the flavor similarities. The two representations are aligned with CCA to produce a latent space that uses the structural information in CLIP embeddings and the intricacies of human annotations. The bolded methods in the orange, blue, and green boxes indicate choices for our best model, and their remaining combinations serve as an overview of the evaluated baselines. **Results.** Tables 1 to 3 ablate our proposed method and summarize our main conclusions. Please see Appendix J.2 for per-attribute classification accuracy for all combinations of machine kernels, human kernels, modalities, reducers, and combiners. Table 1 shows that most pretrained image and text models yield slightly higher performance than the random baseline. The text encoders are slightly better than the image encoders. BART and CLIP perform the best. All encoders in the table use t-SNE to reduce the embedding to 2D. Table 3 (middle) shows that t-SNE yields better accuracy than UMAP and PCA when using a CLIP encoder. Table 3 (top) shows that NMDS performs better than t-STE. NMDS uses the relative distances between annotations, whereas t-STE discretizes the annotations and considers only the ordering within each triplet. The results suggest that the pairwise distances are useful to model the flavor space. Table 3 (bottom) shows that using CCA to align the two representations yields higher accuracy than SNaCK or ICP. Table 2 shows that including flavor as a modality increases the accuracy, _e.g._, using flavor to align the image or text embeddings leads to higher accuracy. Using CLIP followed by t-SNE, NMDS, and CCA to combine language, vision, and flavor into a single representation leads to the best configuration, illustrating that the human annotations are useful for learning a flavor representation. Maybe most surprisingly, we show that each modality by itself is on par with the random baseline, but their combination produces a latent space that much better describes the flavor attributes.
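To make the best-performing recipe concrete, the sketch below mirrors it with off-the-shelf scikit-learn components: CLIP features reduced with t-SNE for the machine kernel, non-metric MDS (SMACOF) on the annotated distance matrix for the human kernel, CCA as the combiner, and a balanced RBF-SVM with 5-fold cross-validation for coarse attribute prediction. All array contents, shapes, and hyperparameters are illustrative placeholders rather than the authors' released implementation.

```python
import numpy as np
from sklearn.manifold import TSNE, MDS
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Assumed inputs: precomputed CLIP features for n wines and the averaged
# human-annotated flavor distance matrix over the same wines.
n = 108
clip_features = np.random.randn(n, 512)           # placeholder for CLIP image/text embeddings
flavor_distances = np.abs(np.random.randn(n, n))  # placeholder for the Napping distance matrix
flavor_distances = (flavor_distances + flavor_distances.T) / 2
np.fill_diagonal(flavor_distances, 0.0)

# Machine kernel: reduce CLIP embeddings to 2D with t-SNE.
machine_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(clip_features)

# Human kernel: non-metric MDS (SMACOF) on the precomputed pairwise flavor distances.
human_2d = MDS(n_components=2, metric=False, dissimilarity="precomputed",
               random_state=0).fit_transform(flavor_distances)

# Combiner: align the two 2D spaces with CCA and concatenate the aligned projections.
cca = CCA(n_components=2).fit(machine_2d, human_2d)
m_proj, h_proj = cca.transform(machine_2d, human_2d)
flavor_space = np.hstack([m_proj, h_proj])

# Coarse flavor prediction: balanced RBF-SVM evaluated with 5-fold cross-validation.
labels = np.arange(n) % 5                          # placeholder attribute labels (e.g., country index)
acc = cross_val_score(SVC(kernel="rbf", class_weight="balanced"), flavor_space, labels, cv=5)
print("mean accuracy:", acc.mean())
```

For unobserved vintages, only the machine-kernel side is available, so new points would be projected with the fitted CCA on their CLIP-derived coordinates alone.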
### Fine-grained flavor predictions We now proceed to evaluate more intricate flavor predictions by using the human-annotated flavor similarities as ground truth. **Implementation details.** To evaluate our representation, we measure the Triplet Agreement Ratio (TAR) (Van Der Maaten and Weinberger, 2012) between our predicted flavor embeddings and the human-annotated flavors. TAR measures the agreement between triplets derived from the latent space and the ground-truth triplets from the flavor annotations. A higher TAR means that the ordering of distances in the latent space corresponds to the human perception of flavor. This measure indicates how aligned the two representations are, and it provides a higher granularity of flavor prediction than the flavor attributes. A more detailed description of the implementation can be found in Appendix F. \begin{table} \begin{tabular}{l l|c c} \hline \hline \multirow{2}{*}{Machine kernel} & \multirow{2}{*}{Modality} & \multicolumn{2}{c}{Acc \(\uparrow\)} \\ & & SVM & NN \\ \hline Random & & 0.11 & 0.11 \\ ViT & Image & 0.09 & 0.13 \\ DeiT & Image & 0.14 & 0.15 \\ ResNet & Image & 0.15 & 0.16 \\ CLIP & Image & 0.11 & 0.15 \\ T5 & Text & 0.15 & 0.16 \\ **ALBERT** & **Text** & 0.15 & **0.18** \\ **BART** & **Text** & **0.16** & 0.15 \\ DistilBERT & Text & 0.15 & 0.17 \\ **CLIP** & **Text** & **0.16** & **0.18** \\ FLAN-T5 & Text & 0.15 & 0.17 \\ PEGASUS & Text & 0.13 & 0.13 \\ BART & Text & 0.11 & 0.15 \\ \hline \hline \end{tabular} \end{table} Table 1: **Ablation of Machine kernels.** Accuracy of machine kernels across image and text modalities. Image models perform worse than text models. ALBERT, BART and CLIP perform the best; all models perform better than random using at least one classification method. \begin{table} \begin{tabular}{l|c c} \hline \hline \multirow{2}{*}{Modality} & \multicolumn{2}{c}{Acc \(\uparrow\)} \\ & SVM & NN \\ \hline Flavor & 0.16 & 0.11 \\ Image & 0.11 & 0.15 \\ Text & 0.16 & 0.18 \\ Text+Flavor & 0.23 & 0.18 \\ Image+Text & 0.22 & 0.25 \\ Image+Flavor & 0.23 & 0.18 \\ **Image+Text+Flavor** & **0.28** & **0.26** \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablation of modalities.** Including flavor annotations alongside image and text embeddings increases classification accuracy.
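The TAR metric defined above can be computed directly from the two representations. The snippet below is a minimal sketch (with illustrative inputs, not the released evaluation code) that checks, for each ground-truth triplet (i, j, k) meaning "i is closer to j than to k" under the human annotations, whether the embedding preserves that ordering.

```python
import numpy as np

def triplet_agreement_ratio(embedding, triplets):
    """TAR: fraction of ground-truth triplets (i, j, k), meaning d(i, j) < d(i, k)
    in the human annotations, whose ordering is preserved by the embedding."""
    agree = 0
    for i, j, k in triplets:
        d_ij = np.linalg.norm(embedding[i] - embedding[j])
        d_ik = np.linalg.norm(embedding[i] - embedding[k])
        agree += d_ij < d_ik
    return agree / len(triplets)

def triplets_from_distances(dist):
    """Derive all ordered triplets from a human-annotated distance matrix."""
    n = dist.shape[0]
    return [(i, j, k) for i in range(n) for j in range(n) for k in range(n)
            if i != j and i != k and j != k and dist[i, j] < dist[i, k]]

# Hypothetical usage with an aligned flavor space and a Napping distance matrix:
# tar = triplet_agreement_ratio(flavor_space, triplets_from_distances(flavor_distances))
# print(f"TAR = {tar:.2f}")  # 0.5 corresponds to the random baseline
```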
\begin{table} \begin{tabular}{l|c c} \hline \hline \multirow{2}{*}{Human kernel} & \multicolumn{2}{c}{Acc \(\uparrow\)} \\ & SVM & NN \\ \hline Random & 0.11 & 0.11 \\ t-STE & 0.13 & 0.10 \\ t-SNE & 0.15 & 0.13 \\ **NMDS** & **0.16** & **0.13** \\ \hline \hline \end{tabular} \begin{tabular}{l|c c} \hline \hline \multirow{2}{*}{Reducer} & \multicolumn{2}{c}{Acc \(\uparrow\)} \\ & SVM & NN \\ \hline UMAP & 0.15 & 0.18 \\ PCA & 0.20 & 0.21 \\ **t-SNE** & **0.22** & **0.25** \\ \hline \hline \end{tabular} \begin{tabular}{l|c c} \hline \hline \multirow{2}{*}{Combiner} & \multicolumn{2}{c}{Acc \(\uparrow\)} \\ & SVM & NN \\ \hline ICP & 0.21 & 0.24 \\ Procrustes & 0.19 & 0.23 \\ SNaCK & 0.23 & 0.24 \\ **CCA** & **0.28** & **0.26** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation of human kernels, reducers, and combiners.** **Results.** Table 4 ablates FEAST and shows that for the higher granularity predictions both the pretrained text and image encoders improve upon the random baseline. We show that including the human kernel with NMDS further improves the TAR scores. This highlights the usefulness of the flavor distances recorded by the human annotators. In Appendix F, we show results from all configurations of human kernels, machine kernels, reducers, and combiners. We find that NMDS consistently yields better performance than t-STE, and that combining human and machine kernels improves the TAR scores across multiple model configurations. ## 6 Discussion & Conclusion In this paper, we introduce WineSensed, an extensive multimodal dataset curated for flavor modeling. The dataset comprises over 897k images and 824k reviews, and has over 5k human-annotated pairwise flavor similarities, obtained via a sensory study involving 256 participants. We propose a simple algorithm, FEAST, to align semantic information from machine kernels with flavor similarities from human annotators in a shared flavor representation. We find that combining these modalities improves both coarse and fine-grained flavor predictions. WineSensed further strengthens the collaboration between the food science and machine learning communities, introduces flavor as a modality in multimodal models, and serves as an entry point for the development of machine learning models for flavor analysis and potentially deepening our comprehension of wine flavors. The dataset and the proposed procedures open many interesting possibilities, such as using flavor to ground foundation models or extending the dataset with other modalities, such as chemical composition, or other food categories. **Constraints and considerations.** The dataset serves as a novel first step to including human-annotated flavor in the array of modalities in multimodal models. Its current scope is constrained to a selected group of red wines, predominantly Italian ones. While this enables a more nuanced understanding of flavors within Italian wines, it may not represent the broader spectrum of red wines globally. Furthermore, the dataset's emphasis on wines prevalent in Western cultures highlights a geo-cultural bias.
Expanding the dataset to encompass more diverse drink types from different cultures could provide a more comprehensive understanding of global flavor perception. Lastly, the Napping methodology is not immune to the influences of participants' backgrounds and experiences. Individual perceptions, shaped by personal histories, can introduce nuances in the data. Though leveraging non-expert wine drinkers for flavor annotations introduces subjectivity, this approach, inspired by common sensory study practices, broadens taste perspectives, enhances study accessibility, and offers commercial value, with multiple annotations per entry mitigating individual biases. Exploring a broader range of foods and beverages remains a valuable direction for future work. **Acknowlegements.** This work was supported by the Pioneer Centre for AI, DNRF grant number P1, and by research grant (42062) from VILLUM FONDEN. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 757360), as well as the Danish Data Science Academy (DDSA). \begin{table} \begin{tabular}{l l|l l} \hline \hline Machine Kernel & Human Kernel & Combiner & Modality & TAR \(\uparrow\) \\ \hline Random & & & 0.5 \\ CLIP + t-SNE & & Text & 0.82 \\ CLIP + t-SNE & & Image & 0.82 \\ CLIP + t-SNE & & Image + Text & 0.81 \\ CLIP + t-SNE & & Image + Flavor & 0.89 \\ CLIP + t-SNE & & Text + Flavor & 0.88 \\ CLIP + t-SNE & NMDS & CCA & Image + Text + Flavor & **0.91** \\ \hline \hline \end{tabular} \end{table} Table 4: **Fine-grained flavor predictions. Triplet Agreement Ratio (TAR) between text, image, and multi-modal encoders and human annotated flavor similarities. A higher TAR indicates that the model’s representation space is more aligned with humans’ perception of flavor.**
WineSensed is a large multimodal wine dataset for studying the relations between visual perception, language, and flavor. The dataset contains 897,000 images of wine labels and 824,000 wine reviews collected from the Vivino platform, with over 350,000 unique bottlings annotated with year, region, rating, alcohol percentage, price, and grape composition. We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment in which 256 participants were asked to rank wines based on their similarity in flavor, yielding more than 5,000 pairwise flavor distances. We propose a low-dimensional concept embedding algorithm that combines human experience with automatic machine similarity kernels. This shared concept embedding space
2309.12867
Accurate and Fast Compressed Video Captioning
Existing video captioning approaches typically require to first sample video frames from a decoded video and then conduct a subsequent process (e.g., feature extraction and/or captioning model learning). In this pipeline, manual frame sampling may ignore key information in videos and thus degrade performance. Additionally, redundant information in the sampled frames may result in low efficiency in the inference of video captioning. Addressing this, we study video captioning from a different perspective in compressed domain, which brings multi-fold advantages over the existing pipeline: 1) Compared to raw images from the decoded video, the compressed video, consisting of I-frames, motion vectors and residuals, is highly distinguishable, which allows us to leverage the entire video for learning without manual sampling through a specialized model design; 2) The captioning model is more efficient in inference as smaller and less redundant information is processed. We propose a simple yet effective end-to-end transformer in the compressed domain for video captioning that enables learning from the compressed video for captioning. We show that even with a simple design, our method can achieve state-of-the-art performance on different benchmarks while running almost 2x faster than existing approaches. Code is available at https://github.com/acherstyx/CoCap.
Yaojie Shen, Xin Gu, Kai Xu, Heng Fan, Longyin Wen, Libo Zhang
2023-09-22T13:43:22
http://arxiv.org/abs/2309.12867v2
# Accurate and Fast Compressed Video Captioning ###### Abstract Existing video captioning approaches typically require to first sample video frames from a decoded video and then conduct a subsequent process (e.g., feature extraction and/or captioning model learning). In this pipeline, manual frame sampling may ignore key information in videos and thus degrade performance. Additionally, redundant information in the sampled frames may result in low efficiency in the inference of video captioning. Addressing this, we study video captioning from a different perspective in compressed domain, which brings multi-fold advantages over the existing pipeline: 1) Compared to raw images from the decoded video, the compressed video, consisting of I-frames, motion vectors and residuals, is highly distinguishable, which allows us to leverage the entire video for learning without manual sampling through a specialized model design; 2) The captioning model is more efficient in inference as smaller and less redundant information is processed. We propose a simple yet effective end-to-end transformer in the compressed domain for video captioning that enables learning from the compressed video for captioning. We show that even with a simple design, our method can achieve state-of-the-art performance on different benchmarks while running almost \(2\times\) faster than existing approaches. Code is available at [https://github.com/acherstyx/CoCap](https://github.com/acherstyx/CoCap). ## 1 Introduction Video captioning is a representative example of applying deep learning to the fields of computer vision and natural language processing with a long list of applications, such as blind navigation, video event commentary, and human-computer interaction. To generate captions for a video, the model needs to not only identify objects and actions in the video, but also be able to express them accurately in natural language. Despite significant progress, accurate and fast video captioning remains a challenge. Video captioning requires both 2D appearance information, which reflects the objects in the video, and 3D action information, which reflects the actions. The interaction between these two types of information is crucial for accurately captioning the actions of objects in the video. Most of the existing methods [36, 38, 22] are shown in Fig. 1 (the upper branch), mainly including the three-steps: (1) Decoding the video and densely sampling frames. (2) Extracting the 2D/3D features of the video frames offline. (3) Training the model based on these 2D/3D features. In these methods, densely sampled video frames result in significant redundancy, which in turn increases the computation and inference time of the model. This is because the model needs to extract features from each video frame and use all Figure 1: Comparing our method with prior methods for video captioning. Prior works are all based on decoding video frames. The difference between them is that some methods use offline extracted multiple features as input and generate captions, while others directly take dense video frames as input. By avoiding heavy redundant information and offline multiple feature extraction, our method speedup the caption generation process while maintaining high quality results. of these features as input. Furthermore, extracting 2D appearance features, 3D action features, and region features for each video frame requires additional time. 
To address the speed issue and improve inference speed, some recent works [18, 29] have adopted an end-to-end approach that avoids extracting multiple visual features offline. As shown in Fig. 1 (the middle branch), the flow of their method is as follows: (1) Decoding the video and densely sampling frames. (2) Taking the video frames directly as input and training the model end-to-end. These approaches involve a trainable visual feature extractor, rather than relying on multiple offline 2D/3D feature extractors. For example, SwinBERT [18] uses VidSwin [19] as the trainable feature extractor, while MV-GPT [29] uses ViViT [1]. While these two-step methods address the time consumption associated with offline feature extraction, they do not alleviate the computational burden and time required to handle the redundancy of information. To address the above problems, we propose an end-to-end video captioning method based on compressed video. Our work significantly simplifies the video caption pipeline by eliminating time-consuming video decoding and feature extraction steps. As in Fig. 1 (the lower branch), unlike previous methods, we take compressed video information as input and directly output a natural language description of the video. Compressed video is mainly composed of I-frames, motion vectors and residuals; there is no redundant information between them, and they are all refined information. Therefore, the model needs less computation to process compressed-domain information, and model inference is faster. At the same time, the end-to-end network structure in our proposed method can also avoid the time consumption caused by extracting multiple features. Besides, our model is better at understanding the content of videos by utilizing the refined information in the compressed domain, including the 2D features from I-frames and the 3D action features extracted from motion vectors and residuals. As shown in Fig. 2, compared with other two-step and three-step methods, such as SwinBERT [18], HMN [36] and SGN [27], our method is not only faster, but also has competitive performance. Our model comprises two parts, as depicted in Fig. 4. One part consists of three encoders that extract features and an action encoder that fuses them, while the other part comprises a multimodal decoder that generates video captions. Specifically, we first extract the context feature, motion vector feature and residual feature of the compressed video through the I-frame Encoder, Motion Encoder, and Residual Encoder, respectively. The context feature contains information about objects in the video, but action information is missing. In order to extract the action feature of the video, we fuse the motion vector feature, residual feature, and context feature through the action encoder. The context feature and action feature are then used as the visual input of the multimodal decoder to generate video captions. The contributions of this paper are summarized below: 1. We propose a simple and effective transformer that can take compressed video as input and directly generate a video description. 2. Our experimental results demonstrate that our method is nearly 2\(\times\) faster in inference than the fastest existing state-of-the-art method, while maintaining competitive results on three challenging video captioning datasets, e.g., MSVD, MSRVTT and VATEX. ## 2 Related Work **Compressed vision task**.
The main idea of introducing compressed video into current computer vision tasks is to utilize the motion vectors and residuals in the compressed domain to avoid fully decoding all frames from the video, while saving storage space at the same time. Early work is mainly based on the MPEG-4 video codec [33, 16, 12, 4]. CoViAR [33] proposed a back-tracking technique to trace motion vectors back to the I-frame, which works on MPEG-4. MM-ViT [4] proposed a multi-modal transformer to process the I-frame, motion vector, residual and audio in the compressed video. Since the MPEG-4 codec is outdated, other works, e.g., MVCGC [13] and ATTP [14], are designed to work on other codecs like H.264 and H.265 to ensure generalizability. Compared with MPEG-4, H.264 and H.265 allow a more flexible yet complicated compression, which makes it more challenging to learn from the compressed domain. MVCGC [13] proposed a self-supervised method to learn video representations by utilizing the mutual information between RGB video frames and motion vectors. ATTP [14] designed a lightweight deep neural network to process the compressed video and achieve real-time action recognition on embedded AI devices. Similarly, our work is conducted on the H.264 video codec, which is currently one of the most popular video codecs. Figure 2: Comparison of model inference speed and CIDEr score on the MSRVTT dataset. I, MV and Res refer to I-frame, motion vector and residual respectively. The test is run on a single V100 GPU with the batch size set to 1. **Video captioning.** Video captioning aims to convert the content of videos into natural language descriptions, which requires the model to understand the objects in the video and the behavior of those objects. Some works focus on the design of the model structure. These methods usually extract features offline, and the models then use these features to generate captions through different network architectures. HMN [36] proposed a hierarchical modular network that serves as a strong video encoder, which bridges videos and languages. ORG-TRL [38] proposes an object relational graph based encoder, which captures more detailed interaction features to enrich visual representation. SGN [27] designed a semantic grouping network to group video frames with discriminating word phrases of the partially decoded caption. Some works explore additional information to help the model generate more accurate video captions. TextKG [9] proposes a two-stream network capable of knowledge-assisted video description using knowledge graphs. UniVL [20] learns powerful vision-and-language representations by pre-training the models on large-scale datasets, _e.g._, HowTo100M [21] and WebVid-2M [2]. Some other works focus more on end-to-end video caption generation. SwinBERT [18] proposed an end-to-end transformer-based model, which takes video frame patches directly as inputs and then uses VidSwin to extract visual features. MV-GPT [29] designed an end-to-end encoder-decoder model to generate the video caption directly from video frames and transcribed speech. We propose an end-to-end video captioning model based on the compressed domain without decoding video frames or extracting features offline, which not only accelerates the generation of captions, but also performs favorably against the state-of-the-art methods. ## 3 Methods As mentioned above, our method aims to take the dense information (including I-frames, motion vectors and residuals) in the compressed domain as input to accelerate inference and improve performance for video captioning.
To this end, we design an end-to-end transformer-based network as shown in Fig. 4. In this section, we first detail the information in the compressed video in Sec. 3.1, then introduce the model network in Sec. 3.2 and 3.3, and finally introduce the training strategy of the model in Sec. 3.4. ### The Structure of Compressed Video Modern video codecs utilizing the temporal redundancy of successive video frames to compress raw video. As shown in Fig. 3, most modern codecs (_e.g_., H.264, and H.265) divide video frames into three different types according to their dependencies with other frames: I-frame (intra coded frame), P-frame (predictive coded frame) and B-frame (bipredictive coded frame). I-frame is fully encoded independently using intra-prediction without relying on other frames. Other frames like B-frame and P-frame are encoded by referring to the other frames using inter-prediction, which is stored in the form of motion vector. Motion vector describes the movement of a group of pixels from source (reference frames) to destination (current B-frame or P-frame), which contains highly compressed motion information of successive video frames. The difference between P-frame and B-frame is that B-frame could refer to the frames before or after it, while P-frame only refer to the frames before it. Since predicting a frame using neighboring frames could be inaccurate, an additional residual error between the current frame and the prediction is calculated. We denote \(\mathcal{I}_{I}\), \(\mathcal{I}_{P}\) and \(\mathcal{I}_{B}\) as decoded I-frame, P-frame, and B-frame, and \(\mathcal{I}_{mv}\) and \(\Delta_{res}\) as the motion vector and residual of P-/B-frame respectively. In compressed domain, the P-frame and B-frame could be reconstructed by \[\mathcal{I}_{B/P}=\mathrm{Pred}(\mathcal{I}_{mv},\mathcal{I}_{ref})+\Delta_{ res} \tag{1}\] where \(\mathcal{I}_{ref}\) is the referenced frame, and \(\mathrm{Pred}\) is the prediction method to reconstruct current frame based on motion vector and referenced frame. Since the reconstruction process is time consuming, our model takes highly compressed information from compressed domain directly as input to achieve end-to-end video captioning. Moreover, successive frames are divided into several groups, which is called Groups of Pictures (GOP). GOP is an independent encoding or decoding unit, which means that the frames in a GOP do not refer to any frames on other GOP. Each GOP starts with an I frame, followed by several P-frames or B-frames. For each GOP, we take one I-frame and \(M\) B-/P-frames as inputs. The B-/P-frames are uniformly sampled from each GOP, and we only use their motion vector and residual as replacements. Therefore, the visual inputs of our model would be \[X=[\mathcal{I}_{I}^{(1)},\mathcal{I}_{mv}^{(1,1)},\Delta_{res}^ {(1,1)},\ldots,\mathcal{I}_{mv}^{(M,1)},\Delta_{res}^{(M,1)}],\] \[\ldots,[\mathcal{I}_{I}^{(N)},\mathcal{I}_{mv}^{(1,N)},\Delta_{ res}^{(1,N)},\ldots,\mathcal{I}_{mv}^{(M,N)},\Delta_{res}^{(M,N)}]\] where \(N\) is the number of GOP sampled from each video and \(M\) is the total number of P-/B-frames sampled from each GOP. We set N according to the average GOP number, Figure 3: The GOP structure in compressed video. In each GOP, the first frame must be an I-frame, followed by several B/P-frames. and M is equal to the maximum number of P-/B-frames in each GOP, which is a hyper-parameter during encoding. 
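To make the sampling scheme above concrete, the sketch below assembles the visual inputs from pre-parsed GOPs. The data structure (a list of GOP dictionaries holding an I-frame plus per-frame motion vectors and residuals) and the tensor sizes are illustrative stand-ins for whatever the compressed-domain reader actually returns, and every GOP is assumed to contain at least one B/P-frame.

```python
import numpy as np

def sample_inputs(gops, n_gops=8, m_frames=59):
    """Uniformly sample N GOPs per video and M B/P-frames per GOP.

    Each GOP is assumed to be a dict with keys:
      "iframe": (3, 224, 224) decoded I-frame
      "mv":     list of (4, 56, 56) motion vectors for the B/P-frames
      "res":    list of (3, 224, 224) residuals for the B/P-frames
    """
    gop_idx = np.linspace(0, len(gops) - 1, num=n_gops).astype(int)
    iframes, mvs, residuals = [], [], []
    for g in (gops[i] for i in gop_idx):
        frame_idx = np.linspace(0, len(g["mv"]) - 1, num=m_frames).astype(int)
        iframes.append(g["iframe"])
        mvs.append(np.stack([g["mv"][j] for j in frame_idx]))
        residuals.append(np.stack([g["res"][j] for j in frame_idx]))
    # Shapes: (N, 3, 224, 224), (N, M, 4, 56, 56), (N, M, 3, 224, 224)
    return np.stack(iframes), np.stack(mvs), np.stack(residuals)
```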
### Model Architecture for Compressed Domain Based on the GOP structure mentioned above, we proposed a transformer based structure to utilizing the dense information from the compressed domain. Fig. 4 (left) shows the main framework of our proposed compressed video transformer. The model takes all information of the compressed video as inputs, including I-frame, motion vector and residual, while maintaining a fast inference speed. Specifically, we use three different Vision Transformers [8] (ViT) as encoder to extract the visual features for I-frame, motion vector and residual. We adopt a pretrained Vision Transformer as the encoder to extract the context feature from the I-frame: \[\mathcal{F}^{(n)}_{\rm ctx}=\mathrm{Encoder_{I}}(\mathcal{I}^{(n)}_{I}).\] For each B-frame or P-frame, we get a motion vector and a residual from the compressed domain. We use two lightweight Vision Transformers as encoders to extract features from motion vectors and residuals. The motion and residual features is added together to generate the B-/P-frame features \(\mathcal{F}^{(m,n)}_{\rm BP}\): \[\mathcal{F}^{(m,n)}_{\rm BP}=\mathrm{Encoder_{mv}}(\mathcal{I}^{(m,n)}_{mv})+ \mathrm{Encoder_{res}}(\Delta^{(m,n)}_{res}).\] In this way, for each GOP we obtain \(M\) B-/P-frame features \[\mathcal{F}^{(n)}_{\rm BP}=[\mathcal{F}^{(1,n)}_{\rm BP},\ldots,\mathcal{F}^ {(M,n)}_{\rm BP}].\] As motion vector and residual lack fine-grained context information, we use features from motion vector and residual as queries to retrieve the rich context information in RGB frames instead of simply fusing them. We employ action encoder to integrate the object information of I-frame into the action information of motion vector and residual, which takes B-/P-frame features in current GOP \(\mathcal{F}^{(n)}_{\rm BP}\) and the context feature \(\mathcal{F}^{(n)}_{\rm ctx}\) as input to generate the action feature \(\mathcal{F}^{(n)}_{\rm act}\) of current GOP. The action encoder is constructed by \(N_{a}\) sets of alternately stacked self-attention and cross-attention blocks. Specifically, the workflow of the action encoder is as follows. Firstly, according to the reconstruction process described in Eq. 1, we utilize the self-attention module fuse the temporal representation of successive frames to obtain \(\mathcal{F}^{(n)}_{\rm att}\): \[X=\mathcal{F}^{(n)}_{\rm BP}+\mathrm{Emb_{p}}+\mathrm{Emb_{t}},\] \[Q=W_{q}*X,K=W_{k}*X,V=W_{v}*X,\] \[\mathcal{F}^{(n)}_{\rm att}=\mathrm{SelfAttention}(Q,K,V),\] where \(\mathrm{Emb_{p}}\) is the positional embeddings, \(\mathrm{Emb_{t}}\) is the type embeddings, and \(W_{q},W_{k},W_{v}\) are learnable matrices. The type embeddings are added to distinguish B-frames and P-frames. And then we use the cross-attention to integrate the \(\mathcal{F}^{(n)}_{\rm ctx}\) from I-frame into the \(\mathcal{F}^{(n)}_{\rm att}\) from the motion vector and residual. Finally, the action feature \(\mathcal{F}^{(n)}_{\rm act}\) \[Q^{\prime}=W^{\prime}_{q}*\mathcal{F}^{(n)}_{\rm att},K^{\prime} =W^{\prime}_{k}*\mathcal{F}^{(n)}_{\rm ctx},V^{\prime}=W^{\prime}_{v}* \mathcal{F}^{(n)}_{\rm ctx},\] \[\mathcal{F}^{(n)^{\prime}}_{\rm att}=\mathrm{CrossAttention}(Q^{ \prime},K^{\prime},V^{\prime}),\] \[\mathcal{F}^{(n)}_{\rm act}=\mathrm{Mean}(\mathcal{F}^{(n)^{ \prime}}_{\rm att}),\] where \(W^{\prime}_{q},W^{\prime}_{k},W^{\prime}_{v}\) are learnable matrices and \(\mathrm{Mean}()\) is a function that calculates the average feature. 
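A minimal PyTorch rendering of the action encoder described above might look as follows. Dimensions, normalization, residual connections, and layer counts are simplified relative to the paper's implementation, and the three ViT encoders that produce the B/P-frame and context features are assumed to exist elsewhere.

```python
import torch
import torch.nn as nn

class ActionEncoderBlock(nn.Module):
    """One self-attention + cross-attention block of the action encoder."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, bp_feats, ctx_feats):
        # Self-attention fuses the temporal representation of the B-/P-frame features.
        x = self.norm1(bp_feats + self.self_attn(bp_feats, bp_feats, bp_feats)[0])
        # Cross-attention retrieves context from the I-frame features (queries come from B/P).
        x = self.norm2(x + self.cross_attn(x, ctx_feats, ctx_feats)[0])
        return x

class ActionEncoder(nn.Module):
    def __init__(self, n_layers=1, dim=768, heads=12, max_bp=60):
        super().__init__()
        self.pos_emb = nn.Parameter(torch.zeros(1, max_bp, dim))  # positional embeddings
        self.type_emb = nn.Embedding(2, dim)                      # distinguishes B- vs P-frames
        self.blocks = nn.ModuleList([ActionEncoderBlock(dim, heads) for _ in range(n_layers)])

    def forward(self, bp_feats, ctx_feats, frame_types):
        # bp_feats: (B, M, dim) summed motion+residual features; ctx_feats: (B, T, dim) I-frame tokens.
        x = bp_feats + self.pos_emb[:, : bp_feats.size(1)] + self.type_emb(frame_types)
        for blk in self.blocks:
            x = blk(x, ctx_feats)
        return x.mean(dim=1)  # average over B/P positions -> action feature of the GOP

# Example with hypothetical shapes: 2 GOPs, 59 B/P-frames, 197 I-frame tokens, 768-dim features.
enc = ActionEncoder()
act = enc(torch.randn(2, 59, 768), torch.randn(2, 197, 768), torch.zeros(2, 59, dtype=torch.long))
print(act.shape)  # torch.Size([2, 768])
```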
### Multimodal Decoder for Video Captioning The context features \(\mathcal{F}^{(n)}_{\rm ctx}\) and action features \(\mathcal{F}^{(n)}_{\rm act}\) for each GOP are contacted to form the visual representation: \[\mathcal{V}=[\mathcal{F}^{(1)}_{\rm ctx},\mathcal{F}^{(1)}_{\rm act},\ldots, \mathcal{F}^{(N)}_{\rm ctx},\mathcal{F}^{(N)}_{\rm act}].\] Then we design a multimodal decoder to predict the video captions based on the visual representation \(\mathcal{V}\). The multimodal decoder is composed of \(N_{m}\) masked self-attention modules stacked as shown in Fig. 4 (right) and the workflow is as follows: \[\mathcal{T}_{\rm<t}=\mathrm{Embedding}(Y_{\rm<t}),\] \[\mathcal{X}=\mathrm{Concat}(\mathcal{V},\mathcal{T}_{\rm<t}),\] \[\mathcal{X}^{\prime}=\mathcal{X}+\mathrm{Emb}^{\prime}_{\rm p}+ \mathrm{Emb}^{\prime}_{\rm t},\] \[Q^{\prime\prime}=W^{\prime\prime}_{q}*\mathcal{X}^{\prime},K^{ \prime\prime}=W^{\prime\prime}_{k}*\mathcal{X}^{\prime},V^{\prime\prime}=W^{ \prime\prime}_{v}*\mathcal{X}^{\prime},\] \[h_{\rm t}=\mathrm{MaskedSelfAttention}(Q^{\prime\prime},K^{ \prime\prime},V^{\prime\prime}),\] \[p(y_{\rm t}|\mathcal{V},\mathcal{T}_{\rm<t})=\mathrm{softmax}( \mathrm{Linear}(h_{\rm t})),\] where \(Y_{\rm<t}\) is the words generated in previous \(t-1\) steps, \(\mathrm{Embedding}()\) is a function that converts one-hot word vectors into word embeddings, \(\mathrm{Emb}^{\prime}_{\rm p}\) is the positional embeddings, \(\mathrm{Emb}^{\prime}_{\rm t}\) is used to distinguish different modality of inputs, \(W^{\prime\prime}_{q},W^{\prime\prime}_{k},W^{\prime\prime}_{v}\) are learnable matrices and \(y_{\rm t}\) is the prediction of current step. In the multimodal decoder, position embedding and type embedding is added to distinguish the order and type of features respectively. ### Optimization We train our model using the cross-entropy loss function. Given the ground-truth indices of previous (t-1) words and the visual representation \(\mathcal{V}\), we can get the predictions of the current t-th word \(y^{*}_{t}\). After that, the training loss is computed as \[L=-\sum_{t=1}^{l}\log p(y^{*}_{t}|y^{*}_{t-1},\mathcal{V}),\] where \(y_{1:T}^{*}\) is the ground truth sequence and \(l\) is the total length of predicted captions. Notably, we add the label smoothing to mitigate overconfidence in implementation. ## 4 Experiments ### Datasets **MSRVTT**[34] is a generic video captioning dataset that comprises \(10,000\) video clips, with each clip annotated with \(20\) captions. On average, each video clip lasts about \(15\) seconds. The standard split involves the use of \(6,513\) clips for training, \(497\) clips for validation, and \(2,990\) clips for testing. **MSVD**[3] contains \(1,970\) videos, with each video clip having \(40\) captions. The average duration of each video clip is around \(10\) seconds. We adopt the standard split, which involves using \(1,200\) videos for training, \(100\) videos for validation, and \(670\) videos for testing. **VATEX**[32] is a large-scale dataset which contains about \(41,250\) video clips. The duration of each video clip is between \(10\) seconds, and \(10\) English captions are manually annotated per clip. We use the official training set for training and evaluate the results using the public test set. ### Evaluation Metrics To evaluate the effectiveness of our approach, we use the standard metrics for video captioning: BLEU@4 (B4) [23], METEOR (M) [7], ROUGE (R) [17], and CIDEr (C) [31]. 
Each metric provides a unique perspective on the quality of the generated captions. BLEU@4 evaluates sentence fluency, METEOR assesses semantic accuracy, ROUGE measures word order, and CIDEr evaluates the degree to which the caption conveys key information. By considering these different metrics, we can comprehensively evaluate the performance of our model. ### Implementation Details Our model is implemented using PyTorch, and to read motion vectors and residuals from the compressed video, we utilize the x264 library in FFmpeg. Before training and testing, the videos are resized to 240 on its smallest edge and compressed using the H.264 codec with KeyInt set to 60. For each video, we fixedly sampled 8 GOPs, each of which contains 1 I-frame, 59 motion vectors, and 59 residuals. The size of the I-frame and residual is \(3*224*224\), and the size of the motion vector is \(4*56*56\). We use Adam with initial learning rate of 1e-4, \(\beta_{1}\)=0.9, \(\beta\)=0.999 and the warmup strategy is adopted in the training. The maximum length of the caption sentence is set to 22, which contains two special tokens, \(\text{\emph{e.g.}}\), \(\text{[CLS]}\) token and \(\text{[EOS]}\) token. The feature dimension in each block is set to 768, and the number of heads in multi-head architecture is set to 12 for all layers. The batch size is set to 64 and the training epochs to 20. The I-frame encoder has 12 layers and is Figure 4: The architecture of our proposed Compressed Video Captioner. _Left_: The Compressed Video Transformer which extract video representation for each GOP. A large visual backbone is used to extract visual representations from I-frame, and two small Vision Transformer is used to extract residual and motion representations from compressed domain. After that, an action encoder is used to fuse the features. _Right_: The Multimodal Decoder. We use a multimodal decoder with causal mask to learn caption. initialized with pre-trained weights from the CLIP [25] visual encoder, while the other encoders and the multimodal decoder are randomly initialized. The layers for the motion encoder, residual encoder and action encoder are 2, 2 and 1, respectively. Lastly, we set the hyperparameters \(M\), \(N\), \(N_{a}\), and \(N_{m}\) to 60, 8, 2 and 2. ### Performance Comparison with SOTA Methods In order to verify the effectiveness of the method, we evaluated the proposed model against state-of-the-art methods on three public benchmark datasets. **MSVD dataset.** The evaluation results on the MSVD dataset are reported in Table 1 (left). We conducted experiments using two sizes of the I-frame encoder, namely \(B/16\) and \(L/14\), with the results reported in the article based on \(B/16\), unless otherwise stated. Our method using the \(L/14\) I-frame encoder achieves the best performance on all metrics, with only SwinBERT [18] performing better than our method using \(B/16\). Our approach stands out by being able to directly utilize compressed domain information and extract visual features in real-time. The result shows that our model can efficiently extract information from the refined compressed domain information. **MSRVTT dataset.** In the MSRVTT benchmark, our method outperforms other approaches in all metrics, as shown in Table 1 (right). Specifically, both the based on \(B/16\) model and based on \(L/14\) model achieve higher scores compared to other methods. 
In particular, our method achieves a CIDEr score of \(56.2\) / \(57.2\), which represents a significant improvement of \(+2.4\) / \(+3.4\). This result demonstrates that our approach can generate captions with higher semantic accuracy than other methods based on video decoding [31]. CIDEr is particularly effective at capturing human consensus, which makes our achievement in this metric even more impressive. **VATEX dataset.** Our method is evaluated on a large-scale dataset, as shown in Table 2. We achieve the second-best results on all metrics, falling behind SwinBERT [18]. Our approach involves extracting visual features using three Vision Transformer encoders, while the I-frame encoder is initialized with the pre-trained CLIP [25] model on LAION-400M [28]. In contrast, SwinBERT uses the VidSwin backbone [19], which is pre-trained on the Kinetic-600 dataset [15]. It is worth noting that LAION-400M is a large image-text dataset, while Kinetics-600 is a video-text dataset, and VATEX dataset is a subset of Kinetics-600 videos. SwinBERT outperforms our method on VATEX due to its backbone pre-trained on Kinetics-600. \begin{table} \begin{tabular}{c|c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Decoding} & \multirow{2}{*}{E2E} & \multicolumn{4}{c|}{Features} & \multicolumn{4}{c|}{MSVD} & \multicolumn{4}{c}{MSRVTT} \\ & & & & 2D Appearance & 3D Action & Object Detection & B4 & M & R & C & B4 & M & R & C \\ \hline SAAT [39] & ✓ & - & IncepResnetV2 & C3D & - & 46.5 & 33.5 & 69.4 & 81.0 & 39.9 & 27.7 & 61.2 & 51 \\ STG-KD [22] & ✓ & - & ResNet101 & I3D & FasterRCNN & 52.2 & 36.9 & 73.9 & 93.0 & 40.5 & 28.3 & 60.9 & 47.1 \\ PMI-CAP [5] & ✓ & - & IncepResnetV2 & C3D & - & 54.6 & 36.4 & - & 95.1 & 42.1 & 28.7 & - & 49.4 \\ ORG-TRL [38] & ✓ & - & IncepResnetV2 & C3D & FasterRCNN & 54.3 & 36.4 & 73.9 & 95.2 & 43.6 & 28.8 & 62.1 & 50.9 \\ OpenBook [37] & ✓ & - & IncepResnetV2 & C3D & - & - & - & - & - & 42.8 & 29.3 & 61.7 & 52.9 \\ SGN [27] & ✓ & - & ResNet101 & C3D & - & 52.8 & 35.5 & 72.9 & 94.3 & 40.8 & 28.3 & 60.8 & 49.5 \\ MGRMP [6] & ✓ & - & IncepResnetV2 & C3D & - & 55.8 & 36.9 & 74.5 & 98.5 & 41.7 & 28.9 & 62.1 & 51.4 \\ HMN [36] & ✓ & - & IncepResnetV2 & C3D & FasterRCNN & 59.2 & 37.7 & 75.1 & 104 & 43.5 & 29 & 62.7 & 51.5 \\ UniVL [20] & ✓ & - & & S3D & - & - & - & - & 42.2 & 28.8 & 61.2 & 49.9 \\ \hline SwinBERT [18] & ✓ & ✓ & & VidSwin & 58.2 & 41.3 & 77.5 & 120.6 & 41.9 & 29.9 & 62.1 & 53.8 \\ MV-GPT [29] & ✓ & ✓ & & ViViT & - & - & - & - & 48.9 & 38.7 & 64 & 60 \\ \hline Ours & - & ✓ & & CLIP & 55.9 & 39.9 & 76.8 & 113.0 & 43.1 & 29.8 & 62.7 & 56.2 \\ Ours(ViT/L14) & - & ✓ & & CLIP & **60.1** & **41.4** & **78.2** & **121.5** & **44.4** & **30.3** & **63.4** & **57.2** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with state-of-the-art methods on the test split of MSVD and MSRVTT. Decoding means decoding video frames, and E2E means end-to-end training without offline feature extraction. For a fair comparison, we gray out models that pre-train on large-scale datasets. 
\begin{table} \begin{tabular}{c|c c c c} \hline \hline & B4 & M & R & C \\ \hline NITS-VC [30] & 20.0 & 18.0 & 42.0 & 24.0 \\ VATEX [32] & 28.4 & 21.7 & 47 & 45.1 \\ ORG-TRL [38] & 32.1 & 22.2 & 48.9 & 49.7 \\ Support-set [24] & 32.8 & 24.4 & 49.1 & 51.2 \\ SwinBERT [18] & **38.7** & **26.2** & **53.2** & **73** \\ VideoCoCa [35] & 39.7 & - & 54.5 & 77.8 \\ \hline Ours & 31.4 & 23.2 & 49.4 & 52.7 \\ Ours(ViT/L14) & 35.8 & 25.3 & 52.0 & 64.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with state-of-the-art methods on the test split of VATEX. For a fair comparison, we gray out models that pre-train on large-scale datasets. ### Speed Comparison with the SOTA Methods To evaluate the speed of our method, we compare it to three representative methods, namely SGN [27], HMN [36], and SwinBERT [18], as reported in Table 3. SGN is a three-step method that first decodes video frames and densely samples them, then extracts the 2D appearance and 3D action features offline based on ResNet101 [11] and C3D [10] (consuming \(303\) ms), and finally uses the visual features as the input of the captioning model (consuming \(275\) ms). Therefore, the total time for SGN to generate a video caption is \(578\) ms. HMN achieves the best results among the three-step models, but it is relatively slow as it requires offline region feature extraction based on Faster RCNN [26] (consuming \(2,520\) ms), leading to a total time of \(2,818\) ms. SwinBERT, on the other hand, is an end-to-end method that does not extract multiple features offline, requiring only \(339\) ms. Compared to these methods, our proposed method requires neither dense sampling of video frames nor the extraction of multiple features offline. As shown in Table 3, our baseline method only considers the I-frames of the entire video, achieving a CIDEr score of \(54.1\) and a total time of \(146\) ms. By integrating the motion vector, we improve the CIDEr to \(55.3\), demonstrating that the action information in the motion vector helps the model generate captions. Furthermore, by incorporating residual information, the CIDEr score is further improved by \(0.9\) to reach \(56.2\). Although considering three inputs increases our total inference time, our method is still nearly \(2\) times faster than SwinBERT, \(3\) times faster than SGN, and \(15\) times faster than HMN. ### Ablation Study **Impact of input information.** To evaluate the effectiveness of different input information in our method, we conducted several experiments on the MSRVTT dataset, as shown in Table 4. To investigate the roles of the I-frame, motion vector, and residual, we first experimented with using only one of them. As shown in Table 4, using only the I-frame, motion vector, or residual achieves CIDEr scores of \(54.1\), \(19.4\), and \(13.0\), respectively. This indicates that the model can rely directly on the I-frame, whereas the motion vector or residual alone is insufficient. By jointly using the I-frame and motion vector and fusing their information through the action encoder, we achieve a CIDEr score of \(55.3\). Similarly, using the I-frame and residual achieves a score of \(54.9\). This demonstrates that the motion vector and residual can help the model generate more accurate captions. The performance of the model is further improved by inputting all three types of information, achieving a CIDEr score of \(56.2\), an improvement of \(2.1\) over the I-frame-only baseline. Removing the action encoder from the proposed method results in a drop in CIDEr, from \(56.2\) to \(54.3\). 
This demonstrates that the action encoder can help the model integrate the object information of the I-frame into the action information of the motion vector and residual. **Impact of GOP numbers.** The GOP is a fundamental unit in compressed video that affects the compression rate. A larger GOP size results in fewer GOPs and usually a higher compression rate. In video encoders (_e.g._, x264 via FFmpeg), the GOP size is determined by the KeyInt parameter. To investigate the impact of the GOP size on our video captioning model, we experimented with different GOP numbers and KeyInt values, as shown in Table 5. Comparing KeyInt values of \(250\) and \(60\), we observe that a smaller GOP size leads to better model performance (\(49.5\) CIDEr vs \(52.4\) CIDEr). Sampling different GOP numbers under the same KeyInt, the best performance is achieved by setting the GOP number to \(8\) and KeyInt to \(60\). While performance improves with more GOPs, speed decreases because more information is included and the computation increases. \begin{table} \begin{tabular}{c c c|c|c c c c} \hline \multicolumn{3}{c|}{Input} & \multicolumn{1}{c|}{Module} & \multirow{2}{*}{B4} & \multirow{2}{*}{M} & \multirow{2}{*}{R} & \multirow{2}{*}{C} \\ \(\mathcal{I}_{I}\) & \(\mathcal{I}_{mv}\) & \(\Delta_{res}\) & En\_A & & & & \\ \hline ✓ & - & - & - & 41.6 & 29.7 & 62.3 & 54.1 \\ - & ✓ & - & - & 27.3 & 21.6 & 52.6 & 19.4 \\ - & - & ✓ & - & 23.9 & 20.5 & 51.0 & 13.0 \\ ✓ & ✓ & - & ✓ & 43.4 & 29.9 & 62.6 & 55.3 \\ ✓ & - & ✓ & ✓ & 42.2 & 30.0 & 62.5 & 54.9 \\ ✓ & ✓ & ✓ & - & 42.1 & **30.1** & 62.4 & 54.3 \\ ✓ & ✓ & ✓ & ✓ & **43.1** & 29.8 & **62.7** & **56.2** \\ \hline \end{tabular} \end{table} Table 4: Ablation study of different inputs on the test subset of MSRVTT. \(\mathcal{I}_{I}\), \(\mathcal{I}_{mv}\) and \(\Delta_{res}\) denote the decoded I-frame, motion vector and residual, respectively, and En\_A denotes the action encoder. \begin{table} \begin{tabular}{c|c|c c c|c} \multirow{2}{*}{Method} & \multirow{2}{*}{Data Type} & \multicolumn{3}{c|}{Inference Time \(\downarrow\)} & \multirow{2}{*}{CIDEr \(\uparrow\)} \\ \cline{3-5} & & Feature Extraction & Model Time & Total & \\ \hline SGN & RGB Video Frames & 303 ms & 275 ms & 578 ms & 49.5 \\ HMN & RGB Video Frames & 2,710 ms & 108 ms & 2,818 ms & 51.5 \\ SwinBERT & RGB Video Frames & - & 339 ms & 339 ms & 53.8 \\ \hline Ours & I-frame & - & 146 ms & **146 ms** & 54.1 \\ Ours & I-frame+MV & - & 153 ms & 153 ms & 55.3 \\ Ours & I-frame+MV+Res & - & 178 ms & 178 ms & **56.2** \\ \hline \end{tabular} \end{table} Table 3: A detailed comparison of speed with other methods on the test split of the MSRVTT dataset. During the test, the models run on an NVIDIA Tesla V100 GPU with the batch size set to 1. The time cost is computed on the overall MSRVTT test split. \begin{table} \begin{tabular}{c c|c|c c c c} \hline KeyInt (\(M\)) & GOP Nums (\(N\)) & Inference Time & B4 & M & R & C \\ \hline 250 & 2 & 153 ms & 39.6 & 28.7 & 60.8 & 49.5 \\ 60 & 2 & **131 ms** & 41.6 & 29.3 & 61.7 & 52.4 \\ 60 & 4 & 139 ms & 42.8 & **29.9** & 62.6 & 55.3 \\ 60 & 8 & 178 ms & **43.1** & 29.8 & **62.7** & **56.2** \\ 60 & 10 & 187 ms & 42.7 & 29.8 & 62.6 & 55.5 \\ \hline \end{tabular} \end{table} Table 5: Ablation study of GOP numbers on the MSRVTT test subset. **Impact of model layers.** To investigate the impact of different model layers on our proposed method, we conducted an ablation study on the MSRVTT test subset, as shown in Table 6. 
Given that the I-frame contains more complex information, we design a deep encoder with more layers for the I-frame, while using shallow encoders for the motion vector and residual. Our results show that the performance of the model improves as the number of layers in the I-frame encoder increases (from \(56.2\) CIDEr to \(57.2\) CIDEr). However, adding more layers to the other modules does not lead to further improvements in model performance. ### Qualitative Results As shown in Fig. 5, we present qualitative results of our proposed method on three datasets (_i.e._, MSVD, MSRVTT, and VATEX). Specifically, we visualize the input I-frame, motion vector, and residual and compare the predicted description to the ground truth. Our method consistently produces semantically consistent descriptions that closely align with the ground truth across all three datasets. Furthermore, the results demonstrate a superior ability to capture motion behavior in the videos. ## 5 Conclusion In this paper, we introduce an end-to-end transformer-based model for video captioning that takes compressed video as input to eliminate redundant information. We evaluate the proposed method on three challenging datasets and show that it is not only fast but also competitive in performance with the state of the art. In the future, we plan to further improve our method in two ways: (1) add additional modalities such as audio, text, and knowledge graphs to enhance the quality of the generated captions; (2) pre-train the model on a large-scale dataset to further boost the overall performance in the compressed domain. ## Acknowledgement Libo Zhang was supported by Youth Innovation Promotion Association, CAS (2020111). Heng Fan and his employer received no financial support for research, authorship, and/or publication of this article. This work was done during an internship at ByteDance Inc. \begin{table} \begin{tabular}{c c c c c|c} \hline En\_I & En\_M & En\_R & En\_A & De\_M & CIDEr \\ \hline 12 & 2 & 2 & 1 & 2 & 56.2 \\ 24 & 2 & 2 & 1 & 2 & 57.2 \\ 12 & 4 & 4 & 2 & 2 & 55.2 \\ 12 & 2 & 2 & 1 & 4 & 54.9 \\ 12 & 4 & 4 & 2 & 4 & 55.4 \\ \hline \end{tabular} \end{table} Table 6: Ablation study of module layers on the MSRVTT test subset. En\_I, En\_M, En\_R, En\_A and De\_M refer to the I-frame encoder, motion encoder, residual encoder, action encoder and multimodal decoder of the model, respectively. Figure 5: Qualitative results on the MSRVTT, MSVD and VATEX datasets. We show the input of our model, which is in the compressed domain. The red, green and blue borders indicate the I-frame, motion vector and residual, respectively.
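For illustration only, the following minimal sketch mirrors the per-video input layout stated in the implementation details (8 GOPs, each with 1 I-frame of \(3*224*224\), 59 motion vectors of \(4*56*56\) and 59 residuals of \(3*224*224\)) and the three-encoder split into 768-dimensional features. The `DummyEncoder` modules are placeholders standing in for the CLIP ViT and the two small ViTs; they are not the authors' implementation.

```python
import torch

# Per-video input layout, following the implementation details above.
N_GOP, N_MV = 8, 59                                 # 8 sampled GOPs, 59 motion vectors / residuals each
i_frames  = torch.randn(N_GOP, 3, 224, 224)         # decoded key frames
motion_v  = torch.randn(N_GOP, N_MV, 4, 56, 56)     # 4-channel motion maps (channel semantics assumed)
residuals = torch.randn(N_GOP, N_MV, 3, 224, 224)   # pixel-domain residuals

class DummyEncoder(torch.nn.Module):
    """Placeholder for a ViT encoder: pool spatial dims, then project to d_model=768."""
    def __init__(self, in_channels, d_model=768):
        super().__init__()
        self.proj = torch.nn.Linear(in_channels, d_model)
    def forward(self, x):                   # x: (..., C, H, W)
        return self.proj(x.mean(dim=(-1, -2)))

i_enc, mv_enc, res_enc = DummyEncoder(3), DummyEncoder(4), DummyEncoder(3)

i_tok   = i_enc(i_frames)                  # (8, 768)   one token per GOP from the I-frame
mv_tok  = mv_enc(motion_v)                 # (8, 59, 768)
res_tok = res_enc(residuals)               # (8, 59, 768)

# The action encoder then fuses the three streams per GOP before the multimodal decoder.
gop_repr = torch.cat([i_tok.unsqueeze(1), mv_tok, res_tok], dim=1)
print(gop_repr.shape)                      # torch.Size([8, 119, 768])
```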
Existing video captioning approaches typically require sampling video frames from a decoded video and then performing a subsequent process (e.g., feature extraction and/or caption model learning). In this pipeline, manual frame sampling may overlook key information in the video and thus degrade performance. Moreover, redundant information in the sampled frames may make video captioning inference inefficient. To address these issues, we investigate video captioning from the compressed domain, which brings several advantages over the existing pipeline: 1) compared with the decoded video, the compressed video, consisting of I-frames, motion vectors and residuals, is highly distinguishable, which, with a suitable model design, allows the entire video to be used for learning without manual sampling.
2301.13383
A Comparative Analysis of Different Pitch and Metrical Grid Encoding Methods in the Task of Sequential Music Generation
Pitch and meter are two fundamental music features for symbolic music generation tasks, where researchers usually choose different encoding methods depending on specific goals. However, the advantages and drawbacks of different encoding methods have not been frequently discussed. This paper presents an integrated analysis of the influence of two low-level features, pitch and meter, on the performance of a token-based sequential music generation model. First, the commonly used MIDI number encoding and a less used class-octave encoding are compared. Second, a dense intra-bar metric grid is imposed on the encoded sequence as auxiliary features. Different complexities and resolutions of the metric grid are compared. For complexity, the single-token approach and the multiple-token approach are compared; for grid resolution, 0 (ablation), 1 (bar-level), 4 (downbeat-level), 12 (8th-triplet-level) up to 64 (64th-note-grid-level) are compared; for duration resolution, 4, 8, 12 and 16 subdivisions per beat are compared. All encodings are tested on separately trained Transformer-XL models for a melody generation task. Regarding the distribution similarity of several objective evaluation metrics to the test dataset, the results suggest that the class-octave encoding significantly outperforms the taken-for-granted MIDI encoding on pitch-related metrics; finer grids and multiple-token grids improve the rhythmic quality, but also suffer from over-fitting at an early training stage. The results display a general phenomenon of over-fitting from two aspects, the pitch embedding space and the test loss of the single-token grid encoding. From a practical perspective, we both demonstrate the feasibility of using smaller networks and lower embedding dimensions for the generation task and raise concerns about how easily such models over-fit. The findings can also contribute to future models in terms of feature engineering.
Yuqiang Li, Shengchen Li, George Fazekas
2023-01-31T03:19:50
http://arxiv.org/abs/2301.13383v1
A Comparative Analysis of Different Pitch and Metrical Grid Encoding Methods in the Task of Sequential Music Generation ###### Abstract Pitch and meter are two fundamental music features for symbolic music generation tasks, where researchers usually choose different encoding methods depending on specific goals. However, the advantages and drawbacks of different encoding methods have not been frequently discussed. This paper presents an integrated analysis of the influence of two low-level features, pitch and meter, on the performance of a token-based sequential music generation model. First, the commonly used MIDI number encoding and a less used pitch class-octave encoding are compared. Second, a dense intra-bar metric grid is imposed on the encoded sequence as auxiliary features. Different complexity and resolution settings of the metric grid are compared. For complexity, the single-token approach and the multiple-token approach are compared; for grid resolution, 0 (ablation), 1 (bar-level), 4 (downbeat-level), 12 (8th-triplet-level) up to 64 (64th-note-grid-level) are compared; for duration resolution, 4, 8, 12 and 16 subdivisions per beat are compared. All encodings are tested on separately trained Transformer-XL models for a melody generation task. From the perspective of the distribution distance of several objective evaluation metrics to the test dataset, the results suggest that the class-octave encoding significantly outperforms the taken-for-granted MIDI encoding on pitch-related metrics; higher grid resolutions and multiple-token grids also significantly increase the rhythmic quality of the generated music, but suffer from over-fitting at the early stage of training. The results also display a general phenomenon of over-fitting from two aspects, the pitch embedding space and the test loss of the single-token grid encoding. From a practical perspective, we both demonstrate the feasibility of using smaller networks and lower embedding dimensions for the generation task and raise concerns about how easily such models over-fit. The findings can also contribute to future models in terms of feature engineering. Keywords: Music Representation, Music Generation, Pitch, Rhythm, Feature Engineering ## 1 Introduction Symbolic music representation sits, to some extent, between the standard music notation system and the actual sound of performed music. In computer audition tasks, it has the advantage of abstraction and conciseness compared to the detailed waveforms used in the acoustic domain. The abstraction in symbolic representations allows the annotation of higher-level musical features, including form, expressiveness, and articulation, besides the fundamental pitch, harmonic, metric and rhythmic elements [1]. Specifically, when used in machine learning, additional sparsity and conciseness of the symbolic representation may be required depending on the model's representation capacity and its specific computational constraints. Hence, instead of directly using score-level representations, such as MusicXML and ABC notation, researchers have proposed a wide range of methods to select minimal features and encode them in a new representation that enables a specific model to learn and generalize well. Low-level features, including pitch, duration and velocity, can be effortlessly obtained from both MIDI event sequences and score notations. 
These features are commonly encoded into matrices [2], word sequences [3], vectors with certain geometric constraints [4], graphs [5, 6] or other forms before being processed by a model. Despite the musical information carried by low-level features, it is still challenging for the latest symbolic algorithmic composition models to directly learn from such features and generate music to a satisfying extent. For instance, most models have been struggling with modeling the long-term temporal dependency of music [7], although this problem can be alleviated by providing the model with explicit structural information [8, 9, 10]. Also, generation systems suffer from the lack of semantic representations that can be interpreted musicologically. Recent works have displayed a trend of introducing more prior knowledge into a machine learning model by applying basic music theory. From a feature engineering perspective, we categorize these works into three types: feature aggregation, feature selection and feature encoding. Feature aggregation seems to be the most investigated aspect in recent works, by which we refer to practices that manipulate the selected features inside the model so as to add constraints or create topological connections according to music theory [11, 3, 12, 13, 6, 14]. As to feature selection, there are works utilizing higher-level features obtained either from mathematical calculation or from extra labels annotated by experts or musicians [8, 15, 10]. Feature encoding focuses on how features are converted to numerical values [16, 17, 18], but it is less discussed in the current literature compared to the other two aspects. Therefore, our work focuses on feature encoding and attempts to systematically compare several existing encoding methods for the low-level pitch and metric (positional) features. This study is based on the monophonic melody generation task, using only low-level features as model input and, in particular, only flat event-like sequential representations, for the following reasons. First, lower-level features are more independent and thus more controllable: it is relatively easy to isolate certain features and investigate their importance. Second, melodies are monophonic and thus much simpler to model than polyphonic music, where both harmony and inter-part interactions must be considered. Third, expressiveness (e.g., changes in tempo and dynamics) is ignored, since it brings extra temporal dimensions that compound and interfere with the low-level features, which is beyond the scope of this study. Finally, preferring a flat sequential representation over more advanced topological representations (e.g., stacking tokens into super-tokens as in [6, 15] or using hierarchical representations as in [12]) helps avoid introducing new feature selection bias and new hyper-parameters. An extra benefit of this choice is that future models can easily build on the results of this study, since minor modifications of the input data representation could already lead to noticeable improvements in model performance. Regarding pitch, duration and metric features, this study discusses four hyper-parameters involved in their encodings. (1) _Pitch_ encoding, including the commonly used MIDI pitch number and the pitch class-pitch octave pair as used by [16]. 
(2) (Bar-level grid) _Position Complexity_ (PC), referring to whether a single token or multiple tokens are used to represent the different metric grid positions inside a bar. (3) (Bar-level grid) _Position Resolution_ (PR), the number of evenly distributed positions of a bar to be encoded. (4) (Note) _Duration Resolution_ (DR), the number of subdivisions of a beat, which determines the minimal unit of note length. It is hypothesized that these 4 hyper-parameters greatly influence the model performance. It is expected that more complicated token representations, that is, the pitch class-octave encoding, multiple-token PC, and higher PR and DR, would result in better generation quality in terms of how well they approximate the true distribution of several selected objective evaluation metrics. In order to test the hypothesis, we first define a few possible options for the four hyper-parameters. A brute-force search is then conducted on the hyper-parameter grid, that is, all possible configurations of the hyper-parameter grid are individually used to transform the Wikifonia dataset and train the same Transformer-XL melody generation model from scratch for the same number of gradient updates. Objective analysis of the generation quality is done by first sampling a large number of melodies from the resulting models, then comparing, via the Overlapping Area (OA), their metric distribution similarity with the test set distribution in terms of 9 evaluative scoring metrics: 5 pitch-related and 4 rhythmic features. Regarding pitch encoding, the paired \(t\)-test results report a significantly better average performance of the class-octave encoding over the commonly used MIDI number encoding. PC, PR and DR interact in such a way that higher PR and DR combined with single-token PC result in the best approximations. The interaction of the different encoding hyper-parameters is discussed in Section 7, followed by a discussion of the over-fitted pitch embedding space. The main contributions of this work are twofold. First, we demonstrate that a small Transformer-XL network of only 0.5M parameters and a low-dimensional (\(d=32\)) embedding space are able to produce music with close objective metric distributions, given appropriate encoding hyper-parameters and a few epochs of training. The advantages and drawbacks of different encoding options are manifested by the results. Second, we call attention to the over-fitting problem and the exposure bias due to the nature of the task, which also interact with the encoding hyper-parameters and influence the model performance. We believe that the findings of this study can be easily extended to the improvement of other music generation models. ## 2 Related Works ### Pitch Feature Encodings Categorical pitch encoding seems to be the most used pitch encoding in current symbolic music representations [19]; it can be found in representations such as a pianoroll stacked from one-hot pitch vectors. The problem with this encoding is that no explicit prior knowledge about pitches is encoded, since all pitches are equidistant from each other. Early attempts at addressing this issue constructed a static pitch representation space that preserves pitch similarities based on listeners' ratings in psychoacoustical experiments [20, 21, 22]. 
Based on these, [23] created CONCERT, a neural-network-based music generation system with a proposed pitch representation named PHCCCF, built from three components of an absolute pitch: Pitch Height (PH), Chromatic Circle (CC) and Circle of Fifths (CF). [24] compared a few pitch representations on a neural-net chord classifier, including the (categorical) pitch class representation and a few psychoacoustical pitch representations involving harmonics. The results suggested that explicitly encoded pitch harmonics result in higher classification accuracy. Recent solutions mainly favor the word embedding approach due to the rapid development of natural language processing (NLP) and the decent performance of the latest language models. With word embeddings, the vector representations of pitches are learned and can be dynamically optimized according to the downstream task. This approach is commonly seen in MIDI event-based representations, such as the Note-On and Note-Off tokens in Performance RNN (2017) and Music Transformer [11], and Note-On in MusicVAE [26], REMI [3], CWT [6] and MusicBERT [15]. However, the evaluation of the pitch embedding space is rarely discussed in the literature. The embedding dimension is usually empirically set to 512, which is taken for granted and which we consider unreasonably high. [18] proposes a low-level pitch embedding that ensures the translational invariance (or, transpositional invariance) that is not guaranteed by a trained word embedding. Regarding pitch feature selection, relative pitch (the interval, i.e., the delta between two absolute pitches) could also be encoded [4, 27]. However, when word embeddings are used, the interaction between absolute pitch vectors and relative pitch vectors can be unexplainable [18] and impractical to use [11]. Therefore, only absolute pitch is considered in our work. The Tonnetz representation provides alternative geometric features of both pitch and interval, but it seems to appear more in non-generative tasks such as music classification [4, 28, 13]. As to pitch spelling, [29] discussed the subtle differences between chromatic pitch (CP) and pitch spelling (PS) when encoding enharmonic notes (e.g., C\({}^{\sharp}\), D\({}^{\flat}\) and E\(\flat\flat\)) in the context of automatic harmonic analysis. The pitch class feature appears more in discriminative tasks (music classification [24], clustering [30], harmonic analysis [29]) than in generative models [16, 13], since most generative models to date still stick to the categorical encoding (e.g., 128 MIDI pitch numbers). Hence, this study compares the influence of these two commonly used pitch encodings in the context of a generative model, which, to the best of our knowledge, seems to be the first work comparing them. Specifically, we use a transformer-based sequential melody generation model. ### Duration Encodings #### 2.2.1 Implicit and Explicit Duration Encodings One factor of duration encoding is whether the note duration is encoded explicitly or implicitly. An explicit encoding uses analogous numerical features for note length, which has been experimented with since the very early CONCERT model by [23]. Especially when analogous values are used for the encoding, different features can be combined algebraically while the duration interpretation always remains meaningful. When word embeddings are used to denote duration, it is usually discretized into a finite set of possible values, as used in the recently proposed REMI representation [3]. 
An implicit encoding usually relies on a group of tokens that accumulate short time spans before the note is released. This can be done using a single repetitive token that represents a fixed amount of time (more common in a non-expressive context), e.g., the Hold token used in DeepBach [31], or multiple tokens for different time spans, e.g., combinations of different Time-Shift tokens and Note-Off tokens as in [25]. In the REMI work, [3] concluded that explicit duration encoding outperformed the taken-for-granted implicit duration encoding (i.e., combinations of Time-Shifts and Note-Offs), with better generation quality and a shorter average sequence length. However, in the comparison of their Baselines 1 and 2, where the only difference was implicit Note-Off versus explicit Duration, the three resulting objective evaluation metrics seem to lie equally far from the true distribution, with one higher and the other lower, which may not support a strong conclusion that the latter is better. In our work, we further examine this comparison, but using different terminology1 rather than duration itself. Footnote 1: Specifically, these two sets are referred to by the settings (Position Resolution = 0) and (Position Resolution = 4) #### 2.2.2 Duration Resolution Another factor of duration encoding is the resolution. The minimal time step is usually defined by a hyper-parameter, which we refer to as the _resolution_, meaning the number of equal subdivisions of a beat or a bar, one of which is used as the unit of time. Presumably, researchers choose different _beat resolutions_ because of model capacity (e.g., the maximal sequence length of a model). For example, MuseGAN [2] used 24 subdivisions of a beat on a deep convolutional generative adversarial network (DCGAN); MidiNet [32] used 4 on another DCGAN; MusicVAE [26] used 4 on an LSTM variational auto-encoder (VAE); and the Pop Music Transformer [3] reported the best performance when using 4 on a Transformer-XL. Although increasing the beat resolution could allow more rhythmic details to be encoded, it would also potentially lead to longer sequences (especially for implicit duration encodings), whose advantages and drawbacks have not been extensively studied in the literature. Therefore, this study examines the concept of duration encoding from two aspects, namely _duration (beat) resolution_ (DR) and _positional grid (beat) resolution_ (PR), and investigates their influence on model performance. The definition of PR is given in the following subsection. ### Metrical Encodings and Bar-level Grid Position Encodings #### 2.3.1 Positional Resolution Since REMI [3] and the Jazz Transformer [8], explicitly imposing a bar-level metric grid2 on the encoded sequence has turned out to be effective for improving generation quality and even increasing generation controllability. According to these authors, most previous models were unable to generate pieces with clear pulses and beats. Besides expressiveness features, REMI proposed grid-level metric encoding at both the bar level and the finest _position_ level. In REMI, _position_ refers to a series of special tokens (pos\({}_{1..16}\)) indicating the different possible grid positions inside each bar, where the grid is evenly divided into 16 parts3. Another special token, Bar, corresponds to the first position of a bar (Position\({}_{1}\)), but it is a separate token that always comes before pos\({}_{1}\) to emphasize the beginning of a bar. The necessity of this Bar token deserves further discussion. 
Technically, if all the absolute position tokens are strictly ordered in the encoded sequence, a single Position (1/16) is enough to indicate the beginning of a new bar, in which case the Bar token is optional. In this study, we address these minor issues by ensuring that the same amount of information is encoded in each pair of candidates to be compared. Footnote 2: We will also use the terms “metric grid” and “positional grid” interchangeably in the rest of this paper Footnote 3: By our terminology, the positional grid encoding above uses PR = 4, since 4 subdivisions are encoded for each beat. Second, the comparison of Baseline 3 (a stronger baseline using explicit duration and multiple non-expressive Time-Shifts) and REMI (which uses multiple Positions and a single Bar token) compared different encoding approaches that were not based on the same amount of information. Compared to Baseline 3, REMI provides extra information regarding bar lines and absolute positions, which means that beat-level and bar-level timing cannot be reconstructed from the encoded sequence of Baseline 3. #### 2.3.2 Positional Complexity Similar to duration, the positional grid can also be implicit, by accumulating the same token (e.g., the Music Transformer generating Bach Chorales [11]), or explicitly specified with absolute positions as used in REMI and the OctupleMIDI representation [15]. In order to distinguish this from duration, we use the term _Position Complexity_ (PC) to indicate whether the grid is encoded by implicit accumulative single tokens or by explicit multiple absolute positional tokens. Correspondingly, we use _single_ and _multiple_ for the two options. To summarize, this work attempts to decompose the encoding settings of the low-level features (pitch, meter and duration). The low-level representation settings of some recent models are listed in table xxx. Instead of vaguely using the concept of resolution, we decompose the encoding of duration and bar-level metric grid positions into 3 hyper-parameters, PC, PR and DR, assuming that note duration is encoded explicitly. ## 3 Experiment Setup ### Dataset and Preprocessing The Wikifonia dataset contains 6,405 lead sheets of music from mixed genres in the MusicXML format. A cleaned dataset is used, downloaded via the muspy library [33]. As to time signature, only \(\frac{4}{4}\) is considered to avoid encoding inconsistency; we removed songs containing inconsistent bar lengths (mostly because of time signature changes). 90% (3,861) of the samples were used for training and the other 10% (429) formed the test set. Chord, tempo, instrument and other metadata are all ignored in this study. ### Vocabulary The vocabulary set consists of three parts, pitch tokens (including REST, a special pitch token for silence), duration tokens and positional tokens, whose specific tokens are defined by the four hyper-parameters. The Pitch hyper-parameter determines the pitch tokens, with the Number and Class-Octave options. Duration Resolution (DR) determines the beat resolution to which all note onsets and durations are rounded; duration tokens represent times from the smallest time step up to 4 beats. Position Resolution (PR) determines the amount of metric grid information provided in the encoding and thus defines a set of positional tokens. Position Complexity (PC) comes with 2 options, Single and Multiple, that specify whether a single token or multiple tokens are used for the different grid positions in each bar. 
Other special tokens such as PAD will not be discussed in detail. ### Encoding Algorithm Given a melody \(M\) and an input representation vocabulary \(V\), we use Algorithm 1 to encode the melody into a token sequence. Notice that as soon as positional grid tokens (e.g., BAR, BEAT, and POS) are encoded into the sequence, they introduce another time axis of their own. In the generation results of current mainstream models, it is common to see inconsistency between the note-based accumulative time and the grid-indicated time, which can be handled by different post-processing methods depending on the needs of downstream tasks. In this study, we only trust the note-based timing (from note/rest durations) and ignore positional grid tokens when decoding a generated token sequence, since the latter are only considered as auxiliary input that helps the model learn temporal relationships. ### Model and Training Specifications Since the sequence length varies across encoding methods, the model should be good at handling long sequences. Transformer-XL [34] is hence selected, as it was the first model able to handle extra-long sequences, outperforming the LSTM network [35] and the vanilla Transformer [36]. Transformer-XL introduced the memory reuse mechanism and relative positional embedding to address the context fragmentation problem at the training stage, which is an influential improvement for music generation models. However, we assume that a Transformer-XL of its original size (18 layers), designed for large text datasets, is inappropriate for the music generation task: since the vocabulary size is only in the tens to hundreds, the model is very likely to over-fit according to our trials. Hence, this study uses a 4-layer Transformer-XL with 32 embedding dimensions and only 4 attention heads; 64 and 128 are used for the hidden dimensions and inner feed-forward layers, respectively. As a result, this shrunk model only uses around 0.5M parameters, which is only 0.2% of its original size. It turns out that even this tiny model can still over-fit for specific input representations, as suggested by the test NLL loss, which keeps increasing after only a few epochs. An AdamW optimizer with learning rate 2e-4 is used, as it contains a regularization term [37]. All models are trained for around 25k steps (50 epochs) on the training set with a maximum sequence length of 1,024, and are all evaluated on the same test set of melodies. During training, augmentation is performed on the training data for each epoch, with random transpositions within 6 semitones upward and downward. For sampling, top-\(k\) sampling with \(k=5\) is used, starting with the token representing the beginning of a bar. 128 melodies of 512 tokens are sampled, with PAD tokens removed in post-processing. ### Evaluation Metrics and Distribution Similarity Only objective metrics are used in this study, mainly because this study focuses on how low-level feature encoding methods influence model performance rather than on improving the generation quality of the entire system. Also, although a small network is used to test model performance, the generation quality varies widely across the resulting models; based on the listening experience of the authors, there exist obvious failure cases for some of the resulting systems. Hence, we believe that the objective evaluation metrics are already sufficient to distinguish the different quality levels, so we did not conduct any subjective analysis. The selected objective metrics fall into two groups, pitch-based and rhythm-based, respectively. 
**Pitch** * **MAI** Mean Absolute Interval, measures the average steepness of the notes in a melody. * **H(P)** Pitch Entropy, the entropy of all the pitches (in MIDI numbers) in a melody. * **H(PC)** Pitch Class Entropy, the entropy of the 12 pitch class choices in a melody. * **SC** Scale Consistency, defined as the largest pitch-in-scale rate over all possible major and minor scales. * **MSD** Major-scale-rate vector standard deviation, the standard deviation of the 12 pitch-in-scale rates. **Rhythm** * **MD** Mean Duration, the average note duration of a melody. * **H(D)** Duration Entropy, the entropy of all the note durations in a melody. * **GC** Groove Consistency, the average similarity (rhythmic Hamming distance) between every two consecutive bars. * **EBR** Empty Beat Rate, the rate of beats where no note is being played or held. Of the metrics above, H(P), H(PC), SC, GC and EBR are implemented in the muspy library [33] and were first proposed in c-RNN-GAN [38], MuseGAN [2] and the Jazz Transformer [8]. We also introduce MAI, MSD, MD and H(D) in this study to better describe the distribution of the low-level pitch and duration features. 128 melodies are sampled from both the test set (truncated to the training data length) and each resulting model, and the metrics above are calculated for all the melodies. For each distribution, the p.d.f. is approximated by Gaussian kernel density estimation (KDE), whose bandwidth is chosen according to Scott's rule of thumb [39]. To avoid the over-smoothing caused by a large bandwidth and the overly strong normality assumption behind Scott's rule of thumb, the bandwidth is further divided by 4. Finally, the Overlapping Area (OA) is adopted to measure the similarity between each model distribution and the true distribution; a higher, closer-to-1 similarity indicates a better approximation of the true distribution. ## 4 Pitch Encodings ### MIDI Number and Class-octave The popular MIDI number representation encodes pitches as integers from 0 to 127. In the context of word embeddings, such an encoding provides the model with no prior knowledge about the relationships among pitches. The pitch class-pitch octave encoding, by contrast, breaks the pitch feature down into two tokens, as the name suggests. For a pitch with MIDI number \(p\), the pitch class and pitch octave can be written as \(\left(p\mod 12,\left\lfloor\frac{p}{12}\right\rfloor\right)\). Even though the two encoding methods can always be converted into each other, the class-octave encoding provides explicit information about pitch relationships, especially across octaves. For instance, the list of pitches (C4, E4, G4, C5, E5, G5) is encoded as (p60, p64, p67, p72, p76, p79) using the number encoding, but as (C, o4, E, o4, G, o4, C, o5, E, o5, G, o5) in the class-octave manner. In this example, the latter encoding clearly shows which pitch classes are being used, despite the change of octave. The class-octave encoding also has more transpositional invariance than the MIDI number encoding, simply because transposing a pitch by a few semitones rarely changes the octave token, while transposing by whole octaves does not change the pitch class at all. In this sense, pitch similarity is preserved more explicitly in the class-octave encoding. Hence, it is expected to result in better pitch and pitch class distributions for the generated melodies. 
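For illustration, a minimal sketch of the two pitch encodings described above is given below; the token spellings (p60, C, o5, etc.) are illustrative and not necessarily the exact strings used in the study, and note that the raw octave index \(\lfloor p/12\rfloor\) is offset by one from conventional scientific pitch names (MIDI 60 = C4).

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def encode_number(midi_pitches):
    """MIDI-number encoding: one token per note, e.g. 60 -> 'p60'."""
    return [f"p{p}" for p in midi_pitches]

def encode_class_octave(midi_pitches):
    """Class-octave encoding: two tokens per note, (p mod 12, floor(p / 12))."""
    tokens = []
    for p in midi_pitches:
        tokens += [PITCH_CLASSES[p % 12], f"o{p // 12}"]
    return tokens

melody = [60, 64, 67, 72, 76, 79]   # C4 E4 G4 C5 E5 G5
print(encode_number(melody))        # ['p60', 'p64', 'p67', 'p72', 'p76', 'p79']
print(encode_class_octave(melody))  # ['C', 'o5', 'E', 'o5', 'G', 'o5', 'C', 'o6', 'E', 'o6', 'G', 'o6']
```

The shared pitch-class tokens across octaves are what make the explicit transpositional structure visible to the model.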
### Comparison Results and Discussions All 48 resulting models on the hyper-parameter grid can be grouped into 24 pairs that differ only in the pitch option. For the family of 7 selected metrics, a paired Wilcoxon test rejected 2 equal-mean null hypotheses, for MAI (\(p\) =6.57e-4) and H(PC) (\(p\) =4.22e-5), using the Holm-Bonferroni adjusted \(\alpha\) values controlling the family-wise error rate (FWER) at \(\leq\) 0.05. In terms of the gap, for MAI the Class-Octave group sample distributions have around 0.158 higher OA than the Number group; for H(PC) the gap is 0.149. Another non-significant but worth-mentioning gap is 0.047 for H(P). The overall distributions of the H(P) OA and H(PC) OA are shown in Figure 1. The remaining OA differences are almost all within the range of \(\pm\)0.02 and can be ignored. Around the mean area, we selected a representative pair of results (models 27 and 28) with a relatively strong contrast and plot their detailed distributions in Figure 2 to observe their characteristics. Figure 1: Resulting OA joint distribution of H(PC) and H(P). The dashed line represents \(y=x\), separating the two encoding methods. Class-Octave results are better in H(PC) while Number results are better in H(P), which is as expected. The corresponding encoding hyper-parameters of this pair are (PC = Single, PR=1, DR=16), with Pitch being Number and Class-Octave, respectively. Figure 2(a) shows that the Class-Octave model yields a better MAI distribution, which is less concentrated on 1 semitone and has higher density over the range of 1 to 4 semitones compared to the Number model. In contrast, the Number distribution is much more positively skewed, hinting that the generated melodies mostly progress in small intervals such as semitones and whole tones, which can be too conservative and unexciting in terms of the expected listening experience. This comparison result can be interpreted as evidence of the effectiveness of separately encoded pitch classes and pitch octaves, despite the doubled sequence length. As mentioned before, notes that are only a few semitones apart are most likely to share an octave token, or to have octave tokens that differ by 1; the similarity of pitches is thus better preserved and explicitly expressed at the token level. It is also worth noticing that if the 1-Wasserstein distance is used instead to calculate the strict distance to the true distribution, the Number model actually has a closer MAI distribution. However, the OA focuses more on how much of the true distribution is captured, so it is reasonable that a distribution with higher OA can actually be more distant. The pitch class entropy H(PC) is another distribution learnt significantly better by the Class-Octave model. As shown in Figure 2(b), its mean entropy is around 2 to 3 bits, higher than that of the Number model. An intuitive explanation could be that, in order to model the distribution of pitch classes, a Number model must learn a meaningful token space for all 128 pitches, for instance one that is automatically well clustered into the 12 pitch classes, or that follows a certain geometric pattern the model can rely on to generate correctly distributed pitches. In comparison, a Class-Octave model only has to model the relationships between the 12 given pitch class tokens that are known to be shared across octaves. In this case, the task does not start from scratch given the explicitly encoded constraints, and is thus less difficult. 
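For reference, the multiple-comparison procedure described above (paired Wilcoxon signed-rank tests with a Holm-Bonferroni correction at FWER \(\leq\) 0.05) can be sketched as follows; the OA values below are randomly generated placeholders, and scipy's `wilcoxon` merely stands in for whatever implementation was actually used.

```python
import numpy as np
from scipy.stats import wilcoxon

def holm_bonferroni(pvals, alpha=0.05):
    """Boolean 'reject' decision per hypothesis under the Holm-Bonferroni step-down
    procedure, which controls the family-wise error rate at `alpha`."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break  # once one sorted p-value fails, all larger ones are retained
    return reject

# Hypothetical data: per-metric OA values for the 24 (Number, Class-Octave) model pairs.
rng = np.random.default_rng(0)
oa_number = rng.uniform(0.4, 0.9, size=(7, 24))                  # 7 metrics x 24 pairs
oa_class_octave = oa_number + rng.normal(0.05, 0.05, size=(7, 24))

pvals = [wilcoxon(a, b).pvalue for a, b in zip(oa_number, oa_class_octave)]
print(holm_bonferroni(pvals))
```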
From the perspective of data augmentation, random transpositions within a few semitones can be viewed as a kind of regularization, which permutes all the inter-pitch constraints within a cyclic group of 12 and results in only small changes to the octave tokens. For the Number token space, however, the changes do not happen on a cyclic group, which only explicitly benefits neighboring pitches. This tends to result in a smooth striped manifold of the pitch embedding space. The analysis of the embedding space and the outliers is given in Section 7. Another interesting observation can be made in Figure 2(c): the EBR distribution from the Number model has a heavy tail, which is the case neither for the Class-Octave model nor for the true distribution. Figure 2: Metric distributions for a representative pair of encodings (27, 28). \(OA\) and \(W_{1}\) denote the overlapping area and 1-Wasserstein distance, respectively, between the model sample distribution and the true distribution. Subfigure (c) shows a non-significant metric but also reveals salient differences between the two pitch encodings. ## 5 Metrical Encodings ### Position Complexity and Positional Resolution In recent studies such as the REMI representation [3], the authors reported that the 16th-note grid position produced the best results on the music generation task, with several attempts at other resolutions resulting in worse performance. To find out whether it is the sparsity or the absolute positions, or both, that improved generation quality, this study prefers a dense grid encoding, which encodes all the grid positions of a bar. Importantly, all the encoded grid positions are ignored during the decoding process, since inconsistency handling is avoided and we already have the special token Rest as a silent pitch token that fills the gaps between the notes of a melody. We define the hyper-parameters Position (bar) Resolution (PR) and Position Complexity (PC) for such an encoding. PR is a multiple of 4, indicating the number of even subdivisions of a bar of 4 beats. PC has two options. The option Single means that all grid positions are denoted by a single Pos token; the bar line is provided by a Bar token before the first position, so the offset within a bar can be recovered by counting the occurrences after a Bar token. The other option, Multiple, uses separate absolute position tokens Pos\({}_{1..PR}\) for the bar-level grid; in this case, no separate Bar token is provided, since the first absolute position necessarily marks the beginning of a bar. As an example, for the melody in Figure 3, different PC and PR settings can yield the following encoded sequences; Pitch=Number, PR=4 (the 4 downbeats) and PR=16 (all the 16th notes) are used for elaboration. We use a more concise notation to shorten the sequences on paper: <TOKEN>xn denotes a token repeated \(n\) times and <TOKEN>\(a..b\) represents a range of tokens. The first benefit of this design is the similarity between Single and the Hold token used by DeepBach [31]; different from DeepBach, however, this repetitive dense grid provides explicit bar lines and does not determine any note duration at all, so that we can investigate whether such a repetitive grid helps in modeling metrical features. Also, the dense setting allows the comparison between Single and Multiple to be conducted with almost equally long encoded sequences. 
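A minimal sketch of the two grid-token variants, under one plausible reading of the description above, is shown below; the token names are illustrative, and the interleaving with pitch and duration tokens (Algorithm 1 and Figure 3) is omitted.

```python
def bar_grid_tokens(pr, pc):
    """Grid tokens for one 4/4 bar at Position Resolution `pr`.
    pc='single':   a BAR token followed by identical POS tokens; the offset within the bar
                   is recovered by counting POS occurrences after the last BAR.
    pc='multiple': absolute tokens POS_1..POS_pr; POS_1 doubles as the bar line."""
    if pc == "single":
        return ["BAR"] + ["POS"] * pr
    return [f"POS_{k}" for k in range(1, pr + 1)]

print(bar_grid_tokens(4, "single"))    # ['BAR', 'POS', 'POS', 'POS', 'POS']
print(bar_grid_tokens(4, "multiple"))  # ['POS_1', 'POS_2', 'POS_3', 'POS_4']
```

The two variants differ in sequence length by at most one token per bar, which is what makes their comparison with a dense grid roughly length-matched.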
It is important to ensure similar sequence lengths when comparing metrical encodings, since the transformer models used both in the REMI work and in ours are trained with teacher forcing and a non-weighted NLL loss. This means that for every batch gradient update, the token-wise average loss is weighted according to the frequencies of the different token types in the batch, thus determining the learning priorities. For example, suppose a melody is encoded into a longer sequence \(A\) with a large number of grid tokens, and into a shorter sequence \(B\) with sparsely encoded absolute grid positions. When the loss is averaged along the steps, the losses at grid-token steps are weighted more heavily in \(A\) than in \(B\), so the optimization direction leans more towards the positional tokens under \(A\)'s encoding. In the experiment settings, both PC options are considered. PR = (0, 1, 4, finest) are compared, where PR = 0 denotes the ablated group that does not use the bar-level position grid feature at all (PC being undefined in this case), and the finest value is calculated as DR \(\times\) 4, covering 16, 32, 48 and 64. There are 8 models in the ablated group and 40 models in the control group. Figure 3: An example of the same melody encoded with different PC and PR settings. ### Results and Discussion In this subsection, the results are compared in three ways: ablation study, PC and PR. #### 5.2.1 Ablation Study Among the family of 9 metrics, Wilcoxon tests show two relatively notable differences in metric distribution OA between the ablated group and the control group. The null hypothesis, that the two groups share the same mean OA, is tested for all 9 metrics according to the Holm-Bonferroni method. At a FWER no greater than 5%, we fail to reject the null hypothesis. However, there are two OA differences with small \(p\)-values that deserve discussion: the control group has a higher average OA for Mean Duration (MD) at \(p_{1}=0.0059\), slightly greater than \(\alpha_{1}=0.0055\), and a higher average OA for Duration Entropy (H(D)) at \(p_{2}=0.0089\), slightly greater than \(\alpha_{2}=0.0063\). The box plots are shown in Figure 4. Among all the metrics, the two noticeably improved metrics are both about the distribution of note durations, even though the position grid does not determine the note durations. This possibly suggests that the grid features are helpful when learning duration features. Without the grid, the only way to describe note onsets and offsets is by accumulating the duration tokens (from either pitches or REST). Figure 4: Metric distribution OA of the ablated group and the control group. When the grid is provided, the relative position from the bar line can be an additional source of information for modelling note durations. This result also matches the feasibility of REMI's sparse encoding of position tokens. Another non-significant metric, Empty Beat Rate (EBR), has an unadjusted \(p\)-value of 0.15, but the ranges of the distributions are worth a plot, see Figure 5. The remaining metric OAs are either slightly increased for the control group or similar in distribution, and will be skipped. #### 5.2.2 Interaction of Position Complexity and Position Resolution If grouped only by PC, 32 of the 40 models with PR \(>\) 1 can be grouped into 16 pairs that differ only in PC. A paired Wilcoxon test at a FWER no greater than 0.05 fails to reject the null hypothesis, meaning there is no significant difference in the group means. 
The grouped box plots show that the influence of PC varies with PR and the Pitch encoding, which is analyzed in Section 7. Given DR = 4, we gathered 12 models that may differ in other settings, with PR options of: 0, ablated; 1, only BAR; 4, only downbeats; 16, the finest grid under DR = 4. Figure 6 plots the OA for the different PR values, with 6a showing the 5 pitch-related metrics and 6b the 4 rhythmic metrics. Although the settings are designed as a smooth transition from PR = 0 to PR = 16, the results are not necessarily smoothly interpolated as one might expect. Three observations on the trend of OA against PR are made. Observation 1: Among all the metrics, the two that benefit the most are MD and H(D), the two regarding note durations, since they both increase from a poor value to more than 0.7, which indicates a relatively good approximation. These two metrics also display a more stable increase with smaller variances compared to the other metrics. Observation 2: Among the 5 pitch-related metrics, most fluctuate at PR = 1 and 4. The only prominent improvements happen at PR = 16, and even those are modest. Given the relatively small sample sizes and small ranges of OA, the fluctuations among these features can be ignored. Figure 5: When the positional grid feature is encoded, a higher OA of the EBR distribution is achieved. Observation 3: The EBR reaches its best value of up to 0.8 when PR = 1, i.e., when only additional Bar tokens are added, and deteriorates as PR increases and the smallest grid step becomes shorter than a beat. GC, on the other hand, shows a decreasing trend. Regarding observation 1, the improving note duration distribution as PR increases seems to indicate that the grid is engaged as an alternative way to help estimate reasonable note durations in a melody, which also to some extent implies that the model relies less on accumulating previous duration tokens to obtain an absolute time. If this conjecture holds, observation 3 can be explained in a similar way: since the model relies more on the grid position to estimate durations, it may be less attentive to the durations of the previous notes. Also, in the decoding procedure the grid is not used to correct durational inaccuracies, so these are accumulated and amplified, resulting in a worse beat-wise EBR as seen in the plot, let alone the bar-wise GC, which is even worse than in the ablated group. To summarize, the results reveal a compromise between using the additional metric grid and using the accumulated durations. When the imposed grid is finer (higher PR, increasing the proportion of grid tokens), the distributions of durations are better modeled; however, the quality of the beat-level and bar-level groove correspondingly decreases. The appropriate PR to reach this subtle balance seems to vary across metrics. ## 6 Durational Resolution Duration Resolution (DR), an alias for Ticks-Per-Quarter-Note (TPQN) as used in the MIDI specification, refers to the number of subdivisions of a quarter note; one subdivision is used as the unit of which all time spans are a multiple. Here, DR is dedicated to note durations so that it is independent of PR. The process of discretizing all note durations into multiples of the DR unit usually causes information loss, for example for tuplets. One way, as used in this study, is to first round the onsets and offsets of all notes and then calculate the note durations. For example, suppose DR is 4: an 8th-note triplet can then have a duration of 1 or 2 grid steps depending on its onset. 
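A minimal sketch of this rounding strategy is given below (illustrative only, not the study's exact preprocessing code); onsets and offsets in beats are snapped to the DR grid before durations are derived.

```python
def quantize_durations(notes, dr=4):
    """Round onsets/offsets (in beats) to a 1/dr-beat grid, then derive durations in grid steps."""
    quantized = []
    for onset, offset in notes:
        q_on, q_off = round(onset * dr), round(offset * dr)
        quantized.append((q_on, max(q_off - q_on, 1)))  # (grid onset, duration), at least one step
    return quantized

# Three consecutive 8th-note triplets starting on beat 0 (each lasting 1/3 beat):
triplets = [(0.0, 1/3), (1/3, 2/3), (2/3, 1.0)]
print(quantize_durations(triplets, dr=4))   # [(0, 1), (1, 2), (3, 1)] -> durations 1, 2, 1
```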
The example can also be seen in Figure 3, where the second to fourth notes turn out to have durations 1, 2 and 1, respectively. Figure 6: OA distributions as PR increases. Higher values are better. 0 denotes the ablated group, 1 the bar-level grid, 4 the downbeat-level grid, and 16 the finest grid at DR = 4. The problem of an unreasonably low DR is obvious: too much information is lost for reconstruction. A large DR, on the other hand, tends to increase the model's learning difficulty, because subtle numeric differences must be learned in order to create reasonable combinations that add up correctly in time. In this section, the 12 models with PR = finest are compared across different DR settings. That is, the DRs are set to (4, 8, 12, 16) and the corresponding PRs are (16, 32, 48, 64). They vary in the two Pitch options and the two PC options. PR = finest is chosen since the previous experiments have shown that most metrics are improved at the finest PR. ### Representative results The results do not show a simple linear relationship and are not well fitted by multivariate multiple linear regression; hence they are plotted in Figure 7 and discussed in groups. The general trend for the 5 pitch-related metric OAs is that they are slightly improved as the DR increases, while the GC OA drops quickly. Extreme cases are noticed at DR = 8 (PR = 32) for the 4 models (id = 21 to 24), with quite a few metrics noticeably high; the corresponding obvious outliers are annotated with model IDs in the plots. If the neighboring configurations DR = 4 and DR = 12 are also taken into consideration, the DR = 8 group seems to pull the neighbors' performance towards it, probably indicating a non-monotonic influence of DR with a peak at DR = 8. After checking, the two outlier models (22 and 24) also contribute the maxima of MD and H(D) and the minima of the plots regarding GC and EBR in Figure 7. Since the overall trend of MD and H(D) is subtle, we ignore these two items in this special case. However, in contrast to their best performance in approximating the pitch-related metrics, models 22 and 24 learn very poorly about grooves and beats. The large variances in the DR = 8 group come mainly from the different Pitch and PC options, whose interactions with DR are analyzed in more detail in Section 7. The results above suggest that DR, similar to PR, also has a non-monotonic influence on the metrics: neither a too low nor a too high DR results in an optimal approximation of the true metric distributions. The opposite conditions of the extreme cases also indicate that the optimal DR can differ across metrics. Figure 7: OA distributions for the different DR settings at PR = finest. Higher values are better. In our case, the optimal DR seems to be 8 or 12, which is in line with the optimal position resolution of 16 reported in the original REMI work [3] and the 24 subdivisions per beat used by the piano-roll-based MuseGAN [2]. This could also be due to a high DR causing the model to over-fit to the training dataset, which is discussed in Section 7. ## 7 Combination Analysis From the previous experiments we have observed two phenomena. First, the impact of the two resolutions PR and DR is non-monotonic in both cases, and the optimal DR and PR even vary across metrics, e.g., the best approximated MD and H(D) are reached at a high PR, while a better approximated GC is reached at a low PR. This suggests a kind of trade-off in the model performance on different metrics. 
Second, the outliers within a group are sometimes driven by particular settings of the **other** hyper-parameters, which further hints at interactions between the hyper-parameters. In this section, we discuss two stages of the music generation task where additional factors, beyond the encoding hyper-parameters, must be taken into consideration to explain the trade-off: the task goal and the learning process (the stage after encoding), and the data quality (the stage before encoding).

### Position Complexity and the Exposure Bias

During training we noticed that the test loss of some models kept increasing until the end of the 50 epochs. All models are therefore evaluated at the epoch with the lowest test loss (the closest checkpoint is used). The groove-related GC metric is used to reveal the relationship between metric-approximation performance and the best epoch, plotted in Figure 8.

Figure 8: OA of GC against the best epoch, colored by Position Complexity.

Figure 8 shows a trend of better GC approximation for models that are trained longer before reaching their lowest loss. Especially for the Single group, the OAs at the initial epochs are poor. As the number of training steps increases, the Single group starts to gain performance, while the Multiple group becomes worse and stays at a low level of around 0.2. Another interpretation of this plot is that, as PR and DR increase (plotted with increasingly larger marker sizes), the models shift from the upper right (slower training, higher performance) towards the bottom (for Multiple: longer training but worse results) and towards the left (for Single: early convergence, with a test loss that never gets lower, i.e., collapsed models). To summarize Figure 8, smaller PR and DR result in more consistent grooves, and Single tokens should not be learnt so fast that they produce an over-confident, failing model.

We believe this is caused by the auto-regressive nature of the task, with teacher forcing used to speed up convergence and correct errors in the early stage. When discussing the learning process of a Transformer-based music generation model, it is sometimes, though not frequently, mentioned that when teacher forcing is applied, minimizing the averaged cross-entropy loss is equivalent to maximizing the log-likelihood of the input sequence [40]. By repeatedly applying the chain rule to the conditional probabilities, the model likelihood \(p(x_{1:n})\) can be expanded into a product of step-wise predictions \(p(x_{k}|x_{1:k-1})\), namely,

\[p(x_{1:n})=\prod_{k}p(x_{k}|x_{1:k-1})\]
\[\log p(x_{1:n})=\sum_{k}\log p(x_{k}|x_{1:k-1})\]

The mean of the negative log terms is the cross-entropy loss between the predicted step-wise logits over the vocabulary and the one-hot labels. A common problem of teacher forcing is exposure bias, i.e., the discrepancy between the high likelihood of training samples and the lower quality of generated sequences, or model over-fitting, which is observed in our experiments. The maximum-likelihood nature also makes the loss sensitive to the true token distribution. In our case, as PR increases, the Single encoding of the metric grid produces highly repetitive tokens in the training sequences, which account for a large proportion of the step-wise averaged loss. The problem can be addressed by scheduled sampling [41], or by weighting the tokens in the vocabulary with the help of domain knowledge [40]. However, as the authors state, this approach is usually not computationally efficient, and in our case it would also require a tedious tuning process for the weights.
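To make the equivalence concrete, here is a minimal PyTorch-style sketch (an illustration under assumed tensor shapes, not the training code of this work) showing that the step-averaged cross-entropy under teacher forcing coincides with the sequence negative log-likelihood:

```python
# Minimal sketch (not the authors' code): averaged cross-entropy under teacher
# forcing equals the per-token negative log-likelihood of the sequence.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 8, 5
logits = torch.randn(seq_len, vocab_size)           # model outputs for steps 1..n (teacher-forced)
targets = torch.randint(0, vocab_size, (seq_len,))  # ground-truth next tokens x_k

# Cross-entropy against one-hot labels, averaged over steps.
ce = F.cross_entropy(logits, targets)

# Negative log-likelihood: -1/n * sum_k log p(x_k | x_{1:k-1}).
log_probs = F.log_softmax(logits, dim=-1)
nll = -log_probs[torch.arange(seq_len), targets].mean()

assert torch.allclose(ce, nll)  # the two quantities coincide
```

Because every step contributes equally to this average, highly repetitive grid tokens dominate the loss exactly as described above.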
The difference between Single and Multiple can also be interpreted from the angle of the entropy of the encoded sequences. For a higher PR, the repetitive single Pos tokens decrease the entropy of the true sequences, while the multiple absolute grid tokens, appearing with roughly equal frequencies, increase it, which results in diverging task difficulty (in terms of minimizing the loss); see Figure 9.

Figure 9: Diverging performance across PR under different encoding entropies.

### Pitch Embedding Space Over-fitting and Data Quality

From Figure 10, the Class-Octave group is prominently better than the Number group. However, the lower plot shows that the models approximate the pitch classes much worse if trained longer. The best models (models 22, 24 and 34) all come from epoch 5, mostly because PC = Single is used with a large PR. The early stopping caused by the other hyper-parameters is what brings about the decent pitch-related modeling.

Figure 10: OA of H(PC) against the best epoch, grouped by Pitch option.

Fortunately, the pitch embedding space can be inspected and compared through dimensionality reduction and visualization. Hence, we choose models 22, 24, 37 and 8, the four extreme cases, to compare their differences. From Figure 11, the early-stopped models, which are also the two models whose H(PC) is closest to that of the test set, have much smoother pitch embedding spaces than those trained for many epochs. Also, the clear structure of the pitch classes in 11a and of the pitches in 11c matches the expected striped manifold4, which means the embedding spaces have already modeled the proximity of adjacent pitches, helped in particular by the random transposition of melodies at the training stage. The problems displayed by the two worst cases suggest that further training breaks this relationship, because modelling the noise in the dataset becomes more important for minimizing the NLL. Without such a smooth pitch relationship, the generated sequences consequently do not approximate the true distributions any more closely.

Footnote 4: such a manifold is also visualized in the literature, e.g., for the PianoTree VAE [12]

Figure 11: Extreme cases of the embedding space. The two best cases are on the left, and they both come from an early-stopped model. The upper two are reduced from 32 dimensions using principal component analysis (PCA), while the lower two are obtained by uniform manifold approximation and projection (UMAP) to avoid crowdedness.

Another indication that the pitch embedding space is well fitted at an early stage is that the visualized embedding space (11a) already hints at a pitch-class distribution biased towards that of the training dataset, which is plotted in Figure 12. Since the true distribution mostly features the notes of the C major scale, the embedding space also shows some irregularity and is twisted to fit the true distribution. In comparison, the pitch classes in 11b show a much worse, over-fitted situation: the "black keys" (D♭, E♭, G♭, A♭ and B♭) are noticeably extruding out, with the remaining pitch classes (C, D, E, F, G, A, B) lying on a lower surface, which is exactly the distribution of notes in the dataset, so the diversity of the generation system is affected. The Number embedding space in 11c suggests that, even for the top-performing model, a cluster of rare pitches sits in the corner. The analogous situation for the Class-Octave space is that the outliers are octave tokens such as 'o0', 'o1', 'o8' and 'o9', but it is not worth a separate plot.
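The kind of inspection just described takes only a few lines of code. The sketch below (a generic illustration with random placeholder embeddings, not the models of this paper) projects a 32-dimensional pitch embedding table to 2-D with PCA; UMAP can be substituted where the projection is too crowded:

```python
# Minimal sketch (placeholder embeddings, not the paper's models): project learned
# pitch embeddings to 2-D with PCA to check whether adjacent pitches form a smooth manifold.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
num_pitches, dim = 128, 32                        # e.g., MIDI pitch vocabulary, 32-d embeddings
embeddings = rng.normal(size=(num_pitches, dim))  # stand-in for model.token_embedding.weight

coords = PCA(n_components=2).fit_transform(embeddings)

fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1], c=np.arange(num_pitches) % 12, cmap="hsv")
for p in range(0, num_pitches, 12):               # label the C of every octave
    ax.annotate(f"C{p // 12 - 1}", coords[p])
ax.set_title("Pitch embeddings, PCA projection (swap in UMAP to reduce crowding)")
plt.show()
```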
In the more over-trained 11d, adjacent pitches are almost indistinguishable by their locations, reflecting not advanced patterns but noise. The over-fitting problem seems to be worse for the Number pitch encoding, since it uses more vectors, i.e., more parameters, to model the pitch relationships.

To summarize, even at the low dimensionality of 32, the pitch embeddings can approximate the true distribution satisfactorily, suggesting that unreasonably high dimensionalities such as 512 are unnecessary. At the same time, such low-dimensional embeddings still over-fit easily, which prompts a rethinking of the effectiveness of early works in which static vector representations of pitches were used in rule-based systems, with both satisfying results (in terms of pitch and pitch class) and even stronger interpretability. Quite unlike a natural language with a large vocabulary, the pitch relationship is based on a much smaller set of units, e.g., only 12 pitch classes, and is recognized across different cultures. Therefore, an explainable and semantic representation should be preferred. As a practical recommendation for symbolic music tasks such as generation, we argue that the input pitch representation is better designed as a pre-determined, domain-knowledge-based, algorithmically extracted set of high-level features, rather than as a cold start trained from a randomly initialized embedding space. Starting from such a representation, it can still be adjusted dynamically by the model for different downstream tasks.

Figure 12: The pitch-class distributions of all the raw samples in both the training set and the test set. This distribution arises because most samples in the dataset are in the C major key or a minor key.

## 8 Conclusion

The music generation task has recently seen a large body of research, utilizing differently tweaked input encodings and diverse feature-engineering techniques with improving results. We are motivated by the monolithic model sizes and the inconsistent, taken-for-granted encoding approaches used in the literature. We present a systematic comparison of different encoding options and encoding hyper-parameters, based on experimental results with a small Transformer-XL network of only 0.5M parameters. The results suggest that current Transformer-based auto-regressive generation systems are quite sensitive to these hyper-parameters, which interact closely with the model even though they are not part of the model architecture. Problems such as over-fitting are still observed for this tiny network of only 0.5M parameters. The results also demonstrate the advantages and drawbacks of the different encoding options, so we recommend that encoding options be chosen carefully for an auto-regressive music generation model. The findings of our work can also carry over to the latest generation models that are not auto-regressive, in the sense that different encoding options for the same feature could be incorporated to improve performance.
Pitch and meter are fundamental musical features in symbolic music generation tasks, and researchers commonly choose different encoding methods depending on their specific goals. However, the advantages and drawbacks of the different encoding methods are rarely discussed. This paper presents an integrated analysis of the influence of two low-level features, pitch and meter, on the performance of token-based sequence music generation models. First, we compare the common MIDI-number encoding with the less-used class-octave encoding. Next, we apply a dense intra-bar metric grid to the encoded sequences as an auxiliary feature, and compare grid complexity and resolution. For complexity, we compare a single-token approach with a multiple-token approach. For grid resolution, we compare 0 (ablation), 1 (bar level), 4 (downbeat level) and 16 (the finest grid).
2309.06812
The dynamical evolution of protoplanetary disks and planets in dense star clusters
Most stars are born in dense stellar environments where the formation and early evolution of planetary systems may be significantly perturbed by encounters with neighbouring stars. To investigate the fate of circumstellar gas disks and planets around young stars in dense stellar environments, we numerically evolve star-disk-planet systems. We use the $N$-body codes NBODY6++GPU and SnIPES for the dynamical evolution of the stellar population, and the SPH-based code GaSPH for the dynamical evolution of protoplanetary disks. The secular evolution of a planetary system in a cluster differs from that of a field star. Most stellar encounters are tidal, adiabatic and nearly-parabolic. The parameters that characterize the impact of an encounter include the orientation of the protoplanetary disk and planet relative to the orbit of the encountering star, and the orbital phase and the semi-major axis of the planet. We investigate this dependence for close encounters ($r_p/a\leq 100$, where $r_p$ is the periastron distance of the encountering star and $a$ is the semi-major axis of the planet). We also investigate distant perturbers ($r_p/a\gg 100$), which have a moderate effect on the dynamical evolution of the planet and the protoplanetary disk. We find that the evolution of protoplanetary disks in star clusters differs significantly from that of isolated systems. When interpreting the outcome of the planet formation process, it is thus important to consider their birth environments.
Francesco Flammini Dotti, Roberto Capuzzo-Dolcetta, M. B. N. Kouwenhoven
2023-09-13T09:03:16
http://arxiv.org/abs/2309.06812v1
# The dynamical evolution of protoplanetary disks and planets in dense star clusters ###### Abstract Most stars are born in dense stellar environments where the formation and early evolution of planetary systems may be significantly perturbed by encounters with neighbouring stars. To investigate on the fate of circumstellar gas disks and planets around young stars dense stellar environments, we numerically evolve star-disk-planet systems. We use the \(N\)-body codes NBODY6++GPU and SnIPES for the dynamical evolution of the stellar population, and the SPH-based code GaSPH for the dynamical evolution of protoplanetary disks. The secular evolution of a planetary system in a cluster differs from that of a field star. Most stellar encounters are tidal, adiabatic and nearly-parabolic. The parameters that characterize the impact of an encounter include the orientation of the protoplanetary disk and planet relative to the orbit of the encountering star, and the orbital phase and the semi-major axis of the planet. We investigate this dependence for close encounters (\(r_{p}/a\leq 100\), where \(r_{p}\) is the periastron distance of the encountering star and \(a\) is the semi-major axis of the planet). We also investigate distant perturbers (\(r_{p}/a\gg 100\)), which have a moderate effect on the dynamical evolution of the planet and the protoplanetary disk. We find that the evolution of protoplanetary disks in star clusters differs significantly from that of isolated systems. When interpreting the outcome of the planet formation process, it is thus important to consider their birth environments. keywords: Planets and satellites: dynamical evolution and stability; protoplanetary discs ; (Galaxy:) open clusters and associations: general; hydrodynamics; stars: kinematics and dynamics ## 1 Introduction A significant fraction of stars in the Milky Way is thought to host one or more planetary companions (e.g., Mayo et al., 2018; Thompson et al., 2018), and even binary star systems can host exoplanets (Gould et al., 2014). Over 5380 exoplanets have now been identified in 3974 extra-solar planetary systems, among which 857 are multi-planetary systems1. Most stars form in clustered environments (e.g., Lada & Lada, 2003). The majority of these embedded star-forming regions dissolve within 50 Myr (e.g., Leisawitz, David & Bash, 1989; De Grijs et al., 2009; De Grijs, 2009), while the remainder evolves into open clusters. Observational evidence also suggests that our Solar system may have formed in a clustered environment (e.g., Adams, 2010; Pfalzner, 2013; Portegies Zwart et al., 2018). The planet formation process and the early dynamical evolution of star-forming regions occur at comparable timescales. It is therefore of interest to model both processes simultaneously. Footnote 1: [http://exoplanet.eu](http://exoplanet.eu), accessed on 17 May 2023 During the early evolution of protoplanetary systems, gravitational interactions with neighbouring stars can affect the evolution of protoplanetary disks and young planetary systems (e.g., Thies et al., 2005; Olczek et al., 2012; Portegies Zwart, 2016; Vinncke & Pfalzner, 2018). These close encounters may leave imprints on planetary systems that later become part of the much older population in the Galactic neighbourhood. Stellar encounters may perturb or even disrupt protoplanetary disks and planetary systems. This mechanism results in free-floating planets in star clusters. 
When they have sufficiently high speeds, these free-floating planets can rapidly escape from their parental cluster (Wang et al., 2015; Kouwenhoven et al., 2020). The free-floating planets may also migrate to the outskirts of the star cluster, to be eventually stripped off by the Galactic tidal field or recaptured by other stars (e.g., Perets & Kouwenhoven, 2012). Although substantial progress has been made in recent years, modeling the dynamical evolution of young planetary systems in dense stellar environments remains computationally complex. Numerical challenges arise from the large dynamical ranges in the length scale, the time scale, and the mass range that have to be implemented in the code. A second difficulty is the inclusion of gas in the model of the cluster and the planetary systems. A fully self-consistent simulation of young, gas-rich star clusters with planetary systems remains challenging. Different approaches have been taken to partially overcome these challenges: (i) modeling of isolated planetary systems; (ii) scattering experiment for modeling the evolution of multi-planet systems; (iii) modeling of single-planet systems in star-cluster environments; and (iv) separately modeling the star clusters and planetary systems, under the assumption that the planetary dynamics do not affect the the stellar components. Similar approaches can also be used to model the evolution of circumstellar gas disks in star clusters, as demonstrated in this study. Spurzem, et al. (2009) present a comprehensive study of planetary system evolution in star clusters. They find that numerical results obtained with the direct \(N\)-body code NBODY6++(Spurzem, 1999) and with a hybrid Monte Carlo code (Spurzem & Giersz, 1996; Giersz & Spurzem, 2000, 2003) are consistent with theoretical estimates. Zheng et al. (2015) used NBODY6 to build the evolution of single-planet systems in multi-mass open star clusters, and derived analytical prescriptions for the retention rate of planetary companions and free-floating planets as a function of initial semi-major axis and cluster properties. Fujii et al. (2019) followed a more comprehensive approach for developing an analytical prescription of the escape probability. Their study focus on the Pleiades, Hyades and Praesepe clusters, which are thought to have formed in highly-substructured star-forming regions (e.g., Fujii et al., 2012; Sabbi et al., 2012; Fujii & Portegies Zwart, 2015). The study focused on single-planet systems orbiting Solar-like stars. The escape probability dependence on the semi-major axis, \(a_{p}\), follows the distribution \(p_{\rm{ex}}\propto a_{p}^{-0.76}\), which is consistent with that of Cai et al. (2017). Pu & Lai (2018) model two-planet systems in star clusters using REBOUND (Rein & Liu, 2012), and compare their results to hybrid secular equations. Their approach provided insight into the origin of super-Earths and sub-Neptunes, and the Kepler-11 system in particular. They explain why multiple transiting planets appear to be dynamically 'colder' than those with a single transiting planet. Cai et al. (2017) modeled open star clusters with Solar-like stars that host five equal-mass planets separated by \(10-100\) mutual Hill radii. Most host stars retain their planets, although stellar encounters and planet-planet interactions trigger perturbations in eccentricity and inclination that occasionally lead to a decay of the system. Cai et al. (2018) and Cai et al. 
(2019) analysed how the signatures of the star cluster affects the observed characteristics of exoplanet systems in the Galactic field. Flammini Dotti et al. (2019) studied the impact of stellar encounters on the evolution of planetary systems similar to our Solar system. Their study shows that planet-planet scattering is a important consequence of perturbation on previously stable planetary systems, and that the stability of the system depends on the orbital architecture and the planetary mass spectrum. Similarly, Wu et al. (2023) found that for planetary systems in star clusters, planets can affect the dynamical evolution debris particles far beyond their Hill radii. The approaches used for modeling the evolution of planetary systems in star clusters can also be used to study the evolution of protoplanetary systems in such environments. protoplanetary systems are associated with gas accretion onto proto-stars (Armitage, 2019). Gas is accreted onto the star and partially ejected from its poles due to the magnetic field. The gas is eventually distributed in a disk-like shape around the star, and a protoplanetary disk is formed, composed of gas and dust. Turbulence arises from hydro-magnetic instabilities, which eventually leads to dust agglomeration. Pebbles and planetesimals start to form, leading to the formation of the protoplanetary cores and ultimately planets (Papaloizou, 2005; Papaloizou et al., 2007). The early evolution of protoplanetary systems is affected by the neighbouring stellar population. A good example of this is the Neptune-like planet in a binary system near the Hyades cluster (Ciardi et al., 2018). Current theories do not predict this formation scenario. Therefore, these types of planetary systems raised the need to study how the imprint of an close encounter would eventually lead to a different evolutionary scenario. A number of disks in dense star clusters have been detected (e.g., Hernandez et al., 2010; Mann et al., 2015). HARPS-N (Pepe et al., 2000) observed the first multi-planet system in a young massive cluster (M44; see Malavolta et al., 2016). Low-mass disks, such as those analysed in isolated cases (e.g., Antonyuk et al., 2015) have a negligible gravitational feedback on the hosting stars. However, these protoplanetary disks can be substantially truncated in a timescale longer than the crossing time (Rosotti et al., 2014). Smoothed Particle Hydrodynamics (SPH) codes can be used to model the dynamical evolution of protoplanetary disks. These codes are fully Lagrangian, and are particularly suited for non-symmetric systems and for dealing with self-gravity (for recent reviews see, e.g., Monaghan, 2005; Springel, 2010; Price & Monaghan, 2007; Price, 2011). A recent application of SPH to the study of the feedback of a protoplanetary disk around a _target_ star by a close passage of a _bullet_ star is found in Cattolico & Capuzzo-Dolcetta (2020). In this paper, we investigate the evolution of protoplanetary systems in dense stellar environments. We combine \(N\)-body simulations for the stellar dynamics in a cluster with SPH treatment for the gaseous disk dynamics around a star. We aim to understand how planetary systems dynamically interact with both star cluster and the evolving protoplanetary disk. This paper is organized as follows. In Section 2 we present our methodology. In Section 3 we discuss our various models and their results. Finally, we summarize and discuss our conclusions in Section 4. 
## 2 Methodology and initial conditions ### Initial conditions - Star cluster We model a star cluster containing \(10\,000\) stars. We adopt the (Plummer, 1911) density profile in virial equilibrium, with a virial radius of \(1\) pc. Stellar masses are drawn from the Kroupa (2001) initial mass function (IMF), in the mass range \(0.1-25\,M_{\odot}\), and we adopt a Solar metallicity. For simplicity, our models do not include primordial binary systems. The star cluster is evolved in an external tidal field (the Standard Solar tidal field). The initial conditions of the star cluster are summarized in the top table of Table 1. We evolve the models for \(50\) Myr using NBODY6++GPU (Aarseth, 1999; Spurzem, 1999; Kamlah et al., 2022). ### Initial conditions - protoplanetary disk and planet The circumstellar gas disk is modeled with \(N_{\rm sph}=50\,000\) gas SPH particles, and has a mass \(M_{\rm disk}=10^{-3}\ M_{\odot}\). This particle number has been shown to be sufficient (see Pinto, Capuzzo-Dolcetta, & Magni, 2019; Cattolico & Capuzzo-Dolcetta, 2020) to reach good convergence and stability of a reasonable disk model of the type discussed hereafter. Figure 1 in \begin{table} \begin{tabular}{l c c c c} \hline Model ID & \(N_{\rm s}\) & \(M_{\rm cluster}\) & \(t_{\rm cr}\) & \(t_{\rm th}\) \\ & & \(M_{\odot}\) & Myr & Myr \\ \hline C05 & \(10\,000\) & \(5.87\times 10^{3}\) & \(0.18\) & \(26.59\) \\ \hline ID & \(M_{*}\) & Age & \(r_{\rm hs}\) & \(M_{\rm 1,esc}\) \\ & \(M_{\odot}\) & Myr & pc & \(M_{\odot}\) \\ \hline M1 & \(0.99\) & \(23.66\) & \(0.20\) & \(0.69\) \\ M2 & \(0.97\) & \(24.01\) & \(0.02\) & \(0.63\) \\ M3 & \(0.99\) & \(20.88\) & \(0.34\) & \(1.03\) \\ \hline \end{tabular} \end{table} Table 1: (_Top table_) Initial conditions for the star cluster model: the model ID (column 1, using the syntax C-_Q_, where \(Q\) is the virial ratio), the initial number of stars (column 2), the initial total star cluster mass (column 3), the initial crossing time and the initial half-mass relaxation time (columns 4 and 5). (_Central table_) Initial conditions for the three encounter classes, based on different distances between the bullet star and the host star) which the planetary system is subjected to: the disk case ID (column 1, using the syntax M-#, where the # stands for the model number), the host star mass (column 2), the star cluster age at the time of the encounter (column 3), the host star position in the star cluster (column 4), and the encountering star mass (column 5). (_Bottom table_) Main characteristics of the protoplanetary disk, in column order: the number of gas particles, the viscosity parameter, the internal and external cut-offs of the disk, the disk mass, and the planet mass. We will use model M1 as our reference model for the following sections, unless specified otherwise. dicates, indeed, how the surface distribution sample with a number of SPH particles from 50 000 well represents the proto-planetary disk under study. The disk revolves around a \(1\,M_{\odot}\) star in all models, and the star constitutes the centre of the reference system. The density follows a classical flared disk distribution model, with the following azimuthally-symmetric distribution: \[\rho(R,z)=\frac{\Sigma(R)}{H}\exp\left(-\frac{z^{2}}{2H^{2}}\right)\ \, \tag{1}\] where \(\Sigma(R)\) is the cylindrical radial surface density profile, and \(H\) is the vertical scale height, locally dependent on the cylindrical coordinate \(R\). 
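As a small numerical illustration of this density law (a sketch only: the surface-density normalisation and the constant aspect ratio \(H/R\) below are placeholder assumptions, while the power-law slope and cut-off radii follow the values quoted in the text):

```python
# Minimal sketch (illustrative only): evaluate the flared-disk density of Eq. (1)
# for a power-law surface density and an assumed constant aspect ratio H/R.
import numpy as np

def disk_density(R, z, r_in=10.0, r_out=100.0, sigma0=1.0, p=1.5, aspect=0.05):
    """rho(R, z) = Sigma(R)/H * exp(-z^2 / (2 H^2)), zero outside the cut-off radii."""
    R = np.asarray(R, dtype=float)
    sigma = np.where((R >= r_in) & (R <= r_out), sigma0 * (R / r_in) ** (-p), 0.0)
    H = aspect * R                      # placeholder scale height; H(R) is model-dependent
    return sigma / H * np.exp(-z**2 / (2.0 * H**2))

R = np.linspace(10.0, 100.0, 5)          # AU
print(disk_density(R, z=1.0))            # densities just above the mid-plane (arbitrary units)
```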
The disk is modelled with a radial density profile \(\Sigma\sim R^{-p}\) with \(p=-3/2\), according to the classical Hayashi (1981) scheme. The initial gas surface density profile is distributed between an inner cut-off radius \(r_{\rm in}\) and an outer cut-off radius of \(r_{\rm out}\), such that \(r_{\rm out}/r_{\rm in}=10\). Further details on the initialization of circumstellar disks are discussed in Pinto, Capuzzo-Dolcetta, & Magni (2019). To avoid excessively short time-steps in the regions close to the stars, a suitable computational'sink radius' is set up. All the gas particles that approach a star within the sink radius, provided that they are gravitationally bound, are accreted onto the object, and are subsequently excluded from the integration. The inner cut-off radius corresponds to the sink radius of the central star. The role of this quantity on the results is of minor relevance, as we are primarily focused on the global evolution of the disk and the planet, which depend more on the large-scale structure of the disk (its outward extension and former stability). The initial outer cut-off radius is motivated by our region of interest. The protoplanetary disk is initialized such that gas particles orbit the host star in roughly Keplerian orbits. The dynamical evolution of the star cluster and the consequent close encounters with neighbouring stars, may change the evolution of the gas disk over time, in addition to the internal processes that evolve the disk. We study the evolution of systems containing a single, Neptune-mass planet in orbit around the host star. The planet's mass is selected to obtain a system with a well-defined mass hierarchy between the star, the proto-planetary disk and the planet. The orbital properties of the planetary system are discussed in Section 3.3. ### Numerical method We perform the simulations of star clusters, planets and protoplanetary disks, by combining the _N_-body code NBODY6++GPU(Kamlah et al., 2022), the new _N_-body code SnIPES, and the SPH code GaSPH. The codes and the numerical approach are discussed below. #### 2.3.1 Star cluster simulations We first model the star cluster environment using NBODY6++GPU(Kamlah et al., 2022). NBODY6++GPU is the most recent update of the original NBODY6(Aarseth, 1999) and NBODY6++(Spurzem, 1999). The kinematic data of the star cluster members are then stored for subsequent high-resolution modeling the trajectories of neighbouring stars during their encounters with a planetary system. #### 2.3.2 Decision-making on the host star and its neighbours We select a host star through a encounter strength estimation via the \(k\) parameter; see Section 3.2 for details. The \(k\) parameter give us a range of encounters which we can choose from, while also determining the closest encounters. The selection is taken from the most effective encounter in the case of M2 and M3. We filter and select the encounter which tends to be (or is near) the impulsive, non-adiabatic and parabolic regions (Flammini Dotti et al., 2019). For M1 we use the same approach, but we also take into account which of the strongest encounters is the most probable. This is to ensure to use the statistically more probable short-distance strong encounter. After selecting a host star (the star with a planet and protoplanetary disk), we identify its stellar neighbour stars within a sphere of radius \(r_{s}\sim 40\,000\) AU at the time of closest approach. 
#### 2.3.3 Integrating the host star and neighbour sphere: SnIPES

When integrating the trajectories, we include the neighbour stars identified above. Perturbations from more distant stars are ignored. Since the kinematic data from NBODY6++GPU are not stored at the high temporal resolution required by the SPH code, we have developed an \(N\)-body integrator, SnIPES (_Stars 'Nd Inner Planets Evolution Solver_), which integrates the orbits of the neighbouring stars. SnIPES uses REBOUND (Rein & Liu, 2012) to handle the particle integration, with the high-order IAS15 integrator (Rein & Spiegel, 2015). The procedure shares some similarities with the code LonelyPlanets (e.g., Cai et al., 2015, 2016, 2017, 2018, 2019; Flammini Dotti et al., 2019), but it has been modified to focus on the stellar neighbours (of the order of a hundred in our code, unlike in LonelyPlanets, where a maximum of ten neighbours are integrated). We plan to release an article on this code in the near future.

Figure 1: Surface density distributions of the cut circumstellar disk for different particle numbers at 30 000 yr.

We then integrate all stars within the sphere for 1 Myr, starting 500 kyr before the closest approach and ending 500 kyr after the closest approach. We integrate and store the trajectories at a higher time resolution, so that the time intervals are shorter than the dissipation time of the disk (Pinto, Capuzzo-Dolcetta, & Magni, 2019).
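A minimal REBOUND-based sketch of this re-integration step is given below; it is an illustration only (the particle snapshot, output cadence and units are placeholders, not the SnIPES implementation):

```python
# Minimal sketch (not SnIPES itself): re-integrate a host star and its neighbours
# with REBOUND/IAS15 over a 1 Myr window centred on the closest approach.
import rebound

sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")   # sets G for these units
sim.integrator = "ias15"

# Placeholder snapshot taken 500 kyr before closest approach: host star plus a few
# neighbours (masses in Msun, positions in AU, velocities in AU/yr).
stars = [
    (0.99, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0),            # host star
    (0.69, 2.0e4, 1.0e3, 0.0, -0.2, 0.0, 0.0),       # encountering star
    (0.30, -1.5e4, 3.0e4, 5.0e3, 0.05, -0.1, 0.02),  # more distant neighbour
]
for m, x, y, z, vx, vy, vz in stars:
    sim.add(m=m, x=x, y=y, z=z, vx=vx, vy=vy, vz=vz)
sim.move_to_com()

# Store snapshots every 1000 yr over the 1 Myr window around closest approach.
for t in range(0, 1_000_000 + 1, 1000):
    sim.integrate(t)
    # here one would write out positions/velocities as input for the SPH stage
```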
The turbulent viscosity represents an approximate physical scheme which mimics the dynamics of the turbulence motions inside the gas, which, as a net result, leads to an inward transport of matter. This is achieved through the dissipation of the orbital motion of the gas (e.g., Papaloizou, 2005; Papaloizou et al., 2007). In an \(\alpha\)-disk, the gas dynamics is influenced by an effective kinematic viscosity that can be expressed as \[\nu=\alpha_{SS}c_{s}H\, \tag{2}\] where \(c_{s}\) is the local speed of sound and \(H\) is the vertical scale parameter, i.e., the local vertical distance from the disk mid-plane where the density and pressure of the disk are significantly decreased. Therefore, \(\alpha_{SS}\) represents a dimensionless parameter that characterizes the local strength of the disk viscosity. In our SPH code, we translate the kinematic viscosity in a classical SPH form where \(\alpha_{SS}\) is related to the strength of an additional artificial pressure term in the gas Eulerian equations (see Meru & Bate, 2012; Picogna & Marzari, 2013, for details). We refer also to Rosotti et al. (2014) for several examples of numerical models of turbulent viscosity in protoplanetary disks around stars found in dense star clusters. ### Disk modeling in the star cluster The initial setups are listed in the central table of Table 1. We use different configurations to model the evolution of the disk. We refer to the models as M1, M2 and M3. These refer to three types of encounters in the star cluster. M1 is a very close encounter (with a minimum distance between the encountering and host star \(<1000\) AU), M2 is an intermediate-distance encounter (within a distance \(<10,000\) AU and \(>1000\) AU) and finally M3, a large distance encounter (\(>10,000\) AU and \(<40,000\) AU). These encounter distances give a general idea of how an encounter affects the host star and its circumstellar material. A general description of how different encounter distances affect a system is provided in Section 3.2, and a more extensive discussion can be found in Flammini Dotti et al. (2019) and Spurzem, et al. (2009). The next set of sub-models refers to the rotation and counter-rotation of the protoplanetary disk. The dynamical evolution of the star cluster, and the consequent close encounters, may change the evolution of the disk. Moreover, according to Sanchez-Salcedo, Chametla, & Santillan (2018), the radial migration timescale of inner objects in the retro-grade rotation may be appreciably shorter than in the pro-grade rotation. We will verify this statement in our work. Finally, we will have three final sub-models, where we will add a Neptune-mass planet, at three different semi-major axes. These values are 30 AU (similar to Neptune in our own solar system), 50 AU (at the disk's half-mass radius) and 70 AU (in the outskirts of the denser section of the disk). The addition of a planet is fundamental to answer two main questions: (i) whether the dynamical evolution of the planet is changed in respect of the absence of both star cluster and protoplanetary disk and (ii) whether the final outcome after the encounter depends on the initial conditions. ## 3 Results In this section we describe the evolution of the star cluster. After that, we will analyse the gas ejected from the disk and accreted by the stars, in order to evaluate its role in the orbital evolution of the planet. 
We will classify the star cluster's encounters into three different classes, based on the predicted impact on the protoplanetary system, which is quantified by the \(k\)-parameter (Spurzem et al., 2009; Flammini Dotti et al., 2019), also described in Section 3.2. Finally, we study the dependence on the properties of the protoplanetary disk system.

### Star cluster evolution

The evolution of the Lagrangian radii of the star cluster is shown in Figure 2. Any star cluster substructure is removed within several crossing times, and the cluster settles into virial equilibrium on these timescales (e.g., Allison et al., 2009). The cluster starts to expand after roughly an initial half-mass relaxation time, resulting in the ejection of cluster members. On the same timescale, the more massive stars start to sink towards the star cluster centre, and the low-mass stars start to migrate to the outskirts (Khalisi et al., 2007). These processes are visible in the 90% and 70% Lagrangian radii. The 90% shell grows more abruptly because the star cluster fills its Roche lobe at about 10 Myr after the start of the simulation.

Figure 2: Evolution of the Lagrangian radii of the star cluster.

### Close encounters

The main star cluster properties that characterize the evolution of the architecture of a planetary system in a star cluster are the star cluster density, the encounter strengths and the relative velocities of the encountering stars. From the perspective of a planetary system in a dense stellar environment, the semi-major axis is the most important property that determines its dynamical evolution under the influence of stellar encounters. In our study we model protoplanetary disks with a Neptune-mass planet at three different semi-major axes: 70 AU, 50 AU and 30 AU. In the encounter analysis below, we consider a test particle at a semi-major axis of 50 AU. The properties of the stellar encounters are obtained from the star cluster model C05.

Before analysing the encounter strength parameter \(k\) (Flammini Dotti et al., 2019; Spurzem et al., 2009), we analyse the evolution of the periastron distances in Figure 3. The color bar indicates the Gaussian kernel-density estimate (KDE) of the dataset in the bi-dimensional space (\(k_{p}\), \(\hat{v}_{\infty}\)), normalised by \(10^{-5}\) for cosmetic purposes. This quantity is often used in statistics to compare the weight of parameters in more than one dimension. The points are weighted according to the local density of points in the parameter space; larger values indicate more data points (a value of 2.5 corresponds to approximately 3000 data points, while the minimum corresponds to less than 1 data point). The periastron evolution suggests that encounters at nearby distances are more frequent until roughly one relaxation time (\(\sim t_{\rm rlx}\)); at later times the probability of a close encounter drops drastically. The typical periastron distances of encountering stars are \(\sim 10^{4}\) AU before a relaxation time, while they are \(\sim 10^{6}\) AU after one relaxation time has passed. Close encounters with periastron distances below 1000 AU occasionally occur. The reference model therefore has two different encounter-distance phases, in the first of which nearby, and possibly more effective, close encounters may be found.

Figure 3: Temporal distribution of the instantaneous periastron distance \(p\) for stellar encounters with the nearest neighbour experienced in the star cluster model C05.
Before analysing Figure 4, we first describe the quantities on the abscissa and the ordinate. The normalised velocity-at-infinity \(\hat{v}_{\infty}\) is defined as

\[\hat{v}_{\infty}=v_{\infty}\left(\frac{G(m_{p}+m_{n})}{a_{p}}\right)^{-1/2}, \tag{3}\]

where \(v_{\infty}\) is the velocity-at-infinity, \(G\) is the gravitational constant, \(m_{p}\) is the host star mass, \(m_{n}\) is the encountering star mass, and \(a_{p}\) is the semi-major axis of the test particle. Equation (3) quantifies the ratio between the velocity of the neighbour star and the orbital velocity of the planet. The parameter \(k_{p}\) is defined as

\[k_{p}=\sqrt{\frac{2m_{p}}{m_{p}+m_{n}}\left(\frac{p}{a_{p}}\right)^{3}}\approx\left(\frac{p}{a_{p}}\right)^{3/2}, \tag{4}\]

where \(p\) is the periastron distance of the neighbour star. If \(m_{p}\approx m_{n}\), the mass factor in Equation (4) can be considered negligible.

Figure 4 shows the classification of the encounter strength using the \(k\) parameter plotted against the normalised velocity-at-infinity. Most encounters are adiabatic, tidal and nearly-parabolic. The test particle is located at a semi-major axis of 50 AU, corresponding to the half-mass radius of the protoplanetary disk. The black curve in Figure 4 separates impulsive (\(p/a<1\)) and tidal (\(p/a>1\)) encounters. The ratio between the periastron and the semi-major axis of the test particle reflects how close the encountering star and the test particle effectively get. This property is the most influential, as such encounters have the largest probability of disrupting planetary systems. The green curve in Figure 4 separates adiabatic (right) and non-adiabatic (left) encounters. The ratio between \(v_{\infty}\) and the orbital velocity of the test particle, \(\sqrt{G(m_{p}+m_{n})/a_{p}}\), is the key factor. If the velocities are comparable, or \(v_{\infty}\) is below this value, the encounter is non-adiabatic, i.e., the test particle is affected by the tidal (or physical) encounter. If they are not comparable, the encounter is adiabatic, i.e., the test particle is not affected by the encountering star. The blue curve in Figure 4 separates hyperbolic (above the curve) and near-parabolic (below the curve) encounters. Near-parabolic encounters are the most likely to affect the test particle effectively, as their encounter time is longer than that of hyperbolic orbits.

The coloured curves in Figure 4 mark the selected distances for our disk reference models M1, M2 and M3, at distances \(<1\,000\) AU (ocher), \(<10\,000\) AU and \(>1\,000\) AU (grey), and \(>10\,000\) AU and \(<40\,000\) AU (pink), respectively. In this plot we treat the distances and the periastrons as equal. We retrieve the cases shown in the next two sections of the paper from (i) the most probable and effective encounters for the M1 class (red-to-black data points, i.e., points that are near-parabolic, non-adiabatic and impulsive) and (ii) the most effective encounters within the respective interval for the M2 and M3 classes. Although the results are shown for a particle with a semi-major axis of 50 AU, the results for similar semi-major axes are comparable. A relatively large fraction of the encounters is impulsive.
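The quantities in Equations (3) and (4) are straightforward to evaluate. The sketch below (a minimal illustration, not the analysis code of this work; the example numbers are placeholders loosely based on model M1) computes \(k_{p}\) and \(\hat{v}_{\infty}\) for a single encounter and applies the impulsive/tidal criterion \(p/a_{p}<1\):

```python
# Minimal sketch (illustrative only): evaluate Eqs. (3)-(4) for a single encounter
# and apply the impulsive/tidal criterion p/a_p < 1.
import numpy as np

G = 4.0 * np.pi**2  # gravitational constant in AU^3 Msun^-1 yr^-2

def encounter_parameters(m_host, m_enc, a_p, p, v_inf):
    """Masses in Msun, a_p and periastron distance p in AU, v_inf in AU/yr."""
    v_orb = np.sqrt(G * (m_host + m_enc) / a_p)                       # velocity scale of the test particle
    v_hat = v_inf / v_orb                                             # Eq. (3)
    k_p = np.sqrt(2.0 * m_host / (m_host + m_enc) * (p / a_p) ** 3)   # Eq. (4)
    regime = "impulsive" if p / a_p < 1.0 else "tidal"
    return k_p, v_hat, regime

# Example: a 0.69 Msun perturber with periastron 500 AU, test particle at 50 AU around
# a 0.99 Msun host, v_inf of 0.2 AU/yr (about 1 km/s, typical of a young open cluster).
print(encounter_parameters(m_host=0.99, m_enc=0.69, a_p=50.0, p=500.0, v_inf=0.2))
```

The adiabatic/non-adiabatic and hyperbolic/near-parabolic boundaries are the curves shown in Figure 4 rather than single thresholds, so they are not reproduced in this sketch.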
This should not come as a surprise, as particles with large semi-major axes are Figure 4: The distributions of the encounter strength parameter \(k\) of the nearest stellar encounters, experienced by disk in model C05, at the disk’s half-mass radius, \(\sim\) 50 AU. The black curve separates tidal (right) and impulsive (left) encounters, the blue curve separates hyperbolic (above the curve) and near-parabolic encounters (below the curve), while the green curve separates adiabatic (right) and non-adiabatic (left) encounters. The curves are obtained from the properties of individual stellar encounters. The different data-point colours indicate the distribution density, with the most frequent encounters indicated in red, while less frequent encounters are indicated in blue. The other, grey and pink vertical curves are the distances of the encounter models chosen in this work (\(1\,000\) AU, \(1\,0\,000\) AU and \(40\,000\) AU) at their respective \(k_{p}\) value. The color bar ranges from \(0.0001\) to \(2.5\). more likely to be perturbed and ejected, therefore impulsive encounters are common, even in such clusters (Flammini Dotti et al., 2019). If we vary the semi-major axis, then the points will shift to the right in the plot for lower semi-major axes, and to the left for higher semi-major axes. The main consequence is that the number of impulsive encounters increase and it directly proportional to the semi-major axis as \(\propto a_{p}^{-3/2}\), assuming the same dynamical properties of the stars. The dynamics of circumstellar disks is different from that of a planet. The disk density and the presence of other objects (such as planets) may alter the probability of an effective encounter on the protoplanetary disk evolution. In the following, we will test if a certain number of parameters may change the final outcome of an encounter, with the given initial conditions. ### M1 cases #### 3.3.1 Models of protoplanetary disks and planets We explore the consequences of a single close encounter between two stars, for which we found the nearest approach at \(\approx 20\) AU. We will refer to the two stars in the simulation as the host star (the star which hosts the protoplanetary disk and the planet) and the bullet star (the star encountering with the host star in the barycentric reference system), respectively. We carry out several simulations for each event, with different initial configurations for the host star system. We vary (i) the inclination of the disk and the orbit of the planet, with respect to the star orbital plane, (ii) the orientation of the disk and planet's orbit, (iii) the planet's semi-major axis. We list our models in Table 2. The models analysed in this section are strictly limited to the subset of stellar encounters with a periastron distance \(\ll 1000\) AU. This class of encounters strongly affects the evolution of both the disk and the planet, since they tend to be _impulsive_. Encounters founded at larger distances have been widely studied (e.g., Spurzem, et al., 2009; Flammini Dotti et al., 2019, and references therein). Moreover, the kind of encounter we look for is more common in large density environment, as young massive clusters and globular clusters. An overview on how different encounter distances affect the planetary systems has been provided in Section 3.2. We describe, in the context of the reference event, a space of parameters that leads a diverse final outcome for the planet after an encounter. 
A variation in the dynamical evolution of both the planet and disk is expected, due to the differences in the initial conditions. The parameter's variations we present in our models are described in as follows. * Initial orbital phase of the planet: the planet is in a different initial position than the default position (which we define at \(\varphi=0^{\circ}\)). For an event with similar initial conditions, a different initial position for the planet may result in a completely different outcome of the encounter event. * Spatial orientation of the planetary orbit and of the disk: we simulate both co-rotating and counter-rotating systems. The rotation direction of the system, with respect to the direction from which the encountering star approaches, may determine the dynamical fate of the planet. Moreover, the dynamical evolution of the disk is affected by this relative orientation: a larger fraction of gas is expected to be perturbed by the bullet star when it scatters with the disk in a counter-rotating direction. * Inclination of the disk and planet's orbit in the reference system: the orientation of the system with respect to the direction from which the encountering star approaches the host star, affects how much of the circumstellar gas is scattered by the encountering star. Therefore, we may expect different consequences for different outcomes, depending on the mass and kinematics of the bullet star. * Planet semi-major axes: we use three different models: (a) \(a=30\) AU, similar to the semi-major axis of Neptune in our solar system; (b) \(a=50\) AU, placed at the disk's initial half-mass radius; (c) \(a=70\) AU, placed in the outskirts, beyond the denser section of the disk, which has its highest den \begin{table} \begin{tabular}{l c c c c} \hline Event ID & \(r\) or \(c\) & disk & planet & planet \\ & + / - & inclination (\({}^{\circ}\)) & phase (\({}^{\circ}\)) & \(a\) (AU) \\ \hline Dnd1 & \(+\) & 0 & — & — \\ Dnd2 & \(-\) & 0 & — & — \\ Dnd3 & \(+\) & 90 & — & — \\ D1 & \(+\) & 0 & 0 & 30 \\ D2 & \(-\) & 0 & 0 & 30 \\ D3 & \(+\) & 0 & 90 & 30 \\ D4 & \(+\) & 0 & 180 & 30 \\ D5 & \(+\) & 0 & 0 & 50 \\ D6 & \(-\) & 0 & 0 & 50 \\ D7 & \(+\) & 0 & 90 & 50 \\ D8 & \(+\) & 0 & 180 & 50 \\ D9 & \(+\) & 0 & 0 & 70 \\ D10 & \(-\) & 0 & 0 & 70 \\ D11 & \(+\) & 0 & 90 & 70 \\ D12 & \(+\) & 0 & 180 & 70 \\ D13 & \(+\) & 90 & 0 & 70 \\ \hline \end{tabular} \end{table} Table 2: Different initial conditions for the protoplanetary disk and planet models in the M1 encounter class: the model ID (column 1, using the syntax D-i, where \(i\) the model number) is used for the majority of models except for \(nd=\) no disk, the disk rotation direction, co-rotating (\(+\)) or counter-rotating (\(-\)) (column 2), the initial inclination of the disk (column 3), the initial phase of the planet (column 4), the initial semi-major axis of the planet (column 5). sity at \(\approx 50\) AU. Since both the disk and the planet are initialised with Keplerian orbits, the orbital speed of the planet decreases with semi-major axis. #### 3.3.2 M1 models analysis We present several results obtained from the dynamical evolution of the different models listed in Table 2 and described in the section above. We first explore the evolution of planet-less models to compare the gas loss evolution with similar models. Then we focus on the other models. The host star-planet distances are shown in Figures 5 and 6 and the bullet star-planet distances are shown in Figures 7 and 8. 
Below, we discuss our main findings for each of these models. #### 3.3.2.1 Planet-less models Models Dnd1, Dnd2 and Dnd3 are modeled without a planet. The disks in these models are oriented in a co-rotating, counter-rotating, and perpendicular plane with respect to the encountering star, respectively. Figure 9 shows that the quantity of gas captured by the host star in models Dnd3 and Dnd1 is smaller than in model Dnd2. This may be a consequence of gas particles that, in a counter-rotating disk, migrate inwards faster than in models with co-rotating disks (Breslau & Pfalzner, 2019). The difference between the quantity of gas migrating inwards for models Dnd3 and Dnd1 is likely a direct consequence of the different inclinations of the disk. This characteristic leads to less angular momentum exchange with the gas and a smaller scattering area for perturbation of the disk by the bullet star when the disk is perpendicular to the orbital plane. Therefore, we observe that, initially, the model Dnd1 absorbs more gas. However, the impact may have altered the orbits of the gas particles in the innermost regions, causing them to be accreted by the host star. Figure 10 is consistent with this hypothesis. The bullet star in model Dnd3 accretes a larger quantity of gas than the host star, but less than in the other two models, due to its smaller scattering area. In the counter-rotating and co-rotating models, the bullet star accretes a larger quantity of gas, as both of these models have co-planar disks. The host star in the co-rotating model accretes more gas than the host star in the counter-rotating model, due to the slightly longer duration of the fly-by, in the frame of the individual gas particles. #### 3.3.2.2 Different semi-major axis models The planets in models D1, D5 and D9 have semi-major axes of 30, 50 and 70 AU, respectively. These models have co-planar and co-rotating disks, with a planet in a circular orbit. We will describe these models below. 1. D1: the planet semi-major axis is 30 AU. After the encounter, the planet remains gravitationally bound to the host star, and obtains a highly eccentric orbit (\(e\sim 0.99\)). We compare the difference between the amounts of accreted gas by the host star and the bullet star in models without planets, and model D1, in Figures 9 and 10, respectively. Here, model Dnd1 can be compared to model D1. The amount of accreted gas is similar in both models, with a slightly larger (0.5%), percentage of gas absorbed by the bullet star. The gas absorbed by the planet in model D1 is negligible. 2. D5: the planet semi-major axis is 50 AU. The planet is ejected with the highest velocity among all of the models we dynamically evolved. After the encounter, the planet's relative speed is \(v_{\rm pl}\sim\)6 km/s, with respect to both of the stars. The planet has a smaller initial orbital speed due to its larger semi-major axis. Therefore, the planet will have a different phase, as compared to the planet in model D1. The orbital phase at the moment of the close encounter strongly influences the dynamical fate of the system. 3. D9: the planet semi-major axis is 70 AU. The planet appears to be weakly perturbed by the encounter, with a final eccentricity of \(e_{\rm fin}\sim 0.09\). Small differences in the planet's orbital phase at the moment of the encounter, may result in significant differences in the planet's orbital parameters after the encounter. A large semi-major axis increases the probability of a close encounter with the bullet star. 
Additionally, the planet is less gravitationally bound to the central star, and therefore more easily affected by smaller changes in the gravitational field. #### 3.3.2.3 Counter-rotating disk models D2, D6, D10 have the same initial conditions as models D1, D5 and D9 respectively, but a counter-rotating disk and planet. In other words, the orbital phase is inverted due to the opposite direction of orbital motion. The encounter causes an ejection of the planet in all models. Note that in this case the dynamical outcome completely changes, due to counter-rotation of the disk. The orbital eccentricity before the ejection grows for larger semi-major axes, suggesting that both quantities have an effect on the planet's dynamical fate. #### 3.3.2.4 Different orbital phases models Models D3, D4, D7, D8, D11 and D12 have the same initial conditions as the model D1 (for D3 and D4), D5 (for D7 and D8) and D9 (for D11 and D12), but their planets have different initial orbital phases: 90\({}^{\circ}\) and 180\({}^{\circ}\), respectively. The planet is perturbed similarly in the different phases cases, with one exception in the smaller semi-major axis. The final eccentricity for the models with \(a=70\) AU is \(e_{\rm fin}\sim 0.97\) for model D11 and \(e_{\rm fin}\sim 0.99\) for model D12. Similarly, the models with with \(a=50\) AU have \(e_{\rm fin}\sim 0.60\) for model D7 and \(e_{\rm fin}\sim 0.97\) for model D8. For models with \(a=30\) AU, the model D3 has its planet ejected and model D4 has \(e_{\rm fin}\sim 0.35\). Therefore, in the D11, D12, D7 and D8 models, the different orbital phase have similar consequences to the default position models D9 and D5 respectively. After the encounter, the planet in model D11 has a wider orbit (with an apoastron up to 215 AU) than the planet in model D12 (with an apoastron up to 87 AU). The planet in model D8 has a wider orbit (with an apoastron up to 300 AU) than the planet in model D7 (with an apoastron up to 161 AU). In both cases, the phase indicate a migration of the planet to more eccentric orbits and relatively large apoastron, as a consequence of such eccentricity. In models D3 and D4, we have an ejection and a relatively less perturbed planet in model D4, with \(e_{\rm fin}\sim 0.35\) resulting in an apoastron at 20 AU, which corresponds to internal migration. Therefore, the semi-major axis of the planet plays a key role in these models with different initial orbital phase. The dynamical evolution of the planet is drastically different for smaller semi-major axes, mostly near to the encounter periastron value (\(\approx 20\) AU). We note the important of the orbital phase of the planet, which has a strong influence on the outcome of the dynamical interaction with the neighbouring star. #### 3.3.2.5 Perpendicular disk model Model D13 has the same initial conditions as model D9, but the disk is perpendicular to the equatorial plane. The encountering star scatters the planet into a wider, and more eccentric, orbit. However, the planet remains gravitationally bound to the host star. These models have their planet perturbed more _efficiently_ than in model D9, in which the planet obtains an eccentric orbit with \(e_{\rm fin}\sim 0.76\). ### M2 and M3 cases Unlike the M1 cases we have discussed in Section 3.3.2, the cases of M2 and M3 are related to the contribution of the far neighbour stars on the host star. The description of the disk models can be found in Table 3. 
The main focus of this section is to identify the role of the star cluster environment in the evolution of the planetary orbit and the disk. Among all possible models we consider only the largest semi-major axis, as it is the most subject to external perturbations. We retain the same inclination for the planet and the disk models in these two cases, and we change neither the eccentricity nor the planet phase, since the effect of more distant stars on these orbital parameters is negligible. Nevertheless, we explore the effect of counter-rotation for the nearest class of encounters of this section, M2.

Figure 5: Distance between the host star and the planet over time, for all models that include a planet. The curve with the label r12 shows the distance between the host star and the bullet star. Except where mentioned otherwise, all models retain a semi-major axis relatively similar to that of their initial conditions.

Figure 6: Same as Figure 5, zooming in on the smaller distances from the host star. It is noticeable how the encounter history produced a different secular evolution for the bound planets.

Figure 7: Distance between the bullet star and the planet over time, for all models that include a planet. The curve labeled r12 shows the distance between the host star and the bullet star.

In the cases of M2 and M3, there is no important difference in the short-term evolution, where the disk and planet tend to remain relatively unperturbed. The dynamical evolution of the systems differs from that of field stars only over longer periods of time. Compared to earlier works on the same topic (e.g., Rosotti et al., 2014), we also analyse the effect on the planet. The results are comparable with Hao, Kouwenhoven, & Spurzem (2013): the planet is only weakly affected in the long-term evolution, similarly to what was predicted in Flammini Dotti et al. (2019). The D14 and D15 cases are mostly similar, except for the inverted rotation of the disk-planet system. The effect of the star cluster is weaker in D16.

## 4 Discussion and conclusions

Most stars form in dense stellar environments, where the gravitational influence of neighbouring stars may affect the planet formation process and the early evolution of planetary systems. In this study we analysed the effects of both close encounters and long-distance encounters on a protoplanetary disk containing an embedded planet.

Figure 11: Models D9 (orange), D11 (red), and D12 (dotted black): evolution of the semi-major axis of the planet, before and after the encounter with the bullet star. A negative semi-major axis indicates that the planet is bound neither to the host star nor to the bullet star.

Figure 8: Same as Figure 7, zooming in on the smaller distances from the encountering star. Although at larger distances, model D12 stays bound to the encountering star after being captured. Model D13 is also noteworthy: it has a larger orbit that passes near the encountering star, yet its planet remains gravitationally bound to the host star.

Figure 10: Fraction of the disk mass captured by the encountering star, in models Dnd1 (purple), Dnd2 (green), Dnd3 (cyan) and D1 (dotted orange). Similarly to Figure 9, the presence of the planet slightly enhances the fraction of gas captured by the bullet star.

Figure 9: Fraction of the disk mass captured by the host star, in models Dnd1 (purple), Dnd2 (green), Dnd3 (cyan) and D1 (dotted orange).
Model D1 has a slightly higher gas capture fraction, which could have a prominent effect over long periods, but which is not relevant on the time scale of a single encounter event.

Our goals were (i) to investigate the evolution of the gas distribution of the disk due to the effects of the encounters and (ii) to study the consequences of both the stellar and the disk perturbation on the planet orbit, as compared to a similar system in a pure _N_-body framework (i.e., a star without a protoplanetary disk). Our findings can be summarised as follows.

* The presence of a protoplanetary disk significantly impacts the dynamical fate of the planet. Our work suggests that the orbit of the planet is less perturbed by a close encounter when a protoplanetary disk is present. We will carry out a more comprehensive study on this matter in the near future to further prove this point.
* Encounters at \(r_{p}/a\leq~{}100\) are the ones that contribute most to modifying the architecture of a protoplanetary system with a planet on a short time scale (\(<1\) Myr). Distant encounters (i.e., \(r_{p}/a\gg~{}100\)) are less important in perturbing the dynamical evolution of the planet and the disk.
* All the parameters we varied in the simulations have an impact on the dynamical fate of the planetary systems. The semi-major axis and the planet's initial orbital phase have the strongest impact. The influence of the inclination depends on the direction from which the encountering star approaches, and determines the effective scattering area.
* Below, we provide a description of the role of each parameter:
  * The difference between the orbital phases of the planets in the different models affects the dynamical outcome after the interaction with the bullet star has occurred. A clear example is shown in Figure 11, which illustrates the evolution of the planet's semi-major axis in models D9, D11 and D12. In these models, the planets have initial orbital phases of \(0^{\circ}\), \(90^{\circ}\) and \(180^{\circ}\), respectively. In model D12, the planet is captured by the bullet star. In the other models, the semi-major axis increases significantly, although the planet is still gravitationally bound to its host star. This clearly shows that the dynamical outcome of the encounter changes radically when the initial orbital phase of the planet is changed.
  * The semi-major axis also plays an important role. The essentially Keplerian motion of the planet leads to different orbital velocities at different semi-major axes. Therefore, a different choice for the initial distance between the planet and the host star changes the relative position and velocity of the planet with respect to the bullet star during the encounter. The collisional cross section is also larger for a larger semi-major axis.
  * The inclination of the disk and the planet (relative to the orbital plane of the encountering star) also appears to be important. The inclination and the mass scattered away from the gaseous disk may be intrinsically related, and this represents an important difference in the final outcome of the encounter. We observed both inward and outward migration, which will be studied in greater detail in future work.

In this study, we have not taken into account the gravitational influence of multiple planets embedded in the circumstellar disk. We have also not taken into account the effect of the radiation of the host star and of neighbouring stars on the disk.
The presence of O/B stars in the star cluster substantially affects the evolution of the disk through photo-evaporation. Note, however, that the latter is only important during the first few million years. Future studies are needed to extend the parameter space and to include a better treatment of distant neighbours in dense star clusters.

## Acknowledgments

We thank the referee for his invaluable help. FFD acknowledges support from the DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" under project Sp 345/22-1. F.F.D. and M.B.N.K. were supported by the Research Development Fund (grant RDF-16-01-16) of Xi'an Jiaotong-Liverpool University (XJTLU). M.B.N.K. was supported by the National Natural Science Foundation of China (grant 11573004). We acknowledge the tremendous help of Luis Diego Pinto, for his comments and suggestions and for assistance with the use of the GaSPH code.

## Data availability

The data underlying this article will be shared on reasonable request to the corresponding author.

\begin{table} \begin{tabular}{l c c c c} \hline Event ID & \(r\) or \(c\) & disk & planet & planet \\ & + / - & inclination (\({}^{\circ}\)) & phase (\({}^{\circ}\)) & \(a\) (AU) \\ \hline D14 & + & 0 & 0 & 70 \\ D15 & \(-\) & 0 & 0 & 70 \\ D16 & + & 0 & 0 & 70 \\ \hline \end{tabular} \end{table} Table 3: Initial conditions for the protoplanetary disk and planet models in the M2 and M3 encounter classes: the model ID (column 1, using the syntax D-\(i\), where \(i\) is the model number), the disk rotation direction, co-rotating (\(+\)) or counter-rotating (\(-\)) (column 2), the initial inclination of the disk (column 3), the initial phase of the planet (column 4), and the initial semi-major axis of the planet (column 5).
Most of the stars are born in dense stellar environments where the formation and early evolution of planetary systems may be significantly perturbed by encounters with neighboring stars. To investigate the fate of circumstellar disks and planets around young stars, we numerically evolve star-disk-planet systems. We use the N-body codes NBODY6++GPU and SnIPES for the dynamical evolution of the stellar population, and the SPH-based code GaSPH for the dynamical evolution of protoplanetary disks. The secular evolution of a planetary system in a cluster differs from that of a field star. Most stellar encounters are tidal, adiabatic, and nearly-parabolic. The parameters that characterize the impact of an encounter include the orientation of the protoplanetary disk and planet relative to the orbit of the encountering star, and the orbital phase and the semi-major axis of the planet. We investigate this dependence for close encounters ($r_p/a \leq 100$, where $r_p$ is the periastron distance of the encountering star and $a$ is the semi-major axis of the planet).
2309.08181
Large Language Models for Failure Mode Classification: An Investigation
In this paper we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the task of automatically labelling an observation with a corresponding failure mode code, is a critical task in the maintenance domain as it reduces the need for reliability engineers to spend their time manually analysing work orders. We detail our approach to prompt engineering to enable an LLM to predict the failure mode of a given observation using a restricted code list. We demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on annotated data is a significant improvement over a currently available text classification model (F1=0.60) trained on the same annotated data set. The fine-tuned model also outperforms the out-of-the box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.
Michael Stewart, Melinda Hodkiewicz, Sirui Li
2023-09-15T06:13:01
http://arxiv.org/abs/2309.08181v1
# Large Language Models for Failure Mode Classification: An Investigation

###### Abstract

In this paper we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the task of automatically labelling an observation with a corresponding failure mode code, is a critical task in the maintenance domain as it reduces the need for reliability engineers to spend their time manually analysing work orders. We detail our approach to prompt engineering to enable an LLM to predict the failure mode of a given observation using a restricted code list. We demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on annotated data is a significant improvement over a currently available text classification model (F1=0.60) trained on the same annotated data set. The fine-tuned model also outperforms the out-of-the-box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.

Keywords: Technical Language Processing, Failure Mode, Large Language Models, Maintenance

## 1 Introduction

The maintenance of assets plays a critical role in the safety and costs of industrial organisations. One of the key tasks within maintenance is failure mode identification. This task is done by reliability engineers to capture and code failure and other undesirable events. These failure mode codes, together with data such as the cost/production/service impact and the safety and environmental consequences of the event, are used to prioritise improvement work and update maintenance strategy, and can assist product/plant engineers to improve future design by updating their failure modes and effects analysis. Consistent and reproducible failure mode code assignment is difficult because the observations of each event are captured by field technicians in natural language. For example, consider the following maintenance work order texts:

* pump runs for a while and trip
* engin does not work
* pmp spraying out slurry
* seal leaking
* leak in seal

Each of these work orders contains an observation made by the field technician, such as "does not work", "leaking", and so on. In any maintenance management system there are thousands of these observations, and each needs a failure mode classification (FMC), such as "leaking" or "breakdown", according to an agreed list. The challenge is that each person doing the coding, whether it be the technician generating the work order or the reliability engineer reviewing it, comes with their own mental model of the asset and its behaviour [Sexton et al., 2019]. Further, attention to the task of coding accurately is influenced by factors such as training, managerial support, technological input control and motivation [Murphy, 2009, Unsworth et al., 2011, Molina et al., 2013]. It is too expensive to have university-trained reliability engineers review each of these codes manually given the volume. The opportunity for AI to assist in failure mode classification is therefore an active research area [Sexton et al., 2018, Akhbardeh et al., 2020, Sala et al., 2022, Stewart et al., 2022, Usuga-Cadavid et al., 2022]. There has recently been a surge of interest in Large Language Models (LLMs), predominantly as a result of the popularity of chatbot interfaces such as ChatGPT1. LLMs such as OpenAI's GPT-3.52 have been trained on massive corpora and thus encapsulate knowledge from a wide variety of domains.
It has also been shown that LLMs require little to no fine-tuning, meaning they exhibit excellent performance with barely any annotated training data [Brown et al., 2020]. Rather than focusing on developing manually-annotated datasets to train models (like with more "traditional" text classification models such as Flair [Akbik et al., 2018]), users of LLMs typically employ _prompt engineering_ in order to craft their input prompt to elicit a particular response from the model. As a result of their excellent performance on a wide range of natural language processing tasks, LLMs have already been applied to a variety of domains. Examples include medicine [Singhal et al., 2022, Thirunavukarasu et al., 2023], education [Kasneci et al., 2023], and vehicle accident records [Mumtarin et al., 2023]. Footnote 1: [https://chat.openai.com/](https://chat.openai.com/) Footnote 2: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) However, to the best of our knowledge, no research has yet investigated the use of LLMs within the maintenance domain, let alone specifically for FMC. In light of this research gap, and the potential for automated FMC to enable significant time and cost benefits to industry, we present an investigation into the effectiveness of using Large Language Models for Failure Mode Classification. Our contributions are as follows: * We investigate the most effective prompt format for performing FMC using an LLM without any fine-tuning. * We determine whether it is necessary to fine-tune an LLM on a set of annotated data to achieve good FMC performance. * We provide a comparison between the performance of fine-tuned LLMs and text classification models for FMC. This paper is structured as follows. We begin by providing an outline of our models, methods and experiments, and detail the dataset that we use for fine-tuning and evaluation. We then present our results, which directly tie in to our contributions above. Finally, we present our conclusion and an outlook to future work. The source code of this paper is open source and is available on GitHub. ## 2 Methods The aim of this paper is to evaluate the applicability of Large Language Models (LLMs) to Failure Mode Classification (FMC). In this section we provide an overview of the dataset we are using for our evaluation, as well as the models that we evaluate in Section 3. ### Dataset The dataset on which we evaluate each model is an extract from the annotated maintenance work order dataset introduced by [Stewart et al., 2022] and available on PapersWithCode3. The data set consists of 502 (observation, label) pairs for training, 62 for validation, and 62 for testing. The observations, which are written in natural language, were extracted from a set of maintenance work orders using Named Entity Recognition (NER). The labels are taken from a set of 22 failure mode codes from ISO 14224 4. Each observation was labelled by a domain expert. Some examples from this dataset are as follows: Footnote 4: [https://www.iso.org/standard/64076.html](https://www.iso.org/standard/64076.html) * broken, Breakdown * leaking fluid, Leaking * too hot, Overheating * triping, Electrical * not starting, Failure to start on demand This open data set and the model presented in [Stewart et al., 2022] represent the state-of-the-art for FMC in the literature at this point in time and hence are used for comparative purposes. ### Models We evaluate the following models: 1. 
**Flair**: A Flair-based [Akbik et al., 2018] text classification model, trained on the annotated dataset. 2. **GPT-3.5**: The off-the-shelf GPT-3.5-Turbo model from OpenAI. 3. **GPT-3.5 (Fine-tuned)**: The GPT-3.5-Turbo model, fine-tuned on the annotated dataset. The Flair model is a Bidirectional Long Short-Term Memory-based [Hochreiter and Schmidhuber, 1997] text classification model that takes a sequence of text as input, and predicts a single label. This is the same model as used in [Stewart et al., 2022], and further implementation details are available in the respective paper. The first layer of the model, the embedding layer, was pre-trained by the Flair developers on a corpora of web, Wikipedia data, and subtitles, and thus the model has little innate knowledge of maintenance. The model was trained by [Stewart et al., 2022] on the dataset of 502 (observation, label) pairs and validated on the 62-pair validation set. In contrast to the GPT-based models, the computational requirements of training and using this model are low enough to be able to train on most desktop computers. This also means it can be used offline, and is thus appropriate for handling sensitive data. The LLM-based models are based on OpenAI's GPT-3.5 [Brown et al., 2020]5, the model behind ChatGPT6. The GPT-3.5 model is "off-the-shelf" in that we are using the model without any form of fine-tuning. We are relying on the model's knowledge of maintenance that it has gleaned from its massive training corpora in order to task it to perform failure mode classification. The GPT-3.5 (Fine-tuned) model, on the other hand, is fine-tuned on the annotated dataset of 502 (observation, label) pairs, and validated on the 62-pair validation set. Footnote 5: GPT-4.0 was not available for fine-tuning as of the time of writing, hence the decision to use GPT-3.5. ### Data preparation [{ "role": "system", "content": "Determine the failure mode of the observation provided by the user." }, { "role": "user", "content": "too hot" }, { "role": "assistant", "content": "Overheating" }] Listing 1: An example prompt that is fed into the GPT-3.5 and GPT-3.5 (Fine-tuned) models. The role of the assistant is only used during fine-tuning. The default behaviour of the GPT-based models is to act as a chatbot, and thus it will not respond with a failure mode code for a given observation unless the instruction to do so is included as part of the prompt. Structuring an input prompt to elicit a particular response from a large language model is known as _prompt engineering_. The latest versions of the GPT-based models require a three-part prompt. The system-level prompt dictates the desired response format of the model. For example, one can use this prompt to ask the model to reply in a sarcastic tone, or to reply with a one-word answer, and so on. The user-level prompt is the input from the user. Finally, the assistant-level prompt is the desired input from the LLM (this is used when fine-tuning to inform the model of the expected output). To create the prompts, we wrote Python code to iterate through the annotated CSV-based dataset and convert each (observation, label) pair into a prompt as shown in Listing 1. The same system-level prompt is used for each input to the model, and describes the task to perform (failure mode classification). We use the user-level prompt to provide the model with the observation that we want it to label. 
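To make the data-preparation step above concrete, the following minimal sketch shows one way the CSV-based dataset could be converted into the chat-style JSONL format used for fine-tuning. The file names and column headers are illustrative assumptions rather than the authors' actual artefacts, and the {"messages": ...} layout follows OpenAI's fine-tuning examples referenced in this section; consult the current documentation before relying on it.

```
import csv
import json

# Hypothetical file names and column headers ("observation", "label").
SYSTEM_PROMPT = "Determine the failure mode of the observation provided by the user."

def to_chat_example(observation, label=None):
    # The assistant turn is only included when building fine-tuning data;
    # at inference time the prompt stops after the user turn.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": observation},
    ]
    if label is not None:
        messages.append({"role": "assistant", "content": label})
    return {"messages": messages}

with open("train.csv", newline="") as f_in, open("train.jsonl", "w") as f_out:
    for row in csv.DictReader(f_in):
        example = to_chat_example(row["observation"], row["label"])
        f_out.write(json.dumps(example) + "\n")
```

The same helper can be reused with label=None at inference time to build the three-part prompt described in this section.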
During the fine-tuning of the GPT-3.5 (Fine-tuned) model, we include an assistant-level prompt that informs the model of the desired output for each observation (i.e. the failure mode). The design of these prompts was based on the best practices listed in the OpenAI Documentation7. Footnote 7: [https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples](https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples)

In our experiments we also investigate the necessity of adding the following two texts to the system-level prompt:

* In Section 3.1, we include the sentence "Your answer should contain only the failure mode and nothing else." to instruct the language model to avoid outputting unnecessary text (e.g. "The failure mode is...", etc.).
* In Section 3.2 we include "Valid failure modes are: " followed by a newline-separated list of valid labels from the dataset. This is an attempt to ensure that the model does not come up with its own failure modes, but instead outputs a failure mode code from the prescribed list.

### Evaluation metrics

In the same manner as [Stewart et al., 2022], we evaluate each model using the Micro F1 and Macro F1 scores. Micro F1 calculates an F1-Score by adding the true positives (TPs), false positives (FPs) and false negatives (FNs) from all class labels together and then calculating the F1-Score:

\[MicroF1=F1_{(class_{1}+class_{2}+...+class_{n})} \tag{1}\]

Macro F1, on the other hand, simply averages the F1-Score of each class. Given that N is the number of class labels, it is calculated as follows:

\[MacroF1=\frac{\sum_{n\in N}F1_{class_{n}}}{N} \tag{2}\]

## 3 Results

This section aims to answer the following questions:

1. How best to use an off-the-shelf Large Language Model (LLM) to perform Failure Mode Classification (FMC)?
2. Is it necessary to fine-tune the LLM to perform FMC?
3. Are LLMs more effective at FMC than text classification models?
4. What are some barriers one may face when using LLMs for FMC?

### How best to use an off-the-shelf LLM to perform Failure Mode Classification?

To address the first research question we begin by investigating the use of a simple system-level prompt of "Determine the failure mode of the observation provided by the user.". Upon feeding this prompt into the model, along with the user-level prompt (the observation, e.g. "runs for a while and trip"), the LLM produces outputs as shown in Table 1. These outputs, which are conversational in nature, are not machine-readable and are therefore not applicable to downstream analysis. A more specific prompt is needed to perform FMC.

\begin{table} \begin{tabular}{l|l} **Observation** & **LLM output** \\ \hline runs for a while and trip & The failure mode of the observation "runs for a while and trips" suggests an electrical failure. This could be due to an overload, short circuit, or other electrical issue that causes the equipment to shut down or trip a circuit breaker. \\ does not work & The failure mode of the observation "does not work" is a breakdown. \\ spilling & The failure mode of the observation "spillage" is a leakage. \\ spraying out slurry & The failure mode of the observation "spraying out slurry" is leaking. \\ \end{tabular} \end{table} Table 1: Some examples of predictions made by the off-the-shelf GPT-3.5-Turbo on a sample of the test data. The system-level prompt is "Determine the failure mode of the observation provided by the user."

In light of this, we next add the phrase "Your answer should contain only the failure mode and nothing else."
to the system-level prompt. Adding this sentence to the prompt results in the model predicting a single failure mode for each observation, as shown in Table 2. However, there are several notable issues with the outputs of the model after adding this phrase. Firstly, despite the addition of the phrase in the prompt, the model still occasionally adds additional text to its response. One such example is its response for the phrase "failed electrical", to which it also adds "Failure mode: " prior to the actual classification. It also occasionally disregards the instruction when it was not capable of recognising a particular failure mode, for example in its classification of "high earth reading". While the LLM is capable of predicting failure modes using this prompt, they are not aligned with any particular failure mode ontology. Downstream analysis using these failure modes is thus not possible, due to the sheer number of possible failure modes and inconsistency between them. For example, the model predicts both "Leakage" and "Leaking", which are the same failure mode written two different ways. One can liken the LLM's predicted failure modes to that which might be produced by a layperson, i.e. not a domain expert. The non fine-tuned model also has difficulties producing consistent failure mode labels when dealing with uncertainty. When the model is unable to classify the observation, it responds in a variety of different ways, for example "Insufficient information", "N/A", "None", "No failure mode detected.", "No failure mode provided.", and so on. Attempting to resolve all possible variations of these phrases into a single classification (such as "Unknown" or "Other") is a non-trivial task, and thus the outputs of this model are not readily applicable to downstream tasks. In an attempt to solve this issue we add a final phrase to the prompt: "Valid failure modes include: " followed by a newline-separated list of the failure mode labels appearing across the entire dataset. We found that this addition generally causes the model to behave as expected. However, it occasionally hallucinates labels: for example, it predicts the label "Fail to open" for "sticking shu", and "Fail to adjust" for "cant be adjusted". It also has issues with label consistency - for example, it predicts both "Fail to function" and "Failure to function". Similarly to the previous attempt without constraining the label space, this attempt at using the LLM directly without fine-tuning is not directly applicable to failure mode analysis as a result of these issues. In summary we have demonstrated that it is possible to engineer the prompt to enable the LLM to predict failure mode codes without any fine-tuning. However, these outputs are not grounded in any particular ontology and are inconsistent. ### Is it necessary to fine-tune the LLM to perform Failure Mode Classification? We now aim to determine whether fine-tuning the LLM on a purpose-built dataset is necessary, or whether similar performance can be achieved without fine-tuning. We focus our attention on a comparison between the GPT-3.5 model, and GPT-3.5 (Fine-tuned). The former model has been fed with the prompt discussed at the end of 3.1, i.e. it constrains the model to predict only the failure mode and nothing else, and also provides it with a list of the valid failure modes from the dataset. 
The latter model has been fine-tuned on the 500 (observation, label) pairs in the training dataset, and the prompt does not contain the aforementioned constraints (as they are not necessary due to the fine-tuning). Table 3 shows the results of each model on the test dataset. It is clear that fine-tuning has a significant impact on performance, as the Micro-F1 score jumps from 0.46 to 0.81 between the non fine-tuned and fine-tuned models respectively. The results of the non fine-tuned model indicate that it does possess knowledge of maintenance, though, as it was capable of getting nearly half of all predictions correct without any form of fine-tuning. \begin{table} \begin{tabular}{l|l} **Observation** & **LLM output** \\ \hline runs for a while and trip & Overheating \\ very stiff to operate & Stiff operation \\ requires rebuild & Noisy operation \\ has no equipment earth & N/A \\ high earth reading & No failure mode can be determined from the given observation. \\ failed electrical & Failure mode: Electrical failure \\ \hline \end{tabular} \end{table} Table 2: Some examples of predictions made by the off-the-shelf GPT-3.5-Turbo on a sample of the test data. The system-level prompt is “Determine the failure mode of the observation provided by the user. Your answer should contain only the failure mode and nothing else.” We also tested the effectiveness of "few-shot learning", i.e. providing a list of example (observation: failure mode) pairs to the model as part of the system-level prompt as opposed to a list of only the valid failure modes. We found that the results were near identical to the non fine-tuned model, and thus did not include these results in the table for brevity. Overall, the results show that fine-tuning is necessary to achieve strong performance. This demonstrates the importance of high quality annotated data when applying LLMs to maintenance work orders. ### Are LLMs more effective at failure mode classification than text classification models? To answer this final research question we focus our attention to a comparison between the Flair text classification model from (Stewart et al., 2022) and the GPT-3.5 models. As shown in Table 3, the LLM significantly outperforms Flair, but only after fine-tuning. Without fine-tuning, Flair exhibits much stronger performance, indicating the necessity of annotated training data to be able to perform this particular task. After fine-tuning on the annotated data, the LLM performs significantly better than Flair. It also tends to fair better on the minority classes, such as "Failure to start on demand", "Failure to stop on demand", etc, which we argue can be attributed to the underlying knowledge made available as part of the LLM's lengthy training process on a large corpora. In summary, our results show this LLM is more effective at FMC than the text classification model, but only when the LLM is fine-tuned to perform this task. ### What are some barriers one may face when using LLMs for FMC? Overall we found the process of using and fine-tuning GPT-3.5 fairly straightforward, though we experienced a couple of issues that are worth noting. Firstly, the non-deterministic nature of LLMs mean that they can produce different output given the same input. There is a built-in temperature parameter which can be set to 0 to reduce the likelihood of this occurring, but in our experience we were still receiving slightly different results each time we ran our experiments. 
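As a point of reference for the temperature setting discussed above, a minimal inference call might look like the sketch below. It uses the pre-1.0 interface of the openai Python package that was current at the time of writing, and the fine-tuned model identifier is a placeholder; treat this as an illustration rather than the exact script used in the experiments.

```
import openai  # pre-1.0 interface; assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = "Determine the failure mode of the observation provided by the user."

def classify(observation, model="gpt-3.5-turbo"):
    # temperature=0 reduces, but does not fully remove, run-to-run variation
    response = openai.ChatCompletion.create(
        model=model,  # replace with the id of the fine-tuned model for the fine-tuned variant
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": observation},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()

print(classify("seal leaking"))  # e.g. a single failure mode code such as "Leaking"
```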
This effect is most noticeable in the non fine-tuned model with no prompt engineering (i.e. from Section 3.1, and has less of an effect when the model is informed of the list of valid labels. We also noticed that during inference, the OpenAI API would occasionally refuse our requests due to being overloaded, causing us to have to start the inference process again. This was not a significant problem for our small 62-record test set, but it would be more problematic when running inference over a large dataset. \begin{table} \begin{tabular}{l|c|c|c|c} **Failure mode** & **Support** & **Flair** & **GPT-3.5** & **GPT-3.5 (FT)** \\ \hline Abnormal instrument reading & 1 & 1.00 & 1.00 & 0.00 \\ \hline Breakdown & 7 & 0.37 & 0.44 & **1.00** \\ \hline Contamination & 1 & 1.00 & 1.00 & 1.00 \\ \hline Electrical & 6 & 0.67 & 0.50 & 0.67 \\ \hline Erratic output & 1 & 0.00 & 0.00 & 0.00 \\ \hline Fail to function & 3 & **0.50** & 0.00 & 0.00 \\ \hline Failure to start on demand & 1 & 0.40 & 0.33 & **1.00** \\ \hline Failure to stop on demand & 1 & 0.00 & 1.00 & **1.00** \\ \hline High output & 1 & 0.00 & 1.00 & 1.00 \\ \hline Leaking & 3 & 0.67 & 0.86 & 1.00 \\ \hline Low output & 2 & 0.00 & 0.00 & 0.00 \\ \hline Minor in-service problems & 17 & 0.73 & 0.11 & **1.00** \\ \hline Other & 2 & **0.67** & 0.40 & 0.00 \\ \hline Overheating & 4 & 1.00 & 1.00 & 1.00 \\ \hline Plugged / choked & 6 & 0.67 & 0.25 & **1.00** \\ \hline Spurious stop & 1 & 0.00 & 0.00 & 0.00 \\ \hline Structural deficiency & 3 & 0.60 & 0.57 & **1.00** \\ \hline Vibration & 2 & 0.67 & 1.00 & 1.00 \\ \hline **Micro-F1** & & 0.60 & 0.46 & **0.81** \\ \hline **Macro-F1** & & 0.46 & 0.53 & **0.62** \\ \hline \end{tabular} \end{table} Table 3: A comparison of the Flair model (Stewart et al., 2022) and the GPT-3.5 LLMs (non-fine-tuned and fine-tuned) on the test dataset. Support is the number of times the label appears in the test dataset. The results of the top-performing model (when there are no ties) are in **bold**. Finally, we note that the overall fine-tuning and inference process was fairly inexpensive, costing approximately $1 USD for each of our experiments. This shows that cost is not a barrier for achieving an acceptable level of performance on failure mode classification using LLMs. ## 4 Conclusion In this paper we have demonstrated the use of Large Language Models (LLMs) to perform Failure Mode Classification (FMC). We have investigated the use of prompt engineering to determine the best prompt to feed in to an LLM, such as GPT-3.5, in order to perform FMC without any fine-tuning. However, we have also found that fine-tuning an LLM is necessary to obtain significantly better performance on FMC when compared to text classification models such as Flair. The fine tuning is performed using a relatively small, high quality, annotated data set. The annotated data set we used for fine-tuning is publicly available. It maps observations to failure modes based on ISO 14224 classes. For the benefit of industry users wishing to use this fine-tuned data set on their own data, we note they will need to preprocess their maintenance work orders to extract observations. An example of a code pipeline to do this is in (Stewart et al., 2022). One of the key drawbacks of OpenAI's LLMs is that to be able to fine-tune the models, one must upload potentially sensitive data to OpenAI's servers. This is a non-issue for companies with the capability to run and fine tune LLMs in their own secure environments, but presents complications for others. 
In light of this, in the future we aim to investigate the performance of offline large language models, such as LLaMA (Touvron et al., 2023), on failure mode classification. We also plan to explore how well the Flair-based model performs on this task when it is fed with GPT-based embeddings. Finally, we also plan to release a larger annotated dataset than the one proposed by (Stewart et al., 2022), which will enable further fine-tuning and improved evaluation quality. #### Acknowledgments This research is supported by the Australian Research Council through the Centre for Transforming Maintenance through Data Science (grant number IC180100030), funded by the Australian Government.
In this paper, we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC is the task of automatically labelling an observation with a corresponding failure mode code, and it is important for reducing the time reliability engineers spend manually analysing work orders. We describe in detail our approach to prompt engineering, which enables an LLM to predict the failure mode of a given observation using a restricted code list. We show that a GPT-3.5 model fine-tuned on annotated data (F1=0.80) achieves a marked performance improvement over a currently available text classification model (F1=0.60). The fine-tuned model also outperforms the out-of-the-box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.
2307.00071
GIRA: Gaussian Mixture Models for Inference and Robot Autonomy
This paper introduces the open-source framework, GIRA, which implements fundamental robotics algorithms for reconstruction, pose estimation, and occupancy modeling using compact generative models. Compactness enables perception in the large by ensuring that the perceptual models can be communicated through low-bandwidth channels during large-scale mobile robot deployments. The generative property enables perception in the small by providing high-resolution reconstruction capability. These properties address perception needs for diverse robotic applications, including multi-robot exploration and dexterous manipulation. State-of-the-art perception systems construct perceptual models via multiple disparate pipelines that reuse the same underlying sensor data, which leads to increased computation, redundancy, and complexity. GIRA bridges this gap by providing a unified perceptual modeling framework using Gaussian mixture models (GMMs) as well as a novel systems contribution, which consists of GPU-accelerated functions to learn GMMs 10-100x faster compared to existing CPU implementations. Because few GMM-based frameworks are open-sourced, this work seeks to accelerate innovation and broaden adoption of these techniques.
Kshitij Goel, Wennie Tabib
2023-06-30T18:21:48
http://arxiv.org/abs/2307.00071v3
# GIRA: Gaussian Mixture Models for Inference and Robot Autonomy ###### Abstract This paper introduces the open-source framework, GIRA, which implements fundamental robotics algorithms for reconstruction, pose estimation, and occupancy modeling using compact generative models. Compactness enables _perception in the large_ by ensuring that the perceptual models can be communicated through low-bandwidth channels during large-scale mobile robot deployments. The generative property enables _perception in the small_ by providing high-resolution reconstruction capability. These properties address perception needs for diverse robotic applications, including multi-robot exploration and dexterous manipulation. State-of-the-art perception systems construct perceptual models via multiple disparate pipelines that reuse the same underlying sensor data, which leads to increased computation, redundancy, and complexity. GIRA bridges this gap by providing a unified perceptual modeling framework using Gaussian mixture models (GMMs) as well as a novel systems contribution, which consists of GPU-accelerated functions to learn GMMs 10-100x faster compared to existing CPU implementations. Because few GMM-based frameworks are open-sourced, this work seeks to accelerate innovation and broaden adoption of these techniques. ## I Introduction To navigate in and interact with the world, robots acquire, assimilate, and respond to sensor data. Models that enable perception in the large and small [1] are amenable to diverse robotics applications and have the potential to drastically increase robotic capabilities while addressing limitations in the way complex perception systems are developed today. Recent large-scale robotic exploration deployments, like the DARPA Subterranean (Sub-T) Challenge [2], have highlighted the need for map compression to increase the exploration rate, an example of perception in the large, by facilitating information sharing. Further, state-of-the-art perception systems typically leverage separate concurrent perceptual processing pipelines, which increases computation, redundancy, and complexity [3]. For example, the highly sophisticated perception module of the NeBula system architecture [4] processes the same LiDAR data repeatedly (e.g., odometry, SLAM, terrain mapping, etc.), which is inefficient. Instead, what is needed is a unified framework for common perceptual processing elements, which is compact, generative, and amenable for deployment on low-power embedded systems [3]. Gaussian mixture models (GMMs) provide high-fidelity and communication-efficient point cloud modeling and inference [5] in real-world environments [6]. Recent works have demonstrated precise, high-fidelity representation of fine details required for perception in the small [7]. However, there are few open-source implementations, which poses a barrier to broad adoption by the general robotics community. To bridge this gap, this paper introduces GIRA, an open-source, unified framework (Fig. 1) for point cloud modeling, occupancy modeling, and pose estimation using GMMs based on [6, 8, 9]. In addition, GIRA includes a novel systems contribution, which consists of GPU-accelerated functions to learn GMMs 10-100x faster compared to existing CPU implementations. The software and associated datasets are open-sourced1 under the permissive BSD 3-clause license to Fig. 1: GIRA has been deployed on size, weight, and power constrained aerial systems in real-world and unstructured environments. 
(Top left) A single aerial robot flies through an industrial tunnel and (top center) generates a high-fidelity Gaussian mixture model (GMM) map of the environment. (Top right) A close-up view of the reconstructed area around the robot. (Bottom left and bottom center) A team of two robots fly through a dark tunnel environment and produce a (bottom right) map, which is resampled from the underlying GMM and colored red or blue according to which robot took the observation. Videos of these experiments are available at: [https://youtu.be/qkbxfxgCoV0](https://youtu.be/qkbxfxgCoV0) and [https://youtu.be/t9iYd33oz3g](https://youtu.be/t9iYd33oz3g). accelerate innovation and adoption of these techniques. ## II Related Work This section reviews open-source perception frameworks for compact, high-resolution point cloud modeling, pose estimation, and occupancy modeling for robotics applications. These works are compared and contrasted with GIRA. The Normal Distributions Transform (NDT)2 framework was introduced by Magnusson et al. [10] for scan registration and later extended to model occupancy by Saarinen et al. [11]. Goel et al. [7] demonstrate that NDTMap provides higher representation fidelity compared to Octomap3[12], but at the cost of increased disk storage requirements. While NDTMap provides distribution to distribution registration [13], Octomap does not provide analogous functionality. In contrast to these representations, GIRA provides higher memory-efficiency and surface reconstruction fidelity [7] as well as distribution to distribution registration [8, 9]. Further, NDTMap provides a CPU implementation, while GIRA provides both CPU and GPU implementations for multimodal environment modeling. Footnote 2: [https://github.com/OrebroUniversity/perception_or](https://github.com/OrebroUniversity/perception_or) Footnote 3: [https://github.com/Octomap/octomap](https://github.com/Octomap/octomap) Oleynikova et al. [14] develop Voxblox, which uses Truncated Signed Distance Fields (TSDFs), for high-resolution reconstruction and occupancy mapping. The weights for the TSDFs are stored in a coarse fixed-resolution regular grid. Voxblox grows dynamically, but suffers from the same memory-efficiency limitation as the NDTMap. In contrast, the GIRA framework enables high-resolution surface reconstruction without a pre-specified size or a fixed-resolution discretization of the point cloud model. Like GIRA, Voxblox provides CPU4 and GPU5 implementations as well as a method to localize within the representation using submaps [15]. Footnote 4: [https://github.com/ethz-asl/voxblox](https://github.com/ethz-asl/voxblox) Footnote 5: [https://github.com/nvidia-isac/nvblox](https://github.com/nvidia-isac/nvblox) Footnote 6: [https://github.com/UnknownFreeOccupied/ufomap](https://github.com/UnknownFreeOccupied/ufomap) Duberg and Jensfelt [16] propose UFOMap, which improves upon Octomap by providing an explicit representation of unknown space and introduces Morton codes for faster tree traversal. An open-source CPU implementation of UFOMap7 is available; however, the implementation does not provide functionality to localize within the map, like GIRA, Voxgraph, or NDTMap. 
**Programming Languages.** Many open-source robotics libraries implement their core algorithms in compiled languages and expose higher-level programming language bindings for ease of prototyping. We follow the same model with GIRA, where key algorithms are implemented in C/C++/CUDA with Python bindings. For C/C++, we use the C/C++17 standard. For GPU support, CUDA version 10.4 and above is required.

**Message Passing.** To enable message passing between different software systems, most robotics applications use the Robot Operating System (ROS) [30] and its successor ROS2 [31]. The GIRA framework is structured using Collective Construction (colcon) packages to help the robotics community easily integrate GIRA within their ROS and ROS2 workspaces.

**Build System.** For low-level code and bindings, GIRA utilizes CMake for compilation support on both Linux and macOS. Python virtual environments are used to containerize executables.

**Visualization.** For 3D perception tasks, visualization is an important capability for debugging research code. GIRA provides interfaces to the Open3D [32] visualization tools for this purpose.
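The GIRA-specific visualization wrappers are not shown in this overview; as a rough illustration of the kind of Open3D usage these interfaces build on, the sketch below displays a point cloud from an N x 4 array of (x, y, z, intensity) values, such as the SOGMM samples shown later in Section IV-A. The placeholder data and grayscale mapping are assumptions for illustration only.

```
import numpy as np
import open3d as o3d

# Placeholder data: an N x 4 array of (x, y, z, intensity) samples,
# e.g. the output of SOGMM resampling described in Section IV-A.
samples = np.random.rand(1000, 4)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(samples[:, :3])
# Map the scalar intensity channel to a grayscale color for display.
pcd.colors = o3d.utility.Vector3dVector(np.repeat(samples[:, 3:4], 3, axis=1))

o3d.visualization.draw_geometries([pcd])
```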
Further, developers can leverage tools like RViz2 from ROS2 after integrating colcon packages from GIRA.

## IV GIRA Framework

The GIRA framework consists of three components: (1) GIRA Reconstruction, (2) GIRA Registration, and (3) GIRA Occupancy Modeling. This section provides an overview of these three components.

### _GIRA Reconstruction_

Given time-synchronized depth and intensity images with known pose estimates, GIRA Reconstruction creates a Self-Organizing Gaussian Mixture Model (SOGMM) [7] that is:

1. **Continuous**: the point cloud is represented with a 4D GMM, which is a linear combination of continuous functions (Gaussian distributions).
2. **Probabilistic**: the 4D GMM captures the variance and expected values for point locations and intensity values.
3. **Generative**: the 4D GMM enables fast sampling of point locations and intensity values from the model.
4. **Compact**: the number of parameters required to represent the 4D GMM is much lower compared to the point cloud itself.
5. **Adaptive**: the number of Gaussian distributions within the 4D GMM is automatically estimated from the scene complexity of the underlying sensor data.

**Dependencies.** GIRA Reconstruction utilizes Open3D [32] for point cloud loading, writing, and visualization. We use NumPy [33] for interfacing with Eigen [34] via Pybind11 [35]. The GNU library GSL [36] is leveraged for sample generation from the GMM model.

**Functionality.** GIRA Reconstruction contains CPU and GPU implementations for SOGMM10. Both implementations can be accessed via a Python interface:

Footnote 10: Detailed tutorials are available at [https://gira3d.github.io/docs/index.html](https://gira3d.github.io/docs/index.html).

```
from sgm_py.sgm import SOGMM

# SOGMM of point cloud on CPU
sg_cpu = SOGMM(bandwidth=0.015, compute='CPU')
mcpu = sg_cpu.fit(pointcloud)

# SOGMM of point cloud on GPU
sg_gpu = SOGMM(bandwidth=0.015, compute='GPU')
mgpu = sg_gpu.fit(pointcloud)
```

where pointcloud is a NumPy array and bandwidth is the bandwidth of the kernel used for the Gaussian Blurring Mean Shift (GBMS) within the SOGMM algorithm [7]. These models are continuous and generative. Three-dimensional points along with intensity values can be sampled from the model using:

```
# Sample 640*480 points from the model
rp = sg_gpu.joint_dist_sample(640*480)
```

A plot of the resampled point cloud is shown in Fig. 2(b). If the 3D point locations are known, the expected intensity values and variance can be inferred from the model at these locations:

```
# locs is an Nx3 numpy array
# E is Nx1 expected intensities
# V is Nx1 variance
E, V = mgpu.color_conditional(locs)
```

A plot of the intensity values E is shown in Fig. 2(c). The SOGMM model is compact and its size can be computed as follows:

```
# computing memory usage
M = mgpu.n_components_
# 4 bytes per float
# 1 float value per weight
# 10 float values per covariance
# 4 float values per mean
mem_bytes = 4 * M * (1 + 10 + 4)
```

which is \(69.78\) kilobytes for the model learnt in Fig. 2.

Fig. 2: An example workflow for GIRA Reconstruction (Section IV-A). The input is a depth-intensity point cloud shown in (a). The resulting model can be resampled to generate novel 4D points (b) or be used to infer expected intensity values at known 3D locations (c).

The time taken to learn a SOGMM is reported as a function of the bandwidth parameter for a diverse set of platforms outlined in Fig. 3(a). The input data for this experiment corresponds to frame 854, which was randomly selected, of the simulated livingroom1 data from the Augmented ICL-NUIM datasets [37].
Ten equally spaced bandwidth values from \(0.0135\) to \(0.0300\) are used. Image sizes of \(320\times 240\), \(213\times 160\), and \(160\times 120\) are used (corresponding to \(2\times\), \(3\times\), and \(4\times\) reduction along each axis of the original \(640\times 480\) image). Because there is randomness in the KInit step, each case is run ten times and averaged to obtain accurate timing results. Figures 3(b) to 3(f) plot the results of these trials for the CPU-only (dashed lines with triangle markers) and GPU-accelerated (solid lines with circle markers) implementations. The y-axes of these plots use a base-10 log scale, and the observed standard deviation, which is plotted as error bars, is very low compared to the mean values. From Figs. 3(b) and 3(c) we observe over an order of magnitude faster performance when using the GPU-accelerated version of the system for all image sizes. Further, there is an overall decrease in performance from Ryzen/RTX3090 (Fig. 3(b)) to Intel/RTX3060 (Fig. 3(c)) for both the CPU-only and GPU-accelerated versions. This is expected due to the decrease in computational capability of both the CPU and GPU.

Fig. 3: Comparison of SOGMM computation time via GIRA Reconstruction on the target platforms listed in (a). In (b) and (c) the GPU-accelerated case on the desktop platforms provides more than an order of magnitude improvement in timing compared to the CPU-only case for most image sizes. The results of the embedded platforms shown in (d), (e) and (f) demonstrate that the relative performance improvements seem to degrade with increasing SWaP constraints. In any case, (g) shows that our CPU implementation performs nearly an order of magnitude faster than a reference SOGMM implementation using scikit-learn.

Figure 3(d) provides results for the ARM-12c/Orin platform. In this case, the gains for the GPU-accelerated version are lower than on the desktop platforms. Notice that for image size \(320\times 240\) the CPU starts performing better than the GPU at low bandwidths. At low bandwidths, the number of estimated components is high. Figures 3(e) and 3(f) suggest a further decrease in the relative performance improvement of the GPU-accelerated version as opposed to the CPU-only version of our system. Further, due to memory constraints the \(320\times 240\) image size fails for both platforms below certain bandwidths. Both ARM-8c/Xavier and ARM-6c/TX2 are SWaP-constrained platforms used on robots. For real-world usage of our framework, we recommend using the CPU-only version when CPU resources are not required by other subsystems (e.g., planning, control, and visual-inertial odometry) and the GPU-accelerated version when CPU resources are in demand (which is often the case).

### _GIRA Registration_

This module implements (1) registering a pair of GMMs [8] and (2) closing the loop using a pose graph optimization [9]. The _anisotropic_, _isoplanar_, and _isoplanar-hybrid_ registration variants from [8] are implemented in this module. Python and MATLAB interfaces have been developed, but this document provides examples only for the Python interface. The isoplanar-hybrid registration approach first performs a coarse optimization using the isoplanar registration function, followed by a refinement optimization using the anisotropic registration. The source and target variables are paths to files containing GMMs.
```
import numpy as np
from gmm_d2d_registration_py import isoplanar_registration
from gmm_d2d_registration_py import anisotropic_registration

# Initial registration guess
Tinit = np.eye(4)

# Isoplanar registration followed by anisotropic refinement
Tiso = isoplanar_registration(Tinit, source, target)
Tout = anisotropic_registration(Tiso, source, target)

# Rotation and translation solutions
R = Tout[0:3, 0:3]
t = Tout[0:3, 3]
```

The result of registering a single pair of images may be seen in Fig. 4. In addition, a pose graph optimization example is provided, which uses GTSAM [38]. A comparison of the frame-to-frame registration with and without loop closure is shown in Fig. 5.

### _GIRA Occupancy Modeling_

This module implements occupancy reconstruction by sampling from a GMM and raytracing through an occupancy grid map. MATLAB and Python interfaces are provided, but only the Python interface is discussed in this document11. Like the registration module detailed in Section IV-B, this module is compatible with scikit-learn [39] GMMs and assumes GMMs are loaded from file.

Footnote 11: Detailed documentation for both MATLAB and Python is provided at [https://gira3d.github.io/docs/index.html](https://gira3d.github.io/docs/index.html).

```
# Create 3D occupancy grid with parameters p
grid = Grid3D(p)

# Nx3 points sampled from the GMM (assumed in world frame)
pts = gmm.sample(num_pts)

# Add the points to the grid
for i in range(0, num_pts):
    ray_end = Point(pts[i, 0], pts[i, 1], pts[i, 2])
    # sensor_pose is in world frame
    # TRIMMED_MAX_RANGE is set by the user
    grid.add_ray(sensor_pose, ray_end, TRIMMED_MAX_RANGE)
```

Functions for querying occupied, free, and unknown voxels are provided through Python and MATLAB bindings of C++ code. The result of adding sampled points from the Mine dataset GMMs and querying the occupied voxels is shown in Fig. 6.

Fig. 4: The point clouds in (a) are originally misaligned. (b) The code in Section IV-B estimates the SE(3) transform to align them.

Fig. 5: The trajectories reconstructed using (a) frame-to-frame registration and (b) with loop closure enabled are shown with the point clouds plotted.

Fig. 6: Resampled points from a GMM are added to an occupancy grid map and the occupied voxels are queried and visualized.

## V Implementation Details

This section details the GPU-accelerated software architecture of GIRA Reconstruction. Figure 7 provides an overview of the accelerated SOGMM components [7].

**Gaussian Blurring Mean Shift.** Comaniciu and Meer [40] leverage a binned estimator to determine seeds for the algorithm. A kd-tree [41] is used to query the neighbors in \(\mathcal{Y}\). The points within the specified radius are averaged and the seed is updated to the new location. The algorithm terminates when either the maximum number of iterations is reached or there is no substantial change with respect to the previous seed position.

**Expectation Maximization.** The EM algorithm consists of the Expectation (E) and Maximization (M) steps. The E step evaluates the responsibilities \(\gamma_{nb}\) using the current parameters \(\boldsymbol{\mu}_{b}\), \(\boldsymbol{\Sigma}_{b}\) and \(\pi_{b}\) via \[\gamma_{nb}=\frac{\pi_{b}\mathcal{N}(\mathbf{x}_{n}\mid\boldsymbol{\mu}_{b}, \boldsymbol{\Sigma}_{b})}{\sum\limits_{a=1}^{|\mathcal{B}|}\pi_{a}\mathcal{N}( \mathbf{x}_{n}\mid\boldsymbol{\mu}_{a},\boldsymbol{\Sigma}_{a})}. \tag{1}\]
To reduce the computational complexity of Eq. (1), the natural logarithm can be applied to convert the multiplications and divisions into sums and differences: \[\ln\gamma_{nb}=\ln\pi_{b}+\ln\left(\mathcal{N}(\mathbf{x}_{n} \mid\boldsymbol{\mu}_{b},\boldsymbol{\Sigma}_{b})\right)\] \[\qquad\qquad\qquad-\ln\left(\sum\limits_{a=1}^{|\mathcal{B}|}\pi _{a}\mathcal{N}(\mathbf{x}_{n}\mid\boldsymbol{\mu}_{a},\boldsymbol{\Sigma}_{a })\right). \tag{2}\] Term 2 of Eq. (2) may be rewritten, as derived in [42, 43, 44], \[\ln\left(\mathcal{N}(\mathbf{x}_{n}\mid\boldsymbol{\mu}_{b}, \boldsymbol{\Sigma}_{b})\right)\] \[=-\frac{1}{2}\left(D\ln(2\pi)+\sum\limits_{j=1}^{D}\left(\text{P} _{b}(\mathbf{x}_{n}-\boldsymbol{\mu}_{b})\right)_{j}^{2}\right)+\left(\sum \limits_{j=1}^{D}\ln\left(\text{diag}(\text{P}_{b})\right)_{j}\right) \tag{3}\] where \(\boldsymbol{\Sigma}=\text{LL}^{\top}\), \(\text{P}=\text{L}^{-1}\), and L is a lower triangular matrix calculated using the Cholesky decomposition of the covariance matrix. Summing the logarithm of the diagonal entries of \(\text{P}_{b}\) (i.e., \(\ln(\text{diag}(\text{P}_{b}))\)) in Eq. (3) is equivalent to \(\ln|\boldsymbol{\Sigma}_{b}|^{-1/2}\).

The GPU implementation leverages higher-order tensor representations (rank-\(3\) and rank-\(4\) tensors)12. The weights are represented as a rank-\(3\) tensor of shape \((1,|\mathcal{B}|,1)\), means are represented as a rank-\(3\) tensor of shape \((1,|\mathcal{B}|,4)\), and covariances are represented as a rank-\(4\) tensor of shape \((1,|\mathcal{B}|,4,4)\). This implementation accelerates unary (e.g., logarithm and exponential of a matrix, reduction operations like summing along a dimension or taking a maximum along a dimension of a rank-\(2\) or a rank-\(3\) tensor) and binary (e.g., addition, subtraction, multiplication, and division of rank-\(2\) and rank-\(3\) tensors) operations via element-wise CUDA kernels with fixed block and grid sizes for all GPUs.

Footnote 12: For the exposition of the GPU-accelerated components, tensor conventions from TensorFlow [29] are used.

Rank-\(2\) tensor multiplication is accelerated via the cuBLAS13 gemmStridedBatched routine. The Cholesky decomposition of a rank-\(2\) tensor is accelerated via the cuSOLVER14 potrf routine. The Cholesky decomposition of a rank-\(3\) tensor is accelerated using the cuSOLVER potrfBatched routine. Using the Cholesky decomposition, a linear system of equations involving rank-\(2\) tensors is solved using the cuSOLVER potrs routine and for rank-\(3\) tensors using the potrsBatched routine.

Footnote 13: cuBLAS [https://docs.nvidia.com/cuda/cublas](https://docs.nvidia.com/cuda/cublas)

Footnote 14: cuSOLVER [https://docs.nvidia.com/cuda/cusolver](https://docs.nvidia.com/cuda/cusolver)
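To make Eqs. (1)-(3) concrete, the following is a minimal CPU-side NumPy sketch of the log-domain E step. It is an illustration only, not the GIRA GPU implementation, and the array shapes follow the equations rather than the rank-3/rank-4 tensor layout described above:

```
import numpy as np

def log_responsibilities(X, weights, means, covs):
    """Log-domain E step of Eqs. (1)-(3).

    X: (N, D) points, weights: (B,), means: (B, D), covs: (B, D, D).
    Returns the (N, B) matrix of ln(gamma_nb).
    """
    N, D = X.shape
    B = weights.shape[0]
    log_gauss = np.empty((N, B))
    for b in range(B):
        L = np.linalg.cholesky(covs[b])            # Sigma_b = L L^T
        P = np.linalg.inv(L)                       # P_b = L^{-1}, lower triangular
        y = (X - means[b]) @ P.T                   # rows are P_b (x_n - mu_b)
        log_det = np.sum(np.log(np.diag(P)))       # equals ln |Sigma_b|^{-1/2}
        log_gauss[:, b] = -0.5 * (D * np.log(2.0 * np.pi)
                                  + np.sum(y ** 2, axis=1)) + log_det
    weighted = np.log(weights)[None, :] + log_gauss
    # stable log-sum-exp over components gives the denominator of Eq. (2)
    log_norm = np.logaddexp.reduce(weighted, axis=1, keepdims=True)
    return weighted - log_norm
```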
## VI Conclusion

GIRA is a set of tools and software for processing point cloud data into Gaussian mixture models for inference and robot autonomy. These tools and software are released open-source under the BSD 3-clause license and the software is available at [https://github.com/gira3d](https://github.com/gira3d). Fundamental robotics capabilities from our prior works on point cloud modeling [7], pose estimation [8, 9], and occupancy modeling [6] are included in the open-source release. These fundamental capabilities have applications beyond exploration and aerial robotics. The adaptivity of the SOGMM representation is applicable to perception in small-scale and fine-grained manipulation tasks. The variable-resolution occupancy grid mapping and distribution-to-distribution registration software may be leveraged for high-speed mobile robot applications like off-road operations. By releasing this software, the authors hope to increase the accessibility of these formulations to technical experts.

Fig. 7: Information flow for the GPU-accelerated adaptive point cloud modeling system. Given a bandwidth parameter and depth-intensity image pair, the Gaussian Blurring Mean Shift (GBMS) obtains the number of components \(|\mathcal{B}|\). The number of components and the 4D data are used by KInit to calculate the responsibility matrix used by the EM algorithm. The result of the EM algorithm is the SOGMM model [7].
This paper introduces GIRA, an open-source framework that implements fundamental robotics algorithms for reconstruction, pose estimation, and occupancy modeling using compact, generative models. Compactness extends the reach of perception in large-scale mobile-robot deployments by allowing perception models to be transmitted over low-bandwidth channels, while generativity provides high-resolution reconstruction capability for small-scale sensing. These properties address the perception needs of diverse robotic applications, for example multi-robot exploration and dexterous manipulation. State-of-the-art perception systems build perception models from the same underlying sensor data using several distinct pipelines, which increases computation, redundancy, and complexity. By learning GMMs (Gaussian mixture models) with high-performance GPU-accelerated functionality, GIRA
2307.16613
Semiclassical approximation of the Wigner function for the canonical ensemble
The Weyl-Wigner representation of quantum mechanics allows one to map the density operator into a function in phase space - the Wigner function - which acts like a probability distribution. In the context of statistical mechanics, this mapping makes the transition from the classical to the quantum regimes very clear, because the thermal Wigner function tends to the Boltzmann distribution in the high temperature limit. We approximate this quantum phase space representation of the canonical density operator for general temperatures in terms of classical trajectories, which are obtained through a Wick rotation of the semiclassical approximation for the Weyl propagator. A numerical scheme which allows us to apply the approximation to a broad class of systems is also developed. The approximation is assessed by testing it against systems with one and two degrees of freedom, which shows that, for a considerable range of parameters, the thermodynamic averages are well reproduced.
Marcos Gil de Oliveira, Alfredo Miguel Ozorio de Almeida
2023-07-31T12:44:23
http://arxiv.org/abs/2307.16613v2
# Semiclassical approximation of the Wigner function for the canonical ensemble ###### Abstract The Weyl-Wigner representation of quantum mechanics allows one to map the density operator into a function in phase space -- the Wigner function -- which acts like a probability distribution. In the context of statistical mechanics, this mapping makes the transition from the classical to the quantum regimes very clear, because the thermal Wigner function tends to the Boltzmann distribution in the high temperature limit. We approximate this quantum phase space representation of the canonical density operator for general temperatures in terms of classical trajectories, which are obtained through a Wick rotation of the semiclassical approximation for the Weyl propagator. A numerical scheme which allows us to apply the approximation to a broad class of systems is also developed. The approximation is assessed by testing it against systems with one and two degrees of freedom, which shows that, for a considerable range of parameters, the thermodynamic averages are well reproduced. **Keywords:** Weyl-Wigner representation, canonical ensemble, semiclassical approximations, Kerr system, Morse potential, Nelson potential

## 1 Introduction

Quantum and classical statistical mechanics differ both in their formulation and in their results. It is not by chance that the first evidence for quantum mechanics is the black-body spectrum derived by Planck [1]. The canonical ensemble, which describes a system in equilibrium with a thermal bath of temperature \(T\), is characterized classically through a probability distribution over phase space, the Boltzmann distribution \[P_{\beta}(\mathbf{x})=\frac{1}{Z_{c}}e^{-\beta H_{c}(\mathbf{x})}, \tag{1}\] where \(\mathbf{x}=(p_{1},\ldots,p_{d},q_{1},\ldots,q_{d})\) is a point in the phase space spanned by the coordinates \(q_{j}\) and the momenta \(p_{j}\), \(H_{c}\) is the classical Hamiltonian of the system, \(Z_{c}\) is the classical partition function and \(\beta=1/kT\), \(k\) being Boltzmann's constant. The _quantum_ canonical ensemble, on the other hand, is described by the thermal density operator \[\hat{\rho}_{\beta}=\frac{1}{Z}e^{-\beta\hat{H}} \tag{2}\] where \(\hat{H}\) is the Hamiltonian operator and \(Z\) is the quantum partition function. Both (1) and (2) allow one to calculate thermodynamic averages, and although the results agree for high temperatures, there is a considerable discrepancy for low ones. With the introduction, by Wigner, of his eponymous function [2], the differences between these two formulations diminished, as it allows one to map the thermal density operator into a function over phase space that works as if it were a probability distribution, though it strongly deviates from (1) in the low temperature regime. This proposal further evolved to give a complete formulation of quantum mechanics in phase space [3, 4], which we call the Weyl-Wigner representation. The high temperature limit of the resulting semiclassical approximation of the thermal Wigner function coincides with the classical distribution (1). A further advantage of the Weyl-Wigner formalism is that common observables with classical correspondence are directly represented by the classical phase space function, or a function that is semiclassically close to it. Thus, there is no limitation to Hamiltonians with a quadratic momentum dependence: any real phase space function will do.
Moreover, the expectation of the observable is evaluated by a phase space integral, identical to its classical counterpart, except that the Liouville distribution is replaced by the Wigner function. In contrast, the phase space reflection operator, \[\hat{R}_{\mathbf{x}}=\int\mathrm{d}^{N}\mathbf{\xi}_{\mathbf{q}}\ |\mathbf{q}+\mathbf{\xi}_{\mathbf{q}} \rangle\langle\mathbf{q}-\mathbf{\xi}_{\mathbf{q}}|\ \exp\left[-\frac{2i}{\hbar}\mathbf{\xi}_{\mathbf{p}}\cdot\mathbf{q} \right], \tag{3}\] which corresponds classically to the canonical reflection through a phase space point, is also a quantum observable. Indeed, this displaced parity operator has real eigenvalues \(\pm 1\), which makes it as quantum an observable as a spin. The essential role that this operator plays in the Wigner-Weyl representation, uncovered by Grossmann [5] and Royer [6], identifies its expectation with the Wigner function itself: \[W(\mathbf{x})\equiv\frac{1}{(2\pi\hbar)^{N}}\ \mathrm{tr}\ \hat{\rho}\ \hat{R}_{\mathbf{x}}. \tag{4}\] In short, the value of the Wigner function at every point in phase space supplies the expectation of the reflection operator for that point, which is exactly how it has been verified experimentally [7], by counting even and odd outcomes of phase space reflections on identically prepared states. In this paper, we will explore the fact that, by evaluating a propagator \(\hat{U}_{t}=e^{-it\hat{H}/\hbar}\) at an imaginary time \(-i\theta\), where \(\theta=\beta\hbar\) is the _thermal time_, we obtain the operator \(\hat{U}_{-i\theta}=e^{-\beta\hat{H}}\), which is proportional to the thermal density operator (2). This is the so called Wick rotation [8, 9]. We will employ this relation, together with a semiclassical approximation for the propagator, which expresses it in terms of _classical_ trajectories, to obtain a semiclassical approximation for the canonical ensemble. In principle, it provides a powerful method for evaluating the thermal density operator at lower temperatures, even for many degrees of freedom, because classical trajectories are computed in parallel. The present initial exploration is limited to two degrees of freedom. The complexification of the Hamiltonian to adapt it to a thermal, rather than a real, evolution is already well established in semiclassical calculations, mainly within the chemical literature [10, 11, 12, 13, 14, 15, 16] and [17, 18]. Even though the various alternative propagators are also supported by trajectories in phase space, the end result is the position density matrix. Then a comparison with the classical distribution depends on Wigner's symmetrized Fourier transform over phase space. Furthermore, the complexification is confined to the momentum, which restricts the Hamiltonian to be the sum of a quadratic kinetic term and a potential energy, which excludes even a simple magnetic field. In contrast, the complexification employed here is, of necessity, much less simple (even in the case of the quadratic momentum dependence favoured by the position representation), so as to accommodate arbitrary Hamiltonians, for which the real time evolution is inaccessible to the differential Schrodinger equation. Thus, together with computational tests for standard Hamiltonians, we test the thermal averages of Birkhoff normal forms [19, 20], which include the quartic Kerr Hamiltonian [21], in its turn the unit cell of the many-body Bose-Hubbard Hamiltonian [22]. This paper is a follow-up to [23], where the core results of our current approach were first proposed.
Here, we bridge the gaps that remained, which then allows us to devise a computational scheme that opens the possibility of applying our approximation to a vast number of cases. These new developments were achieved during a master's degree, and first appeared in the thesis [24]. The presentation is then structured as follows: in section 2 we discuss elements of the Weyl-Wigner representation and introduce a semiclassical approximation for the propagator. In section 3 we particularize this discussion for the canonical ensemble, and show how the approximation for the propagator generates an approximation for the thermal density operator through the Wick rotation. In section 4, we apply our approximation to normal forms, which are a class of systems for which one has explicit expressions for the required quantities. In section 5 we reformulate the calculation of the trajectories in terms of a duplicated phase space, which is more amenable to a computational treatment, and develop a complete numerical method that allows us, in principle, to apply our approximation to systems with an arbitrary hamiltonian. In sections 6 and 7, we use this numerical scheme to apply the approximation to the Morse system, which has one degree of freedom, and to the Nelson system, which has two.

## 2 The Weyl-Wigner representation and semiclassical approximations

The Weyl-Wigner representation of quantum mechanics is based on the reflection operators \[\hat{R}_{\mathbf{x}}=\int\frac{d\mathbf{\xi}}{(4\pi\hbar)^{d}}\exp\left[\frac{i}{ \hbar}\mathbf{\xi}\wedge(\hat{\mathbf{x}}-\mathbf{x})\right], \tag{5}\] which correspond classically to the transformation \(R_{\mathbf{x}}:\mathbf{x}_{-}\mapsto 2\mathbf{x}-\mathbf{x}_{-}\). Here, \(\hat{\mathbf{x}}=(\hat{p}_{1},\ldots,\hat{p}_{d},\hat{q}_{1},\ldots,\hat{q}_{d})\) is a vector formed by the position and momentum operators and \(\wedge\) denotes the _wedge product_, defined by \(\mathbf{\xi}\wedge\mathbf{x}=(\mathbf{J}\mathbf{\xi})\cdot\mathbf{x}\), with \[\mathbf{J}=\left(\begin{array}{c|c}\mathbf{0}&\mathbf{-I}_{d}\\ \mathbf{I}_{d}&\mathbf{0}\end{array}\right), \tag{6}\] where \(\mathbf{I}_{d}\) denotes the \(d\times d\) identity matrix. The Wigner symbol \(O(\mathbf{x})\) of an operator \(\hat{O}\) is then given by \[O(\mathbf{x})=2^{d}\text{Tr}\ \left(\hat{O}\hat{R}_{\mathbf{x}}\right). \tag{7}\] Furthermore, the Wigner function is a quantity proportional to the Wigner symbol of the density operator \[W(\mathbf{x})=\frac{\rho(\mathbf{x})}{(2\pi\hbar)^{d}}=\frac{1}{(\pi\hbar)^{d}}\text{ Tr}\ \left(\hat{\rho}\hat{R}_{\mathbf{x}}\right) \tag{8}\] and can be used to calculate quantum averages \[\left\langle\hat{O}\right\rangle=\text{Tr}\ \left(\hat{\rho}\ \hat{O}\right)=\int d \mathbf{x}W\left(\mathbf{x}\right)O\left(\mathbf{x}\right) \tag{9}\] as if it were a probability distribution. As an example, we observe that the Wigner functions for the eigenstates of the harmonic oscillator, defined by a hamiltonian \(\hat{H}=\omega\left(\hat{p}^{2}+\hat{q}^{2}\right)/2\), are given by \[W_{n}(\mathbf{x})=\frac{(-1)^{n}}{\pi\hbar}e^{-\mathbf{x}^{2}/\hbar}L_{n}\left(\frac{ 2\mathbf{x}^{2}}{\hbar}\right), \tag{10}\] where \(L_{n}\) is the \(n\)th Laguerre polynomial [3]. A striking feature of the Wigner representation is the fact that the Wigner symbol of operators of the form \(f\left(\hat{\mathbf{p}}\right)+g\left(\hat{\mathbf{q}}\right)\) is simply \(f\left(\mathbf{p}\right)+g\left(\mathbf{q}\right)\), which is exactly the corresponding classical variable.
This is not a general result, as there can be corrections in form of power series of \(\hbar\). A useful formula for calculating more complicated Wigner symbols is the Groenewold rule [25] \[\begin{split} O_{2}\cdot O_{1}\left(\mathbf{x}\right)&=O_{2} \left(\mathbf{x}+\frac{i\hbar}{2}\mathbf{J}\frac{\partial}{\partial\mathbf{x}}\right)O_{1} \left(\mathbf{x}\right)\\ &=O_{1}\left(\mathbf{x}-\frac{i\hbar}{2}\mathbf{J}\frac{\partial}{ \partial\mathbf{x}}\right)O_{2}\left(\mathbf{x}\right).\end{split} \tag{11}\] The Wigner symbol \(U_{t}(\mathbf{x})\) of the propagator \[\hat{U}_{t}=e^{-it\hat{H}/\hbar} \tag{12}\] is called the Weyl propagator, and, for short enough times, has the semiclassical approximation [25] \[U_{t}(\mathbf{x})_{SC}=\left|\det\left(\mathbf{I}_{2d}\pm\mathbf{JB}_{t}\right)\right|^{1/2 }\exp\left[\frac{i}{\hbar}S_{t}(\mathbf{x})\right]. \tag{13}\] Here, \(S_{t}(\mathbf{x})=S(\mathbf{x},t)\) is the so called (centre) action, and \[\mathbf{B}_{t}=\frac{1}{2}\frac{\partial^{2}S_{t}}{\partial\mathbf{x}^{2}} \tag{14}\] is proportional to its hessian. The action has the role of a generating function for the classical hamiltonian flow. It indirectly specifies the transformation \(\mathbf{x}_{-}\mapsto\mathbf{x}_{+}\) by giving the chord \[\mathbf{\xi}=\mathbf{x}_{+}-\mathbf{x}_{-} \tag{15}\] in terms of the centre \[\mathbf{x}=\frac{\mathbf{x}_{-}+\mathbf{x}_{+}}{2} \tag{16}\] through the relation \[\mathbf{\xi}(\mathbf{x},t)=-\mathbf{J}\frac{\partial S_{t}}{\partial\mathbf{x}}. \tag{17}\] As going back in time simply reverses the hamiltonian flow, we must have \(\boldsymbol{\xi}(\boldsymbol{x},t)=-\boldsymbol{\xi}(\boldsymbol{x},-t),\) from which we conclude that the centre action must be an odd function of \(t.\) Furthermore, Hamilton's equations \[\dot{\boldsymbol{x}}=\boldsymbol{J}\frac{\partial H}{\partial\boldsymbol{x}} \tag{18}\] imply that, for short times \(t,\) we have \[\boldsymbol{\xi}\approx t\boldsymbol{J}\frac{\partial H}{\partial\boldsymbol{x }}, \tag{19}\] from which we get \[S(\boldsymbol{x},t)=-tH(\boldsymbol{x})+\mathcal{O}\left(t^{3}\right). \tag{20}\] In general, the action is given by \[S(\boldsymbol{x},t)=\Delta(\boldsymbol{x},t)-Et \tag{21}\] where \(\Delta\) is the symplectic area of the region between the trajectory and the chord and \(E\) is the energy of the trajectory. As an example for quadratic hamiltonians \[H(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}\cdot\boldsymbol{\mathcal{H}}_{0} \boldsymbol{x} \tag{22}\] the action is also quadratic and given by \[S_{t}(\boldsymbol{x})=\boldsymbol{x}\cdot\boldsymbol{B}_{t}\boldsymbol{x}; \quad\boldsymbol{B}_{t}=\boldsymbol{J}\tanh\left(\frac{t}{2}\boldsymbol{J} \boldsymbol{\mathcal{H}}_{0}\right). \tag{23}\] It is important to observe that, for this class of systems, the semiclassical approximations are actually exact [25]. For a harmonic oscillator with frequency \(\omega\), we have \(\mathbf{\mathcal{H}}_{0}=\omega\mathbf{I}_{2d}\), and we obtain an exact Weyl propagator \[U_{t}(\mathbf{x})=\sec\left(\omega t/2\right)\exp\left[-\frac{i}{\hbar}\tan\left( \omega t/2\right)\mathbf{x}^{2}\right]. \tag{24}\] We see that, when \(\omega t\rightarrow(2n+1)\pi\), we have \[\left|\det\left(\mathbf{I}_{2d}\pm\mathbf{JB}_{t}\right)\right|^{1/2}=\sec\left( \omega t/2\right)\rightarrow\infty. \tag{25}\] The set of points where this divergence occurs is called a caustic, and it signals the breakdown of the description of the canonical transformation by the centre generating function. 
For the harmonic oscillator, the hamiltonian flow at these instants is simply a reflection \(\mathbf{x}_{-}\mapsto\mathbf{x}_{+}=-\mathbf{x}_{-}\), and therefore, for every pair \((\mathbf{x}_{-},\mathbf{x}_{+})\) we get the same centre \(\mathbf{x}=\mathbf{0}\). In this case, then, the caustics are the entire phase space, but, for non-quadratic hamiltonians, these divergences may be restricted to a lower dimensional sub-manifold. In general, after crossing a caustic, there may be more than one chord for each centre, and the approximation for the propagator becomes a sum of terms like (13), where one must include an extra _Maslov phase_ in the exponents [26, 27]. It is possible to show [25] that the jacobian \[\mathbf{M}_{t}=\frac{\partial\mathbf{x}_{+}}{\partial\mathbf{x}_{-}} \tag{26}\] of the hamiltonian flow, which is a symplectic matrix [28], is related to \(\mathbf{B}_{t}\) through the Cayley parametrization [19] \[\mathbf{M}_{t}=\frac{\mathbf{I}_{2d}-\mathbf{JB}_{t}}{\mathbf{I}_{2d}+\mathbf{JB}_{t}}, \tag{27}\] allowing us to rewrite (13) as \[U_{t}(\mathbf{x})_{SC}=\frac{2^{d}}{\left|\det\left(\mathbf{I}_{2d}+\mathbf{M}_{t}\right) \right|^{1/2}}\exp\left[\frac{i}{\hbar}S_{t}(\mathbf{x})\right], \tag{28}\] which will be the most convenient form of the semiclassical propagator to work with.

## 3 The canonical ensemble in phase space

As mentioned in the introduction, this work is based on the evaluation of the semiclassical approximation for the propagator (28) at the imaginary time \(t=-i\theta\), \(\theta=\hbar\beta\) being the thermal time. We then obtain a semiclassical approximation for \(e^{-\beta\hat{H}}(\mathbf{x})\): \[e^{-\beta\hat{H}}(\mathbf{x})_{SC}=\frac{2^{d}}{\left|\det\left(\mathbf{I}_{2d}+\mathbf{M }_{-i\theta}\right)\right|^{1/2}}\exp\left[\frac{1}{\hbar}S_{\theta}^{E}(\mathbf{x})\right], \tag{29}\] where we have defined the euclidean action \(S_{\theta}^{E}=iS_{-i\theta}\), which is necessarily real, as \(S\) is an odd function of \(t\). For the harmonic oscillator, we get, using (24), \[e^{-\beta\hat{H}}(\mathbf{x})=\mbox{sech}\left(\omega\theta/2\right)\exp\left[- \frac{1}{\hbar}\mbox{tanh}\left(\omega\theta/2\right)\mathbf{x}^{2}\right]. \tag{30}\] It is interesting to note that, by using the short time approximation (20) and setting \(\mathbf{M}_{t}\approx\mathbf{I}_{2d}\), we get \[e^{-\beta\hat{H}}\left(\mathbf{x}\right)_{SC}\approx\exp\left[-\beta H\left(\mathbf{ x}\right)\right]\approx\exp\left[-\beta H_{c}\left(\mathbf{x}\right)\right], \tag{31}\] that is, for high temperatures, we recover the _classical_ canonical ensemble. In this framework, the thermodynamic expectation values \[\left\langle\hat{O}\right\rangle=\mbox{Tr}\ \left(\hat{\rho}_{\beta}\hat{O} \right)=\frac{\mbox{Tr}\ \left(e^{-\beta\hat{H}}\hat{O}\right)}{\mbox{Tr}\ e^{-\beta\hat{H}}}, \tag{32}\] are completely determined if one is able to calculate expressions of the form \(\mbox{Tr}\ \left(U_{t}\hat{O}\right)\) for imaginary \(t\), which has an approximation \[\mbox{Tr}\ \left(\hat{U}_{t}\hat{O}\right)_{SC}=\frac{1}{(\pi\hbar)^{d}}\int d \mathbf{x}\frac{e^{iS_{t}(\mathbf{x})/\hbar}O(\mathbf{x})}{\left|\det\left[\mathbf{I}+\mathbf{M}_{t}\right]\right|^{1/2}}. \tag{33}\]
One of the first problems that appears when dealing with semiclassical approximations, and can already be seen in (33), is the fact that the relevant trajectories are specified by boundary conditions -- in the case of the Wigner representation, we specify the centre \(\mathbf{x}\) defined by the endpoints of the trajectory -- that can be satisfied by more than one orbit, and are much more difficult to solve than an initial value problem. This is the so called _root search problem_. There are a few methods that can be used to circumvent this problem, including the Initial and Final Value Representations [29]. Here, we briefly discuss a method that is especially adapted for the calculation of (33), which we call the _midpoint representation_, and which consists of a mere change of variables that we explain in what follows. We start with a point \(\mathbf{X}\) in phase space -- the midpoint -- from which we propagate a trajectory \(\mathbf{x}_{+}(t)\), which evolves forward in time, and a trajectory \(\mathbf{x}_{-}(t)\), which evolves backwards, as illustrated in figure 1. In other words, \(\mathbf{x}_{\pm}(t)\) satisfy the pair of initial value problems \[\dot{\mathbf{x}}_{\pm}(t)=\pm\mathbf{J}\frac{\partial H}{\partial\mathbf{x}};\quad\mathbf{x}_{\pm}(0)=\mathbf{X}. \tag{34}\]

Figure 1: Midpoint representation.

From this trajectory, we construct a centre \[\mathbf{x}(t)=\frac{\mathbf{x}_{+}(t/2)+\mathbf{x}_{-}(t/2)}{2} \tag{35}\] and a chord \[\boldsymbol{\xi}(t)=\boldsymbol{x}_{+}(t/2)-\boldsymbol{x}_{-}(t/2). \tag{36}\] Then, the transformation \(\boldsymbol{x}\mapsto\boldsymbol{X}\) has a jacobian determinant \[\begin{split}\det\frac{\partial\boldsymbol{x}}{\partial \boldsymbol{X}}&=\frac{1}{2^{2d}}\det\left(\boldsymbol{M}_{t/2}+ \boldsymbol{M}_{-t/2}\right)\\ &=\frac{1}{2^{2d}}\det\left(\boldsymbol{I}_{2d}+\boldsymbol{M}_ {t}\right),\end{split} \tag{37}\] where we have used the fact that symplectic matrices, such as \(\boldsymbol{M}\), have unit determinant, and that the composition law \(\boldsymbol{M}_{t_{2}}\boldsymbol{M}_{t_{1}}=\boldsymbol{M}_{t_{1}+t_{2}}\) holds. Performing this change of variables in (33), we arrive at \[\text{Tr}\;\left(\hat{U}_{t}\hat{O}\right)_{SC}=\frac{1}{(2\pi\hbar)^{d}}\int d \boldsymbol{X}\left|\frac{\partial\boldsymbol{x}}{\partial\boldsymbol{X}} \right|^{1/2}e^{iS_{t}(\boldsymbol{x})/\hbar}O(\boldsymbol{x}). \tag{38}\] Furthermore, in this framework, the area \(\Delta\) can be explicitly calculated as [23] \[\Delta\left[\boldsymbol{x}\left(\boldsymbol{X}\right),t\right]=\int_{0}^{t/2} \boldsymbol{\xi}\left(\boldsymbol{X},t^{\prime}\right)\wedge\dot{\boldsymbol {x}}\left(\boldsymbol{X},t^{\prime}\right)dt^{\prime}, \tag{39}\] from which we obtain the action \[S_{t}\left[\boldsymbol{x}\left(\boldsymbol{X}\right)\right]=\int_{0}^{t/2} \boldsymbol{\xi}\left(\boldsymbol{X},t^{\prime}\right)\wedge\dot{\boldsymbol{x }}\left(\boldsymbol{X},t^{\prime}\right)dt^{\prime}-tH\left(\boldsymbol{X} \right). \tag{40}\] Beyond the fact that, now, all quantities can be determined by the initial value problem (34), we see that another advantage of this representation is the property that, at caustics, the integrand in (38) is now zero, instead of infinite, as was the case in expression (33).

## 4 Normal Forms

Here, we discuss a class of systems for which we are able to obtain explicit analytical results for the trajectories (34), which allow a direct evaluation of (38) at imaginary times.
A one dimensional _classical_ hamiltonian written as \[\begin{split} H(\boldsymbol{x})&=\omega\left( \frac{p^{2}+q^{2}}{2}\right)+H_{2}\left(\frac{p^{2}+q^{2}}{2}\right)^{2}\\ &+H_{3}\left(\frac{p^{2}+q^{2}}{2}\right)^{3}+\cdots=F\left( \frac{\boldsymbol{x}^{2}}{2}\right)\end{split} \tag{41}\] is said to be in Birkhoff normal form [19, 20]. Its orbits are circles in phase space, and, therefore, by a simple geometric argument, one may find explicit expressions for all the ingredients of the semiclassical approximation, as done in [23]. Defining the action variable \[J=\frac{\boldsymbol{X}^{2}}{2} \tag{42}\] and \[\omega(J)=F^{\prime}(J), \tag{43}\] we find the euclidean action \[S_{\theta}^{E}\left[\boldsymbol{x}\left(\boldsymbol{X}\right)\right]=\left[\omega \theta-\sinh\left(\omega\theta\right)\right]J-\theta F\left(J\right), \tag{44}\] the centre \[\boldsymbol{x}\left(\boldsymbol{X},-i\theta/2\right)=\cosh\left(\frac{\omega \theta}{2}\right)\boldsymbol{X}, \tag{45}\] and the jacobian determinant \[\begin{split}&\det\frac{\partial\boldsymbol{x}}{\partial \boldsymbol{X}}(\boldsymbol{X},-i\theta/2)\\ &=\cosh^{2}\left(\frac{\omega\theta}{2}\right)\left[1+J\omega^{ \prime}\theta\tanh\left(\frac{\omega\theta}{2}\right)\right].\end{split} \tag{46}\] _Quantum_ hamiltonians that can be written as \[\hat{H}=G\left(\frac{\hat{p}^{2}+\hat{q}^{2}}{2}\right) \tag{47}\] have a Wigner symbol that is a Birkhoff normal form, but, except in the case of the harmonic oscillator, the function \(F\) in (41) _does not_ coincide with \(G\). Formulas for the calculation of the symbols are presented in Appendix A. Some of the quantum properties of this class of systems are also readily obtained -- they share their eigenstates with the harmonic oscillator, and for a quantum normal form described by a hamiltonian \(G\left[\left(\hat{p}^{2}+\hat{q}^{2}\right)/2\right]\), the eigenenergies are simply \(G\left[\hbar\left(n+1/2\right)\right]\). This simplicity makes the comparison of our semiclassical approximation with the quantum result straightforward. It is instructive to analyze the behaviour of our approximation in the low temperature limit. By squaring (45), we obtain \[\frac{\boldsymbol{x}^{2}}{2}=J\cosh^{2}\left[\frac{\theta\omega\left(J\right)} {2}\right]. \tag{48}\]
Therefore, we see that, at least for normal forms with \(\omega\neq 0\), the semiclassical approximation is well anchored in both the high temperature limit, as it coincides with the classical result, and in the low temperature one, as it correctly predicts the ground state. It then remains to analyze its behaviour for intermediate temperatures. ### Kerr system The simplest case, beyond the harmonic oscillator, of a system governed by a normal form is probably the Kerr system, whose Hamiltonian is \[\hat{H}=\hbar\omega_{0}\left[\left(\frac{\hat{p}^{2}+\hat{q}^{2}}{2\hbar} \right)+\chi\left(\frac{\hat{p}^{2}+\hat{q}^{2}}{2\hbar}\right)^{2}\right], \tag{52}\] where \(\chi>0\) is a dimensionless parameter and \(\omega_{0}>0\) is a frequency. This Hamiltonian models the propagation of light through a medium with cubic electric susceptibility [21]. The time evolution of coherent states under its action is known [30, 31], and the corresponding Wigner function has been experimentally measured [32]. This evolution has also been successfully simulated, in the case with \(\chi\rightarrow\infty\), utilizing semiclassical techniques [33]. A further point of interest is that the Hamiltonian for the Bose-Hubbard chain [22] in many-body physics can be considered as a coupling of Kerr oscillators, which highlights the importance of exploring semiclassical methods for Hamiltonians with non-quadratic momenta. We note that, in this case, the Wigner symbol of the Hamiltonian only coincides with its classical counterpart within a constant term, as, with the aid of (11), one finds \[H(p,q)=\hbar\omega_{0}\left[\left(\frac{p^{2}+q^{2}}{2\hbar}\right)+\chi \left(\frac{p^{2}+q^{2}}{2\hbar}\right)^{2}-\frac{\chi}{4}\right]. \tag{53}\] Identifying \[F(J)=\hbar\omega_{0}\left[\frac{J}{\hbar}+\chi\left(\frac{J}{\hbar}\right)^{ 2}-\frac{\chi}{4}\right], \tag{54}\] we see that \[\omega(J)=F^{\prime}(J)=\omega_{0}\left(1+\chi\frac{J}{\hbar}\right)\geq \omega_{0}>0 \tag{55}\] and, therefore, we should expect a good result for low temperatures. Furthermore, when \(\chi\ll 1\), we also expect a good result, as, when \(\chi\to 0\), we recover the harmonic oscillator, for which the semiclassical approximation is exact. We also note that, because \[\omega^{\prime}(J)=\chi\omega_{0}/\hbar>0, \tag{56}\] one sees that \(\det\partial\mathbf{x}/\partial\mathbf{X}>0\), that is, there are no caustics for imaginary time. In figure 2, we show the expectation value of the energy \(E\) as a function of thermal time \(\theta\) for different values of the parameter \(\chi\). In the canonical ensemble, the specific heat \(c\) can be calculated in terms of the variance in the energy: \[c=k\beta^{2}\left(\left\langle\hat{H}^{2}\right\rangle-\left\langle\hat{H} \right\rangle^{2}\right). \tag{57}\] In this case, the Wigner symbol of the _square_ of the hamiltonian \(H^{2}\left(\mathbf{x}\right)\) is significantly different from \(\left[H\left(\mathbf{x}\right)\right]^{2}\). With this is mind, we show, in figure 3, the specific heat as a function of thermal time for different values of \(\chi\). As one sees, the quality of our semiclassical approximation is heavily dependent on the parameter \(\chi\). As \(\chi\) increases, we see a deviation from the quantum results for intermediate values of \(\theta\), although, as foreseen, we have a good agreement at both high and low temperatures. 
It may seem somewhat disturbing that the semiclassical approximation does not respect the positivity of the heat capacity: the figure cuts off the negative region. Yet it must be recalled that errors in \(\left\langle\hat{H}^{2}\right\rangle\) and \(\left\langle\hat{H}\right\rangle^{2}\) may be added, whereas the averages are subtracted. Indeed, Fig. 4 shows that the variance in (57) is considerably smaller than \(\left\langle\hat{H}^{2}\right\rangle\). In general one may then expect the heat capacity to be a much more stringent test of approximations than the energy average, as will occur further in our examples. One should note that a semiclassical version of the heat capacity as a second derivative of the partition function is not viable, as discussed in [23].

Figure 2: Average energy for the Kerr system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.

Figure 3: Specific heat for the Kerr system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.

## 5 Double Phase Space

If we wish to apply our approximation to a broader class of systems, we must resort to numerical techniques. In this section, we restate the calculation of the terms in the integrand (38) in a way that is well adapted for a numerical solution, and, in the following sections, we apply this method to a few more systems. Instead of obtaining the centre \(\mathbf{x}\) in equation (16) by propagating two trajectories, one forward and one backwards in time, it will be easier to calculate a single forward trajectory in a double phase space. In this new space, the centre \(\mathbf{x}\) will play the role of position, while the conjugate momentum is given by \(\mathbf{y}=\mathbf{J}\mathbf{\xi}.\) The double hamiltonian [34, 35, 36] \[\begin{split}\mathbb{H}(\mathbf{x},\mathbf{y})&=H\left(\mathbf{x} -\frac{1}{2}\mathbf{J}\mathbf{y}\right)+H\left(\mathbf{x}+\frac{1}{2}\mathbf{J}\mathbf{y}\right)\\ &=H(\mathbf{x}_{+})+H(\mathbf{x}_{-}).\end{split} \tag{58}\] will then give the correct equations of motion, as one may check: \[\begin{split}\frac{\partial\mathbb{H}}{\partial\mathbf{x}}& =\nabla H(\mathbf{x}_{+})+\nabla H(\mathbf{x}_{-})\\ &=-\mathbf{J}\left(\dot{\mathbf{x}}_{+}-\dot{\mathbf{x}}_{-}\right)=-\mathbf{J} \dot{\mathbf{\xi}}=-\dot{\mathbf{y}};\end{split} \tag{59a}\] \[\begin{split}\frac{\partial\mathbb{H}}{\partial\mathbf{y}}& =\frac{\mathbf{J}}{2}\left[\nabla H(\mathbf{x}_{+})-\nabla H(\mathbf{x}_{-})\right]\\ &=\frac{1}{2}\left(\dot{\mathbf{x}}_{+}+\dot{\mathbf{x}}_{-}\right)=\dot {\mathbf{x}}.\end{split} \tag{59b}\] For a given midpoint \(\mathbf{X}\), these equations must then be solved under the initial conditions \[\mathbf{x}\left(\mathbf{X},0\right)=\mathbf{X},\quad\mathbf{y}\left(\mathbf{X},0\right)=\mathbf{0}. \tag{60}\]
In order to be able to define the trajectories for complex times, we simply promote the derivatives with respect to \(t\) in (59) to derivatives with respect to a complex number \(z\): \[\frac{d\mathbf{y}}{dz} =-\frac{\partial\mathbb{H}}{\partial\mathbf{x}} \tag{61}\] \[\frac{d\mathbf{x}}{dz} =\frac{\partial\mathbb{H}}{\partial\mathbf{y}}\] If the functions \(\mathbf{x}\) and \(\mathbf{y}\) defined by these equations turn out to be analytic, then the line integral \[\Delta(z)=\int_{\gamma}\mathbf{y}\cdot\frac{d\mathbf{x}}{dz^{\prime}}\,dz^{\prime}, \tag{62}\] which would be the extension of the area \(\Delta\), given in (39), to the complex plane, is only dependent on the endpoints of the path \(\gamma\), which are \(0\) and \(z\). Assuming this is the case, we choose \(\gamma\) to be the easiest path that joins \(0\) and \(z\) -- the line segment -- and parameterise it by the arc-length \(s\). The explicit expression for the parametrization \(\gamma(s)\) is then \[\gamma\colon[0,|z|] \to\mathbb{C}\] \[s \mapsto sw,\] where \(w=z/|z|\). This path is illustrated in figure 5.

Figure 5: Line segment joining the origin \(0\) to \(z\).

Now, we compose \(\mathbf{x}\) and \(\mathbf{y}\) with \(\gamma\), that is, we define the functions \(\tilde{\mathbf{x}}:[0,|z|]\to\mathbb{C},\ \tilde{\mathbf{x}}(s)=\mathbf{x}\circ\gamma(s)=\mathbf{x}(sw)\) and \(\tilde{\mathbf{y}}:[0,|z|]\to\mathbb{C},\ \tilde{\mathbf{y}}(s)=\mathbf{y}\circ\gamma(s)=\mathbf{y}(sw)\). By the chain rule, we deduce that these functions satisfy \[\begin{split}\frac{d\tilde{\mathbf{y}}}{ds}&=-w\frac{ \partial\mathbb{H}}{\partial\mathbf{x}},\\ \frac{d\tilde{\mathbf{x}}}{ds}&=w\frac{\partial\mathbb{H }}{\partial\mathbf{y}}\end{split} \tag{63}\] which are _almost_ Hamilton's equations with real time \(s\). If we define \(\tilde{\mathbf{y}}_{w}=w^{*}\tilde{\mathbf{y}}\), where \({}^{*}\) denotes complex conjugation, and the modified hamiltonian \[\begin{split}\mathbb{H}_{w}\left(\mathbf{x},\mathbf{y}\right)& =\mathbb{H}\left(\mathbf{x},w\mathbf{y}\right)\\ &=H\left(\mathbf{x}-\frac{w}{2}\mathbf{J}\mathbf{y}\right)+H\left(\mathbf{x}+ \frac{w}{2}\mathbf{J}\mathbf{y}\right)\end{split} \tag{64}\] we, in fact, recover _proper_ Hamilton's equations \[\begin{split}\frac{d\tilde{\mathbf{y}}_{w}}{ds}&=-\frac {\partial\mathbb{H}_{w}}{\partial\mathbf{x}}\\ \frac{d\tilde{\mathbf{x}}}{ds}&=\frac{\partial\mathbb{H }_{w}}{\partial\mathbf{y}}\end{split} \tag{65}\] although with a hamiltonian that is generally complex, but that reduces to a real function if \(w\in\{1,-1,i,-i\}\), which is the case of interest here (\(w=-i\)). In terms of these quantities, the area is given by \[\begin{split}\Delta(z)&=w\int_{0}^{|z|/2}ds\ \tilde{\mathbf{y}}_{w}(s)\cdot\frac{d\tilde{\mathbf{x}}(s)}{ds}\\ &=w\int_{0}^{|z|/2}ds\ \tilde{\mathbf{y}}_{w}(s)\cdot\left.\frac{ \partial\mathbb{H}_{w}}{\partial\mathbf{y}}\right|_{\tilde{\mathbf{x}}(s),\tilde{\mathbf{ y}}_{w}(s)}\end{split}, \tag{66}\] which, alternatively, may be cast in the form of an initial value problem \[\begin{split}\frac{d\Delta}{ds}&=w\tilde{\mathbf{y}}_{w} (s)\cdot\left.\frac{\partial\mathbb{H}_{w}}{\partial\mathbf{y}}\right|_{\tilde{\mathbf{x}}(s),\tilde{\mathbf{y}}_{w}(s)}\\ \Delta(0)&=0\end{split} \tag{67}\] The last missing ingredient is a way to calculate the jacobian \(\partial\mathbf{x}/\partial\mathbf{X}\).
It can be obtained by differentiating (65) with respect to \(\mathbf{X}\), which gives us the equations \[\frac{d}{ds}\frac{\partial\tilde{\mathbf{y}}_{w}}{\partial\mathbf{X}}=-\frac{\partial ^{2}\mathbb{H}_{w}}{\partial\mathbf{x}\partial\mathbf{y}}\frac{\partial\tilde{\mathbf{y}} _{w}}{\partial\mathbf{X}}-\frac{\partial^{2}\mathbb{H}_{w}}{\partial\mathbf{x}^{2}} \frac{\partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}; \tag{68a}\] \[\frac{d}{ds}\frac{\partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}=\frac{\partial^{2} \mathbb{H}_{w}}{\partial\mathbf{y}^{2}}\frac{\partial\tilde{\mathbf{y}}_{w}}{\partial \mathbf{X}}+\frac{\partial^{2}\mathbb{H}_{w}}{\partial\mathbf{x}\partial\mathbf{y}}\frac{ \partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}, \tag{68b}\] solved under the initial conditions \[\left.\frac{\partial\tilde{\mathbf{y}}_{w}}{\partial\mathbf{X}}\right|_{\theta=0}= \mathbf{0};\quad\left.\frac{\partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}\right|_{ \theta=0}=\mathbf{I}. \tag{69}\] In order to calculate (38) with \(t=-i\theta\), one must then solve the initial value problems specified by (65), (67) and (68) with \(w=-i\), and obtain the solution at \(s=\theta/2\). If we have \(d\) degrees of freedom, this will be a system of coupled differential equations with \(8d^{2}+4d+1\) _real_ variables. For the remainder of this work, we will test the procedure described here for the cases \(d=1,2\). It should be noted that the definition of a real double Hamiltonian as a replacement for a complex Hamiltonian is not unique. Our construction, especially suited to the Wigner-Weyl representation, coincides neither with the double Hamiltonians designed for propagating coherent states by de Aguiar et al. [37], nor with the double Hamiltonian for the Boltzmann operator of Yan and Shao [10]. Indeed, the double trajectories in this reference decouple into pairs of simple trajectories, due to the assumption of a quadratic momentum dependence. For each choice of decomplexification one obtains different trajectories in their own phase space. Unlike the simple equivalence of the various variants related directly by Fourier transforms, the SC equivalence for decomplexified Hamiltonians is, so far, a question which relies on numerical investigation. A mixed position-momentum representation is employed in [10] to derive their semiclassical approximation of the density matrix, but it requires a further Fourier transform (beyond the Wigner transform) to relate their phase space formulae to the thermal Wigner function. It is interesting that a composition of Herman-Kluk propagators evolving for positive and negative half-times is also proposed, but there it is only optional, in contrast to its essential role within the present more general theory.
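Before turning to specific systems, the recipe above can be summarized in a minimal numerical sketch for \(d=1\) (Python with SciPy). It is an illustration only, not the code of [49]: the quartic-oscillator Hamiltonian, the finite-difference gradients and all parameters are illustrative choices, and the jacobian equations (68) are omitted. It integrates Eqs. (65) and (67) for a single midpoint \(\mathbf{X}\) with \(w=-i\):

```
import numpy as np
from scipy.integrate import solve_ivp

J2 = np.array([[0.0, -1.0], [1.0, 0.0]])    # symplectic matrix J of Eq. (6) for d = 1
w = -1j                                      # Wick rotation, t = -i*theta

def H(x):                                    # illustrative Hamiltonian, x = (p, q)
    p, q = x
    return 0.5 * (p ** 2 + q ** 2) + 0.1 * q ** 4

def HH_w(x, y):                              # double hamiltonian, Eq. (64); real for w = -i
    arg = 0.5 * w * (J2 @ y)
    return (H(x + arg) + H(x - arg)).real

def grad(f, z, eps=1e-6):                    # central finite differences
    g = np.zeros_like(z)
    for k in range(z.size):
        dz = np.zeros_like(z)
        dz[k] = eps
        g[k] = (f(z + dz) - f(z - dz)) / (2.0 * eps)
    return g

def rhs(s, state):                           # Eqs. (65) and (67) as one real IVP
    x, y = state[0:2], state[2:4]            # y here stands for the real vector y_w
    dHdx = grad(lambda xx: HH_w(xx, y), x)
    dHdy = grad(lambda yy: HH_w(x, yy), y)
    dD = np.dot(y, dHdy)                     # Delta = w * (integral of this), Eq. (67)
    return np.concatenate([dHdy, -dHdx, [dD]])

X = np.array([0.3, 0.7])                     # midpoint, initial conditions (60)
theta = 2.0                                  # thermal time (hbar = 1)
sol = solve_ivp(rhs, [0.0, theta / 2.0],
                np.concatenate([X, [0.0, 0.0, 0.0]]), rtol=1e-9, atol=1e-11)
x_centre = sol.y[0:2, -1]                    # centre x(X, -i*theta/2), cf. Eq. (45)
S_E = sol.y[4, -1] - theta * H(X)            # euclidean action S^E = i S_{-i*theta}, cf. Eq. (40)
```

For the harmonic oscillator (dropping the quartic term) this reproduces the closed-form results (44) and (45), which provides a simple sanity check; the thermal averages then follow by repeating the integration over a grid of midpoints and assembling the integral (38).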
## 6 Morse System

The Morse potential [38] is given by the expression \[V(r)=D\left[1-e^{-a(r-r_{e})}\right]^{2}, \tag{70}\] where \(r\) is a radial coordinate, \(D\) is the dissociation energy, \(r_{e}\) is the equilibrium distance and \(a\) is a constant with dimensions of inverse distance. It is a model for the vibration of diatomic molecules, which takes into account the possibility of the dissociation of the bond.

Figure 6: Morse potential.

As for every one dimensional system without an explicit dependence on time, the trajectories of a particle under the action of the Morse potential can be obtained by quadrature. We will explore a few insights that can be given by these expressions, but they are too cumbersome to actually use in the calculation of thermodynamic averages. For that, we will resort to the method described in the previous section. In order to more easily describe these trajectories, it will be convenient to introduce the dimensionless coordinate \(q=a(r-r_{e})\) and the frequency \[\omega=\sqrt{\frac{2Da^{2}}{m}}. \tag{71}\] In this way, the lagrangian for this system is \[L=\frac{m\dot{r}^{2}}{2}-V(r)=D\left[\left(\frac{\dot{q}}{\omega}\right)^{2}- \left(1-e^{-q}\right)^{2}\right], \tag{72}\] from which we obtain a conjugate momentum \[p=\frac{\partial L}{\partial\dot{q}}=\frac{2D\dot{q}}{\omega^{2}} \tag{73}\] and a hamiltonian \[H=p\dot{q}-L=\frac{\omega^{2}p^{2}}{4D}+D\left(1-e^{-q}\right)^{2}. \tag{74}\] The solutions of Hamilton's equations are then [39, 40] \[\begin{split} q(t)&=\ln\left[\frac{1-\sqrt{\epsilon }\cos\left(\Omega t+\phi\right)}{1-\epsilon}\right];\\ p(t)&=\frac{2D\sqrt{\epsilon(1-\epsilon)}}{\omega }\frac{\sin\left(\Omega t+\phi\right)}{1-\sqrt{\epsilon}\cos\left(\Omega t+ \phi\right)}.\end{split} \tag{75}\] Here \[\epsilon=\frac{E}{D}=\left(\frac{\omega p}{2D}\right)^{2}+\left(1-e^{-q} \right)^{2}<1 \tag{76}\] is the orbit's normalized energy, \(\Omega=\sqrt{1-\epsilon}\ \omega\) is the orbit's frequency and \(\phi\) is a phase determined by the initial conditions. We see that, for \(t\in\mathbb{R}\), we have \(\sqrt{\epsilon}\cos\left(\Omega t+\phi\right)<1\), and the orbits are well behaved. Nonetheless, if we allow \(t\in\mathbb{C}\), we do not have this guarantee. Indeed, by taking \(t=-is,\ s\in\mathbb{R}\), and restricting ourselves to initial conditions of the form \(p_{0}=0\) and \(q_{0}<0\), which imply that \(\phi=0\), we obtain \[\begin{split} q(t)&=\ln\left[\frac{1-\sqrt{\epsilon }\cosh\left(\Omega s\right)}{1-\epsilon}\right];\\ p(t)&=\frac{2D\sqrt{\epsilon(1-\epsilon)}}{\omega }\frac{i\sinh\left(\Omega s\right)}{1-\sqrt{\epsilon}\cosh\left(\Omega s \right)},\end{split} \tag{77}\] and we see that the trajectories diverge at a finite critical time \(s_{c}\) that satisfies \(1-\sqrt{\epsilon}\cosh\left(\Omega s_{c}\right)=0\), or, choosing the positive solution, one arrives, explicitly, at \[\omega s_{c}=\frac{1}{\sqrt{1-\epsilon}}\ln\left(\frac{1}{\sqrt{\epsilon}}+ \sqrt{\frac{1}{\epsilon}-1}\right). \tag{78}\]

Figure 7: Critical time as a function of energy.

According to figure 7, one sees that \(s_{c}(\epsilon)\) is a decreasing function that satisfies \[\lim_{\epsilon\to 0}\omega s_{c}=\infty;\quad\lim_{\epsilon\to 1}\omega s_{c}=1. \tag{79}\] In this way, we obtain divergent trajectories when \(\omega s>1\), which start at the region with \(\epsilon\to 1\) or, equivalently, with \(q_{0}\rightarrow-\ln 2\), and advance towards \(\epsilon=q_{0}=0\). These divergent trajectories are, at first sight, a disaster for our theory. Note that the integrand of (62) has a pole in this case, and therefore the integral is path dependent, which translates to multiple branches for the area \(\Delta\). It is then not clear which branch to choose. Beyond that, one may see, numerically, that these divergent trajectories are accompanied by the appearance of caustics. Furthermore, if one directly applies the method described in the previous section, it is possible to obtain good results until a thermal time \(\theta=2\) (remember that the trajectories are evolved until \(s\) reaches \(\theta/2\)), when, suddenly, the approximation fails enormously.
The culprit appears to be the fact that, after the caustic, the euclidean action rapidly grows from very negative to very positive values, which translates to a large growth in the integrand (38). The fact is that, while there is a good understanding of caustic traversals for real time [26], the same cannot be said for imaginary times. Note that even the change of coordinates \(\mathbf{x}\mapsto\mathbf{X}\) may fail, as the jacobian stops being invertible. Nonetheless, the simple trick of _discarding_ the trajectories that cross caustics, imposing that they should not contribute to the integral (38), seems to completely eliminate our problem. This question certainly deserves further investigation, but, for now, we will stick to this _ad hoc_ trick, as it appears to work very well in practice. The comparison with the quantum result is also straightforward for the Morse system, as its quantum version is well understood -- there are a finite number of bound eigenstates, whose eigenfunctions and eigenenergies are known [41]. By introducing the dimensionless parameter \[\chi=\frac{\hbar\omega}{4D}, \tag{80}\] we may write the eigenenergies corresponding to the bound states as \[E_{n}=\hbar\omega\left[\left(n+\frac{1}{2}\right)-\chi\left(n+\frac{1}{2} \right)^{2}\right],\quad n=0,1,\ldots,N \tag{81}\] where \[N=\left\lfloor\frac{1}{2\chi}-\frac{1}{2}\right\rfloor, \tag{82}\] and \(\lfloor x\rfloor\) denotes the largest integer less than or equal to \(x\). In order for us to have an idea of the order of magnitude in physically relevant cases, we exhibit, in table 1, the values of \(\chi\) and \(N\) that best fit experimental results for molecules of hydrogen, oxygen and nitrogen. This system has the peculiarity of presenting both bound and free states. In this scenario, one is often only concerned with the regime in which the temperature is below the excitation energy of the free states and, therefore, a regime in which they will not contribute to the thermodynamics of the system. In this case, the classical prescription is to only perform the integrations defining the thermodynamic quantities in the region of phase space corresponding to bound states [43]. We will stick with this classical prescription in the semiclassical case, therefore restricting the integration region in (38).

\begin{table} \begin{tabular}{c c c} \hline \hline Molecule & \(\chi\) & \(N\) \\ \hline \(H_{2}\) & \(2.76\times 10^{-2}\) & 17 \\ \(O_{2}\) & \(7.58\times 10^{-3}\) & 65 \\ \(N_{2}\) & \(6.07\times 10^{-3}\) & 81 \\ \hline \hline \end{tabular} \end{table} Table 1: Values of \(\chi\) and \(N\) for some molecules. Calculated from [42].

Figure 8: Average energy for the Morse system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.

We then repeat the analysis done for the Kerr system, by calculating the energy and the heat capacity given by our approximations, and comparing them with the classical and the quantum cases. The results can be seen in figures 8 and 9. The values given by our approximation are quite remarkable, especially in the case of the energy, where one sees almost no deviation from the quantum result. We note that, for \(\chi=0.12\), we have only four bound states. The values of the heat capacity are less accurate, as also happened in the Kerr system, but are still far superior to the classical case. We also note that the approximation seems to fare better the lower the value of \(\chi\). This may be related to the fact that, when \(\chi\to 0\), we recover the spectrum of the harmonic oscillator, as can be seen from (81).
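For reference, the quantum curves in figures 8 and 9 follow from the bound spectrum alone under the prescription above; a minimal Python sketch of this reference calculation (with \(\hbar=\omega=1\), assuming only the bound levels of Eq. (81) contribute) is:

```
import numpy as np

def morse_bound_average(theta, chi):
    """Quantum <E> and c/k from the bound Morse levels, Eqs. (81), (82) and (57)."""
    N = int(np.floor(1.0 / (2.0 * chi) - 0.5))     # Eq. (82)
    n = np.arange(0, N + 1)
    E = (n + 0.5) - chi * (n + 0.5) ** 2            # Eq. (81), in units of hbar*omega
    w = np.exp(-theta * E)                          # Boltzmann weights, beta*hbar*omega = theta
    Z = w.sum()
    E_avg = (E * w).sum() / Z
    c_over_k = theta ** 2 * ((E ** 2 * w).sum() / Z - E_avg ** 2)   # Eq. (57)
    return E_avg, c_over_k
```

For example, chi = 0.12 gives four bound levels, consistent with the remark above, and the chi values of Table 1 reproduce the listed N.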
Because of the change of variables \(\mathbf{x}\mapsto\mathbf{X}\), which results in formula (38), we only have access to a displacement of the thermal Wigner function.

Figure 9: Specific heat for the Morse system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.

Nonetheless, remembering that, according to (8), the Wigner function \(W(\mathbf{x}^{\prime})\) is proportional to \(\left\langle\hat{R}_{\mathbf{x}^{\prime}}\right\rangle\) and using the fact that \(\hat{R}_{\mathbf{x}^{\prime}}(\mathbf{x})=\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)\), we may write \[\begin{split} W(\mathbf{x}^{\prime})&\propto\int d\mathbf{X} \left|\frac{\partial\mathbf{x}}{\partial\mathbf{X}}\right|^{1/2}\exp\left[\Delta(\mathbf{X} )/\hbar-\beta H(\mathbf{X})\right]\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)\\ &=\left.\left|\frac{\partial\mathbf{x}}{\partial\mathbf{X}}\right|^{1/2} \exp\left[\Delta(\mathbf{X})/\hbar-\beta H(\mathbf{X})\right]\right|_{\mathbf{X}=\mathbf{X}^{ \prime}}\end{split} \tag{83}\] where \(\mathbf{X}^{\prime}\) is the midpoint which gets mapped to \(\mathbf{x}^{\prime}\) under equations (65). One may also characterize \(\mathbf{X}^{\prime}\) as the zero of the function \[f(\mathbf{X})=\mathbf{x}\left(\mathbf{X}\right)-\mathbf{x}^{\prime}. \tag{84}\] Assuming that no caustics have been traversed, this zero is unique, and may be found by a standard Newton-Raphson method, allowing us to calculate \(W(\mathbf{x}^{\prime})_{SC}\) as well as the marginal distributions \[W(p) =\int dqW(p,q) \tag{85a}\] \[W(q) =\int dpW(p,q) \tag{85b}\] which are the expectation values of the operators \(\ket{p}\bra{p}\) and \(\ket{q}\bra{q}\), respectively. These results for the Morse system, with a thermal time \(\omega\theta=3\) and \(\chi=0.01\), are shown in figure 10. We also show the quantum projections, as well as the classical one, which is obtained from the classical Boltzmann distribution. It is possible to see that our semiclassical approximation for \(W(p)\) is essentially exact, while \(W(q)\) appears to be a displaced version of its quantum counterpart.

## 7 Nelson System

The Nelson system [44] is described by a hamiltonian \[H(\mathbf{x})=\frac{1}{2}\left(p_{x}^{2}+p_{y}^{2}\right)+V(x,y) \tag{86}\] which represents a particle of unit mass in two dimensions under the action of the Nelson potential \[V(x,y)=(x^{2}/2-y)^{2}+\mu x^{2} \tag{87}\] where \(\mu\) is a parameter. The potential is illustrated in figure 11. The classical dynamics in real time exhibits a generic mixture of stable regions within a chaotic sea, as exemplified by its Poincaré section, visualized in Figure 12. Bifurcation trees of its periodic orbits have been intensively studied in [45, 46, 47]. The important point to be borne in mind is that the range between regular (integrable) and fully chaotic classical motion pertains to the infinite time limit. For finite time, the solutions of the Hamilton-Jacobi equation, which are the backbone of semiclassical approximations of finite time quantum evolution, make no qualitative distinction between these alternatives. In any case, it is reassuring that our full double hamiltonian formalism is successful even for the thorniest types of generic mixed systems.

Figure 10: The semiclassical Wigner function is shown in the heat map.
Different versions of the projections \(W(p)\) and \(W(q)\) are shown: Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result, obtained using the classical Boltzmann distribution. Here, we simply use the Nelson system as an example of an application of our methods for a nontrivial system in two degrees of freedom. We then consider an ensemble of particles of unit mass under the action of this potential and calculate the thermodynamic averages associated with this system. In this case, there are no analytical formulas available, and we must resort to numerical techniques in order to calculate the quantum energy spectrum. Because of this limitation, we only have access to a finite number of levels, and only the lower ones are reliable. This, in turn, will only give a reliable approximation for the thermodynamic quantities in the low temperature limit. On the other hand, the classical framework, as it is known, will give good results in the high temperature limit. Our hope is that our semiclassical approximation can join these two extremes in a satisfactory manner. The semiclassical approximation for the energy, shown in figure 13, seems to bridge very well the high temperature limit, given by the classical result, and the low temperature one, given by the quantum result. Unfortunately, the approximation for the heat capacity seems to fail for much smaller values of \(\theta\), specially for \(\mu=0.5\), as can be seen in figure 14. ## 8 Discussion Having reviewed and incremented the semiclassical approximation for the thermal Wigner function, we developed it into a numerical method that can be applied to a broad class of systems, including Hamiltonians that are not quadratic in their momenta. Even though further investigation will be required, we obtained good agreement of energy averages with the quantum results for a wide range of different systems and parameters. It is presumed that that this method can be useful for systems with Figure 12: Poincaré section for the Nelson system with \(\mu=2\) and energy \(E=4.8\) with respect to the hyperplane \(y=0\). Each color represents a different trajectory. many degrees of freedom, as its quadratic scaling law can keep computation tractable even for high dimensions. The Weyl propagator employed here belongs to the class of propagators related to the original Van Vleck propagator by various Fourier transforms: All of these require the so called 'root search'. Thus, whereas one seeks trajectories with given end positions for the Van Vleck propagator, it is the centre point between the extremities of the trajectory that is prescribed in the Wigner-Weyl representation. So it is only for integrals involving the semiclassical Weyl propagator (or its Wick rotation) that one can switch from the centre to the initial or final value of the trajectory (as in [29]) or to its midpoint, in the present instance. Each representation illuminates a different aspect of quantum mechanics. Thus, the position representation, so far favoured by the vast majority of computations in this field [10], can easily provide the probability density for positions, that is, the diagonal matrix elements of the density matrix are supplied explicitly. On the other hand, the Figure 13: Average energy for the Nelson system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result. 
momentum density requires a double Fourier transform over the pair of positions. In contrast, the Wigner function provides either density by a simple projection integral [25], as exemplified by the momentum and position densities for the Morse system, shown in Fig. 10. A further unique feature of the Wigner-Weyl representation is its symplectic invariance [26], that is, unitary quantum (metaplectic) transformations corresponding to classical linear canonical phase space transformations transport the Wigner function classically. Thus, the Wigner function supplies through simple projection integrals, not only the momentum probability density and the position probability density, but the probability density along any Lagrangian plane in phase space, that is, any plane where the action for any closed circuit is null. This multiple probability content of the density operator can be used to reconstruct it through multiple measurements in the process of quantum tomography [48], but prior knowledge of the thermal Wigner function is welcome shortcut. Figure 14: Specific heat for the Nelson system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result. ## Acknowledgments We thank Gabriel Lando for his advice on the numerics. Funding provided by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) and Instituto Nacional de Ciencia e Tecnologia de Informacao Quantica is gratefully acknowledged. ## Declarations All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript ## Data Availability The code used in this article can be found at [49]. ## Appendix A Wigner symbol of normal forms In order to calculate the Wigner symbol of an operator of the form (47), it is sufficient to do so for the monomials \(\hat{o}^{n}\), where \(\hat{o}=\hat{p}^{2}+\hat{q}^{2}\). Our strategy will consist in finding a recurrence relation that allows us to calculate \(o^{n+1}(p,q)\) in terms of \(o^{n}(p,q)\). As the initial term \(o^{1}(p,q)=o(p,q)=p^{2}+q^{2}\) is readily obtained, the problem is solved. For that, we first observe that, as \(\hat{o}^{n}\) is hermitian, \(o^{n}(p,q)\) must be real. Using this fact, writing \(\hat{o}^{n+1}=\hat{o}\hat{o}^{n}\), and applying Groenewold's rule (11), we arrive at the recurrence relation \[o^{n+1}(p,q)=\left(p^{2}+q^{2}-\frac{\hbar^{2}}{4}\nabla^{2}\right)o^{n}(p,q)\] (A1) where \(\nabla^{2}=\partial_{p}^{2}+\partial_{q}^{2}\). This relation is further simplified if we introduce the coordinates \(s,\phi\), defined by \(p=\sqrt{s}\cos\phi,\ q=\sqrt{s}\sin\phi\), in terms of which the laplacian takes the form \[\nabla^{2}=4\left(s\partial_{s}^{2}+\partial_{s}\right)+\frac{1}{s}\partial_{\phi }^{2},\] (A2) which allows us to rewrite (A1) as \[o^{n+1}(s,\phi)=\left[s-\hbar^{2}\left(s\partial_{s}^{2}+\partial_{s}+\frac{1} {4s}\partial_{\phi}^{2}\right)\right]o^{n}(s,\phi)\] (A3) Since \(\partial_{\phi}o(s,\phi)=0\), and, as deduced from the recurrence relation, \(\partial_{\phi}o^{n}(s,\phi)=0\Rightarrow\partial_{\phi}o^{n+1}(s,\phi)=0\), we prove by induction that \(\partial_{\phi}o^{n}(s,\phi)=0\)\(\forall\)\(n\), which eliminates the derivative with respect to \(\phi\) from (A3). 
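As a cross-check of this recurrence, the low-order symbols can be generated symbolically. The following minimal sketch (Python/SymPy, illustrative only, with \(\hbar\) kept as a symbol) simply iterates (A1).

```python
import sympy as sp

p, q, hbar = sp.symbols('p q hbar', real=True)

def weyl_symbols(n_max):
    """Iterate (A1): o_{n+1} = (p^2 + q^2) * o_n - (hbar^2 / 4) * Laplacian(o_n)."""
    o = p**2 + q**2                                   # o_1(p, q)
    out = [o]
    for _ in range(n_max - 1):
        lap = sp.diff(o, p, 2) + sp.diff(o, q, 2)
        o = sp.expand((p**2 + q**2) * o - sp.Rational(1, 4) * hbar**2 * lap)
        out.append(o)
    return out

for k, ok in enumerate(weyl_symbols(4), start=1):
    print(f"o^{k}(p,q) =", ok)
# Collecting powers of (p^2 + q^2) in the printed polynomials reproduces (A4) below.
```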
This allows us to easily obtain the first terms in the recurrence relation, which, already expressed in terms of \(p,q\), are given by \[o^{2}(p,q) =\left(p^{2}+q^{2}\right)^{2}-\hbar^{2}\] (A4) \[o^{3}(p,q) =\left(p^{2}+q^{2}\right)^{3}-5\hbar^{2}\left(p^{2}+q^{2}\right)\] \[o^{4}(p,q) =\left(p^{2}+q^{2}\right)^{4}-14\hbar^{2}\left(p^{2}+q^{2}\right) ^{2}+5\hbar^{4}\] We see that, in general, \(o^{n}(p,q)\) is a polynomial of order \(n\) in \((p^{2}+q^{2})\), whose dominant term is \((p^{2}+q^{2})^{n}\), while corrections proportional to even powers of \(\hbar\) are also present. ## Appendix B Numerical Details The calculations in this article were performed using the Julia language [50]. The package DifferentialEquations.jl [51] was used to solve the necessary differential equations in parallel. The calculations were performed on a 12th Gen Intel Core i5-12600K processor, which has 16 threads. ### Morse System The integrals related to the Morse system were performed using Gaussian quadrature. The integration region, in units of \(\omega=\hbar=1\), is given by \[R=\left\{(p,q)\in\mathbb{R}^{2}\;\;\left|\;\chi p^{2}+\frac{1}{4\chi}\left(1-e^{- q}\right)^{2}<\frac{1}{4\chi}\right.\right\}\] (B5) Introducing the variables \(\tilde{P}=2\chi p\) and \(Q=1-e^{-q}\), we obtain \[\begin{split} R&=\left\{\left(\tilde{P},Q\right) \in\mathbb{R}^{2}\;\;\left|\;\tilde{P}^{2}+Q^{2}<1\right.\right\}\\ &=\left\{\left(\tilde{P},Q\right)\in\mathbb{R}^{2}\;\;\left|\;Q \in(-1,1)\,;\;\tilde{P}\in\left(-\sqrt{1-Q^{2}},\sqrt{1-Q^{2}}\right)\right. \right\},\end{split}\] (B6) which can be simplified by defining \(P=\tilde{P}/\sqrt{1-Q^{2}}\). Then, we have \[R=\left\{\left(P,Q\right)\in\mathbb{R}^{2}\;\;\left|\;Q\in(-1,1)\,;\;P\in(-1,1 )\,\right.\right\}.\] (B7) The inverse transformation is then \[\begin{cases}p=\sqrt{1-Q^{2}}\frac{P}{2\chi}\\ q=-\ln\left(1-Q\right)\end{cases},\] (B8) which has jacobian determinant \[\det\frac{\partial(p,q)}{\partial(P,Q)}=\frac{1}{2\chi}\sqrt{\frac{1+Q}{1-Q}},\] (B9) which is proportional to the weight function of a Gauss-Chebyshev quadrature of the third kind. We therefore use this quadrature rule to perform the integration over the \(Q\) coordinate, while a Gauss-Legendre quadrature is used to integrate over \(P\). The advantage of Gaussian quadrature is that the integration points are independent of \(\theta\), so that a single set of points can be used to compute the thermodynamic quantities over a range of temperatures. In this work, we used a grid of \(300\times 300\) points to perform the integration, which corresponds to \(9\times 10^{4}\) trajectories. ### Nelson System In the semiclassical calculations for the Nelson system, different techniques were used for different sets of parameters. In the case of the energy, as well as the heat capacity with \(\mu=1.5,2\), we first performed the change of variables \((p_{x},p_{y},x,y)\mapsto(P_{x},P_{y},X,Y)\) with \[\begin{cases}P_{x}=\sqrt{\frac{\theta}{2}}p_{x}\\ P_{y}=\sqrt{\frac{\theta}{2}}p_{y}\\ X=\sqrt{\theta\mu}x\\ Y=\sqrt{\theta}(y-x^{2}/2)\end{cases}.\] (B10) This transformation has unit jacobian determinant and, in terms of the new variables, the classical Boltzmann weight is simply \[e^{-\beta H}=\exp\left[-\left(P_{x}^{2}+P_{y}^{2}+X^{2}+Y^{2}\right)\right].\] (B11) The integration is then performed by an h-adaptive technique as described in [52, 53]. The Julia implementation can be found in [54]. We limited the integration algorithm to roughly \(10^{5}\) integration points. 
We used the BS3 [55, 56] and Vern6 [56, 57] algorithms to solve the differential equations, and the tolerances varied between \(10^{-2}\) and \(10^{-6}\). For each \(\mu\), the corresponding plot took around 40 seconds to 3 minutes to complete. We found that the heat capacities with \(\mu=0.5,1\) were much harder to integrate. In this case, we did not perform a change of variables and resorted to a Monte Carlo integration method, where the \(10^{7}\) integration points were sampled from the _classical_ Boltzmann distribution \(e^{-\beta H(\mathbf{x})}/Z\) using the Metropolis-Hastings algorithm [58, 59]. In this case, for each \(\mu\), the corresponding plot took around 3 hours to complete. For the energy spectrum, which is used to calculate the quantum versions of the thermodynamic quantities, we used a grid of \(160\times 160\) points, where \(x\) spanned from \(-4.5\) to \(4.5\), and \(y\) spanned from \(-4\) to \(5\). We then approximated the laplacian of the time-independent Schrödinger equation through a finite-difference matrix over this grid. The discretization of this equation gives rise to an eigenvalue equation, which can be solved with standard linear algebra libraries.
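For reference, the finite-difference diagonalization just described can be sketched as follows. This is an illustrative Python/SciPy version (the authors used Julia), with the grid and potential as quoted above, units \(\hbar=m=1\) assumed, and Dirichlet boundary conditions; only the lowest eigenvalues should be trusted.

```python
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import eigsh

def nelson_spectrum(mu, nx=160, ny=160, n_levels=40, hbar=1.0):
    """Lowest eigenvalues of H = -(hbar^2/2) * Laplacian + (x^2/2 - y)^2 + mu * x^2,
    discretized with 5-point finite differences on the grid quoted in the text."""
    x = np.linspace(-4.5, 4.5, nx)
    y = np.linspace(-4.0, 5.0, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]

    def lap1d(m, h):                                  # 1D second derivative, Dirichlet BCs
        return sps.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2

    lap = sps.kron(lap1d(nx, dx), sps.identity(ny)) + sps.kron(sps.identity(nx), lap1d(ny, dy))
    X, Y = np.meshgrid(x, y, indexing='ij')           # ordering matches the Kronecker products
    V = (X**2 / 2.0 - Y)**2 + mu * X**2
    H = -(hbar**2 / 2.0) * lap + sps.diags(V.ravel())
    return np.sort(eigsh(H, k=n_levels, sigma=0.0, return_eigenvectors=False))

print(nelson_spectrum(mu=2.0, nx=80, ny=80, n_levels=10))   # coarser grid for a quick demo
```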
The Weyl-Wigner representation of quantum mechanics makes it possible to express the density operator in phase space as a function (the Wigner function), which plays the role of a probability distribution. In the context of statistical mechanics, this representation clarifies the transition from the classical to the quantum regime, since in the high-temperature limit the thermal Wigner distribution approaches the Boltzmann distribution. We approximate the quantum phase-space representation of the thermal density operator at general temperatures using classical trajectories, obtained through a Wick rotation of the semiclassical approximation for the Weyl propagator. We also develop a numerical scheme that allows this approximation to be applied to a broad class of systems. The approximation is tested on systems with one and two degrees of freedom, showing that thermodynamic averages are well reproduced over a wide range of parameters.
2309.10916
What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples
Adversarial examples, deliberately crafted using small perturbations to fool deep neural networks, were first studied in image processing and more recently in NLP. While approaches to detecting adversarial examples in NLP have largely relied on search over input perturbations, image processing has seen a range of techniques that aim to characterise adversarial subspaces over the learned representations. In this paper, we adapt two such approaches to NLP, one based on nearest neighbors and influence functions and one on Mahalanobis distances. The former in particular produces a state-of-the-art detector when compared against several strong baselines; moreover, the novel use of influence functions provides insight into how the nature of adversarial example subspaces in NLP relate to those in image processing, and also how they differ depending on the kind of NLP task.
Shakila Mahjabin Tonni, Mark Dras
2023-09-19T20:28:24
http://arxiv.org/abs/2309.10916v3
# What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples ###### Abstract Adversarial examples, deliberately crafted using small perturbations to fool deep neural networks, were first studied in image processing and more recently in NLP. While approaches to detecting adversarial examples in NLP have largely relied on search over input perturbations, image processing has seen a range of techniques that aim to characterise adversarial subspaces over the learned representations. In this paper, we adapt two such approaches to NLP, one based on nearest neighbors and influence functions and one on Mahalanobis distances. The former in particular produces a state-of-the-art detector when compared against several strong baselines; moreover, the novel use of influence functions provides insight into how the nature of adversarial example subspaces in NLP relate to those in image processing, and also how they differ depending on the kind of NLP task. ## 1 Introduction The high sensitivity of deep neural networks (DNNs) to slight modifications of inputs is widely recognised and makes DNNs a convenient target for adversarial attacks (Szegedy et al., 2014). Creating malicious inputs or adversarial examples by adding small perturbations to the model's inputs can cause the model to misclassify the inputs that would be predicted correctly otherwise. Such adversarial attacks are highly successful in both image and Natural Language Processing (NLP) domains. In the image domain, due to the straightforwardness of creating adversarial images by calibrating noise to the original records, researchers have explored many high-performing adversarial attacks (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016; Carlini and Wagner, 2017, for example). The perturbations of the input images degrade the model's performance with a high success rate and are generally imperceptible to a human. Work in the NLP space has followed that in image processing. Here, in addition to the goal of impacting the model's prediction, adversarial text examples need to be syntactically and semantically sound to the reader. Consequently, adversarial attack techniques on text use semantics-preserving textual changes at the character level, word level and phrase level or sentence level (Pruthi et al., 2019; Alzantot et al., 2018; Li et al., 2020, for example). Table 1 illustrates two examples, showing different types of attack formulation in NLP. In the image domain, defence against adversarial attack can be 'proactive' or'reactive' (Cohen et al., 2020), where proactive defence refers to improving the model's robustness (Madry et al., 2018; Gopinath et al., 2018; Cohen et al., 2019) and reactive defence focuses on detecting real adversarial examples before they are passed to neural networks (Feinman et al., 2017; Ma et al., 2018; Lee et al., 2018; Papernot and McDaniel, 2018). Broadly speaking, for reactive methods, the detection of adversarial examples involves taking a conceptualisation of the space of learned representations and the adversarial subspaces within them (Tanay and Griffin, 2016; Tramer et al., 2017), and then characterising the differences in some function of the learned representations between the actual and the adversarial inputs produced by the DNN; for example, Ma et al. (2018) applied a local intrinsic dimensionality (LID) measure to the learned representations and used that to successfully distinguish normal and adversarial images. 
In the NLP space, relatively fewer adversarial defence techniques have been proposed. Among them, many focus on enhancing the models' robustness proactively through adversarial training (Jia et al., 2019; Pruthi et al., 2019; Jin et al., 2020); generating textual samples for proactive adversarial training is computationally expensive because of necessary search and constraints based on sentence encoding (Yoo and Qi, 2021). Reactive adversarial text detection techniques have mostly been different from their image counterparts, in that they typically modify the input by e.g. repeatedly checking word substitutions (Mozes et al., 2021; Wang et al., 2022; Zhou et al., 2019) rather than trying to characterise the learned representations; consequently, they focus on detecting synonym-substitution adversarial examples. An exception is the work of Liu et al. (2022), which both adapts LID to the text space and proposes the new MultiDistance Representation Ensemble (MDRE) method; their state-of-the-art results suggest that the detection methods based on learned representations drawn from the image processing domain are a promising source of ideas for NLP. The particular focus of the present paper is the use of influence functions in adversarial detection methods, proposed for image processing by Cohen et al. (2020). They propose that distances to nearest neighbors (used by previous methods) and influence functions, which measure the impact of every training sample on validation or test set data, can be used complementarily to detect adversarial examples: they argue, with support from the strong results from their method, that adversarial examples locate in different regions of the learned representation space of their neighbors with respect to influence functions, compared to original data-points (Fig 1). Specifically, in the image space, for original datapoints, nearest neighbors and influence function training points overlap, but for adversarial examples, they do not. Influence functions have only relatively recently begun to be explored in NLP, with Han et al. (2020) finding that, with the variety of classification tasks in NLP, the information provided by influence functions differs from image processing and is task-dependent. In this paper, noting significant differences between inputs in NLP and image processing (continuous versus discrete) and attack types, we explore whether and how they can help in NLP in detecting adversarial examples using learned representations, and what this can tell us about the nature of adversarial subspaces. We also adapt a second method from the image processing literature, by Lee et al. (2018), which uses a Mahalanobis-based confidence score; this was a strong baseline for Cohen et al. (2020), giving an additional perspective on the nature of adversarial subspaces in NLP. The contributions of this paper are as follows: * An adaptation of two adversarial detection techniques from the image processing literature, Mahal confidence (Lee et al., 2018) and Nearest Neighbor Influence Functions (NNIF) (Cohen et al., 2020), into the text domain; we show that we can achieve SOTA results relative to several strong, recent baselines. * An analysis of how influence functions work in this context, contributes to understanding both the nature of adversarial subspaces in the text space and what information influence functions can provide. 
## 2 Related Work **Adversarial Defences for Image** An intuitive adversarial defence is to train a deep neural network to be robust against adversarial input samples by e.g. mixing adversarial samples with the training data (Goodfellow et al., 2015; Madry et al., 2018; Xie et al., 2019); popular platforms like Cleverhans (Papernot et al., 2016) are available to support robust training. However, such defences, termed as 'proactive', are expensive and vulnerable to optimisation attacks (Cohen et al., 2020). In contrast, others have proposed'reactive' defences that identify the variations in the representations learned by the DNN on the original input images to separate the adversarial samples; typically, these posit that adversarial examples can be characterised as belonging to particular subspaces (Tramer et al., 2017), and the different approaches aim to capture the nature of these subspaces in different ways, with detectors such as logistic regression classifiers built over the learned representations. Feinman et al. (2017) built detectors us Figure 1: Adversarial examples characterised by divergence in learned representations between nearest neighbors and training points selected by influence functions, unlike original examples (from (Cohen et al., 2020)). ing kernel density estimation on the last hidden layer of a DNN. Ma et al. (2018) characterised the dimensional properties of adversarial subspaces using Local Intrinsic Dimensionality (LID), applied to the distribution of distances to neighbors in the region around a sample. Papernot and McDaniel (2018), noting that DNNs are poorly calibrated (Guo et al., 2017), proposed Deep k-Nearest Neighbors (DkNN), a kNN classifier constructed over the hidden layers of a DNN classifier; such a DkNN classifier could match the performance of the DNN while also providing better confidence estimates of prediction, and these confidence estimates are used in identifying adversarial examples. Lee et al. (2018) constructed Mahalanobis distance-based confidence scores from DNNs, using these scores to construct a detection classifier. Cohen et al. (2020) investigated the use of influence functions in adversarial image detection that explain the decisions of a model by identifying influential training examples, and comparing these points to those found in a DkNN approach, using the differences in distributions between real examples and adversarial ones to construct classifiers that outperformed the approaches above. In this paper, we focus on the last two and adapt them to NLP. **Adversarial Defences for Text** Improving adversarial robustness remains a widely used mechanism in defending textual adversaries (Li et al., 2016, 2017; Ribeiro et al., 2018; Jones et al., 2020). In NLP, however, there have been fewer reactive methods. To prevent character-level and word-level adversarial perturbations Zhou et al. (2019) proposed the learning to discriminate perturbations (DISP) framework that detects and replaces suspicious words. Mozes et al. (2021) emphasised word frequencies in the texts in determining adversarial perturbations, arguing that adversarially infused words are less likely to occur, and constructed a rule-based, model-agnostic frequency-guided word substitutions (FGWS) algorithm. The approach of Wang et al. (2022) voted the prediction label for a set of samples generated by random word substitutions from a sentence and matched the voted prediction label with the original sentence's prediction label to detect word-level adversaries. 
Anomaly Detection with Frequency-Aware Randomization (ADFAR) as proposed by Bao et al. (2021) adds anomaly detection as an additional optimization training objective and augments the training set with random rare-frequency word substitutions of the original sentences. Rather than focus on word substitution as the above methods, Mosca et al. (2022) trained an adversarial detector on Shapley additive explanations (Fidel et al., 2020). In NLP, only Liu et al. (2022) has used the idea of constructing detectors over learned representations as in the image domain, which explored the idea of adapting the LID (Ma et al., 2018) method above. In addition, they proposed the MultiDistance Representation Ensemble Method (MDRE) algorithm that puts together learned representations from multiple DNN models to detect adversarial texts. Unlike other approaches, the same detector could apply to different types of attacks (character-based, word-based, syntax-based) and MDRE in particular improved over baseline methods across the range of attacks. This motivates our adaptation of more recent techniques from the image domain. **Influence Functions** The influence function (IF) is a statistical method that captures the dependence of an estimator on any one of the sample (training) points. Koh and Liang (2017) were the first to adapt IFs to image DNNs as a method for interpreting the model's decision: the IF finds the most influential training samples, both helpful and harmful, contributing to each prediction. The essence of the approach is to consider a point \(z\) from the training set and compute the change to parameters \(\theta\) if \(z\) were upweighted by a small \(\epsilon\); they then defined closed-form expressions \(\mathcal{I}(z,z_{\text{test}})\) to identify the most influential points \(z\) on a test point \(z_{\text{test}}\). IFs were first applied to NLP deep architectures by Han et al. (2020), and compared with established gradient-based saliency maps as a way of interpreting input feature importance, using sentiment classification and natural language inference (NLI) as testbeds. Their first finding was \begin{table} \begin{tabular}{l l l} \hline \hline Original Text & at last, a movie that handles the probability of alien visits with the appropriate depth and loving warmth. & Positive \\ \hline Char-level & at last, a movie that handles the probability of alien visits with (Pruthi et al., 2019) & Negative \\ World-level & at last, a movie that handles the probability of alien trips with (Alzantot et al., 2018) & Negative \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of textual adversarial instances on IMDB and the prediction of BERTBASE on them that IFs are reliable for deep NLP architectures. Their second interesting finding was that while IFs and saliency measures were consistent for sentiment classification, they differed for NLI: they concluded that for more complex understanding tasks like NLI, IFs captured more useful interpretive information. They also found IFs to be useful for identifying and quantifying the effect of data artifacts on model prediction. A few other works have continued investigating the usefulness of IFs in NLP, such as Guo et al. (2021), who proposed a faster method for IF computation by restricting candidates to top-\(k\) nearest neighbors. ## 3 Methods ### NNIF Detector We follow Cohen et al. (2020)'s Nearest Neighbor Influence Function (NNIF) method and apply it to NLP architectures. 
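For reference, the closed-form influence score of Koh and Liang (2017) mentioned above, \(\mathcal{I}(z,z_{\text{test}})=-\nabla_{\theta}L(z_{\text{test}},\hat{\theta})^{\top}H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta})\), can be sketched on a toy model as follows. This is an illustrative dense-Hessian version for binary logistic regression; practical implementations, including the one adopted here, replace the explicit inverse with LiSSA-style approximations, and all names below are ours.

```python
import numpy as np

def grad_loss(w, x, y, lam=1e-3):
    """Gradient of the L2-regularized logistic loss at one example (x, y), y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x + lam * w

def hessian(w, X, lam=1e-3):
    """Dense empirical Hessian of the training loss (toy models only;
    LiSSA-style approximations replace this step in practice)."""
    P = 1.0 / (1.0 + np.exp(-X @ w))
    return (X * (P * (1.0 - P))[:, None]).T @ X / len(X) + lam * np.eye(len(w))

def influence_scores(w, X_train, Y_train, x_test, y_test):
    """I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z), for every training point z."""
    h_inv_g = np.linalg.solve(hessian(w, X_train), grad_loss(w, x_test, y_test))
    return np.array([-h_inv_g @ grad_loss(w, x, y) for x, y in zip(X_train, Y_train)])

# The M most positive and M most negative scores give the two sets of training
# points whose DkNN ranks and distances are fed to the detection classifier.
```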
The essence of it is, for some point \(z\) that may be regular or adversarial, to identify the training points that are most influential and those that are nearest neighbors to \(z\), and to build a classifier based on those that will predict whether \(z\) is regular or adversarial based on differences in relative distributions (Fig 1). We take a DNN classifier and dataset for some particular task (e.g. sentiment classification); we refer to this DNN as the target model. For each test sample \(z_{\text{test}}\), we compute the influence scores \(\mathcal{I}(z,z_{\text{test}})\) for all training points \(z\), given the target model, and select the top \(M\) most helpful and \(M\) most harmful (details App B). We then construct a DkNN classifier in the style of Papernot and McDaniel (2018), using the hidden layers of the target model and the training points. For each \(z_{\text{test}}\) we find the ranks \(\mathcal{R}\) and distances \(\mathcal{D}\) using this DkNN for the training examples identified by the IFs; we denote by \(\mathcal{R}^{M\uparrow},\mathcal{D}^{M\uparrow},\mathcal{R}^{M\downarrow}, \mathcal{D}^{M\downarrow}\) the ranks and distances of the \(2M\) most helpful and harmful training examples, respectively. We finally construct a logistic regression classifier with features \((\mathcal{R}^{M\uparrow},\mathcal{D}^{M\uparrow},\mathcal{R}^{M\downarrow}, \mathcal{D}^{M\downarrow})\) to detect whether an input is adversarial or not. Where the target model of Cohen et al. (2020) is a ResNet model, ours is a large language model (LLM) base with additional layers that are fine-tuned for the chosen tasks (SS4.3). The hidden layers we use for NNIF are then the pre-final additional layers on top of the DNN (SS4.5). ### Mahal Detector Here we follow Lee et al. (2018), who build a detector that captures the variation in the probability density of the class-conditional Gaussian distribution of the learned representation by the model. Motivated, like Papernot and McDaniel (2018), by the problem that DNNs are poorly calibrated (Guo et al., 2017), they replace the final softmax layer with a Gaussian Discriminant Analysis (GDA) softmax classifier. For a set of training points \(\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\) with the label \(y\in\{1,2,\ldots,C\}\), the class mean \(\hat{\mu}_{c}\) and covariance \(\hat{\Sigma}\) are computed for each class \(c\) to approximate the generative classifier's parameters from the pre-trained target DNN \(f(x)\). Next, from the obtained class-conditional Gaussian distribution, the Mahalanobis distance between a test sample \(x\) and its closest distribution is measured to find the confidence score \(M(x)=\max_{c}-(f(x)-\hat{\mu}_{c})^{T}\hat{\Sigma}^{-1}(f(x)-\hat{\mu}_{c})\). Finally, we label the Mahalanobis scores for the test samples as positive and adversarial samples as negative and input this feature set to an LR detector. Lee et al. (2018) propose two calibration techniques to improve the detection accuracy and make regular and out-of-distribution samples more separable: (1) _input pre-processing_, where they add a small noise in a controllable manner to the test samples; and (2) _feature ensemble_, which combines the confidence scores from all the hidden layers of the DNN including the final features. Both together substantially improve the performance of the base approach; each individually reaches almost the combination of the two. 
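A minimal sketch of the class-conditional Gaussian fit and the confidence score \(M(x)\) defined above is given below (NumPy, single layer; the feature-ensemble variant simply repeats this per hidden layer and concatenates the scores). Function and variable names are illustrative, not the original implementation.

```python
import numpy as np

def fit_class_gaussians(feats, labels, n_classes):
    """Per-class means and a single tied covariance, estimated from training features."""
    mus = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    centered = feats - mus[labels]
    precision = np.linalg.pinv(centered.T @ centered / len(feats))
    return mus, precision

def mahalanobis_confidence(x_feat, mus, precision):
    """M(x) = max_c -(f(x) - mu_c)^T Sigma^{-1} (f(x) - mu_c)."""
    diffs = x_feat - mus                              # shape: (n_classes, d)
    return float(np.max(-np.einsum('cd,de,ce->c', diffs, precision, diffs)))

# Feature ensemble: compute one such score per hidden layer and concatenate them
# into the feature vector passed to the logistic-regression detector.
```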
As for our NNIF detector in SS3.1, our target DNN will have several hidden layers, and we explore models both with final layer-only representations and feature ensembles over all hidden layers. The input preprocessing of (1) is appropriate to the continuous space of images, but not in an obvious way to text, so we do not use that. ## 4 Experimental Setup We broadly follow the setup of Liu et al. (2022), as the prior NLP work that has used learned representations to detect adversarial examples. ### Tasks and Datasets We work on the sentiment analysis and the natural language inference tasks, two widely tasks used in the adversarial example generation (Pruthi et al., 2019; Alzantot et al., 2018; Ribeiro et al., 2018; Ren et al., 2019; Iyyer et al., 2018; Yoo and Qi, 2021; Li et al., 2020, 2021; Jin et al., 2020). In addition, these are the two tasks that were used for the investigation of the use of influence functions in NLP (Han et al., 2020). **Sentiment Analysis** For the sentiment analysis, we use the IMDB dataset (Maas et al., 2011) that has 50,000 movie reviews, split into 25,000 training and 25,000 test examples with binary labels indicating positive or negative sentiment. IMDB dataset has 262 words per review on average. In all experiments, we use 512 maximum sequence lengths for the language models on IMDB. **Natural Language Inference** The Multi-Genre NLI (MultiNLI) dataset (Williams et al., 2018), used for the natural language inference (NLI) task, contains pairs of sentences annotated with textual entailment information. The test examples are mismatched with train examples and are collected from different sources. The dataset has 392,702 training and 9,832 testing examples labelled as three classes: entailment, neutral, and contradiction. Each text of the dataset has 34 words on average. On this dataset, we set the maximum sequence length to 256. ### Attack Methods We use the implementations from Liu et al. (2022) of two widely used attack methods that apply character-level and word-level perturbations to construct adversarial examples. We take a BERTBASE model (SS4.3) as the target model. An adversarial attack is successful when the adversaries have different predictions than the target mode's original predictions. Our two methods are (more details in SSA.1): * CharAtt(Pruthi et al., 2019). This is a character-level attack that tweaks the original texts by randomly swapping, dropping and adding characters or adding a keyboard mistake. * WordAtt(Alzantot et al., 2018). This is a word-level attack that allows the attacker to alter practically every word from the sentence if required with the context-preserving synonymous words. This implementation follows Jia et al. (2019) in speeding up the synonym search. ### Target Model Following (Liu et al., 2022), we use a pre-trained BERT-base-cased model, adding a fully connected dense layer of 768 nodes, a layer of 50% dropout, and another dense layer of 768 nodes. The dataset split is 80-20 train-test. We train the model for 3 epochs with \(5e^{-5}\) learning rate and AdamW optimization without freezing any layer of the backbone model. This BERTBASE model achieves \(92.90\%\) and \(82.01\%\) test accuracies on the IMDB and MultiNLI datasets respectively. The accuracies of the clean model and the model under attack are given in Table 6; we note that in all the cases, CharAtt degrades the classifier's performance comparatively more than WordAtt. 
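For concreteness, the target model described above can be sketched as follows (PyTorch/HuggingFace). The dense/dropout/dense head follows the description in the text; the use of the pooled output, the tanh activations and the final classification layer are our assumptions, since they are not spelled out, and this is not the authors' exact code.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TargetClassifier(nn.Module):
    """BERT-base-cased backbone with a 768-unit dense layer, 50% dropout and a
    second 768-unit dense layer, as described in the text; the classifier head
    producing the logits is our assumption."""
    def __init__(self, num_labels):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-cased")
        self.dense1 = nn.Linear(768, 768)
        self.dropout = nn.Dropout(0.5)
        self.dense2 = nn.Linear(768, 768)
        self.classifier = nn.Linear(768, num_labels)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        h1 = torch.tanh(self.dense1(pooled))          # activation choice is an assumption
        h2 = torch.tanh(self.dense2(self.dropout(h1)))
        return self.classifier(h2), (h1, h2)          # hidden features reused by NNIF / Mahal

model = TargetClassifier(num_labels=2)                # 2 classes for IMDB, 3 for MultiNLI
```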
Sizes for IMDB and MultiNLI datasets and number of generated adversarial texts from them are in Table 5. ### Detectors For data to train the adversarial example detectors on, we follow standard practice in image processing (Ma et al., 2018; Cohen et al., 2020) and Liu et al. (2022) and use only those examples that are correctly classified by the target model (SS4.3) from the overall test set. Adversarial attacks are then applied to these examples; the originals (labelled positive) and their adversarial alternatives (negative) then form the detection dataset. Due to the computational intensity of estimating the influential training records for the NNIF method, we limit our detectors to having 10k records (5k tests and 5k adversarial texts) and follow a similar data size for all the other detection methods for comparability. We split the detection dataset 80-20 train-test, and construct and evaluate logistic regression classifiers as detectors over this detection dataset split for our proposed methods (SS4.5) and baselines (SS4.6). ### NNIF and Mahalanobis Methods **NNIF** We adapt the standard NNIF implementation of Cohen et al. (2020). For influence score calculation, Cohen et al. (2020) uses the Darkon module for the image; we instead incorporate the influence function calculation from Han et al. (2020)1 which uses Linear time Stochastic Second-Order Algorithm (Agarwal et al., 2017) for faster convergence, and makes several adaptations to NLP. We build the DkNN containing one layer with \(l_{2}\) distance and brute-force search. Footnote 1: [https://github.com/xhan77/influence-function-analysis](https://github.com/xhan77/influence-function-analysis) Because IF calculations are expensive, like Cohen et al. (2020) and Han et al. (2020) we only sample from among all neighbors: we compute the IF on 6K training datapoints uniformly randomly sampled (Cohen et al. (2020) sample 10K neighbors from 49K training points). We choose \(M=500\) for our main results, which is at the top end of the range of values of \(M\) selected by Cohen et al. (2020); we show in SS5.2 that, unlike the image processing domain, results in our experiments are broadly monotonically increasing as \(M\) increases. Note that we don't use the faster variant of IF computation of Guo et al. (2021), as NNIF requires _separate_ perspectives from IFs and kNNs, and FastIF restricts IF search to subsets of kNNs. **Mahal** As per SS3.2, we compute the mean and covariance for each class and calculate the Mahalanobis distance score for each normal instance and its adversarial counterpart. Like Ma et al. (2018), we consider both using only the final layer of the model and stacking scores from each layer of the model (feature ensembling). Feature ensembling is always better, so we only include those in the main results, but do separately analyse the contribution of the feature ensembling. **Code** For both of these, our code uses the implementation of Cohen et al. (2020) as a starting point and adapts as above.2 Footnote 2: Code: [https://github.com/SJabin/NNIF](https://github.com/SJabin/NNIF). ### Baseline Detection Methods We evaluate six adversarial text detection methods as our baseline detectors. The first four are from Liu et al. (2022) (we omit the language model, as it operates essentially at the chance), while the other two are also recent high-performing systems.3 We give more details on the methods in SSA.2. **DISP**(Zhou et al., 2019). 
This is a system that aims to correct any adversarial perturbations before an example is passed to a classifier. Liu et al. (2022) adapt this to detecting the adversarial examples. **FGWS**(Mozes et al., 2021). This algorithm uses a word frequency threshold and calibrated replacement approach to detect adversarial examples. It is only designed to work against word-level attacks. **LID**(Liu et al., 2022). From among image processing detection methods, Liu et al. (2022) adapted the Local Intrinsic Dimensionality (LID) approach of Ma et al. (2018). This technique creates a distribution over local distances for a test record concerning its neighbors from the training set; it then applies these to the outputs of each layer from the target model to create a detection classifier. **MDRE**(Liu et al., 2022). This has similarities to LID above but uses Euclidean distance rather than the LID measure, and creates an ensemble using different Transformer models (like Liu et al. (2022), we use BERTBASE, RoBERTBASE, XLNetBASE, BARTBASE). **RSV**(Wang et al., 2022). In this Randomized Substitution and Vote approach, the assumption is that a word-level attacker aims to find an optimal synonym substitution that mutually influences other words in the sentence. Hence, Wang et al. (2022) randomly replaces words from the text with synonyms in order to destroy the mutual interaction between words and eliminate adversarial perturbation. Like FGWS, this is only designed to work against word-level attacks. **SHAP**(Mosca et al., 2022). In this approach, an adversarial detector is trained using the SHapley Additive exPlanations (SHAP) values of the training data for each test data item using the SHAP explainer (Fidel et al., 2020). They experiment on multiple classifiers as the detectors: logistic regression, random forest, support vector and neural network. In our main results, we report the best classifier for each dataset and attack. ## 5 Evaluation ### Main Results Results on the detector baselines are in Table 2. (All SHAP detector classifiers in Table 8.) Overall, NNIF is the best, performing with 100% accuracy on CharAtt for sentiment analysis (more than \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Detector} & Char & Word \\ & & Attack & Attack \\ \hline \multirow{8}{*}{IMDB} & DISP * & 0.8936 & 0.7714 \\ & FGWS & — & 0.7546 \\ & LID & 0.814 & 0.675 \\ & MDRE & 0.846 & 0.7025 \\ & RSV & — & 0.8876 \\ & SHAP & 0.812 & 0.764 \\ & NNIF & **1.0** & **0.899** \\ & Mahal & 0.9167 & 0.8147 \\ \hline \multirow{8}{*}{MultiNLI} & DISP * & **0.7496** & 0.6137 \\ & FGWS & — & 0.6112 \\ & LID & 0.7035 & 0.5838 \\ \cline{1-1} & MDRE & 0.687 & 0.6231 \\ \cline{1-1} & RSV & — & 0.6054 \\ \cline{1-1} & SHAP & 0.614 & 0.697 \\ \cline{1-1} & NNIF & 0.745 & **0.7351** \\ \cline{1-1} & Mahal & 0.6972 & 0.6211 \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy of detection classifiers (**best**, _second_). DISP results reported from Liu et al. (2022). 8% better than the second) and 90% on WordAtt (more than 1% better than the second, RSV, which is tailored to word-level attacks). For MultiNLI WordAtt, it is around 4% better than the second best. The only one where it is not best, CharAtt, is only very slightly below the best performer DISP. (We note that for DISP we report the accuracy values from Liu et al. (2022). This means that the DISP detector used more data in its training set, and so has an advantage in this respect.) 
Mahal also performs quite strongly, either better or similar to the baseline detectors, although not as strongly as NNIF; this mirrors the findings in image processing. MDRE results are lower than in Liu et al. (2022) as a consequence of using less data for training all detection classifiers, as discussed in SS4.4. In terms of aggregate task performance, in all our experiments, the detection accuracy on the natural language inference task is lower than the sentiment analysis task in general. As the MultiNLI dataset is a three-class problem and additionally uses mismatched test sentences, the detection is innately harder. generally much more clearly separable and so IF points contribute especially strongly to the method, except for MultiNLI against WordAtt, where they are essentially the same and the method relies on the two-view aspect of NNIF. This observation about the relative importance of the IF contribution was not made by Cohen et al. (2020), and so may be specific to NLP tasks, although this would require more investigation to verify. We also note that our results align with observations of Han et al. (2020), that in the harder task of MultiNLI (SS5.1, Table 4), IFs provide a different perspective to characterising the datapoint of interest. We give some text examples in App E. To look further into the more challenging combination of MultiNLI and CharAtt (as the one case in Table 2 where NNIF was not the highest scoring, albeit by a small margin), we consider a successful and an unsuccessful detection case by NNIF, with the actual examples given in the appendices in Tables 11 and 12, and the corresponding t-SNE plots of IF and NNs in Figs 5 and 6, respectively. The IFs in Fig 5 (the correct example) are somewhat more clustered, with the red (adversarial) points mostly in the top right, than the IFs in Fig 6 (the incorrect example); this lines up with the results of Table 4 in that separability of IF does seem to matter for MultiNLI +CharAtt. **Varying \(M\) in NNIF** Fig 4 plots the accuracies of the NNIF method for both tasks and attacks, for a range of values of \(M\). The accuracy broadly monotonically increases until plateaus for the IMDB results, although the MultiNLI results look to be still increasing. This is a contrast with the image processing results of Cohen et al. (2020), where much smaller values of \(M\) (e.g. 30) produced better results. It is unclear what characteristics of our tasks (fewer classes, more long-distance dependencies,...) lead to this difference. **Ablation for Mahal** Table 3 shows the accuracies of Mahal using only the final layer or the feature ensemble. As with Lee et al. (2018), the feature ensemble produces much better results. The im \begin{table} \begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{Attack} & \multicolumn{2}{c}{Avg Acc Avg Acc} \\ & NNIF & kNN & \(p\)-value \\ \hline IMDB & CharAtt & 0.6875 & 0.5626 & \(<.00001\) \\ IMDB & WordAtt & 0.7812 & 0.5644 & \(<.00001\) \\ \hline MultiNLI\&LiquAtt & 0.6399 & 0.5625 & \(<.00001\) \\ MultiNLI\&WordAtt & 0.5603 & 0.5632 & 0.448 \\ \hline \hline \end{tabular} \end{table} Table 4: SVC accuracy of linearly separating the 2D-t-SNE embedding subspace of neighboring train samples of 1000 test records and their adversarial versions Figure 4: Accuracy of NNIF for different values of M. 
Figure 5: Normal and adversarial subspace of the MultiNLI CharAtt text in Table 10 by IF (top) and DkNN (bottom) Figure 3: Normal and adversarial train subspace observed on the IMDB record used in Fig 2 under WordAtt by influence function (top) and DkNN (bottom) provement is larger for IMDB, but still important for MultiNLI, as without the ensemble, detection is essentially at the chance. Noting that the target model of Lee et al. (2018) had many more hidden layers in the ensemble, it is an open question as to whether introducing additional dense layers into our LLM-based model might improve detection while still preserving target model performance. ## 6 Conclusion and Future Work We have adapted from image processing two methods, NNIF Cohen et al. (2020) and Mahal Lee et al. (2018), that detect adversarial examples using learned representations. Both perform strongly, with NNIF the best on three of four task/attack combinations, and a close second on the fourth, against several strong baselines. Our analysis shows that influence function points make a particularly important contribution to the NNIF method. The MultiNLI task is more challenging for all methods; here it is the complementary nature of information from influence functions and nearest neighbors, supporting observations by Han et al. (2020) about the different perspective of influence functions in this more complex NLP task. The NNIF method is computationally expensive, so future work will look at ways to make it more efficient. Additionally, to gain a fuller understanding of what information influence functions can provide in NLP tasks, future work will look at a wider range of tasks and attacks. ## 7 Limitations The major limitation is the computationally expensive calculation of influence functions in our NNIF method. For this, following Cohen et al. (2020) we restrict the data size to 10k (5k test, 5k adversarial) for NNIF and follow a similar approach for other methods for comparability. This helps faster explanation generation in SHAP as well. We use a small architecture as recommended in Han et al. (2020) for the BERTBASE model for NNIF and other detectors. As noted in the paper, we recognise that there is the FastIF method of Guo et al. (2021) for speeding up influence function calculation, but because of the restriction of influence function points to nearest neighbors, it is not suitable for our application. We use only two datasets/tasks and two attack methods, partly because of the computational expense of NNIF. While they are commonly used in the adversarial example literature as well as the analysis of influence functions in NLP by Han et al. (2020) and represent different levels of task complexity and attack type, a wider range of datasets/tasks and attack methods is needed for a full characterisation of influence functions and the nature of adversarial subspaces. For all experiments, we restrict the maximum sequence length following Liu et al. (2022), which may influence the detectors' performance, especially for the NLI task, that requires the model to learn from a hypothesis and premise text pairs. For the detector baselines, we used the most available methods. There are two recent contemporaneous methods by Wang et al. (2022) and Bao et al. (2021) that explore the idea that adversarial perturbations are typically rare-frequency words, and create augmented training sets by replacing those words in each sentence with synonyms. For the detection, Wang et al. 
(2022) matches the voted prediction with the obtained prediction and Bao et al. (2021) trains the model on a separate auxiliary learning objective. Between these two works, we choose the RSV from Wang et al. (2022) in our work. For RSV, we follow the similar setting from Wang et al. (2022) in choosing the vote number, word substitution rate and stop word selection for both IMDB and MultiNLI. A different setting for MultiNLI may improve the result. Figure 6: Normal and adversarial subspace by IF (top) and DkNN (bottom) on the unsuccessful detection by NNIF of the MultiNLI CharAtt text in Table 12
Adversarial examples, deliberately crafted using small perturbations to fool deep neural networks, were first studied in image processing and have more recently drawn attention in NLP. Most approaches to detecting adversarial examples in NLP rely on searching over input perturbations, whereas image processing has developed a range of techniques that characterise adversarial subspaces over the learned representations. In this paper, we adapt two such techniques to NLP: one based on nearest neighbors and influence functions, and one based on Mahalanobis distances. The former in particular yields a state-of-the-art detector when compared against several strong baselines; moreover, the novel use of influence functions provides insight into how adversarial example subspaces in NLP relate to those in image processing.
2309.04963
Packings in bipartite prisms and hypercubes
The $2$-packing number $\rho_2(G)$ of a graph $G$ is the cardinality of a largest $2$-packing of $G$ and the open packing number $\rho^{\rm o}(G)$ is the cardinality of a largest open packing of $G$, where an open packing (resp. $2$-packing) is a set of vertices in $G$ no two (closed) neighborhoods of which intersect. It is proved that if $G$ is bipartite, then $\rho^{\rm o}(G\Box K_2) = 2\rho_2(G)$. For hypercubes, the lower bounds $\rho_2(Q_n) \ge 2^{n - \lfloor \log n\rfloor -1}$ and $\rho^{\rm o}(Q_n) \ge 2^{n - \lfloor \log (n-1)\rfloor -1}$ are established. These findings are applied to injective colorings of hypercubes. In particular, it is demonstrated that $Q_9$ is the smallest hypercube which is not perfect injectively colorable. It is also proved that $\gamma_t(Q_{2^k}\times H) = 2^{2^k-k}\gamma_t(H)$, where $H$ is an arbitrary graph with no isolated vertices.
Boštjan Brešar, Sandi Klavžar, Douglas F. Rall
2023-09-10T08:40:38
http://arxiv.org/abs/2309.04963v1
# Packings in bipartite prisms and hypercubes ###### Abstract The 2-packing number \(\rho_{2}(G)\) of a graph \(G\) is the cardinality of a largest 2-packing of \(G\) and the open packing number \(\rho^{\rm o}(G)\) is the cardinality of a largest open packing of \(G\), where an open packing (resp. 2-packing) is a set of vertices in \(G\) no two (closed) neighborhoods of which intersect. It is proved that if \(G\) is bipartite, then \(\rho^{\rm o}(G\,\square\,K_{2})=2\rho_{2}(G)\). For hypercubes, the lower bounds \(\rho_{2}(Q_{n})\geq 2^{n-\lfloor\log n\rfloor-1}\) and \(\rho^{\rm o}(Q_{n})\geq 2^{n-\lfloor\log(n-1)\rfloor-1}\) are established. These findings are applied to injective colorings of hypercubes. In particular, it is demonstrated that \(Q_{9}\) is the smallest hypercube which is not perfect injectively colorable. It is also proved that \(\gamma_{t}(Q_{2^{k}}\times H)=2^{2^{k}-k}\gamma_{t}(H)\), where \(H\) is an arbitrary graph with no isolated vertices. \({}^{a}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia \({}^{b}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia \({}^{c}\) Faculty of Mathematics and Physics, University of Ljubljana, Slovenia \({}^{d}\) Department of Mathematics, Furman University, Greenville, SC, USA **Keywords:** 2-packing number, open packing number, bipartite prism, hypercube, injective coloring, (total) domination number **AMS Subj. Class. (2020)**: 05C69, 05C76 ## 1 Introduction For many reasons, hypercubes are ubiquitous in theoretical computer science and in combinatorics. Understanding their structure is therefore a fundamental problem. Although hypercubes have a seemingly simple structure, we quickly encounter very complex problems. For instance, one of them was the middle levels problem, which was successfully dismissed [15]. On the other hand, the problem of determining the domination number of hypercubes is beyond the reach of existing methods. To date, exact values of \(\gamma(Q_{n})\) are only known for \(n\leq 9\), where the value \(\gamma(Q_{9})=62\) was obtained in [16], and for the following two infinite families. **Theorem 1.1**.: ([7, 19]) _If \(k\geq 1\), then \(\gamma(Q_{2^{k}-1})=2^{2^{k}-k-1}\) and \(\gamma(Q_{2^{k}})=2^{2^{k}-k}\)._ The values \(\gamma(Q_{2^{k}-1})=2^{2^{k}-k-1}\) can be obtained from the fact that hypercubes \(Q_{2^{k}-1}\) admit 1-perfect codes, in which case the domination number coincides with the cardinality of a 1-perfect code. The most important variation of the domination number is the total domination number; see a recent monograph [8] surveying domination theory with the two invariants in the central role. Roughly speaking, domination operates with closed neighborhoods while total domination with open neighborhoods, which often causes a different behavior of the invariants. However, as proved in [1] by using hypergraph transversals, \(\gamma_{t}(Q_{n+1})=2\gamma(Q_{n})\) for all \(n\), which makes the domination number and the total domination number in hypercubes tightly connected. More generally, the authors of [1] proved that \(\gamma_{t}(G\,\square\,K_{2})=2\gamma(G)\) as soon as \(G\) is a bipartite graph. The concepts of packing number and open packing number of a graph are often used in domination theory, since they present natural lower bounds on the domination number and the total domination number, respectively, of the graph. 
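To make the connection between Theorem 1.1 and 1-perfect codes concrete, the following brute-force sketch (Python, illustrative only and feasible only for very small \(n\)) verifies that the Hamming code is a 1-perfect code of \(Q_{7}\), giving \(\gamma(Q_{7})=\rho_{2}(Q_{7})=16=2^{2^{3}-3-1}\), in accordance with Theorem 1.1.

```python
from itertools import product

def hamming_code_q7():
    """Vertices of Q_7 with zero syndrome w.r.t. the parity-check matrix whose
    columns are the binary representations of 1, ..., 7 (the Hamming [7,4] code)."""
    H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]
    return [v for v in product((0, 1), repeat=7)
            if all(sum(h * x for h, x in zip(row, v)) % 2 == 0 for row in H)]

def is_perfect_code(code, n):
    """Check that the closed balls of radius 1 around the code words partition V(Q_n)."""
    covered = set()
    for c in code:
        ball = {c} | {tuple(b ^ (i == k) for k, b in enumerate(c)) for i in range(n)}
        if covered & ball:
            return False
        covered |= ball
    return len(covered) == 2 ** n

C = hamming_code_q7()
print(len(C), is_perfect_code(C, 7))   # 16 True
```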
The concept of packing was used back in 1975 by Meir and Moon in their classical theorem stating that in a tree the domination number equals the packing number [11]. On the other hand, open packing was introduced by Henning and Slater [9], and was later used in [18] to prove a canonical formula for the total domination number of the direct product of two graphs, which holds if one of the factors has the total domination number equal to its open packing number. Just as total domination is related to domination, open packing can be regarded as a version of packing in which closed neighborhoods are replaced with open neighborhoods. See [12, 13, 14] for some recent studies of (open) packings as well as [5] for their application. Open packings are also related to the so-called injective colorings of graphs, cf. [17]. More precisely, an injective coloring of a graph is exactly a partition of its vertex set into open packings. In a recent paper [3], graphs that admit injective colorings such that each of the color classes is a maximum open packing were considered. This property was verified for hypercubes of some small dimensions, and it was also proved for those whose dimension is a power of 2. Yet, nothing else was known, including whether there exists a hypercube that does not satisfy this property. One of the reasons for the difficulty of this question is that the open packing number (i.e., the cardinality of a maximum open packing) is not known. We proceed as follows. In the remainder of this introduction, we provide the definitions and concepts we need in what follows. In Section 2 we prove that the open packing number of a prism over a bipartite graph \(G\) is twice the \(2\)-packing number of \(G\). This result nicely complements [1, Theorem 1], which states that the total domination number of a prism over a bipartite graph \(G\) is twice the domination number of \(G\). We also demonstrate that, in general, the open packing number of a prism over a graph \(G\) can be arbitrarily larger than the \(2\)-packing number of \(G\). In Section 3 we prove lower bounds on the \(2\)-packing number and the open packing number of hypercubes. The bounds are sharp for small dimensions and for two infinite families, but are not sharp in general. In the subsequent section we apply these findings to injective colorings of hypercubes. In particular, we demonstrate that \(Q_{9}\) is the smallest hypercube which is not perfect injectively colorable. In the concluding remarks, we give an overview of the known values for the hypercube invariants considered here and also derive the total domination number of the direct product of \(Q_{2^{k}}\) and an arbitrary graph. ### Preliminaries Let \(G=(V(G),E(G))\) be a graph and \(x\in V(G)\). The _open neighborhood_ \(N(x)\) is the set of vertices adjacent to \(x\) and the _closed neighborhood_ is \(N[x]=N(x)\cup\{x\}\). A set \(D\subseteq V(G)\) is a _dominating set_ of \(G\) if each vertex of \(V(G)\setminus D\) has a neighbor in \(D\). The cardinality of a smallest dominating set of \(G\) is the _domination number_ \(\gamma(G)\) of \(G\). Similarly, \(D\subseteq V(G)\) is a _total dominating set_ of \(G\) if each vertex of \(V(G)\) has a neighbor in \(D\). The cardinality of a smallest total dominating set of \(G\) is the _total domination number_ \(\gamma_{t}(G)\) of \(G\). Let \(X\subseteq V(G)\). Then \(X\) is a \(2\)_-packing_ of \(G\) if \(N[x]\cap N[y]=\emptyset\) for every pair of distinct vertices \(x,y\in X\). 
Similarly, if \(N(x)\cap N(y)=\emptyset\) for every pair of distinct vertices \(x,y\in X\), then \(X\) is an _open packing_ of \(G\). The cardinality of a largest \(2\)-packing of \(G\) is the \(2\)_-packing number_\(\rho_{2}(G)\) of \(G\) and the cardinality of a largest open packing of \(G\) is the _open packing number_\(\rho^{\circ}(G)\) of \(G\). By a \(\rho_{2}\)_-set_ of \(G\) we mean a \(2\)-packing of \(G\) of cardinality \(\rho_{2}(G)\). A \(\rho^{\circ}\)_-set_ of \(G\) is defined analogously. If \(X\) is a \(2\)-packing such that \(V(G)=\cup_{x\in X}N[x]\) then we say that \(X\) is a \(1\)_-perfect code_ of \(G\). In domination theory, \(1\)-perfect codes are known as _efficient dominating sets_, see [8, Chapter 9] and [10]. Since \(\gamma(G)\geq\rho_{2}(G)\) for every graph \(G\), if \(X\) is a \(1\)-perfect code of \(G\), then \(X\) is also a dominating set of \(G\). This observation leads to the following well known fact. **Proposition 1.2**.: _If \(G\) admits a \(1\)-perfect code, then \(\gamma(G)=\rho_{2}(G)\). If in addition \(G\) is \(r\)-regular, then \(\gamma(G)=\rho_{2}(G)=\frac{n(G)}{r+1}\)._ The _Cartesian product_\(G\,\square\,H\) of graphs \(G\) and \(H\) is the graph whose vertex set is \(V(G)\times V(H)\), and two vertices \((g_{1},h_{1})\) and \((g_{2},h_{2})\) are adjacent in \(G\,\square\,H\) if either \(g_{1}=g_{2}\) and \(h_{1}h_{2}\) is an edge in \(H\) or \(h_{1}=h_{2}\) and \(g_{1}g_{2}\) is an edge in \(G\). For a vertex \(g\) of \(G\), the subgraph of \(G\,\square\,H\) induced by the set \(\{(g,h):\,h\in V(H)\}\) is an \(H\)_-fiber_ and is denoted by \({}^{g}\!H\). Similarly, for \(h\in H\), the \(G\)_-fiber_, \(G^{h}\), is the subgraph induced by \(\{(g,h):\,g\in V(G)\}\). Cartesian product is commutative and associative. The _hypercube_ of dimension \(n\), or the \(n\)_-cube_, is isomorphic to \(K_{2}\,\square\,\cdots\,\square\,K_{2}\), where there are \(n\) factors \(K_{2}\), and is denoted by \(Q_{n}\). The equality \(Q_{n}=Q_{n-1}\,\square\,K_{2}\) will be used (at least implicitly) several times in the paper. Finally, the _direct product_\(G\times H\) of graphs \(G\) and \(H\) has the vertex set \(V(G)\times V(H)\), and two vertices \((g_{1},h_{1})\) and \((g_{2},h_{2})\) are adjacent in \(G\times H\) if \(g_{1}g_{2}\) is an edge in \(G\) and \(h_{1}h_{2}\) is an edge in \(H\). ## 2 Packing vs. open packing in bipartite prisms In [1] it was proved that if \(G\) is a bipartite graph, then \(\gamma_{t}(G\,\square\,K_{2})=2\gamma(G)\). In this section we prove an analogous result that connects the open packing number and the packing number. We begin with the following simple lemma, which holds in all graphs. **Lemma 2.1**.: _If \(G\) is a graph, then \(\rho^{\mathrm{o}}(G\,\square\,K_{2})\geq 2\rho_{2}(G)\)._ Proof.: Let \(G\) be a graph, and let \(P\) be a \(\rho_{2}\)-set of \(G\). Then \(P\times V(K_{2})\) is an open packing of \(G\,\square\,K_{2}\), hence the result. In general, \(\rho^{\mathrm{o}}(G\,\square\,K_{2})\) can be arbitrary larger than \(2\rho_{2}(G)\). For an example consider the family of graphs \(G_{k}\), \(k\geq 1\), defined as follows. \(G_{k}\) contains \(2k\) disjoint cycles \(C_{5}\) connected in a row by an edge between two consecutive \(5\)-cycles. This informal definition of \(G_{k}\) should be clear from Fig. 1 where \(G_{2}\,\square\,K_{2}\) is drawn. As an arbitrary packing of \(G_{k}\) contains at most one vertex of each \(C_{5}\) we infer that \(\rho_{2}(G_{k})=2k\). 
On the other hand, repeating the pattern as shown in Fig. 1 for \(k=2\), we get \(\rho^{\mathrm{o}}(G_{k}\,\square\,K_{2})\geq 5k\). For bipartite graphs, however, the above phenomena cannot occur as the main result of this section asserts. Figure 1: An open packing in \(G_{2}\,\square\,K_{2}\) **Theorem 2.2**.: _If \(G\) is a bipartite graph, then \(\rho^{\rm o}(G\,\square\,K_{2})=2\rho_{2}(G)\)._ Proof.: Let \(G\) be a bipartite graph with parts \(A\) and \(B\) forming the natural partition of \(V(G)\). By Lemma 2.1, we have \(\rho^{\rm o}(G\,\square\,K_{2})\geq 2\rho_{2}(G)\). To prove the reversed inequality, consider an open packing \(O\) in \(G\,\square\,K_{2}\) such that \(|O|=\rho^{\rm o}(G\,\square\,K_{2})\). We will show that \(O\) can be transformed into an open packing \(O^{\prime}\) of the form \(P^{\prime}\times V(K_{2})\), where \(P^{\prime}\) is a subset of \(V(G)\). (Clearly, the latter also implies that \(P^{\prime}\) is a 2-packing.) Note that \(O\) can be presented as the disjoint union \(I\cup R\), where \(I\) is the set of vertices that are isolated in the subgraph of \(G\,\square\,K_{2}\) induced by \(O\), while \(R\) is the set of vertices that have exactly one neighbor in \(O\). Clearly, at least one of the sets \(I\) or \(R\) is non-empty. Set \(V(K_{2})=\{1,2\}\), and let \(I_{i}=I\cap V(G^{i})\) and \(R_{i}=R\cap V(G^{i})\) for all \(i\in[2]\). In addition, let \(I_{i}^{A}=\{(u,i)\in I_{i}:\,u\in A\}\), \(I_{i}^{B}=\{(u,i)\in I_{i}:\,u\in B\}\) for \(i\in[2]\), and similarly let \(R_{i}^{A}=\{(u,i)\in R_{i}:\,u\in A\}\), \(R_{i}^{B}=\{(u,i)\in R_{i}:\,u\in B\}\) for \(i\in[2]\). Next, we compare the two quantities \(|I_{1}^{A}|+|I_{2}^{B}|\) and \(|I_{2}^{A}|+|I_{1}^{B}|\). We may assume with no loss of generality that \(|I_{1}^{A}|+|I_{2}^{B}|\geq|I_{2}^{A}|+|I_{1}^{B}|\). Now, the announced transformation of \(O\) to \(O^{\prime}\) is defined as follows: * if \((u,t)\in I_{1}^{A}\cup I_{2}^{B}\), then let \(\{u\}\times V(K_{2})\subseteq O^{\prime}\); * if \((u,t)\in I_{2}^{A}\cup I_{1}^{B}\), then let \((\{u\}\times V(K_{2}))\cap O^{\prime}=\emptyset\); * if \((u,1)\in R_{1}\) and \((u,2)\in R_{2}\), then let \(\{u\}\times V(K_{2})\subseteq O^{\prime}\); * if \((u,1)\in R_{1}^{A}\) and \((v,1)\in R_{1}^{B}\), where \(uv\in E(G)\), then let \(\{u\}\times V(K_{2})\subseteq O^{\prime}\) and \((\{v\}\times V(K_{2}))\cap O^{\prime}=\emptyset\); * if \((u,2)\in R_{2}^{A}\) and \((v,2)\in R_{2}^{B}\), where \(uv\in E(G)\), then let \(\{v\}\times V(K_{2})\subseteq O^{\prime}\) and \((\{u\}\times V(K_{2}))\cap O^{\prime}=\emptyset\). We claim that \(|O^{\prime}|\geq|O|\). Indeed, the first two rows in the above transformation show that for every vertex \((u,t)\in I_{1}^{A}\cup I_{2}^{B}\) we get two vertices in \(O^{\prime}\), while for every vertex \((u,t)\in I_{2}^{A}\cup I_{1}^{B}\) we get no vertices in \(O^{\prime}\), yet \(|I_{1}^{A}\cup I_{2}^{B}|>|I_{2}^{A}\cup I_{1}^{B}|\) by the earlier assumption. By the last three rows of the above transformation, every pair of vertices in \(R\) is replaced by two vertices in \(O^{\prime}\). This altogether implies that \(|O^{\prime}|\geq|O|\), so it remains to prove that \(O^{\prime}\) is an open packing in \(G\,\square\,K_{2}\). If \((u,1)\in I_{1}^{A}\) and \((v,1)\in I_{1}^{A}\), then \(d_{G}(u,v)\geq 4\), because the vertices belong to \(O\), which is an open packing, and \(u\) and \(v\) are both in \(A\). 
Thus vertices in \(\{u\}\times V(K_{2})\) will be at distance at least 4 from the vertices in \(\{v\}\times V(K_{2})\). By symmetry, we get the same conclusion for vertices \((u,2)\in I_{2}^{B}\) and \((v,2)\in I_{2}^{B}\). If \((u,1)\in I_{1}^{A}\) and \((v,2)\in I_{2}^{B}\), then \(d_{G}(u,v)\geq 3\), because \(u\) and \(v\) belong to different parts, \(A\) and \(B\) respectively, of the bipartition of \(V(G)\) and they belong to \(O\), which is an open packing. Thus, vertices in \(\{u\}\times V(K_{2})\) will be at distance at least 3 from the vertices in \(\{v\}\times V(K_{2})\), as desired. Clearly, if \((u,t)\in I_{1}^{A}\cup I_{2}^{B}\), then \(d_{G}(u,v)\geq 3\) for any \(v\in V(G)\) such that \(\{(v,1),(v,2)\}\subset R\). This yields that vertices in \(\{u\}\times V(K_{2})\) will be at distance at least \(3\) from the vertices in \(\{v\}\times V(K_{2})\). If \((u,1)\in I_{1}^{A}\) and \((v,1)\in R_{1}^{A}\), we have \(d_{G}(u,v)\geq 4\). On the other hand, if \((u,1)\in I_{1}^{A}\) and \((v,2)\in R_{2}^{B}\) we have \(d_{G}(u,v)\geq 3\). In either case, the corresponding vertices in \(O^{\prime}\) are at least three apart. By symmetry, we can find that for vertices in \(I_{2}^{B}\) and vertices in \(R_{1}^{A}\cup R_{2}^{B}\) their distances are sufficiently large so that the corresponding \(K_{2}\)-fibers that are in \(O^{\prime}\) will be at distance at least \(3\). This completes the proof that the distance between the vertices in \(O^{\prime}\) that appear in the first row of the above transformation to all other vertices in \(O^{\prime}\) will be at least \(3\), except of course for two vertices in \(O^{\prime}\) that belong to the same \(K_{2}\)-fiber and are adjacent. Vertices of \(O^{\prime}\) that appear in the third row of the transformation remain at distance at least \(3\) from all other vertices in \(O^{\prime}\) (with the clear exception of two adjacent such vertices). Therefore, it remains to consider the vertices in \(O^{\prime}\) that appear in the last two rows of the above transformation. Suppose there are two vertices in \(R_{1}^{A}\) (and a similar argument can be applied if they are in \(R_{2}^{B}\)), say, \((u,1)\) and \((v,1)\), which are not adjacent. Then \(d_{G}(u,v)\geq 4\), and so \(\{u\}\times V(K_{2})\) will be at distance at least \(4\) from the vertices in \(\{v\}\times V(K_{2})\) (by symmetry, the same conclusion applies if \((u,2)\) and \((v,2)\) are in \(R_{2}^{B}\)). Finally, let \((u,1)\in R_{1}^{A}\) and \((v,2)\in R_{2}^{B}\). Since \(O\) is an open packing, we have \(d_{G}(u,v)>1\), and since they are in different parts of the bipartition, we get \(d_{G}(u,v)\geq 3\). We derive that \(\{u\}\times V(K_{2})\) will be at distance at least \(3\) from the vertices in \(\{v\}\times V(K_{2})\), which concludes the proof that \(O^{\prime}\) is an open packing. Since \(|O|=\rho^{\rm o}(G\,\square\,K_{2})\) and \(|O^{\prime}|\geq|O|\), we derive \(|O^{\prime}|=|O|=\rho^{\rm o}(G\,\square\,K_{2})\). In addition, there exists a set \(P^{\prime}\subset V(G)\) such that \(O^{\prime}=P^{\prime}\times[2]\), where \(P^{\prime}\) is a \(2\)-packing of \(G\). Hence, \(|P^{\prime}|\leq\rho_{2}(G)\), and so \(|O^{\prime}|=2|P^{\prime}|\leq 2\rho_{2}(G)\), implying \(\rho^{\rm o}(G\,\square\,K_{2})\leq 2\rho_{2}(G)\). ## 3 (Open) packings in hypercubes The following lemma follows by observing that the restriction of a \(2\)-packing in \(G\,\square\,K_{2}\) to a \(G\)-layer is a \(2\)-packing of that layer. 
**Lemma 3.1**.: _If \(G\) is a graph, then \(\rho_{2}(G\,\square\,K_{2})\leq 2\rho_{2}(G)\)._ We can now bound \(\rho_{2}\) and \(\rho^{\rm o}\) of hypercubes as follows. **Theorem 3.2**.: _If \(n\geq 2\), then_ 1. \(\rho_{2}(Q_{n})\geq 2^{n-\lfloor\log n\rfloor-1}\quad\text{and}\)__ 2. \(\rho^{\rm o}(Q_{n})\geq 2^{n-\lfloor\log(n-1)\rfloor-1}\)_._ Proof.: (i) Suppose first that \(n=2^{k}-1\), where \(k\geq 2\). As already mentioned, in these cases \(Q_{n}\) admits a \(1\)-perfect code, say \(S\). Then \(|S|=2^{2^{k}-1}/2^{k}=2^{2^{k}-k-1}\) and consequently \[\rho_{2}(Q_{n})=|S|=2^{2^{k}-k-1}=2^{2^{k}-1-(k-1)-1}=2^{n-\lfloor\log n\rfloor- 1}\,.\] Consider now the hypercubes \(Q_{n}\), where \(k\geq 3\) and \(2^{k-1}-1<n<2^{k}-1\). In particular, if \(n=2^{k}-2\), then since \(Q_{2^{k}-1}=Q_{2^{k}-2}\,\square\,K_{2}\), Lemma 3.1 implies that \[\rho_{2}(Q_{n})=\rho_{2}(Q_{2^{k}-2})\geq\frac{1}{2}\rho_{2}(Q_{2^{k}-1})=2^{2 ^{k}-k-2}=2^{2^{k}-2-(k-1)-1}=2^{n-\lfloor\log n\rfloor-1}\,.\] Inductively applying the lemma, the result holds for all \(n\) such that \(2^{k-1}-1<n<2^{k}-1\). Therefore, (i) holds for all \(n\geq 2\). (ii) Applying Theorem 2.2 and (i), we have \[\rho^{\rm o}(Q_{n})=2\rho_{2}(Q_{n-1})\geq 2\cdot 2^{(n-1)-\lfloor\log(n-1) \rfloor-1}=2^{n-\lfloor\log(n-1)\rfloor-1}\] for all \(n\geq 2\) and we are done. If \(n\leq 7\), then equality holds in Theorem 3.2(i). The cases when \(n\in\{2,3,4\}\) can be easily argued by case analysis. The equality in cases when \(n\in\{5,6\}\) then follow by combining Lemma 3.1 and Theorem 3.2(i). For \(n=7\), the equality holds because \(Q_{7}\) has a \(1\)-perfect code. One is thus tempted to conjecture that the lower bound in Theorem 3.2(i) holds for all \(n\). However, with the help of a computer, we found the set \[T= \{00000000,00001110,00110010,00111100,01010110,01011000,\] \[01100100,01101001,0111111,100100,10100101,10101011,\] \[11000111,11001100,11011011,11100010,11110001\}\] which is a \(2\)-packing in \(Q_{8}\) with \(|T|=17\), hence \(\rho_{2}(Q_{8})\geq 17\). By Theorem 2.2, this in turn implies that \(\rho^{\rm o}(Q_{9})\geq 34\). Hence also the lower bound in Theorem 3.2(ii) is not sharp in general. It is sharp however for all \(n\leq 8\) because the lower bound in Theorem 3.2(i) is sharp for \(n\leq 7\) and because of Theorem 2.2. Furthermore, by using Theorem 2.2 and the fact that the lower bound in Theorem 3.2(i) is sharp when \(n=2^{k}-1\), it follows that the lower bound in Theorem 3.2(ii) is sharp for each value of \(n\) that is a power of \(2\). Application to injective colorings An _injective coloring_ of a graph \(G\) is a partition of the vertex set of \(G\) into open packings. The _injective chromatic number_, \(\chi_{i}(G)\), of \(G\) is the minimum cardinality of an injective coloring in \(G\). The concept was introduced by Hahn, Kratochvil, Siran and Sotteau [6] back in 2002, and has been considered by a number of authors, cf. [2, 4]. In the recent paper [3], graphs that admit special types of injective colorings were considered: a graph \(G\) is a _perfect injectively colorable graph_ if it has an injective coloring in which every color class forms a \(\rho^{\mathrm{o}}\)-set of \(G\). The authors of [3] considered hypercubes that are perfect injectively colorable. They noticed that such are the hypercubes \(Q_{n}\), where \(n\in[5]\), and proved that for all \(k\in\mathbb{N}\), the hypercube \(Q_{2^{k}}\) is a perfect injectively colorable graph. 
Apart from the mentioned cases, it was asked in [3, Problem 1] in which other dimensions the hypercube is perfect injectively colorable. Since an answer to the question is closely related to computing the value of the open packing number of hypercubes, it was also asked in [3, Problem 2] what is the value of \(\rho^{\mathrm{o}}(Q_{n})\) for \(n\geq 6\). In this note, we give some partial answers to the above two questions.

Figure 2: Partition of \(V(Q_{6})\) into (maximum) 2-packings of \(Q_{6}\).

One can easily find that \(\rho_{2}(Q_{5})=4\), which by Theorem 2.2 implies that \(\rho^{\rm o}(Q_{6})=8\). In addition, Fig. 2 shows a maximum 2-packing of \(Q_{6}\) of cardinality 8, where vertices of an arbitrary color in [8] form a maximum 2-packing. This gives, again by Theorem 2.2, that \(\rho^{\rm o}(Q_{7})=16\). In addition, recall that \(\rho^{\rm o}(Q_{8})=32\), which follows from the fact that \(Q_{7}\) has a perfect code. Now, by the observation from Section 3, we have \(\rho_{2}(Q_{8})\geq 17\). On the other hand, we claim that \(\rho_{2}(Q_{8})\leq 30\). Suppose to the contrary that \(\rho_{2}(Q_{8})>30\), and let \(P\) be a \(\rho_{2}\)-set of \(Q_{8}\). Then, partitioning \(V(Q_{8})\) into \(Q\) and \(Q^{\prime}\), each of which induces \(Q_{7}\), we infer that either \(|Q\cap P|\) or \(|Q^{\prime}\cap P|\) is equal to 16. We may assume that \(|Q\cap P|=16\), and noting that \(Q\cap P\) is a 2-packing of \(Q_{7}\), this implies that \(Q\cap P\) corresponds to a perfect code of \(Q_{7}\), thus \(Q\cap P\) is a dominating set of \(Q\). This in turn implies that every vertex in \(Q^{\prime}\) is at distance at most 2 from a vertex in \(Q\cap P\), which yields that \(P=Q\cap P\), and so \(|P|=16\), a contradiction proving that \(\rho_{2}(Q_{8})\leq 30\). Now, using Theorem 2.2, we get \(34\leq\rho^{\rm o}(Q_{9})\leq 60\). In particular, \(\rho^{\rm o}(Q_{9})\) is not a power of 2, which readily implies that \(Q_{9}\) does not admit a partition into \(\rho^{\rm o}\)-sets, and is consequently not a perfect injectively colorable graph. On the other hand, refer to Fig. 2 again, which shows a coloring of \(Q_{6}\) in which each color class is a 2-packing of cardinality \(\rho_{2}(Q_{6})\). By applying Theorem 2.2 and the first part of its proof, one can construct an injective coloring of \(Q_{7}\) in which each color class is an open packing of cardinality \(\rho^{\rm o}(Q_{7})\). Therefore, \(Q_{7}\) is a perfect injectively colorable graph. Summarizing the above, hypercubes \(Q_{n}\), where \(n\leq 8\), are perfect injectively colorable graphs, and so \(Q_{9}\) is the smallest hypercube which is not in this class of graphs. ## 5 Concluding remarks Table 1 presents values or bounds on the main domination and packing invariants in hypercubes \(Q_{n}\) for all \(n\leq 9\). The values for \(\gamma\) and \(\gamma_{t}\) were known earlier, while some of the values and bounds for \(\rho_{2}\) and \(\rho^{\rm o}\) have been obtained in this paper. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(n\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \hline \(\gamma\) & 1 & 2 & 2 & 4 & 7 & 12 & 16 & 32 & 62 \\ \hline \(\gamma_{t}\) & 2 & 2 & 4 & 4 & 8 & 14 & 24 & 32 & 64 \\ \hline \(\rho_{2}\) & 1 & 1 & 2 & 2 & 4 & 8 & 16 & 17-30 & ? \\ \hline \(\rho^{\rm o}\) & 2 & 2 & 2 & 4 & 4 & 8 & 16 & 32 & 34-60 \\ \hline \end{tabular} \end{table} Table 1: Packing and domination invariants in hypercubes \(Q_{n}\), where \(n<10\).
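The packing entries in Table 1 can be spot-checked computationally. Since \(Q_{n}\) is triangle-free, two distinct vertices have intersecting closed neighborhoods exactly when their Hamming distance is at most 2, and a common open neighbor exactly when their Hamming distance is exactly 2. A minimal Python sketch of such a checker (the function names and the printed bound are ours, added only for illustration):

```python
import math
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(u, v))

def is_2_packing(vertices):
    # In Q_n, N[u] and N[v] are disjoint for distinct u, v exactly when
    # their Hamming distance is at least 3 (Q_n contains no triangles).
    return all(hamming(u, v) >= 3 for u, v in combinations(vertices, 2))

def is_open_packing(vertices):
    # Distinct vertices of Q_n share an open neighbor exactly when their
    # Hamming distance is 2, so an open packing must avoid such pairs.
    return all(hamming(u, v) != 2 for u, v in combinations(vertices, 2))

# The color classes of Fig. 2 can be tested with is_2_packing; the lower
# bound of Theorem 3.2(i) evaluates, for n = 2, ..., 9, to:
for n in range(2, 10):
    print(n, 2 ** (n - math.floor(math.log2(n)) - 1))
```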
In addition, consider the value \(\gamma_{t}(Q_{2^{k}})=2^{2^{k}-k}\), which follows from Theorem 1.1 combined with the formula \(\gamma_{t}(G\,\square\,K_{2})=2\gamma(G)\) from [1]. Now, compare this with the bound \(\rho^{\rm o}(Q_{2^{k}})\geq 2^{2^{k}-k}\), which follows from Theorem 3.2(ii) when plugging \(n=2^{k}\). Since \(\gamma_{t}(G)\geq\rho^{\rm o}(G)\) for every graph \(G\) with no isolated vertices, we infer that \[\gamma_{t}(Q_{2^{k}})=2^{2^{k}-k}=\rho^{\rm o}(Q_{2^{k}}),\mbox{ for all }k\in \mathbb{N}. \tag{1}\] Recall the result from [18] stating that \(\gamma_{t}(G\times H)=\gamma_{t}(G)\gamma_{t}(H)\) whenever \(G\) is a graph with \(\rho^{\rm o}(G)=\gamma_{t}(G)\) and graphs \(G\) and \(H\) have no isolated vertices. Therefore, from the discussion above we get that \[\gamma_{t}(Q_{2^{k}}\times H)=2^{2^{k}-k}\gamma_{t}(H)\,,\] where \(k\in\mathbb{N}\) and \(H\) is an arbitrary graph with no isolated vertices. An additional family of graphs with this property (that \(\gamma_{t}=\rho^{\rm o}\)) can be found in [12]. It would be interesting to establish if there are any hypercubes \(Q_{n}\) of other dimensions than those in (1) that satisfy the equality \(\gamma_{t}(Q_{n})=\rho^{\rm o}(Q_{n})\). ## Acknowledgments This work was performed within the bilateral grant 'Domination in graphs, digraphs and their products" (BI-US/22-24-038). B.B. and S.K. were supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, N1-0285, and J1-3002.
The $2$-packing number $\rho_2(G)$ of $G$ is the cardinality of a largest $2$-packing of $G$, and the open packing number $\rho^{\rm o}(G)$ is the cardinality of a largest open packing of $G$. A $2$-packing and an open packing are sets of vertices of $G$ whose closed neighborhoods and open neighborhoods, respectively, are pairwise disjoint. It is proved that if $G$ is bipartite, then $\rho^{\rm o}(G\Box K_2)= 2\rho_2(G)$. For hypercubes, $\rho_2(Q_n) \ge 2^{n - \lfloor\log n\rfloor -1}$ and $\rho^{\rm o}(Q_n) \ge 2^{n - \lfloor \log (n-1)\rfloor-1}$
2309.04977
RGAT: A Deeper Look into Syntactic Dependency Information for Coreference Resolution
Although syntactic information is beneficial for many NLP tasks, combining it with contextual information between words to solve the coreference resolution problem needs to be further explored. In this paper, we propose an end-to-end parser that combines pre-trained BERT with a Syntactic Relation Graph Attention Network (RGAT) to take a deeper look into the role of syntactic dependency information for the coreference resolution task. In particular, the RGAT model is first proposed, then used to understand the syntactic dependency graph and learn better task-specific syntactic embeddings. An integrated architecture incorporating BERT embeddings and syntactic embeddings is constructed to generate blending representations for the downstream task. Our experiments on a public Gendered Ambiguous Pronouns (GAP) dataset show that with the supervision learning of the syntactic dependency graph and without fine-tuning the entire BERT, we increased the F1-score of the previous best model (RGCN-with-BERT) from 80.3% to 82.5%, compared to the F1-score by single BERT embeddings from 78.5% to 82.5%. Experimental results on another public dataset - OntoNotes 5.0 demonstrate that the performance of the model is also improved by incorporating syntactic dependency information learned from RGAT.
Yuan Meng, Xuhao Pan, Jun Chang, Yue Wang
2023-09-10T09:46:38
http://arxiv.org/abs/2309.04977v1
# RGAT: A Deeper Look into Syntactic Dependency Information for Coreference Resolution ###### Abstract Although syntactic information is beneficial for many NLP tasks, combining it with contextual information between words to solve the coreference resolution problem needs to be further explored. In this paper, we propose an end-to-end parser that combines pre-trained BERT [1] with a Syntactic Relation Graph Attention Network (RGAT) to take a deeper look into the role of syntactic dependency information for the coreference resolution task. In particular, the RGAT model is first proposed, then used to understand the syntactic dependency graph and learn better task-specific syntactic embeddings. An integrated architecture incorporating BERT embeddings and syntactic embeddings is constructed to generate blending representations for the downstream task. Our experiments on a public Gendered Ambiguous Pronouns (GAP) dataset show that with the supervision learning of the syntactic dependency graph and without fine-tuning the entire BERT, we increased the F1-score of the previous best model (RGCN-with-BERT) [2] from 80.3% to 82.5%, compared to the F1-score by single BERT embeddings from 78.5% to 82.5%. Experimental results on another public dataset - OntoNotes 5.0 demonstrate that the performance of the model is also improved by incorporating syntactic dependency information learned from RGAT. syntactic dependency information, syntactic embeddings, coreference resolution, Bert, blending embeddings ## I Introduction Coreference resolution is the task of finding all linguistic expressions that refer to the same entity in the natural language. Ambiguous pronoun resolution, which attempts to resolve gendered ambiguous pronouns in English such as 'he' and'she', is a longstanding challenge in coreference resolution [2]. A Kaggle competition based on the task of gendered ambiguous pronouns (GAP) resolution was conducted in 2019 [3]. The effective use of Bidirectional Encoder Representations from Transformers or BERT [1] in this competition has shown significant improvement over traditional approaches. Unlike the traditional unidirectional language model, BERT is designed to pre-train deep bidirectional representations using a new masked language model (MLM), which enables the generation of deep bidirectional contextual embeddings. At present, there are two BERT-based approaches for applying these contextual embeddings to ambiguous pronoun resolution tasks: the feature-based approach using BERT and fine-tuning BERT approach. The feature-based approach using BERT treats contextual representations derived from BERT as extra input features, which are combined in a task-specific model architecture without fine-tuning any parameters of BERT to obtain the coreference resolution for the target pronoun. For example, a model architecture combining BERT and SVM proposed in [4] obtains the correct mention for the target pronoun by applying the contextual embeddings from BERT to an SVM classifier. As for fine-tuning BERT approach, it uses BERT to model the downstream gendered pronoun reference task by plugging in the task-specific inputs and outputs into BERT and fine-tuning all the parameters end-to-end. Compared to the feature-based approach using BERT, fine-tuning BERT approach obtains more impressive performance without considering the computational cost, such as single fine-tuned BERT [5] or ensemble learning from multiple fine-tuned base BERT models [6]. 
However, fine-tuning the entire BERT model for a specific task is very computationally expensive and time-consuming because all parameters are jointly fine-tuned on the downstream task and need to be saved in a separate copy. For this reason, there are two strategies for improving the BERT-based approach for the gendered pronoun reference task. One strategy focuses on the output representation of each layer in BERT by altering the BERT structure slightly at each layer and adding some extra parameters to change the output of each layer. Compared to fine-tuning all the parameters of BERT, this strategy can obtain a better result with less computation time, as in Adapter [7], LoRA [8] and so on. Another strategy is to explore better blending embeddings than BERT on the coreference task with the help of syntactic parsing information. Syntactic parsing information is a strong tool in many NLP tasks, such as entity extraction or relation extraction. It has also been verified that blending embeddings from BERT representations and syntactic embeddings outperform the original BERT contextual representations in the gendered pronoun reference task [2]. Since the strategy of exploring blending embeddings has a computational advantage in running many experiments with cheaper models on a pre-computed representation of BERT, it is worthwhile for us to explore again the value of blending embeddings incorporating syntactic dependency information in ambiguous pronoun resolution tasks. Recently, Cen et al. [9] proposed the GATNE model, a large-scale heterogeneous graph representation learning model that effectively aggregates neighbors of different edge types to the current node through an attention mechanism. As far as we know, no study has attempted to use GATNE or its variants to digest heterogeneous graph structures from the syntactic dependency graph. Inspired by the GATNE model, we propose our Syntactic Relation Graph Attention Network model, namely RGAT, adapted to generate heterogeneous syntactic embeddings for each sample. Based on that, we propose an end-to-end solution by combining pre-trained BERT with RGAT. Experiment results on the public GAP (Gendered Ambiguous Pronouns) dataset released by Google AI demonstrate that the blending embeddings which combine BERT representations and syntactic dependency graph representations outperform the original BERT-only embeddings on the pronoun resolution task, significantly improving the baseline F1-score from 78.5% to 82.5% without fine-tuning BERT or expensive computing resources. Furthermore, to verify the effectiveness of our RGAT model for digesting syntactic dependency information in coreference resolution tasks, we also conduct another coreference resolution experiment on the public OntoNotes 5.0 dataset. The experiment results demonstrate that after the syntactic embeddings learned with our RGAT model are incorporated with the benchmark model, the F1-score improves from 76.9% to 77.7%. All the code for the experiments in this paper is available at [https://github.com/qingtian5/RGAT_with_BERT](https://github.com/qingtian5/RGAT_with_BERT). Our main contributions are shown below: * Our work is the first deep attempt at using heterogeneous graph representation learning with an attention mechanism on the syntactic dependency graph for the pronoun resolution task. The syntactic embeddings derived from our RGAT model successfully boost the performance of BERT-only embeddings.
This provides a new idea to further digest syntactic dependency information for reference resolution tasks. * Our work is the first to use a graph attention mechanism to learn embeddings of small syntactic dependency graphs, without expensive computation cost, to solve the coreference resolution task. The supplementary experiment results on the public GAP dataset and OntoNotes 5.0 dataset show that our adjusted model RGAT has a better generalization ability in NLP coreference resolution tasks. * Our work is the first to largely boost the performance of the ambiguous pronoun resolution task with the help of syntactic dependency information. Most previous research considers that the effect of syntactic embeddings is weak, but our work significantly improves the F1-score of the BERT + fc baseline model from 78.5% to 82.5% on the GAP dataset. ## II Preliminary Work ### _BERT-Based Embeddings_ BERT makes use of Transformer, an attention mechanism that learns contextual relations between words in a text. When training language models, BERT uses two training strategies: Masked LM (MLM) and Next Sentence Prediction (NSP). Given these two pre-training tasks, what we need to determine is how to apply the BERT model to our samples to obtain the embedded representations. For our ambiguous pronoun resolution task, each of our samples is taken as a long sentence, and then a [cls] token is added before the sentence. Through a pre-trained BERT model, the embedded representation of each token in the sentence is obtained. In fact, our goal is to obtain the relations between pronouns and nouns, so we only need to extract the embedded representations of the tokens related to the pronoun (P) and the nouns (A, B), concatenate them, and finally obtain the result about the specific reference of the pronoun (P) through the fully connected layer, as shown in Fig. 1. Take a sample sentence as an example, "Bill(A) said Alice(B) would arrive soon, and she(P) did": our task is to find out whether "she(P)" refers to "Bill(A)" or "Alice(B)". Following the information flow in Fig. 1, we first break the whole sentence into words and make them the input of the BERT model. The pre-trained BERT model will generate an embedding for each word. In our coreference resolution task, there are only three possible results: (1) P refers to A; (2) P refers to B; (3) P refers to neither A nor B. Therefore, we regard our task as a three-way classification problem. Thus, we extract the embeddings of P, A and B from the BERT outputs, concatenate them, and finally pass them through a fully connected layer. ### _Syntactic Dependency Information Learning_ Although syntactic parsing information is beneficial to pronoun coreference resolution, how to extract syntactic embeddings and incorporate them with BERT embeddings for the coreference task is difficult. A common way of digesting syntactic parsing information is to utilize the syntactic dependency relations between words in a text, which can be easily represented as nodes and edges in a graph structure. Then a graph-based model can be used to learn the syntactic dependency information for the subsequent task. For the coreference resolution task, each sentence is parsed into a syntactic dependency graph, which contains three types of edges. Thus, the traditional Graph Convolutional Network (GCN) cannot handle this multi-relation graph.

Fig. 1: BERT-Based Embedding for our coreference resolution task.
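Before turning to graph-based models, the feature-based pipeline of Fig. 1 can be sketched in a few lines of PyTorch. The snippet below is only an illustration of the idea (a frozen BERT used as a feature extractor, concatenation of the A, B, and P token embeddings, and a fully connected three-way classifier); it is not the authors' released code, and the token indices are hypothetical placeholders for the mention positions.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")
bert = BertModel.from_pretrained("bert-large-uncased")
bert.eval()                       # BERT is used as a frozen feature extractor
for p in bert.parameters():
    p.requires_grad = False

# Three-way classification head over the concatenated A, B, P embeddings.
head = torch.nn.Linear(3 * bert.config.hidden_size, 3)

text = "Bill said Alice would arrive soon, and she did"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state        # (1, seq_len, 1024)

# Hypothetical token positions of A ("Bill"), B ("Alice") and P ("she");
# in practice they are located from the character offsets of the mentions.
a_idx, b_idx, p_idx = 1, 3, 9
features = torch.cat([hidden[0, a_idx], hidden[0, b_idx], hidden[0, p_idx]])
logits = head(features)                           # scores for "A", "B", "Neither"
```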
Xu [2] innovatively incorporated syntactic embeddings, digested with a Gated Relational Graph Convolutional Network (Gated RGCN [10]), with BERT embeddings for their pronoun coreference task. Specifically, RGCN is used to aggregate three heterogeneous graph structures between the head word and the dependent word to obtain word syntactic embeddings [2]. The idea provided by RGCN is that the information should be treated differently for different edge types, denoted as follows: \[h_{i}^{(l+1)}=\mathrm{ReLU}\left(\sum_{r\in R}\sum_{u\in N_{r}(v_{i})}\frac{1} {c_{i,r}}W_{r}^{(l)}h_{u}^{(l)}\right) \tag{1}\] where \(N_{r}\left(v_{i}\right)\) and \(W_{r}^{(l)}\) denote the set of neighbors of node \(i\) and the weight under relation \(r\in R\), respectively. It can be seen that although RGCN is designed to handle multiple edge types, it does not consider edge features; by default, each edge carries only its type. In contrast to using pre-trained BERT embeddings and fully-connected layers for prediction, the series connection architecture of pre-trained BERT with RGCN from Xu [2] increases the F1-score by 1.8%. However, RGCN does not perform very well in digesting the weight information across the multiple edge types of the syntactic dependency graph. Meanwhile, Xu's result is far less accurate than fine-tuning the entire BERT. The main problem may exist in two aspects. On the one hand, according to the RGCN model, if there are multiple different types of edges in the network, it will eventually need to generate a linear layer for each type of edge. This will lead to a linear increase in the number of model parameters. On the other hand, for some types of edge in the syntactic dependency graph, the frequency of occurrence may be small, so the linear layer corresponding to such an edge type is eventually updated on only a few nodes, resulting in overfitting to a small number of nodes. Inspired by RGCN with BERT and the development of graph neural networks, we believe that the performance of syntactic parsing information on pronoun resolution can be further improved. The first reason is that syntactic information always plays a very important role in the extraction of hand-crafted features, especially in most heuristics-based methods for the pronoun resolution task [11][12][13]. The second reason is that many recent graph learning-based models incorporating syntactic information achieve improved results in entity extraction [14] or semantic role labelling tasks [15]. In order to solve the problems of RGCN, we will illustrate how to learn the syntactic dependency graph using our proposed RGAT model in the next section. In Section IV, we propose to use L2 regularization for parameters in the RGAT model to alleviate the problem of overfitting. ## III Method ### _Syntactic Dependency Graph_ Since a dependency parse describes the syntactic relations that hold among words, many existing studies transform the dependency parse tree into a syntactic dependency graph to capture the syntactic features [2]. It is commonly assumed that there are three kinds of information flows in the syntactic dependency graph: from heads to dependents, from dependents to heads, and self-loops, which are shown in Fig. 2. Each node in the syntactic dependency graph in Fig. 2 is linked with three different types of edges, corresponding to three different types of syntactic relations.
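As a concrete illustration of these three information flows, a dependency parse can be turned into typed edge lists (head-to-dependent, dependent-to-head, and self-loops). The sketch below uses spaCy, which the implementation section reports relying on; the variable names and the choice of the en_core_web_sm model are ours, for illustration only.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Bill said Alice would arrive soon, and she did")

# One edge list per relation type, indexed by token position.
head_to_dep, dep_to_head, self_loops = [], [], []
for tok in doc:
    self_loops.append((tok.i, tok.i))
    if tok.head.i != tok.i:               # spaCy marks the root as its own head
        head_to_dep.append((tok.head.i, tok.i))
        dep_to_head.append((tok.i, tok.head.i))

edge_types = {0: head_to_dep, 1: dep_to_head, 2: self_loops}
```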
Since we are focused on embedding learning in the syntactic dependency graph, it is important to be able to draw on strengths from such different relations and learn uniform syntactic embeddings. ### _RGAT Model_ The core idea of GATNE-T (we refer to this as GATNE in the paper) is to aggregate neighbors of different edge types to the current node and then generate different vector representations for nodes of each edge type. Inspired by the GATNE algorithm proposed by Cen [9], we adjust GATNE and propose the RGAT model applied in the syntactic dependency graph with multiple edges to learn uniform embeddings. Generally, the RGAT model has been modified in three aspects based on the GATNE model. First, GATNE obtains the embedded representations of nodes and edges on a large graph structure data, but our RGAT model needs to adapt to different syntactic graph structures generated by different samples. Therefore, a new attention architecture is proposed to solve this problem. Second, the initial embedding representations of GATNE are randomly generated, but our goal is to solve the ambiguous pronouns coreference resolution, thus it is natural to use BERT Embeddings to initialize the node embeddings. Third, in order to take advantage of all the information, we add a shortcut module in the model so that our initialization node embeddings can also be concatenated into the final output embeddings. Specially, the RGAT model splits the overall embedding of a certain node v on each edge type i in the syntactic dependency graph into two parts: base embedding and three edge embeddings. The base embedding of node v is shared between its three different edge types. We use BERT Embedding as the base embedding of the nodes. We follow the following aggregation steps to obtain the final syntactic embeddings of each node. Fig. 2: Information Flows in Syntactic Dependency Graph. First, the representation of each node is compressed to obtain a more compact representation denoted as \(u_{i}^{\text{base}}\), which is used as the base embedding. \[u_{i}^{\text{base}}\ =W_{r0}u_{i}^{\text{out}} \tag{2}\] where \(W_{r0}\in R^{1024*256}\) is learnable, and \(u_{i}^{\text{out}}\) is the BERT representation of node \(v\) on edge type \(i\). In our work, the representation of each node \(v\) from BERT is a vector of 1024 dimensions. Consistent with previous work [2], the compressed node representation dimensions are set to 256. So \(u_{i}^{\text{base}}\) is a vector of 256 dimensions. Second, following GraphSage [16], we obtain each type of edge embedding for node \(v\) by aggregating from neighbors' edge embeddings. We randomly sample \(n\) neighbor nodes for each edge embedding \(u_{j,r}^{\text{base}}\), and then aggregate them. The aggregator function can be a sum, mean or max-pooling aggregator as follows: \[U_{i,r}=W_{r1}\text{aggregator}\left(\left\{u_{j,r}^{\text{base}},\forall u_{j} \in N_{i,r},j=0,1,2,\dots,n\right\}\right) \tag{3}\] where \(W_{r1}\in R^{d\omega}\), is a learnable parameter, \(d\) is 256, \(m\) is the hyperparameter that needs to be given. In order to make the attention calculation more convenient, we compress the aggregated representation again. Third, applying the attention mechanism to get the weight of each aggregated edge representation for each node as follows: \[a_{i,r}=\operatorname{softmax}\left(w_{r}^{T}\tanh\left(W_{r2}U_{i,r}\right) \right)^{T} \tag{4}\] where \(W_{r}\in R^{n}\), \(W_{r2}\in R^{m*n}\) are learnable parameters, \(n\) is the hyperparameter that needs to be given. 
Fourth, combining each weighted aggregated representation with the base embedding, the final representation of each node in edge type \(r\) can be represented as \(v_{i,r}\), which is denoted as follows: \[v_{i,r}=u_{i}^{\text{base}}+a_{i,r}M_{r}U_{i,r} \tag{5}\] where \(M_{r}\in R^{d\omega}\) is a learnable parameter, \(u_{i}^{\text{base}}\) is base embedding and \(v_{i,r}\) is a vector with 256 dimensions. Finally, the syntactic embedding representation of each node is aggregated from three kinds of node representation \(v_{i,0}\),\(v_{i,1}\),\(v_{i,2}\), denoted as follows: \[v_{i}=\text{aggregator}\left(v_{i,0},v_{i,1},v_{i,2}\right) \tag{6}\] where the aggregator function can be a sum, mean or concatenate operation. The influence of different aggregators will be mentioned in detail in the ablation experiment in Section IV. ### _Syntactic Embeddings_ The previous RGCN model [2] only uses the information of neighbor nodes, but it brings a significant improvement in coreference resolution tasks. Therefore, we think the potential of learning syntactic structure using RGAT can be much more than that. The most important reason is that by designing attention mechanisms, we obtain more valuable information from different types of edges. Fig. 3 explains in detail how to extract Syntactic Embeddings from information flows in syntactic dependency graph and BERT Embeddings. For one thing, we use BERT Embeddings as the base embeddings. For another thing, we use three different types of syntactic relations to construct the syntactic relation graph with attention information to learn RGAT embeddings. In order to retain information from BERT Embeddings, we concatenate the embeddings that represent the relation graph with attention information and BERT Embeddings. Then, we concatenate different embeddings from three different kinds of edges (different color represent embeddings from different edge types in Fig. 3). Finally, the significant words embeddings (the embedding of three words - A, B, P, like in section II, Fig. 1) are concatenated as Syntactic Embedding. ### _Connect BERT Embeddings and Syntactic Embeddings in Series_ We blend the syntactic embedding derived from the syntactic dependency graph with the pretrained BERT embeddings by connecting BERT embedding and syntactic embedding in series. This integrated architecture can help us learn better-performing embeddings when dealing with the task of pronoun resolution. We first use the pre-trained BERT to extract context information between words and then connect with syntactic information from RGAT to form a "look again" mechanism to further obtain blending representations that are more beneficial to the current task. The specific architecture is shown in Fig. 4. As shown in Fig. 4, the pre-trained BERT obtains the hidden feature representation, then RGAT looks at the syntactic information of the sentence again. Relying on the syntactic information derived from RGAT, we can obtain the hidden state of pronoun-related words (denoted as h1(A), h4(B), h6(P)) in the sentence. There is also a fully connected layer in parallel with the output of RGAT, which is used to get a more compact embedding representation for each pronoun-related word. Finally, the outputs representation by RGAT are concatenated with the compact embedding representation of each pronoun-related word. The reason for concatenation is mainly because the syntactic dependency graph uses a special form of Laplace smoothing during its construction process [17], which Fig. 
3: Embedding Structure of Syntactic Dependency Graph. may contain vertex-related features. Some original feature embeddings can be preserved by concatenation, and ultimately a fully connected layer is used for prediction. ## IV Experiments ### _Experimental Setup_ **GAP Dataset.** The first ACL workshop on Gender Bias in Natural Language Processing (2019) included a coreference task addressing Gendered Ambiguous Pronouns (GAP). The task is based on the coreference challenge defined by Webster [3][18] and aims to benchmark the ability to address pronouns in real-word contexts in a gender-equitable way. 263 teams competed through the Kaggle competition, and the winning system ran at a speed of 0.13667, which is close to gender parity. We reviewed the approaches and their documentation of the top eleven systems, noting that they effectively use BERT [1], both through fine-tuning, feature extraction, or ensembles. In order to compare with the baseline results of the previous work on the GAP task [2][3][18] our work directly uses the Gendered Ambiguous Pronouns (GAP) dataset, containing all 8908 samples. The dataset contains 8908 pairs of labelled samples from Wikipedia. Consistent with previous work, 4908 samples are used as training data and 4000 samples are used as test data. **Evaluation metrics.** The task is a multi-classification problem, and we use micro F1-score as the evaluation metric, which is to calculate the precision and recall of all classes together, and then calculate the micro F1-score according to the following formula. \[F1=2\times\frac{\text{precision }\times\text{recall}}{\text{precision }+\text{ recall}} \tag{7}\] \[\text{precision}_{\text{micro}}=\frac{TP_{1}+TP_{2}+TP_{3}}{TP_{1}+FP_{1}+TP_ {2}+FP_{2}+TP_{3}+FP_{3}} \tag{8}\] \[\text{recall}_{\text{micro}}=\frac{TP_{1}+TP_{2}+TP_{3}}{TP_{1}+FN_{1}+TP_{2} +FN_{2}+TP_{3}+FN_{3}} \tag{9}\] \[\text{micro }F1-\text{score}=2\times\frac{\text{precision}_{\text{micro}} \times\text{recall}_{\text{micro}}}{\text{precision}_{\text{micro}}+\text{ recall}_{\text{micro}}} \tag{10}\] **Implementation Detail.** We use the SpaCy module as a syntactic dependency analyzer in our work. For each sample, we build a syntactic dependency graph, then extract and save the needed information. Due to memory constraints, we do not put the entire syntactic dependency graph into the model for training, but first extract the features we needed from the graph, then combine them into a batch, and last sent them into the model. We use Adam [19] as the optimizer and adopt the form of warm up for the learning rate in our model. Especially, the L2 regularization result of the RGAT layer weight is added to the loss function, and the fully connected layer uses batch normalization and dropout strategy. In addition, for the number of sampling random neighbor nodes in the second step of the RGAT model, we found that each node has at most four different neighbors. Thus, in order to take into account the algorithm efficiency and the diversity of neighbor node calculations, we set the number of random samples to four. As for the aggregation method involved in the second step of GraphSage [16], we found that methods such as summation, mean, and maximum pooling basically have no different impact on the model performance. So, we simply use the aggregation method of summation. We use the "BERT-Large-Uncased" model to generate the BERT embedding representations we need. It should be noted that in our model, BERT has not been fine-tuned. 
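Putting the aggregation steps of Eqs. (2)-(6) together with these implementation choices, one RGAT layer can be sketched roughly as follows in PyTorch. This is our own reading of the equations, not the released implementation: in particular, it assumes that the attention scores of Eq. (4) are normalized across the three edge types, as in GATNE, and it uses the sum neighbor aggregator and hyperparameter values such as (m, n) = (10, 20) discussed in Section IV.

```python
import torch
import torch.nn as nn

class RGATLayer(nn.Module):
    """Rough sketch of the aggregation in Eqs. (2)-(6); an interpretation
    for illustration, not the authors' code. d = 256, three edge types."""

    def __init__(self, bert_dim=1024, d=256, m=10, n=20, num_rel=3):
        super().__init__()
        self.compress = nn.Linear(bert_dim, d, bias=False)                            # Eq. (2)
        self.W1 = nn.ModuleList(nn.Linear(d, m, bias=False) for _ in range(num_rel))  # Eq. (3)
        self.W2 = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(num_rel))  # Eq. (4)
        self.w = nn.ParameterList(nn.Parameter(torch.randn(n)) for _ in range(num_rel))
        self.M = nn.ModuleList(nn.Linear(m, d, bias=False) for _ in range(num_rel))   # Eq. (5)

    def forward(self, bert_emb, neighbors):
        # bert_emb: (num_nodes, 1024); neighbors[r][i]: sampled neighbor
        # indices of node i under edge type r.
        base = self.compress(bert_emb)                                # base embeddings, (num_nodes, d)
        out = []
        for i in range(bert_emb.size(0)):
            U, scores = [], []
            for r in range(len(neighbors)):
                nbrs = neighbors[r][i] or [i]                         # fall back to the node itself
                agg = base[nbrs].sum(dim=0)                           # sum aggregator over neighbors
                U_ir = self.W1[r](agg)                                # Eq. (3)
                U.append(U_ir)
                scores.append(self.w[r] @ torch.tanh(self.W2[r](U_ir)))  # Eq. (4), one score per type
            a = torch.softmax(torch.stack(scores), dim=0)             # attention over the 3 edge types
            v = [base[i] + a[r] * self.M[r](U[r]) for r in range(len(U))]  # Eq. (5)
            out.append(torch.cat(v))                                  # Eq. (6), concatenation
        return torch.stack(out)                                       # (num_nodes, 3 * d)
```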
The parameters of all BERT models are fixed during training. The advantage of this is that we do not need to save a separate copy of BERT-related model parameters for the GAP dataset. In order to better improve the generalization of the model, we used the 5-fold cross validation method to split the training set into five equal parts. Each experiment takes one part for verification and the rest is used for training. Each time, the model parameters with the best performance on the validation set are applied to the test set to get the prediction result. A total of five prediction results are obtained, and the average value is taken as the final prediction result. ### _Ablation Studies_ In order to be consistent with baseline work, we use the same hyperparameter configuration as RGCN-with-BERT [2]. However, in our proposed RGAT model, it is necessary to Fig. 4: The Blending Structure of BERT and Syntactic Embedding. determine the dimension of the embeddings of the edge. To this end, we conduct experiments on different parameters and the results are shown in the TABLE I. Through comparison, the dimension parameters (m, n) of the node type are set to (10.20). For the output features of three different edge types, we compare the F1-score of different aggregation methods such as averaging, summing and concatenation. Meanwhile, since the concatenation method will lead to an increase in the number of parameters, we adjust the feature dimension so that the parameters of three aggregation methods are relatively close. In the end, the mean and summing aggregation methods are found to obtain similar experiment results, while the concatenation aggregation method is found to obtain the best result. ### _Comparison with Other Methods_ The paper proposing the GAP dataset [3][18] introduced several baseline methods: (1) existing parsers including a rule-based system by Lee [20], as well as three neural parsers from Clark and Manning (2015) [21], Wiseman et al. (2016) [22] and Lee et al. (2017) [23]. (2) baselines based on traditional coreference cues; (3) baselines based on structural cues: syntactic distance and parallelism; (4) baselines based on Wikipedia cues; (5) Transformer models [24]. Among them, RGCN-with-BERT [2] further improves the F1-score of some baseline methods, reaching 80.3%. We select the best three from the baseline models and the RGCN-with-BERT (Xu et al., 2019) [2] model to compare with our model. At the same time, we also compare our work with the BERT in series with a fully connected layer (BERT-fc). Experimental results show that our work achieves a large improvement over baseline models, which is shown in TABLE II. baseline models, which is shown in TABLE III. (P means precision, R means recall, F1 means F1-score). TABLE III shows that RGAT-with-BERT + c2f-coref outperforms the BERT-large + c2f-coref model on English by 0.8% on the OntoNotes 5.0 Dataset. The main evaluation metric is the average F1 of three metrics - \(MUC\), \(B^{3}\) and \(CEAF_{\varphi 4}\) on the test set. Given how gains on coreference resolution have been hard to come by as evidenced by baseline models in TABLE III, our model is still a considerable improvement. It is noted that compared with BERT, we only add a relatively small number of parameters, which can get a more obvious effect on the reference resolution task. Due to the limitation of computing resources, we did not tune high parameters further. 
In view of the experimental results, we believe that the syntactic structure can indeed help the model to further understand the coreference resolution task. ## V Conclusion and Discussion ### _Conclusions_ The experiment results show that with the help of sentence syntactic dependency information, using the output representations of BERT pre-trained model, RGAT can further learn embedding representations that are more conducive to the task of pronoun resolution and improve the performance of this task. "Gender Bias in Natural Language Processing (GeBNLP) 2019 Shared Task" is a competition to build a common reference parsing system on the GAP dataset. We employ a combination of BERT and our proposed graph neural network RGAT model to participate in this task. The RGAT model is used to digest the syntactic dependency graph, and further extract syntactic-related information from the output features by BERT, which helps us improve the accuracy of this task. Our model significantly improves the F1-score of the task from the previous best of 80.3% to 82.5% (an improvement of 2.2%). Compared with BERT plus a fully connected layer, the accuracy of fine-tuning the fully connected layer is only 78.5%, and our model has a 4.0% improvement. The results show that without fine-tuning the BERT model, the syntactic dependency graph can significantly improve the performance of the referencing problem. For another classic dataset - OntoNotes 5.0, the comparison results of RGAT-with-BERT +c2f-coref VS. baseline BERT-large + c2f-coref indicates that syntactic structure might better encode longer contexts. These observations suggest that future research in pretraining methods may look at more effectively encoding document-level context including syntactic structure. Modelling pronouns, especially in the context of dialogue, still has a lot of potential for our model. Although our advantages are not very obvious, partly because we are limited to the c2f-coref architecture, we believe syntactic structure can effectively help our model achieve more comparable results for document modelling if we can design a suitable architecture for our model. Lastly, through the overview of training samples and our model outputs, a considerable number of errors suggest that our model is also still unable to resolve cases requiring mention paraphrasing like [27], especially considering that such learning signal is rather sparse in the training set. However, a few of these errors have been reduced. We think our model has the possibility to solve this problem. Fig. 5: RGAT-with-BERT + c2f-coref model for OntoNotes 5.0 dataset. ### _Discussion_ In fact, our work provides an alternative paradigm for such similar coreference tasks or for those tasks that need to mine the role of syntactic information. Our work demonstrates it is not so necessary to fine-tune the entire BERT model and save a unique BERT model parameter for each task. Our proposed paradigm simply changes the classification header of the BERT model to graph neural networks with syntactic dependencies, and then fine-tunes the new classification header to obtain better results. Our work is to solve the problems encountered by the current large pre-trained language model from a new perspective, that is, the pre-trained language model is too large, and it is necessary to save a new fine-tuned model parameter for each downstream task. 
For example, models such as LoRA [8], AdapterBias [30], and LLM-Adapters [31] all reduce the amount of parameter required for fine-tuning the model on the model itself and the output of each layer. And other models that incorporate traditional machine learning methods [4][13] do not achieve competitive results. By comparison, our work is to use the output features of the pre-trained language model, but not to change the parameters of the model itself and the output features. Our work shows that changing the classification head can effectively reduce the amount of parameter for fine-tuning the pre-trained model and greatly improve the recognition accuracy of the task. ### _Limitation and Future Direction_ Our models and experiments have shown that syntactic dependency information plays a significant role in reference resolution tasks and that syntactic structure can optimize the embedded representation of large language models. However, there are three problems to solve in the future. (1) There is no evidence of the role of syntactic structures for other NLP tasks. (2) In this paper, supervised learning is used to optimize the embedded representations of BERT. It is a future direction to explore the unsupervised representation learning that combines BERT with syntactic structure. (3) When training, we can first save the features to the hard disk, but in inference, we do not store embedded features, so how to optimize the inference time is also a future problem. ## Acknowledgment This research was funded by Shanghai Philosophy and Social Sciences Planning Project, grant number 2020BGL009.
Although syntactical information is beneficial for many NLP tasks, combining it with contextual information between words to solve the coreference resolution problem needs to be further explored. In this paper, we propose an end-to-end parser that combines pre-trained BERT with a Syntactic Relation Graph Attention Network (RGAT) to take a deeper look into the role of syntactic dependency information for the coreference resolution task. In particular, the RGAT model is first proposed, then used to understand the syntactic dependency graph and learn better task-specific syntactic embeddings. An integrated architecture incorporating BERT embeddings and syntactic embeddings is constructed to generate blending representations for the downstream task. Our experiments on a public Gendered Ambiguous Pronouns (GAP) dataset show that with the supervision learning of the syntactic dependency graph and without fine-tuning the entire BERT, we increased the F1-score of the previous best model (RGCN-with-BERT) from 80.3% to 82.
2304.04753
$\textit{e-Uber}$: A Crowdsourcing Platform for Electric Vehicle-based Ride- and Energy-sharing
The sharing-economy-based business model has recently seen success in the transportation and accommodation sectors with companies like Uber and Airbnb. There is growing interest in applying this model to energy systems, with modalities like peer-to-peer (P2P) Energy Trading, Electric Vehicles (EV)-based Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V), and Battery Swapping Technology (BST). In this work, we exploit the increasing diffusion of EVs to realize a crowdsourcing platform called e-Uber that jointly enables ride-sharing and energy-sharing through V2G and BST. e-Uber exploits spatial crowdsourcing, reinforcement learning, and reverse auction theory. Specifically, the platform uses reinforcement learning to understand the drivers' preferences towards different ride-sharing and energy-sharing tasks. Based on these preferences, a personalized list is recommended to each driver through CMAB-based Algorithm for task Recommendation System (CARS). Drivers bid on their preferred tasks in their list in a reverse auction fashion. Then e-Uber solves the task assignment optimization problem that minimizes cost and guarantees V2G energy requirement. We prove that this problem is NP-hard and introduce a bipartite matching-inspired heuristic, Bipartite Matching-based Winner selection (BMW), that has polynomial time complexity. Results from experiments using real data from NYC taxi trips and energy consumption show that e-Uber performs close to the optimum and finds better solutions compared to a state-of-the-art approach
Ashutosh Timilsina, Simone Silvestri
2023-03-31T04:28:31
http://arxiv.org/abs/2304.04753v1
# _e-Uber_: A Crowdsourcing Platform for Electric Vehicle-based Ride- and Energy-sharing ###### Abstract The sharing-economy-based business model has recently seen success in the transportation and accommodation sectors with companies like Uber and Airbnb. There is growing interest in applying this model to energy systems, with modalities like peer-to-peer (P2P) Energy Trading, Electric Vehicles (EV)-based Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V), and Battery Swapping Technology (BST). In this work, we exploit the increasing diffusion of EVs to realize a crowdsourcing platform called _e-Uber_ that jointly enables ride-sharing and energy-sharing through V2G and BST. e-Uber exploits _spatial crowdsourcing_, reinforcement learning, and _reverse auction_ theory. Specifically, the platform uses reinforcement learning to understand the drivers' preferences towards different ride-sharing and energy-sharing tasks. Based on these preferences, a personalized list is recommended to each driver through _CMAB-based Algorithm for task Recommendation System (\(CARS\))_. Drivers bid on their preferred tasks in their list in a reverse auction fashion. Then e-Uber solves the task assignment optimization problem that minimizes cost and guarantees the V2G energy requirement. We prove that this problem is NP-hard and introduce a bipartite matching-inspired heuristic, _Bipartite Matching-based Winner selection_ (\(BMW\)), that has polynomial time complexity. Results from experiments using real data from NYC taxi trips and energy consumption show that e-Uber performs close to the optimum and finds better solutions compared to a state-of-the-art approach. Online spatial crowdsourcing, V2G, energy-sharing, ride-sharing, personalized recommendation, combinatorial multi-armed bandit. ## I Introduction With the recent advent of sharing-economy-based models and their successful application in accommodation-sharing (e.g. Airbnb, Vrbo) and ride-sharing (e.g. Uber, Lyft), researchers have focused on applying this concept to energy systems [1, 2]. Energy-sharing modalities such as peer-to-peer (P2P) energy trading [3, 4], and Electric Vehicle (EV)-based Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V) [5], as well as Battery Swapping Technology (BST) [6] have been proposed as sustainable and flexible approaches to balance the energy supply and demand for both the grid and end-users [5, 7]. In particular, the rapid rise in EV sales in recent years has created new opportunities for mobile and flexible energy storage and management, including ride-sharing and energy-sharing services using EVs [5]. However, no studies have been made so far to realize a platform that jointly enables both ride-sharing and energy-sharing. Crowdsourcing is an approach for recruiting workers from a "crowd" to execute tasks that has been successfully applied to several domains [8, 9]. We believe that a crowdsourcing platform has the potential to also be successfully applied to a combined ride-sharing and energy-sharing system, where _tasks_ are ride- and energy-sharing requests that can be performed by EV drivers, called _workers_. Tasks are requested by _task-requesters_, which include ride-sharing clients as well as private or public energy customers.
Examples of such energy customers include a utility company and a microgrid community looking to achieve demand response by shifting energy demand to V2G services at different locations, especially during times of peak energy demand [10, 11, 12, 13]. In this work, we propose a novel crowdsourcing platform called _e-Uber_ that leverages the increasing diffusion of EVs to enable joint ride-sharing and energy-sharing services. A general overview of the platform is depicted in Fig. 1. With this platform, drivers equipped with EVs can not only transport passengers through ride-sharing but also sell excess energy stored in their batteries to the grid/houses during periods of high demand through V2G or battery swapping [14, 15, 16]. e-Uber has the potential to increase the earning potential for drivers and also to help balance the energy demand and supply for the grid while simultaneously fulfilling the mobility and energy demands of consumers.

Fig. 1: e-Uber crowdsourcing platform overview

A few works on crowdsourcing have been proposed to facilitate the integration of energy-sharing services with EVs. Ai et al. [7] proposed a V2H-based omni-sharing modality in a microgrid community to crowdsource energy from EVs. Similarly, the authors in [17] propose an autonomous EV (AEV)-based energy crowdsourcing approach, allowing AEVs to participate in energy-sharing tasks for consumers placed in the cloudlet. However, these approaches do not consider the _workers' preferences_ or their _limited ability_ to select tasks when overwhelmed with choices. There have been a few spatial crowdsourcing works attempting to solve the task assignment problem considering worker preferences [9, 18, 19, 20]. However, these approaches focus on general uniform tasks, and do not consider ride-sharing combined with energy-sharing. To the best of our knowledge, in this paper we propose the first crowdsourcing mechanism that jointly enables ride- and energy-sharing to provide a multifaceted solution to existing problems on efficiency and sustainability of transportation, energy management, and cost-effective demand response using EVs. _e-Uber_ works in three decision stages: computing a personalized task recommendation for each EV worker, collecting bids from workers, and selecting winning bids through a reverse auction. We propose a preference-aware optimal task recommendation system, \(POTR\), and a reinforcement learning mechanism to learn worker preferences. The reverse auction process is formalized for bidding and the winning bids are determined through an optimization framework called _Winning Bid Selection_ (\(WiBS\)). A Reinforcement Learning (RL)-based algorithm, called \(CARS\), is proposed that solves the problem of task recommendation and updates the worker preferences based on their interaction with the recommendation using the Combinatorial Multi-Armed Bandit framework [2]. Proving that the \(WiBS\) problem is NP-hard, we also propose a bipartite matching-based heuristic, \(BMW\), that finds a solution to \(WiBS\) in polynomial time. The major contributions of the paper are as follows: * We propose a spatial crowdsourcing platform, _e-Uber_, to jointly enable ride-sharing and energy-sharing using EVs; * We develop an optimization framework, called _POTR_, based on reinforcement learning for personalized recommendation of tasks to workers.
* We also formalize winning bid selection (_WiBS_) problem, and prove that it is NP-Hard; * We propose an RL algorithm, called \(CARS\), that incorporates reinforcement learning for task recommendation to workers and update the preferences according to their interaction to the recommendation; * Given the complexity of the \(WiBS\) problem, we propose a Bipartite Matching-based Winner Selection algorithm, \(BMW\) and determine its polynomial time complexity; * Through extensive experiments using real data, we show that _e-Uber_ can indeed lead to successful joint crowdsourcing of energy and ride-sharing services that is able to complete more than 850 tasks compared to state-of-the-art approach in a span of 24 hours; ## II Related works Crowdsourcing services has received increasing attention in recent years because of their flexibility and convenience in facilitating the completion of tasks by a set of workers [9]. There exists a plethora of research works that focus on different aspects of crowdsourcing from optimal task allocation [5] to preference-aware decision-making [18] to privacy-preserving [19, 21]. Some other focus on designing an effective and informed incentive mechanism that motivates workers for their sustained engagement in the system [19]. Reverse auction mechanism has been widely utilized for designing incentive mechanism including bidding and winner selection in crowdsourcing works [22, 20, 23, 24]. In [22], a secure reverse auction protocol is devised for task assignment for spatial crowdsourcing along with an approximation algorithm. Similarly, [23] proposes a truthful reverse auction mechanism for location-aware crowdsensing while authors in [20] focus on generalized second-price auction for stable task assignment. The work in [24] also uses a truthful reverse auction mechanism to devise incentives for workers in urban parcel delivery. In context of electric vehicles (EV), the work in [5] employs crowdsourcing for solving charging problems of EVs. A V2V energy-sharing framework has been proposed that crowdsources the charging request from EV owners and allocates the energy considering energy trading prices, EV parameters and privacy. Some other crowdsourcing literature focus on different problems like route optimization of EVs [25] and parcel delivery using EVs [26]. Closer to our problem setting, some literature have explored the use of crowdsourcing for integrating energy-sharing services with EVs. For instance, authors in [7] proposed a V2H-based omni-sharing modality system in a microgrid community, where energy is crowdsourced from EVs to reduce the overall cost of the community and decrease the need for energy storage. Another study [17] suggested an autonomous EV-based energy crowdsourcing approach, which enables EVs to participate in energy-sharing tasks for cloud-based energy consumers. However, this approach is challenging to implement and doesn't consider workers' preferences or the impact of sub-optimal decision-making. In fact, most of these crowdsourcing works ignore the user behavioral modeling in task assignment. The spatial crowdsourcing work in [18] tried to solve the task assignment problem by considering worker preferences, but this solution is better suited for group tasks and doesn't account for other behavioral aspects of user behavioral modeling like _bounded rationality_[27] and irrational decision-making that drastically affects the system performances. 
Additionally, the existing works neglect the task recommendation problem and other realistic budget constraints, such as the energy budgets required by the utility or microgrid for any time period. Furthermore, these works are limited to homogeneous tasks like energy-sharing or delivery services only, which can result in significant idle hours for EVs during off-peak periods as such tasks have similar pattern. In conclusion, while existing literature in crowdsourcing mechanisms have contributed to task assignment, incentive design, privacy and energy-sharing services, there is still room for improvement in terms of behavioral aspect like preference-aware task recommendation and online learning of these preferences; task assignment with overall cost minimization and energy budgets; and heterogeneity in crowdsourcing tasks. Our proposed work focuses on addressing these limitations and developing more comprehensive, effective, and realistic solution to joint enabling of ride-and energy-sharing services in a crowdsourcing setting using reverse auction, reinforcement learning and efficient matching algorithms. ## III System Model We assume time to be divided in time slots. At each time slot \(t\), the set of tasks is referred to as \(\mathcal{S}_{t}\), which are crowdsourced to the workers. We refer to \(\mathcal{W}_{t}\) as the set of workers at time \(t\). Each task in \(\mathcal{S}_{t}\) is denoted by a tuple \(s_{j}\stackrel{{\text{def}}}{{=}}\langle z_{j},c_{j},d_{j}\rangle\) where \(z_{j}\) is the type of task (\(0-\)rideshare, \(1-\)battery swapping, and \(2-\)V2G), \(c_{s_{j}}\) is the start position and \(d_{j}\) is the destination of task. For energy-sharing tasks, although spatial in nature, start position \(c_{s_{j}}\) is same as destination \(d_{j}\). We assume the utility company submits energy tasks as a result of an _energy requirement_\(\mathcal{E}\). This is a typical assumption for demand response solutions [10, 11, 12]. As a result, the total amount of energy provided by workers through V2G must be at least \(\mathcal{E}\). Each worker in \(\mathcal{W}_{t}\) is denoted by a tuple \(w_{i}\stackrel{{\text{def}}}{{=}}\langle c_{w_{i}},e_{i},r_{i},r_{ i}^{min}\rangle\), where \(c_{i}\) is the current position of the EV worker \(w_{i}\) which can be different to spatial task location \(c_{s_{j}}\), \(e_{i}\) is the energy per unit range value in (\(kWh/km\)) that gives information about how much energy the EV consumes to drive a unit distance, \(r_{i}\) is the available range of electrical vehicle in \(km\) given by the remaining energy level in their batteries, and \(r_{i}^{min}\) is the minimum energy not to be exceeded after completing the task to ensure sufficient energy for traveling to a charging location. The energy required to perform task \(s_{j}\) by worker \(w_{i}\) is denoted by \(l_{ij}\). e-Uber provides that a list of tasks, called _recommendation list_, is sent out to each worker. Workers then submit bids to these tasks. The bid \(b_{ij}\in\mathcal{B}\) represents the cost asked by worker \(w_{i}\) to perform task \(s_{j}\), where \(\mathcal{B}\) is the set of all the bids submitted by workers. Previous works in crowdsourcing and energy-sharing using EVs has generally assumed that workers would have complete access to the list of available tasks and would pick the best task for them or, conversely, the crowdsourcing platform would assign tasks to workers regardless of their preference. These assumptions are both undesirable. 
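For concreteness, and before discussing why these assumptions are problematic, the notation introduced above can be summarized in a small data-model sketch. The Python below is purely illustrative (class and field names are our own, not taken from the e-Uber implementation).

```python
from dataclasses import dataclass
from typing import Dict, Tuple

RIDE, SWAP, V2G = 0, 1, 2          # task types z_j

@dataclass
class Task:                         # s_j = <z_j, c_{s_j}, d_j>
    z: int                          # task type
    start: Tuple[float, float]      # start position c_{s_j}
    dest: Tuple[float, float]       # destination d_j (equals start for energy tasks)

@dataclass
class Worker:                       # w_i = <c_{w_i}, e_i, r_i, r_i^min>
    pos: Tuple[float, float]        # current EV position c_{w_i}
    e: float                        # energy per unit range, kWh/km
    r: float                        # remaining range, km
    r_min: float                    # minimum reserve range after a task, km

# Bids are sparse: bids[(i, j)] = b_ij, the price asked by worker i for task j.
Bids = Dict[Tuple[int, int], float]
```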
On the one hand workers have limited time and ability to go over potentially a very long list of tasks [2], and on the other hand workers may have different preferences on the tasks to complete. In this work, we recommend a limited list of relevant tasks to each worker based on their preferences. We model the preferences as follows. We denote by \(\alpha_{iz_{j}}\in[0,1]\) the probability that worker \(w_{i}\) bids for a task of type \(z_{j}\). These are called _bidding probabilities_. We assume that these probabilities are unknown and thus need to be learned over time by observing the workers' behavior. ## IV _e-Uber:_ Problem Formulation Fig. 2 summarizes the steps involved in the e-Uber platform. e-Uber collects a list of tasks \(\mathcal{S}_{t}\) at time \(t\) as requested by task-requesters which need to be crowdsourced to the EV-based workers in \(\mathcal{W}_{t}\) (step \(1\)). The platform sends a personalized list of tasks to the workers based on their preferences (step \(2\)) to which they respond by submitting bids to the platform for the tasks (step \(3\)). Based on the received bids \(\mathcal{B}_{t}\) (step \(4\)), the platform uses reverse auction based algorithm to determine the winning bids \(\mathbf{q}^{*}\) along with final payment \(\mathbf{P}\) for winners (step \(5\)). Finally, the worker preferences are updated based on their feedback for the next time step (step \(6\)). Given the nature of the considered tasks, worker-task assignment is performed one-to-one. As described above, the system involves solving two different problems. One is to recommend a set of tasks which maximizes the likelihood of generating the maximum number of bids, and thus improving the overall system performance. Another problem is to select the winning bids for task assignment and determine the final payment to crowdsource the tasks to the workers. These two problems are discussed below. Fig. 2: Working mechanism of e-Uber ### _Preference-aware Optimal Task Recommendation Problem_ Our objective is to recommend a limited subset of tasks to each workers which maximizes the likelihood of bidding for these tasks, while avoiding to overwhelm workers with a list above their cognitive capabilities. We formalize this through the Preference-aware Optimal Task Recommendation (POTR) problem as follows. In short, the problem aims at maximizing the overall task bidding probabilities (hereafter referred interchangeably as preferences) while limiting the size of the recommended list to \(K\) as well as ensuring that each task is recommended to at least \(\psi\) workers. maximize \[\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}\alpha_{iz_{j}}x_ {ij}\] (1) s.t. \[\sum_{s_{j}\in\mathcal{S}}x_{ij}\leq K, \forall w_{i}\] (1a) \[\sum_{w_{i}\in\mathcal{W}}x_{ij}\geq\psi, \forall s_{j}\] (1b) \[\sum_{s_{j}\in\mathcal{S}}g(z_{j})x_{ij}\geq\frac{|V2G|}{|\mathcal{ S}|}K, \forall w_{i}\] (1c) \[l_{ij}x_{ij}\leq(r_{i}-r_{i}^{min})e_{i}, \forall w_{i},s_{j}\] (1d) \[x_{ij}=0,\text{ if }|c_{s_{j}}-c_{w_{i}}|>\lambda, \forall w_{i},s_{j}\] (1e) \[x_{ij}\in\{0,1\}, \forall w_{i},s_{j}\] (1f) \[g(z_{j})=\begin{cases}1,&\text{if }z_{j}=2\\ 0,&\text{otherwise}\end{cases} \tag{2}\] The objective function in Eq. (1) maximizes the sum of individual bidding probabilities for each worker's recommended tasks. The binary decision variable \(x_{ij}\in\{0,1\}\) is set to 1 if the task \(s_{j}\) is included in the list of worker \(w_{i}\). Constraint (1a) limits the length of each recommendation list to be less than \(K\). 
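Since the e-Uber simulator is built with Gurobi (Section VI), the integer program in Eq. (1) can be encoded quite directly. The sketch below is a minimal, illustrative version (parameter names and data layout are our assumptions, not the authors' code), with the remaining constraints (1b)-(1e), discussed next, marked inline.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_potr(alpha, task_type, dist, l, workers, K, psi, lam, v2g_ids):
    """alpha[i, z]: estimated bidding probability; task_type[j] = z_j;
    dist[i, j]: worker-task distance; l[i, j]: energy needed for task j;
    workers[i] = (e_i, r_i, r_min_i); v2g_ids: indices of V2G tasks."""
    nW, nS = len(workers), len(task_type)
    m = gp.Model("POTR")
    x = m.addVars(nW, nS, vtype=GRB.BINARY, name="x")
    m.setObjective(gp.quicksum(alpha[i, task_type[j]] * x[i, j]
                               for i in range(nW) for j in range(nS)), GRB.MAXIMIZE)
    for i, (e_i, r_i, r_min_i) in enumerate(workers):
        m.addConstr(x.sum(i, "*") <= K)                                # (1a) list length
        m.addConstr(gp.quicksum(x[i, j] for j in v2g_ids)
                    >= len(v2g_ids) * K / nS)                          # (1c) V2G share
        for j in range(nS):
            m.addConstr(l[i, j] * x[i, j] <= (r_i - r_min_i) * e_i)    # (1d) energy slack
            if dist[i, j] > lam:
                m.addConstr(x[i, j] == 0)                              # (1e) search radius
    for j in range(nS):
        m.addConstr(x.sum("*", j) >= psi)                              # (1b) task coverage
    m.optimize()
    return {(i, j) for i in range(nW) for j in range(nS) if x[i, j].X > 0.5}
```

In the learning-based variant of Section V, the coefficient \(\alpha\) is simply replaced by the UCB index of Eq. (6).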
In constraint (1b), we ensure that each task is recommended to at least \(\psi=\left\lfloor\frac{|\mathcal{W}|K}{|\mathcal{S}|}\right\rfloor\) workers. Constraint (1c) further guarantees that a minimum of \(\frac{|V2G|\times K}{|\mathcal{S}|}\) V2G tasks are recommended to each worker. Constraint (1d) requires that the recommended tasks consume no more energy than each EV can spare, ensuring that the EV retains sufficient energy after performing a task to drive to a charging location if required. Finally, constraint (1e) ensures that only tasks within distance \(\lambda\) of a worker are recommended. Note that information on the bidding probabilities is difficult to obtain _a priori_, as it is specific to each worker and involves elements of complex human psychology. Therefore, we assume that the preferences are initially unknown and are learned by observing the workers' behavior with respect to the assigned tasks. Reinforcement learning mechanisms have recently been used extensively to learn such policies at run-time, gradually converging to optimal actions based on feedback from the environment. In Section V, we present a _Combinatorial Multi-Armed Bandit (CMAB)_-based approach [2] that learns the preferences of workers over time while simultaneously recommending an optimal personalized list of tasks to them. ### _Winning Bid Selection and Final Payment Problem_ After sending the personalized list of tasks to each worker, _e-Uber_ collects the bids. Given the collected bids, e-Uber selects the winning bids, i.e., the workers performing the tasks, by solving the Winning Bid Selection (\(WiBS\)) problem. This problem determines the bids that minimize the total cost from the perspective of the task requesters. \(WiBS\) can be formulated as a constrained assignment problem as follows: minimize \[\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}b_{ij}q_{ij}\] (3) s.t. \[\sum_{s_{j}\in\mathcal{S}}q_{ij}\leq 1,\quad\forall w_{i}\] (3a) \[\sum_{w_{i}\in\mathcal{W}}q_{ij}=1,\quad\forall s_{j},z_{j}<2\] (3b) \[\sum_{w_{i}\in\mathcal{W}}q_{ij}\leq 1,\quad\forall s_{j},z_{j}=2\] (3c) \[\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}g(z_{j})l_{ij}q_{ij}\geq\mathcal{E},\] (3d) \[q_{ij}\in\{0,1\},\quad\forall w_{i},s_{j}\] (3e) The objective function in Eq. (3) minimizes the total cost of performing the tasks given the collected bids. \(q_{ij}\) is the binary decision variable, as defined in constraint (3e), that indicates whether bid \(b_{ij}\) wins the auction and therefore task \(s_{j}\) is assigned to worker \(w_{i}\). Constraint (3a) ensures that a worker is assigned at most one task, while (3b) requires each ride-sharing or battery-swapping task (\(z_{j}<2\)) to be assigned to exactly one worker. Similarly, constraint (3c) ensures that a V2G task is assigned to at most one worker. Finally, constraint (3d) ensures that at least \(\mathcal{E}\) energy is supplied through V2G services. Recall that the function \(g(z_{j})=1\) if \(z_{j}=2\) (V2G task) and zero otherwise. Following the selection of the winning bids by solving the \(WiBS\) problem in Eq. (3), the final payment for each winning worker \(w_{k}\) assigned task \(s_{j}\) is the second-to-the-selected bid received for that task, i.e., the next-lowest bid after the winning one. Since, under the second-price payment rule, the dominant strategy for all bidders is to bid truthfully [28], rational workers are incentivized to submit truthful bids. **Theorem 1**.: _The \(WiBS\) problem defined in Eq. (3) is NP-hard._ 
Proof.: We provide a reduction from the NP-hard 0-1 min-Knapsack (0-1 min-KP) problem [29]. In this problem, a set of \(n\) items is given, where each item \(a_{i}\) has a value \(l_{i}\) and a weight \(b_{i}\). The goal is to select the subset of items that incurs minimum weight and has a value of at least \(\mathcal{E}\). Given a generic instance of min-KP, we construct an instance of our problem as follows. We only consider V2G tasks (\(z_{j}=2\)). For each item \(a_{i}\) of min-KP we create a task-worker pair \((s_{a_{i}},w_{a_{i}})\). We assume that worker \(w_{a_{i}}\) submits only one bid, for task \(s_{a_{i}}\), of amount \(b_{i}\) (the weight of \(a_{i}\) in min-KP). Additionally, the energy required by \(w_{a_{i}}\) to perform \(s_{a_{i}}\) is \(l_{i}\) (the value of \(a_{i}\) in min-KP). Finally, we set the energy requirement for V2G to \(\mathcal{E}\). Under these assumptions, the decision variable \(q_{ij}\) of our original problem can be reduced to \(q_{i}\), since each worker bids for exactly one task and each task receives a bid from only one worker. Additionally, constraints (3a) and (3c) are trivially satisfied, since each worker is paired with a single task, while constraint (3b) does not apply since we only have V2G tasks. Solving our reduced problem instance finds the set of task-worker pairs that minimizes the sum of the bids while meeting the energy requirement \(\mathcal{E}\). This corresponds (i.e., it can be translated in polynomial time) to the optimal solution of min-KP, i.e., the set of items with minimum weight that provides a value of at least \(\mathcal{E}\). As a result, our problem is at least as difficult as min-KP, and thus it is NP-hard. ## V e-Uber Solution Approaches ### _CMAB-based Task Recommendation System_ In order to solve the optimization problem in Eq. (1), it is necessary to know the workers' preferences beforehand. These are generally _not known_ a priori in realistic settings. Therefore, it becomes necessary to learn these preferences at run-time, while simultaneously optimizing the task assignment. To this purpose, we propose a reinforcement learning approach inspired by the Combinatorial Multi-Armed Bandit (CMAB) framework [2, 30]. The Combinatorial Multi-Armed Bandit is a classic reinforcement learning setting in which an agent selects a combination of choices (i.e., decision-making _actions_) and observes a linear combination of the corresponding _rewards_ at each timestep. The long-term objective is to find a strategy that maximizes this reward by selecting optimal actions. This strategy, better known as a _policy_, must be learned from how the agents choose to interact with the system. The learning is carried out through an _exploration vs. exploitation_ trade-off. Since, at the beginning, nothing is known about how an agent engages with the system, the system learns by letting the agent choose from diverse options and observing the resulting interactions; this is referred to as _exploration_. As time passes, the system accumulates information about the agent's behavior and increasingly uses that knowledge instead of sending out a diverse range of choices; this is called _exploitation_. By balancing exploration and exploitation over time, the system eventually gathers sufficient information on the agent's behavior and learns an optimal strategy. 
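To make this exploration-exploitation trade-off concrete, the sketch below maintains a running estimate of each worker's per-task-type bidding probability together with a UCB-style exploration bonus, anticipating the selection and update rules formalized in Eqs. (6)-(8) below. It is an illustrative simplification (a no-bid outcome is treated as an observation of 0), not the exact CARS update.

```python
import numpy as np

class PreferenceEstimator:
    """Tracks alpha_hat (estimated bidding probabilities) and observation counts
    m for each (worker, task-type) pair, and exposes a UCB-style index."""

    def __init__(self, n_workers, n_types):
        self.alpha_hat = np.zeros((n_workers, n_types))
        self.m = np.zeros((n_workers, n_types))
        self.Q = n_workers * n_types          # total number of preference variables

    def index(self, t):
        # estimated preference plus exploration bonus (cf. Eq. (6))
        bonus = np.sqrt((self.Q + 1) * np.log(max(t, 2)) / np.maximum(self.m, 1))
        return self.alpha_hat + bonus

    def update(self, i, z, got_bid):
        # running-mean update of the bidding probability (cf. Eqs. (7)-(8));
        # a recommendation that received a bid counts as 1, an ignored one as 0
        self.m[i, z] += 1
        obs = 1.0 if got_bid else 0.0
        self.alpha_hat[i, z] += (obs - self.alpha_hat[i, z]) / self.m[i, z]
```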
In our problem setting, the workers are the agents, each of whom must be sent an optimal set of tasks so as to accumulate good-quality bids from them. Specifically, the objective is to find the best possible task recommendations (actions) to send to each worker (agent) so as to obtain the highest cumulative preferences across workers (reward). Therefore, in this section, based on this CMAB framework, we design an algorithm called the _CMAB-based Algorithm for task Recommendation System (CARS)_. The pseudo code of _CARS_ is shown in Alg. (1). _CARS_ recommends personalized tasks to each worker based on the current estimate of the worker's preference towards each task type. Recall from Section IV that a worker's preference is defined as the bidding probability, i.e., the probability that the worker submits a bid for a task of a given type. The algorithm then updates and learns these bidding probabilities based on the worker's engagement with the recommendation through bids. If the worker submits a bid, the recommendation is considered preferred; if the worker ignores it by not submitting a bid, it is considered not preferred. Based on this information, the preference of each worker towards each task type is updated. With \(\mathcal{F}\) denoting the overall solution space consisting of all feasible action matrices, the action matrix \(\mathbf{A}(t)\in\mathcal{F}\) corresponds to the optimal set of recommendation lists for timestep \(t\). It consists of action values \(x_{ij}\in\{0,1\}\), which are the same as the decision variables in the POTR problem. Recall that \(x_{ij}\) indicates whether task \(s_{j}\) is in the personalized recommendation list of worker \(w_{i}\) for timestep \(t\). Given this action matrix, the preference of worker \(w_{i}\) towards task type \(z_{j}\) is modeled as a random variable \(\bar{\alpha}_{i_{z_{j}}}\) whose mean value \(\alpha_{i_{z_{j}}}\) is initially unknown. The current knowledge of these random variables \(\bar{\alpha}_{i_{z_{j}}}\) up to timestep \(t\) is denoted by the estimated expectation \(\widehat{\alpha}_{i_{z_{j}}}\). The reward the platform receives for selecting action matrix \(\mathbf{A}(t)\) at timestep \(t\) is defined as the sum of the preferences over the recommended tasks: \[\mathbf{R}_{\mathbf{A}(t)}(t)=\sum_{w_{i},s_{j}}a_{ij}(t)\bar{\alpha}_{ij}(t) \tag{4}\] Since the distribution of \(\bar{\alpha}_{i_{z_{j}}}\) is unknown, the goal of this CMAB-based approach is to learn a policy that minimizes the overall _regret_ up to time \(t\). The regret is defined as the difference between the expected reward with perfect knowledge of the preferences and that obtained by the policy over time: \[\mathcal{R}(t)=t\,\mathbf{R}_{\mathbf{A}(t)}^{*}(t)-\mathbb{E}\Big[\sum_{t^{\prime}=1}^{t}\mathbf{R}_{\mathbf{A}(t^{\prime})}(t^{\prime})\Big], \tag{5}\] where \(\mathbf{R}_{\mathbf{A}(t)}^{*}(t)\) is the optimal reward obtained with perfect knowledge of the preference variables. Even though minimizing the regret directly is difficult, \(CARS\) ensures that the regret is bounded, meaning that non-optimal actions are picked only a limited number of times and the learned policy eventually converges towards the optimal one. We present a modified objective function based on the UCB1 algorithm to select the action matrix as follows. 
\[\mathbf{A}(t)=\arg\max_{\mathbf{A}\in\mathcal{F}}\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}a_{ij}\left(\widehat{\alpha}_{i_{z_{j}}}+\sqrt{\frac{(Q+1)\ln t}{m_{i_{z_{j}}}}}\right) \tag{6}\] where \(Q=|\mathcal{W}|\times|z_{j}|\) is the total number of variables and \(m_{i_{z_{j}}}\) is the number of observations so far for the variable \(\bar{\alpha}_{i_{z_{j}}}\). At each timestep \(t\), we solve the \(POTR\) problem with the CMAB-based objective function in Eq. (6) in place of Eq. (1), subject to the same constraints (1a)-(1f). By solving this modified problem, the set of optimal actions (i.e., recommendation lists) for each worker is selected based on the current estimate of the preferences up to timestep \(t-1\). For this purpose, we keep track of \(\widehat{\alpha}_{iz_{j}}\) along with \(m_{iz_{j}}\). These two quantities are then used to update the current estimate of the variable \(\bar{\alpha}_{iz_{j}}\) at time \(t\) based on the worker's engagement with the recommendation, i.e., whether the worker chooses to submit a bid or not. Note that if a worker submits a bid, they must complete the task if it is assigned to them. \[\widehat{\alpha}_{iz_{j}}(t) =\begin{cases}\frac{\widehat{\alpha}_{iz_{j}}(t-1)m_{iz_{j}}(t-1)+\alpha_{iz_{j}}(t)}{m_{iz_{j}}(t-1)+1}&\text{if }0<b_{ij}<\infty,\\ \widehat{\alpha}_{iz_{j}}(t-1)&\text{otherwise.}\end{cases} \tag{7}\] \[m_{iz_{j}}(t) =m_{iz_{j}}(t-1)+1 \tag{8}\] We present the _CARS_ algorithm in Alg. 1. CARS begins by collecting information on the workers and the tasks (lines \(1-2\)). It then sends a personalized recommendation list to each worker by solving the optimization problem with Eq. (6) as the objective function and constraints (1a)-(1f) (lines \(3-4\)). Next, it collects the bids for the recommended tasks from the workers (line \(5\)). Finally, the current knowledge of the workers' bidding probabilities is updated according to Eqs. (7) and (8) based on how the workers respond to the recommendations (lines \(6-7\)). For the update, recommendations that receive a bid are taken as positive reinforcement, and recommendations that do not receive a bid as negative reinforcement. In the following, we prove that Alg. 1 has bounded regret and thus eventually converges to the optimal policy in a finite number of timesteps. ``` 1\(\forall w_{i}\in\mathcal{W}_{t}\), collect the workers' info \(w_{i}=\langle c_{w_{i}},e_{i},r_{i},r_{i}^{min}\rangle\) ; 2\(\forall s_{j}\in\mathcal{S}_{t}\), collect the tasks \(s_{j}=\langle z_{j},c_{s_{j}},d_{j}\rangle\) ; 3 Compute the recommendation lists \(\mathbf{A}(t)\) by solving Eq. (6) subject to constraints (1a)-(1f) ; 4 Send each worker \(w_{i}\) its personalized list \(\{s_{j}:x_{ij}=1\}\) ; 5 Collect the bids \(\mathcal{B}_{t}\) submitted for the recommended tasks ; 6foreach\(w_{i}\in\mathcal{W}_{t}\) and \(s_{j}\) recommended to \(w_{i}\)do 7 Update \(\widehat{\alpha}_{iz_{j}}\) and \(m_{iz_{j}}\) via Eqs. (7)-(8) according to whether a bid \(b_{ij}\) was received ; 8 end for ``` **Algorithm 1** CMAB-based Algorithm for task Recommendation System (CARS) **Theorem 2**.: _CARS provides bounded regret given by:_ \[\mathcal{R}(t)\leq\left[\frac{4a_{max}^{2}Q^{3}(Q+1)\ln(t)}{(\Delta_{min})^{2}}+\frac{\pi^{2}}{3}Q^{2}+Q\right]\Delta_{max}, \tag{9}\] _where \(a_{max}\) is defined as \(\max\limits_{\mathbf{A}\in\mathcal{F}}\max\limits_{i,j}a_{ij}\). 
Besides, \(\Delta_{min}=\min\limits_{\mathbf{R_{A}}<\mathbf{R^{*}}}(\mathbf{R^{*}}- \mathbf{R_{A}})\) and \(\Delta_{max}=\max\limits_{\mathbf{R_{A}}<\mathbf{R^{*}}}(\mathbf{R^{*}}- \mathbf{R_{A}})\) are the minimum and maximum difference to the reward obtained with perfect knowledge of the users' preferences, respectively._ Proof.: The proof is obtained following Theorem 2 of [30]. However, as shown in Theorem (1), finding optimal solution for winner determination problem (\(WiBS\) problem Eqs. (2)-(3e)) is NP-Hard problem. Therefore, we devise a bipartite matching-based heuristic for winning bid determination with polynomial time complexity for worker-task assignment. ### _Winning Bid Selection using Weighted Bipartite Matching_ The \(WiBS\) problem formulation in Eq. (3) is an extension of one-to-one weighted matching. However, this matching has to select minimum weighted edges for task allocation with energy budget constraints for V2G tasks. Therefore, we hereby develop a heuristic inspired by bipartite minimum weighted matching which can be solved in polynomial time using Karp's algorithm [31]. To satisfy the energy budget constraint, we employ iterative matching that removes the highest weighted edges from the previous matching until the budget is met. Simply put, the algorithm runs the minimum weighted matching and if it does not satisfy the budget constraints, removes first \(z\) highest weighted edges connected to non-V2G tasks from the previous matching and then runs another round of matching until the feasible solution is found. ``` Input : Sets of Workers (\(\mathcal{W}\)) and Spatial Tasks (\(\mathcal{S}\)), Bids (\(\mathcal{B}\)) Output : Winning bids with final pay (\(\mathbf{P}\)) / Initialization */ 1\(\Phi_{out}=\{\mathcal{W}\cup\mathcal{S},E_{\Phi}=\emptyset\};\Phi_{temp}= \emptyset;P=\emptyset\) ; / Generate bipartite graph \(G\) */ 2\(\forall s_{j}\in\mathcal{S}\), if \(g(z_{j})=1\)then\(V\leftarrow\{s_{j}\}\)else\(R\leftarrow\{s_{j}\}\); 3\(\forall w_{k}\in\mathcal{W}\), collect their respective bids \(\mathcal{B}_{i}\) ; 4 Build Bipartite Graph \(G=\{\mathcal{W}\cup\mathcal{S},E_{G}=\emptyset\}\) ; 5for each\(w_{i}\in\mathcal{W},s_{j}\in\mathcal{S}\)do 6\(\mathbf{I}\) if\(b_{ij}>0\)then Add edge \((w_{i},s_{j})\) to \(E_{G}\) with weight, \(b_{ij}\); 7 8 end if 9\(\mathbf{\ast}\) Run minimum weighted brt matching until termination */ 10while\((w_{i},s_{j})\in E_{out}\)do 11\(E_{out}\leftarrow\)Perform Minimum Weighted Bipartite Matching on \(G\); 12 Output graph \(\Phi_{out}=\{\mathcal{W}\cup\mathcal{S},E_{out}\}\), where \(E_{out}\subseteq E_{G}\) ; / Remove edges if V2G energy budget is not met, and run MWM on reduced \(G\) again */ 13if\(\sum\limits_{(w_{i},s_{j})\in E_{out}}g(z_{j})l_{ij}<\mathcal{E}\)then 14\(Z\leftarrow\)Select the first \(z\) highest weight edges \(\in\Phi_{out}\) and \(R\) ; 15if\(Z\neq\emptyset\)then Remove all edges \(\in Z\) from \(G\) and \(\Phi_{out}\) else\(\Phi_{temp}=\Phi_{out}\); 16 17 end if 18 19 end for 20\(\mathbf{q}^{+}=E_{out}\); 21 22\(\mathbf{\ast}\) Final Payment and Task Assignment */ 23\(\forall w_{k}\in\mathcal{W},P_{k}\leftarrow\) Second to the selected bid \(b_{kj}\); 24 Assign the tasks to winning workers along with final price \(\mathbf{P}\); ``` **Algorithm 2**Bipartite Matching-based Winner selection (BMW) This algorithm called _Bipartite Matching-based Winner selection (BMW)_ is presented in Alg. 2. 
\(BMW\) takes set of available workers \(\mathcal{W}\), tasks \(\mathcal{S}\), and the set of bids \(\mathcal{B}\) as input and finds the winning bids with final pay \(P\) as the output. In line \(1\), the algorithm initializes the output graph \(\Phi_{out}\), a temporary graph \(\Phi_{temp}\) for iterative matching purpose, and \(P\). Then it creates a separate sets for V2G and non-V2G tasks as sets \(V\) and \(R\) in line \(2\) and collects the bids from all workers (line 3). With the information on bids, \(BMW\) generates a bipartite graph \(G\) between bipartite sets of workers \(\mathcal{W}\) and tasks \(\mathcal{S}\), and adds edges between those nodes that have non-zero bids i.e. worker \(w_{i}\) with non-zero bid \(b_{ij}\) is connected with task \(s_{j}\) (lines \(4-7\)). Now, it runs a bipartite matching iteratively with while loop in lines \(8-15\). Initially, both of the conditions for while loop are true and therefore the algorithm runs first round of Minimum Weighted Bipartite Matching on graph \(G\) (line \(9\)). It then assigns the matched graph to the output graph \(\Phi_{out}\) (line \(10\)) and checks if the energy budget for V2G tasks is satisfied (line \(11\)). If it is met in the first round, it breaks out of the while loop and determines final payment and task assignment. If it is not met, BMW removes the first \(z\) highest weighted edges in \(\Phi_{out}\) from \(G\) that just meet the remaining of energy budget not met (line \(12-13\)). Then, since both of the conditions are still true, the algorithm runs another round of matching on reduced graph \(G\). Eventually the final matching in output graph \(\Phi_{out}\) is used as winning task assignments with final payment as per the bid (line \(16-18\)). **Theorem 3**.: _The time complexity of the \(BMW\) algorithm is \(O(|\mathcal{W}|.|\mathcal{S}|^{2}.log(|\mathcal{S}|))\)._ Proof.: The complexity is dominated by the \(while\) loop (lines \(10-17\)), executed at max \(|\mathcal{S}|\) times. It involves running minimum weighted full matching as presented in [31], which has run time of \(O(|\mathcal{W}|.|\mathcal{S}|.log(|\mathcal{S}|))\). Therefore, the overall complexity of the BMW is \(O(|\mathcal{W}|.|\mathcal{S}|^{2}.log(|\mathcal{S}|))\). ## VI Experiment In this section, we present the experimental details for the proposed system, comparison approaches and detailed study of performance of the algorithms. ### _Experimental Setup_ Our experimental setup consists of modeling workers, tasks and the simulation platform. In case of workers, we gathered the publicly available data on \(54\) different EV models on battery size, range, charging power and charging speed, and formulated an individual profile for each EV in concern. Similarly for ride-sharing tasks, the high volume taxi trip data of New York City (NYC) from the year of 2013 [32] was used. The V2G tasks were generated from the 15 minutes energy consumption data from 25 NYC residences from PecanStreet [33]. In absence of real dataset on battery swapping tasks, half of the ride-sharing tasks were extracted as the battery swapping tasks, given their similar profile with batteries transported instead of passengers. These tasks are spatial, therefore, we collect the information on locations, distance, and time required to complete the tasks. Furthermore, the simulation platform, e-Uber for crowdsourcing is developed using Python and Gurobi, NetworkX, and PyTorch libraries. 
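Since the platform relies on NetworkX, the minimum weighted bipartite matching step at the core of \(BMW\) (Alg. 2, using Karp's algorithm [31]) can be sketched as follows. This is an illustration rather than the authors' code: identifiers are our own, worker and task ids are assumed to be distinct, and the outer loop that removes high-weight non-V2G edges until the energy budget \(\mathcal{E}\) is met is only indicated in a comment.

```python
import networkx as nx
from networkx.algorithms.bipartite import minimum_weight_full_matching

def bmw_matching_round(worker_ids, task_ids, bids):
    """bids: dict mapping (worker_id, task_id) -> bid value, one entry per submitted
    bid. Returns a worker -> task assignment of minimum total bid weight."""
    G = nx.Graph()
    G.add_nodes_from(worker_ids, bipartite=0)
    G.add_nodes_from(task_ids, bipartite=1)
    for (w, s), b in bids.items():
        G.add_edge(w, s, weight=b)
    # Minimum weighted full matching via Karp's algorithm; raises ValueError when
    # no full matching exists, a case the real heuristic must handle explicitly.
    match = minimum_weight_full_matching(G, top_nodes=worker_ids, weight="weight")
    workers = set(worker_ids)
    return {w: s for w, s in match.items() if w in workers}

# The full BMW loop would check the V2G energy obtained by this matching and, if it
# falls short of the budget E, drop the z highest-weight non-V2G edges and re-match.
```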
We consider a reverse auction period resolution of 15 minutes which corresponds to the standard set by grid for energy trading. This means that every \(15\) minutes the e-Uber algorithm will gather the tasks, push the personalized list of tasks to workers, collect the bids and assign the tasks to EV workers that minimizes the overall cost for the task requesters. We set the search radius for the tasks \(\lambda=10\) km and the maximum length of recommendation list \(K=5\). The energy budget for each \(15\) minutes time period was considered to be total of all \(25\) V2G tasks available. The user preferences were sampled uniformly from the set \(\{0.1,0.4,0.5,0.7,0.9,1.0\}\). The energy, time and location of the EVs are tracked and updated accordingly so as to simulate their real-world trip behavior. If the battery level of the cars fall below minimum level, they are considered for the charging for the next time-step. For comparison approach, we use the task-centric winner selection algorithm as presented in [23] and refer it as \(BG\) for baseline greedy. This approach neither considers user-preference in the problem-setting nor it considers the personalized recommendation system. So for comparison purpose, we augment this method with perfect knowledge-based recommendation system that pushes \(K\) best tasks as recommendation to each workers. Then we implement the algorithm as presented in [23] that sorts the bids from lowest to highest for each tasks and assigns them one by one. Note that this approach may not guarantee a complete matching between workers and tasks as the tasks that are processed towards the end may not have any workers left to choose from because of limited number of bids and greedy selection approach. We use this \(BG\) as our baseline and compare the performance of our algorithms \(CARS\) and \(BMW\) along with their perfect knowledge variation \(PK\) which has the perfect knowledge on the worker preferences and thus do not involve learning, and \(OPT\) optimal solution to \(WiBS\) problem. The ride-share dataset in concern consists of actual ride-fare for specific car. However, we require bids from each vehicle for recommended tasks and a realistic model for bid generation is quite difficult to obtain. Therefore we trained a Deep Neural Network with existing dataset for determining the ride fare of the given ride-sharing tasks, the details of which is presented in the following. ### _Results_ **Bid Generation DNN Model** We used 11 months of taxi data to train and test the DNN model with 80-20 train-test split. The DNN model consisted of 3 hidden layers of sizes \((132,132,64)\). We employed ReLU activation function as well as one-hot encoding for the input features, and set the learning rate to 0.0001. The training was carried out for \(3\) epochs with \(7974\) training batches and batch size of \(64\). Consequently, the average training loss curve presented in Fig. 3, shows that the loss percentage reduces to \(\sim 2.5\%\) after \(\sim 12,000\) trainings. On testing dataset, the bid generation DNN model, generated highly accurate fare prediction with \(96.45\%\)\(R^{2}-\)score. This can also be observed in Fig. 4 which presents a plot of sample of prediction fares and actual fares to show testing accuracy. This DNN model was then deployed in conjunction with the e-Uber to simulate the bidding action by each workers for each recommended tasks in the personalized list. 
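A minimal PyTorch sketch of this bid-generation network is given below, using the reported three hidden layers of sizes 132, 132, and 64 with ReLU activations, a learning rate of 0.0001, and batch size 64; the input dimensionality, the optimizer, and the use of a mean-squared-error regression loss are our assumptions.

```python
import torch
from torch import nn

class BidFareNet(nn.Module):
    """Regresses the fare (bid) for a one-hot encoded ride/energy task."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 132), nn.ReLU(),
            nn.Linear(132, 132), nn.ReLU(),
            nn.Linear(132, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(model, loader, epochs=3, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # optimizer choice assumed
    loss_fn = nn.MSELoss()                              # regression loss assumed
    for _ in range(epochs):
        for features, fare in loader:                   # DataLoader with batch_size=64
            opt.zero_grad()
            loss_fn(model(features), fare).backward()
            opt.step()
```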
In case of V2G tasks, the energy to be supplied by the EV was converted into its distance equivalent and fed into the DNN model along with other input features to get the bids. **Experimental Observations** 1. **Performance over time - Total Cost & # of Tasks**: In the first experimental scenario, we observe the performance of algorithms as a snapshot of objective values over 24 hours (i.e. \(24\times 4=96\) timeslots). We present the objective values from midnight to next midnight as a lineplot in Fig. 5 and cumulative bar plots of objective values (Fig. 6) and total tasks completed (Fig. 7) over a day. Although all the proposed approaches start from the same initial state (except for knowledge on preference), these algorithms may have different successive states since the solution is affected by the matching in previous timeslot, availability of specific workers for next round, and the distance travelled by these workers for previous assignment (or next assignment). Therefore, we employ cumulative objective values and cumulative tasks completed as the metric for a fair comparison of the approaches in Fig. 6. This cumulative objective value reflects the overall quality of task assignment made so far based on the total objective values to achieve the requirement while the cumulative tasks completed present the total number of matches made by the respective approach until the end of that timeslot. As seen in the lineplot Fig. 5 and barplot Fig. 6, the solution generated by baseline greedy approach \(BG\) is the minimum one as it assigns task based on respective cheapest bid available but it doesn't meet the maximum number of matching possible unlike other approaches as shown in Fig.7. Therefore, \(BG\) mostly violates the V2G requirement, meaning it generates infeasible solutions and hence fails for this problem setting. The \(PK-OPT\) produces the best result since it involves solving the \(POTR\) and \(WiBS\) problem optimally with perfect knowledge of the worker preferences. Following it, is the optimal solution \(OPT\) paired with our proposed learning framework for e-Uber, \(CARS\), which performs close to optimal in terms of both objective values and number of tasks completed. Although this approach \(CARS-OPT\) finds optimal solution, it does not have initial knowledge on preferences. Therefore, it generates sub-optimal recommendation list which then affects the solution to \(WiBS\) problem and hence, the overall performance. However, even with online learning framework employed, it produces similar results to the \(PK-OPT\). Also we observe similar pattern with \(PK-BMW\) and \(CARS-BMW\) since they both rely on bipartite matching-based approach to find feasible solution. Since \(PK-BMW\) sends the optimal recommendation to workers for collecting bids, it therefore has higher overall performance compared to \(CARS-BMW\) which learns the preferences over time. The gaps between best performing \(PK-OPT\) and worst performing \(CARS-BMW\) however is less than \(\$150\) which amounts to a price hike of \(\sim\$3/\text{task}\) in the worst case with an average \(50\) tasks for a timeslot as in our case. We observe the cumulative objective values grow almost linearly for all approaches and as expected, the performance observed was better for \(PK-OPT\) followed by \(CARS-OPT\) and then \(PK-BMW\) and finally \(CARS-BMW\). However, the gap in cumulative objective value increased for the bipartite heuristic compared to optimal due to its sub-optimal performance. 
Note that the baseline \(BG\) generates less cumulative objective value but it fails to generate maximal matching as seen in Fig. 7. The number of tasks completed by the proposed approaches exceed 850 more than the \(BG\) in the span \(24\) hours. 2. **Average final price per task and scaling**: In this experiment, we track the average final price per task while scaling the available tasks from \(32\%\) to \(64\%\) and then at \(100\%\). For scaling the tasks, we increase the number of each type of tasks proportionally. The result is plotted in Fig. 8. As the system scales, the average final price per task for all approaches rises since the overall cost for the system also increases with the tasks. However, it is also observed that \(CARS-BMW\) and \(BMW-PK\) suffer more as we scale the system. The margin between these and optimal approaches grows drastically up to \(\sim\$2\). This can be attributed mainly to the increased complexity of the problem as number of tasks is increased and hence the bipartite matching-based heuristic finds less efficient solution compared to optimal. The optimal solutions however have nominal increase in their average price per task (\(\sim\$10\)) even with scaling compared to rest. We also study the effect of scaling V2G tasks to the average final price per task in Fig. 10. We observed similar trend to above but with noticeable gap between optimal and heuristic approaches when only \(32\%\) of V2G tasks are available. This results from the sub-optimal performance owing to less number of V2G tasks compared to rest and hence unequal rate of learning the preferences. 3. **Learning accuracy for preferences - MAE**: To study the quality of proposed CMAB-based learning algorithm \(CARS\) in conjunction with optimal and \(BMW\), we use the Mean Absolute Error (MAE) of the learned preferences over time and present them in Fig. 10. Both approaches use same learning algorithm but the solution to \(WiBS\) problem differs and thus affects the learning performance. However, this difference is very negligible. Initially, the MAE is \(0.28\) and then rapidly decreases to less than \(0.05\) for both approaches by \(250\) timesteps. The difference in learning efficacy between \(CARS-OPT\) and \(CARS-BMW\) reduces over the time and is almost same by \(250\) timesteps as seen in the graph. Since by \(500\) timesteps the system has garnered sufficient knowledge on workers preferences, MAE falls to \(0.03\) reflecting the efficacy of proposed CMAB-based preference learning. Furthermore, we present a cumulative reward plot in Fig. 11 that also shows the plots of both learning approaches converge after \(200\) timesteps. 4. **Dependency with \(K\)**: In this experiment, we discuss on the dependency of the performance of our proposed approach with recommendation length \(K\), as presented in Fig. 12. Increasing the number of recommendation \(K\) means that the chance of receiving more bids with good quality from same number of workers at the same time increases. This in turn helps to find better solutions which reduce overall cost of the system. This is also verified from the observation in plot of Fig. 12. As we increase \(K\), the objective values per task over a day's period reduces for all four approaches. 
Although the perfect-knowledge and optimal methods show no significant difference in performance as \(K\) varies, the effect is more pronounced for the bipartite-matching-based \(PK-BMW\) and \(CARS-BMW\), where the learning of preferences benefits from the larger number of bids to choose from as \(K\) increases. However, it should be noted that pushing \(10\) recommendations to a worker at each timestep can be overwhelming, and therefore keeping the recommendation list as short as possible is desirable. ## VII Conclusion e-Uber is a promising crowdsourcing platform for improving the efficiency and sustainability of ride-sharing and energy-sharing services through the use of EVs. It uses a reverse auction mechanism to assign spatial tasks to EV drivers based on their preferences, battery level, and other realistic constraints such as the minimum energy requirement of the grid and one-to-one assignment. To optimize the task recommendation process, the platform incorporates user behavioral models including worker preferences and bounded rationality. However, as these preferences are not known _a priori_, e-Uber uses a reinforcement learning framework, the combinatorial multi-armed bandit, to learn the preferences at run-time from worker feedback. We propose the \(CARS\) algorithm, which finds optimal solutions to both the \(POTR\) and \(WiBS\) problems. Since the \(WiBS\) problem is NP-hard, we also propose a bipartite matching-based heuristic, called \(BMW\), that finds a feasible winner selection while meeting the minimum V2G energy requirement. Experimental results and simulations demonstrate the effectiveness of e-Uber's approaches, which outperform the baseline algorithm by serving more than 850 additional tasks within \(24\) hours of simulation. On top of that, the baseline often fails to find a feasible solution, rendering it inapplicable in this problem setting. Future research could focus on implementing and evaluating e-Uber in real-world settings. This includes assessing the impact of different task recommendation and decision prediction algorithms, as well as integrating new features such as real-time traffic and energy data and dynamic pricing. By exploring these areas, e-Uber has the potential to significantly improve the efficiency and sustainability of ride-sharing and energy-sharing services through the use of EVs. ## Acknowledgment This work is supported by the NSF grant EPCN-1936131 and NSF CAREER grant CPS-1943035.
The sharing-economy-based business model has seen success in the transportation and accommodation sectors through companies such as Uber and Airbnb. It holds strong potential for application to energy systems, with modalities such as peer-to-peer (P2P) energy trading, Electric Vehicle (EV)-based Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V), and Battery Swapping Technology (BST). In this work, we exploit the increasing diffusion of EVs to realize e-Uber, a crowdsourcing platform that jointly enables ride-sharing and energy-sharing through V2G and BST. e-Uber leverages spatial crowdsourcing, reinforcement learning, and reverse auction theory. Specifically, reinforcement learning is used to understand drivers' preferences towards different ride-sharing and energy-sharing tasks.
2302.14816
Monocular Depth Estimation using Diffusion Models
We formulate monocular depth estimation using denoising diffusion models, inspired by their recent successes in high fidelity image generation. To that end, we introduce innovations to address problems arising due to noisy, incomplete depth maps in training data, including step-unrolled denoising diffusion, an $L_1$ loss, and depth infilling during training. To cope with the limited availability of data for supervised training, we leverage pre-training on self-supervised image-to-image translation tasks. Despite the simplicity of the approach, with a generic loss and architecture, our DepthGen model achieves SOTA performance on the indoor NYU dataset, and near SOTA results on the outdoor KITTI dataset. Further, with a multimodal posterior, DepthGen naturally represents depth ambiguity (e.g., from transparent surfaces), and its zero-shot performance combined with depth imputation, enable a simple but effective text-to-3D pipeline. Project page: https://depth-gen.github.io
Saurabh Saxena, Abhishek Kar, Mohammad Norouzi, David J. Fleet
2023-02-28T18:08:21
http://arxiv.org/abs/2302.14816v1
# Monocular Depth Estimation using Diffusion Models ###### Abstract We formulate monocular depth estimation using denoising diffusion models, inspired by their recent successes in high fidelity image generation. To that end, we introduce innovations to address problems arising due to noisy, incomplete depth maps in training data, including _step-unrolled denoising diffusion_, an \(L_{1}\) loss, and depth infilling during training. To cope with the limited availability of data for supervised training, we leverage pre-training on self-supervised image-to-image translation tasks. Despite the simplicity of the approach, with a generic loss and architecture, our _DepthGen_ model achieves SOTA performance on the indoor NYU dataset, and near SOTA results on the outdoor KITTI dataset. Further, with a multimodal posterior, DepthGen naturally represents depth ambiguity (e.g., from transparent surfaces), and its zero-shot performance combined with depth imputation, enable a simple but effective text-to-3D pipeline. Project page: [https://depth-gen.github.io](https://depth-gen.github.io) Machine Learning, ICML ## 1 Introduction Diffusion probabilistic models have emerged as a powerful family of generative models for high fidelity image synthesis, capturing remarkably rich knowledge about the visual world (Sohl-Dickstein et al., 2015; Ho et al., 2020; Ramesh et al., 2022; Saharia et al., 2022). Given their impressive generative capabilities, it is natural to ask to what extent are these models effective for image to image vision tasks like segmentation, optical flow or depth estimation? Here, we adapt diffusion models to the problem of monocular depth estimation and investigate their effectiveness in the context of a large body of prior work. We demonstrate state of the art performance on benchmarks while also enabling multi-modal inference to resolve depth ambiguities, and exploiting depth imputation for text to 3D generation. Two key issues in training diffusion models for monocular depth inference concern the amount and quality of available training data. First, much of the existing data is noisy and incomplete (e.g., see Figs. 3 and 9). This presents a challenge for the conventional training framework and iterative sampling in diffusion models, leading to a problematic distribution shift between training and testing. To mitigate these issues we propose the use of an \(L_{1}\) loss for robustness, infilling missing depth values during training, and the introduction of _step-unrolled denoising diffusion_. Given the limited availability of labelled training data, we also consider the use of self-supervised pre-training. This leverages the strong performance of diffusion models on tasks like colorization and inpainting (e.g., Saharia et al., 2022), capturing rich image structure that may transfer to other tasks. Accordingly, we propose a training pipeline comprising multi-task self-supervised pre-training followed by supervised fine-tuning. The model can then be used zero-shot or it can be further fine-tuned for a specific domain. The resulting model, _DepthGen_, outperforms SOTA baselines on the indoor NYU dataset, and is competitive on KITTI. Ablations show that unsupervised pre-training, depth infilling, the \(L_{1}\) loss, and step-unrolled denoising diffusion all significantly improve performance. 
As a probabilistic model, DepthGen has other attractive properties: With its ability to represent multi-modal distributions, we find that it can resolve depth ambiguities, e.g., due to reflective or transparent surfaces. Given the ease of imputation with diffusion models, DepthGen can also be used to infer missing depth values. We exploit this property, along with its zero-shot capability and existing text to image models to build a simple but effective framework for text to 3D scene generation and novel view synthesis. In summary, our contributions are as follows: 1. We introduce DepthGen, a diffusion model for monocular depth estimation, comprising self-supervised pre-training and supervised fine-tuning. Without specialized loss functions or architectures, it achieves SOTA relative error of 0.074 on the NYU benchmark. 2. To train diffusion models on noisy, incomplete depth data, we advocate the use of an \(L_{1}\) loss, depth infilling, and step-unrolled denoising diffusion (SUD) to reduce latent distribution shift between training and inference. 3. We show that DepthGen enables multimodal depth inference, and imputation of missing depths e.g, for text-to-3D generation and novel view synthesis. ## 2 Related Work **Monocular depth estimation** is essential for many vision applications (Jing et al., 2022; Zhou et al., 2019). And recent progress has been impressive, with the development of specialized loss functions and architectures (e.g., Saxena et al., 2005, 2009; Eigen et al., 2014; Eigen and Fergus, 2014; Laina et al., 2016; Cao et al., 2016; Fu et al., 2018; Bhat et al., 2021; Li et al., 2022; Agarwal and Arora, 2022). We build on this rich literature, but with a simple, generic architecture, leveraging recent advances in generative models. Prior work has shown that self-supervised tasks like colorization (Zhang et al., 2016; Larsson et al., 2016) serve as effective pre-training for downstream vision tasks. This motivates the choice to initialize our model with Palette (Saharia et al., 2022) style multi-task pre-training on the self-supervised image-to-image translation tasks. Self-supervised training using masked prediction has also recently been found to be particularly effective (Xie et al., 2022), with subsequent work, concurrent to ours, establishing the current SOTA (Ning et al., 2023). Our findings also support self-supervised pre-training, albeit with diffusion-based image-to-image translation, and we establish a new SOTA while also representing multi-modality and supporting zero-shot depth completion. Large-scale in-domain pre-training has also been effective for depth estimation (Ranftl et al., 2019, 2021; Ren and Lee, 2017), which we find to be the case here as well. **Diffusion models** have excelled at image generation, including unconditional and class-conditional generation (Dharilwal and Nichol, 2022; Ho et al., 2022), image-to-image translation (Saharia et al., 2022;a), text-to-image synthesis (Rombach et al., 2022; Ramesh et al., 2022; Nichol et al., 2021; Saharia et al., 2022), and text-guided image editing (Brooks et al., 2022; Wang et al., 2022; Hertz et al., 2022; Meng et al., 2021). Despite this success, they have not been widely applied to vision tasks, except for recent work on image enhancement (Saharia et al., 2022), used here for pre-training, and work on panoptic segmentation (Chen et al., 2022). To the best of our knowledge, ours is the first to apply diffusion models to monocular depth estimation. 
Also related to our work are diffusion models for view synthesis from multi-view image data (Watson et al., 2022), generative models for point cloud data (Nichol et al., 2022), text-to-3D generative models (Poole et al., 2022) and models for depth-aware novel view synthesis (Rockwell et al., 2021; Liu et al., 2021). While work on 3D generative models is exciting, our primary interest here is monocular depth estimation. ## 3 DepthGen ### Background Diffusion models are latent-variable generative models trained to transform a sample of a Gaussian noise into a sample from a data distribution (Sohl-Dickstein et al., 2015; Ho et al., 2020). They comprise a _forward process_ that gradually annihilates data by adding noise, as 'time' \(t\) increases from 0 to 1, and a learned _generative process_ that reverses the forward process, starting from a sample of random noise at \(t=1\) and incrementally adding structure (attenuating noise) as \(t\) decreases to 0. A conditional diffusion model conditions the steps of the reverse process. In the case of depth estimation, our conditioning signal is an RGB image, \(\mathbf{x}\), and the target is a conditional distribution over depth maps, \(p(\mathbf{y}\,|\,\mathbf{x})\). Central to the model is a denoising network \(f_{\theta}\) that is trained to take a noisy sample at some time-step \(t\), and predict a less noisy sample. Using Gaussian noise in the forward process, one can express the training objective over the sequence of transitions (as \(t\) slowly decreases) as a sum of non-linear regression losses, i.e., \[\mathbb{E}_{(\mathbf{x},\,\mathbf{y})}\,\mathbb{E}_{(t,\,\mathbf{\epsilon})}\bigg{\|}f_{ \theta}(\mathbf{x},\underbrace{\sqrt{\gamma_{t}}\,\mathbf{y}+\sqrt{1-\gamma_{t}}\, \mathbf{\epsilon}}_{\mathbf{y_{t}}},\,t)-\mathbf{\epsilon}\bigg{\|}_{2}^{2} \tag{1}\] where \(\mathbf{\epsilon}\sim\mathcal{N}(0,I)\), \(t\sim\mathcal{U}(0,1)\), and where \(\gamma_{t}>0\) is computed with a pre-determined noise schedule. For inference (i.e., sampling), one draws a random noise sample \(\mathbf{y}_{1}\), and then iteratively uses \(f_{\theta}\) to estimate the noise, from which one can compute the next latent sample \(\mathbf{y}_{s}\), for \(s<t\). Figure 1: Training Architecture. Given a groundtruth depth map, we first infill missing depth using nearest neighbor interpolation. Then, following standard diffusion training, we add noise to the depth map and train a neural network to predict the noise given the RGB image and noisy depth map. During finetuning, we unroll one step of the forward pass and replace the groundtruth depth map with the prediction. ### Self-Supervised Pre-Training DepthGen training comprises self-supervised pre-training, then supervised training on RGB-D data. The pre-trained model is a self-supervised multi-task diffusion model. Following (Sahara et al., 2022), we train a Palette model from scratch on four image-to-image translation tasks, i.e., colorization, inpainting, uncropping and JPEG artifact removal. ### Supervised training with noisy, incomplete depth Following pre-training, and with minor modifications to the architecture (see Sec. 4.2), training continues on paired RGB and depth data. While straightforward conceptually, the training datasets available for depth estimation present substantial challenges. The depth maps in particular are noisy and often contain regions with missing depth values. 
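For reference, the conditional denoising objective of Eq. (1) can be written in a few lines of PyTorch. The sketch below assumes an \(\epsilon\)-prediction network \(f_{\theta}(\mathbf{x},\mathbf{y}_{t},t)\) and a noise-schedule function \(\gamma(t)\) as interfaces, and deliberately omits the masking and infilling machinery discussed next.

```python
import torch

def denoising_loss(f_theta, gamma, rgb, depth):
    """One training step of the Eq. (1) objective: sample t and noise, corrupt the
    depth map, and regress the noise (L2 here; an L1 variant is discussed later)."""
    b = depth.shape[0]
    t = torch.rand(b, device=depth.device)             # t ~ U(0, 1)
    g = gamma(t).view(b, 1, 1, 1)                       # gamma_t from the noise schedule
    eps = torch.randn_like(depth)                       # eps ~ N(0, I)
    y_t = g.sqrt() * depth + (1.0 - g).sqrt() * eps     # noisy depth latent
    eps_pred = f_theta(rgb, y_t, t)                     # conditioned on the RGB image
    return ((eps_pred - eps) ** 2).mean()
```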
The various causes for such _holes_ are due to highly reflective surfaces, light absorbing surfaces (Stommel et al., 2014) or regions outside the sensor's range of measurement. Holes are largely inconsequential for simple feed-forward nets or regression models, since one could only backpropagate the loss from the subset of pixels with known depth values, ignoring those with missing depth. For diffusion models, however, such corruption of the training data is problematic. Diffusion models perform inference through iterative refinement - in our case, of a depth map \(\mathbf{y}\) conditioned on an RGB image \(\mathbf{x}\). It starts with a sample of Gaussian noise \(\mathbf{y}_{1}\), and terminates with a sample from the predictive distribution \(p(\mathbf{y}_{0}\,|\,\mathbf{x})\). A refinement step from time \(t\) to \(s\), with \(s\!<\!t\), proceeds by sampling from the parameterized distribution \(p_{\theta}(\mathbf{y}_{s}\,|\,\mathbf{y}_{t},\mathbf{x})\). Simply put, during inference, each step operates on the output from the previous step. In contrast, at training the different steps are somewhat decoupled (see Eqn. 1), where the denoising network operates on a noisy version of the ground truth depth map instead of the output of the previous iteration (reminiscent of teaching forcing in training RNNs). This introduces a distribution shift between training and inference, since the marginal distribution over noisy training depth maps _with holes_ may differ significantly from the distribution of noisy depths at inference time, which should ideally (since we do not learn the distribution of holes in the loss) be a noisy version of the _true_, complete depth maps (from a perfect, noiseless sensor for instance). This has a significant negative impact on model performance. This problem is further exacerbated by structured or heavy-tailed noise in training depth maps. We find that these problems are effectively mitigated with the following modifications during training: **Depth interpolation.** To reduce distribution shift between training and inference we impute missing depth values. We explored several ways to accomplish this, including various interpolation schemes, and using DepthGen (trained with nearest neighbor interpolation infilling) to infill missing depth. But, empirically, we found that two straightforward steps performed as well as more sophisticated approaches. In particular, we find that nearest neighbor interpolation is sufficient to impute missing depths in indoor training data. For outdoor data we continue to use nearest neighbor interpolation, except for sky regions, as they are often large and are much further from the camera than adjacent objects in the image. We use an off-the-shelf sky segmenter (Liba et al., 2020), and then set all sky pixels to be the maximum modeled depth (here, 80m). Despite the imputation of missing depths, we note that the training loss is only computed at pixels with known (vs infilled) depth. **Step-unrolled Denoising Diffusion.** Another approach to tackling the distribution shift in the latent marginal distribution of \(y_{t}\) between training and inference is to construct \(y_{t}\) using the model's output instead of the ground truth depth. One can do this by slightly modifying the training procedure (Algorithm 1) to run one forward pass of the model and build \(y_{t}\) by adding noise to the model's output rather than the training depth map. We do not propagate gradients for this forward pass. 
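Concretely, the nearest-neighbor infilling and the unrolled construction of \(\mathbf{y}_t\) just described can be sketched as follows, mirroring Algorithm 1 below; \(f_{\theta}\) and the noise schedule \(\gamma\) are assumed interfaces, and this is an illustration rather than the released implementation.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def nn_infill(depth_hw):
    """Fill missing depth (encoded as 0) in one H x W map with the nearest valid value;
    sky pixels in outdoor data would instead be set to the maximum modeled depth (80 m)."""
    d = np.asarray(depth_hw, dtype=np.float32).copy()
    invalid = d <= 0.0
    if invalid.any():
        # index of the nearest valid pixel for every location
        idx = distance_transform_edt(invalid, return_distances=False, return_indices=True)
        d = d[tuple(idx)]
    return d

def unrolled_latent(f_theta, gamma, rgb, depth, t):
    """Step-unrolled construction of (y_t, noise target): one gradient-free forward pass
    replaces the ground-truth depth before re-noising."""
    g = gamma(t).view(-1, 1, 1, 1)
    eps = torch.randn_like(depth)
    y_t = g.sqrt() * depth + (1 - g).sqrt() * eps
    with torch.no_grad():                                   # no gradients through the unroll
        eps_pred = f_theta(rgb, y_t, t)
    y_pred = (y_t - (1 - g).sqrt() * eps_pred) / g.sqrt()   # model's current depth estimate
    y_t = g.sqrt() * y_pred + (1 - g).sqrt() * eps          # latent built from the prediction
    target = (y_t - g.sqrt() * depth) / (1 - g).sqrt()      # noise target w.r.t. ground truth
    return y_t, target
```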
We find that this slows down training by about 15% on a TPU v4. We refer to this as _step-unrolled denoising diffusion (SUD)_. ``` Input: rgb image \(x\), depth map \(y\) \(t\gets U(0,1)\) \(\epsilon\gets N(0,1)\) \(valid\_mask=y>0\) \(y=fill\_holes(y)\) \(y_{t}=\sqrt{\gamma_{t}}*y+\sqrt{1-\gamma_{t}}*\epsilon\) if\(unroll\_step\)then \(\epsilon_{pred}=stop\_grad(f_{\theta}(x,y_{t},t))\) \(y_{pred}=(y_{t}-\sqrt{1-\gamma_{t}}*\epsilon_{pred})/\sqrt{\gamma_{t}}\) \(y_{t}=\sqrt{\gamma_{t}}*y_{pred}+\sqrt{1-\gamma_{t}}*\epsilon\) \(\epsilon=(y_{t}-\sqrt{\gamma_{t}}*y)/\sqrt{1-\gamma_{t}}\) endif \(\epsilon_{pred}=f_{\theta}(x,y_{t},t)\) \(loss=reduce\_mean(|\epsilon-\epsilon_{pred}|[valid\_mask])\) ``` **Algorithm 1** Train step w/ infilling and SUD. We perform SUD during fine-tuning only, not during supervised depth pre-training. Early in training the depth predictions are likely inaccurate. So the latent marginals over the noisy training depth maps would be much closer to the desired _true_ marginals than those produced by adding noise to the model's outputs. Hence, doing SUD early in supervised pre-training is not recommended. One might consider the use of a curriculum for gradually introducing SUD in the later stages of supervised pre-training, but this also introduces additional hyper-parameters, so we simply invoke SUD during fine-tuning, and leave an exploration of curricula to future work. This problem of training / inference distribution shift resembles that of _exposure bias_(Ranzato et al., 2016) in autoregressive models, for which the mismatch is caused by _teacher forcing_ during training (Williams and Zipser, 1989). Several solutions have been proposed for this problem in the literature (Lamb et al., 2016; Yu et al., 2016; Bengio et al., 2015). SUD also closely resembles the approach in (Savinov et al., 2022) where they perform step-unrolling for training denoising autoencoders on text. Finally, we note that (Ning et al., 2023) faced a similar problem when training a vector-quantizer on depth data. They work around it by synthetically adding more holes following a carefully chosen masking ratio. In comparison, we prefer our approach since nearest neighbor infilling is hyper-parameter free and step-unrolled denoising diffusion could be more generally applicable to other tasks with sparse data. \(L_{1}\)**Loss.** While the \(L_{2}\) loss in Eqn. 1 is appropriate for noise-free training data with additive Gaussian noise, good performance has been reported with an \(L_{1}\) loss during training for image-to-image translation models (Saharia et al., 2022). Given the possibility of substantial noise in depth data, especially for large depths and near holes, we hypothesize that the robustness afforded by the \(L_{1}\) loss may also be useful in training RGB-to-depth diffusion models. ## 4 Experiments ### Datasets For unsupervised pre-training, we use the ImageNet-1K (Deng et al., 2009) and Places365 (Zhou et al., 2017) datasets and train on the self-supervised tasks of colorization, inpainting, uncropping, and JPEG decompression, following (Saharia et al., 2022). **Indoor model.** For supervised image-to-depth pre-training of the indoor model we use the following two datasets (with dataset mixing at the batch level): _ScanNet_(Dai et al., 2017) is a dataset of 2.5M images captured using a Kinect v1-like sensor. It provides depth maps at \(640\times 480\) and RGB images at \(1296\times 968\). 
_SceneNet RGB-D_(McCormac et al., 2016) is a synthetic dataset of 5M images generated by rendering ShapeNet (Chang et al., 2015) objects in scenes from SceneNet (Handa et al., 2015) at a resolution of \(320\times 240\). For indoor fine-tuning and evaluation we use _NYU depth v2_(Silberman et al., 2012), a commonly used dataset for evaluating indoor depth prediction models. It provides aligned image and depth maps at \(640\times 480\) resolution. We use the official split consisting of 50k images for training and 654 images for evaluation. The predicted depth maps from our model are resized to the full resolution using bilinear up-sampling before evaluation. We evaluate on a cropped region proposed by (Eigen et al., 2014) following prior work. **Outdoor model.** For outdoor model training we use the _Waymo Open Dataset_(Sun et al., 2020), a large-scale driving dataset consisting of about 200k frames. Each frame provides RGB images from 5 cameras and LiDAR maps. We use the RGB images from the FRONT, FRONT_LEFT and FRONT_RIGHT cameras and only the TOP LiDAR to build about 600k aligned RGB-depth map pairs. For subsequent fine-tuning and evaluation, we use _KITTI_(Geiger et al., 2013), an outdoor driving dataset which provides RGB images and LiDAR scans at resolutions close to \(1226\times 370\). We use the training/test split proposed by (Eigen et al., 2014), comprising 26k training images and 652 test images. The predicted depth from DepthGen is up-sampled to the full resolution using bilinear interpolation before evaluation. We evaluate on a cropped region proposed in (Garg et al., 2016) following prior work. **Data Augmentation and Preprocessing.** We use random horizontal flip data augmentation for supervised depth training, which is common in prior work. Where needed, images and depth maps are resized using bilinear interpolation to the model's resolution for training. Diffusion models expect inputs and generate outputs in the range \([-1,1]\). We therefore normalize depth maps to this range, using a maximum depth of 10 meters for the indoor model and 80 meters for the outdoor model. ### Architecture The predominant architecture for diffusion models is the U-Net developed for the DDPM model (Ho et al., 2020), and later improved in several respects (Nichol and Dhariwal, 2021; Song et al., 2021; Dhariwal and Nichol, 2022). For DepthGen, we adapt the _Efficient U-Net_ architecture that was developed for Imagen (Saharia et al., 2022). The Efficient U-Net architecture is more efficient than the U-Nets used in prior work, because it has fewer self-attention layers, fewer parameters and less computation at higher resolutions, along with other adjustments that make it well suited to training medium resolution diffusion models. We make several minor changes to this architecture to adapt it for image-to-depth models. We drop the text cross-attention layers but keep the self-attention layer. Efficient U-Net has six input and three output channels, since the target is an RGB image (the input consists of a 3-channel source RGB image and a 3-channel noisy target image concatenated along the channel dimension). For depth models, since we have a scalar output image, we modify the architecture to have four input channels and one output channel. Note that this means we need to reinitialize the input and output convolutional kernels before the supervised depth pretraining stage. **Resolution.** Our re-trained Palette model was trained on images at a resolution of \(256\times 256\).
For training depth models we choose resolutions that are close to this while preserving the aspect ratios of the original depth training datasets. The indoor model is trained at \(320\times 240\). For Waymo we use \(384\times 256\) and for KITTI \(416\times 128\). The model does not contain learned positional embeddings so it can be easily pretrained and finetuned at different resolutions. ### Hyper-parameters The self-supervised model is trained for 2.8M steps with an \(L_{2}\) loss and a mini-batch size of 512. Other hyper-parameters are similar to those in the original Palette paper. The depth models are trained with \(L_{1}\) loss. We use a constant learning rate of 1e-4 during supervised depth pretraining but switch to a slightly lower learning rate of 3e-5 during fine-tuning which we find achieves slightly better results. We do learning rate warm-up over 10k steps for all models. All depth models are trained with a smaller mini-batch size of 64. The indoor depth model is trained on a mix of ScanNet and SceneNet RGBD for 2M steps and then fine-tuned on NYU for 50k steps. The outdoor depth model is trained on Waymo for 0.9M steps and fine-tuned on KITTI for 50k steps. Other details, like the optimizer and the use of EMA are similar to those outlined in (Saharia et al., 2022). ### Sampler We use the DDPM ancestral sampler (Ho et al., 2020) with 128 denoising steps. Increasing the number of denoising steps further did not greatly improve performance. We have not yet explored progressive distillation (Salimans and Ho, 2022) for faster sampling. We believe the results on distillation for generative image models should transfer well to image-to-depth models, thereby, reducing the gap between the speed of diffusion sampling and single-step depth estimation models. We leave this exploration to future work. ### Evaluation metrics We follow the standard evaluation protocol used in prior work (Li et al., 2022). For both the NYU depth v2 and KITTI datasets we report the absolute relative error (REL), root mean squared error (RMS) and accuracy metrics (\(\delta_{i}<1.25^{i}\) for \(i\in 1,2,3\)). For NYU we additionally report absolute error of log depths (\(log_{10}\)). For KITTI we additionally report the squared relative error (SQ-rel) and root mean squared error of log depths (RMS log). ### Results Table 1 shows the results on NYU depth v2. We achieve a state-of-the-art absolute relative error of 0.074. Table 2 shows results on KITTI, where we perform competitively with prior work. We report results with averaging depth maps from one or more samples. 
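For reference, the metrics in Tables 1 and 2 follow the standard protocol and can be computed as in the NumPy sketch below (our own illustration of the conventional definitions, not code from the paper); `pred` and `gt` are depth maps in meters evaluated over valid ground-truth pixels.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid ground-truth pixels."""
    pred, gt = np.asarray(pred, float).ravel(), np.asarray(gt, float).ravel()
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]

    thresh = np.maximum(pred / gt, gt / pred)
    return {
        "REL":     np.mean(np.abs(pred - gt) / gt),
        "SQ-rel":  np.mean((pred - gt) ** 2 / gt),
        "RMS":     np.sqrt(np.mean((pred - gt) ** 2)),
        "RMS log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "log10":   np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        "delta1":  np.mean(thresh < 1.25),
        "delta2":  np.mean(thresh < 1.25 ** 2),
        "delta3":  np.mean(thresh < 1.25 ** 3),
    }
```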
Note that most prior \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Method & Architecture & \(\delta_{1}\uparrow\) & \(\delta_{2}\uparrow\) & \(\delta_{3}\uparrow\) & REL\(\downarrow\) & RMS\(\downarrow\) & \(log_{10}\downarrow\) \\ \hline DORN [1] & ResNet-101\({}^{\dagger}\) & 0.828 & 0.965 & 0.992 & 0.115 & 0.509 & 0.051 \\ VNL [2] & ResNeXt-101\({}^{\dagger}\) & 0.875 & 0.976 & 0.994 & 0.108 & 0.416 & 0.048 \\ BTS [3] & DenseNet-161\({}^{\dagger}\) & 0.885 & 0.978 & 0.994 & 0.110 & 0.392 & 0.047 \\ DAV [4] & DRN-D-22\({}^{\dagger}\) & 0.882 & 0.980 & 0.996 & 0.108 & 0.412 & – \\ TransDepth [5] & Res-50+ViT-B\({}^{\dagger}\) & 0.900 & 0.983 & 0.996 & 0.106 & 0.365 & 0.045 \\ DPT [6] & Res-50+ViT-B\({}^{\dagger\dagger}\) & 0.904 & 0.988 & 0.998 & 0.110 & 0.357 & 0.045 \\ AdaBins [7] & E-B5+Mini-ViT\({}^{\dagger}\) & 0.903 & 0.984 & _0.997_ & 0.103 & 0.364 & 0.044 \\ BinsFormer [8] & Swin-Large\({}^{\dagger}\) & 0.925 & 0.989 & _0.997_ & 0.094 & 0.330 & 0.040 \\ PixelFormer [9] & Swin-Large\({}^{\dagger}\) & 0.929 & _0.991_ & 0.998 & 0.090 & 0.322 & 0.039 \\ MIM [10] & SwinV2-L\({}^{\top}\) & 0.949 & **0.994** & **0.999** & 0.083 & 0.287 & _0.035_ \\ AiT-P [11]\({}^{*}\) & SwinV2-L\({}^{\top}\) & **0.953** & 0.993 & **0.999** & _0.076_ & **0.279** & 0.033 \\ \hline DepthGen (ours) & & & & & & & \\ samples=1 & Efficient U-Net\({}^{\top\ddagger}\) & 0.944 & 0.986 & 0.995 & 0.075 & 0.324 & **0.032** \\ samples=2 & Efficient U-Net\({}^{\top\ddagger}\) & 0.944 & 0.987 & 0.996 & **0.074** & 0.319 & **0.032** \\ samples=4 & Efficient U-Net\({}^{\top\ddagger}\) & _0.946_ & 0.987 & 0.996 & **0.074** & 0.315 & **0.032** \\ samples=8 & Efficient U-Net\({}^{\top\ddagger}\) & _0.946_ & 0.987 & 0.996 & **0.074** & _0.314_ & **0.032** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of performances on the NYU-Depth-v2 dataset. \(\top\) indicates methods that use unsupervised pretraining, \(\dagger\)indicates supervised pretraining and \(\ddagger\) indicates methods with supervised depth pretraining on auxiliary data. **Best** / second best / _third best_ results are bolded / underlined / italicized respectively. \(\downarrow\) denotes lower is better and \(\uparrow\) denotes higher is better. Baselines: [1] Fu et al. (2018), [2] Yin et al. (2019), [3] Lee et al. (2019), [4] Huynh et al. (2020), [5] Zhao et al. (2021), [6] Ranftl et al. (2021), [7] Bhat et al. (2021), [8] Li et al. (2022), [9] Agarwal and Arora (2022), [10] Xie et al. (2022), [11] Ning et al. (2023). \({}^{*}\) denotes concurrent work. works report average over two samples obtained by left-right reflection of the input image. ### Ablations We find that both pre-training and accounting for missing depth are crucial to model performance. Table 3 shows that both self-supervised pre-training and supervised depth pre-training are important, with supervised depth training having a larger impact, which is to be expected. Table 4 shows that depth infilling is extremely important for the outdoor KITTI dataset. It has less impact on NYU, which is understandable since KITTI has sparser depth maps. In the absence of filled depth maps, step-unrolled denoising diffusion dramatically improves results especially on KITTI. Even with depth infilling, SUD consistently improves performance for both indoor and outdoor datasets. Additionally, we ablate the choice of loss function in Table 5. We find that the \(L_{1}\) loss yields better performance than \(L_{2}\), likely because \(L_{1}\) is more robust to noise at larger depths. 
(See Appendix for metrics other than absolute relative error.) Figure 3: Multimodal depth predictions on the KITTI val dataset. Figure 2: Examples of multimodal predictions on the NYU Depth V2 val dataset. Rows 1-2 contain glass doors/windows where the model learns to predict the depth for either the glass surface or the surface behind it. Row 3 has a dark area next to the refrigerator for which the depth is unclear from RGB alone. In row 4 the model hallucinates the reflected door as a bath cabinet, which seems plausible from the RGB image. ### Novel View Synthesis One advantage of diffusion models is the ease with which one can zero-shot impute one part of an image (or depth map) conditioned on the rest of the image (or depth map). Here, we leverage this to build a limited but effective text-to-3D scene generation pipeline. As depicted in Figure 5, we use the Imagen text-to-image model to generate an image, given text \(c\), to which we apply DepthGen (zero-shot) to sample a corresponding depth map. We then move the camera and, following (Liu et al., 2021), render the RGBD point cloud from a new camera pose. Of course this only provides RGB and depth values at a subset of pixels in the new frame since the fields of view are different. Fortunately, the missing pixels are easily inferred using diffusion models (i.e., the Imagen Editor (Wang et al., 2022) and DepthGen). Let \(x_{a}\) and \(y_{a}\) be the RGB and depth values at pixel locations rendered from the new camera pose respectively, and let \(x_{b}\) and \(y_{b}\) correspond to lines of sight not visible in the original frame. We first infer the missing RGB values, i.e., \(p(x_{b}|x_{a},c)\), using the uncropping/inpainting capability of the Imagen Editor. We then use DepthGen to impute the missing depth values, i.e., sampling from \(p(y_{b}|y_{a},[x_{a},x_{b}])\). There are several effective solutions to imputation with diffusion models, including the replacement method in (Song et al., 2021), and the more sophisticated use of reconstruction guidance in (Ho et al., 2022). For simplicity we use the replacement method to sample the unknown depths \(y_{b}\) conditioned on existing depths \(y_{a}\) and the image \(x=[x_{a},x_{b}]\). ## 5 Conclusion We propose a novel approach to monocular depth estimation using denoising diffusion models. We leverage self-supervised image-to-image pre-training, followed by subsequent training on supervised depth data to achieve SOTA results on challenging depth estimation benchmarks. We make several innovations that make it possible to effectively train diffusion models on imperfect training data that are commonplace for depth estimation. We demonstrate the multimodal capability of diffusion models to represent depth uncertainty. And we exploit the ease of imputation during iterative refinement in diffusion models to show how DepthGen can be used for zero-shot depth completion. In combination with text-to-image diffusion models, this enables a simple pipeline for novel view synthesis and text-to-3D. Figure 4: Text to 3D samples. Given a text prompt, an image is first generated using Imagen (first row of first column), after which depth is estimated (second row of first column). Subsequently the camera is moved to reveal new parts of the scene, which are then infilled using an image completion model and DepthGen (which conditions on both the incomplete depth map and the filled image). At each step, newly generated RGBD points are added to a global point cloud which is visualized in the rightmost column. 
See Fig. 6 for more samples. Figure 5: Pipeline for iteratively generating a 3D scene conditioned on the text prompt \(c=\) "A bedroom". See text for details.
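To make the replacement-based imputation of Sec. 4.7 concrete, the sketch below shows how known depths can be re-imposed on the latent at every denoising step. It is a simplified illustration under our own assumptions: a `denoise_step` implementing one reverse transition \(p_{\theta}(\mathbf{y}_{s}\,|\,\mathbf{y}_{t},\mathbf{x})\) and a noise schedule `gamma` are assumed, and this is not the authors' implementation.

```python
import torch

def impute_depth(denoise_step, x, y_known, known_mask, gamma, timesteps):
    """Replacement-method imputation (Song et al., 2021): at each step the
    latent is overwritten with a noised copy of the known depths, so the
    sampler only fills in the unknown region conditioned on the rest."""
    y_t = torch.randn_like(y_known)                      # start from pure noise
    for t, s in zip(timesteps[:-1], timesteps[1:]):      # t decreases towards 0
        g_t = torch.as_tensor(gamma(t), dtype=y_known.dtype)
        noised_known = g_t.sqrt() * y_known + (1 - g_t).sqrt() * torch.randn_like(y_known)
        y_t = torch.where(known_mask, noised_known, y_t)  # replace known region
        y_t = denoise_step(x, y_t, t, s)                  # one reverse step t -> s
    return torch.where(known_mask, y_known, y_t)          # keep exact known values
```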
We build monocular depth estimation on denoising diffusion models, drawing on their recent success in high-fidelity image generation. To address the problems caused by noisy and missing depth maps in the training data, we introduce innovations such as step-unrolled denoising diffusion, an \(L_{1}\) loss, and depth infilling during training. To compensate for the shortage of supervised data, we pre-train on self-supervised image-to-image translation tasks. Despite the simplicity of the approach, with a generic loss and architecture, our DepthGen model achieves SOTA performance on the indoor NYU dataset and near-SOTA performance on the outdoor KITTI dataset. Furthermore, with a multimodal posterior, DepthGen naturally represents depth ambiguities arising from transparent surfaces, and its zero-shot performance combined with depth imputation enables a simple [...]
2306.17630
Navigating Noise: A Study of How Noise Influences Generalisation and Calibration of Neural Networks
Enhancing the generalisation abilities of neural networks (NNs) through integrating noise such as MixUp or Dropout during training has emerged as a powerful and adaptable technique. Despite the proven efficacy of noise in NN training, there is no consensus regarding which noise sources, types and placements yield maximal benefits in generalisation and confidence calibration. This study thoroughly explores diverse noise modalities to evaluate their impacts on NN's generalisation and calibration under in-distribution or out-of-distribution settings, paired with experiments investigating the metric landscapes of the learnt representations across a spectrum of NN architectures, tasks, and datasets. Our study shows that AugMix and weak augmentation exhibit cross-task effectiveness in computer vision, emphasising the need to tailor noise to specific domains. Our findings emphasise the efficacy of combining noises and successful hyperparameter transfer within a single domain but the difficulties in transferring the benefits to other domains. Furthermore, the study underscores the complexity of simultaneously optimising for both generalisation and calibration, emphasising the need for practitioners to carefully consider noise combinations and hyperparameter tuning for optimal performance in specific tasks and datasets.
Martin Ferianc, Ondrej Bohdal, Timothy Hospedales, Miguel Rodrigues
2023-06-30T13:04:26
http://arxiv.org/abs/2306.17630v2
# Impact of Noise on Calibration and Generalisation of Neural Networks ###### Abstract Noise injection and data augmentation strategies have been effective for enhancing the generalisation and robustness of neural networks (NNs). Certain types of noise such as label smoothing and MixUp have also been shown to improve calibration. Since noise can be added in various stages of the NN's training, it motivates the question of when and where the noise is the most effective. We study a variety of noise types to determine how much they improve calibration and generalisation, and under what conditions. More specifically we evaluate various noise-injection strategies in both in-distribution (ID) and out-of-distribution (OOD) scenarios. The findings highlight that activation noise was the most transferable and effective in improving generalisation, while input augmentation noise was prominent in improving calibration on OOD but not necessarily ID data. Machine Learning, ICML 2023 Workshop on Spurious Correlations, Invariance, and Stability. Honolulu, Hawaii, USA. Copyright 2023 by the author(s). Machine Learning, ICML 2023 Workshop on Spurious Correlations, Invariance, and Stability. Honolulu, Hawaii, USA. Copyright 2023 by the author(s). ## 1 Introduction Noise injection methods have emerged as a promising approach to enhance the generalisation of neural networks (NNs) (Srivastava et al., 2014; Neelakantan et al., 2017). Given the importance of noise for Bayesian NNs (BNNs) (Gal and Ghahramani, 2016; Blundell et al., 2015; Welling and Teh, 2011), we hypothesise that noise injections during training of standard NNs can also positively impact their calibration. Calibration refers to the alignment of prediction's accuracy to their confidence (Guo et al., 2017). Examples of noise injection approaches include dropout (Srivastava et al., 2014; Gal and Ghahramani, 2016), label smoothing (Szegedy et al., 2016), MixUp (Zhang et al., 2018), Gaussian noise (Blundell et al., 2015), shrinking and perturbing NN weights (Ash and Adams, 2020), and gradient noise (Neelakantan et al., 2017). By introducing noise during the training, these methods encourage active exploration of the parameter space and can be applied to various components of the network, including the input, targets, activations, gradients and the model itself. In this paper, we aim to provide a fair comparison of noise injection methods during training and investigate their impact on both calibration and generalisation of NNs in a computer vision classification setting. We ensure fairness of the comparison through dedicated hyperparameter optimization per noise type and we examine the transferability of found hyperparameters from one dataset or architecture to another. To robustly evaluate both generalisation and calibration we consider testing the methods on both test in-distribution (ID) and out-of-distribution (OOD) data. The key takeaways from our work are: 1) Activation noise, especially dropout (Srivastava et al., 2014), improves generalisation and marginally also calibration across architectures and datasets. 2) Input augmentation, MixUp (Zhang et al., 2018), improves calibration and generalisation on OOD data but not necessarily ID data. 3) Model noise and gradient noise improve generalisation and calibration, but only to a smaller extent than input or activation noise. 
## 2 Related Work Standard NNs were shown to lack calibration (Guo et al., 2017), motivating the need for approaches focusing on training NNs such that their confidence matches their accuracy. Bayesian NNs (BNNs) (Blundell et al., 2015; Gal and Ghahramani, 2016; Welling and Teh, 2011) and NN ensembles (Lakshminarayanan et al., 2017) are popular approaches for obtaining well-calibrated models, but they are computationally expensive as they require random sampling and multiple forward passes during test time. Alternative methods have been proposed without increasing computational complexity, particularly during training. They include different loss functions (Kumar et al., 2018; Mukhoti et al., 2020; Bohdal et al., 2021) and temperature scaling (Guo et al., 2017). However, these approaches have their own limitations and may not be suitable for all scenarios. On the other hand, most noise injections are applicable to any NN architecture and any task. For **input noise** injection, commonly used are MixUp and Output Diversified Sampling (ODS) methods. MixUp (Zhang et al., 2018) linearly interpolates between two samples and their labels, while ODS (Tashiro et al., 2020) augments the input to diversify predictions and was used in the context of adversarial examples but not calibration. MixUp has been shown to improve calibration and generalisation (Zhang et al., 2022), but its transferability between datasets and architectures has not been explored. Additionally, we investigate naive Gaussian and uniform noise injection, which adds Gaussian or uniform noise to the input during training. In terms of **target noise** injection, label smoothing (Pereyra et al., 2017) and MixUp (Zhang et al., 2018) label interpolation are frequently used. Label smoothing replaces hard targets with soft targets and has already been shown to improve calibration, but not on OOD data (Muller et al., 2019). **Activation noise** injections include Dropout, Gaussian and uniform noise injections. Dropout (Srivastava et al., 2014) randomly sets activations to zero. Gaussian noise injection (Blundell et al., 2015; Camuto et al., 2020; Alemi et al., 2017; Yu et al., 2021) adds Gaussian noise to the activations, while uniform noise injection adds uniform noise. In BNNs, these injections are applied both during training and evaluation, whereas in this work we only apply noise during training. Furthermore, **gradient noise** has been shown to improve generalisation through adding annealed Gaussian noise to the gradients during training (Neelakantan et al., 2017; Welling & Teh, 2011). However, it was not benchmarked on calibration, especially without ensembling weights at different training time-steps. Finally for **model noise** injection, recently Gaussian noise injection via shrinking and perturbing weights (Ash & Adams, 2020) at a given epoch frequency was shown to improve retraining generalisation, but calibration on ID or OOD data was not considered. To the best of our knowledge, the noise injections have been studied 1) separately (Zhang et al., 2022; Muller et al., 2019), 2) orthogonally for generalisation and calibration on ID or OOD data, and 3) without a unified hyperparameter (HP) optimization protocol. This research aims to start the conversation into a comprehensive analysis of the noise injection methods and their relationship to generalisation and calibration, across datasets and NN architectures, providing valuable insights into their effectiveness and practicality. 
## 3 Methodology This study focuses on training a NN with noise perturbations to investigate their impact on NN's accuracy and calibration, identifying which perturbations are helpful and when. The noises are divided between **input**, **target**, **activation**, **gradient** and **model**, and their deployment during training is outlined in Algorithm 1 via blue lines. The probability of applying each noise to a batch out of \(B\) batches is determined by the HP \(p\in[0,1]\), except model noise, which was applied with a selected frequency during the \(T\) training epochs. The noises have associated HPs and tuning ranges. ``` 0: Training dataset \(D=\{(x^{b},y^{b})\}_{b=1}^{B}\), \(B\) batches, learning rate \(\eta\), number of epochs \(T\), weights \(\theta\), operation \(g(\cdot,\theta)\), hidden states \(h^{b}\), hidden depth \(D\), activation \(f(\cdot)\), probability of applying noise to a batch \(p\) 1: Initialize \(\theta\) randomly 2:for\(t=1\) to \(T\)do 3:for\(b=1\) to \(B\)do 4: Randomly select \((x^{b},y^{b})\) from \(D\) 5: Sample \(e\sim U(0,1)\) {If \(e<p\)} 6:Input noise: Modify \(x^{b}\) 7:Target noise: Modify \(y^{b}\) 8:for\(i=1\) to \(D\)do 9:\(h^{b}_{i}=g(h^{b}_{i-1},\theta)\) {Where \(h^{b}_{0}=x^{b}\)} 10:Activation noise: Modify \(h^{b}_{i}\) before activation 11:\(h^{b}_{i}=f(h^{b}_{i})\) 12:endfor 13: Compute predicted output \(\hat{y}^{i}=g(h^{b}_{D},\theta)\) 14: Compute loss \(\mathcal{L}(\hat{y}^{i},y^{i})\) and gradients \(\nabla_{\theta}\mathcal{L}\) 15:Gradient noise: Modify \(\nabla_{\theta}\mathcal{L}\) 16: Update weights: \(\theta\leftarrow\theta-\eta\nabla_{\theta}\mathcal{L}\) 17:endfor 18:if\(t\mod\text{frequency}=0\)and\(t<0.75T\)then 19:Model noise: Modify \(\theta\) 20:endif 21:endfor 22:return\(\theta\) ``` **Algorithm 1** Training of Neural Network with Noise **Input noise:** The input noise consisted of 2 naive variants and 2 variants which tapped into predictions or the targets to compute the noise. The two naive variants consisted of adding Gaussian or uniformly sampled noise \(n\sim U(-\sigma,\sigma);n\sim\mathcal{N}(0,\sigma)\) added to inputs \(x\) with standard deviation \(\sigma\in[1e^{-4},1e^{-1}]\). We considered ODS (Tashiro et al., 2020) with respect to \(\epsilon\in[1e^{-4},1e^{-1}]\) and temperature \(T\in[0.5,5.0]\), and MixUp (Zhang et al., 2018) with \(\alpha\in[0,1]\) which also modified the targets accordingly. **Target noise:** In addition to MixUp we considered a static noise introduced to the labels \(y\) in the form of label smoothing (Muller et al., 2019) with the smoothing factor \(l\in[0,0.25]\). **Activation noise:** The hidden states prior to applying the activation function, \(\{h^{b}_{i}\}_{i=0}^{D}\), where \(D\) is the depth of the net, could be disturbed by 3 types of activation noise: additive Gaussian or Uniform \(n\sim U(-\sigma,\sigma);n\sim\mathcal{N}(0,\sigma)\) with \(\sigma\in[1-4,1e^{-1}]\) as a tunable HP or multiplicative Dropout (Srivastava et al., 2014) that incorporates a dropout rate \(d\in[0,1]\). The activation noise was used prior to an activation \(f(\cdot)\) for all linear or convolutional operations \(g(\cdot,\theta)\) but not in the output layer. **Gradient noise:** The noise applied to the gradients \(\nabla_{\theta}\mathcal{L}\) followed (Neelakantan et al., 2017) with the step size \(\eta\in[0,1]\) and the annealing factor \(\gamma\in[0,1]\). 
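Before turning to the last noise type, the following PyTorch-style sketch shows how the input, target, activation and gradient injections above can be attached to a single training step. It is our own simplified illustration of Algorithm 1 rather than the authors' implementation; the hyperparameter values are arbitrary, and in practice each noise would be tuned and applied individually.

```python
import torch
import torch.nn.functional as F

def noisy_train_step(model, optimiser, x, y, num_classes,
                     p=0.5, mixup_alpha=0.2, smooth=0.1, act_sigma=0.01, grad_eta=0.01):
    """One training step with input, target, activation and gradient noise,
    each applied to the whole batch with probability p (cf. Algorithm 1)."""
    y_soft = F.one_hot(y, num_classes).float()
    if torch.rand(()) < p:                                  # input + target noise: MixUp
        lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample()
        idx = torch.randperm(x.size(0))
        x = lam * x + (1 - lam) * x[idx]
        y_soft = lam * y_soft + (1 - lam) * y_soft[idx]
    if torch.rand(()) < p:                                  # target noise: label smoothing
        y_soft = (1 - smooth) * y_soft + smooth / num_classes

    handles = []                                            # activation noise via hooks
    if torch.rand(()) < p:
        def add_noise(module, inputs, output):
            return output + act_sigma * torch.randn_like(output)
        for m in model.modules():
            # (the paper excludes the output layer; omitted here for brevity)
            if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
                handles.append(m.register_forward_hook(add_noise))

    logits = model(x)
    loss = torch.sum(-y_soft * F.log_softmax(logits, dim=-1), dim=-1).mean()
    optimiser.zero_grad()
    loss.backward()
    for h in handles:
        h.remove()

    if torch.rand(()) < p:                                  # gradient noise (annealing omitted)
        for param in model.parameters():
            if param.grad is not None:
                param.grad += grad_eta * torch.randn_like(param.grad)
    optimiser.step()
    return loss.item()
```

In the actual experiments at most one input or target noise is used at a time, and the noise magnitudes are treated as tunable hyperparameters within the ranges given above.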
**Model noise**: Lastly the model noise follows the idea of shrinking and perturbing the weights \(\theta\)(Ash and Adams, 2020) with a shrink factor \(\mu\in[0.0,1.0]\) and standard deviation \(\sigma\in[0.0,1e^{-3}]\) with frequency of perturbing every frequency \(\in[0,80]\) epochs, except the last 25% of training epochs. ## 4 Experiments SettingsWe first tune the learning rate and L2 regularisation of a no-noise network which are reused when tuning the HPs of each noise injection method on three different combinations: ResNet-18 paired with CIFAR-10 or CIFAR-100 and a fully connected (FC) network paired with SVHN. The tuning is performed using \(1/4\) of the training budget over the course of one day, using model-based Tree-structured Parzen Estimator method (Bergstra et al., 2011). With these settings we are able to evaluate about 40 configurations selected using Bayesian Optimization. Our protocol allows us to optimize the performance of each noise injection method and provide fair comparison. Full experimental details are in Appendix A, including a summary of the identified HPs. To assess the effectiveness of the noise injection methods, we measure their performance using three metrics: Error \([\downarrow,\%]\), Expected Calibration Error (ECE) (Guo et al., 2017) \([\downarrow,\%]\), and Negative Log-Likelihood (NLL) \([\downarrow]\) that we report in Appendix B. These metrics provide insights into the accuracy and its match with the confidence of the NNs' predictions. We evaluate the performance on both the ID test set and an augmented OOD set that includes an average over visual corruptions across 19 categories and 5 severities (Hendrycks and Dietterich, 2019). These corruptions include, for example, adding snow or fog to the image, changing the brightness or saturation of the image or blurring the image. We conduct experiments on a series of deployment scenarios where 1) the tuned HPs are directly used on the tuned dataset and architecture, 2) the architecture is changed but the HPs are kept, 3) the HPs come from a different source dataset. The results presented are averaged across 3 seeds and the best results are in **bold**. ### Analysis Tuned HyperparametersIn this scenario, we evaluate the performance of the noise injection methods when the HPs are tuned specifically for the dataset and architecture. The results for these experiments are in Tables 1 and 2, and they show that activation and input augmentation noises are prominent in improving the accuracy and calibration of the networks across the datasets. Dropout was the most effective for improving ID generalisation in CIFAR-10 and CIFAR-100, while MixUp was the most effective for SVHN. Uniform activation worked the best for improving ID calibration in CIFAR-10 and CIFAR-100, whereas ODS was the best in SVHN. The strong result obtained by ODS on SVHN shows that adversarial attack techniques may be useful also for other uses-cases, including calibration. Interestingly, some of the improvements carried to OOD data, for example, where the error on SVHN or CIFAR-100 was the lowest with MixUp or dropout. However, when considering calibration on OOD data, MixUp was dominant for CIFAR-10 and CIFAR-100. On average, dropout improved generalisation and MixUp improved calibration when considering both ID and OOD data. The naive Gaussian and uniform input noise perturbations did not bring significant improvements. 
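Since the calibration comparisons in these tables rely on ECE, a minimal sketch of the standard binned estimator (Guo et al., 2017) is included here for reference; the use of 15 equal-width confidence bins is our assumption, not a detail taken from the paper.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin-weighted average gap between confidence and accuracy.
    `probs` has shape (N, C) with softmax outputs, `labels` shape (N,)."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    confidences = probs.max(axis=1)
    accuracies = (probs.argmax(axis=1) == labels).astype(float)

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```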
Architecture TransferIn this scenario, we assess the performance of the noise injection methods when the HPs are transferred to a different architecture while keeping the dataset constant. We conduct experiments using SVHN with ResNet-18 with HPs tuned on an FC network. Furthermore, we use HPs tuned for ResNet-18 for both CIFAR-10 and CIFAR-100 and we change the architecture to WideResNet-18. The results are presented in Tables 3 and 4. Considering the performance on ID data, we see that dropout reduced error across architectures and also improved calibration. Contrary to the improvements seen on SVHN when using FC, MixUp did not reduce the error when using ResNet-18 and it even recorded worse performance on OOD data than no noise at all. Switching focus to OOD data, model perturbation moderately improved calibration for CIFAR-100, while activations had a negative impact and led to worse calibration. Even though WideResNet-18 and ResNet-18 are relatively similar, transferring hyperparameters for example for MixUp in CIFAR-100, did not prove efficient as seen in calibration on OOD data which became worse than not using any noise at all. In summary, activation noises, most notably dropout, performed well on improving generalisation on both ID and OOD data and moderately on calibration on ID data. However, no method was able to consistently improve calibration on OOD data after the architecture was changed. Dataset TransferUnder these settings, we investigate the transferability of hyperparameters by evaluating the noise injection methods on the same architectures but using different datasets. Specifically, we evaluate SVHN with ResNet-18 and HPs from CIFAR-100/ResNet-18, CIFAR 10 with ResNet-18 and CIFAR-100/ResNet-18 HPs, and CIFAR-100 with ResNet-18 but with CIFAR-10/ResNet-18 HPs. The results are shown in Tables 5 and 6. For all SVHN, CIFAR-10, CIFAR-100, the most significant error improvements across ID or OOD data were achieved using dropout and Gaussian noise. Interestingly, the activation Gaussian noise was able to improve calibration on both ID and OOD data on CIFAR-100, but not on the other datasets. MixUp has demonstrated varying results, for example on SVHN or CIFAR-10 the calibration on ID data was worse than not using any noise at all, while in CIFAR-100 there was a marginal improvement. Nevertheless on OOD data MixUp was able to improve calibration across all datasets. SummaryThe effectiveness of noise injections varies with the dataset and architecture. Nevertheless, especially in the tuned regime, certain settings of different noises improved both generalisation and calibration. Activation noise injections demonstrated promising results for error reduction across ID data, while input augmentations seemed to be the most effective for OOD data. Dropout was the most effective in improving error on ID or OOD data, and it proved to be transferable across architectures and datasets. MixUp was the best in improving the performance on OOD data in terms of calibration and accuracy, but not necessarily on the ID data. Interestingly, hidden in its mediocrity, model noise was able to marginally improve accuracy and calibration across majority of considered scenarios. Additional evaluation in terms of NLL, in Appendix B, has shown MixUp, dropout and model perturbations to be the most effective. ## Acknowledgements Martin Ferianc was sponsored through a scholarship from the Institute of Communications and Connected Systems at UCL. 
Ondrej Bohdal was supported by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.
Integrating noise such as MixUp or Dropout during training has attracted attention as a powerful and adaptable technique for enhancing the generalisation abilities of neural networks (NNs). Although noise has proven effective in NN training, there is no consensus on which noise sources, types and placements yield the greatest benefits for generalisation and confidence calibration. In this study, we thoroughly investigate diverse noise modalities and evaluate their impact on NN generalisation and calibration under in-distribution and out-of-distribution settings, together with experiments on the metric landscapes of the learnt representations across a range of NN architectures, tasks and datasets. Our results show that AugMix and weak augmentation exhibit cross-task effectiveness in computer vision, leading to the conclusion that noise must be tailored to specific domains. Our findings [...]
2307.00065
Qualitative Prediction of Multi-Agent Spatial Interactions
Deploying service robots in our daily life, whether in restaurants, warehouses or hospitals, calls for the need to reason on the interactions happening in dense and dynamic scenes. In this paper, we present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes, including the use of an intuitive qualitative representation. The proposed solutions take into account static and dynamic context to predict individual interactions. They exploit an input- and a temporal-attention mechanism, and are tested on medium and long-term time horizons. The first two approaches integrate different relations from the so-called Qualitative Trajectory Calculus (QTC) within a state-of-the-art deep neural network to create a symbol-driven neural architecture for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, the output of which is post-processed to predict QTC spatial interactions. Experimental results on a popular robot dataset of challenging crowded scenarios show that the purely data-driven prediction approach generally outperforms the other two. The three approaches were further evaluated on a different but related human scenarios to assess their generalisation capability.
Sariah Mghames, Luca Castri, Marc Hanheide, Nicola Bellotto
2023-06-30T18:08:25
http://arxiv.org/abs/2307.00065v1
# Qualitative Prediction of Multi-Agent Spatial Interactions ###### Abstract Deploying service robots in our daily life, whether in restaurants, warehouses or hospitals, calls for the need to reason on the interactions happening in dense and dynamic scenes. In this paper, we present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes, including the use of an intuitive qualitative representation. The proposed solutions take into account static and dynamic context to predict individual interactions. They exploit an input- and a temporal-attention mechanism, and are tested on medium and long-term time horizons. The first two approaches integrate different relations from the so-called Qualitative Trajectory Calculus (QTC) within a state-of-the-art deep neural network to create a symbol-driven neural architecture for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, the output of which is post-processed to predict QTC spatial interactions. Experimental results on a popular robot dataset of challenging crowded scenarios show that the purely data-driven prediction approach generally outperforms the other two. The three approaches were further evaluated on a different but related human scenarios to assess their generalisation capability. ## I Introduction While service robots are increasingly being deployed in our domestic, healthcare, warehouse, and transportation environments, modeling and predicting the _interactions_ of different agents given their context (i.e. nearby static and dynamic objects, including people) are important requirements for effective human-robot co-existence and intent communication. They can help robots know when, where and how to intervene on the environment, in addition to navigate it safely which is usually accomplished with the classic human motion prediction paradigm. For example, an assistive robot patrolling someone's home or a crowded hospital needs to reason continuously on the relative state of nearby people for approaching and communicating with them, ideally predicting future human (spatial) interactions to optimize its own decision-making. Differently from typical human motion prediction, which is mostly concerned with navigation safety, dealing with multi-agent _interactions_ presents two advantages: from a "social" navigation point of view, interaction prediction facilitates human-robot motion coordination and intent communication; from an "explainable" decision-making point of view, interaction prediction makes an individual's motion behaviour more meaningful in many social contexts. For example, by detecting a group meeting in an office (as in Fig.1), and predicting it will last for a while, the robot does not disturb the people involved. On the other hand, if the robot predicts that an elderly patient is trying to approach it to ask for help, it can use this information to update its initial plan and prioritize the responsiveness to the human's intent. An intuitive approach for multi-agents spatial interaction representations is the qualitative one. In the 2D and 3D spatial domains (e.g. navigation and manipulation, respectively), a qualitative interaction between pairs of agents or body points can be captured by some symbolic representation. One way to model qualitative spatial interactions is by using a qualitative trajectory calculus (QTC) [1]. 
QTC-based models of moving agent pairs can be described by different combinations of QTC symbols that represent spatial relations, e.g. relative distance (moving towards/away), velocity (faster/slower), or orientation (to the left/right) with respect to the central axis joining both agents. QTC-based interaction modeling was presented in [2, 3, 4] for modeling 2D human-robot spatial interactions, with further application to human-aware robot navigation [5, 6]. Differently from the focus of this paper, the authors in [6] used a Bayesian temporal model to study the interaction of a single pair of agents (human-robot), without accounting for the dynamic and/or static context, which limits the prediction performance. An alternative way of representing spatial interactions in a multi-agent scenario is to quantitatively merge all agents in the context to drive a robot navigation stack [7, 8, 9]. These works, though, cannot infer the implicit spatial intent of the agents. Fig. 1: Modeling and predicting multi-agent interactions can improve the decision-making process of social and service robots for helping in heavy and/or health-related tasks, anticipating elders' assistance needs, cafe/restaurant table serving, office duties, etc., while avoiding conversational groups or ordering queues (unless called for it). To the best of our knowledge, there is a gap in the literature regarding the prediction of qualitative (i.e. symbolic) and/or quantitative (i.e. metrical) interactions between multi-agent entities (e.g. human-human, human-robot, human-object, robot-object) given their nearby dynamic and/or static context, which was only partly addressed in [6] for a single human-robot pair. Further investigation in more complex scenarios is therefore necessary to enhance future robot reasoning, mutual intent communication, and reactive or predictive planning. The contribution of this paper is therefore two-fold: (i) addressing the prediction of Multi-Agent Spatial Interactions (MASI) with dynamic and static context-awareness by implementing three new approaches for medium- and long-term interaction prediction, including a QTC-based neural network representation; (ii) experimentally evaluating the proposed frameworks on different scenarios with multiple humans/objects to assess the prediction performance, even under domain-shift conditions. The remainder of the paper is as follows: Sec. II presents an overview of the related work; Sec. III explains the approach adopted to model and predict spatial interactions in dense scenes; Sec. IV illustrates and discusses the results from experiments conducted on a public dataset for social robot navigation; finally, Sec. V concludes by summarising the main outcomes and suggesting future research work. ## II Related Work **Human-human interactions modeling:** Two methods have been presented in the literature for modeling interactions with nearby dynamic agents: (i) one-to-one modeling, and (ii) crowd modeling. One-to-one interaction modeling of a human-robot pair was presented in [4] in the form of a qualitative representation, by encoding a sequence of QTC states in a Markov Chain. Human-human interactions modeling was also addressed in [7] for social navigation, where interactions with neighbors are embedded in a multi-layer perceptron by using local maps centered at each person. On the other hand, crowd modeling was discussed in [9], where the major existing hybrid techniques were surveyed.
Hybrid crowd techniques are brought forward to overcome some limitations of classical methods (e.g high computation cost). For crowd analysis, F-formations modeling and detection has been addressed recently in [10], where the authors deconstructed a social scene into pairwise data points, then they used feature-based classification to distinguish F-formations. In this work, we do not limit our approach to F-formations only. Hence, we build on previous works from HRSI modeling for single pair of agents [6], taking inspiration from the hybrid approaches of crowd modeling, in order to predict multi-agent interactions in dense scenes. **Context-aware human motion prediction:** While the problem of context-aware human motion prediction has been extensively addressed in the literature, to the best of our knowledge, the problem of context-aware multi-agent interactions prediction in dense environments (as the social ones) has been mostly neglected. The state of the art works vary based on no context-awareness, static-only context, dynamic-only context [11, 12, 13], static and dynamic context [14, 15]. Architectures such as Social-LSTM [11] and SGAN [12] capture spatial interactions only. Also, the authors adopt an LSTM encoding for each agent that cannot account for static objects embedding in the neighborhood. The Stgat architecture in [13] accounts for dynamic agents only and the use of dual LSTM modules in the encoder limits the ease of direct integration of static objects representations. As per [14], the DSCMP architecture outperforms S-LSTM, SGAN and Stgat in terms of parameters and time consumption. In DSCMP, both static and dynamic contexts are incorporated together with spatial and temporal dependencies between agents. In that work, the static context is embedded in a latent space through a convolutional semantic map of the whole scene. The work in [15], instead, addresses the problem of action prediction together with motion prediction, by using person-whole scene interaction embedding (leveraging the semantic scene convolutional map) together with explicitly encoding the interaction between person and surrounding objects (person, objects) into a geometric relationship. In this paper, we take inspiration from [14] and [15] to develop a dynamic and static context-aware predictor of spatial interactions, but we limit our current study to one data type entry to the network architecture used for experimentation. We choose raw coordinates as the sole upstream data type, commonly used to represent dynamic agents motion, leaving the exploitation of semantic map representation of the scene (fully or partially) for our future work. Hence, we embed only the raw coordinates of key features (static objects of use) that represent the social scene, and that's because in social scenes and according to [16], humans interact not only with one another, but also with machines that are meaningful. ## III MASI Prediction Framework ### _Problem Statement_ While metrical motion prediction of nearby agents allows robots to replan locally their target destination for safe navigation, it doesn't provide the robot with enough intelligence to reason on the implicit intent one may convey in his motion (e.g. 
a person may speed up at a room entrance, as a patient's room, to convey to the robot an urgent need to enter first), the problem of which can be dealt by embedding the robot with a reasoning (modeling and predicting) paradigm on multi-agent spatial interactions, allowing it to take reactive or predictive optimal decisions by intervening or not on its surrounding. ### _Qualitative Spatial Interactions_ A qualitative spatial interaction is represented by a vector of \(m\) QTC symbols (\(q_{i}\), \(i\in\mathbb{Z}\)) in the domain \(D=\{-,0,+\}\)[1]. Among the different types of QTC representations, we exploit the double-cross \(QTC_{C}\) which employs away/towards, left/right, relative speed, and relative angle dichotomies, as illustrated in Fig. 2. Two types of \(QTC_{C}\) were proposed in the literature, the \(QTC_{C_{1}}\) with four symbols \(\{q_{i},i=1..4\}\), and the \(QTC_{C_{2}}\) with six symbols \(\{q_{i},i=1..6\}\). The symbols \(q_{1}\) and \(q_{2}\) represent the towards/away (relative) motion between a pair of agents; \(q_{3}\) and \(q_{4}\) represent the left/right relation; \(q_{5}\) indicates the relative speed, faster or slower; finally, \(q_{6}\) depends on the (absolute) angle with respect to the reference line joining a pair of agents. The qualitative interaction between the time series of two moving points, \(P_{r}\) and \(P_{h}\), is expressed by the following \(q_{i}\) symbols: \[(q_{1}) -:d(P_{r}|r^{-},P_{h}|t)>d(P_{r}|r,P_{h}|t)\] \[+:d(P_{r}|r^{-},P_{h}|t)<d(P_{r}|r,P_{h}|t);\quad 0\:\text{all other cases}\] \[(q_{3}) -:\|P_{r}^{-}P_{r}^{+}P_{h}^{+}P_{h}^{-}P_{h}^{+}|<0\] \[+:\|P_{r}^{-}P_{r}^{+}P_{h}^{+}P_{h}^{+}P_{h}^{+}|>0;\quad 0\: \text{all other cases}\] \[(q_{5}) -:\|P_{r}^{-}\|<\|P_{h}^{-}\|\] \[+:\|P_{r}^{-}\|>\|P_{h}^{-}\|;\quad 0\:\text{all other cases}\] \[(q_{6}) -:\theta(\vec{v}_{r}^{\prime},P_{h}\vec{P}_{h}^{\prime})<\theta (\vec{v}_{h}^{\prime},P_{h}\vec{P}_{h}^{\prime})\] \[+:\theta(\vec{v}_{r}^{\prime},P_{h}\vec{P}_{h}^{\prime})>\theta( \vec{v}_{h}^{\prime},P_{h}\vec{P}_{h}^{\prime});\quad 0\:\text{all other cases}\] (\(q_{2}\)) and (\(q_{4}\)) are similar to (\(q_{1}\)) and (\(q_{3}\)), respectively, but swapping \(P_{r}\) and \(P_{h}\). \(d(.)\) is the euclidean distance between two positions, \(\theta(.)\) is the absolute angle between two vectors, and \(t^{-}\) denotes a single previous time step. In this paper, we propose a framework (\(F\)) for spatial interaction prediction and compare three possible implementations: two symbol-driven neural approaches, denoted by \(F^{QTC-4}\) and \(F^{QTC-6}\), where both input and output of the neural network are QTC symbols; a third approach, denoted by \(F^{ts}\), where the inputs are raw trajectories and the outputs are QTC symbols. In particular, \(F^{QTC-4}\) and \(F^{QTC-6}\) exploit \(QTC_{C_{1}}\) and \(QTC_{C_{2}}\), respectively, to directly predict QTC vectors with a time horizon \(T_{f}\), while \(F^{ts}\) extracts QTC vectors from the coordinates generated by a purely data-driven motion prediction architecture over \(T_{f}\). The main difference between the two symbol-driven frameworks is that \(F^{QTC-4}\) assigns a greater importance to the prediction of left/right and towards/away dichotomies, neglecting the relative velocity and angle embedded in \(F^{QTC-6}\). ### _Network Architecture_ In order to narrow down the study, we limit the network upstream input to raw coordinates of body points (i.e. 
dynamic agents, static key objects in the environment), which will then be converted to QTC input for the evaluation of \(F^{QTC-4}\) and \(F^{QTC-6}\) frameworks. Among several network architectures developed in the literature for human motion prediction, some give no consideration to the static context [12], others embed the static context as semantic map input to the network [14, 15]. Though these architectures can serve as a tool for our current study, we take advantage in this paper from the network architecture in [17] as starting point to implement our interaction prediction framework (F) for the prediction of qualitative spatial interactions. The architecture as in [17] takes as input only time series of raw coordinates which get processed through an embedding, attention, and LSTM layers respectively, as can be seen from the encoder and decoder in Fig. 3. This architecture alleviates the need for a separate network for static context embedding, as a CNN features extractor from the semantic scene image. The architecture in [17] allows also the incorporation of both spatial and temporal dependencies of interactions. It is worth to stress on the fact that other architectures from the state of the art in context-aware human motion prediction can serve the purpose of this benchmark study, and this will be targeted in our future works for performance generalisation. In order to implement \(F^{QTC-4}\) and \(F^{QTC-6}\), we modified the original architecture to deal with time-series of categorical data, representing symbolic knowledge of the spatial interactions between pairs of agents. We also extended the prediction horizon to medium (i.e. 48 time steps, or 3.2\(s\)) and longer (i.e. 72 time steps, or 4.8\(s\)) time horizons. The parameters for medium and long time horizon prediction were chosen based on relevant literature of human motion prediction [12, 14]. The input attention encoder of the network in Fig. 3 consists of an input attention layer (I-Attention) which weighs \(n^{*}\) spatial interactions in a radial cluster. The encoder is then followed by a decoder with a temporal attention layer (T-Attention), capturing the temporal dependencies in multi-agent interactions. The network encodes \(n^{*}\) input series (denoted by \(\mathbf{x}\)), each of length \(T_{h}\), and decodes \(n^{*}\) output labels (denoted by \(\mathbf{y}\)), each of length \(T_{f}\), where \(T_{f}\) is the predictive time horizon and \(T_{h}\) is the time history used for the temporal attention purpose. For our categorical data, we minimize a sparse (categorical) cross-entropy loss function between the true and predicted QTC vector indices, extracted from a dictionary of 444 possible \(QTC_{C_{2}}\) vectors for \(F^{QTC-6}\), and 82 possible \(QTC_{C_{1}}\) vectors for \(F^{QTC-4}\). Both the dictionaries include an additional index of "impossible" QTC vector, where all the QTC relations \(q_{i}\) assume a value \(\notin D\), chosen to be 10. The impossible QTC vector accounts for the case of an agent leaving the Fig. 2: A case of \(QTC_{C_{1}}\) representation of interactions between three body points \(P_{h1}\), \(P_{h2}\), and \(P_{r}\). For example, the QTC interaction represented by \((-,-,+,-)\) implies that agent \(h_{1}\) and robot ’r’ are moving towards each other, ’r’ moves to \(h_{1}\) right side, while \(h_{1}\) moves to ’r’ left side. cluster at time \(t\), or for complementary "fake" interactions added to each cluster to make it of fixed \(n^{*}\) size. 
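To make the symbolic inputs concrete, the sketch below derives the four \(QTC_{C_{1}}\) symbols for one agent pair from two consecutive positions, following the dichotomies of Sec. III-B. It is only an illustrative implementation: the left/right sign convention and the dictionary indexing at the end are our own choices and may differ from those used in the paper.

```python
import numpy as np

def qtc_c1(p_r_prev, p_r, p_h_prev, p_h, eps=1e-6):
    """QTC_C1 symbols (q1..q4) in {-1, 0, +1} for agents r and h,
    given 2D positions at the previous and current time steps."""
    def towards(a_prev, a, b):
        # -1: a moves towards b, +1: away, 0: neither (within tolerance eps)
        d_prev, d_now = np.linalg.norm(a_prev - b), np.linalg.norm(a - b)
        return -1 if d_prev - d_now > eps else (1 if d_now - d_prev > eps else 0)

    def side(a_prev, a, b_prev):
        # sign of the cross product between the reference line (a_prev -> b_prev)
        # and the displacement of a; one possible left/right convention
        ref, move = b_prev - a_prev, a - a_prev
        cross = ref[0] * move[1] - ref[1] * move[0]
        return -1 if cross > eps else (1 if cross < -eps else 0)

    q1 = towards(p_r_prev, p_r, p_h)        # r moving towards/away from h
    q2 = towards(p_h_prev, p_h, p_r)        # h moving towards/away from r
    q3 = side(p_r_prev, p_r, p_h_prev)      # r moving to the left/right of the line
    q4 = side(p_h_prev, p_h, p_r_prev)      # h moving to the left/right of the line
    return (q1, q2, q3, q4)

# each QTC state is then mapped to its index in the dictionary of possible vectors:
# dictionary = {state: idx for idx, state in enumerate(all_possible_states)}
```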
The reader can refer to [17] for a detailed explanation of the network components, which are schematically illustrated in Fig. 3. ### _Data Processing_ Social scenarios have often an unpredictable number of people entering and leaving the environment, possibly leading to a combinatorial explosion in the input size of the predictive model and in its number of training parameters. In order to approach the problem of reasoning in socially crowded environments, we implement a crowd clustering approach for local interactions prediction. The advantage of this approach is that all the clusters have a fixed micro-size (i.e maximum number of agents entering the cluster at any given time) and it accounts for the agents entering and leaving the cluster. We applied the radial clustering approach on the JackRabbot [1] open-source dataset, which provides multi-sensor data of human behaviours from a mobile robot in populated indoor and outdoor environments (Fig. 4). We make use of the open-source annotated 3D point clouds, provided as metric coordinates of humans (dynamic context) bounding boxes centroid, and extracted from the upper velodyne sensor, as raw data and ground truth to our network architecture. The raw data are further processed to extract QTC representations of a spatial interaction between each pair of agents, whose dictionary index is then used as ground truth output for \(F^{QTC-4}\) and \(F^{QTC-6}\) approaches. In parallel, the raw metric data are directly used as ground truth labels for the \(F^{ts}\) approach. The environments considered in JRDB are fairly crowded. Among them, we selected a cafe shop (_bytes-cafe-2019-02-07.0_) for comparing the proposed prediction approaches, and two poster session scenarios (_packard-poster-session-2019-03-20.2_, denoted PS-2, and _packard-poster-session-2019-03-20.1_, denoted PS-1) for testing the framework on a domain-shift situation. In the cafe scenario, the static context includes objects such as bar order and check-out points, exit door, drinking water station, as illustrated in Fig. 4 (top). These objects were manually selected based on our careful investigation to identify the most common ones used by people in the scenario, although in the future we plan to learn them automatically in order to adapt to different environments. The spatial coordinates of the selected objects, extracted from the scene point cloud, are incorporated in the network architecture as any other dynamic agent. For each agent \(i\) in a given scene, we generate a cluster with a fixed interaction radius \(R_{1}=1.2m\). The latter is selected based on the proxemics' literature [18], where the social distance for interactions among acquaintances is indeed between \(1.2m\) (short phase) and \(3.7\)m (long phase). Each cluster includes \(n\) input series, with \(n\) being the maximum number of agents entering the cluster of agent \(i\) in a time interval \(T\). Each input series is defined as a series of spatial interactions between agents \(r\) and \(h\), where \(h\) is every other dynamic agent/static object in the cluster of \(r\). The maximum number of input series among all clusters, \(n^{*}\), is fixed for practical (training) purposes. Each cluster is then post-processed to include \((n^{*}-n)\) input series with complementary "fake" values. Spatial interactions are formulated in terms of categorical (i.e. 
Spatial interactions are formulated in terms of categorical (i.e. QTC) data; hence, two dictionaries of all possible qualitative interactions in 2D are generated, based on \(QTC_{C_{2}}\) and \(QTC_{C_{1}}\) for the \(F^{QTC-6}\) and \(F^{QTC-4}\) approaches, respectively. The input to our network is then \(n^{*}\) series of indices over the time history \(T_{h}\). For both the cafe and the poster session scenarios, we evaluated the prediction performance for medium (\(T_{f}=3.2s\)) and longer-term (\(T_{f}=4.8s\)) horizons. ## IV Experiments The three proposed framework configurations implement the same architecture as in Fig. 3, but they were trained with different losses, since the input data is different. \(F^{QTC-4}\) and \(F^{QTC-6}\) were trained by minimising a categorical cross-entropy loss function over 120 epochs using the Adam optimiser, with \(T_{h}=10\) time steps (i.e. \(0.67s\), much less than in other works such as [12, 13]) and a batch size \(B=10\) as hyper-parameters, while \(F^{ts}\) was trained using the root mean square error (RMSE) loss function over 80 epochs using the Adam optimiser, with \(T_{h}=5\) time steps and \(B=5\).
Fig. 4: JackRabbot dataset scenes: (top) _bytes-cafe-2019-02-07.0_, (bottom) _packard-poster-session-2019-03-20.2_.
Fig. 3: An input-temporal attention mechanism for predicting spatial interactions of multi-dimensional input categorical (red) and metrical (yellow) time series extracted from dense scenes: application to the JackRabbot dataset. The diagram is extended from [17]. **x** is the input driving vector, **y** is the label vector.
Other hyper-parameters common to the three network configurations are a hidden state size \(h=256\) for both the encoder and the decoder, and a learning rate \(l_{r}=0.001\). The hyper-parameters were tuned to reach a good validation loss. The input consists of 63,566 samples for the cafe scene with \(F^{QTC-4}\) and \(F^{QTC-6}\), and \(46,548\) with \(F^{ts}\); \(109,040\) samples for PS-2 with \(F^{QTC-4}\) and \(F^{QTC-6}\), and \(110,889\) with \(F^{ts}\); whereas PS-1 has \(69,126\) samples in all three frameworks. The size of the input dataset is the same for both the medium and the longer term \(T_{f}\), and it is divided into 80% training, 10% validation, and 10% testing sets. All three frameworks were trained on a computing system consisting of an Intel Core i7-6850K processor @ 3.6 GHz and an NVIDIA GeForce GTX 1080 Ti 11 GB GPU. Since the three proposed approaches for spatial interaction prediction are trained with different loss functions, in order to compare their performance we use the so-called "conceptual QTC distance" [1], defined as a measure of the closeness of QTC relations. Specifically, the conceptual distance between 0 and another symbol, \(\{+\}\) or \(\{-\}\), is assumed to be "\(+1\)", while the conceptual distance between \(\{+\}\) and \(\{-\}\) is "\(+2\)". The overall conceptual distance between two QTC vectors is calculated by summing the conceptual distance over all their relation symbols. For example, suppose \(QTC^{t}\) and \(QTC^{p}\) are two QTC vectors, where \(t\) and \(p\) refer to the true and predicted QTC vectors, respectively. Then, the conceptual QTC distance is calculated as: \[\mathbf{d}_{QTC}=\mathbf{d}_{QTC^{t}}^{QTC^{p}}=\sum_{q_{i}}\mid q_{i}^{QTC^{t}}-q_{i}^{QTC^{p}}\mid, \tag{1}\] where \(q_{i}\) is one of the symbols defined in Sec. III-B.
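For clarity, Eq. (1) can be computed directly once the QTC symbols \(\{-,0,+\}\) are encoded as the integers \(\{-1,0,+1\}\) (an encoding assumption made only for this illustration):

```python
def conceptual_qtc_distance(qtc_true, qtc_pred):
    """Eq. (1): sum of per-symbol conceptual distances between two QTC vectors."""
    assert len(qtc_true) == len(qtc_pred)
    return sum(abs(t - p) for t, p in zip(qtc_true, qtc_pred))

# Example: d((-, -, +, -), (-, 0, +, +)) = 0 + 1 + 0 + 2 = 3
print(conceptual_qtc_distance([-1, -1, +1, -1], [-1, 0, +1, +1]))  # -> 3
```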
### _Testing Evaluation_ In Table I, we report the results on the 10% test set of the cafe scene, with a cluster radius of \(R_{1}=1.2\)m, in terms of the normalised mean (\(\mu\)) and standard deviation (\(\sigma\)) of \(d_{QTC}\). The normalisation is done over the labels, \(T_{f}\), and \(B\). We note that the range of \(d_{QTC}\) is approximately \(\mathcal{R}=\{0-40\}\) for \(F^{QTC-4}\), and \(\{0-60\}\) for \(F^{QTC-6}\). The maximum value of \(\mathcal{R}\) accounts for the inability of QTC to represent missing agents in the radial cluster. On the test set, \(F^{QTC-6}\) significantly outperforms \(F^{QTC-4}\) over medium and longer time horizons; however, \(F^{ts}\) has the best performance among the three configurations over both time horizons. Also, \(F^{ts}\) (motion prediction) with \(QTC_{C_{1}}\) post-processing (denoted \(F^{ts,1}\)) for interaction prediction or analysis performs better on the medium term, while \(F^{ts}\) with \(QTC_{C_{2}}\) (\(F^{ts,2}\)) performs best on the longer term. Overall, \(F^{ts,1}\) and \(F^{ts,2}\) outperform \(F^{QTC-6}\), with \(F^{ts,1}\) achieving 73.05% and 81.27% reductions in \(\mu(d_{QTC})\), and 93.8% and 96.68% reductions in \(\sigma(d_{QTC})\), over the medium- and long-term predictions, respectively. From these observations we can conclude that predictive networks perform better on non-symbolic data compared to their symbolic counterpart when applied to crowded human environments. We report \(F^{ts,1}\) training and validation times of 6.6 hrs and 8.3 hrs, while the evaluation times are 5.8 ms and 9.3 ms over the 3.2\(s\) and 4.8\(s\) prediction horizons, respectively. In order to evaluate the effect of the cluster radius selection, Table I also shows the results when the cluster radius is \(R_{2}=3.7\)m. In this case, with a larger cluster, and hence with more context accounted for, \(F^{ts,1}\) outperforms all other configurations on both the medium and longer horizons; it also outperforms the \(F^{ts,2}\) performance obtained over \(T_{f}=4.8\)s with \(R_{1}\). We can infer that with a larger cluster radius more context is accounted for, helping long-term prediction, and hence fewer interaction symbols are required to accurately represent the true interactions between multiple agents.

| **Cafe** | \(\mu^{10\%\text{-}R_{1}}\) | \(\sigma^{10\%\text{-}R_{1}}\) | \(\mu^{10\%\text{-}R_{2}}\) | \(\sigma^{10\%\text{-}R_{2}}\) |
| --- | --- | --- | --- | --- |
| \(F^{QTC-6}\) (3.2s) | 1.772 | 3.568 | 3.064 | 3.851 |
| \(F^{QTC-4}\) (3.2s) | 7.545 | 4.067 | 3 | 3.857 |
| \(F^{ts,1}\) (3.2s) | **0.464** | **0.22** | **0.32** | **0.16** |
| \(F^{ts,2}\) (3.2s) | 0.68 | 0.166 | 0.638 | 0.11 |
| \(F^{QTC-6}\) (4.8s) | 3.44 | 4.4 | 3.46 | 4 |
| \(F^{QTC-4}\) (4.8s) | 7.61 | 4.057 | 3.8 | 4.18 |
| \(F^{ts,1}\) (4.8s) | 3 | 1.254 | **0.25** | **0.18** |
| \(F^{ts,2}\) (4.8s) | **0.644** | **0.146** | 0.55 | 0.13 |

TABLE I: Performance comparison between the QTC prediction approaches \(F^{QTC-4}\) and \(F^{QTC-6}\), and the motion prediction-based QTC analysis framework \(F^{ts}\) evaluated on \(QTC_{C_{1}}\) (\(F^{ts,1}\)) and \(QTC_{C_{2}}\) (\(F^{ts,2}\)), in the cafe scene of JRDB and over \(T_{f}=3.2\)s and 4.8\(s\) prediction horizons. All measures are unitless. \(\mu\) and \(\sigma\) are the normalised mean and standard deviation of the conceptual distance (\(d_{QTC}\)) measure over the test set. \(R_{1}\) and \(R_{2}\) correspond to cluster radii of 1.2m and 3.7m, respectively. The best performance is highlighted in bold.

### _Domain-Shift (DS) Evaluation_ In order to further assess the generalisation capabilities of the three approaches, we re-trained and compared the results on different but related scenarios. Unfortunately, another cafe scene in JRDB (_forbes-cafe-2019-01-22.0_) lacks the necessary information to transform local coordinates from a mobile robot into a fixed reference frame for further data processing. Therefore, without loss of generality, we chose another crowded environment (poster session PS-2, as in Fig. 4, bottom) to re-train our network configurations with \(R_{1}\)=1.2m, and tested them on a different but related scenario (poster session PS-1). The performance on the testing set (i.e. 10% of PS-2) is reported in Table II (first column). We notice that \(F^{ts,1}\) outperforms \(F^{QTC-4}\) and \(F^{QTC-6}\) on both medium- and long-term predictions, with 72.47% and 85.8% reductions in \(\mu(d_{QTC})\), and 93.9% and 94.48% reductions in \(\sigma(d_{QTC})\), for the 3.2 and 4.8s horizons, respectively. We note that, even within the same network configuration, \(F^{ts,1}\) outperformed \(F^{ts,2}\). When looking at the transfer domain PS-1 in Table II (second column), on the 100% dataset, all the configurations succeeded in generalising to PS-1 on the medium and longer terms, except \(F^{ts,1}\) and \(F^{ts,2}\), which generalised well only on the medium term. Nevertheless, \(F^{ts,1}\) keeps holding the best overall performance when looking only at PS-1. In summary, we can infer that \(F^{ts,1}\) is the best framework for developing qualitative predictive solutions to endow a social autonomous system with additional intelligent capabilities, such as inferring implicit intent communication and/or predicting a need of the surrounding agents. A typical real-world scenario can be a robot patrolling an elderly home care center and instantly inferring that an elder is approaching it to request assistance with a treatment (e.g. bringing water or pills). \(F^{ts,1}\) shows the lowest mean and standard deviation of the loss, over short and longer horizons, and across different cluster radii. It also transfers to other domains with a 12.2% decrease and a 17.8% increase in mean loss, over 3.2\(s\) and 4.8\(s\), respectively. ## V Conclusion In this work, we presented and compared three approaches for multi-agent prediction of qualitative interactions in dense social scenes, combining a symbolic motion representation with an input/temporal-attention network architecture. We implemented a radial clustering approach to address mainly the notion of social proximity, and formulated spatial interactions in terms of a qualitative trajectory calculus (QTC). We compared two symbol-driven neural networks for QTC prediction, \(F^{QTC-4}\) and \(F^{QTC-6}\), with a third purely data-driven approach, \(F^{ts}\), based on plain coordinates, and evaluated them over two fixed-time horizons. We showed that the latter solution outperforms the previous two, specifically when post-processed for a small number of QTC symbols (\(F^{ts,1}\)), and that it performs best in the domain-shift scenario.
Our future work will be devoted to the exploitation of this prediction framework for effective human-robot spatial interactions in social navigation applications, including real-world environments such as warehouses and university premises. In addition, we will further improve our models to select and integrate learnable key features of the environment, whether static or dynamic, which could have some causal influence on the aforementioned interaction processes. ## Acknowledgement The authors would like to thank Francesco Castelli for his support in designing the problem approach.
Deploying service robots in everyday life brings the need to understand interactions in dense and dynamic scenes such as restaurants, warehouses, and hospitals. This paper presents three new approaches for modelling and predicting multi-agent interactions, including the use of an intuitive qualitative representation. The proposed solutions take the static and dynamic context into account to predict individual interactions. They exploit input- and time-related attention mechanisms and are tested over medium- and long-term prediction horizons. The first two approaches integrate different relations from the so-called qualitative trajectory calculus (QTC) into a state-of-the-art deep neural network for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, whose output is post-processed to predict QTC spatial interactions.
2307.16607
$OIDC^2$: Open Identity Certification with OpenID Connect
OpenID Connect (OIDC) is a widely used authentication standard for the Web. In this work, we define a new Identity Certification Token (ICT) for OIDC. An ICT can be thought of as a JSON-based, short-lived user certificate for end-to-end user authentication without the need for cumbersome key management. A user can request an ICT from his OpenID Provider (OP) and use it to prove his identity to other users or services that trust the OP. We call this approach $OIDC^2$ and compare it to other well-known end-to-end authentication methods. Unlike certificates, $OIDC^2$ does not require installation and can be easily used on multiple devices, making it more user-friendly. We outline protocols for implementing $OIDC^2$ based on existing standards. We discuss the trust relationship between entities involved in $OIDC^2$, propose a classification of OPs' trust level, and propose authentication with multiple ICTs from different OPs. We explain how different applications such as videoconferencing, instant messaging, and email can benefit from ICTs for end-to-end authentication and recommend validity periods for ICTs. To test $OIDC^2$, we provide a simple extension to existing OIDC server software and evaluate its performance.
Jonas Primbs, Michael Menth
2023-07-31T12:28:17
http://arxiv.org/abs/2307.16607v2
# OIDC\({}^{\bf 2}\): Open Identity Certification with OpenID Connect ###### Abstract OpenID Connect (OIDC) is a widely used authentication standard for the Web. In this work, we define a new Identity Certification Token (ICT) for OIDC. An ICT can be thought of as a JSON-based, short-lived user certificate for end-to-end user authentication without the need for cumbersome key management. A user can request an ICT from his OpenID Provider (OP) and use it to prove his identity to other users or services that trust the OP. We call this approach OIDC\({}^{\bf 2}\) and compare it to other well-known end-to-end authentication methods. Unlike certificates, OIDC\({}^{\bf 2}\) does not require installation and can be easily used on multiple devices, making it more user-friendly. We outline protocols for implementing OIDC\({}^{\bf 2}\) based on existing standards. We discuss the trust relationship between entities involved in OIDC\({}^{\bf 2}\), propose a classification of OPs' trust level, and propose authentication with multiple ICTs from different OPs. We explain how different applications such as videoconferencing, instant messaging, and email can benefit from ICTs for end-to-end authentication and recommend validity periods for ICTs. To test OIDC\({}^{\bf 2}\), we provide a simple extension to existing OIDC server software and evaluate its performance. ## 1 Introduction In most communication services, users identify each other through account profiles in which they provide their own identity information. To make these profiles more trustworthy, social network operators such as Meta and Twitter offer identity verification services for an additional fee that can only be used within their ecosystem. However, identity verification is often a cumbersome process that users may not want to repeat for each of their service platforms. In addition, users must still trust the service provider to sufficiently verify identities and not impersonate them. End-to-end user authentication mechanisms attempt to solve this problem, but they often lack adoption due to poor usability. Therefore, reusing an account for a verified identity would be desirable. With modern single sign-on (SSO) services, users can reuse their existing accounts to log in to other services. The OpenID Connect (OIDC) protocol, which is based on the OAuth 2.0 authorization framework, is widely used for this purpose. However, OIDC is designed for user-to-service authentication and does not address the purpose of end-to-end user authentication. In this paper, we define a new Identity Certification Token (ICT) for OIDC. It is similar to the ID Token which holds identity claims about a user, but also contains a verified public key of the user's client. As such, it can be thought of as a JSON-based, short-lived user certificate without the need for a revocation mechanism. The use of an ICT differs significantly from the use of an ID Token. A user requests an ICT from his OpenID Provider (OP) and presents it to another user's client to authenticate himself. If the other user trusts the issuing OP, his client verifies the integrity and validity of the ICT and authenticates the user using his client's public key contained in the ICT. As the OP certifies the identity of the user, we call this concept Open Identity Certification with OpenID Connect (OIDC\({}^{\mathbf{2}}\)). It facilitates mutual authentication of users if they trust each other's OP. 
While most OPs have a rather superficial identity verification process for their accounts, some practice a more thorough verification. In particular, new players such as banks and government institutions that perform rigorous identity verification for their accounts are becoming OPs. With OIDC\({}^{\mathbf{2}}\), unknown users can be reliably authenticated if they have an OIDC account at a trusted OP. Some services already provide strong user authentication, but these methods are difficult to use. Many instant messaging services support the exchange of public keys between users when they meet in person. Public key infrastructures (PKIs) require certificate management by users and reliable revocation list checking. PGP or S/MIME have long been proposed for email authentication, but are rarely used [28]. Self-Sovereign Identity (SSI) technology is currently emerging and solves this problem with device-specific long-term keys in a wallet app. However, this requires not only revocation mechanisms, but also recovery mechanisms in case the phone with the wallet app is lost or stolen. OIDC\({}^{\mathbf{2}}\) provides a more user-friendly alternative for end-to-end authentication. The ICT is short-lived, eliminating the need for cumbersome key revocation mechanisms, which improves security. OIDC\({}^{\mathbf{2}}\) avoids complex key management across devices by simply requesting a new ICT from the OP whenever needed. Using trusted OPs that verify the identity of their users also eliminates the need for face-to-face key exchange. Thus, a trusted OP can be compared with a trusted certification authority in a PKI or a trusted issuer in the SSI context. However, OIDC\({}^{2}\) is only a lightweight extension for end-to-end authentication with existing OIDC accounts. It is not intended to replace PKIs or SSIs. The paper is structured as follows. In Section 2, we revisit basics of OAuth 2.0 and OIDC, and in Section 3, we review related authentication technologies. Section 4 introduces the concept of OIDC\({}^{2}\) and proposes the extension to the OIDC protocol. Trust relationships in OIDC\({}^{2}\), a classification of OPs, authentication with multiple ICTs, and validity periods of ICTs are discussed in Section 5. In Section 6, we explain how OIDC\({}^{2}\) can be applied to video conferencing, instant messaging, and email. To test OIDC\({}^{2}\), we provide a simple extension to the OIDC server software in Section 7, which we evaluate in Section 8. Section 9 concludes our findings. ## 2 Introduction to OAuth 2.0 and OIDC We introduce basics of OAuth 2.0 and OIDC, as they are the underlying technologies for OIDC\({}^{2}\). We discuss their trust relationship and explain how they facilitate single sign-on (SSO). In the following, we capitalize technical terms from the official OAuth 2.0 and OIDC terminology. For ease of understanding, we omit non-essential steps in the protocols and refer to the authoritative standards for details. ### OAuth 2.0 The OAuth 2.0 authorization framework, as defined in RFC 6749 [11], is based on HTTP (RFC 7231 [9]) and the JavaScript Object Notation (JSON) RFC 8259 [3]. It allows a user to grant his Client scoped access to his resources on a server, e.g., to only read emails. A Client can be a web application, or a native email client application. In OAuth 2.0, this server is called Resource Server (RS) because it protects the user's Protected Resources (PR); the user is called the Resource Owner (RO). 
Without OAuth 2.0, the RO would leave his credentials with his Client to log in directly to the RS. With OAuth 2.0, the RO logs in to an Authorization Server (AS) and tells the AS to authorize the Client to access a scoped set of PRs. To do this, the AS issues an Access Token (AT) to the Client. This AT allows the Client to access the PRs on the RS. In this way, OAuth 2.0 improves security by granting Clients only scoped access to the user's account without exposing the user's credentials to any of the Clients. Figure 1a shows a simplified Authorization Request where the RO authorizes his Client to read email. First, the Client requests access to the Scope read_email, which authorizes read-only access to the RO's emails (1). Then, the AS authenticates the RO (2) and the RO authorizes the Client for the requested Scope (3). Finally, the AS issues the AT and optionally a Refresh Token (RT) (4). This AT contains the authorized Scopes with a short validity period. It is signed with the AS's private key \(K^{-}_{AS}\). The RT is like a revocable session ID that authorizes the Client to refresh an expired AT without further user interaction. Figure 1b describes a Resource Request where the Client uses the AT to access PRs on the RS. First, the Client requests the PRs and provides the AT to prove authorization (1). Then, the RS verifies the AT for a sufficient Scope, its expiration date, and the validity of its signature with the AS's public key \(K^{+}_{AS}\) (2). Finally, the RS responds with the PRs (3).
Figure 1: OAuth 2.0 protocol flows.
### OpenID Connect (OIDC) OpenID Connect (OIDC) [24] is an authentication framework that allows users to be authenticated with an SSO identity through a third-party service, such as an email service. It extends OAuth 2.0 for this purpose. Unlike the example in Section 2.1, the SSO identity has no relationship to the third-party service. In OIDC, an End-User (EU) is authenticated by an OpenID Provider (OP). The EU grants OIDC-specific Scopes to the EU's intended service, e.g., to his email client, which is called the Relying Party (RP). This communication flow is supported by OAuth 2.0, where the EU corresponds to an RO, the OP to an AS, and the RP to a Client. The OP issues an ID Token (IDT) to the RP. This IDT contains claims about the authentication event, which typically includes information about the EU, such as his name or address. Since this Authorization Request is for authentication, it is called an Authentication Request in OIDC. With this mechanism, an EU can be authenticated with his SSO identity by different services without providing his credentials. Instead, the OP passes profile information about the EU to the RP as identity claims in the IDT, but the EU controls which information is passed. Figure 2a describes a simplified Authentication Request where the EU is authenticated by the RP via the OP. First, the RP requests access to the profile Scope. If the EU grants this Scope, the RP is authorized to access the EU's profile information (1). Then, the EU is authenticated by the OP (2) and authorizes the RP for the requested Scope (3). Finally, the OP issues an IDT in addition to the AT and an optional RT (4). This IDT contains the identity claims related to the authorized profile Scope, such as the EU's name, profile picture, or date of birth, and is signed with the OP's private key \(K_{OP}^{-}\). The RP can verify the signature of the identity claims with the OP's public key \(K_{OP}^{+}\).
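Since the IDT is a signed JSON Web Token, its verification by the RP can be illustrated in a few lines of Python; the issuer URL, Client ID, JWKS location, and the use of the PyJWT library are assumptions made for this sketch and are not prescribed by the text above.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://op.example.com"     # assumed OP
CLIENT_ID = "example-rp"              # assumed RP Client ID
id_token = "<IDT as received from the OP>"

# Fetch the OP's public key (K_OP^+) from its published JWKS; in practice the
# JWKS URL is taken from the OP's discovery document.
signing_key = PyJWKClient(f"{ISSUER}/.well-known/jwks.json").get_signing_key_from_jwt(id_token)

# Verify the signature, expiration, audience, and issuer of the IDT.
claims = jwt.decode(id_token, signing_key.key,
                    algorithms=["RS256"], audience=CLIENT_ID, issuer=ISSUER)
print(claims.get("sub"), claims.get("name"))  # identity claims certified by the OP
```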
Figure 2: OpenID Connect Authentication Flow and trust relationship.
### Trust Relationship The following describes the resulting trust relationship between the entities in OAuth 2.0 and OIDC, as shown in Figure 2b. The Client/RP never sees any credentials from the RO/EU because the authentication process is performed solely by the AS/OP. Therefore, the Client/RP must rely on the AS/OP to correctly verify the identity of the RO/EU and to issue the correct AT/IDT of the authenticated RO/EU. Once the Client/RP of the RO/EU receives the tokens, the RO/EU may not be able to verify what it is doing with them. The Client/RP may even leak the tokens, so the RO/EU must trust that it is working as intended. To minimize this risk, the RO/EU restricts the Client/RP's access to only the necessary PRs and identity claims. The RO/EU must also trust the AS/OP to protect his identity. This includes a secure login process and secure credential storage, but also that the AS/OP will not impersonate his account. Such impersonation would not even require any credentials since the AS/OP needs only its private key \(K^{-}_{AS}/K^{-}_{OP}\) to sign an AT/IDT. ### Single Sign-On with OAuth 2.0 and OIDC Today, many services require dedicated accounts, forcing users to remember multiple service-specific credentials. With SSO systems, users only need to remember the credentials for one account. They can use this SSO identity to log in to multiple service accounts. Logging in to a service account with this SSO identity is typically solved with a combination of OAuth 2.0 and OIDC, as depicted in simplified form in Figure 3. First, the Client initiates an OAuth Authorization Request to the service-specific AS (1). Instead of using service account credentials, the RO chooses to log in with his SSO identity via OIDC. To do this, the AS acts as an RP and initiates an OIDC Authentication Request to the OP (2). The EU is then authenticated by the OP with his credentials (3) and consents to the OP providing the RP with access to his profile information (4). Technically, this consent is an authorization in the sense of OAuth 2.0. The OP then responds with an IDT to the RP (5), which authenticates the EU to the AS and completes the OIDC-based authentication process. Now the authenticated RO authorizes the requested Scopes of the Client (6). Finally, the AS issues an AT and an optional RT to the Client (7).
Figure 3: Simplified authentication to an AS with OIDC and authorization of a Client with OAuth 2.0.
In an SSO environment, the trust relationship changes slightly. While the user has to trust the AS not to impersonate his service account, he also has to trust the OP not to impersonate any of his service accounts. This makes the OP very powerful because it could impersonate any of the user's service accounts. Therefore, EUs should only choose trusted OPs. ## 3 Related Technologies We review related technologies for end-to-end authentication and compare them to OIDC\({}^{\mathbf{2}}\). ### Identity Providers and Certificates In a Public Key Infrastructure (PKI) [30], a Certificate Authority (CA) verifies that an entity's real-world identity and long-term public key \(K_{E}^{+}\) belong together, records them in a document, signs it, and issues it in the common X.509 certificate format (RFC 5280 [6]). Such X.509 certificates are used e.g. in the Secure / Multipurpose Internet Mail Extensions (S/MIME) standard (RFC 8551 [26]) to authenticate and encrypt email.
However, due to the cumbersome identity verification and certificate installation process, only 2.50% of over 81 million emails examined in a study [28] were signed with S/MIME. To simplify this process, Cisco proposed a (now expired) Internet draft [2] in which an Identity Provider (IdP) issues X.509 certificates to its users. According to their white paper [5], Cisco uses these certificates for end-to-end user authentication in the Webex videoconferencing service. If the session partner trusts the issuing IdP, the partner can authenticate the holder of this certificate, e.g., with a challenge/response method [15]. The draft [2] is designed for the SAML2 authentication standard [4], but OIDC performs better for mobile devices and cloud computing [20]. This may be one reason why the design has not been adopted by other applications and IdPs. Conceptually, the presented approach is similar to OIDC\({}^{\mathbf{2}}\); we continue with the differences. X.509 is a binary format limited to a small set of standardized identity-related fields [26]. OIDC\({}^{\mathbf{2}}\) instead uses JSON Web Tokens (JWT) in RFC 7519 [14] to represent claims about the user. JWTs are more flexible because the IdP can provide any claims in a JSON object, many of which are already standardized in the OIDC core specification [24] and eKYC [16]. JSON is also easier to parse in web applications than X.509 certificates. Also, long-lived user certificates require more attention than short-lived ICTs. In particular, certificate revocation lists must be managed and verified. In addition, certificates may need to be installed on different devices, which adds overhead and can create security issues. ### Self-Sovereign Identity (SSI) In Self-Sovereign Identity (SSI) [19], participating entities generate their own asymmetric key pairs \(K^{\pm}\). Entities are identified by their decentralized identifier \(DID\), which is linked to at least one public key \(K^{+}\). Entities store their private key \(K^{-}\) in their digital wallet, e.g. an app on their smartphone. This can be used for end-to-end authentication with the key pair \(K^{\pm}\). SSI describes three entities: the Issuer (I), the Holder (H), and the Verifier (V). The issuer knows or verifies the credentials of the holder and issues them to the holder as a Verifiable Credential (VC). This VC is signed by the issuer with his private key \(K^{-}_{I}\); it contains the issuer's \(DID_{I}\) and the credentials and \(DID_{H}\) of the holder. The holder holds this VC in his wallet and presents it to a verifier as a Verifiable Presentation (VP). This VP is signed by the holder with his private key \(K^{-}_{H}\); it contains the VC and the verifier's \(DID_{V}\). The verifier verifies this VP by checking the holder's signature on the VP and the issuer's signature on the VC. If the verifier accepts the issuer as a trusted authority for issuing the holder's credentials, then the verifier trusts that these credentials belong to the holder. Early implementations of SSI made use of blockchain technology [8] and used a public distributed ledger [7] to store the mapping of a \(DID\) to its associated public keys. Modern approaches are based on OAuth 2.0 and OpenID Connect, such as the mobile driving license in the United States standardized in ISO/IEC 18013-5:2021 [13]. This approach implements the Self-Issued OpenID Provider v2 (SIOPv2) [31] draft in the wallet app for key management.
Driving license offices provide OAuth 2.0 based interfaces defined in the OpenID for Verifiable Credential Issuance (OpenID4VCI) draft [17] to issue driving licenses as VCs in the W3C format [27]. Drivers present these VCs as VPs to police officers using OAuth 2.0 based interfaces between smartphones defined in the OpenID for Verifiable Presentations (OpenID4VP) draft [29]. Another OIDC draft describes the issuance of identity claims of the ID Token as a VC [1]. This is similar to our approach, but requires the full OpenID4VC infrastructure to be deployed, which is currently rare. Although SSI is now being rolled out for government use cases, there are still open issues regarding usability [25][32] and identity recovery [33]. Since the private key is a long-term key that could be leaked during its lifetime, the system requires a key revocation list. But as argued by Ronald L. Rivest more than two decades ago [23], revocation lists should be avoided for security reasons. Modern technologies such as Hardware Security Modules (HSM) or Trusted Platform Modules (TPM) address this problem by protecting the private key inside the hardware. Here, the private key cannot be exported and can only be used for signing after the platform integrity has been verified and the user has been authenticated. This creates problems when a user wants to use VCs from other devices. In addition, if the device is lost or broken, the user needs a recovery method for the private key and DID that must be configured in advance. OIDC\({}^{\mathbf{2}}\) does not have these problems. It uses short-lived ephemeral key pairs and ICTs, requires no specific hardware or software platform, and leverages existing account recovery capabilities. Compared to SSI approaches, it does not require frameworks that are currently rarely deployed, such as installed wallet apps, issued VCs, and a large number of newly implemented standards. Instead, OIDC\({}^{\mathbf{2}}\) requires a small extension of OPs to use existing OIDC accounts. However, the ICT may also contain claims for which the issuing OP is not a trusted source, as discussed in Section 5.3. ### OpenPubkey BastionZero has developed OpenPubkey [12], which is very similar to OIDC\({}^{\mathbf{2}}\). The RP of an EU can create a Client Instance Claim (CIC) that contains, among other things, the RP's public key \(K_{C}^{+}\). When requesting an ID Token (see Figure 2a), the RP can optionally provide a nonce in the Authentication Request (1), which we omitted in Section 2.2. The OP will then insert this nonce into the ID Token before issuing it (4). With OpenPubkey, the RP offers its hashed CIC as a nonce to be inserted into the ID Token. After receiving the ID Token, the RP appends the CIC and signs it with its private key \(K_{C}^{-}\), resulting in a PubKey (PK) Token. The RP can use this PK Token to authenticate as the EU. However, from our point of view, this approach makes the whole OIDC ecosystem insecure. In an SSO context, the RP is often a login service (see the AS in Figure 3) that the EU usually authorizes to access his profile information. If the service is malicious, the RP can request a PK Token with its own public key \(K_{C}^{+}\) to impersonate the EU without his knowledge. The authors' solution to this problem is to have the authenticating user only accept e2e authentications from a trusted RP, identified by its Client ID contained in the PK Token.
First, this places a high burden on the user, since it is difficult for the user to identify trusted RPs. Second, the EU's trust in a service, such as an online store, may be sufficient to be authenticated by that store, but it may not be sufficient to allow the store to impersonate him. Third, in open communication systems such as email, there are many clients, and it is unlikely that all of them are trusted. This limits the use of OpenPubkey to a small set of explicitly trusted services and clients. We believe that these three problems are unacceptable. In contrast, with OIDC\({}^{\mathbf{2}}\), the EU does not risk being impersonated when logging in to a malicious service. OIDC\({}^{\mathbf{2}}\) solves this problem by introducing a new ID Certification Token (ICT) that can only be requested by an RP with sufficient scope for e2e authentication. This means that an EU can control whether to issue only an ID Token or also an ICT. ## 4 OIDC\({}^{\mathbf{2}}\): Open Identity Certification with OIDC This section describes the OIDC\({}^{\mathbf{2}}\) concept in more detail and proposes a simple OIDC protocol extension to support it. ### Concept of OIDC\({}^{\mathbf{2}}\) We define new terminology, introduce the Identity Certification Token (ICT), and explain how to use it. #### 4.1.1 Terminology Consider a user of one application authenticating to a user of another application. The user authenticating himself is called the End-User (EU), and his application is called the Client. The other user is called the Authenticating User (AU), and his application is called the Authenticating Party (AP). We also assume that the EU has an SSO identity provided by an OpenID Provider (OP) trusted by the AU. The terminology used for the EU, Client, and OP is consistent with the combined OAuth 2.0 and OIDC scenario described in Section 2.4. However, OIDC\({}^{\mathbf{2}}\) does not require this scenario. #### 4.1.2 Identity Certification Token (ICT) We introduce the ICT, which addresses the end-to-end authentication use case. The ICT contains the Client's verified public key \(K_{C}^{+}\), an application-specific Scope, an expiration date, and a unique identifier of the EU's SSO identity. It may also contain other claims about the user which are not necessarily verified by the OP. ### ICT Request The Client uniquely requests an ICT from the OP for each end-to-end (e2e) authentication process. Figure 4a shows a simplified ICT Request. First, the Client performs an OAuth 2.0 Authorization Request as described in Section 2.1 (1-4) to obtain an Access Token (AT) for the ICT Request. For this purpose, the AT requires a Scope sufficient to access the EU's profile information, e.g., profile, and an e2e Scope, e.g., e2e_auth_email. The Client then uses the AT to authorize an OAuth 2.0 Resource Request for an ICT (5) from the OP, called an ICT Request. For this purpose, the Client generates a unique new public key \(K_{C}^{+}\) and presents it to the OP. The Client also presents a Proof of Possession (PoP) of the corresponding private key \(K_{C}^{-}\), e.g., by signing a unique nonce. The OP verifies the validity of the AT and the PoP (6). If valid, the OP signs the ICT with its private key \(K_{OP}^{-}\) corresponding to its published public key \(K_{OP}^{+}\) and responds with the ICT (7). When the ICT expires and a new ICT is required, the Client repeats steps (5) to (7) to request a new ICT for a new key pair.
Figure 4: Protocol extension of OIDC\({}^{2}\).
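A purely illustrative, hypothetical client-side sketch of this ICT Request is shown below; the /ict endpoint path, the JSON field names, and the RSA key with a PKCS#1 v1.5 signature as PoP are assumptions, not part of the proposed protocol.

```python
import base64, secrets, time
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

access_token = "<AT with profile and e2e_auth_email scope>"  # obtained via OAuth 2.0 (steps 1-4)

# Step 5: generate an ephemeral key pair K_C and prove possession of K_C^- by
# signing a fresh nonce concatenated with a timestamp.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo).decode()
nonce = f"{secrets.token_urlsafe(16)}.{int(time.time())}"
pop = key.sign(nonce.encode(), padding.PKCS1v15(), hashes.SHA256())

# Steps 5-7: send the public key, nonce, and PoP to the OP's (assumed) /ict endpoint.
response = requests.post(
    "https://op.example.com/ict",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"public_key": public_pem,
          "nonce": nonce,
          "pop": base64.b64encode(pop).decode()},
)
ict = response.json()["ict"]  # a short-lived JWT signed with K_OP^-
```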
### E2E Authentication with ICT The Client uses the ICT to authenticate its EU to the AP's AU as shown in Figure 4b. First, the Client passes the ICT containing its public key \(K_{C}^{+}\) to the AP and provides a PoP for the corresponding private key \(K_{C}^{-}\) (1). To do this, the Client signs either a unique nonce provided by the AP or a unique session-specific identifier. Alternatively, the Client can prove possession by establishing a secure channel based on the private key \(K_{C}^{-}\). In Section 6, we show and explain use cases that take advantage of these three options. The AP then verifies the Client's PoP (2) using the public key \(K_{C}^{+}\) from the ICT and verifies the AU's trust relationship with the OP (3). This may require user interaction or the use of whitelists, discussed further in Section 5.2. If the AU trusts the OP, the AP checks the expiration date and verifies the signature of the ICT using the OP's public key \(K_{OP}^{+}\) (4). If successful, the EU has proven its SSO identity to the AU (5). ## 5 Security Considerations First, we discuss how OIDC\({}^{\bf 2}\) shifts the burden of thorough authentication from service providers to identity providers. Then, we analyze the trust relationship between OIDC\({}^{\bf 2}\) entities and propose a trust classification for OPs. Finally, we propose authentication with multiple ICTs and discuss the correlation between the validity of an ICT and its corresponding key pair. ### Service Provider vs. OpenID Provider In most communication services, users must rely on the identity claims of their communication partners provided by the service provider, with no way to verify them. OIDC\({}^{\bf 2}\) allows users to verify each other's identities without having to trust the service provider. This only requires the Client to implement OIDC\({}^{\bf 2}\) and the protocol to provide a way to exchange the ICTs. The service provider does not need to implement OAuth 2.0 for the Client or provide an OP. This improves the overall security of the service and prevents privacy issues by eliminating the need for the service provider to collect sensitive information about its users. ### Trust Relationship Figure 5 shows an overview of the trust relationship between the entities of the OIDC\({}^{\bf 2}\) protocol. On the proving side, the End-User (EU) trusts his OP to protect his identity from impersonation attacks and not to impersonate him. This includes that the OP will only issue authorized ICTs. Furthermore, the EU trusts that his Client will operate as intended. This means that the Client will protect its private key \(K_{C}^{-}\) from third parties and use the ICT only for the intended authentication processes. To limit potential misuse by the Client, the ICT is scoped to a specific context. For example, this prevents an email client from misusing the ICT to sign contracts on behalf of the EU. On the authentication side, the Authenticating User (AU) trusts the OP to protect the EU's identity and to sufficiently verify the Client's possession of its private key \(K_{C}^{-}\). The AU also trusts the OP to certify sufficiently trustworthy identity claims with the issued ICT, which we will discuss in more detail in Section 5.3. To ensure that the authentication process is intended by the EU, the AU trusts the OP to issue only EU-authorized ICTs. The AU must also trust his Authenticating Party (AP) to correctly verify the received ICT and PoP.
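For concreteness, the checks performed by the AP (steps 2-4 of Section 4.3) might look roughly as follows in Python; the claim name carrying the Client's public key ("cnf_key"), the whitelist, and the use of PyJWT are illustrative assumptions consistent with the sketch above, not a normative implementation.

```python
import base64
import jwt  # PyJWT
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

TRUSTED_OPS = {"https://op.example.com"}  # whitelist maintained by/for the AU

def authenticate_end_user(ict: str, op_public_key_pem: str,
                          challenge: bytes, pop_signature_b64: str) -> dict:
    # Step 3: check that the AU trusts the issuing OP.
    issuer = jwt.decode(ict, options={"verify_signature": False})["iss"]
    if issuer not in TRUSTED_OPS:
        raise PermissionError("untrusted OpenID Provider")
    # Step 4: verify the ICT's signature and expiration with K_OP^+.
    claims = jwt.decode(ict, op_public_key_pem, algorithms=["RS256"], issuer=issuer)
    # Step 2: verify the PoP with the Client's public key K_C^+ taken from the ICT
    # (assumed here to be carried as a PEM string in a "cnf_key" claim).
    client_key = serialization.load_pem_public_key(claims["cnf_key"].encode())
    client_key.verify(base64.b64decode(pop_signature_b64), challenge,
                      padding.PKCS1v15(), hashes.SHA256())
    return claims  # step 5: the EU's certified identity claims
```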
Finally, the AU needs to decide whether to trust the OP. We offer two solutions that can be combined. First, the AP trusts a trusted identity federation such as the Global Assured Identity Network (GAIN) [10], which consists of international OPs such as banks, insurance companies, or government institutions, all of which manage fully verified real-world identities. Second, the AU maintains his own whitelist of OPs, such as social media platforms or his business partners. Not every OP has the same level of trustworthiness, so we classify them in the next section.
Figure 5: Trust relationship between OIDC\({}^{2}\)'s entities.
### Classification of OpenID Providers When working with OIDC\({}^{2}\), we suggest three classes of OPs to consider. #### 5.3.1 Insecure OpenID Providers OPs can be considered insecure for a variety of reasons. They may not be able to sufficiently protect their users' credentials, or they may be untrustworthy for political or economic reasons. For example, they may certify potentially false or insufficiently verified claims. If an AU considers an OP insecure, his AP will not accept any ICTs issued by that OP. #### 5.3.2 Authoritative OpenID Providers (AOP) We classify an OP as an Authoritative OP (AOP) for specific claims if the AU accepts the OP as an authority for those claims and trusts the OP to protect managed identities. For example, an email server's OP is authoritative for email addresses within its own domain. Because an OP issues a unique subject identifier for each SSO identity by specification, an OP is always authoritative for its associated sub claim. This makes AOPs sufficient for scenarios where an EU wants to be authenticated with specific claims. For example, if the AU knows the EU's email address, the EU uses an ICT issued by his email provider's OP to authenticate on a social media platform. However, AOPs are only sufficient to certify identity claims for which they are an authority. To certify real-world identity claims such as names or addresses, the AOP must typically be the OP of a trusted government organization. #### 5.3.3 Verifying OpenID Providers (VOP) There is not always an AOP for every claim the EU wants to be authenticated with. Instead, the EU can use a third-party service that the AU trusts to sufficiently verify his identity claims and protect his account. We call the OP of this third-party service a Verifying OpenID Provider (VOP). This VOP could check the EU's passport to verify his name, or send him a verification code via SMS to verify his phone number. There are already OpenID Providers such as banks or insurance companies that are required by law to verify their customers' claims. However, such verification processes are often costly, which is why VOPs often do not verify all claims or offer verification only as an optional service, as do the social media platforms Facebook and Twitter. An OP can be an AOP and a VOP at the same time. For example, banks are VOPs for the name of an EU, but also AOPs for bank account numbers. ### Authentication with Multiple ICTs The classification of an OP is up to the AU, i.e., the AU may not accept ICTs from certain OPs. Since an EU may not know the AU's classification in advance, the EU can present ICTs from different OPs and the corresponding PoPs to increase the likelihood of successful authentication by the AU. However, this requires more work for the EU as he has to log in to all these OPs to receive ICTs. If the AP receives multiple ICTs, it presents them to the AU, which then selects the most trusted issuer or rejects them all.
Furthermore, the EU must be aware that presenting multiple ICTs also exposes all his presented accounts to the AU. ### Validity of ICTs and Client Key Pairs An ICT contains the public key \(K_{C}^{+}\) of the Client. By issuing this ICT, the OP certifies that the corresponding EU authorized the Client for e2e authentication with the contained identity claims. An attacker trying to impersonate the EU needs the corresponding private key of the ICT. We minimize this potential misuse of the ICT by a leaked private key by making the ICT short-lived and limited in scope. Since a few minutes are sufficient for most use cases (see Section 6), we recommend setting the ICT validity period to no more than 1 hour. We propose that an ephemeral and unique key pair \(K_{C}^{\pm}\) expires along with its associated ICT, eliminating the need for key revocation mechanisms. However, Sections 6.2 and 6.3 show that long-term key pairs are useful in some cases. Therefore, we further propose that an ICT may also contain a long-term public key, which must be indicated by providing the key revocation server of the key. Such a key is valid until revoked and is associated with the claims in the ICT. Some services control the lifetime of public keys by associating them with user profiles. An example of this approach is the Signal protocol (see Section 6.2). In such applications, a user can be authenticated with a public key received from an ICT as long as the public key contained in it is associated with the profile. In any case, an active session can remain valid even after the underlying key pair \(K_{C}^{\pm}\) expires (see Section 6.1). ## 6 Use of OIDC2 in Applications We explain how end-to-end authentication is currently implemented in the communication applications video conferencing, instant messaging and email and how it can be improved by OIDC2. In addition, we recommend validity periods for ICTs depending on these applications. ### Video Conferences In many videoconferencing systems, users must rely on the identities of their communication partners provided by the service's IdP. As an incident [21] demonstrated in 2022, new technologies such as deep fakes show that relying on identifying a communication partner in a video stream does not suffice. We explain how video conferencing services use OAuth 2.0 and OpenID Connect and how they can benefit from OIDC2. #### 6.1.1 End-to-End Authentication in Video Conferences In video conferencing (VC), users log in to the service provider's OAuth 2.0 Authorization Server (AS) either directly with their credentials or through the OP with their OIDC identity, as explained in Section 2.4. After authentication, the VC service provider's AS gets an ID Token from the OP. After authorization, the Client gets an Access Token (AT) from the AS. The Client uses the AT to prove its authorization to the VC server. The AT contains the EU's VC account ID, which the VC server uses to identify the EU's profile that the VC server provides to the communication partner. A communication partner, aka the AU, identifies the EU using the identity claims and its AP's public key \(K^{+}\) provided by the VC server. Based on this public key, the client of the AU, aka the AP, establishes an end-to-end encrypted communication channel with the Client of the EU. If the AU does not trust the VC server, he cannot trust the identity of the EU and therefore cannot be sure with whom he is establishing a secure channel. 
#### 6.1.2 End-to-End Authentication with OIDC\({}^{\mathbf{2}}\) We propose that the EU authenticates to the AU using an ICT obtained directly from the EU's OP. After a mutual ICT exchange, the Client and the AP use the contained verified public keys to establish a secure channel, as shown in Figure 6. First, Client A generates an ephemeral key pair \(K_{A}^{\pm}\) and contacts the OP of the EU's choice to obtain an ICT containing its public signing key \(K_{A}^{+}\) (1). The Client signs this ICT and a unique session identifier with its private key \(K_{A}^{-}\) and sends it to the AP via the VC server (2). The session identifier is required to prevent relay attacks. If the AU trusts the EU's OP, Client B generates its own ephemeral key pair \(K_{B}^{\pm}\) (3) and requests an ICT containing its public signing key \(K_{B}^{+}\) from the AU's OP (4). The AP signs its ICT and the session identifier with its private signing key \(K_{B}^{-}\) and responds to the Client via the VC server (5). If the EU trusts the AU's OP (6), the Client and the AP have successfully performed mutual authentication, enabling them to establish a secure e2e encrypted and authenticated channel (7).
Figure 6: End-to-end authentication with OIDC\({}^{\mathbf{2}}\) for video conferences.
#### 6.1.3 Discussion We explained how OIDC\({}^{\mathbf{2}}\) can be used in the context of OAuth 2.0 and OpenID Connect. OIDC\({}^{\mathbf{2}}\) can also be used to establish a secure channel based on an untrusted communication channel. The PoP is the signature over a unique session identifier and the ICT. This session identifier must not be reused and should therefore contain unique components such as a timestamp and an identifier of the communication partner. Since starting a videoconference takes only a few seconds, the validity of an ICT can be very short, e.g., 5 minutes, to avoid time synchronization problems. When the ICT expires, an active secure channel remains valid until it is closed. ### Instant Messaging We suggest how the instant messaging (IM) service Signal [22] could benefit from OIDC\({}^{\mathbf{2}}\). #### 6.2.1 End-to-End Authentication in Signal In the Signal protocol [18], users are identified by their phone number and public key \(K^{+}\), both of which are verified and published by the service provider. Two users establish an end-to-end encrypted communication channel and authenticate using digital signatures with their respective private key \(K^{-}\), which remains on their primary devices. To prove his identity to a communication partner and detect man-in-the-middle attacks, a user must present his public key via QR code. The partner's client then verifies that the scanned public key \(K^{+}\) matches the public key \(K^{\prime+}\) used to authenticate the channel. This is a strong but cumbersome verification mechanism that requires either a face-to-face meeting or a secure side channel. #### 6.2.2 End-to-End Authentication with OIDC\({}^{\bf 2}\) We propose an end-to-end authentication method for instant messaging based on OIDC\({}^{\bf 2}\), illustrated in Figure 7. Assume that two IM clients have already established a secure channel and know each other's public key \(K^{\prime+}\) provided by the service provider. Using OIDC\({}^{\bf 2}\), the IM Client requests an ICT from its EU's OP for this public key \(K^{+}\) and sends the ICT over the secure channel to the AP.
If the AU trusts the EU's OP, the AP verifies the received ICT and compares the contained public key \(K^{+}\) with the assumed public key \(K^{\prime+}\). #### 6.2.3 Discussion This example shows that an ICT can also be used to authenticate an established secure channel. Therefore, the ICT must be issued for the public key \(K^{+}\) used to authenticate the channel. Being able to send messages through this secure channel serves as implicit PoP for the corresponding private key \(K^{-}_{C}\). The ICT signed by an OP trusted by the AU thus replaces the need for a face-to-face meeting or a secure side channel. This requires that the ICT is still valid when the AP receives it; the AP then adds a timestamp to it. After that, the AP can verify the ICT at any time later, so the AU does not need to immediately confirm trust in the OP. Since IM services deliver their messages to the receiving client very quickly, we recommend a validity period of 5 minutes for ICTs in this context. If the ICT is transmitted while the AP is offline, the verification process must be repeated. This approach does not use any Signal-specific features and can therefore be applied to any other IM service. It shows that existing key management systems like Signal's can be extended with OIDC\({}^{\bf 2}\) as an authentication layer. It also shows that OIDC\({}^{\mathbf{2}}\) can be used without any OAuth 2.0 Authorization Server involved in the communication protocol.
Figure 7: Unilateral E2E IM authentication with OIDC\({}^{\bf 2}\).
### Email For the past three decades, S/MIME and PGP have been state-of-the-art standards for secure end-to-end authenticated and optionally encrypted email communication. But with 2.8% of signed and 0.06% of encrypted emails [28], neither of these technologies has taken off, probably due to their complex key generation and management. We briefly describe email authentication with PGP and S/MIME, propose a variant using OIDC\({}^{\mathbf{2}}\), and explain its advantages. #### 6.3.1 End-to-End Authentication with PGP and S/MIME The user generates a long-term PGP key pair \(K_{PGP}^{\pm}\) and imports it into his email client. When sending an email, the client attaches the public PGP key \(K_{PGP}^{+}\) and signs the whole email with the private PGP key \(K_{PGP}^{-}\). The recipient of the email then verifies the signature using the provided public key \(K_{PGP}^{+}\). To authenticate the sender, the receiver must know whether the public key \(K_{PGP}^{+}\) belongs to the sender. This requires a cumbersome Web of Trust-based approach in which people must often meet in person to sign each other's public keys or exchange public key fingerprints. Email authentication with S/MIME works similarly, but with a PKI-based approach using S/MIME certificates instead of the Web of Trust approach. The drawbacks have been discussed in Section 3.1. #### 6.3.2 End-to-End Authentication with OIDC\({}^{\mathbf{2}}\) While S/MIME benefits from the trust layer of a PKI, PGP lacks such a layer. This is where OIDC\({}^{\mathbf{2}}\) can help, as shown in Figure 8.
Figure 8: E2E email authentication with PGP and OIDC\({}^{\mathbf{2}}\).
For each email, the EU's Client requests a unique ICT for a uniquely generated ephemeral public PGP key \(K^{+}_{PGP}\). This requires the EU to log in to his OP and authorize the issuance of the ICT for the email context. The Client then attaches the ICT and PGP-related attachments to the email, e.g. the public PGP key \(K^{+}_{PGP}\) for normal PGP compatibility.
Before sending, the Client signs the entire email with its private PGP key \(K^{-}_{PGP}\). The receiving Client uses the attached PGP public key \(K^{+}_{PGP}\) to verify the signature as usual in PGP. To authenticate the sender, the receiving Client, aka Authenticating Party (AP), verifies the ICT using the OP's public key \(K^{+}_{OP}\) and compares its contained public key to the attached public PGP key \(K^{+}_{PGP}\). To verify the trust level of the OP, the AP can use preconfigured policies or ask his user, aka the Authenticating User (AU). If the integrity of the ICT is verified and the OP is trusted, the AU can rely on the identity claims that identify the EU. #### 6.3.3 Discussion The email, including its attachments and timestamps, is considered unique. Thus, the signed email serves as Proof of Possession (PoP) of the corresponding private key \(K^{-}_{PGP}\). If an attacker modifies the email or replaces the ICT, this will be detected when verifying the signature. The AU can rely on the inbox timestamp added to the email by his trusted email server when verifying the PoP and ICT. Therefore, a validity period of 1 hour is sufficient, as most emails are delivered to the server within this period. However, if the AU does not trust his email server, his trusted email client must receive the email within this period. Otherwise, the ICT will expire and the Client cannot trust the key pair and therefore cannot trust the email. This shows the limitations of short-term keys in OIDC\({}^{2}\). However, as described in Section 5.5, the EU could choose a long-term key such as a normal PGP key instead of the ephemeral key. The OP must then add to the ICT the URL of the key server that publishes PGP key revocations. When verifying the ICT, the AP must also verify that the PGP key has not been revoked. ## 7 Implementation We present a simple extension for any OIDC server to handle ICT Requests including a PoP for the verification of the Client's public key. The implementation is available on GitHub1. However, we recommend it only for testing purposes. Footnote 1: [https://github.com/oidc2/op-extension](https://github.com/oidc2/op-extension) To request a token, a Client sends a Token Request to the so-called /token Endpoint of the OpenID Provider. That is a special path in the URL of the OIDC server. Moreover, there is also a /userinfo Endpoint that returns information about the user upon a Userinfo Request. Many services are not directly reachable on the Internet but via a reverse proxy. A reverse proxy is an HTTP server that resides in the same network as the server, terminates the TLS connection between client and server, and relays data between the client and the application server. We propose the generic extension to an OIDC server in Figure 9 so that the OIDC server can handle ICT Requests. We define a novel /ict Endpoint which runs as a microservice separately from the OIDC server. The /ict Endpoint and the OIDC server operate behind a reverse proxy. The reverse proxy forwards any conventional OIDC requests to the OIDC server and ICT Requests to the /ict Endpoint. The /ict Endpoint expects an AT with Scopes for identity claims, e.g., profile for name and birth date, and a scoped context for end-to-end authentication, e.g., e2e_auth_email for the email context, in the ICT Request. It extracts the AT and includes it in a Userinfo Request to the OIDC server. After reception of the user information, the /ict Endpoint checks whether the EU possesses the private key \(K_{C}^{-}\) for the public key \(K_{C}^{+}\) contained in the ICT Request, which is explained later. If the check was successful, the /ict Endpoint issues an ICT with appropriate information and signs it with the private key \(K_{OP}^{-}\) of the OP. Thus, \(K_{OP}^{-}\) must be available to the /ict Endpoint. This is a reason why we recommend this simple prototype only for testing purposes but not for production. Finally, the /ict Endpoint returns the ICT to the Client.
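The following hypothetical Python/Flask sketch illustrates this flow (the prototype referenced here is actually written in Go); the endpoint path, JSON field names, claim names, and the five-minute ICT lifetime are assumptions made for illustration only.

```python
import base64, time
import jwt          # PyJWT, used to sign the ICT with K_OP^-
import requests
from flask import Flask, request, jsonify
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

app = Flask(__name__)
OIDC_USERINFO = "https://op.example.com/userinfo"        # assumed OIDC server URL
OP_PRIVATE_KEY_PEM = open("op_private_key.pem").read()   # K_OP^- (testing only!)

@app.post("/ict")
def issue_ict():
    body = request.get_json()
    # Validate the AT implicitly by forwarding it in a Userinfo Request.
    userinfo = requests.get(
        OIDC_USERINFO,
        headers={"Authorization": request.headers["Authorization"]}).json()
    # Check the PoP: the signature over the nonce must verify with K_C^+.
    client_key = serialization.load_pem_public_key(body["public_key"].encode())
    client_key.verify(base64.b64decode(body["pop"]), body["nonce"].encode(),
                      padding.PKCS1v15(), hashes.SHA256())
    # Issue a short-lived ICT binding the user's claims to K_C^+.
    now = int(time.time())
    ict = jwt.encode(
        {"iss": "https://op.example.com", "sub": userinfo["sub"],
         "name": userinfo.get("name"), "cnf_key": body["public_key"],
         "iat": now, "exp": now + 300},
        OP_PRIVATE_KEY_PEM, algorithm="RS256")
    return jsonify({"ict": ict})
```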
After reception of the user information, the /ict Endpoint checks whether the EU possesses the private key \(K_{C}^{-}\) for the public key \(K_{C}^{+}\) contained in the ICT Request, which is explained later. If the check was successful, the /ict Endpoint issues an ICT with appropriate information and signs it with the private key \(K_{OP}^{-}\) of the OP. Thus, \(K_{OP}^{-}\) must be available to the /ict Endpoint. This is a reason why we recommend this simple prototype only for testing purposes but not for production. Finally, the /ict Endpoint returns the ICT to the Client. To save communication overhead between the /ict Endpoint and the Client, we propose the following PoP. The Client chooses a nonce, concatenates it with a timestamp, signs the concatenation with its private key \(K_{C}^{-}\), and includes concatenation and signature in the ICT Request. The /ict Endpoint verifies the signature with the public key \(K_{C}^{+}\) and caches the nonce for 30 seconds. To counter replay attacks, the /ict Endpoint accepts only ICT Requests with timestamps in the concatenation that deviate at most 15 seconds from its own time and whose nonce is not in the cache.
Figure 9: Simple extension to an OIDC server to handle ICT Requests.
## 8 Evaluation We evaluate the performance of the provided /ict Endpoint, written in Go, compared to the /token Endpoint of the Keycloak 22.0.1 and Authentik 2023.6.1 OIDC server software. They are written in Java and Go, respectively. We conduct the following two experiments. (A) A Client sends a Refresh Token to the /token Endpoint of the OIDC server and obtains an ID Token, an RT, and an AT. (B) A Client generates a PoP, sends an AT to the new /ict Endpoint, and obtains an ICT. Both experiments are conducted over one minute, i.e., a token is requested, returned, and then the next request is sent. We ran each experiment 20 times and computed mean requests per minute including confidence intervals with a confidence level of 95% (\(CI_{0.95}\)) using the Student's t-distribution. We automate this process with the help of a web application2. Footnote 2: The application is programmed in Angular 15 and its code is available on GitHub [https://github.com/oidc2/benchmark](https://github.com/oidc2/benchmark) The OIDC server, its user database based on PostgreSQL 15.2, and the new /ict Endpoint run in separate Docker containers3. The host is a Lenovo ThinkPad T14s with a 2.1 GHz AMD Ryzen 5 PRO 4650U processor, 16 GB RAM, and a 512 GB SSD with Windows 11 22H2 x64, and running the Docker engine4 24.0.2 in WSL 25. While Authentik can import and export any private keys, Keycloak cannot export private keys and it can import only RSA keys. Therefore, we chose RS256 for signatures, i.e., a 2048 bit RSA key with the SHA-256 hashing algorithm, to make experiments with different server software comparable. Footnote 3: [https://github.com/oidc2/op-extension/blob/main/docker-compose.yaml](https://github.com/oidc2/op-extension/blob/main/docker-compose.yaml) Footnote 4: [https://www.docker.com/](https://www.docker.com/) Footnote 5: [https://learn.microsoft.com/en-us/windows/wsl/](https://learn.microsoft.com/en-us/windows/wsl/) With Keycloak, a mean request rate of 994.00 IDTs (A) (\(CI_{0.95}\): [992.97; 995.03]) and 988.20 ICTs (B) (\(CI_{0.95}\): [986.72; 989.68]) could be served per minute6. In contrast, with Authentik, 190.95 IDTs (A) (\(CI_{0.95}\): [190.35; 191.35]) and 891.65 ICTs (B) (\(CI_{0.95}\): [886.04; 897.26]) could be served per minute. 
Thus, the tested version of Keycloak is more efficient than the tested version of Authentik. Moreover, the provided /ict Endpoint is as efficient as, or even more efficient than, the built-in /token Endpoint. We compare the work done by the /token Endpoint and the /ict Endpoint. (A) The /token Endpoint validates the RT, creates an IDT, and signs the AT and the IDT with its private key. The integrity of the RT is secured differently7. (B) The /ict Endpoint validates the PoP for the Client's public key, and requests user information using an AT from the /userinfo Endpoint, which validates the AT. Then the /ict Endpoint creates and signs the ICT. Footnote 7: Authentik uses a nonce for the RT stored in the database while Keycloak secures the RT with an HMAC. The effort for creating and signing an IDT in (A) and an ICT in (B) is possibly similar, as both require RT/AT validation, a database request, and a token signature. Thus, creating an RT and an AT, and signing the AT in (A) is apparently at least as time-consuming as creating the PoP at the Client and validating the PoP at the /ict Endpoint in (B). ## 9 Conclusion and Future Work This paper introduced Open Identity Certification with OpenID Connect (OIDC\({}^{\mathbf{2}}\)), which allows End-Users (EUs) to request a verifiable Identity Certification Token (ICT) from an OpenID Provider (OP). An ICT contains claims about an EU and a public key chosen by the EU. Authenticating Users can authenticate EUs with an ICT if they trust the issuing OP. We compared OIDC\({}^{\mathbf{2}}\) to existing end-to-end authentication methods and found that OIDC\({}^{\mathbf{2}}\) is easier to use and improves security by eliminating the need for revocation lists. We suggested how OIDC\({}^{\mathbf{2}}\) can be implemented based on the OIDC framework. We discussed security considerations for and general improvements with OIDC\({}^{\mathbf{2}}\): the trust relationship among its entities, a classification of OPs and their utilization with OIDC\({}^{\mathbf{2}}\), authentication with multiple ICTs to increase the likelihood of successful authentication, as well as appropriate (short) validity periods for ICTs. Furthermore, we proposed how OIDC\({}^{\mathbf{2}}\) can be used for simple and user-friendly end-to-end authentication for video conferences, email, and instant messaging. Finally, we provided a simple, open-source extension for OIDC server software to support OIDC\({}^{\mathbf{2}}\) for testing purposes. We demonstrated its compatibility with Authentik and Keycloak and showed that the performance of the new /ict Endpoint is comparable to or better than that of the existing /token Endpoint. To demonstrate the feasibility of OIDC\({}^{\mathbf{2}}\) for end-to-end authentication, we plan to integrate OIDC\({}^{\mathbf{2}}\) for video conferences based on the open WebRTC protocol, for instant messaging based on the open Matrix protocol, and for email communication based on the PGP standard.
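As a closing illustration of the PoP scheme described in Section 7, the following minimal Python sketch signs a nonce-and-timestamp concatenation on the client side and checks the signature, freshness, and nonce reuse on the endpoint side. It is not the authors' Go implementation; the function names, the in-memory nonce cache, and the use of RSA with SHA-256 (matching the RS256 choice in the evaluation) are illustrative assumptions.

```python
# Illustrative sketch of the nonce-and-timestamp Proof of Possession (Section 7).
# Not the authors' implementation; names and the in-memory cache are hypothetical.
import time
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

MAX_CLOCK_SKEW = 15     # seconds the timestamp may deviate from the endpoint's clock
NONCE_CACHE_TTL = 30    # seconds a nonce is remembered to block replays
_nonce_cache = {}       # nonce -> expiry time

def create_pop(client_private_key):
    """Client side: sign 'nonce|timestamp' with the private key K_C^-."""
    nonce = secrets.token_urlsafe(16)
    payload = f"{nonce}|{int(time.time())}".encode()
    signature = client_private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
    return payload, signature

def verify_pop(payload, signature, client_public_key):
    """/ict endpoint side: verify the signature, then check freshness and reuse."""
    client_public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
    nonce, timestamp = payload.decode().split("|")
    now = time.time()
    if abs(now - int(timestamp)) > MAX_CLOCK_SKEW:
        raise ValueError("stale or future-dated PoP")
    # purge expired nonces, then reject replays
    for n in [n for n, exp in _nonce_cache.items() if exp < now]:
        del _nonce_cache[n]
    if nonce in _nonce_cache:
        raise ValueError("replayed nonce")
    _nonce_cache[nonce] = now + NONCE_CACHE_TTL

if __name__ == "__main__":
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    payload, sig = create_pop(key)
    verify_pop(payload, sig, key.public_key())  # raises on any failure
```

In a real deployment the nonce cache would presumably have to be shared across all /ict Endpoint instances behind the reverse proxy, rather than kept in process memory as sketched here.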
OpenID Connect (OIDC) is widely used as an authentication standard on the web. In this work, we define a new Identity Certification Token (ICT) for OIDC. The ICT is a JSON-based, short-lived token for user authentication. A user can request an ICT from his OP and use it to prove his identity to other users or to services that trust this OP. We call this $OIDC^2$ and compare it with other well-known end-to-end authentication methods. Unlike certificates, $OIDC^2$ does not require installation and can be easily used on multiple devices, making it more user-friendly. We outline protocols for implementing $OIDC^2$ based on existing standards. We discuss the trust relationship between entities involved in $OIDC^2$, propose a classification of OPs' trust level, and propose authentication with multiple ICTs from different OPs. We explain how different applications such as videoconferencing, instant messaging
2309.10902
VALID: A perceptually validated Virtual Avatar Library for Inclusion and Diversity
As consumer adoption of immersive technologies grows, virtual avatars will play a prominent role in the future of social computing. However, as people begin to interact more frequently through virtual avatars, it is important to ensure that the research community has validated tools to evaluate the effects and consequences of such technologies. We present the first iteration of a new, freely available 3D avatar library called the Virtual Avatar Library for Inclusion and Diversity (VALID), which includes 210 fully rigged avatars with a focus on advancing racial diversity and inclusion. We present a detailed process for creating, iterating, and validating avatars of diversity. Through a large online study (n=132) with participants from 33 countries, we provide statistically validated labels for each avatar's perceived race and gender. Through our validation study, we also advance knowledge pertaining to the perception of an avatar's race. In particular, we found that avatars of some races were more accurately identified by participants of the same race.
Tiffany D. Do, Steve Zelenty, Mar Gonzalez-Franco, Ryan P. McMahan
2023-09-19T19:57:03
http://arxiv.org/abs/2309.10902v2
# VALID: A perceptually validated Virtual Avatar Library for Inclusion and Diversity ###### Abstract. As consumer adoption of immersive technologies grows, virtual avatars will play a prominent role in the future of social computing. However, as people begin to interact more frequently through virtual avatars, it is important to ensure that the research community has validated tools to evaluate the effects and consequences of such technologies. We present the first iteration of a new, freely available 3D avatar library called the _Virtual Avatar Library for Inclusion and Diversity_ (_VALID_), which includes 210 fully rigged avatars with a focus on advancing racial diversity and inclusion. We present a detailed process for creating, iterating, and validating avatars of diversity. Through a large online study (\(n=132\)) with participants from 33 countries, we provide statistically validated labels for each avatar's perceived race and gender. Through our validation study, we also advance knowledge pertaining to the perception of an avatar's race. In particular, we found that avatars of some races were more accurately identified by participants of the same race.
and ethnic categorization, VALID includes seven races as recommended by the U.S. Census Bureau report (Census, 2020) (which differs from the 2020 U.S. 
Census): American Indian or Native Alaskan (AIAN)3, Asian, Black or African American (Black), Hispanic, Latino, or Spanish (Hispanic), Middle Eastern or North African (MENA), Native Hawaiian or Pacific Islander (NHPI), and White. Footnote 3: We use racial abbreviations as defined in the U.S. Census Bureau report. VALID includes 210 fully rigged virtual avatars designed to advance diversity and inclusion. We iteratively created 42 base avatars (7 target races \(\times\) 2 genders \(\times\) 3 individuals) using a process that combined data-driven average facial features with extensive collaboration with representative stakeholders from each racial group. To address the longstanding issue of the lack of diversity in virtual designers and to empower diverse voices (Brockett et al., 2019), we adopted a participatory design method. This approach involved actively involving individuals (\(n=22\)) from diverse backgrounds, particularly different racial and ethnic identities, in the design process. By including these individuals as active participants, we aimed to ensure that their perspectives, experiences, and needs were considered and incorporated into the design of the avatars. Once the avatars were created, we sought to evaluate their perception on a global scale. We then conducted a large online study (\(n=132\)) with participants from 33 countries, self-identifying as one of the seven represented races, to determine whether the race and gender of each avatar are recognizable, and therefore validated. We found that all Asian, Black, and White avatars were universally identified as their modeled race by all participants, while our AIAN, Hispanic, and MENA avatars were typically only identified by participants of the same race, indicating that participant race can bias perceptions of a virtual avatar's race. We have since modeled the 42 base avatars in five different outfits (casual, business, medical, military, and utility), yielding a total of 210 fully rigged avatars. To foster diversity and inclusivity in virtual avatar research, we are making all of the avatars in our library freely available to the community as open source models, in addition to the avatars, we are also providing statistically validated labels for the race and gender of all 42 base avatars. Our models are available in FBX format, are compatible with previous libraries like Rocketbox (Rockett et al., 2019), and can be easily integrated into most game engines such as Unity and Unreal. Additionally, the avatars come equipped with facial blend shapes to enable researchers and developers to easily create dynamic facial expressions and lip-sync animations. All avatars, labels, and metadata can be found at our GitHub repository: [https://github.com/xrtlab/Validated-Avatar-Library-for-Inclusion-and-Diversity-VALID](https://github.com/xrtlab/Validated-Avatar-Library-for-Inclusion-and-Diversity-VALID). This paper makes three primary contributions: 1. We provide 210 openly available, fully rigged, and perceptually validated avatars for the research community, with a focus on advancing diversity and inclusion. 2. Our diversity-represented user study sheds new light on the ways in which people's own racial identity can affect their perceptions of a virtual avatar's race. In our repository, we also include the agreement rates of all avatars, disaggregated by every participant race, which offers valuable insights into how individuals from different racial backgrounds perceive our avatars. 3. 
We describe a comprehensive process for creating, iterating, and validating a library of diverse virtual avatars. Our approach involved close collaboration with stakeholders and a commitment to transparency and rigor. This could serve as a model for other researchers seeking to create more inclusive and representative virtual experiences. ## 2. Related Work In this section, we describe how virtual avatars are used within current research in order to highlight the need for diverse avatars. We conclude the section with a discussion on currently available resources used for virtual avatars and virtual agents. ### Effect of Avatar Race Virtual avatars are widely used in research simulations such as training, education, and social psychology. The race of a virtual avatar is a crucial factor that can affect the outcomes of these studies. For example, research has shown that underrepresented students often prefer virtual instructors who share their ethnicity (Brockett et al., 2019; Brockett et al., 2019). Similarly, studies have suggested that designing a virtual teacher of the same race as inner-city youth can have a positive influence on them (Brockett et al., 2019), while a culturally relevant virtual instructor, such as an African-American instructor for African-American children, can improve academic achievement (Kirch et al., 2019). The design of virtual avatars is especially important for minority or marginalized participants. Kim and Lim (Kim and Lim, 2019) reported that minority students who feel unsupported in traditional classrooms develop more positive attitudes towards avatar-based learning. In addition, children with autism spectrum disorder treat virtual avatars as real social partners (Brockett et al., 2019; Brockett et al., 2019). Therefore, to better meet the needs of all individuals participating in such studies, it is important for researchers to have access to diverse avatarars that participants can comfortably interact with. Diversity in virtual avatars is important not only for improving representation, but also for enhancing the effectiveness of simulations. Halan et al. (Halan et al., 2019) found that medical students who trained with virtual patients of a particular race demonstrated increased empathy towards real patients of that race. Similarly, Bickmore et al. (Bickmore et al., 2019) showed that interacting with a minority virtual avatar reduced racial biases in job hiring simulations. These findings highlight the importance of diverse and inclusive virtual avatars in research simulations and emphasize the need for more comprehensive representation of different races. Access to a wide range of validated avatars through VALID will help to create more inclusive and representative simulations, and enable researchers to investigate the impact of avatar race or gender on participants' experiences. This will help improve the inclusivity of simulations and contribute towards addressing issues of bias. ### Implicit Racial Bias and Virtual Avatars Avatars are becoming increasingly important in immersive applications, particularly in the realm of VR, where they are becoming ubiquitous (M have demonstrated that embodying a darker-skinned avatar in front of a virtual mirror can reduce implicit racial biases (Safra et al., 2016; Safra et al., 2017; Safra et al., 2018; Safra et al., 2019), which are unconscious biases that can lead to discriminatory behavior (Safra et al., 2018). For instance, Salmanowitz et al. 
(Salmanowitz et al., 2019) found that a VR participant's implicit racial bias affects their willingness to convict a darker-skinned suspect based on inconclusive evidence. Similarly, Peck et al. (Peck et al., 2019) found that each participant's implicit racial bias was related to their nuanced head and hand motions in a firearm simulation. These foundational studies provide compelling evidence that embodying an avatar of a different race can affect implicit biases and further emphasize the need for diverse avatar resources. Our study examines how participants perceive the race of diverse virtual avatar's race affects user interactions (e.g., (Brockett, 2017; Safra et al., 2018; Safra et al., 2019)), little research has been conducted on how individuals _actively perceive_ the race of virtual avatars. Setoh et al. (Setoh et al., 2019) note that racial identification can predict implicit racial bias, making it crucial to understand how people perceive the race of virtual avatars to further investigate these effects. ### Own-Race Bias Own-race bias, also known as the "other-race effect," refers to the phenomenon in which individuals process the faces of their own race differently from those of other races (Safra et al., 2016; Safra et al., 2018; Safra et al., 2019; Safra et al., 2019). Studies have suggested that this bias can influence the way individuals categorize race. For example, MacLin and Malpass (MacLin and Malpass, 2018) found that Hispanic participants were more likely to categorize Hispanic faces as fitting their racial category than Black faces, and Blascovich et al. (Blascovich et al., 2018) observed that participants who strongly identify with their in-group are more accurate in identifying in-group members. Although own-race bias has not yet been studied in the context of 3D virtual avatars, Saneyoshi et al. (Sanevoshi et al., 2019) recently discovered that it extends to the uncanny valley effect (Safra et al., 2019) for 2D computer-generated faces. Specifically, they found that Asian and European participants rated distorted faces of their own race as more unpleasant than those of other races. Building on this research, we extended the study of own-race bias to 3D virtual avatars and focused on race categorization rather than perceived pleasantness. Our study included avatars and participants from seven different races, providing insights into how a diverse user population may interact within equally diverse virtual worlds. ### Virtual Avatar Resources There are numerous resources for creating virtual avatars. Artists can use 3D modeling tools, such as Autodesk 3ds Max4, Autodesk Maya5, Blender6, or Zbrush7 to manually model, texture, and rig virtual avatars. However, such work requires expertise in 3D modeling and character design, and is often a tedious process (Blender, 2017). On the other hand, parametric models, including freely available tools like MakeHuman8 and Autodesk Character Generator9, as well as commercially available ones such as Daz3D10, Poser11, and Reallusion Character Creator12, enable users to generate virtual avatars from predefined parameters, thereby significantly expediting the avatar generation process. Nonetheless, using these tools still requires learning a new program and time to customize each model, despite the absence of the artistic expertise needed for manual tools. 
Footnote 1: [https://www.autodesk.com/products/3ds-max/overview](https://www.autodesk.com/products/3ds-max/overview) Footnote 2: [https://www.autodesk.com/products/maya/overview](https://www.autodesk.com/products/maya/overview) Footnote 3: [https://www.blender.org](https://www.blender.org) Footnote 4: [https://www.maxon.net/en/zhrush](https://www.maxon.net/en/zhrush) Another alternative to traditional modeling is to use scanning technologies, which can capture 3D models of real people. For instance, Shapiro et al. (Shapiro et al., 2018) and Waltemate et al. (Waltemate et al., 2018) used 3D cameras and photogrammetry, respectively, to capture 3D models of their users. Singular Inversions FaceGen Modeller13 has also been employed to generate 3D faces from user photos and then apply them to a general 3D avatar body (Brockett, 2017; Safra et al., 2018). However, scanning approaches require the ability to physically scan the user, limiting their use for certain applications, particularly remote ones. Footnote 13: [http://www.makhumancommunity.org/](http://www.makhumancommunity.org/) Most closely related to our goal of providing a free and open library of ready-to-use avatars is the Microsoft Rocketbox library (Blender, 2017) and its accompanying HeadBox (Safra et al., 2018) and MoveBox (Brockett, 2017) toolkits. Rocketbox provides a free set of 111 fully rigged adult avatars of various races and outfits. However, it falls short in terms of representation by not including any avatars of AIAN or NHPI descent. Additionally, the library offers only a limited number of Asian, Hispanic, and MENA avatars, excluding minority representations for some professions (e.g.,Rocketbox does not include any Asian medical avatars). Furthermore, none of the available avatar libraries have been validated by user perception studies to ensure their efficacy and inclusivity. Therefore, our VALID project aims to fill this gap by providing a free and validated library of diverse avatars. ## 3. Avatar Creation Procedure This section outlines our iterative process for developing the VALID library, which includes 42 base avatars. We began by using data-driven averaged facial features to create our initial models. We then conducted interviews with representative volunteers to iteratively refine and modify the avatars based on their feedback. ### Initial Modeling To ensure a broad diversity of people were represented in our library, we initially created 42 base avatars (7 target races \(\times\) 2 genders \(\times\) 3 individuals ) modeled after the seven racial groups recommended by the 2015 National Content Test Race and Ethnicity Analysis Report (Safra et al., 2018): AIAN, Asian, Black, Hispanic, MENA, NHPI, and White. We created 3 male and 3 female individuals for each race, resulting in a total of 6 individuals per race. Preliminary models were based on averaged facial features of multiple photos selected from the 10k US Adult Faces Database (Brockett, 2017) and stock photos from Google for races missing from the database (e.g., AIAN, MENA, and NHPI). These photos were used as input to a face-averaging algorithm (Safra et al., 2018), which extracted average facial features for each race and gender pair. 
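As a rough illustration of the shape-averaging idea behind this face-averaging step (the actual averaging was performed with WebMorph, and the file layout and landmark format below are hypothetical), corresponding facial landmarks from the selected reference photos of one race and gender pair could simply be averaged:

```python
# Simplified sketch of the shape-averaging step; the authors used WebMorph.
# File names and the JSON landmark format are hypothetical assumptions.
import json
from pathlib import Path
import numpy as np

def load_landmarks(path):
    """Each JSON file is assumed to hold a list of (x, y) facial landmark points."""
    return np.asarray(json.loads(Path(path).read_text()), dtype=float)

def average_face_shape(landmark_files):
    """Average corresponding landmarks across the 4-7 reference photos of one
    race/gender pair to obtain a mean face shape used as a modeling reference."""
    shapes = np.stack([load_landmarks(f) for f in landmark_files])  # (n_faces, n_points, 2)
    return shapes.mean(axis=0)                                      # (n_points, 2)

# Hypothetical usage: mean shape for one reference set
# mean_shape = average_face_shape(sorted(Path("landmarks/asian_f").glob("*.json")))
```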
Using these averages as a reference, a 3D artist recreated the average faces for each race and gender pair using Autodesk Character Generator (due to its generous licensing and right to freely edit and distribute generated models14) and Blender to make modifications not supported by Autodesk Character Generator (see Figure 1). Footnote 14: [https://knowledge.autodesk.com/support/character-generator/learn-explore/cas/sidcarticles/sidcarticles/Character-Generator-Legal-Restrictions-and-Allowances-when-using-Character-Generator.html](https://knowledge.autodesk.com/support/character-generator/learn-explore/cas/sidcarticles/sidcarticles/Character-Generator-Legal-Restrictions-and-Allowances-when-using-Character-Generator.html) ### Iterative Improvements through Representative Interviews After the preliminary avatars were created based on the facial averages, we worked closely with 2 to 4 volunteers of each represented race (see Table 1) to adjust the avatars through a series of Zoom meetings. This process ensured that all avatars were respectful and reduced the likelihood of harmful or stereotypical representations. Volunteers self identified their race and were recruited from university cultural clubs (e.g., Asian Student Association, Latinx Student Association), community organizations (e.g., Pacific Islanders Center), and email lists. We iteratively asked these volunteers for feedback on all avatars representing their race, showing them the model from three perspectives (see Figure 2). Volunteers were specifically asked to identify accurate features and suggest changes to be made. Once the changes were completed based on the feedback, we presented the updated avatars to the volunteers. This process was repeated until they approved the appearance of the avatars. For example, volunteers requested changes to facial features, such as: * _"Many Native women I know have a softer face and jawline [than this avatar]."_ -AlAN Volunteer 3 * _"The nose bridge is too high and looks too European. Asians mostly have low nose bridges."_ -Asian Volunteer 2 * _"Middle Eastern women usually have wider, almond-shaped eyes."_ -MENA Volunteer 2 * _"The nose [for this avatar] should be a little bit thicker, less pointy, and more round."_ -NHPI Volunteer 1 Additionally, we modified hairstyles according to feedback: * _[These hairstyles] look straighter and more Eurocentric. So I would choose [these facial features] and then do a natural [hair] texture."_ -Black Volunteer 1 * _"Usually the men have curly hair or their hair is cut short on the sides with the top showing."_ -NHPI Volunteer 1 Once the avatars were approved by their corresponding volunteer representatives, we conducted an online study to validate the race and gender of each avatar based on user perceptions. ## 4. Avatar Validation Study We conducted an online, worldwide user study to determine whether the target race and gender of each avatar is recognizable and, therefore, validated. Participants were recruited from the online Prolific marketplace15, which is similar to Amazon Mechanical Turk. Prior research shows that Prolific has a pool of more diverse and honest participants (Sundhi et al., 2018) and has more transparency than Mechanical Turk (Sundhi et al., 2018). Since diversity was a core theme of our research, we chose Prolific to ensure that our participants would be diverse. 
\begin{table} \begin{tabular}{l l l} \hline \hline **Race** & **Gender** & **Country** \\ \hline **ALAN** & 2M, 1F & USA (Native American) (3) \\ **Asian** & 2M, 2F & China (1), USA (Chinese-American) (1), USA (Vientnamese-American) (2) \\ **Black** & 1M, 2F & USA (African-American) (3) \\ **Hispanic** & 2M, 1F & Mexico (1), USA (Mexican-American) (2) \\ **MENA** & 1M, 2F & Iran (2), Saudi Arabia (1) \\ **NHPI** & 1M, 2F & Samoa (2), USA (Native Hawaiian) (1) \\ **White** & 2M, 1F & USA (3) \\ \hline \hline \end{tabular} \end{table} Table 1. Breakdown of our volunteer representatives by race, gender (male, female, or non-binary), and country. Figure 1. An example of the creation of a 3D avatar using our methodology. 1) We select 4:7 faces from a database (Ashman et al., 2018) or stock photos. 2) We calculate the average face using WebMorph (Krishnan et al., 2018). 3) A 3D artist recreates the average face using modeling software. 4) The models are improved iteratively through recurrent consultation with representative volunteers. ### Procedure The following procedure was reviewed and approved by our university Institutional Review Board (IRB). The study consisted of one online Qualtrics survey that lasted an average of 14 minutes. Each participant first completed a background survey that captured their self-identified demographics, including race, gender, and education. Afterwards, they were asked to familiarize themselves with the racial terms as defined by the U.S. Census Bureau research (Stenbury et al., 2015). Participants were then asked to categorize the 42 avatars by their perceived race and gender. Participants were shown only one avatar at a time and the order was randomized. For each of the avatars, participants were shown three perspectives: a 45\({}^{\circ}\) left headshot, a direct or 0\({}^{\circ}\) headshot, and a 45\({}^{\circ}\) right headshot (see Figure 2). Avatars were shown from the shoulders up and were dressed in a plain gray shirt. The images were rendered in Unity using the standard diffuse shader and renderer. The avatars were illuminated by a soft white (#FFFF5) directional light with an intensity of 1.0, and light gray (#7F7FF) was used for the background. Participants were asked to select all races that each avatar could represent: "American Indian or Alaskan Native", "Asian", "Black or African American,"Hispanic, Latino, or Spanish", "Middle Eastern or North African," Native Hawaiian or Pacific Islander, "White", or "Other". "Other" included an optional textbook if a participant wanted to be specific. We allowed participants to select multiple categories according to the U.S. Census Bureau's recommendations for surveying race (Stenbury et al., 2015). For gender, participants were able to select "Male", "Female", or "Non-binary". Participants were paid $5.00 via Prolific for completing the study. ### Participants A total of 132 participants (65 male, 63 female, 4 non-binary) from 33 different countries were recruited to take part in the study. We aimed to ensure a diverse representation of perspectives by balancing participants by race and gender. Table 2 provides a breakdown of our participants by race, gender, and country. Despite multiple recruitment attempts, including targeted solicitations via Prolific, we had difficulty recruiting NHPI participants. It is important to note that we excluded volunteers who had previously assisted with modeling the avatars from participating in the validation study to avoid potentially overfitting their own biases. 
### Data Analysis and Labeling Approach To validate the racial identification of our virtual avatars, we used Cochran's Q test (Cochran, 2016), which allowed us to analyze any significant differences among the selected race categories. This approach was necessary since our survey format allowed participants to select more than one race category for each avatar, following the U.S. Census Bureau's research recommendations (Stenbury et al., 2015). Since the Chi-squared goodness of fit test requires mutually exclusive categories, we were unable to use it in our analysis. Furthermore, since our data was dichotomous, a repeated-measures analysis of variance (ANOVA) was not appropriate. Therefore, Cochran's Q test was the most appropriate statistical analysis method for our survey data. We used a rigorous statistical approach to assign race and gender labels to each avatar. First, we conducted the Cochran's Q test across all participants (\(n=132\)) at a 95% confidence level to identify significant differences in the participants' responses. If the test indicated significant differences, we performed pairwise comparisons between each race using Dunn's test to determine which races were significantly different. For each avatar, we assigned a race label if the race was selected by the majority of participants (i.e., over 50% of participants selected it) and if the race was selected significantly more than other race choices and not significantly less than any other race. This approach resulted in a single race label for most avatars, but some avatars were assigned multiple race labels due to multiple races being selected significantly more than all other races. If no race was selected significantly more than the majority, then we categorized the avatar as "Ambiguous". We followed a similar procedure for assigning gender labels. To account for the possibility that the race of the participant might influence their perception of virtual race, we also assigned labels based on same-race participants. This involved using the same procedure for assigning labels as described above, except based only on the selections of participants who identified as the same race as the avatar. This also allows future researchers to have the flexibility to use the labels from all study participants for studies focused on individuals from diverse racial backgrounds or to use the labels from participants of the same race for studies targeting specific racial groups. \begin{table} \begin{tabular}{l l l} \hline \hline **Race** & **Gender** & **Country** \\ \hline **AIAN** & 9M, 10F, 1NB & U.S. (14), Canada (3), Chile (1), Mexico (1), Cambodia (1) \\ **Asian** & 10M, 10F & UK. (5), Canada (3), South Africa (3), India (2), Indonesia (2), China (1), France (1), Germany (1), Italy (1), Malaysia (1) \\ **Black** & 10M, 9F, 1NB & South Africa (16), Nigeria (2), Swaziland (1), UK. (1) \\ \hline **Hispanic** & 10M, 9F, 1NB & Mexico (15), Chile (3), Portugal (2) \\ **MENA** & 10M, 10F & Israel (9), Lebanon (3), Jordan (2), Bahrain (1), Egypt (1), Iran (1), Iraq (1) \\ **NHPI** & 7M, 5F & New Zealand (Maori) (8), Samoa (2), Fiji (1), U.S. (1) \\ **White** & 9M, 10F, 1NB & Portugal (5), Italy (5), Poland (3), Mexico (2), Belgium (1), Greece (1), Ireland (1), U.S. (1) \\ \hline \hline \end{tabular} \end{table} Table 2. Breakdown of our validation study’s participants by race, gender (male, female, or non-binary), and country. Figure 2. An example of how each avatar was presented to participants during our validation study. ## 5. 
Results ### Validated Avatar Labels Table 3 summarizes our results and labels for all 42 base avatars across all participants and for same-race participants. #### 5.1.1. Race and Gender Labels Asian, Black, and White avatars were correctly identified as their intended race across all participants, while most of the remaining avatars were accurately identified by same-race participants (see Table 3 for all and same-race agreement rates). Therefore, we observed some differences in identification rates based on the race of the participants, highlighting the potential impact of own-race bias on the perception of virtual avatars. Notably, there were no significant differences in gender identification rates based on participant race, indicating that all avatars were correctly perceived as their intended gender by all participants, regardless of their racial background. #### 5.1.2. Naming Convention If an avatar was identified as its intended race by corresponding same-race participants, we named it after that race. For instance, the avatar Hispanic_M_2 was labeled as White by all participants. However, our Hispanic participants perceived it as solely Hispanic. Hence, we left the original name. However, if an avatar was labeled as "Ambiguous" or as a different race by same-race participants, we added an X at the beginning of its name to indicate that it was not validated. Avatars were also labeled by their identified gender ("M" or "F"). ### Other-Race vs. Same-Race Perception To further examine how participant race affected perception of virtual avatar race, we additionally analyzed the data by separating same-race and other-race agreement rates. In effect, we separated the selections of the participants who were the same race as the avatar modeled and those who were not. #### 5.2.1. Difference in Agreement Rates Figure 3 displays the difference in agreement rates between same-race and other-race. Figure 3 shows that several avatars were strongly identified by both other-race and same-race participants. In particular, all Asian, Black, and White avatars were perceived as their intended race with high agreement rates by both same-race and other-race participants (over 90% agreement for all but one). However, some avatars were only identified by participants of the same race as the avatar. For example, our analysis of the agreement rates for different racial groups revealed interesting trends. For instance, non-Hispanic participants had an average agreement rate of 54.5% for Hispanic avatars, while Hispanic participants had a much higher average agreement of 75.0%. Similar patterns were observed for AIAN (57.8% other-race, 75.0% same-race) and MENA (40.4% other-race, 68.0% same-race) avatars. #### 5.2.2. Perceived Race Clusters To gain deeper insights into how participants perceived the avatars' races, we employed Principle Component Analysis (PCA) to reduce the agreement rates of each of the 42 base avatars down to two dimensions. Next, we performed K-means clustering (Srivastava et al., 2017) on the resulting two-dimensional data to group the avatars based on their perceived race. We optimized the number of clusters using the elbow method and distortion scores (Brandt et al., 2016). We applied this technique to both other-race and same-race agreement rates to determine whether there were any differences in the clustering based on participant race. By visualizing the clusters, we aimed to better understand the differences in how participants perceived the avatars' races. 
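The clustering analysis just described could be sketched roughly as follows, assuming a hypothetical CSV with one row per avatar and one agreement-rate column per race category; the file name and the chosen number of clusters are assumptions, not the authors' exact pipeline.

```python
# Rough sketch: reduce each avatar's per-race agreement rates to 2D with PCA,
# then cluster with K-means; k is chosen via the elbow of the inertia curve.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rates = pd.read_csv("agreement_rates.csv", index_col="avatar")  # hypothetical file
coords = PCA(n_components=2).fit_transform(rates.values)

# Inertia (distortion) scores for candidate cluster counts, for the elbow method.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(coords).inertia_
            for k in range(2, 10)}

k = 5  # e.g., the elbow read off the inertia curve
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)
clusters = pd.Series(labels, index=rates.index, name="cluster")
print(clusters.sort_values())
```

Running this separately on the other-race and same-race agreement rates would yield the two cluster maps compared in Figure 4.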
Figure 4 shows that Asian, Black, and White avatars were perceived consistently by all participants, with clearly defined clusters. However, there was more confusion in perceiving AIAN, Hispanic, MENA, and NHPI avatars, which clustered closer together. Same-race participants had less overlap and more-accurately perceived these avatars, with more separation between them. For example, the Hispanic and MENA avatars were in separate clusters for same-race participants, except for one avatar (Hispanic_F_2). On the other hand, the Hispanic and MENA avatars were entirely clustered together for other-race participants. ## 6. Discussion In this section, we discuss the validation of our avatars. Specifically, we examine the extent to which each avatar was correctly identified as its intended race and the variability in identification across different participant groups. Additionally, we discuss the implications of our results for virtual avatar research, highlighting the importance of considering the potential impact of own-race bias on avatar race perception. Finally, we describe the potential future impact of our avatar library in the community, including how it can be used to promote diversity and inclusion. ### Race Identification #### 6.1.1. Universally Identified Avatars We found that our Asian, Black, and White avatars were recognized by all participants with high agreement rates. This suggests that these avatars can be a valuable tool for researchers seeking to create virtual humans that can be easily identified by individuals from different racial backgrounds. Our results may be due to perceptual expertise or familiarity with other-race faces, as proposed by Civile et al. (Civile et al., 2017). We hypothesize that this familiarity could be explained by the prevalence of these racial groups in global media and pop culture. For example, White cast members were the most represented in popular Hollywood movies over the last decade, followed by Black cast members (Civile et al., 2017). Since Hollywood movies have a dominant share in the global film industry (Civile et al., 2017), people may be more familiar with characters that are prevalent in these films. Additionally, East Asian media culture has become widely popular worldwide over the past few decades (Srivastava et al., 2017; Srivastava et al., 2017). Phenomena like "The Hallyu Wave" and "Cool Japan" (Civile et al., 2017) have enabled East Asian films, dramas, and pop music to gain a global following. As people may often encounter these racial groups in media, this familiarity may have facilitated their recognition of these avatars. #### 6.1.2. Same-Race Identified Avatars As expected, some avatars were only identified by participants of the same race as the avatar, consistent with the own-race bias effect. For example, as seen in Table 3, the Hispanic avatars received mixed ratings of White and Hispanic across all participants, but most were perceived as solely Hispanic by Hispanic-only participants. Similarly, only one MENA avatar was perceived as MENA by all participants, while five were perceived as MENA by MENA-only participants. These results suggest that participants' own-race bias, a well-known phenomenon in psychology, may also affect their perception of virtual avatars. The findings point to the importance of considering participants' race when using virtual avatars in research or applications that require accurate representation of different racial groups. #### 6.1.3. 
Ambiguous Awatars Several avatars in our library were perceived ambiguously by all participants and only same-race participants, and therefore labeled as such (see Table 3 for details). Identifying the reason for these avatars' lack of clear identification is not straightforward, and multiple factors could be at play. For instance, the two ambiguous AIAN avatars were the only ones with short hairstyles, which may have impacted their identification as AIAN. Long hair carries cultural and spiritual significance in many AIAN tribes (Krishnan et al., 2017), and some participants may have perceived the avatars as non-AIAN as a result, even among AIAN participants. The validation of our NHPI avatars was limited, possibly due to the low number of NHPI participants (\(n=12\)) in our study, despite our targeted recruitment efforts. As a consequence, most of the NHPI avatars were not validated by NHPI participants, including the lack of validation for any female NHPI avatars. Another potential reason for this lack of validation is that the majority of our NHPI \begin{table} \begin{tabular}{l l l l l l l} \hline \hline **Avaratar** & **Race (All)** & **Agreement** & **Race (Same-Race)** & **Agreement** & **Gender (All)** & **Agreement** \\ \hline [MISSING_PAGE_POST] \_1** & Hispanic & 0.64 & Hispanic & 0.80 & Female & 0.95 \\ **Hispanic\_F\_2** & White & 0.70 & Hispanic/White & 0.55/0.60 & Female & 0.95 \\ **Hispanic\_F\_3** & Hispanic/White & 0.59/0.52 & Hispanic & 0.75 & Female & 0.93 \\ **MENA\_M\_1** & White & 0.56 & MENA & 0.70 & Male & 0.99 \\ **MENA\_M\_2** & White & 0.64 & MENA/White & 0.65/0.60 & Male & 1.00 \\ **MENA\_M\_3** & MENA/White & 0.55/0.65 & MENA/White & 0.75/0.55 & Male & 0.98 \\ **MENA\_F\_1** & White & 0.58 & MENA/White & 0.70/0.60 & Female & 0.98 \\ **MENA\_F\_2** & White & 0.60 & MENA & 0.60 & Female & 0.98 \\ **NHPI\_M\_1** & NHPI & 0.52 & NHPI & 0.58 & Male & 0.98 \\ **NHPI\_M\_2** & Hispanic & 0.65 & NHPI & 0.67 & Male & 1.00 \\ **White\_M\_1** & White & 0.96 & White & 0.95 & Male & 0.99 \\ **White\_M\_2** & White & 0.98 & White & 0.95 & Male & 1.00 \\ **White\_M\_3** & White & 0.93 & White & 0.90 & Male & 0.99 \\ **White\_F\_1** & White & 0.94 & White & 0.95 & Female & 0.97 \\ **White\_F\_2** & White & 0.96 & White & 0.95 & Female & 0.98 \\ **White\_F\_3** & White & 0.94 & White & 0.95 & Female & 0.99 \\ **X\_AIAN\_M\_1** & Hispanic & 0.64 & Hispanic & 0.75 & Male & 0.99 \\ **X\_AIAN\_F\_1** & Hispanic & 0.54 & Hispanic & 0.45 & Female & 0.86 \\ **X\_MEA\_F\_1** & Ambiguous & N/A & Ambiguous & N/A & Female & 0.98 \\ **X\_NHPI\_M\_1** & Hispanic & 0.55 & Asian & 0.58 & Male & 0.98 \\ **X\_NHPI\_F\_1** & Hispanic & 0.55 & Ambiguous & N/A & Female & 0.92 \\ **X\_NHPI\_F\_2** & Hispanic & 0.58 & Ambiguous & N/A & Female & 0.95 \\ **X\_NHPI\_F\_3** & NHPI & 0.52 & Ambiguous & N/A & Female & 0.92 \\ \hline \hline \end{tabular} \end{table} Table 3. Assigned labels for all 42 base avatars. “All” indicates that the label was identified by all 132 participants, while “Same-Race” only includes the data of participants who identify as the race that the avatar was modeled for. Agreement labels were calculated as the percentage of participants who perceived an avatar to represent a race or gender. participants identified themselves as New Zealand Maori, whereas our avatars were developed with the help of Samoaan and Native Hawaiian volunteer representatives. Therefore, it is possible that our NHPI avatars are representative of some NHPI cultures, but not New Zealand Maori. 
In future studies, expanding recruitment efforts for both interview volunteers and study participants will be crucial, despite the challenges involved in doing so. For example, future studies may need to compensate NHPI participants more than participants of other races. ### Implications for Virtual Avatars Our study provides valuable insights for virtual avatar applications and research. Our findings indicate that human behavior in race categorization can apply to virtual avatars, which has notable implications for interactions in virtual experiences. Kawakami et al. (Kawakami et al., 2018) suggest that in-group and out-group categorization can lead to stereotyping, social judgments, and group-based evaluations. Therefore, designers and developers should be aware of this and take necessary steps to mitigate unintended consequences in virtual experiences. For example, regulating codes of conduct (Kawakami et al., 2018) can help to improve interracial interactions in VR. Figure 3. Confusion matrix heatmap of agreement rates for the 42 base avatars by separated by other-race participants and same-race participants (i.e., participants of a different or same race as the avatar). Agreement rates were calculated as the percentage of participants who perceived an avatar to represent a race or gender. Interestingly, our study also replicated a nuanced finding from more recent psychology research on the perception of ambiguous avatars (Steintein et al., 2017). As seen in Table 3, most of the misidentified avatars were identified as Hispanic by all participants. Similarly, Nicolas et al. (Nicolas et al., 2017) recently found that participants classify radially ambiguous photos as Hispanic or MENA, regardless of their parent ethnicities. We believe that this effect extended to our virtual avatars. ### An Open Library of Validated Avatars As a contribution to the research community, we are providing open access to our virtual avatar library, which includes all 210 fully rigged avatars, along with validated labels for each avatar's race and gender. Our library features avatars of seven different races, providing a diverse selection for researchers to use in their studies. The validated labels can facilitate research on the impact of avatar race, and researchers can choose to use the labels for studies aimed at individuals from different racial backgrounds or same-race labels for specific study populations. The _Virtual Avatar Library for Inclusion and Diversity_ (_VALID_) provides researchers and developers with a diverse set of fully rigged avatars suitable for various scenarios such as casual, business, medical, military, and utility. Each avatar comes with 65 facial blend shapes, enabling dynamic facial expressions (see Figure 5). The library is readily available for download and can be used in popular game engines like Unity or Unreal. Although this is the first iteration of the library, we plan to update it by adding more professions and outfits soon. In addition, the library can be used for a wide range of research purposes, including social psychology simulations and educational applications. Figure 4. Clustered scatterplots of each avatar’s relation to one another based on Principle Component Analysis and K-means clustering for other-race and same-race participant identifications. The Voronoi analysis shows the borders of the clusters where each category was assigned. Each avatar is color coded by its validated label, Figure 5. 
Images of the skeleton and facial blend shapes included with our avatars. ### Limitations and Future Work We recognize that our VALID avatar library is only a small step towards achieving greater diversity and inclusion in avatar resources. We acknowledge that the representation of each demographic is limited and plan to expand the diversity within each group by creating new avatars. For example, our Asian avatars are modeled after East Asians, but we plan to expand VALID to include South Asian and Southeast Asian avatars as well. Our Hispanic representatives have pointed out the need for more diverse Hispanic avatars, including varying skin tones to represent different South American populations, such as Mexican and Cuban. Additionally, our NHPI representatives have suggested that the inclusion of tattoos, which hold cultural significance for some NHPI communities, could improve the identifiability of our NHPI avatars, in addition to improving our NHPI recruitment methods. Any future updates to the library will undergo the same rigorous creation, iteration, and validation process as the current avatars. While our first iteration of the library focused on diversity in terms of race, we realize that the avatars mostly represent young and fit adults, which does not reflect all types of people. In the future, we plan to update the library with a diversity of body types that include different body mass index (BMI) representations and ages. Including avatars with different BMI representations is not only more inclusive, but can also be useful for studies targeting physical activity, food regulation, and therapy (Han et al., 2017). Likewise, we plan to include shaders and bump maps (Han et al., 2017) that can age any given avatar by creating realistic wrinkles and skin folds, further improving the diversity and inclusivity of VALID. Another limitation of the current work is that our library includes only male and female representations. In future updates, we plan to include non-binary and androgynous avatars. Currently, there are not many androgynous models that are freely available. However, they can be an important area of study. For example, previous studies found that androgynous avatars reduce gender bias and stereotypical assumptions in virtual agents (Han et al., 2017) and improve student attitudes (Han et al., 2017). Thus, we plan to include these avatars in a future update by following Nag et al.'s (Han et al., 2017) guidelines for creating androgynous virtual humans. Our study, while diverse in terms of race and country, is not representative of everyone. We recruited participants through the online platform Prolific, which is known for its increased diversity compared to other crowdsourcing platforms such as Mechanical Turk. However, due to the online nature of the platform, we primarily recruited younger adults. It is possible that perceptions of our avatars may differ among other age groups, such as children or older adults. Therefore, it is important to broaden recruitment efforts by exploring alternative platforms and recruitment strategies that may be more effective in reaching a wider range of participants. Future studies could also consider conducting in-person studies or focus groups to gather additional insights into avatar perception. ## 7. Conclusion We have introduced a new virtual avatar library comprising 210 fully rigged avatars with diverse professions and outfits, available for free.
Our library aims to promote diversity and inclusion by creating equitable representation of seven races across various professions. We designed 42 base avatars using data-driven facial averages and collaborated with volunteer representatives of each ethnicity. A large validation study involving participants from around the world was conducted to obtain validated labels and metadata for the perceived race and gender of each avatar. Additionally, we offer a comprehensive process for creating, iterating, and validating diverse avatars to aid other researchers in creating similarly validated avatars. Our validation study revealed that the majority of avatars were accurately perceived as the race they were modeled for. However, we observed that some avatars, such as the Hispanic and MENA avatars, were only validated as such by participants who identified as Hispanic or MENA, respectively. This finding suggests that the perception of virtual avatars may be influenced by own-race bias or the other-race effect, as described in the psychology literature. Moving forward, we plan to expand the library to include additional races, professions, body types, age ranges, and gender representations to further improve diversity and inclusion.
As immersive technologies become more widespread, virtual avatars will play an important role in the future of social computing. However, as people communicate through virtual avatars more and more often, it is important for the research community to have validated tools for evaluating the effects and outcomes of such technologies. We present the first iteration of the Virtual Avatar Library for Inclusion and Diversity (VALID), a new, freely available 3D avatar library containing 210 fully rigged avatars and designed to promote racial diversity and inclusion. We present a detailed process for creating, improving, and validating diverse avatars. Through an online study (n=132) with participants recruited from 33 countries, we provide statistically validated labels for the perceived race and gender of each avatar. Through this validation study, we further advance knowledge about the perception of avatar race. In particular,
2301.13397
Sequential Strategic Screening
We initiate the study of strategic behavior in screening processes with multiple classifiers. We focus on two contrasting settings: a conjunctive setting in which an individual must satisfy all classifiers simultaneously, and a sequential setting in which an individual must satisfy classifiers one at a time in order to succeed. In other words, we introduce the combination of strategic classification with screening processes. We show that sequential screening pipelines exhibit new and surprising behavior where individuals can exploit the sequential ordering of the tests to zig-zag between classifiers without having to simultaneously satisfy all of them. We demonstrate that an individual can obtain a positive outcome using a limited manipulation budget even when far from the intersection of the positive regions of every classifier. Finally, we consider a learner whose goal is to design a sequential screening process that is robust to such manipulations, and provide a construction for the learner that optimizes a natural objective.
Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani
2023-01-31T04:08:18
http://arxiv.org/abs/2301.13397v2
# Sequential Strategic Screening ###### Abstract We initiate the study of strategic behavior in screening processes with _multiple_ classifiers. We focus on two contrasting settings: a "conjunctive" setting in which an individual must satisfy all classifiers simultaneously, and a sequential setting in which an individual must satisfy classifiers one at a time in order to succeed. In other words, we introduce the combination of _strategic classification_ with screening processes. We show that sequential screening pipelines exhibit new and surprising behavior where individuals can exploit the sequential ordering of the tests to "zig-zag" between classifiers without having to simultaneously satisfy all of them. We demonstrate that an individual can obtain a positive outcome using a limited manipulation budget even when far from the intersection of the positive regions of every classifier. Finally, we consider a learner whose goal is to design a sequential screening process that is robust to such manipulations, and provide a construction for the learner that optimizes a natural objective. ## 1 Introduction Screening processes (Arunachaleswaran et al., 2022; Blum et al., 2022; Cohen et al., 2020) involve evaluating and selecting individuals for a specific, pre-defined purpose, such as a job, educational program, or loan application. These screening processes are generally designed to identify which individuals are qualified for a position or opportunity, often using multiple sequential classifiers or tests. For example, many hiring processes involve multiple rounds of interviews; university admissions can involve a combination of standardized tests, essays, or interviews. They have substantial practical benefits, in that they can allow a complex decision to be broken into a sequence of smaller and cheaper steps; this allows, for example, splitting a decision across multiple independent interviewers, or across smaller and easier-to-measure criteria and requirements. Many of the decisions made by such screening processes are high stakes. For example, university admissions can affect an individual's prospects for their entire life. Loan decisions can have a long-term (sometimes even inter-generational) effect on a family's wealth or socio-economic status. When these decisions are high stakes, i.e. when obtaining a positive outcome is valuable or potentially life-changing or obtaining a negative outcome can be harmful, individuals may want to manipulate their features to trick the classifier into assigning them a positive outcome. In machine learning, this idea is known as strategic classification, and was notably introduced and studied by Bruckner and Scheffer (2011); Hardt et al. (2016). The current work aims to incorporate strategic classification within screening processes, taking a departure from the classical point of view in the strategic classification literature that focuses on a single classifier (see related work section). The key novel idea of our model of _strategic screening processes (or pipelines)_, compared to the strategic classification literature, comes from the fact that i) an individual has to pass and manipulate her way through _several_ classifiers, and ii) we consider _sequential_ screening pipelines. In a sequential screening pipeline, once an individual (also called _Agent_) has passed a test or stage of this pipeline, she can "forget" about that stage; whether or not she passes the next stage depends _only on her performance in that stage_.
For example, a job candidate who has passed the initial human resources interview may not need to worry about convincing that interviewer, and can instead expend her effort solely on preparing for the first technical round of interviews. Alternatively, imagine a student 'cramming' for a sequence of final exams, where one has a finite capacity to study that is used up over a week of tests. One wants to achieve a minimum score on each test, with a minimum of effort, by studying in between each test. Our goal in this work is to examine how considering a pipeline comprised of a sequence of classifiers affects and modifies the way a strategic agent manipulates her features to obtain a positive classification outcome, and how a learner (which we primarily call the _Firm_) should take this strategic behavior into account to design screening pipelines that are robust to such manipulation. We make a distinction between the following two cases: 1) the firm deploys its classifiers sequentially, which we refer to as a _sequential screening process_; 2) the firm deploys a single classifier whose positive classification region is the intersection of the positive regions of the classifiers that form the pipeline, which we sometimes refer to as _simultaneous (or conjunctive) testing_--this single classifier is basically the _conjunction_ or intersection of classifiers from the pipeline. The former corresponds to a natural screening process that is often used in practice and for which we give our main results, while the latter is primarily considered as a benchmark for our results for the sequential case. **Our Contributions.** We show a perhaps surprising result: an agent can exploit the sequential nature of the screening process and move through the whole pipeline even when she started far from the intersection of the positive classification regions of all classifiers. In other words, the sequentiality of screening processes can _improve_ an agent's ability to manipulate her way through multiple classifiers compared to simultaneous screening. We name the resulting set of strategies for such an agent in the sequential case _"Zig-Zag" strategies_. That is, whenever the agent does not manipulate straight to a point that is classified as positive by the conjunction of all classifiers, we call it a zig-zag strategy. An example of such a strategy that zig-zags between two classifiers is provided in Figure 1. In Figure 1, since there is a small angle \(\theta\) between the two tests, an agent at the bottom of the figure can zag right and then left as shown by the blue lines. In this case, the agent is classified as positive in every single step, and by making \(\theta\) arbitrarily small, will have an arbitrarily lower total cost (e.g., the cumulative \(\ell_{2}\) distance) compared to going directly to the intersection point of the classifiers. We provide concrete classifiers and an initial feature vector for such a case in Example 3.2. In fact, in Section 3.2 we show that for a given point, as \(\theta\) goes to zero, the ratio between the cost of going directly to the intersection and the total cost of the zig-zag strategy can become arbitrarily large. As we assume that the conjunction of the classifiers captures the objective of the firm, using a pipeline can allow more disqualified people to get a positive outcome by manipulating their features.
We show this in Figure 2: this figure shows the region of the agent space that can successfully manipulate to pass two linear tests in the two-dimensional setting, given a budget \(\tau\) for manipulation. As shown by the figure, individuals in the green region of Figure 2.c can pass the tests in the sequential setting but would not be able to do so if they had to pass the tests simultaneously. We further show how the optimal zig-zag strategy of an agent can be obtained computationally efficiently via a simple convex optimization framework in Section 3.3 and provide a closed-form characterization of this strategy in the special case of 2-dimensional features and a pipeline of exactly two classifiers in Section 3.4. In Section 3.5 we consider a "monotonicity" condition under which agents prefer to use the simple strategy that passes all classifiers simultaneously in a single move and does not zig-zag between classifiers. Finally, in Section 4.1, we exhibit a defense strategy that maximizes true positives subject to not allowing any false positives. Interestingly, we show that under this strategy, deploying classifiers sequentially allows for a higher utility for the firm than using a conjunction of classifiers. Figure 1: Suppose the agent is the disqualified (i.e., placed in the negative region of the conjunctions of \(h_{1},h_{2}\)) point. A trivial manipulation strategy is to use the shortest _direct_ path to the positive region, which is the dashed red path. However, the agent may also first manipulate slightly to pass \(h_{1}\), then manipulate minimally again to pass \(h_{2}\), as depicted with the blue solid path. This is what we call a zig-zag strategy. **Related Work.** Our work inscribes itself at the intersection of two recent lines of work. The first one studies how strategic behavior affects decision-making algorithms (e.g. regression or classification algorithms), and how to design decision rules that take into account or dis-incentivize strategic behavior. This line of work is extensive and comprises the works of (Bruckner and Scheffer, 2011; Hardt et al., 2016; Kleinberg and Raghavan, 2020; Braverman and Garg, 2020; Miller et al., 2020; Liu et al., 2020; Jagadeesan et al., 2021; Haghtalab et al., 2020; Meir et al., 2010, 2011, 2012; Dekel et al., 2010; Chen et al., 2018; Cummings et al., 2015; Khajehnejad et al., 2019; Ustun et al., 2019; Chen et al., 2020; Bjorkegren et al., 2020; Dee et al., 2019; Perote and Perote-Pena, 2004; Ahmadi et al., 2021; Tang et al., 2021; Hu et al., 2019; Milli et al., 2019; Perdomo et al., 2020; Ghalme et al., 2021; Braverman and Garg, 2020; Ahmadi et al., 2022; Bechavod et al., 2021, 2022; Shavit et al., 2020; Dong et al., 2018; Chen et al., 2020; Harris et al., 2021). The second line of work is separate and aims to understand how decisions compose and affect each other in decision-making and screening pipelines (Cohen et al., 2020; Bower et al., 2017; Blum et al., 2022; Arunachaleswaran et al., 2022; Dwork et al., 2020; Dwork and Ilvento, 2018). These works study settings in which _multiple_ decisions are made about an individual or an applicant. However, and to the best of our knowledge, there is little work bringing these two fields together and studying strategic behavior in the context of decision _pipelines_ comprised of _multiple_ classifiers. This is where the contribution of the current work lies.
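To make the two screening modes concrete before the formal model in the next section, here is a minimal sketch, not from the paper itself, of how a conjunctive test and a sequential pipeline of linear threshold classifiers evaluate an agent's (possibly manipulated) feature vectors; the weights and thresholds below happen to match Example 3.2 later in the paper:

```python
import numpy as np

# Two illustrative linear tests h_i(x) = 1[w_i . x >= b_i].
W = np.array([[-3.0, 4.0],
              [ 1.0, 0.0]])
b = np.array([1.0, 1.0])

def conjunctive_pass(x):
    """Simultaneous testing: a single feature vector must satisfy every test."""
    return bool(np.all(W @ x >= b))

def sequential_pass(xs):
    """Sequential testing: stage i only looks at the vector presented at stage i."""
    return all(W[i] @ xs[i] >= b[i] for i in range(len(b)))

x_honest = np.array([0.0, 0.0])
print(conjunctive_pass(x_honest))              # False: fails both tests at once
print(sequential_pass([np.array([0.0, 0.25]),  # passes h_1 only,
                       np.array([1.0, 0.25])]))# then passes h_2 only -> True
```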
## 2 Our Model Formally, individuals (or agents) are represented by a set of features \(x\in\mathcal{X}\), where \(\mathcal{X}\subseteq\mathbb{R}^{d}\), for \(d\geq 1\). The firm has a fixed sequence of binary tests or classifiers \(h_{1},h_{2},\ldots,h_{k}:\mathcal{X}\to\{0,1\}\) that are deployed to select qualified individuals while screening out unqualified individuals. Here, an outcome of 1 (positive) corresponds to an acceptance, and an outcome of 0 (negative) corresponds to a rejection. Once a person is rejected by a test, they leave the pipeline. In the whole paper, we assume that the classifiers are linear and defined by half-spaces; i.e. \(h_{i}(x)=1\iff w_{i}^{\top}x\geq b_{i}\) for some vector \(w_{i}\in\mathbb{R}^{d}\) and real threshold \(b_{i}\in\mathbb{R}\). Equivalently, we often write \(h_{i}(x)=1\left[w_{i}^{\top}x\geq b_{i}\right]\).1 Footnote 1: While more general classes of classifiers could be considered, linear classifiers are a natural starting point to study strategic classification. This linearity assumption arises in previous work, e.g. (Kleinberg and Raghavan, 2020; Tang et al., 2021; Ahmadi et al., 2022) to only name a few. Figure 2: Each agent has a manipulation budget of \(\tau\) and the cost function is \(\ell_{2}\) distance. Then, \((a)\) shows the region of agents who can afford to manipulate their feature vectors to pass both tests simultaneously, \((b)\) shows the region of agents who can afford to manipulate their feature vectors to pass the tests sequentially (i.e., first \(h_{1}\), then \(h_{2}\)), and \((c)\) shows the difference in these two regions. In this work we assume that the true qualifications of individuals are determined by the conjunction of the classifiers adopted by the firm in the pipeline, i.e. an agent \(x\) is qualified if and only if \(h_{i}(x)=1\) for all \(i\). In other words, the firm has designed a pipeline that makes no error in predicting individuals' qualifications _absent strategic behavior_. However, in the presence of strategic behavior, individuals try to manipulate their feature vectors to become positively classified by the classifiers simply because they receive a positive utility from a positive outcome. Similar to prior works, throughout this work, we assume a "white box" model, meaning agents know the parameters of each classifier. More precisely, the firm commits to using a sequential screening process consisting of classifiers \(h_{1},h_{2},\ldots,h_{k}\), and each agent knows the parameters of each hypothesis, the order of the tests, her own feature value \(x\), and the cost to manipulate to any other point in the input space. An agent's cost function is modeled by a function \(c:\mathcal{X}\times\mathcal{X}\to\mathbb{R}_{\geq 0}\) that takes two points \(x,\hat{x}\) and outputs the cost of moving from \(x\) to \(\hat{x}\). One can think of \(x\) as the initial feature vector of an agent and \(\hat{x}\) as the manipulated features. In the sequential setting that we consider, we take the cost of manipulation to be the cumulative cost across every single manipulation. In particular, for a manipulation path \(x^{(0)}\to x^{(1)}\to x^{(2)}\to\ldots\to x^{(k)}\) taken by an agent whose true feature values are \(x^{(0)}\), the cost of manipulation is given by \(\sum_{i=1}^{k}c(x^{(i-1)},x^{(i)})\). We assume such manipulations do not change nor improve one's true qualifications2 and we discuss how the firm mitigates this effect of manipulation.
Footnote 2: E.g., in a loan application, such manipulations could be opening a new credit card account: doing so may temporarily increase an agent’s credit score, but does not change anything about an agent’s intrinsic financial responsibility and ability to repay the loan. In turn, the firm's goal is to have an accurate screening process whose predictions are as robust to and unaffected by such strategic: the firm modifies its classifiers \(h_{1},\cdots,h_{k}\) to \(\tilde{h}_{1},\cdots,\tilde{h}_{k}\) so that the output of \(\tilde{h}_{1},\cdots,\tilde{h}_{k}\) on manipulated agents' features can identify the qualified agents optimally with respect to a given "accuracy measure"; we will consider two such measures in Section 4. ### Agent's Manipulation We proceed by formally defining the minimal cost of manipulation, which is the minimal cost an agent has to invest to pass all classifiers, and the best response of an agent for both sequential and simultaneous testing. **Definition 2.1** (Manipulation Cost: Sequential).: Given a sequence of classifiers \(h_{1},\ldots,h_{k}\), a global cost function \(c\), and an agent \(x^{(0)}\in\mathcal{X}\), the manipulation cost of an agent in the sequential setting is defined as the minimum cost incurred by her to pass all the classifiers sequentially, i.e., \[c^{*}_{seq}\left(x^{(0)},\{h_{1},\ldots,h_{k}\}\right)=\min_{x^{ (1)},\ldots,x^{(k)}\in\mathcal{X}} \sum_{i=0}^{k-1}c(x^{(i)},x^{(i+1)})\] s.t. \[h_{i}(x^{(i)})=1\ \ \forall i\in[k].\] The _best response_ of \(x^{(0)}\) to the sequential testing \(h_{1},\ldots,h_{k}\) is the path \(x^{(1)},\ldots,x^{(k)}\) that minimizes the objective. **Definition 2.2** (Manipulation Cost: Conjunction or Simultaneous).: Given a set of classifiers \(\{h_{1},\ldots,h_{k}\}\), a global cost function \(c\), and an agent \(x\), the manipulation cost of an agent in the conjunction setting is defined as the minimum cost incurred by her to pass all the classifiers at the same time, i.e., \[c^{*}_{conj}\left(x,\{h_{1},\ldots,h_{k}\}\right)=\min_{z\in \mathcal{X}} c(x,z)\] \[\text{s.t.} h_{i}(z)=1\ \forall i\in[k].\] The _best response_ of \(x\) to the conjunction of \(h_{1}\ldots,h_{k}\) is the \(z\) that minimizes the objective. ## 3 Best Response of Agents in a Screening Process with Oblivious Defender In this section, we study the manipulation strategy of an agent. In particular, we present algorithms to compute optimal manipulation strategies efficiently. We make the following assumption on the cost function in most of the section, unless explicitly noted otherwise: **Assumption 3.1**.: The cost of moving from \(x\) to \(\hat{x}\) is given by \(c(x,\hat{x})=\|\hat{x}-x\|_{2}\), where \(\|.\|_{2}\) denotes the standard Euclidean norm. ### Optimal Strategies in the Conjunction Case As a warm-up to our zig-zag strategy in Section 3.3, we first consider the optimal strategy for our benchmark, which is the case of the simultaneous conjunction of \(k\) classifiers. In the case where agents are supposed to pass a collection of linear classifiers simultaneously, the best response of an agent \(x\in\mathbb{R}^{d}\) is given by solving the following optimization problem \[\min_{z} c(x,z) \tag{1}\] \[\text{s.t.} w_{i}^{\top}z\geq b_{i}\ \forall i\in[k].\] which is a convex program as long as \(c\) is convex in \(z\). In the special case in which \(d=2\) and \(k=2\), i.e. 
when feature vectors are two-dimensional and an agent must be positively classified by the conjunction of two linear classifiers \(h_{1}(x)=\mathbbm{1}(w_{1}^{\top}x\geq b_{1})\) and \(h_{2}(x)=\mathbbm{1}(w_{2}^{\top}x\geq b_{2})\), we provide a closed form characterization of an agent's strategy. We assume that the two classifiers are _not_ parallel to each other because if \(w_{2}=kw_{1}\) for some \(k\in\mathbb{R}\), then one can show that either the acceptance regions of \(h_{1}\) and \(h_{2}\) do not overlap, or the optimal strategy of an agent is simply the orthogonal projection onto the intersection of the acceptance regions of \(h_{1}\) and \(h_{2}\). We further assume, without loss of generality, that \(b_{1}=b_{2}=0\) because if either \(b_{1}\) or \(b_{2}\) is nonzero, one can use the change of variables \(x^{\prime}\triangleq x+s\) to write the classifiers as \(h_{1}(x^{\prime})=\mathbbm{1}(w_{1}^{\top}x^{\prime}\geq 0)\) and \(h_{2}(x^{\prime})=\mathbbm{1}(w_{2}^{\top}x^{\prime}\geq 0)\). Here \(s\) is the solution to \(\{w_{1}^{\top}s=-b_{1},w_{2}^{\top}s=-b_{2}\}\). For any \(w\in\mathbb{R}^{2}\) with \(\|w\|_{2}=1\), let \(P_{w}(x)\) and \(d_{w}(x)\) be the orthogonal projection of \(x\) onto the region \(\{y\in\mathbb{R}^{2}:w^{\top}y\geq 0\}\), and its orthogonal distance to the same region, respectively. We have \[P_{w}(x) \triangleq\begin{cases}x&\text{if }w^{\top}x\geq 0\\ x-(w^{\top}x)w&\text{if }w^{\top}x<0\end{cases},\] \[d_{w}(x) \triangleq\begin{cases}0&\text{if }w^{\top}x\geq 0\\ |w^{\top}x|&\text{if }w^{\top}x<0\end{cases}.\] Given this setup, the best response characterization of an agent \(x\) can be given as follows. If \(h_{1}(x)=h_{2}(x)=1\) then \(z=x\). Otherwise, the best response is either the orthogonal projection onto the acceptance region of \(h_{1}\) or \(h_{2}\), or moving directly to the intersection of the classifiers (\(\vec{0}\)): 1. If \(h_{1}(P_{w_{2}}(x))=1\), then \(z=P_{w_{2}}(x)\) and the cost of manipulation is \(c^{*}_{conj}\left(x^{(0)},\{h_{1},h_{2}\}\right)=d_{w_{2}}(x)\). 2. If \(h_{2}(P_{w_{1}}(x))=1\), then \(z=P_{w_{1}}(x)\) and the cost of manipulation is \(c^{*}_{conj}\left(x^{(0)},\{h_{1},h_{2}\}\right)=d_{w_{1}}(x)\). 3. if \(h_{1}(P_{w_{2}}(x))=h_{2}(P_{w_{1}}(x))=0\) then \(z=\vec{0}\) and the cost of manipulation is \(c^{*}_{conj}\left(x^{(0)},\{h_{1},h_{2}\}\right)=\|x\|_{2}\). Given a budget \(\tau\), agents who can manipulate with a cost of at most \(\tau\) to pass the two tests simultaneously, i.e. \(\left\{x^{(0)}:c^{*}_{conj}\left(x^{(0)},\{h_{1},h_{2}\}\right)\leq\tau\right\}\) is highlighted in Figure 2.a. ### A Zig-Zag Manipulation on Sequential Classification Pipelines Here, we make the observation that the sequential nature of the problem can change how an agent will modify her features in order to pass a collection of classifiers, compared to the case when said classifiers are deployed simultaneously. We illustrate this potentially counter-intuitive observation via the following simple example: **Example 3.2**.: Consider a two-dimensional setting. Suppose an agent going up for classification has an initial feature vector \(x=(0,0)\). Suppose the cost an agent faces to change her features from \(x\) to a new vector \(\hat{x}\) is given by \(\|\hat{x}-x\|_{2}\). Further, imagine an agent must pass two classifiers: \(h_{1}(x)=\mathbbm{1}\left\{4x_{2}-3x_{1}\geq 1\right\}\), and \(h_{2}(x)=\mathbbm{1}\left\{x_{1}\geq 1\right\}\), where \(x_{i}\) is the \(i-\)th component of \(x\). 
It is not hard to see, by the triangle inequality, that if an agent is facing a conjunction of \(h_{1}\) and \(h_{2}\), an agent's cost is minimized when \(\hat{x}=(1,1)\) (this is in fact the intersection of the decision boundaries of \(h_{1}\) and \(h_{2}\)), in which case the cost incurred by an agent is \(\sqrt{1+1}=\sqrt{2}\) (see the red manipulation in Figure 3). However, if the classifiers are offered sequentially, i.e. \(h_{1}\) then \(h_{2}\), consider the following feature manipulation: first, the agent sets \(\tilde{x}^{(1)}=(0,1/4)\), in which case she passes \(h_{1}\) and incurs a cost of \(1/4\). Then, the agent sets \(\tilde{x}^{(2)}=(1,1/4)\); the cost to go from \(\tilde{x}^{(1)}\) to \(\tilde{x}^{(2)}\) is \(\|(1,1/4)-(0,1/4)\|_{2}=1\) (see the blue manipulation in Figure 3). In turn, the total cost of this manipulation to pass (i.e., get a positive classification on) both classifiers is at most \(1+1/4=5/4\), and is always better than the \(\sqrt{2}\) cost for the conjunction of classifiers! Figure 3: An example for a zig-zag strategy being better for an agent that starts at \(x\) in the sequential case than moving in a single step. Here, an agent would prefer to first manipulate to \(\tilde{x}^{(1)}\) then to \(\tilde{x}^{(2)}\) (the blue arrows) instead of straightforwardly moving from \(x\) to \(\hat{x}\) as would be optimal in the conjunction case (the red arrow). Intuitively, here, the main idea is that in the "conjunction of classifiers" case, an agent must manipulate her features a single time in a way that satisfies all classifiers at once. However, when facing a sequence of classifiers \(h_{1},\ldots,h_{k}\), once an agent has passed classifier \(h_{i-1}\) for any given \(i\), she can "forget" classifier \(h_{i-1}\) and manipulate her features to pass \(h_{i}\) while _not being required to pass \(h_{i-1}\)_ anymore. In turn, the potential manipulations for an agent in the sequential case are less constrained than in the conjunction of classifiers case. This result is formalized below: **Claim 3.3**.: Let \(h_{1},\ldots,h_{k}\) be a sequence of \(k\) linear classifiers. For any agent with initial feature vector \(x\in\mathbb{R}^{d}\) (\(d\geq 1\)), \[c^{*}_{conj}\left(x,\{h_{1},\ldots,h_{k}\}\right)\geq c^{*}_{seq}\left(x,\{h_ {1},\ldots,h_{k}\}\right).\] Proof.: Let \(c\) be the agent's cost function. Let \(\hat{x}\) be a vector such that \(h_{i}(\hat{x})=1\) for all \(i\in[k]\), and such that \(c(x,\hat{x})\leq\tau\) where \(\tau\) is the manipulation budget available to the agent. Since \(\hat{x}\) satisfies \(h_{i}(\hat{x})=1\) for all \(i\in[k]\), the feature modification \(x\rightarrow\hat{x}\) gives a positive classification outcome to the agent in the sequential case. Further, the cost of this manipulation is \(c(x,\hat{x})+0+\ldots+0=c(x,\hat{x})\). In turn, for any feasible one-shot manipulation that passes all classifiers in the conjunctive case, there exists a feasible sequential manipulation that passes all classifiers in the sequential case at no greater cost; this concludes the proof. Intuitively, the above claim follows from the observation that any best response solution to the conjunction case in particular still passes all classifiers and has the same cost in the sequential case. However, there can be a significant gap between how much budget an agent needs to spend in the conjunctive versus in the sequential case to successfully pass all classifiers (for illustration, see Figure 2).
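Claim 3.3 and Example 3.2 are easy to check numerically. The sketch below is a minimal illustration, not the authors' code; it assumes the `cvxpy` package and the classifiers of Example 3.2, and solves both the one-shot conjunctive best response and the sequential best response as convex programs so the two costs can be compared:

```python
import cvxpy as cp
import numpy as np

# Example 3.2: x0 = (0, 0), h1(x) = 1{4*x2 - 3*x1 >= 1}, h2(x) = 1{x1 >= 1}.
x0 = np.array([0.0, 0.0])
W = np.array([[-3.0, 4.0],   # w1
              [ 1.0, 0.0]])  # w2
b = np.array([1.0, 1.0])

# Conjunctive (simultaneous) best response: one move that passes both tests.
z = cp.Variable(2)
conj = cp.Problem(cp.Minimize(cp.norm(z - x0, 2)), [W @ z >= b])
conj.solve()

# Sequential best response: one move per test, cumulative l2 cost.
x1, x2 = cp.Variable(2), cp.Variable(2)
seq = cp.Problem(
    cp.Minimize(cp.norm(x1 - x0, 2) + cp.norm(x2 - x1, 2)),
    [W[0] @ x1 >= b[0], W[1] @ x2 >= b[1]],
)
seq.solve()

print(f"conjunctive cost ~ {conj.value:.3f}")  # ~ 1.414 = sqrt(2)
print(f"sequential  cost ~ {seq.value:.3f}")   # ~ 1.24, no worse than the 5/4 zig-zag
```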
In fact, we show below that the multiplicative gap between the conjunctive and sequential manipulation cost can be unbounded, even in the two-dimensional setting: **Lemma 3.4**.: _Consider \(d=2\). For any constant \(M>0\), there exists two linear classifiers \(h_{1}\) and \(h_{2}\) and an initial feature vector \(x^{(0)}\) such that_ \[\frac{c^{*}_{conj}\left(x^{(0)},\{h_{1},h_{2}\}\right)}{c^{*}_{seq}\left(x^{(0 )},\{h_{1},h_{2}\}\right)}\geq M.\] Proof.: Pick \(x^{(0)}=(0,0)\). Let \(\gamma>0\) be a real number. Consider \(h_{1}(x)=\mathbbm{1}\left\{\frac{x_{1}}{\gamma}+x_{2}\geq 1\right\}\) and \(h_{2}(x)=\mathbbm{1}\left\{\frac{x_{1}}{\gamma}-x_{2}\geq 1\right\}\). Let \(\hat{x}\) be the agent's features after manipulation. To obtain a positive classification outcome, the agent requires both \(\hat{x}_{1}\geq\gamma(1-\hat{x}_{2})\) and \(\hat{x}_{1}\geq\gamma(1+\hat{x}_{2})\). Since one of \(1-\hat{x}_{2}\) or \(1+\hat{x}_{2}\) has to be at least \(1\), this implies \(\hat{x}_{1}\geq\gamma\). In turn, \(c(x,\{h_{1},h_{2}\})=\|\hat{x}\|\geq\gamma\). However, in the sequential case, a manipulation that passes \(h_{1}\) is to set \(x^{(1)}=(0,1)\). Then a manipulation that passes \(h_{2}\), starting from \(x^{(1)}\), is to set \(x^{(2)}=(0,-1)\). The total cost is \(\|(0,1)-(0,0)\|+\|(0,-1)-(0,1)\|=1+2=3\). In particular, \[\frac{c^{*}_{conj}\left(x,\{h_{1},\ldots,h_{k}\}\right)}{c^{*}_{seq}\left(x, \{h_{1},\ldots,h_{k}\}\right)}\geq\gamma/3.\] The result is obtained by setting \(\gamma=3M\) ### An Algorithmic Characterization of an agent's Optimal Strategy in the Sequential Case In this section, we show that in the sequential setting, an agent can compute her optimal sequences of manipulations efficiently. Consider any initial feature vector \(x^{(0)}\in\mathbb{R}^{d}\) for an agent. Further, suppose an agent must pass \(k\) linear classifiers \(h_{1},\ldots,h_{k}\). For \(i\in[k]\), we write once again \(h_{i}(x)=\mathbbm{1}[w_{i}^{\top}x\geq b_{i}]\) the \(i\)-th classifier that an agent must get a positive classification on. Here and for this subsection only, we relax our assumption on the cost function to be more general, and not limited to \(\ell_{2}\) costs: **Assumption 3.5**.: The cost \(c(x,\hat{x})\) of moving from feature vector \(x\) to feature vector \(\hat{x}\) is convex in \((x,\hat{x})\). This is a relatively straightforward and mild assumption; absent convexity, computing the best feature modifications for even a single step can be a computationally intractable problem. The assumption covers but is not limited to a large class of cost functions of the form \(c(x,\hat{x})=\|\hat{x}-x\|\), for _any_ norm \(\|.\|\). It can also encode cost functions where different features or directions have different costs of manipulation; an example is \(c(x,\hat{x})=\left(\hat{x}-x\right)^{\top}A\left(\hat{x}-x\right)\) where \(A\) is a positive definite matrix, as used in (Shavit et al., 2020; Bechavod et al., 2022). In this case, an agent's goal, starting from her initial feature vector \(x^{(0)}\), is to find a sequence of feature modifications \(x^{(1)}\) to \(x^{(k)}\) such that: 1) for all \(i\in[k]\), \(h_{i}(x^{(i)})=1\). I.e., \(x^{i}\) passes the \(i\)-th classifier; and 2) the total cost \(\sum_{i=1}^{k}c(x^{(i-1)},x^{(i)})\) of going from \(x^{(0)}\to x^{(1)}\to x^{(2)}\rightarrow\ldots\to x^{(k)}\) is minimized. 
This can be written as the following optimization problem: \[\begin{split}\min_{x^{(1)},\ldots,x^{(k)}}&\sum_{i=1} ^{k}c(x^{(i-1)},x^{(i)})\\ \text{s.t.}& w_{i}^{\top}x^{(i)}\geq b_{i}\ \forall i\in[k]. \end{split} \tag{2}\] **Claim 3.6**.: Program (2) is convex in \((x^{(1)},\ldots,x^{(k)})\). In turn, we can solve the problem faced by an agent's computationally efficiently, through standard convex optimization techniques. ### A Closed-Form Characterization in the 2-Classifier, 2-Dimensional Case We now provide closed-form characterization of an agent's best response in the sequential case, under the two-dimensional two-classifier (\(d=k=2\)) setting that we considered in Section 3.1. Here, we take the cost function to be the standard Euclidean norm, i.e. \(c(x,\hat{x})=\|\hat{x}-x\|_{2}\), as per Assumption 3.1. **Theorem 3.7**.: _Consider two linear classifiers \(h_{1}(x)=\mathbbm{1}(w_{1}^{\top}x\geq 0)\) and \(h_{2}(x)=\mathbbm{1}(w_{2}^{\top}x\geq 0)\) where \(\|w_{i}\|_{2}=1\) for \(i\in\{1,2\}\) and an agent \(x^{(0)}\in\mathbb{R}^{2}\) such that \(h_{1}(x^{(0)})=0\) and \(h_{2}(P_{w_{1}}(x^{(0)}))=0\). Let \(0<\theta<\pi\) be the angle between (the positive region of) the two linear classifiers; i.e. \(\theta\) is the solution to \(\cos\theta=-w_{1}^{\top}w_{2}\). Then:_ 1. _If_ \(|\tan\theta|>\|P_{w_{1}}(x^{(0)})\|_{2}/d_{w_{1}}(x^{(0)})\)_, then the best response for an agent is to pick_ \[x^{(2)}=x^{(1)}=\vec{0}.\] _In this case, the cost of manipulation is_ \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=\|x^{(0)}\|_{2}.\] 2. _If_ \(|\tan\theta|\leq\|P_{w_{1}}(x^{(0)})\|_{2}/d_{w_{1}}(x^{(0)})\)_, then the best response is given by_ \[x^{(1)}=\left(1-\frac{d_{w_{1}}(x^{(0)})}{\|P_{w_{1}}(x^{(0)})\|_{2}}|\tan\theta |\right)P_{w_{1}}(x^{(0)})\] _and_ \(x^{(2)}=P_{w_{2}}(x^{(1)})\)_, and the cost of manipulation is given by_ \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=d_{w_{1}}(x^{(0)})|\cos\theta |+\|P_{w_{1}}(x^{(0)})\|_{2}\sin\theta.\] Proof.: Given classifiers \(h_{1}\) and \(h_{2}\), the best response of an agent \(x^{(0)}\) is a solution to the following optimization problem, as noted in Section 3.3: \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=\min_{x^{(1)},x^{(2)}}\left\{ \|x^{(0)}-x^{(1)}\|_{2}+\|x^{(1)}-x^{(2)}\|_{2}:w_{1}^{\top}x^{(1)}\geq 0,w_{2}^ {\top}x^{(2)}\geq 0\right\}\] First, we remark that given any \(x^{(1)}\), the optimal choice of \(x^{(2)}\) is the orthogonal projection of \(x^{(1)}\) on classifier \(f_{2}\). Therefore, the best response can be written as: \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=\min_{x^{(1)}\in\mathbb{R}^{2 }}\left\{\|x^{(0)}-x^{(1)}\|_{2}+d_{w_{2}}\left(x^{(1)}\right):w_{1}^{\top}z \geq 0\right\} \tag{3}\] To simplify notations, we will denote \(x\triangleq x^{(0)}\). Under the assumptions of the theorem (more specifically, \(h_{1}(x)=0\) and \(h_{2}(P_{w_{1}}(x))=0\)), Equation (3) can be rewritten as an optimization over a one-dimensional variable: \[\min_{0\leq z\leq d^{\prime}_{w_{1}}(x)}\left\{g(z)\triangleq\sqrt{d^{2}_{w_{ 1}}(x)+z^{2}}+(d^{\prime}_{w_{1}}(x)-z)\sin\theta\right\} \tag{4}\] where \(d^{\prime}_{w_{1}}(x)\triangleq\|P_{w_{1}}(x)\|_{2}\) - see Figure 4 for a graphical justification of this rewriting. Note that \(g(z)\) achieves its minimum either at the boundaries or at the point where \(g^{\prime}(z)=0\). 
Therefore, we have that the minimum is one of the following: \[z =0\Longrightarrow g(z)=d_{w_{1}}(x)+d^{\prime}_{w_{1}}(x)\sin\theta\] \[z =d^{\prime}_{w_{1}}(x)\Longrightarrow g(z)=\sqrt{d^{2}_{w_{1}}(x )+d^{\prime}_{w_{1}}(x)}=\|x\|_{2}\] \[z =d_{w_{1}}(x)|\tan\theta|\Longrightarrow g(z)=d_{w_{1}}(x)|\cos \theta|+d^{\prime}_{w_{1}}(x)\sin\theta\ (g^{\prime}(z)=0)\] Figure 4: Illustration of the reduction from the optimization problem in Equation 3 to the one in Equation 4. If \(d_{w_{1}}(x)|\tan\theta|\leq d^{\prime}_{w_{1}}(x)\), then an application of Cauchy-Schwarz inequality implies that \(z=d_{w_{1}}(x)|\tan\theta|\) is the minimizer. Therefore, if \(|\tan\theta|>d^{\prime}_{w_{1}}(x)/d_{w_{1}}(x)\), the minimizer is \(z^{\star}=d^{\prime}_{w_{1}}(x)\), meaning \(x^{(2)}=x^{(1)}=\vec{0}\), and that \[c^{*}_{seq}\left(x,\{h_{1},h_{2}\}\right)=\|x\|_{2}\] and if \(|\tan\theta|\leq d^{\prime}_{w_{1}}(x)/d_{w_{1}}(x)\), the minimizer is \(z^{\star}=d_{w_{1}}(x)|\tan\theta|\) which implies \[x^{(1)}=\left(1-\frac{d_{w_{1}}(x^{(0)})}{\|P_{w_{1}}(x^{(0)})\|_{2}}|\tan \theta|\right)P_{w_{1}}(x^{(0)})\] and \(x^{(2)}=P_{w_{2}}(x^{(1)})\), and that \[c^{*}_{seq}\left(x,\{h_{1},h_{2}\}\right)=d_{w_{1}}(x)|\cos\theta|+d^{\prime} _{w_{1}}(x)\sin\theta\] Therefore, putting the two cases together, \[c^{*}_{seq}\left(x,\{h_{1},h_{2}\}\right)=\begin{cases}\|x\|_{2}&\text{if }|\tan \theta|>d^{\prime}_{w_{1}}(x)/d_{w_{1}}(x)\\ d_{w_{1}}(x)|\cos\theta|+d^{\prime}_{w_{1}}(x)\sin\theta&\text{if }|\tan \theta|\leq d^{\prime}_{w_{1}}(x)/d_{w_{1}}(x)\end{cases}\] First, note that once the first feature modification has happened and an agent has passed classifier \(h_{1}\) and is at \(x^{(1)}\), the theorem states that an agent picks \(x^{(2)}\) to simply be the orthogonal projection onto the positive region of \(h_{2}\). This is because the cost for going from \(x^{(1)}\) to \(x^{(2)}\) is simply the \(l_{2}\) distance between them, in which case picking \(x^{(2)}\) to be the orthogonal projection of \(x^{(1)}\) on \(h_{2}\) minimizes that distance. The main contribution and challenge of Theorem 3.7 are therefore to understand how to set \(x^{(1)}\) and what is the minimum amount of effort that an agent expands to do so. Now let's examine different cases in Theorem 3.7. Note that we assumed \(h_{1}(x^{(0)})=0\) and \(h_{2}(P_{w_{1}}(x^{(0)}))=0\), i.e. that an agent is not in the positive region for the first test and \(P_{w_{1}}(x^{(0)})\) is not in the positive region for the second test, because otherwise, the solution is trivial. In fact, if \(h_{1}(x^{(0)})=1\), then the solution is simply staying at \(x^{(0)}\) for the first test and then projecting orthogonally onto the positive region of \(h_{2}\) to pass the second test: \[x^{(1)}=x^{(0)},\ x^{(2)}=P_{w_{2}}(x^{(1)})\] \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=d_{w_{2}}(x^{(0)})\] This corresponds to region \(R_{1}\) of agents in Figure 5. If \(h_{1}(x^{(0)})=0\), but \(h_{2}(P_{w_{1}}(x^{(0)}))=1\), then the best response solution is simply the orthogonal projection onto the positive region of \(h_{1}\): \[x^{(2)}=x^{(1)}=P_{w_{1}}(x^{(0)})\] \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=d_{w_{1}}(x^{(0)})\] This corresponds to region \(R_{4}\) of agents in Figure 5. 
Additionally, the first case in the closed-form solutions in Theorem 3.7 corresponds to the region of the space where agents prefer to travel directly to the intersection of the two classifiers than deploying a zig-zag strategy: this corresponds to region \(R_{3}\) in Figure 5. The second case corresponds to the region where agents do find that a zig-zag strategy is less costly and gives the algebraic characterization of the optimal zig-zag strategy. This region for an agent is denoted by \(R_{2}\) in Figure 5. Also, as shown by Figure 5.b, the zig-zag strategy of agents in \(R_{2}\) has the following geometric characterization: pick \(x^{(1)}\) on \(h_{1}\) such that the line passing through \(x^{(0)}\) and \(x^{(1)}\) has angle \(\theta\) with the line perpendicular to \(h_{1}\). Given a budget \(\tau\), agents who can manipulate with a cost of at most \(\tau\) to pass the two tests in the sequential setting, i.e. \(\{x^{(0)}:c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)\leq\tau\}\) is highlighted in Figure 2.b. We conclude this section by showing that if \(\theta\geq\pi/2\), then agents incur the same cost in the sequential setting as they would under the conjunction setting. In other words, agents can deploy the strategy that they would use if they had to pass the two tests simultaneously. **Theorem 3.8**.: _If \(\pi/2\leq\theta<\pi\), then for every agent \(x^{(0)}\) there exists optimal strategies \(x^{(1)}\) and \(x^{(2)}\) s.t. \(x^{(1)}=x^{(2)}\), i.e.,_ \[c^{*}_{seq}\left(x^{(0)},\{h_{1},h_{2}\}\right)=c^{*}_{conj}\left(x^{(0)},\{ h_{1},h_{2}\}\right).\] Proof.: Let \(\left(x^{(1)},x^{(2)}=P_{w_{2}}(x^{(1)})\right)\) be an optimal strategy of the agent in the sequential setting. Suppose \(x^{(1)}\neq x^{(2)}\). We have that \[w_{1}^{\top}x^{(2)} =w_{1}^{\top}\left(x^{(1)}-(w_{2}^{\top}x^{(1)})w_{2}\right)\] \[=w_{1}^{\top}x^{(1)}-(w_{2}^{\top}x^{(1)})(w_{1}^{\top}w_{2})\] Figure 5: (a) Different cases for how agents best respond: agents in \(R_{1}\) stay at their location to pass the first test and project onto \(h_{2}\) to pass the second. Agents in \(R_{2}\) deploy a zig-zag strategy. Agents in \(R_{3}\) move to the intersection of \(h_{1}\) and \(h_{2}\). Agents in \(R_{4}\) project onto \(h_{1}\). (b) Geometric characterization of the zig-zag strategy: the line passing through \(x^{(0)}\) and \(x^{(1)}\) has angle \(\theta\) with the line perpendicular to \(h_{1}\). (c) This figure highlights the positive regions of \(h_{1}\), \(h_{2}\), and their intersection. But note that \(w_{1}^{\top}x^{(1)}\geq 0\) because \(x^{(1)}\) passes the first classifier by definition, \(w_{2}^{\top}x^{(1)}\leq 0\) because \(x^{(1)}\neq x^{(2)}\), and \(w_{1}^{\top}w_{2}\geq 0\) because \(\pi/2\leq\theta<\pi\). Therefore, \(w_{1}^{\top}x^{(2)}\geq 0\) which implies \(h_{1}(x^{(2)})=1\). However, if \(h_{1}(x^{(2)})=1\), then the following manipulation: \(y^{(0)}=x^{(0)}\) and \(y^{(1)}=y^{(2)}=x^{(2)}\) passes both tests and that its cost is: \(\|x^{(2)}-x^{(0)}\|_{2}\leq\|x^{(2)}-x^{(1)}\|_{2}+\|x^{(1)}-x^{(0)}\|_{2}\) by the triangle inequality. Given the optimality of \((x^{(1)},x^{(2)})\), we conclude that \((y^{(1)},y^{(2)})\) is another optimal strategy that the agent can deploy. ### Monotonicity We now consider a monotonicity property that excludes the possibility of a zig-zag strategy arising. A similar property is noted in (Milli et al., 2019). 
**Definition 3.9** (Feature Monotone Classifiers).: Classifier \(h_{i}:\mathbb{R}^{d}\to\{0,1\}\) is _monotone_ if for every individual \(x\) that is classified as positive by \(h_{i}\), any feature-wise increase in the features of \(x\) results in a positive classification by \(h_{i}\). Formally, \[\forall x\in\mathbb{R}^{d}:h_{i}(x)=1\Rightarrow h_{i}(x+\alpha)=1 \quad\forall\alpha\in(\mathbb{R}_{\geq 0})^{d}.\] Note that this monotonicity property may not hold in some classification problems. For example, most mortgage loans in the US require a good credit score. A common way of improving one's credit score is by getting a credit card and having monthly statements with a balance greater than zero but not too close to the total credit limit (and paying them on time). **Theorem 3.10**.: _Let \(h_{1},\ldots,h_{k}\) be a sequence of monotone classifiers, and let the initial feature vector \(x^{(0)}\) be such that \(h_{i}(x^{(0)})=0\) for every \(i\in[k]\). Assume the cost function can be written as \(c(x,\hat{x})=\|\hat{x}-x\|\) for some norm \(\|.\|\). Then, we have that_ \[c^{*}_{seq}\left(x^{(0)},\{h_{1},\ldots,h_{k}\}\right)=c^{*}_{conj}\left(x^{(0 )},\{h_{1},\ldots,h_{k}\}\right).\] Proof.: Let \(f_{1,\ldots,k}:\mathbb{R}^{d}\to\{0,1\}\) denote the function that returns the conjunction of all the classifiers, i.e., \(f_{1,\ldots,k}(x)=h_{1}(x)\wedge\ldots\wedge h_{k}(x)\). Let \(z^{*}_{1,\ldots,k}(x^{0})\) denote the point on \(f_{1,\ldots,k}\) that minimizes the cost, i.e., \[z^{*}_{1,\ldots,k}(x^{0})=\operatorname{argmin}_{x^{(1)}}\|x^{(0)},x^{(1)}\|_ {p}.\] Note that by definition, points on \(f_{1,\ldots,k}\) are classified as positive by all classifiers \(h_{1},\ldots,h_{k}\) (i.e., \(z^{*}_{1,\ldots,k}(x^{0})\) this is the best response for the conjunction case). It follows from the triangle inequality that any \(x^{(1)}\) such that \(h_{1}(x^{(1)})\wedge\ldots\wedge h_{k}(x^{(1)})=1\) has cost \(c(x^{(0)},x^{(1)})\geq c(x^{(0)},z^{*}_{1,\ldots,k}(x^{0}))\). We proceed by induction on the number of classifiers. For the induction base, consider \(k=1\). Clearly, in this case moving to \(z^{*}_{1,\ldots,k}(x)\) yields the best response. For the induction step, assume that for every initial point \(x^{\prime}\), and every \(k-1\) monotone classifiers \(h_{2},\ldots,h_{k}\) it holds that \[\|x^{\prime}-z^{*}_{2,\ldots,k}(x^{\prime})\|_{p}\leq\|x^{\prime}-z_{2}\|_{2}+ \ldots+\|z_{k-1}-z_{k}\|_{p}.\] for every \(z_{2},\ldots,z_{k}\in\mathbb{R}^{d}\) such that \(h_{i}(z_{i})=1\). Adding the additional classifier in the beginning, \(h_{1}\) and considering the initial point, \(x\). Assume by contradiction that there exists a path \(x=z_{0},z_{1}\ldots,z_{k}\) such that \(h_{i}(z_{i})\geq 0\) for every \(i\in[k]\) and that \[c^{*}_{seq}(x,\{h_{1},\ldots,h_{k}\})=\|x-z_{1}\|_{p}+\ldots+\|z_{k-1}-z_{k}\| _{p}<\|x-z^{*}_{1,\ldots,k}(x)\|_{p}. \tag{5}\] Since the path from \(z_{1}\) to \(z_{k}\) is a best response for \(h_{2},\ldots,h_{k}\) when the initial feature vector \(z_{1}\), by setting \(x^{\prime}=z_{1}\) we can apply the induction step we and replace this path by \(x,z_{1},z_{2,\ldots,k}^{*}(x^{\prime})\) without increasing the sum of manipulations. If \(f_{1,\ldots,k}(z_{2,\ldots,k}^{*}(z_{1}))=1\), we have that \(\|x-z_{1}\|_{p}+\|z_{1}-z_{2,\ldots,k}^{*}(z_{1})\|_{p}\leq\|x-z_{1,\ldots,k} ^{*}(x)\|_{p}\) due to the triangle inequality and the definition of \(z_{1,\ldots,k}^{*}(x)\) and this is a contradiction to Eq. 5. So assume \(f_{1,\ldots,k}(z_{2,\ldots,k}^{*}(z_{1}))=0\). 
Since \(h_{i}(z_{2,\ldots,k}^{*}(z_{1}))=1\) for every \(i\geq 2\) by definition, we have that \(h_{1}(z_{2,\ldots,k}^{*})=0\). As \(h_{1}(z_{1})=1\), we can define \(z^{\prime}\in\mathbb{R}^{d}\) such that \[z^{\prime}[j]=\max\{z_{2,\ldots,k}^{*}(z_{1})[j],z_{1}[j]\},\] and from monotonicity it follows that \(f_{2,\ldots,k}(z^{\prime})=1\). Finally, we have that \(\|x-z_{1}\|_{p}+\|z_{1}-z^{\prime}\|_{p}<\|x-z_{1}\|_{p}+\|z_{1}-z_{2,\ldots,k }^{*}(z_{1})\|_{p}\), which is a contradiction to the minimiality of \(z_{2,\ldots,k}^{*}(z_{1})\) and thus to the minimality of \(z_{2},\ldots,z_{k}\). Theorem 3.10 in particular implies that under our monotonicity assumption and for a large class of reasonable cost functions, an agent has no incentive to zig-zag in the sequential case and in fact can simply follow the same strategy as in the simultaneous or conjunctive case. This insight immediately extends even when \(x^{(0)}\) is positively classified by some but not all of the \(h_{i}\)'s as any best response is guaranteed to increase the feature values and thus will maintain the positive classification results of these classifiers. ## 4 Manipulation Resistant Defenses Up to this point in the paper, we have focused mainly on the existence and feasibility of a zig-zag manipulation strategy from the perspective of an agent. We now shift gears and discuss the firm's decision space. We are interested in understanding how the firm can modify its classifiers to maintain a high level of accuracy (if possible), despite the strategic manipulations of an agent. To this end, we assume there is a joint distribution of features and labels \(\mathcal{D}\) over \(\mathcal{X}\times\{0,1]\}\). Interestingly, previous works (Bruckner and Scheffer, 2011; Hardt et al., 2016) show hardness results for finding optimal strategic classifiers, where the objective is finding a single classifier \(h\) that attains the strategic maximum accuracy. Now, we can introduce the defender's game for a typical strategic classification problem. \[\begin{split}\min_{h\in\mathcal{H}}& P_{(x,y)\sim \mathcal{D}}[h(z^{*}(x))\neq y]\\ \text{s.t.}& z^{*}(x)=\arg\max_{z}\ h(z)-c(x,z)\end{split} \tag{6}\] In our paper, \(h\) is actually given by the sequential composition of classifiers in the screening process and \(c(x,z)\) is the sum of manipulation costs per stage. The objective function in this optimization problem is a direct generalization of 0-1 loss for normal learning problems, only complicated by the strategic behavior of an agent. As Bruckner and Scheffer (2011) observe, this is a bi-level optimization problem and is NP-hard (Jeroslow, 1985) to compute, even when constraints and objectives are linear. Interestingly, Hardt et al. (2016) also show a hardness of approximation result for general metrics. Because of these past hardness results, we instead focus on a more tractable defense objective. ### Conservative Defense Here, we consider a different objective motivated by the hiring process in firms, in which avoiding false positives and not hiring unqualified candidates can be seen as arguably more important than avoiding false negatives and not missing out on good candidates. This objective, described below, has been previously studied in the context of strategic classification, in particular in [2]. 
**Definition 4.1** (No False Positive Objective).: Given the manipulation budget \(\tau\) and the initial linear classifiers \(h_{1},\cdots,h_{k}\), the goal of the firm is to design a modified set of linear classifiers \(\tilde{h}_{1},\cdots,\tilde{h}_{k}\) that maximize the true positive rate of the pipeline on manipulated feature vectors subject to no false positives. Recall that the ground truth is determined by the conjunction of \(h_{1},\cdots,h_{k}\) on unmanipulated feature vectors of agents. Without loss of generality, we assume the pipeline is non-trivial: the intersection of acceptance regions of \(h_{1},\cdots,h_{k}\) is non-empty. We prove that, under standard assumptions on linear classifiers of the firm, a defense strategy that "shifts" all classifiers by the manipulation budget, is the optimal strategy for the firm in both pipeline and conjunction settings. We formally define the defense strategy as follows: **Definition 4.2** (Conservative Strategy).: Given the manipulation budget \(\tau\), the firm conservatively assumes that each agent has a manipulation budget of \(\tau\) per test. For each test \(h_{i}(x)=\mathbbm{1}[w_{i}^{\top}x\geq b_{i}]\), the firm replaces it by a "\(\tau\)-shifted" linear separator \(\tilde{h}_{i}(x)=\mathbbm{1}[w_{i}^{\top}x\geq b_{i}+\tau]\)). In this section, without loss of generality, we assume that all \(w_{i}\)'s have \(\ell_{2}\)-norm equal to one. Our statement holds when the linear classifiers satisfy the following "general position" type condition. **Definition 4.3**.: We say a collection of linear classifiers \(\mathcal{H}=\{h_{1}(x)=\mathbbm{1}[w_{1}^{\top}x\geq b_{1}],\cdots,h_{k}(x)= \mathbbm{1}[w_{k}^{\top}x\geq b_{k}]\}\) with \(w_{1},\cdots,w_{k}\in\mathbb{R}^{d}\) are in "general position" if for any \(i\in[k]\), the intersection of \(\{x|w_{i}^{\top}x=b_{i}\}\) and \(\{x|\bigwedge_{j\in[k],j\neq i}h_{j}(x)=1\}\) lies in a \((d-1)\)-dimensional subspace but in no \((d-2)\)-dimensional subspace. We remark that in \(\mathbb{R}^{2}\), this condition is equivalent to the standard general position assumption (i.e., no three lines meet at the same point). Moreover, this condition implies that no test in \(\mathcal{H}\) is "redundant", i.e., for every \(i\in[k]\), the positive region of \(\mathcal{H}\) (i.e., \(\bigwedge_{h\in\mathcal{H}}\{x|h(x)=1\}\)) is a proper subset of the positive region of \(\mathcal{H}\setminus h_{i}\). See Figure 6 for an example in \(\mathbb{R}^{2}\). Now, we are ready to state the main result of this section. Figure 6: In \((a)\), the intersection of \(h\) with the positive half plane of the other two classifiers that are in blue and gray shadows is a point which is of zero dimension. This case is not in the general position and \(h\) is a redundant classifier. However, in \((b)\), the intersection of \(h\) with the described positive regions is a line segment, a one-dimensional object. Here, \(h\) is not redundant. **Theorem 4.4**.: _Consider a set of linear classifiers \(\mathcal{H}=\{h_{1},\cdots,h_{k}\}\) that are in "general position" (as in Definition 4.3). Moreover, suppose that each agent has a manipulation budget of \(\tau\). Then, in both the conjunction and sequential settings, the conservative defense is a strategy that maximizes true positives subject to zero false positives._ Proof.: First, we prove that conservative defense achieves zero false positive in both cases. To show this, by Claim 3.3, it suffices to show it for the sequential setting only. 
Consider an agent \(x\) who initially (i.e., before manipulation) is not in the positive region of conjunctions of \(h_{1},\cdots h_{k}\); i.e., \(\Pi_{j\in[k]}h_{j}(x^{(0)})=0\). Hence, there exists a classifier \(h_{i}\) such that \(w_{i}^{\top}x^{(0)}<b_{i}\). Now, let \(x^{(i)}:x^{(0)}+\epsilon_{i}\) denote the (manipulated) location of \(x\) right before stage \(i\). Since the total manipulation budget of \(x\) is \(\tau\), \(w_{i}^{\top}x^{(i)}\leq w_{i}^{\top}x^{(0)}+w_{i}^{\top}\epsilon_{i}<b_{i}+\tau\) (the choice of \(\varepsilon_{i}\) that maximizes \(w_{i}^{\top}\epsilon_{i}\) is \(\epsilon_{i}=\tau w_{i}\), and \(w_{i}^{\top}(\tau w_{i})=\tau\) since \(\|w_{i}\|_{2}=1\)). Hence, \(\tilde{h}(x^{(i)})=0\) and agent \(x\) cannot pass the modified pipeline \(\tilde{h}_{1},\cdots,\tilde{h}_{k}\). Next, consider test \(i\) and let \(\Delta^{i}\) denote the subspace of points (i.e., agents) in the intersection of \(\{x|h_{i}(x)=0\}\) and \(\bigwedge_{j\in[k],j\neq i}\{x|h_{j}(x)=1\}\). By the general position assumption, \(\Delta^{i}\) is a \((d-1)\)-dimensional subspace and is a subset of the \((d-1)\)-dimensional hyperplane corresponding to \(w_{i}^{\top}x=b_{i}\). Then, there exists only a unique linear separator which is at distance exactly \(\tau\) from \(\Delta^{i}\) (and is in the positive side of \(h_{i}\)); \(\hat{h}_{i}(x):=\mathbbm{1}[w_{i}^{\top}x\geq b_{i}+\tau]\). Given that any defense strategy with zero false positive has to classify an agent in \(\Delta^{i}\) as negative, it is straightforward to verify that any "feasible" modified linear separator \(h_{i}^{\prime}\) (i.e., achieving zero false positive) results in true positive rate less than or equal to the one replaces \(h_{i}^{\prime}\) with \(\hat{h}_{i}\). Note that while the conservative defense strategy has the maximum possible true positive subject to zero false positive in both simultaneous and sequential settings, by Claim 3.3, the conservative defense achieves a higher true positive rate in the sequential setting compared to the simultaneous case. Informally, from the firm's point of view, _under manipulation, the sequential setting is a more efficient screening process_. ## 5 Discussion We have initiated the study of _Strategic Screening_, combining screening problems with strategic classification. This is a natural and wide-spread problem both in automated and semi-automated decision making. We believe these examples and our convex program can aid in the design and monitoring of these screening processes. Substantial open questions remain regarding fairness implications of the defender's solution and exactly how susceptible real world pipelines are to zig-zagging. Some of the works cited in the related work section consider fairness considerations in the space of strategic manipulation, stemming either from unequal abilities to manipulate (Milli et al., 2019; Hu et al., 2019) or unequal access to information about the classifiers (Bechavod et al., 2022) across different groups. We do not consider these connections in our work, but these considerations are of significant interest and a natural direction for further research, especially due to the importance of making fair decisions in high-stake, life altering contexts. We finish with a few interesting examples for this. Disparities might arise both in the conjunction and in the sequential setting, with or without defense. 
Consider the classifiers presented in Example 3.2 and an instance in which candidates belong to two groups, \(G^{1}\) and \(G^{2}\), with identically distributed initial feature vectors and different total manipulation budgets, \(\sqrt{2}=\tau^{2}>\tau^{1}=5/4\). The narrative of the fairness disparities in the conjunction case is a simple generalization of the single-classifier case (e.g., (Hardt et al., 2016)): if the distribution is such that a significant fraction of individuals (from both groups) starts at a feature vector that is classified by both classifiers as \(0\) and that requires \(\sqrt{2}\) manipulation cost to reach their intersection, only the individuals from \(G^{2}\) will be able to manipulate. For the sequential case, consider a distribution with a large enough fraction of individuals starting at \((0,0)\). Example 3.2 demonstrates that only individuals from \(G^{2}\) will have sufficient budget to manipulate (using the zig-zag strategy). If the firm applies the conservative defense, individuals from \(G^{1}\) who should have been classified as positive might not have sufficient budget to manipulate their way to acceptance, which in turn implies higher false negative rates. This indicates, similarly to prior results in strategic classification (e.g., (Hu et al., 2019)), how the members of the advantaged group are more easily admitted or hired. ## Acknowledgements The authors are very grateful to Avrim Blum and Saba Ahmadi for helpful comments on an earlier draft and discussion of related work in the literature.
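To make the conservative defense concrete, here is a small self-contained numerical sketch. The two unit-norm tests, the uniform agent population, and the greedy best response are illustrative assumptions and are not the paper's Example 3.2. Because the two hypothetical tests act on orthogonal coordinates, paying exactly the per-test shortfall along \(w_{i}\) is an optimal manipulation here, so the simulation exhibits manipulated false positives under the original thresholds and none under the \(\tau\)-shifted thresholds of Definition 4.2, at the cost of some true positives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical unit-norm linear tests h_i(x) = 1[w_i . x >= b_i] (not Example 3.2).
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([0.0, 0.0])
tau = 1.0  # total manipulation budget of each agent

def sequential_outcome(x, thresholds, budget):
    """Greedy best response: at each stage, move just enough along w_i to pass, if affordable."""
    x = x.copy()
    for w_i, t_i in zip(W, thresholds):
        shortfall = t_i - w_i @ x
        if shortfall > 0:
            if shortfall > budget + 1e-12:
                return 0                   # cannot afford this stage: rejected
            x = x + shortfall * w_i        # unit-norm w_i, so the move costs exactly `shortfall`
            budget -= shortfall
    return 1                               # passed every stage

X = rng.uniform(-2.0, 2.0, size=(20000, 2))
truth = np.all(X @ W.T >= b, axis=1)       # ground truth: conjunction on unmanipulated features

for name, thresholds in [("original", b), ("conservative (b + tau)", b + tau)]:
    out = np.array([sequential_outcome(x, thresholds, tau) for x in X], dtype=bool)
    fp_share = np.mean(out & ~truth)
    tp_rate = np.mean(out & truth) / np.mean(truth)
    print(f"{name:>22}: false-positive fraction {fp_share:.3f}, true-positive rate {tp_rate:.3f}")
```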
We initiate the study of learning in screening processes under strategic behavior, working with multiple classifiers. We consider two contrasting settings: a conjunctive setting in which all classifiers must be satisfied simultaneously, and a sequential setting in which they must be satisfied in order; in other words, we incorporate strategic classification into screening processes. We show that sequential screening pipelines, in which individuals satisfy each classifier in turn, give rise to zig-zag behavior across classifiers, and that individuals who must satisfy all classifiers simultaneously can obtain positive outcomes by spending their limited manipulation budget only where it is needed. Finally, we consider a learner who designs a screening process that is robust to such manipulation, and we provide an optimized construction for that learner.
2308.00186
Learning Complex Motion Plans using Neural ODEs with Safety and Stability Guarantees
We propose a Dynamical System (DS) approach to learn complex, possibly periodic motion plans from kinesthetic demonstrations using Neural Ordinary Differential Equations (NODE). To ensure reactivity and robustness to disturbances, we propose a novel approach that selects a target point at each time step for the robot to follow, by combining tools from control theory and the target trajectory generated by the learned NODE. A correction term to the NODE model is computed online by solving a quadratic program that guarantees stability and safety using control Lyapunov functions and control barrier functions, respectively. Our approach outperforms baseline DS learning techniques on the LASA handwriting dataset and complex periodic trajectories. It is also validated on the Franka Emika robot arm to produce stable motions for wiping and stirring tasks that do not have a single attractor, while being robust to perturbations and safe around humans and obstacles.
Farhad Nawaz, Tianyu Li, Nikolai Matni, Nadia Figueroa
2023-07-31T22:50:14
http://arxiv.org/abs/2308.00186v3
# Learning Complex Motion Plans using Neural ODEs with Safety and Stability Guarantees ###### Abstract We propose a Dynamical System (DS) approach to learn complex, possibly periodic motion plans from kinesthetic demonstrations using Neural Ordinary Differential Equations (NODE). To ensure reactivity and robustness to disturbances, we propose a novel approach that selects a target point at each time step for the robot to follow, by combining tools from control theory and the target trajectory generated by the learned NODE. A correction term to the NODE model is computed online by solving a quadratic program that guarantees stability and safety using control Lyapunov functions and control barrier functions, respectively. Our approach outperforms baseline DS learning techniques on the LASA handwriting dataset and complex periodic trajectories. It is also validated on the Franka Emika robot arm to produce stable motions for wiping and stirring tasks that do not have a single attractor, while being robust to perturbations and safe around humans and obstacles. ## 1 Introduction Learning from Demonstrations (LfD) is a framework that enables transfer of skills to robots from observations of desired tasks (Khansari-Zadeh and Billard, 2011; Ijspeert et al., 2013; Yang et al., 2022). Typically, the observations are robot trajectories that are demonstrated through kinesthetic teaching, passively guiding the robot through the nominal motion to avoid the correspondence problem (Akgun and Subramanian, 2011). In such a framework, it is essential to learn motion plans from as few demonstrations as possible, while still providing required robustness, safety, and reactivity in a dynamic environment. While there are multiple approaches to represent the motion, we focus on Dynamical Systems (DS) based formulation (Billard et al., 2022). DS based approaches have been shown to be particularly useful in Human-Robot Interaction (HRI) scenarios (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2022; Khansari-Zadeh and Billard, 2014), where the robot inherently adapts to changes in the environment and can be compliant to human interactions, instead of following a stiff time-dependent reference motion that encodes the task. **Problem Formulation:** A DS based motion plan for a robotic manipulator is defined in terms of the robot state variable \(x\in\mathbb{R}^{d}\), where \(x\) could be the robot's end-effector Cartesian state or joint state. The motion planning layer is formulated as an autonomous DS \[\dot{x}=f(x), \tag{1}\] where \(f(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a nonlinear continuous function. The training data is \(D\) demonstrations from kinesthetic teaching: \(\mathcal{D}:=\{x_{i}(t_{1}),x_{i}(t_{2}),\ldots,x_{i}(t_{T})\}_{i=1}^{D}\), where, \(x_{i}(t_{k})\) is the state of the robot at time \(t_{k}\), for the \(i^{th}\) demonstration. The discrete points in each demonstration are sampled uniformly at time \(\{t_{1},t_{2},\ldots,t_{T}\}\). We assume that the training data trajectories \(\mathcal{D}\) approximate an _unknown nominal target trajectory_\(z^{*}(t)\) that encodes the task of the robot such as wiping, stirring, scooping, etc. Our aim is to design a vector field \(f(\cdot)\) using the demonstrations \(\mathcal{D}\) such that \(x(t)\) follows the target trajectory \(z^{*}(t)\). Previous work (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2018) in the DS-based motion planning framework has considered convergence only to a single target. 
In this paper, we consider convergence to a trajectory \(z^{*}(t)\) that can represent more complex, e.g., highly nonlinear and periodic, motions. Under nominal circumstances, i.e., in the absence of disturbances or obstacles, the target trajectory \(z^{*}(t)\) should be viewed as the reference for the low level controller to track. However, during deployment, the robot might not always follow the target trajectory because of tracking errors, disturbances, obstacles, etc. For example, there might be unanticipated perturbations during task execution which are generally not present while teaching a task to the robot. Consider the scenario in Fig. 1(a), where the target trajectory encodes a wiping task and the vector field is an unconstrained DS model (20) learned from demonstrations \(\mathcal{D}\). As shown in Fig. 1(a), if the robot is perturbed by a disturbance during deployment to a region where there is no training data, the learned model commands the robot to a spurious attractor. However, the desired behaviour is to continue tracking the target trajectory so that the robot wipes the desired space as shown in Fig. 1(b). Ensuring robustness to perturbations is critical for deploying robots in human-centric environments, as disturbances can arise due to obstacles unseen in demonstrations and intentional or adversarial disturbances caused by humans (Billard et al., 2022; Wang et al., 2022). This leads to our formal problem statement. **Problem 1**: _Given a set of training data \(\mathcal{D}:=\{x_{i}(t_{1}),x_{i}(t_{2}),\ldots,x_{i}(t_{T})\}_{i=1}^{D}\), design a vector field \(f(\cdot)\) for the dynamical system (20), such that it generates safe and stable motion plans at deployment for scenarios possibly not seen in the demonstrations, while ensuring that the robot's trajectory \(x(t)\) converges to the target trajectory \(z^{*}(t)\)._ **Related work:** LfD is a widely used framework to learn the underlying motion policy of a task (Argall et al., 2009). Inverse Reinforcement Learning (IRL) and Behavior Cloning (BC) are popular methodologies that have been used to imitate motion from human demonstrations (Abbeel and Ng, 2004; Priess et al., 2014; Osa et al., 2018). In IRL, the underlying objective function of a task is learned and an optimization problem is solved to generate the robot motion. BC learns the state-action distribution of the task dynamics from demonstrations. IRL and BC typically require the demonstrator to explore the task space for learning the policy. Algorithms such as DAGGER (Ross et al., 2011) rely on online data collection for exploration. Exploration of the state space using large amounts of data may not be feasible for HRI applications, especially when the demonstrator is a human. Figure 1: An illustrative example of a spurious attractor when the robot's path is guided by a DS-based motion plan in the presence of a disturbance is shown in (a) using NODE to encode the motion plan and (b) using the corrected CLF-NODE approach proposed in this work. We base our approach on DS-based LfD, which has been shown to model stable motion plans from very few demonstrations (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2022). One of the earliest works in DS-based LfD is Dynamic Movement Primitives (Schaal, 2006), which estimates a nonlinear autonomous DS for various robotic applications such as walking, pouring, and grasping (Nakanishi et al., 2004; Ude et al., 2010).
Stable Estimator of Dynamical Systems (SEDS) (Khansari-Zadeh and Billard, 2011) is a LfD method that learns globally stable dynamical systems with respect to a goal point using Gaussian Mixture Models (GMMs) and quadratic Lyapunov functions. An important limitation of SEDS is that it can only model trajectories whose distance to the target decrease monotonically in time. A method based on SEDS is presented in (Figueroa and Billard, 2018) via a Linear Parameter Varying (LPV) re-formulation of the model that learns more complex trajectories than SEDS. In (Ravichandar et al., 2017), contraction analysis is used to derive stability conditions on the learned trajectories. Recurrent Neural Networks (RNNs) have been used to model discrete motions (Reinhart and Steil, 2011; Lukosevicius and Jaeger, 2009). In (Chen et al., 2020), a Neural Network (NN) is used to learn the vector field \(f(\cdot)\) in (20) and Lyapunov stability is imposed by a sampling algorithm that identifies unstable regions in a predefined workspace. In this work, we leverage the rich model class of NNs to capture the invariant features of the target trajectory, but use Neural Ordinary Differential Equations (Chen et al., 2019) instead of the standard regression model (Chen et al., 2020). Figure 2: The modular control flow of our proposed pipeline using the CLF-CBF NODE approach. The state of the robot is \(x\), the low level control input are the joint torques \(\tau\), and the desired velocity in state space is \(\dot{x}_{ref}\). ### Proposed Approach We parameterize the vector field (20) of the motion plan as \[\dot{x}=\hat{f}(x)+u(x), \tag{2}\] where, \(\hat{f}(x)\) is used to encode the nominal system behavior, and \(u(x)\) is used to enforce safety and disturbance rejection properties. We learn the nominal system \(\hat{f}(\cdot)\) from demonstrations \(\mathcal{D}\), and compute a correction term \(u(x)\) based on control theoretic tools so that the goals of stability and safety are met in a composable, modular, and data efficient way. Similar to our objectives, the method proposed in (Figueroa and Billard, 2022) generates motion plans that not only converge to a global goal, but also has local stiff behaviours in regions around the target trajectory. Yet, they still lack in representing complex motions with high curvature and partial divergence. We use a modular approach similar to (Khansari-Zadeh and Billard, 2014), but, we define a CLF with respect to a time-varying target trajectory rather than a single goal point as assumed in (Khansari-Zadeh and Billard, 2014). Several DS-based obstacle avoidance methods (Hoffmann et al., 2009; Khansari-Zadeh and Billard, 2012) have been proposed that modulate the nominal dynamics of the motion by introducing a factor in the motion equation. Control Barrier Functions (CBFs) (Robey et al., 2020; Ames et al., 2019) are widely used to enforce safety of dynamical systems, and we adopt them in this work to generate safe motion plans. A schematic of the proposed control flow is presented in Fig. 2. The blue blocks represent the offline learning component and the green blocks are the online computation modules. We use a neural network parameterized model \(\hat{f}\) that we learn from demonstrations \(\mathcal{D}\), but any other model class could be used within our proposed framework. 
Starting from an initial condition, we integrate \(\hat{f}(\cdot)\) for the same time span of the task given in demonstrations to generate a target trajectory \(x^{*}(t)\) that approximates the unknown nominal target trajectory \(z^{*}(t)\). At deployment, the actual states of the robot \(x\) are observed by our motion plan at every time \(t\). Given the current state \(x\), our architecture then chooses the target point \(\pi(x)\) that the robot should follow at time \(t\) using the pre-computed target trajectory \(x^{*}(t)\). We estimate the nominal desired velocity \(\hat{f}(x)\) using our learnt model. However, as illustrated in Fig. 8, the generated motion plan from \(\hat{f}\) is neither guaranteed to be stable nor safe. Hence, we compute a _virtual control input_\(u(x)\) as an additive correction term that generates the reference motion plan \(\dot{x}\) using (2) so that the trajectory generated by \(\dot{x}\) converges to the target trajectory even in the presence of disturbances and unsafe regions such as obstacles. We denote the reference velocity for the low level controller as \(\dot{x}_{ref}\), which in general may be different from the real velocity \(\dot{x}\) of the robot. The reference velocity \(\dot{x}_{ref}\) is given as input to the impedance controller (Kronander and Billard, 2016) that computes the low level control input \(\tau\) (joint torques) for the physical robotic system. We emphasize that the virtual control input \(u(x)\) is different from the low-level control inputs \(\tau\) given in Fig. 2, and is a component of the motion planning DS (2). **Contributions** Our contributions are given below. 1. We propose a Neural ODE (NODE) based offline learning methodology that captures the invariant features of complex nonlinear and periodic nominal motion plans, such as wiping and stirring, using only a few demonstrations from kinesthetic teaching. 2. We generate a DS-based reactive motion policy for HRI scenarios by solving an efficient Quadratic Program (QP) online at high frequency that integrates CLFs and CBFs as constraints on the nominal NODE-based motion plan to guarantee stability and safety, respectively. 3. We define a novel look-ahead strategy that chooses a target point at every time step for the robot to follow, enabling tracking of a time-varying target trajectory instead of a single target point. 4. We show significant performance improvements over existing methods on the LASA handwriting data set, and validate that our approach enables complex nonlinear and periodic motions for compliant robot manipulation on the Franka Emika robot arm. Learning Nominal Motion Plans using Neural ODEs We propose a neural network parameterized function class to learn the dynamics of the motion offline from demonstrations. Although existing work (Figueroa and Billard, 2022) has provided guarantees on stable learning of dynamical systems using mixture of linear models that converge to a single target, we aim to learn more complex trajectories that also generalize across the task space and scale to higher dimensions. Since neural networks have demonstrated high capability for accurate time-series modeling (Chen et al., 2020), we base our approach on Neural ODEs (Chen et al., 2019), which are the continuous limit (in depth) of ResNets (Haber and Ruthotto, 2017). 
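The parameterization and training objective are formalized in the equations that follow. As a rough preview, a NODE fit of this kind can be sketched as below; the `torchdiffeq` integrator, the network size, the optimizer settings, and the toy circular demonstrations are assumptions made for illustration, not the authors' implementation (which relies on the checkpointed solver of (Kidger, 2022)).

```python
import torch
from torch import nn
from torchdiffeq import odeint  # assumed integrator; any NODE solver could be substituted


class VectorField(nn.Module):
    """MLP vector field f_theta: R^d -> R^d for the autonomous ODE dx/dt = f_theta(x)."""

    def __init__(self, d, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, d),
        )

    def forward(self, t, x):  # odeint passes (t, x); the field itself is time-invariant
        return self.net(x)


def train_node(demos, timestamps, epochs=200, lr=1e-3):
    """demos: (D, T, d) tensor of demonstrations sampled at the shared `timestamps`."""
    f_theta = VectorField(demos.shape[-1])
    opt = torch.optim.Adam(f_theta.parameters(), lr=lr)
    x0 = demos[:, 0, :]  # each rollout starts from the first sample of its demonstration
    for _ in range(epochs):
        opt.zero_grad()
        pred = odeint(f_theta, x0, timestamps, method="dopri5")  # (T, D, d)
        loss = ((pred.permute(1, 0, 2) - demos) ** 2).mean()     # MSE over demos and time
        loss.backward()
        opt.step()
    return f_theta


# toy data: two noisy circular "wiping" demonstrations in R^2
t = torch.linspace(0.0, 2 * torch.pi, 100)
circle = torch.stack([torch.cos(t), torch.sin(t)], dim=-1)
demos = torch.stack([circle, circle + 0.02 * torch.randn_like(circle)])
f_theta = train_node(demos, t)
```

Only state samples are fitted; no velocity labels are needed, since the solver itself produces the rollout that is compared against the demonstrations.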
We parameterize our models of nominal target trajectories as: \[\frac{d\hat{x}(t)}{dt}=f_{\theta}(\hat{x}(t)), \tag{3}\] where \(f_{\theta}(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a neural network with parameters \(\theta\), and \(\hat{x}(t)\in\mathbb{R}^{d}\) is the state variable predicted by \(f_{\theta}(\cdot)\) at time \(t\). In the forward pass, given the integration time interval \([a,b]\) and an initial point \(\hat{x}(a)\), the model outputs the trajectory \(\hat{x}(t)\) for \(t\in[a,b]\). The trajectory is obtained by solving the differential equation in (19) using a general-purpose differential equation solver based on fifth-order Runge-Kutta (Butcher, 1996). We set \(f_{\theta}(\cdot)\) to be a Multi-Layer Perceptron (MLP), where the inputs and outputs are in \(\mathbb{R}^{d}\) so that the trajectory predictions evolve in the relevant state space. We consider the supervised learning setup with training data \(\mathcal{D}\). The predictions of the state \(\hat{x}_{i}(t_{k})\) by the model \(f_{\theta}\) are obtained via integration: \[\hat{x}_{i}(t_{k+1})=\hat{x}_{i}(t_{k})+\int_{t_{k}}^{t_{k+1}}f_{ \theta}(\hat{x}_{i}(s))\ ds,\forall\ k\in\{1,2,\ldots,T-1\} \tag{4}\] where we set \(\hat{x}_{i}(t_{1})=x_{i}(t_{1}),\ \forall\ i\in\{1,2,\ldots,D\}\). We apply empirical risk minimization with loss \[\min_{\theta}\frac{1}{DT}\sum_{i=1}^{D}\sum_{k=1}^{T}\Bigl{\|}x_{i }(t_{k})-\hat{x}_{i}(t_{k})\Bigr{\|}_{2}^{2}, \tag{5}\] to learn the parameters \(\theta\), where the predictions \(\hat{x}_{i}(t_{k})\) are generated as in (4).1 In contrast to previous work (Figueroa and Billard, 2022; Khansari-Zadeh and Billard, 2014) which learns a map \(\hat{f}(x(t))\) using labeled data \(\{x(t),\dot{x}(t)\}\), we do not assume access to velocity measurements as they are often not easily collected and/or noisy (Purwar et al., 2008; Xiao et al., 2020). Further, noisy velocity measurements might cause the map to overfit and lead to aggressive trajectories at inference that are not desirable for the low-level controller. From our results presented in Fig. 8 and Section 4, we observe that the NODE model generates smooth trajectories utilizing only state variables \(x(t)\) to learn \(f_{\theta}\) and not their derivatives \(\dot{x}(t)\). While such a NODE-based vector field will behave reliably on and near the training data, if there are unanticipated disturbances or obstacles during deployment, the robot might deviate to regions of the state-space where the learned vector field is unreliable. Next, we present a methodology that computes a correction term to ensure that the robot robustly and safely tracks the learned target trajectory. Footnote 1: A binomial checkpoint method is used during training for solving (21) as implemented in (Kidger, 2022). ## 3 Enforcing Stability and Safety via Virtual Control Inputs We begin with a review of control theoretic tools that provide sufficient conditions for stability and safety of dynamical systems. We consider a nonlinear control affine dynamical system \[\dot{x}=g(x)+h(x)u, \tag{6}\] where, \(x\in\mathcal{X}\subset\mathbb{R}^{d}\) and \(u\in\mathcal{U}\subset\mathbb{R}^{m}\) are the set of allowable states and control inputs, respectively. The DS-based motion plan (2) is a nonlinear control affine DS with \(g(x)=\hat{f}(x)\) and \(h(x)=1\). ### Control Lyapunov Functions We consider the objective of asymptotically stabilizing the DS (6). Without loss of generality, we focus on stabilizing the system to the origin: \(x^{*}=0\). 
If we can find a control law \(u\) that decreases a positive definite function \(V(\cdot):\mathbb{R}^{d}\to\mathbb{R}_{\geq 0}\) to zero for the dynamics (6), then, asymptotic stability is guaranteed. Such a function \(V(\cdot)\) is termed as a CLF and the formal definition is given below (Ames et al., 2019). **Definition 1**: _A continuously differentiable function \(V(\cdot):\mathbb{R}^{d}\to\mathbb{R}_{\geq 0}\) is a Control Lyapunov Function (CLF) for the dynamical system (6) if it is positive definite and there exists a class \(\mathcal{K}_{\infty}\) function \(\alpha(\cdot):\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) that satisfies_ \[\inf_{u\in\mathcal{U}}\nabla_{x}V(x)^{\top}\left(g(x)+h(x)u\right)\leq-\alpha( V(x)),\ \forall\ x\in\mathcal{X}. \tag{7}\] A function \(\alpha(\cdot)\) belongs to class \(\mathcal{K}_{\infty}\) if \(\alpha(0)=0\) and \(\alpha(\cdot)\) is strictly increasing. The set of all controllers that satisfy the condition in (7) for each \(x\in\mathcal{X}\) is \[K_{clf}(x):=\{u\in\mathcal{U}:\nabla_{x}V(x)^{\top}\left(g(x)+h(x)u\right)\leq -\alpha(V(x))\}. \tag{8}\] The following result on asymptotic stability follows from (Ames et al., 2019). **Theorem 1**: _If there exists a Control Lyapunov Function (CLF) as given in Definition 1 for a nonlinear control affine system (6), then, any Lipschitz continuous feedback control law \(u(x)\in K_{clf}(x)\) asymptotically stabilizes the dynamical system (6) to the origin \(x^{*}=0\)._ ### Control Barrier Functions We define safety with respect to a safe set \(\mathcal{C}\subseteq\mathcal{X}\) for the system (6). The safe set \(\mathcal{C}\) is defined as the super-level set of a function \(B(\cdot):\mathbb{R}^{d}\to\mathbb{R}\), that results in three important sets: \[\mathcal{C}=\{x\in\mathcal{X}:B(x)\geq 0\},\ \partial\mathcal{C}=\{x\in \mathcal{X}:B(x)=0\},\ \mathrm{Int}(\mathcal{C})=\{x\in\mathcal{X}:B(x)>0\}, \tag{9}\] where \(\partial\mathcal{C}\) is the boundary for safety and \(\mathrm{Int}(\mathcal{C})\) is the interior of the safe set \(\mathcal{C}\). Our safety objective is to find a control input \(u\) such that the states \(x\) that evolve according to the dynamics (6) always stay inside the safe set \(\mathcal{C}\). Such an objective is formalized using _forward invariance_ of the safe set \(\mathcal{C}\). Let \(u(x)\) be a feedback control law such that closed loop dynamical system \[\dot{x}=g(x)+h(x)u(x) \tag{10}\] is locally Lipschitz. The locally Lipschitz condition guarantees the existence of a unique solution \(x(t)\) to (10) for a given initial condition \(x_{0}=x(0)\) and all \(t\in[0,t_{max})\). If \(t_{max}=\infty\), then the system (10) is forward complete (Khalil, 2015). Forward invariance and CBFs are defined as follows (Ames et al., 2019). **Definition 2**: _The safe set \(\mathcal{C}\) is forward invariant if for every initial point \(x(0)=x_{0}\in\mathcal{C}\), the future states \(x(t)\in\mathcal{C}\) for all \(t\geq 0\)._ **Definition 3**: _Let \(\mathcal{C}\) be the super-level set of a continuously differentiable function \(B(\cdot):\mathbb{R}^{d}\to\mathbb{R}\) as given in (9). Then, \(B\) is a Control Barrier Function (CBF) for the dynamical system (6) and safe set \(\mathcal{C}\) if there exists an extended class \(\mathcal{K}_{\infty}\) function \(\gamma(\cdot)\) that satisfies_ \[\sup_{u\in\mathcal{U}}\nabla_{x}B(x)^{\top}\left(g(x)+h(x)u\right)\geq-\gamma (B(x)),\ \forall\ x\in\mathcal{X}. 
\tag{11}\] We aim to render the set \(\mathcal{C}\) forward invariant for the system (10) through an appropriate choice of control input \(u(x)\). The set of all control inputs that satisfy the condition in (11) for each \(x\in\mathcal{X}\) is \[K_{cbf}(x):=\{u\in\mathcal{U}:\nabla_{x}B(x)^{\top}\left(g(x)+h(x)u\right)\geq- \gamma(B(x))\}. \tag{12}\] The formal result on safety follows from (Ames et al., 2019). **Theorem 2**: _Let \(B\) be a Control Barrier Function (CBF) as given in Definition 3 for a safe set \(\mathcal{C}\) and a nonlinear control affine system (6). Let \(u(x)\in K_{cbf}(x)\) be a locally Lipschitz feedback control law. Then, the following holds: \(x(0)\in\mathcal{C}\implies x(t)\in\mathcal{C}\) for all \(t\in[0,t_{max})\). If the set \(\mathcal{C}\) is compact, then, \(\mathcal{C}\) is forward invariant, i.e., \(t_{max}=\infty\), and \(\mathcal{C}\) is asymptotically stable, i.e., \(\lim_{t\to\infty}x(t)\in\mathcal{C}\) for all \(x(0)\in\mathcal{X}\)._ Since inequalities in (8) and (12) are affine in \(u\), they can be included in efficient optimization-based controllers for control affine systems. We present such an optimization-based planner in Section 3.3 that has strong stability and safety guarantees as claimed in Theorems 1 and 2, respectively. ### Computing the Virtual Control Input We now show how to integrate CLFs and CBFs into the DS-based motion plan (2). In particular, we use the learned NODE \(f_{\theta}\) to generate nominal motion plans, and compute \(u(x)\) using CLFs and CBFs to enforce stability and safety, resulting in a DS-based motion plan of the form: \[\dot{x}=f_{\theta}(x)+u(x), \tag{13}\] where \(x\) is the state of the robot, and \(u(x)\) is the virtual control input. Stability using Control Lyapunov FunctionsWe utilize CLFs described in Section 3.1 to generate a motion plan that always converge to the target trajectory \(x^{*}(t)\) even in the presence of disturbances. Previous work (Khansari-Zadeh and Billard, 2014) have utilized CLFs only for convergence to a single target point using regression methods. In contrast, we present a framework that integrates Neural ODEs for rich behaviors, CBFs for safety, and CLFs to ensure convergence to a target trajectory \(x^{*}(t)\). To that end, we first define the error \(e(t)\) between the robot state and the target trajectory: \(e(t)=x(t)-x^{*}(t)\). For ease of notation, we drop the explicit dependence on time \(t\), and write \(e\), \(x\), and \(x^{*}\) for the current error, state, and target point at time \(t\), respectively. From (13), the error dynamics are given by \[\dot{e}=\dot{x}-\dot{x}^{*}\Rightarrow\dot{e}=f_{\theta}(x)-\dot{x}^{*}+u(x) \tag{14}\] The error dynamics (14) define a nonlinear control affine system (6), where the state of the system is \(e\), and \(u(x)\) is the control input. Hence, by Theorem 1, if there exists a CLF \(V(\cdot)\) for the error dynamics (14), then, any feedback virtual control law \(u(\cdot)\) that satisfies \[\nabla_{e(t)}V(e)^{\top}\left(f_{\theta}(x)-\dot{x}^{*}+u(x)\right)\leq- \alpha(V(e)),\ \forall\ e\in\mathbb{R}^{d} \tag{15}\] will drive the error asymptotically to zero. 
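For the quadratic Lyapunov function adopted below, \(V(e)=\|e\|_{2}^{2}\), together with an illustrative linear choice \(\alpha(s)=\alpha_{0}s\), \(\alpha_{0}>0\) (the linear \(\alpha\) is an assumption, not a choice stated by the authors), condition (15) reads

\[
\nabla_{e}V(e)=2e,\qquad 2\,e^{\top}\big(f_{\theta}(x)-\dot{x}^{*}+u(x)\big)\;\leq\;-\alpha_{0}\,\|e\|_{2}^{2},
\]

so any correction with a sufficiently negative component along the error direction \(e\), relative to the drift \(f_{\theta}(x)-\dot{x}^{*}\), satisfies the constraint.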
During online motion planning, given the current state of the robot \(x\) and information about the target trajectory \(x^{*}\) that encodes the desired task, we compute the smallest \(u(x)\) that satisfies (15) by setting \[u(x)=\underset{v}{\text{argmin}}\big{\|}v\big{\|}_{2}^{2}\quad\text{s.t.}\quad \nabla_{e}V(e)^{\top}\left(f_{\theta}(x)-\dot{x}^{*}+v\right)\leq-\alpha(V(e)), \tag{16}\] where \(\alpha(\cdot)\) is a class \(\mathcal{K}_{\infty}\) function that defines how aggressively the robot tracks the target trajectory. We described how we choose \(x^{*}\) and \(\dot{x}^{*}\) in detail in Section 3.3. The optimization problem (22) is a Quadratic Program (QP) with a single affine inequality and has a closed form solution (Khansari-Zadeh and Billard, 2014). The Lyapunov function we use is \(V(e)=\|e\|_{2}^{2}\), but, note that any positive definite function is a valid CLF due to the presence of the virtual actuation term \(v\), i.e., optimization problem (22) is always feasible. We refer the reader to Fig. 8 to differentiate between the paths generated by only \(f_{\theta}(\cdot)\), and by (13) using the correction term \(u(\cdot)\). We refer to this approach as the CLF-NODE. Safety using Control Barrier FunctionsWe build on the framework presented in Section 3.2 by integrating CBFs into the virtual control input computation to guarantee safety for the generated motion plan. We define safety with respect to a safe set \(\mathcal{C}\subseteq\mathcal{X}\) as described in Section 3.2 for the system (13). From Theorem 2, if there exists a CBF \(B(\cdot)\) for the dynamics (13), then, any feedback control law \(u(\cdot)\) that satisfies \[\nabla_{x}B(x)^{\top}\left(f_{\theta}(x)+u(x)\right)\geq-\gamma(B(x)),\ \forall\ x\in \mathcal{X} \tag{17}\] will render the system (13) safe, where, \(\gamma(\cdot)\) is an extended class \(\mathcal{K}_{\infty}\) function. At inference, the DS-based motion plan is still given by (13), but the virtual control input \(u(x)\) is computed such that it satisfies the CBF condition in (17) for the dynamics \(\dot{x}\) and a given CBF \(B(\cdot)\) for the safe set \(\mathcal{C}\). In cases where an obstacle obstructs the robot moving along the nominal trajectory, the robot should automatically avoid the obstacle without human intervention, but converge back to complete the desired task when possible. However, this may lead to a conflict between preserving safety and stability: during the obstacle avoidance phase, the CLF constraint in (22) may be violated as the robot takes safety preserving actions that increase tracking error. We prioritize safety and obstacle avoidance and adapt the approach proposed in (Ames et al., 2019) for balancing these competing objectives to our setting, and solve an optimization problem with the CBF condition (17) as a hard constraint and the CLF condition (15) as a soft constraint. Given the current state of the robot \(x\) and the target point \(x^{*}\), the optimization problem that guarantees a safe motion plan is \[\begin{split}(u(x),\_)=\operatorname*{argmin}_{\{v,\epsilon\}} \quad\left\|v\right\|_{2}^{2}+\lambda\epsilon^{2}&\text{s.t.} \quad\nabla_{x}B(x)^{\top}\left(f_{\theta}(x)+v\right)\geq-\gamma(B(x))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\nabla_{ e}V(e)^{\top}\left(f_{\theta}(x)-\dot{x}^{*}+v\right)\leq-\alpha(V(e))+ \epsilon\end{split} \tag{18}\] where \(\epsilon\) is a relaxation variable to ensure feasibility of (24) and is penalized by \(\lambda>0\). 
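As a concrete illustration of a single online solve, the sketch below poses the relaxed CLF-CBF program for one circular obstacle with \(B(x)=\|x-c\|_{2}^{2}-r^{2}\), the quadratic CLF \(V(e)=\|e\|_{2}^{2}\), and linear class-\(\mathcal{K}\) functions \(\alpha(s)=\alpha_{0}s\) and \(\gamma(s)=\gamma_{0}s\). The obstacle model, the gains, and the use of `cvxpy` are illustrative assumptions rather than the authors' implementation; in the CLF-only case the same solve reduces to the closed-form minimum-norm correction mentioned above.

```python
import numpy as np
import cvxpy as cp

def clf_cbf_correction(x, x_star, xdot_star, f_theta_x,
                       obs_center, obs_radius,
                       alpha0=2.0, gamma0=1.0, lam=100.0):
    """One solve of the relaxed CLF-CBF quadratic program for the virtual control input."""
    d = len(x)
    v = cp.Variable(d)      # virtual control input (correction to the nominal NODE field)
    eps = cp.Variable()     # relaxation of the CLF constraint (safety stays a hard constraint)

    e = x - x_star                                               # tracking error
    grad_V = 2.0 * e                                             # V(e) = ||e||^2
    B = float(np.sum((x - obs_center) ** 2) - obs_radius ** 2)   # circular-obstacle CBF value
    grad_B = 2.0 * (x - obs_center)

    constraints = [
        grad_B @ (f_theta_x + v) >= -gamma0 * B,                          # CBF (hard)
        grad_V @ (f_theta_x - xdot_star + v) <= -alpha0 * e @ e + eps,    # CLF (soft)
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(v) + lam * cp.square(eps)),
                      constraints)
    prob.solve()
    return v.value

# toy query: the nominal field pushes toward an obstacle while tracking x_star
x         = np.array([0.0, 0.0])
x_star    = np.array([1.0, 0.0])
xdot_star = np.array([0.0, 0.5])
f_theta_x = np.array([1.0, 0.0])
u = clf_cbf_correction(x, x_star, xdot_star, f_theta_x,
                       obs_center=np.array([0.6, 0.0]), obs_radius=0.3)
print("corrected velocity:", f_theta_x + u)
```

In the full pipeline, a solve of this form is evaluated for every candidate point in the look-ahead array by the target-point selection rule described next.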
The problem in (24) is a parametric program, where the parameters of interest are \(\{x,x^{*},\dot{x}^{*}\}\). We abuse notation and denote the optimal virtual control input \(u(x)\) for (24) by \(u(x,x^{*},\dot{x}^{*})\), which will be used in the next section. Problem (24) is a QP that can be solved efficiently in real-time (Mattingley and Boyd, 2012). Multiple CBFs and CLFs can be composed in a way analogous to problem (24) to represent multiple obstacles. We refer to this approach as the CLF-CBF-NODE.

```
Input: \(\mathcal{T}:=\{x^{*}(t_{k})\}_{k=1}^{T}\), \(f_{\theta}(\cdot)\), \(x\), \(N\)
Output: \(\pi(x)\)
\(m\leftarrow\operatorname*{argmin}_{k}\left\|x-x^{*}(t_{k})\right\|_{2}\);
\(\mathcal{T}_{N}:=\{x^{*}(t_{m}),x^{*}(t_{m+1}),\dots,x^{*}(t_{m+N})\}\);
Solve (24) with parameters \(\{x,y,f_{\theta}(y)\}\) for each \(y\in\mathcal{T}_{N}\);
\(\pi(x)\leftarrow\operatorname*{argmin}_{y\in\mathcal{T}_{N}}\left\|u(x,y,f_{\theta}(y))\right\|_{2}^{2}\);
```

**Algorithm 1** Choose target point

**Choosing a Target Point** As shown in Fig. 2, we first integrate the learnt model \(f_{\theta}(\cdot)\) offline to generate the _target array_ \(\mathcal{T}:=\{x^{*}(t_{k})\}_{k=1}^{T}\) from a given initial condition \(x^{*}(t_{1})\). Given an observation of the current state of the robot \(x\) at time \(t\), and the target array \(\mathcal{T}\), we select the next target point \(x^{*}\) for the robot to follow using the map \(\pi(x)\) defined in Algorithm 1. We remove the direct dependence of the target point \(x^{*}\) on time \(t\), which leads to a more reactive motion plan that adapts both to the time delays that are often present during online deployment, and to unforeseen perturbations of the robot away from the nominal plan, e.g., due to human interaction or obstacle avoidance. The look-ahead horizon length \(N\) is used to construct the look-ahead array \(\mathcal{T}_{N}\), consisting of \(N\) future points starting at the current state \(x^{*}(t_{m})\). We choose the target point \(\pi(x)\) from \(\mathcal{T}_{N}\) that results in the smallest norm of the virtual control input when solving (24), among all points in \(\mathcal{T}_{N}\). We use a forward-looking horizon \(N\) to ensure the robot moves forward along the target trajectory, and to the best of our knowledge, this is the first time that the norm of the correction input \(u(\cdot)\) is used as a metric for choosing an appropriate nearest neighbor point in motion planning. We use \(\dot{x}^{*}:=f_{\theta}(\pi(x))\), since \(\pi(x)\in\mathcal{T}\) and we obtained the target array \(\mathcal{T}\) by integrating \(f_{\theta}(\cdot)\). ## 4 Experimental Validation and Results **LASA handwriting dataset** We validate our approach on the LASA 2D handwriting data set (Khansari-Zadeh and Billard, 2011) that contains 30 nonlinear motion sets. Each set has 7 demonstrations: we use four as the training data set, and the remaining three as the test data set. We use the Dynamic Time Warping (DTW) distance (Salvador and Chan, 2007) as the performance metric to compare our NODE model with two existing DS-based learning approaches: SEDS (Khansari-Zadeh and Billard, 2011) and LPV-DS (Figueroa and Billard, 2018). DTW distance measures the dissimilarity between the shapes of the demonstrations and the corresponding reproductions starting from the same initial condition. The DTW distance comparison is given in Figs. 3(a) and 3(b).
We note that although SEDS and LPV-DS use velocity data for regression, which our approach does not have access to, the DTW distance for our NODE approach is approximately half that of existing methods (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2018). We illustrate disturbance rejection in Fig. 4(a) using CLF-NODE and obstacle avoidance in Fig. 4(b) using CLF-CBF-NODE. Further validation on other nonlinear shapes is given in the appendix. **Periodic trajectories**: We validate our approach on handwritten 2D periodic motions of the letters **I, R, O** and **S** given in (Urain et al., 2020) and 3D periodic trajectories that encode three wiping tasks as given in Figs. 5(e), 6(e), 7(e), and 8(e). We compare our method with Imitation Flow (IFlow) (Urain et al., 2020) and a Gaussian Process (GP) (Jaquier et al., 2019) based approach. IFlow is based on normalizing flows and learns stable motions using prior knowledge of whether the underlying dynamics of the demonstrations has a single attractor or a limit cycle, whereas our approach requires no such prior knowledge. We present the DTW distance in Fig. 5(a), training time in Fig. 5(b) and execution time in Fig. 5(c). The execution time is the computation time for a single forward pass of the model to generate the entire trajectory from a given initial point and the time span of the demonstration. Figure 3: Comparison of DTW distance on the (a) Train data set and (b) Test data set from LASA. We also compare the trajectory reproductions for the \(\mathbf{R}\) shape in Fig. 5(d), and for the spiral wiping task by the Franka robot arm in Fig. 5(e). The IFlow approach is not able to learn the complex motion for spiral wiping, but our NODE approach learns it with high accuracy and less computation time. The execution time comparison in Fig. 5(c) is plotted in log scale and we note that our approach (NODE) has a much lower execution time, which is important for real-time robot experiments. Although the GP-based method (Jaquier et al., 2019) is able to learn complex trajectories with accuracy and training time comparable to NODE, the execution time of our approach is much smaller, and the GP method relies on time inputs for desired roll-outs with no capability to generate safe and stable motion plans. All computations are performed on Google Colab. **Robotic experiments** We validate our approach on the Franka Emika robot arm performing two complex nonlinear motions: wiping a human mannequin with a towel as shown in Fig. 6 and wiping a whiteboard with an eraser as given in Fig. 7. We use the passive DS impedance controller given in (Kronander and Billard, 2016). We used \(D=2\) demonstrations for the mannequin tasks, and \(D=3\) demonstrations for the board wiping tasks. Each demonstration had between \(T=300\) and \(T=600\) data samples. The average training time (offline) is \(3-6\) minutes for each task on Google Colab. The obstacle shown in Fig. 7 at \(t=2\) has markers on it that are tracked in real-time by an OptiTrack motion capture system. We observe that the robot tracks the desired nominal trajectories while remaining compliant to human interaction, robust to perturbations, and safe with respect to unforeseen and dynamic obstacles. We include the stirring task and other implementation details in the appendix. Further illustrations of disturbance rejection, obstacle avoidance and trajectory reproductions are presented in [https://sites.google.com/view/lfd-node-clf-cbf/home](https://sites.google.com/view/lfd-node-clf-cbf/home).
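For reference, the DTW score used throughout this section can be computed with the classic \(O(T^{2})\) dynamic program sketched below; the Euclidean point cost and the plain (non-accelerated) recursion are assumptions, since the authors cite the FastDTW approximation of (Salvador and Chan, 2007).

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Classic dynamic-time-warping distance between two trajectories given as
    (T, d) arrays of points, using Euclidean point-to-point costs."""
    Ta, Tb = len(traj_a), len(traj_b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

# toy check: compare a demonstration with a time-shifted reproduction of it
t = np.linspace(0, 2 * np.pi, 200)
demo = np.stack([np.cos(t), np.sin(t)], axis=1)
repro = np.stack([np.cos(t - 0.3), np.sin(t - 0.3)], axis=1)   # lagging copy
print(dtw_distance(demo, repro))
```

A reproduction that merely lags its demonstration incurs a much smaller DTW cost than the unwarped pointwise error, which is why the metric compares trajectory shapes rather than exact timing.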
## 5 Limitations & Future Work The main contribution of this work is a NODE DS-based motion plan with an additive CBF and CLF-based correction term computed with respect to a time-varying target trajectory that ensures safety, stability, and, when possible, task completion. A limitation inherent to all CBF and CLF-based approaches is the risk of converging to a local minimum due to conflicting safety and task-completion constraints--this is a broader research question which falls beyond the scope of this paper. Our experiments are limited to using the Cartesian end-effector position \(x\in\mathbb{R}^{3}\): future work will address this limitation by extending our approach to higher-dimensional coordinate frames such as (a) orientation in \(\mathcal{SO}(3)\) and position in \(\mathbb{R}^{3}\) of the end-effector (Figueroa et al., 2020; Ravichandar and Dani, 2019; Zhang et al., 2022), (b) full pose of the end-effector in \(\mathcal{SE}(3)\) space (Urain et al., 2022), and (c) joint space (Shavit et al., 2018). Another limitation is our simple encoding of obstacles via sublevel sets of CBFs using closed-form expressions: to address this limitation, future work will explore novel (data-driven) representations of obstacles in joint space, e.g., via implicit distance functions (Koptev et al., 2023). Finally, since learning is performed offline, we can use an NN model that is more expensive to train than other DS-based motion planning methods (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2022), which increases training time. Exploring different DS-based models that reduce training time without sacrificing expressivity is another important direction for future work. ## Acknowledgements We thank Rahul Ramesh, Leon Kim, Anusha Srikanthan, Alp Aydinoglu, and Yifan Xue for their valuable inputs and feedback. Figure 4: Illustration of (a) disturbance rejection and (b) obstacle avoidance using our approach. Figure 5: Comparison of performance metrics and trajectory reproductions between IFlow, GP and our approach (NODE) on periodic trajectories. Figure 6: Franka Emika robot arm performing a periodic wiping task for a human mannequin. The blue arrows denote the perturbation. Figure 7: Franka Emika robot arm performing a periodic wiping task on a whiteboard. The purple spheres show the moving obstacle at different times.
We use a Dynamical System (DS) approach to learn complex, possibly periodic motion plans from kinesthetic demonstrations, realized with Neural Ordinary Differential Equations (NODE). To ensure reactivity and robustness to disturbances, we propose a method in which a target point for the robot to follow is selected at each time step by combining tools from control theory with the target trajectory generated by the learned NODE. A correction term for the NODE model is computed online by solving a quadratic program that enforces stability and safety through control Lyapunov functions and control barrier functions, respectively. This approach outperforms baseline DS learning methods on the LASA handwriting dataset and on complex periodic trajectories. It is also validated on the Franka Emika robot arm, stably generating complex motions such as wiping and stirring that do not have a single attractor, while remaining robust to perturbations and safe around humans and obstacles.
2303.17950
Explicit spectral gap for Schottky subgroups of $\mathrm{SL} (2,\mathbb{Z})$
Let $\Gamma$ be a Schottky subgroup of $\mathrm{SL} (2,\mathbb{Z})$. We establish a uniform and explicit lower bound of the second eigenvalue of the Laplace-Beltrami operator of congruence coverings of the hyperbolic surface $\Gamma \backslash \mathbb{H}^2$ provided the limit set of $\Gamma$ is thick enough.
Irving Calderón, Michael Magee
2023-03-31T10:28:54
http://arxiv.org/abs/2303.17950v2
# Explicit spectral gap for Schottky subgroups of \(\mathrm{SL}(2,\mathbb{Z})\) ###### Abstract. Let \(\Gamma\) be a Schottky subgroup of \(\mathrm{SL}(2,\mathbb{Z})\). We establish a uniform and explicit lower bound of the second eigenvalue of the Laplace-Beltrami operator of congruence coverings of the hyperbolic surface \(\Gamma\backslash\mathbb{H}^{2}\) provided the limit set of \(\Gamma\) is thick enough. ## 1. Introduction ### Spectral gaps for hyperbolic surfaces and main result The Laplace-Beltrami operator \(\Delta_{S}\) of a hyperbolic surface \(S=\Gamma\backslash\mathbb{H}^{2}\) encodes important geometric and dynamical features of \(S\). The spectral theory of \(\Delta_{S}\) also has deep connections with number theory when \(\Gamma\) is an arithmetic subgroup of \(\mathrm{SL}(2,\mathbb{R})\). We are interested in the general problem of giving lower bounds of the second smallest eigenvalue \(\lambda_{1}(S)\) of \(\Delta_{S}\) when \(S\) varies in a family \(\mathcal{F}\) of hyperbolic surfaces. We call these _spectral gap results_. In this work, \(\mathcal{F}\) will always be a family of finite covers of a fixed surface \(S\). Not all such families have a spectral gap. For example, suppose that \(S=\Gamma\backslash\mathbb{H}^{2}\) for some \(\Gamma\) admitting a surjective homomorphism \(\psi:\Gamma\to\mathbb{Z}\). The \(\lambda_{1}\) of the finite cover \(S_{(n)}:=\psi^{-1}(n\mathbb{Z})\backslash\mathbb{H}^{2}\) of \(S\) tends to \(0\) as \(n\to\infty\). It is believed that this kind of examples are rare, and that most finite coverings of a hyperbolic surface have a strong spectral gap. For instance, this was recently established in [10] and [11] for random coverings of Schottky surfaces. Many \(\mathcal{F}\) of arithmetic flavor do have a spectral gap, even a uniform one. By an arithmetic \(\mathcal{F}\) we mean the following: suppose \(\Gamma\) is a subgroup of \(\mathrm{SL}(2,\mathbb{Z})\), let \(\psi_{n}\) be the projection \(\mathrm{SL}(2,\mathbb{Z})\to\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\), and let \(G_{n}\) be a subgroup of \(\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\). We take \(\mathcal{F}\) as the hyperbolic surfaces associated to \(\psi_{n}^{-1}(G_{n})\cap\Gamma\). We will focus on the important case where all the \(G_{n}\)'s are \(0\). We call \(\Gamma\cap\psi_{n}^{-1}(0)\) the _\(n\)-th congruence subgroup_ of \(\Gamma\)--which we denote \(\Gamma_{n}\)--. Similarly, \(S_{n}:=\Gamma_{n}\backslash\mathbb{H}^{2}\) is the \(n\)_-th congruence covering_ of \(S=\Gamma\backslash\mathbb{H}^{2}\). More generally, any \(\Gamma\) contained in an arithmetic subgroup of \(\mathrm{SL}(2,\mathbb{R})\) has congruence subgroups. Our main result is a uniform spectral gap for congruence coverings of \(\Gamma\backslash\mathbb{H}^{2}\), for any Schottky subgroup \(\Gamma\) of \(\mathrm{SL}(2,\mathbb{Z})\) with big enough growth exponent--see Subsection 2.1 for the base definitions and facts of hyperbolic surfaces and their Laplace-Beltrami operators, and Subsection 2.2 for the definition of Schottky group--. Before stating it, let us give context on uniform spectral gaps for congruence coverings of hyperbolic surfaces. In the seminal paper [12], Selberg was interested in the family \(\mathcal{F}\) of congruence coverings of \(\mathrm{SL}(2,\mathbb{Z})\backslash\mathbb{H}^{2}\). His motivation came from number theory and, more precisely, modular forms. 
Selberg showed that \(\lambda_{1}(S)\geq\frac{3}{16}\) for every \(S\in\mathcal{F}\).
The proof of our main result relies on the following upper bound of \(m_{\Gamma}(n,J_{\beta})\). Although our methods work only for \(\beta>t_{\Gamma}\), we think the bound should also hold for smaller \(\beta\). **Proposition 3**.: _Let \(\Gamma\) be a Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\) with \(\delta_{\Gamma}>\frac{4}{5}\) and consider \(\beta\in(t_{\Gamma},\delta_{\Gamma})\). There are constants \(\mathsf{C}_{\Gamma}>0,\mathtt{D}_{\Gamma,\beta}>0\) and \(\xi=\xi(\delta_{\Gamma},\beta)\in(0,1)\) with the following property: For any integer \(n>\mathsf{C}_{\Gamma}\) we have_ \[m_{\Gamma}(n,J_{\beta})\leq\mathtt{D}_{\Gamma,\beta}n^{\xi}.\] Let us close by highlighting the key points of our proof of Proposition 3. Unlike Sarnak-Xue [14] and Gamburd [1], who rely on Selberg's Trace Formula to get an upper bound of \(m_{\Gamma}(p,J_{\beta})\), we exploit the fact that the eigenvalues of a Schottky surface \(S\) are zeros of certain dynamical zeta functions \(\zeta\)--which are entire functions on \(\mathbb{C}\)--attached to \(S\). We thus bound the number of eigenvalues of \(S\) in an interval by the number of zeros of a convenient \(\zeta\) in some domain of \(\mathbb{C}\). As explained in Subsection 2.2, any Schottky subgroup \(\Gamma\) of \(\operatorname{SL}(2,\mathbb{R})\) comes with a finite generating set \(\mathscr{G}\) and a distinguished fundamental domain \(\mathcal{F}\) in \(\mathbb{H}^{2}\). Intuitively, the zeta functions of \(S=\Gamma\backslash\mathbb{H}^{2}\) come from nonbacktracking walks in \(\mathbb{H}^{2}\) starting from \(\mathcal{F}\). The classical case is the walk \(\mathscr{W}_{1}\) with steps of length one: a path is an infinite sequence \((x_{0},x_{1},\ldots)\) starting from \(\mathcal{F}\), such that \(x_{n+1}=\gamma x_{n}\) for some \(\gamma\in\mathscr{G}\), and \(x_{n+1}\neq x_{n-1}\). Other zeta functions are obtained by walking faster, for example with steps of length \(m\), or length \(m_{i}\) when moving in a certain "direction" \(C_{i}\). The zeta functions \(\zeta_{\tau,n}\) we use to prove Proposition 3--introduced by Bourgain and Dyatlov in [1]--come from walks with steps of _length at infinity_ \(\tau\). The key point is choosing the \(\tau\) giving the best upper bound for \(m_{\Gamma}(n,J_{\beta})\). ### Arithmetic applications of spectral gaps Here we explain how our main result can improve some arithmetic results obtained by the affine linear sieve of Bourgain, Gamburd and Sarnak [1]. First we recall the general problem addressed by the affine linear sieve and then we present a concrete example. Consider a finitely generated subgroup \(\Gamma\) of \(\operatorname{GL}(d,\mathbb{Z})\), a rational representation \(\rho:\mathbf{GL}(d)\to\mathbf{GL}(m)\), the \(\Gamma\)-orbit \(\mathcal{O}=v_{0}\rho(\Gamma)\) of some \(v_{0}\in\mathbb{Q}^{m}\) and a polynomial \(P(x)\in\mathbb{Q}[x_{1},\ldots,x_{m}]\) taking integral values at \(\mathcal{O}\).
Loosely speaking, the affine linear sieve gives conditions on \(\Gamma,\mathcal{O}\) and \(P\) which guarantee that \(P\) takes values with few prime divisors in a big subset of \(\mathcal{O}\). Dirichlet's Theorem on primes in arithmetic progressions--there are infinitely many prime numbers of the form \(a+nb\) for any relatively prime, nonzero integers \(a,b\)--can be formulated in this way taking \[\Gamma=\left\langle\begin{pmatrix}1&0\\ b&1\end{pmatrix}\right\rangle,\mathcal{O}=(a,1)\Gamma,\quad\text{and}\quad P(x _{1},x_{2})=x_{1}.\] We need some definitions to precise what we mean by _big subset_ of \(\mathcal{O}\) in general. An _\(R\)-almost prime_ is an integer with at most \(R\) prime divisors, counted with multiplicity. For example, \(-12\) is a \(3\)-almost prime. The subsets \[\mathcal{O}_{P}(R)=\{x\in\mathcal{O}\mid P(x)\text{ is an $R$-almost prime}\}\] of \(\mathcal{O}\) grow with \(R\). We say that \((\mathcal{O},P)\)_saturates_ if \(\mathcal{O}_{P}(R)\) is Zariski-dense in \(\mathcal{O}\) for some \(R\). In that case, the minimal such \(R\) is the _saturation number_ of \((\mathcal{O},P)\), and is denoted \(R(\mathcal{O},P)\). The problem we consider is how to show that \((\mathcal{O},P)\) saturates. Better yet, can we give an upper bound of \(R(\mathcal{O},P)\)? The affine linear sieve, and more precisely [1, Theorem 1.1], answers these questions for various \((\mathcal{O},P)\). This result treats the case where \(\rho:\mathbf{GL}(d)\to\mathbf{GL}(d^{2})\) is the representation of \(\operatorname{GL}(d)\) on the space of \(d\times d\) matrices by right multiplication, \(\mathcal{O}\) is the \(\Gamma\)-orbit of \(I_{d}\) and \(P\) is a nontrivial polynomial with rational coefficients. In simple terms, it says that this \((\mathcal{O},P)\) saturates if the Zariski closure of \(\Gamma\) is big enough and \(\Gamma\) has the _square-free expansion hypothesis._ This last condition means that \(\Gamma\) has a finite, symmetric generating set \(\mathscr{G}\) such that the Cayley graphs4 of5\((\Gamma_{q}\backslash\Gamma,\Gamma_{q}\mathscr{G})\), with \(q\) running in the positive, square-free integers, is an _expander family6_. Here is the precise statement of [1, Theorem 1.1]. Footnote 4: If \(\mathscr{G}\) is a finite, symmetric generating set of a group \(G\), the Cayley graph of \((G,\mathscr{G})\) has vertex set \(G\), and \(g_{1},g_{2}\in G\) are joined by and edge if and only if \(g_{1}=sg_{2}\) for some \(s\in\mathscr{G}\). Footnote 5: Recall that \(\Gamma_{q}\) is the kernel of the projection \(\Gamma\to\operatorname{GL}(d,\mathbb{Z}/q\mathbb{Z})\). Footnote 6: Let us recall the definition of expander family of graphs. If \(\mathcal{G}=(V,E)\) is a finite graph, its adjacency operator \(A_{\mathcal{G}}\) acts on \(\mathbb{C}\)-valued functions \(f\) on \(V\) as follows: \((A_{\mathcal{G}}f)(v)\) is the sum of the values of \(f\) at the neighbors of \(v\). \(A_{\mathcal{G}}\) is a self-adjoint operator on \(\ell^{2}(V)\), hence it has real eigenvalues that we denote \(\lambda_{0}(\mathcal{G})\geq\lambda_{1}(\mathcal{G})\geq\cdots\). An infinite sequence \(\mathcal{G}_{n}=(V_{n},E_{n}),n\geq 0\) of finite, \(d\)-regular graphs is an expander family if there is an \(\varepsilon>0\) such that \(\lambda_{0}(\mathcal{G}_{n})-\lambda_{1}(\mathcal{G}_{n})\geq\varepsilon\) for any \(n\), and \(\#V_{n}\to\infty\) as \(n\to\infty\). 
**Theorem 4**.: _Let \(\Gamma\) be a finitely generated subgroup of \(\operatorname{GL}(d,\mathbb{Z})\) whose Zariski closure \(\mathbf{G}\) is a connected, simply connected and absolutely almost simple7 subgroup of \(\mathbf{GL}(d)\) defined over \(\mathbb{Q}\). Let \(P\in\mathbb{Q}[\mathbf{G}]\) be a regular function on \(\mathbf{G}\) that is neither zero nor a unit of the coordinate ring \(\mathbb{Q}[\mathbf{G}]\), and taking integral values on \(\Gamma\). If \(\Gamma\) verifies the square-free expansion hypothesis, then \(R(\Gamma,P)\) is finite. Moreover, if \(P\) is primitive on \(\Gamma^{8}\), there is an explicit upper bound for \(R(\Gamma,P)\) in terms of the spectral gap of the family of Cayley graphs of \(\Gamma/\Gamma_{q}\), where \(q\) runs through the square-free natural numbers._ Footnote 7: These three conditions are in the sense of algebraic groups. For the precise definitions see [10]. Theorem 4 applies to any nonelementary subgroup \(\Gamma\) of \(\operatorname{SL}(2,\mathbb{Z})\) with \(\delta_{\Gamma}>\frac{1}{2}\). Indeed, the Zariski closure of any such \(\Gamma\) is \(\mathbf{SL}(2)\) and, according to [1, Theorem 1.2], the square-free expansion hypothesis for \(\Gamma\) is equivalent to a uniform spectral gap for the square-free congruence coverings of \(\Gamma\backslash\mathbb{H}^{2}\). This last condition is guaranteed by the nonexplicit spectral gap [1, Theorem 1.1]. To get an upper bound for the saturation number we need an explicit spectral gap, such as Gamburd's Theorem 1 or our Theorem 2. Here is a concrete situation in which our main theorem can improve known results on saturation numbers. A _Pythagorean triple_ is an integral solution of the equation \(x_{1}^{2}+x_{2}^{2}=x_{3}^{2}\). The integers \(|x_{1}|,|x_{2}|,|x_{3}|\) are the lengths of the sides of a right triangle, thus the polynomials \[P_{\mathfrak{a}}(x)=\frac{x_{1}x_{2}}{2}\quad\text{and}\quad P_{\mathfrak{h}}( x)=x_{3}\] give respectively the area and the length of the hypotenuse of the triangle associated to \((x_{1},x_{2},x_{3})\) when the \(x_{i}\)'s are nonnegative. Consider also the product of the coordinates \(P_{\mathfrak{c}}(x)=x_{1}x_{2}x_{3}\). Any primitive Pythagorean triple \((x_{1},x_{2},x_{3})\)--the greatest common divisor of the coordinates is \(1\)--with \(x_{3}\geq 0\) is of the form \((a^{2}-b^{2},2ab,a^{2}+b^{2})\) for some \((a,b)\in\mathbb{Z}^{2}\). Let \(\Gamma\) be a finitely generated subgroup of \(\operatorname{SL}(2,\mathbb{Z})\) and identify \(\mathcal{O}=(1,0)\Gamma\) with its corresponding set of Pythagorean triples. The articles [11], [12], [13] and [14] give explicit upper bounds of \(R(\mathcal{O},P)\) for some9\(P\in\{P_{\mathbf{a}},P_{\mathbf{c}},P_{\mathbf{h}}\}\) provided \(\delta_{\Gamma}\) is big enough. A common feature of these works is that the quantitative square-free expansion hypothesis for \(\Gamma\) is deduced from Gamburd's Theorem 1. Using instead our Theorem 2 when \(\Gamma\) is a Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\) with \(\delta_{\Gamma}>\frac{4}{5}\), one can extend the range of parameters for which these results hold10. Similarly, our main result improves the \(\delta\)-range in [1, Theorem 1.2]. Footnote 9: Not all the above mentioned works treat the three polynomials \(P\) we are considering. Footnote 10: If not the \(R\)-values, which jump, but definitely the range of \(\delta\)-values. ### Further work: higher dimensional hyperbolic manifolds In a future work we will extend our Theorem 2 to higher dimensions. 
In the informal discussion that follows we give more details and some context. Let \(M=\Gamma\backslash\mathbb{H}^{d}\) be a \(d\)-dimensional hyperbolic manifold, with \(\Gamma\) contained in an arithmetic subgroup of the group \(\operatorname{SO}^{\circ}(d,1)\) of isometries of \(\mathbb{H}^{d}\) preserving the orientation. We are interested in uniform spectral gaps for congruence coverings of \(M\). As with surfaces, the most studied case is \(M\) of finite volume--two examples are [1, Theoreme 1.2] and [13, Theorem 1]--. In the infinite volume case, Magee established an explicit uniform spectral gap, akin to Gamburd's Theorem 1, when \(\Gamma\) is geometrically finite, Zariski dense in \(\mathbf{SO}(d,1)\) and \(\delta_{\Gamma}\) is big enough--the precise statement is [15, Theorem 1.6]--. We want to improve this result for \(M\) convex cocompact by using an approach similar to our proof of Theorem 2. This is possible since for such an \(M\), the--recurrent part of the--geodesic flow of \(M\) admits a Markov partition that yields a coding by a symbolic dynamical system. Schottky hyperbolic surfaces are a particular instance. Through the thermodynamic formalism we get dynamical zeta functions of \(M\), which we will use to control the eigenvalues of congruence coverings of \(M\). By a similar approach, Sarkar proved a nonexplicit uniform spectral gap for convex cocompact \(M\) in the recent work [11]. We aim for an explicit result provided \(\delta_{\Gamma}\) is big enough. ### Organization of the article The article is divided as follows: We gather base definitions and preliminary results in Section 2. In particular we introduce Fuchsian Schottky groups and we explain the symbolic dynamics of their action on \(\overline{\mathbb{H}^{2}}\). The various zeta functions attached to a Schottky hyperbolic surface \(S\)--as well as the \(\zeta_{\tau,n}\) used in the proof of Proposition 3--are introduced in Section 3, where we also explain the relation between zeros of these and the eigenvalues of \(S\). The main estimate we need to count zeros of \(\zeta_{\tau,n}\) is established in Section 4. Having introduced all the tools we need, we prove our main result in Section 5. ### Acknowledgments We thank Alex Kontorovich for his helpful comments on an earlier version of the article. This project has received founding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949143). ## 2. Preliminaries This section gathers various base definitions and preliminary results. It has four parts: In Subsection 2.1 we recall the concepts and facts about hyperbolic surfaces and its Laplace-Beltrami operator needed in this work. Then we define Fuchsian Schottky groups and the associated Schottky surfaces in Subsection 2.2. There, we also explain the correspondence, given a set \(\mathcal{A}\) of Schottky generators of \(\Gamma\) between elements of \(\Gamma\), reduced words \(w\) in \(\mathcal{A}\), and certain intervals \(I_{w}\subset\partial_{\infty}\mathbb{H}^{2}\). Subsection 2.3 contains some lemmas concerning the lengths of the \(I_{w}\)'s--like how these behave under concatenation of words--and their relation to the word length of \(w\). Finally, in Subsection 2.4 we estimate the number of matrices of norm \(\leq R\) in prime congruence subgroups of \(\mathrm{SL}(2,\mathbb{Z})\). ### Basics on hyperbolic manifolds A _hyperbolic surface_ is a Riemannian surface without boundary of constant curvature \(-1\). 
The _real hyperbolic plane_\(\mathbb{H}^{2}\) is the unique complete, simply connected hyperbolic surface. We will work with the upper half-plane model: \[\mathbb{H}^{2}=\{z\in\mathbb{C}\mid\text{Im }z>0\}.\] with the Riemannian metric \(\frac{dx^{2}+dy^{2}}{y^{2}}\) in coordinates \(z=x+iy\). The action of \(\mathrm{SL}(2,\mathbb{R})\) on \(\mathbb{H}^{2}\) by Mobius transformations is isometric and induces an isomorphism between the group of orientation preserving isometries of \(\mathbb{H}^{2}\) and \(\mathrm{PSL}(2,\mathbb{R})\). We compactify \(\mathbb{H}^{2}\) to \(\overline{\mathbb{H}^{2}}=\mathbb{H}^{2}\cup\partial_{\infty}\mathbb{H}^{2}\) by adding the boundary \(\partial_{\infty}\mathbb{H}^{2}=\mathbb{R}\cup\{\infty\}\). Any hyperbolic surface \(S\) we consider is assumed to be connected, orientable and complete. Thus it can be written as \(\Gamma\backslash\mathbb{H}^{2}\) for some discrete subgroup \(\Gamma\) of \(\mathrm{PSL}(2,\mathbb{R})\) without torsion. We say \(S\) is _geometrically finite_ if some convex polygon in \(\mathbb{H}^{2}\) with finitely many sides is a fundamental domain of \(\Gamma\). This is equivalent--for hyperbolic surfaces--to \(\Gamma\) being finitely generated. Consider a discrete subgroup \(\Gamma\) of \(\mathrm{PSL}(2,\mathbb{R})\). When all the \(\Gamma\)-orbits in \(\overline{\mathbb{H}^{2}}\) are infinite, we say \(\Gamma\) is _nonelementary_. The _limit set_\(\Lambda_{\Gamma}\) of \(\Gamma\) is the set of accumulation points in \(\overline{\mathbb{H}^{2}}\) of \(\Gamma x\) for some \(x\in\mathbb{H}^{2}\). It does not depend on the choice of \(x\). Since \(\Gamma x\) is discrete, \(\Lambda_{\Gamma}\) is contained in \(\partial_{\infty}\mathbb{H}^{2}\). The limit set and its convex hull Conv \(\Lambda_{\Gamma}\) are closed, \(\Gamma\)-invariant subsets of \(\overline{\mathbb{H}^{2}}\). We say that \(\Gamma\) is _convex cocompact_ if \(\Gamma\backslash(\mathbb{H}^{2}\cap\text{Conv }\Lambda_{\Gamma})\) is compact. A surface \(\Gamma\backslash\mathbb{H}^{2}\) is convex cocompact if \(\Gamma\) has this property. There are two kinds of convex cocompact hyperbolic surfaces: in finite volume, the compact ones, and Schottky surfaces--see Subsection 2.2--in infinite volume. The characterization of infinite volume convex cocompact surfaces is a result of Button in [1]. The _growth exponent_\(\delta_{\Gamma}\) of \(\Gamma\) is the infimum of the \(s>0\) such that \[\sum_{\gamma\in\Gamma}\exp(-s\rho(x,\gamma x))<\infty,\] where \(x\in\mathbb{H}^{2}\) and \(\rho\) is the hyperbolic metric. The growth exponent is independent of the choice of \(x\) and lies in the interval \([0,1]\). For finitely generated \(\Gamma\), \(\delta_{\Gamma}\) coincides with the Hausdorff dimension of \(\Lambda_{\Gamma}\). Let us recall the definition of the Laplace-Beltrami operator \(\Delta_{X}\) of a Riemannian manifold \(X\). The reader can find the details in [1, Chapter 4]. Initially, \(\Delta_{X}\) is defined on the space \(\mathscr{C}_{c}^{\infty}(X)\) of smooth functions \(\varphi:X\to\mathbb{C}\) with compact support as follows: \(\Delta_{X}\varphi\) is the divergence of the gradient of \(\varphi\). As unbounded operator of \(L^{2}(X)\)--with respect to the Riemannian measure of \(X\)--with domain \(\mathscr{C}_{c}^{\infty}(X)\), \(\Delta_{X}\) is symmetric by the Green Formula, but not self-adjoint. 
To remedy this one extends--using distributional derivatives--\(\Delta_{X}\) to the following closed subspace \[W_{0}^{2}(X)=\mathrm{cl}_{W^{1}}(\mathscr{C}_{c}^{\infty}(X))\cap\{f\in W^{1}( X)\mid\Delta f\in L^{2}(X)\}\] of the Sobolev space \(W^{1}(X)\). This extension, which we still denote \(\Delta_{X}\), is an unbounded, self-adjoint operator on \(L^{2}(X)\)--see [1, Theorem 4.6]--with spectrum contained in \([0,\infty)\)11. We will work always with this \(\Delta_{X}\). Footnote 11: Taking the appropriate sign convention. Let \(S\) be a geometrically finite hyperbolic surface. Here are some important properties of the spectrum of \(\Delta_{S}\). The last two are respectively due to Sullivan [13] and Lax-Phillips [10]: 1. When \(S\) is noncompact, the continuous part of the spectrum of \(\Delta_{S}\) is \(\left[\frac{1}{4},\infty\right)\)--see [12, Theorem 2.12]--. 2. \(\Delta_{S}\) has eigenvalues--which we sometimes call eigenvalues of \(S\)--if and only if \(\delta_{\Gamma}>\frac{1}{2}\), and in that case the smallest one is \(\lambda_{0}(S)=\delta_{\Gamma}(1-\delta_{\Gamma})\). 3. \(S\) has finitely many eigenvalues in the interval \(\left[0,\frac{1}{4}\right)\), which we denote \(\lambda_{0}(S)\leq\cdots\leq\lambda_{k}(S)\). Moreover, these are all the eigenvalues of \(\Delta_{S}\) when \(S\) has infinite volume12. Footnote 12: A finite-volume \(S\) might have eigenvalues in \(\left[\frac{1}{4},\infty\right)\), even infinitely many. ### Fuchsian Schottky groups For our purposes, working with \(\operatorname{SL}(2,\mathbb{R})\) or \(\operatorname{PSL}(2,\mathbb{R})\) makes no difference, so we stick to \(\operatorname{SL}(2,\mathbb{R})\) for simplicity. By a _Fuchsian group_ we mean a discrete subgroup of \(\operatorname{SL}(2,\mathbb{R})\). We describe now the family of Fuchsian groups relevant to our work. Consider a positive integer \(N\) and \(\mathcal{A}=\{1,2,\ldots,2N\}\). For any \(a\in\mathcal{A}\) we denote \[\widetilde{a}=\begin{cases}a+N&\text{if }a\leq N,\\ a-N&\text{if }a>N.\end{cases}\] Suppose we are given a sequence \(\mathcal{S}=(D_{a},\gamma_{a})_{a\in\mathcal{A}}\) consisting of open disks \((D_{a})_{a\in\mathcal{A}}\) in \(\mathbb{C}\) with centers in \(\mathbb{R}\) and with pairwise disjoint closures13, and matrices \((\gamma_{a})_{a\in\mathcal{A}}\) in \(SL(2,\mathbb{R})\) such that \(\gamma_{\widetilde{a}}=\gamma_{a}^{-1}\) and Footnote 13: We will denote by \(\operatorname{cl}A\) the closure of a subset \(A\) of \(\mathbb{C}\) instead of \(\overline{A}\) to avoid confusions with the complex conjugation. \[\gamma_{a}(\mathbb{H}^{2}-D_{\widetilde{a}})=\mathbb{H}^{2}\cap\operatorname{ cl}D_{a} \tag{2.1}\] for any \(a\in\mathcal{A}\). Such a sequence \(\mathcal{S}\) will be called _Schottky data,_ and we associate to it the subgroup \(\Gamma_{\mathcal{S}}\) of \(\operatorname{SL}(2,\mathbb{R})\) generated by \((\gamma_{a})_{a\in\mathcal{A}}\). A _Fuchsian Schottky_ group is a subgroup \(\Gamma\) of \(\operatorname{SL}(2,\mathbb{R})\) such that \(\Gamma=\Gamma_{\mathcal{S}}\) for some Schottky data \(\mathcal{S}\). Here are some general properties of a Fuchsian Schottky group \(\Gamma=\Gamma_{\mathcal{S}}\): We define \[U=\cup_{a\in\mathcal{A}}D_{a}. \tag{2.2}\] The set \(\mathcal{F}:=\mathbb{H}^{2}-U\) is a fundamental domain for \(\Gamma\) on \(\mathbb{H}^{2}\), so \(\Gamma\) is indeed a Fuchsian group. Moreover, \(\Gamma\) is a free group with basis \(\gamma_{1},\ldots,\gamma_{N}\) by (2.1) and the ping-pong argument. 
Note that the quotient \(\Gamma\backslash\mathbb{H}^{2}\) is a convex cocompact hyperbolic surface of infinite area since the intersection of \(\mathcal{F}\) with the convex hull of \(\Lambda_{\Gamma}\) is compact, \(\mathcal{F}\) has infinite area, and \(\Gamma\) is torsion-free. Conversely, the result of Button in [11] mentioned in Subsection 2.1 says that if \(\Gamma_{0}\backslash\mathbb{H}^{2}\) is a convex cocompact hyperbolic surface of infinite area, then \(\Gamma_{0}\) is a Fuchsian Schottky group. Since \(\Gamma=\Gamma_{S}\) is free, it admits a straightforward coding by words on \(\mathcal{A}\). Let us fix some terminology to describe it. A _word_ with alphabet \(\mathcal{A}\) is either a finite sequence \(w=(a_{1},\ldots,a_{m})\) with all the \(a_{j}\) in \(\mathcal{A}\), or the empty word \(\emptyset\). The word \(w\) is _reduced_ if either \(w=\emptyset\), or \(\widetilde{a_{j}}\neq a_{j+1}\) for any \(j\). We denote by \(\mathcal{W}\) the set of reduced words with alphabet \(\mathcal{A}\), and by \(\mathcal{W}^{\circ}\) the set of nonempty reduced words. Consider any \(w=(a_{1},\ldots,a_{m})\) in \(\mathcal{W}^{\circ}\). The _length_\(|w|\) of \(w\) is \(m\), and we define \(|\emptyset|=0\). Let \(\mathcal{W}_{m}\) and \(\mathcal{W}_{\geq m}\) be respectively the reduced words of length \(m\) and \(\geq m\). Sometimes we will write \(S(w)=a_{1}\) and \(E(w)=a_{m}\) for the initial and last letters of \(w\), respectively. We define \[w^{\prime}=(a_{1},\ldots,a_{m-1})\quad\text{and}\quad\widetilde{w}=(\widetilde {a_{m}},\ldots,\widetilde{a_{1}}).\] We say that \(\widetilde{w}\) is the _mirror word_ of \(w\), and the _mirror set_ of any \(\mathcal{B}\subset\mathcal{W}\) is \[\widetilde{\mathcal{B}}=\{\widetilde{w}\mid w\in\mathcal{B}\}.\] By convention \(\widetilde{\emptyset}=\emptyset\). Note that \(\gamma_{\widetilde{w}}=\gamma_{w}^{-1}\). The concatenation of two words \(w_{1}=(a_{1},\ldots,a_{m}),w_{2}=(b_{1},\ldots,b_{m})\) is \(w_{1}w_{2}:=(a_{1},\ldots,a_{m},b_{1},\ldots b_{m})\). By \(w_{1}\to w_{2}\) we mean that \(w_{1}w_{2}\) is reduced, and by \(w_{1}\rightsquigarrow w_{2}\) we mean that \(w_{1}\) and \(w_{2}\) are nonempty and \(E(w_{1})=S(w_{2})\), in which case \(w_{1}^{\prime}w_{2}\) is reduced. The encoding of \(\Gamma=\Gamma_{\mathcal{S}}\) by \(\mathcal{W}\) is very transparent: We send any \(w=(a_{1},\ldots,a_{m})\in\mathcal{W}^{\circ}\) to \[\gamma_{w}=\gamma_{a_{1}}\cdots\gamma_{a_{m}},\] and \(\gamma_{\emptyset}=I\). Many properties of the surface \(\Gamma\backslash\mathbb{H}^{2}\) can be deduced by studying the action of \(\Gamma\) in \(\overline{\mathbb{H}^{2}}\)14. The following notation is useful to do so: To any \(w=(a_{1},\ldots,a_{m})\in\mathcal{W}^{\circ}\) we associate a disk in \(\mathbb{C}\) and an interval in \(\mathbb{R}\): Footnote 14: The paper [10] of Lalley has several interesting examples. \[D_{w}:=\gamma_{w^{\prime}}(D_{a_{m}}),\quad I_{w}:=D_{w}\cap\mathbb{R}.\] We denote by \(|I_{w}|\) the length of \(I_{w}\). The limit set \(\Lambda_{\Gamma}\) can be encoded by the set \(\mathcal{W}_{\infty}\) of infinite reduced words in \(\mathcal{A}^{\mathbb{Z}_{\geq 1}}\) as follows: For any \(w=(a_{1},a_{2},\ldots)\in\mathcal{W}_{\infty}\) and any integer \(n>0\), let \(w_{n}=(a_{1},\ldots,a_{n})\). Note that \(w_{n}\mathcal{F}\) is contained in \(D_{w_{n}}\), and that the nested sequence of disks \(D_{w_{1}}\supset D_{w_{2}}\supset\cdots\) shrinks to a point \(x_{w}\in\Lambda_{\Gamma}\). 
The map \(\mathcal{W}_{\infty}\to\Lambda_{\Gamma},w\mapsto x_{w}\) is bijective. ### Dynamics on the boundary of \(\mathbb{H}^{2}\) In this subsection \(\Gamma=\Gamma_{\mathcal{S}}\) is a Fuchsian Schottky group. We collect some lemmas from [1] and [14] that we will need about the dynamics of \(\Gamma\) on \(\partial_{\infty}\mathbb{H}^{2}\). From this point we will use the following notation for strictly positive numbers \(A,B\): \(A\ll_{\Gamma}B\) means that there is a positive constant \(C_{\Gamma}\), depending only on \(\Gamma\), such that \(A\leq C_{\Gamma}B\). We define \(A\gg_{\Gamma}B\) in the same fashion, and \(A\asymp_{\Gamma}B\) means that \(A\ll_{\Gamma}B\) and \(A\gg_{\Gamma}B\). We start with general considerations for Mobius transformations given by matrices in \(\mathrm{SL}(2,\mathbb{R})\).Consider two intervals \(J_{1}=[x_{1},y_{1}],J_{2}\) contained in \(\mathbb{R}\). We recall a useful parametrization in [1, Section 2.2] of the orientation-preserving isometries \(\varphi\) of \(\mathbb{H}^{2}\) such that \(\varphi(J_{1})=J_{2}\). When \(J_{1}=J_{2}=[0,1]\), a direct computation shows that any such \(\varphi\) is represented by a unique matrix in \(\mathrm{SL}(2,\mathbb{R})\) of the form \[g_{\alpha}:=\begin{pmatrix}e^{\alpha/2}&0\\ e^{\alpha/2}-e^{-\alpha/2}&e^{-\alpha/2}\end{pmatrix}. \tag{2.3}\] Consider now any \(J_{1},J_{2}\). For any \(J=[x_{J},y_{J}]\subset\mathbb{R}\) we define \[T_{J}=\begin{pmatrix}|J|^{1/2}&x_{J}|J|^{-1/2}\\ 0&|J|^{-1/2}\end{pmatrix}. \tag{2.4}\] Note that the Mobius transformation corresponding to \(T_{J}\), that we still denote \(T_{J}\), is an affine map and \(T_{J}([0,1])=J\). Hence any \(\varphi\) such that \(\varphi(J_{1})=J_{2}\) is represented by a matrix of the form \[T_{J_{2}}g_{\alpha}T_{J_{1}}^{-1}.\] Now suppose that \(g(J_{1})=J_{2}\) for some \(g\in\operatorname{SL}(2,\mathbb{R})\). By the reasoning above we can write \(g\) as \[g=RT_{J_{2}}g_{\alpha}T_{J_{1}}^{-1},\] for some \(R=\pm I\) and \(\alpha\in\mathbb{R}\). Let \(x_{g}=g^{-1}(\infty)\). Note that \(\alpha\) is the unique real number such that \(RT_{J_{2}}g_{\alpha}T_{J_{1}}^{-1}(x_{g})=\infty\). Solving the equation we see that \(\alpha\) must be \[\alpha(g,J_{1})=\log\frac{g^{-1}(\infty)-y_{1}}{g^{-1}(\infty)-x_{1}} \tag{2.5}\] when \(g^{-1}(\infty)\neq\infty\), and \(\alpha(g,J_{1})=0\) when \(g^{-1}(\infty)=\infty\). The number \(\alpha(g,J_{1})\) is what Bourgain and Dyatlov call the _distortion factor_ of \(g\) on \(J_{1}\). We have established the next lemma: **Lemma 5**.: _Let \(J_{1},J_{2}\subset\mathbb{R}\) be compact intervals. We can write any \(g\in\operatorname{SL}(2,\mathbb{R})\) such that \(g(J_{1})=J_{2}\) as \(g=RT_{J_{2}}g_{\alpha(g,J_{1})}T_{J_{1}}^{-1}\) for some \(R\in\{I,-I\}\)._ In the rest of this subsection we work with a Fuchsian Schottky group \(\Gamma=\Gamma_{\mathcal{S}}\). We define \[L_{\Gamma}=\max_{a\in\mathcal{A}}|I_{a}|. \tag{2.6}\] **Lemma 6**.: _Let \(\Gamma\) be a Fuchsian Schottky group. Then \(|I_{w}|\leq L_{\Gamma}\) for any \(w\in\mathcal{W}^{\circ}\)._ Proof.: Since \(w\) is reduced, from (2.1) we see that \(D_{w}=\gamma_{w^{\prime}}(D_{w})\) is contained in \(D_{S(w)}\), and hence \(I_{w}\subset I_{S(w)}\). Thus \(|I_{w}|\leq L_{\Gamma}\). **Lemma 7**.: _Let \(\Gamma\) be a Fuchsian Schottky group. For any \(w\in\mathcal{W}_{\geq 2}\) we have \(|\alpha(\gamma_{w^{\prime}},I_{E(w)})|\ll_{\Gamma}1\)._ Proof.: We write \(w=(a_{1},\dots,a_{n})\). 
Note that \[\gamma_{w^{\prime}}(I_{\widetilde{w^{\prime}}})=\gamma_{a_{1}}(I_{\widetilde{ a_{1}}})=\partial_{\infty}\mathbb{H}^{2}-\operatorname{cl}I_{a_{1}},\] so \(x_{w}:=\gamma_{w^{\prime}}^{-1}(\infty)\) lies in \(I_{\widetilde{w^{\prime}}}\). In particular \(x_{w}\in I_{\widetilde{a_{n-1}}}\). Since \(w\) is reduced, \(\widetilde{a_{n-1}}\neq a_{n}\). The closures of the \((I_{a})_{a\in\mathcal{A}}\) are pairwise disjoint, so \(|x-y|\ll_{\Gamma}1\) whenever \(x\in I_{a},y\in I_{b}\) and \(a\neq b\). The result follows from this observation and the formula (2.5) for \(\alpha(\gamma_{w^{\prime}},I_{a_{n}})\). For any \(w\in\mathcal{W}^{\circ}\) we denote \(\gamma_{w^{\prime}}^{\prime}(z)\) by \(c_{w}(z)\). The next result is [14, Lemma 3.2]. **Lemma 8**.: _Let \(\Gamma\) be a Fuchsian Schottky group. For any \(w\in\mathcal{W}^{\circ}\) and any \(z\in D_{E(w)}\) we have_ \[|c_{w}(z)|\asymp_{\Gamma}|I_{w}|.\] Next we restate [1, (7), p.749], which implies that the length of the intervals \(I_{w}\) decreases at least exponentially in \(|w|\). **Lemma 9**.: _For any Fuchsian Schottky group \(\Gamma\) there is a constant \(\mathsf{c}_{\Gamma}\in(0,1)\) such that_ \[|I_{w}|\leq\mathsf{c}_{\Gamma}|I_{w^{\prime}}|\] _for all \(w\in\mathcal{W}_{\geq 2}\)._ The next result gives the relation between the norm of a \(\gamma\in\Gamma\) and the length of its corresponding interval in \(\mathbb{R}\). **Lemma 10**.: _Let \(\Gamma\) be a Fuchsian Schottky group. For any \(w\in\mathcal{W}^{\circ}\) we have_ \[||\gamma_{w}||\asymp_{\Gamma}|I_{w}|^{-\frac{1}{2}}.\] Proof.: By Lemma 9 there is \(N=N_{\Gamma}\geq 2\) such that \(|I_{w}|<1\) for any \(w\in\mathcal{W}_{\geq N}\). It suffices to prove the result for any such \(w\). First we show that \(||\gamma_{w}||\asymp_{\Gamma}||\gamma_{w^{\prime}}||\). We write \(w=(a_{1},\dots,a_{n})\). Consider an auxiliary multiplicative norm \(\left|\left|\cdot\right|\right|_{m}\) on the space of \(M_{2}(\mathbb{R})\) of \(2\times 2\) real matrices. Any two norms on \(M_{2}(\mathbb{R})\) are equivalent, so \[\left|\left|\gamma_{w}\right|\right|\asymp||\gamma_{w^{\prime}}\gamma_{a_{n}} ||_{m}=\left|\left|\gamma_{w^{\prime}}\right|\right|_{m}\left|\left|\gamma_{a_ {n}}\right|\right|_{m}\asymp_{\Gamma}||\gamma_{w^{\prime}}||\,.\] Since \(\gamma_{w^{\prime}}(I_{a_{n}})=I_{w}\), by Lemma 5 we have \[\gamma_{w^{\prime}}=\pm IT_{I_{w}}g_{\alpha}T_{I_{a_{n}}}^{-1},\] where \(\alpha=\alpha(\gamma_{w^{\prime}},I_{a_{n}})\). Using the auxiliary norm \(\left|\left|\cdot\right|\right|_{m}\) once more we get \[\left|\left|\gamma_{w^{\prime}}\right|\right|\asymp||T_{I_{w}}||\,\left| \left|g_{\alpha}\right|\,\left|\left|T_{I_{a_{n}}}^{-1}\right|\right|.\] Lemma 7 says that \(|\alpha|\ll_{\Gamma}1\), so we see that \(||g_{\alpha}||\asymp_{\Gamma}1\) from (2.3). We also have \(\left|\left|T_{I_{a_{n}}}^{-1}\right|\right|\asymp_{\Gamma}1\) since \[\left\{\left|\left|T_{I_{a}}^{-1}\right|\right|:a\in\mathcal{A}\right\}\] is a finite set of strictly positive real numbers. Thus \(||\gamma_{w^{\prime}}||\asymp_{\Gamma}||T_{I_{w}}||\). Finally, since \(|I_{w}|<1\), from (2.4) we see that \(||T_{I_{w}}||\asymp_{\Gamma}|I_{w}|^{-1/2}\), which completes the proof. We will use repeatedly the fact that the length of the intervals \(I_{w}\) behaves well with respect to the concatenation of words. The statement below is a mild variation of [1, Lemma 2.7]. **Lemma 11**.: _Let \(\Gamma\) be a Fuchsian Schottky group. 
For any \(A,B\in\mathcal{W}^{\circ}\) such that \(A\to B\) we have_ \[|I_{AB}|\asymp_{\Gamma}|I_{A}||I_{B}|.\] Proof.: We use two lemmas of Bourgain and Dyatlov: [1, Lemma 2.6] says that \(|I_{w_{1}^{\prime}}|\asymp_{\Gamma}|I_{w_{1}}|\) for any \(w_{1}\in\mathcal{W}_{\geq 2}\). Also, \(|I_{w_{1}^{\prime}w_{2}}|\asymp_{\Gamma}|I_{w_{1}}||I_{w_{2}}|\) when \(w_{1}\rightsquigarrow w_{2}\) by [1, Lemma 2.7]. Applying these to \(w_{1}=AS(B)\) and \(w_{2}=B\) we get \[|I_{AB}|\asymp_{\Gamma}|I_{w_{1}}||I_{B}|\asymp_{\Gamma}|I_{A}||I_{B}|,\] so we are done. The mirror estimate below is [1, Lemma 2.8]. **Lemma 12**.: _Let \(\Gamma\) be a Fuchsian Schottky group. For any \(w\in\mathcal{W}^{\circ}\) we have_ \[|I_{\widetilde{w}}|\asymp_{\Gamma}|I_{w}|.\] We call _partition_ of \(\mathcal{W}^{\circ}\) any finite subset \(\mathcal{P}\) of \(\mathcal{W}^{\circ}\) for which there is an integer \(N>0\) such that any \(w\in\mathcal{W}_{\geq N}\) has a unique suffix in \(\mathcal{P}\). **Lemma 13**.: _Let \(\Gamma\) be a Fuchsian Schottky group. For any partition \(\mathcal{P}\) of \(\mathcal{W}^{\circ}\) we have_ \[\sum_{w\in\mathcal{P}}|I_{w}|^{\delta_{\Gamma}}\asymp_{\Gamma}1.\] Proof.: Consider a Patterson measure \(\mu\) of \(\Gamma\)15. Recall that \(\mu\) is a probability measure on \(\partial\,\mathbb{H}^{2}\) whose support is the limit set of \(\Gamma\). By [1, Lemma 2.11, p. 755] we have Footnote 15: Nowadays we refer to these as _Patterson-Sullivan measures_. Patterson introduced them in [11] for Fuchsian groups, and Sullivan extended the construction to discrete subgroups of isometries of the real hyperbolic space of any dimension in [10]. \[\mu(|I_{w}|)\asymp_{\Gamma}|I_{w}|^{\delta_{\Gamma}} \tag{2.7}\] for any \(w\in\mathcal{W}^{\circ}\). Since \(\mathcal{P}\) is a partition, the intervals \((I_{w})_{w\in\mathcal{P}}\) are pairwise disjoint and cover \(\Lambda_{\Gamma}\), hence \[1=\sum_{w\in\mathcal{P}}\mu(I_{w}). \tag{2.8}\] The result follows from (2.7) and (2.8). For any \(\tau>0\) we define \[\mathcal{B}(\tau)=\{w\in\mathcal{W}_{\geq 2}\mid|I_{\widetilde{w}}|\leq\tau<|I_ {\widetilde{w}^{\prime}}|\}. \tag{2.9}\] Note that \(\mathcal{B}(\tau)\) is nonempty if and only if \(\tau<L_{\Gamma}\). Although \(\mathcal{B}(\tau)\) is not necessarily a partition, its mirror set is. This fact follows easily from Lemma 9. We state it below for ease of reference. **Lemma 14**.: _Let \(\Gamma\) be a Fuchsian Schottky group. Then \(\widetilde{\mathcal{B}(\tau)}\) is a partition for any \(\tau\in(0,L_{\Gamma})\)._ We also need the next part of [1, Lemma 2.10]: **Lemma 15**.: _Let \(\Gamma\) be a Fuchsian Schottky group and consider \(\tau\in\) (0,\(L_{\Gamma}\)). Then \(|I_{w}|\asymp_{\Gamma}\tau\) for any \(w\in\mathcal{B}(\tau)\cup\widetilde{\mathcal{B}(\tau)}\)._ The next result is an estimate of the size of \(\mathcal{B}(\tau)\). **Lemma 16**.: _Let \(\Gamma\) be a Fuchsian Schottky group and let \(\delta=\delta_{\Gamma}\). For any \(\tau\in(0,L_{\Gamma})\) we have \(\#\mathcal{B}(\tau)\asymp_{\Gamma}\tau^{-\delta}\)._ Proof.: Lemma 14 says that \(\widetilde{\mathcal{B}(\tau)}\) is a partition of \(\mathcal{W}^{\circ}\), so \[\sum_{w\in\widetilde{\mathcal{B}(\tau)}}|I_{w}|^{\delta_{\Gamma}}\asymp_{ \Gamma}1 \tag{2.10}\] by Lemma 13. Any term in the right-hand side of (2.10) is \(\asymp_{\Gamma}\tau^{\delta_{\Gamma}}\) according to Lemma 15, so \(\tau^{\delta_{\Gamma}}\#\widetilde{\mathcal{B}(\tau)}\asymp_{\Gamma}1\). The result follows since \(\#\mathcal{B}(\tau)=\#\widetilde{\mathcal{B}(\tau)}\). 
**Lemma 17**.: _Let \(\Gamma\) be a Fuchsian Schottky group. There are positive constants \(C_{\Gamma,10}\) and \(A_{\Gamma,1}\) with the next property: for any \(\tau\in(0,L_{\Gamma})\) and any \(w\in\mathcal{B}(\tau)\),_ \[C_{\Gamma,10}^{-1}\log(\tau^{-1})\leq|w|\leq C_{\Gamma,10}\log(\tau^{-1})+A_{ \Gamma,1}. \tag{2.11}\] Proof.: We fix \(w\in\mathcal{B}(\tau)\). By Lemma 15 there is \(X_{\Gamma}>1\) such that \[X_{\Gamma}^{-1}|I_{w}|\leq\tau\leq X_{\Gamma}|I_{w}|. \tag{2.12}\] Increasing \(X_{\Gamma}\) if necessary, by Lemma 9 and Lemma 11 we assume further that \[|I_{w_{1}}|\leq X_{\Gamma}^{-1}|I_{w_{1}^{\prime}}| \tag{2.13}\] for any \(w_{1}\in\mathcal{W}_{\geq 2}\), and \[X_{\Gamma}^{-1}|I_{w_{2}}||I_{w_{3}}|\leq|I_{w_{2}w_{3}}| \tag{2.14}\] for any \(w_{2},w_{3}\in\mathcal{W}^{\circ}\) with \(w_{2}\to w_{3}\). Using \(|w|-1\) times (2.13) we get \(|I_{w}|\ll_{\Gamma}X_{\Gamma}^{-|w|}\), which combined with the right-hand side of (2.12) gives \(\tau\ll_{\Gamma}X_{\Gamma}^{-|w|}\). Taking inverse and logarithm on both sides of this inequality proves the right-hand side of (2.11). For the other side, let \(\ell_{\Gamma}:=\min_{a\in\mathcal{A}}|I_{a}|\). Applying \(|w|-1\)times (2.14) we obtain \(|I_{w}|\geq X_{\Gamma}^{-(|w|-1)}\ell_{\Gamma}^{|w|}\), which combined with the left-hand side of (2.12) yields \[\tau\geq\left(\frac{\ell_{\Gamma}}{X_{\Gamma}}\right)^{|w|}.\] After taking inverse and logarithm on both sides we get the left-hand side of (2.11). ### Counting in \(\operatorname{SL}(2,\mathbb{Z})_{n}\) In this section we estimate the number of elements of \(\operatorname{SL}(2,\mathbb{Z})\) congruent to \(I_{2}\pmod{n}\) of norm \(\leq R\). This will be an important ingredient of the main lemma of Subsection 4.2. What we actually need in that proof is a counting in \(\Gamma_{n}\), where \(\Gamma\) is a Fuchsian Schottky group contained in \(\operatorname{SL}(2,\mathbb{Z})\). We do not dispose of a good method to do this directly, hence we settle a counting in \(\operatorname{SL}(2,\mathbb{Z})_{n}\). In our statement we exclude the upper and lower triangular matrices of \(\operatorname{SL}(2,\mathbb{Z})\). These poses no issue for our purposes since Fuchsian Schottky groups do not have unipotents \(\neq I_{2}\). We endow the space of \(2\times 2\) real matrices with the Euclidean norm \[\left|\left|\begin{pmatrix}a&b\\ c&d\end{pmatrix}\right|\right|=\sqrt{a^{2}+b^{2}+c^{2}+d^{2}}.\] Let \(n\) be a positive integer and let \(R>0\). We define \[N_{n}(R)=\left\{\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z})_{n}\mid||\gamma||\leq R,bc \neq 0\right\}.\] We **emphasize** the condition \(bc\neq 0\) above. **Lemma 18**.: _For any \(\varepsilon>0\) there is a positive constant \(D_{\varepsilon}\) with the next property: for any integer \(n\geq 1\) and any \(R>1\) we have_ \[N_{n}(R)\leq D_{\varepsilon}\frac{R^{\varepsilon}}{n^{\varepsilon}}\left(\frac{ R^{2}}{n^{3}}+\frac{R}{n}+1\right).\] We need an auxiliary result to prove Lemma 18. Let \(\textbf{d}(n)\) the number of integers dividing a nonzero integer \(n\). For example, \(\textbf{d}(6)=8\) since the divisors of \(6\) are \(\pm 1,\pm 2,\pm 3\) and \(\pm 6\). The next lemma follows from the formula of \(\textbf{d}(n)\) in terms of the decomposition of \(n\) as product of prime numbers. 
**Lemma 19**.: _For any \(\varepsilon>0\) and any nonzero integer \(n\) we have \(\textbf{d}(n)\ll_{\varepsilon}|n|^{\varepsilon}\)._ Proof of Lemma 18.: Consider any \[\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z})\] such that \(\gamma\equiv I_{2}\pmod{n}\), \(bc\neq 0\) and \(||\gamma||\leq R\). Since \(\det\gamma=1\) and \(b\neq 0\), \(c\) is determined once we fix \(a,b,d\). Let us bound the number of such choices. We will choose \(a,d,b\) in that order. Write \[a=An+1,\quad b=Bn,\quad c=Cn,\quad d=Dn+1,\] for some integers \(A,B,C,D\). Since \(|a|\leq R\), there are at most \(\frac{2R}{n}+1\) choices for \(a\). From \(ad-bc=1\) we get \(d\equiv 2-a\pmod{n^{2}}\). We also know that \(|d|\leq R\), thus there are at most \(\frac{2R}{n^{2}}+1\) choices for \(d\). Since \(|a|\leq R\), then \(|A|\leq\frac{R+1}{n}\). Similarly \(|D|\leq\frac{R+1}{n}\). Note that \(\det\gamma=1\) is equivalent to \[BC=AD+\frac{A+D}{n}.\] Given that \(BC\neq 0\), there are \(\textbf{d}\left(AD+\frac{A+D}{n}\right)\) choices for \(b\). By Lemma 19 \[\textbf{d}\left(AD+\frac{A+D}{n}\right)\ll_{\varepsilon}\left|AD+ \frac{A+D}{n}\right|^{\varepsilon/2} \leq \left[\left(\frac{R+1}{n}\right)^{2}+\frac{2(R+1)}{n^{2}}\right] ^{\varepsilon/2}\] \[\ll \frac{R^{\varepsilon}}{n^{\varepsilon}}.\] To get the last inequality we used \(R>1\). In summary, \[N_{n}(R) \ll_{\varepsilon} \left(\frac{2R}{n}+1\right)\left(\frac{2R}{n^{2}}+1\right)\frac{ R^{\varepsilon}}{n^{\varepsilon}}\] \[= \frac{R^{\varepsilon}}{n^{\varepsilon}}\left(\frac{4R^{2}}{n^{3}} +\left[\frac{2}{n}+\frac{2}{n^{2}}\right]R+1\right)\] \[\ll \frac{R^{\varepsilon}}{n^{\varepsilon}}\left(\frac{R^{2}}{n^{3}} +\frac{R}{n}+1\right).\] ## 3. Zeta functions and eigenvalues of hyperbolic surfaces Here we introduce various zeta functions attached to a convex cocompact hyperbolic surface \(S=\Gamma\backslash\mathbb{H}^{2}\) of infinite volume, and we explain the relation between eigenvalues of \(S\) and the zeros of their zeta functions. This section is has three parts: In Subsection 3.1 we define the Selberg zeta function \(Z_{S}\) of \(S\) and we recall the description of its zeros and multiplicities. To do so we define resonances of \(S\), since most zeros of \(Z_{S}\) are resonances. In Subsection 3.2 we introduce the transfer operators \(\mathcal{L}_{\mathcal{B},s,\sigma}\) associated to some Schottky data \(\mathcal{S}\) of \(\Gamma\), their corresponding dynamical zeta function \(\zeta_{\mathcal{B},\sigma}\) and we give some properties of them we will need. Finally, assuming that \(\Gamma\) is contained in \(\operatorname{SL}(2,\mathbb{Z})\), we introduce the zeta functions \(\zeta_{\tau,n}\) we use in the proof of Proposition 3 to estimate the eigenvalues of \(S_{n}\) in appropriate intervals. ### Selberg's zeta function In this section we explain the relation between the eigenvalues of a convex cocompact hyperbolic surface \(S\) and the zeros of its Selberg zeta function \(Z_{S}\). This connection was established by Patterson and Perry in [10]. Here we will deduce it from a factorization of \(Z_{S}\) due to Borthwick, Judge and Perry. Let us recall the definition of the zeta function of a connected hyperbolic surface \(X\). We denote by \(\mathscr{G}_{X}\) the set of primitive closed geodesics of \(X\). The norm \(N(g)\) of any \(g\in\mathscr{G}_{X}\) is defined as \(e^{\ell(g)}\), where \(\ell(g)\) is the length of \(g\). 
The _Selberg zeta function_ of \(X\) is the holomorphic map in the half-plane \(\operatorname{Re}\,s>\delta_{\Gamma}\) defined by the infinite product \[Z_{X}(s)=\prod_{g\in\mathscr{G}_{X}}\prod_{k=0}^{\infty}\left(1-N(g)^{-(s+k)} \right).\] For a proof of the convergence see [1, p. 33]. For a geometrically finite, convex cocompact hyperbolic surface \(S\) there is an alternative dynamical definition of \(Z_{S}\) that we recall in Subsection 3.2. From it one can see that \(Z_{S}\) has an entire extension, which we denote also by \(Z_{S}\). We describe below the zero set of \(Z_{S}\) in terms of divisors. Let us recall this classical definition. The _divisor_ of an entire function \(\varphi:\mathbb{C}\to\mathbb{C}\) is the map \(\mathbb{C}\to\mathbb{N}\) that encodes the multiplicities of the zeros of \(\varphi\). Concretely, for any \(z\in\mathbb{C}\), let \(d_{z}:\mathbb{C}\to\mathbb{N}\) be the map \[d_{z}(w)=\begin{cases}1&\text{if }z=w,\\ 0&\text{if }z\neq w,\end{cases}\] and let \(M_{\varphi}(z)\) be the multiplicity of \(z\) as zero of \(\varphi\). The divisor of \(\varphi\) is \[\operatorname{div}\,\varphi=\sum_{z\in\mathbb{C}}M_{\varphi}(z)d_{z}.\] To give explicitly the divisor of \(Z_{S}\) we need to introduce a spectral data of \(S\) finer than the eigenvalues. The _resolvent_ of \(S\) is the meromorphic family of bounded operators of \(L^{2}(S)\) given by \[R_{S}(s)=(\Delta_{S}-s(1-s)I)^{-1}.\] It is initially defined on the half-plane \(\operatorname{Re}\,s>\frac{1}{2}\), where the poles are the such that \(\lambda=s(1-s)\) is an eigenvalue of \(S\). By the work [10] of Mazzeo and Melrose we know how to reduce the domain of \(\Delta_{S}\) to extend \(R_{S}\) to all \(\mathbb{C}\) as a meromorphic family of operators \(\mathscr{C}_{c}^{\infty}(S)\to\mathscr{C}^{\infty}(S)\), that we still denote \(R_{S}\). A _resonance_ of \(S\) is a pole of \(R_{S}\). We denote by \(\mathcal{R}_{S}\) the set of resonances of \(S\). Following [1, Definition 8.2, pp. 145], the multiplicity \(m_{s}\) of a resonance \(s\) of \(S\) is defined as \[m_{s}=\operatorname{rank}\,\left(\int_{C}R_{S}(s)ds\right),\] where \(C\) is an anti clockwise oriented circle enclosing \(s\) and no other resonance of \(S\). The multiplicity of a resonance \(s\) with \(\operatorname{Re}\,s>\frac{1}{2}\) coincides with the multiplicity of the eigenvalue \(\lambda=s(1-s)\). The next expression of \(\operatorname{div}\,Z_{S}\) follows from a factorization of \(Z_{S}\) that is a particular case of [1, Theorem 10.1, pp. 213]16. Footnote 16: This is a factorization of the \(Z_{X}\) of any nonelementary, geometrically finite hyperbolic surface \(X\) of infinite area. **Proposition 20**.: _Let \(S\) be a nonelementary, geometrically finite, convex cocompact hyperbolic surface. Then_ \[\text{div}\,\,Z_{S}=-\chi(S)\sum_{n=0}^{\infty}(2n+1)d_{-n}+\sum_{s\in\mathcal{ R}_{S}}m_{s}d_{s}.\] For ease of reference we record in the next corollary the correspondence between resonances of \(S\) and zeros of \(Z_{S}\). We denote by \(\mathbb{Z}_{\leq 0}\) the set of integers \(\leq 0\). **Corollary 21**.: _Let \(S\) be a nonelementary, geometrically finite, convex cocompact hyperbolic surface. The resonances of \(S\) and the zeros of \(Z_{S}\) coincide in \(\mathbb{C}-\mathbb{Z}_{\leq 0}\). Moreover, this correspondence respects multiplicities._ ### Dynamical zeta functions Here we introduce the dynamical zeta functions \(\zeta_{\mathcal{B},\sigma}\) associated to a Fuchsian Schottky group \(\Gamma=\Gamma_{\mathcal{S}}\). 
The parameters \(\mathcal{B}\) and \(\sigma\) are respectively a finite subset of \(\mathcal{W}_{\geq 2}\) and a finite-dimensional unitary representation of \(\Gamma\). The intuition is that we choose \(\mathcal{B}\) depending on the scale at which we want study the dynamics of \(\Gamma\) in \(\Lambda_{\Gamma}\), whereas \(\sigma\) serves to represent a dynamical zeta function of a finite covering of \(S\) as a zeta function on \(S\). We call them dynamical zeta functions because they are defined in terms of transfer operators \(\mathcal{L}_{\mathcal{B},s,\sigma}\) coming from the action of \(\Gamma\) on the open subset \(U\)17 of \(\mathbb{C}\). We start by introducing the operators \(\mathcal{L}_{\mathcal{B},s,\sigma}\) and stating their main properties. Then, we define the dynamical zeta functions \(\zeta_{\mathcal{B},\sigma}\) and we explain how to use them to count eigenvalues of coverings of \(S\). Footnote 17: This was defined in (2.2). Let \(\Gamma=\Gamma_{\mathcal{S}}\) be a Fuchsian Schottky group. First we recall the definition of the classical transfer operator of \(\Gamma\). Remember that \[U=\bigcup_{a\in\mathcal{A}}D_{a}.\] Consider some complex number \(s\) and a measurable map \(f:U\to\mathbb{C}\). We define \(\mathcal{L}_{s}f:U\to\mathbb{C}\) on \(D_{b}\) as \[\mathcal{L}_{s}f(z)=\sum_{\begin{subarray}{c}a\in\mathcal{A}\\ a\neq b\end{subarray}}\gamma^{\prime}_{a}(z)^{s}f(\gamma_{a}(z)). \tag{3.1}\] The term \(\gamma^{\prime}_{a}(z)^{s}\) needs some explanation. Since \(\gamma^{\prime}_{a}\) does not take negative values on \(U-D_{a}\)18 Footnote 18: Let \(\gamma\) be any Möbius transformation of \(\widehat{\mathbb{C}}\). A direct computation shows that if \(\operatorname{Im}\,z\neq 0\), then \(\gamma^{\prime}(z)<0\) if and only if \(\operatorname{Re}\,z=\gamma^{-1}(\infty)\). When \(\gamma=\gamma_{a}\) we know that \(\gamma_{a}^{-1}(\infty)\) is in \(D_{\widehat{a}}\) because \(\gamma_{a}(D_{\widehat{a}})=\widehat{\mathbb{C}}-\operatorname{cl}\,(D_{a})\). , we define \(\gamma^{\prime}_{a}(z)^{s}\) as \(\exp(sF(\gamma^{\prime}_{a}(z))\), where \(F\) is the branch of the logarithm \(\mathbb{C}-(-\infty,0]\to\{\rho\in\mathbb{C}\mid|\operatorname{Im}\,\rho|<\pi\}\). The map \(\mathcal{L}_{s}\) is part of a family of transfer operators of \(\Gamma\) that we define now. Consider a finite subset \(\mathcal{B}\) of \(\mathcal{W}_{\geq 2}\), a complex number \(s\) and a unitary representation \((\sigma,V)\) of \(\Gamma\). For any measurable map \(f:U\to V\) we define \(\mathcal{L}_{\mathcal{B},s,\sigma}f:U\to V\) on \(D_{b}\) as \[\mathcal{L}_{\mathcal{B},s,\sigma}f(z)=\sum_{\begin{subarray}{c}w\in \mathcal{B}\\ w\sim b\end{subarray}}c_{w}(z)^{s}\sigma(\gamma_{w^{\prime}}^{-1})f(\gamma_{w ^{\prime}}(z)). \tag{3.2}\] Recall that \(c_{w}\) means \(\gamma^{\prime}_{w^{\prime}}\). Again, the terms \(c_{w}(z)\) appearing in (3.2)are not in \((-\infty,0]\), thus we define their \(s\)-th power as we did in (3.1). Note that \(\mathcal{L}_{s}=\mathcal{L}_{\mathcal{B},s,\sigma}\) for \(\mathcal{B}=\mathcal{W}_{2}\) and \(\sigma\) the trivial one-dimensional representation of \(\Gamma\). Let us fix the functional spaces where the transfer operators act for our purposes. For any Hilbert space \(V\), the space \[\mathcal{H}(U,V)=\left\{f:U\to V\mid f\text{ is holomorphic},\int_{U}||f(z)||\,dz<\infty\right\}\] endowed with the inner product \[\left\langle f,g\right\rangle=\int_{U}\left\langle f(u),g(u)\right\rangle_{V}dz\] is a Hilbert space. 
The integrals in \(U\) are taken with respect to the Lebesgue measure. \(\mathcal{H}(U,V)\) is a _Bergman space_ of \(U\). The classical Bergman space of \(U\) is \(\mathcal{H}(U,\mathbb{C})\), and we denote it \(\mathcal{H}(U)\). Note that for any unitary representation \((\sigma,V_{\sigma})\) of \(\Gamma\), \(\mathcal{L}_{\mathcal{B},s,\sigma}\) preserves \(\mathcal{H}(U,V_{\sigma})\). From now on we consider \(\mathcal{L}_{\mathcal{B},s,\sigma}\) as an operator on its respective Bergman space. To define the dynamical zeta functions of \(\Gamma\) we need the following fact, which is [17, Lemma 4.1]. **Lemma 22**.: _Let \(\Gamma\) be a Fuchsian Schottky group. For any nonempty finite subset \(\mathcal{B}\) of \(\mathcal{W}_{\geq 2}\), any \(s\in\mathbb{C}\) and any finite dimensional unitary representation \(\sigma\) of \(\Gamma\), the transfer operator \(\mathcal{L}_{\mathcal{B},s,\sigma}\) is trace class._ Since any \(\mathcal{L}_{\mathcal{B},s,\sigma}\) as above is trace class, then \(I-\mathcal{L}_{\mathcal{B},s,\sigma}\) has a well-defined Fredholm determinant. The zeta function \(\zeta_{\mathcal{B},\sigma}\) of \(\Gamma\) is the map \[\zeta_{\mathcal{B},\sigma}(s)=\det(I-\mathcal{L}_{\mathcal{B},s,\sigma}).\] It is entire because \(s\mapsto\mathcal{L}_{\mathcal{B},s,\sigma}\) is an analytic family of bounded operators of \(\mathcal{H}(U,V_{\sigma})\). One of the main reason for us to care about the family of functions \(\zeta_{\mathcal{B},\sigma}\) is that it encompasses the Selberg zeta functions of all the finite coverings of \(S=\Gamma\backslash\mathbb{H}^{2}\). This is part of [17, Proposition 4.4], and it implies that \(Z_{S}\) has an entire extension. **Theorem 23**.: _Let \(\Gamma^{\prime}\) be a finite-index subgroup of a Fuchsian Schottky group \(\Gamma\) and let \(\sigma\) be the regular representation of \(\Gamma\) in \(\ell^{2}(\Gamma^{\prime}\backslash\Gamma)\). Then \(\zeta_{\mathcal{W}_{2},\sigma}\) coincides with the Selberg zeta function of \(\Gamma^{\prime}\backslash\mathbb{H}^{2}\) in the domain of the last one._ Before proceeding our discussion of the functions \(\zeta_{\mathcal{B},\sigma}\) we state a property of the transfer operators that we will need in Section 4. Since \(\mathcal{L}_{\mathcal{B},s,\sigma}\) is trace class, it is in particular a Hilbert-Schmidt operator. Here is an explicit formula of the Hilbert-Schmidt norm of \(\mathcal{L}_{\mathcal{B},s,\sigma}\) taken from [16, Lemma 4.7, p. 156]. We use the following notation: Let \(\mathcal{B}\) be a subset of \(\mathcal{W}\) and consider \(a,b\in\mathcal{A}\). Then \(\mathcal{B}_{a}^{b}\) is the set of \(w\in\mathcal{B}\) whose first and last letters are respectively \(a\) and \(b\). For any \(a\in\mathcal{A}\) we denote \(B_{a}\) the Bergman kernel of \(D_{a}\)19 Footnote 19: The Bergman kernel of a disk \(D\) in \(\mathbb{C}\) of radius \(r\) and center \(c\) is the map \(B_{D}:D\times D\to\mathbb{C}\) given by \[B_{D}(w,z)=\frac{r^{2}}{\pi\left[r^{2}-(w-c)\overline{(z-c)}\right]^{2}}. \tag{3.3}\] It is characterized by the fact that \(f(w)=\int_{D}B_{D}(w,z)f(z)dz\) for any \(f\in\mathcal{H}(D)\) and any \(w\in D\). See [14, Chapter 1] for a proof. To see that this definition makes sense, note that \(\mathcal{L}^{2}_{\mathcal{B}(\tau),s,\sigma_{n}}\) is trace class since \(\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}}\)is trace class by Lemma 22, and the product of two trace class operators is trace class. 
Hence \(1-\mathcal{L}^{2}_{\mathcal{B}(\tau),s,\sigma_{n}}\) has a well-defined Fredholm determinant. Here is the relation between eigenvalues of \(\Gamma_{n}\backslash\mathbb{H}^{2}\) and zeros of \(\zeta_{\tau,\sigma_{n}}\). **Proposition 27**.: _Let \(\Gamma\) be a Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). For any \(\tau\in(0,L_{\Gamma})\) and any positive integer \(n\), if \(s_{0}(1-s_{0})\) is an eigenvalue of \(\Gamma_{n}\backslash\mathbb{H}^{2}\) of multiplicity \(m_{0}\) and \(s_{0}>\frac{1}{2}\), then \(s_{0}\) is a zero of \(\zeta_{\tau,n}\) of multiplicity at least \(m_{0}\)._ Proof.: We denote \(\dim_{\mathbb{C}}\ker(I-\mathcal{L}^{2}_{\mathcal{B}(\tau),s,\sigma_{n}})\) by \(m_{1}\). By Lemma 25 the result we want is equivalent to \(m_{1}\geq m_{0}\). Note that \(m_{0}\) is the dimension of \(\ker(I-\mathcal{L}_{\mathcal{W}_{2},s,\sigma_{n}})\) by Corollary 21, Theorem 23 and Lemma 25. We know that \(\widetilde{\mathcal{B}(\tau)}\) is a partition by Lemma 14, thus \(m_{0}\leq\dim_{\mathbb{C}}\ker(I-\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}})\). It is immediate that \(\dim_{\mathbb{C}}\ker(I-\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}})\leq m_{1}\), so we are done. We defined \(\zeta_{\tau,n}\) using the square of \(\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}}\) in order to have the next upper bound of \(|\zeta_{\tau,n}|\) in terms of the Hilbert-Schmidt norm of \(\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}}\), rather than the trace norm. The advantage of the Hilbert-Schmidt norm is that we have the explicit formula of Lemma 24 for it. **Lemma 28**.: _Let \(\Gamma\) be a Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). For any positive integer \(n\), any \(\tau\in(0,L_{\Gamma})\) and any \(s\in\mathbb{C}\) we have_ \[\log|\zeta_{\tau,n}(s)|\leq\big{|}\big{|}\mathcal{L}_{\mathcal{B}(\tau),s, \sigma_{n}}\big{|}\big{|}^{2}_{HS}\,.\] Proof.: Since \(\mathcal{L}^{2}_{\mathcal{B}(\tau),s,\sigma_{n}}\) is trace class, by [Bor16, Theorem A.32, p.439] \[\log|\zeta_{\tau,n}(s)|\leq||\mathcal{L}^{2}_{\mathcal{B}(\tau),s,\sigma_{n} }||_{\operatorname{tr}}.\] To conclude note that \(||\mathcal{L}^{2}_{\mathcal{B}(\tau),s,\sigma_{n}}||_{\operatorname{tr}}\leq ||\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}}||^{2}_{HS}\) by the Cauchy-Schwartz inequality on the space of Hilbert-Schmidt operators on \(\mathcal{H}(U,\ell^{2}(\Gamma/\Gamma_{n}))\)--see [Bor16, p. 439]--. We will estimate the number of zeros of \(\zeta_{\tau,n}\) in a disk with center \(c\in[0,\infty)\) using Jensen's Formula. It turns out that the contribution of \(|\zeta_{\tau,n}(c)|\) is negligible for any big enough \(c\). The next statement is a special case of [MN20, Proposition 4.8]. **Proposition 29**.: _Let \(\Gamma\) be Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). There are real numbers \(c_{\Gamma}>0\) and \(\tau_{\Gamma}\in(0,1)\) with the next property: for any \(c\geq c_{\Gamma}\), any \(\tau\in(0,\tau_{\Gamma})\) and any integer \(n>0\) we have_ \[-\log|\zeta_{\tau,n}(c)|\leq\tau[\Gamma:\Gamma_{n}].\] ## 4. The main estimate We will estimate the number of zeros of the zeta functions \(\zeta_{\tau,n}\) of a Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\) using Jensen's Formula, which we state now for convenience. The proof can be found most complex analysis books, like [SS03, Chapter 5]. 
**Theorem 30**.: _Let \(\zeta\) be an holomorphic map on a neighborhood of a closed disk \(D=\{z\in\mathbb{C}\mid|z-c|\leq R\}\), and let \(z_{1},\ldots,z_{m}\) be the zeros of \(\zeta\) in \(D\) repeated according to multiplicity. If \(0<|z_{j}-c|<R\) for each \(j\), then_ \[|c_{w_{j}}(z)^{s}|\leq C_{\Gamma,1}^{1/2}\tau,\] for some \(C_{\Gamma,1}>0\). Since \(\mathrm{Re}\ s>0\), then \[|c_{w_{j}}(z)^{s}|\leq e^{\pi|\mathrm{Im}\ s|}C_{\Gamma,1}^{\mathrm{Re}\ s/2} \tau^{\mathrm{Re}\ s}. \tag{4.2}\] Now we bound the term \(B_{S(w_{1})}(\gamma_{w_{1}^{\prime}}(z),\gamma_{w_{2}^{\prime}}(z))\). Note that \(\gamma_{w_{j}^{\prime}}(z)\) is the disk \(D_{w_{j}}\). Since the closures of the disks \(D_{a},a\in\mathcal{A}\) are pairwise disjoint and \(|w_{j}|\geq 2\), the distance from \(\operatorname{cl}D_{w_{j}}\) to the boundary of \(D_{S(w_{1})}\) has a positive lower bound \(\eta\) that depends only on \(\Gamma\). Then we see that \[|B_{S(w_{1})}(\gamma_{w_{1}^{\prime}}(z),\gamma_{w_{2}^{\prime}}(z))|\ll_{ \Gamma}1 \tag{4.3}\] using the formula (3.3) of the Bergman kernel of a disk. For any \((w_{1},w_{2})\in\mathcal{P}_{n}(\tau)\), (4.2) and (4.3) yield \[\int_{D_{E(w_{1})}}\Big{|}c_{w_{1}}(z)^{s}\overline{c_{w_{2}}(z)^ {s}}B_{S(w_{1})}(\gamma_{w_{1}^{\prime}}(z),\gamma_{w_{2}^{\prime}}(z))\Big{|} \,dz \ll_{\Gamma} e^{2\pi|\operatorname{Im}s|}\;C_{\Gamma,1}^{\operatorname{Re}s} \tau^{2\operatorname{Re}s}area(D_{E(w_{1})})\] \[\ll_{\Gamma} e^{2\pi|\operatorname{Im}s|}\;C_{\Gamma,1}^{\operatorname{Re}s} \tau^{2\operatorname{Re}s}\] To obtain the estimate of the statement we apply the triangle inequality to the right-hand side of (4.1) and then use (4.4). ### Estimating the size of \(\mathcal{P}_{n}(\tau)\) Here \(\Gamma\) will also denote a Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). To bound \(|\zeta_{\tau,n}|\) on the half-plane \(\operatorname{Re}\,s>0\) using the estimate for \(||\mathcal{L}_{\mathcal{B}(\tau),s,\sigma_{n}}||_{H^{S}}\) in Lemma 31, we need to estimate \(\#\mathcal{P}_{n}(\tau)\) from above. That is the goal of this subsection. The bound is given in the next lemma, which we prove at the end of this section. We use the following notation: \[\mathbb{K}_{\tau}:=\log(\tau^{-1})+1,\quad\mathbb{L}_{\tau,n}=\log\left(\frac {\tau^{-1}}{n}\right)+1.\] For any \(\varepsilon>0\), let \(D_{\varepsilon}\) be as in Lemma 18. **Lemma 32**.: _Let \(\Gamma\) be a Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). For any positive integer \(n\), any \(\tau\in(0,1)\) less than \(L_{\Gamma}\), and any \(\varepsilon>0\) we have_ \[\#\mathcal{P}_{n}(\tau)\ll_{\Gamma}D_{\varepsilon}\mathbb{K}_{\tau}^{3} \mathbb{L}_{\tau,n}\left(\left[\frac{\tau^{-(2+\varepsilon)}}{n^{3+\varepsilon }}+\frac{\tau^{-(1+\varepsilon)}}{n^{1+\varepsilon}}\right]\mathbb{L}_{\tau,n }+\frac{\tau^{-\delta_{\Gamma}}}{n^{\delta_{\Gamma}}}\right)+\tau^{-\delta_{ \Gamma}}. \tag{4.5}\] The next easy corollary will simplify the computations for the upper bound of \(m_{\Gamma}(n,\beta)\). **Corollary 33**.: _Let \(\Gamma\) be a Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). For any integer \(n\geq 2\), any \(\tau\) in \(\left(0,\min\left\{\frac{1}{n+1},L_{\Gamma}\right\}\right)\), and any \(\varepsilon>0\) we have_ \[\#\mathcal{P}_{n}(\tau)\ll_{\Gamma}D_{\varepsilon}(\log(\tau^{-1}))^{5}\left( \frac{\tau^{-(2+\varepsilon)}}{n^{3+\varepsilon}}+\frac{\tau^{-(1+\varepsilon) }}{n^{1+\varepsilon}}\right)+\tau^{-\delta_{\Gamma}}. 
\tag{4.6}\] Proof.: It suffices to show that the right-hand side of (4.5), which we denote \(M(\varepsilon,\tau,n)\), is less or equal--up to multiplication by a constant--than the right-hand side of (4.6) for any \(\tau\) and \(n\) as in the statement. Since \(X:=\frac{\tau^{-1}}{n}>\frac{n+1}{n}>1\), then \(X^{\delta_{\Gamma}}<X^{1+\varepsilon}\) and \(\mathbb{L}_{\tau,n}>1\). Note also that \(\mathbb{L}_{\tau,n}<\mathbb{K}_{\tau}\), so \[M(\varepsilon,\tau,n) = D_{\varepsilon}\mathbb{K}_{\tau}^{3}\mathbb{L}_{\tau,n}\left( \left[\frac{X^{(2+\varepsilon)}}{n}+X^{1+\varepsilon}\right]\mathbb{L}_{\tau,n }+X^{\delta_{\Gamma}}\right)+\tau^{-\delta_{\Gamma}}\] \[\ll D_{\varepsilon}\mathbb{K}_{\tau}^{5}\left(\frac{X^{(2+\varepsilon )}}{n}+X^{1+\varepsilon}\right)+\tau^{-\delta_{\Gamma}}. \tag{4.7}\] Finally, \(\tau^{-1}\geq n+1\geq 3\), so \(\mathbb{K}_{\tau}<2\log(\tau^{-1})\), which combined with (4.7) gives the result. We already know by Lemma 16 that the contribution to \(\#\mathcal{P}_{n}(\tau)\) of the diagonal of \(\mathcal{B}(\tau)^{2}\) is roughly \(\tau^{-\delta_{\Gamma}}\). Let us focus on the complement \[\mathcal{P}_{n}^{*}(\tau):=\{(w_{1},w_{2})\in\mathcal{P}_{n}(\tau)\mid w_{1} \neq w_{2}\}.\] To estimate the size of \(\mathcal{P}_{n}^{*}(\tau)\) we partition it into subsets which are easier to count. These depend on two parameters \(a,c\in\mathbb{N}\). We use the following notation: For any pair \((w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau)\) there is a unique decomposition \[w_{1}=AB_{1}C,\quad w_{2}=AB_{2}C\] such that \(A,C\in\mathcal{W}^{\circ},B_{1},B_{2}{\in\mathcal{W},A\to B_{i}\to C}\), \(S(B_{1})\neq S(B_{2})\) and \(E(B_{1})\neq E(B_{2})\). Since the \(w_{i}\) are in \(\mathcal{B}(\tau)\), Lemma 11 and Lemma 15 imply that \[|I_{A}|\cdot|I_{B_{i}}|\cdot|I_{C}|\asymp_{\Gamma}\tau. \tag{4.8}\] This observation will be used in the sequel. We set \[G_{\Gamma}=2L_{\Gamma}.\] For any nonnegative integers \(a,c\) we define \[\mathcal{P}_{n}^{*}(\tau,a,c)=\left\{(w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau) \mid 2^{-(a+1)}\leq\frac{|I_{A}|}{G_{\Gamma}}<2^{-a},2^{-(c+1)}\leq\frac{|I_{C}| }{G_{\Gamma}}<2^{-c}\right\}.\] Here is an upper bound of \(\#\mathcal{P}_{n}^{*}(\tau,a,c)\). Recall that \(D_{\varepsilon}\) is as in Lemma 18. **Lemma 34**.: _Let \(\Gamma\) be Fuchsian Schottky subgroup of \(\mathrm{SL}(2,\mathbb{Z})\). Consider an integer \(n\geq 2\) and \(\tau\in(0,\min\{1,L_{\Gamma}\})\). For any \(a,c\in\mathbb{N}\) and any \(\varepsilon>0\) we have_ \[\mathcal{P}_{n}^{*}(\tau,a,c)\ll_{\Gamma}D_{\varepsilon}\mathbb{K}_{\tau}^{3} \left(\frac{\tau^{-(2+\varepsilon)}}{n^{3+\varepsilon}}+\frac{\tau^{-(1+ \varepsilon)}}{n^{1+\varepsilon}}+2^{(a+c)(\delta_{\Gamma}-\varepsilon)}\frac{ \tau^{-\varepsilon}}{n^{\varepsilon}}\right). \tag{4.9}\] To establish Lemma 34 we compare suitable upper and lower bounds of \[\mathscr{S}_{n}^{*}(\tau,a,c):=\sum_{(w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau,a,c)}|I_{A}|^{\delta_{\Gamma}}|I_{B_{2}\widetilde{B_{2}}}|^{\delta_{\Gamma} /2}|I_{C}|^{\delta_{\Gamma}}. \tag{4.10}\] We start with the easiest one. **Lemma 35**.: _Let \(\Gamma\) be a Fuchsian Schottky subgroup of \(\mathrm{SL}(2,\mathbb{Z})\). For any \(\tau\in(0,L_{\Gamma})\), any nonnegative integers \(a,c\), and any integer \(n\geq 1\) we have_ \[\mathscr{S}_{n}^{*}(\tau,a,c)\asymp_{\Gamma}\tau^{\delta_{\Gamma}}\#\mathcal{ P}_{n}^{*}(\tau,a,c).\] Proof.: By Lemma 11 and Lemma 12 we have \(|I_{B_{1}\widetilde{B_{2}}}|\asymp_{\Gamma}|I_{B_{i}}|\cdot|I_{B_{2}}|\). 
Thus each term on the right-hand side of (4.10) is \(\asymp_{\Gamma}\tau^{\delta_{\Gamma}}\) by (4.8), and the result follows. We pass to the upper bound of \(\mathscr{S}_{n}^{*}(\tau,a,c)\). **Lemma 36**.: _Let \(\Gamma\) be a Fuchsian Schottky subgroup of \(\operatorname{SL}(2,\mathbb{Z})\). For any integer \(n\geq 2\), any \(\varepsilon>0\), any \(\tau\in(0,1)\)and any \(a,c\in\mathbb{N}\) we have_ \[\mathscr{S}_{n}^{*}(\tau,a,c)\ll_{\Gamma}D_{\varepsilon}\tau^{\delta_{\Gamma}} \mathbb{R}_{\tau}^{3}\left(\frac{\tau^{-(2+\varepsilon)}}{n^{3+\varepsilon}}+ \frac{\tau^{-(1+\varepsilon)}}{n^{1+\varepsilon}}+2^{(a+c)(\delta_{\Gamma}- \varepsilon)}\frac{\tau^{-\varepsilon}}{n^{\varepsilon}}\right).\] Proof.: For any \((w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau,a,c)\), the word lengths of \(A,B_{1},B_{2}\) and \(C\) are at most \(\max\{|w_{1}|,|w_{2}|\}\), which by Lemma 17 is at most \[E_{\Gamma}(\tau):=\max\left\{C_{\Gamma,10},A_{\Gamma,1}\right\}(\log(\tau^{-1 })+1).\] Let \(\mathcal{E}_{n}(\tau,a,c)\) and \(\mathcal{M}_{n}(\tau,a,c)\) be respectively the set of \(w\in\mathcal{W}\) such that \(w\in\{A,C\}\) and \(w=B_{1}\widetilde{B_{2}}\), for some \((w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau,a,c)\). Let \(M_{n}(\tau,a,c)\) be the maximum of the \(|I_{B}|\), for \(B\in\mathcal{M}_{n}(\tau,a,c)\). Any \(B\in\mathcal{M}_{n}(\tau,a,c)\) has word length \(\leq 2E_{\Gamma}(\tau)\), thus the map \[\mathcal{P}_{n}^{*}(\tau,a,c,)\to\mathcal{E}_{n}(\tau,a,c)^{2}\times\mathcal{ M}_{n}(\tau,a,c),(w_{1},w_{2})\mapsto(A,C,B_{1}\widetilde{B_{2}})\] is at most \(F_{\Gamma}(\tau)\)-to-one, where \(F_{\Gamma}(\tau):=2E_{\Gamma}(\tau)+1\). Hence, \[\mathscr{S}_{n}^{*}(\tau,a,c) \leq F_{\Gamma}(\tau)\sum_{A\in\mathcal{E}_{n}(\tau,a,c)}\sum_{C\in \mathcal{E}_{n}(\tau,a,c)}\sum_{B\in\mathcal{M}_{n}(\tau,a,c)}|I_{A}|^{\delta _{\Gamma}}|I_{C}|^{\delta_{\Gamma}}|I_{B}|^{\delta_{\Gamma}/2}\] \[= F_{\Gamma}(\tau)\left(\sum_{w\in\mathcal{E}_{n}(\tau,a,c)}|I_{w }|^{\delta_{\Gamma}}\right)^{2}\sum_{B\in\mathcal{M}_{n}(\tau,a,c)}|I_{B}|^{ \delta_{\Gamma}/2}\] \[\ll_{\Gamma} \mathbb{K}_{\tau}\left(\sum_{w\in\mathcal{W}\leq E_{\Gamma}(\tau )}|I_{w}|^{\delta_{\Gamma}}\right)^{2}M_{n}(\tau,a,c)^{\delta_{\Gamma}/2}\# \mathcal{M}_{n}(\tau,a,c).\] We will bound separately the last three factors of (4.11). Applying Lemma 13 to each of the partitions \(\mathcal{W}_{N}\) of \(\mathcal{W}\), for \(N\leq E_{\Gamma}(\tau)\), we deduce \[\sum_{w\in\mathcal{W}_{\leq E_{\Gamma}(\tau)}}|I_{w}|^{\delta_{\Gamma}}\asymp _{\Gamma}E_{\Gamma}(\tau)\asymp_{\Gamma}K_{\tau}. \tag{4.12}\] Let us handle \(M_{n}(\tau,a,c)\) and \(\#\mathcal{M}_{n}(\tau,a,c)\). Any \(B=B_{1}\widetilde{B_{2}}\in\mathcal{M}_{n}(\tau,a,c)\) is associated to some \((w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau,a,c)\). We know that \(|I_{B}|\asymp_{\Gamma}|I_{B_{1}}|\cdot|I_{B_{2}}|\) by lemmas 11 and 12, so (4.8) yields \[|I_{B}|\asymp_{\Gamma}|I_{A}|^{-2}|I_{C}|^{-2}\tau^{2}\asymp_{\Gamma}2^{2(a+c )}\tau^{2}. \tag{4.13}\] In particular \[M_{n}(\tau,a,c)^{\delta_{\Gamma}/2}\ll_{\Gamma}2^{(a+c)\delta_{\Gamma}}\tau^{ \delta_{\Gamma}}. \tag{4.14}\] To bound the size of \(\mathcal{M}_{n}(\tau,a,c)\) we use a counting argument in \(\operatorname{SL}(2,\mathbb{Z})\). 
Let \[\gamma_{B}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}.\] Note that \(\gamma_{B}\) is in \(\operatorname{SL}(2,\mathbb{Z})_{n}-\{I_{2}\}\) by definition of \(\mathcal{P}_{n}^{*}(\tau)\), \(bc\neq 0\)--since \(\Gamma\) is Schottky, its only unipotent is \(I_{2}\) and \(\Gamma\) has no torsion--and that \(||\gamma_{B}||\ll_{\Gamma}2^{-(a+c)}\tau^{-1}\) by Lemma 10 and (4.13). To simplify the notation we define \(X=\frac{2^{-(a+c)}\tau^{-1}}{n}\). Thus for any \(\varepsilon>0\) Lemma 18 yields \[\#\mathcal{M}_{n}(\tau,a,c)\ll_{\Gamma}D_{\varepsilon}\left(\frac{X^{2+ \varepsilon}}{n}+X^{1+\varepsilon}+X^{\varepsilon}\right). \tag{4.15}\] To conclude we plug (4.12), (4.14) and (4.15) in (4.11): \[\mathscr{S}_{n}^{*}(\tau,a,c) \ll_{\Gamma} \mathbb{K}_{\tau}^{3}2^{(a+c)\delta_{\Gamma}}\tau^{\delta_{\Gamma }}D_{\varepsilon}\left(\frac{X^{2+\varepsilon}}{n}+X^{1+\varepsilon}+X^{ \varepsilon}\right)\] \[= D_{\varepsilon}\tau^{\delta_{\Gamma}}\mathbb{K}_{\tau}^{3}\left( 2^{(a+c)(\delta_{\Gamma}-2-\varepsilon)}\frac{\tau^{-(2+\varepsilon)}}{n^{3+ \varepsilon}}+\cdots\right.\] \[\left.\qquad\cdots+2^{(a+c)(\delta_{\Gamma}-1-\varepsilon)} \frac{\tau^{-(1+\varepsilon)}}{n^{1+\varepsilon}}+2^{(a+c)(\delta_{\Gamma}- \varepsilon)}\frac{\tau^{-\varepsilon}}{n^{\varepsilon}}\right)\] \[\leq D_{\varepsilon}\tau^{\delta_{\Gamma}}\mathbb{K}_{\tau}^{3}\left( \frac{\tau^{-(2+\varepsilon)}}{n^{3+\varepsilon}}+\frac{\tau^{-(1+\varepsilon )}}{n^{1+\varepsilon}}+2^{(a+c)(\delta_{\Gamma}-\varepsilon)}\frac{\tau^{- \varepsilon}}{n^{\varepsilon}}\right).\] We can now estimate \(\#\mathcal{P}_{n}^{*}(\tau,a,c)\). Proof of Lemma 34.: The result follows from the inequality between the lower and upper bounds of \(\mathscr{S}_{n}^{*}(\tau,a,c)\) given respectively by Lemma 35 and Lemma 36. To get the upper bound of \(\#\mathcal{P}_{n}(\tau)\) we need to know for which pairs \((a,c)\in\mathbb{N}^{2}\) the set \(\mathcal{P}_{n}^{*}(\tau,a,c)\) is nonempty. The next lemma gives a necessary condition. **Lemma 37**.: _Let \(\Gamma\) be Fuchsian Schottky subgroup of \(\mathrm{SL}(2,\mathbb{Z})\). There is a constant \(\mathbb{H}_{\Gamma}>0\) with the next property: If \(\mathcal{P}_{n}^{*}(\tau,a,c)\) is nonempty for some natural numbers \(a,c,n\) with \(n\geq 2\), and \(\tau\in(0,1)\), then_ \[2^{a+c}\leq\mathbb{H}_{\Gamma}\frac{\tau^{-1}}{n}. \tag{4.16}\] Proof.: Consider \((w_{1},w_{2})\in\mathcal{P}_{n}^{*}(\tau,a,c)\), the corresponding \(A,B_{1},B_{2},C\) and \(B:=B_{1}\widetilde{B_{2}}\). To establish the inequality we will compare a lower and an upper bound of \(||\gamma_{B}||\). Since \(w_{1}\neq w_{2}\), then \(\gamma_{B}\neq I_{2}\). We also know that \(\gamma_{B}\equiv I_{2}\pmod{n}\), so \[||\gamma_{B}||\geq\left|\left|\begin{pmatrix}1-n&0\\ 0&1\end{pmatrix}\right|\right|\gg n. \tag{4.17}\] From (4.8) and the definition of \(\mathcal{P}_{n}^{*}(\tau,a,c)\) follows that \[|I_{B_{1}}|\asymp_{\Gamma}\frac{\tau}{|I_{A}|\cdot|I_{C}|}\asymp_{\Gamma}2^{a +c}\tau,\] and similarly for \(|I_{B_{2}}|\). Applying Lemma 10, Lemma 11 and Lemma 12 we get \[||\gamma_{B}||\asymp_{\Gamma}|I_{B}|^{-\frac{1}{2}}\asymp_{\Gamma}(|I_{B_{1}}| \cdot|I_{B_{2}}|)^{-\frac{1}{2}}\asymp_{\Gamma}2^{-a-c}\tau^{-1}. \tag{4.18}\] The result follows from (4.17) and (4.18). We prove now the main result of the section. Proof of Lemma 32.: Let \(\mathcal{B}^{\Delta}(\tau)\) be the diagonal of \(\mathcal{B}(\tau)^{2}\). 
Then \[\#\mathcal{P}_{n}(\tau)=\#\mathcal{P}_{n}^{*}(\tau)+\#\mathcal{B}^{\Delta}(\tau).\] Since \(\#\mathcal{B}^{\Delta}(\tau)=\#\mathcal{B}(\tau)\) and \(\tau\leq L_{\Gamma}\), Lemma 16 gives \[\#\mathcal{B}^{\Delta}(\tau)\asymp_{\Gamma}\tau^{-\delta_{\Gamma}}. \tag{4.19}\] Now we prove two intermediate inequalities to estimate \(\#\mathcal{P}_{n}^{*}(\tau)\). Recall the notation \[\mathbb{K}_{\tau}:=\log(\tau^{-1})+1,\quad\mathbb{L}_{\tau,n}:=\log\left(\frac{\tau^{-1}}{n}\right)+1.\] Consider \(\mathbb{H}_{\Gamma}\) as in Lemma 37, let \(N\) be the greatest integer such that \[2^{N}\leq\mathbb{H}_{\Gamma}\frac{\tau^{-1}}{n},\] and set \[\mathcal{I}_{n}(\tau)=\{(a,c)\in\mathbb{N}^{2}\mid a+c\leq N\}.\] Then \(\mathcal{P}_{n}^{*}(\tau)\) is the disjoint union of the \(\mathcal{P}_{n}^{*}(\tau,a,c)\) with \((a,c)\in\mathcal{I}_{n}(\tau)\) by Lemma 37. If \((a,c)\) is in \(\mathcal{I}_{n}(\tau)\), then \(a\) and \(c\) are \(\ll_{\Gamma}\mathbb{L}_{\tau,n}\). Thus \[\#\mathcal{I}_{n}(\tau)\ll_{\Gamma}\mathbb{L}_{\tau,n}^{2}. \tag{4.20}\] Let us denote \(\frac{\tau^{-1}}{n}\) by \(X\). We also have \[\sum_{(a,c)\in\mathcal{I}_{n}(\tau)}2^{(\delta_{\Gamma}-\varepsilon)(a+c)} = \sum_{m=0}^{N}(m+1)2^{(\delta_{\Gamma}-\varepsilon)m} \tag{4.21}\] \[= \frac{(N+1)2^{(\delta_{\Gamma}-\varepsilon)(N+2)}-(N+2)2^{(\delta_{\Gamma}-\varepsilon)(N+1)}+1}{(2^{\delta_{\Gamma}-\varepsilon}-1)^{2}}\] \[\ll_{\Gamma} N2^{(\delta_{\Gamma}-\varepsilon)N}\] \[\ll_{\Gamma} \mathbb{L}_{\tau,n}X^{\delta_{\Gamma}-\varepsilon}.\] We estimate the size of \(\mathcal{P}_{n}^{*}(\tau)\) using Lemma 34, (4.20) and (4.21): \[\#\mathcal{P}_{n}^{*}(\tau) = \sum_{(a,c)\in\mathcal{I}_{n}(\tau)}\#\mathcal{P}_{n}^{*}(\tau,a,c) \tag{4.22}\] \[\ll_{\Gamma} D_{\varepsilon}\mathbb{K}_{\tau}^{3}\left(\left[\frac{X^{2+\varepsilon}}{n}+X^{1+\varepsilon}\right]\#\mathcal{I}_{n}(\tau)+X^{\varepsilon}\sum_{(a,c)\in\mathcal{I}_{n}(\tau)}2^{(a+c)(\delta_{\Gamma}-\varepsilon)}\right)\] \[\ll_{\Gamma} D_{\varepsilon}\mathbb{K}_{\tau}^{3}\mathbb{L}_{\tau,n}\left(\left[\frac{X^{2+\varepsilon}}{n}+X^{1+\varepsilon}\right]\mathbb{L}_{\tau,n}+X^{\delta_{\Gamma}}\right). \tag{4.23}\] The upper bound for \(\#\mathcal{P}_{n}(\tau)\) follows from (4.19) and (4.23). ## 5. Proof of the main result Here we complete the proof of Theorem 2. Let \(S=\Gamma\backslash\mathbb{H}^{2}\) be a Schottky surface with \(\Gamma\) contained in \(\operatorname{SL}(2,\mathbb{Z})\). Recall that \(J_{\beta}=[0,\beta(1-\beta)]\) and, for any integer \(n\geq 1\), \(m_{\Gamma}(n,J_{\beta})\) is the number of eigenvalues of \(S_{n}=\Gamma_{n}\backslash\mathbb{H}^{2}\) in \(J_{\beta}\). This section is divided into three parts: 5.1 is devoted to the proof of Proposition 3, an upper bound of \(m_{\Gamma}(n,J_{\beta})\) when \(\delta_{\Gamma}>\frac{4}{5}\) and \(\beta\) lies in \((t_{\Gamma},\delta_{\Gamma})\). Then we give in 5.2 a lower bound for the multiplicity of new eigenvalues of \(S_{n}\) provided that all the prime factors of \(n\) are big enough. Finally, in 5.3 we combine these bounds to establish our main result. ### The upper bound of \(m_{\Gamma}(n,J_{\beta})\) We isolate the next computation from the proof of Proposition 3 for the sake of clarity. 
Let \(D_{\varepsilon}\) be as in Lemma 18 and, for any positive numbers \(\delta\) and \(\beta\), let \[t_{\delta}:=\frac{\delta}{6}+\frac{2}{3},\quad\text{and}\quad\ell(\delta, \beta):=t_{\delta}+\frac{\beta-t_{\delta}}{4}.\] **Lemma 38**.: _For any \(\delta\in\left(\frac{4}{5},1\right]\) and any \(\beta\in(t_{\delta},\delta]\) there are \(\mathsf{Z}=\mathsf{Z}(\delta,\beta)>0\), \(\varepsilon=\varepsilon(\delta,\beta)\in(0,1)\) and \(\alpha=\alpha(\delta,\beta)\in(2,5)\) with the next property: For any \(\beta_{1}\in[\ell(\delta,\beta),\beta]\) and any \(x>1\) we have_ \[x^{-2\alpha\beta_{1}}\left[D_{\varepsilon}(\alpha\log x)^{5}\left(x^{\alpha(2 +\varepsilon)-3-\varepsilon}+x^{(\alpha-1)(\varepsilon+1)}\right)+x^{\alpha \delta}\right]\leq\mathsf{Z}x^{-2-\varepsilon}. \tag{5.1}\] Proof.: We fix \(\delta,\beta,\beta_{1}\) and \(x\) as in the statement. Now consider any \(\varepsilon_{1}\in\left(0,\frac{3}{5}\right)\) and \(\alpha_{1}\in(0,5)\). Let \(M(\alpha_{1},\varepsilon_{1})\) be the left-hand side of (5.1) replacing \(\alpha\) and \(\varepsilon\) respectively by \(\alpha_{1}\) and \(\varepsilon_{1}\). From the fact that \(x>1\) and \(\alpha_{1}<5\) we easily get \[(\alpha_{1}\log x)^{5}{\ll_{\varepsilon_{1}}}\;x^{\varepsilon_{1}},\] and thus \[M(\alpha_{1},\varepsilon_{1}) {\ll_{\varepsilon_{1}}} x^{-2\alpha_{1}\beta_{1}}\left[x^{\varepsilon_{1}}\left(x^{ \alpha_{1}(2+\varepsilon_{1})-3-\varepsilon_{1}}+x^{(\alpha_{1}-1)( \varepsilon_{1}+1)}\right)+x^{\alpha_{1}\delta}\right]\] \[= x^{\alpha_{1}(2+\varepsilon_{1}-2\beta_{1})-3}+x^{-\alpha_{1}(2 \beta_{1}-1-\varepsilon_{1})-1}+x^{-\alpha_{1}(2\beta_{1}-\delta)}. \tag{5.2}\] We will show that the three exponents on the right-hand side of (5.2) are \(\leq-2-\varepsilon_{1}\) if \(\varepsilon_{1}\) and \(\alpha_{1}\) are well chosen. Using that \(\frac{4}{5}<t_{\delta}<\frac{5}{6},0<\varepsilon_{1}<\frac{3}{5}\) and \(\delta\leq 1\) we readily see that \[2+\varepsilon_{1}-2\beta_{1},\quad 2\beta_{1}-1-\varepsilon_{1},\quad\text{and} \quad 2\beta_{1}-\delta\] are strictly positive, so the exponents in (5.2) are \(\leq-2-\varepsilon_{1}\) if and only if the next inequalities hold: \[\alpha_{1} \leq \frac{1-\varepsilon_{1}}{2+\varepsilon_{1}-2\beta_{1}}, \tag{5.4}\] \[\alpha_{1} \geq \frac{1+\varepsilon_{1}}{2\beta_{1}-1-\varepsilon_{1}},\] (5.5) \[\alpha_{1} \geq \frac{2+\varepsilon_{1}}{2\beta_{1}-\delta}. \tag{5.3}\] Take now \(\varepsilon_{1}\leq\frac{1}{15}\). Then \[\frac{2+\varepsilon_{1}}{2\beta_{1}-\delta}>\frac{1+\varepsilon_{1}}{2\beta_{1}-1 -\varepsilon_{1}},\] and hence the system reduces to \[\frac{2+\varepsilon_{1}}{2\beta_{1}-\delta}\leq\alpha_{1}\leq\frac{1- \varepsilon_{1}}{2+\varepsilon_{1}-2\beta_{1}}.\] Since \(\beta_{1}\geq\ell(\delta,\beta)\), then \[\frac{2+\varepsilon_{1}}{2\beta_{1}-\delta}\leq\frac{2+\varepsilon_{1}}{2\ell (\delta,\beta)-\delta},\quad\text{and}\quad\frac{1-\varepsilon_{1}}{2+ \varepsilon_{1}-2\ell(\delta,\beta)}\leq\frac{1-\varepsilon_{1}}{2+ \varepsilon_{1}-2\beta_{1}}.\] Reducing \(\varepsilon_{1}\) if necessary, one has \[\frac{2+\varepsilon_{1}}{2\ell(\delta,\beta)-\delta}\leq\frac{1-\varepsilon_ {1}}{2+\varepsilon_{1}-2\ell(\delta,\beta)}. \tag{5.6}\] Indeed, note that the denominators on (5.6) are positive. Multiplying by their product and simplifying we see that (5.6) is equivalent to \[\varepsilon_{1}^{2}+(4-\delta)\varepsilon_{1}\leq\frac{3}{2}(\beta-t_{\delta }).\] Let \(r(\delta,\beta)\) be the positive root of \(X^{2}+(4-\delta)X-\frac{3}{2}(\beta-t_{\delta})\). 
The above reasoning shows that for \[\varepsilon=\min\left\{r(\delta,\beta),\frac{1}{15}\right\}\quad\text{and} \quad\alpha=\frac{2+\varepsilon}{2\ell(\delta,\beta)-\delta},\] one has \(M(\alpha,\varepsilon)\ll_{\varepsilon}x^{-2-\varepsilon}\). This can be reformulated as (5.1) since \(\varepsilon\) depends only on \(\delta\) and \(\beta\). Clearly \(\varepsilon\) belongs to \((0,1)\). As for \(\alpha\), from the assumptions on \(\delta\) and \(\beta\) we easily get that \(t_{\delta}\in\left(\frac{4}{5},1\right)\) and \(\ell(\delta,\beta)\in\left(\frac{4}{5},\delta\right)\). A direct computation shows then that \(\alpha\) is in the interval \((2,5)\). We are ready to establish the upper bound of \(m_{\Gamma}(n,J_{\beta})\). Proof of Proposition 3.: We define \[\mathtt{C}_{\Gamma}=\max\{\tau_{\Gamma},L_{\Gamma}\}^{-\frac{1}{2}},\] where \(\tau_{\Gamma}\) and \(L_{\Gamma}\) are respectively as in Proposition 29 and (2.6). Consider any \(n>\mathtt{C}_{\Gamma}\). The projection \(\Gamma\to\operatorname{SL}(2,\mathbb{Z}/n\mathbb{Z})\) identifies \(\Gamma/\Gamma_{n}\) with a subgroup of \(\operatorname{SL}(2,\mathbb{Z}/n\mathbb{Z})\), so20 Footnote 20: For any prime \(p\) and any integer \(a\geq 1\) we have—see [2, eq. (7.1)]—\(\#\operatorname{SL}(2,\mathbb{Z}/p^{a}\mathbb{Z})=p^{3a-2}(p^{2}-1)<p^{3a}\). For any integer \(n\geq 2\), its decomposition \(p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}\) as product of primes give an isomorphism between \(\operatorname{SL}(2,\mathbb{Z}/n\mathbb{Z})\) and the product of the \(\operatorname{SL}(2,\mathbb{Z}/p_{j}^{a_{j}}\mathbb{Z})\). It follows that \(\#\operatorname{SL}(2,\mathbb{Z}/n\mathbb{Z})<n^{3}\). \[[\Gamma:\Gamma_{n}]\leq\#\operatorname{SL}(2,\mathbb{Z}/n\mathbb{Z})<n^{3}. \tag{5.7}\] To lighten the notation we write \(m(n,\beta)\) instead of \(m_{\Gamma}(n,J_{\beta})\) for the number of eigenvalues of \(\Gamma_{n}\backslash\mathbb{H}^{2}\) in \(J_{\beta}=[0,\beta(1-\beta)]\) counted with multiplicity. We label these as \[\lambda_{n,1}\leq\cdots\leq\lambda_{n,m(n,\beta)}.\] Consider \(s_{n,j}\in\left(\frac{1}{2},\delta_{\Gamma}\right]\) such that \(\lambda_{n,j}=s_{n,j}(1-s_{n,j})\). We fix \(\varepsilon=\varepsilon(\delta_{\Gamma},\beta)\in(0,1)\) and \(\alpha=\alpha(\delta_{\Gamma},\beta)\in(2,5)\) as in Lemma 38. Let \(\tau_{n}=n^{-\alpha}\). The \(s_{n,j}\) are zeros of \(\zeta_{\tau_{n},n}\) by Proposition 27 since \(\tau_{n}<n^{-2}<L_{\Gamma}\). Hence \(m(n,\beta)\) is less or equal than the number of zeros of \(\zeta_{\tau_{n},n}\) inside any circle \(\mathscr{C}\) containing \([\beta,\delta_{\Gamma}]\), which we'll bound with Jensen's Formula--for a convenient \(\mathscr{C}\)--. We have \(\tau_{n}<\tau_{\Gamma}\), so by Proposition 29 there is \(c_{\Gamma}\) such that \[-\log|\zeta_{\tau_{n},n}(c^{\prime})|\leq[\Gamma:\Gamma_{n}]\tau_{n}, \tag{5.8}\] for any \(c^{\prime}\geq c_{\Gamma}\). Recall that \(t_{\Gamma}=\frac{\delta_{\Gamma}}{6}+\frac{2}{3}\) and \(\ell(\delta_{\Gamma},\beta)=t_{\Gamma}+\frac{\beta-t_{\Gamma}}{4}\). The zeta function \(\zeta_{\tau_{n},n}\) is holomorphic and not identically zero, so it has countably many zeros. Thus we can pick \(\beta_{1}\in\left[\ell(\delta_{\Gamma},\beta),\frac{t_{\Gamma}+\beta}{2}\right)\) and \(c\in[c_{\Gamma}+1,c_{\Gamma}+2]\) such that \(\zeta_{\tau_{n},n}\) has no zeros in \(\mathscr{C}\cup\{c\}\), where \(\mathscr{C}\) is the circle of radius \(R:=c-\beta_{1}\) and center \(c\). 
Since the \(s_{n,j}\)'s lie in the interval \([\beta,c)\) we have \[m(n,\beta)\log\left(\frac{R}{c-\beta}\right)\leq\sum_{j=1}^{m(n,\beta)}\log \left(\frac{R}{c-s_{n,j}}\right). \tag{5.9}\] Note that21\(\log^{-1}\left(\frac{R}{c-\beta}\right)\leq A(\Gamma,\beta)\) for some positive constant \(A(\Gamma,\beta)\). Thus, from (5.9) and Jensen's Formula--Theorem 30--applied to \(\zeta_{\tau_{n},n}\) and \(\mathscr{C}\) we obtain Footnote 21: Indeed, \(\frac{R}{c-\beta}=1+\frac{\beta-\beta_{1}}{c-\beta}\geq 1+\frac{\beta-t_{ \Gamma}}{2(c_{\Gamma}+2)}\), so we can take \(A(\Gamma,\beta)=\log^{-1}\left(1+\frac{\beta-t_{\Gamma}}{2(c_{\Gamma}+2)}\right)\). \[m(n,\beta)\leq A(\Gamma,\beta)\left(\frac{1}{2\pi}\int_{0}^{2\pi}\log|\zeta_{ \tau_{n},n}(Re^{i\theta}+c)|d\theta-\log|\zeta_{\tau_{n},n}(c)|\right). \tag{5.10}\] Let us bound the right-hand side of (5.10). From (5.7) and (5.8) we obtain \[-\log|\zeta_{\tau_{n},n}(c)|<n^{3-\alpha}. \tag{5.11}\] For any \(s\in\mathbb{C}\) Lemma 28 gives \[\log|\zeta_{\tau_{n},n}(s)|\leq||\mathcal{L}_{\mathcal{B}(\tau_{n}),s,\sigma_ {n}}||_{{}_{HS}}^{2},\] since \(\tau_{n}<L_{\Gamma}\). Assume now that \(s\) is in \(\mathscr{C}\). Using the upper bound for the Hilbert-Schmidt norm of \(\mathcal{L}_{\mathcal{B}(\tau_{n}),s,\sigma_{n}}\), the estimation of the size of \(\mathcal{P}_{n}(\tau_{n})\)--respectively Lemma 31 and Corollary 4.6--, that \(|\mathrm{Im}\ s|\ll_{\Gamma}1\) and \(\mathrm{Re}\ s\geq\beta_{1}\), we get \[\log|\zeta_{\tau_{n},n}(s)|\ll_{\Gamma}n^{3}n^{-2\alpha\beta_{1}}\left[D_{ \varepsilon}(\alpha\log n)^{5}(n^{\alpha(2+\varepsilon)-3-\varepsilon}+n^{( \alpha-1)(1+\varepsilon)})+n^{\alpha\delta_{\Gamma}}\right].\] By our choice of \(\varepsilon=\varepsilon(\delta_{\Gamma},\beta)\) and \(\alpha=\alpha(\delta_{\Gamma},\beta)\), Lemma 38 implies that \[\log|\zeta_{\tau_{n},n}(s)|\leq\mathtt{E}_{\Gamma,\beta}n^{1-\varepsilon},\] for some \(\mathtt{E}_{\Gamma,\beta}>0\). Integrating over \(\mathscr{C}\) we get \[\frac{1}{2\pi}\int_{0}^{2\pi}\log|\zeta_{\tau_{n},n}(Re^{i\theta}+c)|d\theta \leq\mathtt{E}_{\Gamma,\beta}n^{1-\varepsilon}. \tag{5.12}\] We are ready to conclude. Note that \(\xi:=\max\{1-\varepsilon,3-\alpha\}\) lies in \((0,1)\) since \(\varepsilon\in(0,1)\) and \(\alpha>2\). Using (5.11) and (5.12) in (5.10) we see that \[m(n,\beta)\leq\mathtt{D}_{\Gamma,\beta}n^{\xi}, \tag{5.13}\] for some \(\mathtt{D}_{\Gamma,\beta}>0\) depending only on \(\Gamma\) and \(\beta\). ### Lower bound for the multiplicity of new eigenvalues Let \(\Upsilon\) be a nonelementary, finitely generated subgroup of \(\mathrm{SL}(2,\mathbb{Z})\) and let \(S=\Upsilon\backslash\mathbb{H}^{2}\). In this subsection we establish a lower bound for the multiplicity of new eigenvalues--defined below--of a congruence covering \(S_{n}\) of \(S\) using the representation theory of \(\Upsilon/\Upsilon_{n}\). This well-known strategy was popularized by the work [10] of Sarnak and Xue. The quotient group \(\Upsilon/\Upsilon_{n}\) acts on the left on \(S_{n}=\Upsilon_{n}\backslash\mathbb{H}^{2}\) by \[(\gamma\Upsilon_{n})\cdot\Upsilon_{n}x=\gamma\Upsilon_{n}x.\] The Laplace-Beltrami operator of \(S_{n}\) commutes with \(\Upsilon/\Upsilon_{n}\curvearrowright S_{n}\), hence \(\Upsilon/\Upsilon_{n}\) acts on any eigenspace of \(\Delta_{S_{n}}\). Since \(\Upsilon\) is nonelementary, it is Zariski-dense in \(\mathbf{SL}(2)\). 
A result of Matthews, Vaserstein and Weisfeiler [11] implies then that \(\Upsilon/\Upsilon_{n}\) is isomorphic to \(\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\) for most \(n\). **Lemma 39**.: _Let \(\Upsilon\) be a finitely generated, nonelementary subgroup of \(\mathrm{SL}(2,\mathbb{Z})\). There is an integer \(N_{\Upsilon}\) with the next property: for any \(n\) relatively prime to \(N_{\Upsilon}\), the projection \(\Upsilon\to\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\) is surjective._ Consider an eigenvalue \(\lambda\) of \(S_{n}\). We denote \(E_{n}(\lambda)\) the \(\lambda\)-eigenspace of \(\Delta_{S_{n}}\) and let \(E_{d}^{n}(\lambda)\) be the lift of \(E_{d}(\lambda)\) to \(E_{n}(\lambda)\) whenever \(d\) divides \(n\). We say that \(\lambda\) is an _old eigenvalue_ of \(S_{n}\) if and only if any \(\varphi\in E_{n}(\lambda)\) is a sum \(\sum_{d}\varphi_{d}\), with \(d\) running in the divisors of \(n\) in \([1,n)\), and \(\varphi_{d}\in E_{d}^{n}(\lambda)\). Otherwise \(\lambda\) is a _new eigenvalue_. Let us give an equivalent definition of new eigenvalue. Express \(E_{n}(\lambda)\) as orthogonal sum of irreducible, \(\Upsilon/\Upsilon_{n}\)-invariant subspaces \(\mathcal{H}_{1}\widehat{\oplus}\cdots\widehat{\oplus}\mathcal{H}_{k}\). Note that \(E_{d}^{n}(\lambda)\) is the subspace of vectors of \(E_{n}(\lambda)\) fixed by the normal subgroup \(\Upsilon_{d}/\Upsilon_{n}\) of \(\Upsilon/\Upsilon_{n}\). Hence each nonzero \(E_{d}^{n}(\lambda)\) is a sum of \(\mathcal{H}_{j}\)'s. Thus \(\lambda\) is new if and only if \(E_{n}(\lambda)\) contains an irreducible representation of \(\Upsilon/\Upsilon_{n}\) not factoring through22\(\Upsilon/\Upsilon_{d}\) for any divisor \(d\in[1,n)\) of \(n\). Footnote 22: Let \(N\) be a normal subgroup of a group \(G\) and let \((\sigma,V)\) be a representation of \(G\). The space \(V^{N}\) of \(N\)-invariant vectors of \(V\) is \(G\)-invariant. Hence either \(V^{N}=0\) or \(\sigma\) factors through \(G/N\). We denote by \(\omega(n)\) the number of prime divisors of an integer \(n\). **Lemma 40**.: _Let \(n>1\) be an integer all of whose prime divisors are \(\geq 5\). The dimension of an irreducible complex representation of \(\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\) not factoring through \(\mathrm{SL}(2,\mathbb{Z}/d\mathbb{Z})\), for any divisor \(d\in[1,n)\) of \(n\), is \(>\frac{n}{3^{\omega(n)}}\)._ Proof.: Consider \(n\) as in the statement and let \(p_{1}^{a_{1}}\cdots p_{m}^{a_{m}}\) be its decomposition as product of primes. Since \(\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\) is isomorphic to \[\prod_{j=1}^{m}\mathrm{SL}(2,\mathbb{Z}/p_{j}^{a_{j}}\mathbb{Z}),\] any irreducible representation \(\sigma\) of \(\mathrm{SL}(2,\mathbb{Z}/n\mathbb{Z})\) is a tensor product of irreducible representations \(\sigma_{j}\) of the \(\mathrm{SL}(2,\mathbb{Z}/p_{j}^{a_{j}}\mathbb{Z})\)'s. For any \(\sigma\) as in the statement, the components \(\sigma_{j}\) do not factor through \(\operatorname{SL}(2,\mathbb{Z}/p_{j}^{a_{j}-1}\mathbb{Z})\). Then we know by [13, Ex. 4.7.3, p. 245] and [1, Lemma 7.1] that \[\dim\sigma_{j}\geq\begin{cases}\frac{1}{2}(p_{j}-1)&\text{when $a_{j}=1$},\\ \frac{1}{2}(p_{j}^{a_{j}}-p_{j}^{a_{j}-2})&\text{when $a_{j}\geq 2$}.\end{cases}\] In both cases \(\dim\sigma_{j}>p^{a_{j}}/3\), so the result follows. The lower bound for the multiplicity of new eigenvalues of \(S_{n}\) follows immediately from lemmas 39 and 40. 
**Corollary 41**.: _Consider a finitely generated, nonelementary subgroup \(\Upsilon\) of \(\operatorname{SL}(2,\mathbb{Z})\) and \(S=\Upsilon\backslash\mathbb{H}^{2}\). There is an integer \(N_{\Upsilon}\) such that for any \(n>1\) relatively prime to \(N_{\Upsilon}\), any new eigenvalue of \(S_{n}\) has multiplicity \(>\frac{n}{3^{\omega(n)}}\)._ ### Proof of Theorem 2 Let us now complete the proof of our main result. Proof of Theorem 2.: We fix some \(\beta\in(t_{\Gamma},\delta_{\Gamma})\). Consider constants \(\mathtt{C}_{\Gamma},\mathtt{D}_{\Gamma,\beta}>0\) and \(\xi=\xi(\delta_{\Gamma},\beta)\in(0,1)\) as in Proposition 3, and \(N_{\Gamma}\) as in Corollary 41. Fix \(C(\Gamma,\beta)\geq\max\{\mathtt{C}_{\Gamma},N_{\Gamma},5\}\) such that \[\frac{C(\Gamma,\beta)^{1-\xi}}{3}>\max\{1,\mathtt{D}_{\Gamma,\beta}\}. \tag{5.14}\] Let \(n\) be an integer with all its prime divisors \(\geq C(\Gamma,\beta)\). It suffices to show that all the eigenvalues \(\lambda\) of \(S_{n}\) in the interval \(J_{\beta}=[0,\beta(1-\beta)]\) are old for any such \(n\)23. Footnote 23: Indeed, let us show that this implies \(E_{n}(\lambda)=E_{1}^{n}(\lambda)\) by induction on the number \(\Omega(n)\) of prime divisors of \(n\) counted with multiplicity: When \(n\) is prime, this is the definition of \(\lambda\) being old. For the inductive step, the primes dividing any divisor \(d\in(1,n)\) of \(n\) are still \(\geq C(\Gamma,\beta)\) and \(\Omega(d)<\Omega(n)\), so \(E_{d}(\lambda)=E_{1}^{d}(\lambda)\). Thus \(E_{d}^{n}(\lambda)=E_{1}^{n}(\lambda)\), and since \(\lambda\) is old, \[E_{n}(\lambda)=\sum_{\begin{subarray}{c}d\mid n\\ d<n\end{subarray}}E_{d}^{n}(\lambda)=E_{1}^{n}(\lambda).\] We proceed by contradiction. Suppose \(S_{n}\) has a new eigenvalue \(\lambda\in J_{\beta}\). Write \(n\) as a product of primes \[n=p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}.\] Since \(n\) is relatively prime to \(N_{\Gamma}\), then \[m_{\Gamma}(n,\lambda)>\frac{n}{3^{k}}\] by Corollary 41. Since \(\delta_{\Gamma}>\frac{4}{5}\) and \(n>\mathtt{C}_{\Gamma}\), by Proposition 3 we have \[m_{\Gamma}(n,J_{\beta})\leq\mathtt{D}_{\Gamma,\beta}n^{\xi}.\] Hence \[\mathtt{D}_{\Gamma,\beta}>\frac{n^{1-\xi}}{3^{k}}\geq\frac{p_{1}^{1-\xi}}{3}\geq\frac{C(\Gamma,\beta)^{1-\xi}}{3},\] which contradicts (5.14).
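As an illustration of how the constants in Lemma 38 and Proposition 3 interact, consider the purely hypothetical values \(\delta_{\Gamma}=\frac{9}{10}\) and \(\beta=\frac{53}{60}\in(t_{\Gamma},\delta_{\Gamma})\), chosen here only for the sake of a worked example and not claimed for any particular group. Then \[t_{\Gamma}=\frac{\delta_{\Gamma}}{6}+\frac{2}{3}=\frac{49}{60},\qquad\ell(\delta_{\Gamma},\beta)=t_{\Gamma}+\frac{\beta-t_{\Gamma}}{4}=\frac{5}{6},\qquad r(\delta_{\Gamma},\beta)=\frac{\sqrt{1001}-31}{20}\approx 0.032,\] so \[\varepsilon=\min\left\{r(\delta_{\Gamma},\beta),\tfrac{1}{15}\right\}\approx 0.032,\qquad\alpha=\frac{2+\varepsilon}{2\ell(\delta_{\Gamma},\beta)-\delta_{\Gamma}}\approx 2.65\in(2,5),\qquad\xi=\max\{1-\varepsilon,3-\alpha\}\approx 0.968\in(0,1).\] With these values Proposition 3 gives \(m_{\Gamma}(n,J_{\beta})\leq\mathtt{D}_{\Gamma,\beta}n^{\xi}\) with \(\xi\approx 0.968\), while a new eigenvalue of \(S_{n}\) would have multiplicity \(>n/3^{\omega(n)}\); it would therefore force \(\mathtt{D}_{\Gamma,\beta}>n^{1-\xi}/3^{\omega(n)}\geq C(\Gamma,\beta)^{1-\xi}/3\), exactly the inequality ruled out by the choice (5.14).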
For $\Gamma$ a Schottky group in $\mathrm{SL}(2,\mathbb{Z})$, we establish in this paper a uniform, explicit lower bound for the second smallest eigenvalue of the Laplace-Beltrami operator of the congruence coverings of $\Gamma$. This holds under the assumption that the limit set of $\Gamma$ is sufficiently thick.
2303.18134
Pressure evolution of electronic and crystal structure of non-centrosymmetric EuCoGe$_3$
We report on the pressure evolution of the electronic and crystal structures of the noncentrosymmetric antiferromagnet EuCoGe3. Using a diamond anvil cell, we performed high pressure fluorescence detected near-edge x-ray absorption spectroscopy at the Eu L3, Co K, and Ge K edges and synchrotron powder x-ray diffraction. In the Eu L3 spectrum, both divalent and trivalent Eu peaks are observed from the lowest pressure measurement (~2 GPa). By increasing pressure, the relative intensity of the trivalent Eu peak increases, and an average Eu valence continuously increases from 2.2 at 2 GPa to 2.31 at~50 GPa. On the other hand, no discernible changes are observed in the Co K and Ge K spectra as a function of pressure. With the increase in pressure, lattice parameters continuously decrease without changing I4mm symmetry. Our study revealed a robust divalent Eu state and an unchanged crystal symmetry of EuCoGe3 against pressure.
N. S. Dhami, V. Balédent, O. Bednarchuk, D. Kaczorowski, S. R. Shieh, J. M. Ablett, J. -P. Rueff, J. P. Itié, C. M. N. Kumar, Y. Utsumi
2023-03-31T15:21:22
http://arxiv.org/abs/2303.18134v1
# Pressure evolution of electronic and crystal structure of non-centrosymmetric EuCoGe\({}_{3}\) ###### Abstract We report on the pressure evolution of the electronic and crystal structures of the non-centrosymmetric antiferromagnet EuCoGe\({}_{3}\). Using a diamond anvil cell, we performed high pressure fluorescence detected near-edge x-ray absorption spectroscopy at the Eu \(L_{3}\), Co \(K\), and Ge \(K\) edges and synchrotron powder x-ray diffraction. In the Eu \(L_{3}\) spectrum, both divalent and trivalent Eu peaks are observed from the lowest pressure measurement (\(\sim\)2 GPa). By increasing pressure, the relative intensity of the trivalent Eu peak increases, and an average Eu valence continuously increases from 2.2 at 2 GPa to 2.31 at \(\sim\)50 GPa. On the other hand, no discernible changes are observed in the Co \(K\) and Ge \(K\) spectra as a function of pressure. With the increase in pressure, lattice parameters continuously decrease without changing \(I4mm\) symmetry. Our study revealed a robust divalent Eu state and an unchanged crystal symmetry of EuCoGe\({}_{3}\) against pressure. ## I Introduction Intermetallic compounds with lanthanoids host various fascinating phenomena, such as heavy fermion behavior, spin/charge ordering, Kondo effect, and superconductivity, originating from an interplay of strongly correlated \(4f\) electrons and itinerant conduction electrons [1; 2]. A plethora of ternary lanthanoid transition metal silicides/germanides crystallize with the ThCr\({}_{2}\)Si\({}_{2}\)-type structure (\(I4/mmm\)) [3], for instance, the first heavy fermion superconductor CeCu\({}_{2}\)Si\({}_{2}\)[4] and the quantum critical Kondo lattice YbRh\({}_{2}\)Si\({}_{2}\)[5; 6]. In isostructural europium-based silicides, Eu ions bear a divalent valence state Eu\({}^{2+}\) (\(4f^{7}\), \(J\)= 7/2) that favors an antiferromagnetic ground state [7; 8; 9]. However, the energy difference between Eu\({}^{2+}\) and nonmagnetic Eu\({}^{3+}\) (\(4f^{6}\), \(J\)= 0) valence states is not very large [10] and can be tuned by applying pressure and/or by chemical substitutions. Applying pressure or substituting smaller ions tend to increase the antiferromagnetic transition temperature (\(T_{\rm N}\)), followed by a sudden disappearance of magnetic moments and a valence crossover at a critical pressure. Indeed, a pressure-induced Eu valence transition with a simultaneous collapse of antiferromagnetism was reported for Eu(Pd\({}_{0.8}\)Au\({}_{0.2}\))\({}_{2}\)Si\({}_{2}\)[7], EuNi\({}_{2}\)Si\({}_{2}\)[8], and EuRh\({}_{2}\)Si\({}_{2}\)[9], and a substitution-induced valence transition was found in EuNi\({}_{2}\)(Ge\({}_{1-x}\)Si\({}_{x}\))\({}_{2}\)[11] and Eu(Pt\({}_{1-x}\)Ni\({}_{x}\))\({}_{2}\)Si\({}_{2}\)[12]. Due to the different ionic radii of Eu\({}^{2+}\) and Eu\({}^{3+}\) ions [13], the Eu valence transition and the ground state properties in such systems are usually discussed in relation to the lattice volume. It has been established that compounds with a large unit cell volume possess an antiferromagnetic ground state with Eu\({}^{2+}\) ions, while materials with a small unit cell volume exhibit a nonmagnetic ground state with Eu\({}^{3+}\) ions[14; 15]. Contrary to rather extensive studies on the Eu-122 systems, much less attention has been given to ternary Eu-compounds crystallizing with the BaNiSn\({}_{3}\)-type structure (\(I4mm\)) which is a close relative to the ThCr\({}_{2}\)Si\({}_{2}\)-type structure (see Fig. 7 (b)). 
Recently a series of europium transition metal silicides/germanides Eu\(TX_{3}\) : \(T\)= transition metal, \(X\)=Si or Ge, with the BaNiSn\({}_{3}\)-type structure [16] was reported to exhibit complex magnetic properties[17; 18; 19; 20; 21; 22; 23] and atypical behavior under hydrostatic pressure[24; 25]. In this context, it is worth mentioning that pressure-induced superconductivity was discovered in a few Ce-based counterparts [26; 27; 28; 29]. These compounds bear an unconventional character with a mixed singlet-triplet pairing caused by large anti-symmetric spin-orbit coupling in strongly correlated electron systems which lack an inversion symmetry in their crystal lattice[30; 31]. In the crystallographic unit cell of Eu\(TX_{3}\) systems, Eu atoms occupy the \(2a\) Wyckoff site, silicon/germanium atoms are located at two different Wyckoff positions \(2a\) and \(4b\), while transition metal atoms occupy the \(2a\) site [20]. Magnetic susceptibility measurements [21; 32; 33] and Mossbauer spectroscopy [17; 18] revealed the presence of magnetic Eu\({}^{2+}\) ions in each of the investigated compounds. While all of them order antiferromagnetically (AFM) at similar temperatures, the magnetic structure formed by the localized Eu \(4f\) moments depends on the transition metal constituent. For example, in EuRhGe\({}_{3}\) the AFM order sets in at \(T_{\rm N}\)= 11.3 K and the Eu moments are confined in the \(ab\) plane, while they are aligned along the \(c\)-axis in EuCoGe\({}_{3}\)
We report on the pressure evolution of the electronic and crystal structures of the non-centrosymmetric antiferromagnet EuCoGe3. Using a diamond anvil cell, we performed high-pressure fluorescence-detected near-edge x-ray absorption spectroscopy at the Eu L3, Co K, and Ge K edges, as well as synchrotron powder x-ray diffraction. In the Eu L3 spectrum, both divalent and trivalent Eu peaks are observed from the lowest-pressure measurement (~2 GPa). With increasing pressure, the relative intensity of the trivalent Eu peak grows, and the average Eu valence increases continuously from 2.2 at 2 GPa to 2.31 at ~50 GPa. In contrast, no discernible changes are observed in the Co K and Ge K spectra as a function of pressure. With increasing pressure, the lattice parameters decrease continuously without changing the I4mm symmetry. Our study revealed a robust divalent Eu state and an unchanged crystal symmetry of EuCoGe3 against pressure.
2310.20593
FLODCAST: Flow and Depth Forecasting via Multimodal Recurrent Architectures
Forecasting motion and spatial positions of objects is of fundamental importance, especially in safety-critical settings such as autonomous driving. In this work, we address the issue by forecasting two different modalities that carry complementary information, namely optical flow and depth. To this end we propose FLODCAST a flow and depth forecasting model that leverages a multitask recurrent architecture, trained to jointly forecast both modalities at once. We stress the importance of training using flows and depth maps together, demonstrating that both tasks improve when the model is informed of the other modality. We train the proposed model to also perform predictions for several timesteps in the future. This provides better supervision and leads to more precise predictions, retaining the capability of the model to yield outputs autoregressively for any future time horizon. We test our model on the challenging Cityscapes dataset, obtaining state of the art results for both flow and depth forecasting. Thanks to the high quality of the generated flows, we also report benefits on the downstream task of segmentation forecasting, injecting our predictions in a flow-based mask-warping framework.
Andrea Ciamarra, Federico Becattini, Lorenzo Seidenari, Alberto Del Bimbo
2023-10-31T16:30:16
http://arxiv.org/abs/2310.20593v1
# FLODCAST: Flow and Depth Forecasting via Multimodal Recurrent Architectures ###### Abstract Forecasting motion and spatial positions of objects is of fundamental importance, especially in safety-critical settings such as autonomous driving. In this work, we address the issue by forecasting two different modalities that carry complementary information, namely optical flow and depth. To this end we propose FLODCAST a flow and depth forecasting model that leverages a multitask recurrent architecture, trained to jointly forecast both modalities at once. We stress the importance of training using flows and depth maps together, demonstrating that both tasks improve when the model is informed of the other modality. We train the proposed model to also perform predictions for several timesteps in the future. This provides better supervision and leads to more precise predictions, retaining the capability of the model to yield outputs autoregressively for any future time horizon. We test our model on the challenging Cityscapes dataset, obtaining state of the art results for both flow and depth forecasting. Thanks to the high quality of the generated flows, we also report benefits on the downstream task of segmentation forecasting, injecting our predictions in a flow-based mask-warping framework. keywords: depth forecasting, optical flow forecasting, segmentation + Footnote †: journal: Pattern Recognition ## 1 Introduction Improving intelligent capabilities, in the context of robot navigation and autonomous agents, is fundamental to allow machines to better understand the observed scene and thus reason about it. These systems exploit sensors such as cameras or LiDARs to extract a visual signal from the environment in order to take action and interact with the world. However, leveraging only the current frame to plan real-time decisions is challenging since dynamic scenes rapidly change over time. Agents must understand how other objects are moving and must foresee possible dangerous outcomes of their decisions. A prominent direction with potential application in decision-making is to make predictions about future scenarios, which can also be used to detect upcoming events or behaviors in advance. This task is highly challenging in situations where multiple objects, like vehicles or people, can move freely in the environment. The problem can be addressed from many angles, including understanding where agents will be in the near future, what actions they will take, how they will move, and how far they will be from a given observation point. In practice, this translates into exploiting different features describing the scene or specific objects. For instance, road layout supports the agent in defining where to drive, while semantic segmentation contains pixel-level annotations of specific categories, e.g. road, buildings, cars or pedestrians, and gives a finer-grained knowledge of the scene. However, predictions may also regard future instance segmentations, allowing a machine to reason about single objects rather than category classes. One way to summarize scene changes is to capture motion properties observed from a camera viewpoint. Optical flow is a dense field of displacement vectors and represents the pixel motion of adjacent frames [1]. Therefore, object motion can be incorporated in terms of 2D displacements using optical flow, even for future unobserved motion. 
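Since optical flow is used throughout as a dense per-pixel displacement field, a minimal warping sketch may help fix the convention. The snippet below is purely illustrative (the function name and the backward-warping assumption are ours, not part of FLODCAST): it assumes the flow gives, for every output pixel, the displacement back to its source location in the input frame, which is the kind of operation used by flow-based mask-warping pipelines for segmentation forecasting.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (B, C, H, W) with a dense flow field `flow` (B, 2, H, W).

    flow[:, 0] and flow[:, 1] are assumed to hold per-pixel (u, v) displacements
    in pixels, pointing from each output location back to its source in `frame`.
    """
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0)   # (1, 2, H, W) in (x, y) order
    coords = grid + flow                               # displaced sampling coordinates
    # grid_sample expects sampling coordinates normalized to [-1, 1]
    x_norm = 2.0 * coords[:, 0] / (w - 1) - 1.0
    y_norm = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((x_norm, y_norm), dim=-1), align_corners=True)
```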
Nonetheless, in order to understand scene dynamics it is also considerable to predict depth maps to better identify objects in a 3D space. Such information can be estimated in advance for the near future and incorporated into a decision-making system that assists an autonomous agent to early plan the subsequent action to be taken. Future prediction also involves information related to the surrounding environment. Therefore, this task can be accomplished by forecasting semantic segmentations [2; 3; 4], which are connected to specific category classes, but also predicting future instance segmentations of moving objects [5; 6; 7; 8; 9], even considering optical flow predictions [10; 9]. In summary, one can cast the forecasting problem from a high-level perspective, for instance forecasting semantic masks [5; 11] or agent trajectories [12; 13], as done in prior work. We instead choose to address the problem from a lower level, forecasting finer-grained information such as pixel-level optical flows and depth maps, which can then be leveraged to reason about high-level aspects such as forecasting semantic instances. In this work, we focus on anticipating imminent future urban scenarios, by casting the problem in a multi-modal and multitasking approach, able to forecast both optical flows, which encode pixel displacements in the scene, and depth maps, which represent the estimated distance from the camera to the corresponding point in the image. Instead of anticipating the future for the next time step [14; 15] or in general for a single specific one [16], we propose to directly forecast multiple time steps ahead at a time, yet maintaining the model autoregressive to avoid the need of training timestep-specific models. Jointly forecasting depth and flow helps to achieve better performance in future predictions, thanks to information sharing across modalities. In addition, training with long-term supervision leads to smaller errors at inference time. As a byproduct, we also leverage the re cently proposed MaskNet [9] to improve the downstream task of future instance segmentation in urban scenes with our predictions. To summarize, our main contributions are the following: 1. We design a novel optical FLOW and Depth foreCASTing network (FLODCAST) that jointly estimates optical flow and depth for future frames autoregressively. 2. Our approach, which involves predicting multiple steps simultaneously, mitigates the accumulation of errors that typically impede the performance of autoregressive models. In this way, we preserve the autoregressive nature of the model, eliminating the need for training separate models for different time horizons. 3. Finally, FLODCAST achieves state-of-the-art performance in both optical flow and depth forecasting tasks, thereby emphasizing the necessity of jointly learning shared features from these two modalities. ## 2 Related work Depth ForecastingSeveral works have focused on learning to infer depth from monocular RGB cameras [17; 18; 19]. Nonetheless, relying on depth estimators on predicted future RGBs is hard, due to high uncertainty in predicting raw pixels [20; 21; 22; 23; 24]. Therefore, other works propose to deal with depth anticipation for future frames, mostly known in the literature as depth forecasting or video depth forecasting. Qi et. al [14] introduce an entire framework for predicting 3D motion (both optical flow and depth map) and synthesizing the RGB with its semantic map for unobserved future frames. 
To this end, they leverage images, depth maps and semantic segmentations of past frames but they make predictions limited to the subsequent future frame, i.e. at the frame \(t+1\). Also limited to a single future timestep, Hu et. al [15] design a probabilistic model for future video prediction, where scene features are learned from input images and are then used to build spatio-temporal representations, incorporating both local and global contexts. These features are finally fed into a recurrent model with separate decoders, each one forecasting semantic segmentation, depth and dense flow at the next future frame. Nag et. al [16] propose a self-supervised method for depth estimation directly at the k-th frame after the last observed one, i.e. at \(t+k\). By means of a feature forecasting module, they learn to map pyramid features extracted from past sequences of both RGBs and optical flows to future features, exploiting a series of ConvGRUs and ConvLSTMs for spatio-temporal relationships in the past. With the same goal, Boulahbal et. al [25] design an end-to-end self-supervised approach by using a hybrid model based on CNN and Transformer that predicts depth map and ego-motion at \(t+k\) by processing an input sequence of past frames. Differently from prior work, we predict both dense optical flows and depth maps, also leveraging both modalities as inputs. We directly predict several timesteps ahead simultaneously while retaining autoregressive capabilities, that allows the model to accurately predict far into the future. Flow ForecastingOptical flow estimation has been largely studied in the past [26; 1]. Consolidated deep learning approaches have addressed this problem with promising results [27; 28; 29], also exploiting transformer-based architectures [30; 31; 32]. However, these methods are designed to estimate the optical flow by accessing adjacent frames as they are available to the network. Different approaches have been introduced incorporating optical flow features to infer imminent future scenarios under different points of view, such as predicting depth maps [16], semantic segmentations [3; 4] and instance segmentations [9]. Multitasking methods also exist [10; 33; 14]. Many works leverage motion features for future predictions to perform several specific tasks, ranging from semantic segmentation [10; 2; 3; 4], instance-level segmentation [9] and depth estimation [14; 15; 16]. However, just a few approaches have specifically addressed the task of optical flow forecasting, i.e. the problem of anticipating the optical flow for future scenes. Jin et. al [10] was the first to propose a framework, which jointly predicted optical flow and semantic segmentation for the next frame using the past ones. To make predictions for multiple time steps, they just iterate a two-step finetuned model so to alleviate the propagation error. Ciamarra et. al [9] instead introduced OFNet, a recurrent model able to predict the optical flow for the next time step exploiting spatio-temporal features from a ConvLSTM. Such features are learned to generate a sequence of optical flows shifted by one time step ahead from the input sequence. Without finetuning, the recurrent nature of the model allows OFNet to make predictions for any time steps ahead. 
Considering the high uncertainty of the future, all the proposed methods [3; 10; 33; 14; 9] are typically trained to make predictions at the single time step ahead, and then used for the future ones by autoregressively providing in input the predictions obtained at the previous iterations. We, instead, address a more general forecasting task, with the purpose of providing future optical flows directly for multiple time steps ahead, by exploiting both past flows and the corresponding depth maps. We also make use of depth maps as input because our framework is designed as a novel multitask and multimodal approach to also generate future depth maps. To the best of our knowledge, we are the first to jointly forecast optical flows and depth maps for multiple consecutive frames into the future. Besides, we do not require other information (even during training), like camera pose estimation, which is usually needed to deal with monocular depth estimation. ## 3 Method In this work we introduce FLODCAST, a novel approach for predicting optical flow and depth map jointly for future unobserved frames from an ego-vehicle perspective in the autonomous driving context. ### Problem Definition Given a sequence \(\mathbf{S}=\{I_{t}\}\) of frames, let \(\mathbf{D}=\{D_{1},D_{2},\ldots,D_{T}\}\) be the depth map sequence extracted from the last T frames of \(\mathbf{S}\). Likewise, we define \(\mathbf{OF}=\{OF_{1},OF_{2},\ldots,OF_{T}\}\) as the corresponding optical flows computed every two consecutive frames in \(\mathbf{S}\), such that \(OF_{t}=\textit{Flow}(I_{t-1},I_{t})\), with \(t\in[1,T]\), encodes the motion of the source frame \(I_{t-1}\) onto the target frame \(I_{t}\). Our purpose is to anticipate flow and depth maps for future frames after \(K\) time instants, i.e. forecasting \(D_{T+K}\) and \(OF_{T+K}\) for the frame \(I_{T+K}\). The importance of jointly anticipating flow and depth stems from the nature of the two modalities. Optical flow is a two-dimensional projection of the three-dimensional motion of the world onto the image plane [34]. An object in the foreground moving fast produces a large displacement, whereas when it is far from the observer, moving at the same speed, it generates a very small displacement. Therefore, knowledge about the depth of such an object can help to model its future dynamics. Vice-versa, observing the motion of an object can provide information about its distance from the camera. Overall, by jointly modeling optical flow and depth we can represent the 3D scene displacement at time \(t\) in terms of the components \((u,v,d,t)\), where \((u,v)\) are the horizontal and vertical components of \(OF_{t}\) and \(d\) is the depth map. ### Flow and Depth Forecasting via Multimodal Recurrent Architectures We design **FLODCAST**, a novel optical **FLO**w and **D**epth fore**CAST**ing network that anticipates both modalities at each future time step by observing the past ones. An overview of FLODCAST is shown in Fig. 1. Figure 1: FLODCAST forecasts both future flows and depth maps from the past ones autoregressively. For each time step, we aggregate flow and depth at the last channel (by the concatenation operator, \(\oplus\)), then 64-channel features are extracted through a UNet [35] backbone. Finally, predictions are obtained from two dedicated fully convolutional heads. FLODCAST takes a sequence \(X=\{X_{1},X_{2},\ldots,X_{T}\}\) of \(T\) past observations composed of dense optical flows and depth maps. In detail, each \(X_{t}\) encodes the 
input features for the image \(I_{t}\) in the past, that are obtained by concatenating the optical flow \(OF_{t}\) with the depth map \(D_{t}\). In other words, \(X_{t}=(OF_{t}\oplus D_{t})\). The model generates as output a sequence \(\widehat{X}=\{\widehat{X}_{T+1},\widehat{X}_{T+2},\ldots,\widehat{X}_{T+K}\}\), that is a sequence of \(K\) future optical flows and \(K\) depth maps. We set \(T=3\) and \(K=3\) in all our experiments. Since optical flows and depth maps encode very different information about the scene, we add two separate heads after extracting features from the input in order to handle multimodal predictions. Therefore, we feed in input a sequence of concatenated optical flows and depths \(\{X_{1},X_{2},\ldots,X_{T}\}\) to a recurrent ConvLSTM network, in which a UNet backbone is used to extract features at 64 channels for each input \(X_{t}\), \(t=1,\ldots,T\), so to output a tensor of size \((H\times W\times 64)\), where \((H\times W)\) is the input resolution. Our feature extractor is the same UNet architecture as in [9], i.e. a fully convolutional encoder-decoder network with skip connections, consisting of 5 layers with filters \(\{64,128,256,512,1024\}\) respectively. These 64-channel features capture meaningful spatio-temporal contexts of the input representation. The features are then passed to the two convolutional heads, which are end-to-end trained to simultaneously generate the sequence of future optical flows and depth maps (respectively depicted by the purple and the red blocks in the right side of Fig. 1). Each head is a fully convolutional network made of sequences of Conv2D+ReLUs with \(\{32,16,8\}\) filters. Finally, we append at the end of the optical flow head a convolution operation with \(2\times K\) channels and we use a \(tanh\) activation function, so to produce the \((u,v)\) flow field values normalized in \((-1,1)\). Instead, after the depth head, we attach a convolution operation with a \(K\) channels and a sigmoid activation in order to get depth maps normalized in \((0,1)\). Instead of outputting one prediction at a time as in prior work [9], we directly generate \(K\) flows and depth maps simultaneously, to make the model faster compared to autoregressive models which would require looping over future steps. ### Loss To train FLODCAST we compute a linear transformation of the original input values, by rescaling depth map values in \([0,1]\) and optical flows in \([-1,1]\) through a min-max normalization, with minimum and maximum values computed over the training set. Inspired by [36], we use the reverse Huber loss, called _BerHu_ for two main reasons: (i) it has a good balance between the two L1 and L2 norms since it puts high weight towards values with a high residual, while being sensitive for small errors; (ii) it is also proved to be more appropriate in case of heavy-tailed distributions [36], that perfectly suits our depth distribution, as shown in Fig. 2. BerHu minimizes the prediction error, through either the L2 or L1 loss according to a specific threshold \(c\) calculated for each batch during the training stage. Let \(x=\hat{y}-y\) be the difference between the prediction and the corresponding ground truth. 
This loss \(\mathcal{B}(x)\) is formally defined as: \[\mathcal{B}(x)=\begin{cases}|x|,&|x|\leq|c|\\ \frac{x^{2}+c^{2}}{2c},&\text{otherwise}\end{cases} \tag{1}\] Thus, we formulate our compound loss, using a linear combination of the optical flow loss \(\mathcal{L}_{\text{flow}}\) and the depth loss \(\mathcal{L}_{\text{depth}}\) (Eq. 2): \[\mathcal{L}=\alpha\,\mathcal{L}_{\text{flow}}+\beta\,\mathcal{L}_{\text{depth}} \tag{2}\] Specifically, we apply the reverse Huber loss to minimize both the optical flow and depth predictions, using the same loss formulation, since the threshold \(c\) is computed for each modality, and that value depends on the current batch data. Therefore, \(\mathcal{L}_{\text{flow}}\) is the loss function for the optical flow computed as: \[\mathcal{L}_{\text{flow}}=\frac{1}{M}\sum_{j=1}^{M}\mathcal{B}(|OF_{j}-\widehat {OF}_{j}|) \tag{3}\] where \(M=B\times R\times 2\), since the flow field has \((u,v)\) components over \(R\) image pixels and \(B\) is the batch size, whereas \(OF_{j}\) and \(\widehat{OF}_{j}\) are the optical flows, respectively of the ground truth and the prediction at the pixel \(j\). Likewise, we do the same for the depth loss \(\mathcal{L}_{\text{depth}}\): \[\mathcal{L}_{\text{depth}}=\frac{1}{P}\sum_{j=1}^{P}\mathcal{B}(|D_{j}- \widehat{D}_{j}|) \tag{4}\] where \(P=B\times R\), \(D_{j}\) and \(\widehat{D}_{j}\) are the depth maps, respectively of the ground truth and the prediction at the pixel \(j\). We follow [36] and we set \(c=\frac{1}{5}max_{j}(|y_{j}-\hat{y}_{j}|)\), i.e. the 20% of the maximum absolute error between predictions and ground truth in the current batch over all pixels. ## 4 Results In this section we report our forecasting results on Cityscapes [37] for the depth and flow forecasting tasks. We first describe the experimental setting and the metrics used to evaluate our approach. Then, we present our results, comparing FLODCAST to state-of-the-art approaches. We also present ablation studies to better highlight the importance of all the modules in the architecture. Besides, in Sec. 5, we show that our approach can be easily applied to downstream tasks such as semantic segmentation and instance segmentation forecasting, demonstrating improvements, especially at farther prediction horizons. ### Dataset For evaluation, we use Cityscapes [37], which is a large urban dataset with very challenging dynamics, recorded in several German cities. Each sequence consists of 30 frames at a resolution of \(1024\times 2048\). Cityscapes contains 5000 sequences, split in 2975 for train, 500 for validation and 1525 for testing. Different annotations are available. In particular, we leverage precomputed disparity maps for all frames, from which depth maps can be extracted through the camera parameters. There are also both instance and semantic segmentations that are available at the 20-th frame of each sequence. ### Experimental setting We compute optical flows using FLowNet2 [27] (pretrained FlowNet2-c) and rescale them according to the maximum and minimum values in the training set, so to have normalized values in \((-1,1)\). Depth maps \(D\) are obtained using disparity data \(d\) and camera parameters (focal length \(f\) and baseline \(b\)), i.e. by computing \(D=f\cdot b/d\). Invalid measurements or zero-disparity values are set to \(0\). To normalize depth maps, we observe that most depth values fall within \(150\)m in the training set (Fig. 2). Thus, we cap values at \(150\)m and then normalize them in \((0,1)\). 
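A minimal sketch of the objective of Eqs. (1)-(4) and of the depth/flow preprocessing just described is given below; function names are ours, and details such as the exact min-max mapping for flows may differ from the authors' implementation (the loss weights default to the values reported in the experimental setting, \(\alpha=10\), \(\beta=1\)).

```python
import numpy as np
import torch

MAX_DEPTH_M = 150.0  # depth cap used before normalizing to (0, 1), as described above

def disparity_to_normalized_depth(disparity, focal_length, baseline):
    """D = f * b / d; invalid or zero-disparity pixels are set to 0, then capped at 150 m."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_length * baseline / disparity[valid]
    return np.clip(depth, 0.0, MAX_DEPTH_M) / MAX_DEPTH_M

def normalize_flow(flow, flow_min, flow_max):
    """Min-max normalization of optical flow to (-1, 1) with training-set statistics."""
    return 2.0 * (flow - flow_min) / (flow_max - flow_min) - 1.0

def berhu(pred, target):
    """Reverse Huber (BerHu) penalty of Eq. (1), averaged over all pixels.

    The threshold c is 20% of the maximum absolute error in the current batch,
    computed separately for each modality (i.e. for each call).
    """
    x = torch.abs(pred - target)
    c = 0.2 * x.max().clamp_min(1e-8)
    return torch.where(x <= c, x, (x ** 2 + c ** 2) / (2.0 * c)).mean()

def flodcast_loss(flow_pred, flow_gt, depth_pred, depth_gt, alpha=10.0, beta=1.0):
    """Compound objective of Eq. (2); the defaults follow the experimental setting."""
    return alpha * berhu(flow_pred, flow_gt) + beta * berhu(depth_pred, depth_gt)
```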
All frames are rescaled at \(128\times 256\) px for both data sources to accelerate learning. We train FLODCAST for \(30\) epochs using Adam and learning rate \(0.0001\). To balance the two losses in Eq. 2, we set \(\alpha=10\) and \(\beta=1\). At inference time we recursively employ the model by feeding as input previous predictions to reach farther time horizons. We provide outputs at a resolution of \(256\times 512\), following [38], by doubling the resolution. FLODCAST has approximately \(31.4\)M trainable parameters. The whole training takes \(58\) hours on a single GPU NVIDIA Titan RTX with \(24\)GB using a batch size of \(12\). Figure 2: Distribution of depth values grouped by distance on the Cityscapes training set. Note that depth values below \(3\) meters are not present in the dataset. ### Evaluation metrics We quantitatively evaluate depth forecasting using standard metrics as in [39]: (i) absolute relative difference (AbsRel), (ii) squared relative difference (SqRel), (iii) root mean squared error (RMSE) and (iv) logarithmic scale-invariant RMSE (RMSE-Log), defined as follows: \[\text{AbsRel}=\frac{1}{N}\sum_{i=1}^{N}\frac{|y_{i}-\hat{y}_{i}|}{y_{i}} \tag{5}\] \[\text{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}|y_{i}-\hat{y}_{i}|^{2}} \tag{6}\] \[\text{SqRel}=\frac{1}{N}\sum_{i=1}^{N}\frac{(y_{i}-\hat{y}_{i})^{2}}{y_{i}} \tag{7}\] \[\text{RMSE-Log}=\frac{1}{N}\sum_{i=1}^{N}d_{i}^{2}-\frac{1}{N^{2}}\left(\sum_{i=1}^{N}d_{i}\right)^{2} \tag{8}\] where \(y\) and \(\hat{y}\) are the ground truth and the prediction, each with \(N\) pixels indexed by \(i\), while \(d=\log\hat{y}-\log y\) is their difference in logarithmic scale. AbsRel and SqRel are errors that can also be calculated at pixel level, while RMSE and RMSE-Log measure errors averaged over the whole image. In particular, AbsRel draws attention to the absolute difference between the prediction and the target with respect to the ground truth itself (e.g. an AbsRel of 0.1 means that the error is 10% of the ground truth), which makes it suitable for a fine-grained understanding. SqRel instead emphasizes large errors since the difference is squared. RMSE is the root of the mean squared errors, while RMSE-Log, introduced in [39], is an L2 loss with a negative term used to keep relative depth relations between all image pixels, i.e. an imperfect prediction will have lower error when its mistakes are consistent with one another. We also measure the percentage of inliers with different thresholds [39], i.e. the percentage of predicted values \(\hat{y}_{i}\) for which the ratio \(\delta\) with the ground truth \(y_{i}\) is lower than a threshold \(\tau\): \[\%\ \text{of}\ \hat{y}\ \ \text{s.t.}\ \ \ \max\left(\frac{y_{i}}{\hat{y}_{i}},\frac{\hat{y}_{i}}{y_{i}}\right)=\delta<\tau \tag{9}\] with \(\tau=\{1.25,\,1.25^{2},\,1.25^{3}\}\). We assess the performance of the flow forecasting task by computing the mean squared error between the prediction and the ground truth on both flow channels, using Eq. 10, and averaging them, as done in [9]: \[\text{MSE}_{c}=\frac{1}{H\,W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(f_{c}(i,j)-\widehat{f}_{c}(i,j)\right)^{2} \tag{10}\] where \(\text{MSE}_{c}\) is the error referred to the channel \(c:=\{u,v\}\) between the ground truth optical flow field \(f_{c}(i,j)\) and the prediction \(\widehat{f}_{c}(i,j)\) at the pixel \((i,j)\), and \(H\) and \(W\) are the height and width, respectively. 
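For reference, the depth metrics of Eqs. (5)-(9) translate directly into code; the helper below is a straightforward NumPy transcription (not the authors' evaluation code) and assumes invalid pixels are marked with a zero ground-truth depth.

```python
import numpy as np

def depth_metrics(gt, pred):
    """Depth-forecasting metrics of Eqs. (5)-(9), computed over valid pixels (gt > 0).

    Assumes pred > 0 on valid pixels so that the logarithm in RMSE-Log is defined.
    """
    mask = gt > 0
    gt, pred = gt[mask], pred[mask]
    abs_rel = np.mean(np.abs(gt - pred) / gt)                 # Eq. (5)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                 # Eq. (6)
    sq_rel = np.mean((gt - pred) ** 2 / gt)                   # Eq. (7)
    d = np.log(pred) - np.log(gt)
    rmse_log = np.mean(d ** 2) - np.mean(d) ** 2              # Eq. (8), scale-invariant
    ratio = np.maximum(gt / pred, pred / gt)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]  # Eq. (9) inlier percentages
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```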
We also report the average end-point-error EPE [40], which measures the per-pixel euclidean distance between the prediction and the ground truth averaged among all the image pixels: \[\text{EPE}=\frac{1}{H\,W}\sum_{i=1}^{H\,W}\sqrt{(\hat{u}_{i}-u_{i})^{2}+(\hat{v}_{i}-v_{i})^{2}} \tag{11}\] where \((u_{i},v_{i})\) are the horizontal and vertical components of the optical flow ground truth, and \((\hat{u}_{i},\hat{v}_{i})\) are the corresponding components of the prediction, at the \(i\)-th pixel. ### Future Depth Estimation We evaluate our approach for future depth estimation on Cityscapes. As in prior works, e.g. [15], we evaluate our method after \(t+k\) frames, both at short-term (\(k=5\), after 0.29 sec) and at mid-term (\(k=10\), after 0.59 sec). Since there is no official evaluation protocol for depth forecasting on Cityscapes and considering the statistics in the training set (see Fig. 2), in which pixel occurrences strongly decrease as the depth increases, we clip values at 80 meters as done in prior work for depth estimation [38, 41]. For our experiments, we evaluate predictions using the same protocol of [38], i.e. by cropping out the bottom 20% of the image to remove the car hood, which is visible in every frame, then we rescale the frames at \(256\times 512\). In addition, we mask out ground truth pixels that are farther than the 80m threshold. We compare our approach with existing methods [14, 15, 16]. We also consider the depth estimation method of [42], which is adapted to depth forecasting through a multi-scale F2F [5] before the decoder, and the future instance segmentation model [6] adapted to generate future depth estimation of the predicted features, as previously done in [16]. We also report the trivial _Copy last_ baseline [16], as a lower bound. Quantitative results for depth forecasting are reported in Table 1. \begin{table} \begin{tabular}{c|c c c c|c c c} \hline \hline \multicolumn{8}{c}{Short term \(k=5\)} \\ \hline & \multicolumn{4}{c|}{Lower is better \(\downarrow\)} & \multicolumn{4}{c}{Higher is better \(\uparrow\)} \\ \hline Method & AbsRel & SqRel & RMSE & RMSE-Log & \(\delta<1.25\) & \(\delta<1.25^{2}\) & \(\delta<1.25^{3}\) \\ Copy last & 0.257 & 4.238 & 7.273 & 0.448 & 0.765 & 0.893 & 0.940 \\ \hline Qi et al. [14] & 0.208 & 1.768 & 6.865 & 0.283 & 0.678 & 0.885 & 0.957 \\ Hu et al. [15] & 0.182 & 1.481 & 6.501 & 0.267 & 0.725 & 0.906 & 0.963 \\ Sun et al. [6] & 0.227 & 3.800 & 6.910 & 0.414 & 0.801 & 0.913 & 0.950 \\ Goddard et al. [42] & 0.193 & 1.438 & 5.887 & 0.234 & 0.836 & 0.930 & 0.958 \\ DeFNet [16] & 0.174 & 1.296 & 5.857 & 0.233 & 0.793 & 0.931 & 0.973 \\ \hline FLODCAST w/o flow & 0.084 & 1.081 & 5.536 & 0.196 & 0.920 & 0.963 & 0.980 \\ **FLODCAST** & **0.074** & **0.843** & **4.965** & **0.169** & **0.936** & **0.971** & **0.984** \\ \hline \hline \multicolumn{8}{c}{Mid term \(k=10\)} \\ \hline & \multicolumn{4}{c|}{Lower is better \(\downarrow\)} & \multicolumn{4}{c}{Higher is better \(\uparrow\)} \\ \hline Method & AbsRel & SqRel & RMSE & RMSE-Log & \(\delta<1.25\) & \(\delta<1.25^{2}\) & \(\delta<1.25^{3}\) \\ Copy last & 0.304 & 5.006 & 8.319 & 0.517 & 0.511 & 0.781 & 0.802 \\ \hline Qi et al. [14] & 0.224 & 3.015 & 7.661 & 0.394 & 0.718 & 0.857 & 0.881 \\ Hu et al. [15] & 0.195 & 1.712 & **6.375** & 0.299 & 0.735 & 0.896 & 0.928 \\ Sun et al. [6] & 0.259 & 4.115 & 7.842 & 0.428 & 0.695 & 0.817 & 0.842 \\ Goddard et al. [42] & 0.211 & 2.478 & 7.266 & 0.357 & 0.724 & 0.853 & 0.882 \\ DeFNet [16] & 0.192 & 1.719 & 6.388 & 0.298 & 0.742 & 0.900 & 0.927 \\ \hline FLODCAST w/o flow & 0.130 & 2.103 & 7.525 & 0.320 & 0.863 & 0.931 & 0.959 \\ **FLODCAST** & **0.112** & **1.593** & 6.638 & **0.231** & **0.891** & **0.947** & **0.969** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative results for depth forecasting after \(t+k\) on Cityscapes test set, both at short-term and mid-term predictions, i.e. at \(k=5\) and \(k=10\) respectively. We exceed all the previous methods at short-term and mid-term predictions. Specifically, we beat all the existing approaches at short-term by a large margin for all the metrics, also reporting the highest inlier percentage. At mid-term we exceed all the state-of-the-art approaches, in terms of AbsRel and SqRel, including the recent DeFNet (-42% and -8%), which employs both RGB frames and optical flows, even considering the camera pose during the training. Differently from DeFNet, we exploit depth maps and optical flows as sources of information, since they provide complementary features related to motion and geometric structure of the scene by means of a recurrent network. We believe that FLODCAST is capable of detecting such clues by extrapolating features from past sequences, which also implicitly contain the camera motion, without training a pose estimation network conditioned to specific future frames, like in [16], which clearly limits the application to forecasting depths only at corresponding future time steps. We report a slight drop in terms of RMSE at mid-term compared to [15] and [16]; however, we still achieve concrete improvements in terms of RMSE-Log, by reducing the error by 22%. This indicates that the relative depth consistency is much better preserved by our approach than by the competitors. Using its recurrent nature, FLODCAST is capable of generating a sequence of depth maps in the future without temporal sub-sampling, i.e. by producing all the intermediate forecasting steps (not only the last one, as done in [16]). In dynamic scenarios, like an urban setting, this is particularly useful, since objects can appear and be occluded several times from one frame to another. Such behavior might not emerge from subsampled predictions. Some qualitative results are shown in Fig. 3 and 4, respectively for short-term and mid-term predictions. FLODCAST learns to locate the region containing the vanishing point by assigning higher depth values. Moreover, we observed that missing depth map values coming from zeroed values in the ground truth frames are mostly predicted correctly. This underlines that FLODCAST is able to anticipate depth maps up to mid-range predictions while being highly accurate, even though some parts of the scene may not have been labeled, due to bad measurements or missing data. Figure 3: Visualization results of future predictions on Cityscapes test set at short-term. Black pixels in the ground truth (second column) are invalid measurements. ### Future Flow Estimation We evaluate optical flow forecasting capabilities on Cityscapes, by following the protocol of [10]. Therefore, we calculate the average end-point error EPE, according to Eq. 
11, for the frame at \(t+10\) (i.e., \(0.59\) sec ahead), corresponding to the \(20\)th frame of each val sequence. We carry out experiments at the resolution of \(256\times 512\), obtained by doubling the resolution, and we compare our approach with existing works, FAN [10] and OFNet [9], as well as two baselines from [10]: (i) warping the flow field using the optical flow at each time step (_Warp Last_) and (ii) simply copying the last one (_Copy Last_). Since our method provides optical flows for multiple future time steps, we also assess its performance on every intermediate frame up to \(t+10\), following the evaluation protocol of [9]. Thus, we measure the quality of the predictions generated autoregressively at each time step by computing the mean squared error of the \(u\) and \(v\) components and averaging them, according to Eq. 10. We report our quantitative results in Tab. 2. \begin{table} \begin{tabular}{l|c c c c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{10}{c|}{MSE \(\downarrow\)} & \multicolumn{1}{c}{EPE \(\downarrow\)} \\ \cline{2-12} & t+1 & t+2 & t+3 & t+4 & t+5 & t+6 & t+7 & t+8 & t+9 & t+10 & t+10 \\ \hline Copy Last [10] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(9.40\) \\ Warp Last [10] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(9.40\) \\ FAN [10] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(6.31\) \\ OFNet [9] & \(\mathbf{0.96}\) & \(0.94\) & \(1.30\) & \(1.40\) & \(1.78\) & \(1.88\) & \(2.16\) & \(2.38\) & \(2.88\) & \(2.66\) & \(2.08\) \\ FLODCAST w/o depth & \(0.98\) & \(\mathbf{0.80}\) & \(1.11\) & \(1.20\) & \(1.38\) & \(1.48\) & \(1.72\) & \(1.78\) & \(2.18\) & \(1.92\) & \(1.48\) \\ FLODCAST (Ours) & \(1.06\) & \(0.84\) & \(\mathbf{1.10}\) & \(\mathbf{1.12}\) & \(\mathbf{1.34}\) & \(\mathbf{1.44}\) & \(\mathbf{1.62}\) & \(\mathbf{1.68}\) & \(\mathbf{2.12}\) & \(\mathbf{1.74}\) & \(\mathbf{1.38}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative results for flow forecasting on the Cityscapes val set. The lowest error is in bold. The symbol “\(-\)” denotes that the corresponding result is not available or reproducible. Figure 4: Visualization results of future predictions on the Cityscapes test set at mid-term. Black pixels in the ground truth (second column) are invalid measurements. We mainly found that the error of FLODCAST accumulates much more slowly over time, which prompts a few considerations. First of all, FLODCAST combines different modalities while also exploiting spatio-temporal information, and this proves crucial to reducing the accumulation of error through time. Because optical flow and depth maps are complementary to each other, the model can better identify specific patterns, e.g., discriminating object motions at different resolutions in advance (see Fig. 8). This also allows us to directly generate multiple future optical flows at a time with a shorter input sequence (i.e., \(T=3\) for FLODCAST versus \(T=6\) for OFNet). Moreover, we found a substantial reduction of the MSE, up to \(33\%\) at \(t+10\), which further supports these observations. Considering that OFNet has more supervision during training, i.e., it forecasts an output sequence shifted one step ahead with respect to its input, we believe this is why its performance is sometimes better at the first steps, while its error then increases compared to FLODCAST. In the absence of intermediate MSE results for the other methods (i.e. 
FAN, for which no source code or models are available, as noted in Tab. 2), we compare the overall performance by evaluating the EPE at \(t+10\), also against the Flow Anticipating Network (FAN) proposed in [10], which generates future flows recursively; we use the finetuned version of their model, trained to predict the flow of a single future time step given the preceding frames and the corresponding segmentation images. We found remarkable improvements even at \(t+10\), reducing the EPE with respect to both FAN and OFNet. This supports our choice of pairing optical flow with depth maps, which turns out to be better for estimating future flows than pairing it with the semantic segmentations employed by FAN. Restricting the model to observing past optical flows to generate a future one, as done in OFNet, does not allow forecasting models to make reliable long-range predictions autoregressively. Further improvements are obtained when multiple frames are predicted at a time, as FLODCAST does. We thus demonstrate that FLODCAST is more accurate in predicting unobserved motions far into the future, without requiring semantic data, which is typically harder to label than depth maps, which can be obtained directly with commercial devices such as LiDARs or stereo rigs. We also observe that excluding the depth maps from FLODCAST reduces flow performance, since the EPE increases by \(6.8\%\). Despite the hard task of anticipating flow motion without seeing future frames, FLODCAST exceeds all previous works, and it is more robust when depth is stacked into the input data. ### Ablation Study In order to understand how significant flow and depth are as data sources for anticipating the future, we exclude one of the two inputs at a time and evaluate the performance against FLODCAST, which instead leverages both data sources. _Depth Analysis._ To demonstrate the importance of incorporating flow features for depth forecasting, we exclude optical flow from the input and train FLODCAST using the \(\mathcal{L}_{\text{depth}}\) loss (see Eq. 4) to estimate future depth maps. From Tab. 1 we observe that generating future depth maps from past ones without leveraging optical flow as source data, i.e., FLODCAST w/o flow, worsens the predictions under all of the metrics. This points out the relevance of combining features extracted from past scenes in terms of both 2D motion and depth. Nonetheless, predicting future depth maps with our approach, even discarding the optical flow information, still improves over prior works such as [15; 16]. At short-term (\(t+5\)), FLODCAST w/o flow is the second best result overall, reducing the errors by a large margin (e.g., AbsRel and SqRel by -53% and -27% with respect to Hu et al., and by -52% and -16% with respect to DeFNet) while also achieving a higher percentage of inliers. At mid-term (\(t+10\)) we report a drop in performance for FLODCAST w/o flow, which nevertheless still limits the AbsRel error and achieves a higher accuracy of inlier pixels. Overall, when removing optical flow from the input data, FLODCAST still outperforms all existing works on forecasting unseen scenarios, but the missing information affects the performance on farther frames. In addition, we compute the AbsRel error distribution of FLODCAST when depth maps are predicted from past depth maps only (orange bars) or with our multimodal approach (blue bars), and we plot a histogram at \(t+10\) as a function of the distance (Fig. 5). 
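As a concrete companion to this binned analysis, the sketch below shows one way to accumulate the AbsRel error per ground-truth distance bin; the 5-meter bin width, array names, and masking conventions are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def absrel_per_distance_bin(pred, gt, bin_edges=np.arange(3.0, 85.0, 5.0)):
    """Mean AbsRel error of a predicted depth map, grouped by ground-truth distance.

    pred, gt : (H, W) depth maps in meters; pixels with gt <= 0 are treated as invalid.
    bin_edges: assumed 5 m bins; the text only states that depths are clipped at 80 m
               and that values below 3 m do not occur in the test set.
    """
    valid = (gt > 0) & (gt <= 80.0)                    # drop invalid and clipped pixels
    err = np.abs(pred[valid] - gt[valid]) / gt[valid]  # per-pixel AbsRel
    idx = np.digitize(gt[valid], bin_edges)            # distance bin of each pixel
    return {
        (bin_edges[i - 1], bin_edges[i]): float(err[idx == i].mean())
        for i in range(1, len(bin_edges))
        if np.any(idx == i)
    }
```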
We found notable improvements within 10 meters when optical flow is part of the input. This is crucial in terms of safety, since objects moving around a self-driving agent can be better characterized according to their predicted distances. Indeed, from an ego-vehicle perspective, parts of the scene close to the observer are more likely to change over time. Considering that we are forecasting the depth for the whole image, only a few regions, corresponding to dynamic objects, move considerably. The rest of the scene, typically the background, such as buildings or vegetation, exhibits instead a static behavior and does not change much depth-wise even in the presence of ego-motion. Therefore, the depth estimated for those far-away pixels contains little error and, consequently, the tails of the two plots tend to be quite similar. Considering that the histogram represents depth errors 10 frames after the last observed one, FLODCAST is robust also at long distances when optical flow is part of the input. This further motivates our design choice of combining the data in a multimodal and multitask approach. We provide some qualitative results in Fig. 6 to underline how the contribution of the flow features is significant in generating very accurate depth maps, especially on moving objects such as pedestrians and vehicles. It is noteworthy that 2D motion displacements in the scene help to correctly predict depth values on different moving objects close to each other, e.g., pedestrians crossing the street, whose estimated depths collapse into a single blob when optical flow is not taken into account. The same happens for cars at different distances from the camera, whose predicted depths look lumped together. This suggests that the model without flow features is less capable of distinguishing single instances. Figure 5: Ablation study on depth forecasting on the Cityscapes test set. We report the AbsRel error at \(t+10\) per distance (in meters), both when the input data is composed of optical flows and depth maps (blue) and when it consists of depth only (orange). Note that depth values below 3 meters are not present in the test set. Figure 6: Qualitative results of predicted depth maps of FLODCAST trained with or without optical flows (4th and 5th row, respectively). The first two rows are the last observed frame \(I_{t}\) and the future one, \(I_{t+10}\). The third row contains ground truth depth maps for the three samples. Pixel-wise AbsRel errors between FLODCAST w/o flow and our FLODCAST are depicted as heatmap plots in the 6th row for 3 different sequences in the Cityscapes test set. _Flow Analysis._ We discard depth maps from the input data and train the network to predict future optical flows, i.e., by exploiting past flow features only, while keeping the same \(\mathcal{L}_{\text{flow}}\) loss (see Eq. 3). We measure the optical flow predictions generated autoregressively at each time step by computing the mean squared error on the two flow channels and averaging them (Eq. 10). From the flow forecasting results reported in Tab. 2, we observe that features extracted from both the optical flows and the depth maps contribute to reducing the MSE on predicted flows, resulting in overall improvements after the first steps and up to \(t+10\), i.e., \(+33\%\) over OFNet and \(+9\%\) over FLODCAST w/o depth, which is significant considering the high uncertainty of farther future scenarios. Compared with OFNet, FLODCAST w/o depth has the FlowHead module (as depicted in Fig. 
1), in which specialized convolutional layers are trained end-to-end in order to directly generate multiple optical flows at a time. Despite its notable reduction of the error through time, FLODCAST surpasses it when depth maps are included in the source data, which points out the importance of our multimodal approach. Looking at the last prediction, i.e., at \(t+10\), FLODCAST w/o depth still exceeds the other approaches, but reports an increase of the EPE by \(+7\%\) with respect to our multimodal approach. This suggests that recurrent architectures can achieve good results on forecasting tasks and that they improve further when they are multimodal. In addition, we study the EPE error distribution according to distance. To do that, we collect all the predicted flows upsampled to \(256\times 512\) at \(t+10\) on the test set, compute the error (see Eq. 11) for all the pixels falling into the corresponding distance-based bins, and report their averages in Fig. 7. Here, orange bars are the errors obtained by only using optical flow as input, while the blue ones also incorporate depth maps, i.e., our proposed FLODCAST model. As can be seen in Fig. 7, the overall trend of the EPE is to decrease as the depth increases. This is due to the fact that parts of the scene far enough from the camera typically produce similarly small motion, like objects moving in the background or static parts that are mainly affected by the camera motion; thus, the predicted optical flows for such pixels are likely to be more accurate. Instead, pixels closer to the camera tend to have a more pronounced motion, and that affects the predictions, especially for farther frames. We observe that the EPE errors of FLODCAST are always lower when depth maps are provided as input (blue bars) than when optical flow is used as the unique data source (orange bars). In particular, we gain more within 15 meters, which is the most relevant part of the scene concerning the safety and drive planning of autonomous agents in very dynamic scenarios like the urban one. FLODCAST with depth maps is better able to disambiguate the motion of pixels close to the observer from that of far ones and vice versa. Hence, flow forecasting results are more precise when the depth map is included in the input data. Figure 7: Ablation study on flow forecasting on the Cityscapes test set. We report the EPE error at \(t+10\) according to the distance (meters) of the optical flows predicted by FLODCAST, when the input data consists of both optical flows and depth maps (blue) or of optical flows only (orange). Note that depth values below 3 meters are not present in the test set. Based on this consideration, we report in Fig. 8 some qualitative results on the Cityscapes test set, where we compare the ground truth optical flow with the optical flows obtained from FLODCAST, both with and without the depth map as an additional input source. Finally, in the last row of Fig. 8 we show heatmaps of the MSE with respect to the ground truth, computed as differences between the predictions generated by FLODCAST without depth maps and by FLODCAST using both data sources. Specifically, we report enhancements mostly on moving objects, whose shapes are more correctly defined, as shown by the red parts of the cars and the light blue around their shapes. ### Performance details Since forecasting is a matter of anticipation, predictions have to be provided early. 
We therefore analyse the performance of FLODCAST at inference time. We test our model using a single NVIDIA RTX 2080. At runtime, FLODCAST requires 8.8GB of GPU memory and is able to forecast sequences of \(K=3\) consecutive depth maps and optical flows in 40ms (25FPS). Our predictions are estimated for multiple frames ahead simultaneously, which is more efficient than making predictions for a single frame, as done in [10; 14; 15]. Figure 8: Flow forecasting qualitative results on the Cityscapes test set. We use FLODCAST trained with or without depth maps (4th and 5th row, respectively). The first two rows depict the last observed frame \(I_{t}\) and the future one, \(I_{t+5}\). The third row shows ground truth flows. In the 6th row we depict the difference of the MSE errors w.r.t. the ground truth between the predictions of FLODCAST using only past flows and those using both past flows and depths. ## 5 Segmentation Forecasting We now show how FLODCAST can be employed to address downstream tasks such as forecasting segmentation masks. In fact, flow-based forecasting methods have demonstrated that warping past features onto future frames allows producing competitive semantic segmentations [3; 4; 9]. Since FLODCAST predicts dense optical flows in the future, we use the recent lightweight framework introduced in [9] to explore possible improvements on the segmentation forecasting problem as a downstream task of our predictions, in terms of binary instances and semantic categories. To this end, from the whole framework, which also includes a flow forecasting module named OFNet, we only take MaskNet, a neural network that warps binary instances from the current frame onto the future one. Because MaskNet requires future optical flows to warp instances, we replace OFNet with FLODCAST, retaining only our flow predictions and discarding the depth maps. In order to generate future predictions, both instance and semantic segmentations, we follow the same training protocol as in [9]. We first finetune a MaskNet model pretrained on ground truth masks (the MaskNet-Oracle model from [9]) by feeding it the future optical flows predicted by FLODCAST. We perform separate trainings to make predictions up to \(T+3\) (short-term) and up to \(T+9\) frames ahead (mid-term). We denote these two models as MaskNet-FC. Second, we study how the binary instances predicted by MaskNet can be improved. Because we employ predicted optical flow to estimate future binary masks, motion mistakes may affect some pixels of the object to be warped. We also believe that some drops in the performance of MaskNet are due to misleading pixels that are wrongly labeled as background instead of instance and vice versa. This effect is more pronounced when an object appears smaller and its predicted flow is not accurate. Inspired by [43], we address this issue by appending a Denoising AutoEncoder network (shortened to DAE) to the output of MaskNet, so as to make the binary masks cleaner and as aligned as possible with the ground truth. The network, depicted in Fig. 9, has an encoder consisting of Conv-ReLU-MaxPool sequences with 32, 64 and 128 filters, and a decoder where Conv-ReLU-UpSample operations are used with 128, 64 and 32 filters. The output is generated after a convolution with a single channel, a \(3\times 3\) kernel and a sigmoid activation function. At inference, outputs are binarized using a 0.5 threshold. 
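For concreteness, a minimal PyTorch sketch of such a refinement autoencoder is given below. It follows the filter counts stated above (32/64/128 in the encoder, 128/64/32 in the decoder, a single-channel 3×3 output with a sigmoid), while kernel sizes, padding, and the upsampling mode of the intermediate layers are assumptions.

```python
import torch
import torch.nn as nn

class MaskDAE(nn.Module):
    """Denoising autoencoder that refines warped binary instance masks (a sketch)."""

    def __init__(self):
        super().__init__()

        def down(cin, cout):  # Conv-ReLU-MaxPool block of the encoder
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True), nn.MaxPool2d(2))

        def up(cin, cout):    # Conv-ReLU-UpSample block of the decoder
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Upsample(scale_factor=2, mode="nearest"))

        self.encoder = nn.Sequential(down(1, 32), down(32, 64), down(64, 128))
        self.decoder = nn.Sequential(up(128, 128), up(128, 64), up(64, 32))
        self.head = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, mask):  # mask: (B, 1, H, W) soft mask produced by MaskNet
        return self.head(self.decoder(self.encoder(mask)))

# At inference the refined mask is binarized with the 0.5 threshold mentioned above:
# refined = (MaskDAE()(warped_mask) > 0.5).float()
```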
Because MaskNet warps object instances based on optical flows, the generated masks have to be fed to the DAE to get refined. Therefore, we train the DAE, by using autoregressive flows and freezing MaskNet pretrained weights. Specifically, we train DAE for 3 epochs with a per-pixel MSE loss function with predicted flows up to 3 frames ahead (i.e. \(T+3\), short-term). We observe that using a Dice loss [44] (already employed to train MaskNet), even in combination with the L2 loss, DAE performs worse than with the MSE function. We believe that is due to the fact that further improvements on instance shapes are not always possible with region-based losses (like Dice loss), instead MSE is more suitable to binarize an instance as a whole image. We continue to finetune the DAE for 3 more epochs using the autoregressive flows predicted up to 9 frames ahead (i.e. \(T+9\), mid-term) to adapt the network to less accurate inputs. Doing so, we are able to provide a single autoencoder trained to refine instances, which are generated by MaskNet through autoregressive flows predicted up to 9 frames ahead. Hence, our overall segmentation forecasting architecture, i.e. MaskNet-FC+DAE, is obtained by appending the DAE to the MaskNet mid-term model. This architecture allows to utilize a unique segmentation model to generate future instance segmentation up to 9 frames ahead. We conduct experiments on the Cityscapes val set, generating future instance and semantic segmentations of 8 different categories of moving objects, both 3 frames and 9 frames ahead (up to about 0.5 sec later) as done in [5], respectively referred to in the literature as short-term and mid-term. We use the mAP and mAP50 metrics for instance segmentation, and mIoU (mean IoU) for semantic segmentation. We show our quantitative results in Table 3. We report segmentation results achieved by MaskNet [9], using flows predicted by our FLODCAST, also considering the denoising autoebcoder (DAE), proposed to refine warped masks. We compare our results with the original flow-based approach MaskNet [9]. We also report the oracle reference, where a Mask RCNN [45] is used directly on future frames, as well as MaskNet-Oracle whose model is our upper bound flow-based approach since segmentations are warped using ground truth flows. Moreover, we listed the performances of 4 simple baselines and the commonly used F2F approach [5]. Figure 9: Denoising autoencoder (DAE) used to refine the generated future instance segmentation masks. The model is based on a convolutional encoder-decoder structure, where the encoder compresses the input into the latent space and the decoder gradually upsamples the features back to the original image size. We found that MaskNet, using flows predicted by FLODCAST, improves at mid-term, getting \(+0.5\%\) and \(+2.9\%\), respectively for instance and semantic segmentations compared to the original formulation of [9]. Meanwhile, we observe a negligible drop at short-term, since FLODCAST generates more accurate flows after the first iteration. Because the segmentation performance typically degrades over the time, we pay attention to the impact of appending our DAE at the end of MaskNet to enhance instance and semantic results mainly at mid-term (i.e. 9 frames ahead, 0.5 sec), which is a more challenging scenario than the short-term one. When the DAE is trained to refine instance masks up to mid-term we report a considerable improvement against the F2F approach with a gain of \(+1.3\%\) in AP50 and \(+8\%\) in IoU. 
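The two-stage refinement training described above (MaskNet frozen, a per-pixel MSE loss, three epochs on short-term autoregressive flows followed by three more on mid-term ones) can be sketched as follows; the data loader contents and the MaskNet call signature are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def train_dae(dae, masknet, loaders, optimizer):
    """Two-stage DAE training sketch: the pretrained MaskNet stays frozen and the DAE
    learns to map its warped masks to the ground-truth masks with a per-pixel MSE."""
    for p in masknet.parameters():
        p.requires_grad_(False)                       # freeze pretrained MaskNet weights
    for horizon in (3, 9):                            # short-term first, then mid-term
        for _ in range(3):                            # 3 epochs per stage
            for masks, flows, gt_masks in loaders[horizon]:
                with torch.no_grad():
                    warped = masknet(masks, flows)    # warp instances with predicted flows
                loss = F.mse_loss(dae(warped), gt_masks)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```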
Some qualitative results of future instance and semantic segmentation are shown in Fig. 10. We additionally provide some qualitative results in terms of instance segmentations predicted, by using FLODCAST and MaskNet-FC+DAE, in comparison with the previous framework, i.e. OFNet and MaskNet. We show enhancements on different objects and shapes predicted both at short-term (Fig. 11) and mid-term (Fig. 12), such as the big shapes (like trams and trucks) as well as some details (like car wheels and pedestrians on the ground). ## 6 Conclusions In this work, we proposed FLODCAST, a novel multimodal and multitask network able to jointly forecast future optical flows and depth maps using a recurrent architecture. Differently from prior work, we forecast both modalities for multiple future frames at a time, allowing decision-making systems to reason at any time instant and yielding state-of-the-art results up to 10 frames ahead on the challenging Cityscapes dataset. We demonstrated the superiority of exploiting both optical flow and depth as input data against single-modality models, showing that leveraging both modalities in input can improve the forecasting capabilities for both flow and depth maps, especially at farther time horizons. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Method & \multicolumn{3}{c}{Short term (T+3)} & \multicolumn{3}{c}{Mid term (T+9)} \\ & AP & AP50 & IoU & AP & AP50 & IoU \\ \hline Mask RCNN oracle & 34.6 & 57.4 & 73.8 & 34.6 & 57.4 & 73.8 \\ \hline MaskNet-Oracle [9] & 24.8 & 47.2 & 69.6 & 16.5 & 35.2 & 61.4 \\ \hline Copy-last segm. [5] & 10.1 & 24.1 & 45.7 & 1.8 & 6.6 & 29.1 \\ Optical-flow shift [5] & 16.0 & 37.0 & 56.7 & 2.9 & 9.7 & 36.7 \\ Optical-flow warp [5] & 16.5 & 36.8 & 58.8 & 4.1 & 11.1 & 41.4 \\ Mask H2F [5] & 11.8 & 25.5 & 46.2 & 5.1 & 14.2 & 30.5 \\ F2F [5] & 19.4 & 39.9 & 61.2 & **7.7** & 19.4 & 41.2 \\ MaskNet [9] & **19.5** & **40.5** & **65.9** & 6.4 & 18.4 & 45.5 \\ \hline MaskNet-FC & 18.1 & 37.8 & 65.4 & 6.7 & 18.9 & 48.4 \\ MaskNet-FC+DAE (Ours) & 18.3 & 39.0 & 65.7 & 7.1 & **20.7** & **49.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Future instance segmentation (AP and AP50) and future semantic segmentation (IoU) of moving objects on the Cityscapes val set. Best results in bold, second best underlined. We also demonstrated that FLODCAST can be applied on the downstream task of segmentation forecasting, relying on a mask-warping architecture, improved with a refining instance model that boosts mid-range predictions. Further research will be considered for future developments, which include the usage of a transformer architecture to boost our multitasking model. Other lines of research may also include more performing mask-level segmentation models to be trained end-to-end with a flow forecasting architecture, in order to directly perform the task for multiple frames at a time, in the same sense FLODCAST was designed.
``` Forecasting object motion and spatial position is a crucial task, especially in safety-critical settings such as autonomous driving. In this work, we address the task by forecasting two different modalities that carry complementary information: optical flow and depth. To this end, we propose FLODCAST, a flow and depth forecasting model based on a multitask recurrent architecture that is trained to predict both modalities at once. Training the model to jointly learn flows and depth maps proves important for both tasks: each task improves when the model is informed about the other modality. The model is also trained to predict several future time steps at once, which provides better supervision, yields more accurate predictions, and gives the model the ability to produce outputs at any future time horizon.
2309.14755
Image Denoising via Style Disentanglement
Image denoising is a fundamental task in low-level computer vision. While recent deep learning-based image denoising methods have achieved impressive performance, they are black-box models and the underlying denoising principle remains unclear. In this paper, we propose a novel approach to image denoising that offers both clear denoising mechanism and good performance. We view noise as a type of image style and remove it by incorporating noise-free styles derived from clean images. To achieve this, we design novel losses and network modules to extract noisy styles from noisy images and noise-free styles from clean images. The noise-free style induces low-response activations for noise features and high-response activations for content features in the feature space. This leads to the separation of clean contents from noise, effectively denoising the image. Unlike disentanglement-based image editing tasks that edit semantic-level attributes using styles, our main contribution lies in editing pixel-level attributes through global noise-free styles. We conduct extensive experiments on synthetic noise removal and real-world image denoising datasets (SIDD and DND), demonstrating the effectiveness of our method in terms of both PSNR and SSIM metrics. Moreover, we experimentally validate that our method offers good interpretability.
Jingwei Niu, Jun Cheng, Shan Tan
2023-09-26T08:29:33
http://arxiv.org/abs/2309.14755v1
# Image Denoising via Style Disentanglement ###### Abstract Image denoising is a fundamental task in low-level computer vision. While recent deep learning-based image denoising methods have achieved impressive performance, they are black-box models and the underlying denoising principle remains unclear. In this paper, we propose a novel approach to image denoising that offers both clear denoising mechanism and good performance. We view noise as a type of image style and remove it by incorporating noise-free styles derived from clean images. To achieve this, we design novel losses and network modules to extract noisy styles from noisy images and noise-free styles from clean images. The noise-free style induces low-response activations for noise features and high-response activations for content features in the feature space. This leads to the separation of clean contents from noise, effectively denoising the image. Unlike disentanglement-based image editing tasks that edit semantic-level attributes using styles, our main contribution lies in editing pixel-level attributes through global noise-free styles. We conduct extensive experiments on synthetic noise removal and real-world image denoising datasets (SIDD and DND), demonstrating the effectiveness of our method in terms of both PSNR and SSIM metrics. Moreover, we experimentally validate that our method offers good interpretability. Deep learning, image denoising, style disentanglement. ## I Introduction Due to inherent physical limitations in various acquisition systems, image signals captured in real-world scenarios are often contaminated by random noise. This noise poses a significant challenge for downstream tasks such as image analysis and understanding. Therefore, image denoising plays a fundamental and critical role in low-level computer vision. In recent years, deep learning-based methods have emerged as a promising approach for image denoising. These methods leverage powerful deep neural networks (DNNs), large-scale datasets, and advanced learning strategies to learn the mapping from degraded observations to the corresponding clean counterparts in an end-to-end manner. Compared to traditional methods, deep learning-based approaches have demonstrated superior performance. Various learning strategies have been employed, including residual learning [1], multi-stage learning [2], and improved loss functions [3]. In terms of network architectures, popular choices include encoder-decoder architectures [4, 5], attention mechanisms [2, 6], and generative adversarial networks (GANs) [3, 7]. Despite their effectiveness and power, these deep learning-based methods are often considered black-box models and the underlying denoising mechanism remains unclear, hindering the ability to understand and analyze the inner workings of these models. Disentangled representation learning (DRL), which aims to decompose variation factors in the feature representations and makes a single variation factor only affect a single image semantic attribute, is considered interpretable [8]. In this context, the semantic attribute refers to specific pixels representing a human-understandable concept, such as glasses or hair color, while variation factors are low-dimensional features corresponding to these semantic attributes in the feature space. 
Currently, DRL has been applied to image processing tasks such as image editing and image reconstruction, where the cyclic consistency and style disentanglement [9, 10] are two frequently employed strategies that facilitate DRL Cyclic consistency aims to _explicitly_ decouple variation factors and separate various content features in the feature space through the twice-cross-reconstruction strategy. Du et al. [9] utilized cyclic consistency for image denoising for the first time. They trained a cyclic encoder-decoder network on unpaired images and explicitly disentangled the content and noise features of noisy images and removed the noise by discarding the noise features. However, discarding imperfectly decoupled noise features may result in the loss of fine image textures and details, leading to poor image quality. On the other hand, style disentanglement decouples the latent features of images into content features and a style vector in an _implicit_ manner. This enables high-level image editing by injecting different style vectors into the latent features [10, 11, 12, 13]. However, image denoising requires global manipulation of pixels rather than local adjustments as in image editing. Therefore, the existing style disentanglement methods designed for general semantic editing are not directly applicable to image denoising tasks. To this end, in this paper, we present a novel framework for image denoising, building upon the concept of style disentanglement. In our framework, we consider noise as a global image style and propose novel losses and network modules to Fig. 1: Style disentanglement in image editing (a) and image denoising (b). (a) In the image editing task, semantic-level attributes (e.g., the glasses style, the hair color style) are extracted and injected into the input image. These styles only affect specific parts of the image, known as local styles. (b) The image denoising task is to remove noise that is distributed across all pixels. In this case, both the noise-free style and the noise style affect pixel-level attributes. extract noise styles from noisy images and noise-free styles from clean images. By leveraging these noise-free styles, we can adjust the latent features of noisy images, leading to effective noise removal. It is important to emphasize that our approach differs from methods used in image editing tasks, where noise-free styles are used to manipulate semantic-level attributes like glasses or hair color. Instead, we focus on pixel-level attributes, specifically noise, as illustrated in Fig. 1. In our framework, the style extractor is shared for both noisy and noise-free images, and the resulting styles are embedded in the same space. Our experiments demonstrate that the transition between noise and noise-free styles in this embedding space can generate images with varying levels of noise. In summary, our main contributions include: * We introduce an effective and interpretable image denoising framework based on style disentanglement. We consider image noise as the global image style. We design a style extractor that captures both noise and noise-free styles, and a style conversion module that utilizes noise-free styles to generate low-response activations for noise features, resulting in effective noise removal. * Different from existing image denoising approaches, our method does not rely on learning complex nonlinear mappings between the degraded and clean image domains. 
Instead, we utilize noise-free styles to directly edit the noise features, enabling effective image denoising. And our method holds the potential to provide valuable insights for safety-critical image restoration tasks. * We conduct extensive experiments on synthetic noise removal and real-world image denoising to validate the effectiveness of our proposed method. Moreover, we experimentally demonstrate that our method is easier to analyze and interpret than existing end-to-end denoising networks. ## II Related Work ### _Image denoising_ In recent years, DNNs-based image denoising methods have demonstrated superior performance over traditional methods [2, 3, 5, 15]. Many network architectures and learning strategies have been developed to fit the non-linear mapping between noisy-clean image pairs. To name a few, the Unet architecture [60], an encoder-decoder-based approach with multi-scale feature representation capacity, is widely used in image denoising. Residual learning [1] focuses on fitting the residual signal and has been proven effective for the convergence of deeper networks. Spatial and channel attention mechanisms enable DNNs to adaptively choose key feature representations [6]. Generative adversarial strategies empower DNNs with the ability to generate visually realistic images without over-smoothing [3, 7]. Moreover, studies have shown that jointly using multiple learning strategies boosts denoising performance, such as the combination of an encoder-decoder and residual learning [5] or the incorporation of GAN and perceptual loss. Zamir et al. [2] improved image denoising performance by using multi-stage architectures, attention mechanisms, and improved losses. More recently, transformer-based image restoration models have been presented to achieve state-of-the-art performance [61, 62]. However, these DNNs-based models are generally difficult to interpret, and the underlying denoising mechanism is not clear. ### _Cyclic consistency_ Cyclic consistency uses multiple encoders to encode different features of the image, and then exchanges different encoded features twice for cross-reconstruction, expecting the final reconstructed image to be consistent with the input image. Cycle consistency was first proposed within the image editing task [25]and then applied to the image restoration task [9, 26]. Lu et al. [26] utilized the cyclic consistency constraint for image deblurring for the first time. They jointly train the blur encoder \(E^{a}\) and content encoder \(E^{c}\) to separate the blur features and clean content features of a blurred image and then perform two cross-exchanges of the blur features to ensure consistency between the target image and the input image. Fig. 2(a) shows a simple diagram of cyclic consistency. Du et al. [9] further extended Lu et al.'s work to the image denoising task. However, these methods generally abandon the valuable blur or noise features, resulting in over-smoothed images. ### _Style disentanglement_ Currently, style disentanglement is primarily used in image editing tasks [11, 12, 13, 27, 28, 29], in which a specific semantic style is utilized to edit corresponding image features in a controlled manner, as shown in Fig. 2(b). Style disentanglement is also applied to low-level vision tasks. Zhou et al. [12] discovered that the style latent space in StyleGAN automatically encodes various semantic attributes of the image and decomposes them in a closed form. Li et al. [13] achieved the simultaneous decomposition of multiple styles in an image. 
Specifically, they organized multiple image tags as a hierarchical tree structure where each tag was independent, and then edited images through multiple styles. Fig. 2: Diagram illustration of cycle consistency [9, 25, 26] and style disentanglement [11, 12, 13, 27, 28, 29]. (a) Cycle consistency uses two sets of encoders to encode different features and then cross-restores the features to make the final image consistent with the input image. (b) Style disentanglement uses a style extractor to extract style vectors corresponding to semantic attributes, and then uses these style vectors to edit image features. ## III Proposed Method We first show the general idea of using style disentanglement for image denoising. Consider paired data sampled from the joint distribution \((x,y)\sim p_{D}(x,y)\), where \(x\) and \(y\) are from the noisy image domain and the clean image domain, respectively. We assume that \(x\) and \(y\) share the same semantic content features \(f^{s}\) in the latent space, while \(x\) contains noise features \(f^{n}\) that \(y\) does not have, and \(y\) contains clean content features \(f^{c}\) that \(x\) does not have. That is, \[\begin{split}(f^{s},f^{n})=enc(x)\\ (f^{s},f^{c})=enc(y)\end{split} \tag{1}\] where \(enc\) is an encoder that extracts feature representations of images. Using the corresponding decoder \(dec\), the features \((f^{s},f^{n})\) and \((f^{s},f^{c})\) are reconstructed into the noisy and clean images themselves: \[\begin{split} x=dec(f^{s},f^{n})\\ y=dec(f^{s},f^{c})\end{split} \tag{2}\] The objective of style disentanglement is to identify domain-specific style representations. Specifically, the noise style \(s_{noise}\) and the noise-free style \(s_{noise\_free}\) are expected to correspond to the noise features \(f^{n}\) and the clean content features \(f^{c}\), respectively: \[\begin{split} s_{noise}=ext(x)\\ s_{noise\_free}=ext(y)\end{split} \tag{3}\] where \(ext\) is the ideal style extractor, which extracts the corresponding style vector from the input image. Once \(s_{noise\_free}\) can be generated accurately, it is subsequently injected into the features of the noisy image \((f^{s},f^{n})\) using the style conversion module \(sc\) to transform the noise features \(f^{n}\) into \(\hat{f}^{c}\), which approximates the clean content features \(f^{c}\): \[(f^{s},\hat{f}^{c})=sc((f^{s},f^{n}),s_{noise\_free}) \tag{4}\] where \(s_{noise\_free}\) is passed through a fully connected layer to obtain adaptive parameters \(\{s_{s},s_{b}\}\), which are used to modulate the features channel-wise; the style converter then uses adaptive instance normalization (AdaIN) operations [10] to edit the image features: \[AdaIN(e_{i},s)=s_{s}\frac{e_{i}-\mu(e_{i})}{\sigma(e_{i})}+s_{b} \tag{5}\] where \(e_{i}\) is the \(i\)-th channel feature, \(\mu(e_{i})\) and \(\sigma(e_{i})\) are its mean and standard deviation, respectively; \(s_{s}\) is the scale parameter and \(s_{b}\) is the bias parameter. Finally, the modified features \((f^{s},\hat{f}^{c})\) are used to complete the image denoising task: \[\hat{y}=dec(f^{s},\hat{f}^{c}) \tag{6}\] where \(\hat{y}\) is the expected denoised image. Specifically, we propose a novel denoising framework called the Style Disentanglement for Image Denoising Network (SDIDNet), which mainly involves noise and noise-free style extraction (SE), style generation (SG), and style conversion (SC). As shown in Fig. 4, an encoder is first used to map noisy images \(x\) into a low-dimensional feature space. 
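As a concrete illustration of the channel-wise modulation in Eq. (5), the snippet below maps a style vector to per-channel scale and bias parameters through a fully connected layer and applies them after instance normalization; the feature and style dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn

class StyleAdaIN(nn.Module):
    """Channel-wise AdaIN modulation of Eq. (5), driven by a style vector (a sketch)."""

    def __init__(self, style_dim=256, channels=64):
        super().__init__()
        # A fully connected layer maps the style to per-channel (scale, bias) pairs.
        self.to_params = nn.Linear(style_dim, 2 * channels)

    def forward(self, feat, style):          # feat: (B, C, H, W), style: (B, style_dim)
        s_scale, s_bias = self.to_params(style).chunk(2, dim=1)
        s_scale = s_scale[..., None, None]   # broadcast to (B, C, 1, 1)
        s_bias = s_bias[..., None, None]
        mu = feat.mean(dim=(2, 3), keepdim=True)
        sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-5
        return s_scale * (feat - mu) / sigma + s_bias
```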
Meanwhile, we propose a style extractor to extract \(s_{noise}\) and \(s_{noise\_free}\) from noisy and clean images under the guidance of the dedicated loss, respectively. We assume that \(s_{noise}\) and \(s_{noise\_free}\) can represent \(f^{n}\) and \(f^{c}\), respectively. Then, the SC module uses the \(s_{noise\_free}\) to edit the noisy features, aiming to remove noise and preserve clean content in the feature space. Finally, a decoder is used to reconstruct the image from the edited features. The style-changed outputs of the decoder are exactly the denoised results. In this pipeline, clean images that provide noise-free styles are indispensable. Unfortunately, clean images are inaccessible in the inference phase. To address this issue, we design a style generator that takes a normal distribution as input and is trained to generate sampling styles that are similar to real noise-free styles. Unlike previous denoising methods, image denoising via the noise-free style edit can be understood as an operation in the image style latent space, as shown in Fig. 3. Previous methods focused on learning a better and more complex nonlinear mapping function in natural image manifolds. In contrast, we decouple the noise-free styles by connecting the image space and the style latent space through a style extractor and inject noise-free styles into noisy images' features for image denoising. In the following section, we give a detailed introduction to each component in our SDIDNet. ### _Encoder-Decoder_ The encoder-decoder is a classic and powerful feature representation learner that has been widely used in image denoising tasks. In this work, we utilize the Transformer layer (i.e., Swin block [30]) to construct both the encoder and decoder. **Swin Transformer:** Recently, Transformer has achieved great success in many computer vision tasks, mainly due to its powerful capacity in modeling long-range dependencies. However, self-attention in Transformer leads to a heavy computational burden. To alleviate this problem, Swin-Transformer with shifted and local window mechanism is proposed [30]. The architecture of Swin block is shown in Fig. 5. Given the input feature \(z^{l-1}\in R^{p\times q\times d}\), a linear embedding layer first reshapes the input feature into a feature of size \(\frac{pq}{M^{2}}\times M^{2}\times d\) through partitioning the input into non-overlapping local \(M\times M\) windows, where \(\frac{pq}{M^{2}}\) is the total number of partition windows. After that, a LayerNorm (LN) layer is used to normalize all features of each sample, and the standard multi-head self-attention (MSA) is performed for each window. Specifically, for each local window feature \(I\in R^{M^{2}\times d}\), the query, key Fig. 3: Schematic diagram of natural image manifold and style latent space. and value matrices \(Q,K,V\in R^{M^{2}\times d}\) are calculated through learnable projection matrices \(P_{Q},P_{K},P_{V}\in R^{d\times d}\): \[Q=IP_{Q},K=IP_{K},V=IP_{V} \tag{7}\] where \(P_{Q},P_{K},P_{V}\) are shared across all local windows. The attention matrix is then calculated for each window: \[MSA(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d}}+bias)V, \tag{8}\] where \(bias\) is the learnable relative positional encoding. After that, another LN followed by a multi-layer perceptron (MLP) is used for further feature transformation, where the MLP is composed of two fully connected (FC) layers and a GELU nonlinear activation. Skip connections are also applied for better information propagation. 
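A minimal, single-head sketch of the window attention in Eqs. (7)-(8) is shown below; multi-head splitting, window partitioning, and the masking used for shifted windows are omitted, and all names are assumptions.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Single-head window self-attention following Eqs. (7)-(8) (a sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.p_q = nn.Linear(dim, dim, bias=False)  # shared projections P_Q, P_K, P_V
        self.p_k = nn.Linear(dim, dim, bias=False)
        self.p_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, windows, rel_pos_bias):
        # windows: (num_windows, M*M, dim); rel_pos_bias: (M*M, M*M) learnable bias
        q, k, v = self.p_q(windows), self.p_k(windows), self.p_v(windows)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale + rel_pos_bias, dim=-1)
        return attn @ v
```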
The pipeline of Swin block can be expressed as follows: \[z^{l-1}= MSA(LN(z^{l-1}))+z^{l-1} \tag{9}\] \[z^{l}= MLP(LN(z^{l-1}))+z^{l-1}\] To further build connections among windows, the shifted window partition strategy was proposed in [30], which is alternately used together with regular window partition. **Encoder:** It has been found that the convolution layer is effective in early visual processing, while Transformers are better suited for deep features [31]. Given a noisy image input \(x\in\mathbb{R}^{H\times W\times 3}\) (H, W, and 3 are the image height, width, and channel number, respectively), we first use a \(3\times 3\) convolution layer to extract shallow features \(F_{sf}\in\mathbb{R}^{H\times W\times C}\) from \(x\), where \(C\) is the channel number of shallow features. The \(F_{sf}\) is then sent to the several cascaded DownSwinBlocks for deep features extraction. As shown in Fig. 6 (a), a single DownSwinBlock module mainly comprises the two-branch Swin blocks. The outputs of the two branches are then element-wise added, followed by a \(2\times 2\) pooling layer to get low-resolution features. The output of DownSwinBlocks \(f_{DSB}\) can be expressed as \[F_{e}=f_{DSB}(F_{sf}) \tag{10}\] **Decoder:** The decoder is composed of several cascaded UPSwinBlocks and a convolutional layer at the last. The structure of a single UPSwinBlock is similar to that of DownSwinBlock, Fig. 4: The SDIDNet structure consists of (a) the style generation module and (b) the style editing module. The style generation module captures the style corresponding to the unique features, while the style editing module converts the input image’s unique features into the unique features corresponding to the given style. **Training phase:** We first encode the noisy and clean images into features using the encoder. The style extractor extracts noisy and noise-free styles from the noisy image and clean image, respectively, while the style generator generates sampling styles that mimic noise-free styles. Subsequently, the style conversion (SC) module edits noisy image features with the given styles and the noise will be removed under the guidance of the noise-free style. Finally, the decoder reconstructs the denoised image. **Inference phase:** Only noisy image and sampling styles are needed to complete image denoising. Fig. 5: The structure of a Swin block. W-MSA is the Window-Multi head self-attention. MLP is the multilayer perceptron. as shown in Fig. 6 (b). An upsampling layer is applied at the end of DownSwinBlock to double the feature resolution. The output of UPSwinBlocks \(f_{USB}\) can be expressed as \[F_{d}=f_{USB}(F_{in}) \tag{11}\] where \(F_{in}\) is the input feature of the decoder, and \(F_{d}\) is the output feature of the last UPSwinBlock module. At last, a \(3\times 3\) convolution operation on \(F_{d}\) generates the final output \(F_{out}\) which has the same size as the noisy image. Note that during the training our encoder-decoder is only used to learn good feature representations of the inputs (both \(x\) and \(y\)) rather than the mapping from \(x\) to \(y\) as done in current deep learning-based methods. ### _Style extractor and Style generator_ The style extractor is developed to extract the noise and noise-free styles from noisy and clean images, respectively. As shown in Fig. 6 (c), an input image is converted into deep features through multiple convolution layers, and then transformed into a 2048-dimension vector by global average pooling. 
Next, a \(1\times 1\) convolution is adapted to generate a 256-dimension vector, which represents the input image's style. In the training phase, clean images are accessible to get the noise-free styles by feeding them to the style extractor while the noise-free styles are inaccessible in the inference phase due to the absence of clean images. To address this issue, a style generator is developed to learn the noise-free style distribution. The architecture of the style generator is shown in Fig. 6(f). Following styleGAN's style generation [10], our style generator takes a 32-dimensional vector sampled from normal distribution as input and then passes it through six FC layers and the ReLU activation to generate a 256-dimensional style vector. ### _Style conversion_ Style conversion (SC) is devised to remove the noise of intermediate features. It injects noise-free styles into the noisy images' features through a series of AdaINBlocks, each of which consists of an AdaIN operation and the convolutional layer. As shown in Fig. 4, the SC module starts with a \(1\times 1\) convolution layer that reduces the number of channels in the input feature. Next, the given style is passed through a fully connected layer to obtain adaptive parameters \(\{s_{s},s_{b}\}\), which are used to modulate the mean of the feature channels. Then, \(N\) successive AdaIN operations [10] are employed to edit the feature. It has been demonstrated that feature statistics, such as mean and variance, carry the style information of an image [32]. Injecting a noise-free style will produce higher activations for image content features and lower activations for noise features, which means that the noise information in the feature channel is suppressed while the spatial structure of the content image is preserved. To prevent the AdaIN operations from modifying the global attributes such as the content and background, we further design an attention module to preserve the image content attribute. The attention module operates on both channels and spatial directions, and the detailed structure is shown in Fig. 6 (d). It can be defined as \[F_{sc}=Mask\odot F_{e}+(1-Mask)\odot F_{adain} \tag{12}\] where \(F_{e}\) is the feature obtained by the encoder, \(F_{adain}\) is the feature after style editing, and \(Mask\) is a mask generated from \(F_{adain}\) through a \(1\times 1\) convolution layer and sigmoid function. \(\odot\) denotes element-wise multiplication. ### _Training loss function_ **Reconstruction loss.** We first design the following reconstruction loss: \[\begin{split}\mathcal{L}_{rec}&=\lambda_{1}\mathbb{ E}_{p(x,y)}\left(\|x-dec(enc(x))\|_{1}\right.\\ &\left.+\|y-dec(enc(y))\|_{1}\right.\\ &\left.+\|x-dec(sc(enc(x_{i}),s_{noise}))\|_{1}\right)\\ &\left.+\lambda_{2}\mathbb{E}_{p(x,y)}\left(\|y-dec(sc(enc(x),s_ {noise\_free}))\|_{1}\right.\right.\\ &\left.+\|y-dec(sc(enc(x),s_{gen}))\|_{1}\right)\end{split} \tag{13}\] Fig. 6: The detailed structure of the network modules. (a): DownSwinBlock module, (b): UpSwinBlock module, (c): Style extractor module, (d): Attention module, (e): ConvBlock module, (f): Style generator module. where \(s_{noise}=ext(x)\) is the noise style, \(s_{noise\_free}=ext(y)\) is the noise-free style, \(s_{gen}\) is the sampling style. The balance coefficients \(\lambda_{1}\) and \(\lambda_{2}\) are also included. The expectation \(\mathbb{E}\) is taken over the joint distribution \(p(x,y)\) and is practically approximated by the summation on the collected dataset. 
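For readability, the sketch below spells out how the terms of Eq. (13) can be assembled for one batch; enc, dec, sc, ext, and gen denote the encoder, decoder, style conversion module, style extractor, and style generator, z is a vector sampled from a normal distribution, and all of these call signatures are assumptions about the implementation.

```python
import torch.nn.functional as F

def reconstruction_loss(x, y, z, enc, dec, sc, ext, gen, lam1=0.1, lam2=0.3):
    """Batch-wise sketch of the reconstruction loss in Eq. (13), using mean L1 terms."""
    s_noise, s_noise_free, s_gen = ext(x), ext(y), gen(z)
    fx, fy = enc(x), enc(y)
    self_terms = (F.l1_loss(dec(fx), x)                       # noisy self-reconstruction
                  + F.l1_loss(dec(fy), y)                     # clean self-reconstruction
                  + F.l1_loss(dec(sc(fx, s_noise)), x))       # noisy-style round trip
    denoise_terms = (F.l1_loss(dec(sc(fx, s_noise_free)), y)  # denoise with extracted style
                     + F.l1_loss(dec(sc(fx, s_gen)), y))      # denoise with sampled style
    return lam1 * self_terms + lam2 * denoise_terms
```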
In particular, the first two terms in Eq. (13) encourage the encoder-decoder to learn a feature representation without removing noise. The third and fourth terms encourage the style extractor to generate accurate style vectors, achieving style disentanglement. The \(sc\) operation adjusts noisy image features to those corresponding to the given style. The last item encourages the style generator to learn an accurate distribution of noise-free styles. **Style regression loss.** We additionally design a style regression loss, similar to [25]: \[\begin{split}\mathcal{L}_{sty}&=\mathbb{E}_{p(x,y )}\left(\|s_{gen}-ext(x_{trg})\|_{1}\right.\\ &\left.+\|s_{gen}-s_{noise\_free}\|_{1}\right)\end{split} \tag{14}\] where \(x_{trg}=dec(sc(enc(x),s_{gen}))\). The first term in the style regression loss encourage the extracted style of the reconstruction image from the sampled styles to be consistent with the sampled style itself, and the second term encourages the style generator to generate a sampling style similar to the noise-free style. **Full loss.** Finally, our full loss function can be written as \[\mathcal{L}_{full}=\mathcal{L}_{rec}+\lambda_{sty}\mathcal{L}_{sty} \tag{15}\] where \(\lambda_{sty}\) is the hyperparameter. ## IV Experiments and Evaluation In the following section, we provide a detailed overview of our experimental settings, present the experimental results, and demonstrate the effectiveness of SDIDNet through ablation experiments. We evaluate the performance of SDIDNet over two datasets (synthetic Gaussian noise dataset and two real-world image denoising datasets (SIDD [20], DND [21])). Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are adopted to evaluate denoising performance. ### _Training settings_ SDIDNet is an end-to-end training model and requires no pre-training. In the training phase, the batch size was set to 6, and all images were randomly cropped to \(128\times 128\). Hyperparameters in the reconstruction loss were set as follows: \(\lambda_{1}=0.1,\lambda_{2}=0.3\). Hyperparameters in the full loss were set as follows: \(\lambda_{sty}=0.1\). We used two Adam optimizers with the first-order equilibrium constants \(\beta_{1}=0.9\), and the second-order equilibrium constants \(\beta_{2}=0.999\). The learning rate of the style generator was set to 1e-6, and the other learning rates were set to 2e-4. We used the cosine annealing strategy, and the learning rate gradually decayed from 2e-4 to 1e-6. At the same time, we used rotation, flipping, and cropping strategies for data enhancement. All experiments were done on a single RTX 2080Ti GPU. ### _Results on synthetic Gaussian noise dataset_ We first evaluate SDIDNet on the synthetic Gaussian noise dataset. The training set consists of 900 images from the DIV2K dataset [33] and 2750 images from the Flick2K dataset [34] while the test set comprises Set12 [1], BSD68 [35], CBSD68 [36], Kodak24 [37], McMaster [38], and Urban100 [39]. We follow the standard practice of adding additive Gaussian White noise (AWGN) with noise level \(\sigma=15,25,50\) to both grayscale and color images. Additionally, we chose traditional denoising algorithms (BM3D [40], WNNM [41]) and previous DNN-based denoisers (DnCNN [1], IRCNN [42], FFDNet [18]) as the comparison methods. To evaluate the effectiveness of SDIDNet, we measure the PSNR and SSIM of all images on the test set and then average the results. Experimental results are shown in Table I and Table II. 
It can be seen that SDIDNet has achieved the best PSNR at different noise levels and on different test sets. Specifically, for the grayscale image test sets (Set12, BSD68), the PSNR of SDIDNet is about 0.3dB and 0.1dB higher than that of DnCNN at three noise levels, respectively. The first row in Fig. Fig. 7: Visual results on synthetic Gaussian denoising. Top row: Gaussian grayscale denoising (\(\sigma=15\)). Middle row: Gaussian color denoising (\(\sigma=25\)). Bottom row: Gaussian color denoising (\(\sigma=50\)). The red box is the region of interest (ROI). 7 shows the denoising results of DnCNN, FFDNet, SDIDNet on the image "house" from Set12. It is apparent that SDIDNet may display clearer texture details. For the color image test sets (CBSD68, Kodak24, McMaster, Urban100), SDIDNet is more competitive than the previous method. The second and third rows in Fig. 7 show the denoising results at noise levels of 25 and 50, respectively. It can be observed that SDIDNet has significant visual advantages. ### _Results on SIDD Benchmark_ The Smart Phone Image Denoising Data Set (SIDD) is a set of noisy images obtained in 10 scenes using five smartphones under different lighting conditions. We use 320 high-resolution color images for training, while SIDD provides 1280 color images(\(128\times 128\)) for the test set. We choose the previous DnCNN [1], BM3D [35], CBDNet [22], RIDNet [6], AIMDNet [23], VDN [46], and MPRNet [2] methods for comparison. Table III lists the PSNR and SSIM results. Fig. 8 displays results for a visual inspection. In this dataset, SDIDNet achieves a PSNR of 39.45 dB, which is 0.17 dB higher than that of VDN. As shown in Fig. 8, SDIDNet can effectively remove noise and retain texture details. MPRNet had a PSNR value of 39.71 dB, slightly higher (0.26dB) than SDIDNet. It is worth noting that MPRNet employs a multi-stage learning strategy, which builds a multi-level architecture to preserve contextual features, while SDIDNet uses a single-stage learning strategy in its current version. We compute the computational cost of different methods and select images of size \(3\times 256\times 256\) for testing. Multiply-accumulate operations (MACs) and inference time are shown in Table III. Surprisingly, our model can complete inference using only **5.0%** of the MACs of MPRNet, which is mainly due to two reasons. Firstly, we use a single-stage training strategy rather than a two-stage one. Secondly, the depth of our Swin block is only 2 or 4, which is shallower than the standard Swin-Tiny [30]. **Mixing styles.** We present a detailed explanation of the denoising process involved in mixed-style editing, as shown in Fig. 9. Since the parameters of the style extractor are shared, both the noise style and the noise-free style are in the same style latent space. Define the mixed style as an intermediate state \(s_{mixed}=\lambda s_{noise\_free}+(1-\lambda)s_{noise}\). By adjusting the parameter \(\lambda\), we present the transition between the noise and noise-free styles, which corresponds to different degrees of denoising results. It is evident that when only the noisy style is used (\(\lambda=0\)), the denoising result is almost identical to the input noisy image. As \(\lambda\) gradually increases, the mixed style is changing from a noise style to a noise-free style, and the denoised image becomes progressively clearer. Such experiment demonstrates the good interpretability of our proposed method. 
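The λ-sweep described above can be reproduced with a few lines; the module names follow the notation of Sec. III and are assumptions about the implementation rather than the released code.

```python
import torch

@torch.no_grad()
def mixed_style_denoising(x, y, enc, dec, sc, ext, lambdas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Decode one image per mixing coefficient of s_mixed = lam * s_noise_free + (1 - lam) * s_noise."""
    s_noise, s_noise_free = ext(x), ext(y)
    feats = enc(x)
    outputs = []
    for lam in lambdas:
        s_mixed = lam * s_noise_free + (1.0 - lam) * s_noise
        outputs.append(dec(sc(feats, s_mixed)))  # lam = 0 keeps the noise, lam = 1 denoises
    return outputs
```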
**Style distribution.** It is expected that the noise and noise-free styles should be easily distinguishable in the style latent space, and that the distribution of sampling styles should closely match that of noise-free styles. To demonstrate this, we perform a visualization analysis on these styles. Specifically, the styles are 256-dimensional vectors, and we use the t-SNE [48] algorithm to reduce their dimensionality to a two-dimensional embedding space for visualization. The visualization results on the SIDD test set are shown in Fig. 10. The results demonstrate the following: 1) The distributions of the noise-free and noise styles differ significantly, indicating that the style extractor maps the image manifold to an accurate style latent space. 2) The distribution of the sampling style is very close to that of the noise-free style, illustrating the effectiveness of our reconstruction loss and style regression loss. 3) The mixed style lies between the noise-free and noise styles. The mixed style can therefore be regarded as forming a path from the noise styles to the noise-free styles; the closer it is to the noise-free style, the lower the activation response to noise features, enabling controllable image denoising.

### _Results on DND Benchmark_

The Darmstadt Noise Dataset (DND) comprises 50 pairs of real noisy images. For each pair, a reference image is captured with the base ISO level, while the noisy image is captured with a higher ISO and an appropriately adjusted exposure time. Note that the DND dataset does not contain any training data. Therefore, we use the model trained on the SIDD train set and test directly on the DND benchmark, which validates the performance of our method across datasets. The comparison results are shown in Table III, and we also provide a visual comparison in Fig. 11. The PSNR of SDIDNet is 0.29 dB higher than that of VDN, reflecting the effectiveness of our method.

Fig. 8: Visual comparisons of different methods on SIDD Benchmark.
Fig. 9: Visualization of denoising from mixed styles on the SIDD test set. The first and last columns are noisy and clean images, and the middle five columns are denoised images with increasing \(\mathrm{Cos}^{2}(s_{mixed},s_{noise\_free})\), where \(\mathrm{Cos}\) denotes the cosine similarity measure.
Fig. 10: 2D distribution map of noise-free style, noise style, sampling style, and mixed style using the t-SNE algorithm.
Fig. 11: Visual comparisons of different methods on DND Benchmark. The red box indicates the ROI.

### _Ablation study_

**Ablation on style conversion module.** Without retraining, we test the PSNR of denoised images with and without style conversion (SC) on SIDD and DND. Results are listed in Table IV. We notice that, without the SC module, SDIDNet degrades to a standard encoder-decoder, and its performance drops significantly. Moreover, without the SC module, the denoised image from SDIDNet is very similar to the input noisy image, which indicates that SDIDNet without the SC module essentially learns an identity mapping. We also visualize the image features before and after applying sampling style editing, as shown in Fig. 12. The first row presents the features of the noisy "Monarch" image (\(\sigma=25\)) from Set12. The second row displays the features after applying sampling style editing, and the third row shows the histogram and the corresponding fitted Gaussian curve of the feature difference. Features in the same column come from the same channel.
The first row shows that the encoder implicitly encodes the image content features (the first two columns) and the noisy features (the last two columns), respectively. The image features in the second row are significantly cleaner, indicating that the sampling-style editing process can effectively remove noise. The difference histogram in the third row is well fitted by a Gaussian distribution, implying that the sampling style mainly removes the Gaussian-like signal from the features. Furthermore, for the first two columns of content-feature channels, the mean of the difference histogram is close to 0 and the mean of the feature remains almost unchanged, indicating that the sampling style generates high-response activations for content features and preserves image content. For the last two columns of noisy-feature channels, the feature differences are more dispersed and the mean value of the histogram is significantly less than 0, indicating that the sampling style produces low-response activations to noise features and removes noise.

Fig. 12: Image features and their differences before and after applying sampling style editing. Top row: features of the noisy image; middle row: features after sampling style editing; bottom row: the feature difference between the first two rows. The blue and black lines show the histogram of the feature difference, and the red line shows the fitted Gaussian curve. Each column of features belongs to the same channel.

**Influence of the AdainBlock number in the SC module.** Table V provides results obtained on SIDD with different numbers of AdainBlocks. With N=8 as the inflection point, the denoising performance first increases and then decreases. On the one hand, when N=1, the style set \(\{s_{s},s_{b}\}\) generated from the 256-dimensional style contains only 64 groups, and it is challenging to decouple the content and the noise features. On the other hand, when N = 16 or 32, the higher dimensionality of the style set may increase the difficulty of fitting the model. In addition, we also test SDIDNet's denoising performance when keeping only \(s_{s}\) or only \(s_{b}\). Results show that with only the bias parameter \(s_{b}\), SDIDNet fails to converge, and with only the scale parameter \(s_{s}\), the performance of SDIDNet slightly decreases; for example, on SIDD, the PSNR drops to 39.14 dB. This suggests that editing the feature means is the primary factor for denoising, and that \(s_{b}\) can further improve the denoising ability.

## V Discussion and Conclusion

In this paper, we propose a novel strategy for image denoising based on style disentanglement, providing a new perspective on the image denoising task. Rather than relying on complex network architectures or accurate image noise modeling, the proposed style conversion operation naturally converts a noisy image's noise style into the noise-free style. We demonstrate the effectiveness of the style disentanglement strategy in image denoising tasks. In addition, we experimentally find that the noise-free style produces high-response activations to content features and low-response activations to noise features, thereby removing noise. We believe that image denoising via style disentanglement provides an interpretable framework, and its application to other low-level image reconstruction tasks is worthy of further exploration. Here, we show some initial experimental results of SDIDNet on tasks other than image denoising.
We apply SDIDNet to image motion deblurring and image deraining. The U10SR [49], DeblurGAN [50], SVR [51], DerainNet [52], SEMI [53], and UMRL [54] methods are employed for comparison, and the results are shown in Fig. 13 and Table VI. For the image deblurring task, blurred and clean images have distinct blur and sharp features, corresponding to blur and sharp styles, respectively. We evaluate SDIDNet on the GoPro dataset, and the average PSNR is 29.53 dB, 0.83 dB higher than that of the classic DeblurGAN [50]. We also observe that, when blurred image features are edited with the sharp style, the sharp style produces low-response activations to blur features, achieving deblurring. Similarly, for the image deraining task, rainy and clean images have distinct rainy and rain-free features, corresponding to rainy and rain-free styles, respectively. The rain-free style produces low-response activations for rainy features, and SDIDNet outperforms the classic DerainNet [52] and SEMI [53] algorithms on the Rain100H, Rain100L, and Test1200 datasets. These results demonstrate that the style disentanglement strategy applies widely to various low-level vision tasks. We note that our current SDIDNet model requires a supervised training strategy, which may restrict its application in real-world scenarios where large amounts of paired data are difficult to collect. Further research on how to extract noise-free styles from noisy images alone is a promising direction. In addition, our current model does not employ the latest Transformer blocks introduced in [61, 62], which leads to a performance gap compared with state-of-the-art methods. Incorporating these modules into our encoder-decoder is left for future research.
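To relate the style-conversion ablation above to code, here is a minimal AdaIN-style sketch of how an SC module built from N AdainBlocks could apply the per-channel scale (\(s_s\)) and bias (\(s_b\)) sets predicted from a 256-dimensional style vector. The block structure, layer sizes, and use of instance normalization are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AdainBlock(nn.Module):
    """One AdaIN-style modulation step: normalize features, then apply a
    style-predicted scale (s_s) and bias (s_b)."""
    def __init__(self, channels, style_dim=256):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale = nn.Linear(style_dim, channels)   # predicts s_s
        self.to_bias = nn.Linear(style_dim, channels)    # predicts s_b
    def forward(self, feat, style):
        s_s = self.to_scale(style).unsqueeze(-1).unsqueeze(-1)
        s_b = self.to_bias(style).unsqueeze(-1).unsqueeze(-1)
        return (1 + s_s) * self.norm(feat) + s_b

class StyleConversion(nn.Module):
    """Stack of N AdainBlocks; the ablation in Table V varies N."""
    def __init__(self, channels, style_dim=256, n_blocks=8):
        super().__init__()
        self.blocks = nn.ModuleList([AdainBlock(channels, style_dim) for _ in range(n_blocks)])
    def forward(self, feat, style):
        for blk in self.blocks:
            feat = blk(feat, style)
        return feat

sc = StyleConversion(channels=64, n_blocks=8)
out = sc(torch.randn(1, 64, 32, 32), torch.randn(1, 256))   # style-edited features
```

Varying `n_blocks` mirrors the N studied in Table V, and disabling the bias or scale branch corresponds to the \(s_{s}\)-only and \(s_{b}\)-only ablations.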
Image denoising is a fundamental task in low-level computer vision. In recent years, deep-learning-based image denoising methods have achieved impressive performance, but they operate as black boxes and their denoising principle remains unclear. This paper proposes a new approach to image denoising in which the denoising mechanism is explicit while high performance is still achieved: noise is removed by incorporating a noise-free style. The noise-free style, obtained from clean images, is used to remove noise; it induces low responses to noise features and high responses to content features, thereby separating content from noise and effectively denoising the image. Unlike disentanglement-based image editing tasks, which use styles to edit semantic-level attributes, our main contribution
2301.00102
Giant excitonic magneto-optical Faraday rotation in single semimagnetic CdTe/Cd_{1-x}Mn_{x}Te quantum ring
Magnetic tuning of the bound exciton states and corresponding giant Zeeman splitting (GZS) between {\sigma}^{+} and {\sigma}^{-} excitonic transitions in CdTe/Cd_{1-x}Mn_{x}Te quantum ring has been investigated in the Faraday configuration for various concentrations of Mn^{2+} ions, using the variational technique in the effective mass approximation. The sp-d exchange interaction between the localized magnetic impurity ions and the delocalized charge carriers has been accounted via mean-field theory with the inclusion of a modified Brillouin function. The enhancement of the GZS, and in turn, the effective g-factor with the application of an external magnetic field, is strikingly manifested in type-I - type-II transition in the band structure, which has been well explained by computing the overlap integral between the electron and hole, and the in-plane exciton radius. This highlights the extraordinary magneto-optical properties, including the giant Faraday rotation and associated Verdet constant, which have been calculated using single oscillator model. The oscillator strength and exciton lifetime have been estimated, and are found to be larger than in the bulk diluted magnetic semiconductors (DMS) and quantum wells, reflecting stronger confinement inside the quantum ring. The results show that the DMS-based quantum ring exhibits more extensive Zeeman splitting, which gives rise to ultra-high Verdet constant of 2.6 \times 10^{9}rad/Tesla/m, which are a few orders of magnitude larger than in the existing quantum systems and magneto-optical materials.
Kalpana Panneerselvam, Bhaskaran Muralidharan
2022-12-31T03:01:20
http://arxiv.org/abs/2301.00102v2
Giant excitonic magneto-optical Faraday rotation in single semimagnetic CdTe/Cd\({}_{\text{1-x}}\)Mn\({}_{\text{x}}\)Te quantum ring ###### Abstract Magnetic tuning of the bound exciton states and corresponding giant Zeeman splitting (GZS) between \(\sigma^{+}\) and \(\sigma^{-}\) excitonic transitions in CdTe/Cd\({}_{\text{1-x}}\)Mn\({}_{\text{x}}\)Te quantum ring has been investigated in the Faraday configuration for various concentrations of Mn\({}^{2+}\) ions, using the variational technique in the effective mass approximation. The sp-d exchange interaction between the localized magnetic impurity ions and the delocalized charge carriers has been accounted via mean-field theory with the inclusion of a modified Brillouin function. The enhancement of the GZS, and in turn, the effective g-factor with the application of an external magnetic field, is strikingly manifested in type-I - type-II transition in the band structure, which has been well explained by computing the overlap integral between the electron and hole, and the in-plane exciton radius. This highlights the extraordinary magneto-optical properties, including the giant Faraday rotation and associated Verdet constant, which have been calculated using single oscillator model. The oscillator strength and exciton lifetime have been estimated, and are found to be larger than in the bulk diluted magnetic semiconductors (DMS) and quantum wells, reflecting stronger confinement inside the quantum ring. The results show that the DMS-based quantum ring exhibits more extensive Zeeman splitting, which gives rise to ultra-high Vredet constant of \(2.6\times 10^{9}\)rad/Tesla/m, which are a few orders of magnitude larger than in the existing quantum systems and magneto-optical materials. ## I Introduction Diluted magnetic semiconductors (DMS) are of special interest because of their unique combination of semiconducting and magnetic properties. The sp-d exchange interaction between the localized magnetic moments of the dopant magnetic ions and the spins of the charge carriers (electrons/holes) in DMS [1; 2; 3; 4; 5; 6] significantly alters the energy spectra of the carriers, which greatly enhances the spin-dependent effects. Moreover, these effects can be widely tuned by an external magnetic field, temperature, and the concentration of magnetic ions to induce fascinating magneto-optical (MO) and magneto-electrical properties. Among all the exciting signatures of such exchange interaction, the striking consequences are the giant Zeeman splitting (GZS) [7; 8; 9] and the giant Faraday rotation (GFR) [10; 11]. The Zeeman splitting between the band states with different spin components generates the spin - polarization of the conducting carriers, which is exploited as spin aligners in spintronic devices [12; 13], an anomalous magnetoresistance at low temperature [14], and vastly amplifies the conversion of the spin current into an electrical current [15]. Hence, the DMS have been an active area of research as an alternative to the ferromagnetic metal contacts for the efficient spin-injection into non-magnetic semiconductors, spin detection, and the realization of the spin-polarized transport in semiconductor structures, which have substantial industrial applications in the field of magnetoelectronics, spintronics, and solid-state quantum computing [16; 17; 18; 19; 20; 21; 22]. 
The concept of the FR (a solid rotation of the plane of polarization of light travels in a magnetized medium along the applied magnetic field) manifests itself in various MO devices such as optical isolators, Faraday rotators, and optical circulators for high-speed optical communication systems [23; 24; 11; 25], for which DMS act as potential MO materials. The carrier localization and its transport properties have been examined using DMS materials in various nanostructured systems like quantum wells, wires, and dots [26; 27; 28; 29]. Considerable attention paid to the quantum ring (QR)-based infrared photodetectors and lasers [30; 31] in recent times due to its doubly-connected topological nature has engendered an interest in us to study how the radial and axial confinement of the individual carriers and the exciton in semimagnetic QR impact the sp-d exchange interaction in the Faraday configuration (the magnetic field is applied along the direction of observation (z) and parallel to the light wave vector). The strong sp-d coupling makes magnetic ions mediate the influence of the magnetic field on the band gap engineering by enhancing the Zeeman splitting of the energy levels, which is strikingly manifested in type-I - type-II transition in the band offset [32; 33; 34; 35]. Hence, the DMS extends its potential applications to optoelectronics due to the possible tuning of the band states, which in turn tune the emission wavelength widely over Near- to Far-IR, creating a giant optical response. This article aims to delineate the magnetic tuning of exciton energy states due to the GZS between \(\sigma^{+}\) and \(\sigma^{-}\) spin components, in turn, the effective g-factor for various mole fractions of (x) magnetic dopants in CdTe/Cd\({}_{\text{1-x}}\)Mn\({}_{\text{x}}\)Te QR since CdMnTe has well served as a potential MO material for the past few decades towards optoelectronic applications. Various MO parameters, such as oscillator strength, radiative lifetime, and radiative decay rate, have been evaluated. The occur rence of type-I - type-II transition in a single semimagnetic QR has been well explained in the present communication by computing the overlap integral between the electron and hole, and estimating the in-plane exciton radius. Though QRs are more flexible for experimental developments [36; 37; 38; 39; 30] due to the advances in fabrication procedures, and DMS addresses the fundamental challenges in the spintronic devices in its unique way, the possible integration of DMS into QR structures has not yet been developed to unveil the hidden mystery. Few theoretical studies have been proposed on single and concentric double QRs doped with transition metal ions focusing on the magnetic and thermal properties [40; 41; 42] but not on the MO properties of excitons, which is of novel interest in the present work. We later show a theoretical evaluation of the Verdet constant of a remarkable MO phenomenon, the GFR, using single oscillator model. The source for the larger values of the Verdet constant in DMS has been traced down to the GZS of the energy band states near the band gap resonance. Although most research has focused on achieving a larger Verdet constant with various MO materials, especially Cd\({}_{1-x}\)Mn\({}_{x}\)Te, these studies have been restricted only to bulk DMS [43; 44; 45; 10] and epitaxial heterostructures in the form of wells [47; 48; 49; 50; 51] and dots [52; 11; 5]. 
Therefore, investigating the impact of sp-d exchange interaction on the GFR in DMS heterostructures with various topologies, like QR, would generate unprecedented interest in developing high-quality epitaxial structures for various technological applications. In the following, sec. II discusses the theoretical formalism using the variational technique in the effective mass approximation to solve for bound exciton states in CdTe/Cd\({}_{1-x}\)Mn\({}_{x}\)Te QR at liquid helium temperature. The mean-field theory with the modified Brillouin function to account for sp-d exchange interaction is explained in detail, and the occurrence of the GZS in nanostructures is also delineated. The results of variation of interband transition energy, the binding energy of \(\sigma^{\pm}\) magneto-exciton, and various MO properties, including GZS and GFR in QR doped with various concentrations of Mn\({}^{2+}\) ions (x = 0.5%, 1%, 5%, 10%, and 20% of Mn\({}^{2+}\) ions) are presented and discussed in sec. III.1-III.5. Section IV elucidates the significance of the experimental validation in QR based on DMS by comparing the present results with those already reported for the bulk and QWs. ## II Theoretical model The schematic diagram of a single quantum ring (SQR) is displayed in (Fig. 1(a)). The Schrodinger equation and corresponding Hamiltonian for the ground state bound electron-hole pair subjected to a magnetic flux in DMS SQR is written in a dimensionless form, considering the effective Rydberg (R\({}^{*}\)) as a unit of energy and effective Bohr radius (a\({}_{\text{B}}^{*}\)) as a unit of length, and is given by, \[\hat{H}_{ex}\Psi_{ex}=E_{ex}\Psi_{ex} \tag{1a}\] \[\hat{H}_{ex}=-\frac{1}{\rho_{e}^{2}}\frac{\partial^{2}}{\partial \varphi^{2}}-\frac{1}{\rho_{h}^{2}}\frac{\partial^{2}}{\partial\varphi^{2}}- \frac{\mu(T)}{m_{e}^{*}(T)}\left(\nabla\rho_{e}^{2}+\nabla z_{e}^{2}\right)\] (1b) \[-\frac{\mu(T)}{m_{h}^{*}(T)}\left(\nabla\rho_{h}^{2}+\nabla z_{h }^{2}\right)+V_{B}(\rho_{e},z_{e})+V_{B}(\rho_{h},z_{h})\] \[-\frac{e^{2}}{\epsilon(T)|\vec{r_{e}}-\vec{r_{h}}|}+i\,\gamma \frac{m_{h}^{*}-m_{e}^{*}}{m_{h}^{*}+m_{e}^{*}}\frac{\partial}{\partial\varphi }+\frac{\gamma^{2}\rho^{2}}{4}\] where, e and h represent the electron and hole, respectively. The strength of the magnetic field is parametrized by \(\gamma=\frac{\hbar\omega_{\text{c}}}{2\text{k}^{2}}\), \(\omega_{\text{c}}\) is the cyclotron frequency. Since the electron and hole move freely along the annular part of the ring, their motions no longer depend on \(\phi_{\text{e}}\) and \(\phi_{\text{h}}\) separately, but on the relative angular displacement \(\phi=\phi_{\text{e}}-\phi_{\text{h}}\) and it should be treated with the reduced effective mass '\(\mu\)' of the exciton. 
Moreover, the material parameters, effective mass, and spatial dielectric constant are considered as temperature-dependent.The sp-d exchange interaction between the electron (hole) and the localized Mn\({}^{2+}\) magnetic dopants is denoted by \(\hat{\text{H}}_{\text{sp}-\text{d}}\), and is written as [1; 4; 53], \[\hat{H}_{sp-d}=-\sum_{i}J(\mathbf{r_{e}}-\mathbf{R_{i}})\mathbf{\hat{S}_{i}}\cdot\mathbf{ \hat{s}_{e}}-\sum_{i}J(\mathbf{r_{h}}-\mathbf{R_{i}})\mathbf{\hat{S}_{i}}\cdot\mathbf{\hat{s}_ {h}} \tag{2}\] 'J' is the coupling constant for the exchange interaction between the electron (hole) of spin \(\hat{\text{s}}_{\text{e}}\) (\(\hat{\text{s}}_{\text{h}}\)) located at \(\mathbf{r_{e}}\) (\(\mathbf{r_{h}}\)) and the spin \(\hat{\text{S}}_{\text{i}}\) of the Mn\({}^{2+}\) ions located at sites \(\mathbf{R_{i}}\). \(\text{V}_{\text{B}}(\rho_{\text{e,h}},\text{z}_{\text{e,h}})\) in Eq. (1b) is the confining potential of the SQR and is modeled by an abrupt square potential: \[V_{B}(\rho_{e,h},z_{e,h})=\begin{cases}0&R_{1}<\rho_{e,h}\leq R_{2},\\ &-d/2<z_{e,h}\leq+d/2\\ V_{0e,h}&\text{otherwise}\end{cases} \tag{3}\] \(\text{V}_{0\text{e}}=70\%\Delta\text{E}_{\text{g}}^{\text{B}}\), and \(\text{V}_{0\text{h}}=30\%\Delta\text{E}_{\text{g}}^{\text{B}}\) represent the potential band offset formed in the conduction and valence bands, respectively. Tuning of the potential barrier height, \(\text{V}_{0\text{e}}\) and \(\text{V}_{0\text{h}}\) with the applied field, \(\text{B}_{\text{z}}\), is possible due to the Zeeman splitting of the band edges (Fig. 1(b)) and is written by a formula suggested by K. Navaneethakrishnan et al [54] that satisfactorily fits the experimental Zeeman splitting values available for the Mn\({}^{2+}\) compositions x = 0.07, 0.24, and 0.3 with a maximum error of 5%. Hence, the same formula is adopted here, and the fitting equation is given by [55; 53; 54; 55], \[\Delta E_{g}^{B}=\Delta E_{g}^{0}\,\frac{\eta_{e,h}\,e^{\zeta_{e,h}\,\gamma}-1}{ \eta_{e,h}-1} \tag{4}\] \(\Delta\)E\({}_{\text{g}}^{\text{B}}\) and \(\Delta\)E\({}_{\text{g}}^{0}\) denotes the band gap difference between the well CdTe layer and the barrier Cd\({}_{\text{1-x}}\)Mn\({}_{\text{x}}\)Te layer in the presence and absence of applied magnetic field, respectively. \(\eta_{\text{e,h}}=\text{e}^{\zeta_{\text{e,h}}\,\gamma_{0}}\) is chosen with a fitting parameter \(\zeta_{\text{e}}(\zeta_{\text{h}})=0.5(-0.5)\), and \(\gamma_{0}\) is a critical magnetic field at which the barrier completely vanishes. The critical magnetic field \(\gamma_{0}\) in Tesla for different magnetic dopant compositions is given for conduction (valence band) as \(\gamma_{0}=A\,e^{nx}\) with A = 0.734 and n = 19.082 (A = - 0.57 and n = 16.706).The most appropriate trial wavefunction of a ground state exciton is written in a non-separable form due to correlated electron-hole pair as, \[\Psi_{ex}(r_{e},r_{h})=N_{1s}\,\phi_{e}(\rho_{e})\,\phi_{h}(\rho_{\text{h}})\, f_{e}(z_{e})\,f_{h}(z_{h})e^{-\lambda\,\mathbf{r_{e}}h} \tag{5}\] where, \(\phi_{\text{e,h}}(\rho_{\text{e,h}})\) and \(\text{f}_{\text{e,h}}(\text{z}_{\text{e,h}})\) are the envelope functions along the radial and axial directions, respectively. \(\text{e}^{-\lambda\,\mathbf{r_{e}}h}\) describes the correlation between the electron and hole which depends mainly on the distance between the two. 
\(\text{r}_{\text{eh}}=\sqrt{|(\rho_{\text{e}}-\rho_{\text{h}})|^{2}+|(\text{z} _{\text{e}}-\text{z}_{\text{h}})|^{2}}\), whereas, \(|(\rho_{\text{e}}-\rho_{\text{h}})|^{2}\) denotes the projection of the distance between the electron and hole on the plane of the QR and is given by, \(|(\rho_{\text{e}}-\rho_{\text{h}})|^{2}=(\rho_{e}^{2}+\rho_{h}^{2}-2\rho_{e} \rho_{h}\cos(\varphi))^{1/2}\). \[\phi(\rho_{e,h},\varphi_{e,h})=\begin{cases}\phi_{I}(\rho_{e,h}),&\rho_{e,h} \leq R_{1}\\ \phi_{II}(\rho_{e,h}),&R_{1}<\rho_{e,h}\leq R_{2}\\ \phi_{III}(\rho_{e,h}),&\rho_{e,h}>R_{2}\end{cases} \tag{6a}\] \[\phi_{I}(\rho_{e,h})=C_{1,e,h}\,I_{0}\,(\beta_{e,h},\rho_{e,h})\] \[\phi_{II}(\rho_{e,h})=C_{2,e,h}\,J_{0}\,(\alpha_{e,h},\rho_{e,h})+C _{3,e,h}\,Y_{0}\,(\alpha_{e,h},\rho_{e,h})\] \[\phi_{III}(\rho_{e,h})=C_{4,e,h}\,K_{0}\,(\beta_{e,h},\rho_{e,h})\] (6b) \[f(z_{e,h})=\begin{cases}B_{e,h}\,\exp[k_{e,h}\,z_{e,h}],&-\infty<z_{e,h}\leq-d /2\\ \cos(\kappa_{e,h}\,z_{e,h}),&-d/2<z_{e,h}\leq+d/2\\ B_{e,h}\,\exp[-k_{e,h}\,z_{e,h}],&+d/2<z_{e,h}\leq+\infty\end{cases} \tag{6c}\] where, \(\beta_{\text{e,h}}=\frac{\text{m}_{\text{h}}^{\text{m}}(\text{V}_{\text{0,h}} -\text{E}_{\rho_{\text{e,h}}})}{\hbar^{2}}\) ; \(\alpha_{\text{e,h}}=\frac{\text{m}_{\text{e}}^{\text{m}}\text{E}_{\rho_{\text{ e,h}}}}{\hbar^{2}}\) \[\text{k}_{\text{e,h}}=\frac{\text{m}_{\text{h}}^{\text{m}}(\text{V}_{\text{0,h}} -\text{E}_{\rho_{\text{e,h}}})}{\hbar^{2}}\text{ ; }\kappa_{\text{e,h}}=\frac{\text{m}_{\text{e}}^{\text{m}}\text{E}_{\rho_{\text{ e,h}}}}{\hbar^{2}}\] Here, N\({}_{\text{1s}}\) is the normalization constant. C\({}_{\text{1,e,h}}\), C\({}_{\text{2,e,h}}\), C\({}_{\text{3,e,h}}\), and C\({}_{\text{4,e,h}}\) are obtained by choosing proper boundary conditions. E\({}_{\text{e,h}}\), E\({}_{\text{2e,h}}\) are the subband energy levels formed due to the radial and axial confinement of the QR. Invoking the variational technique, the binding energy (E\({}_{\text{B}_{\text{ox}}}\)) and the interband transition energy (E\({}_{\text{T}_{\text{ex}}}\)) of the exciton is computed using the form, \[\begin{split} E_{B_{ex}}&=E_{\rho_{e}}+E_{\rho_{h}}+ E_{z_{e}}+E_{z_{h}}+\gamma-\langle\mathbf{H_{ex}}\rangle_{min}\\ E_{T_{ex}}&=E_{g}(T)+\langle\mathbf{H_{ex}}\rangle_{min} \end{split} \tag{7}\] In the Faraday geometry, the magnetic moments of the ensemble of Mn\({}^{2+}\) ions with spin angular momentum Figure 1: Schematics: (a) Profile of the CdTe/Cd\({}_{\text{1-x}}\)Mn\({}_{\text{x}}\)Te SQR. (b) Giant Zeeman splitting of excitonic energy levels in CdTe/Cd\({}_{\text{1-x}}\)Mn\({}_{\text{x}}\)Te and corresponding optical transitions \((\sigma^{+},\sigma^{-},\pi)\). (c) The concept of Faraday rotation in DMS SQR. \({\rm S_{Mn}=5/2}\) are subjected to the sp-d exchange interaction with the conduction band electrons of spin s = 1/2 and the heavy hole valence band with angular momentum J = 3/2. This causes the heavy hole exciton splits into two components with angular momentum +1 and -1, which is composed of \({\rm s_{z}=-1/2}\), \({\rm J_{z}=+3/2}\) and \({\rm s_{z}=1/2}\), \({\rm J_{z}=-3/2}\), respectively. The GZS between the excitonic energy levels exhibited in the nanostructures is as similar as in bulk DMS, but with a difference in the potential barrier height experienced by the two different spin states. The schematic diagrams which explain the Zeeman splitting of the energy levels in DMS nanostructures and its resultant GFR are depicted in Fig. 1(b) and Fig. 1(c). 
The applied magnetic field increases and decreases the potential barrier for the spin-up and spin-down states, respectively, and thereby the confinement of the electron and heavy hole with \({\rm s_{z}=+1/2}\), \({\rm J_{z}=+3/2}\), and \({\rm s_{z}=-1/2}\), \({\rm J_{z}=-3/2}\) becomes correspondingly stronger and weaker inside the QR. Therefore, by magnetically tuning the potential barrier, the energy levels of the exciton inside the QR can also be tuned, which manifests itself in two different excitonic transitions, namely \(\sigma^{+}\) and \(\sigma^{-}\). \(\sigma^{+}\) corresponds to the transition between the \({\rm J_{z}=-3/2}\) (heavy hole) and \({\rm s_{z}=-1/2}\) (electron) states, and the \(\sigma^{-}\) transition involves the \({\rm J_{z}=+3/2}\) and \({\rm s_{z}=+1/2}\) states. The splitting of the energy levels corresponding to the two transitions is expressed by [1], \[E_{\pm}=\pm\frac{1}{2}x_{eff}N_{0}\left(\beta_{exc}-\alpha_{exc}\right)\left\langle S_{z}^{Mn}(B)\right\rangle \tag{8}\] The Zeeman splitting energy between the two excitonic transitions, and its relation to the magnetization, M, and to the effective g-factor, \({\rm g_{eff}}\), is given by [1, 43], \[\Delta E_{z}^{sp-d}=E_{+}-E_{-}=\frac{\beta_{exc}-\alpha_{exc}}{g_{Mn}\,\mu_{B}}\,M=g_{eff}\,\mu_{B}\,B \tag{9}\]

## III Results and Discussion

### Magnetic-field induced excitonic interband transition energy

The magnetic field dependence of the PL transition energy (\({\rm E_{Tex}}\)) for both \(\sigma^{+}\) and \(\sigma^{-}\) polarization is computed for ring dimensions R = 80 Å, d = 20 Å, with R approximately equal to the effective Bohr radius of the exciton, and the results are displayed for low (\({\rm x\leq 0.01}\)) and high concentrations (\({\rm x>0.01}\)) of Mn\({}^{2+}\) ions in Fig. 2. The transition energy increases with the increasing concentration of Mn\({}^{2+}\) ions because the bandgap (\({\rm E_{g}}\)) is directly proportional to the latter. The calculation using the above theoretical model shows that at B = 0 the PL is unpolarized, i.e., the energies of the \(\sigma^{\pm}\) magneto-exciton are degenerate. However, the applied magnetic field breaks the degeneracy and causes the PL to split into left (\(\sigma^{-}\)) and right (\(\sigma^{+}\)) circularly polarized components. This is indicated by a monotonic shift of \({\rm E_{Tex}}\) towards lower and higher energies about the zero-field energy in Fig. 2, and the PL is resolved into two branches of the exciton doublet corresponding to \(\sigma^{+}\) and \(\sigma^{-}\) polarization, respectively. This is because the applied magnetic field influences the potential barrier height of the two different spin components in a unique way owing to the sp-d exchange interaction, as discussed in sec. II.

Figure 2: Interband transition energy as a function of magnetic field for \(\sigma^{+}\) and \(\sigma^{-}\) excitons for various dopant concentrations: (a) \({\rm x\leq 0.01}\) and (b) \({\rm x>0.01}\).

The variation of \({\rm E_{Tex}}\) with B for the QR doped with a low Mn\({}^{2+}\) concentration (x \(\leq\) 0.01, Fig. 2(a)) is different from that for the QR doped with a high concentration (Fig. 2(b)). Instead of showing a rapid fall with the magnetic field as seen for the higher concentrations, the \(\sigma^{+}\) transition energy mimics the \(\sigma^{-}\) transition, indicating a change of the PL emission from right circular to left circular polarization.
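As a rough numerical illustration of Eq. (8), the sketch below evaluates the giant Zeeman splitting with a modified Brillouin function for the thermal average \(\langle S_z\rangle\), which is the mean-field treatment referred to in Sec. II. The exchange-constant magnitudes \(|N_0\alpha_{exc}| = 220\) meV and \(|N_0\beta_{exc}| = 880\) meV are quoted later in the text, while the effective parameters \(x_{eff}\), \(S_{eff}\), and \(T_{eff}\) used here are illustrative assumptions, and sign conventions are glossed over.

```python
import numpy as np

# Giant Zeeman splitting of Eq. (8): Delta E ~ x_eff * N0*(beta - alpha) * <S_z>,
# with <S_z> modeled by a modified Brillouin function (effective spin and temperature).
MU_B = 5.788e-2      # Bohr magneton in meV/T
K_B = 8.617e-2       # Boltzmann constant in meV/K
G_MN = 2.0           # Mn2+ g-factor
S_MN = 2.5           # Mn2+ spin 5/2

def brillouin(S, y):
    """Standard Brillouin function B_S(y) for y > 0."""
    a = (2 * S + 1) / (2 * S)
    return a / np.tanh(a * y) - (1 / (2 * S)) / np.tanh(y / (2 * S))

def zeeman_splitting_meV(B, x_eff, S_eff, T_eff, N0_alpha=220.0, N0_beta=880.0):
    """|E_+ - E_-| in meV; magnitudes only, sign conventions omitted."""
    y = G_MN * MU_B * S_MN * B / (K_B * T_eff)
    Sz = S_eff * brillouin(S_MN, y)            # thermal average <S_z^Mn(B)>
    return x_eff * (N0_beta - N0_alpha) * Sz

# Example sweep at T = 4.2 K for x = 0.05 with assumed effective parameters.
for B in np.linspace(0.1, 1.0, 4):
    print(round(float(B), 2), round(zeeman_splitting_meV(B, x_eff=0.035, S_eff=1.6, T_eff=6.2), 2))
```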
A vivid picture of this unusual behaviour for low 'x' has been well explained in a DMS QD by Kai Chang et al [29], which is ascribed to the tuning of the effective g-factor to zero with the increasing field when the order of Zeeman splitting due to sp-d exchange interaction is comparable to the order of intrinsic Zeeman splitting. The sign of the former is opposite to the latter. The presence of crossing between \(\sigma^{+}\) and \(\sigma^{-}\) transition energy in Ref [29] is missing here for x = 0.01 because the data has been plotted for the extended range of magnetic fields, including the type-II region in Ref [29], whereas it is limited to the type-I region in the present work. Typically, the order of intrinsic Zeeman splitting is much smaller than the energy level splitting induced by the sp-d exchange interaction; hence, the former is neglected in the present calculation. The inset in Fig. 2(b) shows the high field data for x = 0.2. ### Zeeman shift and Zeeman splitting of the exciton energy levels Figures 3(a) and 3(b) plot the magnetic field dependence of the exciton transition energy as Zeeman shifts (\(\rm E_{ex}(B)-E_{ex}(B=0)\)) relative to the zero-field exciton energy for both the transitions, which is also described by Eq. (8). It is noted from figure that the shift increases with increasing magnetic field for both the transition, but it shows a positive and negative increment for \(\sigma^{-}\) and \(\sigma^{+}\) which corresponds to the blue and redshift in the interband transition energy (Fig. 2), respectively. Interestingly, one could observe the symmetric Zeeman splitting about the zero-field energy for the QR doped with 5% and 10% of Mn\({}^{2+}\) ions, but for all other dopant concentrations (0.5%, 1%, and 20%), the splitting seems to be asymmetric. On the quantitative footing, the Zeeman splitting energy, \(\rm\Delta E_{x}^{sp-d}\), plotted in Fig. 3(c) and 3(d) is described as the energy difference between the two excitonic transitions under B and is determined from the data plotted in Fig. 3(a) and 3(b) as given in Eq. (9). The magnetic field suppresses the Mn\({}^{2+}\) spin fluctuations by aligning the randomly oriented Mn\({}^{2+}\) spins along the field direction, indicating a state of magnetic ordering, thereby increasing \(\rm\langle S_{z}\rangle\) and causing the GZS. It is interesting to Figure 3: Zeeman shift related to zero field magneto-exciton energy, and Zeeman splitting energy (\(\rm\Delta E_{x}^{sp-d}\)) _vs_ magnetic field for various dopant concentrations. (a), (c) x \(\leq\) 0.01, (b), (d) x \(>\) 0.01. (e) Concentration dependent effective g-factor (\(\rm g_{eff}\)) for a fixed strength of magnetic field, B = 0.2Tesla at T = 4.2K. (f) Magnetization (M) calculated using modified Brillouin function for x \(\geq\) 0.05. note from Fig. 3(c) and 3(d) that \(\Delta\mathrm{E_{z}^{sp-d}}\) increases with the dopant concentration up to x = 0.05, and thereafter it starts decreasing. This is because the Zeeman splitting is proportional to the effective dopant concentration 'x\({}_{\mathrm{eff}}\)' as given in Eq. (8), and the latter increases with 'x' and shows a maximum at a particular concentration. Henceforth, it starts to move downhill because of the antiferromagnetic interactions between the nearest neighbouring magnetic ions, which cancels the spins of the corresponding pairs and reduces the effective contribution to the thermal average of the spin polarization of Mn\({}^{2+}\) ions, \(\langle\mathrm{S_{z}}\rangle\). 
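Eq. (9) also gives a quick order-of-magnitude check on the splitting behind the effective g-factor discussed next; the value \(|g_{eff}| = 928\) used below is the one quoted in the concluding section for x = 0.05, and B = 0.2 Tesla is the field used for Fig. 3(e).

```python
# Consistency check of Eq. (9): Delta E_z^{sp-d} = g_eff * mu_B * B.
MU_B_MEV_PER_T = 5.788e-2   # Bohr magneton in meV/T
g_eff = 928.0               # |g_eff| quoted for x = 0.05
B = 0.2                     # Tesla
delta_E = g_eff * MU_B_MEV_PER_T * B
print(f"{delta_E:.1f} meV")  # ~10.7 meV of giant Zeeman splitting at only 0.2 T
```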
The strength of the Zeeman splitting can be directly evidenced from the absolute value of effective g-factor which has been calculated for various 'x', Figure 4: (a) Binding energy of \(\sigma^{\pm}\) magneto-exciton _vs_ magnetic field. (b) Schematic explaining the overlap integral between the electron and hole for various strengths of magnetic field: (i) \(\mathrm{B=0,(ii)\,0<B<B_{c}}\), and (iii) \(\mathrm{B>B_{c}}\). In-plane electron–hole distance corresponding to (c) \(\sigma^{+}\), and (d) \(\sigma^{-}\) exciton _vs_ magnetic field for various dopant concentrations. Figure 5: _Upper panel_: 3D-plot for the probability distribution of electrons and holes along axial, and radial direction. _Lower panel_: Density plot of the probability distribution of single-particle states along both radial and axial direction. The data has been plotted for (a) B = 0, and (b) B = 1Tesla. and is plotted in Fig. 3(e). Figure 3(f) shows the magnetization (M) _vs_ magnetic field curves for \(\mathrm{x\geq 0.05}\). Magnetization increases with the magnetic field since it enhances \(\langle\mathrm{S_{z}}\rangle\), showing a linear dependence on magnetic field, which is an expected paramagnetic behaviour in any CdMnTe based quantum systems. As already discussed, when QR is populated with more magnetic ions, the spin-spin interaction becomes more robust, which results in a quenching of magnetization for high 'x' because of the lower value of \(\langle\mathrm{S_{z}}\rangle\). ### Binding energy of \(\sigma^{\pm}\) magneto-exciton The variation of binding energy is plotted in Fig. 4(a) for various concentrations of Mn\({}^{2+}\) ions. The trend of the binding energy for both \(\sigma^{-}\) and \(\sigma^{+}\) polarization concerning the magnetic field is as same as the trend followed by the interband transition energy, and this behaviour persists for different concentrations also. Nevertheless, for \(\sigma^{+}\) polarization, there is a rapid decrease of binding energy with the magnetic field as compared to the steady increase for \(\sigma^{-}\) polarization. This can be better understood from the schematic in Fig. 4(b), which explains how the applied magnetic field modifies the electron-hole overlap inside a SQR. At B = 0, the location of both the electron and hole is in the same CdTe layer (Fig. 4(b)(i)). Zeeman splitting of the energy levels in the valence band is highly sensitive to the applied field, which is not the case with the conduction band. This is because the band offset formed in the conduction band is generally larger than the valence band offset since 80% of the bandgap difference falls in the former. Moreover, the absolute value of the exchange constant, which represents the strength of the exchange interaction, is larger for the heavy hole (\(|\beta_{\mathrm{exc}}\mathrm{N_{0}=880meV}|\)) than for the electron (\(|\alpha_{\mathrm{exc}}\mathrm{N_{0}=220meV}|\)). Therefore the electron with \(\mathrm{s_{z}=-1/2}\) in the conduction band would forever be confined in the non-magnetic CdTe layer itself irrespective of the strength of the applied field as its potential band offset is sufficiently larger than the order of magnetic splitting (Fig. 4(b)(ii)). However, the potential barrier for the heavy hole with \(\mathrm{J_{z}=-3/2}\) is tremendously reduced with the magnetic field, and it encounters a flat band situation at critical field value, beyond which the system undergoes a type-I - type-II transition (Fig. 4(b)(iii)). 
As a result, the electron remains in the CdTe layer, but the hole moves towards the heterostructure interface and finally to the CdMnTe layer. Hence, the exciton will no longer be spatially direct; rather, it becomes spatially indirect, which reduces the overlap between the electron and hole, whereby spin-down exciton states have reduced binding energy. To justify this discussion, the in-plane exciton radius, \(\mathrm{R_{eh}}\), the average distance between the electron and hole in the plane of the QR, has been calculated and is plotted in Fig. 4(c) and 4(d). As anticipated, the monotonic increase and decrease of \(\mathrm{R_{eh}}\) could be seen for \(\sigma^{+}\) and \(\sigma^{-}\) polarization, respectively, for all x. Moreover, the 3D plot of the probability distribution of spin-down electrons and holes (\(|\Psi|^{2}\)) along \(\rho\) and z-directions of the QR, and the density plot of the single-particle distribution depicted in Fig. 5 helps to understand the effect of magnetic field on the carrier confinement inside the QR. Obviously, \(|\Psi|^{2}\) is larger for zero magnetic field as one can compare the order of magnitude between B = 0 and B = 1Tesla. ### Oscillator strength, radiative linewidth and radiative lifetime of magneto-exciton To gain further insight into the \(\sigma^{+}\) and \(\sigma^{-}\) transition and related radiative properties, the investigation of oscillator strength (OS), radiative decay rate (RDR), and radiative lifetime (RLT) have been performed, and the results are delineated. The expression for the exciton oscillator strength follows [56; 57; 48], \[f_{\pm}=\frac{E_{P}}{2\,E_{T\pm}}I|\Omega(0)|^{2} \tag{10}\] \[I=\Bigg{|}\int_{-\infty}^{+\infty}N_{1s}\,\phi_{e}\left(\rho_{e} \right)\phi_{h}(\rho_{e})\,f_{e}(z_{e})\,f_{h}(z_{e})\,d\rho_{e}\,dz_{e}\Bigg{|} ^{2}\] where, the Kane energy, \(\mathrm{E_{P}=2.1eV}\) for CdTe, and, \(\mathrm{E_{T\pm}}\) represents the interband transition energy corresponding to \(\sigma^{+}\) and \(\sigma^{-}\) transitions, respectively. The OS mainly depends on the overlap integral 'I' between the electron and hole envelope wavefunctions, and \(\Omega(0)\) denotes the probability of finding the electron and hole at the same position. The oscillator strength per unit area is proportional to the effective Bohr radius as, \(\mathrm{F_{\pm}=\frac{1}{\pi_{0}^{2}}f_{\pm}}\). Exciton radiative lifetime, '\(\tau\)' (radiative decay rate, '\(\Gamma=1/\tau\)') can be related to OS according to [58; 59], \[\tau=\frac{2\pi\epsilon_{0}m_{0}c^{3}\hbar^{2}}{ne^{2}E_{T\pm}f_{\pm}} \tag{11}\] Here, the fundamental physical constants have their usual meaning and 'n' represents the refractive index of the material CdTe. The evolution of the oscillator strength as a function of magnetic field solely depends on the spatial overlap between the electron and hole wave functions, which has been depicted in Fig. 6(a) and 6(b). The applied magnetic field increases the overlap between the electron and hole ground states for \(\sigma^{-}\) polarization, indicating larger OS due to the increase of potential barrier height. As expected for the \(\sigma^{+}\) polarization, the OS sensitively depends on B, which diminishes the excitonic effect by spatially separating the electron and hole as explained in sec. III.3 and thereby weakens the corresponding optical transition. The overlap integral increases as the dopant concentration increases due to the increased potential barrier height. 
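For a feel of the magnitudes behind Eq. (11), the sketch below evaluates the exciton radiative lifetime from the oscillator strength. It uses the standard form with the transition energy squared, \(\tau = 2\pi\epsilon_{0}m_{0}c^{3}\hbar^{2}/(n e^{2}E_{T\pm}^{2}f_{\pm})\); the first-power energy in the displayed equation appears to be an extraction artifact, since only the squared form reproduces the sub-nanosecond to nanosecond lifetimes quoted in the text. The refractive index and the example oscillator strengths are assumed values.

```python
import numpy as np

# Exciton radiative lifetime from the oscillator strength (SI units).
EPS0 = 8.854e-12      # F/m
M0 = 9.109e-31        # electron mass, kg
C = 2.998e8           # speed of light, m/s
HBAR = 1.055e-34      # J*s
E_CHARGE = 1.602e-19  # C

def radiative_lifetime(E_transition_eV, f_osc, n_refr=2.85):
    """Radiative lifetime in seconds; n_refr ~ 2.85 for CdTe is an assumed value."""
    E = E_transition_eV * E_CHARGE
    return 2 * np.pi * EPS0 * M0 * C**3 * HBAR**2 / (n_refr * E_CHARGE**2 * E**2 * f_osc)

# A transition near 1.6 eV with an oscillator strength of a few units gives
# sub-nanosecond to nanosecond lifetimes, consistent with the values quoted in the text.
for f in (1.0, 3.0, 8.0):
    print(f, round(radiative_lifetime(1.6, f) * 1e9, 2), "ns")
```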
Figure 6(c) and 6(d) shows the radiative lifetime and radiative decay rate as a function of magnetic field for various dopant concentrations. The RLT of exciton increases with increasing B for \(\sigma^{+}\) polarization, which is accompanied by a decrease in RDR. The exciton lifetime is found to decrease from 5.04ns to 0.38ns when the concentration of Mn\({}^{2+}\) ion increases from x = 0.005 to x = 0.2 at B = 0, where radiative recombination dominates (Fig. 6(c)). The RDR, which characterizes the decay of photon emitted by the exciton, shows its maximum only for B = 0, which elucidates the probability of finding an electron and hole at the same position (\(\mathrm{r_{e}=r_{h}}\)) is more prominent in the absence of magnetic field. ### GFR in semimagnetic SQR The Faraday rotation (Fig. 1(c)), results from a difference in refractive indices of the left and right circularly polarized light after traveling through a magnetized medium with a length '\(\iota\)'. The phase difference in velocity between the two circularly polarized components is expressed through the FR angle as [43], \[\Theta_{F}=\frac{\Delta\phi}{2} =\frac{El}{2\hbar c}\left(n_{-}-n_{+}\right) \tag{12}\] Here, \(\mathrm{n_{-}}\) and \(\mathrm{n_{+}}\) denote the refractive indices of the left and right circular polarized light, and E is the incident photon's energy. As aforementioned, the FR in DMS alloys is a giant one due to the large Zeeman splitting of the energy levels as a result of sp-d exchange interaction, which has been computed using the single oscillator model as preferred in the work of Bartholomew et al., After performing a series of calculations, \(\Theta_{\mathrm{F}}\) achieves the final form as [43], \[\Theta_{F}=\frac{\sqrt{F_{0}}l}{2\hbar c}\left(\frac{\beta_{exc}-\alpha_{exc} }{g_{Mn}\,\mu_{B}}\right)\,M\,\frac{1}{E_{0}}\frac{y^{2}}{(1-y^{2})^{3/2}}\,; \,y=\frac{E}{E_{0}} \tag{13}\] Here, \(\mathrm{F_{0}}\) is a constant that involves the oscillator strength, and \(\mathrm{E_{0}}\) is the ground state interband transition energy at the fundamental energy gap at zero magnetic field. The angle is directly proportional to the GZS through the term \(\Delta\mathrm{E=\frac{\beta_{exc}-\alpha_{exc}}{\delta_{Mn}\mu_{B}}M}\). The Verdet constant is written as the Faraday rotation per unit magnetic field per unit length, which is defined as [43], \[V_{d}(E)=\frac{\Theta_{F}}{Bl} =\frac{\sqrt{F_{0}}}{2\hbar c}\,\left(\frac{\beta_{exc}-\alpha_{ exc}}{g_{Mn}\,\mu_{B}}\right)\,\frac{\partial M}{\partial H}\,\frac{1}{E_{0}}\, \frac{y^{2}}{(1-y^{2})^{3/2}} \tag{14}\] Figure 6: Variation of (a) overlap integral, (b) oscillator strength, (c) radiative lifetime, and (d) radiative decay rate with the magnetic field for \(\sigma^{+}\) and \(\sigma^{-}\) transitions. Figure 7(a) depicts \(\Theta_{\rm F}\) for the DMS QR doped with dilute, arbitrary, and high 'x' at a fixed photon energy of 1.5eV. It is noted from figure that the rotation angle increases with the increasing magnetic field since the applied field enhances the Zeeman splitting. The variation of the Verdet constant with the incident photon energies for a fixed magnetic field of B = 0.2Tesla is plotted in Fig. 7(b). 
The Verdet constant shows a sharp increase whenever the band gap resonance occurs (when the energy of the incident photon approaches the absorption edge of the material), and the photon energy, at which the Verdet constant shows a rapid enhancement, shifted to higher energies for the heavily doped QR because the absorption edge increases as the concentration of Mn\({}^{2+}\) ions increases. Though the single oscillator model yields gratifying results, in which the behaviour of E\({}_{0}\) at \(\Gamma\) point has been crudely modelled as constant at all temperatures, the success of using this in QR could not be verified due to a lack of reliable experimental data. ## IV Concluding remarks Probing the exciton energy states in an applied magnetic field has been studied in semimagnetic QR, and the theoretical investigation of tuning related MO properties has been attempted. It is found that the doubly-connected topological structures like QR provide robust confinement for the carriers compared to single-connected topological QDs [55]. The difference in the behaviour of magneto-exciton energies between the QR doped with low and high Mn\({}^{2+}\) ion concentrations has been explained in detail. The results show pronounced excitonic Zeeman splitting for low 'x' than high 'x', where the possibilities for the manganese ions to form antiferromagnetic pairs in the latter case are maximized. Among all the concentrations discussed here, x = 0.05 exhibits larger Zeeman splitting with the absolute value of effective g-factor, \(\mathrm{g_{eff}=928}\) (Fig. 3(e)), which gives rise to ultra-high Verdet constant of -15 degree/Tesla/A (\(2.6\times 10^{9}\)rad/Tesla/m), and the latter is \(10^{4}-10^{6}\) orders of magnitude larger than in bulk Cd\({}_{1-\rm x}\)MnxTe [10, 43, 46], thin films [60, 61, 62], and is \(10^{2}\) orders larger than in QWs [48, 50], superlattices [49, 63] as reported in the previous studies. This elucidates the importance of DMS-based QR in MO devices operating at a wavelength shorter than 1\(\mu\)m than already existing MO materials, such as Yttrium Iron Garnet (YIG) and Terbium Aluminum Garnet (TAG), organic molecules, conjugated polymers [25, 64, 43]. Moreover, the low-temperature exciton lifetime is 715ps, whereas it is \(\approx\) 100ps in QWs doped with 25% Mn\({}^{2+}\) ion concentration [65]. The study of exciton lifetime in semimagnetic quantum systems is impressive since it affects the optical properties and the magnetization dynamics of the concerned systems to a greater extent. The exciton lifetime in DMS determines the formation of bound magnetic polaron (BMP) [66, 67] or exciton magnetic polaron (EMP) [68], which causes spontaneous ferromagnetic ordering even in the absence of an external magnetic field due to the strong sp-d exchange interaction. Since the recombination limits the exciton lifetime, it interrupts the EMP formation before the polaron reaches its stable state. If the exciton does not decay during the process of EMP formation, then the EMP would reach its equilibrium state, which is accompanied by a decrease of exciton energy and provides an additional localization for the carriers. The reliability of the results obtained using single oscillator model could not be verified due to the missing experimental data, but it is believed to be improved using the multi oscillator model as adopted in [44]. 
Since the low path length and the modest magnetic field yields a ultra-high Verdet constant, theoretical demonstration of generating larger FR Figure 7: (a) Faraday rotation angle (\(\Theta_{\rm F}\)) as a function of magnetic field for a fixed photon energy of E\({}_{\rm ph}=1.5\)eV, and (b) Verdet constant as a function of photon energy for various dopant concentrations for a fixed strength of magnetic field, B = 0.2Tesla. and higher Verdet constant in DMS QRs would incite interest in preparing high-quality QR heterostructures based on DMS. With the unrivaled ability to modulate the magnetic excitonic transitions and thereby the optical activity of the materials at the nanoscale for a broader energy spectrum with various mole fractions of Mn\({}^{2+}\) ions in external magnetic fields and effective magnetic switching of the spins make DMS-based QR a judicious choice among promising candidates for applications in future spin-photonic and spin-electronic devices.
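Two small numerical notes on the Faraday-rotation figures quoted above, sketched in code: the unit conversion behind the reported Verdet constant (interpreting the quoted degree/Tesla/A as degrees per tesla per angstrom), and the resonance factor \(y^{2}/(1-y^{2})^{3/2}\) from Eq. (14) that drives the sharp rise near the band-gap energy \(E_{0}\). The \(E_{0}\) value used below is illustrative, and the physical prefactor is omitted.

```python
import numpy as np

# (1) Unit conversion for the quoted Verdet constant: -15 degree/Tesla/angstrom
#     is about 2.6e9 rad/Tesla/m, matching the value stated in the text.
verdet_deg_per_T_per_A = -15.0
verdet_rad_per_T_per_m = np.deg2rad(verdet_deg_per_T_per_A) / 1e-10
print(f"{abs(verdet_rad_per_T_per_m):.2e} rad/T/m")   # ~2.6e9

# (2) Spectral shape of Eq. (14): V_d scales as y**2 / (1 - y**2)**1.5 with y = E/E0,
#     so it grows sharply as the photon energy approaches the band-gap resonance E0.
def resonance_factor(E, E0):
    y = np.asarray(E, dtype=float) / E0
    return y**2 / (1.0 - y**2) ** 1.5

E0 = 1.8   # illustrative band-gap energy in eV (not a value taken from the paper)
for E in (1.0, 1.5, 1.7, 1.75):
    print(E, round(float(resonance_factor(E, E0)), 1))
```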
Magnetic tuning of the exciton transitions between σ^+ and σ^- and the accompanying giant Zeeman splitting (GZS) in a CdTe/Cd_{1-x}Mn_{x}Te quantum ring has been studied in the Faraday configuration for varying concentrations of Mn^{2+} ions. Using the variational technique within the effective-mass approximation, the sp-d exchange interaction between the localized magnetic impurity ions and the delocalized charge carriers was treated with mean-field theory. The enhancement of the GZS, and in turn of the effective g-factor, with an applied external magnetic field is strikingly manifested as a type-I to type-II transition in the band structure, which was clearly explained by computing the overlap integral between the electron and hole and the in-plane exciton radius. These results highlight the giant Faraday rotation and the associated Verdet constant
2309.04546
From Internet of Things to Internet of Data Apps
We introduce the Internet of Data Apps (IoDA), representing the next natural progression of the Internet, Big Data, AI, and the Internet of Things. Despite advancements in these fields, the full potential of universal data access - the capability to seamlessly consume and contribute data via data applications - remains stifled by organizational and technological silos. To address these constraints, we propose the designs of an IoDA layer borrowing inspirations from the standard Internet protocols. This layer facilitates the interconnection of data applications across different devices and domains. This short paper serves as an invitation to dialogue over this proposal.
Silvery Fu, Sylvia Ratnasamy
2023-09-08T18:26:05
http://arxiv.org/abs/2309.04546v1
# From Internet of Things to Internet of Data Apps ###### Abstract We introduce the Internet of Data Apps (IoDA), representing the next natural progression of the Internet, Big Data, AI, and the Internet of Things. Despite advancements in these fields, the full potential of _universal data access_ - the capability to seamlessly consume and contribute data via data applications - remains stifted by organizational and technological silos. To address these constraints, we propose the designs of an IoDA layer borrowing inspirations from the standard Internet protocols. This layer facilitates the interconnection of data applications across different devices and domains. This short paper serves as an invitation to dialogue over this proposal. ## 1 Introduction Three decades ago, Mark Weiser predicted a future where technology would become a seamless part of our lives [34]. This prediction is now reflected in the numerous data sources that are part of our everyday existence, such as mobile devices, the Internet of Things (IoT), wearable tech, connected vehicles, and smart infrastructures [3, 15, 30, 9, 10]. These sources aren't restricted to tangible hardware or software applications but can also include conceptual collections, such as the data produced by entire buildings or campuses. Concurrently, we've seen a rise in data-driven applications designed to interact with these data sources, offering solutions for navigation [16], food delivery [14], ride-hailing [20], event planning [21], and home automation [4]. Despite the ubiquity of both data sources and applications, _universal data access_ - the ability for everyone to consume and contribute data - is not yet a reality. To understand this, consider how devices connect to networks. When you visit a new city, your smartphone automatically connects to the local network. At a restaurant, you join the public Wi-Fi without any issues. This ease of connectivity is not mirrored in data access. Today, consuming diverse data types - real-time traffic updates, air quality readings, local event schedules, public safety notices, or e-bike or scooter availability - often means toggling between separate, disconnected data apps. This fragments user experience, and leads to loss of valuable insights and time, making it difficult to leverage the full potential of the available data. Similarly, these data apps/platforms are typically limited in their ability to incorporate user data contributions, preventing potentially valuable inputs from being used effectively. Consider, for example, if your smartphone could share real-time network quality data across different city areas. This data could then be easily collated, aiding more informed decisions about connectivity. Likewise, if connected vehicles could share real-time traffic and road condition data to enhance traffic management. Or, imagine if local businesses could contribute data about current wait times at restaurants or the availability of rental bikes at specific locations. Existing data apps like Google Maps [16], while effective at aggregating certain types of data, are not designed to provide a universal data access experience. Because these data aggregators face the same data access limitations, hindering their ability to evolve and provide universal access. For example, they cannot dynamically integrate information from varied sources according to users' changing needs, As interfaces to access data, these data apps' limitations ultimately impact the user's ability to gain universal data access. 
We observe that achieving universal data access is complicated by two primary obstacles. The first one is the _semantic gap_, referring to the difference between the raw data generated by sources and the processed, ready-to-use data required by applications. Transforming raw data into a format usable by applications - a process we refer to as _data curation_ - involves various steps such as filtering, transformation, and aggregation. The second obstacle is the _technical stack gap_ or (stack gap for short), which prevents seamless data access between data apps due to the lack of interoperability across different technological stacks supporting/behind the data apps. This can include difficulties in discovering, authenticating, and authorizing data access and actual exchanges of data across the stacks. In fact, modern data infrastructures ($2), while designed to address the semantic gap, often exacerbate the stack gap by increasing complexity and hindering interoperability. As a result, even when other potential obstacles such as policies and privacy regulations permit data access, the fundamental challenges posed by these two gaps can still prevent data applications from easily accessing each other's data. In this paper, we argue that for universal data access to become a reality, two things need to happen. First, we must be able to carry out _data curation on an Internet-scale_ to bridge the semantic gap. Second, this curated data should be exchangeable between data applications with the stack gap out of the way. How to achieve the two goals? For the semantic gap, our conjecture ($2) is that addressing the semantic gaps has now become feasible and scalable due to the advancements in the field of machine learning, particularly the introduction of solutions like GPT/LLM [33]. These advanced AI models are capable of understanding, interpret ing, and manipulating data in sophisticated ways that were previously unattainable [26]. For the stack gap, we argue we should draw inspirations from the design principles of the Internet [24], particularly in how it achieves interconnection across diverse networks. In much the same way, we need to focus on creating an infrastructure that can connect various data apps and their "app domains", crossing their stack gaps. With these insights, we propose the Internet of Data Apps (IoDA)1--a universal data access layer aimed at furthering Weiser's vision by enabling ubiquitous data access at Internet-scale. In this paper, we outline the design principles for IoDA and present a technical design that aligns with these principles. Our vision is to create a world where individuals and systems can seamlessly consume and contribute data, representing a natural progression of the Internet and the Internet of Things as we know and rely on them today. Footnote 1: Pronounced as U-DA. As in, _IoDA provides universal data access._ ## 2 The Vision In this section, we present the trends/vision, examples, and requirements of IoDA. ### Primer and Trends We observe the following emerging trends, each contributing to the contexts based on which our Internet of Data Apps vision will develop: * **Proliferation of mobile and IoT devices:** The number of mobile and IoT devices [4, 13, 15, 5] is continuously increasing, with users relying on these devices for diverse daily tasks. This trend is facilitating the ubiquity of data access, with both users and their devices acting as data sources. However, a gap exists between the data generated and its accessibility and usability. 
Today, while we can view varied data types on our phones and our devices can produce a wealth of data, the ability to aggregate, process, and use this information effectively is limited. * **Big data processing frameworks:** The evolution of big data processing frameworks, such as Databricks [6] and Snowflake [19], alongside the emergence of data integration tools like Fivetran [1] and Airbyte [12], has transformed how we handle and manipulate large data sets. These tools enable efficient data management and are complementary and instrumental in supporting universal data access. * **Maturity of AI and language models:** Advances in AI, particularly language models, have revolutionized data automation and processing. Models like GPT have shown remarkable competence in natural language tasks [33], while others have exhibited the capability to process complex database queries [29] and carry out data processing tasks [26]. These advancements provide a robust foundation for data curation, an integral part of achieving IoDA's vision. * **Non-universal data access:** Despite the aforementioned advancements, universal data access remains elusive. In particular, today's data landscape is fragmented, with data sources and applications often functioning in isolation. This lack of interoperability restricts data's potential, hindering its accessibility and usability. **IoDA vision:** With these trends as context, we argue that a universal data access layer should emerge. It should leverage the ubiquity of data sources and the advancements in AI and data processing frameworks to address the challenge of non-universal data access. The goal is to create an infrastructure that facilitates bridging the semantic gap through AI-enabled data curation while eliminating the stack gap by promoting interoperability across diverse data applications. In doing so, IoDA seeks to actualize the promise of universal data access, enabling individuals and systems to seamlessly consume and contribute data at Internet scale. ### The Real-world Drivers for IoDA In what follows, we explain the more tangible benefits of IoDA from the perspectives of end-users and enterprises. **#1: Benefits for enterprises.** The promise of universal data access bears significant implications for enterprises, especially in optimizing operational efficiency, resource utilization, and cost savings. Consider the context of smart buildings, a market projected to surpass $121.6 billion by 2026 [18, 2]. A key application within this ecosystem is space analytics [2, 7, 17], which enables managers to make data-driven decisions by collecting and analyzing historical and real-time data on space utilization and occupancy patterns. Currently, space analytics requires either a dense network of expensive and maintenance-intensive sensors or a reliance on booking systems that infer utilization via user interactions. With universal data access, a paradigm shift towards a "self-serve" space becomes possible, reducing costs and improving resource utilization. In this scenario, building managers can leverage data from tenants' BYOD devices (_e.g.,_ laptops, smartphones, wearables), with user consent, to enhance the accuracy of space analytics. Tenants' devices contribute real-time occupancy data, reducing the need for dedicated sensor installations and offering more precise occupancy insights. 
Taking this example further, smart buildings could also connect with the data ecosystem of a city, receiving real-time data about factors such as traffic conditions, weather forecasts, and event schedules [21]. These data can be utilized to further optimize building operations like energy management and to improve tenant services. More broadly, for enterprises and business, the potential for having universal data access goes beyond cost and efficiency. By facilitating real-time data exchange with different business entities, _e.g.,_ from transport services to local restaurants - businesses can unlock new revenue streams. For example, by sharing real-time data about their customers' preferences and behaviors (with proper consent), businesses can offer personalized marketing and services, creating additional revenue opportunities. **#2: Benefits for end-users.** Imagine a day in the life of a city resident in a world with universal data access. As she **(1)** wakes up in the morning, her smart home has already prepared the perfect environment by adjusting the lighting, temperature, and even playing her favorite music, having seamlessly accessed data from her sleep tracking app. **(2)** Upon heading to the gym, her fitness tracker automatically shares her health stats and exercise preferences with the gym's network. This data enables personalized workout recommendations and real-time progress tracking, enhancing her fitness routine. Meanwhile, the tracker collects data about the gym's equipment usage, contributing to a community dataset that the gym uses to optimize its facilities. **(3)** She then visits a shopping mall, where her smartphone connects to the mall's network. The mall system, equipped with data about her past purchases and preferences, guides her to stores with items she might like, improving her shopping experience. As she moves around, her phone shares anonymized data about her shopping behaviors, aiding the mall in improving store placements and customer service. **(4)** Taking public transit to work, her phone communicates with the transit system, providing real-time updates on schedules and seat availability. In return, her device shares usage pattern data, assisting the transit authority in improving routes and schedules. **(5)** Finally, at her office, walking into a meeting room automatically connects her smartphone to the room's systems, allowing her to control the environment and be informed of the room's schedule. Her device, in return, provides data about room usage and energy consumption patterns to the building management system. **Summary.** To summarize, we argue that IoDA can offer enterprises significant improvements in operational efficiency, cost savings, and potential revenue growth by establishing an interconnected data ecosystem. Meanwhile, from a user perspective, IoDA can create a highly personalized, efficient, and responsive living and working environment. ### Technical Requirements for IoDA We argue that to achieve universal data access, the following requirements are _necessary_: * **#1 Universal accessibility:** Data should be accessible from any location, domain, and on any device, when allowed so. Similar to how the Internet provides universal access to information, universal data access implies that all relevant data is readily available when/wherever it is needed. * **#2 Interoperability:** Data from different sources should be readily compatible or easily made so for access. 
This means that data apps from disparate organizations/domains should be able to exchange and use the information. * **#3 Real-Time and opportunistic access:** Depending on the nature of the data and its use, access should be allowed to happen in real-time or near real-time. There should not be significant latency that would render the data outdated and therefore less useful. Besides, data apps should be able to opportunistically access data, adapting to unexpected data sources and data consumers/other data apps. * **#4 Bi-directionality:** Universal data access is not only about consuming data but also about contributing data. Data apps should be able to easily contribute data to the pool of data, as well as consume data from it. * **#5 Ease of use:** Accessing data should be as simple and intuitive as connecting to and using a network. It should not require specialized knowledge or complex procedures. * **#6 Security and privacy:** Given the sensitive nature of many types of data (_e.g.,_ PII data) that might be accessed and shared, it's imperative that security measures are in place to protect data integrity and confidentiality. We'll revisit these requirements as we describe the designs of our IoDA proposal (§3). ## 3 A Design Proposal This section presents the design requirements and principles (§3.1) and the approach that meets these principles (§3.2). ### Principles The highest guiding principle of IoDA is striving for simplicity - both in the sense of arriving at a simple design that is easy to adopt and in the sense of simplifying things for the applications and users it aims to serve. For data access to be ubiquitous, it should be easy and intuitive, and the processes of contributing and consuming data should be as straightforward as uploading and downloading packets from the Internet, from the perspective of data apps. We propose the following design principles: * **Design Principle #1: Unified abstractions for data access:** A cornerstone of IoDA's design is the concept of unified data app abstractions, whereby a data app is both a data consumer and a data contributor. This role combination eliminates technological barriers that could potentially trap data in silos. From a technical standpoint, the data app abstraction in IoDA serves as both a data source and a data app, providing a unified interface for data sources (like an IoT sensor), data consumers (such as a dashboard), and entities that perform both functions (_e.g.,_ a smart light adjusting to power settings while reporting energy readings). This unification simplifies system operations by allowing the use of a consistent set of mechanisms across all layers. * **Design Principle #2: Treating metadata as data but decoupling it from actual data:** This principle addresses the issue of metadata, which is integral for the organization, access, and interpretation of data. In IoDA, metadata is treated as data, enabling its independent management and processing. However, it's important to decouple metadata from the actual data to ensure their independent evolution and operation and to prevent unnecessary coupling. Figure 1: IoDA architecture vs. today. * **Design Principle #3: Allowing data app composition at any layer and across any domains:** IoDA should support data app composition at any layer, and the system mechanisms should not hinder this functionality. This design allows multiple data apps or domains to be federated or combined, irrespective of their layers or app domains. 
* **Design Principle #4: Decoupling specifications from executions on all layers:** Specifications and executions should be independent at all system levels in IoDA. This principle has two main advantages. First, it enables individual app providers/domains to choose their implementations of the data processing systems without affecting the shared _semantics_ of data processing and sharing. For example, while one domain might use Databricks for data processing, another might employ Snowflake; however, the overall semantic interpretation of the data processing pipelines remains consistent. Second, such decoupling also enables optimizations and verifications at all levels, enhancing the system's overall efficiency and reliability. ### Approach To help elaborate on our design, we define the following concepts within IoDA: (i) _data app:_ A data app consumes and provides data, acting as the instantiation of data access. A data app could be an end-user interface such as a UI or a dashboard; (ii) _data domain:_ Each domain is owned by a different domain owner, with clear boundaries for data access and policy enforcement; (iii) _metadata:_ This encompasses data schemas, access endpoints, and data descriptions, which provide information for locating and accessing data; and (iv) _data:_ The actual data records. Our design below focuses on IoT-like data, which can be structured or semi-structured. **Overview.** The design of the IoDA layer comprises the following components: (a) _Data gateway (gate):_ A data gateway curates data, making it ready for consumption by the data app and other data gates. It consumes data from defined data sources and exports processed data, serving as both a specification interface for data app developers and a runtime execution abstraction. (b) _Wire:_ Data wires are the "conduits" or "pipes" that connect data gateways and execute data movements and interconnections across multiple data gates, either within or across circuits. (c) _Circuit:_ Circuits are formed by interconnected gates, offering global abstraction, visibility, and optimization across these gates. Each domain can have one or multiple data circuits. (d) _Gate Resolution Protocol (GRP) and Border Gate Resolution Protocol (bGRP):_ These protocols facilitate data exchanges within and across domains, effectively supporting the flow of data in IoDA. **D1: Designing for universal data abstraction.** The main objective is to recognize the shared representation and instantiation of data access and curation. We propose the concept of a data gateway, analogous to packet routers/gateways and middleboxes that process incoming packets and forward them to other gateways. **Data gateway (gate).** IoDA represents each context with a gate. A gate consists of three components: a data store, one or more input ports (iports), and one or more output ports (oports). These three abstractions allow a gate to ingest, process, store, and export data records to/from other gates (a minimal illustrative sketch of this abstraction is given at the end of this paper). * _Input port (iport)._ An iport of a gate retrieves data from one or multiple data sources. A gate can have multiple iports, each identified uniquely. Each iport processes the ingested data with its dataflow, which contains operators (_e.g._, sort, join) and functions (_e.g._, sum, avg) that process a sequence of input data records and generate a sequence of output records. The derived data is written to the store. * _Output port (oport)._ An oport exposes data records in the data store to data consumers such as other gates, apps, and users. 
At runtime, the oport reads data from the data store, processes it, and caches it persistently. We refer to this resulting data as the "oport view". The oport supports interfaces to query or watch the data records. A gate can have multiple oports, and entities can retrieve data by querying one or more of the gate's oports. The data schemas and access control policies are specified in each oport. * _Data store._ The data store is an interface to the domain's data storage choices, such as a database or an object store. * _Data and metadata flow._ The data flow refers to the actual data records exchanged between data apps. It can be structured, semi-structured, or free-form data. For the smart city use case, IoT data records are envisioned to be represented in JSON or JSON-like formats, which provide richer types. Besides, each gate exposes metadata about the data, such as data schemas, contextual information, and data descriptions, to facilitate data discovery. * **D2: Designing for data discovery** The high-level goal of designing data discovery in IoDA is to discover the oports across different gates and domains. * _Gate addressing._ Each gate is identified using a specific addressing format: domain/gate/oport. This addressing scheme allows for referencing individual output ports. The domain name can be derived from existing naming infrastructure, such as Internet domain names, public ledger addresses [25], or GitHub accounts or organizations [8]. It's important to note that the gate address is distinct from the network address. * _Gate Resolution Service._ The Gate Resolution Service (GRS) resolves gate addresses, similar to how DNS resolves domain names to internet addresses. However, the GRS differs from DNS in that it resolves the iport's data source to an oport address based on both the metadata of the source gate and the destination gate. This resolution process takes into account the specific metadata of the gates involved in order to determine the appropriate gate address for data discovery. **D3: Designing for cross-Domain access** Cross-domain access in IoDA enables data sharing and collaboration between different domains. To support secure and controlled data exchange across domains, we propose two components: * **Wire** Besides data movement, the wire component in IoDA is also responsible for gate authentication and authorization. It establishes secure communication channels between gates to ensure that data is transmitted only between authenticated and authorized gates. The wire component also enforces access control, verifying the identity and permissions of gates involved in the data exchange. * **Circuit** The circuit component in IoDA focuses on topology verification and enforcement. A circuit is formed by interconnected gates, representing the logical path through which data flows between domains. The circuit provides a global abstraction and visibility across gates, allowing for optimized data movement and processing. In addition, the circuit verifies the connectivity and integrity of the gates within the topology, ensuring that data can be exchanged seamlessly between domains while maintaining the required security and privacy measures. **D4: Designing for privacy and security** We focus on the following high-level points privacy and security: * _Access control._ IoDA incorporates access control mechanisms to regulate data access. Role-based Access Control (RBAC) is employed to determine which roles can access specific data. 
This ensures that only authorized entities can interact with the data. * _Ownership._ IoDA can improve data ownership and control. Gate operators have the authority to determine where and how context data is stored. This allows them to maintain control over their data and choose the storage systems that align with their requirements. * _Provenance and Governance._ IoDA enables the tracking of data provenance, capturing the sources and modifications of data. This provides transparency and accountability in data handling. Further, IoDA supports governance policies that ensure data quality and compliance with regulations. These policies enable users to define rules and restrictions on data usage to maintain privacy and meet regulatory standards. **IoDA Deployment.** IoDA allows different deployment strategies. A common scenario involves users utilizing a lightweight client-side app to interact with gates deployed in either a cloud or on-premises environment. IoDA providers, such as SaaS companies, operate IoDA clusters comprising the runtime components and users' gates. A user's gate is hosted by a "home provider," while the option to register gates with non-home providers enables gate integration across clusters. For example, a device vendor may act as a provider, running a user's phone gate as a service accessible through a phone app. When a user joins a context like a building, their device gate automatically connects with the building's gate hosted by a separate provider. Other scenarios include single-provider gate hosting or apps incorporating known/pre-configured gates, ensuring flexibility and seamless integration within an IoDA cluster. ## 4 Call for Research Dialogue **Why is systems and networking community well-suited for designing IoDA?** Our networking community possesses the essential characteristics necessary for designing IoDA. First, our community encompasses multiple disciplines, making it inherently multi-disciplinary [31] (see below). Second, as experts in data communication, we have the technical expertise and insights required to address the challenges of data access and sharing. We have a deep understanding of networking techniques and perspectives, which can be leveraged to design IoDA effectively. Third, our community has a successful track record in solving interoperation problems, as demonstrated by our accomplishments in building and evolving the Internet. By applying the principles and lessons learned from previous networking advancements [24, 32, 23, 22], we are at the vantage point to develop IoDA. **Why is collaboration with other communities crucial for IoDA's success?** The realization of IoDA relies on the collaboration with other communities too. First, the system community, encompassing big data systems, streaming systems, IoT systems, cloud computing, and edge systems, plays a pivotal role in providing the underlying infrastructure and technologies required by IoDA. Second, the database community brings expertise in data models [11], data integration [26], query optimization [28], and data engine design [27]. Their knowledge and advancements are essential for developing robust and efficient data processing and management mechanisms within IoDA. Finally, the HCI community specializing in user interface design plays a critical role in shaping the user experience of universal data apps, ensuring they are intuitive, accessible, and user-friendly. 
Further, there is a pressing need to address the growing dominance of "incumbents" such as Snowflake [19] in the data industry. To democratize data and empower users to control their own data and access, it is crucial to develop an IoDA layer that fosters data ownership and accessibility. This paper serves as an early call for research dialogue, inviting collaboration and discussions among researchers and practitioners from various communities. We are actively prototyping systems to support the IoDA layer and will share our findings and experiences in our follow-up papers. Figure 2: Data exchanges intra- and inter-domain.
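To make the gate, port, and addressing concepts of Section 3.2 more concrete, below is a minimal, illustrative Python sketch. It is not an implementation of IoDA: the class and function names (Gate, IPort, OPort, GateResolutionService, parse_gate_address), the in-memory store, and the example dataflow are hypothetical placeholders for whatever storage engines, dataflow runtimes, and GRP/bGRP machinery a real deployment would use.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Iterable, List

Record = Dict[str, object]  # one (semi-)structured data record, e.g. a parsed JSON object


@dataclass
class IPort:
    """Input port: ingests records from a data source and runs a small dataflow over them."""
    name: str
    dataflow: Callable[[Iterable[Record]], List[Record]]  # e.g. filter/transform/aggregate


@dataclass
class OPort:
    """Output port: exposes a queryable view over the gate's store, plus its metadata."""
    name: str
    view: Callable[[List[Record]], List[Record]]  # e.g. select fields, apply access policy
    schema: Dict[str, str] = field(default_factory=dict)  # exported data schema (metadata)


@dataclass
class Gate:
    """A gate = data store + iports + oports, addressed as domain/gate/oport."""
    domain: str
    name: str
    store: List[Record] = field(default_factory=list)
    iports: Dict[str, IPort] = field(default_factory=dict)
    oports: Dict[str, OPort] = field(default_factory=dict)

    def ingest(self, iport_name: str, records: Iterable[Record]) -> None:
        self.store.extend(self.iports[iport_name].dataflow(records))

    def query(self, oport_name: str) -> List[Record]:
        return self.oports[oport_name].view(self.store)


def parse_gate_address(address: str):
    """Split a 'domain/gate/oport' address into its three components."""
    domain, gate, oport = address.split("/")
    return domain, gate, oport


class GateResolutionService:
    """Toy GRS: maps gate addresses to gates, standing in for DNS-like resolution."""

    def __init__(self) -> None:
        self._registry: Dict[str, Gate] = {}

    def register(self, gate: Gate) -> None:
        self._registry[f"{gate.domain}/{gate.name}"] = gate

    def resolve(self, address: str) -> List[Record]:
        domain, gate_name, oport = parse_gate_address(address)
        return self._registry[f"{domain}/{gate_name}"].query(oport)


# Example: a building gate that curates raw BYOD occupancy pings into per-room counts.
def occupancy_only(records):
    return [r for r in records if r.get("kind") == "occupancy"]

def per_room_counts(store):
    counts: Dict[str, int] = {}
    for r in store:
        counts[r["room"]] = counts.get(r["room"], 0) + 1
    return [{"room": room, "count": c} for room, c in counts.items()]

building = Gate(domain="campus.example", name="building-7")
building.iports["byod"] = IPort("byod", occupancy_only)
building.oports["rooms"] = OPort("rooms", per_room_counts, schema={"room": "str", "count": "int"})

grs = GateResolutionService()
grs.register(building)
building.ingest("byod", [{"kind": "occupancy", "room": "7.01"},
                         {"kind": "occupancy", "room": "7.01"}])
print(grs.resolve("campus.example/building-7/rooms"))  # [{'room': '7.01', 'count': 2}]
```

In a real deployment, the store would be backed by the domain's database or object store, the dataflows would run on the domain's chosen processing stack (e.g., Databricks or Snowflake), and resolution and cross-domain exchange would be carried out over authenticated wires and circuits rather than in-process calls.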
We introduce the Internet of Data Apps (IoDA), which represents the next natural evolution of the Internet. Despite progress in fields such as Big Data, AI, and the IoT, the potential of universal data access - being able to seamlessly consume and contribute data through data applications - remains held back by organizational and technological barriers. To address these constraints, we propose the design of an IoDA layer. Drawing inspiration from standard Internet protocols, this layer enables the interconnection of data applications across different devices and domains. This short paper is a call for dialogue on this proposal.
2309.10533
Decoupling the Curve Modeling and Pavement Regression for Lane Detection
The curve-based lane representation is a popular approach in many lane detection methods, as it allows for the representation of lanes as a whole object and maximizes the use of holistic information about the lanes. However, the curves produced by these methods may not fit well with irregular lines, which can lead to gaps in performance compared to indirect representations such as segmentation-based or point-based methods. We have observed that these lanes are not intended to be irregular, but they appear zigzagged in the perspective view due to being drawn on uneven pavement. In this paper, we propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression. Specifically, we use a parameterized curve to represent lanes in the BEV space to reflect the original distribution of lanes. For the second part, since ground heights are determined by natural factors such as road conditions and are less holistic, we regress the ground heights of key points separately from the curve modeling. Additionally, we have unified the 2D and 3D lane detection tasks by designing a new framework and a series of losses to guide the optimization of models with or without 3D lane labels. Our experiments on 2D lane detection benchmarks (TuSimple and CULane), as well as the recently proposed 3D lane detection datasets (ONCE-3Dlane and OpenLane), have shown significant improvements. We will make our well-documented source code publicly available.
Wencheng Han, Jianbing Shen
2023-09-19T11:24:14
http://arxiv.org/abs/2309.10533v1
# Decoupling the Curve Modeling and Pavement Regression for Lane Detection ###### Abstract The curve-based lane representation is a popular approach in many lane detection methods, as it allows for the representation of lanes as a whole object and maximizes the use of holistic information about the lanes. However, the curves produced by these methods may not fit well with irregular lines, which can lead to gaps in performance compared to indirect representations such as segmentation-based or point-based methods. We have observed that these lanes are not intended to be irregular, but they appear zigzagged in the perspective view due to being drawn on uneven pavement. In this paper, we propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression. Specifically, we use a parameterized curve to represent lanes in the BEV space to reflect the original distribution of lanes. For the second part, since ground heights are determined by natural factors such as road conditions and are less holistic, we regress the ground heights of key points separately from the curve modeling. Additionally, we have unified the 2D and 3D lane detection tasks by designing a new framework and a series of losses to guide the optimization of models with or without 3D lane labels. Our experiments on 2D lane detection benchmarks (TuSimple and CULane), as well as the recently proposed 3D lane detection datasets (ONCE-3Dlane and OpenLane), have shown significant improvements. We will make our well-documented source code publicly available. ## Introduction Lane detection is a crucial task for autonomous driving, as it involves detecting traffic lanes in images to help with decision-making [1, 13, 14]. This field has gained significant attention recently [1, 11, 12]. A long-standing question in this area is how to accurately represent lanes. Previous studies on lane detection [13, 14, 15] can be broadly classified into three categories: segmentation-based, point-detection-based, and curve-based methods. Segmentation-based methods [13, 12] treat lane detection as a pixel-level classification task and identify lanes using heuristics in the post-processing stage. Point-based methods [12, 10] detect lanes by locating a series of points and then interpolating them to form the lane. Both of these methods typically achieve state-of-the-art performance in this area, but they represent lanes in indirect ways, ignoring the lanes' inherent characteristics as a whole. Unlike the first two types of methods, curve-based methods take a holistic approach to representing lanes. Certain methods, such as those proposed by Van _et al._[13] and Tabelini _et al._[14], model lanes using polynomial curves, whereas Feng _et al._[14] have proposed a method that utilizes Bezier curves to represent lanes. Figure 1: **Illustration of a challenging case for curve-based lane representation and our solution.** (a) Uneven ground causes the lanes in the perspective view to fluctuate, making it difficult to fit the curves. (b) In the BEV space, the lanes retain their original status and can be easily fitted using parameterized curves. (c) Our solution is to separate the curve modeling and pavement regression and fit them independently. These methods make full use of the geometric properties of lanes and present them elegantly, but they still perform less effectively than contemporary segmentation and point-based methods. 
Some researchers [14] attribute this to the curve-based representations lacking enough degrees of freedom to accurately model lane shape. However, we believe that increasing the degrees of freedom even further may not be the optimal solution. On one hand, freedom degrees and holistic representation are two opposing properties. If we increase the parameter numbers of the polynomial curves or the control points of the Bezier curves, the predictions may become more prone to overfitting local lane patterns. On the other hand, lanes are naturally designed to be simple and smooth in order to guide vehicle navigation. Thus, they should not be represented with too many complex fluctuations. After analyzing some challenging real-world cases of curve-based methods in 3D space, we discovered that most of the difficult lanes with irregular shapes in front-view images are actually due to to roads themselves, not the lanes. When viewed from above, many of these lanes exhibit smooth shapes that can easily be modeled using parameterized curves, as shown in Fig. 1. For example, the lane in the yellow box appears irregular in the perspective view, making it difficult to fit accurately with a third-order polynomial. However, when viewed from above, the lane appears straight and can be easily modeled. Based on this observation, we conclude that uneven roads are the main cause of irregular lane shapes in the perspective view, rather than the lanes themselves. To better represent the lanes, we propose a new framework called DecoupleLane. This framework represents the lanes in 3D space and models both the lanes and their corresponding ground heights. Afterwards, the lanes and ground heights are mapped back to the image space based on their perspective relationships. Specifically, we focus on the original shape of lanes in the BEV space and use a polynomial to model the relationship between the \(X\) and \(Z\) coordinates. Unlike lane lines, the heights of roads are influenced by many natural factors and are not as holistic as the lanes. Therefore, following point-based methods, we represent the heights of the lane by a series of independent key points. Finally, the discrete values are interpolated and combined with \(X\) and \(Z\) values to form the 3D lanes. Given the intrinsic matrix of the camera, we can convert the 3D lane coordinates into 2D ones, which can be used to represent lanes in the perspective view. For datasets that lack 3D lane labels, we train the model using the 2D loss between the converted 2D lanes and the 2D ground truth, as well as a regularization loss. This regularization constrains the ground heights to be as flat as possible, which helps the ground height head to focus on the irregular fluctuations of the road. Although the predicted positions in the 3D space are not real coordinates, they can still encode the relative relationships between the lanes. In conclusion, this paper's contributions can be summarized in four parts: * We thoroughly examined the limitations of curve-based representation and developed DecoupleLane to effectively utilize geometric information for comprehensive lane detection. This approach accurately accounts for fluctuations caused by uneven ground. * We propose the Decouple Head for lane detection. It predicts the bird's-eye view (BEV) representation and corresponding ground heights of the lanes. This approach disentangles the use of holistic information and complex environment information. 
* We standardize the representation of lanes in both 2D and 3D spaces by treating the 2D lanes as a perspective-view projection of the 3D lane. We also propose a unified loss pipeline to guide model optimization. * Our method outperforms state-of-the-art approaches on two representative 2D lane detection datasets: TuSimple and CULane. It also shows strong performance on recent 3D lane detection datasets, including ONCE-3DLane and OpenLane. ## Related Works ### 2D lane representation. The 2D lane detection task involves locating lanes in a 2D space. Based on different lane representations, most 2D lane detection models can be sorted into three categories: segmentation-based methods, point-based methods, and curve-based methods. **Segmentation-based methods** This approach treats lane detection as a task of classifying each pixel and assigning it a specific lane class. One pioneering study by Pan _et al_. [13] utilized slice-by-slice convolution within feature maps and allowed for message-passing between pixels across rows and columns in a layer. Hou _et al_. [15] introduced knowledge distillation and proposed self-attention distillation to enhance the performance of lightweight models. Their approach achieved comparable performance to state-of-the-art models while using significantly fewer parameters. Zheng _et al_. [16] implemented a recurrent feature-shift aggregator to enhance lane feature extraction and designed a bilateral up-sampling decoder to combine coarse and fine-grained features during the upsampling procedure. Ghafoorian _et al_. [17] suggested a generative model to improve the realism and structure preservation of the prediction. Although segmentation-based approaches have achieved satisfactory performance, they often require substantial post-processing due to the indirect representation of the data. Additionally, the representation disregards the overall context of the lane, which makes it challenging to utilize holistic information. **Point-based methods** Inspired by object detection methods, point-based models approach lane detection as a prediction problem of points, where lanes are represented by a series of points. Li _et al_. [10] were the first to introduce a point-based representation in this area and used predefined anchors to provide prior information. Tabelini _et al_. [18] employed an attention module to improve the model's performance on challenging scenes, such as occlusion and missing lane markers, by enhancing global information extraction. Qin _et al_. (Qin, Wang, and Li 2020) proposed a novel approach to the lane detection problem, treating the task as a row-based selection problem with global features and introducing a structural loss to explicitly model the structure of lanes. Zheng _et al._ (Zheng et al. 2022) presented the cross-layer refinement network, which fully utilizes both high-level and low-level features in lane detection. They introduced the line IoU loss to regress the lane line globally, resulting in improved localization accuracy. Wang _et al._ (Wang et al. 2022) directly regressed the keypoints by predicting the offsets to the starting point of the lane line. They proposed a Lane-aware Feature Aggregator to capture the local correlations between adjacent keypoints. This representation achieves state-of-the-art performance, although the indirect modeling of lanes still makes it less efficient to utilize holistic information. 
**Curve-based methods** Curve-based methods use curve functions to predict a series of parameters and represent lane lines. Van _et al._ (Van Gansbeke et al. 2019) introduced this type of representation, which consists of two components: a deep network for weight map prediction and a differentiable least squares fitting module to produce the best-fitting curves. Tabelini _et al._ (Tabelini et al. 2021b) used deep polynomial regression to produce a polynomial representation for each lane marking. Liu _et al._ (Liu et al. 2021b) developed a transformer-based method that extracts abundant features and directly outputs parameters of a lane shape model. Feng _et al._ (Feng et al. 2022) proposed a parametric Bezier curve to represent the lane lines. Curve-based methods are considered the most intuitive way to represent lanes because they can easily model the lane distribution with rectified curves. However, current curve-based methods do not perform as well as contemporary segmentation and point-based methods, possibly because they cannot handle complex environmental changes such as uneven ground. In this paper, we identify this problem and suggest that more research is needed to improve the performance of this class of methods. ### 3D lane representation. 3D lane detection is a new task in the lane detection field that requires accurate information about the lane in 3D space instead of just the image space. This introduces some new challenges in lane representation compared to 2D lane detection. The pioneering work by Garnett _et al._ (Garnett et al. 2019) introduced intra-network inverse-perspective mapping (IPM) and anchor-based lane representation to solve the 3D lane detection task. Guo _et al._ (Guo et al. 2020) subsequently designed a geometry-guided lane anchor in the virtual BEV and decoupled the learning of image segmentation and 3D lane prediction. Yan _et al._ (Yan et al. 2022) proposed a new real-world 3D lane detection dataset and an end-to-end detector. The proposed network SALAD is extrinsic-free and anchor-free, and regresses the 3D coordinates of lanes in the image view without explicitly converting the feature map into the BEV. Bai _et al._ (Bai et al. 2022) presented a one-stage Transformer-based method that directly predicts 3D lane parameters. Chen _et al._ (Chen et al. 2022) used a unified 2D/3D anchor design to detect 2D/3D lanes simultaneously. They also released a large-scale real-world 3D lane dataset called OpenLane. These works have tried different types of representation but all consider 3D lane representation as an integrated target, ignoring the complex environment and intuitive lane design. ## Method In this section, we will first introduce the structure of the proposed DecoupleLane in **§Overview**. Then in **§Decouple Head**, we will explain the core module of our method in detail, the Decouple Head, which disentangles the representation of lanes into holistic BEV lane modeling and discrete key point height regressions. In **§Unified Lane Detection**, we unify the 3D and 2D lane representation and treat the 2D lane as a perspective projection of the corresponding 3D lane. ### Overview Fig. 2 shows an illustration of the proposed DecoupleLane. Different from previous curve-based methods (Feng et al. 2022; Zheng et al. 2021), which usually pool the feature maps into one row and employ a head network to decode the curve in the corresponding column, our model employs an anchor-based architecture. 
Although column-based methods are efficient in computation, this architecture still has some limitations in terms of 3D lane representation. Firstly, column-wise pooling cannot handle the side lanes well. As shown in Fig. 3 (a), the two lanes on the right side are pooled into the same column, making it hard for the model to discriminate them. Secondly, as we discussed in **§Introduction**, ground height is a critical factor that influences the representation of the lane. The pooled row features cannot encode abundant spatial information like the ground heights and thus cannot fulfill our requirement. As shown in Fig. 2, there are mainly two parts in the model, a backbone network for the feature extraction and a Decouple Head for decoding the feature into the corresponding 3D lanes. In this paper, we employ a DLA-34 (Yu et al. 2018) network as our backbone, which can efficiently extract high-level semantic information while keeping a high resolution of the feature map. These advantages help our model to capture holistic information like the curve shapes of the lane and discriminate detailed features like the ground undulation. Then, we define several anchors by a clustering algorithm for the lanes and employ ROI-Align modules to pool the features into fixed shapes. Unlike the object detection task, the distribution of lanes is more regular, and a small number (less than 50) of anchors can recall most of the lanes. Also, different from point-based methods, we do not need the anchors to provide holistic priors because our curve representation can achieve this implicitly. Therefore, our anchor design is a simple but effective way to extract possible lane features \(x\): \[x=\mathrm{ROIAlign}(f(\gamma,I)), \tag{1}\] where \(I\) is the input front-view image and \(\gamma\) is the weight of the backbone network. The features gathered are then sent to the Decouple Head for lane decoding. This process models the lane curve in the BEV space and predicts the corresponding ground heights. It is worth noting that we do not need to manually convert the image into the top view or extract features like some previous works [1, 13]. This is because the IPM conversion is based on the flat-ground assumption, which can introduce additional noise to the reasonable lane representation in the BEV space. In our work, all the features are directly extracted from the input 2D images. Finally, curves and heights are combined to produce the 3D lanes. Our DecoupleLane can also generate 2D lane representations. Instead of using two separate heads for 2D and 3D representation, our model projects the 3D lanes into the perspective view to produce 2D lanes. More information about this process is available in **§Unified Lane Detection**. ### Decouple Head As discussed in **§Introduction**, there are two main factors that affect the shape of the lane in the perspective view. The first factor is the design of the lane itself, which is typically created by experts and therefore has a reasonable and elegant shape. This is why the lane is considered to be holistic. The second factor is the ground on which the lane is situated. The height of the ground is influenced by various natural factors such as topographical changes, which cause irregular fluctuations. These two factors together make it difficult to fit the lane's projection in the perspective view. To fully utilize the geometry information and effectively handle the two factors, we propose the Decouple Head. 
This head separately models the lanes in the BEV space and the ground heights. In the BEV space, we focus on the relationships between the \(\mathcal{X}\) and \(\mathcal{Z}\) coordinates while ignoring the \(\mathcal{Y}\) (height) value changes in a Left-hand Cartesian System. As mentioned earlier, lanes in the BEV have holistic and elegant shapes. We prefer a simple representation without large degrees of freedom in this procedure. Therefore, we employ a third-order polynomial to model the \(\mathcal{X},\mathcal{Z}\) relationship: \[\mathcal{X}=a\mathcal{Z}^{3}+b\mathcal{Z}^{2}+c\mathcal{Z}+d, \tag{2}\] where \(a\), \(b\), \(c\), \(d\) are the predicted coefficients of the curve. To calculate the coefficients with a comprehensive perspective of the lanes, we start by utilizing an ROI-Align module on the feature maps using the predetermined anchors. This process transforms the features into a fixed size. Next, we create a new CurveFormer to collect global information. Our CurveFormer is based on a transformer structure, as illustrated in Fig. 3 (c). It receives the ROI feature as queries and the global feature map as keys and values. To distinguish the relevant information, we have equipped the ROI features with two kinds of embeddings. Firstly, there is the position embedding, which encodes the relative position information of the ROI features. Secondly, there is a learnable region-aware embedding for each anchor. These region-aware embeddings are optimized during the end-to-end training process, and they help the queries focus on the overall information of the curve. Finally, we flatten the output features of the CurveFormer and send them into a fully-connected layer for predicting the coefficients: \[[a,b,c,d]^{\intercal}=f(\omega,flatten(CurveFormer(x))), \tag{3}\] where \(\omega\) is the weight of the fully-connected layer. In addition to the BEV lanes, accurately modeling the ground heights \(\mathcal{Y}\) is another crucial aspect. Changes in ground height are affected by natural factors like ground conditions, making it challenging to fit them into a parameterized model. To tackle this issue, we adopt a point-based approach. First, we ROI-Align the input feature into a column shape \(\bar{x}\in(1,n)\), where \(n\) is a hyper-parameter that determines the number of key points. In our experiments, we set it to 72. Then, we aggregate local information from the corresponding key-point positions using a HeightFormer. The proposed HeightFormer has a structure similar to the CurveFormer but is trained independently, enabling it to encode different priors during training and focus on different regions than the CurveFormer. Finally, we apply a fully-connected layer on every pixel of the feature map and convert it into the height of the points: \[\mathcal{Y}_{i}=f(\eta,HeightFormer(\bar{x})_{i}). \tag{4}\] We uniformly select \(n\) key points on the lane along the Z-axis. Each value of \(\mathcal{Y}_{i}\) represents the height of the corresponding key point, as illustrated in Fig. 2. Next, we interpolate the heights to create a continuous sequence and then merge the BEV lane representation with the heights to generate the 3D representation. 
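As a minimal numerical sketch of how Eqs. (2)-(4) are combined, assume the curve coefficients and the \(n=72\) key-point heights have already been predicted by the CurveFormer and HeightFormer; the dummy values below are placeholders, and piecewise-linear interpolation is one simple, assumed choice for the interpolation step.

```python
import numpy as np

def assemble_3d_lane(coeffs, key_heights, z_start, z_end, num_samples=100):
    """Combine the BEV curve of Eq. (2) with interpolated key-point heights (Eq. 4)
    into a sequence of 3D lane points (X, Y, Z)."""
    a, b, c, d = coeffs
    z = np.linspace(z_start, z_end, num_samples)
    x = a * z**3 + b * z**2 + c * z + d                      # Eq. (2): holistic BEV curve
    z_keys = np.linspace(z_start, z_end, len(key_heights))   # n key points placed uniformly along Z
    y = np.interp(z, z_keys, key_heights)                    # piecewise-linear ground height
    return np.stack([x, y, z], axis=1)                       # shape: (num_samples, 3)

# Dummy values standing in for the CurveFormer / HeightFormer outputs:
coeffs = (0.0, 0.001, -0.05, 1.8)                            # a, b, c, d of a gently bending lane
key_heights = 0.05 * np.sin(np.linspace(0.0, 6.0, 72))       # n = 72 key points on uneven ground

lane_3d = assemble_3d_lane(coeffs, key_heights, z_start=3.0, z_end=60.0)
print(lane_3d.shape)  # (100, 3)
```

The curve stays smooth in the X-Z (BEV) plane, while all of the irregularity is carried by the separately regressed heights, which is exactly the decoupling the head is designed to exploit.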
Figure 2: **An overview of the proposed DecoupleLane Framework.** There are mainly two parts to the proposed network, a backbone for feature extraction and the Decouple Head for decoding the features into lane representation. ### Unified Lane Detection Previous works have often treated 2D and 3D lane detection as separate tasks, with the two types of lanes represented independently [3]. However, it is important to note that 2D and 3D representations ultimately refer to the same lanes in the real world, and treating them separately may lead to limitations. For example, in the 2D representation, perspective distortion can cause undesirable deformation of lanes. When two parallel lanes intersect at the vanishing point, their relationship can be difficult to comprehend. Furthermore, due to the different representations, large-scale datasets and training methods for 2D lane detection may not be directly applicable to improving the performance of 3D lane detection, and vice versa, making the use of data less efficient. To bridge the two tasks, our method uniformly represents lanes in the 3D space and then projects the 3D lane representations into the perspective view to construct the 2D lane predictions. Given a lane point \(P=(\mathcal{X},\mathcal{Y},\mathcal{Z})\) in the 3D space and the intrinsic matrix \(K\) of the camera, the corresponding 2D point \(p=(u,v)\) can be calculated by: \[\mathcal{Z}\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=\left[\begin{array}{ccc}f_{x}&0&o_{x}\\ 0&f_{y}&o_{y}\\ 0&0&1\end{array}\right]\left[\begin{array}{c}\mathcal{X}\\ \mathcal{Y}\\ \mathcal{Z}\end{array}\right], \tag{5}\] where \(f_{x}\) and \(f_{y}\) are the pixel focal lengths and \(o_{x},o_{y}\) are the offsets of the principal point. We then guide the optimization of the model in both the 3D and 2D spaces. **Unified Loss** To create the loss, we start by aligning the predicted lanes with the ground truth lanes. Our approach involves aligning them in 2D space, rather than 3D space, because the feature maps are generated from 2D images. This makes it easier for the model to extract features accurately based on the 2D locations. Specifically, we employ the Hungarian algorithm to assign predictions to the ground truth 2D lanes, and the costs are formulated as: \[C=\mathcal{L}_{1}(u_{p},u_{g})+\mathcal{L}_{1}(v_{sp},v_{sg})+\mathcal{L}_{1}(v_{ep},v_{eg}), \tag{6}\] where \(\mathcal{L}_{1}(u_{p},u_{g})\) is the average horizontal distance between the predicted lanes and the ground truth lanes, and \(\mathcal{L}_{1}(v_{sp},v_{sg})\) and \(\mathcal{L}_{1}(v_{ep},v_{eg})\) are the vertical distances of the corresponding start and end points. Next, we assign positive labels to successfully matched predictions and negative labels to the rest. We then use a classification loss, \(L_{cls}\), to guide the discrimination between the two groups. For the 3D lanes, we design the loss in two parts. Specifically, to guide the optimization of the holistic lane curve, we employ a Lane IoU loss [10] in the BEV space: \[L_{bev}=\frac{2e+\min{(\mathcal{X}_{i}^{p},\mathcal{X}_{i}^{g})}-\max{(\mathcal{X}_{i}^{p},\mathcal{X}_{i}^{g})}}{2e+\max{(\mathcal{X}_{i}^{p},\mathcal{X}_{i}^{g})}-\min{(\mathcal{X}_{i}^{p},\mathcal{X}_{i}^{g})}}, \tag{7}\] where \(e\) is a hyper-parameter that controls the radius of the lanes and \(\mathcal{X}_{i}\) denotes the corresponding \(\mathcal{X}\) values of the uniformly sampled points. 
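A small sketch may help to illustrate the two formulas above: the pinhole projection of Eq. (5) and the point-wise BEV Lane IoU of Eq. (7). The intrinsics, the radius \(e\), and the reduction of the per-point IoU to a single loss value (averaging, then using \(1-\mathrm{IoU}\)) are assumptions made only for this illustration.

```python
import numpy as np

def project_to_image(points_3d, fx, fy, ox, oy):
    """Pinhole projection of 3D lane points (X, Y, Z) to pixels (u, v), following Eq. (5)."""
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([fx * X / Z + ox, fy * Y / Z + oy], axis=1)

def lane_bev_iou(x_pred, x_gt, e=0.5):
    """Point-wise line IoU of Eq. (7): each sampled lane point is widened to a segment of
    radius e along X; averaging over the sampled points is our assumed reduction."""
    inter = 2 * e + np.minimum(x_pred, x_gt) - np.maximum(x_pred, x_gt)
    union = 2 * e + np.maximum(x_pred, x_gt) - np.minimum(x_pred, x_gt)
    return (inter / union).mean()

# A lane that is perfectly straight in the BEV (constant X) but lies on uneven ground:
z = np.linspace(3.0, 60.0, 72)
lane_gt = np.stack([np.full_like(z, 1.8), 0.05 * np.sin(z / 3.0), z], axis=1)

# Projected with made-up intrinsics, the v-coordinates inherit the ground fluctuations,
# which is the zigzag effect that purely 2D curve fitting struggles with.
uv = project_to_image(lane_gt, fx=1000.0, fy=1000.0, ox=640.0, oy=360.0)

# BEV IoU between the ground-truth X values and a slightly perturbed prediction:
rng = np.random.default_rng(0)
x_pred = lane_gt[:, 0] + 0.1 * rng.standard_normal(len(z))
iou = lane_bev_iou(x_pred, lane_gt[:, 0])
print(uv[:3].round(1), round(iou, 3), round(1.0 - iou, 3))  # 1 - IoU acts as the loss term
```

During training, the same projection is applied to the predicted 3D lanes to obtain the 2D lanes that enter the 2D losses and the matching cost of Eq. (6).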
We use an \(L_{1}\) loss to compare the predicted \(\mathcal{Y}\) values with the ground truth, which is referred to as the height loss \(L_{h}\). Additionally, we employ an endpoint loss \(L_{Z}\) to estimate the error between the \(\mathcal{Z}\) values of the start and end points: \[L_{3D}=L_{bev}+L_{h}+L_{Z} \tag{8}\] For the 2D lanes, we employ a Lane IoU loss \(L_{per}\) in the perspective space and an endpoint loss \(L_{v}\) for estimating the error between the \(v\) values of the start and end points in the 2D space: \[L_{2D}=L_{per}+L_{v} \tag{9}\] In total, our loss is formulated as: \[L=\left\{\begin{array}{ll}L_{cls}+\alpha L_{3D}+\beta L_{2D},&\text{if 3D labels available}\\ L_{cls}+\beta L_{2D}+|\sigma_{h}|,&\text{otherwise}\end{array}\right., \tag{10}\] where \(\alpha\) and \(\beta\) are balancing weights used in our experiments. When training with data that does not have 3D labels, such as the 2D lane detection datasets mentioned earlier, we remove the 3D losses since there are no corresponding 3D labels available. However, doing so can cause the relationship between the BEV representation and the height regression to become unstable. To address this issue, we add an additional regularization loss \(|\sigma_{h}|\) to minimize the variance of the ground heights while optimizing the parameters. In this setting, the 3D representation can be regarded as an implicit representation for 2D lane detection. Although the model cannot produce accurate 3D lane locations without 3D guidance, based on perspective geometry, the model can learn reasonable 3D relationships under only 2D guidance, as shown in Fig. 3 (b). Figure 3: **Illustration of the lanes and the structure of the CurveFormer and HeightFormer.** (a) Illustration of the alignment problem of the column-pooling operation. (b) Illustration of the 3D representation of DecoupleLane trained with only 2D labels. (c) The structure of the CurveFormer and HeightFormer. ## Experiments ### Implementation Details Most of our experiments are conducted on a platform with an Intel Platinum 8260 CPU, 30GB RAM, and a single Tesla V100 GPU. When testing the inference speed of DecoupleLane, we run our model on a single GTX 1080ti GPU for a fair comparison with other methods. During training and inference, all the pixels above the skyline in the images are removed to save unnecessary computation, and the inputs are resized to \(800\times 320\). For data augmentation, we employ random horizontal flips, translation, and scaling. Besides processing the images and 2D labels, we also adjust the corresponding intrinsic matrix to keep the relationship between the 2D and 3D lanes. We employ an AdamW optimizer with a learning rate of 1e-3 and a cosine decay learning rate strategy with a power of 0.9. For CULane, TuSimple, ONCE-3DLane, and OpenLane, we train our model on the respective training sets for 15, 70, 10, and 10 epochs, respectively. ### Dataset To show the effectiveness of the proposed method, we conduct our experiments on two representative 2D lane detection benchmarks, TuSimple [22] and CULane [20]. We also evaluate our method on two recently proposed 3D lane detection datasets, ONCE-3DLane [20] and OpenLane [2]. **TuSimple** is one of the most popular benchmarks in this area, containing 3,268 images in the training set, 358 for validation, and 2,782 for testing. All of the images are captured in highway scenes at a resolution of \(1280\times 720\). 
**CULane** contains 9 different challenging factors, _e.g.,_ crowded, night, and cross scenarios, to evaluate the robustness of the model. It is also a large-scale dataset with a total of 88,880 frames for the training set, 9,675 for the validation set, and 34,680 for the test set. **ONCE-3DLane** is a recently proposed 3D lane detection benchmark built on the large-scale autonomous driving dataset ONCE [22], which contains more than one million scenes. ONCE-3DLane manually labeled the 2D lanes and automatically generated the corresponding 3D lane labels based on the LiDAR point clouds. This dataset contains 5,000 scenes for the training set, 3,000 for the validation set, and 8,000 scenes for the testing set. **OpenLane** is a large-scale realistic 3D lane detection benchmark built on the well-known Waymo Open dataset [20]. OpenLane contains more than 200,000 frames and over 880,000 carefully annotated lanes, and each frame can contain as many as 24 different lanes. **Metrics** The F1-measure is the most commonly used metric in this area and is employed as the main metric in CULane [20], ONCE-3DLane [20] and OpenLane [2]. Firstly, all the lanes are assumed to be 30 pixels wide and the IoU values between the predicted lanes and the ground truth are calculated. Based on this, predictions with IoU values larger than a given threshold (0.5 by default) are considered positive, and the F1 metric is defined as: \[F_{1}=\frac{2\times\text{ Precision }\times\text{ Recall}}{\text{Precision }+\text{ Recall}},\] where Precision\(=\frac{TP}{TP+FP}\) and Recall\(=\frac{TP}{TP+FN}\). Following [22], to evaluate the overall performance of the models, we employ the mF1 metric in the style of the COCO [14] detection dataset: \[\mathrm{mF1}=(\mathrm{F1}@50+\mathrm{F1}@55+\cdots+\mathrm{F1}@95)/10,\] where \(\mathrm{F1}@p\) is the \(F_{1}\) value with a \(p\%\) IoU threshold. For TuSimple [20], Accuracy is employed as the main metric: \[\text{Accuracy }=\frac{C_{pred}}{N_{gt}},\] where \(C_{pred}\) is the number of correctly predicted lane points and \(N_{gt}\) is the number of ground-truth lane points. Figure 4: **Visualization of the predictions of DecoupleLane.** The predictions of DecoupleLane and the corresponding ground truth on CULane and TuSimple are shown. The color of each lane is used to discriminate different instances. ### State-of-the-art Comparison **CULane** Table 1 shows our comparison with state-of-the-art methods on CULane. We group the methods into three categories based on their representation types. Among all the methods, our DecoupleLane achieves the best results and sets a new state-of-the-art performance. Fig. 4 shows some visual comparisons with other methods on CULane. Compared with other methods, DecoupleLane fits the lanes smoothly and easily handles challenging scenes such as occlusion. **TuSimple** Table 2 shows our comparison on the TuSimple benchmark. Compared with CULane, TuSimple has more curved lanes. Our method still performs better than all of the compared methods and sets a new state-of-the-art performance. **ONCE-3DLane** For the quantitative comparison, as ONCE-3DLane does not release the labels of its testing set or provide an online evaluation server, we report the comparison on the validation set in Table 3. As shown in this table, our DecoupleLane outperforms all of the other state-of-the-art 3D lane detection methods and achieves an F1 score of 75.07%. For the qualitative comparison, Fig. 5 shows the illustrations of the proposed DecoupleLane. Fig. 
5 (a) shows the projected 2D lanes in the front image, which can ideally match the real lanes in the image, even though the ground is not flat. Fig. 5 (b) (c) shows the 3D lanes in a Left-hand Cartesian System and the BEV space. From these figures, we can find that although the lanes show fluctuating status in the perspective view and the 3D space because of the uneven ground, our model can discriminate their original relationships in the BEV space. **OpenLane** To show the generality of our method, we also employ OpenLane for comparison. Table 4 shows that our method can work with this dataset well and outperform other newly proposed methods. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline **Method** & **mF1** & **F1@50** & **F1@75** & **FPS** & **Normal** & **Crowed** & **Dazzle** & **Shadow** & **No line** & **Arrow** & **Curve** & **Night** \\ \hline SCNN[19] & - & - & 90.30 & - & 85.90 & 63.60 & 57.00 & 69.30 & 40.60 & 79.40 & 65.20 & 7013 & 57.80 \\ PINet [20] & 46.81 & 74.40 & 51.33 & 25 & 90.30 & 72.30 & 66.30 & 68.40 & 49.80 & 83.70 & 65.20 & 1427 & 67.70 \\ Lane[21] & 49.57 & 76.68 & 54.34 & 129 & 92.14 & 75.03 & 66.47 & 78.15 & 49.38 & 83.67 & 67.23 & 1330 & 70.73 \\ Lane[21] & 50.42 & 77.41 & 56.79 & 20 & 91.80 & 75.61 & 71.78 & 79.12 & 51.38 & 86.88 & 72.30 & 1360 & 73.03 \\ SGNet [21] & - & 77.27 & 92.00 & - & 92.07 & 75.41 & 67.75 & 74.31 & 50.90 & 89.77 & 69.65 & 1373 & 72.69 \\ FOLGame [20] & - & 78.80 & 40.00 & - & 92.70 & 77.80 & 75.20 & 79.30 & 52.10 & 89.00 & 69.40 & 1569 & 74.50 \\ CondLane[21] & 53.11 & 78.74 & 59.39 & 128 & 93.38 & 77.14 & 71.17 & 79.93 & 51.85 & 89.89 & 73.88 & 1387 & 73.92 \\ CondLane[21] & 54.83 & 79.48 & 61.23 & 47 & 93.47 & 77.44 & 70.93 & 80.91 & 54.13 & 90.16 & 75.21 & 1021 & 74.80 \\ CLRNet[21][22] & 55.14 & 79.73 & 62.11 & 103 & 93.49 & 78.06 & 74.57 & 79.92 & 54.01 & 90.59 & 72.77 & 2116 & 75.02 \\ CLRNet[21][22] & 55.64 & 80.47 & 62.78 & 94 & 93.73 & 79.59 & 75.30 & **82.51** & 54.58 & **90.62** & 74.13 & 1155 & 75.37 \\ \hline \multicolumn{12}{c}{Curve-based Method} \\ \hline LSRR[14] & - & 68.72 & - & 47 & 86.78 & 67.34 & 56.63 & 59.82 & 40.10 & 78.66 & 56.64 & 1166 & 59.92 \\ B\(\ddot{e}\)i\_earLaneNet(ResNet-18)[19] & - & 73.67 & - & 213 & 90.22 & 71.55 & 62.49 & 70.91 & 45.30 & 84.09 & 58.98 & 996 & 68.70 \\ B\(\ddot{e}\)i\_earLaneNet(ResNet-34)[19] & - & 75.57 & - & 150 & 91.59 & 73.20 & 69.20 & 76.74 & 48.05 & 87.16 & 62.45 & **888** & 69.90 \\ \hline **DecoupleLane(ours)** & **56.32** & **80.82** & **63.25** & 110 & **93.85** & **80.32** & **76.31** & 82.34 & **55.40** & 90.48 & **75.12** & 1035 & **75.40** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with other state-of-the-art lane detectors on CULane benchmark. For a fair comparison, the fps metric of DecoupleLane in this table is reported on a single GTX 1080ti GPU. 
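For reference, the F1, mF1, and Accuracy metrics defined in the Metrics paragraph above can be computed as follows; the counts and per-threshold values are arbitrary illustrative numbers, not figures taken from any of the tables.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from lane-level true/false positives and false negatives at a given IoU threshold."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mf1(f1_at_thresholds) -> float:
    """mF1: the mean of F1@50, F1@55, ..., F1@95 over the ten IoU thresholds."""
    assert len(f1_at_thresholds) == 10
    return sum(f1_at_thresholds) / 10.0

def tusimple_accuracy(correct_points: int, gt_points: int) -> float:
    """TuSimple Accuracy: correctly predicted lane points divided by ground-truth points."""
    return correct_points / gt_points

# Illustrative numbers only:
print(round(f1_score(tp=800, fp=150, fn=120), 4))                        # F1 at one threshold
print(round(mf1([0.80, 0.78, 0.75, 0.71, 0.66,
                 0.60, 0.52, 0.42, 0.30, 0.15]), 4))                     # COCO-style average
print(round(tusimple_accuracy(correct_points=4500, gt_points=4800), 4))  # TuSimple Accuracy
```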
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **F1 (\%)** & **Acc (\%)** & **FP (\%)** & **FN (\%)** \\ \hline \multicolumn{5}{c}{Segmentation-based Method} \\ \hline SCNN [19] & 95.97 & 96.53 & 6.17 & **1.80** \\ RESA [22] & 96.93 & 96.82 & 3.63 & 2.48 \\ UFLD (Qin, Wang, and Li) [20] & 88.02 & 95.86 & 18.91 & 3.75 \\ \hline \multicolumn{5}{c}{Point-based Method} \\ \hline LaneATT[21] & 96.77 & 95.63 & 3.53 & 2.92 \\ LaneATT[21] & 96.06 & 96.10 & 5.64 & 2.17 \\ CondLane[21] & 96.98 & 95.37 & 2.20 & 3.82 \\ CondLane[21] & 97.24 & 96.54 & **2.01** & 3.50 \\ CLRNet[21][22] & 97.82 & 96.87 & 2.27 & 2.08 \\ \hline \multicolumn{5}{c}{Curve-based Method} \\ \hline PolyLaneNet[21] & 90.62 & 93.36 & 9.42 & 9.33 \\ BézierLaneNet(ResNet-18) [19] & - & 95.41 & 5.30 & 4.60 \\ BézierLaneNet(ResNet-34) [19] & - & 96.54 & 5.10 & 3.90 \\ \hline DecoupleLane(ours) & **97.93** & **97.01** & 2.03 & 3.31 \\ \hline \hline \end{tabular} \end{table} Table 2: State-of-the-art results on TuSimple. Additionally, F1 was computed using the official source code. ### Ablation Study In this section, we study the influence of the three proposed modifications, _i.e._, the unified lane detection structure (Unify), the DecoupleHead (Decouple), and the Curve/HeightFormer module (Former). For the baseline model, we employ a point-based 2D lane detection head on our backbone model and employ the same anchors to extract feature maps. Table 5 shows the effect of the corresponding modules. According to this table, all three components have a positive effect on performance, and the DecoupleHead contributes the largest share of the improvement. Besides the modules, we also compare different curve representations in Table 6. We first compare polynomials of different orders: the third-order polynomial performs better than the second-order polynomial, but the fourth-order polynomial performs comparably with the third-order one. Bézier curves show no improvement over the polynomials in our Decouple Head. Therefore, to achieve the best balance between performance and efficiency, we choose the third-order polynomial as our curve representation. ## Conclusion In this paper, we analyze the challenges of curve-based lane detection methods and find that irregular lane shapes in the perspective view are caused by ground fluctuations. To address this issue, we propose DecoupleLane, which better fits lane shapes and fully utilizes the holistic representation of parameterized curves. The Decouple Head, a core module in our method, models the curve in the BEV space and regresses the ground heights separately. Additionally, DecoupleLane unifies the 2D and 3D lane detection tasks by employing a single 3D lane detection head and treating 2D lanes as projections in the perspective space. This approach improves the model's understanding of the real distribution of lanes in the 2D lane detection task, and the large amount of 2D lane data improves the robustness of the model in the 3D lane detection task. We evaluate our method on two representative 2D lane detection benchmarks and two 3D lane detection datasets, and our method achieves state-of-the-art performance in all cases.
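As an illustration of the decoupled representation discussed above (a third-order polynomial modeled in the BEV space plus a separately regressed ground height), here is a minimal, hypothetical sketch; the parameterization and the pinhole projection used for visualization are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch of a "decoupled" lane: a third-order polynomial in the
# BEV (x-y ground plane) plus a separate ground-height profile h(y).
def lane_bev(y, coeffs):
    """x(y) = a0 + a1*y + a2*y^2 + a3*y^3 in the BEV plane."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * y + a2 * y**2 + a3 * y**3

def lane_3d(y, curve_coeffs, height_coeffs):
    """Assemble 3D lane points (x, y, z), with z regressed independently of x(y)."""
    x = lane_bev(y, curve_coeffs)
    z = np.polyval(height_coeffs[::-1], y)   # ground height h(y), e.g. a low-order polynomial
    return np.stack([x, y, z], axis=-1)

def project_to_image(points_3d, K, T_cam_from_ground):
    """Project 3D lane points into the image with a simple pinhole model (illustrative only)."""
    pts_h = np.concatenate([points_3d, np.ones((len(points_3d), 1))], axis=1)
    cam = (T_cam_from_ground @ pts_h.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Even a perfectly regular BEV curve appears "zigzagged" in the image when h(y) fluctuates.
y = np.linspace(5.0, 60.0, 56)
pts = lane_3d(y, curve_coeffs=(1.8, 0.01, 0.0, 0.0), height_coeffs=(0.0, 0.0, 0.002))
```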
\begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **mF1** & **F1@50** & **F1@75** \\ \hline second-order polynomial & 51.25 & 76.32 & 57.36 \\ third-order polynomial & **56.32** & **80.82** & **63.25** \\ fourth-order polynomial & 56.15 & 80.32 & 63.14 \\ Bézier curves & 55.26 & 80.14 & 62.60 \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study of different curve representation methods. Results are reported on CULane. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **All** & **Up \& Down** & **Curve** & **Night** & **Intersection** \\ \hline 3D-LaneNet (Garnett et al., 2019) & 44.1 & 40.8 & 46.5 & 41.5 & 32.1 \\ Gen-LaneNet (Guo et al., 2020) & 32.3 & 25.4 & 33.5 & 18.7 & 21.4 \\ PersFormer (Chen et al., 2022) & 50.5 & 42.4 & 55.6 & 46.6 & 40.0 \\ CurveFormer (Bai et al., 2022) & 50.5 & **45.2** & 56.6 & **49.1** & 42.9 \\ DecoupleLane & **51.2** & 43.5 & **57.3** & 48.9 & **43.5** \\ \hline \hline \end{tabular} \end{table} Table 4: State-of-the-art results on OpenLane. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Unify** & **Decouple** & **Former** & **mF1** & **F1@50** & **F1@75** \\ \hline & & & 51.90 & 78.37 & 58.32 \\ ✓ & & & 52.80 & 78.27 & 59.50 \\ ✓ & ✓ & & 54.74 & 78.91 & 61.77 \\ ✓ & ✓ & ✓ & **56.32** & **80.82** & **63.25** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation studies of each component in our method. Results are reported on CULane. Figure 5: **Illustration of the predictions on the ONCE-3DLane benchmark.** (a) The projected lane lines in the perspective view. (b) Illustration of the 3D lanes in a left-hand Cartesian system. (c) Illustration of the 3D lanes in the BEV space. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **F1 (\%)** & **Precision (\%)** & **Recall (\%)** & **CD error (m)** \\ \hline 3D-LaneNet (Garnett et al., 2019) & 44.73 & 61.46 & 35.16 & 0.127 \\ Gen-LaneNet (Guo et al., 2020) & 45.59 & 63.35 & 35.42 & 0.121 \\ PersFormer (Chen et al., 2022) & 74.33 & 80.30 & 69.18 & 0.074 \\ Anchor3DLane (Huang et al., 2023) & 74.44 & 80.50 & 69.23 & 0.064 \\ DecoupleLane & **75.07** & **81.19** & **69.26** & **0.062** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of the proposed DecoupleLane on the ONCE-3DLane validation set.
Curve-based lane representations are popular in many lane detection methods because they describe a lane holistically and can therefore exploit holistic information about the lane. However, the curves produced by these methods often fail to fit irregular lane lines, which can lead to a performance gap compared with segmentation-based or point-based alternatives. In this paper, we observe that these lanes are not inherently irregular; they only appear zigzagged in the perspective view because of uneven pavement. We therefore propose a new approach that decouples the lane detection task into two parts: curve modeling and ground height prediction. Specifically, a parameterized curve in the BEV space is used to represent a lane, reflecting the original distribution of lanes. For the second part, since the ground height is determined by natural factors such as the road condition and relies on holistic information, it is predicted separately from the curve.
2301.13618
Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning
Many real-time applications (e.g., Augmented/Virtual Reality, cognitive assistance) rely on Deep Neural Networks (DNNs) to process inference tasks. Edge computing is considered a key infrastructure to deploy such applications, as moving computation close to the data sources enables us to meet stringent latency and throughput requirements. However, the constrained nature of edge networks poses several additional challenges to the management of inference workloads: edge clusters can not provide unlimited processing power to DNN models, and often a trade-off between network and processing time should be considered when it comes to end-to-end delay requirements. In this paper, we focus on the problem of scheduling inference queries on DNN models in edge networks at short timescales (i.e., few milliseconds). By means of simulations, we analyze several policies in the realistic network settings and workloads of a large ISP, highlighting the need for a dynamic scheduling policy that can adapt to network conditions and workloads. We therefore design ASET, a Reinforcement Learning based scheduling algorithm able to adapt its decisions according to the system conditions. Our results show that ASET effectively provides the best performance compared to static policies when scheduling over a distributed pool of edge resources.
Gabriele Castellano, Juan-José Nieto, Jordi Luque, Ferrán Diego, Carlos Segura, Diego Perino, Flavio Esposito, Fulvio Risso, Aravindh Raman
2023-01-31T13:23:34
http://arxiv.org/abs/2301.13618v1
# Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning ###### Abstract Many real-time applications (e.g., Augmented/Virtual Reality, cognitive assistance) rely on Deep Neural Networks (DNNs) to process inference tasks. Edge computing is considered a key infrastructure to deploy such applications, as moving computation close to the data sources enables us to meet stringent latency and throughput requirements. However, the constrained nature of edge networks poses several additional challenges to the management of inference workloads: edge clusters can not provide unlimited processing power to DNN models, and often a trade-off between network and processing time should be considered when it comes to end-to-end delay requirements. In this paper, we focus on the problem of scheduling inference queries on DNN models in edge networks at short timescales (i.e., few milliseconds). By means of simulations, we analyze several policies in the realistic network settings and workloads of a large ISP, highlighting the need for a dynamic scheduling policy that can adapt to network conditions and workloads. We therefore design ASET, a Reinforcement Learning based scheduling algorithm able to adapt its decisions according to the system conditions. Our results show that ASET effectively provides the best performance compared to static policies when scheduling over a distributed pool of edge resources. ## I Introduction In the last years, we have witnessed the growing popularity of applications leveraging Deep Neural Networks (DNNs), from Augmented/Virtual Reality (AR/VR) to cognitive assistance or video surveillance. The DNN model training process typically does not have strict latency constraints and it is performed _offline_ in well-provisioned centralized data-centers or in a distributed fashion via, e.g., federated learning [1]. Differently, the DNN inference task is usually performed _online_ with constraints in terms of accuracy, throughput, and latency, which may significantly differ across applications. For instance, services like cognitive assistance require high accuracy but may tolerate few hundreds of milliseconds latency, while others, like self-driving cars, have more stringent latency needs (i.e., tens of milliseconds). Providing an inference service requires to address several challenges to meet this diverse set of application constraints, e.g., the selection of the appropriate variant of the model to be used (programming framework, compiler optimization, batching size, etc.), the processing unit to leverage for the inference (e.g., GPU, CPU, TPU), and the nodes and resources (e.g., memory, computing) to be allocated to every application. This requires management at different timescale. On a short timescale (i.e, milliseconds), a _scheduler_ is in charge of selecting the appropriate computing instance for every new incoming request to meet its application requirements. This includes not only the selection of the computation node but also the appropriate model variant and computation technology. On a longer timescale (i.e., seconds, minutes), an _orchestrator_ selects the proper model variants to deploy, optimizes their placement across the nodes, and allocates the appropriate resources to them. Recent work [2, 3, 4, 5] focused on data centers and proposed DNN inference workload management for such environments. Further, commercial solutions have been deployed in recent years [6, 7, 8] by major cloud providers. 
Edge computing is considered a key enabler to deploy DNN-based applications with stringent delay or bandwidth requirements, as it moves computation capabilities closer to end-users with respect to centralized cloud platforms. This is especially the case for users connected via mobile access (e.g. 5G). However, realizing DNN inference at the edge poses several additional challenges. Edge infrastructures are indeed complex networks composed of several layers with heterogeneous limited resources and different latencies to end users [9]. Due to the less availability of resources at edge, multiple inference models of different capacities should be considered, and end-to-end delay requirements may lead to considering a trade-off between network delay and processing time. This differs from centralized cloud platforms, which usually feature large pools of uniform hardware available in a single location where DNN models can be scaled up almost indefinitely. For these reasons, the optimal selection of inference models while scheduling real-time requests at Edge is still a challenging task. Recent work combined edge computing and deep learning [10], with a focus on scheduling requests to minimize end-to-end delay [11] or maximize accuracy [12]. However, none of the existing work analyzes inference workload optimization taking into account different application constraints in realistic edge network settings. In this paper, we focus on the problem of _scheduling_ DNN inference requests taking into account not only accuracy (i.e., model selection) but also throughput and latency constraints under realistic edge deployment settings. First, we model our distributed edge inference system and provide a definition of the scheduling problem (Section III), also proposing several baseline static scheduling policies both original and from literature. From evaluating static policies on a realistic network topology, we observe that a policy that always performs better does not exist, as different applications may benefit differently from each scheduling strategy. Based on the insights derived by this analysis we propose ASET1 (Adaptive Scheduling of Edge Tasks), an adaptive scheduling algorithm based on Reinforcement Learning (Section IV), which dynamically follows system conditions and apps requirements optimizing its decisions accordingly. We evaluate ASET simulating three topologies based on the realistic network of a large ISP and using a pool of reference edge applications (Section V). Our findings show that, while some static policies are well suited to optimize workloads on cloud-based topologies, ASET improves performance over any static policy when resources are distributed across the edge network, effectively increasing the percentage of successfully handled queries. Footnote 1: In ancient Egyptian mythology, Aset was a major goddess said to have power over fate itself. ## II Related Work The provisioning of on-demand inference services has been investigated in several recent works. **Inference scheduling in data centers**. Most of the existing solutions address the common scenario where inference queries have to be scheduled over the resources of a Data Center. Some of the main production systems are Tensorflow Serving [6], Azure ML [7], and Cloud ML [8]. Most scientific works focused on proposing algorithms and strategies to improve the performance and ease of use of such cloud inference systems. 
[2] and [3] address the problem of scheduling Directed Acyclic Graph (DAGs) tasks with the objective of improving the throughput; GrandSLAMm [2] relies on a prediction model that estimates job duration, while [3] proposes an efficient RL approach to select the number of servers to allocate for a given job. Being oriented to a Cloud infrastructure, none of them takes into account network latency between the servers and their heterogeneity. In [13] a Model Master manages the dynamic allocation of DNN models across the servers of a heterogeneous data center based on Azure ML, and proposes a protocol among servers to forward queries to the correct destination. Clipper [4] provides a generalization of TensorFlow Serving [6] to enable the usage of different frameworks. One of the most complete solutions is provided by INFaaS [5], which focuses on ease of use, providing transparent scheduling of incoming queries over available model variants, and autoscaling of deployed models based on load thresholds. However, all the previous works address the scheduling problem only from the boundaries of a data center, considering neither _(i)_ network latency, thus becoming no suitable in scenarios with real-time constraints, nor _(ii)_ resource constrained clusters, thus failing to address situations where workers cannot be indefinitely scaled up/out. **Inference offloading**. Another related set of works concerns offloading, with a focus on the end-devices. While offloading has been widely studied in the literature [14, 15], the specific use case of DNN workload introduces additional degrees of freedom (e.g., model variant selection and configuration) that can be exploited for improving optimization over the mere selection of the task placement. Some recent works [16, 17, 18] provides intelligent offloading techniques for DNN tasks. DeepDecision [17] addresses the problem in the particular case of a single device running a single application; queries are scheduled among a series of local small models providing different performance/requirements trade-off, and one remote model, which provides the best performance. On the other hand, LinkShare [18] focuses on the orthogonal problem of ordering the offloaded requests from multiple apps on the same device, with the main constraint of network bandwidth. MCDNN [16] proposes a scheduler to handle queries from multiple applications on the same device, deciding _(i)_ the model variant to be used and _(ii)_ whether to offload the inference task or not, seeking average accuracy maximization. Such decisions are taken considering constraints such as latency requirements, device energy, cloud monetary budget. **Inference and edge computing**. Fewer and more recent are the trends that combine DNN with edge computing [10], with the aim of overcoming scalability and latency limitations of cloud computing. The use of edge computing brings additional challenges deriving from the high resource requirements of DNN based tasks on less powerful edge compute resources. Despite some issues have been addressed in recent works [11, 12, 19, 20], edge-oriented solutions for inference systems are still largely embryonic compared to data center solutions, with many open challenges. CloudPath [19] focuses on the problem of data distribution on a hierarchical continuum of computing resources between edge and cloud. In [20], authors propose an approach to schedule DAGs across multiple edge servers, seeking minimization of end-to-end latency. 
However, the proposed algorithm assumes the possibility to indefinitely allocate new edge servers when needed, with no geographical restrictions, thus not addressing the problem of constrained resources at the edge. Other works [11, 12] study the problem of processing data streams from scattered devices, exploiting the geographically distributed edge/cloud clusters. In particular, VideoEdge [12] assumes a deployment of cameras generating a known set of video streams, on which various DNN tasks should be performed. The proposed approach decides globally the cluster where each stream should be processed, as well as the model variant to employ and its configuration, considering computation and network bandwidth as constraints and seeking accuracy maximization. However, neither processing nor network latencies are taken as constraints, thus making this approach not suitable for interactive or critical scenarios (e.g., virtual reality, autonomous driving, and more). A similar use case is analyzed in [11], which focuses on minimizing the end-to-end latency processing data flowing from the edge to the cloud. However, it only considers the problem of task allocation, missing the possibility to optimize properly selecting model variants and their configurations. To the best of our knowledge, none of the existing works on inference serving systems addresses the problem simultaneously considering _(i)_ end-to-end latency, accuracy, and throughput constraints, _(ii)_ edge-cloud computing and multi-cluster deployment, _(iii)_ real-time job dispatching, _(iv)_ optimization on model variant selection. ## III Scheduling in edge-cloud infrastructure In this section, we formally define the problem of scheduling inference tasks on a distributed edge-cloud infrastructure. Additionally, we describe a set of static scheduling policies (both original and from literature), that we then use in Section IV as a baseline for our dynamic scheduling approach. ### _System modeling_ **Applications and data-streaming sources.** We consider a set of sources (e.g., end users, IoT devices, vehicles) running a variety of applications (e.g., virtual reality, autonomous driving) each relying on one or more DNN inference tasks. Every application generates _queries_ to be processed, i.e., each query represents the request to perform a specific inference task \(j\in J\) (e.g., object detection, speech recognition) on a given input (e.g., a video frame), where \(J\) is the set of inference tasks supported by the system. Since applications often require more than one query to be processed, we treat sequential queries as streams (e.g., all the frames captured by an AR headset). Therefore, each query \(q\) belongs to a stream \(i\in I\), being \(I\) the entire set of streams currently served by the system. Every query of a stream has a set of requirements such as a maximum end-to-end delay \(D^{i}\), and a minimum required accuracy \(A^{i}\). Additionally, every stream \(i\) has a _data rate_\(\rho_{i}\), that is the number of queries submitted each second (e.g., frame rate), and every query of stream \(i\) has an input of _size_\(\zeta_{i}\) (e.g., frame size). Note that all queries of a stream are for the same task \(j\in J\) with the same requirements. **DNN Models and Variants.** Every inference task \(j\) can be served using a _Deep Neural Network model_\(m\) among the set of \(M^{j}\) models that are trained for task \(j\). Therefore, the system provides a total of \(N_{m}=\sum_{j\in J}|M^{j}|\) DNN models. 
Take object detection as an example application. A model \(m\) represents a particular Neural Network architecture with pre-trained weights (e.g., yolo-v3, ssd-mobilenet-v1), and features a given accuracy \(A_{m}\) (mean average precision - mAP). A model \(m\) can be deployed and run through different setups and underlying hardware (e.g., SSD Mobilenet v1 on _(i)_ Tensorflow-GPU with batch size 8, or on _(ii)_ OpenCV-CPU batch size 1 and 2 replicas, and more), thus obtaining a set \(V^{m}\) of different _model variants_. A model variant \(v\) features a given _processing delay_\(D_{v}\), throughput _capacity_\(C_{v}\) (i.e., the maximum number of queries it can process per second), and _resource usage_\(\mathbf{r}_{v}\in\mathbb{R}_{+}^{k}\) (e.g., in terms of CPU, system memory and GPU memory). Note that the processing delay may vary based on the size \(\zeta_{i}\in\mathbb{R}_{+}\) of the input data, thus it is a function \(D_{v}\colon\mathbb{R}_{+}\to\mathbb{R}_{+}\); with \(D_{v}\) we refer to the time needed to process the maximum input size supported by the model (analogous considerations hold for the capacity \(C_{v}\)). **Network topology and computing clusters.** We consider a geographically distributed cloud-edge infrastructure composed by \(N_{\nu}\) computing _clusters_ (e.g., a centralized data center, a telco regional cloud, an eNodeB) typically organized in a hierarchical topology. Each cluster potentially provides different resources. We denote \(\mathbf{c}_{n}\in\mathbb{R}_{+}^{k}\) the overall capacity of cluster \(n\), with \(c_{nk}\) representing the amount of resource \(k\in\mathbb{N}\) available on cluster \(n\). Examples of resources include CPU, system memory and GPU memory. Model variants are deployed at different computing clusters consuming a different amount of resources. On a long timescale (i.e., seconds, minutes), an _orchestrator_ selects the appropriate set of model variants to deploy, optimizes their placement across the clusters, and allocates the appropriate resources. Finally, stream sources are connected to a small cluster at the lower layer of the hierarchy. This can be either the antenna/eNodeB in case of cellular communication or the home gateway in the fixed access case. Queries need to be _scheduled_ for processing across model variants available at different computing clusters to meet application requirements on a short timescale (i.e., tens of milliseconds). In the following, we provide a definition of the scheduling problem we tackle in this paper. ### _Scheduling problem definition_ We assume a scheduler is located at the nearest compute cluster available to existing stream sources, i.e., antenna/eNodeB or the home gateway/central office in the fixed access case. It follows every stream source is served by a _scheduler_\(s\) among \(N_{s}\) different ones (one per each lower layer cluster). Each scheduler \(s\) has a given average network delay \(d_{n}^{s}\) towards each cluster \(n\); we also model the associated delay deviation as \(\sigma_{n}^{s}\). Note that an additional access delay from the stream source to the scheduler has to be taken into account (e.g, the radio latency between a device and the nearest 5G antenna). We denote \(\delta_{i}\) the additional access delay that affects stream \(i\). Every scheduler is aware of each model variant \(v\) currently available on each cluster \(n\), each with its current load \(L_{vn}(t)\) (measured in terms of incoming queries per second2). Based on the current conditions of the available Fig. 
1: The scheduler dispatches streams of queries on available model variants based on their constraints and geographical position of clusters. model variants, for every stream \(i\) it serves, a scheduler \(s\) decides which model variant \(v\) on which cluster \(n\) should be used to process stream \(i\). When scheduling a stream \(i\) to the proper model variant/cluster, the scheduler takes into account application requirements. Specifically, it considers the stream data size \(\zeta_{i}\), its data rate \(\rho_{i}\), its bit rate \(b_{i}\), the maximum tolerated end-to-end delay \(D^{i}\) and the minimum required accuracy \(A^{i}\), satisfying the following constraints: _(i)_ the selected model variant \(v\) is a valid implementation of task \(j\) required by \(i\), \[v\in V^{m}\wedge m\in M^{j}; \tag{1}\] _(ii)_ the load capacity of the chosen model variant is not exceeded, \[L_{vn}(t)+\eta_{v}^{i}\rho_{i}\leq C_{v}, \tag{2}\] being \(\eta_{v}^{i}\) the fractional load of stream \(i\) for model variant \(v\); _(iii)_ the sum of expected network delay and processing time does not exceed the maximum tolerated delay, \[2(\delta_{i}+d_{n}^{s}+2\sigma_{n}^{s})+b_{i}\zeta_{i}+D_{v}(\zeta_{i})\leq D^ {i}, \tag{3}\] where the first addendum is the round-trip propagation time, the second is the transmission delay for one query and the third is the time needed to process the query; _(iv)_ the selected model provides an adequate accuracy \[A_{m}\geq A^{i}. \tag{4}\] A graphical representation of the scheduling problem is depicted in Figure 1, while a scheduling policy can be formally defined as follows. **Definition 1**.: _(scheduling policy). Let us consider a stream \(i\) to be processed through a task \(j\) on an edge-cloud infrastructure that features a set of \(V^{m}\) compatible model variants over \(N_{\nu}\) clusters (\(|N|=N_{\nu}\)). A scheduling policy is any function_ \[\beta\colon I\to V^{m},N \tag{5}\] _that binds stream \(i\) to a feasible model variant \(v\in V^{m}\) deployed on cluster \(n\in N\), so that constraints at Equations (1), (2), (3), and (4) are satisfied._ Note that, as requests are handled in real-time, scheduler decisions should be taken in an amount of time that is negligible compared to the stream latency requirements. **Scheduling performance metrics and objectives.** Based on the scheduling decisions, in a given time instant \(t\) the stream \(i\) will feature a _reject ratio_\(q_{i}^{R}(t)\in[0,1]\), i.e., the fraction of queries from stream \(i\) that have not been processed by the system because of resource unavailability, and a _failure ratio_\(q_{i}^{F}(t)\in[0,1]\), i.e. the fraction of queries that have been served violating one or more application requirements (i.e., delivered out of maximum tolerated delay). The goal of the scheduler is typically to maximize, over time, the fraction of queries that are served successfully, i.e., to minimize the sum of reject ratio and failure ratio. ### _Static scheduling policies_ Several policies have been proposed for static scheduling of inference tasks on edge clusters [9, 21]. In this work we consider the following ones (both original and from literature): _1) closest:_ bind stream \(i\) to any feasible model variant \(v^{*}\) located on the cluster \(n^{*}\) that features the lower network latency to serving scheduler \(s\), i.e., \(n^{*}=\operatorname*{arg\,min}_{n\in N}\left(d_{n}^{s}+2\sigma_{n}^{s}\right)\). 
This policy may lead to the early saturation of smaller clusters at the very edge, as they are always preferred [22]. _2) load balancing:_ bind the input stream to model variant \(v^{*}\) on cluster \(n^{*}\) such that \((v^{*},n^{*})=\operatorname*{arg\,min}_{v,n\in V^{m}\times N}L_{vn}(t)\). This policy can bring huge performance gains compared to _closest_[22]; however, it may lead to unfair allocation when latency-sensitive applications are in the minority. _3) farthest:_ bind stream \(i\) to any feasible model variant \(v^{*}\) located on the cluster \(n^{*}\) with the highest (still feasible) network latency, i.e. \(n^{*}=\operatorname*{arg\,max}_{n\in N}\left(d_{n}^{s}+2\sigma_{n}^{s}\right)\). As opposed to _closest_, this policy preserves smaller clusters at the very edge for those apps that really need them [23]; however, it is highly affected by the unreliability of network delay for long distance communications. _4) cheaper:_ bind stream \(i\) to model variant \(v^{*}\) on cluster \(n^{*}\) such that the expected end-to-end delay (round-trip and processing time) is maximized, i.e., \((v^{*},n^{*})=\operatorname*{arg\,max}_{v,n\in V^{m}\times N}\left(2(d_{n}^{s}+2\sigma_{n}^{s})+D_{v}(\zeta_{i})\right)\). We designed this policy as an improvement over _farthest_, as it additionally tries to preserve the best-performing model variants. _5) random-proportional latency:_ bind stream \(i\) to model variant \(v\) on cluster \(n\) with probability \(1/(2(d_{n}^{s}+2\sigma_{n}^{s})+D_{v}(\zeta_{i}))\). This guarantees that, over a large enough number of streams, bindings are inversely proportional to end-to-end delays [21]. _6) random-proportional load:_ bind stream \(i\) to model variant \(v\) on cluster \(n\) with probability \(C_{v}/L_{vn}(t)\). This guarantees that, over a large enough number of streams, bindings are proportional to the capacity of each model variant. _7) least impedance:_ bind stream \(i\) to model variant \(v^{*}\) on cluster \(n^{*}\) such that end-to-end latency to \(s\) is minimized, i.e., \((v^{*},n^{*})=\operatorname*{arg\,min}_{v,n\in V^{m}\times N}\left(2(d_{n}^{s}+2\sigma_{n}^{s})+D_{v}(\zeta_{i})\right)\)[21]. This greedy policy leads to the best performance when the overall load is low, but may suffer from a high rejection rate once the closest and fastest model variants are saturated. A minimal code sketch of the feasibility constraints and of two of these policies is given below. Our experiments (Section V) show that, for a heterogeneous pool of applications, a policy that always performs better than the others does not exist: different applications may benefit differently from each scheduling strategy, and the physical topology and the particular stream arrivals can also be decisive. Based on these findings, in the next section we propose ASET, an algorithm for Adaptive Scheduling of Edge Tasks that leverages Reinforcement Learning to optimize its decisions dynamically based on the current system conditions. ## IV ASET Scheduling Algorithm Our adaptive scheduling approach aims to learn the optimal policy depending on current system conditions, e.g., current applications, network topology, and stream arrivals that vary over time. Due to the lack of labeled data, learning the optimal policy is formulated as a Reinforcement Learning (RL) problem; hence, an intelligent agent tries to learn the optimal policy selection strategy according to the observed state of the environment.
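Before detailing the agent, the following minimal sketch (an illustration under assumed data structures, not the paper's implementation) shows how the feasibility constraints (1)-(4) and two of the static policies above, _closest_ and _least impedance_, could be expressed in code.

```python
from dataclasses import dataclass

# Illustrative data structures; field names mirror the notation of Section III.
@dataclass
class Stream:
    task: str            # inference task j
    rate: float          # rho_i, queries per second
    size: float          # zeta_i, input size
    bitrate: float       # b_i
    max_delay: float     # D^i, seconds
    min_accuracy: float  # A^i
    access_delay: float  # delta_i

@dataclass
class Deployment:        # a model variant v deployed on cluster n
    task: str
    accuracy: float      # A_m
    capacity: float      # C_v, queries per second
    load: float          # L_vn(t)
    proc_delay: float    # D_v(zeta), seconds
    net_delay: float     # d_n^s
    net_jitter: float    # sigma_n^s

def feasible(i: Stream, w: Deployment, eta: float = 1.0) -> bool:
    """Check constraints (1)-(4) for binding stream i to deployment w."""
    same_task = (w.task == i.task)                                   # Eq. (1)
    has_capacity = (w.load + eta * i.rate <= w.capacity)             # Eq. (2)
    e2e = 2 * (i.access_delay + w.net_delay + 2 * w.net_jitter) \
          + i.bitrate * i.size + w.proc_delay
    in_time = (e2e <= i.max_delay)                                   # Eq. (3)
    accurate = (w.accuracy >= i.min_accuracy)                        # Eq. (4)
    return same_task and has_capacity and in_time and accurate

def closest(i: Stream, deployments):
    """Policy 1: any feasible deployment on the lowest-latency cluster."""
    options = [w for w in deployments if feasible(i, w)]
    return min(options, key=lambda w: w.net_delay + 2 * w.net_jitter, default=None)

def least_impedance(i: Stream, deployments):
    """Policy 7: feasible deployment minimizing round-trip plus processing time."""
    options = [w for w in deployments if feasible(i, w)]
    return min(options,
               key=lambda w: 2 * (w.net_delay + 2 * w.net_jitter) + w.proc_delay,
               default=None)
```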
This is accomplished by an RL policy that estimates a probability distribution over the possible actions (policy selections) so as to cumulatively maximize a reward (typically maximizing the fraction of queries that are served successfully), as shown in Figure 2. Let us consider a learner and decision-maker called the _agent_, and an _environment_ that is the external world that the agent interacts with at discrete time steps \(t\). Given \(S_{t}\in S\), where \(S\) is the set of possible _states_ of the environment, the agent can select an _action_ \(A_{t}\in A(S_{t})\), where \(A(S_{t})\) is the set of available actions in state \(S_{t}\). The agent receives an observation of the environment \(S_{t}\) at time \(t\) and, one step later, a numerical _reward_ \(r_{t+1}\in R\subset\mathbb{R}\); based on the observation it determines the action \(A_{t}\) to perform, which, in part, leads to the next state \(S_{t+1}\). **Definition 2**.: _(stochastic reinforcement learning policy). An RL policy \(\pi_{\phi}\), where \(\phi\in\mathbb{R}^{d}\) denotes policy parameters, is any function or algorithm that determines the next action to be taken by the agent. A stochastic RL policy, additionally, estimates a probability distribution over the actions that the agent can take at a given state:_ \[\pi_{\phi}\colon\;A\times S\to[0,1], \tag{6}\] \[\pi_{\phi}(a|s)\stackrel{\mathrm{def}}{=}\mathbb{P}(\text{take action }a\mid\text{given state }s).\] Overall, the goal of the proposed adaptive scheduling is to learn an optimal sequence of static network scheduling policies that maximizes the percentage of successfully dispatched streams. Every \(T\) seconds, the RL-based scheduler samples the environment by collecting a variety of observations from the edge-cloud infrastructure, e.g., responses and loads, building up the current state \(S_{t}\) of the environment. Then, the agent evaluates a discrete set \(A\) of actions and chooses an action \(A_{t}\in A\), where \(A\) stands in this work for the set of available network scheduling policies \(\beta\). Note that the set of actions does not depend on the state itself, thus the sets \(A(S_{t})=A\) are the same (Section III-C). Therefore, every time the agent takes an action \(A_{t}\), the state of the environment \(S_{t}\) is observed and a reward score \(r_{t+1}\) is used as feedback information to improve the policy selection, see Figure 2. In this work, these rewards are defined as a linear combination of the ratio of "failed" queries and the ratio of queries that have been "rejected" for lack of available resources (Section IV-C). The particular policy \(\beta_{t}\), selected by the agent at time \(t\), is used to dispatch all incoming streams during the subsequent time window \([t,t+T]\). Therefore, given the corresponding state sequence \(\mathbf{S}=[S_{0},S_{T},S_{2T},...,S_{kT}]\) with \(k\in\mathbb{N}\), the resulting overall scheduling policy \(\beta(\mathbf{S})=[\beta_{0},\beta_{T},\beta_{2T},...,\beta_{kT}]\) dynamically maps, with the corresponding baseline policies \(\beta_{t}\), a stream \(i\) to a model variant \(v\) and its deployment on cluster \(n\). From now on, for the sake of simplicity, we will refer to the policy learned by the ASET agent (Definition 2) as \(\pi\); it leads to a particular static policy sequence \(\beta(\mathbf{S})\). It corresponds to any function employed to estimate the optimal sequence of actions that the agent should perform at each time window \([t,t+T]\) given a state \(S_{t}\), \(\beta(\mathbf{S})=[A_{0},A_{T},A_{2T},...,A_{kT}]\).
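As a rough sketch of this behavior (illustrative pseudocode with assumed helper functions for state sampling and stream dispatching, not the paper's released code), the ASET meta-policy can be pictured as an outer loop that re-selects a static policy every \(T\) seconds:

```python
import time

# Illustrative outer loop of the adaptive scheduler: every T seconds the learned
# policy pi picks one static policy beta_t, which is then used to dispatch all
# streams arriving in the window [t, t+T].
STATIC_POLICIES = ["closest", "load_balancing", "farthest", "cheaper",
                   "rand_prop_latency", "rand_prop_load", "least_impedance"]

def aset_loop(pi, sample_state, next_stream, dispatch, T=25.0):
    """`sample_state`, `pi`, `next_stream`, and `dispatch` are assumed helpers;
    next_stream(timeout) returns the next arriving stream or None on timeout."""
    while True:
        s_t = sample_state(window=T)              # state S_t averaged over the last window
        beta_t = STATIC_POLICIES[pi(s_t)]         # action A_t = chosen static policy
        deadline = time.time() + T
        while True:
            remaining = deadline - time.time()
            if remaining <= 0:
                break
            stream = next_stream(timeout=remaining)
            if stream is not None:
                dispatch(stream, policy=beta_t)   # bind stream to (v, n) using beta_t
```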
The intuition of this behavior is provided in Figure 3. Note that each of the static scheduling policies from Section III-C corresponds to a deterministic agent that always returns the same action \(A_{t}\) independently of the system state; whereas the policy \(\pi\) learned by the ASET agent can be seen as a meta-policy (or as a policy of baseline scheduling strategies) that also satisfies the constraints from Equations (1), (2), (3), and (4). ### _Deep Q-Learning policy optimization_ Our RL agent has to cope with a discrete set of actions, with \(A\subset\mathbb{N}\). This is often modeled in literature as a stochastic process with no memory, which is a Markov Decision Process [24] (MDP). In this work, our MDP defined by tuples \((S,A,\mathcal{T},\mathcal{R},\gamma)\) represents states comprised of partial observations from the system. Nonetheless, the model parameters of such MDP are unknown, i.e., the transition probabilities \(\mathcal{T}(s^{\prime}|a,s)\) and the rewards \(\mathcal{R}(s^{\prime}|a,s)\) of taking the action \(A_{t}=a\) and moving from state \(S_{t}=s\) to state \(S_{t+1}=s^{\prime}\). Note that the ASET agent should experience each transition among states at least once, or even multiple times to get a reliable estimation of both transition and cumulative rewards. At each step \(t=kT\), with \(k\in\mathbb{N}\), the RL agent can choose one of several possible scheduling policy-actions, \(\beta_{t}\equiv A_{t}\). The transition probability \(\mathcal{T}(s^{\prime}|a,s)\) depends in part on the chosen action, and, additionally, from some positive or negative reward that may be returned by every state transition, named _return_ of actions. Overall, our objective is to find a strategy, i.e., a policy \(\pi\) mapping to a sequence \(\beta(\mathbf{S})\), that maximizes the expected return \(G(t)\) of rewards over time. Fig. 3: The ASET RL agent infers the optimal policy sequence based on the system conditions, seeking an optimal binding between workloads and model variants that maximizes the percentage of success queries. Plots show two runs on a cloud-based topology and on an edge-based one (see Section V). Fig. 2: Algorithm overview. State \(S_{t}\), sampled from the environment, is forwarded through the agent DNN, which outputs action \(A_{t}\); performing \(A_{t}\) on the environment contributes to reward \(r_{t+1}\) obtained at the next step. Thus, \(G(t)\) is defined in terms of the cumulative weighted rewards along with states and given the corresponding optimal sequence of actions to take in the future: \[G(t)=\sum_{\tau=0}^{H}\gamma^{\tau}r_{\tau}\qquad\gamma\in[0,1], \tag{7}\] where \(r_{\tau}=\mathcal{R}(s^{\prime}|a,s)\) is the reward at time step \(\tau\) due to corresponding state transition \((s,s^{\prime})\), \(\gamma\) is a weighting factor that reduces the contribution of long-term rewards, usually known as the discount factor, and time \(H\) is the last time step within a training episode (see Section IV-C for further details). Therefore, the RL agent's target policy is \[\pi^{*}(a|s)=\operatorname*{arg\,max}_{\pi_{\phi}}\mathbb{E}_{t^{*}\sim\pi_{ \phi}}\left\{G(t)\right\}, \tag{8}\] which translates the scheduler state into a distribution over actions, see Definition 2. Note that the expectation is computed over the distribution of trajectories \(t^{*}=(s_{0},a_{0},s_{1},...)\). 
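A minimal numeric illustration of the discounted return of Eq. (7), together with a greedy action choice over estimated action values as used in the Q-Learning formulation that follows, is sketched below; the reward values are made up purely for the example.

```python
def discounted_return(rewards, gamma=0.99):
    """G(t) = sum_{tau=0..H} gamma^tau * r_tau, cf. Eq. (7)."""
    return sum((gamma ** tau) * r for tau, r in enumerate(rewards))

def greedy_action(q_values):
    """Pick the action (static policy index) with the highest estimated value."""
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Made-up reward trace: mostly successful time windows with one bad one.
rewards = [0.95, 0.90, 0.40, 0.85, 0.92]
print(round(discounted_return(rewards, gamma=0.9), 3))
```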
In Q-Learning, the optimal pair values \((s,a)\), i.e., those yielding to the sequence of optimal actions, are generally called Quality-Values (Q-Values) and noted as \(Q^{*}(s,a)\)[25]. They correspond to the sum of weighted rewards that the RL agent can expect on average after performing action \(a\) on state \(s\). It is also known as the _expected return of actions_, \[Q(s,a)=\mathbb{E}_{t^{*}\sim\pi_{\phi}}\left\{G_{t}|S_{t}=s,A_{t}=a\right\}. \tag{9}\] Bellman [24] showed that if an agent's trajectory follows the highest Q-Values, then its policy is optimal and leads to the highest \(G(t)\) as well. Bellman also reported that an accurate estimate of Q-Values can be found recursively by using the _Bellman Optimality Equation_, also known as the Value Iteration algorithm. In fact, Q-Learning is an adaptation of Bellman's value iteration algorithm, where a policy is implicitly, or off-line, learned by following trajectories yielding to the highest Q-Values [25]. It is usually computed by dynamic programming and assumes that the optimal value of state \(S_{t}=s\) is equal to the reward it will get on average, after taking one optimal action \(a\) and adding the expected optimal value of all possible next states along the future path of decisions, that is \(Q(s,a)=\mathbb{E}_{\pi}\left\{r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime })|s,a\right\}\). Equation (9) turns out into the following iteration algorithm, which converges to the optimal \(Q^{*}(s,a)\), \[Q_{k+1}(s,a)\leftarrow\sum_{s^{\prime}}\mathcal{T}(s,a,s^{\prime})\big{[}r+ \gamma\max_{a^{\prime}}Q_{k}(s^{\prime},a^{\prime})\big{]}, \tag{10}\] for all \(s^{\prime}\in S\), \(a^{\prime}\in A\) and \(k\in\mathbb{N}\) as iteration step. For simplicity, we set the transition probability matrix \(\mathcal{T}\) to all elements equal to \(1\), allowing initial transitions among all seen states. Once Q-Values are estimated, the optimal policy \(\pi^{*}\) for the RL agent corresponds to chose the action that has the highest Q-Values: \(\pi^{*}(a|s)=\operatorname*{arg\,max}_{\pi}Q_{\pi}(s,a)\), for all \(s\in S\) and \(a\in A\equiv\beta\) static policies in Section III-C. However, previous algorithm does not scale for large MDPs with a large number of states. A solution is to approximate the optimal \(Q^{*}(s,a)\) using a Deep Neural Network, named Deep Q-Network (DQN) [26], to get an estimate \(Q(s,a;\phi)\approx Q^{*}(s,a)\), where \(\phi\) stands for the parameters of the DQN model, see line 16 in Algorithm 1. The using of a DQN for approximate Q-Learning is known as Deep Q-Learning. ### _State Encoding_ We model the state in a continuous fashion, representing the environment in a given time \(t\) as a set of some particular features sampled from the system and averaged along a time window of size \(T\). 
Features are evaluated separately for each available worker \(w\in W\),3 and are as follows: _(i)_ the number \(|I_{w}|\) of streams currently served by worker \(w\), with \(I_{w}=\{i\in I|\ \beta(i)=(v,n)\}\); _(ii)_ the current throughput \(R_{w}(t)\) of worker \(w\), in terms of responses delivered at the time instant \(t\); _(iii)_ the current load \(L_{w}(t)\), measured in terms of queries per second, normalized by input size (as defined in Section III-B); _(iv)_ the number of incoming instant queries grouped by stream characteristics, e.g., queries of all streams that require an end-to-end delay within a given range \([\delta^{1},\delta^{2}[\) and feature a data rate in the interval \([\rho^{4},+\infty[\), i.e., \(\sum_{i\in I_{1,4}}\rho_{i}\), where \(I_{1,4}=\{i\in I\ |\ D^{i}\in[\delta^{1},\delta^{2}[\ \wedge\ \rho_{i}\in[\rho^{4},+\infty[\ \}\). In particular, we consider a partition \(0=\delta^{0}<\delta^{1}<\delta^{2}<...<\delta^{N_{\delta}-1}\) of \(\mathbb{R}_{+}\) with \(N_{\delta}\) delay intervals, and a second partition \(0=\rho^{0}<\rho^{1}<\rho^{2}<...<\rho^{N_{\rho}-1}\) of \(\mathbb{R}_{+}\) with \(N_{\rho}\) input-rate intervals, evaluating \(N_{\delta}\cdot N_{\rho}\) different sums of instant queries, that is, one feature for each combination of the two partitioning sets. The features defined so far constitute a vector Footnote 3: A worker is a model variant instance \(v\) running on a particular cluster \(n\), therefore we can assume index \(w=n\cdot v+v\). \[\mathbf{S}_{w,t}=\left[|I_{w}|,R_{w}(t),L_{w}(t),\sum_{i\in I_{0,0}}\rho_{i},\ldots,\sum_{i\in I_{N_{\delta}-1,N_{\rho}-1}}\rho_{i}\right] \tag{11}\] where \(\mathbf{S}_{w,t}\in\mathbb{R}_{+}^{3+N_{\delta}\cdot N_{\rho}}\). Therefore, the complete state \(\mathbf{S}\) is modeled as a three-dimensional tensor in \(\mathbb{R}_{+}^{(3+N_{\delta}\cdot N_{\rho})\times|W|\times T}\), that is, each feature in (11) is first evaluated for each available worker (each model variant on each node), and then for each time instant within the considered time window. For instance, vector \(\mathbf{S}_{w}\) stores, for worker \(w\), every feature in (11) evaluated at every time instant \(t-T+1\), \(t-T+2\),..., \(t\) within the time window \([t-T+1,t]\). From now on, we refer to the state encoding simply as \(s\), or as \(s_{t}\) when referring to the state of a specific time window. ### _Training_ The proposed RL scheduling agent is trained over a series of episodes that resemble various scenarios. Each episode corresponds to a different workload execution with given parameters, e.g., task requirements, number of clients per minute (\(\lambda\)) or the seed value (\(\zeta\)) for random number generation (RNG), and is concluded when the percentage of successful queries, \(q_{t}^{S}\), falls below a given threshold \(\theta\) or when a timeout \(H\) is reached. This allows us to speed up the training by terminating unsuccessful or steady episodes quickly. At every time step \(t\), a reward \(r_{t}\) scores the rate of successful queries, see Algorithm 1 at lines 9-10, where \(q_{t}^{F}\) is the ratio of "failed" queries, i.e., those delivered violating one or more constraints (e.g., out of the tolerated delay), and \(q_{t}^{R}\) is the ratio of queries "rejected" by the system for lack of resources, both normalized over the corresponding time window. \(\psi\) is a penalty inversely proportional to the episode active time. It ensures that both short and bad action trajectories do not reach higher returns than optimal ones.
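To make the state encoding and reward more concrete, here is a small illustrative sketch; the exact reward coefficients and the penalty \(\psi\) are not fully specified in the text above, so the additive form used here is an assumption.

```python
import numpy as np

def worker_features(n_streams, throughput, load, queries_by_bucket):
    """Per-worker feature vector of Eq. (11):
    [|I_w|, R_w(t), L_w(t), one summed query rate per (delay, rate) bucket]."""
    return np.array([n_streams, throughput, load, *queries_by_bucket], dtype=np.float32)

def build_state(per_worker_per_step_features):
    """Stack features into the (3 + N_delta*N_rho) x |W| x T state tensor.

    `per_worker_per_step_features[w][step]` is the vector returned by
    `worker_features` for worker w at one time instant of the window.
    """
    arr = np.asarray(per_worker_per_step_features, dtype=np.float32)  # |W| x T x F
    return np.transpose(arr, (2, 0, 1))                               # F x |W| x T

def reward(failed_ratio, rejected_ratio, active_time, penalty_scale=1.0):
    """Illustrative reward: penalize failed (q_t^F) and rejected (q_t^R) queries,
    plus a penalty psi inversely proportional to the episode active time."""
    psi = penalty_scale / max(active_time, 1e-6)
    return 1.0 - failed_ratio - rejected_ratio - psi
```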
Note that DQN network is used to minimize the target loss L, see lines 14-18, by Adam optimizer and \(\alpha\) learning rate. It takes gradient steps on the Bellman error objective L, see factor \(C\) at line 16 and Eq. (10), concurrently with data collection from the replay buffer [27] for an efficient off-line learning. This is a common hybrid approach to implement Q-Learning [25, 28, 29]. Additionally, we employ an \(\epsilon\)-greedy exploration policy, see line 5, with parameter \(\epsilon\) dynamically updated. The architecture of our DQN consists of a stacking of convolutional layers that extracts temporal correlations from the state tensor \(\mathbf{S}\). Such feature extraction part is composed of three convolutional layers with 4x4 kernels along the time and feature dimensions, followed by Re-Lu activation and max-pooling. Finally, two linear layers squeeze the dimension from 256 to as many outputs as different static policies \(\beta\). ## V Performance evaluation We evaluate ASET using a prototype implementation of an edge inference system that will be released upon acceptance of the paper. We first use our prototype to run small scale experiments with the aim of profiling some representative models and their variants (results not shown). Then we use such profiling data to run large scale experiments on a simulated setup, comparing the performance of ASET to those of static scheduling policies. ### _Evaluation settings_ **System Prototype.** Our prototype implements the edge inference system functionalities described in Section III. On each cluster, a _Master_ deploys workers and routes streams between them and remote clients; each _Worker_ runs in a Docker container and implements a pipeline that processes queries in FIFO order from different streams, based on the model variant batch size; a _Monitoring_ agent on each cluster collects stats from model variants usage and their performance, used _(i)_ to build a catalog of model variants and _(ii)_ to provide each _Scheduler_ with aggregated observations on the system state. We use such a prototype to profile variants of pre-trained inference models with respect to their resource usage and performance (see below). **Simulation Setup.** To evaluate our approach on a large scale, we set up a simulated environment where each worker simulates the inference task based on the profiling information available for its model variant. Therefore, empty responses are generated for each batch of queries after simulating a processing delay (based on a normal distribution). Additionally, we simulate network delay between stream sources and destination clusters (see below for considerations on the network topologies), as well as the transmission delay. Apart from simulated workers, other system components are deployed using their prototype implementation. Therefore, the system operates on a realistic timescale. **Network topology.** We leverage the network topology of a large ISP to assess scheduling performance under realistic settings. Specifically, our simulated environment is a cloud-to-edge topology with clusters of different sizes deployed hierarchically. To preserve ISP confidentiality, we only report a high-level summary of topology, latency, and hardware distribution characteristics. 
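Looking back at the agent architecture of Section IV-C (three 4x4 convolutional layers with ReLU and max-pooling, followed by two linear layers mapping a 256-dimensional feature to one output per static policy), a rough PyTorch-style sketch might look as follows; the channel widths, pooling sizes, and the mapping of state-tensor axes to channels are assumptions, with only the final 256-to-|A| projection taken from the text.

```python
import torch
import torch.nn as nn

class SchedulerDQN(nn.Module):
    """Illustrative DQN: a conv stack over the (features x workers x time) state
    tensor, then two linear layers producing one Q-value per static policy."""
    def __init__(self, in_channels, n_policies, hidden=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=4, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=4, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),                 # collapse the worker/time axes
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, hidden), nn.ReLU(),
            nn.Linear(hidden, n_policies),           # one Q-value per static policy beta
        )

    def forward(self, state):                        # state: (batch, F, |W|, T)
        return self.head(self.features(state))
```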
Similarly to the tiered topologies from [30, 31], our topology can provide clusters with computation capabilities at different layers: network access (e.g., antennas, home gateway), central offices (multiple payers), operator data center, and remote cloud (third parties). Specifically, we focus on three scenarios: _(i)__dc-cloud_, where resources are deployed at ISP data center and remote cloud only; _(ii)__co-dc-cloud_, where resources are deployed at central offices, operator data center and remote cloud; _(iii)__full-edge_ topology, where clusters are deployed at all layers previously mentioned. Note that we limit the simulations to the topology serving 1,000 antennas from the full ISP topology, and appropriately scale resources (see below). For the evaluation, we assume a 5G radio access technology with antennas deployed similarly to LTE. Network/transmission delays range from few milliseconds, to reach the eNodeBs behind the antennas, to the order of ten milliseconds for central offices and ISP data centers, and few tens of milliseconds for the remote cloud. **Requests workload.** Requests are generated following a Poisson distribution. Each generator runs on average \(\lambda\) clients per minute querying the scheduler of a given geographical area (antenna). Once spawned, each client requests for processing a stream featuring randomized characteristics in terms of frame rate, required end-to-end latency, required model accuracy, frame sizes, stream duration. To capture realistic queries characteristics, we modeled metrics of generated streams according to the reference edge applications in Table I. In our settings, a generator with \(\lambda\) = 60 brings a load of almost 1000 queries per second on the serving antenna. **Computing clusters and model variant.** We assume a given reference hardware distribution across clusters, with computing capabilities increasing from the access network to the cloud. Specifically, the access network can be equipped with an 8-16 cores machine, 16 GB of memory and a small TPU, central offices can host in the order of tens of servers (32-64 CPUs, 128-256 GB, and few GPUs), ISP data centers can host hundreds of servers, while for the centralized cloud we assume unlimited resources. In our evaluation, we focus on DNN models for the object detection task, as it is one of the most challenging and computation-intensive inference service [34, 35]. Using our prototype we profile MobileNet-SSD, Yolo-v3, and Tinyolo-v2 models [36, 37], with CPU and GPU variants on different batch sizes, scaling on allocated resources and number of replicas. Such a set of results is not shown for lack of space. We use profiled information to run our simulations on top of the three topologies described above. On each cluster, workers have been scaled on the number of replicas up to resource saturation. ### _Experimental Results_ We compare the performance of the baseline policies described in Section III-C distinguishing results for different applications from Table I. As a performance metric we consider the percentage of queries that are successfully processed by the system satisfying the application QoS requirements. Figure 3(a) shows results of multiple runs with \(\lambda\) = 60. Results suggest that there is no one-size-fits-all policy, as various applications may benefit differently from each policy. Varying the rate of stream requests on the antenna (Figure 3(b)) may further increase the uncertainty of relying on a single policy. 
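As a small illustration of the request workload described above (clients arriving as a Poisson process with rate \(\lambda\) per minute, each opening a stream with randomized characteristics), a possible generator sketch is shown below; the attribute ranges are placeholders, not the exact values of Table I.

```python
import random

def client_arrivals(lam_per_minute, duration_s, rng=random.Random(42)):
    """Yield client arrival times (seconds) of a Poisson process with rate lambda per minute."""
    t, rate_per_s = 0.0, lam_per_minute / 60.0
    while True:
        t += rng.expovariate(rate_per_s)      # exponential inter-arrival times
        if t > duration_s:
            return
        yield t

def random_stream(rng=random.Random(0)):
    """Draw randomized stream characteristics (placeholder ranges, cf. Table I)."""
    return {
        "fps": rng.choice([5, 10, 15, 20, 25]),          # frame rate rho_i
        "max_delay_ms": rng.choice([30, 95, 150, 300]),  # tolerated end-to-end delay D^i
        "min_map": rng.choice([10, 25, 35, 40]),         # required accuracy A^i
        "duration_s": rng.uniform(5, 120),               # stream duration
    }
```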
In the following, we compare the performance of the ASET RL scheduling approach with the performance of static policies, evaluating the benefits it can introduce in the various scenarios. We trained three different versions of ASET (one for each topology). In particular, we sample the state using a time window \(T\) = 25 seconds, and we experimentally chose an episode timeout of 8 minutes to avoid steady states in the network. Despite we evaluate on multiple clients rate, our agent has been trained only on episodes with \(\lambda\) = 60. **Cloud deployment.** When all the available resources are located in a few centralized clusters, the various static policies have small differences in performance and a dynamic approach has little room for improvement. Results for the dc-cloud topology are shown in Figures 4(b). In particular, Figure 4(a) plots, for every moment of the simulation (time axis), the percentage of queries that are handled successfully, averaging multiple runs with different workloads. The graph shows that, for this topology, ASET does not improve over static policies, and it even performs worse for higher lambdas (Figure 4(b)). Figures 4(c) shows that moving some resources to Central Offices (co-dc-cloud topology) makes a huge difference: in general, all the policies achieve a higher success ratio on this configuration (Figure 4(c)), as they can exploit the additional lower latency spots, and the higher level of distribution gives to ASET a certain margin of improvement. Figure 4(d) shows that ASET introduces some improvement over all the baselines for every lambda, despite being trained only for \(\lambda\) = 60. **Edge deployment.** The results so far suggest that a good distribution of computing resources is a key factor to improve against static scheduling policies. As shown in Figure 6, the benefits of using a dynamic scheduling approach become more \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Edge app** & **Tolerated** & **Frame** & **Streams** & **Required** \\ & **delay** & **rate** & **duration** & **accuracy** \\ \hline Pool & 95 ms & 5 FPS & 5-10 s & 10 mAP \\ Workout Assistant & 300 ms & 2 FPS & 90 s & 10 mAP \\ Ping-pong & 150 ms & 15-20 FPS & 20-40 s & 15 mAP \\ Face Assistant & 370 ms & 5 FPS & 1-5 s & 30 mAP \\ Lego/Draw/Sandwich & 600 ms & 10-15 FPS & 60 s & 25 mAP \\ Gaming & 20-30 ms & 25 FPS & 10-30 m & 35 mAP \\ Connected Cars & 150 ms & 10-15 FPS & 15-30 m & 40 mAP \\ Tele-Robots & 25-35 ms & 10 FPS & 5 m & 40 mAP \\ Remote-driving & 20-30 ms & 20 FPS & 15-30 m & 50 mAP \\ Interactive AR/VR & 30-50 ms & 25 FPS & 30-60 s & 35 mAP \\ \hline \hline \end{tabular} \end{table} TABLE I: Characteristics of reference applications [32, 33]. Fig. 4: Success percentage for different apps on the full-edge topology. Fig. 5: Performance of ASET compared with static policies for (ab) the dc-cloud topology and (cd) the co-dc-cloud topology. concrete in a full-edge topology, where resources are better distributed on multiple smaller clusters in different locations. In fact, Figure 5(a) shows that the dynamic approach of ASET is able to achieve a constant improvement over any static policy, with a higher success ratio over time. In particular, Figures 5(d) show that, while maintaining the same rejection rate as the best static-policy, ASET effectively reduces the number of queries that are handled violating one or more QoS requirements. 
Moreover, Figure 5(b) shows that an ASET agent trained only for \(\lambda\) = 60 can also generalize to different request rates, even supporting a load of more than 1600 queries per second (\(\lambda\) = 100) on a single antenna. **Dynamic input rate.** We have performed some additional experiments to evaluate how the system behaves in dynamic situations where the request rate varies over time. For this purpose, we have set up some dynamic runs where the lambda value changes every 150 seconds: a first pattern simulates a particularly fast variation with values of 20, 60, and 100 clients per minute (Figure 6(a)); a different pattern simulates a steadier scenario where the request rate first moves from 60 to 40 clients per minute, then drops to 20, and finally slowly goes back to 60 (Figure 6(b)). As in previous plots, the outcomes for this set of experiments are shown averaging values over time for multiple runs (Figure 7). Results in both figures show that a dynamic request arrival pattern introduces an even bigger margin for improvement, which ASET effectively exploits, reaching the highest percentage of queries handled successfully. This appears particularly evident in the case where the variation in client arrivals is faster and larger (Figure 6(a)). This result suggests that, while some of the static policies may achieve decent performance when the system load is stable, they struggle in more dynamic scenarios. In such situations, an adaptive algorithm such as ASET is more suitable, as it can learn how to best optimize the system under different conditions. Moreover, the results suggest that ASET training generalizes well enough, as the algorithm performs well under previously unseen dynamic conditions. **Training and applicability.** Figure 7(a) shows the cumulative distribution of the time needed by ASET to infer a switching decision from the current policy to the one that is best suited to the current system conditions (non-dashed line). The switching delay is compared with distributions of the intervals between subsequent requests for different lambdas. As shown, even for very large client loads (100 clients connected to the antenna), the time interval between two stream arrivals is typically on the scale of seconds or hundreds of milliseconds, while the delay for switching between policies is one order of magnitude smaller. Finally, Figure 7(b) shows the learning curve of ASET for different topologies on continuous stream request arrivals. The figure shows that ASET quickly reaches a certain level of performance in the first training iterations (before 100 episodes) independently of the topology complexity, leaving room for further improvement in subsequent episodes depending on the margin left by the topology itself. Fig. 6: Performance of ASET compared with static policies for the full-edge topology. (a), (c), and (d) show averages of multiple runs with \(\lambda\) = 60. Fig. 7: Performance of ASET varying the request rate over time with two different load variation patterns (full-edge topology). Fig. 8: (a) Delay for switching policy compared with request arrival intervals. (b) Learning curve while training ASET on different topologies. ## VI Conclusions This paper proposes ASET, an adaptive algorithm based on Reinforcement Learning for scheduling inference workloads at the network edge. ASET solves the problem of exploiting scattered clusters of resources to serve inference queries from multiple edge applications (e.g., AR/VR, cognitive assistance). We model an edge inference system where queries from different access networks are processed across a multitude of distributed processing locations. The constrained nature of the edge network introduces a trade-off between network delay and processing time based on the various available DNN models. In such a scenario, ASET optimizes the binding between inference stream requests and available DL models across the network, maximizing the throughput and ensuring that any requirement in terms of inference accuracy and end-to-end delay is satisfied. We evaluated our approach over the realistic network topology of a large ISP, considering a heterogeneous pool of edge applications. Our findings show that ASET effectively improves performance compared to static policies when resources are deployed across the whole edge-cloud infrastructure.
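As a rough illustration of the request dynamics evaluated above, the sketch below simulates Poisson client arrivals whose rate switches every 150 seconds following the fast variation pattern (20, 60, 100 clients per minute) and compares the resulting inter-arrival gaps with a hypothetical policy-switching delay. The 20 ms delay value and the interpretation of lambda as client arrivals per minute are assumptions for illustration only.

```python
import random

# Assumed interpretation: lambda is the client arrival rate (clients per minute),
# switched every 150 s as in the fast variation pattern of the experiments.
PATTERN = [(150, 20), (150, 60), (150, 100)]   # (duration in s, lambda)
POLICY_SWITCH_DELAY_S = 0.02                   # hypothetical agent decision latency

def arrival_times(pattern):
    """Generate Poisson client arrivals for a piecewise-constant rate pattern."""
    t, end = 0.0, 0.0
    for duration, lam in pattern:
        end += duration
        rate_per_s = lam / 60.0
        while True:
            t += random.expovariate(rate_per_s)
            if t >= end:
                t = end
                break
            yield t

if __name__ == "__main__":
    random.seed(0)
    times = list(arrival_times(PATTERN))
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    print(f"{len(times)} arrivals, mean inter-arrival {mean_gap:.2f} s")
    print(f"policy-switch delay is {POLICY_SWITCH_DELAY_S / mean_gap:.1%} of the mean gap")
```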
Real-time applications (e.g., Augmented/Virtual Reality, cognitive assistance) rely on Deep Neural Networks (DNNs) to process their inference tasks. Edge computing is considered a key infrastructure for deploying such applications, as moving computation close to the data sources enables meeting their strict latency and throughput requirements. However, the constrained nature of edge networks poses several additional challenges for the management of inference workloads. Edge clusters offer limited processing capacity for DNN models, and end-to-end delay requirements must be met while trading off network and processing time. In this paper, we focus on the problem of scheduling inference queries on DNN models over short timescales (e.g., a few milliseconds). Using simulations under realistic network settings and workloads of a large…
2309.16142
On the Steenrod module structure of $\mathbb{R}$-motivic Spanier-Whitehead duals
The $\mathbb{R}$-motivic cohomology of an $\mathbb{R}$-motivic spectrum is a module over the $\mathbb{R}$-motivic Steenrod algebra $\mathcal{A}^{\mathbb{R}}$. In this paper, we describe how to recover the $\mathbb{R}$-motivic cohomology of the Spanier-Whitehead dual $\mathrm{DX}$ of an $\mathbb{R}$-motivic finite complex $\mathrm{X}$, as an $\mathcal{A}^{\mathbb{R}}$-module, given the $\mathcal{A}^{\mathbb{R}}$-module structure on the cohomology of $\mathrm{X}$. As an application, we show that 16 out of 128 different $\mathcal{A}^{\mathbb{R}}$-module structures on $\mathcal{A}^{\mathbb{R}}(1):= \langle \mathrm{Sq}^1, \mathrm{Sq}^2 \rangle$ are self-dual.
Prasit Bhattacharya, Bertrand J. Guillou, Ang Li
2023-09-28T03:44:24
http://arxiv.org/abs/2309.16142v2
# On the Steenrod Module Structure of \(\mathbb{R}\)-Motivic Spanier-Whitehead Duals ###### Abstract. The \(\mathbb{R}\)-motivic cohomology of an \(\mathbb{R}\)-motivic spectrum is a module over the \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}\). In this paper, we describe how to recover the \(\mathbb{R}\)-motivic cohomology of the Spanier-Whitehead dual DX of an \(\mathbb{R}\)-motivic finite complex X, as an \(\mathcal{A}^{\mathbb{R}}\)-module, given the \(\mathcal{A}^{\mathbb{R}}\)-module structure on the cohomology of X. As an application, we show that \(16\) out of \(128\) different \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1):=\langle\mathrm{Sq}^{1},\mathrm{Sq}^{2}\rangle\) are self-dual. Guillou was supported by NSF grant DMS-2003204 Bhattacharya is supported by NSF grant DMS-2305016 the \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}\) on the Spanier-Whitehead duals of those finite \(\mathbb{R}\)-motivic spectra whose cohomology is free over \(\mathbb{M}^{\mathbb{R}}_{2}\). Let us pause to briefly discuss Boardman's mandala. Given a finite cell complex \(\mathrm{X}\) there are eight ways in which its mod 2 homology and cohomology interact with the Steenrod algebra and its dual. They represent the vertices of the mandala. Boardman identified the relationships between them, which represent the edges. Each edge of the mandala corresponds to a formula. For example, the edge \(\mathrm{D}^{\prime\prime}\) in Figure 1.1 corresponds to the formula (see [B, p. 190]) \[\langle(\mathrm{D}^{\prime\prime}\phi^{\prime}_{\mathrm{L}})(\alpha\otimes \mathsf{f}),\mathsf{x}\rangle=\langle\mathsf{f},\phi^{\prime}_{\mathrm{L}}( \chi(\alpha)\otimes\mathsf{x})\rangle \tag{1.1}\] that relates the left \(\mathcal{A}\)-module structure on the cohomology \(\mathrm{H}^{*}(\mathrm{X})\) with that of the left \(\mathcal{A}\)-module structure on the homology of \(\mathrm{X}\). However, not all edges of the mandala exist for a general cohomology theory \(\mathrm{E}\) ([B, Section 6]). When \(\mathrm{H}^{*}(\mathrm{X}):=[\mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}]^ {\star}\) is free and finitely generated over \(\mathbb{M}^{\mathbb{R}}_{2}\), \(\mathrm{H}_{\star}(\mathrm{X})\) is the \(\mathbb{M}^{\mathbb{R}}_{2}\)-linear dual of \(\mathrm{H}^{*}(\mathrm{X})\), as the relevant universal coefficient spectral sequence collapses. Consequently, the work in [B] relates the left action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathrm{H}^{*}(\mathrm{X})\) as well as the left action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathrm{H}_{\star}(\mathrm{X})\), to the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{*}(\mathrm{X})\) (see Proposition 3.1, Proposition 3.3 and Proposition 3.4). These relations are the green dashed edges in Figure 1.1. As a result, one deduces the left \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}_{\star}(\mathrm{X})\) from that of \(\mathrm{H}^{*}(\mathrm{X})\) without resorting to an antiautomorphism (unlike (1.1)). Our main application is concerned with identifying the \(\mathbb{R}\)-motivic spectra in the class \(\mathcal{A}^{\mathbb{R}}_{1}\) introduced in [BGL]. Each spectrum in \(\mathcal{A}^{\mathbb{R}}_{1}\) is a realization of some \(\mathcal{A}^{\mathbb{R}}\)-module structure on the subalgebra \(\mathcal{A}^{\mathbb{R}}(1):=\mathbb{M}^{\mathbb{R}}_{2}\langle\mathrm{S}q^{1 },\mathrm{S}q^{2}\rangle\subset\mathcal{A}^{\mathbb{R}}\) (see Figure 4.1). 
In the classical case, Davis and Mahowald [DM] showed that the subalgebra \(\mathcal{A}(1)\) of the Steenrod algebra admits four different left \(\mathcal{A}\)-module structures, of which two are self-dual (see also [BEM, Remark 1.1]). In [BGL], we showed that \(\mathcal{A}^{\mathbb{R}}(1)\) admits 128 different \(\mathcal{A}^{\mathbb{R}}\)-module structures. In this paper, we show: **Theorem 1.1**.: _Among the 128 different \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\), only 16 are self-dual._ **Remark 1.2**.: In [BGL] we showed that every \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathcal{A}^{\mathbb{R}}(1)\) can be realized as a finite \(\mathbb{R}\)-motivic spectrum, but we do not know if they are unique. Hence, the spectra realizing a self-dual \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathcal{A}^{\mathbb{R}}(1)\) may not be Spanier-Whitehead self-dual. Davis and Mahowald also showed [DM] that each realization of \(\mathcal{A}(1)\) is the cofiber of a self-map of the spectrum \(\mathcal{Y}:=\mathbb{S}/2\wedge\mathbb{S}/\eta\), where \(\eta\) is the first Hopf element in the stable stems. In the \(\mathbb{R}\)-motivic stable stems, both 2 and \(\mathsf{h}\) in \(\pi_{0,0}(\mathbb{S}_{\mathbb{R}})\) are lifts of \(2\in\pi_{0}(\mathbb{S})\) in the classical stable stems, and \(\eta_{1,1}\in\pi_{1,1}(\mathbb{S}_{\mathbb{R}})\) is the unique lift of \(\eta\) in bidegree \((1,1)\) (up to a unit). This results in two different \(\mathbb{R}\)-motivic lifts of \(\mathcal{Y}\), namely \[\mathcal{Y}^{\mathbb{R}}_{(2,1)}=\mathbb{S}_{\mathbb{R}}/2\wedge\mathbb{S}_{ \mathbb{R}}/\eta_{1,1}\text{ and }\mathcal{Y}^{\mathbb{R}}_{(\mathsf{h},1)}=\mathbb{S}_{ \mathbb{R}}/\mathsf{h}\wedge\mathbb{S}_{\mathbb{R}}/\eta_{1,1}.\] We showed in [BGL, Theorem 1.8] that each \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathcal{A}^{\mathbb{R}}(1)\) can be realized as the cofiber of a map between these \(\mathbb{R}\)-motivic lifts of \(\mathcal{Y}\). Here we show: **Theorem 1.3**.: _Of the self-dual \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\), 8 can be realized as the cofiber of a self-map on \(\mathcal{Y}^{\mathbb{R}}_{(2,1)}\) and 8 as the cofiber of a self-map on \(\mathcal{Y}^{\mathbb{R}}_{(\mathsf{h},1)}\)._ **Notation 1.1**.: In all diagrams depicting modules over the Steenrod algebra, (i.e. in Figure 3.1, Figure 4.1, and Figure 4.2), a dot \(\bullet\) represents a rank one free module over the coefficient ring, black vertical lines indicate the action of \(\mathrm{Sq}^{1}\), blue curved lines indicate the action of \(\mathrm{Sq}^{2}\), and red bracket-like lines represent the action of \(\mathrm{Sq}^{4}\). A label on an edge represents that the operation hits that multiple of the generator. For example, in Figure 3.1, \(\mathrm{Sq}^{2}(\mathsf{x}_{2,1})\) is \(\tau\cdot\mathsf{x}_{4,1}\) and \(\mathrm{Sq}^{4}(\mathsf{x}_{2,1})\) is \(\rho^{2}\cdot\mathsf{x}_{4,1}\). **Acknowledgements**.: We thank Agnes Beaudry, Mike Hill, Clover May, Sarah Petersen, Liz Tatum, and Doug Ravenel for a stimulating conversation at the conference, Homotopy Theory in honor of Paul Goerss, held at Northwestern University in March 2023. We also thank William Balderrama for an illuminating conversation, and we thank Dan Isaksen for pointing out a typo. ## 2. 
A review of the \(\mathbb{R}\)-motivic Steenrod algebra and its dual In [11], Voevodsky defined the motivic Steenrod operations \(\mathrm{Sq}^{n}\), for \(n\geq 0\), and gave a complete description of the \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}\). It is free as a left module over the \(\mathbb{R}\)-motivic homology of a point, \[\mathbb{M}_{2}^{\mathbb{R}}:=\pi_{\star}^{\mathbb{R}}\mathbf{H}_{\mathbb{R}} \mathbb{F}_{2}\cong\mathbb{F}_{2}[\tau,\rho], \tag{2.1}\] where the element \(\tau\) is in bidegree \(\star=(0,-1)\), and \(\rho\) is in bidegree \(\star=(-1,-1)\). The subalgebra \(\mathbb{M}_{2}^{\mathbb{R}}\subset\mathcal{A}^{\mathbb{R}}\) is not central, and therefore \(\mathcal{A}^{\mathbb{R}}\) has two \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structures, one given by left multiplication and the other by right multiplication. The \(\mathbb{R}\)-motivic dual Steenrod algebra \(\mathcal{A}_{\star}^{\mathbb{R}}\) is defined to be the (left) \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear dual of \(\mathcal{A}^{\mathbb{R}}\); it inherits an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structure, which we call the left action. The right \(\mathbb{M}_{2}^{\mathbb{R}}\)-action on \(\mathcal{A}^{\mathbb{R}}\) also induces an action of \(\mathbb{M}_{2}^{\mathbb{R}}\) on \(\mathcal{A}_{\star}^{\mathbb{R}}\), which we call the right action of \(\mathbb{M}_{2}^{\mathbb{R}}\) on \(\mathcal{A}_{\star}^{\mathbb{R}}\) (see [11, p. 48])1. These correspond to the left and the right unit Footnote 1: Since \(\mathbb{M}_{2}^{\mathbb{R}}\) is commutative, there is no meaningful distinction between “left” and “right” actions. The adjectives are merely a bookkeeping device. \[\eta_{\mathrm{L}},\eta_{\mathrm{R}}\colon\mathbb{M}_{2}^{\mathbb{R}}\rTo\mathcal{ A}_{\star}^{\mathbb{R}}\] of the Hopf algebroid \((\mathbb{M}_{2}^{\mathbb{R}},\mathcal{A}_{\star}^{\mathbb{R}})\). Explicitly, \[\mathcal{A}_{\star}^{\mathbb{R}}\cong\frac{\mathbb{M}_{2}^{\mathbb{R}}[\tau_{ 0},\tau_{1},\tau_{2},\ldots,\xi_{1},\xi_{2},\ldots]}{\tau_{n}^{2}=\tau\xi_{n+1} +\rho\tau_{0}\xi_{n+1}+\rho\tau_{n+1}} \tag{2.2}\] with \(\eta_{\mathrm{L}}(\rho)=\eta_{\mathrm{R}}(\rho)=\rho\), \(\eta_{\mathrm{L}}(\tau)=\tau\) and \(\eta_{\mathrm{R}}(\tau)=\tau+\rho\tau_{0}\). The comultiplication (2.3) is given by * \(\Delta(\xi_{n})=\sum_{i=0}^{n}\xi_{n-i}^{2^{i}}\otimes\xi_{i}\), and * \(\Delta(\tau_{n})=\tau_{n}\otimes 1+\sum_{i=0}^{n}\xi_{n-i}^{2^{i}}\otimes\tau_{n-i}\), for all \(n\in\mathbb{N}\), where \(\xi_{0}\) is the unit \(1\). The conjugation map of the Hopf algebroid structure sends * \(\mathsf{c}(\rho)=\rho\), * \(\mathsf{c}(\tau)=\tau+\rho\tau_{0}\), * \(\mathsf{c}(\xi_{n})=\sum_{i=0}^{n-1}\xi_{n-i}^{2^{i}}\mathsf{c}(\xi_{i})\), and * \(\mathsf{c}(\tau_{n})=\tau_{n}+\sum_{i=0}^{n-1}\xi_{n-i}^{2^{i}}\mathsf{c}(\tau_{ i})\). **Remark 2.1**.: The coproduct \(\Delta\) in (2.3) is an \(\mathbb{M}_{2}^{\mathbb{B}}\)-bimodule map. **Remark 2.2**.: The conjugation is not a map of left \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. In fact, it interchanges the left and right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structures on \(\mathcal{A}_{\star}^{\mathbb{R}}\). 
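As a quick sanity check of the recursive formulas for the conjugation (a worked example added here for the reader; the outcomes agree with the low-degree entries recorded in Table 2.1 below), one computes
\[
\mathsf{c}(\xi_{1})=\xi_{1}\,\mathsf{c}(\xi_{0})=\xi_{1},\qquad
\mathsf{c}(\tau_{1})=\tau_{1}+\xi_{1}\,\mathsf{c}(\tau_{0})=\tau_{1}+\tau_{0}\xi_{1},\qquad
\mathsf{c}(\xi_{2})=\xi_{2}\,\mathsf{c}(\xi_{0})+\xi_{1}^{2}\,\mathsf{c}(\xi_{1})=\xi_{2}+\xi_{1}^{3}.
\]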
### Kronecker product The \(\mathbb{R}\)-motivic Kronecker product is a natural pairing between \(\mathbb{R}\)-motivic homology and cohomology which is constructed as follows: If \(\varphi:\mathrm{X}\longrightarrow\Sigma^{\mathrm{i}\cdot\mathrm{j}}\mathbf{H} _{\mathbb{R}}\mathbb{F}_{2}\) represents the class \([\varphi]\in\mathrm{H}^{\star}(\mathrm{X})\) and \(\mathsf{x}:\Sigma^{\mathsf{m},\mathsf{n}}\mathbb{S}_{\mathbb{R}} \longrightarrow\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathrm{X}\) represents \([\mathsf{x}]\in\mathrm{H}_{\mathsf{m},\mathsf{n}}(\mathrm{X})\), then the composition is the element \(\langle\mathsf{x},\varphi\rangle\in\pi_{\star}(\mathbf{H}_{\mathbb{R}}\mathbb{ F}_{2})\cong\mathbb{M}_{2}^{\mathbb{R}}\). The Kronecker pairing leads to a homomorphism (2.4) where \(\mathsf{n}(\varphi)(\mathsf{x})=\langle\mathsf{x},\varphi\rangle\). **Remark 2.3**.: When \(\mathrm{H}_{\star}(\mathrm{X})\) is free and finitely generated as an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module, the map \(\mathsf{n}\) in (2.4) is an isomorphism. Consequently, elements in \(\mathrm{H}^{\star}(\mathrm{X})\) can be identified with linear maps from \(\mathrm{H}_{\star}(\mathrm{X})\), and the Kronecker product is simply the evaluation of functionals. **Notation 2.1**.: Since both \(\mathcal{A}^{\mathbb{R}}\) and \(\mathcal{A}_{\star}^{\mathbb{R}}\) have a left and a right action of \(\mathbb{M}_{2}^{\mathbb{R}}\), let \(\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}^{\mathrm{left} }\mathcal{A}_{\star}^{\mathbb{R}}\) (likewise \(\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}^{\mathrm{right} }\mathcal{A}_{\star}^{\mathbb{R}}\)) denote the tensor product of left (likewise right) \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. **Remark 2.4**.: When \(\mathrm{X}\) is \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\), the Kronecker product is a map of left \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules \(\mathcal{A}_{\star}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}^{\mathrm{ left}}\mathcal{A}^{\mathbb{R}}\to\mathbb{M}_{2}^{\mathbb{R}}\). ### The Milnor basis The dual Steenrod algebra \(\mathrm{H}_{\star}(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2})\cong\mathcal{A}_{ \star}^{\mathbb{R}}\) is free and degree-wise finitely generated as an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module. Consequently, the natural map of (2.4) gives an isomorphism \[\mathcal{A}^{\mathbb{R}}\cong\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R}}}( \mathcal{A}_{\star}^{\mathbb{R}},\mathbb{M}_{2}^{\mathbb{R}}) \tag{2.5}\] of left \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. Taking advantage of the above isomorphism, Voevodsky [V, SS13] defines the Milnor basis of the \(\mathbb{R}\)-motivic Steenrod algebra using the monomial basis of the dual Steenrod algebra (2.2). For finite sequences \(\mathrm{E}=(\mathsf{e}_{0},\mathsf{e}_{1},\ldots,\mathsf{e}_{m})\) and \(\mathrm{R}=(\mathsf{r}_{1},\ldots,\mathsf{r}_{n})\) of non-negative integers, let \(\mathsf{\rho}(\mathrm{E},\mathrm{R})\) denote the element in \(\mathcal{A}^{\mathbb{R}}\) dual to the monomial \[\mathsf{\tau}(\mathrm{E})\,\mathsf{\xi}(\mathrm{R}):=\prod_{i\geq 0}\tau_{i}^{ \mathsf{e}_{i}}\prod_{j\geq 1}\xi_{i}^{\mathsf{r}_{i}}\] in \(\mathcal{A}_{\star}^{\mathbb{R}}\). It is standard practice to set \(\mathcal{P}^{\mathrm{R}}:=\mathsf{\rho}(\mathbf{0},\mathrm{R})\) and \(\mathcal{Q}^{\mathrm{E}}:=\mathsf{\rho}(\mathrm{E},\mathbf{0})\). Moreover, \(\mathcal{Q}_{i}\) is shorthand for the dual to \(\uptau_{i}\). 
In Table 2.1, we record, for each monomial \(\uptau(\mathrm{E})\mathcal{E}(\mathrm{R})\in\mathcal{A}_{\star}^{\mathbb{R}}\) in low degree, its image under the conjugation \(\mathsf{c}\) and its dual element in \(\mathcal{A}^{\mathbb{R}}\), both in terms of the Milnor basis as well as in terms of the generators \(\mathcal{G}:=\{\mathrm{Sq}^{2^{k}}:k\geq 1\}\). The latter description will be used in Section 3.3 and Section 4. A number of these descriptions in terms of \(\mathcal{G}\) can be found in [V]. For example, see [V, Lemma 13.1 and Lemma 13.6]. The Adem relations (see [BGL, Appendix A]) are another useful tool. For example, the Adem relation \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}=\mathrm{Sq}^{6}+\uptau\mathrm{Sq}^{5}\, \mathrm{Sq}^{1}\) leads to the description for \(P^{3}=\mathrm{Sq}^{6}\). The formula for \(\mathcal{P}^{(0,1)}\) follows from [K, (6)]. Finally, the formula for \(\mathcal{P}^{(1,1)}\) can be deduced from expressing \(\mathrm{Sq}^{6}\,\mathrm{Sq}^{2}\) in terms of the Milnor basis. This can be done by evaluating the formula [V, (12.9)] \[\left\langle\mathsf{x},\varphi\psi\right\rangle=\sum\left\langle\mathsf{x}^{ \prime},\varphi\mathfrak{n}_{\mathrm{R}}\big{(}\big{\langle}\mathsf{x}^{ \prime\prime},\psi\big{)}\big{)}\right\rangle,\qquad\Delta(\mathsf{x})=\sum \mathsf{x}^{\prime}\otimes\mathsf{x}^{\prime\prime}\] at \(\varphi=\mathrm{Sq}^{6}\), \(\psi=\mathrm{Sq}^{2}\), and \(\mathsf{x}\) monomials in low degree. This shows that \(\mathrm{Sq}^{6}\,\mathrm{Sq}^{2}\) is the sum \(\mathcal{P}^{(1,1)}+\uptau\mathcal{Q}_{0}\mathcal{Q}_{1}\mathcal{P}^{2}\). \begin{table} \begin{tabular}{|l|l||l|l|l|} \hline degree & \(\mathsf{x}\in\mathcal{A}_{\star}^{\mathbb{R}}\) & \(\mathsf{c}(\mathsf{x})\) & \(\mathsf{x}^{*}\in\mathcal{A}^{\mathbb{R}}\) & \(\mathsf{x}^{*}\) in terms of \(\mathcal{G}\) \\ \hline \hline \((0,0)\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \hline \((1,0)\) & \(\uptau_{0}\) & \(\uptau_{0}\) & \(\mathcal{Q}_{0}\) & \(\mathrm{Sq}^{1}\) \\ \hline \((2,1)\) & \(\upxi_{1}\) & \(\upxi_{1}\) & \(\mathcal{P}^{1}\) & \(\mathrm{Sq}^{2}\) \\ \hline \((3,1)\) & \(\uptau_{0}\xi_{1}\) & \(\uptau_{0}\xi_{1}\) & \(\mathcal{Q}_{0}\mathcal{P}^{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\) \\ \hline \((3,1)\) & \(\uptau_{1}\) & \(\uptau_{1}+\uptau_{0}\xi_{1}\) & \(\mathcal{Q}_{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}+\mathrm{Sq}^{2}\,\mathrm{Sq}^{1}\) \\ \hline \((4,1)\) & \(\uptau_{0}\uptau_{1}\) & \(\uptau_{0}\uptau_{1}+\uptau\xi_{1}^{2}\) & \(\mathcal{Q}_{0}\mathcal{Q}_{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{1}\) \\ & & \(+\uprho\uptau_{0}\xi_{1}^{2}+\uprho\uptau_{1}\xi_{1}\) & & \\ \hline \((4,2)\) & \(\upxi_{1}^{2}\) & \(\upxi_{1}^{2}\) & \(\mathcal{P}^{2}\) & \(\mathrm{Sq}^{4}\) \\ \hline \((5,2)\) & \(\uptau_{0}\xi_{1}^{2}\) & \(\uptau_{0}\xi_{1}^{2}\) & \(\mathcal{Q}_{0}\mathcal{P}^{2}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\) \\ \hline \((5,2)\) & \(\uptau_{1}\xi_{1}\) & \(\uptau_{1}\xi_{1}+\uptau_{0}\xi_{1}^{2}\) & \(\mathcal{Q}_{1}\mathcal{P}^{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline \((6,2)\) & \(\uptau_{0}\uptau_{1}\xi_{1}\) & \(\uptau_{0}\tau_{1}\xi_{1}+\uptau\xi_{1}^{3}\) & \(\mathcal{Q}_{0}\mathcal{Q}_{1}\mathcal{P}^{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ & & \(+\uprho\uptau_{0}\xi_{1}^{3}+\uprho\uptau_{1}\xi_{1}^{2}\) & & \\ \hline \((6,3)\) & \(\upxi_{1}^{3}\) & \(\upxi_{1}^{3}\) & \(\mathcal{P}^{3}\) & \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\uptau\,\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\, 
\mathrm{Sq}^{1}\) \\ \hline \((6,3)\) & \(\upxi_{2}\) & \(\upxi_{2}+\upxi_{1}^{3}\) & \(\mathcal{P}^{(0,1)}\) & \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\updelta^{4}\,\mathrm{Sq}^{2}\) \\ \hline \((7,3)\) & \(\uptau_{2}\) & \(\uptau_{2}+\uptau_{1}\xi_{1}^{2}\) & \(\mathcal{Q}_{2}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{1}\,\mathrm{Sq}^{ 4}\,\mathrm{Sq}^{2}\) \\ & & \(+\uptau_{0}\xi_{2}+\uptau_{0}\xi_{1}^{3}\) & & \(+\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}+\mathrm{Sq}^{4}\,\mathrm{Sq}^{ 2}\,\mathrm{Sq}^{1}\) \\ \hline \((7,3)\) & \(\uptau_{0}\xi_{1}^{3}\) & \(\uptau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{0}\mathcal{P}^{3}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\rho\,\mathrm{Sq}^{1}\,\mathrm{Sq }^{4}\,\mathrm{Sq}^{1}\) \\ \hline \((7,3)\) & \(\uptau_{0}\xi_{2}\) & \(\uptau_{0}\xi_{2}+\uptau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{0}\mathcal{P}^{(0,1)}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{1}\,\mathrm{Sq}^{ 4}\,\mathrm{Sq}^{2}\) \\ \hline \((7,3)\) & \(\uptau_{1}\xi_{1}^{2}\) & \(\uptau_{1}\xi_{1}^{2}+\uptau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{1}\mathcal{P}^{2}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\rho\,\mathrm{Sq}^{1}\,\mathrm{Sq }^{4}\,\mathrm{Sq}^{1}\) \\ & & & \(+\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline \((8,4)\) & \(\upxi_{1}^{4}\) & \(\upxi_{1}^{4}\) & \(\mathcal{P}^{4}\) & \(\mathrm{Sq}^{8}\) \\ \hline \((8,4)\) & \(\upxi_{1}\xi_{2}\) & \(\upxi_{1}\xi_{2}+\upxi_{1}^{4}\) & \(\mathcal{P}^{(1,1)}\) & \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{2}+\tau\,\mathrm{Sq}^{1}\,\mathrm{Sq }^{ ## 3. Dualizing \(\mathcal{A}^{\mathbb{R}}\)-modules For any \(\mathbb{R}\)-motivic spectrum \(\mathrm{X}\), its Spanier-Whitehead dual is the function spectrum \(\mathrm{DX}:=\mathrm{F}(\mathrm{X},\mathbb{S}_{\mathbb{R}})\). The goal of this section is to identify the \(\mathcal{A}^{\mathbb{R}}\)-module structure \(\mathrm{H}^{\star}(\mathrm{DX})\) given the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{X})\) under the following assumption. **Assumption 3.1**.: Let \(\mathrm{X}\) be a finite \(\mathbb{R}\)-motivic spectrum such that its homology \(\mathrm{H}_{\star}(\mathrm{X})\) is free over \(\mathbb{M}_{2}^{\mathbb{R}}\). **Notation 3.1**.: For an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module \(\mathbf{N}\) let \[\mathbf{N}^{\vee}:=\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R}}}(\mathbf{N}, \mathbb{M}_{2}^{\mathbb{R}})\] be the set of \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear functionals. ### From \(\psi_{\mathrm{L}}\) to \(\phi_{\mathrm{L}}^{\prime}\) Recall that \(\mathrm{H}^{\star}(\mathrm{X})\) is naturally a left \(\mathcal{A}^{\mathbb{R}}\)-module. We will also use an \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (3.1) which can be constructed as follows. First, note that \(\mathcal{A}^{\mathbb{R}}_{\star}\) is free as a right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module with basis \(\mathcal{B}\) given by the conjugate of any left \(\mathbb{M}_{2}^{\mathbb{R}}\)-module basis. Then we have a splitting \[\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathbf{H}_{\mathbb{R}}\mathbb{F}_ {2}\simeq\bigvee_{\mathcal{B}}\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\] as right \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\)-modules. Define a map of motivic spectra \(\psi\) as the composite where \(\iota\) is the unit map of \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\). 
For any finite motivic spectrum, the map \(\psi\) induces the map \(\psi_{\mathrm{L}}\) (see [B, Theorem 2.9(b)]) giving \(\mathrm{H}^{\star}(\mathrm{X})\) the structure of an \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule as explained in [B, Section 6]. Further, Boardman showed that: **Proposition 3.1**.: _[_B_, Lemma 3.4]_ _Let \(\mathbf{N}\) be a left \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule. Then \(\mathbf{N}^{\vee}\) inherits a left \(\mathcal{A}^{\mathbb{R}}\)-module structure_ \[\phi_{\mathrm{L}}\colon\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{ \mathbb{R}}}\mathbf{N}^{\vee}\rTo\] _via the formula_ \[(\varphi\cdot\uplambda)(n)=(\varphi\otimes\uplambda)\psi_{\mathrm{L}}(n) \tag{3.2}\] _for \(\varphi\in\mathcal{A}^{\mathbb{R}}\), \(\uplambda\in\mathbf{N}^{\vee}\), and \(n\in\mathbf{N}\)._ **Remark 3.2**.: If \(\psi_{\mathrm{L}}(n)=\sum_{i}a_{i}\otimes n_{i}\), for \(a_{i}\in\mathcal{A}^{\mathbb{R}}_{\star}\) and \(n_{i}\in\mathbf{N}\), then (3.2) can be rewritten as \[(\varphi\cdot\uplambda)(n)=\sum_{i}\varphi\Big{(}a_{i}\eta_{\mathrm{R}}\big{(} \uplambda(n_{i})\big{)}\Big{)}. \tag{3.3}\] Combining Proposition 3.1 with the following result, one can deduce the left \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{DX})\) (\(\phi_{\mathrm{L}}^{\prime}\) in Figure 1.1) from the left \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (\(\psi_{\mathrm{L}}\) in Figure 1.1). **Proposition 3.2**.: _Suppose \(\mathrm{X}\) satisfies Assumption 3.1. There are isomorphisms of left \(\mathcal{A}^{\mathbb{R}}\)-modules \(\mathrm{H}^{\star}(\mathrm{DX})\cong(\mathrm{H}_{\star}(\mathrm{DX}))^{\vee} \cong(\mathrm{H}^{\star}(\mathrm{X}))^{\vee}\)._ Proof.: Under Assumption 3.1 the map \(\mathfrak{n}:\mathrm{H}^{\star}(\mathrm{DX})\longrightarrow(\mathrm{H}_{\star}( \mathrm{DX}))^{\vee}\) defined in (2.4), is not just an isomorphism of \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules (see Remark 2.3), but also an isomorphism of left \(\mathcal{A}^{\mathbb{R}}\)-modules according to [B, Lemma 6.2]. For the second isomorphism, first note that Assumption 3.1 implies that there exists an isomorphism \[\mathrm{H}_{\star}(\mathrm{DX})\cong\mathrm{H}^{\star}(\mathrm{X}) \tag{3.4}\] of \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. By Proposition 3.1, it is enough to lift (3.4) to an isomorphism of \(\mathcal{A}_{\star}^{\mathbb{R}}\)-comodules. To this end, we first observe that the comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\) is induced by the map \[\mathrm{F}(\mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2})\cong\mathrm{F}( \mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathbb{S}_{\mathbb{R}}) \xrightarrow[]{\mathrm{F}(\mathrm{X},\mathrm{id}\wedge\mathfrak{t})} \mathrm{F}(\mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathbf{H}_{ \mathbb{R}}\mathbb{F}_{2}).\] (see (3.1) or [B, Theorem 5.4])). The result then follows from the commutativity of the diagram where the horizontal maps are evaluation at \(\mathrm{X}\). 
### From \(\phi_{\mathrm{L}}\) to \(\psi_{\mathrm{L}}\) For any \(\varphi\in\mathcal{A}^{\mathbb{R}}\cong\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R }}}(\mathcal{A}_{\star}^{\mathbb{R}},\mathbb{M}_{2}^{\mathbb{R}})\), let \(\varphi\mathbf{c}\) denote the composition \[\varphi\mathbf{c}:\mathcal{A}_{\star}^{\mathbb{R}}\xrightarrow[]{\mathfrak{c} }\mathcal{A}_{\star}^{\mathbb{R}}\xrightarrow[]{\varphi}\mathbb{M}_{2}^{ \mathbb{R}},\] which is a right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module map as the conjugation \(\mathsf{c}\) is an isomorphism from the right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structure to the left \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structure of \(\mathcal{A}_{\star}^{\mathbb{R}}\). **Proposition 3.3**.: _Let \(\mathbf{N}\) be a left \(\mathcal{A}_{\star}^{\mathbb{R}}\)-comodule with coproduct \(\psi_{\mathrm{L}}\). Then, for \(n\in\mathbf{N}\) and \(\varphi\in\mathcal{A}^{\mathbb{R}}\), the formula_ \[\varphi\cdot n=(\varphi\mathbf{c}\otimes\mathrm{id})\psi_{\mathrm{L}}(n)\] _defines a left \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathbf{N}\)._ Proof.: Using the coassociativity of the coaction, the statement reduces to checking that \[(\varphi\psi)(\mathsf{c}(a))=\sum\varphi\Big{(}\mathsf{c}\big{(}\eta_{ \mathrm{L}}(\psi(\mathsf{c}(a^{\prime}_{i})))a^{\prime\prime}_{i}\big{)}\Big{)}, \tag{3.5}\] for \(\varphi,\psi\in\mathcal{A}^{\mathbb{R}}\) and \(a\in\mathcal{A}_{\star}^{\mathbb{R}}\). The formula (3.5) follows from combining [B, Lemma 3.3(a)] with \(\mathsf{c}\circ\eta_{\mathrm{L}}=\eta_{\mathrm{R}}\) and \[\Delta(\mathsf{c}(a))=\sum_{i}\mathsf{c}(a^{\prime\prime}_{i})\otimes\mathsf{ c}(a^{\prime}_{i})\] whenever \(\Delta(a)=\sum_{i}a^{\prime}_{i}\otimes a^{\prime\prime}_{i}\). **Remark 3.3**.: The right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure on \(\mathcal{A}^{\mathbb{R}}_{\star}\) is defined [V, Section 12] such that \[a\cdot\mathfrak{n}_{\mathrm{R}}(m)(\varphi)=a(\varphi\cdot m)\] for \(m\in\mathbb{M}^{\mathbb{R}}_{2}\), \(a\in\mathcal{A}^{\mathbb{R}}_{\star}\) and \(\varphi\in\mathcal{A}^{\mathbb{R}}\). This shows that the evaluation pairing defines a map of \(\mathbb{M}^{\mathbb{R}}_{2}\)-bimodules, where the left \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure on \(\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}^{\mathbb{R}}_{2}}^{\mathrm{right}} \mathcal{A}^{\mathbb{R}}_{\star}\) is obtained via the left action on \(\mathcal{A}^{\mathbb{R}}\), and the right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure via the left action on \(\mathcal{A}^{\mathbb{R}}_{\star}\). Consequently, the left action constructed in Proposition 3.3 can be described as the composition \(\upphi_{\mathrm{L}}\) in the diagram Note that while \(\mathsf{c}\) is not a right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module map, the composition is a map of \(\mathbb{M}^{\mathbb{R}}_{2}\)-bimodules. If we set \(\mathbf{N}=\mathrm{H}^{\star}(\mathrm{X})\), i.e. the cohomology of a finite spectrum \(\mathrm{X}\) with the \(\mathcal{A}_{\star}\)-comodule structure of (3.1), Proposition 3.3 recovers the usual \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (see [B, Lemma 6.3]). Our next result reverse-engineers Proposition 3.3 to obtain a formula that calculates the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule on \(\mathrm{H}^{\star}(\mathrm{X})\) (\(\uppsi_{\mathrm{L}}\) in Figure 1.1) from the \(\mathcal{A}^{\mathbb{R}}\)-module on \(\mathrm{H}^{\star}(\mathrm{X})\) (\(\upphi_{\mathrm{L}}\) in Figure 1.1). 
Let \(\mathcal{B}\) be the monomial basis of the left \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure on \(\mathcal{A}^{\mathbb{R}}_{\star}\) (as in Section 2.2). For simplicity, let \(\mathbf{b}_{i}\) denote the elements of \(\mathcal{B}\), and let \(\mathbf{B}^{i}\in\mathcal{A}^{\mathbb{R}}\) be the dual basis in the following result. **Proposition 3.4**.: _Let \(\mathbf{N}\) be a left \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule with coaction map \(\uppsi_{\mathrm{L}}\). Then \(\uppsi_{\mathrm{L}}\) is related to \(\upphi_{\mathrm{L}}\) using the formula_ \[\uppsi_{\mathrm{L}}(n)=\sum_{i}c(\mathbf{b}_{i})\otimes(\mathbf{B}^{i}\cdot n),\] _where \(\cdot\) is the action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathbf{N}\) constructed using Proposition 3.3._ Proof.: Since \(\{c(\mathbf{b}_{i})\}\) is a basis for \(\mathcal{A}^{\mathbb{R}}_{\star}\) as a free right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module, it follows that there is a unique expression \(\uppsi_{\mathrm{L}}(n)=\sum_{i}c(\mathbf{b}_{i})\otimes n_{i}\) for appropriate elements \(n_{i}\) On the other hand, \[\mathbf{B}^{k}\cdot n = (\mathbf{B}^{k}\mathsf{c}\otimes\mathrm{id})\mathsf{\psi}_{\mathrm{L} }(n)\] \[= \sum_{i}\mathbf{B}^{k}\mathsf{c}(\mathsf{c}(\mathbf{b}_{i}))\otimes n _{i}\] \[= \sum_{i}\mathbf{B}^{k}(\mathbf{b}_{i})\otimes n_{i}\] \[= n_{k}\] by Proposition 3.3. ### Preliminary examples We now demonstrate the usefulness of Proposition 3.1, Proposition 3.3, and Proposition 3.4 by identifying the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{DX})\), for a few well-known finite \(\mathbb{R}\)-motivic finite complexes \(\mathrm{X}\). **Notation 3.2**.: In the following examples, the \(\mathbb{R}\)-motivic spectrum \(\mathrm{X}\) will satisfy Assumption 3.1. In particular, \(\mathrm{H}^{\star}(\mathrm{X})\) will be a free \(\mathbb{M}^{\mathbb{R}}_{2}\)-module. By \(\mathsf{x}_{\mathrm{i},\mathrm{j}}\), we will denote an element of its \(\mathbb{M}^{\mathbb{R}}_{2}\)-basis which lives in cohomological bidegree \((\mathrm{i},\mathrm{j})\). By \(\hat{\mathsf{x}}_{\mathrm{i},\mathrm{j}}\), we will denote an element of \((\mathrm{H}^{\star}(\mathrm{X}))^{\vee}\) dual to \(\mathsf{x}_{\mathrm{i},\mathrm{j}}\). Note that the bidegree of \(\hat{\mathsf{x}}_{\mathrm{i},\mathrm{j}}\) is \((-\mathrm{i},-\mathrm{j})\) under the isomorphism \((\mathrm{H}^{\star}(\mathrm{X}))^{\vee}\cong\mathrm{H}^{\star}(\mathrm{DX})\). **Example 3.1** (The \(\mathbb{R}\)-motivic mod \(2\) Moore spectrum).: As an \(\mathbb{M}^{\mathbb{R}}_{2}\)-module, \(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\) has generators \(\mathsf{x}_{0,0}\) and \(\mathsf{x}_{1,0}\). The \(\mathcal{A}^{\mathbb{R}}\)-module structure is then determined by the relations \[\mathrm{Sq}^{1}(\mathsf{x}_{0,0})=\mathsf{x}_{1,0},\ \mathrm{Sq}^{2}( \mathsf{x}_{0,0})=\rho\mathsf{x}_{1,0}.\] By Proposition 3.4, we get \[\mathsf{\psi}_{\mathrm{L}}(\mathsf{x}_{1,1})=1\otimes\mathsf{x}_{1,1},\ \mathsf{\psi}_{\mathrm{L}}(\mathsf{x}_{0,0})=1\otimes\mathsf{x}_{0,0}+\tau_{0} \otimes\mathsf{x}_{1,0}+\rho\xi_{1}\otimes\mathsf{x}_{1,0},\] which determines the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\). 
Then we apply Proposition 3.1, in particular (3.3), to obtain \[\mathrm{Sq}^{1}(\hat{\mathsf{x}}_{1,0})=\hat{\mathsf{x}}_{0,0},\ \mathrm{Sq}^{2}( \hat{\mathsf{x}}_{1,0})=\rho\hat{\mathsf{x}}_{0,0},\] which shows \(\left(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\right)^{\vee}\cong\Sigma^ {-1}\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\) as \(\mathcal{A}^{\mathbb{R}}\)-modules. This aligns with the fact that \(D(\mathbb{S}_{\mathbb{R}}/2)\) is equivalent to \(\Sigma^{-1}\mathbb{S}_{\mathbb{R}}/2\). **Example 3.2** (\(\mathbb{R}\)-motivic mod \(\mathsf{h}\) Moore spectrum).: As a graded \(\mathbb{M}^{\mathbb{R}}_{2}\)-module, \(\mathrm{H}^{\star}(\mathbb{S}/\mathsf{h})\) is isomorphic to \(\mathrm{H}^{\star}(\mathbb{S}/2)\). However, they differ in their \(\mathcal{A}^{\mathbb{R}}\)-module structures in that \[\mathrm{Sq}^{1}(\mathsf{x}_{0,0})=\mathsf{x}_{1,0},\ \mathrm{Sq}^{2}(\mathsf{x}_{0,0 })=0\] determines the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathbb{S}/\mathsf{h})\). By Proposition 3.4 \[\mathsf{\psi}_{\mathrm{L}}(\mathsf{x}_{1,1})=1\otimes\mathsf{x}_{1,1},\ \mathsf{\psi}_{ \mathrm{L}}(\mathsf{x}_{0,0})=1\otimes\mathsf{x}_{0,0}+\tau_{0}\otimes\mathsf{x }_{1,0},\] and using (3.3) we see that \(\left(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\right)^{\vee} \cong\Sigma^{-1}\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\). This aligns with the fact that \(D(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\) is equivalent to \(\Sigma^{-1}\mathbb{S}_{\mathbb{R}}/\mathsf{h}\). **Example 3.3**.: (The \(\mathbb{R}\)-motivic \(\mathfrak{Joker}\)) The \(\mathcal{A}^{\mathbb{R}}(1)\)-module of the \(\mathbb{R}\)-motivic \(\mathfrak{Joker}\)\(\mathcal{J}_{\mathbb{R}}\) (discussed in [GL]) is the quotient \(\mathcal{A}^{\mathbb{R}}(1)/\operatorname{Sq}^{3}\). In Figure 3.1, we have displayed a particular \(\mathcal{A}^{\mathbb{R}}\)-module extension of \(\mathcal{A}^{\mathbb{R}}(1)/\operatorname{Sq}^{3}\) obtained using Theorem 4.1. Using Proposition 3.4, in conjunction with Table 2.1, we notice that \[\begin{array}{rcl}\vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{4,2})&=&1 \otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{3,1})&=&1\otimes\mathsf{x}_{3,1}+ \uptau_{0}\otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{2,1})&=&1\otimes\mathsf{x}_{2,1}+ (\tau\upxi_{1}+\rho\uptau_{0}\upxi_{1}+\rho\uptau_{1}+\rho^{2}\upxi_{1}^{2} )\otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{1,0})&=&1\otimes\mathsf{x}_{1,0}+ \upxi_{1}\otimes\mathsf{x}_{3,1}+\uptau_{1}\otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{0,0})&=&1\otimes\mathsf{x}_{0,0}+ \uptau_{0}\otimes\mathsf{x}_{1,0}+\upxi_{1}\otimes\mathsf{x}_{2,1}+(\uptau_{ 0}\upxi_{1}+\uptau_{1})\otimes\mathsf{x}_{3,1}\\ &&+(\uptau_{0}\uptau_{1}+\rho^{2}\upxi_{2}+\rho^{2}\upxi_{1}^{3})\otimes \mathsf{x}_{4,2}\end{array}\] determines the \(\mathcal{A}_{\star}^{\mathbb{R}}\)-comodule structure of \(\mathrm{H}^{\star}(\mathcal{J}_{\mathbb{R}})\). Then (3.3) produces the \(\mathcal{A}^{\mathbb{R}}\)-module structure on the dual displayed in Figure 3.1. ## 4. Self-dual \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\) Let \(\mathsf{x}_{\mathrm{i,j}}\) and \(\mathsf{y}_{\mathrm{i,j}}\) denote the elements of the \(\mathbb{M}_{2}^{\mathbb{R}}\)-basis of \(\mathcal{A}^{\mathbb{R}}(1)\) introduced in [BGL, Notation 1.5] in bidegree (i,j). 
**Theorem 4.1**.: _[_BGL_, Theorem 1.6]_ _For every vector_ \[\overline{\uptau}=(\alpha_{03},\beta_{03},\beta_{14},\beta_{06},\beta_{25}, \beta_{26},\gamma_{36})\in\mathbb{F}_{2}^{7},\] _there exists a unique isomorphism class of \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\), which we denote by \(\mathcal{A}^{\mathbb{R}}_{\overline{\uptau}}(1)\), determined by the formulas_ \[\begin{array}{rcl}\operatorname{Sq}^{4}(\mathsf{x}_{0,0})&=&\beta_{03}( \rho\cdot\mathsf{y}_{3,1})+(1+\beta_{03}+\beta_{14})(\tau\cdot\mathsf{y}_{4,1 })+\alpha_{03}(\rho\cdot\mathsf{x}_{3,1})\\ \operatorname{Sq}^{4}(\mathsf{x}_{1,0})&=&\mathsf{y}_{5,2}+\beta_{14}(\rho \cdot\mathsf{y}_{4,1})\\ \operatorname{Sq}^{4}(\mathsf{x}_{2,1})&=&\beta_{26}(\tau\cdot\mathsf{y}_{6,2 })+\beta_{25}(\rho\cdot\mathsf{y}_{5,2})+j_{24}(\rho^{2}\cdot\mathsf{y}_{4,1 })\\ \operatorname{Sq}^{4}(\mathsf{x}_{3,1})&=&(\beta_{25}+\beta_{26})(\rho\cdot \mathsf{y}_{6,2})\\ \operatorname{Sq}^{4}(\mathsf{y}_{3,1})&=&\gamma_{36}(\rho\cdot\mathsf{y}_{6,2 })\\ \operatorname{Sq}^{8}(\mathsf{x}_{0,0})&=&\beta_{06}(\rho^{2}\cdot\mathsf{y}_{6,2 }),\end{array}\] _where \(j_{24}=\beta_{03}\gamma_{36}+\alpha_{03}(\beta_{25}+\beta_{26}).\) Further, any \(\mathcal{A}^{\mathbb{R}}\)-module whose underlying \(\mathcal{A}^{\mathbb{R}}(1)\)-module is free on one generator is isomorphic to one listed above._ Using Proposition 3.4, we calculate the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure \(\psi_{\mathrm{L}}\) on \(\mathcal{A}^{\mathbb{R}}_{\mathrm{\varphi}}(1)\): \[\psi_{\mathrm{L}}(\mathsf{y}_{6,2}) = 1\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{y}_{5,2}) = 1\otimes\mathsf{y}_{5,2}+\tau_{0}\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{y}_{4,1}) = 1\otimes\mathsf{y}_{4,1}+\xi_{1}\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{y}_{3,1}) = 1\otimes\mathsf{y}_{3,1}+\tau_{0}\otimes\mathsf{y}_{4,1}+(\tau_ {1}+\tau_{0}\xi_{1}+\gamma_{36}\rho\xi_{1}^{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{3,1}) = 1\otimes\mathsf{x}_{3,1}+\xi_{1}\otimes\mathsf{y}_{5,2}+(\tau_ {1}+(\beta_{25}+\beta_{26})\rho\xi_{1}^{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{2,1}) = 1\otimes\mathsf{x}_{2,1}+\tau_{0}\otimes\mathsf{x}_{3,1}+(\tau \xi_{1}+\rho\tau_{1}+\rho\tau_{0}\xi_{1}+\dot{j}_{24}\rho^{2}\xi_{1}^{2}) \otimes\mathsf{y}_{4,1}\] \[+(\tau_{1}+\tau_{0}\xi_{1}+\beta_{25}\rho\xi_{1}^{2})\otimes \mathsf{y}_{5,2}+(\tau_{0}\tau_{1}+(1+\beta_{26})\tau\xi_{1}^{2})\otimes \mathsf{y}_{6,2}\] \[+((1+\beta_{25})\rho\tau_{0}\xi_{1}^{2}+\rho\tau_{1}\xi_{1}+\dot{ j}_{24}\rho^{2}\xi_{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{1,0}) = 1\otimes\mathsf{x}_{1,0}+\xi_{1}\otimes\mathsf{y}_{3,1}+(\tau_{1 }+\beta_{14}\rho\xi_{1}^{2})\otimes\mathsf{y}_{4,1}+\xi_{1}^{2}\otimes\mathsf{ y}_{5,2}\] \[+(\tau_{1}\xi_{1}+\gamma_{36}\rho\xi_{1}^{3}+(\beta_{14}+\gamma_{ 36})\rho\xi_{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{0,0}) = 1\otimes\mathsf{x}_{0,0}+\tau_{0}\otimes\mathsf{x}_{1,0}+\xi_{1 }\otimes\mathsf{x}_{2,1}+(\tau_{1}+\alpha_{03}\rho\xi_{1}^{2})\otimes\mathsf{ x}_{3,1}\] \[+(\tau_{1}+\tau_{0}\xi_{1}+\beta_{03}\rho\xi_{1}^{2})\otimes \mathsf{y}_{3,1}\] \[+(\tau_{0}\tau_{1}+(\beta_{03}+\beta_{14})\tau\xi_{1}^{2}+(\beta_ {03})\rho\tau_{0}\xi_{1}^{2}+\dot{j}_{24}\rho^{2}\xi_{2}+\dot{j}_{24}\rho^{2} \xi_{1}^{3})\otimes\mathsf{y}_{4,1}\] \[+(\tau_{1}\xi_{1}+\tau_{0}\xi_{1}^{2}+\beta_{25}\rho\xi_{1}^{3}+( \alpha_{03}+\beta_{25})\rho\xi_{2})\otimes\mathsf{y}_{5,2}\] 
\[+(\beta_{26}\tau\xi_{1}^{3}+(\beta_{26}+\gamma_{36})\rho\tau_{0} \xi_{1}^{3}+(\beta_{25}+\beta_{26}+\gamma_{36})\rho\tau_{1}\xi_{1}^{2})\otimes \mathsf{y}_{6,2}\] \[+((1+\beta_{03}+\beta_{14}+\beta_{26})\tau\xi_{2}+(1+\beta_{03}+ \beta_{26}+\gamma_{36})\rho\tau_{0}\xi_{2})\otimes\mathsf{y}_{6,2}\] \[+((1+\alpha_{03}+\beta_{03}+\beta_{25}+\beta_{26}+\gamma_{36})\rho \tau_{2}+\dot{j}_{24}\rho^{2}\xi_{1}\xi_{2})\otimes\mathsf{y}_{6,2}\] \[+(\tau_{0}\tau_{1}\xi_{1}+(\dot{j}_{24}+\beta_{06})\rho^{2}\xi_{1}^ {4})\otimes\mathsf{y}_{6,2}.\] Using (3.3), we get the following result, where \(\hat{\mathsf{x}}_{\mathrm{i,j}}\) and \(\hat{\mathsf{y}}_{\mathrm{i,j}}\) are the elements in \((\mathcal{A}^{\mathbb{R}}_{\mathrm{\varphi}}(1))^{\vee}\) dual to \(\mathsf{x}_{\mathrm{i,j}}\) and \(\mathsf{y}_{\mathrm{i,j}}\), respectively. Figure 4.1. A singly-generated free \(\mathcal{A}^{\mathbb{R}}(1)\)-module (on the left), and its dual (on the right). **Theorem 4.2**.: _The \(\mathcal{A}^{\mathbb{R}}(1)\)-module structure on the dual \((\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1))^{\vee}\) is as displayed in the right of Figure 4.1. Moreover, its \(\mathcal{A}^{\mathbb{R}}\)-module structure is determined by_ \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{6,2}) = (\beta_{25}+\beta_{26})(\rho\cdot\hat{\mathsf{x}}_{3,1})+(1+\beta_ {26})(\tau\cdot\hat{\mathsf{x}}_{2,1})+\gamma_{36}(\rho\cdot\hat{\mathsf{y}}_{3,1})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{5,2}) = \hat{\mathsf{x}}_{1,0}+\beta_{25}(\rho\cdot\hat{\mathsf{x}}_{2,1})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{4,1}) = (\beta_{03}+\beta_{14})(\tau\cdot\hat{\mathsf{x}}_{0,0})+\beta_{14 }(\rho\cdot\hat{\mathsf{x}}_{1,0})+\underline{\jmath}_{24}(\rho^{2}\cdot\hat{ \mathsf{x}}_{2,1})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{3,1}) = \beta_{03}(\rho\cdot\hat{\mathsf{x}}_{0,0})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{x}}_{3,1}) = \alpha_{03}(\rho\cdot\hat{\mathsf{x}}_{0,0})\] \[\mathrm{Sq}^{8}(\hat{\mathsf{y}}_{6,2}) = (\underline{\jmath}_{24}+\beta_{06})(\rho^{2}\cdot\hat{\mathsf{x} }_{0,0}).\] **Corollary 4.1**.: _For the \(\mathcal{A}^{\mathbb{R}}\)-module \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\), its (regraded) dual is isomorphic to_ \[\Sigma^{6,2}(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1))^{\vee}\cong \mathcal{A}^{\mathbb{R}}_{\delta(\overline{\nu})}(1),\] _where \(\delta(\overline{\nu})=(\gamma_{36},\beta_{25}+\beta_{26},\beta_{25},\underline {\jmath}_{24}+\beta_{06},\beta_{14},\beta_{03}+\beta_{14},\alpha_{03}).\) Thus, \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) is self dual if and only if_ 1. \(\alpha_{03}=\gamma_{36}\)_,_ 2. \(\beta_{03}=\beta_{25}+\beta_{26}\)_, and_ 3. \(\beta_{14}=\beta_{25}\)_._ **Remark 4.3**.: The constant \(\underline{\jmath}_{24}\) has a geometric significance noted in [1, Remark 1.21]. It follows from Corollary 4.1 that \(\underline{\jmath}_{24}=0\) whenever \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) is self-dual. **Remark 4.4**.: The underlying classical \(\mathcal{A}\)-module structure on \(\mathcal{A}(1)\) is self-dual if and only if \(\beta_{26}=\beta_{03}+\beta_{14}\). In the presence of (3), this is equivalent to (2). Thus the conditions of Corollary 4.1 can be thought of as the classical condition, plus conditions (1) and (3). In [1], we showed that the \(\mathcal{A}^{\mathbb{R}}\)-modules \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) can be realized as the cohomology of an \(\mathbb{R}\)-motivic spectrum for all values of \(\overline{\nu}\). 
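For the reader's convenience, here is a quick count of the self-dual structures, a sketch deduced only from Theorem 4.1 and Corollary 4.1 above: imposing \(\overline{v}=\delta(\overline{v})\) amounts to \(\gamma_{36}=\alpha_{03}\), \(\beta_{25}=\beta_{14}\), and \(\beta_{03}=\beta_{25}+\beta_{26}=\beta_{14}+\beta_{26}\), so the four values \(\alpha_{03},\beta_{14},\beta_{26},\beta_{06}\in\mathbb{F}_{2}\) may be chosen freely while the remaining entries are determined, giving \(2^{4}=16\) self-dual isomorphism classes among the \(2^{7}=128\). Moreover, under these conditions
\[
j_{24}=\beta_{03}\gamma_{36}+\alpha_{03}(\beta_{25}+\beta_{26})=(\beta_{14}+\beta_{26})\alpha_{03}+\alpha_{03}(\beta_{14}+\beta_{26})=0,
\]
consistent with Remark 4.3.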
**Corollary 4.2**.: _Suppose \(\mathcal{A}^{\mathbb{R}}_{1}[\overline{\nu}]\) is an \(\mathbb{R}\)-motivic spectrum realizing \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\), and suppose that \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) is a self-dual \(\mathcal{A}^{\mathbb{R}}\)-module. Then \(\mathcal{A}^{\mathbb{R}}_{1}[\overline{\nu}]\) is the cofiber of a \(v_{1}\)-self-map on either \(\mathcal{Y}^{\mathbb{R}}_{2,1}\) or \(\mathcal{Y}^{\mathbb{R}}_{h,1}\)._ Proof.: By [1, Theorem 1.8], the \(\mathbb{R}\)-motivic spectrum \(\mathcal{A}^{\mathbb{R}}_{1}[\overline{\nu}]\) is the cofiber of a \(v_{1}\)-self map on \(\mathcal{Y}^{\mathbb{R}}_{2,1}\) if \(\beta_{25}+\beta_{26}+\gamma_{36}=1\) and \(\alpha_{03}+\beta_{03}=1\), whereas it is the cofiber of a \(v_{1}\)-self-map on \(\mathcal{Y}^{\mathbb{R}}_{h,1}\) if \(\beta_{25}+\beta_{26}+\gamma_{36}=0\) and \(\alpha_{03}+\beta_{03}=0\). But conditions (1) and (2) of Corollary 4.1 imply that \(\beta_{25}+\beta_{26}+\gamma_{36}\) is equal to \(\alpha_{03}+\beta_{03}\). Our main results Theorem 1.1 and Theorem 1.3 follows from Corollary 4.1 and Corollary 4.2 respectively. **Remark 4.5**.: Using the Betti realization functor, [1] produced \(\mathrm{C}_{2}\)-equivariant realizations of analogous \(\mathcal{A}^{\mathrm{C}_{2}}\)-modules \(\mathcal{A}^{\mathrm{C}_{2}}_{\overline{\nu}}(1)\). Using the comparison result [1, Theorem 1.19], the \(\mathcal{A}\)-module structures on \(\Phi(\mathcal{A}^{\mathrm{C}_{2}}_{1}[\overline{\nu}])\), the geometric fixed points of \(\mathcal{A}^{\mathrm{C}_{2}}_{1}[\overline{\nu}]\), was identified in [1, Figure 4.12]. In Figure 4.2, we record the \(\mathcal{A}\)-module structure on the geometric fixed points of a self-dual \(\mathcal{A}^{\mathrm{C}_{2}}_{1}[\overline{\nu}]\). ## Appendix A On the antiautomorphism of \(\mathcal{A}^{\mathbb{R}}\) Although Boardman [2, SS6] pointed out that the set of E-cohomology operations \([\mathrm{E},\mathrm{E}]^{*}\) may not necessarily have an antiautomorphism for a cohomology theory \(\mathrm{E}\), we find the case of \(\mathrm{E}=\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\) a rather curious one. The case of \(\mathrm{E}=\mathbf{H}\mathbb{F}_{2}\) is exceptional; the Steenrod algebra \(\mathcal{A}:=[\mathbf{H}\mathbb{F}_{2},\mathbf{H}\mathbb{F}_{2}]_{*}\) is well-known to be a Hopf algebra and, therefore, equipped with an antiautomorphism \(\chi:\mathcal{A}\xrightarrow{\ \ }\mathcal{A}\). The composition of extension of scalars and Betti realization, induces maps of Steenrod algebras where \(\pi_{1}\) sends \(\rho\) to \(0\) and \(\pi_{2}\) sends \(\tau\) to \(1\). The antiautomorphism \(\chi\) of the classical Steenrod algebra is known to lift along \(\pi_{2}\), as the \(\mathbb{C}\)-motivic Steenrod algebra is a connected bialgebra. However, lifting \(\chi^{\mathbb{C}}\) along \(\pi_{1}\) is less straightforward. The dual \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}_{\star}\) is a Hopf _algebroid_, rather than a Hopf algebra, so that its dual is not a Hopf algebra. One feature that distinguishes \(\mathcal{A}^{\mathbb{R}}\) from \(\mathcal{A}^{\mathbb{C}}\) is the fact that \(\tau\) is not central in \(\mathcal{A}^{\mathbb{R}}\). In the following result, we use the commutators \([\tau,\mathrm{Sq}^{2^{n}}]\) in \(\mathcal{A}^{\mathbb{R}}\) (computed using the Cartan formula [2, Proposition 9.7]) to compute the values of a hypothetical antiautomorphism in low degrees. 
**Proposition A.1**.: _Suppose that \(\chi^{\mathbb{R}}\colon\mathcal{A}^{\mathbb{R}}\longrightarrow\mathcal{A}^{ \mathbb{R}}\) is a ring antihomomorphism and an involution. Then_ \[\chi^{\mathbb{R}}(\tau) = \tau\] \[\chi^{\mathbb{R}}(\rho) = \rho\] \[\chi^{\mathbb{R}}(\operatorname{Sq}^{1}) = \operatorname{Sq}^{1}\] \[\chi^{\mathbb{R}}(\operatorname{Sq}^{2}) = \operatorname{Sq}^{2}+\rho\operatorname{Sq}^{1}\] \[\chi^{\mathbb{R}}(\operatorname{Sq}^{4}) = \operatorname{Sq}^{4}+\rho\operatorname{Sq}^{2}\operatorname{Sq}^{ 1}+\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}\operatorname{Sq}^{1}\,.\] Proof.: If \(\chi^{\mathbb{R}}\) is a ring antihomomorphism then (A.1) \[\chi^{\mathbb{R}}[r,s]=[\chi^{\mathbb{R}}r,\chi^{\mathbb{R}}s]\] in characteristic \(2\). Since \(\tau\) and \(\operatorname{Sq}^{1}\) are unique \(\mathbb{F}_{2}\)-generators in their bidegree and \(\chi^{\mathbb{R}}\) is an automorphism, it follows that \[\chi^{\mathbb{R}}(\tau)=\tau\qquad\text{and}\qquad\chi^{\mathbb{R}}( \operatorname{Sq}^{1})=\operatorname{Sq}^{1}\,.\] For degree reasons, \(\chi^{\mathbb{R}}(\operatorname{Sq}^{2})\) must be \(\operatorname{Sq}^{2}+\varepsilon\rho\operatorname{Sq}^{1}\), where \(\varepsilon\) is either \(0\) or \(1\). But the commutator \([\tau,\operatorname{Sq}^{2}]\) is equal to \(\rho\tau\operatorname{Sq}^{1}\). Applying (A.1), we see that \[\chi^{\mathbb{R}}(\rho\tau\operatorname{Sq}^{1}) = [\chi^{\mathbb{R}}(\tau),\chi^{\mathbb{R}}(\operatorname{Sq}^{2})]\] \[\Rightarrow \operatorname{Sq}^{1}\tau\rho = [\tau,\operatorname{Sq}^{2}+\varepsilon\rho\operatorname{Sq}^{1}]\] \[\Rightarrow \rho\tau\operatorname{Sq}^{1}+\rho^{2} = \rho\tau\operatorname{Sq}^{1}+\varepsilon\rho^{2},\] and therefore, \(\varepsilon\) must be \(1\). Similarly, degree considerations imply that \(\chi^{\mathbb{R}}(\operatorname{Sq}^{4})\) must be of the form \(\operatorname{Sq}^{4}+\delta\rho\operatorname{Sq}^{1}\operatorname{Sq}^{2}+ \varepsilon\rho\operatorname{Sq}^{2}\operatorname{Sq}^{1}+\lambda\tau \operatorname{Sq}^{1}\operatorname{Sq}^{2}\operatorname{Sq}^{1}\). The commutator \([\tau,\operatorname{Sq}^{4}]\) is \(\rho\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}\), so we conclude that \[[\chi^{\mathbb{R}}\tau,\chi^{\mathbb{R}}\operatorname{Sq}^{4}] = [\tau,\operatorname{Sq}^{4}+\delta\rho\operatorname{Sq}^{1} \operatorname{Sq}^{2}+\varepsilon\rho\operatorname{Sq}^{2}\operatorname{Sq}^{1} +\lambda\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}\operatorname{Sq}^{1}]\] \[= (1+\lambda)\rho\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}+ \lambda\rho\tau\operatorname{Sq}^{2}\operatorname{Sq}^{1}+(\delta+\varepsilon) \rho^{2}\operatorname{Sq}^{2}+\delta\rho^{3}\operatorname{Sq}^{1}\] must agree with \[\chi^{\mathbb{R}}(\rho\tau\operatorname{Sq}^{1}\operatorname{Sq}^{ 2}) = (\operatorname{Sq}^{2}+\rho\operatorname{Sq}^{1})\operatorname{Sq}^{ 1}\tau\rho\] \[= \rho\tau\operatorname{Sq}^{2}\operatorname{Sq}^{1}+\rho^{2} \operatorname{Sq}^{2},\] and therefore, \(\delta=0\), \(\varepsilon=1\), and \(\lambda=1\) as desired. Proposition A.1 suggests there might be an \(\mathbb{R}\)-motivic antiautomorphism on the subalgebra \(\mathcal{A}^{\mathbb{R}}(2):=\mathbb{M}^{\mathbb{R}}_{2}\langle\operatorname{Sq }^{1},\operatorname{Sq}^{2},\operatorname{Sq}^{4}\rangle\subset\mathcal{A}^{ \mathbb{R}}\). It seems likely that the method above can be extended to produce an antiautomorphism on all of \(\mathcal{A}^{\mathbb{R}}\). However, we leave open the question of whether or not this is possible. 
On the other hand, the following remark shows that an antihomomorphism on \(\mathcal{A}^{\mathbb{R}}\) may not be directly of use in dualizing \(\mathcal{A}^{\mathbb{R}}\)-modules. **Remark A.1**.: Note that if \(\mathbf{N}\) is an \(\mathcal{A}^{\mathbb{R}}\)-module, then the action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathbf{N}\) is not \(\mathbb{M}^{\mathbb{R}}_{2}\)-linear, so that, in contrast to the classical case, it does not induce a right \(\mathcal{A}^{\mathbb{R}}\)-action on the dual \(\mathbf{N}^{\vee}\). Even if \(\mathcal{A}^{\mathbb{R}}\) were to be hypothetically equipped with an antiautomorphism \(\chi^{\mathbb{R}}\), this may not be so useful for the purpose of dualization. The reason is that the classical formula (1.1) does not work in this setting. More precisely, let \(\mathbf{N}\) be an \(\mathcal{A}^{\mathbb{R}}\)-module, let \(\lambda\in\mathbf{N}^{\vee}\), \(\varphi\in\mathcal{A}^{\mathbb{R}}\), and \(n\in\mathbf{N}\). Then defining a new action \(\varphi\odot\lambda\) by \[(\varphi\odot\lambda)(n)=\lambda(\chi^{\mathbb{R}}\varphi\cdot n)\] does not produce an \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear function. For instance, consider the case \(\mathbf{N}=\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\) from Example 3.2. Then \((\mathrm{Sq}^{2}\odot\hat{\mathsf{x}}_{1,0})(\tau\,\mathsf{x}_{0,0})\) vanishes, whereas \((\mathrm{Sq}^{2}\odot\hat{\mathsf{x}}_{1,0})(\mathsf{x}_{0,0})\) is equal to \(\rho\). It follows that the formula for \(\mathrm{Sq}^{2}\odot\hat{\mathsf{x}}_{1,0}\) is not \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear and is therefore not a valid element of \(\mathbf{N}^{\vee}\).
The $\mathbb{R}$-motivic cohomology of an $\mathbb{R}$-motivic spectrum is a module over the $\mathbb{R}$-motivic Steenrod algebra $\mathcal{A}^{\mathbb{R}}$. In this paper, we describe how to recover, as an $\mathcal{A}^{\mathbb{R}}$-module, the $\mathbb{R}$-motivic cohomology of the Spanier-Whitehead dual $\mathrm{DX}$ of an $\mathbb{R}$-motivic finite complex $\mathrm{X}$…
2309.06351
Chemically inspired Erdős-Rényi oriented hypergraphs
High-order structures have been recognised as suitable models for systems going beyond the binary relationships for which graph models are appropriate. Despite their importance and the surge in research on these structures, their random cases have only recently become subjects of interest. One of these high-order structures is the oriented hypergraph, which relates couples of subsets of an arbitrary number of vertices. Here we develop the Erd\H{o}s-R\'enyi model for oriented hypergraphs, which corresponds to the random realisation of oriented hyperedges of the complete oriented hypergraph. A particular feature of random oriented hypergraphs is that the ratio between their expected number of oriented hyperedges and their expected degree or size is 3/2 for a large number of vertices. We highlight the suitability of oriented hypergraphs for modelling large collections of chemical reactions and the importance of random oriented hypergraphs for analysing the unfolding of chemistry.
Angel Garcia-Chung, Marisol Bermúdez-Montaña, Peter F. Stadler, Jürgen Jost, Guillermo Restrepo
2023-09-12T16:16:25
http://arxiv.org/abs/2309.06351v1
# Chemically inspired Erdos-Renyi oriented hypergraphs ###### Abstract High-order structures have been recognised as suitable models for systems going beyond the binary relationships for which graph models are appropriate. Despite their importance and surge in research on these structures, their random cases have been only recently become subjects of interest. One of these high-order structures is the oriented hypergraph, which relates couples of subsets of an arbitrary number of vertices. Here we develop the Erdos-Renyi model for oriented hypergraphs, which corresponds to the random realisation of oriented hyperedges of the complete oriented hypergraph. A particular feature of random oriented hypergraphs is that the ratio between their expected number of oriented hyperedges and their expected degree or size is \(3/2\) for large number of vertices. We highlight the suitability of oriented hypergraphs for modelling large collections of chemical reactions and the importance of random oriented hypergraphs to analyse the unfolding of chemistry. Keywords: graphs, hypergraphs, chemical space, random model, Erdos-Renyi. ## 1 Introduction Graphs are often selected as the mathematical structure to analyse binary relationships between objects of a system [1, 2]. As in any mathematical setting, it is important to determine the bounds of the structures modelling the systems. This allows for determining how far or close a system is from its theoretical extremes, which leads to study the lower and upper bounds of systems as well as its random cases. In graph theory, this amounts to determine the edge-less, complete and random graphs. The edge-less graph corresponds to a system devoid of relationships, while the complete graph to a system depicting the maximum number of relationships. In turn, the random graph corresponds to a system whose relationships are randomly assigned. For the sake of clarity and for setting up the notation, a _graph_\(G=(V,E)\) corresponds to a set of _vertices_\(V\) and a set of _edges_\(E\) (\(E\subseteq\{\{x,y\}:x,y\in V\}\)). Note that we consider simple graphs, that is, graphs without multiple edges and without loops. Thus, \(E\) is a set, but not a multiset, and if \(\{x,y\}\in E\), it follows that \(x\neq y\). While the edge-less and complete graphs are straightforwardly defined, the former as a graph \(G\) with an empty set \(E\) and the second as one with \(n(n-1)/2\) edges, the random graph allows for several approximations. The earliest and most general random graph model was reported by Erdos and Renyi in 1959 [3] and is constructed by the random assignment of edges on the set \(V\), with probability of assignment \(p\). That is, given a set \(V\) of \(n\) vertices, the random graph corresponds to the realisation, or not, of every possible edge on \(V\). The probability of realisation of the edges is given by \(p\). This can be thought of as the result of an algorithm that takes every possible couple of vertices in \(V\) and decides whether to link them or not. The decision depends on generating a random number, and if that number happens to be less than a predetermined value, denoted as \(p\), then the vertices are connected; otherwise, they remain disconnected. Despite the widespread use of graphs, it has been acknowledged that certain systems exhibit relationships that surpass the typical binary relations represented by graphs [4, 5]. These high-order relations refer to \(k\)-ary relationships, which involve interactions among \(k\) or less vertices, where \(k>2\). 
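The Erdős-Rényi construction just described translates directly into a few lines of code. The following Python sketch (standard library only; the function name and default values are our own, for illustration) realises each of the \(n(n-1)/2\) possible edges independently with probability \(p\).

```python
import itertools
import random

def erdos_renyi_graph(n, p, seed=None):
    """Random graph G(n, p): every possible edge on n vertices is realised
    independently with probability p (simple graph: no loops, no multi-edges)."""
    rng = random.Random(seed)
    vertices = range(n)
    edges = {frozenset(pair)
             for pair in itertools.combinations(vertices, 2)
             if rng.random() < p}
    return set(vertices), edges

if __name__ == "__main__":
    V, E = erdos_renyi_graph(n=10, p=0.3, seed=1)
    # Expected number of edges is p * n * (n - 1) / 2 = 13.5 for these values.
    print(f"|V| = {len(V)}, |E| = {len(E)}")
```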
Examples of these systems include functional and structural brain networks [6, 7], protein interaction networks [8], chemical reaction networks [9, 10, 11, 12], as well as semantic [13, 14] and co-authorship networks [15]. For instance, in co-authorship networks, a high-order relationship, say of order five, corresponds to the five authors of a five-author paper. Mathematical settings to address high-order relations include simplicial complexes and hypergraphs [16]. The former are selected for cases where nested sets of related vertices are relevant and the latter for cases where arbitrary sets of vertices are the focus. Hence, hypergraphs are more general than simplicial complexes and have been the subject of recent studies upon their properties and applications [17, 18, 19, 20, 21, 22, 23]. One of the applications of hypergraphs is for modelling chemical reactions, which becomes the leading example and motivation of the current paper. In this chemical setting, hypergraphs require to encode binary relationships among sets of vertices (substances). Despite the use of hypergraphs in chemistry, the extreme cases of chemical hypergraphs have not been yet studied. Here we report extreme cases of chemical hypergraphs. That is, the mathematics of edge-less, complete and random chemical hypergraphs, as well as some of their properties. Before discussing these structures, we provide some details on their use for modelling chemical reactions. ## 2 Graphs and hypergraphs for modelling the chemical space Chemical space spans all substances and reactions reported over the history of chemistry [24, 9] and has been initially modelled with graphs [25, 26]. In this setting, reactions in Figure 1a are modelled as graphs as shown in Figure 1b.1 The model encodes the binary relationship between an educt and a product of a reaction. Although informative, the graph model misses important chemical information. For instance, whether a substance can be reached from another in a given chemical space. This is the case of A and E (in Figure 1b), which are connected via two edges: {A, F} and {F, E}. Nevertheless, Figure 1a shows that from A, E cannot be obtained. This shortcoming is solved by adding more structure to the graph, namely by adding direction to the edges and by modelling reactions as _directed graphs_ (Figure 1c). This model now shows that E cannot be reached from A, but that G can be reached from A. Despite the improvements in incorporating the direction of the chemical transformation, the directed graph model still lacks other important chemical aspects, namely that the model does not inform which substances react to produce other chemicals. This shortcoming is solved by modelling reactions as _hypergraphs_. Figure 1d shows the hypergraph for the reactions of Figure 1a, which encodes the different sets of substances in the reactions of the chemical space analysed. Those sets correspond to actual mixtures of substances either before (educts) or after the reaction has taken place (products). From Figure 1d is clear that A is found as mixed with B, as well as E with D and A; and D with C. Likewise, the hypergraph shows that F and G are substances whose transformations do not require any other substance of the chemical space. Footnote 1: Models of the chemical space strongly emphasise the role of reactions as the “gluing” aspect relating substances, which, in turn, endows the set of substances with a structure. This is what actually turn the set of substances into a space [9]. 
In such a setting, however, non-connected substances, often arising from chemical extractions, play an important role in the space, as they represent new non-synthetic chemicals, which may, or not, remain disconnected in the space, or which require a certain amount of time to be connected to the network. This was the case of the noble gases, for instance, which, for many years, remained as isolated substances of the chemical space. Moreover, determining the average time required to connect a substance to the network of the chemical space is of central importance for studies on the evolution of chemical knowledge, as well for the chemical industry [27]. As chemical reactions are actually directed binary relationships among sets of substances, that is among educts and products, a further refinement of the model requires introducing this binary relation, which is encoded by _directed hypergraphs_. Figure 1e illustrates how the directed hypergraph encodes the transformation of the set of educts {A, B} into the products {C, D}, as well as the reaction of {A,D,E} to produce substance F. Likewise, it shows the rearrangement of F into G. Alternative representations of the directed hypergraph of Figure 1e are shown in Figures 1f and g. Figure 1f, besides emphasising the directed relationship among educts and products, highlights the role of substances (vertices) in the transformation. Figure 1g maps the directed hypergraph back to the realm of directed graphs.2 This time not to the directed graphs of Figure 1c but rather to _directed bipartite graphs_, where besides the usual vertices representing substances, a new set of vertices is introduced, namely that representing reactions. A relaxed version of structures in Figures 1e, f and g are the corresponding undirected structures, shown in Figures 1h, i and j, which are different representations of the associated _oriented hypergraph_. Note how oriented hypergraphs constitute a suitable model for reversible reactions. Thus, for instance, the oriented hypergraph {A, B}-{C, D} encodes the reactions A + B \(\rightarrow\) C + D, as well as C + D \(\rightarrow\) A + B.3 This encoding is chemically sound, as every reaction is intrinsically reversible. The actual direction observed in wet-lab experiments arises from the energetic conditions in which molecules are embedded in the reaction process.4 It is upon the oriented hypergraph model for chemical reactions that we study its extreme cases and develop an Erdos-Renyi random model. In the next section the mathematical elements setting up the stage for such study are presented. ## 3 Chemically inspired oriented hypergraphs As discussed in the previous section, the most informative model for the chemical space is the directed hypergraph. Nevertheless, for the sake of generality, we report in the current paper results for extreme cases and an Erdos-Renyi model for oriented hypergraphs. That is, in what follows, we regard the chemical space as devoid of direction in its chemical reactions and we focus only on the connectivity of the substances via reactions, while preserving the important aspect of chemical reactions of relating sets of substances. We introduce some definitions, which assume a fixed number \(n\) of vertices gathered on the set \(V\). Upon \(V\), subsets of vertices are defined, which are pair-wise related by the oriented hypergraph. These subsets gather together substances appearing as either educs or products of reactions in the chemical space. 
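A minimal way to encode the oriented hypergraph of Figure 1 in code is as unordered pairs of disjoint substance sets; the frozenset-pair representation below is an illustrative choice for this sketch, not a structure prescribed by the paper.

```python
# The chemical space of Figure 1 as an oriented hypergraph: each oriented
# hyperedge is an unordered pair of disjoint, non-empty sets of substances.
V = {"A", "B", "C", "D", "E", "F", "G"}

def oriented_hyperedge(educts, products):
    """Build an oriented hyperedge from two disjoint, non-empty hypervertices."""
    X, Y = frozenset(educts), frozenset(products)
    assert X and Y and X.isdisjoint(Y), "hypervertices must be non-empty and disjoint"
    return frozenset({X, Y})

E = {
    oriented_hyperedge({"A", "B"}, {"C", "D"}),   # A + B <-> C + D
    oriented_hyperedge({"A", "D", "E"}, {"F"}),   # A + D + E <-> F
    oriented_hyperedge({"F"}, {"G"}),             # F <-> G
}

# The pair is unordered, so {A, B}-{C, D} covers both reaction directions,
# which is exactly what the oriented (rather than directed) hypergraph encodes.
print(len(E))   # 3 oriented hyperedges over the 7 substances in V
```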
Figure 1: Chemical reactions as graphs and hypergraphs. All structures, from b to j, correspond to (hyper)graph models for the chemical reactions in a, which constitute a chemical space of seven substances and three reactions.

Chemical reactions can be classified as either catalytic or non-catalytic. The former involve the use of a catalyst, which is a substance added to the educts to speed up the synthesis of reaction products. Catalysts are not consumed in the reaction, which distinguishes them from the educts. Chemical notation encodes the catalyst as a label of the reaction. If, for instance, A + B \(\rightarrow\) C + D is catalysed by E, the reaction is written down as A + B \(\xrightarrow{E}\) C + D. Otherwise, if there is no catalyst involved, A + B \(\rightarrow\) C + D represents the reaction. In this classification, autocatalytic reactions constitute a particular case of catalytic reactions, where at least one of the educts acts as a catalyst. Hence, A + B \(\rightarrow\) B + C is an example of an autocatalytic reaction,5 which can be considered as the sequence of two reactions: first A + B \(\rightarrow\) Z, followed by Z \(\rightarrow\) B + C, where Z is known as the reaction intermediate. Hence, oriented hypergraphs turn out to be suitable models for all chemical reactions. Therefore, we model the chemical space as discussed in Definition 1. Footnote 5: Note that stoichiometric coefficients are disregarded in this notation. **Definition 1**.: _A chemical space of \(n\) substances gathered in \(V\) is modelled as an oriented hypergraph \(G=(V,E)\), with oriented hyperedges (reactions) gathered in \(E\subseteq\{\{X,Y\}:X,Y\in\mathcal{P}(V)\setminus\{\varnothing\}\text{ and }X\cap Y=\varnothing\}\). \(X\) and \(Y\), which are sets of substances, are called hypervertices of the chemical space and every oriented hyperedge \(r\in E\) is called a chemical reaction of the chemical space._ Importantly, in our framework, substances consumed or produced in a chemical reaction are restricted to be in the set \(V\). Moreover, hypervertices cannot be empty (Definition 1) because there is no chemical reaction leading to or starting from an empty set of substances. Likewise, a reaction cannot start from the complete set of substances, as there would be no room for synthesising a new substance. Similarly, as no reaction can lead to the set containing all substances, the hypervertex containing all vertices is disregarded from the model.6 Footnote 6: Moreover, in this setting we are disregarding the particular reaction conditions at which reactions are carried out. Nonetheless, they can be incorporated as labels of oriented hyperedges. Therefore, the maximum number of hypervertices in a chemical space is given by \(2^{n}-2\). They correspond to the maximum number of combinations of substances chemists can try, which may lead to another set of substances within the given chemical space.7 Now we classify those sets of substances by the number of substances they contain. Footnote 7: This upper bound holds significance in, for instance, research on the origin of life. A mathematical setting for such studies is provided by Dittrich's chemical organisation theory [29], where finding sequences of reactions involving a given subset of substances of the chemical space is an important aspect of the approach. Let \(V\) be the set of \(n\) vertices (substances) and \(B_{a}\) the set gathering together subsets of \(V\) of \(a\) vertices.
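The count \(2^{n}-2\) of hypervertices and the classification by size developed next can be checked by brute force for small \(n\); the helper below is an illustrative sketch using only the standard library.

```python
from itertools import combinations
from math import comb

def hypervertices(V):
    """All non-empty proper subsets of V, i.e. the possible hypervertices."""
    V = sorted(V)
    return [frozenset(c) for a in range(1, len(V)) for c in combinations(V, a)]

V = {"A", "B", "C", "D"}
B = hypervertices(V)
n = len(V)
print(len(B), 2 ** n - 2)                        # 14 14
# Grouping the hypervertices by size a reproduces |B_a| = C(n, a).
for a in range(1, n):
    print(a, sum(1 for b in B if len(b) == a), comb(n, a))
```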
Thus, \(B_{1}\) collects all substances (\(B_{1}=V\)), \(B_{2}\) all couples of substances and so forth. The complete set of hypervertices \(B\) is given by \[B=\bigcup_{a=1}^{n-1}B_{a}=\mathcal{P}(V)\setminus\{\varnothing,V\}. \tag{1}\] The number of hypervertices of size \(a\) corresponds to the cardinality of \(B_{a}\), which is given by the number of combinations of \(a\) vertices that can be obtained out of \(n\) vertices. Thus, \[|B_{a}|=\mathcal{C}_{a}^{n}=\binom{n}{a}. \tag{2}\] As \(B\) contains all possible sets of vertices (hypervertices of the oriented hypergraph) involved in chemical reactions for a given set of vertices, a suitable object gathering information on the connectivity of these hypervertices, that is on the chemical reactions, is the generalised _adjacency matrix of the oriented hypergraph_ \(\mathbf{M}=[M_{i,j}]_{2^{n}-2\times 2^{n}-2}\), where the indices \(i,j=1,2,\ldots,2^{n}-2\) run over all the possible hypervertices for a given \(n\). The components of the adjacency matrix are given as \[M_{i,j}=\left\{\begin{array}{ll}1&\mbox{if $r=\{b_{i},b_{j}\}\in E$,}\\ 0&\mbox{otherwise.}\end{array}\right. \tag{3}\] Thus, any 1-entry of \(\mathbf{M}\) corresponds to a chemical reaction between the hypervertices \(b_{i}\) and \(b_{j}\). Note that \(\mathbf{M}\) is symmetric (\(M_{i,j}=M_{j,i}\)) because the reactions \(b_{i}\to b_{j}\) and \(b_{j}\to b_{i}\) are equivalent in the oriented hypergraph. Any 0-entry in \(\mathbf{M}\) indicates either that the reaction is possible but not yet realised in the chemical space or that the reaction is not possible. In the first case, the two hypervertices \(b_{i}\) and \(b_{j}\) may be connected by a chemical reaction, but the chemical space at hand has not realised the reaction. In the second case, there is at least a common substance between \(b_{i}\) and \(b_{j}\) and the reaction is not possible in our scheme. Let us consider a toy chemical space of four reactions over the set of substances \(V=\{\)A, B, C, D\(\}\) (Figure 2), whose corresponding generalised matrix is shown below.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} & A & B & C & D & AB & AC & AD & BC & BD & CD & ABC & ABD & ACD & BCD \\ \hline A & 0 & **1** & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline B & **1** & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline C & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline D & 0 & 0 & 0 & 0 & 0 & **1** & 0 & **1** & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline AB & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline AC & 0 & 0 & 0 & **1** & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline AD & 0 & 0 & 0 & 0 & 0 & 0 & 0 & **1** & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline BC & 0 & 0 & 0 & **1** & 0 & 0 & **1** & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline BD & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline CD & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline ABC & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline ABD & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline ACD & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline BCD & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 1: Adjacency matrix \(\mathbf{M}\) for a chemical space of four substances \(\{\)A, B, C, D\(\}\). 1-entries correspond to realised reactions, while 0-entries to either possible reactions (black) or to impossible reactions (red).
This latter correspond to reactions with at least a common substance in the set of educts and products. Matrix blocks, separated by bold lines, gather together sets of educts with cardinality \(i\) and sets of products with cardinality \(j\). Figure 2: Toy chemical space constituted by four substances \(\{\)A,B,C,D\(\}\) and four reactions \(r_{i}\). On the left, reactions are presented in chemical notation and on the right the chemical space is depicted as an oriented hypergraph. Note, for instance, that the reaction A \(\rightarrow\) B or B \(\rightarrow\) A is part of the chemical space gathered in \(\mathbf{M}\) as \(M_{\text{A,B}}=M_{\text{B,A}}=1\) (Table 1). In contrast, A \(\rightarrow\) C or C \(\rightarrow\) A, although a possible reaction, has a 0-entry in \(\mathbf{M}\) because it is not a part of the chemical space (Figure 2). Reaction A \(\rightarrow\) AB or AB \(\rightarrow\) A, which correspond to A \(\rightarrow\) A + B or A + B \(\rightarrow\) A, in chemical notation, is not possible because of the commonality of A in both hypervertices, therefore it is shown as a 0-entry in \(\mathbf{M}\). We distinguish two kinds of 0-entries in \(\mathbf{M}\). Those in black font correspond to possible but not realised reactions. Those in red to impossible reactions because of commonality of at least a substance between educts and products.8 Footnote 8: The number of black 0-entries amounts to the unrealised chemical reactions, which together with the 1-entries correspond to the potential chemical space, as called by some philosophers of chemistry [30]. **M** can be arranged by the number of vertices belonging to the hypervertices. That is, column and rows in \(\mathbf{M}\) can be arranged from \(B_{1}\), \(B_{2}\), \(\ldots\) until \(B_{n-1}\). This is the scheme we adopted to present \(\mathbf{M}\) above (Table 1). The number of vertices of the hypervertices allows for classifying reactions in terms of their size. Given a reaction \(r=\{b_{i},b_{j}\}\) between hypervertices \(b_{i}\) and \(b_{j}\), the size of \(r\) is given by \(s(r)=i+j\). That is, the _size of a reaction_ corresponds to the number of substances involved in the reaction.9 Thus, \(s(r_{1})=2\), \(s(r_{2})=s(r_{3})=3\) and \(s(r_{4})=4\) for the chemical space of Figure 2. Reaction size is bounded by \(2\leq s(r)\leq n\) as a reaction must involve at least an educt and a product and the largest reaction must involve no more than \(n\), the number of substances of the chemical space. Footnote 9: The size of a reaction corresponds to the molecularity of the reaction [31] if the stoichiometric coefficients of the reaction are regarded. As this is not, in general, the case in studies on the chemical space [24, 9], the size of a reaction may be regarded as a proto-molecularity of the reaction. It only accounts for the number of different chemicals reported in the reaction, but not for their actual figures. Often, chemists omit writing, for instance, water or carbon dioxide, as those substances can be inferred from the context of the reaction or because of the tradition to emphasise the target product of a reaction, namely of a chemical synthesis [27]. Based on the size of reactions, the size of the chemical space, encoded in \(G\) (Definition 1), can be introduced. **Definition 2**.: _The size\(s(G)\) of an oriented hypergraph \(G\), whose oriented hyperedges are gathered in \(E\), is given by_ \[s(G)=\sum_{r\in E}s(r). \tag{4}\] Note how the chemical space of four substances in Figure 2 has \(s(G)=12\). 
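These quantities are easy to verify numerically for the toy space of Figure 2; the encoding below (pairs of disjoint frozensets) is an illustrative choice for this sketch.

```python
# Toy chemical space of Figure 2: four substances and four reactions,
# encoded (for illustration) as unordered pairs of disjoint frozensets.
reactions = [
    (frozenset("A"), frozenset("B")),     # r1: A <-> B
    (frozenset("AC"), frozenset("D")),    # r2: A + C <-> D
    (frozenset("BC"), frozenset("D")),    # r3: B + C <-> D
    (frozenset("AD"), frozenset("BC")),   # r4: A + D <-> B + C
]

def size(reaction):
    """Size of a reaction s(r) = i + j: number of substances it involves."""
    X, Y = reaction
    return len(X) + len(Y)

print([size(r) for r in reactions])       # [2, 3, 3, 4]
print(sum(size(r) for r in reactions))    # s(G) = 12
```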
As we discuss below, \(s(G)\) becomes a proxy for the connectivity of the chemical space, which is straightforwardly approached through the degree of a vertex (substance). Following the definition of vertex degree in graph theory [2], the degree of a vertex \(v\in V\) (\(d(v)\)) of an oriented hypergraph \(G\) corresponds to the number of oriented hyperedges (reactions) in which the substance participates. For the oriented hypergraph of Figure 2, \(d(\text{A})=d(\text{B})=d(\text{C})=d(\text{D})=3\). Likewise, the degree of an oriented hypergraph \(d(G)\) can be defined.

**Definition 3**.: _The degree \(d(G)\) of an oriented hypergraph \(G\), of vertices gathered in \(V\), is given by_ \[d(G)=\sum_{v\in V}d(v). \tag{5}\] Thus, \(d(G)=12\) for the oriented hypergraph of Figure 2. There is an interesting relation between size and degree of a chemical space modelled as an oriented hypergraph.

**Lemma 1**.: _The size of an oriented hypergraph \(G\) and its degree are equal. That is_ \[s(G)=d(G). \tag{6}\] Proof.: Given an oriented hypergraph \(G=(V,E)\) made of \(n\) vertices (substances) gathered in \(V\) and of reactions (oriented hyperedges) gathered in \(E\), \(G\) is equivalent to a bipartite graph whose vertices correspond to both substances \((v)\) and reactions \((r)\), and whose \(u\) edges are the substance-reaction incidences. As the degree sum formula for a bipartite graph is [32] \[d(G)=\sum_{v\in V}d(v)=\sum_{r\in E}d(r)=u, \tag{7}\] then \(d(G)=u\) and, as \(s(r)=d(r)\), then \[\sum_{r\in E}d(r)=\sum_{r\in E}s(r)=s(G)=u. \tag{8}\] Thus, size and degree of the oriented hypergraph \(G\) modelling the chemical space indicate how tight or dense the chemical space is. Low size or degree values indicate a sparse chemical space, while high values a strongly connected space. In order to provide a baseline for comparison of sizes and degrees of chemical spaces, upper and lower bounds of those oriented hypergraph sizes and degrees need to be determined.10 This, in turn, requires determining the maximum and minimum number of reactions a given set \(V\) of \(n\) substances may hold.11 We call the _complete oriented hypergraph_ over \(V\) the oriented hypergraph holding the maximum number of reactions over the \(n\) substances gathered in \(V\). Footnote 10: Bounds for size and degree of oriented hypergraphs are provided in Lemma 7. Footnote 11: See Lemma 8. Given that the adjacency matrix \(\mathbf{M}\) can be arranged according to the size of its hypervertices, it can be conveniently written as \[\mathbf{M}=\left(\begin{array}{cccc}\mathbf{M}_{1,1}&\mathbf{M}_{1,2}&\cdots&\mathbf{M}_{1,n-1}\\ \mathbf{M}_{2,1}&\mathbf{M}_{2,2}&\cdots&\mathbf{M}_{2,n-1}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{M}_{n-1,1}&\mathbf{M}_{n-1,2}&\cdots&\mathbf{M}_{n-1,n-1}\end{array}\right), \tag{9}\] where \(\mathbf{M}_{i,j}\) indicates the block of \(\mathbf{M}\) containing information on the relationship between hypervertices of size \(i\) and of size \(j\).

**Lemma 2**.: _The blocks \(\mathbf{M}_{i,j}\) with \(i+j>n\) are null blocks._ Proof.: As reactions with size \(i+j>n\) necessarily have a common substance, out of the \(n\) substances, those reactions gathered in the blocks \(\mathbf{M}_{i,j}\) with \(i+j>n\) are impossible. Therefore, for blocks \(\mathbf{M}_{i,j}\) with \(i+j>n\), their \(\mathbf{M}\)-entries are \(0\)-entries.
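Lemma 1 can be checked directly on the toy space of Figure 2; the snippet repeats the reaction list from the previous sketch so that it runs on its own.

```python
from collections import Counter

# Same toy space as before, repeated so the snippet is self-contained.
reactions = [
    (frozenset("A"), frozenset("B")),
    (frozenset("AC"), frozenset("D")),
    (frozenset("BC"), frozenset("D")),
    (frozenset("AD"), frozenset("BC")),
]

# Degree of a substance: number of reactions in which it participates.
degree = Counter(v for X, Y in reactions for v in X | Y)
print(dict(sorted(degree.items())))        # {'A': 3, 'B': 3, 'C': 3, 'D': 3}

d_G = sum(degree.values())                           # degree of the hypergraph
s_G = sum(len(X) + len(Y) for X, Y in reactions)     # size of the hypergraph
print(d_G, s_G)                            # 12 12, i.e. s(G) = d(G) as in Lemma 1
```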
The effect of Lemma 2 upon the possible number of reactions of a given chemical space is enormous, as it shows that the disjoint condition for hypervertices belonging to reactions reduces to a large extent the actual possibilities for exploring new chemical combinations. Recall that these reactions correspond to red \(0\)-entries in the matrix \(\mathbf{M}\) shown in Table 1, where it is seen the effect upon a chemical space of only four substances. Figure 3 shows the proportion of impossible reactions for larger spaces. Equation 17 below quantifies how rapid impossible reactions grow as a function of \(n\). In order to determine the number of reactions of the complete oriented hypergraph, we analyse the number of reactions in each block \(\mathbf{M}_{i,j}\) of the adjacency matrix. **Lemma 3**.: _The number of oriented hyperedges for the block matrix \(\mathbf{M}_{i,j}\) in a complete oriented hypergraph is given by_ \[u_{i,j}(n)=\left\{\begin{array}{ll}\mathcal{C}_{i}^{n}\mathcal{C}_{j}^{n-i}&i \neq j\\ \frac{1}{2}\mathcal{C}_{i}^{n}\mathcal{C}_{i}^{n-i}&i=j\end{array}\right. \tag{10}\] _where \(\mathcal{C}_{i}^{n}\) is given in Equation 2._ Proof.: If \(i\neq j\), the number of oriented hyperedges \(u_{i,j}(n)\) corresponds to the number of 1-entries in the block matrix \(\mathbf{M}_{i,j}\), that is \(u_{i,j}(n)=\sum_{i,j}\mathbf{M}_{i,j}\). Moreover, as \(\mathbf{M}\) is symmetric, \(u_{i,j}(n)=u_{j,i}(n)\). The same symmetry argument indicates that if \(i=j\), the number of oriented hyperedges is half the number of 1-entries for the block matrix \(\mathbf{M}_{i,j}\). As noted, it is important to calculate the number of 1-entries for a block matrix \(\mathbf{M}_{i,j}\). Let \([\mathbf{M}_{i,j}]\) be the matrix components of the block matrix \(\mathbf{M}_{i,j}\). According to Equation 2, the maximum number of rows of this block is \(\mathcal{C}_{i}^{n}\). Given that vertices are indistinguishable, the number of 1-entries is the same for each row. Hence, the number of 1-entries for \(\mathbf{M}_{i,j}\) is given by \(\mathcal{C}_{i}^{n}\) multiplied by the number of 1-entries per row. As every 1-entry represents a chemical reaction, therefore, the intersection between the hypervertex of the row \(i\) and the hypervertex of the column \(j\) must be empty. Thus, the number of vertices available for the hypervertex of the column \(j\) is given by \(n-i\). Thus, the number of non-zero components on each column is given by \(\mathcal{C}_{j}^{n-i}\), which is the number of combinations with \(j\) vertices out of \(n-i\) vertices. By replacing this result in the previous relations, we obtain \[u_{i,j}(n)=\mathcal{C}_{i}^{n}\,\mathcal{C}_{j}^{n-i},\] which is the total number of non-zero components for the block matrix \(\mathbf{M}_{i,j}\) when \(i\neq j\). If \(i=j\), we have to consider half the amount of 1-entries, that is to say \[u_{i,i}(n)=\frac{1}{2}\mathcal{C}_{i}^{n}\,\mathcal{C}_{i}^{n-i}.\] Knowing the number of reactions in every block of \(\mathbf{M}\), we can determine the number of reactions (oriented hyperedges) of a given size in the complete oriented hypergraph. Figure 3: Amount of possible and impossible reactions. Visual depiction of adjacency matrices \(\mathbf{M}\) for chemical spaces of a) \(n=4\), b) \(n=7\) and c) \(n=10\) substances (vertices). Possible reactions (black entries) correspond to oriented hyperedges, where the related hypervertices (sets of substances) are disjoint. 
Impossible reactions (red entries) are the oriented hyperedges relating non-disjoint sets of substances.

**Lemma 4**.: _The number of oriented hyperedges of size \(s\) in a complete oriented hypergraph is given by_ \[u_{s}(n)=(2^{s-1}-1)\mathcal{C}_{s}^{n} \tag{11}\] Proof.: Oriented hyperedges with size \(s\) are located within block matrices \(\mathbf{M}_{i,j}\) such that \(i+j=s\). Hence, the block matrices satisfying this condition are \(\{\mathbf{M}_{1,s-1},\mathbf{M}_{2,s-2},\ldots,\mathbf{M}_{s-1,1}\}\). The number of oriented hyperedges of each of these block matrices is given by Lemma 3. Therefore, the number of oriented hyperedges with size \(s\) is given by \[u_{s}(n)=\frac{1}{2}\sum_{i=1}^{s-1}u_{i,s-i}=\frac{1}{2}\sum_{i=1}^{s-1}\mathcal{C}_{i}^{n}\mathcal{C}_{s-i}^{n-i}=\frac{1}{2}\mathcal{C}_{s}^{n}\sum_{i=1}^{s-1}\mathcal{C}_{i}^{s}=\left(2^{s-1}-1\right)\mathcal{C}_{s}^{n}\] Thus, for the chemical space of the adjacency matrix shown in Table 1, \(u_{2}(4)=6\), \(u_{3}(4)=12\) and \(u_{4}(4)=7\). Now, we can determine the number of reactions in which a substance can participate in a complete oriented hypergraph.

**Lemma 5**.: _Given a complete oriented hypergraph, the number of oriented hyperedges in which a vertex participates in the block matrix \(\mathbf{M}_{i,j}\) is_ \[u_{i,j}(n)=\left\{\begin{array}{cc}\frac{(i+j)}{n}\mathcal{C}_{i+j}^{n}\mathcal{C}_{j}^{i+j}&i\neq j\\ \frac{i}{n}\mathcal{C}_{2i}^{n}\mathcal{C}_{i}^{2i}&i=j\end{array}\right. \tag{12}\] Proof.: Let us consider an arbitrary block matrix \(\mathbf{M}_{i,j}\) and an arbitrary vertex \(v_{1}\). This block matrix can be split into two blocks, one in which the vertex \(v_{1}\) appears in the hypervertex describing the rows of the block, and the other in which it appears in the column hypervertex (the intersection is obviously excluded). The number of (row) hypervertices in which the substance \(v_{1}\) appears is given by \(\mathcal{C}_{i-1}^{n-1}\), which is the number of combinations with \(i-1\) vertices out of \(n-1\) vertices. On the other hand, for the same block, the number of (column) hypervertices in which \(v_{1}\) is not present is given by \(\mathcal{C}_{j}^{n-i}\), which is the number of combinations with \(j\) vertices out of \(n-i\) vertices. Therefore, the total number of oriented hyperedges in which \(v_{1}\) appears in the hypervertex \(b_{i}\) is given by \(\mathcal{C}_{i-1}^{n-1}\mathcal{C}_{j}^{n-i}\). Let us now consider the second block within the same block matrix \(\mathbf{M}_{i,j}\). In this case the number of (column) hypervertices in which \(v_{1}\) is contained is given by \(\mathcal{C}_{j-1}^{n-1}\), which, similarly to the previous case, is the number of combinations with \(j-1\) vertices out of \(n-1\) vertices. On the other hand, and still in the second block, the number of (row) hypervertices in which \(v_{1}\) is not present is given by \(\mathcal{C}_{i}^{n-j}\), which corresponds to the number of combinations with \(i\) vertices out of \(n-j\) vertices. Therefore, the total number of oriented hyperedges in which \(v_{1}\) appears in the hypervertex \(b_{j}\) is given by \(\mathcal{C}_{i}^{n-j}\mathcal{C}_{j-1}^{n-1}\).
Combining these results, we have that the number of oriented hyperedges in which \(v_{1}\) can appear in the block matrix \(M_{i,j}\), when \(i\neq j\), is given by \[u_{i,j}(n)=\mathcal{C}_{i-1}^{n-1}\mathcal{C}_{j}^{n-i}+\mathcal{C}_{i}^{n-j} \mathcal{C}_{j-1}^{n-1}=\frac{(i+j)}{n}\,\mathcal{C}_{i+j}^{n}\,\mathcal{C}_ {j}^{i+j}\] and when \(i=j\) we have half the number of oriented hyperedges, that is \[u_{i,i}(n)=\frac{i}{n}\,\mathcal{C}_{2i}^{n}\,\mathcal{C}_{i}^{2i}\] The above remarks allow for determining the number of reactions in the complete oriented hypergraph in which a substance can participate (Lemma 6), as well as the number of reactions of a complete oriented hypergraph (Lemma 8). **Lemma 6**.: _The number of oriented hyperedges in which an arbitrary vertex can belong in a complete oriented hypergraph is given by_ \[u(n)=3^{n-1}-2^{n-1}. \tag{13}\] Proof.: By considering the result of Lemma 3, we obtain \[u(n) =\frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}u_{i,j}=\frac{1}{2} \sum_{i=1}^{n-1}\,\sum_{j=1}^{n-i}\frac{(i+j)}{n}\mathcal{C}_{i}^{n}\mathcal{C }_{j}^{n-i}\] \[=\frac{1}{2n}\left[\sum_{i=1}^{n-1}i\,\mathcal{C}_{i}^{n}\sum_{j= 1}^{n-i}\mathcal{C}_{j}^{n-i}+\sum_{i=1}^{n-1}\,\mathcal{C}_{i}^{n}\sum_{j=1}^{ n-i}\,j\,\mathcal{C}_{j}^{n-i}\right]\] \[=\frac{1}{2n}\left\{\sum_{i=1}^{n-1}i\,\mathcal{C}_{i}^{n}\left[2 ^{n-i}-1\right]+\sum_{i=1}^{n-1}\,\mathcal{C}_{i}^{n}\left[(n-i)2^{n-i-1} \right]\right\}\] \[=\frac{1}{2n}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\left\{\frac{i}{2 }\,2^{n-i}-i+\frac{n}{2}\,2^{n-i}\right\}=3^{n-1}-2^{n-1}\] From the chemical perspective, this implies that a substance can participate at most in \(u(n)\) chemical reactions, that is to say, the maximum degree of a substance is \(u(n)\). A question opened by Lemma 1 was about the bounds for the size and degree of an oriented hypergraph. Lemma 6 provides the information to determine them. **Lemma 7**.: _The size \(s(G)\) and degree \(d(G)\) of an oriented hypergraph is bounded by \(0\leq x(G)\leq n(3^{n-1}-2^{n-1})\), where \(x(G)\) stands for either \(s(G)\) or \(d(G)\)._ Proof.: Minimum size and minimum degree of an oriented hypergraph \(G\) are reached for the case of a hyperedge-less oriented hypergraph. Therefore, \(\min s(G)\) and \(\min d(G)=0\). The maximum value of these parameters is reached for the complete hypergraph. As \(\max d(G)\) corresponds to the sum of the degree of each vertex in the complete oriented hypergraph, this amounts to add the number of oriented hyperedges in which each vertex in the complete oriented hypergraph belongs. As \(3^{n-1}-2^{n-1}\) (Lemma 6) is the number of oriented hyperedges in which a vertex can belong in the complete oriented hypergraph, summing over all vertices yields \(\max d(G)=n(3^{n-1}-2^{n-1})\). As \(d(G)=s(G)\) (Lemma 1), then, \(\max s(G)=n(3^{n-1}-2^{n-1})\) Thus, for the toy chemical space \(G\) depicted in Figure 2, \(s(G)=d(G)\in[0,76]\). As we found that these figures are equal to 12 for that chemical space, it is therefore observed how far the toy chemical space is from being a complete oriented hypergraph, with \(s(G)=d(G)=76\), and how close it is to be a hyperedge-less oriented hypergraph, with \(s(G)=d(G)=0\). Lemma 6 also allows for determining the number of reactions housed by a complete oriented hypergraph. **Lemma 8**.: _The number of oriented hyperedges for a complete oriented hypergraph is [9]_ \[u_{r}(n)=\frac{1}{2}(3^{n}-2^{n+1}+1). 
\tag{14}\] Proof.: By Lemma 3, it follows that \[u_{r}(n) =\frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}u_{i,j}(n)=\frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}\mathcal{C}_{i}^{n}\,\mathcal{C}_{j}^{n-i}=\frac{1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\sum_{j=1}^{n-i}\mathcal{C}_{j}^{n-i}\] \[=\frac{1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\left[2^{n-i}-1\right]=\frac{1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}2^{n-i}-\frac{1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\] \[=\frac{1}{2}\left[3^{n}-2^{n}-1\right]-\frac{1}{2}\left[2^{n}-2\right]=\frac{1}{2}(3^{n}-2^{n+1}+1)\]

This indicates that, for the toy example of Figure 2, the chemical space of four substances realises only four reactions out of the 25 possible ones. In the adjacency matrix shown in Table 1, the possible reactions correspond to the 1-entries plus the black 0-entries of the upper (or lower) triangle above the main diagonal; equivalently, they are the black entries of the upper (or lower) triangle in the left matrix of Figure 3. To get an idea of the speed of growth of \(u_{r}\) with \(n\), for \(n=2\) to 5, \(u_{r}\) takes the values 1, 6, 25, and 90. This growth is given by \[\frac{du_{r}}{dn}=\frac{1}{2}(3^{n}\ln 3-2^{n+1}\ln 2) \tag{15}\] This quantifies the speed of growth of the possible chemical space as a function of the number of substances of the space. It constitutes the upper bound of wiring of any chemical space, which sets the stage to contrast this upper bound with the historical record of the chemical space. This subject is explored in a forthcoming paper. The number of possible reactions in the complete oriented hypergraph allows for determining the number of impossible reactions because of the disjoint condition of educts and products, which is given by \[z(n) =(2^{n}-2)^{2}-\frac{1}{2}(3^{n}-2^{n+1}+1)\] \[=\frac{1}{2}(2\cdot 4^{n}-3^{n}-6\cdot 2^{n}+7), \tag{16}\] which for \(n\) ranging from 2 to 5 yields \(z=3\), 30, 171 and 810. In fact, \[\frac{dz}{dn}=\frac{1}{2}(4^{n}\cdot\ln 16-3^{n}\cdot\ln 3-2^{n}\cdot\ln 64), \tag{17}\] which corresponds to the speed of growth of red 0-entries in any adjacency matrix \(\mathbf{M}\). When comparing the number of possible reactions (Equation 14) with the number of impossible reactions (Equation 16), we observe that the former grows much slower than the latter (Equations 15 and 17). This pattern is observed in Figure 3 for different values of \(n\).12 Footnote 12: A further question is how many of the possible reactions are actually realised by chemists in the chemical space. This is a subject we address in a forthcoming paper. Equipped with the results of this section, we proceed to develop the Erdos-Renyi model for oriented hypergraphs.

## 4 Erdos-Renyi model for oriented hypergraphs

Although the literature on Erdos-Renyi-like models for hypergraphs goes back at least 20 years [33, 34, 35, 36, 37, 38, 39, 5, 40, 41, 42], most of those models are devoted to uniform hypergraphs, while a few of them to non-uniform ones [36, 37, 38, 39]. By uniform hypergraphs we mean hypergraphs whose hyperedges have the same cardinality. Some of those studies explore the statistical and mathematical properties of substructures embedded in random hypergraphs [40, 41, 42]. In general, none of those results addresses the particular case of oriented hypergraphs, which is the model we develop in this section. Given a set of vertices \(V\), the random oriented hypergraph corresponds to the realisation, or not, of every possible oriented hyperedge on \(V\).
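A brute-force sketch of this construction for small \(n\) is given below, where \(p\) denotes the realisation probability discussed in the surrounding text; enumerating the complete oriented hypergraph also reproduces Equation 14 for \(n=4\) (25 oriented hyperedges). The function names are illustrative, and the enumeration is only feasible for small \(n\).

```python
import itertools
import random

def complete_oriented_hypergraph(V):
    """All possible oriented hyperedges on V: unordered pairs of disjoint,
    non-empty, proper subsets of V."""
    hypervertices = [frozenset(c)
                     for a in range(1, len(V))
                     for c in itertools.combinations(sorted(V), a)]
    return [frozenset({X, Y})
            for X, Y in itertools.combinations(hypervertices, 2)
            if X.isdisjoint(Y)]

def random_oriented_hypergraph(V, p, seed=None):
    """Erdos-Renyi G(n, p): keep each possible oriented hyperedge with probability p."""
    rng = random.Random(seed)
    return [e for e in complete_oriented_hypergraph(V) if rng.random() < p]

V = {"A", "B", "C", "D"}
n = len(V)
print(len(complete_oriented_hypergraph(V)), (3 ** n - 2 ** (n + 1) + 1) // 2)  # 25 25
print(len(random_oriented_hypergraph(V, p=0.2, seed=7)))  # one random chemical space
```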
The probability of realisation of these hyperedges is given by \(p\). Like in the Erdos-Renyi model for graphs, the random Erdos-Renyi oriented hypergraph can be thought as the result of an algorithm that takes every possible couple of disjoint hypervertices on \(V\) and decides whether to link them or not. The decision depends on generating a random number, and if that number happens to be less than a predetermined value, denoted as \(p\), then the hypervertices are connected; otherwise, they remain disconnected. **Definition 4**.: _An Erdos-Renyi random oriented hypergraph \(G(n,p)\), is an oriented hypergraph with \(n\) vertices whose oriented hyperedges result from linking hypervertices with probability \(p\)._ This random process leads to particular kinds of probability mass functions for the quantities described in previous sections such as degree and size of oriented hyperedges. This results from the mathematical consistency of the Erdos-Renyi model here presented. The expressions for the probability mass functions are provided in the remaining part of this section. To begin with, the number of reactions \(R\) is a binomially distributed random variable, \(R\sim\mathrm{B}(u_{r},p)\), with probability mass function given by **Proposition 1**.: _The probability of having a \(G(n,p)\) with \(R\) oriented hyperedges is_ \[\text{Pr}(R=r)=\left(\begin{array}{c}u_{r}\\ r\end{array}\right)\,p^{r}\,\left(1-p\right)^{u_{r}-r}, \tag{18}\] _which results from realising \(r\) reactions and, therefore, of having \(u_{r}-r\) non-realised reactions. The expected value of the number of reactions in \(G(n,p)\) is given by_ \[\text{E}[R]=\sum_{r=0}^{u_{r}}r\,\text{Pr}(R=r)=p\,u_{r}, \tag{19}\] _where \(u_{r}\) is given in Equation (14)._ This implies that the expected number of reactions in a random oriented hypergraph is proportional to the maximum number of possible reactions. The actual number is weighted by the probability \(p\). As we discuss in Definitions 2 and 3 and in Lemma 1, size and degree of an oriented hypergraph become important proxies to determine how tight or sparse is a chemical space. The random model naturally links the tightness of a chemical space with the probability of wiring the associated oriented hypergraph. Thus, random processes based on high values of \(p\) lead to high size and degree oriented hypergraphs, while processes underlying low \(p\) figures, necessarily lead to sparse oriented hypergraphs with small sizes and degrees. The role of \(p\) is also central for the probability of observing a given number of reactions of a particular size (Remark 1). **Remark 1**.: _The number of oriented hyperedges with size \(s\) is a binomially distributed random variable, \(R_{s}\sim B(u_{s},p)\), such that its probability mass function is given by_ \[\text{Pr}(R_{s}=r_{s})=\left(\begin{array}{c}u_{s}\\ r_{s}\end{array}\right)\,p^{r_{s}}\,\left(1-p\right)^{u_{s}-r_{s}}, \tag{20}\] _which results from considering that there are \(r_{s}\) realised reactions of size \(s\) and \(u_{s}-r_{s}\) non-realised reactions with the same size \(s\). As a result, the expected value of the number of oriented hyperedges of size \(s\) is_ \[E[R_{s}]=\sum_{r_{s}=0}^{u_{s}}r_{s}\text{Pr}(R_{s}=r_{s})=pu_{s}. \tag{21}\] _This leads to determining the probability of having a reaction with size \(s\). 
This probability \(P(s)\) is given by the ratio of the expected number of reactions with size \(s\) (\(E[R_{s}]\)) and the sum of the total number of expected reactions for the different sizes_ \[P(s)=\frac{E[R_{s}]}{\sum_{R_{s}=2}^{u_{s}}E[R_{s}]}=\frac{u_{s}}{u_{r}}. \tag{22}\] _Hence, \(P(s)\) corresponds to the ratio between \(u_{s}\), the number of reactions with size \(s\), and the number of possible reactions \(u_{r}\), where \(u_{s}\) and \(u_{r}\) are given in Lemmas 4 and 8, respectively. Remarkably, this probability \(P(s)\) associated with the size \(s\) is \(p\)-independent in the random model here defined.13_ Footnote 13: This indicates that, in the case of a phase transition for this model, the probability \(P(s)\) cannot be altered by the criticality. Finally, another implication of the random model is that the vertex degree is also a random variable as stated in the following Remark. **Remark 2**.: _The vertex degree is a binomially distributed discrete random variable, \(D\sim\text{B}(u_{n},p)\), with probability mass function of the form_ \[\text{Pr}(D=d)=\left(\begin{array}{c}u_{n}\\ d\end{array}\right)\,p^{d}\,\left(1-p\right)^{u_{n}-d}, \tag{23}\] _which again, results from having \(d\) realised reactions for an arbitrary substance out of the \(u_{n}\) reactions in which the substance can participate, and \(u_{n}-d\) non-realised reactions. Therefore, the expected value of the vertex degree is_ \[\text{E}[D]=\sum_{d=0}^{u_{n}}d\,\text{Pr}(D=d)=p\,u_{n}, \tag{24}\] _where \(u_{n}\) is given in Equation 13._ Equations 19, 21, 22 and 24 show that our model is conceptually correct. As a consequence, in any test for randomness of real data modelled as an oriented hypergraph, each of the probability distributions discussed here should be statistically close to those given in the aforementioned equations, if the data are randomly generated. On the other hand, as the expected number of reactions of the random oriented hypergraph is related to the expected number of reactions a given substance has, we obtain the following expression for the ratio of those two variables \[\frac{\text{E}[R]}{\text{E}[D]}=\frac{1}{2}\frac{\left[1-2\left(\frac{2}{3} \right)^{n}+3^{-n}\right]}{\left[\frac{1}{3}-\frac{1}{2}\left(\frac{2}{3} \right)^{n}\right]}, \tag{25}\] which, for a large number of substances leads to \[\lim_{n\to\infty}\frac{\mathrm{E}[R]}{\mathrm{E}[D]}=\frac{3}{2}. \tag{26}\] That is, if a chemical space is randomly wired, the number of reactions it houses is \(3/2\) the number of reactions in which any substance of the space participates. Therefore, the ratio \(\mathrm{E}[R]/\mathrm{E}[D]\) can be used to test whether a given chemical space is close to randomness or not. This result clearly contrast with its Erdos-Renyi analog for simple graphs which takes the form \[\frac{\mathrm{E}[R]}{\mathrm{E}[D]}=\frac{n}{2}, \tag{27}\] and which grows linearly with the number of substances. In the case of the chemical space oriented hypergraph, the factor \(3/2\) is actually an upper bound to the ratio \(\mathrm{E}[R]/\mathrm{E}[D]\). Aiming at having more insight on the effect of \(p\) upon the relation between the number of substances (\(n\)) and the expected number of reactions (\(\mathrm{E}[R]\)), we explore different forms \(p\) might take. 
They range from the case of a constant number of reactions, independent of the number of substances (\(\mathrm{E}[R]\sim k\)); or from a simple linear relation \(\mathrm{E}[R]\sim kn\); to more complex relations in which the expected number of reactions varies according to a power of the number of substances (\(\mathrm{E}[R]\sim n^{\alpha}\)); or even that the number of reactions grows exponentially with the number of substances (\(\mathrm{E}[R]\sim b^{n}\)). To do so, we analyse some cases of chemical spaces for which different values of \(n\) and \(p\) are considered. From Equation 19 we know that for large values of \(n\), \(\ln\mathrm{E}[R]\) takes the form14 Footnote 14: It is known that for the actual chemical space \(n\sim 10^{6}\)[24]. \[\ln\mathrm{E}[R]\sim\alpha\ln n+n\ln\left(\frac{3}{\beta}\right), \tag{28}\] where the above discussed chemical spaces are generalised by considering a probability given as \[p=n^{\alpha}/\beta^{n}. \tag{29}\] With \(\beta=3\), Equation 28 becomes \[\ln\mathrm{E}[R]\sim\alpha\ln n, \tag{30}\] which is a linear relation in a log-log scale with \(\alpha\) encoding the slope of the linear trend. When \(\alpha=0\), \(\mathrm{E}[R]\sim 1/2\), in which case \(p=1/3^{n}\) and no matter how large the number of substances in the chemical space is, the number of reactions reported by chemists is always a fixed number. \(\mathrm{E}[R]\sim n\) is obtained with \(\alpha=1\), where \(p=n/3^{n}\). This linear relation between \(\mathrm{E}[R]\) and \(n\) indicates that chemists manage wiring the space in a manner that is proportional to the available substances. \(\mathrm{E}[R]\sim n^{\alpha}\) is obtained with \(p=n^{\alpha}/3^{n}\). If \(\alpha>1\), the greater the value of \(\alpha\), the more reactions are discovered. The scenario with \(\alpha<0\) yields a sparse chemical space with a decreasing power law relation in which the more substances, the less reactions are discovered. These behaviours are shown in Figure 4a for \(\alpha=-1,0,1\) and \(2\). In turn, \(\mathrm{E}[R]\sim b^{n}\), actually \(\mathrm{E}[R]\sim 3^{n}\) is reached with \(\beta=1\), in which case \(p=n^{\alpha}\) and the leading order of Equation 28 gives \[\ln\mathrm{E}[R]\sim n\ln\left(3\right). \tag{31}\] Hence, the log-plot of the \(\mathrm{E}[R]\) as a function of \(n\) depicts a constant slope for different values of \(\alpha\), as can be seen in Figure 4b. This latter result follows from the fact that for large values of \(n\) and fixed and small values of \(\alpha\), the first term is negligible (note that \(\alpha\leq 0\) to secure that \(0\leq p\leq 1\)). These results, besides their importance for the analysis of chemical spaces, pave the way for the exploration of phase transitions in Erdos-Renyi-like oriented hypergraphs. In this respect, although chemical spaces with \(\mathrm{E}[R]\ll 1\) lack chemical meaning,15 they turn interesting to analyse the aforementioned phase transitions. Footnote 15: Which occur for low values of \(n\) in Figure 4. The probability of triggering a reaction not only affects the number of reactions but also the size of those reactions in the chemical space. Therefore, we explore how the different values of \(p\), given in Equation 29, affect the number of reactions of different sizes. From Equation 21 we know that the expected number of reactions of size \(s\) (\(\mathrm{E}[R_{s}]\)) is given by \(pu_{s}\). That is, \(\mathrm{E}[R_{s}]\) results from the probability of realising or not reactions of size \(s\) in the complete oriented hypergraph. 
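The \(p\)-independent ratio of Equation 26 and the growth regimes induced by Equation 29 can be explored numerically. The small script below, with illustrative function names, evaluates the exact expressions for a few values of \(n\), \(\alpha\) and \(\beta\).

```python
def u_r(n):
    """Maximum number of reactions of a chemical space (Equation 14)."""
    return (3 ** n - 2 ** (n + 1) + 1) // 2

def u_v(n):
    """Maximum number of reactions per substance (Equation 13)."""
    return 3 ** (n - 1) - 2 ** (n - 1)

# The ratio E[R]/E[D] = u_r(n)/u_v(n) is p-independent and tends to 3/2 (Equation 26).
for n in (4, 10, 50, 200):
    print(n, u_r(n) / u_v(n))

def expected_reactions(n, alpha, beta):
    """E[R] = p * u_r(n) for the probability p = n**alpha / beta**n (Equation 29)."""
    return n ** alpha / beta ** n * u_r(n)

# With beta = 3, E[R] grows roughly like n**alpha / 2: constant, linear, quadratic, ...
for alpha in (0, 1, 2):
    print(alpha, [round(expected_reactions(n, alpha, 3), 2) for n in (10, 20, 40)])
```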
By operating on the expressions for \(u_{s}\) (Lemma 4), we found, for large values of \(n\) and with \(\beta=1\), that \[\ln\mathrm{E}[R_{s}]\sim(s+\alpha)\ln n, \tag{32}\] where we used the binomial coefficient approximation for large values of \(n\) and fixed (small) values of \(s\) together with the Stirling approximation [43]. This expression is similar to Equation 30 in what it shows three regimes attending to the value of slope \(s+\alpha\). Recalling that \(\alpha<0\) to guarantee that \(0\leq p\leq 1\), this result indicates that in a random chemical space where the number of reactions grows exponentially with the number of substances (Equation 31), the number of reactions with size either \(s\) drops, remains constant or increases, depending on whether the slope \(s+\alpha\) is negative, null or positive, respectively. The general expression for large values of \(n\) is given by \(\mathrm{E}[R_{s}]=n^{s-|\alpha|}\). For example, if \(\alpha=-2\), rearrangement reactions (\(s=2\)) remain constant with the number of available substances \(n\), while reactions with size \(s>2\) follow a power law \(\mathrm{E}[R_{s}]\sim n^{s-2}\). Given that the smallest value of \(s\) is \(2\), for \(\alpha=-2\) there is no possibility of a negative slope. However, for a different value, for example \(\alpha=-4\), reactions with size \(s<4\) drop following a power law and give rise to a sparse population of reactions with those sizes. In Figure 5a we show an example for \(\alpha=-2\), \(\beta=1\) and \(s=2,3,4\) and \(5\). As \(\beta\) may take also the value of \(3\) for chemical spaces with either linear or power-law growth of the number of reactions, for these cases the leading order of \(\ln\mathrm{E}[R_{s}]\) takes the form \[\ln\mathrm{E}[R_{s}]\sim-n\ln 3, \tag{33}\] which shows that the asymptotic behaviour of \(\mathrm{E}[R_{s}]\) for large values of \(n\) is a decreasing exponential \(\mathrm{E}[R_{s}]\sim 1/3^{n}\) in terms of the number of substances \(n\) and for any size \(s\), \(s\) being much smaller than \(n\).16 However, the number of reactions with a fixed size \(s\), \(\mathrm{E}[R_{s}]\), reaches a maximum value at a number of substances \(n_{max}\) given by the solution to the equation Footnote 16: It is known that \(s\ll n\) for actual chemical spaces [24]. \[\frac{\alpha}{n_{max}}+\ln\left[\frac{n_{max}}{3(n_{max}-s)}\right]=0 \tag{34}\] where again, we used the Stirling approximation for large values of \(n\)[43]. This implies that in a random linear or power-law wiring of the chemical space, the number of reactions of size \(s\) reaches its maximum population with spaces of \(n=n_{max}\) substances (Figure 5b). This result shows another facet of randomly wired chemical spaces. In particular, that large randomly wired spaces are mainly populated by reactions of large sizes, where reactions of small size only represent a small population of the bulk of the population. Figure 4: Effects of the probability of triggering chemical reactions upon the expected number of reactions of randomly wired chemical spaces. Probability is expressed as \(p=n^{\alpha}/\beta^{n}\) and the plots show how the expected number of reactions \(\mathrm{E}[R]\) varies with the selection of \(\alpha\) and \(\beta\). In a) \(\beta=3\) and \(\alpha\) takes different values, which show the decreasing power law (\(\alpha=-1\)), the constant (\(\alpha=0\)), linear (\(\alpha=1\)) and quadratic (\(\alpha=2\)) growth of \(\mathrm{E}[R]\) for large values of the number of substances \(n\). 
In all these chemical spaces, where \(\beta=3\), \(\alpha\leq(n\ln 3)/(\ln n)\) to guarantee that \(0\leq p\leq 1\). In b) \(\beta=1\) and \(\alpha\leq 0\) to secure that \(0\leq p\leq 1\). These plots correspond to exponential-like growths of \(\mathrm{E}[R]\) for large values of \(n\), where the slope of the linear fit tends to \(\ln 3\approx 1.099\). Plots in a) and b) were obtained for different values of \(n\) in \(\mathrm{E}[R]=\frac{n^{\alpha}}{2\beta^{n}}(3^{n}-2^{n+1}+1)\).

This suggests that actual chemical spaces, mainly populated by reactions of small size [24], are indeed far from a random wiring.

Figure 5: Effects of the probability of triggering reactions upon the expected number of reactions of different sizes in a randomly wired chemical space. Probability is given by \(p=n^{\alpha}/\beta^{n}\). a) Behaviour at \(\alpha=-2\) and \(\beta=1\), which corresponds to a chemical space whose number of reactions expands exponentially with the number of substances only in the case \(s>|\alpha|\). b) Distribution of the number of reactions of size \(s=50,100,150\) and \(200\) for chemical spaces with \(\alpha=2\) and \(\beta=3\), corresponding to spaces whose number of reactions grows as a power law of the number of substances. Maximum values are given at \(n_{max}=76,151,226\) and \(301\), respectively.

So far, we have compared the values of \(\mathrm{E}[R_{s}]\) for different realisations of a random chemical space. To explore how the numbers of reactions with different sizes \(s\) relate to each other within the same chemical space, we now fix \(n\) and note that, for \(\beta=1\), the probability of wiring reactions, \(p=n^{\alpha}\), is much larger than \(p=n^{\alpha}/3^{n}\), the wiring probability with \(\beta=3\), with \(\alpha\) fixed in both cases. Consequently, the number of reactions \(\mathrm{E}[R]\) is much larger in the first case than in the second (see Figure 4). This implies that the number of reactions with a given size \(s\) follows the same trend, that is to say, the number of reactions with size \(s\), \(\mathrm{E}[R_{s}]\), is larger when the probability is given by \(p=n^{\alpha}\) compared with what would be the number of reactions of the same size for the lower probability \(p=n^{\alpha}/3^{n}\) (see Figure 6). Additionally, as long as \(p\) is considered to be \(s\)-independent, for a given number of substances \(n\), \(\mathrm{E}[R_{s}]\) reaches a maximum at a \(p\)-independent size value \(s_{max}\). The value of the most populated size \(s_{max}\) is the solution to the equation (see Figure 6): \[\frac{2^{s_{max}-1}\ln 2}{2^{s_{max}-1}-1}+\ln\left(\frac{n-s_{max}}{s_{max}}\right)=0, \tag{35}\] where we used the Stirling approximation for large values of \(n\)[43].

## 5 Conclusion and outlook

We developed the Erdos-Renyi model for oriented hypergraphs with a fixed number \(n\) of vertices. Oriented hypergraphs result from the binary relations between sets of arbitrary size of vertices (hypervertices). In particular, we considered oriented hypergraphs where the related hypervertices are disjoint. This follows from our aim of modelling all possible chemical reactions, that is catalysed and non-catalysed reactions, with the former including autocatalytic reactions. Central for the Erdos-Renyi model is the complete oriented hypergraph, as the model randomly realises oriented hyperedges of the complete oriented hypergraph. This realisation is mediated by a probability \(p\).
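As a numerical check of Equation 35, maximising \(\mathrm{E}[R_{s}]=p\,u_{s}(n)\) over \(s\) for \(n=100\) recovers the \(p\)-independent value \(s_{max}=67\) quoted in the caption of Figure 6 below; the short script is an illustrative sketch.

```python
from math import comb

def expected_size_count(n, s, p=1.0):
    """E[R_s] = p * u_s(n) = p * (2**(s-1) - 1) * C(n, s) (Equation 21)."""
    return p * (2 ** (s - 1) - 1) * comb(n, s)

n = 100
# p cancels in the argmax, so s_max is p-independent.
s_max = max(range(2, n + 1), key=lambda s: expected_size_count(n, s))
print(s_max)   # 67, the most populated reaction size for n = 100
```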
Figure 6: Effects of the probability of triggering reactions upon the expected number of reactions of different sizes in randomly wired chemical spaces with a fixed number of substances, \(n=100\), and wiring probability given by \(p=n^{\alpha}/\beta^{n}\). Behaviour for different values of \(\alpha\) and \(\beta\). For \(\beta=3\), \(\alpha=0,1,2\) and \(3\), while for \(\beta=1\), \(\alpha=-3,-2,-1\) and \(0\). The maximum values of \(\mathrm{E}[R_{s}]\) are given at \(s_{max}=67\).

We analysed different functional forms of \(p\), which depend on \(n\), and that allow for constant, linear, power law and exponential behaviours of the number of oriented hyperedges as a function of \(n\). These forms of \(p\), as well as the trends and their effects upon the number of oriented hyperedges, constitute an approach to determine whether a given oriented hypergraph follows the patterns of a randomly wired oriented hypergraph. The application motivating this study is the chemical space, which we model as an oriented hypergraph, where vertices correspond to substances and oriented hyperedges to chemical reactions. Two main reasons make oriented hypergraphs a suitable model for chemical spaces or, in general, sets of chemical reactions. First, hypervertices, which are the relata of the oriented hypergraph model, gather together the substances involved in a chemical reaction. Therefore, hypervertices turn out to be instrumental for distinguishing substances involved in a chemical transformation from those which are not. Second, oriented hyperedges, that is binary collections of hypervertices, encode a fundamental aspect of chemical reactions, namely the distinction between educts and products.

The Erdos-Renyi model here formulated is central for chemical studies since the availability of chemical information in electronic form, spanning centuries of chemical activity, as well as the ever growing computational power, have made it possible to study the evolution of the entire chemical space [24, 9]. This has led to results on the growth rate of the space, as well as to analyses of its diversity at the compositional and molecular structural levels, as well as at the reaction type level [24, 9, 25, 44, 45]. Despite the importance of these studies, an open question is whether the chemical space has a particular structure and how far, or close, such a structure lies from its random counterpart [27]. This boils down to the question of whether the chemical space has been randomly wired at any moment of its expansion, which poses further questions on the nature of chemistry as a discipline [27]. At any rate, determining whether the chemical space is far from or close to randomness requires quantifying the distance between the actual chemical space at a particular time and its random case. The Erdos-Renyi model, therefore, turns out to be central for such a quantification, which is the subject of a forthcoming publication. In this respect, by analysing the number of reactions involving a certain number of substances in a randomly wired chemical space, we found evidence that the actual chemical space seems to depart from a random wiring, as random spaces with a number of substances similar to that of the actual chemical space are mainly populated by reactions involving far more than a handful of substances, whereas actual chemical reactions typically involve only a handful of substances.
The same approach of exploring the distribution of reactions involving certain number of substances over time turns central to analyse whether there are subspaces of the chemical space of rather small number of substances that are close to a random wiring. A similar argument may be used to analyse the random wiring of spaces going beyond the traditional two or three educts of chemical reactions [24], which is one of the objectives of one-pot reactions [46], for instance, which aim at affording a circular and green chemistry. As recently discussed [28], oriented hypergraphs become suitable models to encode the molecular reality of reactions if every oriented hyperedge is conservative. That is, if its associated stoichiometric matrix, which encodes the actual amount of substances involved in the reaction, has a strictly positive reaction invariant. This mathematical condition implies that a chemical reaction must preserve the number of atoms and, therefore, mass. The condition imposed by the stoichiometric matrix triggers further questions. For instance, one might inquire into the frequency with which the random model fulfills the given criterion as a function of the number of oriented hyperedges. Our model can be extended to incorporate such stoichiometric analyses but it implies several changes in the adjacency matrix, which plays a fundamental role in obtaining most of the expressions used here. Such a modification requires further investigation and it is beyond the scope of the current paper. The Erdos-Renyi model is general enough to be applied to non-chemical systems modelled by oriented hypergraphs, which include description of logical and artificial intelligence processes, as well as database modelling [23]. Particular examples include the case of functional dependencies in relational databases [47] or the analysis of Horn formulae in logic [48] and the study of AND-OR graphs [49]. While developing the Erdos-Renyi model, we introduced concepts of further interest for the study of oriented hypergraphs such as size and degree of oriented hyperedges, which we extended to the size and degree of the entire oriented hypergraph structure made by the collection of oriented hyperedges. In chemical terms, we defined the size and degree of any reaction and we extended these concepts to the size and degree of arbitrary chemical spaces. The size of an oriented hyperedge corresponds to the number of vertices belonging in the hyperedge, while the degree of the oriented hyperedge accounts for the number of oriented hyperedges incident to the vertices of the oriented hyperedge of reference. In chemical terms, the size of a reaction accounts for the number of substances in the reaction and the degree of the reaction for the number of reactions in which substances of the reaction in question participate. We showed that the size and degree of an oriented hypergraph are equal. They indicate whether an oriented hypergraph is sparse or loosely connected, which occurs for structures with low size and degree values, that is close to 0. In contrast, oriented hypergraphs with high values, close to the upper bound of size and degree (\(n(3^{n-1}-2^{n-1})\)) turn to be tightly connected structures. By analysing the complete oriented hypergraph for \(n\) vertices, we found that the maximum number of oriented hyperedges incident to any vertex is \(3^{n-1}-2^{n-1}\), which in chemical terms amounts to determining the maximum number of reactions for a substance in a chemical space. 
This expression evidences the extremely large possibilities for wiring the chemical space [9]. This result led to finding, as discussed before, that the size and degree of any oriented hypergraph are restricted to the interval \([0,\,n(3^{n-1}-2^{n-1})]\). The extreme wiring possibilities of a chemical space of \(n\) substances are embodied in the maximum number of reactions a chemical space may hold, which turns out to be \(\frac{1}{2}(3^{n}-2^{n+1}+1)\). This is the number of oriented hyperedges of the complete oriented hypergraph. The aforementioned result strongly contrasts with the \(n(n-1)/2\) edges of a graph. The fact that the maximum number of reactions a chemical space may hold is proportional to \(3^{n}\), when modelled as an oriented hypergraph, contrasts sharply with the roughly \(n^{2}\) edges of the models of the chemical space based on graphs. These huge differences between the number of oriented hyperedges and edges are the result of the much richer description of chemical reactions provided by the oriented hypergraph model, where the two sets of substances involved in chemical reactions (educts and products) are an explicit part of the model, whereas in the graph model they are simply disregarded.

The number of reactions in the complete oriented hypergraph allowed for determining the speed of growth of its oriented hyperedges as a function of the number of vertices (\(du_{r}/dn\)). We found that \(du_{r}/dn=\frac{1}{2}(3^{n}\ln 3-2^{n+1}\ln 2)\). This result bears important implications for chemistry, as it provides the upper bound for the growth of the number of chemical reactions as a function of the number of available substances in the chemical space. This allows for determining that the expected number of reactions of a random oriented hypergraph is given by \(\frac{p}{2}(3^{n}-2^{n+1}+1)\) and for deriving similar expressions for the expected number of reactions of size \(s\). These results, in turn, allow for contrasting actual chemical spaces with random chemical spaces at different levels of detail, for instance by analysing the actual and expected number of reactions of particular sizes. An important invariant of random oriented hypergraphs we found is the ratio of the expected number of reactions to the expected degree or size of the chemical space. For large values of \(n\), we found this ratio to be \(3/2\), which becomes a proxy to determine whether a given chemical space is random or not, according to the Erdos-Renyi model here presented.

Despite the richness of the oriented hypergraph for modelling chemical spaces, it is open to improvements, for instance by introducing direction between the sets of substances involved in reactions, which would clearly distinguish between educts and products. In this case, chemical spaces are modelled as directed hypergraphs. The Erdos-Renyi model here presented requires further refinements to be adjusted for directed hypergraphs. They involve, for instance, adjusting \(p\) to distinguish between \(X\to Y\) and \(Y\to X\), with \(X\) and \(Y\) being sets of substances (hypervertices). We discussed the functional form of \(p\) as depending on \(n\), but other forms of \(p\) are also worth exploring, for instance as a function of reaction size (\(s\)). This leads to exploring how the wiring of random chemical spaces depends on the number of substances involved in their chemical transformations, which is a determining factor of the chemical possibilities to trigger actual chemical reactions [50].
Besides the Erdős-Rényi model for the higher-order structures discussed in this paper, namely hypergraphs as well as directed and oriented hypergraphs, other models need to be defined upon them, for instance the small-world [51] and the Barabási-Albert [52] ones. As in the original Erdős-Rényi model, where phase transitions and the conditions to attain them were studied, a further avenue of research is the study of the conditions that afford phase transitions in higher-order structures such as the ones presented here. ## 6 Author contributions AG-C conducted the research, AG-C and GR conceptualised the project, GR wrote the paper, and AG-C, MBM, PFS, JJ and GR discussed and reviewed the edited document. ## 7 Conflicts of Interest Statement We have no competing interests. ## 8 Funding MBM thanks the Alexander von Humboldt Foundation for its support. ## 9 Acknowledgments The authors thank Guido Montufar and Humberto Laguna for their feedback on early results of this project.
Higher-order structures are recognised as suitable models for systems whose relations go beyond the binary ones that graph models capture. Given the importance of these structures and the growing research activity around them, their random counterparts have recently attracted attention. One of these higher-order structures, the oriented hypergraph, relates pairs of sets containing arbitrary numbers of vertices. Here we develop an Erdős-Rényi model for oriented hypergraphs, which corresponds to the random realisation of oriented hyperedges of the complete oriented hypergraph. A feature of random oriented hypergraphs is that the ratio between the expected number of oriented hyperedges and the expected degree or size tends to 3/2 for hypergraphs with a large number of vertices. We highlight the suitability of oriented hypergraphs for modelling sets of chemical reactions and the relevance of random oriented hypergraphs for analysing the unfolding of chemistry.
2309.09704
Interface stabilization in adhesion caused by elastohydrodynamic deformation
Interfacial instabilities are common phenomena observed during adhesion measurements involving viscoelastic polymers or fluids. Typical probe-tack adhesion measurements with soft adhesives are conducted with rigid probes. However, in many settings, such as for medical applications, adhesives make and break contact from soft surfaces such as skin. Here we study how detachment from soft probes alters the debonding mechanism of a model viscoelastic polymer film. We demonstrate that detachment from a soft probe suppresses Saffman-Taylor instabilities commonly encountered in adhesion. We suggest the mechanism for interface stabilization is elastohydrodynamic deformation of the probe and propose a scaling for the onset of stabilization.
Preetika Karnal, Yumo Wang, Anushka Jha, Stefan Gryska, Carlos Barrios, Joelle Frechette
2023-09-18T12:16:22
http://arxiv.org/abs/2309.09704v1
# Interface stabilization in adhesion caused by elastohydrodynamic deformation ###### Abstract Interfacial instabilities are common phenomena observed during adhesion measurements involving viscoelastic polymers or fluids. Typical probe-tack adhesion measurements with soft adhesives are conducted with rigid probes. However, in many settings, such as for medical applications, adhesives make and break contact from soft surfaces such as skin. Here we study how detachment from soft probes alters the debonding mechanism of a model viscoelastic polymer film. We demonstrate that detachment from a soft probe suppresses Saffman-Taylor instabilities commonly encountered in adhesion. We suggest the mechanism for interface stabilization is elastohydrodynamic deformation of the probe and propose a scaling for the onset of stabilization. + Footnote †: preprint: APS/123-QED There is a wide interest in controlling interfacial instabilities, as they often affect the process in which they are formed.[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11] Interfacial instabilities can be a safety hazard for batteries,[12; 13] limit oil recovery,[14] impact properties of graphene sheets,[15] enhance the mixing of fluids,[16; 17] or guide the fabrication of soft materials[18; 19; 20]. A common interfacial instability is the Saffman-Taylor type, manifested as undulating patterns formed in narrow gaps at fluid-fluid interfaces when a lower viscosity fluid displaces a higher viscosity fluid.[21; 22; 23; 24; 25] Their onset can be controlled through low flow rates[25] or local geometry [26; 1; 2]. For example, elastic deformation of a membrane ahead of the fluid-fluid front alters the flow and suppresses viscous instabilities.[27; 10] Due to their sensitivity to the flow profile, interfacial instabilities could potentially be manipulated in contact problems, such as in adhesion, where they are a source of energy dissipation.[28; 29; 30; 24] Adhesion between two soft materials is ubiquitous, for instance during contact of medical adhesives or flexible electronics with skin.[32; 33; 34; 35; 36; 37] Despite its technological significance, studies of adhesion between two soft materials are limited, but reveal qualitative differences from debonding from a rigid surface.[38; 39; 40; 41; 42; 43; 44; 45] 
In our measurements, a rigid spherical probe is detached from a thin viscoelastic adhesive (thickness of \(b\) = 25 \(\mu\)m, Young's modulus \(\sim\)30 kPa, **Fig. 1A**). The adhesion measurements are conducted on a microscope with bottom and side view imaging.[46] During detachment, the adhesive-air interface is unstable and fingers form and grow until complete debonding (**Fig. 1C**). A distinguishing feature of interfacial instabilities in adhesion is the dependence of their wavelength, \(\lambda\), on the detachment velocity. For a Saffman-Taylor instability, \(\lambda\) scales with the film thickness (\(b\)) and the capillary number (\(Ca=\eta^{*}U/\gamma\)) as: \[\lambda=\pi b/\sqrt{Ca} \tag{1}\] where \(\eta^{*}\) is the complex viscosity of the adhesive, \(U\) is the radial velocity, and \(\gamma\) is the surface tension of the adhesive-air interface.[25, 27, 23, 28] The complex viscosity accounts for the viscoelasticity of the adhesive. We measured adhesion for different detachment velocities and film thicknesses (25-100 \(\mu\)m) and characterized fingering wavelengths at their onset (lowest strain in the films). We then compared our measurements to **Eqn. (1)** by determining the capillary number using the radial velocity of the growing fingers' apex, the complex viscosity \(\eta^{*}\), and the surface tension (45 \(\pm\) 2 mN/m).[31, 47] Agreement between data and **Eqn. (1)**, **Fig. 1B**, confirms the presence of Saffman-Taylor instabilities[31]. In contrast, an elastic instability in the PSA would have a wavelength that only depends on the thickness of the adhesive (\(\lambda_{e}=4b\))[31, 48, 49, 50, 51], and would quadruple as we quadruple the film thickness. Instead, the wavelength increases by a factor of 3-12 when we quadruple the thickness, depending on the velocity, and decreases as the velocity increases, both characteristic of Saffman-Taylor instabilities. 
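For a rough sense of scale, the following back-of-the-envelope evaluation of **Eqn. (1)** uses the film thickness and surface tension quoted above, while the complex viscosity and apex velocity are illustrative assumptions rather than measured values from this study.

```python
import math

# b and gamma are the values quoted above; eta_star and U are assumed, illustrative inputs.
b = 25e-6          # film thickness (m)
gamma = 45e-3      # adhesive-air surface tension (N/m)
eta_star = 1.0e3   # complex viscosity (Pa.s), assumption
U = 50e-6          # radial velocity of the finger apex (m/s), assumption

Ca = eta_star * U / gamma                    # capillary number
wavelength = math.pi * b / math.sqrt(Ca)     # Eqn. (1)
print(f"Ca = {Ca:.2f}, lambda = {wavelength * 1e6:.0f} um")
```

With these assumed inputs, \(Ca\approx 1\) and \(\lambda\) comes out on the order of a few film thicknesses (tens of micrometres).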
We then repeat the same measurements, but with silicone probes of increasing compliance. The compliant probes are made of PDMS (polydimethylsiloxane) with different crosslinking ratios, which were extracted after curing to remove unreacted oligomers and treated with plasma to render their surface hydrophilic. The soft probes have nearly identical geometry and surface energy as the rigid probes, but with a Young's modulus that varies from \(\sim\)2 MPa to \(\sim\)0.2 MPa.[31] We estimate the \(Ca\) of the adhesive film during detachment and find it comparable to values for rigid probes that displayed Saffman-Taylor instabilities (red arrows, **Fig. 1B**). Detachment with the stiffer PDMS leads to an unstable interface, but the interface stabilizes for softer probes (**Fig. 1D,E**). Due to confinement, the compliance of the adhesive film is smaller than that of its bulk counterpart, and also smaller than that of all soft probes investigated.[31] While a PSA is a viscoelastic solid, a simple stress-strain model where the thin adhesive film is in series with a soft probe (\(k_{\text{PSA}}\gg k_{\text{probe}}\)) suggests a significant dissipative response due to the complex viscosity of the adhesive.[31] Therefore, even if the adhesive film is a solid, its dynamic response is dominated by viscoelasticity. Moreover, recent work shows that in the case of elastic instabilities the interface can become stable as the probe modulus increases, the opposite of our observations.[52] As the probe compliance increases, the interface becomes stable during detachment (**Fig. 1C-1E**). Because only the compliance of the probe is varied (and not its surface energy), the experiments suggest the importance of compliance for interface stabilization.[31] The transition to a stable interface also has no impact on the adhesive strength (\(F_{\text{max}}\) in **Fig. 2 Inset**). For the same debonding velocity the adhesive strength is nearly the same for all probe moduli, without any distinction between stable and unstable interfaces. For the sphere-plane geometry the adhesive strength is independent of compliance, but the mode of failure can affect the force profile.[53, 54] A small plateau in force was also observed with the onset of fingering instabilities when lifting rigid plates confining a viscous fluid,[29] whereas adhesion-induced elastic instabilities increased the resistance to deformation, leading to higher forces.[55] We also find that stabilization of the interface is not due to a change in probe surface energy. The relationship between the adhesive strength (\(F_{\text{max}}\)), debonding velocity (\(v\)), and compliance is well-established and given by: \[F_{\text{max}}=2\left[\frac{A_{0}}{C_{\text{sys}}}G_{0}\left(\frac{v}{v_{ref}} \right)^{n}\right]^{\frac{1}{2}}, \tag{2}\] Figure 2: Adhesive strength for different probes and retraction velocities. The slope \(\sim\left(2\sqrt{G_{0}/v_{\text{ref}}^{0.4}}\right)\) increases with probe surface energy. There is no distinction in the adhesive strength for a stable (pink) or unstable (blue) interface. Inset: Debonding curve between soft PDMS probes and adhesive films at \(v\) = 50 \(\mu\)m/s. An increase in probe compliance decreases the slope. The maximum force, \(F_{\text{max}}\), is independent of probe modulus. 
where \(G_{0}\) is the intrinsic strain energy release rate, \(A_{0}\) is the maximum contact area, \(C_{\text{sys}}\) is the system compliance, \(v\) is the debonding velocity, and \(n\) is an empirical constant, here \(n=0.4\).[53; 54; 56; 55] Therefore, for a constant apparent work of adhesion we expect a linear relationship between \(F_{\text{max}}\) and \(\sqrt{A_{0}/C_{\text{sys}}v^{0.4}}\) with a slope \(2\sqrt{G_{0}/v_{\text{ref}}^{0.4}}\). Adhesion follows the established force scaling relationship well, with no departure from the linear relationship that would indicate a change in surface energy for softer PDMS probes. Data for the hydrophilic PDMS include the adhesive strength for probes with elastic moduli between 0.18 and 1.8 MPa (**Fig. 2**).[31] The linear relationship observed across PDMS probe moduli confirms the constant apparent surface energy. This linear relationship also holds for probes of different surface energy, but with a different slope (silica and hydrophobic PDMS, **Fig. 2**). Side view imaging shows that the transition to a stable interface is accompanied by significant elastic deformation of the probe, **Fig. 3**. The forces resisting the probe's upward motion within the adhesive film cause elastic deformation of the probe and appear to be stabilizing the interface. For hydrophilic PDMS probes, interface stabilization occurs despite having the same intrinsic surface energy. Using \(G_{0}/E_{\text{eff}}a\) we evaluate the intrinsic strain energy release rate normalized with the contact compliance (or the elastoadhesive length normalized with the contact radius)[57], where the effective modulus is \(E_{\text{eff}}=3/(4C_{\text{sys}}a)\), and \(a\) is the contact radius. This quantity represents the ability of a material to resist crack propagation through elasticity. Changes in the relative importance between contact compliance and surface energy in the contact region, \(G_{0}a^{2}\), **Fig. 4A**, do not delineate stable from unstable interfaces. In other words, the deformation of the probe is not dominated by an increased contribution from the surface energy as the probe modulus decreases. Here, debonding occurs between a soft probe and a viscoelastic adhesive. At any given time the measured force is due to surface, viscoelastic, and elastic (probe deformation) contributions. The elastic (probe deformation) and viscous ("flow" of the adhesive) forces are highly coupled. Elastohydrodynamic deformation occurs when the viscous forces in a fluid are strong enough to cause elastic deformation of an opposing surface.[58; 59; 60; 61] We hypothesize that the probe deformation alters the pressure distribution within the adhesive film, leading to a suppression of Saffman-Taylor instabilities. The relative importance of elastohydrodynamic deformation can be estimated through an elasticity parameter, \(\epsilon\) (**Eqn. (3)**), obtained from non-dimensionalization of the lubrication equation: [58; 59; 62] \[\epsilon=\frac{\eta^{*}vR^{1.5}}{E_{\text{probe}}^{*}b^{2.5}}. \tag{3}\] The elasticity parameter can be viewed as a ratio between elastic forces within the probe and viscous forces within the adhesive film. Figure 3: Side and bottom view images during debonding over time. Rigid probe (A) bottom and (B) side views. Soft probe, \(E_{\text{probe}}\) = 0.18 MPa (C) bottom and (D) side views. Instabilities are present during debonding from the rigid probe. Side view images (B) show stretching of the adhesive. For the soft probe (C) the interface is stable; side views (D) show probe deformation. Note the different scale and magnification between the side and bottom views; the arrows represent the same dimension. 
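As a numerical illustration of **Eqn. (3)** (our own, not from the original study), the sketch below evaluates \(\epsilon\) for two probe/film combinations chosen from the reported parameter ranges; the complex viscosity is set to the value used in the simplified model discussed later (\(\eta^{*}\approx 1000\) Pa\(\cdot\)s) and should be read as an assumption.

```python
def elasticity_parameter(eta_star, v, R, E_probe, b):
    """Eqn. (3): ratio of probe elastic forces to viscous forces in the film."""
    return eta_star * v * R ** 1.5 / (E_probe * b ** 2.5)

# eta_star follows the simplified model discussed below (1000 Pa.s); the two probe/film
# combinations are illustrative picks from the reported ranges, not specific experiments.
eta_star, v, R = 1.0e3, 50e-6, 6e-3
print(elasticity_parameter(eta_star, v, R, E_probe=0.18e6, b=25e-6))    # ~40   (epsilon >> 1)
print(elasticity_parameter(eta_star, v, R, E_probe=1.8e6, b=100e-6))    # ~0.13 (epsilon < 1)
```

With these picks, the soft-probe/thin-film case gives \(\epsilon\gg 1\) while the stiff-probe/thick-film case gives \(\epsilon<1\), consistent with the \(\epsilon\approx 1\) transition discussed below.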
As \(\epsilon\) increases, the elastic deformation of the probe (\(w\)) increases. For low \(\epsilon\), viscous forces do not cause probe deformation. We previously found that the dimensionless central deformation (\(\tilde{w}=w/b\)) of a spherical probe scales with \((6\epsilon)^{0.4}\).[63] A plot of \(\epsilon\) as a function of debonding velocity (\(v\)) shows a clear demarcation between stable and unstable interfaces (**Fig. 4B**). The transition to a stable interface occurs across different material systems and experimental parameters: probe modulus, radius, detachment velocity, and film thickness. The transition between an unstable and stable interface occurs around \(\epsilon=1\), when the elastic forces in the probe begin to dominate over the viscous forces in the adhesive film. The transition to a stable interface as the elasticity parameter increases supports the hypothesis that elastohydrodynamic deformation of the probes suppresses the fingering instabilities. As the velocity increases the adhesive strength increases, and stabilization of the interface shifts to higher \(\epsilon\) (**Fig. 4B**). We compare the effect of the debonding velocity on the probe deformation and on the pressure within the film. An increase in \(v\) will increase the pressure within the adhesive film, which has a destabilizing tendency for the interface. However, increasing the velocity also increases the probe deformation, which we hypothesize stabilizes the interface. Non-dimensionalization of the lubrication equation leads to a characteristic pressure in the fluid, \(p^{*}=\eta vR/b^{2}\).[63] For a viscoelastic film, \(p^{*}=\eta^{*}vR/b^{2}\), therefore \(p^{*}\propto v^{1-m}\), where \(m\) captures the dependence of the complex viscosity on velocity (\(\eta^{*}\propto v^{-m}\)). For our material here \(m=0.725\), giving \(p^{*}\propto v^{0.28}\). Moreover, the dimensionless central probe deformation scales as \(\tilde{w}\sim v^{0.4(1-m)}\), and specifically for our material system \(\tilde{w}\sim v^{0.11}\). Therefore, as \(v\) increases the pressure within the film (\(p^{*}\sim v^{0.28}\)) increases faster than the deformation of the probe (\(\tilde{w}\sim v^{0.11}\)). The faster increase in pressure within the film as \(v\) increases would necessitate larger probe deformations to stabilize the interface, thus a higher elasticity parameter is needed for stabilization. We study the relationship between elastohydrodynamic deformation and adhesive film pressure by modeling debonding between a soft probe and a rigid surface submerged in a Newtonian fluid.[31] Figure 4: (A) Elastoadhesive length normalized by the contact radius vs effective surface energy for all probes. (B) Elasticity parameter (\(\epsilon\)) vs debonding velocity (\(v\)). The transition from unstable to stable interface is observed around \(\epsilon=1\) (black dotted line). Data include adhesives with \(b=25\), \(50\), and \(100~\mu\)m and \(R\) between 4.5 mm and 14 mm, and show unstable interfaces (blue) and stable interfaces (pink). Figure 5: Dimensionless pressure (\(p^{*}=\eta vR/b^{2}\)) versus dimensionless radial position (\(R_{H}=\sqrt{2Rb}\)) obtained from modeling the detachment of soft (\(E_{\text{probe}}\)=0.32 MPa) and stiff (\(E_{\text{probe}}\)=3 MPa) PDMS probes of R = 6 mm at 50 \(\mu\)m/s and \(\eta\)=1000 Pa.s, b=20 \(\mu\)m. 
Retraction of the soft probe leads to lower fluid pressure and the appearance of a stagnation point delineating drainage and infusion regions. In the model, the fluid viscosity is comparable to the complex viscosity of the adhesive. This model is a highly simplified version of our experiments, in that the adhesive is treated as a viscous fluid without an air-adhesive interface present. We extract the pressure profile during detachment for both rigid and soft probes and obtain a lower fluid pressure with the soft probe, **Fig. 5**, which would have a stabilizing effect. We also observe that the elastohydrodynamic probe deformation leads to a non-monotonic pressure drop within the fluid. In contrast, the pressure distribution is monotonic during the detachment from a rigid probe. Moreover, deformation of the soft probe leads to a negative pressure gradient at the center point, causing fluid _drainage_ from the center during detachment, while further away from the center the pressure gradient is positive, leading to the expected fluid _infusion_. Between the drainage and infusion regions there is a stagnation point where the pressure gradient is zero. The stagnation point moves towards the center of the probe during retraction (**Fig. 5**). Because of incompressibility, the surfaces initially move closer at the center point during detachment. The combination of lower pressure and a stagnation point could suppress the Saffman-Taylor instabilities during the detachment from a soft probe, and will be the subject of future studies. In summary, the detachment of a viscoelastic adhesive from soft surfaces suppresses the onset of Saffman-Taylor instabilities. While elasticity has been shown previously to impact Saffman-Taylor instabilities, we show here the connection with adhesion. Controlling the mode of failure during debonding between soft materials could impact adhesion (and pain) with skin. We attribute stabilization of the interface to elastohydrodynamic deformation of the probe caused by viscoelasticity. The elasticity parameter can serve as a guide for interfacial stability. A simple model shows that replacing a rigid probe with a soft one leads to a decrease in the pressure drop and the appearance of a stagnation point within the film, both of which could lead to interface stabilization. Further studies are necessary to better understand the detachment process between two soft materials and the stabilization of the interface. _Acknowledgements:_ This work was supported by 3M and by the National Science Foundation (NSF-CMMI 1728082). Y.W. also acknowledges support from the National Natural Science Foundation of China (Grant No. 51804319).
Interfacial instabilities are phenomena commonly observed during adhesion measurements involving viscoelastic polymers or fluids. Typical probe-tack adhesion measurements with soft adhesives are conducted with rigid probes. However, in many settings, such as medical applications, adhesives make and break contact with soft surfaces such as skin. Here we investigate how detachment from a soft probe alters the debonding mechanism of a model viscoelastic polymer film. Detachment from a soft probe suppresses the Saffman-Taylor instabilities commonly encountered in adhesion. We propose elastohydrodynamic deformation of the probe as the mechanism for interface stabilization and propose a scaling for the onset of stabilization.
2305.00393
DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization
Unsupervised learning of object-centric representations in dynamic visual scenes is challenging. Unlike most previous approaches that learn to decompose 2D images, we present DynaVol, a 3D scene generative model that unifies geometric structures and object-centric learning in a differentiable volume rendering framework. The key idea is to perform object-centric voxelization to capture the 3D nature of the scene, which infers the probability distribution over objects at individual spatial locations. These voxel features evolve over time through a canonical-space deformation function, forming the basis for global representation learning via slot attention. The voxel features and global features are complementary and are both leveraged by a compositional NeRF decoder for volume rendering. DynaVol remarkably outperforms existing approaches for unsupervised dynamic scene decomposition. Once trained, the explicitly meaningful voxel features enable additional capabilities that 2D scene decomposition methods cannot achieve: it is possible to freely edit the geometric shapes or manipulate the motion trajectories of the objects.
Yanpeng Zhao, Siyu Gao, Yunbo Wang, Xiaokang Yang
2023-04-30T05:29:28
http://arxiv.org/abs/2305.00393v4
# Unsupervised Object-Centric Voxelization for Dynamic Scene Understanding ###### Abstract Understanding the compositional dynamics of multiple objects in unsupervised visual environments is challenging, and existing object-centric representation learning methods often ignore 3D consistency in scene decomposition. We propose DynaVol, an inverse graphics approach that learns object-centric volumetric representations in a neural rendering framework. DynaVol maintains time-varying 3D voxel grids that explicitly represent the probability of each spatial location belonging to different objects, and decouple temporal dynamics and spatial information by learning a canonical-space deformation field. To optimize the volumetric features, we embed them into a fully differentiable neural network, binding them to object-centric global features and then driving a compositional NeRF for scene reconstruction. DynaVol outperforms existing methods in novel view synthesis and unsupervised scene decomposition and allows for the editing of dynamic scenes, such as adding, deleting, replacing objects, and modifying their trajectories. ## 1 Introduction Performing object-centric unsupervised learning in dynamic visual environments is of great importance but challenging due to the intricate entanglement between the spatial and temporal information [32; 26; 10]. Previous approaches primarily leverage the temporal cues across consecutive video frames but tend to ignore the 3D nature, resulting in a multi-view mismatch of 2D object segmentation [5; 27; 14]. In this paper, we provide an early study of unsupervised dynamic scene decomposition in 3D scenarios, where we believe that an effective object-centric representation should satisfy three conditions: (i) It should be able to accurately represent the _spatial structures_ of the visual scene in a stereo-consistent manner and facilitate precise object localization. (ii) It should capture and decouple the _underlying dynamics_ of each object from the visual appearance and spatial structures. (iii) It should obtain a _global understanding_ of each object, which is crucial for downstream tasks such as scene editing and relational reasoning. Accordingly, we propose to learn two sets of object-centric representations: one that represents the local spatial structures using 3D voxel grids that may vary over time, and another that represents the global understanding of each object, which is time-invariant. One may wonder about the advantages of introducing 3D voxelization. The answer is that if we can learn voxel grids that explicitly indicate the probability of each spatial location belonging to different objects, we can achieve 3D-consistent scene decomposition naturally. It is worth noting that in our approach, these two sets of global/local representations are interdependent during training, in the sense that well-trained and decoupled global features can guide the model to bind each spatial location to the corresponding object. Based on these intuitions, we propose DynaVol, an inverse graphics framework that learns to perform object-centric voxelization of 3D dynamic scenes. 
Our approach comprises three network components, including (i) a deformation network that learns the canonical-space transitions of volumetric representations over time, (ii) a 3D slot attention network that progressively refines the object-level, time-invariant global features by aggregating the volumetric representations, and (iii) a voxel-based, compositional neural radiance field (NeRF) for scene reconstruction, which introduces a strong geometric inductive bias that facilitates the learning of object-centric representations. As shown in Figure 1, our unsupervised voxelization approach provides three advantages for dynamic scene understanding. First, it allows for fine-grained separation of object-centric information. Second, it naturally ensures the 3D consistency of the decomposition results. Third, it enables direct scene editing that is not possible in existing dynamic scene decomposition methods [15; 25; 5]. DynaVol is trained on each individual dynamic scene using a two-stage learning scheme. In the first warmup stage, a sparse set of multi-view static images is used to provide strong geometric priors that facilitate the decoupling of spatial and temporal features. In the second stage, the entire model is optimized using sequential data from a monocular moving camera, allowing for a joint refinement of the initial voxel grid features, global slot-attention features, and the dynamics learning module. We evaluate DynaVol on multiple 3D dynamic scenes with different numbers of objects, diverse motions, various shapes (such as cube, sphere, and real-world shapes), as well as different materials (such as rubber and metal). We demonstrate the effectiveness of our approach in three ways. First, DynaVol outperforms existing scene decomposition approaches (SAVi [15] and uORF [34]) by projecting the object-centric volumetric representations onto 2D planes. Second, it outperforms strong baseline models (D-NeRF [24], DeVRF [18]) for novel view synthesis. Finally, it also performs well for dynamic scene editing, such as object removal, replacement, and trajectory modification, by directly manipulating the voxel grids and the learned deformation function without further training. ## 2 Related Work Unsupervised scene decomposition in 2D space.Most existing approaches in this area [9; 8; 6; 1] use latent features to represent objects in 2D scenarios like CLEVR [13]. The slot attention method [19] extracts object-centric latents through a cross-attention mechanism and repeatedly refines them using GRUs [3]. SAVi [15] extends slot attention into dynamic scenes by updating slots at each frame and uses optical flow data as the reconstruction target. STEVE [27] improves SAVi by simply replacing the spatial broadcast decoder with an autoregressive Transformer. SAVi++ [5] uses depth information to improve SAVi for modeling static objects and scenes with camera motion. Unsupervised scene decomposition in 3D space.Recent methods [14; 2; 34; 28; 25] combine object-centric representations with view-dependent scene modeling techniques like neural radiance fields (NeRFs) [22]. ObSuRF [28] adopts the spatial broadcast decoder and takes depth information as training supervision. uORF [34] extracts the background latent and foreground latents from an Figure 1: **Contributions: DynaVol explores an unsupervised object-centric voxelization approach for dynamic scene decomposition. 
Unlike its 2D counterparts, such as SAVi [15], DynaVol ensures 3D consistency and provides additional capabilities, _e.g._, novel view synthesis and scene editing.** input static image to handle background and foreground objects separately. For dynamic scenes, Li _et al._[17] proposed an auto-encoder framework that incorporates volume rendering to model dynamic scenes. However, it represents the whole scene using a single latent vector (instead of object-centric features), which can be insufficient for complex scenarios with multiple objects and various motions. Guan _et al._[11] proposed to use a set of particle-based explicit representations in the NeRF-based inverse rendering framework, which is particularly designed for fluid physics modeling. Driess _et al._[4] explored the combination of an object-centric auto-encoder and volume rendering for dynamic scenes, which is the most relevant work to our approach. However, unlike our approach which is totally unsupervised, it requires pre-prepared 2D object segments. Dynamic scene rendering based on NeRFs.Besides those with object-centric representations, there is another line of work [24; 18; 33; 12] that models 3D dynamics using NeRF-based methods. D-NeRF [24] uses a deformation network to map the coordinates of the dynamic fields to the canonical space. DeVRF [18] models dynamic scenes with volume grid features [29] and voxel deformation fields. Recently, D\({}^{2}\)NeRF [33] presents a motion decoupling framework. However, unlike DynaVol, it cannot segment multiple moving objects. ## 3 Method In this section, we first discuss the problem setup and the overall framework of DynaVol (Sect. 3.1). We then introduce a new set of representations for object-centric scene voxelization and present the network details of dynamics modeling, 3D slot attention, and object-centric neural rendering (Sect. 3.2-3.4). Finally, we describe the two-stage training procedure of DynaVol (Sect. 3.5). ### Overview of DynaVol Problem setup.We assume a set of RGB images of a dynamic scene \(\{\mathbf{I}_{t}^{v},\mathbf{T}_{t}^{v}\}_{t=1}^{T}\) collected with a moving monocular camera and a sparse set of views \(\{\mathbf{I}_{t_{0}}^{v},\mathbf{T}_{t_{0}}^{v}\}_{v=1}^{V}\) at the initial timestamp. \(\mathbf{I}_{t}^{v}\in\mathbb{R}^{H\times W\times 3}\) are images acquired under camera poses \(\mathbf{T}_{t}^{v}\in\mathbb{R}^{4\times 4}\), \(T\) is the length of video frames, and \(V\) is the number of views at the starting moment \(t_{0}\). The goal is to understand the space-time structures of the visual scene from \(\{\mathbf{I}_{t}^{v}\}_{t=1}^{T}\) and \(\{\mathbf{I}_{t_{0}}^{v}\}_{v=1}^{V}\) without additional information. Overall framework.DynaVol is trained in an inverse graphics learning framework to synthesize \(\{\mathbf{I}_{t}^{v}\}_{t=1}^{T}\) and \(\{\mathbf{I}_{t_{0}}^{v}\}_{v=1}^{V}\) without further supervision. Formally, the goal is to learn an object-centric projection of \((\mathbf{x},\mathbf{d},t)\rightarrow\{(\mathbf{c}_{n},\sigma_{n})\}_{n=1}^{N}\), where \(N\) is the assumed number of objects and \(\mathbf{x}=(x,y,z)\) is a 3D point sampled by the neural renderer, which outputs the density and color for each object at view direction \(\mathbf{d}\). By re-combining \(\{(\mathbf{c}_{n},\sigma_{n})\}_{n=1}^{N}\) to approach true pixel values, the model is required to learn 3D-consistent object-centric representations. 
As shown in Figure 2, DynaVol maintains voxel grids \(\mathcal{V}_{\text{density}}\) and a set of object-level slot features \(\mathbf{S}\). The entire model consists of three network components: (i) Deformation networks \(f_{\psi}\) and \(f_{\xi}^{\prime}\) that learn the canonical-space transitions of \(\mathcal{V}_{\text{density}}\) over time. (ii) A volume encoder \(E_{\theta}\) and a slot attention block \(Z_{\omega}\) that progressively refine \(\mathbf{S}\). (iii) A compositional NeRF\({}^{3}\) \(N_{\phi}\) that jointly uses \(\mathcal{V}_{\text{density}}\) and \(\mathbf{S}\) to render the observed images. The training pipeline of DynaVol involves two stages, including a warmup stage that learns (\(\mathcal{V}_{\text{density}},f_{\psi},f_{\xi}^{\prime},N_{\phi^{\prime}}\)) and a dynamic grounding stage that learns (\(\mathcal{V}_{\text{density}},\mathbf{S},f_{\psi},E_{\theta},Z_{\omega},N_{\phi}\)). Footnote 3: It is important to note that we use a non-compositional NeRF denoted by \(N_{\phi^{\prime}}\) in the warmup stage. ### Object-Centric Volumetric Representation and Dynamics Modeling Volumetric representation of object probability. Inspired by the idea of using a 3D voxel grid to maintain the volume density for neural rendering, we extend it with a 4D voxel grid, denoted as \(\mathcal{V}_{\text{density}}\). The additional dimension is used to indicate the occurrence probabilities of each object within each grid cell. The occurrence probability \(\{\sigma_{n}\}_{n=1}^{N}\) at an arbitrary 3D location can be efficiently queried through the trilinear interpolation sampling method: \[\operatorname{Interp}(\mathbf{x},\mathcal{V}_{\text{density}}):(\mathbb{R}^{3},\mathbb{R}^{N\times N_{x}\times N_{y}\times N_{z}})\rightarrow\mathbb{R}^{N}, \tag{1}\] where \((N_{x},N_{y},N_{z})\) are the resolutions of \(\mathcal{V}_{\text{density}}\). To achieve sharp decision boundaries during training, we apply the softplus activation function to the output of trilinear interpolation. Canonical-space dynamics modeling. Inspired by D-NeRF [24], we use a dynamics module \(f_{\psi}\) to learn the deformation field from the voxel grid \(\mathcal{V}_{\text{density}}^{t_{0}}\) at the initial timestamp to its canonical space variations over time. Given any 3D point \(\mathbf{x}_{i}\in\{\mathbf{x}\}\) at an arbitrary time, \(f_{\psi}(\mathbf{x}_{i},t)\) predicts a position movement \(\Delta\mathbf{x}_{i}\), so that we can transform \(\mathbf{x}_{i}\) to the scene position at the first moment by \(\mathbf{x}_{i}+\Delta\mathbf{x}_{i}\). We then query the occurrence probability from \(\mathcal{V}_{\text{density}}^{t_{0}}\) by \(\widehat{\mathcal{V}}_{\text{density}}^{t}=\operatorname{Interp}\left(\left(\mathbf{x}_{i}+f_{\psi}(\mathbf{x}_{i},t)\right),\mathcal{V}_{\text{density}}^{t_{0}}\right)\). Notably, we encode \(\mathbf{x}_{i}\) and \(t\) into higher dimensions via positional embedding. Additionally, we use another dynamics module \(f_{\xi}^{\prime}\) in the warmup stage to model the forward movement \(\Delta x^{\prime}\) from the initial moment to timestamp \(t\). This module enables the calculation of a cycle-consistency loss, ensuring the coherence of the estimated motion. More details are introduced in Sect. 3.5. ### Object-Centric Global Representation and 3D Slot Attention Slot features. In addition to the time-varying volumetric representations, we further learn another set of object-centric features that are **time-invariant** and represent the global understanding of each object. 
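Before turning to the slot features, the following minimal sketch (our own illustration in plain Python/SciPy, not the authors' implementation) shows the lookup pattern behind Eq. (1) and the canonical-space deformation: deform a sample point back to the \(t_{0}\) frame with a stand-in for \(f_{\psi}\), trilinearly interpolate one density per object, and apply the softplus mentioned above. The grid contents, the deformation function, and the reduced grid resolution are toy placeholders.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy stand-ins: random grid values, a reduced 48^3 resolution (the paper uses 110^3),
# and a hand-written deformation in place of the learned network f_psi.
N_obj, Nx, Ny, Nz = 10, 48, 48, 48
V_density_t0 = np.random.rand(N_obj, Nx, Ny, Nz)
axes = [np.linspace(0.0, 1.0, s) for s in (Nx, Ny, Nz)]
interps = [RegularGridInterpolator(axes, V_density_t0[n]) for n in range(N_obj)]

def f_psi(x, t):
    """Placeholder for the learned deformation network; returns Delta x."""
    return 0.01 * np.sin(2 * np.pi * t) * np.ones_like(x)

def query_densities(x, t):
    """Deform x back to the canonical (t0) frame, then trilinearly interpolate sigma_n."""
    x0 = np.clip(x + f_psi(x, t), 0.0, 1.0)
    raw = np.array([interp(x0[None])[0] for interp in interps])
    return np.log1p(np.exp(raw))          # softplus, as mentioned in the text

print(query_densities(np.array([0.3, 0.5, 0.7]), t=0.2))   # one density per object
```

In the real model both the grid values and \(f_{\psi}\) are learned end to end; the point of the sketch is only the query pattern itself.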
Specifically, we use a set of latent codes referred to as "slots" to represent these object-level features. This terminology is in line with prior literature on 2D static scene decomposition [19]. The slots are randomly initialized from a normal distribution and progressively refined episode by episode throughout our second training stage. In this context, an _episode_ refers to a training process that iterates through every moment of the data sequence. We denote the slots by \(\mathbf{S}_{t,e}\in\mathbb{R}^{N\times D}\), where \(e\) is the episode index and \(D\) is the feature dimensionality. At the beginning of each episode, we have \(\mathbf{S}_{t_{0},e}=\mathbf{\bar{S}}_{e-1}\), where \(\mathbf{\bar{S}}_{e-1}\) represents the average of \(\{\mathbf{S}_{t,e-1}\}_{t=1}^{T}\) across all timestamps in the previous episode. Each slot feature captures the time-invariant properties such as the appearance of each object, which enables the manipulation of the scene's content and relationships between objects. Encoder. To bind the voxel grid representations to the corresponding object, at timestamp \(t\), we pass \(\mathcal{V}_{\text{density}}^{t}\) through a 3D CNN encoder \(E_{\theta}\), which consists of \(3\) convolutional layers with ReLU. It outputs flattened features \(\mathbf{h}_{t}\in\mathbb{R}^{M\times D}\), where \(M\) represents the size of the voxel grids that have been reduced in dimensionality by the encoder. From an optimization perspective, a set of well-decoupled global slot features can benefit the separation of the object-centric volumetric representations. Slot attention. To refine the slot features, we employ the iterative attention block denoted by \(Z_{\omega}\) to incorporate the flattened local features \(\mathbf{h}_{t}\). In a single round of slot attention, we have: \[\mathcal{A}_{t}=\operatorname{softmax}_{N}\left(\frac{1}{\sqrt{D}}k(\mathbf{h}_{t})\cdot q(\mathbf{S}_{t})^{T}\right),\quad W_{t}^{i,j}=\frac{\mathcal{A}_{t}^{i,j}}{\sum_{l=1}^{M}\mathcal{A}_{t}^{l,j}},\quad\mathcal{U}_{t}=W_{t}^{T}\cdot v(\mathbf{h}_{t}), \tag{2}\] where \((q,k,v)\) are learnable linear projections \(\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) [20], such that \(\mathcal{A}_{t}\in\mathbb{R}^{M\times N}\) and \(\mathcal{U}_{t}\in\mathbb{R}^{N\times D}\), and \(\sqrt{D}\) is a fixed softmax temperature [30]. The resulting slot features are then updated by a GRU as \(\widehat{\mathbf{S}}_{t}=\operatorname{GRU}(\mathcal{U}_{t},\mathbf{S}_{t})\). We repeat the attention computation \(3\) times at each timestamp. Figure 2: **An overview of DynaVol.** The model comprises three network components, including the voxel grid deformation module, the slot features refinement module, and the neural rendering module. The model has two training stages, including a warmup stage and a dynamic grounding stage. ### Compositional Neural Renderer Forward modeling. The renderer can be denoted by \(N_{\phi}(\mathbf{x},\mathbf{d}\mid\{\bar{\mathbf{s}}_{n}\})\). Previous compositional NeRFs, such as the one in uORF [34], typically use an MLP to learn a continuous mapping \(g\) from sampling point \(\mathbf{x}\), viewing direction \(\mathbf{d}\), and slot features \(\{\mathbf{s}_{n}\}\), to the emitted densities \(\{\sigma_{n}\}\) and colors \(\{\mathbf{c}_{n}\}\) of different slots. 
While in our renderer \(N_{\phi}\), we only adopt the MLP to learn the object-centric projections \(g^{\prime}\): \((\mathbf{x},\,\mathbf{d},\,\{\bar{\mathbf{s}}_{n}\})\rightarrow\{\mathbf{c}_{ n}\}\) and query \(\{\sigma_{n}\}\) directly from the voxel grid \(\widehat{\mathcal{V}}_{\text{density}}^{t}\) at the corresponding timestamp. At timestamp \(t\), \(N_{\phi}\) takes as inputs \(\{\bar{\mathbf{s}}_{n}\}_{t}=\mathrm{mean}(\widehat{\mathbf{S}}_{t},\bar{ \mathbf{S}}_{e-1})\), where \(\bar{\mathbf{S}}_{e-1}\) is the average of \(\{\mathbf{S}_{t,e-1}\}_{t=1}^{T}\) across all timestamps in the previous episode. We use density-weighted mean to compose the predictions of \(\mathbf{c}_{n}\) and \(\sigma_{n}\) for different objects, such that: \[w_{n}=\sigma_{n}/\sum_{n=1}^{N}\sigma_{n},\quad\overline{\sigma}=\sum_{n=1}^{ N}w_{n}\sigma_{n},\quad\overline{\mathbf{c}}=\sum_{n=1}^{N}w_{n}\mathbf{c}_{n}, \tag{3}\] where \(\overline{\sigma}\) and \(\overline{\mathbf{c}}\) is the output density and the color of a sampling point. We estimate the color \(C(\mathbf{r})\) of a sampling ray with the quadrature rule [21]: \(\widehat{C}(\mathbf{r})=\sum_{i=1}^{P}T_{i}\left(1-\exp(-\bar{\sigma}_{i} \delta_{i})\right)\overline{\mathbf{c}}_{i}\), where \(T_{i}=\exp(-\sum_{j=1}^{i-1}\bar{\sigma}_{j}\delta_{j})\), \(P\) is the number of sampling points in a certain ray, and \(\delta_{i}\) is the distance between adjacent samples along the ray. Objectives.At a specific timestamp, we take the rendering loss \(\mathcal{L}_{\text{Render}}\) between the predicted and observed pixel colors, the background entropy loss \(\mathcal{L}_{\text{Ent}}\), and the per-point RGB loss \(\mathcal{L}_{\text{Point}}\) following DVGO [29] as basic objective terms. \(\mathcal{L}_{\text{Ent}}\) can be viewed as a regularization to encourage the renderer to concentrate on either foreground or background. To enhance dynamics learning in the warmup stage, we design a novel cycle loss between \(f_{\psi}\) and \(f_{\xi}^{\prime}\): \[\mathcal{L}_{\text{Render}}=\frac{1}{|\mathcal{R}|}\sum_{r\in \mathcal{R}}\left\|\widehat{C}(\mathbf{r})-C(\mathbf{r})\right\|_{2}^{2}, \mathcal{L}_{\text{Ent}}=\frac{1}{|\mathcal{R}|}\sum_{r\in\mathcal{R}}-\widehat {w}_{l}^{r}\log(\widehat{w}_{l}^{r})-(1-\widehat{w}_{l}^{r})\log(1-\widehat{ w}_{l}^{r}),\] \[\mathcal{L}_{\text{Point}}=\frac{1}{|\mathcal{R}|}\sum_{r\in \mathcal{R}}\left(\frac{1}{P_{r}}\sum_{i=0}^{P_{r}}\left\|\overline{\mathbf{c} }_{i}-C(\mathbf{r})\right\|_{2}^{2}\right),\ \mathcal{L}_{\text{Cyc}}=\frac{1}{|\mathcal{R}|}\sum_{r\in \mathcal{R}}\left(\frac{1}{P_{r}}\sum_{i=0}^{P_{r}}\left\|f_{\psi}(x_{i},t)+f_ {\xi}^{\prime}(x_{i}^{\prime},t)\right\|_{2}^{2}\right), \tag{4}\] where \(\mathcal{R}\) is the set of sampled rays in a batch, \(P_{r}\) is the number of sampling points along ray \(r\), \(x_{i}^{\prime}=x_{i}+f_{\psi}(x_{i},t),i\in[0,P_{r}]\), and \(\widehat{w}_{l}^{r}\) is the color contribution of the last sampling point along \(r\). It is obtained by \(\widehat{w}_{l}^{r}=T_{P_{r}}(1-\exp(-\sigma_{P_{r}}\delta_{P_{r}}))\). ### Training Procedure Stage 1: Warmup.Our approach includes two training stages: the warmup stage and the temporal dynamic grounding stage. The purpose of warmup is to provide prior 3D geometry and dynamics information to the next stage and thus reduce the difficulty of dynamic grounding. In this stage, we take \(T\) consecutive images \(\{\mathbf{I}_{t}^{v}\}_{t=1}^{T}\) uniformly collected by a monocular in \(1\) second. 
Each image is taken from a random viewpoint. In addition, we also take only a few multi-view images \(\{\mathbf{I}_{t_{0}}^{v}\}_{v=1}^{V}\) of the scene at the first timestamp. Generally, we employ a clustering algorithm to initialize \(\mathcal{V}_{\text{density}}^{t_{0}}\) with \(N\) channels based on the grid-level appearance (_e.g._: geometry, color) and dynamics information. The assumption is that voxels belonging to the same object should be clustered together, exhibiting similar motion and appearance features. Conversely, voxels corresponding to objects in different spatial locations should be separated and exhibit diverse features. Specifically, we first select valid voxels \(\{X_{k}\}_{k=1}^{K}\) by filtering out invalid locations with density values below a predefined threshold. This filtering step is based on the 3D voxel grids learned in this stage. Subsequently, we construct a feature graph \(G\) using these selected voxels. This feature graph incorporates information related to the geometry, color, and dynamics of the voxels. To obtain the dynamics information, we additionally train the \(f_{\xi}^{\prime}\) module to model the forward deformation field \(\Delta x_{0\to t}^{\prime}\), \(f_{\xi}^{\prime}\) is trained by \(\mathcal{L}_{\text{Cyc}}\) in Eq. 4. Next, we apply the connected components algorithm on the feature graph \(G\) to generate clusters. Preliminary experiments have demonstrated the effectiveness of this method in improving the final performance in the second training stage. The loss function in this stage is defined as \(\mathcal{L}_{\text{Warm}}=\sum_{t=1}^{T}\left(\mathcal{L}_{\text{Render}}+ \alpha_{p}\mathcal{L}_{\text{Point}}+\alpha_{e}\mathcal{L}_{\text{Ent}}+\alpha_ {c}\mathcal{L}_{\text{Cyc}}\right)\), where we adopt the empirical values of the hyperparameters from previous literature [18]. **Stage 2: Dynamic grounding.** In this stage, we load \(\mathcal{V}_{\text{density}}^{t_{0}}\) and \(\phi\) from the warmup stage. These pretrained parameters introduce valuable grid-level geometry and dynamics priors and facilitate the effective separation of spatial and temporal features in the current stage. The end-to-end optimization throughout the sequential data is beneficial as it enables the refinement of the initial voxel grids with the assistance of object-level representations. The loss function in this training stage is defined as \(\mathcal{L}_{\text{Dyn}}=\sum_{t=1}^{T}\left(\mathcal{L}_{\text{Render}}+ \alpha_{p}\mathcal{L}_{\text{Point}}+\alpha_{e}\mathcal{L}_{\text{Ent}}\right)\), where we finetune \((\mathcal{V}_{\text{density}}^{t_{0}},\psi)\) from the warmup stage, and train \((\theta,\omega,\phi)\) from the scratch. ## 4 Experiments ### Implementation Details We set the size of the voxel grid to \(110^{3}\), the assumed number of maximum objects to \(N=10\), and the dimension of slot features to \(D=64\). We use \(4\) hidden layers with \(64\) channels in the renderer, and use the Adam optimizer with a batch of \(1{,}024\) rays in the two training stages. The base learning rates are \(0.1\) for the voxel grids and \(1e^{-3}\) for all model parameters in the warmup stage and then adjusted to \(0.08\) and \(8e^{-4}\) in the second training stage. The two training stages last for \(50\)k and \(35\)k iterations respectively. The hyperparameters in the loss functions are set to \(\alpha_{p}=0.1\), \(\alpha_{e}=0.01\), \(\alpha_{w}=1.0\), \(\alpha_{c}=1.0\). All experiments run on an NVIDIA RTX3090 GPU and last for about \(3\) hours. 
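For convenience, the implementation details above can be gathered into a single configuration; the dictionary below is a hypothetical restatement (the field names are ours), with values copied from the text.

```python
# Hypothetical field names (ours); values restate the Implementation Details above.
dynavol_config = {
    "voxel_grid_resolution": (110, 110, 110),
    "max_num_objects_N": 10,
    "slot_feature_dim_D": 64,
    "renderer_mlp": {"hidden_layers": 4, "channels": 64},
    "optimizer": "Adam",
    "rays_per_batch": 1024,
    "learning_rates": {
        "warmup": {"voxel_grids": 0.1, "model_params": 1e-3},
        "dynamic_grounding": {"voxel_grids": 0.08, "model_params": 8e-4},
    },
    "iterations": {"warmup": 50_000, "dynamic_grounding": 35_000},
    "loss_weights": {"alpha_p": 0.1, "alpha_e": 0.01, "alpha_w": 1.0, "alpha_c": 1.0},
    "hardware_note": "single NVIDIA RTX 3090, about 3 hours total",
}
print(dynavol_config["learning_rates"]["warmup"])
```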
### Experimental Setup We evaluate DynaVol on both scene representation (via scene segmentation) and novel view synthesis in the following \(8\) scenes. We show its ability on the representative downstream task of dynamic scene editing. For each scene, we capture \(V\)-view (\(V=5\)) static images of the initial scene and a dynamic sequence with \(T=60\) timestamps which is rendered from viewpoints randomly sampled at different moments from the upper hemisphere as training data and randomly select another different view at each timestamp for the test, all in \(512\times 512\) pixels. **Dataset.** We build \(8\) synthetic dynamic scenes using Kubric [7] with different numbers of objects (in different colors and shapes), diverse motions with different initial velocities, different materials, and real-world shapes and textures (All dataset names can be referred from Table 1). **Metrics.** For quantitative comparison of the novel view synthesis problem, we report Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM) [31] for synthetic scenes. Additionally, to evaluate segmentation quality in a way that is compatible with 2D methods, we use Foreground Adjusted Rand Index (FG-ARI) as our metric, which measures clustering similarity according to the ground truth foreground objects mask where a random segmentation would score \(0\) and a perfect segmentation would score \(1\). **Compared methods.** We compare DynaVol with various benchmarks, including 3D scene modeling methods D-NeRF [24] and DeVRF [18], 2D image segmentation methods SAVi [15], SAM [16], and \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{3ObjFall} & \multicolumn{2}{c}{3ObjFand} & \multicolumn{2}{c}{3ObjMetal} & \multicolumn{2}{c}{3Fall+3Still} \\ Method & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) \\ \hline D-NeRF & 28.54 & 0.946 & 12.62 & 0.853 & 27.83 & 0.945 & 24.56 & 0.908 \\ D-NeRF+STc & 29.15 & 0.954 & 27.44 & 0.943 & 28.59 & **0.953** & 25.03 & 0.913 \\ DeVRF & 24.92 & 0.927 & 22.27 & 0.912 & 25.24 & 0.931 & 24.80 & 0.931 \\ DeVRF-Dyn & 18.81 & 0.799 & 18.43 & 0.799 & 17.24 & 0.769 & 17.78 & 0.765 \\ DynaVol & **32.11** & **0.969** & **30.70** & **0.964** & **29.31** & **0.953** & **28.96** & **0.945** \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{6ObjFall} & \multicolumn{2}{c}{8ObjFall} & \multicolumn{2}{c}{3ObjRealSimp} & \multicolumn{2}{c}{3ObjRealCompx} \\ Method & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) \\ \hline D-NeRF & 28.27 & 0.940 & 27.44 & 0.923 & 27.04 & 0.927 & 20.73 & 0.864 \\ D-NeRF+STc & 27.20 & 0.928 & 26.97 & 0.919 & 27.49 & 0.931 & 22.72 & 0.874 \\ DeVRF & 24.83 & 0.905 & 24.87 & 0.915 & 24.81 & 0.922 & 21.77 & 0.891 \\ DeVRF-Dyn & 17.35 & 0.738 & 16.19 & 0.711 & 18.64 & 0.717 & 17.40 & 0.778 \\ DynaVol & **29.98** & **0.950** & **29.78** & **0.945** & **30.13** & **0.952** & **27.25** & **0.918** \\ \hline \hline \end{tabular} \end{table} Table 1: Novel view synthesis results of our approach compared with D-NeRF [24] and DeVRF [18], as well as their variants for dynamic scenes (see text for details). We evaluate the results averaged over \(60\) novel views per timestamp. 3D object-centric methods uORF [34]. 
Since our method also uses static data in synthetic scenes, for a fair comparison, we implement the D-NeRF+Stc which is trained on the static image set and the dynamic sequence. Besides, since DeVRF is trained on the \(60\)-view static image set and 4-view dynamic sequence, we additionally trained DeVRF with the same data setting as ours, _i.e._, viewpoints are randomly sampled from the upper hemisphere. Both two variants of DeVRF are trained without the optical flow loss proposed in the paper. For SAVi and uORF, we use the models pretrained on MOVi-A [7] and CLEVR-567 [34] respectively, which are similar to our scenes. As for SAM, we employ their pretrained model, which is publicly available and open-sourced. We try to finetune them on our dataset, however, it does not improve the model performance. ### Novel View Synthesis We evaluate the performance of DynaVol on the novel view synthesis task with other two 3D benchmarks (D-NeRF, DeVRF) and their variants trained on the same data as ours (D-NeRF+Stc and DeVRF-Dyn). As shown in Table 1, DynaVol achieves the best performance in terms of PSNR, SSIM, and MSE in **all** datasets. Notably, our approach outperforms the second-best model by a large margin even in those difficult scenes (_i.e._, 6ObjFall, 3ObjRealSimp, and 3ObjRealCmpx) with a \(15.08\%\) increase in PSNR and a \(2.26\%\) increase in SSIM on average. There is no significant difference between D-NeRF and D-NeRF+Stc in most scenes except for 3ObjRand (D-NeRF fails to model 3ObjRand as shown in Figure 4), which illustrates the effectiveness of the use of the static image set. Besides, DeVRF-Dyn has a noticeable decline in performance compared to the standard version due to its heavy dependence on accurate initial scene understanding. Figure 4 demonstrates qualitative comparisons with other methods and it shows that DynaVol can capture the 3D appearance of different objects and the corresponding motion patterns at an arbitrary timestamp and render a competitive result in a novel view. In contrast, D-NeRF struggles to capture intricate motion patterns, as evidenced by its failure in modeling complex motion in the 3ObjRand \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{3ObjRand} & \multicolumn{3}{c}{6ObjFall} & \multicolumn{3}{c}{8ObjFall} & \multicolumn{3}{c}{3ObjMetal} \\ \cline{2-10} Method & PSNR\(\uparrow\) & FG-AR\(\uparrow\) & PSNR\(\uparrow\) & FG-AR\(\uparrow\) & PSNR\(\uparrow\) & FG-AR\(\uparrow\) & PSNR\(\uparrow\) & FG-AR\(\uparrow\) \\ \hline SAVi & – & 4.38 & – & 6.85 & – & 7.87 & – & 3.38 \\ Ours (Fix) & 31.69 & **84.68** & 33.11 & **93.42** & 31.47 & **91.22** & 30.30 & **94.91** \\ \hline uORF & 6.65 & 38.65 & 6.63 & 29.23 & 6.89 & 31.93 & 7.95 & 22.58 \\ SAM & – & 55.52 & – & 62.66 & – & 71.68 & – & 46.80 \\ Ours & **30.70** & **96.01** & **29.98** & **94.73** & **29.78** & **95.10** & **29.31** & **96.06** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons with existing approaches based on 2D/3D object-centric representations, _i.e._, SAVi [5], uORF [34], and SAM [16]. In particular, for uORF, we present novel view synthesis and object segmentation results. For SAVi, since it requires videos with fixed viewpoints, we generate an image sequence at a certain fixed camera position and present the result of SAVi. To compare with it, we evaluate DynaVol (Fix) with images that are also collected at this fixed camera view. Figure 3: Visualization of novel view synthesis and scene decomposition results. scene. 
Additionally, it performs poorly in accurately representing the appearance of different objects, particularly when dealing with complex textures, as observed in the 3ObjRealCmpx scene. On the other hand, DeVRF falls short in modeling object dynamics and fails to accurately infer their corresponding spatial locations, as demonstrated in both the 3ObjRand and 3ObjRealCmpx scenes. Figure 3(a) shows more specific novel view synthesis results on the 3ObjRand scene at timestamps \(t=6\), \(t=14\), \(t=37\), and \(t=52\). Considering the synthesis results of D-NeRF on this dataset in Figure 4, we use its variant as an alternative. It can be found that D-NeRF+Stc produces blurry results at all sampled timestamps and DeVRF suffers from severe deformation and position shift at \(t=37\) and \(t=52\), while DynaVol renders relatively clearer and more precise images. ### Scene Decomposition To get the 2D segmentation results, we assign the rays to different slots according to the contribution of each slot to the final color of the ray. In Table 2 we compare our method with the other three segmentation benchmarks. Since SAVi is a 2D segmentation method that only works on dynamic scenes with a fixed camera position, we evaluate its FG-ARI performance against DynaVol (labeled as DynaVol (FixCam)) on a fixed single-view dynamic sequence. As for uORF and SAM, they mainly focus on static scenes, so we process the dynamic sequence into \(T\) individual static scenes as their inputs and evaluate their average performance over the whole sequence. Results show that our method significantly outperforms all approaches, both in reconstruction quality and in segmentation results. It is worth mentioning that we use FG-ARI as the segmentation metric; the higher the FG-ARI, the better the segmentation quality and temporal consistency. Figure 3(b) shows the results of the qualitative comparison. DynaVol handles object occlusion well and ensures consistent object-slot correspondence both in 3D and over time across multiple views (_i.e._, each object always keeps the same slot color). SAVi performs suboptimally in this particular scene, whereas uORF and SAM exhibit inconsistency in both the temporal and spatial dimensions. This inconsistency manifests as the assignment of different slots to the same object at different timestamps or from different viewpoints. ### Dynamic Scene Editing The object-centric representations learned in DynaVol have practical applications in downstream tasks such as scene editing without requiring additional training. By directly modifying the volumetric representations or altering the deformation function or slot representations, DynaVol enables a range of editing tasks. For instance, in Figure 5(a), DynaVol removes the right hand that is pinching the toys. In Figure 5(b), it modifies the dynamics of the shoes from falling to rotating. Additionally, in Figure 5(c), it replaces the cylinder from the 3ObjFall scene with the book from the 3ObjRealCmpx scene. Moreover, object colors can be swapped by exchanging their corresponding slots, as demonstrated in Figure 5(d). This indicates that the model effectively binds the slot features to different objects and learns effective appearance information. For a complete visualization of edited sequences and additional editing tasks, please refer to our supplementary materials. Figure 4: Novel view synthesis for different dynamic scenes. For each scene, we randomly select a novel view at an arbitrary timestamp. 
### Analysis on Slot Convergence

We explore the convergence of the slot values during the training process. Specifically, we select the four slots (out of ten) that contribute most to image rendering in 3ObjRealCmpx, 6ObjFall, and 3ObjRand. As shown in Figure 6, we present the average value across all dimensions of each slot (\(\{\bar{s}_{n}^{e}\}\)) at different training episodes. The results demonstrate that each slot converges over time to a stable value, indicating that the features become progressively refined and successfully capture time-invariant information about the scene. Furthermore, the noticeable divergence among different slots indicates that DynaVol learns distinct, object-specific features.

## 5 Conclusion and Limitation

In this paper, we presented DynaVol, an inverse graphics method designed to understand 3D dynamic scenes using object-centric volumetric representations. Our approach demonstrates superior performance over existing techniques in unsupervised scene decomposition, in both synthetic and real-world scenarios. Moreover, it goes beyond its 2D counterparts by providing additional capabilities, such as novel view synthesis and dynamic scene editing, which greatly expand its application prospects. Admittedly, similar to the previous neural rendering technique [34] for static scene decomposition, a notable limitation of DynaVol is its dependence on multi-view images in the warmup stage. We are actively working towards resolving this limitation in our future research.

Figure 5: Showcases of dynamic scene editing in (a) the real-world scene [23], (b-c) the synthetic scenes with real-world object geometry and textures, and (d) the synthetic scene with severe occlusions between objects. The top row in each sub-figure shows the results of novel view synthesis and scene decomposition, and the bottom row shows the results of scene editing.

Figure 6: Visualization of the average value across all feature dimensions of \(\bar{\mathbf{S}}_{e}\) at different training episodes on _3ObjRealCmpx_, _6ObjFall_, and _3ObjRand_.

## Acknowledgments

This work was supported by NSFC (62250062, U19B2035, 62106144), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), the Fundamental Research Funds for the Central Universities, and the Shanghai Sailing Program (21Z510202133) from the Science and Technology Commission of Shanghai Municipality.
Learning object-centric representations of dynamic visual scenes in an unsupervised way is difficult. Whereas most previous methods learn to decompose 2D images, we propose DynaVol, a 3D scene generative model that unifies geometric structure and object-centric learning. The key point of this model is to perform object-centric volumetric decomposition, thereby capturing the 3D nature of the scene and estimating the probability distribution of objects at each spatial location. These volumetric features evolve over time within a differentiable volume rendering framework, with spatially varying dynamics, and serve as the basis for global representation learning via slot attention. The volumetric features and the global features are complementary to each other, and both are exploited by a compositional NeRF decoder for volume rendering. DynaVol achieves superior performance in unsupervised dynamic scene decomposition.
2307.16388
Poisson Pseudoalgebras
For any cocommutative Hopf algebra $H$ and a left $H$-module $V$, we construct an operad $\mathcal{P}^{cl}_H(V)$, which in the special case when $H$ is the algebra of polynomials in one variable reduces to the classical operad $\mathcal{P}^{cl}(V)$. Morphisms from the Lie operad to $\mathcal{P}^{cl}(V)$ correspond to Poisson vertex algebra structures on $V$. Likewise, our operad $\mathcal{P}^{cl}_H(V)$ gives rise to the notion of a Poisson pseudoalgebra; thus extending the notion of a Lie pseudoalgebra. As a byproduct of our construction, we introduce two cohomology theories for Poisson pseudoalgebras, generalizing the variational and classical cohomology of Poisson vertex algebras.
Bojko Bakalov, Ju Wang
2023-07-31T03:37:32
http://arxiv.org/abs/2307.16388v1
# Poisson pseudoalgebras ###### Abstract. For any cocommutative Hopf algebra \(H\) and a left \(H\)-module \(V\), we construct an operad \(\mathcal{P}^{cl}_{H}(V)\), which in the special case when \(H\) is the algebra of polynomials in one variable reduces to the classical operad \(\mathcal{P}^{cl}(V)\) of [1]. Morphisms from the Lie operad to \(\mathcal{P}^{cl}(V)\) correspond to Poisson vertex algebra structures on \(V\). Likewise, our operad \(\mathcal{P}^{cl}_{H}(V)\) gives rise to the notion of a Poisson pseudoalgebra; thus extending the notion of a Lie pseudoalgebra from [1]. As a byproduct of our construction, we introduce two cohomology theories for Poisson pseudoalgebras, generalizing the variational and classical cohomology of Poisson vertex algebras. Key words and phrases:Lie superalgebra; Lie pseudoalgebra; operad; Poisson vertex algebra 2010 Mathematics Subject Classification: Primary 17B63; Secondary 17B65, 17B66, 18M70 The first author was supported in part by a Simons Foundation grant 584741. ## 1. Introduction The notion of a Lie conformal algebra, introduced by Victor Kac [10], provides an axiomatic description of the operator product expansion of chiral fields in conformal field theory, and is closely related to infinite-dimensional Lie algebras such as affine Kac-Moody algebras and the Virasoro algebra. Recall that a _Lie conformal_ (super)_algebra_ is a vector superspace \(V\), endowed with an even endomorphism \(\partial\in\operatorname{End}(V)\) and a bilinear (over the ground field \(\mathbb{F}\)) \(\lambda\)-bracket \([\cdot\,_{\lambda}\,]\colon V\times V\to V[\lambda]\) satisfying the following three axioms for all \(a,b,c\in V\): * **Sesquilinearity:** \[[\partial a_{\lambda}b]=-\lambda[a_{\lambda}b]\,,\quad[a_{\lambda}\partial b] =(\lambda+\partial)[a_{\lambda}b]\,,\] (1.1) * **Skewsymmetry:** \[[a_{\lambda}b]=-(-1)^{p(a)p(b)}[b_{-\lambda-\partial}a]\,,\] (1.2) * **Jacobi identity:** \[[a_{\lambda}[b_{\mu}c]]-(-1)^{p(a)p(b)}[b_{\mu}[a_{\lambda}c]]=[[a_{\lambda}b ]_{\lambda+\mu}c]\,,\] (1.3) where \(p(a)\) denotes the parity of \(a\). The theory of conformal algebras, their representations and cohomology has been developed in a series of papers; see e.g. [1, 1, 2, 3, 4]. The definition of conformal algebra can be generalized naturally to the case when \(\lambda\) is a vector of dimension \(N\) (see [1]): one only needs to replace the single indeterminate \(\lambda\) with the vector \(\vec{\lambda}=(\lambda_{1},\ldots,\lambda_{N})\) and \(\partial\) with \(\vec{\partial}=(\partial_{1},\ldots,\partial_{N})\). This replacement is straightforward in (1.2) and (1.3), while sesquilinearity becomes coordinate-wise: \[[\partial_{i}a_{\vec{\lambda}}b] =-\lambda_{i}[a_{\vec{\lambda}}b]\,,\quad[a_{\vec{\lambda}} \partial_{i}b]=(\lambda_{i}+\partial_{i})[a_{\vec{\lambda}}b]\,,\quad 1\leq i \leq N\,. \tag{1.4}\] \[[a_{\vec{\lambda}}b] =-(-1)^{p(a)p(b)}[b_{-\vec{\lambda}-\vec{\partial}}a],\] (1.5) \[[a_{\vec{\lambda}}[b_{\vec{\mu}}c]] =[[a_{\vec{\lambda}}b]_{\vec{\lambda}+\vec{\mu}}c]+(-1)^{p(a)p(b) }[b_{\vec{\mu}}[a_{\vec{\lambda}}c]]. \tag{1.6}\] Taking this generalization a step further, one arrives at the notion of a _Lie_ (super)_pseudoalgebra_ over a cocommutative Hopf algebra \(H\), which was introduced in [1]. To be more concise, from now on, we will omit the prefix "super" where its meaning is clear from the context. 
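As a standard illustration of axioms (1.1)-(1.3), recalled here for the reader's convenience (it is a classical example and is not taken from the present paper), consider the Virasoro Lie conformal algebra \(\operatorname{Vir}=\mathbb{F}[\partial]L\oplus\mathbb{F}C\), where \(C\) is central with \(\partial C=0\) and
\[[L_{\lambda}L]=(\partial+2\lambda)L+\frac{\lambda^{3}}{12}\,C\,.\]
Sesquilinearity (1.1) determines the \(\lambda\)-bracket on all of \(\operatorname{Vir}\); skewsymmetry (1.2) holds because substituting \(-\lambda-\partial\) for \(\lambda\) yields \((-\partial-2\lambda)L-\frac{\lambda^{3}}{12}C=-[L_{\lambda}L]\) (using \(\partial C=0\)); and the Jacobi identity (1.3) is a short direct computation.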
Recall that a Lie \(H\)-pseudoalgebra \(L\) is a left \(H\)-module endowed with an even linear map \[\beta\colon L\otimes L\to(H\otimes H)\otimes_{H}L\,,\quad\beta(a\otimes b)=[a \ast b]\,, \tag{1.7}\] which is called a _pseudobracket_ and is subject to the following three axioms. For \(a,b,c\in L\), \(f,g\in H\), and \(\sigma=(12)\in S_{2}\), we have: * \(H\)**-bilinearity:** \[[fa\ast gb]=((f\otimes g)\otimes_{H}1)[a\ast b],\] (1.8) * **Skewsymmetry:** \[[b*a]=-(-1)^{p(a)p(b)}(\sigma\otimes_{H}\operatorname{id})[a*b],\] (1.9) * **Jacobi identity:** \[[a*[b*c]]-(-1)^{p(a)p(b)}((\sigma\otimes\operatorname{id})\otimes_{H} \operatorname{id})[b*[a*c]]=[[a*b]*c].\] (1.10) The details on how to compose pseudobrackets in (1.10) are given in Remark 4.2 below. Unless otherwise specified, we will work over a field \(\mathbb{F}\) of characteristic \(0\) and the Hopf algebra \(H\) will be cocommutative. With the above definition, a Lie conformal algebra is the same as a Lie pseudoalgebra with \(H=\mathbb{F}[\partial]\), while a Lie conformal algebra in dimension \(N\) is just a Lie pseudoalgebra with \(H=\mathbb{F}[\partial_{1},\dots,\partial_{N}]\) (see [1]). The simple Lie pseudoalgebras that are finitely generated as \(H\)-modules have been classified in [1]. They turn out to be closely related to the Lie-Cartan algebras of vector fields \(W_{N}\), \(S_{N}\), \(H_{N}\), and \(K_{N}\). The representation theory of simple Lie pseudoalgebras was developed in [1, 1, 2], while their cohomology is related to the Gelfand-Fuchs cohomology of the Lie-Cartan algebras of vector fields [10]. In the discussion above, Lie conformal algebras are generalized to Lie pseudoalgebras. On the other hand, one can consider the Poisson type algebras of Lie conformal algebras, which are called Poisson vertex algebras [11, 12]. Recall that a _Poisson vertex_ (super)_algebra_ is a (super)commutative differential algebra endowed with a Lie conformal (super)algebra \(\lambda\)-bracket satisfying the * **Leibniz rule:** \[[a_{\lambda}bc]=[a_{\lambda}b]c+(-1)^{p(b)p(c)}[a_{\lambda}c]b\,.\] (1.11) Poisson vertex algebras have been studied extensively in recent years; see, e.g., [1, Sect. 16] and [1, 1, 10] for applications of Poisson vertex algebras to the integrability of Hamiltonian partial differential equations, and [1, 2, 2, 3] for the cohomology theory of Poisson vertex algebras and vertex algebras. In this paper, we introduce the notion of a _Poisson pseudoalgebra_ as a Lie pseudoalgebra \(V\) equipped with a (super)commutative associative product \(V\otimes V\to V\), which is a homomorphism of \(H\)-modules and satisfies the following generalization of (1.11): * **Leibniz rule:** \[[a*bc]=[a*b]c+(-1)^{p(b)p(c)}[a*c]b,\] (1.12) where the product \([a*b]c\) is defined in (4.17) below. We remark that the skewsymmetry (1.9) in a Lie pseudoalgebra looks more symmetric than the skewsymmetry (1.2) or (1.5) in a Lie conformal algebra. Likewise, in a Poisson vertex algebra, there is a right version of the Leibniz rule, which looks more complicated than the left one (cf. [1, 1], (1.26)]). For Poisson pseudoalgebras, we derive a _right Leibniz rule_ very similar to (1.12): \[[ab*c]=a[b*c]+(-1)^{p(a)p(b)}b[a*c], \tag{1.13}\] where the product \(a[b*c]\) is defined in (4.19) below. This allowed us to find an _iterated Leibniz rule_ (4.23) for the pseudobracket of any two products (cf. [1, (1.34)]). 
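As a concrete illustration of the Leibniz rule (1.11) in the familiar case \(H=\mathbb{F}[\partial]\) (a standard example of a Poisson vertex algebra, recalled here for orientation and not taken from the present paper), take the algebra of differential polynomials \(V=\mathbb{F}[u,u^{\prime},u^{\prime\prime},\ldots]\) with \(\partial u^{(n)}=u^{(n+1)}\) and the \(\lambda\)-bracket determined by
\[[u_{\lambda}u]=(\partial+2\lambda)u+c\lambda^{3},\qquad c\in\mathbb{F}\,,\]
extended to all of \(V\) by sesquilinearity and the left and right Leibniz rules. This is the Virasoro-Magri Poisson vertex algebra; for instance, (1.11) gives \([u_{\lambda}u^{2}]=2u[u_{\lambda}u]=2u(\partial+2\lambda)u+2c\lambda^{3}u\).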
Using (4.23), we show that the symmetric algebra \(S(L)\) of any Lie pseudoalgebra \(L\) has a canonical structure of a Poisson pseudoalgebra, just as in the usual case of Lie superalgebras (corresponding to \(H=\mathbb{F}\)). Then, in Sect. 4.4, we present several examples of Poisson pseudoalgebras, which generalize examples from [1] and [1]. The paper [1] developed a unified approach to Lie superalgebras, Lie conformal algebras, Poisson vertex algebras, vertex algebras, and their cohomology theories. The main idea, due to Beilinson and Drinfeld [1], is to view all these different algebras as Lie algebras in certain pseudotensor categories. A _pseudotensor category_ is equipped with notions of \(n\)-linear maps for all \(n\geq 1\) that can be composed and have actions of the symmetric groups \(S_{n}\). This allows one to define the notions of a Lie algebra, module, and cohomology; see [1] where these ideas were developed for Lie pseudoalgebras. For any fixed object \(V\) in a pseudotensor category, the spaces \(\mathcal{P}(V)(n)\) of \(n\)-linear maps from \(V\) to \(V\) form an _operad_, a notion that originated in algebraic topology in the works of Boardman-Vogt and May in the early 1970's. Since then, operads have been used extensively in algebra and mathematical physics; see [10, 11] for modern reviews. In the language of operads, an object \(V\) in a pseudotensor category has a Lie algebra structure if and only if there is a morphism of operads from the so-called Lie operad \(\mathcal{L}ie\) to the operad \(\mathcal{P}(V)\). Another equivalent formulation, which is more suitable for introducing cohomology, goes back to [11] and was developed in [1, 1]. As a first step, to any operad \(\mathcal{P}\), one assigns a \(\mathbb{Z}\)-graded Lie superalgebra \[W(\mathcal{P})=\bigoplus_{n\geq-1}W_{n}(\mathcal{P})\,, \tag{1.14}\] where \(W_{n}(\mathcal{P})\) is the set of \(S_{n+1}\)-invariant elements in \(\mathcal{P}(n+1)\) (see [12, 1] and Sect. 2.3 below). Next, we replace \(\mathcal{P}(V)\) with the operad \(\mathcal{P}(\Pi V)\), where \(\Pi V\) is the same vector superspace as \(V\) but with reversed parity. Then an operad morphism \(\mathcal{L}ie\to\mathcal{P}(V)\) is equivalent to an odd element \(X\in W_{1}(\mathcal{P}(\Pi V))\) such that \([X,X]=0\). Moreover, \([X,X]=0\) implies \(\mathrm{ad}_{X}^{2}=0\); hence, \(W(\mathcal{P}(\Pi V))\) becomes a cohomology complex with the differential \(\mathrm{ad}_{X}\). Let us fix a left module \(V\) over a cocommutative Hopf algebra \(H\). The operad corresponding to Lie \(H\)-pseudoalgebra structures on \(V\) via the above construction is given by [1, Sect. 3]: \[\mathcal{P}_{H}^{*}(n)=\operatorname{Hom}_{H^{\otimes n}}(V^{\otimes n},H^{\otimes n }\otimes_{H}V)\,. \tag{1.15}\] In the case when \(H=\mathbb{F}[\partial]\), this operad is also known as \(\mathcal{C}hom\) or the _conformal Hom_ (see [1, 1]). Again for \(H=\mathbb{F}[\partial]\), the paper [1] constructs extensions \(\mathcal{P}^{cl}\) and \(\mathcal{P}^{ch}\) of \(\mathcal{C}hom\), called the _classical_ and _chiral operads_, which correspond to Poisson vertex algebras and vertex algebras, respectively. The classical operad \(\mathcal{P}^{cl}\) consists of certain linear maps labeled by acyclic graphs, so that for graphs with \(n\) vertices and no edges they reduce to maps from \(\mathcal{C}hom(n)\). 
Note that, when \(H=\mathbb{F}\), the operad \(\mathcal{P}_{H}^{*}\) is the \(\mathcal{H}om\) operad (also known as \(\mathcal{E}nd\)) defined by \(\mathcal{H}om(n)=\operatorname{Hom}(V^{\otimes n},V)\). In this case, the operad \(\mathcal{P}^{cl}\) has an analogue \(\mathcal{P}^{fn}\) called the _finite classical operad_[1, Sect. 10.5]. The operads \(\mathcal{H}om\) and \(\mathcal{P}^{fn}\) correspond to Lie superalgebras and Poisson superalgebras, respectively. In this paper, we generalize the classical operad \(\mathcal{P}^{cl}\) and its finite version \(\mathcal{P}^{fn}\) to the case of an arbitrary cocommutative Hopf algebra \(H\). At the same time, our _generalized classical operad_\(\mathcal{P}^{cl}_{H}\) contains \(\mathcal{P}^{*}_{H}\) when restricted to graphs with no edges. We prove that Poisson pseudoalgebra structures on an \(H\)-module \(V\) are in bijection with odd elements \(X\in W_{1}(\mathcal{P}^{cl}_{H}(\Pi V))\) such that \([X,X]=0\). This result motivates our definition of a Poisson pseudoalgebra. As an application of the construction of the generalized classical operad, we obtain a cohomology complex, called the _classical cohomology_ complex of \(V\). Recall that, in the Poisson vertex algebras case (\(H=\mathbb{F}[\partial]\)), there is another cohomology theory called the _variational cohomology_[1], which is related to the classical one [1, 1]. We also introduce the variational cohomology of Poisson pseudoalgebras, but the main theorem of [1], which asserts the isomorphism of the two cohomology theories under certain conditions, remains beyond the scope of the present paper. The paper is organized as follows. In Sect. 2, we review some preliminaries including the definition of an operad, its reformulation in terms of \(\circ\)-products, the universal Lie superalgebra associated to an operad, and the cooperad of graphs. In Sect. 3, we introduce the generalized classical operad \(\mathcal{P}^{cl}_{H}\) and prove that it is indeed an operad. In Sect. 4, we define the notion of a Poisson pseudoalgebra, establish its relation to the generalized classical operad, provide examples of Poisson pseudoalgebras, and introduce the classical and variational cohomology of Poisson pseudoalgebras. ## 2. Preliminaries on Operads In this section, we review the definition and some key properties of operads. In particular, we formulate the compositions in an operad in terms of \(\circ\)-products. We also recall the universal Lie superalgebra associated to an operad, introduced in [10] (see also [1]). For more detailed reviews on operads, we refer the readers to [12, 13]. ### Axioms defining an operad Recall from [1, Sect. 3.1] that a (linear, unital, symmetric) (super)_operad_ consists of a sequence \(\mathcal{P}(n)\) (\(n\geq 0\)) of vector superspaces, with parity denoted by \(p\), equipped with the following operations and subject to the following axioms. 
* **Compositions.** For \(n\geq 1\) and \(m_{1},\dots,m_{n}\geq 0\), we have parity preserving linear maps \[\mathcal{P}(n)\otimes\mathcal{P}(m_{1})\otimes\cdots\otimes \mathcal{P}(m_{n}) \rightarrow\mathcal{P}(M_{n}),\qquad M_{n}:=m_{1}+\cdots+m_{n},\] \[f\otimes g_{1}\otimes\cdots\otimes g_{n} \mapsto f(g_{1}\otimes\cdots\otimes g_{n})\,.\] (2.1) * **Associativity axiom.** The compositions satisfy: \[f((g_{1}\otimes\cdots\otimes g_{n})(h_{1}\otimes\cdots\otimes h_{M_{n}}))=(f(g _{1}\otimes\cdots\otimes g_{n}))(h_{1}\otimes\cdots\otimes h_{M_{n}}),\] for \(h_{j}\in\mathcal{P}(l_{j})\), \(j=1,\dots,M_{n}\), where in the left-hand side \[(g_{1}\otimes\cdots\otimes g_{n})(h_{1}\otimes\cdots\otimes h_{ M_{n}})\] \[= \pm\,g_{1}(h_{1}\otimes\cdots\otimes h_{M_{1}})\otimes\cdots \otimes g_{n}(h_{M_{n-1}+1}\otimes\cdots\otimes h_{M_{n}}),\] with the Koszul-Quillen sign given by \[\pm=(-1)^{\sum_{i<j,M_{i-1}<k\leq M_{i}}p(g_{j})p(h_{k})}\,,\] and \(M_{0}:=0\), \(M_{k}:=m_{1}+\cdots+m_{k}\). * **Unity axiom.** There exists a unit \(1\in\mathcal{P}(1)\) such that: \[f(1\otimes\cdots\otimes 1)=1(f)=f,\quad\text{for any $f\in\mathcal{P}(n)$}.\] (2.2) * **Permutation actions.** For each \(n\geq 1\), there is a right action of the symmetric group \(S_{n}\) on \(\mathcal{P}(n)\): \[\mathcal{P}(n)\times S_{n}\rightarrow\mathcal{P}(n)\,,\quad(f,\sigma)\mapsto f ^{\sigma}.\] * **Equivariance axiom.** For any \(\sigma\in S_{n}\) and \(\tau_{1}\in S_{m_{1}},\dots,\tau_{n}\in S_{m_{n}}\), we have: \[f^{\sigma}(g_{1}^{\tau_{1}}\otimes\cdots\otimes g_{n}^{\tau_{n}})=\big{(}f( \sigma(g_{1}\otimes\cdots\otimes g_{n}))\big{)}^{\sigma(\tau_{1},\dots,\tau_{ n})},\] (2.3) where \(\sigma(\tau_{1},\dots,\tau_{n})\in S_{M_{n}}\) is defined by [1, (2.12)]: \[\sigma(\tau_{1},\dots,\tau_{n})(M_{k-1}+i):=\tau_{k}(i)+\sum_{j=1}^{\sigma(k) -1}m_{\sigma^{-1}(j)}\,,\] (2.4) for \(1\leq k\leq n\), \(1\leq i\leq m_{k}\), and \[\sigma(g_{1}\otimes\cdots\otimes g_{n}):=\epsilon_{g}(\sigma)(g_{\sigma^{-1}(1 )}\otimes\cdots\otimes g_{\sigma^{-1}(n)}),\] (2.5) with the sign factor \(\epsilon_{g}(\sigma)\) given again by the Koszul-Quillen rule: \[\epsilon_{g}(\sigma):=\prod_{i<j\,:\,\sigma(i)>\sigma(j)}(-1)^{p(g_{i})p(g_{j} )}.\] (2.6) For simplicity, in the rest of the paper, we will use the term operad in place of superoperad. ### \(\circ\)-product formulation of the axioms For an operad \(\mathcal{P}\), one can define the \(\circ_{i}\)-product as the insertion at the \(i\)-th position in the composition. More precisely, for \(m\geq 0\), \(n\geq 1\) and all \(1\leq i\leq n\), we define the linear maps \[\circ_{i}\colon\mathcal{P}(n)\otimes\mathcal{P}(m)\to\mathcal{P}(n +m-1),\] \[Y\circ_{i}X=Y(1\otimes\cdots\otimes 1\otimes X\otimes 1 \otimes\cdots\otimes 1), \tag{2.7}\] where \(X\) is inserted at the \(i\)-th position in the tensor product above. Note that if all \(\circ_{i}\)-products are known, then one can recover the general compositions (2.1). Indeed, by the associativity axiom, we have: \[Y(X_{1}\otimes\cdots\otimes X_{n})=(\cdots((Y\circ_{1}X_{1})\circ_{M_{1}+1}X_ {2})\cdots)\circ_{M_{n-1}+1}X_{n}, \tag{2.8}\] for \(Y\in\mathcal{P}(n)\) and \(X_{k}\in\mathcal{P}(m_{k})\) (\(1\leq k\leq n\)). Hence, the axioms of an operad from the previous subsection can be formulated equivalently in terms of \(\circ\)-products (see, e.g., [10, 11]). 
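As a down-to-earth illustration of the insertion products (2.7), here is a minimal sketch (not part of the paper, and restricted to the purely even case so that no Koszul signs appear) that models the endomorphism operad \(\mathcal{H}om(n)=\operatorname{Hom}(V^{\otimes n},V)\) by ordinary multilinear functions and implements \(\circ_{i}\) as insertion into the \(i\)-th argument.

```python
# Minimal sketch (not from the paper): the insertion products (2.7) for the
# endomorphism operad Hom(V), with Hom(n) = {n-linear maps V^n -> V}, in the
# purely even case (no Koszul signs). Multilinear maps are plain Python callables.

def compose_i(Y, n, X, m, i):
    """Return Y o_i X in Hom(n + m - 1), for Y in Hom(n), X in Hom(m), 1 <= i <= n.

    The output of X, evaluated on m consecutive arguments, is fed into the
    i-th input of Y (1-based indexing, as in the paper)."""
    def Z(*args):
        assert len(args) == n + m - 1
        left = args[:i - 1]                  # arguments passed unchanged to Y
        inner = X(*args[i - 1:i - 1 + m])    # X consumes the next m arguments
        right = args[i - 1 + m:]             # remaining arguments also go to Y
        return Y(*left, inner, *right)
    return Z

# Toy example with V = R: Y(a, b) = a * b in Hom(2), X(a, b) = a + b in Hom(2).
Y = lambda a, b: a * b
X = lambda a, b: a + b
Z1 = compose_i(Y, 2, X, 2, 1)    # Z1(a, b, c) = (a + b) * c
Z2 = compose_i(Y, 2, X, 2, 2)    # Z2(a, b, c) = a * (b + c)
assert Z1(2, 3, 4) == 20 and Z2(2, 3, 4) == 14
```

On such toy examples, the associativity and unity identities for the \(\circ\)-products can be verified numerically.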
Explicitly, the associativity axiom is equivalent to the following identities: \[(Z\circ_{i}Y)\circ_{j}X=\begin{cases}(-1)^{p(Y)p(X)}(Z\circ_{j}X)\circ_{i+m-1 }Y\,,&\text{if }\ 1\leq j<i,\\ Z\circ_{i}(Y\circ_{j-i+1}X)\,,&\text{if }\ i\leq j<i+n,\\ (-1)^{p(Y)p(X)}(Z\circ_{j-n+1}X)\circ_{i}Y\,,&\text{if }\ i+n\leq j<n+l, \end{cases} \tag{2.9}\] for \(X\in\mathcal{P}(m)\), \(Y\in\mathcal{P}(n)\) and \(Z\in\mathcal{P}(l)\). The unity axiom is equivalent to: \[1\circ_{1}Y=Y\circ_{i}1=Y,\ \text{ for any }\ i=1,\ldots n, \tag{2.10}\] and the equivariance axiom is equivalent to: \[Y^{\sigma}\circ_{i}X^{\tau}=(Y\circ_{\sigma(i)}X)^{\sigma\circ_{i}\tau}. \tag{2.11}\] Here the \(\circ_{i}\)-products of permutations are defined similarly as above (cf. (2.4)): \[\sigma\circ_{i}\tau=\sigma(1,\ldots,1,\tau,1,\ldots,1)\in S_{m+n-1}, \tag{2.12}\] for \(\sigma\in S_{n}\) and \(\tau\in S_{m}\), where \(\tau\) is inserted at the \(i\)-th position. We remark that the third identity in (2.9) is equivalent to the first one after flipping the equality. Another observation is that the \(\circ_{1}\)-product is associative, which is obvious from (2.9). ### Universal Lie superalgebra associated to an operad With \(\circ\)-products from the last subsection, one can construct the universal Lie superalgebra associated to an operad [11, 1]. Let \(\mathcal{P}\) be an operad. For \(n\geq-1\), we define \(W_{n}\) to be the set of all permutation invariant elements in \(\mathcal{P}(n+1)\), that is \[W_{n}=\big{\{}f\in\mathcal{P}(n+1)\ \big{|}\ f^{\sigma}=f\ \,\forall\ \sigma\in S_{n+1}\big{\}},\] and form the \(\mathbb{Z}\)-graded vector superspace \[W(\mathcal{P})=\bigoplus_{n\geq-1}W_{n}\,.\] We define a product on \(W(\mathcal{P})\) as follows: \[f\square g=\sum_{\sigma\in S_{m+1,n}}\left(f\circ_{1}g\right)^{\sigma^{-1}}\in W_ {n+m},\qquad f\in W_{n},\ \ g\in W_{m},\] where \(S_{m,n}\) is the subset of \(S_{m+n}\) consisting of all \((m,n)\)-shuffles: \[S_{m,n}=\big{\{}\sigma\in S_{m+n}\ \big{|}\ \sigma(1)<\cdots<\sigma(m),\ \sigma(m+1)< \cdots<\sigma(m+n)\big{\}}.\] Here is a quick example, which will be useful later. For \(f,g\in W_{1}\), their \(\square\)-product is: \[f\square g =\sum_{\sigma\in S_{2,1}}\left(f\circ_{1}g\right)^{\sigma^{-1}}\] \[=f\circ_{1}g+\left(f\circ_{1}g\right)^{(23)^{-1}}+\left(f\circ_{ 1}g\right)^{(123)^{-1}}\] \[=f\circ_{1}g+\left(f\circ_{1}g\right)^{(23)}+\left(f\circ_{1}g \right)^{(132)}\] \[=f\circ_{1}g+f\circ_{2}g+\left(f\circ_{2}g\right)^{(12)}. \tag{2.13}\] The last equality follows from the equivariance axiom (2.11) and the identities \((\ref{eq:23})=(\ref{eq:23})(\ref{eq:23})\) and \((\ref{eq:23})=(\ref{eq:23})\circ_{2}(1)\). Another example is when \(f\in W_{-1}\); then \(S_{m+1,-1}=\emptyset\) and hence \(f\square g=0\) is trivial. Now we define a bracket on \(W(\mathcal{P})\) as the commutator bracket of the \(\square\)-product: \[[f,g]=f\square g-(-1)^{p(f)p(g)}g\square f. \tag{2.14}\] **Theorem 2.1** ([7], [6]).: _With the bracket given by (2.14), \(W(\mathcal{P})\) is a \(\mathbb{Z}\)-graded Lie superalgebra, called the universal Lie superalgebra associated to the operad \(\mathcal{P}\)._ ### The cooperad of \(n\)-graphs In this subsection, we review some properties of graphs from [6] that are necessary ingredients of the new operad we will introduce later. For \(n\geq 1\), an _\(n\)-graph_\(\Gamma\) is a set of _vertices_\(V(\Gamma)\) labeled from \(1\) to \(n\) and a collection of oriented _edges_\(E(\Gamma)\) between them. 
Let \(G(n)\) denote all graphs without tadpoles (edges that start and end at the same vertex), and \(G_{0}(n)\) be the set of all acyclic \(n\)-graphs, which are graphs in \(G(n)\) with no (unoriented) cycles, including multiple edges. By convention, we let \(G_{0}(0)=G(0)=\{\emptyset\}\) be the set consisting of a single element, the empty graph \(\emptyset\) with no vertices. **Example 2.2**.: For \(n=1\), the only graph in \(G_{0}(1)=G(1)=\{\bullet\}\) is a single vertex with no edges. When \(n=2\), there are \(3\) graphs \(\Gamma\) in \(G_{0}(2)\): the graph with two vertices and no edges, the graph with a single edge \(1\to 2\), and the graph with a single edge \(2\to 1\).
For an \(n\)-tuple of positive integers \((m_{1},\ldots,m_{n})\), and \(M_{i}\) defined as in (2.1), we have the following _cocomposition_ map [1]: \[\Delta^{m_{1}\ldots m_{n}}\colon G(M_{n}) \to G(n)\times G(m_{1})\times\cdots\times G(m_{n}),\\ \Gamma \mapsto\big{(}\Delta_{0}^{m_{1}\ldots m_{n}}(\Gamma),\Delta_{1}^{m_{1}\ldots m_{n}}(\Gamma),\ldots,\Delta_{n}^{m_{1}\ldots m_{n}}(\Gamma)\big{)}, \tag{2.15}\] where: 1. \(\Delta_{0}^{m_{1}\ldots m_{n}}(\Gamma)\) is the graph obtained by clasping the \(m_{k}\) vertices in group \(k\) into a single vertex, for each \(1\leq k\leq n\); 2. \(\Delta_{k}^{m_{1}\ldots m_{n}}(\Gamma)\), for \(1\leq k\leq n\), is the subgraph of \(\Gamma\) obtained by taking all \(m_{k}\) vertices in group \(k\) and all edges among them. **Example 2.3**.: Consider a graph \(\Gamma\in G(10)\) with the partition of its vertices given by \((m_{1},m_{2},m_{3},m_{4})=(2,4,1,3)\), where dashed circles in the original figure (omitted here) denote the groupings. After relabeling the vertices so that they start from \(1\), the resulting graphs \(\Delta_{0}^{2413}(\Gamma),\Delta_{1}^{2413}(\Gamma),\ldots,\Delta_{4}^{2413}(\Gamma)\) are likewise shown in figures omitted here. Another notion we will need is the external connectedness defined below. **Definition 2.4** ([6]).: For an \(n\)-tuple of positive integers \((m_{1},\dots,m_{n})\), let \(M_{i}\) be defined as in (2.1), and \(\Gamma\in G(M_{n})\). For a vertex \(k\in\{1,\dots,M_{n}\}\) and a group \(j\in\{1,\dots,n\}\), suppose that \(k\) is in group \(i\), i.e., \(M_{i-1}+1\leq k\leq M_{i}\). We say that \(j\) is _externally connected_ to \(k\) if there exists an unoriented path (without repeating edges) between \(j\) and \(i\) in \(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma)\) such that the edge connecting \(i\) is the image under \(\Delta\) of an edge in \(\Gamma\) that starts or ends at \(k\). The set of all \(j\)'s that are externally connected to \(k\) is denoted by \(\mathcal{E}(k)\), a subset of \(\{1,\dots,n\}\). Given a set of variables \(x_{1},\dots,x_{n}\), we define: \[X(k)=\sum_{j\in\mathcal{E}(k)}x_{j}\,. \tag{2.16}\] **Example 2.5**.: For the graph \(\Gamma\in G(10)\) in Example 2.3, we have: \[X(7)=X(9)=0,\quad X(k)=x_{1}+x_{2}+x_{4}\ \ \text{for all other}\ k\in\{1,\dots,10\}.\] Now we state the properties of the cocomposition map of \(n\)-graphs, which will be needed in Sect. 3 below. In fact, \((G(n),\Delta)\) defines a cooperad, which is a notion dual to that of an operad [10], but we only focus on its properties here.
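For readers who prefer an algorithmic phrasing, the cocomposition (2.15) is straightforward to express in code. The following is a minimal sketch (not part of the paper); it represents an \(n\)-graph by a list of oriented edges with 1-based vertex labels and the grouping by the tuple \((m_{1},\ldots,m_{n})\).

```python
# Minimal sketch (not from the paper) of the cocomposition (2.15) on n-graphs.
# A graph is a list of directed edges (a, b) with 1-based vertex labels.

def cocomposition(edges, ms):
    """Given a graph on M = sum(ms) vertices and a grouping (m_1, ..., m_n),
    return (Delta_0, [Delta_1, ..., Delta_n]) as edge lists with relabelled vertices."""
    M = [0]
    for m in ms:
        M.append(M[-1] + m)                        # partial sums M_0, ..., M_n
    def group(v):                                  # group index of vertex v
        return next(k for k in range(1, len(ms) + 1) if M[k - 1] < v <= M[k])
    # Edges between different groups survive in Delta_0 (edges inside a group
    # would become tadpoles after clasping, so they are dropped there).
    delta0 = [(group(a), group(b)) for (a, b) in edges if group(a) != group(b)]
    # Delta_k is the induced subgraph on group k, with vertices relabelled 1..m_k.
    deltas = []
    for k in range(1, len(ms) + 1):
        deltas.append([(a - M[k - 1], b - M[k - 1]) for (a, b) in edges
                       if group(a) == k and group(b) == k])
    return delta0, deltas

# Tiny example: the path 1 -> 2 -> 3 on 3 vertices, grouped as (2, 1).
d0, dk = cocomposition([(1, 2), (2, 3)], [2, 1])
assert d0 == [(1, 2)]          # Delta_0: clasping {1, 2} leaves one edge 1 -> 2
assert dk == [[(1, 2)], []]    # Delta_1 keeps the edge 1 -> 2; Delta_2 is a single vertex
```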
**Lemma 2.6** ([6]).: _For any positive integers \(m_{1},\dots,m_{n}\), there is a natural bijection_ \[\Delta\colon E(\Gamma)\to E(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma))\sqcup E(\Delta_{1}^{m_{1}\dots m_{n}}(\Gamma))\sqcup\dots\sqcup E(\Delta_{n}^{m_{1}\dots m_{n}}(\Gamma)).\] Proof.: This is true because an edge in \(\Gamma\) is either contained in one of the \(n\) subgraphs \(\Delta_{k}^{m_{1}\dots m_{n}}(\Gamma)\) (\(1\leq k\leq n\)), or it connects two different subgraphs. In the latter case, its image under \(\Delta\) is in \(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma)\). **Lemma 2.7** ([6]).: _For an oriented cycle \(C\subset E(\Gamma)\) in an \(n\)-graph \(\Gamma\), either \(\Delta(C)\subset E(\Delta_{k}^{m_{1}\dots m_{n}}(\Gamma))\) for some \(1\leq k\leq n\) or \(\Delta(C)\cap E(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma))\) is an oriented cycle in \(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma)\)._ Proof.: If \(C\) is contained in some \(\Delta_{k}^{m_{1}\dots m_{n}}(\Gamma)\), then \(\Delta(C)\subset E(\Delta_{k}^{m_{1}\dots m_{n}}(\Gamma))\). Otherwise, the set of edges from \(C\) connecting different subgraphs in \(\Gamma\), which is \(\Delta(C)\cap E(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma))\), gives an oriented cycle in \(\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma)\). For positive integers \(m_{1},\dots,m_{n}\), let \(M_{i}\) be defined as in (2.1), and let \(l_{1},\dots,l_{M_{n}}\) be \(M_{n}\) positive integers. For \(i\in\{0,\dots,M_{n}\}\) and \(j\in\{1,\dots,n\}\), let \[L_{i}=\sum_{k=1}^{i}l_{k},\qquad K_{j}=\sum_{m=M_{j-1}+1}^{M_{j}}l_{m}.\] **Proposition 2.8** (**Coassociativity** [1]).: _The cocomposition map \(\Delta\), given by (2.15), satisfies the following coassociativity conditions. For a graph \(\Gamma\in G_{0}(L_{M_{n}})\), we have:_ 1. \(\Delta_{0}^{m_{1}\dots m_{n}}\big{(}\Delta_{0}^{l_{1}\dots l_{M_{n}}}(\Gamma)\big{)}=\Delta_{0}^{K_{1}\dots K_{n}}(\Gamma)\in G(n);\) 2. \(\Delta_{i}^{m_{1}\dots m_{n}}\big{(}\Delta_{0}^{l_{1}\dots l_{M_{n}}}(\Gamma)\big{)}=\Delta_{0}^{l_{M_{i-1}+1}\dots l_{M_{i}}}\big{(}\Delta_{i}^{K_{1}\dots K_{n}}(\Gamma)\big{)}\in G(m_{i})\), for \(i=1,\dots,n;\) 3. \(\Delta_{M_{i-1}+j}^{l_{1}\dots l_{M_{n}}}(\Gamma)=\Delta_{j}^{l_{M_{i-1}+1}\dots l_{M_{i}}}\big{(}\Delta_{i}^{K_{1}\dots K_{n}}(\Gamma)\big{)}\in G(l_{M_{i-1}+j})\), for \(i=1,\dots,n\) and \(j=1,\dots,m_{i}\). The symmetric group \(S_{n}\) (\(n\geq 1\)) acts on the set of \(n\)-graphs \(G(n)\) as follows. For a permutation \(\sigma\in S_{n}\) and \(\Gamma\in G(n)\), the new graph \(\sigma\Gamma\) is defined by relabeling each vertex \(i\) as vertex \(\sigma(i)\) while not changing the edges. This action restricts to an action on \(G_{0}(n)\). **Example 2.9**.: Suppose that \(\sigma=(12)(354)\in S_{5}\) and \(\Gamma\in G_{0}(5)\); the graph \(\Gamma\), the relabeled graph \(\sigma\Gamma\), and the same graph redrawn with its vertices arranged from \(1\) to \(5\) are shown in the original figures (omitted here). The actions of the symmetric groups are compatible with the cocompositions, as expressed in the next statement.
**Proposition 2.10** (**Coequivariance**[1]).: _For any positive integers \(m_{1},\dots,m_{n}\)\((n\geq 1)\), permutations \(\sigma\in S_{n}\), \(\tau_{i}\in S_{m_{i}}\)\((i=1,\dots,n)\), and a graph \(\Gamma\in G_{0}(\sum_{i=1}^{n}m_{i})\), we have_ \[\Delta^{m_{\sigma^{-1}(1)}\dotsm_{\sigma^{-1}(n)}}\big{(}\sigma( \tau_{1},\dots,\tau_{n})\Gamma\big{)}\] \[= \big{(}\sigma\Delta_{0}^{m_{1}\dots m_{n}}(\Gamma),\;\tau_{\sigma ^{-1}(1)}\Delta_{\sigma^{-1}(1)}^{m_{1}\dots m_{n}}(\Gamma),\;\dots,\;\tau_{ \sigma^{-1}(n)}\Delta_{\sigma^{-1}(n)}^{m_{1}\dots m_{n}}(\Gamma)\big{)},\] _where \(\sigma(\tau_{1},\dots,\tau_{n})\) is given by (2.4)._ ## 3. Generalized Classical Operad \(\mathcal{P}_{H}^{cl}\) In this section, we present the main result of the paper, the construction of the generalized classical operad \(\mathcal{P}_{H}^{cl}(V)\) for any cocommutative Hopf algebra \(H\) and a left \(H\)-module \(V\). ### Notation for Hopf algebras First, we review several identities in Hopf algebras that will be useful later. Given a Hopf algebra \(H\), we denote by \(\Delta\) its coproduct, by \(S\) the antipode, and \(\epsilon\) the counit. We will extensively use a version of Sweedler's notation (cf. [11, 1]): \[\Delta(h) =h_{(1)}\otimes h_{(2)},\qquad h\in H, \tag{3.1}\] \[(S\otimes\mathrm{id})\Delta(h) =h_{(-1)}\otimes h_{(2)},\] (3.2) \[(\mathrm{id}\otimes S)\Delta(h) =h_{(1)}\otimes h_{(-2)}. \tag{3.3}\] In this notation, the coassociativity of \(\Delta\) is written as: \[(\Delta\otimes\mathrm{id})\Delta(h) =(h_{(1)})_{(1)}\otimes(h_{(1)})_{(2)}\otimes h_{(2)} \tag{3.4}\] \[=(\mathrm{id}\otimes\Delta)\Delta(h) =h_{(1)}\otimes(h_{(2)})_{(1)}\otimes(h_{(2)})_{(2)}\] \[=h_{(1)}\otimes h_{(2)}\otimes h_{(3)},\] and the axioms of the antipode and counit as: \[\epsilon(h) =h_{(-1)}h_{(2)}=h_{(1)}h_{(-2)}, \tag{3.5}\] \[h =\epsilon(h_{(1)})h_{(2)}=h_{(1)}\epsilon(h_{(2)}), \tag{3.6}\] A useful consequence from them are the identities \[h_{(-1)}h_{(2)}\otimes h_{(3)} =1\otimes h=h_{(1)}h_{(-2)}\otimes h_{(3)}, \tag{3.7}\] \[h_{(1)}\otimes h_{(-2)}h_{(3)} =h\otimes 1=h_{(1)}\otimes h_{(2)}h_{(-3)}. \tag{3.8}\] We define the _iterated coproducts_\(\Delta^{(n)}\colon H\to H^{\otimes(n+1)}\) inductively by \[\Delta^{(1)}:=\Delta,\quad\Delta^{(n)}:=(\Delta^{(n-1)}\otimes \mathrm{id})\Delta,\qquad n\geq 2, \tag{3.9}\] and write \[\Delta^{(n-1)}(h)=h_{(1)}\otimes h_{(2)}\otimes\dots\otimes h_{(n)}. \tag{3.10}\] It will be convenient to extend the definition of \(\Delta^{(n)}\) to \(n=0,-1\) by letting \[\Delta^{(0)}:=\operatorname{id},\qquad\Delta^{(-1)}:=\epsilon. \tag{3.11}\] Then (3.6) implies that (3.9) holds for \(n=0,1\) as well. From now on, \(H\) will be a _cocommutative_ Hopf algebra (which is purely even as a superspace), so that \[h_{(1)}\otimes h_{(2)}=h_{(2)}\otimes h_{(1)},\qquad h\in H. \tag{3.12}\] ### Definition of \(\mathcal{P}_{H}^{cl}\) Let \(V\) be a vector superspace with parity \(p\), which is also a left \(H\)-module. When \(V\) is fixed, we will write simply \(\mathcal{P}_{H}^{cl}\) instead of \(\mathcal{P}_{H}^{cl}(V)\). For a graph \(\Gamma\in G(n)\), we will denote by \(s(\Gamma)\) the number of connected components of \(\Gamma\). When \(\Gamma\) is fixed, we will often write \(s=s(\Gamma)\), and let \(\Gamma_{k}\) be the \(k\)-th connected component of \(\Gamma\), so that \(\Gamma=\Gamma_{1}\sqcup\cdots\sqcup\Gamma_{s}\). 
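To keep the Sweedler-type notation of Sect. 3.1 concrete, it may help to record how it looks in the motivating example \(H=\mathbb{F}[\partial]\), where \(\partial\) is primitive (a standard fact, not specific to this paper):
\[\Delta(\partial^{n})=\sum_{k=0}^{n}\binom{n}{k}\partial^{k}\otimes\partial^{n-k},\qquad S(\partial^{n})=(-\partial)^{n},\qquad\epsilon(\partial^{n})=\delta_{n,0}\,.\]
In particular, for \(h=\partial\) one has \(h_{(1)}\otimes h_{(2)}=\partial\otimes 1+1\otimes\partial\) and \(h_{(1)}\otimes h_{(-2)}=\partial\otimes 1-1\otimes\partial\); this is the algebraic mechanism behind the substitution \(\lambda\mapsto-\lambda-\partial\) when pseudobrackets over \(\mathbb{F}[\partial]\) are rewritten as the \(\lambda\)-brackets of the Introduction.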
An element \(Y\) of \(\mathcal{P}_{H}^{cl}(n)\) is defined to be a collection of linear maps \[Y^{\Gamma}\colon V^{\otimes n}\to H^{\otimes s(\Gamma)}\otimes_{H}V,\qquad \Gamma\in G(n), \tag{3.13}\] satisfying the following three axioms. * **First cycle condition:**\(Y^{\Gamma}=0\) for any graph \(\Gamma\) that contains a cycle, i.e., \(\Gamma\not\in G_{0}(n)\). * **Second cycle condition:** \[\sum_{e\in C}Y^{\Gamma\setminus e}=0,\] (3.14) for any oriented cycle \(C\subset E(\Gamma)\) of \(\Gamma\). * **Componentwise \(H\)-linearity.** For any \(h\in H\), \(v\in V^{\otimes n}\), and \(k=1,\ldots,s(\Gamma)\), we have: \[Y^{\Gamma}(h\cdot_{\Gamma_{k}}v)=(1\otimes\cdots\otimes 1\otimes\underbrace{h }_{k}\otimes 1\otimes\cdots\otimes 1)\,Y^{\Gamma}(v)\,,\] (3.15) where in the the right-hand side, \(h\) appears in the \(k\)-th position in the tensor product. In the left-hand side of (3.15), \(h\cdot_{\Gamma_{k}}v\) denotes the action of \(h\) on \(v=v_{1}\otimes\cdots\otimes v_{n}\) obtained by acting via the iterated coproduct on the \(v_{i}\)'s for which \(i\) belongs to the vertex set \(V(\Gamma_{k})\) of the \(k\)-th connected component \(\Gamma_{k}\) of \(\Gamma\). Explicitly, if \[V(\Gamma_{k})=\{i_{1k},\ldots,i_{n_{k}k}\}\subset V(\Gamma)=\{1,\ldots,n\}, \qquad n_{k}:=|V(\Gamma_{k})|\,, \tag{3.16}\] then \[h\cdot_{\Gamma_{k}}v:=(1\otimes\cdots\otimes h_{(1)}\otimes\cdots\otimes h_{ (n_{k})}\otimes\cdots\otimes 1)(v_{1}\otimes\cdots\otimes v_{n}),\] where \(h_{(1)},\ldots,h_{(n_{k})}\) are placed in positions \(i_{1k},\ldots,i_{n_{k}k}\), respectively. Note that, for \(n=0\), our convention is that \(\mathcal{P}_{H}^{cl}(0)\) consists of linear maps \(Y\colon H^{\otimes 0}\to H^{\otimes 0}\otimes_{H}V\), which can be identified with elements of \[H^{\otimes 0}\otimes_{H}V=\mathbb{F}\otimes_{H}V\cong V/H_{+}V\,, \tag{3.17}\] where \(H_{+}:=\operatorname{Ker}\epsilon\) is the augmentation ideal of \(H\). In order to define the structure of an operad on \(\mathcal{P}_{H}^{cl}\), we need to describe the composition, unity, and the action of the symmetric groups \(S_{n}\). * **Unity** is the identity map \(1:=\operatorname{id}_{V}\), which is viewed as an element \(1\in\mathcal{P}_{H}^{cl}(1)\) such that \(1^{\bullet}\colon V\to H\otimes_{H}V\cong V\) is the identity on \(V\) for the unique graph \(\bullet\in G(1)\). * **Permutation actions.** Let \(Y\in\mathcal{P}_{H}^{cl}(n)\) and \(\sigma\in S_{n}\). Recall that, for \(\Gamma\in G(n)\), the graph \(\sigma\Gamma\) is defined by relabeling the vertices by applying \(\sigma\) while not changing the edges (cf. Example 2.9). This induces a permutation \(\widetilde{\sigma}\in S_{s}\) of the connected components of \(\Gamma\) (explained in more detail below). Then we define \(Y^{\sigma}\in\mathcal{P}_{H}^{cl}(n)\) by \[(Y^{\sigma})^{\Gamma}(v):=(\widetilde{\sigma}\otimes_{H}1)\big{(}Y^{\sigma \Gamma}(\sigma v)\big{)},\qquad v\in V^{\otimes n}.\] (3.18) In the right-hand side of (3.18), we are using the action of \(S_{n}\) on \(V^{\otimes n}\) defined by (cf. (2.5), (2.6)): \[\sigma(v_{1}\otimes\cdots\otimes v_{n}) :=\epsilon_{v}(\sigma)v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{ \sigma^{-1}(n)}, \tag{3.19}\] \[\epsilon_{v}(\sigma) :=\prod_{i<j\,:\sigma(i)>\sigma(j)}(-1)^{p(v_{i})p(v_{j})}. 
\tag{3.20}\] This is consistent with assigning \(v_{i}\) to vertex \(i\) of \(\Gamma\) in \(Y^{\Gamma}(v_{1}\otimes\cdots\otimes v_{n})\), because the \(i\)-th vertex of \(\sigma\Gamma\) is vertex \(\sigma^{-1}(i)\) in \(\Gamma\) and it corresponds to the \(i\)-th factor \(v_{\sigma^{-1}(i)}\) of \(\sigma(v_{1}\otimes\cdots\otimes v_{n})\). Similarly, in (3.18), we also have the action of \(S_{s}\) on \(H^{\otimes s}\) given by: \[\widetilde{\sigma}(h_{1}\otimes\cdots\otimes h_{s}):=h_{\widetilde{\sigma}^{- 1}(1)}\otimes\cdots\otimes h_{\widetilde{\sigma}^{-1}(s)}. \tag{3.21}\] Now let us explain the definition of \(\widetilde{\sigma}\). We fix the labeling of the connected components \(\Gamma_{1},\ldots,\Gamma_{s}\) of \(\Gamma\), where \(s=s(\Gamma)\), so that the lowest-labeled vertices are put in increasing order. Explicitly, for \(V(\Gamma_{k})\) given by (3.16) with \(i_{1k}<\cdots<i_{n_{k}k}\), we assume that \(i_{11}<i_{12}<\cdots<i_{1s}\). Following the same convention, we label the connected components of \(\sigma\Gamma\) as \((\sigma\Gamma)_{1},\ldots,(\sigma\Gamma)_{s}\). Notice that these are obtained by applying \(\sigma\) to the connected components of \(\Gamma\), but possibly in different order. This defines a permutation \(\widetilde{\sigma}\in S_{s}\) so that \[\sigma(\Gamma_{\widetilde{\sigma}(k)})=(\sigma\Gamma)_{k}\,,\qquad k=1,\ldots, s=s(\Gamma)\,. \tag{3.22}\] This is consistent with (3.21) and (3.15), as the \(k\)-th connected component of \(\Gamma\) corresponds to the \(k\)-th tensor factor of \(H^{\otimes s}\) in the image of \(Y^{\Gamma}\). **Example 3.1**.: Consider the graph \(\Gamma\in G(5)\) below: Its connected components are \(\Gamma_{1},\Gamma_{2},\Gamma_{3}\) with vertex sets \[V(\Gamma_{1})=\{1,3\}\,,\quad V(\Gamma_{2})=\{2,4\}\,,\quad V(\Gamma_{3})=\{5\}\,.\] For the permutation \(\sigma=(145)(23)\in S_{5}\), the graph \(\sigma\Gamma\) is The connected components of \(\sigma\Gamma\) have vertex sets \[V((\sigma\Gamma)_{1})=\{1\}\,,\quad V((\sigma\Gamma)_{2})=\{2,4\}\,,\quad V(( \sigma\Gamma)_{3})=\{3,5\}\,.\] On the other hand, applying \(\sigma\) to \(\Gamma_{1},\Gamma_{2},\Gamma_{3}\), we obtain the components of \(\sigma\Gamma\) with vertex sets \[V(\sigma(\Gamma_{1}))=\{2,4\}\,,\quad V(\sigma(\Gamma_{2}))=\{3,5\}\,,\quad V (\sigma(\Gamma_{3}))=\{1\}\,.\] Therefore, due to (3.22), the permutation \(\widetilde{\sigma}\in S_{3}\) is equal to (132). **Lemma 3.2**.: _For every \(Y\in\mathcal{P}_{H}^{cl}(n)\) and \(\sigma\in S_{n}\), we have \(Y^{\sigma}\in\mathcal{P}_{H}^{cl}(n)\). Moreover, \((Y^{\sigma_{1}})^{\sigma_{2}}=Y^{\sigma_{1}\sigma_{2}}\) for \(\sigma_{1},\sigma_{2}\in S_{n}\)._ Proof.: It is clear that \(Y^{\sigma}\) satisfies the two cycle conditions, because \(Y\) does and \(\sigma(C)\) is a cycle in \(\sigma\Gamma\) for every cycle \(C\subset E(\Gamma)\). The componentwise \(H\)-linearity of \(Y^{\sigma}\) follows from that of \(Y\) and the discussion above Example 3.1. This proves that \(Y^{\sigma}\in\mathcal{P}_{H}^{cl}(n)\). Next, we note that for any fixed \(\Gamma\), (3.22) implies that the map \(\sigma\mapsto\widetilde{\sigma}\) is a group homomorphism \(S_{n}\to S_{s}\). Then \(((Y^{\sigma_{1}})^{\sigma_{2}})^{\Gamma}(v)=(Y^{\sigma_{1}\sigma_{2}})^{ \Gamma}(v)\) follows from (3.18), using that \(\sigma_{1}(\sigma_{2}\Gamma)=(\sigma_{1}\sigma_{2})\Gamma\) and \(\sigma_{1}(\sigma_{2}v)=(\sigma_{1}\sigma_{2})v\). * **Compositions** in the operad \(\mathcal{P}_{H}^{cl}\) will be defined in terms of \(\circ\)-products (cf. Sect. 2.2). 
We start with the \(\circ_{1}\)-product. Let \(X\in\mathcal{P}_{H}^{cl}(m)\), \(Y\in\mathcal{P}_{H}^{cl}(n)\), where \(m,n\geq 1\). Given a graph \(\Gamma\in G(m+n-1)\), suppose that \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) has \(s\) connected components, and \(\Delta_{0}^{m1\ldots 1}(\Gamma)\) has \(t\) connected components and \(n\) vertices. We can write \[X^{\Delta_{1}^{m1\ldots 1}(\Gamma)}(v) =\sum_{i}(f_{i1}\otimes\cdots\otimes f_{is})\otimes_{H}x_{i}(v), \quad v\in V^{\otimes m}, \tag{3.23}\] \[Y^{\Delta_{0}^{m1\ldots 1}(\Gamma)}(w) =\sum_{j}(g_{j1}\otimes\cdots\otimes g_{jt})\otimes_{H}y_{j}(w), \quad w\in V^{\otimes n}, \tag{3.24}\] for some linear maps \(x_{i}\colon V^{\otimes m}\to V\) and \(y_{j}\colon V^{\otimes n}\to V\), and elements \(f_{ik},g_{jl}\in H\). Then for \(v\in V^{\otimes m}\), \(u\in V^{\otimes(n-1)}\), we define \[(Y\circ_{1}X)^{\Gamma}(v\otimes u)\] \[:= \sum_{i,j}\bigl{(}(f_{i1(1)}\otimes\cdots\otimes f_{is(1)} \otimes 1\otimes\cdots\otimes 1)(\Delta^{(s-1)}(g_{j1})\otimes g_{j2}\otimes \cdots\otimes g_{jt})\bigr{)}\] \[\otimes_{H}y_{j}\bigl{(}x_{i}(v)\otimes(f_{i1(-2)}\otimes\cdots \otimes f_{is(-2)})\cdot u\bigr{)}\] \[= \sum_{i,j}(f_{i1(1)}g_{j1(1)}\otimes f_{i2(1)}g_{j1(2)}\otimes \cdots\otimes f_{is(1)}g_{j1(s)}\otimes g_{j2}\otimes g_{j3}\otimes\cdots \otimes g_{jt})\] \[\otimes_{H}y_{j}\bigl{(}x_{i}(v)\otimes(f_{i1(-2)}\otimes\cdots \otimes f_{is(-2)})\cdot u\bigr{)}. \tag{3.25}\] Let us explain the notation used in (3.25), writing explicitly \[v=v_{1}\otimes\cdots\otimes v_{m}\,,\quad u=v_{m+1}\otimes\cdots\otimes v_{m+ n-1}\,. \tag{3.26}\] Recall that the iterated coproduct \(\Delta^{(s-1)}(g_{j1})=g_{j1(1)}\otimes\cdots\otimes g_{j1(s)}\) is given by (3.9). In the right-hand side of (3.25), the action \((f_{i1(-2)}\otimes\cdots\otimes f_{is(-2)})\cdot u\) is defined by letting \(f_{ik(-2)}\) act via the iterated coproduct on the vectors \(v_{j}\) with \(m+1\leq j\leq m+n-1\) such that vertex \(j\) is externally connected to any vertex of the \(k\)-th connected component of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) (\(1\leq k\leq s\)). Finally, recall that \(f_{ik(1)}\otimes f_{ik(-2)}\) is given by (3.3). In the special case when no vertex \(j\in\{m+1,\ldots,m+n-1\}\) is externally connected to any vertex in the \(k\)-th connected component of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\), we get \(f_{ik(1)}\otimes\epsilon(f_{ik(-2)})=f_{ik}\otimes 1\) (cf. (3.6)). To illustrate the formula, we present the following example. **Example 3.3**.: Let \(X\in\mathcal{P}^{cl}_{H}(3)\), \(Y\in\mathcal{P}^{cl}_{H}(4)\), and consider the graph \(\Gamma\in G(6)\): As defined in Sect. 2.4, we have: \[\Gamma_{1}:=\Delta_{1}^{3111}(\Gamma)=\] \[\Gamma_{0}:=\Delta_{0}^{3111}(\Gamma)=\] The number of connected components of \(\Gamma_{1}\) and \(\Gamma_{0}\) are both \(2\). In addition, the vertex that is externally connected to (any vertex of) the first connected component of \(\Gamma_{1}\) is \(v_{4}\), and \(v_{5}\) for the second. 
Let us write \[X^{\Gamma_{1}}(v_{1}\otimes v_{2}\otimes v_{3}) =\sum_{i}(f_{i1}\otimes f_{i2})\otimes_{H}x_{i}(v_{1}\otimes v_{2 }\otimes v_{3}),\] \[Y^{\Gamma_{0}}(w_{1}\otimes\cdots\otimes w_{4}) =\sum_{j}(g_{j1}\otimes g_{j2})\otimes_{H}y_{j}(w_{1}\otimes \cdots\otimes w_{4}).\] Then (3.25) becomes: \[(Y\circ_{1}X)^{\Gamma}(v_{1}\otimes\cdots\otimes v_{6})=\sum_{i, j}(f_{i1(1)}g_{j1(1)}\otimes f_{i2(1)}g_{j1(2)}\otimes g_{j2})\] \[\otimes_{H}y_{j}\big{(}x_{i}(v_{1}\otimes v_{2}\otimes v_{3}) \otimes f_{i1(-2)}v_{4}\otimes f_{i2(-2)}v_{5}\otimes v_{6}\big{)}.\] _Remark 3.4_.: Eq. (3.25) can be extended to the case when \(X\in\mathcal{P}^{cl}_{H}(0)\), for which \(X\) can be viewed as a vector in \(V\) (see (3.17)). If we define \(\Delta^{(-1)}=\epsilon\) to be the counit map, then for \(X\in\mathcal{P}^{cl}_{H}(0)\), \(Y\in\mathcal{P}^{cl}_{H}(n)\), and \(\Gamma\in G(n-1)\), Eq. (3.25) becomes: \[(Y\circ_{1}X)^{\Gamma}(u)=\sum_{j}\epsilon(g_{j1})(g_{j2}\otimes\cdots\otimes g _{jt})\otimes_{H}y_{j}(X\otimes u), \tag{3.27}\] for \(u\in V^{\otimes(n-1)}\). _Remark 3.5_.: In the case when \(H=\mathbb{F}[\partial]\), our \(\mathcal{P}^{cl}_{\mathbb{F}[\partial]}\) coincides with the _classical operad_\(P^{\mathrm{cl}}\) introduced in [1, Sect. 10.2]. When \(H=\mathbb{F}\), \(\mathcal{P}^{cl}_{\mathbb{F}}\) is the _finite_ classical operad \(P^{\mathrm{fn}}\) from [1, Sect. 10.5]. _Remark 3.6_.: We can realize the operad \(\mathcal{P}^{*}_{H}\), defined in (1.15) and [1, Sect. 3], as a suboperad of \(\mathcal{P}^{cl}_{H}\) by considering maps \(Y\in\mathcal{P}^{cl}_{H}(n)\) such that \(Y^{\Gamma}=0\) for any \(n\)-graph \(\Gamma\) with at least one edge. To show that (3.25) is well defined, we have the following lemmas. **Lemma 3.7**.: _Suppose that \(\Gamma\in G_{0}(m+n-1)\) and \(\Delta_{0}^{m1\ldots 1}(\Gamma)\in G_{0}(n)\). If there are \(s\) connected components in \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) and \(t\) connected components in \(\Delta_{0}^{m1\ldots 1}(\Gamma)\), then there are \(s+t-1\) connected components in \(\Gamma\)._ Proof.: Recall that \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) is the subgraph of \(\Gamma\) consisting of the first \(m\) vertices and all edges among them, and \(\widetilde{\Gamma}:=\Delta_{0}^{m1\ldots 1}(\Gamma)\) is obtained from \(\Gamma\) by clsaping the first \(m\) vertices into one vertex labeled \(1\). Denote the connected components of \(\widetilde{\Gamma}\) as \(\widetilde{\Gamma}_{1},\ldots,\widetilde{\Gamma}_{t}\). Note that \(\widetilde{\Gamma}_{1}\) is the image under \(\Delta_{0}^{m1\ldots 1}\) of all connected components of \(\Gamma\) that are connected to one of the first \(m\) vertices of \(\Gamma\). We claim that \(\widetilde{\Gamma}_{1}\) is the image of exactly \(s\) connected components of \(\Gamma\). Indeed, it is at most \(s\), because each vertex of \(\widetilde{\Gamma}_{1}\) is connected to some connected component of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\), and there are \(s\) of them. If it is strictly less than \(s\), then there will be a path in \(\Gamma\) from one of the \(s\) connected components of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) to another, and this path will have to first traverse outside \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) and then back to another connected component of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\). This will produce a cycle in \(\widetilde{\Gamma}\), which is a contradiction. 
Finally, note that the other \(t-1\) connected components of \(\widetilde{\Gamma}\) are not connected to the first \(m\) vertices of \(\Gamma\); hence they are the same in \(\Gamma\) and \(\widetilde{\Gamma}\). Altogether, we obtain that \(\Gamma\) has \(s+t-1\) connected components. **Lemma 3.8**.: _In (3.25), the actions of \(f_{ik(-2)}\) and \(f_{il(-2)}\) are disjoint for \(k\neq l\) when \(\Delta_{0}^{m1\ldots 1}(\Gamma)\in G_{0}(n)\)._ Proof.: Note that \(1\leq k,l\leq s\leq m\), hence \(k\) and \(l\) are identified as the same vertex in \(\Delta_{0}^{m1\ldots 1}(\Gamma)\). If there exits a vertex \(j\in\{m+1,\ldots,m+n-1\}\) such that both \(f_{ik(-2)}\) and \(f_{il(-2)}\) act on \(v_{j}\), then \(j\) is externally connected to both \(k\) and \(l\). This gives a cycle in \(\Delta_{0}^{m1\ldots 1}(\Gamma)\), which contradicts with \(\Delta_{0}^{m1\ldots 1}(\Gamma)\in G_{0}(n)\). Lemma 3.8 indicates that the order of the actions of \(f_{ik(-2)}\)'s does not matter in (3.25). **Proposition 3.9**.: _Eq. (3.25) defines an element \(Y\circ_{1}X\in\mathcal{P}_{H}^{cl}(m+n-1)\)._ Proof.: We have to check that \(Y\circ_{1}X\) satisfies the two cycle conditions and the componentwise \(H\)-linearity. First, suppose that \(\Gamma\in G(m+n-1)\) has a cycle \(C\). If \(C\subset\Delta_{1}^{m1\ldots 1}(\Gamma)\), then \(X^{\Delta_{1}^{m1\ldots 1}(\Gamma)}=0\) since \(X\in\mathcal{P}_{H}^{cl}(m)\); hence \((Y\circ_{1}X)^{\Gamma}=0\) by (3.23), (3.25). Otherwise, \(C\) corresponds to a cycle \(C^{\prime}\subset\Delta_{0}^{m1\ldots 1}(\Gamma)\) (see Lemma 2.7 and ignore the orientation). This makes \(Y^{\Delta_{0}^{m1\ldots 1}(\Gamma)}=0\) since \(Y\in\mathcal{P}_{H}^{cl}(n)\); thus \((Y\circ_{1}X)^{\Gamma}=0\) by (3.24), (3.25). This shows that \(Y\circ_{1}X\) satisfies the first cycle condition. For the second cycle condition, consider a cycle \(C\subset\Delta_{1}^{m1\ldots 1}(\Gamma)\). For any edge \(e\in C\), let \(\Gamma\backslash e\) be the graph obtained by removing \(e\) from the graph \(\Gamma\). We want to evaluate \((Y\circ_{1}X)^{\Gamma\backslash e}\). Since \(\Delta_{0}^{m1\ldots 1}(\Gamma\backslash e)\) is the same for all \(e\), we can use the same expression (3.24) for all \(e\). On the other hand, \(X\) will be evaluated on graphs \(\Delta_{1}^{m1\ldots 1}(\Gamma\backslash e)=\Delta_{1}^{m1\ldots 1}(\Gamma)\backslash e\), and removing an edge \(e\in C\) from \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) does not change the number of connected components of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\). Hence, in formula (3.25), only the part involving \(f_{ik}\) and \(x_{i}\) may vary with different \(e\). However, since the transformation \((\operatorname{id}\otimes S)\Delta\) is linear on \(f_{ik}\), we can evaluate it on all \(X^{\Delta_{1}^{m1\ldots 1}(\Gamma)\backslash e}_{1}\) together before composing with \(Y^{\Delta_{0}^{m1\ldots 1}(\Gamma)}\). Since \(\sum_{e\in C}X^{\Delta_{1}^{m1\ldots 1}(\Gamma)\backslash e}=0\), we get \(\sum_{e\in C}(Y\circ_{1}X)^{\Gamma\backslash e}=0\). If the cycle \(C\) is not contained in \(\Delta_{1}^{m1\ldots 1}(\Gamma)\), then \(\Delta(C)\) is still a cycle in \(\Delta_{0}^{m1\ldots 1}(\Gamma\backslash e)\) for any edge \(e\in E(\Delta_{1}^{m1\ldots 1}(\Gamma))\). For such \(e\), \((Y\circ_{1}X)^{\Gamma\backslash e}=0\) since \(Y^{\Delta_{0}^{m1\ldots 1}(\Gamma\backslash e)}=0\). 
Thus, \[\sum_{e\in C}(Y\circ_{1}X)^{\Gamma\backslash e}=\sum_{e\in C\cap\Delta_{0}^{m1 \ldots 1}(\Gamma)}(Y\circ_{1}X)^{\Gamma\backslash e}.\] Note that \(\Delta_{1}^{m1\ldots 1}(\Gamma\backslash e)\) is the same for all \(e\in C\cap\Delta_{0}^{m1\ldots 1}(\Gamma)\), which implies that in (3.25) only the \(g_{jk}\) and \(y_{j}\) parts may vary with different \(e\in C\cap\Delta_{0}^{m1\ldots 1}(\Gamma\backslash e)\). \(\Delta_{0}^{m1\ldots 1}(\Gamma)\). The vertices externally connected to any connected component of \(\Delta_{1}^{m1\ldots 1}(\Gamma\backslash e)\) stay the same for all \(e\in C\cap\Delta_{0}^{m1\ldots 1}(\Gamma)\). As \(\Delta^{(s-1)}\) is linear and \(\sum_{e\in C\cap\Delta_{0}^{m1\ldots 1}(\Gamma)}Y^{\Delta_{0}^{m1\ldots 1}( \Gamma)\backslash e}=0\), we obtain \(\sum_{e\in C\cap\Delta_{0}^{m1\ldots 1}(\Gamma)}(Y\circ_{1}X)^{\Gamma \backslash e}=0\). This shows that the second cycle condition is satisfied. It is left to check the componentwise \(H\)-linearity of \(Y\circ_{1}X\). Pick the \(k\)-th connected component \(K\) of \(\Gamma\in G(m+n-1)\). If \(K\) is contained in the subgraph formed by vertices \(\{m+1,\ldots,m+n-1\}\), then for any \(h\in H\), \(v\in V^{\otimes m}\) and \(u\in V^{\otimes(n-1)}\), we have \[(Y\circ_{1}X)^{\Gamma}(h\cdot_{K}(v\otimes u))=(Y\circ_{1}X)^{\Gamma}(v \otimes h\cdot_{K}u).\] Note that \(K\) remains a connected component \(\widetilde{K}\) in \(\Delta_{0}^{m1\ldots 1}(\Gamma)\); let it be the \(k_{0}\)-th. Then \(k_{0}>1\) and \(k=s-1+k_{0}\) where \(s\) is the number of connected components of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) (see the proof of Lemma 3.7). From the componentwise \(H\)-linearity of \(Y\), we get for \(w_{1}\in V\): \[Y^{\Delta_{0}^{m1\ldots 1}(\Gamma)}(w_{1}\otimes h\cdot_{ \widetilde{K}}u)\] \[\quad=(1\otimes\cdots\otimes\underbrace{h}_{k_{0}}\otimes\cdots \otimes 1)Y^{\Delta_{0}^{m1\ldots 1}(\Gamma)}(w_{1}\otimes u)\] \[\quad=\sum_{j}(g_{j1}\otimes\cdots\otimes hg_{jk_{0}}\otimes \cdots\otimes g_{jt})\otimes_{H}y_{j}(w_{1}\otimes u).\] Note that \(h\cdot_{K}u=h\cdot_{\widetilde{K}}u\), as both of these correspond to the action of \(h\) on the vertices of the same connected component \(K=\widetilde{K}\) of \(\Gamma\) or \(\Delta_{0}^{m1\ldots 1}(\Gamma)\), respectively. Plugging the above in (3.25), we obtain \[(Y\circ_{1}X)^{\Gamma}(h\cdot_{K}(v\otimes u))\] \[\quad=\sum_{i,j}(f_{i1(1)}g_{j1(1)}\otimes\cdots\otimes f_{is(1) }g_{j1(s)}\otimes g_{j2}\otimes\cdots\otimes hg_{jk_{0}}\otimes\cdots\otimes g _{jt})\] \[\quad\otimes_{H}y_{j}\big{(}x_{i}(v)\otimes(f_{i1(-2)}\otimes \cdots\otimes f_{is(-2)})\cdot u\big{)}\] \[\quad=(1\otimes\cdots\otimes\underbrace{h}_{k}\otimes\cdots \otimes 1)(Y\circ_{1}X)^{\Gamma}(v\otimes u).\] This proves the componentwise \(H\)-linearity in the case when \(K\) is contained in the subgraph with vertices \(\{m+1,\ldots,m+n-1\}\). Next, consider the case when \(K\) is a connected component of \(\Gamma\) that intersects both subgraphs formed by the first \(m\) vertices and the last \(n-1\) vertices. Let us denote these intersections by \(K_{1}\) and \(K_{2}\), respectively. If \(K\) is the \(k\)-th connected component of \(\Gamma\), then \(K_{1}\) is the \(k\)-th connected component of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\) (see the proof of Lemma 3.7). 
Then, by the coassociativity of the coproduct, we have for \(h\in H\), \(v\in V^{\otimes m}\) and \(u\in V^{\otimes(n-1)}\): \[(Y\circ_{1}X)^{\Gamma}\big{(}h\cdot_{K}(v\otimes u)\big{)}=(Y\circ_{1}X)^{ \Gamma}\big{(}(h_{(1)}\cdot_{K_{1}}v)\otimes(h_{(2)}\cdot_{K_{2}}u)\big{)},\] where \(\cdot_{K_{2}}\) denotes the action on the \(v_{i}\)'s in \(u=v_{m+1}\otimes\cdots\otimes v_{m+n-1}\) corresponding to the vertices of \(K_{2}\). From the componentwise \(H\)-linearity of \(X\) we get \[X^{\Delta_{1}^{m1\ldots 1}(\Gamma)}(h_{(1)}\cdot_{K_{1}}v)\] \[\quad=(1\otimes\cdots\otimes\underbrace{h_{(1)}}_{k}\otimes\cdots \otimes 1)X^{\Delta_{1}^{m1\ldots 1}(\Gamma)}(v)\] \[\quad=\sum_{i}(f_{i1}\otimes\cdots\otimes h_{(1)}f_{ik}\otimes \cdots\otimes f_{is})\otimes_{H}x_{i}(v),\] where \(h_{(1)}\) multiplies only \(f_{ik}\). Plugging this in (3.25), we obtain: \[(Y\circ_{1}X)^{\Gamma}\big{(}h\cdot_{K}(v\otimes u)\big{)}\] \[\quad=(Y\circ_{1}X)^{\Gamma}\big{(}(h_{(1)}\cdot_{K_{1}}v) \otimes(h_{(2)}\cdot_{K_{2}}u)\big{)}\] \[\quad=\sum_{i,j}\bigl{(}f_{i1(1)}g_{j1(1)}\otimes\cdots\otimes(h _{(1)}f_{ik})_{(1)}g_{j1(k)}\otimes\cdots\otimes f_{is(1)}g_{j1(s)}\] \[\quad\otimes g_{j2}\otimes\cdots\otimes g_{jt}\bigr{)}\otimes_{H }y_{j}\bigl{(}x_{i}(v)\otimes(f_{i1(-2)}\otimes\cdots\otimes(h_{(1)}f_{ik})_{( -2)}\] \[\quad\otimes\cdots\otimes f_{is(-2)})\cdot(h_{(2)}\cdot_{K_{2}}u) \bigr{)}\] \[\quad=\sum_{i,j}\bigl{(}f_{i1(1)}g_{j1(1)}\otimes\cdots\otimes h_ {(1)}f_{ik(1)}g_{j1(k)}\otimes\cdots\otimes f_{is(1)}g_{j1(s)}\] \[\quad\otimes g_{j2}\otimes\cdots\otimes g_{jt}\bigr{)}\otimes_{H }y_{j}\bigl{(}x_{i}(v)\otimes(f_{i1(-2)}\otimes\cdots\otimes f_{ik(-2)}h_{(-2)}\] \[\quad\otimes\cdots\otimes f_{is(-2)})\cdot(h_{(3)}\cdot_{K_{2}} u)\bigr{)}, \tag{3.28}\] where for the last equality we used the coassociativity (3.4). Recall that in the expression \[(f_{i1(-2)}\otimes\cdots\otimes f_{ik(-2)}h_{(-2)}\otimes\cdots\otimes f_{is( -2)})\cdot(h_{(3)}\cdot_{K_{2}}u),\] \(f_{ik(-2)}h_{(-2)}\) acts via the iterated coproduct on the vectors \(v_{j}\) in \(u\) such that vertex \(j\) is externally connected to any vertex of the \(k\)-th connected component \(K_{1}\) of \(\Delta_{1}^{m1\ldots 1}(\Gamma)\). These are precisely the vertices of \(K_{2}\). Thus, by (3.8), we can simplify the right-hand side of (3.28) to \[\sum_{i,j}\bigl{(}f_{i1(1)}g_{j1(1)}\otimes\cdots\otimes hf_{ik(1 )}g_{j1(k)}\otimes\cdots\otimes f_{is(1)}g_{j1(s)}\] \[\quad\quad\otimes g_{j2}\otimes\cdots\otimes g_{jt}\bigr{)} \otimes_{H}y_{j}\bigl{(}x_{i}(v)\otimes(f_{i1(-2)}\otimes\cdots\otimes f_{is(- 2)})\cdot u\bigr{)}\] \[\quad=(1\otimes\cdots\otimes\underbrace{h}_{k}\otimes\cdots \otimes 1)(Y\circ_{1}X)^{\Gamma}(v\otimes u),\] as desired. Finally, in the special case when \(K\) is contained in \(\Delta_{1}^{m1\ldots 1}(\Gamma)\), we can use the same argument as above with an empty subgraph \(K_{2}\). This completes the proof. Similarly to the \(\circ_{1}\)-product, we can define \(\circ_{k}\)-products for \(2\leq k\leq n\) as follows. Let \(X\in\mathcal{P}_{H}^{cl}(m)\), \(Y\in\mathcal{P}_{H}^{cl}(n)\), and \(\Gamma\in G(m+n-1)\). Now \(X\) will be evaluated on \(\Delta^{1\ldots 1m1\ldots 1}_{k}(\Gamma)\) where \(m\) is at the \(k\)-th position. 
As before, we write for \(v\in V^{\otimes m}\), \(w\in V^{\otimes n}\) and some linear maps \(x_{i}\), \(y_{j}\): \[X^{\Delta^{1\ldots 1m1\ldots 1}_{k}(\Gamma)}(v) =\sum_{i}(f_{i1}\otimes\cdots\otimes f_{is})\otimes_{H}x_{i}(v), \tag{3.29}\] \[Y^{\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)}(w) =\sum_{j}(g_{j1}\otimes\cdots\otimes g_{jt})\otimes_{H}y_{j}(w), \tag{3.30}\] where now \(s\) is the number of connected components of \(\Delta^{1\ldots 1m1\ldots 1}_{k}(\Gamma)\) and \(t\) is the number of connected components of \(\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)\). We have the following analogue of Lemma 3.7. **Lemma 3.10**.: _Let \(\,\Gamma\in G_{0}(m+n-1)\) and \(\,\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)\in G_{0}(n)\). If there are \(s\) connected components in \(\Delta^{1\ldots 1m1\ldots 1}_{k}(\Gamma)\) and \(t\) connected components in \(\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)\), then there are \(s+t-1\) connected components in \(\Gamma\)._ Notice that when we identify the connected components of \(\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)\) and \(\Delta^{1\ldots 1m1\ldots 1}_{k}(\Gamma)\) as connected components in \(\Gamma\) (similarly to the proof of Lemma 3.7), they may come in a different order. We denote by \(\rho^{\Gamma}_{k}\in S_{s+t-1}\) the permutation identifying them, which is defined explicitly as follows. Let us label the connected components of \(\Gamma\) as \(\Gamma_{1},\ldots,\Gamma_{s+t-1}\), following the convention above (3.22). Note that each of the connected components \(K_{i}\) (\(1\leq i\leq s\)) of \(\Delta^{1\ldots 1m1\ldots 1}_{k}(\Gamma)\) connects to a unique connected component of \(\Gamma\), labeled as \(\Gamma_{\rho^{\Gamma}_{k}(i)}\). As in the proof of Lemma 3.7, let \(\widetilde{\Gamma}_{1},\ldots,\widetilde{\Gamma}_{t}\) be the connected components of \(\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)\). Suppose that the vertex in \(\Delta^{1\ldots 1m1\ldots 1}_{0}(\Gamma)\) obtained by clasping the \(m\) vertices \(\{k,\ldots,k+m-1\}\) in \(\Gamma\) is contained in the \(q\)-th connected component \(\widetilde{\Gamma}_{q}\). Then each \(\widetilde{\Gamma}_{j}\) with \(1\leq j\leq t\), \(j\neq q\) remains a connected component of \(\Gamma\), and we let \[\widetilde{\Gamma}_{j}=\begin{cases}\Gamma_{\rho^{\Gamma}_{k}(s+j)},&1\leq j\leq q-1,\\ \Gamma_{\rho^{\Gamma}_{k}(s+j-1)},&q+1\leq j\leq t.\end{cases} \tag{3.31}\] This completes the definition of \(\rho^{\Gamma}_{k}\). It is illustrated in Example 3.11 below. Now the \(\circ_{k}\)-product is defined as follows: \[(Y\circ_{k}X)^{\Gamma}(v_{1}\otimes\cdots\otimes v_{m+n-1})\] \[:=(-1)^{p^{X}_{k}}\sum_{i,j}\rho^{\Gamma}_{k}\big{(}g_{j1}\otimes g_{j2}\otimes\cdots\otimes g_{j\,q-1}\otimes f_{i1(1)}g_{jq(1)}\otimes f_{i2(1)}g_{jq(2)}\] \[\qquad\otimes\cdots\otimes f_{is(1)}g_{jq(s)}\otimes g_{j\,q+1}\otimes\cdots\otimes g_{jt}\big{)}\otimes_{H}y_{j}\big{(}(f_{i1(-2)}\otimes\cdots\otimes f_{is(-2)})\] \[\qquad\cdot(v_{1}\otimes\cdots\otimes v_{k-1}\otimes x_{i}(v_{k}\otimes\cdots\otimes v_{k+m-1})\otimes v_{k+m}\otimes\cdots\otimes v_{m+n-1})\big{)}, \tag{3.32}\] where the sign \((-1)^{p^{X}_{k}}\) follows from the Koszul-Quillen rule: \[(-1)^{p^{X}_{k}}=(-1)^{p(X)(p(v_{1})+\cdots+p(v_{k-1}))}, \tag{3.33}\] and the \(f_{il(-2)}\)'s act on the vertices \(\{1,\ldots,k-1\}\cup\{k+m,\ldots,m+n-1\}\) in a way similar to the \(\circ_{1}\)-product case. The rest of the notation is as in (3.25); see the explanations below (3.25).
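The component count in Lemma 3.10 is easy to test mechanically. The following snippet is a small sanity check of our own (it is not part of the paper; the particular graph, the \(0\)-based vertex labels, and all function names are ours): it clasps the block of \(m\) vertices with a union-find routine and compares \(s+t-1\) with the number of connected components of \(\Gamma\).

```python
# Sanity check (ours) of Lemma 3.10: if Gamma is acyclic, the induced subgraph on the
# clasped block {k,...,k+m-1} has s components, and the clasped graph Delta_0 has t
# components, then Gamma has s + t - 1 components.  Vertices are 0-based here.

def components(n_vertices, edges):
    """Number of connected components of an undirected graph, via union-find."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in range(n_vertices)})

def clasp_counts(m, n, k, edges):
    """Return (number of components of Gamma, s + t - 1) for Gamma on m+n-1 vertices."""
    N = m + n - 1
    block = set(range(k - 1, k - 1 + m))          # the m vertices being clasped (0-based)
    # Delta_k: induced subgraph on the block, relabelled to 0..m-1
    sub = [(a - (k - 1), b - (k - 1)) for a, b in edges if a in block and b in block]
    s = components(m, sub)
    # Delta_0: clasp the block to a single vertex (labelled 0), keep the other n-1 vertices
    def clasp(v):
        if v in block:
            return 0
        return v + 1 if v < k - 1 else v - m + 1
    quot = [(clasp(a), clasp(b)) for a, b in edges if clasp(a) != clasp(b)]
    t = components(n, quot)
    return components(N, edges), s + t - 1

# m = 3, n = 5, k = 4, so Gamma has 7 vertices; the edges are chosen by us for illustration.
print(clasp_counts(3, 5, 4, [(0, 3), (2, 4), (6, 5)]))   # prints (4, 4)
```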
**Example 3.11**.: We will compute \((Y\circ_{4}X)^{\Gamma}\) for \(X\in\mathcal{P}^{cl}_{H}(3)\) and \(Y\in\mathcal{P}^{cl}_{H}(5)\), for a graph \(\Gamma\in G(7)\) whose cocompositions \(G_{4}:=\Delta_{4}^{11131}(\Gamma)\) and \(G_{0}:=\Delta_{0}^{11131}(\Gamma)\) have three and two connected components, respectively. Let us write \[X^{G_{4}}(v) =\sum_{i}(f_{i1}\otimes f_{i2}\otimes f_{i3})\otimes_{H}x_{i}(v), v\in V^{\otimes 3},\] \[Y^{G_{0}}(w) =\sum_{j}(g_{j1}\otimes g_{j2})\otimes_{H}y_{j}(w), w\in V^{\otimes 5}.\] Then, using (3.32), we find: \[(Y\circ_{4}X)^{\Gamma}\] \[=(-1)^{p_{4}^{X}}\,\sum_{i,j}\rho_{4}^{\Gamma}\big{(}f_{i1(1)}g_{j1(1)}\otimes f_{i2(1)}g_{j1(2)}\otimes f_{i3(1)}g_{j1(3)}\otimes g_{j2}\big{)}\] \[\otimes_{H}y_{j}\big{(}(f_{i1(-2)}\otimes f_{i2(-2)}\otimes f_{i3(-2)})\cdot(v_{1}\otimes v_{2}\otimes v_{3}\otimes x_{i}(v_{4}\otimes v_{5}\otimes v_{6})\otimes v_{7})\big{)}\] \[=(-1)^{p_{4}^{X}}\,\sum_{i,j}\bigl{(}f_{i2(1)}g_{j1(2)}\otimes g_{j2}\otimes f_{i1(1)}g_{j1(1)}\otimes f_{i3(1)}g_{j1(3)}\bigr{)}\] \[\otimes_{H}y_{j}\bigl{(}f_{i2(-2)}v_{1}\otimes v_{2}\otimes f_{i1(-2)}v_{3}\otimes x_{i}(v_{4}\otimes v_{5}\otimes v_{6})\otimes f_{i3(-2)}v_{7}\bigr{)}.\] Analogously to the proofs of Lemma 3.8 and Proposition 3.9, one can show that the following results hold; indeed, the \(\circ_{k}\)-product can be thought of as a \(\circ_{1}\)-product (possibly up to a sign change) after relabeling the vertices of \(\Gamma\). **Lemma 3.12**.: _In (3.32), the actions of \(f_{ij(-2)}\) and \(f_{il(-2)}\) are disjoint if \(j\neq l\) and \(\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)\in G_{0}(n)\)._ **Proposition 3.13**.: _Eq. (3.32) defines an element \(Y\circ_{k}X\in\mathcal{P}_{H}^{cl}(m+n-1)\)._ Now we have definitions of \(\circ_{k}\)-products for all \(k\geq 1\). Then, for any \(Y\in\mathcal{P}_{H}^{cl}(n)\) and \(X_{j}\in\mathcal{P}_{H}^{cl}(m_{j})\) (\(1\leq j\leq n\)), we define the composition \(Y(X_{1}\otimes\cdots\otimes X_{n})\) by (2.8). We will show in the following subsection that this definition makes sense by verifying the associativity axiom (2.9). ### Proof that \(\mathcal{P}_{H}^{cl}\) is an operad In this subsection, we prove the following main theorem of the paper: **Theorem 3.14**.: _Let \(H\) be a cocommutative Hopf algebra, and \(V\) be a vector superspace that is also a left \(H\)-module. Then the vector superspaces \(\mathcal{P}_{H}^{cl}(n)\), \(n\geq 0\), defined in (3.13), with the actions of the symmetric groups \(S_{n}\) given by (3.18) and the composition maps given by (2.8), (3.32), form an operad called the generalized classical operad._ To prove the theorem, we will verify (2.9), (2.10), (2.11), which will be done in the following lemmas. The unity axiom (2.10) is obvious, so we focus on the associativity and equivariance axioms. Moreover, since the first and the third equations in (2.9) are equivalent, we only need to prove the first and second. Throughout the rest of this subsection, let \[X\in\mathcal{P}_{H}^{cl}(m),\quad Y\in\mathcal{P}_{H}^{cl}(n),\quad Z\in\mathcal{P}_{H}^{cl}(l).\] **Lemma 3.15**.: _We have \((Z\circ_{i}Y)\circ_{j}X=(-1)^{p(Y)p(X)}(Z\circ_{j}X)\circ_{i+m-1}Y\) for \(1\leq j<i\), i.e., the first identity in (2.9) holds._ Proof.: By Proposition 3.13, both \((Z\circ_{i}Y)\circ_{j}X\) and \((Z\circ_{j}X)\circ_{i+m-1}Y\) are elements of \(\mathcal{P}_{H}^{cl}(m+n+l-2)\).
Given a graph \(\Gamma\in G(m+n+l-2)\), we introduce the following notation: \[\Gamma_{2}:=\Delta_{j}^{1\ldots 1m1\ldots 1}(\Gamma)\in G(m),\qquad\overline{\Gamma}_{1}:=\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)\in G(n+l-1),\] \[\Gamma_{1}:=\Delta_{i}^{1\ldots 1n1\ldots 1}(\overline{\Gamma}_{1})\in G(n),\qquad\Gamma_{0}:=\Delta_{0}^{1\ldots 1n1\ldots 1}(\overline{\Gamma}_{1})\in G(l),\] where \(m\) appears at the \(j\)-th position and \(n\) appears at the \(i\)-th position in the superscripts of \(\Delta\). Note that in order to compute \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}\), we need to evaluate \(X^{\Gamma_{2}}\), \(Y^{\Gamma_{1}}\) and \(Z^{\Gamma_{0}}\). Assume that there are \(r,t,s\) connected components in \(\Gamma_{0},\Gamma_{1},\Gamma_{2}\), respectively, and write \[X^{\Gamma_{2}}(v) =\sum_{\alpha}(f_{\alpha 1}\otimes\cdots\otimes f_{\alpha s})\otimes_{H}x_{\alpha}(v), v\in V^{\otimes m}, \tag{3.34}\] \[Y^{\Gamma_{1}}(w) =\sum_{\beta}(g_{\beta 1}\otimes\cdots\otimes g_{\beta t})\otimes_{H}y_{\beta}(w), w\in V^{\otimes n}, \tag{3.35}\] \[Z^{\Gamma_{0}}(u) =\sum_{\gamma}(h_{\gamma 1}\otimes\cdots\otimes h_{\gamma r})\otimes_{H}z_{\gamma}(u), u\in V^{\otimes l}. \tag{3.36}\] If \(\Gamma_{1}\) is clasped into a vertex of the \(q_{1}\)-th connected component of \(\Gamma_{0}\), then we compute by (3.32): \[(Z\circ_{i}Y)^{\overline{\Gamma}_{1}}(v_{1}\otimes\cdots\otimes v_{n+l-1})\] \[\quad=(-1)^{p_{i}^{Y}}\sum_{\beta,\gamma}\rho_{i}^{\overline{\Gamma}_{1}}\big{(}h_{\gamma 1}\otimes\cdots\otimes h_{\gamma\,q_{1}-1}\otimes g_{\beta 1(1)}h_{\gamma q_{1}(1)}\otimes\cdots\otimes g_{\beta t(1)}h_{\gamma q_{1}(t)}\] \[\quad\otimes h_{\gamma\,q_{1}+1}\otimes\cdots\otimes h_{\gamma r}\big{)}\otimes_{H}z_{\gamma}\big{(}(g_{\beta 1(-2)}\otimes\cdots\otimes g_{\beta t(-2)})\cdot(v_{1}\otimes\cdots\otimes v_{i-1}\] \[\quad\otimes y_{\beta}(v_{i}\otimes\cdots\otimes v_{i+n-1})\otimes v_{i+n}\otimes\cdots\otimes v_{n+l-1})\big{)}, \tag{3.37}\] where \(p_{i}^{Y}\) is given by (3.33). To further compute \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}\), we need to consider the relations between \(\Gamma_{1}\) and \(\Gamma_{2}\), hence we split into the following two cases. In the first case, assume that \(\Gamma_{1}\) and \(\Gamma_{2}\) are disjoint after they are clasped into two vertices in \(\Gamma_{0}\), that is, there exists no path in \(\Gamma\) connecting a connected component of \(\Gamma_{1}\) to a connected component of \(\Gamma_{2}\) (all other edges are allowed; only edges between \(\Gamma_{2}\) and \(\Gamma_{1}\) are excluded). In this case, if \(\Gamma_{2}\) is clasped into a vertex of the \(\overline{q}_{2}\)-th connected component of \(\overline{\Gamma}_{1}\), then \(\overline{q}_{2}\) corresponds to some \(q_{2}\)-th connected component in \(\Gamma_{0}\) that is different from the \(q_{1}\)-th.
We have: \[((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}(v_{1}\otimes\cdots\otimes v_{m+ n+l-2})\] \[=(-1)^{p_{i+m-1}^{Y}+p_{j}^{X}}\sum_{\alpha,\beta,\gamma}\rho_{j} ^{\Gamma}\big{(}(\underbrace{1\otimes\cdots\otimes 1}_{\overline{q}_{2}-1} \otimes f_{\alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)}\otimes 1\cdots\otimes 1)\] \[(\Delta_{\overline{q}_{2}}^{(s-1)}\cdot\rho_{i}^{\overline{ \Gamma}_{1}}(h_{\gamma 1}\otimes\cdots\otimes h_{\gamma\,q_{1}-1}\otimes g_{\beta 1(1)}h_{ \gamma q_{1}(1)}\otimes\cdots\otimes g_{\beta t(1)}h_{\gamma q_{1}(t)}\otimes h _{\gamma\,q_{1}+1}\] \[\otimes\cdots\otimes h_{\gamma r}))\big{)}\otimes_{H}z_{\gamma} \big{(}\big{(}f_{\alpha 1(-2)}\otimes\cdots\otimes f_{\alpha s(-2)}\big{)}\cdot(g_{ \beta 1(-2)}\otimes\cdots\otimes g_{\beta t(-2)})\] \[\cdot(v_{1}\otimes\cdots\otimes v_{j-1}\otimes x_{\alpha}(v_{j} \otimes\cdots\otimes v_{j+m-1})\otimes v_{j+m}\otimes\cdots\otimes v_{i+m-2}\] \[\otimes y_{\beta}(v_{i+m-1}\otimes\cdots\otimes v_{i+m+n-2}) \otimes v_{i+m+n-1}\otimes\cdots\otimes v_{m+n+l-2})\big{)}. \tag{3.38}\] Note that \(\Delta_{\overline{q}_{2}}^{(s-1)}\) only applies on the \(\overline{q}_{2}\)-th connected component after the permutation \(\rho_{i}^{\overline{\Gamma}_{1}}\). Since \(\Gamma_{1}\) and \(\Gamma_{2}\) are disconnected in \(\Gamma\), \(\overline{q}_{2}\) is not any connected component corresponding to \(h_{\gamma q_{1}(p)}\) for any \(p\). Similarly, one sees that \(f_{\alpha k(1)}\) multiplies with \(h_{\gamma a(k)}\) from \(\Delta^{(s-1)}(h_{\gamma a})\) (here \(h_{\gamma a}\) represents the connected component in \(\Gamma_{0}\) that becomes the \(\overline{q}_{2}\)-th connected component in \(\overline{\Gamma}_{1}\) under \(\rho_{i}^{\overline{\Gamma}_{1}}\)), while \(f_{\alpha k(1)}\)'s and \(g_{\beta p(1)}\)'s do not multiply together. Eventually, \(\rho_{j}^{\Gamma}\) permutes the connected components to their correct positions in \(\Gamma\). In (3.37), \(g_{\beta p(-2)}\)'s act only on vectors that are not within \(y_{\beta}\). They do not act on vectors within \(x_{\alpha}\) in (3.38) either; otherwise we have a path between \(\Gamma_{1}\) and \(\Gamma_{2}\) in \(\Gamma\). Thus, \(g_{\beta p(-2)}\)'s only act on vectors not in \(x_{\alpha}\) and \(y_{\beta}\), which is also true for \(f_{\alpha k(-2)}\)'s. The last observation is that the vectors not within \(x_{\alpha}\) and \(y_{\beta}\) can have at most one action: either from \(g_{\beta p(-2)}\) or \(f_{\alpha k(-2)}\); otherwise we have a path between \(\Gamma_{1}\) and \(\Gamma_{2}\) in \(\Gamma\). These observations imply that we do not need to worry about the commutativity of \(H\). Next, we compute the right hand side \(((Z\circ_{j}X)\circ_{i+m-1}Y)^{\Gamma}\). Now \(X\) will always be evaluated on \(\Gamma_{2}\), \(Y\) on \(\Gamma_{1}\), and \(Z\) on \(\Gamma_{0}\). Let \(\widetilde{\Gamma}_{1}=\Delta_{0}^{1\ldots 1n1\ldots 1}(\Gamma)\) where \(n\) appears at the \((i+m-1)\)-th position. 
Note that \(\Gamma_{2}\) will be clasped into a vertex in the \(q_{2}\)-th connected component of \(\Gamma_{0}\), and we have: \[(Z\circ_{j}X)^{\widetilde{\Gamma}_{1}}(v_{1}\otimes\cdots\otimes v_{ m+l-1})\] \[=(-1)^{p_{j}^{X}}\sum_{\beta,\gamma}\rho_{j}^{\widetilde{\Gamma}_ {1}}\big{(}h_{\gamma 1}\otimes\cdots\otimes h_{\gamma\,q_{2}-1}\otimes f_{\alpha 1 (1)}h_{\gamma q_{2}(1)}\otimes\cdots\otimes f_{\alpha s(1)}h_{\gamma q_{2}(s)}\] \[\otimes h_{\gamma\,q_{2}+1}\otimes\cdots\otimes h_{\gamma r} \big{)}\otimes_{H}z_{\gamma}\big{(}(f_{\alpha 1(-2)}\otimes\cdots\otimes f_{\alpha s(-2)}) \cdot(v_{1}\otimes\cdots\otimes v_{j-1}\] \[\otimes x_{\alpha}(v_{j}\otimes\cdots\otimes v_{j+m-1})\otimes v _{j+m}\otimes\cdots\otimes v_{m+l-1})\big{)}. \tag{3.39}\] Suppose that \(\Gamma_{1}\) is clasped into a vertex in the \(\widetilde{q}_{1}\)-th connected component of \(\widetilde{\Gamma}_{1}\). This will correspond to the \(q_{1}\)-th connected component of \(\Gamma_{0}\) as for the left-hand side. Thus \[((Z\circ_{j}X)\circ_{i+m-1}Y)^{\Gamma}(v_{1}\otimes\cdots\otimes v _{m+n+l-2})\] \[=(-1)^{p_{i+m-1}^{Y}+p_{j}^{X}+p(Y)p(X)}\sum_{\alpha,\beta,\gamma} \rho_{i+m-1}^{\Gamma}\big{(}(\underbrace{1\otimes\cdots\otimes 1}_{ \widetilde{q}_{1}-1}\otimes g_{\beta 1(1)}\otimes\cdots\otimes g_{\beta t(1)}\] \[\otimes 1\otimes\cdots\otimes 1)(\Delta_{\widetilde{q}_{1}}^{(t-1)} \cdot\rho_{j}^{\widetilde{\Gamma}_{1}}(h_{\gamma 1}\otimes\cdots\otimes h_{\gamma\,q_{2}-1} \otimes f_{\alpha 1(1)}h_{\gamma q_{2}(1)}\otimes\cdots\] \[\otimes f_{\alpha s(1)}h_{\gamma q_{2}(s)}\otimes h_{\gamma\,q_{ 2}+1}\cdots\otimes h_{\gamma r}))\big{)}\otimes_{H}z_{\gamma}\big{(}(g_{ \beta 1(-2)}\otimes\cdots\otimes g_{\beta t(-2)})\] \[\cdot(f_{\alpha 1(-2)}\otimes\cdots\otimes f_{\alpha s(-2)}) \cdot(v_{1}\otimes\cdots\otimes v_{j-1}\otimes x_{\alpha}(v_{j}\otimes \cdots\otimes v_{j+m-1})\] \[\otimes v_{j+m}\otimes\cdots\otimes v_{i+m-2}\otimes y_{\beta}(v _{i+m-1}\otimes\cdots\otimes v_{i+m+n-2})\] \[\otimes v_{i+m+n-1}\otimes\cdots\otimes v_{m+n+l-2})\big{)}. \tag{3.40}\] The action of \(f_{\alpha k(-2)}\)'s and \(g_{\beta p(-2)}\)'s on the vectors are the same as in (3.38). Each vector gets at most one action; hence the \(z_{\gamma}\) part is the same as in (3.38). For the coefficients representing each connected component, we have the following observations: 1. \(h_{\gamma b}\) (which represents the connected component in \(\Gamma_{0}\) that becomes the \(\widetilde{q}_{1}\)-th connected component in \(\widetilde{\Gamma}_{1}\) under \(\rho_{j}^{\widetilde{\Gamma}_{1}}\)) is distinct from \(h_{\gamma q_{2}}\) since \(\Gamma_{1}\) and \(\Gamma_{2}\) are disjoint. 2. \(\Delta^{(s-1)}(h_{\gamma q_{2}})\) multiplies with \(f_{\alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)}\) and then gets permuted twice, by \(\rho_{j}^{\widetilde{\Gamma}_{1}}\) and \(\rho_{i+m-1}^{\Gamma}\), while in (3.38), \(\Delta^{(s-1)}(h_{\gamma a})\) will get permuted by \(\rho_{j}^{\Gamma}\). 3. These two parts will be the same, since \(h_{\gamma q_{2}}\) and \(h_{\gamma a}\) are essentially the same, both representing the connected component in \(\Gamma_{0}\) that contains the vertex clasped from \(\Gamma_{2}\). They are eventually identified as connected components of the same graph \(\Gamma\). 4. Similar arguments work for \(h_{\gamma q_{1}}\) and \(h_{\gamma b}\). The connected components that are disjoint from \(\Gamma_{1}\) and \(\Gamma_{2}\) will be the same in both cases, since they are eventually identified as connected components of \(\Gamma\). 
The above arguments prove that \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}=(-1)^{p(Y)p(X)}((Z\circ_{j}X)\circ_{i+m-1}Y)^{\Gamma}\) when \(\Gamma_{1}\) and \(\Gamma_{2}\) are disjoint. The other case is when \(\Gamma_{1}\) and \(\Gamma_{2}\) are connected. Note that there can be only one path connecting the \(k_{1}\)-th connected component of \(\Gamma_{1}\) and the \(k_{2}\)-th connected component of \(\Gamma_{2}\); otherwise we have a cycle in \(\Gamma_{0}\). In a typical example, such a path joins a vertex \(a\) of the \(k_{1}\)-th connected component of \(\Gamma_{1}\) to a vertex \(b\) of the \(k_{2}\)-th connected component of \(\Gamma_{2}\), possibly passing through some vertices \(c\) outside \(\Gamma_{1}\) and \(\Gamma_{2}\). To simplify the calculations, we use the observation that for any left \(H\)-module \(W\) and elements \(h_{1},\dots,h_{n}\in H\), \(w\in W\), the expression \((h_{1}\otimes\cdots\otimes h_{n})\otimes_{H}w\) can be rewritten in the form \((\widetilde{h}_{1}\otimes\cdots\otimes\widetilde{h}_{i-1}\otimes 1\otimes\widetilde{h}_{i+1}\otimes\cdots\otimes\widetilde{h}_{n})\otimes_{H}\widetilde{w}\) for any \(i\in\{1,\dots,n\}\), where \(\widetilde{h}_{j}\in H\) and \(\widetilde{w}\in W\) are unique (see (3.7), (3.8) and [1]). Hence, without loss of generality, we can assume that \(f_{\alpha k_{2}}=1\) in (3.34) and \(g_{\beta k_{1}}=1\) in (3.35). Then we compute \((Z\circ_{i}Y)^{\overline{\Gamma}_{1}}\) from (3.37) by letting \(g_{\beta k_{1}(1)}=g_{\beta k_{1}(-2)}=1\). For \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}\), one can still use (3.38) with the following changes: 1. \(f_{\alpha k_{2}(1)}=f_{\alpha k_{2}(-2)}=1\). Note that \(g_{\beta k_{1}(1)}=1\) and \(\Delta_{\overline{q}_{2}}^{(s-1)}\) applies on the connected component corresponding to \(h_{\gamma q_{1}(k_{1})}\); hence \(f_{\alpha k(1)}\)'s and \(g_{\beta p(1)}\)'s do not multiply together. 2. Note that if a vector \(v_{l}\) in \(z_{\gamma}\) gets both actions from \(f_{\alpha k(-2)}\) and \(g_{\beta p(-2)}\), then it must be in the component that connects \(\Gamma_{1}\) and \(\Gamma_{2}\), and the actions are from \(f_{\alpha k_{2}(-2)}\) and \(g_{\beta k_{1}(-2)}\), which are trivial. Other vectors can only get one action, either from \(f_{\alpha k(-2)}\) or \(g_{\beta p(-2)}\); otherwise we get a cycle between \(\Gamma_{1}\) and \(\Gamma_{2}\) in \(\Gamma_{0}\). The right-hand side \(((Z\circ_{j}X)\circ_{i+m-1}Y)^{\Gamma}\) can be computed in a similar way. First, \((Z\circ_{j}X)^{\widetilde{\Gamma}_{1}}\) can be obtained by letting \(f_{\alpha k_{2}(1)}=f_{\alpha k_{2}(-2)}=1\) in (3.39). Then \(((Z\circ_{j}X)\circ_{i+m-1}Y)^{\Gamma}\) is computed from (3.40) with the following changes: 1. \(g_{\beta k_{1}(1)}=g_{\beta k_{1}(-2)}=1\). Note that \(f_{\alpha k_{2}(1)}=1\) and \(\Delta_{\widetilde{q}_{1}}^{(t-1)}\) acts on the connected component corresponding to \(h_{\gamma q_{2}(k_{2})}\); hence \(f_{\alpha k(1)}\)'s and \(g_{\beta p(1)}\)'s do not multiply together. 2. The second change made when computing \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}\) above also applies here. Since on both sides we identify the connected components of \(\Gamma\) by either \(\rho_{j}^{\Gamma}\) or \(\rho_{i+m-1}^{\Gamma}\), we obtain \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}=(-1)^{p(Y)p(X)}((Z\circ_{j}X)\circ_{i+m-1}Y)^{\Gamma}\) when \(\Gamma_{1}\) and \(\Gamma_{2}\) are connected. This completes the proof of the lemma. _Remark 3.16_.: In the above proof, removing the path connecting \(\Gamma_{1}\) and \(\Gamma_{2}\) (there is one and only one such path) will make these two new subgraphs disjoint.
Hence, the case when \(\Gamma_{1}\) and \(\Gamma_{2}\) are connected can be thought of as the disjoint case once \(f_{\alpha k_{2}}\) and \(g_{\beta k_{1}}\) are set to \(1\). **Lemma 3.17**.: _We have \((Z\circ_{i}Y)\circ_{j}X=Z\circ_{i}(Y\circ_{j-i+1}X)\) for \(i\leq j<i+n\), i.e., the second identity in (2.9) holds._ Proof.: Consider a graph \(\Gamma\in G(m+n+l-2)\), and introduce the following notation: \[\overline{\Gamma}_{1} :=\Delta_{i}^{1\ldots 1(m+n-1)1\ldots 1}(\Gamma)\in G(m+n-1), \Gamma_{2} :=\Delta_{j-i+1}^{1\ldots 1m1\ldots 1}(\overline{\Gamma}_{1})\in G (m),\] \[\Gamma_{0} :=\Delta_{0}^{1\ldots 1(m+n-1)1\ldots 1}(\Gamma)\in G(l), \Gamma_{1} :=\Delta_{0}^{1\ldots 1m1\ldots 1}(\overline{\Gamma}_{1})\in G (n),\] \[\overline{\Gamma}_{0} :=\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)\in G(n+l-1),\] where \(m+n-1\) is at the \(i\)-th position and \(m\) is at the \((j-i+1)\)-th position in the superscripts of \(\Delta\), except that \(m\) is at the \(j\)-th position in the last cocomposition map defining \(\overline{\Gamma}_{0}\). Note that \(\overline{\Gamma}_{0}\) is obtained by clasping \(\Gamma_{2}\) in \(\Gamma\). Since \(i\leq j<i+n\), the graph \(\Gamma\) looks like this: As in the proof of Lemma 3.15, \(X,Y,Z\) will be evaluated on \(\Gamma_{2}\), \(\Gamma_{1}\), \(\Gamma_{0}\), respectively. Using the same notation for the number of connected components and expressions (3.34), (3.35), (3.36), one computes: \[(Z\circ_{i}Y)^{\overline{\Gamma}_{0}}(v_{1}\otimes\cdots\otimes v_{n +l-1})\] \[=(-1)^{p_{i}^{Y}}\sum_{\beta,\gamma}\rho_{i}^{\overline{\Gamma}_{0 }}\big{(}\big{(}\underbrace{1\otimes\cdots\otimes 1}_{q_{1}-1}\otimes g_{\beta 1(1)} \otimes\cdots\otimes g_{\beta t(1)}\otimes 1\otimes\cdots\otimes 1)\] \[(h_{\gamma 1}\otimes\cdots\otimes\Delta^{(t-1)}(h_{\gamma q_{1}}) \otimes\cdots\otimes h_{\gamma r})\big{)}\,\otimes_{H}z_{\gamma}\big{(}\big{(} g_{\beta 1(-2)}\otimes\cdots\otimes g_{\beta t(-2)})\] \[\cdot(v_{1}\otimes\cdots\otimes v_{i-1}\otimes y_{\beta}(v_{i} \otimes\cdots\otimes v_{i+n-1})\otimes v_{i+n}\otimes\cdots\otimes v_{n+l-1}) \big{)}\] \[=(-1)^{p_{i}^{Y}}\sum_{\beta,\gamma}\rho_{i}^{\overline{\Gamma}_{ 0}}(h_{\gamma 1}\otimes\cdots\otimes h_{\gamma\,q_{1}-1}\otimes g_{\beta 1(1)} \otimes\cdots\otimes g_{\beta t(1)}\otimes h_{\gamma\,q_{1}+1}\] \[\otimes\cdots\otimes h_{\gamma r})\otimes_{H}z_{\gamma}\big{(} (g_{\beta 1(-2)}\otimes\cdots\otimes g_{\beta t(-2)})\cdot(v_{1}\otimes\cdots \otimes v_{i-1}\] \[\otimes y_{\beta}(v_{i}\otimes\cdots\otimes v_{i+n-1})\otimes v _{i+n}\otimes\cdots\otimes v_{n+l-1})\big{)}, \tag{3.41}\] where the vertex clasped from \(\Gamma_{1}\) is contained in the \(q_{1}\)-th connected component of \(\Gamma_{0}\). The last equality is obtained after setting \(h_{\gamma q_{1}}=1\); we can always do that as explained in the proof of Lemma 3.15. 
If \(\Gamma_{2}\) is clasped into a vertex in the \(\overline{q}_{2}\)-th connected component of \(\overline{\Gamma}_{0}\), one finds \(((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}\) as follows: \[((Z\circ_{i}Y)\circ_{j}X)^{\Gamma}(v_{1}\otimes\cdots\otimes v_{m+ n+l-2})\] \[= (-1)^{p_{i}^{Y}+p_{j}^{X}}\!\!\sum_{\alpha,\beta,\gamma}\rho_{j}^ {\Gamma}\big{(}(\underbrace{1\otimes\cdots\otimes 1}_{\overline{q}_{2}-1}\otimes f_{ \alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)}\otimes 1\otimes\cdots\otimes 1)( \Delta_{\overline{q}_{2}}^{(s-1)}\] \[\cdot(\rho_{i}^{\overline{\Gamma}_{0}}(h_{\gamma 1}\otimes\cdots \otimes h_{\gamma\,q_{1}-1}\otimes g_{\beta 1(1)}\otimes\cdots\otimes g_{\beta t(1)} \otimes h_{\gamma\,q_{1}+1}\otimes\cdots\otimes h_{\gamma r})))\big{)}\] \[\otimes_{H}z_{\gamma}\big{(}\big{(}f_{\alpha 1(-2)}\otimes\cdots \otimes f_{\alpha s(-2)}\big{)}\cdot(g_{\beta 1(-2)}\otimes\cdots\otimes g_{\beta t(-2)})\cdot(v_{1} \otimes\cdots\otimes v_{i-1}\] \[\otimes y_{\beta}(v_{i}\otimes\cdots\otimes v_{j-1}\otimes x_{ \alpha}(v_{j}\otimes\cdots\otimes v_{j+m-1})\otimes v_{j+m}\otimes\cdots \otimes v_{i+m+n-2})\] \[\otimes v_{i+m+n-1}\otimes\cdots\otimes v_{m+n+l-2})\big{)}. \tag{3.42}\] Note that in (3.42), since the vertex clasped from \(\Gamma_{2}\) is in \(\Gamma_{1}\), the \(\overline{q}_{2}\)-th connected component of \(\overline{\Gamma}_{0}\) corresponds to \(g_{\beta l(1)}\) for some \(l\). We assume that \(g_{\beta l}=1\); hence \(\Delta_{\overline{q}_{2}}^{(s-1)}(g_{\beta l(1)})=1\otimes\cdots\otimes 1\) and \(f_{\alpha k(1)}\)'s and \(g_{\beta p(1)}\)'s do not multiply together. In \(z_{\gamma}\), \(f_{\alpha k(-2)}\)'s act only on vectors outside \(x_{\alpha}\) and \(g_{\beta p(-2)}\)'s act only on vectors outside \(y_{\beta}\). All vectors outside \(y_{\beta}\) can only have at most a single action, either from \(f_{\alpha k(-2)}\) or \(g_{\beta p(-2)}\); otherwise we get a cycle in \(\Gamma_{0}\). Next, we will compute the right-hand side \((Z\circ_{i}(Y\circ_{j-i+1}X))^{\Gamma}\). If the vertex clasped from \(\Gamma_{2}\) is contained in the \(\overline{q}_{1}\)-th connected component of \(\Gamma_{1}\) then we have: \[(Y\circ_{j-i+1}X)^{\overline{\Gamma}_{1}}(v_{1}\otimes\cdots\otimes v _{m+n-1})\] \[=(-1)^{p_{j-i+1}^{X}}\sum_{\beta,\gamma}\rho_{j-i+1}^{\overline{ \Gamma}_{1}}\big{(}(\underbrace{1\otimes\cdots\otimes 1}_{\overline{q}_{1}-1} \otimes f_{\alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)}\otimes 1\otimes\cdots\otimes 1)\] \[(g_{\beta 1}\otimes\cdots\otimes\Delta^{(s-1)}(g_{\beta\overline{q}_ {1}})\otimes\cdots\otimes g_{\beta t})\big{)}\otimes_{H}y_{\beta}\big{(}(f_{ \alpha 1(-2)}\otimes\cdots\otimes f_{\alpha s(-2)})\] \[\cdot(v_{1}\otimes\cdots\otimes v_{j-i}\otimes x_{\alpha}(v_{j-i +1}\otimes\cdots\otimes v_{j-i+m})\otimes v_{j-i+m+1}\otimes\cdots\otimes v _{m+n-1})\big{)}\] \[=(-1)^{p_{j-i+1}^{X}}\sum_{\beta,\gamma}\rho_{j-i+1}^{\overline{ \Gamma}_{1}}\big{(}g_{\beta 1}\otimes\cdots\otimes g_{\beta\,\overline{q}_{1}-1} \otimes f_{\alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)}\otimes g_{\beta\, \overline{q}_{1}+1}\] \[\otimes\cdots\otimes g_{\beta t}\big{)}\otimes_{H}y_{\beta} \big{(}(f_{\alpha 1(-2)}\otimes\cdots\otimes f_{\alpha s(-2)})\cdot(v_{1}\otimes \cdots\otimes v_{j-i}\] \[\otimes x_{\alpha}(v_{j-i+1}\otimes\cdots\otimes v_{j-i+m}) \otimes v_{j-i+m+1}\otimes\cdots\otimes v_{m+n-1})\big{)}, \tag{3.43}\] where the last equality is obtained after we set \(g_{\beta\overline{q}_{1}}=1\). 
Indeed, here \(\overline{q}_{1}=l\) as in (3.42), since \(g_{\beta\overline{q}_{1}}\) corresponds to the connected component of \(\Gamma_{1}\) that contains the vertex clasped from \(\Gamma_{2}\). So we are imposing the same condition \(g_{\beta l}=1\) both in (3.42) and (3.43). Finally, we can compute \((Z\circ_{i}(Y\circ_{j-i+1}X))^{\Gamma}\). Suppose that \(\overline{\Gamma}_{1}\) is clasped into a vertex contained in the \(q_{2}\)-th connected component of \(\Gamma_{0}\). Then \[(Z\circ_{i}(Y\circ_{j-i+1}X))^{\Gamma}(v_{1}\otimes\cdots\otimes v _{m+n+l-2})\] \[=(-1)^{p_{i}^{Y}+p_{j}^{X}}\sum_{\alpha,\beta,\gamma}\rho_{i}^{ \Gamma}\big{(}(\underbrace{1\otimes\cdots\otimes 1}_{q_{2}-1}\otimes\rho_{j-i+1}^{ \overline{\Gamma}_{1}}(g_{\beta 1(1)}\otimes\cdots\otimes g_{\beta\,\overline{q}_{1}-1(1)}\] \[\otimes f_{\alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)} \otimes g_{\beta\,\overline{q}_{1}+1(1)}\otimes\cdots\otimes g_{\beta t(1)}) \otimes 1\otimes\cdots\otimes 1)(h_{\gamma 1}\otimes\cdots\] \[\otimes h_{\gamma\,q_{2}-1}\otimes\Delta^{(s+t-2)}(h_{\gamma q_{2} })\otimes h_{\gamma\,q_{2}+1}\otimes\cdots\otimes h_{\gamma r})\big{)}\otimes_ {H}z_{\gamma}\big{(}(g_{\beta 1(-2)}\otimes\cdots\] \[\otimes g_{\beta\,\overline{q}_{1}-1(-2)}\otimes f_{\alpha 1(-2)} \otimes\cdots\otimes f_{\alpha s(-2)}\otimes g_{\beta\,\overline{q}_{1}+1(-2)} \otimes\cdots\otimes g_{\beta t(-2)})\] \[\cdot(v_{1}\otimes\cdots\otimes v_{i-1}\otimes y_{\beta}\big{(}(f _{\alpha 1(-3)}\otimes\cdots\otimes f_{\alpha s(-3)})\cdot(v_{i}\otimes\cdots \otimes v_{j-1}\] \[\otimes x_{\alpha}(v_{j}\otimes\cdots\otimes v_{j+m-1})\otimes v_{j +m}\otimes\cdots\otimes v_{i+m+n-2})\big{)}\] \[\otimes v_{i+m+n-1}\otimes\cdots\otimes v_{m+n+l-2})\big{)}. \tag{3.44}\] Note that here \(q_{2}=q_{1}\) and we are imposing the same condition as in (3.41): \(h_{\gamma q_{2}}=1\). Thus, \(\rho_{j-i+1}^{\overline{\Gamma}_{1}}(g_{\beta 1(1)}\otimes\cdots\otimes f_{ \alpha 1(1)}\otimes\cdots\otimes f_{\alpha s(1)}\otimes\cdots\otimes g_{\beta t(1)})\) will multiply with \(\Delta^{(s+t-2)}(h_{\gamma q_{2}})=1\otimes\cdots\otimes 1\). In \(z_{\gamma}\), we use the identity \(f_{\alpha k(1)(1)}\otimes f_{\alpha k(1)(-2)}\otimes f_{\alpha k(-2)}=f_{ \alpha k(1)}\otimes f_{\alpha k(-2)}\otimes f_{\alpha k(-3)}\). Hence, \(f_{\alpha k(-2)}\)'s will only act on the vectors outside \(y_{\beta}\), and \(f_{\alpha k(-3)}\)'s will only act on the vectors inside \(y_{\beta}\) but out of \(x_{\alpha}\). As in (3.42), \(f_{\alpha k(-2)}\)'s and \(g_{\beta p(-2)}\)'s will not act on the same vector outside \(y_{\beta}\); otherwise we get a cycle in \(\Gamma_{0}\). To see that (3.42) and (3.44) are equal, note that for the coefficient part, each tensor factor can only be \(f_{\alpha k(1)}\), \(g_{\beta p(1)}\) or \(h_{\gamma q}\). No multiplication will appear since we impose the conditions \(g_{\beta l}=h_{\gamma q_{1}}=1\). Since both (3.42) and (3.44) identify the connected components of \(\Gamma\) in the end, the coefficient parts are equal. For the vector part, as \(f_{\alpha k(-3)}\otimes f_{\alpha k(-2)}\) (since \(H\) is cocommutative), the actions of \(f_{\alpha k}\)'s are the same; hence the vector parts are the same. This completes the proof of the lemma. 
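As an aside, the Sweedler-type identity \(f_{(1)(1)}\otimes f_{(1)(-2)}\otimes f_{(-2)}=f_{(1)}\otimes f_{(-2)}\otimes f_{(-3)}\) invoked above can be checked by hand in the simplest case. The following verification is ours (it is not taken from the paper), for \(H=\mathbb{F}[\partial]\) and \(f=\partial\), reading the negative Sweedler indices as \(h_{(-k)}=S(h_{(k)})\); this is our reading of the conventions (3.4)-(3.8), and it matches the computations in the proof of Lemma 4.6 below. Since \(\Delta(\partial)=\partial\otimes 1+1\otimes\partial\) and \(S(\partial)=-\partial\), both sides equal
\[\partial\otimes 1\otimes 1-1\otimes\partial\otimes 1-1\otimes 1\otimes\partial:\]
the left-hand side because the term \(\partial\otimes 1\) of \(\Delta(\partial)\) contributes \(\partial\otimes 1\otimes 1-1\otimes\partial\otimes 1\) and the term \(1\otimes\partial\) contributes \(-1\otimes 1\otimes\partial\), and the right-hand side by applying \(S\) to the last two factors of the iterated coproduct \(\partial\otimes 1\otimes 1+1\otimes\partial\otimes 1+1\otimes 1\otimes\partial\).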
**Lemma 3.18**.: _For any \(X\in\mathcal{P}^{cl}_{H}(m)\), \(Y\in\mathcal{P}^{cl}_{H}(n)\), \(\sigma\in S_{n}\), \(\tau\in S_{m}\), and \(1\leq i\leq n\), we have \(Y^{\sigma}\circ_{i}X^{\tau}=(Y\circ_{\sigma(i)}X)^{\sigma\circ_{i}\tau}\), i.e., (2.11) holds where \(\sigma\circ_{i}\tau\) is given by (2.12)._ Proof.: Consider a graph \(\Gamma\in G(m+n-1)\), and let \[\Gamma_{0}=\sigma(\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)),\quad\Gamma_{i}= \tau(\Delta_{i}^{1\ldots 1m1\ldots 1}(\Gamma)),\] where \(m\) appears at the \(i\)-th position. As before, we write \[X^{\Gamma_{i}}(v) =\sum_{\alpha}(f_{\alpha 1}\otimes\cdots\otimes f_{\alpha s}) \otimes_{H}x_{\alpha}(v), v\in V^{\otimes m}, \tag{3.45}\] \[Y^{\Gamma_{0}}(w) =\sum_{\beta}(g_{\beta 1}\otimes\cdots\otimes g_{\beta t}) \otimes_{H}y_{\beta}(w), w\in V^{\otimes n}. \tag{3.46}\] First, using (3.18)-(3.22), (3.45) and (3.46), we find: \[(X^{\tau})^{\Delta_{i}^{1\ldots 1m1\ldots 1}(\Gamma)}(v_{i} \otimes\cdots\otimes v_{i+m-1})\] \[=(\widetilde{\tau}\otimes_{H}1)X^{\Gamma_{i}}\big{(}\tau(v_{i} \otimes\cdots\otimes v_{i+m-1})\big{)}\] \[=\epsilon(\tau)\sum_{\alpha}\widetilde{\tau}(f_{\alpha 1}\otimes \cdots\otimes f_{\alpha s})\otimes_{H}x_{\alpha}(v_{\tau^{-1}(i)}\otimes \cdots\otimes v_{\tau^{-1}(i+m-1)})\] \[=\epsilon(\tau)\sum_{\alpha}(f_{\alpha\,\widetilde{\tau}^{-1}(1) }\otimes\cdots\otimes f_{\alpha\,\widetilde{\tau}^{-1}(s)})\otimes_{H}x_{ \alpha}(v_{\tau^{-1}(i)}\otimes\cdots\otimes v_{\tau^{-1}(i+m-1)}), \tag{3.47}\] and \[(Y^{\sigma})^{\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)}(v_{1} \otimes\cdots\otimes v_{i-1}\otimes x_{\alpha}(v_{i}\otimes\cdots\otimes v_{ i+m-1})\] \[\quad\otimes v_{i+m}\otimes\cdots\otimes v_{m+n-1})\] \[=\big{(}\widetilde{\sigma}\otimes_{H}1\big{)}Y^{\Gamma_{0}} \big{(}\sigma(v_{1}\otimes\cdots\otimes v_{i-1}\otimes x_{\alpha}(v_{i} \otimes\cdots\otimes v_{i+m-1})\] \[\quad\otimes v_{i+m}\otimes\cdots\otimes v_{m+n-1})\big{)}\] \[=\epsilon(\sigma)\sum_{\beta}(g_{\beta\,\widetilde{\sigma}^{-1}(1 )}\otimes\cdots\otimes g_{\beta\,\widetilde{\sigma}^{-1}(t)})\otimes_{H}y_{ \beta}(v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(\sigma(i)-1)}\] \[\quad\otimes x_{\alpha}(v_{i}\otimes\cdots\otimes v_{i+m-1}) \otimes v_{\sigma^{-1}(\sigma(i)+1)}\otimes\cdots\otimes v_{\sigma^{-1}(m+n-1 )}). \tag{3.48}\] Recall that \(\epsilon(\sigma)\) and \(\epsilon(\tau)\) are given by (3.20); the subscripts are suppressed since the tensor products these permutations act on are clear. 
Then we evaluate \(Y^{\sigma}\circ_{i}X^{\tau}\) on \(\Gamma\) as follows: \[(Y^{\sigma}\circ_{i}X^{\tau})^{\Gamma}(v_{1}\otimes\cdots\otimes v_{ m+n-1})\] \[=(-1)^{\overline{p}_{i}^{X}}\epsilon(\sigma)\epsilon(\tau)\sum_{ \alpha,\beta}\rho_{i}^{\Gamma}\big{(}\big{(}\underbrace{1\otimes\cdots\otimes 1}_{ \widetilde{\sigma}(q)-1}\otimes\widetilde{\tau}(f_{\alpha 1(1)}\otimes\cdots \otimes f_{\alpha s(1)})\otimes 1\] \[\otimes\cdots\otimes 1\big{)}\,(g_{\beta\,\widetilde{\sigma}^{-1}(1)} \otimes\cdots\otimes g_{\beta\,\widetilde{\sigma}^{-1}(\widetilde{\sigma}(q)- 1)}\otimes\Delta^{(s-1)}(g_{\beta q})\otimes\cdots\otimes g_{\beta\, \widetilde{\sigma}^{-1}(t)})\big{)}\] \[\otimes_{H}y_{\beta}\big{(}(f_{\alpha 1(-2)}\otimes\cdots\otimes f_{ \alpha s(-2)})\cdot(v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}( \sigma(i)-1)}\] \[\otimes x_{\alpha}(v_{\tau^{-1}(i)}\otimes\cdots\otimes v_{\tau ^{-1}(i+m-1)})\otimes\cdots\otimes v_{\sigma^{-1}(m+n-1)})\big{)}, \tag{3.49}\] where \(g_{\beta q}\) in \(\widetilde{\sigma}(g_{\beta 1}\otimes\cdots\otimes g_{\beta t})\) corresponds to the connected component of \(\Gamma_{0}\) that contains the vertex clasped from \(\Gamma_{i}\), and \[\overline{p}_{i}^{X}=p(X)(p(v_{\sigma^{-1}(1)})+\cdots+p(v_{\sigma^{-1}( \sigma(i)-1)})).\] For the right-hand side \(((Y\circ_{\sigma(i)}X)^{\sigma\circ_{i}\tau})^{\Gamma}\), by Proposition 2.10, we will evaluate \(X\) and \(Y\) on the graphs \[\Delta^{1\ldots 1m1\ldots 1}_{\sigma(i)}((\sigma\circ_{i}\tau)(\Gamma))=\Gamma_{ i}\quad\text{and}\quad\Delta^{1\ldots 1m1\ldots 1}_{0}((\sigma\circ_{i}\tau)( \Gamma))=\Gamma_{0},\] respectively, where \(m\) appears at the \(\sigma(i)\)-th position. Meanwhile, recall that [1, (2.17)]: \[(\sigma\circ_{i}\tau)(v_{1}\otimes\cdots\otimes v_{m+n-1})\] \[=\sigma(v_{1}\otimes\cdots\otimes v_{i-1}\otimes\tau(v_{i} \otimes\cdots\otimes v_{i+m-1})\otimes v_{i+m}\otimes\cdots\otimes v_{m+n-1})\] \[=\epsilon(\sigma)\epsilon(\tau)\,v_{\sigma^{-1}(1)}\otimes \cdots\otimes v_{\sigma^{-1}(\sigma(i)-1)}\otimes v_{\tau^{-1}(i)}\otimes \cdots\otimes v_{\tau^{-1}(i+m-1)}\] \[\otimes v_{\sigma^{-1}(\sigma(i)+1)}\otimes\cdots\otimes v_{ \sigma^{-1}(m+n-1)}. 
\tag{3.50}\] Hence, we compute the right-hand side: \[((Y\circ_{\sigma(i)}X)^{\sigma\circ_{i}\tau})^{\Gamma}(v_{1} \otimes\cdots\otimes v_{m+n-1})\] \[=(\widetilde{(\sigma\circ_{i}\tau)}\otimes_{H}1)(Y\circ_{\sigma( i)}X)^{(\sigma\circ_{i}\tau)\Gamma}((\sigma\circ_{i}\tau)(v_{1}\otimes\cdots \otimes v_{m+n-1}))\] \[=\epsilon(\sigma)\epsilon(\tau)(\widetilde{(\sigma\circ_{i}\tau) }\otimes_{H}1)(Y\circ_{\sigma(i)}X)^{(\sigma\circ_{i}\tau)\Gamma}(v_{\sigma^{- 1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(\sigma(i)-1)}\] \[\quad\otimes v_{\tau^{-1}(i)}\otimes\cdots\otimes v_{\tau^{-1}(i+ m-1)}\otimes v_{\sigma^{-1}(\sigma(i)+1)}\otimes\cdots\otimes v_{\sigma^{-1}(m+n-1)})\] \[=(-1)^{\overline{p}_{i}^{X}}\epsilon(\sigma)\epsilon(\tau)\sum_{ \alpha,\beta}(\widetilde{\sigma\circ_{i}\tau})\big{(}\rho_{\sigma(i)}^{(\sigma \circ_{i}\tau)\Gamma}(g_{\beta 1}\otimes\cdots\otimes g_{\beta\,q-1}\otimes f_{\alpha 1(1)}g_{ \beta q(1)}\] \[\quad\otimes\cdots\otimes f_{\alpha s(1)}g_{\beta q(s)}\otimes g_{ \beta\,q+1}\otimes\cdots\otimes g_{\beta t}\big{)}\otimes_{H}y_{\beta}\big{(} (f_{\alpha 1(-2)}\otimes\cdots\otimes f_{\alpha s(-2)})\] \[\quad\cdot(v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}( \sigma(i)-1)}\otimes x_{\alpha}(v_{\tau^{-1}(i)}\otimes\cdots\otimes v_{\tau^{- 1}(i+m-1)})\] \[\quad\otimes v_{\sigma^{-1}(\sigma(i)+1)}\otimes\cdots\otimes v_{ \sigma^{-1}(m+n-1)})\big{)}. \tag{3.51}\] We see immediately that the vector parts are the same in the right-hand sides of (3.49) and (3.51). For the coefficient parts, without loss of generality, we assume that \(g_{\beta q}=1\). Then it suffices to show that \[\rho_{i}^{\Gamma}(\widetilde{\sigma}\circ_{\widetilde{\sigma}(q)}\widetilde{ \tau})=\widetilde{(\sigma\circ_{i}\tau)}\rho_{\sigma(i)}^{(\sigma\circ_{i}\tau) \Gamma}\in S_{s+t-1} \tag{3.52}\] when acting on any vector \(h\in H^{\otimes(s+t-1)}\); in particular, for \[h=g_{\beta 1}\otimes\cdots\otimes g_{\beta\,q-1}\otimes f_{\alpha 1(1)}\otimes \cdots\otimes f_{\alpha s(1)}\otimes g_{\beta\,q+1}\otimes\cdots\otimes g_{ \beta t}.\] In order to prove this, recall from (3.22) that \(\widetilde{\sigma}\) identifies the \(k\)-th connected component of \(\sigma\Gamma\) with the \(\widetilde{\sigma}(k)\)-th connected component of \(\Gamma\). Consider the right-hand side of (3.52), and pick a tensor factor in \(h\) that represents a connected component of \(\Gamma_{i}\) or \(\Gamma_{0}\). It is first identified by \(\rho_{\sigma(i)}^{(\sigma\circ_{i}\tau)\Gamma}\) with a connected component of \((\sigma\circ_{i}\tau)\Gamma\); then identified by \(\widetilde{(\sigma\circ_{i}\tau)}\) with a connected component of \(\Gamma\). Now consider the left-hand side of (3.52), and note that \((\widetilde{\sigma}\circ_{\widetilde{\sigma}(q)}\widetilde{\tau})\) identifies this connected component in \(h\) with a connected component in either \(\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)\) or \(\Delta_{i}^{1\ldots 1m1\ldots 1}(\Gamma)\), where \(m\) appears at the \(i\)-th position. If it is a connected component in \(\Gamma_{0}\), it will be identified as a connected component in \(\Delta_{0}^{1\ldots 1m1\ldots 1}(\Gamma)\); while if it is a connected component in \(\Gamma_{i}\), it will be identified as a connected component in \(\Delta_{i}^{1\ldots 1m1\ldots 1}(\Gamma)\). Then \(\rho_{i}^{\Gamma}\) identifies this connected component with a connected component of \(\Gamma\). Hence, both sides identify connected components in \(h\) with those of \(\Gamma\), and they must be equal. This proves the lemma. 
Combining the results of Lemmas 3.15, 3.17 and 3.18 concludes the proof of Theorem 3.14. ## 4. Poisson \(H\)-Pseudoalgebras In this section, as an application of the construction of the generalized classical operad \(\mathcal{P}_{H}^{cl}\), we introduce the notion of a Poisson \(H\)-pseudoalgebra, define the variational and classical cohomology of Poisson \(H\)-pseudoalgebras, and provide examples. As before, \(H\) will be a cocommutative Hopf algebra. ### Lie pseudoalgebras Since every Poisson vertex algebra is in particular a Lie conformal algebra, let us start by recalling the notion of a Lie pseudoalgebra, which generalizes Lie conformal algebras. **Definition 4.1** ([1]).: A _Lie \(H\)-pseudoalgebra_ is a pair \((L,\beta)\), where \(L\) is a superspace with parity \(p\), which is a left \(H\)-module (with \(H\) purely even), and \(\beta\) is an even element of \(\operatorname{Hom}_{H\otimes H}(L\otimes L,(H\otimes H)\otimes_{H}L)\) satisfying the axioms below. Writing the _pseudobracket_\(\beta\) as \(\beta(a\otimes b)=[a*b]\), the axioms are \((a,b,c\in L,\,f,g\in H,\,\sigma=(12)\in S_{2})\): * \(H\)**-bilinearity:** \[[fa*gb]=((f\otimes g)\otimes_{H}1)[a*b];\] (4.1) * **Skewsymmetry:** \[[b*a]=-(-1)^{p(a)p(b)}(\sigma\otimes_{H}\operatorname{id})[a*b];\] (4.2) * **Jacobi identity:** \[[a*[b*c]]-(-1)^{p(a)p(b)}((\sigma\otimes\operatorname{id})\otimes_{H} \operatorname{id})[b*[a*c]]=[[a*b]*c].\] (4.3) _Remark 4.2_.: In the Jacobi identity (4.3), the composition \([[a*b]*c]\) is defined as follows [1]. Let us write \[[a*b]=\sum_{i}(f_{i}\otimes g_{i})\otimes_{H}e_{i}, \tag{4.4}\] \[[e_{i}*c]=\sum_{j}(f_{ij}\otimes g_{ij})\otimes_{H}e_{ij}, \tag{4.5}\] for some \(f_{i},g_{i},f_{ij},g_{ij}\in H\) and \(e_{i},e_{ij}\in L\). Then \[[[a*b]*c]=\sum_{i,j}(f_{i}f_{ij(1)}\otimes g_{i}f_{ij(2)}\otimes g_{ij}) \otimes_{H}e_{ij}. \tag{4.6}\] We point out that, while expressions (4.4), (4.5) are not unique (as they involve \(\otimes_{H}\)), the right-hand side of (4.6) depends only on \([a*b]\) and \(c\). Similarly, if \[[b*c]=\sum_{i}(h_{i}\otimes l_{i})\otimes_{H}d_{i}, \tag{4.7}\] \[[a*d_{i}]=\sum_{j}(h_{ij}\otimes l_{ij})\otimes_{H}d_{ij}, \tag{4.8}\] then \[[a*[b*c]]=\sum_{i,j}(h_{ij}\otimes h_{i}l_{ij(1)}\otimes l_{i}l_{ij(2)}) \otimes_{H}d_{ij}. \tag{4.9}\] _Remark 4.3_.: If \([a*b]\) is given by (4.4), then the \(H\)-bilinearity (4.1) is equivalent to: \[[fa*gb]=\sum_{i}(ff_{i}\otimes gg_{i})\otimes_{H}e_{i}, \tag{4.10}\] while the skewsymmetry (4.2) is equivalent to: \[[b*a]=-(-1)^{p(a)p(b)}\sum_{i}(g_{i}\otimes f_{i})\otimes_{H}e_{i}. \tag{4.11}\] Finally, note that \(p(e_{i})=p(a)+p(b)\) for all \(i\), because the pseudobracket is even. As an example, we recall the Lie pseudoalgebra \(W(\mathfrak{d})\), which is closely related to the Lie-Cartan algebra of vector fields \(W_{N}=\operatorname{Der}\mathbb{F}[[t_{1},\dots,t_{N}]]\) (see [1] for more details). **Example 4.4** (**Lie pseudoalgebra \(W(\mathfrak{d})\)**).: Let \(H=U(\mathfrak{d})\) be the universal enveloping algebra of a finite-dimensional Lie algebra \(\mathfrak{d}\). We define \(W(\mathfrak{d})\) as the free left \(H\)-module \(H\otimes\mathfrak{d}\) (where \(H\) acts by multiplication on the first factor), with the following pseudobracket: \[[(f\otimes a)*(g\otimes b)]=(f\otimes g)\otimes_{H}(1\otimes[a,b])\] \[\qquad\qquad+(fb\otimes g)\otimes_{H}(1\otimes a)-(f\otimes ga) \otimes_{H}(1\otimes b), \tag{4.12}\] for \(f,g\in H\), \(a,b\in\mathfrak{d}\). We refer to [1] for further examples of Lie pseudoalgebras. 
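As a quick illustration of the axioms (this verification is ours and is only a routine check), let us verify that the pseudobracket (4.12) of \(W(\mathfrak{d})\) satisfies the skewsymmetry (4.2); everything here is purely even, so no signs appear. Applying \(-(\sigma\otimes_{H}\operatorname{id})\) to (4.12) gives
\[-(\sigma\otimes_{H}\operatorname{id})[(f\otimes a)*(g\otimes b)]=(g\otimes f)\otimes_{H}(1\otimes[b,a])+(ga\otimes f)\otimes_{H}(1\otimes b)-(g\otimes fb)\otimes_{H}(1\otimes a),\]
which coincides with \([(g\otimes b)*(f\otimes a)]\) as computed directly from (4.12) with the roles of \(f\otimes a\) and \(g\otimes b\) exchanged.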
An important special case is when \[H=\mathbb{F}[\partial_{1},\dots,\partial_{N}],\quad\text{with}\ \ \Delta( \partial_{i})=\partial_{i}\otimes 1+1\otimes\partial_{i}, \tag{4.13}\] is the universal enveloping algebra of an \(N\)-dimensional abelian Lie algebra. In this case, the pseudobracket can be expressed equivalently as a \(\vec{\lambda}\)_-bracket_\(L\otimes L\to L[\vec{\lambda}]:=L[\lambda_{1},\dots,\lambda_{N}]\), where \(\vec{\lambda}=(\lambda_{1},\dots,\lambda_{N})\). Explicitly [1]: \[[a*b]=\sum_{i}(f_{i}(\vec{\partial})\otimes 1)\otimes_{H}e_{i}\ \ \Leftrightarrow\ \ [a_{\vec{\lambda}}b]=\sum_{i}f_{i}(-\vec{\lambda})e_{i}, \tag{4.14}\] where \(\vec{\partial}=(\partial_{1},\dots,\partial_{N})\) (recall that we can always arrange that all \(g_{i}=1\) in (4.4)). Then the axioms of a Lie pseudoalgebra coincide with the axioms (1.4)-(1.6) of a _Lie conformal algebra_ in dimension \(N\)[1]. ### Definition of a Poisson pseudoalgebra Motivated by the connection between the classical operad \(\mathcal{P}^{cl}\) and the notion of a Poisson vertex algebra (see [1, Sect. 10]), our construction of the generalized classical operad \(\mathcal{P}^{cl}_{H}\) leads to the following definition of a Poisson \(H\)-pseudoalgebra. **Definition 4.5**.: A _Poisson \(H\)-pseudoalgebra_\(V\) is a Lie \(H\)-pseudoalgebra with a pseudobracket \([a*b]\), equipped with a supercommutative associative product \(ab\in V\) for \(a,b\in V\), satisfying the following axioms: * \(H\)**-differential:** \[h(ab)=(h_{(1)}a)(h_{(2)}b),\qquad h\in H,\ \ a,b\in V;\] (4.15) * **Leibniz rule:** \[[a*bc]=[a*b]c+(-1)^{p(b)p(c)}[a*c]b,\] (4.16) where \([a*b]c\) is defined by \[[a*b]c:=\sum_{i}(f_{i}\otimes g_{i(1)})\otimes_{H}e_{i}(g_{i(-2)}c) \tag{4.17}\] for \([a*b]\) written in the form (4.4). We remark that (4.15) means that \(V\) is an \(H\)_-differential algebra_, i.e., the product \(V\otimes V\to V\) is an \(H\)-module homomorphism. **Lemma 4.6**.: _The right-hand side of (4.17) is well defined, i.e., it only depends on \([a*b]\) and \(c\) but not on the expression (4.4)._ Proof.: We need to check that, for any fixed \(c\in V\), we have a well-defined linear map \[\Phi\colon(H\otimes H)\otimes_{H}V \to(H\otimes H)\otimes_{H}V,\] \[(f\otimes g)\otimes_{H}e \mapsto(f\otimes g_{(1)})\otimes_{H}e(g_{(-2)}c). \tag{4.18}\] Since the right-hand side of (4.18) depends linearly on \(f\), \(g\) and \(e\), it defines a linear map \[\overline{\Phi}\colon H\otimes H\otimes V\to(H\otimes H)\otimes_{H}V.\] For \(\overline{\Phi}\) to induce a map \(\Phi\) on the quotient \((H\otimes H)\otimes_{H}V\) of \(H\otimes H\otimes V\), it is necessary and sufficient that \[\overline{\Phi}(f\otimes g\otimes he)=\overline{\Phi}(fh_{(1)}\otimes gh_{(2)} \otimes e)\qquad\text{for all}\ \ h\in H.\] To verify this, we compute using (3.4), (3.8) and (4.15): \[\overline{\Phi}(fh_{(1)} \otimes gh_{(2)}\otimes e)=(fh_{(1)}\otimes(gh_{(2)})_{(1)}) \otimes_{H}e((gh_{(2)})_{(-2)}c)\] \[=(fh_{(1)}\otimes g_{(1)}h_{(2)})\otimes_{H}e(h_{(-3)}g_{(-2)}c)\] \[=(f\otimes g_{(1)})\otimes_{H}h_{(1)}\big{(}e(h_{(-2)}g_{(-2)}c) \big{)}\] \[=(f\otimes g_{(1)})\otimes_{H}(h_{(1)}e)(h_{(2)}h_{(-3)}g_{(-2)}c)\] \[=(f\otimes g_{(1)})\otimes_{H}(he)(g_{(-2)}c)=\overline{\Phi}(f \otimes g\otimes he).\] This completes the proof. Similarly to (4.17), we define the product \[a[b*c]:=\sum_{i}(h_{i(1)}\otimes l_{i})\otimes_{H}(h_{i(-2)}a)d_{i}, \tag{4.19}\] for \([b*c]\) written in the form (4.7). As in Lemma 4.6, one can show that (4.19) is well defined. 
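To see Lemma 4.6 at work in the simplest case, here is a small example of our own, for \(H=\mathbb{F}[\partial]\) and the map (4.18), reading \(g_{(-2)}=S(g_{(2)})\). Suppose \([a*b]=(1\otimes\partial)\otimes_{H}e\); the same element can also be written as \((1\otimes 1)\otimes_{H}\partial e-(\partial\otimes 1)\otimes_{H}e\), because \((1\otimes 1)\otimes_{H}\partial e=(\partial\otimes 1+1\otimes\partial)\otimes_{H}e\). Computing \([a*b]c\) by (4.17) from the first expression gives
\[(1\otimes\partial)\otimes_{H}ec-(1\otimes 1)\otimes_{H}e(\partial c),\]
while the second expression gives \((1\otimes 1)\otimes_{H}(\partial e)c-(\partial\otimes 1)\otimes_{H}ec\). Rewriting \((1\otimes\partial)\otimes_{H}ec=(1\otimes 1)\otimes_{H}\partial(ec)-(\partial\otimes 1)\otimes_{H}ec\) and using \(\partial(ec)=(\partial e)c+e(\partial c)\) from (4.15), the two answers agree, as Lemma 4.6 guarantees.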
Then from the (left) Leibniz rule (4.16) and the skewsymmetry (4.2) of the pseudobracket, one can derive the following: **Lemma 4.7** (**Right Leibniz rule**).: _In any Poisson pseudoalgebra \(V\), we have the right Leibniz rule_ \[[ab*c]=a[b*c]+(-1)^{p(a)p(b)}b[a*c]. \tag{4.20}\] Proof.: Suppose that \([a*b]\) is written again in the form (4.4); then due to skewsymmetry \([b*a]\) is given by (4.11). Thus, by (4.17), \[[b*a]c=-(-1)^{p(a)p(b)}\sum_{i}(g_{i}\otimes f_{i(1)})\otimes_{H}e_{i}(f_{i(-2 )}c).\] Note that \(p(e_{i})=p(a)+p(b)\) for all \(i\), because the pseudobracket is even. Since \(V\) is supercommutative, we have \[e_{i}(f_{i(-2)}c)=(-1)^{(p(a)+p(b))p(c)}(f_{i(-2)}c)e_{i}.\] Using this, we can relate (4.17) and (4.19): \[-(-1)^{p(a)p(b)}(\sigma \otimes_{H}1)([b*a]c)=\sum_{i}(f_{i(1)}\otimes g_{i})\otimes_{H} e_{i}(f_{i(-2)}c)\] \[=(-1)^{(p(a)+p(b))p(c)}c[a*b]. \tag{4.21}\] Now we derive the right Leibniz rule from the left Leibniz rule (4.16) and the skewsymmetry (4.2): \[[ab*c] =-(-1)^{(p(a)+p(b))p(c)}(\sigma\otimes_{H}1)[c*ab]\] \[=-(-1)^{(p(a)+p(b))p(c)}(\sigma\otimes_{H}1)([c*a]b)\] \[\quad-(-1)^{(p(a)+p(b))p(c)+p(a)p(b)}(\sigma\otimes_{H}1)([c*b]a)\] \[=(-1)^{p(a)p(b)}b[a*c]+a[b*c],\] where we use (4.21) in the last equation. We will provide examples of Poisson pseudoalgebras in Sect. 4.4 below. In the special case when \(H\) is the algebra of polynomials in \(N\) variables as in (4.13), the pseudobracket is equivalent to the \(\vec{\lambda}\)-bracket (4.14). Then the Leibniz rule (4.16) can be written as: \[[a_{\vec{\lambda}}bc]=[a_{\vec{\lambda}}b]c+(-1)^{p(b)p(c)}[a_{\vec{\lambda}}c]b. \tag{4.22}\] Thus, for \(N=1\), \(H=\mathbb{F}[\partial]\), our notion of a Poisson pseudoalgebra coincides with that of a _Poisson vertex algebra_ (see [1, Sect. 16] and [1]). Note that for Poisson vertex algebras, the right Leibniz rule looks more complicated than the left one (cf. [1, (1.26)]), while in our approach the left and right versions are symmetric. We will derive a formula for the pseudobracket of two products, generalizing the corresponding formula for Poisson vertex algebras [1, (1.34)]. In order to state the result, we first need to check that the two products defined by (4.17) and (4.19) satisfy associativity. **Lemma 4.8**.: _Define products_ \[V\otimes\big{(}(H\otimes H)\otimes_{H}V\big{)} \to(H\otimes H)\otimes_{H}V,\] \[\big{(}(H\otimes H)\otimes_{H}V\big{)}\otimes V \to(H\otimes H)\otimes_{H}V,\] _by extending linearly the formulas_ \[aB :=(f_{(1)}\otimes g)\otimes_{H}(f_{(-2)}a)b,\] \[Bc :=(f\otimes g_{(1)})\otimes_{H}b(g_{(-2)}c),\] _respectively, where \(a,c\in V\) and \(B=(f\otimes g)\otimes_{H}b\in(H\otimes H)\otimes_{H}V\). Then \((aB)c=a(Bc)\)._ Proof.: By a straightforward calculation, we have: \[(aB)c =\Big{(}(f_{(1)}\otimes g)\otimes_{H}(f_{(-2)}a)b\Big{)}c\] \[=(f_{(1)}\otimes g_{(1)})\otimes_{H}\big{(}(f_{(-2)}a)b\big{)}(g_ {(-2)}c),\] and \[a(Bc) =a\Big{(}(f\otimes g_{(1)})\otimes_{H}b(g_{(-2)}c)\Big{)}\] \[=(f_{(1)}\otimes g_{(1)})\otimes_{H}(f_{(-2)}a)\big{(}b(g_{(-2)}c )\big{)}.\] These two are equal since the product in \(V\) is associative. 
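To make the comparison with Poisson vertex algebras explicit, here is a sketch (ours, in the purely even case) of how (4.19) specializes for \(H=\mathbb{F}[\partial]\). Write \([b*c]=\sum_{i}(h_{i}(\partial)\otimes 1)\otimes_{H}d_{i}\), so that \([b_{\lambda}c]=\sum_{i}h_{i}(-\lambda)d_{i}\) by (4.14). Using \(\Delta(\partial)=\partial\otimes 1+1\otimes\partial\) and \(S(\partial)=-\partial\) in (4.19), the correspondence (4.14) yields
\[a[b*c]\ \longleftrightarrow\ \sum_{i}\big(h_{i}(-\lambda-\partial)a\big)\,d_{i},\]
where \(\partial\) acts only on \(a\), i.e. \([b_{\lambda+\partial}c]_{\to}a\) in the usual arrow notation. The right Leibniz rule (4.20) then reads \([ab_{\lambda}c]=[b_{\lambda+\partial}c]_{\to}a+[a_{\lambda+\partial}c]_{\to}b\), recovering the familiar formula for Poisson vertex algebras (cf. [1, (1.26)]).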
Then we have the following: **Proposition 4.9** (**Iterated Leibniz rule**).: _For any Poisson pseudoalgebra \(V\) and \(a_{i},b_{j}\in V\), we have_ \[[(a_{1}\cdots a_{m})*(b_{1}\cdots b_{n})]\] \[=\sum_{i=1}^{m}\sum_{j=1}^{n}\epsilon_{ij}\left(a_{1}\cdots a_{i-1 }a_{i+1}\cdots a_{m}\right)[a_{i}*b_{j}]\left(b_{1}\cdots b_{j-1}b_{j+1}\cdots b _{n}\right), \tag{4.23}\] _where the sign_ \[\epsilon_{ij}:=(-1)^{p(a_{i})(p(a_{i+1})+\cdots+p(a_{m}))+p(b_{j})(p(b_{1})+ \cdots+p(b_{j-1}))}\,.\] Proof.: Follows from applying iteratively the left and right Leibniz rules (4.16), (4.20). ### Construction of Poisson \(H\)-pseudoalgebras from \(\mathcal{P}_{H}^{cl}\) In this subsection, we generalize [1, Theorem 10.7], which established a connection between the classical operad and Poisson vertex algebra structures on an \(\mathbb{F}[\partial]\)-module \(V\). As before, let \(H\) be a cocommutative Hopf algebra, which is purely even, and \(V\) be a vector superspace with parity \(p\), which is also a left \(H\)-module. Let \(\bar{p}=1-p\) denote the reverse parity of \(p\), and \(\Pi V\) denote the same space as \(V\) but with parity \(\bar{p}\). Note that \(\Pi V\) is a left \(H\)-module too. Consider the generalized classical operad \(\mathcal{P}_{H}^{cl}(\Pi V)\) corresponding to \(\Pi V\) (before we were denoting \(\mathcal{P}_{H}^{cl}(V)\) as \(\mathcal{P}_{H}^{cl}\) for short). We denote by \[W_{H}^{cl}(\Pi V)=W(\mathcal{P}_{H}^{cl}(\Pi V))=\bigoplus_{n\geq-1}W_{H,n}^{ cl}(\Pi V)\] the universal Lie superalgebra associated to the operad \(\mathcal{P}_{H}^{cl}(\Pi V)\); see Sect. 2.3. Note that both the operad \(\mathcal{P}_{H}^{cl}(\Pi V)\) and the Lie superalgebra \(W_{H}^{cl}(\Pi V)\) are considered with the parity \(\bar{p}\). **Theorem 4.10**.: _There is a bijective correspondence between odd elements \(X\in W_{H,1}^{cl}(\Pi V)\) such that \([X,X]=0\) and Poisson pseudoalgebra structures on \(V\), given explicitly by:_ \[ab=(-1)^{p(a)}X\stackrel{{\bullet}}{{\longleftrightarrow}}(a \otimes b),\quad[a*b]=(-1)^{p(a)}X\stackrel{{\bullet}}{{\dash}}(a \otimes b), \tag{4.24}\] _for \(a,b\in V\)._ Proof.: We adapt the proof of Theorem 10.7 from [1]. Recall that \(W_{H,1}^{cl}(\Pi V)\) consists of all permutation invariant elements \(X\in\mathcal{P}_{H}^{cl}(\Pi V)(2)\), that is \(X^{\sigma}=X\) where \(\sigma=(12)\in S_{2}\). Elements \(X\in\mathcal{P}_{H}^{cl}(\Pi V)(2)\) correspond to collections of linear maps (cf. (3.13)): \[X^{\Gamma}\colon V\otimes V\to H^{\otimes s(\Gamma)}\otimes_{H}V,\qquad\Gamma \in G_{0}(2),\] satisfying the second cycle condition (3.14) and the componentwise \(H\)-linearity (3.15). The second cycle condition gives that \[X\mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}+X\mathbin{ \raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}=0;\] therefore, \(X\) is uniquely determined by the two maps \[X\mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}^{\bullet} \mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}:V\otimes V \to(H\otimes H)\otimes_{H}V,\] \[X\mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}^{\bullet} \mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}:V\otimes V \to H\otimes_{H}V\cong V.\] Then the componentwise \(H\)-linearity means that the map \(X\mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}^{\bullet}\) is \(H\)-bilinear, and \(X\mathbin{\raisebox{-1.0pt}{\includegraphics[]{fig/f-1.pdf}}}\) is \(H\)-linear. 
Moreover, these two maps are even with respect to the parity \(p\), because \(\bar{p}(X)=\bar{1}\). Next, we claim that the condition \(X^{(12)}=X\) is equivalent to the supercommutativity of the product and the skewsymmetry of the pseudobracket for the parity \(p\). Indeed, if we evaluate \(X^{(12)}=X\) on the connected \(2\)-graph, we get \[X^{\bullet\to\bullet}(a\otimes b)=(X^{(12)})^{\bullet\to\bullet}(a\otimes b)=(-1)^{\bar{p}(a)\bar{p}(b)}X^{\bullet\leftarrow\bullet}(b\otimes a)=(-1)^{p(a)+p(b)+p(a)p(b)}X^{\bullet\to\bullet}(b\otimes a),\] which implies \[ab=(-1)^{p(a)}X^{\bullet\to\bullet}(a\otimes b)=(-1)^{p(b)+p(a)p(b)}X^{\bullet\to\bullet}(b\otimes a)=(-1)^{p(a)p(b)}ba.\] If we evaluate \(X^{(12)}=X\) on the disconnected \(2\)-graph, we get \[X^{\bullet\;\bullet}(a\otimes b)=(X^{(12)})^{\bullet\;\bullet}(a\otimes b)=(-1)^{\bar{p}(a)\bar{p}(b)}(\sigma\otimes_{H}1)X^{\bullet\;\bullet}(b\otimes a),\] which implies \[[a*b]=(-1)^{p(a)}X^{\bullet\;\bullet}(a\otimes b)=-(-1)^{p(a)p(b)}(\sigma\otimes_{H}1)[b*a].\] Now we will show that \([X,X]=2X\square X=0\) is equivalent to three properties: associativity of the product \(ab\), the Jacobi identity of the pseudobracket \([a*b]\) and the Leibniz rule. We will do so by evaluating \(X\square X\) on representative acyclic \(3\)-graphs \(G_{1},G_{2},\dots\), where \(G_{1}\) denotes the \(3\)-graph with no edges, because \(X\square X\) is invariant under the action of the symmetric group and any acyclic \(3\)-graph is obtained from one of these under this action (cf. Example 2.2). Recall formula (2.13): \[X\square X=X\circ_{1}X+X\circ_{2}X+(X\circ_{2}X)^{(12)}.\] Evaluating each summand above on graph \(G_{1}\), we have: \[(X\circ_{1}X)^{G_{1}}(a\otimes b\otimes c) =(-1)^{p(b)}[[a*b]*c],\] \[(X\circ_{2}X)^{G_{1}}(a\otimes b\otimes c) =(-1)^{1+p(b)}[a*[b*c]],\] \[((X\circ_{2}X)^{(12)})^{G_{1}}(a\otimes b\otimes c)\] \[=(-1)^{\bar{p}(a)\bar{p}(b)}((\sigma\otimes 1)\otimes_{H}1)(X\circ_{2}X)^{G_{1}}(b\otimes a\otimes c)\] \[=(-1)^{p(b)+p(a)p(b)}((\sigma\otimes 1)\otimes_{H}1)[b*[a*c]].\] Hence, \((X\square X)^{G_{1}}=0\) is equivalent to the Jacobi identity (4.3). Next, we evaluate \(X\square X\) on graph \(G_{2}\).
Similarly to the previous case, evaluating \(X\square X\) on the graph \(G_{2}\) and on the remaining acyclic \(3\)-graphs, one checks that the vanishing of the corresponding components of \(X\square X\) is equivalent to the Leibniz rule and to the associativity of the product.

### Examples of Poisson pseudoalgebras

Let \(L\) be a Lie pseudoalgebra. The symmetric superalgebra \(S(L)\) carries a natural associative supercommutative product. The left action of \(H\) on \(L\) extends uniquely to \(S(L)\), so that \(S(L)\) is an \(H\)-differential algebra (see (4.15)). The pseudobracket on \(L\) extends uniquely to \(S(L)\) via the iterated Leibniz rule (4.23).

**Proposition 4.12**.: _Given a Lie pseudoalgebra \(L\), the symmetric superalgebra \(S(L)\) has a canonical structure of a Poisson pseudoalgebra described above._

Proof.: In the case of Poisson vertex algebras (corresponding to \(H=\mathbb{F}[\partial]\)), a detailed proof is given in [1, Theorem 1.15]. As our approach is more symmetric and our formula (4.23) is the same as in the well-known case of Lie superalgebras (when \(H=\mathbb{F}\)), the proof in our setting is simpler.

For the rest of this subsection, we assume that \(H=U(\mathfrak{d})\) is the universal enveloping algebra of an \(N\)-dimensional Lie algebra \(\mathfrak{d}\). We fix a basis \(\{\partial_{1},\dots,\partial_{N}\}\) for \(\mathfrak{d}\), and the Poincaré–Birkhoff–Witt basis of \(H\) consisting of all ordered monomials \[\partial^{I}:=\partial_{1}^{i_{1}}\cdots\partial_{N}^{i_{N}}\,,\qquad I=(i_{1},\dots,i_{N})\in\mathbb{Z}_{\geq 0}^{N}\,.\] The first three examples below are adapted from [1], and the last two from [1].

**Example 4.13** (**Free superboson**).: Let \(\mathfrak{g}\) be a finite-dimensional vector superspace with parity \(p\) and a supersymmetric nondegenerate bilinear map \(\beta\colon\mathfrak{g}\times\mathfrak{g}\to\mathfrak{d}\). The supersymmetry of \(\beta\) means that \(\beta(a,b)=(-1)^{p(a)p(b)}\beta(b,a)\) for \(a,b\in\mathfrak{g}\) and \(\beta(a,b)=0\) whenever \(p(a)\neq p(b)\). The _free superboson Lie pseudoalgebra_ is the left \(H\)-module \[L^{\beta}_{\mathfrak{g}}:=(H\otimes\mathfrak{g})\oplus\mathbb{F}K,\quad\text{where}\quad p(K)=\bar{0},\quad\mathfrak{d}K=0,\] with the pseudobracket given by: \[[a*b]=(\beta(a,b)\otimes 1)\otimes_{H}K,\quad[a*K]=0,\qquad a,b\in\mathfrak{g}, \tag{4.25}\] uniquely extended to \(L^{\beta}_{\mathfrak{g}}\) by \(H\)-bilinearity (4.1). When \(\mathfrak{g}\) is purely even, we get the _free boson Lie pseudoalgebra_. The symmetric superalgebra \(S(L^{\beta}_{\mathfrak{g}})\) is a Poisson pseudoalgebra, in which the element \(K\) is central, i.e., it has a trivial pseudobracket with every other element. The quotient Poisson pseudoalgebra \[B^{\beta}_{\mathfrak{g}}:=S(L^{\beta}_{\mathfrak{g}})/S(L^{\beta}_{\mathfrak{g}})(K-1)\cong S(H\otimes\mathfrak{g})\] is called the _free superboson Poisson pseudoalgebra_.
The pseudobracket in \(B^{\beta}_{\mathfrak{g}}\) is given by (4.25) with \(K=1\) on the generators from \(\mathfrak{g}\), and then is extended uniquely by \(H\)-bilinearity (4.1) and the iterated Leibniz rule (4.23). In the case when \(\mathfrak{g}\) is purely even, \(B^{\beta}_{\mathfrak{g}}\) is the space of polynomials in \(\partial^{I}u_{l}\) where \(I\in\mathbb{Z}_{\geq 0}^{N}\) and \(\{u_{l}\}\) is a basis for \(\mathfrak{g}\).

**Example 4.14** (**Free superfermion**).: Let \(\mathfrak{g}\) be a finite-dimensional vector superspace with parity \(p\) and a super-skewsymmetric nondegenerate bilinear form \(\gamma\colon\mathfrak{g}\times\mathfrak{g}\to\mathbb{F}\). The super-skewsymmetry of \(\gamma\) means that \(\gamma(a,b)=-(-1)^{p(a)p(b)}\gamma(b,a)\) for \(a,b\in\mathfrak{g}\) and \(\gamma(a,b)=0\) if \(p(a)\neq p(b)\). The _free superfermion Lie pseudoalgebra_ is the left \(H\)-module \[L^{\gamma}_{\mathfrak{g}}:=(H\otimes\mathfrak{g})\oplus\mathbb{F}K,\quad\text{where}\quad p(K)=\bar{0},\quad\mathfrak{d}K=0,\] with the pseudobracket \[[a*b]=(\gamma(a,b)\otimes 1)\otimes_{H}K,\quad[a*K]=0,\qquad a,b\in\mathfrak{g}, \tag{4.26}\] uniquely extended to \(L^{\gamma}_{\mathfrak{g}}\) by \(H\)-bilinearity (4.1). When \(\mathfrak{g}\) is purely odd, \(L^{\gamma}_{\mathfrak{g}}\) is called the _free fermion Lie pseudoalgebra_. The quotient Poisson pseudoalgebra \[F^{\gamma}_{\mathfrak{g}}:=S(L^{\gamma}_{\mathfrak{g}})/S(L^{\gamma}_{\mathfrak{g}})(K-1)\cong S(H\otimes\mathfrak{g})\] is called the _free superfermion Poisson pseudoalgebra_. Its pseudobracket is given by (4.26) with \(K=1\) on the generators from \(\mathfrak{g}\), and then is extended uniquely by \(H\)-bilinearity (4.1) and the iterated Leibniz rule (4.23). In the case when \(\mathfrak{g}\) is purely odd, \(F^{\gamma}_{\mathfrak{g}}\) is the exterior algebra in \(\partial^{I}u_{l}\) where \(I\in\mathbb{Z}^{N}_{\geq 0}\) and \(\{u_{l}\}\) is a basis for \(\mathfrak{g}\).

**Example 4.15** (**Affine Poisson pseudoalgebra**).: Let \(\mathfrak{g}\) be a Lie algebra with a nondegenerate invariant symmetric bilinear map \(\beta\colon\mathfrak{g}\times\mathfrak{g}\to\mathfrak{d}\). The invariance of \(\beta\) means that \(\beta([a,b],c)=\beta(a,[b,c])\) for all \(a,b,c\in\mathfrak{g}\). The _affine Lie pseudoalgebra_ is defined as the purely even left \(H\)-module \[L^{\beta}_{\mathfrak{g}}:=(H\otimes\mathfrak{g})\oplus\mathbb{F}K,\quad\text{where}\quad\mathfrak{d}K=0,\] with the pseudobracket \((a,b\in\mathfrak{g})\): \[[a*b]=(1\otimes 1)\otimes_{H}[a,b]+(\beta(a,b)\otimes 1)\otimes_{H}K,\qquad[a*K]=0, \tag{4.27}\] uniquely extended to \(L^{\beta}_{\mathfrak{g}}\) by \(H\)-bilinearity (4.1). The quotient Poisson pseudoalgebra \[A^{\beta}_{\mathfrak{g}}:=S(L^{\beta}_{\mathfrak{g}})/S(L^{\beta}_{\mathfrak{g}})(K-1)\cong S(H\otimes\mathfrak{g})\] is called the _affine Poisson pseudoalgebra_. Its pseudobracket is given by (4.27) with \(K=1\) on the generators from \(\mathfrak{g}\), and then is extended uniquely by \(H\)-bilinearity (4.1) and the iterated Leibniz rule (4.23).

**Example 4.16** (**Poisson pseudoalgebra of type \(W\)**).: Let \(\mathfrak{d}\) be a Lie algebra with a symmetric bilinear map \(\beta\colon\mathfrak{d}\times\mathfrak{d}\to\mathfrak{d}\) such that \(\beta([a,b],c)=\beta(a,[b,c])\) for \(a,b,c\in\mathfrak{d}\).
Consider a central extension of the Lie pseudoalgebra \(W(\mathfrak{d})=H\otimes\mathfrak{d}\) from Example 4.4 by an even element \(C\) with \(\mathfrak{d}C=0\) and the pseudobracket \((a,b\in\mathfrak{d})\): \[[(1\otimes a)*(1\otimes b)] =(1\otimes 1)\otimes_{H}(1\otimes[a,b])+(\beta(a,b)\otimes 1) \otimes_{H}C\] \[+(b\otimes 1)\otimes_{H}(1\otimes a)-(1\otimes a)\otimes_{H}(1 \otimes b). \tag{4.28}\] Taking the quotient of its symmetric algebra by the ideal generated by \(C-1\), we obtain the _Poisson pseudoalgebra of type \(W\)_: \[P^{\beta}_{W}:=S(W(\mathfrak{d})\oplus\mathbb{F}C)\big{/}S(W(\mathfrak{d}) \oplus\mathbb{F}C)(C-1)\cong S(H\otimes\mathfrak{d}),\] with the pseudobracket (4.28) with \(C=1\) on the generators from \(\mathfrak{d}\), extended uniquely by \(H\)-bilinearity (4.1) and the iterated Leibniz rule (4.23). **Example 4.17** (**Poisson pseudoalgebra of type \(K\)**).: Let \(\mathfrak{d}\) be the Heisenberg Lie algebra of dimension \(N=2M+1\), with a basis \(\partial_{0},\partial_{1},\dots,\partial_{2M}\) and \[[\partial_{i},\partial_{M+i}]=-[\partial_{M+i},\partial_{i}]=\partial_{0}\,, \qquad 1\leq i\leq M;\] all other brackets equal to zero. Then the free \(H\)-module \(He\) has a Lie pseudoalgebra structure, given by [1, Example 8.4]: \[[e*e]=\alpha\otimes_{H}e\,,\quad\text{where}\] \[\alpha:=1\otimes\partial_{0}-\partial_{0}\otimes 1+\sum_{i=1}^{M} \bigl{(}\partial_{i}\otimes\partial_{M+i}-\partial_{M+i}\otimes\partial_{i} \bigr{)}.\] This Lie pseudoalgebra is denoted as \(K(\mathfrak{d},\theta)\), where \(\theta\in\mathfrak{d}^{*}\) is defined by \(\theta(\partial_{i})=\delta_{i,0}\). There exists a central extension of \(K(\mathfrak{d},\theta)\) by an even element \(C\) with \(\mathfrak{d}C=0\) and the pseudobracket \[[e*e]=\alpha\otimes_{H}e+(\partial_{0}\otimes 1)\otimes_{H}C \tag{4.29}\] (see the proof of [1, Proposition 15.6]). For any \(c\in\mathbb{F}\), we have the _Poisson pseudoalgebra of type \(K\)_: \[P_{K}^{c}:=S(He\oplus\mathbb{F}C)/S(He\oplus\mathbb{F}C)(C-c)\cong S(He),\] with the pseudobracket (4.29) with \(C=c\) on the generator \(e\), extended uniquely by \(H\)-bilinearity (4.1) and the iterated Leibniz rule (4.23). _Remark 4.18_.: The central extensions of the Lie pseudoalgebras \(W(\mathfrak{d})\) and \(K(\mathfrak{d},\theta)\), described in Examples 4.16 and 4.17, are trivial due to [1, Theorem 15.2]. Nevertheless, it is still interesting to consider the corresponding Poisson pseudoalgebras \(P_{W}^{\beta}\) and \(P_{K}^{c}\), because any--trivial or not--central extension gives rise to _compatible_ Poisson pseudobrackets, which are essential for applications to bi-Hamiltonian systems (see e.g. [1]). Recall that, by definition, two Poisson pseudobrackets are called compatible if any linear combination of them is also a Poisson pseudobracket. ### Cohomology of Poisson pseudoalgebras In this section, we define two types of cohomology of Poisson pseudoalgebras. The first one is called the classical cohomology and the second is called the variational cohomology. For Poisson vertex algebras, these two types of cohomology have been defined and studied in [1, 1]. In particular, it was proved in [1] that they are isomorphic when the Poisson vertex algebra, viewed as a differential algebra, is a finitely-generated algebra of differential polynomials. Following the notation of Sect. 
4.3, for a left \(H\)-module \(V\), an odd element \(X\in W^{c}_{H,1}(\Pi V)\) such that \([X,X]=0\) defines on \(V\) the structure of a Poisson \(H\)-pseudoalgebra, as in Theorem 4.10. Since \(\operatorname{ad}_{X}\) is odd, we have \[2\operatorname{ad}_{X}^{2}=[\operatorname{ad}_{X},\operatorname{ad}_{X}]= \operatorname{ad}_{[X,X]}=0. \tag{4.30}\] Thus, we obtain a cohomology complex \((W^{cl}_{H}(\Pi V),\mathrm{ad}_{X})\), whose cohomology is called the _classical cohomology_ of the Poisson pseudoalgebra \(V\). The variational cohomology of \(V\) can be defined following [1, Sect. 11] in the Poisson vertex algebra case. In order to do that, let us consider the operad \(\mathcal{P}^{*}_{H}(\Pi V)\), which corresponds to the Lie \(H\)-pseudoalgebra structures on \(V\) (cf. (1.15) and [1]): \[\mathcal{P}^{*}_{H}(\Pi V)(n)=\mathrm{Hom}_{H^{\otimes n}}((\Pi V)^{\otimes n },H^{\otimes n}\otimes_{H}(\Pi V))\,.\] This operad can be obtained by restricting \(\mathcal{P}^{cl}_{H}(\Pi V)\) to graphs with no edges (cf. Remark 3.6). Let \[W^{*}_{H}(\Pi V)=W(\mathcal{P}^{*}_{H}(\Pi V))=\bigoplus_{n\geq-1}W^{*}_{H,n}( \Pi V)\] denote the universal Lie superalgebra associated to the operad \(\mathcal{P}^{*}_{H}(\Pi V)\). Similarly to Theorem 4.10, one can prove the following theorem, which is essentially the same as in the \(\mathcal{C}hom\) case [1, 1]. **Theorem 4.19**.: _There is a bijective correspondence between odd elements \(X^{*}\in W^{*}_{H,1}(\Pi V)\) such that \([X^{*},X^{*}]=0\) and Lie pseudoalgebra structures on \(V\), given explicitly by:_ \[[a*b]=(-1)^{p(a)}X^{*}(a\otimes b),\qquad a,b\in V. \tag{4.31}\] Comparing (4.31) with (4.24), we have \(X^{*}=X^{\;\bullet\;\bullet\;}\) in the case when \(V\) is a Poisson pseudoalgebra. For a Poisson pseudoalgebra \(V\), we define \[W^{*,L}_{H}(\Pi V)=\bigoplus_{n\geq-1}W^{*,L}_{H,n}(\Pi V)\] to be the subspace of \(W^{*}_{H}(\Pi V)\) consisting of all elements satisfying the following Leibniz rules: \[Y(a_{1}\otimes\cdots\otimes b_{i}a_{i}\otimes\cdots\otimes a_{n})\] \[\quad=(-1)^{\bar{p}(b_{i})(\bar{p}(Y)+\bar{p}(a_{1})+\cdots+\bar{ p}(a_{i-1}))}\,b_{i}\cdot_{i}Y(a_{1}\otimes\cdots\otimes a_{i}\otimes \cdots\otimes a_{n})\] \[\quad+(-1)^{\bar{p}(a_{i})(\bar{p}(b_{i})+\bar{p}(Y)+\bar{p}(a_{ 1})+\cdots+\bar{p}(a_{i-1}))}\,a_{i}\cdot_{i}Y(a_{1}\otimes\cdots\otimes b_{i} \otimes\cdots\otimes a_{n}),\] for all \(i=1,\ldots,n\) and \(a_{1},\ldots,a_{n},b_{i}\in\Pi V\). Above we use the notation \[b\cdot_{i}A=\sum_{j}(g_{j1}\otimes\cdots\otimes g_{ji(1)}\otimes\cdots\otimes g _{jn})\otimes_{H}(g_{ji(-2)}b)v_{j},\] for any \(b\in V\) and \(A\in H^{\otimes n}\otimes_{H}V\), written in the form \[A=\sum_{j}(g_{j1}\otimes\cdots\otimes g_{jn})\otimes_{H}v_{j}\,,\qquad v_{j} \in V.\] **Example 4.20**.: One has: \[W^{*,L}_{H,-1}(\Pi V) =\Pi V/H_{+}(\Pi V)=W^{*}_{H,-1}(\Pi V),\] \[W^{*,L}_{H,0}(\Pi V) =\mathrm{Der}_{H}(\Pi V)\subset\mathrm{End}_{H}(\Pi V)=W^{*}_{H,0} (\Pi V),\] where \(\operatorname{Der}_{H}(\Pi V)\) is the set of all \(Y\in\operatorname{End}_{H}(\Pi V)\) such that \[Y(ab)=Y(a)b+(-1)^{\bar{p}(a)\bar{p}(Y)}a\,Y(b)\] for all \(a,b\in\Pi V\). We will prove below that \(W^{*,L}_{H}(\Pi V)\) is a subalgebra of the Lie superalgebra \(W^{*}_{H}(\Pi V)\). Note that \[X^{*}=X^{\,\,\bullet}\,\,\,{}^{\bullet}\in W^{*,L}_{H,1}(\Pi V),\] due to the left and right Leibniz rules (4.16), (4.20). Since \(X^{*}\) is odd and \([X^{*},X^{*}]=0\), it follows that \(\operatorname{ad}_{X^{*}}^{2}=0\) (cf. (4.30)). 
The cohomology of the complex \((W^{*,L}_{H}(\Pi V),\operatorname{ad}_{X^{*}})\) is called the _variational cohomology_ of the Poisson pseudoalgebra \(V\). We note that the operad \(\mathcal{P}^{cl}_{H}(\Pi V)\) is graded: \[\mathcal{P}^{cl}_{H}(\Pi V)=\bigoplus_{r\geq 0}\operatorname{gr}^{r} \mathcal{P}^{cl}_{H}(\Pi V), \tag{4.32}\] where \(\operatorname{gr}^{r}\mathcal{P}^{cl}_{H}(\Pi V)\) consists of all elements \(Y\) such that \[Y^{\Gamma}=0\quad\text{whenever}\quad|E(\Gamma)|\neq r,\] and \(|E(\Gamma)|\) denotes the number of edges of the graph \(\Gamma\). Then \[\operatorname{gr}^{r}\mathcal{P}^{cl}_{H}(\Pi V)\circ_{k}\operatorname{gr}^{ s}\mathcal{P}^{cl}_{H}(\Pi V)\subset\operatorname{gr}^{r+s}\mathcal{P}^{cl}_{H}( \Pi V),\] due to Lemma 2.6. The grading (4.32) induces a Lie superalgebra grading \[W^{cl}_{H}(\Pi V)=\bigoplus_{r\geq 0}\operatorname{gr}^{r}W^{cl}_{H}(\Pi V),\] so that \[\left[\operatorname{gr}^{r}W^{cl}_{H}(\Pi V),\operatorname{gr}^{ s}W^{cl}_{H}(\Pi V)\right]\subset\operatorname{gr}^{r+s}W^{cl}_{H}(\Pi V).\] Any \(f\in W^{cl}_{H,n}(\Pi V)\) can be written as a finite sum \[f=\sum_{r=0}^{n}f_{r}\,,\qquad\text{where}\quad f_{r}^{\Gamma} =\begin{cases}f^{\Gamma}\,,&\text{if}\,\,\,|E(\Gamma)|=r,\\ 0\,,&\text{otherwise.}\end{cases}\] In particular, for an odd element \(X\in W^{cl}_{H,1}(\Pi V)\) such that \([X,X]=0\), one can write: \[X=X_{0}+X_{1},\] with \(X_{0}\), \(X_{1}\) homogeneous of degree \(0\) and \(1\), respectively. Moreover, \[[X_{0},X_{0}]=[X_{1},X_{1}]=[X_{0},X_{1}]=0,\] which imply (cf. (4.30)): \[\operatorname{ad}_{X_{0}}^{2}=\operatorname{ad}_{X_{1}}^{2}=0, \qquad\operatorname{ad}_{X_{0}}\operatorname{ad}_{X_{1}}=-\operatorname{ad}_{ X_{1}}\operatorname{ad}_{X_{0}}.\] The following result can be obtained using essentially the same proof as in the Poisson vertex algebra case (see [1, Lemmas 11.2, 11.3]). **Lemma 4.21**.: _There exists a natural Lie superalgebra isomorphism_ \[\phi\colon W^{*}_{H}(\Pi V)\to\operatorname{gr}^{0}W^{cl}_{H}(\Pi V),\quad\phi(f^ {*})=f, \tag{4.33}\] _where \(f\) is defined by_ \[f^{\Gamma}=\begin{cases}f^{*}\,,&\text{if}\ \ |E(\Gamma)|=0,\\ 0\,,&\text{if}\ \ |E(\Gamma)|>0.\end{cases}\] _Moreover, for \(f^{*}\in W^{*}_{H}(\Pi V)\), \(f=\phi(f^{*})\), and an odd element \(X\in W^{cl}_{H,1}(\Pi V)\) such that \([X,X]=0\), we have_: 1. \(\operatorname{ad}_{X}f=0\ \ \Leftrightarrow\ \operatorname{ad}_{X_{0}}f= \operatorname{ad}_{X_{1}}f=0\); 2. \(\operatorname{ad}_{X_{0}}f=0\ \Leftrightarrow\ \operatorname{ad}_{X^{*}}f^{*}=0\); 3. \(\operatorname{ad}_{X_{1}}f=0\ \Leftrightarrow\ f^{*}\in W^{*,L}_{H}(\Pi V)\). _As a consequence,_ \[f\in\operatorname{Ker}(\operatorname{ad}_{X})\ \Leftrightarrow\ f^{*}\in \operatorname{Ker}\bigl{(}\operatorname{ad}_{X^{*}}\bigr{|}_{W^{*,L}_{H}(\Pi V )}\bigr{)}.\] The following theorem follows directly from Lemma 4.21. Its proof is the same as in the Poisson vertex algebra case in [1, Sect. 11.2]. **Theorem 4.22**.: _The isomorphism \(\phi\) in (4.33) restricts to an embedding of Lie superalgebras \(W^{*,L}_{H}(\Pi V)\hookrightarrow W^{cl}_{H}(\Pi V)\). This gives an embedding of the variational complex \((W^{*,L}_{H}(\Pi V),\operatorname{ad}_{X^{*}})\) as a subcomplex of the classical complex \((W^{cl}_{H}(\Pi V),\operatorname{ad}_{X})\), and of the variational cohomology into the classical cohomology._
For any cocommutative Hopf algebra $H$ and a left $H$-module $V$, we construct $\mathcal{P}^{cl}_H(V)$. In the special case when $H$ is the algebra of polynomials in one variable, it coincides with the classical operad $\mathcal{P}^{cl}(V)$. Maps from the Lie operad to $\mathcal{P}^{cl}(V)$ correspond to Poisson vertex algebra structures on $V$. Similarly, our operad $\mathcal{P}^{cl}_H(V)$ leads to the notion of a Poisson pseudoalgebra, which extends the notion of a Lie pseudoalgebra. As a byproduct of this construction, we introduce two cohomology theories for Poisson pseudoalgebras, generalizing the variational and classical cohomology of Poisson vertex algebras.
2305.20043
Deception by Omission: Using Adversarial Missingness to Poison Causal Structure Learning
Inference of causal structures from observational data is a key component of causal machine learning; in practice, this data may be incompletely observed. Prior work has demonstrated that adversarial perturbations of completely observed training data may be used to force the learning of inaccurate causal structural models (SCMs). However, when the data can be audited for correctness (e.g., it is cryptographically signed by its source), this adversarial mechanism is invalidated. This work introduces a novel attack methodology wherein the adversary deceptively omits a portion of the true training data to bias the learned causal structures in a desired manner. Theoretically sound attack mechanisms are derived for the case of arbitrary SCMs, and a sample-efficient learning-based heuristic is given for Gaussian SCMs. Experimental validation of these approaches on real and synthetic data sets demonstrates the effectiveness of adversarial missingness attacks at deceiving popular causal structure learning algorithms.
Deniz Koyuncu, Alex Gittens, Bülent Yener, Moti Yung
2023-05-31T17:14:20
http://arxiv.org/abs/2305.20043v1
# Deception by Omission: Using Adversarial Missingness to Poison Causal Structure Learning

###### Abstract

Inference of causal structures from observational data is a key component of causal machine learning; in practice, this data may be incompletely observed. Prior work has demonstrated that adversarial perturbations of completely observed training data may be used to force the learning of inaccurate causal structural models (SCMs). However, when the data can be audited for correctness (e.g., it is cryptographically signed by its source), this adversarial mechanism is invalidated. This work introduces a novel attack methodology wherein the adversary deceptively omits a portion of the true training data to bias the learned causal structures in a desired manner. Theoretically sound attack mechanisms are derived for the case of arbitrary SCMs, and a sample-efficient learning-based heuristic is given for Gaussian SCMs. Experimental validation of these approaches on real and synthetic data sets demonstrates the effectiveness of adversarial missingness attacks at deceiving popular causal structure learning algorithms.

## 1 Introduction and Threat Model

The feasibility of controlling and compromising ML models through adversarial poisoning of the data sets used in training is well-established, and there exists a body of literature exploring both the design of and the defense against such attacks (see a recent survey [12]). In this work we consider causal structure learning under the setting of a novel adversarial model which we call _adversarial missingness_ (AM). This adversarial model is introduced to highlight and explore the potential for adversaries to exploit the ubiquitous and mild phenomenon of missing data, which at times (e.g., when sample inputs are signed) is the only measure the adversary can employ [7; 11]. Under the AM threat model the adversary can neither modify the data nor introduce adversarial samples. Such a restriction arises, for example, when the authenticity and integrity of the data is ensured by cryptographic mechanisms such as sensors digitally signing the output records they contribute to the data provider. Modifying such records or introducing false records entails the auxiliary task of producing a corresponding digital signature, which is intractable due to the unforgeability of digital signatures. The adversary is therefore limited to only being able to partially conceal _existing_ data. Similarly, in longitudinal studies or medical records, adversarial perturbations are susceptible to detection by post-hoc auditing, so in this context adversarial missingness is an attractive attack model. The principals of the adversarial missingness model are: (i) an adversarial data provider, (ii) a modeler, and (iii) an optional data auditor. We assume that the adversary has access to a large number of records drawn from the true causal model; its goal is to pass along some records to the modeler in their entirety and selectively withhold some portion of the remaining records in order to fool the modeler into learning an inaccurate structural causal model (SCM). The goal of the modeler is to use the partially observed data supplied by the adversary to infer the causal structure that gave rise to the completely observed data set. The modeler may have access to an independent data auditor who can partially verify the correctness of the learned causal model.
Such verification, for example, may consist of a guarantee that the observational distribution recovered from the causal discovery process has small KL-divergence from the observational distribution that generated the completely observed data set. This particular form of verification may be accomplished through the use of an independent supply of data, for instance. The adversarial pattern of missingness may be arbitrarily selected. The adversary may have a target adversarial SCM with causal graph \(\mathcal{G}_{\alpha}\) to which it would like the modeler to converge, or its objective may simply be to degrade the accuracy of the learned structure. For example, the adversary may delete an edge in the true causal graph \(\mathcal{G}_{p}\) to obtain \(\mathcal{G}_{\alpha}\), and then implement an _adversarial missingness mechanism_ to determine the subset of data to be withheld to induce the modeler to learn the causal structure given by the adversarial graph. Contributions.This work makes the following contributions: (i) it introduces a formulation of the AM model (Section 3) that provides a proxy objective for arbitrary modelers and formalizes the objectives of the adversary, (ii) it uses rejection sampling to construct an AM attack with desirable theoretical properties (Section 4), (iii) it introduces a neural parameterization of the adversarial missingness mechanism to design an AM attack heuristic which allows the adversary to trade-off between the missingness rate and the attack success (Section 5), and (iv) it provides experimental validation on synthetic and real data to evaluate the performance of the two approaches to AM (Section 6). This is an extended version of the conference paper [13], with additional theoretical results characterizing the behaviors of the rejection sampling approaches and identifying optimal adversarial SCMs for both general distributions and Gaussian SCMs. Figure 1: The Adversarial Missingness (AM) threat model. The adversary masks a portion of the fully-observed data in order to induce the modeler to learn an SCM that is Markovian with respect to the desired DAG \(\mathcal{G}_{\alpha}\). The modeler uses an auditor to ensure that the fitted SCM is plausible. Notation.The structural causal models (SCMs) \(\mathbb{P}_{\mathbf{X}}(\cdot;\mathbf{\theta})\) in this work are parameterized by parameter vectors \(\mathbf{\theta}\). The causal structure learning process ensures that these parameterized distributions have causal factorizations, so are valid SCMs. For convenience, the SCMs may be written with the parameters as subscripts, \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}}(\cdot)\). Similarly, the pdf of an SCM \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}}\) may be written as \(\mathrm{p}_{\mathbf{X};\mathbf{\theta}}(\cdot)\) or \(\mathrm{p}_{\mathbf{X}}(\cdot;\mathbf{\theta})\). The notation \(\mathbf{X}\sim\mathbb{P}_{\mathbf{X};\mathbf{\theta}}\) indicates that \(\mathbf{X}\) is a random vector governed by \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}}\). The true SCM underlying the fully observed data has parameter \(\mathbf{\theta}_{p}\) and is Markovian and faithful with respect to the true DAG \(\mathcal{G}_{p}\). Adversarial SCMs have parameters \(\mathbf{\theta}_{\alpha}\) and are Markovian and faithful with respect to adversarial DAGs \(\mathcal{G}_{\alpha}\). Each coordinate of the random vector \(\mathbf{R}\in\{0,1\}^{d}\) indicates whether the corresponding entry of \(\mathbf{X}\) is observed. 
Conditional distributions of the form \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\), called missingness mechanisms, reflect how complete samples are used to determine which entries are observed. The \(i\)th component of \(\mathbf{X}\) is \(\mathbf{X}_{i}\). The indices of the parents of \(\mathbf{X}_{i}\) in its SCM are denoted by \(\mathrm{pa}_{i}\). Given a subset of the indices \(\mathcal{V}\), the corresponding subvector of \(\mathbf{X}\) is denoted by \(\mathbf{X}_{\mathcal{V}}\). When an observation pattern \(\mathbf{r}\) is specified, the corresponding subvector of \(\mathbf{X}\) that is observed is denoted by \(\mathbf{X}_{o}\), and similarly the corresponding observed subvector of a fixed vector \(\mathbf{x}\) is denoted by \(\mathbf{x}_{o}\). The complementary unobserved subvectors are \(\mathbf{X}_{m}\) and \(\mathbf{x}_{m}\).

## 2 Background

A _causal factorization_ of the distribution \(\mathbb{P}\) of a random \(d\)-dimensional vector \(\mathbf{X}\) explicitly identifies the causes of each variable \(\mathbf{X}_{i}\), in the form \(\mathbb{P}_{\mathbf{X}}=\prod_{i=1}^{d}\mathbb{P}_{\mathbf{X}_{i}\,|\,\mathbf{X}_{\mathrm{pa}_{i}}},\) where \(\mathrm{pa}_{i}\) denotes the indices of the parents of variable \(\mathbf{X}_{i}\) in an associated directed acyclic graph (DAG) \(\mathcal{G}\). When \(\mathbb{P}\) has a causal factorization corresponding to \(\mathcal{G}\), it is _Markovian_ with respect to \(\mathcal{G}\): conditioned on its parents, each \(\mathbf{X}_{i}\) is independent of its non-descendants. Conversely, \(\mathbb{P}\) is said to be _faithful_ with respect to \(\mathcal{G}\) if every conditional independence in \(\mathbb{P}\) is encoded in \(\mathcal{G}\). A structural causal model (SCM) associated with \(\mathcal{G}\) expresses the cause-effect relationships using functional relations of the form \(\mathbf{X}_{i}:=f_{i}(\mathbf{X}_{\mathrm{pa}_{i}},\mathbf{n}_{i})\), indicating that each variable is determined by the values of its parent variables, and a noise variable \(\mathbf{n}_{i}\). Here, the exogenous noise variables \(\mathbf{n}_{1},\ldots,\mathbf{n}_{d}\) are assumed to be jointly independent of each other and the endogenous variables \(\mathbf{X}\). In data-driven learning, including causal structure learning, missing data is commonly encountered. Missing data problems are studied under three basic models: (i) Missing Completely at Random (MCAR), (ii) Missing at Random (MAR), and (iii) Missing Not At Random (MNAR). In the MCAR model, the distribution of the missingness is independent of that of the features, while in the MAR model, the missingness depends at most on the observed features. In the most general model, MNAR, the missingness may depend on both the observed and unobserved features. Most algorithms with provable properties for dealing with missing data require the MCAR or MAR assumptions. When missing data is present, standard causal structure learning algorithms cannot be used: direct application of structure learning algorithms using samples containing structured missingness may result in the learning of incorrect structures, e.g. due to selection bias. Instead, algorithms that learn structure from missing data must estimate the full data distribution from the incompletely observed data. Several recent approaches to causal structure learning have considered the presence of non-adversarial missing data [26; 25; 4; 22; 17].
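To make the three missingness models concrete, the following minimal Python sketch samples from a linear Gaussian SCM and produces observation masks under MCAR, MAR, and MNAR mechanisms. The function names, the example graph, and the masking probabilities are ours and purely illustrative; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_linear_gaussian_scm(B, n_samples, noise_std=1.0):
    """Sample X = B^T X + n for a strictly upper-triangular weight matrix B."""
    d = B.shape[0]
    N = rng.normal(scale=noise_std, size=(n_samples, d))
    # In row form x = x B + n, hence x = n (I - B)^{-1}.
    return N @ np.linalg.inv(np.eye(d) - B)

# A hypothetical 3-node chain: X1 -> X2 -> X3.
B = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.9],
              [0.0, 0.0, 0.0]])
X = sample_linear_gaussian_scm(B, 1000)

def mcar_mask(X, p=0.2):
    """MCAR: each entry is observed independently of everything, with probability 1 - p."""
    return rng.random(X.shape) > p

def mar_mask(X, p=0.4):
    """MAR: X3 is hidden with a probability depending only on the always-observed X1."""
    R = np.ones(X.shape, dtype=bool)
    R[:, 2] = rng.random(len(X)) > p * (X[:, 0] > 0)
    return R

def mnar_mask(X, p=0.6):
    """MNAR: X2 is hidden with a probability depending on its own (possibly unobserved) value."""
    R = np.ones(X.shape, dtype=bool)
    R[:, 1] = rng.random(len(X)) > p * (X[:, 1] > 0)
    return R
```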
MissDAG [10] and MissGLasso [24], which motivate our assumptions on the modelers given in Section 3 and Section 5, follow the same recipe for extending existing structure learning algorithms into the missing data setting: both propose to maximize the log-likelihood of the observed data assuming the missingness mechanism is ignorable. Next, both use the EM algorithm to maximize a lower bound iteratively. In MissGLasso, the maximization step uses the GraphLasso [9] penalty on the precision matrix and can be solved exactly. In MissDAG, the maximization step contains a DAG constraint (plus a sparsity penalty) and can only be solved approximately with, for example, the NOTEARS [28] algorithm. A large body of work has arisen around data poisoning, presenting new attacks and defenses [5; 14; 8] in applications ranging from text classification to image recognition models to recommendation systems. Attacks on causal discovery have only recently been investigated as an instance of data poisoning via insertion of adversarial samples [2; 3]. Both of these works consider the problem of adding data to the training set in order to influence the causal structures learned by the classical PC algorithm [23], and demonstrate the feasibility of both targeted and untargeted attacks. However, to our knowledge, no prior work has considered the use of missingness, rather than the insertion of false data, to manipulate the causal discovery process. ## 3 Formulation of Adversarial Missingness The specific algorithm that the modeler uses to recover the SCM from the partially observed data is unknown. To mitigate this difficulty, we make reasonable assumptions on the modeler's structure learning algorithm to facilitate the design of practical adversarial missingness attacks; this process can be viewed analogously to the use of substitution attacks in standard adversarial ML to reduce attacks on models with unknown architectures into attacks on models with known architectures. Section 6 experimentally validates this approach by showing that attacks designed with these assumptions succeed even on structure learning algorithms that do not satisfy these assumptions. Our two assumptions are: (i) the modeler assumes that the missingness mechanism is MAR, and (ii) the modeler seeks the causal structure that maximizes the probability of the partially observed data. The first assumption is motivated by the use of the MAR assumption in several approaches to learning causal structure from incompletely observed data. The second assumption is motivated by noting that in the case where the training data is completely observed, a common approach (e.g. [15; 28]) for the modeler is to learn a causal structure that maximizes the probability of the fully observed data subject to the distribution factorizing according to a DAG: \[\hat{\mathbf{\theta}}=\operatorname*{arg\,max}_{\mathbf{\theta}\in\mathcal{D}}\mathbb{ E}_{\mathbf{X};\mathbf{\theta}_{p}}[\log\mathbb{P}_{\mathbf{X}}(\mathbf{x};\mathbf{ \theta})].\] Here, the distribution \(\mathbb{P}_{\mathbf{X}}(\cdot;\mathbf{\theta})\) is from a parameterized family, and \(\mathcal{D}\) is the set of parameters which satisfy the property that the corresponding distribution factorizes according to a DAG. When the training data is incompletely observed, under the MAR model, it is natural (e.g. 
[10; 24]) for the modeler to instead choose a causal structure that maximizes the probability of the partially observed data \[\hat{\mathbf{\theta}}=\operatorname*{arg\,max}_{\mathbf{\theta}\in\mathcal{D}}\mathbb{E}_{\mathbf{R}|\mathbf{R}\neq 0}\left[\mathbb{E}_{\mathbf{X}_{o}|\mathbf{R};\mathbf{\theta}_{p}}[\log\mathbb{P}_{\mathbf{X}_{o};\mathbf{\theta}}(\mathbf{x}_{o})\,|\,\mathbf{R}=\mathbf{r}]\right]. \tag{1}\] When the missingness model is indeed MAR, this formulation has the property that it leads to the same \(\hat{\mathbf{\theta}}\) as in the full-data case.

**Objective of the Modeler and Goals of the Adversary.** Equation 1 is taken to be the objective of the modeler in our approach to AM. The solution \(\hat{\mathbf{\theta}}\) of (1) is a function of the missingness mechanism \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\). The adversary's aim is thus to find an adversarial SCM \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\) and design an adversarial missingness mechanism \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\) satisfying the following properties: (i) **adversarial Markovianity:** the adversarial SCM \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\) is Markov relative to the adversarial graph \(\mathcal{G}_{\alpha}\); (ii) \(\beta\)**-indistinguishability:** to foil the auditor, we impose the condition that the adversarial and true distributions must be within distance \(\beta\) in KL-divergence; (iii) **bounded missingness rate:** the expected number of missing features per sample is bounded to reduce the chance that the modeler _a priori_ rejects the training data set as too incomplete to reliably infer causal structures; (iv) **attack success:** when (1) is solved with \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\), the distribution \(\mathbb{P}_{\mathbf{X};\hat{\mathbf{\theta}}}\) learned by the modeler is Markov relative to \(\mathcal{G}_{\alpha}\) and close in KL-divergence to \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\). The adversary's objective is thus to find an adversarial SCM parameterized by \(\mathbf{\theta}_{\alpha}\) and an adversarial missingness mechanism \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\) that solve the constrained optimization problem \[\min_{\mathbb{P}_{\mathbf{R}|\mathbf{X}},\mathbf{\theta}_{\alpha}}\text{D}_{\text{KL}}(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\,\|\,\mathbb{P}_{\mathbf{X};\hat{\mathbf{\theta}}})\text{ subject to }\begin{cases}\text{D}_{\text{KL}}(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{p}}\,\|\,\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}})\leq\beta,\\ \mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\text{ is Markov relative to }\mathcal{G}_{\alpha},\\ \mathbb{E}_{\mathbf{X};\mathbf{\theta}_{p}}\left[\mathbb{E}_{\mathbf{R}|\mathbf{X}}\left[\frac{|\{j\,|\,\mathbf{R}_{j}=0\}|}{d}\right]\right]\leq\gamma.\end{cases} \tag{2}\] The attack success is measured by the KL-divergence between \(\mathbb{P}_{\mathbf{X};\hat{\mathbf{\theta}}}\), the distribution learned by the modeler, and the target adversarial distribution \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\), as well as the Structural Hamming distance between the DAG of the returned SCM and the target adversarial DAG \(\mathcal{G}_{\alpha}\). This is an ambitious optimization problem, encoding multiple competing desiderata.

**A two-stage approximation.** In the adversarial objective (2), the adversary optimizes over the missingness mechanism and the adversarial SCM jointly, and \(\beta\)-indistinguishability is a difficult constraint to satisfy. We propose a two-stage approximation to solving (2).
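The indistinguishability constraint and the success criterion in (2) are stated in terms of KL-divergences between SCMs. For the linear Gaussian SCMs used later in the experiments, these divergences have a closed form in terms of the induced covariance matrices. The sketch below is only an illustrative helper under those assumptions (zero-mean variables, unit-variance noise); the function names and the example graph are ours, not the authors' implementation.

```python
import numpy as np

def scm_covariance(B, noise_var=1.0):
    """Covariance of X for X = B^T X + n with n ~ N(0, noise_var * I)."""
    d = B.shape[0]
    A = np.eye(d) - B
    return noise_var * np.linalg.inv(A @ A.T)

def gaussian_kl(Sigma_p, Sigma_a):
    """D_KL( N(0, Sigma_p) || N(0, Sigma_a) ) in nats."""
    d = Sigma_p.shape[0]
    Sa_inv = np.linalg.inv(Sigma_a)
    _, logdet_p = np.linalg.slogdet(Sigma_p)
    _, logdet_a = np.linalg.slogdet(Sigma_a)
    return 0.5 * (np.trace(Sa_inv @ Sigma_p) - d + logdet_a - logdet_p)

# Example: a true chain 1 -> 2 -> 3 versus an adversarial SCM with the 1 -> 2 edge removed.
B_p = np.array([[0.0, 0.8, 0.0], [0.0, 0.0, 0.9], [0.0, 0.0, 0.0]])
B_a = B_p.copy()
B_a[0, 1] = 0.0
beta = gaussian_kl(scm_covariance(B_p), scm_covariance(B_a))
```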
In the first stage, the adversary selects a target adversarial SCM \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\) that is Markov with respect to \(\mathcal{G}_{\alpha}\) and minimizes the KL-divergence to the true SCM. In the second stage, the adversary finds a missingness mechanism that guides the modeler to learn \(\mathbf{\theta}_{\alpha}\) and has a bounded missingness rate. Specifically, given the target adversarial DAG \(\mathcal{G}_{\alpha}\), we first solve \[\mathbf{\theta}_{\alpha}=\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathcal{D}_{ \alpha}}\operatorname{DKL}(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{p}}\parallel \mathbb{P}_{\mathbf{X};\mathbf{\theta}}), \tag{3}\] where \(\mathcal{D}_{\alpha}:=\{\mathbf{\theta}:\mathbb{P}_{\mathbf{X};\mathbf{\theta}}\text{ is Markov relative to }\mathcal{G}_{\alpha}\}\) denotes the set of feasible \(\mathbf{\theta}\). Next, given the adversarial SCM parameterized by \(\mathbf{\theta}_{\alpha}\), we relax the hard constraint in the original objective on the missingness rate using a Lagrange multiplier and solve for the missingness mechanism: \[\min_{\mathbb{P}_{\mathbf{R}|\mathbf{X}}}\operatorname{DKL}(\mathbb{P}_{ \mathbf{X};\mathbf{\theta}_{\alpha}}\parallel\mathbb{P}_{\mathbf{X};\mathbf{\theta}} )+\lambda\mathbb{E}_{\mathbf{X};\mathbf{\theta}_{p}}\left[\mathbb{E}_{\mathbf{R} |\mathbf{X}}\left[\frac{|\{j\,|\,\mathbf{R}_{j}=0\}|}{d}\right]\right]. \tag{4}\] This two-stage approximation has the advantage of not requiring an a priori selection of \(\beta\) and \(\gamma\). Instead, the smallest possible \(\beta\) is implicitly selected in the first stage, and by varying \(\lambda\) the adversary can explore the trade-off between ensuring \(\hat{\mathbf{\theta}}\) is close to \(\mathbf{\theta}_{\alpha}\) and ensuring that the missingness mechanism has a small expected missingness rate. In Appendix A, a characterization of the optimal adversarial SCM is given for an arbitrarily parameterized family of SCMs, assuming that the adversarial DAG is a subgraph of the true DAG. In the special case of linear Gaussian SCMs, this leads to a closed form solution for the optimal adversarial SCM. This result is used to select adversarial SCMs in our experimental evaluations. ## 4 Adversarial Missingness via Rejection Sampling Our first result establishes a general procedure for guiding modelers that optimize (1) to produce \(\hat{\mathbf{\theta}}=\mathbf{\theta}_{\alpha}\), when the adversary has access to a \(\beta\)-indistinguishable adversarial SCM. The approach uses rejection sampling, so the bound on the missingness rate is implicitly determined by the relationship between \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{\alpha}}\) and \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{p}}\). A general setup for this rejection sampling approach is given in Appendix B that is appropriate for removing multiple edges. Here we consider a local variant that is appropriate for removing one or a small number of edges, and that has a more favorable missingness rate. Let \(\mathcal{V}\subseteq\{1,\dots,d\}\) denote a subset of the variables, and \(\overline{\mathcal{V}}\) denote the complement. 
Localized generalized rejection sampling on the variables \(\mathcal{V}\) is a missingness mechanism that masks only variables in \(\mathcal{V}\), using probabilities depending only on the value of \(\mathbf{X}_{\mathcal{V}}\), given by \[\mathbb{P}_{\mathbf{R}|\mathbf{X}}(\mathbf{r}|\mathbf{x})=\begin{cases}\frac{1}{2^{|\mathcal{V}|}-1}\frac{\Lambda(\mathbf{x}_{\mathcal{V}})}{\Lambda}&\text{if }\mathbf{r}_{\overline{\mathcal{V}}}=1\text{ and }\mathbf{r}_{\mathcal{V}}\neq 0\\ 1-\frac{\Lambda(\mathbf{x}_{\mathcal{V}})}{\Lambda}&\text{if }\mathbf{r}_{\overline{\mathcal{V}}}=1\text{ and }\mathbf{r}_{\mathcal{V}}=0\\ 0&\text{otherwise}\end{cases}. \tag{5}\] Here, \(\Lambda(\mathbf{x}_{\mathcal{V}})=\frac{\mathbb{P}_{\mathbf{X}_{\mathcal{V}};\mathbf{\theta}_{\alpha}}(\mathbf{x}_{\mathcal{V}})}{\mathbb{P}_{\mathbf{X}_{\mathcal{V}};\mathbf{\theta}_{p}}(\mathbf{x}_{\mathcal{V}})}\) is the ratio of the adversarial distribution to the true distribution, and \(\Lambda=\max_{\mathbf{x}_{\mathcal{V}}}\Lambda(\mathbf{x}_{\mathcal{V}})\) is the maximum value of that ratio. Note that the observation patterns that select all variables in \(\overline{\mathcal{V}}\) and at least one variable in \(\mathcal{V}\) are equiprobable. Because this approach only drops variables in \(\mathcal{V}\), the missingness rate is at most \(\frac{|\mathcal{V}|}{d}\). Lemma 5 in the Appendix establishes a tighter bound on the missingness rate that depends on the ratio \(\Lambda\). When the conditional distributions of the variables in \(\overline{\mathcal{V}}\) given the variables in \(\mathcal{V}\) are identical in the adversarial and true SCMs, localized generalized rejection sampling ensures that the partially observed features from \(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{p}}\) look as though they were sampled from the adversarial distribution.

**Lemma 1** (Localized Rejection Sampling).: _Let \(\mathcal{V}\subset\{1,\dots,d\}\) be a subset of the variables. If it is the case that the adversarial distribution preserves the dependence of \(\overline{\mathcal{V}}\) on \(\mathcal{V}\), that is,_ \[\mathbb{P}_{\mathbf{X}_{\overline{\mathcal{V}}}\,|\,\mathbf{X}_{\mathcal{V}}}(\cdot\mid\cdot;\mathbf{\theta}_{\alpha})=\mathbb{P}_{\mathbf{X}_{\overline{\mathcal{V}}}\,|\,\mathbf{X}_{\mathcal{V}}}(\cdot\mid\cdot;\mathbf{\theta}_{p}),\] _and the adversary uses the missingness mechanism defined in (5), then_ \[\mathbb{P}_{\mathbf{X}_{o}\,|\,\mathbf{R}}(\cdot\mid\boldsymbol{r};\boldsymbol{\theta}_{p})=\mathbb{P}_{\mathbf{X}_{o}}(\cdot\,;\boldsymbol{\theta}_{\alpha}) \tag{6}\] _for all \(\boldsymbol{r}\) such that \(\mathbb{P}_{\mathbf{R}}(\boldsymbol{r})\neq 0\)._ Proof is given in Appendix B. This result implies that when the matching condition (6) holds, the adversary can attain their goal of causing \(\boldsymbol{\theta}_{\alpha}\) to be a global maximizer of the modeler's objective.
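A minimal Python sketch of the mechanism in (5) for jointly Gaussian \(\mathbf{X}_{\mathcal{V}}\) is shown below. The helper names are ours. In particular, because the density ratio of two Gaussians need not be bounded, the sketch uses the maximum of the ratio over the finite training sample as a surrogate for \(\Lambda\); this is a practical heuristic, not part of the theoretical construction.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def localized_rejection_masks(X, V, Sigma_p, Sigma_a):
    """Sample observation masks R following the localized mechanism of Eq. (5).

    X       : (N, d) samples from the true SCM
    V       : list of column indices the adversary is allowed to mask
    Sigma_p : covariance of X_V under the true SCM
    Sigma_a : covariance of X_V under the adversarial SCM
    """
    N, d = X.shape
    XV = X[:, V]
    ratio = multivariate_normal(cov=Sigma_a).pdf(XV) / multivariate_normal(cov=Sigma_p).pdf(XV)
    Lambda = ratio.max()          # finite-sample surrogate for the supremum of the density ratio
    accept = ratio / Lambda       # per-sample acceptance probability Lambda(x_V) / Lambda

    R = np.ones((N, d), dtype=bool)
    n_patterns = 2 ** len(V) - 1  # observation patterns on V that keep at least one variable
    for i in range(N):
        if rng.random() < accept[i]:
            # choose uniformly among the non-empty observation patterns on V
            pattern = rng.integers(1, n_patterns + 1)
            for k, col in enumerate(V):
                R[i, col] = bool((pattern >> k) & 1)
        else:
            R[i, V] = False       # hide all of X_V for this sample
    return R
```

With \(\mathcal{V}\) equal to the parent and child of a targeted edge, the mechanism either reveals some nonempty subset of that pair (acceptance) or hides the pair entirely (rejection), while all other columns stay fully observed.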
**Corollary 1**.: _If it is the case that the adversarial distribution satisfies_ \[\mathbb{P}_{\mathbf{X}_{\overline{\mathcal{V}}}\,|\,\mathbf{X}_{\mathcal{V}}}(\cdot\mid\cdot;\boldsymbol{\theta}_{\alpha})=\mathbb{P}_{\mathbf{X}_{\overline{\mathcal{V}}}\,|\,\mathbf{X}_{\mathcal{V}}}(\cdot\mid\cdot;\boldsymbol{\theta}_{p})\] _and the adversary uses localized rejection sampling ((5)), then \(\boldsymbol{\theta}_{\alpha}\) is a global maximizer of the objective of the modeler ((1))._

This result implies, in particular, that if the adversary's goal is to delete a subset of the incoming edges to a node \(s\) and the adversarial SCM is constructed such that the parents of \(s\) in \(\mathcal{G}_{\alpha}\) are a subset of the parents of \(s\) in \(\mathcal{G}_{p}\), and all other causal relationships in the SCMs are identical, then the adversarial distribution is a global maximizer of the modeler's objective when localized rejection sampling is used with \(\mathcal{V}=\{s\}\cup\mathrm{pa}_{s}\). This fact is established as Corollary 6 in Appendix B.

## 5 Learned Adversarial Missingness Mechanism (LAMM)

The rejection sampling approaches can be applied to finite training data sets, but offer little control of the missingness rate, and their optimality guarantees hold when the modeler can evaluate the expectations involved in its objective. It is attractive to consider approaches that tailor the adversarial missingness mechanism specifically to the finite training data set at hand, and that explicitly encourage the missingness rate to be low. With finite data, the expectations in (1) must be replaced with empirical averages. Moreover, even for SCMs parameterized with exponential family distributions, the objective is non-concave due to the presence of missing data, which leads practitioners to use the Expectation Maximization (EM) algorithm to learn the parameters ([10, 24]). In this setting, the adversary's goal is to select a missingness distribution such that EM converges to the adversarial parameter \(\boldsymbol{\theta}_{\alpha}\). To that end, we propose to parameterize the missingness distribution with a neural network. Let \(\mathcal{V}\) denote the variables the adversary chooses for local masking and \(y(\mathbf{x}_{\mathcal{V}},\boldsymbol{\phi})=\text{softmax}((f^{L}\circ\cdots\circ f^{1})(\mathbf{x}_{\mathcal{V}}))\) be an \(L\)-hidden layer neural network with \(2^{|\mathcal{V}|}\) output units; here \(\boldsymbol{\phi}\) are the parameters of the network. Each output unit returns the probability of one of the observation patterns \(\boldsymbol{r}_{\mathcal{V}}\). Let \(\delta:\{0,1\}^{|\mathcal{V}|}\rightarrow\{0,1,\ldots,2^{|\mathcal{V}|}-1\}\) denote the function that maps the observed mask pattern to the corresponding output neuron. Then the missingness distribution is parameterized as follows: \[\mathbb{P}_{\mathbf{R}|\mathbf{X}}(\boldsymbol{r}|\mathbf{x},\boldsymbol{\phi})=\begin{cases}y(\mathbf{x}_{\mathcal{V}},\boldsymbol{\phi})_{\delta(\boldsymbol{r}_{\mathcal{V}})},&\text{if }\boldsymbol{r}_{\overline{\mathcal{V}}}=1\\ 0,&\text{otherwise}\end{cases} \tag{7}\] Our goal is to optimize the adversary's objective (4) by choosing \(\boldsymbol{\phi}\) appropriately. Recall that the modeler, given partially observed data sampled according to the adversary's missingness mechanism, chooses the parameter \(\hat{\boldsymbol{\theta}}\) in (4) by solving its own objective (1).
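A PyTorch sketch of the parameterization in (7) is given below: a feed-forward network maps \(\mathbf{x}_{\mathcal{V}}\) to a softmax over the \(2^{|\mathcal{V}|}\) observation patterns on \(\mathcal{V}\) (variables outside \(\mathcal{V}\) are always observed), and also returns the expected number of masked entries per sample, the quantity penalized (after division by \(d\)) in the adversary's relaxed objective. The class and attribute names are ours; the hidden sizes follow the two-hidden-layer, 100-unit configuration described in the experiments.

```python
import torch
import torch.nn as nn

class MissingnessNet(nn.Module):
    """Neural missingness mechanism of Eq. (7) over the variables in V."""

    def __init__(self, v_size, hidden=100):
        super().__init__()
        self.v_size = v_size
        self.net = nn.Sequential(
            nn.Linear(v_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 ** v_size),
        )
        # pattern_table[k, j] = 1 if the j-th variable of V is observed under pattern k
        patterns = [[(k >> j) & 1 for j in range(v_size)] for k in range(2 ** v_size)]
        self.register_buffer("pattern_table", torch.tensor(patterns, dtype=torch.float))

    def forward(self, x_v):
        # probs[n, k] = probability of observation pattern k for sample n
        probs = torch.softmax(self.net(x_v), dim=-1)
        # expected number of masked entries per sample (divide by d for the missingness rate)
        exp_missing = probs @ (self.v_size - self.pattern_table.sum(dim=1))
        return probs, exp_missing
```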
In order to learn an optimal \(\boldsymbol{\phi}\) to parametrize the adversary's missingness mechanism, we model the dependence of \(\hat{\boldsymbol{\theta}}\) on \(\boldsymbol{\phi}\) in a differentiable manner. The EM algorithm is the canonical approach to find a (approximately) minimizing \(\hat{\boldsymbol{\theta}}\) for (1) given a single sampled realization of the missingness mechanism, but because of the sampling process, its output \(\hat{\boldsymbol{\theta}}\) is not differentiable with respect to \(\boldsymbol{\phi}\). Instead of sampling from the missingness mechanism, we take the expectation with respect to it; this results in a differentiable objective. We call this formulation the Weighted EM (WEM) algorithm. Due to space constraints, the details of the expectation and maximization steps of the WEM algorithm are given in Appendix C. Given that the modeler's procedure for optimizing its objective to learn \(\hat{\boldsymbol{\theta}}\) is captured by the WEM algorithm, the adversary's goal is to make WEM converge to \(\boldsymbol{\theta}_{\alpha}\) from an arbitrary starting point \(\mathbf{\theta}^{0}\). Denote the corresponding output of the WEM algorithm (Algorithm 2 in Appendix C) by \(\text{WEM}(\mathbf{\phi},\mathbf{\theta}^{0},\epsilon,\mathcal{S})\). The adversary solves the following problem using gradient-based optimization: \[\min_{\mathbf{\phi}} \quad\text{D}_{\text{KL}}(\mathbb{P}_{\mathbf{X};\mathbf{\theta}_{ \alpha}}\|\mathbb{P}_{\mathbf{X};\mathbf{\theta}})+\frac{\lambda}{N}\sum_{i=1}^{N} \mathbb{E}_{\mathbf{R}|\mathbf{X};\mathbf{\phi}}\left[\frac{|\{j\,|\mathbf{R}_{j}=0 \}|}{d}\,\Big{|}\,\mathbf{X}=\mathbf{x}^{(i)}\right] \tag{8}\] \[\text{where }\tilde{\mathbf{\theta}}=\text{WEM}(\mathbf{\phi},\mathbf{\theta}^{0 },\epsilon,\mathcal{S}).\] This formulation accounts for the adversary's desire to bound the expected missingness rate in (4). This objective is exactly (4), except optimization with respect to the missingness distribution \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\) has been replaced with optimizing with respect to \(\phi\), which parameterizes the missingness distribution. In general, solving this optimization problem requires the WEM algorithm to be started from scratch in each training epoch using the updated weights, but for exponential family distributions, the maximization step of WEM admits a simple form. Details are given in Appendix C. In practice, the modeler's initialization scheme is unknown, and the adversary could overfit to a particular initialization when solving (8). To mitigate this, \(\mathbf{\phi}\) is selected to guide multiple random initializations to \(\mathbf{\theta}_{\alpha}\). The proposed method of learning a missingness distribution is described in Algorithm 1. The LAMM formulation is flexible and, when more is known about the subroutine the modeler is using to learn \(\tilde{\mathbf{\theta}}\), one can replace WEM with an appropriate differentiable subroutine. 
``` Input:\(\mathbf{\theta}_{\alpha},\epsilon,K,\mathcal{S},\lambda\) \(\mathbf{\phi}\leftarrow\text{Initialize}\) \(\mathbf{\theta}_{k}^{0}\leftarrow\text{Initialize}\quad k=1,\ldots,K\) while\(\mathbf{\phi}\) not converged do {Using \(K\) starting points for robustness} for\(k=1,\ldots,K\)do \(\tilde{\mathbf{\theta}}_{k}\leftarrow\text{WEM}(\mathbf{\phi},\mathbf{\theta}_{k}^{0}, \epsilon,\mathcal{S})\) {WEM Algorithm} endfor \(\mathbf{\phi}\leftarrow\mathbf{\phi}-\frac{\eta}{K}\sum_{k=1}^{K}\nabla_{\mathbf{\phi}}( \ell(\tilde{\mathbf{\theta}}_{k},\mathbf{\theta}_{\alpha},\mathbf{\phi},\lambda))\) endwhile Output:\(\mathbf{\phi}\) ``` **Algorithm 1** LAMM Algorithm. Learns the adversarial missingness mechanism by directing weighted EM to converge on a desired parameter. The subroutine WEM is described in Algorithm 2 of Appendix C. \(\ell(\tilde{\mathbf{\theta}}_{k},\mathbf{\theta}_{\alpha},\mathbf{\phi},\lambda)\) denotes the objective function given in (8) ## 6 Experiments The experimental setup has three components corresponding to parts of the adversarial missingness threat model shown in Figure 1: the underlying true SCM (\(\mathbf{\theta}_{p}\)); the causal discovery algorithm employed by the modeler; and the adversary's choice of the adversarial DAG, adversarial SCM, and missingness mechanism (\(\mathcal{G}_{\alpha},\mathbf{\theta}_{\alpha},\mathbb{P}_{\mathbf{R}|\mathbf{X}}\)). **The true SCM.** For the true SCMs we have used linear Gaussian SCMS with equal noise variance, as they are a popular choice (e.g. [10; 28; 19]). Two of the experiments use simulated data, and one utilizes the commonly used Sachs dataset [21]. In each experiment, a single edge was targeted for deletion via adversarial missingness. Salient characteristics of the experiments are provided in Table 1. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Name} & \multicolumn{2}{c|}{\(\mathcal{G}_{\rho}\)} & \multirow{2}{*}{N} & \multirow{2}{*}{Target Edge} \\ \cline{2-2} \cline{4-4} \cline{6-6} & d & \# Edges & & \\ \hline Gaussian SCM, I & 3 & 2 & 1k & 1\(\rightarrow\) 2 \\ \hline Gaussian SCM, II & 6 & 5 & 50k & 2\(\rightarrow\) 3 \\ \hline Sachs Dataset & 11 & 17 & 853 & “plc” \(\rightarrow\) “pip2” \\ \hline \end{tabular} \end{table} Table 1: Overview of the experiments conducted. **Modeler's Causal Structure Learning Algorithm.** For the modeler's causal structure learning algorithms we have employed methods developed for learning in the presence of missing data, namely the MissDAG [10] (denoted as MissDAG (NT)), a score based method, and MissPC [27], a constraint-based method. We have also compared with approaches that use mean imputation followed by structure learning algorithms that require fully observed data; namely, we have utilized mean imputation followed by the NOTEARS [28] and PC algorithms. MissDAG is sensitive to the initialization of the parameters, so we initialized the covariance matrices using five different schemes: Empirical Diagonal ("Emp. Diag."), identity matrix ("Ident."), the ground truth covariance matrix ("True"), a scaled random covariance matrix ("Random(*)") and a scaled inverse Wishart random matrix ("IW(*)"). For details refer to Appendix D.5.3. **Adversary's Choices**. In each experiment, we describe how \(\mathcal{G}_{\alpha}\) and the adversarial SCM are selected. 
The adversarial missingness distribution \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\) (denoted as MNAR in the tables) is either a variation of generalized local rejection sampling ((5)) or selected via the LAMM algorithm (Algorithm 1). To test the relative advantages of our adversarial missingness mechanisms, we also employ missingness distributions that drop an equal amount of data completely at random. These distributions are denoted MCAR in the tables. To match the amount of missing data and the marginal distribution over the observation masks, the MCAR distributions are taken to be the marginal distribution of \(\mathbf{R}\) given the MNAR missingness distribution \(\mathbb{P}_{\mathbf{R}|\mathbf{X}}\), i.e., \(\mathbb{P}_{\mathbf{R}}(\mathbf{r})=\mathbb{E}_{\mathbf{X};\mathbf{\theta}_{p}}[\mathbb{P}_{\mathbf{R}|\mathbf{X}}(\mathbf{r}\mid\mathbf{X})]\).

**Performance Metrics.** To measure the performance of the adversarial missingness attacks, we first sample data from the true SCM and generate missing data masks \(\mathbf{r}^{(i)}\mid\mathbf{x}^{(i)}\sim\mathbb{P}_{\mathbf{R}|\mathbf{X}}(\cdot\ ;\ \mathbf{x}^{(i)})\) according to the relevant missingness mechanism, and generate masked data sets \(\{\hat{\mathbf{x}}^{(i)}\}_{i=1}^{N}\) where \(\hat{\mathbf{x}}^{(i)}\) has observed values at the entries selected by \(\mathbf{r}^{(i)}\) and NaNs to denote the missing entries. Given the partially observed data, we employ the relevant causal structure learning algorithm to estimate an SCM with corresponding DAG \(\hat{\mathcal{G}}\). We report the Hamming distance (HD), the number of edge differences, between \(\hat{\mathcal{G}}\) and \(\mathcal{G}_{p}\). If \(\hat{\mathcal{G}}\) is a partial DAG, then for each edge in the true graph, the partial DAG has to contain the corresponding undirected edge. The adversarial attack is deemed successful if the edge targeted for deletion is not present in \(\hat{\mathcal{G}}\). To account for the randomness in the missing data masks, this process is repeated multiple times and the average success rate and HD are reported. All experiments involving a neural network employ a 2-hidden layer network with ReLU activation and 100 units per layer; see Appendix D.3 for implementation details.

**Simulation Experiments.** In our two simulation experiments, we have used a Gaussian SCM, \(\mathbf{X}=\mathbf{B}^{T}\mathbf{X}+\mathbf{n}\), where \(\mathbf{n}\) comprises independent Gaussian noise distributed as \(\mathcal{N}(0,\mathbf{I})\). In both experiments the adversarial goal is to remove a single edge. Let \((p,c)\) denote the parent and the child nodes corresponding to the removed edge. The weight of the removed edge is set to zero, \(\mathbf{B}_{p,c}^{\alpha}=0\), unless otherwise stated. Following Theorem 3, the parameters \(\mathbf{B}^{\alpha}\) are kept the same as those of the true SCM except at the removed edge, i.e. \(\mathbf{B}_{i,j}^{\alpha}=\mathbf{B}_{i,j}\) for all \((i,j)\neq(p,c)\), and \(\sigma_{j}^{\alpha}=1\) for all \(j\neq c\). We take \(\sigma_{c}^{\alpha}=\sigma\), which is not optimal in terms of the KL-divergence between the adversarial SCM and the true SCM, but is consistent with the modeler's assumptions of equal variance.

**Gaussian SCM, I.** We designed this experiment as a feasibility check of the LAMM approach with the local masking. The graph has three nodes, and nodes 2 and 3 have incoming edges from node 1 (see Appendix D.1). These edges have magnitudes \(0.8\) and \(0.9\), both of which are above the minimum threshold set in [28] to eliminate spurious edges.
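As a concrete sketch of this setup (with our own variable names, and with the assignment of the two edge weights to the two edges assumed rather than taken from Appendix D.1):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three-node SCM: node 1 is the common parent of nodes 2 and 3 (weights assumed to be 0.8 and 0.9).
B = np.array([[0.0, 0.8, 0.9],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
noise = rng.normal(size=(1000, 3))
X = noise @ np.linalg.inv(np.eye(3) - B)      # X = B^T X + n, solved in row form
print(np.corrcoef(X[:, 0], X[:, 1])[0, 1])    # close to 0.8 / sqrt(0.8**2 + 1) ~ 0.62
```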
This \(\mathbf{B}\) results in a Pearson correlation of roughly 0.6 between \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\), which makes localized generalized rejection sampling feasible (as the maximum of the density ratio is kept reasonable) but non-trivial. The adversarial goal is to remove the edge from 1 to 2, i.e. \((p,c)=(1,2)\), and the adversary employs LAMM with \(\lambda=10^{-2}\). Following Lemma 1, only nodes 1 and 2 are masked, so \(\mathcal{V}=\{1,2\}\). See Table 2 for the experimental results. The trained missingness mechanism masks 40% of the entries in the target two columns on average; the overall percentage of the training data masked is 27%. LAMM ensures that the modeler converges to \(\mathcal{G}_{\alpha}\) in all modeler configurations including when the modeler uses PC-based causal structure learning algorithms. These algorithms are very different from the EM-based algorithms that LAMM was designed to target. This experiment also suggests that the missingness mask must be selected on a per-observation basis, as the MCAR missingness distribution never achieves the adversarial goal.

**Gaussian SCM, II.** This experiment is designed to gauge the effectiveness of the localized rejection attack in Lemma 1. We slightly modified the equation for practical purposes (as described in Appendix D.2). The true SCM is a 6-node graph with 5 edges, and the adversary aims to remove a single edge. The edge weights are selected randomly from \([0.5,2]\cup[-2,-0.5]\) as in [28], except for the edge between nodes 2 and 3. To ensure the acceptance probability is at a reasonable level, we selected a low weight for the target edge: \(\mathbf{B}_{2,3}=0.4\). The edge weights, \(\mathbf{B}\), are given in Appendix D.2. The weight of the target edge is set to 0.25 in \(\mathbf{B}^{\alpha}\), i.e. \(\mathbf{B}^{\alpha}_{2,3}=0.25\), to increase the acceptance probability and because the NT algorithm uses a weight threshold of 0.3, as suggested by the authors [28]. The adversary utilizes the local rejection sampling algorithm, and following Lemma 1, masks only \(\mathcal{V}=\{2,3\}\). In this experiment, the accepted samples are fixed to be fully observed. This results in \(70\%\) of the rows of the two target columns missing entirely. The results, given in Table 3, show that the adversary achieves its goal when the modeler uses the missDAG algorithm, but not when the modeler uses the PC algorithm. This is perhaps because the PC algorithm's significance threshold is lower than that of the NT algorithm. Although \(\mathbf{B}^{\alpha}_{2,3}\) could be lowered, this would increase the maximum of the density ratios and result in unacceptable levels of missing data.

**Sachs Dataset.** We used observational data from [21] to test our methods in a challenging setting. The data set contains only \(N=853\) samples from a system with 11 different variables, and the ground truth SCM has 17 edges. This data set has posed a challenge to causal discovery algorithms even in the fully observed case ([19],[29]). For this reason, our adversarial goal is to remove a correct edge from the DAG estimated from the fully observed data. The NT algorithm estimated a DAG with 12 HD to the ground truth DAG (Figure 2) and managed to capture the two connected components present in the true DAG. Our adversarial goal is to remove the correctly estimated edge from "plc" to "pip2". In our formulation, the "true graph" \(\mathcal{G}_{p}\) is the one estimated by the NT algorithm from the fully observed data.
Sachs Dataset. We used observational data from [21] to test our methods in a challenging setting. The data set contains only \(N=853\) samples from a system with 11 different variables, and the ground truth SCM has 17 edges. This data set has posed a challenge to causal discovery algorithms even in the fully observed case ([19], [29]). For this reason, our adversarial goal is to remove a correct edge from the DAG estimated from the fully observed data. The NT algorithm estimated a DAG with HD 12 to the ground truth DAG (Figure 2) and managed to capture the two connected components present in the true DAG. Our adversarial goal is to remove the correctly estimated edge from "plc" to "pip2". In our formulation the "true graph" \(\mathcal{G}_{p}\) is the one estimated by the NT algorithm from the fully observed data. \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline \hline \multirow{2}{*}{Modeler} & \multirow{2}{*}{Initial.} & \multicolumn{1}{c|}{**MNAR (LAMM)**} & \multicolumn{1}{c|}{**MCAR**} & \multicolumn{1}{c|}{**MNAR (LAMM)**} & \multicolumn{1}{c|}{**MCAR**} \\ \cline{3-6} & & HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) & HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) & Success & Success \\ \hline \multirow{5}{*}{MissDAG} & Emp. Diag. & 1 & **0** & **1** & **0** \\ & IW(*) & 1 & **0** & **1** & **0** \\ & Ident. & 1 & **0** & **1** & **0** \\ & Random(*) & 1 & **0** & **1** & **0** \\ & TRUE & 1 & **0** & **1** & **0** \\ \hline MissPC & - & 1 & **0.02** & **1** & **0** \\ \hline Mean + NT & - & **1** & **1** & **1** & **0** \\ \hline Mean + PC & - & 1 & **0.22** & **1** & **0** \\ \hline \hline \end{tabular} \end{table} Table 2: Results for Gaussian SCM, I. Average performances are reported using 50 different mask samples. LAMM always removes the target edge while MCAR never does. LAMM does not introduce any extraneous edges, as HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) is always one. \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline \hline \multirow{2}{*}{Modeler} & \multirow{2}{*}{Initial.} & \multicolumn{1}{c|}{**MNAR (RS)**} & \multicolumn{1}{c|}{**MCAR**} & \multicolumn{1}{c|}{**MNAR (RS)**} & \multicolumn{1}{c|}{**MCAR**} \\ \cline{3-6} & & HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) & HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) & Success & Success \\ \hline \multirow{5}{*}{MissDAG} & Emp. Diag. & 2 & 2 & 1 & **1** \\ & IW(*) & 2 & **1.45** & 1 & 0.4 \\ & Ident. & 2 & **1.85** & 1 & 0.85 \\ & Random(*) & 2 & **1.6** & 1 & 0.4 \\ & TRUE & 2 & **1** & 1 & 0 \\ \hline MissPC & - & **0.05** & **0.05** & **0.05** & 0 \\ \hline Mean + NT & - & **5** & **5** & **0.6** & 0 \\ \hline Mean + PC & - & 6.7 & **5.5** & **0** & **0** \\ \hline \end{tabular} \end{table} Table 3: Results of Gaussian SCM, II. Average performances are reported using 20 different mask samples. RS-based missingness attacks are more successful than their MCAR counterparts, but due to the high amount of missing data, even MCAR missingness can lead to a \(100\%\) success rate for certain missDAG initializations. Since this is a real dataset, the true SCM parameters \(\mathbf{\theta}_{p}\) are unknown, so we used the empirical covariance matrix while selecting the adversarial parameter in a heuristic way (see Footnote 3). Removing the edge from "plc" to "pip2" makes "plc" an isolated node, so we set the covariance terms between "plc" and "pip2" and between "plc" and "pip3" to zero. This approach is heuristic, but we observed that the covariance matrix corresponding to the NT-estimated \(\mathbf{B}\) did not match the empirical covariance matrix accurately. This suggests that the Gaussian SCM model might be inaccurate and that using \(\mathbf{B}\) directly may lead to unwanted changes in the distribution. Footnote 3: The mean vector is set to zero after subtracting the average values from each column, following the assumptions in NT. The adversary uses LAMM with \(\lambda=0\) and, following Lemma 1, masks only \(\mathcal{V}=\{\)"plc", "pip2", "pip3"\(\}\). The learned missingness distribution masks \(51.0\%\) of the entries of the three masked variables, which corresponds to a missingness rate of \(13.9\%\) over all the variables (see the loss function in Appendix Figure 3). The results are displayed in Table 4. For missDAG and NT after mean imputation, LAMM has a higher success rate with relatively fewer unintentional edges added. LAMM has its lowest success rate (\(70\%\)) against missDAG with the random initializations. We also observed that for missPC, even MCAR missingness has a \(100\%\) success rate. This suggests that the PC algorithm does not converge to the graph that NT estimates from the fully observed data. \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline \multirow{2}{*}{Modeler} & \multirow{2}{*}{Initial.} & \multicolumn{1}{c|}{**MNAR (LAMM)**} & \multicolumn{1}{c|}{**MCAR**} & \multicolumn{1}{c|}{**MNAR (LAMM)**} & \multicolumn{1}{c|}{**MCAR**} \\ \cline{3-6} & & HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) & HD(\(\hat{\mathcal{G}},\mathcal{G}_{p}\)) & Success & Success \\ \hline \multirow{5}{*}{MissDAG} & Emp. Diag. & **2.95** & 4.85 & **0.95** & 0.4 \\ & IW(*) & **4.25** & 6.75 & **0.7** & 0.5 \\ & Ident. & **3.3** & 4.85 & **0.95** & 0.35 \\ & Random(*) & **4.08** & 5.8 & **0.7** & 0.15 \\ & TRUE & **2.95** & 4.55 & **0.95** & 0.15 \\ \hline MissPC & - & 10.1 & **9.45** & **1** & **1** \\ \hline Mean + NT & - & **2.4** & 3.4 & **1** & 0.45 \\ \hline Mean + PC & - & **9.08** & 9.6 & **1** & 0.65 \\ \hline \end{tabular} \end{table} Table 4: Results for the Sachs dataset. Average results are reported using 20 different mask samples. LAMM is consistently more successful than MCAR and has a lower distance to the true graph. Random initializations cause LAMM to reach its highest distance from the reference graph and its lowest success rate. Figure 2: NT-estimated graph in the Sachs dataset and its two connected components. The dashed orange edge denotes the adversarial target. Conclusion. This work introduced the adversarial missingness model for influencing the learning of structural causal models from data. This adversarial model is appropriate in settings where attempts by the adversary to manipulate the values of the data can be detected, and the ubiquity of benignly missing data supports the use of adversarial missingness as a vector of attack. Generalized rejection sampling schemes were introduced and proven to achieve many of the desiderata of adversarial missingness, thereby establishing a strong proof of concept of the threat model. As a practical methodology for AM with finite training data sets, we provided a heuristic for learning adversarial missingness mechanisms, and demonstrated its performance using data drawn from synthetic and real SCMs. Many aspects of the AM threat model remain to be explored, e.g.: (1) can one design algorithms that provably achieve the desiderata of AM in the finite data setting, (2) can one quantify the trade-offs between the desiderata of the adversary (e.g., the missingness rate and the attack success, or the missingness rate and the \(\beta\)-indistinguishability), and (3) how can modelers defend against AM attacks? We expect that meaningful answers to these questions depend on the functional form of the SCMs under consideration, and we are currently investigating these issues in the context of linear Gaussian SCMs.
Estimating causal structure from observational data is a key component of causal machine learning. In practice, this data may have missing values. Previous work has used malicious perturbations of fully observed training data to cause inaccurate structural causal models (SCMs) to be learned. However, when the data can be verified to be correct (for example, because its source is cryptographically secured), this attack vector is no longer available. In this work we introduce a new attack mechanism in which the adversary deceptively causes part of the true training data to be missing in order to bias the learned causal structure in its favor. For arbitrary SCMs, theoretically grounded attack mechanisms are obtained, and for Gaussian SCMs a heuristic based on sample-efficient learning is given. Validation experiments of these approaches on real and synthetic datasets…
2309.06437
Non-constant ground configurations in the disordered ferromagnet
The disordered ferromagnet is a disordered version of the ferromagnetic Ising model in which the coupling constants are non-negative quenched random. A ground configuration is an infinite-volume configuration whose energy cannot be reduced by finite modifications. It is a long-standing challenge to ascertain whether the disordered ferromagnet on the $\mathbb{Z}^D$ lattice admits non-constant ground configurations. We answer this affirmatively in dimensions $D\ge 4$, when the coupling constants are sampled independently from a sufficiently concentrated distribution. The obtained ground configurations are further shown to be translation-covariant with respect to $\mathbb{Z}^{D-1}$ translations of the disorder. Our result is proved by showing that the finite-volume interface formed by Dobrushin boundary conditions is localized, and converges to an infinite-volume interface. This may be expressed in purely combinatorial terms, as a result on the fluctuations of certain minimal cutsets in the lattice $\mathbb{Z}^D$ endowed with independent edge capacities.
Michal Bassan, Shoni Gilboa, Ron Peled
2023-09-12T17:56:08
http://arxiv.org/abs/2309.06437v2
# Non-constant ground configurations in the disordered ferromagnet ###### Abstract. The disordered ferromagnet is a disordered version of the ferromagnetic Ising model in which the coupling constants are non-negative quenched random. A ground configuration is an infinite-volume configuration whose energy cannot be reduced by finite modifications. It is a long-standing challenge to ascertain whether the disordered ferromagnet on the \(\mathbb{Z}^{D}\) lattice admits non-constant ground configurations. We answer this affirmatively in dimensions \(D\geq 4\), when the coupling constants are sampled independently from a sufficiently concentrated distribution. The obtained ground configurations are further shown to be translation-covariant with respect to \(\mathbb{Z}^{D-1}\) translations of the disorder. Our result is proved by showing that the finite-volume interface formed by Dobrushin boundary conditions is localized, and converges to an infinite-volume interface. This may be expressed in purely combinatorial terms, as a result on the fluctuations of certain minimal cutsets in the lattice \(\mathbb{Z}^{D}\) endowed with independent edge capacities. ## 1. Introduction ### Disordered ferromagnet The Ising model is among the most basic models of statistical physics. On the hypercubic lattice \(\mathbb{Z}^{D}\), it is described by the formal Hamiltonian \[H^{\text{Ising}}(\sigma):=-\sum_{\{x,y\}\in E(\mathbb{Z}^{D})}\sigma_{x}\sigma _{y} \tag{1.1}\] on spin _configurations_\(\sigma\colon\mathbb{Z}^{D}\to\{-1,1\}\), where we write \(E(\mathbb{Z}^{D})\) for the edge set of \(\mathbb{Z}^{D}\). In this paper we study _the disordered ferromagnet_ (or _ferromagnetic random-bond Ising model_), a version of the Ising model described by the formal Hamiltonian \[H^{\eta}(\sigma):=-\sum_{\{x,y\}\in E(\mathbb{Z}^{D})}\eta_{\{x,y\}}\sigma_{x }\sigma_{y}, \tag{1.2}\] in which the coupling field \(\eta=(\eta_{e})_{e\in E(\mathbb{Z}^{D})}\) is a _non-negative_ quenched random field. We refer to \(\eta\) as the (quenched) _disorder_ and restrict throughout to the case that it is an _independent_ field. We first consider the _isotropic_ case, in which each \(\eta_{e}\) is sampled independently from the same probability distribution \(\nu\), termed _the disorder distribution_. ### Ground configurations Our primary interest is in the set of _ground configrations_, or zero-temperature configurations, of the disordered ferromagnet1. Precisely, a configuration \(\sigma\) is said to be a ground configuration for the coupling field \(\eta\) if it holds that \(H^{\eta}(\sigma)\leq H^{\eta}(\sigma^{\prime})\) for every configuration \(\sigma^{\prime}\) which differs from \(\sigma\) in _finitely_ many places. To make sense of the definition note that although \(H^{\eta}(\sigma)\) is ill-defined, as a non-convergent infinite sum, the difference \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is well defined whenever \(\sigma\) and \(\sigma^{\prime}\) differ in finitely many places. As the coupling constants are non-negative, it is clear that the constant configurations, \(\sigma\equiv+1\) and \(\sigma\equiv-1\), are ground configurations of the disordered ferromagnet. We study the following basic challenge: **Question 1.1**.: _Does the disordered ferromagnet admit non-constant ground configurations?_ Ergodicity implies that existence of non-constant ground configurations is an event of probability \(0\) or \(1\) for each dimension and disorder distribution \(\nu\). 
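Since \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is a finite sum whenever \(\sigma\) and \(\sigma^{\prime}\) differ in finitely many places, the ground-configuration condition can be checked flip set by flip set. The sketch below computes this difference for a flip of \(\sigma\) on a finite set; the dictionary-based representation of \(\eta\) and \(\sigma\) is purely illustrative and assumes values are supplied for the flipped sites, their neighbours, and the incident edges.

```python
def neighbours(x):
    """Nearest neighbours of a lattice point x in Z^D (given as a tuple)."""
    for i in range(len(x)):
        for s in (-1, 1):
            y = list(x)
            y[i] += s
            yield tuple(y)

def energy_difference(eta, sigma, flip_set):
    """H^eta(sigma') - H^eta(sigma), where sigma' flips sigma on the finite set
    `flip_set`; eta[frozenset({x, y})] is the coupling of edge {x, y} and
    sigma[x] is +/-1.  Only edges incident to `flip_set` contribute."""
    flip = set(flip_set)
    seen, diff = set(), 0.0
    for x in flip:
        for y in neighbours(x):
            e = frozenset((x, y))
            if e in seen:
                continue
            seen.add(e)
            sx_new = -sigma[x]
            sy_new = -sigma[y] if y in flip else sigma[y]
            diff += -eta[e] * (sx_new * sy_new - sigma[x] * sigma[y])
    return diff
```

A configuration \(\sigma\) is a ground configuration exactly when this difference is non-negative for every finite flip set; for a constant configuration the difference equals \(2\sum_{e\in\partial A}\eta_{e}\geq 0\), which recovers the observation that the constant configurations are always ground configurations.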
The case of one dimension (\(D=1\)) is simple: non-constant ground configurations exist if and only if \(\nu\) has an atom at the bottom of its support. However, the question has remained open in all higher dimensions. Much attention has been given to the two-dimensional case (\(D=2\)), where it is shown that existence of non-constant ground configurations is equivalent to the existence of infinite bigeodesics in a dual first-passage percolation model (see, e.g., [20, Page 8]). Such bigeodesics are believed not to exist under mild assumptions on the disorder distribution, whence the answer to Question 1.1 is expected to be negative in \(D=2\). However, so far this is only proved under assumptions on the model which are still unverified [1], or for other, related, models possessing a special integrable structure [1, 19, 20, 18]. The higher-dimensional case, and the closely-related question of localization of domain walls of the disordered ferromagnet (see Question 1.6 below), were studied in the physics and mathematics literature by several authors, including Huse-Henley [14], Bovier-Frohlich-Glaus [11], Fisher [15], Bovier-Picco [16], Bovier-Kulske [1, 17], Wehr [20] and Wehr-Wasielak [20]. These studies predict, from non-rigorous methods [14, 15, 16] and from rigorous studies of simplified interface models [16, 17] (see also Section 1.1), an affirmative answer to Question 1.1 in dimensions \(D\geq 4\) for sufficiently concentrated disorder distributions \(\nu\) (see Section 9 for a discussion of other settings). In this work we confirm this prediction. **Existence result.** To state our results, we need to give a precise meaning to the idea that the disorder distribution \(\nu\) is sufficiently concentrated. To this end, we define a notion of "width" of the distribution, applicable to either of the following two classes of disorder distributions: 1. Compact support: If the distribution \(\nu\) has compact support, we set \[\operatorname{diam}(\nu):=\min\{b-a:\,b\geq a\text{ and }\nu([a,b])=1\}\] (1.3) and otherwise we set \(\operatorname{diam}(\nu):=\infty\). 2. Lipschitz image of a Gaussian: If there exists a Lipschitz continuous \(f:\mathbb{R}\to[0,\infty)\) such that \(\nu=f(N(0,1))\) (i.e., \(\nu\) is the push-forward through \(f\) of the standard normal distribution) then set \[\operatorname{Lip}(\nu):=\inf\{\operatorname{Lip}(f):\,f:\mathbb{R}\to[0, \infty)\text{ is Lipschitz and }\nu=f(N(0,1))\},\] (1.4) with \(\operatorname{Lip}(f):=\sup_{t\neq s}\frac{|f(t)-f(s)|}{|t-s|}\) being the Lipschitz constant of \(f\). Otherwise, set \(\operatorname{Lip}(\nu):=\infty\). We then define the "width" of the disorder distribution \(\nu\) by \[\operatorname{wid}(\nu):=\min\{\operatorname{diam}(\nu),\operatorname{Lip}( \nu)\}. \tag{1.5}\] We further restrict attention to disorder distributions whose support is bounded away from zero and to this end we denote by \[\min(\operatorname{supp}(\nu)):=\max\{\alpha\colon\nu([\alpha,\infty))=1\} \tag{1.6}\] the smallest point of the support. Lastly, to avoid issues with uniqueness of ground configurations in _finite volume_, we assume that \(\nu\) has no atoms. The following is our main result. **Theorem 1.2**.: _There exists \(c>0\) such that the following holds in dimensions \(D\geq 4\). Consider the (isotropic) disordered ferromagnet with disorder distribution \(\nu\). 
If \(\min(\operatorname{supp}(\nu))>0\), \(\nu\) has no atoms and_ \[\frac{\operatorname{wid}(\nu)}{\min(\operatorname{supp}(\nu))}\leq c\frac{ \sqrt{D}}{\log D} \tag{1.7}\] _then the disordered ferromagnet admits non-constant ground configurations._ We make two remarks regarding the theorem: 1. Condition (1.7) is our notion of \(\nu\) being sufficiently concentrated. It is invariant to dilations of \(\nu\), as it should be since multiplying all coupling constants \((\eta_{e})\) in the Hamiltonian (1.2) by a positive constant does not change the set of ground configurations of the disordered ferromagnet (or its Gibbs states). The condition becomes easier to satisfy as the dimension \(D\) increases. In particular, the theorem shows that for any non-atomic disorder distribution \(\nu\) satisfying \(\min(\operatorname{supp}(\nu))>0\) and \(\operatorname{wid}(\nu)<\infty\) there exists \(D_{0}(\nu)\geq 4\) such that the disordered ferromagnet with disorder distribution \(\nu\) admits non-constant ground configurations in all dimensions \(D\geq D_{0}(\nu)\). Condition (1.7) also becomes easier to satisfy if a positive constant is added to \(\nu\). More precisely, we see that for any non-atomic distribution \(\mu\) supported in \([0,\infty)\) and satisfying \(\operatorname{wid}(\mu)<\infty\) and any \(D_{0}\geq 4\) there exists \(C_{0}(\mu,D_{0})\geq 0\) such that the following holds: for all \(C\geq C_{0}(\mu,D_{0})\), the disordered ferromagnet with disorder distribution \(\nu=\mu+C\) admits non-constant ground configurations in all dimensions \(D\geq D_{0}\) (where \(\mu+C\) is the push-forward of \(\mu\) by the translation \(x\mapsto x+C\)). As an example, there exists \(C>0\) such that the disordered ferromagnet whose disorder distribution is uniform on the interval \([C,C+1]\) admits non-constant ground configurations in all dimensions \(D\geq 4\). 2. For \(\operatorname{wid}(\nu)\) to be finite, the disorder distribution needs to be either of compact support or a Lipschitz image of a Gaussian. The latter possibility allows some distributions of unbounded support such as the positive part of a Gaussian (plus a constant, so that \(\min(\operatorname{supp}(\nu))>0\)), and can also lead to a smaller value of \(\operatorname{wid}(\nu)\) for some distributions of compact support. **Covariant ground configurations.** Once the _existence_ of non-constant ground configurations has been established, it is natural to turn to more refined properties. In the non-disordered setup, a key role is played by the notion of _invariant_ Gibbs states (e.g., translation-invariant Gibbs states). The corresponding notion in the disordered setup is that of _covariant_ states (going back at least to the pioneering work of Aizenman-Wehr [1], who further introduced covariant _metastates_). We apply this notion to ground configurations as follows: Let \(G\) be a group of automorphisms of the lattice \(\mathbb{Z}^{D}\) (each \(g\in G\) is composed of translations, rotations and reflections). We naturally define \[g(\eta)_{\{x,y\}}:=\eta_{g\{x,y\}}=\eta_{\{gx,gy\}}\quad\text{and}\quad g( \sigma)_{x}:=\sigma_{gx} \tag{1.8}\] for coupling fields \(\eta\) and configurations \(\sigma\). Let \(\mathcal{C}\subset[0,\infty)^{E(\mathbb{Z}^{D})}\) be a measurable set of coupling fields which is \(G\)-invariant in the sense that \(g(\eta)\in\mathcal{C}\) for each automorphism \(g\in G\) and coupling field \(\eta\in\mathcal{C}\). 
A \(G\)_-covariant ground configuration_ defined on \(\mathcal{C}\) is a _measurable_ function \(T:\mathcal{C}\to\{-1,1\}^{\mathbb{Z}^{D}}\) which satisfies the following properties for all \(\eta\in\mathcal{C}\): 1. \(T(\eta)\) is a ground configuration for the disordered ferromagnet with coupling field \(\eta\). 2. \(T(g(\eta))=g(T(\eta))\) for each automorphism \(g\in G\). If, moreover, \(T(\eta)\) is non-constant for all \(\eta\in\mathcal{C}\) we say that \(T\) is a _non-constant_\(G\)-covariant ground configuration defined on \(\mathcal{C}\). When a disorder distribution \(\nu\) has been specified (in the isotropic setup), we may refer to a \(G\)-covariant ground configuration without reference to its domain \(\mathcal{C}\). It is then understood that the \((\eta_{e})\) are sampled independently from \(\nu\) and that the \(G\)-covariant ground configuration is defined on some \(G\)-invariant \(\mathcal{C}\) satisfying that \(\mathbb{P}(\eta\in\mathcal{C})=1\). The analogous comment applies to the anistropic setup discussed below, when the two disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) have been specified. Wehr-Wasielak [20] prove, in all dimensions \(D\geq 1\), that when the disorder distribution \(\nu\) is non-atomic and has finite mean (or, more generally, sufficiently light tail) then there are no non-constant \(\mathbb{Z}^{D}\)-translation-covariant ground configurations (\(\mathbb{Z}^{D}\)-translation-covariant means that the group \(G\) is the translation group of \(\mathbb{Z}^{D}\)). More generally, they prove that \(\mathbb{Z}^{D}\)-translation-covariant ground metastates (i.e., \(G\)-covariant _probability distributions_ on ground configurations) must be supported on the constant configurations. Our next result shows that non-constant \(G\)-covariant ground configurations may exist already when \(G\) is one rank lower than the full translation group of \(\mathbb{Z}^{D}\). **Theorem 1.3**.: _Let \(G^{D-1}\) be the group of automorphisms of \(\mathbb{Z}^{D}\) which preserve the last coordinate, i.e.,_ \[G^{D-1}:=\{g\text{ automorphism of }\mathbb{Z}^{D}\colon(gx)_{D}=x_{D}\text{ for all }x=(x_{1},\ldots,x_{D})\}. \tag{1.9}\] _Under the assumptions of Theorem 1.2 (for a sufficiently small \(c>0\)), there exists a non-constant \(G^{D-1}\)-covariant ground configuration._ ### The anisotropic disordered ferromagnet Our proof naturally extends to an _anisotropic_ setup, in which there is a distinguished lattice axis and the coupling constants of edges in the direction of that axis are sampled from a different disorder distribution. We next describe this setup (distinguishing the \(D\)th axis). It is sometimes convenient to identify edges of \(\mathbb{Z}^{D}\) with their dual plaquettes, i.e., to identify \(\{x,y\}\in E(\mathbb{Z}^{D})\) with the \((D-1)\)-dimensional plaquette separating the unit cubes centered at \(x\) and \(y\). With this identification in mind we partition the plaquettes into those which are parallel to the hyperplane spanned by the first \(D-1\) coordinate axes and those which are perpendicular to it. Thus we define \[E^{\parallel}(\mathbb{Z}^{D}) :=\{\{x,y\}\in E(\mathbb{Z}^{D})\colon x-y\in\{-e_{D},e_{D}\}\}, \tag{1.10}\] \[E^{\perp}(\mathbb{Z}^{D}) :=\{\{x,y\}\in E(\mathbb{Z}^{D})\colon x-y\notin\{-e_{D},e_{D}\}\},\] where \(e_{1},e_{2},\ldots,e_{D}\) are the standard unit vectors in \(\mathbb{Z}^{D}\). 
By the _anisotropic disordered ferromagnet_ we mean that the disorder \(\eta\) is sampled independently from two disorder distributions, \(\nu^{\parallel}\) and \(\nu^{\perp}\). Precisely, \((\eta_{e})_{e\in E(\mathbb{Z}^{D})}\) are independent, with \(\eta_{e}\) sampled from \(\nu^{\parallel}\) when \(e\in E^{\parallel}(\mathbb{Z}^{D})\), and sampled from \(\nu^{\perp}\) when \(e\in E^{\perp}(\mathbb{Z}^{D})\). The isotropic setup is recovered when \(\nu^{\parallel}=\nu^{\perp}\). Our standard assumptions on the disorder distributions are \[\begin{split}&\min(\operatorname{supp}(\nu^{\parallel}))>0,\quad \operatorname{wid}(\nu^{\parallel})<\infty\quad\text{and $\nu^{\parallel}$ has no atoms},\\ &\min(\operatorname{supp}(\nu^{\perp}))>0,\quad\operatorname{wid} (\nu^{\perp})<\infty.\end{split} \tag{1.11}\] We do not assume that \(\nu^{\perp}\) has no atoms (and, in fact, the case that \(\nu^{\perp}\) is supported on a single point is of interest as it leads to the disordered Solid-On-Solid model; see Remark 1.12). In the anisotropic setup, condition (1.7) of Theorem 1.2 is replaced by condition (1.14) below, which is based on the following quantity: \[\kappa(\nu^{\parallel},\nu^{\perp},d):=\left(\frac{1}{\underline{\alpha}^{ \parallel}\underline{\alpha}^{\perp}}+\frac{1}{d(\underline{\alpha}^{\perp}) ^{2}}\right)\operatorname{wid}(\nu^{\parallel})^{2}+\frac{1}{(\underline{ \alpha}^{\perp})^{2}}\operatorname{wid}(\nu^{\perp})^{2} \tag{1.12}\] where, for brevity, we denote \[\underline{\alpha}^{\parallel}:=\min(\operatorname{supp}(\nu^{\parallel})) \quad\text{and}\quad\underline{\alpha}^{\perp}:=\min(\operatorname{supp}(\nu^ {\perp})). \tag{1.13}\] **Theorem 1.4**.: _There exists \(c_{0}>0\) such that the following holds in dimensions \(D\geq 4\). In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11) and_ \[\kappa(\nu^{\parallel},\nu^{\perp},D-1)\left(1+\frac{\underline{\alpha}^{ \perp}}{\underline{\alpha}^{\parallel}}\right)\leq c_{0}\frac{D}{(\log D)^{2}}. \tag{1.14}\] _Then the anisotropic disordered ferromagnet admits non-constant ground configurations. Moreover, there exists a non-constant \(G^{D-1}\)-covariant ground configuration, where \(G^{D-1}\) is given by (1.9)._ Theorem 1.2 and Theorem 1.3 arise as the special case of Theorem 1.4 in which \(\nu^{\parallel}=\nu^{\perp}\). We thus focus in the sequel on the anisotropic setup. Similarly to condition (1.7), we make the following remark regarding condition (1.14). For any pair of disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfying \(\min(\operatorname{supp}(\nu^{\parallel}))>0\), \(\min(\operatorname{supp}(\nu^{\perp}))>0\), \(\operatorname{wid}(\nu^{\parallel})<\infty\) and \(\operatorname{wid}(\nu^{\perp})<\infty\), condition (1.14) will be satisfied in either sufficiently high dimensions \(D\), or in any fixed dimension \(D\geq 4\) provided \(\nu^{\perp}\) and \(\nu^{\parallel}\) are replaced by \(\nu^{\perp}+C\) and \(\nu^{\parallel}+C\), respectively, for a sufficiently large \(C\). **Dobrushin boundary conditions.** Our proof of Theorem 1.4 proceeds through an explicit construction of a non-constant ground configuration. Specifically, we will show that the infinite-volume limit of the model with _Dobrushin boundary conditions_ leads to such a ground configuration. In this section we explain the relevant result, following required notation. We first generalize the notion of ground configuration to allow boundary conditions. 
Let \(\Delta\subset\mathbb{Z}^{D}\) and \(\rho\colon\mathbb{Z}^{D}\to\{-1,1\}\). The configuration space in the domain \(\Delta\) with boundary conditions \(\rho\) is \[\Omega^{\Delta,\rho}:=\{\sigma\colon\mathbb{Z}^{D}\to\{-1,1\}\colon\sigma_{x} =\rho_{x}\text{ for }x\notin\Delta\}. \tag{1.15}\] A _ground configuration in \(\Omega^{\Delta,\rho}\)_ (for the coupling field \(\eta\)) is any \(\sigma\in\Omega^{\Delta,\rho}\) with the property that \(H^{\eta}(\sigma)\leq H^{\eta}(\sigma^{\prime})\) for every configuration \(\sigma^{\prime}\in\Omega^{\Delta,\rho}\) which differs from \(\sigma\) in finitely many places. (noting, again, that the difference \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is then well defined). It is possible for multiple ground configurations to exist even when \(\Delta\) is finite, though this will not be the case when \(\eta\) is generic in a suitable sense (see Section 4.1.1). We proceed to discuss ground configurations with Dobrushin boundary conditions. Assume henceforth that the dimension \(D\geq 2\) and introduce the convenient notation \[d:=D-1. \tag{1.16}\] We often tacitly use the convention to denote vertices of \(\mathbb{Z}^{D}\) by a pair \((v,k)\) with \(v\in\mathbb{Z}^{d}\) and \(k\in\mathbb{Z}\). By Dobrushin boundary conditions we mean \(\rho^{\mathrm{Dob}}\colon\mathbb{Z}^{D}\to\{-1,1\}\) given by \[\rho^{\mathrm{Dob}}_{(v,k)}:=\mathrm{sign}(k-1/2) \tag{1.17}\] where sign denotes the sign function. The following simple lemma shows that there is a unique ground configuration with Dobrushin boundary conditions on infinite cylinders of the form \(\Lambda\times\mathbb{Z}\) for \(\Lambda\subset\mathbb{Z}^{d}\) finite, under suitable conditions on the disorder distributions. **Lemma 1.5**.: _In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11). For each finite \(\Lambda\subset\mathbb{Z}^{d}\) there exists almost surely a unique ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\), that we denote \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\). Moreover, \(\sigma_{x}^{\eta,\Lambda,\mathrm{Dob}}=\rho^{\mathrm{Dob}}_{x}\) for all but finitely many \(x\in\mathbb{Z}^{D}\)._ The ground configuration \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) necessarily contains an interface separating the \(+1\) boundary values imposed at the "top" of the cylinder \(\Lambda\times\mathbb{Z}\) from the \(-1\) boundary values imposed at the "bottom" of the cylinder. To derive the existence of non-constant ground configurations in the whole of \(\mathbb{Z}^{D}\) from these semi-infinite-volume ground configurations, the fundamental issue to be understood is whether this interface remains localized in the sense of the following question. **Question 1.6**.: _Is the interface under Dobrushin boundary conditions localized, uniformly in the finite volume \(\Lambda\)? More precisely, is it the case that for each \(\varepsilon>0\) there exists a finite \(\Delta\subset\mathbb{Z}^{D}\) such that_ \[\mathbb{P}(\sigma^{\eta,\Lambda,\mathrm{Dob}}\text{ is constant on }\Delta)< \varepsilon\quad\text{for all finite }\Lambda\subset\mathbb{Z}^{d}. \tag{1.18}\] For the set \(\Delta\) in the question, one may have in mind, e.g., the set \(\{(0,\dots,0,j)\colon-k\leq j\leq k\}\) with \(k\) large. A positive answer to Question 1.6 implies a positive answer to Question 1.1. 
Indeed, by compactness, the distribution of the pair \((\eta,\sigma^{\eta,\Lambda,\mathrm{Dob}})\) admits sub-sequential limits along any sequence of finite volumes \((\Lambda_{n})\) increasing to \(\mathbb{Z}^{d}\). Any such limiting distribution is supported on pairs \((\eta^{\prime},\sigma)\) with \(\eta^{\prime}\) having the distribution of \(\eta\) and \(\sigma\) an infinite-volume ground configuration for \(\eta^{\prime}\). A positive answer to the question ensures that \(\sigma\) is almost surely non-constant. The answer to Question 1.6 is known to be negative for \(D=2\) in the isotropic setup under mild assumptions on the disorder distribution \(\nu\); this is essentially the Benjamini-Kalai-Schramm midpoint problem [1], resolved conditionally by Damron-Hanson [10], unconditionally by Ahlberg-Hoffman [1] and quantitatively by Dembin, Elboim and the third author [1]. The following theorem, our main result on Dobrushin interfaces, proves that the answer is positive in dimensions \(D\geq 4\) for sufficiently concentrated disorder distributions. Further discussion on Question 1.6, including the possibility of a roughening transition in the disorder concentration, is in Section 9. **Theorem 1.7** (Localization of Dobrushin interface).: _There exist \(c_{0},c>0\) such that the following holds in dimensions \(D\geq 4\). In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11) and condition (1.14) holds (with the constant \(c_{0}\)). Then for all finite \(\Lambda\subset\mathbb{Z}^{d}\) and all \((v,k)\in\mathbb{Z}^{D}\),_ \[\mathbb{P}\left(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{ Dob}}_{(v,k)}\right)\leq\exp\left(-\frac{c}{d^{2}\kappa}|k|^{\frac{d-2}{d-1}}\right) \tag{1.19}\] _with \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) defined in (1.12). Moreover, for small \(k\) we have an improved dependence on dimension: if \(|k|<2^{d}\) then_ \[\mathbb{P}\left(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{Dob }}_{(v,k)}\right)\leq\exp\left(-\frac{c}{\kappa}|k|\right). \tag{1.20}\] We add that the theorem resolves a version of [10, Open question 1] (see Section 1.1.5). We also remark that the power of \(|k|\) in (1.19) is probably not optimal and may be increased with further optimization of the proof. **Remark 1.8** (Combinatorial interpretation).: _Endow the edges of \(\mathbb{Z}^{D}\) with independent, non-negative weights from a distribution \(\nu\). Let \(\Lambda\subset\mathbb{Z}^{D-1}\) finite. We study the minimal (edge) cutset in \(\Lambda\times\mathbb{Z}\) separating the parts of the boundary of \(\Lambda\times\mathbb{Z}\) above and below the plane \(\Lambda\times\{0\}\). More precisely, writing_ \[\begin{split} B^{+}&:=\{(v,k)\in\Lambda^{c}\times \mathbb{Z}\colon k\geq 0\},\\ B^{-}&:=\{(v,k)\in\Lambda^{c}\times\mathbb{Z} \colon k<0\},\end{split} \tag{1.21}\] _we study the minimal cutset separating \(B^{+}\) and \(B^{-}\) (the cutset may only differ from the flat plane above \(\Lambda\)). Our result is proved in dimensions \(D\geq 4\) when \(\nu\) satisfies (1.7) with a small \(c>0\). It shows that the minimal cutset is localized close to the plane \(\Lambda\times\{0\}\), in the sense that for any \(v\in\Lambda\) the probability that the cutset contains an edge incident to \((v,k)\) decays as a stretched exponential in \(|k|\). 
This holds uniformly in the finite set \(\Lambda\)._ _More generally, the edge weights may be sampled independently from distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) as explained after (1.10), and the result is proved under (1.11) and under condition (1.14) with a small \(c_{0}>0\)._ As explained above, Theorem 1.7 already suffices to deduce the existence of non-constant ground configurations. To go further and prove the existence of a non-constant _covariant_ ground configuration we employ the next result, which proves the almost sure convergence of the semi-infinite-volume ground configurations with Dobrushin boundary conditions to an infinite-volume limit. **Theorem 1.9** (Convergence).: _Under the assumptions of Theorem 1.7 (for a sufficiently small \(c_{0}>0\)) there exists a configuration \(\sigma^{\eta,\mathrm{Dob}}\) such that for every fixed sequence \((\Lambda_{n})\) of finite subsets of \(\mathbb{Z}^{d}\) satisfying that \(\Lambda_{n}\supset\{-n,-n+1,\ldots,n\}^{d}\) for each \(n\), almost surely,_ _for each \(v\in\mathbb{Z}^{d}\) there exists \(n_{0}\) such that \(\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\equiv\sigma^ {\eta,\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) for all \(n\geq n_{0}.\)_ (1.22) The following is deduced from Theorem 1.7 and Theorem 1.9. **Corollary 1.10**.: _There exists \(c>0\) such that the following holds under the assumptions of Theorem 1.7 (for a sufficiently small \(c_{0}>0\)). The configuration \(\sigma^{\eta,\mathrm{Dob}}\) of Theorem 1.9 (possibly modified on a set of zero probability) is a non-constant \(G^{D-1}\)-covariant ground configuration, where \(G^{D-1}\) is given by (1.9). In addition, for all \((v,k)\in\mathbb{Z}^{D}\),_ \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{Dob}}_{(v,k)}\right)\leq\exp\left(-\frac{c}{d^{2}\kappa}|k|^{\frac{d-2}{d-1}}\right) \tag{1.23}\] _with \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) defined in (1.12). Moreover, for small \(k\) we have an improved dependence on dimension: if \(|k|<2^{d}\) then_ \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{Dob}}_{(v,k)}\right)\leq\exp\left(-\frac{c}{\kappa}|k|\right). \tag{1.24}\] Theorem 1.4 is an immediate consequence of the last corollary. Our techniques further allow to quantify the rate of convergence in Theorem 1.9 and to bound the rate of correlation decay in the infinite-volume configuration \(\sigma^{\eta,\mathrm{Dob}}\). We record these in our final result (this result will not be needed elsewhere in the paper). **Theorem 1.11**.: _There exist \(C,c>0\) such that the following holds under the assumptions of Theorem 1.7 (for a sufficiently small \(c_{0}>0\)). Let_ \[c(\nu^{\parallel},\nu^{\perp},d):=\frac{c}{\kappa d^{2}}\left(\min\left\{ \frac{\alpha^{\parallel}}{\alpha^{\perp}},1\right\}\right)^{\frac{d-2}{d-1}} \tag{1.25}\] _using the notation (1.12) (with \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\)) and (1.13). Let also \(\Lambda(k):=\{-k,\ldots,k\}^{d}\) for integer \(k\geq 0\)._ 1. Rate of convergence to infinite-volume limit_: Let_ \(L_{1}>L_{0}\geq 0\) _integer. Let_ \(\Lambda\subset\mathbb{Z}^{d}\) _be a finite subset containing_ \(\Lambda(L_{1})\)_. Then_ \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}} \not\equiv\sigma^{\eta,\Lambda,\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z} }\right)\leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(L_{1}-L_{0} \right)^{\frac{d-2}{d}}\right).\] (1.26) 2. 
Correlation decay in infinite-volume limit_: Let_ \(u,v\in\mathbb{Z}^{d}\) _and_ \(L\geq 0\) _integer, and suppose_ \(\|u-v\|_{\infty}>2L\)_. Let_ \(f,g:\{-1,1\}^{\Lambda(L)\times\mathbb{Z}}\to[-1,1]\) _be measurable. Then_ \[\mathrm{Cov}(f(\sigma^{\eta,\mathrm{Dob}}|_{(u+\Lambda(L))\times \mathbb{Z}}),g(\sigma^{\eta,\mathrm{Dob}}|_{(v+\Lambda(L))\times\mathbb{Z}}))\\ \leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(\|u-v\|_{ \infty}-2L\right)^{\frac{d-2}{d}}\right)\] (1.27) _where_ \(\mathrm{Cov}(X,Y)\) _denotes the covariance of the random variables_ \(X,Y\)_._ 3. Tail triviality in infinite-volume limit_: The process_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _is_ \(G^{d}\)_-invariant. Moreover, define the_ \(\mathbb{Z}^{d}\)_-tail sigma algebra of_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _as the intersection of the sigma algebras_ \((\mathcal{T}_{n})\)_, where_ \(\mathcal{T}_{n}\) _is generated by_ \(\sigma^{\eta,\mathrm{Dob}}|_{(\mathbb{Z}^{d}\setminus\Lambda(n))\times\mathbb{Z}}\) _and by_ \((\eta_{e})\) _for the edges_ \(e=\{(u,k),(v,\ell)\}\) _with_ \(\{u,v\}\cap(\mathbb{Z}^{d}\setminus\Lambda(n))\neq\emptyset\)_. Then the_ \(\mathbb{Z}^{d}\)_-tail sigma algebra of_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _is trivial. In particular,_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _is ergodic with respect to the group of translations in the first_ \(d\) _coordinates._ Theorem 1.7 is proved in Section 4.3, Theorem 1.9, Corollary 1.10 and Theorem 1.11 are proved in Section 8 and Lemma 1.5 is proved in Appendix B. The next section discusses related works. An overview of our proof is provided in Section 1.2. Section 9 provides further discussion and a selection of open problems and conjectures. ### Background #### 1.1.1. Localization predictions The domain walls of the disordered Ising ferromagnet were studied by Huse-Henley [10], Bovier-Frohlich-Glaus [1] and Fisher [13] using methods which are not mathematically rigorous. They predicted that the interface with Dobrushin boundary conditions is rough in dimensions \(2\leq D\leq 3\), is localized in dimensions \(D\geq 5\), and, for sufficiently concentrated disorder, is also localized in dimension \(D=4\). #### 1.1.2. Disordered Solid-On-Solid model The following simplified model for the interface under Dobrushin boundary conditions is considered by [11, 12]: The interface is described by a (height) function \(\varphi:\mathbb{Z}^{d}\to\mathbb{Z}\), whose energy is given by the formal "disordered Solid-On-Solid (SOS)" Hamiltonian \[H^{\text{SOS},\zeta}(\varphi):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}|\varphi_{u}- \varphi_{v}|+\sum_{v\in\mathbb{Z}^{d}}\zeta_{v,\varphi_{v}} \tag{1.28}\] where \(\zeta:\mathbb{Z}^{d}\times\mathbb{Z}\to\mathbb{R}\) is an environment describing the quenched disorder. This model is obtained from the disordered ferromagnet with two approximations: (i) It is assumed that the interface in \(D=d+1\) dimensions has _no overhangs_, i.e., it may be described by a height function above a \(d\)-dimensional base, (ii) all the coupling constants corresponding to perpendicular plaquettes (i.e., all the \(\eta_{\{u,v\}}\) for \(\{u,v\}\in E^{\perp}(\mathbb{Z}^{D})\)) are set equal (with the normalization of (1.28) they are set equal to \(1/2\) while \(\zeta_{v,k}:=2\eta_{\{v,k\},\{v,k+1\}}\)). 
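To make the simplified model concrete, the following sketch evaluates the disordered SOS energy (1.28) for a height function on a finite box with a fixed boundary height; the array-based box, the callable disorder `zeta`, and the zero default boundary are illustrative assumptions for the sketch, not the paper's setup.

```python
import numpy as np

def sos_energy(phi, zeta, boundary_height=0):
    """Disordered SOS energy of a height function `phi` (integer array over a
    finite box), with heights fixed to `boundary_height` outside the box.
    `zeta(v, k)` returns the disorder at base vertex v and height k."""
    energy = 0.0
    it = np.nditer(phi, flags=["multi_index"])
    for h in it:
        v = it.multi_index
        k = int(h)
        energy += zeta(v, k)                       # disorder term zeta_{v, phi_v}
        for i in range(phi.ndim):                  # gradient terms |phi_u - phi_v|
            w = list(v)
            w[i] += 1
            nb = phi[tuple(w)] if w[i] < phi.shape[i] else boundary_height
            energy += abs(k - int(nb))             # edge in the +e_i direction
            if v[i] == 0:                          # boundary edge in the -e_i direction
                energy += abs(k - boundary_height)
    return energy
```

A ground configuration such as \(\varphi^{\zeta,L}\) with zero boundary values minimizes this quantity over integer-valued height functions on the box \(\{-L,\dots,L\}^{d}\).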
**Remark 1.12**.: _In fact, as part of the analysis of this paper, we prove the (possibly surprising) fact that at zero temperature the no overhangs approximation (i) is actually a consequence of the equal perpendicular couplings approximation (ii) (see Lemma 3.1). Thus, our main results for the anisotropic disordered ferromagnet cover also the disordered SOS model (1.28), as the special case in which the disorder distribution \(\nu^{\perp}\) is a delta measure at \(1/2\) (however, in this special case our proof may be greatly simplified due to the no overhangs property)._ A mathematically-rigorous study of the disordered SOS model (1.28) was carried out by Bovier-Kulske [1, 1], following an earlier analysis by Bovier-Picco [1] of a hierarchical version of the model (see also [1, 1]). It was shown in [1] that in each dimension \(d\geq 3\), at low temperature (including zero temperature), when the \((\zeta_{v,\cdot})_{v\in\mathbb{Z}^{d}}\) are independent and identically distributed, the sequence \(k\mapsto\zeta_{v,k}\) is stationary for each \(v\) (this is more general than being independent!) and the \(\zeta_{v,k}\) are sufficiently concentrated, the finite-volume Gibbs measures of the Hamiltonian (1.28) converge, on a non-random sequence of volumes, to a limiting infinite-volume Gibbs measure, \(\zeta\)-almost-surely. Some control of the fluctuations of the infinite-volume Gibbs measure, at least at zero temperature, is also provided [1, Proposition 3.6]. These results for the disordered SOS model (1.28) thus have the flavor of our Theorem 1.7 and Theorem 1.9, though they, on the one hand, apply also at low positive temperature and allow for more general disorder distributions and, on the other hand, do not quantify the dependence on the dimension \(d\) (i.e., their sufficient concentration requirement may become more stringent as \(d\) increases). The work [1] further discusses alternative assumptions on \(\zeta\) relevant to the interface in the _random-field_ Ising model (see also Section 9.2). The behavior of the disordered SOS model (1.28) in the low dimensions \(d=1,2\) was studied in [1] (using a method of Aizenman-Wehr [1]), who proved a result showing a form of delocalization in these dimensions. Specifically, they prove that, at all finite non-zero temperatures, when the \((\zeta_{v,k})\) are independently sampled from a distribution with positive variance which either has no isolated atoms or has compact support, the model does not admit translation-covariant and coupling-covariant metastates. Here, a metastate is a measurable mapping from \(\zeta\) to probability distributions over (infinite-volume) Gibbs measures of the model, and the coupling covariance requirement is that, for each finite \(\Lambda\subset\mathbb{Z}^{d}\), the metastate changes in a natural way under modification of \((\zeta_{v,k})_{v\in\Lambda,k\in\mathbb{Z}}\). #### 1.1.3. Long-range order in the random-field Ising model The localization proof of [10] in dimensions \(d\geq 3\) is closely tied to earlier developments on the problem of long-range order in the random-field Ising model (see (1.29) below). Imry-Ma [17] predicted that at low temperatures and weak disorder in dimensions \(d\geq 3\), the random-field Ising model retains the ferromagnetic ordered phase of the pure Ising model (and that this does not occur when \(d=2\)).
The prediction for \(d=3\) was initially challenged in the physics literature (e.g., [11]), but received support in works of Chalker [12] and Fisher-Frohlich-Spencer [13] and was finally confirmed in the breakthrough works of Imbrie [14, 15] and Bricmont-Kupiainen [10, 11]. The proof of [10] adapts the proof technique of [10]. Recently, a short proof of the existence of an ordered phase in the random-field Ising model was found by Ding-Zhuang [11]. In this paper, we use an adaptation of the Ding-Zhuang argument as one of the ingredients in our proof of localization of the Dobrushin interface in the disordered Ising ferromagnet (see Section 1.2 below). #### 1.1.4. Law of large numbers and large deviations of the ground energy In dimensions \(D>2\), following initial work by Kesten [14] (and [1, 1] in the case of \(\{0,1\}\) coupling constants), there have been significant advances in the understanding of the law of large numbers and large deviations of the ground energy of the disordered ferromagnet (or the maximal flow in a dual network) in various settings. Closely related is the study of the limit shape in first-passage percolation, by Basdevant-Gouere-Theret [1] (for \(\{0,1\}\) passage times) and by Dembin-Elboim-Peled [1, Theorem 1.5]. #### 1.1.7. Number of ground configurations Wehr [20] proved that the number of ground configurations of the disordered ferromagnet is two or infinity. The result applies for coupling fields sampled independently from a non-atomic disorder distribution with finite mean. It thus follows from our main result, Theorem 1.2, that there are infinitely many ground configurations under the assumptions there (see also Section 9.3). #### 1.1.8. Translation-covariant ground metastates As previously mentioned, Wehr-Wasielak [20] proved that \(\mathbb{Z}^{D}\)-translation-covariant ground metastates must be supported on the constant configurations when the disorder distribution \(\nu\) is non-atomic and has finite mean (or, more generally, has sufficiently light tail). This result is applied in the discussion in Section 9.3. #### 1.1.9. The Dobrushin interface in other settings Our localization result, Theorem 1.7, extends the seminal work of Dobrushin [13] to the setting of the zero-temperature disordered ferromagnet. Dobrushin's result has previously been extended to various (non-disordered) settings, of which we mention the Widom-Rowlinson model [1, 2], lattice gauge theories [1] (see also Section 9.5), the Falicov-Kimball model [14], percolation and the random-cluster model [1, 15], and the study of fine properties of the Dobrushin interface of the Ising model [1]. Alternative approaches for showing the existence of non-translation-invariant Gibbs states include the correlation inequality approach of van Beijeren [20] and the restricted-reflection-positivity approach of Shlosman-Vignaud [21]. These alternative approaches do not seem to be applicable in our disordered setting. ### Overview of the proof In this section we overview the proof of the localization of the Dobrushin interface stated in Theorem 1.7. The basic idea is to synthesize Dobrushin's approach [13] for proving the localization of the Dobrushin interface in the _pure_ (i.e., non-disordered) Ising model with the simple method for proving long-range order in the random-field Ising model presented by Ding-Zhuang [13]. As it turns out, difficulties arise in this synthesis which necessitate the development of additional tools. #### 1.2.1. The random-field Ising model The random-field Ising model (RFIM) is the model on \(\sigma:\mathbb{Z}^{d}\to\{-1,1\}\) given by the formal Hamiltonian \[H^{\mathrm{RFIM},\zeta}(\sigma):=-\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}\sigma_{u}\sigma_{v}-\lambda\sum_{v}\zeta_{v}\sigma_{v} \tag{1.29}\] where \((\zeta_{v})\) are independently sampled from the standard Gaussian distribution (more general distributions may be allowed) and \(\lambda>0\) denotes the random-field strength.
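The quantities \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}\) appearing below are ground energies of this Hamiltonian in a finite box with \(+1\) boundary conditions. The following brute-force sketch computes such a ground energy on a very small box by enumerating all configurations; it is an illustration only (exponential in the box size), and the array-based box and boundary convention are assumptions for the sketch.

```python
import itertools
import numpy as np

def rfim_ground_energy(zeta, lam, boundary=1):
    """Minimal RFIM energy over all spin configurations on the box indexed by
    `zeta` (array of field values), with spins equal to `boundary` outside.
    Feasible only for very small boxes."""
    shape = zeta.shape
    sites = list(np.ndindex(*shape))
    best = np.inf
    for values in itertools.product((-1, 1), repeat=len(sites)):
        sigma = dict(zip(sites, values))
        energy = 0.0
        for v in sites:
            energy -= lam * zeta[v] * sigma[v]           # random-field term
            for i in range(len(shape)):                  # pair terms, each edge once
                w = list(v)
                w[i] += 1
                w = tuple(w)
                nb = sigma[w] if w[i] < shape[i] else boundary
                energy -= sigma[v] * nb
                if v[i] == 0:
                    energy -= sigma[v] * boundary        # edge to the -e_i boundary
        best = min(best, energy)
    return float(best)
```

Such a routine makes it easy to experiment, on tiny boxes, with the sign-flip transformation of the field and configuration used in the argument below.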
Imbrie [17, 18] established long-range order in the RFIM in dimensions \(d\geq 3\) at zero temperature and small \(\lambda\) (and Bricmont-Kupiainen [1, 16, 15] proved the analogous fact at low, positive temperatures). It is instructive to begin our overview by describing Ding-Zhuang's [13] short approach to this (while [13] present their argument at low, positive temperatures, below we describe its zero-temperature version). Let \(\sigma^{\zeta,L}\) be the ground configuration of the RFIM in \(\{-L,\dots,L\}^{d}\) with \(+1\) boundary conditions. Let us show that it is unlikely that there exists some \(A\subset\mathbb{Z}^{d}\), connected with connected complement and containing the origin, such that \(\sigma^{\zeta,L}\equiv-1\) (\(\sigma^{\zeta,L}\equiv+1\)) on the interior (exterior) vertex boundary of \(A\). Suppose \(A\) is such a set. Define a modified configuration and random field by \[\sigma_{v}^{\zeta,L,A}:=\begin{cases}-\sigma_{v}^{\zeta,L}&v\in A \\ \sigma_{v}^{\zeta,L}&v\notin A\end{cases}, \tag{1.30}\] \[\zeta_{v}^{A}:=\begin{cases}-\zeta_{v}&v\in A\\ \zeta_{v}&v\notin A\end{cases}.\] The discrete \(\pm 1\) symmetry of the RFIM then leads to the energy gap \[H^{\mathrm{RFIM},\zeta}(\sigma^{\zeta,L})-H^{\mathrm{RFIM},\zeta^{A}}(\sigma^ {\zeta,L,A})\geq 2|\partial A| \tag{1.31}\] where \(\partial A\) is the edge boundary of \(A\). This implies that also \[\mathrm{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE}^{\mathrm{RFIM},\zeta^{A},L} \geq 2|\partial A| \tag{1.32}\] where \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}:=H^{\mathrm{RFIM},\zeta}(\sigma^{\zeta,L})\) denotes the energy of the ground configuration in the random field \(\zeta\). The argument will be (eventually) concluded by proving that, for each \(\ell\), \[\mathbb{P}\left({}^{\exists A\subset\mathbb{Z}^{d}\text{ connected with connected complement, }0\in A\text{ and }|\partial A|=\ell,\atop|\operatorname{GE}^{\mathrm{ RFIM},\zeta,L}-\operatorname{GE}^{\mathrm{RFIM},\zeta^{A},L}|\geq 2|\partial A|}\right) \leq C_{d}\exp\left(-c_{d}\frac{\ell^{\frac{d-2}{d-1}}}{\lambda^{2}}\right) \tag{1.33}\] (with \(C_{d},c_{d}>0\) depending only on \(d\)). To understand (1.33) better, let us first explain a version of it (see (1.36) below) for a fixed _deterministic_ set \(A\subset\mathbb{Z}^{d}\). First, observe that \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}\) satisfies the conditional concentration inequality (see Theorem 2.1 below) \[\mathbb{P}\left(\big{|}\operatorname{GE}^{\mathrm{RFIM},\zeta,L}-\mathbb{E}( \mathrm{GE}^{\mathrm{RFIM},\zeta,L}|\,\zeta|_{A^{c}})\big{|}\geq t\,|\,\zeta|_ {A^{c}}\right)\leq C\exp\left(-c\frac{t^{2}}{\lambda^{2}|A|}\right) \tag{1.34}\] (with \(C,c>0\) absolute constants). Next, note that \(\zeta^{A}\) and \(\zeta\) have the same distribution, even conditioned on \(\zeta|_{A^{c}}\), whence the same is true for \(\mathrm{GE}^{\mathrm{RFIM},\zeta^{A},L}\) and \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}\). Consequently, the _difference of ground energies_ satisfies the same concentration inequality (with different constants), \[\mathbb{P}\left(\big{|}\operatorname{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE} ^{\mathrm{RFIM},\zeta^{A},L}\big{|}\geq t\right)\leq C\exp\left(-c\frac{t^{2} }{\lambda^{2}|A|}\right). 
\tag{1.35}\] Thus, using the isoperimetric inequality \(|A|\leq C_{d}|\partial A|^{d/(d-1)}\), \[\mathbb{P}(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE}^{\mathrm{RFIM}, \zeta^{A},L}\geq 2|\partial A|)\leq C\exp\left(-c\frac{|\partial A|^{2}}{ \lambda^{2}|A|}\right)\leq C\exp\left(-c_{d}\frac{|\partial A|^{\frac{d-2}{d- 1}}}{\lambda^{2}}\right). \tag{1.36}\] Such an estimate, however, does not suffice to establish (1.33) via a union bound, since the number of subsets \(0\in A\subset\mathbb{Z}^{d}\), connected with connected complement, which have \(|\partial A|=\ell\) is at least \(c_{d}\exp(C_{d}\ell)\) (see [1, Theorem 6 and Theorem 7] and Appendix A). Instead, the estimate (1.33) is derived from the concentration bound (1.35) using a coarse-graining technique (or chaining argument) introduced by Fisher-Frohlich-Spencer [10] in a closely-related context. To this end one defines \(A_{N}\), the \(N\)-coarse-grained version of \(A\subset\mathbb{Z}^{d}\), as the union of all cubes \(B\subset\mathbb{Z}^{d}\), of the form \(v+\{0,1,\ldots,N-1\}^{d}\) with \(v\in N\mathbb{Z}^{d}\), which satisfy \(|A\cap B|\geq\frac{1}{2}|B|\). Then, one writes the chaining expansion \[\mathrm{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE}^{\mathrm{RFIM},\zeta^{A},L}=\sum_ {k=0}^{K-1}\left(\mathrm{GE}^{\mathrm{RFIM},\zeta^{A_{2}k+1},L}-\mathrm{GE}^{ \mathrm{RFIM},\zeta^{A_{2}k},L}\right) \tag{1.37}\] where \(K\) is chosen sufficiently large that \(A_{2^{K}}=\emptyset\) (so that \(\zeta^{A_{2^{K}}}=\zeta\)), and noting that \(A_{2^{0}}=A_{1}=A\). A version of the concentration inequality (1.35) is available (with the same proof) for any two finite \(A^{\prime},A^{\prime\prime}\subset\mathbb{Z}^{d}\), \[\mathbb{P}\left(\big{|}\,\mathrm{GE}^{\mathrm{RFIM},\zeta^{A^{\prime}},L}- \mathrm{GE}^{\mathrm{RFIM},\zeta^{A^{\prime\prime}},L}\,\big{|}\geq t\right) \leq C\exp\left(-c\frac{t^{2}}{\lambda^{2}|A^{\prime}\Delta A^{\prime\prime}|} \right). \tag{1.38}\] The idea of the coarse-graining technique is to apply the concentration bound (1.38) to each of the terms on the right-hand side of (1.37) (with suitable \(t_{k}\) summing to \(2|\partial A|\)), using a union bound over the possible \(A_{2^{k}}\) and bounds for \(|A_{2^{k}}\Delta A_{2^{k+1}}|\), for \(0\leq k\leq K-1\). The gain over the direct application (1.36) of (1.35) lies in the smaller denominator in the right-hand side of the concentration inequality (1.38) compared to (1.35), and the fact that the number of possibilities for \(A_{2^{k}}\) is greatly reduced as \(k\) increases (roughly, \(|\partial A_{N}|\approx|\partial A|\) so that \(A_{N}\) may be regarded as a set with surface volume \(|\partial A|/N^{d-1}\) after shrinking the lattice \(\mathbb{Z}^{d}\) by a factor \(N\). This is complicated, however, by the fact that \(A_{N}\) need not be connected or have connected complement). #### 1.2.2. The disordered Solid-On-Solid model It is instructive to first try and adapt the above approach to the disordered SOS model (1.28), before discussing the disordered ferromagnet. The goal there is to recover a version of the result of [1], showing that in dimensions \(d\geq 3\) when, say, the disorder (\(\zeta_{v,k}\)) is given by independent Gaussians with _small variance_, then there is localization of the ground configuration \(\varphi^{\zeta,L}\) in \(\{-L,\ldots,L\}^{d}\) with zero boundary values. 
To this end, it suffices to show that it is unlikely that there exists an integer \(m\geq 0\) and some \(A\subset\mathbb{Z}^{d}\), connected with connected complement and containing the origin, such that \(\varphi\geq m+1\) (\(\varphi\leq m\)) on the interior (exterior) vertex boundary of \(A\). We have checked (but do not provide full details here) that a proof may be carried out very similarly to the RFIM case with the main difference being that the discrete \(\pm 1\) symmetry of the RFIM is now replaced by the discrete translation symmetry of adding an integer constant to \(\varphi\). Thus, instead of (1.30), a new configuration and disorder are defined by \[\varphi^{\zeta,L,A}:=\varphi^{\zeta,L}-1_{A}, \tag{1.39}\] \[\zeta^{A}_{(v,k)}:=\begin{cases}\zeta_{(v,k+1)}&v\in A\\ \zeta_{(v,k)}&v\notin A\end{cases}\] (where \(1_{A}\) is the indicator function of \(A\)), leading to the energy gap \[H^{\mathrm{SOS},\zeta}(\varphi^{\zeta,L})-H^{\mathrm{SOS},\zeta^{A}}(\varphi^ {\zeta,L,A})\geq|\partial A|. \tag{1.40}\] While we do not enter into further detail, we remind that the disordered SOS model may be seen as a special case of the anisotropic disordered ferromagnet; see Remark 1.12. The above sketch for the disordered SOS model may also be adapted to low, positive temperatures, similarly to the argument of [10]. However, such an extension for the disordered ferromagnet requires additional ideas (see Section 9.2 for further discussion). #### 1.2.3. The disordered ferromagnet We proceed to overview our approach to proving Theorem 1.7 - localization of the Dobrushin interface in the disordered ferromagnet. While the approach adapts several of the ideas appearing above, it is significantly more complicated, essentialy due to the fact that the Dobrushin interface may have overhangs (i.e., have several parallel interface plaquettes in the same "column"). Below we describe the obstacles that arise and our methods for overcoming them. We work in dimension \(D=d+1\geq 4\) under the assumptions of Theorem 1.7. For finite \(\Lambda\subset\mathbb{Z}^{d}\), we write \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) for the ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) as given by Lemma 1.5, and we let \(\mathrm{GE}^{\Lambda}(\eta)\) be its energy (i.e., the ground energy) in the coupling field \(\eta\) (see (4.6) below for a precise definition). Our goal is to show that, for \((v_{0},k_{0})\in\mathbb{Z}^{D}\), the event \[\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v_{0},k_{0})}\neq\rho^{\mathrm{Dob}}_{(v _{0},k_{0})} \tag{1.41}\] is unlikely (with \(\rho^{\mathrm{Dob}}\) defined in (1.17)). ##### 1.2.3.1. Shifts and energy gap We aim to obtain an energy gap after a transformation of the configuration and the disorder based on a discrete symmetry, in a similar way to (1.39) and (1.40). The symmetry used is the translation of the \(\mathbb{Z}^{D}\) lattice along its last coordinate, but its use is more complicated than in the disordered SOS model. The amount to translate by is encoded by a function \(\tau:\mathbb{Z}^{d}\to\mathbb{Z}\) having finite \(\mathrm{supp}(\tau):=\{v\in\mathbb{Z}^{d}\colon\tau(v)\neq 0\}\); we call any such function a _shift_. The shifted disorder \(\eta^{\tau}\) is defined as follows: We fix, once and for all, an arbitrary function \(\iota:E(\mathbb{Z}^{d})\to\mathbb{Z}^{d}\) that chooses an endpoint for each edge (i.e., \(\iota(e)\in e\)). 
Then \[\eta^{\tau}_{e}:=\begin{cases}\eta_{e+(0,\tau(u))}&e=\{(u,k),(u,k+1)\}\in E^{\parallel}(\mathbb{Z}^{D}),\\ \eta_{e+(0,\tau(\iota(\{u,v\})))}&e=\{(u,k),(v,\ell)\}\in E^{\perp}(\mathbb{Z}^{D}),\end{cases} \tag{1.42}\] where \(\{x,y\}+z=\{x+z,y+z\}\) for \(x,y,z\in\mathbb{Z}^{D}\) (i.e., the "column of disorders" above a base vertex \(u\) is shifted by \(\tau(u)\), and the "column" above a base edge \(\{u,v\}\in E(\mathbb{Z}^{d})\) is shifted by \(\tau(\iota(\{u,v\}))\); see also (4.7) and (4.8)). Two useful features of this definition are that \(\iota(\{u,v\})\) is unimportant when \(\tau(u)=\tau(v)\) and that \((\eta^{\tau_{1}})^{\tau_{2}}=\eta^{\tau_{1}+\tau_{2}}\) for shifts \(\tau_{1},\tau_{2}\). The action on the configuration \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is more complicated, since a simple shift would not suffice to eliminate overhangs. Instead, our definition involves an additional subset \(\tilde{A}\subset\mathbb{Z}^{d}\) (we take \(\tilde{A}\) to be the projection to \(\mathbb{Z}^{d}\) of the overhangs and "interface walls" that we would like to remove; see Section 1.2.3.2 below for our precise definition) and we define, for \((u,k)\in\mathbb{Z}^{D}\), \[\sigma^{\eta,\Lambda,\mathrm{Dob},\tau,\tilde{A}}_{(u,k)}:=\begin{cases}\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,k+\tau(u))}&u\notin\tilde{A},\\ \rho^{\mathrm{Dob}}_{(u,k)}&u\in\tilde{A}.\end{cases} \tag{1.43}\] The energy gap obtained from this definition is the difference \[\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau})\geq H^{\eta}(\sigma^{\eta,\Lambda,\mathrm{Dob}})-H^{\eta^{\tau}}(\sigma^{\eta,\Lambda,\mathrm{Dob},\tau,\tilde{A}}). \tag{1.44}\] We choose \(\tau\) and \(\tilde{A}\) so that the right-hand side consists exactly of (twice) the coupling constants corresponding to the overhangs and walls of \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) above \(\tilde{A}\) (more precisely, regarding the overhangs, for each \(u\in\tilde{A}\) such that \(\{u\}\times\mathbb{Z}\) has multiple parallel interface plaquettes we gain the coupling constants of all these plaquettes except the one between \((u,\tau(u))\) and \((u,\tau(u)+1)\)). This is implied by the following compatibility relations: \[\text{If }\{u,v\}\in E(\mathbb{Z}^{d})\text{ and }\{u,v\}\not\subset\tilde{A}\text{ then }\tau(u)=\tau(v). \tag{1.45}\] \[\text{If }u\in\tilde{A},\,v\notin\tilde{A}\text{ and }\{u,v\}\in E(\mathbb{Z}^{d})\text{ then }\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,k+\tau(u))}=\rho^{\mathrm{Dob}}_{(u,k)}\text{ for }k\in\mathbb{Z}. \tag{1.46}\] \[\text{If }u\in\tilde{A}\text{ then }\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,\tau(u))}=-\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,\tau(u)+1)}\text{ (our construction also gives }\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,\tau(u))}=-1\text{)}. \tag{1.47}\] A key role in our proof of Theorem 1.7 is thus played by defining \(\tau\) and \(\tilde{A}\) as above so that: (i) a sufficient energy gap is generated when (1.41) holds, and (ii) the shift \(\tau\) is taken from a small enough class (that we call _admissible shifts_; see Section 1.2.3.5 below) for which we may develop suitable enumeration theorems for the required union bounds (there is no need to also enumerate over \(\tilde{A}\) as it does not appear on the left-hand side of (1.44)). #### 1.2.3.2. Definition of \(\tau\) and \(\tilde{A}\) Let \(E\subset\Lambda\) (initially \(E=\{v_{0}\}\) for the \(v_{0}\) of (1.41); however, later parts of our argument necessitate consideration of more general \(E\)).
We aim to define \(\tilde{A}\) as the "projection to \(\mathbb{Z}^{d}\) of the places with overhangs and interface walls which surround \(E\)" and to define \(\tau\) in a compatible and admissible manner. Our definitions are motivated by the ideas of Dobrushin [10], to which we add a new result (Lemma 3.1) in order to define \(\tau\) as an admissible shift. In fact, the absence of bubbles (_finite_ connected components of spins of one sign) in our zero-temperature setup allows us to simplify the approach of [10], and we present a self-contained treatment in Section 7 (with some inspiration from [23, 20]). This also yields an improved dependence on the dimension \(d\). A brief description of our construction follows. First, we define a function \(I:\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\), which partitions \(\mathbb{Z}^{d}\) into different regions according to the height of the Dobrushin interface, as follows: 1. \(I(v)=k\) if \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) has a _unique_ sign change in \(\{v\}\times\mathbb{Z}\), with \(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k)}=-1\) and \(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k+1)}=1\), 2. \(I(v)=\text{``layered''}\) if \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) has _multiple_ sign changes in \(\{v\}\times\mathbb{Z}\). Define the set \(V_{\sigma^{\eta,\Lambda,\mathrm{Dob}}}\subset\mathbb{Z}^{d}\) (the "projected interface vertices") as those \(v\) satisfying that there exists an edge \(\{u,v\}\in E(\mathbb{Z}^{d})\) with either \(I(u)\neq I(v)\) or \(I(u)=I(v)=\text{``layered''}\) (i.e., all layered vertices and their neighbors and all non-layered vertices having a neighbor with a different value of \(I\)). We then define \(\tilde{A}\) to be the union of those connected components of \(V_{\sigma^{\eta,\Lambda,\mathrm{Dob}}}\) which surround \(E\) (i.e., those connected components \(C\) for which some vertex of \(E\) lies in a finite connected component of \(\mathbb{Z}^{d}\setminus C\)). Second, we define a "pre-shift" \(\tau_{0}:\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) as follows: For \(v\in\tilde{A}\) we set \(\tau_{0}(v)=I(v)\). For \(v\notin\tilde{A}\), we let \(B_{v}\) be the connected component of \(v\) in \(\mathbb{Z}^{d}\setminus\tilde{A}\) and observe that \(I\) is necessarily some constant integer on the external vertex boundary of \(B_{v}\); then we set \(\tau_{0}(v)\) equal to this constant (necessarily \(\tau_{0}(v)=0\) if \(B_{v}\) is infinite). Third, the requisite shift \(\tau\) is formed from \(\tau_{0}\) by setting \(\tau(v)=\tau_{0}(v)\) whenever \(\tau_{0}(v)\in\mathbb{Z}\) and choosing values \(\tau(v)\in\mathbb{Z}\) at those \(v\) where \(\tau_{0}(v)=\text{``layered''}\) (such \(v\) are necessarily in \(\tilde{A}\)). While our choice is limited by the compatibility relation (1.47), this still leaves significant freedom; the main limiting factor is our requirement that \(\tau\) be an admissible shift. To choose these values, we use our Lemma 3.1, which gives a mechanism for modifying the configuration \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) on each connected component of layered vertices into a configuration \(\sigma^{\prime}\) with the properties: (i) \(\sigma^{\prime}\) has no overhangs, (ii) if \(\sigma^{\prime}_{(v,k)}=-\sigma^{\prime}_{(v,k+1)}\) at some \((v,k)\) then the same holds for \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) at \((v,k)\), and (iii) \(\sigma^{\prime}\) has at most as many perpendicular interface plaquettes as \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\).
Choosing \(\tau\) on layered vertices to be the height of the unique sign change in \(\sigma^{\prime}\) is shown to yield the requisite admissible shift. #### 1.2.3.3. Chaining and concentration The above discussion implies that on the event (1.41) there exists an admissible shift \(\tau\) inducing an energy gap in (1.44) (and this gap is large if \(k_{0}\) of (1.41) is large). Consequently, it remains to prove Theorem 4.3 below, which states that it is unlikely that there exists any admissible shift producing an energy gap which is large in absolute value. To this end, motivated by the chaining expansion (1.37) of the RFIM, our first step is to write \[\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau})=\sum_{k=0}^{K-1}\left(\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k+1}}})-\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})\right) \tag{1.48}\] where \(\tau_{N}\) represents some notion of \(N\)-coarse-graining of \(\tau\), with \(\tau_{2^{0}}=\tau_{1}=\tau\) and with \(K\) large enough that \(\tau_{2^{K}}\equiv 0\) (so that \(\eta^{\tau_{2^{K}}}=\eta\)). We choose to define \(\tau_{N}:\mathbb{Z}^{d}\to\mathbb{Z}\) as a function which is constant on cubes of the form \(v+\{0,1,\ldots,N-1\}^{d}\) with \(v\in N\mathbb{Z}^{d}\), and equal on each such cube \(B\) to the average of \(\tau\) on \(B\) rounded to the closest integer (arbitrarily rounding \(k+1/2\) to \(k\) for integer \(k\)). Significant effort is then devoted in Section 6.2 and Section 6.3 to developing an enumeration theory (reminiscent of [10]) for the number of possibilities for \(\tau_{N}\) according to the complexity of \(\tau\) (complexity is discussed in Section 1.2.3.5 below). The proof also introduces an extra "fine-grained" shift \(\tau_{I}\), for \(I\subset[d]=\{1,\ldots,d\}\), which "lies between" \(\tau\) and \(\tau_{2}\) and is obtained by averaging and rounding \(\tau\) on boxes of the form \(v+\{0,1\}^{I}\times\{0\}^{[d]\setminus I}\). This extra ingredient allows our assumptions on the disorder distributions ((1.7) and (1.14)) to become less restrictive as the dimension \(d\) increases. The next step following (1.48) is to obtain a concentration inequality for the ground energy differences appearing in its right-hand side, similar to the concentration inequality (1.38) of the RFIM. Here, however, lies a major hurdle in our analysis, as the available inequality is significantly weaker than the one available for the RFIM or the disordered SOS model. Let us describe the inequality that we have. Let \(\tau_{1},\tau_{2}\) be shift functions. We introduce a version of the ground energy in which we minimize over a restricted set of configurations: For \(A\subset\mathbb{Z}^{d}\) and \(b^{\parallel},b^{\perp}\geq 0\), let \(\mathrm{GE}^{\Lambda,A,(b^{\parallel},b^{\perp})}(\eta)\) be the minimal energy in the coupling field \(\eta\) among configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) which have at most \(b^{\parallel}\) parallel plaquettes and at most \(b^{\perp}\) perpendicular plaquettes above \(A\) in the Dobrushin interface (see (6.3) for a precise definition).
Then (see Lemma 6.1) \[\mathbb{P}\left(\left|\,\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{1}-\tau_{2}),(b^{\parallel},b^{\perp})}(\eta^{\tau_{1}})-\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{1}-\tau_{2}),(b^{\parallel},b^{\perp})}(\eta^{\tau_{2}})\right|\geq t\right)\\ \leq C\exp\left(-c\frac{t^{2}}{\mathrm{wid}(\nu^{\parallel})^{2}b^{\parallel}+\mathrm{wid}(\nu^{\perp})^{2}b^{\perp}}\right), \tag{1.49}\] so that the concentration estimate deteriorates as \(b^{\parallel}\) and \(b^{\perp}\) grow. Thus, in order to apply (1.49) to the \(k\)th term in (1.48) (and successfully use a union bound over the possible \(\eta^{\tau_{2^{k}}}\) and \(\eta^{\tau_{2^{k+1}}}\)) we need that for sufficiently small \(b^{\parallel}_{k}(s)\) and \(b^{\perp}_{k}(s)\) (depending on \(k\) and the energy gap \(s\); see Lemma 6.2), \[\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})=\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{2^{k+1}}-\tau_{2^{k}}),(b^{\parallel}_{k}(s),b^{\perp}_{k}(s))}(\eta^{\tau_{2^{k}}}), \tag{1.50}\] \[\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k+1}}})=\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{2^{k+1}}-\tau_{2^{k}}),(b^{\parallel}_{k}(s),b^{\perp}_{k}(s))}(\eta^{\tau_{2^{k+1}}}).\] However, we are not guaranteed that these equalities hold! #### 1.2.3.4. The maximal energy gap It remains to deal with the case that one of the equalities in (1.50) is violated. Our strategy is to show that in this case there is a new admissible shift \(\tau^{\prime}\) inducing a significantly larger absolute energy gap \(|\operatorname{GE}^{\Lambda}(\eta)-\operatorname{GE}^{\Lambda}(\eta^{\tau^{\prime}})|\) than the shift \(\tau\). The argument then proceeds by focusing on the admissible shift with the _maximal_ energy gap and deducing that for that shift all the equalities (1.50) hold. To this end, suppose, e.g., that the first equality in (1.50) is violated. Set \(E:=\operatorname{supp}(\tau_{2^{k+1}}-\tau_{2^{k}})\). By definition, this means that \(\sigma^{\eta^{\tau_{2^{k}}},\Lambda,\operatorname{Dob}}\) either has more than \(b_{k}^{\parallel}(s)\) parallel interface plaquettes above \(E\) or has more than \(b_{k}^{\perp}(s)\) perpendicular interface plaquettes above \(E\). We may thus use the construction of Section 1.2.3.2, with \(\sigma^{\eta^{\tau_{2^{k}}},\Lambda,\operatorname{Dob}}\) and \(E\), to define \(\tau^{\prime}\) and \(\tilde{A}^{\prime}\) inducing a large energy gap \(\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})-\operatorname{GE}^{\Lambda}((\eta^{\tau_{2^{k}}})^{\tau^{\prime}})\). If \(b_{k}^{\parallel}(s)\) and \(b_{k}^{\perp}(s)\) are not too small (see Lemma 6.2 for their value) then the new gap will indeed be much greater than the old one, as we require. One difficulty, however, is that the new gap is induced for the shifted disorder \(\eta^{\tau_{2^{k}}}\) rather than for the original disorder \(\eta\).
This is simply resolved though, since \[\begin{split}&\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})- \operatorname{GE}^{\Lambda}((\eta^{\tau_{2^{k}}})^{\tau^{\prime}})= \operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})-\operatorname{GE}^{\Lambda} (\eta^{\tau_{2^{k}}+\tau^{\prime}})\\ &=\left(\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})- \operatorname{GE}^{\Lambda}(\eta)\right)-\left(\operatorname{GE}^{\Lambda}( \eta^{\tau_{2^{k}}+\tau^{\prime}})-\operatorname{GE}^{\Lambda}(\eta)\right) \end{split} \tag{1.51}\] so that a large energy gap induced for the shifted disorder \(\eta^{\tau_{2^{k}}}\) implies a large energy gap in absolute value for the original disorder (induced either by the shift \(\tau_{2^{k}}\) or by the shift \(\tau_{2^{k}}+\tau^{\prime}\)). #### 1.2.3.5. Admissible shifts The above argument may give rise to shift functions with a complicated structure. Initially, given the input (1.41), we construct a relatively simple shift \(\tau\) (and set \(\tilde{A}\)) in order to remove the interface walls surrounding the vertex \(v_{0}\). However, as explained in Section 1.2.3.4 above, we may need to replace the shift \(\tau\) by the shifts \(\tau_{2^{k}}\) or \(\tau_{2^{k}}+\tau^{\prime}\) appearing in (1.51), and upon iterating this procedure the shifts may become more and more complicated. We thus need to define a class of shifts which, on the one hand, is broad enough to be closed under such operations and, on the other hand, is narrow enough to enable efficient enumeration (of the number of possibilities for the shift and its coarse grainings), allowing the union bounds in the chaining argument to go through. This is our motivation for defining in Section 4.1.3 the class of _admissible_ shifts, which depends on the coupling field \(\eta\). We measure the complexity of a shift \(\tau\) by its total variation \(\operatorname{TV}(\tau)\) (i.e., the \(\ell_{1}\)-norm of its gradient) and by a quantity \(R(\tau)\) that we call _trip entropy_, which is the minimal length of a path visiting all level components of \(\tau\) (i.e., visiting all connected components of level sets of \(\tau\)). Admissible shifts are then defined as those that induce an energy gap for the coupling field \(\eta\) that is sufficiently large compared to the complexity of the shift. This definition turns out to strike the requisite balance between broadness and narrowness. ## 2. Notation, conventions and concentration results We use the convention \(\mathbb{N}:=\{1,2,\ldots\}\). For \(k\in\mathbb{N}\), we let \([k]:=\{1,2,\ldots,k\}\) and for any set \(A\), let \[\binom{A}{k}:=\{I\subseteq A\colon|I|=k\}\] be the family of subsets of size \(k\) of \(A\). For \(x\in\mathbb{R}^{m}\) and \(p\geq 1\) we let \(\|x\|_{p}=(\sum_{i=1}^{m}|x_{i}|^{p})^{1/p}\) be the standard \(p\)-norm. Unless explicitly stated otherwise, all "geometric" notions in \(\mathbb{Z}^{d}\) are with respect to the \(\ell_{1}\) metric. 
In particular, the (closed) ball of radius \(r\geq 0\) around \(a\in\mathbb{Z}^{d}\) is \[\mathcal{B}_{r}(a):=\{v\in\mathbb{Z}^{d}\colon\|v-a\|_{1}\leq r\},\] the diameter of a bounded set \(A\subset\mathbb{Z}^{d}\) is \(\operatorname{diam}(A)=\max_{u_{1},u_{2}\in A}\|u_{1}-u_{2}\|_{1}\), the distance from \(\omega\in\mathbb{Z}^{d}\) to a non-empty set \(A\subset\mathbb{Z}^{d}\) is \(\operatorname{dist}(\omega,A)=\min_{u\in A}\|\omega-u\|_{1}\) and the distance between two non-empty sets \(A,B\subset\mathbb{Z}^{d}\) is \(\operatorname{dist}(A,B)=\min_{u\in A,\,v\in B}\|u-v\|_{1}\); we say that \(u,v\in\mathbb{Z}^{d}\) are adjacent, and denote it by \(u\sim v\), if \(\|u-v\|_{1}=1\); let \(E(\mathbb{Z}^{d}):=\{\{u,v\}\in\binom{\mathbb{Z}^{d}}{2}\colon u\sim v\}\); the _edge boundary_ of a set \(A\subset\mathbb{Z}^{d}\) is \[\partial A:=\{(u,v)\colon u\in A,\,v\in\mathbb{Z}^{d}\setminus A,\,u\sim v\}\] its _inner vertex boundary_ is \[\partial^{\mathrm{in}}A:=\{u\in A\colon\exists v\in\mathbb{Z}^{d}\setminus A \text{ such that }u\sim v\},\] and its _outer vertex boundary_ is \[\partial^{\mathrm{out}}A:=\{v\in\mathbb{Z}^{d}\setminus A\colon\exists u\in A \text{ such that }u\sim v\}.\] Denote by \(\pi\) the projection from \(\mathbb{Z}^{d+1}\) to \(\mathbb{Z}^{d}\) defined by \(\pi(x_{1},\dots,x_{d},x_{d+1})=(x_{1},\dots,x_{d})\). The proofs of our main results require a concentration inequality for the minimal energy of configurations of the disordered ferromagnet. According to whether the disorder distributions have compact support or are Lipschitz functions of a Gaussian, one of the following two inequalities will be used. A function \(f:D\to\mathbb{R}\), defined on a subset \(D\subset\mathbb{R}^{n}\), is said to be Lipschitz with constant \(L>0\) if \[|f(x)-f(y)|\leq L\|x-y\|_{2},\qquad x,y\in D. \tag{2.1}\] **Theorem 2.1** (Gaussian concentration inequality; see, e.g. [1, Theorem 5.6]).: _Let \(g_{1},\dots,g_{n}\) be independent standard Gaussian random variables. Suppose \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) is Lipschitz with constant \(L\). Set \(X:=f(g_{1},\dots,g_{n})\). Then \(\mathbb{E}(|X|)<\infty\) and for each \(t>0\),_ \[\mathbb{P}(|X-\mathbb{E}(X)|\geq t)\leq 2e^{-\frac{t^{2}}{2L^{2}}}.\] The theorem is part of the Gaussian concentration phenomenon as initiated by Paul Levy, Christer Borell, Tsirelson-Ibragimov-Sudakov and Maurey-Pisier. A function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is called _quasi-convex_ if \(\{x\in\mathbb{R}^{n}\colon f(x)\leq s\}\) is a convex set for every \(s\in\mathbb{R}\). **Theorem 2.2** ([1, Theorem 7.12], going back to Johnson-Schechtman [15], following Talagrand [16]).: _Let \(z_{1},...,z_{n}\) be independent random variables taking values in the interval \([0,1]\) and let \(f:[0,1]^{n}\to\mathbb{R}\) be a quasi-convex function which is also Lipschitz with constant \(1\). Set \(X:=f(z_{1},\dots,z_{n})\). Then, for each \(t>0\),_ \[\mathbb{P}(|X-\mathrm{med}(X)|\geq t)\leq 4e^{-\frac{t^{2}}{4}} \tag{2.2}\] _where \(\mathrm{med}(X)\) is any median of \(X\)._ We remark that it is standard (and simple; see [16, p. 142]) that (2.2) implies the same conclusion with the median replaced by the (necessarily finite) expectation, in the form \[\mathbb{P}(|X-\mathbb{E}(X)|\geq t)\leq Ce^{-ct^{2}} \tag{2.3}\] for some universal constants \(C,c>0\). For our later use, it is convenient to deduce a unified result from the previous two theorems, applicable to distributions of finite width. 
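Before turning to that unified corollary, here is a quick, purely illustrative numerical sanity check of Theorem 2.1 (an aside, not used anywhere in the proofs): the maximum of \(n\) coordinates is Lipschitz with constant \(1\) with respect to the Euclidean norm, so its tail can be compared against the Gaussian bound. The snippet assumes NumPy and approximates \(\mathbb{E}(X)\) by the sample mean.

```python
import numpy as np

# X = max(g_1,...,g_n) is a 1-Lipschitz function of independent standard Gaussians
# (w.r.t. the Euclidean norm), so Theorem 2.1 with L = 1 gives
# P(|X - E X| >= t) <= 2 exp(-t^2 / 2) for every t > 0.
rng = np.random.default_rng(0)
n, samples = 50, 50_000
X = rng.standard_normal((samples, n)).max(axis=1)
EX = X.mean()                                    # sample mean as a stand-in for E(X)
for t in (0.5, 1.0, 1.5, 2.0):
    empirical = float(np.mean(np.abs(X - EX) >= t))
    bound = 2 * np.exp(-t ** 2 / 2)
    print(f"t={t:.1f}   empirical tail={empirical:.4f}   Gaussian bound={bound:.4f}")
```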
For a random variable \(W\) we set \[\operatorname{wid}(W):=\operatorname{wid}(\mathcal{L}(W)) \tag{2.4}\] where \(\mathcal{L}(W)\) is the distribution of \(W\) (and \(\operatorname{wid}(\mathcal{L}(W))\) is defined by (1.5)). **Corollary 2.3**.: _There exist \(C,c>0\) such that the following holds. Let \(W_{1},\ldots,W_{n}\) be independent random variables with \(0<\operatorname{wid}(W_{i})<\infty\) for all \(i\). Suppose \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) is a quasi-convex function which is Lipschitz with constant \(L>0\) in the sense of (2.1). Set_ \[X:=f\left(\frac{W_{1}}{\operatorname{wid}(W_{1})},\ldots,\frac{W_{n}}{\operatorname{wid}(W_{n})}\right). \tag{2.5}\] _Then \(\mathbb{E}(|X|)<\infty\) and for each \(t>0\),_ \[\mathbb{P}(|X-\mathbb{E}(X)|\geq t)\leq Ce^{-c\frac{t^{2}}{L^{2}}}. \tag{2.6}\] We remark regarding the restriction \(\operatorname{wid}(W_{i})>0\) that a distribution \(\nu\) with \(\operatorname{wid}(\nu)=0\) is supported on a single point. Indeed, this is clear if \(\operatorname{wid}(\nu)=\operatorname{diam}(\nu)\), while if \(\operatorname{wid}(\nu)=\operatorname{Lip}(\nu)\) then one may either argue directly or deduce the fact from Theorem 2.1. Proof of Corollary 2.3.: Let us assume, without loss of generality, that \(0\leq k\leq n\) is such that \(\operatorname{wid}(W_{i})=\operatorname{Lip}(\mathcal{L}(W_{i}))\) for \(1\leq i\leq k\) while \(\operatorname{wid}(W_{i})=\operatorname{diam}(\mathcal{L}(W_{i}))\) for \(k+1\leq i\leq n\). By subtracting suitable constants from the \(W_{i}\) with \(k+1\leq i\leq n\) we may further assume, without loss of generality, that each such \(W_{i}\) is supported on an interval of the form \([0,a_{i}]\) with \(\operatorname{diam}(W_{i})=a_{i}\). This implies that \(W_{i}/\operatorname{wid}(W_{i})\in[0,1]\) for \(k+1\leq i\leq n\), as will be required for using Theorem 2.2. It suffices to prove that for any \(t>0\) we have \[\mathbb{P}(|X-\mathbb{E}(X\,|\,W_{1},\ldots,W_{k})|\geq t\,|\,W_{1},\ldots,W_{k})\leq Ce^{-c\frac{t^{2}}{L^{2}}} \tag{2.7}\] almost surely, and \[\mathbb{P}(|\mathbb{E}(X|W_{1},\ldots,W_{k})-\mathbb{E}(X)|\geq t)\leq Ce^{-c\frac{t^{2}}{L^{2}}}. \tag{2.8}\] Inequality (2.7) follows from Theorem 2.2, in the form (2.3). To see this, first note that \(f/L\) is a quasi-convex function which is Lipschitz with constant \(1\). Conclude that, for any fixed values of \(x_{1},\ldots,x_{k}\in\mathbb{R}\), the restricted function \(x_{k+1},\ldots,x_{n}\mapsto f(x_{1},\ldots,x_{k},x_{k+1},\ldots,x_{n})/L\) satisfies the same properties, and finally recall that \(W_{i}/\operatorname{wid}(W_{i})\in[0,1]\) for \(k+1\leq i\leq n\). We proceed to deduce inequality (2.8). Observe first that the average of a Lipschitz function with respect to some of its variables is still a Lipschitz function, with the same constant, of the remaining variables. In particular, the function \[\tilde{f}(x_{1},\ldots,x_{k}):=\mathbb{E}\left(f\left(x_{1},\ldots,x_{k},\frac{W_{k+1}}{\operatorname{wid}(W_{k+1})},\ldots,\frac{W_{n}}{\operatorname{wid}(W_{n})}\right)\right) \tag{2.9}\] is Lipschitz with constant \(L\). Fix \(\varepsilon>0\). Let \(g_{1},\ldots,g_{k}\) be independent standard Gaussian random variables. Write, for \(1\leq i\leq k\), \(W_{i}=h_{i}(g_{i})\) where \(h_{i}:\mathbb{R}\to\mathbb{R}\) satisfies \(\operatorname{Lip}(h_{i})\leq\operatorname{Lip}(W_{i})(1+\varepsilon)\).
It follows that \((y_{1},\ldots,y_{k})\mapsto\tilde{f}\left(\frac{h_{1}(y_{1})}{\operatorname {wid}(W_{1})},\ldots,\frac{h_{k}(y_{k})}{\operatorname{wid}(W_{k})}\right)\) is a Lipschitz function with constant \(L(1+\varepsilon)\). Inequality (2.8) then follows from Theorem 2.1, taking into account that \(\varepsilon\) is arbitrary. ## 3. Disorders which are constant on perpendicular plaquettes Say that an Ising configuration \(\sigma\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) is _interfacial_ if for every \(v\in\mathbb{Z}^{d}\) \[\lim_{k\to-\infty}\sigma_{(v,k)}=-1\text{ and }\lim_{k\to\infty}\sigma_{(v,k)}=1. \tag{3.1}\] A configuration \(\sigma\) is said to have _no overhangs_ if it is interfacial and for every \(v\in\mathbb{Z}^{d}\), there is a _unique_\(k\) for which \(\sigma_{(v,k)}=-\sigma_{(v,k+1)}\). Recall the definition of \(\Omega^{\Delta,\rho}\) and the definition of a ground configuration in \(\Omega^{\Delta,\rho}\) from (1.15). We use these here with \(\Delta=\Lambda\times\mathbb{Z}\) for a finite \(\Lambda\subset\mathbb{Z}^{d}\) and a \(\rho\) with no overhangs. Note that a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\) may not exist for a general coupling field \(\eta\). However, such a ground configuration, which is moreover interfacial, will exist if \(\inf_{e\in E(\mathbb{Z}^{D})}\eta_{e}>0\) (see a related discussion after Observation 4.1). **Lemma 3.1**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite and let \(\rho\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) have no overhangs. Suppose the coupling field \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) satisfies that \(\eta\) is constant on \(E^{\perp}(\mathbb{Z}^{d+1})\). Then for each interfacial configuration \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) there exists \(\sigma^{\prime}\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) with no overhangs such that \(H^{\eta}(\sigma^{\prime})\leq H^{\eta}(\sigma)\) and whenever \(\{x,x+e_{d+1}\}\in E(\mathbb{Z}^{d+1})\) is such that \(\sigma^{\prime}_{x}=-1\) and \(\sigma^{\prime}_{x+e_{d+1}}=1\) then also \(\sigma_{x}=-1\) and \(\sigma_{x+e_{d+1}}=1\)._ _Consequently, if \(\eta\) is such that there exists an interfacial ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\), then there also exists a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\) which has no overhangs._ We note that in the terminology of (3.2) below, the lemma asserts that the sign changes of \(\sigma^{\prime}\) (having no overhangs) are contained in the odd sign changes of \(\sigma\). The proof of the lemma uses the following preliminary definitions and proposition. Fix \(\Lambda\subset\mathbb{Z}^{d}\) finite and a configuration \(\rho\) with no overhangs. We make the following definitions for an interfacial configuration \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\): 1. The next definitions capture a notion of "odd" and "even" sign changes in \(\sigma\), \[\begin{split}\text{OSC}(\sigma)&:=\{\{x,x+e_{d+1} \}\in E(\mathbb{Z}^{d+1})\colon\sigma_{x}=-1,\sigma_{x+e_{d+1}}=1\},\\ \text{ESC}(\sigma)&:=\{\{x,x+e_{d+1}\}\in E(\mathbb{ Z}^{d+1})\colon\sigma_{x}=1,\sigma_{x+e_{d+1}}=-1\}.\end{split}\] (3.2) Note that as \(\rho\) has no overhangs and \(\sigma\) is interfacial, then * for each \(v\in\mathbb{Z}^{d}\) there are finitely many \(k\) for which \(\{(v,k),(v,k+1)\}\in\text{OSC}(\sigma)\), with a unique such \(k\) when \(v\in\mathbb{Z}^{d}\setminus\Lambda\). 
* for each \(v\in\mathbb{Z}^{d}\), the number of \(\{(v,k),(v,k+1)\}\in\text{ESC}(\sigma)\) equals the number of \(\{(v,k),(v,k+1)\}\in\text{OSC}(\sigma)\) minus \(1\). In particular, if \(\{(v,k),(v,k+1)\}\in\text{ESC}(\sigma)\) then \(v\in\Lambda\). 2. Let \(\text{NESC}(\sigma)\) be the number of "adjacent even sign changes" in \(\sigma\), defined as the number of pairs \(\{\{(u,k),(u,k+1)\},\{(v,\ell),(v,\ell+1)\}\}\subset\text{ESC}(\sigma)\) satisfying that \(\{u,v\}\in E(\mathbb{Z}^{d})\) and \(k=\ell\). 3. Define the number of perpendicular domain wall plaquettes above \(\Lambda\) to be \[D^{\Lambda}(\sigma):=|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|.\] Finally, we define a partial order on interfacial configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\) as follows: Say that \(\sigma^{\prime}<\sigma\) if \[D^{\Lambda}(\sigma^{\prime})\leq D^{\Lambda}(\sigma)\] and either \[\text{OSC}(\sigma^{\prime})\subsetneq\text{OSC}(\sigma)\] or \[\text{OSC}(\sigma^{\prime})=\text{OSC}(\sigma)\text{ and }\text{NESC}(\sigma^{\prime})>\text{NESC}(\sigma).\] The following proposition is the key step in proving Lemma 3.1. **Proposition 3.2**.: _Let \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) be interfacial. If \(\text{ESC}(\sigma)\neq\emptyset\) then there exists \(\sigma^{\prime}\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) such that \(\sigma^{\prime}<\sigma\) (in particular, \(\sigma^{\prime}\) is interfacial)._ Proof.: Fix an interfacial \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) with \(\text{ESC}(\sigma)\neq\emptyset\). Fix some \(\{(v_{0},k_{0}),(v_{0},k_{0}+1)\}\in\text{ESC}(\sigma)\). Consider the set of all positions which are directly below even sign changes at height \(k_{0}\), \[\Delta:=\{v\in\Lambda\colon\{(v,k_{0}),(v,k_{0}+1)\}\in\text{ESC}(\sigma)\}.\] For a given height \(k\), define the sum of the configuration surrounding \(\Delta\) at height \(k\), \[S(k):=\sum_{v\in\partial^{\text{out}}\Delta}\sigma_{(v,k)}.\] The definition of \(\Delta\) implies that \(S(k_{0})\leq S(k_{0}+1)\). Thus, either \(S(k_{0})\leq 0\) or \(S(k_{0}+1)\geq 0\) (or both). Let us assume without loss of generality that \(S(k_{0})\leq 0\) as the other case can be treated analogously. Define \(k_{1}\leq k_{0}\) to be the smallest integer with the following property: For all \(k_{1}\leq k\leq k_{0}\) it holds that \[\sigma_{(v,k)}=\sigma_{(v,k_{0})}=1\quad\text{for }v\in\Delta,\] \[\sigma_{(v,k)}\leq\sigma_{(v,k_{0})}\quad\text{for }v\in\partial^{\text{out}}\Delta.\] The definition implies, in particular, that \[S(k)\leq S(k_{0})\leq 0 \tag{3.3}\] for all \(k_{1}\leq k\leq k_{0}\). Finally, define a configuration \(\sigma^{\prime}\) as follows: \[\sigma^{\prime}_{(v,k)}=\begin{cases}-1&v\in\Delta,k_{1}\leq k\leq k_{0},\\ \sigma_{(v,k)}&\text{otherwise}.\end{cases}\] The inequality (3.3) implies that \(D^{\Lambda}(\sigma^{\prime})\leq D^{\Lambda}(\sigma)\). Moreover, the definition of \(k_{1}\) implies that either \(\text{OSC}(\sigma^{\prime})\subsetneq\text{OSC}(\sigma)\) or \(\text{OSC}(\sigma^{\prime})=\text{OSC}(\sigma)\) and \(\text{NESC}(\sigma^{\prime})>\text{NESC}(\sigma)\). Thus, \(\sigma^{\prime}<\sigma\), as we wanted to prove. A repeated use of Proposition 3.2 yields the following corollary.
**Corollary 3.3**.: _For every interfacial \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\), there is an interfacial configuration \(\sigma^{\prime}\) that has a unique sign change above every vertex (i.e., \(\sigma^{\prime}\) has no overhangs), with \(\sigma\) having the same sign change at the same height, and \(D^{\Lambda}(\sigma^{\prime})\leq D^{\Lambda}(\sigma)\), i.e.,_ \[|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma^{\prime}_{x}\neq\sigma^{ \prime}_{y}\}|\\ \leq|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|. \tag{3.4}\] Proof.: If \(\text{ESC}(\sigma)=\emptyset\) then \(\sigma\) has no overhangs and we are done. Otherwise, apply Proposition 3.2 iteratively to produce a sequence \(\sigma_{m}<\sigma_{m-1}<\cdots<\sigma_{0}=\sigma\) of configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\), with \(\text{ESC}(\sigma_{m})=\emptyset\) (the iterations necessarily terminate at some finite \(m\geq 1\) since the number of odd sign changes above each \(v\in\mathbb{Z}^{d}\) cannot increase and the number of even sign changes above each \(v\in\Lambda\) is no larger than the number of odd sign changes above \(v\)). Then, \(\sigma_{m}\) has no overhangs, and by the definition of the partial order, \(\text{OSC}(\sigma_{m})\subset\text{OSC}(\sigma)\) and \(D^{\Lambda}(\sigma_{m})\leq D^{\Lambda}(\sigma)\). Lemma 3.1 immediately follows from Corollary 3.3. Proof of Lemma 3.1.: Let \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) satisfy the properties in the lemma. Let \(\sigma\) be an interfacial configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\). Let \(\sigma^{\prime}\) be the configuration guaranteed by Corollary 3.3. Since \(\eta\) is constant on \(E^{\perp}(\mathbb{Z}^{d+1})\), it follows from (3.4) that \(H^{\eta}(\sigma^{\prime})\leq H^{\eta}(\sigma)\). ## 4. Stability of the ground energy under shifts of the disorder and a deduction of Theorem 1.7 In this section we present our main technical result, Theorem 4.3 below, which bounds the probability that certain "admissible shifts of the disorder" lead to a significant change in the energy of the ground configuration under Dobrushin boundary conditions. Our main localization theorem, Theorem 1.7, follows by combining Theorem 4.3 with the fact, stated in Lemma 4.4 below, that admissible shifts inducing large energy changes necessarily exist whenever the interface in the ground configuration deviates (in prescribed locations) from the flat interface. Theorem 4.3 will also be instrumental in the proof of Theorem 1.9 (presented in Section 8) on the convergence of the semi-infinite-volume ground configurations in the infinite-volume limit. We begin in Section 4.1 with required definitions, continue in Section 4.2 with the statement of our main technical result and finally deduce Theorem 1.7 in Section 4.3. ### Preliminaries This section contains the required definitions of ground energies, shifts and their action on the disorder, and admissibility of shifts. #### 4.1.1. Coupling fields, energies and ground configurations **Generic coupling fields.** We often work with coupling fields \(\eta\) whose values on all edges are uniformly bounded from \(0\). 
In addition, in order to ensure uniqueness of finite-volume ground configurations we ask that the coupling field \(\eta\) satisfies the assumption \[\sum_{i=1}^{k}s_{i}\eta_{f_{i}}\neq 0,\quad k\in\mathbb{N},\{s_{i}\}_{i=1}^{k}\subseteq\{-1,1\},\{f_{i}\}_{i=1}^{k}\subset E(\mathbb{Z}^{d+1})\text{ and }\{f_{i}\}_{i=1}^{k}\nsubseteq E^{\perp}(\mathbb{Z}^{d+1}). \tag{4.1}\] This is captured with the following notation: Given \(\alpha^{\|},\alpha^{\perp}\in(0,\infty)\) let \[\mathcal{D}(\alpha^{\|},\alpha^{\perp}):=\left\{\eta\colon E(\mathbb{Z}^{d+1})\to(0,\infty)\colon\begin{subarray}{c}\eta_{e}\in(\alpha^{\|},\infty)\text{ for }e\in E^{\|}(\mathbb{Z}^{d+1}),\\ \eta_{e}\in(\alpha^{\perp},\infty)\text{ for }e\in E^{\perp}(\mathbb{Z}^{d+1}),\\ \eta\text{ satisfies (4.1)}\end{subarray}\right\}.\] Now, given also a coupling field \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty]\) and a finite \(\Lambda\subset\mathbb{Z}^{d}\) define the Hamiltonian \[\mathcal{H}^{\eta,\Lambda}(\sigma):=\sum_{\begin{subarray}{c}\{x,y\}\in E(\mathbb{Z}^{d+1})\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\eta_{\{x,y\}}(1-\sigma_{x}\sigma_{y})=2\sum_{\begin{subarray}{c}\{x,y\}\in E(\mathbb{Z}^{d+1})\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\eta_{\{x,y\}}1_{\sigma_{x}\neq\sigma_{y}} \tag{4.5}\] and note that it is well defined on \(\Omega^{\Lambda,\mathrm{Dob}}\). From the following observation it follows that the minimizers of the Hamiltonian \(\mathcal{H}^{\eta,\Lambda}\) in \(\Omega^{\Lambda,\mathrm{Dob}}\) coincide with the ground configurations discussed in Lemma 1.5. It is proved in appendix B for completeness. **Observation 4.1**.: _Let \(\sigma,\sigma^{\prime}\in\Omega^{\Lambda,\mathrm{Dob}}\), and \(\eta:E(\mathbb{Z}^{D})\to[0,\infty)\). The following holds_ \[H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})=\mathcal{H}^{\eta,\Lambda}(\sigma)-\mathcal{H}^{\eta,\Lambda}(\sigma^{\prime}).\] We note that when \(\eta\in\mathcal{D}\) then there is a unique minimizer of \(\mathcal{H}^{\eta,\Lambda}\) in \(\Omega^{\Lambda,\mathrm{Dob}}\). Indeed, there are only finitely many configurations in \(\Omega^{\Lambda,\mathrm{Dob}}\) whose energy is lower than \(\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\), and no two of them have equal energy by (4.1). With a slight abuse of notation, we will denote this unique minimizer by \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\), noting that it coincides with the minimizer of Lemma 1.5 under the assumptions there. We will use the terminology _ground energy_ to refer to the energy of the minimizing configuration. We thus define, for each \(\eta\in\mathcal{D}\) and finite \(\Lambda\subset\mathbb{Z}^{d}\), \[\mathrm{GE}^{\Lambda}(\eta):=\mathcal{H}^{\eta,\Lambda}(\sigma^{\eta,\Lambda,\mathrm{Dob}}). \tag{4.6}\] #### 4.1.2. Shifts of the coupling field **Shifts and shifted coupling fields.** We use the term _shift_ to denote any function \(\tau\colon\mathbb{Z}^{d}\to\mathbb{Z}\) which equals zero except at finitely many vertices. We denote the (finite) support of \(\tau\) by \[\mathrm{supp}(\tau):=\{v\in\mathbb{Z}^{d}\colon\tau(v)\neq 0\}.\] We occasionally refer to the \(\ell_{1}\) norm of a shift, \(\|\tau\|_{1}:=\sum_{v\in\mathbb{Z}^{d}}|\tau_{v}|\). The set of all shifts will be denoted by \(\mathcal{S}\).
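In the illustrative sketches interspersed below (asides, not part of the mathematical argument), a shift is encoded as a Python dictionary recording only its nonzero values; the support and the \(\ell_{1}\) norm just defined then read as follows. The variable names are ours.

```python
# A shift tau: Z^d -> Z (zero outside a finite set), stored as a dictionary
# holding only the nonzero values; all other vertices are implicitly 0.
tau = {(0, 0): 1, (1, 0): 1, (0, 1): 1, (1, 1): 2, (5, 5): -3}   # an example with d = 2

supp_tau = {v for v, k in tau.items() if k != 0}        # supp(tau)
l1_norm = sum(abs(k) for k in tau.values())             # ||tau||_1
print(sorted(supp_tau), l1_norm)
```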
We define an operation of shifts on coupling fields \(\eta\): first fix an arbitrary choice function \(\iota:E(\mathbb{Z}^{d})\to\mathbb{Z}^{d}\) that chooses for each edge one of its endpoints, i.e., \(\iota(e)\in e\) for every \(e\in E(\mathbb{Z}^{d})\); the shifted coupling field \(\eta^{\tau}\) is defined by shifting the "column of disorders" above a base vertex \(u\) by \(\tau(u)\), and a similar shift up for "columns" above any base edge \(\{u,v\}\) such that \(\iota(\{u,v\})=u\). Precisely, given a shift \(\tau\) and a disorder \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\), define \(\eta^{\tau}\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) as follows: for every \(u\in\mathbb{Z}^{d}\) and \(k\in\mathbb{Z}\), \[\eta^{\tau}_{\{(u,k),(u,k+1)\}}:=\eta_{\{(u,k+\tau(u)),(u,k+1+\tau(u))\}}, \tag{4.7}\] and for every \(\{u,v\}\in E(\mathbb{Z}^{d})\) and \(k\in\mathbb{Z}\), \[\eta^{\tau}_{\{(u,k),(v,k)\}}:=\eta_{\{(u,k+\tau(\iota(\{u,v\}))),(v,k+\tau( \iota(\{u,v\})))\}}. \tag{4.8}\] Note that if \(\tau(u)=\tau(v)\) for adjacent \(u,v\in\mathbb{Z}^{d}\), then for every \(k\in\mathbb{Z}\), \[\eta^{\tau}_{\{(u,k),(v,k)\}}=\eta_{\{(u,k+\tau(u)),(v,k+\tau(u))\}}=\eta_{\{( u,k+\tau(v)),(v,k+\tau(v))\}}. \tag{4.9}\] **Changes to the ground energy.** Of central importance in our arguments will be the change in ground energy induced by shifts of the coupling field. This is captured by the following definition. For each \(\eta\in\mathcal{D}\), finite \(\Lambda\subset\mathbb{Z}^{d}\) and shifts \(\tau,\tau^{\prime}\) we set \[G^{\eta,\Lambda}(\tau,\tau^{\prime}):=\mathrm{GE}^{\Lambda}(\eta^{\tau^{\prime }})-\mathrm{GE}^{\Lambda}(\eta^{\tau}). \tag{4.10}\] We also abbreviate \[G^{\eta,\Lambda}(\tau):=G^{\eta,\Lambda}(\tau,0)=\operatorname{GE}^{\Lambda}( \eta)-\operatorname{GE}^{\Lambda}(\eta^{\tau}). \tag{4.11}\] With these definitions, for any shifts \(\tau_{1},\dots,\tau_{k}\) we have the telescopic sum \[G^{\eta,\Lambda}(\tau_{1})=\sum_{i=1}^{k-1}G^{\eta,\Lambda}(\tau_{i},\tau_{i+1 })+G^{\eta,\Lambda}(\tau_{k}). \tag{4.12}\] #### 4.1.3. Enumeration of shifts, admissible shifts and the maximal energetic change The counting of various classes of shifts plays an important role in our arguments (the shifts play a role somewhat analogous to that of contours in the classical Peierls argument). To facilitate it, we need a way to succinctly describe shifts. To this end, the following notations regarding a shift \(\tau\) are handy: * The _total variation_ of \(\tau\) is defined as \[\operatorname{TV}\left(\tau\right):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}|\tau( u)-\tau(v)|.\] * A _level component_ of a shift \(\tau\) is a connected set on which \(\tau\) is constant and which is not strictly contained in another set with this property (i.e., a connected component of \(\tau^{-1}(k)\) for some \(k\)). Denote the collection of all level components of \(\tau\) by \(\mathcal{LC}(\tau)\). * A finite sequence \((v_{i})_{i\geq 0}\) of points in \(\mathbb{Z}^{d}\) with \(v_{0}=0\) is a _root sequence_ for a collection \(\mathcal{F}\) of sets in \(\mathbb{Z}^{d}\) if there is a point of \(\{v_{i}\}_{i\geq 0}\) in every set in \(\mathcal{F}\). 
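Continuing the dictionary encoding introduced above, the following sketch (illustrative only; the helper names are ours) computes the total variation \(\operatorname{TV}(\tau)\) and the level components of the nonzero levels of a shift. Note that the infinite zero-level background is one further level component of \(\tau\), which the sketch does not enumerate.

```python
def neighbors(v):
    """Nearest neighbours of v in Z^d (lattice points at l1-distance 1)."""
    for i in range(len(v)):
        for s in (1, -1):
            yield v[:i] + (v[i] + s,) + v[i + 1:]

def total_variation(tau):
    """TV(tau): sum of |tau(u) - tau(v)| over nearest-neighbour edges {u, v};
    only edges meeting supp(tau) can contribute, so it suffices to scan those."""
    value = lambda v: tau.get(v, 0)
    seen, tv = set(), 0
    for u in tau:
        for v in neighbors(u):
            e = frozenset((u, v))
            if e not in seen:
                seen.add(e)
                tv += abs(value(u) - value(v))
    return tv

def nonzero_level_components(tau):
    """Connected components of the level sets {tau = k} for k != 0."""
    left = {v for v, k in tau.items() if k != 0}
    components = []
    while left:
        root = left.pop()
        component, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for w in neighbors(u):
                if w in left and tau[w] == tau[u]:
                    left.remove(w)
                    component.add(w)
                    stack.append(w)
        components.append(component)
    return components

tau = {(0, 0): 1, (1, 0): 1, (0, 1): 1, (1, 1): 1, (5, 5): 2}
print(total_variation(tau))                      # 8 boundary edges of the block + 4*2 = 16
print([sorted(c) for c in nonzero_level_components(tau)])
```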
We further define the _trip entropy_\(R(\tau)\) of the shift \(\tau\) as \[R(\tau):=\min\left\{\sum_{i\geq 1}\|v_{i}-v_{i-1}\|_{1}\colon(v_{i})_{i\geq 0} \text{ is a root sequence for }\mathcal{LC}(\tau)\right\}.\] Similarly, define the trip entropy \(R(E)\) of a set \(E\subseteq\mathbb{Z}^{d}\) as \[R(E):=\min\left\{\sum_{i\geq 1}\|v_{i}-v_{i-1}\|_{1}\colon\begin{subarray}{c }(v_{i})_{i\geq 0}\text{ is a root sequence for the collection}\\ \text{ of connected components of }E\end{subarray}\right\}.\] These definitions are put to use in estimating the number of shifts in Proposition 6.3. We next define a restricted class of shifts, depending on \(\eta\), that we term _admissible shifts_ (while restricted, the set of admissible shifts is still defined in a broad enough fashion to contain all the shifts arising in our proof). Very roughly, the class is defined as those shifts whose action on the coupling field induces a sufficiently large energetic change to the ground energy (as defined in (4.11)) to compensate for the number of shifts in the class. Here, the first notion one has for the number of shifts with given parameters is that coming from our later Proposition 6.3. However, we will see later that this notion will need to be further refined in our argument, where we will also need to take care of the number of coarse grainings (and fine grainings) of our shifts. The need to account also for these more refined counting problems lies at the heart of our choice for the definition of root sequence above and the precise definition of admissible shifts below. Given a coupling field \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty]\), finite \(\Lambda\subset\mathbb{Z}^{d}\) and positive \(\alpha^{\parallel},\alpha^{\perp}\), the class of \((\alpha^{\parallel},\alpha^{\perp})\)-admissible shifts is defined by \[\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp}):=\left\{\tau \in\mathcal{S}\colon|G^{\eta,\Lambda}(\tau)|\geq\max\left\{\frac{\alpha^{ \perp}}{2}\operatorname{TV}(\tau),\min\{\alpha^{\parallel},\alpha^{\perp}\} \frac{d}{200}R(\tau)\right\}\right\}.\] Lastly, we give a notation to the maximal change in the ground energy that is induced by an \((\alpha^{\parallel},\alpha^{\perp})\)-admissible shift, \[\operatorname{MG}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp}):=\sup_{ \tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})}|G^{\eta,\Lambda}(\tau)|. \tag{4.13}\] Our proof will make use of the fact that \(\operatorname{MG}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})<\infty\), almost surely, under suitable assumptions on the disorder distributions. This is implied by the following lemma, proved in appendix B. **Lemma 4.2**.: _In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11). Then, for any finite \(\Lambda\subset\mathbb{Z}^{d}\) and positive \(\alpha^{\parallel},\alpha^{\perp}\) we have that \(|\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})|<\infty\) almost surely._ ### Stability of the ground energy We proceed to state our main technical result, Theorem 4.3 below. It gives a quantitative bound on the probability that there exists an admissible shift whose action on the disorder yields a large change in the ground energy. **Theorem 4.3**.: _There exist constants \(c_{0},c,C>0\) such that the following holds. In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11). 
Let \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) be as in definition (1.12). Let \(\underline{\alpha}^{\parallel}\) and \(\underline{\alpha}^{\perp}\) be the minimums of the supports of \(\nu^{\parallel}\) and \(\nu^{\perp}\), as in (1.13). Let \(D=d+1\geq 4\) and suppose that condition (1.14) holds (with the constant \(c_{0}\)). Then the following holds for all finite \(\Lambda\subset\mathbb{Z}^{d}\) and \(t>0\),_ \[\mathbb{P}\left(\operatorname{MG}^{\eta,\Lambda}(\underline{\alpha}^{ \parallel},\underline{\alpha}^{\perp})\geq t\right)\leq C\exp\left(-\frac{c}{ \kappa d^{2}}\left(\frac{t}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1 }}\right). \tag{4.14}\] _Moreover, for small \(t\) we have an improved dependence on dimension: if \(t<\underline{\alpha}^{\perp}2^{d}\) then_ \[\mathbb{P}\left(\operatorname{MG}^{\eta,\Lambda}(\underline{\alpha}^{ \parallel},\underline{\alpha}^{\perp})\geq t\right)\leq C\exp\left(-\frac{ct}{ \kappa\underline{\alpha}^{\perp}}\right). \tag{4.15}\] The theorem will be proven at the end of subsection 5.2. ### Deduction of Theorem 1.7 The following deterministic lemma shows that if the interface of the ground configuration is not flat around the origin then there necessarily exists an admissible shift whose action on the coupling field induces a large change in the ground energy. **Lemma 4.4**.: _Let \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for some \(\alpha^{\parallel},\alpha^{\perp}>0\) and let \(\Lambda\subset\mathbb{Z}^{d}\) be a finite subset. If \(\sigma^{\eta,\Lambda,\operatorname{Dob}}_{(0,k)}\neq\rho^{\operatorname{Dob}}_ {(0,k)}\) then there exists a shift \(\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\) for which_ \[G^{\eta,\Lambda}(\tau)\geq 2|k|\alpha^{\perp}.\] The lemma is proved in Section 7.5. Theorem 1.7 follows as a direct consequence of Lemma 4.4 and Theorem 4.3. First, it suffices to establish the inequality (1.19) at the vertex \(v=0\), since the choice of \(\Lambda\) in Theorem 1.7 is arbitrary. Then, inequality (1.19) with \(v=0\) follows directly by combining Lemma 4.4 with (4.14). ## 5. Coarse and fine grainings of shifts and their use in proving the stability of the ground energy In this section we take the first step towards proving Theorem 4.3, describing a form of "chaining" argument on the set of admissible shifts which is used to control their effect on the ground energy. The notion of coarse grainings of shifts which lies at the heart of our chaining argument is modelled after a similar graining method for sets which was introduced by Fisher-Frohlich-Spencer [10] in their discussion of the domain walls in the random-field Ising model. ### Coarse and fine grainings of shifts The chaining argument is based on the notions of coarse and fine grainings of shifts that we now describe. Given a partition \(\mathcal{P}\) of \(\mathbb{Z}^{d}\) into finite sets and a shift \(\tau\), we write \(\tau_{\mathcal{P}}\) for the shift obtained by averaging the value of \(\tau\) on each partition element of \(\mathcal{P}\) and rounding to the closest integer. Precisely, we set \[\tau_{\mathcal{P}}(v):=\left[\frac{1}{|P(v)|}\sum_{u\in P(v)}\tau(u)\right] \tag{5.1}\] where we write \(P(v)\) for the unique partition element of \(\mathcal{P}\) containing \(v\), and where \([a]\) is the rounding of \(a\) to the nearest integer, with the convention \(\left[k+\frac{1}{2}\right]=k\) for \(k\in\mathbb{Z}\). 
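The following sketch (again an illustrative aside, with our own helper names, reusing the dictionary encoding from Section 4) implements the graining operation (5.1): it averages \(\tau\) over each partition element and rounds to the nearest integer, with exact halves rounded down as in the convention just stated. The cube partition underlying the coarse graining discussed next is supplied as an example.

```python
import math
from fractions import Fraction
from itertools import product

def round_half_down(a):
    """Nearest integer, with k + 1/2 rounded down to k (the convention of (5.1))."""
    return math.ceil(Fraction(a) - Fraction(1, 2))

def grain(tau, cell):
    """tau_P as in (5.1): average tau over each (finite) partition element and round.
    `cell(v)` returns the partition element containing v; tau is a dict as before."""
    out = {}
    for v in tau:                      # tau_P vanishes on cells not meeting supp(tau)
        B = cell(v)
        k = round_half_down(Fraction(sum(tau.get(u, 0) for u in B), len(B)))
        if k != 0:
            for u in B:
                out[u] = k
    return out

def cube_cell(N, d):
    """The partition of Z^d into cubes v + {0,...,N-1}^d with v in N*Z^d."""
    def cell(v):
        corner = tuple((vi // N) * N for vi in v)
        return [tuple(c + o for c, o in zip(corner, off))
                for off in product(range(N), repeat=d)]
    return cell

tau = {(0, 0): 1, (1, 0): 1, (0, 1): 1, (1, 1): 2}
print(grain(tau, cube_cell(2, 2)))     # the cube {0,1}^2 has average 5/4, which rounds to 1
```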
We make use of two special cases of the above definition: * Coarse graining: Given an integer \(N\geq 1\), we use the notation \(\tau_{N}:=\tau_{\mathcal{P}_{N}}\) (as in (5.1)), with \(\mathcal{P}_{N}\) is the following partition into discrete cubes of side length \(N\), \[\mathcal{P}_{N}=\{Q_{N}(v)\}_{v\in N\mathbb{Z}^{d}}\quad\text{ where}\quad Q_{N}(v):=v+\{0,1,\ldots,N-1\}^{d}.\] * Fine graining: Given a subset of the coordinates \(I\subset[d]\), we use the notation \(\tau_{I}:=\tau_{\mathcal{P}_{I}}\) (as in (5.1)), with \(\mathcal{P}_{I}\) is the following partition into discrete boxes with side length \(2\) in the directions in \(I\) and side length \(1\) in the directions in \([d]\setminus I\), \[\mathcal{P}_{I}=\{Q_{I}(v)\}_{v\in(2\mathbb{Z})^{I}\times\mathbb{Z}^{[d] \setminus I}}\quad\text{where}\quad Q_{I}(v):=v+\{0,1\}^{I}\times\{0\}^{[d] \setminus I}.\] ### The chaining argument We work under the assumptions of Theorem 4.3. Precisely, let the disorder \(\eta\) be sampled as in the anisotropic disordered ferromagnet, with the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfying (1.11). Let \(D\geq 4\) and suppose that condition (1.14) holds with a constant \(c>0\) chosen sufficiently small for the arguments below. Fix a finite \(\Lambda\subset\mathbb{Z}^{d}\). For brevity, we will write \(G\) for \(G^{\eta,\Lambda}\), \(\mathcal{AS}\) for \(\mathcal{AS}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{\alpha}^ {\perp})\) (recall (1.13)), _admissible_ for \((\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp})\)-_admissible_ and MG for \(\operatorname{MG}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{ \alpha}^{\perp})\). First, since \(\operatorname{MG}<\infty\) almost surely by Lemma 4.2, we have \[\mathbb{P}(\operatorname{MG}\geq t)=\sum_{k=0}^{\infty}\mathbb{P}\left( \operatorname{MG}\in[t2^{k},t2^{k+1})\right).\] Next, for any \(s>0\), integer \(K\geq 1\), integer \(1\leq r\leq d\), any positive \((\gamma_{j})_{j\in[K]\cup\{(0,r),(r,1)\}}\) with \(\gamma_{(0,r)}+\gamma_{(r,1)}+\sum_{1\leq k\leq K}\gamma_{k}\leq 1\) and any function \(I_{\tau}\) which assigns a subset of \([d]\) of size \(r\) to each shift \(\tau\), we have the chaining argument (noting that the supremum is realized in (4.13) due to Lemma 4.2, and also recalling (4.12)) \[\mathbb{P}(\,\mathrm{MG}\in[s,2s))=\mathbb{P}\left(\{\mathrm{MG}\leq 2s \}\cap\left\{\max_{\tau\in\mathcal{AS}}|G(\tau)|\geq s\right\}\right)\] \[= \mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau,\tau_{I_{\tau}})+G(\tau_{I_{\tau}},\tau_{2})+\sum_{k=1}^{K -1}G(\tau_{2^{k}},\tau_{2^{k+1}})+G(\tau_{2^{K}})|\geq s\right\}\right)\] \[\leq \mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau,\tau_{I_{\tau}})|\geq\gamma_{(0,|I_{\tau}|)}s\right\}\right) \tag{5.2a}\] \[+\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau_{I_{\tau}},\tau_{2})|\geq\gamma_{(|I_{\tau}|,1)}s\right\}\right)\] (5.2b) \[+\sum_{k=1}^{K-1}\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap \left\{\max_{\tau\in\mathcal{AS}}|G(\tau_{2^{k}},\tau_{2^{k+1}})|\geq\gamma_{ k}s\right\}\right)\] (5.2c) \[+\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau_{2^{K}})|\geq\gamma_{K}s\right\}\right). \tag{5.2d}\] The following notion will become useful for bounding the terms (5.2a) and (5.2b) from above. 
A set of indices \(I\subseteq[d]\) will be called _compatible_ with a shift \(\tau\) if it holds that \[\mathrm{TV}(\tau_{I})\leq 20(2|I|+1)\,\mathrm{TV}(\tau)\qquad\text{and}\qquad\|\tau_{I}-\tau\|_{1}\leq\frac{4|I|}{d}\,\mathrm{TV}(\tau).\] Denote \[\mathrm{comp}(\tau):=\left\{I\subset[d]\colon I\text{ is compatible with }\tau\right\}.\] The following proposition is proved in Section 6.2.2. **Proposition 5.1**.: _Let \(\tau\) be a shift. For each \(0\leq r\leq d\) there exists \(I\in\mathrm{comp}(\tau)\) with \(|I|=r\)._ It is clear that sufficiently coarse grainings of a shift will yield the identity (all zero) shift. The following proposition, proved in Section 6.2.1, quantifies this statement. **Proposition 5.2**.: _Let \(\tau\) be a shift. For each integer \(N>\sqrt[d]{2}\left(\frac{\mathrm{TV}(\tau)}{2d}\right)^{\frac{1}{d-1}}\) it holds that \(\tau_{N}\equiv 0\)._ The next lemma, whose proof will be the focus of section 6, allows us to estimate the expressions (5.2a), (5.2b) and (5.2c). **Lemma 5.3**.: _Define \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) as in (1.12):_ \[\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d):=\left(\frac{1}{\underline{\alpha}^{\parallel}\underline{\alpha}^{\perp}}+\frac{1}{d(\underline{\alpha}^{\perp})^{2}}\right)\mathrm{wid}(\nu^{\parallel})^{2}+\frac{1}{(\underline{\alpha}^{\perp})^{2}}\,\mathrm{wid}(\nu^{\perp})^{2}.\] _There exist universal constants \(C,c>0\) such that the following hold for every \(s>0\)._ 1. _For any_ \(1\leq r\leq d\)_, any map_ \(\tau\mapsto I_{\tau}\) _assigning to each shift_ \(\tau\) _a compatible set_ \(I_{\tau}\in\mathrm{comp}(\tau)\) _with_ \(|I_{\tau}|=r\)_, and_ \(Cr\kappa\frac{\log d}{d}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\leq\Gamma\leq 1\)_,_ \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\exists\tau\in\mathcal{AS}\colon|G(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\right\}\right)\leq C\exp\left(-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}r}s\right).\] 2. _For any_ \(1\leq r\leq d\)_, any map_ \(\tau\mapsto I_{\tau}\) _assigning to each shift_ \(\tau\) _a compatible set_ \(I_{\tau}\in\operatorname{comp}(\tau)\) _with_ \(|I_{\tau}|=r\)_, and_ \(C\kappa\frac{dr}{2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\perp}dr}+1\right)\right)\leq\Gamma\leq 1\)_,_ \[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\left\{\exists\tau\in\mathcal{AS}\colon|G(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\right\}\right)\leq C\exp\left(-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d}s\right).\] 3. _For any_ \(k\geq 1\) _and_ \(C\kappa\frac{d^{3}}{2^{k(d-2)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\perp}d^{2}}+1\right)\right)\leq\Gamma\leq 1\)_,_ \[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\left\{\exists\tau\in\mathcal{AS}\colon|G(\tau_{2^{k}},\tau_{2^{k+1}})|\geq\sqrt{\Gamma}s\right\}\right)\leq C\exp\left(-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d^{2}2^{k}}s\right).\] For small \(s\), we may obtain an improved dependence on the dimension \(d\), using the following lemma. **Lemma 5.4**.: _There exist universal constants \(C,c>0\) such that the following holds. Assume \(0<s<\underline{\alpha}^{\perp}4^{d}\), and let \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) be as in (1.12). Then_ \[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS}\colon|G(\tau)|>s\}\right)\leq C\exp\left(-\frac{c}{\kappa\underline{\alpha}^{\perp}}s\right).
\tag{5.3}\] Proof of Theorem 4.3.: Throughout the proof we will use \(C\) and \(c\) to denote positive absolute constants; the values of these constants will be allowed to change from line to line, even within the same calculation, with the value of \(C\) increasing and the value of \(c\) decreasing. Set \(K:=\lceil\frac{1}{d-1}\log_{2}\left(\frac{4s}{\underline{\alpha}^{\perp}d}\right)\rceil+1\). By Proposition 5.2 and the definition of admissibility, the term (5.2d) vanishes for any choice of \(\gamma_{K}\). Set \(\gamma_{(0,|I|)}=\gamma_{(|I|,1)}:=\frac{1}{4}\) and \(\gamma_{k}:=\gamma\,2^{-\frac{1}{4}\min\{k,K-k\}}\) for any \(1\leq k\leq K-1\), where \(\gamma=\left(2\sum_{k=1}^{K-1}2^{-\frac{1}{4}\min\{k,K-k\}}\right)^{-1}\). Set \(r:=\lceil\min\{10\log_{2}d,\frac{d}{2}\}\rceil\) and fix a map \(\tau\mapsto I_{\tau}\) assigning to each shift \(\tau\) a compatible set \(I_{\tau}\in\operatorname{comp}(\tau)\) of size \(r\), existing by Proposition 5.1. Notice that \(\gamma_{(0,r)}+\gamma_{(r,1)}+\sum_{k=1}^{K-1}\gamma_{k}=1\) and that \(1\leq r\leq d\) so one may use the chaining argument (5.2). Recall that \(\kappa\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\leq c_{0}\frac{d+1}{\log^{2}(d+1)}\), by assumption (1.14) on the noise distributions. It is easy to verify that for \(c_{0}\) sufficiently small, this enables us to use the first part of Lemma 5.3 to bound the term (5.2a), the second part of Lemma 5.3 to bound the term (5.2b), and the third part of Lemma 5.3 to bound the term (5.2c) (recall that the term (5.2d) vanishes). This yields that for every positive \(s\), \[\mathbb{P}(\operatorname{MG}\in[s,2s))\leq C\exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp}r}\right)+C\exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp}d}\right)+C\sum_{k=1}^{\lceil\frac{K}{2}\rceil}\exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp}d^{2}2^{\frac{3}{2}k}}\right)\] \[+C\sum_{k=\lceil\frac{K}{2}\rceil+1}^{K-1}\exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp}d^{2}2^{K}}\right).\] Now, noticing that the \(K-\lceil\frac{K}{2}\rceil-1\) last summands are asymptotically dominant and that \(2^{K}\leq 4\left(\frac{4s}{\underline{\alpha}^{\perp}d}\right)^{\frac{1}{d-1}}\), one gets the bound \[\mathbb{P}(\operatorname{MG}\in[s,2s))\leq C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{s}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right)\] for every positive \(s\). Hence, \[\mathbb{P}(\mathrm{MG}\geq t) =\sum_{i=0}^{\infty}\mathbb{P}(\mathrm{MG}\in[2^{i}t,2^{i+1}t))\leq\sum_{i=0}^{\infty}C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{2^{i}t}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right)\] \[\leq C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{t}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right).\] For \(t<\underline{\alpha}^{\perp}2^{d}\), by Lemma 5.4 and (4.14), \[\mathbb{P}(\mathrm{MG}\geq t) =\sum_{i=0}^{d-1}\mathbb{P}\left(\mathrm{MG}\in\left[2^{i}t,2^{i+1}t\right)\right)+\mathbb{P}\left(\mathrm{MG}\geq 2^{d}t\right)\] \[\leq\sum_{i=0}^{d-1}C\exp\left(-\frac{c}{\kappa\underline{\alpha}^{\perp}}2^{i}t\right)+C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{2^{d}t}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right)\leq C\exp\left(-\frac{ct}{\kappa\underline{\alpha}^{\perp}}\right).\ \square\] ## 6. Concentration of ground-energy differences between consecutive grainings The goal of this section is to prove Proposition 5.1, Proposition 5.2, Lemma 5.3 and Lemma 5.4.
The proofs of Lemma 5.3 and Lemma 5.4 are achieved via the following pivotal statements. **Interface layering.** In the following lemmas, the concept of interface layering plays a significant role. By such layering, we mean the number of interface plaquettes (in a ground configuration with Dobrushin boundary conditions) lying above a given position in the base plane (i.e., having the same projection). We use the following definitions: The _parallel layering_ of a configuration \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\) over a set \(A\subset\Lambda\subset\mathbb{Z}^{d}\) is defined as \[\mathcal{L}_{A}^{\parallel}(\sigma):=\left|\left\{\{x,x+e_{d+1}\}\in E^{\parallel}(\mathbb{Z}^{d+1})\colon\pi\left(x\right)\in A,\,\sigma_{x}\neq\sigma_{x+e_{d+1}}\right\}\right|. \tag{6.1}\] The _perpendicular layering_ of \(\sigma\) over \(A\) is defined as \[\mathcal{L}_{A}^{\perp}(\sigma):=\left|\left\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\pi\left(x\right)\in A,\,\pi\left(y\right)\in A,\,\sigma_{x}\neq\sigma_{y}\right\}\right|. \tag{6.2}\] With these definitions in mind one may think of \(\mathcal{L}_{A}^{\perp}(\sigma)+\mathcal{L}_{A}^{\parallel}(\sigma)-|A|\) as the number of excessive plaquettes in the interface created by the minimal energy configuration above \(A\), compared to the interface of the configuration \(\rho^{\mathrm{Dob}}\). For \(A\subset\Lambda\subset\mathbb{Z}^{d}\) and integer \(b^{\parallel},b^{\perp}\geq 0\), define: \[\Omega^{\Lambda,A,(b^{\parallel},b^{\perp})}:=\left\{\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\colon\sum_{\begin{subarray}{c}\{x,y\}\in E^{\theta}(\mathbb{Z}^{d+1})\\ \{\pi\left(x\right),\pi\left(y\right)\}\cap A\neq\emptyset\end{subarray}}1_{\sigma_{x}\neq\sigma_{y}}\leq b^{\theta}\text{ for }\theta\in\{\parallel,\perp\}\right\},\] as well as \[\mathrm{GE}^{\Lambda,A,(b^{\parallel},b^{\perp})}(\eta):=\min\left\{\mathcal{H}^{\eta,\Lambda}(\sigma)\colon\sigma\in\Omega^{\Lambda,A,(b^{\parallel},b^{\perp})}\right\}. \tag{6.3}\] When \(\Lambda\) is fixed, we will occasionally abbreviate by omitting it. Also define \[G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,\tau^{\prime}):=\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau-\tau^{\prime}),(b^{\parallel},b^{\perp})}(\eta^{\tau^{\prime}})-\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau-\tau^{\prime}),(b^{\parallel},b^{\perp})}(\eta^{\tau}),\] and abbreviate \[G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau):=G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,0)=\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)-\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau}).\] **Concentration of ground energy differences.** First, we provide a bound on the probability of a given shift producing a large energetic gain, given some a priori bound on the "number of excessive faces in the interface" above the support of the shift. **Lemma 6.1**.: _There exist universal \(C,c>0\) such that the following holds. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy (1.11). Then for any two shifts \(\tau,\tau^{\prime}\) and any non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau-\tau^{\prime}),(b^{\parallel},b^{\perp})}\),_ \[\mathbb{P}\left(|G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,\tau^{\prime})|\geq t\right)\leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\right).
\tag{6.4}\] **Layering bounds.** Lemma 6.1 provides a concentration estimate for the ground energy of a restricted set of configurations. In the following lemma, we show that at each step of the graining, the non-restricted ground energy coincides with an appropriate restricted ground energy. **Lemma 6.2**.: _There exists a universal \(C>0\) such that the following holds. Let \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for some \(\alpha^{\parallel},\alpha^{\perp}>0\). Let \(\Lambda\subset\mathbb{Z}^{d}\) be a finite subset. Let \(s>0\) such that \(\operatorname{MG}(\alpha^{\parallel},\alpha^{\perp})\leq 2s\), and let \(\tau\) be an \((\alpha^{\parallel},\alpha^{\perp})\)-admissible shift._ 1. _For any_ \(\emptyset\neq I\in\operatorname{comp}(\tau)\)_,_ \[\operatorname{GE}^{\Lambda}(\#) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau-\tau_{I}),(b ^{\parallel}_{(0,|I|)}(s),b^{\perp}_{(0,|I|)}(s))}(\#) \text{for }\ \#\in\{\eta^{\tau},\eta^{\tau_{I}}\},\] \[\operatorname{GE}^{\Lambda}(\#) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau_{2}-\tau_{ I}),(b^{\parallel}_{(|I|,1)}(s),b^{\perp}_{(|I|,1)}(s))}(\#) \text{for }\ \#\in\{\eta^{\tau_{2}},\eta^{\tau_{I}}\},\] _where_ \[b^{\parallel}_{(0,|I|)}(s):=C\left(\frac{1}{\alpha^{\parallel}}+ \frac{1}{\alpha^{\perp}d}\right)|I|s, b^{\perp}_{(0,|I|)}(s):=\frac{C}{\alpha^{\perp}}|I|s,\] \[b^{\parallel}_{(|I|,1)}(s):=C\left(\frac{1}{\alpha^{\parallel}}+ \frac{1}{\alpha^{\perp}d}\right)ds, b^{\perp}_{(|I|,1)}(s):=\frac{C}{\alpha^{\perp}}ds.\] 2. _For any_ \(k\geq 1\)_,_ \[\operatorname{GE}^{\Lambda}(\eta^{\tau_{2k}}) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau_{2k}-\tau_ {2k+1}),(b^{\parallel}_{k}(s),b^{\perp}_{k}(s))}(\eta^{\tau_{2k}})\] \[\stackrel{{(k\geq 2)}}{{=}}\operatorname{GE}^{ \Lambda,\operatorname{supp}(\tau_{2k-1}-\tau_{2k}),(b^{\parallel}_{k-1},b^{ \perp}_{k-1})}(\eta^{\tau_{2k}}),\] _where_ \[b^{\parallel}_{k}(s):=C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{ \perp}d}\right)d^{2}2^{k}s, b^{\perp}_{k}(s):=\frac{C}{\alpha^{\perp}}d^{2}2^{k}s.\] 3. _For_ \(s<\alpha^{\perp}4^{d}\) _it holds that_ \[\operatorname{GE}^{\Lambda}(\#) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\#) \text{for }\ \#\in\{\eta^{\tau},\eta\},\] _where_ \[b^{\parallel}(s):=C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp }d}\right)s, b^{\perp}(s):=\frac{C}{\alpha^{\perp}}s.\] **Enumeration of shifts.** Lastly, we provide a bound on the number of shifts and their coarse/fine grainings, satisfying that their total variation and trip entropy are bounded by given constants. This is done by the following proposition and the corollary that follows it. **Proposition 6.3** (Counting shift functions).: _There exists \(C>0\) such that for each \(\lambda,\rho>0\),_ \[|\{\tau\in\mathcal{S}\colon\,\mathrm{TV}(\tau)\leq\lambda,\,R(\tau)\leq\rho\}| \leq\exp\left(C\min\left\{\lambda+\lambda\log\left(\frac{\rho}{\lambda}+1 \right),\lambda\frac{\log d}{d}+\rho\log d\right\}\right),\] **Corollary 6.4**.: _There exists a universal \(C>0\) such that the following holds._ 1. _For each integer_ \(N\geq 2\) _and_ \(\lambda,\rho>0\)_,_ \[|\{\tau_{N}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq \lambda,R(\tau)\leq\rho\}|\\ \leq\exp\left(C\frac{d\lambda}{N^{d-1}}\left(d\log N+\log\left( \frac{\rho}{d\lambda}+1\right)\right)\right).\] (6.5) 2. 
_For each integer_ \(1\leq r\leq d\)_, a mapping_ \(\tau\mapsto I_{\tau}\) _such that_ \(I_{\tau}\in\mathrm{comp}(\tau)\) _and_ \(|I_{\tau}|=r\)_, and_ \(\lambda,\rho>0\)_,_ (6.6)

Lemma 6.1 will be proved in subsection 6.4, using a concentration estimate. Lemma 6.2 will be proved in section 6.5; its proof requires both establishing basic properties of grainings and using lemmas inspired by Dobrushin's work. Proposition 6.3 and Corollary 6.4 will be proved in section 6.3 using the work of Bollobas-Balister [1] (continuing the work of Lebowitz-Mazel [12]).

### Proof of Lemmas 5.3 and 5.4

In this section we show how Lemma 6.1, Lemma 6.2, Proposition 6.3 and Corollary 6.4 imply Lemmas 5.3 and 5.4. We will continue to use the abbreviations of section 5, specifically \(G\) for \(G^{\eta,\Lambda}\), \(\mathcal{AS}\) for \(\mathcal{AS}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp})\) and MG for \(\mathrm{MG}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp})\). Throughout this section we will use \(C\) and \(c\) to denote positive absolute constants; the values of these constants will be allowed to change from line to line, even within the same calculation, with the value of \(C\) increasing and the value of \(c\) decreasing. In the proofs of Lemmas 5.3 and 5.4 we will use the following corollary of Proposition 6.3 and Corollary 6.4.

**Corollary 6.5**.: _The following bounds hold._

1. _For every_ \(t>0\)_,_ \[|\{\tau\in\mathcal{AS}\colon G(\tau)\leq t\}|\leq\exp\left(C\frac{(\log d)t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right).\] (6.7)
2. _For every integer_ \(N\geq 2\) _and_ \(t>0\)_,_ \[|\{\tau_{N}\colon\tau\in\mathcal{AS},\,G(\tau)\leq t\}|\leq\exp\left(C\frac{d\,t}{\underline{\alpha}^{\perp}N^{d-1}}\left(d\log N+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\right).\] (6.8)
3. _For every integer_ \(1\leq r\leq d\)_, a mapping_ \(\tau\mapsto I_{\tau}\) _such that_ \(I_{\tau}\in\mathrm{comp}(\tau)\) _and_ \(|I_{\tau}|=r\)_, and_ \(t>0\)_,_ \[|\{\tau_{I_{\tau}}\colon\tau\in\mathcal{AS},\,G(\tau)\leq t\}|\leq\exp\left(C\frac{r\,t}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right).\] (6.9)

Proof.: For every \(t>0\), \[\{\tau\in\mathcal{AS}\colon G(\tau)\leq t\}\subseteq\left\{\tau\in\mathcal{S}\colon\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}.\] Hence, by Proposition 6.3, \[|\{\tau\in\mathcal{AS}\colon G(\tau)\leq t\}|\leq\left|\left\{\tau\in\mathcal{S}\colon\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}\right|\] \[\quad\leq\exp\left(C\min\left\{\frac{2t}{\underline{\alpha}^{\perp}}+\frac{2t}{\underline{\alpha}^{\perp}}\log\left(\frac{100\underline{\alpha}^{\perp}}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}+1\right),\frac{2t}{\underline{\alpha}^{\perp}d}\log d+\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\log d\right\}\right)\] \[\quad\leq\exp\left(C\frac{(\log d)t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right),\] for every integer \(N\geq 2\), by (6.5), \[|\{\tau_{N}\colon\tau\in\mathcal{AS},\,G(\tau)\leq t\}|\leq\left|\left\{\tau_{N}\colon\tau\in\mathcal{S},\,\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}\right|\] \[\leq\exp\left(C\frac{2d\,t}{\underline{\alpha}^{\perp}N^{d-1}}\left(d\log N+\log\left(\frac{100}{d^{2}}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)+1\right)\right)\right)\] \[\leq\exp\left(C\frac{d\,t}{\underline{\alpha}^{\perp}N^{d-1}}
\left(d\log N+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\right),\] and for every integer \(1\leq r\leq d\) and mapping \(\tau\mapsto I_{\tau}\) such that \(I_{\tau}\in\operatorname{comp}(\tau)\) and \(|I_{\tau}|=r\), by (6.6), \[|\{\tau_{I_{\tau}}\colon\tau\in\mathcal{AS},\,G(\tau)\leq t\}|\leq\left|\left\{\tau_{I_{\tau}}\colon\tau\in\mathcal{S},\,\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}\right|\] \[\leq\exp\left(C\frac{2rt}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{100\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right)\] \[\leq\exp\left(C\frac{rt}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right).\qed\]

Proof of Lemma 5.3.: For the first part of the Lemma, let \(1\leq r\leq d\), let \(\tau\mapsto I_{\tau}\) be a map assigning to each shift \(\tau\) a compatible set \(I_{\tau}\in\operatorname{comp}(\tau)\) with \(|I_{\tau}|=r\), and let \(Cr\kappa\frac{\log d}{d}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\leq\Gamma\leq 1\). It holds that \[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\}\right)\] \[\quad=\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b^{\parallel}_{(0,r)}(s),b^{\perp}_{(0,r)}(s))}(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\}\right)\] \[\quad\leq C\exp\left(C\frac{(\log d)s}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right)\exp\left(-c\frac{\Gamma s^{2}}{\operatorname{Lip}(\nu^{\parallel})^{2}b^{\parallel}_{(0,r)}+\operatorname{Lip}(\nu^{\perp})^{2}b^{\perp}_{(0,r)}}\right)\] \[\quad=C\exp\left(\left(\frac{C\log d}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}r}\right)s\right),\] with the first equality by Lemma 6.2, the first inequality by union bound, (6.7), and Lemma 6.1, and the second equality by the definition of \(b^{\parallel}_{(0,r)},b^{\perp}_{(0,r)}\) and of \(\kappa\).
Now, for \(C\) sufficiently large, if \[\Gamma\geq Cr\kappa\frac{\log d}{d}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\] then the second term in the exponent is the asymptotically dominant one and one gets \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\}\right)\leq C\exp\left(-c\frac{\Gamma s}{\kappa\underline{\alpha}^{\perp}r}\right).\]

In an identical manner, it holds that \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\}\right)\] \[=\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b_{(r,1)}^{\parallel}(s),b_{(r,1)}^{\perp}(s))}(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\}\right)\] \[\leq C\exp\left(C\frac{rs}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right)\exp\left(-c\frac{\Gamma s^{2}}{\mathrm{Lip}(\nu^{\parallel})^{2}b_{(r,1)}^{\parallel}+\mathrm{Lip}(\nu^{\perp})^{2}b_{(r,1)}^{\perp}}\right)\] \[=C\exp\left(\left(C\frac{r}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d}\right)s\right),\] with the first equality by Lemma 6.2, the first inequality by union bound, (6.9), and Lemma 6.1, and the second equality by the definition of \(b_{(r,1)}^{\parallel},b_{(r,1)}^{\perp}\) and of \(\kappa\). Now, for \(C\) sufficiently large, if \[\Gamma\geq C\kappa\frac{dr}{2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\] then the second term in the exponent is the asymptotically dominant one and one gets \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\}\right)\leq C\exp\left(-c\frac{\Gamma s}{\kappa\underline{\alpha}^{\perp}d}\right).\]

For the third bound, again in an identical manner, \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{2^{k}},\tau_{2^{k+1}})|>\sqrt{\Gamma}s\}\right)\] \[=\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b_{k}^{\parallel}(s),b_{k}^{\perp}(s))}(\tau_{2^{k}},\tau_{2^{k+1}})|>\sqrt{\Gamma}s\}\right)\] \[\leq C\exp\left(C\frac{ds}{\underline{\alpha}^{\perp}2^{k(d-1)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\right)\exp\left(-c\frac{\Gamma s^{2}}{\mathrm{Lip}(\nu^{\parallel})^{2}b_{k}^{\parallel}+\mathrm{Lip}(\nu^{\perp})^{2}b_{k}^{\perp}}\right)\] \[=C\exp\left(\left(C\frac{d}{\underline{\alpha}^{\perp}2^{k(d-1)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d^{2}2^{k}}\right)s\right),\] with the first equality by Lemma 6.2, the first inequality by union bound, (6.8), and Lemma 6.1, and the second equality by the definition of \(b_{k}^{\parallel},b_{k}^{\perp}\) and of \(\kappa\).
For \(C>0\) sufficiently large, if \[\Gamma\geq C\kappa\frac{d^{3}}{2^{k(d-2)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\] then the second term in the exponent is the asymptotically dominant one and one gets \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{2^{k}},\tau_{2^{k+1}})|>\sqrt{\Gamma}s\}\right)\leq C\exp\left(-c\frac{\Gamma s}{\kappa\underline{\alpha}^{\perp}d^{2}2^{k}}\right).\] This concludes the proof.

Proof of Lemma 5.4.: It holds that \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau)|>s\}\right)\] \[\quad=\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)|>s\}\right)\] \[\quad\leq C\exp\left(C\frac{(\log d)s}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right)\exp\left(-c\frac{s^{2}}{\mathrm{Lip}(\nu^{\parallel})^{2}b^{\parallel}+\mathrm{Lip}(\nu^{\perp})^{2}b^{\perp}}\right)\] \[\quad=C\exp\left(\left(\frac{C\log d}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}-\frac{c}{\kappa\underline{\alpha}^{\perp}}\right)s\right),\] with the first equality by Lemma 6.2, the first inequality by union bound, (6.7), and Lemma 6.1, and the second equality by the definition of \(b^{\parallel},b^{\perp}\) and of \(\kappa\). Then, for \(c>0\) sufficiently small, if \(\kappa\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\leq c\frac{d}{\log d}\), which is always the case by condition (1.14), then the second term in the exponent is the asymptotically dominant one and one gets \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau)|>s\}\right)\leq\exp\left(-\frac{c}{\kappa\underline{\alpha}^{\perp}}s\right).\qed\]

### Basic properties of grainings of shifts

This section provides estimates on basic parameters (total variation, support size, trip-entropy, etc.) for coarse and fine grainings of shifts. These estimates will be used in the proof of several of our preliminary statements in the subsequent sections.

#### 6.2.1. Bounding the total variation and weighted difference of coarse grainings

In this section we will prove Proposition 5.2 as well as the following useful bounds.

**Proposition 6.6**.: _For every shift \(\tau\) and every positive integer \(N\),_ \[\mathrm{TV}(\tau_{N})\leq 10d\ \mathrm{TV}(\tau).\]

**Proposition 6.7**.: _For every shift \(\tau\),_ \[|\mathrm{supp}(\tau_{2}-\tau)|\leq\|\tau_{2}-\tau\|_{1}\leq 4\ \mathrm{TV}(\tau)\] _and moreover, for every positive integer \(N\),_ \[|\mathrm{supp}(\tau_{2N}-\tau_{N})|\leq\|\tau_{2N}-\tau_{N}\|_{1}\leq(4d+9)N\ \mathrm{TV}(\tau).\]

The proofs of Propositions 5.2, 6.6 and 6.7 will rely on several lemmas.

**Lemma 6.8** (Isoperimetric inequality on \(\mathbb{Z}^{d}\)).: _Let \(A\) be a finite set of points in \(\mathbb{Z}^{d}\). Then,_ \[|\partial A|\geq 2d|A|^{1-\frac{1}{d}}. \tag{6.10}\] _Moreover, for every \(N\times N\times\cdots\times N\) \(d\)-dimensional cube \(B\) in \(\mathbb{Z}^{d}\),_ \[|\partial A\cap(B\times B)|\geq\frac{2}{3N}\min\left\{|A\cap B|,|B\setminus A|\right\}. \tag{6.11}\]

Proof.: For any \(S\subset\mathbb{Z}^{d}\) and \(1\leq i\leq d\), let \(\pi_{i}(S)\) be the projection of \(S\) on the hyperplane spanned by \(\{e_{1},e_{2},\ldots,e_{d}\}\setminus\{e_{i}\}\).
Recall that the Loomis-Whitney inequality [10] states that for every finite set \(S\) of points in \(\mathbb{Z}^{d}\) it holds that \(\prod_{i=1}^{d}\lvert\pi_{i}(S)\rvert\geq\lvert S\rvert^{d-1}\) and hence, by the AM-GM inequality, \[\sum_{i=1}^{d}\lvert\pi_{i}(S)\rvert\geq d\lvert S\rvert^{1-\frac{1}{d}}. \tag{6.12}\] For every \(1\leq i\leq d\), let \(\partial_{i}A:=\{(u,v)\in\partial A\colon u-v\in\{-e_{i},e_{i}\}\}\). Obviously, \(\lvert\partial_{i}A\rvert\geq 2\lvert\pi_{i}(A)\rvert\) for every \(1\leq i\leq d\), and hence, \[\lvert\partial A\rvert=\sum_{i=1}^{d}\lvert\partial_{i}A\rvert\geq 2\sum_{i=1}^{d}\lvert\pi_{i}(A)\rvert\geq 2d\lvert A\rvert^{1-\frac{1}{d}}.\] We proceed to prove (6.11). Since \(\lvert\partial A\cap(B\times B)\rvert=\lvert\partial(B\setminus A)\cap(B\times B)\rvert\), we may assume with no loss of generality that \(\lvert A\cap B\rvert\leq\frac{1}{2}\lvert B\rvert\). For every \(1\leq i\leq d\), let \[F_{i}:=\left\{x\in\pi_{i}(A\cap B)\colon\lvert A\cap B\cap\pi_{i}^{-1}(x)\rvert=\frac{\lvert B\rvert}{\lvert\pi_{i}(B)\rvert}\right\}.\] For every \(x\in\pi_{i}(A\cap B)\setminus F_{i}\) it holds that \(\lvert\partial_{i}A\cap(B\times B)\cap(\pi_{i}^{-1}(x)\times\pi_{i}^{-1}(x))\rvert\geq 1\). Hence, \[\lvert\partial_{i}A\cap(B\times B)\rvert=\sum_{x\in\pi_{i}(A\cap B)}\lvert\partial_{i}A\cap(B\times B)\cap(\pi_{i}^{-1}(x)\times\pi_{i}^{-1}(x))\rvert\geq\lvert\pi_{i}(A\cap B)\rvert-\lvert F_{i}\rvert,\] and since \[\lvert A\cap B\rvert\geq\sum_{x\in F_{i}}\lvert A\cap B\cap\pi_{i}^{-1}(x)\rvert=\lvert F_{i}\rvert\frac{\lvert B\rvert}{\lvert\pi_{i}(B)\rvert},\] it follows that \[\lvert\partial_{i}A\cap(B\times B)\rvert\geq\lvert\pi_{i}(A\cap B)\rvert-\frac{\lvert\pi_{i}(B)\rvert}{\lvert B\rvert}\lvert A\cap B\rvert.\] Hence, by (6.12), \[\lvert\partial A\cap(B\times B)\rvert=\sum_{i=1}^{d}\lvert\partial_{i}A\cap(B\times B)\rvert\geq\sum_{i=1}^{d}\left(\lvert\pi_{i}(A\cap B)\rvert-\frac{\lvert\pi_{i}(B)\rvert}{\lvert B\rvert}\lvert A\cap B\rvert\right)\] \[=\left(\sum_{i=1}^{d}\lvert\pi_{i}(A\cap B)\rvert\right)-\frac{d}{N}\lvert A\cap B\rvert\geq d\lvert A\cap B\rvert^{1-\frac{1}{d}}-\frac{d}{N}\lvert A\cap B\rvert\] \[=\frac{d}{N}\left(\sqrt[d]{\frac{\lvert B\rvert}{\lvert A\cap B\rvert}}-1\right)\lvert A\cap B\rvert\geq\frac{d(\sqrt[d]{2}-1)}{N}\lvert A\cap B\rvert,\] and (6.11) follows since \(d(\sqrt[d]{2}-1)>\ln 2>2/3\).

**Lemma 6.9** (Functional isoperimetric inequality on \(\mathbb{Z}^{d}\)).: _For every shift function \(\tau\),_ \[\operatorname{TV}(\tau)\geq 2d\left(\sum_{u\in\mathbb{Z}^{d}}\lvert\tau(u)\rvert\right)^{1-\frac{1}{d}}.\]

Proof.: WLOG we may assume that \(\tau\) is non-negative, since \(\operatorname{TV}(\lvert\tau\rvert)\leq\operatorname{TV}(\tau)\) by the triangle inequality. Define the family of sets \(A_{k}:=\left\{u\in\mathbb{Z}^{d}\colon\tau(u)>k\right\}\). By definition, we have that \(\operatorname{TV}(\tau)=\sum_{k\geq 0}|\partial A_{k}|\) and \(\sum_{u\in\mathbb{Z}^{d}}|\tau(u)|=\sum_{k\geq 0}|A_{k}|\) (note that in both sums only finitely many of the terms are non-zero) and so showing \[\sum_{k\geq 0}|\partial A_{k}|\geq 2d\left(\sum_{k\geq 0}|A_{k}|\right)^{1-\frac{1}{d}}\] would be sufficient. Using (6.10) for each of the \(A_{k}\)'s one gets \(\sum_{k\geq 0}|\partial A_{k}|\geq 2d\sum_{k\geq 0}|A_{k}|^{1-\frac{1}{d}}\).
The result follows by setting \(\lambda_{k}:=|A_{k}|/\sum_{k\geq 0}|A_{k}|\) for every \(k\geq 0\) and noting that \(0\leq\lambda_{k}\leq 1\) for every \(k\geq 0\) and hence \[\sum_{k\geq 0}|A_{k}|^{1-\frac{1}{d}}\bigg{/}\left(\sum_{k\geq 0}|A_{k}| \right)^{1-\frac{1}{d}}=\sum_{k\geq 0}\lambda_{k}^{1-\frac{1}{d}}\geq\sum_{k \geq 0}\lambda_{k}=1.\qed\] Recall that for every \(u\in\mathbb{Z}^{d}\), \[\tau_{N}(u)=\left[\tau_{N}^{\operatorname{rough}}(u)\right],\] where by \([a]\) we denote the nearest integer to \(a\) and \[\tau_{N}^{\operatorname{rough}}(u):=\frac{1}{N^{d}}\sum_{v\in Q_{N}(N\,w)}\tau (v),\] where \(w\) is the unique point in \(\mathbb{Z}^{d}\) such that \(u\in Q_{N}(N\,w)\). Proof of Proposition 5.2.: Let \(\tau\) be a shift. Observe that if \(N\) is a positive integer such that \(\sum_{u\in\mathbb{Z}^{d}}|\tau(u)|<N^{d}/2\) then necessarily \(\tau_{N}\equiv 0\). Hence, by Lemma 6.9, \(\tau_{N}\equiv 0\) if \[\left(\frac{\operatorname{TV}(\tau)}{2d}\right)^{\frac{d}{d-1}}<\frac{1}{2}N^ {d},\] i.e., \[N>\sqrt[d]{2}\left(\frac{\operatorname{TV}\left(\tau\right)}{2d}\right)^{ \frac{1}{d-1}}.\qed\] For a shift \(\tau\colon\mathbb{Z}^{d}\to\mathbb{Z}\) and a set \(A\subset\mathbb{Z}^{d}\), define \[\operatorname{TV}(\tau;A):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{A}{2} }|\tau(u)-\tau(v)|.\] **Lemma 6.10**.: _For every \(0<\alpha<\frac{1}{2}\) and \(u\in\mathbb{Z}^{d}\) such that \(|\tau_{N}^{\operatorname{rough}}(N\,u)-\tau_{N}(N\,u)|>\alpha\), it holds that_ \[\operatorname{TV}\left(\tau;Q_{N}(N\,u)\right)\geq\frac{2\alpha}{3}N^{d-1}.\] Proof.: For simplicity, denote \(B:=Q_{N}(N\,u)\) and let \(m:=\min_{v\in B}\tau(v)\), \(M:=\max_{v\in B}\tau(v)\). For every integer \(k\), let \(A_{k}:=\{v\in B\colon\tau(v)>k\}\). Note that \(A_{m-1}=B\) and \(A_{M}=\emptyset\). Let \(\ell:=\min\{m\leq k\leq M\colon|A_{k}|<\frac{1}{2}N^{d}\}\). Note that \(\operatorname{TV}\left(\tau;B\right)=\sum_{k=m}^{M-1}|\partial A_{k}\cap(B \times B)|\) and hence, by (6.11), \[\operatorname{TV}\left(\tau;B\right)\geq\frac{2}{3N}\sum_{k=m}^{M-1}\min\left\{ |A_{k}|,|B\setminus A_{k}|\right\}=\frac{2}{3N}\sum_{k=m}^{\ell-1}|B\setminus A _{k}|+\frac{2}{3N}\sum_{k=\ell}^{M-1}|A_{k}|.\] Hence, the result follows if \(\sum_{k=m}^{\ell-1}\lvert B\setminus A_{k}\rvert\geq\alpha N^{d}\) or \(\sum_{k=\ell}^{M-1}\lvert A_{k}\rvert\geq\alpha N^{d}\). Assume by way of contradiction that \(\sum_{k=m}^{\ell-1}\lvert B\setminus A_{k}\rvert<\alpha N^{d}\) and \(\sum_{k=\ell}^{M-1}\lvert A_{k}\rvert<\alpha N^{d}\). Now, since \[\tau_{N}^{\mathrm{rough}}(N\,u)=\frac{1}{N^{d}}\sum_{v\in B}\tau(v)=m+\sum_{k=m }^{M-1}\frac{\lvert A_{k}\rvert}{N^{d}}=\ell-\sum_{k=m}^{\ell-1}\frac{\lvert B \setminus A_{k}\rvert}{N^{d}}+\sum_{k=\ell}^{M-1}\frac{\lvert A_{k}\rvert}{N^ {d}}\] it follows that \(\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\ell\rvert<\alpha\). In particular, \(\tau_{N}(N\,u)=\ell\) and we get a contradiction. **Lemma 6.11**.: _For every \(\{u,v\}\in E(\mathbb{Z}^{d})\),_ \[\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v) \right\rvert\leq\frac{1}{N^{d-1}}\operatorname{TV}\left(\tau;Q_{N}(N\,u)\cup Q _{N}(N\,v)\right).\] Proof.: With no loss of generality, assume that \(v=u+e_{d}\). Then, \[\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v)=\frac{1}{N^{d }}\sum_{w\in B}\left(\sum_{i=0}^{N-1}\tau(w+ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie _{d})\right),\] where \(B:=N\,u+\{0,1,2,\ldots,N-1\}^{d-1}\times\{0\}\). 
For every \(w\in B\), it holds that \[\sum_{i=0}^{N-1}\tau(w+ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie_{d})=\sum_{i=1}^{2N- 1}\min\{i,2N-i\}\left(\tau(w+(i-1)e_{d})-\tau(w+ie_{d})\right).\] and hence \[\left\lvert\sum_{i=0}^{N-1}\tau(w+ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie_{d}) \right\rvert\leq N\sum_{i=1}^{2N-1}\lvert\tau(w+(i-1)e_{d})-\tau(w+ie_{d})\rvert.\] Therefore, \[\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N \,v)\rvert =\frac{1}{N^{d}}\sum_{w\in B}\left\lvert\sum_{i=0}^{N-1}\tau(w+ ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie_{d})\right\rvert\] \[\leq\frac{1}{N^{d-1}}\sum_{w\in B}\sum_{i=1}^{2N-1}\lvert\tau(w+( i-1)e_{d})-\tau(w+ie_{d})\rvert\] \[\leq\operatorname{TV}\left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right).\qed\] Proof of Proposition 6.6.: It is clearly enough to prove that for every \(\{u,v\}\in E(\mathbb{Z}^{d})\), \[\lvert\tau_{N}(N\,u)-\tau_{N}(N\,v)\rvert\leq\frac{5}{N^{d-1}}\operatorname{ TV}\left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right).\] If \(\tau_{N}(N\,u)=\tau_{N}(N\,v)\) there is nothing to prove. If \(\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v) \right\rvert\geq\frac{1}{3}\), then \[\lvert\tau_{N}(N\,u)-\tau_{N}(N\,v)\rvert\leq\left\lvert\tau_{N}^{\mathrm{ rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v)\right\rvert+1\leq 4\left\lvert \tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v)\right\rvert\] and the result follows from Lemma 6.11. Therefore, suppose that \(\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v) \right\rvert<\frac{1}{3}\) but \(\tau_{N}(N\,u)\neq\tau_{N}(N\,v)\). Then, necessarily, \[\max\left\{\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}(N\,u)\right\rvert,\left\lvert\tau_{N}^{\mathrm{rough}}(N\,v)-\tau_{N}(N\,v)\right\rvert\right\} >\frac{1}{3}.\] If \(|\tau_{N}^{\text{rough}}(N\,u)-\tau_{N}(N\,u)|>\frac{1}{3}\) then by Lemma 6.10, \[|\tau_{N}(N\,u)-\tau_{N}(N\,v)|=1\leq\frac{9}{2N^{d-1}}\,\text{TV}\left(\tau;Q_{ N}(N\,u)\right)\] and similarly, if \(|\tau_{N}^{\text{rough}}(N\,v)-\tau_{N}(N\,v)|>\frac{1}{3}\) then by Lemma 6.10, \[|\tau_{N}(N\,u)-\tau_{N}(N\,v)|=1\leq\frac{9}{2N^{d-1}}\,\text{TV}\left(\tau;Q _{N}(N\,v)\right).\qed\] **Lemma 6.12**.: _Let \(w\in\mathbb{Z}^{d}\), let \(g\) be a function from \(B:=Q_{2}(2\,w)\) to \(\mathbb{R}\), and let \(\mu:=\frac{1}{2^{d}}\sum_{u\in B}g(u)\). Then,_ \[\sum_{u\in B}|g(u)-\mu|\leq\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap{B\choose 2}} |g(u)-g(v)|.\] Proof.: We first prove by induction on \(k\) that for every \(1\leq k\leq d\) that \[\sum_{\begin{subarray}{c}\{u,v\}\in{B\choose 2}\\ \|u-v\|_{1}=k\end{subarray}}|g(u)-g(v)|\leq{d-1\choose k-1}\sum_{\{u,v\}\in E( \mathbb{Z}^{d})\cap{B\choose 2}}|g(u)-g(v)|. 
\tag{6.13}\] The base case \(k=1\) obviously holds as equality, and if (6.13) holds for some \(k\), then it follows that it holds for \(k+1\) as well, since \[\sum_{\begin{subarray}{c}\{u,v\}\in{B\choose 2}\\ \|u-v\|_{1}=k+1\end{subarray}}|g(u)-g(v)|=\frac{1}{2}\sum_{u\in B}\sum_{ \begin{subarray}{c}v\in B\\ \|u-v\|_{1}=k+1\end{subarray}}|g(u)-g(v)|\] \[\leq \frac{1}{2}\sum_{u\in B}\sum_{\begin{subarray}{c}v\in B\\ \|u-v\|_{1}=k+1\end{subarray}}\frac{1}{k+1}\sum_{\begin{subarray}{c}w\in B\\ \|u-w\|_{1}=k,\,w\sim v\end{subarray}}\left(|g(u)-g(w)|+|g(w)-g(v)|\right)\] \[= \frac{1}{2}\sum_{u\in B}\frac{d-k}{k+1}\sum_{\begin{subarray}{c}w \in B\\ \|u-w\|_{1}=k\end{subarray}}|g(u)-g(w)|+\frac{1}{2}\sum_{v\in B}\frac{1}{k+1}{ d-1\choose k}\sum_{\begin{subarray}{c}w\in B\\ w\sim v\end{subarray}}|g(w)-g(v)|\] \[= \frac{d-k}{k+1}\sum_{\begin{subarray}{c}\{u,w\}\in{B\choose 2}\\ \|u-w\|_{1}=k\end{subarray}}|g(u)-g(w)|+\frac{1}{k+1}{d-1\choose k}\sum_{ \begin{subarray}{c}\{v,w\}\in E(\mathbb{Z}^{d})\cap{B\choose 2}\end{subarray}}|g(v)-g(w)|.\] Now we can conclude the proof of the Lemma. \[\sum_{u\in B}\lvert g(u)-\mu\rvert \leq\sum_{u\in B}\frac{1}{2^{d}}\sum_{v\in B}\lvert g(u)-g(v)\rvert\] \[=\frac{1}{2^{d-1}}\sum_{\{u,v\}\in\binom{B}{2}}\lvert g(u)-g(v) \rvert=\frac{1}{2^{d-1}}\sum_{k=1}^{d}\sum_{\begin{subarray}{c}\{u,v\}\in \binom{B}{2}\\ \lVert u-v\rVert_{1}=k\end{subarray}}\lvert g(u)-g(v)\rvert\] \[\leq\frac{1}{2^{d-1}}\sum_{k=1}^{d}\binom{d-1}{k-1}\sum_{\{u,v\} \in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\lvert g(u)-g(v)\rvert\] \[=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\lvert g(u)-g (v)\rvert.\qed\] Proof of Proposition 6.7.: It is clearly enough to prove that for every \(w\in\mathbb{Z}^{d}\), it holds that \[\sum_{u\in Q_{2N}(2N\,w)}\lvert\tau_{2N}(u)-\tau_{N}(u)\rvert\] \[\leq 9N\sum_{v\in B}\operatorname{TV}\left(\tau;Q_{N}(N\,v) \right)+4N\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\operatorname{TV} \left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right),\] where \(B:=Q_{2}(2\,w)\). Let \[A:=\left\{v\in B\colon\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{N}(N \,v)\rvert>\frac{1}{6}\right\}.\] Note that for every \(v\in B\), if \(\tau_{N}(N\,v)=\tau_{2N}(N\,v)\) or \(\left\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{2N}^{\operatorname{ rough}}(N\,v)\right\rvert\geq\frac{1}{3}\) then \(\lvert\tau_{N}(N\,v)-\tau_{2N}(N\,v)\rvert\leq 4\left\lvert\tau_{N}^{ \operatorname{rough}}(N\,v)-\tau_{2N}^{\operatorname{rough}}(N\,v)\right\rvert\) and if \(\tau_{N}(N\,v)\neq\tau_{2N}(N\,v)\) and \(\left\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{2N}^{\operatorname{ rough}}(N\,v)\right\rvert<\frac{1}{3}\), then \(\lvert\tau_{N}(N\,v)-\tau_{2N}(N\,v)\rvert=1\) and \(v\in A\). 
Hence, \[\frac{1}{N^{d}}\sum_{u\in Q_{2N}(2N\,w)}\lvert\tau_{N}(u)-\tau_{ 2N}(u)\rvert =\sum_{v\in B}\lvert\tau_{N}(N\,v)-\tau_{2N}(N\,v)\rvert\] \[\leq\lvert A\rvert+4\sum_{v\in B}\lvert\tau_{N}^{\operatorname{ rough}}(N\,v)-\tau_{2N}^{\operatorname{rough}}(N\,v)\rvert,\] and we are done since by Lemma 6.10, \[\lvert A\rvert\leq\frac{9}{N^{d-1}}\sum_{v\in A}\operatorname{TV}\left(\tau;Q _{N}(N\,v)\right)\] and by Lemma 6.12 and Lemma 6.11, \[\sum_{v\in B}\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{2N }^{\operatorname{rough}}(N\,v)\rvert \leq\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\lvert \tau_{N}^{\operatorname{rough}}(N\,u)-\tau_{N}^{\operatorname{rough}}(N\,v)\rvert\] \[\leq\frac{1}{N^{d-1}}\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap \binom{B}{2}}\operatorname{TV}\left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right).\qed\] #### 6.2.2. Bounding the total variation and weighted difference for fine grainings In this section we prove analogous results to Propositions 6.6 and 6.7 for fine grainings, and deduce Proposition 5.1. For \(I\in\binom{[d]}{r}\), let \(T_{I}\colon\mathbb{Z}^{d}\to\mathbb{Z}^{d}\) be defined as follows: for any \(x=(x_{1},x_{2},\ldots,x_{d})\in\mathbb{Z}^{d}\) and every \(1\leq i\leq d\), the \(i\)th coordinate of \(T_{I}(x)\) is \(2x_{i}\) if \(i\in I\) and \(x_{i}\) otherwise. Recall that for every \(u\in\mathbb{Z}^{d}\), \[\tau_{I}(u)=\left[\tau_{I}^{\text{rough}}(u)\right],\] where by \([a]\) we denote the nearest integer to \(a\) and \[\tau_{I}^{\text{rough}}(u):=\frac{1}{2^{r}}\sum_{v\in Q_{I}(T_{I}(w))}\tau(v),\] where \(w\) is the unique point in \(\mathbb{Z}^{d}\) such that \(u\in Q_{I}(T_{I}(w))\). **Lemma 6.13**.: _For every \(I\in\binom{[d]}{r}\) and \(\{u,v\}\in E(\mathbb{Z}^{d})\),_ \[\left|\tau_{I}^{\text{rough}}(T_{I}(u))-\tau_{I}^{\text{rough}}(T_{I}(v)) \right|\leq\frac{1}{2^{r-1}}\operatorname{TV}\left(\tau;Q_{I}(T_{I}(u))\cup Q _{I}(T_{I}(v))\right). \tag{6.14}\] Proof.: If \(u-v\in\{-e_{i},e_{i}\}\) for \(i\notin I\), then (6.14) easily follows by a straightforward use of the triangle inequlity; otherwise, (6.14) is simply the claim of Lemma 6.11 for \(N=2\) in the appropriate \(r\)-dimensional affine subspace of \(\mathbb{Z}^{d}\). **Proposition 6.14**.: _Let \(I\in\binom{[d]}{r}\) be chosen uniformly at random. Then, for every \(\tau\colon\Lambda\to\mathbb{Z}\), the following holds:_ \[\operatorname{\mathbb{E}}\operatorname{TV}(\tau_{I})\leq 10(2r+1)\operatorname{ TV}(\tau)\] Proof.: For every \(\{x,y\}\in E(\mathbb{Z}^{d})\), let \(X_{\{x,y\}}\) be the random variable defined as follows: \(X_{\{x,y\}}=2d\) if \(x-y\in\{e_{i},-e_{i}\}\) for \(i\in I\) and \(X_{\{x,y\}}=1\) otherwise. Note that for every \(\{x,y\}\in E(\mathbb{Z}^{d})\), it holds that \(\operatorname{\mathbb{E}}X_{\{x,y\}}=\frac{r}{d}\cdot 2d+\left(1-\frac{r}{d} \right)\cdot 1<2r+1\). 
Note that for every \(\{x,y\}\in E(\mathbb{Z}^{d})\), \[X_{\{x,y\}}=|\{u,v\}\in E(\mathbb{Z}^{d})\colon\{x,y\}\subset Q_{I}(T_{I}(u) \cup Q_{I}(T_{I}(v))\}|.\] The same argument as in the proof of Proposition 6.6, where Lemma 6.11 is replaced by Lemma 6.13, and Lemma 6.10 is applied in an appropriate \(r\)-dimensional affine subspace of \(\mathbb{Z}^{d}\), yields that for every \(\{u,v\}\in E(\mathbb{Z}^{d})\), \[|\tau_{I}(T_{I}(u))-\tau_{I}(T_{I}(v))|\leq\frac{5}{2^{r-1}}\operatorname{TV} \left(\tau;Q_{I}(T_{I}(u))\cup Q_{I}(T_{I}(v))\right).\] Therefore, \[\operatorname{TV}(\tau_{I}) \leq 2^{r}\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}|\tau_{I}(T_{I}(u))- \tau_{I}(T_{I}(v))|\] \[\leq 10\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}\operatorname{TV}(\tau ;Q_{I}(T_{I}(u)\cup Q_{I}(T_{I}(v)))=10\sum_{\{x,y\}\in E(\mathbb{Z}^{d})}| \tau(x)-\tau(y)|X_{\{x,y\}}.\] Hence, \[\operatorname{\mathbb{E}}\operatorname{TV}(\tau_{I})\leq 10\sum_{\{x,y\}\in E( \mathbb{Z}^{d})}|\tau(x)-\tau(y)|\operatorname{\mathbb{E}}X_{\{x,y\}}\leq 1 0(2r+1)\operatorname{TV}(\tau).\qed\] **Proposition 6.15**.: _Let \(I\in\binom{[d]}{r}\) be chosen uniformly at random. Then, for every \(\tau\colon\Lambda\to\mathbb{Z}\), the following holds:_ \[\mathbb{E}\|\tau_{I}-\tau\|_{1}\leq\frac{2r}{d}\operatorname{TV}(\tau).\] Proof.: For every \(1\leq i\leq d\), let \(X_{i}\) be the random variable defined as follows: \(X_{i}=1\) if \(i\in I\) and \(X_{i}=0\) otherwise. For every \(u\in\mathbb{Z}^{d}\) it holds that \(|\tau(u)-\tau_{I}(u)|\leq 2\left|\tau(u)-\tau_{I}^{\operatorname{rough}}(u)\right|\). Hence, for every \(w\in\mathbb{Z}^{d}\), by Lemma 6.12, \[\sum_{v\in Q_{I}(T_{I}(w))}|\tau(u)-\tau_{I}(u)| \leq 2\sum_{v\in Q_{I}(T_{I}(w))}|\tau(u)-\tau_{I}^{\operatorname{ rough}}(u)|\] \[\leq 2\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{Q_{I}(T_{I}(w ))}{2}}|\tau(u)-\tau(v)|.\] Therefore, \[\|\tau_{I}-\tau\|_{1}\leq 2\sum_{i\in I}\sum_{\begin{subarray}{c}\{u,v\}\in E (\mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|=2\sum_{i=1}^{d}\sum_{ \begin{subarray}{c}\{u,v\}\in E(\mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|X_{i}\] and hence, \[\mathbb{E}\|\tau_{I}-\tau\|_{1} \leq 2\sum_{i=1}^{d}\sum_{\begin{subarray}{c}\{u,v\}\in E( \mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|\mathbb{E}X_{i}\] \[=\frac{2r}{d}\sum_{i=1}^{d}\sum_{\begin{subarray}{c}\{u,v\}\in E (\mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|=\frac{2r}{d} \operatorname{TV}(\tau).\qed\] Proof of Proposition 5.1.: Let \(I\in\binom{[d]}{r}\) be chosen uniformly at random, By Markov inequality and Propositions 6.14 and 6.15, \[\mathbb{P}\left(\operatorname{TV}(\tau_{I})\geq 20(2r+1)\operatorname{TV}( \tau)+1\right)\leq\frac{\mathbb{E}\operatorname{TV}(\tau_{I})}{20(2r+1) \operatorname{TV}(\tau)+1}<\frac{1}{2}\] and \[\mathbb{P}\left(\|\tau_{I}-\tau\|_{1}\geq\frac{4r}{d}\operatorname{TV}(\tau) \right)\leq\frac{\mathbb{E}\|\tau_{I}-\tau\|_{1}}{\frac{4r}{d}\operatorname{ TV}(\tau)}\leq\frac{1}{2}.\] Hence, \[\mathbb{P}\left(\operatorname{TV}(\tau_{I})\leq 20(2r+1)\operatorname{TV}(\tau) \text{ and }\|\tau_{I}-\tau\|_{1}<\frac{4r}{d}\operatorname{TV}(\tau)\right)>0\] and the result follows. #### 6.2.3. 
Entropy bounds **Lemma 6.16**.: _For every \(A\subset\mathbb{Z}^{d}\) and finite \(B\subseteq\partial^{\operatorname{out}}A\), there is a set \(S\subseteq B\) such that \(|S|<\frac{1}{d}|\partial A|\) and \(B\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\)._ Proof.: We first show that for every \(a\in\partial^{\mathrm{out}}A\), \[\left|\partial A\cap\left(\mathbb{Z}^{d}\times\mathcal{B}_{2}(a)\right)\right|>d. \tag{6.15}\] With no loss of generality assume that \(a+e_{d}\in A\), and let \(E:=\{-e_{i}\}_{i=1}^{d-1}\cup\{e_{i}\}_{i=1}^{d-1}\). For every \(u\in E\), denote \[\mathcal{T}_{u}:=\left\{(a+u,a),(a+e_{d},a+u++e_{d}),(a+u+e_{d},a+u)\right\}.\] If \(a+u\in A\) then \((a_{u},a)\in\partial A\); if \(a+u+e_{d}\notin A\) then \((a+e_{d},a+u+e_{d})\in\partial A\); finally, if \(a+u\notin A\) and \(a+u+e_{d}\in A\) then \((a+u+e_{d},a+u)\in\partial A\). Hence, \(\partial A\cap\mathcal{T}_{u}\neq\emptyset\) for every \(u\in E\), and (6.15) follows since the \(2d-2>d\) sets \(\{\mathcal{T}_{u}\}_{u\in E}\) are mutually disjoint. Now, let \(S\) be a set of maximal cardinality in \(B\) such that the sets \(\{\mathcal{B}_{2}(a)\}_{a\in S}\) are mutually disjoint. The maximality of \(S\) implies that \(B\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\), and by (6.15), \[|S|<\frac{1}{d}\sum_{a\in S}\left|\partial A\cap\left(\mathbb{Z}^{d}\times \mathcal{B}_{2}(a)\right)\right|\leq\frac{1}{d}|\partial A|.\qed\] We will say that a set \(A\subseteq\mathbb{Z}^{d}\) is \(\ell_{1}^{+}\)_-connected_ if for any two points \(a,b\in A\) there is a sequence \(a=s_{0},s_{1},\ldots,s_{n}=b\) of points in \(A\) such that \(\|s_{i-1}-s_{i}\|_{1}\leq 2\) for every \(1\leq i\leq n\). **Lemma 6.17**.: _Let \(A\subset\mathbb{Z}^{d}\) be an \(\ell_{1}^{+}\)-connected finite set, and assume that there is a set \(S\subseteq A\) such that \(A\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\). Then,_ \[\mathrm{diam}(A)<10|S| \tag{6.16}\] _Moreover, there is an ordering \(a_{1},a_{2},\ldots,a_{|S|}\) of \(S\) such that, denoting \(a_{|S|+1}:=a_{1}\),_ \[\sum_{i=1}^{|S|}\|a_{i}-a_{i+1}\|_{1}<20|S|. \tag{6.17}\] _Consequently, for every finite \(\Omega\subset\mathbb{Z}^{d}\), there is an ordering \(\omega_{1},\omega_{2},\ldots,\omega_{|\Omega|}\) of \(\Omega\) such that, denoting \(\omega_{|\Omega|+1}:=\omega_{1}\),_ \[\sum_{i=1}^{|\Omega|}\|\omega_{i}-\omega_{i+1}\|_{1}<20|S|+8|\Omega|+2\sum_{ \omega\in\Omega}\mathrm{dist}(\omega,A). \tag{6.18}\] Proof.: Consider the complete graph \(K\) on the vertex set \(S\), and its spanning subgraph \(G\) in which \(a,b\in S\) are adjacent if there are \(u\in\mathcal{B}_{4}(a)\), \(v\in\mathcal{B}_{4}(b)\) such that \(\|u-v\|_{1}\leq 2\). For any edge \(e=\{a,b\}\) of \(K\), denote \(\|e\|:=\|a-b\|_{1}\). Note that \(\|e\|\leq 10\) for every edge \(e\) of \(G\). Since \(A\) is \(\ell_{1}^{+}\)-connected, it follows that the graph \(G\) is connected. Let \(\mathcal{T}\) be a spanning tree of \(G\). To prove (6.16), we need to show that \(\|a-\tilde{a}\|_{1}<10|S|\) for every \(a,\tilde{a}\in A\). There are \(s,\tilde{s}\in S\) such that \(a\in\mathcal{B}_{4}(s)\) and \(\tilde{a}\in\mathcal{B}_{4}(\tilde{s})\). Let \(s=s_{0},s_{1},\ldots,s_{k}=\tilde{s}\) be the unique path from \(s\) to \(\tilde{s}\) in \(\mathcal{T}\). 
Then, \[\|a-\tilde{a}\|_{1}\leq\|a-s\|_{1}+\sum_{i=1}^{k}\|s_{i-1}-s_{i}\|_{1}+\| \tilde{s}-\tilde{a}\|_{1}\leq 4+10k+4<10(k+1)\leq 10|S|.\] Using the structure of the tree, we may arrange the edges of \(\mathcal{T}\), each taken in both directions to create a directed cycle \(\mathcal{C}_{0}\) that goes through all the vertices. Let \(\mathcal{C}_{1}\) be the simple cycle in \(K\) obtained from \(\mathcal{C}_{0}\) by omitting multiple occurrences of vertices. Then, by using the triangle inequality, \[\sum_{e\in E(\mathcal{C}_{1})}\|e\|\leq\sum_{e\in E(\mathcal{C}_{0})}\|e\|=2\sum_{ e\in E(\mathcal{T})}\|e\|\leq 20|E(\mathcal{T})|=20(|S|-1)\] which proves (6.17). Finally, let \(\Omega\subset\mathbb{Z}^{d}\) be a finite set. Let \(a_{1},a_{2},\ldots,a_{|S|}\) be an ordering of \(S\) such that (denoting \(a_{|S|+1}:=a_{1}\)) \(\sum_{i=1}^{|S|}\|a_{i}-a_{i+1}\|_{1}<20|S|\). For every \(\omega\in\Omega\), there is \(1\leq n(\omega)\leq|S|\) such that \(\|\omega-a_{n(\omega)}\|_{1}\leq 4+\operatorname{dist}(\omega,A)\). Let \(\omega_{1},\omega_{2},\ldots,\omega_{|\Omega|}\) be an ordering of \(\Omega\) such that \(n(\omega_{j})\leq n(\omega_{j+1})\) for every \(1\leq j<|\Omega|\) (it is easy to see that such orderings exist), and denote \(\omega_{|\Omega|+1}:=\omega_{1}\). Then, for every \(1\leq j\leq|\Omega|\), \[\|\omega_{j}-\omega_{j+1}\|_{1} \leq\|\omega_{j}-a_{n(\omega_{j})}\|_{1}+\sum_{i=n(\omega_{j})}^{ n(\omega_{j+1})-1}\|a_{i}-a_{i+1}\|_{1}+\|a_{n(\omega_{j+1})}-\omega_{j+1}\|_{1}\] \[\leq\sum_{i=n(\omega_{j})}^{n(\omega_{j+1})-1}\|a_{i}-a_{i+1}\|_{1 }+8+\operatorname{dist}(\omega_{j},A)+\operatorname{dist}(\omega_{j+1},A),\] where, for \(j=|\Omega|\), the sum \(\sum_{i=n(\omega_{|\Omega|})}^{n(\omega_{1})-1}\|a_{i}-a_{i+1}\|_{1}\) should be interpreted as \(\sum_{i=n(\omega_{|\Omega|})}^{|S|}\|a_{i}-a_{i+1}\|_{1}+\sum_{i=1}^{n(\omega_ {1})-1}\|a_{i}-a_{i+1}\|_{1}\). Hence, \[\sum_{j=1}^{|\Omega|}\|\omega_{j}-\omega_{j+1}\|_{1} \leq\sum_{i=1}^{|S|}\|a_{i}-a_{i+1}\|_{1}+8|\Omega|+2\sum_{\omega \in\Omega}\operatorname{dist}(\omega,A)\] \[<20|S|+8|\Omega|+2\sum_{\omega\in\Omega}\operatorname{dist}(\omega,A).\qed\] Following Timar [113], we define, for a set \(A\subseteq\mathbb{Z}^{d}\) and \(v\in\mathbb{Z}^{d}\cup\{\infty\}\), the outer vertex boundary of \(A\) visible from \(v\): \[\partial_{\operatorname{vis}(v)}A:=\left\{u\in\partial^{\operatorname{out}}A \colon\text{there exists a path from $u$ to $v$ not intersecting $A$}\right\}. \tag{6.19}\] **Observation 6.18**.: _For every bounded \(A\subseteq\mathbb{Z}^{d}\) and every \(u\in A\) it holds that_ \[\operatorname{dist}(u,\partial_{\operatorname{vis}(\infty)}A)<\operatorname{ diam}(\partial_{\operatorname{vis}(\infty)}A).\] Proof.: There is \(w\in\partial_{\operatorname{vis}(\infty)}A\) such that \(\operatorname{dist}(u,\partial_{\operatorname{vis}(\infty)}A)=\|u-w\|_{1}\). With no loss of generality we may assume that the first coordinate of \(u-w\) is non-negative. Let \(n_{0}:=\max\{n\in\mathbb{Z}\colon u+ne_{1}\in A\}+1\). Obviously, \(u+n_{0}e_{1}\in\partial_{\operatorname{vis}(\infty)}A\). Therefore, \[\operatorname{dist}(u,\partial_{\operatorname{vis}(\infty)}A)=\|u-w\|_{1}<\|( u+n_{0}e_{1})-w\|_{1}\leq\operatorname{diam}(\partial_{\operatorname{vis}( \infty)}A).\qed\] The following lemma is a special case of [113, Theorem 3]. 
**Lemma 6.19**.: _For every connected \(A\subseteq\mathbb{Z}^{d}\) and every \(v\in\mathbb{Z}^{d}\cup\{\infty\}\), the set \(\partial_{vis(v)}A\) is \(\ell_{1}^{+}\)-connected._ **Observation 6.20**.: _The number of level components of \(\tau_{N}\) satisfies the following bound_ \[|\mathcal{LC}(\tau_{N})|\leq\frac{\operatorname{TV}(\tau_{N})}{dN^{d-1}}\] _and similarly, for the number of level components of \(\tau_{I}\),_ \[|\mathcal{LC}(\tau_{I})|\leq\frac{\operatorname{TV}(\tau_{I})}{d\,2^{|I|-1}}.\] Proof.: For every level component \(A\) of \(\tau_{N}\) it holds, by (6.10), that \(|\partial A|\geq 2d|A|^{1-\frac{1}{d}}\geq 2d\,N^{d-1}\). Hence, \[|\mathcal{LC}(\tau_{N})|\leq\frac{1}{2d\,N^{d-1}}\sum_{A\in\mathcal{LC}(\tau_ {N})}|\partial A|\leq\frac{1}{2d\,N^{d-1}}2\,\operatorname{TV}(\tau_{N}).\] Similarly, for every level component \(A\) of \(\tau_{I}\) it holds, by (6.10), that \(|\partial A|\geq 2d|A|^{1-\frac{1}{d}}\geq 2d\,2^{|I|-\frac{|I|}{d}}\geq 2d\,2^{|I|-1}\). Hence, \[|\mathcal{LC}(\tau_{N})|\leq\frac{1}{2d\,2^{|I|-1}}\sum_{A\in\mathcal{LC}( \tau_{I})}|\partial A|\leq\frac{1}{2d\,2^{|I|-1}}2\,\operatorname{TV}(\tau_{ I}).\qed\] **Proposition 6.21**.: _Let \(\tau,\tilde{\tau}\) be two shifts and let \(r\) be a positive integer such that for every level component \(\tilde{A}\in\mathcal{LC}(\tilde{\tau})\) of \(\tilde{\tau}\), there exists a level component \(A\in\mathcal{LC}(\tau)\) of \(\tau\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq r\). Then,_ \[R(\tilde{\tau})\leq R(\tau)+\frac{88}{d}\operatorname{TV}(\tau)+(2r+8)| \mathcal{LC}(\tilde{\tau})|.\] Proof.: For simplicity, denote \(N:=|\mathcal{LC}(\tau)|\). Let \((u_{i})_{i=0}^{N-1}\) be a sequence of points in \(\mathbb{Z}^{d}\) such that \(u_{0}=0\), \(\sum_{i=1}^{N-1}\lVert u_{i-1}-u_{i}\rVert_{1}=R(\tau)\) and each \(u_{i}\) is in a different level component of \(\tau\), which we denote \(A_{i}\). For every \(\tilde{A}\in\mathcal{LC}(\tilde{\tau})\) there are \(0\leq i(\tilde{A})\leq N-1\) and \(\omega(\tilde{A})\in\tilde{A}\) such that \(\operatorname{dist}(\omega(\tilde{A}),\partial_{\operatorname{vis}(\infty)}A_{ i(\tilde{A})})\leq r\). For every \(0\leq i\leq N-1\), let \[\Omega_{i}:=\{u_{i}\}\cup\{\omega(\tilde{A})\colon\tilde{A}\in\mathcal{LC}( \tilde{\tau}),\,i(\tilde{A})=i\}.\] By Lemma 6.16, there is a set \(S_{i}\subseteq\partial_{\operatorname{vis}(\infty)}A_{i}\) such that \(|S_{i}|<\frac{1}{d}|\partial A_{i}|\) and \(\partial_{\operatorname{vis}(\infty)}A_{i}\subseteq\bigcup_{a\in S_{i}} \mathcal{B}_{4}(a)\). By Observation 6.18 and (6.16), \[\operatorname{dist}(x_{i},\partial_{\operatorname{vis}(\infty)}A_{i})< \operatorname{diam}(\partial_{\operatorname{vis}(\infty)}A_{i})<10|S_{i}|< \frac{10}{d}|\partial A_{i}|.\] The set \(\partial_{\operatorname{vis}(\infty)}A_{i}\) is \(\ell_{1}^{+}\)-connected, by Lemma 6.19. Hence, by (6.18), there is an ordering \(\omega_{1}^{(i)},\omega_{2}^{(i)},\ldots,\omega_{|\Omega_{i}|}^{(i)}\) of \(\Omega_{i}\) such that, denoting \(\omega_{|\Omega_{i}|+1}^{(i)}:=\omega_{1}^{(i)}\), \[\sum_{j=1}^{|\Omega_{i}|}\lVert\omega_{j}^{(i)}-\omega_{j+1}^{(i) }\rVert_{1}< 20|S_{i}|+8|\Omega_{i}|+2\sum_{\omega\in\Omega_{i}} \operatorname{dist}(\omega,\partial_{\operatorname{vis}(\infty)}A_{i})\] \[< \frac{20}{d}|\partial A_{i}|+8|\Omega_{i}|+\frac{20}{d}|\partial A _{i}|+2(|\Omega_{i}|-1)r\] \[= \frac{40}{d}|\partial A_{i}|+(2r+8)(|\Omega_{i}|-1)+8.\] With no loss of generality we may assume that \(\omega_{1}^{(i)}=x_{i}\). 
Hence, for every \(1\leq i\leq N-1\), \[\lVert\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{2}^{(i)}\rVert_{1} \leq\lVert\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{1}^{(i-1)} \rVert_{1}+\lVert\omega_{1}^{(i-1)}-\omega_{1}^{(i)}\rVert_{1}+\lVert\omega_{1 }^{(i)}-\omega_{2}^{(i)}\rVert_{1}\] \[=\lVert\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{1}^{(i-1)}\rVert_{1 }+\lVert u_{i-1}-u_{i}\rVert_{1}+\lVert\omega_{1}^{(i)}-\omega_{2}^{(i)}\rVert _{1}.\] Therefore, considering the sequence \[0=\omega_{1}^{(0)},\omega_{2}^{(0)},\ldots,\omega_{|\Omega_{0}|}^{(0)},\omega_{2}^ {(1)},\omega_{3}^{(1)},\ldots,\omega_{|\Omega_{1}|}^{(1)},\omega_{2}^{(2)},\omega_ {3}^{(2)},\ldots,\omega_{|\Omega_{N-1}|}^{(N-1)},\] we conclude that \[R(\tilde{\tau})\leq\sum_{j=1}^{|\Omega_{0}|-1}\|\omega_{j}^{(0)} -\omega_{j+1}^{(0)}\|_{1}+\sum_{i=1}^{N-1}\left(\|\omega_{|\Omega_{i-1}|}^{(i- 1)}-\omega_{2}^{(i)}\|_{1}+\sum_{j=2}^{|\Omega_{i}|-1}\|\omega_{j}^{(i)}- \omega_{j+1}^{(i)}\|_{1}\right)\] \[\leq\sum_{j=1}^{|\Omega_{0}|-1}\|\omega_{j}^{(0)}-\omega_{j+1}^{( 0)}\|_{1}+\sum_{i=1}^{N-1}\left(\|\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{1}^ {(i-1)}\|_{1}+\|u_{i-1}-u_{i}\|_{1}+\sum_{j=1}^{|\Omega_{i}|-1}\|\omega_{j}^{( i)}-\omega_{j+1}^{(i)}\|_{1}\right)\] \[=\sum_{i=1}^{N-1}\sum_{j=1}^{|\Omega_{i}|}\|\omega_{j}^{(i)}- \omega_{j+1}^{(i)}\|_{1}-\|\omega_{|\Omega_{N-1}|}^{(N-1)}-\omega_{1}^{(N-1) }\|_{1}+R(\tau)\] \[<\sum_{i=1}^{N-1}\left(\frac{40}{d}|\partial A_{i}|+(2r+8)(| \Omega_{i}|-1)+8\right)+R(\tau)\] \[=\] and the result follows since \(\sum_{i=1}^{N-1}|\partial A_{i}|\leq 2\operatorname{TV}(\tau)\), \(\sum_{i=1}^{N-1}(|\Omega_{i}|-1)=|\mathcal{LC}(\tilde{\tau})|\) and by Observation 6.20, \(N-1<\operatorname{TV}(\tau)/d\). **Lemma 6.22**.: _There is a universal constant \(C>0\) such that for every shift \(\tau\) and for every integer \(N\geq 2\),_ \[R(\tau_{N})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau), \tag{6.20}\] _and for every \(I\in\operatorname{comp}(\tau)\),_ \[R(\tau_{I})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau). \tag{6.21}\] Proof.: Let \(\tilde{A}\) be a level component of \(\tau_{N}\). By the definition of \(\tau_{N}\), there are necessarily \(u_{1}\sim u_{2}\), both at distance at most \(N\) from \(\tilde{A}\), such that \(u_{1}\in A_{1}\) and \(u_{2}\in A_{2}\), where \(A_{1},A_{2}\) are distinct level components of \(\tau\). It is easy to see that for every two disjoint connected sets \(A_{1},A_{2}\subseteq\mathbb{Z}^{d}\), it holds that \[E(\mathbb{Z}^{d})\cap(A_{1}\times A_{2})\subseteq\left(A_{1}\times\partial_{ \operatorname{vis}(\infty)}(A_{1})\right)\cup\left(\partial_{\operatorname{ vis}(\infty)}(A_{2})\times A_{2}\right). \tag{6.22}\] It follows that for every level component \(\tilde{A}\) of \(\tau_{N}\), there is a level component \(A\) of \(\tau\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq N\). Hence, by Proposition 6.21, \[R(\tau_{N})\leq R(\tau)+\frac{88}{d}\operatorname{TV}(\tau)+(2N+8)|\mathcal{ LC}(\tau_{N})|,\] and (6.20) follows, since by Observation 6.20 and Proposition 6.6, \[|\mathcal{LC}(\tau_{N})|\leq\frac{1}{dN^{d-1}}\operatorname{TV}(\tau_{N})\leq \frac{10}{N^{d-1}}\operatorname{TV}(\tau).\] Similarly, if \(I\subseteq[d]\), then for every level component \(\tilde{A}\) of \(\tau_{I}\), there is a level component \(A\) of \(\tau\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq 2\). 
Hence, by Proposition 6.21, \[R(\tau_{I})\leq R(\tau)+\frac{88}{d}\operatorname{TV}(\tau)+12|\mathcal{LC}( \tau_{I})|,\] and (6.21) follows, for \(I\in\operatorname{comp}(\tau)\), since then, by Observation 6.20, \[|\mathcal{LC}(\tau_{I})|\leq\frac{1}{d2^{|I|-1}}\mathrm{TV}(\tau_{I})\leq \frac{20(2|I|+1)}{d\,2^{|I|-1}}\mathrm{TV}(\tau).\qed\] The following observation is a simple one, deriving from the definition of total variation and triangle inequality: **Observation 6.23**.: _For any two shifts \(\tau,\tau^{\prime}\) the following holds:_ \[\mathrm{TV}(\tau+\tau^{\prime})\leq\mathrm{TV}(\tau)+\mathrm{TV}(\tau^{ \prime}).\] **Lemma 6.24**.: _There is a universal constant \(c\) such that for any two shifts \(\tau,\tau^{\prime}\),_ \[R\left(\tau+\tau^{\prime}\right) \leq 2R\left(\tau\right)+R\left(\tau^{\prime}\right)+\frac{88}{d} \left(\mathrm{TV}\left(\tau\right)+\mathrm{TV}\left(\tau^{\prime}\right) \right)+\frac{10}{d}\operatorname{TV}(\tau+\tau^{\prime})\] \[\leq 2R\left(\tau\right)+R\left(\tau^{\prime}\right)+\frac{98}{d} \left(\mathrm{TV}\left(\tau\right)+\mathrm{TV}\left(\tau^{\prime}\right) \right),\] _where the second inequality follows by Observation 6.23._ Proof.: Suppose that \(u_{1}\sim u_{2}\) belong to different level components of \(\tau+\tau^{\prime}\). Then, \(u_{1}\in A_{1}\) and \(u_{2}\in A_{2}\), where \(A_{1}\) and \(A_{2}\) are distinct level components of the same function in \(\{\tau,\tau^{\prime}\}\). By (6.22), \(u_{1}\in\partial_{\operatorname{vis}(\infty)}(A_{2})\) or \(u_{2}\in\partial_{\operatorname{vis}(\infty)}(A_{1})\). It follows that for every \(\tilde{A}\in\mathcal{LC}(\tau+\tau^{\prime})\), there is \(A\in\mathcal{LC}(\tau)\cup\mathcal{LC}(\tau^{\prime})\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq 1\). Then, a similar argument to that of the proof of Proposition 6.21 yields that \[R(\tau+\tau^{\prime})\leq 2R(\tau)+R(\tau^{\prime})+\frac{88}{d}\left( \mathrm{TV}(\tau)+\mathrm{TV}(\tau^{\prime})\right)+10|\mathcal{LC}(\tau+\tau ^{\prime})|\] and the result follows since \(|\mathcal{LC}(\tau+\tau^{\prime})|\leq\frac{1}{d}\mathrm{TV}(\tau+\tau^{ \prime})\), by Observation 6.20. ### Enumeration of shifts The goal of this section is prove Proposition 6.3 and Corollary 6.4. Before proving Proposition 6.3, we first show how it easily implies Corollary 6.4. Intuitively it is clear that the number of possible grainings of a shift of bounded complexity will decrease significantly as the scale of the grainings grows. Corollary 6.4 quantify this simple statement and is a direct result of the previously obtained total variation and trip entropy bounds for coarse and fine grainings, a simple scaling argument, and Proposition 6.3 bounding the number of general shifts with limited total variation and trip entropy. Proof of Corollary 6.4.: To show the first bound, let \(\mathcal{S}_{N}\) be the set of shifts which are constant in each set of the partition \(\mathcal{P}_{N}\). Then, by Proposition 6.6 and (6.20) there is a universal constant \(C>0\) such that \[\left\{\tau_{N}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq\lambda,\,R( \tau)\leq\rho\right\}\subseteq\left\{\tau\in\mathcal{S}_{N}\colon\,\mathrm{ TV}(\tau)\leq 10d\lambda,\,R(\tau)\leq\rho+\frac{C}{d}\lambda\right\}.\] Denote by \(\mu_{N}:\mathbb{Z}^{d}\to\mathbb{Z}^{d}\) the multiplication by \(N\). 
The mapping \(\tau\mapsto\tau\circ\mu_{N}\) is obviously a bijection of \(\mathcal{S}_{N}\) onto \(\mathcal{S}\), and moreover, for every \(\tau\in\mathcal{S}_{N}\), clearly \(\mathrm{TV}(\tau\circ\mu_{N})\leq\frac{1}{N^{d-1}}\,\mathrm{TV}(\tau)\) and \(R(\tau\circ\mu_{N})\leq R(\tau)\). Hence, \[|\{\tau_{N}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq\lambda,\,R( \tau)\leq\rho\}| \leq\left|\left\{\tau\in\mathcal{S}_{N}\colon\,\mathrm{TV}(\tau) \leq 10d\lambda,\,R(\tau)\leq\rho+\frac{C}{d}\lambda\right\}\right|\] \[\leq\left|\left\{\tau\in\mathcal{S}\colon\,\mathrm{TV}(\tau) \leq\frac{10d\lambda}{N^{d-1}},\,R(\tau)\leq\rho+\frac{C}{d}\lambda\right\} \right|.\] The bound (6.5) now follows directly from Proposition 6.3. The proof of (6.6) is similar. Proof of Proposition 6.3.: Fix a shift \(\tau\). Let \(J(\tau)\) be the number of level components of \(\tau\), let \((v_{j}(\tau))_{j=1}^{J(\tau)}\) be a sequence of points in \(\mathbb{Z}^{d}\) such that \(v_{1}(\tau)=0\), \(\sum_{j=1}^{J(\tau)-1}\lVert v_{j}(\tau)-v_{j+1}(\tau)\rVert_{1}=R(\tau)\) and there is a unique element of \(\{v_{j}(\tau)\}_{j=1}^{J(\tau)}\) in each level component of \(\tau\). For every \(1\leq j\leq J(\tau)\), let \(L_{j}(\tau)\) be the level component of \(\tau\) containing \(v_{j}(\tau)\). Let \(j_{0}(\tau)\) be the index of the unique unbounded level component of \(\tau\). Define a partial order \(\leq_{\tau}\) on the set \([J(\tau)]\) as follows: \(i\leq_{\tau}j\) if every path from \(L_{i}\) to \(\infty\) necessarily intersects \(L_{j}\). For every \(j\in[J(\tau)]\), let \(U_{j}(\tau):=\bigcup_{i\leq_{\tau}j}L_{i}(\tau)\). Clearly, \(U_{j_{0}(\tau)}(\tau)=\mathbb{Z}^{d}\). Let \(\mathcal{G}(\tau)\) be the graph on the vertex set \([J(\tau)]\) in which \(i\neq j\) are adjacent if there are neighbouring \(u\in L_{i}\) and \(v\in L_{j}\). Define a rooted spanning tree \(\mathcal{T}(\tau)\) of \(\mathcal{G}(\tau)\) in the following inductive manner. Set \(V_{0}:=\{j_{0}(\tau)\}\), \(\tilde{V}_{0}:=\{j_{0}(\tau)\}\) and \(E_{0}:=\emptyset\), and for every \(1\leq r\leq J(\tau)\), let \(i_{r}:=\min\tilde{V}_{r-1}\) and set \(V_{r}:=V_{r-1}\cup\{j\in[J(\tau)]\setminus V_{r-1}:j\text{ is adjacent to }i_{r}\text{ in } \mathcal{G}(\tau)\}\), \(\tilde{V}_{r}:=(\tilde{V}_{r-1}\setminus\{i_{r}\})\cup(V_{r}\setminus V_{r-1})\) and \(E_{r}:=E_{r-1}\cup\{(i_{r},j):j\in V_{r}\setminus V_{r-1}\}\). Finally, let \(\mathcal{T}(\tau)\) be the tree on the vertex set \(V_{J(\tau)}=[J(\tau)]\) whose set of (directed) edges is \(E_{J(\tau)}\). For every directed edge \(e=(i,j)\) of \(\mathcal{T}(\tau)\), let \(s_{e}(\tau):=\tau(L_{i})-\tau(L_{j})\). Clearly, \(\sum_{j=1}^{J(\tau)}\lvert\partial L_{j}(\tau)\rvert\leq 2\mathrm{TV}(\tau)\) and by (6.10), \(\lvert\partial L_{j}(\tau)\rvert\geq 2d\) for every \(1\leq j\leq J(\tau)\). Consequently, \(J(\tau)\leq\frac{1}{2d}\mathrm{TV}(\tau)\). Moreover, clearly \(J(\tau)\leq 1+R(\tau)\). For every positive integer \(J\leq\min\{\frac{\lambda}{2d}.1+\rho\}\), let \[\tilde{\mathcal{S}}_{J}:=\{\tau\in\mathcal{S}\colon\mathrm{TV}(\tau)\leq \lambda,\,R(\tau)\leq\rho,\,J(\tau)=J\}.\] The map \[\chi\colon\tau\mapsto\big{(}J(\tau),j_{0}(\tau),(U_{j}(\tau))_{j_{0}(\tau)\neq j \in[J(\tau)]},(s_{e}(\tau))_{e\in E(\mathcal{T}(\tau))}\big{)}\] is clearly injective, hence \[|\{\tau\in\mathcal{S}\colon\mathrm{TV}(\tau)\leq\lambda,\,R(\tau)\leq\rho\}| =\sum_{J\leq\frac{\lambda}{2d}}|\tilde{\mathcal{S}}_{J}|\leq\sum_{J\leq\frac{ \lambda}{2d}}|\chi(\tilde{\mathcal{S}}_{J})|. 
\tag{6.23}\] In the estimates below we will use the following estimates several times. First, for every positive integers \(k\) and \(n\), \[\binom{n}{k}\leq\frac{n^{k}}{k!}<\left(\frac{en}{k}\right)^{k}<\left(\frac{3n} {k}\right)^{k}. \tag{6.24}\] For every positive integers \(k\) and \(m\) there are no more than \(\min\{k,m\}\) non-zero terms in every sequence \((a_{i})_{i=1}^{k}\) such that \(\sum_{i=1}^{k}\lvert a_{i}\rvert\leq m\) and hence \[\lvert\{(a_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\colon\,\sum_{i=1}^{k} \lvert a_{i}\rvert\leq m\}\rvert \leq 2^{\min\{k,m\}}\lvert\{(p_{i})_{i=1}^{k}\in\mathbb{Z}\cap[0, \infty)]^{k}\colon\,\sum_{i=1}^{k}p_{i}\leq m\}\rvert\] \[=2^{\min\{k,m\}}\binom{m+k}{k}=2^{\min\{k,m\}}\binom{m+k}{m}.\] Therefore, for every positive integers \(k\) and \(m\) and real \(\alpha\geq k\), by (6.24), \[\lvert\{(a_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\colon\,\sum_{i=1}^{k} \lvert a_{i}\rvert\leq m\}\rvert\leq 2^{k}\binom{m+k}{k}\leq\left(6\left( \frac{m}{k}+1\right)\right)^{k}\leq\left(6\left(\frac{m}{\alpha}+1\right) \right)^{\alpha}, \tag{6.25}\] where the last inequality holds since the function \(t\mapsto\left(\frac{m}{t}+1\right)^{t}\) is increasing in the interval \((0,\infty)\), and also \[\lvert\{(a_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\colon\,\sum_{i=1}^{k} \lvert a_{i}\rvert\leq m\}\rvert\leq 2^{m}\binom{m+k}{m}\leq\left(6\left( \frac{k}{m}+1\right)\right)^{m}\leq\left(6\left(\frac{\alpha}{m}+1\right) \right)^{m}. \tag{6.26}\] Let \(\mathbb{B}_{d}\) denote the family of finite \(A\subset\mathbb{Z}^{d}\) such that both \(A\) and \(\mathbb{Z}^{d}\setminus A\) are connected. For every shift \(\tau\) and \(j_{0}(\tau)\neq j\in[J(\tau)]\), the set \(U_{j}(\tau)\) is obviously in \(\mathbb{B}_{d}\). By [1, Theorem 6] (improving on [13, Corollary 1.2]; see more details in Appendix A), it holds that for every \(v\in\mathbb{Z}^{d}\) and integer \(b\geq 2d\), \[\lvert\{A\in\mathbb{B}_{d}\colon\,v\in A,\,\lvert\partial A\rvert=b\}\rvert \leq(8d)^{2b/d}. \tag{6.27}\] Hence, for every \(v_{1},\dots,v_{J}\in\mathbb{Z}^{d}\), \(1\leq j_{0}\leq J\) and integers \((b_{j})_{j_{0}\neq j\in[J]}\in([2d,\infty))^{J-1}\) such that \(\sum_{j_{0}\neq j\in[J]}b_{j}\leq\lambda\), \[\lvert\{(A_{j})_{j_{0}\neq j\in[J]}\colon\,\forall j_{0}\neq j \in[J]\text{ it holds that }A_{j}\in\mathbb{B}_{d},\,v_{j}\in A_{j},\,\lvert\partial A_{j}\rvert=b_{j} \rvert\leq\prod_{j_{0}\neq j\in[J]}(8d)^{2b_{j}/d}\\ =(8d)^{2\sum_{j_{0}\neq j\in[J]}b_{j}/d}\leq(8d)^{2\lambda/d}.\] For every shift \(\tau\) it holds that \(\sum_{j_{0}(\tau)\neq j\in[J(\tau)]}\lvert\partial U_{j}(\tau)\rvert\leq\text {TV}(\tau)\) and by (6.10), \(\lvert\partial U_{j}(\tau)\rvert\geq 2d\) for every \(j_{0}(\tau)\neq j\in[J(\tau)]\). 
Therefore, since by (6.25) (noting that \(d(J-1)<\lambda/2\)) and (6.26) (noting that \(d(J-1)\leq d\rho\)), \[\lvert\{(v_{j})_{j=1}^{J}\in(\mathbb{Z}^{d})^{J}\colon v_{1}=0,\, \sum_{j=1}^{J-1}\lVert v_{j}-v_{j+1}\rVert_{1}\leq\rho\}\rvert=\lvert\{(y_{j})_ {j=1}^{J-1}\in(\mathbb{Z}^{d})^{J-1}\colon\,\sum_{j=1}^{J-1}\lVert y_{j}\rVert _{1}\leq\rho\}\rvert\\ =\lvert\{(a_{i})_{i=1}^{d(J-1)}\in\mathbb{Z}^{d(J-1)}\colon\,\sum_ {i=1}^{d(J-1)}\lvert a_{i}\rvert\leq\rho\}\rvert\leq\min\left\{\left(6\left( \frac{2\rho}{\lambda}+1\right)\right)^{\lambda/2},(6(d+1))^{\rho}\right\}\\ \leq\min\left\{\left(6\left(\frac{2\rho}{\lambda}+1\right)\right) ^{\lambda/2},(8d)^{\rho}\right\}\] and for every \(1\leq j_{0}\leq J\), by (6.24), \[|\{(b_{j})_{j_{0}\neq j\in[J]}\in(\mathbb{Z}\cap[2d,\infty))^{J-1} \colon\sum_{j_{0}\neq j\in[J]}b_{j}\leq\lambda\}|=\binom{\lambda-2d(J-1)+J-1}{J-1} \\ <\left(\frac{3(\lambda-2d(J-1)+J-1)}{J-1}\right)^{J-1}<\left(\frac {3\lambda}{J-1}\right)^{J-1}\leq(6d)^{\frac{\lambda}{2d}}\] (where the last inequality holds since \(J-1<\frac{\lambda}{2d}\) and the function \(t\mapsto(e\lambda/t)^{t}\) is increasing in the interval \((0,\lambda]\)), we conclude that \[|\{(j_{0}(\tau),(U_{j}(\tau))_{j_{0}(\tau)\neq j\in[J(\tau)]})\colon \tau\in\tilde{\mathcal{S}}_{J}\}|\\ \leq\frac{\lambda}{2d}\min\left\{\left(6\left(\frac{2\rho}{ \lambda}+1\right)\right)^{\lambda/2},(8d)^{\rho}\right\}(6d)^{\frac{\lambda}{ 2d}}(8d)^{\frac{2\lambda}{d}} \tag{6.28}\] Additionally, for every \(\tau\), clearly \(\sum_{e\in E(\mathcal{T}(\tau))}|s_{e}(\tau)|\leq\operatorname{TV}(\tau)\). Hence, by using (6.25), since \(J-1<J\leq\frac{\lambda}{2d}\), \[|\{(s_{e}(\tau))_{e\in E(\mathcal{T}(\tau))}\colon\tau\in\tilde{\mathcal{S}}_{ J}\}|\leq(6(2d+1))^{\frac{\lambda}{2d}}\leq(14d)^{\frac{\lambda}{2d}}\,.\] Combining this with (6.28) we get that for every \(J\leq\min\{\frac{\lambda}{2d},\rho+1\}\), \[|\chi(\tilde{\mathcal{S}}_{J})|\leq\min\left\{C_{1}^{\lambda}\left(\frac{2 \rho}{\lambda}+1\right)^{\lambda/2},\left(C_{2}d^{3}\right)^{\lambda/d}(8d)^{ \rho}\right\}\] for some universal positive constants \(C_{1},C_{2}\), and the result follows by (6.23). ### Concentration of ground energy differences In this section we prove Lemma 6.1 in the following equivalent formulation. _There exist universal \(C,c>0\) such that the following holds. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy (1.11). Then for any shift \(\tau\) and any non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}\),_ \[\mathbb{P}\left(|G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)|\geq t \right)\leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{\parallel})^{2 }b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\right). \tag{6.29}\] (This is a special case of Lemma 6.1, for \(\tau^{\prime}\equiv 0\), and it implies the lemma in full generality, since \(G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,\tau^{\prime})=G^{\eta^{ \tau^{\prime}},\Lambda,(b^{\parallel},b^{\perp})}(\tau-\tau^{\prime})\) for any two shifts \(\tau,\tau^{\prime}\).) We aim to use Corollary 2.3 to show concentration of the ground energy difference under changes in the disorder \(\eta\) induced by shifts. To be able to do so would require approximating the ground energy difference by a function of the disorder on finitely many edges. 
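As a purely illustrative aside, which plays no role in the proofs, the mechanism we are about to set up can be previewed numerically: once the ground energy is expressed through the disorder on finitely many edges, it becomes (in the toy below) a minimum of linear functions of bounded variables, hence Lipschitz, and its fluctuations are then at most of the order of the Lipschitz constant, which is what the layering bounds \((b^{\parallel},b^{\perp})\) control in (6.29). The following Python sketch uses a made-up family of configurations and hypothetical sizes; it only illustrates the scale \(\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\) and does not implement the model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_edges = 300    # the finitely many edges whose disorder is resampled
n_configs = 60   # a toy family of configurations (hypothetical)
b = 15           # each configuration "opens" at most b of these edges

# Row i records which edges configuration i pays for (at most b of them),
# playing the role of 1_{sigma_x != sigma_y} in the Hamiltonian.
masks = np.zeros((n_configs, n_edges))
for i in range(n_configs):
    masks[i, rng.choice(n_edges, size=b, replace=False)] = 1.0

def ground_energy(eta):
    # Minimum of linear functions of the disorder: Lipschitz in l2 with
    # constant at most sqrt(b), since each row has at most b ones.
    return (masks @ eta).min()

wid = 1.0  # stand-in for the width of the disorder distribution
samples = np.array([ground_energy(wid * rng.random(n_edges)) for _ in range(10000)])
print("empirical std of the toy ground energy:", samples.std())
print("Lipschitz scale sqrt(wid^2 * b):       ", np.sqrt(wid**2 * b))
```

In the actual argument, the role of this toy Lipschitz bound is played by Lemma 6.27 below, and the concentration input is Corollary 2.3.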
To be exact, for \(\Lambda\subset\mathbb{Z}^{d}\) finite we write \(\Delta_{M}:=\Lambda\times\{-M,\ldots,M\}\), and define \[\Omega^{\Delta_{M},A,(b^{\parallel},b^{\perp})}:=\Omega^{\Delta_{M},\rho^{\operatorname{Dob}}}\cap\Omega^{\Lambda,A,(b^{\parallel},b^{\perp})}\] to be the space of configurations on \(\Delta_{M}\) satisfying the Dobrushin boundary conditions and the layering bounds \((b^{\parallel},b^{\perp})\) in \(A\subseteq\Lambda\). Let \(\eta:E(\mathbb{Z}^{d+1})\to[0,\infty)\). Define \[\operatorname{GE}^{\Delta_{M},A,(b^{\parallel},b^{\perp})}(\eta):=\min\left\{\mathcal{H}^{\eta,\Lambda}(\sigma):\sigma\in\Omega^{\Delta_{M},A,(b^{\parallel},b^{\perp})}\right\}\] and \(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau):=\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\). Then under the above definitions, the following holds: **Lemma 6.25**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite, \(\tau\) a shift and non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}\). Then_ \[\lim_{M\to\infty}\mathbb{P}\left(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau)=G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\right)=1.\] The proof of Lemma 6.25 is routine, and appears in Appendix B for completeness. By the lemma above, proving (6.29) reduces to the following. **Proposition 6.26**.: _There exist universal \(C,c>0\) such that the following holds. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy (1.11). Then for any shift \(\tau\) and non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}\), every positive integer \(M\) and every \(t>0\),_ \[\mathbb{P}\left(\left|G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau)\right|\geq t\right)\leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\right). \tag{6.30}\] The rest of this section will be devoted to proving Proposition 6.26. Note that condition (1.11) implies that \(\operatorname{wid}(\nu^{\parallel})\neq 0\). Also notice that if \(\operatorname{wid}(\nu^{\perp})=0\) then \(\nu^{\perp}\) is supported on one point, so the value of the coupling field on perpendicular edges is fixed rather than random. This simplifies the argument and hence we will assume that \(\operatorname{wid}(\nu^{\perp})\neq 0\). Let \[\mathfrak{H} :=\{\eta_{e}\colon e\in E(\mathbb{Z}^{d+1}),\,e\nsubseteq\operatorname{supp}(\tau)\times\mathbb{Z}\},\] \[A_{M} :=\{e\in E(\mathbb{Z}^{d+1})\colon e\subseteq\operatorname{supp}(\tau)\times\{-M,\dots,M\}\},\] and for every \(e\in A_{M}\), let \[X_{e}:=\begin{cases}\frac{\eta_{e}}{\operatorname{wid}(\nu^{\parallel})}&e\in E^{\parallel}(\mathbb{Z}^{d+1}),\\ \frac{\eta_{e}}{\operatorname{wid}(\nu^{\perp})}&e\in E^{\perp}(\mathbb{Z}^{d+1}).\end{cases}\] Conditioned on \(\mathfrak{H}\), the ground energy \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\) may be viewed as a function of \(\{X_{e}\}_{e\in A_{M}}\). Moreover, it is easy to verify that it is quasi-concave. Therefore, the following lemma will allow us to apply Corollary 2.3 to it. 
**Lemma 6.27**.: _The ground energy \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(\eta)\), conditioned on \(\mathfrak{H}\), is Lipschitz, as a function of \(\{X_{e}\}_{e\in A_{M}}\), with Lipschitz constant \(2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid }(\nu^{\perp})^{2}b^{\perp}}\)._ Before proving Lemma 6.27, we show how it implies Proposition 6.26. By Lemma 6.27 and Corollary 2.3, \(\mathbb{E}|(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)\mid\mathfrak{H})|<\infty\) and for each \(t>0\), \[\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_{M}, \operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H} \right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}( \tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)\right|\geq t\right) \\ \leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{\parallel} )^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\right). \tag{6.31}\] Similarly, \(\mathbb{E}|(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H})|<\infty\) and for each \(t>0\), \[\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_{M}, \operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H }\right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}( \tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H}\right)\right|\geq t\right) \\ \leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{ \parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}} \right). \tag{6.32}\] Observe that the following holds, by linearity of expectation and the facts that the disorder is independent and \(\eta^{\tau}\) has the same distribution as \(\eta\). **Observation 6.28**.: _It holds that_ \[\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)=\mathbb{E}\left( \operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(\eta^{\tau})\mid\mathfrak{H}\right).\] Observation 6.28 implies that for every \(t>0\), \[\mathbb{P}\left(\left|\left(G^{\eta,\Delta_{M},(b^{\parallel},b^{ \perp})}\mid\mathfrak{H}\right)\right|\geq t\right)\] \[\qquad\leq\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_ {M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H }\right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}( \tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)\right|\geq t/2\right)\] \[\qquad\qquad+\mathbb{P}\left(\left|\left(\operatorname{GE}^{ \Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau}) \mid\mathfrak{H}\right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M}, \operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{ H}\right)\right|\geq t/2\right)\] and (6.30) follows by (6.31) and (6.32). 
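Before turning to the proof of Lemma 6.27, we record a small numerical illustration, not part of the argument, of the elementary step just used: for any random variables \(X,Y\) and any centring value \(m\), the event \(\{|X-Y|\geq t\}\) is contained in \(\{|X-m|\geq t/2\}\cup\{|Y-m|\geq t/2\}\), and Observation 6.28 supplies a common conditional centring for the two ground energies. The toy quantities, matrix and parameters in the Python sketch below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for GE(eta) and GE(eta^tau) given the outside disorder: two
# minima of the same linear functionals applied to i.i.d. inputs, so that the
# two copies share the same mean (mimicking Observation 6.28).
masks = (rng.random((40, 100)) < 0.1).astype(float)

def toy_ground_energy(x):
    # x has one disorder sample per row; return the minimum energy per sample.
    return (x @ masks.T).min(axis=1)

n, t = 20000, 0.5
gx = toy_ground_energy(rng.random((n, 100)))
gy = toy_ground_energy(rng.random((n, 100)))
m = gx.mean()  # empirical proxy for the common mean

lhs = np.mean(np.abs(gx - gy) >= t)
rhs = np.mean(np.abs(gx - m) >= t / 2) + np.mean(np.abs(gy - m) >= t / 2)
print(f"P(|X-Y| >= t) ~ {lhs:.3f}  <=  {rhs:.3f} ~ sum of the two centred tails")
```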
Proof of Lemma 6.27.: For \(y\in[0,\infty)^{A_{M}}\) and \(\tilde{y}\in[0,\infty)^{E(\mathbb{Z}^{d+1})\setminus A_{M}}\) let \(h(y,\tilde{y}):E(\mathbb{Z}^{d+1})\to[0,\infty)\) be defined by \[\left(h(y,\tilde{y})\right)_{e}=\begin{cases}\operatorname{wid}(\nu^{\parallel})y_{e}&e\in A_{M}\cap E^{\parallel}(\mathbb{Z}^{d+1}),\\ \operatorname{wid}(\nu^{\perp})y_{e}&e\in A_{M}\cap E^{\perp}(\mathbb{Z}^{d+1}),\\ \tilde{y}_{e}&e\in E(\mathbb{Z}^{d+1})\setminus A_{M}.\end{cases}\] We need to verify that for any \(\tilde{y}\in[0,\infty)^{E(\mathbb{Z}^{d+1})\setminus A_{M}}\) and \(y,y^{\prime}\in[0,\infty)^{A_{M}}\) it holds that \[|\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h)-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h^{\prime})|\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2},\] where \(h:=h(y,\tilde{y})\) and \(h^{\prime}:=h(y^{\prime},\tilde{y})\). Let \(\sigma^{\prime}\) be (some) ground configuration in \(\Omega^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}\) with respect to the coupling field \(h^{\prime}\). Then, \[\mathcal{H}^{h,\Lambda}(\sigma^{\prime})-\mathcal{H}^{h^{\prime},\Lambda}(\sigma^{\prime})\leq \sum_{\{x,y\}\in A_{M}}|h_{\{x,y\}}-h^{\prime}_{\{x,y\}}|\left(1-\sigma^{\prime}_{x}\sigma^{\prime}_{y}\right)\] \[= \operatorname{wid}(\nu^{\parallel})\sum_{\{x,y\}\in A_{M}\cap E^{\parallel}(\mathbb{Z}^{d+1})}|y_{\{x,y\}}-y^{\prime}_{\{x,y\}}|\left(1-\sigma^{\prime}_{x}\sigma^{\prime}_{y}\right)+\] \[\operatorname{wid}(\nu^{\perp})\sum_{\{x,y\}\in A_{M}\cap E^{\perp}(\mathbb{Z}^{d+1})}|y_{\{x,y\}}-y^{\prime}_{\{x,y\}}|\left(1-\sigma^{\prime}_{x}\sigma^{\prime}_{y}\right)\] \[\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2},\] where the last inequality is by the Cauchy-Schwarz inequality (and the layering bound on \(\sigma^{\prime}\) deriving from the fact \(\sigma^{\prime}\in\Omega^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}\)). Since \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h)\leq\mathcal{H}^{h,\Lambda}(\sigma^{\prime})\) and \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h^{\prime})=\mathcal{H}^{h^{\prime},\Lambda}(\sigma^{\prime})\), it follows that \[\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h)-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h^{\prime})\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2}.\] A symmetric inequality of the form \[\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h^{\prime})-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h)\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2}\] holds with identical reasoning, and so we are done. ### Layering bounds In this section we prove Lemma 6.2. Fix positive \(\alpha^{\parallel},\alpha^{\perp}\) and a finite set \(\Lambda\subset\mathbb{Z}^{d}\). Recall the definitions of parallel and perpendicular layering in (6.1) and (6.2). 
Introduce the following convenient notation: For \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\), \(\theta\in\{\parallel,\perp\}\) write \(\mathcal{L}_{E}^{\theta}(\eta):=\mathcal{L}_{E}^{\theta}(\sigma^{\eta,\Lambda,\mathrm{Dob}})\) where \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is the unique ground configuration, existing by Lemma 1.5. We will use the following proposition, which will be proved in Section 7. **Proposition 6.29**.: _Let \(E\subset\Lambda\) and let \(\eta\in D(\alpha^{\parallel},\alpha^{\perp})\) be a coupling field. Then, there exists a shift function \(\tau\) satisfying the following:_ \[G^{\eta,\Lambda}(\tau) \geq 2\alpha^{\parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta)-|E |\right)+2\alpha^{\perp}\max\{\mathcal{L}_{E}^{\perp}(\eta),\mathrm{TV}(\tau)\},\] \[G^{\eta,\Lambda}(\tau) \geq\frac{1}{16}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left( R(\tau)-R(E)-2(|E|-1)\right).\] Fix any coupling field \(\eta\in D(\alpha^{\parallel},\alpha^{\perp})\). For brevity, we will once more write \(G\) for \(G^{\eta,\Lambda}\), \(\mathcal{AS}\) for \(\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\), _admissible_ for \((\alpha^{\parallel},\alpha^{\perp})\)-admissible and MG for \(\mathrm{MG}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\). The following proposition establishes a useful general layering bound. **Proposition 6.30**.: _Let \(E\subset\Lambda\), and \(\tau\) a shift. The following layering bound holds:_ \[\alpha^{\perp}\mathcal{L}_{E}^{\perp}(\eta^{\tau})+\alpha^{ \parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{\tau})-|E|\right)\\ \leq\max\left\{\mathrm{MG},\,2\alpha^{\perp}\,\mathrm{TV}\left( \tau\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R\left(E \right)+|E|\right)\right\}. \tag{6.33}\] Proof.: By Proposition 6.29, applied to the set \(E\) and the shifted disorder \(\eta^{\tau}\), there is a shift \(\tau^{\prime}\) such that \[G^{\eta^{\tau},\Lambda}(\tau^{\prime}) \geq 2\alpha^{\parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{ \tau})-|E|\right)+2\alpha^{\perp}\max\{\mathcal{L}_{E}^{\perp}(\eta^{\tau}), \mathrm{TV}(\tau^{\prime})\},\] \[G^{\eta^{\tau},\Lambda}(\tau^{\prime}) \geq\frac{1}{16}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left( R(\tau^{\prime})-R(E)-2(|E|-1)\right).\] Note that \[G^{\eta^{\tau},\Lambda}(\tau^{\prime}) =\mathrm{GE}^{\Lambda}(\eta^{\tau})-\mathrm{GE}^{\Lambda}\left( (\eta^{\tau})^{\tau^{\prime}}\right)=\mathrm{GE}^{\Lambda}(\eta^{\tau})- \mathrm{GE}^{\Lambda}(\eta^{\tau+\tau^{\prime}})\] \[=\left(\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{ \tau+\tau^{\prime}})\right)-\left(\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{ \Lambda}(\eta^{\tau})\right)=G(\tau+\tau^{\prime})-G(\tau)\] \[\leq 2\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\}.\] Hence, \[\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\} \geq\alpha^{\parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{ \tau})-|E|\right)+\alpha^{\perp}\max\{\mathcal{L}_{E}^{\perp}(\eta^{\tau}), \mathrm{TV}(\tau^{\prime})\}, \tag{6.34}\] \[\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\} \geq\frac{1}{32}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left( R(\tau^{\prime})-R(E)-2(|E|-1)\right). \tag{6.35}\] By way of contradiction, assume that (6.33) does not hold. Then, by (6.34), \[\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\}\\ >\max\left\{\mathrm{MG},\,2\alpha^{\perp}\,\mathrm{TV}\left(\tau \right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R\left(E \right)+|E|\right)\right\}. 
\tag{6.36}\] If \(|G(\tau)|\geq G(\tau+\tau^{\prime})\) then by (6.36), \(|G(\tau)|>2\alpha^{\perp}\,\mathrm{TV}\left(\tau\right)\), \(|G(\tau)|>2\min\{\alpha^{\parallel},\alpha^{\perp}\}dR\left(\tau\right)\) and \(|G(\tau)|>\mathrm{MG}\), hence \(\tau\) is admissible and of larger energetic gain than MG, a contradiction. Assume that \(G(\tau+\tau^{\prime})>|G(\tau)|\). By Lemma 6.24, (6.35) and (6.34), \[R(\tau+\tau^{\prime}) \leq 2R(\tau)+R(\tau^{\prime})+\frac{98}{d}\left(\operatorname{TV}( \tau)+\operatorname{TV}(\tau^{\prime})\right)\] \[\leq 2R(\tau)+R(E)+2|E|+\frac{32\,G(\tau+\tau^{\prime})}{\min\left\{ \alpha^{\parallel},\alpha^{\perp}\right\}d}+\frac{98}{d}\operatorname{TV}( \tau)+\frac{98\,G(\tau+\tau^{\prime})}{\alpha^{\perp}d}\] \[\leq 100\left(\frac{\operatorname{TV}(\tau)}{d}+R(\tau)+R(E)+|E| \right)+\frac{130\,G(\tau+\tau^{\prime})}{\min\left\{\alpha^{\parallel},\alpha ^{\perp}\right\}d}\] and hence, by (6.36), \(R(\tau+\tau^{\prime})<\frac{180}{\min\left\{\alpha^{\parallel},\alpha^{\perp} \right\}d}G(\tau+\tau^{\prime})\), and by Observation 6.23, (6.36) and (6.34), \[\operatorname{TV}(\tau+\tau^{\prime})\leq\operatorname{TV}(\tau)+ \operatorname{TV}(\tau^{\prime})<\frac{G(\tau+\tau^{\prime})}{2\alpha^{\perp} }+\frac{G(\tau+\tau^{\prime})}{\alpha^{\perp}}<\frac{2}{\alpha^{\perp}}G(\tau +\tau^{\prime}).\] Therefore, \(\tau+\tau^{\prime}\) is admissible and of larger energetic gain than \(\operatorname{MG}\), by (6.36), a contradiction. Now we use Proposition 6.30 to prove Lemma 6.2. Proof of Lemma 6.2.: First note that for every set \(E\subseteq\Lambda\) and every coupling field \(\tilde{\eta}\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\), it holds that \(\operatorname{GE}^{\Lambda}(\tilde{\eta})=\operatorname{GE}^{\Lambda,E,(b^{ \parallel},b^{\perp})}(\tilde{\eta})\) if and only if \(\mathcal{L}_{E}^{\perp}(\tilde{\eta})\leq b^{\perp}\) and \(\mathcal{L}_{E}^{\parallel}(\tilde{\eta})\leq b^{\parallel}\). Also note that \(\eta^{\tau}\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for every shift \(\tau\). Throughout the proof we will use \(C\) to denote a positive absolute constant; the values of this constant will be allowed to change from line to line, even within the same calculation, with its value increasing. To prove the third part of the lemma, we use Proposition 6.30 twice, for the same subset \(E:=\operatorname{supp}(\tau)\). Recall that, by the admissibility of \(\tau\), \[\operatorname{TV}(\tau) \leq\frac{2}{\alpha^{\perp}}|G(\tau)|\leq\frac{2}{\alpha^{\perp }}\operatorname{MG}\leq\frac{4s}{\alpha^{\perp}}, \tag{6.37}\] \[R(\tau) \leq\frac{200}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d}|G( \tau)|\leq\frac{200}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d} \operatorname{MG}\leq\frac{400s}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d}. \tag{6.38}\] Hence, \(R(E)\leq R(\tau)\leq\frac{Cs}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d}\) and by Lemma 6.9 and the assumption that \(s<\alpha^{\perp}4^{d}\), \[|E|\leq\sum_{u\in\mathbb{Z}^{d}}|\tau(u)|\leq\left(\frac{\operatorname{TV}( \tau)}{2d}\right)^{\frac{d}{d-1}}\leq\left(\frac{2s}{\alpha^{\perp}d}\right)^ {\frac{d}{d-1}}\leq\frac{Cs}{\alpha^{\perp}d}. 
\tag{6.39}\] The first use of Proposition 6.30 will be for the shift \(\tau\), and it gives \[\alpha^{\perp}\mathcal{L}_{E}^{\perp}(\eta^{\tau})+\alpha^{ \parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{\tau})-|E|\right)\\ \leq\max\left\{\operatorname{MG},2\alpha^{\perp}\operatorname{ TV}\left(\tau\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R(E)+|E| \right)\right\}\leq Cs\] and the second use will be for the shift \(\tau_{0}\equiv 0\), and it gives \[\alpha^{\perp}\mathcal{L}_{E}^{\perp}(\eta)+\alpha^{\parallel}\left(\mathcal{ L}_{E}^{\parallel}(\eta)-|E|\right)\leq\max\left\{\operatorname{MG},2\min\{ \alpha^{\parallel},\alpha^{\perp}\}d\left(R(E)+|E|\right)\right\}\leq Cs.\] Hence, for \(\#\in\{\eta^{\tau},\eta\}\), by using (6.39), \[\mathcal{L}_{E}^{\perp}(\#)\leq\frac{Cs}{\alpha^{\perp}},\quad\mathcal{L}_{E} ^{\parallel}(\#)\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{ \perp}d}\right)s.\] We proceed to prove the second part of the lemma. For every positive integer \(k\), let \(E_{k}:=\operatorname{supp}\left(\tau_{2^{k}}-\tau_{2^{k+1}}\right)\). By Proposition 6.6, \[\operatorname{TV}(\tau_{2^{k}})\leq 10d\,\operatorname{TV}(\tau)\leq\frac{Cds}{ \alpha^{\perp}}, \tag{6.40}\] and by (6.20), \[R(\tau_{2^{k}})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau)\leq\frac{Cs}{ \min\{\alpha^{\parallel},\alpha^{\perp}\}d}. \tag{6.41}\] Hence, by Lemma 6.24, \[R\left(E_{k}\right)\leq R\left(\tau_{2^{k}}-\tau_{2^{k+1}}\right)\leq R\left( \tau_{2^{k}}\right)+2R\left(\tau_{2^{k+1}}\right)+\frac{98}{d}\left( \operatorname{TV}\left(\tau_{2^{k}}\right)+\operatorname{TV}\left(\tau_{2^{k +1}}\right)\right)\leq\frac{Cs}{\min\{\alpha^{\parallel},\alpha^{\perp}\}},\] and by Proposition 6.7, \[|E_{k}|\leq\|\tau_{2^{k}}-\tau_{2^{k+1}}\|_{1}\leq(4d+9)2^{k} \,\operatorname{TV}(\tau)\leq\frac{C2^{k}ds}{\alpha^{\perp}}. \tag{6.42}\] Therefore, by Proposition 6.30, \[\alpha^{\perp}\mathcal{L}_{E_{k}}^{\perp}(\eta^{\tau_{2^{k}}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{k}}^{\parallel}(\eta^{\tau_{2^{k}}})-| E_{k}|\right)\\ \leq\max\left\{\operatorname{MG},\,2\alpha^{\perp}\operatorname{ TV}\left(\tau_{2^{k}}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R( \tau_{2^{k}})+R\left(E_{k}\right)+|E_{k}|\right)\right\}\leq C2^{k}d^{2}s\] and \[\alpha^{\perp}\mathcal{L}_{E_{k-1}}^{\perp}(\eta^{\tau_{2^{k}}}) +\alpha^{\parallel}\left(\mathcal{L}_{E_{k-1}}^{\parallel}(\eta^{\tau_{2^{k}}} )-|E_{k-1}|\right)\\ \leq\max\left\{\operatorname{MG},\,2\alpha^{\perp}\operatorname{ TV}\left(\tau_{2^{k}}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R( \tau_{2^{k}})+R\left(E_{k-1}\right)+|E_{k-1}|\right)\right\}\leq C2^{k}d^{2}s.\] Hence, for \(\#\in\{k-1,k\}\), by using (6.42), \[\mathcal{L}_{E_{\#}}^{\perp}(\tau^{2^{k}})\leq\frac{C2^{k}d^{2}s}{\alpha^{ \perp}},\quad\mathcal{L}_{E_{\#}}^{\parallel}(\tau_{2^{k}})\leq\frac{C2^{k}d^{ 2}s}{\alpha^{\parallel}}+|E_{\#}|\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac {1}{\alpha^{\perp}d}\right)d^{2}2^{k}s.\] To prove the first part of the Lemma, fix \(\emptyset\neq I\in\operatorname{comp}(\tau)\), and let \(E_{(0,I)}:=\operatorname{supp}\left(\tau-\tau_{I}\right)\). 
By the compatibility of \(I\) and (6.37), \[\operatorname{TV}(\tau_{I})\leq 20(2|I|+1)\operatorname{TV}(\tau)\leq\frac{C|I |s}{\alpha^{\perp}}, \tag{6.43}\] and by (6.21), (6.37) and (6.38), \[R(\tau_{I})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau)\leq\frac{Cs}{\min \{\alpha^{\parallel},\alpha^{\perp}\}d}, \tag{6.44}\] and hence, by Lemma 6.24, \[R\left(E_{(0,I)}\right)\leq R\left(\tau-\tau_{I}\right)\leq R\left(\tau\right) +2R\left(\tau_{I}\right)+\frac{100}{d}\left(\operatorname{TV}\left(\tau\right) +\operatorname{TV}\left(\tau_{I}\right)\right)\leq\frac{C|I|s}{\min\{\alpha^{ \parallel},\alpha^{\perp}\}d}, \tag{6.45}\] and by the compatibility of \(I\), \[|E_{(0,I)}|\leq\|\tau-\tau_{I}\|_{1}\leq\frac{4|I|}{d}\operatorname{TV}(\tau) \leq\frac{C|I|s}{\alpha^{\perp}d}. \tag{6.46}\] Therefore, by Proposition 6.30, \[\alpha^{\perp}\mathcal{L}_{E_{(0,I)}}^{\perp}(\eta^{\tau})+\alpha^{ \parallel}\left(\mathcal{L}_{E_{(0,I)}}^{\parallel}(\eta^{\tau})-|E_{(0,I)}|\right) \\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV}\left( \tau\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R\left(E_ {(0,I)}\right)+|E_{(0,I)}|\right)\right\}\leq C|I|s\] and \[\alpha^{\perp}\mathcal{L}_{E_{(0,I)}}^{\perp}(\eta^{\tau_{I}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{(0,I)}}^{\parallel}(\eta^{\tau_{I}})- |E_{(0,I)}|\right)\\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV}\left( \tau_{I}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau_{I})+R \left(E_{(0,I)}\right)+|E_{(0,I)}|\right)\right\}\leq C|I|s.\] Hence, for \(\#\in\{\eta^{\tau},\eta^{\tau_{I}}\}\), by using (6.46), \[\mathcal{L}_{E_{(0,I)}}^{\perp}(\#)\leq\frac{C|I|s}{\alpha^{\perp}},\quad \mathcal{L}_{E_{(0,I)}}^{\parallel}(\#)\leq\frac{C|I|s}{\alpha^{\parallel}}+|E _{(0,I)}|\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp}d} \right)|I|s.\] Finally, let \(E_{(I,1)}:=\operatorname{supp}\left(\tau_{I}-\tau_{2}\right)\) and note that by Lemma 6.24, (6.44), (6.41), (6.43) and (6.40), \[R\left(E_{(I,1)}\right)\leq R\left(\tau_{I}-\tau_{2}\right)\leq R\left(\tau_{ I}\right)+2R\left(\tau_{2}\right)+\frac{98}{d}\left(\operatorname{TV}\left( \tau_{I}\right)+\operatorname{TV}\left(\tau_{2}\right)\right)\leq\frac{Cs}{ \min\{\alpha^{\parallel},\alpha^{\perp}\}},\] and by the compatibility of \(I\), Proposition 6.7 and (6.37), \[|E_{(I,1)}|\leq\|\tau_{I}-\tau_{2}\|_{1}\leq\|\tau_{I}-\tau\|_{1}+\|\tau_{2}- \tau\|_{1}\leq\frac{4|I|}{d}\operatorname{TV}(\tau)+4\operatorname{TV}(\tau) \leq\frac{Cs}{\alpha^{\perp}}. 
\tag{6.47}\] Therefore, by Proposition 6.30, \[\alpha^{\perp}\mathcal{L}_{E_{(I,1)}}^{\perp}(\eta^{\tau_{I}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{(I,1)}}^{\parallel}(\eta^{\tau_{I}})-| E_{(I,1)}|\right)\\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV} \left(\tau_{I}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau_ {I})+R\left(E_{(I,1)}\right)+|E_{(I,1)}|\right)\right\}\leq Cds\] and \[\alpha^{\perp}\mathcal{L}_{E_{(I,1)}}^{\perp}(\eta^{\tau_{2}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{(I,1)}}^{\parallel}(\eta^{\tau_{2}})- |E_{(I,1)}|\right)\\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV} \left(\tau_{2}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau_ {2})+R\left(E_{(I,1)}\right)+|E_{(I,1)}|\right)\right\}\leq Cds.\] Hence, for \(\#\in\{\eta^{\tau_{I}},\eta^{\tau_{2}}\}\), by using (6.47), \[\mathcal{L}_{E_{(I,1)}}^{\perp}(\#)\leq\frac{Cds}{\alpha^{\perp}},\quad \mathcal{L}_{E_{(I,1)}}^{\parallel}(\#)\leq\frac{Cds}{\alpha^{\parallel}}+|E_{ (I,1)}|\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp}d} \right)ds.\qed\] ## 7. Obtaining admissible shifts from interfaces The goal of this section will be to prove Proposition 6.29 and Lemma 4.4. Fix \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for some \(\alpha^{\parallel},\alpha^{\perp}>0\) and let \(\Lambda\subset\mathbb{Z}^{d}\) be finite. Recall the definition of the configuration space for semi-infinite-volume under Dobrushin boundary conditions \(\Omega^{\Lambda,\text{Dob}}\) in (4.4), the Hamiltonian \(\mathcal{H}^{\eta,\Lambda}\) in (4.5), and of the ground energy with respect to it \(\operatorname{GE}^{\Lambda,\eta}\) in (4.6). Recall as well the definition of parallel and perpendicular layering in (6.1) and (6.2). ### Defining \(\tau_{0}\) Let \(E\subseteq\Lambda\) and let \(\sigma_{0}\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) be a configuration in \(\Omega^{\Lambda,\mathrm{Dob}}\). Define a function \(I_{\sigma_{0}}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\,\text{``layered''}\}\) as follows. For each \(v\in\mathbb{Z}^{d}\), if \(\sigma_{0}\) has a unique sign change above \(v\), then set \(I_{\sigma_{0}}(v)\) to be the location of this sign change (precisely, if \(\sigma_{0}(v,k)=-1\) and \(\sigma_{0}(v,k+1)=1\) then set \(I_{\sigma_{0}}(v)=k\)). If \(\sigma_{0}\) has more than one sign change above \(v\) then set \(I_{\sigma_{0}}(v)=\text{``layered''}\). Define a graph \(G_{\sigma_{0}}\) to be the induced subgraph of \(\mathbb{Z}^{d}\) on the vertex set \(V_{\sigma_{0}}\), where \(V_{\sigma_{0}}\subset\mathbb{Z}^{d}\) is defined to be the set of vertices \(v\) satisfying that there exists a neighbor \(u\sim v\) (in the usual connectivity of \(\mathbb{Z}^{d}\)) such that either \(I_{\sigma_{0}}(u)\neq I_{\sigma_{0}}(v)\) or \(I_{\sigma_{0}}(u)=I_{\sigma_{0}}(v)=\text{``layered''}\). Recall from (6.19) the definition of \(\partial_{\text{vis}(w)}(A)\), the outer vertex boundary of a set \(A\subseteq\mathbb{Z}^{d}\) visible from a point \(w\in\mathbb{Z}^{d}\). **Observation 7.1**.: _For every connected component \(A\) of \(G_{\sigma_{0}}\) and every \(v\in(\mathbb{Z}^{d}\setminus A)\cup\{\infty\}\), there is an integer, which we denote \(\tilde{I}_{\sigma_{0}}(v;A)\), such that \(I_{\sigma_{0}}(u)=\tilde{I}_{\sigma_{0}}(v;A)\) for every \(u\in\partial_{\text{vis}(v)}(A)\)._ Proof.: For any \(u\in\mathbb{Z}^{d}\setminus V_{\sigma_{0}}\), it holds that \(I_{\sigma_{0}}(w)=I_{\sigma_{0}}(u)\), for every \(w\in\mathcal{B}_{1}(u)\). 
Hence, for every \(u_{1},u_{2}\in\mathbb{Z}^{d}\setminus V_{\sigma_{0}}\) such that \(\|u_{1}-u_{2}\|_{1}\leq 2\), i.e., such that \(\mathcal{B}_{1}(u_{1})\cap\mathcal{B}_{1}(u_{2})\neq\emptyset\), it holds that \(I_{\sigma_{0}}(u_{1})=I_{\sigma_{0}}(u_{2})\). The claim follows since \(\partial_{\text{vis}(v)}(A)\subseteq\partial^{\text{out}}A\subseteq\mathbb{Z} ^{d}\setminus V_{\sigma_{0}}\) and the set \(\partial_{\text{vis}(v)}(A)\) is \(\ell_{1}^{+}\)-connected, by Lemma 6.19. For every \(A\subseteq\mathbb{Z}^{d}\), let \[\text{in}(A):=\left\{u\in\mathbb{Z}^{d}\setminus A\colon\text{every path from $u$ to $\infty$ intersects $A$}\right\}.\] **Lemma 7.2**.: _Let \(A_{1},A_{2}\subseteq\mathbb{Z}^{d}\) be nonempty connected sets such that \(\operatorname{dist}(A_{1},A_{2})>1\)._ 1. _If_ \(\operatorname{dist}(A_{1}\cup\text{in}(A_{1}),A_{2}\cup\text{in}(A_{2}))\leq 1\)_, then_ \((A_{1}\cup\text{in}(A_{1}))\cap(A_{2}\cup\text{in}(A_{2}))\neq\emptyset\)_._ 2. _If_ \((A_{1}\cup\text{in}(A_{1}))\cap(A_{2}\cup\text{in}(A_{2}))\neq\emptyset\) _then_ \(A_{1}\subseteq\text{in}(A_{2})\) _or_ \(A_{2}\subseteq\text{in}(A_{1})\)_._ 3. _If_ \(A_{1}\subseteq\text{in}(A_{2})\) _then_ \(\text{in}(A_{1})\subsetneq\text{in}(A_{2})\)_. (Similarly, if_ \(A_{2}\subseteq\text{in}(A_{1})\) _then_ \(\text{in}(A_{2})\subsetneq\text{in}(A_{1})\)_.)_ Proof.: We first show that the first statement holds. There are \(u_{1}\in A_{1}\cup\text{in}(A_{1})\) and \(u_{2}\in A_{2}\cup\text{in}(A_{2})\) such that \(\|u_{1}-u_{2}\|_{1}\leq 1\). If \(u_{1}=u_{2}\) there is nothing to prove, hence we assume that \(\|u_{1}-u_{2}\|_{1}=1\), i.e., \(u_{1}\sim u_{2}\). Since \(\operatorname{dist}(A_{1},A_{2})>1\), necessarily \(u_{1}\notin A_{1}\) or \(u_{2}\notin A_{2}\). With no loss of generality assume that \(u_{1}\notin A_{1}\). Hence, \(u_{1}\in\text{in}(A_{1})\). If \(P\) is a path from \(u_{2}\) to \(\infty\), then starting at \(u_{1}\) and continuing along \(P\) is a path from \(u_{1}\in\text{in}(A_{1})\) to \(\infty\), therefore it must intersect \(A_{1}\); hence, since \(u_{1}\notin A_{1}\), the path \(P\) necessarily intersects \(A_{1}\). Therefore, every path from \(u_{2}\) to \(\infty\) intersects \(A_{1}\), i.e., \(u_{2}\in A_{1}\cup\text{in}(A_{1})\). To prove the second statement, assume by contradiction that there are \(a_{1}\in A_{1}\setminus\text{in}(A_{2})\) and \(a_{2}\in A_{2}\setminus\text{in}(A_{1})\). Consider an arbitrary path \(P_{0}\) from an arbitrary vertex \(u_{0}\in(A_{1}\cup\text{in}(A_{1}))\cap(A_{2}\cup\text{in}(A_{2}))\) to \(\infty\). The path \(P_{0}\) necessarily intersects both \(A_{1}\) and \(A_{2}\). Let \(a\) be the first intersection point of \(P_{0}\) with \(A_{1}\cup A_{2}\). With no loss of generality we may assume that \(a\in A_{1}\). Since \(A_{1}\) is connected, there is a path \(P_{1}\) in \(A_{1}\) from \(a\) to \(a_{1}\). Since \(a_{1}\notin\text{in}(A_{2})\), there is a path \(P_{2}\) from \(a_{1}\) to \(\infty\) that does not intersect \(A_{2}\). Then, the path that is obtained by taking \(P_{0}\) up to the point \(a\), then \(P_{1}\) and then \(P_{2}\) is a path from \(u_{0}\in A_{2}\cup\text{in}(A_{2})\) to \(\infty\) which does not intersect \(A_{2}\), and we get a contradiction. Finally, we show that the last statement holds. 
If \(P\) is a path from a point of \(\text{in}(A_{1})\) to \(\infty\), then it must intersect \(A_{1}\); let \(a_{1}\) be such an intersecting point; the part of the path \(P\) that starts at \(a_{1}\) is a path from \(a_{1}\in A_{1}\subseteq\text{in}(A_{2})\) to \(\infty\), hence it intersects \(A_{2}\). Therefore, every path from any point of \(\operatorname{in}(A_{1})\) to \(\infty\) intersects \(A_{2}\), i.e., \(\operatorname{in}(A_{1})\subseteq A_{2}\cup\operatorname{in}(A_{2})\). By way of contradiction assume that \(\operatorname{in}(A_{1})\nsubseteq\operatorname{in}(A_{2})\); then, there is \(a_{2}\in A_{2}\cap\operatorname{in}(A_{1})\); let \(P\) be a path from \(a_{2}\) to \(\infty\) and let \(a\) be the last intersection point of \(P\) with \(A_{1}\cup A_{2}\); if \(a\in A_{1}\), then the part of \(P\) that starts at \(a\) is a path from \(a\) to \(\infty\) that does not intersect \(A_{2}\), contradicting the assumption that \(A_{1}\subseteq\operatorname{in}(A_{2})\); if \(a\in A_{2}\), then since \(A_{2}\) is connected there is a path \(P_{2}\) in \(A_{2}\) from \(a_{2}\) to \(a\) and then, the path that is obtained by taking \(P_{2}\) and then the part of \(P\) that starts at \(a\) is a path from \(a_{2}\in\operatorname{in}(A_{2})\) to \(\infty\) which does not intersect \(A_{2}\), and we get a contradiction. Hence, \(\operatorname{in}(A_{1})\subseteq\operatorname{in}(A_{2})\), and the inclusion is obviously proper, since \(\emptyset\neq A_{1}\subseteq\operatorname{in}(A_{2})\setminus\operatorname{in} (A_{1})\). Let \(\mathcal{C}\) be the collection of all connected components of \(G_{\sigma_{0}}\), let \[\mathcal{A}=\{A\in\mathcal{C}\colon E\cap(A\cup\operatorname{in}(A))\neq \emptyset\}, \tilde{A}:=\bigcup_{A\in\mathcal{A}}A,\] and let \[B_{\infty}:=\Big{\{}u\in\mathbb{Z}^{d}\colon\text{there is a path from $u$ to $\infty$ that does not intersect $\tilde{A}$}\Big{\}}\] be the unique infinite connected component of \(\mathbb{Z}^{d}\setminus\tilde{A}\). For \(v\in\mathbb{Z}^{d}\setminus\Big{(}B_{\infty}\cup\tilde{A}\Big{)}\), the second and third parts of Lemma 7.2 imply that there exists \(A_{v}\in\mathcal{A}\) such that \(v\in\operatorname{in}(A_{v})\) and \(\operatorname{in}(A_{v})\subsetneq\operatorname{in}(A)\) for any other \(A\in\mathcal{A}\) for which \(v\in\operatorname{in}(A)\). **Lemma 7.3**.: _For every \(v\in\partial^{\operatorname{out}}\tilde{A}\),_ \[I_{\sigma_{0}}(v)=\begin{cases}0&v\in B_{\infty},\\ \tilde{I}_{\sigma_{0}}(v;A_{v})&v\notin B_{\infty}.\end{cases}\] Proof.: Let \(\mathcal{A}_{v}:=\{A\in\mathcal{C}\colon v\in\operatorname{in}(A)\}\) and let \(S_{v}:=\bigcup_{A\in\mathcal{C}\setminus\mathcal{A}_{v}}(A\cup\operatorname{ in}(A))\). Note that \(v\notin S_{v}\). If \(v\in B_{\infty}\) then there is a path from \(v\) to \(\infty\) that does not intersect \(\tilde{A}\), and since \(I_{\sigma_{0}}=0\) at all but finitely many points of \(\mathbb{Z}^{d}\), and hence the set \(S_{v}\) is finite, this path eventually reaches a point in \(\mathbb{Z}^{d}\setminus S_{v}\) where \(I_{\sigma_{0}}=0\). If \(v\notin B_{\infty}\) then any path from \(v\) to \(\infty\) intersects \(A_{v}\), and the last point in any such path before it first meets \(A_{v}\) is necessarily in \(\partial_{\operatorname{viz}(v)}A_{v}\subseteq\mathbb{Z}^{d}\setminus S_{v}\). 
In any case, there is a path \(v_{0},v_{1},\dots,v_{N}\) of points in \(\mathbb{Z}^{d}\setminus\bigcup_{A\in\mathcal{A}_{v}}A\) such that \(v_{i-1}\sim v_{i}\) for every \(1\leq i\leq N\), \(v_{0}=v\), \(v_{N}\notin S_{v}\) and \[I_{\sigma_{0}}(v_{N})=\begin{cases}0&v\in B_{\infty},\\ \tilde{I}_{\sigma_{0}}(v;A_{v})&v\notin B_{\infty}.\end{cases}\] Let \(I:=\{0\leq i\leq N\colon v_{i}\notin S_{v}\}\). Note that \(v_{i}\notin V_{\sigma_{0}}\) for every \(i\in I\), hence \(I_{\sigma_{0}}(v_{i-1})=I_{\sigma_{0}}(v_{i})\) for every \(1\leq i\leq N\) such that \(\{i-1,i\}\cap I\neq\emptyset\). Note that \(0\in I\) and \(N\in I\), hence the set \(\{0,1,\dots,N\}\setminus I\) may be presented as \(\bigcup_{j=1}^{r}\{a_{j},a_{j}+1,\dots,b_{j}-1\}\), where \(1\leq a_{1}<b_{1}<a_{2}<b_{2}<\dots<a_{r}<b_{r}\leq N\). For every \(1\leq j\leq r\), Lemma 7.2 implies that there is \(A_{j}\in\mathcal{C}\setminus\mathcal{A}_{v}\) such that \(v_{i}\in A_{j}\cup\operatorname{in}(A_{j})\) for every \(a_{j}\leq i\leq b_{j}-1\); then obviously \(v_{a_{j}-1},v_{b_{j}}\in\partial^{\operatorname{out}}(A_{j}\cup\operatorname{in}(A_{j}))=\partial_{\operatorname{vis}(\infty)}A_{j}\) and hence \(I_{\sigma_{0}}(v_{a_{j}-1})=\tilde{I}_{\sigma_{0}}(\infty;A_{j})=I_{\sigma_{0}}(v_{b_{j}})\), by Observation 7.1. Hence, \(I_{\sigma_{0}}(v_{0})=I_{\sigma_{0}}(v_{N})\) and the claim follows. Define a "pre-shift" \(\tau_{0}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) as follows. \[\tau_{0}(v):=\begin{cases}0&v\in B_{\infty},\\ I_{\sigma_{0}}(v)&v\in\tilde{A},\\ \tilde{I}_{\sigma_{0}}(v;A_{v})&\text{otherwise}.\end{cases}\] **Observation 7.4**.: _For every connected component \(B\) of \(\mathbb{Z}^{d}\setminus\tilde{A}\), the function \(\tau_{0}\) is constant on the set \(B\cup\partial^{\operatorname{out}}B\)._ Proof.: Let \(v\in\partial^{\operatorname{out}}B\). Necessarily \(v\in\tilde{A}\) and hence \(\tau_{0}(v)=I_{\sigma_{0}}(v)\). There is \(u\in\partial^{\operatorname{in}}B\) such that \(u\sim v\). Necessarily \(u\in\partial^{\operatorname{out}}\tilde{A}\subseteq\mathbb{Z}^{d}\setminus V_{\sigma_{0}}\), therefore \(I_{\sigma_{0}}(v)=I_{\sigma_{0}}(u)\), and by Lemma 7.3, \(I_{\sigma_{0}}(u)=\tau_{0}(u)\). Hence, \(\tau_{0}(v)=\tau_{0}(u)\). To obtain the claim it is therefore enough to show that \(\tau_{0}\) is constant on \(B\). By definition, \(\tau_{0}=0\) on \(B_{\infty}\). If \(B\neq B_{\infty}\), then for every \(u,v\in B\), clearly \(A_{u}=A_{v}\), and moreover \(\partial_{\operatorname{vis}(u)}(A_{u})=\partial_{\operatorname{vis}(v)}(A_{v})\), and hence \(\tau_{0}(u)=\tau_{0}(v)\). ### Defining \(\tau\) Now, we turn the "pre-shift" \(\tau_{0}\) into a shift \(\tau\colon\mathbb{Z}^{d}\to\mathbb{Z}\). Define a configuration \(\sigma\) which has a unique sign change above every vertex \(v\) where \(\tau_{0}(v)\) is an integer, and the sign change occurs at height \(\tau_{0}(v)\). The value of \(\sigma\) above vertices \(v\) where \(\tau_{0}(v)=\text{``layered''}\) equals the value of \(\sigma_{0}\) at these vertices. Note that, by Observation 7.4, \[\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}\\ =\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon x,y\in\tilde{A}\times\mathbb{Z},\,(\sigma_{0})_{x}\neq(\sigma_{0})_{y}\}. 
\tag{7.1}\] By Corollary 3.3 there is an interfacial configuration \(\sigma^{\prime}\) that has a unique sign change above every vertex, with \(\sigma\) having the same sign change at the same height, and \[|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma^{\prime}_{x}\neq\sigma^{\prime }_{y}\}|\\ \leq|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|. \tag{7.2}\] For every \(v\in\mathbb{Z}^{d}\), let \(\tau(v)\) be the height where \(\sigma^{\prime}\) changes sign over \(v\). Note that \(\tau(v)=\tau_{0}(v)\) for every \(v\in\mathbb{Z}^{d}\) such that \(\tau_{0}(v)\neq\) "layered", in particular for every \(v\in(\mathbb{Z}^{d}\setminus\tilde{A})\cup\partial^{\operatorname{out}}( \mathbb{Z}^{d}\setminus\tilde{A})\). This observation, combined with Observation 7.4 yields the following useful corollary. **Corollary 7.5**.: _For every connected component \(B\) of \(\mathbb{Z}^{d}\setminus\tilde{A}\), the function \(\tau\) is constant on the set \(B\cup\partial^{\operatorname{out}}B\)._ ### Bounding the total variation of \(\tau\) via a layering bound This section is devoted to proving the following proposition. **Proposition 7.6**.: _The shift \(\tau\) as defined above satisfies_ \[\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\operatorname{GE}^{\Lambda}(\eta^{ \tau})\geq 2\alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{\tilde{A}}(\sigma_{ 0})-|\tilde{A}|\right)+2\alpha^{\perp}\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{ 0}) \tag{7.3}\] _and consequently,_ \[\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\operatorname{GE}^{\Lambda}(\eta^{ \tau})\geq 2\alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{E}(\sigma_{0})-|E| \right)+2\alpha^{\perp}\max\{\mathcal{L}^{\perp}_{E}(\sigma_{0}),\operatorname{ TV}(\tau)\}. \tag{7.4}\] For every coupling field \(\tilde{\eta}\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) and configuration \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\), denote \[\mathcal{H}^{\tilde{\eta},\Lambda,\parallel}(\sigma) :=2\sum_{\begin{subarray}{c}\{x,y\}\in E^{\parallel}(\mathbb{Z}^{d +1})\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\tilde{\eta}_{\{ x,y\}}1_{\sigma_{x}\neq\sigma_{y}}=2\sum_{u\in\Lambda}\sum_{k\in\mathbb{Z}}\tilde{ \eta}_{\{(u,k),(u,k+1)\}}1_{\sigma_{(u,k)}\neq\sigma_{(u,k+1)}},\] \[\mathcal{H}^{\tilde{\eta},\Lambda,\perp}(\sigma) :=2\sum_{\begin{subarray}{c}\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1} )\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\tilde{\eta}_{\{ x,y\}}1_{\sigma_{x}\neq\sigma_{y}}=2\sum_{\begin{subarray}{c}\{u,v\}\in E( \mathbb{Z}^{d})\\ \{u,v\}\cap\Lambda\neq\emptyset\end{subarray}}\sum_{k\in\mathbb{Z}}\tilde{ \eta}_{\{(u,k),(v,k)\}}1_{\sigma_{(u,k)}\neq\sigma_{(v,k)}}.\] Define a configuration \(\tilde{\sigma}\in\Omega^{\Lambda,\mathrm{Dob}}\) as follows: \[\tilde{\sigma}_{(u,k)}=\begin{cases}(\sigma_{0})_{(u,k+s(u))}&u\in\Lambda \setminus\tilde{A},\\ 1&u\in\tilde{A},\,k>0,\\ -1&u\in\tilde{A},\,k\leq 0.\end{cases}\] **Observation 7.7**.: _If \((u,v)\in\partial\tilde{A}\), i.e., \(u\in\tilde{A}\) and \(v\notin\tilde{A}\), then \(\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+s(u))}\) for all \(k\in\mathbb{Z}\)._ Proof.: Necessarily \(v\notin V_{\sigma_{0}}\) and hence \(\tau_{0}(u)=I_{\sigma_{0}}(u)=I_{\sigma_{0}}(v)\neq\text{``layered''}\). Therefore, \(\tau(u)=\tau_{0}(u)=I_{\sigma_{0}}(u)\). 
Hence, \((\sigma_{0})_{(u,k)}=1\) for every \(k>\tau(u)\) and \((\sigma_{0})_{(u,k)}=-1\) for every \(k\leq\tau(u)\), i.e., \(\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))}\). Proposition 7.6 will easily follow from the following lemmas. **Lemma 7.8**.: _It holds that_ \[\mathcal{H}^{\eta,\Lambda,\parallel}(\sigma_{0})-\mathcal{H}^{\eta^{\tau}, \Lambda,\parallel}(\tilde{\sigma})\geq 2\alpha^{\parallel}\left(\mathcal{L}^{ \parallel}_{\tilde{A}}(\sigma_{0})-|\tilde{A}|\right).\] **Lemma 7.9**.: _It holds that_ \[\mathcal{H}^{\eta,\Lambda,\perp}(\sigma_{0})-\mathcal{H}^{\eta^{\tau}, \Lambda,\perp}(\tilde{\sigma})\geq 2\alpha^{\perp}\mathcal{L}^{\perp}_{ \tilde{A}}(\sigma_{0}).\] Before proving Lemmas 7.8 and 7.9, let us show how they imply Proposition 7.6. Proof of Proposition 7.6.: Combining Lemmas 7.8 and 7.9 yields that \[\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\mathcal{H}^{\eta^{\tau},\Lambda}(\tilde{\sigma}) =\left(\mathcal{H}^{\eta,\Lambda,\parallel}(\sigma_{0})+\mathcal{H }^{\eta,\Lambda,\perp}(\sigma_{0})\right)-\left(\mathcal{H}^{\eta^{\tau}, \Lambda,\parallel}(\tilde{\sigma})+\mathcal{H}^{\eta^{\tau},\Lambda,\perp}( \tilde{\sigma})\right)\] \[=\left(\mathcal{H}^{\eta,\Lambda,\parallel}(\sigma_{0})-\mathcal{ H}^{\eta^{\tau},\Lambda,\parallel}(\tilde{\sigma})\right)+\left(\mathcal{H}^{ \eta,\Lambda,\perp}(\sigma_{0})-\mathcal{H}^{\eta^{\tau},\Lambda,\perp}( \tilde{\sigma})\right)\] \[\geq 2\alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{\tilde{A}} (\sigma_{0})-|\tilde{A}|\right)+2\alpha^{\perp}\mathcal{L}^{\perp}_{\tilde{A} }(\sigma_{0}),\] and (7.3) follows since obviously \(\mathrm{GE}^{\Lambda}(\eta^{\tau})\leq\mathcal{H}^{\eta^{\tau},\Lambda}( \tilde{\sigma})\). We proceed to show how (7.3) implies (7.4). Note first that \(E\cap V_{\sigma_{0}}\subseteq\tilde{A}\) and hence \[\mathcal{L}^{\parallel}_{\tilde{A}}(\sigma_{0})-|\tilde{A}|\geq\mathcal{L}^{ \parallel}_{E\cap V_{\sigma_{0}}}(\sigma_{0})-|E\cap V_{\sigma_{0}}|,\quad \mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0})\geq\mathcal{L}^{\perp}_{E\cap V_{ \sigma_{0}}}(\sigma_{0}).\] Additionally, note that \(\mathcal{L}^{\parallel}_{\{u\}}(\sigma_{0})=1\) for every \(u\notin V_{\sigma_{0}}\) and \(\mathcal{L}^{\perp}_{\{u,v\}}(\sigma_{0})=0\) for every \(\{u,v\}\in E(\mathbb{Z}^{d})\) such that \(\{u,v\}\nsubseteq V_{\sigma_{0}}\) and hence, \[\mathcal{L}^{\parallel}_{E\cap V_{\sigma_{0}}}(\sigma_{0})-|E\cap V_{\sigma_{0} }|=\mathcal{L}^{\parallel}_{E}(\sigma_{0})-|E|,\quad\mathcal{L}^{\perp}_{E\cap V _{\sigma_{0}}}(\sigma_{0})=\mathcal{L}^{\perp}_{E}(\sigma_{0}).\] Finally, by (7.2) and (7.1), \[\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0}) =|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon x,y\in\tilde{A} \times\mathbb{Z},\,(\sigma_{0})_{x}\neq(\sigma_{0})_{y}\}|\] \[=|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|\] \[\geq|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}^{\prime}\neq\sigma_{y}^{ \prime}\}|=\operatorname{TV}(\tau).\qed\] We now prove Lemmas 7.8 and 7.9. 
Proof of Lemma 7.8.: If \(u\in\Lambda\setminus\tilde{A}\), then for every \(k\in\mathbb{Z}\), \[\eta^{\tau}_{\{(u,k),(u,k+1)\}}=\eta_{\{(u,k+\tau(u)),(u,k+1+ \tau(u))\}},\] \[\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))},\quad\tilde{ \sigma}_{(u,k+1)}=(\sigma_{0})_{(u,k+1+\tau(u))},\] and hence, \[\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(u,k+1)\}}1_{(\sigma_{0})_{(u, k)}\neq(\sigma_{0})_{(u,k+1)}}= \sum_{k\in\mathbb{Z}}\eta_{\{(u,k+\tau(u)),(u,k+1+\tau(u))\}}1_{( \sigma_{0})_{(u,k+\tau(u))}\neq(\sigma_{0})_{(u,k+1+\tau(u))}}\] \[= \sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(u,k+1)\}}1_{\tilde{ \sigma}_{(u,k)}\neq\tilde{\sigma}_{(u,k+1)}}.\] If \(u\in\tilde{A}\), then \[\sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(u,k+1)\}}1_{\tilde{ \sigma}_{(u,k)}\neq\tilde{\sigma}_{(u,k+1)}}=\eta^{\tau}_{\{(u,0),(u,1)\}}= \eta_{\{(u,\tau(u)),(u,\tau(u)+1)\}}\] and hence, since by the definition of \(\tau\), the configuration \(\sigma_{0}\) has a sign change at height \(\tau(u)\), i.e., \((\sigma_{0})_{(u,\tau(u))}\neq(\sigma_{0})_{(u,\tau(u)+1)}\), \[\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(u,k+1)\}}1_{(\sigma_{0})_{(u, k)}\neq(\sigma_{0})_{(u,k+1)}}-\sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(u,k+1) \}}1_{\tilde{\sigma}_{(u,k)}\neq\tilde{\sigma}_{(u,k+1)}}\\ =\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(u,k+1)\}}1_{(\sigma_{0})_{ (u,k)}\neq(\sigma_{0})_{(u,k+1)}}-\eta_{\{(u,\tau(u)),(u,\tau(u)+1)\}}\geq \alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{\{u\}}(\sigma_{0})-1\right).\] The result follows. Proof of Lemma 7.9.: For every \(\{u,v\}\in E(\mathbb{Z}^{d})\) such that \(\{u,v\}\cap\Lambda\neq\emptyset\) and \(\{u,v\}\nsubseteq\tilde{A}\), it holds that \(u,v\in B\cup\partial^{\operatorname{out}}B\) for some connected component of \(\mathbb{Z}^{d}\setminus\tilde{A}\) and hence \(\tau(u)=\tau(v)\), by Corollary 7.5. Hence, by (4.9), \(\eta^{\tau}_{\{(u,k),(v,k)\}}=\eta_{\{(u,k+s(u)),(v,k+s(u))\}}\) for every \(k\in\mathbb{Z}\). Additionally, with the aid of Observation 7.7 in case that \(|\{u,v\}\cap\tilde{A}|=1\), it holds that \(\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))}\) and \(\tilde{\sigma}_{(v,k)}=(\sigma_{0})_{(v,k+\tau(v))}=(\sigma_{0})_{(v,k+\tau(u))}\) for every \(k\in\mathbb{Z}\). Therefore, \[\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(v,k)\}}1_{(\sigma_{0})_{(u,k)} \neq(\sigma_{0})_{(v,k)}}= \sum_{k\in\mathbb{Z}}\eta_{\{(u,k+\tau(u)),(v,k+\tau(u))\}}1_{( \sigma_{0})_{(u,k+\tau(u))}\neq(\sigma_{0})_{(v,k+\tau(u))}}\] \[= \sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(v,k)\}}1_{\tilde{ \sigma}_{(u,k)}\neq\tilde{\sigma}_{(v,k)}}\] Hence, since for every \(\{u,v\}\in E(\mathbb{Z}^{d})\) such that both \(u\) and \(v\) are in \(\tilde{A}\) it holds that \(\tilde{\sigma}_{(u,k)}=\tilde{\sigma}_{(v,k)}\) for every integer \(k\), it follows that \[\mathcal{H}^{\eta,\Lambda,\perp}(\sigma_{0})-\mathcal{H}^{\eta^{ \tau},\Lambda,\perp}(\tilde{\sigma}) =2\sum_{\begin{subarray}{c}\{u,v\}\in E(\mathbb{Z}^{d})\\ u,v\in\tilde{A}\end{subarray}}\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(v,k)\}}1_{( \sigma_{0})_{(u,k)}\neq(\sigma_{0})_{(v,k)}}\] \[\geq 2\alpha^{\perp}\sum_{\begin{subarray}{c}\{u,v\}\in E( \mathbb{Z}^{d})\\ u,v\in\tilde{A}\end{subarray}}\sum_{k\in\mathbb{Z}}1_{(\sigma_{0})_{(u,k)} \neq(\sigma_{0})_{(v,k)}}=2\alpha^{\perp}\mathcal{L}_{\tilde{A}}^{\perp}( \sigma_{0}).\qed\] ### Bounding the trip entropy of \(\tau\) This section is devoted to proving the following proposition. 
**Proposition 7.10**.: _The shift \(\tau\) as defined above satisfies_ \[R(\tau)<R(E)+2(|E|-1)+\frac{16}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d} \left(\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\mathrm{GE}^{\Lambda}(\eta^{\tau} )\right).\] Denote \(\mathcal{L}:=\{u\in\mathbb{Z}^{d}\colon I_{\sigma_{0}}(u)=\text{``layered''}\}\) and \(\mathcal{D}:=\{\{u,v\}\in E(\mathbb{Z}^{d})\colon I_{\sigma_{0}}(u)\neq I_{ \sigma_{0}}(v)\}\). **Observation 7.11**.: _It holds that \(\mathcal{L}\subseteq V_{\sigma_{0}}\) and for every \(\{u,v\}\in\mathcal{D}\), the vertices \(u\) and \(v\) are both in \(V_{\sigma_{0}}\) and moreover, belong to the same connected component of \(G_{\sigma_{0}}\)._ **Observation 7.12**.: _For every connected component \(A\) of the graph \(G_{\sigma_{0}}\) it holds that_ \[|A|\leq 2\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{ \perp}(\sigma_{0})\right).\] Proof.: Every \(u\in A\setminus\mathcal{L}\) has at least one neighbour \(v\) such that \(\{u,v\}\in\mathcal{D}\). Hence, by using Observation 7.11, it holds that \[|A|\leq|\mathcal{L}\cap A|+2\left|\mathcal{D}\cap\binom{A}{2}\right|\leq\frac {1}{2}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|\right)+2\mathcal{L}_{ A}^{\perp}(\sigma_{0}).\qed\] **Lemma 7.13**.: _Let \(A\) be a connected component of the graph \(G_{\sigma_{0}}\). There is a set \(S\subseteq A\) such that \(A\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\) and_ \[|S|<\frac{1}{d}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_ {A}^{\perp}(\sigma_{0})\right).\] Proof.: We first show that for every \(a\in A\), \[|\mathcal{L}\cap\mathcal{B}_{2}(a)\cap A|+\left|\mathcal{D}\cap\binom{ \mathcal{B}_{2}(a)\cap A}{2}\right|>d. \tag{7.5}\] If \(I_{\sigma_{0}}(a)=\text{``layered''}\), then \(a\in\mathcal{L}\) and for every neighbour \(b\) of \(a\) it holds that \(b\in A\) and either \(b\in\mathcal{L}\) or \(\{a,b\}\in\mathcal{D}\). Hence, \[|\mathcal{L}\cap\mathcal{B}_{1}(a)\cap A|+\left|\mathcal{D}\cap\binom{ \mathcal{B}_{1}(a)\cap A}{2}\right|\geq 2d+1.\] If \(I_{\sigma_{0}}(a)\neq\text{``layered''}\), then since \(a\in V_{\sigma_{0}}\), it follows that there is a neighbour \(b\) of \(a\) such that \(I_{\sigma_{0}}(b)\neq I_{\sigma_{0}}(a)\). Note that \(b\in A\) and \(\{a,b\}\in\mathcal{D}\). With no loss of generality we may assume that \(b=a+e_{d}\). Then, for every \(v\in\{\pm e_{i}\}_{i=1}^{d-1}\) it holds that \(\{\{a,a+v\},\{b,b+v\},\{a+v,b+v\}\}\cap\mathcal{D}\neq\emptyset\). Hence, by using Observation 7.11, \[\left|\mathcal{D}\cap\binom{\mathcal{B}_{2}(a)\cap A}{2}\right|\geq 2d-1.\] Now, let \(S\) be a set of maximal cardinality in \(A\) such that the sets \(\{\mathcal{B}_{2}(a)\}_{a\in S}\) are mutually disjoint. The maximality of \(S\) implies that \(A\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\), and by (7.5), \[|S| <\frac{1}{d}\sum_{a\in S}\left(|\mathcal{L}\cap\mathcal{B}_{2}(a) \cap A|+\left|\mathcal{D}\cap{\mathcal{B}_{2}(a)\cap A\choose 2}\right|\right)\] \[\leq\frac{1}{d}\left(|\mathcal{L}\cap A|+\left|\mathcal{D}\cap{A \choose 2}\right|\right)\leq\frac{1}{d}\left(\frac{1}{2}\left(\mathcal{L}_{A} ^{\parallel}(\sigma_{0})-|A|\right)+\mathcal{L}_{A}^{\perp}(\sigma_{0})\right).\qed\] Proof of Proposition 7.10.: By Corollary 7.5, every level component of \(s\) intersects \(\tilde{A}\). 
Hence, by (6.18) and Lemma 7.13, \[R(\tau)< R(E)+2(|E|-1)+\sum_{A\in\mathcal{A}}\left(2\mathrm{dist}(E,A)+\frac{20}{d}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{0})\right)\right)+8|\mathcal{LC}(\tau)|. \tag{7.6}\] Let \(A\in\mathcal{A}\) and let \(v\in E\cap(A\cup\mathrm{in}(A))\). If \(v\in A\), then \(\mathrm{dist}(E,A)=0\). Otherwise, let \(B:=\mathcal{B}_{\mathrm{dist}(v,A)}(0)\cap\left(\mathbb{Z}^{d-1}\times\{0\}\right)\). For every \(u\in B\), since \(v\in\mathrm{in}(A)\), it holds that \(\{v+u+t\,e_{d}\colon t\in\mathbb{Z}\}\cap A\neq\emptyset\) and it follows that \(|B|\leq|A|\). Therefore, since the sequence \(\left(\frac{1}{d}{r+d-1\choose d-1}\right)_{d=1}^{\infty}\) is non-decreasing for every positive integer \(r\), \[\mathrm{dist}(v,A)\leq\frac{1}{3}{\mathrm{dist}(v,A)+2\choose 2}\leq\frac{1}{d}{\mathrm{dist}(v,A)+d-1\choose d-1}=\frac{1}{d}|B\cap[0,\infty)^{d}|<\frac{1}{d}|B|\leq\frac{1}{d}|A|\] and hence, by Observation 7.12, \[\mathrm{dist}(E,A)\leq\mathrm{dist}(v,A)<\frac{2}{d}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{0})\right).\] Plugging this into (7.6) yields that \[R(\tau) \leq R(E)+2(|E|-1)+\frac{24}{d}\sum_{A\in\mathcal{A}}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{0})\right)+8|\mathcal{LC}(\tau)|\] \[=R(E)+2(|E|-1)+\frac{24}{d}\left(\mathcal{L}_{\tilde{A}}^{\parallel}(\sigma_{0})-|\tilde{A}|+\mathcal{L}_{\tilde{A}}^{\perp}(\sigma_{0})\right)+8|\mathcal{LC}(\tau)|.\] The result now follows by (7.3) and since \[|\mathcal{LC}(\tau)|\leq\frac{1}{d}\operatorname{TV}(\tau)\leq\frac{1}{2\alpha^{\perp}d}\left(\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\mathrm{GE}^{\Lambda}(\eta^{\tau})\right)\] by Observation 6.20 and (7.4). ### Proof of Proposition 6.29 and Lemma 4.4 Proof of Proposition 6.29.: Let \(\sigma_{0}:=\sigma^{\eta,\Lambda,\mathrm{Dob}}\) and consider the shift \(\tau\) as defined above. Note that \[G^{\eta,\Lambda}(\tau)=\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau})=\mathcal{H}^{\eta,\Lambda}\left(\sigma_{0}\right)-\mathrm{GE}^{\Lambda}(\eta^{\tau}) \tag{7.7}\] and the result follows by (7.4) and Proposition 7.10. In the proof of Lemma 4.4 we will use the following lemma, which is an immediate consequence of [13, Lemma 1]. **Lemma 7.14**.: _For a positive integer \(\ell\), let \(\mathcal{G}_{\ell}\) be the graph on the vertex set \(E(\mathbb{Z}^{\ell})\), in which distinct \(e,\tilde{e}\in E(\mathbb{Z}^{\ell})\) are adjacent if_ \[e,\tilde{e}\in\{\{u,u+e_{i}\},\{u+e_{i},u+e_{i}+e_{j}\},\{u+e_{i}+e_{j},u+e_{j}\},\{u+e_{j},u\}\}\] _for some \(u\in\mathbb{Z}^{\ell}\) and \(1\leq i<j\leq\ell\). If \(A\subseteq\mathbb{Z}^{\ell}\) is a connected set such that \(\mathbb{Z}^{\ell}\setminus A\) is connected as well, then the set of edges \(\{\{u,v\}\in E(\mathbb{Z}^{\ell})\colon(u,v)\in\partial A\}\) is connected in \(\mathcal{G}_{\ell}\)._ Proof of Lemma 4.4.: Let \(\sigma_{0}:=\sigma^{\eta,\Lambda,\operatorname{Dob}}\), let \(E=\{0\}\), and consider the shift \(\tau\) as defined above. By using (7.7), it follows from (7.4) that \(G^{\eta,\Lambda}(\tau)\geq 2\alpha^{\perp}\operatorname{TV}(\tau)\) and Proposition 7.10 yields that \(G^{\eta,\Lambda}(\tau)\geq\frac{1}{16}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\,R(\tau)\); hence, \(\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\); moreover, by (7.3), \(G^{\eta,\Lambda}(\tau)\geq 2\alpha^{\perp}\mathcal{L}^{\perp}_{A}(\sigma_{0})\). 
Hence, it is left to show that \(\mathcal{L}^{\perp}_{A}(\sigma_{0})\geq|k|\). With no loss of generality, assume that \(k>0\) and that \(k=\max\{h\in\mathbb{Z}\colon(\sigma_{0})_{(0,h)}=-1\}\). Define \(\pi_{d+1}:\mathbb{Z}^{d}\times\mathbb{Z}\to\mathbb{Z}\) by \(\pi_{d+1}(u,h)=h\), and for every \(\{u,v\}\in E(\mathbb{Z}^{d+1})\), let \(\pi_{d+1}(\{u,v\})=\{\pi_{d+1}(u),\pi_{d+1}(v)\}\). Note that if \(e,\tilde{e}\in E(\mathbb{Z}^{d+1})\) are adjacent in the graph \(\mathcal{G}_{d+1}\) (defined above in Lemma 7.14), then \(\pi_{d+1}(e)\subseteq\pi_{d+1}(\tilde{e})\) or \(\pi_{d+1}(e)\supseteq\pi_{d+1}(\tilde{e})\). Let \(X:=\{x\in\mathbb{Z}^{d+1}\colon\sigma_{0}(x)=1\}\). The set \(\{x\in X\colon\text{there is no path in $X$ from $x$ to $\infty$}\}\) is necessarily empty, otherwise we could flip all signs in this finite set to get a configuration in \(\Omega^{\Lambda,\operatorname{Dob}}\) with smaller \(H^{\eta}\). Hence, \(X\) is connected. Similarly, the set \(\mathbb{Z}^{d+1}\setminus X=\{y\in\mathbb{Z}^{d+1}\colon\sigma_{0}(y)=-1\}\) is connected as well. Hence, by Lemma 7.14, the set \[\mathcal{I}(\sigma_{0})=\{\{x,y\}\in E(\mathbb{Z}^{d+1})\colon(x,y)\in \partial X\}=\{\{x,y\}\in E(\mathbb{Z}^{d+1})\colon\sigma_{0}(x)=1,\sigma_{0} (y)=-1\}\] is connected in \(\mathcal{G}_{d+1}\). We now argue similarly to the proof of Lemma 7.3. Recall that \(\mathcal{C}\) is the collection of all connected components of the graph \(G_{\sigma_{0}}\) and \(\mathcal{A}=\{A\in\mathcal{C}\colon 0\in A\cup\operatorname{in}(A)\}\), since \(E=\{0\}\). Since \(\sigma_{0}=\rho^{\operatorname{Dob}}\) at all but finitely many points of \(\mathbb{Z}^{d+1}\), and hence the set \(\bigcup_{A\in\mathcal{C}}(A\cup\operatorname{in}(A))\) is finite, there is \(w\in\mathbb{Z}^{d}\setminus\bigcup_{A\in\mathcal{C}}(A\cup\operatorname{in}(A))\) such that \((\sigma_{0})_{(w,k)}=\rho^{\operatorname{Dob}}_{(w,k)}\) for every integer \(k\). Since the set \(\mathcal{I}(\sigma_{0})\) is connected in \(\mathcal{G}_{d+1}\), there is a sequence \((\tilde{e}_{i})_{i=0}^{N}\) of edges in \(\mathcal{I}(\sigma_{0})\) such that \(\tilde{e}_{i-1},\tilde{e}_{i}\) are adjacent in \(\mathcal{G}_{d+1}\) for every \(1\leq i\leq N\), \(\tilde{e}_{0}=\{(0,k+1),(0,k)\}\) and \(\tilde{e}_{N}=\{(w,1),(w,0)\}\). Let \[I:=\left\{0\leq i\leq N\colon\tilde{e}_{i}\nsubseteq\bigcup_{A\in\mathcal{C} \setminus\mathcal{A}}(A\cup\operatorname{in}(A))\times\mathbb{Z}\right\}.\] Since \(0\in I\) and \(N\in I\), the set \(\{0,1,\ldots,N\}\setminus I\) may be presented as \(\bigcup_{j=1}^{r}\{a_{j},a_{j}+1,\ldots,b_{j}-1\}\), where \(1\leq a_{1}<b_{1}<a_{2}<b_{2}<\cdots<a_{r}<b_{r}\leq N\). Fix \(1\leq j\leq r\). Lemma 7.2 implies that there is \(A\in\mathcal{C}\setminus\mathcal{A}\) such that \(\tilde{e}_{i}\subseteq(A\cup\operatorname{in}(A))\times\mathbb{Z}\) for every \(a_{j}<i<b_{j}-1\). Then, \(\tilde{e}_{a_{j}-1}\) has at least one endpoint \((v,h)\) such that \(v\in\partial^{\operatorname{out}}(A\cup\operatorname{in}(A))=\partial_{ \operatorname{vis}(\infty)}A\); Observation 7.1 implies that \(h=I_{\sigma_{0}}(v)=\tilde{I}_{\sigma_{0}}(\infty;A)\) and therefore, \(\tilde{I}_{\sigma_{0}}(\infty;A)\in\pi_{d+1}(\tilde{e}_{a_{j}-1})\). Similarly, \(\tilde{I}_{\sigma_{0}}(\infty;A)\in\pi_{d+1}(\tilde{e}_{b_{j}})\) and hence \(\pi_{d+1}(\tilde{e}_{a_{j}-1})\cap\pi_{d+1}(\tilde{e}_{b_{j}})\neq\emptyset\). 
Since \(\pi_{d+1}(\tilde{e}_{i-1})\subseteq\pi_{d+1}(\tilde{e}_{i})\) or \(\pi_{d+1}(\tilde{e}_{i-1})\supseteq\pi_{d+1}(\tilde{e}_{i})\) for every \(1\leq i\leq N\), \(\pi_{d+1}(\tilde{e}_{a_{j}-1})\cap\pi_{d+1}(\tilde{e}_{b_{j}})\neq\emptyset\) for every \(1\leq j\leq r\), \(\pi_{d+1}(\tilde{e}_{0})=\{k+1,k\}\) and \(\pi_{d+1}(\tilde{e}_{N})=\{1,0\}\), it follows that for every \(1\leq h\leq k\) there is \(i_{h}\in I\) such that \(\pi_{d+1}(\tilde{e}_{i_{h}})=\{h\}\). Then, for every \(1\leq h\leq k\) it holds that \(\tilde{e}_{i_{h}}\in\mathcal{I}(\sigma_{0})\cap E^{\perp}(\mathbb{Z}^{d+1})\) and \(\tilde{e}_{i_{h}}\cap(\tilde{A}\times\mathbb{Z})\neq\emptyset\) and hence, \[\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0})=|\{e\in\mathcal{I}(\sigma_{0})\cap E^{\perp}(\mathbb{Z}^{d+1})\colon e\cap(\tilde{A}\times\mathbb{Z})\neq\emptyset\}|\geq k,\] as desired. ## 8. Convergence of finite-volume ground configurations In this section we prove Theorem 1.9, Corollary 1.10 and Theorem 1.11. We assume throughout the section that we work with the anisotropic disordered ferromagnet in dimension \(D\geq 4\), with disorder distributions \(\nu^{\|}\) and \(\nu^{\perp}\) satisfying (1.11) and that condition (1.14) holds with a sufficiently small constant \(c>0\), so that the assumptions of both Theorem 1.7 and Theorem 4.3 hold. We introduce the notation, for integer \(k\geq 0\), \[\Lambda(k):=\{-k,\ldots,k\}^{d}. \tag{8.1}\] ### Proof of Theorem 1.9 The proof is based on the following deterministic lemma, which is proved using methods similar to those in Section 7. **Lemma 8.1**.: _Let \(\eta\in\mathcal{D}(\alpha^{\|},\alpha^{\perp})\) for some \(\alpha^{\|},\alpha^{\perp}>0\). Let \(L_{1}>L_{0}\geq 0\) be integers. Let \(\Lambda^{1},\Lambda^{2}\subset\mathbb{Z}^{d}\) be finite subsets containing \(\Lambda(L_{1})\). If_ \[\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}}\not\equiv\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}} \tag{8.2}\] _then for \(i=1\) or for \(i=2\) there exists \(\tau\in\mathcal{AS}^{\eta,\Lambda^{i}}(\alpha^{\|},\alpha^{\perp})\) with_ \[G^{\eta,\Lambda^{i}}(\tau)\geq\frac{\min\{\alpha^{\|},\alpha^{\perp}\}}{4}(L_{1}-L_{0})^{1-\frac{1}{d}}. \tag{8.3}\] We postpone the proof of the lemma to Section 8.4. The following is an immediate consequence of the lemma and Theorem 4.3 (applied with \(\Lambda=\Lambda^{1}\) and with \(\Lambda=\Lambda^{2}\)). **Corollary 8.2**.: _There exist constants \(C,c>0\) such that the following holds under the assumptions of Theorem 4.3 (for a sufficiently small \(c_{0}>0\)). Let \(L_{1}>L_{0}\geq 0\) be integers. Let \(\Lambda^{1},\Lambda^{2}\subset\mathbb{Z}^{d}\) be finite subsets containing \(\Lambda(L_{1})\). Then_ \[\mathbb{P}\left(\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}}\not\equiv\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}}\right)\\ \leq C\exp\left(-\frac{c}{\kappa d^{2}}\left(\min\left\{\frac{\alpha^{\|}}{\alpha^{\perp}},1\right\}\right)^{\frac{d-2}{d-1}}(L_{1}-L_{0})^{\frac{d-2}{d}}\right). \tag{8.4}\] We proceed to prove Theorem 1.9. It suffices to prove that for every sequence \((\Lambda_{n})\) of finite domains in \(\mathbb{Z}^{d}\), satisfying that \(\Lambda_{n}\supset\Lambda(n)\) for each \(n\), and for every \(v\in\mathbb{Z}^{d}\), the restricted configuration \(\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) is the same for all large \(n\), almost surely. (8.5)
Indeed, if we then define \[\text{for each }v\in\mathbb{Z}^{d},\,\sigma^{\eta,\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\text{ is the eventual value of }\sigma^{\eta,\Lambda(n),\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\text{ as }n\to\infty \tag{8.6}\] then we may conclude that the eventual value of \(\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) also equals \(\sigma^{\eta,\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) by applying (8.5) to the following two sequences of domains formed by interlacing subsequences of \((\Lambda_{n})\) and \((\Lambda(n))\): either taking \((\Lambda_{2n})\) in the even positions and \((\Lambda(2n+1))\) in the odd positions or taking \((\Lambda_{2n+1})\) in the odd positions and \((\Lambda(2n))\) in the even positions. We proceed to prove (8.5), for some fixed sequence \((\Lambda_{n})\) of finite domains in \(\mathbb{Z}^{d}\), satisfying that \(\Lambda_{n}\supset\Lambda(n)\) for each \(n\). Let \(L_{0}\geq 0\) be an integer. Let \(E_{n}:=\{\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}}\not=\sigma^{\eta,\Lambda_{n+1},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}}\}\). By Corollary 8.2 (with \(L_{1}=n\)) we deduce that for every \(n>L_{0}\), \[\mathbb{P}(E_{n})\leq C\exp\left(-c(\nu^{\|},\nu^{\perp},d)(n-L_{0})^{\frac{d-2}{d}}\right) \tag{8.7}\] with \(c(\nu^{\parallel},\nu^{\perp},d)>0\) depending only on \(\nu^{\parallel},\nu^{\perp}\) and the dimension \(d\), and with some absolute constant \(C>0\). Thus \(\sum_{n}\mathbb{P}(E_{n})<\infty\). We conclude that only a finite number of the \(E_{n}\) hold, almost surely, implying that (8.5) holds for all \(v\in\Lambda(L_{0})\). This finishes the proof of (8.5) as \(L_{0}\) is arbitrary. ### Proof of Corollary 1.10 The probabilistic estimates (1.23) and (1.24) hold as a consequence of Theorem 1.7 (with \(\Lambda\) equal to some \(\Lambda(n)\)). We proceed to define a \(G^{d}\)-invariant set \(\mathcal{C}_{0}\) of coupling fields satisfying that \(\mathbb{P}(\mathcal{C}_{0})=1\) and, on \(\mathcal{C}_{0}\), \(\sigma^{\eta,\mathrm{Dob}}\) is well defined and is \(G^{d}\)-covariant. For an automorphism \(h\in G^{d}\) and a set \(\Lambda\subset\mathbb{Z}^{d}\) we define \(h(\Lambda)\) as the set in \(\mathbb{Z}^{d}\) satisfying that \(h(\Lambda)\times\mathbb{Z}=h(\Lambda\times\mathbb{Z})\) (such a set \(h(\Lambda)\) exists by the definition (1.9) of \(G^{d}\)). Define \[\mathcal{C}_{\mathrm{unique}}:=\bigcap_{g,h\in G^{d}}\{\eta\colon\forall n,\,\text{there is a unique ground configuration in }\Omega^{h(\Lambda(n))\times\mathbb{Z},\rho^{\mathrm{Dob}}}\text{ for }g(\eta)\}. \tag{8.8}\] As \(G^{d}\) is countable and \(g(\eta)\) has the same distribution as \(\eta\) (since \(g\in G^{d}\)) we have that \(\mathbb{P}(\mathcal{C}_{\mathrm{unique}})=1\) by Lemma 1.5. It is clear from the definition that \(\mathcal{C}_{\mathrm{unique}}\) is \(G^{d}\)-invariant. As before, the unique ground configuration (on \(\mathcal{C}_{\mathrm{unique}}\)) in \(\Omega^{h(\Lambda(n))\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) for \(g(\eta)\) is denoted \(\sigma^{g(\eta),h(\Lambda(n)),\mathrm{Dob}}\). Now set \[\mathcal{C}_{0}:=\left\{\eta\in\mathcal{C}_{\mathrm{unique}}\colon\begin{array}{l}\text{for each }g\in G^{d},\text{ there exists a configuration }\sigma^{g(\eta),\mathrm{Dob}}:\mathbb{Z}^{D}\to\{-1,1\}\text{ such that}\\ \lim_{n\to\infty}\sigma^{g(\eta),h(\Lambda(n)),\mathrm{Dob}}_{x}=\sigma^{g(\eta),\mathrm{Dob}}_{x}\text{ for all }h\in G^{d}\text{ and }x\in\mathbb{Z}^{D}\end{array}\right\}. \tag{8.9}\]
Then \(\mathbb{P}(\mathcal{C}_{0})=1\) by Theorem 1.9 (applied to the sequence \((h(\Lambda(n_{0}+n)))_{n}\) for \(n_{0}=n_{0}(h)\) large enough so that \(h(\Lambda(n_{0}+n))\supset\Lambda(n)\) for each \(n\)). It is again clear from the definition that \(\mathcal{C}_{0}\) is \(G^{d}\)-invariant. We proceed to check that \(\sigma^{\eta,\mathrm{Dob}}\) is \(G^{d}\)-covariant on \(\mathcal{C}_{0}\). Note that, for \(\Lambda\subset\mathbb{Z}^{d}\) and an automorphism \(a\) of \(\mathbb{Z}^{D}\), the set of ground configurations in \(\Omega^{a(\Lambda)\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) for \(a(\eta)\) equals \(a\) applied to the set of ground configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) for \(\eta\). In particular, if \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is well defined (i.e., uniqueness holds) then also \(\sigma^{a(\eta),a(\Lambda),\mathrm{Dob}}\) is well defined and we have \[\sigma^{a(\eta),a(\Lambda),\mathrm{Dob}}_{x}=\sigma^{\eta,\Lambda,\mathrm{Dob}}_{ax}=a(\sigma^{\eta,\Lambda,\mathrm{Dob}})_{x},\quad x\in\mathbb{Z}^{D}. \tag{8.10}\] Now let \(g\in G^{d}\) and \(\eta\in\mathcal{C}_{0}\). For each \(x\in\mathbb{Z}^{D}\), \[g(\sigma^{\eta,\mathrm{Dob}})_{x}=\sigma^{\eta,\mathrm{Dob}}_{gx}=\lim_{n\to\infty}\sigma^{\eta,\Lambda(n),\mathrm{Dob}}_{gx}=\lim_{n\to\infty}\sigma^{g(\eta),g(\Lambda(n)),\mathrm{Dob}}_{x}=\sigma^{g(\eta),\mathrm{Dob}}_{x}, \tag{8.11}\] where the second and last equality use the definition of \(\mathcal{C}_{0}\) and the third equality uses (8.10). Thus \(\sigma^{\eta,\mathrm{Dob}}\) is a \(G^{d}\)-covariant ground configuration defined on \(\mathcal{C}_{0}\). It remains to define a \(G^{d}\)-invariant set of coupling fields \(\mathcal{C}\subset\mathcal{C}_{0}\) with \(\mathbb{P}(\mathcal{C})=1\) such that \(\sigma^{\eta,\mathrm{Dob}}\) is a non-constant \(G^{d}\)-covariant ground configuration defined on \(\mathcal{C}\). Define \[\mathcal{C}_{\mathrm{non-const}}:=\{\eta\colon\sigma^{\eta,\mathrm{Dob}}\text{ is not a constant configuration}\}, \tag{8.12}\] \[\mathcal{C}_{\mathrm{ground}}:=\{\eta\colon\sigma^{\eta,\mathrm{Dob}}\text{ is a ground configuration for the coupling field }\eta\}. \tag{8.13}\] Set \[\mathcal{C}:=\mathcal{C}_{0}\cap\mathcal{C}_{\mathrm{non-const}}\cap\mathcal{C}_{\mathrm{ground}}. \tag{8.14}\] Then \(\mathbb{P}(\mathcal{C}_{\mathrm{non-const}})=1\) by the estimate (1.23), and \(\mathbb{P}(\mathcal{C}_{\mathrm{ground}})=1\) since \(\sigma^{\eta,\mathrm{Dob}}\) is the pointwise limit of \(\sigma^{\eta,\Lambda(n),\mathrm{Dob}}\) and each of these configurations is, almost surely, a ground configuration in \(\Omega^{\Lambda(n)\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) by Lemma 1.5. The set \(\mathcal{C}\) is \(G^{d}\)-invariant since the covariance identity (8.11) holds on \(\mathcal{C}_{0}\) (and it maps ground configurations for \(\eta\) to ground configurations for \(g(\eta)\)). This finishes the proof. ### Proof of Theorem 1.11 The estimate (1.26) is a direct consequence of Corollary 8.2, applied with \(\Lambda^{1}=\Lambda(n)\) and \(\Lambda^{2}=\Lambda\), by taking \(n\) to infinity and applying the convergence result of Theorem 1.9. We proceed to establish (1.27). Let \(k:=\lfloor\frac{\|u-v\|_{\infty}}{2}\rfloor\), so that \(u+\Lambda(k)\) is disjoint from \(v+\Lambda(k)\).
By (1.26) (after a translation by \(u\) and by \(v\)), \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}|_{(u+\Lambda(L))\times \mathbb{Z}}\not\equiv\sigma^{\eta,u+\Lambda(k),\mathrm{Dob}}|_{(u+\Lambda(L)) \times\mathbb{Z}}\right) \leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(k-L\right) ^{\frac{d-2}{d}}\right),\] \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}|_{(v+\Lambda(L)) \times\mathbb{Z}}\not\equiv\sigma^{\eta,v+\Lambda(k),\mathrm{Dob}}|_{(v+ \Lambda(L))\times\mathbb{Z}}\right) \leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(k-L\right) ^{\frac{d-2}{d}}\right).\] The estimate (1.27) follows (perhaps with a smaller \(c>0\)) as \(\sigma^{\eta,u+\Lambda(k),\mathrm{Dob}}\) and \(\sigma^{\eta,v+\Lambda(k),\mathrm{Dob}}\) are independent (as they are functions of disjoint subsets of the disorder). Lastly, the fact that \((\eta,\sigma^{\eta,\mathrm{Dob}})\) is \(G^{d}\)-invariant is a rephrasing of the fact that \(\sigma^{\eta,\mathrm{Dob}}\) is \(G^{d}\)-covariant (proved in Corollary 1.10). The fact that it has a trivial \(\mathbb{Z}^{d}\)-tail sigma algebra is a consequence of (1.26), since for each finite \(\Lambda\subset\mathbb{Z}^{d}\), \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is a function of \((\eta_{e})\) for the edges \(e\) above \(\Lambda\), and hence \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is independent of the \(\mathbb{Z}^{d}\)-tail sigma algebra of \((\eta,\sigma^{\eta,\mathrm{Dob}})\). It is standard that an invariant and tail-trivial process is ergodic. ### Proof of Lemma 8.1 For a configuration \(\sigma:\mathbb{Z}^{d+1}\to\{-1,1\}\), define the graph \(G_{\sigma}\) and the function \(I_{\sigma}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) as in Section 7.1. **Lemma 8.3**.: _Let \(\sigma:\mathbb{Z}^{d+1}\to\{-1,1\}\) be a configuration such that \(\sigma=\rho^{\mathrm{Dob}}\) at all but finitely many points of \(\mathbb{Z}^{d+1}\) and let \(u_{0}\in\mathbb{Z}^{d}\) such that there exists \(k_{0}\in\mathbb{Z}\) for which \(\sigma_{(u_{0},k_{0})}\neq\rho^{\mathrm{Dob}}_{(u_{0},k_{0})}\). Then, there exists a connected component \(A\) of \(G_{\sigma}\) such that \(u_{0}\in A\cup\mathrm{in}(A)\)._ Proof.: Let \(U_{0}\) be the connected component of the set \(\{u\in\mathbb{Z}^{d}\colon\sigma_{(u,k_{0})}=\sigma_{(u_{0},k_{0})}\}\) containing \(u_{0}\), let \(U:=U_{0}\cup\mathrm{in}(U_{0})\) and let \(A_{0}:=\partial^{\mathrm{in}}U\cup\partial^{\mathrm{out}}U\). Recall the definition of the graph \(\mathcal{G}_{d}\) from Lemma 7.14. Obviously, \(U\) and \(\mathbb{Z}^{d}\setminus U\) are both connected. Hence, by Lemma 7.14, the set \(\{\{u,v\}\in E(\mathbb{Z}^{d})\colon(u,v)\in\partial U\}\) is connected in \(\mathcal{G}_{d}\) and it readily follows that the set \(A_{0}\) is connected. Clearly, \(\partial^{\mathrm{in}}U\subseteq\partial^{\mathrm{in}}U_{0}\subseteq V_{\sigma}\) and \(\partial^{\mathrm{out}}U\subseteq\partial^{\mathrm{out}}U_{0}\subseteq V_{\sigma}\), and hence \(A_{0}\subseteq V_{\sigma}\). Let \(A\) be the connected component of \(G_{\sigma}\) such that \(A_{0}\subseteq A\). The set \(\{u\in\mathbb{Z}^{d}\colon\sigma_{(u,k_{0})}=\sigma_{(u_{0},k_{0})}\}\subseteq\{ u\in\mathbb{Z}^{d}\colon\sigma_{(u,k_{0})}\neq\rho^{\mathrm{Dob}}_{(u,k_{0})}\}\) is finite, hence \(U_{0}\) and \(U\) are finite as well. 
Therefore, since \(u_{0}\in U_{0}\subseteq U\), it follows that every path from \(u_{0}\) to \(\infty\) intersects the set \(\partial^{\mathrm{in}}U\subseteq A_{0}\subseteq A\) (as well as the set \(\partial^{\mathrm{out}}U\subseteq A_{0}\subseteq A\)) and hence \(u_{0}\in A\cup\mathrm{in}(A)\). **Lemma 8.4**.: _Under the assumptions of Lemma 8.1, there exists a path \(u_{1},\ldots,u_{n}\) of points in \(\mathbb{Z}^{d}\) starting in \(u_{1}\in\partial^{\mathrm{out}}\Lambda(L_{0})\) and ending in \(u_{n}\in\partial^{\mathrm{in}}\Lambda(L_{1})\) such that \(u_{j-1}\sim u_{j}\) for every \(1<j\leq n\), and for every \(1\leq j\leq n\) there is an integer \(k\) such that \(\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}_{(u_{j},k)}\neq\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}_{(u_{j},k)}\)._ Proof.: Let \[U:=\{u\in\Lambda(L_{1})\colon\forall k\in\mathbb{Z}\text{ it holds that }\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}_{(u,k)}=\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}_{(u,k)}\}.\] By way of contradiction, assume that \(\Lambda(L_{0})\subseteq U\cup\operatorname{in}(U)\). Consider the configurations \(\tilde{\sigma}_{1},\tilde{\sigma}_{2}\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) defined as follows. For every \(u\in\mathbb{Z}^{d}\) and \(k\in\mathbb{Z}\), \[(\tilde{\sigma}_{1})_{(u,k)}=\begin{cases}\sigma^{\eta,\Lambda^{2},\operatorname{Dob}}_{(u,k)}&u\in\operatorname{in}(U),\\ \sigma^{\eta,\Lambda^{1},\operatorname{Dob}}_{(u,k)}&\text{otherwise},\end{cases}\qquad\qquad(\tilde{\sigma}_{2})_{(u,k)}=\begin{cases}\sigma^{\eta,\Lambda^{1},\operatorname{Dob}}_{(u,k)}&u\in\operatorname{in}(U),\\ \sigma^{\eta,\Lambda^{2},\operatorname{Dob}}_{(u,k)}&\text{otherwise}.\end{cases}\] For any \(i\in\{1,2\}\), clearly \(\tilde{\sigma}_{i}\in\Omega^{\Lambda^{i},\operatorname{Dob}}\) and hence \(\mathcal{H}^{\eta,\Lambda^{i}}(\tilde{\sigma}_{i})\geq\mathcal{H}^{\eta,\Lambda^{i}}(\sigma^{\eta,\Lambda^{i},\operatorname{Dob}})\). Therefore, since it is easy to see that \[\mathcal{H}^{\eta,\Lambda^{1}}(\tilde{\sigma}_{1})+\mathcal{H}^{\eta,\Lambda^{2}}(\tilde{\sigma}_{2})=\mathcal{H}^{\eta,\Lambda^{1}}(\sigma^{\eta,\Lambda^{1},\operatorname{Dob}})+\mathcal{H}^{\eta,\Lambda^{2}}(\sigma^{\eta,\Lambda^{2},\operatorname{Dob}}),\] it follows that \(\mathcal{H}^{\eta,\Lambda^{1}}(\tilde{\sigma}_{1})=\mathcal{H}^{\eta,\Lambda^{1}}(\sigma^{\eta,\Lambda^{1},\operatorname{Dob}})\) (as well as \(\mathcal{H}^{\eta,\Lambda^{2}}(\tilde{\sigma}_{2})=\mathcal{H}^{\eta,\Lambda^{2}}(\sigma^{\eta,\Lambda^{2},\operatorname{Dob}})\)), in contradiction to the uniqueness of \(\sigma^{\eta,\Lambda^{1},\operatorname{Dob}}\). Hence, \(\Lambda(L_{0})\nsubseteq U\cup\operatorname{in}(U)\) and the claim follows. We proceed to prove Lemma 8.1. For any \(i\in\{1,2\}\) denote, for brevity, \(\sigma_{i}:=\sigma^{\eta,\Lambda^{i},\operatorname{Dob}}\) and let \(\mathcal{C}_{i}\) be the collection of all connected components of \(G_{\sigma_{i}}\). For the path \(u_{1},\ldots,u_{n}\) guaranteed by Lemma 8.4, note that for every \(1\leq j\leq n\) there is an integer \(k\) such that \((\sigma_{1})_{(u_{j},k)}\neq\rho^{\operatorname{Dob}}_{(u_{j},k)}\) or \((\sigma_{2})_{(u_{j},k)}\neq\rho^{\operatorname{Dob}}_{(u_{j},k)}\) and hence, by Lemma 8.3, there is \(A\in\mathcal{C}_{1}\cup\mathcal{C}_{2}\) such that \(u_{j}\in A\cup\operatorname{in}(A)\). Let \(\mathcal{M}\subseteq\mathcal{C}_{1}\cup\mathcal{C}_{2}\) be a minimal collection (with respect to inclusion) such that for every \(1\leq j\leq n\) there is \(A\in\mathcal{M}\) such that \(u_{j}\in A\cup\operatorname{in}(A)\).
For any \(i\in\{1,2\}\), let \(A^{(i)}:=\bigcup_{A\in\mathcal{M}\cap\mathcal{C}_{i}}A\) and \[B^{(i)}_{\infty}:=\{u\in\mathbb{Z}^{d}:\text{there is a path from $u$ to $\infty$ that does not intersect $A^{(i)}$}\}.\] Consider \(v\in\mathbb{Z}^{d}\setminus(A^{(i)}\cup B^{(i)}_{\infty})\) for \(i\in\{1,2\}\). Clearly, there is \(A\in\mathcal{M}\cap\mathcal{C}_{i}\) such that \(v\in\operatorname{in}(A)\), and this \(A\) is unique by the second and third parts of Lemma 7.2 and the minimality of \(\mathcal{M}\). Let \(\tilde{\tau}_{i}(v):=\tilde{I}_{\sigma_{i}}(v;A)\) (see Observation 7.1). We complete the definition of a "pre-shift" \(\tilde{\tau}_{i}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) by setting \(\tilde{\tau}_{i}(v)=I_{\sigma_{i}}(v)\) for \(v\in A^{(i)}\) and \(\tilde{\tau}_{i}(v)=0\) for \(v\in B^{(i)}_{\infty}\). We turn the "pre-shift" \(\tilde{\tau}_{i}\) for \(i\in\{1,2\}\) into a shift \(\tau_{i}\colon\mathbb{Z}^{d}\to\mathbb{Z}\) exactly as done in Section 7.2. The arguments presented in the proof of Proposition 7.6 imply, by using (7.7), that for any \(i\in\{1,2\}\), \[G^{\eta,\Lambda^{i}}(\tau_{i})\geq 2\alpha^{\|}\left(\mathcal{L}^{\|}_{A^{(i)}}( \sigma_{i})-|A^{(i)}|\right)+2\alpha^{\perp}\mathcal{L}^{\perp}_{A^{(i)}}( \sigma_{i}) \tag{8.15}\] and consequently, \[G^{\eta,\Lambda^{i}}(\tau_{i})\geq 2\alpha^{\perp}\operatorname{TV}(\tau_{i}). \tag{8.16}\] **Lemma 8.5**.: _For any \(i\in\{1,2\}\),_ \[R(\tau_{i})<\frac{24}{\min\{\alpha^{\|},\alpha^{\perp}\}d}\max\left\{G^{\eta, \Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\right\}.\] Proof.: We first show that the set \(A^{(1)}\cup A^{(2)}=\bigcup_{A\in\mathcal{M}}A\) is connected. By way of contradiction, assume that there is a set \(\emptyset\neq\Gamma\subsetneq\bigcup_{A\in\mathcal{M}}A\) such that \(\operatorname{dist}(\Gamma,\left(\bigcup_{A\in\mathcal{M}}A\right)\setminus \Gamma)>1\). Since every \(A\in\mathcal{M}\) is connected, there is necessarily \(\emptyset\neq\mathcal{M}_{0}\subsetneq\mathcal{M}\) such that \(\Gamma=\bigcup_{A\in\mathcal{M}_{0}}A\) and \(\left(\bigcup_{A\in\mathcal{M}}A\right)\setminus\Gamma=\bigcup_{A\in \mathcal{M}\setminus\mathcal{M}_{0}}A\). Note that it follows that \(\operatorname{dist}(A,A^{\prime})>1\) for every \(A\in\mathcal{M}_{0}\) and \(A^{\prime}\in\mathcal{M}\setminus\mathcal{M}_{0}\). The minimality of \(\mathcal{M}\) implies that neither \(\{u_{j}\}_{j=1}^{n}\subseteq\bigcup_{A\in\mathcal{M}_{0}}(A\cup\operatorname{ in}(A))\) nor \(\{u_{j}\}_{j=1}^{n}\subseteq\bigcup_{A\in\mathcal{M}\setminus\mathcal{M}_{0}}(A\cup \operatorname{in}(A))\). Since \(\{u_{j}\}_{j=1}^{n}\subseteq\bigcup_{A\in\mathcal{M}}(A\cup\operatorname{in}( A))\), it follows that there are \(1\leq j,j^{\prime}\leq n\), \(A\in\mathcal{M}_{0}\) and \(A^{\prime}\in\mathcal{M}\setminus\mathcal{M}_{0}\) such that \(u_{j}\in(A\cup\operatorname{in}(A))\), \(u_{j^{\prime}}\in\operatorname{in}(A^{\prime})\cup A^{\prime}\) and \(|j-j^{\prime}|=1\), therefore \(\|u_{j}-u_{j^{\prime}}\|_{1}=1\) and hence \(\operatorname{dist}(A\cup\operatorname{in}(A),A^{\prime}\cup\operatorname{in}(A^{ \prime}))\leq 1\). Lemma 7.2 implies that either \(A\cup\operatorname{in}(A)\subsetneq A^{\prime}\cup\operatorname{in}(A^{\prime})\) or \(A^{\prime}\cup\operatorname{in}(A^{\prime})\subsetneq A\cup\operatorname{in}(A)\), contradicting the minimality of \(\mathcal{M}\). 
Lemma 7.13 implies that for any \(i\in\{1,2\}\) there is a set \(S_{i}\subseteq A^{(i)}\) such that \(A^{(i)}\subseteq\bigcup_{a\in S_{i}}\mathcal{B}_{4}(a)\) and, by using (8.15), \[|S_{i}| <\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}\frac{1}{d}\left(\mathcal{L}_{A}^{\|}(\sigma_{i})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{i})\right)\] \[=\frac{1}{d}\left(\mathcal{L}_{A^{(i)}}^{\|}(\sigma_{i})-|A^{(i)}|+\mathcal{L}_{A^{(i)}}^{\perp}(\sigma_{i})\right)<\frac{1}{2\min\{\alpha^{\|},\alpha^{\perp}\}d}G^{\eta,\Lambda^{i}}(\tau_{i}).\] For any \(i\in\{1,2\}\), the definition of \(\tau_{i}\) implies, as in Sections 7.1 and 7.2, that every level component of \(\tau_{i}\) intersects \(A^{(i)}\), and therefore intersects \(A^{(1)}\cup A^{(2)}\). Hence, by (6.18), since \(A^{(1)}\cup A^{(2)}\subseteq\bigcup_{a\in S_{1}\cup S_{2}}\mathcal{B}_{4}(a)\), \[R(\tau_{i})<20|S_{1}\cup S_{2}|+8|\mathcal{LC}(\tau_{i})|<\frac{10}{\min\{\alpha^{\|},\alpha^{\perp}\}d}\left(G^{\eta,\Lambda^{1}}(\tau_{1})+G^{\eta,\Lambda^{2}}(\tau_{2})\right)+8|\mathcal{LC}(\tau_{i})|\] and the result follows since \[|\mathcal{LC}(\tau_{i})|\leq\frac{1}{d}\operatorname{TV}(\tau_{i})\leq\frac{1}{2\alpha^{\perp}d}G^{\eta,\Lambda^{i}}(\tau_{i})\] by Observation 6.20 and (8.16). **Lemma 8.6**.: _It holds that_ \[\max\left\{G^{\eta,\Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\right\}\geq\frac{\min\{\alpha^{\|},\,\alpha^{\perp}\}}{4}(L_{1}-L_{0})^{1-\frac{1}{d}}.\] Proof.: For every finite set \(A\subset\mathbb{Z}^{d}\), by (6.10), \[|A|\geq\frac{1}{2d}|\partial(\operatorname{in}(A))|\geq|\operatorname{in}(A)|^{1-\frac{1}{d}}\] and therefore \[2|A|\geq|\operatorname{in}(A)|^{1-\frac{1}{d}}+|A|\geq|\operatorname{in}(A)|^{1-\frac{1}{d}}+|A|^{1-\frac{1}{d}}\geq(|\operatorname{in}(A)|+|A|)^{1-\frac{1}{d}}=|\operatorname{in}(A)\cup A|^{1-\frac{1}{d}}.\] Hence, for any \(i\in\{1,2\}\), by (8.15) and Observation 7.12, \[\frac{2}{\min\{\alpha^{\|},\,\alpha^{\perp}\}}G^{\eta,\Lambda^{i}}(\tau_{i}) \geq 4\left(\mathcal{L}_{A^{(i)}}^{\|}(\sigma_{i})-|A^{(i)}|+\mathcal{L}_{A^{(i)}}^{\perp}(\sigma_{i})\right)\] \[=\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}4\left(\mathcal{L}_{A}^{\|}(\sigma_{i})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{i})\right)\] \[\geq\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}2|A|\geq\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}|\operatorname{in}(A)\cup A|^{1-\frac{1}{d}}.\] Therefore, \[\frac{4}{\min\{\alpha^{\|},\,\alpha^{\perp}\}}\max\left\{G^{\eta,\Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\right\}\geq\frac{2}{\min\{\alpha^{\|},\,\alpha^{\perp}\}}\left(G^{\eta,\Lambda^{1}}(\tau_{1})+G^{\eta,\Lambda^{2}}(\tau_{2})\right)\] \[\geq\sum_{A\in\mathcal{M}}|\operatorname{in}(A)\cup A|^{1-\frac{1}{d}}\geq\left(\sum_{A\in\mathcal{M}}|\operatorname{in}(A)\cup A|\right)^{1-\frac{1}{d}}\] and the result follows since \(\{u_{1},\ldots,u_{n}\}\subseteq\bigcup_{A\in\mathcal{M}}\left(A\cup\mathrm{in}(A)\right)\), and hence \[\sum_{A\in\mathcal{M}}\left|\mathrm{in}(A)\cup A\right|\geq\Big{|}\bigcup_{A\in\mathcal{M}}\left(A\cup\mathrm{in}(A)\right)\Big{|}\geq\left|\{u_{1},\ldots,u_{n}\}\right|\geq L_{1}-L_{0}.\qed\] To conclude the proof of Lemma 8.1, take \(i\in\{1,2\}\) for which \(\max\{G^{\eta,\Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\}=G^{\eta,\Lambda^{i}}(\tau_{i})\). Then \(\tau_{i}\) is admissible by (8.16) and Lemma 8.5, and \[G^{\eta,\Lambda^{i}}(\tau_{i})\geq\frac{\min\{\alpha^{\parallel},\,\alpha^{\perp}\}}{4}(L_{1}-L_{0})^{1-\frac{1}{d}}\] by Lemma 8.6.
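For completeness, we record the elementary subadditivity bound used twice in the proof of Lemma 8.6 above; here \(\theta\) is merely shorthand for the exponent \(1-\frac{1}{d}\), and the fact is standard. For all \(a,b\geq 0\) with \(a+b>0\) and \(0\leq\theta\leq 1\), \[a^{\theta}+b^{\theta}=\left(\left(\frac{a}{a+b}\right)^{\theta}+\left(\frac{b}{a+b}\right)^{\theta}\right)(a+b)^{\theta}\geq\left(\frac{a}{a+b}+\frac{b}{a+b}\right)(a+b)^{\theta}=(a+b)^{\theta},\] since \(t^{\theta}\geq t\) for every \(t\in[0,1]\). Iterating the two-term inequality gives \(\sum_{j}a_{j}^{\theta}\geq\big(\sum_{j}a_{j}\big)^{\theta}\) for any finitely many \(a_{j}\geq 0\), which is the form used in the last displays of that proof.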
## 9. Discussion and open problems ### Localization of Dobrushin interface, roughening transitions and non-constant ground configurations Question 1.6 asks whether the interface formed under Dobrushin boundary conditions remains localized uniformly in the volume. This question and its variants have received significant attention in the physics literature. As an approximation to the true interface (reminiscent of the disordered SOS model (1.28) but involving further approximations), it is suggested to study the ground configurations with zero boundary values of a _real-valued_ height function \(\varphi:\mathbb{Z}^{d}\to\mathbb{R}\) whose energy is given by the formal "disordered Gaussian free field (GFF)" Hamiltonian \[H^{\mathrm{GFF},\zeta}(\varphi):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}\left(\varphi_{u}-\varphi_{v}\right)^{2}+\sum_{v\in\mathbb{Z}^{d}}\zeta_{v,\varphi_{v}} \tag{9.1}\] where \(\zeta:\mathbb{Z}^{d}\times\mathbb{R}\to\mathbb{R}\) is an environment describing the quenched disorder, which is chosen with \((\zeta_{v,\cdot})_{v}\) independent and \(t\mapsto\zeta_{v,t}\) having short-range correlations for each \(v\) (and possibly also light tails). It is predicted that this height function delocalizes with power-law fluctuations in dimensions \(d=1,2,3\), delocalizes with sub-power-law fluctuations in dimension \(d=4\) and remains localized in dimensions \(d\geq 5\). These predictions are put on a rigorous footing in the forthcoming work [4]. More precisely, it is predicted that on the cube \(\{-L,\ldots,L\}^{d}\), the height function fluctuates to height \(L^{2/3}\) in dimension \(d=1\) [11, 12, 13], to height \(L^{0.41\ldots}\) in dimension \(d=2\), to height \(L^{0.22\ldots}\) in dimension \(d=3\) [13, 14] and to height \((\log L)^{0.41\ldots}\) in dimension \(d=4\) [13]. It is predicted, however, that the model (9.1) may display different behavior when restricted to _integer-valued_ height functions. Specifically, while it is believed that the ground configurations with zero boundary values are still delocalized in dimensions \(d=1,2\), with the same power laws as the real-valued versions, and still localized in dimensions \(d\geq 5\), a change of behavior occurs for \(d=3,4\) [11, 13, 12, 14, 15]. In dimension \(d=3\) a _roughening transition_ takes place in the disorder concentration: the height function is localized for sufficiently concentrated disorder and delocalized otherwise, having logarithmic fluctuations at the critical disorder concentration and power-law fluctuations, of the same order as the real-valued version, for less concentrated disorder [13]. In contrast, it is indicated that no such transition takes place in dimension \(d=4\), where the height function is _localized_ at all disorder concentrations [13]. These predictions are also believed to govern the fluctuations of the disordered SOS model (1.28), and the Dobrushin interface of the disordered ferromagnet on \(\mathbb{Z}^{D}\), with our standard substitution \(D=d+1\). Our work justifies the fact that the Dobrushin interface of the disordered ferromagnet is localized in dimensions \(d\geq 3\) for sufficiently concentrated disorder (the analogous fact for the disordered SOS model is established in [1]). It would be very interesting to extend it to establish the predicted roughening transition in dimension \(d=3\) and the predicted localization for all disorder concentrations in dimensions \(d\geq 4\).
It would also be interesting to prove delocalization in dimension \(d=2\) (and especially to prove power-law delocalization). We expect the methods of Aizenman-Wehr [1], or their quantitative extension in [10], to be relevant in dimension \(d=2\), as was the case for the disordered SOS model [1]. Power-law delocalization in dimension \(d=1\) is proved by Licea-Newman-Piza [13] (see also [14, 15]). Related to the above, we mention that a version of the model (9.1) in which the disorder is _linear_, i.e., \(\zeta_{v,\varphi_{v}}=\bar{\zeta}_{v}\varphi_{v}\) with the \((\bar{\zeta}_{v})\) independently sampled from the Gaussian distribution \(N(0,\lambda^{2})\), is studied in [10]. The (real-valued) model is exactly solvable and behaves similarly to (9.1) in the sense that it also exhibits power-law delocalization in dimensions \(d=1,2,3\), square-root logarithmic delocalization when \(d=4\) and is localized when \(d\geq 5\). It is conjectured in [10] that the _integer-valued_ version of this model should also exhibit a roughening transition: the model should transition from a localized to a delocalized regime as \(\lambda\) increases in dimension \(d=3\) (whether this also occurs for \(d=4\) is unclear). The localization part of the transition is established in [10]. Lastly, Question 1.1 asks whether the disordered ferromagnet admits non-constant ground configurations. This is certainly the case whenever the Dobrushin interface is localized, as in this work. However, it may still be the case even when the Dobrushin interface is delocalized, as other boundary conditions on \(\{-L,\ldots,L\}^{D}\) (possibly depending on the disorder \(\eta\)) may lead to an interface passing near the origin. The fact that the predicted roughening exponent is relatively small already for \(d=2\) (the prediction there is \(\approx 0.41\)), together with the fact that there are more possibilities for the boundary conditions as \(d\) grows, leads us to believe that non-constant ground configurations will indeed exist for all \(d\geq 2\) (see [1, Section 4.5.1] for a related heuristic of Newman for the \(d=1\) case). ### Positive temperature and the random-field Ising model Our main result (Theorem 1.2) states that the disordered ferromagnet admits non-constant ground configurations in dimension \(D\geq 4\) when the coupling constants are sampled independently from a sufficiently concentrated distribution. This is achieved by proving that the interface formed under Dobrushin boundary conditions remains localized uniformly in the volume (Theorem 1.7). It is natural to ask for an extension of these results to the disordered ferromagnet at low, positive temperatures (instead of asking about non-constant ground configurations, one may ask whether there exist Gibbs states other than mixtures of the infinite-volume limits under constant boundary conditions). Such an extension is established for the disordered SOS model (1.28) by Bovier-Kulske [1] and we believe that it holds also for the disordered ferromagnet. We also expect our methods to be relevant to the proof of such a result, though one obstacle which one will need to overcome is the existence of _bubbles_ in the Ising configuration: finite islands of one sign completely surrounded by spins of the other sign. Such bubbles occur at any non-zero temperature. Additional tools from the work of Dobrushin [13], such as the use of cluster expansions and the notion of "groups of walls", may be of help here too.
Another model of interest is the _random-field_ Ising model (1.29). It is natural to wonder whether our results (and their possible low-temperature extensions) hold also for the Dobrushin interface in the random-field Ising model. On the level of a disordered SOS approximation to the Dobrushin interface, this is stated to be true, in dimensions \(D\geq 4\) and for sufficiently weak disorder, by Bovier-Kulske [10], following an earlier analysis of a hierarchical version [10]. We again believe that our methods will be relevant to the random-field Ising model case, but point out that an extra complication arising in this case compared to the disordered ferromagnet is that bubbles appear already at zero temperature. ### The set of non-constant covariant ground configurations Theorem 1.3 shows the existence of a non-constant \(G^{D-1}\)-covariant ground configuration. Additional configurations with these properties may be obtained from a given one by the following recipe: Suppose \(\eta\mapsto\sigma(\eta)\) is a non-constant \(G^{D-1}\)-covariant ground configuration. For each integer \(k\), define a configuration \(\eta\mapsto\sigma^{k}(\eta)\) by the relation \[T^{k}(\sigma^{k}(\eta)):=\sigma(T^{k}(\eta)) \tag{9.2}\] where \(T^{k}\) is the automorphism of \(\mathbb{Z}^{D}\) given by \(T^{k}(x):=x+ke_{D}\) (with \(e_{D}\) being the last coordinate vector) and the action of automorphisms on coupling fields and configurations is given by (1.8). It is straightforward that \(\eta\mapsto\sigma^{k}(\eta)\) is then also a non-constant \(G^{D-1}\)-covariant ground configuration. Suppose the coupling constants \((\eta_{\{x,y\}})\) are sampled independently from a disorder distribution which is non-atomic and has finite mean. We claim that the mappings \((\eta\mapsto\sigma^{k}(\eta))_{k}\) are all distinct (even modulo zero probability events). Indeed, to obtain a contradiction, suppose that \(\sigma^{k+m}=\sigma^{k}\) almost surely, for some integers \(k\) and \(m\neq 0\). Then also \(\sigma^{m}=\sigma\) almost surely. But this implies that \(\eta\mapsto\sigma(\eta)\) is a \(\mathbb{Z}^{D,m}\)-covariant ground configuration, where \(\mathbb{Z}^{D,m}\) is the group of translations by vectors of the form \(x=(x_{1},\ldots,x_{D})\in\mathbb{Z}^{D}\) with \(x_{D}\) divisible by \(m\). Recall that Wehr-Wasielak [21] prove that there are no non-constant \(\mathbb{Z}^{D}\)-covariant ground configurations (under the above assumptions on \(\eta\)). A minor extension of their proof also rules out non-constant \(\mathbb{Z}^{D,m}\)-covariant ground configurations, contradicting the fact that \(\sigma\) is non-constant. It is natural to ask whether there is a _unique_ family \((\eta\mapsto\sigma^{k}(\eta))_{k\in\mathbb{Z}}\) of non-constant \(G^{D-1}\)-covariant ground configurations. We believe that the answer is positive under the assumptions of Theorem 1.2. We also pose the following, somewhat related, problem. Theorem 1.9 proves that for every sequence \((\Lambda_{n})\) of finite subsets of \(\mathbb{Z}^{d}\), with \(\Lambda_{n}\supset\{-n,\ldots,n\}^{d}\) for each \(n\), it holds almost surely that \(\sigma^{\eta,\Lambda_{n},\operatorname{Dob}}\to\sigma^{\eta,\operatorname{ Dob}}\) pointwise. Are there exceptional sequences? That is, is there a _random_ sequence \((\Lambda_{n})\) of subsets of \(\mathbb{Z}^{d}\), converging to \(\mathbb{Z}^{d}\), for which, with positive probability, the pointwise convergence \(\sigma^{\eta,\Lambda_{n},\operatorname{Dob}}\to\sigma^{\eta,\operatorname{ Dob}}\) fails? 
We expect that the answer is negative under the assumptions of Theorem 1.7. ### Tilted interfaces Our work investigates the interface formed in the disordered ferromagnet's ground configuration when imposing the Dobrushin boundary conditions \(\rho^{\operatorname{Dob}}_{(v,k)}=\operatorname{sign}(k-1/2)\). It is also of interest to study the interfaces formed under other boundary conditions. For instance, one may consider "tilted Dobrushin-type boundary conditions" of the form \(\rho^{\operatorname{Dob},y}_{x}:=\operatorname{sign}(x\cdot y-1/2)\), corresponding to a flat interface orthogonal to the vector \(y\in\mathbb{Z}^{D}\) (so that \(\rho^{\operatorname{Dob}}\) corresponds to \(y=(0,\ldots,0,1)\)). In analogy with predictions for the pure Ising model, we expect that whenever \(y\) is not a multiple of one of \(e_{1},\ldots,e_{D}\) (the standard basis vectors) then the fluctuations of these tilted interfaces are of the same order as those of the real-valued disordered GFF model (9.1) discussed above, except perhaps in the critical dimension \(d=4\). In particular, they are delocalized in dimensions \(d\leq 3\) and localized in dimensions \(d\geq 5\) (a discussion of some simulation results is in [1, Section 7.2.3]). ### Higher codimension surfaces The Dobrushin interface studied in this paper may be thought of as a \(d\)-dimensional surface embedded in a \(D=d+1\) dimensional space. It is also of interest to consider surfaces of higher codimension, i.e., \(d\)-dimensional surfaces embedded in a \(D=d+n\) dimensional space. Generalizing an approach of Borgs [1], let us describe how such surfaces arise in the context of (generalized) Ising lattice gauge theories. Let \(d,n\geq 1\) and set \(D:=d+n\). An \(m\)-face in the lattice \(\mathbb{Z}^{D}\) is a subset of the form \(x+\{0,1\}^{I}\times\{0\}^{[D]\setminus I}\) for some \(x\in\mathbb{Z}^{D}\) and \(I\subset[D]\) with \(|I|=m\) (a 0-face is a vertex, a 1-face is an edge, etc.). Denote the set of \(m\)-faces of \(\mathbb{Z}^{D}\) by \(F_{m}\). We consider Ising lattice gauge theories on \((n-1)\)-faces, defined as follows. A configuration is a function \(\sigma:F_{n-1}\to\{-1,1\}\). We also define \[\sigma(f_{n}):=\prod_{\begin{subarray}{c}f_{n-1}\in F_{n-1}\\ f_{n-1}\subset f_{n}\end{subarray}}\sigma_{f_{n-1}} \tag{9.3}\] for an \(n\)-face \(f_{n}\). The formal Hamiltonian is \[H^{\text{gauge}}(\sigma):=-\sum_{f_{n}\in F_{n}}\sigma(f_{n}). \tag{9.4}\] The _defects_ of \(\sigma\) are the \(n\)-faces \(f_{n}\) satisfying \(\sigma(f_{n})=-1\). We think of the defect set as being dual to a \(d\)-dimensional surface (e.g., for \(n=1\), the case of the standard Ising model, the defects are dual to the domain walls separating \(-1\) and \(1\)). We wish to put the model under specific, Dobrushin-like, boundary conditions which will force such a surface through the volume. To this end, write vertices of \(\mathbb{Z}^{D}\) as \(x=(v,k)\) with \(v=(v_{1},\dots,v_{d})\in\mathbb{Z}^{d}\) and \(k=(k_{1},\dots,k_{n})\in\mathbb{Z}^{n}\). The Dobrushin-like boundary conditions are then \[\rho^{\text{surface}}_{f_{n-1}}:=\begin{cases}-1&f_{n-1}=(v,k)+C\text{ with }v\in\mathbb{Z}^{d},k=(k_{1},0,\dots,0)\text{ having }k_{1}\leq 0\text{ and }C=\{0\}^{[d+1]}\times\{0,1\}^{[d+n]\setminus[d+1]},\\ 1&\text{otherwise},\end{cases} \tag{9.5}\] for each \(f_{n-1}\in F_{n-1}\). 
The important fact about this choice is that its defect set is exactly the set of \(n\)-faces \(((v,0)+\{0\}^{[d]}\times\{0,1\}^{[d+n]\setminus[d]})_{v\in\mathbb{Z}^{d}}\) (other boundary conditions inducing the same defect set are also suitable). We note that \(\rho^{\text{surface}}=\rho^{\text{Dob}}\) when \(n=1\). The problem of localization is then to decide whether the surface dual to the defects of \(\sigma\) stays localized in the infinite-volume limit with \(\rho^{\text{surface}}\) boundary conditions (i.e., to show that there are defects in the neighborhood of the origin with high probability, uniformly in the volume). Borgs [1] considered the case \(d=2,n=2\) and proved that localization occurs at low temperature (this is the so-called weak coupling regime). His results apply more generally when the (gauge) group \(\{-1,1\}\) is replaced by a finite Abelian group. The result of Borgs [1] is analogous to the result of Dobrushin [2] in that he establishes the existence of a non-translation-invariant Gibbs measure for the non-disordered model. In our context, it is natural to consider a disordered version, with formal Hamiltonian \[H^{\text{gauge},\eta}(\sigma):=-\sum_{f_{n}\in F_{n}}\eta_{f_{n}}\sigma(f_{n}) \tag{9.6}\] with the \((\eta_{f_{n}})_{f_{n}\in F_{n}}\) sampled independently from a disorder distribution supported in \([0,\infty)\) (possibly satisfying additional assumptions). We highlight two special cases: (i) the case \(n=1\) is the disordered ferromagnet studied in this work, (ii) for \(d=1\), the defect surface of the finite-volume ground configuration with \(\rho^{\text{surface}}\) boundary conditions is dual to a geodesic in first-passage percolation (in finite volume) in \(\mathbb{Z}^{1+n}\). In analogy with Question 1.1 and Question 1.6 we may ask whether the disordered model admits ground configurations with non-empty defect set and whether the ground configuration with \(\rho^{\text{surface}}\) boundary conditions induces a localized surface. Regarding the second question, we believe that the localization/delocalization behavior is mostly determined by \(d\) (though the quantitative delocalization magnitude depends also on \(n\)). In particular, we expect that when \(d\geq 3\) an analogue of our results holds for each \(n\geq 1\). Regarding the first question, it seems natural that the existence of ground configurations with non-empty defect set becomes easier as \(n\) increases. We thus expect such configurations to exist (under mild assumptions on the disorder distribution) for all \(d\geq 2\) and \(n\geq 1\). For \(d=1\), the question coincides with the open problem of whether infinite bigeodesics exist in first-passage percolation on \(\mathbb{Z}^{1+n}\), where a negative answer is expected for \(n=1\) but the situation for larger \(n\) is unclear. ### More general disorder distributions Our main results are proved for non-atomic disorder distributions (though in the anisotropic setup atoms are allowed for \(\nu^{\perp}\)) with a strictly positive lower bound on their support, which are sufficiently concentrated in the sense that their "width", as defined in (1.5), is sufficiently small. Our notion of width allows either for compactly-supported distributions or distributions which are Lipschitz images of the standard Gaussian distribution. In fact, our only use of the concentration properties of the distribution is through the concentration inequality of Corollary 2.3. 
Thus, our proof may be used for more general classes of distributions satisfying a similar concentration inequality; see [10, 11]. In addition, our proof of the localization of the Dobrushin interface (Theorem 1.7) applies also when the disorder variables \((\eta_{e})\) are sampled independently from different disorder distributions, as long as the same disorder distribution is used for edges in the same "column" (\(e_{1},e_{2}\) are in the same column if \(e_{1}=e_{2}+(0,\ldots,0,k)\) for some \(k\in\mathbb{Z}\)), and our assumptions (1.11) and (1.14) (for a sufficiently small \(c_{0}>0\)) are satisfied for each pair of parallel and perpendicular distributions. The assumption that the disorder distribution is non-atomic is imposed only to ensure the uniqueness of _finite-volume_ ground configurations. We expect suitable versions of Theorem 1.7 and Theorem 4.3 to hold also in its absence, with minor adjustments to the proofs. We also expect the results of this paper to continue to hold for some classes of disorder distributions \(\nu\) with \(\min(\text{supp}(\nu))=0\). However, the assumption that \(\min(\text{supp}(\nu))>0\) is used more heavily in our proofs. ### Acknowledgements The research of M.B. and R.P. is supported by the Israel Science Foundation grant 1971/19 and by the European Research Council Consolidator grant 101002733 (Transitions). Part of this work was completed while R.P. was a Cynthia and Robert Hillas Founders' Circle Member of the Institute for Advanced Study and a visiting fellow at the Mathematics Department of Princeton University. R.P. is grateful for their support. We are grateful to Michael Aizenman, Sky Cao, Daniel S. Fisher, Reza Gheissari and David Huse for illuminating discussions. We thank Daniel Hadas and Sasha Sodin for helpful comments. ## Appendix A For every \(A\subseteq\mathbb{Z}^{d}\), let \(\tilde{\partial}A:=\{\{u,v\}\colon(u,v)\in\partial A\}\). Recall the definition of the graph \(\mathcal{G}_{d}\) from Lemma 7.14. Following [16, 1], a set \(E\subseteq E(\mathbb{Z}^{d})\) is called a _contour_ if it is connected in \(\mathcal{G}_{d}\) and there is a finite set \(A\subseteq\mathbb{Z}^{d}\) such that \(E=\tilde{\partial}A\). A contour is _primitive_ if it is not a disjoint union of two non-empty contours. Let \(\tilde{\mathbb{B}}_{d}:=\{A\subset\mathbb{Z}^{d}\colon A\text{ is finite and }\tilde{\partial}A\text{ is a primitive contour}\}\). Recall that the family of finite \(A\subset\mathbb{Z}^{d}\) such that both \(A\) and \(\mathbb{Z}^{d}\setminus A\) are connected in \(\mathbb{Z}^{d}\) was denoted \(\mathbb{B}_{d}\) in the proof of Proposition 6.3. The claim of [1, Theorem 6] is that \(|\{A\in\tilde{\mathbb{B}}_{d}\colon 0\in A,\,|\partial A|=b\}|\leq(8d)^{2b/d}\) for every (even) integer \(b\geq 2d\). This is equivalent to (6.27) in light of the following proposition. **Proposition A.1**.: \(\tilde{\mathbb{B}}_{d}=\mathbb{B}_{d}\)_._ Proof.: First note that if \(A\in\mathbb{B}_{d}\) then \(\tilde{\partial}A\) is connected in \(\mathcal{G}_{d}\), by Lemma 7.14, and hence \(\tilde{\partial}A\) is a contour. Let \(A\in\tilde{\mathbb{B}}_{d}\). Since \(A\) is finite, the set \(\mathbb{Z}^{d}\setminus A\) obviously has a unique infinite connected component. Let \(\mathcal{A}\) be the collection of connected components of \(A\) and finite connected components of \(\mathbb{Z}^{d}\setminus A\).
Define a partial order \(\preceq\) on the set \(\mathcal{A}\) as follows: \(C_{1}\preceq C_{2}\) if every path from \(C_{1}\) to \(\infty\) necessarily intersects \(C_{2}\). For every \(C\in\mathcal{A}\), let \(\bar{C}:=\cup_{S\preceq C}S\). It is easy to see that \(\tilde{\partial}A\) is the disjoint union of the sets \(\{\tilde{\partial}\bar{C}\}_{C\in\mathcal{A}}\) and that for every \(C\in\mathcal{A}\) it holds that \(\bar{C}\in\mathbb{B}_{d}\) and hence \(\tilde{\partial}\bar{C}\) is a contour. Since \(\tilde{\partial}A\) is a primitive contour, it follows that \(|\mathcal{A}|=1\) and hence, \(A\in\mathbb{B}_{d}\), as \(A\) is finite. Thus \(\tilde{\mathbb{B}}_{d}\subseteq\mathbb{B}_{d}\). By way of contradiction, assume there is \(A\in\mathbb{B}_{d}\setminus\tilde{\mathbb{B}}_{d}\). Since \(A\in\mathbb{B}_{d}\), the set \(\tilde{\partial}A\) is a contour. Therefore, since \(A\notin\tilde{\mathbb{B}}_{d}\), it follows that \(\tilde{\partial}A\) is not primitive, i.e., it is the disjoint union of two non-empty contours. In particular, there is a finite \(A_{0}\subset\mathbb{Z}^{d}\) such that \(\emptyset\neq\tilde{\partial}A_{0}\subsetneq\tilde{\partial}A\). Since \(A\) and \(A_{0}\) are both finite, the set \((\mathbb{Z}^{d}\setminus A)\cap(\mathbb{Z}^{d}\setminus A_{0})\) is infinite and in particular, non-empty. Hence, since \((\mathbb{Z}^{d}\setminus A)\) is connected and \(\tilde{\partial}A_{0}\subset\tilde{\partial}A\), it follows that \((\mathbb{Z}^{d}\setminus A)\subseteq(\mathbb{Z}^{d}\setminus A_{0})\), i.e., \(A_{0}\subseteq A\). The set \(A\cap A_{0}=A_{0}\) is non-empty, since \(\tilde{\partial}A_{0}\neq\emptyset\). Hence, since \(A\) is connected and \(\tilde{\partial}A_{0}\subset\tilde{\partial}A\), it follows that \(A_{0}\supseteq A\). Therefore, \(A_{0}=A\), in contradiction with \(\tilde{\partial}A_{0}\neq\tilde{\partial}A\). ## Appendix B In this section Lemma 1.5, Observation 4.1, Lemma 4.2, and Lemma 6.25 will be proved. The following observation will be used in the proof of Lemma 6.25 as well as Lemma 4.2. **Observation B.1**.: _Let \(\eta=f(X)\), where \(f\) is a Lipschitz function and \(X\sim N(0,1)\) is a normalized Gaussian random variable. Then \(\eta\) has exponential moments, i.e., \(\mathbb{E}(e^{\eta})<\infty\)._ Proof.: Denote by \(C>0\) a constant such that \(f\) is \(C\)-Lipschitz. Then the following holds: \[\mathbb{E}(e^{\eta}) =\frac{1}{\sqrt{2\pi}}\int_{x=-\infty}^{\infty}e^{f(x)}e^{-\frac{x^{2}}{2}}dx=\frac{1}{\sqrt{2\pi}}\int_{x=0}^{\infty}\left(e^{f(x)}+e^{f(-x)}\right)e^{-\frac{x^{2}}{2}}dx\] \[\leq\frac{e^{|f(0)|}}{\sqrt{2\pi}}\int_{x=0}^{\infty}\left(e^{|f(x)-f(0)|}+e^{|f(-x)-f(0)|}\right)e^{-\frac{x^{2}}{2}}dx\leq\frac{2e^{|f(0)|}}{\sqrt{2\pi}}\int_{x=0}^{\infty}e^{Cx}e^{-\frac{x^{2}}{2}}dx\] \[=\frac{2e^{|f(0)|+\frac{C^{2}}{2}}}{\sqrt{2\pi}}\int_{x=0}^{\infty}e^{-\frac{(x-C)^{2}}{2}}dx<\frac{2e^{|f(0)|+\frac{C^{2}}{2}}}{\sqrt{2\pi}}\int_{x=-\infty}^{\infty}e^{-\frac{(x-C)^{2}}{2}}dx=2e^{|f(0)|+\frac{C^{2}}{2}},\] where the first equality is by the definition of \(\eta\), the first inequality is by the triangle inequality, and the second inequality is by the definition of a \(C\)-Lipschitz function. **Proposition B.2**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy the condition in (1.11).
For every integer \(h\) and positive integer \(M\): \[\mathbb{P}\left(\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}\geq M\right)\leq\beta e^{-M}, \tag{B.1}\] _where \(\beta=\beta(|\Lambda|,\nu^{\parallel})\) is a constant which depends on \(|\Lambda|,\nu^{\parallel}\). In particular, for \(h=0\), it holds for every positive integer \(M\) that_ \[\mathbb{P}\left(\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq M\right)\leq\beta e^{-M/2}. \tag{B.2}\] _Consequently, by the first Borel–Cantelli lemma, the configuration \(\rho^{\mathrm{Dob}}\) has finite energy a.s. and so the ground energy is a.s. finite._ Proof.: Define \(\beta(|\Lambda|,\nu^{\parallel}):=\mathbb{E}(e^{X})^{|\Lambda|}<\infty\) for \(X\sim\nu^{\parallel}\); the expectation is finite trivially in the compactly-supported case, and by Observation B.1 in the case of a Lipschitz image of a Gaussian. By the i.i.d. nature of the disorder, the following holds: \[\mathbb{P}\left(\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}\geq M\right) =\mathbb{P}\left(e^{\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}}\geq e^{M}\right)\leq\frac{\mathbb{E}\left(e^{\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}}\right)}{e^{M}}\] \[=\frac{\prod_{v\in\Lambda}\mathbb{E}(e^{\eta_{\{(v,h),(v,h+1)\}}})}{e^{M}}=\beta e^{-M}.\qed\] The proposition below states that from the appearance of a sign change in a ground configuration at height \(h\) one may deduce a lower bound on its energy. **Proposition B.3**.: _Let \(\eta:E(\mathbb{Z}^{d+1})\to[0,\infty)\) be such that \(\eta_{e}\geq\alpha^{\perp}\) for every \(e\in E^{\perp}(\mathbb{Z}^{d+1})\). Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite and let \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\) be such that both the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\) and \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) are connected. If \((u,h)\in\Lambda\times\mathbb{Z}\) is such that \(\sigma_{(u,h)}\neq\rho^{\mathrm{Dob}}_{(u,h)}\) then \(\mathcal{H}^{\eta,\Lambda}(\sigma)\geq 2\alpha^{\perp}|h|\)._ Proof.: The proof is similar to (though considerably simpler than) the argument used to prove Lemma 4.4. With no loss of generality, assume that \(h>0\) and that \(h=\max\{k\in\mathbb{Z}\colon\sigma_{(u,k)}=-1\}\). Recall the definition of the graph \(\mathcal{G}_{d+1}\) from Lemma 7.14. Define \(\pi_{d+1}:\mathbb{Z}^{d}\times\mathbb{Z}\to\mathbb{Z}\) by \(\pi_{d+1}(u,h)=h\), and for every \(\{x,y\}\in E(\mathbb{Z}^{d+1})\), let \(\pi_{d+1}(\{x,y\})=\{\pi_{d+1}(x),\pi_{d+1}(y)\}\). Note that if \(e,\tilde{e}\in E(\mathbb{Z}^{d+1})\) are adjacent in \(\mathcal{G}_{d+1}\), then \(\pi_{d+1}(e)\subseteq\pi_{d+1}(\tilde{e})\) or \(\pi_{d+1}(e)\supseteq\pi_{d+1}(\tilde{e})\). By Lemma 7.14, the set \[\mathcal{I}(\sigma):=\{\{x,y\}\in E(\mathbb{Z}^{d+1})\colon\sigma(x)=1,\sigma(y)=-1\}\] is connected in \(\mathcal{G}_{d+1}\). In particular, there is a sequence \((\tilde{e}_{i})_{i=1}^{N}\) of edges in \(\mathcal{I}(\sigma)\) such that \(\tilde{e}_{1}=\{(u,h+1),(u,h)\}\), \(\tilde{e}_{N}=\{(w,1),(w,0)\}\) for some \(w\in\mathbb{Z}^{d}\setminus\Lambda\) and \(\tilde{e}_{i},\tilde{e}_{i+1}\) are adjacent in \(\mathcal{G}_{d+1}\) for every \(1\leq i<N\). Since \(\pi_{d+1}(\tilde{e}_{1})=\{h+1,h\}\), \(\pi_{d+1}(\tilde{e}_{N})=\{1,0\}\) and for every \(1<i\leq N\) it holds that \(\pi_{d+1}(\tilde{e}_{i-1})\subseteq\pi_{d+1}(\tilde{e}_{i})\) or \(\pi_{d+1}(\tilde{e}_{i-1})\supseteq\pi_{d+1}(\tilde{e}_{i})\), it follows that for every \(1\leq k\leq h\) there is necessarily \(1<i_{k}<N\) such that \(\pi_{d+1}(\tilde{e}_{i_{k}})=\{k\}\).
Then, for every \(1\leq k\leq h\) it holds that \(\tilde{e}_{i_{k}}\in\mathcal{I}(\sigma)\cap E^{\perp}(\mathbb{Z}^{d+1})\) and \(\tilde{e}_{i_{k}}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\), and hence \[\mathcal{H}^{\eta,\Lambda}(\sigma)=2\sum_{\begin{subarray}{c}e\in\mathcal{I}(\sigma)\\ e\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\eta_{e}\geq 2\sum_{k=1}^{h}\eta_{\tilde{e}_{i_{k}}}\geq 2h\alpha^{\perp}.\qed\] Proof of Lemma 6.25.: First notice that for any \(M>0\) \[\mathbb{P}\Big{(}G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau)\neq G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\Big{)}\] \[\leq \mathbb{P}\left(\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\right)\] \[+\mathbb{P}\left(\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\right)\] \[= 2\mathbb{P}\left(\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\right),\] where the inequality is by the union bound and the subsequent equality is by the fact that the disorder is i.i.d. and \(\tau\) is fixed. The set \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\) obviously has a unique infinite connected component for every \(\sigma\in\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\). If the set \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\) has finite connected components then flipping all signs in such a component yields a configuration in \(\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\) with smaller \(H^{\eta}\). Hence, if \(\sigma_{0}\in\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\) is a configuration minimizing \(\mathcal{H}^{\eta,\Lambda}\), then the set \(\{x\in\mathbb{Z}^{d+1}\colon\sigma_{0}(x)=1\}\) is necessarily connected, and similarly, the set \(\{y\in\mathbb{Z}^{d+1}\colon\sigma_{0}(y)=-1\}\) is connected as well. Hence, if \(\sigma_{0}\notin\Omega^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\) then \(\mathcal{H}^{\eta,\Lambda}(\sigma_{0})\geq 2\underline{\alpha}^{\perp}(M+1)\), by Proposition B.3. It follows that \[\{\eta:\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\}\\ \subseteq\{\eta:\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\geq 2\underline{\alpha}^{\perp}(M+1)\}\subseteq\{\eta:\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq 2\underline{\alpha}^{\perp}(M+1)\}\] where the second inclusion is by the definition of the ground energy, since \(\rho^{\mathrm{Dob}}\in\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\). Hence, by the above inclusion and (B.2), \[\mathbb{P}\left(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau)\neq G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\right)\leq 2\beta e^{-\underline{\alpha}^{\perp}(M+1)}\xrightarrow[M\to\infty]{}0\] as required.
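As a concrete illustration of the rate implicit in the last display (a simple rearrangement, using only the constants \(\beta\) and \(\underline{\alpha}^{\perp}\) appearing there; \(\varepsilon\) is an arbitrary tolerance parameter, not a quantity from the preceding proof), note that for any \(\varepsilon\in(0,1)\) the bound \(2\beta e^{-\underline{\alpha}^{\perp}(M+1)}\leq\varepsilon\) holds as soon as \[M\geq\frac{1}{\underline{\alpha}^{\perp}}\ln\frac{2\beta}{\varepsilon}-1,\] so the probability that truncating the domain at height \(M\) changes \(G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\) decays exponentially fast in \(M\), at rate \(\underline{\alpha}^{\perp}\).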
Proof of Observation 4.1.: By definition of \(\Omega^{\Lambda,\mathrm{Dob}}\), the configurations \(\sigma,\sigma^{\prime}\) may only differ in finitely many places, and so the difference \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is well defined and, denoting \(D=\{x:\sigma_{x}\neq\sigma_{x}^{\prime}\}\), both \[H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})=-2\sum_{\{x,y\}\in\partial D}\eta_{\{x,y\}}\sigma_{x}\sigma_{y}\] and also \[\mathcal{H}^{\eta,\Lambda}(\sigma)-\mathcal{H}^{\eta,\Lambda}(\sigma^{\prime})=\sum_{\{x,y\}}\eta_{\{x,y\}}\left(\sigma_{x}^{\prime}\sigma_{y}^{\prime}-\sigma_{x}\sigma_{y}\right)=-2\sum_{\{x,y\}\in\partial D}\eta_{\{x,y\}}\sigma_{x}\sigma_{y}.\qed\] Proof of Lemma 1.5.: Let \(M\) be an integer larger than \(\frac{1}{\underline{\alpha}^{\perp}}\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\) (which is finite by Proposition B.2). Let \(\Delta_{M}:=\Lambda\times\{-M,\ldots,M\}\). The function \(\mathcal{H}^{\eta,\Lambda}\) is well defined on the finite set \(\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\subset\Omega^{\Lambda,\mathrm{Dob}}\). Thus, there is a ground configuration \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) in \(\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(\mathcal{H}^{\eta,\Lambda}\), which is unique by condition (4.1). By Observation 4.1, \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) is also the unique ground configuration in \(\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(H^{\eta}\). Consider a configuration \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\setminus\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\). Each of the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\), \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) obviously has a unique infinite connected component. If either of the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\), \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) has a finite connected component, then flipping all signs in such a component yields a configuration \(\tilde{\sigma}\in\Omega^{\Lambda,\mathrm{Dob}}\) such that \(H^{\eta}(\sigma)-H^{\eta}(\tilde{\sigma})>0\). If neither of the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\), \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) has finite connected components, then both sets are connected and hence, by Observation 4.1 and Proposition B.3, \[H^{\eta}(\sigma)-H^{\eta}(\rho^{\mathrm{Dob}})=\mathcal{H}^{\eta,\Lambda}(\sigma)-\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq\underline{\alpha}^{\perp}(M+1)-\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})>0.\] Therefore, no \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\setminus\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\) is a ground configuration in \(\Omega^{\Lambda,\mathrm{Dob}}\) and hence \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) is also the unique ground configuration in \(\Omega^{\Lambda,\mathrm{Dob}}\). Finally, since every configuration that differs from \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) in finitely many places is necessarily in \(\Omega^{\Lambda,\mathrm{Dob}}\), it follows that \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) is also a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\); it is also almost surely unique, since almost surely no configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\setminus\Omega^{\Lambda,\mathrm{Dob}}\) is a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(H^{\eta}\), as we will prove next.
Take \(\gamma>\ln\beta(|\Lambda|,\nu^{\parallel})\) (see Proposition B.2), and let \[J:=\{j\in\mathbb{Z}:\sum_{v\in\Lambda}\eta_{\{(v,j),(v,j+1)\}}<\gamma\}.\] We will show that if the set \(J\) is infinite, then no configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\setminus\Omega^{\Lambda,\mathrm{Dob}}\) is a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(H^{\eta}\). Since the disorder \(\eta\) is an independent field, it readily follows from (B.1) that \(J\) is almost surely infinite, so this will complete the proof. Assume therefore that the set \(J\) is infinite and let \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\setminus\Omega^{\Lambda,\mathrm{Dob}}\). Then, the set \(I:=\{i\in\mathbb{Z}\colon\exists u\in\mathbb{Z}^{d}\text{ such that }\sigma_{(u,i)}\neq\rho^{\mathrm{Dob}}_{(u,i)}\}\) is infinite, and hence there are \(0\leq j_{1}<j_{2}\) or \(j_{1}<j_{2}\leq 0\) in \(J\) such that \(|I\cap[j_{1}+1,j_{2}]|>\frac{\gamma}{\underline{\alpha}^{\perp}d}\). Consider the configuration \(\tilde{\sigma}\), defined as follows: for every \(u\in\mathbb{Z}^{d}\), \(\tilde{\sigma}_{(u,i)}=\rho^{\mathrm{Dob}}_{(u,i)}\) if \(j_{1}+1\leq i\leq j_{2}\) and \(\tilde{\sigma}_{(u,i)}=\sigma_{(u,i)}\) otherwise. Clearly, \(\tilde{\sigma}\in\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\), \(\{x\in\mathbb{Z}^{d+1}\colon\tilde{\sigma}_{x}\neq\sigma_{x}\}\) is finite and \[H^{\eta}(\sigma)-H^{\eta}(\tilde{\sigma})\geq 4\underline{\alpha}^{\perp}d|I\cap[j_{1}+1,j_{2}]|-2\sum_{v\in\Lambda}\eta_{\{(v,j_{1}),(v,j_{1}+1)\}}-2\sum_{v\in\Lambda}\eta_{\{(v,j_{2}),(v,j_{2}+1)\}}\] \[> 4\gamma-4\gamma=0.\] Hence, \(\sigma\) is not a ground configuration, as desired. Proof of Lemma 4.2.: First notice that for any \(\tau\in\mathcal{S}\cap\{\tau:\max_{u\in\Lambda}|\tau(u)|=r\}\) it holds that \(\mathrm{TV}(\tau)\geq 2dr\), and so \[\{\eta:\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\}\subset\{\eta:|G^{\eta,\Lambda}(\tau)|\geq d\alpha^{\perp}r\}\] (B.3) by the definition of \((\alpha^{\parallel},\alpha^{\perp})\)-admissibility. Now use a union bound and (B.3) to get \[\mathbb{P}\left(\exists\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\colon\max_{u\in\Lambda}|\tau(u)|=r\right)\leq(2r+1)^{|\Lambda|}\max_{\begin{subarray}{c}\tau\in\mathcal{S}\\ \max_{u\in\Lambda}|\tau(u)|=r\end{subarray}}\mathbb{P}(\tau\in\mathcal{AS})\] \[\quad\leq 2(2r+1)^{|\Lambda|}\mathbb{P}\left(|G^{\eta,\Lambda}(\tau)|\geq d\alpha^{\perp}r\right)\leq 2(2r+1)^{|\Lambda|}\mathbb{P}\left(\mathrm{GE}^{\Lambda,\mathrm{Dob}}(\eta)\geq d\alpha^{\perp}r\right)\] \[\quad\leq 2(2r+1)^{|\Lambda|}\mathbb{P}\left(\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq d\alpha^{\perp}r\right)\leq 2\beta(2r+1)^{|\Lambda|}e^{-d\alpha^{\perp}r/2}\] where the first inequality is by a union bound, the second is by (B.3), the third is by the fact that the parallel disorder is i.i.d., the fourth is by the definition of ground energy, and the fifth by (B.2). 
Noticing that by the above \(\mathbb{P}\left(\exists\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\colon\max_{u\in\Lambda}|\tau(u)|=r\right)\) is summable with respect to \(r\) and applying Borel-Cantelli, one gets that with probability one only finitely many of the events \[\{\exists\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\colon\max_{u\in\Lambda}|\tau(u)|=r\}\] occur, and in particular with probability one \(|\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})|<\infty\), as required.
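A brief check of the summability invoked above (our own remark, under the exponential bound displayed in the preceding proof): for any fixed \(|\Lambda|\) and any constant \(c>0\), \[\sum_{r\geq 1}(2r+1)^{|\Lambda|}e^{-cr}<\infty,\] since the polynomial factor \((2r+1)^{|\Lambda|}\) is eventually dominated by the exponential decay, so the Borel-Cantelli lemma indeed applies.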
A disordered ferromagnet is a disordered version of the ferromagnetic Ising model with non-negative, quenched random coupling constants. A ground configuration is an infinite-volume configuration whose energy cannot be lowered by any finite modification. Proving the existence of non-constant ground configurations for the disordered ferromagnet is a long-standing challenge. For disordered ferromagnets on the lattice $\mathbb{Z}^D$ we answer this affirmatively in dimensions $D \ge 4$, when the coupling constants are sampled independently from a distribution. The obtained ground configurations are shown to be covariant under $\mathbb{Z}^{D-1}$ translations of the disorder. Our results further show that the finite-volume interfaces formed by the finite-volume boundary conditions are localized, and the infinite-volume
2309.16624
On generalised majority edge-colourings of graphs
A $\frac{1}{k}$-majority $l$-edge-colouring of a graph $G$ is a colouring of its edges with $l$ colours such that for every colour $i$ and each vertex $v$ of $G$, at most $\frac{1}{k}$'th of the edges incident with $v$ have colour $i$. We conjecture that for every integer $k\geq 2$, each graph with minimum degree $\delta\geq k^2$ is $\frac{1}{k}$-majority $(k+1)$-edge-colourable and observe that such result would be best possible. This was already known to hold for $k=2$. We support the conjecture by proving it with $2k^2$ instead of $k^2$, which confirms the right order of magnitude of the conjectured optimal lower bound for $\delta$. We at the same time improve the previously known bound of order $k^3\log k$, based on a straightforward probabilistic approach. As this technique seems not applicable towards any further improvement, we use a more direct non-random approach. We also strengthen our result, in particular substituting $2k^2$ by $(\frac{7}{4}+o(1))k^2$. Finally, we provide the proof of the conjecture itself for $k\leq 4$ and completely solve an analogous problem for the family of bipartite graphs.
Paweł Pękała, Jakub Przybyło
2023-09-28T17:24:17
http://arxiv.org/abs/2309.16624v1
# On generalised majority edge-colourings of graphs ###### Abstract A \(\frac{1}{k}\)_-majority \(l\)-edge-colouring_ of a graph \(G\) is a colouring of its edges with \(l\) colours such that for every colour \(i\) and each vertex \(v\) of \(G\), at most \(\frac{1}{k}\)'th of the edges incident with \(v\) have colour \(i\). We conjecture that for every integer \(k\geqslant 2\), each graph with minimum degree \(\delta\geqslant k^{2}\) is \(\frac{1}{k}\)-majority \((k+1)\)-edge-colourable and observe that such a result would be best possible. This was already known to hold for \(k=2\). We support the conjecture by proving it with \(2k^{2}\) instead of \(k^{2}\), which confirms the right order of magnitude of the conjectured optimal lower bound for \(\delta\). We at the same time improve the previously known bound of order \(k^{3}\log k\), based on a straightforward probabilistic approach. As this technique seems not applicable towards any further improvement, we use a more direct non-random approach. We also strengthen our result, in particular substituting \(2k^{2}\) by \((\frac{7}{4}+o(1))k^{2}\). Finally, we provide the proof of the conjecture itself for \(k\leqslant 4\) and completely solve an analogous problem for the family of bipartite graphs. keywords: majority colouring, edge majority colouring, \(\frac{1}{k}\)-majority edge-colouring + Footnote †: journal: ## 1 Introduction A _majority colouring_ of a digraph \(D\) is a colouring of the vertices of \(D\) such that for every vertex \(v\) of \(D\) at most half the out-neighbours of \(v\) receive the same colour as \(v\). This notion was first considered by Kreutzer, Oum, Seymour, van der Zypen and Wood [8], who in particular proved that every digraph has a majority \(4\)-colouring, and conjectured that \(3\) colours should always suffice. They also posed several other related problems, addressed in a few consecutive papers. In particular, in [3] Anholcer, Bosek and Grytczuk extended the result above to a list setting. Further, in [6; 7] the authors studied the problem of \(\frac{1}{k}\)-majority colourings of digraphs, that is, colourings of the vertices of a digraph in which each vertex receives the same colour as at most \(\frac{1}{k}\)'th of its out-neighbours, which is a natural generalisation, proposed already in [8]. Girão, Kittipassorn and Popielarz [6], and independently Knox and Šámal [7] proved that for each \(k\geqslant 2\), every digraph is \(\frac{1}{k}\)-majority \(2k\)-colourable, while there are digraphs requiring no less than \(2k-1\) colours. Further results and extensions can be found e.g. in [2; 3; 4]. It is worth mentioning that optimal results concerning the natural correspondents of the notions above in the environment of graphs follow by the argument of Lovász [9], published already in 1966; see also [4] for further comments and results. A _majority edge-colouring_ of a graph \(G\) is a colouring of the edges of \(G\) such that for each vertex \(v\) of \(G\), at most half of the edges incident with \(v\) have the same colour. More generally, for an integer \(k\geq 2\), a _\(\frac{1}{k}\)-majority edge-colouring_ of \(G\) is a colouring of its edges such that for every colour \(i\) and each vertex \(v\) of \(G\) at most \(\frac{1}{k}\)'th of the edges incident with \(v\) have colour \(i\). 
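To fix a concrete instance of this definition (our own illustration, not taken from the original): at a vertex \(v\) of degree \(7\), a \(\frac{1}{3}\)-majority edge-colouring may use each colour on at most \[\left\lfloor\frac{7}{3}\right\rfloor=2\] of the edges incident with \(v\), so at least \(\lceil 7/2\rceil=4\) colours must appear around such a vertex.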
One of the characteristic features of these notions, introduced recently by Bock, Kalinowski, Pardey, Pilśniak, Rautenbach and Woźniak [5], is that, unlike in the case of vertex-colourings, such edge-colourings do not exist for all graphs. In particular, for every \(k\geq 2\), graphs with vertices of degree \(1\) do not admit a \(\frac{1}{k}\)-majority edge-colouring with any number of colours. In [5] it was however proven that every graph \(G\) of minimum degree \(\delta\geq 2\) has a majority \(4\)-edge-colouring. On the other hand, the minimum degree of a graph may have significant influence on the number of colours sufficient to provide such colourings, and examining this problem seems to be the central issue in this area. The main result of [5] solves this problem for \(k=2\). **Theorem 1** ([5]).: _Every graph \(G\) of minimum degree \(\delta\geq 4\) has a majority \(3\)-edge-colouring._ This result is best possible in two respects. Firstly, \(4\) cannot be decreased, as exemplified e.g. by cubic graphs with chromatic index \(4\). Secondly, \(2\) colours are not sufficient e.g. for any graph containing an odd degree vertex. The main motivation of our research is thus the quest for a best possible extension of Theorem 1 towards all \(k\geq 3\). Note first that for any fixed \(k\geq 2\), no minimum degree constraint can guarantee the existence of a \(\frac{1}{k}\)-majority edge-colouring with at most \(k\) colours: it is enough to consider graphs containing vertices of (arbitrarily large) degrees not divisible by \(k\). We thus must admit (at least) \(k+1\) colours, and in fact the authors of [5] showed that this number of colours is sufficient (and hence, optimal) within our quest. **Theorem 2** ([5]).: _For every integer \(k\geq 2\), there exists \(\delta_{k}\) such that each graph \(G\) of minimum degree \(\delta\geq\delta_{k}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ For \(k=2\) this follows by Theorem 1, while in the remaining cases it was proven by a fairly standard application of the Lovász Local Lemma, which allowed obtaining the result above with \(\delta_{k}=\Omega(k^{3}\log k)\). We believe that much smaller values of \(\delta_{k}\) should allow obtaining the same result. Our main objective concerns finding the optimal value of \(\delta_{k}\) and can be formulated as follows. **Problem 3**.: _For every integer \(k\geq 2\), find the least value \(\delta_{k}^{\mathrm{opt}}\) such that each graph \(G\) of minimum degree \(\delta\geq\delta_{k}^{\mathrm{opt}}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ It seems one needs to use a non-probabilistic argument in order to completely solve this problem; we discuss this issue at the end of the paper. In the next section we present the main tool we shall use within our approach, and show how it allows us to solve the problem in the environment of bipartite graphs. In Section 3 we shall in turn provide order-wise tight estimations for \(\delta_{k}^{\mathrm{opt}}\) in the general case and formulate our main conjecture. In the following section we also confirm that the conjecture holds for \(k\leq 4\). The last section is devoted to a short discussion concerning our results and further perspectives. ## 2 Bipartite graphs Let us begin with a simple observation implying the lower bound for \(\delta_{k}^{\mathrm{opt}}\), also within the family of bipartite graphs. 
**Observation 4**.: _For every \(k\geqslant 2\) there exist bipartite graphs \(G\) with \(\delta(G)\geqslant k^{2}-k-1\) which are not \(\frac{1}{k}\)-majority \((k+1)\)-edge-colourable._ Proof.: Let \(G\) be a graph containing a vertex \(v\) of degree \(k^{2}-k-1\) and suppose \(G\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring. Then at most \[\left\lfloor\frac{k^{2}-k-1}{k}\right\rfloor=k-2\] edges incident with \(v\) may have the same colour, and hence at most \[(k+1)(k-2)=k^{2}-k-2<d(v)\] edges incident with \(v\) can be coloured, a contradiction. Therefore, in particular the complete bipartite graph \(K_{k^{2}-k,k^{2}-k}\) with a single edge removed is an example of a bipartite graph which is not \(\frac{1}{k}\)-majority \((k+1)\)-edge-colourable and has minimum degree \(k^{2}-k-1\). We thus must require that \(\delta(G)\geqslant k(k-1)\) in order to have a chance to show that such an assumption guarantees that \(G\) is \(\frac{1}{k}\)-majority edge-colourable with \(k+1\) colours. In [5] it was actually already proven that \(k+2\) colours are sufficient in such a case. We shall show that in fact in the case of bipartite graphs, \(k+1\) colours always suffice, which, in view of Observation 4, yields an optimal result. **Theorem 5**.: _For every integer \(k\geqslant 2\), if a bipartite graph \(G\) has minimum degree at least \(k(k-1)\), then \(G\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ In order to prove Theorem 5 we shall make use of Lemma 6 below. This was essentially proven by Alon and Wei [1]. We present a slight, yet very useful refinement of their result. We also include its full proof for the sake of completeness. We say a cycle is _odd_ if it has odd length. Moreover, cycles \(C_{1},\ldots,C_{t}\) in a graph \(G\) are called _independent_ if for any \(i\neq j\) there is no vertex of \(C_{i}\) adjacent to a vertex of \(C_{j}\) in \(G\). (Note that such cycles are in particular pairwise distinct.) **Lemma 6**.: _Let \(G=(V,E)\) be a graph, and let \(z:E\to[0,1]\) be a weight function assigning to each edge \(e\in E\) a real weight \(z(e)\) in \([0,1]\). Then there is a function \(x:E\to\{0,1\}\) assigning to each edge an integer value in \(\{0,1\}\) such that_ 1. \(\sum\limits_{e\ni v}z(e)-1<\sum\limits_{e\ni v}x(e)\leqslant\sum\limits_{e\ni v}z(e)+1\) _for every_ \(v\in V\)_;_ 2. _if_ \(\sum\limits_{e\ni u}x(e)<\sum\limits_{e\ni u}z(e)\) _and_ \(\sum\limits_{e\ni v}x(e)<\sum\limits_{e\ni v}z(e)\) _for some_ \(uv\in E\)_, then_ \(x(uv)=1\)_;_ 3. _each vertex_ \(v\) _with_ \(\sum\limits_{e\ni v}x(e)=\sum\limits_{e\ni v}z(e)+1\) _belongs to an odd cycle_ \(C_{v}\) _with_ \(\sum\limits_{e\ni u}z(e)\in\mathbb{Z}\) _for every vertex_ \(u\) _of_ \(C_{v}\)_, and moreover all such cycles are independent in_ \(G\). Proof.: For every edge \(e\) of \(G\) with \(z(e)\in\{0,1\}\) we fix \(x(e)=z(e)\); for all the remaining edges we initially set \(x(e)=z(e)\) and shall gradually modify these values. Let \(G^{\prime}\) be a subgraph of \(G\) obtained by removing all edges of \(G\) for which \(z(e)\) is an integer. Consider an incidence matrix \(M\) of \(G^{\prime}\). For every edge \(e\in E(G^{\prime})\) let \(y_{e}\) be its corresponding column in \(M\). Note that if \(G^{\prime}\) contains a closed walk of even length in which every two consecutive edges are distinct from each other, then the columns of \(M\) are linearly dependent over the reals, i.e. there exist real numbers \(\alpha_{e}\), \(e\in E(G^{\prime})\), such that \[\sum_{e\in E(G^{\prime})}\alpha_{e}y_{e}=\overline{0}\] and at least one \(\alpha_{e}\) is nonzero. 
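For a concrete instance of this observation (our own illustration, not part of the original proof), consider a cycle \(v_{1}v_{2}v_{3}v_{4}\) of length four in \(G^{\prime}\) with edges \(e_{1}=v_{1}v_{2}\), \(e_{2}=v_{2}v_{3}\), \(e_{3}=v_{3}v_{4}\), \(e_{4}=v_{4}v_{1}\); then \[y_{e_{1}}-y_{e_{2}}+y_{e_{3}}-y_{e_{4}}=\overline{0},\] since every vertex \(v_{i}\) is incident with exactly one edge taken with coefficient \(+1\) and one taken with coefficient \(-1\).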
Note that for any nonzero real number \(c\), if we modify all of the values \(x(e)\) by \(c\alpha_{e}\), then the sum \(\sum_{e\ni v}x(e)\) remains the same for all vertices \(v\), but the value of \(x(e)\) shall change for at least one edge \(e\). Therefore we can choose the value of the coefficient \(c\) so that all modified values of \(x(e)\) remain in \([0,1]\), but at least one of them becomes an integer. Remove all integer valued edges from the graph \(G^{\prime}\) and continue with the same procedure until \(G^{\prime}\) does not contain an even closed walk with no repeated consecutive edges. Note that every component of the resulting graph \(G^{\prime}\) contains at most one cycle, which has to be odd. Observe that at this point for all vertices \(v\) of \(G\) we have \[\sum_{e\ni v}x(e)=\sum_{e\ni v}z(e)\] and for all edges \(e\) of \(G^{\prime}\) the value of \(x(e)\) is in the open interval \((0,1)\). Hence, after modifying each of these values to an integer in \(\{0,1\}\), for all vertices \(v\) with \(d_{G^{\prime}}(v)\leqslant 1\) we shall have \[\sum_{e\ni v}z(e)-1<\sum_{e\ni v}x(e)<\sum_{e\ni v}z(e)+1.\] Thus, in the following step we shall focus on modifying the values of \(x(e)\) for the edges \(e\) of \(G^{\prime}\) in such a way that \[\sum_{e\ni v}x(e)=\sum_{e\ni v}z(e)\] for all vertices \(v\) with \(d_{G^{\prime}}(v)\geqslant 2\). Suppose \(G^{\prime}\) has a component containing both a vertex of degree \(1\) and a vertex of degree at least \(2\). Consider the system of linear equations \[\sum_{e\ni v}x(e)=\sum_{e\ni v}z(e)\] for all vertices \(v\) of the component which have degrees at least \(2\), where the variables are \(x(e)\) for all edges \(e\) of this component. The number of variables in this system is greater than the number of equations and the current values of \(x(e)\) form a solution, hence the solution set is infinite. We choose a solution such that all values of \(x(e)\) remain in the interval \([0,1]\) and at least one of them is an integer. We then remove integer valued edges from \(G^{\prime}\). We repeat this procedure until all components of \(G^{\prime}\) are odd cycles or isolated edges. For all isolated edges \(e\) of \(G^{\prime}\) we can then simply set \(x(e)=1\). The remaining edges \(e\) with non-integer values of \(x(e)\) induce disjoint odd cycles in \(G\). By previous arguments all vertices \(v\) of such cycles satisfy \[\sum\limits_{e\ni v}x(e)=\sum\limits_{e\ni v}z(e).\] We call an odd cycle _bad_ if \(x(e)=1/2\) for all its edges \(e\). Note that \(\sum_{e\ni v}z(e)\in\mathbb{Z}\) for every vertex \(v\) of any bad cycle in \(G^{\prime}\). We shall show that we may modify \(G^{\prime}\) so that all its bad cycles are independent in \(G\). Suppose there are bad (odd) cycles \(C_{1},C_{2}\) in \(G^{\prime}\) joined by an edge, say \(e_{0}\), in \(G\). Let \(H\) be the subgraph of \(G\) induced by \(e_{0}\) and the edges of \(C_{1},C_{2}\). Then it is straightforward to notice that there exist \(\alpha_{e}\in\{-1,1\}\), \(e\in C_{1}\cup C_{2}\), such that \[\alpha_{e_{0}}y_{e_{0}}+\sum\limits_{e\in C_{1}\cup C_{2}}\alpha_{e}y_{e}=\overline{0}\] where \(\alpha_{e_{0}}=2\). (It suffices to alternately set \(\alpha_{e}\) to \(-1\) and \(1\) along both of the cycles, starting from a vertex of \(e_{0}\).) 
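For a minimal concrete case of this identity (our own illustration): let \(C_{1}=uab\) and \(C_{2}=vcd\) be two triangles joined by the edge \(e_{0}=uv\); then \[2y_{uv}-y_{ua}+y_{ab}-y_{bu}-y_{vc}+y_{cd}-y_{dv}=\overline{0},\] since \(u\) and \(v\) each receive \(2-1-1=0\), and every other vertex receives \(1-1=0\).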
Since \(x(e)=1/2\) for \(e\in C_{1}\cup C_{2}\) and \(x(e_{0})\in\{0,1\}\), by adding \(\alpha_{e}/2\) or \(-\alpha_{e}/2\) (depending on \(x(e_{0})\)) to all \(x(e)\), \(e\in E(H)\), we shall thus not change the sum \(\sum_{e\ni v}x(e)\) at any vertex \(v\) while all edges of \(H\) shall become integer valued. We shall thus remove all these edges from \(G^{\prime}\). We repeat this procedure until all bad cycles of \(G^{\prime}\) are independent in \(G\). For each cycle \(C\) of \(G^{\prime}\) we then proceed as follows. If \(C\) is bad, we denote any of its vertices as \(v\). Then for all edges \(e\) of \(C\) we round the value of \(x(e)\) to the nearest integer with the additional restriction that if for the two edges \(e^{\prime},e^{\prime\prime}\) incident to any given vertex \(u\) in \(C\) we had \(x(e^{\prime})=1/2=x(e^{\prime\prime})\), then one of these values must be rounded down to \(0\) and the other one rounded up to \(1\) if \(u\neq v\), while we round both values up to \(1\) for \(u=v\). As a result we obtain a function \(x\) such that \(\sum\limits_{e\ni v}x(e)=\sum\limits_{e\ni v}z(e)+1\) and \[\sum\limits_{e\ni u}z(e)-1<\sum\limits_{e\ni u}x(e)<\sum\limits_{e\ni u}z(e)+1\] for all the remaining vertices \(u\) of \(C\) (other than \(v\)). Thus, since bad cycles of \(G^{\prime}\) were independent in \(G\), we obtain a function \(x:E\to\{0,1\}\) satisfying (i) and (iii). Finally, we shall show that we can further modify the function \(x\) so that it also satisfies (ii). Assume (ii) is not satisfied and there exists an edge \(uv\in E\) with \(x(uv)=0\) such that \(\sum\limits_{e\ni u}x(e)<\sum\limits_{e\ni u}z(e)\) and \(\sum\limits_{e\ni v}x(e)<\sum\limits_{e\ni v}z(e)\). If we modify the value of \(x(uv)\) to \(1\), then \(\sum\limits_{e\ni u}x(e)<\sum\limits_{e\ni u}z(e)+1\) and \(\sum\limits_{e\ni v}x(e)<\sum\limits_{e\ni v}z(e)+1\), hence (i) and (iii) are still satisfied, but there are fewer edges that contradict (ii). Therefore, we can construct in this manner a function \(x:E\to\{0,1\}\) satisfying all three conditions (i)-(iii). Proof of Theorem 5.: Let \(G=(V,E)\) be a bipartite graph with \(\delta(G)\geq k(k-1)\), \(k\geq 2\). Set \(\overline{G}_{k+1}=G\). For \(i=k+1,k,\ldots,2\) let further \(G_{i}\) be a subgraph of \(\overline{G}_{i}\) obtained via applying to it Lemma 6 with a constant weight function \(z_{i}(e)=\frac{1}{i}\) and setting \(E(G_{i})=\{e\in E(\overline{G}_{i}):x_{i}(e)=1\}\) where \(x_{i}:E(\overline{G}_{i})\to\{0,1\}\) is a function resulting from the lemma; let also \(\overline{G}_{i-1}=(V(G),E(\overline{G}_{i})\setminus E(G_{i}))\). Finally, set \(G_{1}=\overline{G}_{1}\). We shall prove that for every \(i\in\{1,\ldots,k+1\}\) and each vertex \(v\in V\) \[d_{G_{i}}(v)\leqslant\left\lfloor\frac{d_{G}(v)}{k}\right\rfloor, \tag{1}\] and thus the edges of \(G_{1},\ldots,G_{k+1}\) partition \(E\) into \(k+1\) colour classes inducing a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring of \(G\). In other words, the colouring \(c:E\rightarrow\{1,\ldots,k+1\}\) can be defined by setting \(c(e)=i\) iff \(e\in E(G_{i})\). Let \(v\) be an arbitrary vertex of \(G\) and let \[d_{G}(v)=(k+1)l+j\] where \(j\in\{0,\ldots,k\}\). Since \(d_{G}(v)\geqslant\delta(G)\geqslant k(k-1)\) we have that \(l\geqslant k-1\), or \(l=k-2\) and \(j\geqslant 2\). Hence \[\left\lfloor\frac{d_{G}(v)}{k}\right\rfloor=\left\lfloor\frac{(k+1)l+j}{k}\right\rfloor=l+\left\lfloor\frac{l+j}{k}\right\rfloor\geqslant l+1 \tag{2}\] unless \(l=k-1\) and \(j=0\). 
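A quick numerical instance of (2) (our own check): for \(k=3\) and \(d_{G}(v)=14=(k+1)\cdot 3+2\), that is \(l=3\) and \(j=2\), we get \[\left\lfloor\frac{14}{3}\right\rfloor=4=l+1,\] while in the excluded case \(l=k-1\), \(j=0\), that is \(d_{G}(v)=k^{2}-1=8\), one only gets \(\lfloor 8/3\rfloor=2=l\); this case is covered by Claim 2 below.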
Since \(G\) is bipartite, none of the graphs \(G_{i}\) contains odd cycles. Thus, Lemma 6 (iii), exploited to construct each \(G_{i}\), guarantees the following to hold. **Claim 1**.: _If \(d_{\overline{G}_{t}}(v)=td\) for some \(t\in\{1,\ldots,k+1\}\) and \(d\in\mathbb{Z}\), hence \(\sum\limits_{e\ni v}z_{t}(e)=d\), then \(d_{G_{t}}(v)=d\)._ This almost immediately yields (1) in the case when \(d_{G}(v)\) is divisible by \(k+1\). It is sufficient to apply Claim 1 \(k\) times to obtain the following. **Claim 2**.: _If \(j=0\), i.e. \(d_{G}(v)=(k+1)l\), then \(d_{G_{i}}(v)=l\) for all \(i\)._ Then, however, we have that for every \(i\in\{1,\ldots,k+1\}\), \[d_{G_{i}}(v)=l=\left\lfloor\frac{kl}{k}\right\rfloor\leqslant\left\lfloor\frac{d_{G}(v)}{k}\right\rfloor,\] and thus (1) follows. It remains to prove (1) in the case when \(j\neq 0\). By (2) it is sufficient to show that \(d_{G_{i}}(v)\leqslant l+1\) for every \(i\). This is implied by applying the following claim \(k\) times. **Claim 3**.: _If \(d_{G}(v)=(k+1)l+j\) and \(j\neq 0\), then for every \(t\in\{2,\ldots,k+1\}\), if \(d_{\overline{G}_{t}}(v)\in\{tl,tl+1,\ldots,t(l+1)\}\), then \(d_{G_{t}}(v)\in\{l,l+1\}\) and \(d_{\overline{G}_{t-1}}(v)\in\{(t-1)l,(t-1)l+1,\ldots,(t-1)(l+1)\}\)._ Proof.: Note that analogously as above, if \(d_{\overline{G}_{t}}(v)=tl\) (respectively, \(t(l+1)\)), then by Claim 1, \(d_{G_{t}}(v)=l\) (respectively, \(l+1\)) and \(d_{\overline{G}_{t-1}}(v)=(t-1)l\) (respectively, \((t-1)(l+1)\)). In the remaining cases, \(d_{G_{t}}(v)\in\{l,l+1\}\) by Lemma 6 (i), and thus \(d_{\overline{G}_{t-1}}(v)\in\{(t-1)l,(t-1)l+1,\ldots,(t-1)(l+1)\}\). ## 3 General graphs The main obstacle in obtaining a result similar to Theorem 5 for the general case, not only for bipartite graphs, is that we cannot show Claim 1 to hold any more if a graph has odd cycles, as the proof of this fact relied on Lemma 6 (iii). Actually, this inconvenience shall have much further reaching consequences than we initially suspected, and shall (most likely) prevent us from obtaining sharp results in most of the general cases. Before we discuss our upper bounds, let us however first demonstrate that it is after all not that surprising that we could not overcome this apparently sole obstacle on the way towards extending Theorem 5 to all graphs, as the upper bound from that theorem no longer holds in general. **Observation 7**.: _For every \(k\geqslant 2\) there exists a graph \(G\) with \(\delta(G)\geqslant k^{2}-1\) which is not \(\frac{1}{k}\)-majority \((k+1)\)-edge-colourable._ Proof.: For every fixed \(k\geqslant 2\) we construct a graph \(G\) as follows. We first take the complete graph on \(k^{2}+1\) vertices and remove the edges of any fixed Hamilton cycle from it. Then we add a new vertex to the constructed graph and connect it by single edges to all the remaining vertices. Note that the obtained graph \(G\) has \(k^{2}+1\) vertices of degree \(k^{2}-1\) and a single vertex of degree \(k^{2}+1\). Suppose there is a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring of \(G\). Since \(k^{2}-1=(k+1)(k-1)\) and at most \[\left\lfloor\frac{k^{2}-1}{k}\right\rfloor=k-1\] edges incident to every vertex \(v\) of degree \(k^{2}-1\) can be coloured with the same colour, for every such vertex \(v\) and for each of the \(k+1\) colours, exactly \(k-1\) edges incident to such \(v\) are coloured with this colour. 
Analogously, for the only vertex \(u\) of \(G\) with degree \(k^{2}+1=(k+1)(k-1)+2\), at most \[\left\lfloor\frac{k^{2}+1}{k}\right\rfloor=k\] edges incident to the vertex \(u\) can be coloured with the same colour, and hence exactly \(k\) edges incident to the vertex \(u\) must be coloured with some colour, say \(\alpha\). Consider the subgraph of \(G\) induced by the edges of \(G\) coloured with \(\alpha\). The sum of degrees in this subgraph equals \[(k^{2}+1)(k-1)+k,\] which is always an odd number, a contradiction. Observation 7 thus implies that \(\delta_{k}^{\mathrm{opt}}\geqslant k^{2}\) for every \(k\geqslant 2\). We conjecture that in fact \(\delta_{k}^{\mathrm{opt}}=k^{2}\) for each \(k\geqslant 2\), which holds for \(k=2\) by Theorem 1. **Conjecture 8**.: _For every integer \(k\geqslant 2\), if a graph \(G\) has minimum degree \(\delta\geqslant k^{2}\), then \(G\) is \(\frac{1}{k}\)-majority \((k+1)\)-edge-colourable._ By means of Lemma 6 we shall now show that \(\delta_{k}^{\mathrm{opt}}\) is at most twice the conjectured value. The proof of this fact shall also be an indispensable ingredient of the further slight improvement included in Theorem 11. **Theorem 9**.: _For every integer \(k\geqslant 2\), if a graph \(G\) has minimum degree \(\delta\geqslant 2k^{2}\), then \(G\) is \(\frac{1}{k}\)-majority \((k+1)\)-edge-colourable._ Proof.: Let \(G=(V,E)\) be a graph with minimum degree \(\delta\geqslant 2k^{2}\), for some fixed integer \(k\geqslant 2\). Analogously to the proof of Theorem 5, we shall use Lemma 6 in order to choose \(k\) consecutive subgraphs \(G_{1},\ldots,G_{k}\) of \(G\) and colour the edges of each \(G_{i}\) with colour \(i\). The remaining edges shall be coloured with colour \(k+1\). Set \(\overline{G}_{0}=G\). For \(i=1,\ldots,k\) let \(G_{i}\) be a subgraph of \(\overline{G}_{i-1}\) induced by the edges \(e\) with \(x_{i}(e)=1\) where \(x_{i}\) is a function resulting from applying Lemma 6 to \(\overline{G}_{i-1}\) with a constant weight function \(z_{i}:E(\overline{G}_{i-1})\rightarrow\{\alpha_{i}\}\), where the value of \(\alpha_{i}\in[0,1]\) shall be specified later; we also set \(\overline{G}_{i}=(V(G),E(\overline{G}_{i-1})\setminus E(G_{i}))\). Let \(G_{k+1}=\overline{G}_{k}\). We shall show that we may choose \(\alpha_{i}\) for \(i\in\{1,\ldots,k\}\) so that for every vertex \(v\) of \(G\) and each \(j\in\{1,\ldots,k+1\}\), \[d_{G_{j}}(v)\leqslant\frac{d_{G}(v)}{k}. \tag{3}\] By Lemma 6 (i), for every \(v\in V\): \[\alpha_{1}d_{G}(v)-1<\sum_{e\ni v}x_{1}(e)=d_{G_{1}}(v)\leqslant\alpha_{1}d_{G}(v)+1. \tag{4}\] In order to satisfy (3) it is thus sufficient to choose \(\alpha_{1}\) so that \(\alpha_{1}d_{G}(v)+1\leqslant\frac{d_{G}(v)}{k}\) for all vertices \(v\) of \(G\), that is \(\alpha_{1}\leqslant\frac{1}{k}-\frac{1}{d_{G}(v)}\). Since the function \(f(n)=\frac{1}{k}-\frac{1}{n}\) is increasing for \(n>0\), we shall achieve our goal by setting \[\alpha_{1}=\frac{\frac{\delta}{k}-1}{\delta}. \tag{5}\] Consequently, a vertex \(v\) of degree \(\delta\), which is in some sense the most restrictive case, may in theory end up with \(d_{G_{1}}(v)\) arbitrarily close to \(\frac{\delta}{k}-2\), cf. (4) and (5). Hence for \(i\in\{1,\ldots,k\}\) we in general set: \[\alpha_{i}=\frac{\frac{\delta}{k}-1}{\delta-(i-1)(\frac{\delta}{k}-2)}. \tag{6}\] We shall now formally prove that such choices of \(\alpha_{i}\) guarantee (3) to hold for all \(j\) and \(v\). Let \(v\) be an arbitrarily chosen vertex of \(G\). 
There exists \(\beta\geqslant 1\) such that \(d_{G}(v)=\beta\delta\). We shall precisely show that for every \(i\in\{1,\ldots,k\}\), \[d_{\overline{G}_{i}}(v)\leqslant\beta\left(\delta-i\left(\frac{\delta}{k}-2\right)\right) \tag{7}\] and \[d_{G_{i}}(v)\leqslant\frac{\beta\delta}{k}=\frac{d_{G}(v)}{k}. \tag{8}\] We proceed by induction with respect to \(i\). Since \[\alpha_{1}d_{G}(v)=\frac{\frac{\delta}{k}-1}{\delta}\cdot\beta\delta=\beta\left(\frac{\delta}{k}-1\right)\] and \(\beta\geqslant 1\), by (4) we obtain \[\beta\left(\frac{\delta}{k}-2\right)<d_{G_{1}}(v)\leqslant\beta\frac{\delta}{k},\] so (7) and (8) hold for \(i=1\), which yields the base case of the induction. For the induction step, assume that \(d_{\overline{G}_{i-1}}(v)=D\leqslant\beta\left(\delta-(i-1)\left(\frac{\delta}{k}-2\right)\right)\) for some \(i\leqslant k\). Note that by Lemma 6 (i), \[d_{G_{i}}(v)>\alpha_{i}D-1,\] and thus, by (6): \[d_{\overline{G}_{i}}(v) <D-(\alpha_{i}D-1)=(1-\alpha_{i})D+1\] \[\leqslant(1-\alpha_{i})\beta\left(\delta-(i-1)\left(\frac{\delta}{k}-2\right)\right)+1\] \[=\beta\left(\delta-(i-1)\left(\frac{\delta}{k}-2\right)-\left(\frac{\delta}{k}-1\right)\right)+1\] \[\leqslant\beta\left(\delta-(i-1)\left(\frac{\delta}{k}-2\right)-\left(\frac{\delta}{k}-1\right)\right)+\beta\] \[=\beta\left(\delta-i\left(\frac{\delta}{k}-2\right)\right),\] cf. (7). On the other hand, by (4) and (6): \[d_{G_{i}}(v) \leqslant\alpha_{i}D+1\leqslant\alpha_{i}\beta\left(\delta-(i-1)\left(\frac{\delta}{k}-2\right)\right)+1\] \[=\beta\left(\frac{\delta}{k}-1\right)+1\leqslant\beta\left(\frac{\delta}{k}-1\right)+\beta=\frac{\beta\delta}{k}=\frac{d_{G}(v)}{k},\] hence (8) and (7) hold (where (8) implies (3)). Finally, observe that by (7) we have: \[d_{G_{k+1}}(v)=d_{\overline{G}_{k}}(v)\leqslant\beta\left(\delta-k\left(\frac{\delta}{k}-2\right)\right)=2\beta k.\] Since \(\delta\geqslant 2k^{2}\), we obtain \(d_{G_{k+1}}(v)\leqslant\frac{\beta\delta}{k}=\frac{d_{G}(v)}{k}\), which concludes the proof of Theorem 9. Note that the bound on the minimum degree of \(G\) was only required to bound \(d_{G_{k+1}}\) in the proof of Theorem 9. This remark allows us to use almost the entire reasoning above within the proof of Theorem 11 below, which improves the general upper bound for \(\delta_{k}^{\mathrm{opt}}\). This refinement exploits the following straightforward and direct consequence of Euler's Theorem (through adding a single auxiliary vertex to a graph, if necessary). Details of its proof can be found e.g. in [5, 10] and most likely in many other papers. 
**Observation 10**.: _Let \(G\) be a connected graph._ * _If_ \(G\) _has an even number of edges or_ \(G\) _contains vertices of odd degree, then_ \(G\) _has a_ \(2\)_-edge-colouring such that for every vertex_ \(u\) _of_ \(G\)_, at most_ \(\left\lceil\frac{d_{G}(u)}{2}\right\rceil\) _of the edges incident with_ \(u\) _have the same colour._ * _If_ \(G\) _has an odd number of edges, all vertices of_ \(G\) _have even degree and_ \(u_{G}\) _is any vertex of_ \(G\)_, then_ \(G\) _has a_ \(2\)_-edge-colouring such that for every vertex_ \(u\) _of_ \(G\) _distinct from_ \(u_{G}\)_, exactly_ \(\frac{d_{G}(u)}{2}\) _of the edges incident with_ \(u\) _have the same colour, and at most_ \(\frac{d_{G}(u_{G})}{2}+1\) _of the edges incident with_ \(u_{G}\) _have the same colour._ In what follows, a _bad vertex_ shall mean a vertex of \(G\) which was chosen as the vertex \(u_{G}\) while applying Observation 10 above, that is, the vertex with exactly \(\frac{d_{G}(u_{G})}{2}+1\) incident edges coloured the same in one of the two colours. **Theorem 11**.: _Let \(k=2^{n}+m-1\geqslant 2\) where \(n\) is a positive integer and \(m\) is a nonnegative integer less than \(2^{n}\). If \(G\) is a graph with minimum degree \(\delta\geqslant\frac{3}{2}k^{2}+\frac{1}{2}km+\frac{1}{2}k\), then \(G\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ Proof.: We start by partially colouring the graph \(G=(V,E)\) with \(m\) colours, choosing \(G_{1},\ldots,G_{m}\), corresponding to colours \(1,\ldots,m\), the same way as in the proof of Theorem 9. By (8) these colours satisfy the _majority rule_, that is for every vertex \(u\), at most \(d_{G}(u)/k\) edges incident with \(u\) are coloured with each of the colours in \(\{1,\ldots,m\}\). Let \(H\) be a subgraph of \(G\) induced by the uncoloured edges. Notice that \(H=\overline{G}_{m}\) (using the notation from the proof of Theorem 9), and thus by (7), if \(v\) is a vertex of \(G\) such that \(d_{G}(v)=\beta\delta\), then \[d_{H}(v)\leqslant\beta\left(\delta-m\left(\frac{\delta}{k}-2\right)\right). \tag{9}\] We shall colour the edges of \(H\) with \(2^{n}\) new colours, namely the elements of the set \(\{0,1\}^{n}\), hence we shall be colouring these edges with binary vectors of length \(n\). For any vector \(w\in\{0,1\}^{n}\) and \(0\leqslant j\leqslant n\) we denote by \([w]_{j}\) the _prefix_ of length \(j\) of \(w\), that is the vector in \(\{0,1\}^{j}\) formed of the first \(j\) consecutive coordinates of \(w\) (where \([w]_{j}=\emptyset\) for \(j=0\)). Initially (in step \(0\)) we associate the vector \(c_{e}=(0,\ldots,0)\) to every edge \(e\) of \(H\). The vectors \(c_{e}\), \(e\in E(H)\), shall be modified one coordinate after another in \(n\) steps. In certain situations we shall however be finally fixing all the remaining coordinates of some of these vectors at once; the corresponding edges shall be called _determined_. In what follows, \(c_{e}\) shall always refer to the current value of the colour (vector) associated with an edge \(e\). Suppose for a given \(i\in\{1,\ldots,n\}\) we have completed step \(i-1\) of our construction, hence each \(c_{e}\) has the first \(i-1\) coordinates finally fixed (or all of them, for the selected determined edges \(e\in E(H)\)), and we are about to perform step \(i\). We proceed as follows. For each possible prefix \(p\in\{0,1\}^{i-1}\) we denote by \(H_{p}\) the subgraph induced in \(H\) by all not yet determined edges \(e\) with \([c_{e}]_{i-1}=p\) (after step \(i-1\)). In each such \(H_{p}\) we consider all components one after another. 
Let \(H^{\prime}\) be such a component for any fixed \(p\in\{0,1\}^{i-1}\). Let us note in advance that at most one vertex of \(H^{\prime}\) shall be chosen to be so-called _special for \(H_{p^{\prime}}\)_, where \(p^{\prime}\) is the extension of \(p\) with \(1\) added to its end (i.e. \(p^{\prime}\in\{0,1\}^{i}\), \([p^{\prime}]_{i-1}=p\) and \(p^{\prime}(i)=1\)), according to the rule specified below. * (a) If for each vertex \(v\) of \(H^{\prime}\) there exists a prefix \(q\) of \(p\) (possibly \(q=p\)) such that \(v\) is special for \(H_{q}\), then for every edge \(e\) of \(H^{\prime}\) we fix as \(0\) all the remaining (starting from the \(i\)'th one) coordinates of \(c_{e}\), hence all edges of \(H^{\prime}\) become determined. * (b) Otherwise, we use Observation 10 to temporarily colour the edges of \(H^{\prime}\) blue and red. For each edge \(e\) of \(H^{\prime}\) we fix the \(i\)'th coordinate of \(c_{e}\) as \(0\) if \(e\) is blue, and \(1\) otherwise. Moreover, if we are forced to create a bad vertex \(u_{H^{\prime}}\) (with \((d_{H^{\prime}}(u_{H^{\prime}})/2)+1\) incident edges of the same colour), then we choose it so that it was not special for \(H_{q}\) for any prefix \(q\) of \(p\) and assign colours blue and red so that \(u_{H^{\prime}}\) is incident with exactly \((d_{H^{\prime}}(u_{H^{\prime}})/2)+1\) red edges; we then also choose \(u_{H^{\prime}}\) to be special for \(H_{p^{\prime}}\) where \(p^{\prime}\) is the extension of \(p\) with \(1\) added to its end. After going through all \(n\) steps described above, we complete a \((k+1)\)-edge-colouring of the graph \(G\). It remains to show that all new colours satisfy the \(\frac{1}{k}\)-majority rule. Consider a vertex \(v\in V\) and any fixed colour \(\alpha\in\{0,1\}^{n}\). Let us denote by \(G_{\alpha}\) the subgraph induced in \(G\) (in fact in \(H\)) by all the edges coloured with \(\alpha\). Suppose first that \(v\) is incident with some edge \(e\in E(H)\) which was coloured (determined) with a colour \(\alpha\) according to Rule (a) above, i.e. at a certain iteration \(i\), the edge \(e\) belonged to a component \(H^{\prime}\) of some \(H_{p}\) with all vertices being special for some \(H_{q}\) where \(q\) is a prefix of \(p\). Note however that by our construction, \(H^{\prime}\) must have been a (connected) subgraph of every such \(H_{q}\), and thus for each such \(H_{q}\) at most one vertex of \(H^{\prime}\) might have been chosen to be special for \(H_{q}\). Consequently, as \(p\) has no more than \(n\) distinct prefixes, \(H^{\prime}\) must have contained at most \(n\) vertices. Hence for each of its vertices, in particular \(v\), \[d_{G_{\alpha}}(v)<n\leqslant k\leqslant\frac{d_{G}(v)}{k}.\] Assume in turn that every edge \(e\in E(G_{\alpha})\) incident with \(v\) was coloured by means of Rule (b) exclusively. This rule was thus utilised \(n\) times in order to settle all edges incident with \(v\) coloured \(\alpha\), each time via application of Observation 10 to a component \(H^{\prime}\) (containing \(v\)) of some \(H_{p}\), where \(p\) is a prefix of \(\alpha\). Suppose \(v\) had degree \(d\) in such \(H^{\prime}\), say in iteration \(i\). 
If \(v\) was chosen to be special for \(H_{p}\), which could happen only once during \(n\) steps of our construction (for prefixes \(p\) of the fixed \(\alpha\)), then at most \(\frac{d}{2}+1\) edges incident with \(v\) got the first \(i\) coordinates of their colours fixed consistently with \(\alpha\) after step \(i\); otherwise the number of such edges is bounded above by \(\frac{d}{2}+\frac{1}{2}\), cf. Observation 10. Only such edges retained the chance to belong to \(G_{\alpha}\). In order to estimate the maximum number of edges incident with \(v\) which eventually could be coloured \(\alpha\) let us thus consider the following two functions. Let \(f(d)=\frac{d}{2}+\frac{1}{2}\) and \(g(d)=\frac{d}{2}+1\). By the observations above, \(d_{G_{\alpha}}(v)\) is bounded above by the maximum of \(f^{n}(d)\) and \(f^{i}(g(f^{j}(d)))\) for all natural numbers \(i\) and \(j\) such that \(i+j=n-1\) where \(d=d_{H}(v)\). Since \(f(d)<g(d)\) for all \(d\), the value of \(f^{i}(g(f^{j}(d)))\) is greater than \(f^{n}(d)\) for all \(i\) and \(j\) satisfying \(i+j=n-1\). We shall prove the following upper bound. **Claim 4**.: _The inequality \(f^{i}(g(f^{j}(d)))\leqslant\frac{d-1}{2^{n}}+\frac{3}{2}\) holds for all \(d\) and all natural numbers \(i\) and \(j\) such that \(i+j=n-1\)._ Proof of Claim 4.: We begin by proving that \[f^{i}(d)=\frac{d-1}{2^{i}}+1\] holds for any nonnegative integer \(i\). We proceed by induction with respect to \(i\). Clearly \(f^{0}(d)=d=\frac{d-1}{2^{0}}+1.\) Assume that \(f^{j}(d)=\frac{d-1}{2^{j}}+1\) holds for all \(j<i.\) Thus, \[f^{i}(d)=f(f^{i-1}(d))=f\left(\frac{d-1}{2^{i-1}}+1\right)=\frac{\frac{d-1}{2^{i-1}}+1}{2}+\frac{1}{2}=\frac{d-1}{2^{i}}+1.\] Finally, for any fixed \(i\) and \(j\) such that \(i+j=n-1,\) we have \[f^{i}(g(f^{j}(d))) =f^{i}\left(g\left(\frac{d-1}{2^{j}}+1\right)\right)=f^{i}\left(\frac{\frac{d-1}{2^{j}}+1}{2}+1\right)\] \[=f^{i}\left(\frac{d-1}{2^{j+1}}+\frac{3}{2}\right)=\frac{\frac{d-1}{2^{j+1}}+\frac{3}{2}-1}{2^{i}}+1=\frac{d-1}{2^{n}}+\frac{1}{2^{i+1}}+1.\] The value of \(f^{i}(g(f^{j}(d)))\) is greatest when \(i=0,\) and thus \(f^{i}(g(f^{j}(d)))\leqslant\frac{d-1}{2^{n}}+\frac{3}{2}.\) By Claim 4 and the discussion above we obtain that \[d_{G_{\alpha}}(v)\leqslant\frac{d_{H}(v)-1}{2^{n}}+\frac{3}{2}. \tag{10}\] It remains to show that if \(d_{G}(v)=\beta\delta,\) then \(d_{G_{\alpha}}(v)\leqslant\frac{\beta\delta}{k}.\) Recall that by (9), \(d_{H}(v)\leqslant\beta\left(\delta-m\left(\frac{\delta}{k}-2\right)\right).\) This combined with (10) yields the following, where we make use of the facts that \(k=2^{n}+m-1,\)\(\delta\geqslant\frac{3}{2}k^{2}+\frac{1}{2}km+\frac{1}{2}k\) and \(\beta\geqslant 1\): \[d_{G_{\alpha}}(v) \leqslant\frac{\beta\left(\delta-m\left(\frac{\delta}{k}-2\right)\right)-1}{2^{n}}+\frac{3}{2}\] \[=\frac{\beta\delta\left(1-\frac{m}{k}\right)+2\beta m-1}{2^{n}}+\frac{3}{2}\] \[=\frac{\beta\delta}{k}+\frac{2\beta m-1-\frac{\beta\delta}{k}}{2^{n}}+\frac{3}{2}\] \[\leqslant\frac{\beta\delta}{k}+\frac{2\beta m-1-\beta\left(\frac{3}{2}k+\frac{1}{2}m+\frac{1}{2}\right)}{2^{n}}+\frac{3}{2}\] \[=\frac{\beta\delta}{k}+\frac{\beta\left(-\frac{3}{2}k+\frac{3}{2}m-\frac{1}{2}\right)-1}{2^{n}}+\frac{3}{2}\] \[=\frac{\beta\delta}{k}+\frac{\beta\left(-\frac{3}{2}\left(2^{n}-1\right)-\frac{1}{2}\right)-1}{2^{n}}+\frac{3}{2}\] \[=\frac{\beta\delta}{k}-\frac{3}{2}\beta+\frac{\beta-1}{2^{n}}+\frac{3}{2}\] \[=\frac{\beta\delta}{k}+(1-\beta)\left(\frac{3}{2}-\frac{1}{2^{n}}\right)\] \[\leqslant\frac{\beta\delta}{k}=\frac{d_{G}(v)}{k}.\] This concludes the proof of Theorem 11. 
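To illustrate the final estimate with concrete numbers (our own check, not part of the original proof): take \(n=2\) and \(m=1\), so \(k=4\) and \(\frac{3}{2}k^{2}+\frac{1}{2}km+\frac{1}{2}k=28\); for \(\beta=1\) and \(\delta=28\), \[d_{H}(v)\leqslant\delta-m\left(\frac{\delta}{k}-2\right)=28-5=23\qquad\text{and}\qquad d_{G_{\alpha}}(v)\leqslant\frac{23-1}{2^{2}}+\frac{3}{2}=7=\frac{\delta}{k},\] so the \(\frac{1}{k}\)-majority requirement is met exactly in this boundary case.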
Note that the formula for \(k\) used in Theorem 11 implies that \(m\) is always bounded above by \(\frac{k}{2}\), and thus we immediately obtain the following corollary. **Corollary 12**.: _For every integer \(k\geqslant 2\), if a graph \(G\) has minimum degree \(\delta\geqslant\frac{7}{4}k^{2}+\frac{1}{2}k\), then \(G\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ ## 4 Confirmation of Conjecture 8 for initial values of \(k\) The main result of [5] confirms Conjecture 8 for \(k=2\). In this section we shall extend this result towards the two following values of \(k\). To achieve this we shall use the two observations below. **Observation 13**.: _Let \(k\geqslant 2\) be an integer. If every graph with minimum degree \(\delta\geqslant k^{2}\) and maximum degree \(\Delta<2k^{2}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring, then every graph with minimum degree at least \(k^{2}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ Proof.: Let \(G\) be an arbitrary graph with minimum degree at least \(k^{2}\). If the maximum degree of \(G\) is less than \(2k^{2}\) then by assumption it has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring. Otherwise, let \(v\) be a vertex of \(G\) such that \(d_{G}(v)\geqslant 2k^{2}\). There exist unique integers \(n\) and \(d\) such that \(d_{G}(v)=nk^{2}+d\) and \(k^{2}\leqslant d<2k^{2}\). Partition the neighbourhood of \(v\) into \(n+1\) disjoint sets \(N_{0},\ldots,N_{n}\) such that \(|N_{0}|=d\) and \(|N_{i}|=k^{2}\) for \(i\geqslant 1\). Let \(H\) be a graph such that \(V(H)=(V(G)\setminus\{v\})\cup\{v_{0},v_{1},\ldots,v_{n}\}\) and \(E(H)=E(G-v)\cup\bigcup\limits_{i=0}^{n}\{uv_{i}:u\in N_{i}\}\). Note that this operation yields a natural bijection between the edges of \(G\) and the edges of \(H\). Let \(\overline{G}\) be a graph constructed from \(G\) by applying the above operation to all vertices of \(G\) with degree at least \(2k^{2}\). By construction, the maximum degree of \(\overline{G}\) is less than \(2k^{2}\), hence \(\overline{G}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring, which yields a \((k+1)\)-edge-colouring of \(G\). It remains to prove that this is also a \(\frac{1}{k}\)-majority edge-colouring of \(G\). Let \(v\) be a vertex of \(G\) with degree \(d_{G}(v)=nk^{2}+d\) where \(k^{2}\leqslant d<2k^{2}\) (with \(n\) possibly equal to \(0\)). The number of edges incident with \(v\) coloured with the same colour is bounded above by \[n\left\lfloor\frac{k^{2}}{k}\right\rfloor+\left\lfloor\frac{d}{k}\right\rfloor=nk+\left\lfloor\frac{d}{k}\right\rfloor=\left\lfloor\frac{nk^{2}+d}{k}\right\rfloor=\left\lfloor\frac{d_{G}(v)}{k}\right\rfloor,\] hence the colouring of \(G\) is indeed a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring. **Observation 14**.: _For every integer \(k\geqslant 2\), let \(S_{k}\) be the set of all integers \(i\) between \(k^{2}\) and \(2k^{2}\) such that \(i\equiv k-1\pmod{k}\). Let \(\mathcal{G}_{k}\) be the set of all graphs for which the degrees of all vertices are in the set \(S_{k}\). If every graph in \(\mathcal{G}_{k}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring, then every graph with minimum degree at least \(k^{2}\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring._ Proof.: By Observation 13 it is sufficient to consider graphs with maximum degree less than \(2k^{2}\). Let \(G\) be an arbitrary graph with minimum degree \(\delta\geqslant k^{2}\) and maximum degree \(\Delta<2k^{2}\). 
If all vertices of \(G\) have degrees in the set \(S_{k}\), then by assumption \(G\) has a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring. Otherwise, let \(H\) be a graph constructed from \(G\) by taking two copies of \(G\) and joining by edges the vertices of \(G\) which do not have degrees in the set \(S_{k}\) with their corresponding counterparts in the second copy of \(G\). Note that \(G\) is a subgraph of \(H\). Moreover, for every vertex \(v\) of \(G\), either \(d_{H}(v)=d_{G}(v)\in S_{k}\) or \(d_{G}(v)\equiv i\pmod{k}\) and \(d_{H}(v)\equiv i+1\pmod{k}\) (and the same holds for the vertices in the second copy of \(G\)). We repeat this operation until all vertices of the obtained graph \(\overline{G}\) have degrees in the set \(S_{k}\). Note that the degree of each vertex \(v\) of \(G\) increased by at most \(k-1\) and, more importantly, \[\left\lfloor\frac{d_{\overline{G}}(v)}{k}\right\rfloor=\left\lfloor\frac{d_{G}(v)}{k}\right\rfloor. \tag{11}\] Since the maximum degree of \(G\) is at most \(2k^{2}-1\equiv k-1\pmod{k}\), the maximum degree of \(\overline{G}\) is also less than \(2k^{2}\), hence \(\overline{G}\in\mathcal{G}_{k}\). By our assumption, there is a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring \(c\) of \(\overline{G}\). Since \(G\) is a subgraph of \(\overline{G}\), by (11), the colouring \(c\) restricted to the edges of \(G\) yields a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring of \(G\). Observations 13 and 14 allow us to narrow down the set of graphs we need to consider in order to prove Conjecture 8. To start with, we exemplify their usefulness by reproving Theorem 1, whose proof in [5] is rather lengthy. The tools and observations introduced above yield a short and straightforward argument. Proof of Theorem 1.: Let \(G\) be an arbitrary graph with minimum degree \(\delta\geqslant 4\). By Observation 14, we can assume that the degrees of all vertices of \(G\) are in the set \(S_{2}=\{5,7\}\). Let \(D_{2}\) be the set of vertices of degree \(5\), and \(D_{3}\) the set of vertices of degree \(7\). Vertices in \(D_{2}\) can have at most \(2\) incident edges of the same colour, and vertices of \(D_{3}\) can have at most \(3\) such edges. We shall construct a majority \(3\)-edge-colouring of \(G\) in two steps. First, use Lemma 6 with a weight function assigning \(1/3\) to every edge of \(G\) to colour some edges of \(G\) with one of the three colours (similarly as in the proof of Theorem 9). Vertices in \(D_{2}\) have \(1\) or \(2\) edges coloured, and vertices in \(D_{3}\) have \(2\) or \(3\). Let \(H\) be the graph induced by the uncoloured edges of \(G\). Vertices in \(D_{2}\) have degrees \(3\) or \(4\) in \(H\), and vertices in \(D_{3}\) have degrees \(4\) or \(5\). Finally, use Observation 10 to colour the edges of \(H\) with the remaining two colours. Note that every component of \(H\) either has vertices of odd degree or all of its vertices have degree \(4\) and thus such a component has an even number of edges. Hence, by Observation 10, at most \(\left\lceil\frac{d_{H}(u)}{2}\right\rceil\) of the edges incident with any given vertex \(u\) shall get the same colour, which satisfies the majority rule for the graph \(G\). **Theorem 15**.: _Every graph with minimum degree \(\delta\geqslant 9\) has a \(\frac{1}{3}\)-majority 4-edge-colouring._ Proof.: Let \(G\) be a graph with minimum degree \(\delta\geqslant 9\). By Observation 14, we can assume that the degrees of all vertices of \(G\) are in the set \(S_{3}=\{11,14,17\}\). 
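As a quick sanity check of the degree sets \(S_{k}\) provided by Observation 14 (a small script of our own, not part of the argument):

```python
# Check the sets S_k from Observation 14: integers i with
# k^2 <= i < 2k^2 and i % k == k - 1 (i.e. i congruent to k-1 mod k).
def degree_set(k):
    return [i for i in range(k * k, 2 * k * k) if i % k == k - 1]

for k in (2, 3, 4):
    print(k, degree_set(k))
# prints: 2 [5, 7]; 3 [11, 14, 17]; 4 [19, 23, 27, 31]
```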
Similarly as before, let \(D_{3}\) be the set of vertices of degree \(11\) (which can have at most \(3\) incident edges with the same colour), \(D_{4}\) be the set of vertices of degree \(14\) (allowing \(4\) incident monochromatic edges), and \(D_{5}\) the set of vertices of degree \(17\) (allowing \(5\) incident edges with the same colour). Let \(G^{\prime}\) be a graph constructed from the graph \(G\) by removing all components which have all vertices of degree \(14\) and an odd number of edges. Hence, each component of \(G^{\prime}\) has an even number of edges or contains vertices of odd degree. Colour the edges of \(G^{\prime}\) using Observation 10 with colours blue and red. The number of incident edges with the same colour shall equal \(5\) or \(6\) for vertices in \(D_{3}\), \(7\) for vertices in \(D_{4}\), and \(8\) or \(9\) for vertices in \(D_{5}\). We shall show that in fact we can choose a \(2\)-edge-colouring of \(G^{\prime}\) complying with Observation 10 such that neither of the two colours induces a component with an odd number of edges and all vertices of degree \(6\). Assume this is not the case and consider a \(2\)-edge-colouring of \(G^{\prime}\) consistent with Observation 10 with the least number of such bad components. Without loss of generality we can assume that there exists such a bad component, say \(H\), in the graph induced by the blue edges. Notice that \(H\) is in particular Eulerian, and thus it is \(2\)-edge-connected. Clearly, all the vertices in \(H\) are in the set \(D_{3}\) and have exactly \(5\) red incident edges (in \(G^{\prime}\)). Let \(v\) be an arbitrary vertex of \(H\), and let \(u_{1},u_{2}\) be any two distinct neighbours of \(v\) in \(H\). Consider the component of the subgraph of \(G^{\prime}\) induced by the red edges which contains \(v\), and denote it \(H^{\prime}\). If \(u_{1}\) is not in the same (red) component as \(v\), then we can recolour the edge \(u_{1}v\) with red colour. In such a case, \(H\) shall no longer have exclusively vertices of degree \(6\), and no new \(6\)-regular monochromatic component shall be created, since at least one vertex in \(H^{\prime}\) other than \(u_{1}\) needs to have odd degree. We proceed similarly if \(u_{1}\) is in the same red component as \(v\), but \(u_{2}\) is not. If both \(u_{1}\) and \(u_{2}\) are in the same red component as \(v\), then we can recolour the edge \(u_{1}v\) with red colour. Then, neither \(H\) nor \(H^{\prime}\) shall be a \(6\)-regular component. Hence, in each case, the number of monochromatic components with an odd number of edges and all vertices of degree \(6\) can be decreased, which is in contradiction with the assumption that our colouring had the least possible number of bad components. As a result, both in the graph induced by the red edges and in the graph induced by the blue edges each component has an even number of edges or contains vertices of odd degree or contains a vertex of degree \(8\). Hence, we can again use Observation 10 (separately for graphs induced by both of the colours), choosing a vertex of degree \(8\) as the bad vertex if necessary. The \(4\)-edge-colouring obtained this way satisfies the \(\frac{1}{3}\)-majority rule for the graph \(G^{\prime}\). Finally, consider components of \(G\) with all vertices of degree \(14\) and an odd number of edges. 
Using Observation 10 we obtain a \(2\)-edge-colouring of such components with colours red and blue, such that in the subgraph generated by the edges of one of the colours all vertices shall have degree \(7\), except one vertex of degree \(6\) or \(8\). In either case, there shall be a vertex of odd degree in each of the obtained monochromatic components, hence using again Observation 10 (and merging the result with the colouring of \(G^{\prime}\)) yields a \(\frac{1}{3}\)-majority \(4\)-edge-colouring of \(G\). **Theorem 16**.: _Every graph with minimum degree \(\delta\geqslant 16\) has a \(\frac{1}{4}\)-majority \(5\)-edge-colouring._ Proof.: Let \(G\) be a graph with minimum degree \(\delta\geqslant 16\). By Observation 14, we can assume that the degrees of all vertices of \(G\) are in the set \(S_{4}=\{19,23,27,31\}\). Let \(D_{4}\) be the set of vertices of degree \(19\) (which can have at most \(4\) incident edges with the same colour), \(D_{5}\) be the set of vertices of degree \(23\) (allowing \(5\) incident monochromatic edges), \(D_{6}\) be the set of vertices of degree \(27\) (allowing \(6\) incident edges with the same colour), and \(D_{7}\) the set of vertices of degree \(31\) (allowing \(7\) incident monochromatic edges). First, use Lemma 6 with a weight function assigning \(1/5\) to all edges of \(G\) to colour some edges of \(G\) with colour \(1\). As a result, every vertex \(v\) of \(G\) has either \(\lfloor\frac{d(v)}{5}\rfloor\) or \(\lceil\frac{d(v)}{5}\rceil\) incident edges coloured \(1\). As none of the vertices has degree divisible by \(5\), these two values are distinct, and by Lemma 6 (ii), every edge \(uv\) with exactly \(\lfloor\frac{d(u)}{5}\rfloor\) edges incident with \(u\) coloured \(1\) and exactly \(\lfloor\frac{d(v)}{5}\rfloor\) edges incident with \(v\) coloured \(1\) must be coloured \(1\) as well. Let \(H\) be the subgraph of \(G\) induced by the uncoloured edges. Vertices in \(D_{4}\) have degrees in \(\{15,16\}\) in \(H\), vertices in \(D_{5}\) - degrees in \(\{18,19\}\), vertices in \(D_{6}\) - degrees in \(\{21,22\}\), and vertices in \(D_{7}\) - degrees in \(\{24,25\}\). Using Observation 10, divide the graph \(H\) into two subgraphs \(H_{1}\) (coloured blue) and \(H_{2}\) (coloured red), choosing vertices of degree \(18\) or \(22\) as the bad vertices, if necessary. In the components of the graphs \(H_{1}\) and \(H_{2}\) we have the following situation: vertices in \(D_{4}\) have degrees in \(\{7,8\}\), vertices in \(D_{5}\) have degrees in \(\{9,10\}\) (and possibly a single vertex has degree \(8\)), vertices in \(D_{6}\) have degrees in \(\{10,11\}\) (and possibly a single vertex has degree \(12\)), and vertices in \(D_{7}\) have degrees in \(\{12,13\}\). Notice that if a vertex in \(D_{6}\) has degree \(12\), then in the graph \(H\), it had to be in a component with no vertex of odd degree, hence in the graphs \(H_{1}\) and \(H_{2}\), this vertex cannot be in the same component as any vertex of degree \(10\). We shall show that we can recolour the graph \(H\) (retaining the conditions mentioned above) in such a way that neither \(H_{1}\) nor \(H_{2}\) contains a component with an odd number of edges whose every vertex either has degree \(10\) and belongs to \(D_{5}\) or has degree \(8\) and belongs to \(D_{4}\). Assume this is not possible and consider a colouring with the least number of such components. Let \(\overline{H}\) be one of these components. 
Since the number of edges of \(\overline{H}\) is odd, at least one of the vertices in \(\overline{H}\) must have degree \(10\). Let \(v\) be a vertex of degree \(10\) in \(\overline{H}\) and let \(u_{1}\) and \(u_{2}\) be two distinct neighbours of \(v\) in \(\overline{H}\). If \(v\) is the only vertex of degree \(10\) in \(\overline{H}\) and \(d_{H}(v)=18\), then \(v\) is a bad vertex and for all the remaining vertices \(u\) of \(\overline{H}\) we must have \(d_{H}(u)=16\), i.e. exactly \(\lfloor\frac{d(u)}{5}\rfloor\) edges incident with \(u\) were coloured \(1\), and two such vertices must be adjacent in \(\overline{H}\) (hence also in \(H\)), which is impossible by the last remark of the first paragraph of the proof. We may thus assume that \(d_{H}(v)=19\), i.e. exactly \(\lfloor\frac{d(v)}{5}\rfloor\) edges incident with \(v\) were coloured \(1\), and hence, again by the last remark of the first paragraph of the proof, \(u_{1}\) and \(u_{2}\) must be vertices of degree \(8\) in \(\overline{H}\) and \(15\) in \(H\). Thus, proceeding analogously as in the proof of Theorem 15 we can obtain a colouring of \(H\) with a smaller number of bad components, a contradiction. Hence, each component of \(H_{1}\) and \(H_{2}\) contains vertices of odd degree or has an even number of edges or contains a vertex of degree \(10\) belonging to \(D_{6}\) or a vertex of degree \(12\) belonging to \(D_{7}\). Thus, using Observation 10 (with one of the mentioned vertices being chosen as the bad vertex, if necessary) we can obtain a \(4\)-edge-colouring of \(H\), which completes a \(\frac{1}{4}\)-majority \(5\)-edge-colouring of \(G\). ## 5 Concluding remarks Theorem 9 and the construction in Observation 7 imply that we managed to settle the order of magnitude of our main objective \(\delta_{k}^{\mathrm{opt}}\), and to approximate it within a multiplicative factor of \(2\). Our Conjecture 8 clearly conveys that we expect that \(2\) is a redundant factor in our \(2k^{2}\) upper bound for \(\delta_{k}^{\mathrm{opt}}\). In fact, Corollary 12 shows that the leading term in this bound should not be larger than \(\frac{7}{4}k^{2}\). Moreover, Theorem 11 also implies that there is e.g. an infinite sequence of values of \(k\) for which \(\delta_{k}^{\mathrm{opt}}\leqslant(\frac{3}{2}+o(1))k^{2}\). On the other hand, even though we were able to confirm the conjecture for several initial values of \(k\) in Section 4, we are not entirely convinced that the postulated quantity of \(\delta_{k}^{\mathrm{opt}}\) has to be precisely correct for all \(k\). One may possibly come up with some more sophisticated construction than the one in Observation 7, and this seems an interesting direction to be more thoroughly investigated. However, we would not expect the lower bound stemming from such a potential construction to exceed \(k^{2}\) by much. In any case we strongly expect an upper bound of the form \((1+o(1))k^{2}\) to be valid for \(\delta_{k}^{\mathrm{opt}}\). Recall that Observation 7 implies that we cannot directly extend to all graphs our Theorem 5, yielding an optimal solution for the family of bipartite graphs. However, as mentioned, the main obstacle on the way towards obtaining some form of such an extension was the lack of a counterpart of Claim 1 from the proof of Theorem 5 that would be valid in the general setting, where in some sense it allowed us to control and 'capture' degrees of consecutively constructed subgraphs of a given bipartite graph within a reasonably narrow interval. In fact, in pursuit of such a counterpart we came up with our refinement of the lemma of Alon and Wei [1], that is Lemma 6. 
Even though some aspects of this slight improvement were useful and even crucial in the case of bipartite graphs, we did not exploit it in full measure, while at the same time it was not strong enough to provide the result we expect in the general case. Nevertheless, we decided to include this slightly excessive form of Lemma 6 in our paper as a suggestion for possible further development of this tool, which might hopefully lead to a solution of Conjecture 8, or at least help close the current gap. Let us also mention that even solving our Problem 3 for the first open case of \(k=5\) (and perhaps a few consecutive initial ones) seems interesting in itself, and may furthermore shed light on a possible approach to attacking Conjecture 8 in its entirety. Finally, let us remark why we believe the probabilistic approach is difficult to utilise (directly) in a proof of Conjecture 8. If a graph has a vertex \(v\) of degree (close to) \(k^{2}\), then upon colouring its edges randomly with \(k+1\) colours we expect every colour to appear roughly \(d(v)/(k+1)>k-1\) times around \(v\) (and in fact some colours must appear at least this many times), while we admit at most \(\lfloor d(v)/k\rfloor\leqslant k\) appearances of each colour. Thus in a sense we admit an error of at most \(1\) in the frequency of each colour, which does not seem achievable via a probabilistic approach, as e.g. typical concentration tools require admitting an error slightly larger than \(\sqrt{d(v)/k}\) (which suffices as long as \(d(v)\) is of magnitude roughly \(k^{3}\log k\)). This is also why we reckon that our rather naive approach is surprisingly efficient.
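To make the last estimate explicit, here is the standard back-of-the-envelope calculation behind it (our own illustration, using a Chernoff-type bound). If the edges at a vertex \(v\) of degree \(d=d(v)\) are coloured independently with \(k+1\) colours, the number of edges of colour \(i\) at \(v\) has mean \(\mu=\frac{d}{k+1}\), while the admissible excess over this mean is only
\[\left\lfloor\frac{d}{k}\right\rfloor-\frac{d}{k+1}\approx\frac{d}{k(k+1)}.\]
A Chernoff-type bound controls deviations of size \(t\) with probability roughly \(\exp(-t^{2}/(3\mu))\), so after a union bound (or an application of the Local Lemma) one needs
\[\frac{d}{k(k+1)}\gtrsim\sqrt{\frac{d}{k+1}\log(dk)},\qquad\text{i.e.}\qquad d\gtrsim k^{2}(k+1)\log(dk),\]
which is precisely the regime of \(d(v)\) of magnitude roughly \(k^{3}\log k\) mentioned above, far beyond the conjectured threshold \(k^{2}\).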
A \(\frac{1}{k}\)-majority \(l\)-edge-colouring of a graph \(G\) is a colouring of its edges with \(l\) colours such that, for every vertex \(v\) of \(G\) and every colour \(i\), at most a \(\frac{1}{k}\) fraction of the edges incident with \(v\) are coloured \(i\). We conjecture that for every integer \(k\geq 2\), every graph with minimum degree \(\delta\geq k^{2}\) admits a \(\frac{1}{k}\)-majority \((k+1)\)-edge-colouring, and we show that this would be optimal; the case \(k=2\) was already known. Towards this conjecture, we prove the corresponding statement with \(2k^{2}\) in place of \(k^{2}\), which confirms the conjectured order of magnitude of the optimal bound.
2302.14341
Generalization of the Kuramoto model to the Winfree model by a symmetry breaking coupling
We construct a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks its rotational symmetry resulting in a variant of the Winfree Model. Consequently, we observe the characteristic features of the phase diagrams of both the Kuramoto model and the Winfree model depending on the degree of the symmetry breaking coupling strength for unimodal frequency distribution. The phase diagrams of both the Kuramoto and the Winfree models resemble each other for symmetric bimodal frequency distribution for a range of the symmetry breaking coupling strength except for region shift and difference in the degree of spread of the macroscopic dynamical states and bistable regions. The dynamical transitions in the bistable states are characterized by an abrupt (first-order) transition in both the forward and reverse traces. For asymmetric bimodal frequency distribution, the onset of bistable regions depends on the degree of the asymmetry. Large degree of the symmetry breaking coupling strength promotes the synchronized stationary state, while a large degree of heterogeneity, proportional to the separation between the two central frequencies, facilitates the spread of the incoherent and standing wave states in the phase diagram for a low strength of the symmetry breaking coupling. We deduce the low-dimensional equations of motion for the complex order parameters using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We also deduce the Hopf, pitchfork, and saddle-node bifurcation curves from the evolution equations for the complex order parameters mediating the dynamical transitions. Simulation results of the original discrete set of equations of the generalized Kuramoto model agree well with the analytical bifurcation curves.
M. Manoranjani, Shamik Gupta, D. V. Senthilkumar, V. K. Chandrasekar
2023-02-28T06:25:13
http://arxiv.org/abs/2302.14341v1
# Generalization of the Kuramoto model to the Winfree model by a symmetry breaking coupling ###### Abstract We construct a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks its rotational symmetry resulting in a variant of the Winfree Model. Consequently, we observe the characteristic features of the phase diagrams of both the Kuramoto model and the Winfree model depending on the degree of the symmetry breaking coupling strength for unimodal frequency distribution. The phase diagrams of both the Kuramoto and the Winfree models resemble each other for symmetric bimodal frequency distribution for a range of the symmetry breaking coupling strength except for region shift and difference in the degree of spread of the macroscopic dynamical states and bistable regions. The dynamical transitions in the bistable states are characterized by an abrupt (first-order) transition in both the forward and reverse traces. For asymmetric bimodal frequency distribution, the onset of bistable regions depends on the degree of the asymmetry. Large degree of the symmetry breaking coupling strength promotes the synchronized stationary state, while a large degree of heterogeneity, proportional to the separation between the two central frequencies, facilitates the spread of the incoherent and standing wave states in the phase diagram for a low strength of the symmetry breaking coupling. We deduce the low-dimensional equations of motion for the complex order parameters using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We also deduce the Hopf, pitchfork, and saddle-node bifurcation curves from the evolution equations for the complex order parameters mediating the dynamical transitions. Simulation results of the original discrete set of equations of the generalized Kuramoto model agree well with the analytical bifurcation curves. Keywords:Kuramoto model, Winfree model, Bifurcation, Asymmetry bimodal distrubution. ## 1 Introduction Symmetry (translational or rotational) prevailing in the coupled dynamical networks due to the coupling geometry manifests in a wide variety of natural systems and in their intriguing macroscopic dynamical states [1]. Nevertheless, symmetry breaking couplings are shown to be a source of a plethora of collective dynamical behavior that are inherent to it and are mostly inaccessible with the symmetry preserving couplings. In particular, networks of the paradigmatic Stuart-Landau oscillators with symmetry breaking coupling have been employed to unravel several collective dynamical states that mimic a variety of collective patterns observed in nature and technology. For instance, symmetry breaking coupling facilitates the transition from the homogeneous to an inhomogeneous steady states [2], symmetry breaking interaction has been identified as an essential feature for the genesis of partially coherent inhomogeneous spatial patterns, namely chimera death state [3; 4; 5]. Multicluster oscillation death states have been observed in nonlocally coupled Stuart-Landau oscillators with symmetry breaking coupling [6]. Further, the interplay of the nonisochronicity parameter and the symmetry breaking coupling is found to facilitate the onset of different variants of chimera death state such as multichimera death state and periodic chimera death states in nonlocally coupled Stuart-Landau oscillators [7]. 
The effect of the symmetry breaking coupling has also been investigated on the phenomenon of reviving oscillations [8]. Recently, the effect of the symmetry breaking mean-field coupling on the phenomenon of the aging transition has also been investigated. Conjugate couplings, a form of symmetry breaking coupling, have also been widely employed in the literature [9; 10; 11]. Note that the reports pointed out here are only the tip of the iceberg and not an exhaustive list of studies that employed symmetry breaking coupling in networks of Stuart-Landau oscillators. Despite the substantial investigations on the effect of the symmetry breaking coupling in networks of Stuart-Landau oscillators, there is a lacuna in understanding the nontrivial role of the symmetry breaking coupling in phase-only models, which indeed allow for exact analytical treatment of the macroscopic dynamical states in most cases. In particular, phase models such as the Winfree and Kuramoto models, and their variants, have been extensively employed in the literature to investigate the emergence of various intriguing collective dynamical states. Interaction among the phase oscillators in the Winfree model is modeled by a phase-dependent pulse function and a sensitivity function. The former characterizes the mean-field, while the latter characterizes the response of the individual oscillators to the mean-field [12; 13]. The Winfree model is unique in representing a class of pulse-coupled biological oscillators such as the flashing of fireflies [14], applauding audiences [15] and many more. Interaction among the phase oscillators in the Kuramoto model is modeled by the sine of the difference between the phases of the oscillators and has been widely employed to investigate the emergence of spontaneous synchronization in a wide variety of biological, chemical, mechanical and physical systems [16; 17; 18]. Examples include cardiac pacemakers [19], Josephson junction arrays [20], and power-grids [21]. A recent study has generalized the Kuramoto model by including an additional interaction term that breaks the rotational symmetry of the dynamics explicitly and unveiled a rich phase diagram with stationary and standing wave phases due to the symmetry breaking interaction [22]. Specifically, the authors considered unimodal frequency distributions and revealed the emergence of a stationary state, characterized by time-independent amplitude and phase of the complex Kuramoto order parameter, facilitated by the symmetry breaking interaction, which is otherwise absent in the original Kuramoto model that retains the rotational symmetry of the dynamics. Interestingly, in this work, we elucidate that the Kuramoto model can be translated into the Winfree model by the introduction of the additional symmetry breaking coupling and, consequently, one can obtain the phase diagrams of both these models simply by tuning the symmetry breaking parameter \(q\), thereby bridging the dynamics of the two models. Note that the macroscopic dynamical states of pulse-coupled biological oscillators with different sensitivity functions, characterizing the phase-response curves of biological oscillators, are peculiar to the Winfree model and its generalizations, and are far from reach for the Kuramoto model and its variants. In particular, we consider both unimodal and bimodal frequency distributions to explore the phase diagrams for various values of the symmetry breaking parameter \(q\). 
On the one hand, we observe the typical phase diagram of the Kuramoto model characterized only by incoherent and standing wave states in the absence of the symmetry breaking interaction for the unimodal frequency distribution. On the other hand, we observe the phase diagram with incoherent state, standing wave pattern along with the synchronized stationary state and bistabilities among them, a typical nature of the Winfree model, for \(q=1\). For an intermediate and increasing value of \(q\in(0,1)\), one can find the onset of the stationary state and eventually the emergence of bistability among these states in the phase diagram, and enlargement of the bistable regions resulting in the phase diagram of the Winfree model. All three states are also observed in both Kuramoto and Winfree models for symmetric bimodal frequency distributions along with the region of bistability. The degree of the spread of the different macroscopic dynamical states depends on the strength of the symmetry breaking parameter \(q\). Interestingly, for asymmetric bimodal frequency distributions, increase in the degree of asymmetry of the frequency distributions favors the onset of bistable regions even for a rather low values of \(q\), which otherwise cannot be observed with the symmetric bimodal and unimodal frequency distributions. We arrive at the phase diagrams by numerical simulation of the original equations of motion. We deduce the reduced low-dimensional evolution equations of motion for the order parameter using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We also deduce the Hopf, pitchfork and saddle-node bifurcation curves from the governing equations of motion for the order parameters, which mediates the dynamical transitions in the phase diagrams. Homoclinic bifurcation curve is obtained from the XPPAUT software. The plan of the paper is as follows. In Sec. II, we generalize the Kuramoto model by introducing a symmetry breaking coupling and elucidate that the latter bridges the Kuramoto model and the Winfree model. We deduce the reduced low-dimensional evolution equations for the complex order parameters corresponding to the discrete set of generalized Kuramoto model using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions in Sec. III. We also deduce Hopf, pitchfork and saddle-node bifurcation curves from the evolution equations for the complex order parameters in Sec. III, mediating the dynamical transitions among the incoherent, standing wave and synchronized stationary states. In Sec. IV, we discuss the observed dynamical states and their transitions in the various phase diagrams. Finally, in Sec. VI, we summarize the results. ## 2 Model We consider a nontrivial generalization of the Kuramoto model by including an interaction term that explicitly breaks the rotational symmetry of the dynamics [22]. The phase \(\theta_{i}\) is governed by the set of N ordinary differential equations (ODEs), \[\dot{\theta}_{i}=\omega_{i}+\frac{\varepsilon}{N}\sum_{j=1}^{N}\big{[}\sin( \theta_{j}-\theta_{i})+q\sin(\theta_{j}+\theta_{i})\big{]}, \tag{1}\] for \(i=1,\ldots,N\), where \(N\gg 1\). Here \(\theta_{i}(t)\) is the phase of the \(i\)th oscillator at time \(t\), \(\varepsilon\geq 0\) is the coupling strength, and \(q\) is the strength of the symmetry breaking coupling. Note that Eq. (1) reduces to the Kuramoto model by setting \(q=0\) and on identifying \(\varepsilon\) with the parameter \(K>0\). 
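As a concrete illustration of how the coupling in Eq. (1) interpolates between the two limits, the short sketch below (our own minimal Python implementation written for this text, not code from the authors, with illustrative parameters) evaluates the right-hand side of Eq. (1) for an array of phases; setting \(q=0\) recovers the Kuramoto coupling, while \(q=1\) gives the Winfree-type coupling discussed next.

```python
import numpy as np

def generalized_kuramoto_rhs(theta, omega, eps, q):
    """Right-hand side of Eq. (1): Kuramoto coupling plus an explicit
    symmetry breaking term of strength q (q=0: Kuramoto, q=1: Winfree-type)."""
    # (eps/N) * sum_j [ sin(theta_j - theta_i) + q * sin(theta_j + theta_i) ]
    dtheta = theta[None, :] - theta[:, None]   # theta_j - theta_i
    stheta = theta[None, :] + theta[:, None]   # theta_j + theta_i
    coupling = (np.sin(dtheta) + q * np.sin(stheta)).sum(axis=1)
    return omega + eps / theta.size * coupling

# Tiny usage example with hypothetical parameters
rng = np.random.default_rng(0)
theta0 = rng.uniform(-np.pi, np.pi, size=100)   # random initial phases
omega = rng.standard_cauchy(100)                # Lorentzian (Cauchy) frequencies, gamma=1, omega_0=0
print(generalized_kuramoto_rhs(theta0, omega, eps=1.0, q=0.5)[:3])
```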
Equation (1) can also be viewed as a variant of the celebrated Winfree model [23; 24; 25; 26] when \(q=1\). The Winfree model takes the form \[\dot{\theta_{i}}=\omega_{i}+Q(\theta_{i})\sum_{j=1}^{N}P(\theta_{j}), \tag{2}\] where \(P(\theta_{j})\) is the phase-dependent pulse function and the functional form of the response function \(Q(\theta)\) characterizes the phase-response curves of certain biological oscillators. From Eq. (1), it is easy to recognize that \(Q(\theta)=-2\sin(\theta)\) and \(P(\theta)=\cos(\theta)\). It is also evident that the symmetry breaking parameter \(q\) bridges the Kuramoto and the Winfree models. Equation (1) corresponds to the Kuramoto model when \(q=0\) and to a variant of the Winfree model when \(q=1\), as in Eq. (2). We consider the frequencies of the phase oscillators to be distributed either according to the unimodal Lorentzian distribution \[g(\omega)=\frac{\gamma}{\pi((\omega-\omega_{0})^{2}+\gamma^{2})};\quad\gamma>0, \tag{3}\] or according to the bimodal Lorentzian distribution \[g(\omega)=\frac{1}{\pi}\left[\frac{\gamma_{1}}{((\omega-\omega_{0})^{2}+ \gamma_{1}^{2})}+\frac{\gamma_{2}}{((\omega+\omega_{0})^{2}+\gamma_{2}^{2})} \right];\quad\quad\quad\gamma_{1},\gamma_{2}>0. \tag{4}\] Here \(\gamma\), \(\gamma_{1}\) and \(\gamma_{2}\) are the width parameters (half width at half maximum) of the Lorentzians and \(\pm\omega_{0}\) are their central frequencies. Note that \(\omega_{0}\) corresponds to the degree of detuning in the system, which is proportional to the separation between the two central frequencies. Note also that the bimodal distribution \(g(\omega)\) is symmetric about zero when \(\gamma_{1}=\gamma_{2}\). Furthermore, \(g(\omega)\) in Eq. (4) is bimodal if and only if the separation between the central frequencies is sufficiently greater than their widths. To be precise, it is required that \(\omega_{0}>\gamma_{1,2}/\sqrt{3}\) for the distribution to be bimodal; otherwise the classical results for the unimodal distribution hold. Heterogeneity in the frequency distribution plays a crucial role in the manifestation of a plethora of collective dynamics in a vast variety of natural systems. In particular, coexisting co-rotating and counter-rotating systems, characterized by positive and negative frequencies, respectively, are widespread in nature. For instance, counter-rotating spirals are observed in the protoplasm of the Physarum plasmodium [27], and counter-rotating vortices are inevitable in the atmosphere and ocean [28; 29; 30], in the magnetohydrodynamics of plasma flow [31], in Bose-Einstein condensates [32; 33], and in other physical systems [34; 35; 36]. Very recently, counter-rotating frequency induced dynamical effects were also reported in coupled Stuart-Landau oscillators with symmetry preserving as well as symmetry breaking couplings [37]. The coexistence of co-rotating and counter-rotating oscillators was initially identified by Tabor [38], which was followed by a series of works employing co-rotating and counter-rotating oscillators. All these physical systems strongly suggest that counter-rotating time-evolving dynamical systems indeed exist in nature and play a pertinent role in the manifestation of their intriguing collective dynamics. In the following, we will deduce the low-dimensional evolution equations for the complex macroscopic order parameters corresponding to both the unimodal and bimodal frequency distributions using the Ott-Antonsen (OA) ansatz [39; 40]. 
Subsequently, we also deduce the various bifurcation curves facilitating the dynamical transitions among the observed dynamical states in the phase diagrams. ## 3 Low-dimensional evolution equations for the macroscopic order parameters We now provide an analysis of the dynamics (1), in the limit \(N\rightarrow\infty\), by invoking the Ott-Antonsen ansatz. In this limit, the dynamics of the discrete set of equations (1) can be captured by the probability distribution function \(f(\theta,\omega,t)\), defined such that \(f(\theta,\omega,t)\mathrm{d}\theta\) gives the probability of oscillators with phase in the range \([\theta,\theta+\mathrm{d}\theta]\) at time \(t\). The distribution is \(2\pi\)-periodic in \(\theta\) and obeys the normalization \[\int_{0}^{2\pi}\mathrm{d}\theta\ f(\theta,\omega,t)=g(\omega)\ \forall\ \omega. \tag{5}\] Since the dynamics (1) conserves the number of oscillators with a given \(\omega\), the time evolution of \(f\) follows the continuity equation \[\frac{\partial f}{\partial t}+\frac{\partial(fv)}{\partial\theta}=0, \tag{6}\] where \(v(\theta,\omega,t)\) is the angular velocity of the oscillators. From Eq. (1), we have, \[v(\theta,\omega,t)=\omega+\frac{\varepsilon}{2i}[(ze^{-i\theta}-z^{\star}e^{ i\theta})+q(ze^{i\theta}-z^{\star}e^{-i\theta})], \tag{7}\] where \(z^{\star}\) denotes the complex conjugate of the macroscopic order parameter defined as \[z=\int_{-\infty}^{\infty}g(\omega)\int_{0}^{2\pi}f(\theta,\omega,t)e^{i\theta }d\theta d\omega. \tag{8}\] Now, \(f(\theta,\omega,t)\) can be expanded in terms of Fourier series of the form \[f(\theta,\omega,t)=\frac{g(\omega)}{2\pi}\left[1+\sum_{n=1}^{\infty}\left( \alpha_{n}(\omega,t)e^{in\theta}+\mathrm{c.c.}\right)\right], \tag{9}\] where, \(\alpha_{n}(\omega,t)\) is the \(n\)th Fourier coefficient, while c.c. denotes complex conjugation of the preceding sum within the brackets. The normalization condition in (5) is satisfied by the presence of the prefactor of \(g(\omega)\) in (9). The Ott-Antonsen ansatz consists in assuming [39; 40] \[\alpha_{n}(\omega,t)=\left[\alpha(\omega,t)\right]^{n}. \tag{10}\] Now, it is straightforward to obtain \[\frac{\partial\alpha}{\partial t}+i\omega\alpha+\frac{\varepsilon_{1}}{2} \left[(z\alpha^{2}-z^{\star})+q(z-z^{\star}\alpha^{2})\right], \tag{11}\] where, \[z^{\star}=\int_{-\infty}^{\infty}\alpha(t,\omega)g(\omega)d\omega. \tag{12}\] ### Unimodal Distribution Substituting the partial fraction expansion of the unimodal frequency distribution \(g(\omega)\) (3) in Eq. (12) and evaluating the integral using an appropriate contour integral, one can obtain the order parameter as \[z(t)=a^{\star}(\omega_{0}-i\gamma,t). \tag{13}\] From (11) and (13), one can obtain the evolution equation for the complex order parameter as \[\frac{\partial z}{\partial t}-i(\omega_{0}+i\gamma)z+\frac{\varepsilon_{1}}{2 }\bigg{[}\big{[}\mathsf{z}^{2}z-z\big{]}+q\big{[}z^{\star}-z^{3}\big{]}\bigg{]} =0. \tag{14}\] The above evolution equation for the complex order parameter \(z(t)=r(t)e^{i\psi(t)}\) can be expressed in terms of the evolution equations in \(r\) and \(\psi\) as \[\tfrac{\mathrm{d}r}{\mathrm{d}t} =-\gamma r-\frac{r\varepsilon}{2}(r^{2}-1)(1-q\cos(2\psi)), \tag{15a}\] \[\tfrac{\mathrm{d}\psi}{\mathrm{d}t} =\omega_{0}+\frac{\varepsilon q}{2}(r^{2}+1)\sin(2\psi)). 
\tag{15b}\] The above equations govern the reduced low-dimensional order parameter dynamics, which actually corresponds to the dynamics of the original discrete set of equations (1) in the limit \(N\to\infty\) for the unimodal Lorentzian distribution function \(g(\omega)\) (3). Now, we discuss the various asymptotic macroscopic dynamical states admitted by Eq. (15). #### 3.1.1 Incoherent (IC) state: The incoherent (IC) state is characterized by time independent \(z\) satisfying \(z=z^{\star}=0\) (thus representing a stationary state of the dynamics (15)); correspondingly, one has \(r=0\). The linear stability of such a state is determined by linearizing Eq. (14) around \(z=0\). By representing \(z=u\) with \(\mathsf{u}\ll\)1, we obtain \[\frac{\partial u}{\partial t}+(\gamma-i\omega_{0})u-\frac{\varepsilon}{2}\big{[} (u)-q(u^{\star})\big{]}=0. \tag{16}\] Decomposing \(u=u_{x}+iu_{y}\) yields \[\frac{\partial}{\partial t}\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}=M\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}; \tag{17}\] \[M\equiv\begin{bmatrix}-\gamma+\frac{\varepsilon}{2}\big{[}1-q\big{]}&- \omega_{0}\\ \omega_{0}&-\gamma+\frac{\varepsilon}{2}\big{[}1+q\big{]}\end{bmatrix}.\] The matrix \(M\) has the characteristic eigenvalues \[\lambda_{1,2}=\frac{-2\gamma+\varepsilon\pm\sqrt{\Delta}}{2}, \tag{18}\] with \(\Delta=(\varepsilon^{2}q^{2}-4\omega_{0}^{2})\). Note that we have \(\lambda_{1}>\lambda_{2}\). The stability threshold for the incoherent state is then obtained by analysing \(\lambda_{1}\) as a function of \(\varepsilon\) and \(q\), and seeking the particular value of \(\varepsilon\) at which \(\lambda_{1}\) vanishes for a given \(q\). The stability threshold can be obtained as \[\varepsilon_{HB}=2\gamma, \text{for}\;\;\Delta\leq 0, \tag{19}\] \[\varepsilon_{PF}=2\sqrt{\frac{\gamma^{2}+\omega_{0}^{2}}{1+q^{2}}} \text{for}\;\;\Delta>0. \tag{20}\] #### 3.1.2 Synchronized stationary state (SSS): Now, we explore the possibility of existence of the synchronized stationary state. Requiring that \(r\) and \(\psi\) have time-independent non-zero values in this case and hence equating the left hand side of equations (15) to zero, we obtain the two coupled equations for the synchronized stationary state as \[\tfrac{\varepsilon q}{2}\cos(2\psi) = \frac{\gamma}{(r^{2}-1)}+\frac{\varepsilon}{2}, \tag{21a}\] \[\tfrac{\varepsilon q}{2}\sin(2\psi) = -\frac{\omega_{0}}{(r^{2}+1)}. \tag{21b}\] With some algebra, one can obtained the following expressions for the stationary \(r\) and \(\psi\): \[\tfrac{\varepsilon^{2}q^{2}}{4} = \left(\frac{\gamma}{(r^{2}-1)}+\frac{\varepsilon}{2}\right)^{2}+ \left(\frac{\omega_{0}}{(r^{2}+1)}\right)^{2}\!\!, \tag{22a}\] \[\tan(2\psi) = \frac{(1-r^{2})(\omega_{0})}{(r^{2}+1)(\gamma+\frac{\varepsilon} {2}(r^{2}-1))}. \tag{22b}\] \(r\) and \(\psi\) can be calculated for a fixed set of parameters by numerically solving the above set of equations, which is then substituted back into the evolution equations for the low-dimensional order parameters to deduce the characteristic equation. The eigenvalues of the characteristic equation is then used to determine the saddle-node bifurcation curve in the suitable two parameter phase. ### Bimodal Distribution Now, we will deduce the low-dimensional evolution equations corresponding to the macroscopic order parameters for the original discrete set of equations (1) in the limit \(N\rightarrow\infty\) for the asymmetric bimodal Lorentzian distribution function \(g(\omega)\) (4). 
Expanding the latter using partial fractions and evaluating the integral in Eq. (12) using appropriate contour integral, one can obtained the complex order parameter as \[z(t)=\frac{1}{2}[z_{1}(t)+z_{2}(t)], \tag{23}\] where \[z_{1,2}(t)=\alpha^{\star}(\pm\omega_{0}-i\gamma_{1,2},t). \tag{24}\] Substitution it into Eq. (11) yields two coupled complex ordinary differential equations describing the evolution of two suborder parameters as \[\dot{z}_{1}= -(\gamma_{1}+i\omega_{0})z_{1}+\frac{\varepsilon}{4}[(z_{1}+z_{2 }-(z_{1}^{\star}+z_{2}^{\star})z_{1}^{2})\] \[+q((z_{1}+z_{2})z_{1}^{2}-(z_{1}^{\star}+z_{2}^{\star}))], \tag{25}\] \[\dot{z}_{2}= -(\gamma_{2}-i\omega_{0})z_{2}+\frac{\varepsilon}{4}[(z_{1}+z_{2 }-(z_{1}^{\star}+z_{2}^{\star})z_{2}^{2})\] \[+q((z_{1}+z_{2})z_{2}^{2}-(z_{1}^{\star}+z_{2}^{\star}))]. \tag{26}\] The above evolution equations for the complex order parameters \(z(t)_{1,2}=r(t)_{1,2}e^{i\psi(t)_{1,2}}\) can be expressed in terms of the evolution equations in \(r_{1,2}\) and \(\psi_{1,2}\), as \[\frac{\mathrm{d}r_{1}}{\mathrm{d}t} =-\gamma_{1}r_{1}+\frac{\varepsilon}{4}\big{[}(1-r_{1}^{2})(r_{1} +r_{2}\cos(\psi_{2}-\psi_{1}))\] \[+q((r_{1}^{2}-1)(r_{1}\cos(2\psi_{1})+r_{2}\cos(\psi_{2}+\psi_{1}) ))\big{]}, \tag{27a}\] \[\frac{\mathrm{d}\psi_{1}}{\mathrm{d}t} =-\omega_{0}+\frac{\varepsilon}{4r_{1}}(r_{1}^{2}+1)\big{[}r_{2} \sin(\psi_{2}-\psi_{1})\] \[+q(r_{1}\sin(2\psi_{1})+r_{2}\sin(\psi_{2}+\psi_{1}))\big{]}. \tag{27b}\] and \[\frac{\mathrm{d}r_{2}}{\mathrm{d}t}=-\gamma_{2}r_{2}+\frac{\varepsilon}{4} \big{[}(1-r_{2}^{2})(r_{1}\cos(\psi_{2}-\psi_{1})+r_{2})\] \[+q((r_{2}^{2}-1)(r_{1}\cos(\psi_{2}+\psi_{1})+r_{2}\cos(2\psi_{2}))) \big{]}, \tag{28a}\] \[\frac{d\psi_{2}}{dt} =\omega_{0}-\frac{\varepsilon}{4r_{2}}(r_{2}^{2}+1)\big{[}r_{1} \sin(\psi_{2}-\psi_{1})\] \[-q(r_{1}\sin(\psi_{2}+\psi_{1})+r_{2}\sin(2\psi_{2}))\big{]}. \tag{28b}\] The above equations constitute the evolution equations for reduced low-dimensional order parameters corresponding to the dynamics (1) in the limit \(N\rightarrow\infty\) and for the case of the asymmetric bimodal Lorentzian distribution \(g(\omega)\) (4). Now, we discuss the various asymptotic macroscopic dynamical states admitted by Eqs. (27) and (28). Figure 1: Phase diagrams in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for the generalized Kuramoto model (1) with unimodal frequency distribution for different values of the symmetry breaking parameter \(q\). (a) \(q=0.0\), (b) \(q=0.1\), (c) \(q=0.5\), and (d) \(q=1.0\). The line connected by filled squares is the Hopf bifurcation curve \(\varepsilon_{HB}\) (Eq. (19)), solid line corresponds to the pitchfork bifurcation curve \(\varepsilon_{PF}\) (Eq. (20)) dashed line corresponds to the saddle-node bifurcation curve (Eq. (22)), and the dashed dotted line correspond to the homoclinic bifurcation curve obtained using the software XPPAUT. Bistability between the standing wave (SW) state and the synchronized stationary (SS) state is represented by dark shaded region enclosed by the saddle-node bifurcation curve and the homoclinic bifurcation curve. Bistability between the incoherent (IC) and the SS state is represented by light grey shaded region enclosed by the saddle-node bifurcation curve and the pitchfork bifurcation curve. #### 3.2.1 Incoherent state The incoherent state is defined by \(r_{1}\)=\(r_{2}\)=0. 
A linear stability analysis of the fixed point \((z_{1},z_{2})\) = (0, 0) results in the stability condition, \[\omega_{0}^{2}=\frac{1}{4}(\varepsilon a_{1}-2a_{2}+\sqrt{\varepsilon^{2}q^{2}a _{1}-4\varepsilon a_{3}^{2}+4a_{3}^{2}a_{1}}), \tag{29}\] where, \(a_{1}=\gamma_{1}+\gamma_{2},a_{2}=\gamma_{1}^{2}+\gamma_{2}^{2}\) and \(a_{3}=\gamma_{1}-\gamma_{2}\). This stability curve actually corresponds to the pitchfork bifurcation curve across which the fixed point \((z_{1},z_{2})\) = (0, 0) (incoherent state) loses its stability leading to the synchronized stationary state. Note that the incoherent state loses it stability through the Hopf bifurcation, which results in the stability condition \[\omega_{0}^{2}= \frac{1}{4}(\varepsilon-2b_{1})^{4}(\varepsilon^{2}(q^{2}-1)-16b_ {2}+4\varepsilon b_{1})^{2}\bigg{[}\varepsilon^{5}(q-1)b_{1}-\varepsilon^{4}(q ^{2}-1)\big{(}(q^{2}-8)b_{3}\] \[+2b_{2}(q^{2}-10)\big{)}-4\varepsilon^{3}(q^{2}-2)\big{(}3(\gamma _{1}^{3}+\gamma_{2}^{3})+13b_{2}b_{1}\big{)}+4\varepsilon^{2}(b_{1})^{2} \big{(}b_{3}(q^{2}-8)\] \[+2b_{2}(3q^{2}-20)\big{)}+16\varepsilon b_{1}^{3}(b_{3}+10b_{2}) -64b_{2}b_{1}^{4}\bigg{]}, \tag{30}\] Figure 2: Time averaged order parameter \(R\) and the Shinomoto-Kuramoto order parameter \(\xi\) for the generalized Kuramoto model (1) with unimodal frequency distribution as a function of \(\varepsilon/\gamma\) for \(\omega_{0}/\gamma\) = 1. (a) and (d) \(q=0.0\), (b) and (e) \(q=0.5\), and (c) and (f) \(q=1.0\). The forward trace is indicated by the line connected by open circles, while the reverse trace is indicated by the line connected by closed circles. The states indicated by IC, SW and SS correspond to the incoherent, standing wave, and synchronized stationary states, respectively. The bifurcation curves \(\varepsilon_{HB},\varepsilon_{Hc},\varepsilon_{PF}\) and \(\varepsilon_{SN}\) correspond to the Hopf, homoclinic, pitchfork and saddle-node bifurcation curves, respectively. where, \(b_{1}=\gamma_{1}+\gamma_{2},b_{2}=\gamma_{1}\gamma_{2}\) and \(b_{3}=\gamma_{1}^{2}+\gamma_{2}^{2}\). The above stability curve corresponds to the Hopf bifurcation curve. The boundary of stable incoherent state is therefore enclosed by both the pitchfork bifurcation and Hopf bifurcation curves. #### 3.2.2 Synchronized stationary state Deducing the solution for the synchronized stationary state for the asymmetry bimodal distribution may not be possible as \(r_{1}~{}\neq~{}r_{2}\) and \(\psi_{1}~{}\neq~{}\psi_{2}\). However, for the symmetry bimodal distribution characterized by \(r_{1}~{}=~{}r_{2}\) and \(\psi_{1}~{}=~{}-\psi_{2}\), one can deduce the equations for \(r\) and \(\psi\) as in (22) and obtain the saddle-node bifurcation curves as pointed out in Sec. 3.1.2. ## 4 Numerical Results In this section, we will proceed to unravel the macroscopic dynamical states admitted by the generalized Kuramoto model (1) with explicit symmetry breaking coupling by constructing appropriate two parameter phase diagrams and classifying the underlying dynamical states from a numerical analysis of Figure 3: Phase diagrams in the \(q-\varepsilon/\gamma\) parameter space for the generalized Kuramoto model (1) with unimodal frequency distribution for increasing degree of heterogeneity of the frequency distribution. (a) \(\omega_{0}/\gamma_{2}=0.4\), (b) \(\omega_{0}/\gamma_{2}=0.6\), (c) \(\omega_{0}/\gamma_{2}=1.0\), and (a) \(\omega_{0}/\gamma_{2}=1.2\). The bifurcation curves and dynamical states are similar to those in Fig. 1. 
the governing equations of the original discrete model. Specifically, we will unravel the rich phase diagrams of the generalized Kuramoto model, using both unimodal and bimodal frequency distributions, for distinct values of the symmetry breaking parameter \(q\). The number of oscillators is fixed as N = \(10^{4}\), and we use the standard 4th-order Runge-Kutta integration scheme with integration step size h = 0.01 to solve the generalized Kuramoto model (1). Note that one can break the two-parameter phase into several segments and multiple copies of the same code can be simulated simultaneously for different values of the parameters to generate the data, which can then be concatenated to get the complete phase diagrams with a reasonable workstation. The initial state of the oscillators (\(\theta_{i}\)'s ) is distributed with uniform random values between -\(\pi\) and \(+\pi\). We use the time averaged order parameter \(R\) defined as \[R=\lim_{t\rightarrow\infty}\frac{1}{\tau}\int_{t}^{t+\tau}r(t)dt, \tag{31}\] where \(r(t)=|Z|=|N^{-1}\sum_{j=1}^{N}e^{i\theta_{j}}|\). Incoherent state is characterized by \(R=r(t)=0\), while the synchronized stationary state is characterized by \(R=r(t)=const\). Standing wave is characterized by the oscillating nature of \(r(t)\). In order Figure 4: Phase diagrams in the \(\omega_{0}/\gamma-\varepsilon/\gamma\) parameter space for the generalized Kuramoto model (1) with symmetric bimodal frequency distribution for increasing values of the strength of the symmetry breaking coupling. (a) \(q=0.0\), (b) \(q=0.5\), (c) \(q=0.8\), and (d) \(q=1.0\). The bifurcation curves and dynamical states are similar to those in Fig. 1. Figure 5: Phase diagrams in the \(\omega_{0}/\gamma_{2}-\varepsilon/\gamma_{2}\) parameter space for the generalized Kuramoto model (1) with asymmetric bimodal frequency distribution for increasing the strength of the symmetry breaking coupling and increasing the asymmetry between the bimodal frequency distributions. (a) and (b) \(\gamma_{1}/\gamma_{2}=0.6\), and (c) and (d) \(\gamma_{1}/\gamma_{2}=1.2\). (a) and (c) \(q=0.1\) and (b) and (d) \(q=1\). The bifurcation curves and dynamical states are similar to those in Fig. 1. Figure 6: Phase diagrams in the \(q-\varepsilon/\gamma_{2}\) parameter space (first row) for \(\omega_{0}/\gamma_{2}=1\) and \(q-\omega_{0}/\gamma_{2}\) (second row) for \(\varepsilon/\gamma_{2}=2.5\) for the generalized Kuramoto model (1) with asymmetric bimodal frequency distribution. (a) and (c) \(\gamma_{1}/\gamma_{2}=0.6\), and (b) and (d) \(\gamma_{1}/\gamma_{2}=1.2\). to distinguish the synchronized stationary state and the standing wave state more clearly, we use the Shinomoto-Kuramoto order parameter [41; 42] \[\xi=\overline{|r(t)-R|}, \tag{32}\] where \(\bar{z}\) denoted the long time average. Shinomoto-Kuramoto order parameter takes \(\xi=0\) for the incoherent and synchronized stationary states, whereas it takes nonzero value for the standing wave state. #### 4.0.1 Phase diagrams for the unimodal distribution We have depicted phase diagrams in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for different values of the symmetry breaking parameter \(q\) in Fig. 1 in order to understand the effect of the explicit symmetry breaking interaction on the dynamics of Eq. (1) with unimodal frequency distribution. The phase diagram is demarcated into different dynamical regions using the value of the time averaged order parameter \(R\) and the Shinomoto-Kuramoto order parameter \(\xi\). 
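A minimal, self-contained sketch of this numerical protocol (our own illustration, with a much smaller \(N\) and a far shorter averaging window than used for the actual phase diagrams) is given below; it integrates Eq. (1) with the fourth-order Runge-Kutta scheme at step size \(h=0.01\), samples the natural frequencies from the unimodal Lorentzian (3), and evaluates \(R\) of Eq. (31) and \(\xi\) of Eq. (32).

```python
import numpy as np

def rhs(theta, omega, eps, q):
    # Mean-field form of the coupling in Eq. (1), algebraically identical to the pairwise sum:
    # v_i = omega_i + eps * [ Im(Z e^{-i theta_i}) + q * Im(Z e^{+i theta_i}) ],  Z = <e^{i theta}>
    Z = np.exp(1j * theta).mean()
    return omega + eps * (np.imag(Z * np.exp(-1j * theta))
                          + q * np.imag(Z * np.exp(1j * theta)))

def rk4_step(theta, omega, eps, q, h):
    # Standard fourth-order Runge-Kutta step of size h
    k1 = rhs(theta, omega, eps, q)
    k2 = rhs(theta + 0.5 * h * k1, omega, eps, q)
    k3 = rhs(theta + 0.5 * h * k2, omega, eps, q)
    k4 = rhs(theta + h * k3, omega, eps, q)
    return theta + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def order_parameters(eps, q, gamma=1.0, omega0=1.0, N=1000, h=0.01,
                     t_transient=50.0, t_average=50.0, seed=1):
    rng = np.random.default_rng(seed)
    # Unimodal Lorentzian frequencies, Eq. (3), via inverse-CDF sampling
    omega = omega0 + gamma * np.tan(np.pi * (rng.random(N) - 0.5))
    theta = rng.uniform(-np.pi, np.pi, N)        # uniform random initial phases
    r_trace = []
    for n in range(int((t_transient + t_average) / h)):
        theta = rk4_step(theta, omega, eps, q, h)
        if n * h >= t_transient:
            r_trace.append(np.abs(np.exp(1j * theta).mean()))   # r(t) = |Z|
    r_trace = np.asarray(r_trace)
    R = r_trace.mean()                  # time averaged order parameter, Eq. (31)
    xi = np.mean(np.abs(r_trace - R))   # Shinomoto-Kuramoto order parameter, Eq. (32)
    return R, xi

# Illustrative forward-trace points for q = 1, omega0/gamma = 1 (cf. Fig. 2(c)):
# R is expected to stay close to zero below eps/gamma = 2 and to jump to a finite,
# time-independent value above it, with xi ~ 0 in both regimes.
print(order_parameters(eps=1.5, q=1.0))
print(order_parameters(eps=3.0, q=1.0))
```

The mean-field form used in `rhs` is an exact rewriting of the pairwise sum in Eq. (1) and corresponds to the velocity field (7).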
Incoherent state (IC), synchronized stationary state (SS) and standing wave (SW), along with the bistable regions (dark and light gray shaded regions) are observed in the phase diagram. The parameter space indicated by light gray shaded region corresponds to the bistable regime between the incoherent and the synchronized stationary states, while that indicated by dark gray shaded region corresponds to the bistable regime between the standing wave and the synchronized stationary states, Only the incoherent and standing wave states are observed in the phase diagram for \(q=0\) (see 1(a)), a typical phase diagram of the Kuramoto model with unimodal frequency distribution. The line connected by the filled squares corresponds to the Hopf bifurcation curve, across which there is a transition from the incoherent state to the standing wave state. Note that a finite value of \(q\) results in the loss of the rotational symmetry of the dynamics of the Kuramoto oscillators. Even a feeble value of \(q\) manifests the synchronized stationary state in a rather large parameter space at the cost of the standing wave state (see Fig. 1(b) for q=0.1). There is a transition from the incoherent state to the standing wave state via the Hopf bifurcation curve \(\varepsilon_{HB}\) (indicated by the line connected by filled squares) as a function of \(\varepsilon/\gamma\) for \(\omega_{0}/\gamma>0.1\). The standing wave state loses its stability via the homoclinic bifurcation (indicated by the dashed-dotted line) as a function of \(\varepsilon/\gamma\) resulting in the synchronized stationary state. There is also a transition from the incoherent state to the synchronized stationary state for \(\omega_{0}/\gamma\leq 0.1\) as a function of \(\varepsilon/\gamma\) via the pitchfork bifurcation curve \(\varepsilon_{PF}\) indicated by the solid line. Further larger values of the symmetry breaking parameter results in the emergence of the bistability between the standing wave and the synchronized stationary states (indicated by dark shaded region) enclosed by the saddle-node bifurcation curve (indicated by dashed line) and the homoclinic bifurcation curve (see Fig. 1(c) for q=0.5). There is also a bistable region between the incoherent state and the synchronized stationary state (indicated by light grey shaded region) enclosed by the saddle-node bifurcation curve and the pitchfork bifurcation curve. For \(q=1\), both the bistable regions enlarged in the phase diagram (see Fig. 1(d)), which is a typical phase diagram of the Winfree model with the unimodal frequency distribution. The phase diagrams for \(q=0.5\) and \(1.0\) have similar dynamics except for the regime shift and enhanced bistabilities in a larger parameter space. Thus, as the value of \(q\) is increased from the null value to the unity, one can observe the transition from the phase diagram of the Kuramoto model to that of the Winfree model. Note that the Hopf, saddle-node and pitchfork bifurcation curves are the analytical bifurcation curves, Eqs. (19), (20) and (22) respectively, obtained from the low-dimensional evolution equations for the order parameters deduced in Sec. 3.1. Homoclinic bifurcation curve is obtained from the software XPPAUT [43]. Time averaged order parameter \(R\) and the Shinomoto-Kuramoto order parameter \(\xi\) are depicted in Fig. 2 as a function of \(\varepsilon/\gamma\) for different values of the symmetry breaking parameter \(q\) and \(\omega_{0}/\gamma\). 
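Before describing the traces, note that the analytic thresholds of Eqs. (19) and (20) can be evaluated directly for the parameters of Fig. 2; a minimal sketch (our own illustration, in units of \(\gamma\)) is given below.

```python
import numpy as np

def eps_HB(gamma):
    # Hopf threshold, Eq. (19), valid where Delta = eps^2 q^2 - 4 omega0^2 <= 0
    return 2.0 * gamma

def eps_PF(gamma, omega0, q):
    # Pitchfork threshold, Eq. (20), valid where Delta > 0
    return 2.0 * np.sqrt((gamma**2 + omega0**2) / (1.0 + q**2))

# Parameters of Fig. 2: omega0/gamma = 1
print(eps_HB(1.0))            # 2.0 -> IC to SW transition at eps/gamma = 2 for q = 0 and q = 0.5
print(eps_PF(1.0, 1.0, 1.0))  # 2.0 -> IC to SS transition at eps/gamma = 2 for q = 1
```

Both expressions give \(\varepsilon/\gamma=2\) for \(\omega_{0}/\gamma=1\), consistent with the transition points quoted below.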
The forward trace is indicated by the line connected by open circles, while the backward trace is indicated by the line connected by closed circles. There is a smooth (second order) transition from the incoherent to the standing wave states via the Hopf bifurcation \(\varepsilon_{HB}\) at \(\varepsilon/\gamma=2\) during both forward and reverse traces for \(q=0.0\) and \(\omega_{0}/\gamma=1\) as depicted in Figs. 2(a) and 2(d). In addition, to the smooth transition from the incoherent state to the standing wave state via the Hopf bifurcation \(\varepsilon_{HB}\) at \(\varepsilon/\gamma=2\), there is another smooth transition from the standing wave state to the synchronized stationary state via the homoclinic bifurcation \(\varepsilon_{Hc}\) at \(\varepsilon/\gamma=2.94\) in both the forward and reverse traces as shown in Fig. 2(b) for \(q=0.5\) and \(\omega_{0}/\gamma=1\). The transition from the standing wave state to the synchronized stationary state is also corroborated by the sharp fall of the Shinomoto-Kuramoto order parameter \(\xi\) to the null value (see Fig. 2(e)). In contrast, there is an abrupt (first order) transition from the incoherent state to the synchronized stationary state at \(\varepsilon/\gamma=2\) via the pitchfork bifurcation curve \(\varepsilon_{PF}\) for \(\omega_{0}/\gamma=1\) during the forward trace, whereas there is an abrupt transition from the synchronized stationary state to the incoherent state at \(\varepsilon/\gamma=1.8\) via the saddle-node bifurcation \(\varepsilon_{SN}\) during the reverse trace (see Fig. 2(c) for \(q=1.0\)) elucidating the presence of hysteresis and bistability between the incoherent state and the synchronized stationary state. The Shinomoto-Kuramoto order parameter \(\xi\) takes the null value, in the entire range of \(\varepsilon/\gamma\) in Fig. 2(f) for \(q=1.0\), characterizing both the incoherent and the synchronized stationary states. The observed dynamical states and their transitions are depicted in the \((q,\varepsilon/\gamma)\) parameter space for different \(\omega_{0}/\gamma\) in Fig. 3. The bifurcations mediating the dynamical transitions are similar to those observed in Fig. 1. The phase diagram for \(\omega_{0}/\gamma=0.4\) is shown in Fig. 3(a). There is a transition from the incoherent state to the standing wave state via the Hopf bifurcation curve for smaller values of the symmetry breaking parameter as a function of \(\varepsilon/\gamma\). Larger values of the symmetry breaking parameter favor the synchronized stationary state in the entire range of \(\varepsilon/\gamma\). However, in a narrow range of \(q\in(0.36,0.46]\) (see Fig. 3(a)), there is a transition from the incoherent state to the standing wave state and then to the synchronized stationary state. There is also a transition from the incoherent state to the synchronized stationary state in the range of \(q\in(0.46,0.6)\). Recall that \(\omega_{0}\) quantifies the degree of detuning of the frequency distribution. Increase in the heterogeneity of the frequency distribution promotes bistable regions, incoherent and standing wave states, to a large region of the \((q,\varepsilon/\gamma)\) parameter space. For instance, the phase diagram for \(\omega_{0}/\gamma=0.6\) is depicted in Fig. 3(b) elucidates the emergence of the bistable regions and enlarged regions of the incoherent and standing wave states as a function of \(q\), a manifestation of increased heterogeneity. 
Further increase in the \(\omega_{0}/\gamma\) enlarges the bistable regions, the incoherent and the standing wave states as depicted in Figs. 3(c) and 3(d) for \(\omega_{0}/\gamma=1\) and \(1.2\), respectively. These results are in agreement with the phase diagrams in Fig. 1 in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for increasing values of the symmetry breaking parameter. Next, we will explore the effect of symmetric and asymmetric bimodal frequency distributions on the phase diagrams in the following. #### 4.0.2 Phase diagrams for bimodal distribution In this section, we analyse the phase space dynamics of the generalized Kuramoto model (1) with symmetric bimodal frequency distribution (4) by setting \(\gamma=\gamma_{1}=\gamma_{2}\) for increasing values of the strength of the symmetry breaking coupling. We have depicted the phase diagrams in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for different values of the symmetry breaking parameter \(q\) in Fig. 4. Note that the phase space dynamics of the Kuramoto model (see Fig. 4(a) for \(q=0\)) are similar to those of the Winfree model (see Fig. 4(d) for \(q=1\)) for the symmetric bimodal frequency distribution except for the regime shift. The dynamical states and the bifurcation curves are similar to those in Fig. 1. Increasing the strength of the symmetry breaking coupling favors the synchronized stationary state and the bistable states in a large region of the parameter space as evident from Fig. 4(b) and 4(c) for \(q=0.5\) and \(q=0.8\), respectively. Note that a large heterogeneity in the frequency distribution favor the incoherent and the standing wave states in a rather large region of the phase diagram for smaller \(q\) and \(\varepsilon\) (see Fig. 4(a) for \(q=0\)). Nevertheless, the synchronized stationary state predominates the phase diagram for larger strength of the symmetry breaking coupling and \(\varepsilon\) despite the presence of a large heterogeneity in the frequency distribution (see Fig. 4(d) for \(q=1\)). Next, we analyze the phase space dynamics of the generalized Kuramoto model (1) with asymmetric bimodal frequency distribution (4) by increasing the strength of the symmetry breaking coupling and the degree of asymmetry between the bimodal frequency distributions. We have depicted the phase diagrams in the \((\omega_{0}/\gamma_{2},\varepsilon/\gamma_{2})\) parameter space for different values of the symmetry breaking parameter \(q\) in Fig. 5. Again, the dynamical states and the bifurcation curves are similar to those in Fig. 1. Phase diagram for \(q=0.1\) and \(\gamma_{1}/\gamma_{2}=0.6\) is depicted in Fig. 5(a). For most values of \(\omega_{0}/\gamma_{2}\), there is a transition from the incoherent state to the synchronized stationary state via the standing wave state and there is no bistability for \(\gamma_{1}<\gamma_{2}\). However, there is a transition from the incoherent state to the synchronized stationary state in a large range of \(\omega_{0}/\gamma\in(0,1)\) and the emergence of bistable states for \(\gamma_{1}>\gamma_{2}\) as depicted in Fig. 5(b) for \(\gamma_{1}/\gamma_{2}=1.2\). It is evident that bistable states emerge even for low values of the symmetry breaking coupling when \(\gamma_{1}>\gamma_{2}\). Note that bistable states emerge even for \(\gamma_{1}<\gamma_{2}\) but for a large strength of the symmetry breaking coupling (see Fig. 5(c) for \(q=1\) and \(\gamma_{1}/\gamma_{2}=0.6\)). 
The spread of the bistable states increases for \(q=1\) and \(\gamma_{1}/\gamma_{2}=1.2\) as illustrated in Fig. 5(d). Thus, larger \(\gamma_{1}/\gamma_{2}\) and \(q\) favor the emergence of the bistable states. Phase diagrams in the \((q,\varepsilon/\gamma_{2})\) parameter space is depicted in Figs. 6(a) and 6(b) for \(\gamma_{1}/\gamma_{2}=0.6\) and \(1.2\), respectively, and for \(\omega_{0}/\gamma_{2}=1\). The dynamical states and the bifurcation curves are similar to those in Fig. 1. There is a transition from the incoherent state to the synchronized stationary state via the standing wave state for small values of \(q\) (see Fig.. 6(a)) similar to that in Fig. 5(a). However, for larger values of \(q\) multistability between the standing wave and the synchronized stationary state emerges (dark shaded region in the inset) in addition to the above dynamical transition. For \(\gamma_{1}>\gamma_{2}\), there a transition from the incoherent state to the standing wave state along with the bistability among them in a rather narrow range of \(q\in(0,0.4)\) as a function of \(\varepsilon/\gamma_{2}\) as shown in inset of Fig. 6(b). For \(q>0.4\), there is a transition from the incoherent state to the synchronized stationary state with the onset of bistability (light grey shaded region) between them. Phase diagrams in the \((q,\omega_{0}/\gamma_{2})\) parameter space is depicted in Figs. 6(c) and 6(d) for \(\gamma_{1}/\gamma_{2}=0.6\) and \(1.2\), respectively, for \(\varepsilon/\gamma_{2}=2.5\). There is a transition from the synchronized stationary state to the standing wave state as a function of \(\omega_{0}/\gamma_{2}\) for \(\gamma_{1}<\gamma_{2}\) (see Fig. 6(c)) via the homoclinic bifurcation curve. Both the bistable states emerge when \(\gamma_{1}>\gamma_{2}\) as shown in Fig. 6(c) for \(\gamma_{1}=1.2\). ## 5 Conclusions We have considered a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks the rotational symmetry of the Kuramoto model. The strength of the symmetry breaking coupling is found to play a key role in the manifestation of the dynamical states and their transitions along with the onset of bistability among the observed dynamical states in the phase diagram. A typical phase diagram of the Kuramoto model is transformed into a typical phase diagram of the Winfree mode for the unit value of the strength of the symmetry breaking coupling thereby bridging the dynamics of both the Kuramoto and Winfree models. Large values of the strength of the symmetry breaking coupling favor the manifestation of bistable regions and synchronized stationary state in a large region of the phase diagram. The dynamical transitions in the bistable region are characterized by an abrupt (first-order) transition in both the forward and reverse traces. Phase diagrams of both the Kuramoto and Winfree models resemble each other for symmetric bimodal frequency distribution except for the regime shifts and the degree of the spread of the dynamical states and bistable regions. Nevertheless, for asymmetric bimodal frequency distribution one cannot observe the bistable states for low values of the strength of the symmetry breaking coupling when \(\gamma_{1}<\gamma_{2}\). In contrast, bistable states emerge even for \(\gamma_{1}<\gamma_{2}\) for a large strength of the symmetry breaking coupling. 
Larger \(\gamma_{1}/\gamma_{2}\) and larger \(q\) favor the emergence of the bistable states in the case of the asymmetric bimodal frequency distribution. A large \(\omega_{0}\), and consequently a large degree of heterogeneity, facilitates the spread of the incoherent and standing wave states in the phase diagram for a low strength of the symmetry breaking coupling. However, a large \(q\) promotes the spread of the synchronized stationary state and bistable regions in the phase diagram despite the degree of heterogeneity in the frequency distribution. We have deduced the low-dimensional evolution equations for the complex order parameters using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We have also deduced the Hopf, pitchfork, and saddle-node bifurcation curves from the low-dimensional evolution equations for the complex order parameters. The homoclinic bifurcation curve is obtained from the XPPAUT software. Simulation results obtained from the original discrete set of equations agree well with the analytical bifurcation curves. We sincerely believe that our results will shed more light on and enhance our current understanding of the effects of symmetry breaking coupling in phase models and bridge the dynamics of two distinctly different phase models, which are far from reach otherwise. ## 6 Acknowledgements The work of V.K.C. is supported by the DST-CRG Project under Grant No. CRG/2020/004353 and DST, New Delhi for computational facilities under the DST-FIST program (SR/FST/PS-1/2020/135) to the Department of Physics. M.M. thanks the Department of Science and Technology, Government of India, for providing financial support through an INSPIRE Fellowship No. DST/INSPIRE Fellowship/2019/IF190871. S.G. acknowledges support from the Science and Engineering Research Board (SERB), India under SERB-TARE scheme Grant No. TAR/2018/000023 and SERB-MATRICS scheme Grant No. MTR/2019/000560. He also thanks ICTP - The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy for support under its Regular Associateship scheme. DVS is supported by the DST-SERB-CRG Project under Grant No. CRG/2021/000816. **Data Availability Statement**: No data are associated with the manuscript. The data sets of the current study are available from the corresponding author on reasonable request.
We construct a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks its rotational symmetry, resulting in a variant of the Winfree model. Consequently, we observe the characteristic features of the phase diagrams of both the Kuramoto and the Winfree models depending on the strength of the symmetry breaking coupling for a unimodal frequency distribution. For a symmetric bimodal frequency distribution, the phase diagrams of the two models resemble each other over a range of the symmetry breaking coupling strength, except for region shifts and differences in the degree of spread of the macroscopic dynamical states and of the bistable regions. The dynamical transitions in the bistable states are characterized by abrupt (first-order) transitions. For an asymmetric bimodal frequency distribution, the onset of the bistable regions depends on the degree of the asymmetry.
2309.00046
A formation mechanism for "Wrong Way" Radio Relics
Radio Relics are typically found to be arc-like regions of synchrotron emission in the outskirts of merging galaxy clusters, bowing out from the cluster center. In most cases they show synchrotron spectra that steepen towards the cluster center, indicating that they are caused by relativistic electrons being accelerated at outwards traveling merger shocks. A number of radio relics break with this ideal picture and show morphologies that are bent the opposite way and show spectral index distributions which do not follow expectations from the ideal picture. We propose that these `Wrong Way' Relics can form when an outwards travelling shock wave is bent inwards by an in-falling galaxy cluster or group. We test this in an ultra-high resolution zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral Cosmic Ray model. This allows us to study not only the synchrotron emission at colliding shocks, but also their synchrotron spectra to address the open question of relics with strongly varying spectral indices over the relic surface.
Ludwig M. Böss, Ulrich P. Steinwandel, Klaus Dolag
2023-08-31T18:00:01
http://arxiv.org/abs/2309.00046v2
# A formation mechanism for 'Wrong Way' Radio Relics ###### Abstract Radio Relics are typically found to be arc-like regions of synchrotron emission in the outskirts of merging galaxy clusters, bowing out from the cluster center. In most cases they show synchrotron spectra that steepen towards the cluster center, indicating that they are caused by relativistic electrons being accelerated at outwards traveling merger shocks. A number of radio relics break with this ideal picture and show morphologies that are bent the opposite way and show spectral index distributions which do not follow expectations from the ideal picture. We propose that these 'Wrong Way' Relics can form when an outwards travelling shock wave is bent inwards by an in-falling galaxy cluster or group. We test this in an ultra-high resolution zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral Cosmic Ray model. This allows us to study not only the synchrotron emission at colliding shocks, but also their synchrotron spectra to address the open question of relics with strongly varying spectral indices over the relic surface. Extragalactic radio sources - Cosmic rays - Galaxy clusters + Footnote †: journal: ApJL 0000-0002-8870-788X]Ludwig M. Boss 0000-0002-4880-788X]Ulrich P. Steinwandel 0000-0002-4880-788X]Klaus Dolag ## 1 Introduction Radio Relics are roughly Mpc-size regions of radio emission in galaxy clusters, typically with an arc-like morphology, which shows strong polarisation (typically \(\sim 30\%\) at 1.4 GHz, e.g. Rajpurohit et al., 2022), steep integrated radio spectra (\(L_{\nu}\propto\nu^{\alpha}\), where \(\alpha\sim-1.1\)) and a steepening of these spectra towards the cluster center (see van Weeren et al., 2019, for a recent review). They are typically associated with ongoing mergers between massive galaxy clusters (see e.g. Ensslin et al., 1998; Roettiger et al., 1999; Ensslin & Bruggen, 2002; Bruggen et al., 2012; Brunetti and Jones, 2014). These mergers dissipate a large fraction of their potential energy in the form of shocks which heat the intra-cluster medium (ICM) to \(\sim 10^{8}\) K. This can be observed as thermal X-ray emission of the fully ionized plasma (e.g. Bohringer and Werner, 2010, for a review). A smaller part of the shock energy is dissipated into the acceleration of Cosmic Ray (CR) electrons and protons in a process called "diffusive shock acceleration" (DSA, see e.g., Bell, 1978, 1983; Blandford and Ostriker, 1978; Drury, 1983, the latter for a review). In this process (supra-)thermal particles cross a shock front and are scattered by MHD turbulence from the downstream of the shock back into the upstream. They gain energy at every crossing until their gyro-radii are large enough to escape from the acceleration region or they are advected away in the downstream of the shock. Hybrid and PIC plasma simulations of shock fronts show that this process can efficiently accelerate protons in low-\(\beta\) supernova shocks (e.g. Caprioli and Spitkovsky, 2014; Caprioli et al., 2018; Caprioli et al., 2020; Pohl et al., 2020, the latter for a review) and high-\(\beta\) structure formation shocks (e.g., Ryu et al., 2019; Ha et al., 2023). For electrons it is found that this process is harder to trigger, as their gyro-radii are smaller at equivalent magnetic field strength and with that it is more difficult to start a cyclical DSA process and with that efficient acceleration of thermal electrons to the GeV energies expected from synchrotron emission by radio relics. 
They require an efficient pre-acceleration process such as (stochastic) shock-drift acceleration (SDA), or a seed population stabilized against cooling, to efficiently part-take in a DSA process (see e.g., Guo et al., 2014; Park et al., 2015; Kang et al., 2019; Tran and Sironi, 2020; Kobzar et al., 2021; Amano and Hoshino, 2022; Tran et al., 2023). On top of that the acceleration efficiency is found to be dependent on the shock obliquity, the angle between shock propagation and magnetic field vector. Typically it is found that protons are more efficiently accelerated at quasi-parallel shocks (see e.g., Kang and Ryu, 2013; Caprioli and Spitkovsky, 2014; Ryu et al., 2019), while electrons are more efficiently accelerated at quasi-perpendicular shocks (e.g., Guo et al., 2014; Kang et al., 2019; Ha et al., 2021; Amano and Hoshino, 2022). The results from small-scale plasma simulations have been adopted in cosmological simulations to model emission originating from structure formation shocks (see e.g., Hoeft et al., 2008; Pfrommer, 2008; Pfrommer et al., 2007, 2008, 2017; Skillman et al., 2013; Vazza et al., 2012, 2016; Wittor et al., 2017; Banfi et al., 2020; Wittor, 2021; Ha et al., 2023). However, the efficiencies found in plasma simulations are not sufficient to explain the high synchrotron brightness of radio relics (see Botteon et al., 2020, for a recent discussion). Recent observations of radio relics show not only bright arc-like structures, but also more complex morphologies such as S-shapes (e.g., de Gasperin et al., 2022), flat relics with varying thickness (e.g., van Weeren et al., 2016; Rajpurohit et al., 2020, 2020) and filamentary stuctures (e.g. Trasatti et al., 2015; Rajpurohit et al., 2022; Chibueze et al., 2023). There is also a small set of radio relics that show a curvature which points in the "wrong" direction. Instead of the typical outward-bent (convex) shape of the relic, away from the cluster center, they show an inward-bent (concave) morphology. Examples of these relics can be found in the _Ant Cluster_ (PSZ2 G145.92-12.53) (Botteon et al., 2021), PSZ2 G186.99+38.65 (Botteon et al., 2022), _Source D1_ in Abell 3266 (Riseley et al., 2022), SPT-CL J2023-5535 (HyeongHan et al., 2020), Abell 168 (Dwarakanath et al., 2018) and the southern relic in Ciza J2242.8+5301 (e.g., van Weeren et al., 2010; Stroe et al., 2013, 2016; Hoang et al., 2017; Di Gennaro et al., 2018). They can show steep synchrotron spectra which would indicate Mach numbers of the underlying shocks that are in disagreement with the critical Mach numbers found to be required to efficiently accelerate CR electrons (e.g., Kang et al., 2019) and some of their spectra are better fit by broken power-laws rather than a single one hinting towards overlapping (re-)acceleration processes as shown in Riseley et al. (2022) and inititally also in Owen et al. (2014); Trasatti et al. (2015); Parekh et al. (2022). However, follow-up observations indicate that there is no spectral break in the majority of these relics after all (Benson et al., 2017; Rajpurohit et al., 2022), making the source _D1_ the only relic so far that shows this peculiar spectral behaviour. The southern relic of Ciza J2242.8+5301 shows additional strong variations of the synchrotron slope, which makes it hard to explain in the context of DSA at a single shock front (see discussion in Di Gennaro et al., 2018). 
Cosmological simulations of galaxy clusters show that mergers between clusters are not isolated events and that merger shocks can deform as they expand into the highly complex and turbulent ICM (e.g. Hoeft et al., 2008; Skillman et al., 2013; Wittor et al., 2017; Nuza et al., 2017). In this work we propose that a possible formation mechanism for these 'Wrong Way' relics (as they are referred to in Riseley et al., 2022) is the collision of an outwards travelling shock front with an in-falling substructure. We investigate this scenario in the sibling simulation of an ultra-high resolution MHD simulation of a \(M_{\rm vir}\approx 1.3\times 10^{15}M_{\odot}\) galaxy cluster introduced in Steinwandel et al. (2023), where we attached a population of CR protons and electrons to every resolution element of our simulation. This effectively turns every particle into a tracer particle for CRs, while also accounting for feedback by the CR component on the thermal gas. We resolve these populations with a spectral resolution of 12 bins for protons and 48 bins for electrons over a range of 6 orders of magnitude in momentum. The distribution function of the CRs is updated on-the-fly at every timestep of the simulation according to the method presented in Boss et al. (2023). This allows us to study CR electron injection at colliding shocks and the subsequent cooling of the relativistic electron population. To the best of our knowledge, this simulation is the first of its kind. This work is structured as follows: In Sec. 2 we describe the simulation code, CR model and initial conditions used in this work. In Sec. 3 we study the 'Wrong Way' Relic (WWR) found in the simulation and its origin. Sec. 4 contains a discussion of our findings and a comparison to observed systems. Finally, Sec. 5 contains our conclusions and an outlook on future work. ## 2 Methods The simulation used in this work was carried out with the Tree-SPMHD code OpenGadget3. OpenGadget3 is a derivative of Gadget2(Springel, 2005) with improvements to the hydro and gravity solvers as well as additional physics modules. The SPH solver is updated as described in Beck et al. (2016) to include higher order kernels and their bias correction (see Dehnen and Aly, 2012) and artificial viscosity as well as physical conduction to improve the mixing behavior and shock capturing of SPH (e.g. Price, 2012; Hopkins, 2013; Hu et al., 2014). Magnetohydrodynamics (MHD) has been implemented by Dolag and Stasyszyn (2009), with updates to include non-ideal MHD in the form of constant (physical) diffusion and dissipation presented in Bonafede et al. (2011). Conduction is modelled via a conjugate gradient solver (Petkova and Springel, 2009; Arth et al., 2014; Steinwandel et al., 2020), with the conductivity suppressed to 5 per cent of the Spitzer value. We adopt a Wendland C4 kernel (Wendland, 1995, 2004) with 200 neighbors and bias correction as suggested by Dehnen and Aly (2012). We employ the on-the-fly spectral CR model Crescendo introduced in Boss et al. (2023) to model the time evolution of CR protons and electrons in every resolution element of our simulation.
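For reference, the kernel choice above can be written as a short function. The following is a minimal sketch of the 3D Wendland C4 kernel with the Dehnen & Aly (2012) normalization, assuming the compact-support convention \(W(r,h)=0\) for \(r\geq h\); the support-radius convention and the bias-correction term used inside OpenGadget3 may differ.

```python
import numpy as np

def wendland_c4_3d(r, h):
    """3D Wendland C4 kernel with compact support radius h (W = 0 for r >= h).

    Normalization 495 / (32 pi h^3) follows Dehnen & Aly (2012); the exact
    convention used inside OpenGadget3 may differ.
    """
    q = np.asarray(r, dtype=float) / h
    w = np.zeros_like(q)
    inside = q < 1.0
    qi = q[inside]
    w[inside] = (1.0 - qi) ** 6 * (1.0 + 6.0 * qi + (35.0 / 3.0) * qi ** 2)
    return 495.0 / (32.0 * np.pi * h ** 3) * w

# Sanity check: the kernel should integrate to unity over its support.
r = np.linspace(0.0, 1.0, 100001)
f = 4.0 * np.pi * r ** 2 * wendland_c4_3d(r, 1.0)
print(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # ~1.0
```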
The time evolution of distributions of CRs in the absence of CR transport, diffusion in momentum space and catastrophic losses can be described by \[\frac{Df(p,\mathbf{x},t)}{Dt} =\left(\frac{1}{3}\nabla\cdot\mathbf{u}\right)p\frac{\partial f( p,\mathbf{x},t)}{\partial p} \tag{1}\] \[+\frac{1}{p^{2}}\frac{\partial}{\partial p}\left(p^{2}\sum_{l}b_ {l}f(p,\mathbf{x},t)\right)\] (2) \[+j(\mathbf{x},p,t), \tag{3}\] where we used \(\frac{Df}{Dt}=\frac{\partial f}{\partial t}+\mathbf{u}\cdot\nabla f\) due to OpenGadget3 being a Lagrangian code. The right side of Eq. 1 describes changes due to adiabatic compression or expansion of the gas the CRs are confined in, Eq. 2 describes energy losses and Eq. 3 is the source term. We represent \(f(p,\mathbf{x},t)\) as piece-wise powerlaws in momentum space with 2 bins/dex for protons and 8 bins/dex for electrons in the dimensionless momentum range \(\hat{p}\equiv\frac{p_{i}}{m_{i}c}\in[0.1,10^{5}]\), where \(p_{i}\) and \(m_{i}\) refer to the momentum and mass for protons and electrons, respectively. The distribution function is updated at every timestep following the two-moment approach as introduced in Miniati (2001) by computing CR number and energy changes per bin. Adiabatic changes are accounted for at every timestep via the density change within a SPH particle. We model energy losses of electrons due to synchrotron emission and inverse Compton scattering off CMB photons. As a source term we employ the DSA parametrisation by Kang and Ryu (2013) for the dependency on sonic Mach number (\(\eta(\mathcal{M}_{s})\)), which allows for DSA at shocks beyond a critical Mach number \(\mathcal{M}_{s}>2\) and saturates at a maximum efficiency of \(\eta_{\rm max}\approx 0.2\). In addition to that we use the model by Pais et al. (2018) for the dependency of CR acceleration efficiency on shock obliquity (\(\eta(\theta_{B})\)). Ultimately we divert a fraction \[\eta_{\rm pot}=\eta(\mathcal{M}_{s})\times\eta(\theta_{B}) \tag{4}\] of the entropy change over the shock into the CR component. We detect the shock properties on-the-fly in the simulation with the shock finder introduced by Beck et al. (2016) with improvements to compute the shock obliquity as the angle between the pressure gradient within the kernel (which we treat as the shock normal \(\hat{\mathbf{n}}\)) and the magnetic field vector upstream of the shock \(\mathbf{B}_{u}\). The slope of the injected CR spectrum follows linear DSA theory and we use a fixed electron to proton injection ratio of \(K_{e/p}=0.01\). The CR component exerts feedback on the thermal gas by solving the pressure integral \[P_{\rm CR,c}=\frac{4\pi\,c}{3}\;a^{4}\int\limits_{p_{\rm min}}^{p_{\rm cut}}dp \;p^{3}f(p) \tag{5}\] between the minimum momentum \(p_{\rm min}\) represented by the CR population and the cutoff of the distribution function \(p_{\rm cut}\). We start the CR injection at \(z=4\) to avoid too strong time-constraints due to very efficient high-momentum energy losses of CR electrons. Synchrotron emission is calculated directly from the evolved electron distribution function (see Appendix A for details). We use a zoomed-in initial condition of a massive galaxy cluster with a virial mass of \(M_{\rm vir}\approx 1.3\times 10^{15}\;M_{\odot}\) from the sample presented in Bonafede et al. (2011). 
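As an illustration of how the CR feedback term in Eq. (5) can be evaluated for the piece-wise power-law representation described above, the sketch below integrates \(p^{3}f(p)\) analytically bin by bin on a logarithmic momentum grid. It works in dimensionless momentum with an arbitrary spectral normalization, and the comoving factor \(a^{4}\) and the factor of \(c\) are omitted; the bin layout and normalization conventions of Crescendo itself may differ.

```python
import numpy as np

def cr_pressure(p_bounds, f_left, slopes, prefactor=1.0):
    """Evaluate prefactor * integral of p^3 f(p) dp for a piece-wise power law.

    p_bounds : bin boundaries, shape (Nbins + 1,)
    f_left   : f(p) at the left boundary of each bin, shape (Nbins,)
    slopes   : power-law slope q per bin, with f(p) = f_left * (p / p_left)^(-q)
    """
    p_lo, p_hi = p_bounds[:-1], p_bounds[1:]
    slopes = np.asarray(slopes, dtype=float)
    expo = 4.0 - slopes
    with np.errstate(divide="ignore", invalid="ignore"):
        per_bin = np.where(
            np.isclose(expo, 0.0),
            f_left * p_lo**slopes * np.log(p_hi / p_lo),              # q = 4 special case
            f_left * p_lo**slopes * (p_hi**expo - p_lo**expo) / expo,  # generic bin
        )
    return prefactor * per_bin.sum()

# Example: 48 electron bins (8 bins/dex) over p_hat in [0.1, 1e5], single slope q = 4.5.
p_bounds = np.logspace(np.log10(0.1), np.log10(1e5), 6 * 8 + 1)
slopes = np.full(p_bounds.size - 1, 4.5)
f_left = (p_bounds[:-1] / p_bounds[0]) ** (-4.5)   # continuous power law, f(p_min) = 1
print(cr_pressure(p_bounds, f_left, slopes, prefactor=4.0 * np.pi / 3.0))
```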
The cluster is up-sampled to 250x base resolution, which corresponds to a mass resolution of \(M_{\rm gas}\approx 8.7\times 10^{5}M_{\odot}\) and \(M_{\rm DM}\approx 4.7\times 10^{6}M_{\odot}\) for gas and dark matter particles, respectively. We reach a maximum resolution for a gas particle of \(h_{\rm sml,min}\approx 1\) kpc with a gravitational softening of \(\epsilon=0.48\,h^{-1}\) ckpc. The cluster was selected from a lower-resolution dark matter-only simulation of a Gpc volume, which is large enough to provide a large sample of systems above a few \(10^{15}M_{\odot}\). The parent simulation used a WMAP7 cosmology with \(\Omega_{0}=0.24\), \(\Omega_{\Lambda}=0.76\), \(\Omega_{\rm baryon}=0.04\), \(h=0.72\) and \(\sigma_{8}=0.8\), which we also adopt for the present simulation. We start the simulation at redshift \(z=310\) and seed a constant magnetic field in x-direction with \(B_{0}=10^{-14}\) G (see Steinwandel et al., 2021, for a study of the impact of the choice of \(B_{0}\)). The initial conditions of this cluster at this resolution have been used to study the interaction between internal and accretion shocks in Zhang et al. (2020, 2020), and its magnetic field has been studied in Steinwandel et al. (2023). ## 3 Results ### Merger Geometry The 'Wrong Way' relic in our simulation originates from a triple-merger at \(z\sim 0.35-0.2\). We show the schematic of the merger geometry in Fig. 1. A high-velocity merger with a 1:10 mass ratio between impactor (\(M_{2}\approx 10^{14}M_{\odot}\)) and target (\(M_{1}\approx 10^{15}M_{\odot}\)) with a large impact parameter of \(b\approx 500\) kpc drives two shock waves. These shocks follow the canonical picture (e.g. Fig. 7 in van Weeren et al., 2019) of the lighter merging partner (\(M_{2}\)) driving a strong bow-shock (\(S_{2}\) in our schematic), while the heavier merging partner (\(M_{1}\)) drives a weaker counter shock (\(S_{1}\)) in the in-fall direction of the lighter partner. This counter shock is subsequently impacted by a third merger partner (\(M_{3}\)), a group of galaxies with a total mass of \(M_{3}\approx 2\times 10^{13}M_{\odot}\), which ultimately passes through the shock surface and falls into the larger merger partner (\(M_{1}\)) in a low-impact parameter merger with \(b\approx 35\) kpc. The impact of the group deforms the weaker counter shock (\(S_{1}\)) first from a convex shape at \(z=0.32\) to a concave shape at \(z=0.29\) and subsequently to a v-like shape pointing towards the cluster center at \(z=0.27\), which also leads to a complex superposition of different parts of the original shock surface with different Mach numbers, as well as differently aged cosmic ray electron populations. Due to our system being a single, isolated cluster, we cannot make any predictions for the minimum critical mass of an in-falling sub-structure that is able to deform such a shock front, or the statistical frequency of such an event. We leave this question for future work with cosmological boxes, to allow for a statistical analysis. ### The Simulated 'Wrong Way' Radio Relic Fig. 2 from top to bottom shows the time evolution of the counter shock \(S_{1}\) in the \(xz\)-plane of the simulation and its passage through morphologies matching various 'Wrong Way' relics. The bottom row shows the same relic as the row above in the \(yz\)-plane.
From left to right we show the X-ray surface brightness, CR electron energy contained in the part of the potentially synchrotron bright population with \(E>1\) GeV, the synchrotron surface brightness at 1.4 GHz and the slope of the synchrotron spectrum obtained by fitting a power law to the surface brightness at 144 MHz and 1.4 GHz. These images are obtained by mapping the SPH data to a 2D grid following the algorithm described in Dolag et al. (2005) with a pixel-size of \(\Delta_{\rm pix}\approx 1\) kpc. This corresponds to a resolution of \(\theta_{\rm pix}\approx 0.24''\) at \(z=0.27\) and is thus significantly below current observational limits. To accompany this, we show the distribution of sonic Mach number \(\mathcal{M}_{s}\) of the different panels of Fig. 2 in Fig. 3 and the synchrotron spectra in Fig. 4. In Fig. 5 we show the histograms of pixels in Fig. 2 as a function of synchrotron intensity and spectral slope.
Figure 1: A simplified schematic of the merger geometry that produces the 'Wrong Way' relic. The initial merger between \(M_{1}\) and \(M_{2}\) drives two shocks, the weaker of which is subsequently impacted by a third substructure \(M_{3}\). This impact deforms parts of the outwards traveling shock \(S_{1}\) and produces the WWR \(S_{3}\).
Figure 2: From left to right: X-ray surface brightness, CR electron energy of electrons with \(E>1\) GeV, synchrotron surface brightness at 1.4 GHz and the slope of the synchrotron spectrum between 144 MHz and 1.4 GHz. The upper three rows show the time evolution of the in-falling group in the \(xz\)-plane of the simulation; the lowest row shows the same relic at \(z=0.27\) in the \(yz\)-plane. To obtain the images the SPH data is mapped to a grid with a resolution of \(\Delta_{\rm pix}\approx 1\) kpc, which corresponds to a resolution of \(\theta_{\rm pix}\approx 0.24''\) at \(z=0.27\).
At \(z=0.32\), in the top row of Fig. 2, we see the acceleration of CR electrons at the counter shock of the main merger event. Fig. 3 shows that only a fraction of the shocked particles are above the critical Mach number \(\mathcal{M}_{s,\rm crit}=2\) and can thus accelerate CRs. We can readily identify the part of the shock surface that accelerates CRs in the center of the images, as it is the most synchrotron bright part and shows a relatively flat synchrotron spectrum. These CRs are accelerated at the contact surface between the outwards traveling shock and the atmosphere of the in-falling halo. The steeper parts of the spectrum in the upper right corner of the images indicate that these electrons have been accelerated at earlier times of the shock propagation and have been freely cooling since. This is also evident in the synchrotron spectrum in Fig. 4, which shows a strong break above \(\nu\sim 200\) MHz. The counter shock is initially not very synchrotron bright, akin to the counter-shock in Abell 3667 (e.g. de Gasperin et al., 2022) or CIZA J2242.8+5301 (Di Gennaro et al., 2018). At \(z=0.29\) the collision between the outwards traveling counter shock and the bow-shock of the in-falling group (\(M_{3}\)) increases the sonic Mach number and with that the acceleration efficiency of the shock (see e.g. Kang, 2021; Inchingolo et al., 2022, for studies of multi-shock scenarios). Fig. 3 shows that while the majority of the shocked particles remain sub-critical, the shock develops a second Mach number peak around Mach 3.
This significantly increases the synchrotron surface brightness at the contact surface of the shocks, flattens the synchrotron spectrum to almost the theoretical limit of DSA and erases the spectral break. A spectral slope of \(\alpha_{100\,\mathrm{MHz}}^{1\,\mathrm{GHz}}=-0.66\) indicates \(\mathcal{M}_{s}\approx 3.6\), in good agreement with the underlying Mach number distribution. This injection domination can also be seen in Fig. 5, where the images at \(z=0.29\) show a strong bump in synchrotron slopes between \(|\alpha|\approx 0.7-0.55\) and a small bump in synchrotron intensity around \(I_{\nu}\approx 10^{-16.5}-10^{-16}\) erg s\({}^{-1}\) Hz\({}^{-1}\) cm\({}^{-2}\). The in-falling sub-structure deforms the outwards traveling shock towards a relic pointing "the wrong way", similar to the source _D1_ observed by Riseley et al. (2022). In the case of our relic the flat spectrum part is further extended, which we attribute to the shock being further bent inwards, compared to _D1_. In Fig. 7 we rotate the image into the merger plane and can see how the aged, steep-spectrum population disappears behind the newly injected electrons at the inward bent relic. Comparing to the same rotation at \(z=0.32\) indicates that the best morphological fit to _D1_ would lie between \(z=0.32\) and \(z=0.29\); however, no output is available at this time. The collision between shock waves is also visible in our X-ray image (left panel, second row in Fig. 2), which matches the detection of a shock in X-ray by Sanders et al. (2022). The in-fall scenario proposed here also produces a radio relic-like structure within \(r_{500}\), which is unlikely in the classical picture of radio relics (e.g., Vazza et al., 2012). At \(z=0.27\), as the in-falling halo passes through the outwards traveling shock, its own bow-shock collides with the older shock, causing the relic to deform further into a v-shaped morphology, such as in the counter shock to the _sausage relic_ (e.g. Stroe et al., 2013; Di Gennaro et al., 2018), or the relic in Abell 2256 (Rajpurohit et al., 2022). The Mach number distribution over the shock surface has become smoother at this point, with the bulk of the shock being sub-critical; however, the total number of particles with \(\mathcal{M}_{s}>2\) has increased compared to the relic at \(z=0.29\). This leads to efficient acceleration at a part of the shock surface, visible in increased synchrotron surface brightness and flatter synchrotron spectra. In general, however, the relic is dominated by cooling and adiabatic compression. This becomes visible in Fig. 5, where the synchrotron intensity is increased in the \(10^{-17.5}-10^{-16.5}\) erg s\({}^{-1}\) Hz\({}^{-1}\) cm\({}^{-2}\) range. However, spectra are generally steeper, indicating that the increase in intensity is partly due to injection and partly due to adiabatic compression of an already cooling electron population. A morphological best match for the relic in Abell 2256 is expected to lie between the \(z=0.29\) and \(z=0.27\) snapshots shown here; however, the simulation output for this time is not available.
Figure 3: Histograms of the sonic Mach number \(\mathcal{M}_{s}\) for the three output times shown in Fig. 2. The colors correspond to the different times and the dotted line indicates the critical Mach number beyond which CR (re-)acceleration can occur in our model.
Figure 4: Time evolution of the synchrotron spectrum. Colors correspond to the times in Fig. 2 and 3. Dashed lines and labels show the spectral slope in the indicated frequency ranges.
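The quoted conversion between the injection spectral index and the sonic Mach number (\(\alpha=-0.66\leftrightarrow\mathcal{M}_{s}\approx 3.6\)) follows from standard linear DSA theory for a \(\gamma=5/3\) gas. A minimal sketch of this inversion, together with the two-point power-law fit between 144 MHz and 1.4 GHz used for the slope maps, is given below; it is an illustration of the standard relations, not the exact analysis pipeline of this work.

```python
import numpy as np

def spectral_slope(i_low, i_high, nu_low=144e6, nu_high=1.4e9):
    """Two-point power-law slope alpha, with I_nu ~ nu^alpha."""
    return np.log(i_high / i_low) / np.log(nu_high / nu_low)

def mach_from_injection_slope(alpha, gamma=5.0 / 3.0):
    """Sonic Mach number from the injection spectral index (linear DSA).

    Uses alpha = (3 - q) / 2 with q = 3r / (r - 1) and the shock compression
    ratio r = (gamma + 1) M^2 / ((gamma - 1) M^2 + 2).
    """
    q = 3.0 - 2.0 * alpha                    # momentum power-law slope
    r = q / (q - 3.0)                        # compression ratio
    m2 = 2.0 * r / (gamma + 1.0 - r * (gamma - 1.0))
    return np.sqrt(m2)

print(mach_from_injection_slope(-0.66))      # ~3.7, close to the M_s ~ 3.6 quoted above
```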
For the lower panels of Fig. 2 we rotate the image by \(90^{\circ}\), as this projection more closely resembles the observations of Di Gennaro et al. (2018). The collision of two shocks as shown here leads to a superposition of multiple DSA-like events due to strong variations of the Mach number over the shock surface. This leads to strong variations of synchrotron surface brightness and spectral shape between the regions of the shock surface where efficient (re-)acceleration can take place and the regions that are dominated by cooling and adiabatic compression. These variations can also be seen in the integrated spectrum in Fig. 4, where the lower frequency end of the spectrum is strongly injection dominated and the high frequency end of the spectrum shows a significant steepening beyond \(\nu\sim 1\) GHz in the cooling dominated part. This result is valid for the two lower panels of Fig. 2, as we are dealing with integrated quantities. We have confirmed this by comparing the integrated spectrum obtained based on the data directly from the SPH particles as well as integrated maps under three different projections. We find no qualitative difference between these approaches.
Figure 5: Histograms of the synchrotron surface brightness _(left)_ and spectral slope _(right)_ obtained from the images in Fig. 2. As before, colors correspond to the times in Fig. 2 and 3. The dotted line indicates the same relic at \(z=0.27\) in the \(yz\)-plane of the simulation, as in the lowest row of Fig. 2.
## 4 Discussion To discuss our findings we will compare the morphologies in chronological order to similar observed systems. Although the number of observed WWRs is still small, the recent discoveries made with ASKAP and MeerKAT indicate that, with increased sensitivity, a number of new WWRs may be detected over time. ### Abell 520 Before the onset of the WWR morphology, our cluster undergoes an internal merger with an in-falling group in the cluster periphery. This group falls into the cluster on a similar trajectory to that of the cluster driving the current shock waves and is therefore in the path of the weaker counter-shock of the ongoing merger. A similar setup is observed in Abell 520 by Hoang et al. (2019). They detect a shock with Mach number \(\mathcal{M}_{\rm SW}=2.6^{+0.3}_{-0.2}\) propagating in the SW direction, with a weaker counter-shock moving with \(\mathcal{M}_{\rm NE}=2.1\pm 0.2\) in the NE direction. Along the NE diagonal, Chandra observations by Andrade-Santos et al. (2017) indicate in-falling matter along a similar path as the ongoing merger. This shows that the geometric setup is possible, albeit rare, and that Abell 520 could host a WWR in the future. ### Abell 3266 At \(z=0.32-0.29\) our WWR resembles the one observed by Riseley et al. (2022), at a distance of \(\sim 1\) Mpc from the center of Abell 3266. Their relic is very faint and shows a very steep spectral index of \(\alpha=-2.76\) in the part that is observable at frequencies above 1043 MHz. The lower frequency end of the relic spectrum is significantly flatter, with a spectral index of \(\alpha\approx-0.72\). This indicates that there is a re-acceleration process which is superimposed on an older cooling spectrum, even though the very steep spectrum still poses problems under this assumption (see discussion in Riseley et al., 2022). X-ray observations with eROSITA (Sanders et al., 2022) show a number of discrete sources in close proximity to _D1_, but no extended sources that could indicate an infalling group.
The extended sources \(X4\) and \(X6\) that lie in (projected) close proximity to _D1_ have significantly higher photometric redshifts (\(z=0.532\) for \(X4\) and \(z=0.907\) for \(X6\)) than Abell 3266 (\(z=0.0589\), Struble and Rood, 1999), which shows that these are background sources and not in-falling groups. However, as can be seen in the left panel of Fig. 2 at \(z=0.27\), depending on the projection, it is not necessarily easy to distinguish the in-falling structure in the X-ray emission. ### PSZ2 G145.92-12.53 Another concave radio relic detected in PSZ2 G145.92-12.53 similarly shows an increase in X-ray flux with concave morphology in close proximity to the relic (see Fig. 1 in Botteon et al., 2021). We note that there is also a detected peak in X-ray surface brightness akin to the one observed in the _Rim_ region in PSZ2 G145.92-12.53, indicating that similar effects may be at play there, as briefly discussed by the authors. ### Abell 2256 As previously discussed, at \(z=0.27\) in the \(xz\) plane, corresponding to the third row in Fig. 2, our WWR closely resembles the steep radio relic found in Abell 2256. Rajpurohit et al. (2022) note an association between the relic and the source F, without an X-ray counterpart (see also Owen et al., 2014; Ge et al., 2020). This could hint at the group having passed through the shock earlier in a process similar to the one discussed here. The superposition of injected and cooling parts of the shock surface can also be seen in the color-color plots in Rajpurohit et al. (2022), which indicate that the relic consists of a number of overlapping substructures. The (re-)acceleration of particles in the turbulent downstream of \(S_{1}\), which becomes the upstream of \(S_{3}\), also produces filamentary structures seen in the relativistic electron component (second panels from left in Fig. 2) as observed in Abell 2256 (see e.g. Dominguez-Fernandez et al., 2021; Wittor et al., 2023, for detailed studies of surface structures in radio relics). The observed relic shows very little spectral steepening, making it difficult to discern if it was bent against its propagation direction. The little steepening that is detected, however, points towards the cluster center, akin to our simulated relic, which may indicate a similar process to the one we discuss here. ### CIZA J2242.8+5301 In the case of the counter shock to the sausage relic in CIZA J2242.8+5301, the reason for the strong variations of synchrotron spectral index is still under debate (see discussion in Di Gennaro et al., 2018). In the context of our merger scenario, these variations can be understood as follows: As the outwards traveling shock (\(S_{1}\)) collides with the bow-shock of the in-falling substructure (\(M_{3}\)) and is deformed, the resulting shock surface (\(S_{3}\)) shows strong variations in sonic Mach number. Wherever the sonic Mach number is \(\mathcal{M}_{s}<2\), our DSA model allows no CR (re-)acceleration, and the pre-existing population is simultaneously cooling due to synchrotron and IC losses and being adiabatically compressed by the subcritical shock. This leads to a continuously steepening synchrotron spectrum, while the adiabatic compression leads to an increase in synchrotron surface brightness. In regions of the shock surface where \(\mathcal{M}_{s}>2\) there is ongoing (re-)acceleration of CR electrons, which leads to a flatter spectrum than for the cooled population.
This superposition of cooling- and acceleration-dominated areas on the shock surface leads to a strong variation of the synchrotron spectral index, as can be seen in the bottom row of Fig. 2. ## 5 Conclusion In this work we showed the first results of a high-resolution simulation of a massive galaxy cluster with an on-the-fly spectral Fokker-Planck solver to study the acceleration, advection and aging of CR electrons in cosmological zoom-in simulations. We applied this simulation to study a rare form of radio relics that show inward-bent, instead of the typical outward-bent, morphologies. Our results can be summarized as follows:
* In complex merging systems with multiple ongoing mergers, collisions between the bow-shocks of in-falling substructures and outwards traveling merger shocks can deform the latter in a way that is morphologically very similar to the currently reported _'Wrong Way' relics_.
* These collisions between shocks increase the Mach number at the contact surface of the shocks and thereby boost the (re-)acceleration efficiency of CR electrons. This makes their detection easier than that of the cooled, outwards moving shock.
* The inclusion of an on-the-fly spectral treatment of CR electrons allows us to reproduce the large variance of the synchrotron spectral slope across the relic surface. This variance stems from the co-existence of an aged CR electron population in the outwards traveling shock and newly injected CRs at the high Mach number regions of the shock surface.
Future work will expand our sample of radio relics by performing further zoom-in simulations of the cluster set presented in Bonafede et al. (2011) at 250x base resolution and will study the surface structure and polarisation properties of these relics, as well as \(\gamma\)-ray emission by the accelerated protons. ## Acknowledgments We thank the anonymous referee for their helpful and constructive report, which improved the quality of this manuscript. We thank Nadia Boss for the illustration of Fig. 1. LMB would like to thank Barbel Koribalski, Sebastian Nuza and Harald Lesch for helpful discussions. LMB and KD acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311 and support for the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 860744. UPS is supported by the Simons Foundation through a Flatiron Research Fellowship at the Center for Computational Astrophysics of the Flatiron Institute. The Flatiron Institute is supported by the Simons Foundation. The simulations were carried out at the CCA clusters "rusty" located in New York City as well as the cluster "popeye" located at the San Diego Supercomputing Center (SDSC). Additional computations were carried out on the c2pap cluster at the Leibniz Rechenzentrum under the project pn36ze. Simulations were carried out with OpenGadget3 (Springel, 2005; Dolag & Stasyszyn, 2009; Beck et al., 2016; Groth et al., 2023). Processing the simulation output was done with GadgetIO.jl (Boss & Valenzuela, 2023) and GadgetUnits.jl (Boss, 2023a). Mapping SPH data to a Cartesian grid was performed with SPHKernels.jl (Boss, 2023b) and SPHtoGrid.jl (Boss, 2023c). These packages use the Julia programming language (Bezanson et al., 2014). Figures were generated with matplotlib (Hunter, 2007), using PyPlot.jl.
The analysis scripts to reproduce this work are available at Zenodo via Boss (2023d). Data of the relevant simulation domain are available at Zenodo via Boss et al. (2023).
Radio relics are arc-like regions of synchrotron emission found in the outskirts of merging galaxy clusters, bowing out from the cluster center. In most cases their spectra steepen towards the cluster center, indicating that they are produced by relativistic electrons accelerated at outwards traveling merger shocks. A number of radio relics deviate from this ideal picture, showing morphologies that are bent the opposite way and spectral index distributions that do not follow the expected behaviour. We propose that such 'Wrong Way' relics can form when an outwards traveling shock wave is bent inwards by an in-falling galaxy cluster or group. We test this scenario in an ultra-high resolution zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral cosmic ray model, which allows us to study not only the synchrotron emission at colliding shocks but also their synchrotron spectra.
2309.10255
RGB-based Category-level Object Pose Estimation via Decoupled Metric Scale Recovery
While showing promising results, recent RGB-D camera-based category-level object pose estimation methods have restricted applications due to the heavy reliance on depth sensors. RGB-only methods provide an alternative to this problem yet suffer from inherent scale ambiguity stemming from monocular observations. In this paper, we propose a novel pipeline that decouples the 6D pose and size estimation to mitigate the influence of imperfect scales on rigid transformations. Specifically, we leverage a pre-trained monocular estimator to extract local geometric information, mainly facilitating the search for inlier 2D-3D correspondence. Meanwhile, a separate branch is designed to directly recover the metric scale of the object based on category-level statistics. Finally, we advocate using the RANSAC-P$n$P algorithm to robustly solve for 6D object pose. Extensive experiments have been conducted on both synthetic and real datasets, demonstrating the superior performance of our method over previous state-of-the-art RGB-based approaches, especially in terms of rotation accuracy. Code: https://github.com/goldoak/DMSR.
Jiaxin Wei, Xibin Song, Weizhe Liu, Laurent Kneip, Hongdong Li, Pan Ji
2023-09-19T02:20:26
http://arxiv.org/abs/2309.10255v2
# RGB-based Category-level Object Pose Estimation via Decoupled Metric Scale Recovery ###### Abstract While showing promising results, recent RGB-D camera-based category-level object pose estimation methods have restricted applications due to the heavy reliance on depth sensors. RGB-only methods provide an alternative to this problem yet suffer from inherent scale ambiguity stemming from monocular observations. In this paper, we propose a novel pipeline that decouples the 6D pose and size estimation to mitigate the influence of imperfect scales on rigid transformations. Specifically, we leverage a pre-trained monocular estimator to extract local geometric information, mainly facilitating the search for inlier 2D-3D correspondence. Meanwhile, a separate branch is designed to directly recover the metric scale of the object based on category-level statistics. Finally, we advocate using the RANSAC-P\(n\)P algorithm to robustly solve for 6D object pose. Extensive experiments have been conducted on both synthetic and real datasets, demonstrating the superior performance of our method over previous state-of-the-art RGB-based approaches, especially in terms of rotation accuracy. **Code: [https://github.com/goldoak/DMSR](https://github.com/goldoak/DMSR).** ## I Introduction Accurately estimating the position and the orientation of an object in 3D space is critical for perceiving surrounding environments, thus has broad applications in computer vision [1, 2] and robotics communities [3, 4]. Previous efforts in 6D object pose estimation have largely focused on the instance-level setting [5, 6] where a corresponding CAD model is given for each object of interest. In such cases, object pose estimation can be simplified to correspondence matching between 3D models and observations. However, the requirement of prior CAD models for each entity is not only hard to achieve in real-world scenarios, but also makes it difficult to scale well to complicated scenes. To relax the need for instance-level 3D models, category-level object pose estimation [7, 8, 9] has been intensively investigated recently. A special kind of coordinates proposed by [10] are generated from Normalized Object Coordinate Spaces (NOCS) to align different instances into a normalized canonical space in a category-wise manner. The prediction of rotation and translation is thus performed within the same category as the target object. This line of research mainly deals with the intra-class shape variation [11, 12, 13]. Once a set of inlier 3D-3D correspondences is found, accurate pose and size can be estimated using similarity transformation. Promising results have been obtained by these methods, yet heavily relying on depth observations to achieve accurate pose and size estimation limits their broader applications. Therefore, RGB-based category-level object pose estimation [14, 15] has emerged in this direction. They can be a viable alternative when depth measurements are not available, such as on VR headsets and mobile devices. However, directly predicting object pose from a single RGB image remains a challenging task nowadays as it is a highly ill-posed problem, suffering from inherent scale ambiguity. [16] and [15] propose first to reconstruct the absolute depth map of objects, thereby transferring the problem to the familiar RGBD setting. Then again, 3D-3D correspondences are used to solve for the object pose and size at the same time. 
Though intuitive, predictions of metric depth maps are often inaccurate, leading to unsatisfactory results. This also makes these methods hard to generalize to different scenes. Unlike previous RGB-based methods expecting to implicitly learn the metric scale through absolute depth prediction, we instead choose to decouple object pose and size estimation for RGB-based category-level pose estimation. The motivation behind this choice is obvious: we want to explicitly cut off the propagation of errors from metric scale to 6D pose estimation, especially for rotation. To this end, we introduce a novel pipeline for RGB-based category-level object pose and size estimation, consisting of feature extraction, 2D-3D correspondence learning, metric scale recovery, and pose estimation. Specifically, the (relative) depth and normal are taken as additional input to exploit instance-level geometric information. For each RGB image, they are generated by readily available models pre-trained on the large-scale Omnidata dataset [17]. Inspired by [8] and [18], we employ a small transformer to extract distinctive features for subsequent correspondence learning. Meanwhile, we design a simple but effective scale prediction branch to recover the metric scale of the target object by leveraging category-level statistics. Finally, we compute the 6D object pose using a traditional P\(n\)P solver inside a RANSAC loop to deal with potential outliers. Our main contributions can be summarized as follows:
1. We propose a decoupled framework for RGB-based category-level object pose estimation, where 6D object pose and size are computed separately to reduce the negative influence of imperfect scale predictions on rigid transformations.
2. The metric scale of the target object is directly recovered from the network on the basis of category-level priors and is subsequently used for robust pose estimation in a RANSAC-P\(n\)P algorithm.
3. We extend the CAMERA25 and REAL275 datasets to include depth and normal predictions from pre-trained models. Our method is simple but effective, achieving significant improvements over previous RGB-based category-level pose estimation methods, particularly in terms of rotation accuracy.
## II Related Work Image-based object pose estimation has a long-standing history and was first addressed at the instance level, based on exact object shape priors. The category-level case, as addressed in this work, has only been studied much later. A good overview of the field, including but not limited to category-level object pose estimation, can be found in recent survey papers such as the work of [19]. From a high-level point of view, methods can be classified depending on whether or not they make use of depth readings during inference time. ### _RGBD-based Category-level Object Pose Estimation_ A seminal contribution towards category-level pose estimation is presented by [10], who introduce the Normalized Object Coordinate Space (NOCS). In this space, all objects within a category exhibit the same alignment as defined by object-specific directions. [10] introduce a framework that relies on Mask R-CNN [20] to detect objects and an additional head to predict the projection of these coordinates into the image plane (denoted the NOCS-map). Using a scale-aware implementation of Procrustes alignment [21], the object pose can be found. [22] propose an alternative approach in which both image and depth readings are fed to a variational auto-encoder to directly regress the object position and orientation.
Later on, [18] then introduces the idea of Shape Prior Deformation (SPD), an approach in which a canonical point set is fed to a deformation network. The deformed canonical point set is again aligned with the input depth readings (or a derived representation) for object pose estimation. [23] extend the architecture by a recurrent network in which the canonical point set is deformed iteratively. [8] finally introduce SGPA, a transformer-based architecture to more effectively adjust the canonical point set. Though requiring RGB-D input, SGPA serves as a strong inspiration for the method proposed in this paper. [24] make use of consistency-based losses in order to improve learning by self-supervision. An alternative line of works that also in return produce category-level object pose, but is more computationally demanding, is given by analysis-by-synthesis methods that iteratively deform and align an implicit shape model until convergence. ### _RGB-based Category-level Object Pose Estimation_ One of the first analysis-by-synthesis approaches was indeed proposed for the pure RGB case. [14] notably make use of an implicit neural representation for iterative shape synthesis and object pose estimation. A disadvantage of the method however consists of its inability to recover correctly scaled object pose results, as the method only evaluates alignment in the image. [25] propose a straightforward regression network for object pose and shape trained on synthetic data. While they also propose a depth-supported refinement strategy to close the synthetic-to-real gap, the majority of their results focus on the synthetic case. Another well-performing method is introduced by [16], who propose a two-branch network to predict a metrically scaled object mesh and a NOCS-map. After rendering a depth map from the reprojected mesh, a concluding geometric alignment step returns the object pose. Finally, [15] propose OLD-Net, a variation that directly predicts object-level depths by incorporating global position hints and shape priors. In a word, the focus of existing attempts to recover metrically scaled results consists of recovering an absolute depth map from the input measurements. It is well-understood that this is a challenging problem. However, in this work, we propose a novel RGB camera-based category-level pose estimation framework that employs a decoupled scale estimation branch in order to scale a canonical object point set prior back to the metric space, without interfering with 6D object pose estimation. While we still depend on a relative depth prediction, depth uncertainties are further prevented from entering the final pose estimation by using perspective rather than generalized Procrustes alignment. ## III Approach ### _Overview_ Given a single RGB image, our goal is to estimate the 6DoF object pose (3D rotation and 3D translation) with respect to the camera coordinate frame, as well as the metric scale of the object. As shown in Figure 1, the pipeline mainly consists of four parts: feature extraction, 2D-3D correspondence learning, metric scale recovery, and pose estimation. We first pre-process each image using models pre-trained on the large Omnidata to generate local geometric cues as described in Section III-B and then feed them into the network together with category-level shape priors and RGB images to further extract higher-level features for 2D-3D correspondence learning (Section III-C). 
A separate branch for metric scale recovery is described in Section III-D, which is essential for the final pose estimation with P\(n\)P algorithm (Section III-E). ### _Leveraging Geometric Information_ The network is supposed to find as many inlier 2D-3D correspondences as possible. However, there is no depth measurement in the RGB-only setting. To compensate for the missing instance-level geometric information, we turn to use an off-the-shelf monocular estimator to predict 2.5D sketches [26], e.g. depth and normal maps. In particular, [17] has built an Omnidata pipeline to train strong models on a large-scale dataset, which achieves remarkable performance regarding both accuracy and generalization ability. Therefore, in a preprocessing step, we feed each image patch \(I\in\mathbb{R}^{H\times W\times 3}\) through a depth estimation model \(f_{d}\) and surface normal estimation model \(f_{n}\) to obtain \(\hat{I}_{d}=f_{d}(I),\hat{I}_{n}=f_{n}(I)\) where \(\hat{I}_{d}\in\mathbb{R}^{H\times W\times 1}\) and \(\hat{I}_{n}\in\mathbb{R}^{H\times W\times 3}\) are depth and normal maps, respectively. It is worth noting that \(\hat{I}_{d}\) is a relative depth cue, merely depicting the geometric relationship among object pixels. Without knowing the metric scale, it is meaningless to compute 3D point cloud back-projected from \(\hat{I}_{d}\). Also, the inaccurate depth predictions on the edges of the object further prohibit the direct application of point clouds. ### _2D-3D Correspondence Learning_ Our network takes cropped image patch \(I\in\mathbb{R}^{H\times W\times 3}\), category-level shape prior \(P_{r}\in\mathbb{R}^{N\times 3}\), and predicted 2.5D sketches \(\hat{I}_{d}\in\mathbb{R}^{H\times W\times 1}\) and \(\hat{I}_{n}\in\mathbb{R}^{H\times W\times 3}\) as inputs. Note that the corresponding class label and object mask can be obtained from an off-the-shelf instance segmentation network, such as Mask R-CNN [20]. These inputs are then fed into a feature extraction module to obtain the semantic feature \(F_{I}\in\mathbb{R}^{M\times d}\), geometric features \(F_{r}\in\mathbb{R}^{N\times d}\) and \(\hat{F_{o}}\in\mathbb{R}^{M\times d}\), respectively. The \(M\) pixel features in \(F_{I}\) are randomly selected to reduce the number of correspondences. In order to mitigate the interference of noise, we apply channel-wise concatenation on \(\hat{I}_{d}\) and \(\hat{I}_{n}\), and then use a fully convolutional network \(g\), following an encoder-decoder architecture, to generate instance-specific geometric features \(\hat{F_{o}}=g([\hat{I}_{d}|\hat{I}_{n}])\) where \([\cdot|\cdot]\) denotes channel-wise concatenation. Intuitively, we expect \(\hat{F_{o}}\) can benefit from the integration of these two kinds of geometric cues. This argument is later supported by our experiments in Section IV-D. Following [18] and [8], we take advantage of category-level prior models on which instance-specific deformation is performed to reconstruct shape details. Hence, the resulting NOCS coordinates will represent the desired shape of the input object. To better guide the deformation of the shape prior, \(F_{r},\hat{F_{o}},F_{I}\) are considered as the query, key and value of a transformer such that the semantic information in \(F_{I}\) can be adaptively propagated to prior \(F_{r}\), according to the structural similarity between \(F_{r}\) and \(\hat{F_{o}}\). 
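A minimal PyTorch-style sketch of this prior adaptation step is given below: the prior feature \(F_{r}\) acts as query, the instance geometric feature \(\hat{F_{o}}\) as key and the semantic feature \(F_{I}\) as value, after which the adapted prior is concatenated with \(F_{r}\) to regress the deformation field \(D\) and, in parallel, the correspondence matrix \(C\). Layer sizes, the head used for \(C\), and the omission of the key-point projection described next are illustrative simplifications, not the exact architecture of [8].

```python
import torch
import torch.nn as nn

class PriorAdapter(nn.Module):
    """Sketch of shape-prior adaptation via cross-attention (names illustrative)."""

    def __init__(self, d=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.deform_head = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 3))
        self.match_head = nn.Linear(2 * d, d)   # scores pixel features against the prior

    def forward(self, f_r, f_o, f_i, p_r):
        # f_r: (B, N, d) prior features, f_o / f_i: (B, M, d) pixel features, p_r: (B, N, 3)
        f_r_adapted, _ = self.attn(query=f_r, key=f_o, value=f_i)        # (B, N, d)
        d_field = self.deform_head(torch.cat([f_r, f_r_adapted], -1))    # deformation D
        p_nocs = p_r + d_field                                           # P_nocs = P_r + D
        pix = self.match_head(torch.cat([f_i, f_o], -1))                 # (B, M, d)
        corr = torch.softmax(pix @ f_r_adapted.transpose(1, 2), dim=-1)  # (B, M, N)
        return p_nocs, corr

model = PriorAdapter()
f_r, f_o, f_i = torch.rand(2, 1024, 128), torch.rand(2, 960, 128), torch.rand(2, 960, 128)
p_nocs, corr = model(f_r, f_o, f_i, torch.rand(2, 1024, 3))
print(p_nocs.shape, corr.shape)   # torch.Size([2, 1024, 3]) torch.Size([2, 960, 1024])
```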
In practice, \(F_{r}\) and \(\hat{F_{o}}\) are projected to a lower dimension such that only object keypoints are taken into account to improve computation efficiency. The adapted prior feature \(\hat{F}_{r}\) is then concatenated with \(F_{r}\) to further predict the deformation field \(D\), thereby resulting in the final reconstructed model \(P_{nocs}=P_{r}+D\). Meanwhile, the matching network outputs the correspondence matrix \(C\in\mathbb{R}^{M\times N}\) using the concatenation of \(F_{I}\) and \(\hat{F_{o}}\). ### _Metric Scale Recovery_ The point cloud back-projected from the observed depth map is the key to reliable object pose and size estimation for those RGBD-based methods, as the underlying metric scale of depth measurements can significantly ease the difficulty of locating the object in 3D space. However, we make use of the scale-agnostic depth prediction \(\hat{I}_{d}\) and try to solve for the metric scale in a separate way. Direct regression of metric scales seems to be the simplest solution, but the metric scales are not fixed into a stable range, making the network hard to train. Instead, we take advantage of category-level statistics and reduce the problem to an offset regression task. Similar to the generation of category-level shape prior \(P_{r}\) (i.e. mean shape), we also compute the mean scale \(s_{r}\in\mathbb{R}\) for each category and set it as an anchor for scale prediction to avoid large deviations from desired results. Then, we pass the concatenation of extracted global features \(F_{r}^{g},\hat{F_{o}^{g}},F_{I}^{g}\in\mathbb{R}^{m}\) from \(F_{r},\hat{F_{o}},F_{I}\) through a multi-layer perceptron (MLP) to recover the relative scale offset \[\Delta s=\mathrm{MLP}([F_{r}^{g}|\hat{F_{o}^{g}}|F_{I}^{g}])\quad s.t.\quad \hat{s}=s_{r}+s_{r}\times\Delta s. \tag{1}\] Here, we employ the \(\mathcal{L}_{1}\) loss to supervise the training of our scale recovery branch \[L_{scale}=\|\Delta s_{gt}-\Delta s\|,\quad\Delta s_{gt}=\frac{s_{gt}-s_{r}}{s_{r}}. \tag{2}\] In this way, we manage to obtain the metric scale for subsequent pose estimation. ### _Perspective-n-Point for Pose Estimation_ Finally, we use the P\(n\)P algorithm, which takes 2D-3D correspondences as input, to solve for the object pose. Unlike the similarity transformation, PnP is incapable of handling NOCS coordinates without scale information. Therefore, we need to scale the NOCS coordinates before feeding them into the solver \[\hat{R},\hat{t}=\mathrm{PnP}(P_{I},\hat{s}\cdot CP_{nocs})=\mathrm{PnP}(P_{I},\hat{s}\cdot C(P_{r}+D)) \tag{3}\] where \(P_{I}\in\mathbb{R}^{M\times 2}\) is the corresponding 2D pixel coordinates on the image patch, \(\hat{R}\in SO(3)\) is the rotation and \(\hat{t}\in\mathbb{R}^{3}\) is the translation.
Fig. 1: Illustration of our pipeline for RGB-based category-level object pose estimation.
### _Overall Loss Function_ Our pipeline mainly comprises two learning objectives, namely the NOCS correspondences and the relative scale offset \(\Delta s\). The overall loss function is then the weighted sum of the following two loss terms: \[L=\lambda_{1}L_{corr}+\lambda_{2}L_{scale}. \tag{4}\] Here, \(L_{corr}\) is the same loss used in [8], including the supervision on the deformation field \(D\) and the correspondence matrix \(C\), as well as the regularization term on the keypoint projection matrix. Please refer to [8] for more details of \(L_{corr}\). ### _Implementation Details_ We generate depth and normal maps using the latest DPT models [27] pre-trained on Omnidata [17] with 3D data augmentations [28].
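To make the pose-recovery step of Eqs. (1)-(3) concrete, the sketch below scales the deformed prior by the predicted metric scale \(\hat{s}=s_{r}(1+\Delta s)\) and solves for \(\hat{R},\hat{t}\) with OpenCV's RANSAC-P\(n\)P. The intrinsics, RANSAC thresholds and the toy inputs are placeholders, not the values used in this work.

```python
import cv2
import numpy as np

def recover_pose(p_nocs, corr, pix_2d, s_r, delta_s, K):
    """Scale the deformed prior and solve RANSAC-PnP (sketch of Eqs. (1)-(3)).

    p_nocs: (N, 3) deformed prior P_r + D, corr: (M, N) correspondence matrix,
    pix_2d: (M, 2) pixel coordinates, s_r: category mean scale, delta_s: offset.
    """
    s_hat = s_r + s_r * delta_s                                # Eq. (1)
    obj_pts = s_hat * (corr @ p_nocs)                          # metric 3D points, (M, 3)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts.astype(np.float32), pix_2d.astype(np.float32),
        K.astype(np.float32), None,
        iterationsCount=200, reprojectionError=5.0)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel(), inliers

# Toy check: project points with a known pose and recover it (placeholder intrinsics).
K = np.array([[577.5, 0.0, 319.5], [0.0, 577.5, 239.5], [0.0, 0.0, 1.0]])
prior = np.random.rand(300, 3) - 0.5                           # NOCS-like coordinates
rvec_gt, t_gt = np.array([0.2, -0.1, 0.3]), np.array([0.05, -0.02, 0.6])
img_pts, _ = cv2.projectPoints(0.2 * prior.astype(np.float32), rvec_gt, t_gt,
                               K.astype(np.float32), None)
R, t, _ = recover_pose(prior, np.eye(300), img_pts.reshape(-1, 2),
                       s_r=0.2, delta_s=0.0, K=K)
print(np.round(t, 3))                                          # ~[0.05, -0.02, 0.6]
```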
For each image patch, we first resize it to \(384\times 384\), and then feed it into \(f_{d}\) and \(f_{n}\) to obtain \(\hat{I}_{d}\) and \(\hat{I}_{n}\), respectively. The outputs are resized to \(192\times 192\) later on for feature extraction. Specifically, we employ a 4-layer PSPNet [29] with ResNet18 [30] backbone as \(g\). As for metric scale recovery, we implement a 3-layer MLP with hidden dimensions 512 and 128 to regress the relative scale offset. To better handle the outliers, we estimate the object pose using a traditional P\(n\)P solver inside a RANSAC loop. The two weights \(\lambda_{1},\lambda_{2}\) in the loss function (4) are set to 1.0 and 0.1, respectively. The rest of the architectural details remain the same as [8]. All experiments are conducted on two NVIDIA RTX 2080Ti GPUs. ## IV Experiments ### _Datasets_ We conduct extensive experiments on two benchmark datasets released by [10] for the category-level object pose estimation task. The CAMERA dataset is synthesized by placing CAD models on top of real backgrounds while the REAL dataset is directly collected in the real world. These two datasets contain 6 categories (i.e. bottle, bowl, camera, can, laptop and mug), and we can generate a mean shape as prior for each category following [18]. Note that CAMERA25 and REAL275 are two subsets for testing, consisting of 25k synthetic images and 2750 real images, respectively. We also extend the NOCS datasets by predicting depth and normal maps for each image patch cropped from the original \(640\times 480\) RGB image. ### _Metrics_ We adopt the widely used evaluation metrics in category-level object pose estimation for a fair comparison. The first one is 3D Intersection-Over-Union (IoU) under different thresholds. It measures the IoU between two 3D bounding boxes transformed by estimated pose and ground truth, mainly reflecting the fidelity of object size and translation. For 6D object pose, we directly compute the rotation error in degrees and translation error in centimeters. We report the mean average precision (mAP) across all categories for both metrics in the following tables. ### _Comparison against State-of-the-art Methods_ We mainly compare our proposed method with existing state-of-the-art (SOTA) RGB-based methods: Synthesis [14], MSOS [16], and OLD-Net [15]. As far as we know, these works are the SOTA methods trying to solve category-level pose estimation using a single RGB image. Please note that MSOS can use one pixel depth value per object to boost the performance; we only report their RGB setting for a fair comparison. The quantitative results are shown in Table I. It is evident that our method significantly outperforms previous RGB methods in terms of rotation accuracy (i.e. \(10^{\circ}\)), demonstrating its strong ability in correspondence learning. More specifically, our method is 7.4% and 22.5% higher than OLD-Net w.r.t. the \(10^{\circ}\) metric on the CAMERA25 and REAL275 datasets, respectively. For translation and 3D IoU metrics, our method also achieves competitive results, especially for the strictest IoU75 metric on REAL275, where our method is 3.1% higher than MSOS. This proves the effectiveness of our proposed metric scale recovery.
Fig. 2: Qualitative results of our method on CAMERA25 (top) and REAL275 (bottom). The predictions and ground truths are shown by red and green bounding boxes, respectively.
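The rotation and translation errors reported here can be computed as sketched below, assuming the ground-truth and predicted poses are given as rotation matrices and translation vectors; the symmetry handling used in the standard NOCS evaluation for categories such as bottles and cans is omitted for brevity.

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic rotation error in degrees between two rotation matrices."""
    cos_angle = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def translation_error_cm(t_pred, t_gt):
    """Euclidean translation error in centimeters (inputs assumed in meters)."""
    return 100.0 * np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt))

def pose_within(R_pred, t_pred, R_gt, t_gt, max_deg=10.0, max_cm=10.0):
    """True if a prediction satisfies e.g. the 10 degree / 10 cm criterion."""
    return (rotation_error_deg(R_pred, R_gt) <= max_deg
            and translation_error_cm(t_pred, t_gt) <= max_cm)
```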
The overall performance of our method under the \(10^{\circ}\)10cm metric hence surpasses other approaches by a large margin (4.0% and 13.8% higher than OLD-Net on both datasets) due to the improvements in both rotation and translation estimation. The qualitative results of our method are shown in Figure 2. We visualize the estimated object pose and size in projected 3D bounding boxes (red) and object coordinate axes. Ground truths are also presented in green bounding boxes for reference. It can be seen from the figures that our method is able to generate considerably tight bounding boxes around objects, thereby reflecting its faithfulness in metric scale recovery. Moreover, through the orientation of projected axes, we can discover the prediction consistency across categories. We further present a more detailed analysis of average precision (AP) against different error thresholds in Figure 3. It clearly shows the different performances of each category regarding 3D IoU, rotation error, and translation error. Generally speaking, the performance on CAMERA25 is better than REAL275, partially because of the insufficient training data and cluttered backgrounds in real scenarios. As for evaluation metrics, our method outperforms the baseline NOCS [10], a RGBD-based method, in terms of rotation error for almost all categories, except for the camera class. By comparing different categories w.r.t. 3D IoU and translation error, we can discover that the performance of the bottle and camera class is often inferior to that of other object categories. This probably results from the large shape variation within these two classes. We will discuss it in detail in Section IV-D.2. ### _Ablation Studies_ #### Iv-D1 Different Input Modalities Apart from the single RGB image and category-level shape prior, we also input additional geometric information into our pipeline to help build 2D-3D correspondences. Therefore, we conduct an ablation study on different formats of geometric information to justify the design choice of our method. Here, we mainly test on three representations, i.e. 3D point cloud, depth map, and normal map. In Figure 4, we show some examples of generated 2.5D sketches for all 6 categories on both CAMERA25 and REAL275. These 2.5D geometric cues can roughly restore the instance-level information according to the input image patch. The point cloud used for evaluation is then back-projected from the (relative) depth map using intrinsics provided by NOCS datasets, and we further normalize it into a unit sphere to ease the difficulty of training. We also change the feature extractor to PointNet++ [31] to cope with unordered point sets. We report the results in Table II. Despite point clouds directly coming from predicted depth maps, they have quite different performances in terms of the \(10^{\circ}\) metric, where the depth-only setting is 3.2% and 12.9% higher on CAMERA25 and REAL275, respectively. And this is probably caused by the sparse sampling of depth maps. When dealing with 2.5D representations, the large receptive fields of CNN can enable certain robustness to noise, thus extracting more distinctive features. While the sampling of depth maps, instead, magnifies the impact of outliers due to the sparsity of point clouds. Fig. 3: Average precision vs. error thresholds on CAMERA25 (top) and REAL275 (bottom). 
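For reference, the point-cloud input used in this ablation can be obtained as sketched below: the (relative) depth map is back-projected with the camera intrinsics and the resulting points are normalized into a unit sphere. The centering convention and the intrinsics shown are assumptions for illustration only.

```python
import numpy as np

def backproject_to_unit_sphere(depth, K, mask=None):
    """Back-project a (relative) depth map and normalize the points to a unit sphere.

    depth: (H, W) depth values, K: 3x3 intrinsics, mask: optional boolean (H, W).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0 if mask is None else (mask & (depth > 0))
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1)
    pts = pts - pts.mean(axis=0)                     # center (assumed convention)
    return pts / np.linalg.norm(pts, axis=1).max()   # fit inside the unit sphere

K = np.array([[577.5, 0.0, 319.5], [0.0, 577.5, 239.5], [0.0, 0.0, 1.0]])  # placeholder
pts = backproject_to_unit_sphere(np.random.rand(480, 640) + 0.5, K)
print(pts.shape, np.linalg.norm(pts, axis=1).max())   # (N, 3)  1.0
```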
Furthermore, we discover that the combination of depth and normal maps has a positive influence on correspondence learning, resulting in 2.2% and 2.7% improvements on the \(10^{\circ}\) metric on CAMERA25 and REAL275, respectively. This eventually validates our design choice of input modalities for additional geometric information. #### Iv-D2 Effect of Scale Recovery To evaluate the effectiveness of our proposed metric scale recovery strategy, we report the results with and without the usage of relative scale offset in Table III. It is well-known that recovering the metric scale from a single image is a highly ill-posed problem due to the ignorance of depth information. However, we manage to restore this value by leveraging the category-level statistics (i.e. mean scale) collected from datasets. The first row shows a particular case where only the mean scale of each category is used to object pose estimation. It already starts giving promising results compared to previous RGB-based methods, yet fine-tuning with relative scale offset (see the second row) can further boost the performance. We can observe consistent improvements on CAMERA25 while the performance slightly degrades on REAL275 regarding IoU50 and 10cm metrics. To better understand the possible reasons behind this phenomenon, we calculate the standard deviation of ground truth metric scales w.r.t. the mean scale of each category \[\sigma_{s}=\sqrt{\frac{\sum(s_{gt}-s_{r})^{2}}{k}}\] where \(k\) denotes the number of instances within each category. This is illustrated in Figure 5. It clearly shows that synthetic objects and real objects in both datasets have roughly similar standard deviations, except for the bowl and camera classes. As for these two categories, the variation around the mean scale on the REAL dataset is higher than the CAMERA dataset. Besides, there is not enough training data in the REAL dataset so the network training is dominated by synthetic data. All of these factors combined may account for the degradation. And last but not least, the bottle, camera, and laptop classes have top3 highest standard deviations due to the large intra-class dissimilarity. This partially explains why they are inferior to other categories in category-level pose estimation. ### _Limitations_ Through extensive experiments on the NOCS benchmark, we have witnessed a discrepancy between synthetic and real datasets. And this may be caused by several reasons. First, the quality of predicted 2.5D sketches is generally better for synthetic objects than real objects. Take the predictions in Figure 4 for example. The generated depth maps and normal maps appear blurrier on the edge of real objects due to the shadows and cluttered backgrounds. Second, real-world instances tend to have larger shape variations as indicated in Figure 5, and certain categories need to be taken care of to deal with the inherent intra-class dissimilarity in category-level object pose estimation. Therefore, our future work will focus on compensating for the domain gap between synthetic and real data. ## V Conclusions In this paper, we have presented a decoupled approach for RGB-based category-level object pose and size estimation, where 6D rigid transformation and metric scale are computed in a separate way. This strategy benefits the 6D pose estimation a lot as we explicitly stop the error propagation resulting from imperfect scale predictions. 
We leverage rich geometric information to build reliable 2D-3D correspondences, and a direct scale regression branch is used to recover the metric scale. Through extensive experiments, our proposed method has demonstrated significant improvements over existing SOTA RGB-based approaches.
Fig. 4: Examples of generated 2.5D sketches. We show the image patch, depth map, and normal map for all 6 categories (i.e. bottle, bowl, camera, can, laptop and mug) on REAL275.
Fig. 5: The standard deviation of ground truth metric scales w.r.t. the mean scale of each category on both CAMERA and REAL datasets.
2309.07729
Imitation Learning-based Visual Servoing for Tracking Moving Objects
In everyday life collaboration tasks between human operators and robots, the former necessitate simple ways for programming new skills, while the latter have to show adaptive capabilities to cope with environmental changes. The joint use of visual servoing and imitation learning allows us to pursue the objective of realizing friendly robotic interfaces that (i) are able to adapt to the environment thanks to the use of visual perception and (ii) avoid explicit programming thanks to the emulation of previous demonstrations. This work aims to exploit imitation learning for the visual servoing paradigm to address the specific problem of tracking moving objects. In particular, we show that it is possible to infer from data the compensation term required for realizing the tracking controller, avoiding the explicit implementation of estimators or observers. The effectiveness of the proposed method has been validated through simulations with a robotic manipulator.
Rocco Felici, Matteo Saveriano, Loris Roveda, Antonio Paolillo
2023-09-14T14:07:08
http://arxiv.org/abs/2309.07729v1
# Imitation Learning-based Visual Servoing for Tracking Moving Objects ###### Abstract In everyday life collaboration tasks between human operators and robots, the former necessitate simple ways for programming new skills, while the latter have to show adaptive capabilities to cope with environmental changes. The joint use of visual servoing and imitation learning allows us to pursue the objective of realizing friendly robotic interfaces that (i) are able to adapt to the environment thanks to the use of visual perception and (ii) avoid explicit programming thanks to the emulation of previous demonstrations. This work aims to exploit imitation learning for the visual servoing paradigm to address the specific problem of tracking moving objects. In particular, we show that it is possible to infer from data the compensation term required for realizing the tracking controller, avoiding the explicit implementation of estimators or observers. The effectiveness of the proposed method has been validated through simulations with a robotic manipulator. Keywords: Visual Servoing, Imitation Learning, Visual Tracking ## 1 Introduction Today robots are not merely asked to execute tasks in controlled environments, but they must have friendly interfaces so that everyone can conveniently operate them in everyday life. In fact, given their high level of ubiquity, more and more robots are at the disposal of people with no technical expertise. As a consequence, easy control frameworks that do not require specific engineering or programming skills are urgently needed. Furthermore, modern robots operating "in the wild" need to be highly adaptive, to cope with the changes of dynamic environments. Imitation Learning (IL) [29], also known as programming by demonstrations [4] or learning from demonstrations [3], promises to avoid specific coding duties by imitating the desired behavior as performed by an expert [5]. With respect to classic control paradigms, IL is easier and more convenient for non-expert operators, as they only need to provide demonstrations of the desired robotic tasks. Among the IL approaches, Dynamical System (DS)-based methods [15, 27, 28] allow realizing the imitation strategy while ensuring stability properties. Adaptive capabilities, instead, can be realized by including exteroceptive sensing, such as vision, into the IL strategy. In particular, recent works [23, 24, 30] have explored the possibility of combining Visual Servoing (VS) [7, 8] with DS-based IL. We name such integration Imitation Learning for Visual Servoing (ILVS). Such a combination brings benefits to both techniques: on the one side, the visual perception adds adaptability to the IL scheme to cope with environmental changes; on the other, the imitation strategy allows the addition of tasks or constraints to the VS law with no specific implementation. This work aims at resorting to the ILVS paradigm to tackle the specific problem of tracking moving objects. Traditional tracking techniques need to estimate the motion of the target, e.g., specifically implementing a Kalman filter [6] or predictive controllers [13]. Instead, we provide a framework that leverages ILVS and extrapolates from demonstrations of tracking experiments the required information for adding the tracking skill to the basic VS law. In particular, we propose to use the so-called Reshaped Dynamical System (RDS) approach [28] to embed the tracking behavior into the basic VS control. The resulting learning-aided control system has been validated with robotic simulations.
## 2 Background The well-known VS technique [7, 8] employs vision to control the motion of a robot. In particular, in image-based VS, considered in this work, the objective is to zero the difference between desired and measured visual features that are directly defined on the camera image. Such visual features represent the feedback of the controller that computes camera velocities to achieve a desired task; they can be detected with standard image processing [16] or more sophisticated methods, e.g., artificial neural networks [22]. Assuming an eye-in-hand configuration, a static target, and constant desired features, the basic VS law computes the camera velocity \(\mathbf{v}\in\mathbb{R}^{6}\) with a simple reactive controller. Its objective is to nullify the visual error \(\mathbf{e}\in\mathbb{R}^{k}\) between the detected and desired visual features: \[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}, \tag{1}\] where \(\lambda\) is a positive scalar gain and \(\widehat{\mathbf{L}^{+}}\in\mathbb{R}^{6\times k}\) an approximation of the Moore-Penrose pseudoinverse of the interaction matrix [7]. Such an approximation is normally due to unknown information, such as the depth of the visual features. The simple law (1) can be augmented with other tasks or constraints to enable additional skills, by employing planning techniques [10, 17], predictive controllers [2, 20, 21, 26], and other sorts of optimization-based frameworks [1, 18, 19]. However, such approaches require careful design and implementation of the additional modules, which it is desirable to avoid for the sake of ease of use. To this end, inspired by the DS paradigm, it has been proposed to augment the skills of the basic law with an ILVS strategy [24]. In particular, by using the specific RDS method [28], one could write the augmented VS law as follows: \[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}+h\boldsymbol{\rho}(\mathbf{e}), \tag{2}\] where \(\boldsymbol{\rho}(\mathbf{e})\) is an error-dependent corrective input used to follow complex trajectories and \(h\) is a vanishing term used to suppress \(\boldsymbol{\rho}\) after a user-defined time and retrieve stability. Such an approach can be used to generate complex visual trajectories, e.g., to avoid collisions, as done in [24]. In this work, instead, we use this formulation to learn the compensation term needed to achieve the tracking of moving objects, as explained in the next section. ## 3 Method ### Problem definition The aim of our work is to enable visual tracking of moving targets while avoiding explicit programming of the required additional components of the basic law (1). Assuming a moving target, the VS law has to account for such motion [8]: \[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}-\widehat{\mathbf{L}^{+}}\frac{\partial\mathbf{e}}{\partial t}, \tag{3}\] where the second term on the right of the equation acts as a feedforward term to compensate for the error's time variation due to the target motion [8]. Ad hoc techniques can be implemented to estimate the term due to the motion of the target so that it can be inserted in (3) and compensated, e.g., with the introduction of integrators [9], feedforward terms [6, 12] or filters [13, 31]. In this work, instead, our aim is to rely on an imitation strategy to infer the compensation term of the law (3) from previous demonstrations of tracking experiments.
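To make the structure of the controllers above concrete, the following sketch (our own illustration, not the paper's code) implements the reactive laws (1) and (3) in Python; the feature vectors, the approximated pseudoinverse \(\widehat{\mathbf{L}^{+}}\) and the error-rate estimate are assumed to be provided by hypothetical helpers outside this snippet.

```python
import numpy as np

LAMBDA = 2.0  # positive scalar gain (the value used later in the experiments)

def vs_static(s: np.ndarray, s_star: np.ndarray, L_pinv: np.ndarray) -> np.ndarray:
    """Basic law (1): v = -lambda * L^+ * e, valid for a static target."""
    e = s - s_star                   # visual error between measured and desired features
    return -LAMBDA * L_pinv @ e      # 6-D camera velocity command

def vs_tracking(s: np.ndarray, s_star: np.ndarray,
                L_pinv: np.ndarray, de_dt: np.ndarray) -> np.ndarray:
    """Law (3): adds a feedforward term compensating the error variation due to target motion."""
    e = s - s_star
    return -LAMBDA * L_pinv @ e - L_pinv @ de_dt
```

The quantity `de_dt` is exactly what is usually estimated with ad hoc filters or observers, and what the proposed ILVS scheme instead learns from demonstrations.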
In particular, inspired by DS-based approaches as in (2), we treat the reshaping term \(\boldsymbol{\rho}\), to be learnt from data, as the compensation term in (3): \[\boldsymbol{\rho}=-\widehat{\mathbf{L}^{+}}\frac{\partial\mathbf{e}}{\partial t}. \tag{4}\] Therefore, our problem can be formulated as follows: learn from previous demonstrations an estimate of the compensation term \(\hat{\boldsymbol{\rho}}\) so that the VS law \[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}+\hat{\boldsymbol{\rho}}(\mathbf{e}) \tag{5}\] realizes tracking of moving objects. It is worth mentioning that (5) is formally the same as (2). However, the vanishing term \(h\) is not used in (5), since the estimate \(\hat{\boldsymbol{\rho}}\) has to be always active to perform the tracking skill. ### Dataset We assume that an "oracle" is available to provide a few demonstrations of the full desired tracking behavior. A possible oracle could be a human user, who can kinesthetically teach the robot the tracking motion, or an ideal controller in simulated environments, where all the required information is perfectly known. During the oracle's executions, data describing how the task is carried out are recorded at each timestamp. In particular, we log the evolution of the visual error, as measured on the camera image, and the corresponding velocities demonstrated to the camera in order to achieve the full desired task: \[\mathcal{D}=\left\{\mathbf{e}_{n}^{d},\mathbf{v}_{n}^{d}\right\}_{n=1,d=1}^{N,D}, \tag{6}\] where \(N\) is the number of samples and \(D\) the number of demonstrations. This dataset serves as the basis for the actual training set \(\mathcal{T}\) that is built as follows: \[\mathcal{T}=\left\{\boldsymbol{\varepsilon}_{n}^{d},\boldsymbol{\rho}_{n}^{d}\right\}_{n=1,d=1}^{N,D}, \tag{7}\] considering that \(\boldsymbol{\varepsilon}_{n}^{d}=\widehat{\mathbf{L}^{+}}\mathbf{e}_{n}^{d}\) and \(\boldsymbol{\rho}_{n}^{d}=\mathbf{v}_{n}^{d}+\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}_{n}^{d}\). Note that for all the demonstrations the value of the control gain \(\lambda\) does not change; moreover, the approximated pseudoinverse of the interaction matrix \(\widehat{\mathbf{L}^{+}}\) is assumed to be constant and equal to its value at convergence. ### Learning the compensation term Given the training dataset (7), an estimate of the compensation term can be conveniently retrieved from vision data using any regression function \(\mathbf{r}\). In particular, we train a Gaussian Mixture Model (GMM) on \(\mathcal{T}\) to estimate the velocity term needed to compensate for the motion of the target object. Then, Gaussian Mixture Regression (GMR) is used to retrieve a smooth estimate of \(\boldsymbol{\rho}\), namely \(\hat{\boldsymbol{\rho}}\). The GMR takes as input the current value of \(\boldsymbol{\varepsilon}\) and provides \(\hat{\boldsymbol{\rho}}\) as \[\hat{\boldsymbol{\rho}}=\mathbf{r}_{\mathrm{GMR}}(\boldsymbol{\varepsilon}\ |\ \mathcal{T}). \tag{8}\] Therefore, the compensation term is estimated online using (8) and inserted into the control law (5) to achieve the tracking of moving objects. ## 4 Results ### Validation setup To validate our framework, we consider a robotic experiment with the robot manipulator Franka Emika [14], which has 7 joints and an Intel RealSense D435i sensor (used as a monocular camera) mounted on the end-effector. The sensor has a field of view of 69\({}^{\circ}\times\)42\({}^{\circ}\) and a frame resolution of 1920\(\times\)1080 pixels. The robot and the environment for the experiments are simulated in CoppeliaSim [11], as shown in Fig. 1.
The goal of the experiment is to allow the robot to reach a box that moves at a constant velocity on a conveyor belt. In other terms, we set the desired features so that at convergence the robot centers the box on the image plane. The box is marked with an AprilTag marker, whose corners provide the visual features for the VS law. In particular, we use the 4 corner points of the marker as visual features (i.e., \(k=8\)). As classically done in VS, 4 points are enough to ensure robust visual feedback. At the start of the experiments, the conveyor belt accelerates from zero to 0.1 m/s and keeps the velocity constant for the rest of the experiment. The implementation of the framework has been done in Python 2.7 within the ROS [25] infrastructure. The oracle used to collect the demonstrations consists of an ideal VS controller provided with complete knowledge of the dynamics of the target, available in the simulated environment. In practice, we use the law (3) with \(\lambda=2\), and the compensation term is built from the perfect knowledge of the box velocity. The interaction matrix has been approximated by using the value of the visual features depth at the target, which is \(0.09116\,\mathrm{m}\). In total, we have collected three demonstrations of the task. If not otherwise mentioned, the same value of the gain and the same approximation of the interaction matrix are kept for the online experiments. It is worth mentioning that other teaching methodologies could be used, such as kinesthetic teaching or teleoperation. Our choice was dictated by the need for high precision in tracking the object: a tracking controller with complete knowledge, as available in simulation, provides far better performance for precise movements than a human demonstration. Furthermore, human demonstrations usually require preprocessing of the trajectories to guarantee exact convergence to the target in the feature space. The regression is carried out using a GMM with 11 components. The number of components has been set by performing a grid search. At each iteration of the controller, the framework detects new visual features and computes the new value of \(\boldsymbol{\varepsilon}\), which is used by the GMR to compute an estimate \(\hat{\boldsymbol{\rho}}\) of the compensation term that is finally inserted in the control law as in (5). The camera velocity thus computed is sent to the kinematic control of the manipulator, which transforms it into joint velocities to move the robot towards the desired tracking behavior. With this setup, multiple tests are carried out to evaluate, firstly, the learning and replication capabilities of the demonstrated target tracking tasks and, secondly, the system's ability to adapt to new scenarios and sudden changes in the environment. In the presented plots of the experiments, the trajectories saved in the demonstrations are shown with black dotted lines, whereas the execution of our ILVS framework is in blue; red dots represent the starts of the demonstrated trajectories, while the red crosses are their ends. The experiments are shown in the video accompanying the paper, available at the following link: [https://youtu.be/ORdAZDmCQsA](https://youtu.be/ORdAZDmCQsA).
Figure 1: Validation setup: the Franka Emika robot manipulator in the CoppeliaSim environment has to reach a box moving on a conveyor belt.
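As an illustration of the regression step of Eq. (8) and of the control law (5), the sketch below (our own, not the authors' code) fits a joint GMM with 11 components on training arrays `eps_train` and `rho_train` assumed to be built as in Eq. (7), and then performs Gaussian Mixture Regression to obtain \(\hat{\boldsymbol{\rho}}\) from the current \(\boldsymbol{\varepsilon}\).

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

D_IN = 6  # dimension of epsilon (and of rho) for a 6-DoF camera velocity

def fit_gmm(eps_train: np.ndarray, rho_train: np.ndarray, n_components: int = 11):
    """Fit a joint GMM on the pairs (epsilon, rho) of the training set T (Eq. 7)."""
    data = np.hstack([eps_train, rho_train])
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(data)

def gmr(gmm: GaussianMixture, eps: np.ndarray) -> np.ndarray:
    """Conditional mean E[rho | epsilon] of the joint GMM, i.e. the GMR estimate of Eq. (8)."""
    weights = np.zeros(gmm.n_components)
    cond_means = np.zeros((gmm.n_components, D_IN))
    for k in range(gmm.n_components):
        mu_e, mu_r = gmm.means_[k][:D_IN], gmm.means_[k][D_IN:]
        S = gmm.covariances_[k]
        S_ee, S_re = S[:D_IN, :D_IN], S[D_IN:, :D_IN]
        weights[k] = gmm.weights_[k] * multivariate_normal.pdf(eps, mu_e, S_ee)
        cond_means[k] = mu_r + S_re @ np.linalg.solve(S_ee, eps - mu_e)
    weights /= weights.sum()
    return weights @ cond_means  # rho_hat to be added to the standard VS command as in (5)
```

The returned \(\hat{\boldsymbol{\rho}}\) is simply added to the standard reactive command \(-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}\), exactly as prescribed by (5).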
### Comparison with the standard VS controller The first set of experiments aims at comparing the behavior of the standard VS law without compensation term, as in (1), with different values of the gain \(\lambda\), against our proposed ILVS strategy. The results of this comparison are shown in Fig. 2. As expected, even if the standard VS law manages to approach the box, due to its motion it never manages to center it on the image plane. Indeed, a constant error between the current state of the features (denoted in red and numbered from zero to three in Fig. 2) and their desired position (in green) is kept at steady state. Such error is reduced by increasing the value of \(\lambda\) from 1 to 5, but it cannot be nullified. It is indeed noteworthy that extremely high gain values cannot provide a reasonable solution to the tracking problem, since they would introduce instability in the control system [7, 8]. Unlike the standard controllers, our ILVS manages to infer from data the required information to compensate for the box motion. As shown in Fig. 2d, ILVS provides the robot with the capability to approach the target, reach convergence, and keep the camera above the box at the desired pose for the duration of the experiment. Indeed, in this case, the measured visual features match their desired counterparts at steady state. Fig. 3 shows, for the same experiment, a qualitative evaluation of the trajectories of the visual features from the demonstrations (black dotted lines), and the trajectories executed by the ILVS strategy (in blue). One can observe the ability of the system to accurately replicate the demonstrated trajectories when starting from a known location (the same as the demonstrated ones). The corresponding quantitative results of this experiment are presented in terms of average Root-Mean-Square Error (RMSE)5 and its standard deviation, measuring the accuracy of the predicted camera position and velocity, and the predicted feature position w.r.t. the corresponding quantities contained in the demonstrations. In particular, the average RMSE regarding the predicted visual features position is \(22\pm 11\) pixels. For the camera positions and the linear camera velocities, the obtained results are \(33\pm 24\) mm and \(69\pm 71\) mm/s, respectively. Footnote 5: RMSE values rounded up to the nearest whole number.
Figure 2: Comparison between three versions of the standard VS controller and the proposed ILVS strategy.
Figure 3: ILVS experiment with the same initial condition as in the demonstration: visual features trajectories as in the demonstrations and executed by our method.
### Target tracking experiments with unseen initial conditions The second set of experiments is carried out to test the adaptability of the system w.r.t. unseen initial conditions, i.e., when the starting orientation or position of the camera is different from those demonstrated in the training dataset. We tested the framework with incremental levels of difficulty. In the first experiment of this set, the initial conditions are analogous (but not identical, unlike the experiment shown in Fig. 2d and Fig. 3) to the ones in the training dataset. As illustrated in Fig. 4 (left), the starting points of the experiment in the image plane are in the vicinity of the starting points (red dots) of the demonstrations (black dotted lines), since the initial position of the camera has been slightly moved away from the one in the demonstrations. The starting orientation of the camera is, instead, the same as in the demonstrations.
Given similar initial conditions, as expected, the system executes the task (blue lines in the plot) without any particular difficulties. Fig. 4 (right) shows the time evolution of the visual error for each of the four features (blue lines), which is kept to zero after a transient time for the duration of the experiment; the average visual error among all features (black line) is also depicted. Four snapshots of this experiment are presented in Fig. 5, showing the manipulator approaching the object and tracking the target moving on the conveyor belt during all its motion. The second experiment of this set aims to evaluate the effectiveness of the approach in handling unseen conditions. In particular, at the beginning of the experiment, the camera is oriented as in the demonstrations but has a substantial difference in position. The large initial positional offset is well visible in the plot of Fig. 6, where the initial value of the visual features is far off from the demonstrations. Nevertheless, the visual features trajectories shown in Fig. 6 (left) demonstrate that the robot manages to successfully achieve the VS task, as the current values of the features converge to their desired ones, as also demonstrated in the dataset. Similarly, target tracking performance can also be evaluated from the time evolution of the visual error presented in Fig. 6 (right). From this plot, one can see that the visual error is kept to zero after a transient time, even while the box continues moving on the conveyor belt. Four snapshots of this ILVS experiment are shown in Fig. 7: the manipulator reaches the box and keeps tracking it for the whole experiment. The last two snapshots show how the robot manages to keep the box at the center of the image throughout the experiment, accommodating the motion induced by the conveyor belt. The third experiment is meant to test at its greatest degree the handling of unseen initial conditions. As can be seen in Fig. 8 (left) from the position of the features in the image plane, the end-effector of the manipulator at the beginning of the experiment has a pose that is not present in the training data. Nevertheless, the robot still manages to adjust its movement to successfully approach the moving target, ensuring convergence, and once the target is reached, it is able to track it along its motion (see also the snapshots of Fig. 9). For this experiment, we also show in Fig. 10 the plots of the camera velocities, as demonstrated (grey lines in the plots) and as executed by our method (in blue). For these three experiments, we provide a quantitative evaluation of the tracking performances. In particular, we considered the phase of the experiments that starts when the visual error is lower than \(5\) pixels (cfr. Fig. 4 (right), Fig. 6 (right), and Fig. 8 (right)). For this portion of the experiments, the visual error is on average \(1.795\pm 0.984\) pixels, corresponding to \(0.475\pm 0.257\) mm of error in the camera position. Finally, we perform one last test in which we suddenly move the target object during the execution of the experiment.
Figure 4: ILVS experiment with similar initial conditions to the demonstrated ones: visual features trajectories (left) and visual error (right).
Figure 5: Snapshots of the ILVS experiment with similar initial conditions to the demonstrations: robot's external views (top) and camera images (bottom).
We observed the system's ability to adjust to such sudden and unexpected movements of the target object (tests were performed with both a low gain \(\lambda=2\) and a high gain \(\lambda=10\), yielding satisfactory results in both cases). The results of this experiment can be evaluated from the accompanying video. ## 5 Discussion and conclusion In this work, we have addressed some of the needs that arise from the introduction of friendly robots in domestic and industrial contexts where users are not necessarily experts. In these situations, adaptability and ease of use are must-haves for robots. Therefore, we have proposed an imitation learning-based visual servoing framework for target tracking operations that avoids explicit programming, leveraging previous demonstrations of the desired behavior. Our approach relies on the VS paradigm and the DS-based IL rationale. In particular, we take advantage of the imitation strategy to learn the compensation term required to achieve the visual tracking task. Our approach permits us to realize the tracking without the specific implementation of an estimator or observer of the compensation term. The framework has been evaluated with several simulations, which show the ability to handle unseen initial conditions. As shown by the experiments in Fig. 6 and Fig. 7, the robot can converge to the visual target even when starting relatively far from the initial value of the demonstrations. This out-of-domain generalization capability is a structural property of our approach, which effectively combines a stable component (from standard VS) and a learned one in the closed-loop control law (5). Indeed, the standard VS component always drives the robot close to the target, i.e., into the training data domain, where the learning of the compensation term is put in an ideal condition to work. Stronger generalization capabilities (e.g., to handle double the conveyor belt velocity seen during the demonstrations) would require retraining our compensation term. The stability of the proposed controller has not been formally investigated (for instance, using tools from Lyapunov theory). However, in the conducted experiments, the robot was always able to reach the target with sub-millimeter precision. Moreover, we also tested the robustness to disturbances like changes in the object position on the conveyor belt. The fact that the controller behaved as expected in several practical cases suggests that it should have some (local) stability property. However, a formal stability proof is left as future work. Another interesting line for future development is the test of our framework with velocities of the object that are different from the one seen during the demonstrations. Indeed, in our current study, the velocity of the object during the validation experiment is the same as the one used during the collection of the demonstrations. Finally, we plan to test our approach with real experiments; to this end, further development will be required to handle the noise in the input data (typical of real-life applications).
Figure 8: ILVS experiment with unseen initial position and orientation: visual features trajectories (left) and visual error (right).
Figure 9: Snapshots of the ILVS experiment with unseen initial position and orientation: robot's external views (top) and camera images (bottom).
Figure 10: Camera velocity during the ILVS experiment with unseen initial position and orientation: linear (top) and angular components (bottom).
In everyday-life collaboration tasks between human operators and robots, the former require simple ways to program new skills, while the latter must show adaptive capabilities to cope with environmental changes. The joint use of visual servoing and imitation learning allows us to pursue the objective of realizing friendly robotic interfaces that (i) can adapt to the environment thanks to visual perception and (ii) avoid explicit programming thanks to the emulation of previous demonstrations. This work exploits imitation learning within the visual servoing paradigm to address the specific problem of tracking moving objects. In particular, we show that the compensation term required to realize the tracking controller can be inferred from data, avoiding the explicit implementation of estimators or observers. The effectiveness of the proposed method has been validated through simulations with a robotic manipulator.
2309.04220
Score-PA: Score-based 3D Part Assembly
Autonomous 3D part assembly is a challenging task in the areas of robotics and 3D computer vision. This task aims to assemble individual components into a complete shape without relying on predefined instructions. In this paper, we formulate this task from a novel generative perspective, introducing the Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly. Score-based methods, however, are typically time-consuming during the inference stage. To address this issue, we introduce a novel algorithm called the Fast Predictor-Corrector Sampler (FPC) that accelerates the sampling process within the framework. We employ various metrics to assess assembly quality and diversity, and our evaluation results demonstrate that our algorithm outperforms existing state-of-the-art approaches. We release our code at https://github.com/J-F-Cheng/Score-PA_Score-based-3D-Part-Assembly.
Junfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, Hao Dong
2023-09-08T09:10:03
http://arxiv.org/abs/2309.04220v1
# Score-PA: Score-based 3D Part Assembly ###### Abstract Autonomous 3D part assembly is a challenging task in the areas of robotics and 3D computer vision. This task aims to assemble individual components into a complete shape without relying on predefined instructions. In this paper, we formulate this task from a novel generative perspective, introducing the Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly. Score-based methods, however, are typically time-consuming during the inference stage. To address this issue, we introduce a novel algorithm called the Fast Predictor-Corrector Sampler (FPC) that accelerates the sampling process within the framework. We employ various metrics to assess assembly quality and diversity, and our evaluation results demonstrate that our algorithm outperforms existing state-of-the-art approaches. We release our code at [https://github.com/J-F-Cheng/Score-PA_Score-based-3D-Part-Assembly](https://github.com/J-F-Cheng/Score-PA_Score-based-3D-Part-Assembly). ## 1 Introduction Assuming you purchase a piece of IKEA furniture, assembling the separate parts into a complete structure can be challenging without proper guidance (_e.g._, the instructions in the manual). If we were to use a robot to assist us with furniture assembly, a key issue would be enabling the robot to understand the relationships among all the parts and autonomously assemble them into a complete shape without relying on predefined instructions. To this end, we formulate the task from a generative perspective and propose the Score-based 3D Part Assembly framework (Score-PA), which learns a conditional probability distribution over the pose set of the input parts and yields high-quality and diverse assembly results. ## 2 Related Work ### Assembly-Based 3D Modelling Assembly-based 3D modelling is a promising approach to expanding the accessibility of 3D modelling. In assembly-based modelling, new models are constructed from shape components extracted from a database [1]. One approach, proposed by Funkhouser et al. [1], creates new 3D shapes by assembling parts from a repository. Other works [1, 2, 3, 4] facilitate shape modelling using probabilistic models to encode semantic and geometric relationships among shape components.
More recently, some studies [1, 2, 3] generate certain parts and then predict a per-part transformation for the generated parts to obtain a new shape. ### Autonomous 3D Part Assembly The autonomous 3D part assembly task, as introduced by Huang et al. [1], focuses on predicting a 6-DoF pose (comprising rotation and translation) for the composition of individual parts. To accomplish this, Huang et al. [1] proposed an assembly-oriented dynamic graph learning framework that demonstrated impressive performance using their specially designed algorithm. Following this development, a recurrent graph learning framework was introduced [1], which takes advantage of the order information of parts during the assembly process. This approach highlights the significant improvements that can be achieved by incorporating order information. However, in practical applications, such order information is often not readily available for 3D part assembly problems. As a result, our study continues to adhere to the setting proposed by Huang et al. [1], focusing on assembling parts in random order. This ensures that our approach remains applicable in real-world scenarios where order information might not be accessible. ### Score-Based Generative Models Score-based generative models aim to estimate specific distributions [1, 2, 3, 4, 5]. The primary objective is to minimize the squared distance between the estimated gradients and the gradients of the log-density of the data distribution [1, 2]. A recent variation of score-based models, known as score-based generative modelling with stochastic differential equations (SDEs) [6], employs SDEs for data perturbation, achieving remarkable success in generation tasks. This approach diffuses the data distribution during training using SDEs and generates data by reversing the diffusion process, _i.e._, through the inverse SDE. Score-based modelling has achieved significant success in various tasks, such as point cloud generation [1], molecular conformation generation [1], scene graph generation [2], point cloud denoising [1], human pose estimation [1], object rearrangement [1], _etc._ ## 3 Method This section focuses on introducing our proposed approach to tackle the challenges discussed above. We discuss the problem definition in Section 3.1. We then overview our Score-based 3D Part Assembly framework in Section 3.2, and the training algorithm in Section 3.3. To speed up the inference stage of our framework, we further propose a new sampler, FPC, for fast sampling purpose in Section 3.4. ### Problem Definition Assuming we have a set of separate parts, the core concept of autonomous 3D part assembly involves designing an algorithm that learns the assembly rules from a dataset and moves the separate parts to appropriate locations accordingly. Specifically, the goal of the 3D part assembly task is to predict a 6-DoF pose set \(\textbf{Q}_{\textbf{P}}=\{\textbf{q}_{i}\}_{i=1}^{N}\) for parts transformation, corresponding to the given 3D part point clouds \(\textbf{P}=\{\textbf{p}_{i}\}_{i=1}^{N}\) (the order of the input parts is random). Each \(\textbf{p}_{i}\in\mathbb{R}^{1000\times 3}\) is a point cloud that conveys the geometric information of a part, and each \(\textbf{q}_{i}\in\mathbb{R}^{6}\) is a combination of a translation vector \(\textbf{t}_{i}\in\mathbb{R}^{3}\) and an Euler angle vector \(\textbf{e}_{i}\in\mathbb{R}^{3}\). 
By using the predicted pose set \(\textbf{Q}_{\textbf{P}}=\{\textbf{q}_{i}\}_{i=1}^{N}\), the given point clouds \(\textbf{P}=\{\textbf{p}_{i}\}_{i=1}^{N}\) can be transformed into an assembled shape \(\textbf{P}^{*}\) through rotation and translation. ### Overview of Score-PA As stated in Section 1, our goal is to learn a conditional probability distribution \(p(\textbf{Q}_{\textbf{P}}\mid\textbf{P})\). Instead of directly predicting the final part poses, we learn how to transform the input parts by predicting the gradient field that guides each part towards the conditional distribution. The score-based generative model, as an emerging generative technique, provides a direct way to learn this gradient field. Our goal is to build a score function that approaches the conditional data distribution \(\nabla_{\textbf{Q}_{\textbf{P}}}\log(p_{t}(\textbf{Q}_{\textbf{P}}\mid\textbf{P}))\). To achieve this, we design a graph neural network to model our score function \(\textbf{S}_{\textbf{G}_{\theta}}(\textbf{Q}_{\textbf{P}},\,t)=\nabla_{\textbf{Q}_{\textbf{P}}}\log(p_{t}(\textbf{Q}_{\textbf{P}}\mid\textbf{P}))\) [], where \(\textbf{G}_{\theta}\) is our designed graph neural network, and the time \(t\) is used to index the diffusion process of the 6-DoF poses \(\{\textbf{Q}_{\textbf{P}}(t)\}_{t=0}^{T}\). The overview of our framework is shown in Fig. 1: our objective is to train the formulated score-based model to generate the 6-DoF pose set \(\textbf{Q}_{\textbf{P}}\) based on the point cloud information of the given parts **P**, and **P** can then be easily transformed into the assembled shape \(\textbf{P}^{*}\). In the proposed algorithm, the graph \(\textbf{G}_{\theta}=(\textbf{V},\textbf{E})\) represents the 3D geometric information of the given parts' point clouds. The nodes in this graph neural network are fully connected for message passing. The set of nodes \(\textbf{V}=\{\textbf{v}_{i}\}_{i=1}^{N}\) is obtained from the parts' point clouds \(\textbf{P}=\{\textbf{p}_{i}\}_{i=1}^{N}\) through a parametric function \(f_{init}\), which is designed as a vanilla PointNet []. It extracts the geometric information from the parts' point clouds: \[\mathbf{V}=f_{init}(\mathbf{P}), \tag{1}\]
Figure 1: The sampling procedure of our Score-PA. We here show a process of \(N\) sampling steps. \(t_{n}\) represents the time value at step \(n\) (the algorithm starts from step \(N-1\) and ends at step 0). We can observe that the chair is assembled from coarse to fine.
The _Input Encoder_ \(f_{input}\) shown in Fig. 1 is parameterized as an MLP, which is used to encode the input of the designed network. The embedding layer \(f_{t}\) includes a _Gaussian Fourier Projection_ layer and a linear layer, which enables the score-based model to be conditioned on the input time \(t\) []. Subsequently, for training the model, as indicated by Song et al. [], training with perturbed data can help the model better estimate the score. In our designed framework, we train our model with data perturbed by SDEs and, under a given condition \(\mathbf{P}\), a diffusion process of the 6-DoF poses \(\{\mathbf{Q_{P}}(t)\}_{t=0}^{T}\) is constructed for this purpose, where \(t\in[0,T]\). \(\mathbf{Q_{P}}(0)\sim p_{0}\) represents the i.i.d. samples in the given dataset, and \(\mathbf{Q_{P}}(T)\sim p_{T}\) is a known prior distribution.
The diffusion process can be expressed by the following mathematical model: \[d\mathbf{Q_{P}}=\mathbf{f_{d}}(\mathbf{Q_{P}},t)dt+g(t)d\mathbf{w}, \tag{2}\] where \(\mathbf{f_{d}}(\cdot,t):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) represents the drift coefficient of \(\mathbf{Q_{P}}\), and \(g(t)\in\mathbb{R}\) is the diffusion coefficient []. \(\mathbf{w}\) is the standard Brownian motion []. More details about the training algorithm are discussed in Section 3.3. After training, we can generate poses for the separate parts through sampling. Assuming a well-trained model is obtained, we can sample the 6-DoF pose set \(\mathbf{Q_{P}}(0)\) by using the reverse-time SDE shown as follows: \[d\mathbf{Q_{P}}=[\mathbf{f_{d}}(\mathbf{Q_{P}},t)-g^{2}(t)\nabla_{\mathbf{Q_{P}}}\log(p_{t}(\mathbf{Q_{P}}\mid\mathbf{P}))]dt+g(t)d\bar{\mathbf{w}}, \tag{3}\] where \(\bar{\mathbf{w}}\) represents a reverse-time Brownian motion, and \(dt\) is an infinitesimal negative time step. In practice, we normally use an iterative algorithm (_e.g._, the Predictor-Corrector sampling algorithm) to solve the reverse-time SDE. The simplified sampling process is depicted in Fig. 1. Assuming we conduct \(N\) sampling steps, the iterative algorithm starts with the initial pose \(\mathbf{Q_{P}}(T)\). The initialized value is then processed by the trained score-based model to obtain \(\mathbf{Q_{P}}(t_{N-2})\). The iterative algorithm continues until the terminal pose \(\mathbf{Q_{P}}(t_{0})\) is obtained (\(t_{0}=0\)), and the final value is used for translating and rotating the input parts. In our framework, we design a new algorithm, the Fast Predictor-Corrector sampler (FPC), to speed up the sampling procedure. More details about our sampling algorithm are discussed in Section 3.4. ### Loss Function and Training Algorithm Our proposed algorithm aims to estimate the conditional distribution of the training data \(p(\mathbf{Q_{P}}(0)\mid\mathbf{P})\), and the original score-matching method is not suitable for this scenario since it is designed for single random variable estimation. We propose a new objective function to solve this problem. Different from the original score-matching objective function [], our objective estimates the gradient fields of the log-conditional-density \(\nabla_{\mathbf{Q_{P}}}\log(p_{t}(\mathbf{Q_{P}}(t)\mid\mathbf{P}))\). The formula is shown as follows: \[\min_{\boldsymbol{\theta}}\mathbb{E}_{t}\mathbb{E}_{\mathbf{Q_{P}}(0)}\mathbb{E}_{\mathbf{Q_{P}}(t)\mid\mathbf{Q_{P}}(0),\mathbf{P}}\left[\lambda(t)\|\mathbf{S_{G_{\theta}}}(\mathbf{Q_{P}}(t),t)-\nabla_{\mathbf{Q_{P}}(t)}\log(p_{0t}(\mathbf{Q_{P}}(t)\mid\mathbf{Q_{P}}(0),\mathbf{P}))\|_{2}^{2}\right] \tag{4}\] In our algorithm, we set the perturbation SDE as \(d\mathbf{Q_{P}}=\sigma^{t}d\mathbf{w}\), where \(t\in[0,T]\). We select \(\lambda(t)=\frac{1}{2\log\sigma}(\sigma^{2t}-1)\) in our experiments. Algorithm 1 shows our training algorithm. Intuitively, for each iteration, we first select the training data from the training dataset and sample \(t\) from the uniform distribution. Then the ground-truth pose \(\mathbf{Q_{P}}(0)\) is perturbed by Gaussian noise with variance \(\frac{1}{2\log\sigma}(\sigma^{2t}-1)\mathbf{I}\). Finally, we calculate the objective function shown in Equation 4 and use the gradient-descent algorithm to optimize the parameters of the designed graph neural network.
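For concreteness, the following sketch (our own, not the authors' released implementation) shows how the objective of Equation 4 can be evaluated for one mini-batch under the perturbation SDE \(d\mathbf{Q_{P}}=\sigma^{t}d\mathbf{w}\) and the weight \(\lambda(t)=\frac{1}{2\log\sigma}(\sigma^{2t}-1)\); the score network `score_net` (standing in for \(\mathbf{S_{G_{\theta}}}\)), the data batches and the value of \(\sigma\) are assumptions of this example.

```python
import math
import torch

SIGMA = 25.0   # assumed value of sigma; the paper only requires it to be large enough
EPS_T = 1e-5   # avoid t = 0, where the perturbation variance vanishes

def marginal_std(t: torch.Tensor) -> torch.Tensor:
    """Std of p_{0t}(Q(t) | Q(0)) for dQ = sigma^t dw: sqrt((sigma^{2t} - 1) / (2 ln sigma))."""
    return torch.sqrt((SIGMA ** (2.0 * t) - 1.0) / (2.0 * math.log(SIGMA)))

def dsm_loss(score_net, parts: torch.Tensor, q0: torch.Tensor) -> torch.Tensor:
    """parts: point clouds P (condition); q0: ground-truth 6-DoF poses Q_P(0), shape (B, N, 6)."""
    b = q0.shape[0]
    t = torch.rand(b, device=q0.device) * (1.0 - EPS_T) + EPS_T   # t ~ U(eps, 1]
    std = marginal_std(t).view(b, 1, 1)
    z = torch.randn_like(q0)
    qt = q0 + std * z                                             # perturbed poses Q_P(t)
    score = score_net(qt, parts, t)                               # estimate of grad log p_t(Q_P | P)
    # With lambda(t) equal to the perturbation variance, Eq. (4) reduces to E || std*score + z ||^2.
    return ((std * score + z) ** 2).sum(dim=(1, 2)).mean()
```

One gradient-descent step on `dsm_loss` per sampled mini-batch corresponds to one iteration of the training procedure described above.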
### Fast Predictor-Corrector Sampler for Inference Assume a well-trained model is obtained by using the training method discussed above; we can then use this trained model to sample the poses of the separate parts. We first apply the Predictor-Corrector sampler (PC) proposed by Song et al. [20] in our framework to sample poses. However, we find that the PC sampler requires a large number of sampling steps to achieve high performance. Fig. 3 in Section 4.4 shows that the PC sampler requires 400 steps to achieve optimal sampling results, which introduces a large latency at the inference stage. Motivated by the need to accelerate sampling, we propose the Fast Predictor-Corrector sampler (FPC). Algorithm 2 shows the sampling process of our proposed algorithm. Assume we conduct \(N\) sampling steps with \(C_{F}\) final correction steps. From step \(N-1\) to step 1, the algorithm executes the same program as the Predictor-Corrector sampler algorithm proposed by [20]. After that, the algorithm conducts \(C_{F}\) steps of Langevin MCMC [20, 20, 20, 20] with noise decay for correction. Finally, a single prediction step without noise is used to obtain the results. \(d\) is a parameter which controls the decay rate of the noise. The experimental results discussed in Section 4.4 show that our algorithm generates better and more diverse results compared with those of the original Predictor-Corrector sampler. **The design rationale of the FPC.** We initially apply \(N-1\) steps of normal PC sampling. The objective of this stage is to find the **approximate value** of a reasonable pose. At this stage, we refrain from implementing noise decay techniques as they could potentially impede both the **convergence** of the algorithm and the **diversity** of the produced results. Then, once the algorithm finds the approximate value of a reasonable pose, we utilize \(C_{F}\) steps of Langevin MCMC to obtain the **exact value** of the pose. In the part assembly task, even slight noise can corrupt the pose. In this case, we implement the noise decay technique at this stage to avoid corrupting the sampled pose. At this point, the noise decay no longer hinders convergence and diversity (because the algorithm has already found the approximate value), but instead enhances the quality of sampling, which brings a more accurate estimation of the pose. As a result, the algorithm can also achieve better connectivity. ## 4 Experiment In this section, we discuss our datasets, baselines, evaluation metrics, experimental results and ablation study. For the experimental details, please refer to the appendix. ### Datasets and Baselines The datasets used for the experiments follow previous work and consist of three categories: chair, table and lamp (these datasets are subsets of PartNet, which contains a large number of fine-grained shapes and hierarchical part segmentations). The Chair, Table and Lamp datasets contain 6,323, 8,218 and 2,207 shapes, respectively. We compare our proposed algorithm with B-Global, B-LSTM, B-Complement, and Dynamic Graph Learning (B-DGL). The results show that our algorithm achieves state-of-the-art performance over other baselines.
Figure 2: The qualitative comparisons between our algorithm and other baselines. For each algorithm, we show two generated assemblies. The results show that only our framework is able to generate diverse results with high quality. It is hardly possible for B-Global, B-LSTM and B-Complement to generate reasonable results. B-DGL can generate some reasonable assemblies, but the generated assemblies lack diversity. We show more results in the appendix.
### Evaluation Metrics In our experiments, we follow Huang et al. [] to use Shape Chamfer Distance (SCD) [], Part Accuracy (PA) [] and Connectivity Accuracy (CA) [] to evaluate the assembly quality. SCD and PA are used to evaluate the assembly quality of the whole shape and each individual part, respectively, and CA reflects the quality of the connections between each pair of parts. We generate ten shapes for each set of separate parts in the evaluation procedure. Quality evaluations are based on Minimum Matching Distance [], which means the minimum distance between the assembled shapes and the ground truth is measured for the quality evaluation. According to Huang et al. [], apart from assembly quality, assembly diversity is another crucial aspect of the 3D part assembly task. However, Huang et al. only provide a qualitative evaluation of the diversity of their assembly algorithm, without presenting a quantitative method to assess the diversity. In the following, we introduce our proposed metrics, the Quality-Diversity Score (QDS) and the Weighted Quality-Diversity Score (WQDS). The Diversity Score (DS) [] evaluates the diversity of the results: the formula of DS is \(\text{DS}=\frac{1}{N^{2}}\sum_{i,j=1}^{N}(\text{Dist}(\mathbf{P}_{i}^{*},\mathbf{P}_{j}^{*})),\) where \(\mathbf{P}_{i}^{*}\) and \(\mathbf{P}_{j}^{*}\) represent any two assembled shapes. However, DS only evaluates the average distance between pairs of transformed shapes and does not consider whether the shapes are reasonably assembled. Therefore, this metric is not suitable for the task of 3D part assembly. To solve this problem, we propose QDS and WQDS, which not only measure the diversity among all transformed shapes but also consider the quality of these transformed shapes. The formulas are shown in Equations 5 and 6. We apply SCD as the distance metric Dist in QDS and WQDS. \[\text{QDS}=\frac{1}{N^{2}}\sum_{i,j=1}^{N}[\text{Dist}(\mathbf{P}_{i}^{*},\mathbf{P}_{j}^{*})\cdot\mathds{1}(\text{CA}(\mathbf{P}_{i}^{*})>\tau_{q})\cdot\mathds{1}(\text{CA}(\mathbf{P}_{j}^{*})>\tau_{q})], \tag{5}\] \[\text{WQDS}=\frac{1}{N^{2}}\sum_{i,j=1}^{N}[\text{Dist}(\mathbf{P}_{i}^{*},\mathbf{P}_{j}^{*})\cdot\text{CA}(\mathbf{P}_{i}^{*})\cdot\text{CA}(\mathbf{P}_{j}^{*})], \tag{6}\] The only difference between QDS/WQDS and DS is that we add constraints to each comparison pair. Specifically, for QDS, the constraint is given by \(\mathds{1}(\text{CA}(\mathbf{P}_{i}^{*})>\tau_{q})\cdot\mathds{1}(\text{CA}(\mathbf{P}_{j}^{*})>\tau_{q})\). In the case of WQDS, the constraint takes the form \(\text{CA}(\mathbf{P}_{i}^{*})\cdot\text{CA}(\mathbf{P}_{j}^{*})\). As discussed above, CA evaluates the connectivity accuracy of the algorithms. The constraints for the two metrics mean that the pair \(\mathbf{P}_{i}^{*}\) and \(\mathbf{P}_{j}^{*}\) contributes to the diversity value if and only if both assembled shapes have sufficiently high connectivity accuracy. In other words, both assembled shapes should have high-quality connections between each pair of parts. These two new metrics, QDS and WQDS, meet the requirements of our task, which involves evaluating the diversity between reasonably assembled pairs. In our experiments, we set \(\tau_{q}=0.5\) for QDS.
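A small sketch of how QDS and WQDS in Equations 5 and 6 can be computed is given below (our own illustration; `scd_fn` and `ca_fn` are hypothetical stand-ins for the SCD and CA routines, which are not reproduced here).

```python
import numpy as np

def qds_wqds(assemblies, scd_fn, ca_fn, tau_q: float = 0.5):
    """assemblies: list of N assembled shapes P*_1..P*_N generated for the same part set."""
    n = len(assemblies)
    ca = np.array([ca_fn(p) for p in assemblies])   # connectivity accuracy CA(P*_i) in [0, 1]
    qds, wqds = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            d = scd_fn(assemblies[i], assemblies[j])              # Dist(P*_i, P*_j), here SCD
            qds += d * float(ca[i] > tau_q) * float(ca[j] > tau_q)   # Eq. (5)
            wqds += d * ca[i] * ca[j]                                 # Eq. (6)
    return qds / n**2, wqds / n**2
```

Dropping the two CA-based factors recovers the plain Diversity Score, which is why DS alone cannot distinguish diverse but broken assemblies from diverse, well-connected ones.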
### Compare with the Baselines

Table 1 presents the quantitative evaluation results of our algorithm in comparison to other baselines. It is clear that our algorithm attains the highest score on most metrics. These results indicate that our algorithm can generate diverse and high-quality assembly outcomes.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Metrics & Category & B-Global & B-LSTM & B-Complement & B-DGL & Ours \\ \hline \multirow{3}{*}{SCD \(\downarrow\)} & Chair & 0.0178 & 0.0230 & 0.0197 & 0.0089 & **0.0071** \\ & Table & 0.0077 & 0.0159 & 0.0116 & 0.0051 & **0.0042** \\ & Lamp & 0.0111 & **0.0104** & 0.0157 & 0.0105 & 0.0111 \\ \hline \multirow{3}{*}{PA \(\uparrow\)} & Chair & 13.35 & 8.92 & 10.99 & 38.51 & **44.51** \\ & Table & 20.01 & 8.39 & 15.84 & 46.57 & **52.78** \\ & Lamp & 13.87 & 28.24 & 11.57 & 33.43 & **34.32** \\ \hline \multirow{3}{*}{CA \(\uparrow\)} & Chair & 10.01 & 11.2 & 10.59 & 23.51 & **30.32** \\ & Table & 18.08 & 18.78 & 15.65 & 39.63 & **40.59** \\ & Lamp & 27.73 & 28.67 & 32.2 & 40.19 & **49.07** \\ \hline \multirow{3}{*}{QDS (\(10^{-5}\)) \(\uparrow\)} & Chair & 0.152 & 0.036 & 0.086 & 1.688 & **3.355** \\ & Table & 0.2 & 0.246 & 0.057 & 3.048 & **9.172** \\ & Lamp & 0.758 & 0.629 & 2.814 & 1.835 & **6.836** \\ \hline \multirow{3}{*}{WQDS (\(10^{-4}\)) \(\uparrow\)} & Chair & 0.188 & 0.074 & 0.207 & 0.553 & **1.71** \\ & Table & 0.169 & 0.163 & 0.180 & 0.342 & **1.8** \\ \cline{1-1} & Lamp & 0.175 & 0.211 & 1.0 & 0.31 & **1.02** \\ \hline \hline \end{tabular} \end{table} Table 1: The quantitative comparison between our proposed algorithm and other baselines. In the testing stage, the sequence of the input parts is randomly shuffled. The results show our framework outperforms all the baselines for most metrics (only the SCD score of our framework is slightly below B-LSTM’s SCD in the Lamp dataset testing). In particular, the QDS of our framework outperforms other baselines by a large margin.

Fig. 3 shows the ablation results among the vanilla PC sampler, the FPC sampler without noise decay and the full FPC sampler. The ablation experiments are conducted on the Chair dataset. We test the three samplers with different numbers of sampling steps. The results of the five metrics consistently show that the convergence speed of the FPC sampler is much faster than that of the other two algorithms. The FPC sampler only requires 200 steps to achieve a relatively high score and 300 steps to achieve optimal performance. In comparison, the vanilla PC sampler and the FPC sampler without noise decay need 400 steps and 500 steps, respectively, to achieve results similar to FPC (200 steps). The detailed data can be found in Table 2. Compared with the vanilla PC sampler, FPC-200, FPC-250 and FPC-300 achieve \(\times\)2.27, \(\times\)1.79 and \(\times\)1.47 acceleration, respectively. Besides, the comparison between FPC and FPC without noise decay proves that the noise decay technique can indeed help accelerate the sampling algorithm.

Figure 3: The quantitative results among PC, FPC without noise decay and FPC. The three samplers are tested with sampling steps from 100 to 550. The results show that our algorithm has the fastest convergence speed.

**Connectivity accuracy enhancement.** Our proposed FPC algorithm also performs well on connectivity accuracy, which is supported by both the results shown in Fig. 3 and the qualitative results shown in Fig. 4. Fig. 4 shows that the results sampled by PC or by FPC without noise decay are more likely to be disconnected (see the red circles in Fig. 4), whereas this does not appear in the results sampled by the FPC sampler.
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Samplers & SCD \(\downarrow\) & PA \(\uparrow\) & CA \(\uparrow\) & QDS (\(10^{-5}\)) \(\uparrow\) & WQDS (\(10^{-4}\)) \(\uparrow\) & Avg. Time (s) \(\downarrow\) \\ \hline PC & 0.0073 & 44.22 & 26.97 & 3.078 & **1.902** & 0.84 (\(\times\)1) \\ FPC w/o decay & 0.0072 & 44.13 & 28.65 & **3.629** & 1.825 & 0.99 (\(\times\)0.85) \\ FPC-200 & 0.0073 & 43.52 & 27.88 & 2.545 & 1.612 & **0.37** (\(\times\)2.27) \\ FPC-250 & 0.0073 & **44.53** & 29.27 & 3.345 & 1.668 & 0.47 (\(\times\)1.79) \\ FPC-300 & **0.0071** & 44.51 & **30.32** & 3.355 & 1.71 & 0.57 (\(\times\)1.47) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of FPC (with 200, 250 and 300 sampling steps) against the optimal PC and FPC-without-noise-decay samplers. All the samplers are tested in the same hardware environment (R9 3900X with RTX 3090).

Figure 4: The qualitative ablation experiments among PC, FPC without noise decay and FPC (all the samplers are tested with their optimal sampling steps). The results sampled by FPC have the best connectivity.

## 5 Conclusion

In this work, we propose a novel framework, Score-PA, for the 3D part assembly task, viewing 3D part assembly as a problem of conditional probability distribution estimation. We modify the original score-matching objective function for log-conditional-density estimation purposes, and develop a graph neural network for score function modelling. Besides, we propose a new sampling method, FPC, to speed up the inference of our framework. The experiments demonstrate that our framework achieves the current state-of-the-art performance over other baselines for both assembly quality and assembly diversity.

## Acknowledgement

This project was supported by the National Natural Science Foundation of China - General Program (62376006) and the National Youth Talent Support Program (8200800081).

## Appendix A More Discussion about Our Score-PA

**Selection of the weight function \(\lambda(t)\).** We discuss our training algorithm in the main text, where \(\lambda(t)\) plays an important role in the training objective function. According to Song et al. [20], we need to choose a suitable \(\lambda(t)\) to make our prior distribution \(p(\mathbf{Q_{P}}(T))\) independent of the data distribution and easy to sample (we set \(T=1\)). In our algorithm, the weight function is selected as \(\lambda(t)=\frac{1}{2\log\sigma}(\sigma^{2t}-1)\), which follows Song et al.'s setting [20]. We already have our SDE \(d\mathbf{Q_{P}}=\sigma^{t}d\mathbf{w}\), where \(t\in[0,1]\), and in this situation,

\[p_{0t}(\mathbf{Q_{P}}(t)\mid\mathbf{Q_{P}}(0),\mathbf{P})=\mathcal{N} \bigg{(}\mathbf{Q_{P}}(t);\mathbf{Q_{P}}(0),\frac{1}{2\log\sigma}(\sigma^{2t}- 1)\mathbf{I}\bigg{)} \tag{7}\]

With the weight function \(\lambda(t)=\frac{1}{2\log\sigma}(\sigma^{2t}-1)\), the prior distribution \(p_{t=1}\) is

\[\int p_{0}(\mathbf{y})\mathcal{N}\bigg{(}\mathbf{Q_{P}};\mathbf{y},\frac{1}{2 \log\sigma}(\sigma^{2}-1)\mathbf{I}\bigg{)}d\mathbf{y}\approx\mathcal{N} \bigg{(}\mathbf{Q_{P}};\mathbf{0},\frac{1}{2\log\sigma}(\sigma^{2}-1)\mathbf{ I}\bigg{)}, \tag{8}\]

where \(\sigma\) should be large enough. Equation 8 means that the prior distribution can easily be sampled from a normal distribution.
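The following small numerical check, written as a generic Euler-Maruyama simulation rather than the authors' code, illustrates Equations 7 and 8: simulating \(d\mathbf{Q_{P}}=\sigma^{t}d\mathbf{w}\) forward to \(t=1\) reproduces the analytic perturbation-kernel standard deviation, and for large \(\sigma\) the resulting marginal is well approximated by the zero-mean Gaussian prior. The starting value and sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, n_steps, n_paths = 25.0, 1.0, 1000, 20000
dt = T / n_steps

# Simulate dQ = sigma^t dw forward from an arbitrary starting value Q(0).
q0 = 0.7
q = np.full(n_paths, q0)
for k in range(n_steps):
    t = k * dt
    q += sigma ** t * np.sqrt(dt) * rng.standard_normal(n_paths)

# Analytic perturbation-kernel std from Eq. (7) at t = T.
analytic_std = np.sqrt((sigma ** (2 * T) - 1.0) / (2.0 * np.log(sigma)))

print("empirical mean", q.mean(), "(stays near q0 =", q0, ")")
print("empirical std ", q.std(), "vs analytic", analytic_std)
# Because sigma is large, the prior N(0, analytic_std^2) of Eq. (8) is a close
# approximation of the t = 1 marginal regardless of the data value q0.
```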
## Appendix B More Qualitative Comparisons

We present more qualitative comparisons between our algorithm and other baselines in Figure 5. Similar to the comparisons in the main text, we present two assembly results per set of input parts \(\mathbf{P}\) for each algorithm. The comparisons show that only our algorithm is able to generate diverse results with high quality.

Figure 5: More qualitative comparisons between our algorithm and other baselines.

## Appendix C Details about Our Experiments

**Training details.** As discussed in Section 3 of our paper, our algorithm has two important hyper-parameters, \(T\) and \(\sigma\), in the training procedure. In our experiments on the three datasets, we set \(T=1.0\) and \(\sigma=25.0\). We train our models for 2000 epochs on the Chair and Table datasets, and for 4000 epochs on the Lamp dataset. The learning rate for all datasets is set to \(10^{-4}\), and the optimizer is Adam. The training batch size for all datasets is 16.

**Other details.** We conducted both training and testing experiments using a single RTX 3090 GPU. To ensure reproducibility, we set a fixed random seed. In our ablation study, we define the sampling steps for FPC and FPC w/o decay as \(steps=N+C_{F}\). For the PC sampler, the number of sampling steps is simply \(N\), as it does not involve a decay stage. In all testing experiments, including the ablation study, we set the sampling batch size for Score-PA to 4.

## Appendix D Limitation and Future Work

Currently, we achieve diverse part assembly in an ideal simulation environment. In the future, we plan to take physical factors (_e.g._, physical collision) into consideration, and achieve autonomous part assembly in a real physical environment.
Autonomous 3D part assembly is a challenging task in the fields of robotics and 3D computer vision. The task addresses assembling individual parts into a complete shape without relying on predefined instructions. In this paper, we formulate this task from a novel generative perspective and introduce the Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly. Score-based methods are generally known to be computationally expensive at the inference stage. To address this issue, we introduce a score-based algorithm, the Fast Predictor-Corrector Sampler (FPC), which accelerates the sampling process within the framework. We evaluate assembly quality and diversity with various metrics, and the evaluation results show that our algorithm outperforms existing state-of-the-art methods.
2310.00125
Covariance Expressions for Multi-Fidelity Sampling with Multi-Output, Multi-Statistic Estimators: Application to Approximate Control Variates
We provide a collection of results on covariance expressions between Monte Carlo based multi-output mean, variance, and Sobol main effect variance estimators from an ensemble of models. These covariances can be used within multi-fidelity uncertainty quantification strategies that seek to reduce the estimator variance of high-fidelity Monte Carlo estimators with an ensemble of low-fidelity models. Such covariance expressions are required within approaches like the approximate control variate and multi-level best linear unbiased estimator. While the literature provides these expressions for some single-output cases such as mean and variance, our results are relevant to both multiple function outputs and multiple statistics across any sampling strategy. Following the description of these results, we use them within an approximate control variate scheme to show that leveraging multiple outputs can dramatically reduce estimator variance compared to single-output approaches. Synthetic examples are used to highlight the effects of optimal sample allocation and pilot sample estimation. A flight-trajectory simulation of entry, descent, and landing is used to demonstrate multi-output estimation in practical applications.
Thomas O. Dixon, James E. Warner, Geoffrey F. Bomarito, Alex A. Gorodetsky
2023-09-29T20:31:59
http://arxiv.org/abs/2310.00125v2
Covariance Expressions for Multi-Fidelity Sampling with Multi-Output, Multi-Statistic Estimators: Application to Approximate Control Variates

###### Abstract

We provide a collection of results on covariance expressions between Monte Carlo based multi-output mean, variance, and Sobol main effect variance estimators from an ensemble of models. These covariances can be used within multi-fidelity uncertainty quantification strategies that seek to reduce the estimator variance of high-fidelity Monte Carlo estimators with an ensemble of low-fidelity models. Such covariance expressions are required within approaches like the approximate control variate and multi-level best linear unbiased estimator. While the literature provides these expressions for some single-output cases such as mean and variance, our results are relevant to both multiple function outputs and multiple statistics across any sampling strategy. Following the description of these results, we use them within an approximate control variate scheme to show that leveraging multiple outputs can dramatically reduce estimator variance compared to single-output approaches. Synthetic examples are used to highlight the effects of optimal sample allocation and pilot sample estimation. A flight-trajectory simulation of entry, descent, and landing is used to demonstrate multi-output estimation in practical applications.

keywords: uncertainty quantification, multifidelity, approximate control variate, Monte Carlo estimation, sensitivity analysis

## 1 Introduction

Estimating statistics of simulation models is of primary concern in uncertainty quantification. However, sampling strategies for estimation are often plagued by slow convergence. For example, the variance of a Monte Carlo (MC) mean estimator is proportional to the inverse of the number of model evaluations, requiring an order of magnitude more samples per digit of accuracy. As a result, the large number of sample evaluations required for accurate estimation becomes prohibitive when the underlying model is computationally burdensome. In this paper, we consider variance reduction techniques that reduce this cost by leveraging ensembles of correlated multi-output models for multiple statistics at once.

We focus on multi-fidelity sampling strategies that extract information from models of varying fidelities to reduce the variance of a baseline estimator without introducing bias. These lower fidelity models can take a hierarchical form, for example arising from a hierarchy of discretizations of a finite-element PDE approximation [2, 14], or they may be unstructured and include simulations with different physics and/or surrogates [26, 24]. In the context of multi-fidelity variance reduction, we focus on control-variate (CV) methods [12, 13, 21]. Examples of CV methods include the multi-level MC (MLMC) estimator [8, 2], the multi-fidelity MC (MFMC) estimator [17], and more generally approximate control variates (ACV) [9]. While MLMC and MFMC require a distinct sampling structure of the ensemble of models, potentially limiting achievable variance reduction, the ACV method provides a general framework for distributing samples amongst models. More recently, the multi-level best linear unbiased estimator (MLBLUE) provides an alternate method to allocate samples based on estimator and model groupings [22], but can also be interpreted under the ACV framework. Effectively leveraging multiple fidelities of models requires knowledge of the covariance between all models involved.
As such, all of the above approaches require prior knowledge of the covariance between the ensemble of high- and low-fidelity estimators. These estimator covariances are intimately tied to the statistics being estimated. A majority of the literature focuses on mean estimation of scalar-valued functions [13, 12, 9, 3, 17, 8]. Some works on other statistics such as the variance [19, 20, 6], Sobol indices [19, 11, 20], and quantiles [10] also exist, but focus on single-statistic estimators. Note that MLMC does not require estimator covariances because it makes the strong assumption of perfect correlation amongst models, and, as a result, can yield sub-optimal choices of CV weights when the models are not perfectly correlated [9]. In the case of mean estimation, the covariance between MC estimators of each model is easily related to the covariances of the underlying models themselves [9, 3]. In practice, these model covariances are generally unknown, but are estimated via some pilot sampling procedure. Pilot sample estimation can be performed with a fixed number of samples or through more adaptive or robust schemes. For example, an exploration-exploitation approach can be taken to minimize the total cost of model evaluations by determining when to stop estimating the model covariances [26]. Another approach directly estimates the covariance of the estimators by creating an ensemble of ACV estimators, each with a different set of samples [18]. For other statistics, such as probability, quantile, or Sobol index estimation, the covariance between estimators is generally unavailable [19, 10, 11, 20]. For variance and Sobol index estimation, [19] finds the optimal weights for mean estimation and applies them to high-order statistic estimation. In [11], perfect correlation between estimators is assumed for Sobol index estimation, disregarding the estimator covariance requirement, but resulting in sub-optimal CV estimation. Finally, [20] numerically estimates the covariance between estimators directly for variance and Sobol indices to find the optimal CV weights. One of the principal aims of this paper is to introduce the analytic covariances for these additional estimators to improve CV efficiency.

A second issue that we consider is models with multiple outputs -- the majority of the above approaches are applied to models with single outputs. Extending these approaches to vector-valued functions requires additional covariance expressions. Current state-of-the-art estimation techniques that use multiple quantities of interest (QoIs) construct one estimator for each QoI [24, 20, 5, 6]. While creating individual estimators is a simple technique, the correlations between the QoIs are lost, which leads to limited variance reduction. Indeed, in the context of classical CVs, Rubinstein and Marcus [21] show that the correlation between model outputs can be extracted by including vector-valued functions in a single estimator to further reduce the estimator variance. We extend these results to the ACV context. Additionally, estimating multiple statistics in a single estimator can lead to further sources of correlation, which can be extracted to reduce the variance of both statistics' estimators. We newly introduce approaches to leverage multi-statistic information here. In the context of multi-output mean estimation, a recent approach using the MLBLUE estimator was introduced to indeed extract model output correlations for vector-valued mean estimation [6].
A covariance matrix estimation approach was also introduced, but lacked the capability of extracting correlations between model outputs in this case. Similarly to [5], independent MLBLUE estimators were stacked into a matrix for vector-valued estimation. The approaches in this paper are applicable to further extending these MLBLUE results to take advantage of the correlations between model outputs for covariance matrix estimation. Similarly, this work introduces multi-statistic estimators for mean, variance, and Sobol indices which can further be applied to MLBLUE estimation. We now summarize our contributions. First, we derive estimator covariances for multiple statistics and vector-valued functions for several important cases of interest that can be utilized in the majority of multi-fidelity sampling strategies. Propositions 3.1 and 3.4 provide the covariance between mean estimators and variance estimators, respectively, for vector-valued functions. Proposition 3.7 provides the covariance between the mean and variance estimators for simultaneous mean and variance estimation. Second, we derive estimator covariances for all the main effect variances of scalar-valued functions for use in Sobol indices for global sensitivity analysis. The covariance between main effect variance estimators of similar/different indices is seen in Proposition 3.10. Similarly, Proposition 3.13 provides the covariance between the variance and main effect variance estimators since the total variance of the model is required for Sobol index estimation. These covariances allow multiple Sobol indices to be estimated simultaneously, providing a thorough sensitivity analysis across multiple inputs. Finally, while these results can be adapted to several schemes, we utilize them to introduce the multi-output ACV (MOACV) estimator. This estimator can simultaneously estimate multiple statistics for vector-valued functions. We provide a number of empirical results that demonstrate that the MOACV estimator outperforms individual ACV estimators. As part of these results, we demonstrate that the newly derived estimator covariances for mean estimation do not require substantially more pilot samples than traditional ACV estimation. Finally, the MOACV estimator is tested on a realistic application of trajectory estimation for entry, descent, and landing (EDL). The numerical results demonstrate significant further variance reduction compared to existing results. The rest of this paper is structured as follows. Section 2 introduces MC sampling and the multi-output ACV theory. Section 3 provides the introduced estimator covariances and how to apply them to the ACV techniques. The results in Section 4 demonstrate the MOACV capabilities on analytical examples. Finally, Section 5 applies the MOACV estimator to the EDL application. ## 2 Background In this section, we introduce notation, the core sampling-based estimators, and multi-fidelity variance reduction approaches. ### Notation The following notation is used throughout the manuscript. Matrices and vectors are denoted by bold-faced Roman letters. Each element of a matrix \(\mathbf{F}\in\mathbb{R}^{A\times B}\) is denoted as \(F_{ab}\) for \(\{a,b\}\in\{0,1,\ldots,A-1\}\times\{0,1,\ldots,B-1\}\). Similarly each element of a vector \(\mathbf{g}\in\mathbb{R}^{A}\) is denoted by \(g_{a}\) for \(a\in\{0,1,\ldots,A-1\}\). We denote a matrix of ones with size \(A\times B\) as \(\mathbf{1}_{A\times B}\). 
Generally, block matrices use an underline to denote that the block structure is important. If \(\underline{\mathbf{F}}\) is an \(A\times B\) block matrix, then its blocks are denoted by \(\mathbf{F}_{ab}\). The Kronecker product between vectors \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{D}\) is treated as a flattened outer product \(\mathbf{X}\otimes\mathbf{Y}=\text{vec}(\mathbf{X}\mathbf{Y}^{T})\). The element-wise product between two vectors or matrices is written as \(\mathbf{X}\circ\mathbf{Y}\). The squares of these two operations use the shorthands \(\mathbf{X}^{\otimes 2}\equiv\mathbf{X}\otimes\mathbf{X}\) and \(\mathbf{X}^{\circ 2}\equiv\mathbf{X}\circ\mathbf{X}\), respectively, for both vectors and matrices. Sets are denoted via upper case calligraphic letters such as \(\mathcal{Z}\).

### Monte Carlo Estimators

The \(N\)-sample MC estimator of the mean of a function \(f:\mathbb{R}^{I}\rightarrow\mathbb{R}^{D}\) is defined using a set of input samples \(\mathcal{Z}=\{\mathbf{z}^{(s)}\in\mathbb{R}^{I};s=1,\ldots,N\}\) by \[\mathbf{Q}_{\mu}(\mathcal{Z})=\frac{1}{N}\sum_{s=1}^{N}f(\mathbf{z}^{(s)})\in \mathbb{R}^{D}. \tag{1}\] This estimator is unbiased and has variance \(\mathbb{V}ar\left[\mathbf{Q}_{\mu}\right]=\mathbb{V}ar\left[f\right]/N\). MC estimators can also be defined for the covariance \(V=\mathbb{V}ar\left[f\right]\): \[\mathbf{Q}_{V}(\mathcal{Z})=\frac{1}{N-1}\sum_{s=1}^{N}\left(f(\mathbf{z}^{(s) })-\mathbf{Q}_{\mu}(\mathcal{Z})\right)^{\otimes 2}=\frac{1}{2N(N-1)}\sum_{s=1}^{N} \sum_{t=1}^{N}\left(f(\mathbf{z}^{(s)})-f(\mathbf{z}^{(t)})\right)^{\otimes 2}, \tag{2}\] where \(\mathbf{Q}_{V}\in\mathbb{R}^{D^{2}}\) is a flattened estimate of the covariance matrix. Its variance \(\mathbb{V}ar\left[\mathbf{Q}_{V}\right]\in\mathbb{R}^{D^{2}\times D^{2}}\) is \[\mathbb{V}ar\left[\mathbf{Q}_{V}\right]=\frac{1}{N(N-1)}\left[\mathbb{V}ar \left[f\right]^{\otimes 2}+\left(\mathbf{1}_{D}^{T}\otimes\mathbb{V}ar\left[f \right]\otimes\mathbf{1}_{D}\right)\circ\left(\mathbf{1}_{D}\otimes\mathbb{V} ar\left[f\right]\otimes\mathbf{1}_{D}^{T}\right)\right]+\frac{1}{N}\mathbb{V}ar \left[\left(f-\mathbb{E}\left[f\right]\right)^{\otimes 2}\right], \tag{3}\] and follows from Proposition 3.4.

Finally, we consider MC estimators for main effect Sobol sensitivity indices. To this end, the ANOVA decomposition [1] of the variance of a scalar-valued function \(f\) is \[\mathbb{V}ar[f]=\sum_{u=1}^{I}V_{u}+\sum_{u,v;u>v}^{I}V_{uv}+\sum_{u,v,w;u>v>w}^ {I}V_{uvw}+\cdots, \tag{4}\] where \(V_{u}=\mathbb{V}ar_{x_{u}}[\mathbb{E}_{\mathbf{x}_{\sim u}}[f(\mathbf{x})\mid x_{u}]]\) and \(x_{u}\) is the \(u\)-th input variable. The ANOVA decomposition separates the variance into terms attributed to the function's inputs. One sensitivity measure is the global sensitivity index, or Sobol index [23], \(s_{u_{1}\cdots u_{I}}=\frac{V_{u_{1}\cdots u_{I}}}{V}\), which is the percentage of variance attributed to the corresponding term of the ANOVA decomposition. In this paper, we focus on the main effect sensitivity indices \(s_{u}=\frac{V_{u}}{V}\). The Sobol estimator for the main effect can be obtained using two sets of input samples: \(\mathcal{Z}=\{\mathbf{z}^{(s)};s=1,\ldots,N\}\), and \(\mathcal{Y}_{u}=\{\mathbf{y}_{u}^{(s)};s=1,\ldots,N\}\), where \(\mathcal{Y}_{u}\) is an independent set of input samples except for the \(u\)th input, i.e., \(\mathbf{y}_{u}^{(s)}=(y_{1}^{(s)},y_{2}^{(s)},\cdots,z_{u}^{(s)},\cdots,y_{I}^{( s)})^{T}\) for \(s=1,\ldots,N\).
Using these sample sets the estimator for \(V_{u}\) is \[\mathbf{Q}_{V_{u}}(\mathcal{Z},\mathcal{Y}_{u})=\frac{1}{N}\sum_{s=1}^{N}f( \mathbf{z}^{(s)})f(\mathbf{y}_{u}^{(s)})-\left(\frac{1}{N}\sum_{s=1}^{N}f( \mathbf{z}^{(s)})\right)^{2}=\frac{1}{N^{2}}\sum_{s=1}^{N}\sum_{t=1}^{N}\Big{[} f(\mathbf{z}^{(s)})f(\mathbf{y}_{u}^{(s)})-f(\mathbf{z}^{(s)})f(\mathbf{z}^{(t)}) \Big{]}. \tag{5}\] These estimators have a bias of \(-\mathbb{V}ar[f]/N\)[16], but are used for their simplicity. The variance of the Sobol estimator is \[\mathbb{V}ar[\mathbf{Q}_{V_{u}}] =\frac{1}{N^{3}}\left[(N-1)^{2}\mathbb{V}ar\left[ff_{u}-2f\mathbb{ E}[f]\right]+2(N-1)\mathbb{C}\!ov[ff_{u}-2f\mathbb{E}[f],ff_{u}-f^{2}]\right. \tag{6}\] \[\left.+\mathbb{V}ar[ff_{u}-f^{2}]+2(N-1)\mathbb{V}ar[f]^{2}~{} \right],\] where \(f:\mathbf{a}\rightarrow\mathbb{R}\), \(f_{u}:\mathbf{b}\rightarrow\mathbb{R}\) and \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{I}\). In the operation \(ff_{u}\), the domain \(\mathbf{a}\) is independent from \(\mathbf{b}\) with the exception of the \(u\)th input. For example, let \(\mathbf{a}=(a_{1},\ldots,a_{u},\ldots,a_{I})\) and \(\mathbf{b}=(b_{1},\ldots,a_{u},\ldots,b_{I})\) such that \(f\) and \(f_{u}\) share the same \(u\)-th input. Thus, \(f(\mathbf{z}^{(s)})\) and \(f(\mathbf{y}_{u}^{(s)})\) are realizations of \(f\) and \(f_{u}\) respectively. To the best of our knowledge, Equation (6) is introduced in this paper and follows from Proposition 3.10. ### Multi-Output Control Variates The estimator variances described above all decrease at a rate of \(1/N\), which is prohibitive for expensive function evaluations. Variance reduction methods reduce this expense. CV approaches reduce variance by leveraging additional estimators with known statistics [13]. Let \(\mathbf{Q}\in\mathbb{R}^{D}\) be an arbitrary estimator, and let \(\mathbf{Q}^{*}\in\mathbb{R}^{E}\) denote a second random vector with known mean \(\mathbf{q}^{*}\equiv\mathbb{E}\left[\mathbf{Q}^{*}\right]\). The CV estimator \(\tilde{\mathbf{Q}}\) is defined by \[\tilde{\mathbf{Q}}(\boldsymbol{\alpha})=\mathbf{Q}+\boldsymbol{\alpha}( \mathbf{Q}^{*}-\mathbf{q}^{*})=\mathbf{Q}+\boldsymbol{\alpha}\underline{ \boldsymbol{\Delta}}, \tag{7}\] where \(\boldsymbol{\alpha}\in\mathbb{R}^{D\times E}\) is a matrix of weights and \(\underline{\boldsymbol{\Delta}}\equiv\mathbf{Q}^{*}-\mathbf{q}^{*}\). This new estimator \(\tilde{\mathbf{Q}}\), has the same mean as \(\mathbf{Q}\). Furthermore, its variance is \[\mathbb{V}ar[\tilde{\mathbf{Q}}](\boldsymbol{\alpha})=\mathbb{V}ar[\mathbf{Q} ]+\boldsymbol{\alpha}\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]\boldsymbol {\alpha}^{T}+\mathbb{C}\!ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}] \boldsymbol{\alpha}^{T}+\boldsymbol{\alpha}\mathbb{C}\!ov[\underline{ \boldsymbol{\Delta}},\mathbf{Q}]. \tag{8}\] The weights \(\boldsymbol{\alpha}\) can be chosen to minimize some scalar-valued measure of the uncertainty represented by this variance. Rubinstein and Marcus [21] minimize the determinant, yielding4 Footnote 4: During vector-valued variance estimation, \(\forall ar\left[\underline{\boldsymbol{\Delta}}\right]\) becomes singular due to duplicate columns from upper-lower triangular covariance pairs. To avoid the troubles of inversion, the pseudo-inverse can be used, or the upper triangular portion of the variance estimator can be isolated to provide a non-singular \(\forall ar\left[\underline{\boldsymbol{\Delta}}\right]\). 
\[\boldsymbol{\alpha}^{*}=-\mathbb{C}\!ov[\mathbf{Q},\underline{\boldsymbol{ \Delta}}]\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]^{-1}\quad\text{with variance}\quad\forall ar[\tilde{\mathbf{Q}}]=\mathbb{V}ar[\mathbf{Q}]- \mathbb{C}\!ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}]\mathbb{V}ar[ \underline{\boldsymbol{\Delta}}]^{-1}\mathbb{C}\!ov[\mathbf{Q},\underline{ \boldsymbol{\Delta}}]^{T}. \tag{9}\] The determinant of the variance can be written as \(|\mathbb{V}ar[\tilde{\mathbf{Q}}]|=|\mathbb{V}ar[\mathbf{Q}]|\)\(\left[\prod_{d=1}^{\min(D,E)}(1-\rho_{d}^{2})\right]\) where \(\{\rho_{d}\}\) are the canonical correlations between \(\mathbf{Q}\) and \(\underline{\boldsymbol{\Delta}}\)[21]. Clearly, greater (anti)-correlations yield greater reductions in variance. In the context of estimating statistics of computational models, the random variables \(\mathbf{Q}\) and \(\underline{\boldsymbol{\Delta}}\) are the estimators using high- and low-fidelity models, respectively. In the uncertainty quantification problem, \(\underline{\boldsymbol{\Delta}}\) typically arises from an ensemble of \(K\) lower-fidelity estimators \((\mathbf{Q}_{k})_{k=1}^{K}\) according to5 Footnote 5: In this work, we assume for simplicity that all models share the same number of outputs. This assumption, however, can easily be disregarded by changing the shapes of the defined covariance matrices. The theory and results that follow can be easily modified to allow for varying quantities of model outputs. \[\underline{\boldsymbol{\Delta}}=\begin{bmatrix}\boldsymbol{\Delta}_{1}\\ \vdots\\ \boldsymbol{\Delta}_{K}\end{bmatrix}=\begin{bmatrix}\mathbf{Q}_{1}-\mathbb{E} \left[\mathbf{Q}_{1}\right]\\ \vdots\\ \mathbf{Q}_{K}-\mathbb{E}\left[\mathbf{Q}_{K}\right]\end{bmatrix}\in\mathbb{R}^{ DK},\qquad E=DK. \tag{10}\] ### Multi-Output Approximate Control Variates In the UQ setting, \((\mathbb{E}\left[\mathbf{Q}_{k}\right])_{k=1}^{K}\) are unknown. One approach to overcome this issue is to introduce new estimators for these terms and form an approximate control variate (ACV) [9]. The ACV estimators have only been defined in the scalar-function context, but we extend them here to vector-valued estimators by following the same ideas as in Section 2.3: \[\tilde{\mathbf{Q}}(\boldsymbol{\alpha},\mathcal{Z}) =\mathbf{Q}(\mathcal{Z}_{0})+\boldsymbol{\alpha}\begin{bmatrix} \mathbf{Q}_{1}(\mathcal{Z}_{1}^{*})-\mathbf{Q}_{1}(\mathcal{Z}_{1})\\ \vdots\\ \mathbf{Q}_{K}(\mathcal{Z}_{K}^{*})-\mathbf{Q}_{K}(\mathcal{Z}_{K})\end{bmatrix} =\mathbf{Q}(\mathcal{Z}_{0})+\boldsymbol{\alpha}\begin{bmatrix} \boldsymbol{\Delta}_{1}(\mathcal{Z}_{1}^{*},\mathcal{Z}_{1})\\ \vdots\\ \boldsymbol{\Delta}_{K}(\mathcal{Z}_{K}^{*},\mathcal{Z}_{K})\end{bmatrix} \tag{12}\] \[=\mathbf{Q}(\mathcal{Z}_{0})+\boldsymbol{\alpha}\underline{ \boldsymbol{\Delta}}(\mathcal{Z}_{1}^{*},\mathcal{Z}_{1},\ldots,\mathcal{Z}_{K}^ {*},\mathcal{Z}_{K}), \tag{11}\] where we now have potentially \(2K+1\) sample sets \(\mathcal{Z}=(\mathcal{Z}_{0},\mathcal{Z}_{1}^{*},\mathcal{Z}_{1},\ldots)\) and have redefined \(\underline{\boldsymbol{\Delta}}=[\boldsymbol{\Delta}_{1}(\mathcal{Z}_{1}^{*}, \mathcal{Z}_{1}),\ldots,\boldsymbol{\Delta}_{K}(\mathcal{Z}_{K}^{*},\mathcal{ Z}_{K})]^{T}\in\mathbb{R}^{DK}\). If \(\mathbf{Q}_{i}(\mathcal{Z}_{i}^{*})\) and \(\mathbf{Q}_{i}(\mathcal{Z}_{i})\) have the same expectation for all \(i\), the resulting estimator has the same bias as \(\mathbf{Q}(\mathcal{Z}_{0})\). 
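To make the assembly of the estimator concrete, here is a schematic sketch of Equation (11) combined with the optimal weights of Equation (9); it assumes the blocks \(\mathbb{C}ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}]\) and \(\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]\) are already available, for example from the expressions derived in Section 3 or from pilot samples, and it is not tied to any particular sampling scheme.

```python
import numpy as np

def moacv_estimate(q_hf, q_lf_star, q_lf, cov_q_delta, var_delta):
    """Multi-output ACV estimate, Eq. (11), with optimal weights from Eq. (9).

    q_hf        : (D,) high-fidelity estimate Q(Z_0).
    q_lf_star   : list of K arrays, the estimates Q_k(Z_k^*).
    q_lf        : list of K arrays, the estimates Q_k(Z_k).
    cov_q_delta : (D, E) matrix Cov[Q, Delta].
    var_delta   : (E, E) matrix Var[Delta].
    """
    # Stack the discrepancies Delta_k = Q_k(Z_k^*) - Q_k(Z_k).
    delta = np.concatenate([qs - ql for qs, ql in zip(q_lf_star, q_lf)])
    # Optimal weights alpha* = -Cov[Q, Delta] Var[Delta]^{-1} (Eq. 9); the
    # pseudo-inverse guards against the singular case noted in footnote 4.
    var_delta_inv = np.linalg.pinv(var_delta)
    alpha = -cov_q_delta @ var_delta_inv
    q_tilde = q_hf + alpha @ delta
    # Amount subtracted from Var[Q] to obtain Var[Q_tilde] in Eq. (9).
    variance_reduction = cov_q_delta @ var_delta_inv @ cov_q_delta.T
    return q_tilde, alpha, variance_reduction
```

The same routine applies whether `q_hf` collects means, flattened covariances, main effect variances, or a stacking of several statistics, since only the shapes of the covariance blocks change.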
We denote the sample sizes associated with \(\mathcal{Z}_{i}^{*}\) and \(\mathcal{Z}_{i}\) to be \(N_{i*}\) and \(N_{i}\), respectively. Furthermore, these sets of samples may have a non-empty intersection denoted by subscripts with indices, for example, \(N_{i*\cap j}\) is the number of samples that are shared by \(\mathcal{Z}_{i}^{*}\) and \(\mathcal{Z}_{j}\). The expressions for the optimal weights \(\boldsymbol{\alpha}^{*}\) and the variance \(\mathbb{V}ar[\tilde{\mathbf{Q}}]\) in Equation (9) still apply to the ACV estimator using the new definition of \(\underline{\boldsymbol{\Delta}}\). ## 3 Estimator Covariance Expressions In this section, we provide a collection of results for the covariance between several estimators that are needed for many multi-fidelity sampling strategies. These estimator covariances can then be used within multifidelity UQ sampling approaches for considering multiple outputs and/or for systems needing multiple statistics. Specifically for ACVs, the covariance expressions are needed for evaluating \(\mathbb{C}ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}]\) and \(\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]\). Section 3.1 summarizes how to find \(\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]\) and \(\mathbb{C}ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}]\) for any estimator and sets up the following sections. Section 3.2 introduces estimators for the mean and covariance of multi-fidelity vector-valued functions. Section 3.3 introduces estimators for the simultaneous estimation of variance and main effects in the context of scalar-valued functions. ### Setup and Summary In this section, we describe the structure of the results that follow. Since \(\underline{\boldsymbol{\Delta}}\) is a vector of stacked estimators, the variance, \(\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]\), can be separated into a set of block covariance matrices: \[\mathbb{V}ar[\underline{\boldsymbol{\Delta}}]=\begin{bmatrix}\mathbb{V}ar[ \boldsymbol{\Delta}_{1}]&\mathbb{C}ov[\boldsymbol{\Delta}_{1},\boldsymbol{ \Delta}_{2}]&\cdots&\mathbb{C}ov[\boldsymbol{\Delta}_{1},\boldsymbol{\Delta} _{K}]\\ \mathbb{C}ov[\boldsymbol{\Delta}_{2},\boldsymbol{\Delta}_{1}]&\mathbb{V}ar[ \boldsymbol{\Delta}_{2}]&\vdots\\ \vdots&&\ddots&\\ \mathbb{C}ov[\boldsymbol{\Delta}_{K},\boldsymbol{\Delta}_{1}]&\cdots&\mathbb{ V}ar[\boldsymbol{\Delta}_{K}]\end{bmatrix}. \tag{13}\] Define \(\mathbb{V}ar[\boldsymbol{\Delta}_{i}]=\mathbb{C}ov[\boldsymbol{\Delta}_{i}, \boldsymbol{\Delta}_{i}]\), and further decompose each covariance block into \[\mathbb{C}ov[\boldsymbol{\Delta}_{i},\boldsymbol{\Delta}_{j}]=\mathbb{C}ov[ \mathbf{Q}_{i}(\mathcal{Z}_{i}^{*}),\mathbf{Q}_{j}(\mathcal{Z}_{j}^{*})]- \mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{Z}_{i}^{*}),\mathbf{Q}_{j}(\mathcal{Z}_ {j})]-\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{Z}_{i}),\mathbf{Q}_{j}(\mathcal{ Z}_{j}^{*})]+\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{Z}_{i}),\mathbf{Q}_{j}( \mathcal{Z}_{j})], \tag{14}\] Lastly, the covariance with the high fidelity estimator \(\mathbb{C}ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}]\) is separated into \[\mathbb{C}ov[\mathbf{Q},\underline{\boldsymbol{\Delta}}]=\left[\mathbb{C}ov[ \mathbf{Q},\boldsymbol{\Delta}_{1}]\quad\cdots\quad\mathbb{C}ov[\mathbf{Q}, \boldsymbol{\Delta}_{K}]\right], \tag{15}\] where \[\mathbb{C}ov[\mathbf{Q},\boldsymbol{\Delta}_{i}]=\mathbb{C}ov[\mathbf{Q}, \mathbf{Q}_{i}(\mathcal{Z}_{i}^{*})]-\mathbb{C}ov[\mathbf{Q},\mathbf{Q}_{i}( \mathcal{Z}_{i})]. 
\tag{16}\] The subsequent sections derive expressions for the block components of these estimators, which can then be assembled into the final form. A summary of the estimator settings we consider and references to the results is provided in Table 1. For each case, the covariance \(\mathbb{C}ov\left[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})\right]\) between the required estimators of two fidelities \(\mathbf{Q}_{i}\) and \(\mathbf{Q}_{j}\) is first computed for arbitrary input sample sets \(\mathcal{N}\) and \(\mathcal{M}\), where \(\mathcal{N}=\{\mathbf{n}^{(s)}\in\mathbb{R}^{I};s=1,\ldots,N\}\) and \(\mathcal{M}=\{\mathbf{m}^{(s)}\in\mathbb{R}^{I};s=1,\ldots,M\}\). Here, let \(\mathcal{P}=\mathcal{N}\cap\mathcal{M}\) be the intersection between two input sample sets such that \(P=|\mathcal{N}\cap\mathcal{M}|\) denotes the size of \(\mathcal{P}\). The computation of these components require certain statistics of the underlying multi-fidelity functions. To this end, subsequent sections begin with a highlighted box that describes what exactly is needed. In practice, these statistics can be available either analytically for some problems or must be obtained from pilot samples. Later, we numerically show that pilot samples are feasible in Section 4.2. ### Mean and Variance Estimation In this section, we estimate the mean and variance of a vector-valued function. In Section 3.2.1 and Section 3.2.2 we separately estimate the means and covariance, respectively. Finally, in Section 3.2.3 we simultaneously estimate the mean and covariance for vector-valued functions. We further define notation for this section. Let \(\mathbf{f}:\mathbb{R}^{I}\rightarrow\mathbb{R}^{D(K+1)}\), and \(\mathbf{g}:\mathbb{R}^{I}\rightarrow\mathbb{R}^{D^{2}(K+1)}\) be vector-valued functions collecting the outputs of a high-fidelity model and \(K\) low-fidelity models according to \[\mathbf{f}=\begin{bmatrix}f_{0}\\ f_{1}\\ \vdots\\ f_{K}\end{bmatrix}\quad\text{ and }\quad\mathbf{g}=\begin{bmatrix}(f_{0}-\mathbb{E} \left[f_{0}\right])^{\otimes 2}\\ (f_{1}-\mathbb{E}\left[f_{1}\right])^{\otimes 2}\\ \vdots\\ (f_{K}-\mathbb{E}\left[f_{K}\right])^{\otimes 2}\end{bmatrix}. \tag{17}\] #### 3.2.1 Mean Estimator We now estimate the mean of a vector-valued function. Required Covariances for Mean Estimation The estimators in this section require these covariances \[\underline{\mathbf{A}}\equiv\mathbb{V}ar\left[\mathbf{f}\right]\quad\text{ where }\quad\mathbf{A}_{ij}=\mathbb{C}ov\left[f_{i},f_{j}\right], \tag{18}\] for \(i,j\in\{0,1,\ldots,K\}\). **Proposition 3.1** (Covariance between Mean Estimators).: _The covariance of two MC mean estimators (1), \(\mathbf{Q}_{i}\) and \(\mathbf{Q}_{j}\), corresponding to fidelities \(i,j\) computed via input sets \(\mathcal{N},\mathcal{M}\), respectively, is \(\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})]=\frac{ P}{NM}\mathbf{A}_{ij}\)._ Proof.: Using the definition of covariance, we obtain \[\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})]=\mathbb{ C}ov\left[\frac{1}{N}\sum_{t=1}^{N}f_{i}(\mathbf{n}^{(t)}),\frac{1}{M}\sum_{s=1}^{M }f_{j}(\mathbf{m}^{(s)})\right]=\frac{1}{NM}\sum_{t=1}^{N}\sum_{s=1}^{M} \mathbb{C}ov\left[f_{i}(\mathbf{n}^{(t)}),f_{j}(\mathbf{m}^{(s)})\right].\] The function outputs are only correlated if the sampled inputs are the same. Thus, each covariance term is only nonzero if \(\mathbf{n}^{(t)}=\mathbf{m}^{(s)}\). The only nonzero covariance terms are due to samples in \(\mathcal{P}\). 
Thus, there are \(P\) nonzero covariance terms, and the stated result follows \[\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})]=\frac{ \left|\mathcal{N}\cap\mathcal{M}\right|}{NM}\mathbb{C}ov\left[f_{i},f_{j} \right]=\frac{P}{NM}\mathbb{C}ov\left[f_{i},f_{j}\right]. \tag{19}\] Using this result, we obtain the covariance between the discrepancies as follows6. Footnote 6: In this result, and those that follow, when \(i=j\), the equation simplifies greatly. Here it becomes \(F_{ii}=\frac{1}{N_{is}}+\frac{1}{N_{i}}-2\frac{N_{i}c_{ijs}}{N_{i}N_{is}}\). All other matrices defined similarly have a reduced form along the diagonals. **Proposition 3.2** (Variance of discrepancies for M).: _The covariance between discrepancies is \(\mathbb{C}ov\left[\mathbf{\Delta}_{i},\mathbf{\Delta}_{j}\right]=F_{ij} \mathbf{A}_{ij}\) where_ \[F_{ij}=\frac{N_{is}\gamma_{j*}}{N_{is}N_{js}}-\frac{N_{is}\gamma_{j}}{N_{is}N_ {j}}-\frac{N_{i}\gamma_{j*}}{N_{i}N_{js}}+\frac{N_{i}\gamma_{j}}{N_{i}N_{j}}, \tag{20}\] _for \(i,j=1,\ldots,K,\) and the starred quantities are defined in Section 2.4._ \begin{table} \begin{tabular}{|c|c|c c|c c c|} \hline \multicolumn{3}{|c|}{MOACV Estimators} & \multicolumn{3}{c|}{Propositions} \\ \hline \hline Estimators & Abbr. & Statistic & Model Output & \(\mathbb{C}ov[\mathbf{Q}_{i},\mathbf{Q}_{j}]\) & \(\mathbb{C}ov\left[\mathbf{\Delta}_{i},\mathbf{\Delta}_{j}\right]\) & \(\mathbb{C}ov[\mathbf{Q},\mathbf{\Delta}_{i}]\) \\ \hline Mean & M & Single & Multiple & 3.1 & 3.2 & 3.3 \\ Variance & V & Single & Multiple & 3.4 & 3.5 & 3.6 \\ Mean \& Variance & MV & Multiple & Multiple & 3.7 & 3.8 & 3.9 \\ Main Effect Variances & ME & Multiple & Single & 3.10 & 3.11 & 3.12 \\ ME \& Variance & MEV & Multiple & Single & 3.13 & 3.14 & 3.15 \\ \hline \end{tabular} \end{table} Table 1: Proposition references to each of the introduced estimators. Proof.: The result follows a straightforward calculation \[\mathbb{C}ov[\mathbf{\Delta}_{i},\mathbf{\Delta}_{j}] =\mathbb{C}ov[\mathbf{Q}_{i}(Z_{i}^{*}),\mathbf{Q}_{j}(Z_{j}^{*})]- \mathbb{C}ov[\mathbf{Q}_{i}(Z_{i}^{*}),\mathbf{Q}_{j}(Z_{j})]-\mathbb{C}ov[ \mathbf{Q}_{i}(Z_{i}),\mathbf{Q}_{j}(Z_{j}^{*})]+\mathbb{C}ov[\mathbf{Q}_{i}( Z_{i}),\mathbf{Q}_{j}(Z_{j})] \tag{21}\] \[=\left[\frac{N_{i*\cap j*}}{N_{i*}N_{j*}}-\frac{N_{i*}\cap j}{N_{i *}N_{j}}-\frac{N_{i\cap j*}}{N_{i}N_{j*}}+\frac{N_{i\cap j}}{N_{i}N_{j}}\right] \mathbb{C}ov[f_{i},f_{j}].\] Note that when \(D=1\), Proposition 3.2 is equivalent to [3, Eq. 13]. Finally, the covariance between the high-fidelity and discrepancy estimators is provided. A similar argument yields the following result. **Proposition 3.3** (Variance between high-fidelity and discrepancies for M).: _The covariance between the high-fidelity and discrepancy estimator is \(\mathbb{C}ov[\mathbf{Q},\mathbf{\Delta}_{i}]=G_{i}\mathbf{A}_{0i}\) where_ \[G_{i}=\frac{N_{0\cap i*}}{N_{0}N_{i*}}-\frac{N_{0\cap i}}{N_{0}N_{i}}. \tag{22}\] #### 3.2.2 Variance Estimator We now estimate the variance of a vector-valued function. The proofs are provided in the Appendix for brevity. 
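For reference, a small generic sketch of the single-model building blocks that the following propositions analyze, namely the MC mean estimator of Equation (1) and the flattened covariance estimator of Equation (2); the toy two-output model and sample size are purely illustrative.

```python
import numpy as np

def mc_mean(fz):
    """Eq. (1): MC mean of a vector-valued model; fz has shape (N, D)."""
    return fz.mean(axis=0)

def mc_cov_flat(fz):
    """Eq. (2): flattened (D^2,) MC estimate of the covariance matrix."""
    n, d = fz.shape
    centered = fz - fz.mean(axis=0)
    cov = centered.T @ centered / (n - 1)   # unbiased D x D sample covariance
    return cov.reshape(d * d)               # vec(.) as used for Q_V

# Tiny usage example with a random two-output model.
rng = np.random.default_rng(1)
z = rng.uniform(size=(500, 1))
fz = np.hstack([z**2, np.sin(2 * np.pi * z)])
print(mc_mean(fz), mc_cov_flat(fz))
```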
Required Convenances for Variance Estimation The estimators in this section require these covariances \[\underline{\mathbf{V}}=\begin{bmatrix}\mathbf{V}_{00}&\mathbf{V}_{01}&\cdots &\mathbf{V}_{0K}\\ \mathbf{V}_{10}&\mathbf{V}_{11}&&\vdots\\ \vdots&&\ddots&\\ \mathbf{V}_{K0}&\cdots&&\mathbf{V}_{KK}\end{bmatrix}\in\mathbb{R}^{(K+1)D^{2} \times(K+1)D^{2}} \tag{24}\] \[\underline{\mathbf{W}}=\mathbb{V}ar\left[\mathbf{g}\right]\in \mathbb{R}^{(K+1)D^{2}\times(K+1)D^{2}}, \tag{23}\] where \(\mathbf{g}\) can be seen in Equation (17). The elements of \(\underline{\mathbf{V}}\) are \[\mathbf{V}_{ij}=\mathbb{C}ov[f_{i},f_{j}]^{\otimes 2}+\left(\mathbf{1}_{D}^{T} \otimes\mathbb{C}ov[f_{i},f_{j}]\otimes\mathbf{1}_{D}\right)\circ\left( \mathbf{1}_{D}\otimes\mathbb{C}ov[f_{i},f_{j}]\otimes\mathbf{1}_{D}^{T}\right), \tag{25}\] where \(\mathbf{V}_{ij}\in\mathbb{R}^{D^{2}\times D^{2}}\). Elements of \(\underline{\mathbf{W}}\) are \(\mathbf{W}_{ij}=\mathbb{C}ov\left[\left(f_{i}-\mathbb{E}[f_{i}]\right)^{ \otimes 2},\left(f_{j}-\mathbb{E}[f_{j}]\right)^{\otimes 2}\right].\) **Proposition 3.4** (Covariance between Variance Estimators).: _The covariance between two MC variance estimators (2), \(\mathbf{Q}_{i}\) and \(\mathbf{Q}_{j}\), corresponding to fidelities \(i,j\) computed via input sets \(\mathcal{N},\mathcal{M}\), respectively, is_ \[\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})]=\frac{P( P-1)}{N(N-1)M(M-1)}\mathbf{V}_{ij}+\frac{P}{NM}\mathbf{W}_{ij}. \tag{26}\] Using this result, we obtain the covariance between the discrepancies as follows. **Proposition 3.5** (Variance of discrepancies for V).: _Let \(F_{ij}\) be the same as in Equation (20). The covariance between discrepancies is \(\mathbb{C}ov\left[\mathbf{\Delta}_{i},\mathbf{\Delta}_{j}\right]=F_{ij}\mathbf{ W}_{ij}+H_{ij}\mathbf{V}_{ij}\) where_ \[H_{ij}=\frac{N_{i*\cap j*}(N_{i*\cap j*}-1)}{N_{i*}(N_{i*}-1)N_{j*}(N_{j*}-1)}- \frac{N_{i*\cap j}(N_{i*\cap j*}-1)}{N_{i*}(N_{i*}-1)N_{j}(N_{j}-1)}-\frac{N_{i \cap j*}(N_{i*\cap j*}-1)}{N_{i}(N_{i}-1)N_{j*}(N_{j}-1)}+\frac{N_{i\cap j}(N_ {i\cap j}-1)}{N_{i}(N_{i}-1)N_{j}(N_{j}-1)}. \tag{27}\] Finally, the covariance between the high-fidelity and discrepancy estimators is provided. **Proposition 3.6** (Variance between high-fidelity and discrepancies for V).: _Let \(G_{i}\) be the same as in Equation (22). The covariance between the high-fidelity and discrepancy estimator is \(\mathbb{C}ov[\mathbf{Q},\mathbf{\Delta}_{i}]=J_{i}\mathbf{V}_{0i}+G_{i}\mathbf{ W}_{0i}\) where_ \[J_{i}=\frac{N_{0\cap i*}(N_{0\cap i*}-1)}{N_{0}(N_{0}-1)N_{i*}(N_{i*}-1)}- \frac{N_{0\cap i}(N_{0\cap i}-1)}{N_{0}(N_{0}-1)N_{i}(N_{i}-1)}. \tag{28}\] #### 3.2.3 Mean and Variance Estimators We now consider a combined estimator, simultaneously providing a mean and variance (MV) estimate. Required covariances for Mean and Variance Estimators In addition to the covariances from Sections 3.2.1 and 3.2.2, the covariance \(\underline{\mathbf{B}}=\mathbb{C}ov[\mathbf{f},\mathbf{g}]\in\mathbb{R}^{D(K+1 )\times D^{2}(K+1)}\) is required such that \(\mathbf{B}_{ij}=\mathbb{C}ov[f_{i},(f_{j}-\mathbb{E}[f_{j}])^{\otimes 2}]\). The stacked MC mean and variance estimator is \[\mathbf{Q}_{i}(\mathcal{N})=\begin{bmatrix}\mathbf{Q}_{\mu,i}(\mathcal{N})\\ \mathbf{Q}_{V,i}(\mathcal{N})\end{bmatrix}=\begin{bmatrix}\frac{1}{2N(N-1)} \sum_{s=1}^{N}\sum_{t=1}^{N}f_{i}(\mathbf{n}^{(t)})\\ \end{bmatrix}\in\mathbb{R}^{D+D^{2}}. 
\tag{29}\] **Proposition 3.7** (Covariance between Mean and Variance Estimators).: _The covariance between two stacked MC estimators (29), \(\mathbf{Q}_{i}\) and \(\mathbf{Q}_{j}\), corresponding to fidelities \(i,j\) computed via input sets \(\mathcal{N},\mathcal{M}\), respectively, is_ \[\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})]= \begin{bmatrix}\mathbb{C}ov[\mathbf{Q}_{\mu,i}(\mathcal{N}),\mathbf{Q}_{\mu,j }(\mathcal{M})]&\mathbb{C}ov[\mathbf{Q}_{\mu,i}(\mathcal{N}),\mathbf{Q}_{V,j }(\mathcal{M})]\\ \mathbb{C}ov[\mathbf{Q}_{V,i}^{\prime}(\mathcal{N}),\mathbf{Q}_{\mu,j}( \mathcal{M})]&\mathbb{C}ov[\mathbf{Q}_{V,i}^{\prime}(\mathcal{N}),\mathbf{Q} _{V,j}(\mathcal{M})],\end{bmatrix} \tag{30}\] _where the diagonal terms were found in Propositions 3.1 and 3.4. The covariance between the mean and variance estimator is \(\mathbb{C}ov[\mathbf{Q}_{\mu,i}(\mathcal{N}),\mathbf{Q}_{V,j}(\mathcal{M})]= \frac{P}{NM}\mathbf{B}_{ij}\)._ Using this result, we obtain the covariance between the discrepancies as follows. **Proposition 3.8** (Variance of discrepancies for MV).: _The variance of the discrepancies is_ \[\mathbb{C}ov[\mathbf{\Delta}_{\mu,i},\mathbf{\Delta}_{j}]=\begin{bmatrix} \mathbb{C}ov[\mathbf{\Delta}_{\mu,i},\mathbf{\Delta}_{\mu,j}]&\mathbb{C}ov[ \mathbf{\Delta}_{\mu,i},\mathbf{\Delta}_{V,j}]\\ \mathbb{C}ov[\mathbf{\Delta}_{V,i},\mathbf{\Delta}_{\mu,j}]&\mathbb{C}ov[ \mathbf{\Delta}_{V,i},\mathbf{\Delta}_{V,j}]\end{bmatrix}, \tag{31}\] _such that \(\mathbb{C}ov[\mathbf{\Delta}_{\mu,i},\mathbf{\Delta}_{V,j}]=F_{ij}\mathbf{B} _{ij}\) where \(F_{ij}\) is from Equation (20) and \(\mathbb{C}ov[\mathbf{\Delta}_{\mu,i},\mathbf{\Delta}_{\mu,j}]\) and \(\mathbb{C}ov[\mathbf{\Delta}_{V,i},\mathbf{\Delta}_{V,j}]\) can be found in Propositions 3.2 and 3.5 respectively._ Finally, the covariance between the high-fidelity and discrepancy estimators is provided. **Proposition 3.9** (Variance between high-fidelity and discrepancies for MV).: _The covariance between the high-fidelity and discrepancy estimators is_ \[\mathbb{C}ov[\mathbf{Q},\mathbf{\Delta}_{i}]=\begin{bmatrix}\mathbb{C}ov[ \mathbf{Q}_{\mu},\mathbf{\Delta}_{\mu,i}]&\mathbb{C}ov[\mathbf{Q}_{\mu}, \mathbf{\Delta}_{V,i}]\\ \mathbb{C}ov[\mathbf{Q}_{V},\mathbf{\Delta}_{\mu,i}]&\mathbb{C}ov[\mathbf{Q} _{V},\mathbf{\Delta}_{V,i}]\end{bmatrix}, \tag{32}\] _such that \(\mathbb{C}ov[\mathbf{Q}_{\mu},\mathbf{\Delta}_{V,i}]=G_{i}\mathbf{B}_{0i}\) and \(\mathbb{C}ov[\mathbf{Q}_{V},\mathbf{\Delta}_{\mu,i}]=G_{i}\{\underline{\mathbf{ B}}^{T}\}_{0i}\) where \(G_{i}\) is from Equation (22) and \(\mathbb{C}ov[\mathbf{Q}_{\mu},\mathbf{\Delta}_{\mu,i}]\) and \(\mathbb{C}ov[\mathbf{Q}_{V},\mathbf{\Delta}_{V,i}]\) can be found in Propositions 3.3 and 3.6 respectively._ ### Sensitivity Analysis In this section, we estimate the covariances required for main effect (ME) Sobol indices of a scalar function. In Section 3.3.1, multiple ME variances are estimated simultaneously for a scalar function. In Section 3.3.2, the variance and multiple ME variances are estimated simultaneously. For notation in this section, let \(\underline{\mathbf{f}}=\mathbf{f}\otimes\mathbf{1}_{I}\in\mathbb{R}^{I(K+1)}\) and \[\underline{\mathbf{f}}_{\underline{\mathbf{x}}}=[f_{0,1}\quad f_{0,2}\quad \cdots\quad f_{0,I}\quad f_{1,1}\quad\cdots f_{K,I}]^{T}\in\mathbb{R}^{I(K+1)}, \tag{33}\] where \(f_{i,u}\) is the \(i\)-th fidelity function with fixed \(u\)-th input. Refer to discussion of Equation (6) in Section 2.2 for further details. 
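To fix ideas before the required covariances are listed, the following generic sketch shows the pick-freeze construction behind \(\underline{\mathbf{f}}\) and \(\underline{\mathbf{f}}_{\mathbf{x}}\): for each input \(u\), the set \(\mathcal{Y}_{u}\) reuses the \(u\)-th column of \(\mathcal{Z}\) and redraws all other inputs, and the single-model main effect estimator of Equation (5) is then evaluated on a toy function. The toy function, sample size, and uniform input distribution are assumptions for illustration only.

```python
import numpy as np

def pick_freeze_evaluations(f, n, dim_in, seed=None):
    """Evaluate f on Z and on Y_u (u = 1..I), where Y_u shares only input u with Z."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(size=(n, dim_in))          # samples Z
    fz = f(z)                                  # realizations of f
    f_frozen = np.empty((dim_in, n))
    for u in range(dim_in):
        y_u = rng.uniform(size=(n, dim_in))    # independent redraw ...
        y_u[:, u] = z[:, u]                    # ... except the u-th input
        f_frozen[u] = f(y_u)                   # realizations of f_u
    return fz, f_frozen

# Toy model and the main effect estimator Q_{V_u} of Eq. (5), one value per input u.
f = lambda x: x[:, 0] + 2.0 * x[:, 1] ** 2
fz, f_frozen = pick_freeze_evaluations(f, n=10000, dim_in=2, seed=0)
v_u = (fz * f_frozen).mean(axis=1) - fz.mean() ** 2
print(v_u)
```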
Required covariances for Variance and Mean Effect Variances Estimation The estimators in this section require these covariances (34) \[\mathbb{V}ar\begin{bmatrix}\underline{\mathbf{f}}\circ\underline{ \mathbf{f}}_{\underline{\mathbf{x}}}-2\underline{\mathbf{f}}\circ\mathbb{E}[ \underline{\mathbf{f}}]\\ \mathbf{g}\end{bmatrix} =\begin{bmatrix}\underline{\mathbf{O}}&\underline{\mathbf{S}}& \underline{\mathbf{C}}\\ \mathbf{Sym.}&\underline{\mathbf{R}}&\underline{\mathbf{\underline{ \underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline{\underline \underline{\underline{\underline{\underline{\cdot\cdot\cdot\cdot\cdot\ #### 3.3.1 Main Effect Variance Estimators We now estimate multiple ME variances of a scalar function. The combined ME variance estimator is \[\mathbf{Q}_{i}(\mathcal{N})=\begin{bmatrix}\mathbf{Q}_{i,1}(\mathcal{N})\\ \mathbf{Q}_{i,2}(\mathcal{N})\\ \vdots\\ \mathbf{Q}_{i,I}(\mathcal{N})\end{bmatrix}=\begin{bmatrix}\frac{1}{N^{2}}\sum_{ j=1}^{N}\sum_{k=1}^{N}\left[f_{i}(\mathbf{z}^{(j)})f_{i}(\mathbf{y}_{1}^{(j)})-f_{i}( \mathbf{z}^{(j)})f_{i}(\mathbf{z}^{(k)})\right]\\ \frac{1}{N^{2}}\sum_{j=1}^{N}\sum_{k=1}^{N}\left[f_{i}(\mathbf{z}^{(j)})f_{i}( \mathbf{y}_{2}^{(j)})-f_{i}(\mathbf{z}^{(j)})f_{i}(\mathbf{z}^{(k)})\right]\\ \vdots\\ \frac{1}{N^{2}}\sum_{j=1}^{N}\sum_{k=1}^{N}\left[f_{i}(\mathbf{z}^{(j)})f_{i}( \mathbf{y}_{I}^{(j)})-f_{i}(\mathbf{z}^{(j)})f_{i}(\mathbf{z}^{(k)})\right] \right]\end{bmatrix}\in\mathbb{R}^{I}. \tag{36}\] **Proposition 3.10** (Covariance between Main Effect Variance Estimators).: _The covariance between two stacked MC estimators (36), \(\mathbf{Q}_{i}\) and \(\mathbf{Q}_{j}\), corresponding to fidelities \(i,j\) computed via input sets \(\mathcal{N},\mathcal{M}\), respectively, is_ \[\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{ M})] =\frac{P}{M^{2}N^{2}}\left[(N-1)(M-1)\mathbf{R}_{ij}\right.+(N-1) \{\underline{\mathbf{S}}^{T}\}_{ij} \tag{37}\] \[\left.+\left(M-1\right)\mathbf{S}_{ij}+\mathbf{O}_{ij}+2(P-1) \mathbf{U}_{ij}\left.\right].\] Using this result, we obtain the covariance between the discrepancies as follows. **Proposition 3.11** (Var. 
of discrepancies for ME).: _The covariance between discrepancies is \(\mathbb{C}ov\left[\mathbf{\Delta}_{i},\mathbf{\Delta}_{j}\right]=F_{ij} \mathbf{O}_{ij}+G_{ij}\mathbf{R}_{ij}+H_{ij}\{\underline{\mathbf{S}}^{T}\}_{ ij}+H_{ji}\mathbf{S}_{ij}+J_{ij}\mathbf{U}_{ij}\) where_ \[F_{ij} =\frac{N_{i\epsilon\gamma j*}}{N_{i*}^{2}N_{j*}^{2}}-\frac{N_{i \epsilon\gamma j}}{N_{i*}^{2}N_{j}^{2}}-\frac{N_{i\epsilon\gamma j*}}{N_{i*}^ {2}N_{j*}^{2}}+\frac{N_{i\epsilon\gamma j}}{N_{i}^{2}N_{j*}^{2}} \tag{39}\] \[G_{ij} =\frac{N_{i\epsilon\gamma j*}(N_{i*}-1)(N_{j*}-1)}{N_{i*}^{2}N_{ j*}^{2}}-\frac{N_{i\epsilon\gamma j}(N_{i*}-1)(N_{j}-1)}{N_{i*}^{2}N_{j*}^{2}}- \frac{N_{i\epsilon\gamma j*}(N_{i}-1)(N_{j*}-1)}{N_{i}^{2}N_{j*}^{2}}+\frac{N_ {i\epsilon\gamma j}(N_{i}-1)(N_{j}-1)}{N_{i}^{2}N_{j}^{2}}\] (40) \[H_{ij} =\frac{N_{i*\epsilon\gamma j*}(N_{i*}-1)}{N_{i*}^{2}N_{j*}^{2}}- \frac{N_{i\epsilon\gamma j}(N_{i*}-1)}{N_{i*}^{2}N_{j}^{2}}-\frac{N_{i\epsilon \gamma j*}(N_{i}-1)}{N_{i}^{2}N_{j*}^{2}}+\frac{N_{i\epsilon\gamma j}(N_{i}-1 )}{N_{i}^{2}N_{j}^{2}}\] (41) \[J_{ij} =2\frac{N_{i*\epsilon\gamma j*}(N_{i*\gamma j*}-1)}{N_{i*}^{2}N_ {j*}^{2}}-2\frac{N_{i\epsilon\gamma j}(N_{i*\gamma j}-1)}{N_{i*}^{2}N_{j}^{2} }-2\frac{N_{i\epsilon\gamma j*}(N_{i\epsilon\gamma j*}-1)}{N_{i}^{2}N_{j*}^{2} }+2\frac{N_{i\epsilon\gamma j}(N_{i\epsilon\gamma j}-1)}{N_{i}^{2}N_{j}^{2}}. \tag{38}\] Finally, the covariance between the high-fidelity and discrepancy estimators is provided. **Proposition 3.12** (Variance between high-fidelity and discrepancies for ME).: _The covariance between the high-fidelity and discrepancy estimator is_ \[\mathbb{C}ov[\mathbf{Q},\mathbf{\Delta}_{i}]=V_{i}\mathbf{O}_{0i }+W_{i}\mathbf{R}_{0i}+X_{i0}\{\underline{\mathbf{S}}^{T}\}_{0i}+X_{0i}\mathbf{ S}_{0i}+Z_{i}\mathbf{U}_{0i}\text{ where }N_{0*}\equiv N_{0}\text{ and} \tag{42}\] \[V_{i} =\frac{N_{0\cap i*}}{N_{0}^{2}N_{i*}^{2}}-\frac{N_{0\cap i}}{N_{0 }^{2}N_{i*}^{2}}\] (43) \[W_{i} =\frac{N_{0\cap i*}(N_{i*}-1)(N_{0}-1)}{N_{0}^{2}N_{i*}^{2}}- \frac{N_{0\cap i}(N_{i}-1)(N_{0}-1)}{N_{0}^{2}N_{i}^{2}}\] (44) \[X_{ij} =\frac{N_{i*\cap j*}(N_{j*}-1)}{N_{i*}^{2}N_{j*}^{2}}-\frac{N_{i \cap j}(N_{j}-1)}{N_{i}^{2}N_{j}^{2}}\] (45) \[Z_{i} =2\frac{N_{0\cap i*}(N_{0\cap i*}-1)}{N_{0}^{2}N_{i*}^{2}}-2\frac {N_{0\cap i}(N_{0\cap i}-1)}{N_{0}^{2}N_{i}^{2}}.\] #### 3.3.2 Variance and Main Effect Variance Estimator We now estimate the variance and multiple ME variances (MEV estimator) of a scalar function. The stacked variance and ME variance estimator is \[\mathbf{Q}_{i}(\mathcal{N})=\begin{bmatrix}\mathbf{Q}_{i,1}(\mathcal{N})\\ \mathbf{Q}_{i,2}(\mathcal{N})\\ \vdots\\ \mathbf{Q}_{i,I}(\mathcal{N})\\ \mathbf{Q}_{V,i}(\mathcal{N})\end{bmatrix}=\begin{bmatrix}\frac{1}{N^{2}}\sum_{ j=1}^{N}\sum_{k=1}^{N}\left[f_{i}(\mathbf{z}^{(j)})f_{i}(\mathbf{y}_{1}^{(j)})-f_{i}( \mathbf{z}^{(j)})f_{i}(\mathbf{z}^{(k)})\right]\\ \frac{1}{N^{2}}\sum_{j=1}^{N}\sum_{k=1}^{N}\left[f_{i}(\mathbf{z}^{(j)})f_{i}( \mathbf{y}_{2}^{(j)})-f_{i}(\mathbf{z}^{(j)})f_{i}(\mathbf{z}^{(k)})\right]\\ \vdots\\ \frac{1}{N^{2}}\sum_{j=1}^{N}\sum_{k=1}^{N}\left[f_{i}(\mathbf{z}^{(j)})f_{i}( \mathbf{y}_{1}^{(j)})-f_{i}(\mathbf{z}^{(j)})f_{i}(\mathbf{z}^{(k)})\right]\\ \frac{1}{2N(N-1)}\sum_{j=1}^{N}\sum_{k=1}^{N}\left(f_{i}(\mathbf{z}^{(j)})-f_{i}( \mathbf{z}^{(k)})\right)^{2}\end{bmatrix}\in\mathbb{R}^{I+1}. 
\tag{46}\] **Proposition 3.13** (Covariance between Variance and Main Effect Variance Estimators).: _The covariance between two stacked MC estimators (46), \(\mathbf{Q}_{i}\) and \(\mathbf{Q}_{j}\), corresponding to fidelities \(i,j\) computed via input sets \(\mathcal{N},\mathcal{M}\), respectively, is_ \[\mathbb{C}ov[\mathbf{Q}_{i}(\mathcal{N}),\mathbf{Q}_{j}(\mathcal{M})]=\begin{bmatrix} \mathbb{C}ov[\mathbf{Q}_{i,\mathbf{x}}(\mathcal{N}),\mathbf{Q}_{j,\mathbf{x} }(\mathcal{M})]&\mathbb{C}ov[\mathbf{Q}_{i,\mathbf{x}}(\mathcal{N}),\mathbf{Q }_{V,j}(\mathcal{M})]\\ \mathbb{C}ov[\mathbf{Q}_{V,i}(\mathcal{N}),\mathbf{Q}_{j,\mathbf{x}}(\mathcal{ M})]&\mathbb{C}ov[\mathbf{Q}_{V,i}(\mathcal{N}),\mathbf{Q}_{V,j}(\mathcal{M}) ]\end{bmatrix}, \tag{47}\] _where \(\mathbf{Q}_{i,\mathbf{x}}(\mathcal{N})=[\mathbf{Q}_{i,1}(\mathcal{N})\dots \mathbf{Q}_{i,I}(\mathcal{N})]^{T}\). The diagonal terms of this block-matrix can be found in Propositions 3.4 and 3.10. Now, the covariance between the ME variance estimator and the variance estimator is_ \[\mathbb{C}ov[\mathbf{Q}_{i,\mathbf{x}}(\mathcal{N}),\mathbf{Q}_{V,j}( \mathcal{M})]=\frac{P(N-1)}{MN^{2}}\mathbf{E}_{ij}+\frac{P}{MN^{2}}\mathbf{C} _{ij}+\frac{2P(P-1)}{M(M-1)N^{2}}\mathbf{U}_{ij,0}, \tag{48}\] _where \(\mathbf{U}_{ij,0}\in\mathbb{R}^{I}\) is the first column of \(\mathbf{U}_{ij}\)._ Using this result, we obtain the covariance between the discrepancies as follows. **Proposition 3.14** (Variance of discrepancies for MEV).: _The variance of the discrepancies is_ \[\mathbb{C}ov[\mathbf{\Delta}_{i},\mathbf{\Delta}_{j}]=\begin{bmatrix} \mathbb{C}ov[\mathbf{\Delta}_{i,\mathbf{x}},\mathbf{\Delta}_{j,\mathbf{x}}]& \mathbb{C}ov[\mathbf{\Delta}_{i,\mathbf{x}},\mathbf{\Delta}_{V,j}]\\ \mathbb{C}ov[\mathbf{\Delta}_{V,i},\mathbf{\Delta}_{j,\mathbf{x}}]&\mathbb{C}ov [\mathbf{\Delta}_{V,i},\mathbf{\Delta}_{V,j}]\end{bmatrix}, \tag{49}\] _where the diagonal terms can be seen in Propositions 3.11 and 3.5. Now, \(\mathbb{C}ov[\mathbf{\Delta}_{i,\mathbf{x}},\mathbf{\Delta}_{V,j}]=F_{ij} \mathbf{E}_{ij}+G_{ij}\mathbf{C}_{ij}+H_{ij}\mathbf{U}_{ij,0}\) such that_ \[F_{ij} =\frac{N_{i\leftrightarrow j*}(N_{i*}-1)}{N_{i*}^{2}N_{j*}}- \frac{N_{i\leftrightarrow j}(N_{i*}-1)}{N_{i*}^{2}N_{j}}-\frac{N_{i \leftrightarrow j*}(N_{i}-1)}{N_{i}^{2}N_{j*}}+\frac{N_{i\leftrightarrow j}( N_{i}-1)}{N_{i}^{2}N_{j}} \tag{51}\] \[G_{ij} =\frac{N_{i\leftrightarrow j*}}{N_{i*}^{2}N_{j*}}-\frac{N_{i \leftrightarrow j}}{N_{i*}^{2}N_{j}}-\frac{N_{i\leftrightarrow j*}}{N_{i}^{2} N_{j*}}+\frac{N_{i\leftrightarrow j}}{N_{i}^{2}N_{j}}\] (52) \[H_{ij} =\frac{2N_{i\leftrightarrow j*}(N_{i*\cap j*}-1)}{N_{i*}^{2}N_{j *}(N_{j*}-1)}-\frac{2N_{i\leftrightarrow j}(N_{i\leftrightarrow j}-1)}{N_{i* }^{2}N_{j}(N_{j}-1)}-\frac{2N_{i\leftrightarrow j*}(N_{i\cap j*}-1)}{N_{i}^{2} N_{j*}(N_{j*}-1)}+\frac{2N_{i\cap j}(N_{i\cap j}-1)}{N_{i}^{2}N_{j}(N_{j}-1)}. \tag{50}\] Finally, the covariance between the high-fidelity and discrepancy estimators is provided. 
**Proposition 3.15** (Variance between high-fidelity and discrepancies for MEV).: _The covariance between the high-fidelity and discrepancy estimators is_ \[\mathbb{C}ov[\mathbf{Q},\mathbf{\Delta}_{i}]=\begin{bmatrix}\mathbb{C}ov[\mathbf{Q}_{0,\mathbf{x}},\mathbf{\Delta}_{i,\mathbf{x}}]&\mathbb{C}ov[\mathbf{Q}_{0,\mathbf{x}},\mathbf{\Delta}_{V,i}]\\ \mathbb{C}ov[\mathbf{Q}_{V,0},\mathbf{\Delta}_{i,\mathbf{x}}]&\mathbb{C}ov[\mathbf{Q}_{V,0},\mathbf{\Delta}_{V,i}]\end{bmatrix}, \tag{53}\] _where the diagonal terms can be seen in Propositions 3.12 and 3.6. The covariance between the high and low fidelities of the variance and ME variance estimators is_ \[\mathbb{C}ov[\mathbf{Q}_{0,\mathbf{x}},\mathbf{\Delta}_{V,i}]=L_{0i}\mathbf{E}_{0i}+I_{0i}\mathbf{C}_{0i}+J_{0i}\mathbf{U}_{0i,0} \tag{54}\] \[\mathbb{C}ov[\mathbf{Q}_{V,0},\mathbf{\Delta}_{i,\mathbf{x}}]=L_{i0}\{\underline{\mathbf{E}}^{T}\}_{0i}+I_{i0}\{\underline{\mathbf{C}}^{T}\}_{0i}+J_{i0}\{\mathbf{U}_{i0,0}\}^{T}, \tag{55}\] _such that_ \[L_{ij}=\frac{N_{i*\cap j*}(N_{i*}-1)}{N_{i*}^{2}N_{j*}}-\frac{N_{i\cap j}(N_{i}-1)}{N_{i}^{2}N_{j}} \tag{56}\] \[I_{ij}=\frac{N_{i*\cap j*}}{N_{i*}^{2}N_{j*}}-\frac{N_{i\cap j}}{N_{i}^{2}N_{j}} \tag{57}\] \[J_{ij}=2\frac{N_{i*\cap j*}(N_{i*\cap j*}-1)}{N_{i*}^{2}N_{j*}(N_{j*}-1)}-2\frac{N_{i\cap j}(N_{i\cap j}-1)}{N_{i}^{2}N_{j}(N_{j}-1)}. \tag{58}\]

**Remark 1**.: _The Sobol estimator can also apply to other effects, not just the ME. We can let multiple indices of interest in \(\mathcal{Y}\) be dependent on \(\mathcal{Z}\), and estimate a combined effect variance. For example, in the ANOVA decomposition, we can estimate \(V_{uw}=\mathbb{V}ar[f_{uw}]\) using the ME variance estimator, and thus, the same MOACV estimator as introduced above can be used. Therefore, the MOACV Sobol estimator is not restricted to only ME variance, and other variance terms in the ANOVA decomposition can be calculated._

## 4 Synthetic Numerical Examples

In this section, the performance of the multi-output estimator is investigated on synthetic vector-valued functions. In Section 4.1, the variance of the introduced multi-output estimator is compared to individual ACV estimation, and superior performance is shown. Section 4.2 explores the estimator performance when the required pilot covariances (given in the blue boxes of Section 3) are estimated. Generally, we find that as the number of estimated outputs is increased, more pilot samples are required if the pilot covariances are unknown. Moreover, higher-order statistics, such as the ME variance, can also require significantly more samples. The optimal sample allocations for the examples below are found by minimizing the determinant of the estimator variance subject to cost constraints \[\min_{N_{0},N_{1},N_{1*},\ldots}\left|\mathbb{V}ar\left[\tilde{\mathbf{Q}}\right]\right|\quad\text{where}\quad C_{0}N_{0}+\sum_{i=1}^{K}C_{i}\left|\mathcal{Z}_{i}\cup\mathcal{Z}_{i}^{*}\right|\leq\text{Budgeted Cost}. \tag{59}\] This formulation is consistent with both determinant minimization used for optimal weight determination in (9), and variance minimization in the single ACV case. Since the covariances \(\mathbb{V}ar\left[\tilde{\mathbf{Q}}\right]\) and \(\mathbb{C}ov\left[\mathbf{Q},\underline{\mathbf{\Delta}}\right]\) are functions of the number of estimator samples (\(N_{0}\), \(N_{i}\), \(N_{i*}\), etc.), the required preliminary covariances are used to find the optimal sample allocation.
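As a rough illustration of how the allocation problem (59) can be set up numerically, the sketch below minimizes the log-determinant of the estimator covariance under a continuous relaxation of the sample counts. It is only a sketch: `est_variance` is a placeholder for a user-supplied routine that assembles \(\mathbb{V}ar[\tilde{\mathbf{Q}}]\) from the closed-form covariances of Section 3, the cost model is simplified to per-model sample counts, and none of the names below refer to MXMCPy's actual API.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def optimize_allocation(est_variance, costs, budget, x0):
    """Continuous relaxation of (59): minimize log|Var[Q~]| subject to a
    budget on the total sampling cost.

    est_variance(n) : placeholder callable returning the estimator covariance
                      matrix Var[Q~] for a vector n of per-model sample counts
                      (assumed to be assembled from the Section 3 covariances).
    costs           : per-evaluation cost of each model, high fidelity first.
    """
    def objective(n):
        _, logdet = np.linalg.slogdet(est_variance(n))
        return logdet                          # log-det is better conditioned

    cost_con = NonlinearConstraint(lambda n: float(np.dot(costs, n)), 0.0, budget)
    bounds = [(2.0, None)] * len(costs)        # at least two samples per model
    res = minimize(objective, x0, method="trust-constr",
                   bounds=bounds, constraints=[cost_con])
    return np.floor(res.x).astype(int)         # round down to respect the budget
```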
Optimal allocations are found using the MXMCPy library7[4] for mean ACV and single-statistic MOACV estimators. Since MXMCPy does not offer optimization for variance estimation, the Scipy optimization library8 is used for variance estimators. Footnote 7: [https://github.com/nasa/MXMCPy](https://github.com/nasa/MXMCPy) Footnote 8: [https://docs.scipy.org/doc/scipy/reference/optimize.html](https://docs.scipy.org/doc/scipy/reference/optimize.html) The procedures in Sections 4 and 5 estimate variance matrices. Traditional CV estimation of covariance matrices, however, may suffer from losing positive-definiteness [15], which leads to negative variance estimates. In these cases, any negative variance estimate in the following sections is set to zero. Next, the results in the following sections consider variance reduction, defined as the variance of a MC estimator divided by the variance of a multi-fidelity estimator. Finally, we use the following acronyms throughout. An MOACV estimator that estimates a single statistic, such as the mean or variance, is denoted as single MOACV (S-MOACV). An MOACV estimator that estimates multiple statistics is denoted as combined MOACV (C-MOACV).

### Comparison to ACV Estimators

The variance reduction of MOACV estimators and individual ACV estimators is compared in this section. Consider a system with three models of decreasing fidelity, each with one input \(x\sim\mathcal{U}(0,1)\) and three outputs \[f_{0}(x)=\left[\sqrt{11}x^{5}\quad x^{4}\quad\sin(2\pi x)\right] \tag{60}\] \[f_{1}(x)=\left[\sqrt{7}x^{3}\quad\sqrt{7}x^{2}\quad\cos(2\pi x+\tfrac{\pi}{2})\right] \tag{61}\] \[f_{2}(x)=\left[\tfrac{\sqrt{3}}{2}x^{2}\quad\tfrac{\sqrt{3}}{2}x\quad\cos(2\pi x+\tfrac{\pi}{4})\right]. \tag{62}\] The endowed costs of each model are shown in Table 2. We assume perfect knowledge of the covariance between the models and their outputs. These correlations are shown in Figure 1, and are computed using 100,000 pilot samples. For demonstration purposes, all estimators follow the ACV-IS sampling scheme [9]; we find that changing this scheme does not alter the qualitative conclusions.

#### 4.1.1 Mean and Variance Estimation

This section compares the variance reduction of individual ACV estimators with MOACV estimators for mean and variance estimation. To estimate the means and variances of each model output, the following estimators are constructed: six individual ACV estimators are constructed to estimate the three means and three variances; two S-MOACV estimators are constructed, one for the three means and one for the \(3\times 3\) covariance matrix; and finally, a C-MOACV estimator is created to estimate the three means and the \(3\times 3\) covariance matrix simultaneously. The same sample allocation is used for all estimators in this section, which is found by minimizing the variance of the ACV mean-estimator for the first output of \(f_{0}\) given a budget of 10 seconds and is shown in the ACV column in Table 2. Note that the sample allocations shown in the other columns will be used in the subsequent sections. Figures 2(a) and 2(b) display the variance reduction of the mean and variance estimators, respectively. As seen in Figure 2(a), both the S-MOACV and C-MOACV estimators achieve greater variance reduction than the ACV estimator of each output, by over an order of magnitude in some cases. C-MOACV also achieves improved variance reduction over the mean-specific S-MOACV.
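For reference, the three synthetic models in (60)-(62) and the pilot correlation estimate of Figure 1 used throughout this comparison can be reproduced as follows; this is an illustrative sketch under the same 100,000-sample pilot budget, not code associated with the original study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three-output models of decreasing fidelity, Eqs. (60)-(62)
def f0(x):
    return np.column_stack([np.sqrt(11) * x**5, x**4, np.sin(2 * np.pi * x)])

def f1(x):
    return np.column_stack([np.sqrt(7) * x**3, np.sqrt(7) * x**2,
                            np.cos(2 * np.pi * x + np.pi / 2)])

def f2(x):
    return np.column_stack([np.sqrt(3) / 2 * x**2, np.sqrt(3) / 2 * x,
                            np.cos(2 * np.pi * x + np.pi / 4)])

# Pilot estimate of the correlations between all model/output pairs,
# analogous to the quantities reported in Figure 1.
x_pilot = rng.uniform(0.0, 1.0, size=100_000)
stacked = np.hstack([f0(x_pilot), f1(x_pilot), f2(x_pilot)])   # (100000, 9)
pilot_corr = np.corrcoef(stacked, rowvar=False)                # 9 x 9 matrix
```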
Figure 2(b) demonstrates similar results for the estimator variance, but indicates less benefit of C-MOACV over S-MOACV. The significantly improved variance reduction from S-MOACV estimation demonstrates that multi-output estimation excels in systems with high correlations between the model outputs, as seen in Figure 1.

#### 4.1.2 Sample Allocation Optimizations

In the previous section, all the estimators used the same sample allocation obtained from minimizing the variance of the mean ACV estimator for the first model output. In this section, the performance of sample allocations that target variance reduction of the full multi-output estimator is demonstrated. The results in this section show variance reduction achieved only for the mean of the first model output, because that is all that the ACV can provide. We reinforce that the MOACV estimators also provide significant variance reduction for all other outputs as well.

\begin{table} \begin{tabular}{|c|c|c c c|} \hline \multicolumn{5}{|c|}{Sample Allocations across Optimizations} \\ \hline \hline Model & Cost & ACV & S-MOACV & C-MOACV \\ \hline \(f_{0}(x)\) & \(1\) & \(4\) & \(2\) & \(2\) \\ \(f_{1}(x)\) & \(0.01\) & \(508\) & \(499\) & \(75\) \\ \(f_{2}(x)\) & \(0.001\) & \(631\) & \(2955\) & \(7187\) \\ \hline \multicolumn{2}{|c|}{Total Cost} & \(9.711\) & \(9.945\) & \(9.937\) \\ \hline \end{tabular} \end{table} Table 2: Sample allocations for each of the compared optimizations. Figure 1: Correlations between model outputs across model fidelities. Figure 2: Variance Reduction compared to MC of each function output for each estimator type (higher is better). The C-MOACV estimator provides significantly increased variance reduction compared to ACV methods.

Table 2 shows the optimal sample allocations for various objective functions. The first column, ACV, is the ACV-specific allocation for the first model output found in the previous section. The second (S-MOACV) and third (C-MOACV) columns arise from minimizing the determinant of the estimator variance obtained by the two MOACV estimators, respectively. While the optimization method resulted in sample allocations of slightly different costs9, the variance reduction metric is cost-independent since it divides the MC estimator variance by an equivalent-cost multi-fidelity estimator variance. Footnote 9: The different allocation costs are a consequence of simplifying the discrete optimization problem into a continuous domain. The rounding of the results of the continuous optimization into the discrete solution causes the solution to not lie on the computational budget boundary. However, the S-MOACV and ACV costs only have a 2% difference. The variance reduction results are shown in Figure 3. Each of the optimizations gives the best variance reduction for its respective estimator. For example, the C-MOACV estimator that uses a C-MOACV optimal sample allocation achieves more variance reduction than a C-MOACV estimator that uses the ACV optimal sample allocation. The C-MOACV estimator outperforms the ACV estimator by at least an order of magnitude under all allocation strategies. We reinforce that while variance reduction significantly improves for the mean of the first output, the combined MOACV estimator also returns the means, variances, and covariances of all other model outputs.

### Pilot Sample Trade-off

The multi-output estimators introduced in this work require exploiting more information than simple single-output estimators.
Specifically, the boxes of Section 3 show a large number of statistics that must be known to compute the optimal CV weights. A natural question arises as to whether there are too many unknowns to allow a small set of pilot samples to yield an effective estimate. In this section, it is shown that the required number of pilot samples depends on the number of model outputs and statistics that are estimated. The type of statistic estimated is the largest contributor to the number of pilot samples that is required, while adding more outputs gradually increases the number of required samples. A new system is defined to consider a tunable number of function outputs to study convergence of increasingly complex estimators as a function of the number of pilot samples. Let the high- and low-fidelity functions be \(f_{0},f_{1},f_{2}:\mathbf{x}\rightarrow\mathbb{R}^{10}\), where \(\mathbf{x}\in[0,1)^{9}\) is uniformly distributed such that \[f_{0}(\mathbf{x})=\begin{bmatrix}\sum_{i=1}^{9}x_{i}^{3}\\ x_{1}^{3}\\ \vdots\\ x_{9}^{3}\end{bmatrix}\qquad f_{1}(\mathbf{x})=\begin{bmatrix}\sum_{i=1}^{9}\sqrt{i}x_{i}^{3}\\ \sqrt{1}x_{1}^{3}\\ \vdots\\ \sqrt{9}x_{9}^{3}\end{bmatrix}\quad\text{and}\quad f_{2}(\mathbf{x})=\begin{bmatrix}\sum_{i=1}^{9}ix_{i}^{3}\\ 1x_{1}^{3}\\ \vdots\\ 9x_{9}^{3}\end{bmatrix}. \tag{63}\] We will study the variance reduction in the mean and main-effect variances of the first output of \(f_{0}\). For mean estimation, we consider an increasing number of model outputs formed by using more components of each of the model fidelities. For ME variance estimation, we consider an increasing number of ME variances to estimate across the 9 inputs. The ACV-IS sampling scheme was chosen with the un-optimized allocations of each fidelity being \(50\), \(500\), and \(5000\).

Figure 3: Variance Reduction of mean estimation of the first model output compared to MC across optimizations. The C-MOACV estimator outperforms all other methods for each optimization.

Figure 4 shows the variance reduction achieved for different numbers of pilot samples and statistics. Note that the bottom edge of the plots, corresponding to one output for mean estimation in Figure 4(a) and one ME in Figure 4(b), corresponds to the performance of the standard ACV estimator. To determine the performance of the variance reduction with respect to the number of pilot samples, we sweep across combinations of _additional_ model outputs (from 0 to 9) and numbers of pilot samples. At each pilot sample quantity, we run 1000 realizations of the pilot sample sets and compute the estimator variances. Figure 4(a) shows the 5th percentile of the variance reduction, a statistic that demonstrates close to worst-case behavior. The white line corresponds to a variance reduction ratio of 1, where MC has equal performance to the CV approach. Notably, we see a sharp transition at 10 pilot samples where the performance improves over MC. With too few pilot samples, the estimators perform worse than MC.
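The pilot-sample sweep described above can be skeletonized as follows. This is a heavily simplified, illustrative sketch: the models follow (63), but the quantity tracked at the 5th percentile is the pilot-estimated correlation between the first outputs of \(f_{0}\) and \(f_{2}\), used here only as a stand-in for the full MOACV variance-reduction computation, which requires the estimator machinery of Section 3 and is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten-output polynomial models from (63); the input has 9 uniform components.
def make_model(weights):
    def f(x):                                   # x: (n, 9)
        terms = weights * x**3                  # broadcasts to (n, 9)
        return np.column_stack([terms.sum(axis=1), terms])   # (n, 10)
    return f

f0 = make_model(np.ones(9))
f1 = make_model(np.sqrt(np.arange(1.0, 10.0)))
f2 = make_model(np.arange(1.0, 10.0))

def pilot_statistic(x_pilot):
    # Stand-in for the MOACV variance-reduction computation: the estimated
    # correlation between the first outputs of f0 and f2, which drives the
    # quality of the control-variate weights.
    a, b = f0(x_pilot)[:, 0], f2(x_pilot)[:, 0]
    return np.corrcoef(a, b)[0, 1]

# Sweep pilot-sample counts; report the 5th percentile over 1000 trials,
# mirroring the worst-case summary plotted in Figure 4.
for n_pilot in (5, 10, 25, 50, 100, 250):
    trials = [pilot_statistic(rng.uniform(size=(n_pilot, 9)))
              for _ in range(1000)]
    print(n_pilot, np.percentile(trials, 5))
```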
Since the white contour line is near vertical, Figure 4(a) shows that adding more correlated outputs in mean estimation only requires slightly more pilot samples for significantly improved variance reduction. Next we repeat the same experiment for the ME variances of the first output of \(f_{0}\) with respect to each of the 9 inputs. As described in Section 3.3, there are many more required covariances to be estimated for multiple ME estimators than for mean estimation. The statistic of interest is also of a higher order than mean estimation. Figure 4(b) again shows the 5th percentile of the variance reduction for the first ME variance output over the 1000 trials at different combinations of outputs. In this case, the number of outputs in the estimator reflects the number of MEs that are estimated, a maximum of 9 for the 9 total inputs. Note the different axis scales between the two plots. Similarly to mean estimation, the additional statistics can improve the variance reduction. However, the number of required pilot samples to outperform ACV (green area, bottom edge of Figure 4(b)) is about 25 samples, which is double the number of samples required for mean estimation. Further, the maximum variance reduction requires around 250 pilot samples before the C-MOACV estimator variance converges. Overall, we see a similar pattern to the mean estimation with more required samples. In both mean and ME estimation, the MOACV estimators achieve larger variance reduction than ACV estimation when the ACV estimator variance has converged. Future work can focus on adaptive schemes to determine the optimal number of pilot samples.

Figure 4: 5th percentile of variance reduction (worst case) across 1000 trials as a function of pilot samples and number of outputs. The white contour line represents the same performance as MC estimation.

## 5 Application: Entry, Descent, and Landing Trajectories

Entry, descent, and landing (EDL) is the final phase of a space vehicle's mission upon entering the atmosphere of a celestial body. An important aspect of successful EDL includes prediction of trajectory and touchdown properties including locations, velocities, and states of a vehicle at given times. However, these predictions are difficult because of uncertainties due to the atmosphere, initial vehicle states, and actuator precision. Analyzing predicted outcomes due to these uncertainties is also computationally challenging because high-fidelity simulations may take hours or days to run. In this section, we consider the simulation of a sounding rocket with the aim of reducing the computational cost of estimation through multi-fidelity methods. NASA launched the Sounding Rocket One (SR-1) in September 2018 containing the Adaptable, Deployable, Entry, and Placement Technology (ADEPT), which aimed to demonstrate a deployable aeroshell used for re-entry [7]. Before launch, this flight was simulated using the Program to Optimize Simulated Trajectories II (POST2) software [25] with a standard MC approach to consider system uncertainties [7]. The POST2 software contains around 75 uncertain inputs including initial conditions (e.g. location, velocity, angle of attack), vehicle parameters (e.g. moment of inertia, deployment impulse), and environmental parameters (e.g. atmospheric uncertainty). In Warner et al. [24], ACV techniques were used to construct mean estimators for 15 trajectory QoIs, such as the touchdown latitude, longitude, velocity, and other QoIs listed in [24, Table 1].
Using multi-fidelity techniques, [24] was able to reduce the variance of estimation for many of the 15 QoIs. The goal of this section is to demonstrate further variance reduction using multi-output estimation. The following models of varying fidelity were introduced in [24] to aid the multi-fidelity estimation. The POST2 simulation is used as the high-fidelity model, which takes 219 seconds on average for a single evaluation at a fixed condition. A "reduced-physics" version of POST2 is introduced to reduce the cost of simulation by using a simplified atmospheric model, taking around 47.4 seconds per evaluation. A cheaper trajectory simulation is also created using the high-fidelity POST2 at a much larger integration time step at 2.8 seconds per evaluation, deemed the "coarse time-step" model. Finally, a support vector machine (SVM) surrogate model ("machine learning model") is trained offline using 250 high-fidelity trajectory simulations and used as a low-fidelity model taking around 0.0007 seconds per evaluation. In this section, we compare the performance of multi-output methods to ACV estimation for 9 of the 15 QoIs. The 9 QoIs were chosen for their correlations with other QoIs, as seen in Figure 5(b), where the QoIs in red are removed from this study. Section 5.1 compares ACV and MOACV methods by estimating the mean and variance of 9 QoIs. Finally, Section 5.2 uses MOACV to perform a sensitivity analysis on one QoI across three input variables.

### Mean and Variance Estimation

In this section, we build 18 ACV estimators for the mean and variance of each of the 9 QoIs; two S-MOACV estimators, one for 9 mean estimators and one for 9 variance estimators; and a single C-MOACV estimator for the mean and variance of the 9 QoIs simultaneously. In particular, the C-MOACV estimator simultaneously estimates 54 statistics (9 means and 45 unique covariances). To find the preliminary covariances, 60,000 pilot samples were used. With these samples, Figure 5(a) shows the correlations between the models across the model outputs. Notably, a few QoIs have low correlations between the low-fidelity and high-fidelity models. Traditionally, poor variance reduction is expected at these QoIs for multi-fidelity estimation. The correlations between the outputs of the high-fidelity model can be seen in Figure 5(b). The non-zero correlations are exploited in the MOACV techniques and used to provide more accurate estimation. A single ACV-IS allocation scheme was applied to all outputs to enable a fair comparison at equivalent computational costs. This sample allocation for all estimators was computed to minimize the variance of the ACV mean-estimator for the touchdown latitude (_lat-td_). Similarly to Warner et al. [24], the optimization minimizes the variance with a computational budget of \(10^{4}\) seconds. The allocated samples are 31, 0, 1124, and 22075 samples for the POST2, reduced-physics, coarse time-step, and machine learning models, respectively. We obtain the empirical variance of the estimators using 10,000 realizations of data according to the above sample allocation. The variance reduction achieved for mean and variance estimation can be seen in Figure 6. The red dotted line represents no reduction compared to the equivalent-cost MC estimator. The individual ACV estimator reduction can be seen in the blue bars. In Figure 6(a), for mean estimation, the S-MOACV and C-MOACV estimators achieve greater variance reduction than individual ACV estimation at every QoI.
In Figure 6(b), both the S-MOACV and the C-MOACV estimators achieve greater variance reduction than the ACV estimator at every QoI. Figure 6 demonstrates that the MOACV estimators can turn situations where an ACV estimator performed worse than MC into ones where performance becomes better than MC. For example, the ACV estimator for the terminal velocity "vel-term" initially performs worse than MC estimation. However, the C-MOACV estimator is able to achieve reduction better than MC by leveraging the additional correlations. For mean estimation, the S-MOACV estimator achieves a median 15% greater variance reduction than ACV estimators. The C-MOACV estimator achieves a median 39% larger variance reduction than ACV estimators, with a maximum of 113% larger reduction for "rllrt-60km". For variance estimation, the C-MOACV estimator provides a median 22% greater variance reduction than ACV estimators. The C-MOACV estimates for landing latitude and longitude performed marginally better (about 1% larger reduction) than ACV estimation. This performance is explained by the lack of correlation between latitude and longitude and the other QoIs, as seen in Figure 5(b). The S-MOACV and C-MOACV estimators are able to outperform traditional ACV methods by extracting the correlations between QoIs and statistics to reduce the variance of ACV estimation even further.

### Sensitivity Analysis

In this section, the C-MOACV estimator performs a sensitivity analysis on the roll rate at 80 km by simultaneously estimating the Sobol indices of three input variables, the initial roll rate (_IRR_) and two uncertainties in the vehicle's moment of inertia, _MOI1_ and _MOI2_. The input variables and QoI were chosen to demonstrate the ME variance estimation for variables with both high and low Sobol indices. The C-MOACV estimator contains 4 outputs, the 3 ME variance estimators and 1 total variance (\(\mathbb{V}ar\left[\text{``rllrt-80km''}\right]\)) estimator. The Sobol indices are then constructed by dividing the ME variance estimate by the total variance estimate from the C-MOACV estimator. The preliminary covariances are estimated using 5,000 pilot samples. The C-MOACV and ACV estimators use the ACV-IS sampling scheme, and the sample allocation was found by minimizing (59) with the C-MOACV variance subject to a budget of 10,000 seconds. The sample allocation is 21, 99, 200, and 7359 samples for the full-physics, reduced-physics, coarse time-step, and machine learning models, respectively.

Figure 5: Correlations between model fidelities and model outputs for the EDL problem of Section 5. Red QoIs are not estimated in this study. Figure 6: Variance reduction compared to MC estimation for each trajectory QoI. The S-MOACV and C-MOACV estimators outperform the individual ACV estimators.

Figure 7(a) shows the variance reduction compared to MC estimation for the individual ACV estimators and the C-MOACV estimator. The C-MOACV variance reduction is a median 300% larger than that of ACV estimation. Since the ME variance estimator is only defined for scalar functions, the C-MOACV estimator can only outperform the ACV estimator if there are large correlations between the MC ME variance and total variance estimators. In Figure 7(b), the correlations between 10,000 MC estimators for the variance and ME variance can be seen. Large correlations are shown between the ME variance estimates and the total variance estimates.
The MOACV estimator is able to extract these high correlations to reduce the variance of each of the estimators. We now form the Sobol index estimates by dividing the ME variance estimate by the total variance estimate for the MC, ACV, and MOACV estimators. To measure the distribution of Sobol index estimates, 10,000 estimators are constructed with random realizations of input samples. The distribution of the 10,000 Sobol index estimates is seen in Figure 8(a). The Sobol index of _IRR_ is around \(0.99\), and the _MOI_ Sobol estimates are close to \(0\). Since a Sobol index is the percentage of the model's variance for an input, almost 100% of the QoI's (roll rate at 80 km) variance is attributed to the initial roll rate. This high percentage explains the high correlation between the _IRR_ ME and the variance estimates seen in Figure 7(b). Conversely, the approximately 0% Sobol index of the _MOI_s explains the low correlation between their ME and total variance estimates.

Figure 7: Sensitivity analysis for three EDL model inputs. C-MOACV extracts correlations between MC estimators to achieve larger variance reduction compared to individual ACV estimation. Figure 8: Sobol index estimation for the EDL model. The ME variance estimates are divided by the total variance estimates to form the Sobol indices. The C-MOACV estimator significantly reduces the MSE of Sobol index estimation compared to ACV estimators.

While Figure 8(a) displays the qualitative difference between ACV and C-MOACV Sobol index estimation, we now directly compare the error in each of the Sobol index estimates. Since the Sobol index estimates are found by dividing two estimators, the resulting Sobol index estimates are biased. Instead of measuring the variance of estimation, the mean squared error (MSE) is calculated to account for the bias from the truth, which was computed using 10,000 high-fidelity samples. The MSE for the MC estimates is divided by the MSE for the multi-fidelity estimates to calculate the MSE reduction. In Figure 8(b), the MSE reduction compared to MC estimation of the Sobol index estimates is seen. The C-MOACV estimator reduces the MSE in all Sobol index estimates compared to ACV estimation. The MSE reduction for the C-MOACV estimates is a median 515% greater than the MSE reduction for ACV estimates. This section validates that the MOACV estimator can be used for more accurate sensitivity analysis and provides an example showing that the MOACV estimator can achieve further variance reduction by using the correlation between estimators.

## 6 Conclusion

In this work, we have introduced closed-form expressions for the covariance between MC estimators of multi-output functions for a variety of statistics. We have also used these results in the ACV context to construct the multi-output ACV estimator. The introduced multi-fidelity estimators include the vector-valued mean and variance estimators that utilize the correlations between models, outputs, and estimators to improve variance reduction. For sensitivity analysis, the MOACV estimator is demonstrated to simultaneously estimate the variance and multiple ME variances for more accurate Sobol indices. Numerous results demonstrate that the correlations between model fidelities, model outputs, and estimators can be extracted to provide further variance reduction. In the synthetic numerical results, the C-MOACV estimator is able to achieve up to 183 times larger variance reduction compared to a traditional ACV estimator.
The MOACV estimator is also applied to an entry, descent, and landing application to more accurately estimate 9 QoIs given a fixed computational budget. Further, a variance-based sensitivity analysis is performed to illustrate the expected improved accuracy of the C-MOACV estimator. The C-MOACV estimator is able to increase the MSE reduction of Sobol index estimates by up to 557% compared to traditional ACV estimation. In summary, multi-output estimation techniques are able to significantly outperform traditional ACV methods when high correlations exist between model outputs and estimators. In future work, the ME variance estimator can be extended to vector-valued functions. Since the variance estimator has already been defined for multiple outputs, the extension to the ME variance estimator will be able to take advantage of correlations between other model outputs. Extending the estimator to vector-valued functions would enable the sensitivity analysis to be performed on multiple model outputs and inputs simultaneously. Additionally, the introduced estimator covariances can be applied to other multi-fidelity sampling strategies, such as the MLBLUE estimator for multi-statistic estimation. New strategies can be introduced to find the optimal number of pilot samples that minimize the total model evaluation cost, such as multi-arm bandit learning approaches [26]. Finally, covariance estimation techniques can be used to mitigate the loss of positive-definiteness by estimating on a covariance manifold [15].
We provide a collection of covariances between Monte Carlo-based multi-output mean, variance, and Sobol main-effect variance estimators computed from ensembles of models. These covariances can be used in multi-fidelity uncertainty quantification strategies that augment a high-fidelity Monte Carlo estimator with ensembles of low-fidelity models. These covariance expressions are required by related approaches such as the approximate control variate approach and the multilevel best linear unbiased estimator. While the literature provides such expressions for the single-output case for statistics such as the mean and variance, our results apply to multiple outputs and multiple statistics under arbitrary sampling strategies. Following the presentation of these results, we use them with the approximate control variate method to show that exploiting multiple outputs can reduce estimator variance. Using synthetic examples, the optimal sample allocation and the effect of pilot-sample estimation
2309.13984
Near-field Hybrid Beamforming for Terahertz-band Integrated Sensing and Communications
Terahertz (THz) band communications and integrated sensing and communications (ISAC) are two main facets of the sixth generation wireless networks. In order to compensate the severe attenuation, the THz wireless systems employ large arrays, wherein the near-field beam-squint severely degrades the beamforming accuracy. Contrary to prior works that examine only either narrowband ISAC beamforming or far-field models, we introduce an alternating optimization technique for hybrid beamforming design in near-field THz-ISAC scenario. We also propose an efficient approach to compensate near-field beam-squint via baseband beamformers. Via numerical simulations, we show that the proposed approach achieves satisfactory spectral efficiency performance while accurately estimating the near-field beamformers and mitigating the beam-squint without additional hardware components.
Ahmet M. Elbir, Abdulkadir Celik, Ahmed M. Eltawil
2023-09-25T09:36:41
http://arxiv.org/abs/2309.13984v1
# Near-field Hybrid Beamforming for Terahertz-band Integrated Sensing and Communications ###### Abstract Terahertz (THz) band communications and integrated sensing and communications (ISAC) are two main facets of the sixth generation wireless networks. In order to compensate the severe attenuation, the THz wireless systems employ large arrays, wherein the near-field beam-squint severely degrades the beamforming accuracy. Contrary to prior works that examine only either narrowband ISAC beamforming or far-field models, we introduce an alternating optimization technique for hybrid beamforming design in near-field THz-ISAC scenario. We also propose an efficient approach to compensate near-field beam-squint via baseband beamformers. Via numerical simulations, we show that the proposed approach achieves satisfactory spectral efficiency performance while accurately estimating the near-field beamformers and mitigating the beam-squint without additional hardware components. Integrated sensing and communications, massive MIMO, terahertz, near-field, beamforming ## I Introduction Integrated sensing and communications (ISAC) has emerged as one of the pivotal technologies of future sixth generation (6G) wireless networks, enabling synergistic access to the scarce radio spectrum on an integrated hardware platform [1, 2]. In particular, as the allocation of the spectrum beyond 100 GHz is underway, specifically in the terahertz (THz) band, ISAC is currently witnessing frantic research endeavors to simultaneously achieve high-resolution sensing and ultrahigh-speed communications system architecture at the THz frequencies [2, 3]. Signal processing in the THz band confronts multiple impediments, such as severe path loss, limited transmission distance, and _beam-squint_. To surmount these challenges at reduced hardware costs, hybrid analog and digital beamforming architectures are employed in a massive multiple-input multiple-output (MIMO) array configuration [4, 5]. For higher spectral efficiency (SE) and lower complexity, massive MIMO systems employ wideband signal processing, wherein subcarrier-dependent (SD) baseband and subcarrier-independent (SI) analog beamformers are adopted. In particular, the weights of the analog beamformers are subject to a single (sub-)carrier frequency [6]. Therefore, the beam generated across the subcarriers points towards disparate directions, engendering the beam-squint phenomenon [7, 8]. Compared to millimeter-wave (mm-Wave) frequencies, beam-squint's ramifications are more acute in THz massive MIMO because of wider system bandwidths in the latter [8, 9]. As such, addressing beam-squint is imperative for ensuring reliable system performance. Existing techniques to compensate for the impact of beam-squint mostly employ additional hardware components, e.g., time-delayer (TD) networks [8, 10] and SD phase shifter networks [11] to virtually realize SD analog beamformers. However, these approaches are inefficient in terms of cost and power [2]. It merits noting that beam-squint compensation does not necessitate additional hardware components for estimation of the communications channel and radar target direction-of-arrival, which can be handled in the digital domain, wherein the generation of SD analog beamformers is possible. Nevertheless, supplementary (analog) hardware is required for hybrid (analog/digital) beamformer design [5, 12].
Besides beam-squint, another formidable challenge in THz-band signal processing is the short transmission distance, which may cause the signal wavefront at the receiver to become spherical in the near-field (see, e.g., Fig. 1). In particular, the wavefront is spherical rather than planar in the near-field when the transmission range is shorter than the Fraunhofer distance [13]. As a result, the beamforming algorithms must accommodate the near-field model, which depends on both direction and range information for accurate signal processing [2]. Among the works investigating the near-field signal model, [14, 15, 16] consider the near-field scenario, while neglecting the effect of beam-squint and focusing solely on mm-Wave scenarios. On the other hand, several methods have been proposed to compensate the far-field beam-squint for both THz channel estimation [17, 18] and beamforming [3, 19] applications. Furthermore, near-field THz channel estimation is explored in [20, 21], wherein an orthogonal matching pursuit (OMP)-based approach is proposed. The near-field ISAC scenario is investigated in [22], albeit exclusively for narrowband systems that do not account for the impact of beam-squint. Specifically, [22] considers a near-field multiple signal classification (MUSIC) algorithm to estimate the direction and ranges of radar targets and communication users. Nevertheless, near-field ISAC hybrid beamforming in the presence of beam-squint remains relatively unexamined. In this paper, a near-field hybrid beamforming approach is proposed for the THz-ISAC scenario. We first introduce the system model for both communications and sensing signal acquisition. Subsequently, the near-field array model and near-field beam-squint are introduced. In order to design the hybrid beamformers, an alternating algorithm is devised. Initially, a dictionary of near-field steering vectors is employed to estimate the analog beamformer. Then, the baseband beamformer and the joint radar-communications (JRC) beamformers are estimated. Finally, we introduce an efficient approach to compensate beam-squint in the baseband rather than designing SD analog beamformers [11] or TD networks [8, 10], which are hardware-inefficient. Specifically, we design a beam-squint-aware (BSA) baseband beamformer by matching the SI hybrid beamformer to the SD one. Therefore, the effect of beam-squint is conveyed from the analog domain to the baseband. _Notation:_ Throughout the paper, \((\cdot)^{\mathsf{T}}\) and \((\cdot)^{\mathsf{H}}\) denote the transpose and conjugate transpose operations, respectively. For a matrix \(\mathbf{A}\) and a vector \(\mathbf{a}\), \([\mathbf{A}]_{i,j}\), \([\mathbf{A}]_{k}\) and \([\mathbf{a}]_{l}\) correspond to the \((i,j)\)-th entry, \(k\)-th column and \(l\)-th entry, respectively. An \(N\times N\) identity matrix is represented by \(\mathbf{I}_{N}\). The pulse-shaping function is represented by \(\mathrm{sinc}(t)=\frac{\sin(\pi t)}{\pi t}\). We denote \(||\cdot||_{2}\) and \(||\cdot||_{\mathcal{F}}\) as the \(l_{2}\)-norm and Frobenius norm, respectively. ## II System Model & Problem Formulation Consider a wideband transmitter design problem in an ISAC scenario with a communication user and \(K\) radar targets located in the near-field of the base station (BS) as illustrated in Fig. 1. The dual-function BS jointly communicates with the communication user and senses the radar targets via probing signals with \(N_{\mathrm{T}}\) antennas over \(M\) subcarriers.
The user has \(N_{\mathrm{R}}\) antennas, for which \(N_{\mathrm{S}}\) data symbols \(\mathbf{s}[m]=\left[s_{1}[m],\cdots,s_{N_{\mathrm{S}}}[m]\right]^{\mathsf{T}}\in\mathbb{C}^{N_{\mathrm{S}}}\) are transmitted, where \(\mathbb{E}\{\mathbf{s}[m]\mathbf{s}^{\mathsf{H}}[m]\}=1/N_{\mathrm{S}}\mathbf{I}_{N_{\mathrm{S}}}\). ### _Communications Model_ The BS aims to transmit the data symbol vector \(\mathbf{s}[m]\in\mathbb{C}^{N_{\mathrm{S}}}\) toward the communications user. Thus, the BS first applies the SD baseband beamformer \(\mathbf{F}_{\mathrm{BB}}[m]\in\mathbb{C}^{N_{\mathrm{RF}}\times N_{\mathrm{S}}}\). Then, an \(M\)-point inverse fast Fourier transform (IFFT) is applied to convert the signal to the time-domain, and the cyclic prefix (CP) is added. Finally, the SI analog beamformer \(\mathbf{F}_{\mathrm{RF}}\in\mathbb{C}^{N_{\mathrm{T}}\times N_{\mathrm{RF}}}\) is applied, and the \(N_{\mathrm{T}}\times 1\) transmit signal becomes \[\mathbf{x}[m]=\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\mathbf{s}[m], \tag{1}\] where the analog beamformer \(\mathbf{F}_{\mathrm{RF}}\) has a constant-modulus constraint, i.e., \(|[\mathbf{F}_{\mathrm{RF}}]_{i,j}|=1/\sqrt{N_{\mathrm{T}}}\) for \(i=1,\cdots,N_{\mathrm{T}}\), \(j=1,\cdots,N_{\mathrm{RF}}\). Furthermore, we have \(\sum_{m=1}^{M}\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\|_{\mathcal{F}}^{2}=MN_{\mathrm{S}}\) to account for the total power constraint. #### II-A1 THz Channel Model In this study, we employ the Saleh-Valenzuela (S-V) multipath channel model, which models the THz channel as the superposition of received non-LoS (NLoS) paths [23, 24]. Compared to the mmWave channel, the THz channel involves limited reflected paths and negligible scattering [23, 25]. For massive MIMO systems, approximately \(5\) paths survive at \(0.3\) THz compared to approximately \(8\) paths at \(60\) GHz [25]. Especially for outdoor applications, multipath channel models are widely used to represent the THz channel for a more general scenario [23, 25]. Hence, in this work, we consider a general scenario, wherein the delay-\(\bar{d}\), \(N_{\mathrm{R}}\times N_{\mathrm{T}}\) MIMO communications channel involving \(L\) NLoS paths is given in the discrete-time domain as \[\tilde{\mathbf{H}}(\bar{d})=\sum_{l=1}^{L}\gamma_{l}\mathrm{sinc}(\bar{d}-B\tau_{l})\mathbf{a}_{\mathrm{R}}(\theta_{l},\rho_{l})\mathbf{a}_{\mathrm{T}}^{\mathsf{H}}(\phi_{l},r_{l}), \tag{2}\] where \(\gamma_{l}\in\mathbb{C}\) denotes the channel path gain, \(B\) represents the system bandwidth and \(\tau_{l}\) is the time delay of the \(l\)-th path. \(\theta_{l}\) (\(\rho_{l}\)) and \(\phi_{l}\) (\(r_{l}\)) denote the physical direction-of-arrival (DoA) and direction-of-departure (DoD) angles (ranges) of the scattering paths between the user and the BS, respectively, where \(\theta_{l}=\sin\tilde{\theta}_{l}\), \(\phi_{l}=\sin\tilde{\phi}_{l}\) and \(\tilde{\theta}_{l},\tilde{\phi}_{l}\in[-\frac{\pi}{2},\frac{\pi}{2}]\). Then, the corresponding receive and transmit steering vectors are defined as \(\mathbf{a}_{\mathrm{R}}(\theta_{l},\rho_{l})\in\mathbb{C}^{N_{\mathrm{R}}}\) and \(\mathbf{a}_{\mathrm{T}}(\phi_{l},r_{l})\in\mathbb{C}^{N_{\mathrm{T}}}\), respectively. Performing an \(M\)-point FFT of the delay-\(\bar{d}\) channel given in (2) yields \[\mathbf{H}[m]=\sum_{\bar{d}=1}^{\bar{D}-1}\tilde{\mathbf{H}}(\bar{d})e^{-\mathrm{j}\frac{2\pi m}{M}\bar{d}}, \tag{3}\] where \(\bar{D}\leq M\) is the CP length.
Then, the \(N_{\mathrm{R}}\times N_{\mathrm{T}}\) channel matrix in the frequency domain is represented by \[\mathbf{H}[m]=\sum_{l=1}^{L}\gamma_{l}\mathbf{a}_{\mathrm{R}}(\bar{\theta}_{l,m},\bar{\rho}_{l,m})\mathbf{a}_{\mathrm{T}}^{\mathsf{H}}(\bar{\phi}_{l,m},\bar{r}_{l,m})e^{-\mathrm{j}2\pi\tau_{l}f_{m}}, \tag{4}\] where \(\bar{\theta}_{l,m}\) (\(\bar{\rho}_{l,m}\)) and \(\bar{\phi}_{l,m}\) (\(\bar{r}_{l,m}\)) denote the spatial directions (ranges), which are SD and deviate from the physical directions \(\theta_{l}\), \(\phi_{l}\) (\(\rho_{l},r_{l}\)) in the beamspace due to beam-squint [2, 8]. Fig. 1: Near-field ISAC scenario, wherein the received signal wavefront for the communication user/targets in the far-field (near-field) is plane-wave (spherical-wave). On the other hand, the beam-squint-free channel matrix is \[\overline{\mathbf{H}}[m]=\sum_{l=1}^{L}\gamma_{l}\mathbf{a}_{\mathrm{R}}(\theta_{l},\rho_{l})\mathbf{a}_{\mathrm{T}}^{\mathsf{H}}(\phi_{l},r_{l})e^{-\mathrm{j}2\pi\tau_{l}f_{m}}. \tag{5}\] Then, the \(N_{\mathrm{R}}\times 1\) received signal at the communications user is \[\mathbf{y}[m]=\mathbf{H}[m]\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\mathbf{s}[m]+\mathbf{n}[m], \tag{6}\] where \(\mathbf{n}[m]\sim\mathcal{CN}(\mathbf{0},\sigma_{n}^{2}\mathbf{I}_{N_{\mathrm{R}}})\in\mathbb{C}^{N_{\mathrm{R}}}\) represents the temporally and spatially white additive Gaussian noise vector. #### II-A2 Beam-Squint Effect In wideband transmission, the prevalent assumption is the employment of a monochromatic wavelength across all subcarriers, delineated as \(\lambda_{1}=\cdots=\lambda_{M}=\frac{c_{0}}{f_{c}}\), wherein \(c_{0}\) signifies the speed of light and \(f_{c}\) represents the carrier frequency. However, the utilization of a singular analog beamformer renders this monochromatic wavelength assumption inapplicable, culminating in the formation of squinted beams that orient toward disparate spatial directions and ranges [2, 8]. Presuming an analogous beamforming architecture is employed at the user end (i.e., SI analog beamformer with SD digital beamformers), the high-frequency operation at THz implies the presence of close-proximity users in the near-field region, where planar wave propagation is not valid. At ranges shorter than the Fraunhofer distance \(d_{F}=\frac{2D^{2}}{\lambda}\), where \(D\) is the array aperture and \(\lambda=\frac{c_{0}}{f_{c}}\) is the wavelength, the near-field wavefront exhibits a spherical nature [7, 13]. For a uniform linear array (ULA), the array aperture is \(D=(N-1)d\), where \(d=\frac{\lambda}{2}\) is the element spacing. In the THz spectrum, it is imperative to employ a near-field signal model because \(r_{l}<d_{F}\). For instance, when \(f_{c}=300\) GHz and \(N=256\), the Fraunhofer distance is \(d_{F}=32.76\) m. Taking into account the spherical-wave model [20, 26, 13], we define the near-field steering vector \(\mathbf{a}_{\mathrm{T}}(\phi_{l},r_{l})\in\mathbb{C}^{N_{\mathrm{T}}}\) corresponding to the physical DoA \(\phi_{l}\) and range \(r_{l}\) as \[\mathbf{a}_{\mathrm{T}}(\phi_{l},r_{l})=\frac{1}{\sqrt{N_{\mathrm{T}}}}[e^{-\mathrm{j}2\pi\frac{1}{\lambda}r_{l}^{(1)}},\cdots,e^{-\mathrm{j}2\pi\frac{1}{\lambda}r_{l}^{(N_{\mathrm{T}})}}]^{\mathsf{T}}, \tag{7}\] where \(r_{l}^{(n)}\) is the distance between the \(l\)-th path scatterer and the \(n\)-th antenna as \[r_{l}^{(n)}=\left(r_{l}^{2}+(n-1)^{2}d^{2}-2r_{l}(n-1)d\phi_{l}\right)^{\frac{1}{2}}. \tag{8}\]
Following the Fresnel approximation [26, 20], (8) becomes \[r_{l}^{(n)}\approx r_{l}-(n-1)d\phi_{l}+(n-1)^{2}d^{2}\zeta_{l}, \tag{9}\] where \(\zeta_{l}=\frac{1-\phi_{l}^{2}}{2r_{l}}\). Then, (7) can be rewritten as \[\mathbf{a}_{\mathrm{T}}(\phi_{l},r_{l})\approx e^{-\mathrm{j}2\pi\frac{f_{c}}{c_{0}}r_{l}}\tilde{\mathbf{a}}_{\mathrm{T}}(\phi_{l},r_{l}), \tag{10}\] where the \(n\)-th element of \(\tilde{\mathbf{a}}_{\mathrm{T}}(\phi_{l},r_{l})\in\mathbb{C}^{N_{\mathrm{T}}}\) is \[[\tilde{\mathbf{a}}_{\mathrm{T}}(\phi_{l},r_{l})]_{n}=e^{\mathrm{j}2\pi\frac{f_{c}}{c_{0}}\left((n-1)d\phi_{l}-(n-1)^{2}d^{2}\zeta_{l}\right)}. \tag{11}\] The steering vector in (10) corresponds to the physical location \((\phi_{l},r_{l})\). This deviates to the spatial location \((\bar{\phi}_{m,l},\bar{r}_{m,l})\) in the beamspace because of the absence of SD analog beamformers. Then, the \(n\)-th entry of the deviated steering vector in (11) for the spatial location is \[[\tilde{\mathbf{a}}_{\mathrm{T}}(\bar{\phi}_{m,l},\bar{r}_{m,l})]_{n}=e^{\mathrm{j}2\pi\frac{f_{m}}{c_{0}}\left((n-1)d\bar{\phi}_{m,l}-(n-1)^{2}d^{2}\bar{\zeta}_{m,l}\right)}. \tag{12}\] **Theorem 1**.: _Denote \(\mathbf{u}\in\mathbb{C}^{N_{\mathrm{T}}}\) and \(\mathbf{v}_{m}\in\mathbb{C}^{N_{\mathrm{T}}}\) as the arbitrary near-field steering vectors corresponding to the physical (i.e., \(\{\phi_{l},r_{l}\}\)) and spatial (i.e., \(\{\bar{\phi}_{m,l},\bar{r}_{m,l}\}\)) locations given in (11) and (12), respectively. Then, in the spatial domain at subcarrier frequency \(f_{m}\), the array gain achieved by \(\mathbf{u}^{\mathsf{H}}\mathbf{v}_{m}\) is maximized and the generated beam is focused at the location \(\{\bar{\phi}_{m,l},\bar{r}_{m,l}\}\) such that_ \[\bar{\phi}_{m,l}=\eta_{m}\phi_{l},\ \bar{r}_{m,l}=\frac{1-\eta_{m}^{2}\phi_{l}^{2}}{\eta_{m}(1-\phi_{l}^{2})}r_{l}, \tag{13}\] _where \(\eta_{m}=\frac{f_{c}}{f_{m}}\) represents the proportional deviation of DoA/ranges._ Proof.: Please see [20]. Following (9) and (13), we define near-field beam-squint in terms of DoAs and ranges as, respectively, \[\Delta(\phi_{l},m)=\bar{\phi}_{m,l}-\phi_{l}=(\eta_{m}-1)\phi_{l}, \tag{14}\] and \(\Delta(r_{l},m)=\bar{r}_{m,l}-r_{l}\), i.e., \[\Delta(r_{l},m)=(\eta_{m}-1)\frac{1-\eta_{m}^{2}\phi_{l}^{2}}{\eta_{m}(1-\phi_{l}^{2})}r_{l}. \tag{15}\] ### _Radar Model_ The aim of the radar sensing task is to achieve the highest SNR toward targets. Denote the estimate of the \(k\)-th target direction and range by \(\Phi_{k}\) and \(r_{k}\), which can be estimated during the search phase of the radar, e.g., via the MUSIC algorithm [12]. Then, we select the radar-only beamformer as \[\mathbf{F}_{\mathrm{R}}=[\mathbf{a}_{\mathrm{T}}(\Phi_{1},r_{1}),\cdots,\mathbf{a}_{\mathrm{T}}(\Phi_{K},r_{K})]\in\mathbb{C}^{N_{\mathrm{T}}\times K}. \tag{16}\] The proposed ISAC beamformer aims to generate multiple beams toward both radar targets and the communication user. This allows us to maintain the communication between the user and the BS while tracking the radar targets, of which the initial directions/ranges are estimated.
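A small numerical sketch of the near-field array model may help fix ideas. The helper below builds the Fresnel-approximated steering vector of (9)-(11) (up to the common phase factor in (10)) and evaluates the spatial deviation predicted by (13); it is illustrative only, the naming is ours, and the 300 GHz carrier with half-wavelength spacing follows the paper's running assumptions.

```python
import numpy as np

C0 = 3e8  # speed of light [m/s]

def nearfield_steering(phi, r, n_ant, f, fc=300e9):
    """Fresnel-approximated near-field steering vector of (9)-(11) for a ULA
    with half-wavelength spacing at the carrier; `phi` is the directional sine
    and `r` the range in meters. The common phase factor of (10) is omitted."""
    d = (C0 / fc) / 2.0                         # element spacing d = lambda_c / 2
    n = np.arange(n_ant) * d                    # element positions (n - 1) d
    zeta = (1.0 - phi**2) / (2.0 * r)
    phase = 2.0 * np.pi * f / C0 * (n * phi - n**2 * zeta)
    return np.exp(1j * phase) / np.sqrt(n_ant)

def beam_squint_deviation(phi, r, fm, fc=300e9):
    """Spatial direction/range at subcarrier frequency fm to which a beam
    designed for (phi, r) at the carrier fc actually points, per (13)."""
    eta = fc / fm
    return eta * phi, (1.0 - eta**2 * phi**2) / (eta * (1.0 - phi**2)) * r

# Example: deviation at the band edge for a user at (phi = 0.5, r = 10 m).
phi_dev, r_dev = beam_squint_deviation(0.5, 10.0, fm=310e9)
```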
Using the hybrid beamforming structure, the beampattern of the radar for \(\Phi\in[-\frac{\pi}{2},\frac{\pi}{2}]\) and \(r\in[0,d_{F}]\) is \[B_{m}(\Phi,r)=\mathrm{Trace}\{\mathbf{a}_{\mathrm{T}}^{\mathsf{H}}(\Phi,r)\mathbf{R}_{\mathbf{x}}[m]\mathbf{a}_{\mathrm{T}}(\Phi,r)\}, \tag{17}\] where \(\mathbf{a}_{\mathrm{T}}(\Phi,r)\in\mathbb{C}^{N_{\mathrm{T}}}\) denotes the steering vector corresponding to an arbitrary direction \(\Phi\) and range \(r\), and \(\mathbf{R}_{\mathbf{x}}[m]\in\mathbb{C}^{N_{\mathrm{T}}\times N_{\mathrm{T}}}\) is the covariance of the transmit signal as \[\mathbf{R}_{\mathbf{x}}[m]=\mathbb{E}\{\mathbf{x}[m]\mathbf{x}^{\mathsf{H}}[m]\}=\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\mathbb{E}\{\mathbf{s}[m]\mathbf{s}^{\mathsf{H}}[m]\}\mathbf{F}_{\mathrm{BB}}^{\mathsf{H}}[m]\mathbf{F}_{\mathrm{RF}}^{\mathsf{H}}=\frac{1}{N_{\mathrm{S}}}\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\mathbf{F}_{\mathrm{BB}}^{\mathsf{H}}[m]\mathbf{F}_{\mathrm{RF}}^{\mathsf{H}}. \tag{18}\] To simultaneously obtain the desired beampattern for the radar target and provide satisfactory communications performance, the hybrid beamformer \(\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\) should be designed accordingly. ### _Problem Formulation_ Our aim in this work is to design the ISAC hybrid beamformer \(\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\) while mitigating the impact of near-field beam-squint. The design problem maximizes the SE of the overall system, which can be recast via minimizing the Euclidean distance between the hybrid beamformer \(\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\) and the unconstrained JRC beamformer \(\mathbf{F}_{\mathrm{CR}}[m]\) [3, 4]. The JRC beamformer is defined as \[\mathbf{F}_{\mathrm{CR}}[m]=\varepsilon\mathbf{F}_{\mathrm{opt}}[m]+(1-\varepsilon)\mathbf{F}_{\mathrm{R}}\mathbf{\Pi}[m], \tag{19}\] where \(\mathbf{F}_{\mathrm{opt}}[m]\in\mathbb{C}^{N_{\mathrm{T}}\times N_{\mathrm{S}}}\) is the unconstrained communications-only beamformer, which can be obtained through the singular value decomposition (SVD) of \(\mathbf{H}[m]\) [4]. \(\mathbf{\Pi}[m]\in\mathbb{C}^{K\times N_{\mathrm{S}}}\) is a unitary matrix providing the change of dimensions between \(\mathbf{F}_{\mathrm{R}}\) and \(\mathbf{F}_{\mathrm{opt}}[m]\). In (19), \(0\leq\varepsilon\leq 1\) represents the trade-off parameter between the radar and communications tasks. In particular, \(\varepsilon=1\) (\(\varepsilon=0\)) corresponds to the communications-only (radar-only) design. In ISAC, \(\varepsilon\) controls the trade-off between the accuracy/prominence of the sensing and communications tasks [2]. Now, the optimization problem becomes \[\underset{\mathbf{F}_{\mathrm{RF}},\mathbf{F}_{\mathrm{BB}}[m],\mathbf{\Pi}[m]}{\text{minimize}}\ \sum_{m=1}^{M}\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]-\mathbf{F}_{\mathrm{CR}}[m]\|_{\mathcal{F}}\] \[\text{subject to: }|[\mathbf{F}_{\mathrm{RF}}]_{i,j}|=1/\sqrt{N_{\mathrm{T}}}\] \[\sum_{m=1}^{M}\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\|_{\mathcal{F}}^{2}=MN_{\mathrm{S}}\] \[\mathbf{\Pi}[m]\mathbf{\Pi}^{\mathsf{H}}[m]=\mathbf{I}_{K}. \tag{20}\] The above optimization problem is difficult to solve due to non-convex constraints, e.g., the unit-modulus constraint, and it involves multiple unknowns \(\mathbf{F}_{\mathrm{RF}}\), \(\mathbf{F}_{\mathrm{BB}}[m]\) and \(\mathbf{\Pi}[m]\).
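The JRC beamformer (19) and the beampattern (17)-(18) translate directly into code; the following sketch is illustrative (the inputs \(\mathbf{F}_{\mathrm{opt}}[m]\), \(\mathbf{F}_{\mathrm{R}}\) and \(\mathbf{\Pi}[m]\) are assumed to be available from the SVD of \(\mathbf{H}[m]\), from (16), and from the update derived later, respectively, and the function names are ours).

```python
import numpy as np

def jrc_beamformer(F_opt_m, F_R, Pi_m, eps=0.5):
    """JRC beamformer of (19): trade-off between the communications-only
    beamformer F_opt[m] and the radar-only beamformer F_R mapped by Pi[m]."""
    return eps * F_opt_m + (1.0 - eps) * F_R @ Pi_m

def beampattern(a, F_RF, F_BB_m, n_streams):
    """Radar beampattern (17) at a single steering vector `a`, using the
    transmit covariance R_x[m] of (18)."""
    F = F_RF @ F_BB_m
    R_x = F @ F.conj().T / n_streams
    return float(np.real(a.conj().T @ R_x @ a))
```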
In order to provide an effective solution, we follow an alternating optimization approach, wherein the beamformers are optimized one-by-one while the others are fixed. Specifically, \(\mathbf{F}_{\mathrm{RF}}\) is first estimated via an OMP-based approach, wherein the columns of \(\mathbf{F}_{\mathrm{RF}}\) are selected from a dictionary of near-field steering vectors. Next, the baseband beamformer \(\mathbf{F}_{\mathrm{BB}}[m]\) and \(\mathbf{\Pi}[m]\) are estimated. Finally, a BSA baseband beamformer is designed for beam-squint compensation. ## III Hybrid Beamformer Design In order to solve the problem in (20) effectively, we propose an alternating algorithm to efficiently find the unknowns \(\mathbf{F}_{\mathrm{RF}},\mathbf{F}_{\mathrm{BB}}[m],\mathbf{\Pi}[m]\). Thus, we first introduce an OMP-based approach, wherein the analog beamformer \(\mathbf{F}_{\mathrm{RF}}\) is designed from the columns of the dictionary matrix \[\mathbf{D}=[\mathbf{a}_{\mathrm{T}}(\phi_{1},r_{1}),\cdots,\mathbf{a}_{\mathrm{T}}(\phi_{N},r_{N})]\in\mathbb{C}^{N_{\mathrm{T}}\times N}, \tag{21}\] where \(N\) is the grid size of the dictionary with \(\phi_{n}\in[-1,1]\), \(r_{n}\in(0,d_{F}]\). Then, the columns of the analog beamformer \(\mathbf{F}_{\mathrm{RF}}\) are selected from the columns of \(\mathbf{D}\) as \(\mathbf{a}_{\mathrm{T}}(\phi_{p^{\ast}},r_{p^{\ast}})\), for \(\ell=1,\cdots,N_{\mathrm{RF}}\) where \[p^{\ast}=\underset{p\in\{1,\cdots,N\}}{\mathrm{argmax}}\sum_{m=1}^{M}\left|\left[\mathbf{\Psi}[m]\mathbf{\Psi}^{\mathsf{H}}[m]\right]_{p,p}\right|, \tag{22}\] where \(\mathbf{\Psi}[m]=\mathbf{D}^{\mathsf{H}}\mathbf{F}_{\mathrm{CR}}[m]\). Once the analog beamformer \(\mathbf{F}_{\mathrm{RF}}\) is obtained and by using \(\mathbf{F}_{\mathrm{CR}}[m]\), the baseband beamformer is given by \[\mathbf{F}_{\mathrm{BB}}[m]=\mathbf{F}_{\mathrm{RF}}{}^{\dagger}\mathbf{F}_{\mathrm{CR}}[m], \tag{23}\] which is then normalized as \(\mathbf{F}_{\mathrm{BB}}[m]=\frac{\sqrt{N_{\mathrm{S}}}\mathbf{F}_{\mathrm{RF}}^{\dagger}\mathbf{F}_{\mathrm{CR}}[m]}{\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m]\|_{F}}\). The JRC beamformer is composed of the auxiliary matrix \(\mathbf{\Pi}[m]\), which can be optimized as \[\underset{\mathbf{\overline{\Pi}}}{\text{minimize}}\ \|\mathbf{F}_{\mathrm{RF}}\mathbf{\overline{F}}_{\mathrm{BB}}-\mathbf{\overline{F}}_{\mathrm{CR}}\|_{\mathcal{F}}^{2}\] \[\text{subject to: }\mathbf{\overline{\Pi}}\mathbf{\overline{\Pi}}^{\mathsf{H}}=\mathbf{I}_{K}, \tag{24}\] where \(\mathbf{\overline{F}}_{\mathrm{BB}}=[\mathbf{F}_{\mathrm{BB}}[1],\cdots,\mathbf{F}_{\mathrm{BB}}[M]]\), \(\mathbf{\overline{F}}_{\mathrm{CR}}=[\mathbf{F}_{\mathrm{CR}}[1],\cdots,\mathbf{F}_{\mathrm{CR}}[M]]\) and \(\mathbf{\overline{\Pi}}=[\mathbf{\Pi}[1],\cdots,\mathbf{\Pi}[M]]\) are \(N_{\mathrm{RF}}\times MN_{\mathrm{S}}\), \(N_{\mathrm{T}}\times MN_{\mathrm{S}}\) and \(K\times MN_{\mathrm{S}}\) matrices composed of information corresponding to all subcarriers, respectively.
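For illustration, the dictionary of (21) and the greedy column selection of (22) can be sketched as follows, reusing the `nearfield_steering` helper sketched earlier; the grid sizes and the \(d_{F}=32.76\) m aperture limit mirror the numerical setup reported later, and the helper names are ours.

```python
import numpy as np

def build_dictionary(n_ant, fc, n_phi=100, n_r=20, d_F=32.76):
    """Near-field dictionary D of (21): steering vectors on a grid of
    directional sines in [-1, 1] and ranges in (0, d_F]."""
    phis = np.linspace(-1.0, 1.0, n_phi)
    ranges = np.linspace(d_F / n_r, d_F, n_r)
    atoms, grid = [], []
    for phi in phis:
        for r in ranges:
            atoms.append(nearfield_steering(phi, r, n_ant, f=fc, fc=fc))
            grid.append((phi, r))
    return np.stack(atoms, axis=1), grid        # D has shape (n_ant, n_phi * n_r)

def select_atom(D, F_CR_list):
    """Greedy selection of (22): the dictionary column with the largest total
    correlation with the JRC beamformers over all subcarriers."""
    metric = sum((np.abs(D.conj().T @ F) ** 2).sum(axis=1) for F in F_CR_list)
    return int(np.argmax(metric))
```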
The solution to the problem in (24) can be found via the SVD of the \(K\times MN_{\mathrm{S}}\) matrix \(\frac{1}{1-\varepsilon}\mathbf{F}_{\mathrm{R}}^{\mathsf{H}}\left(\mathbf{F}_{\mathrm{RF}}\mathbf{\overline{F}}_{\mathrm{BB}}-\varepsilon\mathbf{\overline{F}}_{\mathrm{CR}}\right)\) and it is given by \[\mathbf{\overline{\Pi}}=\mathbf{\widetilde{\Pi}}\mathbf{I}_{K\times MN_{\mathrm{S}}}\mathbf{\widehat{V}}, \tag{25}\] where \(\mathbf{\widetilde{\Pi}}\mathbf{\widetilde{\Sigma}}\mathbf{\widehat{V}}\) denotes this SVD, and \(\mathbf{I}_{K\times MN_{\mathrm{S}}}=\left[\mathbf{I}_{K}|\,\mathbf{0}_{MN_{\mathrm{S}}-K\times K}^{\mathsf{T}}\right]^{\mathsf{T}}\). Then, by estimating \(\mathbf{F}_{\mathrm{BB}}[m]\) and \(\mathbf{\Pi}[m]\) iteratively, the hybrid beamformer weights are computed. The next task is to mitigate near-field beam-squint, which can be compensated if SD analog beamformers are used. However, this approach is costly since it requires employing \(MN_{\mathrm{T}}N_{\mathrm{RF}}\) (instead of \(N_{\mathrm{T}}N_{\mathrm{RF}}\)) phase-shifters. Instead, we propose an efficient approach, wherein the effect of beam-squint is handled in the baseband beamformer, which is SD. Therefore, the effect of beam-squint is conveyed from the analog domain to the baseband. Denote by \(\mathbf{\breve{F}}_{\mathrm{RF}}[m]\in\mathbb{C}^{N_{\mathrm{T}}\times N_{\mathrm{RF}}}\) the SD analog beamformer, which can be computed from the SI analog beamformer \(\mathbf{F}_{\mathrm{RF}}\) as \[\mathbf{\breve{F}}_{\mathrm{RF}}[m]=\frac{1}{\sqrt{N_{\mathrm{T}}}}\mathbf{\Omega}[m], \tag{26}\] where \(\mathbf{\Omega}[m]\in\mathbb{C}^{N_{\mathrm{T}}\times N_{\mathrm{RF}}}\) includes the angle information of \(\mathbf{F}_{\mathrm{RF}}\) as \([\mathbf{\Omega}[m]]_{i,j}=\exp\{\mathrm{j}\eta_{m}\angle\{[\mathbf{F}_{\mathrm{RF}}]_{i,j}\}\}\) for \(i=1,\cdots,N_{\text{T}}\) and \(j=1,\cdots,N_{\text{RF}}\). As a result, the angular deviation in \(\mathbf{F}_{\text{RF}}\) due to beam-squint is compensated with \(\eta_{m}\). Now, we define \(\widetilde{\mathbf{F}}_{\text{BB}}[m]\in\mathbb{C}^{N_{\text{RF}}\times N_{\text{S}}}\) as the _BSA digital beamformer_ in order to achieve the SD beamforming performance that would be obtained by using the SD analog beamformer \(\tilde{\mathbf{F}}_{\text{RF}}[m]\). Hence, we aim to match the proposed _BSA hybrid beamformer_ \(\mathbf{F}_{\text{RF}}\widetilde{\mathbf{F}}_{\text{BB}}[m]\) with the SD hybrid beamformer \(\tilde{\mathbf{F}}_{\text{RF}}[m]\mathbf{F}_{\text{BB}}[m]\) as \[\underset{\widetilde{\mathbf{F}}_{\text{BB}}[m]}{\operatorname{minimize}}\ \|\mathbf{F}_{\text{RF}}\widetilde{\mathbf{F}}_{\text{BB}}[m]-\tilde{\mathbf{F}}_{\text{RF}}[m]\mathbf{F}_{\text{BB}}[m]\|_{\mathcal{F}}, \tag{27}\] for which \(\widetilde{\mathbf{F}}_{\text{BB}}[m]\) can be obtained as \[\widetilde{\mathbf{F}}_{\text{BB}}[m]=\mathbf{F}_{\text{RF}}{}^{\dagger}\tilde{\mathbf{F}}_{\text{RF}}[m]\mathbf{F}_{\text{BB}}[m]. \tag{28}\] Because of the reduced dimension of the baseband beamformer (i.e., \(N_{\text{RF}}<N_{\text{T}}\)), the BSA approach does not completely mitigate beam-squint.
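The BSA baseband beamformer of (26)-(28) amounts to a phase rescaling followed by a least-squares fit, as the following sketch (ours, for illustration) shows:

```python
import numpy as np

def bsa_baseband(F_RF, F_BB_m, eta_m):
    """Beam-squint-aware baseband beamformer of (26)-(28): rescale the phases
    of the SI analog beamformer by eta_m = fc / fm to form the virtual SD
    analog beamformer, then refit the baseband beamformer in the least-squares
    sense so that F_RF @ F_BB_bsa tracks the SD hybrid beamformer."""
    n_T = F_RF.shape[0]
    F_RF_sd = np.exp(1j * eta_m * np.angle(F_RF)) / np.sqrt(n_T)   # Eq. (26)
    return np.linalg.pinv(F_RF) @ F_RF_sd @ F_BB_m                 # Eq. (28)
```

Since \(\mathbf{F}_{\mathrm{RF}}\) is tall (\(N_{\mathrm{T}}>N_{\mathrm{RF}}\)), the pseudo-inverse only projects the SD beamformer onto the range of \(\mathbf{F}_{\mathrm{RF}}\), which is why the compensation is approximate.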
In other words, the beam-squint can be fully mitigated only if \(\mathbf{F}_{\text{RF}}^{\dagger}\tilde{\mathbf{F}}_{\text{RF}}[m]=\mathbf{I}_{N_{\text{T}}}\) so that the resulting hybrid beamformer \(\mathbf{F}_{\text{RF}}\widetilde{\mathbf{F}}_{\text{BB}}[m]\) can be equal to \(\tilde{\mathbf{F}}_{\text{RF}}[m]\mathbf{F}_{\text{BB}}[m]\), which requires \(N_{\text{RF}}=N_{\text{T}}\). Nevertheless, the proposed approach provides satisfactory SE performance with beam-squint compensation over a wide range of bandwidths [12, 19]. Finally, the algorithmic steps of the proposed hybrid beamforming approach are presented in Algorithm 1, wherein we select the columns of the analog beamformer \(\mathbf{F}_{\text{RF}}\) from the near-field dictionary \(\mathbf{D}\) for \(\ell=1,\cdots,N_{\text{RF}}\). In this process, the similarity between the columns of \(\mathbf{D}\) (i.e., \(\mathbf{a}_{\text{T}}(\phi_{p},r_{p})\)) and the residual beamformer (i.e., \(\mathbf{F}_{\text{res}}[m]\)) is evaluated. Since this process is iterated for \(\ell=1,\cdots,N_{\text{RF}}\), its convergence behavior is similar to that of previous works [4, 27, 28]. ```
Input: \(\mathbf{D}\), \(\mathbf{F}_{\text{R}}\), \(\mathbf{F}_{\text{opt}}[m]\), \(\varepsilon\), \(\eta_{m}\).
1: \(\mathbf{F}_{\text{RF}}=\text{Empty}\), \(\mathbf{F}_{\text{res}}[m]=\mathbf{F}_{\text{CR}}[m]\).
2: for \(\ell=1,\cdots,N_{\text{RF}}\) do
3:   \(p^{*}=\operatorname{argmax}_{p}\sum_{m=1}^{M}\left|\mathbf{a}_{\text{T}}^{\mathsf{H}}(\phi_{p},r_{p})\mathbf{F}_{\text{res}}[m]\right|\).
4:   \(\mathbf{F}_{\text{RF}}=[\mathbf{F}_{\text{RF}}\,|\,\mathbf{a}_{\text{T}}(\phi_{p^{*}},r_{p^{*}})]\).
5:   \(\mathbf{F}_{\text{BB}}[m]=(\mathbf{F}_{\text{RF}}^{\mathsf{H}}\mathbf{F}_{\text{RF}})^{-1}\mathbf{F}_{\text{RF}}^{\mathsf{H}}\mathbf{F}_{\text{CR}}[m]\).
6:   Update \(\mathbf{\Pi}[m]\) from (25).
7:   Update \(\mathbf{F}_{\text{CR}}[m]\) from (19).
8:   \(\mathbf{F}_{\text{res}}[m]=\frac{\mathbf{F}_{\text{CR}}[m]-\mathbf{F}_{\text{RF}}\mathbf{F}_{\text{BB}}[m]}{\|\mathbf{F}_{\text{CR}}[m]-\mathbf{F}_{\text{RF}}\mathbf{F}_{\text{BB}}[m]\|_{\mathcal{F}}}\).
9: end for
10: \(\mathbf{F}_{\text{BB}}[m]=\sqrt{N_{\text{S}}}\frac{\mathbf{F}_{\text{BB}}[m]}{\|\mathbf{F}_{\text{RF}}\mathbf{F}_{\text{BB}}[m]\|_{\mathcal{F}}}\).
11: \(\tilde{\mathbf{F}}_{\text{RF}}[m]=\frac{1}{\sqrt{N_{\text{T}}}}\mathbf{\Omega}[m]\) where \([\mathbf{\Omega}[m]]_{i,j}=\exp\{\mathrm{j}\eta_{m}\angle\{[\mathbf{F}_{\text{RF}}]_{i,j}\}\}\).
12: \(\widetilde{\mathbf{F}}_{\text{BB}}[m]=\mathbf{F}_{\text{RF}}^{\dagger}\tilde{\mathbf{F}}_{\text{RF}}[m]\mathbf{F}_{\text{BB}}[m]\).
```
**Algorithm 1** ISAC hybrid beamforming ## IV Numerical Experiments We evaluated the performance of our hybrid beamforming technique in comparison with the fully digital (FD) ISAC and communications-only beamformers as well as a far-field-based design, in terms of SE, averaged over \(500\) Monte Carlo trials. The number of antennas at the BS and the user are \(N_{\text{T}}=128\) and \(N_{\text{R}}=16\), respectively. The carrier frequency and the bandwidth are selected as \(f_{c}=300\) GHz and \(B=20\) GHz, respectively, and the number of subcarriers is \(M=64\). The number of targets is \(K=3\), the number of spatial paths is \(L=8\), the number of RF chains is \(N_{\text{RF}}=8\) and the trade-off parameter is \(\varepsilon=0.5\). The dictionary grid size is obtained from \(N_{\phi}=100\), \(N_{r}=20\), yielding \(N=N_{\phi}N_{r}=2000\). 
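As a rough illustration of how such a dictionary grid could be assembled, the sketch below samples \(N_{\phi}=100\) directions and \(N_{r}=20\) ranges and fills the columns of \(\mathbf{D}\) with spherical-wave steering vectors for a half-wavelength uniform linear array. The steering-vector expression and the antenna spacing are the standard near-field model and are assumed here for illustration only, since the paper's exact definition of \(\mathbf{a}_{\mathrm{T}}(\phi,r)\) appears earlier in the text and is not reproduced above.

```python
import numpy as np

# Grid sizes and array size from the experimental setup; the rest is assumed.
N_T, f_c = 128, 300e9
c = 3e8
lam = c / f_c
d = lam / 2                                    # half-wavelength spacing (assumption)
d_F = 2 * (N_T * d) ** 2 / lam                 # Fraunhofer distance of the aperture

N_phi, N_r = 100, 20
phi_grid = np.linspace(-1, 1, N_phi)           # phi_n in [-1, 1] (sine space)
r_grid = np.linspace(d_F / N_r, d_F, N_r)      # r_n in (0, d_F]

def near_field_steering(phi, r, N_T=N_T, d=d, lam=lam):
    """Assumed spherical-wave steering vector for an N_T-element ULA:
    element n sees the exact distance r_n = sqrt(r^2 + (n d)^2 - 2 r n d phi)."""
    n = np.arange(N_T) - (N_T - 1) / 2
    r_n = np.sqrt(r ** 2 + (n * d) ** 2 - 2 * r * n * d * phi)
    return np.exp(-1j * 2 * np.pi * (r_n - r) / lam) / np.sqrt(N_T)

# Dictionary D in eq. (21): one column per (phi, r) grid point, N = 2000 columns.
D = np.stack([near_field_steering(p, r) for r in r_grid for p in phi_grid], axis=1)
assert D.shape == (N_T, N_phi * N_r)
```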
Targets and path directions (ranges) are uniformly drawn at random from the intervals \([-\frac{\pi}{3},\frac{\pi}{3}]\) (\([5,30]\) m) [3]. Fig. 2 delineates the SE performance of the competing algorithms. We can see that the communications-only (\(\varepsilon=1\)) FD beamformer provides the highest SE while the ISAC (\(\varepsilon=0.5\)) FD beamformer provides marginally reduced SE due to power allocation for both communications and sensing tasks. The proposed beamforming approach exhibits performance closely resembling that of the FD beamformers. A significant performance degradation is also observed when the near-field model is overlooked in favor of the far-field array model (Fig. 2: SE performance versus SNR). Fig. 3 shows the SE performance against the bandwidth \(B\in[0,40]\) GHz. We can see that the proposed hybrid beamforming scheme achieves satisfactory SE performance up to \(B\leq 30\) GHz, beyond which its performance slightly declines. This degradation arises because the BSA baseband beamformer's low-dimensional structure cannot adequately compensate for the beam-squint. Nevertheless, the hybrid beamforming scheme closely trails the performance of the FD beamforming and yields a substantial SE improvement compared to the far-field model. ## V Summary We introduced a hybrid beamforming scheme for THz-ISAC systems in the near-field scenario. The analog beamformers are designed based on a dictionary composed of near-field steering vectors. Then, the baseband and JRC beamformers are obtained. In order to cope with the beam-squint problem in the near-field scenario, we utilized the baseband beamformer to convey the impact of beam-squint from the analog domain to the baseband without requiring additional hardware components. As future work, we reserve the study of a more challenging scenario, e.g., joint near-field ISAC precoder and combiner design.
**Terahertz (THz) communications and integrated sensing and communications (ISAC) are key aspects of sixth-generation wireless networks, and such systems employ large antenna arrays to compensate for severe attenuation. Near-field beam-squint degrades beamforming accuracy. Whereas prior work has considered only narrowband ISAC beamforming or only far-field models, this work introduces an alternating optimization technique for hybrid beamforming design in the near-field THz-ISAC scenario. We further propose an efficient method to compensate for near-field beam-squint by exploiting the baseband beamformer. Numerical simulation results show that the proposed approach accurately performs near-field beamforming and mitigates beam-squint, achieving satisfactory spectral efficiency without additional hardware components.**
2309.12442
Folding Rays: a Bimanual Occluded Target Interaction Technique
As Virtual Reality becomes commonplace in the world, it is important for developers to focus on user interaction with the virtual world. Currently, there are limitations to some selection and navigation techniques that have not yet been completely overcome. Focusing specifically on enhancing ray-casting, we present the advanced technique of folding rays which allows for the selection of occluded targets without any unnecessary physical navigation around a virtual environment. By improving upon current approaches, our technique allows for the selection of these targets without any manipulation of the virtual environment itself using rays that can bend at user-determined points. With their potential to be used in conjunction with teleportation as a virtual navigation technique, folding rays can be used in a variety of scenarios to enhance a user's interactive experience in virtual environments.
DongHoon Kim, Preston Bruner, Isaac Cho
2023-09-21T19:23:55
http://arxiv.org/abs/2309.12442v1
# Folding Rays: a Bimanual Occluded Target Interaction Technique ###### Abstract As Virtual Reality becomes commonplace in the world, it is important for developers to focus on user interaction with the virtual world. Currently, there are limitations to some selection and navigation techniques that have not yet been completely overcome. Focusing specifically on enhancing ray-casting, we present the advanced technique of folding rays which allows for the selection of occluded targets without any unnecessary physical navigation around a virtual environment. By improving upon current approaches, our technique allows for the selection of these targets without any manipulation of the virtual environment itself using rays that can bend at user-determined points. With their potential to be used in conjunction with teleportation as a virtual navigation technique, folding rays can be used in a variety of scenarios to enhance a user's interactive experience in virtual environments. Human-centered computing, Human computer interaction (HCI), Interaction techniques, Pointing; Human-centered computing, Human computer interaction (HCI), Interaction paradigms, Virtual reality ## 1 Introduction Selection is an essential interaction technique in immersive virtual environments. Various techniques have been introduced to translate real-world actions into virtual ones, allowing the user to select virtual objects using familiar paradigms including simple virtual hand [8] and ray-casting [7]. Although these techniques are intuitive and widely employed, their effectiveness is limited when it comes to selecting virtual objects that are not directly visible to the user. When a target object is occluded by other objects, the user is typically required to navigate in virtual environments to bring the occluded object into view before selecting it using one of the selection techniques. In particular, in restrictive real-world scenarios, such as when the user is in a small physical room or sitting in a chair while using a Virtual Reality (VR) headset, physically moving to navigate around the virtual environment can be challenging. To address this limitation, it would be beneficial if the user could select an occluded object without the need for navigation. This paper introduces folding rays, a technique that allows the user to select an occluded object. The user can create a ray and then fold it at a desired point. A camera viewport will appear at the fold point, allowing the user to see the ray beyond the fold point. The user may repeat the folding process from within that viewport, giving them full control over the number of folds and the direction of the ray at each fold point. This seamless transition between the views at each fold allows the user to remain comfortably stationary while directing the ray to their liking. This complete control over the folded ray results in a previously unreachable and occluded object becoming accessible to the user without navigation. ## 2 Related work Ray-casting is a widely used interaction technique, which is a part of "virtual pointing", in immersive environments. Typically, rays originate from a virtual representation of an input device so that the user may determine the origin and direction of the ray being cast [7]. This gives a level of control when rays are used to select objects further than an arm's reach away from the user. However, the selection of targets with traditional ray-casting is limited to targets that are visible to the user from where they are currently located. 
Several approaches have been developed and implemented to overcome this limitation [9]. For example, Alpha Cursor is one such technique that attaches a movable cursor to a selection ray. The user pushes the joystick backward or forward to move the cursor closer or further along the ray. If the cursor goes past an object, that object then becomes transparent, providing a view of the objects behind it [2]. Another example called Smash Probe involves spreading out objects that collide with the selection ray to give a view into the world behind them [9]. Though this technique allows users to look behind occluding objects, the selection of a desired target may be difficult because of the spreading of objects. Our folding ray-casting technique aims to provide similar insights into a virtual world behind occluding objects but without the need to alter any existing object's properties such as transparency or position. Users will be able to fold their ray around occluding objects without needing to directly interact with them. This will give users an unhindered understanding of the virtual environment around them as no objects will be forcefully manipulated. The Heuristic Ray technique introduces a curve target indicator, which looks like a curved ray, to indicate a selected target based on a calculated score [6]. While this technique has the potential to select occluded targets, it may face challenges when multiple occluded targets are close together, as the scoring algorithm may make precise selection difficult. We designed our folding ray-casting technique with precision in mind. Users will have full control over the "bend" or "folding" of their ray, so they will never have to question which target they will be selecting when they pull the trigger. ## 3 Folding Rays: Implementation Our folding ray-casting technique builds upon the general ray-casting and magic mirror techniques [4]. Like the general ray-casting approach, our approach utilizes a ray that extends infinitely in one direction, originating from the user's dominant hand (the main ray). The user can select an object that intersects with the ray by pressing a trigger button. Figure 1 provides a visual explanation of the folding technique's operation. In our folding ray-casting technique, we include an additional ray originating from the less-dominant hand (the secondary ray). This ray is guided in a similar manner to the main ray by moving the less-dominant hand. The secondary ray is utilized to create a folding point along with the main ray. When the secondary ray intersects with the main ray within a small distance threshold, a crossing-point sphere will appear on the main ray, indicating the folding point. Pressing the primary button on the dominant hand's controller creates a camera in front of the point indicated by the sphere, aligning with the user's view direction. Once the camera is created at the folding point, a camera viewport window (square screen in Figure 1) appears in front of the user at a comfortable viewing distance of 1.5 m. This window shows the view from the folding point. The window follows the user's view direction to allow the user to see a 360° view of the scene from the folding point. The user can create multiple folding points by placing the rays inside the window, and each new folding point updates the window to show the view from the camera at the respective folding point. Figure 2 shows an example of selecting an occluded target. 
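To make the folding-point mechanic concrete, the following sketch computes the closest points between the main ray and the secondary ray and creates a fold point when they pass within a small distance threshold. The vector math is standard closest-point-between-rays geometry; the 0.05 m threshold, the function names, and the Python/NumPy setting are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def closest_points_between_rays(o1, d1, o2, d2):
    """Closest points on two rays o + t*d (t >= 0), with unit direction vectors."""
    w = o1 - o2
    b = float(d1 @ d2)
    d_ = float(d1 @ w)
    e = float(d2 @ w)
    denom = 1.0 - b * b
    if np.isclose(denom, 0.0):            # near-parallel rays: no useful fold point
        t1, t2 = 0.0, e
    else:
        t1 = (b * e - d_) / denom
        t2 = (e - b * d_) / denom
    t1, t2 = max(t1, 0.0), max(t2, 0.0)   # clamp to the forward half-lines
    return o1 + t1 * d1, o2 + t2 * d2

def try_create_fold(main_origin, main_dir, sec_origin, sec_dir, threshold=0.05):
    """Return a fold point on the main ray if the secondary ray passes close enough."""
    p_main, p_sec = closest_points_between_rays(main_origin, main_dir,
                                                sec_origin, sec_dir)
    if np.linalg.norm(p_main - p_sec) <= threshold:
        # The crossing-point sphere would be drawn at p_main; pressing the primary
        # button would spawn the viewport camera here, facing main_dir.
        return p_main
    return None

# Example: dominant-hand ray pointing forward, secondary ray sweeping across it.
fold = try_create_fold(np.array([0.0, 1.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                       np.array([0.3, 1.2, 1.0]), np.array([-1.0, 0.0, 0.0]))
print(fold)   # fold point at (0.0, 1.2, 1.0)
```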
Once the target object is visible in the window, the user can point the main ray at it to select the object. ## 4 Discussion Our folding ray-casting technique has several benefits. First of all, it allows for selecting occluded targets without requiring navigation. By allowing users to create multiple folding points along the ray, they can select targets occluded by complex mazes of objects. This would not be possible with current interaction techniques, which can only interact with a non-occluded target [1]. With our technique, users can search for where an unknown target object is located and interact with it. This makes the technique very flexible in its use cases. Users with limited space or range of motion will still be able to take full advantage of this technique as no navigation techniques are needed. Second, the technique addresses the disorientation issue. By implementing the viewport screen, we prevent the user from becoming disoriented while manipulating the camera during the folding process. Instead of taking up the user's entire view with the new camera's position, which has been shown to cause disorientation in applications like teleportation [3], the viewport screen only takes up a portion of the user's view. Outside of that screen, the user can still see the environment from the original location so that they can easily perceive their surroundings and not lose track of their orientation. Lastly, the technique can serve as a navigation method. Some current navigation techniques use a form of ray-casting, such as the arc method, to indicate a desired location for teleportation. According to a study that compared multiple teleportation techniques, the arc method was one of the most preferred and most efficient methods for quick navigation [5]. Unlike the arc method, which limits a single teleportation to a target destination within the user's view, our technique for determining the destination involves rays that can fold multiple times, allowing the user to navigate complex environments while teleporting much less frequently. There are some limitations with folding rays that could be further explored in the future. One limitation is the challenge of precisely selecting a folding point that is located far along the ray, which is similar to a limitation of target selection with traditional ray-casting. Additionally, the square viewport screen provides a limited field of view into the environment from the folding point. ## 5 Conclusion Interacting with occluded objects in a 3D environment is a significant challenge in immersive environments. The conventional ray-casting technique is limited in that it can only interact with visible objects from where the user is currently located. Our folding ray-casting technique provides a solution for that limitation using the folding ray and viewport screen. Using the folding ray technique, users can easily interact with any object regardless of where they are and how occluded the target may be. With their potential uses for both interaction and navigation, folding rays can be used in a wide variety of applications to assist users with limited physical space and range of motion.
As Virtual Reality becomes commonplace in the world, developers need to focus on user interaction with the virtual world. Currently, some selection and navigation techniques have limitations that have not yet been completely overcome. Focusing specifically on enhancing ray-casting, we present the folding rays technique, which allows the selection of occluded targets without any unnecessary physical navigation around a virtual environment. By improving upon existing approaches, this technique enables the selection of these targets without manipulating the virtual environment itself, using rays that can bend at user-specified points. Combined with teleportation as a virtual navigation technique, folding rays have the potential to enhance a user's interactive experience in virtual environments.
2309.04234
Astrometric VLBI observations of H$_2$O masers in an extreme OH/IR star candidate NSV17351
Results of astrometric very long baseline interferometry (VLBI) observations towards an extreme OH/IR star candidate NSV17351 are presented. We used the VERA (VLBI Exploration of Radio Astrometry) VLBI array to observe 22\,GHz H$_2$O masers of NSV17351. We derived an annual parallax of 0.247$\pm$0.035 mas which corresponds to a distance of 4.05$\pm$0.59 kpc. By averaging the proper motions of 15 maser spots, we obtained the systemic proper motion of NSV17351 to be ($\mu_{\alpha}\cos{\delta}, \mu_{\delta}$)$^{\mathrm{avg}}$ $=$ ($-$1.19 $\pm$ 0.11, 1.30 $\pm$ 0.19) mas\,yr$^{-1}$. The maser spots spread out over a region of 20 mas $\times$ 30 mas, which can be converted to a spatial distribution of $\sim$80 au $\times$ $\sim$120 au at the source distance. Internal motions of the maser spots suggest an outward moving maser region with respect to the estimated position of the central star. From single dish monitoring of the H$_2$O maser emission, we estimate the pulsation period of NSV17351 to be 1122$\pm$24 days. This is the first report of the periodic activity of NSV17351, indicating that NSV17351 could have a mass of $\sim$4\,M$_{\odot}$. We confirmed that the time variation of H$_2$O masers can be used as a period estimator of variable OH/IR stars. Furthermore, by inspecting dozens of double-peaked H$_2$O maser spectra from the last 40 years, we detected a long-term acceleration in the radial velocity of the circumstellar matter to be $0.17\pm0.03$ km\,s$^{-1}$\,yr$^{-1}$ Finally, we determined the position and kinematics of NSV17351 in the Milky Way Galaxy and found that NSV17351 is located in an interarm region between the Outer and Perseus arms. We note that astrometric VLBI observations towards extreme OH/IR stars are useful samples for studies of the Galactic dynamics.
Akiharu Nakagawa, Atsushi Morita, Nobuyuki Sakai, Tomoharu Kurayama, Hiroshi Sudou, Gabor Orosz, Akito Yuda, Daichi Kaseda, Masako Matsuno, Shota Hamada, Toshihiro Omodaka, Yuji Ueno, Katsunori M. Shibata, Yoshiaki Tamura, Takaaki Jike, Ken Hirano, Mareki Honma
2023-09-08T09:40:26
http://arxiv.org/abs/2309.04234v1
# Astrometric VLBI observations of H\({}_{2}\)O masersin an extreme OH/IR star candidate NSV17351 ###### Abstract Results of astrometric very long baseline interferometry (VLBI) observations towards an ex treme OH/IR star candidate NSV17351 are presented. We used the VERA (VLBI Exploration of Radio Astrometry) VLBI array to observe 22 GHz H\({}_{2}\)O masers of NSV17351. We derived an annual parallax of 0.247\(\pm\)0.035 mas which corresponds to a distance of 4.05\(\pm\)0.59 kpc. By averaging the proper motions of 15 maser spots, we obtained the systemic proper motion of NSV17351 to be \((\mu_{a}\cos\delta,\mu_{a})^{\rm avg}=(-1.19\pm 0.11,\,1.30\pm 0.19)\) mas yr\({}^{-1}\). The maser spots spread out over a region of 20 mas \(\times\) 30 mas, which can be converted to a spatial distribution of \(\sim\)80 au \(\times\)\(\sim\)120 au at the source distance. Internal motions of the maser spots suggest an outward moving maser region with respect to the estimated position of the central star. From single dish monitoring of the H\({}_{2}\)O maser emission, we estimate the pulsation period of NSV17351 to be 1122\(\pm\)24 days. This is the first report of the periodic activity of NSV17351, indicating that NSV17351 could have a mass of \(\sim\)4 M\({}_{\odot}\). We confirmed that the time variation of H\({}_{2}\)O masers can be used as a period estimator of variable OH/IR stars. Furthermore, by inspecting dozens of double-peaked H\({}_{2}\)O maser spectra from the last 40 years, we detected a long-term acceleration in the radial velocity of the circumstellar matter to be \(0.17\pm 0.03\) km s\({}^{-1}\) yr\({}^{-1}\)Finally, we determined the position and kinematics of NSV17351 in the Milky Way Galaxy and found that NSV17351 is located in an interarm region between the Outer and Perseus arms. We note that astrometric VLBI observations towards extreme OH/IR stars are useful samples for studies of the Galactic dynamics. Astrometry: -- masers (H\({}_{2}\)O) -- stars: individual (NSV17351) -- stars: variable: + Footnote †: journal: Astroparticle Physics ## 1 Introduction Asymptotic Giant Branch (AGB) stars are known to be at the final stage of evolution of stars with initial masses of 0.8 to 10 \(M_{\odot}\)(e.g. Karakas & Lattanzio, 2014). Among them, the stars identified as bright infrared and OH maser emitters are referred to as OH/IR stars. They represent thick circumstellar envelopes and high mass loss ratio, sometimes up to \(10^{-4}M_{\odot}\) yr\({}^{-1}\)(te Lintel Hekkert et al., 1991). OH/IR stars are thought to be a group of evolved AGB stars at the stage before they evolve to planetary nebulae (te Lintel Hekkert et al., 1991; Etoka & Diamond, 2006; Kamizuka et al., 2020). Same as the other types of AGB stars like Mira variables and semiregular variables, OH/IR stars often represent stellar pulsation in optical and infrared bands with typical pulsation periods of 100 to 1000 days. Engels et al. (1983) determined pulsation periods between 500 to 1800 days for 15 OH/IR stars from infrared (\(K\)-band) monitoring observation. A subclass of OH/IR stars undergoing especially intensive mass loss are recognized as extreme OH/IR stars (Justtanont et al., 2013). According to the study by Hofner & Olofsson (2018), we find that sources with such high mass loss ratio have exceedingly long pulsation period, i.e., \(\geq\)800 days. 
Furthermore, at the late stage of AGB phase, it is known that there is also a fraction of OH/IR stars showing no or little variability, called non-variable OH/IR stars (Engels, 2002). Towards bright OH/IR stars in Baud's catalog (Baud et al., 1981), Herman & Habing (1985) monitored the OH maser emission and found that 25% of the targets were non-variable OH/IR stars. In the evolution from AGB to post-AGB phase, it is thought that optical variability gradually diminishes as ceases the pulsation and heavy mass loss from the central star (e.g. Kamizuka et al., 2020). Study of the circumstellar matter is important for understanding of the chemical properties of the Galaxy and the evolution of stars. AGB stars play a key role in the formation and transportation of circumstellar matter. OH/IR stars often host OH, H\({}_{2}\)O, and SiO masers in their circumstellar envelopes (Engels, 1979; Engels et al., 1986; Nyman et al., 1986). In previous research, a large amount of OH/IR stars were monitored using 1612, 1665, and 1667 MHz OH masers for determination of the OH maser flux density and its time variation (see e.g., Engels et al., 2012). Very long baseline interferometry (VLBI) observations of these masers revealed detailed structure and dynamics of circumstellar matters of AGB stars. Among them, the study by Diamond & Kemball 2003 is one of the most representative. Movies of SiO masers of TX Cam revealed ring like molecular outflows of masers explained with tangential amplification. The SiO maser shell shows significant asymmetry and can be described as a fragmented or irregular ellipsoid. Individual SiO maser components have radial motions in the range of \(\sim\)5 to 10 km s\({}^{-1}\). Decin et al. (2010) observed a nearby oxygen-rich AGB star IK Tau and presented its expansion velocity profile. The velocity data in their study were obtained from VLBI mapping studies of maser emissions from SiO, H\({}_{2}\)O, and OH. The CO expansion velocity derived from ground-based CO \(\mathbf{J}=1-0\) data was also considered in the study. They clarified the velocity field around an AGB star at a certain evolution phase through a wide range of radial distances, from an order of 10\({}^{13}\) cm to 10\({}^{16}\) cm (\(\sim\)1 au to \(\sim\)1000 au). The revealed velocity profile can be an evidence for radial acceleration in the expansion velocity of the circumstellar matter. Since H\({}_{2}\)O masers occur at a radial distance of 10\({}^{14}\) cm to 10\({}^{15}\) cm (\(\sim\)10 au to \(\sim\)100 au) where we can expect remarkable acceleration of the circumstellar envelopes, we try to explore the long-term acceleration using H\({}_{2}\)O maser data in the literature and our own observations. NSV17351 (also named as OH224.3\(-\)1.3, IRC\(-\)10151, CRL1074, and IRAS07054\(-\)1039) is an OH/IR star (Le Squeren et al., 1979) with a spectral type of M8 (Hansen & Blanco, 1975). It has OH maser emissions at 1612, 1665, and 1667 MHz (Le Squeren et al., 1979; te Lintel Hekkert et al. 1989), SiO masers at 86 GHz (Ukita & Goldsmith 1984; Engels & Heske 1989), 43 GHz (Kim et al. 2010), and H\({}_{2}\)O maser at 22 GHz (Blitz & Lada 1979; Crocker & Hagen 1983; Cesaroni et al. 1988; Takaba et al. 1994; Kim et al. 2010). In a study by te Lintel Hekkert et al. (1989), a stellar LSR velocity of the source is reported to be 52 km s\({}^{-1}\) with no indication of its uncertainty. 
Accrding to a study of SiO maser by Ukita & Goldsmith (1984), a single narrow peak at 50 km s\({}^{-1}\) with linewidth of \(\thicksim\)4 km s\({}^{-1}\) was detected. And also in Kim et al. (2010), LSR velocities of 51.8 and 51.1 km s\({}^{-1}\) are presented. Based on these velocities in previous studies, it is reasonable to assume the uncertainty in the stellar LSR velocity is \(\thicksim\)2 km s\({}^{-1}\). Though the pulsation period of NSV17351 is not yet clearly given in the literature, from our observations we found the pulsation period of the source to be longer than 800 days, suggesting that NSV17351 is a candidate extreme OH/IR star. In order to obtain physical parameters of the celestial object, distance of the source is crucial. The phase-lag method is known as a technique to derive distances of OH/IR stars (van Langevelde et al. 1990; Engels et al. 2015; Etoka et al. 2018). Distances to several OH/IR stars using the phase-lag method are reported in Engels et al. (2015). However, uncertainties of the distances from the phase-lag method are about 20 %. Recently, the Gaia Data Release 3 (Gaia DR3; Gaia Collaboration et al. 2022)1 provided a trigonometric parallax of 0.088\(\pm\)0.147 mas for NSV17351. Proper motion is also reported to be \(-0.03\pm\)0.16 mas yr\({}^{-1}\) and 1.88\(\pm\)0.19 mas yr\({}^{-1}\) in right ascension (RA) and declination (DEC), respectively. Gaia is very sensitive to extinction and size of star that is comparable to the measured parallax itself. Therefore, astrometry of OH/IR stars are considered to be essentially difficult for Gaia. Footnote 1: Gaia Data Release 3; [https://www.cosmos.esa.int/web/gaia/data-release-3](https://www.cosmos.esa.int/web/gaia/data-release-3) Trigonometric parallax distance measurements to a couples of long-period variables using astrometric VLBI observations have been reported (see e.g., Nakagawa et al. 2016). However, there has been a few VLBI astrometric results for OH/IR stars. A study by Orosz et al. (2017) is a notable one conducted with astrometric VLBI observations of 1612 MHz OH maser. They used the NRAO Very Long Baseline Array (VLBA)2 and determined parallaxes of OH/IR stars. The obtained parallax of OH 138.0+7.2 was 0.52\(\pm\)0.09 mas, making this the first sub-mas OH maser parallax. In contrast to the compactness of H\({}_{2}\)O and SiO masers, angular size of OH masers are known to be relatively extended and diffuse. OH maser parallaxes with VLBI struggle with extended maser structure and poorer resolution. Therefore, astrometric VLBI observation at higher frequency using H\({}_{2}\)O masers can help us to determine smaller parallaxes with better accuracy. Footnote 2: Very Long Baseline Array, [https://science.nrao.edu/facilities/vtha](https://science.nrao.edu/facilities/vtha) Sources with pulsation periods of \(\thicksim\)1000 days are thought to have initial masses of \(\thicksim\)4 M\({}_{\sun}\) (Feast 2009). Based on studies of the AGB star evolution (e.g. Vassiliadis & Wood 1993), ages of OH/IR stars with periods of 1000 days can be estimated to be \(\thicksim\)10\({}^{8}\) yr. Recent studies predict that galactic spiral arms are bifurcating/merging in a time scale of 10\({}^{8}\) yr (Baba et al. 2013). So, OH/IR stars with ages of \(\thicksim\)10\({}^{8}\) yr can be used as a new probe to study the structure and evolution of spiral arms. Astrometric VLBI observations of NSV17351 is the first trial to use OH/IR stars for the studies of the Galactic dynamics. 
In section 2, we give details of our VLBI observations and single dish monitoring observations, including the reduction process. In section 3, we present our results : the pulsation period, annual parallax and proper motion of NSV13751. Section 4 explains our interpretation of the H\({}_{2}\)O maser distribution and kinematics. We also discuss the evolutionary phase of NSV17351 based on the radial velocities of the H\({}_{2}\)O maser spectrum. We mention the difference of the astrometric results between VLBI and Gaia. A usefulness of extreme OH/IR stars for study of the Galactic dynamics is presented. We summarize our study in section 5 with our future prospects. ## 2 Observations and Data Reduction ### Single dish observations We observed H\({}_{2}\)O maser emission of NSV17351 at a rest frequency of 22.235080 GHz (\(\rm{6_{16}\hbox{-}5_{23}}\) transition) once every 3 to 4 weeks from August 2015 to December 2020 using the 20 m aperture telescope at VERA Iriki station in order to obtain its spectra and variability. Total number of the single dish observations is 59. Since the pulsation period of NSV17351 is not found in the literature, we estimate the pulsation period from our single dish monitoring. Integration time was 10 to 40 minutes to reduce noise levels in each observation to less than 0.05 K. The conversion factor from antenna temperature to the flux density is 19.6 Jy K\({}^{-1}\). A 32 MHz bandwidth data with 1024 spectral channels gives a frequency resolution of 31.25 kHz, which corresponds to a velocity resolution of 0.42 km s\({}^{-1}\). We carried out the data reduction using the Java NEWSTAR software package developed by the Nobeyama Radio Observatory. Amplitude of the raw spectra was calibrated by the chopper-wheel method, then the spectral baseline was corrected using a polynomial function of the seventh order. We excluded a total of 0.63 MHz signals at both ends. We adopted a signal-to-noise ratio (S/N) of 4 as a detection criterion in our single dish observations. ### VLBI observations We observed H\({}_{2}\)O maser emission of NSV17351 using the VLBI Exploration of Radio Astrometry (VERA). Eleven-epochs data were taken from April 2018 to June 2019 with an interval of about one month. VERA is a VLBI array which consists of four 20 m aperture radio telescopes located at Mizusawa, Iriki, Ogasawara, and Ishigaki-jima (Kobayashi et al., 2003). Its maximum baseline length is 2270 km between Mizusawa and Ishigaki-jima stations. Each antenna of VERA is equipped with a dual-beam system (Kawaguchi et al., 2000) which can simultaneously observe a target maser source and an extragalactic continuum source within a separation angle between 0.3 \(\lx@math@degree\) and 2.2 \(\lx@math@degree\). Using the dual-beam system, we can calibrate short-term tropospheric fluctuations with the phase-referencing technique (Honma et al., 2008). Table 1 shows the nominal coordinates of the target maser source NSV17351 and extragalactic reference source J0709\(-\)1127. Regarding the revised coordinate of NSV17351 in the table, please see detail in section 4.1. Their separation angle is 0.80 \(\lx@math@degree\) at a position angle of 156 \(\lx@math@degree\). In our phase-referencing analysis, J0709\(-\)1127 is used as a position reference on the sky plane. Dates of the VLBI observations are presented in table 2 with the Modified JulianDate (MJD). Typical integration times of the two sources were 2 to 3 hours for each VLBI observation. 
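As a quick arithmetic check of the channelization quoted above, the velocity equivalent of the channel spacing at the 22 GHz rest frequency follows directly from \(\Delta v = c\,\Delta\nu/\nu\); the short sketch below simply reproduces that conversion for the single-dish configuration.

```python
# Channel width and velocity resolution for the 22 GHz H2O maser setup.
c = 299792.458            # speed of light [km/s]
nu0 = 22.235080e9         # H2O maser rest frequency [Hz]

bandwidth = 32e6          # single-dish bandwidth [Hz]
n_channels = 1024
dnu = bandwidth / n_channels
dv = c * dnu / nu0

print(f"channel width = {dnu/1e3:.2f} kHz")    # 31.25 kHz
print(f"velocity resolution = {dv:.2f} km/s")  # ~0.42 km/s
```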
The signals of left-handed circular polarization from the target and position reference source were acquired with a total data recording rate of 1 giga-bit per second (Gsps). It can cover a total bandwidth of 256 MHz. The data were recorded onto the hard disk drives of the "OCTADISK" system (Oyama et al., 2016). This entire bandwidth is divided into 16 IF channels. Each IF channel then has a width of 16 MHz. Then one IF channel (16 MHz) was assigned for the maser source NSV17351 and the remaining 15 IF channels (16 MHz \(\times\) 16 = 240 MHz) were assigned to the reference source J0709\(-\)1127. This process was conducted with a VERA digital filter unit (Iguchi et al., 2005). Correlation processing was done with the Mizusawa software correlator at Mizusawa VLBI observatory, NAOJ. In the final output from the correlator, the 16 MHz bandwidth data of NSV17351 was divided into 512 channels with a frequency resolution of 31.25 kHz. This corresponds to a velocity resolution of 0.42 km s\({}^{-1}\) at 22 GHz. In the correlator output of J0709\(-\)1127, each 16 MHz IF was divided into 32 channels. ### Data reduction of the VLBI data We reduced the VLBI data using the Astronomical Image Processing System (AIPS1; Greisen, 2003; Fomalont, 1981) developed by the National Radio Astronomy Observatory (NRAO). Amplitude calibration was performed using the gain curves and the system noise temperatures during observations at each station. A bandpass calibration was performed using the bright continuum sources DA193, OJ287, and 3C84. In the phase-referencing process, we used the task "FRING" in AIPS to solve the residual phase, group delays, and delay rates that were included in the correlator output of the reference source J0709\(-\)1127. We adopted an integration time of three minutes ("solint = 3") and solution interval of 12 seconds ("solsub = 15") in the task "FRING". The self-calibration was done using the tasks "IMAGR" and "CALIB" iteratively to solve residual phases with shorter time scale. For phase calibration, we need two additional reduction procedures unique to the VERA array. Phase solutions between the dual-beam receivers, which was solved using the correlated data of noise signal injected into the two receivers from artificial noise sources installed on a feedome base of the VERA antenna (Honma et al., 2008), were also applied in the reduction process. Another calibration is related to delay-tracking models used to estimate a priori delays. Since an accuracy of the model in the correlator is not good enough for precise astrometry, we calibrated them based on more accurate delay tracking models and applied better estimates of them. More detailed phase-referencing procedures are shown in Nakagawa et al. (2008). Image size of the reference source is 12.8 mas \(\times\) 12.8 mas square (256\(\times\)256 pixels with a pixel size of 0.05 mas/pixel). The image of J0709\(-\)1127 was obtained with a peak flux density of \(\thicksim\)280 mJy beam\({}^{-1}\). Typical noise levels of the images were \(\thicksim\)0.9 mJy beam\({}^{-1}\). Then, the solutions from the tasks "FRING" and "CALIB" were transferred to the data of the target maser source NSV17351. Size of the synthesized beam was 1.7 mas \(\times\) 0.9 mas with a major axis position angle of \(-\)32\({}^{\lx@math@degree}\). After the data calibration given above, we used the task "IMAGR" to make synthesized images of NSV17351 on 102.4 mas \(\times\) 102.4 mas square maps (2048 \(\times\) 2048 pixels with a pixel size of 0.05 mas/pixel). 
Using the task "IMFIT", we fitted two-dimensional Gaussian functions to bright maser spots to estimate their position and flux density. These positions are used in the parallax and proper motion fitting. Results of the fitting are given in section 3.2. We adopted a signal to noise ratio of 7 as a detection criterion in the phase-referenced maps. ## 3 Result ### Determination of the pulsation period from single dish monitoring of H\({}_{2}\)O masers We conducted 59 single dish observations of the H\({}_{2}\)O maser of NSV17351 from 23 August 2015 (MJD 57257) to 8 December 2020 (MJD 59191). In the 59 observations, we detected H\({}_{2}\)O maser emission in 41 observations. Figure 1 shows examples of total-power spectra of NSV17351 obtained \begin{table} \begin{tabular}{l l l c c c} \hline Source & RA (J2000.0) & DEC (J2000.0) & \(l\) & \(b\) & note \\ \hline \hline NSV17351 & \(07^{\rm h}07^{\rm m}49^{\rm s}\).380 & \(-\)10\(\,\)\({}^{\circ}\)44\(\,\)\({}^{\circ}\)5\({}^{{}^{\prime\prime}}\).90 & 224.34\({}^{{}^{\lx@math@degree}}\) & \(-\)1.29\({}^{{}^{\lx@math@degree}}\) & nominal \\ & \(07^{\rm h}07^{\rm m}49^{\rm s}\).3876 \(\pm\) 0.0004 & \(-\)10\(\,\)\({}^{\circ}\)44\(\,\)\({}^{\circ}\)6\({}^{{}^{\prime\prime}}\).005\(\pm\)0.007 & \(-\) & \(-\) & revised \\ J0709\(-\)1127 & \(07^{\rm h}09^{\rm m}10^{\rm s}\).406578 & \(-\)11\(\,\)\({}^{\circ}\)27\(\,\)48\({}^{{}^{\lx@math@degree}}\).45555 & 225.14\({}^{{}^{\lx@math@degree}}\) & \(-\)1.33\({}^{{}^{\lx@math@degree}}\) & \\ \hline \end{tabular} \end{table} Table 1: Coordinates of the sources at VERA Iriki station on 3 May 2019 (MJD 58606, top), 28 December 2018 (MJD 58480, middle), and 22 April 2018 (MJD 58230, bottom). We can see prominent maser emissions at LSR velocities (\(V_{\rm LSR}\)) of 39 km s\({}^{-1}\) and 61 km s\({}^{-1}\). A spectrum in 22 April 2018 (MJD 58230) represents the widest emission range in our single dish observations. A center velocity of the spectrum was obtained to be \(50.1\pm 1.9\) km s\({}^{-1}\). The uncertainty was estimated from the full width at half maximum (FWHM) of each peak emission. This velocity is consistent with the source radial velocity of 52 km s\({}^{-1}\) reported by te Lintel Hekkert et al. (1989). In section 4, we use the center velocity of \(50.1\pm 1.9\) km s\({}^{-1}\) as a representative value of a stellar LSR velocity. In table 3, we summarize results from single dish observations at VERA Iriki station. The \(T_{\rm A}^{\rm blue}\) and \(T_{\rm A}^{\rm red}\) represent antenna temperatures of blue- and red-shifted velocity components in unit of K, and \(V^{\rm blue}\) and \(V^{\rm red}\) represent \(V_{\rm LSR}\) of the components. In order to grasp overall variation of maser activity, we considered integrated intensities \(I\) in unit of K km s\({}^{-1}\) as an integration of the whole maser components over a velocity range from 30 km s\({}^{-1}\) to 75 km s\({}^{-1}\), and presented them in 7 th column in table 3. Scales of the antenna temperatures have relative uncertainties of 5-20% (Shintani et al. 2008). In this study, we uniformly applied uncertainties of 10% to all the integrated intensities. In the last column, we present rms noise levels of each single dish spectrum. Non-detections are labeled with "\(-\)" symbols. Figure 2 shows the time variation of the integrated intensity \(I\) of the H\({}_{2}\)O maser of NSV17351 obtained from 23 August 2015 (MJD 57257) to 8 December 2020 (MJD 59191). 
Error bars indicate \begin{table} \begin{tabular}{c c c c} \hline \hline Obs.ID & Year & Date & MJD \\ \hline \hline 1 & 2018 & 16 April & 58224 \\ 2 & & 19 May & 58257 \\ 3 & & 02 October & 58393 \\ 4 & & 01 November & 58423 \\ 5 & & 29 November & 58451 \\ \hline 6 & 2019 & 04 January & 58484 \\ 7 & & 03 February & 58517 \\ 8 & & 12 March & 58554 \\ 9 & & 09 April & 58582 \\ 10 & & 04 May & 58607 \\ 11 & & 01 June & 58635 \\ \hline \end{tabular} \end{table} Table 2: Dates of VLBI observations. 10% uncertainties of integrated intensities. Horizontal axis of figure 2 represents the MJD. In the case that we could not detect any maser emission, we put open circles with downward arrows as detection upper limits of each single dish observation. From figure 2, we found that the integrated intensity \(I\) of the H\({}_{2}\)O maser gradually decreased from MJD 57200 to MJD 57400, then, it further decreased bellow the detection limit (S/N of 4) of the single dish observations. The maser emission remained bellow S/N of 4 until the next detection on 08 September 2017 (MJD 58004). In March 2018 (\(\thicksim\)MJD 58200), the maser emission reached its maximum. Then it decreased and disappeared on 13 May 2019 (MJD 58616) again. NSV17351 recovered its H\({}_{2}\)O maser flux on 2 March 2020 (MJD 58910) and increased to 4.29 K km s\({}^{-1}\) on 8 December 2020 (MJD 59191). Using these monitoring data, we determined the variation period of the H\({}_{2}\)O maser of NSV17351. We introduced a sinusoidal function \(I_{\rm model}\) defined as follows, \[I_{\rm model}=\Delta I\sin(\frac{2\pi(t+\theta)}{P})+I_{0}, \tag{1}\] where \(\Delta I\) is the amplitude of the variation, \(t\) is time, \(\theta\) is a zero-phase time lag, \(P\) is the variation period, and \(I_{0}\) is an average. From our least-squares analysis, the variation period \(P\) was solved to be 1122\(\pm\)24 days. In this fitting, we adjusted the value of \(I_{0}\) to be 2.05. The amplitude \(\Delta I\) was obtained to be 1.86 K km s\({}^{-1}\). The fitting solution is presented with a solid curve in figure 2. The non-detection data were not used in this fitting. Engels et al. (1986) concluded that the H\({}_{2}\)O maser luminosity of OH/IR stars follows the cycle of variation of infrared and OH luminosities, with a possible phase lag of order 0.2 relative to them. Hence, we think that the variation period of 1122\(\pm\)24 days estimated from our H\({}_{2}\)O maser monitoring can be considered as a pulsation period of the central star. Since our monitoring time coverage is shorter than two times of the pulsation cycle, further monitoring will be needed for careful determination of the pulsation period. Nonetheless, we note here that our estimation of the pulsation period is the first reference about the periodic activity of NSV17351and we think this source is a candidate of extreme OH/IR stars because of its long periodicity. ### Annual parallax and proper motions In this section, we determine an annual parallax of NSV17351 using positions of maser spots detected in phase-referencing analysis. We trace the angular motion of multiple maser spots on the sky plane. We succeeded to detect H\({}_{2}\)O maser spots from 1 st to 10 th observations. From 10 out of 11 VLBI observations, we found 27 H\({}_{2}\)O maser spots in 22 velocity channels. As a detection threshold, we adopted a signal-to-noise ratio (S/N) of 7. Noise levels of our phase-referenced images of NSV17351 were 80 to 170 mJy beam\({}^{-1}\). 
In the 11th observation on 01 June 2019 (MJD 58635), we were not able to detect any maser spot on the phase-referenced image. In this observation, we found maser emission in neither single dish \(\char 30\) observation nor VLBI observation. From figure 2, we can see an integrated intensity of the H\({}_{2}\)O maser on 13 May 2019 (MJD 58616) decreased below our detection limit in single dish observation. The 11 th VLBI observation on 01 June 2019 (MJD 58635) was held after this non-detection. In table 4, we summarized properties of the detected maser spots. Each column represents the following quantities, maser spot ID in column 1, the LSR velocity (\(V_{\rm LSR}\)) in column 2, offset positions in right ascension (RA) and declination (DEC) relative to the phase tracking center in columns 3 and 4, angular proper motions in RA and DEC in columns 5 and 6, peak flux of the maser spots in column 7, signal-to-noise ratio (S/N) of the peak flux density in column 8, observation IDs where we detected Figure 1: H\({}_{2}\)O maser spectra of NSV17351 obtained at VERA lriki station on 3 May 2019 (top), 28 December 2018 (middle), and 22 April 2018 (bottom). Noise levels of individual spectra are 0.4 Jy, 0.4 Jy, and 0.8 Jy, from the top to the bottom, respectively. On 22 April 2018, the spectrum shows a double-peaked profile. For convenience, we shifted noise floors to 25 Jy and 40 Jy for spectra in middle and top, respectively. Figure 2: Time variation of the integrated H\({}_{2}\)O maser intensities \(I\). Left and right ends correspond to 27 June 2015 (MJD 57200) and 27 March 2020 (MJD 59300), respectively. Filled circles represent results of successful detection. In the case of non-detection, we put open circles with downward arrows as representatives of detection upper limits. Solid line is the best-fit model indicating a pulsation period of 1122\(\pm\)24 days. maser spots in phase-referenced images in column 9. Asterisks in column 9 mean VLBI observation IDs of non-detection. When we found spatially different maser spots in an identical velocity channel, we labeled them with different spot IDs. For example, since there are two different maser spots at the \(V_{\rm LSR}=39.15\) km s\({}^{-1}\), there are IDs of 3 and 4 indicating the same LSR velocity in table 4. In figure 3, we present examples of maser spot images used in this study. From left to right, the maser spot with \(V_{\rm LSR}\) of 39.15 km s\({}^{-1}\) (identified as ID3 in table 4) detected on (a) 16 April 2018, (b) 1 November 2018 and (c) 12 March 2019 are presented, respectively. For the spot in the map (a), formal fitted values of the peak position uncertainty are 13 \(\mu\)as and 22 \(\mu\)as in RA and DEC, respectively. For other two maps (b) and (c), we see modest structures of the maser spot. We carefully examined the maser structure, its time variation and continuity, we concluded that the southern components in the maps (b) and (c) are identical in our analysis. In the model fitting of this case, we have limited the area of its fitting to derive positions of appropriate maser components. Formal fitted values of the position uncertainty in the maps (b) and (c) are several times larger than that of the map (a). In the least-squares analysis, we regarded post-fit residuals of 0.05 mas in RA and 0.09 mas in DEC as representative errors of the maser positions across all epochs. Consequently twelve maser spots with IDs 2 to 10, 16, 23, and 24 were selected for determination of a common parallax. 
They were detected more than three continuous epochs of observations. Among them, maser spots with IDs of 2 and 10 were detected longer than \(\thicksim\) 1 yr. The least-squares fitting gives a parallax of NSV17351 as 0.247\(\pm\)0.010 mas. There are some factors that contribute to the parallax uncertainty. For example, uncompensated atmospheric delay differences between the reference source and the maser would be common to all spots. Structural variation of the reference source would also be common to all spots. Therefore, in estimation of an accuracy of the parallax, we adopt a more conservative estimate. By multiplying the initial parallax error of 0.01 mas by the square-root of the number of maser spots used, we obtained 0.035 (\(=0.010\times\sqrt[4]{\frac{\nu}{12}}\)) mas as a true accuracy. As a result, we adopted the parallax of \(0.247\pm 0.035\) mas for NSV17351 which corresponds to a distance of \(4.05\pm 0.59\) kpc. Figure 4 shows offsets after removing the proper motions and the fitted parallax along RA axis (top) and DEC axis (bottom), respectively. Observed data are indicated as filled circles with their colors representing the LSR velocities of each maser spot. Error bars are 0.05 mas and 0.09 mas in RA and DEC, respectively. Solid curves are the best fit models of the parallax. In a very recent study by Reid (2022), an imaging method with " super-resolution " was proposed. As a validation of our parallax measurement, we also performed a parallax fitting using this method. We used a round CLEAN restoring beam with 0.6 mas diameter for a maser spot showing modest structure. Using this method, a parallax was estimated to be 0.248 \(\pm\) 0.035 mas showing an excellent agreement with our measurement of 0.247 \(\pm\) 0.035 mas. In this paper, we adopt the latter value as the parallax of NSV17351. We also derived the angular proper motions of the maser spots. In the phase-referenced image, maser spots show synthesized motions of the parallax and linear proper motions. In the fitting above, we can solve the common parallax and linear proper motions of maser spots simultaneously. We successfully derived proper motions of 15 maser spots (ID 1 to 10, 16, 23, 24, 26, 27). The proper motions along the RA and DEC axes are presented in table 4 as \(\mu_{\alpha}\)cos\(\delta\) and \(\mu_{\delta}\) in units of mas yr\({}^{-1}\), respectively. In the case when a maser spot was detected only once, or identification of the spots was difficult, we could not solve their proper motions and subsequently there are no values of proper motions. By averaging proper motions of all 15 solved maser spots, we obtained (\(\mu_{\alpha}\) cos \(\delta\),\(\mu_{\delta}\))\({}^{\rm avg}\) = (\(-1.19\,\pm\,0.11\), \(1.30\,\pm\,0.19\)) mas yr\({}^{-1}\) and we assume this motion as the systemic proper motion of NSV17351. Using this motion, we reveal the circumstellar motions of the maser spots in the next section. ## 4 Discussion ### Circumstellar distribution and kinematics of the maser spots We discuss the circumstellar kinematics of H\({}_{2}\)O maser spots of NSV17351. In figure 5, we present the distribution of all the maser spots detected in our VLBI observations. Horizontal and vertical axes of the map are position offset from the phase tracking center of NSV17351 which is given as a nominal coordinate in table 1. Position offsets of the maser spots (\(\Delta\alpha\) cos\(\delta\), \(\Delta\delta\)) are given in table 4. 
Maser spots are distributed in about 20 mas \(\times\) 30 mas field which corresponds to \(\thicksim\)80 au \(\times\) \(\thicksim\)120 Figure 3: Images of maser spots at \(V_{\rm LSR}\) of 39.15 km s\({}^{-1}\) detected on (a) 16 April 2018 (MJD 58224), (b) 1 November 2018 (MJD 58423) and (c) 12 March 2019 (MJD 58554). The synthesized beams are presented at bottom left of each map. In the maps of (b) and (c), southern component was used in the parallax fitting. au at a source distance of 4.05 kpc. From the maser distribution, we can estimate a stellar position in the map, by simply averaging positions of all the maser spots. We obtained it to be (115.49\(\pm\)6.35, \(-\)105.34\(\pm\)7.18) mas, where position uncertainties are introduced as standard deviations of all the maser spots with respect to the estimated stellar position. Estimated stellar positions are indicated by cross symbols in figure 5, the lengths of which represent the position errors on the RA and DEC axes, respectively. Based on our astrometric results, this position is presented as a revised coordinate of the source in table 1. The linear proper motions (\(\mu_{\alpha}\)cos\(\delta\), \(\mu_{\delta}\)) presented in the previous section derived from the phase-referencing VLBI observations are a combination of proper motion of the stellar system and internal motions of individual maser spots on a rest frame fixed to the stellar system. Therefore, to deduce their internal motions, we have to subtract the systemic motion of NSV17351 from the obtained proper motions of each maser spot. We have already derived the systemic motion of NSV17351 as (\(\mu_{\alpha}\,\cos\,\delta,\mu_{\delta}\))\({}^{\rm avg}\) = (\(-\)1.19 \(\pm\) 0.11, 1.30 \(\pm\) 0.19) mas yr\({}^{-1}\) in previous section. If the maser positions and motions are distributed uniformly and isotropically about the star, the above systemic motion would be considered reliable values. However, it is unlikely that this condition would apply in this case, and the above systemic motion can be considered to include systematic errors. If one were to consider a realistic uncertainty in the internal motion, it would be, say, 0.5 mas yr\({}^{-1}\) (\(\thicksim\)10 km s\({}^{-1}\)). We consider adding a constant vector of about this magnitude and with a direction toward the southwest to all measured maser spot motions. In the calculations, \(-\)0.35 mas yr\({}^{-1}\) (\(=-\)0.5/\(\sqrt[4]{2}\) mas yr\({}^{-1}\)) Figure 4: The annual parallax of NSV17351 along RA (top) and DEC (bottom), respectively. Results from ten observations are shown with filled circles. Colors indicate LSR velocity \(V_{\rm LSR}\) of each maser spot. Solid curves are the best fit models obtained from the parallax fitting. is added in RA and DEC. This consideration might make the distribution more likely the expansion about a central star. Taking also into account the internal motions of maser spots, the position of the central star would be considered slightly north of the position indicated by the cross symbol. In figure 5, an elongated distribution of the maser spots along the northwest to southeast direction seems to be predominant. Regarding the LSR velocity (\(V_{\rm LSR}\)), we can find that blue- and red-shifted maser spots locate at northwest and southeast of the map, respectively. This indicates that these spots are likely tracing a weak, possibly asymmetric outward motion from the map center. 
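The conversions used throughout this subsection, parallax to distance, angular extent to linear size, and proper motion to transverse velocity via the 4.74 km s\(^{-1}\) factor, are compact enough to sketch. The spot proper motions and line-of-sight residuals below are illustrative placeholders rather than the values in table 4.

```python
import numpy as np

# Parallax and systemic proper motion from the VLBI fit (section 3.2).
parallax_mas = 0.247                         # +/- 0.035 mas
d_kpc = 1.0 / parallax_mas                   # ~4.05 kpc
pm_sys = np.array([-1.19, 1.30])             # (mu_a*cos(d), mu_d) [mas/yr]

# Angular extent of the maser distribution -> linear size:
# theta[mas] * d[kpc] gives au directly (1 mas at 1 kpc subtends 1 au).
extent_au = np.array([20.0, 30.0]) * d_kpc   # ~81 au x ~122 au

# Proper motion -> transverse velocity: v[km/s] = 4.74 * mu[mas/yr] * d[kpc].
KMS_PER_MASYR_KPC = 4.74
print(0.5 * KMS_PER_MASYR_KPC * d_kpc)       # ~9.6 km/s, the Fig. 5 scale arrow

# Illustrative spot proper motions (placeholders, not the table 4 values).
pm_spots = np.array([[-1.30, 1.10],
                     [-1.05, 1.55],
                     [-1.25, 1.20]])

# Internal motions: subtract the systemic motion, then convert to km/s.
internal_masyr = pm_spots - pm_sys
internal_kms = internal_masyr * KMS_PER_MASYR_KPC * d_kpc

# A 3-D expansion speed combines these with each spot's LSR velocity residual
# relative to the stellar velocity of 50.1 km/s (placeholders below).
v_los = np.array([-11.0, 4.0, 9.0])
v_3d = np.sqrt(np.sum(internal_kms ** 2, axis=1) + v_los ** 2)
print(v_3d)
```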
Here we note that the OH masers observed at the position of central star are thought to be coming from the small nearside and farside parts of the envelope near the line of sight which intersects the central star (Szymczak, 1988). This suggests that OH masers are pumped by infrared background radiation to which stellar photons are converted by a heavy dust shell (Orosz et al., 2017). Among all the H\({}_{2}\)O maser spots in our observation, positions of the most blue-shifted maser spots are close to the estimated stellar position. Although the types of maser are different, the distribution of H\({}_{2}\)O maser obtained in our observations is roughly similar to the distribution characteristics of OH maser. We also consider the three-dimensional velocities of the maser spots. In section 3.1, we determined the stellar LSR velocity of NSV17351 to be \(50.1\pm 1.9\) km s\({}^{-1}\). Residuals of LSR velocities (\(V_{\rm LSR}\)) of each maser spot from the stellar LSR velocity of NSV17351 correspond to velocity components relative to the central star along the line of sight. Using the three orthogonal velocity components of the maser spots, we can deduce the three dimensional expanding velocity of each maser spot. The average of the expanding velocities is 15.7 km s\({}^{-1}\) with a standard deviation of 3.3 kms\({}^{-1}\). Figure 5: Spatial distribution and internal motions of H\({}_{2}\)O maser spots of NSV17351. Filled circles indicate maser spots and arrows indicate their internal motions. Colors indicate LSR velocities shown in a color index at right side of the map. The arrow in the top-left corner shows a proper motion of 0.5 mas yr\({}^{-1}\) corresponding to a velocity of\(9.60\) kms\({}^{-1}\) at\(4.05\) kpc. Across symbolshows the estimated stellar position. Lengths of the cross lines indicate its errors. ### Acceleration of the H\({}_{2}\)O Maser From the single dish observations at VERA Iriki station, we obtained 41 spectra of H\({}_{2}\)O maser emission of NSV17351. A striking feature of the H\({}_{2}\)O maser spectra in this source is the presence of blue- and red-shifted bright components with a velocity separation of \(\thicksim\)20 km s\({}^{-1}\). We can also find H\({}_{2}\)O maser spectra from three previous studies. Blitz & Lada (1979), Takaba et al. (1994), and Kim et al. (2010) reported H\({}_{2}\)O maser spectra observed in 28 January 1978, 10 May 1991, and 6 June 2009, respectively. Existence of the blue- and red-shifted components seen in our observations are consistent with those reported in the previous three works, while peak velocities seem to have been slightly shifting. To argue this velocity shift, we defined \(\Delta\)\(V\) as a velocity separation between the two peaks. To estimate errors of \(\Delta\)\(V\), we considered the full width at half maximum (FWHM) of each component. In the studies by Blitz & Lada (1979) and Kim et al. (2010), they explicitly gave velocities of the two peaks, so we used these values to derive \(\Delta\)\(V\). In the case of Takaba et al. (1994), we deduced \(\Delta\)\(V\) from figure 1 in their study. We summarized LSR velocities of the two peaks and the velocity separation \(\Delta\)\(V\) in table 5. Observation dates are presented with MJD in column 1. In column 2 and column 4, peak velocities of blue- and red-shifted components are presented, respectively. In column 3 and column 5, the full width at half maximum (FWHM) values of each component are given. 
The velocity separation \(\Delta V\) is presented in column 6, with its error taken as the average of the two FWHMs. Using all the \(\Delta V\) values, we present their time variation in figure 6. From this figure, we can see an increase in \(\Delta V\) over the last 40 years. Fitting a simple linear function, this long-term increase of \(\Delta V\) is \(d\Delta V/dt=0.33\pm 0.06\) km s\({}^{-1}\) yr\({}^{-1}\); the fitted model is shown with a solid line in figure 6. Since the lifetime of an individual H\({}_{2}\)O maser cloud is of the order of \(\thicksim\)3 years (Engels 2002), it is difficult to interpret this as the same H\({}_{2}\)O gas clouds being observed over the last 40 years. A more natural explanation is that the velocity of the region where the H\({}_{2}\)O masers are excited has been increasing during the last 40 years. Dividing this \(d\Delta V/dt\) by two, we obtain an acceleration of the outflow velocity of 0.17\(\pm\)0.03 km s\({}^{-1}\) yr\({}^{-1}\). Next, we focus on the comparison of the spectral profiles of the H\({}_{2}\)O and OH masers. In the three H\({}_{2}\)O maser spectral profiles reported in the previous studies by Blitz & Lada (1979), Takaba et al. (1994), and Kim et al. (2010), the two components have relatively gentle decreases (shoulders) at the outer sides of each peak (figure 7). On the other hand, in the spectral profiles obtained from our observation (top of figure 7), the two peaks show sharp cut-offs at their outer sides. In particular, the sharpness of the cut-off is remarkable at the outer side of the blue-shifted peak at 38 km s\({}^{-1}\) in the spectrum on 22 April 2018 (top of figure 7). Profiles of OH maser spectra are characterized by double-peaked components with sharp cut-offs due to the terminal velocity of the circumstellar envelope containing OH molecules. We can now see that the profiles of the recently obtained H\({}_{2}\)O maser spectra (top of figure 7) are quite similar to those formerly observed in the 1612 MHz OH maser, which show cut-offs at terminal velocities. In figure 8, we superpose the H\({}_{2}\)O maser spectrum of 22 April 2018 (solid line) and the 1612 MHz OH maser spectrum of February 1978 (dotted line), and we find that the profiles of the two spectra resemble each other. In particular, the cut-off velocity of the blue-shifted component is exactly the same (38 km s\({}^{-1}\) to 40 km s\({}^{-1}\)). On the red-shifted side, the velocity of the OH maser is larger than that of the H\({}_{2}\)O maser. It is thought that OH molecules are supplied by photodissociation of H\({}_{2}\)O molecules carried to the outer part of the circumstellar envelope. This comparison indicates asymmetric outflows of the H\({}_{2}\)O and OH masers in the red-shifted components. In addition to the similarity in the shapes of the H\({}_{2}\)O and OH maser spectra, we also note the similarity in the locations of the maser spots. In figure 5, the most blue-shifted H\({}_{2}\)O maser spots are seen close to the estimated stellar position. In the case of OH masers, it is known that the most blue- and red-shifted maser spots are seen at the same position in the sky plane. For example, Orosz et al. (2017) revealed that the blue- and red-shifted OH masers coincide with the position where the central star is assumed to exist. We also refer to a study by Rosen et al. (1978).
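A minimal sketch of the kind of linear fit described above is given below: it fits \(\Delta V\) against epoch with inverse-error weights and halves the slope to obtain the outflow acceleration. The (MJD, \(\Delta V\)) pairs are illustrative placeholders standing in for the values in table 5, not the measured ones.

```python
import numpy as np

# Placeholder epochs (MJD) and velocity separations Delta-V (km/s) with errors.
mjd = np.array([43536.0, 48387.0, 54988.0, 58230.0, 58600.0])
dv = np.array([15.5, 17.0, 20.5, 22.0, 22.5])
dv_err = np.array([2.0, 2.5, 1.5, 1.0, 1.0])

years = (mjd - mjd[0]) / 365.25

# Weighted linear fit Delta-V(t) = a*t + b, with weights ~ 1/sigma.
a, b = np.polyfit(years, dv, 1, w=1.0 / dv_err)

print(a)        # d(Delta-V)/dt in km/s/yr (the paper obtains 0.33 +/- 0.06)
print(a / 2.0)  # outflow acceleration, since Delta-V spans both sides of the star
```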
They discuss the appearance of maser emission from the gas surrounding the star by classifying the limb region and the far/near-side region (along the line of sight) of the central star. They reported that, in the region where there is rapid acceleration due to light pressure on newly formed dust grains, farside and nearside emission and limb emission are equally likely. Hence, as presumed from figure 5, we can interpret the most blue-shifted maser spots as being superposed on the position of the central star of NSV17351 along the line of sight. They can possibly be explained as emission excited along the line of sight to the central star. We note here that the most blue-shifted maser spots also show in-plane motions. Engels (2002) noted that the H\({}_{2}\)O maser shell maintains favorable conditions for maser emission over a longer time, despite a limited lifetime for individual maser clouds on the order of \(\thicksim\)3 years. He suggested that H\({}_{2}\)O maser clouds do not survive in the outflow but are continuously formed and destroyed. We interpret the result for NSV17351 as indicating that H\({}_{2}\)O molecules have been carried to the outermost region and that the H\({}_{2}\)O gas has been accelerated to the terminal velocity. Since a vast amount of H\({}_{2}\)O gas has been transported to the outermost regions of the circumstellar envelope, we can predict that the H\({}_{2}\)O gas will soon photodissociate into OH and H, and then the OH maser will brighten. The 1612 MHz OH maser line was observed with an intensity of \(\thicksim\)400 mJy in February 1978 (Le Squeren et al. 1979). If the OH maser emission is detected to be stronger than that observed in 1978 (Le Squeren et al. 1979), we may be witnessing NSV17351 having transported H\({}_{2}\)O gas to its outer regions during the last 40 years. Therefore, it is important to carefully monitor the OH masers of NSV17351 to study its material flow and confirm this hypothesis.

Figure 6: Time variation of the velocity separation between the red- and blue-shifted maser components (\(\Delta V\)) from 1978 to 2019 (MJD 42000 to MJD 60000). The solid line indicates the fitted model, with a slope of 0.33\(\pm\)0.06 km s\({}^{-1}\) yr\({}^{-1}\). The time variation of \(\Delta V\) between MJD 57200 and 59300 can be seen in the magnified inset at the top left.

Figure 7: Four representative spectra of the H\({}_{2}\)O maser obtained in 1978, 1994, 2009, and 2018, from bottom to top. The flux density scales of each spectrum are presented on the right side of the figure. The time variation of the velocity profile can be seen. In the latest spectrum, we can see the largest velocity separation and sharp cut-offs at the outer sides of each peak.

### Astrometric results from VERA and Gaia

In the catalogs of Gaia Data Release 3 (DR3), the parallax of NSV17351 is listed as 0.088\(\pm\)0.147 mas (Gaia Collaboration et al. 2022), which gives a relative error of 170%, while in our VLBI observations we obtained a parallax of 0.247\(\pm\)0.035 mas with a relative error of 14%. The two parallaxes are barely in agreement within the error margin. We can also compare the proper motions of NSV17351. The proper motions from VERA and DR3 in RA and DEC are \((\mu_{\alpha}\,\cos\delta,\mu_{\delta})^{\rm avg}=(-1.19\,\pm\,0.11,\,1.30\,\pm\,0.19)\) mas yr\({}^{-1}\) and \((\mu_{\alpha}\,\cos\,\delta,\mu_{\delta})^{\rm DR3}=(-0.03\pm 0.16,\,1.88\pm 0.19)\) mas yr\({}^{-1}\), respectively.
The residuals of the Gaia DR3 proper motions from the VERA proper motions are \((\Delta\mu_{\alpha}\,\cos\delta,\,\Delta\mu_{\delta})=(1.16\,\pm\,0.19,\,0.58\,\pm\,0.27)\) mas yr\({}^{-1}\), which correspond to linear velocities of \((22.3\pm 3.6,\,11.1\pm 5.2)\) km s\({}^{-1}\). In the study by Nakagawa et al. (2016), the difference between the two measurements from VERA and HIPPARCOS was interpreted as internal motions of the maser spots. If we apply the same interpretation to NSV17351, the residual motion can be considered as internal motion of the maser spots with respect to the central star. However, when we assume a general outflow velocity of OH/IR stars or the three-dimensional outflow velocity of \(15.7\pm 3.3\) km s\({}^{-1}\) obtained in this study (section 4.1), the velocity differences between VERA and Gaia of \((48.6\pm 7.9,\,-32.3\pm 8.5)\) km s\({}^{-1}\) and \((22.3\pm 3.6,\,11.1\pm 5.2)\) km s\({}^{-1}\) are too large to be regarded only as internal motions of the maser spots. It should also be noted that there is a systematic uncertainty of \(\pm\,10\) km s\({}^{-1}\) in attributing the average of the spot motions to the motion of the stellar system. This effect should also be considered in the comparison of the proper motions.

Figure 8: Superposition of the H\({}_{2}\)O maser (solid line) and OH maser (dotted line) spectra of NSV17351 obtained in 2018 and 1978, respectively. Flux density scales in units of Jy for the H\({}_{2}\)O and OH masers are presented on the left and right vertical axes of the figure. The cut-off velocity on the blue-shifted side appears to be exactly the same in both spectra.

We examine the three-dimensional position and motion of NSV17351 in the Galaxy. We refer to Reid et al. (2019) for the transformation of the results of our VLBI observations to a position and motion in Galactic Cartesian coordinates. We adopt the value of \(50.1\pm 1.9\) km s\({}^{-1}\) determined in section 3.1 as the stellar LSR velocity of the source. We assume the Galactic constants \(R_{0}=8.15\) kpc and \(\Theta_{0}=236\) km s\({}^{-1}\), a solar motion of (\(U_{\sun}\), \(V_{\sun}\), \(W_{\sun}\)) = (10.6, 10.7, 7.6) km s\({}^{-1}\) (Reid et al., 2019), and a flat Galactic rotation curve (i.e. \(\Theta(R)=\Theta_{0}\)) in the following discussion. We derived a three-dimensional position of NSV17351 of (\(X\), \(Y\), \(Z\)) = (\(-2.83\pm 0.12\), \(11.05\pm 0.12\), \(-0.09\pm 0.01\)) kpc, where the origin of the coordinate system is the Galactic Center. From the value of \(Z=-0.09\pm 0.01\) kpc, we see that NSV17351 is embedded in the Galactic thin disk. Since there is an offset between the physical plane and the Galactic latitude \(b=0\) deg plane due to the Galactic warp and the tilted plane (Blaauw et al., 1960; Binney, 1992), we compare the \(Z\) value of NSV17351 with the \(Z\) range of nearby star-forming regions. We confirm that the \(Z\) value of NSV17351 is included in the \(Z\) range of the SFRs (i.e., \(-0.12<Z<0.11\) kpc). Figure 9 shows an enlarged view of the Milky Way Galaxy as viewed from the North Galactic pole, reproduced from Figure 2 of Reid et al. (2019). Three solid lines indicate the centers of the spiral arms and grey regions indicate the widths of the Galactic arms enclosing 90% of sources (Reid et al., 2019). From top to bottom, the Outer, Perseus, and Local spiral arms are shown. The filled circle indicates the position of NSV17351 with its error. Open circles indicate maser sources with Galactocentric distances of \(>7\) kpc reported in the study by Reid et al. (2019).
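The following is a small numerical sketch, using only the quantities quoted above, of the VERA versus Gaia comparison: the relative parallax errors, and the conversion of the residual proper motion into a linear velocity at the VERA distance (again via 4.74 km s\({}^{-1}\) per mas yr\({}^{-1}\) kpc). The printed values can be checked against the numbers quoted in the text.

```python
import numpy as np

# Parallaxes (mas) from VERA and Gaia DR3, with 1-sigma errors.
plx_vera, err_vera = 0.247, 0.035
plx_gaia, err_gaia = 0.088, 0.147
print(err_vera / plx_vera * 100, err_gaia / plx_gaia * 100)   # ~14% and ~170%

# Residual proper motion: Gaia DR3 minus VERA (mas/yr), errors added in quadrature.
mu_vera = np.array([-1.19, 1.30]); e_vera = np.array([0.11, 0.19])
mu_gaia = np.array([-0.03, 1.88]); e_gaia = np.array([0.16, 0.19])
d_mu = mu_gaia - mu_vera
e_mu = np.hypot(e_vera, e_gaia)

# Convert to a linear velocity at the VERA distance of 4.05 kpc.
K, d_kpc = 4.74, 4.05
print(K * d_kpc * d_mu)   # ~(22.3, 11.1) km/s
print(K * d_kpc * e_mu)   # roughly the quoted uncertainties (distance error neglected)
```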
We also derived a three-dimensional noncircular motion of the source, i.e., a residual motion from the flat Galactic rotation, of (\(U,V,W\)) = (\(-4\pm 3\), \(-5\pm 5\), \(-3\pm 3\)) km s\({}^{-1}\), where \(U\), \(V\), and \(W\) are directed toward the Galactic center, the Galactic rotation, and the North Galactic pole, respectively. The errors are estimated by considering the errors of the parallax, the proper motion, and the systemic velocity. Details of the error-estimation procedure are given in an appendix of Sakai et al. (2020). The obtained velocities (\(U\), \(V\), \(W\)) are comparable to those of thin-disk sources rather than thick-disk sources, which include a large number of evolved stars. In figure 9, we find that NSV17351 is located slightly outside the Perseus arm. The distance error suggests that NSV17351 may belong to the Perseus arm, but it is more likely to be in the interarm region. Indeed, if we consider the \(l\)-\(V_{\rm LSR}\) plot (i.e. position-velocity diagram) of HI in Figure 3 of Reid et al. (2019), we find that the source is located in the interarm region between the Outer arm and the Perseus arm. The location of NSV17351 in figure 9 can be understood by considering the age of the source. It is understood that the pulsation period \(P\) increases with increasing initial mass. Mira variable stars showing a \(\log P\) of \(\thicksim\)3.0 have initial masses of 3 to 4 \(M_{\odot}\) (Feast 2009). Assuming this mass range, we obtained a \(\tau_{\rm MS}\) of 1.6\(\times\)10\({}^{8}\) to 3.5\(\times\)10\({}^{8}\) years following the considerations in Sparke & Gallagher (2000), where \(\tau_{\rm MS}\) is the main-sequence lifetime. This indicates that the age of NSV17351 is \(\thicksim\)10\({}^{8}\) years, which is two orders of magnitude larger than the typical age of high-mass star-forming regions associated with spiral arms. In other words, we are observing NSV17351 in a state where it has left the arm in which it was born but is not yet sufficiently dynamically relaxed. Note that the spiral-arm assignment of NSV17351 should be revisited in the future, because the Perseus and Outer arms are not accurately located in the Galactic 3rd quadrant due to the limited number of VLBI astrometric results. In the last decade, VLBI astrometry has measured more than two hundred parallaxes of star-forming regions (SFRs) (e.g., Burns et al. 2016; Motogi et al. 2016; Reid et al. 2019; VERA Collaboration et al. 2020) and evolved stars (e.g., Sudou et al. 2019; Kamezaki et al. 2016; Nakagawa et al. 2016; Nakagawa et al. 2018; Nakagawa et al. 2019); however, the ages of these sources fall mainly into two groups: \(\thicksim\)10\({}^{6}\) years for SFRs and \(\thicksim\)10\({}^{9}\) years for evolved stars. From this aspect, the extreme OH/IR star candidate NSV17351, with an estimated age of \(\thicksim\)10\({}^{8}\) years, can be regarded as a valuable source that fills the time-scale gap between 10\({}^{6}\) years and 10\({}^{9}\) years. In recent studies of spiral arms in disk galaxies, there has been a long-standing question about how spiral arms are created and maintained. The quasi-stationary density wave theory (e.g., Lin & Shu 1964) and the dynamic spiral theory (e.g., Sellwood & Carlberg 1984; Baba 2015) are two major theories under discussion. Spiral arms do not show rigidly rotating patterns but rather differentially rotating dynamic patterns.
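As a rough arithmetic check of the age argument, the sketch below verifies that the 1122-day period corresponds to \(\log P\approx 3.05\) and estimates \(\tau_{\rm MS}\) for 3-4 \(M_{\odot}\). The scaling \(\tau_{\rm MS}\approx 10\) Gyr \((M/M_{\odot})^{-3}\) used here is only one common textbook approximation (corresponding to \(L\propto M^{4}\)), adopted because it roughly reproduces the quoted 1.6-3.5\(\times\)10\({}^{8}\) yr range; it is not necessarily the exact relation used by the authors.

```python
import numpy as np

period_days = 1122.0
print(np.log10(period_days))          # ~3.05, i.e. log P ~ 3.0 as quoted

# Rough main-sequence lifetime, tau_MS ~ 10 Gyr * (M/Msun)^(-3)
# (an assumed approximation, corresponding to L ~ M^4).
for mass in (3.0, 4.0):
    tau_ms = 1.0e10 * mass ** (-3)
    print(mass, tau_ms)               # ~3.7e8 yr and ~1.6e8 yr, i.e. of order 1e8 yr
```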
The amplitudes, pitch angles, and pattern speeds of spiral arms are not constant, but change within a time span of 1-2 rotational periods at each radius (Baba 2015). In the Milky Way Galaxy, the rotational period at the location of the Sun corresponds to a time scale of \(\thicksim\)10\({}^{8}\) years. For a better understanding, it is important to gather samples representing various ages, as suggested by previous papers (e.g., Dobbs & Pringle 2010; Miyachi et al. 2019). In this context, extreme OH/IR stars with ages of \(\thicksim\)10\({}^{8}\) years could be suitable samples, and astrometric VLBI is a powerful and promising method to determine their three-dimensional positions and kinematics.

## 5 Summary

We presented the first astrometric results towards the extreme OH/IR star candidate NSV17351 using the VERA VLBI array at 22 GHz. From the single-dish observations, we found that NSV17351 has an extremely long period of 1122\(\pm\)24 days based on the variation of the H\({}_{2}\)O maser emission. From our VLBI observations, we derived an annual parallax of 0.247\(\pm\)0.035 mas, which corresponds to a distance of 4.05\(\pm\)0.59 kpc. We revealed the distribution and kinematics of the H\({}_{2}\)O maser spots of NSV17351. An in-plane distribution of 20 mas \(\times\) 30 mas (\(\sim\)80 au \(\times\) \(\sim\)120 au at the source distance) and a weak asymmetric outflow were confirmed. By averaging the proper motions of the maser spots, the systemic proper motion of NSV17351 was obtained to be \((\mu_{\alpha}\cos\delta,\mu_{\delta})^{\rm avg}=(-1.19\pm 0.11,\,1.30\pm 0.19)\) mas yr\({}^{-1}\). NSV17351 shows a characteristic double-peaked H\({}_{2}\)O maser spectrum. We could trace the evolution of the spectra over 40 years, and we estimated the acceleration of the circumstellar matter to be \(0.17\pm 0.03\) km s\({}^{-1}\) yr\({}^{-1}\). We derived the three-dimensional position of NSV17351 in the Milky Way Galaxy. The source is located in the interarm region between the Outer and Perseus arms. The mass of NSV17351, inferred from its pulsation period, is 3 to 4 \(M_{\odot}\), and its age is estimated to be \(\sim\)10\({}^{8}\) years. This is consistent with a situation in which the star is located in the interarm region, away from the spiral arm where it was born.

Figure 9: Enlarged face-on view of the Milky Way reproduced from a study by Reid et al. (2019). The Galactic center is at (0, 0) kpc and the Sun is indicated with the symbol (\(\sun\)) at (0, 8.15) kpc. The filled circle with an error bar indicates the position of NSV17351. Open circles indicate maser sources which have Galactocentric distances of \(>7\) kpc in Reid et al. (2019). Three spiral arms are presented. Solid lines indicate the centers of the spiral arms. Grey regions indicate the widths of the Galactic arms in which 90% of sources are enclosed (Reid et al., 2019).

## Acknowledgement

We acknowledge the members of the VERA project for their kind collaboration and encouragement. Data analyses were in part carried out on the common-use data analysis computer system at the Astronomy Data Center, ADC, of the National Astronomical Observatory of Japan. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research was supported by the leadership program of NAOJ in FY2020.
We report the results of astrometric VLBI observations toward the extreme OH/IR star candidate NSV17351. Using VERA (VLBI Exploration of Radio Astrometry), we observed the 22 GHz H\({}_{2}\)O masers of NSV17351. From these observations, we derived an annual parallax of 0.247\(\pm\)0.035 mas, which corresponds to a distance of 4.05\(\pm\)0.59 kpc. By averaging the motions of 15 maser spots, the systemic proper motion of NSV17351 was obtained as \((\mu_{\alpha}\cos\delta,\mu_{\delta})^{\rm avg}=(-1.19\pm 0.11,\,1.30\pm 0.19)\) mas yr\({}^{-1}\).
2309.13748
Does the "most sinfully decadent cake ever" taste good? Answering Yes/No Questions from Figurative Contexts
Figurative language is commonplace in natural language, and while making communication memorable and creative, can be difficult to understand. In this work, we investigate the robustness of Question Answering (QA) models on figurative text. Yes/no questions, in particular, are a useful probe of figurative language understanding capabilities of large language models. We propose FigurativeQA, a set of 1000 yes/no questions with figurative and non-figurative contexts, extracted from the domains of restaurant and product reviews. We show that state-of-the-art BERT-based QA models exhibit an average performance drop of up to 15\% points when answering questions from figurative contexts, as compared to non-figurative ones. While models like GPT-3 and ChatGPT are better at handling figurative texts, we show that further performance gains can be achieved by automatically simplifying the figurative contexts into their non-figurative (literal) counterparts. We find that the best overall model is ChatGPT with chain-of-thought prompting to generate non-figurative contexts. Our work provides a promising direction for building more robust QA models with figurative language understanding capabilities.
Geetanjali Rakshit, Jeffrey Flanigan
2023-09-24T20:38:48
http://arxiv.org/abs/2309.13748v1
# Does the "most sinfully decadent cake ever" taste good? Answering Yes/No Questions from Figurative Contexts

###### Abstract

Figurative language is commonplace in natural language, and while making communication memorable and creative, can be difficult to understand. In this work, we investigate the robustness of Question Answering (QA) models on figurative text. Yes/no questions, in particular, are a useful probe of figurative language understanding capabilities of large language models. We propose FigurativeQA, a set of 1000 yes/no questions with figurative and non-figurative contexts, extracted from the domains of restaurant and product reviews. We show that state-of-the-art BERT-based QA models exhibit an average performance drop of up to 15% points when answering questions from figurative contexts, as compared to non-figurative ones. While models like GPT-3 and ChatGPT are better at handling figurative texts, we show that further performance gains can be achieved by automatically simplifying the figurative contexts into their non-figurative (literal) counterparts. We find that the best overall model is ChatGPT with chain-of-thought prompting to generate non-figurative contexts. Our work provides a promising direction for building more robust QA models with figurative language understanding capabilities.

## 1 Introduction

_"Questions are never indiscreet. Answers sometimes are."_ - _Oscar Wilde_

One of the many interesting phenomena occurring in natural language is the presence of figurative language, which, while making communication creative and memorable Danescu-Niculescu-Mizil et al. (2012), may sometimes also prove difficult to understand Zayed et al. (2020). This includes (but is not limited to) linguistic constructs such as idioms, similes, metaphors, rhetorical questions, hyperbole, personification, sarcasm, and irony. It may be particularly difficult for non-native speakers to interpret figurative expressions, and phenomena like sarcasm are often missed altogether Joshi et al. (2016). Given that figurativeness is commonplace in everyday communication Lakoff and Johnson (2008), progress in the field of Natural Language Understanding (NLU) would be incomplete without figurativeness understanding. Consequently, figurative text has been studied in various downstream NLP tasks such as machine translation Dankers et al. (2022), textual entailment Agerri (2008); Chakrabarty et al. (2021); Liu et al. (2022) and dialog models Jhamtani et al. (2021), inter alia. However, to the best of our knowledge, there has not been a systematic study of the figurative language understanding capabilities of question answering models. We focus on yes/no questions for our question answering (QA) task. Yes/no questions are a good test of figurative language understanding because correctly answering them requires the reader to correctly understand the figurative language. Extractive QA, on the other hand, is not a good test for figurative language understanding because it does not require actually understanding the figurative language.

Figure 1: To answer the question "Did the cake taste good?" based on the context, a Question Answering (QA) model needs to be able to correctly infer the meaning of the figurative text "the most sinfully decadent ever".

For example, if we were to pose the question "How did the cake taste?"
from the context "The cake was described as the most sinfully decadent ever.", an answer such as "sinfully decadent" from an extractive QA model doesn't really tell us that the model understands the meaning of the figurative text "sinfully decadent". It simply copies the figurative text and it's up to the reader to infer what the answer means. However, in order to answer a yes/no question such as "Did the cake taste good?", a QA model needs to correctly infer that "sinfully decadent" means _rich and delicious_, or in other words, _really good_, and therefore the answer would be _yes_. Despite the lack of attention to figurative language in QA tasks, figurative language is extremely common in some important domains, such as online reviews. We randomly sampled 100 reviews from the train split of the Yelp Challenge Dataset1, and observe that at least 60% of these reviews contain figurative expressions. Users often write strongly-worded reviews, to express highly positive or highly negative opinions about products or services (Mohammad et al., 2016), which tend to contain figurative language. Footnote 1: We use the version in Huggingface Datasets ([https://huggingface.co/datasets/yelp_review_full](https://huggingface.co/datasets/yelp_review_full)), from the paper (Zhang et al., 2015) We show that it can be challenging for existing QA models to draw inferences from figurative text. To do this, we present a new dataset, _FigurativeQA_, consisting of 1000 yes/no questions and accompanying figurative and non-figurative contexts constructed from Amazon product reviews (Niculae and Danescu-Niculescu-Mizil, 2014) and Yelp restaurant reviews (Oraby et al., 2017). In Figure 2, we show examples from FigurativeQA, in two domains: Amazon product reviews and Yelp restaurant reviews, for both figurative and non-figurative contexts. Each context is accompanied by a question-answer pair, and in the case of figurative contexts, manually constructed and automatically obtained non-figurative versions of the context. We develop a variety of methods for improving QA performance for figurative text. We prompt powerful LLMs like GPT-3 and ChatGPT to convert figurative contexts to literal ones as an intermediate step to question answering. We then provide these literal contexts as input to state-of-the-art QA models, resulting in considerable gains in performance. The best performance is achieved by the chain-of-thought prompting method with ChatGPT in a few-shot setting, where the model generates a simplified version of the context and then generates the yes/no answer. We also use these LLMs to generate domain-specific training data to fine-tune models specifically for this task. The outline of the paper is as follows: after reviewing related work (§2), we introduce our new QA dataset for figurative language, FigurativeQA (§3). We report baseline QA performance on FigurativeQA and introduce a method for simplifying figurative language to non-figurative by prompting GPT-3 and ChatGPT, which we use to improve our baseline QA models (§4, 5, 6). We report our experiments with chain-of-thought prompting in §7. We prompt ChatGPT to generate in-domain training data for figurative question answering (§8). We finally conclude in §10. The FigurativeQA dataset can be accessed at [https://github.com/geetanjali-rakshit/figurativeQA](https://github.com/geetanjali-rakshit/figurativeQA).

## 2 Related Work

Figurative language has been a difficult problem for many natural language processing (NLP) applications.
A number of computational approaches have been proposed to study their occurrence in text (Veale et al., 2016; Qadir et al., 2016; Kordoni, 2018; Mao et al., 2018; Zhang et al., 2017; Troiano et al., 2018), including the generation of figurative language (Chakrabarty et al., 2020; Zhou et al., 2021). The idea of converting metaphors to their literal counterparts has been previously explored for machine translation by Mao et al. (2018), where metaphors in English text are first identified and then converted to a literal version by using word embeddings and WordNet, before doing machine translation into Chinese. In dialog systems, a similar approach was employed by Jhamtani et al. (2021), where idioms and metaphors in utterances are converted to literal versions using a dictionary lookup-based method. Our work is closest to Jhamtani et al. (2021), except that we explore the robustness of QA systems to figurative language in a machine comprehension setup, instead of dialog models, which, to the best of our knowledge, is a first. Our automatic approach to creating rephrased non-figurative versions of figurative text uses pre-trained language models, rather than rule-based methods, which have been shown to be error-prone (Jhamtani et al., 2021). In concurrent work, Chakrabarty et al. (2022) have also prompted GPT-3 to create their figurative NLI dataset, FLUTE, as well as to obtain explanations of the NLI labels in this dataset. To our knowledge, there are no QA datasets specifically designed for figurative language understanding, but some existing QA datasets do contain figurative language. The FriendsQA dataset Yang and Choi (2019) is a dialog-based QA dataset constructed from dialogs from the TV series Friends. While it does contain metaphors and sarcasm, the focus of the dataset is not figurative language, and it is not ideal for testing figurative language understanding as it is unclear how much of the dataset is figurative. The dialog nature of the dataset further contributes to making it challenging and complicates studying the effect of figurativeness. Another dataset that requires figurative language understanding is the RiddleSense dataset Lin et al. (2021), which comprises riddles, but unlike ours, it is modeled as an open-domain QA task rather than a machine comprehension task. Parde and Nielsen (2018) show that questions about novel metaphors from literature are judged by humans to be deeper than those about non-metaphorical or conventional metaphors, but their focus is on generating deep questions rather than testing the robustness of QA models. Dankin et al. (2022) construct yes/no questions using templates to detect the presence of metaphors in a few-shot setting.

## 3 FigurativeQA Dataset

The contexts in FigurativeQA come from two sources: Amazon product reviews Niculae and Danescu-Niculescu-Mizil (2014), and Yelp restaurant reviews Oraby et al. (2017). We extract both figurative and non-figurative contexts from each source. We manually construct yes/no questions and answers on top of these contexts. Figure 2 shows examples from FigurativeQA. The data statistics from each source (Amazon and Yelp) and each split (figurative and non-figurative) are summarized in Table 1. For the Amazon part of FigurativeQA, we use Niculae and Danescu-Niculescu-Mizil (2014)'s dataset of figurative comparisons. Of the 1260 comparisons in this dataset, we extract instances where all 3 annotators are in agreement about figurativeness (i.e., an average figurativeness score of greater than 3).
We then randomly pick 150 examples to form the set of figurative contexts. From the examples with a low average figurativeness score, we select 150 examples to form the set of non-figurative contexts.

Figure 2: Examples from the figurative and non-figurative splits of FigurativeQA, from Amazon product reviews and Yelp restaurant reviews. The figurative text fragments within the contexts are shown in bold and italics.

For the Yelp part of the dataset, the contexts are sourced from (Oraby et al., 2017)'s NLG dataset for the restaurant domain. Since highly positive or highly negative reviews are more likely to contain figurative language, we extract these first, and then, similar to (Niculae and Danescu-Niculescu-Mizil, 2014), use comparator expressions to get a set of reviews likely to be rich in figurative content. We then manually examine these reviews to annotate 350 examples each of figurative and non-figurative contexts. The figurative contexts in FigurativeQA tend to contain more _similes_, since comparator patterns (_"like"_, _"as"_, or _"than"_) were used to extract the text. However, we observe that many of these examples also contain other kinds of figurative constructs such as metaphor, idiom, hyperbole, sarcasm, etc. Table 2 shows the number of occurrences of various kinds of figurative constructs that we observe in a random set of 100 figurative contexts each from Amazon and Yelp in FigurativeQA. (Oraby et al., 2017) note that one of the most prominent characteristics of restaurant reviews in the Yelp corpus is the prevalence of hyperbole, which we also observe in this sample. A context may contain multiple figurative elements, coming from different text fragments within the context. Also, in some cases, the same text fragment may denote multiple kinds of figurative constructs. In Figure 3, we show some examples of various kinds of figurative constructs occurring in FigurativeQA. For each context in FigurativeQA, we construct a yes/no question. For the figurative contexts, we make sure to pose a question such that answering it would require an understanding of the figurative text present in the context. For the non-figurative contexts, we construct questions similar to the ones for the figurative contexts. Additionally, for the figurative contexts extracted from Amazon and Yelp, we manually create non-figurative counterparts that preserve the meaning and overall content.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Amazon**} & \multicolumn{2}{c}{**Yelp**} \\ & **Fig.** & **Non-fig.** & **Fig.** & **Non-fig.** \\ \hline **Yes** & 77 & 76 & 174 & 175 \\ **No** & 73 & 74 & 176 & 175 \\ \hline **Total** & 150 & 150 & 350 & 350 \\ \hline \hline \end{tabular} \end{table} Table 1: Distribution of yes/no questions from Amazon product reviews and Yelp restaurant reviews for figurative and non-figurative contexts

\begin{table} \begin{tabular}{l l l} \hline \hline **Split** & **Context** & **Fig. construct** \\ \hline **Amazon** & _The books are **like potato chips** - you **can't eat just one**._ & _simile, idiom_ \\ & _So when my laptop battery puffed up **like a balloon**, I dreaded paying the cost of replacement._ & _simile, hyperbole_ \\ & _Really, this novel feels more **like a footnote** to the series whereas The Gunslinger was a novel that **stood extremely well on its own**._ & _simile, sarcasm_ \\ \hline **Yelp** & _i had the chicken fajitas, which came with a giant flour tortilla that was **as hot as hades**._ & _simile, hyperbole_ \\ & _the cheese was scarce as was the meat, and the taste was nothing to **write home about**._ & _idiom_ \\ & _i ate as much as i could because truly, underneath the **salt mine** on my plate, was some damn fine corned beef hash!_ & _metaphor, hyperbole_ \\ \hline \hline \end{tabular} \end{table} Table 2: Distribution of occurrences of various kinds of figurative constructs in a random sample of 100 contexts from Amazon and Yelp each. It is common for a context to contain multiple figurative expressions, so these do not add up to 100% (refer to Figure 3 for examples).

Figure 3: Examples of figurative constructs observed in the Amazon and Yelp datasets. The figurative text fragments within the contexts are shown in bold and italics. In case of multiple labels occurring in the same context, the first bold fragment corresponds to the first label, and so on. In some cases, the same text fragment may have multiple labels (as in row 2).

### Inter-annotator Agreement

Annotations for all the data in FigurativeQA (figurativeness scores for the examples from Yelp, construction of question-answer pairs, manual conversion of figurative contexts to non-figurative) were done by an in-house-trained graduate-student annotator. To assess the quality of the figurative and non-figurative contexts for Yelp, we perform a second round of annotations with another trained annotator on a random sample of 50 contexts. This resulted in an inter-annotator agreement of 0.72 on figurativeness, calculated by Cohen's \(\kappa\). Similarly, to assess the overall quality of FigurativeQA, we randomly sample 50 figurative contexts for double annotation, which gives an additional set of annotations for the answers to the questions. The inter-annotator agreement on the answers was found to be 0.96, calculated by Cohen's \(\kappa\). To validate the effectiveness of the questions for figurativeness comprehension, we also asked the annotators to indicate if answering the question required them to understand figurative text fragments present in the context. In the random sample of 50, in 49 cases the annotators were in agreement that this was indeed the case.

## 4 Do QA models find answering questions from figurative contexts harder?

Using FigurativeQA as a test set, we show that current models struggle to do well on figurative text compared to literal ones. We use a RoBERTa-based Liu et al. (2019) QA model fine-tuned on BoolQ to show this. The BoolQ dataset Clark et al. (2019) consists of yes/no questions from the Natural Questions dataset. We use the training split of BoolQ containing 9,427 examples to fine-tune RoBERTa-base and report its performance on FigurativeQA in Table 3. We find that the RoBERTa QA model performs poorly on the figurative contexts compared to the non-figurative contexts, with a drop in performance of \(\sim\)8.5% points for Amazon and \(\sim\)23% points for Yelp. We observe that switching the figurative contexts for their manually created non-figurative counterparts raises these numbers in both cases, by \(\sim\)10% points and \(\sim\)23% points, for Amazon and Yelp, respectively.
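As an illustration of the kind of baseline evaluation described in this section, the sketch below scores a BoolQ-style yes/no classifier on one context-question pair. The checkpoint name is a placeholder (any RoBERTa model fine-tuned on BoolQ with a two-way classification head would do), and the label order is model-specific, so this is a sketch of the setup rather than the authors' exact evaluation code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: substitute a RoBERTa model fine-tuned on BoolQ.
model_name = "your-org/roberta-base-boolq"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

context = "The cake was described as the most sinfully decadent ever."
question = "Did the cake taste good?"
gold = "yes"

# BoolQ-style inputs are (question, passage) pairs fed to a binary classifier.
inputs = tokenizer(question, context, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = "yes" if logits.argmax(dim=-1).item() == 1 else "no"  # check the checkpoint's label mapping

print(pred, pred == gold)
```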
More powerful models like ChatGPT (in a few-shot setting) perform significantly better on figurative contexts, but still don't match the results on non-figurative versions of the contexts. This indicates that the conversion of figurative language to non-figurative language may help improve QA performance.

## 5 Can prompting or finetuning LLMs help simplify figurative contexts?

We posit that answering questions from figurative contexts is harder, and that simplifying the figurative context into its literal/non-figurative version improves QA performance. However, since the task of manually converting figurative text to non-figurative is expensive and time-consuming, we propose to do this automatically by prompting GPT-3 Brown et al. (2020) in two ways. First, we use GPT-3 (da-vinci-003) and ChatGPT in a few-shot setting to generate non-figurative/literal versions of all the figurative contexts in FigurativeQA.2 We used a similar approach to prompt ChatGPT. Please refer to Appendix A for model details and the prompts used. Second, we use a version of GPT-3 (da-vinci-002) fine-tuned specifically for the task of converting figurative to literal text. Footnote 2: The experiments for this method to convert figurative text to non-figurative were performed by running API calls to the OpenAI da-vinci model. For each context, this took less than 1 second, for a total of less than 18 min, and cost less than 8 USD for the entire dataset.

\begin{table} \begin{tabular}{l l l} \hline \hline & **Amazon** & **Yelp** \\ \hline **RoBERTa-BoolQ** & & \\ Fig (Original) & 83.4 \(\pm\) 0.7 & 66.8 \(\pm\) 1.4 \\ Fig (manual non-fig) & **93.5 \(\pm\) 1.1*** & **90.0 \(\pm\) 1.4*** \\ Non-fig (Original) & 92.0 \(\pm\) 1.4 & 89.8 \(\pm\) 1.7 \\ \hline **ChatGPT (few-shot)** & & \\ Fig (Original) & 92.6\(\pm\)1.1 & 80.6\(\pm\)0.7 \\ Fig (manual non-fig) & **93.8 \(\pm\)0.3*** & **83.3\(\pm\)1.6*** \\ Non-fig (Original) & \(93.5\pm 0.3\)* & \(88.7\pm 1.8\)* \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy of RoBERTa-base fine-tuned on BoolQ, and ChatGPT (few-shot), on the figurative split, the manually created non-figurative version of the figurative split, and the non-figurative split of FigurativeQA. (We reran experiments 1000 times with bootstrap resampling. The numbers reported are the mean and std-dev. \({}^{*}\) denotes statistically significant results, with \(p<0.05\) calculated using the Wilcoxon signed-rank test. The numbers in **bold** are the best results.)

As an intrinsic evaluation of the effectiveness of our prompting method, we manually evaluate the correctness of the non-figurative/literal contexts generated by prompting GPT-3 on a random sample of 100 instances each from Amazon and Yelp in FigurativeQA. We label each generated literal version as either **"correct"**, where none of the figurative expressions are present but the meaning is preserved, or **"incorrect"**, where the generated output is the same as or similar to the original context or the meaning has changed. Please note that this is a rather strict evaluation of correctness, as in some cases some of the figurative text fragments present in the context are converted to literal, while the context may still be left with some amount of figurativeness (possibly arising from multiple figurative text fragments present in the context). Table 4 shows the results from the manual evaluation of the GPT-3 and ChatGPT outputs.
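The snippet below is an illustrative sketch of this simplify-then-answer setup using the OpenAI chat API; the prompt wording and model name here are placeholders, not the authors' actual prompts (those are given in their Appendix A).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simplify_figurative(context: str) -> str:
    # Illustrative instruction; the paper's few-shot prompts differ.
    prompt = (
        "Rewrite the following review sentence without any figurative language "
        "(no similes, metaphors, idioms, hyperbole, or sarcasm), preserving its "
        f"meaning:\n\n{context}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

literal_context = simplify_figurative("The cake was the most sinfully decadent ever.")
# literal_context can then be fed to a yes/no QA model in place of the original context.
print(literal_context)
```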
We observe that these models are quite good at converting figurative language in FigurativeQA to literal, with nearly 89% and 81% of the outputs from GPT-3 judged to be correct for Amazon and Yelp, respectively, and 92% and 88% for ChatGPT. In Figure 4, we show examples of non-figurative text generated from GPT-3 and ChatGPT. We next explore using a fine-tuned version of GPT-3 to generate literal versions of figurative texts. Chakrabarty et al. (2022) propose the FLUTE dataset for Natural Language Inference (NLI), which has 9,000 figurative NLI instances, and explanations for the NLI labels. We extract the premise-hypothesis pairs with the label _"entailment"_ from the training split of FLUTE to fine-tune GPT-3 (3,182 examples in total). We used the _davinci_ model from OpenAI as the base model and fine-tuned for 4 epochs, with all default settings. We didn't perform any hyper-parameter tuning.3 Table 4 (row 3) shows the results from the manual evaluation of the fine-tuned GPT-3 outputs. Footnote 3: To fine-tune GPT-3 on the FLUTE dataset, it cost about 15 USD and took 62 minutes.

## 6 Can automatically generated non-figurative text improve QA performance?

We observed that ChatGPT has much stronger performance on FigurativeQA than the baseline model of RoBERTa finetuned on BoolQ (section 4), and both of these models do better on non-figurative texts. We showed that both GPT-3 and ChatGPT can be effectively used to simplify figurative texts into their non-figurative counterparts (section 5). We next experiment with simplifying contexts to boost QA performance. As competitive baselines, we also report zero-shot and few-shot QA performance4 of GPT-3 and ChatGPT in Table 5. Besides the RoBERTa-finetuned-on-BoolQ baseline (previously described in section 4), we also fine-tune GPT-3 on the training split of BoolQ. For fine-tuning GPT-3, we used the _davinci_ model from OpenAI as the base model and fine-tuned for 4 epochs, with all default settings. We didn't perform any additional hyper-parameter tuning. Footnote 4: Please refer to Appendix B for details about prompting GPT-3 and ChatGPT as a QA system. In our experiments, we do not require knowing which contexts are figurative and which are non-figurative. We simply input both figurative and non-figurative contexts to the LLM to simplify any figurative language that is present, regardless of whether the context actually contains figurative language. In Table 5, we show that this method exhibits significant gains over the baseline RoBERTa model. We also report the performance of using the output of GPT-3-finetuned-FLUTE as input to the RoBERTa baseline.

## 7 Can we use chain-of-thought prompting for improving QA performance on FigurativeQA?

Wei et al. (2022) have shown chain-of-thought prompting in Large Language Models (LLMs) to be effective for solving tasks requiring complex reasoning. Since understanding figurative language often requires implicit reasoning, we investigate the effect of applying chain-of-thought prompting for FigurativeQA using ChatGPT. (Our few-shot prompt for the chain-of-thought method is described in Appendix C.) This approach gives us the highest overall accuracy on FigurativeQA (Table 5).
\begin{table} \begin{tabular}{l c c} \hline \hline & **Amazon** & **Yelp** \\ \hline GPT-3 & 89\% & 81\% \\ \hline ChatGPT & **92\%** & **88\%** \\ \hline Finetuned GPT-3 & 80\% & 77\% \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation of non-figurative outputs from GPT-3 and ChatGPT, showing the percentage of generated outputs that do not contain figurative expressions but preserve the original meaning of the figurative context.

## 8 Can we prompt LLMs to generate training data for FigurativeQA?

Due to the lack of training data for question answering with figurative contexts, our supervised models are all finetuned on BoolQ. We hypothesize that adding synthetically generated QA pairs for this task will improve the performance of the fine-tuned models. We prompt ChatGPT to generate synthetic training data (we tried a variety of prompts - refer to Appendix D for the prompt used). We use contexts from both the Amazon and Yelp domains to generate question-answer pairs from ChatGPT. For the Amazon contexts, we randomly sample reviews from 4 categories (Books, Electronics, Jewelry and Digital Music) of the Amazon Product reviews from McAuley and Leskovec (2013). From these reviews, we extract sentences containing comparator patterns ("like", "as", "than") and use them as contexts, as they are more likely to contain figurative expressions. For the Yelp contexts, we extract sentences from Oraby et al. (2017)'s NLG dataset also containing the same comparator patterns, but not already included in FigurativeQA. (Refer to Appendix E for statistics of the data generated for training.) We find that further finetuning RoBERTa-finetuned-on-BoolQ on synthetic QA data generated from ChatGPT yields the best performance on the figurative split of both Amazon and Yelp (Table 5).

## 9 How much does the prompting method help with handling figurativeness?

Our experiments show that the process of converting figurative text into literal text by prompting GPT-3 may effectively be used to improve question answering performance. We also study the effect of our method on the degree of figurativeness present in the text. The Amazon reviews data from Niculae and Danescu-Niculescu-Mizil (2014) comes labeled with figurativeness scores of 1-4, with 3 sets of annotations. Using the average figurativeness scores, we bin the Amazon review examples in FigurativeQA into 4 splits, and compute the improvement in QA performance when using our method over the baseline. As evident from Figure 5, the more figurative examples show a higher gain in QA performance.

Figure 4: Examples of non-figurative contexts generated from GPT-3, for Amazon and Yelp. The figurative text fragments within the contexts are shown in **bold** and _italics_.

## 10 Conclusion and Future Work

We demonstrate that current QA models have reduced accuracy when answering questions from figurative contexts compared to literal ones. This indicates the need for QA models that are robust to figurative language. By manually creating non-figurative versions of these contexts, we observe a significant improvement in performance. To automate this approach, we propose a method of prompting GPT-3 to produce simplified, non-figurative contexts, which yields significant performance gains over the baseline. Chain-of-thought prompting using ChatGPT has the best overall performance on FigurativeQA. We hope that our method and dataset will spur more research into question answering with figurative language.
## 11 Acknowledgments This research was supported in part by a research gift from eBay Inc. ## Limitations Our dataset contains the specific domains of Amazon and Yelp reviews, which is English-only, and results and conclusions may not generalize to other domains or languages. The text generated by prompting GPT-3 may sometimes produce text that is not faithful to the original figurative text. Figure 5: Figurativenss Vs Accuracy for the instances from Amazon reviews \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{**Fig.**} & \multicolumn{2}{c|}{**Non-fig.**} & \multicolumn{2}{c|}{**Overall**} \\ & **Amazon** & **Yelp** & **Amazon** & **Yelp** & **Amazon** & **Yelp** \\ \hline \hline \multicolumn{2}{|l|}{**Zero-Shot**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT-3 (zero) & 71.9\(\pm\)1.2 & 60.2\(\pm\)3.2 & 88.7\(\pm\)0.9 & 86.0\(\pm\)2.2 & 80.3\(\pm\)1.1 & 73.1\(\pm\)2.1 \\ \hline ChatGPT (zero) & 91.0\(\pm\)0.7 & 87.4\(\pm\)2.6 & 93.0\(\pm\)0.3 & 88.6\(\pm\)2.4 & 92.0\(\pm\)0.5 & 88.0\(\pm\)2.3 \\ \hline \hline \multicolumn{2}{|l|}{**Few-Shot**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT-3 (few) & 85.7\(\pm\)1.8 & 64.1\(\pm\)3.7 & 90.2\(\pm\)0.8 & 88.3\(\pm\)1.9 & 88.0\(\pm\)1.1 & 76.2\(\pm\)2.7 \\ \hline \multicolumn{2}{|l|}{**CatGPT (few)**} & 92.6\(\pm\)1.1 & 80.6\(\pm\)0.7 & 93.5\(\pm\)0.3 & \(88.7\pm 1.8\) & 93.0\(\pm\)0.7 & 84.7\(\pm\)1.1 \\ \hline \hline \multicolumn{2}{|l|}{**Supervised**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline RoBERTa & 83.2\(\pm\)1.1 & 66.8\(\pm\)2.6 & 92.2\(\pm\)1.4 & 89.6\(\pm\)1.7 & 87.7\(\pm\)0.9 & 78.2\(\pm\)1.6 \\ \hline GPT-3-BoolQ & 86.3\(\pm\)2.1 & 69.2\(\pm\)3.8 & 88.7\(\pm\)0.9 & 86.5\(\pm\)1.2 & 87.5\(\pm\)1.4 & 77.9\(\pm\)2.2 \\ \hline RoBERTa & **95.3\(\pm\)0.5** & **92.3\(\pm\)0.7** & 95.8\(\pm\)1.2 & 90.8\(\pm\)1.6 & 95.5\(\pm\)0.7 & **91.5\(\pm\)0.9** \\ +synthetic & & & & & & \\ \hline \hline \multicolumn{2}{|l|}{**Simplified Contexts**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT-3+ & \(86.5\pm 1.1\) & \(73.4\pm 1.7\) & \(92.4\pm 1.1\) & \(89.4\pm 1.7\) & \(89.5\pm 3.2\) & \(81.5\pm 1.2\) \\ RoBERTa & & & & & & \\ \hline GPT-3-FLUTE & 88\(\pm\)0.7 & 69.4\(\pm\)2.1 & 92.0\(\pm\)0.4 & 89.5\(\pm\)1.2 & \(90.0\pm 1.4^{*}\) & \(79.4\pm 2.3^{*}\) \\ +RoBERTa & & & & & & \\ \hline ChatGPT+ & 88.7\(\pm\)1.6 & 75.3\(\pm\)3.5 & 92.2\(\pm\)1.1 & 89.5\(\pm\)2.1 & 90.5\(\pm\)1.2 & 82.4\(\pm\)3.2 \\ RoBERTa & & & & & & \\ \hline ChatGPT+ & 89.3\(\pm\)0.8 & 91.0\(\pm\)0.3 & 95.7\(\pm\)0.7 & 91.2\(\pm\)0.2 & 92.5\(\pm\)0.4 & 91.1\(\pm\)0.3 \\ ChatGPT (few) & & & & & & \\ \hline ChatGPT+CoT & 94.7\(\pm\)0.3 & 91.6\(\pm\)1.2 & **96.4\(\pm\)1.1** & **91.4\(\pm\)0.7** & **95.6\(\pm\)0.9** & **91.5\(\pm\)1.1** \\ \hline \end{tabular} \end{table} Table 5: QA accuracy on FigurativeQA. (We reran experiments 1000 times with bootstrap resampling. The numbers reported are the mean and std-dev. \({}^{*}\) denotes results that are not statistically significant compared to the best results, with \(p<0.05\) calculated using the Wilcoxon signed-rank test. 
The numbers in **bold** are the best results.) GPT-3 finetuned models use da-vinci-002 as the base model.
2301.00670
Simultaneous fermion and exciton condensations from a model Hamiltonian
Fermion-exciton condensation in which both fermion-pair (i.e., superconductivity) and exciton condensations occur simultaneously in a single coherent quantum state has recently been conjectured to exist. Here, we capture the fermion-exciton condensation through a model Hamiltonian that can recreate the physics of this new class of highly correlated condensation phenomena. We demonstrate that the Hamiltonian generates the large-eigenvalue signatures of fermion-pair and exciton condensations for a series of states with increasing particle numbers. The results confirm that the dual-condensate wave function arises from the entanglement of fermion-pair and exciton wave functions, which we previously predicted in the thermodynamic limit. This model Hamiltonian -- generalizing well-known model Hamiltonians for either superconductivity or exciton condensation -- can explore a wide variety of condensation behavior. It provides significant insights into the required forces for generating a fermion-exciton condensate, which will likely be invaluable for realizing such condensations in realistic materials with applications from superconductors to excitonic materials.
LeeAnn M. Sager, David A. Mazziotti
2022-12-29T18:30:46
http://arxiv.org/abs/2301.00670v1
# Simultaneous Fermion and Exciton Condensations from a Model Hamiltonian ###### Abstract Fermion-exciton condensation in which both fermion-pair (i.e. superconductivity) and exciton condensations occur simultaneously in a single coherent quantum state has recently been conjectured to exist. Here, we capture the fermion-exciton condensation through a model Hamiltonian that can recreate the physics of this new class of highly-correlated condensation phenomena. We demonstrate that the Hamiltonian generates the large-eigenvalue signatures of fermion-pair and exciton condensations for a series of states with increasing particle numbers. The results confirm that the dual-condensate wave function arises from the entanglement of fermion-pair and exciton wave functions, which we previously predicted in the thermodynamic limit. This model Hamiltonian--generalizing well-known model Hamiltonians for either superconductivity or exciton condensation--can explore a wide variety of condensation behavior. It provides significant insights into the required forces for generating a fermion-exciton condensate, which will likely be invaluable for realizing such condensations in realistic materials with applications from superconductors to excitonic materials. pacs: 31.10.+z ## I Introduction Model Hamiltonians are theoretical tools that are often useful in simulating the key physics associated with large-scale, highly-correlated systems. They are capable of modeling an array of quantum phases and many-body phenomena such as phase transitions [1; 2; 3; 4; 5], superconductivity [6; 7; 8; 9; 10], quantum magnetism [11; 12; 13; 14], exciton condensation [15; 16; 17; 18; 19; 20; 21], lattice-like systems [22; 23], etc. Additionally, model Hamiltonians which encompass nontrivial physics are often useful as benchmarks for theoretical tools such as many-body approximations [6; 24; 25; 26]. Condensation phenomena--which are inherently highly-correlated--have a long history of being computationally studied through the lens of model Hamiltonians as traditional band theory is inaccurate for such highly-entangled materials [6; 7; 10; 27; 28; 29]. Specifically, superconductors--materials in which fermion-fermion (Cooper/electron-electron) pairs aggregate into a single quantum state, resulting in the superfluidity of the fermion-fermion pairs--are often explored through use of the Pairing-Force (PF) Hamiltonian [6; 7; 8; 9], which is additionally referred to as the Standard Reduced Bardeen-Cooper-Schrieffer (BCS) Hamiltonian [10; 30; 31]. This Hamiltonian is a simple representation of superconductivity as it describes a system with bound Cooper (or Cooper-like particle-particle) pairs interacting in an attractive manner with the holes they leave behind in a Fermi sea with the high-correlation limit of this Hamiltonian resulting in well-known, number-projected BCS wave functions [7; 32]. Similarly, exciton condensation--in which particle-hole (exciton) pairs condense into a single quantum state resulting in the superfluidity of the composite excitons [33]--can be modeled according to the Lipkin-Meshkov-Glick (LMG) Hamiltonian, which is often simply referred to as the Lipkin model [15; 16; 17; 18; 19; 20; 21; 29]. This Hamiltonian is a highly-degenerate system in which partnered orbitals are inherently particle-hole paired and whose strongly-correlated form results in ground states that demonstrate character of exciton condensation. 
Here, we introduce a model Hamiltonian that is capable of capturing fermion-exciton condensation, a new class of highly-correlated condensation phenomena in which both fermion-pair and exciton condensations coexist in a single quantum state. We demonstrate such coexistent condensate character by calculating the quantum signatures of fermion-pair [35; 36] and exciton [37; 38] condensations (see Sec. II and Appendix A) for systems of even particle numbers ranging from \(N=4\) to \(N=10\) particles in \(r=2N\) orbitals. These fermion-exciton condensates are shown to be described by wavefunctions which are entanglements of wavefunctions from BCS-like superconductivity and Lipkin-like exciton condensation--consistent with our prior predictions for the large-\(N\) thermodynamic limit [39] as well as those we observed experimentally on a quantum device [40]. Our determination of a model Hamiltonian that supports fermion-exciton condensation provides information regarding the nature of the forces necessary to generate such systems--an invaluable first step in the realization of real-world systems that support such dual condensation of excitons and fermion-fermion pairs, which may demonstrate some sort of hybrid of the properties of superconductors and exciton condensates and hence have applications in energy transport and electronics. The extent of these different phases and the transitions between these phases can also be studied. Moreover, our Hamiltonian provides an important reference in order to determine whether a given many-body approximation is capable of measuring dual condensate character. ## II Theory ### Fermion-Pair Condensation Superconductivity results from the condensation of bosonic fermion-fermion pairs [41; 42; 43; 10] into a single geminal--a two-fermion function directly analogous to the one-fermion orbital [35; 36; 44; 45; 46; 47]--at temperatures below a certain critical temperature. This condensation of fermion-pairs results in the superfluidity (i.e., frictionless flow) of the constituent particle-particle pairs [43; 48; 49; 10]; if the fermionic pairs are composed of electrons (i.e., Cooper pairs), then these superfluid electron-electron pairs demonstrate superconductivity. As was first demonstrated by Yang [35] and Sasaki [36], a computational signature of such superconducting states is a large eigenvalue in the particle-particle reduced density matrix (2-RDM), whose elements are given by \[{}^{2}D^{i,j}_{k,l}=\langle\Psi|\hat{a}^{\dagger}_{i}\hat{a}^{\dagger}_{j} \hat{a}_{l}\hat{a}_{k}|\Psi\rangle \tag{1}\] where \(|\Psi\rangle\) is an \(N\)-fermion wavefunction and where \(\hat{a}^{\dagger}_{i}\) and \(\hat{a}_{i}\) are fermionic creation and annihilation operators for orbital \(i\), respectively. As eigenvalues of the 2-RDM can be interpreted as the occupations of the two-fermion geminals [50], when the largest eigenvalue of the 2-RDM--the signature of particle-particle condensation, represented by \(\lambda_{D}\)--exceeds the Pauli-like limit of one (\(\lambda_{D}>1\)), multiple fermion-fermion pairs occupy a single geminal and hence superconducting character is observed. This signature is known to directly probe the presence and extent of non-classical (off-diagonal) long-range order [45]. (See the Appendix for more details on how the signature of superconductivity, \(\lambda_{D}\), was computed.) 
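To make the signature concrete, the following is a minimal numerical sketch (not the authors' code) that builds fermionic operators with a Jordan-Wigner construction, prepares an extreme number-projected BCS (AGP) state for \(N=4\) fermions in \(r=8\) spin orbitals, and extracts \(\lambda_{D}\) as the largest eigenvalue of the 2-RDM of Eq. (1); the printed value exceeds the Pauli-like limit of one.

```python
import numpy as np

def annihilation_ops(n_modes):
    """Jordan-Wigner matrices for fermionic annihilation operators."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    s = np.array([[0.0, 1.0], [0.0, 0.0]])   # |1> (occupied) -> |0> (empty)
    ops = []
    for m in range(n_modes):
        factors = [Z] * m + [s] + [I] * (n_modes - m - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

r, N = 8, 4                                   # spin orbitals and particle number
a = annihilation_ops(r)
ad = [op.T for op in a]

# Extreme AGP / number-projected BCS state: (sum_p a^+_{p,up} a^+_{p,dn})^(N/2) |vac>,
# pairing spin orbitals (2p, 2p+1).
vac = np.zeros(2 ** r); vac[0] = 1.0
pair_creator = sum(ad[2 * p] @ ad[2 * p + 1] for p in range(r // 2))
psi = np.linalg.matrix_power(pair_creator, N // 2) @ vac
psi /= np.linalg.norm(psi)

# 2-RDM of Eq. (1) over orbital pairs i<j, k<l; its largest eigenvalue is lambda_D.
pairs = [(i, j) for i in range(r) for j in range(i + 1, r)]
D = np.zeros((len(pairs), len(pairs)))
for A, (i, j) in enumerate(pairs):
    for B, (k, l) in enumerate(pairs):
        D[A, B] = psi @ (ad[i] @ (ad[j] @ (a[l] @ (a[k] @ psi))))
lam_D = np.linalg.eigvalsh(D).max()
print(lam_D)   # ~1.5 for this pair-condensed state, i.e. above the Pauli-like limit of 1
```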
The Pairing-Force (PF) model [6; 7; 8; 9]--also called the Standard Reduced Bardeen-Cooper-Schrieffer (BCS) model [30; 31; 10]--is known to exhibit superconducting character in the strong correlation limit and hence achieve a large \(\lambda_{D}\). The Hamiltonian for the PF model is given in second quantization by \[\mathcal{H}_{PF}=\frac{1}{2}\sum_{\sigma=\uparrow,\downarrow}\sum_{p=1}^{N} \epsilon_{p}\hat{a}^{\dagger}_{p,\sigma}\hat{a}_{p,\sigma}-G\sum_{p=1}^{N} \sum_{q=1}^{N}\hat{a}^{\dagger}_{p,\uparrow}\hat{a}^{\dagger}_{p,\downarrow} \hat{a}_{q,\downarrow}\hat{a}_{q,\uparrow} \tag{2}\] where \(p\) is a quantum number that represents a pair of orbitals denoted as \(p,\uparrow\) and \(p,\downarrow\) with the same energy, where the energies (\(\epsilon_{p}\)) are considered to be known, and where the parameter \(G\) is a constant that tunes the strength of the pairwise interactions. Note that in the limit of strong correlation (\(G>>\epsilon_{p}\)), maximal superconducting character--\(\lambda_{D}=\frac{N}{2}\left(1-\frac{N-2}{r}\right)\)[44; 50]--is observed. ### Exciton Condensation Directly analogous to superconductivity resulting from bosonic particle-particle pairs condensing into a single particle-particle function, exciton condensation results from the condensation of particle-hole pairs (i.e., excitons) into a single particle-hole function below a certain critical temperature, which results in the superfluidity of the excitons [33; 51]. Exciton condensates, while difficult to realize experimentally, have been observed in systems composed of polaritons (excitons coupled to photons) [52; 53; 54] and in two-dimensional structures such as semiconductors [55], graphene bilayers [56; 57; 58], and van der Waals heterostructures [59; 60; 61; 62]. The signature of exciton condensation--denoted as \(\lambda_{G}\)--is similarly analogous to that for fermion-pair condensation; the presence and extent of exciton condensate character can be measured from the largest eigenvalue of a modified particle-hole reduced density matrix given by [37; 38; 63] \[{}^{2}\tilde{G}^{i,j}_{k,l} ={}^{2}G^{i,j}_{k,l}-{}^{1}D^{i\,1}_{k}D^{j}_{l}\] \[=\langle\Psi|\hat{a}^{\dagger}_{i}\hat{a}^{\dagger}_{j}\hat{a}_{ l}\hat{a}_{k}|\Psi\rangle-\langle\Psi|\hat{a}^{\dagger}_{i}\hat{a}_{k}|\Psi \rangle\langle\Psi|\hat{a}^{\dagger}_{j}\hat{a}_{l}|\Psi\rangle \tag{3}\] where \({}^{1}D\) is the one-fermion reduced density matrix (1-RDM). Note that this modification removes the extraneous large eigenvalue from a ground-state-to-ground-state transition such that a signature above one (\(\lambda_{G}>1\)) is indicative of exciton condensation. (See the Appendix for more details on how the signature of exciton condensation, \(\lambda_{G}\) was computed.) This computational signature has been utilized to study exciton condensation is possible in quantum and molecular systems [39; 40; 41; 42; 43; 44; 29]. One model known to achieve a large \(\lambda_{G}\) value and hence exhibit exciton condensate character in the limit of a large correlation is the Lipkin quasispin model [15; 16; 17; 18; 19; 20; 21]. The \(N\)-fermion Lipkin quasispin model consists of two energy levels \(\left\{-\frac{\epsilon}{2},\frac{\epsilon}{2}\right\}\), each containing \(N\) energetically-degenerate states. The second-quantized Hamiltonian Figure 1: A figure of the condensate phase diagram in the phase space of the signatures of particle-particle condensation, \(\lambda_{D}\), and exciton condensation, \(\lambda_{G}\), is shown. 
can be expressed as [18] \[\mathcal{H}_{L}=\frac{\epsilon}{2}\sum_{\sigma=\pm 1}\sigma\sum_{p=1}^ {N}\hat{a}_{\sigma,p}^{\dagger}\hat{a}_{\sigma,p}\] \[+\frac{\gamma}{2}\sum_{\sigma=\pm 1}\sum_{p,q=1}^{N}\hat{a}_{+ \sigma,p}^{\dagger}\hat{a}_{-\sigma,p}\hat{a}_{-\sigma,q}^{\dagger}\hat{a}_{+ \sigma,q}\] \[+\frac{\lambda}{2}\sum_{\sigma=\pm 1}\sum_{p,q=1}^{N}\hat{a}_{+ \sigma,p}^{\dagger}\hat{a}_{+\sigma,q}^{\dagger}\hat{a}_{-\sigma,q}\hat{a}_{- \sigma,p} \tag{4}\] where \(\sigma=\pm 1\) and \(p=1,2,\ldots,N\) are quantum numbers that completely characterize the system in which \(p\) describes the site number labelling the \(N\) states in a given level and \(\sigma\) represents the upper (\(+1\)) or lower (\(-1\)) energy levels, respectively. Note that in this model, the \(\lambda\) term allows for double excitations and de-excitations, and the \(\gamma\) term allows for a single particle to be scattered up while another is simultaneously scattered down; as a result, in the Lipkin model, only even excitations are allowed, and only one particle may occupy a given site (i.e., have a specific quantum number \(p\)) such that each site in the lower level is particle-hole paired with the corresponding site in the upper level. By having the terms correlating orbitals in the Hamiltonian \((\lambda,\gamma)\) be sufficiently larger than the energy term (i.e., in the limit of high correlation), maximal exciton condensation--\(\lambda_{G}=\frac{N}{2}\)[37]-- can be obtained for \(\lambda=\gamma\). ### Fermion-Exciton Condensation A fermion-exciton condensate is a single quantum state that simultaneously demonstrates character of superconductivity and exciton condensation, i.e., both signatures of condensation--the largest eigenvalue of the particle-particle RDM (Eq. (1)) and the largest eigenvalue of the modified particle-hole RDM (Eq. (3))--are simultaneously large \((\lambda_{D},\lambda_{G}>1)\). [39]. To gain insight into such fermion-exciton condensates, here we propose a model system that is capable of demonstrating simultaneous fermion-pair and exciton condensate character. In this model, we introduce the pairwise interaction from the Pairing-Force model into the scaffolding of the Lipkin model; thus, the model keeps the structure of the Lipkin model in which \(N\) particles occupy two \(N\)-degenerate energy levels (\(-\frac{\epsilon}{2}\) and \(\frac{\epsilon}{2}\)) with allowed double excitations on two sites (\(\lambda\)) and simultaneous scattering of a particle up on one site and down on another (\(\gamma\))--where Lipkin-like sites are now given as orbitals \(p\) and \(p+N\); however, we additionally pair adjacent orbitals--orbitals \(2j-1\) and \(2j\) for \(j\in\{1,2,\ldots,N\}\)--via the PF parameter, \(G\). (See Fig. 2.) 
Figure 2: A pictorial representation of the model Hamiltonian we introduce in which there are two \(N\)-degenerate energy levels—with energies \(-\frac{\epsilon}{2}\) and \(\frac{\epsilon}{2}\)—with double excitations and de-excitations, scattering in which one particle is de-excited while another is simultaneously excited, and a pair-wise interaction term between sites \(2j-1\) and \(2j\) for \(j\in\{1,2,\ldots,N\}\) (yellow circles) is shown. Note that the Lipkin-like excitations must occur within a site (\(p\leftrightarrow p+N\), blue arrow).

The Hamiltonian for this model is thus given by \[\mathcal{H}=-\frac{\epsilon}{2}\sum_{i=1}^{N}\hat{a}_{i}^{\dagger}\hat{a}_{i}+\frac{\epsilon}{2}\sum_{i=1}^{N}\hat{a}_{i+N}^{\dagger}\hat{a}_{i+N}\] \[+\frac{\lambda}{2}\sum_{p=1}^{N}\sum_{q=1}^{N}\hat{a}_{p}^{\dagger}\hat{a}_{q}^{\dagger}\hat{a}_{q+N}\hat{a}_{p+N}+\frac{\lambda}{2}\sum_{p=1}^{N}\sum_{q=1}^{N}\hat{a}_{p+N}^{\dagger}\hat{a}_{q+N}^{\dagger}\hat{a}_{q}\hat{a}_{p}\] \[+\frac{\gamma}{2}\sum_{p=1}^{N}\sum_{q=1}^{N}\hat{a}_{p+N}^{\dagger}\hat{a}_{q}^{\dagger}\hat{a}_{q+N}\hat{a}_{p}+\frac{\gamma}{2}\sum_{p=1}^{N}\sum_{q=1}^{N}\hat{a}_{p}^{\dagger}\hat{a}_{q+N}^{\dagger}\hat{a}_{q}\hat{a}_{p+N}\] \[-G\sum_{j=1}^{N}\sum_{k=1}^{N}\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2k}\hat{a}_{2k-1} \tag{5}\] in second quantization, with a given set of parameters \((\epsilon,\lambda,\gamma,G)\) directly determining the extent of fermion-pair and exciton condensation (\(\lambda_{D}\) and \(\lambda_{G}\), respectively) of the ground state corresponding to this model Hamiltonian. While this model Hamiltonian is not the first to combine the pairwise interaction from the Pairing-Force model with the Lipkin model, the model Hamiltonian introduced by Plastino and coworkers causes direct competition between particle-particle and particle-hole correlations and hence proves incapable of demonstrating a fermion-exciton condensate phase (see Appendix B) [65; 66; 67]. Conversely, because we introduce the Pairing-Force interactions between adjacent orbitals instead of between orbitals in the same Lipkin-like site, particle-particle and particle-hole pairing can coexist, and hence fermion-exciton condensate (FEC) states can be achieved, as is shown in the results that follow.

## III Results

### \(N=4\), The Minimal FEC

As the authors have previously demonstrated [39], a system with as few as \(N=4\) particles in \(r=8\) orbitals can support formation of a fermion-exciton condensate. As such, we first fully explore such a minimalistic FEC system. The ground state of the FEC Hamiltonian that we have introduced--Equation (5)--for four particles has contributions from only ten of the seventy (\(\binom{r}{N}\)) possible configurations. Of these ten basis states, there are only five distinct classes composed of degenerate orientations--see Fig. 3--that allow for the direct computation of a matrix form of the Hamiltonian in a minimal basis. The five basis states are defined by three quantum numbers, \(x,y,bool\), where the first indicates the number of particles excited to the upper energy level (\(x\)), the second indicates the number of BCS-like pairs (the number of times both \(2j-1\) and \(2j\) are occupied, \(y\)), and the third is a boolean that indicates whether the configuration is "Lipkin"-like in the regard that no two orbitals representing a "Lipkin" site (denoted as \(p\) and \(p+N\), see the blue arrow in Fig. 2) are dually occupied or dually unoccupied. Utilizing the basis shown in Fig.
3--\(|0,2,T\rangle\), \(|2,2,F\rangle\), \(|2,2,T\rangle\), \(|2,0,T\rangle\), and \(|4,2,T\rangle\)--the Hamiltonian from Eq. (5) can be represented by \[{\cal H}_{4}=\left(\begin{array}{cccc}-2\epsilon-2G&-G\sqrt{2}&\frac{2 \lambda-2G}{\sqrt{2}}&2\lambda&0\\ -G\sqrt{2}&-2G+2\gamma&-2G&0&-G\sqrt{2}\\ \frac{2\lambda-2G}{\sqrt{2}}&-2G&-2G&2\gamma\sqrt{2}&\frac{2\lambda-2G}{\sqrt {2}}\\ 2\lambda&0&2\gamma\sqrt{2}&2\gamma&2\lambda\\ 0&-G\sqrt{2}&\frac{2\lambda-2G}{\sqrt{2}}&2\lambda&2\epsilon-2G\end{array}\right) \tag{6}\] where each term--corresponding to the interaction between two classes of basis states, \(|i\rangle\) and \(|j\rangle\)--is obtained from programmatically generating all sets of second-quantization creation and annihilation operators in Eq. (5), taking the expectation value for each combination of pairs of configurations in classes \(|i\rangle\) and \(|j\rangle\), summing the results, and normalizing by dividing by the square root of the number of configurations for both \(|i\rangle\) and \(|j\rangle\). For example, if \(|i\rangle=|2,2,F\rangle=(|1,2,5,6\rangle+|3,4,7,8\rangle)/\sqrt{2}\) and \(|j\rangle=|2,2,T\rangle=(|1,2,7,8\rangle+|3,4,5,6\rangle)/\sqrt{2}\), the Hamiltonian term would be given by \[\frac{\left(\langle 1,2,5,6|+\langle 3,4,7,8|\rangle\right)}{\sqrt{2 }}{\cal H}_{4}\frac{\left(|1,2,7,8\rangle+|3,4,5,6\rangle\right)}{\sqrt{2}}\] \[=\frac{1}{2}[\langle 1,2,5,6|{\cal H}_{4}|1,2,7,8\rangle+ \langle 1,2,5,6|{\cal H}_{4}|3,4,5,6\rangle\] \[+\langle 3,4,7,8|{\cal H}_{4}|1,2,7,8\rangle+\langle 3,4,7,8|{ \cal H}_{4}|3,4,5,6\rangle] \tag{7}\] Fig. 4a scans over the signatures of condensation--\(\lambda_{D}\) and \(\lambda_{G}\)--for the ground state of the Hamiltonian in Eq. (6) by systematically varying the parameters \(\epsilon\), \(\lambda\), \(\gamma\), and \(G\) where the yellow BCS x's represent ground states in which the PF Hamiltonian is implemented (i.e., \(\lambda=\gamma=0\)), the blue Lipkin x's represent states in which the Lipkin Hamiltonian is implemented (i.e., \(G=0\)), and where the green FEC x's represent states with character of both PF and Lipkin Hamiltonians. As this figure demonstrates, the largest degree of superconducting character (the largest \(\lambda_{D}\)) is indeed observed in the BCS limit of the FEC Hamiltonian (when \(G>>\epsilon,\ \lambda=\gamma\approx 0\)), and the largest degree of exciton condensate character (the largest \(\lambda_{G}\)) is observed in the Lipkin limit of the FEC Hamiltonian (\(\lambda\approx\gamma>>\epsilon,\ G\approx 0\)). However, neither the BCS nor Lipkin limits of the Hamiltonian is capable of demonstrating a dual fermion-exciton condensate as \(\lambda_{D}\) and \(\lambda_{G}\) only simultaneously exceed the Pauli-like limit of one when the full FEC Hamiltonian from Eq. (5) is implemented including both BCS-like (\(G\)) and Lipkin-like (\(\lambda,\gamma\)) terms. Our model FEC Hamiltonian, however, is capable of demonstrating a wide variety of dual condensate character as a variety of input parameters lead to ground state configurations in which both \(\lambda_{G}\) and \(\lambda_{D}\) simultaneously exceed one. Additionally, the \(\lambda_{D}\) and \(\lambda_{G}\) values obtained by scanning over the Hamiltonian parameters (in Fig. 4a) demonstrate an elliptic nature consistent with the convex nature of 2-RDMs projected onto a two-dimensional space [68; 69; 70] that matches predictions for a FEC that these authors first presented in Ref. [39]. 
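Each point in the scans described above is obtained by diagonalizing a small matrix such as Eq. (6). A minimal sketch of that step for \(N=4\) follows; it is an illustrative addition rather than the authors' code, the parameter sets are representative values for the BCS-like, Lipkin-like, and mixed regimes discussed above, and the script reports only the ground-state class probabilities (the condensation signatures themselves require the RDM machinery of the Appendix).

```python
import numpy as np

def H4(eps, lam, gam, G):
    """5x5 FEC Hamiltonian of Eq. (6) in the class basis
    (|0,2,T>, |2,2,F>, |2,2,T>, |2,0,T>, |4,2,T>)."""
    s2 = np.sqrt(2.0)
    a = (2 * lam - 2 * G) / s2
    return np.array([
        [-2 * eps - 2 * G, -G * s2,          a,            2 * lam,      0.0],
        [-G * s2,          -2 * G + 2 * gam, -2 * G,       0.0,          -G * s2],
        [a,                -2 * G,           -2 * G,       2 * gam * s2, a],
        [2 * lam,           0.0,             2 * gam * s2, 2 * gam,      2 * lam],
        [0.0,              -G * s2,          a,            2 * lam,      2 * eps - 2 * G],
    ])

labels = ["|0,2,T>", "|2,2,F>", "|2,2,T>", "|2,0,T>", "|4,2,T>"]
regimes = [("BCS-like   ", (0.0,  0.0,  0.0, 0.7)),   # (eps, lambda, gamma, G)
           ("Lipkin-like", (0.0, -0.5, -0.5, 0.0)),
           ("mixed (FEC)", (0.0, -0.5, -0.5, 0.7))]
for name, pars in regimes:
    w, v = np.linalg.eigh(H4(*pars))
    gs = v[:, 0]                                    # lowest-energy eigenvector
    probs = ", ".join(f"{l}: {p:.2f}" for l, p in zip(labels, gs**2))
    print(f"{name}  E0 = {w[0]:+.3f}  |  {probs}")
```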
This elliptic boundary as well as the density of points in the zone corresponding to fermion-exciton condensate character indicate that the FEC model Hamiltonian introduced here is capable of spanning the entirety of the FEC region of \(\lambda_{D}\) versus \(\lambda_{G}\) space (i.e., \(\lambda_{D},\lambda_{G}>1\)).

Figure 3: Configurations representing each of the five classes of non-zero basis states for the FEC Hamiltonian for \(N,r\ =\ 4,8\) are shown where each label \(x,y,bool\) represents the number of particles excited to the upper \(N\)-degenerate energy level (\(x\)), the number of BCS-like pairs (\(y\)), and whether the configuration is consistent with the Lipkin model (\(bool\)), where the degeneracy of each class of states is given in parenthesis, and where green, yellow, and blue configurations represent that the corresponding states are consistent with only the Lipkin Hamiltonian, only the Pairing-Force Hamiltonian, or both Lipkin and PF Hamiltonians, respectively.

In Ref. [39], these authors theoretically establish that in the thermodynamic limit, a possible wavefunction demonstrating fermion-exciton condensation can be obtained by entangling wavefunctions that separately demonstrate superconducting character (\(|\Psi_{D}\rangle\) with large \(\lambda_{D}\)) and exciton condensate character (\(|\Psi_{G}\rangle\) with large \(\lambda_{G}\)) according to \[|\Psi_{FEC}\rangle=\frac{1}{\sqrt{2-|\Delta|}}\left(|\Psi_{D}\rangle-\text{sgn}(\Delta)|\Psi_{G}\rangle\right), \tag{8}\] where \(\Delta=2\langle\Psi_{D}|\Psi_{G}\rangle\). In Fig. 5, the occupation probabilities for each of the five classes of basis states consistent with the \(N,r=4,8\) FEC Hamiltonian that contribute to a BCS wavefunction (yellow, \(\epsilon,\lambda,\gamma,G=0,0,0,0.7\), \(\lambda_{D}=1.50\), \(\lambda_{G}=0.67\)), a Lipkin wavefunction (blue, \(\epsilon,\lambda,\gamma,G=0,-0.5,-0.5,0\), \(\lambda_{D}=0.50\), \(\lambda_{G}=2.00\)), and a FEC wavefunction (green, \(\epsilon,\lambda,\gamma,G=0,-0.5,-0.5,0.7\), \(\lambda_{D}=1.31\), \(\lambda_{G}=1.32\)) are given. From this data, it can be observed that the FEC wavefunction does indeed appear to be an entanglement of the individual BCS and Lipkin wavefunctions for the case of \(N=4\); this is consistent with the theoretical result in the thermodynamic limit.

### Higher-Particle FECs

In order to observe trends related to system size, we employ the methodologies used to explore the \(N,r=4,8\) model system and extrapolate to systems composed of \(N=6,8,10\) particles in \(r=12,16,20\) orbitals. Figures summarizing the signatures of superconducting character (\(\lambda_{D}\)) and exciton condensate character (\(\lambda_{G}\)) obtained for the ground state wavefunctions of these larger model Hamiltonians can be seen in Figs. 4b-4d. Similar to the results from the \(N=4\) data, the elliptic fits span the range from the maximal signature of superconducting character observed for the BCS wavefunction to the maximal signature of exciton condensate character for the Lipkin wavefunction, with a large variety of parameters supporting dual fermion-exciton condensation. Note that as the size of the system increases from \(N=6\) to \(N=8\) to \(N=10\), the number of classes of degenerate, non-zero basis states as well as the number of basis states composing each class increase from 8 classes with a total of 44 non-zero basis states, to 14 classes with a total of 230 non-zero basis states, to 20 classes with a total of 1212 non-zero basis states.
As such, the relative sparsity of the computations in \(\lambda_{D}\) versus \(\lambda_{G}\) as system size is increased is due to fewer computations being run with larger increments between each of the parameters as they are varied. To demonstrate how the classes of non-zero basis states vary as system size is increased, Fig. 6--which shows the occupation probabilities for each of the fourteen classes of basis states consistent with the \(N,r=8,16\) FEC Hamiltonian that contribute to a BCS wavefunction (yellow, \(\epsilon,\lambda,\gamma,G=0,0,0.9\), \(\lambda_{D}=2.50\), \(\lambda_{G}=0.57\)), a Lipkin wavefunction (blue, \(\epsilon,\lambda,\gamma,G=0,-0.5,-0.5,0\), \(\lambda_{D}=0.50\), \(\lambda_{G}=4.00\)), and a FEC wavefunction (green, \(\epsilon,\lambda,\gamma,G=0,-0.5,-0.5,0.9\), \(\lambda_{D}=2.06\), \(\lambda_{G}=1.87\))-- is included. Note that due to an increase in the possible complexity, two more quantum numbers are added to describe a few of the classes of basis states; specifically, \(\zeta\) and \(\tau\) are added to \(x,y,bool,\zeta,\tau\) where \(\zeta\) corresponds to the number of times BCS-like pairs are "stacked" into the same site such that orbitals \(2j-1\), \(2j\), \(2j-1+N\), and \(2j+N\) are all occupied and where \(\tau\) corresponds to the number of diagonal configurations in which either \(2j-1/2j+N\) or \(2j-1+N/2j\) are both occupied where \(2j-1\) and \(2j\) are adjacent, BCS-paired orbitals. A few configurations with the necessary quantum numbers specified for \(N=8\) are included in Fig. 7. As can be seen from Fig. 6, the groundstate wavefunction for the \(N=8\) FEC Hamiltonian no longer simply contains elements of the BCS wavefunction and the Lipkin wavefunction naively entangled together. Specifically, while the \(|4,4,F,1,2\rangle\) class of basis states does include BCS-paired particles (see Fig. 7), it does not include the maximal number of BCS-paired particles, which appears to be a necessary condition for non-zero occupation of the ground state for the BCS Hamiltonian. However, this class of basis states can interact with other BCS-like and Lipkin-like classes of basis states. Explicitly, \(|4,4,F,1,2\rangle\) interacts with \(|2,4,F\rangle\) via \(\frac{\lambda}{2}\hat{a}_{p}^{\dagger}\hat{a}_{q}^{\dagger}\hat{a}_{q+N}\hat{ a}_{p+N}\); \(|4,4,F,1\rangle\) via \(\frac{\lambda}{2}\hat{a}_{p}^{\dagger}\hat{a}_{q+N}^{\dagger}\hat{a}_{q}\hat{ a}_{p+N}\); \(|6,4,F\rangle\) via \(\frac{\lambda}{2}\hat{a}_{p+N}^{\dagger}\hat{a}_{q+N}^{\dagger}\hat{a}_{q}\hat{ a}_{p}\); and \(|2,2,T\rangle\) via \(-G\hat{a}_{2j-1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2k}\hat{a}_{2k-1}\), which does further entangle the Lipkin-like configurations and BCS-like configurations in a non-trivial manner. As such, while the interaction between the BCS-like classes of basis states and Lipkin-like classes of basis states in the formation of the FEC ground state wavefunction is not as clear-cut or simple as in the \(N=4\) case, the \(N=8\) FEC wavefunction is still an entanglement of BCS-like and Lipkin-like terms. A representative configuration as well as the relevant quantum numbers for all classes of basis states for the \(N=6\), \(N=8\), and \(N=12\) FEC Hamiltonians is given in the Supplemental Information. 
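The entanglement construction of Eq. (8), against which these ground states are compared, takes only a few lines to implement. The sketch below uses placeholder three-component vectors (not the actual BCS and Lipkin ground states) purely to check that the prefactor restores normalization; real inputs with nonzero overlap are assumed.

```python
import numpy as np

def entangle(psi_D, psi_G):
    """Eq. (8): combine a pair-condensate state psi_D and an exciton-condensate
    state psi_G (same basis, both normalized, real nonzero overlap assumed)."""
    delta = 2.0 * np.dot(psi_D, psi_G)
    return (psi_D - np.sign(delta) * psi_G) / np.sqrt(2.0 - abs(delta))

# Placeholder stand-in vectors, NOT the actual BCS and Lipkin ground states.
psi_D = np.array([1.0, 0.0, 0.0])
psi_G = np.array([0.6, 0.8, 0.0])
psi_FEC = entangle(psi_D, psi_G)
print(np.linalg.norm(psi_FEC))   # 1.0: the prefactor in Eq. (8) restores normalization
```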
## IV Discussion and Conclusions In this study, we introduce a model Hamiltonian that successfully demonstrates the physics associated with both fermion-pair condensation and exciton condensation, as well as encompassing the phase space consisting of systems in which fermion-pair condensation and exciton condensation are simultaneously realized--a phenomenon which we term fermion-exciton condensation (FEC). Applying this model to systems composed of \(N=4,6,8,10\) particles in \(r=2N\) orbitals, we confirm this fermion-exciton condensate character for a wide variety of ground state wavefunctions corresponding to a diverse range of input parameters in the model Hamiltonian, additionally verifying the prediction made in prior investigation [39] that the wavefunction of a fermion-exciton condensate is an entanglement of wavefunctions of exciton condensates and fermion-pair condensates. The introduction of our model Hamiltonian that supports fermion-exciton condensation advances our understanding of the forces and orbital correlations necessary for the experimental construction of FEC states in real-world materials--important insights in the search for real-world materials exhibiting fermion-exciton condensate character. Depending on the interpretation of the Hamiltonian elements, this could have ramifications for fields such as traditional and molecularly-scaled electronics, spin systems, and nuclear physics. Specifically, if the orbitals in the Hamiltonian are interpreted as spin orbitals, fermion-exciton condensates simultaneously demonstrate the condensation of Cooper into a single particle-particle quantum state and the condensation of electron-hole pairs into a single particle-hole quantum state; thus, superfluid Cooper pairs--resulting in superconductivity--and superfluid excitons--which are associated with the dissipationless flow of energy [33; 51]--should both be present to a certain extent in FEC systems, maybe demonstrating some hybridization of the properties of superconductors and exciton condensates, which may be relevant to the fields of energy transport and electronics in both macroscopic materials and molecular-scaled systems. Alternatively, the two Lipkin-like \(N\)-degenerate levels can be interpreted as being representative of specific spin states such that the upper level is spin up and the lower level is spin down or vice versa. This interpretation is most-consistent with \(\epsilon=0\)--which does demonstrate FEC states for a wide variety of input parameters--, although in a magnetic field the different spin states could be separated by some non-zero energy. In this framework, the Lipkin-like terms could represent simultaneous double spin flips that are either aligned (\(\lambda\)) or misaligned (\(\gamma\)), and the pairwise Pairing-Force term could be seen as Figure 4: Plots of \(\lambda_{G}\) versus \(\lambda_{D}\) where parameters in the FEC Hamiltonian are systematically varied are shown for systems involving (a) \(N=4\), (b) \(N=6\), (c) \(N=8\), and (d) \(N=10\) particles in \(r=2N\) orbitals. a favorable interaction between adjacent particles demonstrating the same spin. Moreover, as both particle-particle (consistent with the Pairing-Force Hamiltonian) and particle-hole (consistent with the Lipkin Hamiltonian) are utilized in the field of nuclear physics to display the essential properties of the nuclear interaction [71; 72; 73], we can interpret our FEC Hamiltonian in this framework. 
In this interpretation, the particles being created and annihilated are nucleons such that the Lipkin terms are associated with the interaction of nucleons within a valence shell (\(\gamma\)), the mixing of particle-hole excitations with the valence configurations, and excitations of a nucleon from one valence shell to another having an energetic penalty (\(\epsilon\)) [71; 73]. Additionally, in this interpretation, the PF pairwise interaction is associated with the short-range portion of the nuclear interaction [71; 72]. Overall, this model Hamiltonian is capable of demonstrating a wider array of collective behavior than either the Lipkin or the Pairing-Force models. Such a Hamiltonian will have a vast degree of applications and will be beneficial for the exploration--and for benchmarking computational methodologies for the treatment of--the nontrivial physics of real-world material and chemical systems. **Author contributions.** L. M. and D. M. conceived of the project, developed the theoretical framework, designed the computations, wrote the code, performed the computations, analyzed the results, and wrote the paper. **Acknowledgments**: D.A.M. gratefully acknowledges the U.S. National Science Foundation Grants No. CHE-1565638, No. CHE-2035876, and No. DMR-2037783 and the Department of Energy, Office of Basic Energy Sciences, Grant DE-SC0019215. **Data availability.** Data will be made available upon reasonable request. **Code availability.** Code will be made available on a public Github repository upon publication. Figure 5: The probabilities corresponding to each of the five classes of basis states (see Fig. 3) consistent with the FEC Hamiltonian for \(N,r=4,8\) are shown where green, yellow, and blue bars correspond to the lowest eigenstate of the Lipkin Hamiltonian, the Pairing-Force Hamiltonian, and FEC Hamiltonian, respectively. Figure 6: The probabilities corresponding to each of the fourteen classes of basis states consistent with the FEC Hamiltonian for \(N,r=8,16\) are shown where green, yellow, and blue bars correspond to the lowest eigenstate of the Lipkin Hamiltonian, the Pairing-Force Hamiltonian, and FEC Hamiltonian, respectively. Each label \(x,y,bool,\zeta,\tau\) represents the number of particles excited to the upper \(N\)-degenerate energy level (\(x\)), the number of BCS-like pairs (\(y\)), whether the configuration is consistent with the Lipkin model (\(bool\)), the number of times BCS-like pairs are “stacked” into the same site (\(\zeta\)), and the number of times a diagonal configuration occur in which either \(2j-1/2j+N\) or \(2j-1+N/2j\) are simultaneously occupied where \(2j-1\) and \(2j\) are adjacent, paired orbitals (\(\tau\)). These values act as quantum numbers that define the degenerate classes of non-zero basis functions composing the ground state to the FEC Hamiltonian. Figure 7: Configurations representing how the Lipkin-like double excitation term (\(\lambda\)) and scattering term (\(\gamma\)) in the FEC Hamiltonian relate the \(|4,4,F,1,2\rangle\) basis state for \(N,r=8,16\) to BCS-like basis states. ## Appendix A Determination of Signatures of Condensation To determine the largest eigenvalue of the particle-particle RDM (\({}^{2}D\), see Eq. 
(1)--i.e., \(\lambda_{D}\), the signature of superconducting character--, only the following \(N\times N\) subblock of the full 2-RDM containing the large eigenvalue must be computed and diagonalized [50; 74; 75] \[\begin{array}{c|cccc}&\hat{a}_{0}\hat{a}_{1}&\hat{a}_{2}\hat{a}_{3}&\cdots& \hat{a}_{r-2}\hat{a}_{r-1}\\ \hline\hat{a}_{0}^{\dagger}\hat{a}_{1}^{\dagger}&\hat{a}_{0}^{\dagger}\hat{a}_ {1}\hat{a}_{0}\hat{a}_{1}&\hat{a}_{0}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_ {2}\hat{a}_{3}&\cdots&\hat{a}_{0}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_{r-2} \hat{a}_{r-1}\\ \hat{a}_{2}^{\dagger}\hat{a}_{3}^{\dagger}&\hat{a}_{2}^{\dagger}\hat{a}_{3} \hat{a}_{0}\hat{a}_{1}&\hat{a}_{2}^{\dagger}\hat{a}_{3}^{\dagger}\hat{a}_{2} \hat{a}_{3}&\cdots&\hat{a}_{2}^{\dagger}\hat{a}_{3}^{\dagger}\hat{a}_{r-2}\hat {a}_{r-1}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ \hat{a}_{r-2}^{\dagger}\hat{a}_{r-1}^{\dagger}&\hat{a}_{r-2}^{\dagger}\hat{a}_ {r-1}^{\dagger}\hat{a}_{0}\hat{a}_{1}&\hat{a}_{r-2}^{\dagger}\hat{a}_{r-1}^{ \dagger}\hat{a}_{2}\hat{a}_{3}&\cdots&\hat{a}_{r-2}^{\dagger}\hat{a}_{r-1}^{ \dagger}\hat{a}_{r-2}\hat{a}_{r-1}\end{array} \tag{10}\] where, again, \(\hat{a}_{i}^{\dagger}\) and \(\hat{a}_{i}\) are to creation and annihilation operators corresponding to the orbital with index \(i\). Each element of this subblock of the 2-RDM is the expectation value \(\langle\Psi|\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2k}\hat{a}_ {2k-1}|\Psi\rangle\) obtained by programmatically applying the appropriate creation and annihilation operators to each pair of non-zero basis states composing the previously-obtained ground state wavefunction of the Hamiltonian. As an example, for the \(N,r=4\) computations, there are ten non-zero basis elements composing five distinct classes \((|0,2,T),|2,2,F),|2,2,T),|2,0,T),|4,2,T)) that are used to construct the Hamiltonian (see the Result section). The ground-state wavefunction is obtained in terms of these classes with a structure given by \[|\Psi\rangle=v_{0,2,T}|0,2,T\rangle+v_{2,2,F}|2,2,F\rangle+v_{2,2, T}|2,2,T\rangle\] \[+v_{2,0,T}|2,0,T\rangle+v_{4,2,T}|4,2,T\rangle \tag{11}\] where each of the classes is a weighted linear combination of the basis states composing it, i.e, \[|2,0,T\rangle=\frac{|1,3,6,8\rangle+|1,4,6,7\rangle+|2,3,5,8\rangle+|2,4,5,7 \rangle}{\sqrt{4}} \tag{12}\] Thus, \(\langle\Psi|\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2k}\hat{a} _{2k-1}|\Psi\rangle\) is a sum of all expectation values of the form \[v_{c_{1}}v_{c_{2}}\langle c_{1}|\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger} \hat{a}_{2k}\hat{a}_{2k-1}|c_{2}\rangle \tag{13}\] where \(c_{1}\) and \(c_{2}\) refer to each of the distinct classes of non-zero basis states and where these expectation values are sums over \[\frac{v_{b_{1}}v_{b_{2}}}{N(c_{b_{1}})N(c_{b_{2}})}\langle b_{1}|\hat{a}_{2j-1 }^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2k}\hat{a}_{2k-1}|b_{2}\rangle \tag{14}\] where \(b_{1}\) and \(b_{2}\) are the basis states composing each class, where \(N(c_{b_{1}})\) refers to the size of the class to which basis \(b_{1}\) belongs, and where all possible combinations of basis states are analyzed. 
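A minimal sketch of this bookkeeping is given below: fermionic creation and annihilation operators are applied to occupation-number configurations with explicit sign tracking, and the pair subblock is accumulated over all configuration pairs. The amplitudes are placeholders (an equal superposition of the fully paired configurations for \(N,r=4,8\)), not ground states of the FEC Hamiltonian; the sketch is illustrative rather than the authors' implementation.

```python
import numpy as np
from itertools import combinations

def apply_ops(occ, ops):
    """Apply a product of fermionic operators (given left to right as
    (orbital, is_creation) tuples, so the rightmost acts first) to a
    configuration stored as a sorted tuple of occupied orbitals.
    Returns (sign, new configuration) or None if the result vanishes."""
    occ, sign = set(occ), 1
    for i, create in reversed(ops):
        if create == (i in occ):          # creating on occupied / destroying empty
            return None
        sign *= (-1) ** sum(1 for j in occ if j < i)
        occ = occ | {i} if create else occ - {i}
    return sign, tuple(sorted(occ))

def pair_subblock(amps, n_pairs):
    """D[j, k] = <Psi| a^+_{2j} a^+_{2j+1} a_{2k+1} a_{2k} |Psi> (0-indexed orbitals)."""
    D = np.zeros((n_pairs, n_pairs))
    for j in range(n_pairs):
        for k in range(n_pairs):
            ops = [(2 * j, True), (2 * j + 1, True), (2 * k + 1, False), (2 * k, False)]
            for occ, c in amps.items():
                res = apply_ops(occ, ops)
                if res is not None:
                    sign, new_occ = res
                    D[j, k] += sign * c * amps.get(new_occ, 0.0)
    return D

# Placeholder amplitudes: equal superposition of the fully paired configurations
# of N = 4 electrons in r = 8 orbitals, with adjacent orbitals (0,1), (2,3), ... paired.
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
configs = [tuple(sorted(pairs[i] + pairs[j])) for i, j in combinations(range(4), 2)]
amps = {cfg: 1.0 / np.sqrt(len(configs)) for cfg in configs}
print(np.linalg.eigvalsh(pair_subblock(amps, 4)).max())   # 1.5 for this paired state
```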
Note that only \(\epsilon=0\) calculations were run for the \(N,r=10,20\) scan such that site symmetry allowed the entire matrix to be constructed from three distinct types of elements, which lowered computational expense; these element types are as follows: \(\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2j}\hat{a}_{2j-1}\), \(\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2k}\hat{a}_{2k-1}\), and \(\hat{a}_{2j-1}^{\dagger}\hat{a}_{2j}^{\dagger}\hat{a}_{2j}\hat{a}_{2j+N}\hat{a }_{2j-1\pm N}\). The signature of superconductivity (\(\lambda_{D}\)) is then computed from the \(N\times N\) subblock of the 2-RDM according to the eigenvalue equation \[{}^{2}Dv_{D}^{i}=\epsilon_{D}^{i}v_{D}^{i} \tag{15}\] with the signature corresponding the largest eigenvalue (the maximum \(\epsilon_{D}^{i}\)). The portion of the particle-hole RDM (\({}^{2}G\)) associated with a large eigenvalue is composed of sub-matrices of the form \[\begin{array}{c|cccc}&\hat{a}_{1}^{\dagger}\hat{a}_{q}&\hat{a}_{q+N}^{ \dagger}\hat{a}_{q}&\hat{a}_{q}^{\dagger}\hat{a}_{q+N}&\hat{a}_{q+N}^{\dagger }\hat{a}_{q+N}\\ \hline\hat{a}_{p}^{\dagger}\hat{a}_{p}&\hat{a}_{p}^{\dagger}\hat{a}_{q}^{ \dagger}\hat{a}_{q}&\hat{a}_{p}^{\dagger}\hat{a}_{p}^{\dagger}\hat{a}_{q+N}^{ \dagger}\hat{a}_{q}&\hat{a}_{p}^{\dagger}\hat{a}_{p}\hat{a}_{q}^{\dagger}\hat{a} _{q+N}&\hat{a}_{p}^{\dagger}\hat{a}_{p}\hat{a}_{q+N}^{\dagger}\hat{a}_{q+N}\\ \hat{a}_{p}^{\dagger}\hat{a}_{p+N}&\hat{a}_{p}^{\dagger}\hat{a}_{p+N}\hat{a}_ {q}^{\dagger}\hat{a}_{q}&\hat{a}_{p}^{\dagger}\hat{a}_{p+N}\hat{a}_{q+N}^{ \dagger}\hat{a}_{q}&\hat{a}_{p}^{\dagger}\hat{a}_{p+N}\hat{a}_{q}^{\dagger} \hat{a}_{q+N}&\hat{a}_{p}^{\dagger}\hat{a}_{p+N}\hat{a}_{q+N}^{\dagger}\hat{a}_ {q+N}\\ \hat{a}_{p+N}^{\dagger}\hat{a}_{p}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p}\hat{a}_ {q}^{\dagger}\hat{a}_{q}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p}\hat{a}_{q+N}^{ \dagger}\hat{a}_{q}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p}\hat{a}_{q}^{\dagger}\hat{a} _{q+N}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p}\hat{a}_{q+N}^{\dagger}\hat{a}_{q+N} \\ \hat{a}_{p+N}^{\dagger}\hat{a}_{p+N}\hat{a}_{p+N}\hat{a}_{p+N}\hat{a}_{q}^{ \dagger}\hat{a}_{q}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p+N}\hat{a}_{q+N}^{ \dagger}\hat{a}_{q}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p+N}\hat{a}_{q+N}^{ \dagger}\hat{a}_{q+N}&\hat{a}_{p+N}^{\dagger}\hat{a}_{p+N}\hat{a}_{q+N}^{ \dagger}\hat{a}_{q+N}.\end{array} \tag{16}\] tiled in the following manner: \[\begin{array}{|c|c|c|c|}\hline p=0,q=0&p=0,q=1&\cdots&p=0,q=\frac{N}{2}-1\\ \hline p=1,q=0&p=1,q=1&\cdots&p=1,q=\frac{N}{2}-1\\ \hline\vdots&\vdots&\ddots&\vdots\\ \hline p=\frac{N}{2}-1,q=0&p=\frac{N}{2}-1,q=1&\cdots&p=\frac{N}{2}-1,q=\frac{N} {2}-1\\ \hline\end{array} \tag{10}\] In order to remove the ground-state-to-ground-state transition (to form the modified particle-hole RDM, \({}^{2}\tilde{G}\), see Eq. 
(3)), \[\begin{array}{c|c|c|c|c|}&\hat{a}_{q}^{\dagger}\hat{a}_{q}&\hat{a}_{q+N}^{ \dagger}\hat{a}_{q}&\hat{a}_{q}^{\dagger}\hat{a}_{q+N}&\hat{a}_{p+N}^{\dagger} \hat{a}_{p+N}\\ \hline\hat{a}_{p}^{\dagger}\hat{a}_{p}&{}^{1}D_{p}[0,0]^{1}D_{q}[0,0]^{1}D_{ p}[0,1]^{1}&{}^{1}D_{p}[0,0]^{1}D_{q}[1,0]&{}^{1}D_{p}[0,0]^{1}D_{q}[1,1]\\ \hat{a}_{p}^{\dagger}\hat{a}_{p+N}^{\dagger}\hat{a}_{p}&{}^{1}D_{p}[0,1]^{1}D _{q}[0,1]&{}^{1}D_{p}[0,1]^{1}D_{q}[1,0]&{}^{1}D_{p}[0,1]^{1}D_{q}[1,1]\\ \hat{a}_{p+N}^{\dagger}\hat{a}_{p}&{}^{1}D_{p}[1,0]^{1}D_{q}[0,0]&{}^{1}D_{p}[ 1,0]^{1}D_{q}[0,1]&{}^{1}D_{p}[1,0]^{1}D_{q}[1,1]\\ \hat{a}_{p+N}^{\dagger}\hat{a}_{p+N}&{}^{1}D_{p}[1,1]^{1}D_{q}[0,0]&{}^{1}D_{p }[1,1]^{1}D_{q}[0,1]&{}^{1}D_{p}[1,1]^{1}D_{q}[1,1]\\ \end{array}\] is subtracted off from each segment defined by \(p\) and \(q\) where the one-particle density matrix (\({}^{1}D\)) is given by \[{}^{2}\tilde{G}v_{G}^{i}=\epsilon_{G}^{i}v_{G}^{i} \tag{11}\] with the signature corresponding the largest eigenvalue (the maximum \(\epsilon_{G}^{i}\)). Again, for the \(N,r=10,20,\ \epsilon=0\) calculations, site symmetry was utilized to decrease computational expense. Only sub-matrices corresponding to diagonal sub-matrices \(p=q\), sub-matrices for BCS-paired orbitals \(p=2j-1,\ q=2j\), and for unpaired orbitals \(p=2j-1,\ q\neq p\neq 2j\) needed to be computed. ## Appendix B Plastino's Model In literature that dates back to the 1960s and continues to this day, Plastino and coworkers [65; 66; 67] explore a model Hamiltonian that adds a pairing-force term to the Lipkin model in the context of nuclear physics. Introducing the Plastino pairing-force term to the Lipkin Hamiltonian from Eq. (4)--which allows for slightly more flexibility than the formulation given in the Plastino literature as that literature is concerned only with the double excitation/de-excitation (\(\lambda\)) term and omits the scattering term (\(\gamma\))--yields the following model Hamiltonian: \[\mathcal{H}_{P}=-\frac{\epsilon}{2}\sum_{i=1}^{N}\hat{a}_{i}^{ \dagger}\hat{a}_{i}+\frac{\epsilon}{2}\sum_{i=1}^{N}\hat{a}_{i+N}^{\dagger} \hat{a}_{i+N}\] \[+\frac{\lambda}{2}\sum_{p=1}^{N}\sum_{q=1}^{3}\hat{a}_{p}^{ \dagger}\hat{a}_{q}^{\dagger}\hat{a}_{q+N}\hat{a}_{p+N}+\frac{\lambda}{2}\sum_ {p=1}^{N}\sum_{q=1}^{3}\hat{a}_{p+N}^{\dagger}\hat{a}_{q+N}^{\dagger}\hat{a}_{ q}\hat{a}_{p}\] \[+\frac{\gamma}{2}\sum_{p=1}^{N}\sum_{q=1}^{3}\hat{a}_{p+N}^{ \dagger}\hat{a}_{q}^{\dagger}\hat{a}_{q+N}\hat{a}_{p}+\frac{\gamma}{2}\sum_ {p=1}^{N}\sum_{q=1}^{3}\hat{a}_{p}^{\dagger}\hat{a}_{q+N}^{\dagger}\hat{a}_{q} \hat{a}_{p+N}\] \[-G\sum_{p=1}^{N}\sum_{q=1}^{N}\hat{a}_{p+N}^{ \dagger}\hat{a}_{p}^{\dagger}\hat{a}_{q}\hat{a}_{q+N} \tag{12}\] While the form of this Hamiltonian is similar to the one we introduce in Eq. (5), the difference is the orbitals which the pairing-force term (\(G\)) causes to be correlated in Cooper-like pairs. Specifically, while our model Hamiltonian pairs adjacent qubits (see Fig. 2), the Plastino Hamiltonian pairs orbitals with on the same Lipkin-like cite in different layers (i.e., stacked orbitals \(p\) and \(p+N\)). 
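The structural difference can be made explicit by simply listing which orbital pairs the pairing-force term couples in each model; the tiny illustration below (for \(N=4\), using the 1-indexed orbital labels of the text) shows that in the Plastino Hamiltonian the \(G\) term acts on exactly the orbital pairs already coupled by the Lipkin \(\lambda\) and \(\gamma\) terms, whereas in the model introduced here it acts on disjoint, adjacent pairs.

```python
N = 4
ours     = [(2 * j - 1, 2 * j) for j in range(1, N + 1)]   # adjacent orbitals
plastino = [(p, p + N) for p in range(1, N + 1)]           # stacked Lipkin-site orbitals
print("pairing-force couples (this work):", ours)       # [(1, 2), (3, 4), (5, 6), (7, 8)]
print("pairing-force couples (Plastino): ", plastino)   # [(1, 5), (2, 6), (3, 7), (4, 8)]
```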
In order to determine whether the Plastino Hamiltonian is capable of probing fermion-exciton condensate character--where \(\lambda_{D}\) and \(\lambda_{G}\) simultaneously exceed the Pauli-like limit of one and hence character of both fermion-pair condensation and exciton condensation is observed in a single quantum state--a systematic scan over the input parameters of the Hamiltonian (\(\epsilon,\lambda,\gamma,G\)) is conducted. As can be seen in Fig. 8--where the blue pluses represent the Lipkin model Hamiltonian, the yellow pluses represent the PF BCS-like Hamiltonian, and the green x's represent the Plastino Hamiltonian--although Plastino's Hamiltonian is capable of reproducing all Lipkin states accessible by the Lipkin model as well as states that demonstrate fermion-pair condensation, no dual condensate character is observed from the Plastino model, as the region in which both \(\lambda_{D}\) and \(\lambda_{G}\) exceed one is not probed within this model. In fact, as noted in Ref., there is direct competition between the particle-hole and particle-particle pairing between Lipkin-like sites, which results in each type of pairing "driving" the system toward radically different states, with the magnitudes of the coupling constants causing a transition between the Lipkin-like and BCS-like states favored by the different interactions. Conversely, because the particle-particle and particle-hole pairing in the model we introduce do not occur between the same orbitals, they can coexist, allowing for a much larger possible range of \(\lambda_{D}\) versus \(\lambda_{G}\), including the region demonstrating a fermion-exciton condensate.
Fermion-exciton condensation, in which fermion-pair condensation (i.e., superconductivity) and exciton condensation occur simultaneously in a single coherent quantum state, has recently been conjectured to exist. Here, we capture fermion-exciton condensation through a model Hamiltonian that can recreate the physics of this new class of highly correlated condensation phenomena. The Hamiltonian generates the large-eigenvalue signatures of fermion-pair and exciton condensations for a series of states with increasing particle number. The results confirm that the dual-condensate wave function arises from the entanglement of fermion-pair and exciton wave functions, which we previously predicted in the thermodynamic limit. This model Hamiltonian, which generalizes well-known model Hamiltonians for either superconductivity or exciton condensation, can explore a wide variety of condensation behavior.
2310.00131
Event-Triggered Control of Neuron Growth with Actuation at Soma
We introduce a dynamic event-triggering mechanism for regulating the axonal growth of a neuron. We apply boundary actuation at the soma (the part of a neuron that contains the nucleus) and regulate the dynamics of tubulin concentration and axon length. The control law is formulated by applying a Zero-Order Hold (ZOH) to a continuous-time controller which guides the axon to reach the desired length. The proposed dynamic event-triggering mechanism determines the specific time instants at which control inputs are sampled from the continuous-time control law. We establish the existence of a minimum dwell-time between two triggering times that ensures avoidance of Zeno behavior. Through employing the Lyapunov analysis with PDE backstepping, we prove the local stability of the closed-loop system in $L_2$-norm, initially for the target system, and subsequently for the original system. The effectiveness of the proposed method is showcased through numerical simulations.
Cenk Demir, Shumon Koga, Miroslav Krstic
2023-09-29T20:47:52
http://arxiv.org/abs/2310.00131v2
# Event-Triggered Control of Neuron Growth ###### Abstract We introduce a dynamic event-triggering mechanism for regulating the axonal growth of a neuron. We apply boundary actuation at the soma (the part of a neuron that contains the nucleus) and regulate the dynamics of tubulin concentration and axon length. The control law is formulated by applying a Zero-Order Hold (ZOH) to a continuous-time controller which guides the axon to reach the desired length. The proposed dynamic event-triggering mechanism determines the specific time instants at which control inputs are sampled from the continuous-time control law. We establish the existence of a minimum dwell-time between two triggering times that ensures avoidance of Zeno behavior. Through employing the Lyapunov analysis with PDE backstepping, we prove the local stability of the closed-loop system in \(\mathcal{H}_{1}\)-norm, initially for the target system, and subsequently for the original system. The effectiveness of the proposed method is showcased through numerical simulations. ## I Introduction Recent advancements in neuroscience draw from various disciplines such as mathematics, physics, and engineering [16, 17, 19]. These fields are crucial for understanding the structure and functioning of neurons in the nervous system and addressing neurological issues. One major challenge in this context is the growth of axons, which are similar to wires and are constructed through the assembly of tubulin proteins. Axons function as connectors between neurons for transmitting electrical signals. Some neurological diseases, such as Alzheimer's disease [27] and spinal cord injuries [26], can damage axons by impeding the assembly process of tubulin proteins, leading to halted growth or degeneration. Researchers are developing new therapies to treat these diseases. One promising therapy is called ChABC which involves injecting a bacterial enzyme that digests the axon growth inhibitors [2]. Following this therapy, axon growth can be sustained [18]. However, it's important to note that ChABC has a limitation: ChABC requires repeated injections of the bacterial enzyme since it rapidly loses its activity at \(37^{\circ}\)C [25]. To enhance the effectiveness of this therapy, the amount of enzymes required to achieve the desired axon length and the intervals for these repeated injections must be identified. Studying the behavior of tubulin proteins can help achieve the desired axon length. For this reason, numerous mathematical models have employed Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs) to clarify tubulin behavior [28, 29, 36]. Authors of [7] model the axon growth process as a coupled PDE-ODE with a moving boundary, akin to the Stefan problem, effectively describing the associated physical phenomena. In this model, the PDE represents tubulin concentration's evolution along the axon, and the ODEs describe both the evolution of axon length and tubulin concentration at the growth cone. Given that this model captures this critical information about axon growth, it is worthwhile to consider designing a controller to regulate tubulin concentration and axon length. Over the past two decades, researchers have developed a technique. known as boundary control of PDE systems by PDE backstepping, to regulate PDEs by defining control laws at their boundaries [24]. Following the development of this technique, boundary control was expanded to the class of coupled PDE-ODE systems [23, 34, 35]. 
While the majority of these contributions are typically assumed to have a constant domain size over time, some researchers focused on a moving domain over time specifically addressing the Stefan problem, as discussed in [8, 15, 30]. Besides these studies, backstepping-based control techniques are constructed for Stefan problem in [22] with global stability results. Following this progress, several works have proposed local stability results for nonlinear hyperbolic PDEs, as seen in [3, 37]. We achieved local stability results for the first time for nonlinear parabolic PDEs with a moving boundary for the axon growth problem in our previous works [4, 6], and with input delay in [5]. While the control designs mentioned above operate in continuous time, certain technologies necessitate the performance of control actions only when necessary due to constraints on energy, communication, and computation [13]. To deal with this problem, an event-triggered control strategy is proposed for PID controllers in [1], and for state feedback and output feedback controllers for linear and nonlinear time-invariant systems in [14] and [20]. Authors of [31] ensured asymptotic stability for a closed-loop system with state feedback control laws by employing an event-triggering mechanism, characterizing it as a hybrid system. This characterization caned the constraints associated with the event-triggering mechanism and this relaxation is detailed in [12] as a dynamic triggering approach. In addition to its application in ODE systems, the authors of [9] successfully applied an event-triggered mechanism to boundary control hyperbolic PDE systems. This innovative approach paved the way for the utilization of event-triggered boundary control in reaction-diffusion PDEs as demonstrated in [10]. For Stefan problem, both static and dynamic event-triggered boundary control laws were developed by the authors of [32] and [33]. Furthermore, an event-triggering mechanism was employed to transition between safety utilizing CBFs and stability for Stefan problem with actuator dynamics as discussed in [21]. In this paper, we introduce a novel dynamic event-triggering mechanism for the axon growth problem which consists of a coupled reaction-diffusion-advection PDE and nonlinear ODEs with a moving boundary. With this dynamic event-triggering mechanism, we aim to address the key question around appropriate time intervals for administering therapy. The contributions of this paper include (i) designing a control law for neuron growth with Dirichlet boundary actuation, (ii) developing a dynamic event-triggering mechanism for coupled reaction-diffusion-advection PDEs and nonlinear ODEs with a moving boundary, (iii) analyzing Zeno behavior avoidance, (iv) demonstrating local stability for the closed-loop system. Indeed, this work is pioneering in event-triggering boundary control for axon growth and marks the first local stability analysis using event-triggering mechanisms for PDE systems. ## II Modeling of Axon Growth In this section, we present the mathematical model governing axon growth. This model includes a coupled system of PDEs and ODEs, featuring a dynamic boundary that describes tubulin behavior along the axon and axon growth. We also introduce the steady-state solution for a target axon length and a reference error system. 
### _Axon growth model by a moving boundary PDE_ The evolution of tubulin along the axon serves as the primary catalyst for the axon growth process, and to understand this process, we rely on two assumptions to create a mathematical model which are described in our previous work [4]. Thus, the axonal growth can be modeled as \[c_{t}(x,t)= Dc_{xx}(x,t)-ac_{x}(x,t)-gc(x,t), \tag{1}\] \[c(0,t)= -q_{\text{s}}(t),\] (2) \[c(l(t),t)= c_{\text{c}}(t),\] (3) \[l_{\text{c}}\dot{c}_{\text{c}}(t)= (a-gl_{\text{c}})c_{\text{c}}(t)-Dc_{x}(l(t),t)\] \[-(r_{\text{g}}c_{\text{c}}(t)+\tilde{r}_{\text{g}}l_{\text{c}})( c_{\text{c}}(t)-c_{\infty}),\] (4) \[\dot{l}(t)= r_{\text{g}}(c_{\text{c}}(t)-c_{\infty}), \tag{5}\] In this model, the PDE state \(c(x,t)\) represents tubulin concentration within the axon. ODE states include \(c_{\text{c}}(t)\) for tubulin concentration in the growth cone, \(l(t)\) for axon length, and \(q_{\text{s}}(t)\) for tubulin concentration in the soma. Tubulin proteins move along the axon at a rate \(a\) and degrade at a constant rate \(g\). The diffusivity constant in (1) is represented by \(D\). Axonal growth stops when the tubulin concentration in the cone reaches equilibrium, denoted as \(c_{\infty}\). The other parameters in this model are explained with details in our previous work [4] and [6]. ### _Steady-state solution_ For a desired axon length, \(l_{s}\), we first derive a steady-state solution of the concentration. The steady-state solution of (1)-(5) is obtained as follows \[c_{\text{eq}}(x)=c_{\infty}\left(K_{+}e^{\lambda_{+}(x-l_{\text{s}})}+K_{-}e^ {\lambda_{-}(x-l_{\text{s}})}\right), \tag{6}\] where \[\lambda_{+}= \frac{a}{2D}+\frac{\sqrt{a^{2}+4Dg}}{2D},\ \lambda_{-}=\frac{a}{2D}-\frac{ \sqrt{a^{2}+4Dg}}{2D}, \tag{7}\] \[K_{+}= \frac{1}{2}+\frac{a-2gl_{\text{c}}}{2\sqrt{a^{2}+4Dg}},\ \ K_{-}=\frac{1}{2}-\frac{a-2gl_{\text{c}}}{2\sqrt{a^{2}+4Dg}}. \tag{8}\] We obtain the steady-state input for the concentration in the soma as \[q_{\text{s}}^{*}=-c_{\infty}\left(K_{+}e^{-\lambda_{+}l_{\text{s}}}+K_{-}e^{- \lambda_{-}l_{\text{s}}}\right). \tag{9}\] ### _Reference error system_ Let us consider the following reference error states \[u(x,t)=c(x,t)-c_{\text{eq}}(x), \tag{10}\] \[z_{1}(t)=c_{\text{c}}(t)-c_{\infty},\quad z_{2}(t)=l(t)-l_{\text {s}},\] (11) \[U(t)=-(q_{\text{s}}(t)-q_{\text{s}}^{*}). \tag{12}\] where \(U(t)\) is the reference error input. Utilizing (10)-(12), (6) and (9) in the governing equations (1)-(5), we derive the reference error system as \[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{13}\] \[u(0,t)= U(t),\] (14) \[u(l(t),t)= c_{\text{c}}(t)-c_{\text{eq}}(l(t)),\] (15) \[\dot{z}_{1}(t)= \tilde{a}_{1}z_{1}(t)-\beta u_{x}(l(t),t)-\kappa z_{1}(t)^{2}+ \beta f_{1}(z_{2}(t))\] \[-\beta\tilde{a}_{2}z_{2}(t),\] (16) \[\dot{z}_{2}(t)= r_{\text{g}}z_{1}(t), \tag{17}\] where the constants in (13)-(17) are \[\tilde{a}_{1} =\frac{a-r_{\text{g}}c_{\infty}}{l_{\text{c}}}-g-\tilde{r}_{ \text{g}},\quad\beta=\frac{D}{l_{\text{c}}}, \tag{18}\] \[\tilde{a}_{2} =c_{\infty}\left(\lambda_{+}^{2}K_{+}+\lambda_{-}^{2}K_{-}\right), \quad\kappa=\frac{r_{\text{g}}}{l_{\text{c}}},\] (19) \[f_{1}(z_{2}(t))= -c_{\infty}\left(K_{+}\lambda_{+}e^{\lambda_{+}z_{2}(t)}+K_{-} \lambda_{-}e^{\lambda_{-}z_{2}(t)}\right)\] \[+\tilde{a}_{2}z_{2}(t)+c_{\infty}\frac{a-gl_{\text{c}}}{D}. \tag{20}\] The ODEs can be written using the state vector \(X(t)\in\mathbb{R}^{2}\) as \(X(t)=[z_{1}(t)\quad z_{2}(t)]^{\top}\). 
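As a quick numerical illustration, the steady-state profile in (6)-(9) can be evaluated directly. The sketch below uses placeholder parameter values (not the values used in the simulations of this paper) and checks that \(c_{\text{eq}}(l_{s})=c_{\infty}\) and \(c_{\text{eq}}(0)=-q_{\text{s}}^{*}\), consistent with the boundary conditions (2)-(3) at equilibrium.

```python
import numpy as np

# Placeholder parameter values, chosen only to illustrate the computation.
D, a, g = 1.0, 0.1, 0.05            # diffusivity, advection velocity, degradation rate
l_c, c_inf, l_s = 0.1, 1.0, 2.0     # cone length scale, equilibrium concentration, target length

# Eqs. (7)-(8): exponents and weights of the steady-state profile.
root = np.sqrt(a**2 + 4.0 * D * g)
lam_p, lam_m = (a + root) / (2.0 * D), (a - root) / (2.0 * D)
K_p = 0.5 + (a - 2.0 * g * l_c) / (2.0 * root)
K_m = 0.5 - (a - 2.0 * g * l_c) / (2.0 * root)

# Eq. (6): steady-state tubulin profile; Eq. (9): steady-state soma input.
x = np.linspace(0.0, l_s, 201)
c_eq = c_inf * (K_p * np.exp(lam_p * (x - l_s)) + K_m * np.exp(lam_m * (x - l_s)))
q_s_star = -c_inf * (K_p * np.exp(-lam_p * l_s) + K_m * np.exp(-lam_m * l_s))

print(f"c_eq(l_s) = {c_eq[-1]:.4f}  (equals c_inf = {c_inf})")
print(f"c_eq(0)   = {c_eq[0]:.4f}  and  -q_s* = {-q_s_star:.4f}")
```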
The system (15)-(17) simplifies \[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{21}\] \[u(0,t)= U(t),\] (22) \[u(l(t),t)= h(X(t)),\] (23) \[\dot{X}(t)= AX(t)+f(X(t))+Bu_{x}(l(t),t), \tag{24}\] Fig. 1: Schematic of neuron and state variables where \[A= \left[\begin{array}{cc}\tilde{a}&-\beta\tilde{a}_{2}\\ r_{\rm g}&0\end{array}\right],\quad B=\left[\begin{array}{c}-\beta\\ 0\end{array}\right], \tag{25}\] \[f(X(t))= -\kappa z_{1}(t)^{2}+\beta f_{1}(z_{2}(t)),\] (26) \[h(X(t))= e_{1}X(t)+\tilde{h}(e_{2}X(t)),\] (27) \[\tilde{h}(z_{2}(t))= c_{\infty}\left(1-K_{+}e^{\lambda_{+}z_{2}(t)}-K_{-}e^{ \lambda_{-}z_{2}(t)}\right). \tag{28}\] ## III Continuous-time and Sample-based Control Design First, we linearize nonlinear ODEs in (24) around zero states as \[u_{t}(x,t)= Du_{xx}(x,t)-au(x,t)-gu(x,t), \tag{29}\] \[u_{x}(0,t)= U(t),\] (30) \[u(l(t),t)= H^{\top}X(t),\] (31) \[\dot{X}(t)= A_{1}X(t)+Bu_{x}(l(t),t), \tag{32}\] where the vector \(H\in\mathbb{R}^{2}\) is defined as \[A_{1}=\left[\begin{array}{cc}\tilde{a}_{1}&\tilde{a}_{3}\\ r_{\rm g}&0\end{array}\right],\ H=\left[\begin{array}{cc}1&-\frac{(a-gl_{c})c _{\infty}}{D}\end{array}\right]^{\top}, \tag{33}\] where \(\tilde{a}_{3}=\frac{a^{2}+Dg-agl_{c}}{D^{2}}\). In this paper, our continuous-time control design relies on a backstepping transformation, as outlined in [4]. This transformation maps the linear reference error system \((u,X)\) to a corresponding nonlinear target system \((w,X)\) by utilizing the following backstepping transformation. \[w(x,t)= u(x,t)-\int_{x}^{l(t)}k(x,y)u(y,t)dy\] \[-\phi(x-l(t))^{\top}X(t), \tag{34}\] \[u(x,t)= w(x,t)+\int_{x}^{l(t)}q(x,y)w(y,t)dy\] \[+\varphi(x-l(t))^{\top}X(t), \tag{35}\] where \(k(x,y)\in\mathbb{R}\) and \(\phi(x-l(t))\in\mathbb{R}^{2}\) are the gain kernel functions are explicitly described in [4]. We suppose the desired target system as \[w_{t}(x,t)= Dw_{xx}(x,t)-aw_{x}(x,t)-gw(x,t)\] \[-\dot{l}(t)F(x,X(t)), \tag{36}\] \[w(0,t)= 0,\] (37) \[w(l(t),t)= 0,\] (38) \[\dot{X}(t)= (A_{1}+BK^{\top})X(t)+Bw_{x}(l(t),t), \tag{39}\] and \(K\in\mathbb{R}^{2}\) is chosen to ensure the stability of \(A+BK\) such that it is Hurwitz, satisfying \[k_{1}>\frac{\tilde{a}_{1}}{\beta},\quad k_{2}>\frac{\tilde{a}_{3}}{\beta}. \tag{40}\] Furthermore, we describe the redundant nonlinear term \(F(x,X(t))\in\mathbb{R}\) in (36), arising from the moving boundary, as \(F(x,X(t))=\left(\phi^{\prime}(x-l(t))^{T}-k(x,l(t))C^{T}\right)X(t)\). ### _Control law_ The continuous-time control law is obtained based on the boundary condition (37) of the target system at \(x=0\), utilizing the gain kernel solutions as detailed in [4]. \[\phi(x)^{\top}= \left[H^{\top}\quad K^{\top}-\frac{1}{D}H^{\top}BH^{\top}\right] e^{N_{1}x}\begin{bmatrix}I\\ 0\end{bmatrix}, \tag{41}\] \[k(x,y)= -\frac{1}{D}\phi(x-y)^{\top}B, \tag{42}\] where \(N_{1}\) is defined in equation (37) in [4]. Substituting \(x=0\) into the transformation (34), we have the control law as \[U(t)=\int_{0}^{l(t)}k(0,y)u(y,t)dx+\phi(-l(t))X(t), \tag{43}\] It is worth noting that the solutions of the inverse gain kernels \(q(x,y)\) and \(\varphi(x)\) can be found in [4]. This invertibility of the backstepping transformation is essential for demonstrating the stability of the \((u,X)\)-system. ### _Sample-based control law_ We aim to stabilize the closed-loop system (1)-(5) by using sampling for the continuous-time controller defined in (43) with the increasing sequence of time \((t_{j})_{j\in\mathbb{N}}\). 
Thus, the control input is given by \[U(t_{j})= \int_{0}^{l(t_{j})}k(0,x)u(x,t_{j})dx+\phi(-l(t_{j}))X(t_{j}). \tag{44}\] which means that the boundary condition (14) is modified as \[u(0,t)=U(t_{j}). \tag{45}\] Now, the reference error system can be written as \[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{46}\] \[u(0,t)= U(t_{j}),\] (47) \[u(l(t),t)= h(X(t)),\] (48) \[\dot{X}(t)= AX(t)+f(X(t))+Bu_{x}(l(t)t). \tag{49}\] To establish stability results, we transform the reference error system in (46)-(49) to the target system using the transformation in (34). Thus, the target system is \[w_{t}(x,t)= Dw_{xx}(x,t)-aw_{x}(x,t)-gw(x,t)\] \[-\dot{l}(t)\left(k(x,l(t))u(l(t),t)-\phi^{\prime}(x-l(t))^{T}X(t)\right)\] \[-\phi(x-l(t))^{\top}f(X(t))\] \[-\left(\phi^{\prime}(x-l(t))^{\top}B+\frac{a}{D}\phi(x-l(t))^{ \top}B\right)h^{*}(X), \tag{50}\] \[w(0,t)=d(t),\] (51) \[w(l(t),t)=h^{*}(X(t)),\] (52) \[\dot{X}(t)=(A+BK)X(t)+f(X(t))+Bw_{x}(l(t),t), \tag{53}\] where \[h^{*}(X(t))=\left(z_{1}(t)+\tilde{h}(z_{2}(t))\right)-H^{\top}X(t). \tag{54}\] and the error between continuous-time control law in (43) and sample-based control law in (44) is defined as \[d(t)=U(t)-U(t_{j}). \tag{55}\] ## IV Event-triggered based boundary control In this section, we introduce the event-triggered state-feedback control approach, deriving sampling times for our control law to obtain the event-triggering mechanism. **Definition 1**.: The design parameters are \(\gamma>0\), \(\eta>0\), \(\rho>0\) and \(\beta_{i}>0\) where \(i\in\{1,...5\}\). The event-based controller consists of two trigger mechanisms: 1. The event-trigger: The set of all event times are in increasing sequence and they are denoted as \(I=\{t_{0},t_{1},...\}\) where \(t_{0}=0\) with the following rule * If \(S(t,t_{j})=\emptyset\), then the set of the times of the events is \(\{t_{0},...,t_{j}\}\). * If \(S(t,t_{j})\neq\emptyset\), the next event time is \(t_{j+1}=\inf\left(S(t,t_{j})\right)\) where \[S(t,t_{j})=\{t\in\mathbb{R}_{+}|t>t_{j}\wedge d^{2}(t)>-\gamma m(t)\}\] (56) for all \(t\in[t_{j},t_{j+1})\), \(d(t)\) is given by (55) and \(m(t)\) satisfies the ODE \[\dot{m}(t)= -\eta m(t)+\rho d(t)^{2}-\beta_{1}X(t)^{2}-\beta_{2}X(t)^{4}\] \[-\beta_{3}|w_{x}(0,t)|^{2}-\beta_{4}||w(x,t))||^{2}\] \[-\beta_{5}|w_{x}(l(t),t)|^{2}.\] (57) 2. The control action: The feedback control law that is derived in (44) for all \(t\in[t_{j},t_{j+1})\) where \(j\in\mathbb{N}\). **Lemma 1**.: _Under the definition of the state feedback event-triggered boundary control, it holds that \(d^{2}(t)\leq-\gamma m(t)\) and \(m(t)>0\) for \(t\in[0,F)\), where \(F=\sup(I)\)._ Proof.: From the definition of the event-trigger approach, it is guaranteed that \(d^{2}(t)\leq-\gamma m(t)\), \(t\in[0,F)\)/ It yields \[\dot{m}(t) \leq-(\eta+\gamma\rho)m(t)-\beta_{1}X^{2}(t)-\beta_{2}X^{4}(t)\] \[-\beta_{3}w_{x}(0,t)^{2}-\beta_{4}||w(x,t)||^{2}-\beta_{5}w_{x}(l (t),t)^{2} \tag{58}\] for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\). Considering time continuity of \(m(t)\), we can obtain \[m(t)\leq m(t_{j})e^{-(\eta+\rho\sigma)(t-t_{j})}\] \[-\int_{t_{j}}^{t}e^{-(\eta+\rho\sigma)(t-\tau)}\left(\beta_{1}X( \tau)^{2}+\beta_{2}X(\tau)^{4}\right)d\tau\] \[-\int_{t_{j}}^{t}e^{-(\eta+\rho\sigma)(t-\tau)}(\beta_{3}|u_{x}(0,\tau)|^{2}d\tau+\beta_{5}|u_{x}(l(\tau),\tau)|^{2})\] \[-\int_{t_{j}}^{t}e^{-(\eta+\rho\sigma)(t-\tau)}\beta_{4}||u(x,\tau ))||^{2} \tag{59}\] From the event-trigger mechanism definition, we have that \(m(t_{0})=m(0)<0\). 
Therefore, the estimate of \(m(t)\) in (59) ensures that \(m(t)<0\) for all \(t\in[0,t_{1}]\). This can be generalized for all \(t\). which means it can be shown that \(m(t)<0\) for \(t\in[0,F)\). **Lemma 2**.: _For \(d(t)\), it holds that_ \[(\dot{d}(t))^{2}\leq \rho_{1}d^{2}(t)+\alpha_{1}X(t)^{2}+\alpha_{2}X(t)^{4}+\alpha_{3} w_{x}(0,t)^{2}\] \[+\alpha_{4}||w(x,t)||^{2}+\alpha_{5}w_{x}(l(t),t)^{2} \tag{60}\] _for some positive constants \(\rho_{1},\ \alpha_{1},\ \alpha_{2},\ \alpha_{3},\ \alpha_{4}\) and \(\alpha_{5}\) for all \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\)._ Proof.: By taking the time derivative of (55) and (44), along with the system (46)-(49) we get \[\dot{d}(t)= \left(\dot{\phi}(0)B+\frac{1}{D}H^{\top}B\right)d(t)+H^{\top}Bu_{ x}(0,t)\] \[+(Dk(0,l(t))+\phi(l(t))B)\,u_{x}(l(t),t)\] \[+\int_{0}^{l(t)}\left(Dk_{yy}(0,y)+ak_{y}(0,y)\right)u(y,t)dy\] \[-\int_{0}^{l(t)}\left(g+\dot{\phi}(0)B+\frac{1}{D}H^{\top}B\right) k(0,y)u(y,t)dy\] \[-\dot{l}(t)\dot{\phi}(-l(t))X(t)+\phi(-l(t))f(X(t))\] \[+\left(\phi(-l(t))^{\top}B-ak(0,l(t))\right)u(l(t),t)\] \[-\left(\dot{\phi}(0)B+\frac{1}{D}H^{\top}B-A\right)\phi(-l(t))X(t)\] \[+\dot{l}(t)h(X(t))k(0,l(t)) \tag{61}\] By using inverse transformation of backstepping in (35), Young's and Cauchy Schwarz's inequalities, one can show \[||u||^{2}\leq \left(\frac{3}{2}+\frac{3}{2}\left(\int_{0}^{l(t)}\int_{x}^{l(t)} q(x,y)^{2}dydx\right)^{1/2}\right)^{2}||w||^{2}\] \[+\frac{3}{2}\left(\int_{0}^{l(t)}\varphi(x-l(t))^{\top}dx\right)^ {2}X(t)^{2} \tag{62}\] Applying the same procedure, we can also demonstrate that \[u(l(t),t)^{2}\leq 2w(l(t),t)^{2}+2(\varphi(0)^{\top})^{2}X(t)^{2}, \tag{63}\] \[u_{x}(0,t)^{2}\leq 4w_{x}(0,t)^{2}+4q(0,0)^{2}w(0,t)^{2}\] \[+4\int_{0}^{l(t)}q_{x}(0,y)^{2}dyd||w(y,t)||^{2}\] \[+4(\varphi(-l(t))^{\top})^{2}X(t)^{2},\] (64) \[u_{x}(l(t),t)^{2}\leq 4w_{x}(l(t),t)^{2}+4(\varphi(0)^{\top})^{2}X(t)^{2}\] \[+4q(l(t),l(t))^{2}w(l(t),t)^{2} \tag{65}\] The nonlinear terms can be shown to satisfy the following inequalities \[|h^{*}(X)| \leq 2k_{n}X^{\top}X, \tag{66}\] \[f(X(t))\leq\kappa X^{\top}X+2k_{m}|X^{\top}X|^{3/2}, \tag{67}\] where \[k_{n} =\max\{c_{\infty}K_{+}\lambda_{+}^{2},c_{\infty}K_{-}\lambda_{-}^{ 2}\} \tag{68}\] \[k_{m} =\max\{c_{\infty}K_{+}\lambda_{+}^{3},c_{\infty}K_{-}\lambda_{-}^{ 3}\} \tag{69}\] by utilizing \(-e^{x}+x+1\leq x^{2}\) for \(x\leq 1.79\). 
Then, using Young's and Cauchy-Schwarz's inequalities, one can obtain \[\dot{d}(t)^{2}\leq \rho_{1}d(t)^{2}+\alpha_{1}X(t)^{2}+\alpha_{2}X(t)^{4}+\alpha_{3} w_{x}(0,t)^{2}\] \[+\alpha_{4}||w(x,t)||^{2}+\alpha_{5}w_{x}(l(t),t)^{2}\] where \[\rho_{1}=8\dot{\phi}(0)B+\frac{1}{D}H^{\top}B, \tag{70}\] \[\alpha_{1} =8\left(\left(\dot{\phi}(0)B+\frac{1}{D}H^{\top}B-A\right)\phi(-l(t) )\right)^{2}\] \[+32(C^{\top}B)^{2}(\varphi(-l(t))^{\top})^{2}\] \[+32\left((Dk(0,l(t))+\phi(l(t))B)\right)^{2}(\varphi(0)^{\top})^{2}\] \[+64\left(\phi(-l(t))^{\top}B-ak(0,l(t))\right)^{2}(\varphi(0)^{ \top})^{2}\] \[+12\left(\int_{0}^{l(t)}\zeta(y)^{2}dy\right)\left(\int_{0}^{l(t) }\varphi(x-l(t))^{\top}dx\right)^{2}, \tag{71}\] \[\alpha_{2} =8\left(\kappa^{2}\phi(-l(t))^{2}+\left(r_{8}e_{1}\dot{\phi}(-l( t))^{\top}\right)^{2}\right)\] \[+16\left(\phi(-l(t))^{\top}B-ak(0,l(t))\right)^{2}\] \[+124k_{n}^{2}\left(Dk(0,l(t))+\phi(l(t))B\right)^{2}q(l(t),l(t)) ^{2}\] (72) \[\alpha_{3} =32(C^{\top}B)^{2},\] (73) \[\alpha_{4} =18\int_{0}^{l(t)}\zeta(y)^{2}dy\] \[\times\left(1+\left(\int_{0}^{l(t)}\int_{x}^{l(t)}q(x,y)^{2}dydx \right)^{1/2}\right)^{2}\] \[+32|C^{\top}B)|^{2}\int_{0}^{l(t)}q_{x}(0,y)^{2}dy,\] (74) \[\alpha_{5} =32\left((Dk(0,l(t))+\phi(l(t))B)\right)^{2} \tag{75}\] where \[\zeta(y)= Dk_{yy}(0,y)+ak_{y}(0,y)-\left(g+\dot{\phi}(0)B+\frac{1}{D}H^{\top}B \right)k(0,y) \tag{76}\] ## V Main Results In this section, we present the analysis for the avoidance of Zeno behavior and closed-loop system stability. ### _Avoidance of Zeno Behavior_ The event-triggering mechanism dictates when to sample the continuous-time control signal, reducing computational and communication complexity. However, defining these sampling times is challenging due to the potential for Zeno behavior, where specific instances may result in infinite triggering within finite time intervals. This limitation restricts the mechanism's applicability. To address this, we prove the existence of a minimum dwell-time in the following theorem. **Theorem 1**.: _Consider the closed-loop system of (1)-(5) incorporating the control law given by (44) and the triggering mechanism in Definition 1. There exists a minimum dwell-time denoted as \(\tau\) between two consecutive triggering times \(t_{j}\) and \(t_{j+1}\), satisfying \(t_{j+1}-t_{j}\geq\tau\) for all \(j\in\mathbb{N}\) when \(\beta_{i}\) is selected as follows:_ \[\beta_{i}=\frac{\alpha_{i}}{\gamma(1-\sigma)} \tag{77}\] _where \(\sigma\in(0,1)\), \(i=\{1,...,5\}\) and the values of \(\alpha_{i}\) are provided in equations (71)-(75)._ Proof.: By using Lemma 1, we define the continuous function \(\psi(t)\) in \([t_{j},t_{j+1})\) to derive the lower bound between interexcution times as follows: \[\psi(t):=\frac{d^{2}(t)+\gamma(1-\sigma)m(t)}{-\gamma\sigma m(t)} \tag{78}\] As described in [11], one can show that \[\dot{m}(t)= -\eta m(t)+\rho d(t)^{2}-\beta_{1}X(t)^{2}-\beta_{2}X(t)^{4}\] \[-\beta_{3}|w_{x}(0,t)|^{2}-\beta_{4}||w||^{2}-\beta_{5}|w_{x}(l( t),t)|^{2} \tag{79}\] Then, one can obtain the estimate in (59). Now, taking the time derivative of (78) and using (59), we can choose \(\beta_{i}\) as described in (77). Thus, we get \(\dot{\psi}(t)\leq a_{1}\psi(t)^{2}+a_{2}\psi(t)+a_{3}\), where \[a_{1} =\rho\sigma\gamma>0, \tag{80}\] \[a_{2} =1+2\rho_{1}+(1-\sigma)\rho+\eta>0,\] (81) \[a_{3} =(1+\rho_{1}+\gamma(1-\sigma)\rho+\eta)\frac{1-\sigma}{\sigma}>0. 
\tag{82}\] Using the comparison principle and the argument in [11], one can prove that there exists a time minimum dwell-time \(\tau\) as follows: \[\tau=\int_{0}^{l(t)}\frac{1}{a_{1}s^{2}+a_{2}s+a_{3}}ds \tag{83}\] which completes the proof. ### _Stability Analysis_ In this section, we initially introduce the main theorem, which establishes stability. **Theorem 2**.: _Consider the closed-loop system comprising the plant described by (1)-(5) along with the control law specified by (44) and employing an event-triggering mechanism that is defined in Definition 1. Let_ \[\gamma>\frac{16(\alpha_{3}+\alpha_{5})}{D(1-\sigma)},\ \rho=16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{16g}{D}+\frac{16}{D}\rho_{1} \tag{84}\] _and \(\eta>0\) be design parameters, \(\sigma\in(0,1)\) while \(\beta_{i}\) for \(i=\{1,2,3,4,5\}\) are chosen as in (71)-(75). Then, there exist constants \(M>0\), \(c>0\) and \(\Gamma\), such that, if initial conditions is such that \(Z(0)<M\) then the following norm estimate is satisfied:_ \[Z(t)\leq cZ(0)exp(-\Gamma t), \tag{85}\] _for all \(t\geq 0\), in \(\mathcal{H}_{1}\)-norm \(Z(t)=||u(.,t)||_{\mathcal{H}_{1}(0,l(t))}^{2}+X^{\top}X\) which establishes the local exponential stability of the origin of the closed-loop system._ To establish local stability on a non-constant spatial interval, we rely on two assumptions derived in [4], which are as follows: \[0<l(t)\leq\bar{l},\quad|\dot{l}(t)|\leq\bar{v}, \tag{86}\] for some \(\bar{l}>l_{\mathrm{s}}>0\) and \(\bar{v}>0\). Then, we consider the following Lyapunov functionals \[V_{1}= \frac{1}{2}||w||^{2}:=\frac{1}{2}\int_{0}^{l(t)}w(x,t)^{2}dx, \tag{87}\] \[V_{2}= \frac{1}{2}||w_{x}||^{2}:=\frac{1}{2}\int_{0}^{l(t)}w_{x}(x,t)^{2}dx.\] (88) \[V_{3}= X(t)^{\top}PX(t), \tag{89}\] where \(P>0\) is a positive definite matrix that satisfies the Lyapunov equation: \((A+BK^{\top})^{\top}P+P(A+BK^{\top})=-Q\) for some positive definite matrix \(Q>0\). We define the total Lyapunov function as follows: \[V(t)=d_{1}V_{1}(t)+V_{2}(t)+d_{2}V_{3}(t)-m(t), \tag{90}\] where \(d_{1}>0\) and \(d_{2}>0\) are parameters to be determined. **Lemma 3**.: _Assume that the conditions in (86) are satisfied with \(\bar{v}=\frac{D}{16(D+1)}\), for all \(t\geq 0\). 
Then, for sufficiently large \(d_{1}>0\) and sufficiently small \(d_{2}<0\), there exist positive constants \(\xi_{i}\) for \(i=\{1,2,3,4,5\}\) such that the following norm estimate holds for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\):_ \[\dot{V}\leq-\alpha^{*}V+\left(\sum_{i=1}^{5}\xi_{i}V^{(1+\frac{i}{2})}\right) \tag{91}\] _where \(\alpha^{*}=\min\left\{g+\frac{d_{1}D}{2},\frac{d_{1}g}{4},\frac{d_{2}\lambda_{ \min}(Q)}{4},\eta\right\}\)._ Proof.: By taking the time derivative of the Lyapunov functional (87)-(89) along the target system and substituting boundary conditions for \(t\in(t_{j}+t_{j+1})\), \(j\in\mathbb{N}\), and applying Poincare's, Agmon's, and Young's inequalities, (84), along with (57) and (93)-(94), the expression for (90) can be transformed into: \[\dot{V}\leq -\alpha^{*}V+\left(16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{8g^{2}}{D} \right)h^{*}(X(t))^{2}\] \[+\frac{d_{1}}{2}\dot{l}(t)h^{*}(X(t))^{2}+\Xi_{1}(X^{\top}X)^{2}+ \Xi_{2}(X^{\top}X)^{3}\] \[+d_{1}\dot{l}(t)\int_{0}^{l(t)}F(x,X(t))w(x,t)dx\] \[+\frac{\big{|}\dot{l}(t)\big{|}}{2}F(l(t),X(t))^{2}+\frac{\big{|} \dot{l}(t)\big{|}}{2}F(0,X(t))^{2}\] \[+\dot{l}(t)\int_{0}^{l(t)}F_{x}(x,X(t))w_{x}(x,t)dx\] \[+d_{2}4k_{m}|P||X^{\top}X|^{5/2} \tag{92}\] where we use the time derivatives of boundary conditions as \[w_{t}(0,t) =\dot{d}(t), \tag{93}\] \[w_{t}(l(t),t) =\dot{X}(t)\dot{h}^{*}(X(t))-\dot{l}(t)w_{x}(l(t),t). \tag{94}\] and the positive constants are \[\int_{0}^{l(t)}\left(\phi(x-l(t))^{\top}\right)^{2}dx\leq L_{n_{ 2}}, \tag{95}\] \[\int_{0}^{l(t)}\left(\phi^{\prime}(x-l(t))^{\top}B+\frac{a}{D} \phi(x-l(t))^{\top}B\right)^{2}dx\leq L_{n_{3}}\] (96) \[\Xi_{1}= L_{n_{2}}\kappa^{2}\left(\frac{2}{D}+\frac{3d_{1}}{2g}\right)+L _{n_{3}}\left(\frac{8k_{n}^{2}}{D}+\frac{2d_{1}}{3g}\right)+\frac{D\alpha_{2} }{16\alpha_{5}}\] \[+\beta_{2}+2d_{2}\kappa|P|+\frac{3Dc_{\infty}^{2}r_{\rm g}^{2}}{1 6}(K_{+}^{2}\lambda_{+}^{4}+K_{-}^{2}\lambda_{-}^{4})\] (97) \[\Xi_{2}= \frac{2}{D}L_{n_{2}}ak_{m}^{2}|P|^{2}+d_{1}\frac{1}{2g}L_{n_{2}} 4k_{m}^{2}|P|^{2}\] \[+\frac{3D}{32}\left(c_{\infty}r_{\rm g}(K_{+}\lambda_{+}^{3}+K_{ -}\lambda_{-}^{3})\right)^{2} \tag{98}\] We choose the constants \(d_{1}\) and \(d_{2}\) to satisfy \[d_{1}\geq\frac{4a^{2}}{D^{2}}+\frac{1+\bar{l}}{\bar{l}}+\frac{ 4\beta_{4}}{g}+\frac{D\alpha_{4}}{4g\alpha_{5}},\ d_{2}<\frac{D\lambda_{\min}( Q)}{4|B^{\top}P|^{2}}. \tag{99}\] The surplus nonlinear terms in (92) can be bounded by quadratic norms of the ODE state as. 
Specifically, positive constants \(L_{1},\ L_{2},\ L_{3}\), and \(L_{4}\) satisfy: \(F(0,X(t))^{2}\leq L_{1}X^{\top}X\), \(F(l(t),X(t))^{2}\leq L_{2}X^{\top}X\), \(\int_{0}^{l(t)}F_{x}(x,X(t))^{2}dx\leq L_{3}X^{\top}X\), \(\int_{0}^{l(t)}F(x,X(t))^{2}dx\leq L_{4}X^{\top}X\), Furthermore, using inequality (66), taking into account \(\dot{l}(t)=r_{\rm g}e_{2}^{\top}X\), and applying Poincare's and Young's inequalities, we can derive: \[\dot{V}\leq-\alpha^{*}V+\xi_{1}V^{3/2}+\xi_{2}V^{2}+\xi_{3}V^{5/2}+ \xi_{4}V^{3}+\xi_{5}V^{7/2} \tag{100}\] where \[\xi_{1} =\frac{r_{\rm g}}{d_{2}\lambda_{min}(P)^{1/2}}\max\left\{2,\frac{ L_{1}+L_{2}+L_{3}+d_{1}L_{4}}{d_{2}\lambda_{min}(P)}\right\} \tag{101}\] \[\xi_{2} =\frac{1}{d_{2}\lambda_{\min}(P)^{2}}\left(\Xi_{1}+\kappa^{2} \left(16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{8g^{2}}{D}\right)\right)\] (102) \[\xi_{3} =\frac{d_{1}r_{\rm g}\kappa^{2}+8d_{2}k_{m}|P|}{2d_{2}\lambda_{ \min}(P)^{5/2}}\] (103) \[\xi_{4} =\frac{\left(16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{8g^{2}}{D} \right)4k_{m}^{2}}{d_{2}\lambda_{\min}(P)^{3}}+\frac{\Xi_{2}}{d_{2}\lambda_{ \min}(P)^{3}}\] (104) \[\xi_{5} =\frac{d_{1}r_{\rm g}4k_{m}^{2}}{2d_{2}\lambda_{\min}(P)^{(7/2)}} \tag{105}\] which completes the proof of Lemma 3. In this next section, we ensure the local stability of the closed-loop system with the event-triggering mechanism. **Lemma 4**.: _In the region \(\Omega_{1}:=\{(w,X)\in\mathcal{H}_{1}\times\mathbb{R}^{2}|V(t)<M_{0}\}\) where \(t\in(t_{j},t_{j+1})\)\(j\in\mathbb{N}\), there exists a positive constant \(M_{0}>0\) such that the conditions in (86) are satisfied._ Proof.: See the proof of Lemma 2 in [4]. From the proof of Lemma 4, we have \(M_{0}=\frac{\lambda_{\min}(P)}{d_{2}}r^{2}\) for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\). Next, we analyze stability within the time interval \(t\in(t_{j},t_{j+1})\) for \(j\in\mathbb{N}\), and subsequently for \(t\in(0,t)\). Within this interval, we establish the following lemma: **Lemma 5**.: _There exists a positive constant \(M_{j}\) such that if \(V(t_{j})<M_{j}\) then the following norm estimate holds for \(t\in(t_{j},t_{j+1})\), where \(j\in\mathbb{N}\):_ \[V(t_{j+1})\leq V(t_{j})e^{-\frac{\alpha^{*}}{2}(t_{j+1}-t_{j})} \tag{106}\] Proof.: For \(M_{j}>0\), we easily demonstrate that \(M_{j}<M_{0}\) using Lemma 4, ensuring the norm estimate from Lemma 3 holds. Thus, we set \(M_{j}\leq p^{*}\), where \(p^{*}\) is a non-zero root of the polynomial for \(V>0\). \[-\alpha^{*}V+\xi_{1}V^{3/2}+\xi_{2}V^{2}+\xi_{3}V^{5/2}+\xi_{4}V^{3}+\xi_{5}V^ {7/2}=0 \tag{107}\] Since \(\alpha^{*}\), and \(\xi_{i}\) are all positive, at least one positive root exists for the polynomial in (107). Therefore, (100) implies \[\dot{V}(t)\leq-\frac{\alpha^{*}}{2}V(t) \tag{108}\] for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\) where \(M_{j}=\min\left\{M_{0},p^{*}\right\}\). The and \(V(t_{j}^{+})=V(t_{j})\) where \(t_{j}^{+}\) and \(t_{j}^{-}\) are right and left limits of \(t=t_{j}\), respectively. Thus, we have \[V(t_{j+1})\leq\exp(-\alpha^{*}(t_{j+1}-t_{j}))V(t_{j}) \tag{109}\] which completes the proof of Lemma 5. 
For any \(t\geq 0\) in \(t\in[t_{j},t_{j+1})\), \(j\in\mathbb{N}\), we obtain \[V(t)\leq e^{-\alpha^{*}(t-t_{j})}V(t_{j})\leq e^{-\alpha^{*}t}V(0) \tag{110}\] Recalling \(m(t)<0\) and (92), we have \[d_{1}V_{1}(t)+V_{2}(t)+d_{2}V_{3}(t)\leq e^{-\alpha^{*}t}V(0) \tag{111}\] which means that \[d_{1}\frac{1}{2}\|w\|^{2}+\frac{1}{2}\|w_{x}\|^{2}+d_{2}X(t)^{ \top}PX(t)\] \[\leq e^{-\alpha^{*}t}\left(\frac{d_{1}}{2}\|w(0)\|^{2}+\frac{1}{2}\|w_{ x}(0)\|^{2}+d_{2}X(0)^{\top}PX(0)-m(0)\right)\] Therefore, we can derive the following norm estimate: \[\|w\|^{2}+\|w_{x}\|^{2}+X(t)^{\top}X(t)\] \[\leq e^{-\alpha^{*}t}\sqrt{\frac{\frac{d_{1}}{2}\|w(0)\|^{2}+\frac{1}{2 }\|w_{0}(0)\|^{2}+d_{2}X(0)^{\top}PX(0)-m(0)}{\min\left\{\frac{d_{1}}{2},\frac{ 1}{2},\frac{d_{2}}{\lambda_{\max}(P)}\right\}}}.\] This confirms the local exponential stability of the system (50)-(53) in the \(\mathcal{H}_{1}\)-norm. By exploiting the backstepping transformation's invertibility and norm equivalence, we also establish the local exponential stability of the original system (1)-(5) in the \(\mathcal{H}_{1}\) norm, concluding the proof of Theorem 2. ## VI Numerical Simulations In this section, we conduct a numerical analysis of the plant dynamics (equations (1) to (5) utilizing the control law (defined by (44)) and incorporating the event triggering mechanism (as defined in Definition 1). The model employs biological constants and control parameters from Table 1, with initial conditions set to \(c_{0}(x)=2c_{\infty}\) for the tubulin concentration along the axon and \(l_{0}=1\mu m\) for the initial axon length. The control gain parameters are chosen as \(k_{1}=-0.001\) and \(k_{2}=4\times 10^{13}\). The event-triggering mechanism parameters are set as follows: \(m(0)=-0.5\), \(\beta_{1}=1.634\times 10^{22}\), \(\beta_{2}=5.229\times 10^{12}\), \(\beta_{3}=6.569\times 10^{-14}\), \(\beta_{4}=2.614\times 10^{13}\), \(\beta_{5}=2.94\times 10^{-12}\), \(\rho=4\times 10^{22}\), \(\eta=100\) and \(\sigma=0.5\). In Fig. (a)a and (b)b, we present the evolution of tubulin concentration along the axon for both continuous-time control law and event-triggered control. Fig. 2 shows axon growth convergence under continuous-time and event-triggered control laws. Both methods achieve the desired \(12\mu m\) length from an initial \(1\mu m\) in about 3.5 minutes. In Fig. (a)a, we compare ETC control inputs for \(\eta=1\) and \(\eta=1000\) with \(\sigma=0.5\), showing similar convergence rates. Notably, the ETC controller with \(\eta=1000\) samples faster. In Fig. (b)b, fixing \(\eta\) at \(100\) and varying \(\sigma\) to \(0.8\) results in faster and more frequent sampling, leading to quicker convergence compared to \(\sigma=0.1\). ## VII Conclusion This paper explores a dynamic event-triggering boundary control approach for axonal growth modeling. It addresses the avoidance of Zeno behavior and offers a local stability analysis of the closed-loop system. Future research will focus on investigating periodic event-triggering and self-triggering boundary control methods, which are more suitable for digital implementations. 
\begin{table} \begin{tabular}{c|c|c|c} \hline Parameter & Value & Parameter & Value \\ \hline \(D\) & \(10\times 10^{-12}\,m^{2}/s\) & \(\bar{r}_{g}\) & \(0.053\) \\ \(a\) & \(1\times 10^{-8}\,m/s\) & \(\gamma\) & \(10^{4}\) \\ \(g\) & \(5\times 10^{-7}\,s^{-1}\) & \(l_{c}\) & \(4\,\mu m\) \\ \(r_{g}\) & \(1.783\times 10^{-5}\,m^{4}/(\mathrm{mol}\,s)\) & \(l_{s}\) & \(12\,\mu m\) \\ \(c_{\infty}\) & \(0.0119\,mol/m^{3}\) & \(l_{0}\) & \(1\,\mu m\) \\ \hline \end{tabular} \end{table} TABLE I: Biological constants and control parameters. Fig. 2: The closed-loop response of the designed full-state feedback control system for the continuous-time and event-triggered control laws. Fig. 3: The closed-loop response of the continuous-time and event-triggered control laws for \(l_{s}=12\,\mu m\).
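For readers who wish to experiment with the triggering rule itself, the following is a minimal Python sketch of the sampling logic in Definition 1. It is not the closed-loop PDE simulation reported above: the continuous-time control signal, the state-dependent norm terms, and all numerical values are placeholders chosen only so that the mechanism produces a few events; the actual gains and biological constants used in our simulations are those listed in Table I.

```python
import numpy as np

# Toy parameters (placeholders, not the values of Table I).
gamma, eta, rho = 1.0, 10.0, 1.0
betas = np.ones(5)                                      # stands in for beta_1..beta_5

U_cont = lambda t: np.exp(-0.5 * t) * np.cos(3.0 * t)   # placeholder continuous-time control law
norms = lambda t: 0.1 * np.ones(5)                      # placeholder |X|^2, |X|^4, |w_x(0)|^2, ||w||^2, |w_x(l)|^2

dt, T = 1e-4, 5.0
m = -0.5                                                # m(0) < 0, as required by the mechanism
U_hold = U_cont(0.0)                                    # zero-order-hold value U(t_j), Eq. (44)
events = [0.0]

for step in range(1, int(T / dt)):
    t = step * dt
    d = U_cont(t) - U_hold                              # deviation d(t) of Eq. (55)
    if d ** 2 > -gamma * m:                             # event-trigger rule (56)
        U_hold = U_cont(t)                              # resample: new event time t_{j+1}
        events.append(t)
        d = 0.0
    # dynamic variable m(t), Eq. (57), integrated with forward Euler
    m += dt * (-eta * m + rho * d ** 2 - betas @ norms(t))

print(f"{len(events)} events; smallest inter-event time: {np.diff(events).min():.3f} s")
```

With these toy signals one observes a strictly positive minimum inter-event time and a variable \(m(t)\) that remains negative, consistent with Lemma 1 and Theorem 1.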
We introduce a dynamic event-triggering mechanism for regulating axonal growth. Boundary actuation is applied at the soma of the axon (the part containing the nucleus) to regulate the dynamics of the tubulin concentration and the axon length. The control law is formulated by applying a zero-order hold (ZOH) to a continuous-time control law that drives the axon to a desired length. The proposed dynamic event-triggering mechanism determines the sampling times at which the control input is updated from the continuous-time control law. We establish a minimum dwell-time between two triggering times, which excludes Zeno behavior. Using Lyapunov analysis and PDE backstepping, we prove local stability of the closed-loop system in the $L_2$ norm, first for the target system and then for the original system. The effectiveness of the proposed method is demonstrated through numerical simulations.
2309.15339
Detecting quantum phase transitions in a frustrated spin chain via transfer learning of a quantum classifier algorithm
The classification of phases and the detection of phase transitions are central and challenging tasks in diverse fields. Within physics, it relies on the identification of order parameters and the analysis of singularities in the free energy and its derivatives. Here, we propose an alternative framework to identify quantum phase transitions. Using the axial next-nearest neighbor Ising (ANNNI) model as a benchmark, we show how machine learning can detect three phases (ferromagnetic, paramagnetic, and a cluster of the antiphase with the floating phase). Employing supervised learning, we demonstrate the feasibility of transfer learning. Specifically, a machine trained only with nearest-neighbor interactions can learn to identify a new type of phase occurring when next-nearest-neighbor interactions are introduced. We also compare the performance of common classical machine learning methods with a version of the quantum nearest neighbors (QNN) algorithm.
André J. Ferreira-Martins, Leandro Silva, Alberto Palhares, Rodrigo Pereira, Diogo O. Soares-Pinto, Rafael Chaves, Askery Canabarro
2023-09-27T01:11:11
http://arxiv.org/abs/2309.15339v1
Detecting quantum phase transitions in a frustrated spin chain via transfer learning of a quantum classifier algorithm ###### Abstract The classification of phases and the detection of phase transitions are central and challenging tasks in diverse fields. Within physics, it relies on the identification of order parameters and the analysis of singularities in the free energy and its derivatives. Here, we propose an alternative framework to identify quantum phase transitions. Using the axial next-nearest neighbor Ising (ANNNI) model as a benchmark, we show how machine learning can detect three phases (ferromagnetic, paramagnetic, and a cluster of the antiphase with the floating phase). Employing supervised learning, we demonstrate the feasibility of transfer learning. Specifically, a machine trained only with nearest-neighbor interactions can learn to identify a new type of phase occurring when next-nearest-neighbor interactions are introduced. We also compare the performance of common classical machine learning methods with a version of the quantum nearest neighbors (QNN) algorithm. ## I Introduction Machine Learning (ML) has proven its efficiency and success in many scientific as well as business sectors [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In essence, we can teach computers to see patterns by progressively exposing them to quality inputs, which is crucial for data-driven solutions given the gigantic and ever-increasing amount of raw data. Within any branch of ML, substantial improvements in state-of-the-art solutions are strongly related to algorithmic and hardware advances. And, although we still need to be careful about setting long-term expectations, recent breakthroughs in the current noisy intermediate-scale quantum (NISQ) era [20; 21; 22; 23] put quantum computing among the most promising directions towards significant progress in machine learning. Within this context, there has been a number of different approaches seeking for quantum advantages in machine learning ranging from the quantum analog of neural networks [24], routines for supervised or unsupervised learning [25; 26], quantum reinforcement learning [27] and quantum pattern recognition [28] (see references [29; 30; 31] for a detailed review). A field where machine learning has been particularly successful is that of quantum matter and quantum information. Classical machine learning techniques were used to perform quantum state tomography [12], to approximate the ground state of many Hamiltonians of interest [11], for the description of causal networks [32; 33; 34] and finding violations of Bell inequalities [17; 35], among many other applications. Importantly, such classical methods have also been proven capable of tackling a central topic in many-body physics, that of classifying phase transitions, a thorny goal especially due to the exponential increase of Hilbert space describing quantum systems. Apart from simple transitions, witnessed by non-analyticities in order parameters, more general quantum phase transitions require large lattice sizes, a costly computational task for which a variety of classical machine learning techniques provide alternative and reliable approaches [14; 15; 16; 36]. It seems thus natural to consider whether quantum machine learning can also identify phase transitions. 
Indeed, machine learning based on hybrid quantum-classical variational circuits has been shown to detect phase transitions in the simple Hamiltonians, such as the transverse field Ising and XXZ models [37; 38; 39; 40]. Our approach distinguishes itself significantly from others, primarily through the implementation of transfer learning using a quantum classifier algorithm. This algorithm is exclusively trained within a specific segment of the phase diagram while testing on the rest of the phase diagram. We also explore optimized data preprocessing for compatibility with real quantum hardware. This demonstrates the effectiveness of our technique, as discussed in detail herein. Our aim in this paper is to show that the Quantum Nearest Neighbours (QNN) algorithm [41] also provides a well-founded tool for classifying quantum phase transitions. Moving beyond the models previously considered, we benchmark the QNN performance by employing the axial next-nearest neighbor Ising (ANNNI) model [42; 43] used, for instance, to investigate the magnetic order in quasi-one-dimensional spin ladder materials [44], quantum melting of crystalline order in Rydberg atom systems [45], interactions between Majorana edge modes in arrays of Kitaev chains [46; 47], and quench dynamics and dynamical phase transitions [48; 49; 50]. The ANNNI is the simplest model combining the effects of quantum fluctuations and frustrated exchange interactions, a combination from which a rich ground state phase diagram arises [51; 52; 53; 54; 55; 56]. It thus provides an interesting challenge to the QNN algorithm capabilities. Importantly, even though the input data to the quantum algorithm is considerably small, the full 198 raw pairwise correlation functions between the spins for a lattice with 12 sites, i.e., an input array of 198 real features. And even if each of these variables is mapped to just one bit, we would still require a large amount of qubits. As better detailed ahead, to be implemented, the QNN algorithm that we use requires \(2n+2\) qubits for a \(n\)-sized input vector. In order to make the computational simulation as well as its implementation in a real quantum computer feasible, we first proceed with a pre-processing of the data, consisting of a feature selection followed by a discretization and a final one-hot encoding step. With that, we reduce to 4 the number of features that in turn require 10 qubits to be analyzed by the quantum algorithm. As we show, even after significantly reducing the input data, to make it compatible with quantum computational requirements, the QNN algorithm still allows for a successful learning of phase transitions. More precisely, we demonstrate the transfer learning in the ANNNI model, as by training the machine with nearest-neighbour interactions only, it also accurately predicts the phase transitions happening at regions including next-nearest-neighbor interactions. Interestingly, the QNN performs better than its classical counterpart, called K-Nearest Neighbors (KNN), when exposed to the same input data, thus providing a proof-of-principle example of a possible quantum advantage in accuracy. The paper is organized as follows. In Sec. II we describe the ANNNI model. In Sec. III we provide a succinct but comprehensive overview of classification problems in machine learning, also describing the KNN and QNN algorithms. In Sec. IV we detail the data pre-processing required to make the problem amenable to be implemented in near-term quantum devices. In Sec. 
V we present our results regarding the learning of phase transitions in the ANNNI model. In Sec. VI we discuss our results and point out interesting directions for future research. Finally, in the Appendix we provide technical details about some of the classical machine learning techniques we have employed in the pre-processing of the data. ## II The ANNNI model With the goal of analyzing the use of a quantum classifier algorithm to witness phase transitions in a quantum many-body system, we chose the axial next nearest-neighbor Ising (ANNNI) model. The reason stems from the fact that this model displays a non-trivial and rich phase diagram. As it happens, ANNNI is the simplest model combining quantum fluctuations and competing frustrated exchange interactions. The first is induced by the presence of a transverse field while the latter is due to the fact that even though the interaction is ferromagnetic for nearest neighbors, it becomes antiferromagnetic for next-nearest neighbors. The Hamiltonian for the ANNNI model is given by [42; 43] \[H=-J\sum_{j=1}^{N}\left(\sigma_{j}^{z}\sigma_{j+1}^{z}-\kappa\sigma_{j}^{z} \sigma_{j+2}^{z}+g\sigma_{j}^{x}\right), \tag{1}\] where \(\sigma_{j}^{\kappa}\) (\(\alpha=x,y,z\)), are Pauli matrices acting on spin-1/2 degrees of freedom at site \(j\) of a one-dimensional lattice with \(N\) sites and periodic boundary conditions. The parameter \(J>0\) is a coupling constant that sets the energy scale of the problem (we set \(J=1\)) and is associated with the nearest-neighbor ferromagnetic exchange interaction. The dimensionless coupling constants \(\kappa\) and \(g\) are related to the next-nearest-neighbor interaction and the transverse magnetic field, respectively. The groundstate phase diagram of the ANNNI model is well understood and known to contain four phases separated by three quantum phase transitions: ferromagnetic, antiphase, paramagnetic, and floating phase. In a nutshell, the ferromagnetic phase is characterized by a uniform spontaneous magnetization, with one of the ground states given by \(\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\). In turn, the antiphase breaks the lattice translational symmetry and has long-range order with a four-site periodicity of the form \(\uparrow\uparrow\downarrow\downarrow\uparrow\uparrow\downarrow\downarrow\). Distinctively, the paramagnetic phase is disordered and has a unique ground state with spins pointing predominantly along the field direction. Finally, the floating phase is gapless with correlation functions decaying as a power-law for large distances, in contrast with the other phases that have a finite energy gap and exponentially decaying correlations. For \(\kappa=0\), the transverse field Ising model is reproduced, exactly solvable via the mapping to non-interacting spinless fermions. Along the \(\kappa=0\) line, a second-order phase transition occurs at \(g=1\), separating the ferromagnetic phase at \(g<1\) from the paramagnetic phase at \(g>1\). In particular, exactly at the critical point \(g=1\), the energy gap vanishes. For \(g=0\), there is a transition between the ferromagnetic phase at small \(\kappa\) and the antiphase at large \(\kappa\) occurring at \(\kappa=1/2\). Notice that with \(g=0\) the model becomes classical, since all operators in the Hamiltonian commute with each other. 
At this classical transition point, any configuration that does not have three neighboring spins pointing in the same direction is a ground state, showing that the degenerescence of the ground state increases exponentially with the system size. For \(g\neq 0\) and \(\kappa\neq 0\), the critical lines have to be determined numerically since the model is not integrable any longer. For \(0\leq\kappa\leq 1/2\), the Ising transition between paramagnetic and the ferromagnetic phases extends from the \(g=1\), \(\kappa=0\) until the degenerate point \(g=0\), \(\kappa=1/2\), a multicritical point at which several transition lines coincide. There are two other transition lines that start at the multicritical point and extend to the high-frustration regime \(\kappa>1/2\). For fixed \(g>0\) and increasing \(\kappa>1/2\), we first encounter a Berezinsky-Kosterlitz-Thouless (BKT) transition from the paramagnetic phase to the floating phase. Detecting the BKT transition is challenging because the correlation length diverges exponentially at the critical point. As we increase \(\kappa\) further, there is a commensurate-incommensurate (CIC) transition from the floating phase to the antiphase. Numerical density matrix renormalization group results for long spin chains [55] show that the floating phase occupies a rather narrow region in the phase diagram, which makes it hard to discern the BKT from the CIC transition for small system sizes. Using perturbation theory in the regime \(\kappa<1/2\)[43] or by fitting numerical results [55]) in the regime \(\kappa>1/2\), one can obtain approximate expressions for the transition lines. For instance, the critical value of \(g\) for the Ising transition for \(0\leq\kappa\leq 1/2\) is approximately given by [43] \[g_{\rm I}(\kappa)\approx\frac{1-\kappa}{\kappa}\left(1-\sqrt{\frac{1-3\kappa+4 \kappa^{2}}{1-\kappa}}\right). \tag{2}\] In turn, the critical value of \(g\) for the BKT transitions for \(1/2<\kappa\lesssim 3/2\) is approximated by [55] \[g_{\rm BKT}(\kappa)\ \approx\ 1.05\sqrt{(\kappa-0.5)(\kappa-0.1)}. \tag{3}\] We use these approximations to make benchmark comparisons to our heuristic results. ### Our dataset As will be discussed in more detail throughout the paper, we use the pairwise correlations among all spins in the lattice as the training data for the machine learning algorithms. Given \(N\) spins, we have a total of \(3\times C_{2}=\binom{N}{2}\) observables for up to (second) nearest neighbors, hence the combination by (2). Thus, the features are given by \(\left\{\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle,\langle\sigma_{i}^{y} \sigma_{j}^{y}\rangle,\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle\right\}\) with, \(j>i\) and \(i=[1,N-1]\) where \(\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle=\langle\lambda_{0}|\sigma_{i}^{x }\sigma_{j}^{x}|\lambda_{0}\rangle\) is the expectation value of the spin correlation for the Hamiltonian ground state \(|\lambda_{0}\rangle\) (and similarly for the other expectation values). In our case, we take N = 12, a manageable size for both computational and analytical evaluation of the ground state of the ANNNI Hamiltonian. This allows us to efficiently compute a set of 198 pairwise expectation values, which will serve as the (raw) input features for the machine learning algorithm. It is worth pointing out that, even if one only has access to short chains, the Ising transition can still be captured correctly [54]. 
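As a rough illustration of how such a dataset can be generated, the sketch below builds the ANNNI Hamiltonian of Eq. (1) with periodic boundary conditions, obtains its ground state by exact diagonalization, and extracts ground-state pairwise correlations. The chain length \(N=8\) and the couplings used here are arbitrary choices kept small for speed; the data used in this paper correspond to \(N=12\) and a sweep over \((g,\kappa)\), with the \(\langle\sigma^{x}\sigma^{x}\rangle\) and \(\langle\sigma^{y}\sigma^{y}\rangle\) correlators computed in the same way.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

# Assumption: a small chain (N=8) and arbitrary couplings, only for illustration.
N, J, kappa, g = 8, 1.0, 0.3, 0.6

sx = csr_matrix([[0.0, 1.0], [1.0, 0.0]])
sz = csr_matrix([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, j):
    """Embed a single-site operator at site j of the N-site chain."""
    out = identity(1, format="csr")
    for k in range(N):
        out = kron(out, op if k == j else identity(2, format="csr"), format="csr")
    return out

SX = [site_op(sx, j) for j in range(N)]
SZ = [site_op(sz, j) for j in range(N)]

# ANNNI Hamiltonian, Eq. (1), with periodic boundary conditions.
H = csr_matrix((2 ** N, 2 ** N))
for j in range(N):
    H -= J * (SZ[j] @ SZ[(j + 1) % N] - kappa * SZ[j] @ SZ[(j + 2) % N] + g * SX[j])

# Ground state and the pairwise correlation features <sigma_i^z sigma_j^z>.
_, vecs = eigsh(H, k=1, which="SA")
gs = vecs[:, 0]
corr_zz = {(i, j): float(gs @ (SZ[i] @ SZ[j] @ gs))
           for i in range(N) for j in range(i + 1, N)}
print(corr_zz[(0, N // 2)])   # long-distance correlator, sensitive to the ordered phases
```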
However, detecting the BKT transitions using standard approaches requires computing observables for significantly longer chains [55]. Notwithstanding, as we will see below, even though our data regards a quite short chain \(N=12\), the machine learning algorithms, both classical and quantum, will be able to identify not only the Ising but also the antiphase and the paramagnetic phases, lumping the BKT and CIC transitions together. ## III The quantum nearest neighbors algorithm The quantum nearest neighbors (QNN) [41] is a quantum classification algorithm that employs the Hamming distance as a distance criterion to compare the training dataset and unclassified observations. Schematically, it consists of three major steps: * First, create a superposition of the training dataset and the input observation; * Encode the Hamming distance between the input observation and each example in the training set into the amplitude of each state in the superposition; * Measure the class-qudit retrieving the predicted class with the highest probability. Before the actual quantum algorithm starts, an important classical pre-processing step (whose reason will become clear in what follows) must be performed: the features in the training dataset are represented as bit vectors, so that the feature space becomes \(\mathcal{X}=\{0,1\}^{\otimes n}\). This is achieved via the procedure known as one-hot encoding, which produces the so-called dummy variables [57]. Naturally, such a representation will be discrete (binary, in fact), so that if any original feature is continuous (or even categorical with more than 2 levels), a prior step of discretization is necessary. Notice that the number of binary features after the encoding may be different from the number of original features, although here we represented both by the same number of bits \(n\). There are several ways to perform this binarization process. However, whatever method is chosen, it is important that the essence of the data topology is maintained -- that is, points that are close on the original feature space must remain close on the binarized feature space. In Sec. IV we detail the specifics of the particular procedure we applied to our problem. Once the training dataset features are binarized, their representation as quantum states is immediate via the basis encoding [58], which accounts for a direct mapping of binary features to the quantum computational-basis states: \(0\mapsto|0\rangle\) and \(1\mapsto|1\rangle\). After these two steps, each training set data point is mapped to the quantum state \(|x_{1}^{p}\cdots x_{n}^{p}\rangle\equiv|\mathbf{x}^{p}\rangle\), \(x_{k}^{p}\in\{0,1\}\), \(p=1,\cdots,N\), where \(N\) is the number of points in the training set. In parallel, in a separate quantum register, we encode the class \(y^{p}\in\{0,\cdots,d-1\}\), and construct, for each observation \(p\), the state \[|x_{1}^{p}\cdots x_{n}^{p},y^{p}\rangle\equiv|\mathbf{x}^{p},y^{p}\rangle\enspace. \tag{4}\] If we are dealing with binary classification (which is the case in this work), the respective class is also straightforwardly encoded in a single qubit, as \(0\mapsto|0\rangle\) and \(1\mapsto|1\rangle\). If we have a multiclass problem, qudits are necessary, or one could use more than one qubit to encode integers corresponding to the class (for instance, \(|5\rangle=|101\rangle\)). In this case, \(\lceil\log_{2}d\rceil\) qubits are necessary to encode \(d\) classes. 
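As a concrete, deliberately tiny illustration of the basis encoding just described, the sketch below prepares the state \(|\mathbf{x}^{p},y^{p}\rangle\) of Eq. (4) in Qiskit with \(X\) gates. The feature vector and label are hypothetical values, not taken from our dataset; Qiskit prints bitstrings with qubit 0 as the rightmost character.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Hypothetical binarized training point: features x^p = (1, 0, 1, 1) and class y^p = 0.
x_p, y_p = [1, 0, 1, 1], 0

n = len(x_p)
qc = QuantumCircuit(n + 1)            # n feature qubits plus one class qubit
for k, bit in enumerate(x_p):
    if bit:
        qc.x(k)                       # basis encoding: bit 1 -> |1>, bit 0 -> left in |0>
if y_p:
    qc.x(n)                           # class label on the last qubit

# A single computational-basis state carries probability 1.
print(Statevector(qc).probabilities_dict())
```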
Once we have the state corresponding to each one of the training states \(|\mathbf{x}^{p},y^{p}\rangle\), we construct a training set superposition of all datapoints, given by \[|T\rangle=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}|\mathbf{x}^{p},y^{p}\rangle\enspace. \tag{5}\] Naturally, with \(n\) qubits, one can construct a superposition of \(2^{n}\) states, representing all possible binarized feature vectors of \(n\) features. However, it is possible (and most likely) that in a given training dataset not all possible binary feature vectors will be present. Indeed, in the binarization process, it is likely that multiple observations that are different in the original input space are mapped to the same binary vector so that the transformed training dataset actually has a number of observations quite smaller than the original number of observations (although here we represent both as \(N\)). This leads to important details in the implementation of the algorithm in a practical problem, as it will be detailed in Sec. IV. Further, notice that in the case in which \(N<2^{n}\), the superposition in Eq. (5) will have to be prepared with an arbitrary state preparation routine, which is known to be costly [59]. However, in quantum computing software development kits (SDK) (such as Qiskit [60], which is the one we employ in this work), such a procedure is already implemented and ready to use as a self-contained routine. The next step is to perform the same classical binarization process with the unclassified input vector \(\mathbf{x}_{\text{in}}\) (the one we wish to classify) and map it to the state \(|x_{\text{in},1}\cdots x_{\text{in},n}\rangle\equiv|\mathbf{x}_{\text{in}}\rangle\), \(x_{\text{in},k}\in\{0,1\}\). Keep this as the first register of the quantum state. Finally, add an ancilla register \(|0\rangle\) as the last register. Such a construction yields an initial state given by \[|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}|\mathbf{x}_{\text{in}};\mathbf{x}^{ p},y^{p};0\rangle\enspace, \tag{6}\] which is made up of three registers (or, in fact, blocks of registers): the first containing the input state \(|\mathbf{x}_{\text{in}}\rangle\), which consists of \(n\) qubits; the second containing the superposition \(|T\rangle\) (which is the tensor product of the feature vectors \(|\mathbf{x}^{p}\rangle\) and the class vectors \(|y^{p}\rangle\)), thus consisting of \(n+1\) qubits, and given that we have a binary classification problem, the third contains an ancilla qubit initialized as \(|0\rangle\). Therefore, the number of qubits necessary for the algorithm is precisely \(2n+2\). Once the initial state is prepared, we put the ancilla into a superposition, by applying a Hadamard gate to the last register, i.e., \(H=1\otimes 1\otimes 1\otimes H\), such that \[|\psi_{1}\rangle=H\,|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}|\mathbf{x}_ {\text{in}};\mathbf{x}^{p},y^{p}\rangle\otimes\frac{1}{\sqrt{2}}(|0\rangle+|1 \rangle)\enspace. \tag{7}\] In the next step, we want the Hamming distance components \(d_{k}^{i}\) between each qubit of the first (input) and second (training) register to replace the qubits in the second register, such that \[d_{k}^{i}=\begin{cases}0,&\text{if }\,|x_{k}^{p}\rangle=|x_{\text{in},k} \rangle\\ 1,&\text{else}.\end{cases}\enspace. 
\tag{8}\] This is achieved by simply applying a \(\text{cNOT}(x_{\text{in},k},x_{k}^{p})\)-gate, which overwrites the entry \(x_{k}^{p}\) in the second register with \(0\) if \(x_{k}^{p}=x_{\text{in},k}\), otherwise with \(1\): \[\begin{cases}\text{cNOT}\,|00\rangle=|00\rangle\enspace;&\text{cNOT}\,|01 \rangle=|01\rangle\\ \text{cNOT}\,|11\rangle=|10\rangle\enspace;&\text{cNOT}\,|10\rangle=|11\rangle \end{cases}\enspace. \tag{9}\] Thus, after this step, the state is then \[\begin{split}\ket{\psi_{2}}&=\bigotimes_{k=1}^{n}c\text{NOT }(x_{k},v_{k}^{p})\;\ket{\psi_{1}}\\ &=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}\ket{\mathbf{x}_{\text{in}};\mathbf{d}^ {p},y^{p}}\otimes\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\;,\end{split} \tag{10}\] where the Hamming distance components \(\ket{d_{1}^{p}\cdots d_{n}^{p}}\equiv\ket{\mathbf{d}^{p}}\), \(d_{k}^{p}\in\{0,1\}\), \(p=1,\cdots,N\) are now in the second register. In the third step, we apply the unitary operator \[U=e^{-i\frac{\pi}{2n}\mathcal{O}}\;;\mathcal{O}=1\otimes\sum_{k=1}^{n}\left( \frac{1-\sigma_{z}}{2}\right)_{d_{k}}\otimes 1\otimes\sigma_{z}\;\;. \tag{11}\] This sums the Hamming distance components \(\{d_{k}^{p}\}\) (thus yielding the actual Hamming distance) between \(\ket{\mathbf{x}^{p}}\) and \(\ket{\mathbf{x}_{\text{in}}}\), \(d_{H}(\mathbf{x}_{\text{in}},\mathbf{x}^{p})\equiv d_{H}\), into the phase of the \(p^{\text{th}}\) state of the superposition. Notice that a relative phase is added, conditioned on the ancilla state. After this step, the state becomes \[\ket{\psi_{3}}=U\ket{\psi_{2}}=\frac{1}{\sqrt{2N}}\sum_{p=1}^{N}\left(e^{-i \frac{\pi}{2n}d_{H}}\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};0}+e^{i\frac{\pi} {2n}d_{H}}\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};1}\right)\;. \tag{12}\] Now we apply another Hadamard to the ancilla. This will generate alternating-sign exponentials associated with each ancilla state, which are easily aggregated into a sine and cosine. The resulting state can be expressed as \[\ket{\psi_{4}}=H\ket{\psi_{3}}=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}\left(\cos\left( \frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};0}+\sin \left(\frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};1} \right)\;. \tag{13}\] Notice that \(0\leq d_{H}\leq n\Rightarrow 0\leq\frac{\pi d_{H}}{2n}\leq\frac{\pi}{2}\). Therefore, * For large \(d_{H}\), \(\cos\left(\frac{\pi d_{H}}{2n}\right)\to 0\) and \(\sin\left(\frac{\pi d_{H}}{2n}\right)\to 1\), so that we have higher probability of measuring \(\ket{1}\) in the ancilla qubit; * For small \(d_{H}\), \(\cos\left(\frac{\pi d_{H}}{2n}\right)\to 1\) and \(\sin\left(\frac{\pi d_{H}}{2n}\right)\to 0\), so that we have higher probability of measuring \(\ket{0}\). That is, if the input is far away from most training observations, we have a higher probability of measuring the ancilla in the state \(\ket{1}\); and if the input is close to many observations, the ancilla is more likely to be measured in \(\ket{0}\). Thus, intuitively, since our criterion for classification is to consider the closest observations, by measuring the ancilla in \(\ket{0}\), the amplitudes of close observations will be large, whilst the opposite is true for distant observations. 
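The measurement statistics implied by Eq. (13) can also be reproduced classically, which is a useful sanity check on any circuit implementation. The NumPy sketch below computes, for a hypothetical binarized training set and input vector, the ancilla and class probabilities derived formally in Eqs. (15)-(17) below; it mirrors the output distribution of the quantum algorithm rather than its circuit.

```python
import numpy as np

# Hypothetical binarized training set (features in {0,1}^n) with binary labels,
# and one unclassified input vector.
X_train = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 1],
                    [1, 0, 0, 0],
                    [1, 1, 0, 0]])
y_train = np.array([0, 0, 1, 1])
x_in = np.array([1, 0, 0, 1])

N, n = X_train.shape
d_H = np.sum(X_train != x_in, axis=1)      # Hamming distances between x_in and each training point

# Squared amplitudes of the ancilla-|0> branch after the second Hadamard, Eq. (13).
w = np.cos(np.pi * d_H / (2 * n)) ** 2

p_anc0 = w.sum() / N                       # probability of measuring |0> on the ancilla, Eq. (15)
p_class = {c: w[y_train == c].sum() / (N * p_anc0) for c in np.unique(y_train)}  # Eq. (17)

print("P(ancilla=0) =", p_anc0)
print("class probabilities given ancilla=0:", p_class)
print("predicted class:", max(p_class, key=p_class.get))
```

In this toy example the input is closest (in Hamming distance) to the label-1 training points, and the class-1 probability is correspondingly the largest.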
The importance of this fact becomes clear if we rewrite \(\ket{\psi_{4}}\), to show that the different classes appear weighted by their member's distance to the input, such that \[\ket{\psi_{4}}=\frac{1}{\sqrt{N}}\sum_{y=0}^{d-1}\ket{y}\otimes\sum_{l\in y} \left(\cos\left(\frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{l}; 0}+\sin\left(\frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{l};1} \right)\;, \tag{14}\] where \(l\) runs over all training vectors classified with the label \(y\). Written like this, it becomes clear that, if the ancilla is measured to be in \(\ket{0}\), the amplitudes of close observations will be large, which implies that the probability of measuring the respective class qubit of these observations will also be large. And, as we discuss next, this is how the final classification is performed. As the final step, the ancilla of the state \(\ket{\psi_{4}}\) is measured on the computational basis. According to Eq. (13), it is easy to see that the probability of measuring \(|0\rangle\) is \[P(|0\rangle_{a})=|\langle 0|\psi_{4}\rangle|^{2}=\frac{1}{N}\sum_{p=1}^{N}\cos^{ 2}\left(\frac{\pi d_{H}}{2n}\right). \tag{15}\] The conditional probability to measure a certain class \(y\in\{1,...,d\}\), given that we previously measured the ancilla in \(|0\rangle\) (and, therefore, the state collapsed to \(|\tilde{\psi}_{4}\rangle=\langle 0|\psi_{4}\rangle\)\(|0\rangle\)) is, in terms of the joint probability, \[\begin{split} P(y\mid|0\rangle_{a})&=P(y)P(|0 \rangle_{a})\\ &=|\langle y|\tilde{\psi}_{4}\rangle|^{2}\\ &=\frac{1}{N}\sum_{l\in y}\cos^{2}\left(\frac{\pi d_{H}}{2n} \right)\,\end{split} \tag{16}\] which is easily verifiable using Eq. (14). Indeed, Eq. (16) implies that \[P(y)=\frac{1}{P(|0\rangle_{a})}\frac{1}{N}\sum_{l\in y}\cos^{2}\left(\frac{\pi d _{H}}{2n}\right). \tag{17}\] Thus, the class measured with the highest probability is that whose members are the closest to the input vector, provided that \(P(c)\) is only computed after the ancilla is measured in \(|0\rangle\), which is precisely why the amplitudes associated to the closest neighbors are considered. Notice that if the measurement returns \(|1\rangle\), this run of the algorithm is not taken into account. In Fig. 6 the full quantum circuit is illustrated for a particular dataset, as detailed in Appendix VI.4. The algorithm uses \(\mathcal{O}(Nn)\)[41] gates, which is completely due to the construction of the training data superposition (described by Eq. 5), which depends on the number of training samples, thus yielding a \(\mathcal{O}(Nn)\) complexity [41, 61], which close to the classical KNN algorithm complexity, in which we have to compute the distance between the test observation and all other \(N\) training points. However, if one can find a procedure to prepare the training data superposition in a manner independent of the number of samples (perhaps by reading quantum data [62, 63, 64, 65]), the QNN algorithms would run in \(\mathcal{O}(n)\), offering a potentially large advantage over the classical KNN, for which it seems unlikely to exist an algorithm which is independent of the number of training samples -- that's a quite remarkable advantage achieved by the exploitation of the superposition in a quantum algorithm. We highlight that, in contrast to the classical KNN, the QNN algorithm does not depend on the hyperparameter \(k\). In fact, a superposition of all members of each class is taken into consideration for the classification. 
This is equivalent to considering all neighbors (that is, \(k=N\)), which in the classical algorithm is associated with a high bias, since, if the dataset is imbalanced with respect to the target, the majority class will always be predicted. In the quantum algorithm, however, this is not the case: even if the dataset is imbalanced, the input observation will be assigned to the class which is the closest to it, since, as it is clear in Eq. (17), the distance of the input to the members of the class explicitly affects the probability. As a final remark, notice that the probability distribution in Eq. (17) is precisely what is recovered from multiple runs of the algorithm on an actual hardware (or a simulator thereof). The final prediction, as a class, is therefore recovered by choosing the class with the largest observed probability. However, as explained in Sec. IV and illustrated in Fig. (3), in this work we directly use the class probability, i.e., Eq. (17) itself. Fortunately, in contrast to many classical learning models, outputting class probabilities is the most natural choice for the QNN algorithm. We remark that the Python/Qiskit implementation of the algorithm described above, as well as all the data used in this paper and the corresponding results, are available in an online repository [66]. ## IV Data Pre-Processing As described in Sec. III, the classical data loaded into the quantum registers, via the basis encoding strategy, must be in a binary representation. On the other hand, as discussed in Sec. II.1, the dataset under study consists of 198 continuous features: the pairwise correlations among all spins in a lattice with 12 sites considering boundary conditions and the symmetry it implies. Thus, in order to represent each observation as a binary vector, we must first discretize the continuous features, so that the discrete levels may then be binarized. Before proceeding with these procedures, however, an important remark is due. As discussed above, the QNN algorithm uses \(2n+2\) qubits, where \(n\) is the number of binarized features. Indeed, this implies that if one wants to simulate the circuit, or even execute it on NISQ hardware, \(n\) must be chosen accordingly, to make the execution/simulation feasible. As will be detailed below, we employed an efficient discretization and binarization procedure that maps each original continuous feature to only two bits. However, given that we start with 198 features, this would imply \(n=396\), thus requiring a circuit consisting of 794 qubits, which is way beyond the limit for a classical simulation (recall that the algorithm includes entanglement) as well as current quantum hardware capabilities, both in terms of number of qubits as well as circuit depth. And this is a quite simple discretization one can think of: if one produces more bins per feature, which would be desirable, the resulting number of binary features (and qubits, consequently) would further increase, making the simulation/execution yet more intractable. Therefore, in order to fit the dataset into current capabilities, we employ a series of pre-processing steps to the original raw features, which starts with a dimensionality reduction procedure implemented via a feature selection routine, in order to pick from the 198 original features, the ones that contribute the most to the classification, with the hope that they are enough to produce a good classifier. 
The procedure we use for picking the most important features is based on the Random Forest algorithm [67] -- in particular, a modification thereof, known as Extremely Randomized Trees [68]. It consists of the calculation of a "feature importance coefficient" which is also known as "mean decrease impurity" or "gini importance" [69]. This coefficient is calculated as the total decrease in each node impurity, averaged over all trees in the forest. The mean is weighted by the probability that the respective node is reached, which is often estimated by the proportion of observations reaching the node. A detailed account of this algorithm can be found in the Appendix VI.3. Having in mind the discussion about the current capabilities of simulation and hardware, we have selected only the 4 most important features that correspond to the following two-body correlation terms. Physically, we expect that the most important features are the correlation functions \(\langle\sigma_{l}^{z}\sigma_{j}^{z}\rangle\) at the largest available distances. The reason is that this correlation detects long-range order associated with the spontaneous breaking of the \(\mathbb{Z}_{2}\) symmetry \(\sigma_{j}^{z}\mapsto-\sigma_{j}^{z}\) of the Hamiltonian. In the paramagnetic phase, \(\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle\) decays exponentially to zero for \(\left|i-j\right|\) larger than the correlation length, while it approaches a nonzero value in the ferromagnetic phase and oscillates with a four-site periodicity in the antiphase. By contrast, the correlation function \(\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle\) is nonzero for \(g\neq 0\) in all phases because the transverse magnetic field induces a finite expectation value for \(\sigma_{j}^{x}\). We then proceed to the discretization of the features, using a procedure based on the \(k\)-means clustering algorithm [70]. More specifically, we use a k-bins discretizer, implemented in scikit-learn [71], which divides continuous data into \(k\) intervals or "bins". Essentially, we first cluster observations that are similar to the feature being discretized and use the clusters centroids as centers of the bins, that is, values in each bin will have the same nearest centroid, as determined by the 1-dimensional \(k\)-means algorithm. See Fig. 1 for an illustration. For each feature, we created 3 bins. At this point, our dataset is characterized by 12 discrete values, 3 for each one of the 4 features selected by the feature importance procedure. After discretization, the features are binarized using the one-hot encoding procedure, which consists of creating \(l-1\) independent binary features for each column with \(l\) categorical levels, as illustrated in Fig. 2. In our case, since \(l=3\) for each discretized feature, we create \(l-1=2\) new binary features each, which then results in \(n=8\) independent binary features. This is the final dimensionality we work with. Notice that, with 8 binary features, we will need 18 qubits for the circuit execution, which is a reasonable number for the circuit simulation or execution -- and, most importantly, it is enough to yield good results for the problem we analyze, as it will be shown. We could have chosen more than \(n=4\) features or discretized the features in more bins, which could possibly have increased the performance quantum classifier. In Sec. VI we further elaborate on this point. 
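The pre-processing chain described above (impurity-based feature selection with extremely randomized trees, k-means binning into three levels, and one-hot encoding into two dummy variables per feature) can be sketched with standard scikit-learn components as follows. The data below are random placeholders standing in for the 1000-by-198 matrix of correlation features and the phase labels; only the shape of the pipeline is meant to match the procedure used in this work.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import KBinsDiscretizer, OneHotEncoder

# Placeholder stand-ins for the real training data (one row per kappa=0 training point).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 198))          # 198 pairwise correlation features
y = rng.integers(0, 2, size=1000)         # phase labels (0 or 1)

# 1) Feature selection via impurity-based ("gini") importances of an Extremely Randomized Trees model.
forest = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
top4 = np.argsort(forest.feature_importances_)[-4:]
X_sel = X[:, top4]

# 2) Discretization into 3 bins per feature, with bin edges placed by 1-D k-means.
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="kmeans").fit(X_sel)
X_disc = disc.transform(X_sel)

# 3) One-hot encoding, dropping the first level so each 3-level feature becomes 2 binary columns.
ohe = OneHotEncoder(drop="first", sparse_output=False).fit(X_disc)
X_bin = ohe.transform(X_disc)

print(X_bin.shape)   # (1000, 8): the 8 binary features fed to the QNN
```

As emphasized in the text, the discretizer and encoder are fitted on the training data only and merely applied to every test set.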
Notice that after the feature selection, discretization and binarization pre-processing described above, some observations which were different in the original space of continuous features may be mapped to the same binary representation. This makes perfect sense, given that the different values of a given feature may fall in the same bin, which is given by a range of values. If this happens with all 4 features of two different observations, they will naturally be represented by the same 8-dimensional feature vector. This is why using a \(k\)-means binning strategy is a good idea (instead of the competing strategy "uniform", for example, in Figure 1: **k-bins discretizer using uniform and k-means (\(k=3\)) binning strategies.****a)** k-bins discretizer with bins uniformly defined. The vertical red lines represent the bins’ limits. Notice that bin widths are uniform in the feature space, but the clustering structure is not respected. **a)** The green points represent the clusters centroids, and the vertical green lines, the bins’ limits. Notice how non-uniform bins are created, but the clustering structure is respected. which all bins have identical widths, as depicted in Fig. 1b): given that bins are clusters, this strategy groups together similar observations in the same bin, so that it makes sense if they are represented by the same binary feature vector. After the pre-processing routine, our original training dataset, which had 1000 rows and 198 continuous-valued columns, was reduced to a dataset with 8 binary features and only 10 rows. We can see that as a way of reducing the dataset only to the most representative samples and better explanatory features, which, as we will show below, was enough to yield good results with the quantum classification algorithm. It is important to remark that the aforementioned feature importance and discretization processes were such that their respective parameters were determined only using the training data. That is, the exact same procedure was merely reproduced for all test datasets, but with parameters already defined in the training data. Now, although this is the correct thing to be done to avoid data leakage, there is a potential problem, especially with the discretization process: given that the features range varies a lot from training to testing data, it is possible that the resulting bins for testing data features will be concentrated, that is, all observations will fall in the same bin of a given feature. Effectively, this implies that such test observations will be characterized by less than 8 features, which is a problem because the QNN algorithm assumes that the test (input) observation has the same number of features as all training observations. In order to fix this, we pad such input observations with zeros, to guarantee that all binarized testing observations will be 8-dimensional. In practice, different observations will be identified only by the actual features present, and the padding will have no effect in terms of the classification, given that it will be the same for all observations in a given test dataset, as we observed. Indeed, as the results show, such a procedure does not jeopardize the classifier's performance. As already remarked, remember that QNN is a lazy algorithm, so each test (input) observation is classified at a time. This means that, in principle, we would have to simulate/execute a number of circuits equal to the number of test observations, to have their prediction. 
Given that we have 10 testing sets, one for each \(k\) value. We consider \(k=0\) the training point. each one with 1000 observations, the number of circuit simulations/executions would be quite large. However, the aforementioned fact that different training observations in the original feature space may be mapped to the same binary representation is of course also true for the testing data observations (although the exact number of unique binarized testing observations vary among the different test datasets). Given that, we implement a cache: whenever we see a new testing observation (in terms of its 8 features), we pass it through the quantum circuit, simulating/executing it, and store its prediction. If this observation is repeated (which, again, can happen given the nature of the pre-processing routine), we don't run the circuit again, but instead merely replicate the prediction in the cache. This allows us to have a prediction for each one of the observations in the testing datasets, without having to simulate/execute the quantum algorithm that many times. Indeed, this is very important for resource optimization, in terms of simulation time or hardware calls for execution. ## V Machine learning the phase diagram of the ANNI model with QNN Our aim is to understand whether transfer learning is possible using QNN. More specifically, all our training data consists of \(\kappa=0\) and we use that to predict phases at regions where \(\kappa\geq 0\). This is particularly relevant since for \(\kappa=0\) the model is analytically solvable, pointing out that a transition occurs at \(g\approx 1\). We highlight that for \(\kappa=0\) we have only two phases: either the ferromagnetic (phase 'o') or the paramagnetic (phase '1'). However, when \(\kappa\geq 0\) the ANNNI Hamiltonian also leads to a third phase, the antiphase (phase '2'), not contained in the training data. In particular, for \(\kappa\geq 0.5\), we are in a region where only phases 'o' and 'z' are present. So, the best the classifier algorithm can do is to output 'o' if the phase is indeed 'o' and '1' otherwise. Actually, as discussed above, for an observation point, both the classical and quantum classifier algorithms will return a normalized probability vector \((p_{0},p_{1})\) where the assigned phase will correspond to the component with the largest value. As typical Figure 2: **One-hot encoding procedure.** From a column with \(l=3\) categorical (discrete) levels, we construct \(l-1=2\) independent binary columns, which are the binary features. Notice that only \(l-1\) binary columns are necessary because it is possible to represent one of the levels as “oo” (in this example, “L3”). with such algorithms, to determine when we are facing a transition, we plot both the probability components and check when they cross, as shown in Fig. 3. As can be seen in Fig. 4, using this approach, the QNN algorithm recovers the left part (\(\kappa<0.5\)) of the phase diagram, corresponding to the ferromagnetic/paramagnetic transition, very successfully. The precise prediction also holds as we approach the critical point at \(\kappa=0.5\) and \(\gamma=0\) at which a new antiphase appears. However, as we start to increase \(\gamma\) the approximated solutions in (2) and (3) and the QNN predictions start to differ more significantly, even though they remain qualitatively similar. To benchmark our results we have compared the QNN solution with that obtained by the classical KNN algorithm. As can be seen in Fig. 
As can also be seen in Fig. 4, the solution is significantly worse when the classical algorithm is fed with the same pre-processed data as the one given to the quantum algorithm. However, if the classical algorithm uses the complete data (not pre-processed), it reaches a similar success in the prediction, even though it remains slightly worse, as quantified ahead. Importantly, the quantum classifier performs significantly better at the critical point (\(g=0,\kappa=1/2\)).

## VI Discussion

The detection of phases and phase transitions of the ANNNI model with both classical and quantum heuristic approaches has already been carried out. In Ref. [36], Canabarro et al. showed the possibility of applying both unsupervised and supervised classical techniques to achieve good results. In fact, the problem was satisfactorily solved with unsupervised learning, reducing the use of supervised learning to a validation step. They also tried transfer learning with diverse supervised learning algorithms trained solely on nearest-neighbor interactions, exhibiting the capacity to discern a novel phase emerging upon the introduction of next-nearest-neighbor interactions. They showed that some of the learners could unveil the Ising as well as the antiphase and the paramagnetic phases. This amalgamation effectively groups the BKT and CIC transitions together, showcasing the robustness of our method.

\begin{table} \begin{tabular}{l c} \hline Technique & average \(\ell_{2}\)-norm \\ \hline QNN (pre-processed) & **0.0036(6)** \\ KNN (pre-processed) & 0.0164(1) \\ KNN (raw data) & 0.0043(1) \\ \end{tabular} \end{table} Table 1: Performance (average \(\ell_{2}\)-norm with relation to the analytical approximations given by Eqs. 2 and 3) computed for the three main phases, comparing QNN with KNN (using both the pre-processed and the complete training data). For the classical KNN we used \(k=7\) and the Euclidean distance. The best result is in boldface.

Figure 3: Detecting the critical transverse magnetic field coupling parameter \(g\) at which a phase transition occurs. The machine was trained at \(\kappa=0\) and asked to predict where the transition happens at \(\kappa=0.1\), by considering where the machine is most uncertain, that is, where the probabilities are closest to \(p_{1}=p_{2}=1/2\). Here the ferromagnetic (paramagnetic) phase is labeled as 0 (1).

Figure 4: Phase diagrams produced with diverse (Q)ML algorithms when trained only with \(\kappa=0\): KNN trained with raw data (blue circles); KNN trained with the same pre-processed data as the QNN - fair comparison (black squares); QNN (red triangles); and two different analytical solutions: Ising (solid blue line) and BKT (dashed orange line). All methods recover the ferro/paramagnetic and paramagnetic/BKT transitions qualitatively well, although, as quantitatively expressed in Table 1, the QNN solution yields the smallest error with relation to the analytical approximations, thus being an overall better solution (see main text for more details).

On the other hand, in Ref. [40], Monaco et al. used quantum convolutional neural networks by training only on marginal points of the phase diagram, represented by integrable models. Our approach in this paper is both innovative and complementary.
We use a new and simpler quantum machine learning algorithm and apply transfer learning: we devise a pre-processing of the data suitable for a real quantum computer, and we train only on \(\kappa=0\), testing on the remaining phase diagram to show the viability of the technique, as we discuss here. We show that, with the right pre-processing of the data, the quantum nearest neighbor (QNN) algorithm proposed in [41] allows for the classification of phases in a non-trivial Hamiltonian model. Using two-point correlation functions obtained by exact diagonalization on a small lattice of \(N=12\) spins, we could reproduce the main phases of the axial next-nearest neighbor Ising (ANNNI) model. More precisely, using as training data only the ferromagnetic and paramagnetic phases, we could detect the transition to the antiphase that appears as the interaction strength of the next-nearest neighbor is increased. This is a relevant instance of transfer learning, since, using analytical data extracted from the exactly solvable transverse field Ising model, we could explore a non-integrable region of the Hamiltonian model. This makes the approach computationally cheaper, as access to training labels is one of the major bottlenecks for any supervised method.

To benchmark the quality of our quantum machine learning model, we compared it with approximated expressions obtained by various methods. The solution provided by QNN works very well in the ferromagnetic and paramagnetic regions, offering a less precise but still qualitatively reasonable solution as we enter the antiphase. Arguably, however, to assess the quality of a quantum learning method, it is reasonable to compare its performance with that of classical learning algorithms. We performed this comparison, and the results are quite favorable to the quantum approach. Even when we feed the original data (without any pre-processing, a step that is only necessary to reduce the number of qubits in the quantum realization) to classical classifiers, the quantum classifier remains superior, as can be seen in Fig. 4 and Table 1. And in the fairest comparison, obtained when the quantum and classical algorithms see the same pre-processed data, the accuracy of the quantum classifier is significantly higher. Importantly, these performance comparisons were done on the testing data, that is, we were really evaluating the generalization capability of the different models, which is, after all, what matters most when one builds a data-driven model.

This proof-of-principle quantum advantage (proof-of-principle because it was obtained with a simulated/perfect quantum circuit) does not come in terms of algorithmic complexity, but rather in terms of generalization and accuracy, which is of major interest in the context of machine learning. Still, one may interpret the advantage from a different point of view, namely that of sample complexity [72, 73, 74]: the quantum algorithm could find the underlying pattern reflected in the dataset with much less information than its classical counterpart, and with better generalization performance. As mentioned before, we can see this as a way of reducing the dataset to only the most representative samples and the most explanatory features. Although we focus on a particular Hamiltonian, we believe it leads to relevant general questions: in a statistical learning theory sense, how and why are such a sample-complexity reduction and the consequent quantum advantage achieved?
Similar questions have been addressed in recent research [75, 76, 77], and further research along this direction, in the particular context of QNN might lead to new insights. Another clear path is to understand how well the QNN classifier works in real NISQ devices, also considering different Hamiltonian models and increasing the number of qubits and features. In this regard, it would be interesting to consider other data formats, such as the classical shadows [78], efficiently encoding classical data about quantum systems for machine learning purposes [79]. In conclusion, to the best of our knowledge, this paper is the first work in which the QNN algorithm [41] was applied to a concrete classification problem in the context of condensed matter physics. By applying this method, we could achieve a quantum model whose generalization performance was superior to its classical counterparts, whilst using much less information, which represents a quantum advantage in both contexts of generalization and sample complexity. This is the main result of this paper, which opens the way to several discussions concerning the statistical learning theoretical properties of the QNN model. ## Acknowledgements This work was supported by the Serrapilheira Institute (Grant No. Serra-1708-15763), the Simons Foundation (Grant Number 1023171, RC), the Brazilian National Council for Scientific and Technological Development (CNPq) via the National Institute for Science and Technology on Quantum Information (INCT-IQ) and Grants No. 307172/2017-1, the Brazilian agencies MCTIC, CAPES and MEC. AC acknowledges a paid license by the Federal University of Alagoas for a sabbatical at the University of Sao Paulo, and partial financial support by CNPq (Grant No. 311375/2020 - 0), Alagoas State Research Agency (FAPEAL) (Grant No. APQ2022021000153) and Sao Paulo Research Foundation (FAPESP) (Grant No. 2023/03562 - 1).
The classification and detection of phase transitions is a central and challenging task across many different fields. In physics, it involves the identification of order parameters and the analysis of singularities in the free energy and its derivatives. Here we propose an alternative framework for identifying quantum phase transitions. Using the axial next-nearest neighbor Ising (ANNNI) model as a benchmark, we examine how machine learning can detect three phases (ferromagnetic, paramagnetic, and a cluster of the antiphase with the floating phase). Employing supervised learning, we demonstrate the feasibility of transfer learning. Specifically, a machine trained only with nearest-neighbor interactions acquired the ability to identify a new type of phase when next-nearest-neighbor interactions were introduced. We also compare common classical machine learning approaches with the quantum nearest neighbor (QNN) algorithm
2309.13990
Supervised, semi-supervised, and unsupervised learning of the Domany-Kinzel model
The Domany Kinzel (DK) model encompasses several types of non-equilibrium phase transitions, depending on the selected parameters. We apply supervised, semi-supervised, and unsupervised learning methods to studying the phase transitions and critical behaviors of the (1 + 1)-dimensional DK model. The supervised and the semi-supervised learning methods permit the estimations of the critical points, the spatial and temporal correlation exponents, concerning labelled and unlabelled DK configurations, respectively. Furthermore, we also predict the critical points by employing principal component analysis (PCA) and autoencoder. The PCA and autoencoder can produce results in good agreement with simulated particle number density.
Kui Tuo, Wei Li, Shengfeng Deng, Yueying Zhu
2023-09-25T09:41:15
http://arxiv.org/abs/2309.13990v2
# Supervised, semi-supervised, and unsupervised learning of the Domany-Kinzel model ###### Abstract The Domany-Kinzel (DK) model encompasses several types of non-equilibrium phase transitions, depending on the selected parameters. We apply supervised, semi-supervised, and unsupervised learning methods to studying the phase transitions and critical behaviors of the \((1+1)\)-dimensional DK model. The supervised and the semi-supervised learning methods permit the estimation of the critical points and of the spatial and temporal correlation exponents, from labelled and unlabelled DK configurations, respectively. Furthermore, Principal Component Analysis (PCA) and an autoencoder can classify the DK phases. ## I Introduction Machine learning (ML) methods have attracted much attention in recent years and have been widely applied to many fields, such as natural language processing [1], face and image recognition [2; 3], ecology [4], economics and finance [5], data mining and analysis [6], and electronic games [7]. Recently, reinforcement learning methods have even solved the previously unfathomable game of Go (AlphaGo) [8]. Machine learning methods have also shown great advances in boosting multidisciplinary problem solving, especially when the related tasks are data-driven optimization problems. In the physics realm, major progress has also been made with machine learning methods. For example, quantum computing has been combined with machine learning to develop the field of quantum machine learning: In Ref. [9] the quantum annealer has been used to sample the Boltzmann distribution, and Ref. [10] and Ref. [11] have studied the quantum Boltzmann machine and constructed quantum neural networks, respectively. In high energy physics, a machine learning classifier has been constructed to search for new particles of unknown masses [12], using parameterized networks to simplify the training process and enhance the learning performance. The quantum chromodynamics (QCD) phase transition has also been studied by using deep convolutional neural networks [13]. More recently, graph neural networks (GNNs) combined with a HaarPooling operation have been applied to extracting the features for quark-gluon tagging [14], which can enhance the accuracy of quark-gluon tagging as compared to the weakly supervised learning method proposed earlier [15]. In astrophysics, the machine learning package astroML has been developed [16], and machine learning methods have also been utilized to boost cosmological and astrophysical process simulations [17]. Important breakthroughs have also been made in learning different phases of matter. The seminal work by Carrasquilla and Melko in 2016 has demonstrated that the ferromagnetic and paramagnetic phases of the classical Ising model can be classified based on supervised machine learning methods [18], permitting the identification of the critical points and the spatial correlation exponents. This work has since triggered great interest in the application of machine learning methods to the studies of various types of phase transitions.
Regardless of the complexity of the target problem, the versatility of machine learning methods allows for learning more complex phases of 3-dimensional Ising model [19], or phases with non-local and topological (Kosterlitz-thouless) properties in percolation, XY and generalized XY models [20; 21; 22], or even phases of non-equilibrium matters [23], such as many-body localized and topological phases, and the non-equilibrium phase transitions in the directed percolation (DP) [24]. As in Ref. [18], it has been repeatedly demonstrated that one can estimate the critical points and the spatial correlation exponents, which further enhances the possibility for obtaining the entire phase diagram that is consistent with theory [23]. In this work, we study the phase transitions of the (1+1)-dimensional Domany-Kinzel (DK) model by machine learning techniques. As will be shown in the next section, the DK model is controlled by two parameters. Along the transition line, the model characterizes several types of phase transitions. Hence the DK model provides an excellent test bed for comparing the capabilities of different learning methods. To that end, supervised, semi-supervised, and unsupervised learning methods will be applied to each type of phase transition. For supervised learning, the respective critical points and critical exponents are estimated. We also propose a new semi-supervised learning method, in which only half probability data of training set with respect to test set are labelled. The trained neural network can then predict the order parameter of the unlabelled DK model and the corresponding critical points. In addition, two unsupervised learning methods, Principal Component Analysis (PCA) and autoencoder can classify the DK phases. The remainder of this paper is organized as follows: In Sec. II, we briefly introduce the DK model. Sec. III presents the supervised learning of (1+1)-dimensional DK model, in which the critical points and the correlation length and correlation time exponents are estimated. Sec. IV gives the semi-supervised learning results of (1+1)-dimensional DK model. Sec. V is about the unsupervised learning results of (1+1)-dimensional DK model, via autoencoder and PCA. Sec. VI summarizes the main findings of this work. ## II The Domany-Kinzel model The Domany-Kinzel (DK) [25; 26] model is a stochastic cellular automaton that exhibits non-equilibrium active-to-absorbing type phase transition, controlled by two parameters. In (1+1) dimensions, the model is defined on a one-dimensional array, on which site \(s_{i}\) can be either occupied (\(s_{i}=1\)) or empty (\(s_{i}=0\)). As illustrated in Fig. 1, the state of each site is then updated with time with respect to its nearest neighbors according to the following rule: \[s_{i}(t+1)=\begin{cases}1&\text{if }s_{i-1}(t)\neq s_{i+1}(t)\qquad\text{and }r_{i}(t)<p_{1}\\ 1&\text{if }s_{i-1}(t)=s_{i+1}(t)=1\text{ and }r_{i}(t)<p_{2}\\ 0&\text{otherwise}\,,\end{cases} \tag{1}\] where \(0\leq r_{i}(t)\leq 1\) is a random number generated from a uniform distribution, and \(0\leq p_{1}\leq 1\) and \(0\leq p_{2}\leq 1\) are two probabilities used to control the phases of the model. 
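For concreteness, the update rule in Eq. (1) can be simulated directly. The sketch below is a minimal NumPy implementation that follows Eq. (1) literally, assuming periodic boundaries and a randomly half-occupied initial row; the choices of \(L\), \(t\), \(p_{1}\) and \(p_{2}\) are only illustrative.

```python
import numpy as np

def dk_evolve(L, t_max, p1, p2, rng, init_density=0.5):
    """Evolve one (1+1)-dimensional DK configuration following Eq. (1).

    Returns an array of shape (t_max + 1, L): one row of sites per time step."""
    s = (rng.random(L) < init_density).astype(np.int8)   # randomly occupied initial state
    history = [s.copy()]
    for _ in range(t_max):
        left, right = np.roll(s, 1), np.roll(s, -1)       # neighbours s_{i-1}, s_{i+1}; periodic boundaries assumed
        r = rng.random(L)
        one_neighbour = (left != right) & (r < p1)         # exactly one occupied neighbour, accepted with prob. p1
        two_neighbours = (left == 1) & (right == 1) & (r < p2)   # both neighbours occupied, accepted with prob. p2
        s = (one_neighbour | two_neighbours).astype(np.int8)     # otherwise the site becomes empty
        history.append(s.copy())
    return np.stack(history)

rng = np.random.default_rng(0)
config = dk_evolve(L=16, t_max=120, p1=0.7, p2=0.7, rng=rng)   # p1 = p2 corresponds to site DP (see below)
print(config.shape)   # (121, 16): the L x (t+1) "image" fed to the learning machines
```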
From the above rule, one can easily imagine that unless all the sites are initially occupied, given the probability \(p_{2}\), if \(p_{1}\) is too small, the proportion of the occupied sites will decrease until only empty sites remain, whereas, for large enough \(p_{1}\) value, the array will become increasingly occupied until a saturated density is reached. Once the system evolves into a fully empty state, there is no way for it to get out of that state. Hence the model displays an active-to-absorbing phase transition. As depicted in Fig. 2 for the phase diagram, there is a transition line \((p_{1c},p_{2c})\) separating the active phase and the absorbing one. Depending on the location of the parameters, the DK model includes bond and site DP as special cases. Bond DP corresponds to the line \(p_{2}\)= \(p_{1}(2-p_{1})\), while site DP is obtained for \(p_{1}=p_{2}\). For \(p_{2}\)=0, it is equivalent to Wolfram rule 18, and for \(p_{2}\)=1, since an empty site is _guaranteed_ to be filled as long as both its neighbors are occupied, the generated clusters become compact, giving rise to a different universal scaling behavior called the compact directed percolation (CDP). The CDP is different from the bond DP, the site DP, and the Wolfram rule 18, as the latter ones all belong to the DP universality [27; 28]. This is exemplified by the critical clusters of the DK model generated from a single active seed shown in Fig. 3, in which the compact pattern of the CDP cluster is quite distinctive from those of the rest ones. The order parameter of the DK is defined as the density of active sites \[\rho(t)=\big{\langle}\frac{1}{N}\sum_{i}s_{i}(t)\big{\rangle}\,. \tag{2}\] We first consider the case of an infinite system. In the active phase, \(\rho(t)\) decays initially and eventually saturates at some stationary value \(\rho_{stat}\) that varies according to a power law characterized by the exponent \(\beta\) near the critical point: \[\rho_{stat}\sim(p-p_{c})^{\beta}\,. \tag{3}\] Figure 1: The (1+1)-dimensional Domany-Kinzel model [26]. Occupied sites are marked by black circles. The state \(s_{i,t+1}\) of a given site \(i\) at time \(t+1\) depends on the states of its left and right neighbours (\(s_{i-1,t}\), \(s_{i+1,t}\)) at time step \(t\). Figure 3: Critical clusters of DK model generated from a single active seed. Figure 2: Phase diagram of the (1+1)-dimensional Domany-Kinzel model [26]. Bond directed percolation corresponds to the line \(p_{2}\)=\(p_{1}(2-p_{1})\). Site directed percolation is obtained for \(p_{1}=p_{2}\). For \(p_{2}\)=0, it is equivalent to Wolfram rule 18. For \(p_{2}\)=1, it is a different universal scaling behaviour called compact directed percolation. ## III Supervised learning of the Domany-Kinzel model The inputs for the learning machines are just raw configurations generated from Monte Carlo(MC) simulations of the (1+1)-dimensional DK model. According to the phase diagram depicted in Fig. 2, each type of phase transition of the DK model can be controlled by varying the probability \(p_{1}\). We henceforth denote it as \(p\) for simplicity. The generated configurations are split into the training set and the test one. In the training set, each configuration is labelled according to the probability \(p\) that generated that configuration. The labelling is a prerequisite for the supervised learning. 
If the probability \(p\) of a configuration \(\mathbf{x}_{i}\) is smaller than the critical probability \(p_{c}\), it is in the absorbing phase and labelled as \(\mathbf{y}_{i}=(0,1)\); otherwise, if \(p>p_{c}\), it is in the active phase and labelled as \(\mathbf{y}_{i}=(1,0)\). For supervised learning, we apply the convolutional neural network (CNN) as illustrated in Fig. 4, on which, sigmoid activation function is used for the convolution and the pooling layers, and softmax activation function is taken in the output layer, producing a binary classification output. For a certain test configuration \(\mathbf{x}_{i}(p)\) fed into the neural network, one output layer signifies the probability that the configuration belongs to the active phase \(P_{1}(\mathbf{x}_{i}(p))\), and the other output layer signifies the probability that the configuration belongs to the absorbing phase \(P_{0}(\mathbf{x}_{i}(p))\). Since learning machines try to learn features of configuration images of relatively small system sizes and of different phases, it is customary to make use of configurations obtained from fully occupied or randomly occupied (e.g. with a probability of 0.5) initial states instead of starting from a single active seed as shown in Fig. 3, as the latter leaves a large proportion of the sites empty at initial stages. Here, randomly occupied initial states are more preferable because for the Wolfram rule 18 (\(p_{2}=0\)) and the CDP (\(p_{1}=1\)), fully occupied initial states only result in trivial absorbing and active states, as illustrated in Fig. 5 (a) and (c), respectively, regardless of how the other parameter (\(p_{1}\) for the Wolfram rule 18 and \(p_{2}\) for the CDP) is chosen. Simulation times \(t\) are typically selected with respect to the characteristic time \(t_{f}\). On a finite lattice of non-equilibrium phase transition, there is always a non-vanishing probability of reaching the absorbing configuration, finite-size effects set in after a typical time \(t_{f}\) that grows with the system size as \(t_{f}\sim L^{z}\). For the DP universality class, \(z=1.58\)[29], while for the CDP, \(z=2\)[25]. Figure 4: Schematic structure of CNN. Figure 5: (a) Cluster of Wolfram rule 18 generated from fully occupied active seeds. (b) Critical cluster of Wolfram rule 18 generated from randomly occupied active seeds. (c) Cluster of CDP generated from fully occupied active seeds. (d) Critical cluster of CDP generated from randomly occupied active seeds. Gray colour represents occupied sites, and blue ones represent empty sites. Figure 6: Averaged learning results of CNN output layers for (a) the Wolfram rule 18 (\(L=12\), \(t=60\)); (b) the CDP(\(L=8\), \(t=64\)); (c) the bond DP (\(L=8\), \(12\), \(16\), \(20\), \(24\), \(28\), \(32\); \(t=50\), \(60\), \(80\), \(115\), \(152\), \(194\), \(240\)); (d) the site DP (\(L=12\), \(16\), \(20\), \(24\), \(28\), \(32\), \(36\), \(40\); \(t=60\), \(80\), \(115\), \(152\), \(194\), \(240\), \(288\), \(340\));(e) the Wolfram rule 18 (\(L=16\), \(20\), \(24\), \(28\), \(32\), \(36\), \(40\); \(t=80\), \(115\), \(152\), \(194\), \(240\), \(288\), \(340\)); and (f) the CDP (\(L=12\), \(16\), \(20\), \(24\), \(28\), \(32\); \(t=145\), \(256\), \(400\), \(576\), \(784\), \(1024\)). The configuration images are of \(L\times(t+1)\) dimension. From initial states with randomly occupied sites, for each probability \(p\), 1700 labelled configurations are generated for the training set and another 500 configurations for the test set. 
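A minimal PyTorch sketch of a classifier in the spirit of Fig. 4 is given below, with sigmoid activations in the convolution/pooling block and a softmax over the two outputs \((P_{0},P_{1})\), as described above. The number of filters, the kernel size and the hidden width are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DKClassifier(nn.Module):
    """Minimal CNN: one convolution + pooling block with sigmoid activations,
    followed by a softmax output over (absorbing, active)."""

    def __init__(self, L=16, T=121, n_filters=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=3, padding=1),
            nn.Sigmoid(),
            nn.AvgPool2d(2),
            nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_filters * (T // 2) * (L // 2), 64),
            nn.Sigmoid(),
            nn.Linear(64, 2),           # two outputs: P0 (absorbing), P1 (active)
            nn.Softmax(dim=1),
        )

    def forward(self, x):               # x: (batch, 1, t+1, L) configuration "images"
        return self.classifier(self.features(x))

model = DKClassifier(L=16, T=121)
dummy = torch.rand(4, 1, 121, 16)       # e.g. four L = 16, t = 120 configurations
print(model(dummy).shape)               # (4, 2); each row sums to 1
```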
The CNN output layers are eventually averaged over the test set, giving \(P_{0|1}(p)=\frac{1}{500}\sum_{i=1}^{500}P_{0|1}(\mathbf{x}_{i}(p))\). To examine the proper system size, we started with \(L=12\) and \(L=8\) for the Wolfram rule 18 and the CDP, which give rise to the relatively low accuracy values of 89.17% and 89.8%, respectively (c.f. Fig 6(a) and (b)). Therefore, larger sizes will be taken in what follows. Fig. 6(c) and (d) show the averaged results for the output layers from the above-trained CNN with respect to bond and site DP, respectively. The critical points can be estimated from the crossing point of the two output layers, which is typically around \(P_{0}(p_{c})=P_{1}(p_{c})\approx 0.5\). With \(L=32\), we hence estimate \(p_{c}=0.643\pm 0.02\) for the bond DP. With \(L=40\), we estimate \(p_{c}=0.699\pm 0.02\) for the site DP. These estimations yield accuracy of 99.54% for the bond DP critical point, and accuracy of 99.37% for the site DP; see the accuracy data in Table 1. Therefore, even though the employed system sizes are relatively small, supervised learning by CNN still allows us to classify the two phases and estimate the associated critical points quite well. For Fig. 6(e) and (f), with \(L=40\), we estimate \(p_{c}=0.796\pm 0.02\) for the Wolfram rule 18. With \(L=32\), we estimate \(p_{c}=0.498\pm 0.02\) for the CDP. These estimations yield accuracy of 97.05% for the Wolfram rule 18 critical point, and accuracy of 98.28% for the CDP. As one can notice from Table. 1, the accuracy values for the bond DP, the site DP, and the Wolfram rule 18 display a decreasing trend amongst them for each studied system size. As shown in Fig. 7, the particle number densities of the three models at the same \(p\) are actually different and display the same trend. Since these three variants all belong to the DP universality class, here we observe that the non-universal "lacunarity" property of clusters, associated with the particle number density, still affects the learning accuracy, although one may just want to probe the same universality properties. The features of non-equilibrium phase transitions such as absorbing phase transitions are encoded in the correlations within the spatial configurations and their dynamical evolution. Approaching the critical point, the spatial correlation length \(\xi_{\perp}\) and the temporal correlation length \(\xi_{\parallel}\) diverge as \[\xi_{\perp}\sim|p-p_{c}|^{-\nu_{\perp}}\quad\text{and}\quad\xi_{\parallel} \sim|p-p_{c}|^{-\nu_{\parallel}}, \tag{4}\] where \(\nu_{\perp}\) and \(\nu_{\parallel}\) are spatial and temporal correlation exponents, respectively, and \(\xi_{\parallel}\sim\xi_{\perp}^{z}\), with \(z=\nu_{\parallel}/\nu_{\perp}\) being the dynamical exponent. For finite systems simulated within finite times, by noting that \(\xi_{\perp}\sim|p-p_{c}|^{-\nu_{\perp}}\sim L\), \(\xi_{\parallel}\sim|p-p_{c}|^{-\nu_{\parallel}}\sim t\), one sees that \(x=(p-p_{c})L^{1/\nu_{\perp}}\) and \(y=(p-p_{c})L^{1/\nu_{\parallel}}\) are dimensionless quantities, so the functions \(\hat{P}_{0|1}(x)\) and \(\hat{P}_{0|1}(y)\) are scaling functions of \(x\) and \(y\), respectively. Hence, it is possible to estimate \(\nu_{\perp}\) and \(\nu_{\parallel}\) by performing data collapse techniques. 
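One concrete way to carry out such a data collapse is to scan \(\nu_{\perp}\) and pick the value that minimises the spread between the rescaled output curves of different sizes; the simple spread criterion below is our own illustrative choice (the paper does not specify how the collapse was optimised), and the same code estimates \(\nu_{\parallel}\) if \(L\) is replaced by \(t\).

```python
import numpy as np

def collapse_cost(nu_perp, p, pc, curves, sizes):
    """Spread between rescaled output curves of different system sizes.

    curves[i] holds the averaged CNN output P1 over the probabilities p
    for system size sizes[i]; a good exponent makes the spread small."""
    x_all = [(p - pc) * L ** (1.0 / nu_perp) for L in sizes]
    lo = max(x.min() for x in x_all)                      # common window of the scaling variable
    hi = min(x.max() for x in x_all)
    grid = np.linspace(lo, hi, 200)
    resampled = [np.interp(grid, x, y) for x, y in zip(x_all, curves)]
    return np.mean(np.var(resampled, axis=0))             # variance across sizes, averaged over the grid

def estimate_nu_perp(p, pc, curves, sizes, nu_grid=np.linspace(0.8, 1.4, 121)):
    costs = [collapse_cost(nu, p, pc, curves, sizes) for nu in nu_grid]
    return nu_grid[int(np.argmin(costs))]                 # exponent giving the best data collapse
```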
Before we proceed, let us note that the bond DP, the site DP, and the Wolfram rule 18 of the (1+1)-dimensional DK all belong to the (1+1)-dimenional DP universality class, which is characterized by the correlation exponent \(\nu_{\perp}\approx 1.0968(4)\) and the temporal corre Figure 7: With \(L=32\), \(t=240\), comparing the particle number density of the bond DP, site DP and Wolfram rule 18. lation exponent \(\nu_{\parallel}\approx 1.7338(6)\)[29]. The CDP represents a different universality class (the CDP universality class) of absorbing phase transitions, where the percolation clusters are compact objects, which is characterized by \(\nu_{\perp}=1\) and \(\nu_{\parallel}=2\) in \(1+1\) dimensions[25; 30]. Now, rescaling the probability \(p-p_{c}\) by choosing proper \(\nu_{\perp}\) and \(\nu_{\parallel}\) in Fig. 6 should render the output layer curves for different sizes collapse to the scaling functions \(\hat{P}_{0|1}(x)\) and \(\hat{P}_{0|1}(y)\). As seen in Fig. 8, the curves coincide for different sizes with a suitable choice of \(\nu_{\perp}\). The estimated DK critical exponents are \(\nu_{\perp}=1.09\pm 0.02\) for the bond DP, \(\nu_{\perp}=1.08\pm 0.03\) for the site DP, and \(\nu_{\perp}=1.07\pm 0.03\) for the Wolfram rule 18, which are consistent with the theoretical value \(\nu_{\perp}=1.0968\). For the CDP, We estimate \(\nu_{\perp}=0.99\pm 0.02\), which is again consistent with the theoretical value \(\nu_{\perp}=1\). Similarly, Fig. 9 shows the data-collapse results for temporal correlations with respect to different simulation times. The estimated DK critical exponents are \(\nu_{\parallel}=1.726\pm 0.02\) for the bond DP, \(\nu_{\parallel}=1.723\pm 0.03\) for the site DP, and \(\nu_{\parallel}=1.718\pm 0.03\) for the Wolfram rule 18, which are consistent with the theoretical value \(\nu_{\parallel}=1.7338(6)\). We also estimate \(\nu_{\parallel}=1.965\pm 0.03\) for the CDP, agreeing with the theoretical value \(\nu_{\parallel}=2\). According to \(z=\nu_{\parallel}/\nu_{\perp}\), the estimated DK dynamical exponents are \(z=1.583\pm 0.02\) for the bond DP, \(z=1.595\pm 0.03\) for the site DP, \(z=1.606\pm 0.03\) for the Wolfram rule 18, \(z=1.985\pm 0.03\) for the CDP. Machine learning has been well applied to studying equilibrium phase transition models, but applying it to studying nonequilibrium phase transitions is a new research field, having attracted much attention in recent years. Previously, it has been demonstrated that spatial correlation exponents of nonequilibrium phase transitions can be extracted in a similar manner as in the nonequilibrium case [24]. Here, we explore further and find that the CNN output layer also contains temporal correlation information, which permits the extraction of temporal correlation exponents. ## IV Semi-supervised learning of the boundary-kinzel model Semi-supervised learning is a kind of learning paradigm which combines supervised learning with unsupervised learning. The goal of semi-supervised learning is to obtain a predictive model by utilizing both labelled and unlabelled data in the training set, in which the unlabelled data in the training set become pseudo-labelled through the partially trained model, which is further updated by combining the original labelled data and these psedo-labelled data [31; 32]. 
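Schematically, the pseudo-labelling procedure just described can be written as the following loop, where `model` stands for any estimator with scikit-learn-style `fit`/`predict` methods (here, the one-output CNN predicting the particle number density described below); the number of refitting rounds is an illustrative assumption.

```python
import numpy as np

def self_training(model, X_lab, y_lab, X_unlab, n_rounds=3):
    """Generic pseudo-labelling loop: fit on the labelled data, label the
    unlabelled configurations with the partially trained model, then refit
    on the combined set."""
    model.fit(X_lab, y_lab)
    for _ in range(n_rounds):
        pseudo = model.predict(X_unlab)              # pseudo-labels from the current model
        X_all = np.concatenate([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        model.fit(X_all, y_all)                      # update using original + pseudo-labelled data
    return model
```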
In this way, one may substantially reduce the cost for data labelling, or, for more practical applications, one only needs to label data with the most certainty and leaves those less certain ones unlabelled in the training set. For the DK model, simulations are run on arrays of size \(L=16\), up to \(t=120\) steps. For each probability, 2000 labelled configurations are generated for the training set and another 1000 configurations for the test set. In semi-supervised learning, basically the same convo \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline size \(L\) & 8 & 12 & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ \hline bond DP & 92.97\% & 96.05\% & 97.37\% & 97.71\% & 99.11\% & 99.37\% & 99.54\% & \(-\) & \(-\) \\ \hline site DP & 90.64\% & 95.02\% & 96.07\% & 97.46\% & 98.12\% & 98.83\% & 99.07\% & 99.10\% & 99.37\% \\ \hline Wolfram rule 18 & 86.72\% & 89.17\% & 94.06\% & 95.31\% & 96.22\% & 96.2\% & 96.78\% & 96.91\% & 97.05\% \\ \hline CDP & 89.80\% & 94.41\% & 95.90\% & 96.70\% & 97.56\% & 98.03\% & 98.39\% & \(-\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 1: The accuracy values of the trained CNNs with respect to different system sizes \(L\) for (a) the bond DP, (b) the site DP, (c) the Wolfram rule 18, and (d) the CDP. Figure 9: CNN outputs results as a function of \((p-p_{c})t^{1/\nu_{\parallel}}\) for (a) the bond DP (\(L=8\), 12, 16, 20, 24, 28, 32; \(t=50\), 60, 80, 115, 152, 194, 240), (b) the site DP (\(L=12\), 16, 20, 24, 28, 32, 36, 40; \(t=\) 60, 80, 115, 152, 194, 240, 288, 340), (c) the Wolfram rule 18 (\(L=16\), 20, 24, 28, 32, 36, 40; \(t=\) 80, 115, 152, 194, 240, 288, 340), (d) the CDP (\(L=12\), 16, 20, 24, 28, 32; \(t=145\), 256, 400, 576, 784, 1024). lutional neural network as illustrated in Fig. 4 is being used, only that the output layer contains only _one neuron_. Instead of labelling the raw configuration data according to the phase they are in, raw configurations are labelled by their particle number densities \(\rho\), inferred from the MC configurations directly. However, to test the ability of semi-supervised learning, only half of the training set corresponding to a sparser probability selection [e.g. \(p=(0.1,0.3,0.5,0.7,0.9]\)] are labelled with the particle number density \(\rho_{i}(p)\) for the \(i\)-th sample, while the testing set includes data for all \(p\) values [e.g. \(p=(0,0.1,0.2,0.3,...,0.9,1)\)]. Once the CNN is fully trained with respect to both the labelled and pseudo-labelled data in the training set, the CNN output for the test set should predict the particle number density. The [Fig. 10 (c)-(f)] show the results for the semi-supervised learning. We find that semi-supervised learning can predict the particle number density of the DK model, complying with the counterpart from the Monte Carlo simulations well. From the peak of the variance, the estimated values of critical points are \(p_{c}=0.636\pm 0.02\) for bond DP, \(p_{c}=0.695\pm 0.02\) for site DP, \(p_{c}=0.495\pm 0.02\) for compact DP, and \(p_{c}=0.792\pm 0.02\) for Wolfram rule 18. We remark that even for the DK model, the selection of the labelled portion in the training set could be quite arbitrary. This opens a possible avenue for the study of more intricate phase transitions such as topological phase transitions. Previously, it had been demonstrated that the unsupervised learning methods (PCA) are not suitable for extracting the critical points of the XY model [33]. 
It would be interesting to study if one can infer the full phase information for topological phase transitions via semi-supervised learning by utilizing only partial information of these transitions. ## V Unsupervised learning of the domain-kinzel model In certain scenario, there may be a total absence of category information for the interested data (unlabelled data) and unsupervised learning methods that realize sample classification through data analysis then become indispensable. In this section, two well-known unsupervised learning methods, i.e. autoencoder and Principal Component Analysis (PCA), will be applied to detect the phases of the DK model. ### Autoencoder results of the DK model Autoencoders are simple generative models which can produce random outputs that are similar to the inputs [34, 35, 36]. As illustrated in Fig. 11, the fully connected autoencoder architecture that we used includes an input layer, an encoder, a latent layer of hidden neurons, a decoder, and an output layer. The inputs for the autoencoders are just raw DK configurations \(x_{i}\). The model is trained until the L2 loss function \[L(\phi,\theta)=\frac{1}{N}\sum_{i=1}^{N}||x_{i}-D_{\theta}(E_{\phi}(x_{i}))|| _{2}^{2} \tag{5}\] Figure 11: Schematic structure of Autoencoder. The fully connected autoencoder architecture includes an input layer, an encoder, a latent layer of hidden neurons, a decoder, and an output layer. The autoencoder with number of neurons (1936, 968, 480, 240, 120, 60, 16, 2/1, 16, 60, 120, 240, 480, 968, 1936), relu activation functions. Figure 10: Partially labelled training sets for (a) the bond DP and (b) the CDP. Semi-supervised learning of the particle number density with partially labelled training sets for (c) the bond DP, (d) the CDP, (e) the site DP, and (f) the Wolfram rule 18. Red dot represent particle number density of the DK model, blue line represent the CNN outputs. The predicted result is obtained after averaging over all the output results for the test set. is minimized with respect to the encoder parameters \(\phi\) and the decoder parameters \(\theta\), with \(E_{\phi}\) and \(D_{\theta}\) being the encoder and the decoder functions, respectively. In this way, an effective representation, that preserves the most important information of the input data, is achieved in the latent layer through data compression, which further permits data reconstruction via data decompression with the decoder. For the DK model, simulations are run on arrays of size \(L=16\), up to \(t=120\) steps. For each probability \(p\), 2000 configurations are generated for the training set, and another 1000 configurations for the test set. Note that the autoencoder output is limited to two hidden neurons, meaning that the DK configurations are compressed into two dimensions. Once the autoencoder is trained, each input \(\mathbf{x}_{i}\) from the test set then gives rise to one point \((h_{i1},h_{i2})\) on a two-dimensional plane corresponding to the degree of freedom of the latent layer. As shown in Fig. 12, the used autoencoder roughly classifies the DK configurations into two clusters, although the absorbing phase and the active phase are not completely separated. Especially, configurations drawn from the same probability \(p\) are closely clustered together, so the fuzzy boundary of the two phases means that the transition is of continuous type. Hence, while two neurons in the latent layer are capable of clustering the DK configurations into two phases. 
It suggests that autoencoders can capture essential information of the input data so as to detect the phases, without any prior knowledge of the DK model. ### PCA results of the DK model Principal Component Analysis (PCA) is also an unsupervised learning algorithm which can be used for data dimensionality reduction [37, 38, 39]. PCA performs orthogonal transformation on the data to find the direction of high variance, and converts the variables with possible correlations into linearly uncorrelated ones. The transformed variables are called principal components and here only the first two leading components will be used for the analysis of the DK model. One can intuitively imagine the process as projecting the data points of the original high-dimensional representation onto a lower dimensional space by selecting proper directions of projection with largest variances, so that the maximum amount of information is still preserved after reduction of the dimensions. Simulations are run on arrays of size \(L=16\), up to \(t=120\) steps. For each probability \(p\), 1000 configurations are used to obtain the PCA results. The two leading components of DK configurations by PCA are illustrated in Fig. 13. Similar to the autoencoder results, this reveals that PCA can also roughly classify the DK configurations into two phases. Since the boundary of these phases is not distinct, the transition is of continuous nature, as expected. Although PCA is just based on linear transformations of the input data, we show that it is still effective in extracting features of the DK phase transitions. Since the PCA could achieve similar results as compared to autoencoder without needing to train a model firstly, it costs less and is more convenient to apply. Figure 12: The autoencoder output is linked with two hidden neurons, projecting the configurations of the bond DP, the site DP, the Wolfram rule 18, and the compact DP onto two dimensions. The colormap represent the probability \(p\). Figure 13: PCA results for the bond DP, the site DP, the Wolfram rule 18, and the compact DP, with projection of the raw configurations onto the plane of the two leading principal components. The colormap represent the probability \(p\). Summary In this paper, we applied supervised, semi-supervised and unsupervised learning methods to study the phase transitions and critical behavior of the Domany-Kinzel model. With supervised learning, the critical points were estimated from the neural network outputs. By further collapsing the outputs for different sizes, the correlation exponents \(\nu_{\perp}\) and \(\nu_{\parallel}\) were estimated, which are consistent with reference values in the literature. Previously, it has been demonstrated that, similar to the equilibrium case, the spatial correlation exponent \(\nu_{\perp}\) of nonequilibrium phase transitionos can be extracted. Here, we explore further and find that the CNN output layer also contains temporal correlation information, which permits the extraction of the temporal correlation exponent \(\nu_{\parallel}\). The achieved high accuracy values even for rather small system sizes suggest that the applied learning machine could learn the features of the phases quite well, so that the computation overheads in the MC simulation end can be substantially reduced. The unsupervised learning methods, PCA and autoencoder, are able to roughly separate the phases into two clusters if the output is two-dimensional. 
Since learning through PCA is simpler, PCA generally is more efficient as compared to autoencoder. In semi-supervised learning, even though only half of the training set were labelled, by setting the output to just one neuron, we found that the network predicts the particle density of the test set quite well, permitting us to estimate the critical points of the DK model as well. Given these features of semi-supervised and unsupervised learning methods, it is advisable to use these methods to study more intricate phase transitions if data labelling becomes costly. Finally, we remark that even though only universal property of the DK model should matter along the DP critical transition line, the non-universal "lacunarity" property of clusters affects the learning accuracy. It is observed that learning machines generally learn the phase features of the DK models with denser clusters (e.g. the bond DP) better than models with sparser clusters (e.g. the Wolfram rule 18). ###### Acknowledgements. We thank Jianmin Shen and Longfeng Zhao for valuable discussions. This work was partially supported by the Fundamental Research Funds for the Central Universities, China (Grant No. CCNU19QN029), the National Natural Science Foundation of China (Grant No. 11505071, 61702207 and 61873104), and the 111 Project 2.0, with Grant No. BP0820038.
The Domany-Kinzel (DK) model encompasses several types of non-equilibrium phase transitions, depending on the selected parameters. We apply supervised, semi-supervised, and unsupervised learning methods to studying the phase transitions and critical behaviors of the (1 + 1)-dimensional DK model. The supervised and semi-supervised learning methods permit the estimation of the critical points and of the spatial and temporal correlation exponents, for labelled and unlabelled DK configurations, respectively. Furthermore, we also predict the critical points by employing principal component analysis (PCA) and an autoencoder. The PCA and autoencoder results are in good agreement with the simulated particle number density.
2309.12456
The Astronomy Genealogy Project is ten years old: Here are ten ways you can use it
The Astronomical Genealogy Project (AstroGen) has been underway since January 2013. This project of the Historical Astronomy Division (HAD) of the American Astronomical Society (AAS) has been online since July 2020, courtesy of the AAS. The volunteers of the AstroGen team have systematically searched online directories, mostly at individual university libraries, for astronomy-related doctoral theses equivalent to the modern, research-based Ph.D. We now claim to be 'nearly complete' for 38 countries, although some have not been updated for a year or two or three. The website contains a page for each astronomer and advisor, with links to the persons, universities, institutes, and the theses themselves. More than two-thirds of the theses are online in full, although some require access to a library with a subscription. There is information about nearly 37,000 individuals who have earned astronomy-related doctorates and another 5400 who have supervised them, but may not have earned such degrees themselves. Most of the latter have not yet been evaluated, but probably a majority earned doctorates in other fields, such as physics or geology. We present some of the results of our research and discuss ten ways the reader might make use of the project.
Joseph S. Tenn
2023-09-21T19:55:50
http://arxiv.org/abs/2309.12456v1
# The Astronomy Genealogy Project (AstroGen)

###### Abstract

The Astronomical Genealogy Project (AstroGen) has been underway since January 2013. This project of the Historical Astronomy Division (HAD) of the American Astronomical Society (AAS) has been online since July 2020, courtesy of the AAS. The volunteers of the AstroGen team have systematically searched online directories, mostly at individual university libraries, for astronomy-related doctoral theses equivalent to the modern, research-based Ph.D. We now claim to be 'nearly complete' for 38 countries, although some have not been updated for a year or two or three. The website contains a page for each astronomer and advisor, with links to the persons, universities, institutes, and the theses themselves. More than two-thirds of the theses are online in full, although some require access to a library with a subscription. There is information about nearly 37,000 individuals who have earned astronomy-related doctorates and another 5400 who have supervised them, but may not have earned such degrees themselves. Most of the latter have not yet been evaluated, but probably a majority earned doctorates in other fields, such as physics or geology. We present some of the results of our research and discuss ten ways the reader might make use of the project.

Keywords: AstroGen, Astronomy Genealogy Project (AstroGen)

## 1 What Is AstroGen?

The Astronomy Genealogy Project (AstroGen) is a database of people who have earned doctorates with astronomy-related theses. Founded as a project of the Historical Astronomy Division (HAD) of the American Astronomical Society (AAS) in January 2013, it has been online at [https://astrogen.aas.org](https://astrogen.aas.org), courtesy of the AAS, since 25 July 2020. That date was chosen because it was the 159\({}^{\rm th}\) anniversary of the awarding of the first three Ph.D.s in the United States, one of them in astronomy. A preliminary account of AstroGen (Tenn, 2016) appeared in this journal seven years ago.

We try to go back to the beginnings of the modern Ph.D., for which the thesis is supposed to be an original contribution to knowledge. We believe that AstroGen is 'nearly complete' for astronomy-related doctorates from 38 countries (and other polities). All data in this paper are as of 11 July 2023.

A challenge was deciding which theses should be in our database. Our criteria are stated in some detail on the website in the FAQs section. Briefly, we include theses that deal with the scientific study of anything that is or comes from outside the Earth, and the development of tools to facilitate such study.

AstroGen is also a database of the universities that have granted astronomy-related doctorates, and of the institutes where thesis research was done. Each astronomer has a page with his or her name, other names used professionally, ORCID or ISNI number, highest degree(s), university that awarded the highest degree, and, if applicable, the institute (not connected with the university) where the research was performed. If the highest degree was an astronomy-related doctorate, then the page also contains the title of the thesis, the name of the thesis advisor(s), and any mentors, defined as unofficial thesis advisors. The page includes links to as many of these as possible. The page also includes a family tree, showing the astronomer's academic children (students) and ancestors (advisors, their advisors...). Note that we use the American term _advisor_ for the person who directs the thesis research.
Equivalent terms include supervisor, guide, _directeur_, _Betheuer_, _promotor_, and others. An example of such a page (without the family tree) is shown in Figure 1 (all figures not otherwise credited are by the author, as are all tables). The name of the university that awarded the degree is given, in English, as it was at the time of the degree. This name is linked to a page giving the name in the local language as well as the names of successor institutions up to the present, with the current name linked to the current university website. For our purposes, we define a university as any institution that awards astronomy-related doctorates, even though some are not genuine universities. See Figure 2 for an example of a university page for an institute that awards doctorates. Individuals who have not earned doctorates with astronomy-related theses are included in AstroGen if they have directed research for such theses. Nowadays, most of these have Figure 1: A typical astronomer page. Note that it has links to the person; the University that awarded their highest degree, their advisor’s page, and the student’s name and university. There are also links to forms to submit additions or corrections to AstroGen. On the website, the page is accompanied by a family tree showing the astronomer’s students (but not their students) and all of their known academic ancestors back to the first ones who did not earn astronomy-related doctorates. earned what we call 'Other Doctorates', mostly in physics, but in some cases in geology, computer science, meteorology, mathematics, or engineering. A few people with master's degrees, bachelor's degrees, or even no degree have also supervised astronomy-related doctorates. The last person who had no earned degree but supervised such a doctorate was E. Arthur Milne (Figure 3) in 1940. For such persons, we include only the personal information, the highest degree (e.g., Ph.D.), the university, and the year. We do not list the thesis title or advisor(s), and we do not go back to any earlier generations. There is much more about the project in the FAOs on the AstroGen website. ## 2 What FAOs are McGomets? ### Personalization Table 1 shows who is currently listed in the AstroGen database. Those who have not earned astronomy-related doctorates are included because they supervised research for such degrees. Those who earned two astronomy-related doctorates or two other doctorates were counted only once in the first two rows. The subtraction later is for those who had one of each. The large number of advisors whose highest degrees are unknown reflects the shortage of volunteers to seek such information online. ### Interest in AstroGen Table 2 provides information about the astronomy-related theses currently in our database. We have recorded more than 37,400 such theses, and readers may be surprised to learn that 89% are in English. This will be discussed below in Section 4.7. Another potential surprise is that more than two-thirds of the theses are fully online, although not all are available to all readers. The theses listed as being 'On ProQuest' are available in full to anyone who has privileges at a library that subscribes to this proprietary database. Fortunately, a great many university libraries subscribe. How to es from recent years, when the theses were 'born digital'.) However, most of our searching is done on the websites of university libraries. Some of these are easy to navigate and search; others are more challenging and may be incomplete. 
Even though a great many theses are now in English, it helps to have some familiarity with the language of the country to navigate the library sites. A small group of volunteers has gathered all the information currently in the database. ## 5 How Can Not Using Survey AstroGen was established for two purposes. The more serious one is to enable studies of the astronomical community and astronomical research by historians and sociologists of science. The other is to allow astronomers the fun of looking up their academic ancestors and descendants. Our role model, the Mathematics Genealogy Project, has been underway since 1996, and it currently contains about 293,000 records. Its Director has found that people also use it for a third purpose: to enable those seeking scientists to referee grant proposals or publications to avoid inviting the students or advisors of the submitters. Since AstroGen is now ten years old, it seems appropriate to provide ten ways you might make use of our project. ## 6 Sample I Retrieval Your University AstroGen's The most important page on the AstroGen website is the Search page. It allows searching by any combination of name, highest degree, university, thesis title words, years, and country. Choose a university name. If the words in the name are also in the names of other universities, be sure to put the name you want in quotation marks. A search for Sydney yields 322 graduates from five universities. Limiting the search to 'University of Sydney' in quotation marks cuts the number down to 241, including one with a Master's degree and four with non-astronomy-related doctorates. Checking the box for highest degree AD (astronomy-related doctorate), and choosing 'Advanced Search' will yield the 229 records that include all who earned astronomy-related doctorates at the University of Sydney. Click on 'CSV' near the top of the page, and you get a comma separated variable list that you can easily open and manipulate with Microsoft Excel or any other spreadsheet application. Note that you cannot specify a department. Astronomy-related theses are written in Departments of Aerospace Engineering, Atmospheric Science, Chemistry, Computer Science, Geology, Mathematics, Physics, Space Science, and others. Some of the most productive universities have three or more departments where astronomy is done. ## 7 The Future Study Observatories track publications, including dissertations (theses), which make use of data from their instruments. While the SAO/NASA Astrophysics Data System (ADS) allows curators to readily find journal articles of interest, the coverage of dissertations continues to be difficult. The aggregation of links to dissertations which the Astronomy Genealogy Project provides makes this an important resource for finding observatory-related dissertations. For example, the Chandra X-ray Center Bibliographylists 1114 Ph.D. theses that include Chandra data ([https://cxc.harvard.edu/cgi-gen/cda/bibliography](https://cxc.harvard.edu/cgi-gen/cda/bibliography)). Sherry Winkelman, then Chief Archive Specialist at the Chandra X-ray Center, compiled most of this list and showed that a substantial portion of it came from AstroGen. She first started with a list of dissertations that were known to incorporate Chandra data, then followed up and down the family trees of these astronomers to check the theses of their advisors and students (Winkelman and Tenn, 2021). 
## 8 Compact the Production of Astronomy Related Doctorates by County Currently, we claim that AstroGen is 'nearly complete' for 38 countries and other polities, although some of them have not been updated for a year or two or three. We chose to complete these first because the volunteers we have are knowledgeable about their languages and academic practices. We will need other volunteers to compile data from countries with different alphabets and practices. We can't find our way through a library website when the site uses an unfamiliar alphabet, even if the theses are in English. There is a table on our website, continuously updated, listing these countries. It is in the FAQs, under "How complete is AstroGen?" Table 3 shows its current contents. Although the online table can be sorted by any column, the default is to order the countries by the number of astronomy-related doctorates per million residents. Unsurprisingly, the countries that rank highest on this criterion are the ones that attract the most international students. The United Kingdom and the Netherlands, both of which attract a great many students from abroad, have each produced more than 70 astronomy-related Ph.D.s per million population. There are several countries that produce a lot of doctorates but are not yet complete in AstroGen. We have compiled more than 2700 degrees from universities in France, but most are not yet online. We are waiting for a knowledgeable person to help us sort out the changes in French universities, which have been on a merging binge in recent years. It is worth pointing out that universities in these 38 countries have produced 97% of all the doctorates currently in our database. Here is a question for the reader: What do you think is the median year for the award of all these astronomy-related doctorates? The answer will appear somewhere below. ## 5. Compac the Treatment of Astronomy Redest Deborates by University AstroGen goes back to the beginnings of the modern, research-based doctorate, now called a Doctor of Philosophy degree in most countries and abbreviated Ph.D. in all but a few universities. We are somewhat incomplete on the earliest such degrees, which were awarded in the eighteenth and nineteenth centuries (we need a Latin scholar), but the number to be added is undoubtedly small. Figure 4 shows the cover of the oldest thesis currently in AstroGen. We are not certain that it fits our criteria, but the Latin scholar who reviewed it for us reported that it "... is a sort of prospectus for a modern, research-based doctorate, now called a Doctor of Philosophy degree in most countries and abbreviated Ph.D. in all but a few universities. We are somewhat incomplete on the earliest such degrees, which were awarded in the eighteenth and nineteenth centuries (we need a Latin scholar), but the number to be added is undoubtedly small. Figure 4 shows the cover of the oldest thesis currently in AstroGen. We are not certain that it fits our criteria, but the Latin scholar who reviewed it for us reported that it "... is a sort of prospectus for a ## 6. Compac the Centers of Credible Of Different Universities I spent more than thirty years advising undergraduate physics majors. One topic that came Figure 4. The cover of the oldest thesis currently in AstroGen. Christian Friedrich Oechchitz (dates unknown) submitted this thesis on the rings of Saturn to Leipzig University in 1745, where Gottfried Heinsiusius (1709–1769) was his advisor. 
Like many theses that are out of copyright, it is freely available online, in this case on the site of the Bavarian State Library ([https://www.digitale-sammlungen.de/view/bsb10660167?page=1](https://www.digitale-sammlungen.de/view/bsb10660167?page=1)).

\begin{table} \begin{tabular}{|l|c|c|c|} \hline \multirow{2}{*}{Country} & Astronomy & Population & Deg’s \\ & Related & in millions & per \\ & Doctorates & (UN, 2022) & million \\ \hline United Kingdom & 5663 & 67.5 & 83.9 \\ \hline Netherlands & 1271 & 17.6 & 72.2 \\ \hline Switzerland & 472 & 8.7 & 54.3 \\ \hline United States & 17,177 & 338.3 & 50.8 \\ \hline Germany & 3947 & 83.4 & 47.3 \\ \hline Australia & 1214 & 26.2 & 46.3 \\ \hline Sweden & 474 & 10.5 & 45.1 \\ \hline Finland & 237 & 5.5 & 43.1 \\ \hline Spain & 1468 & 47.6 & 30.8 \\ \hline Canada & 1164 & 38.5 & 30.2 \\ \hline Belgium & 351 & 11.7 & 30.0 \\ \hline Estonia & 39 & 1.3 & 30.0 \\ \hline Denmark & 160 & 5.9 & 27.1 \\ \hline Ireland & 132 & 5.0 & 26.4 \\ \hline Greece & 273 & 10.4 & 26.3 \\ \hline Israel & 226 & 9.0 & 25.1 \\ \hline New Zealand & 114 & 5.2 & 21.9 \\ \hline Austria & 144 & 8.9 & 16.2 \\ \hline Norway & 83 & 5.4 & 15.4 \\ \hline Lithuania & 40 & 2.8 & 14.3 \\ \hline Serbia & 72 & 7.2 & 10.0 \\ \hline Argentina & 353 & 45.5 & 7.8 \\ \hline Iceland & 3 & 0.4 & 7.5 \\ \hline Portugal & 61 & 10.3 & 5.9 \\ \hline Chile & 108 & 19.6 & 5.5 \\ \hline Croatia & 22 & 4.0 & 5.5 \\ \hline Mauritius & 5 & 1.3 & 3.8 \\ \hline Azerbaijan & 34 & 10.4 & 3.3 \\ \hline South Africa & 178 & 59.9 & 3.0 \\ \hline Turkey & 227 & 85.3 & 2.7 \\ \hline Brazil & 536 & 215.3 & 2.5 \\ \hline Hong Kong & 13 & 7.5 & 1.7 \\ \hline Singapore & 3 & 6.0 & 0.5 \\ \hline Iran & 30 & 88.6 & 0.3 \\ \hline Colombia & 6 & 51.9 & 0.1 \\ \hline Pakistan & 15 & 235.8 & 0.1 \\ \hline Nigeria & 10 & 218.5 & 0.0 \\ \hline Ethiopia & 5 & 123.4 & 0.0 \\ \hline Totals & 36,330 & 1900.3 & 19.1 \\ \hline \end{tabular} \end{table} Table 3. Countries deemed ‘nearly complete’.

### Compare the Careers of Graduates of Different Universities

I spent more than thirty years advising undergraduate physics majors. One topic that came up frequently was how to choose a graduate school. If you are an advisor to students applying to doctoral programs in astronomy, you might find it useful to use AstroGen to compare the careers of astronomy Ph.D.s from different universities. This takes a bit of work, but AstroGen can get you started. As an example, we consider the careers of graduates of three (anonymous) universities from among the 25 most productive ones. Each had 30 to 36 Ph.D. graduates in the years 2006\(-\)2008. AstroGen makes it easy to download lists of names and degrees and to follow links to personal web pages of the graduates to learn what they are doing currently. This was done in October 2020, twelve to fourteen years after graduation. By this time most were no longer moving from one postdoctoral position to another every two to three years. We found the results shown in Figure 5. A student whose goal is to obtain a permanent position as a research scientist might choose the university portrayed in red, while one who aims to become a data scientist or engineer might prefer the university shown in blue.

### Trace Changes in the Language Used for Writing Theses

Practically all doctoral theses were written in Latin until the late nineteenth century. Most scholars then switched to the language of their university. Starting early in the twentieth century, one country after another switched to writing scientific theses in English, until by the beginning of the twenty-first a majority of astronomy-related theses were being written in English in many countries.
Table 5 shows some examples. The transitions in Germany were particularly striking, as can be seen in Figure 6. The points represent ten-year bins, centered on the year shown. The two English points at the decades centered on 1860 and 1870 each represent one American who was allowed to write his thesis in English at the University of Göttingen. The trend to a single common language has accompanied the globalization of many graduate programs. Figure 7 shows countries in which at least one university offers the possibility of earning an astronomy-related Ph.D. without knowing any language but English. A particularly cosmopolitan doctoral program is shown in Table 6. Only one-fourth of those who earned astronomy-related doctorates at Leiden University in 2021 and 2022 were from the Netherlands. They were valuable to their classmates, as they were often called upon to translate thesis abstracts into Dutch, as required by the University, but it seems clear that seminars and conversations in the astronomy program were held in English, the language in which all astronomy theses at Leiden have been written since 1964.

\begin{table} \begin{tabular}{|l|c|} \hline University & Theses \\ \hline University of California, Berkeley & 781 \\ University of Cambridge & 765 \\ California Institute of Technology & 748 \\ Heidelberg University & 727 \\ University of Arizona & 641 \\ Harvard University & 605 \\ University of Manchester & 550 \\ University College London & 534 \\ University of Chicago & 527 \\ LMU Munich & 522 \\ Massachusetts Institute of Technology & 481 \\ University of Maryland, College Park & 479 \\ University of Michigan & 476 \\ University of Colorado Boulder & 475 \\ Princeton University & 468 \\ University of Texas at Austin & 458 \\ University of California, Los Angeles & 454 \\ Leiden University & 442 \\ Cornell University & 415 \\ University of Wisconsin-Madison & 384 \\ University of Bonn & 361 \\ University of Oxford & 360 \\ University of La Laguna & 343 \\ University of Leicester & 331 \\ Columbia University & 330 \\ \hline \end{tabular} \end{table} Table 4: Universities (currently ‘nearly complete’ in AstroGen) that have produced the most astronomy-related doctorates.

Figure 5: Positions held by astronomy graduates of three highly-productive universities 12\(-\)14 years after graduation. Note that they earned their doctorates in 2006\(-\)2008, and positions are as of 2020.

### Trace the Growth/Decline of Different Fields of Astronomy

A little over a century ago, a typical astronomy-related doctoral thesis consisted of measuring the positions of a comet or asteroid or using someone's observations to calculate its orbit. Some astronomers did both. Decades later, there was a time when nuclear astrophysics and construction of stellar models were among the most popular topics. Currently, it seems to be exoplanets. AstroGen provides a relatively easy way to quantify such changes by searching on title words over time. Figure 8 shows some examples.
Both 'pulsar' and 'quasar' started appearing in thesis titles in the five-year period centered on 1970, and both rose rapidly. Theses on both have decreased a bit in the last ten to fifteen years, as the number on exoplanets has soared. Note that I have combined 'extrasolar planet' with 'exoplanet', as the two terms have the same meaning.

\begin{table} \begin{tabular}{|c|c|} \hline Country & Number \\ \hline Netherlands & 8 \\ \hline Italy & 3 \\ \hline United States & 3 \\ \hline France & 2 \\ \hline Germany & 2 \\ \hline Poland & 2 \\ \hline United Kingdom & 2 \\ \hline Belgium & 1 \\ \hline Brazil & 1 \\ \hline Chile & 1 \\ \hline China & 1 \\ \hline Estonia & 1 \\ \hline India & 1 \\ \hline Ireland & 1 \\ \hline Spain & 1 \\ \hline Türkiye & 1 \\ \hline Ukraine & 1 \\ \hline Total & 32 \\ \hline \end{tabular} \end{table} Table 6: Leiden Ph.D.s 2021–2022 by country of birth.

Figure 6: The languages in which astronomy-related doctoral theses were written in Germany. German replaced Latin in the late nineteenth century, and English became dominant near the beginning of the twenty-first century.

Figure 7: World map with color showing countries in which at least one university offers an astronomy-related doctorate with (post-master’s) graduate instruction and thesis entirely in English.

### See How Often Prize-Winners Trained Prize-Winners

I looked up the recipients of nine major awards over the past century (1923-2022). For each award I included only those who received them for astronomy-related research. A summary of the results is in Table 7. A whopping 32% of award recipients were students of award recipients. If we only consider recipients with earned doctorates, which is now nearly universal, then it is 35%. Figure 9 is a family tree of one awardee, showing how many of his academic ancestors had been recognized with comparable prizes.

### Find Academic Descendants in AstroGen

This is more difficult, as an astronomer's page shows only his or her students and not their academic progeny. You must click on each one separately to see that student's students, and repeat to the end. We have not yet achieved the ability to plot family trees of descendants, but we do provide one generation of descendants as the default option with the ancestral family trees. Figure 10 shows the academic ancestors and children of an astronomer who was born and raised in Zimbabwe, earned her doctorate in the United Kingdom, and now has her own doctoral students in South Africa.

## What's Next

Although ten years old, AstroGen is still very much a work in progress. We have yet to obtain much information about theses submitted to universities in China, Italy, Mexico, Russia, and many other countries, most of them in Asia. We have some theses from Japan and Korea, but we need many more. We need volunteers who know the languages and can find their way through the library websites of universities in these countries. In India, the theses are in English, but we need a volunteer familiar with the names and the academic culture to add to the 326 theses currently in our database. The earliest research-based Ph.D. degrees constitute another area where help is needed, in the form of a volunteer who can translate Latin titles and, preferably, has some familiarity with nineteenth century astronomy.
\begin{table} \begin{tabular}{|l|c|c|} \hline Award & Since & No. \\ \hline Gold Medal, RAS (G) & \textless{}1923 & 125 \\ Draper Medal, NAS (D) & \textless{}1923 & 38 \\ Bruce Medal, ASP (B) & \textless{}1923 & 98 \\ Nobel Prize (N) & \textless{}1923 & 26 \\ \hline Russell Lectureship, AAS (R) & 1946 & 75 \\ Crafoord Prize (C) & 1985 & 13 \\ \hline Gruber Cosmology Prize (GP) & 2000 & 43 \\ Shaw Prize (S) & 2004 & 31 \\ Kavli Prize (K) & 2008 & 19 \\ \hline Total Awards & \multicolumn{2}{c|}{468} \\ Correction for multiple recipients & \multicolumn{2}{c|}{\(-\)207} \\ \hline Number of award-winning individuals & \multicolumn{2}{c|}{261} \\ Number of earned doctorates & \multicolumn{2}{c|}{240} \\ \hline Number of thesis advisors who received at least one of these awards & \multicolumn{2}{c|}{83} \\ \hline \end{tabular} \end{table} Table 7: Recipients of nine major awards in 1923\(-\)2022.

Figure 8: Changes over time in popularity of some fields of astronomy. Points represent five-year bins centered on the number on the horizontal axis.

Figure 9: The ancestral family tree of Robert Woodrow Wilson. Letters below the university name have been added to indicate the awards received, as detailed in Table 7.

Figure 10: The family tree of Shazrene Mohamed, a native of Zimbabwe who earned her doctorate in the United Kingdom, showing the doctoral students she has supervised in South Africa.

Eventually, we hope to get volunteers to go into university libraries and examine the theses that are not online. And for those who have been waiting to find the answer to the question posed above, one-half of the doctorates currently listed in AstroGen have been awarded since 2004. If we restrict the count to astronomy-related theses, then the median year is 2006.

## Acknowledgments

There is a full-page list of acknowledgments on the AstroGen website, listing many individuals who have made large or small contributions to the project. Here I will mention only a few of the most important. The AAS funds the programming and hosts AstroGen on its website. Jonathan Galantine is the ace programmer who rescued the project when it was foundering.
``` 天文系 genealogies(AstroGen)プロジェクトは2013年1月から開始されています。このプロジェクトは、アメリカ天文学協会(AAS)の歴史天文学部門(HAD)が開始し、2020年7月からオンライン化されました。これは、AASの御尽力によって実現されました。AstroGenチームのボランティアは、オンラインデータベースに関係のある天文学的な博士論文を網羅的に検索しています。これは、現代の研究に基づいた博士学位に相当するものです。現在、38か国がほぼ完成していると言われています。しかし、一部は1年以上、2年以上、3年以上更新されていない場合があります。このウェブサイトには、各天文学者とアドバイザーについてのページがあり、その人の、大学、研究機関、そしてその論文を含んでいます。論文の多くはオンラインで公開されていますが、一部はライブラリのサブスクリプションが必要な場合があります。37,000人の天文学
2309.09710
About optimization of methods for mixed derivatives of bivariate functions
The problem of optimal recovery of high-order mixed derivatives of bivariate functions with finite smoothness is studied. On the basis of the truncation method, an algorithm for numerical differentiation is constructed which is order-optimal both in the sense of accuracy and in terms of the amount of Galerkin information involved.
Y. V. Semenova, S. G. Solodky
2023-09-18T12:27:32
http://arxiv.org/abs/2309.09710v1
# About optimization of methods for mixed derivatives of bivariate functions ###### Abstract. The problem of optimal recovery of high-order mixed derivatives of bivariate functions with finite smoothness is studied. On the basis of the truncation method, an algorithm for numerical differentiation is constructed which is order-optimal both in the sense of accuracy and in terms of the amount of Galerkin information involved. \({}^{\dagger}\)_Key words_. Numerical differentiation, Legendre polynomials, truncation method, minimal radius of Galerkin information. ## 1. Description of the problem Stable numerical differentiation is currently the subject of many research activities, owing to the importance of this tool in areas of science and technology such as finance, mathematical physics, image processing, analytical chemistry, viscoelastic mechanics, reliability analysis, pattern recognition, and many others. Among these investigations we highlight [4], the first publication to treat numerical differentiation within the theory of ill-posed problems. This line of research has been continued in numerous publications on numerical differentiation (see, for example, [21], [30], [7], [8], [1], [9], [12], [31], [32], [10], [24]), covering different classes of functions and different types of methods. Despite the abundance of works on this topic, the recovery of high-order derivatives has been considered in only a few publications, among which we note [5], [22], [11], [19], [12] and [24]. In particular, the results of [24] opened a perspective for further investigation of numerical methods for recovering high-order derivatives: the main criterion of a method's efficiency is taken to be its ability to achieve the optimal order of accuracy while using a minimal amount of discrete information. Precisely these aspects of numerical differentiation remain insufficiently studied. The present paper continues the research of [24], [23] and proposes a numerical method for recovering high-order mixed derivatives of smooth bivariate functions. The method is not only stable with respect to small perturbations of the input data, but is also optimal in terms of accuracy and of the number of Fourier-Legendre coefficients involved, and it admits a simple numerical implementation. Let \(\{\varphi_{k}(t)\}_{k=0}^{\infty}\) be the system of Legendre polynomials, orthonormal on \([-1,1]\), given by \[\varphi_{k}(t)=\sqrt{k+1/2}\,(2^{k}k!)^{-1}\frac{d^{k}}{dt^{k}}[(t^{2}-1)^{k}], \quad k=0,1,2,\ldots.\] By \(L_{2}=L_{2}(Q)\) we mean the space of functions \(f(t,\tau)\) square-summable on \(Q=[-1,1]^{2}\), equipped with the inner product \[\langle f,g\rangle=\int_{-1}^{1}\int_{-1}^{1}f(t,\tau)g(t,\tau)\,d\tau\,dt\] and the standard norm \[\|f\|_{L_{2}}^{2}=\sum_{k,j=0}^{\infty}|\langle f,\varphi_{k,j}\rangle|^{2}<\infty,\]
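To make the normalization and the notion of Fourier-Legendre (Galerkin) information concrete, the following short Python sketch (an illustration added here, not part of the paper) evaluates the orthonormal polynomials \(\varphi_{k}\) and approximates the coefficients \(\langle f,\varphi_{k}\varphi_{j}\rangle\) of a bivariate function by Gauss-Legendre quadrature; the test function and the truncation degree are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def phi(k, t):
    """Orthonormal Legendre polynomial: phi_k(t) = sqrt(k + 1/2) * P_k(t)."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return np.sqrt(k + 0.5) * leg.legval(t, coeffs)

def fourier_legendre_coeffs(f, n, quad_points=64):
    """Approximate <f, phi_k phi_j> for 0 <= k, j < n by Gauss-Legendre quadrature."""
    t, w = leg.leggauss(quad_points)           # nodes and weights on [-1, 1]
    T, Tau = np.meshgrid(t, t, indexing="ij")
    W = np.outer(w, w)                         # two-dimensional quadrature weights
    F = f(T, Tau)
    c = np.empty((n, n))
    for k in range(n):
        for j in range(n):
            c[k, j] = np.sum(W * F * phi(k, T) * phi(j, Tau))
    return c

# Example: a smooth bivariate test function (arbitrary choice).
f = lambda t, tau: np.exp(t * tau)
coeffs = fourier_legendre_coeffs(f, n=6)
# Parseval check: the sum of squared coefficients approaches ||f||^2 as n grows.
print(coeffs[0, 0], np.sum(coeffs ** 2))
```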
``` 二変数関数の高階混合微分を、有限滑らかな条件下で最適な回復に関連する問題を研究しています。 truncation法に基づき、数値微分アルゴリズムを構築し、精度と Galerkin 情報量に関する最適性を両立させます。 ```
2309.06423
Accelerating Defect Predictions in Semiconductors Using Graph Neural Networks
Here, we develop a framework for the prediction and screening of native defects and functional impurities in a chemical space of Group IV, III-V, and II-VI zinc blende (ZB) semiconductors, powered by crystal Graph-based Neural Networks (GNNs) trained on high-throughput density functional theory (DFT) data. Using an innovative approach of sampling partially optimized defect configurations from DFT calculations, we generate one of the largest computational defect datasets to date, containing many types of vacancies, self-interstitials, anti-site substitutions, impurity interstitials and substitutions, as well as some defect complexes. We applied three types of established GNN techniques, namely Crystal Graph Convolutional Neural Network (CGCNN), Materials Graph Network (MEGNET), and Atomistic Line Graph Neural Network (ALIGNN), to rigorously train models for predicting defect formation energy (DFE) in multiple charge states and chemical potential conditions. We find that ALIGNN yields the best DFE predictions with root mean square errors around 0.3 eV, which represents a prediction accuracy of 98 % given the range of values within the dataset, improving significantly on the state-of-the-art. Models are tested for different defect types as well as for defect charge transition levels. We further show that GNN-based defective structure optimization can take us close to DFT-optimized geometries at a fraction of the cost of full DFT. DFT-GNN models enable prediction and screening across thousands of hypothetical defects based on both unoptimized and partially-optimized defective structures, helping identify electronically active defects in technologically-important semiconductors.
Md Habibur Rahman, Prince Gollapalli, Panayotis Manganaris, Satyesh Kumar Yadav, Ghanshyam Pilania, Brian DeCost, Kamal Choudhary, Arun Mannodi-Kanakkithodi
2023-09-12T17:40:23
http://arxiv.org/abs/2309.06423v2
# Accelerating Defect Predictions in Semiconductors Using Graph Neural Networks ###### Abstract First principles computations reliably predict the energetics of point defects in semiconductors, but are constrained by the expense of using large supercells and advanced levels of theory. Machine learning models trained on computational data, especially ones that sufficiently encode defect coordination environments, can be used to accelerate defect predictions. Here, we develop a framework for the prediction and screening of native defects and functional impurities in a chemical space of Group IV, III-V, and II-VI zinc blende (ZB) semiconductors, powered by crystal Graph-based Neural Networks (GNNs) trained on high-throughput density functional theory (DFT) data. Using an innovative approach of sampling partially optimized defect configurations from DFT calculations, we generate one of the largest computational defect datasets to date, containing many types of vacancies, self-interstitials, anti-site substitutions, impurity interstitials and substitutions, as well as some defect complexes. We applied three types of established GNN techniques, namely Crystal Graph Convolutional Neural Network (CGCNN), Materials Graph Network (MEGNET), and Atomistic Line Graph Neural Network (ALIGNN), to rigorously train models for predicting defect formation energy (DFE) in multiple charge states and chemical potential conditions. We find that ALIGNN yields the best DFE predictions with root mean square errors around 0.3 eV, which represents a prediction accuracy of 98 % given the range of values within the dataset, improving significantly on the state-of-the-art. Models are tested for different defect types as well as for defect charge transition levels. We further show that GNN-based defective structure optimization can take us close to DFT-optimized geometries at a fraction of the cost of full DFT. DFT-GNN models enable prediction and screening across thousands of hypothetical defects based on both unoptimized and partially-optimized defective structures, helping identify electronically active defects in technologically-important semiconductors. ## 1 Introduction Semiconductors are critical for a variety of technologies such as consumer electronics, healthcare and biotechnology, information technology, communication and connectivity, automotive manufacturing, renewable energy, and industrial automation [1]. With the signing of the CHIPS Act [2], there has been a massive influx of funding into semiconductor R&D, resulting in the establishment of several manufacturing facilities and research centers across the United States as well as many global partnerships between companies and universities. Developing next-generation semiconductor materials is crucial for addressing global energy needs and the demands of the electronics industry, and this process begins at the atomistic scale with enhanced structure-property relationships that can scale up to device performance and aid data-driven materials design and improvement [3]. The electronic structure of a semiconductor is heavily dependent on the presence of point defects in its crystal lattice, which range from intrinsic vacancies, self-interstitials, and anti-site substitutions, to impurities at different lattice sites [4]. Defects can introduce energy levels within the band gap, affecting carrier concentration and mobility, and often acting as traps that lead to non-radiative recombination of holes and electrons in optoelectronic devices [5, 6, 7, 8, 9, 10]. 
Defects also play a crucial role in dopant activation, diffusion, and segregation, which are vital factors in device fabrication processes. Even at low concentrations, unwanted point defects or impurities can have a significant impact on the electronic, optical, and transport properties of semiconductors, making it important to be able to accurately predict their stability and electronic signatures [11]. One of the applications where the effect of defects is felt most is solar cells, where semiconductors such as Si and CdTe are commonly used as absorbers [7, 12]. Undesirable defects and functional dopants in semiconductor absorbers will respectively decrease and increase the optical absorption and thus the power conversion efficiency of single-junction, tandem, and bifacial solar cells [13]. Similar effects are felt in applications such as transistors, photodiodes, lasers, sensors, and quantum information sciences [14, 15, 16, 17, 18]. Canonical group IV, III-V, and II-VI semiconductors are some of the most important materials used in these applications, either as binary compounds or alloys, typically in the zinblende (ZB) or wurtzite (WZ) phases. In addition to Si and CdTe, compounds such as GaAs, SiC, CdSe, and CdS have been used in photovoltaics (PVs). GaAs, GaN, ZnO, and InP are employed in optoelectronic devices such as light-emitting diodes (LEDs), laser diodes, quantum dots, and quantum wells. GaN, AlN, and related compounds are desirable wide band gap (WBG) semiconductors for power electronics [19]. Point defects negatively or positively impact the performance of each of these materials; in general, semiconductors with intrinsic defect tolerance and possible n-type or p-type dopability are desired for optoelectronic applications. Furthermore, defect levels in semiconductors (e.g., NV-centers in diamond) have also been suggested as qubits for quantum computing [20]. Experimentally, defect levels are measured using techniques such as cathodoluminescence and deep-level transient spectroscopy [21]. However, these methods face difficulties in sample preparation and assigning measured levels to particular defects; e.g., it is not trivial to determine whether a captured peak is from a specific vacancy or self-interstitial, or from an unwanted substitutional or interstitial impurity [12]. First principles-based density functional theory (DFT) computations have thus been widely used to calculate the formation energy (E\({}^{/}\)) of point defects as a function of Fermi level (E\({}_{F}\)), net charge in the system (\(q\)), and chemical potential (\(\mu\)) [22, 23]. Such computations help reliably identify the lowest energy donor and acceptor defects, all possible shallow and deep defect levels, the equilibrium conductivity as pinned by the lowest energy defects (p-type, intrinsic, or n-type), defect concentrations, electron/hole capture rates, and other related properties. When using an appropriate level of theory, DFT-computed charge transition levels have been shown to match remarkably well with measured levels [12]. Despite the successes of DFT, large-supercell charged calculations are rather computationally expensive, which makes it prohibitive to perform defect calculations across a very large chemical space. Predicting defect properties can be significantly accelerated by combining DFT data with state-of-the-art machine learning (ML) models [12, 24]. 
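For context on the quantities just introduced, the defect formation energy discussed above is conventionally written, in the standard supercell formalism (the general expression reviewed, e.g., by Freysoldt et al.; the specific equations used in this work are given in the authors' earlier publication [12]), as

\[E^{f}[X^{q}]=E_{\mathrm{tot}}[X^{q}]-E_{\mathrm{tot}}[\mathrm{bulk}]-\sum_{i}n_{i}\mu_{i}+q\,(E_{F}+E_{\mathrm{VBM}})+E_{\mathrm{corr}},\]

where \(E_{\mathrm{tot}}[X^{q}]\) and \(E_{\mathrm{tot}}[\mathrm{bulk}]\) are the total energies of the defective and pristine supercells, \(n_{i}\) counts atoms of species \(i\) added (\(n_{i}>0\)) or removed (\(n_{i}<0\)) with chemical potential \(\mu_{i}\), \(E_{\mathrm{VBM}}\) is the valence band maximum, and \(E_{\mathrm{corr}}\) collects finite-size and electrostatic corrections.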
Some recent efforts, including our own past work, have shown that regression models trained on a DFT dataset, using vectors encoding the identity and elemental properties of the coordinating atoms involved in creating a defect, can yield accurate prediction and screening over tens of thousands of defects and impurities [25, 26, 27]. In published work from 2022 [12], we trained ML models to predict the formation energies and charge transition levels of point defects in 34 ZB semiconductors with a roughly 90% prediction accuracy, which enabled qualitatively reliable screening of consequential impurities from across a list of \(>\) 12,000. However, these models suffer from the following limitations: (a) for a wide chemical space, composition-based models [28] require a significant amount of training data to accurately capture the complex relationships between the material and target properties, (b) all predictions are made for a supposed ground state defect structure which means no competing defective structures could be sampled and no lower energy defect structures could theoretically be found, (c) the errors are larger than desired, presumably due to lack of information in the model inputs about the defective geometry and how local coordination changes in the presence of different defects, and (d) the predictive models cannot trivially be applied to related but "out-of-sample" defects, such as complexes and alloys within the same material classes. A potential approach to tackle these issues arises in the form of a "crystal graph", which is the most general representation of a crystalline structure, automatically accounting for different supercell sizes, types of ionic species, mixing or added atoms, and metastable configurations. Graph-based Neural Networks (GNNs) have been rising in popularity over the last few years, and are widely applied today for adequately representing and predicting properties of molecules, polymers, and inorganic crystals [29, 30, 31, 32, 33, 34, 35]. GNNs can work directly with graph-structured data, converting crystal structures into crystal graphs where the nodes are atomic positions and edges are chemical bonds. They are capable of learning internal representations of crystals useful for predicting properties ranging from formation or decomposition energy to band gap to defect formation energy. ML models based only on vectors encoding composition and/or elemental properties are typically not suited to deal with crystalline polymorphs of a given material, often requiring hand-crafted features that are not generalizable. GNNs are known to be much more flexible than composition-based models, as they can be normalized with respect to system size or number of atoms, and have the ability to capture important structure/chemistry information that contribute to properties of interest. By learning the global representation of crystals, GNNs can incorporate physical laws and phenomena on larger scales and be used to predict properties that are affected by long-range interactions. Predictive models trained using GNNs show much better accuracy than models that lack structural/geometry information. In this article, we present one of the most comprehensive efforts undertaken to date for predicting defect properties in semiconductors, by combining a massive high-throughput (HT) DFT dataset of defect formation energies (DFEs) with state-of-the-art GNNs. 
We utilize our previously published dataset [12, 24], bolstered by the inclusion of thousands of partially-optimized and unoptimized structures in addition to optimized structures, along with several new computations, and train predictive models using three types of established GNN schemes: Crystal Graph Convolutional Neural Network (CGCNN) [36], Materials Graph Network (MEGNET) [37], and Atomistic Line Graph Neural Network (ALIGNN) [38]. We train GNN models on datasets ranging from a few thousand to nearly 15,000 data points for point defects in different charge states, across 40 or so unique ZB compounds. We present a description of model optimization and visualization of the prediction results for different defect types and show how (a) ALIGNN predictions significantly improve upon previous DFE estimates, with a root mean square error (RMSE) of \(\sim\) 0.3 eV, (b) predictions can be made for defect complexes and alloyed compositions by including a subset of them in the training data, and (c) GNN predictions for new systems can be used both for screening based on upper-bound energies as well as for stabilizing any defect by changing the defect coordination environment until the GNN-predicted formation energy minimizes. We believe this provides a novel and promising approach towards predicting defect energetics and screening important defects from large chemical spaces. Considerations of the level of theory and future prospects of this work are additionally discussed. The DFT+GNN workflow applied for predicting defect properties is presented in **Fig. 1**. ## 2 Computational dataset The semiconductor+defect chemical space considered in this work is pictured in **Fig. S1** in terms of the group IV, III-V, and II-VI binary ZB compounds (referred to henceforth as AB, with A being the cation and B the anion) that serve as hosts to defects, elements selected from across the periodic table as possible defects (henceforth referred to as M for any substitutional or interstitial defect, whereas vacancies will use V), and 5 possible symmetry inequivalent defect sites, namely A-site (M\({}_{A}\)), B-site (M\({}_{B}\)), tetrahedral interstitial site with 4 neighboring A atoms (M\({}_{i,A}\)), tetrahedral interstitial site with 4 neighboring B atoms (M\({}_{i,B}\)), and octahedral interstitial site with 3 A and 3 B atoms in the neighborhood (M\({}_{i,oct}\)). Here, we utilize 4 types of datasets: dataset 1 (all possible single native defects in 34 binary ZB semiconductors), dataset 2 (hundreds Figure 1: DFT+GNN workflow to accelerate the prediction of defect formation energies and charge transition levels in semiconductors. of substitutional and interstitial impurities across all ZB compounds), dataset 3 (native defects and impurities in a variety of CdSe\({}_{x}\)Te\({}_{1-x}\) alloys), and dataset 4 (some defect complexes simulated in CdTe). Datasets 3 and 4 arise from a parallel study on dopant-activation in CdTe-based solar cells [39] and are used here to evaluate the effectiveness of GNN-based defect predictions for alloys and complexes. All datasets contain DFEs calculated for at least 5 distinct charged states (q = 0, +2, +1, -2, -1) at two extreme chemical potential conditions, namely A-rich and B-rich. **Fig. 2** shows violin plots capturing the distribution of DFEs (only for neutral defects at A-rich chemical potential conditions) for all 4 datasets, with inset box plots showing the median, lower quartile, and upper quartile for each. 
The DFT details, including specific VASP input parameters, level of theory information, reciprocal space sampling, references for chemical potentials, and equations used for DFE calculation, are present in our past publication [12]. All data is from the semi-local GGA-PBE functional, which generally reproduces lattice parameters and relative stabilities quite well, but is not reliable for electronic band edges and band gaps, which is likely to affect computed defect levels as well. The non-local hybrid HSE06 functional [40] is preferred for electronic and defect properties, but is much more expensive and often requires tuning of the mixing parameter (which determines the fraction in which exact exchange from Hartree-Fock is combined with the exchange-correlation energy from PBE), which is very system-specific and not trivially applied across large datasets [41]. Beyond-DFT methods such as the GW approximation, which expands the self-energy in terms of the single-particle Green's function G and the screened Coulomb interaction W [42], are more reliable but too prohibitively expensive to be applied high-throughput. In past work [12], we showed that PBE computed defect charge transition levels, when plotted to span the experimentally-known band gap of the semiconductor, match rather well with measured defect levels for a dataset of \(\sim\) 80 defects in binary ZB compounds collected from the literature, showing a PBE vs experiment RMSE of 0.21 eV. Thus, PBE-level DFEs and transition levels may be sufficient for a first-level screening of low-energy defects. Inaccuracies will still persist from incorrectly locating VBM and CBM, but appropriate corrections can be applied afterwards using different higher-accuracy bulk calculations once PBE-level DFEs are predicted for multiple \(q\) and \(\mu\) conditions. Two such possible correction methods include using the modified band alignment approach based on PBE and HSE06 band gap values [43], and shifting both band edge positions using GW quasiparticle energies [44]. The focus of present work is to demonstrate the accelerated prediction of PBE-level defect energetics, and the aforementioned corrections will be considered in future work. In the next few subsections, a detailed description is provided for the four datasets generated at the PBE level. ### Dataset 1 Dataset 1 contains DFEs for all possible native defects in each AB compound, namely V\({}_{A}\) (vacancy at A-site), V\({}_{B}\), A\({}_{iA}\) (A self-interstitial at A-coordinated tetrahedral site), A\({}_{iB}\), A\({}_{iA,oct}\) (A self-interstitial at octahedral site), B\({}_{iA}\), B\({}_{iB,B}\), B\({}_{i,oct}\), A\({}_{B}\) (A on B anti-site substitution), and B\({}_{A}\). All AB compounds, simulated in the cubic ZB structure with A atoms occupying an FCC lattice and B atoms occupying tetrahedral sites, are listed as follows: 8 II-VI compounds (ZnO, ZnS, ZnSe, ZnTe, CdO, CdS, CdSe, and CdTe), 16 III-V compounds (AlN, AlP, AlAs, AlSb, BN, BP, BAs, BSb, GaN, GaP, GaAs, GaSb, InN, InP, InAs, and InSb), and 10 group IV compounds (SiC, GeC, SnC, SiGe, SiSn, GeSn, C, Si, Ge, and Sn). There are a total of 312 possible native defects across the 34 compounds, and DFEs were computed for all under both A-rich and B-rich conditions. 
From each of the 312 (\(\times\) 5 \(q\) states) PBE geometry optimization calculations, we collected all possible "intermediate structures", that is, geometries generated during Figure 2: Defect formation energy distribution across the four datasets, for neutral defects under A-rich chemical potential conditions. the course of the optimization, all the way from pristine structure (which is simply the ground state semiconductor bulk supercell structure with a defect introduced) to the fully optimized structure; also collected were the total DFT energies corresponding to each structure. The shell script used to collect intermediate structures (_IS_) for every defect from XDATCAR and corresponding energies from OUTCAR (typical output files in VASP [45]) is added to the SI and available on GitHub. The DFE corresponding to every _IS_ is estimated as E\({}^{\prime}\)(_IS_) = E\({}_{DFT}\)(_IS_)- E\({}_{DFT}\)(optimized structure) + E\({}^{\prime}\)(optimized structure). This approach helps swell the DFT native defect dataset to 3071 data points for q = 0, and between \(\sim\) 1500 and \(\sim\) 2000 points for the other q values, as reported in **Table 1**. The lower number of structures for the charged systems as compared to the neutral defects comes from the fact that most of the geometry optimization takes place during the neutral calculation, whereas the charged calculations typically use the optimized neutral defect geometry as their starting point. All the _IS_ serve as energetically accessible but not ground state defective structures, which can play an important role in understanding the dynamics and behavior of the crystal, but also provide an energetically and structurally diverse dataset for a GNN model to be trained on. This also ensures that the GNN "knows" what an unoptimized, partially optimized, and fully optimized defect structure looks like, meaning it will yield the correct energy corresponding to any hypothetical new structure and potentially help reduce the energy by subtly modifying the defect coordination environment. ### Dataset 2 Dataset 2 contains hundreds of impurities or extrinsic defects (M) at various sites, namely M\({}_{A}\), M\({}_{B}\), M\({}_{I,A}\), M\({}_{I,B}\), and M\({}_{i,oct}\), across each of the 34 unique AB compounds. The five distinct defect sites are considered in 30 binary compounds and three defect sites (A-site, one tetrahedral interstitial site, and one octahedral interstitial site) are considered in the elemental systems (C, Si, Ge, and Sn). This dataset encompasses a wide range of singular impurity atoms, including groups I to VII, all transition metals, and all lanthanides, leading to a total of 77 species, as shown in **Fig. S1**. The total number of possible impurities resulting from this can be calculated as 77 \(\times\) 5 \(\times\) 30 + 77 \(\times\) 3 \(\times\) 4 = 12,474 (many of these would coincide with the native defects described earlier). Out of this dataset of 12,474 defects, 1566 were chosen for complete neutral-state geometry optimization, and \(\sim\) 1000 were subjected to charged calculations as well; points in the DFT dataset exhibit sufficient chemical diversity in terms of semiconductor type, element type, and defect site type, to adequately represent the entire chemical space. Once again, we collected all IS from the DFT optimization runs for each impurity in 5 different q states, leading to nearly 14,000 data points for q=0 and between 3500 and 5300 data points for the other q values, as reported in **Table 1**. 
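The bookkeeping behind the intermediate-structure energies described above can be illustrated with a short Python sketch. This is a hypothetical illustration, not the shell script the authors share on GitHub; the file path, the reference DFE value, and the assumption that the final ionic step corresponds to the optimized structure are all placeholders.

```python
import re

def read_ionic_step_energies(outcar_path):
    """Collect the total energy (TOTEN) reported after each ionic step of a VASP run."""
    energies = []
    with open(outcar_path) as fh:
        for line in fh:
            match = re.search(r"free\s+energy\s+TOTEN\s*=\s*([-+.\dEe]+)", line)
            if match:
                energies.append(float(match.group(1)))
    return energies

def intermediate_dfes(outcar_path, dfe_optimized):
    """Assign a formation energy to every intermediate structure (IS) of an optimization:
    E_f(IS) = E_DFT(IS) - E_DFT(optimized) + E_f(optimized)."""
    step_energies = read_ionic_step_energies(outcar_path)
    e_optimized = step_energies[-1]  # assume the last ionic step is the relaxed structure
    return [e - e_optimized + dfe_optimized for e in step_energies]

# Example usage with placeholder inputs:
# dfes = intermediate_dfes("OUTCAR", dfe_optimized=2.41)
```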
### Dataset 3 This dataset includes several possible native defects in a series of CdSe\({}_{x}\)Te\({}_{1-x}\) alloys (x = 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, and 0.875), namely V\({}_{Cd}\), V\({}_{Te}\), V\({}_{Se}\), Cd\({}_{i}\), Te\({}_{i}\), Se\({}_{i}\), Cd\({}_{Te}\), Cd\({}_{Se}\), Te\({}_{Cd}\), and Se\({}_{Cd}\). This results in a total of 82 unique defects across the 7 mixed compositions, which are of interest in CdTe solar cells where Se is often mixed at the anion site [46, 47, 48, 49, 50]. DFEs are computed for each defect in 5 q states at Cd-rich and Te/Se-rich conditions to obtain the two extreme ranges of energies, and all the _IS_ are collected from the optimization runs. Total datasets of a few hundred points are thus compiled, as shown in **Table 1**. This dataset will help examine whether GNN models trained on defects in pure unmixed compositions (CdTe and CdSe) are applicable to alloyed compositions in the same chemical space, and how many new alloy data points might need to be added to the training set to achieve satisfactory accuracy. ### Dataset 4 Finally, we posit that crystal graphs should be capable of representing any type of defect complexes as well in addition to the single-point defects described above. For exhaustively investigating defect tolerance and dopability of a semiconductor, it is vital to consider likely defect com \begin{table} \begin{tabular}{|c|c|} \hline **Dataset** & **Data Points** \\ \hline Native defects in 34 compounds (Dataset 1) & 2053 (q=+2), 1840 (q=+1), 3071 (q=0), 1966 (q=-1), 1498 (q=-2) \\ \hline Impurities in 34 compounds (Dataset 2) & 5326 (q=+2), 3990 (q=+1), 13966 (q=0), 3568 (q=-1), 4628 (q=-2) \\ \hline Native defects in CdSe\({}_{x}\)Te\({}_{1-x}\) (Dataset 3) & 291 (q=+2), 322 (q=+1), 734 (q=0), 305 (q=-1), 329 (q=-2) \\ \hline Defect complexes in CdTe (Dataset 4) & 47 (q=0) \\ \hline \end{tabular} \end{table} Table 1: Number of data points for each charge state q across Datasets 1, 2, 3, and 4. plexes, which are typically multiple point defects or impurities that form simultaneously in the lattice and interact with each other. Examples include Schhotky and Frenkel defects, and compensating vacancies or interstitials that form along with dopants. The V\({}_{Ga}\)-\(\Omega_{N}\)-2H triple complex was found to have a very low energy in GaN [51], and it has recently been suggested that As-O based complexes may form in CdTe [39]. Thus, we simulated a series of complexes such as V\({}_{Cd}\)+As\({}_{Te}\) and V\({}_{Te}\)+Cu\({}_{Cd}\) in CdTe, resulting in a small dataset of 47 points of neutral state defects, for both Cd-rich and Te/Se-rich conditions, including all the _IS_. ## 3 DFT optimized vs unoptimized formation energy Before training GNN models, we analyzed the DFT datasets to determine the scale of the differences between DFEs from full DFT-optimization and from pristine, unoptimized defect structures. For any hypothetical defect, an unoptimized pristine structure could be trivially generated simply by inserting the defect of interest in an optimized bulk supercell, but obtaining the ground state DFE is a matter of optimizing this structure, which would involve a PBE calculation that runs for minutes, hours, or days, depending on the nature of the defect. The purpose of GNN-based on-demand predictions is to reduce this time drastically. 
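To make the notion of a pristine, unoptimized defect structure concrete, the following is a generic pymatgen sketch, written under the assumption that an optimized bulk POSCAR is available; it is not the authors' actual workflow, and the file names are placeholders.

```python
from pymatgen.core import Structure

# Optimized bulk cell (placeholder path), expanded to a 2x2x2 supercell.
bulk = Structure.from_file("CdTe_POSCAR")
bulk.make_supercell([2, 2, 2])

# Impurity substitution: put As on the first Te site (geometry left unrelaxed).
substitution = bulk.copy()
te_index = next(i for i, site in enumerate(substitution) if site.species_string == "Te")
substitution.replace(te_index, "As")

# Vacancy: remove one Cd site instead.
vacancy = bulk.copy()
cd_index = next(i for i, site in enumerate(vacancy) if site.species_string == "Cd")
vacancy.remove_sites([cd_index])

# Write the unrelaxed defective cell for a subsequent static calculation or GNN prediction.
substitution.to(filename="POSCAR_As_Te_unrelaxed")
```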
Since any new GNN predictions would likely be made using pristine defect structures, it is informative to examine, for a given type of defect, how low the energy could go starting from the unoptimized DFE if full optimization were to be performed. **Fig. 3** shows unoptimized DFE plotted against the fully optimized DFE, for the dataset of 312 native defects across 34 AB compounds (Dataset 1), at both A-rich and B-rich conditions. The unoptimized DFEs are obtained by performing fixed-volume single ionic step calculations on the pristine defect-introduced structures. The dataset is divided into vacancies, anti-site substitutions, and self-interstitials. It can be seen that the amount of geometry relaxation happening in vacancy structures is minimal, with the two types of DFEs almost always being very similar to each other. On the other hand, interstitials and anti-sites often undergo major atomic rearrangement, such that the optimized defect coordination environment may look starkly different from the starting structure, thus leading to DFE differences ranging from 1 eV to nearly 8 eV. The large differences for interstitials could be understood in terms of the unfavorability of introducing an extra cation or anion in a tetrahedral or octahedral void; the larger the ionic radii, the greater this unfavorability. Substitutions depend on the size differences between A and B, and may thus show either a low or high energy variability. These trends roughly inform the threshold that must be applied upon unoptimized DFEs (say, from GNN predictions) to determine their likelihood of stability upon full optimization. It should be noted that the intermediate structures collected from each "optimization run" essentially span the range of the unoptimized to optimized DFE, yielding hundreds of structures for some defects and only a handful for others. Figure 3: DFT unoptimized vs optimized neutral defect formation energy in Dataset 1, under (a) A-rich, and (b) B-rich chemical potential conditions. Graph Neural Network Architecture In this section, we briefly describe the technical details behind the three GNN schemes chosen in this study, namely GGCNN, MEGNET, and ALIGNN. ### Crystal Graph Convolutional Neural Network (CGCNN) CGCNN, developed by Xie et al. [36], is a deep learning architecture that takes a crystal graph as input and applies a series of convolution and pooling operations to extract features that capture the underlying properties of the crystal. These features are subsequently fed into a fully connected neural network (FCNN) to make predictions of the properties of interest [52]. The CGCNN framework is pictured in **Fig. S2(a)** and its operation is described below: 1. Structure representation: The crystal structure is represented as a graph where the atoms are nodes and the bonds are edges. The nodes and edges are labeled with features such as atomic coordinates, and bond length. 2. Graph convolutional layers: CGCNN applies multiple graph convolutional layers to the input graph wherein each layer aggregates information from neighboring nodes and edges and learns features that capture the local environment. 
Generally, the convolution operation includes computing a weighted sum of the features of neighboring nodes and edges, followed by a non-linear activation function [53, 54] : \[H^{(l+1)}=\sigma\left(\sum_{j\in\mathcal{N}(i)}W^{(l)}h_{j}^{(l)}+W^{(l)}h_{i} ^{(l)}\right)\] (1) Here, \(H^{(l+1)}\) is the output feature matrix of the \((l+1)\)-th layer, \(W^{(l)}\) is the weight matrix of the \(l\)-th layer, \(\sigma\) is a non-linear activation function, \(h_{i}^{(l)}\) is the feature vector of node \(i\) in layer \(l\), and \(\mathcal{N}(i)\) is the set of neighboring nodes of node \(i\) in the graph. 3. Pooling layer: The output of the last convolutional layer is passed through a global pooling layer (e.g., min, max pooling), which aggregates the features of all nodes in the graph into a single vector [53, 54]. This vector contains information about the entire crystal structure, including all atomic coordinates, bond distances, and well-known elemental properties of each atom such as ionization energy and electronegativity. \[h_{\text{pool}}=\frac{1}{N}\sum_{i=1}^{N}h_{i}^{(L)}\] (2) Here, \(h_{pool}\) is the output feature vector of the pooling layer, \(N\) is the total number of nodes in the graph, and \(h_{i}^{(L)}\) is the feature vector of node \(i\) in the last layer \(L\). 4. Fully connected neural network (FCNN): Finally, the output of the pooling layer is fed into an FCNN, which is trained like a regular NN-regression model to make predictions. \[y=f\left(W_{fc}h_{pool}+b_{fc}\right)\] (3) Here, \(y\) is the predicted property, \(W_{fc}\) is the weight matrix of the FCNN, \(b_{fc}\) is the bias vector, \(h_{pool}\) is the output feature vector of the pooling layer, and \(f\) is a non-linear activation function such as ReLU or sigmoid. ### Materials Graph Network (MEGNET) The MEGNET framework was developed and released by Chen et al. in 2019 [37] and is pictured in **Fig. S2(b)**. MEGNET uses elemental embeddings to encode periodic chemical trends that can be used to improve the performance of models with limited training data. Elemental embeddings are vector representations of elements that capture their chemical properties that are typically learned from a large dataset of crystals. When a new crystal is encountered, the embeddings can be used to predict the properties of interest. The MEGNET methodology is described below: 1. Graph representation of materials: MEGNET represents the crystal as a graph where the atoms are the nodes, and the edges represent the connections between the atoms. Each atom is associated with a set of features such as its atomic number, coordinates, and chemical environment. 2. Message passing algorithm: MEGNET uses a message-passing algorithm to capture the interactions between atoms in the crystal. Each atom sends a message to its neighboring atoms, which is a function of the node and edge features. The messages are then aggregated at each node and the resulting feature vector is used as input to a neural network. 3. Readout layer: The output of the message-passing algorithm is passed through a readout layer which maps the learned node and edge features to target properties, and a loss function is calculated to capture the difference between the predicted and actual values. ### Atomistic Line Graph Neural Network (ALIGNN) ALIGNN is a novel approach developed by Choudhary et al. [38], that differs from CGCNN and MEGNET in terms of considering three-body interactions (bond angles) as well in addition to two-body terms (bond lengths). 
ALIGNN leverages both graph convolutional layers and line graph convolutional [55] layers to capture both short-range and long-range correlations in the crystal. The framework (pictured in **Fig. S2(c)**) is described below: 1. Atomic feature extraction: ALIGNN takes as input a graph representing the atomic structure of the crystal. Each node (atom) in the graph is associated with a set of atomic features, which includes properties such as electronegativity, group number, covalent radius, valence electrons, ionization energy, electron affinity, atomic block, and atomic volume. Each edge (bond) in the graph is associated with both the bond angle and bond distance. 2. Graph convolutional layers: ALIGNN uses graph convolutional layers to update the feature vectors of each node based on the features of its neighboring nodes. In each layer, the feature vectors are updated using a weighted sum of the features of neighboring nodes, similar to other models. This step captures short-range interactions in the structure. 3. Line graph construction: To capture long-range correlations, ALIGNN constructs a line graph on top of the original crystal graph. The line graph has nodes that represent unique bonds between atoms, corresponding to edges in the crystal graph. The line graph edges connect pairs of bonds that share a central atom in the crystal graph, effectively capturing bond angle information. ALIGNN then applies another set of graph convolutional layers to the line graph, which updates the feature vectors of each bond based on the features of neighboring bonds. The information from the line graph is then propagated back to the original crystal graph to update the bond features in combination with the node features. 4. Feature refinement: After the line graph convolution, ALIGNN refines the feature vectors for each edge using a set of learnable transformations that help capture more complex interactions between atoms and bonds. 5. Graph pooling: ALIGNN aggregates the refined bond-level feature vectors into a single graph-level feature vector using a graph pooling operation that summarizes the relevant information from the entire crystal graph. 6. Output prediction: Finally, the graph-level feature vector is fed to an FCNN for making predictions. ## 5 Results and Discussion ### Testing GNN models on Dataset 1 As a first step, we tested the performance of CGCNN, MEGNET, and ALIGNN for predicting the q=0 E\({}^{f}\) for Dataset 1 only. For each model, the data is split 60:20:20 into training, validation, and test sets. The CGCNN training protocol has several important hyperparameters that must either be kept fixed at recommended values or rigorously tuned over a set of possible values, such as the number of properties used in the atom feature vector, the number of convolutional layers, the length of the learned atom feature vector, the number of hidden layers, the regularization term, the scaling factor of the Gaussian initialization of weights, step size of the Adam optimizer [56], dropout fraction, batch size, number of epochs, and the cut-off distance r\({}_{c}\) for constructing the crystal graph. Here, we optimized only the batch size, epochs and r\({}_{c}\), keeping the rest of the hyperparameters the same as in the original CGCNN publication [36]. A parity plot for the best CGCNN predictions on Dataset 1 (for A-rich conditions) is pictured in **Fig. 4(a)**, showing RMSE of 0.19 eV, 0.41 eV, and 0.35 eV respectively on the training, validation, and test sets. 
These errors are already far lower than our previously published DFE test prediction errors of \(\sim\) 0.6 eV for defects in Cd-based chalcogenides [24] and \(\sim\) 1 eV across many group IV, II-VI, and III-V semiconductors [12], as well as being highly competitive with the state of the art for such predictions. Learning curves showing how the CGCNN errors converge as the epochs, batch size, and r\({}_{c}\) increase are presented in **Fig. S3**. Next, we trained a MEGNET model as shown in **Fig. 4(b)** following the same strategy. Notable hyperparameters include the number of interaction blocks, number of hidden layers, hidden layer size, learning rate, regularization strength, dropout rate, batch size, activation function, number of features assigned to each bond in the input graph, and r\({}_{c}\). Here, we only optimized the number of epochs, batch size, and r\({}_{c}\), and the rest of the parameters are directly adopted from the MEGNET publication [37]. We find that MEGNET once again performs much better than previous composition-based models, but shows slightly larger errors than CGCNN with RMSE of 0.36 eV, 0.42 eV, and 0.40 eV on the training, validation, and test sets, respectively. The test error is close enough to the CGCNN error of 0.35 eV to justify the use of MEGNET over CGCNN, especially since MEGNET significantly corrects any possible overfitting in the CGCNN models by yielding roughly similar training, validation, and test errors. MEGNET has a more complex model architecture than CGCNN and in cludes elemental embeddings encoding periodic chemical trends which may allow better generalization to unseen data. Finally, we trained an ALIGNN model on Dataset 1 and found that it yields better performance than both CGCNN and MEGNET, with a slight concern of model overfitting alleviated by the remarkably low values of validation and test RMSEs. As shown in **Fig. 4(c)**, the test RMSE is 0.15 eV, which represents a 99 % accuracy considering the DFE values range from 0 eV to 15 eV. The reason for the improvement from CGCNN and MEGNET could be attributed to the line graph convolution step that captures long-range interactions, which may be important for particular point defects whose presence affects atomic arrangements beyond just the first nearest neighbor shell, causing larger lattice distortions. For training ALIGNN models, we use r\({}_{c}\) = 8 A, 12 nearest neighbors, 4 graph convolutional layers, 4 line graph layers, learning rate = 0.001, batch size = 16, an Adamw optimizer, and 150 epochs. Results of hyperparameter optimization with ALIGNN are presented in **Fig. S4**, which is a more computationally expensive step than for the other two methods, motivating the use of much of the same hyperparameters as in past work [38]. However, the training time is still reasonable and the accuracy payoff is immense; thus, we use ALIGNN as the GNN scheme of choice going forward, and discuss prediction performances for Datasets 2, 3, and 4 in the next subsection. **Table 2** lists the optimized training, validation, and test set RMSE values for CGCNN, MEGNET, and ALIGNN models trained on Dataset 1. ### Extending ALIGNN to Datasets 2, 3, and 4 To determine whether the GNN models optimized so far could directly be applied to impurities, complexes, and alloys, we first tested the ALIGNN model trained on dataset 1 for their predictive power on datasets 2, 3, and 4, before suitably re-optimizing the models by adding more training data points. **Fig. 
5** shows the prediction performance of the ALIGNN model trained only on dataset 1 for dataset 2 (a), dataset 3 (b), and dataset 4 (c), along with the improvement in the predictions when 50 % of any new dataset is added to the training set and the ALIGNN model is retrained. The RMSE for the dataset 1-trained ALIGNN model is as high as 2.17 eV for dataset 2, 2.68 eV for dataset 3, and 2.98 eV for dataset 4, showing very poor performances that become worse going from native defects to impurities to alloys to complexes. The structure-property relationships learned from native defects alone cannot be generalized to extrinsic species or non-binary compounds, or indeed, the presence of multiple simultaneous defects that will inevitably cause far higher distortions and atomic rearrangements compared to single defects. Upon combining 50 % of each dataset (chosen randomly) with dataset 1 and re-optimizing the model using the same approach as before, and performing necessary hyperparameter optimization anew, RMSE values improve drastically to 0.36 eV for dataset 2, 0.70 eV for dataset 3, and 0.18 eV for dataset 4. These errors will go further down as more data is added to the training set, showing that each type of defect data needs to be adequately represented during the training process for generalizing the ALIGNN predictive power. This exercise provides insights into the challenges associated with representing and predicting the energetics of defects in different defect categories with a limited training dataset and demonstrates the importance of training ML models on comprehensive datasets to improve their performance Fig. 4: Parity plots for rigorously optimized (a) CGCNN, (b) MEGNET, and (c) ALIGNN models, trained on Dataset 1 for A-rich chemical potential conditions and q = 0. across various defect types. Next, we trained predictive models by combining all four datasets, for all charge states and chemical potential conditions. For charged defects, the DFE value is taken to be E\({}^{\prime}\)(E\({}_{F}\)=0), because by predicting this value for each charge state, the E\({}^{\prime}\) vs E\({}_{F}\) plot can be extended across the band gap region by using straight lines with slope = q. Thus, a total of 10 different ALIGNN models are trained for the combined dataset, for E\({}^{\prime}\)(q= + 2, E\({}_{F}\)=0), E\({}^{\prime}\)(q= + 1, E\({}_{F}\)=0), E\({}^{\prime}\)(q=0, E\({}_{F}\)=0), E\({}^{\prime}\)(q=-1, E\({}_{F}\)=0), and E\({}^{\prime}\)(q=-2, E\({}_{F}\)=0), at A-rich and B-rich chemical potential conditions. As seen in **Fig. S5**, there are a handful of outliers in the parity plots, especially in the case of E\({}^{\prime}\)(q=0, E\({}_{F}\)=0), which may result from some structures getting stuck in local minima during DFT optimization, and possible inaccuracies in the GNN model that may be fixed with more data and/or hyperparameter optimization. We removed a few notable outliers and trained the models again, to obtain the best ALIGNN predictive models that may be applied for new data points. The q= +1, q=0, and q=-1 ALIGNN models at A-rich conditions are pictured in **Fig. 6(a), (b)**, and **(c)**, respectively, and the remaining 7 models are presented in **Figs. S6** and S7. These models show very respectable RMSE values, suggesting that ALIGNN is capable of effectively learning the structure-property correlations in each dataset and each charge state, and generalizing them across all data types. 
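As background for how the per-charge-state predictions are combined downstream, once \(E^{f}(q,E_{F}=0)\) is available for two charge states, the corresponding charge transition level follows from the standard relation (stated here only for clarity; it is not specific to this work):

\[\varepsilon(q_{1}/q_{2})=\frac{E^{f}(q_{1};E_{F}=0)-E^{f}(q_{2};E_{F}=0)}{q_{2}-q_{1}},\]

i.e., the Fermi level at which the lines \(E^{f}(q,E_{F})=E^{f}(q,E_{F}=0)+q\,E_{F}\) for the two charge states cross.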
Test prediction errors for q=+2, q=+1, q=0, q=-1, and q=-2 are found to be 0.30 eV, 0.23 eV, 0.32 eV, 0.25 eV, and 0.26 eV, respectively, representing a 98 % accuracy. The slightly larger errors for the neutral defects arise from the larger structural diversity of q=0 defect structures compared to the charged defect structures, which also manifests in much larger numbers of q=0 data points (e.g., 13,966 in dataset 2) than q=+1 (3990 in dataset 2) or q=-1 (3568 in dataset 2). The training, validation, and test errors for the best ALIGNN models for different charge states under A-rich conditions are listed in **Table 3**.

### ALIGNN-unoptimized vs DFT-optimized energies

The next objective is to utilize the best ALIGNN models to make predictions for new defects and perform screening of potentially low-energy defects based on predicted DFEs. The caveat here is that for any new defect, one could only really generate an "unoptimized" defect structure, and thus the only ALIGNN prediction that can be made is the unoptimized DFE. As described earlier, a full DFT optimization of any defect structure is obviously a time-consuming step, involving structural relaxation until the atomic forces become close to zero and the energy of the crystal reaches a minimum. In contrast, ALIGNN prediction of the unoptimized DFE can be performed in seconds and thus used to estimate the unoptimized energies of hundreds of thousands of defect structures, which could then be used as a surrogate for screening based on some upper-bound values [57, 58]. However, ALIGNN predictions could also, in theory, be used to replace DFT-based energy estimates within a gradient descent-type optimization process [59, 60, 61], or within brute-force structure generation and energy evaluation, and thus quickly yield low-energy structures at near-DFT accuracy for any hypothetical crystalline defects.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **GNN Scheme** & **Train RMSE (eV)** & **Validation RMSE (eV)** & **Test RMSE (eV)** \\ \hline CGCNN & 0.19 & 0.41 & 0.35 \\ \hline MEGNET & 0.36 & 0.42 & 0.40 \\ \hline ALIGNN & 0.03 & 0.16 & 0.15 \\ \hline \end{tabular} \end{table} Table 2: Training, validation, and test set RMSEs for different GNN models trained on Dataset 1 at A-rich conditions.

Figure 5: Parity plots for ALIGNN models trained on (q=0, A-rich) (a) Dataset 1 + Dataset 2, (b) Dataset 1 + Dataset 3, and (c) Dataset 1 + Dataset 4. (l) refers to models trained purely on Dataset 1 and tested on Dataset 2, 3 or 4, whereas (l) refers to 50 % of the new dataset added to the training set with Dataset 1.

To test the correspondence between ALIGNN-unoptimized energies and DFT-optimized energies, we plotted the ALIGNN-predicted E\({}^{f}\)(q=0) (ALIGNN-unopt) for pristine structures of all defects across datasets 1, 2, 3, and 4 against the DFT-optimized E\({}^{f}\)(q=0) (DFT-opt) in **Fig. 7(a)**. We expect the ALIGNN-unopt DFE values to always be higher than the DFT-opt values, and this is indeed the case for \(>\) 95 % of the data points. However, we do find the opposite to be true for several defects, which is perhaps a consequence of the statistical nature of ML predictions, which are very accurate on average but may show large errors for certain outliers. Importantly, notable differences between ALIGNN-unopt and DFT-opt values are seen for many defects where large structural changes are expected upon full relaxation.
Some examples include V\({}_{Si}\) in SiC (8.28 eV difference), V\({}_{In}\) in InN (7.63 eV difference), and Cs\({}_{i,\alpha r}\) in SiC (7.17 eV difference). By examining the 300 defects out of this total list of 1747 defects (plotted in **Fig. 7(a)**) which show the largest ALIGNN-unopt vs DFT-opt energy differences, we find an average difference of \(\sim\) 1 eV; thus, we could apply a rough general equivalence of DFT-opt = ALIGNN-unopt - 1 eV, and use this information to perform a high-throughput screening of likely low-energy defects. Similar trends are expected to hold for ALIGNN-unopt vs DFT-opt energies of charged defects as well. Looking at only the DFT-opt values, we find that 170 defects have DFE \(<\) 0 eV, with some examples including N\({}_{S}\) in ZnS (-3.76 eV), N\({}_{As}\) in GaAs (-3.2 eV), and N\({}_{Te}\) in CdTe (-2.64 eV). A look at the ALIGNN-unopt values, on the other hand, yields 351 defects with DFE \(<\) 1 eV, which include all 170 low-energy defects from DFT, meaning that the suggested upper-bound energy screening procedure should help account for all potentially low-energy defects in addition to a few unstable defects which may be eliminated subsequently. On average, the computational cost for optimizing a single point defect in a 64-atom supercell amounts to approximately 2500 core hours for the neutral state and 1000 core hours each for charged calculations. Running only static, single-shot calculations on pristine structures requires around 400 core hours in total. On the other hand, ALIGNN-unopt predictions are made in seconds with minimal computing expense, and it is imperative to use these predictions as descriptors for low-energy defects.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Charge** & **Train RMSE (eV)** & **Validation RMSE (eV)** & **Test RMSE (eV)** \\ \hline q = +2 & 0.10 & 0.25 & 0.30 \\ \hline q = +1 & 0.07 & 0.27 & 0.23 \\ \hline q = 0 & 0.20 & 0.30 & 0.32 \\ \hline q = -1 & 0.11 & 0.26 & 0.25 \\ \hline q = -2 & 0.06 & 0.26 & 0.26 \\ \hline \end{tabular} \end{table} Table 3: Training, validation, and test set RMSEs for ALIGNN models trained on different charge states at A-rich conditions.

Figure 6: Parity plots for ALIGNN models trained on Datasets 1 + 2 + 3 + 4, for (a) E\({}^{f}\)(q=+1, E\({}_{F}\)=0), (b) E\({}^{f}\)(q=0, E\({}_{F}\)=0), and (c) E\({}^{f}\)(q=-1, E\({}_{F}\)=0), under A-rich chemical potential conditions. Parity plots for E\({}^{f}\)(q=+2, E\({}_{F}\)=0) and E\({}^{f}\)(q=-2, E\({}_{F}\)=0), as well as all parity plots for B-rich DFEs, are presented in the SI.

Visualizing the ALIGNN-unopt vs DFT-opt values for q = +2, +1, -1, and -2 in **Fig. S8** shows that there are more cases where the unoptimized energy is unexpectedly higher than the optimized energy. This may be a consequence of the charged models never encountering pristine structures, as those are typically only utilized in neutral calculations and training sets. The charged models are only trained on partially optimized structures close to the ground state, or simply the ground state defect structures, making q \(\neq\) 0 ALIGNN predictions for pristine structures less reliable than the neutral state predictions, even though the best charged models still have very low test RMSE values.
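A minimal sketch of this upper-bound screening rule is given below; the defect labels and energies are invented placeholders, and the 1 eV correction is simply the rough average unopt-opt gap estimated above.

```python
# Sketch of the upper-bound screening: since on average DFT-opt ~= ALIGNN-unopt - 1 eV,
# any defect whose ALIGNN-unoptimized DFE falls below (stability cutoff + 1 eV) is kept
# as a candidate for full DFT follow-up. Energies below are placeholders, not predictions.
STABILITY_CUTOFF_EV = 0.0        # DFT-optimized DFE below this => likely stable
RELAXATION_CORRECTION_EV = 1.0   # rough average ALIGNN-unopt minus DFT-opt gap
upper_bound = STABILITY_CUTOFF_EV + RELAXATION_CORRECTION_EV

alignn_unopt_dfe = {
    "N_S in ZnS": -2.9,
    "N_As in GaAs": -2.4,
    "B_i in AlP": 1.7,
    "V_Si in SiC": 9.1,
}

candidates = sorted(name for name, e in alignn_unopt_dfe.items() if e < upper_bound)
print("defects to pass to DFT optimization:", candidates)
```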
Finally, we examine the accuracy of predicting charge transition levels (CTLs) from ALIGNN compared to optimized DFT predictions. For any possible defect, a pristine unoptimized structure is created as described earlier and its DFE values are predicted at E\({}_{F}\) = 0 eV for q = 0, +2, +1, -1, and -2. Using these values, E\({}^{f}\) vs E\({}_{F}\) plots are produced based on the lowest energies, and the locations where the defect transitions from one stable charge state to another, referred to as \(\varepsilon\)(q1/q2), are identified. This effectively yields ALIGNN-unopt values for \(\varepsilon\)(+2/+1), \(\varepsilon\)(+1/0), \(\varepsilon\)(0/-1), and \(\varepsilon\)(-1/-2), which may then be compared with the corresponding DFT-opt values available for datasets 1, 2, and 3. **Fig. 7(b)** shows the ALIGNN-unopt \(\varepsilon\)(+2/+1) plotted against the DFT-opt \(\varepsilon\)(+2/+1), revealing substantial scatter and a lack of overall correlation. Similar behavior is observed for other CTLs as well, as shown in **Fig. S9**. This is not surprising, and indicates that the relative stability of different charged defects, and thus their transitions, is sensitive to the defect geometry and would therefore require some level of optimization. Thus, we conclude that although ALIGNN-unopt E\({}^{f}\)(E\({}_{F}\)=0) predictions within a threshold will provide some idea about the likelihood of formation of defects in a compound and its possible defect tolerance and dopability, exact CTLs will be harder to determine without optimization, which means ALIGNN-unopt alone cannot reveal the shallow or deep level nature of important low-energy defects.

### ALIGNN-based defect structure optimization

We employed our trained ALIGNN model to optimize a crystal containing point defects using an iterative distortion approach. This gradient-free optimization method entails subjecting the initial defective crystal structure to a series of systematic atomic displacements. Each atom in the crystal is displaced based on a normal distribution, and the displacements are carefully adjusted to ensure no overall translation of the system. The magnitude of these displacements ranges from zero (representing no displacement) to a maximum value specified as a user input. The best ALIGNN models are applied to predict the E\({}^{f}\) of all generated crystals in all 5 q states. This procedure is iteratively repeated and the applied distortions are adjusted until the predicted DFE becomes as low as possible. This approach allows the efficient exploration of the energy landscape of the defective crystal, seeking configurations that approach the optimal structure. Another advantage of this gradient-free approach is that it does not rely on explicit training for atomic forces, which would otherwise significantly increase the memory cost of GNNs at both training and prediction time.

Fig. 7: (a) ALIGNN-unoptimized vs DFT-optimized E\({}^{f}\)(q=0, E\({}_{F}\)=0) under A-rich chemical potential conditions. (b) ALIGNN-unoptimized vs DFT-optimized \(\varepsilon\)(+2/+1) charge transition levels. All data are shown for combined datasets 1 + 2 + 3 + 4.

For a demonstration of this procedure, we pick two defects in the neutral state, namely Re\({}_{\text{Zn}}\) and La\({}_{\text{Zn}}\), both in ZnO. The ALIGNN-based defective structure optimization scheme is pictured in **Fig. 8(a)**, and results of the optimization procedure for Re\({}_{\text{Zn}}\) and La\({}_{\text{Zn}}\) in ZnO are presented in **Fig. 8(b)** and **(c)**, respectively.
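A minimal sketch of this iterative-distortion loop is given below. The energy function is a toy stand-in for the trained ALIGNN predictor, and the displacement scale, iteration count, and acceptance rule (keep only energy-lowering moves) are illustrative assumptions rather than the exact settings used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_dfe(positions):
    """Stand-in for the trained ALIGNN DFE predictor (here: a toy quadratic).
    In the real workflow this would featurize the structure as a graph and
    return the predicted formation energy in eV."""
    return float(np.sum(positions ** 2))

def optimize_defect(positions, max_disp=0.1, n_iters=500):
    """Gradient-free optimization by random atomic displacements.

    Each trial displaces every atom by a normal-distributed vector, removes
    the net translation, and keeps the trial only if the predicted DFE drops.
    """
    best_pos = positions.copy()
    best_e = predict_dfe(best_pos)
    for _ in range(n_iters):
        disp = rng.normal(scale=max_disp, size=best_pos.shape)
        disp -= disp.mean(axis=0)          # no overall translation of the cell
        trial = best_pos + disp
        e = predict_dfe(trial)
        if e < best_e:                     # accept only energy-lowering moves
            best_pos, best_e = trial, e
    return best_pos, best_e

# Toy 64-atom "supercell" with random coordinates, for demonstration only.
start = rng.uniform(-1.0, 1.0, size=(64, 3))
final_pos, final_e = optimize_defect(start)
print(f"DFE lowered from {predict_dfe(start):.2f} to {final_e:.2f} (toy units)")
```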
After applying 644 consecutive distortions to the Re\({}_{\text{Zn}}\) geometry, the ALIGNN-predicted DFE comes down from 5.93 eV to 5.31 eV, which is very close to the DFT-optimized DFE of 5.30 eV. For La\({}_{\text{Zn}}\), applying a total of 2407 distortions helps reduce the ALIGNN-predicted DFE from 5.23 eV to 3.35 eV, which is more than 1 eV higher than the DFT-optimized DFE of 2.20 eV. ALIGNN-optimization required approximately 12 minutes for Re\({}_{\text{Zn}}\) and 40 minutes for La\({}_{\text{Zn}}\) on a standard laptop to generate defect configurations and make instant ALIGNN predictions, which is a vast improvement on the \(\sim\) 2500 core hours each DFT optimization would require. Thus, this procedure will efficiently yield lower-energy defective structures, though an exact match with DFT may not occur for every system. Finally, examining the lowest-energy ALIGNN and DFT structures shows remarkable similarities between the two, as pictured in **Fig. S10**.

Next, we applied the same optimization scheme to 6 selected defects across 3 compounds, for different charge states, and plotted their ALIGNN-optimized E\({}^{f}\) vs E\({}_{F}\) in **Fig. 9** alongside the corresponding DFT-optimized plots. We find that ALIGNN-optimization produces defect formation energy plots for Co\({}_{\text{Zn}}\) and Si\({}_{\text{Zn}}\) in ZnS (II-VI semiconductor), Rh\({}_{i}\) and B\({}_{i}\) in AlP (III-V semiconductor), and Li\({}_{Si}\) and C\({}_{i}\) in Si (group IV semiconductor) that match almost perfectly with the DFT DFEs in all cases. The most stable charge states, transition levels, and E\({}^{f}\) magnitudes are predicted to be very similar from both ALIGNN and DFT. **Fig. 10** further shows the CTLs for a few selected defects plotted from ALIGNN and DFT, showing both ALIGNN-unopt and ALIGNN-opt values. It can be seen that ALIGNN-optimization brings down the DFT vs ALIGNN RMSE from 0.37 eV to 0.17 eV, which is a very respectable CTL prediction error that is far better than previous composition-based models and also commensurate with the DFT-experiment RMSE of 0.21 eV established for the same chemical space.

Fig. 8: (a) ALIGNN-based defect structure optimization scheme, demonstrated for Re\({}_{\text{Zn}}\) (b) and La\({}_{\text{Zn}}\) (c) in ZnO under Zn-rich chemical potential conditions.

Our results demonstrate the effectiveness of GNNs in guiding crystal structure optimization and highlight their potential for accelerating materials discovery and design. It should be noted that the geometry optimization process is somewhat heuristic: there is no clear answer on how many distortions or atomic displacements must be applied for a given defect, although the unoptimized vs optimized energy visualization provides some insight for different types of defects. It is not easy to determine when to stop the optimization process, other than when the GNN-predicted energy no longer decreases, which does not eliminate the risk of trapping in local minima. This process can also become expensive when applied to hundreds of thousands of defects, especially depending on the values of hyperparameters such as r\({}_{c}\); nevertheless, it is still meaningfully faster than complete DFT optimization.

### High-throughput screening of defects

The best ALIGNN models were finally applied to predict the E\({}^{f}\) (E\({}_{F}=0\) eV) of all 12,474 possible single defects and impurities across the entire chemical space, in all 5 q states, at A-rich chemical potential conditions.
These predictions were then used to generate E\({}^{f}\) vs E\({}_{F}\) plots for all defects spanning the experimental band gap of the semiconductor. To screen for potentially low-energy defects, we look for E\({}^{f}\) becoming negative for any portion of the band gap, using a stringent threshold of 1 eV for neutral defects and 0 eV for charged defects. This yields a total of 1,281 defects that are very likely to be stable based on the ALIGNN-unopt predictions, though many more such low-energy defects may be identified once ALIGNN-optimization is performed. **Table 4** contains a few examples of low-energy defects identified by ALIGNN. Predictions of the DFE by ALIGNN for all 12,474 possible defects at A-rich chemical potential conditions are added to the SI. This provides a great starting point for future studies and the quick identification of n-type or p-type dopants in any compound of interest.

Fig. 9: ALIGNN-optimized and DFT-optimized defect formation energy plots for two selected defects each in (a) ZnS under Zn-rich conditions, (b) AlP under Al-rich conditions, and (c) Si under Si-rich conditions.

Fig. 10: ALIGNN-optimized and ALIGNN-unoptimized defect charge transition levels plotted against corresponding DFT-optimized values.

## 6 Conclusions

In this work, we used state-of-the-art crystal graph-based neural networks to develop predictive models for defect formation energies in a chemical space of zincblende semiconductors, by learning from a substantial computational dataset containing optimized and partially optimized geometries. Three established GNN techniques, namely CGCNN, MEGNET, and ALIGNN, are tested in this work. The ALIGNN scheme shows the best prediction performance and is capable of high-accuracy prediction for native defects, impurities, complexes, and defects in alloys. While ALIGNN predictions made on hypothetical pristine defect structures deviate significantly from DFT-optimized defect formation energies, we demonstrate an ALIGNN-based defective geometry optimization approach which helps bridge this gap and brings down errors in predicting charge transition levels. The ALIGNN-unoptimized predictions made for the entire chemical space of \(>\) 12,000 possible defects are released with this manuscript, along with the necessary code and training data. We believe the DFT-GNN approach presented in this work will be highly consequential for screening across optoelectronically active point defects and functional dopants in technologically important semiconductors, even being applicable to all kinds of defect complexes. The next steps would involve developing a package to perform ALIGNN-based defect optimization, expanding the models to other semiconductor classes and higher levels of theory, and testing alternative ML and GNN approaches for further improvement.

## Conflicts of Interest

There are no conflicts to declare.

## Data Availability

All DFT data and GNN predictions are included with the SI as .csv files. All code can be found on GitHub: Link

## Acknowledgements

A.M.K. acknowledges support from the School of Materials Engineering at Purdue University under account number F.10023800.05.002, as well as support from Argonne National Laboratory under sub-contracts 21090590 and 22057223. A.M.K. also acknowledges insightful discussions with Dr. Mariana Bertoni at Arizona State University, Dr. Prashun Gorai at Colorado School of Mines, and Dr. Maria K.Y. Chan at Argonne National Laboratory.
This research used resources from the National Energy Research Scientific Computing Center (NERSC), the Laboratory Computing Resource Center (LCRC) at Argonne National Laboratory, and the Rosen Center for Advanced Computing (RCAC) clusters at Purdue University. P.G. acknowledges IIT Madras for providing financial assistance through the "International Immersion Experience Travel Award" to visit Purdue University. Please note that commercial software is identified to specify procedures; such identification does not imply a recommendation by the National Institute of Standards and Technology (NIST).

## Author Contributions

A.M.K. conceived and planned the research project. DFT computations and GNN model training were performed by M.H.R., P.G., P.M., and A.M.K.; S.K.Y., G.P., B.D., and K.C. provided constant guidance and software support for the project and helped edit the manuscript. M.H.R. and A.M.K. took the lead on writing and editing.
A framework is developed for predicting and screening native defects and functional impurities across the chemical space of group IV, III-V, and II-VI zincblende (ZB) semiconductor materials, powered by crystal graph-based neural networks (GNNs) trained on high-throughput density functional theory (DFT) data. Using an innovative approach of sampling partially optimized defect configurations from DFT calculations, the largest computational defect dataset to date is generated. This dataset contains various types of vacancies, self-interstitials, antisite substitutions, impurity interstitials and substitutions, as well as some defect complexes. Using three established GNN techniques, namely CGCNN (Crystal Graph Convolutional Neural Network), MEGNET (Materials Graph Network), and ALIGNN (Atomistic Line Graph Neural Network), multiple charge states and
2305.20057
Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance
Multi-objective learning (MOL) problems often arise in emerging machine learning problems when there are multiple learning criteria, data modalities, or learning tasks. Different from single-objective learning, one of the critical challenges in MOL is the potential conflict among different objectives during the iterative optimization process. Recent works have developed various dynamic weighting algorithms for MOL such as MGDA and its variants, where the central idea is to find an update direction that avoids conflicts among objectives. Albeit its appealing intuition, empirical studies show that dynamic weighting methods may not always outperform static ones. To understand this theory-practical gap, we focus on a new stochastic variant of MGDA - the Multi-objective gradient with Double sampling (MoDo) algorithm, and study the generalization performance of the dynamic weighting-based MoDo and its interplay with optimization through the lens of algorithm stability. Perhaps surprisingly, we find that the key rationale behind MGDA -- updating along conflict-avoidant direction - may hinder dynamic weighting algorithms from achieving the optimal ${\cal O}(1/\sqrt{n})$ population risk, where $n$ is the number of training samples. We further demonstrate the impact of the variability of dynamic weights on the three-way trade-off among optimization, generalization, and conflict avoidance that is unique in MOL. We showcase the generality of our theoretical framework by analyzing other existing stochastic MOL algorithms under the framework. Experiments on various multi-task learning benchmarks are performed to demonstrate the practical applicability. Code is available at https://github.com/heshandevaka/Trade-Off-MOL.
Lisha Chen, Heshan Fernando, Yiming Ying, Tianyi Chen
2023-05-31T17:31:56
http://arxiv.org/abs/2305.20057v3
Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance

###### Abstract

Multi-objective learning (MOL) problems often arise in emerging machine learning problems when there are multiple learning criteria or multiple learning tasks. Recent works have developed various _dynamic weighting_ algorithms for MOL such as MGDA and its variants, where the central idea is to find an update direction that _avoids conflicts_ among objectives. Despite its appealing intuition, empirical studies show that dynamic weighting methods may not always outperform static ones. To understand this theory-practical gap, we focus on a new stochastic variant of MGDA - the Multi-objective gradient with Double sampling (MoDo) algorithm, and study the generalization performance of the dynamic weighting-based MoDo and its interplay with optimization through the lens of algorithm stability. Perhaps surprisingly, we find that the key rationale behind MGDA - updating along conflict-avoidant direction - may _hinder_ dynamic weighting algorithms from achieving the optimal \(\mathcal{O}(1/\sqrt{n})\) population risk, where \(n\) is the number of training samples. We further demonstrate the impact of the variability of dynamic weights on the three-way trade-off among optimization, generalization, and conflict avoidance that is unique in MOL.

## 1 Introduction

Multi-objective learning (MOL) emerges frequently in recent machine learning problems such as learning under fairness and safety constraints [45]; learning across multiple tasks including multi-task learning [36] and meta-learning [43]; and, learning across multiple agents that may not share a global utility including federated learning [37] and multi-agent reinforcement learning [29]. This work considers solving the empirical version of MOL defined on the training dataset \(S=\{z_{1},\ldots,z_{n}\}\). The performance of a model \(x\in\mathbb{R}^{d}\) on a datum \(z\) for the \(m\)-th objective is denoted as \(f_{z,m}:\mathbb{R}^{d}\mapsto\mathbb{R}\), and its performance on the entire training dataset \(S\) is measured by the \(m\)-th empirical objective \(f_{S,m}(x)\) for \(m\in[M]\). MOL optimizes the vector-valued objective, given by \[\min_{x\in\mathbb{R}^{d}}\ \ F_{S}(x)\coloneqq[f_{S,1}(x),\ldots,f_{S,M}(x)]. \tag{1.1}\] One natural method for solving (1.1) is to optimize the (weighted) average of the multiple objectives, also known as _static or unitary weighting_ [41, 15]. However, this method may face challenges due to _potential conflicts_ among multiple objectives during the optimization process; e.g., conflicting gradient directions \(\langle\nabla f_{S,m}(x),\nabla f_{S,m^{\prime}}(x)\rangle<0\). A popular alternative is thus to _dynamically weight_ gradients from different objectives to avoid conflicts and obtain a direction \(d(x)\) that optimizes all objective functions jointly, which we call a _conflict-avoidant_ (CA) direction. Algorithms in this category include the multi-gradient descent algorithm (MGDA) [7], its stochastic variants [26, 36, 8] and other variants [4, 24, 28, 31]. While the idea of finding the CA direction in dynamic weighting-based approaches is very appealing, recent empirical studies reveal that dynamic weighting methods may not outperform static weighting in some MOL benchmarks [15, 41], especially when it involves stochastic updates and deep models. Unfortunately, the reason behind this empirical performance degradation is not fully understood and remains an open question. 
To gain a deeper understanding of the dynamic weighting-based algorithms, a natural question is **Q1:**_What are the major sources of errors in dynamic weighting-based MOL methods?_ To answer this question theoretically, we first introduce a proper measure of testing performance in MOL - the _Pareto stationary measure_ in terms of the population objectives, which will immediately imply stronger measures such as Pareto optimality under strongly convex objectives. We then decompose this measure into _generalization_ error and _optimization_ error and further introduce a new metric on the _distance to CA directions_ that is unique to MOL; see Sections 2.1 and 2.2. To characterize the performance of MOL methods in a unified manner, we introduce a generic dynamic weighting-based MOL method that we term stochastic Multi-Objective gradient with DOuble sampling algorithm (**MoDo**), which uses a step size \(\gamma\) to control the change of dynamic weights. Roughly speaking, by controlling \(\gamma\), MoDo approximates MGDA (large \(\gamma\)) and static weighting algorithm (\(\gamma=0\)) as two special cases; see Section 2.3. We first analyze the generalization error of the model learned by MoDo through the lens of algorithmic stability [2, 12, 21] in the framework of statistical learning theory. To our best knowledge, this is the _first-ever-known_ stability analysis for MOL algorithms. Here the key contributions lie in defining a new notion of stability - MOL uniform stability and then establishing a tight upper bound (matching lower bound) on the MOL uniform stability for MoDo algorithm that involves two coupled sequences; see Section 3.1. We then analyze the optimization error of MoDo and its distance to CA directions, where the key contributions lie in relaxing _the bounded function value/gradient assumptions_ and significantly improving the convergence rate of state-of-the-art dynamic weighting-based MOL methods [8]; see Section 3.2. Different from the stability analysis for single-objective optimization [12], the techniques used in our generalization and optimization analysis allow to remove conflicting assumptions and use larger step sizes to ensure both small generalization and optimization errors, which are of independent interest. Given the holistic analysis of dynamic weighting methods provided in **Q1**, a follow-up question is **Q2:**_What may cause the empirical performance degradation of dynamic weighting methods?_ _Visualizing MOL solution concepts._ To reason the root cause for this, we first compare different MOL algorithms in a toy example shown in Figure 1. We find MGDA can navigate along CA directions and converge to the empirical Pareto front under all initializations, while static weighting gets stuck in some initializations; at the same time, the empirical Pareto solution obtained by MGDA may incur a larger population risk than the suboptimal empirical solution obtained by the static weighting Figure 1: An example from [25] with two objectives (1a and 1b) to show the three-way trade-off in MOL. Figures 0(c)-0(e) show the optimization trajectories, where the **black**\(\bullet\) marks initializations of the trajectories, colored from **red** (start) to yellow (end). The background solid/dotted contours display the landscape of the average empirical/population objectives. The gray/green bar marks empirical/population Pareto front, and the **black**\(\star\)/**green**\(\star\) marks solution to the average objectives. 
method; finally, if the step size \(\gamma\) of dynamic weights is carefully tuned, MoDo can converge along CA directions to the empirical Pareto optimal solution that also generalizes well. Aligned with this toy example, our theoretical results suggest a novel _three-way trade-off_ in the performance of dynamic weighting-based MOL algorithm; see Section 3.3. Specifically, it suggests that the step size for dynamic weighting \(\gamma\) plays a central role in the trade-off among convergence to CA direction, convergence to empirical Pareto stationarity, and generalization error; see Figure 2. In this sense, MGDA has an edge in convergence to CA direction but it could sacrifice generalization; the static weighting method cannot converge to CA direction but guarantees convergence to empirical Pareto solutions and their generalization. Our analysis also suggests that MoDo achieves a small population risk under a proper combination of step sizes and the number of iterations. ## 2 Problem Formulation and Target of Analysis In this section, we first introduce the problem formulation of MOL, the target of analysis, the metric to measure its generalization, and then present the MGDA algorithm and its stochastic variant. ### Preliminaries of MOL Denote the vector-valued objective function on datum \(z\) as \(F_{z}(x)=[f_{z,1}(x),\ldots,f_{z,M}(x)]\). The training and testing performance of \(x\) can then be measured by the empirical objective \(F_{S}(x)\) and the population objective \(F(x)\) which are, respectively, defined as \(F_{S}(x)\coloneqq\frac{1}{n}\sum_{i=1}^{n}F_{z_{i}}(x)\) and \(F(x)\coloneqq\mathbb{E}_{z\sim\mathcal{D}}[F_{z}(x)]\). Their corresponding gradients are denoted as \(\nabla F_{S}(x)\) and \(\nabla F(x)\in\mathbb{R}^{d\times M}\). Analogous to the stationary solution and optimal solution in single-objective learning, we define Pareto stationary point and Pareto optimal solution for MOL problem \(\min\limits_{x\in\mathbb{R}^{d}}F(x)\) as follows. **Definition 1** (Pareto stationary and Pareto optimal).: _If there exists a convex combination of the gradient vectors that equals to zero, i.e., there exists \(\lambda\in\Delta^{M}\) such that \(\nabla F(x)\lambda=0\), then \(x\in\mathbb{R}^{d}\) is Pareto stationary. If there is no \(x\in\mathbb{R}^{d}\) and \(x\neq x^{*}\) such that, for all \(m\in[M]\)\(f_{m}(x)\leq f_{m}(x^{*})\), with \(f_{m^{\prime}}(x)<f_{m^{\prime}}(x^{*})\) for at least one \(m^{\prime}\in[M]\), then \(x^{*}\) is Pareto optimal. If there is no \(x\in\mathbb{R}^{d}\) such that for all \(m\in[M]\), \(f_{m}(x)<f_{m}(x^{*})\), then \(x^{*}\) is weakly Pareto optimal._ By definition, at a Pareto stationary solution, there is no common descent direction for all objectives. A necessary and sufficient condition for \(x\) being Pareto stationary for smooth objectives is that \(\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|=0\). Therefore, \(\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|\) can be used as a measure of Pareto stationarity (PS) [7; 9; 39; 26; 8]. We will refer to the aforementioned quantity as the _PS population risk_ henceforth and its empirical version as _PS empirical risk_ or _PS optimization error_. We next introduce the target of our analysis based on the above definitions. ### Target of analysis and error decomposition In existing generalization analysis for MOL, measures based on function values have been used to derive generalization guarantees in terms of Pareto optimality [5; 38]. 
However, for general nonconvex smooth MOL problems, it can only be guaranteed for an algorithm to converge to Pareto Figure 2: An illustration of three-way trade-off among optimization, generalization, and conflict avoidance in the strongly convex case; \(\alpha\) is the step size for \(x\), \(\gamma\) is the step size for weights \(\lambda\), where \(o(\cdot)\) denotes a strictly slower growth rate, \(\omega(\cdot)\) denotes a strictly faster growth rate, and \(\Theta(\cdot)\) denotes the same growth rate. Arrows \(\downarrow\) and \(\uparrow\) respectively represent diminishing in an optimal rate and growing in a fast rate w.r.t. \(n\), while \(\searrow\) represents diminishing w.r.t. \(n\), but not in an optimal rate. stationarity of the empirical objective, i.e., a small \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|\)[7, 9, 26, 8]. Thus, it is not reasonable to measure population risk in terms of Pareto optimality in this case. Furthermore, when all the objectives are convex or strongly convex, Pareto stationarity is a sufficient condition for weak Pareto optimality or Pareto optimality, respectively, as stated in Proposition 1. **Proposition 1** ([39, Lemma 2.2]).: _If \(f_{m}(x)\) are convex or strongly-convex for all \(m\in[M]\), and \(x\in\mathbb{R}^{d}\) is a Pareto stationary point of \(F(x)\), then \(x\) is weakly Pareto optimal or Pareto optimal._ Next we proceed to decompose the PS population risk. **Error Decomposition.** Given a model \(x\), the PS population risk can be decomposed into \[\min_{\underbrace{\lambda\in\Delta^{M}}_{\text{PS population risk }R_{\text{pop}}(x)}}\ =\ \min_{\underbrace{\lambda\in\Delta^{M}}_{\text{A} \in\Delta^{M}}\|\nabla F(x)\lambda\|-\min_{\lambda\in\Delta^{M}}\|\nabla F_{S} (x)\lambda\|}+\min_{\underbrace{\lambda\in\Delta^{M}}_{\text{PS optimization error }R_{\text{opt}}(x)}} \tag{2.1}\] where the optimization error quantifies the training performance, i.e., how well does model \(x\) perform on the training data; and the generalization error (gap) quantifies the difference of the testing performance on new data sampled from \(\mathcal{D}\) and the training performance, i.e., how well does the model \(x\) perform on unseen testing data compared to the training data. Let \(A:\mathcal{Z}^{n}\mapsto\mathbb{R}^{d}\) denote a randomized MOL algorithm. Given training data \(S\), we are interested in the expected performance of the output model \(x=A(S)\), which is measured by \(\mathbb{E}_{A,S}[R_{\text{pop}}(A(S))]\). From (2.1) and linearity of expectation, it holds that \[\mathbb{E}_{A,S}[R_{\text{pop}}(A(S))]=\mathbb{E}_{A,S}[R_{\text{gen}}(A(S))] +\mathbb{E}_{A,S}[R_{\text{opt}}(A(S))]. \tag{2.2}\] **Distance to CA direction.** As demonstrated in Figure 1, the key merit of dynamic weighting over static weighting algorithms lies in its ability to navigate through conflicting gradients. Consider an update direction \(d=-\nabla F_{S}(x)\lambda\), where \(\lambda\) is the dynamic weights from a simplex \(\lambda\in\Delta^{M}\coloneqq\{\lambda\in\mathbb{R}^{M}\mid\mathbf{1}^{\top} \lambda=1,\;\lambda\geq 0\}\). 
To obtain such a steepest CA direction in unconstrained learning that maximizes the minimum descent of all objectives, we can solve the following problem [9] \[\text{CA direction}\quad d(x) =\operatorname*{arg\,min}_{d\in\mathbb{R}^{d}}\max_{m\in[M]} \left\{\langle\nabla f_{S,m}(x),d\rangle+\frac{1}{2}\|d\|^{2}\right\} \tag{2.3a}\] \[\stackrel{{\text{equivalent to}}}{{\Longleftrightarrow }}d(x) =-\nabla F_{S}(x)\lambda^{*}(x)\ \operatorname{s.t.}\ \lambda^{*}(x)\in \operatorname*{arg\,min}_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|^{2}. \tag{2.3b}\] Defining \(d_{\lambda}(x)=-\nabla F_{S}(x)\lambda\) given \(x\in\mathbb{R}^{d}\) and \(\lambda\in\Delta^{M}\), we measure the distance to \(d(x)\) via [8] \[\text{CA direction error}\qquad\qquad\mathcal{E}_{\text{ca}}(x,\lambda) \coloneqq\|d_{\lambda}(x)-d(x)\|^{2}. \tag{2.4}\] With the above definitions of measures that quantify the performance of algorithms in different aspects, we then introduce a stochastic gradient algorithm for MOL that is analyzed in this work. ### A stochastic gradient algorithm for MOL MGDA finds \(\lambda^{*}(x)\) in (2.3b) using the full-batch gradient \(\nabla F_{S}(x)\), and then constructs \(d(x)=-\nabla F_{S}(x)\lambda^{*}(x)\), a CA direction for all empirical objectives \(f_{S,m}(x)\). However, in practical statistical learning settings, the full-batch gradient \(\nabla F_{S}(x)\) may be costly to obtain, and thus one may resort to a stochastic estimate of \(\nabla F_{S}(x)\) instead. The direct stochastic counterpart of MGDA, referred to as the stochastic multi-gradient algorithm in [26], replaces the full-batch gradients \(\nabla f_{S,m}(x)\) in (2.3b) with their stochastic approximations \(\nabla f_{z,m}(x)\) for \(z\in S\), which, however, introduces a biased stochastic estimate of \(\lambda^{*}_{t+1}\), thus a biased CA direction; see [8, Section 2.3]. To provide a tight analysis, we introduce a simple yet theoretically grounded stochastic variant of MGDA - stochastic Multi-Objective gradient with DOuble sampling algorithm (MoDo). MoDo obtains an unbiased stochastic estimate of the gradient of problem (2.3b) through double sampling and iteratively updates \(\lambda\). At each iteration \(t\), denote \(z_{t,s}\) as an independent sample from \(S\) with \(s\in[3]\), and \(\nabla F_{z_{t,s}}(x_{t})\) as a stochastic estimate of \(\nabla F_{S}(x_{t})\). MoDo updates \(x_{t}\) and \(\lambda_{t}\) as \[\lambda_{t+1} =\Pi_{\Delta^{M}}\left(\lambda_{t}-\gamma_{t}\nabla F_{z_{t,1}}(x _{t})^{\top}\nabla F_{z_{t,2}}(x_{t})\lambda_{t}\right) \tag{2.5a}\] \[x_{t+1} =x_{t}-\alpha_{t}\nabla F_{z_{t,3}}(x_{t})\lambda_{t+1} \tag{2.5b}\] where \(\alpha_{t},\gamma_{t}\) are step sizes, and \(\Pi_{\Delta^{M}}(\cdot)\) denotes Euclidean projection to the simplex \(\Delta^{M}\). We have summarized the MoDo algorithm in Algorithm 1 and will focus on MoDo in the subsequent analysis. ## 3 Optimization, Generalization and Three-Way Trade-Off This section presents the theoretical analysis of the PS population risk associated with the MoDo algorithm, where the analysis of generalization error is in Section 3.1 and that of optimization error is in Section 3.2. A summary of our main results are given in Table 1. ### Multi-objective generalization and uniform stability We first bound the expected PS generalization error by the generalization in gradients in Proposition 2, then introduce the MOL uniform stability and establish its connection to the generalization in gradients. Finally, we bound the MOL uniform stability. 
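Before turning to the analysis, the double-sampling update (2.5) is simple enough to sketch in a few lines of NumPy. The sketch below assumes a user-supplied grad_fn(x, z) returning the \(d\times M\) matrix of per-objective gradients on a single datum; the sorting-based simplex projection and the toy quadratic objectives at the end are standard illustrative choices, not prescriptions from this paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the simplex {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def modo(grad_fn, x0, data, T=1000, alpha=0.01, gamma=0.001, seed=0):
    """Sketch of the MoDo double-sampling update (2.5); grad_fn(x, z) must
    return the d x M matrix of per-objective gradients on a single datum z."""
    rng = np.random.default_rng(seed)
    M = grad_fn(x0, data[0]).shape[1]
    x, lam = x0.copy(), np.full(M, 1.0 / M)
    for _ in range(T):
        z1, z2, z3 = (data[i] for i in rng.integers(len(data), size=3))
        g1, g2 = grad_fn(x, z1), grad_fn(x, z2)              # two independent samples
        lam = project_simplex(lam - gamma * (g1.T @ (g2 @ lam)))   # update (2.5a)
        x = x - alpha * (grad_fn(x, z3) @ lam)                      # update (2.5b)
    return x, lam

# Toy usage: two quadratic objectives per datum, f_m(x) = 0.5 * ||x - c_m||^2,
# where each datum z stores slightly perturbed copies of the two centers c_m.
centers = np.array([[1.0, 0.0], [0.0, 1.0]])                 # rows: objectives
data = [centers + 0.05 * np.random.default_rng(i).normal(size=centers.shape)
        for i in range(50)]
grad_fn = lambda x, z: x[:, None] - z.T                      # d x M gradient matrix
x_out, lam_out = modo(grad_fn, np.zeros(2), data)
print("model:", np.round(x_out, 3), "weights:", np.round(lam_out, 3))
```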
**Proposition 2**.: _With \(\|\cdot\|_{\mathrm{F}}\) denoting the Frobenious norm, \(R_{\mathrm{gen}}(A(S))\) in (2.2) can be bounded by_ \[\mathbb{E}_{A,S}[R_{\mathrm{gen}}(A(S))]\leq\mathbb{E}_{A,S}[\|\nabla F(A(S))- \nabla F_{S}(A(S))\|_{\mathrm{F}}]. \tag{3.1}\] With Proposition 2, next we introduce the concept of MOL uniform stability tailored for MOL problems and show that PS generalization error in MOL can be bounded by the MOL uniform stability. Then we analyze their bound in general nonconvex case and strongly convex case, respectively. **Definition 2** (MOL uniform stability).: _A randomized algorithm \(A:\mathcal{Z}^{n}\mapsto\mathbb{R}^{d}\), is MOL-uniformly stable with \(\epsilon_{\mathrm{F}}\) if for all neighboring datasets \(S,S^{\prime}\) that differ in at most one sample, we have_ \[\sup_{z}\ \mathbb{E}_{A}\big{[}\|\nabla F_{z}(A(S))-\nabla F_{z}(A(S^{\prime})) \|_{\mathrm{F}}^{2}\big{]}\leq\epsilon_{\mathrm{F}}^{2}. \tag{3.2}\] Next we show the relation between the upper bound of PS generalization error in (3.1) and MOL uniform stability in Proposition 3. **Proposition 3** (MOL uniform stability and generalization).: _Assume for any \(z\), the function \(F_{z}(x)\) is differentiable. If a randomized algorithm \(A:\mathcal{Z}^{n}\mapsto\mathbb{R}^{d}\) is MOL-uniformly stable with \(\epsilon_{\mathrm{F}}\), then_ \[\mathbb{E}_{A,S}[\|\nabla F(A(S))-\nabla F_{S}(A(S))\|_{\mathrm{F}}]\leq 4 \epsilon_{\mathrm{F}}+\sqrt{n^{-1}\mathbb{E}_{S}\left[\mathbb{V}_{z\sim \mathcal{D}}(\nabla F_{z}(A(S)))\right]} \tag{3.3}\] _where the variance is defined as \(\mathbb{V}_{z\sim\mathcal{D}}(\nabla F_{z}(A(S)))=\mathbb{E}_{z\sim\mathcal{D} }\big{[}\|\nabla F_{z}(A(S))-\mathbb{E}_{z\sim\mathcal{D}}[\nabla F_{z}(A(S) )]\|_{\mathrm{F}}^{2}\big{]}\)._ Proposition 3 establishes a connection between the the upper bound of the PS generalization error and the MOL uniform stability, where the former can be bounded above by the latter plus the variance \begin{table} \begin{tabular}{c|c|c c c c} \hline \hline Assumption & Method & Optimization & Generalization & Risk & CA Distance \\ \hline \multirow{2}{*}{NC, Lip-C, Lip-S} & Static & \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}\) & \(T^{\frac{1}{2}}n^{-\frac{1}{2}}\) & \(n^{-\frac{1}{2}}\) & \(\mathcal{K}\) \\ & Dynamic & \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}+\gamma^{\frac{1}{2}}\) & \(T^{\frac{1}{2}}n^{-\frac{1}{2}}\) & \(n^{-\frac{1}{2}}\) & \((\gamma T)^{-1}+(\alpha\gamma)^{-\frac{1}{2}}\) \\ \hline \multirow{3}{*}{SC, Lip-S} & Static & \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}\) & \(n^{-\frac{1}{2}}\) & \(n^{-\frac{1}{2}}\) & \(\mathcal{K}\) \\ & Dynamic & \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}+\gamma^{\frac{1}{2}}\) & \(\begin{cases}n^{-\frac{1}{2}},\ \gamma=\mathcal{O}(T^{-1})\\ T^{2}n^{-\frac{1}{2}},\ \mathrm{o.w.}\end{cases}\) & \(\begin{cases}n^{-\frac{1}{2}},\ \gamma=\mathcal{O}(T^{-1})\\ n^{-\frac{1}{2}},\ \mathrm{o.w.}\end{cases}\) & \(\begin{cases}n^{-\frac{1}{2}},\ \gamma(T)^{-1}+(\alpha\gamma)^{-\frac{1}{2}}\end{cases}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of optimization error, generalization error and population risk under different assumptions for static and dynamic weighting. Use “NC”, “SC” to represent nonconvex and strongly convex, and “Lip-C”, “Lip-S” to represent Lipschitz continuous and Lipschitz smooth, respectively. of the stochastic gradient over the population data distribution. 
It is worth noting that the standard arguments of bounding the generalization error measured in function values by the uniform stability measured in function values [12, Theorem 2.2] is not applicable here as the summation and norm operators are not exchangeable. More explanations are given in the proof in Appendix B.1. **Theorem 1** (MOL uniform stability and PS generalization error of MoDo in nonconvex case).: _If \(\sup_{z}\mathbb{E}_{A}\left[\|\nabla F_{z}(A(S))\|_{\mathrm{F}}^{2}\right]\leq G ^{2}\) for any \(S\), then the MOL uniform stability, i.e., \(\epsilon_{\mathrm{F}}^{2}\) in Definition 2 is bounded by \(\epsilon_{\mathrm{F}}^{2}\leq 4G^{2}T/n\). And the PS generalization error \(\mathbb{E}_{A,S}[R_{\mathrm{gen}}(A(S))]=\mathcal{O}(T^{\frac{1}{2}}n^{-\frac{ 1}{2}})\)._ Compared to the function value uniform stability upper bound in [12, Theorem 3.12] for nonconvex single-objective learning, Theorem 1 does not require a step size decay \(\alpha_{t}=\mathcal{O}(1/t)\), thus can enjoy at least a polynomial convergence rate of optimization errors w.r.t. \(T\). Combining Theorem 1 with Proposition 3, to ensure the generalization error is diminishing with \(n\), one needs to choose \(T=o(n)\), which lies in the "early stopping" regime and results in potentially large optimization error. We then provide a tighter bound in the strongly convex case that allows a larger choice of \(T\). Below we list the standard assumptions used to derive the introduced MOL stability. **Assumption 1** (Lipschitz continuity of \(\nabla F_{z}(x)\)).: _For all \(m\in[M]\), \(\nabla f_{z,m}(x)\) is \(\ell_{f,1}\)-Lipschitz continuous for all \(z\). And \(\nabla F_{z}(x)\) is \(\ell_{F,1}\)-Lipschitz continuous in Frobenius norm for all \(z\)._ **Assumption 2**.: _For all \(m\in[M]\), \(z\in\mathcal{Z}\), \(f_{z,m}(x)\) is \(\mu\)-strongly convex w.r.t. \(x\), with \(\mu>0\)._ Note that in the strongly convex case, the gradient norm \(\|\nabla F_{z}(x)\|_{\mathrm{F}}\) can be unbounded in \(\mathbb{R}^{d}\). Therefore, one cannot assume Lipschitz continuity of \(f_{z,m}(x)\) w.r.t. \(x\in\mathbb{R}^{d}\). We address this challenge by showing that \(\{x_{t}\}\) generated by the MoDo algorithm is bounded as stated in Lemma 1. Notably, combining with Assumption 1, we can derive that the gradient norm \(\|\nabla F_{z}(x_{t})\|_{\mathrm{F}}\) is also bounded, which serves as a stepping stone to derive the MOL stability bound. **Lemma 1** (Boundedness of \(x_{t}\) for strongly convex and smooth objectives).: _Suppose Assumptions 1, 2 hold. For \(\{x_{t}\},t\in[T]\) generated by MoDo algorithm with \(\alpha_{t}=\alpha\) and \(0\leq\alpha\leq\ell_{f,1}^{-1}\), there exists a finite positive constant \(c_{x}\) such that \(\|x_{t}\|\leq c_{x}\). And there exists finite positive constants \(\ell_{f}\), \(\ell_{F}=\sqrt{M}\ell_{f}\), such that for all \(\lambda\in\Delta^{M}\), we have \(\|\nabla F(x_{t})\lambda\|\leq\ell_{f}\), and \(\|\nabla F(x_{t})\|_{\mathrm{F}}\leq\ell_{F}\)._ With Lemma 1, the stability bound and PS generalization is provided below. **Theorem 2** (MOL uniform stability and PS generalization error of MoDo in strongly convex case).: _Suppose Assumptions 1, 2 hold. Let \(A\) be the MoDo algorithm (Algorithm 1). 
For the MOL uniform stability \(\epsilon_{F}\) of algorithm \(A\) in Definition 2, if the step sizes \(\alpha_{t}\leq\alpha\leq 1/(2\ell_{f,1})\), and \(\gamma_{t}\leq\gamma\leq\min\{\frac{\mu^{2}}{120\ell_{f}^{2}\ell_{g,1}},\frac{ 1}{8(4\ell_{f}^{2}+2\ell_{g,1})}\}/T\), then it holds that_ \[\epsilon_{\mathrm{F}}^{2}\leq\frac{48}{\mu n}\ell_{f}^{2}\ell_{F,1}^{2}\Big{(} \alpha+\frac{12+4M\ell_{f}^{2}}{\mu n}+\frac{10M\ell_{f}^{4}\gamma}{\mu}\Big{)} \quad\text{and}\quad\mathbb{E}_{A,S}[R_{\mathrm{gen}}(A(S))]=\mathcal{O}(n^{- \frac{1}{2}}). \tag{3.4}\] _And there exists functions \(F_{z}(x)\) that satisfy Assumptions 1, 2, and neighboring datasets \(S\), \(S^{\prime}\) that differ in at most one sample such that_ \[\mathbb{E}_{A}\big{[}\|\nabla F_{z}(A(S))-\nabla F_{z}(A(S^{\prime}))\|_{ \mathrm{F}}^{2}\big{]}\geq\frac{M\mu^{2}}{256n^{2}}. \tag{3.5}\] Theorem 2 provides both upper and lower bounds for the MOL uniform stability. In this case, we choose \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=o(T^{-1})\), and \(T=\Theta(n^{2})\) to minimize the PS population risk upper bound, as detailed in Section 3.3. With this choice, the MOL uniform stability upper bound matches the lower bound in an order of \(n^{-2}\), suggesting that our bound is tight. The generalization error bound in (3.4) is a direct implication from the MOL uniform stability bound in (3.4), Propositions 2, and 3. It states that the PS generalization error of MoDo is \(\mathcal{O}(n^{-\frac{1}{2}})\), which matches the generalization error of static weighting up to a constant coefficient [19]. Our result also indicates that when all the objectives are strongly convex, choosing small step sizes \(\alpha\) and \(\gamma\) can benefit the generalization error. ### Multi-objective optimization error In this section, we bound the multi-objective PS optimization error \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|\)[7, 26, 8]. As discussed in Section 2.2, this measure being zero implies the model \(x\) achieves a Pareto stationarity for the empirical problem. Below we list an additional standard assumption used to derive the optimization error. **Assumption 3** (Lipschitz continuity of \(F_{z}(x)\)).: _For all \(m\in[M]\), \(f_{z,m}(x)\) are \(\ell_{f}\)-Lipschitz continuous for all \(z\). And \(F_{z}(x)\) are \(\ell_{F}\)-Lipschitz continuous in Frobenius norm for all \(z\)._ **Lemma 2** (Distance to CA direction).: _Suppose either: 1) Assumptions 1, 3 hold; or 2) Assumptions 1, 2 hold, with \(\ell_{f}\) and \(\ell_{F}\) defined in Lemma 1. For \(\{x_{t}\},\{\lambda_{t}\}\) generated by MoDo, it holds that_ \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A}[\|d_{\lambda_{t}}(x_{t})-d(x_{t})\|^{ 2}]\leq\frac{4}{\gamma T}+6\sqrt{M\ell_{f,1}\ell_{f}^{2}\frac{\alpha}{\gamma} }+\gamma M\ell_{f}^{4}. \tag{3.6}\] Lemma 2 analyzes convergence to the CA direction using the measure introduced in Section 2.2. By, e.g., choosing \(\alpha=\Theta(T^{-\frac{3}{4}})\), and \(\gamma=\Theta(T^{-\frac{1}{4}})\), the RHS of (3.6) converges in a rate of \(\mathcal{O}(T^{-\frac{1}{4}})\). **Theorem 3** (PS optimization error of MoDo).: _Suppose either: 1) Assumptions 1, 3 hold; or, 2) Assumptions 1, 2 hold, with \(\ell_{f}\) and \(\ell_{F}\) defined in Lemma 1. Define \(c_{F}\) such that \(\mathbb{E}_{A}[F_{S}(x_{1})\lambda_{1}]-\min_{x\in\mathbb{R}^{d}}\mathbb{E}_{A }[F_{S}(x)\lambda_{1}]\leq c_{F}\). 
Considering \(\{x_{t}\}\) generated by MoDo (Algorithm 1), with \(\alpha_{t}=\alpha\leq 1/(2\ell_{f,1})\), \(\gamma_{t}=\gamma\), then under either condition 1) or 2), it holds that_ \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A}\Big{[}\min_{\lambda\in\Delta^{M}}\| \nabla F_{S}(x_{t})\lambda\|\Big{]}\leq\sqrt{\frac{c_{F}}{2\alpha T}}+\sqrt{ \frac{1}{2}\gamma(M\ell_{f}^{4}+2\ell_{F}^{3}\ell_{f})}+\sqrt{\frac{1}{2} \alpha\ell_{f,1}\ell_{f}^{2}}. \tag{3.7}\] The choice of step sizes \(\alpha=\Theta(T^{-\frac{3}{4}})\), and \(\gamma=\Theta(T^{-\frac{1}{4}})\) to ensure convergence to CA direction is suboptimal in the regard of convergence to Pareto stationarity, as implied by Theorem 3, exhibiting a trade-off between convergence to the CA direction and convergence to Pareto stationarity, which will be discussed in Section 3.3. ### Optimization, generalization and conflict avoidance trade-off Combining the results in Sections 3.1 and 3.2, we are ready to analyze and summarize the three-way trade-off of MoDo in MOL. With \(A_{t}(S)=x_{t}\) denoting the output of algorithm \(A\) at the \(t\)-th iteration, we can decompose the PS population risk \(R_{\mathrm{pop}}(A_{t}(S))\) as (cf. (2.1) and (3.1)) \[\mathbb{E}_{A,S}\big{[}R_{\mathrm{pop}}(A_{t}(S))\big{]}\leq\mathbb{E}_{A,S} \Big{[}\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(A_{t}(S))\lambda\|\Big{]}+ \mathbb{E}_{A,S}\Big{[}\|\nabla F(A_{t}(S))-\nabla F_{S}(A_{t}(S))\|_{\mathrm{ F}}\Big{]}.\] The general nonconvex case.Suppose Assumptions 1, 3 hold. By the generalization error in Theorem 1, and the optimization error bound in Theorem 3, the PS population risk of the output of MoDo can be bounded by \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A,S}\big{[}R_{\mathrm{pop}}(A_{t}(S)) \big{]}=\mathcal{O}\left(\alpha^{-\frac{1}{2}}T^{-\frac{1}{2}}+\alpha^{\frac{1 }{2}}+\gamma^{\frac{1}{2}}+T^{\frac{1}{2}}n^{-\frac{1}{2}}\right). \tag{3.8}\] Discussion of trade-off.Choosing step sizes \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=\Theta(T^{-\frac{1}{2}})\), and number of steps \(T=\Theta(n^{\frac{2}{3}})\), then the expected PS population risk is \(\mathcal{O}(n^{-\frac{1}{6}})\), which matches the PS population risk upper bound of a general nonconvex single objective in [19]. A clear trade-off in this case is between the optimization error and generalization error, controlled by \(T\). Indeed, increasing \(T\) leads to smaller optimization error but larger generalization error, and vice versa. To satisfy convergence to CA direction, it requires \(\gamma=\omega(\alpha)\) based on Lemma 2, and the optimization error in turn becomes worse, so does the PS population risk. Specifically, choosing \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=\Theta(T^{-\frac{1}{4}})\), and \(T=\Theta(n^{\frac{1}{6}})\) leads to the expected PS population risk in \(\mathcal{O}(n^{-\frac{1}{16}})\), and the distance to CA direction in \(\mathcal{O}(n^{-\frac{1}{16}})\). This shows another trade-off between conflict avoidance and optimization error. The strongly convex case.Suppose Assumptions 1, 2 hold. By the generalization error and the optimization error given in Theorems 2 and 3, MoDo's PS population risk can be bounded by \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A,S}\big{[}R_{\mathrm{pop}}(A_{t}(S)) \big{]}=\mathcal{O}\left(\alpha^{-\frac{1}{2}}T^{-\frac{1}{2}}+\alpha^{\frac{1 }{2}}+\gamma^{\frac{1}{2}}+n^{-\frac{1}{2}}\right). 
\tag{3.9}\] Discussion of trade-off.Choosing step sizes \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=o(T^{-1})\), and number of steps \(T=\Theta(n^{2})\), we have the expected PS population risk in gradients is \(\mathcal{O}(n^{-\frac{1}{2}})\). However, choosing \(\gamma=o(T^{-1})\) leads to large distance to the CA direction according to Lemma 2 because the term \(\frac{4}{\gamma T}\) in (3.6) increases with \(T\). To ensure convergence to the CA direction, it requires \(\gamma=\omega(T^{-1})\), under which the tighter bound in Theorem 2 does not hold but the bound in Theorem 1 still holds. In this case, the PS population risk under proper choice of \(\alpha,\gamma,T\) is \(\mathcal{O}(n^{-\frac{1}{2}})\) as discussed in the previous paragraph. Therefore, to avoid conflict of gradients, one needs to sacrifice the sample complexity of PS population risk, demonstrating a trade-off between conflict avoidance and PS population risk. ## 4 Related Works **Multi-task learning (MTL).** MTL, as one application of MOL, leverages shared information among different tasks to train a model that can perform multiple tasks. MTL has been widely applied to natural language processing, computer vision and robotics [13], [34], [46], [40]. From the optimization perspective, a simple method for MTL is to take the weighted average of the per-task losses as the objective. The weights can be static or dynamic during optimization. And weighting for different tasks can be chosen based on different criteria such as uncertainty [14], gradient norms [4] or task difficulty [11]. These methods are often heuristic and designed for specific applications. Another line of work tackles MTL through MOL [36; 44; 25; 10]. A foundational algorithm in this regard is MGDA [7], which takes dynamic weighting of gradients to obtain a CA direction for all objectives. Stochastic versions of MGDA has been proposed in [26; 48; 8]. Algorithms for finding a set of Pareto optimal models rather than one have been proposed in [27; 24; 28; 31; 23; 17; 42; 47; 30]. **Theory of MOL.** Convergence analysis for the deterministic MGDA algorithm has been provided in [9]. One challenge of analyzing stochastic MGDA is the biased estimate of the weighting coefficients during optimization. This can be overcome by increasing the batch size during optimization [26], or using a momentum-based optimization method [48; 8], or performing double-sampling to update weighting as in this work. While the community has a rich history of investigating the convergence of MOL algorithms, their generalization guarantee remains open. Not until recently, generalization guarantees for multi-objective learning were theoretically analyzed. In [5], a min-max formulation to solve the MOL problem is analyzed, where the weights are chosen based on the maximum function values, rather than the CA direction. And generalization guarantees are provided based on Rademacher complexity of the hypothesis class of the learner. More recently, [38] provide generalization guarantees for MOL for a more general class of weighting, also based on Rademacher complexity. Different from these works, we study algorithm-dependent generalization bounds based on algorithm stability to gain a better understanding of how the choice of algorithm hyperparameters such as step size and number of iterations affects the generalization error. **Algorithm stability and generalization.** Stability analysis dates back to the work [6] in 1970s. 
Uniform stability and its relationship with generalization were studied in [2] for the exact minimizer of the ERM problem with strongly convex objectives. The work [12] pioneered the stability analysis for stochastic gradient descent (SGD) algorithms with convex and smooth objectives. The results were extended and refined in [16] with data-dependent bounds, in [3; 33; 20] for non-convex objectives, and in [1; 21] for SGD with non-smooth and convex losses. However, all these studies mainly focus on single-objective learning problems. To our best knowledge, there is no existing work on the stability and generalization analysis for multi-objective learning problems and our results on its stability and generalization are the first-ever-known ones. ## 5 Experiments In this section, we conduct experiments to further demonstrate the three-way trade-off among the optimization, generalization, and conflict avoidance of MOL algorithms on various MOL tasks. ### Synthetic experiments Our theory in the strongly convex case are verified through a synthetic experiment in Figure 3. Details are described in Appendix D.1. Figure 2(a) shows the PS optimization error and PS population risk, as well as the distance to CA direction decreases as \(T\) increases, which corroborates Lemma 2, and Theorem 3. In addition, the generalization error in this case does not vary much with \(T\), verifying Theorems 2 and 2. In Figure 2(b), the optimization error first decreases and then increases as \(\alpha\) increases, which is consistent with Theorem 3. Notably, we observe a threshold for \(\alpha\) below which the distance to CA direction converges even when the optimization error does not converge, while beyond which the distance to the CA direction becomes larger, verifying Lemma 2. Additionally, Figure 2(c) demonstrates that increasing \(\gamma\) enlarges the PS optimization error, PS generalization error, and thus the PS population risk, but decreases the distance to CA direction, which supports Lemma 2. ### Image classification experiments We further verify our theory in the nonconvex case on MNIST [18] image classification using a multi-layer perceptron (MLP) model and three objective functions: cross-entropy, mean square error (MSE), and Huber loss. Implementation details are provided in Appendix D.2. Following Section 2.2, we evaluate the performance in terms of \(R_{\text{pop}}(x)\), \(R_{\text{opt}}(x)\), \(R_{\text{gen}}(x)\), and \(\mathcal{E}_{\text{ca}}(x,\lambda)\). The exact PS population risk is not accessible without the true data distribution. To approximate the PS population risk \(R_{\text{pop}}\), we evaluate \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S_{\text{bc}}}(x)\lambda\|\) on the testing data set \(S_{\text{bc}}\) that is independent of training data set \(S\). The PS optimization error \(R_{\text{opt}}(x)\) is obtained by \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|\), and the PS generalization error \(R_{\text{gen}}(x)\) is obtained by \(R_{\text{pop}}(x)-R_{\text{opt}}(x)\). We analyze how different choices of \(T\), \(\alpha\), and \(\gamma\) in MoDo affect the three-way trade-off in Figure 4 (averaged over 10 random seeds with half standard deviation error bars). Figure 3(a) shows that optimization error reduces while generalization error increases with \(T\), aligning with the stability bound in Theorem 1 and optimization error bound in Theorem 3, and shows the need for early stopping in non-convex settings to improve generalization. 
We analyze how different choices of \(T\), \(\alpha\), and \(\gamma\) in MoDo affect the three-way trade-off in Figure 4 (averaged over 10 random seeds, with half-standard-deviation error bars). Figure 4(a) shows that the optimization error decreases while the generalization error increases with \(T\), aligning with the stability bound in Theorem 1 and the optimization error bound in Theorem 3, and showing the need for early stopping in non-convex settings to improve generalization. Furthermore, the CA direction error decreases with increasing \(T\), which aligns with Lemma 2. Figure 4(b) shows that the PS optimization error and population risk initially decrease and then increase as \(\alpha\) increases, which aligns with Theorem 3 and (3.8). On the other hand, there is an overall increase in the CA direction error with \(\alpha\), which aligns with Lemma 2. Figure 4(c) shows that increasing \(\gamma\) increases the PS population and optimization errors but decreases the CA direction error. This matches our bound for the PS optimization error in Theorem 3, the PS population risk in (3.8), and Lemma 2 for small \(\gamma\) values. Finally, we compare the performance of MoDo with static weighting in multi-task image classification on the Office-31 [35] multi-domain dataset with three objectives; see Table 2. Hyperparameters of all methods are chosen based on the best validation performance. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & Amazon & DSLR & Webcam & \multirow{2}{*}{\(\Delta\mathcal{A}\%\downarrow\)} \\ \cline{2-2} \cline{4-5} & Test Acc \(\uparrow\) & Test Acc \(\uparrow\) & Test Acc \(\uparrow\) \\ \hline Static & 84.62 \(\pm\) 0.71 & 94.43 \(\pm\) 0.96 & 97.44 \(\pm\) 1.20 & 2.56 \(\pm\) 0.37 \\ MGDA & 79.45 \(\pm\) 0.11 & **96.56 \(\pm\) 1.20** & **97.89 \(\pm\) 0.74** & 3.65 \(\pm\) 0.64 \\ **MoDo** & **85.13 \(\pm\) 0.58** & 95.41 \(\pm\) 0.98 & 96.78 \(\pm\) 0.65 & **2.26 \(\pm\) 0.31** \\ \hline \hline \end{tabular} \end{table} Table 2: Classification results on the Office-31 dataset. Figure 4: Optimization, generalization, and CA direction errors of MoDo for MNIST image classification under different \(T\), \(\alpha\), and \(\gamma\). The default parameters are \(T=1000\), \(\alpha=0.1\), and \(\gamma=0.01\). Figure 3: Optimization, generalization, and CA direction errors of MoDo in the strongly convex case under different \(T\), \(\alpha\), and \(\gamma\). The default parameters are \(T=100\), \(\alpha=0.01\), and \(\gamma=0.001\). Results show that MoDo achieves performance comparable to static weighting and MGDA, if not better, under proper hyperparameters. Here \(\Delta\mathcal{A}\%\) denotes the average performance degradation relative to dedicated single-task learners. Additional experimental results and details are presented in Appendix D.2 due to space limitations. ## 6 Conclusions This work studies the three-way trade-off in MOL among optimization, generalization, and conflict avoidance. Our results show that, in the general nonconvex setting, the traditional trade-off between optimization and generalization governed by the number of iterations also exists in MOL. Moreover, dynamic weighting algorithms like MoDo introduce a new dimension of trade-off in terms of conflict avoidance compared to static weighting. We demonstrate that this three-way trade-off can be controlled by the step size for updating the dynamic weighting parameter and by the number of iterations, and that a proper choice of these parameters can lead to decent performance on all three metrics. ## Broader impacts and limitations This work can inform the design of dynamic weighting algorithms and the choice of hyperparameters, such as step sizes and the number of iterations, based on this trade-off, for MTL applications such as multilingual translation and multi-agent reinforcement learning. No ethical concerns arise from this work. A limitation of this study is that the theory focuses on a specific algorithm, MoDo, for smooth objectives in unconstrained learning.
Extensions of the theory to other algorithms, to non-smooth objectives, or to constrained learning are interesting directions for future research.
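To complement the discussion of MoDo above, the sketch below illustrates one plausible form of a double-sampling, dynamic-weighting step of the kind analyzed in this paper: two independent stochastic gradient evaluations are used to update the simplex weights \(\lambda\), so that the stochastic estimate driving the weight update is unbiased, and the model is then moved along the resulting weighted direction. This is an illustrative reconstruction based on the description in the text, not the authors' reference implementation; the exact MoDo update may differ in details such as the number of samples drawn per step. The helper `project_to_simplex` is the one defined in the earlier sketch, and `grad_fn` and `sample_batch` are assumed interfaces.

```python
import numpy as np

def dynamic_weighting_step(x, lam, grad_fn, sample_batch, alpha=0.1, gamma=0.01):
    """One dynamic-weighting step with double sampling (illustrative sketch).

    x            : (d,) model parameters.
    lam          : (M,) weights on the probability simplex.
    grad_fn      : grad_fn(x, batch) -> (d, M) per-objective stochastic gradients.
    sample_batch : callable returning a fresh, independent minibatch each call.
    alpha, gamma : step sizes for the model and weight updates, respectively.
    """
    # Two independent gradient samples: E[g1.T @ (g2 @ lam)] equals
    # (grad F)^T (grad F) lam, an unbiased estimate of the gradient of
    # 0.5 * ||grad F(x) lam||^2 with respect to lam.
    g1 = grad_fn(x, sample_batch())
    g2 = grad_fn(x, sample_batch())
    lam = project_to_simplex(lam - gamma * (g1.T @ (g2 @ lam)))  # defined in the earlier sketch
    # Move the model along the weighted (approximately conflict-avoiding) direction.
    g3 = grad_fn(x, sample_batch())
    x = x - alpha * (g3 @ lam)
    return x, lam
```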
Multi-objective learning (MOL) problems frequently arise in emerging machine learning settings where there are multiple learning criteria, data modalities, or learning tasks. Unlike single-objective learning, a key challenge in MOL is the potential conflict among the different objectives during the iterative optimization process. Recent works have developed various dynamic weighting algorithms for MOL, such as MGDA and its variants, whose central idea is to find an update direction that avoids conflicts among the objectives. Despite this appealing intuition, empirical studies show that dynamic weighting algorithms do not always outperform static alternatives. To understand this gap between theory and practice, we focus on a new stochastic variant of MGDA, the Multi-objective gradient with Double sampling (MoDo) algorithm, and study the generalization performance of the dynamic-weighting-based MoDo algorithm through the lens of algorithm stability.
2308.16519
Characterizations of distality via weak equicontinuity
For an infinite discrete group $G$ acting on a compact metric space $X$, we introduce several weak versions of equicontinuity along subsets of $G$ and show that if a minimal system $(X,G)$ admits an invariant measure then $(X,G)$ is distal if and only if it is pairwise IP$^*$-equicontinuous; if the product system $(X\times X,G)$ of a minimal system $(X,G)$ has a dense set of minimal points, then $(X,G)$ is distal if and only if it is pairwise IP$^*$-equicontinuous if and only if it is pairwise central$^*$-equicontinuous; if $(X,G)$ is a minimal system with $G$ being abelian, then $(X,G)$ is a system of order $\infty$ if and only if it is pairwise FIP$^*$-equicontinuous.
Jian Li, Yini Yang
2023-08-31T08:03:28
http://arxiv.org/abs/2308.16519v1
# Characterizations of distality via weak equicontinuity ###### Abstract. For an infinite discrete group \(G\) acting on a compact metric space \(X\), we introduce several weak versions of equicontinuity along subsets of \(G\) and show that if a minimal system \((X,G)\) admits an invariant measure then \((X,G)\) is distal if and only if it is pairwise IP\({}^{*}\)-equicontinuous; if the product system \((X\times X,G)\) of a minimal system \((X,G)\) has a dense set of minimal points, then \((X,G)\) is distal if and only if it is pairwise IP\({}^{*}\)-equicontinuous if and only if it is pairwise central\({}^{*}\)-equicontinuous; if \((X,G)\) is a minimal system with \(G\) being abelian, then \((X,G)\) is a system of order \(\infty\) if and only if it is pairwise \(\text{FIP}^{*}\)-equicontinuous. Key words and phrases: Distality, equicontinuity, sensitivity, IP-set, central-set, FIP-set, system of order \(\infty\) 2020 Mathematics Subject Classification: Primary: 37B05; Secondary: 37B20, 37B25 J. Li was supported in part by NSF of China (Grant Nos. 12222110 and 12171298). Y. Yang is the corresponding author. ## 1. Introduction In this paper we introduce several weak versions of equicontinuity along subsets of the acting group and use them to characterize distality. Our first main result is the following. **Theorem 1.1**.: _Let \((X,G)\) be a minimal system._ 1. _If \((X,G)\) admits an invariant measure, then \((X,G)\) is distal if and only if it is pairwise IP\({}^{*}\)-equicontinuous._ 2. _If the product system \((X\times X,G)\) has a dense set of minimal points, then \((X,G)\) is distal if and only if it is pairwise IP\({}^{*}\)-equicontinuous, if and only if it is pairwise central\({}^{*}\)-equicontinuous._ Nilsystems are important in the study of the convergence of some non-conventional ergodic averages [16]. In [18] Host, Kra and Maass introduced the regionally proximal relation \(\mathbf{RP}^{[k]}\) of order \(k\) for \(k\in\mathbb{N}\), and showed in [18, Theorem 1.3] that \(\mathbf{RP}^{[k]}\) is an invariant closed equivalence relation for a minimal distal system, and in [18, Theorem 1.2] that for a minimal system \(\mathbf{RP}^{[k]}\) is trivial if and only if the system itself is an inverse limit of \(k\)-step nilsystems. In [24, Theorem 3.5] Shao and Ye showed that \(\mathbf{RP}^{[k]}\) is an invariant closed equivalence relation for any minimal system and also remarked that this result can be extended to abelian group actions without difficulty. In [6] Dong et al. introduced \(\infty\)-step nilsystems, which are the systems with trivial \(\mathbf{RP}^{[\infty]}:=\bigcap_{k=1}^{\infty}\mathbf{RP}^{[k]}\), and showed in [6, Theorem 3.6] that a minimal system is an \(\infty\)-step nilsystem if and only if it is an inverse limit of minimal nilsystems. There exist some minimal distal systems which are not \(\infty\)-step nilsystems, see [6, Section 5]. Distal systems are characterized by the IP\({}^{*}\)-recurrence property for \(\mathbb{N}\)-actions [11]. In [20, Theorem 8.1.7] Huang, Shao and Ye proved that a minimal system \((X,T)\) is an \(\infty\)-step nilsystem if and only if every point in \(X\) is \(\mathrm{FIP}^{*}\)-recurrent. Bergelson and Leibman [4] provided a characterization, in terms of recurrence properties, of nilsystems. In general, we say that a group action system \((X,G)\) is of order \(k\) if the regionally proximal relation of order \(k\) is trivial, see e.g. [13]. Motivated by the above results, we have the following characterization of systems of order \(\infty\). **Theorem 1.2**.: _Let \((X,G)\) be a minimal system with \(G\) being abelian. Then \((X,G)\) is a system of order \(\infty\) if and only if it is pairwise \(\mathrm{FIP}^{*}\)-equicontinuous._ The organization of this paper is as follows.
In Section 2, we give some basic definitions and related results which will be used later. In Section 3, we introduce pairwise \(\mathrm{IP}^{*}\)-equicontinuity and prove a slight generalization of Theorem 1.1 (1). We also study the concept of almost pairwise \(\mathrm{IP}^{*}\)-equicontinuity and obtain a dichotomy result for minimal systems and the structure of almost pairwise \(\mathrm{IP}^{*}\)-equicontinuous system. In Section 4, we first recall the structure theorem for (metric) minimal systems and then prove Theorem 1.1 (2). In Section 5, we introduce pairwise \(\mathrm{FIP}^{*}\)-equicontinuity and prove Theorem 1.2. ## 2. Preliminaries ### Topological dynamical system Let \(X\) be a compact metric space with a compatible metric \(d\) and \((G,\cdot)\) be an infinite discrete group with the identity element \(e\). A _\(G\)-action_ on \(X\) is a continuous map \(\Pi\colon G\times X\to X\) satisfying \(\Pi(e,x)=x\), \(\forall x\in X\) and \(\Pi(g,\Pi(h,x))=\Pi(gh,x)\), \(\forall x\in X\), \(g,h\in G\). We say that the triple \((X,G,\Pi)\) is a _topological dynamical system_. For convenience, we will use the pair \((X,G)\) instead of \((X,G,\Pi)\) to denote the topological dynamical system, and \(gx:=\Pi(g,x)\) if the map \(\Pi\) is unambiguous. For any \(n\in\mathbb{N}\), there is a natural \(G\)-action on the \(n\)-fold product space \(X^{n}\) as \(g(x_{1},\ldots,x_{n})=(gx_{1},\ldots,gx_{n})\) for every \((x_{1},\ldots,x_{n})\in X^{n}\). The _orbit_ of a point \(x\in X\) is the set \(Gx:=\{gx\colon\ g\in G\}\). A nonempty \(G\)-invariant closed subset \(Y\subseteq X\) defines naturally a subsystem \((Y,G)\) of \((X,G)\). A dynamical system \((X,G)\) is called _minimal_ if it contains no proper subsystems. Each point belonging to some minimal subsystem of \((X,G)\) is called a _minimal point_. By Zorn's Lemma, every topological dynamical system has a minimal subsystem. For two subsets \(U\) and \(V\) of \(X\), define the _return time set_ of \(U\) and \(V\) by \[N(U,V):=\{g\in G\colon gU\cap V\neq\emptyset\}.\] For a point \(x\in X\), we write \(N(x,V)\) instead of \(N(\{x\},V)\) for simplicity. A dynamical system \((X,G)\) is called _transitive_ if for any two nonempty open subsets \(U\) and \(V\) of \(X\) one has \(N(U,V)\neq\emptyset\), and _weakly mixing_ if the product system \((X\times X,G)\) is transitive. A point \(x\in X\) is called _transitive_ if the orbit of \(x\) is dense in \(X\). Recall that a subset of \(X\) is _residual_ if it contains a dense \(G_{\delta}\) subset of \(X\). As \(X\) has a countable basis, every transitive system has a residual subset of transitive points. We say that a dynamical system \((X,G)\) is _equicontinuous_ if for any \(\varepsilon>0\), there exists \(\delta>0\) such that if \(d(x,y)<\delta\) then \(d(gx,gy)<\varepsilon\) for all \(g\in G\). A pair \((x_{1},x_{2})\in X\times X\) is said to be _proximal_ if \(\inf_{g\in G}d(gx_{1},gx_{2})=0\), and _distal_ if \(\inf_{g\in G}d(gx_{1},gx_{2})>0\). A point \(x\in X\) is _distal_ if for any \(y\in\overline{Gx}\setminus\{x\}\), \((x,y)\) is a distal pair. We say that a dynamical system \((X,G)\) is _distal_ if any two distinct points of \(X\) form a distal pair. It is easy to see that a dynamical system is distal if and only if every point is distal, and every equicontinuous system is distal. A dynamical system \((X,G)\) is called _point-distal_ if the collection of distal points is residual in \(X\). The following characterization of distal systems is a classical result, see e.g. 
[2, Theorem 5.6]. **Lemma 2.1**.: _A dynamical system \((X,G)\) is distal if and only if every point in the product system \((X\times X,G)\) is minimal._ Denote by \(P(X,G)\) the proximal relation on \(X\), that is the collection of all proximal pairs of \((X,G)\). It is clear that \(P(X,G)\) is a reflexive, symmetric, \(G\)-invariant relation, but it is in general not transitive or closed. We will need the following two result about the proximal relation. **Theorem 2.2** ([2, Theorem 6.13]).: _Let \((X,G)\) be a dynamical system and \(x\in X\). Then for any minimal subset \(M\) of \(\overline{Gx}\), there exists some \(y\in M\) such that \((x,y)\) is proximal._ **Lemma 2.3** ([2, Corollary 6.11]).: _Let \((X,G)\) be a dynamical system. If the proximal relation \(P(X,G)\) is closed, then it is an equivalence relation on \(X\)._ Let \(M(X)\) be the set of all Borel probability measures on \(X\). We say a measure \(\mu\in M(X)\) has _full support_ if \(\mu(U)>0\) for all nonempty open subset \(U\) of \(X\), and is \(G\)_-invariant_ if \(\mu(A)=\mu(gA)\) for all \(g\in G\) and Borel subset \(A\) of \(X\). It is easy to see that every invariant measure on a minimal system has full support. ### Factor map and the distal structure relation Let \((X,G)\) and \((Y,G)\) be two dynamical systems. If there is a continuous surjection \(\pi:X\to Y\) with \(\pi\circ g=g\circ\pi\) for all \(g\in G\), then we say that \(\pi\) is a _factor map_, the system \((Y,G)\) is a _factor_ of \((X,G)\) or \((X,G)\) is an _extension_ of \((Y,G)\). If \(\pi\) is a homeomorphism, then we say that \(\pi\) is a _conjugacy_ and dynamical systems \((X,G)\) and \((Y,G)\) are _conjugate_. Conjugate dynamical systems can be considered the same from the dynamical point of view. Let \(\pi\colon(X,G)\to(Y,G)\) be a factor map between two dynamical systems and let \[R_{\pi}=\{(x_{1},x_{2})\in X\times X\colon\pi(x_{1})=\pi(x_{2})\}.\] Then \(R_{\pi}\) is a \(G\)-invariant closed equivalence relation on \(X\) and \(Y=X/R_{\pi}\). In fact, there exists an one-to-one correspondence between the collection of factors of \((X,G)\) and the collection of \(G\)-invariant closed equivalence relations on \(X\). A factor map \(\pi\colon(X,G)\to(Y,G)\) is called _proximal_ if \(R_{\pi}\subset P(X,G)\); _almost one-to-one_ if \(\{x\in X\colon\pi^{-1}(\pi(x))\) is a singleton} is residual in \(X\); _isometric_ if for any \(\varepsilon>0\), there exists \(\delta>0\) such that if \(x_{1},x_{2}\in R_{\pi}\) with \(d(x_{1},x_{2})<\delta\) then \(d(gx_{1},gx_{2})<\varepsilon\) for all \(g\in G\); _weakly mixing_ if \((R_{\pi},G)\) is transitive. For a factor map \(\pi\colon(X,G)\to(Y,G)\) between minimal systems, \(\pi\) is almost one-to-one if and only if there exists a point \(y\in Y\) such that \(\pi^{-1}(y)\) is a singleton (see e.g. [22, Corollary 3.6]). Let \(X\) and \(Y\) be compact metric spaces. A map \(\pi\colon X\to Y\) is called _semi-open_ if for every nonempty open subset \(U\) of \(X\), \(\pi(U)\) has a non-empty interior. It is easy to see that a map \(\pi\colon X\to Y\) is semi-open if and only if for every dense subset \(A\) of \(Y\), \(\pi^{-1}(A)\) is dense in \(X\). Note that any factor map \(\pi\colon(X,G)\to(Y,G)\) between minimal systems is semi-open (see e.g. [2, Theorem 1.15]). Every topological dynamical system \((X,G)\) has a maximal distal factor \((X_{d},G)\). That is, \((X_{d},G)\) is a distal factor and every distal factor of \((X,G)\) is also a factor of \((X_{d},G)\). 
There exists a closed \(G\)-invariant equivalence relation \(S_{d}\) on \((X,G)\), called the _distal structure relation_, such that \(X/S_{d}=X_{d}\). It is well known that \(S_{d}\) is the smallest closed \(G\)-invariant equivalence relation containing \(P(X,G)\) (see e.g. [2, Exercise 9.3]). ### IP-set, FIP-set and central set Let \((G,\cdot)\) be an infinite discrete group. For a sequence \(\{p_{i}\}_{i\in I}\) in \(G\) indexed by a subset \(I\) of \(\mathbb{N}\), we define the _finite product_ of \(\{p_{i}\}_{i\in I}\) by \[FP(\{p_{i}\}_{i\in I})=\left\{\prod_{i\in\alpha}p_{i}\colon\alpha\text{ is a non-empty finite subset of }I\right\},\] where \(\prod_{i\in\alpha}p_{i}\) is the product in increasing order of indices. A subset \(F\) of \(G\) is called an _IP-set_ if there exists an infinite sequence \(\{p_{i}\}_{i=1}^{\infty}\) in \(G\) such that \(FP(\{p_{i}\}_{i=1}^{\infty})\subset F\). The following result follows from the well-known Hindman theorem, see e.g. [17, Corollary 5.15]. **Theorem 2.4**.: _The collection of IP-sets has the Ramsey property, that is, for every IP-set \(F\subset G\) and \(F=F_{1}\cup F_{2}\), either \(F_{1}\) or \(F_{2}\) is an IP-set._ A subset \(F\) of \(G\) is called an _IP\({}^{*}\)-set_ if for every IP-set \(A\subset G\), \(F\cap A\neq\emptyset\). An immediate consequence of Theorem 2.4 is the following property of IP\({}^{*}\)-sets, see e.g. [17, Theorem 16.6]. **Proposition 2.5**.: 1. _The intersection of two IP_\({}^{*}\)_-sets is still an IP_\({}^{*}\)_-set._ 2. _A subset_ \(F\) _of_ \(G\) _is an IP_\({}^{*}\)_-set if and only if for every IP-set_ \(A\subset G\)_,_ \(F\cap A\) _is an IP-set._ A subset \(F\) of \(G\) is called an _FIP-set_ if for every \(k\in\mathbb{N}\), there exists a finite sequence \(\{p_{i}^{(k)}\}_{i=1}^{k}\) of \(G\) such that \(FP(\{p_{i}^{(k)}\}_{i=1}^{k})\subset F\), and an _FIP_\({}^{*}\)_-set_ if for every FIP-set \(A\subset G\), \(F\cap A\) is nonempty. It should be noticed that an FIP\({}^{*}\)-set is called an IP\({}_{0}^{*}\)-set in [4]. The following result follows from [15, Corollary 3] directly. **Theorem 2.6**.: _The collection of FIP-sets has the Ramsey property, that is, for every FIP-set \(F\subset G\) and \(F=F_{1}\cup F_{2}\), either \(F_{1}\) or \(F_{2}\) is an FIP-set._ Similar to Proposition 2.5, we have the following result about FIP\({}^{*}\)-sets. **Proposition 2.7**.: 1. _The intersection of two FIP_\({}^{*}\)_-sets is also an FIP_\({}^{*}\)_-set._ 2. _A subset_ \(F\) _of_ \(G\) _is an FIP_\({}^{*}\)_-set if and only if for every FIP-set_ \(A\subset G\)_,_ \(F\cap A\) _is an FIP-set._ A subset \(F\) of \(G\) is called a _central set_ if there exists a dynamical system \((X,G)\), \(x,y\in X\) and a neighborhood \(U\) of \(y\) such that \((x,y)\) is a proximal pair, \(y\) is a minimal point, and \(F\supset N(x,U)\). A subset \(F\) of \(G\) is called a _central_\({}^{*}\)_-set_ if for every central set \(A\subset G\), \(F\cap A\neq\emptyset\). The following result is proved in [11, Proposition 8.10] for the case of \(\mathbb{N}\). In fact, the proof extends naturally to general group actions. **Proposition 2.8**.: _Every central set is an IP-set._
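For concreteness, here is a small illustration of these notions in the additive group \((\mathbb{Z},+)\), where finite products become finite sums; the example is ours and is not taken from the cited references. For \(p_{1}=1\), \(p_{2}=3\), \(p_{3}=9\) we have \[FP(\{p_{i}\}_{i=1}^{3})=\{1,\,3,\,9,\,1+3,\,1+9,\,3+9,\,1+3+9\}=\{1,3,4,9,10,12,13\}.\] The set \(FP(\{2^{i}\}_{i=1}^{\infty})\) of all finite sums of distinct powers of \(2\) consists of all positive even integers and is therefore an IP-set. The set \(2\mathbb{Z}\) of even integers is an IP\({}^{*}\)-set: for any sequence \(\{p_{i}\}_{i=1}^{\infty}\) in \(\mathbb{Z}\), either some \(p_{i}\) is even or \(p_{1}+p_{2}\) is even, so \(FP(\{p_{i}\}_{i=1}^{\infty})\) always meets \(2\mathbb{Z}\). In contrast, the set of odd integers is not an IP\({}^{*}\)-set, since it is disjoint from the IP-set \(FP(\{2^{i}\}_{i=1}^{\infty})\). In particular, by Proposition 2.8 every central subset of \(\mathbb{Z}\) contains an even number.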
We will need the following two results on central sets and central\({}^{*}\)-sets. **Theorem 2.9** ([17, Theorem 19.27]).: _The collection of central sets has the Ramsey property, that is, for every central set \(F\subset G\) and \(F=F_{1}\cup F_{2}\), either \(F_{1}\) or \(F_{2}\) is a central set._ **Proposition 2.10** ([17, Lemma 15.4]).: 1. _The intersection of two central_\({}^{*}\)_-sets is still a central_\({}^{*}\)_-set._ 2. _A subset_ \(F\) _of_ \(G\) _is a central_\({}^{*}\)_-set if and only if for every central set_ \(A\subset G\)_,_ \(F\cap A\) _is a central set._ Let \((X,G)\) be a dynamical system. A point \(x\in X\) is called _IP\({}^{*}\)-recurrent_ (resp. _FIP\({}^{*}\)-recurrent_, _central recurrent_, _central_\({}^{*}\)_-recurrent_) if for any neighborhood \(U\) of \(x\), \(N(x,U)\) is an IP\({}^{*}\)-set (resp. an FIP\({}^{*}\)-set, a central set, a central\({}^{*}\)-set). We have the following characterization of distal points, see [11, Theorem 9.11 and Proposition 9.17] for \(\mathbb{N}\)-actions and [7, Corollary 5.30] for general group actions. **Theorem 2.11**.: _Let \((X,G)\) be a dynamical system and \(x\in X\). Then \(x\) is distal if and only if it is IP\({}^{*}\)-recurrent if and only if it is central\({}^{*}\)-recurrent._ We will use the following characterization of central sets, see [21, Proposition 5.8] for \(\mathbb{N}\)-actions; the proof holds for general groups without difficulty. **Proposition 2.12**.: _A subset \(F\) of \(G\) is central if and only if for every dynamical system \((X,G)\) and \(x\in X\) there exists a minimal point \(y\) in \(\overline{Fx}\) such that \((x,y)\) is proximal, where \(Fx=\{gx\colon g\in F\}\)._ ## 3. Pairwise IP\({}^{*}\)-equicontinuity and distality In this section we first introduce pairwise IP\({}^{*}\)-equicontinuity and give a characterization of distality via pairwise IP\({}^{*}\)-equicontinuity. Then we consider the local version of pairwise IP\({}^{*}\)-equicontinuity, named almost pairwise IP\({}^{*}\)-equicontinuity, give a dichotomy result for minimal systems and characterize the distal structure relation. It is worth mentioning that although some proofs in Subsection 3.1 are similar to the ones in [23], we contribute new ideas to this section in Subsection 3.2. ### Pairwise IP\({}^{*}\)-equicontinuity Let \((X,G)\) be a dynamical system. A point \(x\) in \(X\) is called _pairwise IP\({}^{*}\)-equicontinuous_ if for any \(\varepsilon>0\) there exists a neighbourhood \(U\) of \(x\) such that for any \(y,z\in U\), \(\{g\in G\colon d(gy,gz)<\varepsilon\}\) is an IP\({}^{*}\)-set. The dynamical system \((X,G)\) is called _pairwise IP\({}^{*}\)-equicontinuous_ if every point in \(X\) is pairwise IP\({}^{*}\)-equicontinuous. By the compactness of \(X\) it is easy to see that \((X,G)\) is pairwise IP\({}^{*}\)-equicontinuous if and only if for any \(\varepsilon>0\) there exists \(\delta>0\) such that for any \(x,y\in X\) with \(d(x,y)<\delta\), \(\{g\in G\colon d(gx,gy)<\varepsilon\}\) is an IP\({}^{*}\)-set. First we have the following elementary results. **Lemma 3.1**.: _If \((X,G)\) is a distal system, then it is pairwise IP\({}^{*}\)-equicontinuous._ Proof.: Fix a point \(x\in X\) and \(\varepsilon>0\). Let \(U=B(x,\frac{\varepsilon}{3})\) and \(y,z\in U\). As \(y\) and \(z\) are distal points, \(N(y,U)\) and \(N(z,U)\) are IP\({}^{*}\)-sets by Theorem 2.11. By Proposition 2.5, \(F:=N(y,U)\cap N(z,U)\) is also an IP\({}^{*}\)-set. Then for any \(g\in F\), \(d(gy,gz)\leq\operatorname{diam}(U)<\varepsilon\). Therefore, \(x\) is pairwise IP\({}^{*}\)-equicontinuous. As this holds for arbitrary \(x\), \((X,G)\) is pairwise IP\({}^{*}\)-equicontinuous. **Lemma 3.2**.: _Let \((X,G)\) be a dynamical system.
If \(x\in X\) is pairwise IP\({}^{*}\)-equicontinuous and a limit point of distal points, then \(x\) is a distal point._ Proof.: For any \(\varepsilon>0\) there exists a neighbourhood \(U\) of \(x\) such that for any \(y\in U\), \(\{g\in G\colon d(gx,gy)<\varepsilon\}\) is an IP\({}^{*}\)-set. Pick a distal point \(z\in U\cap B(x,\varepsilon)\). By Theorem 2.11, \(N(z,U\cap B(x,\varepsilon))\) is an IP\({}^{*}\)-set. By Proposition 2.5, \(F:=\{g\in G\colon d(gx,gz)<\varepsilon\}\cap N(z,U\cap B(x,\varepsilon))\) is also an IP\({}^{*}\)-set. By the triangle inequality, \(F\subset\{g\in G\colon d(gx,x)<2\varepsilon\}\). This implies that \(x\) is IP\({}^{*}\)-recurrent and then \(x\) is a distal point. Following [1], we say that a dynamical system \((X,G)\) is IP\({}^{*}\)-central if for every nonempty open subset \(U\) of \(X\), the return time set \(N(U,U)\) is an IP\({}^{*}\)-set. By Theorem 2.11, we know that if a dynamical system has a dense set of distal points, then it is IP\({}^{*}\)-central. **Lemma 3.3**.: _A dynamical system \((X,G)\) is IP\({}^{*}\)-central if and only if for any IP-set \(F\) in \(G\) and nonempty open subset \(U\) of \(X\), there exist an IP-subset \(F^{\prime}\) of \(F\) and a point \(z\in U\) such that \(F^{\prime}\subset N(z,U)\)._ Proof.: (\(\Leftarrow\)) Fix a nonempty open subset \(U\) of \(X\). For any IP-set \(F\) in \(G\), there exist an IP-subset \(F^{\prime}\) of \(F\) and a point \(z\in U\) such that \(F^{\prime}\subset N(z,U)\). It is clear that \(N(z,U)\subset N(U,U)\). Then \(F^{\prime}\subset F\cap N(z,U)\subset F\cap N(U,U)\). Therefore, \(N(U,U)\) is an IP\({}^{*}\)-set and \((X,G)\) is IP\({}^{*}\)-central. (\(\Rightarrow\)) Assume that \((X,G)\) is IP\({}^{*}\)-central. Fix an IP-set \(F\) in \(G\) and a nonempty open subset \(U\) of \(X\). Pick a sequence \(\{p_{i}\}_{i=1}^{\infty}\) in \(G\) such that \(FP(\{p_{i}\}_{i=1}^{\infty})\subset F\). Take a nonempty open subset \(V_{0}\) of \(X\) such that \(\overline{V_{0}}\subset U\). Since \((X,G)\) is IP\({}^{*}\)-central, \(N(V_{0},V_{0})\) is an IP\({}^{*}\)-set, so there is a finite subset \(\alpha_{1}\) of \(\mathbb{N}\) such that \(V_{0}\cap p_{\alpha_{1}}^{-1}V_{0}\neq\emptyset\), where \(p_{\alpha_{1}}=\prod_{j\in\alpha_{1}}p_{j}\). Take a nonempty open subset \(V_{1}\) of \(X\) such that \(\overline{V_{1}}\subset V_{0}\cap p_{\alpha_{1}}^{-1}V_{0}\); then there is a finite subset \(\alpha_{2}\) of \(\mathbb{N}\) with \(\min\alpha_{2}>\max\alpha_{1}\) and \(V_{1}\cap p_{\alpha_{2}}^{-1}V_{1}\neq\emptyset\). By induction we get a sequence \(\{\alpha_{i}\}\) of finite subsets of \(\mathbb{N}\) and a sequence of nonempty open sets \(\{V_{i}\}\) such that for any \(i\in\mathbb{N}\), \(\min\alpha_{i+1}>\max\alpha_{i}\) and \(\overline{V_{i+1}}\subset V_{i}\cap p_{\alpha_{i+1}}^{-1}V_{i}\). By the compactness of \(X\), take a point \(z\in\bigcap_{i=1}^{\infty}\overline{V_{i}}\) and let \(q_{i}=p_{\alpha_{i}}\) for \(i\in\mathbb{N}\). It is easy to verify that \(FP(\{q_{i}\}_{i=1}^{\infty})\subset N(z,U)\). **Lemma 3.4**.: _If a dynamical system \((X,G)\) admits an invariant measure with full support, then it is IP\({}^{*}\)-central._ Proof.: Let \(\mu\) be an invariant measure of \((X,G)\) with full support. Fix an IP-set \(F\) in \(G\) and pick a sequence \(\{p_{i}\}_{i=1}^{\infty}\) in \(G\) such that \(FP(\{p_{i}\}_{i=1}^{\infty})\subset F\). For any nonempty open subset \(U\) of \(X\), one has \(\mu(U)>0\).
For any \(n\in\mathbb{N}\), let \(q_{n}=\prod_{j\in\{1,2,\ldots,n\}}p_{j}\). As \(\mu\) is \(G\)-invariant, \(\mu(q_{n}^{-1}(U))=\mu(U)\) for any \(n\in\mathbb{N}\). Note that \(\mu(X)=1\). So there exist two positive integers \(n_{1}<n_{2}\) such that \(\mu\big{(}q_{n_{1}}^{-1}(U)\cap q_{n_{2}}^{-1}(U)\big{)}>0\). Then \(U\cap(\prod_{j\in\{n_{1}+1,n_{1}+2,\ldots,n_{2}\}}p_{j})^{-1}(U)\neq\emptyset\), and \(N(U,U)\) is an IP\({}^{*}\)-set. This implies that \((X,G)\) is IP\({}^{*}\)-central. **Lemma 3.5**.: _Let \((X,G)\) be an IP\({}^{*}\)-central system. If \(x\in X\) is pairwise IP\({}^{*}\)-equicontinuous, then \(x\) is distal._ Proof.: Assume that \(x\) is not distal. By Theorem 2.11 it is not IP\({}^{*}\)-recurrent, that is, there exists a \(\delta>0\) such that \(N(x,X\setminus B(x,2\delta))\) is an IP-set. As \(x\) is pairwise IP\({}^{*}\)-equicontinuous, there exists a neighborhood \(U\) of \(x\) such that for any \(z\in U\), \(\{g\in G\colon d(gx,gz)<\delta\}\) is an IP\({}^{*}\)-set. However, as \((X,G)\) is IP\({}^{*}\)-central, by Lemma 3.3 there exists \(v\in U\cap B(x,\delta)\) such that \(N(v,U\cap B(x,\delta))\) contains an IP-subset of \(N(x,X\setminus B(x,2\delta))\). For any \(g\in N(v,U\cap B(x,\delta))\), \(d(gv,x)<\delta\) and \(d(gx,x)>2\delta\). Thus \(d(gv,gx)>\delta\) for all \(g\) in the IP-subset of \(N(v,U\cap B(x,\delta))\), which contradicts to pairwise IP\({}^{*}\)-equicontinuity of \(x\). Combining Lemmas 3.1 and 3.5, we have the following result. **Theorem 3.6**.: _An IP\({}^{*}\)-central system is distal if and only if it is pairwise IP\({}^{*}\)-equicontinuous._ Now we are ready to prove Theorem 1.1 (1). Proof of Theorem 1.1 (1).: Let \(\mu\) be an invariant measure of \((X,G)\). As \((X,G)\) is minimal, \(\mu\) has full support. By Lemma 3.4, \((X,G)\) is IP\({}^{*}\)-central. Then the result follows from Theorem 3.6. **Remark 3.7**.: Similar to Proof of Theorem 1.1 (1), it is easy to see that the conclusions of Lemma 3.3 and Lemma 3.5 are true for any minimal system which admits an invariant measure. ### Almost pairwise IP\({}^{*}\)-equicontinuity Let \((X,G)\) be a dynamical system. Denote by \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) the collection of all pairwise IP\({}^{*}\)-equicontinuous points in \(X\). We say that a dynamical system \((X,G)\) is _almost pairwise IP\({}^{*}\)-equicontinuous_ if \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is residual in \(X\). **Lemma 3.8**.: _Let \((X,G)\) be a dynamical system. Then \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is a \(G\)-invariant \(G_{\delta}\) subset of \(X\)._ Proof.: For each \(m\in\mathbb{N}\), denote by \(\operatorname{Eq}^{\text{IP}^{*}}_{m}(X,G)\) the collection of all points \(x\) in \(X\) with the property that there exists a neighbourhood \(U\) of \(x\) such that for any \(y,z\in U\), \(\left\{g\in G\colon d(gy,gz)<\frac{1}{m}\right\}\) is an IP\({}^{*}\)-set. Clearly, \(\{\operatorname{Eq}^{\text{IP}^{*}}_{m}(X,G)\}_{m=1}^{\infty}\) is a decreasing sequence of open subsets of \(X\), and \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)=\bigcap_{m=1}^{\infty}\operatorname{Eq }^{\text{IP}^{*}}_{m}(X,G)\). Then \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is a \(G_{\delta}\) subset of \(X\). Fix \(h\in G\). For any \(m\in\mathbb{N}\) there exists \(n\in\mathbb{N}\) such that for any \(u,v\in X\) with \(d(u,v)<\frac{1}{n}\) one has \(d(hu,hv)<\frac{1}{m}\). 
Assume that \(x\in\operatorname{Eq}^{\text{IP}^{*}}_{m}(X,G)\), that is there exists a neighbourhood \(U\) of \(x\) such that for any \(y^{\prime},z^{\prime}\in U\), \(\left\{g\in G\colon d(gy^{\prime},gz^{\prime})<\frac{1}{n}\right\}\) is an IP\({}^{*}\)-set. Let \(V=hU\). Then \(V\) is a neighborhood of \(hx\). For any \(y,z\in V\), \(h^{-1}y,h^{-1}z\in U\). Then \(\left\{g\in G\colon d(g(h^{-1}y),g(h^{-1}z))<\frac{1}{n}\right\}\) is an IP\({}^{*}\)-set. By the choice of \(n\), \(\left\{g\in G\colon d(hgh^{-1}y,hgh^{-1}z)<\frac{1}{m}\right\}\) is also an IP\({}^{*}\)-set. Note that \(\left\{g\in G\colon d(gy,gz)<\frac{1}{m}\right\}\supset h\left\{g\in G\colon d (hgh^{-1}y,hgh^{-1}z)<\frac{1}{m}\right\}h^{-1}\), then \(\left\{g\in G\colon d(gy,gz)<\frac{1}{m}\right\}\) is also an IP\({}^{*}\)-set. This implies that \(h\operatorname{Eq}^{\text{IP}^{*}}_{m}(X,G)\subset\operatorname{Eq}^{\text{IP}^{*}}_{m}(X,G)\), and then \(h\operatorname{Eq}^{\text{IP}^{*}}(X,G)\subset\operatorname{Eq}^{\text{IP}^ {*}}(X,G)\). By changing \(h\) by \(h^{-1}\), we have \(h^{-1}(\operatorname{Eq}^{\text{IP}^{*}}(X,G))\subset\operatorname{Eq}^{\text {IP}^{*}}(X,G)\), then we have \(h(\operatorname{Eq}^{\text{IP}^{*}}(X,G))=\operatorname{Eq}^{\text{IP}^{*}}(X,G)\). **Proposition 3.9**.: _If \((X,G)\) is a minimal system, then either \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is residual in \(X\) or \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is empty._ Proof.: Assume that \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is not empty. Pick \(x\in\operatorname{Eq}^{\text{IP}^{*}}(X,G)\). By Lemma 3.8\(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is \(G\)-invariant, \(Gx\subset\operatorname{Eq}^{\text{IP}^{*}}(X,G)\). As \((X,G)\) is minimal, \(Gx\) is dense in \(X\) and then \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is also dense in \(X\). By Lemma 3.8 again, \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is a \(G_{\delta}\)-subset of \(X\). Hence \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is residual in \(X\). **Lemma 3.10**.: _Let \(\pi\colon(X,G)\to(Y,G)\) be a factor map. If \(x\in X\) with \(\pi^{-1}(\pi(x))=\left\{x\right\}\) and \((Y,G)\) is distal, then \(x\) is pairwise IP\({}^{*}\)-equicontinuous._ Proof.: Fix \(\varepsilon>0\). As \(\pi^{-1}(\pi(x))=\left\{x\right\}\), there exists a neighborhood \(V\) of \(\pi(x)\) such that \(\pi^{-1}(V)\subset B(x,\frac{\varepsilon}{2})\). Pick a neighborhood \(U\) of \(x\) with \(\pi(U)\subset V\). For any \(u,v\in U\), \(\pi(u),\pi(v)\in V\). As \(\pi(u)\) and \(\pi(v)\) are distal, \(F:=\left\{g\in G\colon g\pi(u),g\pi(v)\in V\right\}\) is an IP\({}^{*}\)-set. For any \(g\in F\), \(gu,gv\in\pi^{-1}(V)\subset B(x,\frac{\varepsilon}{2})\), and then \(d(gu,gv)<\varepsilon\). This implies that \(x\) is pairwise IP\({}^{*}\)-equicontinuous. Now we consider the opposite of pairwise IP\({}^{*}\)-equicontinuity. A dynamical system \((X,G)\) is called _pairwise IP-sensitive_ if there exists a constant \(\delta>0\) with the property that for each nonempty open subset \(U\) of \(X\), there exist \(x_{1},x_{2}\in U\) such that \(\big{\{}g\in G\colon d(gx_{1},gx_{2})>\delta\big{\}}\) is an IP-set. We have the following dichotomy result for minimal systems. Note that the proof is different from the case of \(\mathbb{N}\)-actions which is proved in [23, Theorem 3.10]. **Theorem 3.11**.: _Every minimal system is either almost pairwise IP\({}^{*}\)-equicontinuous or pairwise IP-sensitive._ Proof.: Let \((X,G)\) be a minimal system. 
First we assume that \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\neq\emptyset\); then by Proposition 3.9, \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) is residual, that is, \((X,G)\) is almost pairwise IP\({}^{*}\)-equicontinuous. Now we assume that \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)=\emptyset\); then no point in \(X\) is pairwise IP\({}^{*}\)-equicontinuous. For each \(m\in\mathbb{N}\), denote by \(A_{m}(X,G)\) the collection of all points \(x\) in \(X\) with the property that for each neighborhood \(U\) of \(x\), there exist \(x_{1},x_{2}\in U\) such that \(\big{\{}g\in G\colon d(gx_{1},gx_{2})>\frac{1}{m}\big{\}}\) is an IP-set. It is clear that each \(A_{m}(X,G)\) is a closed subset of \(X\) and \(\bigcup_{m\in\mathbb{N}}A_{m}(X,G)\) is the collection of all points in \(X\) which are not pairwise IP\({}^{*}\)-equicontinuous. Then \(\bigcup_{m\in\mathbb{N}}A_{m}(X,G)=X\). By the Baire category theorem, there exists \(m_{0}\in\mathbb{N}\) such that the interior of \(A_{m_{0}}(X,G)\) is not empty. Pick a nonempty open subset \(V\) of \(A_{m_{0}}(X,G)\). As \((X,G)\) is minimal, there exists a finite subset \(H\) of \(G\) such that \(\bigcup_{h\in H}hV=X\). By the continuity of the action of \(G\) and the finiteness of \(H\), there exists a \(\delta>0\) such that for any \(u,v\in X\) with \(d(u,v)>\frac{1}{m_{0}}\) one has \(d(hu,hv)>\delta\) for each \(h\in H\). For every nonempty open subset \(U\) of \(X\), there exists \(h\in H\) such that \(U\cap hV\neq\emptyset\). Then \(V\cap h^{-1}U\) is a nonempty open subset of \(A_{m_{0}}(X,G)\), and there exist \(x^{\prime}_{1},x^{\prime}_{2}\in V\cap h^{-1}U\) such that \(\big{\{}g\in G\colon d(gx^{\prime}_{1},gx^{\prime}_{2})>\frac{1}{m_{0}}\big{\}}\) is an IP-set. Let \(x_{1}=hx^{\prime}_{1}\) and \(x_{2}=hx^{\prime}_{2}\). Then \(x_{1},x_{2}\in U\) and \(\big{\{}g\in G\colon d(gh^{-1}x_{1},gh^{-1}x_{2})>\frac{1}{m_{0}}\big{\}}\) is an IP-set. By the choice of \(\delta\), \(\big{\{}g\in G\colon d(hgh^{-1}x_{1},hgh^{-1}x_{2})>\delta\big{\}}\) is also an IP-set. Note that \(\big{\{}g\in G\colon d(gx_{1},gx_{2})>\delta\big{\}}\supset h\big{\{}g\in G\colon d(hgh^{-1}x_{1},hgh^{-1}x_{2})>\delta\big{\}}h^{-1}\), so \(\big{\{}g\in G\colon d(gx_{1},gx_{2})>\delta\big{\}}\) is also an IP-set. This shows that \((X,G)\) is pairwise IP-sensitive. **Proposition 3.12**.: _Let \((X,G)\) be a minimal system which admits an invariant measure. If the proximal relation \(P(X,G)\) is not closed, then \((X,G)\) is pairwise IP-sensitive._ Proof.: Since \(P(X,G)\) is not closed, there exist a distal pair \((y,z)\) and proximal pairs \((y_{i},z_{i})\) such that \((y_{i},z_{i})\to(y,z)\) as \(i\to\infty\). Let \(\delta:=\frac{1}{4}\inf_{g\in G}d(gy,gz)>0\). Fix a nonempty open subset \(U\) of \(X\). As \((X,G)\) is minimal, there exists \(h\in G\) such that \(hy\in U\). There exists \(n\in\mathbb{N}\) such that \(hy_{n}\in U\cap B(hy,\delta)\) and \(d(hz_{n},hz)<\delta\). Let \(x_{1}=hy_{n}\) and \(x_{2}=hz_{n}\). Then \(x_{1}\in U\), \(d(x_{1},x_{2})>\delta\) and \((x_{1},x_{2})\) is proximal. Choose nonempty open sets \(U_{1}\) and \(U_{2}\) with \(x_{i}\in U_{i}\) such that \(d(U_{1},U_{2})>\delta\) and \(U_{1}\subset U\). As \(x_{2}\) is a minimal point, we know that \(N(x_{1},U_{2})\) is a central set and hence it contains an IP-set \(FP(\{p_{i}\}_{i=1}^{\infty})\). By Lemma 3.3 and Remark 3.7, there exist an IP-subset \(FP(\{q_{j}\}_{j=1}^{\infty})\subseteq FP(\{p_{i}\}_{i=1}^{\infty})\) and \(x_{3}\in U_{1}\) such that \(gx_{3}\in U_{1}\) for each \(g\in FP(\{q_{j}\}_{j=1}^{\infty})\).
Then \(d(gx_{1},gx_{3})>\delta\) for each \(g\in FP(\{q_{j}\}_{j=1}^{\infty})\), this implies that \((X,G)\) is pairwise IP-sensitive. We have the following structure of almost pairwise IP\({}^{*}\)-equicontinuous. **Theorem 3.13**.: _Let \((X,G)\) be a minimal system which admits an invariant measure. Then the following statements are equivalent:_ 1. \((X,G)\) _is almost pairwise_ IP_\({}^{*}\)_-equicontinuous;_ 2. \((X,G)\) _is point-distal and_ \(P(X,G)\) _is closed;_ 3. \(\pi\colon(X,G)\to(X_{d},G)\) _is almost one-to-one, where_ \((X_{d},G)\) _is the maximal distal factor of_ \((X,G)\)_._ Proof.: (1)\(\Rightarrow\)(2) As \((X,G)\) is almost pairwise IP\({}^{*}\)-equicontinuous, the collection \(\operatorname{Eq}^{\text{IP}^{*}}(X,G)\) of all pairwise IP\({}^{*}\)-equicontinuous points is residual in \(X\). Since \((X,G)\) is a minimal system which admits an invariant measure, by Lemma 3.5 and Remark 3.7, every pairwise IP\({}^{*}\)-equicontinuous point is distal. Then the collection of all distal points is residual in \(X\), that is \((X,G)\) is point-distal. By Proposition 3.12, we known that the proximal relation \(P(X,G)\) is closed. (2)\(\Rightarrow\)(3) Let \(\pi\colon(X,G)\to(X_{d},G)\) be the factor map to its maximal distal factor of \((X,G)\). As \(P(X,G)\) is closed, by Lemma 2.3, \(P(X,G)\) is a \(G\)-invariant closed equivalence relation on \(X\). Thus \(R_{\pi}=P(X,G)\). Then for every point \(x\in X\), \(\pi^{-1}(\pi(x))=P(X,G)[x]\). As \((X,G)\) is point-distal, the collection of all distal points is residual in \(X\). For every distal point \(x\in X\), we claim that \(P(X,G)[x]=\{x\}\). Indeed, if \((x,y)\) is a proximal pair, as \(x,y\) are minimal points then \(\overline{Gx}=\overline{Gy}\). In particular \(y\in\overline{Gx}\). Then \(x=y\) since \(x\) is a distal point. Therefore, for every distal point \(x\in X\), \(\pi^{-1}(\pi(x))\) is a singleton. This shows that \(\pi\) is almost one-to-one. (3)\(\Rightarrow\)(1) Let \(\pi\colon(X,G)\to(X_{d},G)\) be the factor map to its maximal distal factor of \((X,G)\) and \(A=\{x\in X\colon\pi^{-1}(\pi(x))=\{x\}\}\). As \(\pi\) is almost one-to-one, \(A\) is residual in \(X\). According to Lemma 3.10, every point in \(A\) is pairwise IP\({}^{*}\)-equicontinuous. Then \((X,G)\) is almost pairwise IP\({}^{*}\)-equicontinuous. The following result reveals that pairwise IP\({}^{*}\)-equicontinuous points are the points with trivial section in the distal structure relation. **Proposition 3.14**.: _Let \((X,G)\) be a minimal system which admits an invariant measure. Then a point \(x\in X\) is pairwise IP\({}^{*}\)-equicontinuous if and only if \(S_{d}(x)=\{x\}\), where \(S_{d}\) is the distal structure relation of \((X,G)\)._ Proof.: Let \(\pi\colon(X,G)\to(X_{d},G)\) be the factor map to its maximal distal factor of \((X,G)\). Then \(R_{\pi}=S_{d}\). If \(S_{d}(x)=\{x\}\), by Lemma 3.10, \(x\) is pairwise IP\({}^{*}\)-equicontinuous. Now assume that \(x\) is pairwise IP\({}^{*}\)-equicontinuous. By Proposition 3.9, \((X,G)\) is almost pairwise IP\({}^{*}\)-equicontinuous. By Lemma 3.5, Remark 3.7 and the proof of Theorem 3.13, we known that \(S_{d}(x)=P(X,G)[x]=\{x\}\) **Remark 3.15**.: Let \((X,G)\) be a minimal system and \(\pi\colon(X,G)\to(X_{d},G)\) the factor map to its the maximal distal factor of \((X,G)\). By [25, VI (5.21)], the factor map \(\pi\) has a largest almost one-to-one factor \((Z,G)\). 
If in addition \((X,G)\) admits an invariant measure, then by Theorem 3.13, \((Z,G)\) is also the maximal almost pairwise \(\mathrm{IP}^{*}\)-equicontinuous factor of \((X,G)\), that is, every almost pairwise \(\mathrm{IP}^{*}\)-equicontinuous factor of \((X,G)\) is a factor of \((Z,G)\). ## 4. Pairwise central\({}^{*}\)-equicontinuity and distality In this section we introduce a new kind of weak equicontinuity, named pairwise central\({}^{*}\)-equicontinuity, and characterize distality via pairwise central\({}^{*}\)-equicontinuity. The structure theorem for (metric) minimal systems plays an important role in this section. ### Pairwise central\({}^{*}\)-equicontinuity Let \((X,G)\) be a dynamical system. A point \(x\) in \(X\) is called _pairwise central\({}^{*}\)-equicontinuous_ if for any \(\varepsilon>0\) there exists a neighbourhood \(U\) of \(x\) such that for any \(y,z\in U\), \(\{g\in G\colon d(gy,gz)<\varepsilon\}\) is a central\({}^{*}\)-set. The dynamical system \((X,G)\) is called _pairwise central\({}^{*}\)-equicontinuous_ if every point in \(X\) is pairwise central\({}^{*}\)-equicontinuous. Since every \(\mathrm{IP}^{*}\)-set is a central\({}^{*}\)-set, every pairwise \(\mathrm{IP}^{*}\)-equicontinuous point is pairwise central\({}^{*}\)-equicontinuous and every pairwise \(\mathrm{IP}^{*}\)-equicontinuous system is pairwise central\({}^{*}\)-equicontinuous. Similar to the proof of Lemma 3.2, we have the following result. **Lemma 4.1**.: _Let \((X,G)\) be a dynamical system._ 1. _If_ \(x\in X\) _is pairwise central_\({}^{*}\)_-equicontinuous and a limit point of distal points, then_ \(x\) _is a distal point;_ 2. _If_ \(x\in X\) _is pairwise central_\({}^{*}\)_-equicontinuous and a limit point of central recurrent points, then_ \(x\) _is a central recurrent point._ Since we do not have an analogue of Lemma 3.3 for central sets, we first study conditions for the opposite of pairwise central\({}^{*}\)-equicontinuity. A dynamical system \((X,G)\) is called _pairwise central sensitive_ if there exists a constant \(\delta>0\) with the property that for each nonempty open subset \(U\) of \(X\), there exist \(x_{1},x_{2}\in U\) such that \(\big{\{}g\in G\colon d(gx_{1},gx_{2})>\delta\big{\}}\) is a central set. **Proposition 4.2**.: _Let \(\pi:(X,G)\to(Y,G)\) be a factor map between minimal systems with \(X\times X\) having a dense set of minimal points. If \(\pi\) is proximal but not almost one-to-one, then \((X,G)\) is pairwise central sensitive._ Proof.: As \(\pi\) is not almost one-to-one, by [22, Lemma 3.3], \[\delta:=\frac{1}{4}\inf_{y\in Y}\mathrm{diam}(\pi^{-1}(y))>0.\] Fix a nonempty open subset \(U\) of \(X\). Since \((X,G)\) is minimal, \(\pi\) is semi-open, so the interior \(\mathrm{Int}(\pi(U))\) of \(\pi(U)\) is not empty. Pick a point \(y_{0}\in\mathrm{Int}(\pi(U))\). Since \(\mathrm{diam}(\pi^{-1}(y_{0}))\geq 4\delta>0\), we can find \(u_{1},u_{2}\in\pi^{-1}(y_{0})\) such that \(d(u_{1},u_{2})>3\delta\). Let \(W_{i}=B(u_{i},\delta)\cap\pi^{-1}(\operatorname{Int}(\pi(U)))\), \(i=1,2\). Then \(W_{1},W_{2}\) are nonempty open subsets of \(X\) with \(d(W_{1},W_{2})>\delta\). Since \((X\times X,G)\) has a dense set of minimal points, choose a minimal point \((y_{1},y_{2})\in W_{1}\times W_{2}\) and points \(x_{i}\in U\) with \(\pi(x_{i})=\pi(y_{i})\) for \(i=1,2\). As \(\pi\) is proximal, it is easy to see that \(\pi\times\pi\colon(X\times X,G)\to(Y\times Y,G)\) is also proximal (see e.g. [25, Proposition V(2.9)]).
Then \(((x_{1},x_{2}),(y_{1},y_{2}))\) is proximal and \(N((x_{1},x_{2}),W_{1}\times W_{2})\) is a central set, and \(\{g\in G\colon d(gx_{1},gx_{2})>\delta\}\supset N((x_{1},x_{2}),W_{1}\times W _{2})\). This shows that \((X,G)\) is pairwise central sensitive. We need the following structure theorem for (metric) minimal systems, see [8]. **Theorem 4.3**.: _For every minimal system \((X,G)\) there exists a countable ordinal \(\eta\) and canonically determined minimal systems \((X_{\nu},G)\), \((Y_{\nu},G)\) and \((Z_{\nu},G)\) with \(1\leq\nu\leq\eta\), and a commutative diagram_ _such that \(\{\star\}\) is the singleton, for each \(\nu\leq\eta\), \(\pi_{\nu}\) is RIC, \(\rho_{\nu}\) is isometric, \(\theta_{\nu},\theta_{\nu}^{*}\) are proximal and \(\pi_{\eta}\) is RIC and weakly mixing. For a limit ordinal \(\nu\), \(X_{\nu}\), \(Y_{\nu}\), \(\pi_{\nu}\) etc. are the inverse limits of \(X_{\iota}\), \(Y_{\iota}\), \(\pi_{\iota}\) etc. for \(\iota<\nu\)._ **Remark 4.4**.: 1. The definition of relatively incompressible (RIC) extension is more involved. We only need the fact that if \(\pi\colon(X,G)\to(Y,G)\) is a RIC factor map between minimal system then \(R_{\pi}\) has a dense set of minimal points. 2. Since an inverse limit of proximal extension is also a proximal extension (see e.g. [25, Corollary V(2.10)]), the factor map \(\theta\colon X_{\eta}\to X\) in Theorem 4.3 is proximal. We say that a minimal system \((X,G)\) is a _proximal-isometric system_ (_PI system_ for short) if the factor map \(\pi_{\eta}\) in the structure of \((X,G)\) is a homeomorphism. The following result is inspired by [26, Proposition 5.5], but the proof here is more straightforward. **Proposition 4.5**.: _If a minimal system \((X,G)\) is not a PI system then it is pairwise central sensitive._ Proof.: By Theorem 4.3 and Remark 4.4, we have the following diagram where \(\theta\) is proximal and \(\pi_{\eta}\) is weak mixing and RIC. Denote \(R_{\eta}=\{(x,y)\in X_{\eta}\times X_{\eta}\colon\pi_{\eta}(x)=\pi_{\eta}(y)\}\). Then \((R_{\eta},G)\) is a transitive system with a dense set of minimal points. As \((X,G)\) is not a PI system, \(\pi_{\eta}\) is not a homeomorphism, then \(\{(x,x)\in X_{\eta}\times X_{\eta}\colon x\in X_{\eta}\}\subsetneq R_{\eta}\). Let \(R=\theta\times\theta(R_{\eta})\). Then \((R,G)\) is a transitive system with a dense set of minimal points. Since \(\theta\) is proximal, \(\Delta_{X}:=\{(x,x)\in X\times X\colon x\in X\}\subsetneq R\). Fix a minimal point \((z_{1},z_{2})\in R\setminus\Delta_{X}\). Then there exists a \(\delta>0\) such that for every point \((a,b)\in\overline{G(z_{1},z_{2})}\) one has \(d(a,b)>3\delta\). Let \(U\) be a nonempty open subset of \(X\). Then \(U\times U\cap R\neq\emptyset\) as \(\Delta_{X}\subset R\). Pick a transitive point \((x_{1},x_{2})\) in \(U\times U\cap R\). By Theorem 2.2, there exists a minimal point \((y_{1},y_{2})\in\overline{G(z_{1},z_{2})}\) such that \(((x_{1},x_{2}),(y_{1},y_{2}))\) is proximal. Then \(N((x_{1},x_{2}),B(y_{1},\delta)\times B(y_{2},\delta))\) is a central set, and \(\left\{g\in G\colon d(gx_{1},gx_{2})>\delta\right\}\supset N((x_{1},x_{2}),B(y _{1},\delta)\times B(y_{2},\delta))\). This shows that \((X,G)\) is pairwise central sensitive. We need the following characterization of PI systems, see e.g. [25, Page 570]. **Theorem 4.6**.: _A minimal system \((X,G)\) is a PI system if and only if every transitive subsystem of \((X\times X,G)\) with a dense set of minimal points is minimal._ Now we are ready to prove Theorem 1.1(2). 
Proof of Theorem 1.1(2).: We only need to prove that for a minimal system \((X,G)\) with a dense set of minimal points in the product system \((X\times X,G)\), if \((X,G)\) is pairwise central\({}^{*}\)-equicontinuous then it is distal. Let \((X,G)\) be a pairwise central\({}^{*}\)-equicontinuous system. First, by Proposition 4.5, \((X,G)\) is a PI system. As the intersection of two central\({}^{*}\)-sets is still a central\({}^{*}\)-set, the product system \((X\times X,G)\) is pairwise central\({}^{*}\)-equicontinuous. As \((X\times X,G)\) has a dense set of minimal points and every minimal point is central recurrent, by Lemma 4.1 every point in \((X\times X,G)\) is central recurrent. Fix any point \((x,y)\in X\times X\). Then \((\overline{G(x,y)},G)\) is a transitive system. For every nonempty open subset \(U\) of \(\overline{G(x,y)}\), pick a nonempty open subset \(V\) of \(U\) with \(\overline{V}\subset U\) and a point \((a,b)\in V\). As \((a,b)\) is central recurrent, \(N((a,b),V)\) is a central set. By Proposition 2.12, there exists a minimal point in \(\overline{V}\). Then \((\overline{G(x,y)},G)\) has a dense set of minimal points. Now by Theorem 4.6, \(\overline{G(x,y)}\) is minimal. In particular, \((x,y)\) is a minimal point. According to Lemma 2.1, \((X,G)\) is distal. ## 5. Pairwise \(\text{FIP}^{*}\)-equicontinuity and systems of order \(\infty\) In this section we focus on systems of order \(\infty\). We aim to characterize minimal systems of order \(\infty\) via a weak form of equicontinuity, pairwise \(\text{FIP}^{*}\)-equicontinuity, and to study the local property of this kind of weak equicontinuity. In Proposition 5.6 we characterize FIP-sets by return time sets, which may be of independent interest. ### Systems of order \(\infty\) Let \((X,G)\) be a minimal system and let \(k\geq 1\) be an integer. A pair \((x,y)\in X\times X\) is said to be _regionally proximal of order \(k\)_ if for any \(\delta>0\), there exist \(x^{\prime},y^{\prime}\in X\) and a sequence \(\{p_{i}\}_{i=1}^{k}\) in \(G\) such that \(d(x,x^{\prime})<\delta\), \(d(y,y^{\prime})<\delta\), and \[d(gx^{\prime},gy^{\prime})<\delta\text{ for any }g\in FP(\{p_{i}\}_{i=1}^{k}).\] The _regionally proximal relation of order \(k\)_, denoted by \(\mathbf{RP}^{[k]}(X,G)\), is the collection of all regionally proximal pairs of order \(k\). It is clear that \[P(X,G)\subset\cdots\subset\mathbf{RP}^{[k+1]}(X,G)\subset\mathbf{RP}^{[k]}(X,G)\subset\cdots\subset\mathbf{RP}^{[2]}(X,G)\subset\mathbf{RP}^{[1]}(X,G).\] When the acting group is abelian, we have the following results on the regionally proximal relation of order \(k\) for minimal systems. **Theorem 5.1** ([24, Theorem 7.7]).: _If \((X,G)\) is a minimal system with \(G\) being abelian, then for every \(k\in\mathbb{N}\), \(\mathbf{RP}^{[k]}(X,G)\) is a \(G\)-invariant closed equivalence relation._ **Theorem 5.2** ([20, Proposition 8.15]).: _Let \((X,G)\) be a minimal system with \(G\) being abelian and \(k\in\mathbb{N}\). Then a pair \((x,y)\in\mathbf{RP}^{[k]}(X,G)\) if and only if for any neighborhood \(U\) of \(y\) there exists a sequence \(\{p_{i}\}_{i=1}^{k+1}\) in \(G\) such that \(FP(\{p_{i}\}_{i=1}^{k+1})\subset N(x,U)\)._ Let \(\mathbf{RP}^{[\infty]}(X,G)=\bigcap\limits_{k=1}^{\infty}\mathbf{RP}^{[k]}(X,G)\). Following from Theorems 5.1 and 5.2, one has **Theorem 5.3**.: _Let \((X,G)\) be a minimal system with \(G\) being abelian. Then_ 1. \(\mathbf{RP}^{[\infty]}(X,G)\) _is a_ \(G\)_-invariant closed equivalence relation;_ 2.
_a pair_ \((x,y)\in\mathbf{RP}^{[\infty]}(X,G)\) _if and only if for any neighborhood_ \(U\) _of_ \(y\)_,_ \(N(x,U)\) _is an FIP-set._ Following [6] and [13], we say that a dynamical system \((X,G)\) is a _system of order \(\infty\)_ if \(\mathbf{RP}^{[\infty]}(X,G)\) is trivial, i.e., it coincides with the diagonal of \(X\times X\). If \((X,G)\) is a minimal system with \(G\) being abelian, by Theorem 5.3, \(X/\mathbf{RP}^{[\infty]}(X,G)\) is the maximal factor of order \(\infty\). A point \(x\in X\) is called \(\infty\)_-step almost automorphic_ if \(\mathbf{RP}^{[\infty]}(X,G)[x]=\{x\}\). It is clear that \((X,G)\) is a system of order \(\infty\) if and only if every point in \((X,G)\) is \(\infty\)-step almost automorphic. The following characterization of \(\infty\)-step almost automorphic points was proved in [20, Theorem 8.1.7], see also [4, Theorem 0.2]. **Theorem 5.4**.: _Let \((X,G)\) be a minimal system with \(G\) being abelian. Then a point \(x\in X\) is an \(\infty\)-step almost automorphic point if and only if it is FIP\({}^{*}\)-recurrent._ ### Pairwise \(\text{FIP}^{*}\)-equicontinuity Let \((X,G)\) be a dynamical system. A point \(x\) in \(X\) is called _pairwise \(\text{FIP}^{*}\)-equicontinuous_ if for any \(\varepsilon>0\) there exists a neighbourhood \(U\) of \(x\) such that for any \(y,z\in U\), \(\{g\in G\colon d(gy,gz)<\varepsilon\}\) is an \(\text{FIP}^{*}\)-set. Denote by \(\operatorname{Eq}^{FIP^{*}}(X,G)\) the collection of all pairwise \(\text{FIP}^{*}\)-equicontinuous points in \(X\). A dynamical system \((X,G)\) is called _pairwise \(\text{FIP}^{*}\)-equicontinuous_ if \(\operatorname{Eq}^{FIP^{*}}(X,G)=X\). We will prove Theorem 1.2 in this subsection. First we need the following result, which is implicit in [12]; see [19, Proposition 5.8] for a proof of this version. **Proposition 5.5**.: _Let \((X,\mathcal{B},\mu)\) be a probability space and \(\{E_{i}\}_{i=1}^{\infty}\) be a sequence in \(\mathcal{B}\) with \(\mu(E_{i})\geq a>0\) for some constant \(a\) and any \(i\in\mathbb{N}\). Then for any \(k\geq 1\) and \(\varepsilon>0\) there is \(N=N(a,k,\varepsilon)\in\mathbb{N}\) such that for any strictly increasing sequence \(\{s_{i}\}_{i=1}^{n}\) in \(\mathbb{N}\) with \(n\geq N\), there exist \(1\leq t_{1}<t_{2}<\cdots<t_{k}\leq n\) with_ \[\mu\big{(}E_{s_{t_{1}}}\cap E_{s_{t_{2}}}\cap\cdots\cap E_{s_{t_{k}}}\big{)} \geq a^{k}-\varepsilon.\] **Proposition 5.6**.: _If a dynamical system \((X,G)\) admits an invariant measure with full support, then for any FIP-set \(F\) and nonempty open subset \(U\) of \(X\), there exist an FIP-subset \(F^{\prime}\) of \(F\) and a point \(z\in U\) such that \(F^{\prime}\subset N(z,U)\)._ Proof.: Let \(\mu\) be a \(G\)-invariant measure with full support. We first prove the following claim. **Claim:** For every \(n\in\mathbb{N}\) and every Borel subset \(E\) of \(X\) with \(\mu(E)\geq a>0\) for some constant \(a\), there exists \(k=k(a,n)\in\mathbb{N}\) such that for any sequence \(\{p_{i}\}_{i=1}^{k}\) in \(G\) there exists a sequence \(\{q_{i}\}_{i=1}^{n}\) in \(G\) such that \(FP(\{q_{i}\}_{i=1}^{n})\subset FP(\{p_{i}\}_{i=1}^{k})\) and \[\mu\bigg{(}E\cap\bigcap_{g\in FP(\{q_{i}\}_{i=1}^{n})}g^{-1}E\bigg{)}\geq c_{n},\] where \(c_{1}=\frac{1}{2}a^{2}\) and \(c_{i+1}=\frac{1}{2}c_{i}^{2}\) for \(i\in\mathbb{N}\). Proof of Claim.: Let \(E\) be a Borel subset of \(X\) with \(\mu(E)\geq a\). We prove the Claim by induction on \(n\). For \(n=1\), let \(k=k(a,1)=N(a,2,\frac{1}{2}a^{2})\) as in Proposition 5.5.
For any sequence \(\{p_{i}\}_{i=1}^{k}\) in \(G\), as \(\mu\) is \(G\)-invariant, \(\mu((\prod_{i=1}^{j}p_{i})^{-1}E)=\mu(E)\geq a\) for all \(j=1,\ldots,k\). By Proposition 5.5 there exists \(1\leq j_{1}<j_{2}\leq k\) such that \(\mu((\prod_{i=1}^{j_{1}}p_{i})^{-1}(E)\cap(\prod_{i=1}^{j_{2}}p_{i})^{-1}(E) )\geq\frac{1}{2}a^{2}\). Let \(g=\prod_{i=j_{1}+1}^{j_{2}}p_{i}\). Then \[\mu(E\cap g^{-1}E)=\mu\bigg{(}\left(\prod_{i=1}^{j_{1}}p_{i}\right)^{-1}(E) \cap\left(\prod_{i=1}^{j_{2}}p_{i}\right)^{-1}(E)\bigg{)}\geq\frac{1}{2}a^{2}.\] This shows that the result holds for \(n=1\). Assume that the result holds for \(n\leq m\). For \(n=m+1\), let \(k=k(a,m+1)=k(a,m)+k(c_{m},1)\). For any sequence \(\{p_{i}\}_{i=1}^{k}\) in \(G\), there exists a sequence \(\{q_{i}\}_{i=1}^{m}\) in \(G\) such that \(FP(\{q_{i}\}_{i=1}^{m})\subset FP(\{p_{i}\}_{i=1}^{k(a,m)})\) and \[\mu\bigg{(}E\cap\bigcap_{g\in FP(\{q_{i}\}_{i=1}^{m})}g^{-1}E\bigg{)}\geq c_{ m}.\] Let \(V=E\cap\bigcap_{g\in FP(\{q_{i}\}_{i=1}^{m})}g^{-1}E\). For the sequence \(\{p_{i}\}_{i=k(a,m)+1}^{k}\), there exists \(q_{m+1}\in FP(\{p_{i}\}_{i=k(a,m)+1}^{k})\) such that \[\mu(V\cap q_{m+1}^{-1}V)\geq c_{m+1}.\] Then \(FP(\{q_{i}\}_{i=1}^{m+1})\subset FP(\{p_{i}\}_{i=1}^{k})\) and \[\mu\left(E\cap\bigcap_{g\in FP(\{q_{i}\}_{i=1}^{m+1})}g^{-1}E\right)=\mu(V\cap q _{m+1}^{-1}V)\geq c_{m+1}.\] This ends the proof of the claim. Fix an FIP-set \(F\) and a nonempty open subset \(U\) of \(X\). For every \(k\in\mathbb{N}\) there exists a sequence \(\{p_{i}^{(k)}\}_{i=1}^{k}\) in \(G\) such that \(FP(\{p_{i}^{(k)}\}_{i=1}^{k})\subset F\). Take a nonempty open subset \(V_{1}\) of \(X\) with \(\overline{V_{1}}\subset U\). Let \(a_{1}=\mu(V_{1})\). Then \(a_{1}>0\). Let \(k_{1}=k(a_{1},1)\) as in the Claim. Then there exists \(q_{1}^{(1)}\in G\) such that \(q_{1}^{(1)}\in FP(\{p_{i}^{(k_{1})}\}_{i=1}^{k_{1}})\) and \(\mu(V_{1}\cap(q_{1}^{(1)})^{-1}V_{1})>0\). Assume that \(a_{m}\), \(k_{m}=k(a_{m},m)\), \(V_{m}\), \(\{q_{i}^{(m)}\}_{i=1}^{m}\) has been chosen for \(m\leq n\) such that \(FP(\{q_{i}^{(m)}\}_{i=1}^{m})\subset FP(\{p_{i}^{(k_{m})}\}_{i=1}^{k_{m}})\) and \[\mu\left(V_{m}\cap\bigcap_{g\in FP(\{q_{i}^{(m)}\}_{i=1}^{m})}g^{-1}V_{m} \right)>0.\] Pick a nonempty open subset \(V_{n+1}\) of \(X\) with \(\overline{V_{n+1}}\subset V_{n}\cap\bigcap_{g\in FP(\{q_{i}^{(n)}\}_{i=1}^{n}) }g^{-1}V_{n}\). Let \(a_{n+1}=\mu(V_{n+1})\). Then \(a_{n+1}>0\). Let \(k_{n+1}=k(a_{n+1},n+1)\) as in the Claim. Then there exists a sequence \(\{q_{i}^{(n+1)}\}_{i=1}^{n+1}\) in \(G\) such that \(FP(\{q_{i}^{(n+1)}\}_{i=1}^{n+1})\subset FP(\{p_{i}^{(k_{n+1})}\}_{i=1}^{k_{n+ 1}})\) and \[\mu\left(V_{n+1}\cap\bigcap_{g\in FP(\{q_{i}^{(n+1)}\}_{i=1}^{n+1})}g^{-1}V_{ n+1}\right)>0.\] By induction, we get a sequence of nonempty open subsets \(\{V_{k}\}_{k=1}^{\infty}\) and an FIP set \(F^{\prime}=\bigcup_{k=1}^{\infty}FP(\{q_{i}^{(k)}\}_{i=1}^{k})\). It is clear that \(F^{\prime}\subset F\). Pick a point \(z\in\bigcap_{k=1}^{\infty}\overline{V_{k}}\). Then \(z\in U\) and \(F^{\prime}\subset N(z,U)\). **Theorem 5.7**.: _Let \((X,G)\) be a minimal system with \(G\) being abelian. Then a point \(x\in X\) is pairwise \(\text{FIP}^{*}\)-equicontinuous if and only if it is \(\infty\)-step almost automorphic._ Proof.: (\(\Rightarrow\)) Let \(x\in X\) be a pairwise \(\text{FIP}^{*}\)-equicontinuous point. As the action group \(G\) is abelian, \((X,G)\) admits an invariant measure \(\mu\). Moreover, \((X,G)\) is minimal, \(\mu\) has full support. 
Similar to the proof of Lemma 3.5, using Theorem 5.4 and Proposition 5.6, one has that \(x\) is \(\infty\)-step almost automorphic. (\(\Leftarrow\)) Let \(\pi\colon(X,G)\to(X_{\infty},G)\) be the factor map to the maximal factor of order \(\infty\). By Theorem 5.3(1), \(\mathbf{RP}^{[\infty]}(X,G)\) is a \(G\)-invariant closed equivalence relation, \(X_{\infty}=X/\mathbf{RP}^{[\infty]}(X,G)\) and \(R_{\pi}=\mathbf{RP}^{[\infty]}(X,G)\). For any \(\infty\)-step almost automorphic point \(x\), by definition \(R_{\pi}[x]=\mathbf{RP}^{[\infty]}(X,G)[x]=\{x\}\). Similar to the proof of Lemma 3.10, using Theorem 5.4, one has that \(x\) is pairwise \(\text{FIP}^{*}\)-equicontinuous. Now Theorem 1.2 is an immediate consequence of Theorem 5.7. ### Almost FIP\({}^{*}\)-equicontinuity Similar to Lemma 3.8 and Proposition 3.9, we have the following characterization of \(\operatorname{Eq}^{FIP^{*}}(X,G)\). **Lemma 5.8**.: _Let \((X,G)\) be a dynamical system. Then \(\operatorname{Eq}^{FIP^{*}}(X,G)\) is a \(G\)-invariant \(G_{\delta}\) subset of \(X\)._ **Proposition 5.9**.: _Let \((X,G)\) be a minimal system. Then either \(\operatorname{Eq}^{FIP^{*}}(X,G)\) is residual or \(\operatorname{Eq}^{FIP^{*}}(X,G)\) is empty._ Similar to the case of pairwise IP\({}^{*}\)-equicontinuity, we consider the local property and the opposite of pairwise FIP\({}^{*}\)-equicontinuity. A dynamical system \((X,G)\) is called _almost pairwise FIP\({}^{*}\)-equicontinuous_ if \(\operatorname{Eq}^{FIP^{*}}(X,G)\) is residual in \(X\), and _pairwise FIP-sensitive_ if there exists a constant \(\delta>0\) with the property that for each nonempty open subset \(U\) of \(X\), there exist \(x_{1},x_{2}\in U\) such that \(\left\{g\in G\colon\rho(gx_{1},gx_{2})>\delta\right\}\) is an FIP-set. Similar to Theorem 3.11, we have the following dichotomy result for minimal systems. **Theorem 5.10**.: _Let \((X,G)\) be a minimal system. Then \((X,G)\) is either almost pairwise FIP\({}^{*}\)-equicontinuous or pairwise FIP-sensitive._ Recall that a minimal system \((X,G)\) is called \(\infty\)-_step almost automorphic_ if it has some \(\infty\)-step almost automorphic point. By Theorem 5.7 and Proposition 5.9, we have the following corollary. **Corollary 5.11**.: _Let \((X,G)\) be a minimal system with \(G\) being abelian. Then \((X,G)\) is almost pairwise FIP\({}^{*}\)-equicontinuous if and only if it is \(\infty\)-step almost automorphic._ **Acknowledgments.** The authors would like to thank Prof. Weisheng Wu and Dr. Jiahao Qiu for helpful suggestions. We also express many thanks to the anonymous referee, whose comments have substantially improved this paper.
When an infinite discrete group G acts on a compact metric space X, we introduce several weaker versions of equicontinuity along subsets of G. We show that if a minimal system (X, G) admits an invariant measure, then (X, G) is distal if and only if it is pairwise IP$^*$-equicontinuous; the same equivalence holds when the product system (X × X, G) of a minimal system (X, G) has a dense set of minimal points. Moreover, if (X, G) is a minimal system with G abelian, then (X, G) is a system of order ∞ if and only if it is pairwise FIP$^*$-equicontinuous.
2302.14355
Task-Oriented Grasp Prediction with Visual-Language Inputs
To perform household tasks, assistive robots receive commands in the form of user language instructions for tool manipulation. The initial stage involves selecting the intended tool (i.e., object grounding) and grasping it in a task-oriented manner (i.e., task grounding). Nevertheless, prior research on visual-language grasping (VLG) focuses on object grounding, while disregarding the fine-grained impact of tasks on object grasping. Task-incompatible grasping of a tool will inevitably limit the success of subsequent manipulation steps. Motivated by this problem, this paper proposes GraspCLIP, which addresses the challenge of task grounding in addition to object grounding to enable task-oriented grasp prediction with visual-language inputs. Evaluation on a custom dataset demonstrates that GraspCLIP achieves superior performance over established baselines with object grounding only. The effectiveness of the proposed method is further validated on an assistive robotic arm platform for grasping previously unseen kitchen tools given the task specification. Our presentation video is available at: https://www.youtube.com/watch?v=e1wfYQPeAXU.
Chao Tang, Dehao Huang, Lingxiao Meng, Weiyu Liu, Hong Zhang
2023-02-28T07:17:25
http://arxiv.org/abs/2302.14355v1
# Task-Oriented Grasp Prediction with Visual-Language Inputs ###### Abstract To perform household tasks, assistive robots receive commands in the form of user language instructions for tool manipulation. The initial stage involves selecting the intended tool (i.e., object grounding) and grasping it in a task-oriented manner (i.e., task grounding). Nevertheless, prior research on visual-language grasping (VLG) focuses on object grounding, while disregarding the fine-grained impact of tasks on object grasping. Task-incompatible grasping of a tool will inevitably limit the success of subsequent manipulation steps. Motivated by this problem, this paper proposes GraspCLIP, which addresses the challenge of task grounding in addition to object grounding to enable task-oriented grasp prediction with visual-language inputs. Evaluation on a custom dataset demonstrates that GraspCLIP achieves superior performance over established baselines with object grounding only. The effectiveness of the proposed method is further validated on an assistive robotic arm platform for grasping previously unseen kitchen tools given the task specification. Our presentation video is available at: [https://www.youtube.com/watch?v=leWTYQPeAXU](https://www.youtube.com/watch?v=leWTYQPeAXU). ## I Introduction Language provides a natural interface for task specification in unstructured environments such as kitchens and offices, complementing pure vision-based robotic frameworks [1, 2, 3]. Guided by natural language, an assistive robot is able to perform a wide range of household manipulation tasks using verbal instructions, such as "Use the _knife_ to _cut_ the apple for me" and "_Clean_ the mug with a _brush_". The initial step in performing such tasks is to grasp the intended tool in a task-oriented manner. This necessitates that the robot both coarsely localizes the target object (i.e., object grounding) and comprehends which fine-grained object part to grasp for the intended task execution (i.e., task grounding). However, previous research on VLG, such as natural language object retrieval [4, 5, 6] and object rearrangement [7], focuses on grounding language instructions to some coarse object-centric representations (e.g., bounding box, instance segmentation mask), while disregarding the fine-grained, task-oriented effects on object grasping. Fig.1(a) illustrates an example of manipulating kitchen tools. The language instruction of "Use the _knife_ to _cut_ an apple" necessitates both grounding the target object "_knife_" and grounding the target task of "_cut_" to the handle of the knife. Conversely, when the language instruction is "_Handover_ the _knife_ to me", humans would choose a different way by holding the blade for the target task of "_handover_". It is clear from this example that a language instruction would affect not only what object to grasp but also how the target object is grasped for an intended task execution. We, humans, take this skill for granted, but it is not explored by previous VLG research. Disregarding the fine-grained effects of tasks on grasp poses may result in potential task failures. For instance, handover by grabbing the knife handle may cause physical injury to the receiver. Furthermore, imprecise grasping of the knife handle may result in cutting failure. So, how can we endow robots with the same ability to predict task-oriented grasps with visual-language inputs?
To answer this question, we propose GraspCLIP to address task grounding in addition to object grounding to enable task-oriented grasp prediction. Fig.1(b) shows an assistive robot operating in a kitchen environment. GraspCLIP takes as input a visual scene observation \(O\) of multiple objects and a task instruction \(I\), and outputs a task-oriented grasp pose \(g\). GraspCLIP first leverages a visual-language model (VLM) CLIP [8] pre-trained on large-scale internet data to encode multi-modal inputs into a joint representation space. Then, to simultaneously achieve task grounding and object grounding, a two-stage, coarse-to-fine Task-Oriented Fusion (TOF) module is proposed to build hierarchical correspondences between visual observations and task instructions. This is in contrast to previous works, which have focused only on object grounding. In the last stage, a decoder predicts task Fig. 1: (a) Task grounding and object grounding revealed in humans’ grasping behavior. (b) GraspCLIP takes as input a visual scene observation \(O\) of multiple objects and a task instruction \(I\), and outputs a task-oriented grasp pose \(g\). oriented grasp poses based on instruction-conditioned representations generated from the previous stage. Evaluation on a custom dataset demonstrates the superiority of GraspCLIP over established baselines with object grounding only. We further validate its effectiveness in real-world applications on an assistive robotic arm platform for grasping previously unseen kitchen tools given the task specification. In summary, our contributions are as follows: * To address the challenge of task grounding in addition to object grounding, we contribute GraspCLIP to enable task-oriented grasp prediction with visual-language inputs. * To evaluate the task-oriented grasp prediction performance, we provide a custom dataset comprising 28 object categories, 96 instances, 38 household tasks, task-oriented grasp annotations, and template-based language instructions. * A system is built to enable an assistive robotic arm to predict and execute task-oriented grasps guided by user language instructions. ## II Related Work Vision-based grasping (VG) has been a fundamental problem in robotics. With the rise of deep learning in recent years, VG has achieved significant advances. For example, Mahler et al. [1] and Chu et al. [3] use CNN-based networks to predict planar grasps from RGB-D images. Mousavian et al. [2] propose to generate 6 degree-of-freedom (DoF) grasp poses on point clouds with a variational autoencoder. Most works in VG consider task-agnostic grasping, which finds stable grasp poses satisfying form and force closure. Failure to consider task constraints limits their usage in many application scenarios. To address this problem, some recent researches have proposed to merge language grounding into vision-based manipulation and grasping pipelines [4, 5, 6, 9, 10, 11, 12, 13]. Conditioned on language, the robot can understand and execute a diverse range of VLG tasks. Hatori et al. [4] present the first system to resolve ambiguity in language instructions for object picking. Similarly, Shridhar et al. [5] interactively pick objects using referring expressions. Built on top of [4] and [5], Zhang et al. [6] address language-conditioned picking in the clutter. The above methods focus on grounding natural language to coarse object-centric representations such as bounding boxes and use off-the-shelf task-agnostic grasp detectors. 
They do not explicitly consider the fine-grained effects of tasks on object grasping. This effect is essential since a task instruction would affect not only what object to grasp but also how to grasp it for the subsequent task execution. Another problem is that they rely on deep learning models trained on small-scale, self-collected datasets or public datasets such as RefCOCO [14], limiting their generalization capability to novel scenes, instances, categories, and tasks. There has been a recent trend of building VLG pipelines based on large pre-trained models to improve the generalization capability. For example, SayCan [11] and CLIPort [12] explore the power of large pre-trained models from natural language processing (NLP) and computer vision (CV) communities to build priors for robots efficiently. Ahn et al. [11] combine low-level skills with large language models (LLMs) [15][16] to complete long-horizon, language-guided mobile manipulation tasks. Shridhar et al. [12] present a CLIP [8] based imitation-learning agent trained for solving various language-specified tabletop tasks. Despite demonstrating a capacity to solve complex VLG tasks, they do not explicitly consider the fine-grained effects of tasks on object grasping. As a supplement to previous VLG researches, we address the challenge of task grounding in addition to object grounding to enable task-oriented grasp prediction with visual-language inputs. ## III Problem Formulation We consider the problem of learning a function \(\mathcal{F}\) that receives a visual scene observation \(O\in\mathbb{R}^{H\times W\times 3}\) and a task instruction \(I=\{s_{t}\}_{t=1}^{T}\), and outputs a task-oriented grasp pose \(g\) (in the image space), where \(s_{t}\) is the \(t\)-th word token and \(T\) is the max length: \[g=\mathcal{F}(O,I)\] Here, \(O\) is an RGB image of multiple objects, including one or more target objects and distractors. \(I\) is a natural language sentence of the task description. Depicted in Fig.2 are two examples of \(g\). Each of them is a 5-dimensional grasp rectangle parameterized by grasp location \((x,y)\), orientation \(\theta\), opening width \(w\), and length \(h\): \[g=\{x,y,\theta,w,h\}\] where the first three parameters are in \(SE(2)\) and represent the reference frame of the rectangle, and the last two describe the dimensions. \(h\) is a fixed value for a designated gripper in our implementation, although it could be a learnable parameter in general. For orientation, the space of \(SO(2)\) rotation is discretized into 120 bins. We approximate function \(\mathcal{F}\) with a deep neural network, namely GraspCLIP. To train GraspCLIP, a dataset \(\mathcal{D}=\{d_{1},d_{2},...,d_{n}\}\) of \(n\) tuples is required. The detail of data generation will be introduced later. Each tuple consists of a visual scene observation \(O_{j}\), a task instruction \(I_{j}\), and a set of \(m\) task-oriented grasp annotations \(\mathcal{G}_{j}=\{g_{i,j}\}_{i=1}^{m}\): \[d_{j}=(O_{j},I_{j},\mathcal{G}_{j})\] where \(j=1,2,...,n\). Fig. 2: Two examples of 5D grasp rectangles. The representation describes the grasp location, orientation, opening width, and length of a parallel jaw gripper. Each example is additionally annotated with task labels. ## IV Approach An overview of the proposed GraspCLIP is presented in Fig.3. The model architecture consists of four major components: two CLIP-based encoders, a two-stage, coarse-to-fine TOF module, and a decoder. Each component will be introduced for the rest of this section. 
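Before walking through each component, the input-output interface defined above can be made concrete with a small sketch. The container below mirrors the 5-D grasp rectangle \(g=\{x,y,\theta,w,h\}\) and the training tuples \(d_{j}=(O_{j},I_{j},\mathcal{G}_{j})\); it is an illustrative data structure only (the field names, the \([0,\pi)\) orientation range, and the binning helper are our assumptions, not the authors' released code).

```python
from dataclasses import dataclass
from typing import List
import numpy as np

NUM_THETA_BINS = 120  # the SO(2) orientation space is discretized into 120 bins


@dataclass
class GraspRectangle:
    """5-D grasp rectangle g = {x, y, theta, w, h} in image space."""
    x: float      # grasp center, image column (pixels)
    y: float      # grasp center, image row (pixels)
    theta: float  # gripper orientation in radians, assumed to lie in [0, pi)
    w: float      # opening width of the parallel-jaw gripper (pixels)
    h: float      # rectangle length; fixed for a designated gripper

    def theta_bin(self) -> int:
        """Map the continuous orientation onto one of the discrete bins."""
        return int(self.theta / np.pi * NUM_THETA_BINS) % NUM_THETA_BINS


@dataclass
class Sample:
    """One training tuple d_j = (O_j, I_j, G_j)."""
    image: np.ndarray             # O_j: an H x W x 3 RGB scene observation
    instruction: str              # I_j: e.g. "Use the knife to cut an apple"
    grasps: List[GraspRectangle]  # G_j: the m task-oriented grasp annotations
```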
### _Encoder Module_ Given a visual scene observation \(O\) and a task instruction \(I\), GraspCLIP first encodes multi-modal inputs into a joint representation space, which enables cross-modal reasoning and semantic understanding. Previous works on VLG usually use backbone networks trained on small-scale, single-modal datasets. This will lead to (1) a limited generalization capability to novel concepts and (2) a large semantic gap between two modalities. We, therefore, opt for CLIP-based encoders. Two encoders are pre-trained jointly on a dataset of 400 million (image, text) pairs and inherently learn a broad context of semantics. They contain rich prior knowledge for grounding open-end, high-level semantic concepts (see Fig.4). This property is beneficial since an assistive robot in real-world applications needs to deal with a open set of object categories and tasks. Specifically, we use a CLIP pre-trained ResNet50 [17] and a CLIP pre-trained BERT [18] to encode \(O\) and \(I\), respectively. Although CLIP-based encoders provide a strong basis, CLIP is originally designed to align the whole image (instead of pixels or regions) with the input sentence, leading to a significant gap between high-level image understanding and low-level task-oriented grasping. We next address this problem with a multi-modal fusion module and a decoder to transfer CLIP encodings to a task-oriented grasp prediction. ### _Task-Oriented Fusion Module_ Using a single output from each CLIP encoder is not enough for accurate task-oriented grasp prediction since a task instruction contains information at multiple levels of granularity. For example, "Use the _knife_ to _cut_ an apple" requires both coarsely grounding the target object "_knife_" and understanding which fine-grained object part to grasp for the target task of "_cut_". To tackle this issue, a hierarchical approach is first employed to capture the semantic meaning of multi-modal inputs. First, we transform \(I\) into two types of language embeddings: a sentence embedding vector \(l_{sen}\in\mathbb{R}^{1024}\) and a word embedding sequence \(l_{word}\in\mathbb{R}^{77\times 512}\) (with zero-padding). While \(l_{sen}\) provides a broad abstraction of the whole instruction, \(l_{word}\) stores an detailed embedding vector for each individual word token. Similarly, the intermediate features from CLIP visual encoder are also extracted to obtain a hierarchical representation of \(O\) (i.e., object-part Fig. 4: Given detected object proposals and natural language descriptions, CLIP outputs distributions over proposals without training. Fig. 3: An overview of GraspCLIP architecture: (a) GraspCLIP consists two CLIP-based encoders, a Task-Oriented Fusion module, and a decoder. (b) Coarse-grained Object Grounding module coarsely localizes the target object. (c) Fine-grained Affordance Grounding module creates fine-grained correspondences between functional/affordance regions and task instructions. shape). To achieve both object grounding and task grounding, we then need to build hierarchical correspondences between two sets of representations. Thus, a two-stage, coarse-to-fine Task-Oriented Fusion (TOF) module is proposed. It consists of a Coarse-grained Object Grounding (COG) module and a Fine-grained Affordance Grounding (FAG) module. In the first stage, COG creates a coarse mapping from \(I\) to the target object in \(O\). It takes as input the high-level visual feature map \(v_{high}\in\mathbb{R}^{10\times 10\times 1024}\) and the sentence embedding \(l_{sen}\). 
To reduce the semantic gap between two modalities, a linear projection is first applied: \(l_{sen}\rightarrow\tilde{l}_{sen}\in\mathbb{R}^{1024}\). Hadamard product is then taken at each spatial location to perform object grounding: \[\tilde{v}_{high,i}=v_{high,i}\odot\tilde{l}_{sen},i=1,...,10\times 10\] \(\tilde{v}_{high}\) is upsampled and concatenated with mid-level visual feature map \(v_{mid}\in\mathbb{R}^{20\times 20\times 1024}\), followed by a transposed convolution block to output \(v_{cog}\in\mathbb{R}^{40\times 40\times 256}\). However, as discussed before, object grounding is nevertheless insufficient to predict task-oriented grasps. The model must also establish a fine-grained correspondence between the target task in \(I\) and a functional/affordance region on the target object. To tackle this, FAG module is introduced in the second stage. According to the theory of affordance [19], affordance is defined in the second order here. For example, affordances "_cut_" and "_handover_" correspond to the knife handle and blade, respectively. The architecture of FAG is shown in Fig.3(c). The computational procedure can be divided into two steps. FAG first explores affordance regions on \(v_{cog}\) and then maps \(l_{word}\), especially object and task tokens, to these regions. To model the fine-grained intra-modal (visual affordance exploration) and inter-modal (word-to-affordance mapping) interactions, two types of Transformer-based attention mechanisms [20] are utilized in cascade. According to [21], regions sharing similar geometric structures are likely to have the same affordance. Therefore, we incorporate a self-attention layer to capture the non-local structural information on \(v_{cog}\). An example is depicted in Fig.5(a) for clarification. Self-attention provides a global context for local point-wise affordance exploration and parses \(v_{cog}\) into a set of functional regions. Specifically, \(v_{cog}\) is first flattened into \(z_{cog}\in\mathbb{R}^{1600\times 256}\). The self-attended feature map \(z_{sa}\) can be then computed as: \[z_{sa}=\text{softmax}(\frac{Q_{sa}K_{sa}^{\top}}{\sqrt{256}})V_{sa},\] \[Q_{sa}=W_{sa}^{Q}z_{cog},K_{sa}=W_{sa}^{K}z_{cog},V_{sa}=W_{sa}^{V}z_{cog}\] where \(W_{sa}^{Q}\), \(W_{sa}^{K}\), and \(W_{sa}^{V}\) are self-attention query, key, and value projection matrices, respectively. After the visual affordance exploration, we are ready to build fine-grained correspondences between affordance regions and word tokens. A cross-attention layer is adopted. As is shown in Fig.5(b), the intuition is to reconstruct \(z_{sa,i}\) by all elements in \(l_{word}\) weighted by their normalized cross-modal correspondences, where \(i=1,...,1600\). To match the dimension of \(z_{sa}\), \(l_{word}\) is first projected to a lower dimension of 256: \(l_{word}\rightarrow\tilde{l}_{word}\in\mathbb{R}^{77\times 256}\). The cross-attended feature map \(z_{ca}\) then can be computed as: \[z_{ca}=\text{softmax}(\frac{Q_{ca}K_{ca}^{\top}}{\sqrt{256}})V_{ ca},z_{ca}=\text{FFN}(z_{ca}),\] \[Q_{ca}=W_{ca}^{Q}z_{sa},K_{ca}=W_{ca}^{K}\tilde{l}_{word},V_{ca}= W_{ca}^{V}\tilde{l}_{word}\] where \(W_{ca}^{Q}\), \(W_{ca}^{K}\), and \(W_{ca}^{V}\) are cross-attention query, key, and value projection matrices, respectively. FFN is a feed-forward layer. Since the training of two attention layers is computationally unstable at the beginning, we insert them with a learnable gating parameter \(\alpha\) initialized to 0. 
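To make the FAG computation above concrete, the following PyTorch-style sketch chains the single-head self-attention and the word-to-affordance cross-attention with the zero-initialized gate \(\alpha\). The 256-d projections, the 1600 flattened visual tokens, and the 77 word tokens follow the description above, while the residual form of the gating and the single-head layout are our assumptions rather than the exact released architecture.

```python
import torch
import torch.nn as nn


class GatedAffordanceGrounding(nn.Module):
    """Sketch of the FAG block: self-attention over visual tokens, then
    cross-attention from visual tokens onto the projected word embeddings,
    each scaled by a learnable gate initialised to zero."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # self-attention projections W_sa^Q, W_sa^K, W_sa^V
        self.q_sa = nn.Linear(dim, dim, bias=False)
        self.k_sa = nn.Linear(dim, dim, bias=False)
        self.v_sa = nn.Linear(dim, dim, bias=False)
        # cross-attention projections W_ca^Q, W_ca^K, W_ca^V
        self.q_ca = nn.Linear(dim, dim, bias=False)
        self.k_ca = nn.Linear(dim, dim, bias=False)
        self.v_ca = nn.Linear(dim, dim, bias=False)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # gates start at 0 so the block initially passes features through unchanged
        self.alpha_sa = nn.Parameter(torch.zeros(1))
        self.alpha_ca = nn.Parameter(torch.zeros(1))
        self.scale = dim ** -0.5

    def forward(self, z_cog: torch.Tensor, l_word: torch.Tensor) -> torch.Tensor:
        # z_cog: (1600, 256) flattened visual features; l_word: (77, 256) word embeddings
        attn = torch.softmax(self.q_sa(z_cog) @ self.k_sa(z_cog).T * self.scale, dim=-1)
        z_sa = z_cog + self.alpha_sa * (attn @ self.v_sa(z_cog))          # visual affordance exploration
        attn = torch.softmax(self.q_ca(z_sa) @ self.k_ca(l_word).T * self.scale, dim=-1)
        z_ca = z_sa + self.alpha_ca * self.ffn(attn @ self.v_ca(l_word))  # word-to-affordance mapping
        return z_ca  # reshaped to (40, 40, 256) downstream and fused with v_low
```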
In this way, GraspCLIP learns to localize the target object in the initial training stage, and gradually attends to fine-grained affordance regions supporting the target task. Finally, \(z_{ca}\) is reshaped to \(v_{ca}\in\mathbb{R}^{40\times 40\times 256}\), and then fused with low-level visual feature map \(v_{low}\in\mathbb{R}^{80\times 80\times 256}\) to output \(v_{fag}\in\mathbb{R}^{160\times 160\times 64}\). ### _Decoder Module_ The decoder predicts a task-oriented grasp pose based on \(v_{fag}\). Specifically, three consecutive bottleneck layers are first applied to output \(v_{pred}\in\mathbb{R}^{640\times 640\times 16}\). The grasp prediction is then divided into three parallel tasks, and each is solved by appending a prediction head to \(v_{pred}\). For quality head, it outputs a heatmap \(M_{q}\in\mathbb{R}^{H\times W}\), measuring the probability (between 0 and 1) of satisfying the task instruction at each spatial location \((x,y)\). The other two heads output the orientation map \(M_{\theta}\in\mathbb{R}^{H\times W\times l}\) and opening width map \(M_{w}\in\mathbb{R}^{H\times W}\), respectively. A task instruction may correspond to multiple ground truth grasp poses. Here, GraspCLIP only outputs the top-1 prediction during inference by first taking the argmax over the smoothed \(M_{q}\) and then querying the other two maps: \[x^{*},y^{*}=\operatorname*{arg\,max}_{x,y}\text{ Gaussian}(M_{q})\] \[\theta^{*}=\operatorname*{arg\,max}_{dim=2}M_{\theta}|_{(x^{*},y^ {*})},\;\;w^{*}=\text{Gaussian}(M_{w})|_{(x^{*},y^{*})}\] The output grasp pose \(g^{*}\) is constructed as: \[g^{*}=\{x^{*},y^{*},\theta^{*},w^{*},h\}\] Fig. 5: Visualizations of two attention mechanisms. ### _Implementation Details_ The loss function consists of a location loss, an orientation loss, and an opening width loss: \[\mathcal{L}(M_{q},M_{\theta},M_{w},\hat{M_{q}},\hat{M_{q}},\hat{M_{w }})=\beta*\mathcal{L}_{loc}(M_{q},\hat{M_{q}})\] \[+\gamma*\mathcal{L}_{ori}(M_{\theta},\hat{M_{\theta}})+\mathcal{L }_{width}(M_{w},\hat{M_{w}})\] where \(\mathcal{L}_{-}\) denotes binary cross entropy loss, and \(\hat{M_{q}}\), \(\hat{M_{\theta}}\), and \(\hat{M_{w}}\) are ground truth maps. The model is trained on a single NVIDIA RTX 3090 GPU for 500 epochs with a batch size of 1. We use Adam [22] as the optimizer with an initial learning rate of \(10^{-4}\) and weight decay. During training, two CLIP pre-trained encoders are frozen. At each iteration, we randomly sample an input-output tuple \(d_{j}\) from \(\mathcal{D}\). ## V Dataset To evaluate the performance of our design and established methods, a dataset \(\mathcal{D}\) is required. Since there are no such datasets in the context of VLG, we build a custom one in two steps, including multi-object scene synthesis and template-based instruction generation. In multi-object scene synthesis, we first crowdsource a list of object categories and tasks from four highly cited VG datasets: ContactDB [23], SG14000 [24], TOG-Net [25], and TaskGrasp [26]. Note that we are particularly interested in kitchen tools as they are frequently manipulated by an assistive robot. Full object set and task set can be found in our presentation video. Then, a human operator teleoperates a robot arm (see Fig.6(a)) to collect single-object grasping data (see Fig.6(b)). Each grasp may afford one or more tasks. Teleoperation allows for the extraction of tool grasping skills from real human behavior, without the significant risk of a sim-to-real gap that may arise when using simulated data. 
Additionally, an assistive robot usually perceives more than one object (i.e., target objects + distractors) in real-world applications. Therefore, similar to domain randomization [27], we randomly drop single-object data on synthetic backgrounds (see Fig.6(c)) to generate multi-object scenes with ground truth grasp annotations. This process is done automatically. We apply a template-based instruction generation strategy to efficiently create \(I_{j}\) at each iteration. 11 templates are adapted from [28]. Similar to [13], we further augment the templates with QuillBot, an automatic paraphraser, to enrich the vocabulary and grammatical diversities. There are two types of instructions: (1) task with a target object (e.g., "Use _obj_ to _task_"), and (2) task only (e.g., "Use something to _task_"). Finally, _obj_ and _task_ are substituted with a target object category label and a target task label, respectively. Tab.I provides additional details of \(D\). ## VI Experimental Setup ### _Perception Experiments_ We evaluate the proposed method and baselines under four different test settings. The performance is measured by the ability to generalize to novel scenes, instances, object categories, and category-task combinations. For each level of generalization, the data is split into 80% for training and 20% for testing. Manually annotated grasps are used to evaluate models trained on \(D\). Four established baselines retrained on \(\mathcal{D}\) are compared. The details are as follows: * **TAG** represents task-agnostic VG methods [1, 2, 3] that focus on grasp stability and ignores task suitability. It receives only visual inputs and randomly ranks each candidate's task suitability. We remove the language component of GraspCLIP to model TAG. * **CLIP+TAG** is a naive combination of CLIP and a task-agnostic grasp detector. It is originally introduced in [29] for object localization. CLIP+TAG follows a two-stage pipeline where the task instruction is first grounded via Grad-CAM [30] at the pixel level, followed by a standalone task-agnostic grasp detector. * **OG+TAG** represents methods [4, 5, 6] focusing on grounding natural language to coarse object-centric representations. Specifically, we first ground the task instruction to the object bounding box with the highest matching score. A standalone task-agnostic grasp detector is then applied to that object. * **CLIPort-S** is an adapted version of state-of-the-art visual-language manipulation and grasping framework CLIPort [12]. We only keep its semantic branch and drop its spatial branch since depth information is unavailable. CLIPort does not explicitly consider task constraints when predicting grasps on the target object. ### _Real-Robot Experiments_ To further validate the effectiveness in real-world robotic applications, we deploy GraspCLIP on a 7-DoF Kinova Gen3 \begin{table} \begin{tabular}{l c} \hline Parameters & Settings \\ \hline Num of Categories & 28 \\ Num of Instances & 96 \\ Num of Tasks & 38 \\ Num of Templates & 106 \\ Grasp Type & Planar \\ Num of Grasps & 10 Per Instance \\ Examples of Categories & Spoon, Fork, Mug, Pan, Scissor, Tong \\ Examples of Tasks & Cut, Brush, Dig, Scoop, Handover, Saute \\ Example of Templates & “Hold _obj_ in your hand and _task_” \\ Data Split Type & Scene, Instance, Category, Category-Task \\ \hline \end{tabular} \end{table} TABLE I: Details of generated dataset Fig. 6: Data generation: (a) A human operator teleoperates the robot arm to stable grasp poses and assigns task labels. 
(b) 5D grasp poses on a single object are collected. (c) A multi-object scene with ground truth task-oriented grasp annotations are generated automatically. robot arm equipped with a Robotiq 2-finger adaptive gripper. Test objects are selected from the same categories as the training data but unseen during training. Some test kitchen tools collected from our laboratory and YCB dataset are shown in Fig.7. Here, we are only interested in revealing the gap between perception and execution. Therefore, the four baselines in Section VI-A are not physically evaluated. Converting the predicted grasp pose from image space to robot coordinate involves a sequence of transforms: \[g_{robot}=T_{RC}(T_{CI}(g_{img}))\] where \(g_{robot}\) and \(g_{img}\) are grasp poses in image space and robot coordinate, respectively. \(T_{CI}\) transforms from 2D image space to 3D camera frame, and \(T_{RC}\) transforms from camera frame to robot coordinate. The experimental procedure is as follows: (1) a set of \(N\) objects (\(N\geq 1\)) are placed in the robot workspace; (2) a natural language instruction is sent to the robot; and (3) the robot uses GraspCLIP to predict a task-oriented grasp pose \(g\) and execute it on the target object. ### _Evaluation Metrics_ Perception experiments examine the correctness of output grasps, while real-robot experiments test how well the robot physically interacts with objects. Two sets of evaluation metrics are adopted accordingly: * _Perception experiments_: Following previous works [1][3][31], we consider a predicted grasp \(g\) correct if two criteria are met: (1) the difference between the angle of \(g\) and a ground truth grasp pose \(\hat{g}\) is less than \(30^{\circ}\) and (2) the Jaccard index (similar to IOU) between \(g\) and \(\hat{g}\) is greater than 0.25. The Jaccard index \(J\) is defined as: \[J(g,\hat{g})=\frac{|\hat{g}\cap g|}{|\hat{g}\cup g|}\] Here, we choose the top-1 grasp candidate. * _Real-robot experiments_: To systematically evaluate the performance in real-robot experiments, we divide the pipeline into three stages and record their statistics separately. Three stages include Perception (\(Perc\)), Planning (\(Plan\)), and Action (\(Act\)). A grasp is considered successful if the target object is grasped subject to the task requirement and lifted stably for three seconds by the robot. ## VII Results ### _Result of Perception Experiments_ The result of perception experiments is reported in Tab.II. Scene and instance-level generalizations focus on generalizing to cases with limited variances with respect to the training data. The **scene-level generalization** experiment creates novel scene layouts with seen categories, tasks, and category-task combinations. TAG randomly explores scenes without considering task instructions, setting a lower performance bound of 38.77%. By incorporating CLIP, CLIP+TAG achieves a minor performance boost compared to TAG. Although CLIP contains rich priors for grounding high-level concepts, it has a limited ability to inform low-level grasping directly. OG+TAG can accurately ground the task instruction to the bounding box of the target object but is unable to predict task-oriented grasps. CLIPort-S is a competitive baseline, achieving a relatively high success rate of 80.19%. It still falls behind GraspCLIP since it does not explicitly consider the fine-grained effects of tasks on object grasping. GraspCLIP outperforms all baselines on scene-level generalization. 
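For reference, the scoring rule behind these perception numbers (orientation difference under \(30^{\circ}\) and Jaccard index above 0.25 against a ground-truth rectangle) can be sketched as below. The mask-rasterization overlap and the assumption that orientations are equivalent modulo \(\pi\) are illustrative choices on our part, not necessarily the authors' exact evaluation code.

```python
import numpy as np


def rect_mask(g, H, W):
    """Rasterize a 5-D grasp rectangle (x, y, theta, w, h) into a boolean mask.

    A pixel is inside the rectangle when its offset from the center, expressed
    in the rectangle's local frame, falls within [-w/2, w/2] x [-h/2, h/2]."""
    x, y, theta, w, h = g
    ys, xs = np.mgrid[0:H, 0:W]
    dx, dy = xs - x, ys - y
    u = dx * np.cos(theta) + dy * np.sin(theta)    # along the opening-width axis
    v = -dx * np.sin(theta) + dy * np.cos(theta)   # along the length axis
    return (np.abs(u) <= w / 2) & (np.abs(v) <= h / 2)


def grasp_correct(g_pred, g_true, H=640, W=640,
                  angle_tol=np.deg2rad(30), jaccard_tol=0.25):
    """A predicted grasp counts as correct only if both criteria hold."""
    d_theta = abs(g_pred[2] - g_true[2]) % np.pi
    d_theta = min(d_theta, np.pi - d_theta)        # parallel-jaw symmetry (assumed)
    m_pred, m_true = rect_mask(g_pred, H, W), rect_mask(g_true, H, W)
    jaccard = (m_pred & m_true).sum() / max((m_pred | m_true).sum(), 1)
    return d_theta < angle_tol and jaccard > jaccard_tol
```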
In terms of **instance-level generalization**, all methods bear a performance drop due to intra-category variance. Still, GraspCLIP achieves the best performance at 85.73%. **Category-level generalization** aims to transfer knowledge learned from familiar tool categories to novel ones. For example, having been taught that _'cup"_ has the function of _"pour"_, the robot can recognize the novel category _"bowl"_ affords the same function. Task-only language instruction is used solely in this experiment. Any object that affords the task can be counted as the target object. Therefore, it increases the probability of TAG predicting correct grasps. OG+TAG suffers from detecting novel categories, performing poorly on this evaluation. GraspCLIP outperforms the second-best CLIPort-S by 4.43%. **Category-task-level generalization** is a challenging though practical evaluation. A user may teach category-task pairs \(spoon-scoop\) and \(ladle-dispense\), and the robot should be able to mix the knowledge from two sources (i.e., \(spoon-dispense\), \(ladle-scoop\)). GraspCLIP outperforms all the baselines. The performance on these two generalizations demonstrates the superiority of GraspCLIP in generalizing to relatively significant variances. ### _Result of Real-Robot Experiments_ Real-robot experiments reveal the performance gap between perception and physical grasping. Tab.III presents the quantitative results, and Fig.8 illustrates the qualitative results. Although baselines are not statistically evaluated in real-robot experiments, we provide the qualitative results of a representative baseline in Fig.8 for comparison. GraspCLIP achieves a high success rate in no clutter or lightly cluttered scenes (Fig.8(a)-(c)). One of the limitations is low grasping DoF. While humans can perform 6 DoF grasping, such as grasping along \(x\) or \(y\) axis, GraspCLIP can only predict planar grasps (i.e., along \(z\) axis). We plan to extend our framework to 6 DoF dexterous VLG. Fig. 7: Part of test objects collected from the laboratory and YCB dataset. We deliberately create complex scene layouts in the four-object setup to further gauge the limits of the implementation. The model performs reasonably well when the structure of the target object remains fair visibility. Some heavily cluttered layouts, such as stacking and containing (Fig.8(d)-(e)), are hard to deal with. Therefore, the robot fails in some cases. Fig.8(e) shows a failure case, highlighted in the red box on the rightmost side. A potential solution to this problem could involve equipping GraspCLIP with an active exploration module as in [32]. Another type of failure comes from the language grounding error. In this case, the robot grasps a distractor with the same function as the target object. For example, when the task instruction is "Use the _laundry brush_ to _clean_", the robot falsely grounds the instruction to a sponge brush next to the laundry brush. Interactive correction with natural language [33] could fix this error. ### _Ablation Study_ To gain further insights into the effectiveness of each component, we conduct two sets of ablation studies. The result is reported in Tab.IV. #### Iv-C1 TOF Module Structure The proposed TOF module consists of a COG module and a FAG module. To investigate their effectiveness, we test three ablated versions shown in the first three rows of Tab.IV. COG-only uses two consecutive COG modules without task-level grounding, and FAG-only uses one FAG module without object grounding. 
Using two FAG modules gives a meaningless result, thus not reported. We observe that FAG-only performs consistently better than COG-only. This suggests that (1) task grounding is more critical for task-oriented grasp prediction, and (2) FAG is able to perform object localization to some extent. GraspCLIP outperforms both COG-only and FAG-only by a large margin. FAG-COG reverses the order of two grounding modules. The significant performance gap between FAG-COG and COG-FAG (i.e., GraspCLIP) justifies our coarse-to-fine design. #### Iv-C2 CLIP-Based Encoders To highlight the effectiveness of CLIP-based encoders over alternative pre-trained models, we substitute CLIP pre-trained encoders with an ImageNet-pretrained ResNet50 with BERT, denoted as RN50+BERT. In the fourth row of Tab.IV, we observe that CLIP-based encoders consistently improve the performance across four generalization types, validating their effectiveness. Fig. 8: Qualitative results of real-robot experiments. The grasps predicted by GraspCLIP and OG+TAG are represented by red-blue rectangle and yellow-green rectangle, respectively. The green boxes represent the bounding boxes detected by OG+TAG. ## VIII Conclusion To address the challenge of task grounding in addition to object grounding in the context of VLG, GraspCLIP is proposed to enable task-oriented grasp prediction with visual-language inputs. Evaluation on a custom dataset demonstrates that GraspCLIP outperforms established baselines with object grounding only. To further validate the effectiveness, we deploy GraspCLIP on an assistive robotic arm for grasping previously unseen kitchen tools given the task specification. As a future direction, we consider the incorporation of interactive language correction into the GraspCLIP framework, as well as an extension of GraspCLIP to support 6 DoF dexterous VLG.
To perform household tasks, assistive robots receive commands in the form of user language instructions for tool manipulation. The initial stage involves selecting the intended tool (i.e., object grounding) and grasping it in a task-oriented manner (i.e., task grounding). However, prior research on visual-language grasping (VLG) has focused on object grounding while ignoring the fine-grained influence of the task on how an object is grasped. Grasping a tool in a way that is incompatible with the task inevitably limits the success of the subsequent manipulation steps. To address this problem, this paper proposes GraspCLIP, which tackles task grounding in addition to object grounding to enable task-oriented grasp prediction from visual-language inputs. Evaluation on a custom dataset demonstrates that GraspCLIP achieves superior performance over established baselines that perform object grounding only, and its effectiveness is further validated on an assistive robotic arm platform for grasping previously unseen kitchen tools given the task specification.
2309.16630
On Learning with LAD
The logical analysis of data, LAD, is a technique that yields two-class classifiers based on Boolean functions having disjunctive normal form (DNF) representation. Although LAD algorithms employ optimization techniques, the resulting binary classifiers or binary rules do not lead to overfitting. We propose a theoretical justification for the absence of overfitting by estimating the Vapnik-Chervonenkis dimension (VC dimension) for LAD models where hypothesis sets consist of DNFs with a small number of cubic monomials. We illustrate and confirm our observations empirically.
C. A. Jothishwaran, Biplav Srivastava, Jitin Singla, Sugata Gangopadhyay
2023-09-28T17:35:26
http://arxiv.org/abs/2309.16630v1
# On Learning with LAD ###### Abstract The logical analysis of data, LAD, is a technique that yields two-class classifiers based on Boolean functions having disjunctive normal form (DNF) representation. Although LAD algorithms employ optimization techniques, the resulting binary classifiers or binary rules do not lead to overfitting. We propose a theoretical justification for the absence of overfitting by estimating the Vapnik-Chervonenkis dimension (VC dimension) for LAD models where hypothesis sets consist of DNFs with a small number of cubic monomials. We illustrate and confirm our observations empirically. **Keywords:** Boolean functions, PAC learning, VC dimension, logical analysis of data. ## 1 Introduction Suppose we have a collection of observations for a particular phenomenon in the form of data points and the information about its occurrence at each data point. We refer to such a data set as the _training set_. Data points are (feature) vectors whose coordinates are values of variables called _features_. The information on the occurrence or non-occurrence of the phenomenon under consideration can be recorded by labeling each data point as a "false" point or a "true" point, alternatively, by 0 or 1, respectively. Peter L. Hammer [1] proposed using partially defined Boolean functions to explore the cause-effect relationship of a data point's membership in the set of "true" points or "false" points. Crama et al. [2] developed this theory and named it the _Logical Analysis of Data_, or LAD for short. Another noteworthy survey article is by Alexe et al. [3], where the authors discuss LAD in detail and focus on using LAD for biomedical data analysis. Here, we consider LAD in the Probably Approximately Correct (PAC) learning model framework. We denote the hypothesis set by \(\mathcal{H}\). We restrict \(\mathcal{H}\) to the set of Disjunctive Normal Forms (DNFs) involving a small number of cubic terms and estimate the Vapnik-Chervonenkis (VC) dimension for the hypothesis set. Recently, Chauhan et al. [4] compared LAD with DNN and CNN for analyzing intrusion detection data sets. It was observed that LAD with low-degree terms (cubic and degree four) offers classifiers that outperform DNN or CNN classifiers. In this article, we theoretically explain why we can expect learning from data with LAD to be possible by solely checking the accuracy of the proposed Boolean classifiers within the training set. ## 2 Partially defined Boolean functions and logical analysis of data Let \(\mathbb{Z}\) be the ring of integers, and \(\mathbb{Z}^{+}\) be the set of positive integers. Consider the set \(\mathcal{B}=\{0,1\}\). For any \(u,v\in\mathcal{B}\), not necessarily distinct, we define disjunction, conjunction, and negation as \(u\lor v=u+v-uv\), \(u\wedge v=uv\), and \(\bar{u}=1-u\), respectively, where the operations on the right-hand side are over \(\mathbb{Z}\). It is customary to write \(uv\) instead of \(u\wedge v\). The set \(\mathcal{B}=\{0,1\}\) along with these operations is a Boolean algebra. For \(n\in\mathbb{Z}^{+}\), let \([n]=\{1,\ldots,n\}\subset\mathbb{Z}^{+}\). The Cartesian product of \(n\) copies of \(\mathcal{B}\) is \(\mathcal{B}^{n}=\{\mathbf{x}=(x_{1},\ldots,x_{n}):x_{i}\in\mathcal{B},i\in[n]\}\).
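As a minimal illustration of the operations just defined (assuming 0/1 are represented as Python integers), the disjunction, conjunction, and negation on \(\mathcal{B}\) can be written directly with the integer arithmetic above:

```python
def disjunction(u: int, v: int) -> int:
    """u OR v = u + v - uv over the integers."""
    return u + v - u * v


def conjunction(u: int, v: int) -> int:
    """u AND v = uv over the integers."""
    return u * v


def negation(u: int) -> int:
    """NOT u = 1 - u over the integers."""
    return 1 - u


# quick sanity check on B = {0, 1}
assert disjunction(1, 0) == 1 and conjunction(1, 0) == 0 and negation(0) == 1
```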
The set \(\mathcal{B}^{n}\) is a Boolean algebra where disjunction, conjunction, and negation are induced from those defined over \(\mathcal{B}\) as: \(\mathbf{x}\vee\mathbf{y}=(x_{1}\lor y_{1},\ldots,x_{n}\lor y_{n})\), \(\mathbf{x}\wedge\mathbf{y}=(x_{1}\wedge y_{1},\ldots,x_{n}\wedge y_{n})\), and \(\bar{\mathbf{x}}=(\bar{x}_{1},\ldots,\bar{x}_{n})\), for all \(\mathbf{x},\mathbf{y}\in\mathcal{B}^{n}\). Let the set of all functions from a set \(\mathcal{X}\) to a set \(\mathcal{Y}\) be denoted by \(\mathcal{F}^{\mathcal{X},\mathcal{Y}}\). In this paper, \(\mathcal{X}=\mathcal{B}^{n}\) and \(\mathcal{Y}=\mathcal{B}\). A function \(f\in\mathcal{F}^{\mathcal{B}^{n},\mathcal{B}}\) is said to be an \(n\)-variable Boolean function. The support or the set of true points of \(f\) is \(T(f)=\{\mathbf{x}\in\mathcal{B}^{n}:f(\mathbf{x})=1\}\), and the set of false points is \(F(f)=\{\mathbf{x}\in\mathcal{B}^{n}:f(\mathbf{x})=0\}\). An \(n\)-variable Boolean function can be completely defined by the ordered pair of sets \((T(f),F(f))\). Clearly, \(T(f)\cup F(f)=\mathcal{B}^{n}\) and \(T(f)\cap F(f)=\emptyset\). Hammer [1] proposed the notion of partially defined Boolean functions as follows. **Definition 2.1**.: _Let \(T,F\subseteq\mathcal{B}^{n}\) such that \(T\cap F=\emptyset\). Then \((T,F)\) is said to be a partially defined Boolean function, or pdBf, in \(n\) variables._ For a pdBf \((T,F)\), it is understood that \(T\cup F\neq\mathcal{B}^{n}\), otherwise the pdBf \((T,F)\) is a Boolean function. For studying Boolean functions and their various applications, we refer to [5]. This paper considers two-class classification problems with feature vectors in \(\mathcal{B}^{n}\). For a positive integer \(N\), consider a random sample of \(\mathcal{S}=\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\}\subseteq\mathcal{B}^ {n}\) of size \(N\). Let the label corresponding to the \(\mathbf{x}^{(i)}\) be denoted by \(y^{(i)}\in\mathcal{B}\) for all \(i\in[N]\). The vectors belonging to \(\mathcal{S}\), each augmented with its binary label, form the training set \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)}):i\in[N]\}\). The sets \[T_{\mathcal{D}}=\{\mathbf{x}^{(i)}:y^{(i)}=1,i\in[N]\},\text{ and }F_{ \mathcal{D}}=\{\mathbf{x}^{(i)}:y^{(i)}=0,i\in[N]\}\] are said to be the sets of positive and negative examples, respectively. The pair of subsets \((T_{\mathcal{D}},F_{\mathcal{D}})\) is a partially defined Boolean function over \(\mathcal{B}^{n}\). **Definition 2.2**.: _A Boolean function \(f:\mathcal{B}^{n}\to\mathcal{B}\) is an extension of a pdBf \((T,F)\), if \(T\subseteq T(f)\) and \(F\subseteq F(f)\)._ LAD uses the pdBf \((T_{\mathcal{D}},F_{\mathcal{D}})\) corresponding to a training set \(\mathcal{D}\) and proposes its extension as an approximation of the target function. Researchers have demonstrated that such extensions, when carefully constructed using particular conjunctive rules, provide excellent approximations of target functions. Boros et al. (2016, page 34, line 7) call them classifiers based on the "most justifiable" rules and further state that these "rules do not seem to lead to overfitting, even though it (the process of finding them) involves an element of optimization." In this paper, we prove this observation within the framework of the PAC learning model. Before proceeding further, we introduce some definitions and notations to describe our results. A Boolean variable is a variable that can take values from the set \(\mathcal{B}\). Let \(x\) be a Boolean variable. 
We associate a Boolean variable, \(\bar{x}\), with \(x\) such that for all \(x\in\mathcal{B}\), \(x\bar{x}=0\) and \(x\vee\bar{x}=1\). The symbol \(x^{\alpha}\) is defined by \[x^{\alpha}=\begin{cases}x&\text{if }\alpha=1\\ \bar{x}&\text{if }\alpha=0.\end{cases}\] The symbol \(x^{\alpha}\) is said to be a literal. A LAD algorithm outputs a collection of prime patterns that maximally cover the true points of the pdBf \((T,F)\) obtained from the training set \(\mathcal{D}\). For the technical details, we refer to [1; 2; 3; 5] and other related research results. In this paper, we do not focus on developing efficient algorithms to obtain theories and testing for how accurately they approximate a target function. Instead, we aim to establish the conditions that make learning by Boolean rules feasible. In other words, we would like to understand why we do not usually see overfitting even if the LAD algorithms are designed to maximally fit a theory with the training set data. We propose to do this analysis by using the PAC learning model. ## 3 The PAC learning model Valiant [7; 8] proposed the theory of Probably Approximately Correct (PAC) in 1984. For an introduction to the concept of the VC dimension, we refer to Abu-Mostafa et al. [9]. Let us denote the set of all possible feature vectors and labels by \(\mathcal{X}=\mathcal{B}^{n}\) and \(\mathcal{Y}=\mathcal{B}\), respectively. We assume that for each phenomenon there is a _target function_\(f\in\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) that correctly labels all the vectors in \(\mathcal{X}\). We consider training sets with binary features and labels of the form \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)}):i\in[N]\}\) where each \(\mathbf{x}^{(i)}\in\mathcal{B}^{n}\) and \(y^{(i)}\in\mathcal{B}\) are data points and binary labels, respectively. By definition, the target function \(f\in\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) satisfies \(f(\mathbf{x}^{(i)})=y^{(i)}\), for all \(i\in[N]\). Let \(\mathcal{H}\subset\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) be a set of functions called the hypothesis set. The PAC learning involves approximating the target function \(f\in\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) by a function \(h\in\mathcal{H}\subset\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) such that it has the lowest average error for points inside and outside the training set \(\mathcal{D}\). The hypothesis set ought to be carefully chosen and fixed before the execution of a learning algorithm over a training set. **Definition 3.1**.: _The in-sample error is the fraction of data points in \(\mathcal{D}\) where the target function \(f\) and \(h\in\mathcal{H}\) disagree. That is,_ \[E_{\mathrm{in}}(h)=\frac{1}{N}\sum_{i\in[N]}\#\{\mathbf{x}^{(i)}:h(\mathbf{x}^ {(i)})\neq f(\mathbf{x}^{(i)})\}. \tag{1}\] It is realistic to assume that the input space \(\mathcal{X}\) has a probability distribution \(\mu\) defined on it. For an input \(\mathbf{x}\) chosen from this space satisfying the probability distribution \(\mu\), we write \(\mathbf{x}\sim\mu\). The out-of-sample error is the probability that \(h(\mathbf{x})\neq f(\mathbf{x})\) when \(\mathbf{x}\sim\mu\). **Definition 3.2**.: _The out-of-sample error is_ \[E_{\mathrm{out}}(h)=\Pr_{\mathbf{x}\sim\mu}[h(\mathbf{x})\neq f(\mathbf{x})]. \tag{2}\] Learning is feasible if the learning algorithm can produce a function \(g\in\mathcal{H}\) such that the in-sample error is close enough to the out-of-sample error asymptotically with increasing sample size \(N\), and \(E_{\mathrm{in}}(g)\) is sufficiently small. 
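As a concrete companion to these definitions, the sketch below evaluates a cubic-monomial hypothesis and computes \(E_{\mathrm{in}}\) on a labelled sample together with \(E_{\mathrm{out}}\); for illustration we take \(\mu\) to be the uniform distribution on \(\mathcal{B}^{n}\) and compute the out-of-sample error exactly by enumerating the truth table.

```python
import itertools


def literal(x, i, alpha):
    """x_i^alpha: the variable itself when alpha = 1, its negation when alpha = 0."""
    return x[i] if alpha == 1 else 1 - x[i]


def cubic_monomial(i, j, k, ai, aj, ak):
    """A hypothesis from H_n: the conjunction x_i^ai x_j^aj x_k^ak."""
    return lambda x: literal(x, i, ai) * literal(x, j, aj) * literal(x, k, ak)


def in_sample_error(h, sample):
    """E_in(h): fraction of labelled points (x, y) in the training set misclassified by h."""
    return sum(h(x) != y for x, y in sample) / len(sample)


def out_of_sample_error(h, f, n):
    """E_out(h) for a uniform mu on B^n, computed exactly over the whole truth table."""
    points = list(itertools.product((0, 1), repeat=n))
    return sum(h(x) != f(x) for x in points) / len(points)
```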
We introduce the notions of the growth function and Vapnik-Chervonenkis dimension to explore the feasibility of learning using LAD. **Definition 3.3**.: _Let \(\mathcal{H}\) be a hypothesis set for the phenomenon under consideration. For any \(h\in\mathcal{H}\) and \(N\) points \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{X}\), the \(N\)-tuple \((h(\mathbf{x}^{(1)}),\ldots,h(\mathbf{x}^{(N)}))\) is said to be a dichotomy._ The set of dichotomies generated by \(\mathcal{H}\) on the points \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{X}\) is \(\mathcal{H}(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})=\{(h(\mathbf{x}^{(1)}), \ldots,h(\mathbf{x}^{(N)})):h\in\mathcal{H}\}\). If \(\mathcal{H}\) is capable of generating all possible dichotomies on \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\), i.e., \(\mathcal{H}(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})=\mathcal{B}^{N}\), we say that \(\mathcal{H}\)_shatters_ the set \(\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\}\). **Definition 3.4**.: _The growth function for a hypothesis set \(\mathcal{H}\) is_ \[m_{\mathcal{H}}(N)=\max\left\{\left|\mathcal{H}(\mathbf{x}^{(1)},\ldots, \mathbf{x}^{(N)})\right|:\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{ B}^{n}\right\}. \tag{3}\] The growth function \(m_{\mathcal{H}}(N)\leq 2^{N}\) since for any \(\mathcal{H}\) and \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{B}^{n}\), the set \(\mathcal{H}(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})\subseteq\mathcal{B}^{N}\). The Vapnik-Chervonenkis dimension, i.e., the VC dimension, of a hypothesis set \(\mathcal{H}\) is defined as follows. **Definition 3.5**.: _The Vapnik-Chervonenkis dimension of a hypothesis set \(\mathcal{H}\), denoted by \(d_{\mathrm{vc}}(\mathcal{H})\), or \(d_{\mathrm{vc}}\), is the largest value of \(N\) for which \(m_{\mathcal{H}}(N)=2^{N}\). If \(m_{\mathcal{H}}(N)=2^{N}\) for all \(N\), then \(d_{\mathrm{vc}}=\infty\)._ The following inequality provides an upper bound for the growth function as a function of the VC dimension and the sample size. \[m_{\mathcal{H}}(N)\leq\sum_{i=0}^{d_{\mathrm{vc}}}\binom{N}{i} \tag{4}\] Finally, we state the VC generalization bound. **Theorem 3.1** (Theorem 2.5, page 53, [9]): _For any tolerance \(\delta>0\),_ \[E_{\mathrm{out}}(g)\leq E_{\mathrm{in}}(g)+\sqrt{\frac{8}{N}\ln\frac{4m_{ \mathcal{H}}(2N)}{\delta}} \tag{5}\] _with probability \(\geq 1-\delta\)._ ## 4 LAD as a PAC learning model Suppose the data points in our training set \(\mathcal{D}\) involve \(n\) binary features for some positive integer \(n\). We use Boolean functions defined on \(\mathcal{B}^{n}\) to learn from such a training set. First, we consider the hypothesis set \(\mathcal{H}_{n}\) consisting of all cubic monomials in \(n\) binary variables. That is \[\mathcal{H}_{n}=\{x_{i}^{\alpha_{i}}x_{j}^{\alpha_{j}}x_{k}^{\alpha_{k}}: \alpha_{i},\alpha_{j},\alpha_{k}\in\{0,1\},i<j<k,\text{ for all }i,j,k\in[n]\}. \tag{6}\] The following theorem estimates the VC dimension of \(\mathcal{H}_{n}\). **Theorem 4.1**: _Let \(\mathcal{H}_{n}\) be the hypothesis set consisting of cubic monomials. Then the VC dimension_ \[d_{\mathrm{vc}}(\mathcal{H}_{n})=\Theta(\log_{2}n). 
\tag{7}\] Suppose \(\mathcal{S}\subset\mathcal{B}^{n}\) contains \(N\) vectors denoted by \[\mathbf{b}^{(1)} =(b_{1}^{(1)},b_{2}^{(1)},b_{3}^{(1)},\ldots,b_{n}^{(1)})\] \[\mathbf{b}^{(2)} =(b_{1}^{(2)},b_{2}^{(2)},b_{3}^{(2)},\ldots,b_{n}^{(2)})\] \[\cdots \cdots \cdots\] \[\mathbf{b}^{(N)} =(b_{1}^{(N)},b_{2}^{(N)},b_{3}^{(N)},\ldots,b_{n}^{(N)})\] We set \(b_{1}^{(i)}=b_{2}^{(i)}=1\) for all \(i\in[N]\). The vector corresponding to the binary representation of the non-negative integer \(m\), where \(0\leq m\leq 2^{N}-1\), is denoted by \(\mathbf{y}^{(m)}=(y_{1}^{(m)},\ldots,y_{N}^{(m)})\). Our aim is to construct \(\mathcal{S}\) such that there exist \(2^{N}\) cubic monomials in \(\mathcal{H}_{n}\) each generating a distinct vector in \(\mathcal{B}^{N}\) as the restriction of its truth table on \(\mathcal{S}\). The vectors \(\mathbf{y}^{(0)}\) and \(\mathbf{y}^{(2^{N}-1)}\) are generated by the monomials \(x_{1}x_{2}x_{3}\) and \(x_{1}x_{2}\overline{x}_{3}\), if we set \(b_{3}^{(i)}=y_{i}^{(0)}\), for all \(i\in[N]\). We note that \(y_{i}^{(0)}=0\), for all \(i\in[N]\). For each non-negative integer \(m\) where \(0\leq m\leq 2^{N}-1\), let \(\overline{m}\) be the integer in the same interval that satisfies the condition \(\mathbf{y}^{\overline{m}}=\overline{\mathbf{y}}^{(m)}\). If we set \(b_{m}^{(i)}=y_{i}^{(m)}\) for all \(i\in[N]\), the restrictions of the monomials \(x_{1}x_{2}x_{m}\) and \(x_{1}x_{2}\overline{x}_{m}\) of the set \(\mathcal{S}\) are \(\mathbf{y}^{(m)}\) and \(\overline{\mathbf{y}}^{(m)}=\mathbf{y}^{(\overline{m})}\), respectively. Therefore, if \(n=2+2^{N-1}\), the hypothesis set \(\mathcal{H}_{n}\) shatters a sample of size \(N\). This means that if \(n=2+2^{N-1}\), the VC dimension of \(d_{\mathrm{vc}}(\mathcal{H}_{n})\) satisfies \(n=2+2^{N-1}\leq 2+2^{d_{\mathrm{vc}}(\mathcal{H}_{n})-1}.\) Taking logarithm on both sides \[d_{\mathrm{vc}}(\mathcal{H}_{n})\geq\lfloor\log_{2}(n-2)+1\rfloor. \tag{8}\] Since the number of distinct cubic monomials is \(2^{3}\times\binom{n}{3}\) we have \(2^{d_{\mathrm{vc}}(\mathcal{H}_{n})}\leq 2^{3}\times\binom{n}{3}\), that is \[d_{\mathrm{vc}}(\mathcal{H}_{n})\leq\log_{2}(2^{3}\times\binom{n}{3})=3+\log_ {2}(\frac{n(n-1)(n-2)}{3!}). \tag{9}\] Combining (8) and (9) we have \(d_{\mathrm{vc}}(\mathcal{H}_{n})=\Theta(\log_{2}n)\). \(\Box\) We conjecture that \(d_{\mathrm{vc}}(\mathcal{H}_{n})=\lfloor\log_{2}(n-2)+1\rfloor\). Our experimental observations in the next section support our conjecture. Restricting to the asymptotic analysis we obtain the bounds for a larger class of functions. **Theorem 4.2**.: _Let \(\mathcal{H}_{n}^{(t)}\) be the hypothesis set containing exclusively the DNFs consisting of \(t\) cubic terms in \(n\) binary variables where \(t\leq n/3\). Then_ \[d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Theta(t\log_{2}n). \tag{10}\] **Proof.** Let \(A(n;3)=2^{3}\times\binom{n}{3}\), the number of cubic monomials in \(n\) binary variables. The number of DNFs in \(\mathcal{H}_{n}^{(t)}\) with \(t\) terms is \(B(n;t,3)=\binom{A(n;3)}{t}\). Since \(A(n;3)=2^{3}\times\binom{n}{3}=\Theta(n^{3})\), \[B(n;t,3)=\frac{A(n;3)(A(n;3)-1)\ldots(A(n;3)-t+1)}{t!}=\Theta(n^{3t}). \tag{11}\] The VC dimension \(d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})\) satisfies \(2^{d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})}\leq B(n;t,3)=\Theta(n^{3t})\). Therefore, \[d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})\leq O(t\log_{2}n). \tag{12}\] Since \(t\leq n/3\), there are \(t\) mutually exclusive subsets of binary variables each of size three. 
The lower bound (8) obtained in Theorem 4.1 implies \[d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Omega(t\log_{2}n). \tag{13}\] Combining (12) and (13) we have \(d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Theta(t\log_{2}n)\). \(\Box\) The significance of Theorem 4.2 is that if we have a data set with \(n\) features, we are assured that the VC dimension \(d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Theta(t\log_{2}n)\). Therefore, we can start learning from this data set using samples of size \(\Theta(t\log_{2}(n))\). Furthermore, the upper bound given in (4) implies that if the VC dimension is finite then the growth function \(m_{\mathcal{H}}(N)=O(N^{d_{\mathrm{vc}}})\). Therefore, by (5) \[E_{\mathrm{out}}(g)-E_{\mathrm{in}}(g)\leq\sqrt{\frac{8}{N}\ln\frac{4m_{ \mathcal{H}}(2N)}{\delta}}\leq\sqrt{\frac{8}{N}\ln\frac{k(2N)^{d_{\mathrm{vc}} }}{\delta}} \tag{14}\] for some positive constant \(k\). This implies that for a hypothesis class with a finite VC dimension, the in-sample error is an accurate approximation of the out-of sample error for large enough training size, \(N\). As mentioned in (9, page 56), choosing \(N=10\times d_{\text{vc}}\) yields a good enough generalization to the out-of-sample error from the in-sample error. ## 5 Experimental results Since LAD attempts to approximate a pdf, we are considering the approximation of a random Boolean function using cubic Boolean monomials. In particular, we are considering the approximation of a Boolean function \(f:\mathcal{B}^{10}\to\mathcal{B}\) using the hypothesis class \(\mathcal{H}_{10}\) as defined in (6). We conducted an experiment wherein we chose 100 random Boolean functions. For each function \(f\), 50 training sets were sampled as training data sets from the truth table of the Boolean function where each training set was of size \(N\). Hypotheses in \(\mathcal{H}_{10}\) that corresponded to the lowest value of \(E_{\text{in}}\) for each training sample were considered as suitable candidates for approximating \(f\). The corresponding \(E_{\text{out}}\) was calculated from the entire truth table. The algorithm of the experiment is as follows: ``` 1: Generate a random Boolean function \(f:\mathcal{B}^{10}\to\mathcal{B}\) as truth table 2: Sample \(f\) uniformly at random to collect \(N\) samples 3: Calculate the in-sample error on N samples according to Equation 1 for all functions in \(\mathcal{H}_{10}\) 4: Identify the hypothesis function \(g\) with lowest \(E_{\text{in}}(g)\). 5: Calculate \(E_{\text{out}}(g)\), from the truth tables of \(f\) and \(g\). 6: Store the values of in-sample and out-of-sample errors. 7: Go to Step 2: repeat 50 times 8: Go to Step 1: repeat 100 times 9: Plot a histogram to observe the variation in \(E_{\text{out}}(g)-E_{\text{in}}(g)\) ``` **Algorithm 1** Algorithm for the experiment. If in Step 4 of the above algorithm, there are multiple functions having minimum \(E_{\text{in}}\), then all of them are considered for the following step. This was observed to be the case in almost all instances. The experiment described in Algorithm 1 was initially repeated for values around \(N=4\). The reason for this choice is because our conjectured VC dimension of \(\mathcal{H}_{10}\) is given by \(\lfloor\log_{2}(10-2)+1\rfloor=4\), and the given values will enable us to observe the connection between VC dimension and the extent to which learning is possible in the given experiment. 
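A single pass of Algorithm 1 (steps 1-6) can be sketched as follows; the random-sampling details and the arbitrary tie-breaking among hypotheses with minimal \(E_{\text{in}}\) are simplifications of the procedure described above.

```python
import itertools
import random


def run_once(n=10, N=4, seed=0):
    """One pass of steps 1-6 of Algorithm 1: draw a random Boolean function on B^n,
    fit the best cubic monomial on an N-point sample by in-sample error, and
    return E_out(g) - E_in(g)."""
    rng = random.Random(seed)
    inputs = list(itertools.product((0, 1), repeat=n))
    truth = {x: rng.randint(0, 1) for x in inputs}   # step 1: random target f as a truth table
    sample = rng.sample(inputs, N)                   # step 2: N sample points

    def monomial(x, idx, signs):
        # x_i^a evaluates to 1 exactly when x_i == a, so the cubic term is 1 iff all three match
        return int(all(x[i] == a for i, a in zip(idx, signs)))

    best = None                                      # steps 3-4: minimise E_in over H_10
    for idx in itertools.combinations(range(n), 3):
        for signs in itertools.product((0, 1), repeat=3):
            e_in = sum(monomial(x, idx, signs) != truth[x] for x in sample) / N
            if best is None or e_in < best[0]:
                best = (e_in, idx, signs)            # ties broken arbitrarily in this sketch

    e_in, idx, signs = best
    e_out = sum(monomial(x, idx, signs) != truth[x] for x in inputs) / len(inputs)  # step 5
    return e_out - e_in


print(run_once())
```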
The same experiment was then run for the values \(N=10,20,40,60\); this was done to observe the relation between \(E_{\text{out}}\) and \(E_{\text{in}}\) in the \(10\times d_{\text{vc}}\) regime and to confirm whether the in-sample error is indeed a good approximation of the out-of-sample error. Since we are attempting to approximate randomly generated Boolean functions \(f\), the average value of \(E_{\mathrm{out}}\) is going to be \(0.5\): a randomly generated Boolean function evaluates to \(0\) or \(1\) with equal probability at every input value. Therefore, the experiment is not going to yield good approximations of \(f\). This is fine, as we are concerned with observing the connection between the in-sample and out-of-sample errors as the sample size \(N\) increases.

The results of the initial run of the experiment are given in Figure 1. In the cases where the sample size \(N\) is below \(d_{\mathrm{vc}}=4\), \(E_{\mathrm{out}}-E_{\mathrm{in}}\) is around \(0.5\) in the vast majority of cases. For small sample sizes it is possible to find a large number of hypotheses with near-zero \(E_{\mathrm{in}}\), but many of these hypotheses are invariably poor approximations, and therefore the in-sample error generalizes very poorly to the out-of-sample error. The situation changes as we reach \(N=4\), the (conjectured) VC dimension for this problem: there are now some cases where \(E_{\mathrm{out}}-E_{\mathrm{in}}<0.5\), so \(E_{\mathrm{in}}\) is a relatively better estimate of \(E_{\mathrm{out}}\). The situation improves further as one moves beyond the VC dimension at \(N=5\).

Figure 1: Histograms showing the distribution of \(E_{\mathrm{out}}(g)-E_{\mathrm{in}}(g)\) in the neighbourhood of \(d_{\mathrm{vc}}\).

The results of the experiment for the larger values of \(N\) are given in Figure 2: lower values of \(E_{\mathrm{out}}-E_{\mathrm{in}}\) now occur with greater frequency. This enables us to establish confidence intervals for the difference between the two errors, which means that we are in the regime of probably approximately correct (PAC) learning. Therefore, one can state the probability with which \(E_{\text{in}}(g)\) accurately estimates the out-of-sample error for the functions belonging to \(\mathcal{H}_{10}\). This serves as an elementary illustration that learning becomes feasible as the sample size \(N\) increases beyond the VC dimension. It should be noted that increasing the sample size beyond a point does not increase the overall accuracy of the approximation. This can be seen by reading off the values of the average in-sample error from Table 1 and observing the corresponding plots in Figure 1 and Figure 2.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline Sample Size (\(N\)) & 2 & 3 & 4 & 5 & 10 & 20 & 40 & 60 \\ \hline Avg. in-sample error (\(E_{\text{in}}\)) & 0.0072 & 0.0216 & 0.0425 & 0.0662 & 0.1853 & 0.2888 & 0.3478 & 0.3752 \\ \hline \end{tabular} \end{table} Table 1: Values of the average in-sample error for different sample sizes.

Figure 2: Histograms showing the distribution of \(E_{\text{out}}(g)-E_{\text{in}}(g)\) for larger values of \(N\).

## 6 Conclusion

Logical Analysis of Data (LAD) as proposed by Peter L. Hammer demonstrates significantly accurate results by fitting Boolean functions to the training set. However, we have not found any research on incorporating LAD into the PAC learning framework, and we initiate such an effort in this article. We believe that research in this direction will help in characterizing the cases when LAD can be used as a feasible learning algorithm. The methods presented here may also let us construct provably unlearnable Boolean functions.
Logical Analysis of Data (LAD) is a technique that uses Boolean functions to build two-class classifiers based on logical separability. Although the LAD algorithm employs optimization methods, the resulting binary classifiers and binary rules do not lead to overfitting. To give a theoretical explanation for this absence of overfitting, an essential property of LAD models, the VC dimension is estimated using the DNF representation of the hypothesis set of LAD models. This DNF representation of the hypothesis set yields particularly interesting results when the number of small cubic monomials is small. We confirm these observations through empirical experiments.
2309.08368
Robust Burned Area Delineation through Multitask Learning
In recent years, wildfires have posed a significant challenge due to their increasing frequency and severity. For this reason, accurate delineation of burned areas is crucial for environmental monitoring and post-fire assessment. However, traditional approaches relying on binary segmentation models often struggle to achieve robust and accurate results, especially when trained from scratch, due to limited resources and the inherent imbalance of this segmentation task. We propose to address these limitations in two ways: first, we construct an ad-hoc dataset to cope with the limited resources, combining information from Sentinel-2 feeds with Copernicus activations and other data sources. In this dataset, we provide annotations for multiple tasks, including burned area delineation and land cover segmentation. Second, we propose a multitask learning framework that incorporates land cover classification as an auxiliary task to enhance the robustness and performance of the burned area segmentation models. We compare the performance of different models, including UPerNet and SegFormer, demonstrating the effectiveness of our approach in comparison to standard binary segmentation.
Edoardo Arnaudo, Luca Barco, Matteo Merlo, Claudio Rossi
2023-09-15T12:49:17
http://arxiv.org/abs/2309.08368v1
# Robust Burned Area Delineation through Multitask Learning + ###### Abstract In recent years, wildfires have posed a significant challenge due to their increasing frequency and severity. For this reason, accurate delineation of burned areas is crucial for environmental monitoring and post-fire assessment. However, traditional approaches relying on binary segmentation models often struggle to achieve robust and accurate results, especially when trained from scratch, due to limited resources and the inherent imbalance of this segmentation task. We propose to address these limitations in two ways: first, we construct an ad-hoc dataset to cope with the limited resources, combining information from Sentinel-2 feeds with Copernicus activations and other data sources. In this dataset, we provide annotations for multiple tasks, including burned area delineation and land cover segmentation. Second, we propose a multitask learning framework that incorporates land cover classification as an auxiliary task to enhance the robustness and performance of the burned area segmentation models. We compare the performance of different models, including UPerNet and SegFormer, demonstrating the effectiveness of our approach in comparison to standard binary segmentation. Keywords:Remote Sensing Computer Vision Semantic Segmentation. ## 1 Introduction In recent years, wildfire events have become a recurring major problem, due to their increasing frequency and severity. These events have serious environmental and socio-economic impacts: with potential to cause extensive damage to forests, wildlife habitats, and even human lives. For this reason, understanding and effectively managing wildfires represents a crucial task for first responders and decision makers. Accurate and reliable delineation of burned areas is therefore essential for various applications, including environmental monitoring and post-fire assessment. Traditional approaches for burned area delineation often rely on binary segmentation models trained from scratch. However, these models may struggle to achieve accurate and robust results, due to the limitations of the underlying data. First, resources specifically tailored for this task remain particularly scarce, often lacking large and diverse datasets for an effective training. Second, burned area segmentation is an inherently unbalanced problem, as the extent of burned areas is often significantly smaller compared to non-burned regions in the input imagery. This imbalance usually hinders the generalization abilities of the models, when applied in different scenarios. Furthermore, existing datasets used for burned area delineation are often lacking in terms of surface covered [8] or diversity [17]. These shortcomings may hinder the ability of the models to generalize effectively, underlining the need for more comprehensive and varied data sources to enhance model performance and real-world applicability. To address these limitations, we first construct an ad-hoc dataset, specifically tailored for the task of burned area segmentation, cross-referencing information from the Copernicus European Monitoring System (EMS) with Sentinel-2 feeds and other relevant sources. This dataset provides a comprehensive set of samples, with a focus on the European soil, including annotations for two different tasks: burned area delineation, and land cover segmentation. Second, we propose a multitask learning framework that leverages land cover classification as an auxiliary task. 
By incorporating this information into the learning process, we aim to improve the robustness and performance of the model on the burned area segmentation task. We compare the performance of different models, including UPerNet [23] and SegFormer [24], demonstrating the effectiveness of our approach against classic binary segmentation training in several configurations, including with and without bootstrapping from pretrained weights. Dataset and code related to this work are available at github.com/links-ads/burned-area-seg.

The remainder of this paper is structured as follows. Section 2 reviews related works, Section 3 describes the construction of the multitask dataset for burned area delineation, while Section 4 presents the proposed multitask learning framework and its components. Section 5 details the experimental setup and discusses the obtained results. Lastly, Section 6 concludes the manuscript, suggesting potential future directions.

## 2 Related Works

### Aerial Semantic Segmentation

Considering remote sensing and aerial images, semantic segmentation plays a crucial role in various applications, including urban planning [13], land cover monitoring [2], and crisis management [5]. Existing semantic segmentation methods typically rely on convolutional encoder-decoder architectures (CNNs), with different variants to capture both the global context and the finer details of the scene. Approaches such as Fully Convolutional Networks (FCN) [12] and U-Nets [19] make use of bottleneck components to encode pixel information into semantically meaningful vectors, coupled with skip connections to integrate lower-level features. Other solutions involve multiscale feature extraction and fusion, such as DeepLab [6] and PSPNet [25], where inputs are processed with varying kernel sizes and dilations to capture local and global context at once. Subsequent variants often combine these concepts to provide more robust features [6, 23].

Segmenting aerial images introduces several specific challenges that often require tailored solutions. Unlike other settings, satellite data often provides multiple spectra beyond the visible bands. These can be integrated in multiple ways, such as additional input channels [20] or ad-hoc encoders for feature fusion [21]. Moreover, aerial images are often denser, containing several entities against complex backgrounds, with wider spatial relationships. To address this, attention components are commonly employed to better model long-distance similarities among pixels [16]. Transformer architectures and their segmentation variants [24] thus become a natural choice in this case, given their inherent ability to extract long-range relations.

### Burned Area Delineation

Over the years, numerous techniques have been proposed to delineate burned areas from remote sensing data. Standard approaches make use of several spectral indices to discern burned soil from unburned areas using a combination of multiple bands. The _de facto_ standard is represented by the Normalized Burn Ratio (NBR) [7] and the difference NBR (dNBR) [14], which are often used in combination with other indices [9]. Other variants have been developed to better adapt to specific satellite feeds, such as the Burned Area Index for Sentinel-2 (BAIS) [10]. However, these approaches are usually noisy and require further manual processing to produce clean results. In some cases, such as the dNBR, a pre-wildfire image is also needed to compare the same regions before and after the event.
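As a point of reference, the sketch below shows how NBR and dNBR are typically computed from Sentinel-2 reflectance; this is a generic illustration of the indices discussed above, not code from the paper, and it assumes the usual band choice of B8 (NIR) and B12 (SWIR) together with a purely illustrative dNBR threshold.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + eps)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Difference NBR: NBR before the fire minus NBR after the fire."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Example usage on Sentinel-2 L2A reflectance arrays (B8 = NIR, B12 = SWIR):
# burned_mask = dnbr(b8_pre, b12_pre, b8_post, b12_post) > 0.27  # threshold is illustrative only
```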
In the last decades, machine and deep learning techniques obtained promising results, reducing the manual effort while obtaining more robust tools. Supervised classification algorithms, such as Support Vector Machines (SVM) and Random Forests (RF), have been widely employed for burned area mapping [18, 11]. Acting on a per-pixel basis, these approaches remain effective on lower resolution feeds such as MODIS, however their lack of contextual information may result in suboptimal results on higher resolution feeds such as Sentinel-2 [11]. Recently, convolutional networks have been successfully employed to produce robust results on this task, especially considering post-wildfire images only. U-net segmentation architectures [15, 11] represent the standard approach, however Transformer-based architectures have demonstrated their effectiveness in several remote sensing scenarios [20], including burned area segmentation [5]. ## 3 Dataset To carry out this work, a crucial initial step involved the construction of a custom dataset specifically tailored for multitask learning, focusing on wildfire events. Expanding on similar works in this field [8], our dataset contains 171 fire events derived from Copernicus EMS3. For each Area Of Interest of the event, we provide (i) the Sentinel-2 satellite imagery, (ii) a burned area annotation, derived from EMS (iii) a land cover map, derived from ESA WorldCover, and (iv) a cloud mask computed on the remote sensing input. Footnote 3: [https://emergency.copernicus.eu/](https://emergency.copernicus.eu/) ### Data sources We gather all the available large wildfire events in recent years from the catalog of the Copernicus EMS, an integral part of the Copernicus program launched by the European Union. Within this open service, the Rapid Mapping module plays a crucial role by providing a curated set of Areas of Interest (AoI) associated with each event, where each crisis event has been carefully analyzed, and its delineation has been manually generated by a team of experts. Every AoI may provide three distinct manual annotations, named products: the First Estimate Product (FEP), the Delineation Product, and the Grading Product. The FEP consists of preliminary information about the affected territory and event, facilitating initial emergency response efforts. On the other hand, the delineation and grading products offer a more accurate and comprehensive label regarding the extent of the event and the assessment of damages. Following the geographical coordinates and the time of the event associated with each AoI, we download the corresponding satellite images from the Sentinel-2 mission, that serve as input to the deep learning algorithms. Sentinel-2 captures data across 12 spectral bands with varying resolutions, ranging from 10 to 60 meters. In this study, we focus on the L2A product, which transforms the reflectance into Bottom-of-Atmosphere (BoA) values through atmospheric correction. In addition to the satellite imagery and the burned area delineation maps, we also incorporate land cover data on the same area as an auxiliary target, exploiting the ESA World Cover dataset [1]. This resource offers annual maps for the years 2020 and 2021, featuring 11 distinct classes, including trees, shrubland, grassland, built-up areas, areas with sparse vegetation, water bodies and other surfaces. 
By integrating more generic land cover information, we aim to enrich the semantic segmentation process with a broader understanding of the landscape dynamics, enhancing the robustness and contextual accuracy of our model.

### Data preparation

We download each available EMS activation, with the aim of maximizing the number of samples with a valid input image and corresponding ground-truth labels. For each fire event we gather several details, including the event date, the geographical coordinates defining the bounding box of the affected area, and the corresponding delineation and grading maps. However, we note that there may be cases where the delineation, the grading, or both maps are unavailable. In such instances, given its higher quality, we keep only those areas with a valid grading, and we generate the corresponding delineation map by performing a standard binarization over the burn severity values.

Starting from the remaining processed activations, we retrieve the corresponding post-fire Sentinel-2 images, exploiting the SentinelHub services4. Given the input requirements of the models, we force each image to have a minimum dimension of 512 pixels on each side, expanding the smaller regions until this requirement is satisfied for every AoI. At the same time, we split areas larger than \(2,500\times 2,500\) pixels into multiple subsections for practical use. We sample and rasterize each image with a resolution of 10m per pixel, the maximum provided by Sentinel-2, upscaling the lower resolution bands with nearest neighbor interpolation. To maximize the number of clear images, without smoke or large clouds, we consider a time frame of up to 30 days following the reported event date, selecting the satellite acquisition with the least cloud coverage. Despite these precautions, it is not uncommon to observe clouds in the final image samples: for this reason, we further process the images using a cloud segmentation model, derived from CloudSen12 [3], generating a validity map. This additional mask is then applied during training, excluding every pixel covered by clouds from the loss computation. For the corresponding land cover maps, we retrieve the required raster layers from the ESA World Cover database, available via the Microsoft Planetary Computer 5. No further processing is applied to the labels, except for a direct remapping from the original ESA taxonomy to a contiguous list of categories indexed from 0. A value of 255 is further assigned to pixels missing their specific category.

Footnote 4: [https://www.sentinel-hub.com/](https://www.sentinel-hub.com/)

Footnote 5: [https://planetarycomputer.microsoft.com/](https://planetarycomputer.microsoft.com/)

The final dataset comprises a collection of 433 samples, spanning from 2017 to the first months of 2023. The events are predominantly concentrated in Europe, with select events occurring in Australia and on the American continent. Given the same source, our collection effectively represents an extension of previous datasets [8]. For this reason, we dedicate every activation already present in previous works to testing purposes, training on the remaining events. This allows for easier comparisons with prior results, while also serving as a benchmark for assessing the generalizability and performance of our proposed approach.

Figure 1: Distribution of fire events contained in our dataset divided into train (red), validation (blue), and test (orange), on a worldwide scale (left) and at European level (right).
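As an illustration of the label preparation just described, the snippet below remaps ESA WorldCover codes to contiguous indices starting from 0 and marks cloudy or unknown pixels with 255 so that they can be ignored during training. The WorldCover codes follow the public ESA taxonomy; everything else (names, the boolean cloud mask input) is an assumption made for the sake of the example, not the paper's actual pipeline.

```python
import numpy as np

# ESA WorldCover classes (public taxonomy) remapped to contiguous ids 0..10.
WORLDCOVER_TO_CONTIGUOUS = {10: 0, 20: 1, 30: 2, 40: 3, 50: 4, 60: 5,
                            70: 6, 80: 7, 90: 8, 95: 9, 100: 10}
IGNORE_INDEX = 255  # missing category or cloud-covered pixel

def prepare_landcover_target(worldcover: np.ndarray, cloud_mask: np.ndarray) -> np.ndarray:
    """Remap raw WorldCover codes and mark invalid pixels with IGNORE_INDEX."""
    target = np.full(worldcover.shape, IGNORE_INDEX, dtype=np.uint8)
    for raw_code, contiguous_id in WORLDCOVER_TO_CONTIGUOUS.items():
        target[worldcover == raw_code] = contiguous_id
    target[cloud_mask] = IGNORE_INDEX  # pixels flagged as cloudy are excluded from the loss
    return target
```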
## 4 Methodology

### Problem statement

The problem at hand involves developing a multitask learning framework for burned area delineation, exploiting land cover classification as an auxiliary target to guide the training. We have access to a delineation map (\(y_{D}\)) and a land cover map (\(y_{LC}\)) as ground truth labels. We employ models composed of a single encoder and a single decoder with two classification heads, namely \(h_{D}\) and \(h_{LC}\). The objective is to simultaneously train the model \(f_{\theta}\), with parameters \(\theta\), to predict accurate burned area delineations (\(\hat{y}_{D}\)), while also training on land cover classification (\(\hat{y}_{LC}\)) using the shared representations \(\phi_{\theta}\). The shared architecture with two standard classification heads enables the model to learn from both tasks jointly.

### Framework and models

Our approach is shown in Fig. 2. To train the full model \(f_{\theta}\), we simultaneously predict burned area delineation and land cover segmentation using the shared representations from the decoder stage \(\phi_{\theta}\). These enable the model to capture and leverage common patterns and features between the two tasks, which may help in improving the segmentation outcome. Throughout the training process, we employ a standard Cross Entropy loss, in its binary and multi-class variants respectively. The gradients derived from both tasks are jointly propagated back to update the model's parameters. At test time, we drop the auxiliary head, focusing only on the burned area delineation performance, through standard binary segmentation.

Figure 2: Multitask learning framework: the decoder features are shared with the auxiliary head \(h_{LC}\) for joint training. The auxiliary head is dropped at test time.

To provide a comprehensive overview and compare standard convolutional networks with vision transformers, we explore three different architectures: two UPerNet [23] variants, using a Residual Network (ResNet) and a Vision Transformer (ViT) as encoders respectively, and SegFormer [24]. Thanks to its unified perceptual parsing structure, UPerNet provides the flexibility to use both standard CNNs and recent transformer-based solutions. This allows for a better comparison between the two architectures. On the other hand, SegFormer represents an alternative end-to-end solution which demonstrated its effectiveness on aerial tasks, including burned area delineation [5, 20].

## 5 Experiments

### Implementation details

As mentioned in Section 3, we train our models on the subset of activations that are not present in previous datasets [8], considering the remaining ones as our test set. We further extract 10% of the activations from our training set for validation purposes, obtaining a total of 129 wildfire events for training, 15 for validation, and 27 for testing. To cope with the varying image dimensions, we implement a random sampling strategy that extracts square crops of \(512\times 512\) pixels from random image sections at runtime during training. For validation and testing, we instead adopt a sequential sampling strategy with overlapping tiles, reconstructing the original inputs by means of smooth blending using splines. We consider two groups of experiments: first, we only focus on burned area delineation, as a single training task. Second, we conduct multitask training, using both delineation and land cover maps. In the latter case, we further mask out the burned pixels from the annotation, to avoid inconsistent labels.
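To make the setup of Section 4 concrete, the following is a minimal PyTorch-style sketch of a shared encoder-decoder with the two heads \(h_{D}\) and \(h_{LC}\) and the joint loss. The generic convolutional blocks stand in for the UPerNet/SegFormer backbones, and the 12-channel Sentinel-2 input, the use of 255 as ignore index, and all names are assumptions for illustration rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskSegmenter(nn.Module):
    """Shared encoder/decoder f_theta with a delineation head h_D and an auxiliary land cover head h_LC."""
    def __init__(self, in_channels: int = 12, feat_channels: int = 64, n_landcover: int = 11):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for the UPerNet/SegFormer backbone
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(feat_channels, feat_channels, 3, padding=1)  # shared features phi_theta
        self.head_delineation = nn.Conv2d(feat_channels, 1, 1)                # h_D: binary logits
        self.head_landcover = nn.Conv2d(feat_channels, n_landcover, 1)        # h_LC: dropped at test time

    def forward(self, x):
        feats = self.decoder(self.encoder(x))
        return self.head_delineation(feats), self.head_landcover(feats)

def multitask_loss(logits_d, logits_lc, y_d, y_lc, valid_mask, ignore_index=255):
    """Binary CE on burned areas (cloud pixels excluded via valid_mask) plus multi-class CE on land cover."""
    bce = F.binary_cross_entropy_with_logits(logits_d.squeeze(1), y_d.float(), reduction="none")
    loss_d = (bce * valid_mask).sum() / valid_mask.sum().clamp(min=1)
    loss_lc = F.cross_entropy(logits_lc, y_lc, ignore_index=ignore_index)  # 255 marks masked/unknown pixels
    return loss_d + loss_lc
```

At inference time only the delineation head would be used, matching the description above of dropping the auxiliary head at test time.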
For both scenarios, we train three architectures: UPerNet with two different encoders (i.e., ResNet-50 and ViT-S) and SegFormer with MiT-B3 as encoder. Moreover, we investigate the impact of using pretrained weights on the backbones in both configurations. We exploit pretrained weights derived from large-scale pretraining on SSL4EO-S12 [22] for ResNet and ViT, in the RN50 and ViT-S variants, while we adopt weights pretrained on ImageNet [24] for SegFormer, for lack of better options. We base our code on the _mmsegmentation_6 library, adapting the model inputs to account for the additional channels. In every experiment, we train on a single NVIDIA A100 GPU for 30 epochs, using a batch size of 32 tiles, AdamW as optimizer with a learning rate of 1e-4, and a Cross Entropy loss for both tasks, in binary and multi-class versions respectively. Following similar works [5, 20], we adopt macro-averaged F1 score and Intersection over Union (IoU) as evaluation metrics in every configuration.

### Results

We conduct single-task (STL) and multitask (MTL) training experiments using both pretrained and non-pretrained weights. For each configuration, we perform three separate runs with different seeds, reporting the results in Table 1 as average scores with their corresponding standard deviation. Focusing on the experiments conducted without pretrained weights, the multitask setting consistently achieves superior performance and lower standard deviation compared to the single-task setting. Except for SegFormer, which reports the highest scores in both variants, the multitask approach exhibits a noticeable improvement of \(+3.85\) in terms of F1 score, or \(+5.71\) in terms of IoU, averaged across every model. Furthermore, we note that the results are considerably more stable in the multitask configuration, where the standard deviation decreases by \(-3.51\) (F1) and \(-4.88\) (IoU). This is also shown in Figure 3, where the latter produces more reliable segmentation maps. Considering the experiments with pretrained weights, the disparity between single and multitask performance is no longer apparent, with higher and more stable scores even in the single-task setup. This is expected, as large-scale pretraining has been proven to be effective in several contexts [22]. Nevertheless, multitask training still yields an average overall improvement of +0.73 in F1 score and +1.21 in IoU, regardless of the underlying architecture. Lastly, comparing training from scratch and using pretrained weights, the latter always exhibits higher performance, even more so in the multitask configuration. Specifically, when comparing the top performing models from both training regimes (i.e., SegFormer in the first case, UPerNet-RN50 in the second case), the single-task setting achieves +2.24 in F1 score and +3.72 in IoU, whereas the multitask setting achieves +0.92 in F1 score and +1.56 in IoU. Overall, the results demonstrate the validity of the multitask strategy, exhibiting increased performance robustness, comparable to or even surpassing pretrained solutions in certain instances.

In Table 2 we also compare the computational costs of the STL approach with those of the MTL solution. We observe that the MTL versions, despite including an additional segmentation head, exhibit only a modest increase in training time compared to their single-task learning (STL) counterparts, with a marginal difference of 20 seconds in training speed. Moreover, while MTL models do incur a slight increase in memory usage, this increment remains negligible and does not substantially impact the feasibility of implementation. This is expected, since the MTL setting only effectively adds the parameters of a single pixel classification head, which boils down to a \(1\times 1\) convolutional layer with \(|\phi_{\theta}|\) feature channels as input, and 11 categories as output. Moreover, we note that during inference the auxiliary head is omitted, effectively eliminating any computational overhead associated with the auxiliary task.

\begin{table} \begin{tabular}{|c c|c c|c c|} \hline & & \multicolumn{2}{c|}{**From scratch**} & \multicolumn{2}{c|}{**Pretrained**} \\ \hline **Setting** & **Model** & **F1** & **IoU** & **F1** & **IoU** \\ \hline \multirow{3}{*}{**STL**} & **SegFormer (MiT-B3)** & **89.01\(\pm\) 1.39** & **80.22\(\pm\) 2.25** & 90.79\(\pm\) 0.46 & 83.13\(\pm\) 0.78 \\ & **UPerNet (RN50)** & 82.33\(\pm\) 9.17 & 70.94\(\pm\) 12.63 & **91.27\(\pm\) 0.08** & **83.95\(\pm\) 0.13** \\ & **UPerNet (ViT-S)** & 87.65\(\pm\) 2.01 & 78.08\(\pm\) 3.17 & 89.20\(\pm\) 1.29 & 80.53\(\pm\) 2.09 \\ \hline \multirow{3}{*}{**MTL**} & **SegFormer (MiT-B3)** & **90.94\(\pm\) 0.17** & **83.38\(\pm\) 0.29** & 90.91\(\pm\) 0.28 & 83.34\(\pm\) 0.47 \\ & **UPerNet (RN50)** & 89.82\(\pm\) 1.76 & 81.57\(\pm\) 2.87 & **91.86\(\pm\) 0.30** & **84.94\(\pm\) 0.51** \\ & **UPerNet (ViT-S)** & 89.76\(\pm\) 0.15 & 81.43\(\pm\) 0.25 & 90.69\(\pm\) 0.58 & 82.98\(\pm\) 0.97 \\ \hline \end{tabular} \end{table} Table 1: Experimental results in single-task (STL) and multitask (MTL) training, comparing models trained from scratch or using pretrained encoders.

\begin{table} \begin{tabular}{|c c|c|c|} \hline **Setting** & **Model** & **Training time (1 Ep.)** & **Param. (M)** \\ \hline \multirow{3}{*}{**STL**} & **SegFormer (MiT-B3)** & 3h28m & 44,6 \\ & **UPerNet (RN50)** & 3h20m & 64,1 \\ & **UPerNet (ViT-S)** & 3h20m & 57,9 \\ \hline \multirow{3}{*}{**MTL**} & **SegFormer (MiT-B3)** & 3h40m (+12m) & 44,6 \\ & **UPerNet (RN50)** & 3h50m (+30m) & 64,1 \\ & **UPerNet (ViT-S)** & 3h30m (+10m) & 57,9 \\ \hline \end{tabular} \end{table} Table 2: Analysis of the computational costs in terms of training time over one epoch, as the average of three epochs, and total network parameters. While the training time increases by a small margin, the parameter increase is effectively negligible given the shared encoder-decoder structure.

Figure 3: Qualitative examples derived from UPerNet-RN50: a) Sentinel-2 input; b) Single task and c) Multitask from scratch; d) Single task and e) Multitask with pretrained weights; f) ground truth. Red pixels represent prediction errors.

## 6 Conclusion

In this work, we propose a multitask approach for burned area delineation, exploiting land cover classification as an auxiliary target. Results show that the devised solutions yield more stable and robust performances, comparable to or even superior to pretrained solutions. Multitask learning offers promising results, especially in the absence of pre-trained solutions. Despite the robust performance, the current multitask approach presents some limitations: first, the performance of the models relies heavily on the quality of the annotations of both tasks.
Second, the improved robustness comes at the cost of additional computational complexity, which may limit the scalability. Moreover, we recognize the need to delve deeper into the impact of task characteristics and explore a wider array of auxiliary tasks for a more comprehensive multi-task learning approach, including for instance multiple training objectives to further enhance scalability and generalization capabilities of the models. Future studies may therefore focus on improving the multitask capabilities by integrating multiple heterogeneous tasks at the same time [4], or may consider more computationally demanding approaches such as large-scale self-supervised learning, to generate better pretrained solutions and thus translate these downstream tasks in simpler and faster fine-tuning objectives.
In recent years, wildfires have become a significant challenge due to their increasing frequency and severity. For this reason, accurate delineation of burned areas is essential for environmental monitoring and post-fire assessment. However, traditional approaches that rely on binary segmentation models struggle to achieve robustness and accuracy, especially when trained from scratch, because of limited resources and the inherent imbalance of this segmentation task. We address these limitations in two ways. First, to cope with the limited resources, we build a dedicated dataset by combining Sentinel-2 data with Copernicus activations and other data sources. In this dataset, we provide annotations for multiple tasks, including burned area delineation and land cover segmentation. Second, we propose a multitask learning framework that incorporates land cover classification as an auxiliary task. This framework enhances the robustness and performance of the burned area segmentation models.
2309.10527
SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations
Annotating 3D LiDAR point clouds for perception tasks is fundamental for many applications e.g., autonomous driving, yet it still remains notoriously labor-intensive. Pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets as well as tasks. In this paper, we propose SPOT, namely Scalable Pre-training via Occupancy prediction for learning Transferable 3D representations under such a label-efficient fine-tuning paradigm. SPOT achieves effectiveness on various public datasets with different downstream tasks, showcasing its general representation power, cross-domain robustness and data scalability which are three key factors for real-world application. Specifically, we both theoretically and empirically show, for the first time, that general representations learning can be achieved through the task of occupancy prediction. Then, to address the domain gap caused by different LiDAR sensors and annotation methods, we develop a beam re-sampling technique for point cloud augmentation combined with class-balancing strategy. Furthermore, scalable pre-training is observed, that is, the downstream performance across all the experiments gets better with more pre-training data. Additionally, such pre-training strategy also remains compatible with unlabeled data. The hope is that our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
Xiangchao Yan, Runjian Chen, Bo Zhang, Hancheng Ye, Renqiu Xia, Jiakang Yuan, Hongbin Zhou, Xinyu Cai, Botian Shi, Wenqi Shao, Ping Luo, Yu Qiao, Tao Chen, Junchi Yan
2023-09-19T11:13:01
http://arxiv.org/abs/2309.10527v3
# SPOT: Scalable 3D Pre-training via Occupancy Prediction for Autonomous Driving ###### Abstract Annotating 3D LiDAR point clouds for perception tasks including 3D object detection and LiDAR semantic segmentation is notoriously time-and-energy-consuming. To alleviate the burden from labeling, it is promising to perform large-scale pre-training and fine-tune the pre-trained backbone on different downstream datasets as well as tasks. In this paper, we propose SPOT, namely Scalable **P**re-training via **O**ccupancy prediction for learning **T**ransferable 3D representations, and demonstrate its effectiveness on various public datasets with different downstream tasks under the label-efficiency setting. Our contributions are threefold: (1) Occupancy prediction is shown to be promising for learning general representations, which is demonstrated by extensive experiments on plenty of datasets and tasks. (2) SPOT uses beam re-sampling technique for point cloud augmentation and applies class-balancing strategies to overcome the domain gap brought by various LiDAR sensors and annotation strategies in different datasets. (3) Scalable pre-training is observed, that is, the downstream performance across all the experiments gets better with more pre-training data. We believe that our findings can facilitate understanding of LiDAR point clouds and pave the way for future exploration in LiDAR pre-training. Codes and models will be released. ## 1 Introduction Light Detection And Ranging (LiDAR), which emits and receives laser beams to accurately estimate the distance between the sensor and objects, serves as one of the most important sensors in outdoor scenes, especially for autonomous driving. The return of LiDAR is a set of points in the 3D space, each of which contains location (the XYZ coordinates) and other information like intensity and elongation. Taking these points as inputs, 3D perception tasks like 3D object detection and semantic segmentation aim to predict 3D bounding boxes or per-point labels for different objects including cars, pedestrians, cyclists, and so on, which are prerequisites for downstream safety control tasks. In the past few years, research on learning-based 3D perception methods (Yan et al., 2018; Yin et al., 2021; Shi et al., 2020, 2023; Zhu et al., 2021; Zhang et al., 2023) flourishes and achieves unprecedented performance on different published datasets (Geiger et al., 2012; Behley et al., 2019; Mao et al., 2021; Caesar et al., 2020; Sun et al., 2020). However, these learning-based methods are **data-hungry** and it is notoriously time-and-energy-consuming to label 3D point clouds. On the contrary, large-scale pre-training and fine-tuning with fewer labels in downstream tasks serves as a promising solution to improve the performance in label-efficiency setting. Previous methods can be divided into two streams: (1) Embraced by AD-PT (Yuan et al., 2023), semi-supervised pre-training achieves strong performance gain when using fewer labels but limited to specific task like 3D object detection. (2) Other works including GCC-3D (Liang et al., 2021), STRL (Huang et al., 2021), BEV-MAE (Lin and Wang, 2022), CO3 (Chen et al., 2022) and MV-JAR (Xu et al., 2023) utilize unlabeled data for pre-training. This branch of work fails to generalize across datasets with different LiDAR sensors and annotation strategies, as shown in Fig. 1b. 
To learn general representations on both **task-level** and **dataset-level**, we propose SPOT, namely **S**calable **P**re-training via **O**ccupancy prediction for learning **T**ransferable representation. Firstly, we argue that occupancy prediction serves as a more general pre-training task for task-level generalization, as compared to 3D object detection and LiDAR semantic segmentation. The reason lies in that occupancy prediction is based on denser voxel-level labels with abundant classes, which incorporates spatial information similar to 3D object detection as well as semantic information introduced in semantic segmentation. Secondly, as the existing datasets use LiDAR sensors with various numbers of laser beams and different category annotation strategies, we propose to use beam re-sampling for point cloud augmentation and class-balancing strategies to overcome these domain gaps. Beam re-sampling augmentation simulates LiDAR sensors with different numbers of laser beams to augment point clouds from a single source pre-training dataset, alleviating the domain gap brought by LiDAR types. Class-balancing strategies apply balance sampling on the dataset and category-specific weights on the loss functions to narrow down the annotation gap. Last but not least, we observe that more pre-training data bring better downstream performance towards different tasks. This indicates that SPOT is a scalable pre-training method for LiDAR point clouds, which paves the way for future large-scale 3D representation learning in autonomous driving. Our contributions can be summarized into three aspects: (1) SPOT demonstrates that occupancy prediction is a promising pre-training method for general and scalable 3D representation learning on LiDAR point cloud. (2) Beam re-sampling augmentation and class-balancing strategies are useful in narrowing domain gaps introduced by different LiDAR sensors and annotation strategies. (3) Extensive experiments are conducted on different 3D perception tasks (3D object detection and semantic segmentation) and various datasets including Waymo (Sun et al., 2020), nuScenes (Caesar et al., 2020), ONCE (Mao et al., 2021), KITTI (Geiger et al., 2012), and SemanticKITTI Behley et al. (2019) to demonstrate the effectiveness of SPOT. As shown in Fig. 1, SPOT (a) continuously improves the downstream performance as more pre-training data are used, and (b) learns general representations as compared to previous pre-training methods. ## 2 Related Work **LiDAR 3D Perception.** There are two main tasks on LiDAR point clouds: 3D object detection and LiDAR semantic segmentation, both of which are essential for scene understanding and control tasks. Current LiDAR 3D detectors can be divided into three main classes based on the 3D backbone in the architectures. (1) Point-based 3D detector embeds point-level features to predict 3D bounding boxes, which is embraced by PointRCNN (Shi et al., 2019). (2) Voxel-based 3D detector divides the surrounding environment of the autonomous vehicle into 3D voxels and uses sparse convolution or transformer-based encoder to generate voxel-level features for detection heads. Second (Yan et al., Figure 1: (a) SPOT pre-trains the 3D and 2D backbones and achieves scalable performance improvement across various datasets and tasks in label-efficient setting. Different colors indicate different amounts of pre-training data. (b) SPOT delivers the best performance on various datasets and tasks among different pre-training methods. “ K. (det) ”, “ N. (det) ”, “ W. 
(det) ” are abbreviations for KITTI, nuScenes, and Waymo detection tasks, while “ S.K. (seg) ” and “ N. (seg) ” are abbreviations for SemanticKITTI, and nuScenes segmentation tasks, respectively. 2018) and CenterPoint (Yin et al., 2021) are popular and SOTA voxel-based 3D detectors. (3) Point-and-voxel-combined method like PV-RCNN (Shi et al., 2020) and PV-RCNN++ (Shi et al., 2023) utilize both voxel-level and point-level features. For LiDAR semantic segmentation task, the goal is to predict a category label for each point in the LiDAR point clouds. Cylinder3D (Zhu et al., 2021), the pioneering work on this task, proposes to first apply the 3D backbone to embed the voxel-level features and then a decoder for final semantic label predictions. All these methods are data-hungry and labeling for 3D point clouds is time-and-energy-consuming. In this work, we explore large-scale pre-training for label-efficiency setting on LiDAR point clouds. **Label-efficient Training for LiDAR 3D Perception.** There are two promising ways to improve the performance of LiDAR 3D detectors with fewer real labels. The first one, embraced by pseudo-labeling (Caine et al., 2021; Yuan et al., 2023b) and its follow-up works (Qi et al., 2021; Xu et al., 2021; Wu et al., 2022) is semi-supervised learning that utilizes both fewer labeled data and a large amount of unlabeled data. This branch of methods requires collecting a huge amount of data, and the cost of collecting data cannot be neglected. The second way is to use large-scale pre-training and fine-tune the pre-trained backbones on different downstream datasets with fewer labels. AD-PT (Yuan et al., 2023a) is the representative work for semi-supervised pre-training for 3D detection on LiDAR point cloud and demonstrates strong performance gain when using fewer labels. Other works including GCC-3D (Liang et al., 2021), STRL (Huang et al., 2021), CO3 (Chen et al., 2022), BEV-MAE (Lin and Wang, 2022) and MV-JAR (Xu et al., 2023) utilize unlabeled data for pre-training. Methods in this branch still suffer from either the limited downstream tasks (AD-PT) or failures to generalize across different LiDAR sensors. In this work, we propose SPOT to pre-train the 3D backbone for LiDAR point clouds and improve performance in different downstream tasks with various sensors and architectures, as shown in Fig. 1. **Semantic Occupancy Prediction.** The primary objective is to predict whether a voxel in 3D space is free or occupied as well as the semantic labels for the occupied ones, which enables a comprehensive and detailed understanding of the 3D environment. Inspired by MonoScene (Cao and de Charette, 2022), VoxFormer (Li et al., 2023), TPVFormer (Huang et al., 2023), JS3C-Net (Yan et al., 2021) and SCPNet (Xia et al., 2023), deep learning methods achieve unprecedented performance gains on this task. However, these methods are specially designed for semantic occupancy prediction task and fail to learn general representations for different 3D perception tasks, such as object detection and semantic segmentation. In this paper, SPOT is proposed to use 3D semantic occupancy prediction to learn a unified 3D scene representation for various downstream tasks including 3D object detection and LiDAR semantic segmentation. ## 3 Method We discuss the proposed SPOT in detail. As shown in Fig. 2, SPOT contains four parts: (a) Augmentations on LiDAR point clouds. 
(b) Encoder for LiDAR point clouds to generate BEV features, which are pre-trained and used for different downstream architectures and tasks. (c) Decoder to predict occupancy based on BEV features. (d) Loss function with class-balancing strategy. We first introduce the problem formulation as well as the overall pipeline in Sec. 3.1. Then we respectively discuss beam re-sampling augmentation and class-balancing strategies in Sec. 3.2 and Sec. 3.3. ### Problem Formulation and Pipeline **Notation.** To start with, we denote LiDAR point clouds \(\mathbf{P}\in\mathbb{R}^{N\times(3+d)}\) as the concatenation of \(xyz\)-coordinate \(\mathbf{C}\in\mathbb{R}^{N\times 3}\) and features for each point \(\mathbf{F}\in\mathbb{R}^{N\times d}\), that is \(\mathbf{P}=[\mathbf{C},\mathbf{F}]\). \(N\) here is the number of points and \(d\) represents the number of point feature channels, which is normally \(d=1\) for intensity of raw input point clouds. Paired with each LiDAR point cloud, detection labels \(L_{det}\in\mathbb{R}^{N_{det}\times 10}\) and segmentation labels for each point \(L^{j}_{seg}\in\{0,1,2,...,N_{\text{cls}}\}\) (\(j=1,2,...,N\)) are provided. For detection labels, \(N_{det}\) is the number of 3D boundary boxes in the corresponding LiDAR frame and each box is assigned \(xyz\)-location, sizes in \(xyz\)-axis (length, width and height), orientation in \(xy\)-plane (the yaw angle), velocity in \(xy\)-axis and the category label for the corresponding object. For segmentation labels, each LiDAR point is assigned a semantic label where \(0\) indicates "empty", and \(1\) to \(N_{\text{cls}}\) are different categories like vehicle, pedestrian, and so on. **Pre-processing.** We generate "ground-truth" occupancy \(\mathbf{O}\in\{0,1,2,...,N_{\text{cls}}\}^{H\times W}\) for pre-training following the practice in (Tian et al., 2023), where \(H\) and \(W\) are respectively number of voxels in \(xy\)-axis and Fig. 2 shows an example. In general, we take LiDAR point clouds in the same sequence along with their detection and segmentation labels as the inputs, and divide the labels into dynamic and static. After that, all LiDAR point clouds in that sequence can be fused to generate dense point clouds, followed by mesh reconstruction to fill up the holes. Finally, based on the meshes, we can obtain occupancy \(\mathbf{O}\). For more details, please refer to (Tian et al., 2023). **Encoding and Decoding.** Given an input LiDAR point cloud \(\mathbf{P}\in\mathbb{R}^{N\times(3+d)}\), augmentations including beam re-sampling, random flip and rotation, are first applied and result in augmented point cloud \(\mathbf{P}_{\text{aug}}\in\mathbb{R}^{N\times(3+d)}\). Then \(\mathbf{P}_{\text{aug}}\) is embedded with sparse 3D convolution and BEV convolution backbones and obtain dense BEV features \(\mathbf{F}_{\text{BEV}}\in\mathbb{R}^{\hat{H}\times\hat{W}\times\hat{d}}\) as follows: \[\mathbf{F}_{\text{BEV}}=f^{\text{enc}}(\mathbf{P}_{\text{aug}}), \tag{1}\] where \(\hat{H}\) and \(\hat{W}\) are height and width of the BEV feature map and \(\hat{d}\) is the number of feature channels after encoding. 
Then, based on \(\mathbf{F}_{\text{BEV}}\), a convolutional decoder together with a Softmax operation (on the last dimension) is applied to generate the dense occupancy probability prediction \(\hat{\mathbf{O}}\in\mathbb{R}^{H\times W\times(N_{\text{cls}}+1)}\) using the following equation: \[\hat{\mathbf{O}}=\text{softmax}(f^{\text{dec}}(\mathbf{F}_{\text{BEV}})), \tag{2}\] where \(H\) and \(W\) are the same as those of \(\mathbf{O}\). For each pixel on the BEV map, an \((N_{\text{cls}}+1)\)-dimensional probability vector is predicted, each entry of which indicates the probability of the corresponding category. The decoder \(f^{\text{dec}}\) is kept simple and lightweight: it consists of only three layers of 2D transposed convolution with a kernel size of 3 and a prediction head composed of linear layers.

Figure 2: The overview of the proposed SPOT. Firstly, the input LiDAR point cloud is augmented by beam re-sampling to simulate various LiDAR sensors, which helps learn general representations. Then point clouds are processed by backbone encoders consisting of 3D and 2D ones, which are utilized to initialize downstream architectures after pre-training. Next, a lightweight decoder with stacked transposed convolutions embeds the BEV features to further predict occupancy probability. Finally, we use class-balancing cross entropy loss and Lovasz-Softmax loss to guide the pre-training.

**Loss Function.** To guide the encoders to learn transferable representations, a class-balancing cross-entropy loss and the Lovasz-Softmax loss (Berman et al., 2018) are applied to the predicted occupancy probability \(\hat{\mathbf{O}}\) and the "ground-truth" occupancy \(\mathbf{O}\). The overall loss can be written as: \[\mathcal{L}=\mathcal{L}_{\text{ce}}(\mathbf{O},\hat{\mathbf{O}})+\lambda\cdot\mathcal{L}_{\text{lov}}(\mathbf{O},\hat{\mathbf{O}}), \tag{3}\] where \(\lambda\) is the weighting coefficient used to balance the contributions of the two losses. The details of the class-balancing cross-entropy loss are discussed in Sec. 3.3. The Lovasz-Softmax loss is a popular loss function used in semantic segmentation, whose formulation is as follows: \[\mathcal{L}_{\text{lov}}(\mathbf{O},\hat{\mathbf{O}})=\frac{1}{N_{\text{cls}}}\sum_{n=1}^{N_{\text{cls}}}\overline{\Delta_{J_{c}}}(\mathbf{M}(n)),\quad\mathbf{M}(n)_{h,w}=\begin{cases}1-\hat{\mathbf{O}}_{h,w,n}&\text{if }n=\mathbf{O}_{h,w}\\ \hat{\mathbf{O}}_{h,w,n}&\text{otherwise}\end{cases}, \tag{4}\] where \(\mathbf{M}(n)\in\mathbb{R}^{H\times W}\) denotes the per-pixel errors on the BEV map for class \(n\), and \((h,w)\) indexes the pixels of the BEV map. \(\overline{\Delta_{J_{c}}}\) denotes the Lovasz extension of the Jaccard index, used to maximize the Intersection-over-Union (IoU) score for class \(n\); it smoothly extends the Jaccard index loss based on a submodular analysis of the set function.
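For concreteness, below is a small PyTorch-style sketch of the lightweight occupancy decoder and the loss of Eq. (3). The channel sizes, the transposed-convolution strides, and the assignment of foreground classes to indices 1-5 are assumptions for illustration; the per-class weights follow the values given later in Sec. 3.3, and the Lovasz-Softmax term is left as a placeholder since it is not part of core PyTorch. None of this is the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyDecoder(nn.Module):
    """Lightweight decoder f_dec: three 3x3 transposed convolutions plus a linear prediction head."""
    def __init__(self, in_channels: int = 256, hidden: int = 128, n_cls: int = 15):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(in_channels, hidden, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(hidden, hidden, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(hidden, hidden, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(hidden, n_cls + 1)  # per-BEV-pixel logits over N_cls classes + "empty"

    def forward(self, bev_feats):                    # bev_feats: (B, C, H_hat, W_hat)
        x = self.deconv(bev_feats)                   # upsample towards the occupancy resolution (H, W)
        return self.head(x.permute(0, 2, 3, 1))      # (B, H, W, N_cls + 1) logits

def occupancy_loss(logits, target, n_cls=15, w_fg=2.0, w_bg=1.0, w_empty=0.01, lovasz_weight=1.0):
    """Eq. (3): class-balanced cross entropy plus a Lovasz-Softmax term (placeholder here)."""
    weights = torch.full((n_cls + 1,), w_bg, device=logits.device)
    weights[0] = w_empty                             # index 0 = empty voxels
    weights[1:6] = w_fg                              # assumed indices of car/pedestrian/cyclist/bicycle/motorcycle
    ce = F.cross_entropy(logits.permute(0, 3, 1, 2), target, weight=weights)
    lovasz = torch.zeros((), device=logits.device)   # plug in a Lovasz-Softmax implementation here
    return ce + lovasz_weight * lovasz
```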
First of all, we quantify the sparsity of point clouds collected by different LiDAR sensors. The dominant factor is beam-number and the Vertical Field Of View (VFOV) also matters. We calculate the beam density by the following Eq. 5, where \(N_{\text{beam}}\) is the number of the LiDAR beam, and \(\alpha_{\text{up}}\) and \(\alpha_{\text{low}}\) respectively represent the upper and lower limits of the vertical field of view of the sensor, \[B_{\text{density}}=\frac{N_{\text{beam}}}{\alpha_{\text{up}}-\alpha_{\text{ low}}}. \tag{5}\] Next, by dividing \(B_{\text{density}}\) of different downstream datasets with that of the pre-training dataset, we compute re-sampling factors \(R_{\text{sample}}\). Re-sampling is conducted for the pre-training data according to different \(R_{\text{sample}}\). Specifically, given the original LiDAR point cloud, we transform the Cartesian coordinates \((x,y,z)\) of each point into the spherical coordinates \((r,\phi,\theta)\), where \((r,\phi,\theta)\) are the range, inclination and azimuth, respectively. Finally, uniform re-sampling is conducted on the dimension of inclination. The transformation function can be formulated by: \[r=\sqrt{x^{2}+y^{2}+z^{2}},\ \ \phi=arctan(x/y),\ \ \theta=arctan(z/\sqrt{x^{2}+y^{2}}). \tag{6}\] ### Class-balancing Strategies The contribution to downstream tasks of different categories varies. First, different datasets have various distributions over categories, which causes domain gaps and hinders learning general representations. Also, in 3D detection task, foreground classes like vehicle, pedestrian and cyclist are more important than background categories including pavement and vegetation. Thus, we propose class-balancing strategies respectively on the dataset and loss function to narrow the domain gaps. **Dataset Balancing.** Considering that background classes are almost ubiquitous in every scene, we focus solely on the foreground classes in the dataset, such as cars, pedestrians, cyclists and so on. As shown in Fig. 4, we conducted a statistical analysis of the distribution of foreground semantic classes in the pre-training dataset, and it is evident that the pre-training dataset has a severe class imbalance problem. Inspired by (Zhu et al., 2019), we employ a frame-level re-sampling strategy to alleviate the severe class imbalance. Assuming that there are \(N_{\text{fg}}\) foreground classes, we calculate Figure 4: Distribution of different classes. Figure 3: Examples of different LiDAR beams. the class sampling weights \(s_{i}\) (\(i=1,2,...,N_{\text{fg}}\)) for each class based on the proportion of samples: \[s_{i}=\sqrt{m/n_{i}},\ \ m=\frac{1}{N_{\text{fg}}},\ \ n_{i}=\frac{N_{i}}{\sum_{j=1}^{N _{\text{fg}}}N_{j}}, \tag{7}\] where \(N_{i}\) is the number of samples for the \(i^{th}\) class. Fewer samples in a category brings higher weight \(s_{i}\) for it. Based on the sampling weights, we can employ a random duplication to balance the classes and compose the final dataset to alleviate the class imbalance. This is advantageous as it allows us to learn scene representations more effectively in the pre-training task, facilitating downstream tasks. **Loss Function Balancing.** In real-world scenarios, the surrounding 3D space of the autonomous vehicle is dominated by unoccupied states or background information. This can be harmful to the training process because the loss would be overwhelmed by a substantial amount of useless information. 
To overcome this challenge, we propose to assign different weights to different categories. Specifically, we assign weight \(w_{\text{fg}}=2.0\) to common foreground categories including car, pedestrian, cyclist, bicycle, and motorcycle. Meanwhile, other background categories like vegetation and road are assigned \(w_{\text{bg}}=1.0\) and \(w_{\text{empty}}=0.01\) for unoccupied voxels. ## 4 Experiments The goal of pre-training is to learn general representations for various downstream tasks, datasets, and architectures. In this section, we design extensive experiments to answer the question whether SPOT learns such representations in a label-efficiency way. We first introduce experiment setup in Sec. 4.1, followed by main results with baselines in Sec. 4.2. Then we also provide discussions about pre-training tasks selection, ablation study and performance on full downstream datasets in Sec. 4.3. Finally, we end this section with visualization about 3D object detection results. ### Experimental Setup **Pre-training Dataset.** We use the _Waymo Open dataset_(Sun et al., 2020) as our pre-training dataset, which uses a main 64-beam LiDAR and 4 short-range LiDARs to collect point clouds. Waymo contains 798 sequences and 202 sequences for training and validation, respectively. Following the methodology mentioned in Sec. 3.1, we generate dense occupancy labels for each sample where \(N_{\text{cls}}=15\). This means 15 semantic categories including car, pedestrian and motorcycle, as well as "empty" are marked for each voxel. To evaluate the scalability of SPOT, we partition Waymo into \(5\%\), \(20\%\), and \(100\%\) subsets at the sequence level and perform the pre-training on different subsets. **Downstream Tasks.** Popular LiDAR perception tasks include 3D object detection and LiDAR semantic segmentation. For detection, we cover the vast majority of currently available datasets, including _KITTI_(Geiger et al., 2012), _NuScenes_(Caesar et al., 2020) and _ONCE_(Mao et al., 2021) with popular 3D detectors including SECOND (Yan et al., 2018), CenterPoint (Yin et al., 2021) and PV-RCNN (Shi et al., 2020) for evaluation. _NuScenes_ utilizes a 32-beam LiDAR to collect 40,000 LiDAR point clouds, of which 28,130 samples are used for training and 6,019 samples for validation. We evaluate the performance using the official Mean Average Precision (mAP) and NuScenes Detection Score (NDS) (Caesar et al., 2020). _KITTI_ consists of 7,481 samples for training and 7,518 samples for validation collected with a 64-beam LiDAR. We report the results using three levels of mAP metrics: easy, moderate, and hard, following the official settings in (Geiger et al., 2012). _ONCE_ contains 19k labeled LiDAR point clouds, of which 5k point clouds are used for training, 3k for validation and 8k for testing, all of which are collected by a 40-beam LiDAR. For evaluation, we follow (Mao et al., 2021) to use the mAP metrics by different ranges: 0-30m, 30-50m, and 50m-Inf. For semantic segmentation, we conduct experiments on _SemanticKITTI_(Behley et al., 2019) and _NuScenes_(Caesar et al., 2020) with the famous LiDAR segmentor Cylinder3D (Zhu et al., 2021). _SemanticKITTI_ has 22 point cloud sequences and is divided into a train set with 19,130 samples together with a validation set with 4,071 frames. The evaluation metric of the two datasets adopts the commonly used mIoU (mcan Intersection over Union). 
To compute mIoU, per-category IoU is first computed as IoU\({}_{i}=\frac{\text{TP}_{i}}{\text{TP}_{i}+\text{TP}_{i}+\text{FN}_{i}}\), where TP\({}_{i}\), FP\({}_{i}\) and FN\({}_{i}\) denote true positive, false positive and false negative for class \(i\), respectively. Then IoUs for different classes are averaged to get the final mIoU. Baseline Methods.We select two representative pre-training methods for unsupervised (BEV-MAE (Lin and Wang, 2022)) and supervised (AD-PT (Yuan et al., 2023a)) branches respectively. **Implementation Details.** For pre-training phase, we adopt commonly used 3D and 2D backbones in (Yan et al., 2018; Yin et al., 2021; Shi et al., 2020) and \(N_{\text{cls}}=15\), \(\lambda=1\). We train 30 epochs with the Adam optimizer, using the one-cycle policy with a learning rate of 0.003. For the downstream detection task, we train 30 epochs for NuScenes, 80 epochs for KITTI and ONCE. For the downstream segmentation task, we train 20 and 10 epochs for SemanticKITTI and nuScenes respectively. Our experiments are implemented based on 3DTrans (Team, 2023), using 8 NVIDIA Tesla A100 GPUs. Note that our experiments are under label-efficiency setting, which means that we conduct fine-tuning on a randomly selected subset of the downstream datasets (\(5\%\) for _NuScenes_ detection, \(20\%\) for _KITTI_ and _ONCE_ and \(10\%\) for _SemanticKITTI_ and _NuScenes_ segmentation). ### Main Results **NuScenes Detection.** Equipped with different types of LiDAR sensors, the domain gap between the pre-training dataset Waymo and the downstream dataset NuScenes is non-negligible. By harnessing the capabilities of SPOT, which learns general 3D scene representations, it can be found in Tab. 1 that SPOT achieves considerable improvements on the SECOND and CenterPoint detectors compared to other pre-training strategies. Specifically, when pre-trained by 100% Waymo data, SPOT achieves the best overall performance (mAP and NDS) among all the pre-training methods including randomly initialization, BEV-MAE and AD-PT, improving training-from-scratch by up to 10.41 mAPs and 12.69 NDS. Scalable pre-training can also be observed when increasing the amount of pre-training data. When further looking into the detailed categories, SPOT almost achieves the best performance among all the categories for both detectors. For example, SPOT improves SECOND on Bus, Trail, Barries, Motorcycle and Pedestrian for more than 10 mAP compared to training from scratch, which is essential for downstream safety control in real-world deployment. **KITTI Detection.** Although KITTI uses the same type of LiDAR sensor as that in Waymo dataset, KITTI only employs front-view point clouds for detection, which still introduces domain gaps. In Tab. 2, it can be found that, SECOND and PV-RCNN detectors with SPOT method are significantly and continuously improved as more pre-training data are added. For 100% pre-training data, the improvements are respectively 5.66 and 5.06 mAPs at moderate level. For detailed categories, SPOT brings consistent improvement over different classes. When we focus on Moderate level, the most \begin{table} \begin{tabular}{c|c|c|c c c c c c c c c c c} \hline \hline Detector & Method & PD.A. & mAP & NDS & Car & Truck & CV. & Bus & Trailer & Barrier & Motor & Bicycle & Pol. 
& TC \\ \hline \multirow{6}{*}{SECOND} & From Scratch & - & 32.16 & 41.59 & 69.13 & 33.94 & 10.12 & 46.56 & 17.97 & 32.34 & 15.87 & 0.00 & 57.30 & 37.99 \\ & BEV-MAE (Lin and Wang, 2022) & 100\% & 32.09 & 42.08 & 48.94 & 34.79 & 18.49 & 48.36 & 22.46 & 32.67 & 1.301 & 0.13 & 56.10 & 35.33 \\ & AD-PT (Yuan et al., 2023a) & 100\% & 37.96 & 47.95 & 47.94 & 41.29 & 12.50 & 45.47 & 27.89 & 31.41 & 23.39 & 3.61 & 39.54 \\ & SPOT (ours) & 59.76 & 36.85 & 47.42 & 37.94 & 12.74 & 54.94 & 27.69 & 38.03 & 2.991 & 2.55 & 64.27 & 24.31 \\ & SPOT (ours) & 200\% & 39.63 & 45.18 & 57.58 & 41.21 & 12.95 & 55.67 & **29.02** & 40.13 & 22.36 & 4.77 & 72.04 & 20.28 \\ & SPOT (ours) & 100\% & **42.57** & **54.28** & **76.59** & **42.86** & **14.54** & **59.66** & 29.30 & **44.04** & **30.91** & **7.52** & **72.70** & **47.26** \\ \hline \multirow{6}{*}{CenterPoint} & From Scratch & - & 42.37 & 52.01 & 71.07 & 13.18 & 10.50 & 58.57 & 23.43 & 50.50 & 35.13 & 15.18 & 71.58 & 46.16 \\ & BEV-MAE (Lin and Wang, 2022) & 100\% & 42.86 & 25.95 & 77.33 & 39.95 & 80.74 & 54.43 & 25.03 & 51.20 & 34.88 & 15.15 & 72.24 & 49.66 \\ & AD-PT (Yuan et al., 2023a) & 100\% & 44.99 & 52.99 & 78.04 & **45.82** & 11.13 & 55.16 & 21.22 & **55.10** & 39.03 & 17.76 & 72.28 & **55.43** \\ & SPOT (ours) & 59.43 & 45.36 & 50.04 & 77.21 & 38.13 & 10.45 & 56.64 & 24.19 & 50.33 & 37.74 & 18.55 & 73.97 & 48.59 \\ & SPOT (ours) & 20\% & 44.94 & 54.98 & 50.89 & 40.09 & 12.92 & 56.68 & 28.10 & 57.17 & 35.93 & 22.46 & 75.98 & 47.38 \\ & SPOT (ours) & 100\% & **47.47** & **57.11** & **79.01** & 42.41 & **13.04** & **59.51** & **29.53** & 54.74 & **42.54** & **24.66** & **77.68** & 51.65 \\ \hline \hline \end{tabular} \end{table} Table 1: Fine-tuning performance on NuScenes benchmark. P.D.A. represents the Pre-training Data Amount. We fine-tune on 5% NuScenes training data. \begin{table} \begin{tabular}{c|c|c|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Detector} & \multirow{2}{*}{Method} & \multirow{2}{*}{P.D.A.} & mAP & \multicolumn{2}{c}{Car} & \multicolumn{2}{c|}{Poleertain} & \multicolumn{2}{c}{Cyclist} \\ \cline{3-14} & & & (Mod.) & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline \multirow{6}{*}{SECOND} & From Scratch (Lin and Wang, 2022) & 100\% & 63.45 & 89.50 & 78.83 & 76.21 & 52.08 & 47.23 & 43.37 & 76.35 & 99.06 & 55.24 \\ & AP-MAE (Lin and Wang, 2022) & 100\% & 63.45 & 89.50 & 78.53 & 75.37 & 53.59 & 48.71 & 44.20 & 80.73 & 63.12 & 58.96 \\ & AD-PT (Yuan et al., 2023a) & 100\% & 65.95 & 90.23 & 80.70 & **78.29** & 56.63 & 47.51 & 83.72 & 87.67 & 65.04 & 40.40 \\ & SPOT (ours) & 5\% & 63.53 & 90.82 & 80.69 & 79.71 & 58.42 & 50.22 & 46.38 & 80.80 & 63.53 & 99.31 \\ & SPOT (ours) & 20\% & 65.45 & 90.55 & 89.05 & 89.75 & 75.60 & 56.07 & 51.68 & 47.56 & 35.32 & 65.45 & 61.11 \\ & SPOT (ours) & 100\% & **67.36** & **90.54** & **81.12** & 78.09 & **57.55** & **53.03** & **47.86** & **87.00** & **67.93** & **63.50** \\ \hline \multirow{6}{*}{PV-RCNN} & From Scratch (Lin and Wang, 2022) & 100\% & 66.67 & 91.81 & 82.52 & 51.01 & 58.33 & 43.71 & 66.86 & 86.42 & 58.95 \\ & BEV-MAE (Lin and Wang, 2022) & 100\% & 69.91 & 92.58 & 82.81 & 81.68 & 64.82 & 57.13 & 51.98 & 88.26 & 69.78 & 5.75 \\ & AD-PT (Yuan et al., 2023a) & 100\% & 69.43 & 92.18 & 82.75 & 51.22 & 65.50 & 57.59 & 51.84 & 84.15 & 65.96 & 64.73 \\ & SPOT (ours commonly used metrics, SPOT achieves the best among all the initialization methods for all classes, which shows great potential to avoid disaster in real-world applications. 
**ONCE Detection.** As shown in Fig. 5, when pre-trained by SPOT (solid lines), both SECOND and CenterPoint outperform training from scratch (dot lines) by considerable margins (2.70 and 7.58 mAP respectively). Meanwhile, increasing pre-training data also enlarges this gap, which again demonstrates the ability of SPOT to scale up. **SemanticKITTI Segmentation.** Results are presented in Tab. 3. It can be found that SPOT significantly improves mIoU metrics compared to training from scratch and achieves the best performance among all pre-training methods. For detailed categories, SPOT gains more than 20 mIoU improvement compared to random initialization on truck, person and bicyclist, which can help guarantee safety in control task. **NuScenes Segmentation.** As shown in Tab. 4, considerable gains are achieved by SPOT, 4.03 and 2.38 mIOUs on \(5\%\) and \(10\%\) NuScenes data respectively. SPOT also achieves the best performance among all initialization methods. ### Discussions and Analyses **Pre-training Tasks.** We argue that occupancy prediction is a scalable and general task for 3D representation learning. Here we conduct experiments to compare different kinds of existing task for pre-training, including detection and segmentation tasks. Pre-training is conducted on the full Waymo dataset and downstream datasets include \(20\%\) KITTI data, \(5\%\) nuScenes(det) data, \(100\%\) SemanticKITTI data, and \(100\%\) nuScenes(seg) data. The results presented in Tab. 5 reveal that \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline Backbone & Method & Fin-tuning & mIOU & bus & car & ped. & trailer & sidewalk & vegetable \\ \hline \multirow{8}{*}{Cylinder3D} & From Scratch & 5\% & 45.85 & 10.88 & 75.29 & 47.68 & 15.61 & 61.07 & 80.81 \\ & BEV-MAE (Lin \& Wang, 2022) & 5\% & 46.94 & 34.48 & 36.83 & 51.63 & 14.04 & 61.27 & 80.42 \\ & AD-PT (Yuan et al., 2023a) & 5\% & 45.61 & 9.33 & 76.06 & 51.27 & 15.95 & 60.49 & 79.67 \\ & SPOT (ours) & 5\% & **49.88** & **50.35** & **76.26** & **52.42** & **16.45** & **63.74** & **81.83** \\ \cline{2-10} & From Scratch & 10\% & 53.72 & 60.54 & 72.85 & 55.90 & 33.47 & 64.02 & 81.62 \\ & BEV-MAE (Lin \& Wang, 2022) & 10\% & 53.75 & 57.11 & 76.26 & 54.88 & 20.92 & 65.00 & 81.81 \\ & AD-PT (Yuan et al., 2023a) & 10\% & 52.86 & 53.76 & 81.09 & 53.11 & 28.60 & 65.45 & 82.14 \\ & SPOT (ours) & 10\% & **56.10** & **63.24** & **81.30** & **57.86** & **33.99** & **67.04** & **82.73** \\ \hline \end{tabular} \end{table} Table 4: Fine-tuning performance on NuScenes for **segmentation task** using 100% pre-training data. We fine-tune on 5% and 10% NuScenes training data, respectively, and show the results of some of the categories. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Backbone & Method & mIOU & car & truck & bus & person & bicyclist & road & fence & trunk \\ \hline \multirow{8}{*}{Cylinder3D} & From Scratch & 49.01 & 93.73 & 38.03 & 25.42 & 35.52 & 0.00 & 92.55 & 44.66 & 65.22 \\ & BEV-MAE (Lin \& Wang, 2022) & 53.81 & 94.06 & 58.46 & 36.13 & 50.08 & 51.46 & 92.46 & 46.96 & 62.28 \\ & AD-PT (Yuan et al., 2023a) & 52.85 & 94.02 & 42.03 & 36.90 & 50.26 & 49.49 & 91.94 & 49.00 & 60.10 \\ & SPOT (ours) & **55.88** & **94.34** & **61.27** & **43.01** & **55.56** & **67.61** & **92.61** & **52.81** & **67.17** \\ \hline \end{tabular} \end{table} Table 3: Fine-tuning performance on SemanticKITTI for **segmentation task** using 100% pre-training data. We fine-tune on 10% training data and show the results of some of the categories. 
relying solely on detection as a pre-training task yields minimal performance gains, particularly when significant domain discrepancies exist, _e.g._ Waymo to NuScenes. Similarly, segmentation alone as a pre-training task performs poorly on the downstream detection task, likely due to the absence of localization information. In contrast, our occupancy prediction task achieves consistent performance improvements across various downstream perception datasets and tasks.

**Module-level Ablation Studies in SPOT.** We conduct comprehensive ablation experiments to analyze the individual components of the proposed SPOT. For pre-training, we uniformly sample \(5\%\) of the Waymo data and subsequently perform fine-tuning experiments on subsets of \(5\%\) NuScenes (det) data, \(20\%\) KITTI data, and \(20\%\) ONCE data, using SECOND as the detector. The results presented in Tab. 6 demonstrate the effectiveness of the occupancy prediction task in enhancing the performance of the downstream tasks. Moreover, our proposed pre-training strategies, including loss balancing, LiDAR beam re-sampling, and dataset balancing, also yield significant improvements across the different downstream datasets.

**Beyond the Label-efficiency Setting.** We further conduct experiments on the complete downstream datasets, and the results are shown in Tab. 7 and Tab. 8. SPOT achieves consistent performance gains even with \(100\%\) labeled data, which highlights its effectiveness. Consistent improvements across various downstream datasets and tasks, as well as scalable pre-training, are again observed. We believe SPOT paves the way for large-scale pre-training on LiDAR point clouds.
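The scalability study above relies on fractional subsets of the Waymo pre-training data drawn at the sequence level, so that all frames from a selected drive stay together. A minimal sketch of such sequence-level subsampling is given below; the function name, seeding, and plain-Python implementation are illustrative assumptions rather than the released 3DTrans tooling.

```python
# Minimal sketch: sample a fixed fraction of sequences (not individual frames),
# so that all frames from a selected drive end up in the subset together.
# This mirrors the 5% / 20% / 100% sequence-level splits described above,
# but is an illustrative stand-in, not the released implementation.
import random
from collections import defaultdict

def sequence_level_subset(frame_ids, seq_of, fraction, seed=0):
    """frame_ids: list of frame identifiers;
    seq_of: dict mapping frame id -> sequence id;
    fraction: e.g. 0.05, 0.20, 1.0."""
    frames_by_seq = defaultdict(list)
    for fid in frame_ids:
        frames_by_seq[seq_of[fid]].append(fid)
    seqs = sorted(frames_by_seq)
    rng = random.Random(seed)
    rng.shuffle(seqs)
    n_keep = max(1, round(fraction * len(seqs)))
    kept = set(seqs[:n_keep])
    return [fid for fid in frame_ids if seq_of[fid] in kept]

# Toy usage: 10 sequences of 3 frames each, keep 20% of the sequences.
frames = [f"seq{s}_frame{i}" for s in range(10) for i in range(3)]
seq_of = {fid: fid.split("_")[0] for fid in frames}
subset = sequence_level_subset(frames, seq_of, fraction=0.2)
print(len(subset), "frames from", len({seq_of[f] for f in subset}), "sequences")
```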
Annotation of 3D LiDAR point clouds is a key ingredient for many applications such as autonomous driving, yet labeling point clouds is extremely time-consuming and labor-intensive. The pretrain-and-finetune paradigm can reduce this labeling burden by fine-tuning a pre-trained backbone on various downstream datasets and tasks. In this paper, we propose SPOT, namely Scalable Pre-training via Occupancy prediction. SPOT is effective on a variety of public datasets and different downstream tasks, and its generality, cross-domain robustness, and data scalability are three properties that matter for real-world applications. Specifically, we present theoretical and empirical evidence that general representation learning can be achieved through the occupancy prediction task. Furthermore, to handle the domain gaps caused by different LiDAR sensors and annotation methods, we employ LiDAR beam re-sampling and dataset-balancing strategies during pre-training.
2309.17248
Introducing the Condor Array Telescope. II. Deep imaging observations of the edge-on spiral galaxy NGC 5907 and the NGC 5866 Group: yet another view of the iconic stellar stream
We used the Condor Array Telescope to obtain deep imaging observations through the luminance filter of the entirety of the NGC 5866 Group, including a very extended region surrounding the galaxy NGC 5907 and its stellar stream. We find that the stellar stream consists of a single curved structure that stretches $220$ kpc from a brighter eastern stream to a fainter western stream that bends to the north and then curls back toward the galaxy. This result runs contrary to a previous claim of a second loop of the stellar stream but is consistent with another previous description of the overall morphology of the stream. We further find that: (1) an extension of the western stream appears to bifurcate near its apex, (2) there is an apparent gap of $\approx 6$ kpc in the western stream due east of the galaxy, (3) contrary to a previous claim, there is no evidence of the remnant of a progenitor galaxy within the eastern stream, although (4) there are many other possible progenitor galaxies, (5) there is another structure that, if it is at the distance of the galaxy, stretches 240 kpc and contains two very large, very low-surface-brightness "patches" of emission, one of which was noted previously and another of which was not. We note the number and variety of stellar streams in the vicinity of NGC 5907 and the apparent gap in the western stream, which may be indicative of a dark subhalo or satellite in the vicinity of the galaxy.
Kenneth M. Lanzetta, Stefan Gromoll, Michael M. Shara, Stephen Berg, James Garland, Evan Mancini, David Valls-Gabaud, Frederick M. Walter, John K. Webb
2023-09-29T13:54:27
http://arxiv.org/abs/2309.17248v1
Introducing the Condor Array Telescope. II. Deep imaging observations of the edge-on spiral galaxy NGC 5907 and the NGC 5866 Group: yet another view of the iconic stellar stream

###### Abstract

We used the Condor Array Telescope to obtain deep imaging observations through the luminance filter of the entirety of the NGC 5866 Group, including a very extended region surrounding the galaxy NGC 5907 and its stellar stream. We find that the stellar stream consists of a single curved structure that stretches 220 kpc from a brighter eastern stream to a fainter western stream that bends to the north and then curls back toward the galaxy. This result runs contrary to a previous claim of a second loop of the stellar stream but is consistent with another previous description of the overall morphology of the stream. We further find that: (1) an extension of the western stream appears to bifurcate near its apex, (2) there is an apparent gap of \(\approx 6\) kpc in the western stream due east of the galaxy, (3) contrary to a previous claim, there is no evidence of the remnant of a progenitor galaxy within the eastern stream, although (4) there are many other possible progenitor galaxies, (5) there is another structure that, if it is at the distance of the galaxy, stretches 240 kpc and contains two very large, very low-surface-brightness "patches" of emission, one of which was noted previously and another of which was not. We note the number and variety of stellar streams in the vicinity of NGC 5907 and the apparent gap in the western stream, which may be indicative of a dark subhalo or satellite in the vicinity of the galaxy.

keywords: Dwarf galaxies (416), Dwarf irregular galaxies (417), Galaxies (573), Galaxy groups (597), Galaxy interactions (600), Galaxy mergers (608), Galaxy photometry (611), Galaxy spurs (620), Giant galaxies (652), Interacting galaxies (802), Low surface brightness galaxies (940), Galaxy tails (2125)

## 1 Introduction

Over the past several years, the subject of low-surface-brightness imaging of astronomical sources has experienced a resurgence of interest, driven by new instrumentation capable of recording low surface brightnesses over wide fields of view. The edge-on spiral galaxy NGC 5907 has become a prime target of such observations. The galaxy was discovered by William Herschel in 1788 using an 18.7-inch (47.5 cm) reflecting telescope (Herschel, 1789) and is a member of the NGC 5866 Group, which consists of at least the galaxies NGC 5866 (or M102), NGC 5879, and NGC 5907. The NGC 5866 Group is located near the M101 Group and the M51 Group on the sky, and the redshifts of all three groups are similar, which suggests that they are all part of the same structure. Observations of NGC 5907 in H I by Sancisi (1976) showed that the galaxy exhibits a pronounced warp, which was also observed at optical wavelengths by van der Kruit (1979), Sasaki (1987), and Sackett et al. (1994). Subsequent observations of the galaxy at optical wavelengths by Shang et al. (1998) and Zheng et al. (1999) revealed a remarkable stellar stream forming a section of a loop surrounding the disk of the galaxy. The galaxy and the stellar stream were then observed again at optical wavelengths by Martinez-Delgado et al. (2010), who reported that the stellar stream comprised not one but _two_ full loops surrounding the disk of the galaxy and proposed that both loops could be plausibly modeled by \(N\)-body simulations as the accretion of a dwarf satellite onto the disk of the galaxy.
Because the configuration was so striking and unusual, this image of NGC 5907 by Martinez-Delgado et al. (2010) became _the_ iconic image depicting the effects of tidal interactions between accreting dwarf satellites and spiral galaxies and is likely one of the most widely-recognized and influential images of any galaxy ever. The galaxy was then observed again by Laine et al. (2016) using the Suprime-Cam imager on the Subaru 8.2-m telescope through the Sloan \(g^{\prime}\), \(r^{\prime}\), and \(i^{\prime}\) filters and using the Infrared Array Camera on the Spitzer telescope at 3.6 \(\mu\)m; these observations detected only the first loop of the stellar stream. The situation took another dramatic turn when van Dokkum et al. (2019) used the Dragonfly Telephoto Array (Abraham & van Dokkum, 2014) to again observe the galaxy and the stellar stream at optical wavelengths. These observations (1) showed no evidence at all of the second loop of the stellar stream but instead (2) indicated that the stellar stream consists of a single curved structure that stretches 220 kpc, from the brighter "eastern stream" or the first loop identified by Shang et al. (1998) and Zheng et al. (1999), across the southern edge of the galaxy, to a fainter "western stream" that bends to the north. Results of van Dokkum et al. (2019) further indicated (3) a "density enhancement near the luminosity-weighted midpoint of the [eastern] stream," which they interpreted as the "likely remnant of a nearly disrupted progenitor galaxy," (4) that the configuration could be plausibly modeled by \(N\)-body simulations, (5) a new "linear" feature emanating from the eastern stream toward the east and terminating on a "patch" of emission (6) a tentative extension of the western stream to the northeast looping back south toward the disk, (7) a tentative continuation of the eastern stream looping back to the disk, and (8) a previously-uncataloged dwarf galaxy located just west of the eastern stream. Subsequent observations at optical wavelengths by Muller et al. (2019) and by Byun et al. (2022) likewise showed no evidence at all of the second loop, although these observations also did not detect the western stream or any of the other features reported by van Dokkum et al. (2019). In the late winter and spring of 2022, we used the Condor Array Telescope (Lanzetta et al., 2023) to obtain deep imaging observations through the luminance filter of the entirety of the NGC 5866 Group, including a very extended region surrounding the galaxy NGC 5907 and its stellar stream. Our motivation was severalfold: * to assess the technical capabilities and sensitivity of Condor in comparison with other telescopes optimized for low-surface-brightness imaging, which is especially relevant since NGC 5907 and its stellar stream have become something of a benchmark within the low-surface-brightness community; * to confirm (or refute) the results of van Dokkum et al. (2019) and to weigh in on the apparent discrepancy between the results of van Dokkum et al. (2019) and the results of Martinez-Delgado et al. (2010); * to search for new low-surface-brightness features in the vicinity of NGC 5907, potentially with greater sensitivity than any previous observations; * to exploit the higher angular resolution of Condor with respect to Dragonfly to help constrain the nature of the various low-surface-brightness features in the vicinity of NGC 5907; * and to set low-surface-brightness features in the vicinity of NGC 5907 into the broader context of the NGC 5866 Group. 
Here we report results of these observations, which together constitute the deepest imaging observations of NGC 5907 and its stellar stream and of the NGC 5866 Group yet obtained. In what follows, we adopt for the galaxy NGC 5907 a heliocentric recession velocity \(v=665\pm 1\) km s\({}^{-1}\) and redshift \(z=0.002218\pm 0.000002\) (Springob et al., 2005) and a distance \(d\approx 17\) Mpc (Tully et al., 2016).

(iii) Each science image is field flattened and background subtracted. As described by Lanzetta et al. (2023), this involves dividing the science image by an appropriate "twilight flat image" (i.e. a sum of images of the sky obtained during dusk or dawn twilight), masking regions of the image surrounding detectable sources using NoiseChisel (Akhlaghi & Ichikawa, 2015; Akhlaghi, 2019), fitting the resulting quotient with a high-order (typically eighth-order) two-dimensional polynomial, and subtracting the resulting polynomial fit from the quotient. Because the source mask depends on the background, this procedure is iterated to convergence (which typically requires four iterations).

(iv) Each science image is astrometrically calibrated. As described by Lanzetta et al. (2023), this involves fitting parameters of an affine transformation and a seventh-order geometric distortion polynomial in the TPV projection to pixel coordinates of sources detected in the image and celestial coordinates of sources contained in the Gaia DR3 catalog (Gaia Collaboration et al., 2017, 2018, 2021; Gaia Collaboration, 2022). The astrometric calibrations exhibit systematic differences between the transformed pixel and celestial coordinates of \(\lesssim 0.1\) arcsec.

(v) Each science image is processed using MaxiMask (Paillassa et al., 2020), which is a convolutional neural network that identifies contaminants in astronomical images, including cosmic-ray events and satellite trails. Pixels flagged by MaxiMask are excluded from the subsequent analysis. Each science image is also processed using MaxiTrack (Paillassa et al., 2020), which is a convolutional neural network that identifies images affected by tracking errors.

(vi) For each science image, an additional pixel mask is constructed, identifying pixels that are found in the master bias image to exhibit significant effects of random telegraph noise (e.g. Chao et al., 2019). Pixels flagged in this way are excluded from the subsequent analysis.

(vii) Each science image is photometrically calibrated. As described by Lanzetta et al. (2023), this involves comparing aperture photometry of sources detected in the image to Sloan \(g^{\prime}\) magnitudes of sources contained in the Gaia DR3 catalog (Gaia Collaboration et al., 2017, 2018, 2021; Gaia Collaboration, 2022). The resulting magnitude zero points are used subsequently to assess the quality of the science images. Note that this procedure scales the luminance images to Sloan \(g^{\prime}\) magnitudes, although the luminance bandpass is actually roughly comparable to the sum of the Sloan \(g^{\prime}\) and \(r^{\prime}\) bandpasses. This introduces a color-dependent ambiguity in the photometric calibration, which for low-redshift galaxies amounts to \(\approx 0.25\) mag.

(viii) Each science image is associated with an uncertainty image, which propagates the \(1\sigma\) uncertainty appropriate for each pixel, starting from read noise and photon noise.
(ix) Science images are rejected from the analysis based on (1) poor or impossible astrometric calibration (indicating clouds or obstruction by an observatory wall), (2) large width of the autocorrelation function (indicating out-of-focus images or poor seeing conditions), (3) high background (indicating substantial manmade or Moon light), (4) low sky transparency (indicating fog, haze, or clouds), or (5) significant tracking errors (indicating substantial wind buffeting).

(x) The science images are then drizzled (Gonzaga et al., 2012) onto a common coordinate grid and coadded, weighted for maximum sensitivity in the background-limited regime according to the uncertainty images.

The resulting coadded images are shown in Figures 1 through 6, and a coadded mosaic of the six images of the entirety of the NGC 5866 Group is shown in Figure 7. The measured point-source FWHM and point-source (\(5\sigma\)) and surface-brightness (\(3\sigma\) over \(10\times 10\) arcsec\({}^{2}\) regions) sensitivities of the various images (determined near the centers of the images) are presented in Table 2. Note that the FWHM values of Table 2 include the combined effects of focus, seeing, tracking errors, and astrometric errors averaged over many images. Also note that the surface-brightness sensitivities of Table 2 are formal statistical values determined from the uncertainty images and neglect systematic uncertainties associated with field flattening, background subtraction, scattered starlight, and undetected faint sources. And finally note that the sensitivities of Table 2 do not scale in a simple way with exposure time. For a telescope like Condor that obtains observations spanning long dwell times, there will of course be significant variations in seeing, background, and sky transparency over the course of the (perhaps substantial) duration of the observations. For this reason, exposure time alone is not a good indicator of the depth of an image.

## 4 Assessment of field flattening and background subtraction

Errors in field flattening and background subtraction can be significant sources of systematic uncertainties at low surface-brightness thresholds. Here we assess the field flattening and background subtraction of the images of Figures 1 through 7, concentrating on the mosaic image of Figure 7. One possible assessment of errors in field flattening and background subtraction might be obtained by measuring fluctuations within randomly chosen apertures that by chance are devoid of detectable sources. But at the faint limits of the images of Figures 1 through 7, the sky is covered with faint sources (mostly background galaxies), at an incidence that exceeds \(10\) arcmin\({}^{-2}\). Hence there are essentially no apertures as large as, say, \(1\times 1\) arcmin\({}^{2}\) (or even \(0.5\times 0.5\) arcmin\({}^{2}\)) that are devoid of detectable sources. Instead, we assess errors in field flattening and background subtraction by measuring the data covariance over "background" pixels of the images, i.e. pixels of the images that are _not_ masked surrounding detectable sources using NoiseChisel (as described in SS 3, enumerated point iii). On small spatial scales (i.e. on scales of a few pixels), we expect the images to be highly correlated due to the drizzling process used to coadd the images (as described in SS 3, enumerated point x). But on larger spatial scales (i.e.
on scales of tens, hundreds, or thousands of pixels), the images should ideally exhibit zero covariance, and any non-zero covariance must indicate large-spatial-scale undulations of the background, which could be due in part to errors in field flattening and background subtraction. \begin{table} \begin{tabular}{l c c c} \hline \hline & & Point & Surface \\ & FWHM & Source & Brightness \\ & (arcsec) & (mag) & (mag arcsec\({}^{-2}\)) \\ \hline Condor field 6089 & 2.6 & 24.9 & 29.5 \\ Condor field 6090 & 3.0 & 24.6 & 29.4 \\ Condor field 6183 & 2.6 & 24.8 & 29.4 \\ Condor field 6184 & 3.0 & 24.7 & 29.5 \\ Condor field 6185 & 3.0 & 23.9 & 28.7 \\ NGC 5907 & 2.1 & 25.2 & 29.6 \\ mosaic & 2.3 & 25.5 & 29.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Image FWHM and sensitivities. Point-source sensitivities are \(5\sigma\), and surface-brightness sensitivities are \(3\sigma\) over \(10\times 10\) arcsec\({}^{2}\) regions. Figure 1: Coadded image of Condor field 6089. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Total exposure time is 22.7 hr. Figure 2: Coadded image of Condor field 6090. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Total exposure time is 25.0 hr. Galaxy NGC 5907 is toward upper right of image. Figure 4: Coadded image of Condor field 6184. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Total exposure time is 24.1 hr. Galaxy NGC 5907 is toward lower center of image. Figure 3: Coadded image of Condor field 6183. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Total exposure time array is 19.7 hr. Figure 5: Coadded image of Condor field 6185. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Total exposure time is 9.1 hr. Figure 6: Coadded image of NGC 5907. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Total exposure time is 21.4 hr. Galaxy NGC 5907 is near center of image. We consider some region of some image for which the uncertainty image over the background (i.e. unmasked) pixels is roughly constant. (This applies over the central regions of all of the images considered here.) The values of these pixels can be considered a random variable of zero mean and constant variance. We write the data covariance \(C_{I}^{2}\) at some pixel lag \(I\) as \[C_{I}^{2}=\frac{1}{N-1}\sum_{i}x_{i}x_{i-I}, \tag{1}\] where the sum extends over the \(N\) background pixels of the region. The data covariance \(C_{0}^{2}\) at zero pixel lag \[C_{0}^{2}=\frac{1}{N-1}\sum_{i}x_{i}^{2} \tag{2}\] is the pixel-to-pixel variance of the region. We further write the correlation coefficient \(\rho_{I}\) at pixel lag \(I\) as \[\rho_{I}=\frac{C_{I}^{2}}{C_{0}^{2}}. \tag{3}\] Here we consider results obtained from a \(5000\times 5000\) pix\({}^{2}\) region of the mosaic image of Figure 7 centered on NGC 5907, although similar results can of course be obtained using other regions of other images. The distribution \(\Phi(f_{\nu})\) of pixel-to-pixel energy fluxes \(f_{\nu}\) of the background pixels of the mosaic region is shown by the blue curve in Figure 8. The pixel-to-pixel variance of the mosaic region is measured to be \[C_{0}^{2}=1.569\times 10^{-4}\ \mu\mathrm{Jy}^{2}, \tag{4}\] while the median "statistical" variance \(\sigma_{S}^{2}\) of the mosaic region determined from the background pixels of the uncertainty is image is measured to be \[\sigma_{s}^{2}=2.329\times 10^{-4}\mu\mathrm{Jy}^{2}. 
\tag{5}\] The background pixels of the uncertainty image are indeed roughly constant over the mosaic region, and we use the median only to mitigate possible effects of deviant pixels. As expected, the pixel-to-pixel variance is _less_ than the median variance determined from the uncertainty image, because the drizzling process used to coadd the images combines nearby pixels, which has the effect of "smoothing" the image and thus reducing the variance. We characterize the relationship between the pixel-to-pixel variance and the median variance determined from the uncertainty image by the ratio \[r=\frac{C_{0}^{2}}{\sigma_{s}^{2}}=0.674. \tag{6}\] Gaussian distribution functions of standard deviations \(\sigma_{s}\) and \((C_{0}^{2})^{1/2}\) are shown by the orange and green curves, respectively, in Figure 8. It is clear that a standard deviation \(\sigma_{s}\) is too wide to adequately describe the observed distribution (for the reasons described above) and that a standard deviation \((C_{0}^{2})^{1/2}\) provides a better but still inadequate description of the observed distribution. Specifically, the observed distribution deviates from a Gaussian distribution function due to an extended tail of positive energy fluxes, which we attribute to sources missed by the masking procedure. A standard deviation 0.00995 \(\mu\)Jy (determined by measuring the standard deviation of the observed distribution truncated at 0.025 \(\mu\)Jy) is shown by the purple curve in Figure 8; this distribution function adequately describes the observed distribution except for the extended tail of positive energy fluxes. We conclude that pixel-to-pixel fluctuations of the mosaic region are well described by a Figure 7: Coadded mosaic of the six images of the entirety of the NGC 5866 Group. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix, and angular extent of image is \(\approx 7.0\times 3.5\) deg\({}^{2}\). Total exposure time summed over all six pointings is 122 hr. combination of a Gaussian distribution function of standard deviation \(\approx 0.01\)\(\mu\)Jy and an extended tail of positive energy fluxes due to sources missed by the masking procedure, which is prominent beyond \(\approx 2.5\) standard deviations. The correlation coefficient \(\rho_{I}\) of the mosaic region for pixel lags over the interval \(l=0\) through 1000 is shown in Figure 9. We note several results from Figure 9 as follows: (1) Neighboring pixels are highly correlated, with correlation coefficients ranging from \(\rho_{1}=0.60\) for immediately adjacent neighbors to \(\rho_{5}=0.16\) to \(\rho_{10}=0.04\) to \(\rho_{20}=0.02\). We attribute the strong correlation of neighboring pixels to the drizzling process. (2) Pixels remain correlated to a pixel lag of \(l\approx 300\), with a correlation coefficient over the range \(l=50\) to 300 of \(\rho_{I}\approx 0.005\). We attribute the correlation of pixels at pixel lags \(l=50\) to 300 to large-spatial-scale undulations of the background. And (3) pixels at pixel lags \(l\gtrsim 300\) are uncorrelated or only weakly correlated. We now consider the fluctuations attributable only to background within an aperture that encompasses \(N\) pixels. If the \(N\) pixels are uncorrelated, then the variance \(\sigma_{N}^{2}\) of the background within the aperture is \[\sigma_{N}^{2}=\sum_{i}C_{0}^{2}=NC_{0}^{2}, \tag{7}\] where the sum extends over the pixels that comprise the aperture. 
If the \(N\) pixels are correlated, then the variance is \[\sigma_{N}^{2}=\sum_{i}C_{0}^{2}+\sum_{i}\sum_{j\neq i}C_{i}^{2}\approx NC_{0} ^{2}+NC_{0}^{2}2\pi\sum_{l=1}^{n/2}\rho_{I}, \tag{8}\] where the sums over \(i\) and \(j\) extend over the pixels that comprise the aperture and the sum over \(l\) extends over the diameter \(n\sim N^{1/2}\) of the aperture. Expressing \(C_{0}^{2}\) in terms of \(\sigma_{s}^{2}\) and \(r\) via equation (6) then yields \[\sigma_{N}^{2}=\left(1+2\pi\sum_{l=1}^{n/2}\rho_{I}\right)Nr\sigma_{s}^{2}. \tag{9}\] The corresponding relationship expressed in terms of a standard deviation rather than a variance is \[\sigma_{N}=\left(1+2\pi\sum_{l=1}^{n/2}\rho_{I}\right)^{1/2}N^{1/2}r^{1/2} \sigma_{s}. \tag{10}\] Equation (10) provides the way to relate the formal statistical uncertainties of the uncertainty images (and hence the sensitives presented, e.g. in Table 2) to the actual uncertainties including effects of pixel-to-pixel correlations on small spatial scales (due to the drizzling process) and on large spatial scales (due to undulations of the background). Specifically, the ultimate effect of pixel-to-pixel correlations is to alter the standard deviation attributable only to background of an aperture that encompasses \(N\) pixels by a factor \(f_{N}\) given by \[f_{N}=\left(1+2\pi\sum_{l=1}^{n/2}\rho_{I}\right)^{1/2}r^{1/2} \tag{11}\] with respect to the value \[\sigma_{N}=N^{1/2}\sigma_{s} \tag{12}\] that is obtained by considering the uncertainty images alone. The resulting values of \(f_{N}\) measured for the mosaic region are shown (on a magnitude scale) versus angular scale \(\theta\) (i.e. expressing diameter \(n\) in angular units) in Figure 10, using the correlation coefficient \(\rho_{I}\) from Figure 9 and the ratio \(r\) from equation (6). Fluctuations on single-pixel scales are _less_ by a factor 0.82 (or \(-0.2\) mag) than the value obtained by considering the uncertainty image alone due to the drizzling process. Fluctuations of the background over \(0.5\times 0.5\) and \(1\times 1\) arcmin\({}^{2}\) apertures exceed the values obtained by considering the uncertainty image alone by around 2.0 and 2.3 mag, respectively. The values shown in Figure 10 represent _upper limits_ to the errors in field flattening and background subtraction of the mosaic region, because fluctuations in the background also arise due to scattered starlight and to undetected faint sources (including the faint sources that make up the extended tail of positive energy fluxes of the distribution of pixel-to-pixel energy fluxes of the background pixels of the mosaic regions shown in Figure 8). Further, fluctuations in the background are not necessarily simply related to sensitivity; for example, a source of diameter 0.1 arcmin might be detected despite undulations in the background on scales of 1 arcmin. A detailed Figure 8: Distributions \(\Phi(f_{\nu})\) of pixel-to-pixel energy fluxes \(f_{\nu}\) of background pixels of mosaic region. Blue curve shows observed distribution, and orange, green, and purple curves show Gaussian distribution functions of standard deviations \(\sigma_{s}\), \((C_{0}^{2})^{1/2}\), and \(0.00995\)\(\mu\)Jy, respectively. Figure 9: Correlation coefficient \(\rho_{I}\) (blue curves) together with positive and negative one standard deviation uncertainties (orange curves) of mosaic region for pixel lags over intervals \(l=0\) through 50 (left panel and left scale) and \(l=50\) to 1000 (right panel and right scale). 
accounting of all sources of fluctuations in the background will be described elsewhere. ## 5 Correction for scattered starlight Although Condor exhibits a very clean point-spread function (PSF, Lanzetta et al., 2023), scattered starlight can be a significant source of systematic noise. Hence to fully exploit the sensitivity of the images described in SS 3 to very low-surface-brightness features, it is necessary to correct for scattered starlight by (1) accurately determining the PSF on large angular scales and (2) using the resulting PSF to model and subtract the contributions of all stars within (and possibly even beyond) the field of view. Details of our method of PSF determination and subtraction will be described elsewhere, but here we present a brief summary of the procedures and results. To determine the PSF, we compare a "data" image with a "model" image, where we take the model image to be the convolution of a "sky" image with a "PSF" image. We allow for the possibility that the model image (and hence the PSF image) is expressed on a finer grid than the data image (i.e. is "subsampled" with respect to the data image). We assume that the sky consists only of stars (i.e. we mask regions around galaxies and other non-stellar sources), and hence we take the sky image to be a sum of delta functions, where the locations of the delta functions (i.e. the locations of the stars) are taken as given. We then write the comparison between the data image and the model image as a linear least squares problem, and we solve the normal equations (e.g. Press et al., 2007) to minimize \(\chi^{2}\) with respect to some parameters. In particular, if the normalizations of the delta functions (i.e. the energy fluxes of the stars) are taken as given, then we solve the normal equations for the PSF image, or if the PSF image is taken as given, then we solve the normal equations for the energy fluxes of the stars. In practice, starting with any reasonable guess for the energy fluxes of the stars and iterating between solving for the PSF image and solving for the energy fluxes of the stars, the solution quickly converges to the desired simultaneous solution. Our primary objective is to determine and subtract the PSF on large angular scales, and for this purpose, the limitations of a pixel-based approach are obvious: Near the core of the PSF, a fine pixel grid is both necessary (because the PSF exhibits rapid variations at small angles) and feasible (because observations of the PSF contain substantial signal at small angles). But moving outward from the core of the PSF, the same fine pixel grid becomes both unnecessary (because the PSF exhibits less rapid variations at larger angles) and implausible (because observations of the PSF contain less signal at larger angles). Clearly some some sort of adaptive parametrization is required, which is finer near the core of the PSF and grows increasingly coarser moving outward. Accordingly, we modify the method described above to allow arbitrary groupings of pixels on the pixel grid of the model image (and hence the PSF image) to be treated as single parameters. Specifically, we rewrite the normal equations to allow for (1) a pixellated parameter grid near the core of the PSF and (2) a circular annulus (if azimuthal symmetry is assumed) or annulus sector (if azimuthal symmetry is not assumed) parameter grid moving outward from the core. 
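To make the alternating linear solves concrete, the following is a minimal sketch. It assumes integer-pixel star positions, a single small pixellated PSF stamp, and NumPy's generic least-squares solver in place of the adaptive pixel-plus-annulus parametrization, source masking, and Gaia-based star list described above; it illustrates the idea rather than the pipeline's actual implementation.

```python
# Minimal sketch of the alternating linear solves described above: for fixed
# star positions, the model image is linear in the PSF pixels when the star
# fluxes are held fixed, and linear in the fluxes when the PSF is held fixed,
# so the two sets of parameters can be solved for in turn.
import numpy as np

def model_image(shape, stars, fluxes, psf):
    """Sum of copies of `psf` (an odd-sized square stamp) placed at integer
    (y, x) star positions and scaled by the per-star fluxes."""
    img = np.zeros(shape)
    h = psf.shape[0] // 2
    for (y, x), f in zip(stars, fluxes):
        img[y - h:y + h + 1, x - h:x + h + 1] += f * psf
    return img

def fit_psf_and_fluxes(data, stars, psf_size=11, n_iter=5):
    fluxes = np.ones(len(stars))
    h = psf_size // 2
    for _ in range(n_iter):
        # Solve for the PSF pixels, fluxes held fixed.
        A = np.zeros((data.size, psf_size * psf_size))
        for (y, x), f in zip(stars, fluxes):
            for k in range(psf_size * psf_size):
                dy, dx = divmod(k, psf_size)
                row = np.ravel_multi_index((y - h + dy, x - h + dx), data.shape)
                A[row, k] += f
        psf = np.linalg.lstsq(A, data.ravel(), rcond=None)[0].reshape(psf_size, psf_size)
        psf /= psf.sum()  # fix the PSF/flux scale degeneracy
        # Solve for the star fluxes, PSF held fixed.
        B = np.column_stack([
            model_image(data.shape, [s], [1.0], psf).ravel() for s in stars
        ])
        fluxes = np.linalg.lstsq(B, data.ravel(), rcond=None)[0]
    return psf, fluxes

# Toy usage: recover a smooth PSF and fluxes from three simulated stars.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[-5:6, -5:6]
true_psf = np.exp(-(yy ** 2 + xx ** 2) / 4.0)
true_psf /= true_psf.sum()
stars = [(20, 20), (25, 30), (40, 35)]
data = model_image((64, 64), stars, [100.0, 50.0, 80.0], true_psf)
data += rng.normal(0.0, 0.01, data.shape)
psf_fit, flux_fit = fit_psf_and_fluxes(data, stars)
print(np.round(flux_fit, 1))  # roughly [100., 50., 80.]
```

In this sketch the two solves are alternated from a simple starting guess for the fluxes, as in the text, and converge quickly for well-separated stars.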
Together these modifications optimally represent the PSF over a huge dynamic range, vastly reduce the dimensionality of the problem, and remain linear in the parameters. We emphasize that the method determines the PSF by _simultaneously_ fitting all stars in the field, so there is no requirement of incorporating only isolated stars into the analysis. In practice, we take locations and starting values of the energy fluxes of the stars from the Gaia DR3 catalog (Gaia Collaboration et al., 2017, 2018, 2021; Gaia Collaboration, 2022), and we solve for the PSF image and the energy fluxes of the stars assuming a pixellated PSF at angular radius \(\theta<20\) arcsec and an azimuthally-symmetric PSF at angular radius \(20\) arcsec \(<\theta<10\) arcmin, masking regions around galaxies and other non-stellar sources. We then subtract the model from the data, masking pixels of the result near the very cores of the stars at an isophotal flux limit. (This masking is necessary because residuals near the cores of the stars can be large compared with the very low surface brightness limits farther from the cores.) A radial cut of a representative example of the PSF determined from the mosaic image is shown in Figure 11. It is apparent from Figure 11 that the "aureole" portion of the PSF (i.e. beyond an angular radius \(\theta\approx 20\) arcsec) roughly follows a \(\theta^{-2}\) radial profile, which is similar to the radial profiles of some other telescopes used for low-surface-brightness imaging (e.g. Sandin, 2014). The processed image of a portion of the mosaic image surrounding NGC 5907 obtained by modeling and subtracting the contributions of stars in the Gaia DR3 catalog is show in two different stretches in Figure 12; Figure 12 also shows schematic representation of features described in SS 6 below. Our analysis differs from the analysis of van Dokkum et al. (2019) in that we model and subtract only contributions from stars (and the occasional galaxy) that are contained in the Gaia DR3 catalog whereas they model and subtract contributions from all "compact emission sources." It is apparent from Figure 12 that our processed images exhibit a large number of faint sources, the vast majority of which are faint, background galaxies. But some fraction of these faint sources might be associated with NGC 5907, e.g. as dwarf galaxies, globular clusters, or perhaps other types of star clusters or associations. This difference between our analysis and the analysis of van Dokkum et al. (2019) leads to some important consequences, as is described below. ## 6 Results and comparison with previous work Here we use the processed mosaic image of the region surrounding the galaxy NGC 5907 shown in Figure 12 to assess the various established, proposed, and tentative features reported by others and Figure 10: Factor \(f_{N}\) by which standard deviation of background is altered with respect to value obtained considering uncertainty images alone (on a magnitude scale) of the mosaic region versus angular scale \(\theta\). propose new features and new interpretations of some previously-reported features. ### Eastern Stream Our image of the eastern stream (which is indicated as feature 1 in green in Figure 12) through the luminance filter is consistent in location, size, shape, brightness, and overall morphology with the image of the eastern stream through the sum of the Sloan \(g^{\prime}\) and \(r^{\prime}\) filters presented by van Dokkum et al. (2019), which we established by overlaying and comparing the two images. 
In contrast, both our image and the image of van Dokkum et al. (2019) of the eastern stream are inconsistent with the image of the eastern stream through the sum of the \(R\), \(G\), \(B\), and luminance filters presented by Martinez-Delgado et al. (2010) in the sense that the portion of the stream that is maximally displaced from the disk of the galaxy (i.e. the apex of the stream) in the image of Martinez-Delgado et al. (2010) lies _interior_ to the same portion of the stream in our image and the image of van Dokkum et al. (2019), which we established by overlaying and comparing the three images. The displacement of the stream over this region between the two dichotomous sets of images amounts to \(\approx 1\) arcmin. This discrepancy in the location of the eastern stream was noted previously by van Dokkum et al. (2019). We measured the typical surface brightness through the luminance filter of the eastern stream to be \(\mu_{\rm lum}\approx 27.4\) mag arcsec\({}^{-2}\). This value may be compared with the peak surface brightness through the Sloan \(g^{\prime}\) filter of the eastern stream measured by van Dokkum et al. (2019) to be \(\mu_{g^{\prime}}=27.6\) mag arcsec\({}^{-2}\). ### Western Stream Our image of the western stream (which is indicated as feature 2 in red in Figure 12) through the luminance filter is consistent in location, size, shape, brightness, and overall morphology with the image of the western stream through the Sloan \(g^{\prime}\) and \(r^{\prime}\) filters presented by van Dokkum et al. (2019), which we established by overlaying and comparing the two images. In contrast, both our image and the image of van Dokkum et al. (2019) of the western stream are inconsistent with the image of the western stream through the \(R\), \(G\), \(B\), and luminance filters presented by Martinez-Delgado et al. (2010) in the sense that the image presented by Martinez-Delgado et al. (2010) shows only a small portion of the western stream, near where it emerges from the southern edge of the galaxy, and does not show the remainder of the stream, as it bends toward the north, which we established by overlaying and comparing the three images. This discrepancy in the morphology of the western stream was noted previously by van Dokkum et al. (2019). We measured the typical surface brightness through the luminance filter of the western stream to be \(\mu_{\rm lum}\approx 28.6\) mag arcsec\({}^{-2}\). This value may be compared with the typical surface brightness through the Sloan \(g^{\prime}\) filter of the western stream measured by van Dokkum et al. (2019) to be \(\mu_{g^{\prime}}=28.8\) mag arcsec\({}^{-2}\). Thus, consistent with results of van Dokkum et al. (2019), we find that the western stream is of surface brightness significantly lower than that of the eastern stream (by \(\approx 1.2\) mag arcsec\({}^{-2}\)). Apparently, images that fail to detect all or part of the western stream must not reach surface-brightness sensitivities of \(\approx 28.7\) mag arcsec\({}^{-2}\) over angular scales necessary to detect the stream. Our image also shows an apparent gap in the western stream due east of the galaxy. The gap is followed by a marked thickening or enhancement of the western stream, although its surface brightness does not increase significantly in this thicker region. The apparent gap in the western stream is indicated as feature 2a in purple in Figure 12. The gap extends \(\approx 70\) arcsec, which at the distance of NGC 5907 corresponds to \(\approx 6\) kpc. 
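For reference, the angular-to-physical conversions quoted here follow from the small-angle relation at the adopted distance \(d\approx 17\) Mpc; as a worked example for the \(\approx 70\) arcsec gap,

\[s\approx d\,\theta=17\ {\rm Mpc}\times\frac{70}{206265}\ {\rm rad}\approx 5.8\ {\rm kpc}\approx 6\ {\rm kpc},\]

so that \(1\) arcmin corresponds to \(\approx 5\) kpc at the distance of NGC 5907.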
### Putative Second Loop of Stellar Stream Neither our image through the luminance filter nor the image through the Sloan \(g^{\prime}\) and \(r^{\prime}\) filters presented by van Dokkum et al. (2019) show any evidence at all of the second loop of the stellar stream seen in the image through the \(R\), \(G\), \(B\), and luminance filters presented by Martinez-Delgado et al. (2010). The location of the putative second loop of the stellar stream is indicated as feature 3 in orange in Figure 12, which we determined by overlaying and tracing the feature from the image of Martinez-Delgado et al. (2010). Our image reaches a formal \(3\sigma\) surface-brightness sensitivities over \(10\times 10\) arcsec\({}^{2}\) regions of \(\approx 29.9\) mag arcsec\({}^{-2}\) (see Table 2), and van Dokkum et al. (2019) quote a \(3\sigma\) surface-brightness sensitivity of 29.4 mag arcsec\({}^{-2}\) (although they do not state the angular scale over which this limit is meant to apply). We see no plausible way for color effects to explain the discrepancy, given that the observations reported by Martinez-Delgado et al. (2010) were obtained either through the luminance filter, as were our observations, or through a "synthetic" luminance filter (formed using observations obtained through the \(R\), \(G\), and \(B\) filters). We conclude that the second loop of the stellar stream seen in the image presented by Martinez-Delgado et al. (2010) is not real and must result from some artifact of their data processing; we further suggest that the discrepancies in the location of the eastern stream and the morphology of the western stream must also result from some artifact of their data processing. Putative Remnant of Nearly Disrupted Progenitor Galaxy and Luminosity-Weighted Midpoint of Eastern Stream Our image shows a "feature" (which is indicated as feature 4a in white in Figure 12) near the location of the "density enhancement near the luminosity-weighted midpoint of the [eastern] stream" noted by van Dokkum et al. (2019). But our images resolve this feature into Figure 11: Radial cut of representative example of PSF determined according to description of § 4, normalized to unit area. Solid and dashed curves show PSF in two opposite directions. Orange line segment show \(\theta^{-2}\) radial profile. Figure 12: Processed mosaic image of region around NGC 5907 at shallower (top and middle panels) and deeper (bottom panel) stretches. Image is smoothed by Gaussian kernel of FWHM = 2.5 pix. Middle panel shows schematic representation of features described in § 6 labeled as follows: 1 (green) eastern stream, 2 (red) western stream, 2a (purple) apparent gap in western stream, 3 (orange) putative second loop of stellar stream, 4a and 4b (white) feature and clump of sources in eastern stream, 5 (maroon) linear feature terminating on patch, 6 (pink) putative extension of western stream, 7 (brown) continuation of eastern stream, 8 (blue) western "horn", 9 (turquoise) southern "spur," 10 (aqua) western "hook," and A–G (yellow) dwarf galaxies. a clump of sources, which we interpret as members of a background galaxy group or cluster rather than as the "likely remnant of a nearly disrupted progenitor galaxy" proposed by van Dokkum et al. (2019). 
Our image also shows a different clump of sources (which is indicated as feature 4b in white in Figure 12) near the "luminosity-weighted midpoint" of the eastern stream, including one relatively bright galaxy that might be a dwarf galaxy associated with NGC 5907 or might be a member of a background galaxy group or cluster. (This galaxy is included into the Gaia DR3 catalog, and our analysis described in SS 4 attempted to model and subtract it, although unsuccessfully since it is not a point source.) But in either case, it is clear from Figure 12 that galaxies (background or otherwise) or other discrete sources contribute significantly to the luminosity-weighted midpoint of the eastern stream. In particular, much of the "density enhancement" of the eastern stream found by van Dokkum et al. (2019) (i.e. the portions of their Figure 3 depicted in red) is in fact contributed by discrete sources, which we established by overlaying and comparing our Figure 12 with their Figure 3. (The discrete sources can be picked out one by one by means of this comparison.) Further, the possible asymmetry in the density enhancement of the eastern stream noted by van Dokkum et al. (2019) is in fact contributed by discrete sources, including in particular the relatively bright galaxy noted above. We conclude that the feature proposed by van Dokkum et al. (2019) as the likely remnant of a nearly disrupted progenitor galaxy is not the progenitor galaxy but is in fact a member of a background galaxy group or cluster and that the density enhancement and possible asymmetry of the density enhancement of the eastern stream noted by van Dokkum et al. (2019) is in fact contributed by discrete sources. This difference of interpretation presumably arises due to the higher angular resolution of our observations in comparison with the observations of van Dokkum et al. (2019). ### Linear Feature Terminating on Patch Our images confirm the "linear" feature emanating from the eastern stream toward the east and terminating on a "patch" of emission identified by van Dokkum et al. (2019). But our images further indicate that the feature continues past the patch toward the east and eventually terminates on another patch of emission located \(\approx 0.37\) deg away from the first patch (which itself is located \(\approx 0.67\) deg from the center of NGC 5907). Our images also further indicate that first patch is itself resolved into two roughly parallel linear segments running roughly east-west and another clump of emission toward the east. This entire structure is indicated as feature 5 in maroon in Figure 12. In total, the structure stretches \(\approx 0.85\) deg from where it emanates near the apex of the eastern stream to where it terminates on the second patch. We measured the typical surface brightness through the luminance filter of the first patch to be \(\mu_{\rm lum}\approx 28.1\) mag arcsec\({}^{-2}\) and the typical surface brightness through the luminance filter of the second patch to be \(\mu_{\rm lum}\approx 28.9\) mag arcsec\({}^{-2}\). Thus we find that both patches are of surface brightness significantly lower than that of the eastern stream and that the first patch is of surface brightness significantly higher than that of the western stream while the second patch is of surface brightness comparable to that of the western stream. 
The linear feature and the continuation of the linear feature vary significantly in brightness along their lengths, but we measured a typical surface brightness through the luminance filter of these features to be \(\mu_{\rm lum}\approx 29.7\) mag arcsec\({}^{-2}\). Thus we find that the linear feature and the continuation of the linear feature are typically of surface brightness significantly lower than that of the patches (by \(\approx 1.0\) mag arcsec\({}^{-2}\)), although we note a significant brightening of the linear feature west of the first patch, roughly midway between the first patch and the eastern stream. We measured the angular extent of the first patch to be \(\approx 530\times 240\) arcsec\({}^{2}\) and the angular extent of the second patch to be \(\approx 220\times 270\) arcsec\({}^{2}\), where the measurements apply to an isophotal contour of \(\approx 29\) mag arcsec\({}^{-2}\). We conclude that the linear feature emanating from the eastern stream toward the east and terminating on a patch identified by van Dokkum et al. (2019) are part of a yet larger structure. If this structure is at the distance of NGC 5907 (which is plausible or likely given that it appears to emanate near the apex of the eastern stream), then the first patch is located \(\approx 200\) kpc from the center of the galaxy, the second patch is located \(\approx 300\) kpc from the center of the galaxy, and the entire structure stretches \(\approx 240\) kpc from where it emanates near the apex of the eastern stream to where it terminates on the second patch. Further, the spatial extent of the first patch is \(\approx 43\times 20\) kpc\({}^{-2}\), the spatial extent of the second patch is \(\approx 18\times 22\) kpc\({}^{2}\), the absolute magnitude through the luminance filter of the first patch is \(\approx-15.4\), i.e. roughly 0.6% that of the Milky Way, and the absolute magnitude through the luminance filter of the second patch is \(\approx-14.2\), i.e. roughly 0.2% that of the Milky Way (where we take the Sloan \(g^{\prime}\) absolute magnitude of the Milky Way to be \(-21.0\), e.g. Bland-Hawthorn & Gerhard 2016). Multi-band imaging of the field surrounding NGC 5907 will be necessary to establish the nature of the patches of emission. ### Putative Extension of Western Stream Our image confirms the extension of the western stream (which is indicated as feature 6 in pink in Figure 12) tentatively identified by van Dokkum et al. (2019). This extension continues along the direction of the western stream described in SS 6.2 toward the north and then curls back south toward NGC 5907, about 0.3 deg north of the center of the galaxy. Our image further shows that the stream appears to bifurcate near its apex. We measured the typical surface brightness through the luminance filter of the extension of the western stream to be \(\mu_{\rm lum}\approx 28.9\) mag arcsec\({}^{-2}\). Thus we find that the extension of the western stream is of surface brightness lower than that of the rest of the western stream (by \(\approx 0.3\) mag arcsec\({}^{-2}\)). ### Putative Continuation of Eastern Stream Our image confirms the continuation of the eastern stream (which is indicated as feature 7 in brown in Figure 12) tentatively identified by van Dokkum et al. (2019). We measured the typical surface brightness through the luminance filter of the continuation of the eastern stream to be \(\mu_{\rm lum}\approx 29.0\) mag arcsec\({}^{-2}\). 
Thus we find that the continuation of the eastern stream is of surface brightness significantly lower than that of the bulk of the eastern stream and lower even than that of the western stream. There is some indication of a gap between the brighter bulk of the eastern stream and the fainter continuation of the eastern stream that joins up to the disk, although this gap is roughly coincident with three Gaia sources, which muddy the interpretation. ### Western "Horn" Our image reveals a new western "horn" (which is indicated as feature 8 in blue in Figure 12) emanating from the western side of the northern portion of the disk of NGC 5907 and extending to the northwest. The horn constitutes a thin, roughly linear feature of diffuse emission. We also tentatively identify a continuation of the horn that meanders from the northern tip of the horn northward by \(\approx 0.15\) deg to the extension of the western stream described in SS 6.6. We measured the typical surface brightness through the luminance filter \(\mu_{\rm lum}\) of the western horn to be \(\mu_{\rm lum}\approx 29.0\) mag arcsec\({}^{-2}\). Thus we find that the western horn is of surface brightness lower than that of the western stream. The western horn is apparent in the image through the sum of the Sloan \(g^{\prime}\) and \(r^{\prime}\) filters presented by van Dokkum et al. (2019), although these authors did not call attention to the feature. ### Southern "Spur" Our image reveals a new southern "spur" emanating from the southern portion of the western stream and continuing to the southwest. The spur comprises a band of diffuse emission of thickness comparable to the thickness of the western stream that runs almost perpendicular to the western stream. We measured the typical surface brightness through the luminance filter \(\mu_{\rm lum}\) of the southern spur to be \(\mu_{\rm lum}\approx 29.0\) mag arcsec\({}^{-2}\). Thus we find that the southern spur is of surface brightness lower than that of the western stream. The southern spur is not obviously evident in the image through the sum of the Sloan \(g^{\prime}\) and \(r^{\prime}\) filters presented by van Dokkum et al. (2019). ### Western "Hook" Our image reveals a new western "hook" (which is indicated as feature 10 in aqua in Figure 12) located \(\approx 0.68\) deg due west of the center of NGC 5907. Hence the western hook is about as far west of the galaxy as the first patch described in SS 6.5 is east of the galaxy. There is no clear and obvious connection between the hook and NGC 5907, but if the hook is at the distance of the galaxy, then it is located \(\approx 200\) kpc from the center of the galaxy. We measured the typical surface brightness through the luminance filter \(\mu_{\rm lum}\) of the western hook to be \(\mu_{\rm lum}\approx 29.0\) mag arcsec\({}^{-2}\). Thus we find that the western hook is of surface brightness lower than that of the western stream. The western hook is not covered by the image through the sum of the Sloan \(g^{\prime}\) and \(r^{\prime}\) filters presented by van Dokkum et al. (2019). ### Dwarf Galaxies Figure 12 calls attention to several sources, which are identified as sources A through G in yellow in the figure. Source B in Figure 12 is the previously-uncataloged putative dwarf galaxy located just west of the eastern stream reported by van Dokkum et al. (2019). 
The proximity of this galaxy to the eastern stream obviously suggests that the galaxy is associated with (rather than behind) NGC 5907, but without spectroscopy or multi-band imaging, it is not possible to know for sure. But interestingly, there are several _known_ galaxies in the immediate vicinity of NGC 5907--two of which are _known_ to be associated with the galaxy--that were not considered by the analysis of van Dokkum et al. (2019) because they were modeled and subtracted by their analysis as "compact emission sources." These include sources A, C, D, E, and G in Figure 12. In particular: * _Source A_: This galaxy (MCG+10-22-010) exhibits a heliocentric recession velocity \(781\pm 80\) km s\({}^{-1}\) (Falco et al., 1999) consistent with the recession velocity of NGC 5907 and a Sloan \(g^{\prime}\) magnitude \(g^{\prime}=14.987\pm 0.003\) (Adelman-McCarthy et al., 2011) and is morphologically classified as a dwarf irregular galaxy (Ann et al., 2015). At the distance of NGC 5907, the absolute magnitude of source A is \(\approx-16.2\), i.e. roughly comparable to that of the SMC. The projected impact parameter of source A to the center of NGC 5907 is \(\approx 117\) kpc and to the plane of the disk is \(\approx 99\) kpc. * _Source C_: This galaxy (LEDA 54419) exhibits a heliocentric recession velocity \(710\) km s\({}^{-1}\) (Wenger et al., 2000) consistent with the recession velocity of NGC 5907 and a Sloan \(g^{\prime}\) magnitude \(g^{\prime}=16.206\pm 0.004\) (Wenger et al., 2000) and is morphologically classified as a Magellanic irregular galaxy (Ann et al., 2015). At the distance of NGC 5907, the absolute magnitude of source C is \(\approx-14.9\), i.e. roughly 0.3 times that of the SMC. The projected impact parameter of source C to the center of NGC 5907 is \(\approx 56\) kpc and to the plane of the disk is \(\approx 21\) kpc. * _Source D_: This galaxy (2MASX J15140431+5630186) exhibits a Gaia \(G\) magnitude \(G=19.615\pm 0.008\) (Gaia Collaboration, 2022). If it is at the distance of NGC 5907, then the absolute magnitude of source D is \(\approx-11.5\), i.e. roughly 1% that of the SMC, and the projected impact parameter to the center of NGC 5907 is \(\approx 89\) kpc and to the plane of the disk is \(\approx 43\) kpc. This galaxy is of particular interest because it is located at the very terminus of the western stream (and at the starting point of the putative extension of the western stream), near the location of the thickening or enhancement of the western stream noted in § 5.2. Because the galaxy is included in the Gaia DR3 catalog, our analysis described in § 4 attempted (unsuccessfully, because it is not a point source) to model and subtract it. * _Source E_: This galaxy (LEDA 2535522) exhibits a Gaia \(G\) magnitude \(G=20.44\pm 0.01\) (Gaia Collaboration, 2022). If it is at the distance of NGC 5907, then the absolute magnitude of source E is \(\approx-10.7\), i.e. roughly 0.6% that of the SMC, and the projected impact parameter to the center of NGC 5907 is \(\approx 106\) kpc and to the plane of the disk is \(\approx 102\) kpc. As with source D, this galaxy is included in the Gaia DR3 catalog, and our analysis attempted to model and subtract it. * _Source G_: This galaxy (LEDA 2523331) exhibits a Gaia \(G\) magnitude \(G=20.58\) (Gaia Collaboration, 2022). If it is at the distance of NGC 5907, then the absolute magnitude of source G is \(\approx-10.6\), i.e. 
roughly 0.5% that of the SMC, and the projected impact parameter to the center of NGC 5907 is \(\approx 96\) kpc, and it is roughly in the plane of the disk. As with source D, this galaxy is included in the Gaia DR3 catalog, and our analysis attempted to model and subtract it. Properties of these galaxies are summarized in Table 3, which for each galaxy lists the source, name, ICRS coordinates, heliocentric recession velocity \(v_{\rm rec}\), Sloan \(g^{\prime}\) or Gaia \(G\) magnitude, morphological type, absolute Sloan \(g^{\prime}\) or Gaia \(G\) magnitude \(M\), impact parameter \(b\), and impact parameter to the plane of the disk \(b_{\rm disk}\). Source F in Figure 12 might appear at first glance to be a dwarf galaxy in close proximity to the eastern stream. But our images resolve this "source" into a number of discrete sources, which we interpret as a background galaxy group or cluster. There are several other background galaxy groups or clusters also evident in the images. We conclude that there are at least several (and possibly many more) dwarf galaxies associated with NGC 5907 that may play roles as progenitor galaxies. The few galaxies considered here are far from a complete inventory of dwarf galaxies and possible dwarf galaxies associated with NGC 5907, and as is discussed in § 4, our processed images exhibit a large number of faint sources, some fraction of which could be dwarf galaxies. Multi-object spectroscopy or multi-band imaging of faint sources in the field surrounding NGC 5907 will be necessary to identify other dwarf galaxies associated with NGC 5907. ### Possible Confusion with Galactic Cirrus To assess possible confusion with Galactic cirrus in the direction of NGC 5907, we examined (1) AKARI far-infrared all-sky survey maps at 65, 90, 140, and 165 \(\mu\)m (Doi et al., 2015) and (2) an interstellar reddening map derived from H I emission (Lenz et al., 2017). We found that at the Galactic coordinates \(l=91.58\) deg and \(b=+51.09\) deg of the galaxy, there is negligible infrared emission at any AKARI bandpass, and there is negligible interstellar reddening. We therefore consider it highly unlikely that any of the very low-surface-brightness features in the direction of the galaxy arise due to Galactic cirrus. ## 7 Summary and Discussion The results described in § 5 confirm the overall picture of the galaxy NGC 5907 and its stellar stream advanced by van Dokkum et al. (2019): the stellar stream consists of a single curved structure that stretches 220 kpc from the brighter eastern stream, across the southern edge of the galaxy, to a fainter western stream that bends to the north and then curls back south toward the galaxy. But these results also demonstrate that the situation is more subtle and complex in several respects: (1) the western stream appears to bifurcate near its apex, (2) there is an apparent gap of \(\approx 6\) kpc in the western stream due east of the galaxy, (3) there is no evidence of the remnant of a progenitor galaxy within the eastern stream, although (4) there are many other possible progenitor galaxies, including some that are quite close and at least one that is located within the western stream, (5) there is another structure that stretches 240 kpc and that contains two very large, very low-surface-brightness patches of emission, one of which was noted by van Dokkum et al. (2019) and another of which was not, and (6) there are other notable new features, including a western "horn," a southern "spur," and a western "hook." 
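To make the physical scales quoted above concrete, the following minimal sketch (ours, not part of the paper) reproduces the kind of arithmetic used throughout § 6: it converts an angular extent to a projected physical extent and an absolute magnitude to a luminosity fraction. The adopted distance of roughly 17 Mpc is our assumption, chosen only because it is consistent with the quoted numbers; it is not stated in this excerpt.

```python
import numpy as np

D_MPC = 17.0                                # assumed distance to NGC 5907 (Mpc); our assumption
KPC_PER_ARCSEC = D_MPC * 1e3 / 206265.0     # small-angle conversion at that distance

def angular_to_kpc(arcsec):
    """Projected physical size (kpc) subtended by an angle (arcsec) at the assumed distance."""
    return arcsec * KPC_PER_ARCSEC

def luminosity_fraction(m_abs, m_ref=-21.0):
    """Luminosity relative to a reference absolute magnitude (default: the adopted Milky Way value)."""
    return 10 ** (-0.4 * (m_abs - m_ref))

# First patch: ~530 x 240 arcsec^2 and M_lum ~ -15.4  ->  ~43 x 20 kpc^2 and ~0.6% of the Milky Way.
print(angular_to_kpc(530), angular_to_kpc(240))   # ~43.7, ~19.8 kpc
print(luminosity_fraction(-15.4))                 # ~0.006
```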
We consider several aspects of these results to be particularly significant as follows: First, we note that two different \(N\)-body simulations (i.e. by Martinez-Delgado et al., 2010 and by van Dokkum et al., 2019) predict two very different configurations for the stellar stream, both of which apparently run counter to observation (in one case with respect to the second loop and in the other case with respect to the remnant of a nearly disrupted progenitor galaxy). This suggests to us that the boundary conditions of both simulations are very significantly underconstrained. We propose that a correct and complete understanding of the nature and origin of the stellar stream can be obtained using \(N\)-body simulations only if additional boundary conditions can be supplied, most crucially relating to the eastern stream and to source D. Second, we are intrigued by the number and variety of stellar streams in the vicinity of NGC 5907, including the eastern stream, the western stream, the structure containing the linear feature and two patches of emission, and possibly the western hook. Given that more than 100 stellar streams are known in the vicinity of the Milky Way (e.g. Mateu, 2022), there is every reason to suspect that similar networks of stellar streams might be found around other galaxies, including NGC 5907. Third, we are struck by the apparent gap in the western stream. Gaps in stellar streams may be caused by the impacts of dark "subhalos" or satellites orbiting within the halos of massive galaxies (e.g. Helmi and Koppelman, 2016; Koppelman and Helmi, 2020). Hence the apparent gap in the western stream may be indicative of a dark subhalo or satellite in the vicinity of the galaxy. Further observations and analysis are clearly required to confirm and interpret the apparent gap. Finally, we are puzzled by the nature of the two very large, very low-surface-brightness patches of emission. If these patches are considered to be galaxies, then they are extremely low-surface-brightness galaxies; if these patches are not considered to be galaxies, then it is not clear what they are, and they presumably represent some new phenomenon with no known analog. We speculate that the presence of the patches is in some way related to the presence of the tidal stream, although the morphology of the linear feature and the patches together is vaguely reminiscent of "jellyfish" galaxies (e.g. Moretti et al., 2018) or of the young, isolated stellar systems found in the Virgo cluster (Jones et al., 2022), both of which may be formed via ram-pressure stripping of gas from a parent galaxy. There is clearly more to be learned about the galaxy NGC 5907 and its stellar streams, and we anticipate using Condor to obtain additional deep observations of NGC 5907 and the NGC 5866 Group through its complement of broad- and narrow-band filters. ## Acknowledgments This material is based upon work supported by the National Science Foundation under Grants 1910001, 2107954, and 2108234. We gratefully acknowledge Chris Mihos and an anonymous referee for very valuable comments on earlier drafts of the manuscript; the staff of Dark Sky New Mexico, including Diana Hensley, Michael Hensley, and the late Dennis Recla for their outstanding logistical and technical support; and Yuri Petrunin for crafting six superb instruments. 
This work made use of the following software: astroalign (Beroiz et al., 2020), astropy (Astropy Collaboration et al., 2013, 2018), django (Django Software Foundation, 2019), Docker (Merkel, 2014), DrizzlePac (Gonzaga et al., 2012), NoiseChisel (Akhlaghi and Ichikawa, 2015; Akhlaghi, 2019), numba (Lam et al., 2015), numpy (Harris et al., 2020), photutils (Bradley et al., 2020), scipy (Virtanen et al., 2020), SExtractor (Bertin and Arnouts, 1996).

Table 3: Properties of known galaxies in the immediate vicinity of NGC 5907.

| Source | Name | R.A. (J2000) | Dec (J2000) | \(v_{\rm rec}\) (km s\({}^{-1}\)) | Type | \(g^{\prime}\) or \(G\) Magnitude | Absolute Magnitude \(M\) | \(b\) (kpc) | \(b_{\rm disk}\) (kpc) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | MCG+10-22-010 | 15:17:25.2 | +56:39:48.5 | \(781\pm 80\) | dwarf irregular | \(14.987\pm 0.003\) | \(-16.2\) | 117 | 99 |
| C | LEDA 54419 | 15:14:47.8 | +56:27:14.8 | 710 | Magellanic irregular | \(16.206\pm 0.004\) | \(-14.9\) | 56 | 21 |
| D | 2MASX J15140431+5630186 | 15:14:04.3 | +56:30:18.7 | — | — | \(19.615\pm 0.008\) | \(-11.5\) (?) | 89 | 43 |
| E | LEDA 2535522 | 15:18:23.6 | +56:23:58.1 | — | — | \(20.44\pm 0.01\) | \(-10.7\) (?) | 106 | 102 |
| G | LEDA 2523331 | 15:16:46.5 | +56:02:16.0 | — | — | \(20.58\) | \(-10.6\) (?) | 96 | 0 |

## Data Availability All raw Condor data are available following an 18-month proprietary period. All raw and processed data described here, including the coadded images of Figures 1 through 6 and the mosaic image of Figure 7, are available on the Condor web site [https://condorarraytelecscope.org/data_access/](https://condorarraytelecscope.org/data_access/) or by contacting the corresponding author.
As part of deep imaging observations of the NGC 5866 Group, we used the Condor Array Telescope to observe, through a luminance filter, a wide region surrounding the galaxy NGC 5907, including the galaxy and its stellar stream. The stellar stream is a single curved structure consisting of a brighter eastern stream and a fainter western stream, extending 220 kpc from the eastern stream to the western stream. This contradicts claims of a second loop of the stellar stream, but its overall morphology is consistent with what has been described previously. We further find the following: (1) the western stream appears to bifurcate near its apex; (2) there is an apparent gap of about 6 kpc in the western stream east of the galaxy; and (3) contrary to previous claims, there is no remnant of a progenitor galaxy within the eastern stream. However,
2309.16156
On Steinerberger Curvature and Graph Distance Matrices
Steinerberger proposed a notion of curvature on graphs (J. Graph Theory, 2023). We show that nonnegative curvature is almost preserved under three graph operations. We characterize the distance matrix and its null space after adding an edge between two graphs. Let $D$ be the graph distance matrix and $\mathbf{1}$ be the all-one vector. We provide a way to construct graphs so that the linear system $Dx = \mathbf{1}$ does not have a solution. Let $\eta$ be the Perron eigenvector of $D.$ We provide a lower bound to $\langle\eta,\mathbf{1}\rangle$ when the graph is a tree.
Wei-Chia Chen, Mao-Pei Tsui
2023-09-28T04:06:57
http://arxiv.org/abs/2309.16156v2
# On Steinerberger curvature and graph distance matrices ###### Abstract. Steinerberger proposed a notion of curvature on graphs (J. Graph Theory, 2023). We show that nonnegative curvature is almost preserved under three graph operations. We characterize the distance matrix and its null space after adding an edge between two graphs. Let \(D\) be a graph distance matrix and \(\mathbf{1}\) be the all-one vector. We provide a way to construct graphs so that the linear system \(Dx=\mathbf{1}\) does not have a solution. Let \(\eta\) be the Perron eigenvector of \(D.\) We provide a lower bound to \(\langle\eta,\mathbf{1}\rangle\) when the graph is a tree. Key words and phrases: Graph, Curvature, Distance Matrix, Perron-Frobenius 2020 Mathematics Subject Classification: 05C12, 05C50 W.-C. Chen and M.-P. Tsui are supported by NSTC grant 109-2115-M-002-006. 4. We show that if two graphs have the property that \(Dx=\mathbf{1}\) has no solution, then after merging them at a vertex, the new graph has the same property. 5. We provide a lower bound to \(\langle\eta,\mathbf{1}\rangle\) involving the number of leaves when the graph is a tree. ### Definition Let \(G=(V,E)\) be a finite, connected graph. The curvature proposed by Steinerberger in [13] is a measure \(\mu:V\to\mathbb{R}\) so that for every vertex \(u\in V,\) we have \[\sum_{v\in V}d(u,v)\mu(v)=|V|,\] where \(d(u,v)\) is the length of a shortest path from \(u\) to \(v.\) Equivalently, if the vertices are \(V=\{v_{i}:1\leq i\leq n\},\) by considering the vector \((\mu(v_{1}),...,\mu(v_{n})),\) the curvature of a graph is a vector \(w\) satisfying \[Dw=n\cdot\mathbf{1},\] where \(D_{ij}=d(v_{i},v_{j})\) is the distance matrix of the graph. ## 2. Main Results ### Invariance of Total Curvature and Bonnet-Myers Sharpness The following property of the curvature was proved by Steinerberger as a consequence of von Neumann's Minimax theorem. Inspired by his remarks, we simplify the proof by using linear algebra. **Theorem 1** ([13]).: _Let \(G\) be a connected graph. Suppose there are \(w_{1},w_{2}\in\mathbb{R}^{n}_{\geq 0}\) so that \(Dw_{1}=Dw_{2}=n\cdot\mathbf{1}.\) Then \(||w_{1}||_{1}=||w_{2}||_{1}.\)_ Proof.: Since \(Dw_{1}=n\cdot\mathbf{1},\) we have \(\mathbf{1}\in\operatorname{Im}(D)=(\operatorname{null}D^{t})^{\perp}=(\operatorname{null}D)^{\perp}.\) Since \(D(w_{1}-w_{2})=\mathbf{0},\) we have \(\langle w_{1}-w_{2},\mathbf{1}\rangle=0.\) Therefore, we get \[||w_{1}||_{1}=\langle w_{1},\mathbf{1}\rangle=\langle w_{2},\mathbf{1}\rangle=||w_{2}||_{1}.\] From the proof above, if we relax the assumption to \(w_{1},w_{2}\in\mathbb{R}^{n},\) we still have \(\langle w_{1},\mathbf{1}\rangle=\langle w_{2},\mathbf{1}\rangle.\) The discrete Bonnet-Myers theorem in [13] states that if \(G\) has a nonnegative curvature \(w\) so that \(Dw=n\cdot\mathbf{1}\) with \(K=\min_{i}w_{i}\geq 0,\) then \[\operatorname{diam}G\leq\frac{2n}{||w||_{l^{1}}}\leq\frac{2}{K}.\] In addition, if \(\operatorname{diam}G\cdot K=2,\) then \(G\) has a constant curvature. Inspired by [3], we find that the Bonnet-Myers sharpness will be preserved under the Cartesian product. **Proposition**.: _Let \(G_{1},G_{2}\) be connected graphs with curvatures bounded below by \(K_{1},K_{2}\geq 0,\) respectively. 
Suppose \(G_{1},G_{2}\) are discrete Bonnet-Myers sharp, i.e., \(\operatorname{diam}(G_{1})\cdot K_{1}=\operatorname{diam}(G_{2})\cdot K_{2}=2.\) Then the Cartesian product graph \(G_{1}\square G_{2}\) is discrete Bonnet-Myers sharp._ Proof.: The discrete Bonnet-Myers theorem above implies that \(G_{1}\) and \(G_{2}\) have constant curvature \(K_{1},K_{2}>0.\) By [13, Proposition 2], \(G_{1}\square G_{2}\) has constant curvature \(K>0\) and \[K=(\frac{1}{K_{1}}+\frac{1}{K_{2}})^{-1}=\frac{K_{1}K_{2}}{K_{1}+K_{2}}.\] Since \(\operatorname{diam}(G_{1}\square G_{2})=\operatorname{diam}G_{1}+\operatorname{diam}G _{2}\), we have \[\operatorname{diam}(G_{1}\square G_{2})\cdot K=2.\] ### Bridging, Merging, and Cutting Graphs Nonnegative curvature will be preserved except for at most two vertices under three basic graph operations. Let \(G_{1}\) and \(G_{2}\) be two graphs whose distance matrices are \(D_{1}\) and \(D_{2}\), respectively. Assume that \(G_{i}\) has \(n_{i}\) vertices for \(i=1,2.\) We can create a larger graph \(G\) by adding an edge \(e\) between them. We can also obtain a graph \(H\) by performing an edge contraction on \(e\) in \(G\). We say that \(H\) is obtained by _merging \(G_{1}\) and \(G_{2}\) at a vertex_. **Theorem 2** (Bridging Graphs).: _Suppose \(G_{1}\) and \(G_{2}\) have nonnegative curvature, namely, \(D_{i}w_{i}=n_{i}\cdot\mathbf{1}\) holds for some \(w_{i}\in\mathbb{R}_{\geq 0}^{n_{i}}\) for \(i=1,2\). Then the graph \(G\) obtained by adding an edge \(e=\{u,v\}\) between \(G_{1}\) and \(G_{2}\) has a curvature nonnegative everywhere except for the two vertices \(u\) and \(v\)._ As we will show, if \(w_{1},w_{2}\) are curvatures of \(G_{1}\) and \(G_{2}\), respectively, then the curvature of \(G\) at \(u\) and at \(v\) are \[\frac{||w_{2}||_{1}(n_{1}+n_{2})}{n_{1}||w_{2}||_{1}+n_{2}||w_{1}||_{1}+\frac{ 1}{2}||w_{1}||_{1}||w_{2}||_{1}}\cdot((w_{1})_{n_{1}}-\frac{1}{2}||w_{1}||_{1}) \tag{2.1}\] and \[\frac{||w_{1}||_{1}(n_{1}+n_{2})}{n_{1}||w_{2}||_{1}+n_{2}||w_{1}||_{1}+\frac{ 1}{2}||w_{1}||_{1}||w_{2}||_{1}}\cdot((w_{2})_{1}-\frac{1}{2}||w_{2}||_{1}), \tag{2.2}\] respectively. The curvature of \(G\) at the two vertices \(u\) and \(v\) can be negative. For example, consider adding an edge between two cycles \(C_{3}\). The new graph has a unique curvature \[w=(\frac{12}{11},\frac{12}{11},\frac{-6}{11},\frac{-6}{11},\frac{12}{11}, \frac{12}{11}).\] It remains unclear to the authors whether the curvature of \(G\) at \(u\) and at \(v\) are always nonpositive. Is there a graph with nonnegative curvature \(w\), i.e., \(Dw=n\cdot\mathbf{1}\), such that \(w_{i}>\frac{1}{2}||w||_{1}\) for some \(i\)? More generally, is it true that if a graph with a bridge admits a curvature, then the curvature at the vertices of the bridge are always nonpositive? Figure 1. Adding an edge between the complete graph \(K_{4}\) and the cycle \(C_{6}\). **Corollary 1**.: _Assume that \(G^{\prime}\) has constant curvature \(K>0\) and \(n=|V(G^{\prime})|.\) Let \(G\) be the graph obtained by adding an edge between two copies of \(G^{\prime}\). The curvature of \(G\) has value \(\frac{(2-n)2K}{4+K}<0\) at the vertices belonging to the edge and \(\frac{4K}{4+K}>0\) at all the other vertices._ The nonnegativeness of curvature will be preserved except for one vertex when we merge two graphs at this vertex. **Theorem 3** (Merging Graphs).: _Suppose \(G_{1}\) and \(G_{2}\) have nonnegative curvature so that \(D_{i}w_{i}=n_{i}\cdot\mathbf{1}\) for some \(w_{i}\in\mathbb{R}_{\geq 0}^{n_{i}}\). 
Then the graph \(H\) obtained by adding an edge between \(G_{1}\) and \(G_{2}\) and performing an edge contraction on this edge has a curvature nonnegative everywhere except for the vertex of the contracted edge._ The proofs of the above theorems are inspired by Bapat's work [1]. By using induction and decomposing the distance matrix into smaller ones, Bapat proves that for a tree, the distance matrix satisfies the equation \[D\tau=(n-1)\cdot\mathbf{1},\] where \(\tau=\mathbf{2}-(\deg(v_{1}),...,\deg(v_{n}))^{t}\). We will relate the distance matrix of the larger graphs \(G\) and \(H\) to the distance matrices of \(G_{1}\) and \(G_{2}\) in our proof. The following theorem states that nonnegative curvature will be preserved when we remove a bridge from a graph. **Theorem 4** (Cutting Graphs).: _Suppose \(G\) is a connected graph containing a bridge \(e\). Let \(G_{1}\) and \(G_{2}\) be the components after removing \(e\) from \(G\). If \(G\) has a nonnegative curvature then \(G_{1}\) and \(G_{2}\) have a nonnegative curvature. If \(G\) has a constant curvature then each \(G_{i}\) has a curvature that is constant except at its vertex belonging to \(e\)._ Figure 3. Merging the complete graph \(K_{4}\) and the cycle \(C_{6}\) at a vertex. Figure 2. Adding an edge between two copies of \(K_{5}\). ### Null Space of Graph Distance Matrix In the previous section, we created a new graph \(G\) by adding an edge between two graphs \(G_{1}\) and \(G_{2}\). In this section, we give a characterization of the null space of the distance matrix of \(G\). **Theorem 5**.: _Let \(G_{1},G_{2}\) be two connected graphs with \(n_{1}\) and \(n_{2}\) vertices, respectively. Let \(G\) be the graph obtained by adding an edge between \(G_{1}\) and \(G_{2}\). Suppose \(D_{G},D_{1},D_{2}\) are the distance matrices of \(G,G_{1},\) and \(G_{2}\), respectively. Then we have_ \[\operatorname{null}D_{G}=\operatorname{null}D_{1}\oplus\operatorname{null}D_{2}\] _and_ \[\dim\operatorname{null}D_{G}=\dim\operatorname{null}D_{1}+\dim\operatorname{null}D_{2},\] _where we canonically embed \(\operatorname{null}D_{i}\) to \(\mathbb{R}^{n_{1}+n_{2}}\) by augmenting zeros._ This implies that \[\operatorname{rank}D_{G}=\operatorname{rank}D_{1}+\operatorname{rank}D_{2}.\] ### Nonexistence of Curvature A necessary condition for the curvature to have desirable geometric properties is that the linear system \(Dx=\mathbf{1}\) has a solution. Steinerberger raised the following problem. **Problem [14].** It seems that for most graphs, the linear system \(Dx=\mathbf{1}\) tends to have a solution. Why is that? He gave a sufficient condition for \(Dx=\mathbf{1}\) to have a solution. **Proposition 1** ([14]).: _Suppose \(D\in\mathbb{R}_{\geq 0}^{n\times n}\) has eigenvalues \(\lambda_{1}>0\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) and eigenvector \(D\eta=\lambda_{1}\eta.\) If_ \[1-\langle\eta,\frac{1}{\sqrt{n}}\rangle^{2}<\frac{|\lambda_{2}|}{\lambda_{1}-\lambda_{2}}, \tag{2.3}\] _then the linear system \(Dx=\mathbf{1}\) has a solution._ The proof is correct. However, this condition degenerates to the trivial condition of whether \(D\) is invertible or not. Indeed, if \(\lambda_{1}>0>\lambda_{2}\geq\cdots\geq\lambda_{n}\), then \(D\) is invertible, which implies that \(Dx=\mathbf{1}\) has a solution. If \(\lambda_{1}>0=\lambda_{2}\geq\cdots\geq\lambda_{n}\) then the right-hand side of inequality 2.3 is 0. The Cauchy-Schwarz inequality then implies that condition 2.3 will never be satisfied. 
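As a concrete illustration of these notions (ours, not from the paper), the following sketch computes the distance matrix of a small graph, solves \(Dw=n\cdot\mathbf{1}\) in the least-squares sense, and checks whether \(\mathbf{1}\) lies in the image of \(D\), i.e. whether \(Dx=\mathbf{1}\) is solvable. It assumes the networkx and numpy libraries.

```python
import networkx as nx
import numpy as np

def distance_matrix(G):
    """Return the graph distance matrix D with D[i, j] = d(v_i, v_j)."""
    nodes = list(G.nodes())
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    n = len(nodes)
    D = np.zeros((n, n))
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            D[i, j] = lengths[u][v]
    return D

def curvature(G):
    """Least-squares solution of D w = n * 1, plus a flag indicating whether
    the system is actually solvable (i.e. whether 1 lies in the image of D)."""
    D = distance_matrix(G)
    n = D.shape[0]
    w, *_ = np.linalg.lstsq(D, n * np.ones(n), rcond=None)
    solvable = np.allclose(D @ w, n * np.ones(n))
    return w, solvable

# The cycle C_6 has a constant curvature; graphs built as in Theorem 6 below
# instead yield solvable == False.
w, ok = curvature(nx.cycle_graph(6))
print(ok, w)
```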
By merging two graphs at a vertex, we can create graphs so that \(Dx=\mathbf{1}\) does not have a solution. **Theorem 6**.: _Let \(G_{1}\) and \(G_{2}\) be two connected graphs so that \(D_{i}x=\mathbf{1}\) does not have a solution. Let \(H\) be obtained by adding an edge between \(G_{1}\) and \(G_{2}\), then performing an edge contraction on this edge. If \(D_{H}\) is the distance matrix of \(H\) then_ \[D_{H}x=\mathbf{1}\] _does not have a solution._ We use Matlab to generate 10000 Erdos-Renyi random graphs \(G(n,p)\), with parameters \(n=50\) and \(p=1/2.\) We find that for each graph we generated, both the adjacency matrix and the distance matrix have full rank. Let \(Q_{n}\) be the adjacency matrix of a random graph, where self-loops are allowed. In other words, the upper triangular entries and the diagonal entries of \(Q_{n}\) are independent Bernoulli random variables. In their work [2], Costello, Tao and Vu showed that \(Q_{n}\) is invertible with probability 1 as \(n\to\infty\). It was shown in [8] that with probability 1, the distance matrix of \(G(n,p)\) is invertible as \(n\to\infty\). ### Perron Eigenvector of Distance Matrix In his work [14], Steinerberger proves that if \(\eta\) is the Perron eigenvector of the distance matrix of a graph (the first eigenvector whose entries are nonnegative), then \(\langle\eta,\mathbf{1}\rangle^{2}\geq\frac{n}{2},\) where \(n\) is the number of vertices. We provide a lower bound when the graph is a tree involving the number of leaves. **Proposition 2**.: _Let \(T\) be a tree with \(n\) vertices and \(l\) leaves. Let \(D\) be its distance matrix, \(\lambda\) be its largest positive eigenvalue (Perron root), and \(\eta\) be the Perron eigenvector of \(D\) with \(||\eta||_{2}=1.\) Then_ \[\langle\eta,\mathbf{1}\rangle^{2}>\frac{n}{2}(\frac{\lambda}{\lambda-l+1})+ \frac{n-l-1}{\lambda-l+2}.\] **Example**. The star graph with \(n\) vertices has \(l=n-1\) leaves. The eigenvalue estimate of the Perron root (see for example, [10, Theorem 8.1.22] and [16, Corollary 7]) gives \[\frac{2(n-1)^{2}}{n}=\frac{\sum_{i,j}D_{ij}}{n}\leq\lambda\leq\max_{i}\sum_{j= 1}^{n}D_{ij}=2n-3.\] Then the proposition above gives \[\langle\eta,\mathbf{1}\rangle^{2}>\frac{(n-1)^{2}}{n-1}=n-1.\] ## 3. Proofs ### Proof of Theorem 2 Proof.: Let \(V(G_{1})=\{u_{i}:1\leq i\leq n_{1}\}\) and \(V(G_{2})=\{v_{j}:1\leq j\leq n_{2}\}.\) The main observation is that if \(u_{i}\in V(G_{1})\) and \(v_{j}\in V(G_{2}),\) then the shortest path from \(u_{i}\) to \(v_{j}\) has to pass through the edge \(\{u,v\}.\) Relabel the vertices so that \(u=u_{n_{1}}\) is the last vertex of \(G_{1}\) and \(v=v_{1}\) is the first vertex of \(G_{2}.\) The observation implies \[d_{G}(u_{i},v_{j})=d_{G_{1}}(u_{i},u_{n_{1}})+1+d_{G_{2}}(v_{1},v_{j})\] for \(1\leq i\leq n_{1},1\leq j\leq n_{2}.\) Let \(y\) be the last column of \(D_{1}\) and \(z\) be the first column of \(D_{2}.\) In other words, \[y_{i} =d_{G_{1}}(u_{i},u_{n_{1}})\] \[z_{j} =d_{G_{2}}(v_{1},v_{j}).\] If \(D_{G}\) is the distance matrix of \(G,\) we can write \[D_{G}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}\mathbf{1}^{t}+\mathbf{ 1}z^{t}\\ \mathbf{1}y^{t}+\mathbf{1}\mathbf{1}^{t}+z\mathbf{1}^{t}&D_{2}\end{bmatrix}.\] Let \(\alpha,s\in\mathbb{R}\) be chosen later. Let \(e_{n_{1}},e_{n_{1}+1}\in\mathbb{R}^{n_{1}+n_{2}}\) be the \(n_{1}\)-th and the \((n_{1}+1)\)-th standard coordinate vectors, respectively. Define \[w=\begin{bmatrix}\alpha w_{1}\\ w_{2}\end{bmatrix}+se_{n_{1}}+se_{n_{1}+1}. 
\tag{3.4}\] Then \[D_{G}w=\begin{bmatrix}\alpha n_{1}\mathbf{1}+y\mathbf{1}^{t}w_{2}+\mathbf{1}\mathbf{1}^{t}w_{2}+\mathbf{1}z^{t}w_{2}\\ \alpha\mathbf{1}y^{t}w_{1}+\alpha\mathbf{1}\mathbf{1}^{t}w_{1}+\alpha z\mathbf{1}^{t}w_{1}+n_{2}\mathbf{1}\end{bmatrix}+\begin{bmatrix}sy\\ s(z+\mathbf{1})\end{bmatrix}+\begin{bmatrix}s(y+\mathbf{1})\\ sz\end{bmatrix}\] since \(z_{1}=y_{n_{1}}=0.\) By looking at the \(n_{1}\)-th row and the first row of \(D_{i}w_{i}=n_{i}\cdot\mathbf{1},\) we have \(y^{t}w_{1}=n_{1}\) and \(z^{t}w_{2}=n_{2}.\) Therefore, \[D_{G}w=\left[\begin{matrix}(\alpha n_{1}+\mathbf{1}^{t}w_{2}+n_{2}+s)\mathbf{1}+(2s+\mathbf{1}^{t}w_{2})y\\ (\alpha n_{1}+n_{2}+\alpha\mathbf{1}^{t}w_{1}+s)\mathbf{1}+(2s+\alpha\mathbf{1}^{t}w_{1})z\end{matrix}\right].\] Define \[s=\frac{-\mathbf{1}^{t}w_{2}}{2},\alpha=\frac{\mathbf{1}^{t}w_{2}}{\mathbf{1}^{t}w_{1}}>0.\] Note that since \(\mathbf{1}^{t}w_{1}>0,\) the number \(\alpha\) is well-defined. Then \(2s=-\mathbf{1}^{t}w_{2}=-\alpha\mathbf{1}^{t}w_{1}.\) Thus, we get \[D_{G}w=(\alpha n_{1}+\mathbf{1}^{t}w_{2}+n_{2}+s)\mathbf{1}=(\alpha n_{1}+n_{2}+\frac{\mathbf{1}^{t}w_{2}}{2})\mathbf{1}.\] This implies \(G\) admits a curvature after scaling. We have \[\alpha n_{1}+n_{2}+\frac{\mathbf{1}^{t}w_{2}}{2}>0.\] Therefore, \(G\) admits a curvature nonnegative everywhere except at the vertices \(u_{n_{1}}\) and \(v_{1}.\) Equations 2.1 and 2.2 follow from our construction of \(w\). The corollary can be proved by plugging in \(\alpha=1\) and \(s=-\frac{nK}{2}\) in equation 3.4. ### Proof of Theorem 3 The idea is the same as the proof of Theorem 2. However, the analysis needs to be more careful. Proof.: Write \(V(G_{1})=\{u_{1},...,u_{n_{1}}\}\) and \(V(G_{2})=\{v_{1},...,v_{n_{2}}\}\) so that the edge added and then contracted is \(\{u_{n_{1}},v_{1}\}.\) Thus, \(u_{n_{1}}\) and \(v_{1}\) will be the identical vertex in \(H.\) Let \(y\in\mathbb{R}^{n_{1}}\) be the last column of \(D_{1}\) and \(z\in\mathbb{R}^{n_{2}-1}\) be the first column of \(D_{2}\) without the first entry. Namely, \[y=\left[\begin{matrix}d_{G_{1}}(u_{1},u_{n_{1}})\\ \vdots\\ d_{G_{1}}(u_{n_{1}},u_{n_{1}})\end{matrix}\right],z=\left[\begin{matrix}d_{G_{2}}(v_{1},v_{2})\\ \vdots\\ d_{G_{2}}(v_{1},v_{n_{2}})\end{matrix}\right].\] Let \(w\) and \(g\) be nonnegative vectors satisfying \(D_{1}w=n_{1}\cdot\mathbf{1}\) and \(D_{2}g=n_{2}\cdot\mathbf{1}.\) Let \(\bar{g}=(g_{2},...,g_{n_{2}}),\) and \(\bar{D_{2}}\) be the matrix obtained by removing the first column and the first row of \(D_{2}.\) Thus, \[D_{2}=\left[\begin{matrix}0&z^{t}\\ z&\bar{D_{2}}\end{matrix}\right].\] The equation \(D_{2}g=n_{2}\cdot\mathbf{1}\) gives \(z^{t}\bar{g}=n_{2}\) and \(\bar{D_{2}}\bar{g}=n_{2}\mathbf{1}-g_{1}z\). Similar to the proof of Theorem 2, the shortest path in \(H\) between \(u_{i}\) and \(v_{j}\) has to pass through the common vertex \(u_{n_{1}}=v_{1}.\) We thus have \[d_{H}(u_{i},v_{j})=d_{G_{1}}(u_{i},u_{n_{1}})+d_{G_{2}}(v_{1},v_{j})=y_{i}+z_{j-1}\] for \(1\leq i\leq n_{1}\) and \(2\leq j\leq n_{2}.\) In addition, \[d_{H}(u_{i},u_{j}) =d_{G_{1}}(u_{i},u_{j})\] \[d_{H}(v_{i},v_{j}) =d_{G_{2}}(v_{i},v_{j})\] hold for all \(i,j.\) Therefore, we can write the distance matrix of \(H\) as \[D_{H}=\left[\begin{matrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}z^{t}\\ \mathbf{1}y^{t}+z\mathbf{1}^{t}&\bar{D_{2}}\end{matrix}\right]\in\mathbb{R}^{(n_{1}+n_{2}-1)\times(n_{1}+n_{2}-1)}.\] Let \(\alpha,s\in\mathbb{R}\) be chosen later. 
Define the potential candidate of the curvature \[w^{\prime}=\begin{bmatrix}\alpha w\\ \mathbf{0}_{n_{2}-1}\end{bmatrix}+\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \bar{g}\end{bmatrix}+(s+g_{1})e_{n_{1}}\in\mathbb{R}^{n_{1}+n_{2}-1},\] where \(e_{n_{1}}\in\mathbb{R}^{n_{1}+n_{2}-1}\) is the \(n_{1}\)-th standard coordinate vector. Then \[D_{H}w^{\prime} =\begin{bmatrix}\alpha n_{1}\mathbf{1}\\ \alpha\mathbf{1}y^{t}w+\alpha z\mathbf{1}^{t}w\end{bmatrix}+\begin{bmatrix} y\mathbf{1}^{t}\bar{g}+\mathbf{1}z^{t}\bar{g}\\ n_{2}\mathbf{1}-g_{1}z\end{bmatrix}+(s+g_{1})\begin{bmatrix}y\\ z\end{bmatrix}\] \[=\begin{bmatrix}(\alpha n_{1}+z^{t}\bar{g})\mathbf{1}\\ (\alpha y^{t}w+n_{2})\mathbf{1}\end{bmatrix}+\begin{bmatrix}(\mathbf{1}^{t} \bar{g}+s+g_{1})y\\ (\alpha\mathbf{1}^{t}w+s)z\end{bmatrix}.\] Note that \(z^{t}\bar{g}=n_{2}\) and \(y^{t}w=n_{1}.\) Set \[s =-g_{1}-\mathbf{1}^{t}\bar{g}=-\mathbf{1}^{t}g,\] \[\alpha =\frac{-s}{\mathbf{1}^{t}w}=\frac{\mathbf{1}^{t}g}{\mathbf{1}^{t }w}.\] The fact that \(w,g\) are nonnegative curvature of \(G_{1}\) and \(G_{2},\) respectively, implies \(\mathbf{1}^{t}w>0\) and \(\mathbf{1}^{t}g>0\). Thus, \(\alpha>0\) is well-defined. We then have \[D_{H}w^{\prime}=(\alpha n_{1}+n_{2})\mathbf{1}.\] Thus, we have \[D_{H}(\frac{n_{1}+n_{2}-1}{\alpha n_{1}+n_{2}}w^{\prime})=(n_{1}+n_{2}-1)\cdot \mathbf{1}.\] This implies \(H\) admits a curvature nonnegative everywhere except at the common vertex \(u_{n_{1}}=v_{1}\). From our construction, the curvature of \(H\) at the common vertex \(u_{n_{1}}=v_{1}\) is \[\frac{||g||_{1}(w)_{n_{1}}-||w||_{1}||\bar{g}||_{1}}{||g||_{1}n_{1}+||w||_{1}n _{2}}\cdot(n_{1}+n_{2}-1).\] ### Proof of Theorem 4 Proof.: Let \(D_{i}\) be the distance matrices of \(G_{i}\) for \(i=1,2.\) Write \(V(G_{1})=\{u_{1},...,u_{n_{1}}\}\) and \(V(G_{2})=\{v_{1},...v_{n_{2}}\}\) so that the bridge is \(e=\{u_{n_{1}},v_{1}\}.\) Since \(G\) has a nonnegative curvature, we have \[D_{G}w=(n_{1}+n_{2})\cdot\mathbf{1}\] for some \(w\in\mathbb{R}_{\geq 0}^{n_{1}+n_{2}}.\) Write \[w=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix},\] where \(w_{i}\in\mathbb{R}_{\geq 0}^{n_{i}}.\) Let \(y\) be the last column of \(D_{1}\) and \(z\) be the first column of \(D_{2},\) as in the proof of Theorem 2. Then \[D_{G}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}\mathbf{1}^{t}+\mathbf{1} z^{t}\\ \mathbf{1}y^{t}+\mathbf{1}\mathbf{1}^{t}+z\mathbf{1}^{t}&D_{2}\end{bmatrix}.\] Since \(D_{G}w=(n_{1}+n_{2})\cdot\mathbf{1},\) we get \[D_{1}w_{1}+y\mathbf{1}^{t}w_{2}+\mathbf{1}\mathbf{1}^{t}w_{2}+ \mathbf{1}z^{t}w_{2} =(n_{1}+n_{2})\cdot\mathbf{1} \tag{3.5}\] \[\mathbf{1}y^{t}w_{1}+\mathbf{1}\mathbf{1}^{t}w_{1}+z\mathbf{1}^{t }w_{1}+D_{2}w_{2} =(n_{1}+n_{2})\cdot\mathbf{1}. \tag{3.6}\] The last row of equation 3.5, the first row of equation 3.6, together with \(y_{n_{1}}=z_{1}=0\) give \[y^{t}w_{1}+(z+\mathbf{1})^{t}w_{2} =n_{1}+n_{2} \tag{3.7}\] \[(\mathbf{1}+y)^{t}w_{1}+z^{t}w_{2} =n_{1}+n_{2}. \tag{3.8}\] Define \[\bar{w}_{1} =w_{1}+(\mathbf{1}^{t}w_{2})e_{n_{1}}\] \[\bar{w}_{2} =w_{2}+(\mathbf{1}^{t}w_{1})e_{1},\] where \(e_{n_{1}},e_{1}\) are the \(n_{1}\)-th and the first coordinate vectors in \(\mathbb{R}^{n_{1}}\) and \(\mathbb{R}^{n_{2}}\), respectively. Then \[D_{1}\bar{w}_{1} =D_{1}w_{1}+\mathbf{1}^{t}w_{2}y=(n_{1}+n_{2}-\mathbf{1}^{t}w_{2}- z^{t}w_{2})\mathbf{1}=(y^{t}w_{1})\mathbf{1}\] \[D_{2}\bar{w}_{2} =D_{2}w_{2}+\mathbf{1}^{t}w_{1}z=(n_{1}+n_{2}-y^{t}w_{1}-\mathbf{ 1}^{t}w_{1})\mathbf{1}=(z^{t}w_{2})\mathbf{1},\] by equations 3.5 to 3.8. 
We claim that \(y^{t}w_{1},z^{t}w_{2}>0.\) Suppose \(y^{t}w_{1}=0.\) Since \(w=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix}\) satisfies \[D_{G}w=(n_{1}+n_{2})\cdot\mathbf{1}\] and \(D_{G}\) is a nonnegative matrix, we have \[\mathbf{1}^{t}w=\mathbf{1}^{t}w_{1}+\mathbf{1}^{t}w_{2}>0.\] Note that equations 3.7 and 3.8 imply \(\mathbf{1}^{t}w_{1}=\mathbf{1}^{t}w_{2}.\) Therefore, \(\mathbf{1}^{t}w_{1}=\mathbf{1}^{t}w_{2}>0.\) Since \(w_{1}\in\mathbb{R}_{\geq 0}^{n_{1}}\) and \(y_{n_{1}}=0,\) we have \[0<\mathbf{1}^{t}w_{1}\leq y^{t}w_{1}+(w_{1})_{n_{1}}=(w_{1})_{n_{1}}.\] This implies \(w_{1}=ce_{n_{1}}\) for \(c=(w_{1})_{n_{1}}>0.\) Plugging this into equation 3.5, we get \[2cy=(n_{1}+n_{2}-\mathbf{1}^{t}w_{2}-z^{t}w_{2})\cdot\mathbf{1}=(y^{t}w_{1})\cdot\mathbf{1}=\mathbf{0},\] by equation 3.7. This implies \(c=0\) and \(w_{1}=\mathbf{0},\) contradicting \(\mathbf{1}^{t}w_{1}>0.\) A similar argument shows that \(z^{t}w_{2}>0.\) Consider \[w_{1}^{\prime} =\frac{n_{1}}{y^{t}w_{1}}\bar{w}_{1}\] \[w_{2}^{\prime} =\frac{n_{2}}{z^{t}w_{2}}\bar{w}_{2}.\] Then \(D_{i}w_{i}^{\prime}=n_{i}\cdot\mathbf{1}\) for \(i=1,2.\) Thus, \(G_{i}\) has a nonnegative curvature for \(i=1,2.\) If both \(w_{1}\) and \(w_{2}\) are constant, then by construction, \(w_{1}^{\prime}\) and \(w_{2}^{\prime}\) are constant everywhere except at vertices \(u_{n_{1}}\) and \(v_{1},\) respectively. ### Proof of Theorem 5 We first need a lemma. **Lemma 1**.: _Let \(G\) be a graph admitting a nonnegative curvature \(w\in\mathbb{R}_{\geq 0}^{n},\) i.e., \(Dw=n\cdot\mathbf{1}.\) Suppose \(Dg=\mathbf{1}\) for some \(g\in\mathbb{R}^{n}.\) Then \(\mathbf{1}^{t}g>0.\)_ Proof.: If the null space of \(D\) is trivial, then \(g=\frac{1}{n}w\in\mathbb{R}_{\geq 0}^{n}.\) Therefore, \(\mathbf{1}^{t}g>0.\) Otherwise, let \(z_{1},...,z_{k}\) be a basis of \(\operatorname{null}(D).\) Then we can write \[g=\frac{w}{n}+c_{1}z_{1}+\cdots+c_{k}z_{k}\] for some coefficients \(c_{i}\). Thus, \[\mathbf{1}^{t}g=\mathbf{1}^{t}\frac{w}{n}+c_{1}\mathbf{1}^{t}z_{1}+\cdots+c_{k}\mathbf{1}^{t}z_{k}=\mathbf{1}^{t}\frac{w}{n}>0,\] where we use the fact that \(\mathbf{1}\in\operatorname{Im}(D)=(\operatorname{null}D)^{\perp}.\) Proof of Theorem 5.: As in the proof of Theorem 2, we write \[D_{G}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}\mathbf{1}^{t}+\mathbf{1}z^{t}\\ \mathbf{1}y^{t}+\mathbf{1}\mathbf{1}^{t}+z\mathbf{1}^{t}&D_{2}\end{bmatrix},\] where \(y\) is the last column of \(D_{1},\) and \(z\) is the first column of \(D_{2}.\) Since \(G_{1}\) and \(G_{2}\) are nonnegatively curved, \(\mathbf{1}\in\operatorname{Im}D_{i}=(\operatorname{null}D_{i})^{\perp}.\) In addition, by Theorem 2, \(G\) admits a curvature. This implies \[\mathbf{1}\in\operatorname{Im}D_{G}=(\operatorname{null}D_{G})^{\perp}.\] If \(\eta\in\operatorname{null}D_{1},\) then \(y^{t}\eta=\mathbf{1}^{t}\eta=0\). This implies \(D_{G}\begin{bmatrix}\eta\\ \mathbf{0}_{n_{2}}\end{bmatrix}=\mathbf{0}.\) Similarly, if \(\xi\in\operatorname{null}D_{2},\) then \(\mathbf{1}^{t}\xi=z^{t}\xi=0\). 
This implies \(D_{G}\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi\end{bmatrix}=\mathbf{0}.\) Therefore, if \(\{\eta_{1},...,\eta_{k_{1}}\}\) is a basis of \(\operatorname{null}D_{1}\) and \(\{\xi_{1},...,\xi_{k_{2}}\}\) is a basis of \(\operatorname{null}D_{2},\) then \[\left\{\begin{bmatrix}\eta_{1}\\ \mathbf{0}_{n_{2}}\end{bmatrix},...,\begin{bmatrix}\eta_{k_{1}}\\ \mathbf{0}_{n_{2}}\end{bmatrix},\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{1}\end{bmatrix},...,\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{k_{2}}\end{bmatrix}\right\}\] is linearly independent in \(\operatorname{null}D_{G}.\) This shows that \[\dim\operatorname{null}D_{G}\geq k_{1}+k_{2}=\dim\operatorname{null}D_{1}+ \dim\operatorname{null}D_{2}.\] On the other hand, suppose \(\begin{bmatrix}\eta\\ \xi\end{bmatrix}\in\operatorname{null}D_{G}.\) Our goal is to show that \(\eta\in\operatorname{null}D_{1}\) and \(\xi\in\operatorname{null}D_{2}\). We have \[\mathbf{0}_{n_{1}} =D_{1}\eta+y\mathbf{1}^{t}\xi+\mathbf{1}\mathbf{1}^{t}\xi+ \mathbf{1}z^{t}\xi \tag{3.9}\] \[\mathbf{0}_{n_{2}} =\mathbf{1}y^{t}\eta+\mathbf{1}\mathbf{1}^{t}\eta+z\mathbf{1}^{t} \eta+D_{2}\xi\] (3.10) \[0 =\mathbf{1}^{t}\begin{bmatrix}\eta\\ \xi\end{bmatrix} \tag{3.11}\] By looking at the \(n_{1}\)-th row of the first equation and using \(y_{n_{1}}=0\), we get \[0=y^{t}\eta+\mathbf{1}^{t}\xi+z^{t}\xi.\] The first row of the second equation and \(z_{1}=0\) gives \[0=y^{t}\eta+\mathbf{1}^{t}\eta+z^{t}\xi.\] Combining these with the third equation, we conclude that \[\mathbf{1}^{t}\eta=\mathbf{1}^{t}\xi=0.\] Therefore, equations 3.9 and 3.10 give \[D_{1}\eta =-(z^{t}\xi)\mathbf{1}\] \[D_{2}\xi =-(y^{t}\eta)\mathbf{1}.\] Suppose that \(z^{t}\xi\neq 0.\) Since \(G_{1}\) admits a nonnegative curvature, by Lemma 1, we have \(0<\mathbf{1}^{t}\frac{\eta}{-z^{t}\xi}=0,\) a contradiction. Thus, \(z^{t}\xi=0\) and \(D_{1}\eta=\mathbf{0}.\) Similarly, we have \(D_{2}\xi=\mathbf{0}\). Therefore, \(\eta\in\operatorname{null}D_{1}\) and \(\xi\in\operatorname{null}D_{2}\). We can thus write \(\eta=c_{1}\eta_{1}+\cdots+c_{k_{1}}\eta_{k_{1}}\) and \(\xi=d_{1}v_{1}+\cdots+d_{k_{2}}v_{k_{2}}\) where \(c_{i},d_{j}\in\mathbb{R}\). This means that \[\begin{bmatrix}\eta\\ \xi\end{bmatrix}=c_{1}\begin{bmatrix}\eta_{1}\\ \mathbf{0}_{n_{2}}\end{bmatrix}+\cdots+c_{k_{1}}\begin{bmatrix}\eta_{k_{1}}\\ \mathbf{0}_{n_{2}}\end{bmatrix}+d_{1}\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{1}\end{bmatrix}+\cdots+d_{k_{2}}\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{k_{2}}\end{bmatrix}.\] Thus, the vectors \[\begin{bmatrix}\eta_{1}\\ \mathbf{0}_{n_{2}}\end{bmatrix},...,\begin{bmatrix}\eta_{k_{1}}\\ \mathbf{0}_{n_{2}}\end{bmatrix},\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{1}\end{bmatrix},...,\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{k_{2}}\end{bmatrix}\] form a basis of \(\operatorname{null}D_{G}.\) This implies \[\dim\operatorname{null}D_{G}=\dim\operatorname{null}D_{1}+\dim\operatorname{ null}D_{2},\] as desired. ### Proof of Theorem 6 As in the proof of Theorem 3, we assume \(V(G_{1})=\{u_{1},...,u_{n_{1}}\},V(G_{2})=\{v_{1},...,v_{n_{2}}\}\) and \(\{u_{n_{1}},v_{1}\}\) is the edge added and contracted. 
The condition that \(D_{1}x=\mathbf{1}\) has no solution is equivalent to \(\mathbf{1}\not\in\operatorname{Im}D_{1}=(\operatorname{null}D_{1})^{\perp}.\) This is equivalent to that there is \(\eta\in\operatorname{null}D_{1}\) with \(\langle\eta,\mathbf{1}\rangle\neq 0.\) Similarly, we can find a vector \(\xi\in\operatorname{null}D_{2}\) with \(\langle\xi,\mathbf{1}\rangle\neq 0.\) Our goal is to find a vector \(\zeta\in\operatorname{null}D_{H}\) with \(\langle\zeta,\mathbf{1}\rangle\neq 0.\) Consider the vector \[\zeta=\alpha\begin{bmatrix}\eta\\ \mathbf{0}_{n_{2}-1}\end{bmatrix}+\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \bar{\xi}\end{bmatrix}+(s+\xi_{1})e_{n_{1}},\] where \(\bar{\xi}=(\xi_{2},...,\xi_{n_{2}}),\)\(e_{n_{1}}\) is the \(n_{1}\)-th coordinate vector in \(\mathbb{R}^{n_{1}+n_{2}-1},\) and \(\alpha,s\in\mathbb{R}\) are to be chosen. As in the proof of Theorem 3, let \(y\in\mathbb{R}^{n_{1}}\) be the last column of \(D_{1}\) and \(z\in\mathbb{R}^{n_{2}-1}\) be the first column of \(D_{2}\) without the first entry. Write \[D_{H}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}z^{t}\\ \mathbf{1}y^{t}+z\mathbf{1}^{t}&\bar{D_{2}}\end{bmatrix}\in\mathbb{R}^{(n_{1} +n_{2}-1)\times(n_{1}+n_{2}-1)},\] where \[D_{2}=\begin{bmatrix}0&z^{t}\\ z&\bar{D_{2}}\end{bmatrix}.\] Then \(D_{2}\xi=\mathbf{0}\) implies \(z^{t}\bar{\xi}=0\) and \(\xi_{1}z+\bar{D_{2}}\bar{\xi}=\mathbf{0}_{n_{2}-1}.\) Therefore, \[D_{H}\zeta=\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \alpha\mathbf{1}y^{t}\eta+\alpha z\mathbf{1}^{t}\eta\end{bmatrix}+\begin{bmatrix} y\mathbf{1}^{t}\bar{\xi}\\ -\xi_{1}z\end{bmatrix}+(s+\xi_{1})\begin{bmatrix}y\\ z\end{bmatrix}.\] Note that \(D_{1}\eta=\mathbf{0}_{n_{1}}\) gives \(y^{t}\eta=0.\) Thus, \[D_{H}\zeta=\begin{bmatrix}(\mathbf{1}^{t}\xi+s)y\\ (\mathbf{1}^{t}\eta\alpha+s)z\end{bmatrix}.\] Set \(s=-\mathbf{1}^{t}\xi\) and \(\alpha=\frac{\mathbf{1}^{t}\xi}{\mathbf{1}^{t}\eta}.\) Note that \(\alpha\) is well-defined since \(\mathbf{1}^{t}\eta\neq 0.\) Then \[D_{H}\zeta=\mathbf{0}_{n_{1}+n_{2}-1}.\] In addition, we have \[\langle\zeta,\mathbf{1}\rangle=\alpha\mathbf{1}^{t}\eta+\mathbf{1}^{t}\xi+s= \mathbf{1}^{t}\xi\neq 0.\] Therefore, \[\mathbf{1}\not\in(\operatorname{null}D_{H})^{\perp}=\operatorname{Im}D_{H}.\] This implies that \(D_{H}x=\mathbf{1}\) does not have a solution. ### Proof of Proposition 2 We follow the idea in the Theorem in [14] with some revision. Proof.: Let \(V=\{u_{1},...,u_{n}\}\) be the vertices of the tree \(T.\) Let \(L\subset V\) be the leaves of \(T.\) Assume \(u_{k}\) is not a leaf with \(k\) fixed. Then \[\lambda =\sum_{i,j=1}^{n}d(u_{i},u_{j})\eta_{i}\eta_{j}\] \[=\sum_{i\neq j}d(u_{i},u_{j})\eta_{i}\eta_{j}\] \[\leq\sum_{i\neq j}(d(u_{i},u_{k})+d(u_{k},u_{j}))\eta_{i}\eta_{j}\] \[=\sum_{i,j=1}^{n}(d(u_{i},u_{k})+d(u_{k},u_{j}))\eta_{i}\eta_{j}- \sum_{i=1}^{n}2d(u_{i},u_{k})\eta_{i}^{2}.\] Therefore, \[\lambda+2\sum_{i=1}^{n}d(u_{i},u_{k})\eta_{i}^{2}\leq 2\langle\eta,\mathbf{1} \rangle\lambda\eta_{k}.\] Note that \[\sum_{i=1}^{n}d(u_{i},u_{k})\eta_{i}^{2}=\sum_{i\neq k}d(u_{i},u_{k})\eta_{i}^ {2}\geq\sum_{i\neq k}\eta_{i}^{2}=||\eta||^{2}-\eta_{k}^{2}=1-\eta_{k}^{2}.\] Thus, we get \[\lambda+2-2\eta_{k}^{2}\leq 2\langle\eta,\mathbf{1}\rangle\lambda\eta_{k}.\] Rearranging the terms and summing \(k\) over all non-leaves, we get \[\lambda(n-l)\leq 2\langle\eta,\mathbf{1}\rangle\lambda\sum_{k:u_{k}\notin L} \eta_{k}+2\sum_{k:u_{k}\notin L}\eta_{k}^{2}-2(n-l). \tag{3.12}\] On the other hand, suppose \(u_{k}\in L\) is a leaf with \(k\) fixed. 
If \(i,j\neq k\) then \[d(u_{i},u_{j})\leq d(u_{i},u_{k})+d(u_{k},u_{j})-2.\] To see this, assume that \(u_{k}\) is adjacent to the vertex \(u_{k^{\prime}}.\) Then \[d(u_{i},u_{k}) =d(u_{i},u_{k^{\prime}})+1\] \[d(u_{j},u_{k}) =d(u_{j},u_{k}^{\prime})+1.\] Thus, \[d(u_{i},u_{j})\leq d(u_{i},u_{k^{\prime}})+d(u_{j},u_{k^{\prime}})=d(u_{i},u_ {k})+d(u_{j},u_{k})-2.\] Then we have \[\lambda =\sum_{i,j}\eta_{i}\eta_{j}d(u_{i},u_{j})\] \[\leq\sum_{i,j\neq k}\eta_{i}\eta_{j}(d(u_{i},u_{k})+d(u_{k},u_{j} )-2)+2\sum_{i\neq k}\eta_{i}\eta_{k}d(u_{i},u_{k})+\eta_{k}^{2}d(u_{k},u_{k})\] \[=2(\langle\eta,\mathbf{1}\rangle-\eta_{k})\lambda\eta_{k}-2( \langle\eta,\mathbf{1}\rangle-\eta_{k})^{2}+2\lambda\eta_{k}^{2}\] \[=(2\lambda+4)\eta_{k}\langle\eta,\mathbf{1}\rangle-2\langle\eta,\mathbf{1}\rangle^{2}-2\eta_{k}^{2}.\] By summing \(k\) over all leaves, we get \[\lambda l\leq(2\lambda+4)\langle\eta,\mathbf{1}\rangle\sum_{k:u_{k}\in L}\eta_{k} -2\langle\eta,\mathbf{1}\rangle^{2}l-2\sum_{k:u_{k}\in L}\eta_{k}^{2}. \tag{3.13}\] Thus, adding equations 3.12 and 3.13, we get \[\lambda n\leq(2\lambda-2l)\langle\eta,\mathbf{1}\rangle^{2}+4\langle\eta, \mathbf{1}\rangle\sum_{k:u_{k}\in L}\eta_{k}+2(\sum_{k:u_{k}\not\in L}\eta_{k} ^{2}-\sum_{k:u_{k}\in L}\eta_{k}^{2})-2(n-l).\] Since \[\sum_{k:u_{k}\not\in L}\eta_{k}^{2}-\sum_{k:u_{k}\in L}\eta_{k}^{2} <\sum_{k:u_{k}\not\in L}\eta_{k}^{2}+\sum_{k:u_{k}\in L}\eta_{k}^{2}=1\] \[\sum_{k:u_{k}\in L}\eta_{k} <\langle\eta,\mathbf{1}\rangle,\] we get \[\lambda n<(2\lambda-2l+4)\langle\eta,\mathbf{1}\rangle^{2}+2-2(n-l).\] Note that \(l\leq n-1\) since \(T\) is a tree. The eigenvalue estimate [10, Theorem 8.1.22] gives \[\lambda\geq\min_{i}\sum_{j=1}^{n}D_{ij}\geq n-1.\] Therefore, \(\lambda-l+2>0.\) Thus, \[\langle\eta,\mathbf{1}\rangle^{2}>\frac{n}{2}(\frac{\lambda}{\lambda-l+2})+ \frac{n-l-1}{\lambda-l+2}.\]
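As a quick numerical sanity check (ours, not the authors'), the following sketch evaluates the bound in the form derived in the last display of the proof above, \(\langle\eta,\mathbf{1}\rangle^{2}>\frac{n}{2}\frac{\lambda}{\lambda-l+2}+\frac{n-l-1}{\lambda-l+2}\), on a randomly grown tree. It assumes numpy and networkx; the tree is built by attaching each new vertex to a uniformly random earlier one.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 30
T = nx.Graph()
T.add_node(0)
for v in range(1, n):
    T.add_edge(v, int(rng.integers(v)))     # attach each new vertex to a random earlier vertex

D = np.array(nx.floyd_warshall_numpy(T))    # tree distance matrix (dense, unweighted)
eigvals, eigvecs = np.linalg.eigh(D)
lam = eigvals[-1]                           # Perron root of D
eta = eigvecs[:, -1]
eta = eta if eta.sum() >= 0 else -eta       # fix the sign so the Perron eigenvector is nonnegative
l = sum(1 for v in T if T.degree(v) == 1)   # number of leaves

lhs = eta.sum() ** 2                        # <eta, 1>^2 with ||eta||_2 = 1
rhs = (n / 2) * lam / (lam - l + 2) + (n - l - 1) / (lam - l + 2)
print(lhs, ">", rhs, lhs > rhs)
```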
Steinerberger proposed a notion of curvature on graphs (J. Graph Theory, 2023). We show that nonnegative curvature is almost preserved under three graph operations. We characterize the distance matrix and its null space after adding an edge between two graphs. Let $D$ be the graph distance matrix and $\mathbf{1}$ the all-one vector. We provide a way to construct graphs so that the linear system $Dx = \mathbf{1}$ does not have a solution. Let $\eta$ be the Perron eigenvector of $D$. We provide a lower bound on $\langle\eta,\mathbf{1}\rangle$ when the graph is a tree.
2309.11639
The latent cognitive structures of social networks
When people are asked to recall their social networks, theoretical and empirical work tells us that they rely on shortcuts, or heuristics. Cognitive Social Structures (CSS) are multilayer social networks where each layer corresponds to an individual's perception of the network. With multiple perceptions of the same network, CSSs contain rich information about how these heuristics manifest, motivating the question, Can we identify people who share the same heuristics? In this work, we propose a method for identifying cognitive structure across multiple network perceptions, analogous to how community detection aims to identify social structure in a network. To simultaneously model the joint latent social and cognitive structure, we study CSSs as three-dimensional tensors, employing low-rank nonnegative Tucker decompositions (NNTuck) to approximate the CSS--a procedure closely related to estimating a multilayer stochastic block model (SBM) from such data. We propose the resulting latent cognitive space as an operationalization of the sociological theory of social cognition by identifying individuals who share relational schema. In addition to modeling cognitively independent, dependent, and redundant networks, we propose a specific model instance and related statistical test for testing when there is social-cognitive agreement in a network: when the social and cognitive structures are equivalent. We use our approach to analyze four different CSSs and give insights into the latent cognitive structures of those networks.
Izabel Aguiar, Johan Ugander
2023-09-20T21:07:36
http://arxiv.org/abs/2309.11639v2
# The latent cognitive structures of social networks ###### Abstract Cognitive Social Structures (CSS) are multilayer social networks where each layer corresponds to an individual's perception of the network. Traditional analyses of CSSs have focused on identifying individuals' accurate (or inaccurate) perceptions of a network or examining how these perceptions correlate with behavior. Largely overlooked, however, has been the rich information that CSSs contain about the possible correspondence between the social and cognitive structure shared by the individuals within a network. How does a person's _social position_, capturing how they are related to other individuals, relate to their _cognitive position_, capturing how they view the network? In this work, we study CSSs as three-dimensional tensors, applying tensor decomposition methods to simultaneously capture the joint latent _social_ structure of how individuals are perceived as connected as well as the latent _cognitive_ structure of how different individuals view the connections. In addition to modeling cognitively _independent_, _dependent_, and _redundant_ networks, we propose a specific model instance and related statistical test for testing when there is _social-cognitive agreement_ in a network: when the social and cognitive spaces are equivalent. We employ low-rank nonnegative Tucker decompositions (NNTuck) to approximate the CSS, a procedure closely related to estimating a multilayer stochastic block model (SBM) from such data. We place a particular emphasis on the NNTuck's latent cognitive space, proposing it as an operationalization of sociological theories of _social cognition_ and _relational schema_. We use our approach to analyze four different CSSs and give insights into the latent cognitive structures of those networks. ## 1 Introduction The study of social networks often concerns itself with the notion of a _true_ network. For a given relationship type, typically the true network is definitionally what is captured when asking each individual about their own relationships (Wasserman and Faust, 1994). A common extension of this approach is to aim to uncover multiple true networks through multiple name generators (e.g., Campbell and Lee, 1991; Banerjee et al., 2013). Yet other approaches use interaction data to contrast physical contact networks to participants' faulty recollections (e.g., Killworth and Bernard, 1976; Bernard et al., 1979; Freeman et al., 1987). The study of cognitive social structures (CSSs) (Krackhardt, 1987) (sometimes referred to simply as "Krackhardt data"), where _perceptions_ of the social network are collected from each individual within it, makes a radical departure from these approaches. The body of work on CSSs proposed the idea that individuals' perceptions provide valuable information about the network, and through them, multiple notions of a _true_ network exist. Even so, much CSS work has overwhelmingly focused on measuring how these perceptions differ from some sense of a true network. In this work we stray from the conception of a true network existing, and instead intend to learn what perceived networks tell us about the joint social and cognitive structures that CSSs capture and what we can learn from considering these varied perceptions as a whole. Cognitive social structures (CSSs) capture information present, but usually ignored, in our social networks: our (varied) perceptions of the social structure. 
Newcomb (1961) first introduced the concept of incorporating this perspective in social network analysis by asking participants about ties involving both themselves and others, and Krackhardt (1987) expanded this perspective in his work that laid the foundations for the formal study of cognitive social structures. For the purposes of this work we will consider the CSS as a _multilayer network_, wherein each layer is defined by a different individual's perception of the social network (see De Domenico et al., 2013, for a review of multilayer networks). The present work makes the following contributions. First, our work proposes examining the CSS of a population to study the latent structure of the perceptions of individuals within the network. We approach this task using nonnegative Tucker decompositions. The nonnegative Tucker decomposition (NNTuck), which decomposes a tensor into three nonnegative _factor matrices_ and one nonnegative _core tensor_, is a multilayer network extension of the degree-corrected, mixed-membership SBM proposed in Ball et al. (2011). Analogous to how the factor matrices in the single layer SBM identify _node communities_, the additional third factor matrix in the NNTuck identifies _layer communities_. As such, the third factor matrix of the NNTuck allows for the adjacency tensor to be low rank in the layer dimension, and identifies interdependencies between the layers of a multilayer network. In the case of the CSS multilayer network, the NNTuck identifies interdependencies in the _cognitions_ of the network. Second, the NNTuck provides a useful operationalization of the sociological theories of _relational schema_(Baldwin, 1992) and _social cognition_(Howard, 1994), theories that each discuss the ways in which we carry and perceive social structures in our minds, and the shortcuts we use to do so. We propose that the nonnegative Tucker decomposition (NNTuck) can be used as a tool for identifying sets of individuals with commonly shared relational schema, doing so in an unsupervised manner without deciding a priori _which_ schema to examine (as do, e.g., Janicik and Larrick, 2005; Kilduff et al., 2008). The interpretation of the relational schema at play can instead be interpreted after the latent cognitive spaces have been identified. Third, we propose statistical tests for assessing different types of cognitive structure in a social network. Notably, we propose a definition and test for determining if the latent cognitive structure is mirrored by (or mirrors) the social structure, a property we call _social-cognitive agreement_. Finally, and most broadly, our work examines the CSS as a three dimensional tensor, connecting the study of CSSs to useful formalisms from multilinear algebra for studying such rich data objects. We use the tools developed here to uncover insights into the cognitive and social spaces of empirical networks, showing how the NNTuck can be used to identify how social structure and their related cognitions can significantly alter throughout time. The structure of the work is as follows. In Section 2 we discuss related work. We introduce the nonnegative Tucker decomposition (NNTuck) of multilayer networks (Aguiar et al., 2022) in Section 3, where we also discuss the sociological concepts that we draw upon and propose the interpretation of the NNTuck in the context of analyzing CSSs. 
In Section 3.3 we define and discuss the notion of _social-cognitive agreement_ useful for analysing CSS datasets, and Section 3.5 we present for statistical tests for assessing latent cognitive structure in social networks. In Section 4 we investigate the use of the NNTuck to find latent structure in four CSS datasets from Krackhardt (1987) and Hunter (2019). We conclude with a discussion and promising open questions for future work. Related Work This work is by no means the first which suggests that there is a latent, or shared, cognitive structure present in the perceptions of social networks. Even preceding Krackhardt's original CSS work, Carley (1986) showed how the shared "frames" developed in the social networks of the members of a college dormitory impacted shared cognition. In Krackhardt (1987), the author motivates his proposal of cognitive social structures by discussing schemas, invoking research that finds people rely on patterns for their recollections (Freeman et al., 1987), and suggesting that Heider's (1958) balance theory is a possible cognitive model underlying peoples' perceptions. Later, DiMaggio's (1997) work on _Culture and Cognition_ cites Krackhardt's (1987) work, stating that "networks are crucial environments for the activation of schemata, logics, and frames." Although there was a strong emphasis on the theory of shared schema leading up to and within Krackhardt (1987), the bulk of the CSS work that has followed has focused on quantifying the "accuracy" of the perceptions of members in a social network. For example, Krackhardt (1990) found that those with more accurate perceptions of the network are perceived as being more powerful, and Brands and Kilduff (2014) relate the misperceptions of women's roles in a network to the characteristics that their peers attribute to them. For a comprehensive review of CSS work through 2013, see Brands (2013). From a modeling perspective, our work is similar to that of Kumbasar et al. (1994) or Sosa and Rodriguez (2021), which both identify latent structure in the perceptions of the network by considering the CSS data object in its entirety, but both focus on identifying and contrasting the latent self-perceived social space to a latent group-perceived social space. In Kumbasar et al. (1994), the authors conduct a correspondence analysis of a \(N^{2}\times N\) "stacked" matrix representation of a CSS to find and compare the latent self-perceived and group-perceived social spaces of each person. In Sosa and Rodriguez (2021), the authors propose a 3-dimensional extension of Hoff et al.'s (2002) latent space model for social networks. Their model identifies the self-perceived latent position of each individual, and the posterior probabilities of a Bayesian model are used to assess "whether the perception of an individual about its own position in social space agrees with the judgments of other actors." The authors propose that this information can be used to create a weighted (single layer) network, where the weights of an edge between persons \(i\) and \(j\) corresponds to how much cognitive agreement \(i\) and \(j\) share. Also related is the work of Stevenson and Radin (2015), in which the authors aim to connect shared cognitions about events in the workplace with the underlying social network structure. 
While not explicitly related to schemas, in Sewell (2019) the author proposes a latent space model which describes the deviations between individuals' perceptions through bias and variance, explaining individuals' tendencies to report more or less dense networks, and their confidence in the network perception, respectively. More recently, De Bacco et al. (2023) proposed a latent model to uncover an unobserved underlying network from multiply reported network data (such as a CSS) by identifying latent propensities for each reporter to over or under report edges between others, as well as each reporter's tendency to report reciprocated ties between people. Such related work that focuses on identifying latent spaces in a CSS sets the precedent that we hope to continue here. From a conceptual perspective, our work is most similar to that of Janicik and Larrick (2005), Carnabuci et al. (2018), Menon and Smith (2014), Kilduff et al. (2008), Emery et al. (2011), Brashears (2013), and Brashears and Quintane (2015), which all aim to identify specific relational schema individuals use when recalling their social networks. In contrast to our proposed method, these prior works all identify _specific_ relational schema to study the presence of in the social network, whereas we aim to identify _people_ who share a general relational schema that can be interpreted and identified a posteriori. Janicik and Larrick (2005) specifies two relational schema that influence social network perceptions, balance schema (Heider, 1958) and linear-ordered schema (De Soto and Kuethe, 1959). Their work goes on to study how people use these schema to fill in incomplete parts of a social network. Carnabuci et al. (2018) take an experimental approach to infer the relational schema that participants rely upon by measuring how learning rates change in conditions which assume different relational schema in the network. Similarly, Menon and Smith (2014) study how psychological priming impacts the density and characteristics of people's perceived networks. Kilduff et al. (2008) study multiple CSS datasets to understand how schemas like small-world assumptions (Milgram, 1967) and perceived popularity manifest themselves in CSSs. In Emery et al. (2011), the goal is to identify how specific relational schemas impact the emergence of leadership in a social network. In Brashears (2013), the author conducts experiments to understand how compression heuristics aid in the recall of large social networks, finding that people have better recall when structural assumptions of relational schema like kinship and triadic closure are accurate ones. In Brashears and Quintane (2015) the same experimental data is reanalyzed and the recall of triads and groups is compared to what is expected from exponential random graphs. We aim to enrich the study of these varied questions about cognitive structure. The operationalization we introduce in this work identifies shared relational schema without necessitating _which_ relational schema to identify ahead of time, and enables us to contrast social structure to cognitive structure. The model we propose for analyzing CSSs builds upon a diverse set of work on stochastic block models (SBMs) for multilayer networks. Most similarly to the NNTuck work of Aguiar et al. (2022) is that of Schein et al. (2016) and De Bacco et al. (2017). In De Bacco et al. (2017), a multilayer SBM is proposed by estimating a separate SBM for each layer of the network, keeping the social space fixed across layers. 
The model proposed in De Bacco et al. (2017) is a specific instance of the NNTuck wherein the third factor matrix in the decomposition is constrained to be the identity matrix (see Definition 1 in Section 3 for more details). In Schein et al. (2016), the authors propose a Bayesian Poisson Tucker Decomposition (BPTD) as a generalization of the degree-corrected, mixed-membership SBM (dc-mm-SBM) (Ball et al., 2011) to study multilayer networks. Whereas Schein et al. (2016) estimate the SBM with an MCMC algorithmic approach, the NNTuck multiplicative updates procedure seeks to maximize the log-likelihood to obtain a point estimate of the same model (see Section 3.1 and Algorithm 1 for more details). The NNTuck multiplicative updates procedure is equivalent to the expectation maximization (EM) algorithm for estimating the multilayer SBM in De Bacco et al. (2017). Viewing the CSS more generally, as a multilayer network, identifying structure in the cognitive space (which we propose to do with the NNTuck) is similar to identifying _layer interdependence_, for which there have been a multitude of proposed methods. Beginning in the CSS literature, Krackhardt (1987) suggested differentiating layer similarity by comparing individual layers to a _consensus structure_. Battiston et al. (2014) and Kao and Porter (2018) introduce and use various similarity measures to identify layer communities. De Domenico et al. (2015) and De Domenico and Biamonte (2016) cluster similar layers using information-theoretic tools. In Stanley et al. (2016), layers are categorized into groups that were drawn from the same SBM. In De Bacco et al. (2017) the authors build multiple models using different subsets of the layers and layer interdependence is determined via each models' performance on a link prediction task. In contrast to this previous work, the NNTuck identifies layer interdependence by identifying _layer _communities_ which have the same underlying generative model. As such, the NNTuck allows for the adjacency tensor to be low rank in the layers as well as the nodes and, as we discuss in more detail in Section 3.2, is thus a natural operationalization of existing sociological theories. ## 3 A factor model of cognitive social structures In this section we propose the use of the nonnegative Tucker decomposition (NNTuck) (Aguiar et al., 2022) for analyzing cognitive social structures. We introduce the NNTuck as a multilayer stochastic block model (SBM), discuss the interpretation of the NNTuck in the context of CSSs, and relate it to relevant sociological theories. ### The nonnegative Tucker decomposition The degree-corrected, mixed-membership SBM (dc-mm-SBM) (Ball et al., 2011) is a generative model of a single layer network that assumes each of the \(N\) nodes in the network have soft ("mixed") membership in each of \(K\) different communities, and that an affinity matrix captures the Poisson rates at which nodes in each community form an edge with one another. More specifically, for a network of \(N\) nodes represented by an adjacency matrix \(A\in\mathbb{Z}_{0+}^{N\times N}\), the dc-mm-SBM assumes that each node \(i\) has a nonnegative outgoing membership vector, \(\mathbf{u}_{i}\in\mathbb{R}_{+}^{K}\), and a nonnegative incoming membership vector, \(\mathbf{v}_{i}\in\mathbb{R}_{+}^{K}\). Each membership vector represents the node's soft assignment to \(K\leq N\) groups when forming outgoing and incoming edges, respectively. 
The \(K\times K\) nonnegative affinity matrix \(\boldsymbol{G}\) describes the rates of connections between nodes in each communitiy. Then, for \(N\times K\) matrices \(\boldsymbol{U}\) and \(\boldsymbol{V}\) with rows defined by \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\), the dc-mm-SBM assumes that \(\boldsymbol{A}\sim\text{Poisson}(\boldsymbol{U}\boldsymbol{G}\boldsymbol{V}^{ \top})\). While the model allows for adjacency matrices \(A\) consisting of nonnegative count data, it is common practice to employ the dc-mm-SBM for model-based analyses of binary network data. Note that constraining all the elements of \(\mathbf{u}_{i},\mathbf{v}_{i}\), and \(\mathbf{G}\) to be nonnegative allows for these parameters to be interpretable as membership weights and affinities. The nonnegative Tucker decomposition (NNTuck) as a generative model is a multilayer network extension of the dc-mm-SBM. It builds on related work on multilayer SBMs (e.g., Schein et al., 2016; De Bacco et al., 2017; Tarres-Deulofeu et al., 2019) by generalizing the SBM to a multilayer setting and using tensor decompositions to jointly model the layers, much like how matrix decompositions underlie traditional (single layer) stochastic block models. Consider a multilayer network with \(N\) nodes and \(L\) layers represented by adjacency tensor \(\boldsymbol{\mathcal{A}}\in\mathbb{Z}_{0+}^{N\times N\times L}\). The NNTuck multilayer SBM again assumes that each node \(i\) has nonnegative membership vectors \(\mathbf{u}_{i}\in\mathbb{R}_{+}^{K}\) and \(\mathbf{v}_{i}\in\mathbb{R}_{+}^{K}\), representing the node's soft assignment to \(K\leq N\) groups when forming outgoing and incoming edges, respectively (for an undirected network, \(\mathbf{u}_{i}=\mathbf{v}_{i}\)). Furthermore, the NNTuck assumes that each layer \(\ell\) has a nonnegative vector \(\mathbf{y}_{\ell}\in\mathbb{R}_{+}^{C}\) describing the _layer's_ soft membership to each of \(C\leq L\)_layer communities_. Just as matrices \(\boldsymbol{U}\) and \(\boldsymbol{V}\) in the dc-mm-SBM describe latent community structure in the _nodes_ of single-layer networks, the factor matrix \(\boldsymbol{Y}\) in the NNTuck describes latent structure in the _layers_ of a multilayer network. Finally, the NNTuck assumes a nonnegative core tensor \(\boldsymbol{\mathcal{G}}\in\mathbb{R}_{+}^{K\times K\times C}\) whose frontal slices are \(C\) different affinity matrices. Let \(\mathbf{u}_{i},\mathbf{v}_{i},\mathbf{y}_{\ell}\) be the rows of nonnegative matrices \(\boldsymbol{U},\boldsymbol{V}\), and \(\boldsymbol{Y}\), respectively. Then the NNTuck multilayer SBM assumes that \[\boldsymbol{\mathcal{A}}\sim\text{Poisson}(\boldsymbol{\mathcal{G}}\times_{1 }\boldsymbol{U}\times_{2}\boldsymbol{V}\times_{3}\boldsymbol{Y}). \tag{1}\] When estimating this model from multilayer network data, maximizing the log-likelihood of observing \(\mathbf{\mathcal{A}}\) under the model given by Eq. (1) is equivalent to minimizing the KL-divergence between \(\mathbf{\mathcal{A}}\) and \(\mathbf{\mathcal{\hat{A}}}=\mathbf{\mathcal{G}}\times_{1}\mathbf{U}\times_{2}\mathbf{V}\times_ {3}\mathbf{Y}\). This equivalence is essential to why the NNTuck estimated by minimizing the KL-divergence is synonymous with the multilayer SBM. 
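To make the generative model concrete, here is a minimal numerical sketch (our own illustrative code, not the implementation released with the paper) of the reconstruction \(\boldsymbol{\mathcal{\hat{A}}}=\boldsymbol{\mathcal{G}}\times_{1}\boldsymbol{U}\times_{2}\boldsymbol{V}\times_{3}\boldsymbol{Y}\) and of the Poisson log-likelihood whose maximization is equivalent to minimizing the KL-divergence; the array shapes follow the notation above.

```python
import numpy as np

def nntuck_reconstruct(G, U, V, Y):
    """Mode products A_hat = G x_1 U x_2 V x_3 Y.
    Shapes: G is (K, K, C), U and V are (N, K), Y is (L, C);
    the result is the (N, N, L) tensor of Poisson rates."""
    return np.einsum('ik,jl,mc,klc->ijm', U, V, Y, G)

def poisson_log_likelihood(A, A_hat, eps=1e-12):
    """Log-likelihood of the observed count tensor A under independent
    Poisson entries with rates A_hat (the log A_ijl! constant is dropped,
    so maximizing this is equivalent to minimizing the KL-divergence
    between A and A_hat)."""
    return np.sum(A * np.log(A_hat + eps) - A_hat)
```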
### Interpretation of the NNTuck of a CSS Through its multiple factor matrices, the NNTuck multilayer SBM models both the latent social structure--sometimes referred to as the Blau space (Blau, 1977; McPherson, 1983; McPherson and Ranger-Moore, 1991)--and the latent layer structure of a multilayer network. When viewing a CSS as a multilayer network, each layer corresponds to a different individual's perception of the social network. Therefore, the latent layer structure captured by the NNTuck of a CSS is precisely the _latent cognitive structure_ of the CSS. In the setting of CSSs, then, the NNTuck identifies three latent spaces in a network: the outgoing and incoming social groups in the \(\mathbf{U}\) and \(\mathbf{V}\) factor matrices, respectively, and the \(\mathbf{Y}\) factor matrix, which identifies \(C\)_cognitive groups_ and each individual's membership to each of these groups. Relationship to Social Cognition and Relational Schema.Aiming to interpret this factor model through sociological theories, we will not review all relevant or adjacent sociological literature surrounding cognition, social representations, schema, or culture. While tempting, it is far beyond the scope of the present work. Therefore, we focus on connecting the NNTuck to Baldwin's (1992) theory of _relational schema_ and Howard's (1994) theory of _social cognition_. A succinct summary of Baldwin's theory of _relational schema_ (emphasis our own) is that, _"[relational schema] describe expectations about the nature of relationships between people, defining what aspects of social interactions individuals will pay attention to, as well as the attributes of others that are meaningful within those interactions"_(Brands, 2013). Figure 1: In this work we analyze cognitive social structures (CSSs) as multilayer networks represented by an \(N\times N\times N\) adjacency tensor (left). The _frontal slices_ of the adjacency tensor are visualized in blue, yellow, and green. The \(N\) frontal slices of the CSS are adjacency matrices representing the perception each person has of their network. We use the nonnegative Tucker decomposition (NNTuck) to model the CSS with a multilayer stochastic block model, which decomposes the adjacency tensor into latent social spaces and a latent cognitive space (right). Relational schema describe the generalizations or patterns we rely on when making assumptions about relationships or when making new social connections. In data collection for cognitive social structures, each person is asked to report on their perceptions of the relationships between every pair of people in their network. It has been empirically shown that people rely on compression heuristics when recalling large networks (Brashears, 2013), and it is reasonable to assume that the perceptions reported in CSSs are the product of some such compression heuristic. In Section 4.3, we use the NNTuck to understand the cognitive latent space of a longitudinal CSS collected repeatedly over several weeks, allowing us to investigate the heuristics involved (and not involved) in that early formation of the cognitive social structure for that population. 
About _social cognition_, Howard writes, _"Social cognition articulates explicitly how social structures are carried in individuals' mental systems...The content of the group dimensions on which people are prone to categorize reveals the connections between cognitive and social structures."_ CSS datasets provide empirical access to "how social structures are carried in individuals' mental systems," and we propose that the NNTuck, through the \(\mathbf{Y}\) factor matrix, can identify specific ways in which any differences in these perceptions can be attributed to differences in such mental systems. Connecting both theories, we propose that individuals in the same cognitive group can be thought of as having a shared set of relational schema because they share the same generative process that describes their perceptions of the network. In this sense, too, the NNTuck provides an explicit operationalization of Howard's theory of social cognition. In Section 4 we interpret possible relational schema underlying the shared cognitive spaces in three empirical datasets, and in Section 4.3, we directly tie our analysis using the NNTuck to Howard's theories. To formalize our interpretation of the NNTuck in terms of established social theories, we now turn to possible different assumptions of the structure and dimension of the "layer" factor matrix \(\mathbf{Y}\in\mathbb{R}^{L\times C}\) in the NNTuck of a CSS. The following definitions apply specifications for multilayer network models from Aguiar et al. (2022) to the CSS context. They form the basis of our primary interpretation of empirical CSS data in Section 4. **Definition 1** (Cognitively independent NNTuck): _A **cognitively independent NNTuck** is a nonnegative Tucker decomposition where \(C=L\) and \(\mathbf{Y}\) has the constraint \(\mathbf{Y}=\mathbf{I}\), meaning each individual in the network relies on a distinct relational schema._ **Definition 2** (Cognitively dependent NNTuck): _A **cognitively dependent NNTuck** is a nonnegative Tucker decomposition where \(\mathbf{Y}\) has the constraint \(C<L\), meaning each individual's social cognition can be described by a mixture of \(C\) relational schema._ **Definition 3** (Cognitively redundant NNTuck): _A **cognitively redundant NNTuck** is a nonnegative Tucker decomposition where \(C=1\) and we constrain \(\mathbf{Y}\) to be the ones vector, \(\mathbf{Y}=[1,\ldots,1]^{\top}\), meaning each individual in the network has the exact same relational schema in perceiving their network._ ### Social-Cognitive Agreement In addition to the above model specifications, we next propose a new specification that is particularly well-suited to the CSS context. Considering that the NNTuck of a CSS can tell us both about the latent social and cognitive structures, a natural question to ask is how the two spaces are related to one another. Do our social surroundings directly influence how we conceptualize our social network, does our conceptualization influence how we socialize, and if so, can we empirically assess the ways in which they do? The hypothesis that our cognitions are influenced by our peers was proposed in Carley (1986), where a main goal of the work was to "relate cognitive structure to social structure at an empirical level." As Carley writes, "the social and cognitive processes cannot be decoupled." Indeed, through a combination of quantitative and qualitative data collection, Carley identifies specific tightly knit groups in a community who share similar cognitive mappings of a concept. 
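To illustrate Definitions 1-3, the short sketch below (illustrative code; the function name and interface are ours, not from the paper's repository) constructs the layer factor \(\mathbf{Y}\) under each constraint. In the cognitively dependent case \(\mathbf{Y}\) is simply initialized at random and subsequently estimated.

```python
import numpy as np

def init_layer_factor(L, model, C=None, seed=0):
    """Layer ("cognitive") factor Y under the three model classes:
    - 'independent': C = L, Y = I   (each person has a distinct schema)
    - 'redundant'  : C = 1, Y = 1   (everyone shares a single schema)
    - 'dependent'  : C < L, Y free  (each person mixes C shared schemata)
    For 'independent' and 'redundant', Y is fixed and never updated."""
    rng = np.random.default_rng(seed)
    if model == 'independent':
        return np.eye(L)
    if model == 'redundant':
        return np.ones((L, 1))
    if model == 'dependent':
        if C is None or not C < L:
            raise ValueError('the dependent model requires C < L')
        return rng.random((L, C))
    raise ValueError(f'unknown model: {model}')
```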
A similar idea was later echoed in Freeman (1992), where the author writes, "the individuals involved in any particular local community would be expected ultimately to produce very similar mental images of group structure in that community". These related hypotheses have a natural analogue to a specification of the NNTuck. Namely, the social structure is the same as the cognitive structure if \(\mathbf{U}=\mathbf{Y}\). We refer to this structural assumption as one of _social-cognitive agreement_. **Definition 4** (Social-cognitive agreement NNTuck): _An NNTuck with **social-cognitive agreement (SCA)** is a nonnegative Tucker decomposition where \(C<L\) and we constrain \(\mathbf{Y}=\mathbf{U}\). Individuals in the same social group share the same relational schema._ ### Model optimization details Algorithmically, our primary method for maximizing the log-likelihood of observing a CSS dataset under a given NNTuck model specification is to employ the multiplicative updates algorithm of Kim and Choi (2007), an effective extension of the multiplicative updates algorithm for nonnegative matrix factorization (NMF) given by Lee and Seung (2001). For more details, see Algorithm 1 in Appendix C or Aguiar et al. (2022). The Kim and Choi (2007) algorithm, like the NMF algorithm it builds on, is guaranteed to find a local maxima of the nonconvex log-likelihood, but not necessarily the global maxima. That said, extensive empirical evaluations have shown it to perform very well when employed using best practices for nonconvex optimization (e.g., choosing the best of many random restarts of the optimization routine). For the cognitively independent, dependent, and redundant NNTuck model assumptions (Definitions 1 to 3), this algorithm transfers easily to the relevant specifications. Briefly, when constraining \(\mathbf{Y}=\mathbf{I}\) or \(\mathbf{Y}=\mathbf{1}\) as in the cognitively independent and redundant NNTucks, respectively, the algorithm maintains monotonic convergence to a local minimum by simply initializing these constraints and never updating \(\mathbf{Y}\) after initialization. Similarly, when constraining \(\mathbf{U}=\mathbf{V}\), as in the case of an undirected network, by initializing both \(\mathbf{U}=\mathbf{V}\) and symmetry in the frontal slices of \(\mathbf{\mathcal{G}}\), monotonic convergence to a local minimum is maintained. The symmetric structure in the frontal slices of adjacency tensor \(\mathbf{\mathcal{A}}\) (as is the case in an undirected network) and core tensor \(\mathbf{\mathcal{G}}\) (which is an interpretable assumption for an undirected network) result in equivalence between the multiplicative update derived for \(\mathbf{U}\) and the multiplicative update derived for \(\mathbf{V}\). Thus, by simply initializing a symmetric \(\mathbf{\mathcal{G}}\) and \(\mathbf{U}=\mathbf{V}\), Algorithm 1 maintains this symmetry and equivalence. In the case of estimating a social-cognitive agreement NNTuck (Definition 4), however, constraining \(\mathbf{Y}=\mathbf{U}\) is a nontrivial constraint on the multiplicative updates algorithm. The structure in both the data \(\mathbf{\mathcal{A}}\) and core tensor \(\mathbf{\mathcal{G}}\) that would be necessary in order to ensure equivalence in the multiplicative updates of \(\mathbf{Y}\) and \(\mathbf{U}\) (as we noted for the symmetric network specification above), are contextually unreasonable to expect or assume. 
Specifically, equivalence in the multiplicative updates for \(\mathbf{Y}\) and \(\mathbf{U}\) requires that the 1-unfolding and the 3-unfolding* of \(\mathbf{\mathcal{A}}\) are equivalent (\(\mathbf{A}_{(1)}=\mathbf{A}_{(3)}\)) and that the 1-unfolding and the 3-unfolding of \(\mathbf{\mathcal{G}}\) are equivalent (\(\mathbf{G}_{(1)}=\mathbf{G}_{(3)}\)). In the context of the CSS, this structural assumption on \(\mathbf{\mathcal{A}}\) means that, for every pair of people \(i\) and \(j\), person \(i\)'s perception of \(j\)'s outgoing relationships is the same as person \(j\)'s perception of person \(i\)'s outgoing relationships. In order for this assumption to hold, each person would need to report their _own_ outgoing relationships as their perceptions of everyone else's outgoing relationships (that is, to assume everybody goes to the exact same people for advice/friendship). Footnote *: Tensor unfoldings are higher-order analogues of matrix vectorizations. See Kolda and Bader (2009) for explanations of tensor unfoldings and other tensor properties. Thus, in order to estimate a social-cognitive agreement (SCA) NNTuck, seemingly minor changes to the Kim and Choi (2007) algorithm nontrivially change its behavior. In considering other algorithmic approaches, Cambre et al. (1999) (and more recently Jin et al. (2022)) develop and discuss a method for estimating a _symmetric_ Tucker decomposition (with no nonnegativity constraint) wherein all three factors (\(\mathbf{U},\mathbf{V},\mathbf{Y}\) in our vocabulary) are constrained to be equivalent. However, in this and other related work, the focus is on developing algorithms for decomposing a data tensor which already has an appropriately symmetric structure. Such data symmetry is something that, again, is not reasonable to assume in the case of CSSs. As such, we move forward with altering Algorithm 1 to accommodate the SCA constraint, resigning ourselves to an estimation algorithm without guaranteed monotonic convergence to a local optimum. That said, this was only ever a weak guarantee relative to guarantees of reaching a global optimum, as enjoyed by many optimization procedures (e.g., for convex log-likelihoods). We alter the multiplicative updates from Kim and Choi (2007) in the following ways. First, we initialize nonnegative factors \(\mathbf{Y},\mathbf{V}\), and nonnegative core tensor \(\mathbf{\mathcal{G}}\). Then, we set \(\mathbf{U}=\mathbf{Y}\). For each multiplicative iteration, we proceed by first updating \(\mathbf{V}\), then updating \(\mathbf{U}\), and then updating \(\mathbf{Y}\), all according to the same multiplicative updates as in Algorithm 1. As our key modification, after updating \(\mathbf{Y}\) and before updating \(\mathbf{\mathcal{G}}\), we set both \(\mathbf{U}\) and \(\mathbf{Y}\) to be equal to their average. This step ensures that the model remains "feasible" within the constrained model class, essentially updating both factor matrices to reflect the gradients in both roles (as \(\mathbf{U}\) and \(\mathbf{Y}\) factors). We then update \(\mathbf{\mathcal{G}}\) as usual. As a final minor modification, given the lack of monotonicity guarantee, Algorithm 2 also features a modified termination criterion to ensure the return of the best (minimum-divergence) solution found during a given solution trajectory. We describe this approach in full in Algorithm 2 in Appendix C, and provide a python implementation in our code repository (see Appendix A). 
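A condensed sketch of the modified loop just described is given below. The `update_*` and `kl_divergence` arguments are placeholders standing in for the per-factor multiplicative updates of Kim and Choi (2007) (Algorithm 1) and the objective evaluation; only the initialization, the averaging step, and the best-solution bookkeeping shown here are the SCA-specific modifications described above.

```python
import numpy as np

def fit_sca_nntuck(A, K, update_V, update_U, update_Y, update_G,
                   kl_divergence, n_iter=500, seed=0):
    """Sketch of the SCA-constrained fit (Algorithm 2 in spirit).
    The update_* callables are assumed to implement the standard
    multiplicative updates; kl_divergence(A, G, U, V, Y) returns the
    current objective. For a CSS, L = N, so U and Y have the same shape."""
    rng = np.random.default_rng(seed)
    N, _, L = A.shape
    C = K                                    # SCA requires K = C
    Y = rng.random((L, C)); V = rng.random((N, K))
    G = rng.random((K, K, C))
    U = Y.copy()                             # initialize U = Y
    best_obj, best_params = np.inf, None
    for _ in range(n_iter):
        V = update_V(A, G, U, V, Y)
        U = update_U(A, G, U, V, Y)
        Y = update_Y(A, G, U, V, Y)
        avg = (U + Y) / 2.0                  # project back onto U = Y
        U, Y = avg, avg.copy()
        G = update_G(A, G, U, V, Y)
        obj = kl_divergence(A, G, U, V, Y)
        if obj < best_obj:                   # keep the best solution seen,
            best_obj = obj                   # since monotone decrease is
            best_params = (G.copy(), U.copy(), V.copy(), Y.copy())  # not guaranteed
    return best_params
```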
In Appendix C we also provide further details on our decision to choose this modification over others. This heuristic optimization procedure for returning a maximum likelihood estimate of an NNTuck with social-cognitive agreement is provided as an initial feasible approach to the question of modeling social-cognitive agreement in CSSs. It is an interesting open question if the multiplicative updates algorithm of Kim and Choi (2007), or some other algorithm, can be modified to produce a maximum likelihood estimate with local or global optimality guarantees. For example, Arora et al. (2012) showed that under "approximate separability" (Donoho and Stodden, 2003) assumptions on a data matrix, the nonnegative matrix factorization (NMF) has a guaranteed solution and an associated convex optimization problem. Their proposed algorithm overcame prior limitations of heuristics for approximating an NMF by minimizing the Frobenius norm error of the approximation. Establishing identifiability and convergence conditions for the nonnegative Tucker decomposition has been addressed in, e.g., Xu (2015) and Sun and Huang (2023). In Xu (2015), the author proposes an algorithm for estimating the nonnegative Tucker decomposition with global convergence guarantees, but does so for minimizing only the Frobenius loss. In Sun and Huang (2023), the authors propose a regularization that makes the nonnegative Tucker decomposition identifiable, but the associated loss based on KL-divergence is non-convex. Extending this work to estimating the nonnegative Tucker decomposition (and the nonnegative Tucker decomposition with social-cognitive agreement) with global convergence guarantees under loss given by the KL-divergence is promising future work. ### Statistical tests for cognitive structure The above structural assumptions are all articulated in terms of constraints on the parameters of the given models, where the constrained models all lie within a subspace of the more general (cognitively independent) model. As such, these assumptions are highly testable from data using likelihood ratio tests. We adapt three statistical tests from Aguiar et al. (2022) to have vocabulary specific to the interpretation of the CSS, and introduce an additional statistical test for social-cognitive agreement, all for studying the cognitive structure of a social network through its CSS. **Definition 5** (Cognitive independence): _For a multilayer network let model I be the cognitively independent NNTuck and let model II be the cognitively dependent NNTuck. 
A CSS has **cognitive independence** at level \(\alpha\) if the likelihood ratio test (LRT) with \((L-C)K^{2}-LC\) degrees of freedom is significant at level \(\alpha\)._ **Definition 6** (Cognitive dependence): _A CSS has **cognitive dependence** at level \(\alpha\) if the LRT described above is not significant at level \(\alpha\) for a pre-specified \(C\)._ **Definition 7** (Cognitive redundance): _A CSS has **cognitive redundance** at level \(\alpha\) if the LRT comparing the cognitively redundant NNTuck to the cognitively independent NNTuck with \((L-1)K^{2}\) degrees of freedom is not significant at level \(\alpha\)._ **Definition 8** (Social-cognitive agreement): _A CSS has **social-cognitive agreement** at level \(\alpha\) if the LRT comparing a cognitively dependent NNTuck with \(K^{\prime}\) and \(C^{\prime}\), not necessarily equal, to a social-cognitive agreement NNTuck with the constraint \(K=C\) and \(\mathbf{U}=\mathbf{Y}\) with \(2N(K^{\prime}-K)+K^{\prime 2}C^{\prime}+C^{\prime}N-K^{3}\) degrees of freedom is not significant at level \(\alpha\). The cognitively dependent NNTuck must be such that either one or both of \(C^{\prime}\geq K\) or \(K^{\prime}\geq K\) is true._ The use of likelihood ratio tests here, as in the analogous tests for layer interdependence in Aguiar et al. (2022), relies on Wilks' Theorem (Wilks, 1938) and its assumptions. Wilks' Theorem provides the standard theory for the asymptotic validity of likelihood ratio tests, but its application assumes that the likelihoods being considered are the global optima, something our optimization routines do not guarantee. We caution against overly naive interpretations of these tests, but extensive empirical investigations make us confident that our optimization routines are not missing any global optima that offer qualitatively different likelihood values than the local optima under consideration. For more discussion on the assumptions underlying the use of likelihood ratio tests here, as well as a discussion of the recent and related split-LRT (Wasserman et al., 2020), see Appendix C.

## 4 Empirical cognitive structure

We now use the models and tests introduced in previous sections to analyse four different CSS datasets, with the aim of exploring and interpreting the social and cognitive structures in each. In each of the below datasets, we interpret each person's responses as a layer of a multilayer network, representing the CSS as an \(N\times N\times N\) adjacency tensor \(\mathbf{\mathcal{A}}\), where each layer is represented as a frontal slice of the tensor. Krackhardt Advice/Friendship. In the original CSS work by Krackhardt (1987), \(N=21\) managers in a high tech firm were asked to report on their perceptions of the _advice network_ and the _friendship network_ of the firm, forming two CSS data sets. We also have information about each employee's affiliation to one of four departments (where the president doesn't belong to any department), their relative hierarchy to one another (president, vice president, supervisor), their tenure at the firm, and their age. In what we call the "Krackhardt Advice CSS" each person was asked who approached whom for work-related advice, and in the "Krackhardt Friendship CSS" each person was asked who was friends with whom. Hunter Friendship. Here, we analyze longitudinal CSSs from Hunter (2019) which track the friendship network perceptions of \(N=20\) college juniors from around the country in a summer leadership class over the course of six weeks. 
In each week, each student was asked to report on their perceptions of the friendships between students in the class. We also have data about each student's self-reported gender (in the case of this class, only "male" and "female" were reported), race, academic major, and undergraduate institution. We also know which of 10 dorm rooms and 4 study groups each student was assigned to. Although a CSS was collected at each of six weeks2, we focus on the data sets corresponding to the first and sixth week. Going forward, we refer to the friendship CSS datasets as the "Hunter Friendship, Week One CSS" and "Hunter Friendship, Week Six CSS", respectively. Footnote 2: The entire longitudinal CSS for each relationship type from Hunter (2019) could be meaningfully analyzed as a fourth order tensor of size \(20\times 20\times 20\times 6\), studying temporal factors through tensor decomposition methods. That said, this data constitutes the only presently known instance of such fourth order data, and we consider such an analysis to be beyond the scope of the present work. We analyze each CSS dataset above through a three-step procedure of (1) model selection, (2) statistical testing, and (3) estimation, visualization, and interpretation. For (1) model selection, our main goal is to assess which dimensions of the latent spaces (\(K\) and \(C\)) should be used for the analysis. To do so, we set up a cross-validation link prediction task. By splitting each dataset into a train and test set, we are able to assess how well each model performs in predicting _unseen_ data, which can help account for overfitting that may happen with models that have more parameters. The construction of the cross-validation approach is such that for each link prediction task we construct five different _masking tensors_ which each split the data into different _train_ and _test_ sets. For masking tensor \(\mathbf{\mathcal{M}}\), \(\mathbf{\mathcal{M}}_{ij\ell}=1\) indicates that the link between \(i\) and \(j\) in layer \(\ell\) is in the train set. Conversely, \(\mathbf{\mathcal{M}}_{ij\ell}=0\) indicates that the link between nodes \(i\) and \(j\) in layer \(\ell\) is in the test set (is _missing_ or _unknown_), and will be held out in the estimation of the NNTuck. In the _tubular_ link prediction task, which we use here, masking is done tube-wise (in the tensorial sense), meaning edges are always observed or unknown across all layers. Missing link \((i,j)\) in layer \(k\) implies that link \((i,j)\) is missing in all layers (\(\mathbf{\mathcal{M}}_{ijk}=0\Rightarrow\mathbf{\mathcal{M}}_{ij\ell}=0,\forall\ell\)). For \(b\)-fold cross-validation, tubes \((i,j,\cdot)\) in the tensor are missing with uniform and independent probability \(1/b\). We select the NNTuck with the highest training set log-likelihood from 20 runs of the multiplicative updates algorithm with different random initializations. Then, test-AUC is averaged across the five different maskings. This process is repeated for varying dimensions \((K,C)\) and model assumptions in the NNTuck. Next, for step (2), statistical testing, we use the appropriate dimensions \(K\) and \(C\) determined from the above parameter sweep to perform statistical tests for cognitive redundance, independence, and social-cognitive agreement using the corresponding LRTs. 
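To make steps (1) and (2) concrete, the sketch below (our own illustration with hypothetical helper names; the NNTuck fitting itself is omitted) builds the tubular masking tensors used for cross-validation and computes a likelihood ratio test p-value via Wilks' theorem.

```python
import numpy as np
from scipy.stats import chi2

def tubular_masks(N, L, b=5, seed=0):
    """b masking tensors for b-fold tubular cross-validation: each (i, j)
    tube is assigned to one fold uniformly at random, so an edge is either
    observed in all layers or hidden in all layers of a given fold."""
    rng = np.random.default_rng(seed)
    fold = rng.integers(0, b, size=(N, N))   # fold assignment per tube
    masks = []
    for f in range(b):
        M = np.ones((N, N, L), dtype=int)
        M[fold == f] = 0                     # hide the whole tube (i, j, .)
        masks.append(M)
    return masks

def lrt_pvalue(loglik_null, loglik_full, df):
    """Likelihood ratio test via Wilks' theorem: 2*(logL_full - logL_null)
    is compared to a chi-squared distribution with df degrees of freedom
    (df as given in Definitions 5-8)."""
    stat = 2.0 * (loglik_full - loglik_null)
    return chi2.sf(stat, df)
```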
Finally, as step (3), we estimate an NNTuck of the dataset using either Algorithm 1 or Algorithm 2 (the latter for models that feature social-cognitive agreement) with the model specification decided by the outcome of the LRT. We then visualize and interpret the social and cognitive spaces identified by these models. ### Krackhardt Advice CSS For this CSS with \(N=21\) managers, the test-AUC under different model specifications is shown in Figure 2 (left). We choose to examine the factor matrices corresponding to the cognitively dependent NNTuck (see Definition 2) with \(K=3\) and \(C=3\). We choose these parameters over others with a higher test-AUC due to the observation that higher values of either \(K\) or \(C\) result in a minimal increase in test-AUC++. We perform a likelihood ratio test to compare the goodness-of-fit of the nested, cognitively dependent NNTuck with \(K=C=3\), to the full cognitively independent NNTuck. In doing so, we fail to reject the null hypothesis that the data was generated from the smaller cognitively dependent model (see Table 1). Figure 2: The test-AUC averaged across a tubular fivefold cross-validation task for the Krackhardt advice (left) and friendship (right) CSS datasets. The pink and black lines correspond to the NNTuck model assumptions of cognitive redundancy and independence, respectively. Each other colored line corresponds to a different value of \(C\) in assuming cognitive dependence in the CSS, and the x-axis corresponds to different choices of the social latent space parameter \(K\). Based on this cross-validation task, we choose to examine the social and cognitive factor matrices of the advice and friendship CSS datasets corresponding to the cognitively dependent NNTuck with \(K=C=3\), and \(K=3,C=5\), respectively. Figure 3: The latent social and cognitive spaces in the high tech firm from Krackhardt (1987), identified by estimating a cognitively dependent NNTuck of the advice CSS with \(K=C=3\). The plotted network is of the network’s _consensus structure_, with an edge shown if at least 50% of the network perceived its existence. Each node’s position is determined by the departmental affiliation and hierarchy structure of the firm, where the person in the middle is the president, persons 1, 3, 17, and 20 are vice presidents, and the rest are supervisors. Each node is colored according to its proportional membership to each group, where a darker color denotes more proportional membership. We see that persons 5 and 16 belong mostly to the same cognitive space as the president, persons 0 and 13 belong mostly to the same cognitive space as person 14, and everyone else belongs to the third cognitive space. We inspect the latent spaces identified by this NNTuck in Figure 3. We see that the groupings of the social space of the advice network can be mostly attributed according to departmental affiliation within the firm: the three social groups mostly correspond to the left, bottom, and right, departmental groupings (visualized above in separate clusters), whereas the president (person 6) doesn't strongly belong to any of the three groups. The exception to this is the departmental grouping that we view in the upper center of the network visualization, containing nodes 9, 10, and 17, which doesn't clearly belong to any of the three social groups. 
The cognitive space, however, seems to group employees according to different attributes: those who perceive the advice network similarly to the president, those who perceive it similarly to person 14, and everyone else. As is, however, these cognitive groupings aren't entirely interpretable. To further interpret the identified cognitive space, we can rewrite the \(\mathbf{Y}\) factor matrix and the core tensor in the basis of \(C=3\) individuals in the network. We choose the three individuals according to the heuristic proposed in Aguiar et al. (2022), where the aim is to choose \(C\) such layers such that the corresponding rows of \(\mathbf{Y}\) are (nearly) linearly independent. Doing so, we identify person 6, person 14, and person 10 as the reference perspectives to consider. We transform \(\mathbf{Y}\) such that the \(C=3\) frontal slices of \(\mathbf{\mathcal{G}}\) correspond exactly to the affinity matrix from which persons 6, 14, and 10 generate their perceptions, respectively. Thus, the \(6th\), \(14th\), and \(10th\) rows of the transformed \(\mathbf{Y}^{*}\) matrix will be \([1,0,0]\), \([0,1,0]\), and \([0,0,1]\) respectively, and all other rows will represent each individual's relational schema _relative_ to these three people. We inspect this transformed \(\mathbf{Y}^{*}\) matrix, which identifies the _relative cognitive space_, by plotting each individual's proportional cognitive membership in Figure 4. Figure 4: The latent cognitive space of the Krackhardt (1987) advice CSS, rewritten relative to the relational schema of the president of the company, the supervisor we refer to as person 14, and person 10. Each node is colored according to its proportional membership to each cognitive group, where dark pink denotes more membership. Note that, because this plot shows the cognitive membership of each node _relative_ to persons 6, 14, and 10, person 6 (the president) has his entire membership in the first cognitive group, and persons 14 and 10 have their entire membership in the second and third cognitive groups, respectively. 
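One way to carry out this change of basis is sketched below (illustrative code under the stated assumptions, not necessarily the exact heuristic of Aguiar et al. (2022)): letting \(\mathbf{B}\) be the \(C\times C\) submatrix of \(\mathbf{Y}\) formed by the rows of the chosen reference individuals, setting \(\mathbf{Y}^{*}=\mathbf{Y}\mathbf{B}^{-1}\) and \(\mathbf{\mathcal{G}}^{*}=\mathbf{\mathcal{G}}\times_{3}\mathbf{B}\) leaves the reconstructed rates unchanged while turning the reference rows of \(\mathbf{Y}^{*}\) into standard basis vectors.

```python
import numpy as np

def relative_cognitive_space(Y, G, refs):
    """Rewrite Y and the core tensor G in the basis of the C reference
    layers given by `refs` (e.g., persons 6, 14, 10). Assumes the C x C
    matrix B = Y[refs] is invertible (the 'nearly linearly independent'
    heuristic); Y_star is not guaranteed to be nonnegative in general."""
    B = Y[refs, :]                                # (C, C)
    Y_star = Y @ np.linalg.inv(B)                 # reference rows become e_i
    G_star = np.einsum('klc,dc->kld', G, B)       # G x_3 B: slice d is the
                                                  # affinity matrix of refs[d]
    return Y_star, G_star
```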
| Dataset | \(H_{0}\) | \(H_{1}\) | LRT determination | p-value |
|---|---|---|---|---|
| Krackhardt Advice | Redundant | Independent | reject \(H_{0}\) | <1e-16 |
| Krackhardt Advice | Dependent \(K=C=3\) | Independent | fail to reject \(H_{0}\) | 0.887 |
| Krackhardt Advice | SCA \(K=C=3\) | Dependent \(K=C=3\) | reject \(H_{0}\) | <1e-16 |
| Krackhardt Friendship | Redundant | Independent | reject \(H_{0}\) | <1e-16 |
| Krackhardt Friendship | Dependent \(K=3,C=5\) | Independent | fail to reject \(H_{0}\) | 0.995 |
| Krackhardt Friendship | SCA \(K=C=3\) | Dependent \(K=3,C=5\) | reject \(H_{0}\) | 9.92e-13 |
| Hunter Friendship, Week One | Redundant | Independent | reject \(H_{0}\) | <1e-16 |
| Hunter Friendship, Week One | Dependent \(K=2,C=3\) | Independent | reject \(H_{0}\) | 0.005 |
| Hunter Friendship, Week One | SCA \(K=C=2\) | Dependent \(K=2,C=3\) | reject \(H_{0}\) | <1e-16 |
| Hunter Friendship, Week Six | Redundant | Independent | reject \(H_{0}\) | <1e-16 |
| Hunter Friendship, Week Six | Dependent \(K=2,C=3\) | Independent | fail to reject \(H_{0}\) | 0.375 |
| Hunter Friendship, Week Six | SCA \(K=C=2\) | Dependent \(K=2,C=3\) | reject \(H_{0}\) | <1e-16 |

Table 1: The p-values and LRT determinations for the Krackhardt and Hunter CSS datasets. See Appendix D for a discussion on the likelihood ratio test (LRT) and possible alternate statistical tests. Of note is that when testing for social-cognitive agreement, the LRT always rejects the hypothesis that the data was generated from the SCA NNTuck. This suggests that all of the CSS datasets we explore are better explained when we allow for the social space to differ from the cognitive space. Implications of whether or not SCA is present in a network are an exciting topic for future work.

Whereas we know that person 6 is the president of the firm, all we nominally know about person 14 is that he is a supervisor of a department, and that person 10 is a long-standing employee. Digging further into the original analysis of this dataset, however, we find that person 14 is identified as having a particularly interesting perception of his advice network (the friendship network is not discussed). Quoting from Krackhardt (1987) (emphasis our own), _"...**his own perception is that he is very active in the network**: advice is sought from him by 12 people; he actively seeks advice from 20 others; and he is on the crossroads of this network as evidence by his 81.15 betweenness score. **This self-evaluation is not shared by his coworkers...**Only three of the 12 indegrees he claims to have are confirmed...**Also, only nine of his 20 outdegree nominations are confirmed...And finally, his betweenness in the LAS just about disappears (betweenness \(=0.70\)). 
The Consensus Structure reveals that **people generally think that no one approaches him for advice, that he goes to only five people for advice, and that he is not in between any other pair of people.**"_ Thus, the relational schema that the 21 different individuals in this firm rely upon when recalling the advice network can be concisely described by the relational schema of just three distinct people: the president of the firm, this supervisor with an overly optimistic view of his surrounding advice network, and someone who seems to represent the remaining employees. We continue to analyze this network and these interpretations in the next section, as we explore and contrast the social and cognitive spaces of the friendship network.

### Krackhardt Friendship CSS

We turn now to analyze the same firm as above, with the CSS now representing the friendship relationships. Notably, the test-AUC for different model specifications suggests using \(K=3\), \(C=5\) as preferable to \(K=3\), \(C=3\), the values we used to analyse the advice network; the latter has a much lower test-AUC (see Figure 2, right). We see that for these values of \(K=3\) and \(C=5\), the standard LRT fails to reject the null hypothesis that the CSS is explained by the simpler cognitively dependent NNTuck. We inspect the social and cognitive spaces with this model specification in Figure 5. We observe that this decomposition identifies a similar three-dimensional social space to that identified in the advice CSS above. Again, the three social dimensions mostly identify three different department affiliations within the firm. Unlike in the advice network, however, we see that the president (person 6) almost entirely belongs to the third social space, whereas in the decomposition of the advice CSS his social membership was mostly spread across all three groups. Additionally, we see that person 7's social membership does not correspond to his departmental affiliation, but to that of another department. While we lack ethnographic data that might explain these friendship alliances, it is interesting to consider how and why these two people belong to different social groups when considering friendship as opposed to advice. In comparing the identified cognitive spaces in the advice and friendship CSSs for this firm, we see that, again, person 6 and person 5 belong to the same cognitive space. We also observe again that person 14 is identified as belonging to a different cognitive space than most of the others. Although the friendship CSS is not discussed in Krackhardt (1987), it is reasonable to assume that person 14 might have been just as overly optimistic in reporting his perception of the friendship network as he was in the advice network. Why is his relational schema, which sets him apart from many others in the firm, so notably different from that of his colleagues? While there are any number of possible explanations (internal firm politics, his socioeconomic status, his upbringing), it is also possible that he is singled out by the NNTuck because of the noise that he contributed to the CSS by exaggerating connections in his perceived network. In this sense, the NNTuck has the potential to identify a cleaner model of the data, if we were to remove the noisy perceptions. We see that using the NNTuck to understand the advice and friendship CSSs gives us a much cleaner and richer framework for analyzing these social networks. 
The original data analysis of this firm was able to identify person 14 as someone who had a noticeably abnormal perception of his social network by comparing his perception of the advice network to an aggregation of his coworkers' perceptions. The NNTuck, however, and the sociological theories that it builds upon and operationalizes, gives us the vocabulary to say that he has a different _relational schema_ than most of his coworkers. Likewise, for the advice network, we are able to identify two other groups with shared relational schema, as well as visualize the employees who share relational schema across the groups (for instance, person 8 shares the relational schema of both person 14 and person 10). Furthermore, we are able to visualize how both the social and cognitive spaces change when considering the advice network in contrast to the friendship network. A clear limitation of this present analysis is the lack of data about the individuals in this organization: each person is a manager in a high tech firm in the pacific northwest; we don't know much about the internal dynamics of the company; each person is a male aged between 27 and 59. According to Howard (1994), "Gender, race, and age are social systems of differentiation that are especially prone to cognitive categorization." In this present dataset, where each individual has the same gender and race, we are unable to confirm or refute whether the latent cognitive space identifies these relational schema. ### Hunter Friendship CSS We focus the analysis of this rich dataset by contrasting the identified social and cognitive spaces of week one and week six of the longitudinal study. Figure 5: The latent social and cognitive spaces in the friendship CSS of the high tech firm from Krackhardt (1987), identified by estimating a cognitively dependent NNTuck. Week One.When considering the predictive power of various NNTuck models of the week one CSS, we see that the cognitively dependent NNTuck with \(K=2\) and \(C=3\) performs just as well as the cognitively independent NNTuck (see Figure 6, left). Although the LRT comparing these two models rejects the smaller, cognitively dependent, model (see Table 1), this could be due to the fact that the two models have a very small difference in the number of parameters (see Appendix D for a larger discussion on using the LRT to evaluate factor models). Considering that the two models have nearly identical predictive power and similar log-likelihoods, we choose to examine the cognitively dependent model further. We inspect the identified social and cognitive space, as well as the relative cognitive space, in Figure 7. Notably, we see that during week one of the course, the students' social space can largely be described by their self-identified gender. Similarly, the identified cognitive spaces are well aligned with gender identity, where we see the first group is mostly one gender, the second group is mostly another gender. Interestingly, the third identified cognitive group, which mostly includes students 3 and 6, has students of both gender identities. While we do not have enough data at hand to understand why these students have different relational schema from the others with their same gender identity, our analysis provides an entry point for further analysis. Week Six.The test-AUC for different model specifications describing the last week of the friendship CSS lead us to inspect the NNTuck with \(K=2\) and \(C=3\) (see Figure 6, right). 
With this model choice we fail to reject the null hypothesis that the week six CSS was generated from a cognitively dependent NNTuck, and we explore this dependent NNTuck in further detail. Figure 8 gives visualizations of the identified social and cognitive spaces. The social spaces that are identified in the NNTuck estimation are not easily explained given the metadata we have (see Figure 9 in Appendix B for visualizations of available student attributes, including study groups, undergraduate institutions, and declared majors). Even without any external context about the students in each of the identified social and cognitive spaces in week six, the longitudinal nature of this dataset provides unique opportunities for other rich insights.

Figure 6: The test-AUC averaged across a tubular fivefold cross-validation task for the Hunter friendship week one (left) and week six (right) CSS datasets. We choose to examine the social and cognitive factor matrices of both CSS datasets corresponding to the cognitively dependent NNTuck with \(K=2\) and \(C=3\).

Figure 7: The latent social and cognitive spaces in week one of the Hunter Friendship CSS, identified by estimating a cognitively dependent NNTuck with \(K=2\) and \(C=3\). In the first row we plot the gender and race identifiers for each of the 20 students and in the last row we plot the cognitive space relative to the relational schema of persons 0, 6, and 18. Note that both the social and cognitive spaces in week one align well with the self-identified gender of each student. The plotted network is of the network’s _locally aggregated structure_, with an edge shown from node \(i\) to \(j\) if node \(i\) perceived its existence.

Figure 8: The latent social and cognitive spaces in the last week of the college leadership course friendship network from Hunter (2019), identified by estimating a cognitively dependent NNTuck with \(K=2\) and \(C=3\). In the last row we also plot the cognitive space relative to students 2, 9, and 10.

Notably, we can learn a lot about both the students and the course by comparing these spaces to those identified in week one. We first observe that the identified social and cognitive spaces no longer correspond with the self-reported gender identity of the students. Secondly, we note that students 3 and 6, who shared relational schema in the first week, now have different relational schema, with person 3 mostly belonging to the first cognitive space and person 6 mostly belonging to the last cognitive space. While there are many possible interpretations for the social and cognitive spaces identified in the last week (see Appendix B for one interpretation), the most valuable observations would come from a contextual analysis of these findings done directly by the researchers (see, for example, Carley, 1986, wherein the author compares the findings of a community detection algorithm with her own ethnographic observations of the study participants). Overall, the above observations have the potential to say a lot about this classroom setting, the instructional materials, and the students enrolled in this course. The course began with a relatively clear divide along gender lines, both in the social environment and the students' relational schema. Again, we quote Howard (1994) in her theory of social cognition wherein she notes that gender is one of the "social systems of differentiation that are especially prone to cognitive categorization." 
Furthermore, Howard offers additional theory about _why_ these initial relational schema occurred along gender lines, "systems of classification must be relatively simple in order to provide a beginning place for interaction. This interactional requirement generalizes the principle of cognitive efficiency: interaction is easier if there are fewer cognitive distinctions to consider...the constant use, in interaction, of dichotomies such as gender may help to keep them simple." Considering the last week of the course, however, after 6 weeks of instructional material, both the social and cognitive spaces became more complicated across identifiers. Whereas the students were prone to this easy heuristic of cognitive categorization--gender--in the first week of the course, we see richer and more complicated relational schema emerge by the end of the study. Again, in Howard's (1994) introduction of the theory of social cognition she writes, "if schemas are to be sustained and reproduced over time...they must be validated by the accumulation of resources that their enactment engenders. Schemas and resources constitute structure only when they imply and sustain each other over time." It could be argued that this departure from a gender-informed schema indicates the success of this particular course: that the students' initial schema was not validated and engendered. ## 5 Conclusion From the analyses of the cognitive social structures in this work, we conclude that individuals' _perceptions_ of their social networks are often just as, if not more, important than some identified "true" network. These perceptions are rich with information, and arguably more valuable when considered all together. By considering a CSS data object as a tensor, and estimating a factor model of that data tensor, we are able to simultaneously consider each individual's relational schema--how they categorize and compress their perceptions of their social world--and how it relates to others'. This work highlights many exciting opportunities for future work. Above, we proposed the question of empirically studying and testing for _social-cognitive agreement_ in a network, and future work can be dedicated to further studying its sociological (or cognitive) implications. As we briefly discussed above, we find evidence that none of the CSS datasets we explore are well explained by a model with social-cognitive agreement (see Table 1). _When_ can we expect to see social-cognitive agreement? _What_ does it mean when social-cognitive agreement is or isn't observed in a group? Do only certain types of social relationships reflect an equivalence in social and cognitive space? A domain-informed perspective and study of these questions (perhaps with an accompanying CSS dataset) which uses the tools proposed here (or other appropriate latent space models) to empirically study these specific questions would provide rich insight into the cognitive processes underlying social networks. At a technical level, as discussed in the main text as well as in Appendix C, there are also several open questions regarding the optimization techniques we propose for estimating the social-cognitive agreement NNTuck. As presented, it does not have monotonic convergence guarantees (and none of the multiplicative update algorithms for tensor factorization have any guarantees of global optimality). 
As such, we cannot confidently assume that the resulting estimated NNTuck satisfies the maximum likelihood assumption necessary for both the standard LRT and the split-LRT (as discussed in Appendix D). For future work aiming to identify and/or test social-cognitive agreement in a network, efforts to improve the suggested algorithm and test would be very helpful contributions. As another direction for future work, we saw in the analysis of the Krackhardt CSS that the estimated cognitive space identified persons 6, 10, and 14 as having notably different relational schema from one another. In Section 4.2 we discuss how person 14 may have been singled out due to his notably dense and optimistic perceptions of the network. Although we focused this work on the introduction of the NNTuck as a tool for studying CSSs, interesting future work could further explore this question of identifying _noise_ in CSS datasets, possibly comparing the cognitive space of the NNTuck to the latent methods proposed by Sewell (2019) and De Bacco et al. (2023). In a separate direction, recent work on information diffusion has focused on how information and behaviors spread differently across different _layers_ of multilayer social networks, aiming to identify which types of relationships are most important for purposes of influence maximization (Kempe et al., 2003). This examination of information cascades from a multilayer perspective readily lends itself to interesting questions in the CSS space, namely, _how does information propagate across different network perspectives?_ and _does information seeded in different cognitive spaces spread differently?_ It's well established that seeding information in different parts of a social space leads to differences in diffusion (Krackhardt, 1996; Banerjee et al., 2013). To go a step further, we contend that perhaps differences in cognitive space--when departing from social space, see our earlier discussion of social-cognitive agreement--are even more significant. When people share a piece of information, they do not have an omniscient view of some ground truth network: they act and share according to their perceptions. Indeed, to quote Thomas and Thomas (1928), just as Krackhardt did in his original work, "if men define situations as real, they are real in their consequences." We believe factor-based analyses of cognitive social structure, using the NNTuck, can meaningfully help us better understand how cognition impacts information propagation. Currently, CSS data is collected by asking each person _yes/no_ questions about relationships in their surrounding network. Another interesting direction for future work is incorporating something akin to an _unknown_ response into the space of possible answers for CSS surveys. Distinguishing between perceived relationships, educated guesses at relationships, and pure speculation about relationships ought to have interesting impacts on how CSSs can be interpreted, including through properly adjusted NNTuck models, showing how and when relational schema are most important. Finally, there is an obvious lack of CSS data from networks consisting of more than 30 people. This is, in large part, due to the immense burden (both on the survey administrator and participant) of asking each person in a network of size \(N\) to report on \(N^{2}-N\) distinct relationships. Collecting the entire CSS, however, may be unnecessary depending on the research objective. 
For example, if the end goal of collecting CSS data is to identify the generative NNTuck and the network's associated latent social and cognitive spaces, partial data may be more than sufficient. Extending the active matrix factorization for surveys work of Zhang et al. (2020) to tensors could potentially allow for efficient CSS data collection of much larger networks. We conclude by urging that there is much to be learned from continuing to view the CSS as a tensor-object. We hope that the present work opens new research directions towards the aim of uncovering relational schema in social networks, and are eager for future work that continues in this pursuit. ## Funding IA acknowledges support from the NSF GRFP and the Knight-Hennessy Scholars Fellowship. JU acknowledges partial support from ARO (#76582-NS-MUR) and NSF (#2143176). ## Declaration of competing interests The authors declare that they have no known competing interests that could have influenced the work in this paper. ## Acknowledgements We thank Keith Hunter for making his data available and for his generous availability for discussions. We thank Amir Goldberg and our MURI collaborators for their fruitful discussions and encouragement on this work.
When people are asked to recall their social networks, both theoretical and empirical work suggests that they tend to rely on shortcuts. A cognitive social structure (CSS) is a network built by layering individuals' perceptions of that network. Because multiple perceptions of the same network exist, a CSS contains rich information about how these shortcuts are represented, which leads to the question: can we identify people who share the same shortcuts? In this work, we propose a method for identifying cognitive structure across multiple network perceptions, analogous to how community detection identifies social structure in a network. To model social and cognitive structure simultaneously, we represent the CSS as a three-dimensional tensor and approximate it with a low-rank, nonnegative Tucker decomposition (NNTuck), a process closely related to estimating a multilayer stochastic block model (SBM).
2310.04431
Can neural networks count digit frequency?
In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. It has various applications in machine learning and computer vision, e.g. for obtaining the frequency of a target object in a visual scene. We considered this problem as a hybrid of classification and regression tasks. We carefully create our own datasets to observe systematic differences between different methods. We evaluate each of the methods using different metrics across multiple datasets.The metrics of performance used were the root mean squared error and mean absolute error for regression evaluation, and accuracy for classification performance evaluation. We observe that decision trees and random forests overfit to the dataset, due to their inherent bias, and are not able to generalize well. We also observe that the neural networks significantly outperform the classical machine learning models in terms of both the regression and classification metrics for both the 6-digit and 10-digit number datasets. Dataset and code are available on github.
Padmaksh Khandelwal
2023-09-25T03:45:36
http://arxiv.org/abs/2310.04431v1
## Can Neural Networks Count Digit Frequency? ### Abstract In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. It has various applications in machine learning and computer vision, e.g. for obtaining the frequency of a target object in a visual scene. We considered this problem as a hybrid of classification and regression tasks. We carefully create our own datasets to observe systematic differences between different methods. We evaluate each of the methods using different metrics across multiple datasets. The metrics of performance used were the root mean squared error and mean absolute error for regression evaluation, and accuracy for classification performance evaluation. We observe that decision trees and random forests overfit to the dataset, due to their inherent bias, and are not able to generalize well. We also observe that the neural networks significantly outperform the classical machine learning models in terms of both the regression and classification metrics for both the 6-digit and 10-digit number datasets. Dataset and code are available on github. ### Introduction Some of the fundamental aspects of deep learning were introduced quite early, e.g. backpropagation[1] and deep convolutional neural networks[2], however, it required an increase in computational power and access to large datasets[3, 4, 5] to get mainstream. Recently, these learning techniques have been shown to be successful in different tasks like playing the game of Go[6] and even the task of question-answering interactions, e.g. instructGPT[7] which led to recently popular ChatGPT. In this paper, we show that it is still not easy to use the recent machine learning models for a simple but important task of counting the frequency of different digits in a given sequence of numbers, e.g. Figure 1 shows that even ChatGPT is not good at this task. This task has several downstream applications, e.g. counting the number of objects detected in a scene[8, 9]. We compare different classical machine learning and neural network-based methods for this task. As part of classical methods, we utilize decision trees[10, 11] and random forests[12, 13, 14]. Thus, in this research work, we try to understand classical machine learning and neural network architectures and their effects. Decision Tree and Random Forests: A decision tree is created using a binary split which decides the branch to allocate for a data sample. The quality of a split is decided by a measure of impurity, e.g. "gini", which can be similar to the sum of the standard deviation of samples lying on each side of the split[15, 16], hence the best split is likely to have the least "gini" score. Refer to Figures 6 to 9 to see decision tree structures. Decision trees can face the issue of overfitting which can be avoided by using random forests[12, 13, 14]. The basic idea behind random forests is to create lots of large decision trees such that their predictions are uncorrelated[14] and then take the average of their predictions, which is also called bagging[9]. There are different approaches to create uncorrelated models, e.g. by training them on different subsets of data, by considering a random subset of columns for each split, etc[12, 13, 14]. Random forests have been shown to work quite well in practice, which is also evident from this work. 
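As a rough illustration of the split-quality idea described above, the sketch below scores a candidate threshold by the spread of the targets on each side of the split, following the text's description of the impurity measure. It is a simplified stand-in for the criteria actually used by scikit-learn (which, among other things, weights each side by its size), and all data here are made up.

```python
# Toy split-quality score for a regression tree node: sum of the target spread on each side.
import numpy as np

def split_score(x, y, threshold):
    left, right = y[x <= threshold], y[x > threshold]
    # lower is better; real implementations also weight each side by its sample count
    return left.std() + right.std()

x = np.array([1, 2, 3, 10, 11, 12], dtype=float)
y = np.array([0, 0, 1, 5, 6, 5], dtype=float)
print(split_score(x, y, 3), split_score(x, y, 10))   # the split at 3 separates the two groups better
```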
Our major contributions in this work are listed below: * We systematically create our own datasets to bring out the differences in performance of different methods. * We carefully split the datasets into training, validation and test sets to test the generalization capabilities of different methods across dataset sizes. * For fair evaluation of the methods, we do multiple runs of each method to obtain statistical results. We also consider different metrics for both regression-based evaluation and accuracy-based evaluation. * We also list specific examples to observe the overfitting behavior of decision trees and random forests which is not observed in the neural networks. * We also perform hyper-parameter tuning of the neural networks and provide our observations as part of the ablation studies. Splitting each dataset into training, validation, and test sets allows the hyperparameters of the neural networks to be fine-tuned on the validation set, with the final models evaluated on the unseen and unbiased test set, whose samples follow the same distribution as the training and validation sets. The training set of size 90,000 represents 9% of the total possible 6-digit numbers. This can help us understand how well the performance of the machine learning models generalizes to unseen 6-digit numbers. To further challenge the generalizability of the models and test their capability to learn from limited data, we also considered a 10-digit number dataset, as a 90,000-sized training set represents only 0.0009% of the total possible 10-digit numbers. We show that this change in the fraction of the dataset seen (from 9% to 0.0009%) has the least effect on the performance of the neural networks [1, 2] as compared to the classical machine learning models [10, 11, 12, 13, 14]. ### Implementation For the implementation of the different machine learning models, we extensively used Jupyter Notebooks with the _scikit learn_[17] and _fastai_[18] libraries. While _scikit learn_[17] has several built-in classical ML models, _fastai_[18] has implementations of several state-of-the-art deep learning models. Using these libraries helps us overcome the challenge of tediously and manually assigning all hyperparameters and thus allows us to quickly experiment with multiple methods and techniques. We decided to use the decision tree and random forest regressors as classical ML models. Decision trees [10] build regression or classification models in the form of a tree structure. At every node, the tree splits the dataset into two subsets such that the "gini" score is minimized, incrementally developing the decision tree. The final result is a tree with decision nodes and leaf nodes. A random forest [13, 14] is a meta-estimator that fits a number of decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and avoid over-fitting. Figure 2: 6-Digit Original Dataset: a 6-digit number (rightmost column) and the corresponding count of each digit. The dataset follows a specific labeling pattern, hence we believe that the decision tree could, perhaps, identify the necessary comparisons to perfectly, or nearly perfectly, predict the pattern. Random forests are, in general, among the best-performing and most versatile classical ML models, which is a key reason for their widespread popularity; they therefore also stood out as a possibly strong baseline. 
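A minimal sketch of how datasets like those in Figures 2-5 could be generated and split, assuming random 6-digit numbers; the column names, the validation/test sizes, and the use of pandas are illustrative rather than taken from the paper.

```python
# Sketch: build the digit-count dataset in its "raw number" and per-digit-column forms, then split it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
nums = rng.integers(100_000, 1_000_000, size=120_000)                 # random 6-digit numbers
digits = np.array([[int(c) for c in str(n)] for n in nums])           # one column per digit position
counts = np.stack([(digits == d).sum(axis=1) for d in range(10)], axis=1)  # labels: count of 0..9

df = pd.DataFrame(digits, columns=[f"d{k}" for k in range(6)])
df["number"] = nums                                                   # the raw number as a single feature
for d in range(10):
    df[f"count_{d}"] = counts[:, d]

train, valid, test = df[:90_000], df[90_000:105_000], df[105_000:]    # 90k train, as in the text
print(train.shape, valid.shape, test.shape)
```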
Let \(x_{i}\) be the \(i^{th}\) number or sample for \(1\leq i\leq n\), let \(y_{i}\) be the ground-truth label vector for the \(i^{th}\) number such that \(y_{ij}\) is the count of the \(j^{th}\) digit for \(0\leq j\leq 9\), and let \(\hat{y}_{i}\) be the predicted vector for the \(i^{th}\) number such that \(\hat{y}_{ij}\) is the predicted count of the \(j^{th}\) digit for \(0\leq j\leq 9\). The regression performance metrics we consider are root mean squared error and mean absolute error, the two popular metrics in regression, and the classification metric we consider is accuracy. Root mean squared error is calculated as \[RMSE=\sqrt{\sum_{i=1}^{n}\sum_{j=0}^{l-1}\frac{\left(y_{ij}-\hat{y}_{ij}\right)^{2}}{nl}}\] and the mean absolute error is calculated as \[MAE=\sum_{i=1}^{n}\sum_{j=0}^{l-1}\frac{\left|y_{ij}-\hat{y}_{ij}\right|}{nl},\] where \(n\) is the total number of samples (or numbers), \(l\) is the length of the output vector (which is 10 for the counts of the 10 digits), \(y_{i}\) is the \(i^{th}\) ground-truth label vector, and \(\hat{y}_{i}\) is the \(i^{th}\) predicted vector. Figure 3: 10-Digit Original Dataset: a 10-digit number (rightmost column) and the corresponding count of each digit. The problem statement can be tackled either using a regression method or a classification method. The count of each of the 10 digits is limited to integers 0 to 6 for the 6-digit set and 0 to 10 for the 10-digit set. However, if we consider a classification method, the presence of different digits would require an excessively complex and yet underperforming multi-class multi-label classification method which may easily overfit the small fraction of real data we have. Therefore, to tackle this problem, we first implemented multi-output regression models and generated the two error metrics, and then modified the predictions by rounding them off to the nearest whole number (predictions less than zero rounded up to zero, and those more than the total number of digits rounded down to the total number of digits, i.e., 6 and 10 respectively). We can therefore also consider an accuracy metric over these predictions, which we define as \[Accuracy=\sum_{i=1}^{n}\sum_{j=0}^{l-1}\frac{I\left(y_{ij}=\hat{y}_{ij}\right)}{nl},\] where \(I(\cdot)\) is the indicator function. All the neural networks were composed of input layers, dense linear layers, and dense non-linear layers that use ReLUs (Rectified Linear Units)[3] as activation functions, trained with SGD[1, 2, 3] and Adam optimizers[19]. For reference, a ReLU layer is used to implement a non-linearity in the neural network to better trace a non-linear pattern; it is essentially an identity function for all non-negative values, and zero for negative values. ### Experiments The results show that neural networks performed significantly better than the decision tree and random forest models, especially when using the modified dataset. The best results were obtained by using the appropriate number of layers, learning rate, and number of epochs. Figure 4: 6-Digit Original Dataset with 16 columns: a sequence of 6 digits (rightmost 6 columns) and the corresponding count of each digit (left columns). Figure 5: 10-Digit Original Dataset with 20 columns: a sequence of 10 digits (rightmost 10 columns) and the corresponding count of each digit (left columns). The results are shown in Tables 1, 2, 3, and 4. 
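The metrics above, including the rounding-and-clipping "classification modification", can be written compactly as follows; the per-element averaging used for the accuracy is an assumption consistent with the formulas given, not the authors' published code.

```python
# Sketch of the evaluation metrics described above (illustrative, not the authors' exact code).
import numpy as np

def rmse_mae(y_true, y_pred):
    err = y_true - y_pred
    return np.sqrt(np.mean(err ** 2)), np.mean(np.abs(err))

def rounded_accuracy(y_true, y_pred, num_digits=6):
    # "classification modification": round to nearest integer, clamp to [0, num_digits]
    y_cls = np.clip(np.rint(y_pred), 0, num_digits)
    return np.mean(y_cls == y_true)

y_true = np.array([[1, 0, 2], [0, 3, 0]], dtype=float)
y_pred = np.array([[0.9, 0.1, 2.2], [-0.2, 2.6, 0.4]], dtype=float)
print(rmse_mae(y_true, y_pred), rounded_accuracy(y_true, y_pred))
```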
For reference, the following keys are provided to identify the different models: * Decision Tree trained on the original dataset * Random Forest trained on the original dataset * Decision Tree trained on the modified dataset * Random Forest trained on the modified dataset * _fastai.tabular_ implemented neural network[20] * _fastai.tabular_ neural network implemented with a hidden embedding[20]. We report RMSE, MAE and Accuracy metrics for each of the methods. We run each method multiple times on the validation set to obtain statistical errors. The results are consistent for both the 6-digit and 10-digit datasets, and by employing both the regression and classification metrics. However, it is key to note that even the neural networks do not have perfect accuracy but it is almost 100%. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Method** & **RMSE** & **MAE** & **Accuracy** \\ \hline Decision Tree 1 & 0.998 & 0.693 & 43.986\% \\ \hline Decision Tree 2 & 1.018 & 0.712 & 43.198\% \\ \hline Random Forest 1 & 0.864 & 0.666 & 44.545\% \\ \hline Random Forest 2 & 0.620 & 0.495 & 52.827\% \\ \hline Neural Network & 0.303 & 0.216 & 97.833\% \\ \hline Neural Network \(+\) Embedding & 0.274 & 0.208 & 97.920\% \\ \hline \end{tabular} \end{table} Table 4: 10-Digit Test Set \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Method** & **RMSE** & **MAE** & **Accuracy** \\ \hline Decision Tree 1 & 0.997\(\pm\)0.000 & 0.693\(\pm\)0.000 & 44.167\% \\ \hline Decision Tree 2 & 1.021\(\pm\)0.001 & 0.714\(\pm\)0.000 & 42.994\% \\ \hline Random Forest 1 & 0.862\(\pm\)0.000 & 0.666\(\pm\)0.000 & 44.583\% \\ \hline Random Forest 2 & 0.623\(\pm\)0.001 & 0.499\(\pm\)0.001 & 53.019\% \\ \hline Neural Network & 0.293\(\pm\)0.025 & 0.221\(\pm\)0.018 & 98.256\% \\ \hline Neural Network \(+\) Embedding & 0.210\(\pm\)0.014 & 0.162\(\pm\)0.010 & 96.965\% \\ \hline \end{tabular} \end{table} Table 3: 10-Digit Validation Set. For statistical error, each method was run 5 times. networks are only slightly affected or nearly unaffected by the increase in the digits, especially considering the large difference in the proportionality of more possible values in 6-digit and 10-digit numbers as mentioned earlier. Modified dataset effect: It is observed that the modified dataset improves the performance of both decision trees and random forests, however, substantially more for random forests. This could be attributed to the tendency of random forests to generate many decision trees over multiple different features, instead of a single feature which generated the one and only possible tree given in the figures below. The averaging process of random forests [12, 13] over several decision trees in the modified dataset and on multiple batches of random, unbiased data is responsible for generating different outputs every time they are run and causing substantially less error and more accuracy compared to the performance on the original dataset. This could also be the explanation for the decision trees and random forests generating exactly the same performance consistently on the original datasets for both 6-digit and 10-digit numbers across multiple runs, thus, having no change in the statistical error, as only a single decision tree is possible and only a single set of decision trees and their respective batches are being computed in the random forest. 
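For orientation, here is a hedged sketch of the tree and forest baselines listed above on a synthetic version of the 6-digit task, including the leaf-count check that becomes relevant in the overfitting discussion that follows; hyperparameters and data sizes are illustrative only.

```python
# Illustrative baselines: decision tree and random forest regressors on synthetic 6-digit data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
nums = rng.integers(100_000, 1_000_000, size=20_000)
X = nums.reshape(-1, 1).astype(float)                      # "original dataset": raw number as one feature
y = np.array([[str(n).count(str(d)) for d in range(10)] for n in nums])

X_tr, X_te, y_tr, y_te = X[:18_000], X[18_000:], y[:18_000], y[18_000:]
tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=50, random_state=0, n_jobs=-1).fit(X_tr, y_tr)

print("tree leaves:", tree.get_n_leaves())                 # a leaf count near the training size hints at memorization
for name, model in [("tree", tree), ("forest", forest)]:
    rmse = np.sqrt(np.mean((y_te - model.predict(X_te)) ** 2))
    print(name, "test RMSE:", round(float(rmse), 3))
```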
Decision tree overfits: As we used decision tree analysis methods, it was observed that the decision tree had created over 85,000 leaf nodes for the training dataset of 90,000 numbers for both datasets, which is a clear example of an overfitting and memorizing model. The random forest model performed slightly better than the decision tree model; however, it is worth mentioning that as a random forest creates many decision trees on unbiased data and bags them together, it will always outperform decision trees. It is also worth noting that the decision tree created many numerical splits to make nodes and for inference, it simply outputs the average of the count of each digit across numbers reaching a leaf node during training, refer to Figure 6, Figure 7, Figure 8 and Figure 9, which shows that both the classical ML models clearly could not interpret any patterns. Figure 7: First 6 nodes of the decision tree for the modified 6-digit training dataset. Figure 8: First 6 nodes of the decision tree for the original 10-digit training dataset (a): the top part, and (b): the lower part of the decision tree. Figure 9: First 6 nodes of the decision tree for the modified 10-digit training dataset (a): the top part, and (b): the lower part of the decision tree. We also experimented with a handful of outlier data points or numbers to observe predictions of the classical ML models. For the original 6-digit dataset we tried the two pairs of consecutive numbers: (999998, 9999999) and (100000, 100001). The decision tree predicted [0, 0, 0, 0, 1, 0, 0, 0, 0, 5] for both numbers of the first pair and [4, 2, 0, 0, 0, 0, 0, 0, 0, 0] for both numbers of the second pair, and random forest after undergoing the classification modification predicted [0, 0, 0, 0, 0, 0, 0, 1, 0, 5] for the first pair and [4, 2, 0, 0, 0, 0, 0, 0, 0, 0] for the second pair. Rerunning the classical ML models on the modified dataset still generated similar results: the decision tree predicted [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 5] for the first pair and [3, 3, 0, 0, 0, 0, 0, 0, 0, 0] for the second pair, and random forest after undergoing the classification modification predicted [0, 0, 0, 0, 0, 0, 0, 1, 5] for first pair and [3, 3, 0, 0, 0, 0, 0, 0, 0, 0] for the second. Thus these classical methods are making the same prediction for the successive numbers. This shows the inherent limitation of the decision tree and random forest, as they are splitting the nodes based on the numeric values of the numbers and not the count of each digit. For the 10-digit dataset, we tried the two pairs of numbers: (9999999999, 9999999998) and (100000000, 100000001). The decision tree predicted [0, 1, 1, 0, 2, 0, 0, 0, 0, 6] for the former and [4, 2, 0, 1, 2, 0, 0, 0, 1, 0] for the latter. The random forest, whereas, predicted [0.02, 0.61, 0.71, 0.26, 1.31, 0.2, 0.29, 0.35, 0.75, 5.5] for the former and [3.57, 1.67, 0.52, 0.95, 1.81, 0.05, 0.4, 0.02, 0.57, 0.44] for the latter which after the classification modification are [0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 6] and [4, 2, 0, 1, 2, 0, 0, 0, 1, 0] respectively. The results are similar for the modified dataset. Evidently, this is another indication of the memorization that these classical ML models underwent and how they failed poorly in pattern recognition, which is even more evident in the 10-digit dataset. ### Observations on Neural Networks The neural networks, as aforementioned, outperformed classical ML models in every scenario and for both datasets. 
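Below is a minimal PyTorch sketch of the kind of fully-connected ReLU network described for this task; the authors used fastai's tabular learner rather than raw PyTorch, so the layer sizes, batch, and training step here are illustrative only.

```python
# Illustrative PyTorch version of the dense ReLU network described above (not the authors' fastai code).
import torch
import torch.nn as nn

class DigitCounter(nn.Module):
    def __init__(self, n_in=6, hidden=(96, 96, 96), n_out=10):
        super().__init__()
        layers, d = [], n_in
        for h in hidden:                                   # dense linear + ReLU hidden layers
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        layers.append(nn.Linear(d, n_out))                 # one output per digit count (digits 0..9)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = DigitCounter()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
x = torch.randint(0, 10, (32, 6)).float()                  # a batch of numbers, one column per digit
y = torch.stack([(x == d).sum(dim=1) for d in range(10)], dim=1).float()
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```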
According to our hyperparameter optimization, we found the following best values for all the different scenarios using 16 epochs and [x,y,z] layers, where x,y, and z respectively are the number of parameters in each of the non-linear (ReLU [6]) hidden layers: * Layers = [96,96,96], Learning Rate = 0.01 \(\circ\) Neural Network with Embedding - Layers = [96,96,96], Learning Rate = 0.01, Embeddings are [10,100] by considering each of the 10 unique digits * Layers = [128,128,128], Learning Rate = 0.01 \(\circ\) Neural Network with Embedding - Layers = [256,256,256], Learning Rate = 0.005, Embeddings are [10,100] by considering each of the 10 unique digits It could be hypothesized that as the neural networks utilize stochastic gradient descent to minimize loss by altering the parameters or weights and implement non-linearities through the ReLU layers, they at least trace out the non-linear pattern very well1,2. The 100-dimensional embeddings were used as an input feature for each of the ten possible values. Overall they did not significantly alter the predictions across the different metrics. It is an intriguing detail that the classical ML models, which gave an accuracy of nearly 90% for 6-digit numbers, although by memorization, fell to less than or nearly 50% accuracy for 10-digit ones. On the contrary, neural networks hardly changed by even 1% in accuracy across datasets. They also produced less than half the errors compared to the best classical ML model baseline, which is the random forest, in both metrics. The following loss curve vs the number of epochs graphs, refer to Figure 10(a), 10(b), 10(c) and 10(d), indicate that the neural networks did not undergo any form of overfitting or memorization. This shows the generalization capability of neural networks. Similar to the classical ML models, we also worked with the following consecutive numbers for the neural networks: 6-digit numbers - (99999, 999998) and (100000, 100001); 10-digit numbers - (9999999999, 999999998) and (100000000,100000001). Here are firstly the results by ChatGPT3 when asked for the task for recognizing the frequency of each digit in the above numbers, refer to Figure 11(a), 11(b), 11(c), 11(d), 11(e), 11(f). Figure 10: **(a)-(d):** _Loss (MSE) Curves for Neural Networks vs Number of Epochs_ To summarize the results, except for the number 9,999,999,999 which it predicted completely correctly, all the predictions by ChatGPT3 were even worse than the classical ML models. This further showcases the deceptiveness of the simplicity of the task. The neural networks, on the other hand, produced the following results after the classification modification: * Digit Dataset: Figure 11: **(a) - (b):**_ChatGPT3 responses for the above-mentioned numbers_ * Input: (999999, 999998) and (100000, 100001) * Neural Network output: [0,0,0,0,0,0,0,0,0,5] and [0,0,0,0,0,0,0,0,1,4] for the former pair, and [5,1,0,0,0,0,0,0,0,0] and [4,2,0,0,0,0,0,0,0,0] for the latter. * Neural Network with Embedding output: [0,0,0,0,0,0,0,0,6] and [0,0,0,0,0,0,0,0,0,1,5] for the former pair, and [5,3,0,0,0,0,0,0,0,0] and [3,3,0,0,0,0,0,0,0,0] for the latter. * 10 - Digit Dataset: * Input: (9999999999, 9999999998) and (1000000000,1000000001) * Neural Network output: [0,0,0,0,1,1,0,0,0,9] and [0,0,0,0,1,1,0,0,1,8] for the former pair, and [7,2,1,0,1,0,0,1,0,0] and [7,2,1,0,0,0,1,0,0,0] for the latter. 
* Neural Network with Embedding output: [0,1,0,0,0,1,0,2,2,9] and [0,0,0,0,1,0,0,1,9] for the former pair, and [9,1,0,0,0,1,0,0,0,0] and [9,2,0,0,0,0,2,0,0,1] for the latter. Interestingly, half of these predictions are incorrect but the other half are either completely correct or close to it with one or so digits wrong. They, at least, do not make the exact same prediction for the successive numbers unlike the classical ML models which means that they are partially learning the pattern. However, similar to classical ML models, their performance significantly worsens for 10-digit numbers as well. The proportion of data seems to play a significant role in the performance of all the models but with varying degrees. #### Ablation Study When running the neural networks on the 6-digit and 10-digit test sets, we found some alternative hyperparameter values, learning rate (lr) and layers, which gave significantly better outputs in terms of the regression metrics. We have mentioned them in the table given below, refer to Table 5. ## Conclusion In this research work we compared the performance of different classical machine learning models and neural networks in identifying the frequency of occurrences of each digit in a given \begin{table} \begin{tabular}{|l|l|l|l|} \hline **6-Digit Test Set** & **Hyperparameters** & **RMSE** & **MAE** \\ \hline Neural Network + Embedding & lr = 1e-5, layers = (96,96,96) & 0.093 & 0.073 \\ \hline **10-Digit Test Set** & & & \\ \hline Neural Network & lr = 0.003, layers = [128,128,128] & 0.171 & 0.130 \\ \hline Neural Network + Embedding & lr=5e-3, layers = [256,256,256] & 0.221 & 0.168 \\ \hline \end{tabular} \end{table} Table 5: Alternative hyperparameter values for neural networks on the test sets number. We observed that the neural networks significantly outperformed the classical ML models in terms of both the regression and classification metrics for both the 6-digit and 10-digit number datasets. We discovered that some of the behaviors of the classical machine learning models such as split condition and averaging made the trees extremely biased and led to overfitting and memorization. Thus they failed in pattern recognition. The neural networks, on the other hand, thanks to their non-linear optimization were substantially more successful in recognizing the evident pattern. The accuracy was greater than 95% for all scenarios which indicates that the deep learning models did, in fact, learn the pattern accurately. This research further acknowledges the vast learning capabilities and adaptability of neural networks that have been stated in previous research work. All the experiments were conducted on a MacBook M2 Air in a matter of two months. With more time, one could potentially extend the research to other datasets with larger numbers of digits and may find various other trends with neural networks. Regardless, they already seem to be reliable in learning this unconventional, yet simple pattern. Furthermore, despite the research being experimental in nature, the results obtained in this research can potentially be applied to downstream computer vision problems, such as counting the number of times a specific object occurs in an image, which is an essential task in many computer vision applications [3, 5, 15, 16]. Also, the ability to detect the most frequent elements can be used to detect the rare elements, which can have applications in healthcare, e.g. to detect rare diseases. 
## Acknowledgement I would like to acknowledge the unconditional support and guidance offered from my mentor Mr. Viveka Kulharia, PhD in Computer Vision from the University of Oxford, for assisting me in everything, from researching the idea through his resources to writing the paper.
In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. This has various applications in machine learning and computer vision, for example obtaining the frequency of a target object in a visual scene. We treat this problem as a hybrid of classification and regression tasks. We carefully create our own datasets to observe systematic differences between the methods, and we evaluate each method using different metrics across multiple datasets. The metrics used were root mean squared error and mean absolute error for regression evaluation, and accuracy for classification evaluation. We observe that decision trees and random forests overfit to the dataset, due to their inherent bias, and do not generalize well, while the neural networks significantly outperform the classical machine learning models in terms of both the regression and classification metrics on both the 6-digit and 10-digit number datasets.
2309.09076
Dynamical Phonons Following Electron Relaxation Stages in Photo-excited Graphene
Ultrafast electron-phonon relaxation dynamics in graphene hides many distinct phenomena, such as hot phonon generation, dynamical Kohn anomalies, and phonon decoupling, yet still remains largely unexplored. Here, we unravel intricate mechanisms governing the vibrational relaxation and phonon dressing in graphene at a highly non-equilibrium state by means of first-principles techniques. We calculate dynamical phonon spectral functions and momentum-resolved linewidths for various stages of electron relaxation and find photo-induced phonon hardening, overall increase of relaxation rate and nonadiabaticity as well as phonon gain. Namely, the initial stage of photo-excitation is found to be governed by strong phonon anomalies of finite-momentum optical modes along with incoherent phonon production. Population inversion state, on the other hand, allows production of coherent and strongly-coupled phonon modes. Our research provides vital insights into the electron-phonon coupling phenomena in graphene, and serves as a foundation for exploring non-equilibrium phonon dressing in materials where ordered states and phase transitions can be induced by photo-excitation.
Nina Girotto, Dino Novko
2023-09-16T18:55:29
http://arxiv.org/abs/2309.09076v1
# Dynamical Phonons Following Electron Relaxation Stages in Photo-excited Graphene ###### Abstract Ultrafast electron-phonon relaxation dynamics in graphene hides many distinct phenomena, such as hot phonon generation, dynamical Kohn anomalies, and phonon decoupling, yet still remains largely unexplored. Here, we unravel intricate mechanisms governing the vibrational relaxation and phonon dressing in graphene at a highly non-equilibrium state by means of first-principles techniques. We calculate dynamical phonon spectral functions and momentum-resolved linewidths for various stages of electron relaxation and find photo-induced phonon hardening, overall increase of relaxation rate and nonadiabaticity as well as phonon gain. Namely, the initial stage of photo-excitation is found to be governed by strong phonon anomalies of finite-momentum optical modes along with incoherent phonon production. Population inversion state, on the other hand, allows production of coherent and strongly-coupled phonon modes. Our research provides vital insights into the electron-phonon coupling phenomena in graphene, and serves as a foundation for exploring non-equilibrium phonon dressing in materials where ordered states and phase transitions can be induced by photo-excitation. phonon dynamics, electron-phonon coupling, graphene, density functional theory ## 1 Introduction Phonon dynamics, electron-phonon coupling, graphene, density functional theory Through the ionic motion manipulation, photoexcitation as in pump-probe setup is a powerful tool which paves the way for a highly effective designing and customizing the desired functionalities of materials [1, 2]. Namely, it can induce novel phases [3], sometimes unreachable in equilibrium, opening, for example, the possibility of light-induced superconductivity [4, 5], charge-density-wave order [6, 7], ferroelectricity [8], and disorder-assisted structural transition [9]. Very often, these states of matter are characterized with a strongly-coupled phonon mode and considerable electron-phonon coupling (EPC), which are believed to be additionally renormalized in a photo-excited non-equilibrium regime [10, 11]. For instance, photo-induced softening of the relevant phonon mode is quite common in ultrafast dynamics [12, 13], nonetheless, photo-excitation can in some cases lead to phonon hardening and consequently stabilize the structural phase [14, 15, 16, 17, 18]. Graphene exhibits extraordinary mechanical, transport and optoelectronic properties [19], and is therefore an ideal platform to investigate the fundamentals of ultrafast electron-lattice dynamics. Experimental techniques such as time- and angle-resolved photoemission spectroscopy (tr-ARPES) [20, 21, 22, 23, 24], two-photon photoemission [25, 26, 27], and transient optical spectroscopy [28, 29, 30] have reveled various aspects of carrier thermalization in graphene, such as rapid electron-electron and electron-phonon recombination of highly non-equilibrium distribution towards population inversion [21], scattering of electrons with strongly-coupled optical phonons [22, 31], cooling of hot carriers via acoustic phonons [22], as well as electron band renormalization [30]. On the other hand, non-equilibrium phonon dynamics in graphene, phonon relaxation pathways and the corresponding photo-induced phonon dress ing are less investigated. 
Raman and coherent phonon spectroscopy have demonstrated considerable phonon hardening of the \(E_{2g}\) optical phonon mode [14, 15, 32], which was attributed to the reduction of the nonadiabatic electron-phonon interaction in non-equilibrium [14]. Recent attosecond core-level spectroscopy also uncovered ultrafast phonon stiffening of both zone-center \(E_{2g}\) and zone-edge \(A^{\prime}_{1}\) phonon Kohn anomalies [33]. The theoretical studies of the aforesaid ultrafast phenomena are mostly based on the two- and multi-temperature models [34, 22, 35], as well as on the time-dependent Boltzmann equations [29, 36], and were proven to be valuable in comprehending energy transfer between electrons and strongly-coupled hot phonons. However, these methods do not account for transient phonon renormalization and as such are not suitable for exploring all aspects of EPC and phonon dynamics far from equilibrium, such as structural transitions and soft phonon physics. The phonon renormalization in graphene was recently inspected by means of real-time time-dependent density functional theory in combination with molecular dynamics and real-space lattice distortions, which allows for time-resolved self-consistent renormalization of phonons and EPC strengths [11]. However, since it relies on real-space distortions within the supercell approach it is able to track the phonon dynamics of only few selected modes. In addition, the study delivered somehow conflicting results, i.e., instead of phonon hardening and electron-phonon decoupling as observed in the experiments [14, 15, 32, 33], the phonon softening and enhanced EPC strength were reported [11]. This calls for further theoretical insights on non-equilibrium phonon dynamics in graphene. Here, we overcome these difficulties and investigate the effects of the photo-excited population on phonon dynamics and EPC in graphene by means of constrained density functional perturbation theory (cDFPT) [37, 38]. Important advantage of this approach is that it provides a full momentum-dependent picture of phonon renormalization due to constrained photo-excited electron distribution [39, 40, 41, 42, 43, 37]. In addition, we combine cDFPT with nonadiabatic phonon self-energy calculations in order to provide information on phonon relaxation rates (linewidths) and nonadiabatic frequency modifications, which are absent in the standard adiabatic cDFPT studies. We discuss phonon renormalization for the usual stages of carrier relaxation in graphene, namely, for strong non-equilibrium, population inversion, and hot (Fermi-Dirac) carrier distribution. We observe remarkable modifications of the well-known Kohn anomalies at the \(\Gamma\) and K points of the Brillouin zone, as well as appearance of the additional phonon anomalies away from the Brillouin zone center induced by non-equilibrium population, renormalizing the equilibrium dispersion for up to 6 meV. Light-induced increase of the overall phonon linewidth and nonadiabatic effects are observed along with a striking phonon gain, where the latter becomes coherent once graphene reaches the state of photo-inversion. From the fermiology analysis we show that the EPC coupling matrix elements are slightly reduced in non-equilibrium state, while the observed features mostly stem from the modified scattering phase space under transient conditions. 
With this work, we expand our microscopic understanding of phonon relaxation dynamics of graphene in far non-equilibrium, which along with the well-explored electron thermalization paths constitutes the full dynamical picture of electron-phonon scatterings in graphene. Photo-excitation implies promoting a certain carrier density form the valence to the conduction band separated by the energy of laser pulse \(\hbar\Omega\), which typically amounts to 1.5 eV. In this work, we track phonon properties for various electron distributions inside the Dirac cone following directly the pulse application. First, an equilibrium distribution [Fig. 1 (a)] is disrupted with a short (fs) pulse causing an occurrence of empty states below the Dirac point and filled states above it [Fig. 1 (b)]. Photo-generated electrons and holes establish separate distributions and begin a rapid process of thermalization and cooling through carrier-carrier and carrier-phonon scatterings. The energy transfer to the strongly-coupled optical phonons produces hot phonons, reported to exist on a femtosecond time-scale [28, 33]. Then, the photo-inverted state is established [Fig. 1 (c)] through the competition between phonon-induced intraband scattering and Auger recombination [44]. The formation of the population inversion has already been thoroughly explored with tr-ARPES [21, 22, 45], which reveals its relaxation time of \(\sim\)100 fs. After its decay electrons follow a Fermi-Dirac distribution at elevated temperatures [Fig. 1 (d)]. The whole process of electron thermalization conceptually follows the one described for graphite in Ref. [20]. Subsequent hot-carrier cooling is governed by phonon-assisted scatterings on the time scale of \(1-10\,\mathrm{ps}\). We implemented the nonequlibrium distributions in the calculation of electronic-structure properties and then calculated the renormalization of phonons with the PHonon[46, 47] and EPW [48, 49] codes in the adiabatic and nonadiabatic approximations (see Supporting Information (SI) for more details). The resulting adiabatic phonon dispersions are shown in Fig. 2. We compare the phonon dispersion for the case of photo-excited electron population as in Fig. 1(b) and photo-inverted distribution Fig. 1(c) with the equilibrium adiabatic case. The largest effects of the nonequilibrium electron distribution happen for the strongly-coupled optical modes \(E_{2g}\) and \(A^{\prime}_{1}\) around \(\Gamma\) and K points, respectively. We observe phonon hardening for both phonons, and it ranges from \(\simeq 1\,\mathrm{meV}\) for the \(E_{2g}\) mode and \(4-6\,\mathrm{meV}\) for the \(A^{\prime}_{1}\) mode. Our results are in a good agreement with Refs. [14, 15, 32] and especially with Ref. [33] where the attosecond core-level spectroscopy demonstrated that the Raman-inactive \(A^{\prime}_{1}\) mode is the dominating channel for dissipation of electronic coherence due to stronger coupling to electrons [50]. Note also that in our calculations the acoustic sum rule is not fully fulfilled because we inhibit the long-wavelength dipole-allowed transitions [51, 52] by filling the states above the Fermi energy. Figure 2: Adiabatic DFPT phonon dispersion in the presence of a photo-excited (red line) and photo-inverted (yellow line) electron distribution in comparison with an equilibrium result (grey dashed line). 
Two insets are a zoom in region around high-symmetry points (\(\Gamma\) and K), showing the hardening of strongly-coupled optical modes (\(E_{2g}\) and \(A^{\prime}_{1}\)) along with a schematics of the corresponding atomic motions. Figure 1: Schematic representation of different stages of electron relaxation in photo-excited graphene. (a) Dirac cone with an equilibrium Fermi-Dirac electron distribution. (b) At \(t_{0}\), laser pulse excites electrons form the [\(\varepsilon_{1}\), \(\varepsilon_{2}\)] energy interval vertically upwards in the conduction band, where they fill the states in the [\(\varepsilon_{3}\), \(\varepsilon_{4}\)] range. (c) Immediately after the pulse, electrons scatter with other electrons and strongly-coupled phonons (\(E_{2g}\simeq 200\,\mathrm{meV}\) and \(A^{\prime}_{1}\simeq 160\,\mathrm{meV}\)) until the population inversion is created. (d) When the time scale of electron thermalization time (\(\tau_{\mathrm{th}}\)) is reached, electrons follow a hot Fermi-Dirac distribution, which in our calculations, amounts to 2200K. In Fig. 3 we present the results of the non-equilibrium EPC calculations as obtained with cDFPT. They contain the phonon spectral function \(B_{\nu}^{c}(\mathbf{q},\omega)\), which incorporates nonadiabatic renormalization effects and transverse and longitudinal optical phonon linewidth along the \(\Gamma-\mathrm{K}\) path. With the term "adiabatic" we refer to a calculation where the phonon energy \(\omega\) is omitted in the phonon self-energy calculations, and by "nonadiabatic" or "dynamic" where it is included [53, 54] (see also SI). The first row represents four spectral functions corresponding to the four distinct electron distributions from Fig. 1, together with the equilibrium adiabatic result for comparison. Equilibrium graphene linewidth contributions for the \(E_{2g}\) and \(A_{1}^{\prime}\) modes come from the vertical interband electron scattering inside the Dirac cone (\(q\simeq\Gamma\)) or between the two neighboring ones (\(q\simeq\mathrm{K}\)). The features around \(\Gamma\) and \(\mathrm{K}\) points are symmetrical, but differ due to the disparate EPC strengths. The photo-excited electron distribution opens up new scattering possibilities, which are schematically shown with arrows in Fig. 3(f). Besides the significant modifications of the well-known nonadiabatic Kohn anomalies [55, 56] at the \(\Gamma\) and \(\mathrm{K}\) points coming from the non-equilibrium population, the spectral function shows additional new anomalies further away from the \(\Gamma\) and \(\mathrm{K}\) points. These photo-induced dynamical phonon anomalies come from the electron transitions away from the Dirac point, at the photo-doped regions. Compared to the equilibrium adiabatic dispersions, a \(4\,\mathrm{meV}\) renormalization is visible directly in the \(\Gamma\) point for the \(E_{2g}\) mode, and away from it, the highest optical branch is renormalized by \(5\,\mathrm{meV}\). For the \(A_{1}^{\prime}\) mode, we observe a \(5\,\mathrm{meV}\) hardening and a \(6\,\mathrm{meV}\) modification at the intersection of the two optical branches. We note that these sharp transient frequency modifications are quite large and are comparable to the nonadiabatic frequency shifts of the \(E_{2g}\) mode in the highly-doped graphene [56]. 
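For readers unfamiliar with the adiabatic/nonadiabatic distinction used here, the discussion can be summarized by the standard form of the phonon self-energy due to electron-phonon coupling; the precise conventions and the constrained occupations \(f^{c}\) are defined in the SI, and the expression below is only the generic textbook form given as a guide:

\[\pi_{\nu}(\mathbf{q},\omega)=\frac{2}{N_{\mathbf{k}}}\sum_{\mathbf{k},mn}\left|g_{\nu}^{mn}(\mathbf{k},\mathbf{q})\right|^{2}\,\frac{f^{c}(\varepsilon_{m\mathbf{k}+\mathbf{q}})-f^{c}(\varepsilon_{n\mathbf{k}})}{\varepsilon_{m\mathbf{k}+\mathbf{q}}-\varepsilon_{n\mathbf{k}}-\hbar\omega-i\eta}.\]

The adiabatic results correspond to setting \(\omega=0\), the linewidth is proportional to \(-\mathrm{Im}\,\pi_{\nu}\), and with a non-equilibrium (inverted) \(f^{c}\) the occupation difference in the numerator can change sign, which is the origin of the negative-linewidth (phonon gain) regions discussed in the following.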
In the case of population inversion, the non-equilibrium Figure 3: Dynamical phonon spectral functions at different stages of electron relaxation in comparison with the adiabatic equilibrium DFPT result (grey dashed line), i.e., for: (a) Equilibrium regime, (b) far non-equilibrium following the laser excitation, (c) population inversion, (d) hot equilibrium distribution. Note the strong renormalizations occurring near the two strongly-coupled optical modes (\(E_{2g}\) and \(A_{1}^{\prime}\)). Also, the negative linewidth contribution to spectral function is shown in teal. The histograms in the insets of (b)-(d) show the nonadiabatic corrections to the \(E_{2g}\) and \(A_{1}^{\prime}\) modes (\(\Delta^{\mathrm{NA}}=\omega^{\mathrm{NA}}-\omega^{\mathrm{A}}\)). (e) Phonon linewidth on \(\Gamma\) to \(\mathrm{K}\) path of LO/TO optical modes, due to EPC. Color-coding is the same as in Fig. 2. (f-g) Intra- and inter-valley (i.e., \(\mathrm{K}\rightarrow\mathrm{K}\) and \(\mathrm{K}\rightarrow\mathrm{K}^{\prime}\)) electron transitions which contribute to the phonon self-energy. The color-coded arrows reveal positive (brown) and negative (teal) contributions to the linewidth. (h) Same as in (e) but for the photo-inverted (yellow) and hot electron distributions (dark red). electron distribution is condensed in the vicinity of the Dirac point, bringing the non-equilibrium spectral features closer to the \(\Gamma\) and K points. We again observe phonon renormalization for the \(E_{2g}\) and \(A^{\prime}_{1}\) modes of about 2 meV for both. Interestingly, for the strong non-equilibrium and population inversion we observe additional phonon hardening (softening) for \(E_{2g}\) (\(A^{\prime}_{1}\)) when the nonadiabatic effects are taken into account [insets of Figs. 3(b) and 3(c)]. In fact, significant increase of the nonadiabatic correction is obtained for the strong non-equilibrium, while it is reduced for the population inversion and almost diminished for the hot equilibrium case. Note that this is contrary to the conclusions drawn in Ref. [14], where the decrease of the nonadiabaticity is suggested. The corresponding contributions to the linewidth [Figs. 3(e) and 3(h)] show that values at the \(\Gamma\) and K points are unaltered, while slightly away from these points it is significantly enhanced compared to its value at equilibrium. For highly non-equilibrium case, additional notable phonon broadening arise displaced from the high-symmetry points, at momenta where the new dynamical anomalies appear. As stated, these linewidth features stem from the electron transitions between the photo-excited block of filled states above the Dirac point, and empty block below it. A crucial thing to notice is that in the photo-excited state, electrons can scatter from the filled states at higher energies to the low-energy empty states [see Fig. 3 (f-g), downwards pointing teal arrows], causing a negative linewidth contribution. This phonon gain, happens in the immediate vicinity of dynamical anomaly. For the population inversion, the phonon-gain contributions are located directly at the \(\Gamma\) and K high-symmetry points. In graphene, acoustic phonon generation was experimentally achieved [57] and theoretically explained [58]. A conceptually similar phenomenon has recently been widely explored in graphene, namely the photo-induced plasmon amplification with a high potential for the development of novel optoelectronic devices [59, 60, 61, 62, 63, 64, 65, 66, 67, 68]. 
Our observation of phonon gain is in agreement with the observation of the negative plasmon linewidth and negative conductivity in the photo-inverted state, and it simply means that hot phonons are emitted in the non-equilibrium regime. In particular, the results show that far non-equilibrium state supports the generation of incoherent hot phonons with momenta slightly away from the high symmetry points (i.e., phonon displacement pattern is shifted in phase between neighbouring unit cells), while the population inversion supports coherent phonon generation of hot \(E_{2g}\) and \(A^{\prime}_{1}\) (i.e., phonon displacement pattern is repeated between neighbouring unit cells). This could explain, on the one hand, the reduction of the phonon dephasing rate of \(E_{2g}\) mode as reported in Ref. [14], and, on the other hand, the non-displacive mechanism for the generation of both \(E_{2g}\) and \(A^{\prime}_{1}\) hot coherent phonons as obtained in attosecond core-level spectroscopy [33]. As for the hot electron distribution, we calculated the number of carriers located above the Dirac point in the state of photo-inversion and found the temperature for which a Fermi-Dirac distribution produces the same number of carriers in the conduction band. We show the spectral function for the hot equilibrium electron distribution at \(T=2200\) K. The \(E_{2g}\) and \(A^{\prime}_{1}\) modes are slightly hardened, and here one can clearly see also the edge of the electron-hole pair excitation continuum. The linewidth resembles the one obtained for the photo-inverted population, only without the negative contributions. Further analysis includes changing the excited carrier density, which experimentally corresponds to changing the laser fluence (Fig. 4). Effective carrier density is denoted in Fig. 4(a) above the Dirac cones, and is calculated as the summed density of photo-excited electrons and holes. As expected, we observe larger phonon stiffening in the DFPT calculation for a larger carrier density. Here we show only the adiabatic dispersions to focus solely on the increased phonon stiffening in the \(\Gamma\) and K points. The \(E_{2g}\) phonon hardening increases by 2 meV as the photo-excited density increases by \(6\times 10^{12}\) cm\({}^{-1}\). Further, we observe larger phonon linewidths deriving from the modified phase space with increasing carrier density. Directly in the \(\Gamma\) point the linewidth remains at its equilibrium value, while slight differences in the position of the peaks away from the high-symmetry points are visible. Furthermore, since in experiments graphene is frequently placed on a substrate, we also provide results for doped graphene, specifically for Fermi levels of \(E_{F}=200\) and 400 meV. We compare the photo-doped spectral function with the adiabatic DFPT equilibrium calculation for the corresponding Fermi energy (dashed black lines). Again, we notice larger phonon hardening around the K point (5 meV for both dopings) then around the \(\Gamma\) point (4 meV for \(E_{F}\) = 200 meV and 1 meV for \(E_{F}\) = 400 meV). We observe dynamical anomalies at the same positions as they occur in the pristine photo-doped case. We notice how with increased doping, the dynamic phonon dispersion softens and the effects of photo-induced phonon hardening are less pronounced. Largest softening is observed around the dynamical anomalies. In general, for doped graphene, the intrinsic linewidths of the two highest optical modes are larger then for pristine graphene. 
When doped graphene is photo-excited, linewidth behaves in the same fashion as in the pristine photo-doped case, with the strong dynamical anomalies occurring in the vicinity of the high-symmetry points. Finally, we present the analysis of the adiabatic phonon self-energy as calculated in cDFPT \(\pi_{\nu}^{c}(\mathbf{q})\)[69, 70] (see also SI). Averaging out the electronic degrees of freedom deriving from the EPC matrix elements, leads to \(|g_{\nu}^{nm,c}(\mathbf{k},\mathbf{q})|\)\(\rightarrow|g_{\nu}^{c}(\mathbf{q})|\) and the self-energy expression can be written as \(\pi_{\nu}^{c}(\mathbf{q})=|g_{\nu}^{c}(\mathbf{q})|^{2}\chi_{0}^{c}(\mathbf{q})\), where \(\chi_{0}^{c}(\mathbf{q})\) denotes the bare susceptibility function. In this way, we can separate the non-equilibrium effects that come from the modifications in screened EPC matrix elements \(|g_{\nu}^{c}(\mathbf{q})|\) and the photo-induced changes in the available phase space via \(\chi_{0}^{c}(\mathbf{k})\). In Fig. 5 we show the analysis for the conduction band, but the results for the valence band are the same due to the electron-hole symmetry. In the first two columns we show the relative difference between the EPC matrix elements in the photo-excited and photo-inverted graphene with respect to the pristine equilibrium case for the \(E_{2g}\) and \(A_{1}^{\prime}\) modes \(\Delta^{rel}|g_{\nu}(\mathbf{k})|^{2}\)= \((|g_{\nu}^{c}(\mathbf{k})|^{2}\)\(-|g_{\nu}^{\text{eq}}(\mathbf{k})|^{2})/|g_{\nu}^{\text{eq}}(\mathbf{k})|^{2}\). In the first column, the relative differences are calculated after the electron-phonon matrix elements were summed throughout the whole Brillouin zone for a chosen \(\mathbf{q}\) point on the \(\Gamma\) - K path and for each one of the two highest optical modes. The largest value of the relative change is only \(\pm 10\%\) and it appears for those wavevectors \(\mathbf{q}\) for which the electronic transitions are forbidden by the specific electronic structure of graphene and for which the Figure 4: (a) Non-equilibrium phonon renormalization for different densities of excited electrons. In the first row, we show the corresponding adiabatic DFPT dispersions in the vicinity of \(\Gamma\) and K points and observe phonon-hardening increase with photo-excited electron density. The second row contains the corresponding EPC induced linewidths. (b) Two cases of electron-doped photo-excited graphene (i.e., for \(E_{F}\) = 0.2 eV and \(E_{F}\) = 0.4 eV). In the first row, we show the corresponding dynamic spectral functions. The second row shows the linewidths for the two doping regimes. We observe the same features as in Fig. 3(e) with significant increase in the linewidth close to the \(\Gamma\) point. It derives from the larger electron density at the Fermi level. Again, note the negative linewidth occurrence and dynamical anomalies. EPC strength is weak at equilibrium (i.e., away from the \(\Gamma\) and K points). More dramatic changes occur if \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) is resolved in the k\({}_{x}\)-k\({}_{y}\) plane [column (b) in Fig. 5]. Here we explicitly show the result for \({\bf q}=\Gamma\) for which we found the largest values of \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) (see SI for results in additional q points). For both the photo-excited and photo-inverted electron distribution case, our calculations show that \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) reaches values of \(\pm\) 40% in certain regions of k\({}_{x}\)-k\({}_{y}\) space for the E\({}_{2g}\) mode. 
We observe a symmetrical pattern of positive and negative contributions to \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\). Due to phase space restrictions, only a small region around the K points is picked out when doing a self-energy calculation, or multiplying \(|g_{\nu}^{c}({\bf q})|^{2}\) and \(\chi_{0}^{c}({\bf q})\). In other words, as shown in columns (c) and (d) of Fig. 5, \(\chi_{0}^{c}({\bf k})\) is finite around the Dirac points also in a symmetric pattern. In this way, when calculating the phonon self-energy and summing over the whole Brillouin zone, the net effect of the large relative changes \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) cancels out as equal amounts of positive and negative values are picked out by the \(\chi_{0}^{c}({\bf k})\) factor (see Sec. S2 in SI for more detailed discussion). Therefore, the effects of photo-induced changes of the electron-phonon matrix elements are small in the end, and it turns out that the phonon hardening and decrease of the dephasing rate observed in Ref. [14], along with other non-equilibrium phenomena presented here, come dominantly from the photo-induced changes in the carrier scattering phase space \(\chi_{0}^{c}({\bf k})\). Therefore, optically-induced phonon blueshifts are not necessarily a sign of the suppressed coupling and it could come from the pure electronic origin, as was shown for instance in the case of photo-excited TiSe\({}_{2}\)[18]. This resolves the debate on whether the EPC strength is suppressed [14] or enhanced [11] in the non-equilibrium graphene. We elaborate this claim by thoroughly inspecting the color-coded contributions to the bare electron susceptibility \(\chi_{0}^{c}({\bf k})\). The features for \({\bf q}\) = K and \({\bf q}\) = \(\Gamma\) are in essence the same, so the discussion is applicable to both. In the equilibrium case where the Fermi surface is almost a point, the only contribution comes from the Dirac point. Since the electrons can only scatter from the filled states below the Fermi energy to the empty states above it, the final results for \(\chi^{0}({\bf k})\) is negative. Focusing now on the photo-excited case (Fig. 5, first row), we first notice the mentioned equilibrium contribution (red dots) positioned directly in the Dirac points. Photo-excited electrons fill the states visible here as an additional triangle around each Dirac point. Each triangle consists of positive and neg Figure 5: The analysis of static phonon self-energies, revealing the electronic processes behind phonon anomalies in the case of photo-excited (first row) and photo-inverted (second-row) electron distribution. (a) Relative change in the \({\bf k}\)-summed EPC matrix elements \(\Delta^{rel}|g_{\nu}({\bf k})|^{2}\) for the two highest optical modes and along the \(\Gamma-{\rm K}\) q-path. (b) \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) resolved in \({\bf k}\) space. (c),(d) \({\bf k}\)-resolved static susceptibility \(\chi_{0}^{c}({\bf k})\) for the \(E_{2g}\) and \(A_{1}^{\prime}\) modes, respectively. The positive (negative) contributions to both \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) and \(\chi_{0}^{c}({\bf k})\) are shown with red (blue). ative susceptibility contributions. Electrons from the positive \(\chi_{0}^{c}(\mathbf{k})\) portion are responsible for the negative linewidth contribution, as they contribute by scattering from the higher energy filled states to the lower energy empty ones. 
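The sign argument just made can be stated compactly: in the factorized self-energy above, the \(\mathbf{k}\)-resolved static susceptibility has the generic form (conventions as in the SI; shown here only as a guide)

\[\chi_{0}^{c}(\mathbf{k})=\sum_{mn}\frac{f^{c}(\varepsilon_{m\mathbf{k}+\mathbf{q}})-f^{c}(\varepsilon_{n\mathbf{k}})}{\varepsilon_{m\mathbf{k}+\mathbf{q}}-\varepsilon_{n\mathbf{k}}},\qquad\chi_{0}^{c}(\mathbf{q})=\sum_{\mathbf{k}}\chi_{0}^{c}(\mathbf{k}),\]

so a region of \(\mathbf{k}\) contributes positively whenever a higher-energy state is more occupied than a lower-energy one, which is exactly the situation created by photo-excitation.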
Due to finite temperature, this is in principle also possible within an equilibrium distribution, but those contributions are then suppressed by the much larger negative \(\chi_{0}(\mathbf{k})\) contribution. The electrons from the negative \(\chi_{0}^{c}(\mathbf{k})\) section make standard transitions to higher energy empty states. In the EPC calculations, we used a dynamical susceptibility term [see Eq. (S1) in SI], which, together with varying the wavevector \(\mathbf{q}\), leads to the competition between these two contributions and, hence, to the net negative linewidth regions visible in Fig. 3. The obvious difference between susceptibility contribution shape in the \(\Gamma\) and K points, is the result of trigonal warping [71], which is reversed in the neighboring K points. Setting \(\mathbf{q}=\mathrm{K}\) leads to the superposition of two relatively rotated triangles, making the \(A_{1}^{\prime}\) susceptibility look circular. \(\chi_{0}^{c}(\mathbf{k})\) for the photo-inverted distribution, consists of two circular contributions, as the filled/empty states are in the energy range of a linear electron distribution. Color-coding again suggests that the electrons closer to the Dirac point rather scatter to the empty states below the Fermi level, while the higher energy electrons do the opposite and contribute positively to the phonon linewidth. In this case, varying the wavevector \(\mathbf{q}\) in the dynamical susceptibility calculation reduces the phase space for the electrons in the negative section, as a number of vertical transitions is restricted. It confines the phonon gain to be placed directly in the high-symmetry points. Note in the end that the present analysis on phonon dynamics based in cDFPT is not only restricted to the non-equilibrium induced by laser pulses, but it could be utilized to study phonon renormalization in any out-of-equilibrium conditions. For instance, impact of the static electric fields and the corresponding non-equilibrium electrons on phonons in current-carrying materials is still not well understood despite its importance for comprehending various transport properties [72, 73, 74]. Understanding the mechanisms behind the out-of-equilibrium EPC provides valuable insights into the fundamental physics of graphene and helps unravel the complex interplay between charge carriers and lattice vibrations. We investigated the coupling of the high-energy optical phonon modes with the photo-excited electron distribution by means of cDFPT. We observed hardening of the well-known Kohn anomalies at the center and edge of the Brillouin zone. The latter comes from the modified phase space for electron transitions which leads to different screening effects, while the effective EPC coupling strenghts are only slightly changed. We obtained complex nonadiabatic EPC features that emerge in both the dispersion and linewidth, and mostly originate from the new scattering channels opened in non-equilibrium. For instance, sharp dynamical phonon anomalies away from the high-symmetry points and the overall increase of the phonon scattering rate have been observed. Also, we showed incoherent phonon gain at finite wavevectors irrespective of the doping level or the concentration of the photo-excited carriers, while coherent phonon generation is expected in the state of population inversion and is within the scope of typical experiments. 
We believe our work offers crucial information on the nature of EPC in graphene, justifies the known non-equilibrium features and sheds new light on the understanding of the underlying ultrafast vibrational relaxation mechanisms. **Acknowledgement** Useful discussions with Jan Berges and Samuel Ponce are gratefully acknowledged. We acknowledge financial support from the Croatian Science Foundation (Grant no. UIP-2019-04-6869) and from the European Regional Development Fund for the "Center of Excellence for Advanced Materials and Sensing Devices" (Grant No. KK.01.1.1.01.0001). ## Supporting Information Available More information on computational details and detailed analysis of the modifications in the electron-phonon matrix elements and scattering phase space due to non-equilibrium distribution
The ultrafast electron-phonon relaxation dynamics of graphene hosts many distinct phenomena, such as hot-phonon generation, dynamical Kohn anomalies, and phonon decoupling, that remain largely unexplored. Here, we use first-principles techniques to unravel the intricate mechanisms of vibrational relaxation and phonon dressing in graphene under highly non-equilibrium conditions. We calculate dynamical phonon spectral functions and momentum-resolved linewidths for various stages of electron relaxation and find photo-induced phonon hardening, an overall increase of the relaxation rate and nonadiabaticity, as well as phonon gain. Namely, the initial stage of photo-excitation is found to be governed by strong phonon anomalies of finite-momentum optical modes along with incoherent phonon production. The population-inversion state, on the other hand, allows the production of coherent and strongly-coupled phonon modes. Our research provides vital insights into the electron-phonon coupling phenomena in graphene, and serves as
2309.06345
Coexistence of localized and extended states in the Anderson model with long-range hopping
We study states arising from fluctuations in the disorder potential in systems with long-range hopping. Here, contrary to systems with short-range hopping, the optimal fluctuations of disorder responsible for the formation of the states in the gap, are not rendered shallow and long-range when $E$ approaches the band edge ($E\to 0$). Instead, they remain deep and short-range. The corresponding electronic wave functions also remain short-range-localized for all $E<0$. This behavior has striking implications for the structure of the wave functions slightly above $E=0$. By a study of finite systems, we demonstrate that the wave functions $\Psi_E$ transform from a localized to a quasi-localized type upon crossing the $E=0$ level, forming resonances embedded in the $E>0$ continuum. The quasi-localized $\Psi_{E>0}$ consists of a short-range core that is essentially the same as $\Psi_{E=0}$ and a delocalized tail extending to the boundaries of the system. The amplitude of the tail is small, but it decreases with $r$ slowly. Its contribution to the norm of the wave function dominates for sufficiently large system sizes, $L\gg L_c(E)$; such states behave as delocalized ones. In contrast, in small systems, $L\ll L_c(E)$, quasi-localized states are overwhelmingly dominated by the localized cores and are effectively localized.
V. Temkin, A. S. Ioselevich
2023-09-12T16:06:00
http://arxiv.org/abs/2309.06345v3
# Coexistence of localized and extended states in the Anderson model with long-range hopping ###### Abstract We study states arising from fluctuations in the disorder potential in systems with long-range hopping. Here, contrary to systems with short-range hopping, the optimal fluctuations of disorder responsible for the formation of the states in the gap, are not rendered shallow and long-range when \(E\) approaches the band edge (\(E\to 0\)). Instead, they remain deep and short-range. The corresponding electronic wave functions also remain short-range-localized for all \(E<0\). This behavior has striking implications for the structure of the wave functions slightly above \(E=0\). By a study of finite systems, we demonstrate that the wave functions \(\Psi_{E}\) transform from a localized to a quasi-localized type upon crossing the \(E=0\) level, forming resonances embedded in the \(E>0\) continuum. The quasi-localized \(\Psi_{E>0}\) consists of a short-range core that is essentially the same as \(\Psi_{E=0}\) and a delocalized tail extending to the boundaries of the system. The amplitude of the tail is small, but it decreases with \(r\) slowly. Its contribution to the norm of the wave function dominates for sufficiently large system sizes, \(L\gg L_{c}(E)\); such states behave as delocalized ones. In contrast, in small systems, \(L\ll L_{c}(E)\), quasi-localized states have localized cores and are effectively localized. ## I Introduction The theoretical and numerical study of eigenfunctions for the quantum-mechanical problem with deterministic power-law hopping \[\hat{H}_{\rm hop}=\sum_{{\bf 3}\!{\bf j}^{\prime}}\varepsilon_{{\bf j}-{\bf j }^{\prime}}a^{\dagger}_{{\bf j}}a_{{\bf j}^{\prime}},\quad\varepsilon_{{\bf r }}\propto r^{-\beta}, \tag{1}\] and local disorder \[\hat{H}_{\rm dis}=\sum_{{\bf j}}V_{{\bf j}}a^{\dagger}_{{\bf j}}a_{{\bf j}}, \tag{2}\] which is a modification of the Anderson impurity model [1], first started more than 30 years ago [2] (or see [3; 4] which are closely related) and has attracted significant interest in the community [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Today, the demand for proper theoretical analysis is great because of the growing number of experimentally accessible physical systems that are described by the same mathematical framework. For example, it can be used to describe quantum superconductor-metal transition in 2D disordered metals [22] or the behavior of arrays of trapped ions [23; 24], which is of great interest in quantum computing (for more examples, see [19]). In this study, we consider the case when the value of the exponent \(\beta\) in (1) lies in an interval \(D<\beta<3D/2\), where \(D\) is the dimension of the considered lattice (we provide results for any dimension, but our numerical study of the optimal fluctuation, see Section VII, is limited to physical dimensions \(D=1,2,3\) only). In the case that we explore, the effects of typical weak fluctuations of the random potential were studied extensively, and it was shown [18] that a non-Anderson disorder-driven metal-insulator transition takes place. Here, we aim to elaborate on the understanding of the effects of the interplay between typical weak fluctuations of the random potential and rare strong local fluctuations, (the latter are sometimes called "rare-regions"). 
Particularly, we explain numerical results from [25], which seem to indicate the possibility of the coexistence of localized and extended states near one of the edges of the band in the considered model. We expect the effects of strong local fluctuations to be the main mechanism for the formation of small-sized localized (or rather quasi-localized, see below) states on the background of extended ones. As will become clear later in this paper, no "true" coexistence is present in the investigated case, and Mott's principle [26] is not violated. The localized band-gap states arising due to localized fluctuations in a standard Anderson model with nearest neighbor hopping and gaussian disorder in dimensions \(D\leq 3\) are well known - they form the so-called Lifshitz tail in the density of states \(\nu(E)\) within the energy gap (see [27]). For \(E\) being deep enough in the gap, \(\nu(E)\) is exponentially small \[\nu(E)\propto\exp\{-S_{\rm Lif}(E)/W^{2}\},\quad S_{\rm Lif}(E)\propto|E|^{2-D/2} \tag{3}\] where \(W^{2}=\langle V^{2}\rangle\). Here the energy \(E\) is accounted for with respect to the band edge. The optimal fluctuation of disorder, responsible for formation of the localized state with energy \(E\) has a spatial scale \(a(E)\) and depth \(U(E)\) where \[a(E)\propto|E|^{-1/2},\quad U(E)\sim|E|. \tag{4}\] The optimal fluctuation approach (that is, technically, the steepest descent method for the functional integration over configurations of random potential) is justified if \(S_{\rm Lif}(E)/W^{2}\gg 1\) Note that \(S_{\rm Lif}(E)\to 0\) and \(a(E)\to\infty\) as \(E\to 0\), so that the optimal fluctuation method is not applicable in the close vicinity if the band edge. Generalization of the result (3) to the systems with the general hopping Hamiltonian (1) gives \[S_{\rm Lif}\propto|E|^{2-D/\alpha},\quad\alpha\equiv\beta-D. \tag{5}\] The result (5) is perfectly reasonable for \(2-D/\alpha>0\), and the estimates (4) apply to the optimal fluctuation in this case. The situation is changed cardinally for \(2-D/\alpha<0\): here the fluctuation with size \(a(E)\propto|E|^{-1/2}\) ceases to be an optimal one: the actual "non-Lifshitz" optimal fluctuation at \(2-D/\alpha<0\) has a microscopic spatial scale \(a_{0}\): \[S_{\rm nonLif}(E)\approx S_{\rm nonLif}(0)+A|E|,\quad a(E)\sim a _{0}, \tag{6}\] \[S_{\rm nonLif}(0)\sim\varepsilon_{0}^{2},\quad A\sim 1/\varepsilon_{0} \tag{7}\] where \(\varepsilon_{0}\) is some characteristic energy scale of order of the electronic bandwidth. The linear expansion (6) is valid for \(|E|\ll\varepsilon_{0}\) It is important, that, in contrast with the long-range Lifshitz fluctuations, the short-range non-Lifshitz optimal fluctuations provide a valid description of the corresponding contribution to the density of states even at \(E\to 0\): the corresponding \(S_{\rm nonLif}(E)\) tends to a finite limit as \(E\to 0\). The latter observation was the origin for the idea about the existence of Lifshitz-like states not only for \(E<0\), but also for \(E>0\), at least in a certain range. To reliably address the question of possible existence of localized electronic states on the continuum background of the delocalized band states, one is forced to consider finite systems. As we will see, the structure of both delocalized and quasi-localized states essentially depends on the system size. Namely, we show that upon crossing the band edge \(E=0\) the true localized states that existed for \(E<0\), continually transform into the quasi-localized ones. 
They consist of the localized parts (which are basically are the same as for \(E<0\)) and the delocalized ones with the amplitude that vanish continuously as \(E\) approaches \(0\) from above. The delocalized part is, however, extremely sensitive to the systems size \(L\) and becomes increasingly important with increasing \(L\). As a result, the quasi-localized states behave practically as localized ones for \(L<L_{c}(E)\equiv E^{-\frac{D+1}{\alpha}+2}\) while becoming essentially delocalized for \(L>L_{c}(E)\). ## II The problem statement. We consider a FINITE \(D\)-dimensional hypercubic lattice of \((2L)^{D}\) sites (\(L\gg 1\)) with periodic boundary conditions. The hamiltonian \[\hat{H}=\hat{H}_{\rm hop}+\hat{H}_{\rm dis}, \tag{8}\] where the random potential \(V_{\bf j}\) obeys the gaussian distribution: \[{\cal P}\{V\}=\prod_{\bf j}P(V_{\bf j})\propto e^{-\frac{S\{V\}} {W^{2}}},\quad S\{V\}=\frac{1}{2}\sum_{\bf j}V_{\bf j}^{2},\] \[P(V)=\frac{1}{\sqrt{2\pi}W}\exp\left\{-\frac{V^{2}}{2W^{2}} \right\}. \tag{9}\] In the momentum representation \[\hat{H}=\sum_{n}\varepsilon({\bf k}_{\bf n})a_{\bf n}^{\dagger}a_{\bf n}+\sum _{{\bf nn}^{\prime}}a_{\bf n}^{\dagger}a_{\bf n^{\prime}}V_{{\bf n}^{\prime}-{ \bf n}}, \tag{10}\] where the momenta \({\bf k}_{\bf n}\equiv\pi{\bf n}/L\), and the corresponding normalized eigenfunctions \[\phi_{\bf n}({\bf j})=(2L)^{-D/2}\exp(i\pi({\bf j}\cdot{\bf n})/ L), \tag{11}\] \[{\bf n}\equiv(n_{1},n_{2},\ldots n_{D}),\quad n_{i}=-L,-L+1, \ldots,L, \tag{12}\] \[a_{\bf j}=\sum_{\bf n}a_{\bf n}\phi_{\bf n}({\bf j}),\quad a_{\bf j}^{ \dagger}=\sum_{\bf n}a_{\bf n}^{\dagger}\phi_{\bf n}^{*}({\bf j}), \tag{13}\] The kinetic energy in \(k\)-representation: \[\varepsilon({\bf k})=\sum_{{\bf j}^{\prime}}\varepsilon_{{\bf j }-{\bf j}^{\prime}}\phi_{{\bf n}+{\bf k}}({\bf j})\phi_{\bf n}({\bf j}^{ \prime})=\varepsilon_{0}f({\bf k}), \tag{14}\] \[{\bf k}=\frac{\pi{\bf n}}{L},\quad{\bf k}\equiv(k_{1},k_{2}, \ldots k_{D})\quad-\pi<k_{i}<\pi, \tag{15}\] where all lengths are measured in the units of lattice spacing. The characteristic energy \(\varepsilon_{0}\) by the order of magnitude is an electronic bandwidth, in what follows we will measure all energies in the units of \(\varepsilon_{0}\). The \(2\pi\)-periodic function \(f(k)\) \[f({\bf k})=\left|4\sum_{\mu=1}^{D}\sin^{2}k_{\mu}/2\right|^{ \alpha/2}, \tag{16}\] \[f_{\rm max}=f(\pi,\pi,\ldots,\pi)=(4D)^{\alpha/2},\quad\alpha= \beta-D \tag{17}\] behaves at \(k\ll 1\) as \[f({\bf k})\approx|k|^{\alpha},\quad|k|\equiv\left(\sum_{\mu=1}^{D}k_{\mu}^{2} \right)^{1/2}. \tag{18}\] Thus, all the energies are confined within the interval \(0<\varepsilon_{n}<W_{\rm band}\), where \(W_{\rm band}=\varepsilon_{0}f_{\rm max}\). ## III Low energy properties of an ideal system For small \(E\ll 1\) the spectrum \(\varepsilon({\bf k})\) is isotropic and the corresponding wave-functions can be characterized by the angular momenta. In our problem only the fully symmetric solutions are relevant, because the low-symmetric ones vanish at \(r\to 0\) and hardly feel the strongly localized potential \(V_{\bf j}\). 
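For readers who want to reproduce the finite-size phenomenology discussed in this paper, a minimal sketch of the model of Eqs. (8)-(18) in D = 1 is given below: the long-range hopping Hamiltonian is built in real space from the dispersion \(\varepsilon(k)=\varepsilon_0 f(k)\) through the plane-wave basis (11), Gaussian on-site disorder is added, and the spectrum and the inverse participation ratios of the eigenstates are computed. The values of \(\alpha\), \(W\), and \(L\) are illustrative assumptions.

```python
import numpy as np

def dispersion(k, alpha):
    # 1D version of Eq. (16): f(k) = |4 sin^2(k/2)|^(alpha/2), with eps_0 = 1
    return np.abs(4.0 * np.sin(k / 2.0) ** 2) ** (alpha / 2.0)

def build_hamiltonian(L, alpha, W, rng):
    """Finite periodic chain of N = 2L sites: H = F^dag diag(eps(k)) F + diag(V)."""
    N = 2 * L
    k = np.pi * np.arange(-L, L) / L          # momenta k_n = pi*n/L
    eps = dispersion(k, alpha)
    j = np.arange(N)
    F = np.exp(1j * np.outer(k, j)) / np.sqrt(N)   # plane waves of Eq. (11)
    H_hop = F.conj().T @ np.diag(eps) @ F          # long-range hopping in real space
    V = W * rng.standard_normal(N)                 # Gaussian on-site disorder, Eq. (9)
    return H_hop + np.diag(V)

rng = np.random.default_rng(0)
L, alpha, W = 256, 0.3, 0.1               # alpha < D/2 = 1/2, illustrative values
H = build_hamiltonian(L, alpha, W, rng)
E, psi = np.linalg.eigh(H)
ipr2 = np.sum(np.abs(psi) ** 4, axis=0)   # inverse participation ratio P_2 per eigenstate
print("lowest energies:", E[:5])
print("P_2 of the three lowest states:", ipr2[:3])
```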
The normalized fully-symmetric eigenfunctions are \[\psi_{n}(r)=\sqrt{\frac{2k_{n}^{D-1}}{\sigma_{D}L}}f(k_{n}r),\;\int_{0}^{L} \sigma_{D}r^{D-1}dr|\psi_{n}(r)|^{2}=1,\] \[f(x)=\sqrt{\frac{\pi}{2}}\frac{J_{D/2-1}(x)}{x^{D/2-1}},\quad\sigma_{D}=\frac{ D\pi^{D/2}}{\Gamma(D/2+1)}, \tag{19}\] where \(r\equiv|{\bf r}|\), \(k\equiv|{\bf k}|\), \(k_{n}=\pi n/L\), \(\sigma_{D}\) is the surface area of the \(D\)-dimensional sphere with unit radius. The asymptotics of \(f(x)\) are \[f(x\gg 1)\approx x^{-\frac{D-1}{2}}\cos(x+\varphi_{D}),\quad \varphi_{D}=\frac{\pi}{4}(1-D), \tag{20}\] \[f(0)=\frac{\sqrt{\pi/2}}{2^{D/2-1}\Gamma(D/2)}. \tag{21}\] For general \(D\), the low energy (\(E\ll 1\)) density of states is \[\nu_{0}^{(D)}(E)=\frac{\sigma_{D}}{(2\pi)^{D}}\frac{k^{D-1}dk}{dE} =\frac{\sigma_{D}/D}{(2\pi)^{D}}\frac{d(k^{D})}{dE}=\] \[\frac{\sigma_{D}/D}{(2\pi)^{D}}\frac{d(E^{D/\alpha})}{dE}=\frac{ \sigma_{D}}{(2\pi)^{D}\alpha}\frac{K^{D}}{E}. \tag{22}\] We have introduced characteristic momentum \[K=E^{1/\alpha}, \tag{23}\] which, alongside with the short range scale \(K_{0}\sim\pi\), is an important momentum scale in our problem. Throughout this paper we assume that \[1/L\ll K\ll 1 \tag{24}\] The general level spacing, which takes into account all the states irrespective to their symmetry, is \[\delta_{D}(E)=\left(\nu_{0}^{(D)}(E)L^{D}\right)^{-1}=\frac{(2\pi)^{D}\, \alpha}{\sigma_{D}}\frac{E}{(KL)^{D}} \tag{25}\] Note that the dimensions of the density of states and of the level spacing are \([\nu]=(1/\mbox{volume})\times(1/\mbox{energy})\) and \([\delta(E)]=\mbox{energy}\). In what follows we will also need the density of states and the level spacing with respect only to fully symmetric states. They coincide with \(\delta_{1}(E)\) and \(\nu_{1}(E)\), no matter what the real \(D\) is: \[\nu_{1}(E)\approx\frac{1}{\pi\alpha}\frac{K}{E},\quad\delta_{1}(E)=[L\nu_{1}( E)]^{-1}=\pi\alpha\frac{E}{KL}. \tag{26}\] Note that for small \(E\ll 1\) the density of states becomes very small and, therefore, the level spacing becomes relatively large. ## IV The localized states and the optimal fluctuations For small disorder \(W\ll 1\) there is some (exponentially small) number of localized states with \(E<0\), associated with exponentially rare local fluctuations of the random potential. Let us look at the contribution of these localized states to the density of states. Following the standard procedure [27; 28] of finding an optimal fluctuation \(V_{\bf j}\) and the corresponding localized wave-function \(\Psi_{\bf n}\) we should minimize the functional \[\tilde{S}(\{\Psi,V\},\lambda,\eta)=\frac{1}{2}\sum_{\bf j}V_{\bf j}^{2}- \lambda\left\{\sum_{\bf j}\Psi_{\bf j}^{*}\varepsilon_{\bf j-j^{\prime}}\Psi _{\bf j}+\sum_{\bf j}V_{\bf j}|\Psi_{\bf j}|^{2}-E\right\}-\eta\left\{\sum_{ \bf j}|\Psi_{\bf j}|^{2}-1\right\} \tag{27}\] with respect to two functions \(\Psi_{\bf j}\), \(V_{\bf j}\) and two additional parameters \(\lambda\) and \(\eta\). 
Variation of (27) with respect to \(V_{\bf j}\) allows one to express \(V_{\bf j}\) through \(\Psi_{\bf j}\) and \(\lambda\): \[V_{\bf j}=\lambda|\Psi_{\bf j}|^{2} \tag{28}\] and we are left with the functional \[-\frac{1}{\lambda}\tilde{S}(\{\Psi\},\lambda)=\] \[=\sum_{\bf j}\Psi_{\bf j}^{*}\varepsilon_{\bf j-j^{\prime}}\Psi_{ \bf j}+\frac{\lambda}{2}\sum_{\bf j}|\Psi_{\bf j}|^{4}-E\sum_{\bf j}|\Psi_{\bf j }|^{2} \tag{29}\] subject to minimization with respect to \(\Psi_{\bf j}\) with the normalization constraint \[\sum_{\bf j}|\Psi_{\bf j}|^{2}=1 \tag{30}\] Thus, we arrive at the nonlinear Schrodinger equation \[\sum_{\bf j^{\prime}}\varepsilon_{\bf j-j^{\prime}}\Psi_{\bf j^{\prime}}+\{ \lambda|\Psi_{\bf j}|^{2}-E\}\Psi_{\bf j}=0 \tag{31}\] The function \(\Psi_{\bf j}\) should be localized, i.e., it should vanish for large \({}_{\bf j}\). The implications of this requirement we will discuss in the Section VI. Finally, we have to ensure that the normalization condition (30) is fulfilled. To satisfy this condition we have to choose the only free parameter at our disposal - \(\lambda\). The explicit form of the wave function \(\Psi_{\bf j}^{\rm(opt)}\) and optimal parameter \(\lambda_{\rm opt}\) can only be found by means of numerical solution of the essentially discrete nonlinear Schrodinger equation (31). The final expression for the optimal exponent in (9) reads \[\frac{S_{\rm opt}}{W^{2}}=\frac{\sum_{\bf j}\left(V_{\bf j}^{\rm(opt)}\right)^{2 }}{2W^{2}}=\frac{\lambda_{\rm opt}^{2}}{2W^{2}}\sum_{\bf j}\left|\Psi_{\bf j}^{ \rm(opt)}\right|^{4} \tag{32}\] We are interested in the behavior of \(S_{\rm opt}(E)\) for small energies \(|E|\ll 1\), so that \(S_{\rm opt}(E)\) can be expanded in \(E\) up to linear terms: \[S_{\rm opt}(E)\approx S_{\rm opt}(0)+\lambda_{\rm opt}(0)E. \tag{33}\] Note that \(S_{\rm opt}(0)\) and \(\lambda_{\rm opt}(0)\) are some numerical constants of order unity, depending on \(D\), \(\alpha\) and on the type of lattice. ## V The local character of the optimal fluctuation The equations (29), (31) are perfectly standard - they do not differ from what we have for the conventional Lifshits tails, arising in the case of \(\alpha>D/2\). Then why do we expect an anomalous behavior of the tails in our case \(\alpha<D/2\)? Let us model an optimal fluctuation as a square potential well with depth \(U\) and width \(a\), so that we have to minimize the function of two variables \[S(U,a)\sim U^{2}a^{d} \tag{34}\] To have a level with energy \(E\) this well should obey the following constraints 1. The well should be deeper than \(E\): \((U>|E|)\) 2. The well should be wider than the wave-length: \(a>Q^{-1}=U^{-1/\alpha}\). It seems plausible that the narrowest possible well is a good choice. Then, assuming \(a\sim a_{\rm min}=Q^{-1}\) we have to minimize the function \[S(U)\sim U^{2-D/\alpha} \tag{35}\] If \(\alpha>D/2\) (as it is for conventional Lifshits tails with \(\alpha=2\) and \(D<4\)) then \(S\) decreases with decreasing \(U\), so that the optimal fluctuation corresponds to minimal possible \(U_{\rm min}\sim|E|\) which leads to the standard Lifshits result: \[S_{\rm Lif}^{\rm(opt)}\propto|E|^{2-D/\alpha} \tag{36}\] In our case \(\alpha<D/2\) and \(S\) decreases with decreasing \(U\), so the minimum of \(S\) corresponds to the deepest possible fluctuation. Thus, within the continual approximation, the optimal fluctuation would be infinitely deep and infinitely narrow. 
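A minimal numerical sketch of the optimization defined by Eqs. (28)-(32) in D = 1: for a fixed \(\lambda<0\) the nonlinear problem (31) is solved by simple self-consistent iteration (\(\Psi \to V=\lambda|\Psi|^2 \to\) ground state of \(\hat H_{\rm hop}+V\)), and \(\lambda\) is then tuned by bisection so that the ground-state energy matches a target \(E<0\). The system size, \(\alpha\), target energy, and iteration counts are illustrative assumptions, and convergence of this naive iteration is not guaranteed in general.

```python
import numpy as np

def h_hop(L, alpha):
    N = 2 * L
    k = np.pi * np.arange(-L, L) / L
    eps = np.abs(4.0 * np.sin(k / 2.0) ** 2) ** (alpha / 2.0)
    F = np.exp(1j * np.outer(k, np.arange(N))) / np.sqrt(N)
    return (F.conj().T * eps) @ F            # same as F^dag diag(eps) F

def ground_state(lmbda, H0, n_iter=80):
    """Self-consistent solution of Eq. (31) with V_j = lambda |Psi_j|^2."""
    N = H0.shape[0]
    psi = np.zeros(N)
    psi[N // 2] = 1.0                        # start from a single-site guess
    for _ in range(n_iter):
        V = lmbda * np.abs(psi) ** 2         # Eq. (28)
        E, vecs = np.linalg.eigh(H0 + np.diag(V))
        psi = vecs[:, 0]                     # normalized ground state
    return E[0], psi

def optimal_fluctuation(E_target, L=64, alpha=0.3):
    H0 = h_hop(L, alpha)
    lo, hi = -20.0, -1e-3                    # bracket for lambda < 0 (attractive well)
    for _ in range(30):                      # bisection on the ground-state energy
        mid = 0.5 * (lo + hi)
        E0, psi = ground_state(mid, H0)
        lo, hi = (mid, hi) if E0 < E_target else (lo, mid)
    S_opt = 0.5 * mid ** 2 * np.sum(np.abs(psi) ** 4)   # Eq. (32)
    return mid, S_opt, psi

lmbda, S_opt, psi = optimal_fluctuation(E_target=-0.05)
print(f"lambda_opt ~ {lmbda:.3f}, S_opt ~ {S_opt:.3f}, max |Psi|^2 ~ {np.abs(psi).max()**2:.3f}")
```

Inspecting the resulting \(\Psi\) shows that both the wave function and the potential \(V=\lambda|\Psi|^2\) come out strongly peaked around a single site, consistent with the discussion that follows.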
In reality, however, the fluctuation should contain at least one site, so the minimum is attained at \(a\sim 1\), \(U\sim\max\{|E|,t\}\). As a result, we obtain \[S_{\rm nonLif}^{\rm(opt)}\sim 1 \tag{37}\] ### The Flat Band Approximation (FBA) For very small \(\alpha\ll 1\) the electrons are almost dispersionless in the main part of the Brillouin Zone \[\varepsilon({\bf k})\approx W_{\rm band},\quad E_{\rm loc}^{(0)}\approx W_{\rm band}. \tag{38}\] The dispersion is only present in the domain of exponentially small \(k\sim e^{-1/\alpha}\). In the leading approximation both the optimal potential \[V_{\bf j}^{\rm(opt)}=(-W_{\rm band}+E)\delta_{\bf j,0}, \tag{39}\] and the corresponding wave-function \[\Psi_{\bf j}^{\rm(opt)}=\delta_{\bf j,0} \tag{40}\] are perfectly localized at the same site. \[S^{\rm(opt)}(E)=\frac{1}{2}(W_{\rm band}-E)^{2} \tag{41}\] ### The Single Site Approximation (SSA) If \(\alpha\) is not specifically small, the FBA does not work: the wave function is not localized on one site, so that formula (40) is not valid. However, as we conclude from numerics (se Section VII), the potential \(V_{\bf j}^{\rm(opt)}\) remains extremely short range even for \(\alpha\) away from zero: the potential remains localized at a single site with an accuracy better than 1%! Thus, it is very interesting to explore the single-site approximation (SSA) that postulates \[V_{\bf j}^{\rm(opt)}=V_{0}(E)\delta_{\bf j,0},\quad S^{\rm(opt)}(E)=V_{0}^{2} (E)/2, \tag{42}\] where the dependence \(V_{0}(E)\) is yet to be found. We stress again that, strictly speaking, the formula (42) is incorrect. Namely, it is inconsistent with the requirement (28) which relates the shape of optimal potential to that of the optimal wave-function. Nevertheless, as is demonstrated in the Section VII, SSA works extremely well, as long as we are interested in the "integral" characteristics, governed by the core of the fluctuation. What is also important, SSA allows for the analytical solution of the arising quantum-mechanical problem. In particular, in [25] it was shown that, within SSA \[V_{0}(E)=\left\{\fint_{BZ}\frac{d^{D}{\bf k}}{(2\pi)^{D}}\frac{1}{E-\varepsilon_{ \bf k}}\right\}^{-1} \tag{43}\] However, we choose to postpone using the SSA, because there are many important and nice results that can be derived without appealing to this approximation. ## VI Localized vs delocalized wave-functions: general consideration As long as we consider systems of finite size, the optimal fluctuation method is perfectly applicable not only to genuine localized states with \(E<0\), but to all the states, including those with \(E>0\). In the Section IV we have studied only the electronic ground state in the presence of the optimal fluctuation, here we will discuss the entire spectrum of the states. We will see that, besides the standard fully delocalized states with positive energies (plane waves), there is a lot of hybrid states - partly localized and partly delocalized. Suppose that we have found the form of optimal fluctuation \(V_{\bf j}^{(\rm opt)}\). 
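Equation (43) makes the SSA fully explicit, so \(V_0(E)\) and \(S^{\rm(opt)}(E)=V_0^2(E)/2\) follow from a single Brillouin-zone integral. The short sketch below evaluates it in D = 1 on a dense k-grid for several small negative energies (for \(E<0\) the integrand is regular, so no principal value is needed), which also exposes the approximately linear dependence of Eq. (33); the value of \(\alpha\) and the grid size are illustrative.

```python
import numpy as np

def V0_ssa(E, alpha, nk=200001):
    """Single-site approximation, Eq. (43), evaluated in D = 1 for E < 0."""
    k = np.linspace(-np.pi, np.pi, nk)
    eps = np.abs(4.0 * np.sin(k / 2.0) ** 2) ** (alpha / 2.0)
    integral = np.trapz(1.0 / (E - eps), k) / (2.0 * np.pi)
    return 1.0 / integral

alpha = 0.3
for E in (-0.2, -0.1, -0.05, -0.01):
    V0 = V0_ssa(E, alpha)
    print(f"E = {E:+.2f}:  V0 = {V0:.3f},  S_opt = V0^2/2 = {0.5 * V0**2:.3f}")
```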
To find the entire set of the states \(\psi_{\bf j}^{(m)}\) and the corresponding energies \(E_{m}\), we have to solve the linear Schrodinger equation \[\sum_{{\bf j}^{\prime}}\varepsilon_{{\bf j}-{\bf j}^{\prime}}\psi_{{\bf j}^{ \prime}}^{(m)}+\{V_{\bf j}^{(\rm opt)}-E_{m}\}\psi_{\bf j}^{(m)}=0, \tag{44}\] to apply periodic boundary condition to wave-functions \(\psi_{\bf j}^{(m)}\), and obtain a discrete set of eigenenergies \(E_{m}\) and the corresponding eigenfunctions \(\psi_{m}({\bf j})\). Clearly, \(\Psi_{\bf j}^{(\rm opt)}\) will be one of these states (the ground state with energy \(E_{0}\)). A formal solution of (44) may be written as \[\psi_{\bf j}=\sum_{{\bf j}^{\prime}}g_{E}({\bf j}-{\bf j}^{\prime})\psi_{{\bf j }^{\prime}}V_{{\bf j}^{\prime}}^{(\rm opt)}, \tag{45}\] where \[g_{E}({\bf j},{\bf j}^{\prime})=g_{E}({\bf r})=\sum_{\bf n}\frac{\exp[i({\bf k }_{\bf n}\cdot{\bf r})]}{E-\varepsilon_{\bf n}},\quad{\bf r}\equiv{\bf j}-{ \bf j}^{\prime}. \tag{46}\] is the Green function of the free Schrodinger equation. Note that there is no free term in the solution (45) since we have assumed that the energy \(E\) is out of resonance with all the eigenfrequencies of the free Schrodinger equation: \(E\neq\varepsilon_{n}\) for all \(n\). Writing \(\psi_{n}({\bf j})\) in terms of the Green function (46), which uses the basis (12), ensures that the boundary conditions for the wave function are fulfilled automatically. The sum over \({\bf j}^{\prime}\) in (45) is dominated by small \(|{\bf j}^{\prime}|\sim 1\), because we have assumed that \(V_{{\bf j}^{\prime}}^{(\rm opt)}\) is localized: it rapidly decays with \(|{\bf j}^{\prime}|\). Therefore, for \({\bf j}\gg 1\) we get \[\psi_{\bf j}=A(E)g_{E}({\bf j}) \tag{47}\] where \(A(E)\) is certain \({\bf j}\)-independent coefficient. Thus, the asymptotics of the wave function feels the presence of the optimal fluctuation only through the value of the energy \(E\). There are two different cases that we will discuss: negative energies \(E<0\) and positive energies \(E>0\). ### Negative energies: localized wave-function When the energy of the state is negative, the Green function can be approximated by the integral instead of the discrete sum \[g_{E}^{(\rm loc)}=\int_{\rm BZ}\frac{d^{D}{\bf k}}{(2\pi)^{D}}\frac{e^{i({\bf k }\cdot{\bf r})}}{E-\varepsilon({\bf k})}, \tag{48}\] since it converges at \(k\) in the entire Brillouin zone. The large-\(r\) asymptotic behavior of \(g_{E}^{(\rm loc)}({\bf r})\) can be easily evaluated. At smallest distances \(|{\bf r}|\lesssim r_{0}\sim 1\) the components with high momenta \(k\sim\pi\) give principal contribution to (48), \(E\) in the denominator can be neglected compared to \(\varepsilon({\bf k})\) and we get \(g\sim 1\) in this range of distances. However, \(E\) in the denominator still can be neglected in a wider range, namely, for \(r\lesssim r_{1}\) where \[r_{1}(E)\sim 1/K\sim E^{-1/\alpha}\gg 1 \tag{49}\] In this range of distances (\(r_{0}\ll r\ll r_{1}\)) we have: \[g_{E}^{(\rm loc)}({\bf r})\approx-\int_{BZ}\frac{d^{D}{\bf k}}{( 2\pi)^{D}}\frac{e^{i({\bf k}\cdot{\bf r})}}{\varepsilon({\bf k})}\approx\] \[\approx-\frac{1}{r^{D-\alpha}}\int\frac{d^{D}{\bf q}}{(2\pi)^{D}} \frac{e^{i({\bf q}\cdot{\bf m})}}{\varepsilon(q)}\propto\frac{1}{r^{D-\alpha}}, \tag{50}\] where we have introduced \({\bf m}\equiv{\bf r}/r\) and \({\bf q}\equiv{\bf k}r\). The main contribution to the integral (50) here comes from \(q\sim 1\), or, from relatively small \(k\sim 1/r\). 
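The intermediate-range power law (50) is easy to verify numerically: the sketch below evaluates the integral (48) in D = 1 on a midpoint k-grid (which avoids placing a sample exactly at the integrable \(1/\varepsilon(k)\) singularity at \(k=0\)) and prints the effective exponent \(d\ln|g|/d\ln r\), which should be close to \(\alpha-D=\alpha-1\) in the window \(r_0\ll r\ll r_1\). The parameters are illustrative.

```python
import numpy as np

def g_loc(r, E, alpha, nk=1_000_000):
    """Eq. (48) in D = 1, midpoint rule: g_E^(loc)(r) = int dk/(2 pi) cos(kr)/(E - eps(k))."""
    dk = 2.0 * np.pi / nk
    k = -np.pi + (np.arange(nk) + 0.5) * dk
    eps = np.abs(4.0 * np.sin(k / 2.0) ** 2) ** (alpha / 2.0)
    return np.sum(np.cos(k * r) / (E - eps)) * dk / (2.0 * np.pi)

alpha, E = 0.3, -1e-3          # r_1 ~ |E|^(-1/alpha) is huge, so the window is wide
rs = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
g = np.array([g_loc(r, E, alpha) for r in rs])
slopes = np.diff(np.log(np.abs(g))) / np.diff(np.log(rs))
print("g(r):", g)
print("effective exponents (expect ~ alpha - 1 = -0.7):", slopes)
```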
For \(r\gg r_{1}(E)\) we can expand the integrand in \(\varepsilon(k)\) and get \[g_{E}^{(\rm loc)}({\bf r})\approx\frac{1}{E^{2}}\int_{\rm BZ} \frac{d^{D}{\bf k}}{(2\pi)^{D}}e^{i({\bf k}\cdot{\bf r})}\varepsilon(k)=\] \[=\frac{1}{E^{2}r^{D+\alpha}}\int\frac{d^{D}{\bf q}}{(2\pi)^{D}}e^{ i({\bf q}\cdot{\bf m})}\varepsilon(q)\propto\frac{1}{r^{D+\alpha}}, \tag{51}\] It is easy to see that the results (50) and (51) match at \(r\sim r_{1}\). Thus \[g_{E}^{(\rm loc)}({\bf r})\sim\begin{cases}\hskip 28.452756pt1,&r\lesssim r_{0}, \\ \hskip 28.452756ptr^{\alpha-D},&r_{0}\ll r\ll r_{1},\\ r_{1}^{2\alpha}(E)r^{-\alpha-D},&r\gg r_{1},\end{cases} \tag{52}\] and \[\psi_{\rm opt}^{(\rm loc)}({\bf r})=\frac{1}{c}g_{E}^{(\rm loc)}({\bf r}) \tag{53}\] where \(c\sim 1\) is the normalization constant. Note that the main contribution to the normalization integral (and, therefore, to \(c\)) comes from the range \(r\sim 1\), so that \(c\) is almost \(E\)-independent. It should be mentioned that the asymptotic formula (47) and, hence, the formula (53) either, does not apply at \(r\sim 1\). So, to evaluate \(c\), one, in principle, has to use an explicit numerical solution of the initial discrete problem. In general, the localized part is not strongly sensitive to \(E\), so, for \(E\ll 1\), \[\Psi_{\rm opt}^{\rm(loc)}({\bf r})\approx\frac{1}{c}g_{E=0}^{\rm(loc)}({\bf r}) \tag{54}\] ### Positive energies: quasi-localized wave function The vast majority of the eigenstates \(\psi_{\bf j}^{(m)}\) are not much affected by the presence of the optimal fluctuation, so that the corresponding eigenfunctions and eigenenergies are described by (12) and (15) \[\psi_{\bf j}^{(m)}\approx\phi_{\bf n}({\bf j}),\quad E_{n}\approx\varepsilon_{ \bf n}, \tag{55}\] These states are perfectly delocalized. There is, however a subset of states, much more sensitive to the potential (28) - the states, fully symmetric with respect to rotations around the center of optimal fluctuation. Note, that the local level spacing within this subset is \(\delta_{1}(E)\propto L^{-1}\) (see (26)), which, for \(D>1\), is much larger than the total level spacing \(\delta_{D}\). Still, as we will see soon, even these fully symmetric states are strongly delocalized, except for a bunch of \(\sim M(E_{0})\) states in a narrow interval of energies \(|E-E_{0}|\lesssim\Delta(E_{0})\) around \(E_{0}\), where the states can be effectively localized. Under which condition the wave-function with positive energy is effectively localized? To answer this question let us introduce an important characteristic \[\epsilon(E)\equiv\frac{E-\varepsilon_{\rm mid}}{\delta_{1}(E)},\quad \varepsilon_{\rm mid}(E)\equiv\frac{\varepsilon_{\rm right}+\varepsilon_{\rm left}} {2} \tag{56}\] where \(\varepsilon_{\rm left}\) is the closest neighbour of \(E\) from the left, and \(\varepsilon_{\rm right}\) - from the right in the string of eigenenergies \(\varepsilon_{n}\), corresponding to free fully symmetric states (see Fig. 1). The local level spacing is \(\delta_{1}(E)=\varepsilon_{\rm right}-\varepsilon_{\rm left}\). Suppose that the energy \(E\) is placed in the middle of the interval \((\varepsilon_{\rm left},\varepsilon_{\rm right})\), or, in other words \(E=E_{\rm mid}(E)\) and \(\epsilon(E)=0\). Then, obviously, for \(r\ll r_{1}\) the terms in the sum (46) with \(\varepsilon_{n}<E\) and with \(\varepsilon_{n}>E\) will cancel each other in pairs exactly in a way, prescribed by the principal value integration. 
Hence, for \(r\ll r_{1}\) and \(\epsilon(E)=0\) the Green function is given by the following integral \[g_{E}^{(\epsilon=0)}(r\ll r_{1})=\!\!-\!\!\int_{\rm BZ}\frac{d^{D}{\bf k}}{(2 \pi)^{D}}\frac{e^{i({\bf k}\cdot{\bf r})}}{E-\varepsilon({\bf k})}, \tag{57}\] which is evaluated in exactly the same manner as before \[g_{E}^{(\epsilon=0)}({\bf r})\sim\begin{cases}\phantom{-}1,&r\lesssim r_{0}, \\ r^{\alpha-D},&r_{0}\ll r\ll r_{1}\end{cases} \tag{58}\] Evaluation of the very far tails \(r\gg r_{1}\) is not so straightforward. Indeed, since \(Kr\gg 1\) one needs to account for the discreteness of the system even when \(\epsilon(E)=0\). Explicitly, the Green function reads \[g_{E}(|{\bf r}|\gg r_{1})\propto\sum_{\bf n}\frac{e^{i{\bf k}_{\bf n}{\bf r}} }{E_{\rm mid}-\varepsilon_{\bf n}}. \tag{59}\] The main contribution to this sum comes from \(|{\bf k}_{\bf n}|\approx K\), hence, we expand \(\varepsilon_{\bf k_{\bf n}}\) in the vicinity of \(E\). Let us introduce integer \(l\) in the following way \[n=n_{\rm left}(E_{\rm mid})+l, k_{n}=K(E_{\rm mid})+\frac{\pi}{L}(l-1/2), \tag{60}\] \[\varepsilon_{n}=E_{\rm mid}+(l-1/2)\delta_{1}(E). \tag{61}\] Since the spectrum is spherically symmetric, we need the asymptotic of the spherical wave \[f_{n}(r)\approx x^{-(D-1)/2}\cos(x+\varphi_{D}), \tag{62}\] \[x\equiv(K(E_{m})r-(\pi r/L)[\epsilon-(l-1/2)])\gg 1. \tag{63}\] Therefore, relation (59) reads \[g_{E}(|{\bf r}|\gg r_{1})\propto\] \[\propto{\rm Re}\left[-e^{iKr-i\frac{\pi r}{2L}}(Kr)^{-(D-1)/2} \sum_{l=-\infty}^{\infty}\frac{e^{i\frac{\pi r}{2}l}}{l-\frac{1}{2}}\right], \tag{64}\] since it converges at small \(l\)'s. Now, we use \[\sum_{l=-\infty}^{\infty}\frac{e^{i\pi lz}}{l-1/2}=-i\pi e^{i\frac{\pi}{2}z}, \tag{65}\] and obtain the final expression for the wave function with positive energy in the middle of the interval \(E=E_{\rm mid}\) \[\Psi_{E}({\bf r})\sim\begin{cases}\phantom{-}1,&r\lesssim r_{0},\\ \phantom{-}r^{\alpha-D},&r_{0}\ll r\ll r_{1},\\ \phantom{-}r_{1}^{\alpha-D}\frac{\sin{(Kr+\varphi_{D})}}{(Kr)^{\frac{D-1}{2}}},&r\gg r_{1}.\end{cases} \tag{66}\] The oscillating tail at \(r\gg r_{1}\) prevents \(\Psi_{E}({\bf r})\) from being truly localized: even when \(\epsilon=0\) the wave function has delocalized tails. We call this state quasi-localized. Effective localization condition We consider finite systems of size \((2L)^{D}\), hence, it is possible for the quasi-localized state to be effectively localized in the vicinity of the optimal fluctuation. Indeed, one can compute the norm of \(\Psi_{E}(\mathbf{r})\) \[\int d^{D}\mathbf{r}|\Psi(\mathbf{r})|^{2}\sim\left(1+r_{1}^{2(\alpha-D)}K^{-D} \int_{1}^{Kr}dy\sin^{2}y\right)\sim\left(1+E^{-\frac{2}{\alpha}(\alpha-D)- \frac{D}{\alpha}+\frac{1}{\alpha}}L\right)\sim\left(1+E^{-2+\frac{D+1}{\alpha} }L\right). \tag{67}\] The contribution from the oscillating tail vanishes when the energy is sufficiently low. Let us introduce \(L_{c}(E)\) in the following way \[L_{c}\equiv E^{-\frac{D+1-2\alpha}{\alpha}}. \tag{68}\] Tail contribution vanishes if \[L\ll L_{c}. \tag{69}\] Our calculations are valid only if we consider higly excited state, i.e. \(KL\gg 1\), as given by condition (24). Conditions (24) and (69) can be satisfied simultaneously in very large systems only if \(\alpha<D/2\). Quasi-localized states, that we have just introduced, exist due to the presence of strong local fluctuations which correspond to the saddle-point solution. Since typical fluctuations are always present in real systems, one needs to take them into account. 
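As a small worked example of the criterion (67)-(69), the snippet below evaluates \(L_c(E)\) of Eq. (68) together with the window condition (24) for a few illustrative values of \(E\) and \(L\) at \(\alpha=0.3\), \(D=1\), showing that both conditions can indeed hold simultaneously when \(\alpha<D/2\).

```python
# Worked example for the localization criterion L << L_c(E), Eq. (68).
alpha, D = 0.3, 1
for E in (1e-2, 1e-3, 1e-4):
    L_c = E ** (-(D + 1 - 2 * alpha) / alpha)     # Eq. (68)
    K = E ** (1.0 / alpha)                        # Eq. (23)
    for L in (10**4, 10**8):
        ok_window = (1.0 / L) < K < 1.0           # condition (24)
        localized = L < L_c                       # condition (69)
        print(f"E={E:.0e}, L={L:.0e}: K={K:.1e}, L_c={L_c:.1e}, "
              f"window ok: {ok_window}, effectively localized: {localized}")
```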
Let us demonstrate for the most simple case \(D=1\) that the quasi-localized states are robust to these fluctuations. As we will show later (see Section IX), the level spacing is not very sensitive to the presence of the potential fluctuations, hence, we can assume it to coincide with the one in clean system. Using that, we can easily find energy that corresponds to the level spacing \(\delta_{D}(E)\) which is of order of the characteristic scale of the matrix element of the random potential \(\sqrt{\langle V^{2}\rangle}\sim WL^{-1/2}\): \[E_{c}^{\prime}\sim W^{-\frac{\alpha}{1-\alpha}}L^{-\frac{\alpha}{2(1-\alpha)}}. \tag{70}\] Therefore, states with energies \(E\ll E_{c}^{\prime}\) remain almost unperturbed owing to typical fluctuations. Some of them are "extended" over the whole system: the localization length \(l_{E}\sim W^{-2}E^{2-\frac{2}{\alpha}}\)[25] for these energies is much larger than the system size; and some of them are quasi-localized in the sense described above, since \(E_{c}\ll E_{c}^{\prime}\), where \[E_{c}\sim L^{-\frac{\alpha}{D+1-2\alpha}}. \tag{71}\] ## VII Numerical study of the optimal fluctuation We perform a numerical study of the optimal fluctuation dropping contribution from the delocalized tails. Indeed, because we know that any state with positive energy is either extended or quasi-localized, one cannot fine-tune the energy to remove the oscillating non-decaying contribution. Our results support strongly localized character of the core of the optimal fluctuation. At small \(|\mathbf{j}|\) the potential \(V_{\mathbf{j}}^{(\mathrm{opt})}\) rapidly decays with \(|\mathbf{j}|\). For example, in \(1D\)-case, \(V_{\pm 1}^{(\mathrm{opt})}/V_{0}^{(\mathrm{opt})}\) varies from \(0.01\) at \(\alpha=0.15\) to \(0.04\) at \(\alpha=0.35\) (see Fig. 4). At the same time at \(|\mathbf{j}|\gg 1\) the decay of \(V_{\mathbf{j}}^{(\mathrm{opt})}\) becomes rather slow and is well described by a power law: \[V_{\mathbf{r}}^{(\mathrm{opt})}\propto|\Psi_{\mathbf{r}}^{(\mathrm{opt})}|^{2 }\propto|\mathbf{r}|^{2\alpha-2D} \tag{72}\] Figure 2: Upper string: a locally equidistant spectrum of levels \(E_{n}\equiv\varepsilon_{n}\) in the absence of the fluctuation. Lower string: a spectrum in the presence of fluctuation with (a) \(L\ll L_{c}\), (b) \(L\gg L_{c}\).. The levels \(E_{n}\) within the localization energy domain are shifted with respect to \(\varepsilon_{n}\). In general, they contain both localized and delocalized components. which is perfectly consistent with the exact relations (52) and (28). Although the validity of the latter relation signals about the validity of our numerics, it should be admitted that for the vast majority of questions which we address in this study, the tails of the potential \(V_{\mathbf{j}}^{(\mathrm{opt})}\) are irrelevant. To test the accuracy of the SSP we have found \(S_{\mathrm{opt}}^{(P)}\) for series of truncated models where all \(V_{\mathbf{j}}^{(\mathrm{opt})}\) were forcefully set to be zeroes for \(|j|>P\), while the remaining \(2DP+1\) potentials were chosen to optimize \(S\). Particularly, due to (28) we set \[V_{|\mathbf{j}|\leq P}^{(\mathrm{opt})}=\lambda|\psi_{\mathbf{j}}|^{2},\quad V _{|\mathbf{j}|>P}^{(\mathrm{opt})}=0. \tag{73}\] After that, we add normalization condition \(\sum_{j}|\psi_{j}|^{2}=1\) and solve the system of \(P+2\) equations (instead of \(2DP+1\) since the localized state possesses discrete rotational symmetry). 
During the calculations, \(g_{E}^{(\mathrm{loc})}\) is used instead of \(g_{E}\) since we are interested in the localized solution. The results of exact optimization are illustrated in Fig. 3 and Fig. 5. ## VIII Away from \(E_{0}\): partly localized wave-functions In Section VI we have found the condition for the state of energy \(E=E_{\mathrm{mid}}\) to be effectively localized and have studied the properties of the quasi-localized wave functions in detail. Now we discuss the properties of the states with energies \(E_{m}\neq E_{0}\). Slightly away from \(E_{0}\) we expect the \(\epsilon\neq 0\) part to be small: \[\psi_{E_{m}}^{(\mathrm{m,del})}(\mathbf{r})\propto g_{E_{m}}^{(\epsilon\neq 0 )}(\mathbf{r})\propto[E_{m}-E_{\mathrm{mid}}(E_{m})]. \tag{74}\] and, since \(E-E_{\mathrm{mid}}(E_{m})=0\) at \(E_{m}=E_{0}\), at small \(E_{m}-E_{0}\) we will have have \(E-E_{\mathrm{mid}}(E_{m})\propto E_{m}-E_{0}\). ### The delocalized part of the wave-function Let us again introduce integer \(l\), such that \[n=n_{\mathrm{left}}(E_{m})+l,\quad\varepsilon_{n}=E_{\mathrm{mid }}(E_{m})+(l-1/2)\delta_{1},\quad k_{n}=K(E_{m})-\frac{\pi}{L}[\epsilon-(l-1/2)], \tag{75}\] \[E_{m}-\varepsilon_{n}=(E_{m}-E_{\mathrm{mid}}(E_{m}))-(l-1/2) \delta_{1},\quad E_{\mathrm{mid}}(E_{m})-\varepsilon_{n}=-(l-1/2)\delta_{1},\] (76) \[f_{n}(r)\approx x^{-(D-1)/2}\cos(x+\varphi_{D}),\quad x\equiv(K( E_{m})r-(\pi r/L)[\epsilon-(l-1/2)])\gg 1, \tag{77}\] Figure 3: The dependence \(S^{(\mathrm{opt})}(E_{0})\) for \(D=1\) obtained numerically. Solid line (red online) shows the result of FBA. Then for the asymptotics of the \(\epsilon\neq 0\) part of the Green function we can write \[g_{E_{m}}^{(\epsilon\neq 0)}({\bf r})\approx\sum_{l=-\infty}^{ \infty}\phi_{n_{\rm left}(E_{m})+l}(r)\phi_{m_{\rm left}(E_{m})+l}^{*}(0)\left\{ \frac{1}{[E_{m}-E_{\rm mid}(E_{m})]-\delta_{1}(l-1/2)}-\frac{1}{-\delta_{1}(l- 1/2)}\right\}\approx\\ \approx\frac{2K^{D-1}}{\sigma_{D}L}f(0)(Kr)^{-(D-1)/2}\sum_{l=- \infty}^{\infty}\left\{\frac{1}{[E_{m}-E_{\rm mid}(E_{m})]-\delta_{1}(l-1/2)} -\frac{1}{-\delta_{1}(l-1/2)}\right\}\times\\ \times\cos\{(Kr-(\pi r/L)[\epsilon-(l-1/2)])+\varphi_{D}\}\approx\\ \approx\frac{2K^{D-1}}{\sigma_{D}L}f(0)(Kr)^{-(D-1)/2}\frac{1}{ \delta_{1}}{\rm Re}\,\left\{\sum_{l=-\infty}^{\infty}\frac{\epsilon\exp\{ikr-i( \pi r/L)[\epsilon-(l-1/2)]+i\varphi_{D}\}}{[\epsilon-(l-1/2)](l-1/2)}\right\}= \\ =\frac{2K^{D-1}}{\sigma_{D}L}f(0)(Kr)^{-(D-1)/2}\frac{1}{\delta_{1 }}{\rm Re}\,\left\{\exp(iKr+i\varphi_{D})\sum_{l=-\infty}^{\infty}\frac{ \epsilon\exp\{-i(\pi r/L)[\epsilon-(l-1/2)]\}}{[\epsilon-(l-1/2)](l-1/2)} \right\}=\\ =\frac{2f(0)}{\sigma_{D}}K^{D-1}\nu_{1}(E)(Kr)^{-(D-1)/2}{\rm Re }\,\left\{\exp(iKr+i\varphi_{D})\Phi(r/L,\epsilon)\right\} \tag{78}\] Figure 4: (a) The shape of optimal fluctuation in \(1D\). At first coordination sphere it drops already by two orders of magnitude, while in the tail it decreases only slowly (see inset). (b) The relative accuracy \(\varkappa(P)=[S^{(P)}-S^{(\infty)}]/S^{(\infty)}\) of truncated models with \(P\) shells of nonzero potentials. 
where \[\Phi(z,\epsilon)=\sum_{l=-\infty}^{\infty}\frac{\epsilon e^{-i\pi z[\epsilon-(l-1/2 )]}}{(\epsilon-(l-1/2))(l-1/2)}\approx\begin{cases}\begin{array}{ll}-\pi\tan( \pi\epsilon)&\text{for }z\ll 1,\;\text{any }\epsilon,\\ -\pi^{2}(1-|z|)\epsilon&\text{for }\epsilon\ll 1,\;\text{any }z,\\ 1/(\epsilon\mp 1/2)&\text{for }\epsilon\to\pm 1/2,\;\text{any }z,\end{array}\end{cases} \tag{79}\] The corresponding contribution to the wave function \(\psi_{\text{deloc}}(\mathbf{r})\propto g^{(\epsilon\neq 0)}(\mathbf{r})\) is delocalized. Having in mind that the preexponential coefficient in (78) is \(L\)-independent, we conclude that the normalization integral \(N_{\text{deloc}}=\int|\psi_{\text{deloc}}(r)|^{2}r^{D-1}dr\propto L\). Thus, in the case of general \(\epsilon\sim 1\), when also \(\Phi(z,\epsilon)\sim 1\), the norm \(N_{\text{deloc}}\sim L\) strongly dominates over the norm of the localized part \(N_{\text{loc}}\sim 1\). Since we are interested in such wave functions, that are at least partly localized (i.e., \(N_{\text{loc}}\gtrsim N_{\text{deloc}}\)), we have to concentrate on the case \(\epsilon\ll 1\), when \(\Phi(z,\epsilon)\ll 1\). Therefore we are allowed to use the corresponding asymptotics of (79). As a result \[g_{E}^{(\epsilon\neq 0)}(\mathbf{r})\approx-\frac{2\pi^{2}f(0)}{ \sigma_{D}}K^{D-1}\nu_{1}(E)(Kr)^{-(D-1)/2}\epsilon(1-r/L)\cos(Kr+\varphi_{D}) =C\frac{\epsilon\sqrt{K^{D+1}L}}{E}\tilde{\phi}_{\text{deloc}}(r), \tag{80}\] \[C=-\frac{\pi f(0)}{\alpha}\sqrt{\frac{2}{3\sigma_{D}}}=-\frac{2 \frac{2-D}{2}\pi}{\alpha\Gamma(D/2)}\sqrt{\frac{\pi^{\frac{2-D}{2}}\Gamma(D/2 +1)}{3D}} \tag{81}\] where the normalized delocalized wave function \(\tilde{\phi}_{\text{deloc}}(r)\) has, for \(|\epsilon|\ll 1\), the following asymptotics at \(Kr\gg 1\): \[\tilde{\phi}_{\text{deloc}}(r)\approx\sqrt{\frac{6K^{D-1}}{\sigma_{D}L}}(Kr)^ {-\frac{D-1}{2}}(1-r/L)\cos(Kr+\varphi_{D}), \tag{82}\] From (80) it is clear that the wave function becomes essentially delocalized already at \[\epsilon\gtrsim\sqrt{L_{c}(E)/L},\quad\text{where }L_{c}(E)\sim E^{2-2(D+1)/ \alpha}\gg 1. \tag{83}\] When \(\epsilon\) further increases and, finally, reaches \(|\epsilon|\sim 1\) the shape of the localized wave function starts to change and gradually approaches the standard cosine form (see Fig. 6). It is now necessary to find the proper expression of \(\epsilon\) as a function of the energy \(E\). ## IX Eigenenergies Until now we didn't need the exact form of the optimal fluctuation and considered it to be short-range only. It is impossible to find the spectrum in the presence of the optimal fluctuation given by the solution of the nonlinear Shrodinger equation (31) analytically. Hence, it is now Figure 5: The dependence \(S^{(\text{opt})}(E_{0})\) for (a) \(D=2\), (b) \(D=3\). Solid line (red online) shows the result of FBA. when we use SSA explicitly. 
### The Dyson equation and its general solution The Dyson equation for the Green function \(G_{E}(\mathbf{r},\mathbf{r}^{\prime})\) reads \[G_{E}(\mathbf{r},\mathbf{r}^{\prime})=g_{E}(\mathbf{r},\mathbf{r}^{\prime})+Vg_{ E}(\mathbf{r},\mathbf{0})G_{E}(\mathbf{0},\mathbf{r}^{\prime}) \tag{84}\] Then, for \(G_{E}(\mathbf{r},\mathbf{r}^{\prime})\) we obtain \[G_{E}(\mathbf{r},\mathbf{r}^{\prime})=g_{E}(\mathbf{r},\mathbf{ r}^{\prime})+\frac{Vg_{E}(\mathbf{r},\mathbf{0})g_{E}(\mathbf{0},\mathbf{r}^{ \prime})}{1-g_{E}(\mathbf{0},\mathbf{0})V}=\\ =g_{E}(\mathbf{r}-\mathbf{r}^{\prime},\mathbf{0})+\frac{Vg_{E}( \mathbf{r},\mathbf{0})g_{E}(\mathbf{0},\mathbf{r}^{\prime})}{1-g_{E}(\mathbf{ 0},\mathbf{0})V}. \tag{85}\] The eigenenergies \(E_{m}\) of the corresponding Schrodinger equation can be found as solutions of equations \[g_{E}^{-1}(\mathbf{0},\mathbf{0})-V=0, \tag{86}\] with respect to \(E\). As earlier, we split the Green function into two terms \[g_{E}(0)=g_{E}^{(\varepsilon=0)}(0)+g_{E}^{(\varepsilon\neq 0) }(0), \tag{87}\] \[g_{E}^{(\varepsilon=0)}(0)=\fint_{BZ}\frac{d^{D}\mathbf{k}}{(2 \pi)^{D}}\frac{1}{E-\varepsilon(\mathbf{k})}. \tag{88}\] From the previous chapter, we know that when \(E=E_{\rm mid}\) discrete part of the Green function is zero: \(g_{E}^{(\rm deloc)}(\mathbf{0})=0\). Hence, we write \[g_{E}^{(\varepsilon=0)}(0)=\fint_{BZ}\frac{d^{D}\mathbf{k}}{(2 \pi)^{D}}\frac{1}{E-\varepsilon(\mathbf{k})}\approx\\ \approx\sum_{n}\frac{|\psi_{n}(0)|^{2}}{E_{\rm mid}(E)-\varepsilon _{n}}. \tag{89}\] In order to evaluate singular part, \(g_{E}^{(\varepsilon\neq 0)}(\mathbf{0})\), we, again, introduce integer \(l\) \[n=n_{\rm left}(E)+l,\quad\varepsilon_{n}=E_{\rm mid}(E)+(l-1/2) \delta_{1} \tag{90}\] \[E-\varepsilon_{n}=(E-E_{\rm mid}(E))-(l-1/2)\delta_{1},\quad E_{ \rm mid}(E)-\varepsilon_{n}=-(l-1/2)\delta_{1},\] (91) \[\delta_{1}\equiv\delta_{1}(E). \tag{92}\] Therefore, we find \[g_{E}^{(\varepsilon\neq 0)}(0)\approx\frac{2K^{D-1}f(0)^{2}}{ \sigma_{D}L}\sum_{l=-\infty}^{\infty}\left\{\frac{1}{[E-E_{\rm mid}(E)]-\delta _{1}(l-1/2)}-\frac{1}{-\delta_{1}(l-1/2)}\right\}=\\ =-\frac{2K^{D-1}f(0)^{2}}{\sigma_{D}L}\frac{1}{\delta_{1}(E)}\sum _{l=-\infty}^{\infty}\frac{\epsilon}{[(1/2+l)-\epsilon][1/2+l]}=-\frac{2K^{D- 1}f(0)^{2}}{\sigma_{D}}\nu_{1}(E)\pi\tan{(\pi\epsilon)}=-\pi\nu_{D}(E)\tan{( \pi\epsilon)} \tag{93}\] Finally, we obtain \[g_{E}^{(\rm deloc)}(0)=-\pi\nu_{D}(E)\tan{(\pi\epsilon)}, \tag{94}\] It is interesting that \(D\)-dimensional DOS \(\nu_{D}(E)\) enters the final result. Now, eigenenergies can be found from the following equation \[1/V=F_{0}(E)-\pi\nu_{D}(E)\tan(\pi\epsilon) \tag{95}\] or \[\epsilon=\frac{1}{\pi}\arctan{\left(\frac{F_{0}(E)-1/V}{\pi\nu_{D}(E)}\right)}, \tag{96}\] where \(F_{0}(E)=g_{E}^{(\varepsilon=0)}(0)\). Let us denote solution of \[F_{0}(E)-1/V=0 \tag{97}\] as \(E=E_{0}(V)\). Hence, when energy \(E\) is very close \(E_{0}\): Figure 6: The evolution of spatial shape of the delocalized part of the wave function in \(D=1\) with the change of parameter \(\epsilon\). 
(a): \(\epsilon\)=0.49, (b): \(\epsilon\)=0.25, (c): \(\epsilon\)=0.1, (d): \(\epsilon\)=0.01 \(|E-E_{0}(V)|\ll 1\) we find \[\epsilon=\frac{1}{\pi}\arctan\left(\frac{E-E_{0}(V)}{\pi\nu_{D}(E)} \left.\frac{dF_{0}}{dE}\right|_{E=E_{0}(V)}\right)=\\ =\frac{1}{\pi}\arctan\left(\frac{E-E_{0}(V)}{\Delta(E)}\right), \tag{98}\] where \[\Delta(E_{0})=\frac{\pi\nu_{D}(E_{0})}{b(E_{0})}\sim\frac{K^{D}}{E}\ll E,\quad \text{(since $\alpha<D/2$)}, \tag{99}\] and \[b(E_{0})=\left.\frac{dF_{0}}{dE}\right|_{E=E_{0}(V)}\sim 1. \tag{100}\] When \(E_{0}\ll 1\) we get \[b(E_{0})\approx b(0)=-\int_{-\pi}^{\pi}\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{ \varepsilon(k)^{2}}. \tag{101}\] This integral safely converges \(k\to 0\), since \(\alpha<D/2\), and \(b(E_{0})\sim 1\) (see App. A). Finally, we are in position to provide explicit expression for the eigenenergies spectrum. Every interval \((\varepsilon_{n},\varepsilon_{n+1})\) contains only one energy level \(E_{n}\) \[E_{n}=E_{\text{mid}}(E_{n})+\epsilon\delta_{1}=E_{\text{mid}}(E_ {n})+\frac{\delta_{1}(E_{0})}{\pi}\arctan\left(\frac{E_{\text{mid}}(E_{n})-E_{ 0}}{\Delta(E_{0})}\right)\approx\\ \approx\begin{cases}\varepsilon_{n}+\frac{\delta_{1}(E_{0})\Delta (E_{0})}{\pi(E_{0}-E_{\text{mid}}(E_{n}))},&\quad E_{\text{mid}}(E_{n})<E_{0},\quad|E_{\text{mid}}(E_{n})-E_{0}|\gg\Delta(E_{0})\\ E_{\text{mid}}(E_{n})+\delta_{1}(E_{0})\left(\frac{E_{\text{mid}}(E_{n})-E_{0}}{ \pi\Delta(E_{0})}\right),&\quad|E_{\text{mid}}(E_{n})-E_{0}|\ll\Delta(E_{0}), \\ \varepsilon_{n+1}-\frac{\delta_{1}(E_{0})\Delta(E_{0})}{\pi(E_{\text{mid}}(E_{ n})-E_{0})},&\quad E_{\text{mid}}(E_{n})>E_{0},\quad|E_{\text{mid}}(E_{n})-E_{0}| \gg\Delta(E_{0})\end{cases} \tag{102}\] When \(|E_{n}-E_{0}|\gg\Delta(E_{0})\) energy level \(E_{n}\) almost coincides with \(\varepsilon_{n}\) or \(\varepsilon_{n+1}\) and the corresponding wave function is almost unperturbed extended wave. When \(|E_{n}-E_{0}|\ll\Delta(E_{0})\) energy is very close to the middle of the interval \(E_{n}\approx E_{\text{mid}}(E_{n})\), which corresponds to the quasi-localized state. ### Full expression for the wave function Let us now get back to the wave function. Since we are interested in the quasi-localized states with energies close to the \(E_{\text{mid}}\), we can expand relation (98) and plug it in the expression for the wave function. Hence, we obtain \[\psi_{E}(r)=\left[1+u_{1}^{2}L+u^{2}L\right]^{-1/2}\times\\ \times\left(\widetilde{\Psi}_{E_{0}}(r)+u_{1}\sqrt{L}\psi_{n(E)}^ {\perp}(r)+u\sqrt{L}\tilde{\phi}_{\text{deloc}}(r)\right), \tag{103}\] \[u_{1}=\sqrt{\frac{\sigma_{D}}{2L_{c}}},\quad u=C^{\prime}E^{\frac {1-D}{2n}}(E-E_{0})\\ C^{\prime}=-\frac{b(E_{0})2^{\frac{D+2}{2}}\Gamma^{\frac{3}{2}}\left( \frac{D}{2}+1\right)}{3^{\frac{1}{2}}D^{\frac{3}{2}}\pi^{\frac{D+2}{2}}} \tag{104}\] where \(\widetilde{\Psi}_{E_{0}}(r)\sim r^{\alpha-D}\) - localized part of the quasi-localized wave function and \(\psi_{n(E)}^{\perp}(r)\) - its delocalized tail at \(r>r_{1}\) that exists even for \(E=E_{0}\): \[\psi_{n(E)}^{\perp}(r)=r_{1}^{\alpha-D}\sqrt{\frac{2L_{c}}{L\sigma_{D}}}\frac{ \sin\left(Kr+\varphi_{D}\right)}{(Kr)^{\frac{D-1}{2}}}. \tag{105}\] Each of the functions \(\widetilde{\Psi},\psi^{\perp},\tilde{\phi}_{\text{deloc}}\) are normalized to unity. We have also used the fact that three functions are orthogonal to each other (the overlap tends to zero as \(1/L\)). 
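The level structure derived above can be sanity-checked against exact diagonalization: within the single-site approximation the resonance energy \(E_0(V)\) solves \(F_0(E)=1/V\) (cf. Eqs. (43), (95)-(97)), and for an attractive \(V\) this root should coincide with the bound state of a finite chain carrying one impurity site. A minimal D = 1 sketch with illustrative parameters:

```python
import numpy as np
from scipy.optimize import brentq

def dispersion(k, alpha):
    return np.abs(4.0 * np.sin(k / 2.0) ** 2) ** (alpha / 2.0)

def F0(E, alpha, nk=400001):
    """F_0(E): Brillouin-zone integral of 1/(E - eps(k)), regular for E < 0."""
    k = np.linspace(-np.pi, np.pi, nk)
    return np.trapz(1.0 / (E - dispersion(k, alpha)), k) / (2.0 * np.pi)

alpha, V, L = 0.3, -2.0, 200                    # attractive single-site impurity (illustrative)
# Bound-state energy from the single-site condition F_0(E) = 1/V, Eq. (97):
E0_pred = brentq(lambda E: F0(E, alpha) - 1.0 / V, -10.0, -1e-9)

# Exact diagonalization of the same impurity on a finite periodic chain:
N = 2 * L
k = np.pi * np.arange(-L, L) / L
F = np.exp(1j * np.outer(k, np.arange(N))) / np.sqrt(N)
H = (F.conj().T * dispersion(k, alpha)) @ F     # clean long-range hopping Hamiltonian
H[0, 0] += V                                    # impurity at site 0
E_exact = np.linalg.eigvalsh(H)[0]

print(f"bound state: single-site formula E0 = {E0_pred:.5f}, finite chain E0 = {E_exact:.5f}")
```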
The first two contributions to the overall normalization coefficient \(\left[1+u_{1}^{2}L+u^{2}L\right]^{-1/2}\) come from the quasi-localized part \(\Psi_{E_{0}}(r)\), while the third contribution arises due to deviation \(E-E_{0}\). Hence, when \(|u|\sqrt{L}\ll 1\) and \(L\ll L_{c}(E)\) states (103) are effectively localized. There is at least one such state with \(E=E_{0}\) and \(u=0\). How many more of them are there? Effectively localized states should satisfy the following condition \[|E-E_{0}|\ll\frac{E^{\frac{D-1}{2}}}{\sqrt{L}}\equiv\widetilde{\Delta}(E). \tag{106}\] Therefore, there are \(M_{\text{loc}}\) more effectively localized states \[M_{\text{loc}}\equiv\frac{\widetilde{\Delta}(E)}{\delta_{1}(E)}\sim\frac{E^{ \frac{D-1}{2\alpha}}LE^{\frac{1}{\alpha}}}{\sqrt{L}E}=\sqrt{\frac{L}{L_{c}}}\ll 1. \tag{107}\] Hence, there is only one effectively localized state in the vicinity of the optimal fluctuation. Inverse participation ratio In this Section we will separately examine cases \(L\ll L_{c}\) (69), and the opposite one \(L\gg L_{c}\). ### IPR in the near tail: \(L\ll L_{c}\) In this case \(u_{1}\sqrt{L}\ll 1\) and one can neglect the second term in (103). Then for arbitrary \(q\) and \(D\) IPR obtains the following form \[P_{q}=\sum_{j}|\psi_{E}(j)|^{2q}\approx\frac{1+u^{2q}L^{D(1-q)+q}}{(1+u^{2}L)^ {q}}. \tag{109}\] If one fixes \(q\), one immediately finds critical dimension \[D_{\rm cr}=\frac{q}{q-1}. \tag{110}\] If \(D<D_{\rm cr}\) it is possible to introduce two distinct characteristic lengths \[\xi_{1}(E)\sim u^{-2}, \xi_{2}(E,q,D)\sim u^{-\frac{2q}{D(1-q)+q}}, \tag{111}\] \[1\ll\xi_{1}(E)\ll\xi_{2}(E,q,D), \tag{112}\] Hence, IPR is given by the following relation \[P_{q}(E,D)\approx\frac{1+u^{2q}L^{D(1-q)+q}}{(1+u^{2}L)^{q}}\sim \\ \sim\begin{cases}&1,\qquad L\ll\xi_{1},\\ &\\ &\left(\frac{\xi_{1}}{L}\right)^{q},\qquad\xi_{1}\ll L\ll\xi_{2},\\ &\\ L^{-D(q-1)},\qquad\xi_{2}\ll L\ll L_{c}.\end{cases} \tag{113}\] Case \(D>D_{\rm cr}\) is much more surprising. Here \(\xi_{2}(E)\) does not exist, IPR (113) is as follows \[P_{q}(E,D)\sim\begin{cases}&1,\qquad L\ll\xi_{1},\\ &\\ \left(\frac{\xi_{1}}{L}\right)^{q},\qquad\xi_{1}\ll L\ll L_{c}.\end{cases} \tag{114}\] For example, when \(D=3\) and \(q=2\) the IPR large-\(L\) behavior is \(P_{2}\propto L^{-2}\) instead of the standard three-dimensional law \(P_{2}\propto L^{-3}\) even for energies far away from \(E_{0}\), i.e. \(\xi_{1}\ll L\). If we define fractal dimension \(D_{q}\) according to \[P_{q}\sim L^{-D_{q}(q-1)}. \tag{115}\] then, in our case, we obtain \[D_{q}=\frac{q}{q-1}\quad\text{ when }\quad q>\frac{D}{D-1} \tag{116}\] When \(D>D_{\rm cr}\) it is easy to see from (116) that the fractal dimension \(D_{q}<D\). ### IPR in the far tail: \(L\gg L_{c}\) Here \(u_{1}^{2}L\gg 1\) and, therefore \[P_{q}\approx\frac{1+u_{1}^{2q}L^{D(1-q)+q}+u^{2q}L^{D(1-q)+q}}{(u_{1}^{2}L+u^ {2}L)^{q}}. \tag{117}\] In high dimensions, \(D>D_{\rm cr}\), IPR, again, is fractal with the same fractal dimension \[P_{q}(E,D)\sim\begin{cases}\left(\frac{1}{u_{1}^{2}L}\right)^{q},&\quad|E-E_ {0}|\ll E^{\frac{D-\alpha}{\alpha}},\\ \left(\frac{1}{u^{2}L}\right)^{q},&\quad|E-E_{0}|\gg E^{\frac{D-\alpha}{ \alpha}}.\end{cases} \tag{118}\] Thus, we see that the localized part of the wave function can dominate IPR even when the state is not effectively localized (the norm is dominated by the delocalized tail). ## XI Conclusion We have demonstrated that finite disordered systems with long range hopping indeed exhibit unusual properties. 
In addition to conventional localized states with negative energies that contribute to Lifshitz tails, the fluctuations of disorder in such systems support the existence of quasi-localized states with positive energies. The structure of such states is as follows: there is a strong short-range core, localized in the vicinity of a strong local fluctuation of disorder, and a weak oscillating tail that spans the entire system. Under the condition (69), the contribution from the localized part of the wave function dominates the norm. However, as the system size increases, the contribution of the tail increases as well, and sooner or later it overcomes the contribution of the core. This happens because the long-range tails, however weak, decay too slowly and cannot be normalized in an infinite system. Thus, the quasi-localized states can only exist in finite systems. Note that the quasi-localized states can be highly excited states: there may be a lot of extended states with lower energies. Moreover, even when condition (69) is not satisfied, and the norm of the wave function is dominated by the tail, the behavior of the IPR \(P_{q}\) may still be determined by the localized core of the wave function. Then \(P_{q}\) exhibits unusual behavior in a wide range of energies away from the energy of a quasi-localized state: for certain values of \(q\) the character of \(P_{q}\) is "fractal". The states we found are robust to typical fluctuations of the random potential. Keeping that in mind, in 1D, that is, in the simplest possible case, we can distinguish three different energy domains as follows (see Fig. 7): * \(E\ll E_{c}\): here the quasi-localized states are formed on the continuum background of extended states. * \(E_{c}\ll E\ll E_{c}^{\prime}\): here remnants of the quasi-localized states become extended but exhibit unusual "fractal" properties. * \(E\gg E_{c}^{\prime}\): here all the states are localized owing to standard 1D localization by typical fluctuations, i.e. \(l_{E}\ll L\). Figure 7: Three different energy regimes exist in 1D systems: the lowest energies allow quasi-localized states to exist. ## Acknowledgements We are indebted to M.V.Feigel'man for valuable discussions, to I.M.Khaymovich for pointing out multiple helpful references, and to L.Levitov for useful comments on the manuscript. This work was supported by the Basic Research Program of The Higher School of Economics.
2309.14999
Object-Centric Open-Vocabulary Image-Retrieval with Aggregated Features
The task of open-vocabulary object-centric image retrieval involves the retrieval of images containing a specified object of interest, delineated by an open-set text query. As working on large image datasets becomes standard, solving this task efficiently has gained significant practical importance. Applications include targeted performance analysis of retrieved images using ad-hoc queries and hard example mining during training. Recent advancements in contrastive-based open vocabulary systems have yielded remarkable breakthroughs, facilitating large-scale open vocabulary image retrieval. However, these approaches use a single global embedding per image, thereby constraining the system's ability to retrieve images containing relatively small object instances. Alternatively, incorporating local embeddings from detection pipelines faces scalability challenges, making it unsuitable for retrieval from large databases. In this work, we present a simple yet effective approach to object-centric open-vocabulary image retrieval. Our approach aggregates dense embeddings extracted from CLIP into a compact representation, essentially combining the scalability of image retrieval pipelines with the object identification capabilities of dense detection methods. We show the effectiveness of our scheme to the task by achieving significantly better results than global feature approaches on three datasets, increasing accuracy by up to 15 mAP points. We further integrate our scheme into a large scale retrieval framework and demonstrate our method's advantages in terms of scalability and interpretability.
Hila Levi, Guy Heller, Dan Levi, Ethan Fetaya
2023-09-26T15:13:09
http://arxiv.org/abs/2309.14999v1
# Object-Centric Open-Vocabulary Image Retrieval with Aggregated Features ###### Abstract The task of open-vocabulary object-centric image retrieval involves the retrieval of images containing a specified object of interest, delineated by an open-set text query. As working on large image datasets becomes standard, solving this task efficiently has gained significant practical importance. Applications include targeted performance analysis of retrieved images using ad-hoc queries and hard example mining during training. Recent advancements in contrastive-based open vocabulary systems have yielded remarkable breakthroughs, facilitating large-scale open vocabulary image retrieval. However, these approaches use a single global embedding per image, thereby constraining the system's ability to retrieve images containing relatively small object instances. Alternatively, incorporating local embeddings from detection pipelines faces scalability challenges, making it unsuitable for retrieval from large databases. In this work, we present a simple yet effective approach to object-centric open-vocabulary image retrieval. Our approach aggregates dense embeddings extracted from CLIP into a compact representation, essentially combining the scalability of image retrieval pipelines with the object identification capabilities of dense detection methods. We show the effectiveness of our scheme to the task by achieving significantly better results than global feature approaches on three datasets, increasing accuracy by up to 15 mAP points. We further integrate our scheme into a large scale retrieval framework and demonstrate our method's advantages in terms of scalability and interpretability. 1General Motors, R&D, Israel 2Bar-Ilan University, Israel (Ghan.fetaya@biu.ac.il) ## 1 Introduction Retrieving images which include specific objects, according to an on-demand open-set text query, is an important task in computer vision with numerous practical applications. Performing such targeted searches, especially over unlabeled rare concepts, can be used, for example, to analyze the performance of an already trained system, to mine hard examples during training, or to guide the process of gathering the data for manual annotations. Relevant use cases vary in scale from web-scale search (e.g., Google Bard, Microsoft Bing) to searching in application specific datasets (e.g., e-commerce, automotive, medical applications). In both cases, scalability and efficiency play critical roles in adopting the technology. Despite its importance, literature lacks direct references to this task. One possible reason might be the task complexity: common vision-language (VL) representation of open-set objects was hard to achieve until the accelerated evolution of contrastive-based open-vocabulary models. These models (e.g., CLIP [12], Florence [13], Coca [14]), trained on web-scale image-caption data, produce a common embedding space for global image and caption representations, maximized directly via training. Retrieval is then performed by ranking the text-image similarity using cosine distance in the common embeddings space and can be scaled by using frameworks as schematically illustrated in Figure 1. However, empirical experiments with CLIP using open-set object queries on the similar task of object-centric retrieval produce less satisfactory results (see Figure 2 and direct quantitative comparison in Section 4). 
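The two-stage framework sketched in Figure 1 ultimately reduces to ranking text-image cosine similarities, which can be served at scale with an off-the-shelf inner-product index. Below is a minimal illustration using FAISS with random placeholder embeddings; any dual encoder (e.g., CLIP) would supply the real image and text vectors, and the array shapes and sizes are assumptions made only for the sake of the example.

```python
import numpy as np
import faiss  # pip install faiss-cpu

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# --- offline stage: one global embedding per image (placeholder random data) ---
d, n_images = 512, 10_000
image_embeddings = l2_normalize(np.random.randn(n_images, d).astype("float32"))
index = faiss.IndexFlatIP(d)          # inner product == cosine on normalized vectors
index.add(image_embeddings)

# --- online stage: embed an ad-hoc text query and retrieve the top-k images ---
def retrieve(text_embedding, k=10):
    q = l2_normalize(text_embedding.astype("float32").reshape(1, -1))
    scores, ids = index.search(q, k)
    return list(zip(ids[0].tolist(), scores[0].tolist()))

query = np.random.randn(d)            # stands in for embed_text("a photo of a bird")
print(retrieve(query, k=5))
```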
In particular, performance degrades with the increase in image complexity and the decrease in relative objects sizes. Alternatively, the use of pure detections from SoTA open-vocabulary detection frameworks (e.g., OwlViT [14]) is more compatible with cluttered images but is considered ill-suited for retrieval tasks; Running the detection model on each image for each query will require an enormous amount of computational resources, while precomputing and saving its internal dense embeddings will require orders of magnitude more storage. In this work, we visit the task of object-centric open-vocabulary image retrieval and present a simple approach to tackle it based on the complementary advantages of classification and detection open-vocabulary frameworks. The main challenge in this task is to combine the scalability of image retrieval pipelines, which can operate on huge datasets, with object-level processing of detection systems that commonly operate on a single image at a time. A second challenge is to preserve the good zero-shot accuracy obtained from web-scaled pretrained open vocabulary schemes, which holds significant importance for the task. We address these challenges by exploring the use of aggregated features in two steps. As a first step towards the solution, we explore the use of local features, extracting dense embeddings from an intermediate feature-map of CLIP vision encoder and manipulating them, keeping CLIP visual-language association as is (abbreviated as Dense-CLIP). Retrieval is performed by ranking according to the maximum similarity in each image. In the experiments, we show that using Dense-CLIP to represent images achieves on-par results to the OwlViT baseline on all populations and significantly better results on rare objects queries, with less than half the embeddings per image. With respect to CLIP, which uses one global feature to represent each image, results are significantly increased by up to 12 mAP points. Figure 1: **Overall retrieval framework**: Two-stage operation: Offline, we generate per-image global / aggregated embeddings via (a) CLIP or (b) Cluster-CLIP. Subsequently, an online stage enables on-demand retrieval using textual queries. Retrieval is performed by ranking text-image similarity using cosine distance in a shared embeddings space, with potential acceleration using Large Scale Index (details in Section 3.4). This approach accommodates any dual-encoder architecture. We present top retrieved images for the query ’bird’ using Cluster-CLIP features. Note that while Dense-CLIP retrieval results are impressive, its use alone is not enough since it enlarges the search space by a large extent, thus impairing potential scalability. To address scalability requirements, we explore the use of aggregated visual features by introducing Cluster-CLIP (Fig. 1). Cluster-CLIP aggregates Dense-CLIP's dense embeddings into sparse representatives with distinct local semantics (aggregation module in Fig. 1). We have examined a variety of aggregation methods that require no training to the task. By manipulating hyper-parameters of each aggregation method, we check various points on the accuracy-efficiency tradeoff (see Section 4). Interestingly, we found out that Cluster-CLIP shows, in some cases, higher retrieval rates than Dense-CLIP (and up to 15 mAP points increase with respect to CLIP) for the small number of 10-50 representatives per image. 
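To make the ranking rule above concrete, the following is a minimal sketch of scoring images by the maximum text-representative similarity, which covers the global (CLIP), dense (Dense-CLIP), and aggregated (Cluster-CLIP) cases uniformly. It assumes precomputed, L2-normalized embeddings; the function names are illustrative and not taken from the paper's code.

```python
import numpy as np

def rank_images(text_emb: np.ndarray, image_reps: list) -> np.ndarray:
    """Rank images for one text query by the maximum cosine similarity over
    each image's representatives: 1 vector for CLIP, K patch embeddings for
    Dense-CLIP, or N << K centroids for Cluster-CLIP. All embeddings are
    assumed L2-normalized, so a dot product equals cosine similarity."""
    scores = np.array([float(np.max(reps @ text_emb)) for reps in image_reps])
    return np.argsort(-scores)  # image indices, most relevant first
```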
The investigation is meaningful in that, as far as we know, it is the first work to explore and quantify the range between leveraging open-vocabulary features as a global image representation and as local dense embeddings. To summarize our contributions: 1. We visit the task of object-centric open-vocabulary image retrieval and introduce Dense-CLIP, which uses CLIP's local features while keeping its original zero-shot properties. 2. We present Cluster-CLIP, which enables scalability via a compact representation. 3. We show the effectiveness of our approaches by achieving significantly better results compared with a global feature (CLIP) on three datasets: COCO [], LVIS [], and nuImages [], increasing retrieval accuracy by up to 15 points. 4. We integrate Cluster-CLIP into a retrieval framework, showcasing its scalability and presenting empirical evidence of its efficacy through plausible results. ## 2 Related Literature Our work is closely related to research on instance retrieval frameworks, cross-modal retrieval, and open-vocabulary VL models. We briefly review related work in these domains. **Instance Retrieval Frameworks.** Well before the deep learning era, large-scale image-to-image retrieval research was dominated by mining applications for large-sized geographical landmarks, presenting methods that evolved from the use of local handcrafted features to their aggregation into compact descriptors. Compared to these works, our application focuses on the open-vocabulary text-to-image retrieval task, yet it draws inspiration from the use of local descriptors and aggregation methods, which is lacking in the current cross-modal retrieval literature. **Cross-Modal Retrieval.** Aligning vision and language has a long-standing history of research. ## 3 Method Section 3.1 provides preliminaries on CLIP, whose image encoder produces one embedding per image. Following, Section 3.2 presents Dense-CLIP1, which creates dense embeddings while keeping the same embedding space as CLIP. Finally, Section 3.3 presents Cluster-CLIP (which sets an aggregation module on top of Dense-CLIP) and describes several clustering instantiations to create a compact representation. The last part of the section (3.4) presents the whole retrieval framework, as illustrated in Figure 1. Footnote 1: we note that this is similar yet distinct from the denseCLIP defined in [C] and similarly used in [E], as they use a fine-tuning process and address different tasks. ### Preliminaries: CLIP CLIP is a vision-language model, pretrained on massive amounts of web-scale image-caption data via contrastive learning. It consists of two separate streams: a text-encoder, implemented as a transformer [C], and a vision-encoder, implemented either by a modified ResNet backbone [C] or by a ViT backbone [C]. In both cases, the last layer is a multi-head attention layer, which sums information from all the pixels in the input tensor weighted by their similarity to a query vector and projects it by an output linear layer (Fig. 3, left). The multi-head attention layer for the modified ResNet backbone is formulated as: \[y=out\left(concat\left[y^{1},y^{2},...,y^{M}\right]\right),\hskip 28.452756pty^{m }=softmax\left(\frac{q^{m}(\bar{x})\cdot k^{m}(X)^{T}}{\sqrt{C_{q}}}\right)v^{ m}(X) \tag{1}\] Here \(X\in R^{K\times C_{e}}\) is the input tensor and \(y\in R^{1\times C_{o}}\) is the output embedding (one global vector of \(C_{o}\) channels in the output representation of CLIP). \(\bar{x}=\frac{1}{K}\sum_{i=1}^{K}x_{i}\) represents the average of all spatial locations, \(\{x_{i}\}_{i=1}^{K}\), of the input tensor \(X\). 
\(q^{m}:R^{C_{e}}\to R^{C_{q}}\), \(k^{m}:R^{C_{e}}\to R^{C_{q}}\), \(v^{m}:R^{C_{e}}\to R^{C_{v}}\) and \(out:R^{MC_{v}}\to R^{C_{o}}\) are respectively the query, key, value and output linear layers, where \(m\in\{1\dots M\}\) is the index of a specific head in the multi-head architecture and \(\sqrt{C_{q}}\) is a normalization factor. The described attention layer, which sums information from all spatial locations of the input tensor, is optimized to capture the "average" semantics in the image as forced to match the embedding of the corresponding caption. A reasonable hypothesis, also raised in [C], is that the average semantic is built upon local semantics, already captured by the spatial locations at the input to the attention layer. ### Dense-CLIP Module Inspired by the use of dense embeddings in detection frameworks and based on the above hypothesis, we use the following reformulation of CLIP multi-head attention layer: \[y_{i}=out\left(concat\left[y_{i}^{1},y_{i}^{2},...,y_{i}^{M}\right]\right), \hskip 56.905512pty_{i}^{m}=v^{m}(x_{i}) \tag{2}\] Figure 3: **Simplified overview of image encoders**: Encoders receive image \(I\in R^{H\times W\times 3}\) as input. Dense-CLIP uses only the value (V) and output (O) linear layers of CLIP’s Multi-Head Attention module. Cluster-CLIP clusters Dense-CLIP output and transfers a single representative per cluster. Compared to the previous formulation, here the output embedding \(Y\in R^{K\times C_{o}}\) is a tensor, and \(y_{i}\) is the representation of its i'th spatial pixel: \(\{y_{i}\in R^{1\times C_{o}}\}_{i=1}^{K}\). This reformulation, implemented by removing the query and key linear layers and implementing the value and output linear layers as 1x1 convolutional layers (with the same weights), essentially creates dense patch embeddings with the same output space as CLIP (Fig. 3, middle). We use it as is, without fine-tuning. Empirical results at Section 4 reveal that Dense-CLIP achieves on-par retrieval accuracy as SoTA detection frameworks (i.e. OwlViT []) with fewer representatives. Given this finding, we next explore whether we can further reduce the number of representatives and to which extent. ### Cluster-CLIP To improve scalability, we introduce Cluster-CLIP, which produces aggregated embeddings by an additional aggregation module on top of Dense-CLIP embeddings. The aggregation module first clusters the dense features predicted by Dense-CLIP (\(Y=\{y_{i}\}_{i=1}^{K}\)) within \(N\) clusters, denoted as \(\{C_{j}\}_{j=1}^{N}\), where \(C_{j}\subset Y\) and \(N<<K\). Then, it transfers one representative embedding per cluster (the average of the embeddings within the cluster) for future retrieval use (see Figures 1 and 3 right). Note that the aggregation module is generally defined, and many clustering variants fit into it. We empirically examined a variety of clustering mechanisms and present here the most effective methods (while deferring the complete list and full implementation details to the Supplementary Materials): **K-Means (Cluster-CLIP, K.M.).** In this method, we perform K-Means clustering on top of each image's dense embeddings. Once clustered, the representatives of an image are the clusters' centroids. Examples of interest that demonstrate grouping by semantic similarity can be seen in Figure 4. 
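The contrast between Eq. (1) and Eq. (2), and the aggregation step applied on top of it, can be sketched as follows. This is a simplified single-head NumPy illustration: the weight matrices `Wq`, `Wk`, `Wv`, `Wo` stand in for CLIP's pretrained projections, the multi-head concatenation and the 1x1-convolution implementation are omitted, and the K-Means settings are only indicative.

```python
import numpy as np
from sklearn.cluster import KMeans

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def clip_attention_pool(X, Wq, Wk, Wv, Wo):
    """Eq. (1), single head for brevity: the query is built from the spatial
    average of X (K x C_e), and the result is one global image embedding."""
    q = X.mean(axis=0, keepdims=True) @ Wq            # (1, C_q)
    k, v = X @ Wk, X @ Wv                             # (K, C_q), (K, C_v)
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))    # (1, K) weights over pixels
    return (attn @ v) @ Wo                            # (1, C_o) global embedding

def dense_clip(X, Wv, Wo):
    """Eq. (2): drop the query/key path and keep only the value and output
    projections, giving one embedding per spatial location in CLIP's space."""
    Y = (X @ Wv) @ Wo                                 # (K, C_o) dense embeddings
    return Y / np.linalg.norm(Y, axis=-1, keepdims=True)

def cluster_clip_kmeans(Y, n_clusters=10, seed=0):
    """Cluster-CLIP (K.M.): aggregate the dense embeddings into a handful of
    representatives, one centroid per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(Y)
    reps = km.cluster_centers_                        # (n_clusters, C_o)
    return reps / np.linalg.norm(reps, axis=-1, keepdims=True)
```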
**Agglomerative Clustering (Cluster-CLIP, AG.).** This method applies Agglomerative clustering (hierarchical clustering using a bottom-up approach) where the average embedding from each cluster is declared as the cluster representative. Results are presented with connectivity constraints (AG-T) and without (AG-F). **Region Proposals (Cluster-CLIP, R.P.).** In this method, we use Segment Anything (SAM) [] to segment each image. Cluster-CLIP is then provided with both the image and the masks, with the masks serving as guidance for aggregating the dense embeddings. ### Overall Framework A schematic illustration of our overall framework is shown in Fig. 1. Our framework consists of two separate stages. The first stage receives a dataset of images as input and uses an image encoder from a vision-language model to create embeddings through sequential processing, followed by indexing to allow a quick approximate nearest neighbour (ANN) search. In our experiments, we considered embeddings based on global embedding strategy (i.e., from CLIP), local embedding strategies (OwlViT, Ours: Dense-CLIP), and aggregations (Ours: Cluster-CLIP). The second stage receives two inputs: the large scale index created in the first stage and a textual object query wrapped by textual prompt/s. Processing includes applying the corresponding text encoder (of the same vision-language model) to create a search vector, followed by an ANN search to get a final list of ranked images. Any dual-encoder vision-language model can be integrated into this scheme. Notice that whereas the first stage is computationally expensive (i.e., in terms of number of FLOPS), it is executed only once. The above partitioning into two separate flows essentially allows us to refer to the index as given, enabling on-line interactive retrieval at the second stage. This is a significant advantage compared to object detection pipelines. The performance of the on-line system is dominated by the tradeoff between the quality of representation, the number of representatives per image, and the parameters of the ANN search. In our experiments, we follow common practice and exclude the last factor, as it is not the scope of our work, and report quantitative results based on ranking all images by cosine similarity. ## 4 Experiments We evaluated our approach on the task of object-centric image retrieval on three publicly available datasets (COCO [], LVIS [] and nuImages[]), using datasets' semantic categories as queries. We compared the performance, in terms of retrieval accuracy vs. number of embeddings, to global and local features from existing methods. Our approach demonstrates increased performance with a small number of representatives per image, thereby allowing scalability with better retrieval rates. ### Datasets and Metrics **COCO 2017**[] is a very popular object detection and instance segmentation dataset of common objects in context, consisting of 120K training images and 5K validation images, fully annotated with 80 semantic categories. Categories are varied from large objects (e.g., car, elephant, tv, refrigerator) to much smaller objects (e.g., fork, book, bird, frisbee, donut). **LVIS**[] is a federated dataset, which includes 20K images in its validation set, intensively used on the long-tail object detection task []. LVIS is annotated with 1203 semantic categories, 337 of which are considered rare objects (less than 10 training examples). 
In our experiments, we separately report a retrieval metric for rare objects as an applicable approximation for the open-set retrieval task. **nuImages**[] is a public autonomous driving dataset, significantly different from the former two datasets in terms of resolution, context, RGB distribution and annotated classes. nuImages validation set includes 16.5K images, 1600x900 sized, annotated with 23 diverse semantic categories. In the experiments, we define 7 of them (those that appear in less than 0.3% of the total annotations number) as rare categories, reporting their accuracy separately. **Evaluation Protocol**. We evaluated our pipeline in two steps: a first processing step used to store the embeddings followed by a ranking step, which uses the datasets' categories names as queries and sorts all images in descending order of relevance per query, based on the maximal similarity over all patches. For each query, images are declared as _true positive_ if they include an object of that category. We report mean average precision (_mAP_), as widely reported in retrieval tasks [], [], [], [], and _mAP_@50 (defined in []), previously used in [], which considers top-k images only, as our main criteria for comparison. **Implementation Details**. We used the CLIP backbones from the CLIP library. Clustering (K-Means and hierarchical clustering) was performed via the sklearn library []. Region proposals were calculated by Segment Anything (SAM) library and tuned with different number point-prompts (64, 256 and 1024 points) which created, approximately 25, 50 and 100 representatives per image. All Dense-CLIP and clustering experiments were conducted on 1 Nvidia GPU machine. Images were resized to a square aspect ratio following hyperparameter search, and positional embeddings were interpolated to match the resolution of the image. For a fair comparison, we ensemble over the embeddings space of the 7 best CLIP prompts [[]] in all VL pre-trained modules. Full architectures descriptions, design choices, and hyperparameters are specified in the supplementary. **Baselines**. We compare our work with existing global embeddings from CLIP and local embeddings from OwlViT []. As described in Section 2, OwlViT is an open-vocabulary dual-branch VL detection model that achieves SoTA results, making it a strong baseline for our task. It is pre-trained on a web-scale dataset and then fine-tuned for open-vocabulary object detection, which can lead to a forgetting effect. For image representation, we directly used local embeddings from OwlViT's ViT [] vision-encoder output, excluding bounding boxes prediction. We additionally compare to dual-stream cross-modal retrieval methods preceding CLIP that are trained on distinct medium-size datasets using caption annotations. Specifically, we compare to PCME and VSRN trained on COCO using the split defined in [] and to VSRN trained on Flicker30K [] (VSRN is referred to as either 'VSRN, COCO-Caption' or 'VSRN, Flicker30K', depending on the dataset). ### Results **Dense-CLIP Results.** Tables 1 and 2 compare the retrieval results of Dense-CLIP to the baselines. The '#rep.' column shows the average number of embeddings for each image. Dense-CLIP achieves on-par results compared to OwlViT with fewer embeddings per image and leads to a significant and consistent improvement of up to 12 points in the retrieval rates over CLIP. 
Furthermore, Dense-CLIP surpasses VSRN and PCME on COCO and, to a greater extent, on LVIS, even when the latter two were fine-tuned on these datasets' images. Small object retrieval (_mAP@50\({}_{s-m}\)_ in Table 1) benefits from the use of local features, where Dense-CLIP achieves competitive results with respect to OwlViT. Rare object retrieval (right side of the tables) proved to be more difficult (lower accuracy). Interestingly, results show higher retrieval rates for CLIP over OwlViT, maybe because of forgetting effects due to finetuning. Dense-CLIP shows a significant improvement over all baselines, exploiting CLIP open vocabulary representations in a dense manner. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{COCO} & \multicolumn{3}{c}{LVIS} & \multicolumn{2}{c}{LVIS-fac} \\ Backbone & Res. & Rep. & mAP@50 & mAP & mAP@50\({}_{s-m}\) & mAP@50 & mAP & mAP@50 & mAP \\ \hline VSRN, Flick230K & 600 & 1 & 44.28 & 37.23 & 21.49 & 31.56 & 37.35 & 10.52 & 11.53 \\ VSRN, COCO-Caption & 600 & 1 & 70.29 & 52.33 & 30.99 & 42.09 & 45.60 & 20.46 & 21.36 \\ PCME & 224 & 7 & 69.69 & 57.98 & 29.98 & 47.38 & 51.06 & 27.85 & 28.58 \\ \hline CLIP, EN50 & 224 & 1 & 56.70 & 50.80 & 21.91 & 52.69 & 55.94 & 39.84 & 40.74 \\ CLIP, EN500+ & 288 & 1 & 64.58 & 56.39 & 29.39 & 57.85 & 60.35 & 43.37 & 44.28 \\ CLIP, EN500+ & 448 & 1 & 70.62 & 61.03 & 36.60 & 62.60 & 64.71 & 53.14 & 53.92 \\ \hline OnWiT, ViT-3/2 & 768 & 576 & 77.31 & 70.37 & 52.22 & 66.09 & 67.86 & 42.60 & 42.95 \\ OutVNT, ViT-3/16 & 768 & 2304 & 74.96 & 65.91 & 47.02 & 61.28 & 63.39 & 34.75 & 35.82 \\ OnWiT, ViT-1/4 & 840 & 3600 & 76.61 & 71.06 & 88.95 & 66.15 & 67.86 & 40.62 & 41.10 \\ \hline Dense-CLIP, EN50 & 512 & 256 & 58.83 (+-1.51) & 52.35 (+-1.35) & 32.58 (+-1.05) & 55.41 (+-1.25) & 57.46 (+-1.35) & 38.80 (+-1.39) & 59.86 (+-1.05) \\ Dense-CLIP, EN500+ & 512 & 256 & 69.61 (+-1.62) & 62.10 (+-1.57) & 41.18 (+-1.17) & 63.85 (+-1.05) & 65.83 (+-1.05) & 55.32 (+-1.17) & 56.40 (+-1.12) \\ Dense-CLIP, EN500+ & 448 & 196 & 77.28 (+-1.54) & 69.08 (+-1.51) & 51.47 (+-1.42) & 70.36 (+-1.06) & 71.80 (+-1.05) & 59.77 (+-1.43) & 58.72 (+-1.48) \\ \hline CLIP + Dense-CLIP, EN50 & 512 & 257 & 67.27 (+-1.05) & 59.33 (+-1.53) & 34.99 (+-1.60) & 61.32 (+-1.48) & 63.02 (+-1.27) & 46.96 (+-1.22) & 47.82 (+-1.20) \\ CLIP + Dense-CLIP, EN500+ & 512 & 257 & 66.12 (+-1.54) & 58.06 (+-1.42) & 28.90 (+-1.61) & 61.22 (+-1.77) & 49.99 (+-1.02) & 50.35 (+-1.49) \\ CLIP + Dense-CLIP, EN500+ & 448 & 197 & 77.08 (+-1.68) & 69.45 (+-1.48) & 48.73 (+-1.21) & 27.24 (+-1.46) & 73.36 (+-1.68) & 61.68 (+-1.56) & 62.33 (+-1.44) \\ \hline Chinese-CLIP, K.M, EN500+ & 448 & 10 & 76.77 (+-1.46) & 63.66 (+-1.26) & 46.69 (+-1.60) & 62.72 (+-1.42) & 64.11 (+-1.66) & 51.66 (+-1.44) & 52.45 (+-1.45) \\ Chinese-CLIP, K.M, EN500+ & 448 & 10 & 76.72 (+-1.18) & 64.00 (+-1.29) & 45.08 (+-1.63) & 63.70 (+-1.65) & 65.16 (+-1.05) & 49.53 (+-1.44) & 50.35 (+-1.50) \\ Chinese-CLIP, K.P, EN500+ & 448 & 50 & 73.75 (+-1.72) & 69.84 (+-1.51) & 51.95 (+-1.63) & 71.63 (+-1.62) & 72.60 (+-1.69) & 52.95 (+-1.50) & 59.07 (+-1.55) \\ Chinese-CLIP, R.P., EN500+ & 448 & 91 & 79.24 (+-1.62) & 69.43 (+-1.40) & 50.51 (+-1.53) & 70.74 (+-1.66) & 71.92 (+-1.21) & 58.28 (+-1.54) & 58.88 (+-1.49) \\ \hline CLIP + Cluster-CLIP, K.M, EN500+ & 448 & 11 & 75.51 (+-1.58) & 64.60 (+-1.57) & 41.61 (+-1.66) & 66.99 (+-1.59) & 67.96 (+-1.55) & 57.02 (+-1.58) & 57.73 (+-1.51) \\ CLIP + Cluster-CLIP, A.G.T, EN500+ & 448 & 11 & 75.74 (+-1.48) & 65.19 (+-1.43) & 43.22 (+-1.64) & 66.76 
(+-1.56) & 56.66 (+-1.56) & 58.64 (+-1.57) & 57.55 (+-1.50) \\ CLIP + Cluster-CLIP, A.G.F, EN500+ & 54 & 77.06 (+-1.40) & 69.03 (+-1.60) & 48.05 (+-1.40) & 71.79 (+-1.49) & 73.02 (+-1.43) & 69.92 (+-1.72) & 61.61 (+-1.54) \\ CLIP + Cluster-CLIP, R.P., EN500+ & 448 & 92 & 77.75 (+-1.13) & 68.59 (+-1.58) & 46.75 (+-1.15) & 71.26 (+-1.66) & 72.59 (+-1.28) & 60.98 (+-1.76) & 61.21 (+-1.32) \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results on COCO2017 and LVIS val sets. First and second best scores are marked in red and blue. Using Dense-CLIP improves retrieval accuracy but increases the number of features. Using Cluster-CLIP compensates both, enabling scaling. **Cluster-CLIP Results.** We empirically studied the use of aggregated features using several clustering mechanisms, described in section 3.3. Fig. 5 displays Cluster-CLIP's results for different clustering methods and numbers of clusters, while promising working points are presented in the sixth parts of Tab. 1 and 2 (see results for RN50/x4 backbones in the supplementary). To reduce variance, all results are averaged across three experiments. Notably, K.M. and AG-F (blue and lime) outperform Dense-CLIP when employing a modest count of 10 representatives per image on COCO. With a larger representatives budget of 50, aggregating the dense features according to R.P. (star markers) is made an alternative clustering mechanism. On LVIS, which has almost twice the number of instances per image compared to COCO, AG-F with 50 representatives achieves the highest retrieval rates. Interestingly, while AG-F and K.M. share certain clustering characteristics, it appears that AG-F bottom-up clustering is preferred when fine-grained understanding is required (as in LVIS) and on small objects. On nuImages, AG-T and R.P., which incorporate localization considerations into the clustering process, outperform other mechanisms with merely 5-50 representatives, implying localization is a strong cue for clustering in high-resolution images. As mentioned, Cluster-CLIP outperforms Dense-CLIP in several cases. This can be explained by noticing that applying clustering to the output of Dense-CLIP introduces two conflicting phenomena. On the one hand, clustering reduces the number of representatives per image, thereby decreasing the presence of distractors. On the other hand, averaging across multiple patches may lead to the averaging out of small-sized objects or fine-grained details, potentially hindering performance. We consider the above results as key contributions, highlighting the potential of aggregated features to offer both efficiency and performance. **Mixed Architectures.** We found that in many cases, Dense-CLIP and Cluster-CLIP embeddings are in fact complementary to the global embedding produced by CLIP, and those can be effectively combined to boost their performance (referred to as CLIP + Dense/Cluster-CLIP). Results are reported in the fifth and seventh part of Tables 1 and 2. Notably, incorporating CLIP embeddings enhances Dense-CLIP results by up to 3.8 mAP@50 points. Moreover, integrating CLIP embeddings into Cluster-CLIP boosts results by up to 7.6 mAP@50 points while reinforcing its efficacy across tasks. \begin{table} \begin{tabular}{|l c c|c c|c c|} \hline \hline & & \multicolumn{3}{c|}{unImages} & \multicolumn{2}{c|}{unImages - rare} \\ \cline{2-7} Box-stone & res. & Rep. 
& mAP@50 & mAP & mAP@50 & mAP \\ \hline \hline PCME & 224 & 7 & 256.7 & 15.08 & 0.2 & 0.75 \\ \hline CLIP, RN50 & 224 & 1 & 27.20 & 17.61 & 0.84 & 1.58 \\ CLIP, RN50x4 & 288 & 1 & 28.62 & 19.07 & 2.70 & 3.39 \\ CLIP, RN50x64 & 448 & 1 & 31.93 & 20.77 & 3.16 & 4.28 \\ \hline \hline OwlViT, ViT-B/32 & 768 & 576 & 36.93 & 27.19 & 1.88 & 2.48 \\ OwlViT, ViT-B/16 & 768 & 2304 & 34.55 & 26.65 & 2.25 & 2.58 \\ OwlViT, ViT-L/14 & 840 & 3600 & 30.30 & 25.10 & 2.12 & 3.43 \\ \hline \hline Dense-CLIP, RN50 & 768 & 576 & 316.4(-0.44) & 24.3(-0.2) & 33.33 (-0.24) & 4.95 (-1.27) \\ Dense-CLIP, RN50x4 & 768 & 576 & 32.04(-0.42) & 26.77(-0.20) & 2.80(-0.11) & 4.88 (-1.46) \\ Dense-CLIP, RN50x64 & 768 & 576 & 34.08(-0.23) & 30.74(-0.60) & 1.01(-0.15) & 10.88 (-0.44) \\ \hline \hline CLIP + Dense-CLIP, RN50 & 768 & 577 & 32.60(-0.42) & 25.68(-0.56) & 3.42(-0.25) & 5.13 (-1.59) \\ CLIP + Dense-CLIP, RN50x4 & 768 & 577 & 33.59(-0.41) & 25.94(-0.46) & 6.32(-0.46) & 7.70 (-1.31) \\ CLIP + Dense-CLIP, RN50x64 & 768 & 577 & 37.95(-0.41) & 31.96(-0.46) & 10.23(-0.47) & 11.24 (-0.46) \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results on the nuImages val set. ### Qualitative Results with Overall Framework We build a simple image retrieval framework to study the behavior of Cluster-CLIP aggregated embeddings (Figure 1). For demonstration purposes, we use an index consisting of 120K COCO training set images. Specifically, we used Cluster-CLIP with an RN50x64 backbone and K-Means with 10 centroids, then utilized FAISS [] to map the embeddings into a large-scale index. Interactive querying is enabled using text or image queries (created by the CLIP text or image encoder). Qualitative examples of interest are demonstrated in Figures 1, 2 and 6. Please refer to the Supplementary Materials for additional qualitative examples. ## 5 Conclusions We examine the possible use of local features for the object-centric image retrieval task and introduce Dense-CLIP, which extracts dense embeddings, manipulated such that CLIP's vision-language association is kept. We compare our method with the use of global features extracted from CLIP with the same backbone and report a significant increase in retrieval rates of up to 12 mAP points. Compared to SoTA detection pipelines, our results are competitive, with a significant increase in retrieval rates for rare categories. Practically, utilizing local features within a retrieval framework significantly enlarges the search space, impairing scalability. To address this, we introduce Cluster-CLIP, which represents images using features aggregated from the local embeddings. Our approach achieves improved retrieval rates with fewer representatives, practically enabling scaling. From a broader perspective, the potential use of compact representations to efficiently carry useful image information is interesting by itself and might contribute to future work on a wide range of applications (e.g., detection, segmentation, image generation). 
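As a rough illustration of the two-stage framework used for the qualitative study, the sketch below builds a flat FAISS inner-product index over per-image representatives and answers a text query by keeping each image's best-scoring representative. A flat index is used for simplicity instead of an approximate large-scale index, and the owner-array bookkeeping and `probe` parameter are our own assumptions rather than details from the paper.

```python
import numpy as np
import faiss  # https://github.com/facebookresearch/faiss

def build_index(per_image_reps):
    """Offline stage: stack every image's representatives (e.g., 10 K-Means
    centroids per image in the demo) into one inner-product index, and keep
    track of which image each row came from."""
    owner = np.concatenate([np.full(len(r), i) for i, r in enumerate(per_image_reps)])
    xb = np.ascontiguousarray(np.concatenate(per_image_reps).astype("float32"))
    index = faiss.IndexFlatIP(xb.shape[1])  # cosine similarity on normalized vectors
    index.add(xb)
    return index, owner

def query(index, owner, text_emb, top_k=5, probe=200):
    """Online stage: search with a text embedding, keep the best-scoring
    representative per image, and return the top-ranked images."""
    sims, ids = index.search(text_emb.astype("float32").reshape(1, -1), probe)
    best = {}
    for s, i in zip(sims[0], ids[0]):
        img = int(owner[i])
        best[img] = max(best.get(img, -1.0), float(s))
    return sorted(best, key=best.get, reverse=True)[:top_k]
```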
Figure 5: Cluster-CLIP accuracy-efficiency scatter plots: retrieval accuracy (mAP@50) vs. average numbers of embeddings per image for COCO, LVIS and nulmages datasets. Top left is better. Cluster-CLIP achieves high accuracy for the small number of 10-50 embeddings per image. Figure 6: Qualitative Examples: top-1 retrieved images with Dense-CLIP heat-maps for rare textual and visual queries. Index created from 120K images with K.M. clustered embeddings. ## S1 Clustering Methods In our work, we introduce Cluster-CLIP, a method that represents images using a compact representation by adding an aggregation module on top of Dense-CLIP dense emebeddings (as detailed in Section 3.3 of the main article). The aggregation module first clusters the dense embeddings and then transfers a single representative per cluster. We empirically evaluated various clustering methods within the aggregation module, which are presented in this section, with their results reported in Section S2. **K-Means (K.M.).** In this method, we perform K-Means clustering on top of each image's dense embeddings. Once clustered, the representatives of an image are the clusters' centroids. We hypothesize that in such a way, each group of semantically similar objects will be represented by their common semantics. An example of such behavior can be seen in Figure 4 in the main article, where several wine glasses are represented by a single cluster, which also scores highest among the different clusters when compared to the embeddings of the phrase "Wine Glass". **Agglomerative Clustering (AG).** This method applies Agglomerative clustering, which performs hierarchical clustering using a bottom-up approach: each observation starts in its own cluster, and clusters are successively merged together via a linkage criterion. Different linkage criteria were tested, with and without connectivity constraints; Specifically, Ward linkage, which minimizes the sum of squared differences within all clusters, and Average linkage, which minimizes the average of the distances between all observations of pairs of clusters. Average linkage was tested both with Euclidean and Cosine metrics. We found that using Ward linkage works better, and so we present its results in Section S2 with connectivity constraints, marked AG-T, and without, marked AG-F. **Region Proposals (R.P.).** In this method, we use Segment Anything (SAM) [53] to segment each image. Then, Cluster-CLIP is provided with both the image and the masks, with the masks serving as guidance for clustering the dense embeddings. Formally, given an image of dimension \(H\times W\), and the matching Dense-CLIP dense embeddings of dimensions \(\frac{H}{32}\times\frac{W}{32}\times C_{o}\), where \(C_{o}\) is the number of channels at CLIP's output. For each binary mask \(m\in R^{H\times W}\) predicted by SAM, we first use max pooling to down-sample the mask to the dense embeddings resolution. Then, we aggregate dense embeddings which coincide with the downsampled mask. Once clusters are formed, each cluster is represented with the mean of its embeddings. To adjust the final number of masks per image, we conducted experiments with various quantities of candidate point-prompts to SAM. Specifically, we ran with 64, 256, and 1024 candidates, which resulted in different numbers of masks per image. **Soft Aggregation via Attention (AT).** The clustering algorithms in our work, and K-Means specifically, aggregate each cluster's embeddings by taking the mean over them. 
Therefore, each cluster representative includes information aggregated only in its cluster (Hard Aggregation). In this method, we suggest weighted aggregation of non-local information, i.e., from all of the image embeddings. This idea is implemented by adapting the attention mechanism described in Section 3.1 in two steps: 1) Clusters are computed on the inputs to the attention layer. 2) The means of the clusters' embeddings (centroids for K-Means) are used as queries in the attention mechanism. Using the notations from eq. 1 in the main article, this can be formulated as: \[y_{j}=out\left(concat\left[y_{j}^{1},y_{j}^{2},...,y_{j}^{M}\right]\right)\] \[y_{j}^{m}=softmax\left(\frac{q^{m}(c_{j})\cdot k^{m}(X)^{T}}{\sqrt {C_{q}}}\right)v^{m}(X)\] (S1) Here \(c_{j}\) and \(y_{j}\) are the mean and soft aggregated representation of the j'th cluster, respectively: \(\{c_{j}\in R^{1\times C_{e}}\}_{i=1}^{N}\), \(\{y_{j}\in R^{1\times C_{o}}\}_{i=1}^{N}\), where \(N\) is the number of clusters. This reformulation inherits information from the clustering mechanism (here K-Means) and uses CLIP pretrained query, key, and value weights to essentially create aggregated embeddings with the same output space as CLIP, keeping its zero-shot performance. **Adaptive K-Means (A-K.M.).** As different images can contain different numbers of categories, applying K-Means with an adaptive number of clusters per image as a function of the image properties might also be beneficial. A-K.M. uses the Bayesian information criterion (BIC) [5], which is a popular criterion for model selection, in an attempt to choose the best number of clusters per image. The BIC score of a probabilistic model \(Q\) is defined as \[BIC(Q)=\kappa\ln(n)-2\ln(\hat{L})\] (S2) Here, \(\kappa\) is the number of estimated parameters in \(Q\), \(n\) is the number of samples observed, and \(\hat{L}\) is the model's maximized likelihood function for the observed samples. A lower BIC value is commonly considered better, as it balances the model's complexity (in terms of the number of parameters) and the model fit. To that end, the term \(\kappa\ln(n)\) functions as a penalty against utilizing models with a larger number of parameters in order to inflate the likelihood of the model. To apply a BIC score for K-Means, the method models K-Means with \(k\) clusters as a Gaussian Mixture Model (GMM) with \(k\) components and spherical covariance. Each GMM component represents a cluster by setting the component's mean to the cluster's centroid and estimating the covariance by the cluster's embeddings. Using the above definition of BIC score for K-Means, the following algorithm is used to select the best number clusters. Let \(k_{1},k_{2},...k_{n}\) be a collection of choices for the number of clusters selected apriori, such that \(k_{i}<k_{i+1}\), and mark by \(BIC_{k_{i}}\) the BIC score computed over clusters produced by K-Means with \(k_{i}\) clusters. If \(\exists k_{i}:BIC_{k_{i}}<BIC_{k_{i+1}}\), then \(k_{i}\) is selected as the number of clusters for the image; otherwise \(k_{n}\) is selected. **Anchors (AN).** In this method, the dense representations are clustered according to a spatial division. The resized image is divided into equal-sized squares in multiple resolutions, and the matching embeddings at each resolution are clustered together. The embeddings in each cluster are averaged to create a single representative per cluster. 
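One way to instantiate the A-K.M. selection rule described above is to score each candidate number of clusters with the BIC of a spherical Gaussian mixture initialized at the K-Means centroids; sklearn's `GaussianMixture.bic` implements the formula in Eq. (S2). This is a sketch under that assumption and may differ in detail from the exact per-cluster covariance estimation used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def bic_for_kmeans(Y, k, seed=0):
    """Approximate the BIC of a K-Means solution by fitting a spherical GMM
    whose component means start at the K-Means centroids."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Y)
    gmm = GaussianMixture(n_components=k, covariance_type="spherical",
                          means_init=km.cluster_centers_, random_state=seed).fit(Y)
    return gmm.bic(Y)

def adaptive_num_clusters(Y, candidates=(5, 10, 15, 20)):
    """A-K.M. rule: return the first k whose BIC is lower than the next
    candidate's; otherwise fall back to the largest candidate."""
    bics = [bic_for_kmeans(Y, k) for k in candidates]
    for k, b, b_next in zip(candidates, bics, bics[1:]):
        if b < b_next:
            return k
    return candidates[-1]
```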
## S2 Cluster-CLIP Results This section extends Cluster-CLIP results from Section 4 of the main article by evaluating the additional clustering methods described in Section S1, and extending the results for other ResNet backbones. Figure S1 presents the results in terms of retrieval accuracy (mAP@50) vs. average number of embeddings per image using RN50x64, RN50x4, and RN50 backbones (first, second, and third rows) on COCO [40], LVIS [41], and nuImages [42] datasets (left, middle and right columns). K-Means and Agglomerative Clustering are presented by blue and green solid lines, while other clustering methods are depicted by scatter plots. CLIP is denoted by red circles. From Figure S1, we can see the effectiveness Cluster-CLIP top-performing clustering methods mentioned in Section 3.3 of the main article. Specifically, K-Means (K.M.), Agglomerative Clustering (AG-T/F), and Region Proposals (R.P) outperform CLIP when using the same backbone architecture across all datasets with merely 5-50 representatives per image, showcasing Cluster-CLIP effectiveness across backbones. When considering the Adaptive K-Means method (purple pentagon), we see that adaptively selecting the number of clusters often scores close to the interpolated score of K-Means with no significant gain. For the Anchors method (gray diamond), it is generally advantageous to partition the embedding space into a greater number of scales or utilize finer divisions, thereby increasing the number of clusters. An exception to this rule arises when using the RN50x64 backbone on nuImages. Compared to other methods, using anchors shows lesser or on-par results, with the only exception being nuImages using RN50x4. Using Attention for soft aggregation of embeddings (orange triangle) is beneficial for RN50 on all datasets and number of clusters; however, it greatly impairs performance for RN50x64. ## S3 Cluster-OwlViT The Cluster-CLIP architecture is compatible with any dual-encoder VL open-vocabulary model, as elaborated in Section 2 of the main article. This section demonstrates this compatibility by implementing the Cluster-CLIP architecture with OwlViT backbones, referred to as Cluster-OwlViT. To achieve this, we apply the aggregation module outlined in Section 3.3 of the main article on top of OwlViT's dense embeddings. Figure S2 presents the results of Cluster-OwlViT (represented by gray lines), compared to Cluster-CLIP using different backbones and K-Means clustering. When equipped with the ViT-B/32 backbone, Cluster-OwlViT maintains high retrieval rates while reducing the number of clusters by 30% across all three datasets. With the larger ViT-L/14 backbone, Cluster-OwlViT remains competitive while managing to reduce the number of clusters from 3600 to 1000. As Cluster-CLIP outperforms Cluster-OwlViT with fewer representatives, we focus our work on it. ## S4 Qualitative Examples For demonstration purposes, and as discussed in Section 4.3 of the main article, we build two image retrieval framework indexes consisting of 120K COCO training set images. The first uses Cluster-CLIP-K.M. with RN50x64 backbone and 10 clusters, while the second uses CLIP with RN50x64 backbone. In both cases, FAISS [] is utilized to map the embeddings (aggregated embeddings for Cluster-CLIP, global embedding for CLIP) into a large-scale index. Qualitative retrieval examples of interest are presented in Figures S3 and S4. Figure S3 shows the top retrieval results for 'Helicopter', 'Wall clock', and 'Bulldozer' text queries. 
Using Cluster-CLIP allows the retrieval of cluttered images with relatively small instances of the requested category. Figure S4 shows top retrieval results for 'Water Tower', 'Globe', 'Passport', 'Earplugs', 'Lemon' and 'Chickpea' text queries, in which Cluster-CLIP produces desired results whereas CLIP prefers larger instances from false categories (sometimes semantically similar). Both figures emphasize the importance of using non-global features for the object-centric image-retrieval task. ## S5 Hyperparameters **Dense models.** We used the CLIP backbones (RN50, RN50x4, and RN50x64) from the CLIP [6] library and OwlViT framework [1] from the huggingface transformers library [6] with default hyperparameters. Images were resized to a square aspect ratio (for details of the different resolutions, refer to Tables 1 and 2 in the main article), and positional embeddings were interpolated to match the image resolution. For a fair comparison, we ensemble over the embeddings space of the 7 best CLIP prompts [1] in all baselines and experiments that use CLIP or OwlViT text encoders. **Clustering methods.** We provide a detailed list of the different hyperparameters used in each of the clustering methods. * We used sklearn library [6] with the following configurations to run the K-Means clustering: _init_=random, _n_init_=10, _max_iter_=300, _tol_=0.0001, _algorithm_=lloyd. * We used Segment Anything library [5], using the _vit_h_ architecture along with its pre-trained weights, with different number of point-prompts (64, 256, 1024), an IoU threshold of 0.88, stability score threshold of 0.88, stability score offset of 0.1, box NMS threshold of 0.7, and no minimum mask region area nor running separately on crops of the image. * We used sklearn library with the following configurations to run Agglomerative clustering: _linkage_=Ward, _affinity_=Euclidean. Additionally, for Cluster-CLIP, AG-T, we set _connectivity_ to be a grid. * The method attempts to select best number of clusters out of 5, 10, 15, and 20 clusters. * We evaluated the following different divisions of the embeddings space: (1) \(1\times 1\), \(2\times 2\) (2) \(2\times 2\), \(3\times 3\) (3) \(3\times 3\), \(4\times 4\), \(5\times 5\) (4) \(2\times 2\), \(3\times 3\), \(5\times 5\), \(7\times 7\) (5) \(2\times 2\), \(3\times 3\), \(4\times 4\), \(5\times 5\), \(7\times 7\) resulting in 5, 13, 50, 87 and 103 clusters respectively.
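For reference, the clustering configurations listed above could be instantiated with sklearn roughly as follows. The helper names and `grid_hw` (the spatial size of the Dense-CLIP feature map, used for the AG-T connectivity grid) are our own additions, and `algorithm="lloyd"` assumes sklearn >= 1.1; this is a sketch of the stated hyperparameters, not the authors' code.

```python
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

def make_kmeans(n_clusters):
    # K-Means configuration listed above: random init, 10 restarts,
    # 300 iterations, tol=1e-4, Lloyd's algorithm.
    return KMeans(n_clusters=n_clusters, init="random", n_init=10,
                  max_iter=300, tol=1e-4, algorithm="lloyd")

def make_agglomerative(n_clusters, grid_hw=None):
    # Ward linkage (Euclidean distance); AG-T additionally constrains merges
    # to a spatial grid over the feature map, AG-F passes no grid.
    connectivity = grid_to_graph(*grid_hw) if grid_hw else None
    return AgglomerativeClustering(n_clusters=n_clusters, linkage="ward",
                                   connectivity=connectivity)
```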
The task of open-vocabulary object-centric image retrieval is to retrieve images containing a specified object of interest, delineated by an open-set text query. As working with large-scale image datasets has become standard, solving this task efficiently has gained significant practical importance. Applications include targeted performance analysis of retrieved images using ad-hoc queries and mining hard examples during training. Recent advances in contrastive-based open-vocabulary systems have enabled large-scale open-vocabulary image retrieval. However, these approaches use a single global embedding per image, which limits the retrieval of images containing relatively small object instances. Alternatively, incorporating local embeddings from detection pipelines faces scalability challenges, making it unsuitable for retrieval from large databases.
2303.18120
UKP-SQuARE v3: A Platform for Multi-Agent QA Research
The continuous development of Question Answering (QA) datasets has drawn the research community's attention toward multi-domain models. A popular approach is to use multi-dataset models, which are models trained on multiple datasets to learn their regularities and prevent overfitting to a single dataset. However, with the proliferation of QA models in online repositories such as GitHub or Hugging Face, an alternative is becoming viable. Recent works have demonstrated that combining expert agents can yield large performance gains over multi-dataset models. To ease research in multi-agent models, we extend UKP-SQuARE, an online platform for QA research, to support three families of multi-agent systems: i) agent selection, ii) early-fusion of agents, and iii) late-fusion of agents. We conduct experiments to evaluate their inference speed and discuss the performance vs. speed trade-off compared to multi-dataset models. UKP-SQuARE is open-source and publicly available at http://square.ukp-lab.de.
Haritz Puerto, Tim Baumgärtner, Rachneet Sachdeva, Haishuo Fang, Hao Zhang, Sewin Tariverdian, Kexin Wang, Iryna Gurevych
2023-03-31T15:07:36
http://arxiv.org/abs/2303.18120v2
# UKP-SQuARE v3: A Platform for Multi-Agent QA Research ###### Abstract The continuous development of Question Answering (QA) datasets has drawn the research community's attention toward multi-domain models. A popular approach is to use _multi-dataset models_, which are models trained on multiple datasets to learn their regularities and prevent overfitting to a single dataset. However, with the proliferation of QA models in online repositories such as GitHub or Hugging Face, an alternative is becoming viable. Recent works have demonstrated that combining expert agents can yield large performance gains over multi-dataset models. To ease research in _multi-agent models_, we extend UKP-SQuARE, an online platform for QA research, to support three families of multi-agent systems: i) agent selection, ii) early-fusion of agents, and iii) late-fusion of agents. We conduct experiments to evaluate their inference speed and discuss the performance vs. speed trade-off compared to multi-dataset models. UKP-SQuARE is open-source1 and publicly available at square.ukp-lab.de. Footnote 1: [https://github.com/UKP-SQuARE/square-core](https://github.com/UKP-SQuARE/square-core) ## 1 Introduction The current high-speed development of Artificial Intelligence yields thousands of datasets and trained models in repositories such as GitHub and Hugging Face Rogers et al. (2023). These models are creating new research and application opportunities, such as high-performing Question Answering (QA) skills in chatbots Burtsev et al. (2018); Miller et al. (2017). Comparing and analyzing these models usually requires learning libraries, writing code to run the models, and unifying their formats to compare them, which makes this process time-consuming and not scalable. UKP-SQuARE Baumgartner et al. (2022); Sachdeva et al. (2022) addresses this challenge, providing the first online platform that offers an ecosystem for QA research enabling reproducibility, analysis, and comparison of QA models through a standardized interface and from multiple angles (i.e., general behavior, explainability, adversarial attacks, and behavioral tests). The large variety of tasks and domains in QA datasets is pushing the research community towards creating models that generalize across domains Fisch et al. (2019); Talmor and Berant (2019); Khashabi et al. (2020). Currently, there are two main approaches to achieve this: i) multi-dataset models and ii) multi-agent models. While the former trains a model on multiple datasets Talmor and Berant (2019); Khashabi et al. (2020), the latter combines multiple expert agents Geigle et al. (2021); Friedman et al. (2021); Puerto et al. (2023). Concurrently, large language models (LLM) such as GPT-3 Brown et al. (2020) are emerging as new powerful systems for multi-task and multi-domain NLP applications. These LLM models are complementary to the focus of our work, multi-agent systems. While LLMs show impressive performance, they are extremely expensive to run and can usually only be accessed through APIs or deployed with great hardware resources. On the other hand, multi-agent systems offer a solution to create multi-domain models reusing available pretrained models that can be run on more modest hardware, which is an important requirement, e.g. where data cannot be sent to third parties. 
Multi-agent models are particularly promising due to the thousands of models readily available on online model hubs and their current exponential growth.2 This growth in the number of models is increasing the interest of the community in multi-agent model research Wang et al. (2020); Matena and Raffel (2021); Geigle et al. (2021); Friedman et al. (2021); Puerto et al. (2023); Wortsman et al. (2022); Jin et al. (2023). However, model hubs such as Hugging Face only allow inference on individual models, disregarding the possibility of combining them to make systems modular and multi-domain. This is a severe limitation, as Puerto et al. (2023) showed that combining several QA models can yield performance gains of over 10 percentage points with respect to multi-dataset models (i.e., a single model trained on multiple datasets). Therefore, we extend UKP-SQuARE to democratize access to and research on multi-agent models. In particular, we add support for the three main methods to combine agents3: i) Skill selection, ii) early-fusion of Skills, and iii) late-fusion of Skills. The first consists of identifying the Skill with the highest likelihood of giving the correct answer and then routing the input to that Skill. We deploy TWEAC (Transformer With Extendable QA Agent Classifiers; Geigle et al., 2021) as an example of this method. The second one combines multiple models' weights to obtain a new model with the distributional knowledge of the source weights. We deploy MADE (Multi-Adapter Dataset Experts; Friedman et al., 2021) as an example of this method. Lastly, the late-fusion of models consists of running multiple models to get their predictions and then combining them. This creates a system that can combine heterogeneous expert agents without reducing their performance in each domain. We provide MetaQA (Puerto et al., 2023) as an example of this method. Footnote 3: An agent is referred to as _Skill_ in UKP-SQuARE. UKP-SQuARE facilitates research on multi-agent QA systems by offering a platform equipped with dozens of agents and three methods to combine them. This upgrade holds paramount significance as the number of QA models created annually is increasing exponentially. UKP-SQuARE enables users to run, compare, and evaluate the strengths and weaknesses of multi-agent models, and to compare them with multi-dataset models. ## 2 Related Work The best-known types of multi-agent systems are Mixture of Experts (MoE) and ensemble methods. MoE consists of a gating mechanism that routes the input to a set of agents (Jacobs et al., 1991), while ensemble methods aggregate the outputs of multiple experts through a voting mechanism (Breiman, 1996; Freund and Schapire, 1996). Much work has been done to simplify the training of these multi-agent systems (Pedregosa et al., 2011; Chen and Guestrin, 2016; He et al., 2021; Hwang et al., 2022). However, as far as we know, there are no online platforms to run and compare them. The most similar works to ours are the online model hubs such as Hugging Face's Model Hub4 and AdapterHub (Pfeiffer et al., 2020). They both offer a large number of models to download. In addition, Hugging Face's Model Hub also allows running models through Spaces.5 Figure 1: Overview of different multi-agent system architectures deployed in UKP-SQuARE. TWEAC (left) selects an agent (a _Skill_ in UKP-SQuARE) based on which dataset it predicts the question is closest to and on which dataset a Skill was trained. MADE (center) fuses the weights of adapters trained on different datasets. MetaQA (right) predicts the final answer from a set of answers and their confidence scores. We illustrate the architectures with three different Skills. However, in practice, more Skills are used. 
However, this requires implementing the Space, which can be nontrivial for complex scenarios such as ours (i.e., deploying and comparing multi-agent systems). UKP-SQuARE removes technical barriers and allows researchers to deploy multi-agent systems with a user-friendly interface. Transformer Vaswani et al. (2017) models using adapters Houlsby et al. (2019) can also be seen as a type of multi-agent system. For this type of architecture, AdapterHub Pfeiffer et al. (2020) is a well-established library. In addition to simplifying the training of adapter-based models, it allows composing adapters (i.e., agents) with methods such as AdapterFusion Pfeiffer et al. (2021) or stacking Pfeiffer et al. (2020). However, this library is not an online platform for analyzing models such as UKP-SQuARE; its focus is to offer tools to create models based on adapters. ## 3 UKP-SQuARE UKP-SQuARE Baumgartner et al. (2022); Sachdeva et al. (2022) is the first online platform that offers an ecosystem for QA research. Its goal is to provide a common place to share, run, compare, and analyze QA models from multiple angles, such as explainability, adversarial attacks, behavioral tests, and I/O behaviors. The platform follows a flexible and scalable microservice architecture containing five main services: * **Datastores**: Provide access to collections of unstructured text such as Wikipedia and Knowledge Graphs such as ConceptNet Speer and Havasi (2012). * **Models**: Enable the dynamic deployment and inference of any Transformer model that implements a Hugging Face pipeline Wolf et al. (2020), including models that use the adapter-transformers Pfeiffer et al. (2020) or sentence-transformers Reimers and Gurevych (2019) framework. * **Skills**: The central entity of UKP-SQuARE. They specify a configurable QA pipeline (e.g., extractive, multiple-choice, and open-domain QA) leveraging Datastores and Models. Users interact with Skills since the platform's goal is to remove technical barriers and focus on QA research (i.e., the QA pipeline). These Skills are equivalent to agents in the multi-agent system literature. * **Explainability**: Provides saliency maps, behavioral tests, and graph visualizations6 that explain the outputs of a Skill. Footnote 6: For graph-based models. * **Adversarial Attacks**: Create adversarially modified versions of the input to expose vulnerabilities of the Skills. All these services allow UKP-SQuARE to offer an ecosystem of tools to analyze Skills through a user-friendly interface without writing any code or complex configurations. UKP-SQuARE helps researchers identify the models' strengths and weaknesses to push the boundaries of QA research. ### Target Users and Scenarios This new update of UKP-SQuARE targets researchers working on multi-agent and multi-dataset systems. These users can use the platform as a showcase of their systems. The dozens of Skills already available in UKP-SQuARE simplify the deployment of multi-agent systems since users can employ our user-friendly interface to select the Skills they want to combine using the three families of methods we deploy. Furthermore, researchers can deploy their new multi-skill methods through a pull request in our repository. 
The platform can also be used to analyze and compare multiple multi-agent systems from the efficiency (i.e., inference time) and effectiveness (i.e., performance) points of view. Furthermore, it can also be used to compare multi-agent with multi-dataset systems. Lastly, UKP-SQuARE can also be used for teaching QA. The ecosystem of QA tools can be used to help students understand explainability, adversarial attacks, multi-dataset, and multi-agent models through interactive explanations with examples. Our platform can also be used to design homework where students train QA models and analyze them with the aforementioned QA tools. ## 4 Multi-Agent Systems Multi-agent systems are a type of multi-domain system that aggregates multiple expert agents from different domains to create a unified system, i.e., their focus is on the agents (_Skills_ in UKP-SQuARE). On the other hand, multi-dataset systems aim to learn a unified model from multiple data distributions to create a single, general agent. For example, UnifiedQA Khashabi et al. (2020) is a QA model trained on multiple datasets using a generative model to overcome format boundaries. However, Raffel et al. (2020) show that a model trained on multiple datasets may underperform the same architecture trained on a single dataset, i.e., multi-dataset models may underfit certain distributions. Based on this observation, Puerto et al. (2023) show that multi-agent models can avoid this limitation while being data-efficient to train, and can even outperform multi-dataset models by large margins in both in-domain and out-of-domain scenarios. This is possible because, instead of using a very general architecture to solve multiple tasks, a multi-agent model uses a set of expert agents with specific architectures designed to solve those tasks (i.e., SOTA agents) and establishes a collaboration between these agents. However, this performance comes at a cost. The inference time is higher because the system needs to run more than one model (at least one expert agent and one answer aggregator). Therefore, we extend UKP-SQuARE to add support for the three main approaches for multi-agent systems, which we refer to as Meta-Skills on the platform: i) Skill Selection (§4.1), ii) Early-Fusion of Skills (§4.2), and iii) Late-Fusion of Skills (§4.3). An overview of the different architectures is illustrated in Figure 1. ### Skill Selection Skill selection is the simplest method of the three. It aims to identify the Skill with the highest likelihood of returning the correct answer to the input question and then route the input to that Skill. More formally, it defines a function \(f:Q\to S\) that maps any question \(Q\) to an available Skill \(S\). Geigle et al. (2021) follow this approach and propose TWEAC (Transformer with Extendable QA Agent Classifiers), a Transformer model with a classification head for each Skill that maps questions to Skills. However, instead of predicting Skills, they predict _datasets_, i.e., they identify the dataset from which the input question comes. Then, they select a Skill trained on that dataset. Using this method, they report a Skill prediction accuracy higher than 90% across ten different QA types. We train TWEAC on 16 datasets (shown in Appendix 5) with an accuracy of 79% and deploy it in UKP-SQuARE. The cause of the accuracy difference is the selection of the datasets: while the authors experiment on widely different QA tasks such as SQuAD, CommunityQA, and Weather Report, we use the most popular QA datasets, including those of the 2019 MRQA Shared Task (Fisch et al., 2019), which are much more similar to each other, so the routing task becomes more challenging because the types of questions are harder to distinguish. 
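A minimal sketch of such a routing function is shown below, using a single multi-class sequence-classification head over datasets for brevity (TWEAC itself uses one classification head per agent). The checkpoint name and the dataset-to-Skill mapping are hypothetical placeholders, not models or identifiers shipped with UKP-SQuARE.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical identifiers for illustration only.
ROUTER_CHECKPOINT = "my-org/tweac-router-16-datasets"
DATASET_TO_SKILL = {"squad": "extractive-squad-skill"}  # one entry per dataset

tokenizer = AutoTokenizer.from_pretrained(ROUTER_CHECKPOINT)
router = AutoModelForSequenceClassification.from_pretrained(ROUTER_CHECKPOINT)

def select_skill(question: str) -> str:
    """Map a question to a dataset label and route it to a Skill trained on
    that dataset (the f: Q -> S function described above)."""
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = router(**inputs).logits          # one logit per dataset
    dataset = router.config.id2label[int(logits.argmax(dim=-1))]
    return DATASET_TO_SKILL.get(dataset, "default-skill")
```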
While the authors experiment on widely different QA tasks such as SQuAD, CommunityQA, and Weather Report, we use the most popular QA datasets, including the 2019 MRQA Shared Task (Fisch et al., 2019), which are more similar and thus, the task becomes more challenging since it is more difficult to distinguish the type of questions. We deploy two TWEAC Skills on UKP-SQuARE: one for extractive QA and another for multiple-choice. Figure 2 shows an extractive QA TWEAC that identifies the question as _SQuAD-like_ and routes it to two Skills trained on SQuAD. ### Early-Fusion of Skills This method combines the weights of multiple models to create a new model that generalizes across all the input models. Friedman et al. (2021) propose to train adapter weights for individual datasets while sharing the weights of a common Transformer that is also trained with those adapters. Later, in a second training phase, they freeze the Transformer weights and fine-tune each adapter on its corresponding dataset. The intuition behind this is that the shared parameters encode the regularities of the QA task while the adapters model the sub-distributions. This training schema yields a model that performs robustly on new domains by averaging its adapter weights. Following this work, we extend UKP-SQuARE to allow the creation of Skills that average the weights of a series of adapters. To do this, on the Skill creation page (Figure 3), users are prompted to select whether they wish to combine adapters and, if affirmative, which ones to average. Figure 2: TWEAC predicts that the question is _SQuAD-like_ and routes it to Skills trained on this dataset. ### Late-Fusion of Skills Lastly, Puerto et al. (2023) propose MetaQA, a system that combines 18 heterogeneous expert agents across multiple formats. This system yields significant gains over multi-dataset models because some tasks require particular architectures to solve them, such as DROP Dua et al. (2019), which requires numerical reasoning. Thus, while a _one-size-fits-all_ architecture cannot learn such a wide variety of distributions, a multi-agent system that combines predictions can use expert agents to solve these datasets and yield a higher-performing model in general. Figure 4 shows how MetaQA answers a question from the _DuoRC_ dataset but selects an out-of-domain (OOD) agent instead of the in-domain agent to answer, which gives a wrong answer. Thanks to the interface provided by UKPSQuARE, it is easier to analyze the collaboration between the Skills established by MetaQA. One limitation of this type of system is its need to run multiple models, which makes it more expensive than the previous two approaches. To alleviate this limitation, we run the expert agents in parallel. In this way, the inference time of MetaQA remains close to the other multi-agent systems, as shown in Table 1. ### Comparison of Multi-Skill Models In this section, we compare the inference time of the deployed multi-skill systems (i.e., MetaQA, TWEAC, and MADE) and UnifiedQA as a representative of the multi-dataset models. We extract 20 random questions from the six datasets from the MRQA 2019 Shared Task Fisch et al. (2019) yielding a total of 120 questions and measure the time needed by each Skill to solve them. We repeat this process with five different random seeds and show the means and standard deviations in Table 1. Each model has 8 CPUs7 assigned to it and runs behind an asynchronous API. Footnote 7: AMD EPYC 7543 with 2.8GHz. As shown in Table 1, MetaQA is the slowest model. 
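The parallel execution of expert agents mentioned above can be sketched with plain `asyncio`: all experts are queried concurrently, and a simple confidence-based selection stands in for MetaQA's learned answer aggregator. Agent names, simulated latencies, and scores below are invented for illustration.

```python
# Sketch of late fusion with concurrently queried expert agents (hypothetical
# agents and scores; MetaQA's real aggregator is a learned model, not an argmax).
import asyncio
import random
import time


async def expert_agent(name: str, question: str) -> dict:
    await asyncio.sleep(random.uniform(0.2, 0.5))   # simulated model latency
    return {"agent": name, "answer": f"answer from {name}", "confidence": random.random()}


async def late_fusion(question: str, agents: list[str]) -> dict:
    # Run all expert agents in parallel instead of sequentially.
    predictions = await asyncio.gather(*(expert_agent(a, question) for a in agents))
    # Stand-in aggregation: pick the most confident prediction.
    return max(predictions, key=lambda p: p["confidence"])


if __name__ == "__main__":
    start = time.perf_counter()
    result = asyncio.run(late_fusion("Who wrote Hamlet?", ["span-extraction", "multiple-choice", "abstractive"]))
    print(result, f"(total latency ~{time.perf_counter() - start:.2f}s, close to the slowest single agent)")
```

Because the agents run concurrently, the total latency is roughly that of the slowest expert rather than the sum of all experts, which is the behaviour the timing comparison below reflects.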
This is expected since it needs to run all the expert agents to get the predictions. However, its inference time is remarkably close to both MADE and TWEAC. TWEAC is surprisingly as fast as MADE, considering that TWEAC has to run at least two models (router and expert agent), while MADE only runs one. We conjecture that MADE is not faster because the adapter layers increase the depth of the transformer stack. UnifiedQA is the fastest model, as expected, since it is a multi-dataset model and hence, does not need to combine multiple agents. Beyond inference, training time and cost are also interesting factors to consider. TWEAC and MetaQA are considered cheap to train assuming the existence of pretrained agents on online model hubs such as the Hugging Face Model Hub.8 Hence, the part that they train is a small router or answer aggregator. On the other hand, MADE and UnifiedQA require training a neural network from scratch in the task of question answering, which is much more challenging than simply routing questions or aggregating answers. Therefore, MADE and UnifiedQA need more training data than TWEAC and MetaQA, Figure 4: UKP-SQuARE simplifies the analysis of the collaboration between the agents. The question comes from the DuoRC dataset. However, while the in-domain agent gives a wrong answer (not shown), MetaQA selects an out-of-domain agent that gives a correct answer. Only an excerpt of the context is shown. Figure 3: UKP-SQuARE allows combining adapters by simply writing the list of adapters. making them more expensive. Table 1 shows the trade-off between performance, training, and inference efficiency. Although MetaQA is the slowest Skill to run, its inference time is very close to the other models' thanks to the parallel inference of the expert agents offered by UKP-SQuARE (cf. Figure 1). Furthermore, it is cheap to train, has almost the highest performance, and is compatible with any QA format. This makes it interesting for scenarios where model updating, performance, and flexibility are vital. TWEAC is also cheap and as flexible as MetaQA. Although, it is significantly worse than MetaQA on extractive QA datasets. This makes TWEAC ideal in the same scenarios as MetaQA but where running the expert agents in parallel is difficult (i.e., when MetaQA cannot be used). MADE has the highest performance and is as fast as TWEAC. However, it is more expensive to train than MetaQA and TWEAC, and it is not as flexible as MetaQA and TWEAC since it cannot be used for multiple formats simultaneously. Therefore, it should be used when inference, performance, and simple deployment are vital, while the model is not expected to need re-training (i.e., updates) often and is not required to be compatible with multiple QA formats at the same time. Lastly, UnifiedQA is compatible with any text-based QA format but has lower (although competitive) results. Although it is the fastest to run, it is more expensive to train than TWEAC and MetaQA. Thus, its ideal use case is a scenario where a simple deployment is needed while being flexible to process any text-based QA format. Therefore, this small study suggests that in scenarios where new domains are introduced often, router-based systems such as MetaQA might be more suitable, whereas, in scenarios where inference speed or simple deployment are needed, MADE and UnifiedQA might be more appropriate. ## 5 Conclusions and Discussions In this work, we have extended UKP-SQuARE to support multi-agent models. 
In particular, we deployed a routing system, TWEAC Geigle et al. (2021), a method to combine adapter weights, MADE Friedman et al. (2021), and a model that combines the prediction of multiple Skills, MetaQA Puerto et al. (2023). We have conducted experiments on these three models and UnifiedQA Khashabi et al. (2020), a multi-dataset system, to analyze the trade-off between the performance, efficiency, and flexibility of these systems. We showed that in scenarios where new domains or expertise are often needed, MetaQA provides the best trade-off since its performance is close to the best model, it is compatible with any QA format, cheap to train, and its inference runtime is close to TWEAC and MADE using the parallel engine provided by UKP-SQuARE. However, when simple deployment is needed or the model is not expected to be updated, MADE and UnifiedQA might be more appropriate. This update of UKP-SQuARE is of utmost importance due to the current speed of development of QA models that creates thousands of models per year. Our platform eases the deployment, running, comparison, and analysis of QA Skills. With this update, we also facilitated the aggregation of these Skills into Multi-Skills simplifying research on multi-agent systems. We leave as future work the comparison of these modular systems with prompting-based QA in large language models Brown et al. (2020); Zhong et al. (2022). ### Limitations UKP-SQuARE v3 does not aim to provide all existing multi-skill systems off the shelf. Instead, we deploy three different approaches and encourage the community to share, deploy and compare their multi-skill systems. Using the modular Skill system of UKP-SQuARE and the reference implementations, users can reconfigure the existing multi-skill pipelines or implement and deploy their own through a streamlined pull request.9 Footnote 9: For details, we refer to the documentation at [https://square.ukp-lab.de/docs/home/components/skills](https://square.ukp-lab.de/docs/home/components/skills) Another limitation is that the multi-skill systems deployed in this paper have been shown to work effectively with no more than 20 Skills. Hence, the effectiveness of multi-skill systems remains unknown for a larger number of Skills. We hope \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{\(\mathbf{F_{1}}\)} & **Inference** & \multirow{2}{*}{**Training**} \\ & & **Time [s]** & \\ \hline TWEAC & 77.65 & \(5.38\pm 0.06\) & cheap \\ MADE & 82.20 & \(5.45\pm 0.18\) & expensive \\ MetaQA & 81.13 & \(7.08\pm 0.16\) & cheap \\ UnifiedQA & 77.30 & \(2.15\pm 0.02\) & expensive \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of inference time on UKP-SQuARE averaged over 600 predictions. Performance from their respective papers. that UKP-SQuARE v3 can help shed light on this topic. Lastly, since multi-skill systems combine several models, it is feasible that the resulting system can inherit biases and unfair behaviors. Although the Skills we used are not intended to exhibit any bias or unfairness, users should use them at their own discretion. ## Ethics Statement Intended UseThe intended use of UKP-SQuARE v3 is deploying, running, comparing, analyzing, and combining Skills. Our platform provides dozens of Skills readily available to be combined using the implemented multi-agent systems or new systems to be created by the community. This simplifies the analysis of these systems and thus fosters multi-agent QA research. 
**Potential Misuse**: A malicious user could train multiple Skills with biased and unfair behaviors, such as a QA system that returns harmful answers, and combine them with the deployed methods available in UKP-SQuARE. UKP-SQuARE does not provide any Skill with such an intended behavior, but the community is free to upload any model to our platform. Therefore, we encourage the community not to publicly upload such models unless there is a clear research intention accompanied by a discussion of the ethics of such research, and in this case, to make the Skills private so that nobody can use them in an unintended way. We are not liable for erroneous, false, biased, offensive, or any other unintended behavior of the Skills. Users should use them at their own discretion. **Environmental Impact**: The use of UKP-SQuARE can reduce the computational cost of reproducing prior research, since it spares the community from re-training models that have already been trained. ## Acknowledgements We thank Haris Jabbar, Martin Tutek, and Imbesat Hassan Rizvi for their insightful comments on a previous draft of this paper. This work has been funded by the German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29-1), the QASciInf project (GU 798/18-3), and by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
The continuous development of question answering (QA) datasets has drawn the research community's attention toward models that can handle this diversity. A common approach is to use multi-dataset models, i.e., models trained on multiple datasets so that they learn their regularities and avoid overfitting to a single dataset. However, the growing number of QA models in online repositories such as GitHub and Hugging Face has made an alternative viable: recent work has shown that combining expert agents can yield large performance gains over multi-dataset models. To facilitate research on multi-agent models, we extend UKP-SQuARE, an online platform for QA research, to support three types of multi-agent systems: agent selection, early fusion of agents, and late fusion of agents. The inference speed of these systems
2301.01208
Mask Matching Transformer for Few-Shot Segmentation
In this paper, we aim to tackle the challenging few-shot segmentation task from a new perspective. Typical methods follow the paradigm to firstly learn prototypical features from support images and then match query features in pixel-level to obtain segmentation results. However, to obtain satisfactory segments, such a paradigm needs to couple the learning of the matching operations with heavy segmentation modules, limiting the flexibility of design and increasing the learning complexity. To alleviate this issue, we propose Mask Matching Transformer (MM-Former), a new paradigm for the few-shot segmentation task. Specifically, MM-Former first uses a class-agnostic segmenter to decompose the query image into multiple segment proposals. Then, a simple matching mechanism is applied to merge the related segment proposals into the final mask guided by the support images. The advantages of our MM-Former are two-fold. First, the MM-Former follows the paradigm of decompose first and then blend, allowing our method to benefit from the advanced potential objects segmenter to produce high-quality mask proposals for query images. Second, the mission of prototypical features is relaxed to learn coefficients to fuse correct ones within a proposal pool, making the MM-Former be well generalized to complex scenarios or cases. We conduct extensive experiments on the popular COCO-$20^i$ and Pascal-$5^i$ benchmarks. Competitive results well demonstrate the effectiveness and the generalization ability of our MM-Former.
Siyu Jiao, Gengwei Zhang, Shant Navasardyan, Ling Chen, Yao Zhao, Yunchao Wei, Humphrey Shi
2022-12-05T11:00:32
http://arxiv.org/abs/2301.01208v1
# Mask Matching Transformer for ###### Abstract In this paper, we aim to tackle the challenging few-shot segmentation task from a new perspective. Typical methods follow the paradigm to firstly learn prototypical features from support images and then match query features in pixel-level to obtain segmentation results. However, to obtain satisfactory segments, such a paradigm needs to couple the learning of the matching operations with heavy segmentation modules, limiting the flexibility of design and increasing the learning complexity. To alleviate this issue, we propose Mask Matching Transformer (MM-Former), a new paradigm for the few-shot segmentation task. Specifically, MM-Former first uses a class-agnostic segmenter to decompose the query image into multiple segment proposals. Then, a simple matching mechanism is applied to merge the related segment proposals into the final mask guided by the support images. The advantages of our MM-Former are two-fold. First, the MM-Former follows the paradigm of _decompose first and then blend_, allowing our method to benefit from the advanced potential objects segmenter to produce high-quality mask proposals for query images. Second, the mission of prototypical features is relaxed to learn coefficients to fuse correct ones within a proposal pool, making the MM-Former be well generalized to complex scenarios or cases. We conduct extensive experiments on the popular COCO-\(20^{i}\) and Pascal-\(5^{i}\) benchmarks. Competitive results well demonstrate the effectiveness and the generalization ability of our MM-Former. Code is available at github.com/Picsart-AI-Research/Mask-Matching-Transformer. ## 1 Introduction Semantic segmentation, one of the fundamental tasks in computer vision, has achieved a grand success [3; 5; 37; 12] in recent years with the advantages of deep learning techniques [11] and large-scale annotated datasets [14; 6]. However, the presence of data samples naturally abides by a long-tailed distribution where the overwhelming majority of categories have very few samples. Therefore, few-shot segmentation [21; 27; 35] is introduced to segment objects of the tail categories only according to a minimal number of labels. Mainstream few-shot segmentation approaches typically follow the learning-to-learning fashion, where a network is trained with episodic training to segment objects conditioned on a handful of labeled samples. The fundamental idea behind it is how to effectively use the information provided by the labeled samples (called _support_) to segment the test (referred to as the _query_) image. Early works [27; 36; 33; 24; 30] achieve this by first extracting one or few semantic-level prototypes from features of support images, and then pixels in the query feature map are matched (activated) by the support prototypes to obtain the segmentation results. We refer to this kind of method as "few-to-many" matching paradigm since the number of support prototypes is typically much less (_e.g.,_ two to three orders of magnitude less) than the number of pixels in the query feature map. While acceptable results were obtained, this few-to-many matching paradigm turns out to be restricted in segmentation performance due to the information loss in extracting prototypes. Therefore recent advances [34; 26; 19] proceed to a "many-to-many" matching fashion. Concretely, pixel-level relationships are modeled between the support and query feature maps either by attention machanism [34; 26] or 4D convolutions [19]. 
Benefiting from these advanced techniques, the many-to-many matching approaches exhibit excellent performance over the few-to-many matching counterparts. **Overall**, the aforementioned approaches construct modules combining the matching operation with segmentation modules and optimizing them jointly. For the sake of improving the segmentation quality, techniques of context modeling module such as atrous spatial pooling pyramid [3], self-attention [39] or multi-scale feature fusion [24] are integrated with the matching operations [33] and then are simultaneously learned via the episodic training [30; 34; 24]. However, this joint learning fashion not only vastly increases the learning complexity, but also makes it hard to distinguish the effect of matching modules in few-shot segmentation. Therefore, in this work, we steer toward a different perspective for few-shot segmentation: decoupling the learning of segmentation and matching modules as illustrated in Fig. 1. Rather than being matched with the pixel-level query features, the support samples are directly matched with a few class-agnostic query mask proposals, forming a "few-to-few" matching paradigm. By performing matching in the mask level, several advantages are provided: 1) Such a few-to-few matching paradigm releases matching from the segmentation module and focuses on the matching problem itself. 2) It reduces the training complexity, thus a simple few-to-few matching is enough for solving the few-shot segmentation problem. 3) While previous works turned out to be overfitting when using high-level features for matching and predicting the segmentation mask [24; 27; 34], the learning of our matching and segmentation module would not affect each other and hence avoids this daunting problem. To achieve this few-to-few matching paradigm, we introduce a two-stage framework, named Mask Matching Transformer (dubbed as MM-Former), that generates mask proposals for the query image in the first stage and then matches the support samples with the mask proposals in the second stage. Recently, MaskFormer [4; 5] formulates semantic segmentation as a mask classification problem, which obtains semantic segmentation results by combining the predictions of binary masks and the corresponding classification scores, where the masks and the scores are both obtained by using a transformer decoder. It provides the flexibility for segmenting an image of high quality without knowing the categories of the objects in advance. Inspired by this, we also use the same transformer decoder as in [4] to predict a set of class-agnostic masks based on the query image only. To further determine the target objects indicated by the support annotation, a simple Mask Matching Module is constructed. Given the features extracted from both support and query samples, the Mask Matching Module obtains prototypes from both support and query features through masked global average pooling [27]. Further, a matching operation is applied to match the supports with all query proposals and produces a set of coefficients for each query candidate. The final segmentation result for a given Figure 1: Comparison between the existing few-shot segmentation framework and our MM-Former. (a) Previous works first match support prototype (s) [33] or pixel-level support feature [34; 32] with the pixel-level query feature, then pass the matched feature through the segmentation module to obtain the query prediction. 
(b) Our MM-Former decouples the segmentation and the matching problems for the few-shot segmentation task. The query segmentation result is acquired with a simple Mask Matching module, which operates on the support samples and a set of query mask proposals. support(s)-query pair is acquired by combining the mask proposals according to the coefficients. In addition, to resolve the problem of representation misalignment caused by the differences between query and support images, a Feature Alignment Block is integrated into the Mask-Matching Module. Concretely, since the features for all images are extracted with a fixed network, they may not be aligned well in the feature space, especially for testing images with novel classes. A Self Alignment block and a Cross Alignment block are introduced to consist of the Feature Alignment Block to align the query and support samples in the feature space so that the matching operation can be safely applied to the extracted features. We evaluate our MM-Former on two commonly used few-shot segmentation benchmarks: COCO-\(20^{i}\) and Pascal-\(5^{i}\). Our model stands out from previous works on the challenging COCO-\(20^{i}\) dataset. While our MM-Former only performs comparably with previous state-of-the-art methods due to the limited scale of the Pascal dataset, our MM-Former exhibits a strong transferable ability across different datasets (_i.e._, COCO-\(20^{i}\)\(\rightarrow\) Pascal-\(5^{i}\)), owing to our superior mask-matching design. **In a nutshell**, our contributions can be summarized as follows: (1) We put forward a new perspective for few-shot segmentation, which decouples the learning of matching and segmentation modules, allowing more flexibility and lower training complexity. (2) We introduce a simple two-stage framework named MM-Former that efficiently matches the support samples with a set of query mask proposals to obtain segmentation results. (3) Extensive evaluations on COCO-\(20^{i}\) and Pascal-\(5^{i}\) demonstrate the potential of the method to be a robust baseline in the few-to-few matching paradigm. ## 2 Methodology **Problem Setting**: Few-shot segmentation aims at training a segmentation model that can segment novel objects with very few labeled samples. Specifically, given two image sets \(D_{train}\) and \(D_{test}\) with category set \(C_{train}\) and \(C_{test}\) respectively, where \(C_{train}\) and \(C_{test}\) are disjoint in terms of object categories (\(C_{train}\cap C_{test}=\emptyset\)). The model trained on \(D_{train}\) is directly applied to test on \(D_{test}\). The episodic paradigm was adopted in [24; 38] to train and evaluate few-shot models. A \(k\)-shot episode \(\{\{I_{s}\}^{k},I_{q}\}\) is composed of \(k\) support images \(I_{s}\) and a query image \(I_{q}\), all \(\{I_{s}\}^{k}\) and \(I_{q}\) contain objects from the same category. We estimate the number of episodes for training and testing set are \(N_{train}\) and \(N_{test}\), the training set and test set can be represented by \(D_{train}=\{\{I_{s}\}^{k},I_{q}\}^{N_{train}}\) and \(D_{test}=\{\{I_{s}\}^{k},I_{q}\}^{N_{test}}\). Note that both support masks \(M_{s}\) and query masks \(M_{q}\) are available for training, and only support masks \(M_{s}\) are accessible during testing. **Overview**: The proposed architecture can be divided into three parts, _i.e._, Backbone Network, Potential Objects Segmenter and Mask Matching Module. Specifically, the Backbone Network is used to extract features only, whose parameters are fixed during the training. 
The Potential Objects Segmenter (dubbed as POS) is applied to produce multiple mask proposals that may contain potential object regions within the given image. The Mask Matching Module (dubbed as MM module) takes support cues as guidance to choose the most likely ones from the mask proposals. The selected masks are finally merged into the target output. The complete diagram of the architecture is shown in Fig. 2. Each of the modules will be explained in detail in the following subsections. Figure 2: **An overview of the proposed MM-Former. The support and query images are passed through an weight-shared encoder. The support and query features from the Encoder donate as \(\textbf{F}_{S}\) and \(\textbf{F}_{Q}\), respectively. We first train a Potential Objects Segmenter, which can predict all the proposal objects in one image (color in blue). Then we use \(F_{S}\) in stage two as guidance information to collaboratively mine the final segmentation by Mask Matching Module (color in grey).** ### Feature Extraction Module We adopt a ResNet [11] to extract features for input images. Unlike previous few shot segmentation methods [24; 38; 31] using Atrous Convolution to replace strides in convolutions for keeping larger resolutions, we keep the original structure of ResNet following [5]. We use the outputs from the last three layers in the following modules and named them as \(F_{S}\) and \(F_{Q}\) for \(I_{S}\) and \(I_{Q}\), where \(F=\left\{F^{i}\right\},i\in[3,4,5]\) and \(F\) is the features of \(I_{S}\) or \(I_{Q}\), \(i\) is the layer index of backbone. We further extract the output of query \(layer2\) to obtain the segmentation mask (named as \(F_{Q}^{2}\)). \(F^{2},F^{3}\), \(F^{4}\) and \(F^{5}\) have strides of \(\{4,8,16,32\}\) with respect to the input image. ### Potential Objects Segmenter The POS aims to segment all the objects in one image. Follow Mask2Former [4], a standard transformer decoder [25] is used to compute cross attention between \(F_{S}\) and N learnable embeddings. The transformer decoder consists of 3 consecutive transformer layers, each of which takes the corresponding \(F^{i}\) as an input. Each layer in the transformer decoder can be formulated as \[E^{l+1}=\mathrm{TLayer}^{\mathrm{I}}(E^{l},F^{i}), \tag{1}\] where \(E^{l}\) and \(E^{l+1}\) represent the N learnable embeddings before and after applying the transformer layer respectively. \(\mathrm{TLayer}\) denote a transformer decoder layer. We simplify the representation of transformer decoder, whereas we conduct the same pipeline proposed by Mask2Former. The output of the transformer decoder is multiplied with \(F_{Q}^{2}\) to get N mask proposals \(M\in\mathbb{R}^{N\times H/4\times W/4}\). Note that \(\mathrm{Sigmoid}\) is applied to normalize all mask proposals to \([0,1]\). Besides, our POS abandons the classifier of Mask2Former since we don't need to classify the mask proposals. ### Mask Matching (MM) Module In MM, our goal is to use support cues as guidance to match the relevant masks. The building blocks of MM are a Feature Alignment block and a Learnable Matching block. We first apply Feature Alignment block to align \(F_{Q}\) and \(F_{S}\) from the pixel level. Then, the Learnable Matching block matches appropriate query masks correspondence to the support images. **Feature Alignment Block**: We achieve the alignment using two types of building blocks: a Self-Alignment block and a Cross-Alignment block. The complete architecture is shown in Fig. 3. 
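Before detailing these two blocks, the short PyTorch sketch below illustrates how the POS described above turns the N decoder embeddings into mask proposals: a dot product with the stride-4 query feature \(F_{Q}^{2}\) followed by a \(\mathrm{Sigmoid}\). The tensor shapes and the random stand-ins for the decoder output are illustrative assumptions; the actual POS follows the Mask2Former pipeline.

```python
# Minimal sketch of proposal generation in the POS (shapes are illustrative).
import torch

B, N, C = 2, 100, 256          # batch, number of proposals, embedding dim
H4, W4 = 120, 120              # spatial size of the stride-4 query feature F_Q^2

# Stand-ins for the transformer-decoder output E (N learnable embeddings per
# image) and for the stride-4 query feature map F_Q^2.
E = torch.randn(B, N, C)
F_q2 = torch.randn(B, C, H4, W4)

# Each proposal mask is the sigmoid of the dot product between its embedding
# and every spatial location of F_Q^2, giving M in [0, 1]^{N x H/4 x W/4}.
mask_logits = torch.einsum("bnc,bchw->bnhw", E, F_q2)
mask_proposals = torch.sigmoid(mask_logits)
print(mask_proposals.shape)    # torch.Size([2, 100, 120, 120])
```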
We adopt the Self-Alignment block to align features in each channel. Inspired by Polarized Self-Attention [16], we design a non-parametric block to normalize representations. Specifically, the input feature map \(F\in\mathbb{R}^{c\times hw}\) is first averaged along the channel dimension to obtain \(F_{avg}\in\mathbb{R}^{1\times hw}\). \(F_{avg}\) is regarded as an anchor to obtain the attention weight \(A\in\mathbb{R}^{c\times 1}\) by matrix multiplication, \(A=FF_{avg}^{T}\), which represents the weights of the different channels. \(A\) is used to activate the feature by position-wise multiplication (_i.e._, \(A\) is expanded along the spatial dimension and multiplied point-wise with the feature). In this way, the input feature is adjusted across the channel dimension and outliers are expected to be smoothed. Note that the Self-Alignment block processes \(F_{S}\) and \(F_{Q}\) individually and does not involve interactions across images. The Cross-Alignment block is introduced to mitigate the divergence across different images. \(F_{S}\) and \(F_{Q}\) are fed into two weight-shared transformer decoders in parallel. Taking the \(i^{th}\) layer as an example (Fig. 3), it can be formulated as \[\begin{split}\hat{F}_{Q}^{i}&=\mathrm{MLP}(\mathrm{MHAtten}(F_{Q}^{i},F_{S}^{i},F_{S}^{i})),\\ \hat{F}_{S}^{i}&=\mathrm{MLP}(\mathrm{MHAtten}(F_{S}^{i},F_{Q}^{i},F_{Q}^{i})),\end{split} \tag{2}\] where \(F_{Q}^{i}\) and \(F_{S}^{i}\) represent the \(i^{th}\)-layer features in \(F_{Q}\) and \(F_{S}\), and \(\hat{F}^{i}\) represents the aligned features (\(\hat{F}=\{\hat{F}^{i}\},i\in[3,4,5]\)). \(\mathrm{MLP}\) denotes a multilayer perceptron and \(\mathrm{MHAtten}\) represents multi-head attention [25], \[\mathrm{MHAtten}(q,k,v)=\mathrm{softmax}\left(\frac{qk^{T}}{\sqrt{d_{k}}}\right)v, \tag{3}\] where \(q,k,v\) are three matrices and \(d_{k}\) is the dimension of the \(q\) and \(k\) elements. \(k\) and \(v\) are downsampled to \(\frac{1}{32}\) of the original resolution to save computation. To distinguish the phrase "Query Image" in few-shot segmentation from the "matrix Q" in the Transformer, we refer to the "Query Image" as **Q** and to the "matrix Q" as **q**. We simplify the representation of \(\mathrm{MHAtten}\) and omit some shortcut connections and layer normalizations of the transformer decoder, but we follow the same pipeline as the standard transformer. Figure 3: Details of the **Feature Alignment Block**. Note that _Average_ denotes the channel-wise average. **Learnable Matching Block**: After acquiring \(\hat{F}_{S}\) and \(\hat{F}_{Q}\), we first apply masked global average pooling (GAP) [38; 27; 24; 31] on each \(\hat{F}^{i}\) and concatenate the results to generate prototypes for the support ground truth and the N query mask proposals, denoted as \(P_{S}^{gt}\) and \(\{P_{Q}^{n}\}\), with \(P_{S}^{gt},P_{Q}^{n}\in\mathbb{R}^{3d}\) and \(n\in[1,2...N]\). Here \(d\) is the dimension of \(\hat{F}^{i}\), and \(3d\) results from concatenating \(\{\hat{F}^{3},\hat{F}^{4},\hat{F}^{5}\}\). We use the cosine distance to measure the similarity between the prototypes \(P_{S}^{gt}\) and \(P_{Q}^{n}\). In some cases, the mask corresponding to the prototype with the highest similarity may not be complete (_e.g._, the support image contains only part of the object), so we further use an MLP layer to merge the corresponding masks. 
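A compact PyTorch sketch of the two ingredients just described, the non-parametric Self-Alignment re-weighting and the prototype-based matching that merges the N proposals, is given below. The feature sizes, the toy MLP, the masked-pooling helper, and the use of a single feature level (instead of concatenating levels 3 to 5) are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the Self-Alignment re-weighting and the prototype
# matching that merges the N mask proposals (all shapes/modules are toy choices).
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_align(feat: torch.Tensor) -> torch.Tensor:
    """feat: (c, h*w). Channel re-weighting with A = F @ F_avg^T."""
    f_avg = feat.mean(dim=0, keepdim=True)          # (1, h*w), channel-wise average
    attn = feat @ f_avg.t()                         # (c, 1), per-channel weight
    return feat * attn                              # broadcast over spatial positions

def masked_gap(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked global average pooling. feat: (c, h, w), mask: (h, w) in [0, 1]."""
    pooled = (feat * mask.unsqueeze(0)).flatten(1).sum(dim=1)
    return pooled / (mask.sum() + 1e-6)             # (c,) prototype

c, h, w, N = 96, 60, 60, 100
raw_s, raw_q = torch.randn(c, h * w), torch.randn(c, h * w)
f_s = self_align(raw_s).view(c, h, w)               # channel-aligned support feature
f_q = self_align(raw_q).view(c, h, w)               # channel-aligned query feature
support_gt = (torch.rand(h, w) > 0.5).float()        # support ground-truth mask
proposals = torch.rand(N, h, w)                       # N query mask proposals in [0, 1]

p_support = masked_gap(f_s, support_gt)                              # P_S^gt
p_query = torch.stack([masked_gap(f_q, m) for m in proposals])       # (N, c), one per proposal

# Cosine similarities S between the support prototype and each proposal prototype;
# an MLP then turns the N similarities into merging coefficients for the proposals.
sims = F.cosine_similarity(p_query, p_support.unsqueeze(0), dim=1)   # (N,)
mlp = nn.Sequential(nn.Linear(N, N), nn.ReLU(), nn.Linear(N, N))
coeffs = mlp(sims.unsqueeze(0)).squeeze(0)                            # (N,)
final_mask = torch.einsum("n,nhw->hw", coeffs, proposals)             # weighted merge, M_hat
print(final_mask.shape)                                               # torch.Size([60, 60])
```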
The detailed diagram can be formulated as \[\begin{split} S&=\cos(P_{S}^{gt},P_{Q}^{n}),n\in[1,2...N],\\ \hat{M}&=M\times\mathrm{MLP}(S),S\in R^{1\times N}, \end{split} \tag{4}\] where \(\hat{M}\) is our final result, \(\mathrm{MLP}\) and \(\cos\) indicate the fully connected operation and the cosine similarity. We take N similarities (\(S\)) as the input of MLP, and use the output to perform a weighted average of N mask proposals. Note that we do not select the mask with the highest similarity directly, our ablation studies prove that using this block can help improve the performance. ### Objective In POS, we adopt segmentation loss functions proposed by Mask2Former (denote as \(\mathcal{L}_{P}\)). We apply Hungarian algorithm to match mask proposals with groud-truth and only conduct Dice Loss to supervise on masks with the best matching. In MM module, we conduct Dice Loss on \(\hat{M}\) (denote as \(\mathcal{L}_{M}\)) and design a contrastive loss to constrain the cross-alignment module. Our goal is to make prototypes for the same class more similar while different classes less similar by constraining \(S\). We first normalize \(S\) to \([0,1]\) by min-max normalization \(\hat{S}=\frac{S-\min(S)}{\max(S)-\min(S)+\varepsilon}\). Then we calculate IoU between N mask proposals and Query ground-truth. We assume that mask proposals contain various objects in query images. It is unrealistic to constrain the corresponding prototypes across different categories since it is hard to acquire the proper similarity among them. Therefore, we apply a criterion at the location of \(\max\)(IoU) and denote the point as \(positive\) point \(\hat{S}_{pos}\). Only constrain on \(\hat{S}_{pos}\) may lead all outputs of Cross Alignment block tend to be same. Thus, we add a constraint on the point in \(\hat{S}\) corresponding to the lowest IoU, and denote it as \(negative\) point \(\hat{S}_{neg}\). We assign \(y_{pos}=1\) and \(y_{neg}=0\) to \(\hat{S}_{pos}\) and \(\hat{S}_{neg}\) during the optimization, respectively. Therefore, the cross-alignment loss \(\mathcal{L}_{co}\) can be defined as \[\mathcal{L}_{co}=-\frac{1}{2}(y_{pos}\log\hat{S}_{pos}+(1-y_{neg})\log{(1- \hat{S}_{neg})}) \tag{5}\] Thus, the final loss function can be formulated as \(\mathcal{L}=\mathcal{L}_{P}+\lambda_{1}\mathcal{L}_{M}+\lambda_{2}\mathcal{L}_ {co}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are constants and are set to 10 and 6 in our experiments. ### Training Strategy In order to avoid the mutual influence of POS and MM module during training, we propose a two-stage training strategy of first training POS and then training MM. In addition, by decoupling POS and MM, the network can share the same POS under 1-shot and K-shot settings, which greatly improves the training efficiency. **K-shot Setting**: Based on the two stages training strategy, MM-Former can easily extend to the K-shot setting by averaging knowledge from K samples, _i.e._, \(P_{S}^{gt}\). Note that after pre-trained the POS, MM-Former can be applied to 1-shot/ K-shot tasks with only a very small amount of training. ## 3 Experiments ### Dataset and Evaluation Metric We conduct experiments on two popular few-shot segmentation benchmarks, Pascal-\(5^{i}\)[9] and COCO-\(20^{i}\)[14], to evaluate our method. Pascal-\(5^{i}\) with extra mask annotations SBD [10] consisting of 20 classes are separated into 4 splits. For each split, 15 classes are used for training and 5 classes for testing. 
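To make the episodic protocol and this split convention concrete, the following toy sketch samples one k-shot episode (k support samples plus one query of the same class) from the held-out classes of a Pascal-5i-style split; the class indexing and image identifiers are invented for illustration.

```python
# Toy episode sampler for a Pascal-5i-style split (class ids and image ids are
# invented; real episodes sample actual image/mask pairs).
import random

NUM_CLASSES, SPLIT_SIZE = 20, 5

def split_classes(split_id: int) -> tuple[list[int], list[int]]:
    """Split i holds 5 classes for testing; the remaining 15 are used for training."""
    test = list(range(split_id * SPLIT_SIZE, (split_id + 1) * SPLIT_SIZE))
    train = [c for c in range(NUM_CLASSES) if c not in test]
    return train, test

def sample_episode(images_by_class: dict[int, list[str]], classes: list[int], k: int = 1) -> dict:
    """Return k support samples and one query sample sharing the same class."""
    cls = random.choice(classes)
    picks = random.sample(images_by_class[cls], k + 1)   # k supports + 1 query, no repeats
    return {"class": cls, "support": picks[:k], "query": picks[k]}

# Dummy image pool: 10 image ids per class.
pool = {c: [f"img_{c}_{j}" for j in range(10)] for c in range(NUM_CLASSES)}
train_cls, test_cls = split_classes(split_id=0)
print(sample_episode(pool, test_cls, k=1))
```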
COCO-\(20^{i}\) consists of annotated images from 80 classes.We follow the common data split settings in [20; 38; 19] to divide 80 classes evenly into 4 splits, 60 classes for training and test on 20 classes. 1,000 episodes from the testing split are randomly sampled for evaluation. To quantitatively evaluate the performance, we follow common practice [24; 27; 36; 19; 38], and adopt mean intersection-over-union (mIoU) as the evaluation metrics for experiments. ### Implementation details The training process of our MM-Former is divided into two stages. For the **first** stage, we freeze the ImageNet [6] pre-trained backbone. The POS is trained on Pascal-\(5^{i}\) for 20,000 iterations and 60,000 iterations on COCO-\(20^{i}\), respectively. Learning rate is set to \(1e^{-4}\), batch size is set to 8. For the **second** stage, we freeze the parameters of the backbone and the POS, and only train the MM module for 10,000/20,000 iterations on Pascal-\(5^{i}\) / COCO-\(20^{i}\), respectively. Learning rate is set to \(1e^{-4}\), batch size is set to \(4\). For both stages, we use AdamW [17] optimizer with a weight decay of \(5e^{-2}\). The learning rate is decreased using the poly schedule with a factor of \(0.9\). All images are resized and cropped into \(480\times 480\) for training. We also employ random horizontal flipping and random crop techniques for data augmentation. All the experiments are conducted on a single RTX A6000 GPU. The standard ResNet-50 [11] is adopted as the backbone network. ### Comparison with State-of-the-art Methods We compare the proposed approach with state-of-the-art methods [24; 31; 13; 28; 19; 18; 1; 15; 38] on Pascal-\(5^{i}\) and COCO-\(20^{i}\) datasets. The results are shown in Tab. 1 and Tab. 2. \begin{table} \begin{tabular}{l|c|c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{4}{c|}{1-shot} & \multicolumn{4}{c}{5-shot} \\ \cline{3-11} & & \(5^{i}\) & \(5^{i}\) & \(5^{i}\) & \(5^{i}\) & Mean & \(5^{i}\) & \(5^{i}\) & \(5^{i}\) & \(5^{i}\) & Mean \\ \hline PFENet[17][24] & & 34.3 & 33.0 & 32.3 & 33.1 & 32.4 & 38.5 & 38.6 & 38.2 & 34.3 & 37.4 \\ SCL(\(\mathrm{CVPR21}\)) [31] & & 36.4 & 38.6 & 37.5 & 35.4 & 37.0 & 38.9 & 40.5 & 41.5 & 38.7 & 39.9 \\ ASGNet(\(\mathrm{CVPR21}\)) [13] & & - & - & - & - & 34.6 & - & - & - & - & 42.5 \\ REPRI[1][1] & & 31.2 & 38.1 & 33.3 & 33.0 & 34.0 & 38.5 & 46.2 & 40.0 & 43.6 & 42.1 \\ MM-Net[15][28] & & 34.9 & 41.0 & 37.8 & 35.2 & 37.2 & 38.5 & 39.6 & 38.4 & 35.5 & 38.0 \\ CVT(\(\mathrm{CV21}\)) [18] & & 32.3 & 36.0 & 31.6 & 31.6 & 32.9 & 40.1 & 43.8 & 39.0 & 42.4 & 41.3 \\ HSNet(\(\mathrm{CV21}\)) [19] & & 36.3 & 43.1 & 38.7 & 38.7 & 39.2 & 43.3 & 51.3 & **48.2** & 45.0 & 46.9 \\ CyCTR(\(\mathrm{NeurIPS21}\)) [38] & & 38.9 & 43.0 & 39.6 & 39.8 & 40.3 & 41.1 & 48.9 & 45.2 & 47.0 & 45.6 \\ \hline MM-Former (Ours) & \multirow{2}{*}{Res-50} & **40.5** & **47.7** & **45.2** & **43.3** & **44.2** & **44.0** & **52.4** & 47.4 & **50.0** & **48.4** \\ Oracle & & 66.1 & 74.3 & 64.8 & 70.4 & 68.9 & 60.1 & 74.3 & 64.8 & 70.4 & 68.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with state-of-the-art methods on COCO-\(20^{i}\). Best results are shown in bold. 
\begin{table} \begin{tabular}{l|c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c|}{PASCAL \(\rightarrow\) PASCAL} & \multicolumn{2}{c}{COCO \(\rightarrow\) PASCAL} \\ \cline{3-6} & & 1 shot & 5 shot & 1 shot & 5 shot \\ \hline PFENet[17][24] & & 60.8 & 61.9 & 61.1 & 63.4 \\ SCL(\(\mathrm{CVPR21}\)) [31] & & 61.8 & 62.9 & - & - \\ REPRI[15][16] & Res-50 & 59.1 & 66.8 & 63.2 & 67.7 \\ HSNet(\(\mathrm{ICCV21}\)) [19] & & **64.0** & **69.5** & 61.6 & 68.7 \\ CyCTR(\(\mathrm{NeurIPS21}\)) [38] & & **64.0** & 67.5 & - & - \\ \hline MM-Former (Ours) & \multirow{2}{*}{Res-50} & 63.3 & 64.9 & **67.7** & **70.4** \\ Oracle & & 82.5 & 82.5 & 85.8 & 85.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with state-of-the-art methods on PASCAL-\(5^{i}\). Best results are shown in bold. **Results on COCO-\(20^{i}\)**. In Tab. 1, our MM-Former performs remarkably well in COCO for both 1-shot and 5-shot setting. Specifically, we achieve \(3.9\%\) improvement on 1-shot compared with CyCTR [34] and outperform HSNet [19] by \(1.5\%\) mIoU. **Results on Pascal-\(5^{i}\)**. Due to the limited number of training samples in the Pascal dataset, the POS may easily overfit during the first training stage. Therefore, following recent works [24; 1; 19], we include the transferring results that transfer the COCO-trained model to be tested on Pascal. Note that when training on the COCO dataset, the testing classes shared with the Pascal dataset are removed, so as to avoid the category information leakage. According to Tab. 2, although our MM-Former is slightly inferior to some competitive results when training on Pascal dataset, we find MM-Former exhibits remarkable transferability when training on COCO and testing on Pascal. Specifically, the previous state-of-the-art method HSNet shows powerful results on Pascal\(\rightarrow\)Pascal but degrades when transferring from COCO to Pascal. Instead, our MM-Former further enhances the performance of 1-shot and 5-shot by \(4.4\%\) and \(5.5\%\), outperforming HSNet by \(6.1\%\) and \(1.7\%\), respectively. **Oracle Analysis.** We also explore the room for further improvement of this new few-shot segmentation paradigm by using query ground truth (GT) during inference. The results refer to the last rows in Tab. 1, 2. In detail, after the POS generates N mask proposals, we use the GT mask to select one proposal mask with the highest IoU, and regard this result as the segmentation result. Note that this natural selection is not the optimal solution because there may be other masks complementary to the selected one, but it is still a good oracle to show the potential of the new learning paradigm. According to the results, there is still a large gap between current performance and the oracle (\(\approx 20\%\) mIoU), which suggests that our model has enormous potential for improvement whereas we have achieved state-of-the-art performance. ### Ablation Studies We conduct ablation studies on various choices of designs of our MM-Former to show their contribution to the final results. Component-wise ablations, including the MM Module and the POS, are shown in Sec. 3.4.1. The experiments are performed with the 1-shot setting on Pascal-\(5^{i}\) and COCO-\(20^{i}\). We further experimentally demonstrate the benefits of the two-stage training strategy in Sec. 3.4.2. \begin{table} \end{table} Table 4: **Ablations on Potential Objects Segmenter (Stage-1)**. 
\begin{table} \end{table} Table 3: **Ablations on Mask Matching Module (Stage-2)**. In (a), the result in first row is obtained by our baseline. * means non-parameters design (Details in Sec. 2.3). We denote cross-alignment supervised with \(\mathcal{L}_{co}\) by \(\boldsymbol{\vee}\) ✓ in the last row (Sec. 2.4). SA, CA, LM represent Self Alignment block, Cross Alignment block and Learnable Matching block, respectively. #### 3.4.1 Component-wise Ablations To understand the effect of each component in the MM module, we start the ablations with a heuristic matching baseline and progressively add each building block. **Baseline.** The result of the heuristic matching baseline is shown in the first row of Tab. 2(a), which directly selects the mask corresponding to the highest cosine similarity with the support prototype. Note that when not using the learnable matching block, the results in Tab. 2(a) are all obtained in the same way as the heuristic matching baseline. We observe that the heuristic matching strategy does not provide strong performance, which is caused by the feature misalignment problem and fails to fuse multiple mask proposals. **Self-Alignment Block**. With the self-alignment block, the performance is improved by \(4.3\%\) on Pascal and \(1.2\%\) on COCO, as shown by the \(2^{nd}\) result in Tab. 2(a), demonstrating that channel-wise attention does help normalize the features for comparison. However, the performance is still inferior, encouraging us to further align the support and query features with the cross-alignment block. **Cross-Alignment Block**. In the third result of Tab. 2(a), we experiment with a non-parametric variant of the cross-alignment block that removes all learnable parameters in the cross-alignment block. A significant performance drop is observed. This is not surprising because the attention in the cross-alignment block cannot attend to proper areas due to the feature misalignment. When learning the cross-alignment block, indicated by the \(4^{th}\) result of Tab. 2(a), the performance is remarkably improved by \(14\%\) on Pascal and \(12.9\%\) on COCO, manifesting the necessity of learning the feature alignment for further matching. **Learnable Matching Block**. Surprisingly, simply using our learnable matching block can already achieve decent performance (the \(5^{th}\) result in Tab. 2(a)) compared with the baseline, thanks to its capability to adaptively match and merge multiple mask proposals. **Mask Matching Module**. By applying all components, our MM-Former pushes the state-of-the-art performance on COCO to \(43.2\%\) (the \(7^{th}\) result in Tab. 2(a)). In addition, to encourage the alignment of query and support in the feature space, we add the auxiliary loss \(L_{co}\) to the output of the cross-alignment block, which additionally enhances the performance by more than \(1\%\). **Potential Objects Segmenter** Although we follow Mask2former to build our POS, several differences are made and we evaluate the choice of design as follows. _Mask Classification_. In Mask2Former [4], a linear classifier is trained with cross-entropy loss for categorizing each mask proposal, while in our MM-Former for few-shot segmentation, we remove it to avoid learning class-specific representations in the first training stage. The result in Tab. 3(a) shows that the classifier harms the performance due to that the linear classifier would make the network fit the "seen" classes in the training set. 
Since this change only affects the first stage, we use the oracle results to demonstrate the effect. _Numbers of Proposals_. In Tab. 3(b), we try to vary the numbers of the mask proposals \(N\). Increasing the number of \(N\) will significantly improve the oracle result and our result. Thus we chose 100 as the default value in all other experiments. It is worth noting that, when varying the number from 10 to 100, our result is improved by \(5.4\%\), but the oracle result is improved by \(12.1\%\), indicating the large room for improvement with our new mask matching paradigm. **Effect of Different Feature Extraction**. Previous few-shot segmentation works [34, 24, 30] typically integrate matching operations with segmentation-specific techniques [39, 24, 3]. Following Mask2Former, our POS also includes a multi-scale deformable attention (MSDeformAtt) [39]. In Tab. 3b, we investigate using the features from the MSDeformAttn instead of the backbone feature for the MM module. Interestingly, although the feature after the context modeling is essential for segmentation, it is not suitable for the matching problem and impairs the matching performance. #### 3.4.2 Analysis of Training Strategy **Effect of Two-stage Training**. One may wonder what if we couple the training of POS and MM modules. Tab. 3c experiments on this point, the joint optimization is inferior to the two-stage training strategy, since POS and MM have different convergence rates. **Efficiency of Two-stage Training**.We analyze the efficiency of our method and provide comparisons of training time, and training memory consumption (Tab. 5). All models are based on the ResNet-50 backbone and tested on the COCO benchmark. All models are tested with RTX A6000 GPU(s) for a fair comparison. Training times for CyCTR and HSNet are reported according to the official implementation. We report the training time for the first and second stages separately. It is worth noting that for the same test split, our method can share the same stage-1 model across 1-shot and 5-shot. The training time of stage-1 for 5-shot can be ignored if 1-shot models already exist. ### Analysis of model transferability Our MM-Former shows a better transfer performance when trained on COCO but a relatively lower performance when trained on Pascal. We make an in-depth study of this phenomenon. **Effect of the number of training samples**: We use all training samples belonging to 15 Pascal training classes from COCO to train MM-Former. In this case, training samples are 9 times larger than the number in Pascal but the categories are the same, dubbed COCO-15 in Tab. 6. When the number of classes is limited, more training data would worsen the matching performance (60.7\(\%\) vs. 63.3\(\%\) for 1-shot and 64.8\(\%\) vs. 64.9\(\%\) for 5-shot), though a better POS could be obtained, as indicated by the oracle result (86.3\(\%\) vs. 82.5\(\%\)). **Effect of the number of training classes**: We randomly sample an equal number of training images ( 6000 images averaged across 4 splits) as in Pascal training set from 75 classes (excluding test classes) in COCO to train our MM-Former, dubbed COCO-75-sub in Tab. 6. When training with the same amount of data, more classes lead to better matching performance (66.8\(\%\) vs. 63.3\(\%\) for 1-shot and 68.9\(\%\) vs. 64.9\(\%\) for 5-shot). In a word, the **number of classes** determines the quality of the matching module. 
This finding is reasonable and inline with the motivation of few-shot segmentation and meta-learning: learning to learn by observing a wide range of tasks and fast adapting to new tasks. When the number of classes is limited, the variety of tasks and meta-knowledge are restricted, therefore influencing the learning of the matching module. ### Qualitative Analysis **Visual examples**: We show some visual examples in Fig. 4. Without loss of generality, some support images may only contain part of the object (_e.g._, the \(2^{nd}\) row). Directly selecting the mask with the highest cosine similarity can not obtain the anticipated result. Using a learnable mask matching block to fuse multiple masks can solve the problem to a large extent, the proposed feature alignment block can further improve our model by alleviating the misalignment problem, _e.g._ the results in the last row. \begin{table} \begin{tabular}{c c c c} \hline Training Set & 1-shot & 5-shot & Oracle \\ \hline Pascal & 63.3 & 64.9 & 82.5 \\ COCO-15 & 60.7 (-2.6) & 64.8 (-0.1) & 86.3 \\ COCO-75-sub & 66.8 (+3.5) & 68.9 (+4.0) & 85.3 \\ COCO & 67.7 (+4.4) & 70.4 (+5.5) & 85.8 \\ \hline \end{tabular} \end{table} Table 6: **Transfer study (testing on Pascal).** \begin{table} \begin{tabular}{l|c c c c|c c c} \hline \hline & \multicolumn{4}{c|}{1-shot} & \multicolumn{4}{c}{5-shot} \\ \cline{2-9} & mIoU & training time & GPUs & memory & mIoU & training time & GPUs & memory \\ \hline HSNet & 39.2 & 168h & 4 & 27.5G & 46.9 & 168h & 4 & 27.5G \\ CyCTR & 40.3 & 32.6h & 4 & 115.7G & 45.4 & 56.6h & 4 & 136.6G \\ MM-Former & 44.2 & 7.1h / 4.0h & 1 & 10.7G / 6.0G & 48.4 & 7.1h / 8.6h & 1 & 10.7G / 11.3G \\ \hline \hline \end{tabular} \end{table} Table 5: Analysis of the model efficiency, training time for MM-Former is shown in terms of \(1^{st}\) / \(2^{nd}\) stage. **Robustness analysis**: We also provide robustness analysis in Fig. 6, which uses an anomaly support sample for segmenting the query image. Compared with the previous state-of-the-art method, HSNet, which tends to segment the salient object in the image, our model is more robust to the anomaly inputs. **Explanation of SA**: SA is proposed to align the features at the channel dimension so that outliers at the channel dimension could be smoothed and features would be more robust by aligning with the attended global (specifically, channel-wise weighed average) features. Fig. 5 proves this point. It can be seen that \(F_{avg}\) (row \(2^{th}\)) has response to general foreground regions. Important channels (row \(3^{th}\)) are emphasized, and outliers are suppressed (row \(4^{th}\)). ## 4 Related Work **Few-Shot Segmentation**[21] is established to perform segmentation with very few labeled images. Many recent approaches formulate few-shot segmentation from the view of metric learning [23; 7; 27]. PrototypicalNet [22] is the first to perform metric learning on few-shot segmentation. PFENet [24] further designs an feature pyramid module to extract features from multi-levels. Many recent methods point out that only a single support prototype is insufficient to represent a given category. To address this problem, [32] attempt to obtain multiple prototypes via EM algorithm. [15] utilized super-pixel segmentation technique to generate multiple prototypes. Another way to solve the above problem is to apply pixel-level attention mechanism. [32; 26] attempt to use graph attention networks to utilize all foreground support pixel features. 
HSNet [19] propose to learn dense matching through 4D Convolution. CyCTR [38] points out that not all foreground pixels are conducive to segmentation and adopt cycle-consistency technology to filter out proper pixels to guide segmentation. **Transformers** originally proposed for NLP [25] are being rapidly adapted in computer vision task [8; 2; 29; 5; 4]. The major benefit of transformers is the ability to capture global information using self-attention module. DETR [2] is the first work applying Transformers on object detection task. Mask2Former [4] using Transformers to unify semantic segmentation and instance segmentation. Motivated by the design of MaskFormer, we apply transformers to segment all potential objects in one image, align support features and query features in pixel-level within our MM-Former. ## 5 Conclusion In the paper, we present Mask Matching Transformer (MM-Former), a new perspective to tackle the challenging few-shot segmentation task. Different from the previous practice, MM-Former is a two-stage framework, which adopts a Potential Objects Segmentor and Mask Matching Module to first produce high-quality mask proposals and then blend them into the final segmentation result. Extensive experiments on COCO-\(20^{i}\) and Pascal-\(5^{i}\) well demonstrate the effectiveness and the generalization advantage of the proposed MM-Former. We hope our MM-Former can serve as a solid baseline and help advance the future research of few-shot segmentation. **Limitations and societal impact.** Our MM-Former introduces the paradigm of _decompose first and then blend_ to the research of few-shot segmentation, which is a totally new perspective and may inspire future researchers to develop more advanced versions. However, there is still a large gap between the current results and the oracle (\(\approx 20\%\) mIoU). How to further narrow this gap is our future research focus. **Acknowledgment.** This work was supported in part by the National Key R & D Program of China (No.2021ZD0112100), the National NSF of China (No.U1936212, No.62120106009), the Fundamental Research Funds for the Central Universities (No. K22RC00010). Yao Zhao and Yunchao Wei are the corresponding authors. Figure 6: **Robustness Analysis.** We use different classes of support and query images to verify the robustness of the network.
In this paper, we aim to tackle the few-shot segmentation task from a new perspective. Conventional methods learn prototypical features from support images and then match them with query features at the pixel level to obtain segmentation results. However, to obtain satisfactory segments, such a paradigm couples the learning of the matching operation with heavy segmentation modules, which limits design flexibility and increases learning complexity. To address this problem, we propose the Mask Matching Transformer (MM-Former), a new paradigm for the few-shot segmentation task. Specifically, MM-Former first uses a class-agnostic segmenter to decompose the query image into multiple segment proposals. Then, a simple matching mechanism is applied to merge the related segment proposals into the final mask, guided by the support images. The advantages of MM-Former are two-fold. First, MM-Former follows the paradigm of decomposing first and then
2310.20587
Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. In real-world scenarios, data collection could be costly and risky; therefore, offline RL becomes particularly challenging when the in-domain data is limited. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces $\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a general framework based on Decision Transformers to effectively use pre-trained Language Models (LMs) for offline RL. Our framework highlights four crucial components: (1) Initializing Decision Transformers with sequentially pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs and in-domain knowledge effectively, (3) using the non-linear MLP transformation instead of linear projections, to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original abilities on languages. Empirical results indicate $\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples.
Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, Huazhe Xu
2023-10-31T16:24:17
http://arxiv.org/abs/2310.20587v4
# Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning ###### Abstract Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. In real-world scenarios, data collection could be costly and risky; therefore, offline RL becomes particularly challenging when the in-domain data is limited. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces **L**anguage Models for **M**otion Control (**LaMo**), a general framework based on Decision Transformers to effectively use pre-trained Language Models (LMs) for offline RL. Our framework highlights four crucial components: (1) Initializing Decision Transformers with sequentially pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs and in-domain knowledge effectively, (3) using the non-linear MLP transformation instead of linear projections, to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original abilities on languages. Empirical results indicate **LaMo** achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples. Our project website is **lamo2023.github.io**. ## 1 Introduction Offline reinforcement learning (RL) has gained significant attention in recent years due to its potential for utilizing pre-collected datasets to improve agent performance (Lange et al., 2012; Prudencio et al., 2023; Levine et al., 2020). Among the prominent algorithms in offline RL, Decision Transformer (DT) (Chen et al., 2021) reframes RL as a conditional sequence modeling problem and Figure 1: **Normalized score on D4RL (Fu et al., 2020) dataset of Language Models for **M**otion Control (**LaMo**), Decision Transformer (DT, Chen et al., 2021), Wiki-RL (Reid et al., 2022), Conservative Q-Learning (CQL, Kumar et al., 2020) and Behavior Cloning (BC). We average scores over tasks and data sample ratios for each domain. (_Medium_ for _Mujoco_ and _Atari_, _Complete_ and _Partial_ for _Kitchen_, of different sample ratios, described in Appendix B.) utilizes the Transformer architecture (Vaswani et al., 2017), showing the potential of sequence models for decision making (Xu et al., 2022; Hu et al., 2023; 20; Xie et al., 2023; Laskin et al., 2023). However, Transformers are known to be data hungry (Khan et al., 2022; Brown et al., 2020; OpenAI, 2023), meaning that pre-training on massive amounts of data is usually required to achieve satisfactory model ability (Touvron et al., 2021). One of the most pronounced applications of Transformers -- large language models (LLMs) -- has achieved significant progress in language understanding recently, such as GPT (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020; OpenAI, 2023), ChatGPT (Ouyang et al., 2022), and LLaMA (Touvron et al., 2023). Pre-trained on rich and diverse linguistic data, LLMs gain great few-shot and zero-shot learning abilities (Brown et al., 2020; Kojima et al., 2022). 
A natural idea for enhancing Transformer-based sequential decision-making methods is therefore to bring the power of pre-trained Language Models (LMs) into them, as explored by a number of recent works (Ichter et al., 2022; Huang et al., 2022; Driess et al., 2023; Wu et al., 2023; Li et al., 2022; Reed et al., 2022; Lin et al., 2023; Brohan et al., 2023a;b; Tang et al., 2023; Wang et al., 2023b). Among them, Li et al. (2022) propose to encode the environment states with LLMs and learn a policy based on the decoded states, but their environment states are restricted to language descriptions only, which makes the approach hard to apply to motion control. Reid et al. (2022) address this weakness by utilizing a pre-trained LM as the initialization of DT and processing low-level agent states and actions directly, instead of language descriptions. Their architecture thus successfully utilizes pre-trained LMs in motion control tasks such as locomotion (Fu et al., 2020). However, despite the novelty of the method proposed by Reid et al. (2022), it still does not fully unleash the power of LMs: its empirical performance is on par with pure DT methods and lags behind CQL (Kumar et al., 2020). We thus ask, **Can we unleash the power of pre-trained LMs to solve sequential decision-making problems?**

In this work, we propose **L**anguage Models for **M**otion Control (**LaMo**), a framework to effectively utilize pre-trained LMs for offline RL. While the motivation is straightforward, it takes four crucial designs to empower LaMo: 1) a pre-trained language model is used as the initial weights of DT; 2) the pre-trained weights are frozen and the model is fine-tuned with the parameter-efficient finetuning method LoRA (Hu et al., 2022), updating only **0.7%** of the parameters; 3) the input embeddings and the output linear projections are replaced with Multi-Layer Perceptrons (MLPs); 4) a language prediction loss is added as an auxiliary objective. Consequently, we find that these four components combined help LaMo preserve the prior knowledge and generalization ability acquired during pre-training while adapting efficiently to the new domain of offline RL.

We conduct comprehensive experiments across three distinct environments: _Kitchen_ (Gupta et al., 2019), _MuJoCo_ (Todorov et al., 2012), and _Atari_ (Bellemare et al., 2013), spanning 8 tasks altogether. These tasks range from sparse-reward to dense-reward, and from state inputs to image inputs. For each task, we evaluate performance under varying data ratios to examine the influence of sample amount on the outcomes. As shown in Figure 1, LaMo surpasses both DT and value-based baselines in **sparse-reward** tasks; in **dense-reward** tasks, our method significantly outperforms DT and closes the gap between value-based methods and DT-based methods. In particular, we find that when the data scale is limited (_e.g._, 1% of the whole dataset), LaMo demonstrates a much stronger learning ability, which we credit to the inductive bias within pre-trained LMs.

Our contributions are three-fold:

* We propose LaMo, a novel offline RL framework that unleashes the power of pre-trained language models.
* To better utilize the cross-domain knowledge from language modeling, we propose three additional techniques: LoRA finetuning, non-linear MLP projections, and an auxiliary language loss. Each module is shown to contribute positively to the final results of LaMo.
* Through extensive experiments on 8 tasks across diverse domains, dataset scales, and reward densities, we demonstrate the superiority of LaMo over DT-based and value-based offline RL algorithms. Specifically, we find that LaMo can successfully handle the challenging low-data regime while DT cannot. This highlights the great potential of our cross-domain pre-training for sequential modeling.

## 2 Related Work

**Transformers for decision making.** Transformers have dominated language tasks in the NLP community (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020; Devlin et al., 2019) and have also started to show potential in other domains, such as decision making. As an early attempt to introduce Transformers into reinforcement learning (RL), Decision Transformer (DT, Chen et al., 2021) models elements such as states and actions as a sequence, thus framing the RL problem as a sequence prediction problem. Many follow-up works make improvements under the framework of DT (Xu et al., 2022; Hu et al., 2023b; Xie et al., 2023; Yamagata et al., 2023; Liu and Abbeel, 2023). For example, Prompt DT (Xu et al., 2022) appends demonstrations to the sequence to achieve generalization to new tasks; Xie et al. (2023) pre-train DT by leveraging future trajectory information; Q-learning DT (Yamagata et al., 2023) refines the return-to-go in the training data using Q-values, thereby imbuing DT with Q-learning's proficiency in handling sub-optimal data; Agentic Transformer (Liu and Abbeel, 2023) addresses the issue of sub-optimality by using chain of hindsight to relabel the target returns, achieving performance competitive with value-based methods. Trajectory Transformer (Janner et al., 2021) trains on sequences of discretized states, actions, and rewards, offering a more direct solution. Our work focuses on utilizing cross-domain knowledge, _i.e._, language pre-training, as privileged information to enhance DT-based methods, and is thus orthogonal to these works.

**Large Language Models** (LLMs) have been the most pronounced application of the Transformer architecture in recent years (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020; OpenAI, 2023; Devlin et al., 2019; Touvron et al., 2023a;b). Pre-trained on massive text corpora, LLMs such as the GPT series (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020; OpenAI, 2023) have shown surprising few-shot and even zero-shot abilities in language tasks. To adapt LLMs to different downstream user applications in a computationally efficient way, researchers commonly rely on parameter-efficient finetuning techniques (Hu et al., 2022; Zhang et al., 2023a; Li and Liang, 2021; Lester et al., 2021; Liu et al., 2022; Wang et al., 2023a). In this work, we use the GPT-2 architecture (Radford et al., 2019) as the backbone due to its affordability and use LoRA (Hu et al., 2022) for downstream finetuning.

**LMs for decision making.** The great success of LMs in language tasks also motivates researchers to explore the potential of LMs for decision-making problems (Ichter et al., 2022; Huang et al., 2022; Driess et al., 2023; Wu et al., 2023). One line of work (Ichter et al., 2022; Huang et al., 2022; Driess et al., 2023; Wu et al., 2023) utilizes LMs for high-level task decomposition and task planning, while the low-level execution policy is learned or designed separately.
Another line of work (Li et al., 2022; Reed et al., 2022; Lin et al., 2023; Brohan et al., 2023a; Tang et al., 2023; Wang et al., 2023b) exploits the representation and generalization power of pre-trained LMs. Li et al. (2022) adapt pre-trained LMs to generate policies for tasks where the inputs can be converted into word sequences, and point out the significance of the sequential structure of the inputs; Lin et al. (2023) use a geometric feasibility planner to encourage the LM to generate both mid-level and low-level plans given a language instruction; and Tang et al. (2023) design prompts for LMs to encode language instructions. When multi-modal inputs are involved, one solution is to transform them into a common embedding space (Brohan et al., 2023a; Reed et al., 2022). For example, RT-2 (Brohan et al., 2023a) utilizes a Vision-Language Model pre-trained on massive language and vision-language data, and represents actions as text tokens during the robot-action fine-tuning stage; GATO (Reed et al., 2022) utilizes a Vision Transformer to encode image inputs and learns from a large multi-modal, multi-task dataset to perform various tasks within one model. The most relevant work to ours is Wiki-RL (Reid et al., 2022), which also uses a pre-trained language model as the initialization of DT for offline RL. However, its empirical results are only close to those of DT and do not surpass CQL (Kumar et al., 2020). Our work therefore tries to better unleash the power of pre-trained LMs for offline RL.

## 3 Preliminaries

### Offline Reinforcement Learning

We formulate reinforcement learning (RL) as a standard Markov Decision Process (MDP) with a tuple \((\mathcal{S},\mathcal{A},T,d_{0},\mathcal{R},\gamma)\), where \(\mathcal{S}\) is the set of states \(s\in\mathcal{S}\), \(\mathcal{A}\) is the set of actions \(a\in\mathcal{A}\), \(T\) is the transition distribution of the form \(T(s_{t+1}|s_{t},a_{t})\), \(d_{0}(s_{0})\) is the distribution of initial states \(s_{0}\), \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, \(r_{t}=\mathcal{R}(s_{t},a_{t})\) is the reward at timestep \(t\), and \(\gamma\in(0,1)\) is the discount factor. The agent in this MDP follows a policy \(\pi(a|s)\), and the objective is:

\[J(\pi)=\mathbb{E}_{s_{0}\sim d_{0}(\cdot),\;a_{t}\sim\pi(\cdot|s_{t}),\;s_{t+1}\sim T(\cdot|s_{t},a_{t})}\left[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})\right]\,. \tag{1}\]

In offline RL, access to environment interaction is removed, while the objective remains \(J(\pi)\). The agent can only learn from pre-collected transitions \(\mathcal{D}=\{(s_{t}^{(i)},a_{t}^{(i)},s_{t+1}^{(i)},r_{t}^{(i)})\}\), generated by an unknown behavior policy \(\pi_{B}\). Here we introduce common properties of the dataset \(\mathcal{D}\): _1)_ **Sub-optimality.** In many contexts, \(\pi_{B}\) is not an optimal policy, i.e., \(\mathcal{D}\) may not contain optimal behaviors, and thus simple imitation may exhibit suboptimal performance; _2)_ **Dense-reward or sparse-reward.** In dense-reward environments, agents receive a reward signal at every timestep reflecting how good their behavior is, while in the sparse-reward setting, positive reward signals from the environment may only be given when success is achieved and are zero otherwise. The sparse-reward setting is thus much more challenging but closer to real-world scenarios.
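To make the formulation concrete, the following is a minimal Python sketch, our own illustration rather than anything from the paper, of what an offline dataset looks like and how the discounted objective of Eq. (1) can be estimated from logged trajectories alone; the trajectory contents, dimensions, and lengths are made up for the example.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Sum_t gamma^t * r_t over one logged trajectory (the inner sum of Eq. (1))."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

# Hypothetical offline dataset D: trajectories of (s_t, a_t, r_t) logged by an
# unknown behavior policy pi_B; the agent never interacts with the environment.
rng = np.random.default_rng(0)
dataset = [
    {
        "states": rng.normal(size=(T, 3)),   # s_t (toy 3-d states)
        "actions": rng.normal(size=(T, 1)),  # a_t (toy 1-d actions)
        "rewards": rng.normal(size=(T,)),    # r_t = R(s_t, a_t)
    }
    for T in (50, 80, 30)
]

# Monte-Carlo estimate of the behavior policy's objective J(pi_B) from the fixed dataset.
j_hat = np.mean([discounted_return(traj["rewards"]) for traj in dataset])
print(f"Estimated J(pi_B) from the offline dataset: {j_hat:.3f}")
```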
### Decision Transformer

Following Decision Transformer (DT), we frame the RL problem as a sequential modeling problem. We consider each trajectory \(\tau\) as an ordered sequence of return-to-go \(\hat{R}\), state \(s\), and action \(a\) triplets, defined as follows:

\[\tau=(\hat{R}_{t_{0}},s_{t_{0}},a_{t_{0}},\hat{R}_{t_{0}+1},s_{t_{0}+1},a_{t_{0}+1},\ldots,\hat{R}_{t_{0}+K-1},s_{t_{0}+K-1},a_{t_{0}+K-1})\,, \tag{2}\]

where the return-to-go \(\hat{R}\) is defined as the sum of future rewards, \(\hat{R}_{k}=\sum_{i=k+1}^{T}r_{i}\), \(T\) is the episode length, and \(K\) is the context length. The learning objective of the model is to predict the action \(a_{t}^{\prime}\) given the history sequence and the current state \(s_{t}\), while the ground truth is \(a_{t}\); the loss is a simple squared error term:

\[\mathcal{L}_{\text{decision}}=\sum_{t=t_{0}}^{t_{0}+K-1}\|a_{t}-a_{t}^{\prime}\|_{2}^{2}\,. \tag{3}\]

## 4 Method

We propose **L**anguage Models for **M**otion Control (**LaMo**), an effective framework that incorporates pre-trained Language Models (LMs) into offline Reinforcement Learning, to leverage the reasoning and few-shot ability of LMs and solve challenging scenarios such as limited data and sparse rewards. An illustration of LaMo is given in Figure 2. LaMo encompasses several crucial designs: _1)_ We adopt a pre-trained LM (_i.e.,_ GPT-2 (Radford et al., 2019)) as the initialization of a Decision Transformer (DT) (Chen et al., 2021); _2)_ We replace the linear embedding projections with MLPs to augment representation learning capabilities for complicated tasks; _3)_ While training the offline RL agents, we freeze the pre-trained parts and utilize the parameter-efficient fine-tuning technique LoRA (Hu et al., 2022), where the trainable parameters account for only **0.7%** of the entire model; _4)_ We introduce language prediction as an auxiliary objective while finetuning, in order to stabilize the performance and maintain the language ability.

### Pre-training on Language Tasks

The initial step involves obtaining pre-trained language models (LMs). Considering the widespread recognition and computational affordability of the GPT-2 architecture (Radford et al., 2019), we utilize the publicly available pre-trained weights of GPT-2 from Hugging Face1. To further explore the effect of the quality of different pre-trained models on the downstream offline RL tasks, we also pre-train GPT-2 ourselves in the ablation study, using the WikiText corpus (Merity et al., 2017) and the common next-token prediction objective

\[\mathcal{L}_{\text{language}}=\sum_{i=1}^{s-1}-\log\left(T\left(w_{i+1}|w_{1},\ldots,w_{i}\right)\right), \tag{4}\]

where \(w_{i}\) is the \(i\)-th language token in one sentence, and \(T\) is the probability distribution over the next token predicted by the model. We have explored three variants of models: _1)_ a model that is pre-trained for fewer steps; _2)_ a model that is pre-trained on a randomly shuffled text corpus; _3)_ a model with randomly initialized weights. Our results in Section 5.5 and Appendix G show that high language pre-training quality is helpful for downstream RL tasks, underscoring the importance and necessity of pre-training.

Footnote 1: [https://huggingface.co/gpt2](https://huggingface.co/gpt2)
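As an illustration of the sequence construction in Eq. (2) and the loss in Eq. (3), the sketch below (our own toy code, not the authors' implementation) computes return-to-go targets, interleaves \((\hat{R}, s, a)\) triplets into a context window of length \(K\), and scores predicted actions with a squared error. Note that the sketch sums rewards from the current timestep onward, the common DT convention, whereas the indexing in the text above starts at the next step; all array shapes and values are toy assumptions.

```python
import numpy as np

def returns_to_go(rewards):
    """R_hat_k = sum of rewards from step k to the end of the episode."""
    return np.cumsum(rewards[::-1])[::-1].copy()

def interleave_window(rtg, states, actions, t0, K):
    """Build the K-step context (R_hat, s, a, R_hat, s, a, ...) starting at t0, as in Eq. (2)."""
    triplets = []
    for t in range(t0, t0 + K):
        triplets.extend([rtg[t], states[t], actions[t]])
    return triplets

def decision_loss(pred_actions, true_actions):
    """Sum of squared errors over the K actions in the context window, as in Eq. (3)."""
    pred, true = np.asarray(pred_actions), np.asarray(true_actions)
    return float(np.sum((true - pred) ** 2))

# Toy example: a 5-step trajectory with 2-d states and 1-d actions.
rewards = np.array([1.0, 0.0, 0.5, 0.0, 2.0])
states = np.random.randn(5, 2)
actions = np.random.randn(5, 1)

rtg = returns_to_go(rewards)                      # [3.5, 2.5, 2.5, 2.0, 2.0]
window = interleave_window(rtg, states, actions, t0=0, K=3)
loss = decision_loss(actions[:3] + 0.1, actions[:3])  # perturbed predictions for illustration
print(rtg, len(window), loss)
```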
### Finetuning for Offline Reinforcement Learning

**Multi-layer perceptrons for embeddings.** The pre-trained LMs process the input into latent vectors and decode the latent vectors into the output via simple linear projections. We find that, to effectively utilize the pre-trained language model in offline RL, replacing these linear projections with MLPs is essential to bridge the domain gap. Extensive ablations are provided in Section 5.5 to support the importance of this non-linear module.

**Frozen weights and low rank adaptation.** We apply the parameter-efficient training technique LoRA (Hu et al., 2022), which constrains the gradient updates to a low-dimensional space by rewriting the weight matrix \(W\in\mathbb{R}^{d\times k}\) as \(W_{0}+\Delta W=W_{0}+BA\), where \(B\in\mathbb{R}^{d\times r}\), \(A\in\mathbb{R}^{r\times k}\), and \(r\ll\min(d,k)\). We inject low-rank matrices into the attention weights \(Q,K,V\) and freeze all other weights of the Transformer, so that the model retains the knowledge of the LMs. The trainable parameters take up only **0.7**% of the entire Transformer. We hypothesize that such a mechanism lets the pre-trained model treat the inputs as language to the maximum extent while remaining adaptive. Empirically, we find that both full-weight finetuning and frozen Transformer layers harm performance, as shown in Figure 5. More discussion is provided in Section 5.5.

**Language prediction as an auxiliary objective.** To further stabilize the training process and maintain the knowledge learned from language, we simultaneously train the model on a language prediction task. The corpus we train on is WikiText (Merity et al., 2017), the same as in the pre-training stage. To perform language prediction, we temporarily replace the input and output projections with the projections of the pre-trained LM. This auxiliary objective is also used in Reid et al. (2022). Empirically, we find that this term prominently prevents the model from overfitting. Intriguingly, for sparse-reward tasks such as _Kitchen_, the performance of LaMo is critically enhanced, surpassing recent strong baselines, as shown in Figure 6(b). Besides, this objective helps preserve the language understanding ability, which means we obtain a model skilled at both language understanding and motion control as a side effect. A more detailed discussion is in Section 5.5. The overall objective while training the offline RL agents is then

\[\mathcal{L}=\mathcal{L}_{\text{decision}}+\lambda\cdot\mathcal{L}_{\text{language}}, \tag{5}\]

where \(\lambda\) is a tunable hyperparameter chosen from \(\{0,\ 0.1,\ 1\}\).

Figure 2: **The overview of LaMo.** LaMo mainly consists of two stages: _(1)_ pre-training LMs on language tasks; _(2)_ freezing the pre-trained attention layers, replacing linear projections with MLPs, and using LoRA to adapt to RL tasks. We also apply the language loss during the offline RL stage as a regularizer.

## 5 Experiments

In this work, we delve into solving sequential decision-making problems when only offline interaction datasets are available during training, known as the _Offline RL_ problem. We evaluate the performance of LaMo on the standard benchmark _D4RL_ (Fu et al., 2020) and also evaluate the learning ability of LaMo in the low-data regime. To show the effectiveness of each component in LaMo, extensive ablations are also conducted.
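Before turning to the experimental results, the following PyTorch sketch gives a simplified, self-contained illustration of the fine-tuning ingredients of Section 4: a hand-rolled LoRA layer showing the \(W_{0}+BA\) mechanism (in the actual method the adapters are injected into the frozen \(Q,K,V\) attention weights of GPT-2), an MLP embedding replacing the linear input projection, and the combined objective of Eq. (5). This is our own sketch, not the released LaMo code; the dimensions, rank, action head, and vocabulary usage are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """y = W0 x + scale * B A x, with the pre-trained W0 frozen; only A, B (rank r) train."""
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)           # stands in for a pre-trained weight
        self.base.weight.requires_grad_(False)        # frozen, as in the method above
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: the update starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)

class MLPEmbedding(nn.Module):
    """Non-linear projection of raw return-to-go/state/action inputs into the LM width."""
    def __init__(self, d_in, d_model, d_hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
        )

    def forward(self, x):
        return self.net(x)

# Toy forward pass and combined objective L = L_decision + lambda * L_language.
embed = MLPEmbedding(d_in=11, d_model=64)             # 11-d toy state, 64-d toy LM width
adapted = LoRALinear(64, 64)                          # stand-in for an adapted attention projection
action_head = nn.Linear(64, 3)                        # 3-d toy actions

states = torch.randn(4, 20, 11)                       # (batch, context K, state_dim)
pred_a = action_head(adapted(embed(states)))
true_a = torch.randn(4, 20, 3)

vocab = 50257                                         # GPT-2 vocabulary size
lm_logits = torch.randn(4, 32, vocab, requires_grad=True)  # toy language-head outputs
lm_tokens = torch.randint(0, vocab, (4, 32))

lam = 0.1
loss = F.mse_loss(pred_a, true_a) + lam * F.cross_entropy(
    lm_logits.reshape(-1, vocab), lm_tokens.reshape(-1)
)
loss.backward()

trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"Trainable fraction of the adapted layer: {trainable / total:.1%}")
```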
### Experiment Setup

We conduct our experiments on 8 tasks from 3 domains: _MuJoCo_, _Atari_, and _Kitchen_. Detailed task descriptions are provided in Appendix C. We use datasets from _D4RL_ (Fu et al., 2020) and d4rl-atari (more details are provided in Appendix B). Due to limited computational resources, we run each experiment with \(3\) seeds, numbered \(0,1,2\), to ensure reproducibility. We compare the performance of LaMo with various powerful baselines in offline reinforcement learning: CQL (Kumar et al., 2020), IQL (Kostrikov et al., 2022), TD3+BC (Fujimoto and Gu, 2021), BCQ (Fujimoto et al., 2019), NFQ (Riedmiller, 2005), Behavior Cloning (BC), and DT (Chen et al., 2021). Besides, we compare with Wiki-RL (Reid et al., 2022), which also utilizes a pre-trained language model for offline reinforcement learning. To systematically report the performance of all these methods, we compute the average performance over the last \(20\)K training steps out of a total of \(100\)K training steps, with evaluations conducted every \(2500\) training steps. The scores we report are normalized scores, so that 100 represents an expert policy and 0 represents a random policy, following the convention of Fu et al. (2020) and Hafner et al. (2020).

### Sparse-reward tasks

Results for sparse-reward tasks, including _Kitchen_ and _Reacher2d_, are given in Table 1. We select strong baselines including CQL, IQL, TD3+BC, BC, DT, and Wiki-RL. We observe that LaMo shows an overwhelming advantage over Decision Transformer and Wiki-RL across all tasks and datasets, which indicates that our approach effectively harnesses the power of the pre-trained model. Overall, LaMo improves the performance of DT by up to \(\textbf{50}\%\). Compared with value-based methods, our approach also demonstrates significant advantages in average performance. We achieve the best performance among all strong baselines in 7 tasks and second-place results in 2 tasks: _Kitchen_ Partial with \(1\%\) data and _Reacher2d_ Medium with \(10\%\) data. Notably, in _Kitchen_ tasks, CQL initially performs reasonably well, but as training progresses it overfits, causing a notable drop in performance, as shown in Appendix F. For LaMo, such a phenomenon does not occur, reflecting LaMo's success in preventing overfitting.

### Dense-reward tasks

Results for dense-reward tasks are given in Table 2 and Table 3. For _Atari_, since IQL and TD3+BC do not support discrete control (Seno and Imai, 2022), we select CQL, BCQ, and NFQ as baselines. We observe that LaMo achieves the highest average scores in _Atari_ and _MuJoCo_ under the low-data regime. However, we also notice that in the _MuJoCo_ domain, when the data scale is relatively large (10%, 100%), LaMo only comes close to DT and falls behind CQL in _Halfcheetah_ and _Walker2d_.
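As a concrete illustration of the reporting protocol described in the experiment setup above, the short sketch below applies the D4RL-style normalization and averages the evaluations that fall within the last 20K of the 100K training steps. The reference scores and raw returns are made-up placeholder values, not numbers from the paper or from D4RL.

```python
import numpy as np

def normalized_score(raw, random_score, expert_score):
    """100 = expert policy, 0 = random policy (D4RL convention)."""
    return 100.0 * (raw - random_score) / (expert_score - random_score)

# Hypothetical evaluation log: one evaluation every 2500 steps over 100K training steps.
steps = np.arange(2500, 100_001, 2500)
raw_returns = np.random.default_rng(1).normal(loc=1500.0, scale=100.0, size=steps.size)

# Made-up random/expert reference returns for the illustration.
scores = normalized_score(raw_returns, random_score=-280.0, expert_score=3234.0)
last_window = scores[steps > 100_000 - 20_000]   # evaluations in the final 20K steps
print(f"Reported score: {last_window.mean():.1f} over {last_window.size} evaluations")
```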
\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
Task & Dataset & Ratio & LaMo & DT & Wiki-RL & CQL & IQL & TD3+BC & BC \\
\hline
Kitchen & Partial & 1 & \(\textbf{46.6}\pm 5.5\) & \(33.8\pm 1.5\) & \(20.4\pm 1.0\) & \(0.2\pm 1.0\) & \(45.7\pm 3.5\) & \(8.2\pm 0.5\) & \(1.1\pm 1.9\) \\
Kitchen & Complete & 1 & \(\textbf{46.2}\pm 5.3\) & \(23.8\pm 2.0\) & \(21.7\pm 6.0\) & \(0.4\pm 0.0\) & \(30.9\pm 1.7\) & \(6.6\pm 1.0\) & \(4.0\pm 0.0\) \\
Reacher2d & Medium & 1 & \(33.0\pm 5.3\) & \(22.8\pm 0.0\) & \(24.4\pm 5.5\) & \(31.3\pm 0.0\) & \(34.4\pm 1.0\) & \(31.2\pm 0.2\) & \(14.9\pm 7.4\) \\
\hline
**Average** & & & \(47.9\) & \(36.5\) & \(23.8\) & \(10.6\) & \(35.4\) & \(13.3\) & \(5.0\) \\
\hline \hline
Task & Dataset & Ratio & LaMo & DT & Wiki-RL & CQL & IQL & TD3+BC & BC \\
\hline
Kitchen & Partial & 0.01 & \(11.6\pm 2.0\) & \(0.9\pm 0.9\) & \(0.2\pm 0.9\) & \(0.7\pm 1.0\) & \(5.5\pm 1.5\) & \(13.9\pm 2.0\) & \(12.6\pm 0.9\) \\
Kitchen & Partial & 0.1 & \(35.1\pm 5.2\) & \(22.6\pm 6.8\) & \(27.9\pm 3.6\) & \(0.0\pm 0.9\) & \(19.7\pm 3.3\) & \(17.0\pm 3.4\) & \(46.2\pm 2.2\) \\
Kitchen & Complete & 0.3 & \(45.9\pm 3.9\) & \(31.5\pm 3.5\) & \(22.8\pm 3.9\) & \(17.7\pm 3.5\) & \(29.5\pm 1.2\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\
Reacher2d & Medium & 0.5 & \(50.6\pm 1.6\) & \(36.6\pm 1.9\) & \(13.9\pm 1.7\) & \(17.6\pm 0.5\) & \(35.4\pm 2.5\) & \(0.1\pm 0.3\) & \(4.8\pm 1.9\) \\
Reacher2d & Medium & 0.1 & \(12.4\pm 3.8\) & \(2.3\pm 1.3\) & \(4.1\pm 2.1\) & \(15.8\pm 0.2\) & \(0.2\pm 5.8\) & \(5.8\pm 3.7\) & \(21.2\pm 1.1\) \\
Reacher2d & Medium & 0.3 & \(31.2\pm 7.6\) & \(6.4\pm 2.6\) & \(19.4\pm 7.4\) & \(\textbf{30.0}\pm 0.4\) & \(\textbf{10.2}\pm 1.1\) & \(\textbf{24.5}\pm 1.7\) & \(\textbf{10.2}\pm 3.8\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Normalized score for sparse-reward tasks.** We compare LaMo with DT, Wiki-RL, CQL, IQL, TD3+BC, and BC. Mean of \(3\) seeds, numbered \(0,1,2\). Blue highlight indicates the highest score, orange highlight indicates the second-highest score, and red numbers represent the improvement of LaMo over DT.

In _Qbert_ Medium (\(100\%\)) and _Pong_ Medium (\(10\%\)), LaMo also does not surpass CQL. We attribute this to the following reason: unlike sparse-reward tasks, where Bellman backups only slowly propagate reward information (Chen et al., 2021), limiting the performance of value-based algorithms, dense-reward tasks are extremely well suited to value-based methods such as CQL, while DT is less preferable, as empirically examined by Bhargava et al. (2023). Our experiments corroborate this view and show that LaMo can further enhance the potential of DT, closing the performance gap between DT and CQL in dense-reward tasks.

### Ability in Low-Data Regime

We look into the relationship between the performance of various algorithms and the scale of data. As depicted in Figure 3, LaMo is capable of achieving excellent performance even with relatively small datasets. For example, in _Hopper_, LaMo surpasses the performance of CQL and DT when the sample ratio of data is \(0.5\%\) and maintains this advantage consistently as the sample ratio increases.

### Ablations

To show the contributions of our various designs in LaMo, we conduct extensive ablation experiments.

**Linear projections vs. MLPs.** In LaMo, we find that simple linear projections cannot fully exploit the cross-domain knowledge from language pre-training, and thus our design of replacing linear projections with MLPs is critical.
As shown in Figure 4, this design exhibits clear improvements over linear projections (termed _LaMo w/o. MLP_). It is also observed that in the _Walker2d_ task, LaMo with linear projections achieves decent scores after a few training steps but suffers from overfitting as training continues, resulting in sub-optimal convergence.

[Table 2 and Table 3: normalized scores for the dense-reward _Atari_ and _MuJoCo_ tasks, comparing LaMo with DT, Wiki-RL, CQL, BCQ, NFQ, and BC. The reported average scores for the _Atari_ Medium tasks at the full data ratio are: LaMo 226.0, DT 182.6, Wiki-RL 78.2, CQL 189.1, BCQ 65.3, NFQ -1.9, BC 114.1.]

**Comparing LoRA with full finetuning and frozen parameters.** Results are given in Figure 5. Although Hansen et al. (2022) and Ze et al. (2023a) show that fully finetuning representations for visual RL tasks is better than adopting frozen pre-trained models, other work (Ze et al., 2023b) shows that finetuning only a small portion of parameters can outperform frozen and fully finetuned models. We observe that in our setting, freezing the pre-trained parameters and adapting with LoRA not only improves training efficiency but also addresses the issue of overfitting that occurs in full finetuning.
We attribute this to the internal, generalizable knowledge within LMs acquired from large-scale pre-training, which we transfer to the domain of motion control. We also conduct experiments that remove LoRA and only use the frozen pre-trained LM, which likewise underperforms LaMo with LoRA applied for in-domain task learning.

**Language pre-training vs. visual pre-training.** Furthermore, since observations in _Atari_ are in pixel format, we investigate whether visual pre-training could also be helpful for motion control. We replace the pre-trained model with ImageGPT (Chen et al., 2020), a Transformer pre-trained on the ImageNet dataset (Russakovsky et al., 2015). During pre-training, ImageGPT reshapes two-dimensional images into one-dimensional vectors after downsampling and is trained in an autoregressive manner. The results are presented in Table 4. It is observed across _Atari_ tasks that visual pre-training can be a positive initialization for DT, but since LMs better model the sequence structure, there remains a significant gap between LaMo and ImageGPT. This empirical evidence further substantiates our hypothesis that **proficiency in sequential modeling is the key to unleashing the potential of cross-domain pre-trained models**.

**The relationship between language ability and motion control ability.** We find that jointly training on language tasks can prevent overfitting and improve overall performance. For the most challenging of the \(8\) tasks, _Kitchen_, as Figure 6(b) shows, we notice that by adding a simple weighted language loss during training, the performance no longer drops significantly in the RL training stage, and it consistently outperforms the baselines. This suggests that jointly training with a language prediction loss as a regularizer can retain the advantages of the pre-trained model while learning from a limited decision-making dataset. As presented in Figure 6(a), we show the curve of the cross-entropy loss to approximately track the change of language ability during training, which remains consistent across all tasks. **This empirically validates the ability of language models to simultaneously learn two different sequential modeling tasks.** However, whether this term can enhance performance in all cases still requires further investigation.

Figure 4: **Ablation on the effectiveness of MLP embeddings.** We replace the MLP embeddings in LaMo with linear projections, denoted as _LaMo w/o. MLP_. We compare LaMo with _LaMo w/o. MLP_ and DT across all tasks. Mean of \(3\) seeds, numbered \(0,1,2\). The shaded area is the \([\mu-0.5\sigma,\mu+0.5\sigma]\) interval, where \(\mu\) is the average and \(\sigma\) is the standard deviation.

Figure 5: **Ablation on the effectiveness of LoRA.** _(1)_ We involve all the parameters in fine-tuning, denoted as _Full Finetuning_. _(2)_ We freeze all parameters in the Transformer layers and leave out LoRA, denoted as _Freezing_. We compare LaMo with _Full Finetuning_, _Freezing_, and DT.

**Effects of pre-training quality of LMs.** We conduct a systematic study on how the pre-training quality of LMs affects the performance of downstream offline RL agents. We pre-train several GPT-2 models as follows: _1)_ **early-stopped pre-trained**, which is pre-trained on WikiText for \(100\)K training steps; _2)_ **random corpus**, which is pre-trained on randomly shuffled WikiText, so that the token prediction is totally disturbed.
In this way, we aim to investigate whether the performance improvement resulting from pre-training is closely related to the nature of the corpus or is solely attributable to the network's warm-up. We then replace GPT-2 in LaMo with these models and compare the performance on downstream RL tasks. As Figure 7 shows, while these two pre-trained models achieve competitive results against DT, they still fall short of LaMo in certain tasks. This initial observation verifies our hypothesis that a model with stronger language ability transfers more effectively to the field of motion control.

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Task & Dataset & Ratio & LaMo & DT & LaMo (ImageGPT Pre-training) \\
\hline
Breakout & Medium & 0.1 & 136.9 \(\pm\) 91.3 & 45.0 \(\pm\) 18.6 & 57.7 \(\pm\) 56.1 \\
Breakout & Medium & 1 & 473.4 \(\pm\) 195.6 & 402.8 \(\pm\) 147.6 & 454.5 \(\pm\) 219.0 \\
Qbert & Medium & 0.1 & 63.6 \(\pm\) 17.2 & 26.1 \(\pm\) 14.3 & 22.5 \(\pm\) 13.7 \\
Qbert & Medium & 1 & 79.0 \(\pm\) 13.1 & 28.9 \(\pm\) 18.3 & 29.5 \(\pm\) 17.4 \\
Pong & Medium & 0.1 & 114.8 \(\pm\) 8.8 & 87.1 \(\pm\) 19.7 & 0.7 \(\pm\) 1.1 \\
Pong & Medium & 1 & 125.6 \(\pm\) 6.6 & 116.1 \(\pm\) 10.4 & 116.7 \(\pm\) 9.4 \\
\hline
\multicolumn{3}{c}{**Average**} & 165.6 & 117.7 & 113.6 \\
\hline
\end{tabular}
\end{table}
Table 4: **Ablation on the effectiveness of sequential language pre-training.** We replace the pre-trained model in LaMo with ImageGPT (Chen et al., 2020), denoted as _LaMo (ImageGPT Pre-training)_. We compare LaMo with _LaMo (ImageGPT Pre-training)_ and DT across \(3\) _Atari_ tasks. Blue highlight indicates the highest score.

Figure 6: **Ablations to show the effects of the language loss for motion control.**

## 6 Conclusion

We propose **LaMo**, an offline RL framework that leverages pre-trained **L**anguage Models (LMs) for low-level **M**otion control. On sparse-reward tasks, LaMo achieves strong results and surpasses recent strong algorithms CQL, IQL, TD3+BC, and DT; on dense-reward tasks, LaMo significantly improves Decision Transformer and closes the gap between value-based methods and DT-based methods. Notably, in low-data scenarios, our method demonstrates powerful few-shot learning ability, which can be attributed to the inductive bias from pre-trained LMs.

It is also important to acknowledge the limitations of our work. On dense-reward _MuJoCo_ tasks, we find that CQL remains very competitive with LaMo, showing that value-based methods are still very strong in offline RL. Besides, the auxiliary language prediction loss in LaMo has only shown its advantage in very low-horizon tasks, _e.g._, _Kitchen_, while in other tasks it serves the purpose of preserving language capabilities but does not increase the performance significantly. How to better leverage the language reasoning ability to further help offline RL is thus a future direction. Lastly, limited by computational resources, we have not looked into utilizing larger language models (Touvron et al., 2023a; Chung et al., 2022; Touvron et al., 2023b), and we hope our work can motivate the community to explore further applications of LLMs in offline RL.
Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. In real-world scenarios, data collection can be costly and risky, so offline RL becomes particularly challenging when in-domain data is limited. Given recent advances in Large Language Models (LLMs) and their few-shot learning abilities, this paper introduces Language Models for Motion Control (LaMo), a general framework based on Decision Transformers that effectively uses pre-trained Language Models (LMs) for offline RL. The framework highlights four crucial components: (1) initializing Decision Transformers with sequentially pre-trained LMs; (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to effectively combine the pre-trained knowledge from LMs with in-domain knowledge; (3) using non-linear MLP transformations instead of linear projections to generate embeddings; and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original language abilities. Empirical results indicate that LaMo achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, the method demonstrates superior performance in scenarios with limited data samples.